Similar Documents
20 similar documents retrieved
1.
The use of domain-specific concepts in biomedical text summarization
Text summarization is a method for data reduction. It enables users to reduce the amount of text that must be read while still assimilating the core information. This data reduction is particularly useful in the biomedical domain, where physicians must continuously find clinical trial information to incorporate into their patient treatment efforts, and where such efforts are often hampered by the high volume of publications. This paper presents two independent methods (BioChain and FreqDist) for identifying salient sentences in biomedical texts using concepts derived from domain-specific resources. Our semantic method (BioChain) is effective at identifying thematic sentences, while our frequency-distribution method (FreqDist) removes information redundancy. The two methods are then combined to form a hybrid method (ChainFreq). Each method is evaluated using the ROUGE system to compare system-generated summaries against a set of manually generated summaries. The BioChain and FreqDist methods outperform some common summarization systems, while ChainFreq improves upon both base approaches, showing that the best performance is achieved when the two methods are combined. The paper also presents a brief physician's evaluation of three randomly selected papers from the evaluation corpus, showing that the author's abstract does not always reflect the entire contents of the full text.
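As a rough sketch of the frequency-distribution idea behind FreqDist, the snippet below scores sentences by the corpus frequency of their words and keeps the top-ranked ones; the tokenization and scoring formula are illustrative assumptions, not the paper's method (which works over domain-specific concepts rather than raw words):

```python
from collections import Counter
import re

def summarize(text, n_sentences=3):
    """Score sentences by the frequency of their words in the whole
    document and return the top-n as an extractive summary."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z]+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"[a-z]+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    ranked = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return [s for s in sentences if s in ranked]  # keep document order
```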

2.
In every text, certain words appear frequently and are treated as keywords because they are strongly related to the subjects of the text; the frequencies of these words change over time within a given period. Traditional text processing and text search techniques, however, do not consider the importance of frequency change over time, and therefore cannot correctly index a word's popularity in a given period. In this paper, a new method is proposed to automatically estimate the stability classes (increasing, relatively constant, and decreasing) that indicate a word's popularity over time, based on frequency changes in past text data. First, learning data were produced by defining five attributes that quantitatively measure the frequency change of a word; these attributes were extracted automatically from electronic texts, and the learning data were manually classified into the three stability classes. The data were then used to train a decision tree that automatically determines the stability classes of analysis data (test data). For the learning data, attribute values were obtained for 443 proper nouns extracted from 2,216 CNN news articles (1997–1999) on professional baseball; for the test data, 472 proper nouns extracted from 972 CNN news articles (1997–2000) were classified automatically by the decision tree. Compared with manual (human) classification, the F-measures of the increasing, relatively constant, and decreasing classes were 0.847, 0.851, and 0.768, respectively, demonstrating the effectiveness of the method.
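The decision-tree step can be sketched with scikit-learn as a stand-in for the tree learner; the five attribute values below are hypothetical examples of quantitative frequency-change features, not the authors' actual definitions:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical frequency-change attributes per word, e.g. slope of the
# frequency curve, variance, last/first-period ratio, peak position,
# and mean frequency (the paper's five attributes are not given here).
X_train = np.array([
    [0.8, 0.2, 3.1, 0.9, 12.0],   # a word trending upward
    [0.0, 0.1, 1.0, 0.5, 20.0],   # roughly constant
    [-0.7, 0.3, 0.3, 0.1, 8.0],   # trending downward
])
y_train = ["increasing", "constant", "decreasing"]

clf = DecisionTreeClassifier(random_state=0)
clf.fit(X_train, y_train)

# Classify unseen words (test data) by the same five attributes.
X_test = np.array([[0.6, 0.2, 2.4, 0.8, 15.0]])
print(clf.predict(X_test))  # -> ['increasing']
```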

3.
4.
The relationship among various technologies has been a major research theme in high-tech management and has thus been investigated from several approaches. However, the relationship among product attributes has been largely ignored, chiefly due to product complexity. The dynamic relationship among product attributes matters both for finding product drivers and for generating new ideas. Nevertheless, previous approaches have either remained purely conceptual or been conducted case by case; although they have contributed considerably to product concept generation, substantial uncertainty and risk remain. In this research, we suggest an exploratory method to bridge the gap between these two approaches. First, user guides and manuals for Nokia's mobile phone product range are collected. We then build four sets of time-series data using text mining, guided by a three-layer hierarchy of product attributes. A vector autoregression (VAR) model, usually applied to examine causal relationships among economic time-series variables, is employed to analyze the data and investigate the dynamic causal relationships among product attributes. Applying variance decomposition and impulse-response functions, the main product drivers, emerging features, and actively interacting sets of attributes are identified along the three layers of the hierarchy. Four hypotheses on the dynamic relationships among product attributes are also developed and tested to clarify their fundamental characteristics.
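The VAR analysis itself is standard; a minimal sketch with statsmodels, assuming the text-mined attribute frequencies have already been arranged as a multivariate time series (the attribute names and counts are hypothetical):

```python
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical per-release frequencies of three attribute groups mined
# from successive user guides (one row per release period).
data = pd.DataFrame({
    "camera":  [3, 4, 6, 9, 11, 14, 18, 21, 24, 26, 29, 33],
    "display": [5, 5, 7, 8, 10, 12, 13, 15, 16, 18, 19, 21],
    "battery": [2, 3, 3, 4, 5, 5, 6, 7, 7, 8, 9, 9],
}, dtype=float)

results = VAR(data).fit(maxlags=1)

# Impulse-response: how a shock to one attribute propagates to the
# others (irf.plot() visualizes it; requires matplotlib).
irf = results.irf(5)

# Forecast-error variance decomposition: the share of each attribute's
# variance explained by shocks to the others, a proxy for "drivers".
results.fevd(5).summary()
```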

5.
Networked information retrieval aims at the interoperability of heterogeneous information retrieval (IR) systems. In this paper, we show how differences concerning search operators and database schemas can be handled by applying data abstraction concepts in combination with uncertain inference. Different data types with vague predicates are required to allow for queries referring to arbitrary attributes of documents. Physical data independence separates search operators from access paths, thus solving text search problems related to noun phrases, compound words and proper nouns. Projection and inheritance on attributes support the creation of unified views on a set of IR databases. Uncertain inference allows for query processing even on incompatible database schemas.

6.
A challenge for sentence categorization and novelty mining is to detect not only when text is relevant to the user's information need, but also when it contains something new that the user has not seen before. This involves two tasks: identifying relevant sentences (categorization) and identifying new information within those relevant sentences (novelty mining). Many previous studies of relevant sentence retrieval and novelty mining have been conducted on English, but few papers have addressed multilingual sentence categorization and novelty mining. This is an important issue in global business environments, where mining knowledge from text in a single language is not sufficient. In this paper, we perform the first task by categorizing Malay and Chinese sentences and comparing the performance with that on English. Thereafter, we conduct novelty mining to identify the sentences with new information. Experimental results on TREC 2004 Novelty Track data show similar categorization performance on Malay and English sentences, both of which greatly outperform Chinese. In the second task, we achieve similar novelty-mining results for all three languages, which indicates that our algorithm is suitable for novelty mining of multilingual sentences. In addition, benchmarking our results against novelty mining without categorization shows that categorization is necessary for successful novelty mining.
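A common formulation of the novelty-mining step, sketched below, marks a relevant sentence as novel when its maximum cosine similarity to the previously accepted sentences falls below a threshold; the TF-IDF representation and the threshold value are illustrative assumptions rather than the paper's exact settings:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def mine_novel(relevant_sentences, threshold=0.6):
    """Return sentences whose maximum similarity to all previously
    accepted sentences is below the threshold."""
    vectors = TfidfVectorizer().fit_transform(relevant_sentences)
    novel = [0]  # the first relevant sentence is novel by definition
    for i in range(1, len(relevant_sentences)):
        sims = cosine_similarity(vectors[i], vectors[novel])
        if sims.max() < threshold:
            novel.append(i)
    return [relevant_sentences[i] for i in novel]
```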

7.
In an environment full of disordered information, the media spreads fake or harmful information into the public arena faster than ever before. A news report should ideally be neutral and factual: excessive personal emotions or viewpoints should not be included, and articles ought not to be intentionally or maliciously written to create a media frame. Harmful news is defined here as explicit or implicit harmful speech in news text that harms people or distorts readers' perception. At present, however, it is difficult to effectively identify and predict fake or harmful news in advance, especially harmful news. Therefore, in this study we propose a Bidirectional Encoder Representations from Transformers (BERT) based model that applies ensemble learning methods together with text sentiment analysis to identify harmful news, aiming to give readers a way to recognize harmful content and to judge whether the information provided is presented in a reasonably neutral manner. The proposed system works in two phases. The first phase collects harmful news and establishes a development model for analyzing the correlation between text sentiment and harmfulness. The second phase identifies harmful news by analyzing text sentiment with an ensemble learning technique and the BERT model, to determine whether the news has harmful intent. Our experimental results show that the F1-score of the proposed model reaches 66.3%, an increase of 7.8% over the previous term frequency-inverse document frequency (TF-IDF) approach, which adopts a Lagrangian Support Vector Machine (LSVM) model without text sentiment. Moreover, the proposed method performs better at recognizing various cases of information disorder.
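A hedged sketch of the two-signal design with scikit-learn: a per-article sentiment score is appended to lexical features and a soft-voting ensemble makes the final call. The keyword-based sentiment scorer, the base classifiers, and the toy data are stand-ins; the paper's actual model fine-tunes BERT, which is not reproduced here:

```python
from scipy.sparse import csr_matrix, hstack
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "officials smear rivals in vicious attack piece",
    "city council approves the annual budget",
    "shocking fraud claims meant to spread hate",
    "local team wins the regional championship",
]
labels = [1, 0, 1, 0]  # 1 = harmful, 0 = neutral

def sentiment_score(text):
    """Stand-in for a real sentiment model (the paper uses BERT-based
    sentiment analysis); this keyword ratio is only illustrative."""
    negative = {"smear", "vicious", "attack", "fraud", "hate", "shocking"}
    tokens = text.lower().split()
    return sum(t in negative for t in tokens) / len(tokens)

# Lexical TF-IDF features plus one sentiment column per article.
vec = TfidfVectorizer()
X = hstack([vec.fit_transform(texts),
            csr_matrix([[sentiment_score(t)] for t in texts])])

# Soft-voting ensemble over two base classifiers.
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=100, random_state=0))],
    voting="soft")
ensemble.fit(X, labels)
print(ensemble.predict(X))
```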

8.
胡吉明  钱玮  李雨薇  文鹏 《情报科学》2021,39(10):11-17
[Purpose/Significance] Taking topics as the core, this paper performs a structured analysis of policy texts from two perspectives, external attributes and content attributes, to directly reflect the core connotation of policies, mine the semantics of policy texts, and provide a new model for interpreting policy content. [Method/Process] The LDA2Vec topic model is used to achieve context-based topic identification in policy texts, while positional and syntactic rules are used to extract external attributes; on this basis, a descriptive framework for the structured analysis of policy texts is constructed. [Result/Conclusion] An empirical analysis of "Internet Plus" policy texts shows that the proposed framework helps to present policy elements intuitively, effectively reveals the topic distribution of policy texts, and supports batch analysis and interpretation of large-scale policy texts. [Innovation/Limitation] The structured analysis framework presents the formal and thematic characteristics of policy texts and helps policy stakeholders grasp the characteristics and focus of policy making; deeper content interpretation requires further research.

9.
GPS-enabled devices and the popularity of social media have created an unprecedented opportunity for researchers to collect, explore, and analyze text data with fine-grained spatial and temporal metadata. Text, time, and space are different domains with their own representation scales and methods, which poses the challenge of detecting relevant patterns that may only arise from the combination of text with spatio-temporal elements. In particular, spatio-temporal textual data representation has relied on feature embedding techniques, which can limit a model's expressiveness for representing certain patterns extracted from the sequence structure of textual data. To deal with these problems, we propose an Acceptor recurrent neural network model that jointly models spatio-temporal textual data. Our goal is to represent the mutual influence and relationships between written language and the time and place where it was produced. We represent space, time, and text as tuples, and use pairs of elements to predict the third one, yielding three predictive tasks that are trained simultaneously. We conduct experiments on two social media datasets and on a crime dataset, using Mean Reciprocal Rank as the evaluation metric. Our experiments show that our model outperforms state-of-the-art methods, with improvements ranging from 5.5% to 24.7% for location and time prediction.
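A rough PyTorch sketch of the pair-predicts-third idea: an Acceptor-style GRU keeps only its final state as the text representation, discretized time and place get embeddings, and each head predicts one tuple element from the other two (the text-prediction head is omitted for brevity). All dimensions and layer choices are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class SpatioTemporalText(nn.Module):
    def __init__(self, vocab, n_places, n_times, dim=64):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, dim)
        self.place_emb = nn.Embedding(n_places, dim)
        self.time_emb = nn.Embedding(n_times, dim)
        # Acceptor-style encoder: only the final GRU state is used.
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.place_head = nn.Linear(2 * dim, n_places)  # (text, time) -> place
        self.time_head = nn.Linear(2 * dim, n_times)    # (text, place) -> time

    def encode_text(self, tokens):                  # tokens: (batch, seq)
        _, h = self.encoder(self.word_emb(tokens))  # h: (1, batch, dim)
        return h.squeeze(0)

    def forward(self, tokens, place, time):
        t = self.encode_text(tokens)
        place_logits = self.place_head(torch.cat([t, self.time_emb(time)], dim=-1))
        time_logits = self.time_head(torch.cat([t, self.place_emb(place)], dim=-1))
        return place_logits, time_logits

model = SpatioTemporalText(vocab=10000, n_places=50, n_times=24)
tokens = torch.randint(0, 10000, (8, 20))  # a batch of 8 token sequences
place = torch.randint(0, 50, (8,))
time = torch.randint(0, 24, (8,))
place_logits, time_logits = model(tokens, place, time)
# Joint training: the task losses are simply summed.
loss = (nn.functional.cross_entropy(place_logits, place) +
        nn.functional.cross_entropy(time_logits, time))
```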

10.
A vast number of user opinions are available in reviews posted on e-commerce websites. Although these opinions are a valuable source of knowledge for both manufacturers and customers, their volume exceeds human cognitive processing capacity, which can be a major bottleneck for their effective use. To address this problem, a number of opinion-summarization methods have been proposed that organize opinions by grouping them around aspects. However, these methods tend to generate an excessive number of aspect groups that are frequently overly generic and difficult to interpret. We argue that a superior alternative is to organize opinions around product attributes as defined in a product catalog. Product attributes typically correspond to the most important characteristics of products; furthermore, they are common to all products in a given category and thus form a more stable set than aspects. In this paper, we propose a novel approach called OpinionLink, which links products in a catalog, at the attribute granularity level, with opinions extracted from product reviews. The approach has two phases. In the first phase, OpinionLink uses a classifier to identify opinionated sentences in the reviews of a particular product. In the second phase, another classifier maps the extracted opinions to the attributes of the products in the catalog. In our experiments, classifiers with the proposed features achieved an average F1 of 0.87 for identifying opinionated sentences, and the proposed opinion-mapping method achieved an average F1 of 0.85. We further verified the effectiveness of the approach as a realistic end-to-end application, indicating that OpinionLink can be used in a real setting, and we empirically demonstrated its feasibility on an extremely large volume of opinions from a collection of more than 600,000 real reviews. We also set forth a number of directions for future research.
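The two-phase structure can be sketched with two scikit-learn pipelines; the features, training sentences, and attribute labels below are placeholders, not the engineered features the paper evaluates:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Phase 1: is a review sentence opinionated at all?
opinion_clf = make_pipeline(TfidfVectorizer(), LinearSVC())
opinion_clf.fit(
    ["battery lasts forever", "shipped in a brown box", "screen is awful"],
    ["opinionated", "factual", "opinionated"])

# Phase 2: map each opinionated sentence to a catalog attribute.
attribute_clf = make_pipeline(TfidfVectorizer(), LinearSVC())
attribute_clf.fit(
    ["battery lasts forever", "screen is awful", "camera shots are crisp"],
    ["battery_life", "display", "camera"])

sentence = "the screen looks washed out"
if opinion_clf.predict([sentence])[0] == "opinionated":
    print(attribute_clf.predict([sentence])[0])  # e.g. 'display'
```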

11.
Unstructured tweet feeds are becoming a source of real-time information about various events. However, extracting actionable information in real time from this unstructured text is challenging, so researchers are employing word embedding approaches to classify it. We set our study in the contexts of the 2014 Ebola and 2016 Zika outbreaks and probed the accuracy of domain-specific word vectors for identifying crisis-related actionable tweets. Our findings suggest that relatively small domain-specific input corpora drawn from Twitter are better at capturing meaningful semantic relationships than generic pre-trained Word2Vec (trained on Google News) or GloVe (from the Stanford NLP group) vectors. However, high-quality domain-specific tweet corpora are normally scant during the early stages of an outbreak, and identifying actionable tweets early is crucial to stemming an outbreak's proliferation. To overcome this challenge, we consider scholarly abstracts related to the Ebola and Zika viruses from PubMed and probe the efficiency of cross-domain resource utilization for word vector generation. Our findings demonstrate the relevance of PubMed abstracts for training when Twitter data (as input corpus) are scant during the early stages of an outbreak; this approach can therefore be applied to future outbreaks in real time. We also explore the accuracy of our word vectors across model architectures and hyper-parameter settings, observing that Skip-gram accuracies are better than CBOW and that higher dimensions yield better accuracy.
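A short gensim sketch of the cross-domain training idea: Skip-gram vectors are trained on tokenized PubMed abstracts standing in for scarce early-outbreak tweets (the corpus and hyper-parameters are illustrative):

```python
from gensim.models import Word2Vec

# Tokenized PubMed abstracts stand in for the scarce early-outbreak tweets.
corpus = [
    ["ebola", "virus", "transmission", "bodily", "fluids"],
    ["zika", "virus", "mosquito", "aedes", "transmission"],
    ["ebola", "outbreak", "containment", "quarantine"],
]  # in practice: thousands of tokenized abstracts

model = Word2Vec(
    sentences=corpus,
    vector_size=100,   # embedding dimensionality
    window=5,
    min_count=1,
    sg=1,              # 1 = Skip-gram (reported more accurate than CBOW here)
)
print(model.wv.most_similar("ebola", topn=5))
```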

12.
Argumentation mining is a rising subject in computational linguistics focused on extracting structured arguments from natural, often unstructured or noisy, text. Initial approaches to modeling arguments aimed to identify flawless arguments in specific fields (law, scientific papers) serving specific needs (completeness, effectiveness). With the emergence of Web 2.0 and the explosion in the use of social media, both the diffusion of the data and the structure of arguments have changed. In this survey article, we bridge the gap between theoretical approaches to argumentation mining and pragmatic schemes that satisfy the needs of social media generated data, recognizing the need for more flexible and expandable schemes capable of adjusting to the argumentation conditions that exist in social media. We review, compare, and classify existing approaches, techniques, and tools, identify the positive outcomes of combining tasks and features, and propose a conceptual architecture framework. The proposed theoretical framework is an argumentation mining scheme able to identify the distinct sub-tasks and capture the needs of social media text, underlining the need for more flexible and extensible frameworks.

13.
Social emotion refers to the emotion evoked in the reader by a textual document. In contrast to the emotion cause extraction task, which analyzes the causes of the author's sentiments based on expressions in the text, identifying the causes of the social emotion evoked in the reader has not been explored previously. Social emotion mining and its cause analysis are not only an important research topic in Web-based social media analytics and text mining but also have applications in multiple domains. Because social emotion cause identification analyzes the causes of reader emotions that are neither explicitly nor implicitly expressed in the text, it is a challenging task fundamentally different from previous research, and it requires a deeper understanding of the cognitive process underlying the inference of social emotion. In this paper, we propose the new task of social emotion cause identification (SECI). Inspired by the cognitive structure of emotions (OCC) theory, we present a Cognitive Emotion model Enhanced Sequential (CogEES) method for SECI. Based on the implications of the OCC model, our method first establishes the correspondence between words/phrases in text and the emotional dimensions identified in OCC, building emotional dimension lexicons with 1,676 distinct words/phrases. It then uses the lexicon information and discourse coherence for semantic segmentation of the document and for enhanced clause representation learning. Finally, it combines text segmentation and clause representation in a sequential model for cause clause prediction. We construct an SECI dataset for this new task and conduct experiments to evaluate CogEES. Our method outperforms the baselines, achieving over 10% F1 improvement on average with better interpretability of the prediction results.

14.
[Purpose/Significance] Effectively fusing heterogeneous data in citation networks, such as citation relations and textual attributes, strengthens the semantic associations between document nodes and thus provides strong support for tasks such as data mining and knowledge discovery. [Method/Process] A knowledge representation method for citation networks is proposed: a neural network model first learns the k-order proximity structure of the citation network; the doc2vec model then learns textual attributes such as titles and abstracts; finally, a cross-learning mechanism based on vector sharing is given for fusing the heterogeneous data. [Result/Conclusion] Tests on a CNKI citation dataset in the stem cell field achieved good performance on link prediction, demonstrating the effectiveness and soundness of the method.
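The doc2vec step for the textual attributes can be sketched with gensim; the k-order proximity encoder and the vector-sharing fusion mechanism are not reproduced, and the documents below are placeholders:

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# One TaggedDocument per paper: title + abstract tokens, tagged by paper id.
docs = [
    TaggedDocument(["stem", "cell", "differentiation", "pathways"], ["paper_1"]),
    TaggedDocument(["induced", "pluripotent", "stem", "cell", "reprogramming"], ["paper_2"]),
    TaggedDocument(["citation", "network", "link", "prediction"], ["paper_3"]),
]

model = Doc2Vec(docs, vector_size=64, min_count=1, epochs=40)

# Text-attribute vector for each paper node; in the paper these would be
# fused with the structural (k-order proximity) vectors via shared weights.
vec = model.dv["paper_1"]
print(model.dv.most_similar("paper_1", topn=2))
```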

15.
The Gene Ontology (GO) consists of three structured controlled vocabularies, i.e., the GO domains, developed for describing attributes of gene products; its annotation is crucial for providing a common gateway to different model organism databases. This paper explores an effective application of text categorization methods to this highly practical problem in biology. As a first step, we tackle the automatic GO annotation task posed in the Text Retrieval Conference (TREC) 2004 Genomics Track: given a pair of genes and an article reference where the genes appear, the task simulates assigning GO domain codes. We approach the problem with careful consideration of the specialized terminology and pay special attention to the various forms of gene synonyms, so as to exhaustively locate occurrences of the target gene. We extract the words around the spotted gene occurrences and use them to represent the gene for GO domain code annotation. We treat the task as a text categorization problem and adopt a variant of kNN with supervised term weighting schemes, which placed our method among the top-performing systems in the TREC official evaluation. Furthermore, we investigate different feature selection policies in conjunction with the treatment of terms associated with negative instances. Our experiments reveal that round-robin feature space allocation with elimination of negative terms substantially improves performance as GO terms become more specific.
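A condensed sketch of the kNN categorization step, with plain TF-IDF standing in for the supervised term weighting schemes the paper investigates; the context windows and labels are placeholders:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Context windows extracted around gene mentions, labeled with GO domains.
contexts = [
    "kinase activity phosphorylation substrate binding",
    "nucleus chromatin localization nuclear membrane",
    "signal transduction pathway regulation cascade",
    "mitochondrion inner membrane transport",
]
labels = ["molecular_function", "cellular_component",
          "biological_process", "cellular_component"]

knn = make_pipeline(TfidfVectorizer(),
                    KNeighborsClassifier(n_neighbors=3, weights="distance"))
knn.fit(contexts, labels)
print(knn.predict(["atp binding catalytic activity"]))
```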

16.
The C4.5 algorithm is a classic data mining algorithm for data classification. In this study, diabetes complication data from clinically confirmed cases were integrated and normalized, rough set reduction was applied, and classification was implemented with the C4.5 algorithm. The results show that the method performs decision classification well and can assist clinical diagnosis.
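scikit-learn does not ship C4.5 itself, but an entropy-criterion decision tree is a close, commonly used stand-in (C4.5 proper uses gain ratio and its own pruning). A hedged sketch with hypothetical, simplified complication features:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical, simplified patient records after rough-set style
# reduction: [fasting glucose, years with diabetes, systolic BP].
X = [[7.2, 3, 130], [11.5, 12, 160], [6.1, 1, 120], [10.8, 9, 155]]
y = ["no_complication", "retinopathy", "no_complication", "retinopathy"]

# criterion="entropy" mirrors C4.5's information-based splitting.
tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
tree.fit(X, y)
print(export_text(tree, feature_names=["glucose", "duration", "systolic_bp"]))
```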

17.
Polarity classification is one of the most fundamental problems in sentiment analysis. In this paper, we propose a novel method, Sound Cosine Similarity Matching, for polarity classification of Twitter messages; it incorporates features based on the sound of words rather than on grammar or other text properties, thereby eliminating the dependency on external dictionaries. It is especially useful for correctly identifying the misspelled or shortened words frequently encountered in text from online social media. Method performance is evaluated at two levels: (i) the capture rate of misspelled and shortened words, and (ii) the classification performance of the feature set. Our results show that classification accuracy is improved, compared with two other models in the literature, when the proposed features are used.
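The paper's sound-derived feature vectors are not specified in this abstract, but the dictionary-free phonetic idea can be illustrated with the classic Soundex encoding, under which many misspellings collapse to the same code as the intended word; a self-contained sketch:

```python
def soundex(word):
    """Classic Soundex: first letter plus three digits for consonant groups."""
    codes = {"bfpv": "1", "cgjkqsxz": "2", "dt": "3",
             "l": "4", "mn": "5", "r": "6"}

    def digit(ch):
        return next((d for letters, d in codes.items() if ch in letters), "")

    word = "".join(ch for ch in word.lower() if ch.isalpha())
    if not word:
        return ""
    encoded, prev = word[0].upper(), digit(word[0])
    for ch in word[1:]:
        d = digit(ch)
        if d and d != prev:
            encoded += d
        if ch not in "hw":      # h and w do not break consonant groups
            prev = d
    return (encoded + "000")[:4]

# Misspellings common in tweets map to the same code as the target word.
print(soundex("awesome"), soundex("awsome"))   # A250 A250
print(soundex("tomorrow"), soundex("tomoro"))  # T560 T560
```

In the paper the matching is done via cosine similarity over sound-based features; Soundex here only conveys the flavor of the dictionary-free approach.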

18.
刘玉林  菅利荣 《情报杂志》2021,40(4):101-108
[Purpose/Significance] Because technology clusters study the agglomeration of technologies with common attributes, they can identify new technologies, reveal patterns of technological development, and sense market technology changes in advance, attracting growing attention from scholars and business managers. [Method/Process] Patent text data are analyzed with text mining and complex network methods to build a dynamic directed patent network; giant components are extracted from the network with a Python program, and the evolution of core technology clusters is analyzed through the topological properties of the giant components. [Result/Conclusion] An empirical study of US e-commerce data processing technology shows that building a dynamic directed network and extracting giant components to analyze technology evolution can effectively identify five core technology clusters (data equipment technology, data information technology, data system technology, data application technology, and data fusion technology) and further reveals a sound update-and-replacement mechanism among the core clusters. The empirical study demonstrates the effectiveness of the evolution analysis method for core technology clusters based on dynamic directed patent networks.
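The giant-component extraction mentioned above takes only a few lines with networkx; the patent citation edges here are placeholders:

```python
import networkx as nx

# Directed patent network: an edge u -> v means patent u cites patent v.
G = nx.DiGraph([
    ("US001", "US002"), ("US002", "US003"), ("US004", "US002"),
    ("US005", "US006"),                      # small isolated component
])

# The "giant component" is the largest weakly connected component.
giant_nodes = max(nx.weakly_connected_components(G), key=len)
giant = G.subgraph(giant_nodes).copy()

print(sorted(giant.nodes()))  # ['US001', 'US002', 'US003', 'US004']
print(nx.density(giant))      # one topological property of the giant component
```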

19.
Multimedia objects can be retrieved using their context, for instance the text surrounding them in documents. This text may be either near or far from the searched objects. Our goal in this paper is to study the impact, in terms of effectiveness, of the position of text relative to the searched objects. The multimedia objects we consider are described in structured documents such as XML documents, so the document structure can provide the position of text within documents. Although structural information has been shown to be an effective source of evidence in textual information retrieval, only a few works have investigated its interest for multimedia retrieval. More precisely, the task we address is retrieving multimedia fragments (i.e., XML elements containing at least one multimedia object). Our general approach has two steps: we first retrieve XML elements containing multimedia objects, and we then explore the surrounding information to retrieve relevant multimedia fragments. In both cases, we study the impact of the surrounding information using the document structure.

20.
Today, given the vast amount of textual data, automated extractive text summarization is one of the most common and practical techniques for organizing information. Extractive summarization selects the most appropriate sentences from the text and provides a representative summary. Sentences, as individual textual units, are usually too short for mainstream text processing techniques to perform well on, so it is vital to bridge the gap between short text units and conventional text processing methods. In this study, we propose a semantic method for implementing an extractive multi-document summarizer system using a combination of statistical, machine learning based, and graph-based methods. It is language-independent and unsupervised. The proposed framework learns the semantic representation of words from a set of given documents via the word2vec method. It expands each sentence, through an innovative method, with the most informative and least redundant words related to the main topic of the sentence; sentence expansion implicitly performs word sense disambiguation and tunes the conceptual density towards the central topic of each sentence. The framework then estimates the importance of sentences using a graph representation of the documents. To identify the most important topics of the documents, we propose an inventive clustering approach that autonomously determines the number of clusters and their initial centroids and clusters sentences accordingly. The system selects the best sentences from appropriate clusters for the final summary with respect to information salience, minimum redundancy, and adequate coverage. A set of extensive experiments on the DUC2002 and DUC2006 datasets was conducted to investigate the proposed scheme. Experimental results showed that the proposed sentence expansion algorithm and clustering approach considerably enhance the performance of the summarization system. Comparative experiments also demonstrated that the proposed framework outperforms most state-of-the-art summarizer systems and can impressively assist the task of extractive text summarization.
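A compact sketch of the graph-based sentence-importance step: a TextRank-style PageRank over pairwise TF-IDF cosine similarities. The paper's word2vec-based sentence expansion and its clustering stage are not reproduced here:

```python
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_sentences(sentences, top_n=3):
    """Rank sentences by PageRank over their pairwise similarity graph."""
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)
    graph = nx.from_numpy_array(sim)  # weighted similarity graph
    scores = nx.pagerank(graph, weight="weight")
    best = sorted(scores, key=scores.get, reverse=True)[:top_n]
    return [sentences[i] for i in sorted(best)]  # restore document order
```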
