Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Probabilistic topic models have recently attracted much attention because of their successful applications in many text mining tasks such as retrieval, summarization, categorization, and clustering. Although many existing studies have reported promising performance of these topic models, none of the work has systematically investigated the task performance of topic models; as a result, some critical questions that may affect the performance of all applications of topic models are mostly unanswered, particularly how to choose between competing models, how multiple local maxima affect task performance, and how to set parameters in topic models. In this paper, we address these questions by conducting a systematic investigation of two representative probabilistic topic models, probabilistic latent semantic analysis (PLSA) and Latent Dirichlet Allocation (LDA), using three representative text mining tasks, including document clustering, text categorization, and ad-hoc retrieval. The analysis of our experimental results provides deeper understanding of topic models and many useful insights about how to optimize the performance of topic models for these typical tasks. The task-based evaluation framework is generalizable to other topic models in the family of either PLSA or LDA.
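As a concrete illustration of the generative machinery that PLSA- and LDA-style models share, a collapsed Gibbs sampler for LDA can be sketched in a few dozen lines (a minimal toy sketch; the corpus, hyperparameters, and iteration count below are illustrative assumptions, not the paper's experimental setup):

```python
import random
from collections import defaultdict

def lda_gibbs(docs, K=2, alpha=0.1, beta=0.01, iters=100, seed=0):
    """Collapsed Gibbs sampling for LDA over tokenized documents.
    Returns (topic-word counts, document-topic counts)."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    ndk = [[0] * K for _ in docs]                # doc-topic counts
    nkw = [defaultdict(int) for _ in range(K)]   # topic-word counts
    nk = [0] * K                                 # tokens per topic
    z = []                                       # topic assignment per token
    for d, doc in enumerate(docs):               # random initialization
        zd = []
        for w in doc:
            k = rng.randrange(K)
            zd.append(k); ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
        z.append(zd)
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                      # remove current assignment
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                # full conditional p(z = k | everything else)
                weights = [(ndk[d][j] + alpha) * (nkw[j][w] + beta) / (nk[j] + V * beta)
                           for j in range(K)]
                r = rng.random() * sum(weights)
                for j, wt in enumerate(weights): # sample new topic
                    r -= wt
                    if r <= 0:
                        k = j
                        break
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return nkw, ndk
```

Running the sampler from different random seeds illustrates the multiple-local-maxima issue the paper investigates: different runs can converge to different topic assignments.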

2.
Neural Network Agents for Learning Semantic Text Classification
The research project AgNeT develops Agents for Neural Text routing in the internet. Unrestricted, potentially faulty text messages arrive at a certain delivery point (e.g. an email address or world wide web address). These text messages are scanned and then distributed to one of several expert agents according to a certain task criterion. Possible specific scenarios within this framework include learning the routing of publication titles or news titles. In this paper we describe extensive experiments for semantic text routing based on classified library titles and newswire titles. This task is challenging since incoming messages may contain constructions which have not been anticipated. Therefore, the contributions of this research are in learning and generalizing neural architectures for the robust interpretation of potentially noisy unrestricted messages. Neural networks were developed and examined for this topic since they support robustness and learning in noisy unrestricted real-world texts. We describe and compare different sets of experiments. The first set of experiments tests a recurrent neural network on the task of library title classification. Then we describe a larger, more difficult newswire classification task from information retrieval. The comparison of the examined models demonstrates that techniques from information retrieval integrated into recurrent plausibility networks performed well even under noise and for different corpora.

3.

Latent Dirichlet allocation (LDA) topic models are increasingly being used in communication research. Yet, questions regarding reliability and validity of the approach have received little attention thus far. In applying LDA to textual data, researchers need to tackle at least four major challenges that affect these criteria: (a) appropriate pre-processing of the text collection; (b) adequate selection of model parameters, including the number of topics to be generated; (c) evaluation of the model’s reliability; and (d) the process of validly interpreting the resulting topics. We review the research literature dealing with these questions and propose a methodology that approaches these challenges. Our overall goal is to make LDA topic modeling more accessible to communication researchers and to ensure compliance with disciplinary standards. Consequently, we develop a brief hands-on user guide for applying LDA topic modeling. We demonstrate the value of our approach with empirical data from an ongoing research project.

4.
[Purpose/Significance] Probabilistic topic model algorithms continue to be improved and extended. This paper reviews citation-based topic models proposed at home and abroad, analyzes and compares the generative processes and algorithms of different models, and discusses applications of citation-based topic models in scientific-text analysis together with directions for extension. [Method/Process] Relevant literature on citation-based topic models was retrieved from the Web of Science and CNKI databases; after manual screening, representative papers were selected, and the citation-based topic models in these papers were compared and analyzed in terms of modeling ideas, generative processes, and parameter estimation and inference algorithms. [Result/Conclusion] Existing citation-based topic models fall into three groups: models of topics and citation distributions, models of the relationships between cited and citing topics, and citation topic models based on citation content. Introducing citation information into topic models yields more complete topic content and identifies important documents under specific topics, as well as the relationships and influence between the topics of citing and cited documents. Most existing models extend Probabilistic Latent Semantic Analysis (PLSA) or Latent Dirichlet Allocation (LDA). Future work may explore topic models that incorporate citation content, model performance optimization and evaluation methods, and further applications of the models.

5.
A review of text and image retrieval approaches for broadcast news video
The effectiveness of a video retrieval system largely depends on the choice of underlying text and image retrieval components. The unique properties of video collections (e.g., multiple sources, noisy features and temporal relations) suggest we examine the performance of these retrieval methods in such a multimodal environment, and identify the relative importance of the underlying retrieval components. In this paper, we review a variety of text/image retrieval approaches as well as their individual components in the context of broadcast news video. Numerous components of text/image retrieval have been discussed in detail, including retrieval models, text sources, temporal expansion methods, query expansion methods, image features, and similarity measures. For each component, we conduct a series of retrieval experiments on TRECVID video collections to identify their advantages and disadvantages. To provide a more complete coverage of video retrieval, we briefly discuss an emerging approach called concept-based video retrieval, and review strategies for combining multiple retrieval outputs.

6.
俞琰  赵乃瑄 《图书情报工作》2018,62(11):120-126
[Purpose/Significance] Automatic selection of domain stopwords for topic modeling of patent texts has not yet been adequately studied. This paper proposes a new method for automatically selecting domain stopwords for patent topic model analysis, so as to improve the discriminability and quality of patent topic models. [Method/Process] Domain stopwords are essentially low-information words with little discriminative power across patent categories. An auxiliary patent corpus is therefore introduced; category entropy is used to measure each word's distribution across categories, words are ranked by category entropy, and the words with the highest entropy are selected as domain stopwords. [Result/Conclusion] Experiments on patent text data verify the feasibility and effectiveness of the method, which effectively improves the discriminability of patent topic models.
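The category-entropy criterion described above can be sketched as follows (a minimal illustration with made-up counts; the function names are not from the paper):

```python
import math

def category_entropy(class_counts):
    """Shannon entropy (bits) of a term's occurrence counts across
    categories; an evenly spread term has high entropy and is a
    candidate domain stopword."""
    total = sum(class_counts)
    probs = [c / total for c in class_counts if c]
    return -sum(p * math.log(p, 2) for p in probs)

def domain_stopwords(term_counts, top_n):
    """Rank terms by category entropy (descending) and keep the top_n."""
    ranked = sorted(term_counts,
                    key=lambda t: category_entropy(term_counts[t]),
                    reverse=True)
    return ranked[:top_n]
```

A term appearing uniformly in four categories scores log2(4) = 2 bits, while a term confined to one category scores 0, so generic patent vocabulary floats to the top of the ranking.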

7.
We augment naive Bayes models with statistical n-gram language models to address shortcomings of the standard naive Bayes text classifier. The result is a generalized naive Bayes classifier which allows for a local Markov dependence among observations; a model we refer to as the Chain Augmented Naive Bayes (CAN) classifier. CAN models have two advantages over standard naive Bayes classifiers. First, they relax some of the independence assumptions of naive Bayes—allowing a local Markov chain dependence in the observed variables—while still permitting efficient inference and learning. Second, they permit straightforward application of sophisticated smoothing techniques from statistical language modeling, which allows one to obtain better parameter estimates than the standard Laplace smoothing used in naive Bayes classification. In this paper, we introduce CAN models and apply them to various text classification problems. To demonstrate the language independent and task independent nature of these classifiers, we present experimental results on several text classification problems—authorship attribution, text genre classification, and topic detection—in several languages—Greek, English, Japanese and Chinese. We then systematically study the key factors in the CAN model that can influence the classification performance, and analyze the strengths and weaknesses of the model.
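A minimal sketch of the chain-augmented idea — each class modeled by a bigram language model — using plain Laplace smoothing in place of the more sophisticated smoothing techniques the paper advocates (all names and data below are illustrative, not from the paper):

```python
import math
from collections import defaultdict

class ChainNB:
    """Chain-augmented naive Bayes sketch: per-class bigram language
    models with add-one smoothing; richer smoothing schemes from
    statistical language modeling could be swapped in."""

    def fit(self, texts, labels):
        self.classes = sorted(set(labels))
        self.prior = {c: labels.count(c) / len(labels) for c in self.classes}
        self.bigram = {c: defaultdict(int) for c in self.classes}
        self.unigram = {c: defaultdict(int) for c in self.classes}
        vocab = set()
        for toks, c in zip(texts, labels):
            toks = ["<s>"] + toks          # sentence-start marker
            vocab.update(toks)
            for a, b in zip(toks, toks[1:]):
                self.bigram[c][(a, b)] += 1
                self.unigram[c][a] += 1
        self.V = len(vocab)
        return self

    def score(self, toks, c):
        """Log p(class) + sum of smoothed log bigram probabilities."""
        toks = ["<s>"] + toks
        s = math.log(self.prior[c])
        for a, b in zip(toks, toks[1:]):
            s += math.log((self.bigram[c][(a, b)] + 1) /
                          (self.unigram[c][a] + self.V))
        return s

    def predict(self, toks):
        return max(self.classes, key=lambda c: self.score(toks, c))
```

Setting the chain order to zero (unigrams only) would recover the standard naive Bayes classifier, which is exactly the generalization the abstract describes.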

8.
[Purpose/Significance] During public health emergencies such as the COVID-19 epidemic, social media rapidly produces a large volume of epidemic-related posts, among them deliberately spread rumors that harm public mental health and interfere with the implementation of response measures. Identifying rumors during public health emergencies helps the public face the crisis correctly and plays a positive role in maintaining social stability and online governance. [Method/Process] First, verified rumors collected during the epidemic are analyzed in depth to extract the main features of rumor texts, including context features, topic-category features, sentiment-intensity features, and keyword features. To address the limited feature representation of typical text classification models, different models are used to vectorize the extracted rumor-text features, which are then reinforced and fused; in particular, word-vector weights computed by TF-IDF capture context while strengthening word-level keyword information. Finally, a BiLSTM+DNN model classifies the fused feature vectors. [Result/Conclusion] Experiments show that topic category, sentiment intensity, and other features all contribute to rumor identification; in particular, fusing the reinforced word vectors with the other features markedly improves accuracy, with recall and F1 both above 90%, outperforming other rumor identification models. The proposed method thus effectively identifies rumors in the context of public health emergencies.
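The feature-fusion step — TF-IDF-weighted word vectors concatenated with auxiliary topic and sentiment features — might look roughly like this (dimensions, weights, and names are illustrative assumptions; the BiLSTM+DNN classifier itself is not reproduced here):

```python
def fuse_features(token_vecs, tfidf, topic_feats, senti_feats):
    """TF-IDF-weighted average of token embeddings, concatenated with
    auxiliary topic-category and sentiment-intensity features.
    token_vecs: {token: embedding}, tfidf: {token: weight}."""
    dim = len(next(iter(token_vecs.values())))
    total = sum(tfidf.get(t, 0.0) for t in token_vecs) or 1.0
    avg = [0.0] * dim
    for t, vec in token_vecs.items():
        w = tfidf.get(t, 0.0)
        for i in range(dim):
            avg[i] += w * vec[i]          # weight each embedding dimension
    # normalized weighted average, then concatenate auxiliary features
    return [x / total for x in avg] + topic_feats + senti_feats
```

The weighting pushes high-TF-IDF keywords to dominate the averaged representation, which is the "strengthening keyword information" effect the abstract describes.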

9.
Two-stage statistical language models for text database selection
As the number and diversity of distributed Web databases on the Internet increase exponentially, it is difficult for users to know which databases are appropriate to search. Given database language models that describe the content of each database, database selection services can provide assistance in locating databases relevant to the information needs of users. In this paper, we propose a database selection approach based on statistical language modeling. The basic idea behind the approach is that, for databases categorized into a topic hierarchy, individual language models are estimated at different search stages, and the databases are then ranked by similarity to the query according to the estimated language model. Two-stage smoothed language models are presented to circumvent inaccuracy due to word sparseness. Experimental results demonstrate that such a language modeling approach is competitive with current state-of-the-art database selection approaches.
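The two-stage smoothing idea can be sketched as linear interpolation of a database model with its topic-category model and the whole-collection model (the interpolation weights and the exact estimator here are assumptions, not the paper's formulation):

```python
import math

def two_stage_score(query, db_tf, cat_tf, coll_tf, lam_db=0.6, lam_cat=0.3):
    """Log-likelihood of a query under a database language model smoothed
    in two stages: first with the database's topic-category model, then
    with the whole-collection model, so unseen query words never zero out
    the score. Each *_tf argument is a {word: count} table."""
    def mle(tf):
        total = sum(tf.values()) or 1
        return lambda w: tf.get(w, 0) / total
    p_db, p_cat, p_coll = mle(db_tf), mle(cat_tf), mle(coll_tf)
    lam_coll = 1.0 - lam_db - lam_cat
    return sum(math.log(lam_db * p_db(w) + lam_cat * p_cat(w) + lam_coll * p_coll(w))
               for w in query)
```

Databases are then ranked by this score for a given query; a database whose own model and category model both cover the query terms scores above one that relies on collection-level smoothing alone.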

10.
This study assesses the potential of topic models coupled with machine translation for comparative communication research across language barriers. From a methodological point of view, the robustness of a combined approach is examined. For this purpose the results of different machine translation services (Google Translate vs. DeepL) as well as methods (full-text vs. term-by-term) are compared. From a substantive point of view, the integratability of the approach into comparative study designs is tested. For this, the online discourses about climate change in Germany, the United Kingdom, and the United States are compared. First, the results show that the approach is relatively robust and second, that integration in comparative study designs is not a problem. It is concluded that this as well as the relatively moderate costs in terms of time and money makes the strategy to couple topic models with machine translation a valuable addition to the toolbox of comparative communication researchers.

11.
梁爽  刘小平 《图书情报工作》2022,66(13):138-149
[Purpose/Significance] This paper surveys text-mining-based research on topic evolution in scientific literature at home and abroad, classifies and summarizes the methods used in topic evolution analysis, points out the limitations of existing research, and offers new ideas and references for topic evolution studies. [Method/Process] Following the typical workflow of topic evolution research, the models, indicators, and methods used at three levels of analysis — dataset selection and object analysis; topic identification; and topic evolution (temporal analysis of topics, topic-strength evolution, and topic-content evolution) — are compared and their strengths and weaknesses summarized; the limitations of existing research are identified and future developments anticipated. [Result/Conclusion] Current research has reached a certain scale with a fairly mature analytical framework, but several shortcomings remain: data sources are relatively homogeneous; the drawbacks of LDA and its extensions still need to be overcome; other machine learning and deep learning algorithms remain underexplored; and evolution analysis methods need to be combined and integrated with one another. Future work should make corresponding improvements and investigate these problems in depth.

12.
Content topic mining and evolution based on a dynamic LDA topic model
The mining of topics in text content and the study of their evolution play an important role in text modeling, classification, and recommendation. Starting from the principles of LDA-based topic mining of text content, and considering the characteristics of text in today's network environment, this paper builds an LDA model suited to topic mining of dynamic text content, improves the accuracy of topic mining through an improved Gibbs sampling estimate, and then studies how content topics evolve over time in terms of topic similarity and topic strength. Experiments show that the proposed method is feasible and effective, with practical significance for subsequent research on text semantic modeling and classification.

13.
Business Process Management (BPM) is a topic of greatest relevance to government innovation. While the concept originally stems from the private sector, public sector organizations have established BPM capabilities and are in the process of developing them further. Despite the importance of the phenomenon, the literature does not yet provide a comprehensive picture of BPM capabilities in governments. In this paper, we thus examine BPM capabilities on the local government level by means of an intertwined quantitative survey and (representative) qualitative in-depth case study. We identify a set of BPM challenges and reflect on the power of prevalent BPM capability assessment and development models, mostly maturity models, to provide good guidance. We suggest taking into account organizational positions in order to overcome the significant shortcoming of the ‘maturity’ concept, especially the focus on convergence towards an “ideal” state. Thus, we argue for developmental models following divergence theories. Implications for practice and potentially fruitful avenues for future research are discussed in the light of our findings.

14.
15.
林杰  苗润生 《情报学报》2020,39(1):68-80
In professional social media, a topic graph comprises the topics discussed in a forum and the relationships among them; it has important applications such as mining directions for professional product innovation and building domain knowledge indexes. This paper proposes a topic graph construction method for professional social media based on deep learning and text mining. First, a Skip-Gram model is trained on text from professional social media; the model's hidden-layer weights and output predictions yield, respectively, the semantic similarity and contextual association between words. Second, based on these two measures, an existing set of seed ontology terms for the domain is expanded: words that are semantically similar or contextually adjacent to existing terms are added to the ontology, supplying high-quality domain vocabulary for topic extraction. Then, using the expanded ontology vocabulary, an ontology-aware LDA topic model extracts topics and topic words from the professional social media texts. Finally, association weights defined from semantic similarity and contextual association, together with a graph model and spectral clustering, yield the relations and hierarchical structure among topics and topic words. Topic graph generation experiments on an automobile forum corpus show that the purity of the extracted topic words improves by 20.2% over using LDA alone, and the relationships among topics are presented clearly and reasonably.
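The ontology-expansion step based on embedding similarity might be sketched as follows (the cosine threshold and toy vectors are assumptions; the method described above additionally uses a contextual-association score from the Skip-Gram output layer, which is omitted here):

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def expand_ontology(seeds, vecs, threshold=0.8):
    """Add any vocabulary word whose embedding similarity to some seed
    term meets the threshold. vecs: {word: embedding}."""
    expanded = set(seeds)
    for word, vec in vecs.items():
        if word in expanded:
            continue
        if any(cosine(vec, vecs[s]) >= threshold for s in seeds if s in vecs):
            expanded.add(word)
    return expanded
```

In a real setting the embeddings would come from the trained Skip-Gram hidden layer rather than hand-written vectors.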

16.
A review of research methods for microblog topic detection
[Purpose/Significance] This paper comprehensively reviews and comments on the literature on microblog topic detection, providing a reference for researchers conducting related work. [Method/Process] Against the basic principles and main methods of traditional topic detection, it analyzes the organizational characteristics of microblog text and surveys microblog topic detection methods from two angles — those based on short-text features and those based on non-text features — elaborating each class of methods and its characteristics, and concluding with an outlook on future trends in microblog topic detection research. [Result/Conclusion] Research on microblog topic detection is still at an exploratory stage; future work should continue to deepen theoretical exploration and innovate research methods.

17.
In this paper, we study different applications of cross-language latent topic models trained on comparable corpora. The first focus lies on the task of cross-language information retrieval (CLIR). The Bilingual Latent Dirichlet allocation model (BiLDA) allows us to create an interlingual, language-independent representation of both queries and documents. We construct several BiLDA-based document models for CLIR, where no additional translation resources are used. The second focus lies on the methods for extracting translation candidates and semantically related words using only per-topic word distributions of the cross-language latent topic model. As the main contribution, we combine the two former steps, blending the evidences from the per-document topic distributions and the per-topic word distributions of the topic model with the knowledge from the extracted lexicon. We design and evaluate the novel evidence-rich statistical model for CLIR, and prove that such a model, which combines various (only internal) evidences, obtains the best scores for experiments performed on the standard test collections of the CLEF 2001–2003 campaigns. We confirm these findings in an alternative evaluation, where we automatically generate queries and perform the known-item search on a test subset of Wikipedia articles. The main importance of this work lies in the fact that we train translation resources from comparable document-aligned corpora and provide novel CLIR statistical models that exhaustively exploit as many cross-lingual clues as possible in the quest for better CLIR results, without use of any additional external resources such as parallel corpora or machine-readable dictionaries.

18.
One important reason for the use of field categorization in bibliometrics is the necessity to make citation impact of papers published in different scientific fields comparable with each other. Raw citations are normalized by using field-categorization schemes to achieve comparable citation scores. There are different approaches to field categorization available. They can be broadly classified as intellectual and algorithmic approaches. A paper-based algorithmically constructed classification system (ACCS) was proposed which is based on citation relations. Using a few ACCS field-specific clusters, we investigate the discriminatory power of the ACCS. The micro study focusses on the topic ‘overall water splitting’ and related topics. The first part of the study investigates intellectually whether the ACCS is able to identify papers on overall water splitting reliably and validly. Next, we compare the ACCS with (1) a paper-based intellectual (INSPEC) classification and (2) a journal-based intellectual classification (Web of Science, WoS, subject categories). In the last part of our case study, we compare the average number of citations in selected ACCS clusters (on overall water splitting and related topics) with the average citation count of publications in WoS subject categories related to these clusters. The results of this micro study question the discriminatory power of the ACCS. We recommend larger follow-up studies on broad datasets.
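The basic normalization that any field-categorization scheme is meant to enable — dividing a paper's citations by the mean citation count of its field — can be sketched as follows (field labels and counts are made up for illustration):

```python
from collections import defaultdict

def field_normalized_scores(papers):
    """Each paper's citation count divided by the mean citation count of
    its field cluster; a score of 1.0 means exactly average impact for
    the field. papers: list of {"field": str, "cites": number}."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for p in papers:
        sums[p["field"]] += p["cites"]
        counts[p["field"]] += 1
    means = {f: sums[f] / counts[f] for f in sums}
    return [p["cites"] / means[p["field"]] for p in papers]
```

The discriminatory power of a classification scheme matters precisely because these field means are the denominators: clusters that mix topics distort every normalized score computed from them.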

19.
Search engine results are often biased towards a certain aspect of a query or towards a certain meaning for ambiguous query terms. Diversification of search results offers a way to supply the user with a better balanced result set, increasing the probability that a user finds at least one document suiting her information need. In this paper, we present a reranking approach based on minimizing variance of Web search results to improve topic coverage in the top-k results. We investigate two different document representations as the basis for reranking: smoothed language models and topic models derived by Latent Dirichlet allocation. To evaluate our approach we selected 240 queries from Wikipedia disambiguation pages. This provides us with ambiguous queries together with a community generated balanced representation of their (sub)topics. For these queries we crawled two major commercial search engines. In addition, we present a new evaluation strategy based on Kullback-Leibler divergence and Wikipedia. We evaluate this method using the TREC sub-topic evaluation on the one hand, and manually annotated query results on the other hand. Our results show that minimizing variance in search results by reranking relevant pages significantly improves topic coverage in the top-k results with respect to Wikipedia, and gives a good overview of the overall search result. Moreover, latent topic models achieve competitive diversification with significantly less reranking. Finally, our evaluation reveals that our automatic evaluation strategy using Kullback-Leibler divergence correlates well with α-nDCG scores used in manual evaluation efforts.
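The KL-based automatic evaluation idea can be sketched as comparing the topic distribution of the top-k results against a balanced Wikipedia subtopic distribution (a rough sketch under simplifying assumptions, not the paper's exact metric):

```python
import math
from collections import Counter

def kl_divergence(p, q, eps=1e-9):
    """Smoothed KL(p || q) over the union of topic keys."""
    topics = set(p) | set(q)
    return sum((p.get(t, 0) + eps) * math.log((p.get(t, 0) + eps) / (q.get(t, 0) + eps))
               for t in topics)

def coverage_gap(topk_topics, wiki_dist):
    """Divergence between the empirical topic distribution of the top-k
    results and a balanced Wikipedia subtopic distribution; lower means
    better topic coverage. topk_topics: one topic label per result."""
    counts = Counter(topk_topics)
    total = sum(counts.values())
    p = {t: c / total for t, c in counts.items()}
    return kl_divergence(p, wiki_dist)
```

A result list that covers the disambiguation page's subtopics evenly scores near zero, while a list biased toward one sense of the query scores high, which is the intuition behind using this divergence as an automatic stand-in for manual α-nDCG judgments.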

20.
The detection of communities in large social networks is receiving increasing attention in a variety of research areas. Most existing community detection approaches focus on the topology of social connections (e.g., coauthor, citation, and social conversation) without considering their topic and dynamic features. In this paper, we propose two models to detect communities by considering both topic and dynamic features. First, the Community Topic Model (CTM) can identify communities sharing similar topics. Second, the Dynamic CTM (DCTM) can capture the dynamic features of communities and topics based on the Bernoulli distribution that leverages the temporal continuity between consecutive timestamps. Both models were tested on two datasets: ArnetMiner and Twitter. Experiments show that communities with similar topics can be detected and the co-evolution of communities and topics can be observed by these two models, which allow us to better understand the dynamic features of social networks and make improved personalized recommendations.
