Similar Documents
20 similar documents found.
1.
Decision support systems (DSS) are a class of information systems where data, models, and an interface are combined to support a decision-maker's needs for data and analysis. This article reports on a system which fits all of the classic definitions of a DSS and which includes spatial models of object locations as crucial parts of the analytic, data, and interface support provided to the user. The system illustrates several interesting aspects of the construction of systems of this type, including the potential role of geographic information system (GIS) capabilities in DSS; the translation of user decision support needs with a geographic component into a DSS architecture; and the integration of a PC-based GIS package with additional interface, data management, and analytic tools. The system also illustrates certain managerial implications for systems of this type, including the importance of planning for system maintenance and the value of geographic data.

2.
Classifying Amharic webnews
We present work aimed at compiling an Amharic corpus from the Web and automatically categorizing the texts. Amharic is the second most spoken Semitic language in the world (after Arabic) and used for countrywide communication in Ethiopia. It is highly inflectional and quite dialectally diversified. We discuss the issues of compiling and annotating a corpus of Amharic news articles from the Web. This corpus was then used in three sets of text classification experiments. Working with a less-researched language highlights a number of practical issues that might otherwise receive less attention or go unnoticed. The purpose of the experiments has not primarily been to develop a cutting-edge text classification system for Amharic, but rather to put the spotlight on some of these issues. The first two sets of experiments investigated the use of Self-Organizing Maps (SOMs) for document classification. Testing on small datasets, we first looked at classifying unseen data into 10 predefined categories of news items, and then at clustering it around query content, when taking 16 queries as class labels. The third set of experiments investigated the effect of operations such as stemming and part-of-speech tagging on text classification performance. We compared three representations while constructing classification models based on bagging of decision trees for the 10 predefined news categories. The best accuracy was achieved using the full text as representation. A representation using only the nouns performed almost equally well, confirming the assumption that most of the information required for distinguishing between various categories actually is contained in the nouns, while stemming did not have much effect on the performance of the classifier.
Lemma Nigussie Habte
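The bagging-of-decision-trees setup described above can be approximated with off-the-shelf tools. The sketch below is a minimal illustration, with a toy English corpus and invented labels standing in for the Amharic news data; the bag-of-words representation and all parameters are assumptions, not the authors' configuration.

```python
# Minimal sketch: bagging of decision trees over a bag-of-words representation,
# roughly mirroring the classification setup described above.  The toy corpus
# and category labels are illustrative stand-ins for the Amharic news data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline

docs = [
    "national team wins football match",      # sport
    "parliament passes new budget law",       # politics
    "league announces new season schedule",   # sport
    "minister comments on election reform",   # politics
]
labels = ["sport", "politics", "sport", "politics"]

clf = make_pipeline(
    CountVectorizer(),                                   # full-text representation
    BaggingClassifier(DecisionTreeClassifier(),          # bagged decision trees
                      n_estimators=50, random_state=0),
)
clf.fit(docs, labels)
print(clf.predict(["budget debate in parliament"]))
```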

3.
Disruptive technologies form a technology cluster with a complex internal structure. Viewed from the spatial dimension, a disruptive technology is a complex cluster of dominant, auxiliary, and supporting technologies that spans multiple disciplines and fields. Against this background, using scientometric methods to evaluate disruptive technologies and to explore the laws of scientific and technological evolution is challenging, and the challenge manifests itself essentially in data retrieval. This paper explores a new machine-learning-based strategy for constructing patent datasets: the patent retrieval task is treated as a binary classification task, similar to the active-learning-based query classification idea in information retrieval, and an improved text classification method is proposed that combines F-measure feature maximization with a CNN (convolutional neural networks) model. Taking the artificial intelligence (AI) technology domain as an example, training experiments achieved an accuracy of 98.01%, a recall of 97.04%, and an F1 value of 97.89%, showing that the proposed strategy can accurately identify AI patents and improves the precision and recall of patent retrieval, which helps to build a precise, accurate, and complete patent dataset for the AI technology domain.
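As a rough, self-contained sketch of framing patent retrieval as binary text classification with a small convolutional network: the F-measure feature-maximization step is omitted, and the toy corpus, labels, and all hyperparameters below are invented rather than taken from the paper.

```python
# Minimal sketch of patent retrieval framed as binary text classification with
# a small text CNN.  The F-measure feature-maximization step is not reproduced;
# corpus, labels, and hyperparameters are illustrative assumptions.
import numpy as np
from tensorflow.keras import layers, models

texts = [
    "neural network method for image recognition",      # AI patent (1)
    "reinforcement learning agent for robot control",   # AI patent (1)
    "valve assembly for a hydraulic pump",               # non-AI patent (0)
    "chemical composition for coating steel surfaces",   # non-AI patent (0)
]
labels = np.array([1, 1, 0, 0])

vectorizer = layers.TextVectorization(max_tokens=5000, output_sequence_length=32)
vectorizer.adapt(texts)
x = vectorizer(np.array(texts))

model = models.Sequential([
    layers.Embedding(input_dim=5000, output_dim=64),
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(1, activation="sigmoid"),   # probability that the patent is in the AI domain
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, labels, epochs=5, verbose=0)
```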

4.
Text classification is the foundation and core of text mining. Building an accurate and stable text classifier is the key to text classification, and many researchers have proposed different classifier models and algorithms. Existing classifier evaluation methods, however, are concerned only with classification accuracy and do not touch on stability, an equally important evaluation criterion. This paper proposes using the ratio of open-test accuracy to closed-test accuracy as a criterion for evaluating the stability of a text classifier. Validation on data from the literature and experiments on MBNC, the Bayesian classifier experimental platform built by the authors, show that evaluating text classifiers with this criterion is reasonable.
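A minimal sketch of the proposed stability measure, the ratio of open-test (held-out) accuracy to closed-test (training-set) accuracy; the dataset and naive Bayes pipeline below are illustrative choices, not the authors' MBNC platform.

```python
# Sketch of the proposed stability measure: the ratio of open-test (held-out)
# accuracy to closed-test (training-set) accuracy of a text classifier.
# Dataset and classifier are illustrative choices, not the authors' setup.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

data = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target,
                                           test_size=0.3, random_state=0)
vec = TfidfVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(X_tr), y_tr)

closed_acc = accuracy_score(y_tr, clf.predict(vec.transform(X_tr)))  # closed test
open_acc   = accuracy_score(y_te, clf.predict(vec.transform(X_te)))  # open test
stability  = open_acc / closed_acc   # closer to 1.0 suggests a more stable classifier
print(f"closed={closed_acc:.3f}  open={open_acc:.3f}  stability={stability:.3f}")
```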

5.
To improve the accuracy of automatic web page classification, a general model and fusion algorithm for automatic web page classification are proposed based on information fusion theory. According to function, the model is divided into four layers: an information extraction layer, a data preprocessing layer, a feature layer, and a decision layer. In the feature layer, different classification methods are applied to the different kinds of media information on a web page, and the classification results are fed to the decision layer as well as to the other feature-layer components related to that feature-layer algorithm. The decision layer processes the feature-layer classification results and derives the final fused classification of the web page. The model and algorithm have been implemented, and experiments show that the proposed fusion model and algorithm can effectively improve the accuracy of automatic web page classification.
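As a toy illustration of the feature-layer/decision-layer idea, the sketch below trains one classifier on page body text and another on title text, then fuses their class probabilities in a simple decision layer by weighted averaging; the data, fusion weights, and classifier choices are all assumptions rather than the paper's algorithm.

```python
# Toy sketch of decision-level fusion for web page classification: one
# classifier per information type (body text vs. title text), with a decision
# layer fusing their class probabilities by weighted averaging.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

body  = ["latest football scores and match report", "stock market closes higher today"]
title = ["sports news", "finance news"]
y     = np.array([0, 1])   # 0 = sports, 1 = finance

body_clf  = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(body, y)
title_clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(title, y)

new_body, new_title = ["shares rally as the market rebounds"], ["business update"]
p_body  = body_clf.predict_proba(new_body)
p_title = title_clf.predict_proba(new_title)

fused = 0.6 * p_body + 0.4 * p_title          # decision-layer fusion rule (weights assumed)
print("fused probabilities:", fused)
print("fused class:", fused.argmax(axis=1))
```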

6.
[Purpose/Significance] Taking the offender as a new perspective and crimes committed by the same type of offender as the research object, this paper proposes an "information source-offender-case-crime problem" line of research on juvenile delinquency, in order to analyze the causes and characteristics of juvenile crime and the factors influencing the offenders in juvenile crime cases, and to reveal the patterns of juvenile delinquency, thereby providing more strategic intelligence analysis ideas for the prevention, control, and governance of juvenile delinquency in practice. [Method/Process] Using text analysis and social network analysis, crime problems of the same offender type, rather than individual cases, are taken as the object of intelligence research; information associated with the offenders is extracted from the sample cases to construct an offender social network, on which theoretical discussion and empirical research are conducted. [Result/Conclusion] The study shows that social network analysis is applicable to the study of juvenile delinquency within a certain scope. Strategic intelligence analysis of juvenile delinquency based on offender social networks helps to objectively reveal the crime patterns and characteristics of this special group of offenders and to uncover the factors influencing them, and can provide effective support for decision making in the strategic intelligence analysis of juvenile delinquency.

7.
Methods for evaluating the accuracy of text classifiers
程泽凯  林士敏 《情报学报》2004,23(5):631-636
With the rapid development of computer networks and information technology, the situation in which information is extremely abundant while knowledge is relatively scarce is worsening. Text mining is becoming a focus of researchers' attention, and text classification is the foundation and core of text mining. Building an accurate text classifier is the key to text classification. Many text classification algorithms now exist and have achieved good results in different domains. How to evaluate classifier performance more objectively is one of the directions worth studying. Drawing on the authors' practical work, this paper lists the accuracy testing and evaluation methods in common use and briefly compares and analyzes them. It concludes with some ideas for improving accuracy evaluation.
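For reference, the accuracy-oriented measures most commonly used in such evaluations (per-class precision, recall and F1, plus micro and macro averages) can be computed as in the sketch below; the true and predicted labels are invented.

```python
# Common accuracy-oriented evaluation measures for a text classifier:
# per-class precision/recall/F1 plus micro and macro averages.
# The true and predicted labels are invented for illustration.
from sklearn.metrics import classification_report, f1_score

y_true = ["sport", "sport", "politics", "economy", "politics", "economy"]
y_pred = ["sport", "politics", "politics", "economy", "politics", "sport"]

print(classification_report(y_true, y_pred, digits=3))
print("micro-F1:", f1_score(y_true, y_pred, average="micro"))
print("macro-F1:", f1_score(y_true, y_pred, average="macro"))
```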

8.
Automatically extracting semantic information from Chinese patent texts with graph-based representations
姜春涛 《图书情报工作》2015,59(21):115-122
[Purpose/Significance] This paper proposes using graph-based representations to automatically mine semantic information from Chinese patent texts, providing semantic support for content-based intelligent patent analysis. [Method/Process] Two graph-structured models are designed: (1) a keyword-based text graph model and (2) a text graph model based on dependency trees. The first graph model is defined by computing similarity relations between keywords; the second is defined by the grammatical relations extracted from sentences. In the case study, a frequent subgraph mining algorithm is applied to the constructed graph models, and text classifiers using the mined subgraphs as features are built to test the expressiveness and effectiveness of the models. [Result/Conclusion] The graph-based text classifiers were applied to patent datasets from four different technical fields and compared with classic text classifiers: using markedly fewer features, they improved classification performance by 2.1%-10.5%. This indicates that the semantic information extracted from patent texts by combining graph-based representations with graph mining techniques is effective and helpful for further patent text analysis.
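A minimal sketch of the first, keyword-based graph model: keywords become nodes and an edge links two keywords that co-occur in the same sentence, a simple stand-in for the keyword-similarity relation used in the paper; the frequent subgraph mining and classification steps are not reproduced, and the keywords and sentences are invented.

```python
# Minimal sketch of a keyword-based text graph: keywords are nodes and an edge
# links two keywords that co-occur in the same sentence (a simple stand-in for
# the keyword-similarity relation; frequent subgraph mining is not shown).
from itertools import combinations
import networkx as nx

keywords = {"battery", "electrode", "lithium", "charging", "circuit"}
sentences = [
    "a lithium battery with an improved electrode coating",
    "the charging circuit monitors battery temperature",
]

g = nx.Graph()
g.add_nodes_from(keywords)
for sent in sentences:
    present = [k for k in keywords if k in sent.split()]
    for u, v in combinations(present, 2):                  # co-occurrence edge
        w = g.get_edge_data(u, v, {"weight": 0})["weight"]
        g.add_edge(u, v, weight=w + 1)

print(sorted(g.edges(data="weight")))
```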

9.
The ability to correctly classify sentences that describe events is an important task for many natural language applications such as Question Answering (QA) and Text Summarisation. In this paper, we treat event detection as a sentence level text classification problem. Overall, we compare the performance of discriminative versus generative approaches to this task: namely, a Support Vector Machine (SVM) classifier versus a Language Modeling (LM) approach. We also investigate a rule-based method that uses handcrafted lists of ‘trigger’ terms derived from WordNet. Two datasets are used in our experiments to test each approach on six different event types, i.e., Die, Attack, Injure, Meet, Transport and Charge-Indict. Our experimental results show that the trained SVM classifier significantly outperforms the simple rule-based system and language modeling approach on both datasets: ACE (F1 66% vs. 45% and 38%, respectively) and IBC (F1 92% vs. 88% and 74%, respectively). A detailed error analysis framework for the task is also provided which separates errors into different types: semantic, inference, continuous and trigger-less.
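A minimal sketch of the discriminative setup, sentence-level event classification with an SVM over bag-of-words features; the sentences and event labels are invented, and the WordNet trigger lists and language-modeling baseline are not reproduced.

```python
# Minimal sketch of sentence-level event detection as text classification with
# an SVM over bag-of-words features.  Sentences and event labels are invented;
# the rule-based trigger lists and language-modeling baseline are not shown.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

sentences = [
    "Three soldiers died in the explosion.",          # Die
    "Rebels attacked the convoy at dawn.",            # Attack
    "The two leaders met in Geneva on Tuesday.",      # Meet
    "The suspect was charged with fraud.",            # Charge-Indict
]
labels = ["Die", "Attack", "Meet", "Charge-Indict"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(sentences, labels)
print(clf.predict(["Dozens were killed when gunmen attacked the village."]))
```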

10.
张琳 《图书与情报》2006,(5):89-92,97
Computer crime has seriously endangered information system security, and governments have recognized the importance of combating and punishing computer crime and of protecting information systems. The United States brought information system security onto the track of the rule of law early on and has accumulated rich experience in fighting computer crime. By examining the historical evolution and latest developments of US computer crime legislation, this article summarizes the patterns in the legalization of information system security, with a view to providing a reference for the construction of China's information law system.

11.
Taking as its objects of analysis the reporting of the series of school attacks and of the hostage incident in the Philippines, both of which drew public criticism, this article examines the reporting of criminal cases that are still at the investigation stage. From the perspectives of constitutional law, administrative law, and criminal law, it analyzes the legitimacy and boundaries of crime news reporting, and clarifies the relationships between crime reporting and the public's right to know, the exercise of investigative power, the protection of criminal suspects' human rights, and the protection of victims' rights, so as to promote a balance and coordination between crime news reporting and values such as public safety, the maintenance of public order, the safeguarding of fair trials, and citizens' personal dignity. For crime news reporting to balance these values and serve the public interest, the interaction between media law and media self-regulation is a practical path.

12.
In 2006-2007, foreign research on the theoretical foundations of information retrieval concentrated mainly on decision theory, latent semantic indexing, and retrieval evaluation theory. Research on the basic principles of information retrieval focused on classification in information retrieval, retrieval models, retrieval types, and retrieval methods. Work on classification in information retrieval centered on classifiers, feature selection, and domain-related terms. Research on retrieval types mainly covered focused retrieval, image retrieval, video retrieval, collaborative filtering, machine transliteration, and networks within wireless networks. Research on retrieval methods mainly covered contextual retrieval, integrated retrieval, question answering retrieval, and user query processing.

13.
张倩  刘怀亮 《图书情报工作》2013,57(21):126-132
To address the loss of textual structure information caused by building short-text classifiers on the vector space model, and the annotation bottleneck created by large numbers of unlabeled samples, a graph-based semi-supervised learning classification method is proposed. The method preserves the structural and semantic relations of short texts while making full use of unlabeled samples, thereby improving classifier performance. By introducing the idea of semi-supervised learning, a large number of unlabeled samples are combined with a small number of labeled samples for graph-based self-training, and the training set is iteratively expanded to build the final short-text classifier. Comparative experiments show that the method achieves good classification results.
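A sketch of graph-based semi-supervised classification using scikit-learn's LabelSpreading, a related graph method rather than the authors' self-training algorithm; unlabeled samples carry the label -1, and the short texts and labels are invented.

```python
# Sketch of graph-based semi-supervised classification with scikit-learn's
# LabelSpreading (a related graph method, not the authors' self-training
# algorithm).  Unlabeled short texts carry the label -1; data are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.semi_supervised import LabelSpreading

texts = [
    "great phone with a long battery life",   # labeled: positive (1)
    "terrible screen, broke after a week",    # labeled: negative (0)
    "battery lasts for days, love it",        # unlabeled
    "screen cracked, very disappointed",      # unlabeled
]
labels = [1, 0, -1, -1]                       # -1 marks unlabeled samples

X = TfidfVectorizer().fit_transform(texts).toarray()
model = LabelSpreading(kernel="knn", n_neighbors=2).fit(X, labels)
print(model.transduction_)                    # inferred labels for all samples
```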

14.
This paper reports on problems and conflicts encountered when using decision support systems (DSS) in political contexts. Based on a literature study and two case studies, we describe problems encountered in relation not only to the DSS itself but also to the political decision process. The case studies were carried out in two cities in Sweden that, at different times but in similar situations, used DSS in order to reach a decision in complicated and contested matters. In both cases we have previously found that the method and IT tool used for decision analysis were appreciated by most participants, but the inherent rationality of the DSS was in conflict with how participants usually make decisions as well as with the political process. The assumption was that a strict and open method would lay the grounds for clear decisions, but the results of the decision process were in neither case implemented. In one case the result of the decision analysis was that no clear decision was made; in the other, the lowest-ranked alternative was implemented. Furthermore, in neither city was the method ever used again. We therefore ask: what are the challenges and limitations of using DSS in political contexts? Our study shows that the challenges relate to selecting and using criteria; eliciting weights for criteria (a high level of subjectivity); understanding the amount of facts available in the system; time constraints; and lack of impact on the final decision. This study contributes to both research and practice by increasing the understanding of what challenges are experienced in DSS use, since the findings can be used as a framework of challenges that should be addressed, in the design of systems as well as in methods for their use. The study also contributes to understanding the role of politicians in decision-making and the consequences for the use of DSS. Further, the literature study showed that there are overall very few studies on the actual use of DSS in a political context, and we therefore conclude by encouraging more studies reporting actual use.

15.
The effective representation of the relationship between documents and their contents is crucial for increasing the classification performance of text documents. Term weighting is a preprocessing step that aims to represent text documents better in vector space by assigning proper weights to terms. Since the calculation of appropriate weight values directly affects the performance of text classification, term weighting remains one of the important sub-research areas of text classification. In this study, we propose a novel term weighting (MONO) strategy which can use the non-occurrence information of terms more effectively than existing term weighting approaches in the literature. The proposed weighting strategy also performs intra-class document scaling to better represent the distinguishing capability of terms that occur in different numbers of documents within classes of the same size. Based on the MONO weighting strategy, two novel supervised term weighting schemes, TF-MONO and SRTF-MONO, are proposed for text classification. The proposed schemes were tested with two different classifiers, SVM and KNN, on three datasets: Reuters-21578, 20-Newsgroups, and WebKB. Their classification performance was compared with five existing term weighting schemes: TF-IDF, TF-IDF-ICF, TF-RF, TF-IDF-ICSDF, and TF-IGM. The results obtained from the seven schemes show that SRTF-MONO generally outperformed the other schemes on all three datasets. Moreover, TF-MONO showed promising Micro-F1 and Macro-F1 results compared to the other five benchmark term weighting methods, especially on the Reuters-21578 and 20-Newsgroups datasets.
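The MONO scheme itself is not specified in the abstract, but one of the benchmark supervised schemes it is compared against, TF-RF (term frequency times relevance frequency), is easy to sketch; the counts below are invented and this is not the MONO weighting.

```python
# Sketch of one benchmark supervised weighting scheme mentioned above, TF-RF
# (term frequency x relevance frequency); the MONO scheme itself is not
# reproduced.  rf(t) = log2(2 + a / max(1, c)), where a and c are the numbers
# of positive- and negative-class documents containing term t.  Counts invented.
import math

def tf_rf(tf, pos_docs_with_term, neg_docs_with_term):
    rf = math.log2(2 + pos_docs_with_term / max(1, neg_docs_with_term))
    return tf * rf

# term appears 3 times in the document, in 40 positive and 5 negative documents
print(tf_rf(tf=3, pos_docs_with_term=40, neg_docs_with_term=5))
```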

16.
Evaluating the predictive performance of classifiers with AUC
杨波  程泽凯  秦锋 《情报学报》2007,(2):275-279
Accuracy has long been used as the main criterion for evaluating the predictive performance of classifiers, but it has many shortcomings. This paper compares accuracy with AUC (the area under the Receiver Operating Characteristic curve) theoretically, and then uses AUC and accuracy, respectively, to evaluate three classification learning algorithms on fifteen two-class datasets. The combined theoretical and experimental results show that AUC is not only superior to accuracy but should replace it as a better evaluation measure of classifier performance. Re-evaluating the three classification learning algorithms with AUC further confirms that the NaiveBayes and TAN-CMI classification algorithms, which are based on Bayes' theorem, outperform the decision tree algorithm C4.5.
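Both measures are straightforward to compute from a classifier's scores, as in the sketch below; the labels and scores are invented and only illustrate why the two measures can disagree.

```python
# Computing accuracy and AUC (area under the ROC curve) for the same set of
# predictions.  The true labels and predicted scores below are invented and
# only illustrate that the two measures capture different things.
from sklearn.metrics import accuracy_score, roc_auc_score

y_true  = [0, 0, 0, 0, 1, 1]
y_score = [0.1, 0.2, 0.3, 0.8, 0.6, 0.9]          # classifier confidence for class 1
y_pred  = [1 if s >= 0.5 else 0 for s in y_score]

print("accuracy:", accuracy_score(y_true, y_pred))   # threshold-dependent
print("AUC:     ", roc_auc_score(y_true, y_score))   # ranking quality, threshold-free
```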

17.
In this paper, we quantify the existence of concept drift in patent data and examine its impact on classification accuracy. When developing algorithms for classifying incoming patent applications with respect to their category in the International Patent Classification (IPC) hierarchy, a temporal mismatch between training data and incoming documents may deteriorate classification results. We measure the effect of this temporal mismatch and aim to tackle it by optimal selection of training data. To illustrate the various aspects of concept drift at the IPC class level, we first perform quantitative analyses on a subset of English abstracts extracted from patent documents in the CLEF-IP 2011 patent corpus. In a series of classification experiments, we then show the impact of temporal variation on the classification accuracy of incoming applications. We further examine which training data selection method, combined with our classification approach, yields the best classifier, and how combining different text representations may improve patent classification. We found that using the most recent data is a better strategy than static sampling, but that extending a set of recent training data with older documents does not harm classification performance. In addition, we confirm previous findings that using 2-skip-2-grams on top of the bag of unigrams structurally improves patent classification. Our work is an important contribution to the research into concept drift for text classification and to the practice of classifying incoming patent applications.
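The "2-skip-2-grams on top of the bag of unigrams" representation can be sketched as follows; the tokenization is a naive whitespace split and the example phrase is invented.

```python
# Sketch of the "2-skip-2-grams on top of bag of unigrams" representation:
# token pairs are kept if at most k intervening tokens are skipped, and the
# unigrams are retained alongside them.  Tokenization here is a naive split.
def skip_bigrams(tokens, k=2):
    """All ordered token pairs with at most k tokens skipped between them."""
    pairs = []
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + 2 + k, len(tokens))):
            pairs.append((tokens[i], tokens[j]))
    return pairs

tokens = "method for wireless data transmission".split()
features = tokens + skip_bigrams(tokens, k=2)   # unigrams + 2-skip-2-grams
print(features)
```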

18.
We augment naive Bayes models with statistical n-gram language models to address shortcomings of the standard naive Bayes text classifier. The result is a generalized naive Bayes classifier which allows for a local Markov dependence among observations; a model we refer to as the Chain Augmented Naive Bayes (CAN) classifier. CAN models have two advantages over standard naive Bayes classifiers. First, they relax some of the independence assumptions of naive Bayes—allowing a local Markov chain dependence in the observed variables—while still permitting efficient inference and learning. Second, they permit straightforward application of sophisticated smoothing techniques from statistical language modeling, which allows one to obtain better parameter estimates than the standard Laplace smoothing used in naive Bayes classification. In this paper, we introduce CAN models and apply them to various text classification problems. To demonstrate the language-independent and task-independent nature of these classifiers, we present experimental results on several text classification problems—authorship attribution, text genre classification, and topic detection—in several languages—Greek, English, Japanese and Chinese. We then systematically study the key factors in the CAN model that can influence the classification performance, and analyze the strengths and weaknesses of the model.
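A heavily simplified sketch of the underlying idea, scoring a document under a per-class character-bigram (Markov chain) language model and picking the most likely class; add-one smoothing stands in for the more sophisticated smoothing techniques discussed in the paper, and the training texts are invented.

```python
# Heavily simplified sketch of classifying by per-class character-bigram
# language models (scoring a document under each class's Markov-chain model).
# Add-one smoothing stands in for the more sophisticated smoothing discussed
# above; the training texts are invented.
import math
from collections import Counter

def train_bigram_lm(texts):
    bigrams, unigrams = Counter(), Counter()
    for t in texts:
        for a, b in zip(t, t[1:]):
            bigrams[(a, b)] += 1
            unigrams[a] += 1
    return bigrams, unigrams

def log_prob(text, model, vocab_size=256):
    bigrams, unigrams = model
    lp = 0.0
    for a, b in zip(text, text[1:]):
        lp += math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab_size))  # add-one
    return lp

models = {
    "en": train_bigram_lm(["the cat sat on the mat", "this is plain english text"]),
    "el": train_bigram_lm(["αυτο ειναι ελληνικο κειμενο", "η γατα καθεται"]),
}
doc = "the dog sat on the rug"
print(max(models, key=lambda c: log_prob(doc, models[c])))  # most likely class
```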

19.
Multi-level web text classification
凌云  刘军  王勋 《情报学报》2005,24(6):684-689
Traditional text classification is mostly based on the vector space model with a flat category system, ignoring the hierarchical relations among categories. Based on LSA theory, a multi-level web text classification method is proposed. When building class models, a class model is built, level by level from the bottom of the category hierarchy tree upward, for each group of categories sharing the same parent node; when classifying, documents are classified top-down in the latent semantic (LS) space using the corresponding class models. This method avoids the difficulty of performing singular value decomposition on very high-dimensional matrices in the LSA model, while capturing the semantic relations among terms in web texts and taking into account how terms appear on web pages. Experiments show that the multi-level web text classification method outperforms classification methods based on a flat category system in both recall and precision.
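A toy sketch of the multi-level idea: one small LSA-plus-classifier model per parent node in the category tree, applied top-down; the two-level hierarchy, documents, and labels are invented, and the per-node SVD dimensionality is an arbitrary assumption.

```python
# Toy sketch of multi-level classification in a latent semantic (LSA) space:
# one small LSA + classifier model per parent node in the category tree,
# applied top-down.  Hierarchy, documents, and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def lsa_clf():
    return make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2),
                         LogisticRegression())

docs = ["football match ends in a draw", "tennis final goes to five sets",
        "stock prices fall sharply", "central bank raises interest rates"]
top    = ["sport", "sport", "finance", "finance"]          # parent categories
leaves = ["football", "tennis", "markets", "banking"]      # child categories

root_model = lsa_clf().fit(docs, top)                      # classify among parents
child_models = {
    parent: lsa_clf().fit([d for d, t in zip(docs, top) if t == parent],
                          [l for l, t in zip(leaves, top) if t == parent])
    for parent in set(top)
}

new_doc = ["bank announces new interest rate policy"]
parent = root_model.predict(new_doc)[0]
print(parent, "->", child_models[parent].predict(new_doc)[0])
```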

20.
This paper reports on a research study aiming to identify the user requirements of digital scholarship services (DSS) in university libraries. Due to the exploratory nature of this study, a case study approach was adopted as the overarching methodology, with Wuhan University Library (one of the top university libraries in China) as the case. Specifically, a mixed qualitative-quantitative approach was employed for the case analysis. A qualitative study was performed to identify and qualify users' DSS requirements. The analysis of qualitative interview data pointed to 17 DSS requirements under five themes: formulating research ideas, locating research partners, writing research proposals, conducting research, and publishing results. Subsequently, a quantitative Kano model analysis was undertaken to validate, verify and prioritise the identified DSS requirements. Based on measuring each requirement's priority, the DSS requirements were classified into four types: must-be, one-dimensional, attractive, and indifferent. Finally, a set of strategic suggestions for DSS development were devised. This paper is of interest to library and information science researchers, as well as library managers and professionals. Although the data were collected from a university library in China, the research findings provide useful insights and implications that can be shared across international borders.
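The Kano classification step can be illustrated with the standard Kano evaluation table, which maps a respondent's answers to the functional ("if the service is offered") and dysfunctional ("if it is not") questions onto a category; the table below is the common textbook version, not necessarily the exact instrument used in the study.

```python
# Illustration of the Kano classification step: the standard Kano evaluation
# table maps a respondent's answers to the functional ("feature present") and
# dysfunctional ("feature absent") questions to a requirement category.  This
# is the common textbook table, not necessarily the study's exact instrument.
ANSWERS = ["like", "must-be", "neutral", "live-with", "dislike"]
KANO_TABLE = {                       # functional answer -> row over dysfunctional answers
    "like":      ["Q", "A", "A", "A", "O"],
    "must-be":   ["R", "I", "I", "I", "M"],
    "neutral":   ["R", "I", "I", "I", "M"],
    "live-with": ["R", "I", "I", "I", "M"],
    "dislike":   ["R", "R", "R", "R", "Q"],
}  # M=must-be, O=one-dimensional, A=attractive, I=indifferent, R=reverse, Q=questionable

def kano_category(functional, dysfunctional):
    return KANO_TABLE[functional][ANSWERS.index(dysfunctional)]

# e.g. "like it if the library helps locate research partners, dislike it if not"
print(kano_category("like", "dislike"))   # -> "O" (one-dimensional)
```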

