Similar Documents
A total of 19 similar documents were found (search time: 250 ms).
1.
This paper designs an expression parser based on the principle of compiling infix expressions into postfix expressions. The parser can evaluate mixed expressions over variables, integer data, and floating-point data, covering addition, subtraction, multiplication, division, and function calls; variable values can be entered directly or retrieved from a database.
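As a rough illustration of the infix-to-postfix principle the paper builds on, here is a minimal shunting-yard sketch in Python; the operator table and the pre-tokenized input are simplifying assumptions, not the paper's actual design.

    # Minimal shunting-yard sketch: compile an infix token list to postfix (RPN).
    PREC = {'+': 1, '-': 1, '*': 2, '/': 2}

    def to_postfix(tokens):
        out, ops = [], []
        for tok in tokens:
            if tok == '(':
                ops.append(tok)
            elif tok == ')':
                while ops[-1] != '(':      # pop until the matching '('
                    out.append(ops.pop())
                ops.pop()                   # discard the '(' itself
            elif tok in PREC:
                while ops and ops[-1] != '(' and PREC[ops[-1]] >= PREC[tok]:
                    out.append(ops.pop())
                ops.append(tok)
            else:                           # operand: number or variable name
                out.append(tok)
        while ops:
            out.append(ops.pop())
        return out

    print(to_postfix(['a', '+', '2.5', '*', '(', 'b', '-', '3', ')']))
    # -> ['a', '2.5', 'b', '3', '-', '*', '+']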

2.
This paper discusses the implementation algorithm of a SAX parser and a concrete program realizing it.
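For readers unfamiliar with the event-driven style such a parser exposes, a minimal sketch using Python's built-in xml.sax module follows; the document and element names are invented for illustration.

    # Minimal SAX sketch: callbacks fire as the parser streams the document.
    import xml.sax

    class BookHandler(xml.sax.ContentHandler):
        def startElement(self, name, attrs):
            if name == 'book':                  # hypothetical element name
                print('book id =', attrs.get('id'))

        def characters(self, content):
            if content.strip():
                print('text:', content.strip())

    xml.sax.parseString(b'<books><book id="1">SAX in action</book></books>',
                        BookHandler())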

3.
This paper studies basic methods for implementing the SOAP protocol with an XML parser on the J2ME platform. It introduces the J2ME platform and the SOAP protocol, focusing on the XML development work, and proposes a way to implement SOAP on J2ME using an XML parser; the approach has some value for wider adoption.
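Real J2ME code would use a lightweight pull parser such as kXML; purely as a language-neutral illustration of the parsing step, the sketch below pulls an operation result out of a SOAP 1.1 envelope with Python's standard ElementTree. The envelope content is invented.

    # Sketch: extract the operation result from a SOAP 1.1 envelope.
    import xml.etree.ElementTree as ET

    ENV = '{http://schemas.xmlsoap.org/soap/envelope/}'
    soap = '''<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
      <s:Body><AddResponse><result>42</result></AddResponse></s:Body>
    </s:Envelope>'''

    body = ET.fromstring(soap).find(ENV + 'Body')
    for call in body:                   # each child is one operation response
        print(call.tag, '->', call.find('result').text)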

4.
Heterogeneity among information resources can be addressed through information interoperability. This paper proposes OHIIS, a conceptual model for ontology-based interoperation over heterogeneous information. The search algorithm is driven by an ontology processed by an XML parser, the Web representation of the ontology is implemented with XML Schema, and the communication module uses standard internetworking protocols. An interoperation requester sends information requests to a provider through a Web browser or Web application, and the provider answers them through a Web Service, thereby achieving interoperation over heterogeneous information.

5.
何云东. 《中国科技信息》, 2009, (15): 108, 110
This paper uses DLL plug-in technology to make the function library of an expression calculator extensible. Computation functions are compiled into plug-in dynamic link library (DLL) files, which the calculator then loads through dynamic invocation. This avoids modifying the calculator's main program and makes the calculator more flexible.
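The paper's Delphi calculator is not reproduced here, but the dynamic-loading idea can be sketched with Python's ctypes; the DLL path and the exported function myfunc are hypothetical.

    # Sketch: load a plug-in DLL at run time and call an exported function.
    # Assumes a hypothetical plugins/mathfns.dll exporting: double myfunc(double).
    import ctypes

    lib = ctypes.CDLL('./plugins/mathfns.dll')  # LoadLibrary under the hood
    lib.myfunc.argtypes = [ctypes.c_double]     # declare the C signature
    lib.myfunc.restype = ctypes.c_double

    print(lib.myfunc(2.0))  # the calculator would dispatch to this by name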

6.
Research and Implementation of Parsing and Evaluating Complex Expressions   Cited in total: 2 (self-citations: 1, citations by others: 1)
Using Pascal under Delphi 7, this paper implements a method for parsing and evaluating complex expressions. The input expression is compiled, through lexical analysis and stack operations, into a top-down expression tree; after verifying that the data sources of the operands in the expression are correct, the tree is traversed to compute the result. Experiments show that the method can handle expressions containing constants, variables, arrays, and functions, and is of practical use.
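A minimal sketch of the evaluate-by-traversal step described above; the tuple-based node layout is an assumption, and the paper's Delphi implementation additionally handles arrays and data-source checks.

    # Sketch: recursive post-order evaluation of an expression tree.
    OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}

    def evaluate(node, env):
        if isinstance(node, tuple):         # interior node: (op, left, right)
            op, left, right = node
            return OPS[op](evaluate(left, env), evaluate(right, env))
        if isinstance(node, str):           # leaf: variable looked up in env
            return env[node]
        return node                         # leaf: numeric constant

    tree = ('*', ('+', 'x', 2), 5)          # (x + 2) * 5
    print(evaluate(tree, {'x': 3}))         # -> 25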

7.
To fill the current gap in topic-map-based automatic classification of Chinese text, this paper builds a prototype Chinese classification system based on topic maps, drawing on Ontopia's techniques for automatically classifying English and Norwegian and taking the particular characteristics of Chinese into account. The system extracts text using POI, PDF, and SAX as document parsers, analyzes the text with the Pangu word segmenter (盘古分词), and is implemented mainly in Java, achieving topic-map-based automatic classification of Chinese.

8.
This paper introduces an important topic in data structures: deriving the postfix form (reverse Polish notation) of an expression. Three approaches are discussed, namely stack-based conversion, identifier trees, and bracket conversion, along with how each is implemented and where each is applicable.
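Complementing the conversion methods the paper surveys, here is a minimal stack-based evaluator for the resulting postfix form, in Python and limited to binary arithmetic operators.

    # Sketch: evaluate a postfix (RPN) token list with a single operand stack.
    import operator

    OPS = {'+': operator.add, '-': operator.sub,
           '*': operator.mul, '/': operator.truediv}

    def eval_postfix(tokens):
        stack = []
        for tok in tokens:
            if tok in OPS:
                b, a = stack.pop(), stack.pop()  # right operand is on top
                stack.append(OPS[tok](a, b))
            else:
                stack.append(float(tok))
        return stack.pop()

    print(eval_postfix(['3', '4', '2', '*', '+']))  # 3 + 4*2 -> 11.0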

9.
To support real-time inspection of the network data produced by airborne networked data acquisition units in flight testing, a tool was developed in C++ in the Visual Studio environment. It uses the MSXML parser bundled with Windows to parse and store the ground configuration file of the dedicated acquisition unit, and combines WinPcap packet capture with the parsed XML to capture, analyze, and filter network packets and display parameters in real time, providing technical support for building, verifying, and fault-diagnosing the test system.
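The original tool is C++ with MSXML and WinPcap; as a rough cross-language sketch of the idea (parse the configuration, then decode captured payload bytes at the configured offsets), the Python below uses an invented configuration layout and a synthetic payload in place of a captured packet.

    # Sketch: read parameter offsets from an XML config, decode a payload.
    import struct
    import xml.etree.ElementTree as ET

    config = '''<params>
      <param name="altitude" offset="0" fmt="&gt;f"/>
      <param name="speed"    offset="4" fmt="&gt;f"/>
    </params>'''

    layout = [(p.get('name'), int(p.get('offset')), p.get('fmt'))
              for p in ET.fromstring(config)]

    payload = struct.pack('>ff', 9800.0, 250.5)  # stands in for a packet
    for name, off, fmt in layout:
        value, = struct.unpack_from(fmt, payload, off)
        print(name, '=', value)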

10.
The latest generation of artificial-intelligence champions may be nowhere near any so-called technological tipping point, and they are still far from simulating a human. They simply have more refined language parsers and better-written language scripts.

11.
This paper describes a computer program which converts the text of user input and system responses from an on-line search system into fixed format records which describe the interaction. It also outlines the syntax of the query language and the format of the output record produced by the parser. The study discusses problems in constructing the parser, the logic of the parser and its performance characteristics, as well as recommendations for improving the process of logging on-line searches.
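A toy illustration of the kind of conversion the paper describes, turning a transcript of user and system turns into fixed-format records; the transcript shape and the field widths are invented.

    # Sketch: convert an interaction transcript into fixed-width records.
    transcript = [
        ('USER',   'find parser AND xml'),
        ('SYSTEM', '12 hits'),
        ('USER',   'show 1-3'),
    ]

    for seq, (role, text) in enumerate(transcript, start=1):
        # 4-char sequence number, 8-char role field, free-text remainder
        print(f'{seq:04d}{role:<8}{text}')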

12.
This article proposes a syntactic parsing strategy based on a dependency grammar containing formal rules and a compression technique that reduces the complexity of those rules. Compression parsing is mainly driven by the ‘single-head’ constraint of Dependency Grammar, and can be seen as an alternative method to the well-known constructive strategy. The compression algorithm simplifies the input sentence by progressively removing from it the dependent tokens as soon as binary syntactic dependencies are recognized. This strategy is thus similar to that used in deterministic dependency parsing. A compression parser was implemented and released under General Public License, as well as a cross-lingual grammar with Universal Dependencies, containing only broad-coverage rules applied to Romance languages. The system is an almost delexicalized parser which does not need training data to analyze Romance languages. The rule-based cross-lingual parser was submitted to CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. The performance of our system was compared to the other supervised systems participating in the competition, paying special attention to the parsing of different treebanks of the same language. We also trained a supervised delexicalized parser for Romance languages in order to compare it to our rule-based system. The results show that the performance of our cross-lingual method does not change across related languages and across different treebanks, while most supervised methods turn out to be very dependent on the text domain used to train the system.
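A toy rendering of the compression strategy: dependents are deleted as soon as a binary dependency between adjacent tokens is recognized. The two POS-based rules are invented stand-ins for the grammar's broad-coverage rules.

    # Sketch: compression parsing, removing dependents as edges are found.
    # A token is (form, tag); a rule is (head_tag, dep_tag, side_of_dep).
    RULES = [('NOUN', 'DET', 'left'), ('VERB', 'NOUN', 'left')]  # invented

    def compress(tokens):
        deps = []
        changed = True
        while changed:
            changed = False
            for i in range(len(tokens) - 1):
                left, right = tokens[i], tokens[i + 1]
                if (right[1], left[1], 'left') in RULES:  # right heads left
                    deps.append((right[0], left[0]))
                    del tokens[i]                         # dependent removed
                    changed = True
                    break
        return deps, tokens

    print(compress([('the', 'DET'), ('dog', 'NOUN'), ('barks', 'VERB')]))
    # -> ([('dog', 'the'), ('barks', 'dog')], [('barks', 'VERB')])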

13.
In this paper, we propose a new language model, namely, a dependency structure language model, for topic detection and tracking (TDT) to compensate for weakness of unigram and bigram language models. The dependency structure language model is based on the Chow expansion theory and the dependency parse tree generated by a linguistic parser. So, long-distance dependencies can be naturally captured by the dependency structure language model. We carried out extensive experiments to verify the proposed model on topic tracking and link detection in TDT. In both cases, the dependency structure language models perform better than strong baseline approaches.
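A schematic of the scoring idea, assuming the usual Chow-expansion form (a unigram product corrected by one term per dependency edge); the probabilities below are toy numbers rather than trained values.

    # Sketch: dependency-structure LM score = unigram product times pairwise
    # correction terms P(h, d) / (P(h) * P(d)) over dependency-tree edges.
    import math

    P1 = {'stocks': 0.02, 'fell': 0.01, 'sharply': 0.005}         # unigrams
    P2 = {('fell', 'stocks'): 0.004, ('fell', 'sharply'): 0.002}  # pairs

    def log_score(words, edges):
        s = sum(math.log(P1[w]) for w in words)
        for h, d in edges:
            s += math.log(P2[(h, d)] / (P1[h] * P1[d]))
        return s

    print(log_score(['stocks', 'fell', 'sharply'],
                    [('fell', 'stocks'), ('fell', 'sharply')]))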

14.
马坤. 《现代情报》, 2012, 32(12): 44-49
To improve the efficiency and accuracy of entering bibliographic records and reduce manual review, this paper proposes an online method for acquiring document metadata based on DOIs and article databases. A DOI resolution proxy is designed to integrate the service interfaces of heterogeneous DOI registration agencies, and the RoadRunner algorithm is used to extract document metadata from the detail pages of article databases. An intelligent online metadata entry system is then implemented to verify the effectiveness and practicality of the method.
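For illustration only, metadata for a known DOI can also be fetched from the public doi.org resolver via content negotiation; this is a different route from the paper's resolution proxy and RoadRunner extraction, and it requires network access.

    # Sketch: fetch CSL-JSON metadata for a DOI via doi.org content negotiation.
    import json
    import urllib.request

    doi = '10.1037/0003-066X.59.1.29'   # example DOI
    req = urllib.request.Request(
        'https://doi.org/' + doi,
        headers={'Accept': 'application/vnd.citationstyles.csl+json'})
    with urllib.request.urlopen(req) as resp:
        meta = json.load(resp)
    print(meta.get('title'), '|', meta.get('publisher'))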

15.
In this paper, we propose a novel approach to automatic generation of summary templates from given collections of summary articles. We first develop an entity-aspect LDA model to simultaneously cluster both sentences and words into aspects. We then apply frequent subtree pattern mining on the dependency parse trees of the clustered and labeled sentences to discover sentence patterns that well represent the aspects. Finally, we use the generated templates to construct summaries for new entities. Key features of our method include automatic grouping of semantically related sentence patterns and automatic identification of template slots that need to be filled in. Also, we implement a new sentence compression algorithm which uses dependency trees instead of parse trees. We apply our method on five Wikipedia entity categories and compare our method with three baseline methods. Both quantitative evaluation based on human judgment and qualitative comparison demonstrate the effectiveness and advantages of our method.
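A toy version of compression over a dependency tree: keep the words on the paths from the tokens that must be retained up to the root, and drop everything else. The tree and the required set are invented, and this is a stand-in for, not a reproduction of, the paper's algorithm.

    # Sketch: compress a sentence by keeping only the dependency-tree
    # ancestors of the tokens that must be retained.
    head = {'dog': 'barked', 'The': 'dog', 'loudly': 'barked',
            'very': 'loudly', 'barked': None}              # child -> head
    order = ['The', 'dog', 'barked', 'very', 'loudly']     # surface order

    def compress(required):
        keep = set()
        for w in required:
            while w is not None:      # walk up to the root, keeping the path
                keep.add(w)
                w = head[w]
        return [w for w in order if w in keep]

    print(compress({'dog'}))          # -> ['dog', 'barked']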

16.
Using data mining to obtain the attributes and features of tourism texts has become an important area of tourism research. Studying the topics of tourism microblog posts helps tourism institutions shape their image and spread their content, and is of some value for their supply of microblog information and for improving the tourism image. This study first reviews the use of content analysis in tourism research and related work on tourism microblogs at home and abroad. It then takes the text of the China National Tourism Administration's Sina Weibo account as the research object and extracts and filters high-frequency feature words with the Rost word parser word-frequency software. Next, it applies content analysis, combined with social network analysis and co-word analysis, to obtain the social-network relations among the high-frequency words. Finally, it examines the attributes of the high-frequency words and the patterns in their relations and, on the basis of the content analysis, distills the account's posts into four topics: cultural attractions, natural attractions, tourist travel, and tourism administrative information.
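The high-frequency and co-word steps can be illustrated in a few lines of Python; toy pre-segmented posts stand in for the Weibo corpus and its word segmentation.

    # Sketch: high-frequency terms plus co-word (co-occurrence) counts.
    from collections import Counter
    from itertools import combinations

    posts = [['scenery', 'mountain', 'travel'],
             ['travel', 'policy', 'notice'],
             ['mountain', 'scenery', 'lake']]   # pre-segmented toy posts

    freq = Counter(w for p in posts for w in p)
    cooc = Counter(frozenset(pair) for p in posts
                   for pair in combinations(sorted(set(p)), 2))

    print(freq.most_common(3))
    print(cooc.most_common(2))   # strongest co-word links between terms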

17.
This paper describes a state-of-the-art supervised, knowledge-intensive approach to the automatic identification of semantic relations between nominals in English sentences. The system employs a combination of rich and varied sets of new and previously used lexical, syntactic, and semantic features extracted from various knowledge sources such as WordNet and additional annotated corpora. The system ranked first at the third most popular SemEval 2007 Task – Classification of Semantic Relations between Nominals and achieved an F-measure of 72.4% and an accuracy of 76.3%. We also show that some semantic relations are better suited for WordNet-based models than other relations. Additionally, we make a distinction between out-of-context (regular) examples and those that require sentence context for relation identification and show that contextual data are important for the performance of a noun–noun semantic parser. Finally, learning curves show that the task difficulty varies across relations and that our learned WordNet-based representation is highly accurate so the performance results suggest the upper bound on what this representation can do.
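Without WordNet at hand, the flavor of the lexical and syntactic features can still be sketched; the feature set below is invented and far simpler than the paper's.

    # Sketch: toy lexico-syntactic features for a nominal pair (e1, e2).
    def features(sentence, e1, e2):
        toks = sentence.split()
        i, j = toks.index(e1), toks.index(e2)
        return {
            'between': ' '.join(toks[min(i, j) + 1:max(i, j)]),  # link words
            'e1_suffix': e1[-3:],    # crude morphological cue
            'e2_suffix': e2[-3:],
            'distance': abs(i - j),
        }

    print(features('the knife cut the bread', 'knife', 'bread'))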

18.
This paper presents a model that incorporates contemporary theories of tense and aspect and develops a new framework for extracting temporal relations between two sentence-internal events, given their tense, aspect, and a temporal connecting word relating the two events. A linguistic constraint on event combination has been implemented to detect incorrect parser analyses and potentially apply syntactic reanalysis or semantic reinterpretation—in preparation for subsequent processing for multi-document summarization. An important contribution of this work is the extension of two different existing theoretical frameworks—Hornstein’s 1990 theory of tense analysis and Allen’s 1984 theory on event ordering—and the combination of both into a unified system for representing and constraining combinations of different event types (points, closed intervals, and open-ended intervals). We show that our theoretical results have been verified in a large-scale corpus analysis. The framework is designed to inform a temporally motivated sentence-ordering module in an implemented multi-document summarization system.
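The interval side of the combined framework can be illustrated by classifying the Allen relation that holds between two closed intervals; this covers the closed-interval case only and is not the paper's implementation.

    # Sketch: classify the Allen relation between two closed intervals.
    def allen(a, b):
        (s1, e1), (s2, e2) = a, b
        if e1 < s2:
            return 'before'
        if e2 < s1:
            return 'after'
        if (s1, e1) == (s2, e2):
            return 'equal'
        if e1 == s2:
            return 'meets'
        if s1 == s2:
            return 'starts' if e1 < e2 else 'started-by'
        if e1 == e2:
            return 'finishes' if s1 > s2 else 'finished-by'
        if s2 < s1 and e1 < e2:
            return 'during'
        if s1 < s2 and e2 < e1:
            return 'contains'
        return 'overlaps'

    print(allen((1, 3), (2, 5)))   # -> 'overlaps'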

19.
Bibliographic collections in traditional libraries often compile records from distributed sources where variable criteria have been applied to the normalization of the data. Furthermore, the source records often follow classical standards, such as MARC21, where a strict normalization of author names is not enforced. The identification of equivalent records in large catalogues is therefore required, for example, when migrating the data to new repositories which apply modern specifications for cataloguing, such as the FRBR and RDA standards. An open-source tool has been implemented to assist authority control in bibliographic catalogues when external features (such as the citations found in scientific articles) are not available for the disambiguation of creator names. This tool is based on similarity measures between the variants of author names combined with a parser which interprets the dates and periods associated with the creator. An efficient data structure (the unigram frequency vector trie) has been used to accelerate the identification of variants. The algorithms employed and the attribute grammar are described in detail and their implementation is distributed as an open-source resource to allow for easier uptake.
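The similarity step can be sketched with character-unigram frequency vectors compared by cosine; the tool itself additionally parses dates with an attribute grammar and indexes the vectors in a trie for speed.

    # Sketch: compare name variants via character-unigram frequency vectors.
    import math
    from collections import Counter

    def unigram_vec(name):
        return Counter(c for c in name.lower() if c.isalpha())

    def cosine(u, v):
        dot = sum(u[k] * v[k] for k in u)
        return dot / (math.sqrt(sum(x * x for x in u.values())) *
                      math.sqrt(sum(x * x for x in v.values())))

    names = ('Cervantes Saavedra, Miguel de', 'Miguel de Cervantes Saavedra')
    a, b = (unigram_vec(n) for n in names)
    print(cosine(a, b))   # 1.0: same letters, so likely variants of one name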
