Full-text access: subscription 179 articles; free 4; free (domestic) 2
By subject: Education 65; Scientific research 64; Sports 14; General 11; Information & communication 31
By year: 2023: 5; 2022: 1; 2021: 1; 2020: 5; 2019: 9; 2018: 3; 2017: 3; 2016: 1; 2015: 5; 2014: 8; 2013: 11; 2012: 11; 2011: 15; 2010: 6; 2009: 13; 2008: 18; 2007: 17; 2006: 19; 2005: 13; 2004: 5; 2003: 4; 2002: 4; 2001: 4; 2000: 3; 1999: 1
185 query results in total (search time: 15 ms)
21.
Abstractive summarization aims to generate a concise summary covering salient content from single or multiple text documents. Many recent abstractive summarization methods are built on the transformer model to capture long-range dependencies in the input text and achieve parallelization. In the transformer encoder, calculating attention weights is a crucial step for encoding input documents. Input documents usually contain key phrases conveying salient information, and it is important to encode these phrases completely. However, existing transformer-based summarization works do not consider key phrases in the input when determining attention weights. Consequently, some of the tokens within key phrases receive only small attention weights, which is not conducive to encoding the semantic information of input documents. In this paper, we introduce prior knowledge of key phrases into the transformer-based summarization model and guide the model to encode key phrases. For the contextual representation of each token in a key phrase, we assume that tokens within the same key phrase make larger contributions than other tokens in the input sequence. Based on this assumption, we propose the Key Phrase Aware Transformer (KPAT), a model with a highlighting mechanism in the encoder that assigns greater attention weights to tokens within key phrases. Specifically, we first extract key phrases from the input document and score the phrases' importance. Then we build a block-diagonal highlighting matrix to indicate these phrases' importance scores and positions. To combine self-attention weights with the key phrases' importance scores, we design two highlighting-attention structures: one for each head and one for multi-head highlighting attention. Experimental results on two datasets (Multi-News and PubMed) from different summarization tasks and domains show that our KPAT model significantly outperforms advanced summarization baselines. We conduct further experiments to analyze the impact of each part of our model on summarization performance and to verify the effectiveness of the proposed highlighting mechanism.
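To make the highlighting idea concrete, here is a minimal NumPy sketch of one plausible way to combine a block-diagonal key-phrase matrix with scaled dot-product attention. The abstract does not specify the exact combination structures, so the additive scheme, the `alpha` weight, and all names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def build_highlighting_matrix(seq_len, phrase_spans, phrase_scores):
    """Block-diagonal matrix H: H[i, j] = importance score of the key phrase
    containing both token i and token j, 0 elsewhere (assumes non-overlapping spans)."""
    H = np.zeros((seq_len, seq_len))
    for (start, end), score in zip(phrase_spans, phrase_scores):
        H[start:end, start:end] = score
    return H

def highlighting_attention(Q, K, V, H, alpha=1.0):
    """Scaled dot-product attention with the highlighting matrix added to the
    raw scores before the softmax (one plausible combination scheme)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k) + alpha * H
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

# Toy example: 6 tokens, one key phrase covering tokens 2..4 with importance 0.8.
rng = np.random.default_rng(0)
seq_len, d_model = 6, 8
Q = rng.normal(size=(seq_len, d_model))
K = rng.normal(size=(seq_len, d_model))
V = rng.normal(size=(seq_len, d_model))
H = build_highlighting_matrix(seq_len, phrase_spans=[(2, 5)], phrase_scores=[0.8])
out, attn = highlighting_attention(Q, K, V, H)
print(attn.round(3))
```

In the toy example, the attention rows for tokens 2 to 4 shift toward the other tokens of the same phrase, which is the qualitative effect the highlighting mechanism is after.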
22.
In research on Stalin, Stalin's view of Leninism is a topic of keen interest, and scholars have already studied it extensively; in terms of systematic treatment, however, the work still needs to be deepened. Drawing on the existing scholarship, this article undertakes a more systematic examination of Stalin's view of Leninism, so as to understand and grasp it more fully as a whole.
23.
The goal of text sentiment summarization is to express the core sentiment content of an article accurately and in a concise form. To address the influence that differing document structures and content characteristics have on summarization results, a topic-based SE-TextRank sentiment summarization method is proposed. Converged text topics are obtained automatically with an LDA model, sentences are grouped by topic using cosine distance, and the central sentences within each group are extracted with traditional multi-feature fusion and the SE-TextRank sentiment summarization algorithm to produce the target summary. Experiments show that this method obtains news text summaries more efficiently.
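As a rough illustration of the pipeline this abstract describes (LDA topics, cosine-based grouping, then graph-based sentence ranking within each group), the sketch below uses scikit-learn's LDA and TF-IDF with a plain TextRank-style power iteration. It is not the authors' SE-TextRank implementation; the multi-feature fusion and sentiment-specific scoring are omitted, and all names are illustrative.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

def summarize(sentences, n_topics=2, top_per_topic=1, damping=0.85, iters=50):
    # 1) Infer a topic distribution per sentence with LDA and group by dominant topic.
    counts = CountVectorizer().fit_transform(sentences)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    topic_dist = lda.fit_transform(counts)
    groups = topic_dist.argmax(axis=1)

    # 2) Within each group, run a TextRank-style ranking on a cosine-similarity graph.
    tfidf = TfidfVectorizer().fit_transform(sentences)
    summary = []
    for t in range(n_topics):
        idx = np.where(groups == t)[0]
        if idx.size == 0:
            continue
        sim = cosine_similarity(tfidf[idx])
        np.fill_diagonal(sim, 0.0)
        row_sums = sim.sum(axis=1, keepdims=True)
        trans = np.divide(sim, row_sums, out=np.zeros_like(sim), where=row_sums > 0)
        scores = np.ones(len(idx)) / len(idx)
        for _ in range(iters):  # power iteration of the PageRank/TextRank recurrence
            scores = (1 - damping) / len(idx) + damping * trans.T @ scores
        best = idx[np.argsort(-scores)[:top_per_topic]]
        summary.extend(sentences[i] for i in sorted(best))
    return summary

docs = ["the market rallied strongly today", "investors cheered the earnings report",
        "the home team lost the final match", "fans were disappointed by the defeat"]
print(summarize(docs, n_topics=2))
```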
24.
On Automatic Summarization and Its Classification (论自动文摘及其分类)   Total citations: 10; self-citations: 1; citations by others: 10
Automatic summarization, that is, using computers to produce abstracts automatically, is a need of the information age. This paper discusses different definitions, characteristics, and functions of abstracts. Existing classification schemes for abstracts are not suitable for classifying automatic summarization systems; this paper therefore attempts to classify automatic summarization systems from multiple perspectives. Such a classification, built on the characteristics of automatic summarization, sums up how automatic summarization can be categorized and can serve as a reference for constructing automatic summarization systems and for thinking about the direction of their development. Finally, the paper surveys the state of research on Chinese automatic summarization systems and looks ahead to future trends in automatic summarization.
25.
Summarizing Similarities and Differences Among Related Documents   Total citations: 10; self-citations: 0; citations by others: 10
In many modern information retrieval applications, a common problem is the existence of multiple documents covering similar information, as in the case of multiple news stories about an event or a sequence of events. A particular challenge for text summarization is to summarize the similarities and differences in information content among these documents. The approach described here exploits the results of recent progress in information extraction to represent salient units of text and their relationships. By exploiting meaningful relations between units, based on an analysis of text cohesion and the context in which the comparison is desired, the summarizer can pinpoint similarities and differences and align text segments. In evaluation experiments, these techniques for exploiting cohesion relations result in summaries which (i) help users complete a retrieval task more quickly, (ii) result in improved alignment accuracy over baselines, and (iii) improve identification of topic-relevant similarities and differences.
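The align-then-compare idea can be illustrated with a toy similarity-based aligner; note that the system described above relies on information-extraction output and cohesion relations, not the plain TF-IDF cosine matching assumed in this sketch.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def compare(doc_a_segments, doc_b_segments, threshold=0.3):
    """Align segments of two related documents by cosine similarity; aligned pairs
    above the threshold are reported as similarities, unaligned segments as differences."""
    tfidf = TfidfVectorizer().fit(doc_a_segments + doc_b_segments)
    A = tfidf.transform(doc_a_segments)
    B = tfidf.transform(doc_b_segments)
    sim = cosine_similarity(A, B)

    similarities, matched_b = [], set()
    for i, row in enumerate(sim):
        j = row.argmax()
        if row[j] >= threshold:
            similarities.append((doc_a_segments[i], doc_b_segments[j]))
            matched_b.add(j)
    differences = (
        [s for i, s in enumerate(doc_a_segments) if sim[i].max() < threshold]
        + [s for j, s in enumerate(doc_b_segments) if j not in matched_b]
    )
    return similarities, differences
```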
26.
Educational informatization is a lever and driving force of educational modernization. To understand the current state of research on the informatization of special education in China, this article defines the concept of special education informatization and analyzes the relevant academic literature along several dimensions, including time, research content, region, and reference type, so as to outline the current state of research. Drawing on the experience of developed countries in this field, it then proposes key topics and directions for future work, in the hope of providing a reference for other research on special education informatization.
27.
The tune title and the original story (benshi) are both basic constituent elements of yuefu poetry, and the two are closely and intrinsically connected. The original story of a yuefu poem is the direct source of its tune title's meaning, while the tune title is the key phrase of the story's content and its most precise summary. In their compositions, literati often used the tune title to allude to the story behind the tune's creation and to convey the various kinds of information it carries, thereby enriching the literary connotations of their works.
28.
German higher vocational education is known as the "engine" of Germany's economic takeoff. A literature review of its development helps us understand German higher vocational education in depth and, at the same time, offers lessons for the development of higher vocational education in China. Scholarship on the development of German higher vocational education has focused mainly on educational thought, training models, faculty development, legal safeguards, and specialized curricula.
29.
The evolution of the job market has made traditional methods of recruitment insufficient. Since it is now necessary to handle volumes of information (mostly in the form of free text) that are impossible to process manually, analysis and assisted categorization are essential. In this paper, we present a combination of the E-Gen and Cortex systems. E-Gen performs analysis and categorization of job offers together with the responses given by candidates. The E-Gen strategy is based on vector-space and probabilistic models to solve the problem of profiling applications against a specific job offer. Cortex is a statistical automatic summarization system. In this work, E-Gen uses Cortex as a powerful filter to eliminate irrelevant information contained in candidate answers. Our main objective is to develop a system that assists a recruitment consultant, and the results obtained by the proposed combination surpass those of E-Gen in standalone mode on this task.
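The overall pipeline shape (filter each candidate answer down to its relevant sentences, then match the filtered text against the offer in a vector space) might be sketched as follows. The sentence filter here merely stands in for the Cortex summarizer, and the TF-IDF profiling stands in for E-Gen's vectorial and probabilistic models; none of this is the systems' actual code, and all function names are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def filter_answer(answer_sentences, job_offer, keep=3):
    """Keep only the sentences most relevant to the offer (summarizer-as-filter idea)."""
    vec = TfidfVectorizer().fit(answer_sentences + [job_offer])
    scores = cosine_similarity(vec.transform(answer_sentences),
                               vec.transform([job_offer])).ravel()
    top = sorted(range(len(answer_sentences)), key=lambda i: -scores[i])[:keep]
    return " ".join(answer_sentences[i] for i in sorted(top))

def rank_candidates(candidate_answers, job_offer, keep=3):
    """Rank candidates by similarity of their filtered answers to the job offer."""
    filtered = [filter_answer(sents, job_offer, keep) for sents in candidate_answers]
    vec = TfidfVectorizer().fit(filtered + [job_offer])
    sims = cosine_similarity(vec.transform(filtered),
                             vec.transform([job_offer])).ravel()
    return sorted(enumerate(sims), key=lambda kv: -kv[1])
```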
30.
Because historical records differ, the question of Han Yu's native place has long been disputed among scholars. There are four main prevailing views: the "Changli" theory, the "Dengzhou Nanyang" theory, the "Meng County" theory, and the "Xiuwu" theory. Judging from current research, the "Meng County" and "Xiuwu" theories have the most supporters. Behind the dispute over his native place lies a contest of economic interests; using culture as a stage for economic ends is not in itself objectionable and has its own rationale and positive effects. The premise of such cultural staging, however, must be respect for history, and the debate should not be confined to regional rivalry. Historical research should transcend regional attachments and take the pursuit of truth and accuracy as its principle.