  Full text (subscription)   1720 articles
  Free   37 articles
  Free (domestic)   127 articles
Education   1055 articles
Scientific research   418 articles
Cultures of various countries   1 article
Sports   10 articles
General   71 articles
Information dissemination   329 articles
  2024   1 article
  2023   22 articles
  2022   39 articles
  2021   33 articles
  2020   65 articles
  2019   64 articles
  2018   42 articles
  2017   20 articles
  2016   35 articles
  2015   63 articles
  2014   128 articles
  2013   109 articles
  2012   125 articles
  2011   164 articles
  2010   122 articles
  2009   111 articles
  2008   137 articles
  2007   163 articles
  2006   109 articles
  2005   85 articles
  2004   67 articles
  2003   44 articles
  2002   38 articles
  2001   26 articles
  2000   27 articles
  1999   8 articles
  1998   8 articles
  1997   7 articles
  1996   9 articles
  1995   4 articles
  1994   5 articles
  1993   1 article
  1992   3 articles
Sorted by: 1884 results in total (search time: 15 ms)
161.
This study investigated the optimal process for extracting lycopene from saponified Fuchuan navel orange peel powder, together with its separation and purification. Using Fuchuan navel orange peel powder as the raw material and a 0.5 mol/L sodium carbonate solution as the saponification agent, lycopene was extracted under ultrasonic assistance with a chloroform–petroleum ether mixed solvent. A three-factor, three-level orthogonal experiment was designed to study the extraction process, with lycopene yield as the evaluation index. The optimal extraction conditions were a solid-to-liquid ratio (g/mL) of 1:20, an ultrasonic extraction time of 25 min, and a chloroform:petroleum ether ratio of 1:3 (V/V); under these conditions the lycopene yield was 14.72 mg/100 g. The lycopene extracted from the navel orange peel was then separated and purified, giving a lycopene content of 62.83%.
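Orthogonal experiments of this kind are typically evaluated by range analysis: for each factor, compute the mean yield at each level and take the level with the highest mean as the optimum. The sketch below runs this analysis on a hypothetical L9(3^4) dataset (the abstract reports only the optimum conditions and the final yield of 14.72 mg/100 g, not the per-run yields, so the numbers here are illustrative):

```python
# Range analysis for a 3-factor, 3-level orthogonal experiment (L9 array).
# Factors: A = solid-to-liquid ratio, B = ultrasound time, C = solvent ratio.
# The yield values below are hypothetical, chosen so level indices 1, 2, 1
# come out best; the abstract does not report per-run yields.

L9 = [  # each row: (level of A, level of B, level of C)
    (0, 0, 0), (0, 1, 1), (0, 2, 2),
    (1, 0, 1), (1, 1, 2), (1, 2, 0),
    (2, 0, 2), (2, 1, 0), (2, 2, 1),
]
yields = [10.1, 11.4, 10.8, 13.2, 12.6, 14.5, 12.0, 11.9, 12.3]  # mg/100 g

def best_levels(design, responses, n_factors=3, n_levels=3):
    """Return, for each factor, the level index with the highest mean response."""
    best = []
    for f in range(n_factors):
        means = []
        for lv in range(n_levels):
            vals = [r for row, r in zip(design, responses) if row[f] == lv]
            means.append(sum(vals) / len(vals))
        best.append(max(range(n_levels), key=lambda lv: means[lv]))
    return best

best = best_levels(L9, yields)
```

With the toy data above, factor means are compared level by level, so the analysis needs only nine runs instead of the 27 a full factorial would require.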
162.
The PKS recitation model re-examines recitation from the perspectives of linguistics and cognitive psychology. The model takes lexical chunks as the basic unit of recitation, uses pictures to represent chunks so that learners form visual images, manages the chunks with mind maps, and supplements these with combined listening, viewing, and reading input, making recitation enjoyable, stimulating learners' motivation, and improving recitation efficiency. PKS recitation has strong cognitive advantages: lexical chunks remove comprehension obstacles, mind maps promote the construction of recitation schemata, and the two combined provide a powerful mnemonic function. Applying the model helps cultivate learners' imagination and capacity for autonomous learning.
163.
Through an analysis of layout design and text arrangement in university journals, this paper points out the importance of layout design for university journals, proposes pursuing variation and unity among the elements of text arrangement, and emphasizes that layout design should have an original style, so as to better satisfy readers' visual experience.
164.
鲍玉来  耿雪来  飞龙 《现代情报》2019,39(8):132-136
[Purpose/Significance] Extracting knowledge elements from unstructured corpora is a key step in building a knowledge graph. This paper explores a method of applying the convolutional neural network (CNN) model from deep learning to relation extraction in the tourism domain. [Method/Process] Data crawled from professional tourism websites were used to build a corpus; part of the corpus was manually annotated as training and test sets, and word segmentation, vectorization, and the CNN model were implemented in Python to carry out relation extraction experiments. [Result/Conclusion] The experimental results show that applying a convolutional neural network to relation extraction from unstructured tourism text achieves satisfactory results (precision 0.77, recall 0.76, F1-measure 0.76). After the extraction results are refined through manual proofreading, they can serve as a foundation for building tourism knowledge graphs and domain ontologies.
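The core operation of a CNN text classifier of the kind this abstract describes is a 1-D convolution over word embeddings followed by max-pooling. The plain-Python sketch below shows that single operation on toy values; the real system's segmentation, embeddings, and trained filter weights are not given in the abstract, so everything here is hypothetical:

```python
# Minimal 1-D convolution + ReLU + max-pooling over word embeddings,
# the feature-extraction step of a CNN used for relation classification.
# Embedding and filter values are hypothetical toy numbers.

def conv1d_maxpool(embeddings, filt, bias=0.0):
    """Slide a filter of width len(filt) over the word vectors and
    return the max-pooled ReLU activation."""
    width = len(filt)
    scores = []
    for i in range(len(embeddings) - width + 1):
        window = embeddings[i:i + width]
        s = bias + sum(w * x
                       for wvec, xvec in zip(filt, window)
                       for w, x in zip(wvec, xvec))
        scores.append(max(0.0, s))  # ReLU
    return max(scores)             # max-pooling over all window positions

# 5 words with 3-dimensional embeddings; one filter of width 2.
sentence = [[0.1, 0.2, 0.0],
            [0.5, -0.1, 0.3],
            [0.9, 0.4, -0.2],
            [0.0, 0.1, 0.1],
            [0.2, 0.0, 0.4]]
filt = [[1.0, 0.0, 0.5], [0.5, 1.0, 0.0]]
feature = conv1d_maxpool(sentence, filt)
```

A real classifier would apply many such filters and feed the pooled features into a softmax layer over relation labels; this sketch isolates one filter to show where the pooled feature comes from.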
165.
Automated keyphrase extraction is a fundamental textual information processing task concerned with the selection of representative phrases from a document that summarize its content. This work presents a novel unsupervised method for keyphrase extraction, whose main innovation is the use of local word embeddings (in particular GloVe vectors), i.e., embeddings trained on the single document under consideration. We argue that such local representations of words and keyphrases are able to accurately capture their semantics in the context of the document they are part of, and can therefore help improve keyphrase extraction quality. Empirical results offer evidence that local representations indeed lead to better keyphrase extraction than embeddings trained on very large third-party corpora or on larger corpora consisting of several documents from the same scientific field, and than other state-of-the-art unsupervised keyphrase extraction methods.
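The ranking step implied by this approach can be sketched simply: represent each candidate phrase as the mean of its locally trained word vectors and score it by cosine similarity to the document's mean vector. The tiny vectors below are hypothetical stand-ins for GloVe vectors trained on the single document, which the abstract does not provide:

```python
import math

# Hypothetical "local" word vectors (stand-ins for GloVe vectors trained
# on the single document under consideration).
vectors = {
    "keyphrase":  [0.9, 0.1, 0.0],
    "extraction": [0.8, 0.2, 0.1],
    "document":   [0.7, 0.3, 0.0],
    "weather":    [0.0, 0.1, 0.9],
}

def mean_vec(words):
    """Average the word vectors of a phrase or document."""
    vs = [vectors[w] for w in words]
    return [sum(col) / len(vs) for col in zip(*vs)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank_candidates(candidates, doc_words):
    """Rank candidate phrases by similarity to the document vector."""
    doc = mean_vec(doc_words)
    return sorted(candidates,
                  key=lambda c: cosine(mean_vec(c.split()), doc),
                  reverse=True)

doc_words = ["keyphrase", "extraction", "document"]
ranked = rank_candidates(["keyphrase extraction", "weather"], doc_words)
```

Because the vectors are trained on the one document, a phrase central to that document scores high even if it is rare in general language, which is the argument the abstract makes for local over global embeddings.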
166.
Accurate extraction of surface-water extent is an important basis for studying changes in surface-water quality and quantity. Sri Lanka is an important participant in the "21st Century Maritime Silk Road"; its annual rainfall is abundant but unevenly distributed in time and space, and its people have long faced difficulties with water supply, so studying Sri Lanka's surface water can help address these livelihood problems. Sri Lanka contains a large number of small reservoirs and ponds, and such small water bodies are easily affected by their surroundings and therefore difficult to extract. Based on Sentinel-1/2 satellite imagery of central and eastern Sri Lanka from July 2017, this study compares the accuracy and limitations of single-band thresholding, water-index, and supervised-classification methods for water extraction. The results show that the normalized difference water index (NDWI) method achieves the highest accuracy, with a classification accuracy of 94%.
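NDWI, the best-performing method in this study, is computed per pixel from the green and near-infrared bands as (Green − NIR) / (Green + NIR), with values above a threshold (commonly 0) classified as water. A minimal sketch, using hypothetical reflectance values rather than the Sentinel-2 scene from the paper:

```python
def ndwi(green, nir):
    """Normalized Difference Water Index for one pixel."""
    return (green - nir) / (green + nir)

def water_mask(green_band, nir_band, threshold=0.0):
    """Classify each pixel as water (True) where NDWI exceeds the threshold."""
    return [[ndwi(g, n) > threshold for g, n in zip(grow, nrow)]
            for grow, nrow in zip(green_band, nir_band)]

# Hypothetical 2x2 reflectance values: water reflects strongly in green
# and weakly in NIR; vegetation and soil are the opposite.
green = [[0.30, 0.05], [0.25, 0.04]]
nir   = [[0.05, 0.40], [0.06, 0.35]]
mask = water_mask(green, nir)
```

The threshold is the tunable part: small turbid ponds of the kind the paper discusses often sit near NDWI 0, which is one reason small water bodies are hard to extract.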
167.
Distant supervision (DS) has the advantage of automatically generating large amounts of labelled training data and has been widely used for relation extraction. However, there are usually many wrong labels in the automatically labelled data produced by distant supervision (Riedel, Yao, & McCallum, 2010). This paper presents a novel method to reduce the wrong labels. The proposed method uses the semantic Jaccard with word embeddings to measure the semantic similarity between the relation phrase in the knowledge base and the dependency phrases between two entities in a sentence, filtering out the wrong labels. In the process of reducing wrong labels, the semantic Jaccard algorithm selects a core dependency phrase to represent the candidate relation in a sentence, which captures features for relation classification and avoids the negative impact of the irrelevant term sequences from which previous neural network models of relation extraction often suffer. In the process of relation classification, the core dependency phrases are also used as the input of a convolutional neural network (CNN). The experimental results show that, compared with the methods using the original DS data, the methods using the filtered DS data performed much better in relation extraction, indicating that the semantic-similarity-based method is effective in reducing wrong labels. The relation extraction performance of the CNN model using the core dependency phrases as input is the best of all, which indicates that the core dependency phrases are sufficient to capture the features for relation classification while avoiding the negative impact of irrelevant terms.
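The semantic Jaccard measure described above can be sketched as a soft set overlap: two tokens count as matching when the cosine similarity of their word vectors exceeds a threshold, and the Jaccard ratio is taken over these soft matches. This is a simplified reading of the measure, with hypothetical toy vectors standing in for trained embeddings:

```python
import math

vectors = {  # hypothetical word embeddings
    "born":     [0.9, 0.1],
    "birth":    [0.85, 0.2],
    "place":    [0.1, 0.9],
    "location": [0.15, 0.85],
    "eat":      [-0.9, 0.2],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def semantic_jaccard(phrase_a, phrase_b, tau=0.8):
    """Soft Jaccard: |soft intersection| / |soft union|, where a token of
    phrase_a intersects phrase_b if some pair has cosine similarity > tau."""
    a, b = phrase_a.split(), phrase_b.split()
    matched = sum(1 for wa in a
                  if any(cosine(vectors[wa], vectors[wb]) > tau for wb in b))
    return matched / (len(a) + len(b) - matched)

# A knowledge-base relation phrase vs. a sentence's dependency phrase:
sim_good = semantic_jaccard("birth place", "born location")
sim_bad = semantic_jaccard("birth place", "eat location")
```

A distant-supervision label would then be kept only when this similarity exceeds a filtering threshold, discarding sentence-relation pairs whose dependency phrase does not actually express the knowledge-base relation.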
168.
Within the context of Information Extraction (IE), relation extraction is oriented towards identifying a variety of relation phrases and their arguments in arbitrary sentences. In this paper, we present a clause-based framework for information extraction in textual documents. Our framework focuses on two important challenges in information extraction: 1) Open Information Extraction (OIE), and 2) Relation Extraction (RE). Across the plethora of research that focuses on the use of syntactic and dependency parsing for detecting relations, there has been increasing evidence of incoherent and uninformative extractions. The extracted relations may even be erroneous at times and fail to provide a meaningful interpretation. In our work, we use the English clause structure and clause types in an effort to generate propositions that can be deemed extractable relations. Moreover, we propose refinements to the grammatical structure of syntactic and dependency parsing that help reduce the number of incoherent and uninformative extractions from clauses. In our experiments, in both the open information extraction and relation extraction domains, we carefully evaluate our system on various benchmark datasets and compare its performance against existing state-of-the-art information extraction systems. Our work shows improved performance compared to the state-of-the-art techniques.
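A very simplified picture of clause-based extraction: classify a clause by its structure (e.g. SV or SVO) and emit a proposition only when the pattern yields a complete, coherent relation. The sketch below assumes tokens have already been assigned grammatical roles; the actual framework derives these from full syntactic and dependency parses, so this only illustrates the extraction step:

```python
# Toy clause-to-proposition step. Tokens are pre-tagged as subject (S),
# verb (V), or object (O); real systems obtain roles from a parser.

def extract_proposition(tagged_clause):
    """Return an (arg1, relation, arg2) triple for an SVO clause,
    an (arg1, relation) pair for an SV clause, or None otherwise."""
    roles = {}
    for word, role in tagged_clause:
        roles.setdefault(role, []).append(word)
    subj = " ".join(roles.get("S", []))
    verb = " ".join(roles.get("V", []))
    obj = " ".join(roles.get("O", []))
    if subj and verb and obj:
        return (subj, verb, obj)   # SVO clause -> binary relation
    if subj and verb:
        return (subj, verb)        # SV clause -> unary proposition
    return None                    # incomplete clause -> nothing informative

clause = [("Marie", "S"), ("Curie", "S"),
          ("discovered", "V"),
          ("polonium", "O")]
triple = extract_proposition(clause)
```

Returning `None` for incomplete clauses is the point of the clause-type check: it suppresses exactly the incoherent, uninformative extractions the abstract criticizes.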
169.
Automatic extraction of skill information from online job-posting texts   Cited: 1 (self-citations: 1, citations by others: 0)
[Purpose/Significance] Manual extraction of skill information from online job postings cannot meet the demands of large-scale data analysis. This paper proposes a method for automatically extracting skill information from large volumes of online job-posting text. [Method/Process] Based on the characteristics of online job postings, dependency parsing is used to select candidate skills; a domain-relevance measure is then proposed to score the candidates and is integrated into a traditional term-extraction method, forming an automatic skill-extraction approach for online job-posting text. [Result/Conclusion] Experiments show that the proposed method can extract skill information from online job postings automatically, quickly, and accurately.
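The domain-relevance idea can be sketched as a contrastive ratio: a candidate term is domain-relevant when it is much more frequent in the job-posting corpus than in a general background corpus. The relative-frequency ratio below is one common formulation; the abstract does not specify the paper's exact measure, and all counts are hypothetical:

```python
def domain_relevance(term, domain_freq, general_freq,
                     domain_total, general_total):
    """Relative-frequency ratio of a term in the domain corpus vs. a
    general background corpus (add-one smoothing on the background)."""
    p_domain = domain_freq.get(term, 0) / domain_total
    p_general = (general_freq.get(term, 0) + 1) / (general_total + 1)
    return p_domain / p_general

# Hypothetical counts from a job-posting corpus vs. a general corpus.
domain_freq = {"python": 120, "teamwork": 80, "the": 5000}
general_freq = {"python": 40, "teamwork": 60, "the": 90000}
domain_total, general_total = 100_000, 1_000_000

score_python = domain_relevance("python", domain_freq, general_freq,
                                domain_total, general_total)
score_the = domain_relevance("the", domain_freq, general_freq,
                             domain_total, general_total)
```

Candidates scoring above a threshold would be kept as skills; a common function word like "the" scores below 1 because it is no more frequent in job postings than in general text.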
170.
Five hundred million tweets are posted daily, making Twitter a major social media platform from which topical information on events can be extracted. These events are represented by three main dimensions: time, location, and entity-related information. The focus of this paper is location, which is an essential dimension for geo-spatial applications, whether supporting rescue operations during a disaster or providing contextual recommendations. While the first type of application needs high recall, the second is more precision-oriented. This paper studies the recall/precision trade-off, combining different methods to extract locations. In the context of short posts, applying tools developed for natural language is not sufficient, given that tweets are generally too short to be linguistically correct. Also bearing in mind the high number of posts that need to be handled, we hypothesize that predicting whether a post contains a location or not could make the location extractors more focused and thus more effective. We introduce a model to predict whether a tweet contains a location or not and show that location prediction is a useful pre-processing step for location extraction. We define a number of new tweet features and conduct an intensive evaluation. Our findings are that (1) combining existing location extraction tools is effective for precision-oriented or recall-oriented results, (2) enriching tweet representation is effective for predicting whether a tweet contains a location or not, (3) words appearing in a geography gazetteer and the occurrence of a preposition just before a proper noun are the two most important features for predicting the occurrence of a location in tweets, and (4) the accuracy of location extraction improves when it is possible to predict that there is a location in a tweet.
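The two strongest features reported above (gazetteer membership and a preposition immediately before a proper noun) can be sketched as a simple feature extractor. The tokenization, gazetteer, and proper-noun test below are deliberately naive stand-ins for the paper's pipeline:

```python
# Naive features for "does this tweet mention a location?".
# The gazetteer and heuristics are toy stand-ins for the paper's features.

GAZETTEER = {"paris", "london", "texas"}          # hypothetical gazetteer
PREPOSITIONS = {"in", "at", "near", "from", "to"}

def location_features(tweet):
    tokens = tweet.split()
    # Feature 1: does any token appear in a geography gazetteer?
    has_gazetteer_word = any(t.lower().strip(".,!?") in GAZETTEER
                             for t in tokens)
    # Feature 2: is there a preposition just before a capitalized word
    # (a crude proxy for a proper noun)?
    prep_before_proper = any(
        tokens[i].lower() in PREPOSITIONS and tokens[i + 1][:1].isupper()
        for i in range(len(tokens) - 1))
    return {"gazetteer": has_gazetteer_word,
            "prep_before_proper_noun": prep_before_proper}

feats = location_features("Huge crowds in Paris tonight!")
```

In the paper's setup, such features feed a classifier that predicts whether a tweet contains a location at all, and the (expensive) extractors run only on tweets predicted positive.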

Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号