171.
Distant supervision (DS) has the advantage of automatically generating large amounts of labelled training data and has been widely used for relation extraction. However, the automatically labelled data in distant supervision usually contain many wrong labels (Riedel, Yao, & McCallum, 2010). This paper presents a novel method to reduce the wrong labels. The proposed method uses a semantic Jaccard measure with word embeddings to compute the semantic similarity between the relation phrase in the knowledge base and the dependency phrases between two entities in a sentence, and uses this similarity to filter out wrong labels. In the process of reducing wrong labels, the semantic Jaccard algorithm selects a core dependency phrase to represent the candidate relation in a sentence, which can capture the features needed for relation classification and avoid the negative impact of irrelevant term sequences that previous neural network models of relation extraction often suffer from. In the relation classification step, the core dependency phrases are also used as the input of a convolutional neural network (CNN). The experimental results show that, compared with the methods using the original DS data, the methods using the filtered DS data performed much better in relation extraction, which indicates that the semantic-similarity-based method is effective in reducing wrong labels. The CNN model using the core dependency phrases as input achieved the best relation extraction performance of all, indicating that the core dependency phrases are sufficient as CNN input to capture the features needed for relation classification while avoiding the negative impact of irrelevant terms.
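A semantic Jaccard over word embeddings can be sketched as a "soft" Jaccard in which each word is matched to its most similar word on the other side instead of requiring exact overlap. The toy vectors below are invented for illustration; a real system would load pre-trained embeddings.

```python
import math

# Toy word vectors (illustrative assumption, not real embeddings).
VEC = {
    "birthplace": [0.9, 0.1, 0.0],
    "born":       [0.85, 0.2, 0.05],
    "in":         [0.1, 0.8, 0.1],
    "of":         [0.15, 0.75, 0.2],
    "company":    [0.0, 0.1, 0.9],
}

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def semantic_jaccard(phrase_a, phrase_b):
    """Soft Jaccard: each word contributes its best cosine match
    on the other side, then intersection/union as usual."""
    a = [w for w in phrase_a if w in VEC]
    b = [w for w in phrase_b if w in VEC]
    if not a or not b:
        return 0.0
    match_a = sum(max(cos(VEC[w], VEC[v]) for v in b) for w in a)
    match_b = sum(max(cos(VEC[w], VEC[v]) for v in a) for w in b)
    inter = (match_a + match_b) / 2
    return inter / (len(a) + len(b) - inter)
```

With these vectors, the knowledge-base relation phrase `["birthplace"]` scores higher against the dependency phrase `["born", "in"]` than against `["company", "of"]`, which is the ordering a wrong-label filter needs.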
172.
Within the context of Information Extraction (IE), relation extraction aims to identify a variety of relation phrases and their arguments in arbitrary sentences. In this paper, we present a clause-based framework for information extraction in textual documents. Our framework focuses on two important challenges in information extraction: 1) Open Information Extraction (OIE), and 2) Relation Extraction (RE). Across the plethora of research that focuses on the use of syntactic and dependency parsing for detecting relations, there has been increasing evidence of incoherent and uninformative extractions. The extracted relations may even be erroneous at times and fail to provide a meaningful interpretation. In our work, we use the English clause structure and clause types to generate propositions that can be deemed extractable relations. Moreover, we propose refinements to the grammatical structure produced by syntactic and dependency parsing that help reduce the number of incoherent and uninformative extractions from clauses. In our experiments, in both the open information extraction and relation extraction domains, we carefully evaluate our system on various benchmark datasets and compare its performance against existing state-of-the-art information extraction systems. Our work shows improved performance compared to the state-of-the-art techniques.
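The clause-type idea can be illustrated with a minimal sketch: classify a clause by which constituents it contains (following the standard English clause typology: SV, SVO, SVC, SVA, ...) and emit a proposition per type. Real systems derive the constituents from a dependency parse; here the clauses are pre-chunked dictionaries, an illustrative assumption.

```python
# Constituents: S=subject, V=verb, O=object, C=complement, A=adverbial.
def clause_type(clause):
    keys = [k for k in ("S", "V", "O", "C", "A") if k in clause]
    return "".join(keys)

def propositions(clause):
    """Emit (arg1, relation, arg2) tuples from one pre-chunked clause."""
    t = clause_type(clause)
    if t == "SV":
        return [(clause["S"], clause["V"], "")]
    if t in ("SVO", "SVC"):
        obj = clause.get("O", clause.get("C", ""))
        return [(clause["S"], clause["V"], obj)]
    if t == "SVA":
        return [(clause["S"], clause["V"], clause["A"])]
    return []
```

For example, `propositions({"S": "Bell", "V": "makes", "O": "products"})` yields a single coherent triple rather than fragments of the sentence.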
173.
Automatic extraction of skill information from online recruitment texts (total citations: 1; self-citations: 1; citations by others: 0)
[Purpose/Significance] Manual extraction of skill information from online job postings cannot meet the demands of large-scale analysis; this paper proposes a method for automatically extracting skill information from large volumes of online recruitment text. [Method/Process] Based on the characteristics of online recruitment text, candidate skills are first selected using dependency parsing; a domain-relevance measure is then proposed to score the candidates and is integrated into a traditional term-extraction method, yielding an automatic skill-extraction method for online recruitment text. [Result/Conclusion] Experiments show that the proposed method extracts skill information from online recruitment text automatically, quickly, and accurately.
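The domain-relevance idea can be sketched as a contrastive frequency score: a candidate term that is frequent in the job-posting corpus but rare in a general corpus is likely a skill. The formula and the toy counts below are illustrative assumptions, not the paper's exact metric.

```python
from collections import Counter

def domain_relevance(term, domain_counts, general_counts):
    """Relative frequency in the domain corpus vs. a general corpus.
    Values near 1 mean the term is almost domain-specific."""
    d = domain_counts[term] / max(sum(domain_counts.values()), 1)
    g = general_counts[term] / max(sum(general_counts.values()), 1)
    if d + g == 0:
        return 0.0
    return d / (d + g)

# Invented counts: candidate terms from job postings vs. news text.
domain = Counter({"python": 30, "spark": 12, "team": 8})
general = Counter({"team": 50, "python": 2, "today": 40})
```

Here `"spark"` scores 1.0 (never seen in the general corpus) while the generic word `"team"` scores low, so only the former survives as a skill candidate.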
174.
Five hundred million tweets are posted daily, making Twitter a major social media platform from which topical information on events can be extracted. These events are represented by three main dimensions: time, location and entity-related information. The focus of this paper is location, which is an essential dimension for geo-spatial applications, either when helping rescue operations during a disaster or when used for contextual recommendations. While the first type of application needs high recall, the second is more precision-oriented. This paper studies the recall/precision trade-off, combining different methods to extract locations. In the context of short posts, applying tools that have been developed for natural language is not sufficient, given that tweets are generally too short to be linguistically correct. Also bearing in mind the high number of posts that need to be handled, we hypothesize that predicting whether a post contains a location or not could make the location extractors more focused and thus more effective. We introduce a model to predict whether a tweet contains a location or not and show that location prediction is a useful pre-processing step for location extraction. We define a number of new tweet features and conduct an extensive evaluation. Our findings are that (1) combining existing location extraction tools is effective for precision-oriented or recall-oriented results, (2) enriching the tweet representation is effective for predicting whether a tweet contains a location or not, (3) words appearing in a geography gazetteer and the occurrence of a preposition just before a proper noun are the two most important features for predicting the occurrence of a location in tweets, and (4) the accuracy of location extraction improves when it is possible to predict that there is a location in a tweet.
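The two features the study found most predictive (a gazetteer hit, and a preposition directly before a proper noun) can be sketched as a rule-based pre-filter. The gazetteer, preposition list, and capitalisation-based proper-noun test below are toy stand-ins for real resources and POS tagging.

```python
GAZETTEER = {"paris", "london", "texas"}          # toy gazetteer
PREPOSITIONS = {"in", "at", "near", "from", "to"}

def features(tweet):
    tokens = tweet.split()
    gaz_hit = any(t.lower().strip(".,!?") in GAZETTEER for t in tokens)
    # Preposition immediately followed by a capitalised token.
    prep_propn = any(
        tokens[i].lower() in PREPOSITIONS and tokens[i + 1][:1].isupper()
        for i in range(len(tokens) - 1)
    )
    return {"gazetteer": gaz_hit, "prep_before_propn": prep_propn}

def likely_has_location(tweet):
    f = features(tweet)
    return f["gazetteer"] or f["prep_before_propn"]
```

Only tweets that pass this cheap predictor would then be handed to the heavier location extractors, which is the pre-processing gain the paper reports.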
175.
We investigated the combined effects of ambient temperature (23°C or 13°C) and fraction of inspired oxygen (21%O2 or 13%O2) on the energy cost of walking (Cw: J·kg⁻¹·km⁻¹) and the economical speed (ES). Eighteen healthy young adults (11 males, seven females) walked at seven speeds from 0.67 to 1.67 m·s⁻¹ (four min per stage). Environmental conditions were set as: thermoneutral (N: 23°C) with normoxia (N: 21%O2) = NN; 23°C (N) with hypoxia (H: 13%O2) = NH; cool (C: 13°C) with 21%O2 (N) = CN; and 13°C (C) with 13%O2 (H) = CH. Muscle deoxygenation (HHb) and tissue O2 saturation (StO2) were measured at the tibialis anterior. We found a significantly slower ES in NH (1.289 ± 0.091 m·s⁻¹) and CH (1.275 ± 0.099 m·s⁻¹) than in NN (1.334 ± 0.112 m·s⁻¹) and CN (1.332 ± 0.104 m·s⁻¹). Changes in HHb and StO2 were related to the ES. These results suggest that the effect of combined exposure to hypoxia and cool is nearly equal to that of exposure to hypoxia and cool individually. Specifically, acute moderate hypoxia slowed the ES by approximately 4%, whereas an acute cool environment did not affect the ES. Further, HHb and StO2 may partly account for an individual's ES.
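The economical speed is the walking speed that minimises the U-shaped cost-of-walking curve; a common way to estimate it is to fit a quadratic Cw = a·v² + b·v + c to the measured points and take the vertex v = −b/(2a). The sketch below uses invented Cw values, not the study's measurements.

```python
def fit_quadratic(vs, cs):
    """Least-squares quadratic fit via the 3x3 normal equations,
    solved by Gauss-Jordan elimination. Returns (a, b, c)."""
    n = len(vs)
    sx = [sum(v ** k for v in vs) for k in range(5)]
    sy = [sum(c * v ** k for v, c in zip(vs, cs)) for k in range(3)]
    m = [
        [sx[4], sx[3], sx[2], sy[2]],
        [sx[3], sx[2], sx[1], sy[1]],
        [sx[2], sx[1], n,     sy[0]],
    ]
    for i in range(3):
        p = m[i][i]
        m[i] = [x / p for x in m[i]]
        for j in range(3):
            if j != i:
                m[j] = [x - m[j][i] * y for x, y in zip(m[j], m[i])]
    return m[0][3], m[1][3], m[2][3]

speeds = [0.67, 0.83, 1.00, 1.17, 1.33, 1.50, 1.67]   # m/s, as in the protocol
costs = [245, 215, 198, 192, 195, 208, 230]           # J/kg/km, invented values
a, b, _ = fit_quadratic(speeds, costs)
es = -b / (2 * a)   # economical speed = vertex of the fitted parabola
```

With these made-up points the fitted vertex lands near 1.2 m·s⁻¹, i.e. inside the range the study reports for ES.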
176.
Polarimetric synthetic aperture radar (PolSAR), with its multiple parameters, channels and polarizations and its more complete recording of information, plays an important role in urban land-cover extraction and has become a hot topic in remote-sensing image research. Using Radarsat2 imagery covering Suzhou, 19 polarimetric features and 8 texture features were extracted with polarimetric incoherent decomposition and the gray-level co-occurrence matrix, respectively. Feature combinations were formed by analysing the polarimetric and texture features of buildings, vegetation and water, and urban buildings were extracted using principal component analysis (PCA) and a support vector machine (SVM), with the accuracy evaluated quantitatively. The results show that the highest building-extraction accuracy was 92.4% using polarimetric features alone and 88.9% using texture features alone; combining polarimetric and texture features improved accuracy, with a best result of 93.7%; and the PCA feature-fusion algorithm was computationally efficient while also improving accuracy.
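The texture-feature step above rests on the gray-level co-occurrence matrix (GLCM): counting how often pairs of gray levels co-occur at a fixed pixel offset, then deriving statistics such as contrast. A minimal sketch, with a toy image instead of Radarsat2 data and only one offset where real work uses several:

```python
def glcm(image, levels, dr=0, dc=1):
    """Co-occurrence counts of gray-level pairs at offset (dr, dc);
    the default (0, 1) pairs each pixel with its right neighbour."""
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for r in range(rows - dr):
        for c in range(cols - dc):
            m[image[r][c]][image[r + dr][c + dc]] += 1
    return m

def contrast(m):
    """GLCM contrast: co-occurrences weighted by squared level difference."""
    total = sum(sum(row) for row in m) or 1
    return sum(
        m[i][j] * (i - j) ** 2 for i in range(len(m)) for j in range(len(m))
    ) / total

img = [       # toy 4-level image
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 2, 2, 2],
    [2, 2, 3, 3],
]
```

The resulting GLCM (or several of them, at different offsets and directions) is what feeds the 8 texture features used in the classification above.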
177.
Searching for relevant material that satisfies the information need of a user within a large document collection is a critical activity for web search engines. Query Expansion (QE) techniques are widely used by search engines for the disambiguation of the user's information need and for improving information retrieval (IR) performance. Knowledge-based, corpus-based and relevance-feedback methods are the main QE techniques; they take different approaches to expanding the user query with synonyms of the search terms (word synonymy) in order to retrieve more relevant documents, and to filtering out documents that contain the search terms with a meaning other than the one the user intended (the word-polysemy problem). This work surveys existing query expansion techniques, highlights their strengths and limitations, and introduces a new method that combines the power of knowledge-based or corpus-based techniques with that of relevance feedback. Experimental evaluation on three information retrieval benchmark datasets shows that applying knowledge- or corpus-based query expansion techniques to the results of the relevance feedback step improves information retrieval performance, with knowledge-based techniques providing significantly better results than their simple relevance feedback alternatives on all datasets.
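The combination described above can be sketched as: take expansion terms from the (pseudo-)relevant documents, then enrich the result with knowledge-based synonyms. The synonym table and documents below are toy stand-ins for WordNet and a real corpus.

```python
from collections import Counter

# Toy knowledge-based synonym resource (illustrative assumption).
SYNONYMS = {"car": ["automobile"], "fast": ["quick", "rapid"]}

def feedback_terms(relevant_docs, query, k=2):
    """Relevance-feedback step: top-k terms from relevant documents
    that are not already in the query."""
    counts = Counter(
        w for doc in relevant_docs for w in doc.split() if w not in query
    )
    return [w for w, _ in counts.most_common(k)]

def expand(query, relevant_docs):
    """Knowledge-based expansion applied on top of relevance feedback."""
    terms = list(query) + feedback_terms(relevant_docs, set(query))
    expanded = list(terms)
    for t in terms:
        expanded.extend(SYNONYMS.get(t, []))
    return expanded
```

For example, `expand(["car"], ["fast car engine", "car engine repair"])` keeps the original term, adds frequent feedback terms such as `"engine"`, and then adds the synonym `"automobile"` from the knowledge resource.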
178.
This study proposes a temporal analysis method to utilize heterogeneous resources such as papers, patents, and web news articles in an integrated manner. We analyzed the time-gap phenomena between the three resources in two academic areas by conducting text-mining-based content analysis. To this end, a topic modeling technique, Latent Dirichlet Allocation (LDA), was used to estimate the optimal time gaps among the three resources (papers, patents, and web news articles) in two research domains. The contributions of this study are summarized as follows: firstly, we propose a new temporal analysis method for understanding the content characteristics and trends of heterogeneous multiple resources in an integrated manner, and apply it to measure the exact time intervals between academic areas through the time-gap phenomena. The results of the temporal analysis showed that the resources of the medical field were more up to date than those of the computer field, and were thus disclosed to the public more promptly. Secondly, we adopted a power-law exponent measurement and content analysis to evaluate the proposed method. With the proposed method, we demonstrate how to analyze heterogeneous resources more precisely and comprehensively.
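One simple way to operationalise a "time gap" between two resources is to slide one yearly topic-intensity series against the other and pick the lag with the highest agreement. The series below are invented; the study derives such series from LDA topic proportions per year.

```python
def overlap_score(a, b):
    # Unnormalised agreement between two aligned series (zip truncates).
    return sum(x * y for x, y in zip(a, b))

def best_lag(papers, news, max_lag=3):
    """Lag (in years) at which the paper series best matches the
    news series; a positive lag means news leads papers."""
    return max(
        range(0, max_lag + 1),
        key=lambda lag: overlap_score(papers[lag:], news),
    )

# Invented yearly intensities of one topic in each resource.
news_topic =   [5, 9, 4, 2, 1, 1, 0]   # topic peaks early in news
papers_topic = [1, 2, 5, 9, 4, 2, 1]   # same topic peaks 2 years later
```

Here `best_lag` recovers the two-year gap built into the toy data, which is the kind of interval the temporal analysis reports between resources.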
179.
Negation recognition in medical narrative reports (total citations: 1; self-citations: 0; citations by others: 1)
Substantial medical data, such as discharge summaries and operative reports, are stored in electronic textual form. Databases containing free-text clinical narrative reports often need to be searched to find relevant information for clinical and research purposes. The context of negation, a negative finding, is of special importance, since many of the most frequently described findings are negated. When searching free-text narratives for patients with a certain medical condition, if negation is not taken into account, many of the documents retrieved will be irrelevant. Hence, negation is a major source of poor precision in medical information retrieval systems. Previous research has shown that negated findings may be difficult to identify if the words implying negation (negation signals) are more than a few words away from them. We present a new pattern learning method for the automatic identification of negative context in clinical narrative reports. We compare the new algorithm to previous methods proposed for the same task and show its advantages: it improves accuracy compared with other machine learning methods, and it is much faster than manual knowledge engineering techniques while matching their accuracy. The new algorithm can also be applied to further context identification and information extraction tasks.
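A minimal rule-based baseline for this task (in the spirit of NegEx-style systems, not the paper's learned patterns) flags a finding as negated when it appears within a few tokens after a negation signal; the signal list and window size below are illustrative assumptions.

```python
import re

NEGATION_SIGNALS = ["no", "denies", "without", "not"]
WINDOW = 5  # max tokens between the signal and the finding

def is_negated(sentence, finding):
    """True if the finding's head word occurs within WINDOW tokens
    after a negation signal."""
    tokens = re.findall(r"[a-z']+", sentence.lower())
    target = finding.lower().split()[0]
    for i, tok in enumerate(tokens):
        if tok in NEGATION_SIGNALS:
            if target in tokens[i + 1 : i + 1 + WINDOW]:
                return True
    return False
```

This baseline captures exactly the failure mode the paper highlights: once the signal sits more than WINDOW tokens from the finding, the rule misses it, which is what motivates learning richer patterns.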
180.
A compressed full-text self-index for a text T, of size u, is a data structure used to search for patterns P, of size m, in T, that requires reduced space, i.e. space that depends on the empirical entropy (H_k or H_0) of T, and is, furthermore, able to reproduce any substring of T. In this paper we present a new compressed self-index able to locate the occurrences of P in O((m + occ) log u) time, where occ is the number of occurrences. The fundamental improvement over previous LZ78-based indexes is the reduction of the search time dependency on m from O(m^2) to O(m). To achieve this result we point out the main obstacle to linear-time algorithms based on LZ78 data compression and expose and explore the nature of a recurrent structure in LZ-indexes, the suffix tree. We show that our method is very competitive in practice by comparing it against other state-of-the-art compressed indexes.
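The LZ78 parsing that such self-indexes are built on splits the text into phrases, each consisting of a previously seen phrase plus one new character. A minimal sketch of the parse (the index structures built on top of it are beyond this illustration):

```python
def lz78_parse(text):
    """LZ78 parse: list of (id of longest previous phrase, new char),
    where id 0 denotes the empty phrase."""
    trie = {}          # phrase -> phrase id
    phrases = []
    cur = ""
    for ch in text:
        if cur + ch in trie:
            cur += ch          # extend the current phrase match
        else:
            phrases.append((trie.get(cur, 0), ch))
            trie[cur + ch] = len(phrases)
            cur = ""
    if cur:                    # leftover suffix equals a known phrase
        phrases.append((trie[cur], ""))
    return phrases
```

For example, `"abab"` parses into the phrases `a`, `b`, `ab`, i.e. `[(0, "a"), (0, "b"), (1, "b")]`; pattern occurrences that cross phrase boundaries are precisely what makes searching LZ78-compressed text hard, and what the O(m²)-to-O(m) improvement above addresses.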