1.
Semantic knowledge accumulates through explicit means and through productive processes (e.g., analogy). These means work in concert when information explicitly acquired in separate episodes is integrated and the integrated representation is used to self-derive new knowledge. We tested whether (a) self-derivation through memory integration extends beyond general information to science content, (b) self-derived information is retained, and (c) details of the explicit learning episodes are retained. Testing took place in second-grade classrooms (children aged 7–9 years). Children self-derived new knowledge; performance did not differ between general knowledge (Experiment 1) and science curriculum facts (Experiment 2). In Experiment 1, children retained self-derived knowledge over one week. In Experiment 2, children remembered details of the learning episodes that gave rise to self-derived knowledge; performance suggests that memory integration depends on explicit prompts. The findings support the nomination of self-derivation through memory integration as a model for the accumulation of semantic knowledge and inform our understanding of the processes involved.
2.
Distant supervision (DS) has the advantage of automatically generating large amounts of labelled training data and has been widely used for relation extraction. However, the automatically labelled data usually contain many wrong labels (Riedel, Yao, & McCallum, 2010). This paper presents a novel method to reduce these wrong labels. The proposed method uses a semantic Jaccard measure built on word embeddings to compute the semantic similarity between the relation phrase in the knowledge base and the dependency phrases between two entities in a sentence, and filters out wrongly labelled instances. In the process of reducing wrong labels, the semantic Jaccard algorithm selects a core dependency phrase to represent the candidate relation in a sentence, which captures the features needed for relation classification and avoids the negative impact of the irrelevant term sequences that previous neural network models of relation extraction often suffer from. For relation classification, the core dependency phrases are also used as the input to a convolutional neural network (CNN). The experimental results show that methods using the filtered DS data perform much better in relation extraction than methods using the original DS data, indicating that the semantic-similarity-based method is effective in reducing wrong labels. The CNN model taking core dependency phrases as input performs best of all, which indicates that the core dependency phrases are sufficient to capture the features for relation classification while avoiding the negative impact of irrelevant terms.
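The abstract leaves the exact form of the semantic Jaccard measure unspecified; one plausible reading is a "soft" Jaccard in which two tokens count as matching when the cosine similarity of their word embeddings clears a threshold. A minimal sketch under that assumption (the `embed` lookup function and the 0.7 threshold are illustrative, not taken from the paper):

```python
import numpy as np

def semantic_jaccard(phrase_a, phrase_b, embed, threshold=0.7):
    """Jaccard overlap where two tokens 'match' if the cosine similarity
    of their word embeddings exceeds a threshold.
    `embed` maps a token to a 1-D numpy vector (hypothetical lookup)."""
    def cos(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    a, b = phrase_a.split(), phrase_b.split()
    matched_a = {t for t in a if any(cos(embed(t), embed(s)) >= threshold for s in b)}
    matched_b = {t for t in b if any(cos(embed(t), embed(s)) >= threshold for s in a)}
    union = len(set(a) | set(b))
    return len(matched_a | matched_b) / union if union else 0.0

# A DS-labelled sentence could then be filtered out when
# semantic_jaccard(kb_relation_phrase, core_dependency_phrase, embed)
# falls below some cut-off chosen on held-out data.
```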
3.
Traditional information retrieval techniques that rely primarily on keyword-based linking of the query and document spaces face challenges such as the vocabulary mismatch problem, where documents relevant to a given query might not be retrieved simply because different terminology is used to describe the same concepts. Semantic search techniques aim to address such limitations of keyword-based retrieval models by incorporating semantic information from standard knowledge bases such as Freebase and DBpedia. The literature has already shown that while considering semantic information alone might not improve retrieval performance over keyword-based search, it enables the retrieval of relevant documents that keyword-based methods cannot retrieve. Building indices that store and provide access to semantic information during the retrieval process is therefore important. While the process of building and querying keyword-based indices is quite well understood, the incorporation of semantic information within search indices remains an open challenge. Existing work has proposed either building one unified index encompassing both textual and semantic information or building separate yet integrated indices for each information type, but both approaches face limitations such as increased query processing time. In this paper, we propose to use neural embedding-based representations of terms, semantic entities, semantic types, and documents within the same embedding space to facilitate a unified search index covering all four information types. We perform experiments on standard and widely used document collections, including Clueweb09-B and Robust04, to evaluate the proposed indexing strategy from both effectiveness and efficiency perspectives. We find that when neural embeddings are used to build inverted indices, thereby relaxing the requirement that a posting-list key be explicitly observed in the indexed document, (a) retrieval efficiency increases compared to a standard inverted index, reducing both index size and query processing time, and (b) while retrieval efficiency, the main objective of an indexing mechanism, improves under the proposed method, retrieval effectiveness remains competitive with the baseline in terms of retrieving a reasonable number of relevant documents from the indexed corpus.
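The paper's indexing details are not given in the abstract; the sketch below illustrates only the general idea of an embedding-based inverted index, where a document is posted under its nearest keys (terms, entities, or types) in a shared embedding space rather than only under keys it literally contains. The function names and the choice of k are assumptions for illustration:

```python
import numpy as np
from collections import defaultdict

def build_embedding_index(doc_vecs, key_vecs, k=10):
    """Inverted index over a shared embedding space: each document is
    posted under its k nearest keys (terms, entities, or types), so a
    key need not literally occur in the documents it points to.
    doc_vecs: {doc_id: vector}; key_vecs: {key: unit-normalised vector}."""
    keys = list(key_vecs)
    K = np.stack([key_vecs[t] for t in keys])      # (|keys|, dim)
    index = defaultdict(list)
    for doc_id, v in doc_vecs.items():
        scores = K @ (v / np.linalg.norm(v))       # cosine, since K rows are unit
        for i in np.argsort(-scores)[:k]:
            index[keys[i]].append((doc_id, float(scores[i])))
    return index

# Querying then reduces to ordinary posting-list lookup:
# postings = index["dbpedia:Barack_Obama"]  # docs semantically close to the key
```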
4.
The proposed work explores and compares the effectiveness of syntactic-semantic linguistic structures in plagiarism detection using natural language processing techniques. The work explores linguistic features, viz., part-of-speech tags, chunks, and semantic roles, for detecting plagiarized fragments, and utilizes a combined syntactic-semantic similarity metric that extracts semantic concepts from the WordNet lexical database. The linguistic information is used both for effective pre-processing and for enabling semantically relevant comparisons. Another major contribution is the analysis of the proposed approach on plagiarism cases of various complexity levels: the impact of plagiarism types and complexity levels on the extracted features is analyzed and discussed. Further, unlike existing systems, which were evaluated on limited data sets, the proposed approach is evaluated on a larger scale using the plagiarism corpora provided by the PAN competitions from 2009 to 2014. The approach showed considerable improvement over the top-ranked systems of the respective years. The evaluation and analysis across various cases of plagiarism also confirmed the advantage of deeper linguistic features for identifying manually plagiarized text.
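The combined syntactic-semantic metric is not spelled out in the abstract; as a rough illustration of the WordNet side, the sketch below scores two text fragments with NLTK's WordNet path similarity plus exact-token overlap. The equal weighting and the whitespace tokenization are illustrative assumptions, not the paper's formulation:

```python
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def wn_similarity(word_a, word_b):
    """Best WordNet path similarity over all synset pairs (0 if none)."""
    best = 0.0
    for sa in wn.synsets(word_a):
        for sb in wn.synsets(word_b):
            sim = sa.path_similarity(sb)
            if sim is not None and sim > best:
                best = sim
    return best

def fragment_similarity(frag_a, frag_b, w_syn=0.5, w_sem=0.5):
    """Toy combined score: exact-token (syntactic) overlap plus the mean
    best WordNet similarity of each token in frag_a against frag_b."""
    a, b = frag_a.lower().split(), frag_b.lower().split()
    syn = len(set(a) & set(b)) / max(len(set(a) | set(b)), 1)
    sem = sum(max((wn_similarity(t, s) for s in b), default=0.0)
              for t in a) / max(len(a), 1)
    return w_syn * syn + w_sem * sem
```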
5.
Topic evolution has been described by many approaches, from the macro level down to fine detail, by extracting topic dynamics from text in the literature and other media. Why the evolution happens, however, is less studied. In this paper, we focus on whether and how keyword semantics can invoke or affect topic evolution. We assume that semantic relatedness among keywords can affect topic popularity during the literature surveying and citing process, thus invoking evolution. This assumption, however, needs to be confirmed by an approach that fully considers the semantic interactions among topics; traditional topic evolution analyses in scientometrics cannot provide such support because they use only limited semantic information. To address this problem, we apply Google's Word2Vec, a neural word-embedding model, to enrich the keywords with more complete semantic information, and we model the resulting semantic space as an urban geographic space. We analyze topic evolution geographically using measures of spatial autocorrelation, as if keywords were parcels of land in an evolving city. Keyword citations (a keyword's citation count increases by one whenever a paper containing it is cited) are used as an indicator of keyword popularity. Experiments on bibliographic datasets from the field of geographical natural hazards demonstrate that in some local areas the popularity of a keyword affects that of the surrounding keywords, although no significant effect is observed across all keywords. The spatial autocorrelation analysis identifies interaction patterns (including High-High leading and High-Low suppressing) among keywords in local areas. The approach can be regarded as an analytical framework borrowed from geospatial modeling. Moreover, predictions in local areas are shown to be more accurate when spatial autocorrelation is taken into account.
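As an illustration of this kind of analysis, the sketch below computes local Moran's I over an embedding space, treating each keyword's vector as its "location", its k nearest neighbours as its spatial neighbourhood, and its citation count as the attribute. The binary k-NN weights and k=8 are assumptions, not the paper's configuration:

```python
import numpy as np

def local_morans_i(vectors, citations, k=8):
    """Local Moran's I of keyword citation counts, with row-standardised
    k-nearest-neighbour weights in the embedding space.
    vectors: (n, dim) keyword embeddings; citations: (n,) counts."""
    z = citations - citations.mean()
    denom = (z ** 2).sum() / len(z)                # second moment m2
    # pairwise cosine similarities -> k nearest neighbours per keyword
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sim = v @ v.T
    np.fill_diagonal(sim, -np.inf)                 # exclude self
    nbrs = np.argsort(-sim, axis=1)[:, :k]
    lag = z[nbrs].mean(axis=1)                     # mean deviation of neighbours
    return z * lag / denom

# High-High cells (z > 0 and lag > 0) correspond to the 'leading' pattern
# in the abstract; High-Low (z > 0, lag < 0) to the 'suppressing' pattern.
```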
6.
Image-text matching bridges the gap between visual and textual modalities and plays a considerable role in cross-modal retrieval. Much progress has been achieved through semantic representation and alignment. However, the distribution of multimedia data is severely unbalanced and contains many low-frequency occurrences, which are often ignored and cause performance degradation, i.e., the long-tail effect. In this work, we propose a novel rare-aware attention network (RAAN), which explores and exploits rare textual content to tackle the long-tail effect in image-text matching. Specifically, we first design a rare-aware mining module, which contains global prior-information construction and a rare-fragment detector for modeling the characteristics of rare content. The rare attention matching then uses the prior information as attention to guide the representation enhancement of rare content and introduces a rareness representation to strengthen the similarity calculation. Finally, we design a prior-information loss to optimize the model together with the triplet loss. We perform quantitative and qualitative experiments on two large-scale databases and achieve leading performance. In particular, we conduct a zero-shot test for rare content and improve rSum by 21.0 and 41.5 on Flickr30K (155,000 image-text pairs) and MSCOCO (616,435 image-text pairs), respectively, demonstrating the effectiveness of the proposed method against the long-tail effect.
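The RAAN architecture itself is not reproduced in the abstract; the toy sketch below shows only the core intuition of rareness-aware attention: boosting attention logits for text tokens by a rareness prior so that low-frequency content is not drowned out. The log-inverse-frequency prior and the additive fusion are illustrative assumptions, not the paper's design:

```python
import torch
import torch.nn.functional as F

def rare_aware_attention(img_feats, txt_feats, txt_freqs, alpha=1.0):
    """Cross-attention from image regions to text tokens in which the
    attention logits are boosted by a rareness prior (log inverse corpus
    frequency), so rare tokens receive more attention mass.
    img_feats: (R, d) region features; txt_feats: (T, d) token features;
    txt_freqs: (T,) corpus frequencies of the tokens."""
    rareness = torch.log1p(1.0 / txt_freqs)        # higher = rarer
    logits = img_feats @ txt_feats.T               # (R, T) similarity logits
    logits = logits + alpha * rareness             # broadcast over regions
    attn = F.softmax(logits, dim=-1)
    return attn @ txt_feats                        # (R, d) text-aware regions
```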
7.
To analyze in depth the basic principles and mechanisms of the open grid service model, this paper follows service-oriented architectural principles and makes full use of the web service architecture and its integration technologies. Through a discussion of the OGSA grid service model, it illustrates the trend toward the convergence of multiple technologies. Web services are an emerging service-centric distributed systems technology, and the OGSA grid service model is an open component model built on web service technology.
8.
A Survey and Analysis of Electronic Resource Database Directory Pages on University Library Websites (cited: 2; self-citations: 0; other citations: 2)
叶允中 《现代情报》2005, 25(10): 115-117
Based on a survey of the current state of the electronic resource database directory pages of 87 university libraries in China, this paper identifies problems in how these directory pages are currently built and puts forward corresponding suggestions and measures.
9.
On the Value Judgment of Academic Achievements by Science Editors in the Network Environment (cited: 6; self-citations: 0; other citations: 6)
夏书林 《编辑学报》2006, 18(6): 401-403
In the network era, the social function of academic journals has undergone a value shift, from emphasizing the first publication of research results to emphasizing their social recognition. Science editors should judge the actual value of a discipline's scientific norms in light of its state of development, and should apply different value assessments to conventional research and to innovative research when evaluating academic achievements. Science editors should also innovate in how academic achievements are expressed.
10.
张婷 《现代情报》2007, 27(5): 106-108
To address the excessive cost and workload of updating and maintaining library websites, this paper designs and implements a distributed content submission system that greatly improves the efficiency of website updating and maintenance, enables non-technical staff to participate in editing and updating site content, and reduces dependence on specialist technical personnel; a built-in content quality control mechanism ensures the quality of the site's content.