Similar Documents
20 similar documents retrieved.
1.
胡多军 《科教文汇》2013,(14):79-81
EFL written discourse generation is a complex process of semantic integration and encoding. Writers should metacognitively analyze and reflect on their English communicative competence, the situational context of discourse generation, and EFL discourse generation strategies, and actively monitor and select the internal lexis, semantics, and content of the discourse as well as its external lexical and grammatical markers, so as to ensure that the generated discourse is accurate, well-formed, and appropriate.

2.
Existing methods for text generation usually feed the overall sentiment polarity of a product into a seq2seq model to generate a relatively fluent review. However, these methods cannot express more fine-grained sentiment polarity. Although some studies attempt to generate aspect-level sentiment-controllable reviews, the personalized attributes of reviews are ignored. In this paper, a hierarchical template-transformer model is proposed for personalized fine-grained sentiment-controllable generation, which aims to generate aspect-level sentiment-controllable reviews with personalized information. The hierarchical structure can effectively learn sentiment information and lexical information separately. The template transformer uses a part-of-speech (POS) template to guide the generation process and generate a smoother review. To verify our model, we used an existing model to build a corpus named FSCG-80 from Yelp, which contains 800K samples, and conducted a series of experiments on this corpus. Experimental results show that our model can achieve up to 89.93% aspect-sentiment control accuracy and generate more fluent reviews.
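A minimal sketch of the POS-template idea described above: a template of part-of-speech tags constrains what may be emitted at each slot, and the adjective slot is filled according to the requested aspect-level sentiment. The toy lexicons below are illustrative assumptions, not the paper's actual model.

```python
# Toy aspect-sentiment adjective lexicon (illustrative assumption).
ADJ = {("food", "+"): "delicious", ("food", "-"): "bland",
       ("service", "+"): "friendly", ("service", "-"): "slow"}

def generate_from_template(template, aspect, polarity):
    """Fill a POS template left to right; the ADJ slot is constrained
    by the requested aspect-level sentiment, mimicking how the POS
    template guides decoding."""
    slots = {"DET": "the", "NOUN": aspect, "VERB": "was",
             "ADJ": ADJ[(aspect, polarity)]}
    return " ".join(slots[tag] for tag in template)

print(generate_from_template(["DET", "NOUN", "VERB", "ADJ"], "service", "+"))
# -> "the service was friendly"
```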

3.
This paper presents a formalism for the representation of complex semantic relations among concepts of natural language. We define a semantic algebra as a set of atomic concepts together with an ordered set of semantic relations. Semantic trees are a graphical representation of a semantic algebra (comparable to Kantorovic trees for boolean or arithmetical expressions). A semantic tree is an ordered tree with nodes labeled with relation and concept names. We generate semantic trees from natural language texts in such a way that they represent the semantic relations which hold among the concepts occurring within that text. This generation process is carried out by a transformational grammar which transforms natural language sentences directly into semantic trees. We present an example for concepts and relations within the domain of computer science, where we have generated semantic trees from definition texts by means of a metalanguage for transformational grammars (a sort of metacompiler for transformational grammars). The semantic trees generated so far serve as thesaurus entries in an information retrieval system.
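A small sketch of the data structure implied above: an ordered tree whose nodes carry relation or concept labels. The relation names and the sample definition sentence are invented for illustration.

```python
class SemNode:
    """Node of a semantic tree: a relation or concept label plus
    an ordered list of children."""
    def __init__(self, label, *children):
        self.label, self.children = label, list(children)

    def __repr__(self):
        if not self.children:
            return self.label
        return f"{self.label}({', '.join(map(repr, self.children))})"

# "A compiler is a program that translates source code into machine code."
tree = SemNode("IS_A",
               SemNode("compiler"),
               SemNode("program",
                       SemNode("AGENT_OF",
                               SemNode("translate",
                                       SemNode("OBJ", SemNode("source code")),
                                       SemNode("GOAL", SemNode("machine code"))))))
print(tree)
# IS_A(compiler, program(AGENT_OF(translate(OBJ(source code), GOAL(machine code)))))
```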

4.
The success of information retrieval depends on the ability to measure the effective relationship between a query and its response. If both are posed in natural language, one might expect that understanding the meaning of that language could not be avoided. The aim of this research is to demonstrate that it is perhaps unnecessary to determine the meaning in an absolute sense; it may be sufficient to measure how far there is conformity in meaning, and then only in the context of the set of documents in which the answer to a query is sought. Handling a particular language by computer is made possible by replacing certain texts with special sets. A given text has a ‘syntactic trace’: the set of all the overlapping trigrams forming part of the text. When determining the effective relationship between a query and its answer, not only do their syntactic traces play a role, but so do the traces of all other documents in the set. This is known as the ‘information trace method’.
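The ‘syntactic trace’ lends itself to a direct sketch: the set of overlapping character trigrams of a text, compared with a set-overlap coefficient. The Dice coefficient used below is an illustrative choice; the paper's actual method also weighs the traces of the other documents in the collection.

```python
def syntactic_trace(text: str) -> set:
    """Set of all overlapping character trigrams in the text."""
    t = text.lower()
    return {t[i:i + 3] for i in range(len(t) - 2)}

def dice(a: set, b: set) -> float:
    """Dice overlap between two traces (0 = disjoint, 1 = identical)."""
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 0.0

query = syntactic_trace("information retrieval")
doc = syntactic_trace("retrieval of information")
print(f"{dice(query, doc):.2f}")  # high overlap despite different word order
```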

5.
6.
[Purpose/Significance] Intent classification of short texts from social networks with purely statistical natural language processing techniques suffers from sparse features, semantic ambiguity, and insufficient labeled data. To address these problems, a Co-training intent classification method that incorporates psycholinguistic information is proposed. [Method/Process] First, to enrich the semantic information, psycholinguistic cues carrying sentiment orientation are fused into the extracted text features to expand the feature dimensions. Second, to cope with the limited labeled data, a semi-supervised ensemble method is used during model training to co-train two machine-learning classifiers (one based on event-content expression and one based on emotional-event expression). Finally, classification is performed by voting with the product of the classifiers' confidences. [Results/Conclusion] Experimental results show that the classifier enriched with psycholinguistic information and refined by co-training performs better.
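A compressed sketch of the co-training loop described above, assuming scikit-learn, two pre-built feature views (Xc: content features, Xs: sentiment/psycholinguistic features), and a label array y with placeholder values at the unlabeled indices; the confidence threshold and classifier choice are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(Xc, Xs, y, lab_idx, unlab_idx, rounds=5, thresh=0.9):
    """Co-training over two views of the same instances."""
    lab, unlab = list(lab_idx), list(unlab_idx)
    clf_c = LogisticRegression(max_iter=1000)
    clf_s = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        if not unlab:
            break
        clf_c.fit(Xc[lab], y[lab]); clf_s.fit(Xs[lab], y[lab])
        # Each view labels the unlabeled points it is most confident about.
        pc = clf_c.predict_proba(Xc[unlab]); ps = clf_s.predict_proba(Xs[unlab])
        conf = np.maximum(pc.max(1), ps.max(1))
        picked = [unlab[i] for i in np.where(conf >= thresh)[0]]
        for i in picked:  # product-of-confidence vote for the pseudo-label
            probs = clf_c.predict_proba(Xc[[i]])[0] * clf_s.predict_proba(Xs[[i]])[0]
            y[i] = clf_c.classes_[probs.argmax()]
        lab += picked
        unlab = [i for i in unlab if i not in picked]
    return clf_c, clf_s
```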

7.
An Empirical Study of the Psychology of Natural Language Understanding in Short-Text Classification    Cited by: 1 (self-citations: 0, citations by others: 1)
Most current research on text classification focuses on feature selection or classifier algorithms over large-scale corpora. This paper instead assumes few training samples of short length and conducts an empirical study of short-text classification based on a psychological principle of how the human brain understands natural language: people judge by the most familiar, most typical known examples, and resort to the concept of frequency only when that approach fails, and then only to very simple frequencies. Taking the psychological "familiarity principle" and "typicality principle" as models, we build a special-word lexicon and a typical-case lexicon, improve the experimental procedure of traditional text classification, and discuss the advantages and limitations of the method.
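The "typical example first, frequency as fallback" principle sketched above can be illustrated in a few lines; the lexicons here are invented stand-ins for the paper's special-word and typical-case libraries.

```python
from collections import Counter

# Invented stand-ins for the typical-case / special-word lexicons.
TYPICAL_CASES = {"refund": "complaint", "thank you so much": "praise"}
SPECIAL_WORDS = {"broken": "complaint", "great": "praise"}

def classify(text, category_word_freq):
    # 1) The most familiar/typical examples decide first ...
    for phrase, label in TYPICAL_CASES.items():
        if phrase in text:
            return label
    for word, label in SPECIAL_WORDS.items():
        if word in text:
            return label
    # 2) ... and only then fall back to a very simple frequency count.
    votes = Counter()
    for word in text.split():
        for label, freq in category_word_freq.items():
            votes[label] += freq.get(word, 0)
    return votes.most_common(1)[0][0] if votes else None
```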

8.
We present a new paradigm for the automatic creation of document headlines that is based on direct transformation of relevant textual information into well-formed textual output. Starting from an input document, we automatically create compact representations of weighted finite sets of strings, called WIDL-expressions, which encode the most important topics in the document. A generic natural language generation engine performs the headline generation task, driven by both statistical knowledge encapsulated in WIDL-expressions (representing topic biases induced by the input document) and statistical knowledge encapsulated in language models (representing biases induced by the target language). Our evaluation shows quality on par with a state-of-the-art extractive approach to headline generation, and significant quality improvements over previously proposed solutions to abstractive headline generation.
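A toy rendering of the idea above: a weighted set of candidate headline strings (standing in for a WIDL-expression) is rescored by a language model, and the generator picks the candidate that balances topic weight and fluency. The bigram scorer and all probabilities below are placeholder assumptions.

```python
import math

# Weighted candidate headlines (a stand-in for a WIDL-expression).
candidates = {"markets rally on rate cut": 0.6,
              "rate cut markets rally on": 0.25,
              "rally markets cut rate": 0.15}

def lm_logprob(sentence, bigram_logp):
    """Placeholder bigram language model score."""
    words = sentence.split()
    return sum(bigram_logp.get((a, b), -5.0) for a, b in zip(words, words[1:]))

bigram_logp = {("markets", "rally"): -0.5, ("rally", "on"): -0.7,
               ("on", "rate"): -1.0, ("rate", "cut"): -0.4}

best = max(candidates,
           key=lambda s: math.log(candidates[s]) + lm_logprob(s, bigram_logp))
print(best)  # topic weight and fluency jointly select the well-formed headline
```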

9.
The application of natural language processing (NLP) to financial fields is advancing with an increase in the number of available financial documents. Transformer-based models such as Bidirectional Encoder Representations from Transformers (BERT) have been successful in NLP in recent years. These cutting-edge models have been adapted to the financial domain by applying financial corpora to existing pre-trained models and by pre-training with financial corpora from scratch. In Japanese, by contrast, financial terminology cannot be handled by a general vocabulary without further processing. In this study, we construct language models suitable for the financial domain. Furthermore, we compare methods for adapting language models to the financial domain, such as pre-training methods and vocabulary adaptation. We confirm that adapting the pre-training corpus and tokenizer vocabulary to a corpus of financial text is effective in several downstream financial tasks. No significant difference is observed between pre-training with the financial corpus from scratch and continuing pre-training from the general language model with the financial corpus. We have released our source code and pre-trained models.
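A hedged sketch of the two adaptation steps compared above, vocabulary adaptation and continued pre-training, using the Hugging Face transformers API; the checkpoint name, the domain-token list, and the `financial_dataset` variable are placeholders, not the paper's actual setup.

```python
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

ckpt = "cl-tohoku/bert-base-japanese"  # placeholder general-domain checkpoint
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForMaskedLM.from_pretrained(ckpt)

# Vocabulary adaptation: add financial terms so they stop being split
# into meaningless subwords, then resize the embedding matrix.
new_tokens = ["自己資本比率", "有価証券報告書"]  # placeholder financial terms
tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))

# Continued pre-training on a financial corpus with the masked-LM objective.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
trainer = Trainer(model=model,
                  args=TrainingArguments(output_dir="finbert-ja",
                                         num_train_epochs=1),
                  data_collator=collator,
                  train_dataset=financial_dataset)  # pre-tokenized corpus (assumed)
trainer.train()
```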

10.
Social media represents an emerging challenging sector where the natural language expressions of people can be easily reported through blogs and short text messages. This is rapidly creating unique content of massive dimensions that needs to be efficiently and effectively analyzed to create actionable knowledge for decision-making processes. A key piece of information that can be gleaned from social environments is the polarity of text messages. To better capture the sentiment orientation of the messages, several valuable expressive forms could be taken into account. In this paper, three expressive signals, typically used in microblogs, have been explored: (1) adjectives, (2) emoticon, emphatic and onomatopoeic expressions and (3) expressive lengthening. Once a text message has been normalized to bring social media posts closer to a canonical language, the considered expressive signals have been used to enrich the feature space and train several baseline and ensemble classifiers aimed at polarity classification. The experimental results show that adjectives are more discriminative and impacting than the other considered expressive signals.
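The three expressive signals are easy to operationalize; a minimal sketch follows, with an invented emoticon shortlist and a tiny adjective lexicon standing in for the paper's resources (a real system would use a POS tagger for adjectives).

```python
import re

EMOTICONS = {":)", ":(", ":D", ";)", "xD"}           # invented shortlist
ADJECTIVES = {"good", "bad", "awesome", "terrible"}  # stand-in for a POS tagger

def expressive_features(msg: str) -> dict:
    tokens = msg.split()
    return {
        "n_adjectives": sum(t.lower().strip(".,!?") in ADJECTIVES for t in tokens),
        "n_emoticons": sum(t in EMOTICONS for t in tokens),
        # Expressive lengthening: a character repeated 3+ times ("sooo").
        "n_lengthening": len(re.findall(r"(\w)\1{2,}", msg)),
        "n_emphatic": sum(c == "!" for c in msg),
    }

print(expressive_features("sooo goooood :D !!!"))
# {'n_adjectives': 0, 'n_emoticons': 1, 'n_lengthening': 2, 'n_emphatic': 3}
```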

11.
Performance of text classification models tends to drop over time due to changes in data, which limits the lifetime of a pretrained model. Therefore an ability to predict a model’s capacity to persist over time can help design models that can be effectively used over a longer period. In this paper, we provide a thorough discussion of the problem and establish an evaluation setup for the task. We look at this problem from a practical perspective by assessing the ability of a wide range of language models and classification algorithms to persist over time, as well as how dataset characteristics can help predict the temporal stability of different models. We perform longitudinal classification experiments on three datasets spanning between 6 and 19 years, and involving diverse tasks and types of data. By splitting the longitudinal datasets into years, we perform a comprehensive set of experiments by training and testing across data that are different numbers of years apart from each other, both in the past and in the future. This enables a gradual investigation into the impact of the temporal gap between training and test sets on the classification performance, as well as measuring the extent of the persistence over time. Through experimenting with a range of language models and algorithms, we observe a consistent trend of performance drop over time, which however differs significantly across datasets; indeed, datasets whose domain is more closed and language is more stable, such as book reviews, exhibit a less pronounced performance drop than open-domain social media datasets where language varies significantly more. We find that one can estimate how a model will retain its performance over time based on (i) how well the model performs over a restricted time period and its extrapolation to a longer time period, and (ii) the linguistic characteristics of the dataset, such as the familiarity score between subsets from different years. Findings from these experiments have important implications for the design of text classification models with the aim of preserving performance over time.
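A condensed sketch of the longitudinal protocol described above: split a dataset by year, train on one year, and test on every other year to measure performance as a function of the temporal gap. Uses scikit-learn; the vectorizer and classifier are illustrative choices, not the paper's full model range.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def temporal_grid(texts_by_year, labels_by_year):
    """Macro-F1 for every (train year, test year) pair; widening gaps
    expose the performance drop over time."""
    results = {}
    for tr in texts_by_year:
        vec = TfidfVectorizer(min_df=2)
        X_tr = vec.fit_transform(texts_by_year[tr])
        clf = LogisticRegression(max_iter=1000).fit(X_tr, labels_by_year[tr])
        for te in texts_by_year:
            if te == tr:
                continue
            X_te = vec.transform(texts_by_year[te])
            results[(tr, te)] = f1_score(labels_by_year[te],
                                         clf.predict(X_te), average="macro")
    return results  # e.g. plot F1 against te - tr to see the trend
```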

12.
马晓炜  黄乐 《科教文汇》2013,(16):112-112,115
Expressions of gratitude are an important part of interpersonal communication and an indispensable element of literary works. Through a corpus analysis of gratitude expressions in 《红楼梦》 (Dream of the Red Chamber), we divide them into direct and indirect gratitude expressions and discuss them mainly in terms of their internal structure and forms of expression.

13.
Due to the large repository of documents available on the web, users are usually inundated by a large volume of information, most of which is found to be irrelevant. Since user perspectives vary, a client-side text filtering system that learns the user's perspective can reduce the problem of irrelevant retrieval. In this paper, we have provided the design of a customized text information filtering system which learns user preferences and modifies the initial query to fetch better documents. It uses a rough-fuzzy reasoning scheme. The rough-set based reasoning takes care of natural language nuances, like synonym handling, very elegantly. The fuzzy decider provides qualitative grading to the documents for the user's perusal. We have provided the detailed design of the various modules and some results related to the performance analysis of the system.
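A toy sketch of the two reasoning stages named above: rough-set-style synonym handling expands the query to the upper approximation of its terms, and a fuzzy grade turns the match score into a qualitative label. The synonym classes and grade cutoffs are invented for illustration.

```python
# Invented synonym classes (indiscernibility classes over terms).
SYN_CLASSES = [{"car", "automobile", "vehicle"}, {"fast", "quick", "rapid"}]

def upper_approximation(query_terms):
    """Rough-set flavored expansion: every class that intersects
    the query contributes all of its members."""
    expanded = set(query_terms)
    for cls in SYN_CLASSES:
        if cls & expanded:
            expanded |= cls
    return expanded

def fuzzy_grade(doc_terms, query_terms):
    """Fuzzy membership of the document in 'relevant', graded qualitatively."""
    q = upper_approximation(query_terms)
    mu = len(q & doc_terms) / len(q)
    return ("high" if mu > 0.6 else "medium" if mu > 0.3 else "low"), mu

print(fuzzy_grade({"rapid", "vehicle", "review"}, {"fast", "car"}))
# ('medium', 0.333...) -- 2 of the 6 expanded query terms matched
```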

14.
A new approach to narrative abstractive summarization (NATSUM) is presented in this paper. NATSUM focuses on generating a chronologically ordered narrative summary about a target entity from several news documents related to the same topic. To achieve this, first, our system creates a cross-document timeline where a time point contains all the event mentions that refer to the same event. This timeline is enriched with all the arguments of the events that are extracted from different documents. Second, using natural language generation techniques, one sentence for each event is produced using the arguments involved in the event. Specifically, a hybrid surface realization approach is used, based on over-generation and ranking techniques. The evaluation demonstrates that NATSUM performed better than extractive summarization approaches and competitive abstractive baselines, improving the F1-measure by at least 50% when a real scenario is simulated.
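A compact sketch of the pipeline above: event mentions from several documents are merged into a cross-document timeline keyed by event identity, then one sentence per event is realized from its aggregated arguments. The mention format and the trivial template realizer are assumptions, not NATSUM's actual components.

```python
from collections import defaultdict

# Each mention: (event_id, time, predicate, {role: argument}); format assumed.
mentions = [
    ("e1", "2019-03-01", "sign",  {"agent": "the club", "patient": "the striker"}),
    ("e1", "2019-03-01", "sign",  {"place": "Madrid"}),
    ("e2", "2019-03-10", "debut", {"agent": "the striker"}),
]

def build_timeline(mentions):
    """Merge coreferent mentions and order events chronologically."""
    events = defaultdict(lambda: {"args": {}})
    for eid, time, pred, args in mentions:
        events[eid].update(time=time, pred=pred)
        events[eid]["args"].update(args)  # arguments enriched across documents
    return sorted(events.values(), key=lambda e: e["time"])

def realize(event):
    """Trivial stand-in for hybrid surface realization."""
    a = event["args"]
    s = f"On {event['time']}, {a.get('agent', 'someone')} {event['pred']}ed"
    if "patient" in a: s += f" {a['patient']}"
    if "place" in a: s += f" in {a['place']}"
    return s + "."

print(" ".join(realize(e) for e in build_timeline(mentions)))
# On 2019-03-01, the club signed the striker in Madrid. On 2019-03-10, ...
```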

15.
马剑  许宏生 《科教文汇》2013,(21):157-158
The new curriculum should embody the dialectical unity of presetting and generation. Drawing on cases from front-line chemistry teaching competitions, this paper discusses how to handle the relationship between presetting and generation from three aspects: presetting teaching situations, applying multi-dimensional evaluation methods, and achieving unlimited generation within limited preset teaching time.

16.
丁玲 《科教文汇》2011,(14):77-77,106
Learning the background knowledge of a text is an important part of the Chinese language classroom, and many teachers schedule it for the first lesson. Based on the principles of background-knowledge learning, this paper analyzes how background knowledge should be taught in Chinese language instruction.

17.
Recently, the Transformer model architecture and pre-trained Transformer-based language models have shown impressive performance when used in solving both natural language understanding and text generation tasks. Nevertheless, little research has been done on using these models for text generation in Arabic. This research aims at leveraging and comparing the performance of different model architectures, including RNN-based and Transformer-based ones, and different pre-trained language models, including mBERT, AraBERT, AraGPT2, and AraT5, for Arabic abstractive summarization. We first built an Arabic summarization dataset of 84,764 high-quality text-summary pairs. To use mBERT and AraBERT in the context of text summarization, we employed a BERT2BERT-based encoder-decoder model where we initialized both the encoder and decoder with the respective model weights. The proposed models have been tested using ROUGE metrics and manual human evaluation. We also compared their performance on out-of-domain data. Our pre-trained Transformer-based models give a large improvement in performance with ~79% less data. We found that AraT5 scores ~3 ROUGE points higher than a BERT2BERT-based model initialized with AraBERT, indicating that an encoder-decoder pre-trained Transformer is more suitable for summarizing Arabic text. Also, both of these models perform better than AraGPT2 by a clear margin, which we found to produce summaries with high readability but relatively lower quality. On the other hand, we found that both AraT5 and AraGPT2 are better at summarizing out-of-domain text. We have released our models and dataset publicly.
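A hedged sketch of the BERT2BERT setup described above using the Hugging Face EncoderDecoderModel, which ties a pre-trained encoder and decoder together; the AraBERT checkpoint name should be verified, and the generation settings are illustrative rather than the paper's configuration.

```python
from transformers import AutoTokenizer, EncoderDecoderModel

ckpt = "aubmindlab/bert-base-arabertv02"  # AraBERT checkpoint (verify name)
tokenizer = AutoTokenizer.from_pretrained(ckpt)

# Initialize both encoder and decoder from the same BERT weights;
# the decoder's cross-attention layers are newly initialized.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(ckpt, ckpt)
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.sep_token_id

def summarize(text: str) -> str:
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    ids = model.generate(inputs.input_ids, max_length=64, num_beams=4)
    return tokenizer.decode(ids[0], skip_special_tokens=True)
# After fine-tuning on text-summary pairs, summarize() yields the abstract.
```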

18.
Text categorization pertains to the automatic learning of a text categorization model from a training set of preclassified documents on the basis of their contents and the subsequent assignment of unclassified documents to appropriate categories. Most existing text categorization techniques deal with monolingual documents (i.e., written in the same language) during the learning of the text categorization model and category assignment (or prediction) for unclassified documents. However, with the globalization of business environments and advances in Internet technology, an organization or individual may generate and organize into categories documents in one language and subsequently archive documents in different languages into existing categories, which necessitates cross-lingual text categorization (CLTC). Specifically, cross-lingual text categorization deals with learning a text categorization model from a set of training documents written in one language (e.g., L1) and then classifying new documents in a different language (e.g., L2). Motivated by the significance of this demand, this study aims to design a CLTC technique with two different category assignment methods, namely, individual- and cluster-based. Using monolingual text categorization as a performance reference, our empirical evaluation results demonstrate the cross-lingual capability of the proposed CLTC technique. Moreover, the classification accuracy achieved by the cluster-based category assignment method is statistically significantly higher than that attained by the individual-based method.
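A rough sketch of the cluster-based category assignment contrasted above: L1 training documents are translated into L2 (the translate() callable is a placeholder), each category's documents are clustered, and a new L2 document receives the category of its nearest cluster centroid. Uses scikit-learn; the cluster count is an illustrative choice.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def fit_cltc(train_docs_l1, labels, translate, n_clusters=3):
    """Cluster-based CLTC: translate L1 training docs, cluster per category."""
    docs_l2 = [translate(d) for d in train_docs_l1]  # translate() is assumed
    vec = TfidfVectorizer().fit(docs_l2)
    centroids, cats = [], []
    for cat in set(labels):
        X = vec.transform([d for d, l in zip(docs_l2, labels) if l == cat])
        k = min(n_clusters, X.shape[0])
        km = KMeans(n_clusters=k, n_init=10).fit(X)
        centroids.append(km.cluster_centers_)
        cats += [cat] * k  # each category is represented by several centroids
    return vec, np.vstack(centroids), cats

def predict(vec, centroids, cats, doc_l2):
    x = vec.transform([doc_l2]).toarray()
    sims = centroids @ x.T  # tf-idf vectors are L2-normalized, so dot = cosine
    return cats[int(sims.argmax())]
```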

19.
Sarcasm expression is a pervasive literary technique in which people intentionally express the opposite of what is implied. Accurate detection of sarcasm in a text can facilitate the understanding of speakers’ true intentions and promote other natural language processing tasks, especially sentiment analysis tasks. Since sarcasm is a kind of implicit sentiment expression and speakers deliberately confuse the audience, it is challenging to detect sarcasm from text alone. Existing approaches based on machine learning and deep learning achieve unsatisfactory performance when handling sarcastic text with complex expression or text that needs specific background knowledge to understand. In particular, due to the characteristics of the Chinese language itself, sarcasm detection in Chinese is more difficult. To alleviate this dilemma in Chinese sarcasm detection, we propose a sememe and auxiliary enhanced attention neural model, SAAG. At the word level, we introduce sememe knowledge to enhance the representation learning of Chinese words. A sememe is the minimum unit of meaning, a fine-grained portrayal of a word. At the sentence level, we leverage auxiliary information, such as the news title, to learn the representation of the context and background of sarcasm expression. Then, we construct the representation of text expression progressively and dynamically. The evaluation on a sarcasm dataset, consisting of comments on news text, reveals that our proposed approach is effective and outperforms the state-of-the-art models.
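A toy numpy sketch of the word-level idea above: a word's representation is enhanced by attending over the embeddings of its sememes (minimum meaning units). The sememe inventory and all vectors are fabricated; SAAG's actual architecture is considerably richer.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
sememe_vecs = {s: rng.normal(size=dim) for s in ["human", "speak", "negative"]}
word_sememes = {"讽刺": ["speak", "negative"]}  # fabricated HowNet-style entry
word_vecs = {"讽刺": rng.normal(size=dim)}

def sememe_enhanced(word):
    """Attention-weighted mix of the word vector and its sememe vectors."""
    w = word_vecs[word]
    S = np.stack([sememe_vecs[s] for s in word_sememes[word]])
    att = np.exp(S @ w); att /= att.sum()  # softmax attention over sememes
    return w + att @ S                     # enhanced word representation

print(sememe_enhanced("讽刺").shape)  # (8,)
```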

20.
Two probabilistic approaches to cross-lingual retrieval are in wide use today: those based on probabilistic models of relevance, as exemplified by INQUERY, and those based on language modeling. INQUERY, as a query net model, allows the easy incorporation of query operators, including a synonym operator, which has proven to be extremely useful in cross-language information retrieval (CLIR), in an approach often called structured query translation. In contrast, language models incorporate translation probabilities into a unified framework. We compare the two approaches on Arabic and Spanish data sets, using two kinds of bilingual dictionaries: one derived from a conventional dictionary, and one derived from a parallel corpus. We find that structured query processing gives slightly better results when queries are not expanded. On the other hand, when queries are expanded, language modeling gives better results, but only when using a probabilistic dictionary derived from a parallel corpus. We pursue two additional issues inherent in the comparison of structured query processing with language modeling. The first concerns query expansion, and the second is the role of translation probabilities. We compare conventional expansion techniques (pseudo-relevance feedback) with relevance modeling, a new IR approach which fits into the formal framework of language modeling. We find that relevance modeling and pseudo-relevance feedback achieve comparable levels of retrieval, and that good translation probabilities confer a small but significant advantage.
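The translation-probability language model compared above has a compact form: P(q|D) = Σ_t P(q|t) P(t|D), smoothed with collection statistics. A minimal sketch with fabricated dictionary probabilities and Jelinek-Mercer smoothing; structured query translation would instead merge the translations under a synonym operator.

```python
import math

# Fabricated translation probabilities P(query_term | doc_term).
P_TRANS = {"house": {"casa": 0.9, "hogar": 0.1}}

def query_likelihood(query, doc_tf, coll_tf, coll_len, lam=0.5):
    """log P(q|D) with translation probabilities and Jelinek-Mercer smoothing."""
    doc_len = sum(doc_tf.values())
    score = 0.0
    for q in query:
        # P(q|D) = sum_t P(q|t) * P(t|D), mixed with the collection model.
        p_doc = sum(p * doc_tf.get(t, 0) / doc_len
                    for t, p in P_TRANS.get(q, {}).items())
        p_coll = sum(p * coll_tf.get(t, 0) / coll_len
                     for t, p in P_TRANS.get(q, {}).items())
        score += math.log(lam * p_doc + (1 - lam) * p_coll + 1e-12)
    return score

doc = {"casa": 2, "roja": 1}
print(query_likelihood(["house"], doc, {"casa": 10, "hogar": 3}, 1000))
```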
