An analysis of evaluation campaigns in ad-hoc medical information retrieval: CLEF eHealth 2013 and 2014
Authors: Lorraine Goeuriot, Gareth J. F. Jones, Liadh Kelly, Johannes Leveling, Mihai Lupu, Joao Palotti, Guido Zuccon
Institution: 1. LIG, Université Grenoble Alpes, Grenoble, France; 2. Dublin City University, Dublin, Ireland; 3. Maynooth University, Maynooth, Ireland; 4. TU Wien, Vienna, Austria; 5. Queensland University of Technology, Brisbane, Australia
Abstract: Since its inception in 2013, one of the key contributions of the CLEF eHealth evaluation campaign has been the organization of an ad-hoc information retrieval (IR) benchmarking task. This IR task evaluates systems intended to support laypeople searching for and understanding health information. Each year the task provides registered participants with standard IR test collections consisting of a document collection and a topic set. Participants then return the retrieval results produced by their IR systems for each query, and these results are assessed using a pooling procedure. In this article we focus on the CLEF eHealth 2013 and 2014 retrieval tasks, whose topics were created from patients' information needs associated with their medical discharge summaries. We give an overview of the task and the datasets created, and of the results obtained by participating teams over these two years. We then provide a detailed comparative analysis of the results and evaluate the datasets in light of them. This twofold study of the evaluation campaign teaches us about technical aspects of medical IR, such as the effectiveness of query expansion; about the quality and characteristics of the CLEF eHealth IR datasets, such as their reliability; and about how to run an IR evaluation campaign in the medical domain.
This article is indexed in SpringerLink and other databases.