1.
Analysis of common errors in hypothesis-testing methods for qualitative data in medical papers (cited 2 times: 1 self-citation, 1 other)
Zhang Gongyuan. 《编辑学报》, 2002, 14(3): 184-186
Analyses common errors in the use of hypothesis-testing methods for qualitative (categorical) data in medical papers. Argues that authors should master the applicability conditions of a statistical method before using it, and that editors should strengthen their review of the statistical methods used in manuscripts.
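The paper's subject is the applicability conditions of tests for categorical data. As a hedged, illustrative sketch (not code from the paper), the snippet below encodes one commonly taught decision rule for a 2×2 contingency table: Pearson's chi-square when n ≥ 40 and every expected frequency is at least 5, the continuity-corrected chi-square when the smallest expected frequency falls between 1 and 5, and Fisher's exact test otherwise. The function name choose_test_2x2 and the exact thresholds are assumptions of this example, not the paper's wording.

```python
# Illustrative sketch only: a commonly taught applicability rule for 2x2
# tables (the thresholds n >= 40 and expected frequency >= 5 / >= 1 are
# textbook conventions, not taken from the paper under discussion).
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

def choose_test_2x2(table):
    table = np.asarray(table)
    n = table.sum()
    # chi2_contingency also returns the matrix of expected frequencies
    _, _, _, expected = chi2_contingency(table, correction=False)
    if n >= 40 and expected.min() >= 5:
        _, p, _, _ = chi2_contingency(table, correction=False)
        return "Pearson chi-square", p
    if n >= 40 and expected.min() >= 1:
        _, p, _, _ = chi2_contingency(table, correction=True)
        return "continuity-corrected chi-square", p
    _, p = fisher_exact(table)  # exact test for small samples
    return "Fisher's exact test", p

method, p = choose_test_2x2([[12, 3], [4, 11]])  # n = 30 < 40
print(method, p)                                 # -> Fisher's exact test
```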
2.
Standardized presentation of significance-test results for differences between means in figures and tables (cited 2 times: 0 self-citations, 2 other)
Hao Ladi, He Ping. 《编辑学报》, 2008, 20(2): 120-122
The presentation of significance-test results for differences between means in the figures and tables of scientific papers suffers from many problems: the results are not marked in the figure or table at all, the marking symbols vary, the positions of the symbols are inconsistent, descriptions of the results are incomplete, and the notes to figures and tables describe the results incompletely or even incorrectly. Based on an analysis of these problems, suggestions for standardized presentation are proposed.
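As a hedged illustration of one widely used marking convention (not a rule quoted from the paper), the sketch below maps p-values to the common asterisk notation (* for P < 0.05, ** for P < 0.01) and attaches the marker to a mean ± SD table cell; the thresholds, the helper name significance_marker, and the numbers are all assumptions of this example.

```python
# Illustrative sketch: one common convention for marking significance in a
# results table (* P < 0.05, ** P < 0.01); the thresholds are conventional,
# not prescribed by the paper under discussion.
def significance_marker(p):
    if p is None:
        return ""    # reference group: no comparison performed
    if p < 0.01:
        return "**"
    if p < 0.05:
        return "*"
    return ""

# group, mean, SD, p-value versus the control group (hypothetical numbers)
rows = [("Control", 4.1, 0.9, None), ("Treatment", 5.2, 0.8, 0.004)]
for group, mean, sd, p in rows:
    print(f"{group}\t{mean:.1f} ± {sd:.1f}{significance_marker(p)}")
```

Here the treatment row prints as "5.2 ± 0.8**", placing the marker directly after the value it qualifies, one of the consistent placements the paper argues for.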
3.
In the field of scientometrics, impact indicators and ranking algorithms are frequently evaluated using unlabelled test data comprising relevant entities (e.g., papers, authors, or institutions) that are considered important. The rationale is that the higher some algorithm ranks these entities, the better its performance. To compute a performance score for an algorithm, an evaluation measure is required to translate the rank distribution of the relevant entities into a single-value performance score. Until recently, it was simply assumed that taking the average rank (of the relevant entities) is an appropriate evaluation measure when comparing ranking algorithms or fine-tuning algorithm parameters.

With this paper we propose a framework for evaluating the evaluation measures themselves. Using this framework the following questions can now be answered: (1) which evaluation measure should be chosen for an experiment, and (2) given an evaluation measure and corresponding performance scores for the algorithms under investigation, how significant are the observed performance differences?

Using two publication databases and four test data sets we demonstrate the functionality of the framework and analyse the stability and discriminative power of the most common information retrieval evaluation measures. We find that there is no clear winner and that the performance of the evaluation measures is highly dependent on the underlying data. Our results show that the average rank is indeed an adequate and stable measure. However, we also show that relatively large performance differences are required to confidently determine if one ranking algorithm is significantly superior to another. Lastly, we list alternative measures that also yield stable results and highlight measures that should not be used in this context.
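To make the abstract's central notion concrete, here is a hedged sketch (my own illustration, not the paper's code or test data) of how the rank distribution of relevant entities is collapsed into single-value performance scores: the average rank the paper examines, plus two other common information-retrieval measures. All ranks and algorithm names are hypothetical.

```python
# Illustrative sketch: three single-value evaluation measures computed from
# the 1-based ranks an algorithm assigns to the relevant entities.
def average_rank(ranks):
    # lower is better: the measure the paper finds adequate and stable
    return sum(ranks) / len(ranks)

def mean_reciprocal_rank(ranks):
    # higher is better; dominated by the best-ranked relevant entity
    return sum(1.0 / r for r in ranks) / len(ranks)

def average_precision(ranks):
    # higher is better; precision accumulated at each relevant entity's rank
    ranks = sorted(ranks)
    return sum((i + 1) / r for i, r in enumerate(ranks)) / len(ranks)

# Hypothetical ranks of the same three relevant entities under two algorithms
for name, ranks in [("algorithm A", [2, 5, 9]), ("algorithm B", [1, 30, 40])]:
    print(name, average_rank(ranks),
          mean_reciprocal_rank(ranks), average_precision(ranks))
```

Note how the measures can disagree: algorithm B wins on mean reciprocal rank (it places one relevant entity first) while algorithm A wins on average rank, which is exactly the kind of data-dependent behaviour the paper investigates.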