Similar Documents
A total of 20 similar documents were found (search time: 843 ms).
1.
This paper presents a course-timetabling algorithm suited to academic affairs offices. Drawing on ideas from resource management, it models the problem as a matrix whose elements are sets, and its implementation is built on set operations. The algorithm is dynamic and yields suboptimal solutions, but its time and space complexity grow roughly in proportion to the problem size.
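A minimal sketch of the set-based cell model described above, assuming each timetable cell holds the set of resources still free in that slot and that scheduling is done with set operations; all names and data are illustrative, not taken from the paper:

```python
# Each slot keeps the *set* of resources still free; feasibility is tested with
# set intersection and booking is done with set difference.
slots = ["Mon1", "Mon2", "Tue1"]
free = {s: {"teachers": {"Li", "Wang"}, "rooms": {"R101", "R102"}} for s in slots}

def schedule(course, teacher, candidate_rooms):
    for slot in slots:
        rooms = free[slot]["rooms"] & candidate_rooms          # set intersection
        if teacher in free[slot]["teachers"] and rooms:
            room = sorted(rooms)[0]
            free[slot]["teachers"] -= {teacher}                # set difference: book teacher
            free[slot]["rooms"] -= {room}                      # set difference: book room
            return course, slot, room
    return course, None, None                                  # greedy, hence suboptimal

print(schedule("Calculus", "Li", {"R101"}))
print(schedule("Physics", "Li", {"R101", "R102"}))
```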

2.
At present, frequency calculation in commercial power-system frequency protection devices falls into two categories: software methods and hardware methods. Software frequency-measurement algorithms suffer from relatively large errors and data windows as long as 160 ms, while hardware frequency measurement is also error-prone because of harmonics. To address these problems, a frequency-calculation scheme combining a digital filter with a three-point sampling frequency algorithm is proposed. The scheme uses a short data window, produces small frequency errors, and is little affected by harmonics. It can be used in medium-voltage protection, frequency and voltage control, and fast bus transfer, greatly improving the technical indices of protection and automation devices.
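The abstract does not give formulas, but a common form of a three-point sampling frequency estimator uses the identity x[n-1] + x[n+1] = 2·cos(ωTs)·x[n] for a sampled sinusoid. The sketch below combines it with a short moving-average filter standing in for the digital filter; the sampling rate, filter length, and test signal are illustrative assumptions, not the paper's design:

```python
import numpy as np

def three_point_frequency(x, fs):
    """Estimate the frequency of a sampled sinusoid from consecutive triples,
    using x[n-1] + x[n+1] = 2*cos(w*Ts)*x[n] and averaging over all triples
    whose centre sample is not close to zero."""
    ts = 1.0 / fs
    x0, x1, x2 = x[:-2], x[1:-1], x[2:]
    mask = np.abs(x1) > 0.1 * np.max(np.abs(x))      # avoid dividing by tiny samples
    ratio = np.clip((x0[mask] + x2[mask]) / (2.0 * x1[mask]), -1.0, 1.0)
    return float(np.mean(np.arccos(ratio)) / (2.0 * np.pi * ts))

fs = 4000.0                                   # assumed sampling rate
t = np.arange(0, 0.04, 1.0 / fs)              # ~two cycles near 50 Hz: a short data window
signal = np.sin(2 * np.pi * 49.7 * t) + 0.1 * np.sin(2 * np.pi * 5 * 49.7 * t)  # fifth harmonic
# stand-in "digital filter": a 16-tap moving average that strongly attenuates the harmonic
filtered = np.convolve(signal, np.ones(16) / 16, mode="valid")
print(round(three_point_frequency(filtered, fs), 2))   # close to 49.7
```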

3.
In traditional set theory an element X (or Y) plays only one role: it represents one identity, expresses one property, and performs one function, and the set axioms built on this basis are first-order axioms. Following the principle that anything can be divided, an element X in fact has two or more properties, and the axioms derived on this basis are dual, or second-order, axioms. First-order axioms lead to paradoxes, whereas second-order axioms avoid them entirely. Elements are both the objects of a set and its foundation; the nature of the elements determines the nature of the axioms, and the nature of the axioms in turn determines the nature of the set. The question is how to understand the nature of an element: is X single or divisible, does it have one property or two? The author holds that an element is divisible and has a dual nature. If new set axioms can be founded on this duality of elements, the paradoxes that set theory meets when elements are treated as single-natured can be resolved more satisfactorily.

4.
Based on the signal-timing algorithm proposed in the HCM, the sensitivity of cycle length to total lost time and saturation flow rate is analyzed theoretically. Field data collected at three intersections are used to verify the accuracy of the HCM-recommended signal-timing algorithm. The results show that the estimation error of the signal cycle length is proportional to the estimation error of the phase lost time. Moreover, to guarantee the prediction accuracy of the cycle length, the estimated saturation flow rate must satisfy specific requirements under different saturation conditions. Analysis of headway data from the three intersections shows that if, as the HCM suggests, the fourth vehicle is taken as the starting point for calibrating the saturation headway, the error in total lost time reaches 40% when the queue exceeds 10 vehicles, and the error in the computed cycle length can hardly be kept below 15% when the queue exceeds 15 vehicles. To improve the practical value of the HCM, it is recommended that the headway distribution be described more precisely when these core parameters are calibrated.
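For reference, a common textbook form of the HCM cycle-length estimate is C = L·Xc / (Xc − Σ(v/s)); because C is linear in the total lost time L, an error in L propagates proportionally into C, which is the sensitivity the study reports. A small sketch with illustrative numbers:

```python
def hcm_cycle_length(total_lost_time, flow_ratios, critical_vc=0.9):
    """Cycle length C = L * Xc / (Xc - sum(v/s)); a common textbook form of the
    HCM estimate, which may differ in detail from the paper's formulation."""
    y = sum(flow_ratios)                     # sum of critical v/s ratios
    return total_lost_time * critical_vc / (critical_vc - y)

base = hcm_cycle_length(total_lost_time=16.0, flow_ratios=[0.30, 0.25, 0.20])
bumped = hcm_cycle_length(total_lost_time=16.0 * 1.10, flow_ratios=[0.30, 0.25, 0.20])
print(base, bumped, (bumped - base) / base)   # a 10% error in L shifts C by 10%
```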

5.
The performance and characteristics of linked-list-based packet-filtering firewalls are analyzed. To address their shortcomings, decision trees and Bloom filters are introduced into the packet filter, and the overhead of the algorithms and steps involved is analyzed. The analysis shows that this approach can effectively improve the performance of packet-filtering firewalls.
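A minimal Bloom filter sketch showing why it helps as a pre-check in a packet filter: a negative answer is definitive, while a positive answer must still be confirmed by the full rule lookup (for example, the decision tree). The sizes, hash counts, and addresses are illustrative, not taken from the paper:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter; parameters are illustrative."""
    def __init__(self, size_bits=8192, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

blocked = BloomFilter()
blocked.add("10.0.0.5")
# Fast pre-check: "False" means the packet definitely matches no blocked address;
# "True" still requires the full rule lookup because of possible false positives.
print(blocked.might_contain("10.0.0.5"), blocked.might_contain("192.168.1.1"))
```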

6.
The elements of a set have three properties: determinateness, distinctness, and unorderedness. Determinateness means that the property defining membership is clear-cut rather than vague: any given object either belongs to the set or does not, and exactly one of the two must hold. Distinctness means that the elements of a set differ from one another, so each element can appear only once. Unorderedness means that the elements of a set may be listed in any order. These three properties are easy to understand, but applying them accurately and flexibly is not.

7.
张艳 《教育技术导刊》2012,11(4):138-141
In view of the characteristics of domain-specific search engines, a web-page duplicate-detection algorithm based on word-frequency statistics is improved. The improved algorithm works in two steps: first, the word-overlap ratio of two documents is computed to judge whether the sets of domain keywords they use are roughly the same; second, for document pairs that pass the first test, the algorithm further checks whether the two documents use each domain keyword with the same frequency.
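A sketch of the two-step check described above, assuming Jaccard overlap for the keyword-set comparison and cosine similarity for the frequency comparison; the thresholds and keyword list are illustrative, and the paper's exact measures may differ:

```python
from collections import Counter
import math

def near_duplicate(doc_a, doc_b, keywords, overlap_thresh=0.8, freq_thresh=0.95):
    """Two-step duplicate check: (1) Jaccard overlap of the domain keywords each
    document uses; (2) cosine similarity of the keyword-frequency vectors."""
    freq_a = Counter(w for w in doc_a.lower().split() if w in keywords)
    freq_b = Counter(w for w in doc_b.lower().split() if w in keywords)
    set_a, set_b = set(freq_a), set(freq_b)
    if not set_a or not set_b:
        return False
    if len(set_a & set_b) / len(set_a | set_b) < overlap_thresh:   # step 1
        return False
    dot = sum(freq_a[w] * freq_b[w] for w in set_a & set_b)        # step 2
    norm = math.sqrt(sum(v * v for v in freq_a.values())) * \
           math.sqrt(sum(v * v for v in freq_b.values()))
    return dot / norm >= freq_thresh

keywords = {"retrieval", "index", "crawler", "ranking"}
print(near_duplicate("crawler builds the index for ranking and retrieval",
                     "the crawler builds an index used in retrieval ranking",
                     keywords))
```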

8.
To cope with the scarce computing resources of mobile platforms, a fast interactive image edge-deletion algorithm for mobile devices is proposed. A general-purpose edge detector extracts the edges of the image, the continuous edges are stored as linked lists, and the linked-list edges are then remapped back into a two-dimensional edge image. Because of the low refresh rate of mobile touch screens and image discretization, a hand-drawn deletion stroke may be broken and may fail to intersect the edge to be deleted; to overcome this, linear interpolation and morphological dilation are used to fill the gaps in the stroke and to thicken it, which makes the algorithm more robust. The results show that the time complexity of the edge-deletion algorithm is independent of the number of edge pixels and that it can support real-time interaction on mobile platforms.
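A sketch of the stroke-repair step under stated assumptions: sparse touch samples are joined by linear interpolation and the resulting mask is dilated with a small square structuring element so the stroke reliably intersects the edge to be deleted. The NumPy implementation (including wrap-around rolling for the naive dilation) is illustrative:

```python
import numpy as np

def rasterize_stroke(points, shape, dilate=1):
    """Linearly interpolate between sparse touch samples, then dilate the stroke
    mask; parameter names and the toy data are illustrative."""
    mask = np.zeros(shape, dtype=bool)
    for (r0, c0), (r1, c1) in zip(points[:-1], points[1:]):
        n = max(abs(r1 - r0), abs(c1 - c0)) + 1        # fill the gap between samples
        rows = np.linspace(r0, r1, n).round().astype(int)
        cols = np.linspace(c0, c1, n).round().astype(int)
        mask[rows, cols] = True
    # naive dilation with a (2*dilate+1)^2 square element; np.roll wraps at the
    # borders, which is acceptable for this toy example
    fat = mask.copy()
    for dr in range(-dilate, dilate + 1):
        for dc in range(-dilate, dilate + 1):
            fat |= np.roll(np.roll(mask, dr, axis=0), dc, axis=1)
    return fat

stroke = rasterize_stroke([(2, 2), (2, 9), (8, 9)], shape=(12, 12))
print(stroke.sum())   # thickened stroke now covers gaps between touch samples
```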

9.
Grayscale image enhancement is widely used in digital image processing. Most researchers compare how well different enhancement algorithms perform in specific settings, but little work studies applying several algorithms to the same image in succession. To fill this gap, interactive multi-pass image-processing software was developed that applies multiple enhancement algorithms to a grayscale image; in practical use this yields better enhancement than any single algorithm.
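A small sketch of the multi-pass idea: chain two standard enhancement steps instead of applying a single one. The particular chain here (contrast stretch followed by histogram equalization) is illustrative, not the combination used in the paper:

```python
import numpy as np

def stretch_contrast(img):
    """Linear contrast stretch to the full 0-255 range."""
    lo, hi = img.min(), img.max()
    return ((img - lo) / max(hi - lo, 1) * 255).astype(np.uint8)

def equalize_histogram(img):
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size
    return (cdf[img] * 255).astype(np.uint8)

# Multi-pass enhancement: chain several algorithms instead of picking just one.
img = np.clip(np.random.default_rng(1).normal(110, 12, (64, 64)), 0, 255).astype(np.uint8)
enhanced = equalize_histogram(stretch_contrast(img))
print(img.std(), enhanced.std())   # the chained result spreads intensities more widely
```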

10.
To address the inefficiency and overfitting of the C4.5 decision-tree algorithm on data-mining classification problems, an improved TM-C4.5 algorithm is proposed. It mainly improves the splitting and pruning strategies of C4.5. First, after the attribute values are sorted in ascending order, the boundary theorem is used to obtain the cut points where the class distribution may change; the information gain at each such point is compared with the probability obtained from a Bayesian classifier, and a conditional test determines the best split threshold. Second, a simplified CCP (Cost-Complexity Pruning) method and evaluation criterion are used: for the root node of each subtree of the generated tree, the surface-error-rate gain and the S value are computed to decide whether the node and its branches should be removed. Experiments show that the decision trees produced by this algorithm classify more accurately and reasonably, demonstrating that TM-C4.5 is effective.
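A sketch of the boundary-based split selection described above: only cut points between adjacent examples of different classes are evaluated, and the one with the highest information gain is kept. The Bayesian comparison and the simplified CCP pruning from the paper are not reproduced here:

```python
import math
from collections import Counter

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def best_boundary_split(values, labels):
    """Pick a numeric split threshold, evaluating only class-boundary cut points
    rather than every adjacent pair of sorted values."""
    pairs = sorted(zip(values, labels))
    base = entropy(labels)
    best_gain, best_cut = 0.0, None
    for i in range(1, len(pairs)):
        if pairs[i - 1][1] == pairs[i][1]:          # not a class boundary: skip
            continue
        cut = (pairs[i - 1][0] + pairs[i][0]) / 2.0
        left = [lab for v, lab in pairs if v <= cut]
        right = [lab for v, lab in pairs if v > cut]
        gain = base - (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        if gain > best_gain:
            best_gain, best_cut = gain, cut
    return best_cut, best_gain

print(best_boundary_split([1.0, 1.2, 2.5, 3.1, 3.3, 4.0], ["a", "a", "a", "b", "b", "b"]))
```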

11.
An efficient enhanced k-means clustering algorithm
INTRODUCTION The huge amount of data collected and stored in databases increases the need for effective analysis methods to use the information contained implicitly there. One of the primary data analysis tasks is cluster analysis, intended to help a user understand the natural grouping or structure in a dataset. Therefore, the development of improved clustering algorithms has received much attention. The goal of a clustering algorithm is to group the objects of a database into a set of m…

12.
Many heuristic search methods exhibit a remarkable variability in the time required to solve particular problem instances. Their cost distributions are often heavy-tailed. It has been demonstrated that, in most cases, the rapid restart (RR) method can prominently suppress the heavy-tailed nature of the instances and improve computational efficiency. However, it is usually time-consuming to check whether an algorithm on a specific instance is heavy-tailed or not. Moreover, if the heavy-tailed distribution is confirmed and the RR method is relevant, an optimal RR threshold should be chosen to make the RR mechanism effective. In this paper, an approximate approach is proposed to quickly check whether an algorithm on a specific instance is heavy-tailed. The method works by calculating the maximal Lyapunov exponent of the algorithm's generic running trace. A statistical formula to estimate the optimal RR threshold is then derived; it is based on common nonparametric estimation, e.g., kernel estimation. Two heuristic methods are selected to verify the approach, and the experimental results agree well with the theoretical analysis.
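A sketch of the rapid-restart mechanism itself, assuming the cutoff has already been chosen by an estimation procedure such as the one described above; the toy solver merely mimics a heavy-tailed run-time distribution:

```python
import random

def run_with_restarts(solver, instance, cutoff, max_restarts=100, seed=0):
    """Rapid-restart wrapper: rerun a randomized solver with a fixed cutoff so that
    rare very long runs (the heavy tail) are abandoned early.  The cutoff is just a
    parameter here; choosing it well is the subject of the paper."""
    rng = random.Random(seed)
    for attempt in range(max_restarts):
        solution, _ = solver(instance, cutoff, rng)
        if solution is not None:            # solved within the cutoff
            return solution, attempt
    return None, max_restarts

def toy_solver(instance, cutoff, rng):
    """Stand-in randomized solver whose run length is highly variable."""
    steps_needed = rng.choice([5, 5, 5, 10_000])   # occasional extremely long run
    return ("ok", steps_needed) if steps_needed <= cutoff else (None, cutoff)

print(run_with_restarts(toy_solver, instance=None, cutoff=50))
```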

13.
A dynamic finite element model updating method using equivalent models and genetic algorithms
To overcome the low computational efficiency of existing dynamic finite element model updating methods, or their tendency to converge to local optima, a new updating method based on equivalent (surrogate) models and a genetic algorithm is proposed. First, within the value ranges of the design parameters, and according to the preset order of the polynomial model and the number of independent variables, a design-of-experiments method is used to obtain the optimal sample points needed to fit a response surface model; the sample data are produced by finite element analysis, and regression analysis yields the response surface model, which approximates the functional relationship between the structural characteristics and the design parameters. Then, in the fitness-evaluation step of the genetic algorithm, the response surface model replaces the finite element model to compute the structural characteristics for a given set of design parameters and to evaluate each individual's fitness; evolution finally produces the optimal solution, i.e., the updated design parameters. A vehicle frame model is taken as an example: finite element analysis and modal testing are carried out, and the model is updated with the proposed method. After updating, the mean-square modal frequency error is below 2%. The predictive capability of the updated finite element model is then checked against measured dynamic characteristics of a modified structure, and the mean-square error of the predicted modal frequencies is again below 2%.
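A compact sketch of the two-stage idea under stated assumptions: a quadratic response surface is fitted by least squares to samples of an "expensive" model (standing in for the finite element analysis, and sampled randomly rather than by an optimal design of experiments), and a simple selection-and-mutation evolutionary loop (standing in for the genetic algorithm) searches the design space using only the cheap surrogate in its fitness evaluation. The model, sampling plan, and settings are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(p):
    """Stand-in for the finite element analysis (illustrative, not from the paper)."""
    return 3.0 * p[0] ** 2 + 2.0 * p[0] * p[1] + p[1] + 5.0

def features(p):
    x, y = p
    return np.array([1.0, x, y, x * y, x * x, y * y])   # quadratic response surface terms

# 1) sample the design space, run the "FE" model, fit the response surface by regression
samples = rng.uniform(-1.0, 1.0, size=(30, 2))
responses = np.array([expensive_model(p) for p in samples])
coeffs, *_ = np.linalg.lstsq(np.array([features(p) for p in samples]), responses, rcond=None)
surrogate = lambda p: features(p) @ coeffs

# 2) evolutionary search: fitness uses the cheap surrogate instead of the FE model
target = expensive_model([0.3, -0.5])                    # "measured" structural characteristic
pop = rng.uniform(-1.0, 1.0, size=(40, 2))
for _ in range(60):
    fitness = -np.abs([surrogate(p) - target for p in pop])
    parents = pop[np.argsort(fitness)[-20:]]              # keep the best half
    children = parents[rng.integers(0, 20, 20)] + rng.normal(0, 0.05, (20, 2))  # mutate
    pop = np.vstack([parents, np.clip(children, -1.0, 1.0)])
best = pop[np.argmax([-abs(surrogate(p) - target) for p in pop])]
print(best, surrogate(best) - target)   # surrogate prediction matches the target closely
```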

14.
This paper proposes a new source-end method for detecting distributed denial-of-service (DDoS) attacks. A Bloom-filter structure is first used to make simple counts of the packets entering and leaving each interface, and a non-parametric CUSUM (cumulative sum) method is then applied for detection. The method not only detects the presence of DDoS attacks at the source end but also successfully detects DDoS attacks of various types. Experiments show that it produces accurate detection results while consuming fewer resources.
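A minimal non-parametric CUSUM sketch: deviations of a traffic statistic above an allowed drift are accumulated, and an attack is declared when the sum crosses a threshold. The statistic, drift, and threshold values are illustrative; the Bloom-filter counting stage is not reproduced:

```python
def cusum_detect(series, drift=0.5, threshold=5.0):
    """Non-parametric CUSUM: accumulate positive deviations of a statistic
    (e.g., outbound-minus-inbound packet counts per interval) above an allowed
    drift, and flag a change when the cumulative sum crosses the threshold."""
    s = 0.0
    for i, x in enumerate(series):
        s = max(0.0, s + x - drift)
        if s > threshold:
            return i          # index of the interval where the change is declared
    return None

# normal traffic is roughly balanced; an attack makes the statistic jump
normal = [0.2, -0.1, 0.3, 0.0, 0.1]
attack = [3.0, 4.0, 3.5, 4.2]
print(cusum_detect(normal + attack))   # detected a couple of intervals into the attack
```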

15.
Despite the widespread popularity of growth curve analysis, few studies have investigated robust growth curve models. In this article, the t distribution is applied to model heavy-tailed data and contaminated normal data with outliers for growth curve analysis. The derived robust growth curve models are estimated through Bayesian methods utilizing data augmentation and Gibbs sampling algorithms. The analysis of mathematical development data shows that the robust latent basis growth curve model better describes the mathematical growth trajectory than the corresponding normal growth curve model and can reveal the individual differences in mathematical development. Simulation studies further confirm that the robust growth curve models significantly outperform the normal growth curve models for both heavy-tailed t data and normal data with outliers but lose only slight efficiency for normal data. It appears convincing to replace the normal distribution with the t distribution for growth curve analysis. Three information criteria are evaluated for model selection. Online software is also provided for conducting robust analysis discussed in this study.

16.
Association-rule mining discovers interesting associations or correlations among itemsets in large volumes of data, and generating frequent itemsets is the most important step in the process. This paper proposes a new frequent-itemset generation algorithm that, based on the idea of grouping items, uses a matrix to store the frequency information of the items and needs to scan the database only once. Because items are grouped and the repetition among transactions is fully exploited, the algorithm remains efficient even when the number of items is large. Practical use shows that it is an efficient frequent-itemset generation algorithm.
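A sketch of the single-scan matrix idea: one pass over the database builds an item-by-transaction 0/1 matrix, after which the support of any candidate itemset is obtained by ANDing the corresponding rows. The item-grouping refinement described in the abstract is not reproduced; data and thresholds are illustrative:

```python
import numpy as np
from itertools import combinations

transactions = [{"a", "b", "c"}, {"a", "c"}, {"a", "d"}, {"b", "c"}, {"a", "b", "c"}]
items = sorted(set().union(*transactions))

# Single scan: build an item-by-transaction 0/1 matrix
matrix = np.array([[item in t for t in transactions] for item in items], dtype=bool)

def support(itemset):
    rows = [matrix[items.index(i)] for i in itemset]
    return int(np.logical_and.reduce(rows).sum())   # AND the rows, count transactions

min_support = 2
frequent = {frozenset(c): support(c)
            for k in (1, 2)
            for c in combinations(items, k)
            if support(c) >= min_support}
print(frequent)
```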

17.
INTRODUCTION Most packing problems (Dowsland and Dowsland, 1992) are NP-hard (Garey and Johnson, 1979); among which are bin-packing, floorplan, rectangle packing, packing a set of circles into a large circle or square, non-rectangular packing problems and so on (Li and Milenkovic, 1995; Liang et al., 2002; Lipnitskii, 2002; Milenkovic and Daniels, 1996; Milenkovic et al., 1991; Osogami and Okano, 2003; Wang, 2002). Some of these such as bin-packing problem and rectangle packing p…

18.
Vocabulary is one of the major obstacles to attaining reading fluency in a second language. The major European literary languages have vocabularies of many tens of thousands of items. For efficient learning, the vocabulary systems must be structured in terms of frequency groupings so that the more frequent items are mastered before the less frequent ones. The learner, however, has no way of determining the relative frequency of the words in his text. The solution involves: 1) the establishment of various word frequency groups and 2) marking the words in the reading text so that the learner has a clear set of rational priorities. Statistical studies suggest that approximately the most frequent 5,000 words constitute a minimum vocabulary for liberated reading and account for about 90% of the different words in an average text. The learning of the less frequent items should be deferred until these are mastered. Further, the presentation of the higher frequency words within the 1,000–5,000 range should be sequenced by groups in terms of their relative frequencies. Each group might correspond to a particular level of language proficiency. This goal can be attained by means of a system in which the frequency category of each text word is marked so that the learner knows its relative importance and can structure his vocabulary acquisition accordingly. A marking procedure by frequency is integrated with a marginal translation or glossing routine. The article proposes a set of frequency groups and describes an algorithm for the implementation of a frequency identification and marking procedure on an IBM 360 computer. A sample page of a Russian text book utilizing the technique is given and several other potential utilizations are described.
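A small sketch of the frequency identification and marking idea under stated assumptions: each word is tagged with the index of its frequency band so the learner can see its relative importance. The band boundaries and the rank table are illustrative, and this modern sketch obviously stands in for the original IBM 360 implementation:

```python
def mark_by_frequency(text, frequency_rank, bands=(1000, 2000, 3000, 4000, 5000)):
    """Tag each word with its frequency band: 1 = most frequent 1,000 words, ...,
    5 = words ranked 4,001-5,000, and 0 = beyond 5,000 or unknown."""
    marked = []
    for word in text.lower().split():
        rank = frequency_rank.get(word)
        band = next((i + 1 for i, limit in enumerate(bands)
                     if rank is not None and rank <= limit), 0)
        marked.append(f"{word}[{band}]")
    return " ".join(marked)

ranks = {"the": 1, "house": 650, "window": 1800, "threshold": 4800}
print(mark_by_frequency("The threshold of the house window creaked", ranks))
# the[1] threshold[5] of[0] the[1] house[1] window[2] creaked[0]
```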

19.
Creating a sense of community in online classes contributes to student retention and to their overall satisfaction with the course itself. This study aimed to develop a scale of sense of community of students attending online university courses. A series of ordinal exploratory factor analyses were conducted on data obtained from 839 students enrolled in Italian universities. Using an item analysis method, we were able to select the 36 most valid items from an original set of 60 items we had previously defined. These items are distributed across three related factors measuring membership, influence, and fulfillment of needs. This factorial structure replicates the McMillan and Chavis’s model of sense of community, upon the basis of which this scale was developed. The three factors presented good ordinal alpha and adequate convergent/divergent validity coefficients. The scale represents an efficient tool for the design, monitoring, and evaluation of online courses.

20.
This paper presents a new efficient algorithm for mining frequent closed itemsets. It enumerates the closed set of frequent itemsets by using a novel compound frequent itemset tree that facilitates fast growth and efficient pruning of search space. It also employs a hybrid approach that adapts search strategies, representations of projected transaction subsets, and projecting methods to the characteristics of the dataset. Efficient local pruning, global subsumption checking, and fast hashing methods are detailed in this paper. The principle that balances the overheads of search space growth and pruning is also discussed. Extensive experimental evaluations on real world and artificial datasets showed that our algorithm outperforms CHARM by a factor of five and is one to three orders of magnitude more efficient than CLOSET and MAFIA.
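For reference, a brute-force, definition-level sketch of what such an algorithm computes efficiently: a frequent itemset is closed if no proper superset has the same support. This naive version is only suitable for tiny datasets and does not reproduce the paper's tree-based search, hashing, or pruning:

```python
from itertools import combinations

def closed_frequent_itemsets(transactions, min_support=2):
    """Enumerate closed frequent itemsets by exhaustive search (reference only)."""
    items = sorted(set().union(*transactions))
    support = lambda s: sum(1 for t in transactions if s <= t)
    frequent = [frozenset(c)
                for k in range(1, len(items) + 1)
                for c in combinations(items, k)
                if support(set(c)) >= min_support]
    # closed: no proper superset with identical support
    return {s: support(s) for s in frequent
            if not any(s < t and support(s) == support(t) for t in frequent)}

data = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"a", "b", "c"}]
print(closed_frequent_itemsets(data))
```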
