Similar Documents
Found 20 similar documents (search time: 187 ms)
1.
Medical image registration methods fall into three broad categories: intensity-based, transform-domain, and feature-based, of which intensity-based methods are the most common. Within the intensity-based family, registration driven by mutual information is the most heavily researched. Mutual information compares the statistical dependence of two images; it is a measure of the correlation between two random variables. In recent years, mutual-information-based multimodal medical image registration methods have emerged in quick succession, each with its own strengths and weaknesses. This paper details the principles, algorithms, and pros and cons of mutual-information-based medical image registration methods.
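The mutual-information measure this abstract describes can be sketched as a minimal joint-histogram estimator; the bin count and base-2 logarithms here are illustrative choices, not taken from the paper:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two equally sized grayscale images:
    MI(A, B) = H(A) + H(B) - H(A, B), estimated from a joint histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)          # marginal of A
    py = pxy.sum(axis=0)          # marginal of B
    nz = pxy > 0                  # only nonzero entries contribute
    h_xy = -np.sum(pxy[nz] * np.log2(pxy[nz]))
    h_x = -np.sum(px[px > 0] * np.log2(px[px > 0]))
    h_y = -np.sum(py[py > 0] * np.log2(py[py > 0]))
    return h_x + h_y - h_xy
```

In registration, one image is repeatedly transformed and the transform maximizing this score is kept; an image compared with itself attains its own entropy, while two unrelated images score near zero.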

2.
The multimodal medical images are first preprocessed. An edge operator is then applied to the CT image, and edge distribution points (feature points) are extracted in its region of interest. Finally, the moment principal-axis method yields the centroid and principal axis of both images, from which the translation and rotation between them are computed, achieving a global coarse registration. The moment principal-axis method registers multimodal medical images quickly, but it cannot clearly reflect feedback information from the images.
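The moment-based centroid and principal-axis computation can be sketched as follows; this is a minimal version assuming grayscale numpy arrays, with variable names of my choosing:

```python
import numpy as np

def centroid_and_axis(img):
    """Centroid and principal-axis angle of an image from its moments.

    The centroid comes from the first-order moments; the principal-axis
    orientation from the second-order central moments.
    """
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    cx = (xs * img).sum() / m00
    cy = (ys * img).sum() / m00
    mu20 = ((xs - cx) ** 2 * img).sum()
    mu02 = ((ys - cy) ** 2 * img).sum()
    mu11 = ((xs - cx) * (ys - cy) * img).sum()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return (cx, cy), theta
```

Coarse registration then takes the translation as the difference of the two centroids and the rotation as the difference of the two axis angles.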

3.
Image registration is an important part of image processing technology, widely applied in computer vision, remote sensing, 3-D reconstruction, and many other fields. SURF is widely used for registration, but its high false-match rate during feature-point extraction lowers registration accuracy. To address this, an image registration method based on Harris-SURF descriptors is proposed. The Harris algorithm's strengths are exploited to detect feature points; descriptors are then computed for these points to find corresponding point pairs; finally, RANSAC removes incorrect correspondences and computes the final geometric transform, completing the registration. Experimental results show that the proposed method effectively improves registration accuracy.
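The RANSAC outlier-rejection step can be illustrated with a toy model. The paper estimates a fuller geometric transform from descriptor matches; this simplified sketch fits only a 2-D translation to hypothetical point pairs:

```python
import numpy as np

def ransac_translation(src, dst, n_iter=200, thresh=1.0, seed=0):
    """Estimate a 2-D translation from noisy correspondences with RANSAC.

    src, dst: (N, 2) arrays of matched points (some matches may be wrong).
    Returns the translation vector and a boolean inlier mask.
    """
    rng = np.random.default_rng(seed)
    best_t, best_inliers = None, np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        i = rng.integers(len(src))            # minimal sample: one pair
        t = dst[i] - src[i]                   # candidate translation
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_t, best_inliers = t, inliers
    # Refit on all inliers for the final estimate.
    best_t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return best_t, best_inliers
```

The same sample-score-refit loop extends to similarity, affine, or projective models by enlarging the minimal sample.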

4.
In feature-based remote sensing image registration, feature extraction is the first step of the whole process, and its quality has a very important influence on the final registration result. Point, line, and region features are all used in current remote sensing registration; point features are used most often, because they greatly reduce the data volume and thus speed up computation. Such point features are generally called interest points. This paper mainly introduces the SUSAN interest-point detection algorithm, discusses and studies it, and finally implements it in IDL.
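The SUSAN response can be sketched in Python rather than IDL. This minimal version uses a hard similarity threshold (the original formulation uses a smoother exponential weighting): each pixel counts the mask pixels whose intensity is within `t` of the nucleus (the USAN area), and corners are where the USAN is smallest.

```python
import numpy as np

def susan_corners(img, t=27, radius=3):
    """SUSAN corner response: large where the USAN area is small."""
    h, w = img.shape
    offs = [(dy, dx) for dy in range(-radius, radius + 1)
            for dx in range(-radius, radius + 1)
            if dy * dy + dx * dx <= radius * radius]
    g = 0.75 * len(offs)                     # geometric threshold
    resp = np.zeros(img.shape, dtype=float)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            nucleus = int(img[y, x])
            usan = sum(1 for dy, dx in offs
                       if abs(int(img[y + dy, x + dx]) - nucleus) < t)
            resp[y, x] = g - usan if usan < g else 0.0
    return resp
```

On an ideal step corner the USAN covers about a quarter of the mask, on a straight edge about half, and on flat regions the whole mask, so the response ranks corner > edge > flat.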

5.
Fang Jinglong, Geng Caiying. 科技通报 (Bulletin of Science and Technology), 2010, 26(2): 269-272
For the problem of registering multiple images, this paper proposes a new automatic image registration algorithm. The algorithm obtains corner information with Harris corner detection. During matching, a circular region and a bidirectional correlation coefficient are used as the similarity measure: the circular region neatly handles rotation, while the bidirectional correlation further guarantees matching precision and reduces the false-match rate. Preliminary experiments show the method achieves automatic registration between images efficiently and in little time.

6.
Medical image registration seeks a spatial transform for one image that brings its points into spatial correspondence with those of another image. This paper introduces ITK and discusses the implementation and results of three methods: mutual-information registration, 2-D rigid registration, and affine transformation.

7.
A depth image registration method based on Hough line detection (cited: 1)
To tackle the difficult problem of finding point correspondences between images in traditional registration methods, a depth image registration method based on Hough line detection is proposed. The Hough transform detects lines in the depth images and establishes correspondences between lines seen from different viewpoints. The rigid transform parameters between the two images are then determined from the 3-D direction vectors of the corresponding lines. Finally, simulated depth images verify the method's effectiveness, and 3-D reconstruction results are given.
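The Hough line-detection step can be sketched as a bare voting accumulator over the (theta, rho) parameterization `rho = x*cos(theta) + y*sin(theta)`; resolution and the input format (a list of edge points) are my simplifying choices:

```python
import numpy as np

def hough_lines(points, img_diag, n_theta=180, n_rho=200):
    """Vote edge points into a (theta, rho) accumulator; peaks are lines."""
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-img_diag, img_diag, n_rho)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for y, x in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)  # one rho per theta
        idx = np.digitize(rho, rhos) - 1
        acc[np.arange(n_theta), idx] += 1
    return acc, thetas, rhos
```

Each collinear set of points votes into the same accumulator cell, so the strongest peaks identify the dominant lines whose correspondences drive the registration.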

8.
Registration of scattered point-cloud data based on an improved particle swarm algorithm (cited: 1)
A point cloud registration algorithm combining an improved particle swarm optimizer (PSO) with ICP is proposed. Guided by a curvature-similarity function between the point clouds, the improved PSO searches the two clouds to be registered for matching point-pair sets and performs an initial registration; this result then serves as the starting position for iterative ICP, which performs a second, fine registration, aligning the two scattered clouds. Experiments show the algorithm effectively avoids the local minima that genetic algorithms may fall into; compared with using ICP alone, the registration runtime is greatly shortened, with good stability and reliability.
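The ICP refinement stage can be sketched for 2-D point sets as a generic point-to-point ICP with a Kabsch/SVD rigid fit; the paper's curvature-based PSO initialization is not reproduced here, so this sketch assumes a reasonable starting pose:

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: nearest-neighbour matching, then the best
    rigid transform (Kabsch) for the matched pairs."""
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    matched = dst[d.argmin(axis=1)]
    mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(src, dst, n_iter=20):
    """Iterate match-and-fit, accumulating the composite rigid transform."""
    R_tot, t_tot = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(n_iter):
        R, t = icp_step(cur, dst)
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

Because the nearest-neighbour step only finds correct matches when the clouds start roughly aligned, a global initializer such as the PSO search above is what makes the ICP refinement reliable.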

9.
To address the low accuracy of traditional registration methods, a new elastic registration and interpolation algorithm is proposed. Based on a B-spline free-form deformation model, the maximum mutual information criterion is used as the similarity measure between the deformed image and the target image to register adjacent slice images. An improved bilinear interpolation method is then proposed: the value at each interpolation point is computed by constructing its minimal enclosing quadrilateral, which yields the final interpolated image.
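The standard bilinear interpolation that the improved method builds on can be sketched as follows; the paper's minimal-enclosing-quadrilateral refinement is not reproduced here:

```python
import numpy as np

def bilinear(img, y, x):
    """Standard bilinear interpolation at fractional coordinates (y, x):
    blend the four surrounding grid values by their fractional distances."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1 = min(y0 + 1, img.shape[0] - 1)
    x1 = min(x0 + 1, img.shape[1] - 1)
    dy, dx = y - y0, x - x0
    top = img[y0, x0] * (1 - dx) + img[y0, x1] * dx
    bot = img[y1, x0] * (1 - dx) + img[y1, x1] * dx
    return top * (1 - dy) + bot * dy
```

Sampling at the exact centre of a 2x2 patch returns the average of its four values, and integer coordinates return the pixel itself.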

10.
With socioeconomic development, bank transaction volumes grow by the day, and the manual verification methods of the past can no longer meet society's needs. Seal-imprint image registration, the foundation of automatic verification, processes a reference imprint and a test imprint in preparation for judging whether the two come from the same seal. This paper adopts a one-dimensionalized seal image registration method consisting of two parts: preprocessing (grayscale conversion, noise removal, binarization, and background segmentation) and the image registration itself.

11.
Trends in image retrieval (cited: 1)
Wu Chunyu, Xu Han, Wu Leiming. 情报科学 (Information Science), 2005, 23(5): 764-766
This paper introduces text-based methods for retrieving image content, feature extraction during compression of image data, and the current state of the corresponding compressed-domain retrieval, and argues that future image retrieval should give equal weight to text-based retrieval of image content and compressed-domain retrieval based on image content.

12.
Application of multi-source information fusion to extracting salinized-land information in arid regions (cited: 3)
Soil salinization is one of the main environmental problems confronting oasis stability and sustainable development in arid regions, so extracting salinized-land information promptly and accurately by remote sensing and mapping its spatial distribution has important practical significance. Taking the Weigan River-Kuqa River delta oasis as a case study, Radarsat SAR and Landsat TM images were fused by principal component analysis, and the result was compared quantitatively with IHS and Brovey transform fusion; a BP neural network then classified the images before and after fusion using the same training samples. The results show that salinized land lies mainly in the ecotone between the oasis and the desert, distributed in strips inside the oasis and in patches outside it, with severely salinized land interleaved among moderately and lightly salinized land outside the oasis. The PCA-fused image preserves spectral information better and carries more information than the other common fusion methods, and its classification accuracy is substantially higher than that of the single Landsat TM multispectral image, making it an effective means of monitoring salinized-land change in arid regions.

13.
Content-based image retrieval for medical images is a primary technique in computer-aided diagnosis. Building an efficient medical image database is a prerequisite for a computer-aided diagnosis system, yet it receives less attention than it deserves. In this paper, we provide an efficient approach to developing archives of large brain CT medical data. Medical images are securely acquired along with the relevant diagnosis reports, then cleansed, validated, and enhanced. Sophisticated image processing algorithms, including image normalization and registration, are applied to ensure that only corresponding anatomical regions are compared during image matching. A feature vector is extracted by non-negative tensor factorization and associated with each image, which is essential for content-based image retrieval. Our experiments demonstrate the efficiency and promise of this database-building method for computer-aided diagnosis systems. The brain CT image database we built gives radiologists convenient access to retrieve pre-diagnosed, validated, and highly relevant examples based on image content and to obtain computer-aided diagnosis.

14.
Wang Zhijie, Wu Na. 科技通报 (Bulletin of Science and Technology), 2012, 28(6): 80-81
Multi-sensor image fusion superimposes multiple images obtained from different sensors according to corresponding pixel positions, producing a new image that satisfies a given requirement. This paper adopts a multi-sensor image fusion algorithm with a conflict penalty factor: the factor is applied in real time to select high-information pixels for smooth fusion. Simulation experiments show the algorithm maximally preserves the fused image's information and yields a high-definition fused image.

15.
Applying Bayesian theory to classification fusion of multi-band SAR images (cited: 1)
Bayesian theory is applied to the classification of multi-band SAR images. The common product, average, and median combination rules are analyzed, and, building on the Bayesian averaging rule, three improved methods are proposed that exploit the relationship between SAR classification accuracy and a distance factor. Experimental results show that multi-band fusion combines the strengths and complementary information of the individual bands, obtaining classification results unattainable from any single band; the improved methods reduce the influence of misclassified information through weighting and further raise classification accuracy.
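The product, average, and median combination rules analyzed here can be sketched generically over per-band class posteriors; the paper's distance-factor weighting would enter through the `weights` argument, and the function names are mine:

```python
import numpy as np

def fuse_posteriors(posteriors, rule="average", weights=None):
    """Combine per-band class posteriors (shape: bands x classes)."""
    p = np.asarray(posteriors, dtype=float)
    if rule == "product":
        fused = p.prod(axis=0)
    elif rule == "median":
        fused = np.median(p, axis=0)
    elif rule == "average":
        w = np.ones(len(p)) if weights is None else np.asarray(weights, float)
        fused = (w[:, None] * p).sum(axis=0) / w.sum()
    else:
        raise ValueError(f"unknown rule: {rule}")
    return fused / fused.sum()          # renormalize to a distribution
```

The weighted average is where an accuracy-dependent factor can down-weight bands whose classifiers are less reliable, which is the spirit of the paper's improvements.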

16.
A facial keypoint detection method is proposed that uses only a small number of frontal images and requires no normalization of the face images, whereas traditional keypoint detection demands strict image preprocessing. The random forest is a classifier-ensemble algorithm that handles multi-class problems well, and the LBP feature, though simple, can encode a great deal of texture information. Combining an improved LBP feature with a random forest yields a facial keypoint detector: LBP features are extracted from Gaussian-smoothed images, a feature is generated for each point, the useful features are taken as positive examples, and together with a negative set they form the training set. Classification with the random forest achieves a low error rate of only about 10%.
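The basic 8-neighbour LBP code underlying the improved feature can be sketched as follows; this is the standard formulation, and the bit ordering is a convention rather than something specified by the paper:

```python
import numpy as np

def lbp_8(img, y, x):
    """Basic 8-neighbour LBP code for pixel (y, x): each neighbour at least
    as bright as the centre sets one bit, yielding a value in [0, 255]."""
    c = img[y, x]
    neigh = [img[y-1, x-1], img[y-1, x], img[y-1, x+1], img[y, x+1],
             img[y+1, x+1], img[y+1, x], img[y+1, x-1], img[y, x-1]]
    return sum((1 << i) for i, v in enumerate(neigh) if v >= c)
```

Histograms of these codes over local patches form the texture descriptors that the random forest then classifies.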

17.
Tu Juan. 科技广场, 2014(1): 53-56
As one of the pixel-level fusion methods, the wavelet transform offers multiresolution and multiscale analysis, and it is currently widely applied to fusing infrared and visible-light images. Through an in-depth study of existing image fusion algorithms, this paper investigates lifting schemes for wavelet bases and compares the performance of the wavelets after lifting.
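A one-level wavelet fusion can be sketched with the Haar basis, the simplest wavelet (the lifting schemes the paper studies generalize this construction). Approximation coefficients are averaged and the stronger of the two detail coefficients is kept; the fusion rule and function names are illustrative choices:

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar transform (rows, then columns)."""
    a = (img[0::2, :] + img[1::2, :]) / 2
    d = (img[0::2, :] - img[1::2, :]) / 2
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def fuse_haar(img1, img2):
    """Average the approximations, keep the stronger detail coefficients."""
    c1, c2 = haar2d(img1), haar2d(img2)
    ll = (c1[0] + c2[0]) / 2
    details = [np.where(np.abs(x) >= np.abs(y), x, y)
               for x, y in zip(c1[1:], c2[1:])]
    return ihaar2d(ll, *details)
```

Because the detail bands carry edges, picking the larger-magnitude coefficient tends to keep the sharper structure from whichever source image (infrared or visible) shows it.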

18.
Multi-feature fusion has achieved gratifying performance in image retrieval. However, some existing fusion mechanisms unfortunately make the result worse than expected due to the domain and visual diversity of images. As a result, a pressing problem for applying a feature fusion mechanism is how to characterize and improve the complementarity of multi-level heterogeneous features. To this end, this paper proposes an adaptive multi-feature fusion method via cross-entropy normalization for effective image retrieval. First, various low-level features (e.g., SIFT) and high-level semantic features based on deep learning are extracted. Under each level of feature representation, the initial similarity scores of the query image w.r.t. the target dataset are calculated. Second, we use an independent reference dataset to approximate the tail of the attained initial similarity score ranking curve by cross-entropy normalization. The area under the ranking curve is then calculated as an indicator of the merit of the corresponding feature (i.e., a smaller area indicates a more suitable feature). Finally, fusion weights for each feature are assigned adaptively from the statistically elaborated areas. Extensive experiments on three public benchmark datasets demonstrate that the proposed method achieves superior performance compared with existing methods, improving mAP by a relative 1.04% (Holidays) and 1.22% (Oxf5k), and the N-S score by 0.04 (UKbench).

19.
XMage is introduced in this paper as a method for partial similarity searching in image databases. Region-based image retrieval is a method of retrieving partially similar images, proposed as a way to accurately process queries in an image database. In region-based image retrieval, region matching is indispensable for computing the partial similarity between two images because query processing is based on regions instead of the entire image. A naive method of region matching is sequential comparison between regions, which incurs severe overhead and degrades query-processing performance. In this paper, a new image-content representation, called the Condensed eXtended Histogram (CX-Histogram), is presented in conjunction with a well-defined distance function CXSim() on the CX-Histogram. CXSim() is a new image-to-image similarity measure for computing the partial similarity between two images; it achieves the effect of comparing regions of two images by simply comparing the two images. CXSim() reduces the query space by pruning irrelevant images, and it is used as a filtering function before sequential scanning. Extensive experiments on real image data were performed to evaluate XMage. It provides significant pruning of irrelevant images with no false dismissals. As a consequence, it achieves up to a 5.9-fold speed-up in search over R*-tree search followed by sequential scanning.

20.
Content-based image retrieval (CBIR) with global features is notoriously noisy, especially for image queries with low percentages of relevant images in a collection. Moreover, CBIR typically ranks the whole collection, which is inefficient for large databases. We experiment with a method for image retrieval from multimedia databases that improves both the effectiveness and efficiency of traditional CBIR by exploiting secondary media. We perform retrieval in a two-stage fashion: first rank by a secondary medium, and then perform CBIR only on the top-K items. Thus, effectiveness is improved by performing CBIR on a 'better' subset. Using a relatively 'cheap' first stage, efficiency is also improved via the fewer CBIR operations performed. Our main novelty is that K is dynamic, i.e. estimated per query to optimize a predefined effectiveness measure. We show that our dynamic two-stage method can be significantly more effective and robust than similar setups with static thresholds previously proposed. In additional experiments using local feature derivatives in the visual stage instead of global features, such as the emerging visual codebook approach, we find that two-stage does not work very well. We attribute the weaker performance of the visual codebook to the enhanced visual diversity produced by the textual stage, which diminishes the codebook's advantage over global features. Furthermore, we compare dynamic two-stage retrieval to traditional score-based fusion of results retrieved visually and textually. We find that fusion is also significantly more effective than single-medium baselines. Although there is no clear winner between two-stage and fusion, the methods exhibit different robustness characteristics; nevertheless, two-stage retrieval provides efficiency benefits over fusion.
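The two-stage scheme can be sketched generically. A static K is shown for simplicity; the paper's contribution is estimating K per query, which is not reproduced here, and the data structures are illustrative:

```python
def two_stage_rank(text_scores, visual_score, k):
    """Stage 1: rank all items by the cheap textual score.
    Stage 2: re-rank only the top-k items by the expensive visual score.

    text_scores: dict mapping item id -> textual score.
    visual_score: callable mapping item id -> visual score.
    """
    ranked = sorted(text_scores, key=text_scores.get, reverse=True)
    head = sorted(ranked[:k], key=visual_score, reverse=True)
    return head + ranked[k:]
```

Note that items outside the textual top-k are never visually scored at all, which is where the efficiency gain (and the dependence on choosing K well) comes from.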
