• Title/Summary/Keyword: Image similarity


Study of the New Distance for Image Retrieval (새로운 이미지 거리를 통한 이미지 검색 방안 연구)

  • Lee, Sung Im;Lim, Jo Han;Cho, Young Min
    • Journal of Korean Institute of Industrial Engineers / v.40 no.4 / pp.382-387 / 2014
  • Image retrieval is a procedure for finding images based on the resemblance between a query image and the images in a database. A crucial step in retrieving images is how to define the similarity between images. In this paper, we propose a new similarity measure based on the distribution of color. We apply the new measure to retrieving two different types of images, wallpaper images and automobile logos, and compare its performance with that of existing similarity measures.
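The paper does not reproduce its measure here, but the general idea of a color-distribution distance can be sketched as follows; the coarse RGB binning and L1 distance are illustrative assumptions, not the authors' actual formula.

```python
# Sketch of a color-distribution distance between two images. Images are
# lists of (r, g, b) pixel tuples; the bin count is a hypothetical choice.

def color_histogram(pixels, bins=4):
    """Normalized histogram over a coarse RGB grid (bins**3 cells)."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    n = len(pixels)
    return [h / n for h in hist]

def l1_distance(h1, h2):
    """L1 distance between two normalized histograms: 0 means identical."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

red = [(250, 10, 10)] * 100
blue = [(10, 10, 250)] * 100
assert l1_distance(color_histogram(red), color_histogram(red)) == 0.0
assert l1_distance(color_histogram(red), color_histogram(blue)) == 2.0
```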

Deep Learning Similarity-based 1:1 Matching Method for Real Product Image and Drawing Image

  • Han, Gi-Tae
    • Journal of the Korea Society of Computer and Information / v.27 no.12 / pp.59-68 / 2022
  • This paper presents a method for 1:1 verification by comparing the similarity between a given real product image and a drawing image. The proposed method combines two existing CNN-based deep learning models to construct a Siamese Network. The feature vector of each image is extracted through the FC (Fully Connected) layer of its network, and the similarity label is set to 1 for learning if the real product image and the drawing image (front view, left and right side views, top view, etc.) show the same product, and to 0 if they show different products. The test (inference) model is a deep learning model that takes a real product image and a drawing image as a pair and determines whether the pair shows the same product. In the proposed model, if the similarity between the real product image and the drawing image is greater than or equal to a threshold (0.5), the products are judged to be the same; otherwise, they are judged to be different. The proposed model showed an accuracy of about 71.8% for queries with the same drawing as the real product (positive:positive) and about 83.1% for queries with a different product (positive:negative). In future work, we plan to improve the matching accuracy between real product images and drawing images by combining parameter optimization with the proposed model and adding processes such as data cleansing.
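The decision step of such a verifier can be sketched independently of the trained network: compare two embedding vectors and apply the 0.5 threshold. The toy embeddings and the use of cosine similarity are assumptions; the paper's similarity head may differ.

```python
# Minimal sketch of 1:1 verification between two feature vectors (stand-ins
# for the FC-layer embeddings of the product photo and the drawing).
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def same_product(feat_photo, feat_drawing, threshold=0.5):
    """Declare 'same product' when similarity >= threshold (0.5 in the paper)."""
    return cosine_similarity(feat_photo, feat_drawing) >= threshold

assert same_product([1.0, 0.9, 0.1], [1.0, 1.0, 0.0])      # near-identical pair
assert not same_product([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])  # orthogonal pair
```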

Wine Label Recognition System using Image Similarity (이미지 유사도를 이용한 와인라벨 인식 시스템)

  • Jung, Jeong-Mun;Yang, Hyung-Jeong;Kim, Soo-Hyung;Lee, Guee-Sang;Kim, Sun-Hee
    • The Journal of the Korea Contents Association / v.11 no.5 / pp.125-137 / 2011
  • Recently, research on systems that use images taken with camera phones as input has been actively conducted. This paper proposes a system that ranks wine images by their similarity to an input wine label. To calculate image similarity, the representative color of each cell of the image, the recognized text color, the background color, and the distribution of feature points are used as features. To compute color differences, RGB is converted to CIE-Lab, and feature points are extracted using the Harris corner detection algorithm. Weights are applied to the representative color of each cell, the text color, and the background color. The image similarity is calculated by normalizing the color differences and the difference in feature-point distribution. After calculating the similarity between the input image and the images in the database, the database images are shown in descending order of similarity, reducing the effort users spend re-searching for similar wine labels among the results.
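The weighted combination and ranking described above can be sketched as follows; the four component scores and the weight values are placeholders, since the paper's exact normalization and weights are not given here.

```python
# Sketch of a weighted similarity combination and descending-order ranking.
# All component similarities are assumed normalized to [0, 1].

def weighted_similarity(cell_color_sim, text_color_sim, bg_color_sim,
                        keypoint_sim, weights=(0.4, 0.2, 0.2, 0.2)):
    """Overall score as a weighted sum of the four feature similarities."""
    parts = (cell_color_sim, text_color_sim, bg_color_sim, keypoint_sim)
    return sum(w * s for w, s in zip(weights, parts))

def rank_labels(scored_labels):
    """Sort (name, score) pairs by similarity, descending, as in the paper's UI."""
    return sorted(scored_labels, key=lambda item: item[1], reverse=True)

scores = [("label_a", weighted_similarity(0.9, 0.8, 0.7, 0.6)),
          ("label_b", weighted_similarity(0.2, 0.3, 0.4, 0.5))]
assert rank_labels(scores)[0][0] == "label_a"
```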

Analysis of Image Similarity Index of Woven Fabrics and Virtual Fabrics - Application of Textile Design CAD System and Shuttle Loom - (직물과 가상소재의 화상 유사성 분석 연구 - 수직기 및 텍스타일 CAD시스템 활용 -)

  • Yoon, Jung-Won;Kim, Jong-Jun
    • Fashion & Textile Research Journal / v.15 no.6 / pp.1010-1017 / 2013
  • Current global textile and fashion industries have gradually shifted their focus to high value-added, high-sensibility, and multi-functional products based on new human-friendliness and sustainable growth technologies. Textile design CAD systems have been developed in conjunction with advances in computer hardware and software. This study compares the patterns or images of actual woven fabrics and virtual fabrics prepared with a textile design CAD system. Several weave structures (such as a fancy yarn weave and patterned weaves) were prepared with a shuttle loom, and the woven textile images were taken with a CCD camera. The same weave structure data and yarn data were fed into a textile design CAD system in order to simulate fabric images as closely as possible. Similarity index analysis methods allowed an analysis of the index between the actual fabric specimen and the simulated image of the corresponding fabric. The results showed that repeated small-pattern weaves yielded higher similarity index values than the fancy yarn weave, which shows some irregularities due to fancy yarn attributes. The Complex Wavelet Structural Similarity (CW-SSIM) index performed better than other methods, such as Multi-Scale (MS) SSIM and Feature Similarity (FS) SSIM, across the fabric specimen images. A correlation analysis between the image-based similarity index and a similarity evaluation by panel members was also implemented.
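To illustrate the kind of index compared in this study, here is a simplified whole-image SSIM (single window over the full image, with the usual stabilizing constants); CW-SSIM, MS-SSIM, and FS-SSIM add wavelet, multi-scale, and feature layers on top of this basic form.

```python
# Simplified, whole-image SSIM between two grayscale images given as flat
# lists of intensity values. Standard constants for 8-bit images are used.

def ssim_global(x, y, c1=6.5025, c2=58.5225):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

img = [float(v) for v in range(16)]
assert abs(ssim_global(img, img) - 1.0) < 1e-9        # identical images
assert ssim_global(img, [100.0 - v for v in img]) < 1.0  # anticorrelated pair
```

Real implementations (e.g., `skimage.metrics.structural_similarity`) compute this within sliding windows and average the local values.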

Similarity Analysis Between SAR Target Images Based on Siamese Network (Siamese 네트워크 기반 SAR 표적영상 간 유사도 분석)

  • Park, Ji-Hoon
    • Journal of the Korea Institute of Military Science and Technology / v.25 no.5 / pp.462-475 / 2022
  • Unlike the field of electro-optical (EO) image analysis, similarity metrics between synthetic aperture radar (SAR) target images have received less attention. A reliable and objective similarity analysis for SAR target images is expected to enable verification of the SAR measurement process or to provide guidelines for target CAD modeling used to simulate realistic SAR target images. To this end, this paper presents a similarity analysis method based on a Siamese network that quantifies subjective assessment through distance learning on similar and dissimilar SAR target image pairs. The proposed method is applied to MSTAR SAR target images at slightly different depression angles, and the resulting metrics are compared and analyzed against qualitative evaluation. Since image similarity is related to recognition performance, the capacity of the proposed method for target recognition is further checked experimentally with a confusion matrix.
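The distance-learning objective behind such a Siamese network is commonly a contrastive loss: similar pairs are pulled together, dissimilar pairs pushed beyond a margin. This sketch uses toy embeddings and a unit margin, not the paper's trained model or hyperparameters.

```python
# Sketch of contrastive distance learning on embedding pairs.
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def contrastive_loss(u, v, same, margin=1.0):
    """same=1 for a similar pair, 0 for a dissimilar pair."""
    d = euclidean(u, v)
    if same:
        return d * d                      # penalize any separation
    return max(0.0, margin - d) ** 2      # penalize closeness inside the margin

# A well-separated dissimilar pair contributes no loss:
assert contrastive_loss([0.0, 0.0], [3.0, 4.0], same=0) == 0.0
# An identical similar pair is also loss-free:
assert contrastive_loss([1.0, 2.0], [1.0, 2.0], same=1) == 0.0
```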

An Effective Similarity Measure for Content-Based Image Retrieval using MPEG-7 Dominant Color Descriptor (내용기반 이미지 검색을 위한 MPEG-7 우위컬러 기술자의 효과적인 유사도)

  • Lee, Jong-Won;Nang, Jong-Ho
    • Journal of KIISE:Computing Practices and Letters / v.16 no.8 / pp.837-841 / 2010
  • This paper proposes an effective similarity measure for content-based image retrieval using the MPEG-7 Dominant Color Descriptor (DCD). The proposed method measures the similarity of images using the percentages of the dominant colors extracted from them. In experiments, we achieved significant improvements in ANMRR over QHDM: 18.92% with global DCD and 47.22% with local DCD. This result shows that the proposed method is an effective similarity measure for content-based image retrieval, and it is especially useful for region-based image retrieval.
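A DCD summarizes an image as a few dominant colors with percentages. A percentage-based similarity in that spirit can be sketched as below; the color-distance threshold and matching rule are illustrative assumptions, not the paper's measure or the MPEG-7 QHDM.

```python
# Hedged sketch of a DCD-style similarity from dominant-color percentages.

def color_dist(c1, c2):
    """L1 distance between two RGB triples."""
    return sum(abs(a - b) for a, b in zip(c1, c2))

def dcd_similarity(dcd1, dcd2, threshold=60):
    """dcd = list of ((r, g, b), percentage); percentages sum to 1.0.
    Accumulates the shared percentage of dominant colors that are close enough."""
    sim = 0.0
    for c1, p1 in dcd1:
        for c2, p2 in dcd2:
            if color_dist(c1, c2) <= threshold:
                sim += min(p1, p2)
    return sim

a = [((200, 30, 30), 0.7), ((20, 20, 200), 0.3)]
b = [((210, 40, 25), 0.6), ((250, 250, 250), 0.4)]
assert abs(dcd_similarity(a, b) - 0.6) < 1e-9   # only the red colors match
```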

An Image Segmentation Method and Similarity Measurement Using fuzzy Algorithm for Object Recognition (물체인식을 위한 영상분할 기법과 퍼지 알고리듬을 이용한 유사도 측정)

  • Kim, Dong-Gi;Lee, Seong-Gyu;Lee, Moon-Wook;Kang, E-Sok
    • Transactions of the Korean Society of Mechanical Engineers A / v.28 no.2 / pp.125-132 / 2004
  • In this paper, we propose a new two-stage segmentation method for effective object recognition that uses a region-growing algorithm and k-means clustering. First, an image is segmented into many small regions via region growing. The segmented small regions are then merged into several larger regions using k-means clustering, so that the regions belonging to one object fall into the same region. This paper also establishes a similarity measurement useful for object recognition in an image. Similarity is measured by a fuzzy system whose input variables are the compactness, the magnitude of biasness, and the orientation of biasness of the object image, which are geometrical features of the object. To verify the effectiveness of the proposed two-stage segmentation method and similarity measurement, object recognition experiments were performed; the results show that they are applicable to object recognition under both normal and abnormal circumstances.
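One of the geometric features fed to such a fuzzy system, compactness, is commonly defined as 4·pi·area / perimeter², which equals 1 for a circle; this standard definition is an assumption, as the paper's exact formula is not given here.

```python
# Sketch of the compactness feature (1.0 for a circle, lower for other shapes).
import math

def compactness(area, perimeter):
    return 4.0 * math.pi * area / (perimeter ** 2)

# A circle of radius r is maximally compact:
r = 5.0
assert abs(compactness(math.pi * r * r, 2 * math.pi * r) - 1.0) < 1e-9
# A square (here pi/4 ~ 0.785) is less compact than a circle:
side = 4.0
assert compactness(side * side, 4 * side) < 1.0
```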

Web Image Caption Extraction using Positional Relation and Lexical Similarity (위치적 연관성과 어휘적 유사성을 이용한 웹 이미지 캡션 추출)

  • Lee, Hyoung-Gyu;Kim, Min-Jeong;Hong, Gum-Won;Rim, Hae-Chang
    • Journal of KIISE:Software and Applications / v.36 no.4 / pp.335-345 / 2009
  • In this paper, we propose a new web image caption extraction method that considers the positional relation between a caption and an image and the lexical similarity between a caption and the main text containing it. The positional relation represents where the caption is located with respect to the distance and direction of the corresponding image. The lexical similarity indicates how likely the main text is to generate the caption of the image. Compared with previous image caption extraction approaches, which utilize only independent features of images and captions, the proposed approach improves caption extraction recall and precision, with a 28% improvement in F-measure, by including the additional positional-relation and lexical-similarity features.
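A lexical-similarity term of the kind described above can be sketched as cosine similarity between word-count vectors of a caption candidate and the main text; the whitespace tokenization is a simplification, and the paper's actual formulation may differ.

```python
# Sketch of a lexical similarity between a caption candidate and the main text.
import math
from collections import Counter

def lexical_similarity(caption, main_text):
    """Cosine similarity of word-count vectors; 0 when no words are shared."""
    c1 = Counter(caption.lower().split())
    c2 = Counter(main_text.lower().split())
    dot = sum(c1[w] * c2[w] for w in c1)
    n1 = math.sqrt(sum(v * v for v in c1.values()))
    n2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

text = "the chart shows quarterly revenue growth for the company"
assert lexical_similarity("quarterly revenue growth", text) > \
       lexical_similarity("cute cat photo", text)
```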

Implementation of Image Retrieval System using Complex Image Features (복합적인 영상 특성을 이용한 영상 검색 시스템 구현)

  • 송석진;남기곤
    • Journal of the Korea Institute of Information and Communication Engineering / v.6 no.8 / pp.1358-1364 / 2002
  • Multimedia data are currently increasing rapidly in the broadcasting and internet fields. For the retrieval of still images from multimedia databases, this paper implements a content-based image retrieval system in which the user can retrieve similar objects from an image database after choosing a query region containing the desired object. To extract color features, the query image is converted to HSV by the proposed method; after building histograms, similarity with the database images is obtained through histogram intersection. The query image is also converted to a gray image and wavelet-transformed, from which spatial gray distribution and texture features are extracted using a banded autocorrelogram and the GLCM before computing similarity values. The final similarity is determined by adding the two similarity values, with a weight applied to each. By taking not only color features but also gray-image features from the query image, the method compensates for the weaknesses of either alone. Improvements in recall and precision are verified in the experimental results.
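The histogram intersection and weighted combination steps can be sketched as below; the hue histograms and the 0.6/0.4 weights are placeholder values (the HSV conversion and the wavelet/GLCM texture features are omitted).

```python
# Sketch of histogram intersection plus a weighted sum of two similarities.

def histogram_intersection(h1, h2):
    """Sum of bin-wise minima; 1.0 for identical normalized histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def combine(color_sim, texture_sim, w_color=0.6, w_texture=0.4):
    """Final similarity as a weighted sum of the color and texture values."""
    return w_color * color_sim + w_texture * texture_sim

query_hue = [0.5, 0.3, 0.2, 0.0]
db_hue = [0.4, 0.4, 0.1, 0.1]
color_sim = histogram_intersection(query_hue, db_hue)
assert abs(color_sim - 0.8) < 1e-9
assert abs(combine(color_sim, 0.5) - 0.68) < 1e-9
```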

Development of Deep Recognition of Similarity in Show Garden Design Based on Deep Learning (딥러닝을 활용한 전시 정원 디자인 유사성 인지 모형 연구)

  • Cho, Woo-Yun;Kwon, Jin-Wook
    • Journal of the Korean Institute of Landscape Architecture / v.52 no.2 / pp.96-109 / 2024
  • The purpose of this study is to propose a method for evaluating the similarity of show gardens using deep learning models, specifically VGG-16 and ResNet50. A model for judging the similarity of show gardens based on the VGG-16 and ResNet50 models was developed and named DRG (Deep Recognition of similarity in show Garden design). An algorithm using GAP (global average pooling) and the Pearson correlation coefficient was employed to construct the model, and the accuracy of similarity was analyzed by comparing the total number of similar images retrieved at the 1st (Top1), 3rd (Top3), and 5th (Top5) ranks with the original images. The image data used for the DRG model consisted of 278 works from Le Festival International des Jardins de Chaumont-sur-Loire, 27 works from the Seoul International Garden Show, and 17 works from the Korea Garden Show. Image analysis was conducted with the DRG model for both the same group and different groups, resulting in guidelines for assessing show garden similarity. First, overall image similarity analysis was best served by applying data augmentation techniques with the ResNet50 model. Second, for image analysis focusing on internal structure and outer form, it was effective to apply a fixed-size filter (16cm × 16cm) to generate images emphasizing form and then compare similarity using the VGG-16 model. An image size of 448 × 448 pixels and the original image in full color were suggested as the optimal settings. Based on these findings, a quantitative method for assessing show gardens is proposed, which is expected to contribute to the continuous development of garden culture through future interdisciplinary research.
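The GAP-plus-Pearson comparison at the heart of the DRG model can be sketched as follows: global average pooling collapses each CNN feature map to one value per channel, and the Pearson correlation of the pooled vectors scores how similar two gardens are. The toy feature maps here stand in for real VGG-16/ResNet50 activations.

```python
# Sketch of GAP pooling followed by Pearson correlation of pooled vectors.
import math

def gap(feature_maps):
    """feature_maps: list of 2-D channel maps -> one mean per channel."""
    return [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
            for ch in feature_maps]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

maps_a = [[[1.0, 2.0], [3.0, 4.0]],
          [[0.0, 0.0], [0.0, 4.0]],
          [[5.0, 5.0], [5.0, 5.0]]]
vec_a = gap(maps_a)                       # one value per channel
assert vec_a == [2.5, 1.0, 5.0]
assert abs(pearson(vec_a, vec_a) - 1.0) < 1e-9  # an image matches itself
```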