• Title/Summary/Keyword: Image Descriptors


The Management of Smart Safety Houses Using The Deep Learning (딥러닝을 이용한 스마트 안전 축사 관리 방안)

  • Hong, Sung-Hwa
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.505-507 / 2021
  • Image recognition technology, based on artificial intelligence, generates object feature points and feature descriptors that compensate for the shape of the object to be recognized, for environmental changes around the object, and for the loss of recognition ability caused by object rotation, and then recognizes the object using the generated descriptors. The purpose of this work is to implement the power management framework needed to increase profits and minimize damage to livestock farmers by improving the efficiency of power use in the livestock house and preventing accidents that can arise from electrical overload; this is done by integrating and managing the power and fire management devices installed to analyze the combined environment of power consumption and fire occurrence in a smart safety livestock house. A further goal is to develop and disseminate a safe and optimized intelligent smart safety livestock house.

Image Identifier based on Local Feature's Histogram and Acceleration Technique using GPU (지역 특징 히스토그램 기반 영상식별자와 GPU 가속화)

  • Jeon, Hyeok-June;Seo, Yong-Seok;Hwang, Chi-Jung
    • Journal of KIISE: Computing Practices and Letters / v.16 no.9 / pp.889-897 / 2010
  • Recently, cutting-edge large-scale image database systems have demanded fast search, high accuracy, efficient storage, and more. An image identifier (descriptor) measures the similarity of two images and plays an important role in such systems. Extraction methods for image identifiers can be roughly classified into local and global methods. In this paper, the proposed image identifier, LFH (Local Feature's Histogram), is obtained as a histogram of robust and distinctive local descriptors (features), constrained by a sub-division of the local region. LFH therefore has the properties of both a local and a global descriptor, while still allowing fast and accurate distance computation; a minimal sketch of the histogram idea is shown below. Additionally, we suggest a way to extract LFH on the GPU (OpenGL and GLSL). In the experiments, we compared LFH with SIFT (a local method) and EHD (a global method) in terms of storage capacity, extraction and retrieval time, and accuracy.
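
A minimal sketch of the general local-feature-histogram idea: local descriptors are extracted, each descriptor is quantized against a small codebook, and counts are accumulated over a spatial grid. The SIFT extractor, the externally supplied codebook, and the 2×2 grid are illustrative assumptions, not the paper's exact LFH construction or its OpenGL/GLSL implementation.

```python
import numpy as np
import cv2  # requires an OpenCV build that includes SIFT (>= 4.4)

def lfh_like_descriptor(image_gray, codebook, grid=(2, 2)):
    """Histogram of quantized local features over a spatial grid (sketch)."""
    sift = cv2.SIFT_create()
    keypoints, descs = sift.detectAndCompute(image_gray, None)
    h, w = image_gray.shape
    hist = np.zeros((grid[0], grid[1], len(codebook)), dtype=np.float32)
    if descs is None:
        return hist.ravel()
    for kp, d in zip(keypoints, descs):
        # assign the descriptor to its nearest codebook word
        word = int(np.argmin(np.linalg.norm(codebook - d, axis=1)))
        # locate the keypoint's spatial cell
        gy = min(int(kp.pt[1] / h * grid[0]), grid[0] - 1)
        gx = min(int(kp.pt[0] / w * grid[1]), grid[1] - 1)
        hist[gy, gx, word] += 1.0
    hist /= max(float(hist.sum()), 1.0)  # L1-normalize so different image sizes compare
    return hist.ravel()
```

Two such histograms can then be compared with a simple L1 or L2 distance, which is what gives a histogram-style descriptor its global-descriptor-like matching speed.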

A Post-Verification Method of Near-Duplicate Image Detection using SIFT Descriptor Binarization (SIFT 기술자 이진화를 이용한 근-복사 이미지 검출 후-검증 방법)

  • Lee, Yu Jin;Nang, Jongho
    • Journal of KIISE / v.42 no.6 / pp.699-706 / 2015
  • In recent years, as near-duplicate images have increased explosively with the spread of the Internet and of image-editing technology that allows easy access to image content, related research has been active. However, BoF (Bag-of-Features), the most frequently used method for near-duplicate image detection, can wrongly map the same feature to different visual words, or different features to the same visual word, during the quantization process that approximates high-dimensional local features with low-dimensional codes. Therefore, a post-verification method for BoF is required to overcome this limitation of vector quantization. In this paper, we propose and analyze the performance of a post-verification method for BoF that converts SIFT (Scale Invariant Feature Transform) descriptors into 128-bit binary codes and re-ranks a short candidate list produced by BoF by comparing the binary (Hamming) distances between those codes; a minimal sketch of such a binarization and comparison follows. Through an experiment using 1,500 original images, the near-duplicate detection accuracy was shown to improve by approximately 4% over the previous BoF method.
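
A minimal sketch of the binarize-then-compare step, assuming each 128-D SIFT descriptor is thresholded against its own median value; the paper's exact thresholding rule is not reproduced here.

```python
import numpy as np

def binarize_sift(desc):
    """Map a 128-D SIFT descriptor to a packed 128-bit code.
    Assumption: each dimension is compared to the descriptor's own median."""
    bits = (desc > np.median(desc)).astype(np.uint8)
    return np.packbits(bits)  # 16 bytes = 128 bits

def hamming(code_a, code_b):
    """Hamming distance between two packed binary codes."""
    return int(np.unpackbits(np.bitwise_xor(code_a, code_b)).sum())
```

In use, the descriptors of the short candidate list returned by BoF would be binarized once, and the candidates re-ranked by their Hamming distances to the query's codes.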

Design of OpenScenario Structure for Content Creation Service Based on User Defined Story (사용자 정의 스토리 기반 콘텐츠 제작 서비스를 위한 오픈 시나리오 언어 구조 설계)

  • Lee, Hyejoo;Kwon, Ki-Ryong;Lee, Suk-Hwan;Park, Yun-Kyong;Moon, Kyong Deok
    • Journal of Korea Multimedia Society / v.19 no.2 / pp.170-179 / 2016
  • A story-based content creation service provides a user with suitable content based on a story written by the user, in order to make use of the large amount of content accumulated on the Internet. For this service, the story has to be described in a computer-readable representation. In this paper, by analyzing the structure of scenarios, also known as screenplays or scripts, we define a story representation structure referred to as OpenScenario. The aim is for users to produce their own content from the massive amount of content on the Internet by means of the proposed method. OpenScenario consists of two main parts: OSD (OpenScenario Descriptors), a set of descriptors that describe the various objects of each shot, such as visual, aural, and textual objects, and OSS (OpenScenario Scripts), a set of scripts that add effects such as images, captions, transitions between shots, and background music; an illustrative sketch of such a structure follows. As a use case of the proposed method, we describe how to create new content using OpenScenario and discuss the technologies required to apply the proposed method effectively.
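
Purely as an illustration of the OSD/OSS split, the sketch below lays out a one-shot story; every key and value is a hypothetical example, not the schema defined in the paper.

```python
# Hypothetical one-shot OpenScenario-style story (names are illustrative only).
scenario = {
    "OSD": [  # descriptors for the visual, aural, and textual objects of each shot
        {"shot": 1,
         "visual": ["beach", "sunset"],
         "aural": ["waves"],
         "textual": ["Day 1"]},
    ],
    "OSS": [  # scripts that add effects such as captions, transitions, and music
        {"shot": 1,
         "caption": "Day 1",
         "transition": "fade-in",
         "background_music": "calm_theme.mp3"},
    ],
}
```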

Enhanced Boundary Partition Color Descriptor for Deformable Object Retrieval (비정형객체 검색을 위한 향상된 분할영역 색 기술자)

  • Jung, Hyun-il;Kim, Hae-kwang
    • Journal of Broadcast Engineering / v.20 no.5 / pp.778-781 / 2015
  • The paper presents a new visual descriptor for deformable object retrieval based on partition-based description. The proposed descriptor partitions a given object into a boundary area and an interior area and extracts a descriptor from each; the final descriptor combines the two. From a given image, the deformable object is segmented and its center position is calculated. The object is then partitioned into N × N blocks around the center position, and each block is classified as boundary or interior depending on the pixels it contains (a minimal sketch of this classification is shown below). The proposed descriptor consists of MPEG-7 dominant color descriptors extracted from both the boundary and interior areas. The performance of the proposed method is tested on a database of 1,973 handbag images constructed with viewpoint changes. ARR (Average Retrieval Rate) is used to measure the retrieval accuracy of the proposed algorithm in comparison with the MPEG-7 dominant color descriptor.
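
A minimal sketch of the block classification step, assuming a binary object mask and a regular N × N grid; the paper's exact labeling rule and the centering of the grid on the object's centroid are simplified here.

```python
import numpy as np

def classify_blocks(mask, n=4):
    """Label each of the n x n blocks of a binary object mask as 'boundary'
    (mixed object/background pixels), 'interior' (object only), or 'outside'."""
    h, w = mask.shape
    labels = np.empty((n, n), dtype=object)
    ys = np.linspace(0, h, n + 1, dtype=int)
    xs = np.linspace(0, w, n + 1, dtype=int)
    for i in range(n):
        for j in range(n):
            block = mask[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            if block.all():
                labels[i, j] = "interior"
            elif block.any():
                labels[i, j] = "boundary"
            else:
                labels[i, j] = "outside"
    return labels
```

A dominant-color descriptor would then be computed separately over the pixels of the boundary blocks and of the interior blocks, and the two parts concatenated.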

A High-performance Lane Recognition Algorithm Using Word Descriptors and A Selective Hough Transform Algorithm with Four-channel ROI (다중 ROI에서 영상 화질 표준화 및 선택적 허프 변환 알고리즘을 통한 고성능의 차선 인식 알고리즘)

  • Cho, Jae-Hyun;Jang, Young-Min;Cho, Sang-Bok
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.2 / pp.148-161 / 2015
  • The use of cameras in vehicles is increasing with the growth of the automotive market, and the importance of image processing techniques is expanding accordingly. In particular, the Lane Departure Warning System (LDWS) and related technologies are under development in various fields. In this paper, in order to improve the lane recognition rate over conventional methods, we extract a Normalized Luminance Descriptor value and a Normalized Contrast Descriptor value, and adjust the image gamma to normalize image quality using the correlation between the two extracted values. We then apply the Hough transform with optimized accumulator cells to a four-channel ROI; a simplified sketch of the gamma adjustment and Hough step follows. The proposed algorithm was verified at 27 frames/sec and 640×480 resolution. As a result, the lane recognition rate exceeded 97% on average in daytime, night, and late-night road environments. The proposed method also shows successful lane recognition in sections with curves or many lane boundaries.
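
A simplified sketch of gamma normalization followed by a Hough-based lane search, assuming OpenCV; the paper's four-channel ROI, normalized luminance/contrast descriptors, and optimized accumulator cells are reduced here to a single ROI with default parameters.

```python
import numpy as np
import cv2

def detect_lane_segments(frame_bgr, gamma=1.2, roi_top=0.6):
    """Gamma-correct the frame, keep only a lower-image ROI, and detect
    lane-candidate segments with a probabilistic Hough transform (sketch)."""
    # gamma correction through a lookup table
    lut = np.array([(i / 255.0) ** (1.0 / gamma) * 255 for i in range(256)],
                   dtype=np.uint8)
    gray = cv2.cvtColor(cv2.LUT(frame_bgr, lut), cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    roi = gray[int(h * roi_top):, :]  # restrict the search to the road region
    edges = cv2.Canny(roi, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=10)
    return [] if lines is None else lines.reshape(-1, 4)
```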

Emotion Image Retrieval through Query Emotion Descriptor and Relevance Feedback (질의 감성 표시자와 유사도 피드백을 이용한 감성 영상 검색)

  • Yoo Hun-Woo
    • Journal of KIISE: Software and Applications / v.32 no.3 / pp.141-152 / 2005
  • A new emotion-based image retrieval method is proposed in this paper. Query emotion descriptors, called the query color code and query gray code, are designed from human evaluations of 13 emotions ('like', 'beautiful', 'natural', 'dynamic', 'warm', 'gay', 'cheerful', 'unstable', 'light', 'strong', 'gaudy', 'hard', 'heavy') collected when 30 random patterns with different colors, intensities, and dot sizes are presented. For emotion image retrieval, once a query emotion is selected, the associated query color code and query gray code are selected. Next, a DB color code and a DB gray code that capture color, intensity, and dot size are extracted from each database image, and a matching process between the two color codes and between the two gray codes is performed to retrieve relevant emotion images. A new relevance feedback method is also proposed. The method incorporates human intention into the retrieval process by dynamically updating the weights of the query and DB color codes and the weights within the query color code; a hedged sketch of such a weighted matching and weight update is given below. In experiments over 450 images, the number of positive images was higher than that of negative images at the initial query and increased with relevance feedback.
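
The paper's code layout and exact update rule are not reproduced here; the sketch below only illustrates the general pattern of a weighted code comparison whose weights shrink on positions that disagree with user-marked relevant images.

```python
import numpy as np

def weighted_code_distance(query_code, db_code, weights):
    """Element-wise weighted distance between a query code and a DB code."""
    return float(np.sum(weights * np.abs(query_code - db_code)))

def update_weights(weights, query_code, positive_codes, lr=0.1):
    """Down-weight code positions that disagree on relevant (positive) images,
    so they matter less in the next retrieval round (hedged, Rocchio-style)."""
    for code in positive_codes:
        disagreement = np.abs(query_code - code)
        weights = weights * (1.0 - lr * disagreement)
    return weights / weights.sum()
```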

Image Information Retrieval Using DTW(Dynamic Time Warping) (DTW(Dynamic Time Warping)를 이용한 영상 정보 검색)

  • Ha, Jeong-Yo;Lee, Na-Young;Kim, Gye-Young;Choi, Hyung-Il
    • Journal of Digital Contents Society / v.10 no.3 / pp.423-431 / 2009
  • There are various image retrieval methods using shape, color, and texture features, and one of the most active areas uses shape and color information. A number of shape representations have been suggested to recognize shapes even under affine transformation. Among the many shape recognition methods, the well-known ones are Fourier descriptors and moment invariants; another is CSS (Curvature Scale Space). The maxima of the curvature scale space image have already been used to represent 2-D shapes in various applications. Because the existing CSS method has several problems, in this paper we use an improved CSS method for image retrieval. For color, either RGB or HSI color information can be used as a feature; in this paper we use the HSI color model to build a color histogram and use it as a comparison measure. Similarity is measured using Euclidean distance, and to reduce search time and improve accuracy we also use DTW to measure similarity (a minimal DTW sketch follows). Compared with the result of using Euclidean distance alone, efficiency is improved.
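
A minimal classic DTW sketch over two 1-D feature sequences (for example, color-histogram bins or CSS maxima treated as ordered sequences); the paper's exact matching formulation is not reproduced.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping cost between two 1-D sequences (textbook form)."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])
```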

Fast and All-Purpose Area-Based Imagery Registration Using ConvNets (ConvNet을 활용한 영역기반 신속/범용 영상정합 기술)

  • Baek, Seung-Cheol
    • Journal of KIISE / v.43 no.9 / pp.1034-1042 / 2016
  • Together with machine-learning frameworks, area-based imagery registration techniques can be applied easily to diverse types of image pairs without predefined features and feature descriptors. However, feature detectors are often still needed to quickly identify candidate image patch pairs, which limits the applicability of these registration techniques. In this paper, we propose a ConvNet (Convolutional Network), "Dart", that provides not only the matching metric between patches but also information about their distance, which helps reduce the search space of corresponding patch pairs. In addition, we propose a ConvNet, "Fad", that identifies patches that are difficult for Dart, improving the accuracy of registration; a generic patch-matching network sketch is given below. These two networks were implemented using deep learning with the help of a large number of training instances generated from a few registered image pairs, and were successfully applied to a simple image registration problem, suggesting that this line of research is promising.
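
The architectures of Dart and Fad are not published in this abstract, so the following PyTorch sketch only shows a generic two-patch ConvNet of the kind used for area-based patch matching, with all layer sizes chosen arbitrarily for illustration.

```python
import torch
import torch.nn as nn

class PatchMatchNet(nn.Module):
    """Scores whether two grayscale patches correspond (illustrative sketch)."""
    def __init__(self, patch=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (patch // 4) ** 2, 64), nn.ReLU(),
            nn.Linear(64, 1),  # match score (logit)
        )

    def forward(self, patch_a, patch_b):
        # stack the two patches as channels and score the pair
        return self.head(self.features(torch.cat([patch_a, patch_b], dim=1)))
```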

Feature-Based Image Retrieval using SOM-Based R*-Tree

  • Shin, Min-Hwa;Kwon, Chang-Hee;Bae, Sang-Hyun
    • Proceedings of the KAIS Fall Conference / 2003.11a / pp.223-230 / 2003
  • Feature-based similarity retrieval has become an important research issue in multimedia database systems. The features of multimedia data are useful for discriminating between multimedia objects (e.g., documents, images, video, music scores). For example, images are represented by their color histograms, texture vectors, and shape descriptors, and are usually high-dimensional data. The performance of conventional multidimensional data structures (e.g., the R-tree family, K-D-B tree, grid file, TV-tree) tends to deteriorate as the number of dimensions of the feature vectors increases. The R*-tree is the most successful variant of the R-tree. In this paper, we propose an SOM-based R*-tree as a new indexing method for high-dimensional feature vectors. The SOM-based R*-tree combines the SOM and the R*-tree to achieve search performance that scales better to high dimensionalities. Self-Organizing Maps (SOMs) provide a mapping from high-dimensional feature vectors onto a two-dimensional space. This mapping, called a topological feature map, preserves the mutual relationships (similarity) of the input feature vectors, clustering mutually similar feature vectors into neighboring nodes, and each node of the map holds a codebook vector. A best-matching-image list (BMIL) holds the similar images closest to each codebook vector. In a topological feature map there are empty nodes into which no image is classified; when building the R*-tree we use only the codebook vectors with non-empty BMILs, which eliminates the empty nodes that cause unnecessary disk accesses and degrade retrieval performance (a minimal sketch of this step follows). We experimentally compare the retrieval time cost of the SOM-based R*-tree with that of an SOM and an R*-tree, using color feature vectors extracted from 40,000 images. The results show that the SOM-based R*-tree outperforms both the SOM and the R*-tree, owing to the reduction in the number of nodes required to build the R*-tree and in retrieval time cost.
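
A minimal sketch of the step that assigns images to SOM codebook vectors and discards empty nodes before the R*-tree is built; SOM training and the R*-tree itself are omitted, and the data layout is an assumption.

```python
import numpy as np

def build_bmil(codebook, features):
    """Assign each image feature to its best-matching SOM codebook vector,
    forming a best-matching-image list (BMIL) per node, and keep only the
    non-empty nodes whose codebook vectors will be inserted into the R*-tree."""
    bmil = {node: [] for node in range(len(codebook))}
    for img_id, f in enumerate(features):
        node = int(np.argmin(np.linalg.norm(codebook - f, axis=1)))
        bmil[node].append(img_id)
    # empty nodes are dropped so they never cause useless disk accesses
    return {node: ids for node, ids in bmil.items() if ids}
```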
