• Title/Summary/Keyword: Image Extraction


An Efficient Feature Point Extraction Method for 360˚ Realistic Media Utilizing High Resolution Characteristics

  • Won, Yu-Hyeon;Kim, Jin-Sung;Park, Byuong-Chan;Kim, Young-Mo;Kim, Seok-Yoon
    • Journal of the Korea Society of Computer and Information / v.24 no.1 / pp.85-92 / 2019
  • In this paper, we propose an efficient feature point extraction method that addresses the problem of performance degradation by introducing a preprocessing step before feature point extraction, exploiting the characteristics of 360-degree realistic media. 360-degree realistic media is composed of images produced by two or more cameras, and these images are combined by extracting feature points along the edges of each image and merging the images into one when the feature points cover the same area. In this production process, however, the stitching step in which the images are combined can introduce distortion in the form of visible seams. Since 4K-class realistic media has a much higher resolution than general images, the feature point extraction and matching process also takes much more time than in general media.
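
The abstract does not spell out the preprocessing step, so the following is only a minimal sketch of the general idea it describes: reduce the cost of feature point extraction and matching on high-resolution frames by preprocessing (here, simple downscaling is assumed) before running a standard detector such as ORB in OpenCV. The file names `left.jpg`, `right.jpg` and the scale factor are placeholders, not values from the paper.

```python
# Hedged sketch: preprocessing (downscaling) before feature extraction/matching
# on overlapping high-resolution frames, in the spirit of the 360-degree
# stitching pipeline described above. Not the paper's actual method.
import cv2

def extract_and_match(path_a, path_b, scale=0.25):
    # Load the two overlapping camera images (placeholder file names).
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    # Preprocessing: downscale the 4K frames so detection/matching is cheaper.
    small_a = cv2.resize(img_a, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
    small_b = cv2.resize(img_b, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)

    # Detect ORB keypoints and descriptors on the reduced images.
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(small_a, None)
    kp_b, des_b = orb.detectAndCompute(small_b, None)

    # Brute-force Hamming matching with cross-check for a rough correspondence set.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    return kp_a, kp_b, matches

if __name__ == "__main__":
    kp_a, kp_b, matches = extract_and_match("left.jpg", "right.jpg")
    print(f"{len(matches)} matches found on the downscaled pair")
```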

AUTOMATIC SELECTION AND ADJUSTMENT OF FEATURES FOR IMAGE CLASSIFICATION

  • Saiki, Kenji;Nagao, Tomoharu
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.525-528 / 2009
  • Recently, image classification has become an important task in various fields. In general, the performance of image classification is poor without adjustment of the image features, so a method for automatic feature extraction is desirable. In this paper, we propose an image classification method that adjusts image features automatically. We assume that texture features are useful in image classification tasks because natural images are composed of several types of texture; thus, the classification accuracy is improved by using the distribution of texture features. We obtain texture features by calculating image features from each pixel under consideration and its neighboring pixels, and we then calculate image features from the distribution of those texture features. These image features are adjusted to the image classification task using a Genetic Algorithm. We apply the proposed method to classifying images into "head" or "non-head" and "male" or "female".
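
As a rough illustration of the idea of tuning pooled texture features with a genetic algorithm, the sketch below uses simple local mean/variance statistics, a nearest-centroid classifier, and synthetic "smooth vs. textured" images in place of the paper's features, classifier, and head/face data; everything here is an assumption except the overall texture-plus-GA structure.

```python
# Hedged sketch: per-pixel texture statistics pooled into an image-level feature
# vector, with a tiny genetic algorithm searching for feature weights that
# improve a simple classifier. Synthetic data stands in for the paper's images.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(0)

def texture_features(img, k=3):
    # Local mean and variance over a k x k neighbourhood, pooled into 4 statistics.
    win = sliding_window_view(img, (k, k))
    local_mean = win.mean(axis=(-1, -2))
    local_var = win.var(axis=(-1, -2))
    return np.array([local_mean.mean(), local_mean.std(),
                     local_var.mean(), local_var.std()])

def accuracy(weights, X, y):
    # Nearest-class-centroid classifier in the weighted feature space.
    Xw = X * weights
    c0, c1 = Xw[y == 0].mean(axis=0), Xw[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xw - c1, axis=1) < np.linalg.norm(Xw - c0, axis=1)).astype(int)
    return (pred == y).mean()

# Synthetic "smooth" vs "textured" images as a stand-in dataset.
imgs = [rng.normal(0, s, (32, 32)) for s in ([1.0] * 30 + [3.0] * 30)]
y = np.array([0] * 30 + [1] * 30)
X = np.array([texture_features(im) for im in imgs])

# Minimal GA: mutate a population of weight vectors, keep the fittest half.
pop = rng.uniform(0, 1, (20, X.shape[1]))
for gen in range(30):
    fitness = np.array([accuracy(w, X, y) for w in pop])
    parents = pop[np.argsort(fitness)[-10:]]
    children = np.clip(parents + rng.normal(0, 0.1, parents.shape), 0, None)
    pop = np.vstack([parents, children])

best = max(pop, key=lambda w: accuracy(w, X, y))
print("best weights:", np.round(best, 2), "accuracy:", accuracy(best, X, y))
```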


Extraction of Lip Region using Chromaticity Transformation and Fuzzy Clustering (색도 변환과 퍼지 클러스터링을 이용한 입술영역 추출)

  • Kim, Jeong Yeop
    • Journal of Korea Multimedia Society / v.17 no.7 / pp.806-817 / 2014
  • The extraction of the lip region is essential to lip reading, a field of image processing that obtains meaningful information by analyzing lip movement in human face images. Many conventional methods for extracting the lip region have been proposed. One obtains the position of the lips by using the geometric structure of the face; the other discriminates lip and skin regions using color information only. The former is more complex than the latter, but it can also analyze black-and-white images. The latter is very simple compared to the former, but it is difficult to discriminate lip and skin regions because of the close similarity between them, and its accuracy is relatively low. Conventional analyses of color coordinate systems are mostly based on a specific extraction scheme for lip regions rather than on the coordinate system itself. In this paper, a method for selecting an effective color coordinate system and a chromaticity transformation to discriminate the lip and skin regions is proposed.
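
The sketch below illustrates the general pattern of chromaticity transformation followed by fuzzy clustering, not the paper's chosen coordinate system: normalized r-g chromaticity, a plain fuzzy c-means loop, and the rule that the redder centroid is the lip cluster are all assumptions, and `face.jpg` is a placeholder.

```python
# Hedged sketch, not the paper's method: convert a face image to a chromaticity
# space and separate lip-like and skin-like pixels with a small fuzzy c-means.
import numpy as np
import cv2

def fuzzy_cmeans(X, c=2, m=2.0, iters=50, seed=0):
    # Plain fuzzy c-means: alternate membership and centroid updates.
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))          # fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

img = cv2.imread("face.jpg").astype(np.float64)          # placeholder file name
b, g, r = cv2.split(img)
s = r + g + b + 1e-9
chroma = np.stack([r / s, g / s], axis=-1)               # normalized r-g chromaticity

U, centers = fuzzy_cmeans(chroma.reshape(-1, 2))
# Assumption: the cluster with the larger r-chromaticity centroid is the lip cluster.
lip_cluster = int(np.argmax(centers[:, 0]))
lip_mask = (U.argmax(axis=1) == lip_cluster).reshape(img.shape[:2]).astype(np.uint8) * 255
cv2.imwrite("lip_mask.png", lip_mask)
```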

Analysis of Feature Extraction Algorithms Based on Deep Learning (Deep Learning을 기반으로 한 Feature Extraction 알고리즘의 분석)

  • Kim, Gyung Tae;Lee, Yong Hwan;Kim, Yeong Seop
    • Journal of the Semiconductor & Display Technology / v.19 no.2 / pp.60-67 / 2020
  • Recently, artificial intelligence technologies, including machine learning, have been applied to various fields, and demand for them is increasing. In particular, with the development of AR, VR, and MR technologies related to image processing, the use of computer vision based on deep learning has grown. The deep learning algorithms for object recognition and detection required for image processing have become diverse and advanced, and problems that were difficult to solve with existing methodologies can now be solved more simply and easily. This paper introduces various deep learning-based object recognition and extraction algorithms used to detect and recognize objects in an image and analyzes the technologies that are attracting attention.
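
One small, hedged illustration of the family of techniques such surveys cover is using a pretrained CNN backbone as a generic feature extractor by dropping its classification head; the choice of torchvision's ResNet-18 and the file name `sample.jpg` are assumptions, not anything analyzed in the paper.

```python
# Hedged illustration: a pretrained backbone reused as a deep feature extractor.
import torch
from torchvision import models, transforms
from PIL import Image

# Load a pretrained ResNet-18 and remove the final fully connected layer.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1])
feature_extractor.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

with torch.no_grad():
    img = Image.open("sample.jpg").convert("RGB")        # placeholder file name
    features = feature_extractor(preprocess(img).unsqueeze(0))
    print(features.flatten().shape)                      # 512-dimensional descriptor
```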

Content-Based Image Retrieval System using Feature Extraction of Image Objects (영상 객체의 특징 추출을 이용한 내용 기반 영상 검색 시스템)

  • Jung Seh-Hwan;Seo Kwang-Kyu
    • Journal of Korean Society of Industrial and Systems Engineering / v.27 no.3 / pp.59-65 / 2004
  • This paper explores an image segmentation and representation method using Vector Quantization (VQ) on color and texture for a content-based image retrieval system. The basic idea is a transformation from the raw pixel data to a small set of image regions that are coherent in color and texture space. These schemes are used for object-based image retrieval. The features for image retrieval are three color features from the HSV color model and five texture features from gray-level co-occurrence matrices. Once the feature extraction scheme is performed on the image, an 8-dimensional feature vector represents each pixel. The VQ algorithm is used to cluster the pixel data into groups, and a representative feature table based on the dominant groups is obtained and used to retrieve similar images according to the objects within the image. The proposed method can retrieve similar images even when the objects are translated, scaled, and rotated.
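
The sketch below mirrors only the shape of this pipeline: an 8-dimensional per-pixel vector (3 HSV values plus 5 cheap local statistics standing in for the GLCM features) is vector-quantized with k-means, and the dominant centroids form a representative feature table. The features, cluster count, and file name are assumptions.

```python
# Hedged sketch of a VQ-style per-pixel feature signature, not the paper's code.
import numpy as np
import cv2

def pixel_features(img_bgr, k=5):
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    # Five cheap texture cues from a k x k neighbourhood (mean, variance,
    # and gradient statistics) stand in for the paper's GLCM features.
    mean = cv2.blur(gray, (k, k))
    var = cv2.blur(gray * gray, (k, k)) - mean * mean
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = cv2.magnitude(gx, gy)
    feats = np.dstack([hsv, mean, var, np.abs(gx), np.abs(gy), mag])
    return feats.reshape(-1, 8)

def signature(img_bgr, clusters=8):
    X = pixel_features(img_bgr)
    # k-means plays the role of the VQ codebook training step here.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(X, clusters, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    counts = np.bincount(labels.ravel(), minlength=clusters)
    order = np.argsort(counts)[::-1]
    return centers[order], counts[order] / counts.sum()   # representative feature table

if __name__ == "__main__":
    img = cv2.imread("query.jpg")                          # placeholder file name
    centers, weights = signature(img)
    print(centers.shape, weights)
```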

Metadata Processing Technique for Similar Image Search of Mobile Platform

  • Seo, Jung-Hee
    • Journal of information and communication convergence engineering / v.19 no.1 / pp.36-41 / 2021
  • Text-based image retrieval is not only cumbersome, as it requires the manual input of keywords by the user, but is also limited by the semantics of those keywords. Content-based image retrieval, by contrast, enables visual processing by a computer and solves the problems of text retrieval more fundamentally. Vision applications such as the extraction and mapping of image characteristics require processing a large amount of data in a mobile environment, which makes efficient power consumption difficult. Hence, an effective image retrieval method for mobile platforms is proposed herein. To give the inserted keywords a visual meaning, the efficiency of image retrieval is improved by extracting exchangeable image file format (EXIF) metadata keywords from images found through content-based similar image retrieval and then automatically adding those keywords to images captured on mobile devices. Additionally, users can manually add or modify keywords in the image metadata.
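
The metadata half of this idea can be sketched as below: attach keywords (obtained elsewhere, e.g. from a similar image found by content-based retrieval) to a captured photo's EXIF data. Storing them in the Windows-style XPKeywords tag, the merge logic, and the file names are all assumptions of this sketch, not the paper's implementation.

```python
# Hedged sketch: merge new keywords into a JPEG's EXIF XPKeywords tag with Pillow.
from PIL import Image

XP_KEYWORDS = 0x9C9E  # Windows-style EXIF keywords tag, stored as UTF-16LE bytes

def add_keywords(src_path, dst_path, keywords):
    img = Image.open(src_path)
    exif = img.getexif()
    # Merge new keywords with any that are already present.
    existing = exif.get(XP_KEYWORDS)
    current = bytes(existing).decode("utf-16-le").rstrip("\x00").split(";") if existing else []
    merged = sorted(set(current + keywords) - {""})
    exif[XP_KEYWORDS] = ";".join(merged).encode("utf-16-le")
    img.save(dst_path, exif=exif.tobytes())

if __name__ == "__main__":
    add_keywords("captured.jpg", "captured_tagged.jpg", ["beach", "sunset"])
```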

Line feature extraction in a noisy image

  • Lee, Joon-Woong;Oh, Hak-Seo;Kweon, In-So
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 1996.10a / pp.137-140 / 1996
  • Finding line segments in an intensity image has been one of the most fundamental issues in computer vision. In complex scenes, it is hard to detect the locations of point features; line features are more robust and provide greater positional accuracy. In this paper we present a robust line feature extraction algorithm that extracts line features in a single pass without using any assumptions or constraints. Our algorithm consists of five steps: (1) edge scanning, (2) edge normalization, (3) line-blob extraction, (4) line-feature computation, and (5) line linking. By using edge scanning, the computational complexity caused by too many edge pixels is drastically reduced. Edge normalization improves the local quantization error induced by the gradient-space partitioning and minimizes perturbations in edge orientation. We also analyze the effects of edge processing, and of the least-squares-based and principal-axis-based methods, on the computation of line orientation. We show the algorithm's efficiency on real images.
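
As a hedged stand-in rather than the authors' five-step algorithm, a common way to pull line segments out of a noisy intensity image with OpenCV is to smooth, run Canny edge detection, and link edges into segments with the probabilistic Hough transform; `scene.jpg` and the thresholds below are placeholders.

```python
# Hedged stand-in for line segment extraction in a noisy image (not the paper's method).
import cv2

img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(img, (5, 5), 1.5)          # suppress pixel noise first
edges = cv2.Canny(blurred, 50, 150)                   # edge map (roughly the "edge scanning" role)
segments = cv2.HoughLinesP(edges, rho=1, theta=3.1416 / 180,
                           threshold=60, minLineLength=40, maxLineGap=5)

out = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
for x1, y1, x2, y2 in (segments.reshape(-1, 4) if segments is not None else []):
    cv2.line(out, (x1, y1), (x2, y2), (0, 0, 255), 1)
cv2.imwrite("lines.png", out)
print(0 if segments is None else len(segments), "segments")
```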


Fire Image Processing Using OpenCV (OpenCV를 사용한 화재 영상 처리)

  • Kang, Suk Won;Lee, Soon Yi;Park, Ji Wong
    • Proceedings of the Korea Contents Association Conference / 2009.05a / pp.79-82 / 2009
  • In this paper, we propose a new image processing method for detecting fire in images. On images captured from a camera, we use the OpenCV library to implement various image processing techniques such as difference images, binarization, contour extraction, noise removal (morphological open and close), pixel counting, and flicker extraction.
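
The processing chain listed in the abstract maps fairly directly onto OpenCV calls; the sketch below strings them together, with the video path, thresholds, and pixel-count cutoff as placeholders rather than the paper's tuned values.

```python
# Hedged sketch of the listed chain: difference image, binarization, morphology,
# contour extraction, and a pixel count as a crude flicker/motion cue.
import cv2

cap = cv2.VideoCapture("fire.mp4")                      # placeholder video source
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)                 # difference image
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)   # binarization
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)       # remove small noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)      # fill small holes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    moving_pixels = cv2.countNonZero(mask)              # pixel calculation
    if contours and moving_pixels > 500:
        print("candidate fire region(s):", len(contours))
    prev_gray = gray

cap.release()
```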


A Study on Automatic Vehicle Extraction within Drone Image Bounding Box Using Unsupervised SVM Classification Technique (무감독 SVM 분류 기법을 통한 드론 영상 경계 박스 내 차량 자동 추출 연구)

  • Junho Yeom
    • Land and Housing Review / v.14 no.4 / pp.95-102 / 2023
  • Numerous investigations have explored the integration of machine learning algorithms with high-resolution drone imagery for object detection in urban settings. However, a prevalent limitation in vehicle extraction studies is the reliance on bounding boxes rather than instance segmentation, which hinders the precise determination of vehicle direction and exact boundaries. Instance segmentation, while providing detailed object boundaries, necessitates labour-intensive labelling of individual objects, prompting the need for research on automating unsupervised instance segmentation in vehicle extraction. In this study, a novel approach is proposed for vehicle extraction that applies unsupervised SVM classification to vehicle bounding boxes in drone images. The method aims to address the challenges associated with bounding-box-based approaches and provide a more accurate representation of vehicle boundaries. The study showed promising results, demonstrating 89% accuracy in vehicle extraction. Notably, the proposed technique proved effective even when dealing with significant variations in spectral characteristics within the vehicles. This research contributes to the field by offering a viable solution for automatic and unsupervised instance segmentation in the context of vehicle extraction from images.
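
One hedged reading of "unsupervised SVM classification inside a bounding box" is sketched below: pixels in the box are split into two clusters without labels, the cluster assignments serve as pseudo-labels to train an SVM, and the SVM's prediction separates vehicle from background. This is an illustrative interpretation, not the paper's published procedure; the file name and box coordinates are placeholders.

```python
# Hedged sketch: cluster-derived pseudo-labels feeding an SVM inside a bounding box.
import numpy as np
import cv2
from sklearn.cluster import KMeans
from sklearn.svm import SVC

img = cv2.imread("drone_tile.jpg")                     # placeholder file name
x, y, w, h = 120, 80, 60, 30                           # placeholder bounding box
patch = img[y:y + h, x:x + w].reshape(-1, 3).astype(np.float32)

# Step 1: unsupervised split of the box into two spectral clusters.
pseudo = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(patch)

# Step 2: treat the cluster ids as pseudo-labels and fit an SVM on the colours.
svm = SVC(kernel="rbf", gamma="scale").fit(patch, pseudo)
labels = svm.predict(patch).reshape(h, w)

# Assumption: the class covering the box centre is the vehicle.
vehicle_id = labels[h // 2, w // 2]
mask = (labels == vehicle_id).astype(np.uint8) * 255
cv2.imwrite("vehicle_mask.png", mask)
```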

Depth Extraction of Partially Occluded 3D Objects Using Axially Distributed Stereo Image Sensing

  • Lee, Min-Chul;Inoue, Kotaro;Konishi, Naoki;Lee, Joon-Jae
    • Journal of information and communication convergence engineering / v.13 no.4 / pp.275-279 / 2015
  • There are several methods for recording three-dimensional (3D) information about objects, such as lens-array-based integral imaging, synthetic aperture integral imaging (SAII), computer-synthesized integral imaging (CSII), axially distributed image sensing (ADS), and axially distributed stereo image sensing (ADSS). The ADSS method is capable of recording partially occluded 3D objects and reconstructing high-resolution slice plane images. In this paper, we present a computational method for depth extraction of partially occluded 3D objects using ADSS. In the proposed method, high-resolution elemental stereo image pairs are recorded by simply moving the stereo camera along the optical axis, and the recorded elemental image pairs are used to reconstruct 3D slice images with a computational reconstruction algorithm. To extract the depth information of a partially occluded 3D object, we utilize edge enhancement and a simple block matching algorithm between the reconstructed slice image pairs. To demonstrate the proposed method, we carry out preliminary experiments and present the results.
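
The final depth step described above can be sketched as edge-enhancing a reconstructed slice image pair and running a simple block matcher; OpenCV's StereoBM is used below as the block matcher, which is an assumption, and the slice images themselves (placeholders `slice_left.png` and `slice_right.png`) would come from the ADSS reconstruction, which is not reproduced here.

```python
# Hedged sketch: edge enhancement followed by block matching on a slice image pair.
import cv2
import numpy as np

def edge_enhance(gray):
    # Unsharp-style enhancement: subtract the Laplacian edge response.
    lap = cv2.Laplacian(gray, cv2.CV_16S, ksize=3)
    return cv2.convertScaleAbs(gray.astype(np.int16) - lap)

left = cv2.imread("slice_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("slice_right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(edge_enhance(left), edge_enhance(right))

# Larger disparity means the object is closer; rescale only for visualization.
disp_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("depth_map.png", disp_vis)
```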