• Title/Summary/Keyword: 영상 세그멘테이션 (image segmentation)

Search results: 56

Speed Sign Recognition by Using Hierarchical Application of Color Segmentation and Normalized Template Matching (컬러 세그멘테이션 및 정규화 템플릿 매칭의 계층적 적용에 의한 속도 표지판 인식)

  • Lee, Kang-Ho; Lee, Kyu-Won
    • The KIPS Transactions: Part B / v.16B no.4 / pp.257-262 / 2009
  • A method for the region extraction and recognition of speed signs in a real road environment is proposed. The region of the speed sign is extracted using color information, and the numbers are then segmented within that region. We improve the recognition rate by performing incline compensation of the speed sign in both clockwise and counterclockwise directions. In image sequences of the real road environment, robust recognition results are achieved for speed signs under normal conditions as well as inclined ones.
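
As a rough illustration of the two ideas named in the title, the sketch below extracts a red sign candidate by color thresholding and scores digit templates with normalized correlation, trying a few clockwise and counterclockwise rotations as a stand-in for the incline compensation. All thresholds, angles, and the `templates` dictionary are illustrative assumptions, not the authors' settings.

```python
import cv2
import numpy as np

def extract_sign_region(bgr):
    """Return the largest red region (candidate speed sign) cropped from the frame."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around hue 0, so combine two hue ranges.
    mask = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return bgr[y:y + h, x:x + w]

def recognize_speed(sign_bgr, templates):
    """Match the sign interior against digit templates using normalized correlation."""
    gray = cv2.cvtColor(sign_bgr, cv2.COLOR_BGR2GRAY)
    best_label, best_score = None, -1.0
    for label, tmpl in templates.items():          # templates: {"30": gray image, ...}
        # Crude incline compensation: test a few small rotations in both directions.
        for angle in (-10, -5, 0, 5, 10):
            M = cv2.getRotationMatrix2D((gray.shape[1] / 2, gray.shape[0] / 2), angle, 1.0)
            rotated = cv2.warpAffine(gray, M, (gray.shape[1], gray.shape[0]))
            score = float(cv2.matchTemplate(rotated, tmpl, cv2.TM_CCOEFF_NORMED).max())
            if score > best_score:
                best_label, best_score = label, score
    return best_label, best_score
```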

Purchase Information Extraction Model From Scanned Invoice Document Image By Classification Of Invoice Table Header Texts (인보이스 서류 영상의 테이블 헤더 문자 분류를 통한 구매 정보 추출 모델)

  • Shin, Hyunkyung
    • Journal of Digital Convergence / v.10 no.11 / pp.383-387 / 2012
  • Development of an automated document management system for scanned invoice images must meet rigorous accuracy requirements for the extraction of monetary data, which necessitates automatic validation of the extracted values against a generative invoice table model. Use of internal constraints such as "amount = unit price times quantity" is a typical implementation. In this paper, we propose a novel invoice information extraction model with an improved auto-validation method that utilizes table header detection and column classification.
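
The constraint-based auto-validation mentioned in the abstract ("amount = unit price times quantity") can be sketched as a simple consistency check on extracted rows; the row structure and tolerance below are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class InvoiceRow:
    description: str
    unit_price: float
    quantity: float
    amount: float

def validate_row(row: InvoiceRow, rel_tol: float = 0.01) -> bool:
    """Accept an extracted row only if the monetary constraint holds within tolerance."""
    expected = row.unit_price * row.quantity
    return abs(expected - row.amount) <= rel_tol * max(abs(expected), 1.0)

rows = [
    InvoiceRow("widget", 12.50, 4, 50.00),   # consistent -> keep
    InvoiceRow("gadget", 7.00, 3, 12.00),    # OCR error somewhere -> flag for re-extraction
]
print([validate_row(r) for r in rows])       # [True, False]
```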

An Optimal 2D Quadrature Polar Separable Filter for Texture Analysis (조직분석을 위한 최적 2차원 Quadrature Polar Separable 필터)

  • 이상신; 문용선; 박종안
    • The Journal of Korean Institute of Communications and Information Sciences / v.17 no.3 / pp.288-296 / 1992
  • This paper describes an improved 2D QPS (quadrature polar separable) filter design and its applications to texture processing. The filter kernel pair consists of the product of a radial weighting function based on the finite PSS (prolate spheroidal sequences) and an exponential attenuation function of the orientation angle; it is quadrature and polar separable in the frequency domain. It is near-optimal with respect to energy loss because the orientation-angle function is made to approximate the radial weighting function. The filter's frequency characteristics are easy to control, since they depend only on design specifications such as the bandwidth, the directional angle, the attenuation constant, and the shift constant of the central frequency. Several applications of the filter in texture processing are considered, such as the generation of texture images, the estimation of orientation angles, and the segmentation of synthetic texture images. The results show that the filter with a wide bandwidth can be used for the generation and discrimination of strongly oriented textures, and that the segmentation results are good.
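
The sketch below builds a polar-separable frequency-domain kernel of the general form described here: a radial weighting function multiplied by an exponential attenuation of the orientation angle. A Gaussian radial window stands in for the prolate spheroidal sequence, and every constant is illustrative rather than taken from the paper.

```python
import numpy as np

def qps_like_kernel(size=128, f0=0.25, bandwidth=0.08, theta0=0.0, alpha=8.0):
    """Build a polar-separable frequency-domain kernel H(f, theta) = R(f) * A(theta)."""
    fy, fx = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size), indexing="ij")
    f = np.hypot(fx, fy)                                  # radial frequency
    theta = np.arctan2(fy, fx)                            # orientation angle
    radial = np.exp(-0.5 * ((f - f0) / bandwidth) ** 2)   # R(f): PSS stand-in (Gaussian)
    dtheta = np.angle(np.exp(1j * (theta - theta0)))      # wrap angle difference to (-pi, pi]
    angular = np.exp(-alpha * np.abs(dtheta))             # A(theta): exponential attenuation
    return radial * angular

def filter_texture(image, **kwargs):
    """Apply the kernel in the frequency domain (assumes a square input image)."""
    H = qps_like_kernel(size=image.shape[0], **kwargs)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))
```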

Forward Vehicle Detection Algorithm Using Column Detection and Bird's-Eye View Mapping Based on Stereo Vision (스테레오 비전기반의 컬럼 검출과 조감도 맵핑을 이용한 전방 차량 검출 알고리즘)

  • Lee, Chung-Hee; Lim, Young-Chul; Kwon, Soon; Kim, Jong-Hwan
    • The KIPS Transactions: Part B / v.18B no.5 / pp.255-264 / 2011
  • In this paper, we propose a forward vehicle detection algorithm using column detection and bird's-eye view mapping based on stereo vision. The algorithm can detect forward vehicles robustly in real, complex traffic situations. It consists of three steps: road feature-based column detection, bird's-eye view mapping-based obstacle segmentation, and obstacle area remerging with vehicle verification. First, we extract a road feature using the maximum-frequency values in the v-disparity map and perform column detection using this road feature as a new criterion. The road feature is a more appropriate criterion than the median value because it is not affected by the traffic situation, for example changes in obstacle size or the number of obstacles. However, multiple obstacles may still remain within one obstacle area, so we perform bird's-eye view mapping-based obstacle segmentation to divide obstacles accurately. Obstacles can be segmented easily because the bird's-eye view mapping represents obstacle positions on a planar grid using the depth map and camera information. Additionally, we perform obstacle area remerging, because segmented obstacle areas may belong to the same obstacle. Finally, we verify whether the obstacles are vehicles using the depth map and gray image. Experiments on real, complex traffic situations demonstrate the vehicle detection performance of the algorithm.
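
The v-disparity road-feature step can be sketched as follows: each image row is histogrammed over disparity, and the maximum-frequency bin per row serves as the road criterion used for column detection. The array shapes and disparity range below are assumptions.

```python
import numpy as np

def v_disparity(disparity, max_disp=64):
    """Build the v-disparity map: one disparity histogram per image row."""
    rows, _ = disparity.shape
    vdisp = np.zeros((rows, max_disp), dtype=np.int32)
    for v in range(rows):
        valid = disparity[v][(disparity[v] >= 0) & (disparity[v] < max_disp)]
        hist, _ = np.histogram(valid, bins=max_disp, range=(0, max_disp))
        vdisp[v] = hist
    return vdisp

def road_feature(disparity, max_disp=64):
    """Per-row road disparity: the maximum-frequency bin of each v-disparity row."""
    vdisp = v_disparity(disparity, max_disp)
    return vdisp.argmax(axis=1)   # used as the criterion for the column-detection step
```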

Development of Analysis Software for Quantitative Assessment of Sarcopenia in Medical Imaging (의료영상에서 근감소증 정량평가를 위한 분석 소프트웨어 개발)

  • Kim, Seung-Jin; Jeong, Chang-Won; Kim, Tae-Hoon; Jun, Hong Yong; No, Si-Hyeong; Kim, Ji-Eon; Lee, Chung-Sub; Yoon, Kwon-Ha
    • Proceedings of the Korea Information Processing Society Conference / 2019.05a / pp.291-292 / 2019
  • This paper describes analysis software specialized for the quantitative assessment of sarcopenia based on medical images. In particular, the proposed software can segment, in a semi-automatic manner, the regions of muscle, subcutaneous fat, and visceral fat in abdominal CT images, which are important factors in sarcopenia image analysis, and quantify them. It can also generate labeled images of each region in various formats. The analysis software is expected to serve as a starting point for defining the diagnosis and quantitative assessment of sarcopenia, and the analysis can be applied to various diseases.
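
A minimal sketch of the kind of threshold-based, semi-automatic segmentation described here is given below. The Hounsfield-unit ranges are values commonly cited for body-composition analysis, not ones taken from this paper, and the body and visceral masks are assumed to come from the semi-automatic step.

```python
import numpy as np

MUSCLE_HU = (-29, 150)   # commonly cited skeletal-muscle HU range (assumption)
FAT_HU = (-190, -30)     # commonly cited adipose-tissue HU range (assumption)

def tissue_masks(ct_hu, body_mask, visceral_mask):
    """Label muscle, subcutaneous fat, and visceral fat on an abdominal CT slice (HU)."""
    muscle = (ct_hu >= MUSCLE_HU[0]) & (ct_hu <= MUSCLE_HU[1]) & body_mask
    fat = (ct_hu >= FAT_HU[0]) & (ct_hu <= FAT_HU[1]) & body_mask
    visceral_fat = fat & visceral_mask          # inside the abdominal wall contour
    subcutaneous_fat = fat & ~visceral_mask     # between the skin and the abdominal wall
    return muscle, subcutaneous_fat, visceral_fat

def area_cm2(mask, pixel_spacing_mm):
    """Cross-sectional area of a mask, given (row, col) pixel spacing in millimetres."""
    return mask.sum() * pixel_spacing_mm[0] * pixel_spacing_mm[1] / 100.0
```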

Drone Image AI Analysis Model for Ecological Environment Investigation (생태 환경 조사를 위한 드론영상 AI분석 모델)

  • Shin, Kwang-seong; Shin, Seong-yoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.355-356 / 2021
  • Geological and biological surveys are conducted every year to investigate the state of tidal flat loss and ecological changes in the Saemangeum embankment area. In addition, various activities for forest monitoring and large-scale environmental monitoring are being actively carried out throughout Korea. With the recent development of drone technology and artificial intelligence technology, various studies are being conducted to perform these activities more efficiently and economically. In this study, we propose an image analysis technique using semantic segmentation to efficiently investigate and analyze large-scale ecological environments with drones.
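
Since drone orthomosaics of large survey areas rarely fit in memory at once, tile-wise application of a semantic-segmentation network is one plausible reading of the proposed workflow. In the sketch below, `model` is any callable returning a per-pixel class map for a tile; the tile size and class names are assumptions.

```python
import numpy as np

def segment_large_image(image, model, tile=512):
    """Segment a large drone mosaic tile by tile to keep memory use bounded."""
    h, w = image.shape[:2]
    labels = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = image[y:y + tile, x:x + tile]
            labels[y:y + patch.shape[0], x:x + patch.shape[1]] = model(patch)
    return labels

def class_coverage(labels, class_names):
    """Fraction of the surveyed area occupied by each ecological class."""
    total = labels.size
    return {name: float((labels == idx).sum()) / total
            for idx, name in enumerate(class_names)}
```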

A Study of plate Number Extraction and Segmentation using domain Knowledge (사전 정보를 이용한 자동차 번호판의 문자 위치 추출과 세그멘테이션에 관한 연구)

  • 김병훈; 고미애; 김영모
    • Proceedings of the Korean Information Science Society Conference / 2003.04c / pp.259-261 / 2003
  • The plate recognition process of a vehicle license plate recognition system consists of three core parts: image acquisition and plate region extraction, individual character extraction, and character recognition. Among these, the accuracy of plate extraction affects the results of the whole system, and accurate extraction with fast execution is required in diverse surrounding environments. In this paper, to shorten the detection time, we first extract and verify the position of the registration number, the primary recognition target, using intensity differences and prior knowledge. Then, using the position of the region name relative to the registration number, we select the approximate character positions and extract the positions of all character elements on the plate through the projection and shifting of the outer adjacent lines of each element.
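
One common way to realize the projection-based character localization described above is a vertical projection profile over a binarized plate region; the sketch below uses Otsu thresholding and an assumed minimum character width, neither of which is taken from the paper.

```python
import cv2
import numpy as np

def segment_characters(plate_gray, min_width=4):
    """Split a binarized plate image into character boxes using its vertical projection."""
    _, binary = cv2.threshold(plate_gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    projection = binary.sum(axis=0)                # column-wise sum of foreground pixels
    in_char, start, boxes = False, 0, []
    for x, value in enumerate(projection):
        if value > 0 and not in_char:              # entering a character column run
            in_char, start = True, x
        elif value == 0 and in_char:               # leaving a character column run
            in_char = False
            if x - start >= min_width:
                boxes.append((start, x))           # (left, right) columns of one character
    if in_char and plate_gray.shape[1] - start >= min_width:
        boxes.append((start, plate_gray.shape[1]))
    return boxes
```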

Face Recognition using the Feature Space and the Image Vector (세그멘테이션에 의한 특징공간과 영상벡터를 이용한 얼굴인식)

  • 김선종
    • Journal of Institute of Control, Robotics and Systems / v.5 no.7 / pp.821-826 / 1999
  • This paper proposes a face recognition method using feature spaces and image vectors in the image plane. We obtain a 2-D feature space using a self-organizing map that takes two inputs from the axes of the given image. The image vector consists of the map's weights and the average gray levels in the feature space. We can also reconstruct a normalized face using the image vector, independently of the size of the given face image. In the proposed method, each face is recognized by the best match of the feature spaces and by the maximum match of the normalized retrieved face images, respectively. To enhance the recognition rate, our method combines the two recognition results from the feature spaces and the retrieved images. Simulations are conducted on the ORL (Olivetti Research Laboratory) images of 40 persons, each with 10 facial images, and the result shows a 100% recognition rate and a 14.5% rejection rate for a 20×20 feature size and a 24×28 retrieval image size.
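
For readers unfamiliar with the 2-D feature space used here, the sketch below trains a tiny self-organizing map on 2-D samples (for example, coordinates of facial-feature pixels). The grid size, learning-rate schedule, and training data are assumptions, not the paper's settings.

```python
import numpy as np

def train_som(samples, grid=(20, 20), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Fit a grid-of-2-D-weights SOM to 2-D samples normalized to [0, 1]."""
    rng = np.random.default_rng(seed)
    weights = rng.random((grid[0], grid[1], 2))
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
    for t in range(iters):
        x = samples[rng.integers(len(samples))]
        dist = np.linalg.norm(weights - x, axis=2)
        by, bx = np.unravel_index(dist.argmin(), dist.shape)    # best-matching unit
        lr = lr0 * (1 - t / iters)                              # decaying learning rate
        sigma = sigma0 * (1 - t / iters) + 0.5                  # shrinking neighborhood
        neigh = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
        weights += lr * neigh[..., None] * (x - weights)
    return weights   # each node's (x, y) weight becomes one entry of the image vector
```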

Facial Caricaturing System using Facial Features information (얼굴 특징정보를 이용한 캐리커처 생성 시스템)

  • 이옥경; 박연출; 오해석
    • Proceedings of the Korean Information Science Society Conference / 2000.10b / pp.404-406 / 2000
  • The caricature generation system extracts features (facial parts) from an input portrait photograph through segmentation, and then retrieves and maps a caricature image whose feature information is similar to the extracted features. The system exploits the symmetric structure of the face and uses color and shape information, employing each facial feature as the information that distinguishes the caricature's features. The purpose of this paper is to automatically generate a similar caricature using the partial-region feature information obtained by segmenting the portrait photograph. The symmetric structure is used to extract seed pixels. For color, local color information makes the facial parts more distinct, while global color information describes the skin color of the image. For shape, invariant moments are the main feature information of the facial parts. The database is built from caricatures of each facial part, and each part must be classified by its features. The caricatures in the database are compared with the parts extracted from the face image to compute similarity, and by mapping the most similar ones, a caricature bearing the individual's features is generated automatically.
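
The invariant moments mentioned for shape features can be illustrated with Hu moments computed on a segmented facial-part mask and compared by distance against a caricature database; the log scaling and distance metric below are assumptions.

```python
import cv2
import numpy as np

def hu_features(binary_mask):
    """Seven Hu invariant moments of a binary facial-part region, log-scaled."""
    moments = cv2.moments(binary_mask.astype(np.uint8), binaryImage=True)
    hu = cv2.HuMoments(moments).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)   # compress the dynamic range

def best_caricature(query_mask, database):
    """Return the database key whose Hu-moment feature is closest to the query part."""
    q = hu_features(query_mask)
    return min(database, key=lambda k: np.linalg.norm(q - hu_features(database[k])))
```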

Detection of eye using optimal edge technique and intensity information (눈 영역에 적합한 에지 추출과 밝기값 정보를 이용한 눈 검출)

  • Mun, Won-Ho; Choi, Yeon-Seok; Kim, Cheol-Ki; Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2010.10a / pp.196-199 / 2010
  • The human eyes are important facial landmarks for image normalization due to their relatively constant interocular distance. This paper introduces a novel approach to the eye detection task using an optimal segmentation method for eye representation. The method consists of three steps: (1) an edge extraction method that can accurately extract the eye region from a gray-scale face image, (2) extraction of the eye region using a labeling method, and (3) eye localization based on intensity information. Experimental results show that a correct eye detection rate of 98.9% can be achieved on 2,408 FERET images with variations in lighting conditions and facial expressions.
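
A compact sketch of the three-step pipeline (edge extraction, labeling, intensity-based selection) might look like the following; the Canny thresholds and area limits are illustrative values rather than the ones used in the paper.

```python
import cv2
import numpy as np

def detect_eye_candidates(face_gray, min_area=30, max_area=800):
    """Return candidate eye regions, ranked so the darkest region comes first."""
    edges = cv2.Canny(face_gray, 50, 150)
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(closed)
    candidates = []
    for i in range(1, n):                                   # label 0 is the background
        x, y, w, h, area = stats[i]
        if min_area <= area <= max_area:
            mean_intensity = face_gray[labels == i].mean()  # eye regions tend to be dark
            candidates.append(((x, y, w, h), mean_intensity))
    return [box for box, _ in sorted(candidates, key=lambda c: c[1])]
```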
