• Title/Summary/Keyword: 영상사상

Search Result 226

Background Segmentation in Color Image Using Self-Organizing Feature Selection (자기 조직화 기법을 활용한 컬러 영상 배경 영역 추출)

  • Shin, Hyun-Kyung
    • The KIPS Transactions:PartB
    • /
    • v.15B no.5
    • /
    • pp.407-412
    • /
    • 2008
  • Color segmentation is one of the most challenging problems in image processing, especially for images with cluttered backgrounds. A great number of color segmentation methods have been developed and applied to real problems. In this paper, we suggest a new methodology. Our approach focuses on background extraction, as a complementary operation to standard foreground object segmentation, using the self-organizing feature-selective property of an unsupervised self-learning paradigm based on a competitive algorithm. The results of our study show that background segmentation can be achieved in an efficient manner.
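The unsupervised competitive learning that the abstract builds on can be illustrated with a minimal winner-take-all sketch over pixel colors. This is an illustrative NumPy example of the general technique, not the authors' implementation; the unit count, learning rate, and epoch count are arbitrary choices.

```python
import numpy as np

def competitive_color_clusters(pixels, n_units=4, lr=0.1, epochs=5, seed=0):
    """Winner-take-all competitive learning over RGB pixel vectors.

    pixels: (N, 3) float array of colors in [0, 1].
    Returns the learned unit (cluster-center) weights, shape (n_units, 3).
    """
    rng = np.random.default_rng(seed)
    # Initialize units from randomly chosen pixels.
    w = pixels[rng.choice(len(pixels), n_units, replace=False)].astype(float)
    for _ in range(epochs):
        for x in pixels[rng.permutation(len(pixels))]:
            j = np.argmin(np.sum((w - x) ** 2, axis=1))  # winning unit
            w[j] += lr * (x - w[j])                      # move winner toward input
    return w
```

Units that repeatedly win for background-colored pixels converge to the dominant background colors, which is the feature-selective behavior the abstract exploits.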

View-Invariant Body Pose Estimation based on Biased Manifold Learning (편향된 다양체 학습 기반 시점 변화에 강인한 인체 포즈 추정)

  • Hur, Dong-Cheol;Lee, Seong-Whan
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.11
    • /
    • pp.960-966
    • /
    • 2009
  • A manifold represents the relationships among high-dimensional data samples in a low-dimensional space. In human pose estimation, manifolds are created in a low-dimensional space to process image data and 3D body configuration data, and manifold learning is the process of building such a manifold. It is, however, vulnerable to silhouette variations, which arise from changes of viewpoint, person, and distance, and from noise; representing all silhouette variations in a single manifold is impossible. In this paper, we focus on the silhouette variations caused by viewpoint change. Previous manifold-learning-based methods for view-invariant pose estimation took one of two approaches: modeling a manifold for every viewpoint, or extracting view factors from the mapping functions. Because they rely on unsupervised learning, however, these methods do not support one-to-one mapping between silhouettes and their corresponding body configurations, and modeling the manifolds and extracting the view factors are very complex. We therefore propose a method based on three manifolds: a view manifold, a pose manifold, and a body configuration manifold. To build these manifolds, we employ biased manifold learning, and we then learn mapping functions among the spaces (2D image space, pose manifold space, view manifold space, body configuration manifold space, and 3D body configuration space). In our experiments, we could estimate various body poses from 24 viewpoints.

Construction of Spatio-Temporal Images in Main Flow Direction for Surface Image Velocimetry (표면영상유속계를 위한 주흐름 방향 시공간 영상의 구성)

  • Kwonkyu Yu;Yoonho Lee;Byungman Yoon;Namjoo Lee
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2023.05a
    • /
    • pp.303-303
    • /
    • 2023
  • To build a practical surface image velocimetry system, suitable hardware and software must be combined. In this study, a CCTV camera was chosen as the hardware, and an ultrasonic water-level gauge was used to read the water level continuously. On the software side, an 11-parameter projective transformation was applied so that the measurement points are reconstructed accurately as the water level changes; in addition, at each measurement point a spatio-temporal image was constructed exactly along the main flow direction, and these spatio-temporal images were analyzed. As a result, a surface image velocimetry system was built that continuously captures and analyzes one-minute videos recorded at five-minute intervals to estimate the discharge. This paper introduces one of these software improvements: the method of constructing spatio-temporal images in the main flow direction. First, the coordinate transformation coefficients for the river surface image were computed with the 11-parameter projective method. Using these coefficients, the measurement points within the surface image are adjusted appropriately as the river stage changes. Next, at each measurement point the direction perpendicular to the measurement cross-section was selected, so that this direction within the image, i.e., the direction perpendicular to the river's measurement cross-section, becomes the main flow direction. From the spatio-temporal image volume cropped at each measurement point of the one-minute video, a spatio-temporal image along the main flow direction was extracted and analyzed with the cross-correlation method or the fast Fourier transform. The spatio-temporal images constructed in this way align exactly with the main flow direction, which resolves a known problem of conventional surface image velocimetry, namely that the velocity vectors at some measurement points did not coincide with the main flow direction. A surface image velocimeter built with the developed method was test-installed on the Insu Stream and examined for a storm event; accurate, rapid, and continuous discharge measurement was possible.
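The cross-correlation analysis of a spatio-temporal image mentioned in the abstract can be sketched as follows: the displacement of the surface pattern between consecutive time rows of the STI is found from the peak of the cross-correlation, and the mean displacement is converted to a velocity. This is a simplified illustration; the pixel spacing `dx` and frame interval `dt` are assumed parameters, not values from the paper.

```python
import numpy as np

def sti_velocity(sti, dx=0.05, dt=1 / 30):
    """Estimate mean surface velocity from a spatio-temporal image (STI).

    sti: 2-D array, rows = time steps, columns = positions along the main
         flow direction. dx: column spacing [m]; dt: frame interval [s].
    The shift between consecutive rows is located via cross-correlation.
    """
    shifts = []
    for t in range(len(sti) - 1):
        a = sti[t] - sti[t].mean()
        b = sti[t + 1] - sti[t + 1].mean()
        corr = np.correlate(b, a, mode="full")         # full cross-correlation
        shifts.append(np.argmax(corr) - (len(a) - 1))  # lag of the peak
    return np.mean(shifts) * dx / dt                   # pixels/frame -> m/s
```

Because the STI here is cut exactly along the main flow direction, the one-dimensional shift corresponds directly to the streamwise surface velocity.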


FPGA Implementation of Real-time 2-D Wavelet Image Compressor (실시간 2차원 웨이블릿 영상압축기의 FPGA 구현)

  • 서영호;김왕현;김종현;김동욱
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.27 no.7A
    • /
    • pp.683-694
    • /
    • 2002
  • In this paper, a digital image compression codec using the 2D DWT (Discrete Wavelet Transform) is designed with FPGA technology for real-time operation. The implemented wavelet-based codec consists of a wavelet kernel for the wavelet filtering process, a quantizer/Huffman coder for quantization and Huffman encoding of the wavelet coefficients, a memory controller for interfacing with external memories, an input interface that receives image pixels from the A/D converter, an output interface that packs the irregular-bit-length Huffman codes into regular 32-bit data, a memory-kernel buffer that arranges data for real-time processing, a PCI interface, and modules for timing between the other modules. Since the memory-mapping method converts column-direction reads into row-direction reads, the read process in the vertical-direction wavelet decomposition is handled very efficiently. The overall operation of the wavelet codec is synchronized with the field signal of the A/D converter: the hardware is pipelined in units of fields, and each field operation is divided according to the decomposition levels of the wavelet transform. The implemented hardware uses 11119 LABs (45%) and 28352 ESBs (9%) of an APEX20KC EP20K600CB652-7 FPGA device and is mapped into a single FPGA without additional external logic. It can process 33 frames (66 fields) per second, so real-time image compression is possible.
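The separable row-then-column wavelet decomposition described above can be sketched in software with a one-level Haar DWT. This is an illustrative stand-in (the paper does not state its wavelet filter); the transpose step mirrors the memory-mapping trick of turning column-direction reads into row-direction reads.

```python
import numpy as np

def haar_dwt2d(img):
    """One decomposition level of a separable 2-D Haar DWT.

    Filters along rows first, then along columns via a transpose, so every
    filtering pass is a row pass. Returns the four half-size subbands
    LL, LH, HL, HH.
    """
    def haar_rows(x):
        lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)  # low-pass + downsample
        hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)  # high-pass + downsample
        return lo, hi

    lo, hi = haar_rows(img.astype(float))
    ll, lh = haar_rows(lo.T)   # transpose: column filtering becomes a row pass
    hl, hh = haar_rows(hi.T)
    return ll.T, lh.T, hl.T, hh.T
```

Further decomposition levels reapply the same function to the LL subband, matching the level-by-level field processing described in the abstract.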

A Study on Automatic Target Recognition Using SAR Imagery (SAR 영상을 이용한 자동 표적 식별 기법에 대한 연구)

  • Park, Jong-Il;Kim, Kyung-Tae
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.22 no.11
    • /
    • pp.1063-1069
    • /
    • 2011
  • NCTR (Non-Cooperative Target Recognition) and ATR (Automatic Target Recognition) are methodologies for identifying military targets in radar, optical, and infrared images. Among them, the strategy of recognizing ground targets in synthetic aperture radar (SAR) images is called SAR ATR. In general, SAR ATR consists of three sequential stages: detection, discrimination, and classification. In this paper, the polar mapping classifier (PMC), originally developed to identify inverse SAR (ISAR) images, is modified so that it can be applied to SAR ATR. In addition, a preprocessing scheme mitigates the effect of clutter, and shadow information is employed to improve the classification accuracy.
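The polar mapping at the core of a PMC resamples a target chip onto an (r, θ) grid about its center, so that a rotation of the target becomes a circular shift along the θ axis. A minimal nearest-neighbour sketch of such a mapping (grid sizes are arbitrary, not the paper's settings):

```python
import numpy as np

def polar_map(img, n_r=32, n_theta=64):
    """Resample a square image onto a polar (r, theta) grid about its center.

    After the mapping, target rotation corresponds to a circular shift
    along the theta axis, which is the invariance a polar mapping
    classifier exploits. Nearest-neighbour sampling; samples falling
    outside the image are set to 0.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.linspace(0, min(cy, cx), n_r)
    th = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, th, indexing="ij")
    y = np.rint(cy + rr * np.sin(tt)).astype(int)
    x = np.rint(cx + rr * np.cos(tt)).astype(int)
    ok = (y >= 0) & (y < h) & (x >= 0) & (x < w)
    out = np.zeros((n_r, n_theta))
    out[ok] = img[y[ok], x[ok]]
    return out
```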

Analysis of Flood Inundated Area Using Multitemporal Satellite Synthetic Aperture Radar (SAR) Imagery (시계열 위성레이더 영상을 이용한 침수지 조사)

  • Lee, Gyu-Seong;Kim, Yang-Su;Lee, Seon-Il
    • Journal of Korea Water Resources Association
    • /
    • v.33 no.4
    • /
    • pp.427-435
    • /
    • 2000
  • It is often crucial to map a flood-inundated area in an accurate and rapid manner. This study evaluates the potential of satellite synthetic aperture radar (SAR) data for mapping the flood-inundated area in the Imjin River basin. Multitemporal RADARSAT SAR data from three different dates were obtained: at the time of the flooding on August 4, and before and after the flooding. Once the data sets were geometrically corrected and preprocessed, the temporal characteristics of the relative radar backscattering were analyzed. A comparison of the radar backscattering of several surface features made it clear that flooded rice paddies showed a distinctive temporal pattern of radar response: flooded paddies returned a significantly lower radar signal, while normally growing paddies showed high radar returns, which could also be easily interpreted from the color composite imagery. In addition to delineating the flooded rice fields, the multitemporal radar imagery also allowed us to distinguish the subsequent condition of the once-flooded fields.
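The temporal-backscatter reasoning above (flooded paddies return a much weaker signal than before the flood) lends itself to a simple two-date change-detection rule. The thresholds below are illustrative assumptions, not values from the study:

```python
import numpy as np

def flood_mask(before_db, during_db, drop_db=6.0, water_db=-15.0):
    """Flag flood-inundated pixels from two co-registered SAR images (in dB).

    A pixel is labelled flooded when its backscatter both drops sharply
    relative to the pre-flood image and falls below an open-water level.
    Requiring the drop (not just a low value) keeps permanent water
    bodies, which are dark in both dates, out of the flood mask.
    """
    drop = before_db - during_db
    return (drop >= drop_db) & (during_db <= water_db)
```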


Texture Mapping and 3D Face Modeling using Two Views of 2D Face Images (2장의 2차원 얼굴영상을 이용한 텍스쳐 생성과 자동적인 3차원 얼굴모델링)

  • Weon, Sun-Hee;Kim, Gye-Young
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.15 no.9
    • /
    • pp.705-709
    • /
    • 2009
  • In this paper, we propose 3D face modeling that uses two orthogonal views of 2D face images together with automatic facial feature extraction. The proposed technique consists of two parts: personalization of the 3D face model and texture mapping.

Geometric Correction of Mouth Based Key Points of Lips (입술 특징점에 기반한 입의 기하학적 왜곡 보정)

  • 황동국;박희정;전병민
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2003.11a
    • /
    • pp.271-275
    • /
    • 2003
  • In this paper, we propose a method that corrects the geometric distortion of the mouth in an image. The method is composed of two steps: detecting key points and correcting the geometric distortion. First, key points of the lips in the source and destination images are found using a lip detection algorithm. Then, the two images are mapped by an affine transformation using the information found in the first step. In experiments on various mouths with different geometric distortions, we found that the proposed method is satisfactorily effective.
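The affine mapping used in the second step can be recovered from as few as three corresponding key points (e.g. the lip corners and the mouth center). A minimal least-squares sketch of that estimation, with the specific key-point choice being an assumption for illustration:

```python
import numpy as np

def affine_from_points(src, dst):
    """Solve the 6 affine parameters from 3 (or more) point pairs.

    src, dst: (N, 2) arrays of corresponding key points, N >= 3.
    Returns a 2x3 matrix A such that dst ~= [x, y, 1] @ A.T.
    """
    n = len(src)
    M = np.hstack([src, np.ones((n, 1))])        # homogeneous source points
    A, *_ = np.linalg.lstsq(M, dst, rcond=None)  # least-squares fit
    return A.T                                   # shape (2, 3)

def apply_affine(A, pts):
    """Map (N, 2) points through the 2x3 affine matrix A."""
    pts = np.asarray(pts, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ A.T
```

With exactly three non-collinear pairs the fit is exact; extra pairs make it a least-squares estimate, which is more robust to key-point noise.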


Applicability of Multi-temporal VCI and SVI for Spring Drought Assessment (봄 가뭄 평가를 위한 다중시기 VCI와 SVI의 적용성 분석)

  • Park, Jung-Sool;Kim, Kyung-Tak
    • Proceedings of the KSRS Conference
    • /
    • 2008.03a
    • /
    • pp.119-124
    • /
    • 2008
  • To prepare proper countermeasures against the spring droughts that have occurred periodically since the 2000s, a monitoring system that can track drought is needed, along with an index that quantifies drought severity. Analyzing drought behavior and regional severity also requires spatial, area-based analysis. Satellite imagery is a tool that can provide spatial information rapidly and periodically, and vegetation indices produced by combining satellite image bands have been used for drought monitoring, mainly in arid regions, since the mid-1990s. In this study, the Vegetation Condition Index (VCI) and the Standardized Vegetation Index (SVI) were produced from the Normalized Difference Vegetation Index (NDVI) derived from MODIS imagery, and for the period 2000 to 2007 the drought years, the severity of each drought event, and the periods and regions where droughts occurred frequently were analyzed.
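The two indices named above are standard transformations of a multi-year NDVI record and can be sketched directly; the array layout below is an assumption for illustration:

```python
import numpy as np

def vci(ndvi_stack, t):
    """Vegetation Condition Index for time step t of a multi-year NDVI stack.

    ndvi_stack: (years, H, W) NDVI values for the same period of each year.
    VCI = 100 * (NDVI - NDVI_min) / (NDVI_max - NDVI_min), with min/max
    taken pixel-wise over the historical record; low VCI indicates drought.
    """
    lo = ndvi_stack.min(axis=0)
    hi = ndvi_stack.max(axis=0)
    return 100.0 * (ndvi_stack[t] - lo) / (hi - lo)

def svi(ndvi_stack, t):
    """Standardized Vegetation Index: pixel-wise z-score of NDVI."""
    mu = ndvi_stack.mean(axis=0)
    sd = ndvi_stack.std(axis=0)
    return (ndvi_stack[t] - mu) / sd
```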


3D Video Quality Improvement for 3D TV using Color Compensation (색상 보정을 통한 3차원 TV의 입체영상 화질 개선)

  • Jung, Kil-Soo;Kang, Min-Sung;Kim, Dong-Hyun;Sohn, Kwang-Hoon
    • Journal of Broadcast Engineering
    • /
    • v.15 no.6
    • /
    • pp.757-767
    • /
    • 2010
  • In this paper, we have studied a color compensation method for 3D that enables 3D color presentation similar to 2D. The method exploits the difference in color presentation between the 2D and 3D modes. First, the RGB input/output relationship curves were derived in 2D and 3D mode from input RGB color-bar images, and the relationship was modeled in a modified power-law form. Based on this model, we generated color mapping tables that can be used to compensate for the color differences. The proposed color mapping block can be added at the output block of a 3DTV system, where 2D content is bypassed while the RGB data of 3D content is processed through the color mapping table. The experimental results show that the proposed method improves the color presentation of a 3DTV system through proper color compensation based on the 2D presentation.
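The color mapping table described above can be sketched for the simplest case where each mode's I/O curve is a pure power law (the paper fits a modified power-law form; the pure-gamma model and the gamma values here are simplifying assumptions):

```python
import numpy as np

def gamma_lut(gamma_2d, gamma_3d):
    """Build an 8-bit mapping table making 3D-mode output match 2D mode.

    Assumes each mode's RGB I/O curve is out = in ** gamma. The table
    remaps a code v to v' so that (v'/255)**gamma_3d == (v/255)**gamma_2d,
    i.e. displaying v' in 3D mode reproduces the 2D-mode response of v.
    """
    v = np.arange(256) / 255.0
    mapped = np.clip(v ** (gamma_2d / gamma_3d), 0, 1)
    return np.rint(mapped * 255).astype(np.uint8)

# Apply per channel with fancy indexing on a uint8 image: out = lut[img]
```

Applying the table per RGB channel at the output stage corresponds to the color mapping block in the abstract, with 2D content simply bypassing the lookup.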