• Title/Summary/Keyword: 에지 방향성 히스토그램 (edge directional histogram)

Real Time Hand Shape Recognition for Window Program Control (윈도우 프로그램 제어를 위한 실시간 손 형상 인식)

  • Wi, Seung-Jung; Kim, Jong-Min; Yang, Hwan-Seok; Lee, Woong-Ki
    • Proceedings of the Korea Information Processing Society Conference / 2004.05a / pp.741-744 / 2004
  • This study proposes a system that stably detects and recognizes the hand region against a complex background and uses the recognized hand shape to control the functions of a Windows media player. Because the hand has a highly complex shape, recognition is performed with the edge directional histogram, an invariant of the 2D shape. The method accurately extracts the skin-colored hand region from a cluttered background, runs fast, and is relatively insensitive to illumination changes, so it is well suited to real-time hand shape recognition. When the proposed method was applied to controlling a Windows player, the player could be controlled stably.

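The edge directional histogram this entry relies on can be computed quite simply. The following is a minimal illustrative sketch, not the authors' implementation; the bin count, magnitude threshold, and template-matching usage are assumptions.

```python
# Sketch of an edge directional histogram: a histogram of gradient orientations
# over strong edge pixels, normalized so it is usable as a 2D shape descriptor.
import cv2
import numpy as np

def edge_direction_histogram(gray: np.ndarray, bins: int = 36, mag_thresh: float = 30.0) -> np.ndarray:
    """Return an L1-normalized histogram of gradient directions over strong edges."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    ang = (np.degrees(np.arctan2(gy, gx)) + 360.0) % 360.0   # orientation in [0, 360)
    strong = mag > mag_thresh                                # keep only clear edges
    hist, _ = np.histogram(ang[strong], bins=bins, range=(0.0, 360.0), weights=mag[strong])
    return hist / (hist.sum() + 1e-8)

# Hypothetical usage: compare the histogram of a detected hand region against
# stored gesture templates and pick the closest one.
# best = min(templates, key=lambda name: np.linalg.norm(h - templates[name]))
```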

Method of Human Detection using Edge Symmetry and Feature Vector (에지 대칭과 특징 벡터를 이용한 사람 검출 방법)

  • Byun, Oh-Sung
    • Journal of the Korea Society of Computer and Information / v.16 no.8 / pp.57-66 / 2011
  • This paper proposes an algorithm that efficiently detects humans in real time from a single input image using edge symmetry and gradient directional characteristics as features. The proposed algorithm consists of three stages: preprocessing, partitioning of human candidate regions, and verification of the candidate regions. The preprocessing stage is robust to the intensity and brightness of the surrounding environment and detects contours with human characteristics by considering the size and shape conditions of the human body. The candidate-partitioning stage separates regions by the edge symmetry and size of the detected contours and selects first-pass candidate regions by applying the AdaBoost algorithm. Finally, the verification stage suppresses false detections by checking each candidate region with gradient feature vectors computed over local areas and a classifier. In simulations, the processing speed of the proposed algorithm improved by approximately 1.7 times, and the FNR (False Negative Rate) was confirmed to be about 3% better than a conventional single-structure algorithm.
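A hedged sketch of the edge-symmetry cue described above: score how well the edges in the left half of a candidate window mirror those in the right half. The Canny parameters and the pass threshold are assumptions, not values from the paper.

```python
# Edge symmetry score for a candidate window: IoU-style overlap between the
# left-half edge map and the horizontally mirrored right-half edge map.
import cv2
import numpy as np

def edge_symmetry_score(window_gray: np.ndarray) -> float:
    """Return a 0..1 score; higher means a more left-right symmetric edge map."""
    edges = cv2.Canny(window_gray, 50, 150).astype(np.float32) / 255.0
    w = edges.shape[1] // 2
    left, right = edges[:, :w], edges[:, -w:]
    mirrored = right[:, ::-1]                     # flip the right half horizontally
    overlap = np.minimum(left, mirrored).sum()    # edge pixels present on both sides
    total = np.maximum(left, mirrored).sum() + 1e-8
    return float(overlap / total)

# A window would only proceed to the AdaBoost / gradient-feature verification
# stages if its score exceeds some tuned threshold, e.g. 0.3 (assumed value).
```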

Segmentation and Recognition of Traffic Signs using Shape Information and Edge Image in Real Image (실영상에서 형태 정보와 에지 영상을 이용한 교통 표지판 영역 추출과 인식)

  • Kwak, Hyun-Wook; Oh, Jun-Taek; Kim, Wook-Hyun
    • The KIPS Transactions: Part B / v.11B no.2 / pp.149-158 / 2004
  • This study proposes a method for segmenting and recognizing traffic signs using shape information and edge images in real images. It first segments candidate traffic sign regions with a connected component algorithm from binary images obtained from the RGB color ratio of each pixel, and then extracts actual traffic signs based on their symmetry along the X and Y axes. Histogram equalization is applied to candidate regions that fail to segment because of low contrast. In the recognition stage, the method uses shape information including projection profiles on the X and Y axes, moments, and the number and distance of the crossings at which concentric circular patterns and 8-directional rays from the region center intersect the sign's edges. Recognition is finally performed by measuring similarity with templates in the database. Several experimental results show that the system is robust to environmental factors such as lighting and weather conditions.
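An illustrative sketch of the first stage described in this abstract: binarize by per-pixel RGB color ratio (here a red-dominant mask, assuming red-rimmed signs) and collect connected components as candidate regions. The ratio thresholds and minimum area are assumptions.

```python
# Candidate traffic-sign regions from an RGB color-ratio mask plus connected
# components; later stages would filter these by X/Y symmetry and recognize them.
import cv2
import numpy as np

def red_ratio_candidates(bgr: np.ndarray, min_area: int = 200):
    b, g, r = [c.astype(np.float32) for c in cv2.split(bgr)]
    s = b + g + r + 1e-6
    mask = ((r / s > 0.4) & (g / s < 0.3)).astype(np.uint8) * 255  # red-dominant pixels
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    boxes = [tuple(stats[i, :4]) for i in range(1, n)
             if stats[i, cv2.CC_STAT_AREA] >= min_area]
    return boxes  # (x, y, w, h) candidate regions
```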

Principal Component Analysis as a Preprocessing Method for Protein Structure Comparison (단백질 구조 비교를 위한 전처리 기법으로서의 주성분 분석)

  • Park Sung Hee; Park Chan Yong; Kim Dae Hee; Park Soo-Jun; Park Seon Hee
    • Proceedings of the Korea Information Processing Society Conference / 2004.11a / pp.805-808 / 2004
  • This paper introduces principal component analysis (PCA) as a preprocessing step for comparing two proteins based on their structural similarity. Existing comparison techniques, such as distance matrices between backbone or alpha-carbon atoms, secondary-structure comparison, and segment-level comparison, compute a distance matrix to obtain translation- and rotation-invariant differences and then run an optimization step. In contrast, the PCA preprocessing presented here first aligns the positions of the protein structures from a whole-structure point of view and then compares them. Once the orientations of the structures are aligned, they can be compared with a variety of protein representations; here a 3D edge histogram is used as a compact representation for measuring the structural similarity of two proteins. By removing the iterative, distance-based optimization that existing methods use to align orientation, this approach shortens the comparison time and enables a new paradigm for protein structure comparison. A structure comparison and retrieval system built on suitable orientation alignment and structure representation can therefore quickly retrieve the desired kind of structure from a large amount of protein structure data.

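A minimal sketch of PCA used as an orientation-alignment preprocessing step, as this abstract describes: center the atom coordinates and rotate them onto their principal axes so two structures can be compared without iterative superposition. The 3D edge histogram representation itself is not reproduced here.

```python
# Align a protein's (N, 3) coordinate array to its principal-axis frame.
import numpy as np

def pca_align(coords: np.ndarray) -> np.ndarray:
    """coords: (N, 3) alpha-carbon coordinates -> coordinates in the PCA frame."""
    centered = coords - coords.mean(axis=0)          # remove translation
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt.T                           # rotate onto principal axes

# Two aligned structures can then be compared directly with any compact
# representation, e.g. a coarse 3D histogram over the aligned coordinates.
```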

Component Based Face Detection for PC Camera (PC카메라 환경을 위한 컴포넌트 기반 얼굴 검출)

  • Cho, Chi-Young; Kim, Soo-Hwan
    • Proceedings of the HCI Society of Korea Conference / 2006.02a / pp.988-992 / 2006
  • This paper presents a component-based face detection method for PC camera environments that is robust to intensity distortion. Techniques such as edge analysis, color analysis, template matching, neural networks, PCA (Principal Component Analysis), and LDA (Linear Discriminant Analysis) are used for face detection, and image correction methods such as histogram analysis (equalization and specification), gamma correction, and log transforms are used to compensate for image distortion. However, existing detection and correction methods have difficulty coping with partial noise on the target object and distortion caused by illumination. In particular, when part of the object is distorted by light coming from the front, behind, or the sides, as in images captured with a PC camera, existing methods cannot be expected to achieve high detection performance. To detect tilted faces and partially intensity-distorted faces efficiently, this paper builds a detection model by horizontal symmetric averaging that exploits the left-right symmetry of the face and uses it for detection. The model represents partially intensity-distorted face images better than existing image correction techniques, while non-face candidates tend to retain the appearance of non-face images.

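A hedged sketch of the horizontal symmetric averaging mentioned above: average each face window with its mirror image so that one-sided illumination distortion is softened before a model is built. How the model is trained and scored is not specified in the abstract, so the usage note is an assumption.

```python
# Horizontal symmetric averaging of a grayscale face window.
import numpy as np

def symmetric_average(face_gray: np.ndarray) -> np.ndarray:
    """Average a face window with its left-right mirror image."""
    f = face_gray.astype(np.float32)
    return 0.5 * (f + f[:, ::-1])

# A detection model could be the mean of symmetric_average(w) over training
# windows w; candidate windows are then scored against that model rather than
# against raw, possibly one-sided-lit pixels.
```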

Makeup transfer by applying a loss function based on facial segmentation combining edge with color information (에지와 컬러 정보를 결합한 안면 분할 기반의 손실 함수를 적용한 메이크업 변환)

  • Lim, So-hyun; Chun, Jun-chul
    • Journal of Internet Computing and Services / v.23 no.4 / pp.35-43 / 2022
  • Makeup is the most common way to improve a person's appearance. However, because makeup styles are very diverse, applying makeup directly to oneself costs considerable time and money, and the need for makeup automation is therefore increasing. Makeup transfer, the task of applying a makeup style to a face image without makeup, is studied for this purpose. Makeup transfer methods can be divided into traditional image-processing-based methods and deep-learning-based methods; among the latter, many studies are based on Generative Adversarial Networks. Both approaches, however, share drawbacks: the resulting image can be unnatural, the transferred makeup can be unclear or smeared, or the result can be influenced too strongly by the makeup-style face image. To express clear makeup boundaries and to reduce the influence of the makeup-style face image, this study segments the makeup region and computes a loss function using HoG (Histogram of Gradients), which extracts image features from the magnitude and direction of the edges present in the image. With this loss we propose a makeup transfer network that learns robustly on edges. Comparing images generated by the proposed model with those generated by BeautyGAN, the base model, confirms that the proposed model performs better; a method that uses additional facial information is presented as future work.
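An illustrative paraphrase of the HoG-style edge term in such a loss: compare gradient orientation histograms of the generated face and the source face inside a segmented facial region. This is not the paper's exact loss; the bin count and the L1 comparison are assumptions.

```python
# HoG-like edge consistency term between a generated image and a source image,
# restricted to a binary facial-region mask.
import numpy as np

def hog_like_loss(generated: np.ndarray, source: np.ndarray, mask: np.ndarray, bins: int = 9) -> float:
    def hist(img: np.ndarray) -> np.ndarray:
        gy, gx = np.gradient(img.astype(np.float32))
        mag = np.hypot(gx, gy)
        ang = np.degrees(np.arctan2(gy, gx)) % 180.0          # unsigned orientation
        h, _ = np.histogram(ang[mask > 0], bins=bins, range=(0, 180), weights=mag[mask > 0])
        return h / (h.sum() + 1e-8)
    # L1 gap between the edge statistics of the two images inside the mask.
    return float(np.abs(hist(generated) - hist(source)).sum())
```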

3D conversion of 2D video using depth layer partition (Depth layer partition을 이용한 2D 동영상의 3D 변환 기법)

  • Kim, Su-Dong; Yoo, Ji-Sang
    • Journal of Broadcast Engineering / v.16 no.1 / pp.44-53 / 2011
  • In this paper, we propose a 3D conversion algorithm for 2D video using a depth layer partition method. The proposed algorithm first forms frame groups using a cut detection algorithm; dividing the frames into groups reduces the possibility of error propagation during motion estimation. Depth map generation is the core technique in 2D/3D conversion, so we use two depth map generation algorithms: the first uses segmentation and motion information, and the second uses an edge directional histogram. After applying the depth layer partition algorithm, which separates objects (foreground) and background in the original image, the two extracted depth maps are merged appropriately. Experiments verify that the proposed algorithm generates reliable depth maps and good conversion results.
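A minimal sketch of the cut-detection step used above to form frame groups: declare a shot boundary when the gray-level histograms of consecutive frames differ by more than a threshold. The histogram size and threshold are assumptions, not the paper's values.

```python
# Simple histogram-difference cut detector for grouping frames before motion
# estimation.
import cv2
import numpy as np

def is_cut(prev_gray: np.ndarray, cur_gray: np.ndarray, thresh: float = 0.4) -> bool:
    h1 = cv2.calcHist([prev_gray], [0], None, [64], [0, 256])
    h2 = cv2.calcHist([cur_gray], [0], None, [64], [0, 256])
    h1, h2 = h1 / h1.sum(), h2 / h2.sum()
    return float(np.abs(h1 - h2).sum()) > thresh   # large change => start a new frame group
```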

Text Detection and Recognition in Outdoor Korean Signboards for Mobile System Applications (모바일 시스템 응용을 위한 실외 한국어 간판 영상에서 텍스트 검출 및 인식)

  • Park, J.H.; Lee, G.S.; Kim, S.H.; Lee, M.H.; Toan, N.D.
    • Journal of the Institute of Electronics Engineers of Korea CI / v.46 no.2 / pp.44-51 / 2009
  • Text understanding in natural images has become an active research field over the past few decades. In this paper, we present an automatic recognition system for Korean signboards with complex backgrounds. The proposed algorithm includes detection, binarization, and extraction of text for the recognition of shop names. First, an elaborate detection algorithm finds possible text regions based on edge histograms in the vertical and horizontal directions, and the detected text region is segmented by a clustering method. Second, the text is divided into individual characters based on connected components whose centers of mass lie below the center line, and the characters are recognized with a minimum distance classifier. A shape-based statistical feature well suited to Korean character recognition is adopted. The system has been implemented on a mobile phone and demonstrates acceptable performance.
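A hedged sketch of the detection cue described above: project vertical-edge and horizontal-edge strength onto the image axes and treat rows or columns with dense edges as possible text bands. The specific thresholds and band-splitting logic are assumptions.

```python
# Edge projection profiles for locating candidate text regions on a signboard.
import cv2
import numpy as np

def edge_projection_profiles(gray: np.ndarray):
    gx = np.abs(cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3))   # vertical edge strength
    gy = np.abs(cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3))   # horizontal edge strength
    row_profile = gx.sum(axis=1)   # text rows tend to be rich in vertical edges
    col_profile = gy.sum(axis=0)
    return row_profile, col_profile

# Candidate text bands could be the runs where row_profile exceeds, say, its
# mean; each band is then binarized and split into characters by connected
# components, as the abstract describes.
```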

Detection of Artificial Caption using Temporal and Spatial Information in Video (시·공간 정보를 이용한 동영상의 인공 캡션 검출)

  • Joo, SungIl; Weon, SunHee; Choi, HyungIl
    • KIPS Transactions on Software and Data Engineering / v.1 no.2 / pp.115-126 / 2012
  • Artificial captions appearing in videos carry information related to the videos, and many methods for extracting them have been studied. Most traditional methods detect caption regions using a single frame, but video contains temporal as well as spatial information, so we propose a caption detection method that uses both. First, we build an improved Text-Appearance-Map and detect persistent candidate regions through matching between candidate regions. Second, we detect disappearing captions with a disappearance test on the candidate regions; when captions disappear, the caption regions are decided by a merging process that uses temporal and spatial information. Finally, the final caption regions are decided by ANNs that use edge direction histograms for verification. The proposed method was tested on many kinds of captions with various sizes, shapes, and positions, and the results were evaluated with recall and precision.
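A sketch of the final verification step as described: an edge direction histogram per candidate region fed to a small neural network classifier. scikit-learn's MLPClassifier stands in for the ANN here; the real feature layout and network are not specified in the abstract.

```python
# Train a small ANN to verify caption candidates from their edge direction
# histograms (1 = caption, 0 = non-caption).
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_caption_verifier(features: np.ndarray, labels: np.ndarray) -> MLPClassifier:
    """features: (n_samples, n_bins) edge direction histograms; labels: 0/1."""
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    clf.fit(features, labels)
    return clf

# At run time, each surviving candidate region's histogram is passed to
# clf.predict(...) and only positively classified regions are kept as captions.
```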