Title/Summary/Keyword: Optical feature

Search results: 402

Optical Flow Orientation Histogram for Hand Gesture Recognition (손 동작 인식을 위한 Optical Flow Orientation Histogram)

  • Aurrahman, Dhi; Setiawan, Nurul Arif; Oh, Chi-Min; Lee, Chil-Woo
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집) / 2008.02a / pp.517-521 / 2008
  • Hand motion classification is considered a basis for sign or gesture recognition. We promote optical flow as the main feature extracted from image sequences, using its magnitude to segment the motion area and its orientation to characterize the motion's directions simultaneously. We use the flow orientation histogram as the motion descriptor: a motion is encoded by concatenating the flow orientation histograms from several frames. Simple histogram matching then classifies the motion sequences, as sketched after this entry. Experiments show the feasibility of our method for hand motion localization and classification.

  • PDF
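
A minimal sketch of the descriptor pipeline this abstract describes, assuming OpenCV's Farneback dense flow (the paper does not name a specific flow algorithm); the bin count, magnitude threshold, and histogram intersection as the "simple histogram matching" are illustrative choices, not the authors' parameters:

```python
import cv2
import numpy as np

N_BINS = 12          # orientation bins per frame (assumed)
MAG_THRESH = 1.0     # pixels/frame; magnitude segments the moving region (assumed)

def flow_orientation_histogram(prev_gray, curr_gray):
    """Normalized histogram of flow orientations over the moving pixels."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])   # ang in radians
    moving = mag > MAG_THRESH
    hist, _ = np.histogram(ang[moving], bins=N_BINS, range=(0, 2 * np.pi))
    total = hist.sum()
    return hist / total if total else hist.astype(float)

def motion_descriptor(frames):
    """Concatenate per-frame orientation histograms into one motion code."""
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]  # BGR input
    return np.concatenate([flow_orientation_histogram(a, b)
                           for a, b in zip(grays, grays[1:])])

def classify(descriptor, references):
    """Nearest reference motion by histogram intersection."""
    scores = {label: np.minimum(descriptor, ref).sum()
              for label, ref in references.items()}
    return max(scores, key=scores.get)
```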

Design and Test of an Experimental Optical Cross-Connect

  • Lee, Sung-Un; Seo, Wan-Seok
    • Journal of Electrical Engineering and Information Science / v.3 no.3 / pp.336-341 / 1998
  • We describe the architecture of an optical cross-connect (OXC) that is modular in structure and can be upgraded to a virtual wavelength path by adding wavelength converters. A further feature of the OXC is its all-optical nature: it can be implemented with commercial components, including mechanical optical switches. Tests on the experimental OXC have shown that a 2.5 Gb/s signal can be transmitted through the OXC over 100 km of ordinary single-mode fiber with a 3 dB penalty.

  • PDF

Moving Target Tracking System using Optical BPEJTC (광 BPEJTC를 이용한 이동표적 추적시스템)

  • 김은수
    • Proceedings of the Optical Society of Korea Conference / 1995.06a / pp.105-116 / 1995
  • In this paper, we propose a new EOST that uses an optical JTC (joint transform correlator) as the feature extraction part, because the JTC can adaptively detect the relative displacements of moving targets. First, we derive the BPEJTC (binary phase extraction JTC), a phase-type JTC that can remove the intra-class correlation peaks of the conventional JTC. We then constructed hardware to drive the BPEJTC in real time and, combined with a Kalman target estimation algorithm, carried out a target tracking experiment to show the possibility of a real-time implementation of the EOST; a digital simulation of the correlator is sketched after this entry.

  • PDF
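
The correlator in the paper is optical hardware; the following is only a rough digital simulation of the binary-phase-extraction idea, under the assumption that binarizing the joint spectrum's phase to ±1 before the second transform is what suppresses the conventional JTC's unwanted correlation terms:

```python
import numpy as np

def bpejtc_peak(reference, scene):
    """Locate the correlation peak of a simulated BPEJTC.

    reference, scene: equal-shape 2-D float arrays placed side by side
    in the joint input plane, as in a joint transform correlator.
    """
    h, w = reference.shape
    joint = np.zeros((h, 2 * w))
    joint[:, :w] = reference            # joint input plane: [reference | scene]
    joint[:, w:] = scene
    F = np.fft.fft2(joint)
    phase = np.exp(1j * np.angle(F))    # phase extraction from the joint spectrum
    binary = np.sign(phase.real) + 0j   # binarize the phase to +/-1
    corr = np.abs(np.fft.ifft2(binary)) ** 2
    corr[:5, :5] = 0                    # crude mask of the zero-order (DC) region
    # the off-origin peak location encodes the target's relative displacement
    return np.unravel_index(np.argmax(corr), corr.shape)
```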

A Defocus Technique based Depth from Lens Translation using Sequential SVD Factorization

  • Kim, Jong-Il; Ahn, Hyun-Sik; Jeong, Gu-Min; Kim, Do-Hyun
    • Proceedings of the Institute of Control, Robotics and Systems Conference (제어로봇시스템학회 학술대회논문집) / 2005.06a / pp.383-388 / 2005
  • Depth recovery in robot vision is an essential problem: inferring the three-dimensional geometry of a scene from a sequence of two-dimensional images. Many approaches to depth estimation have been proposed, such as stereopsis, motion parallax, and blurring phenomena. Among these cues, depth from lens translation is based on shape from motion using feature points: it derives correspondences of feature points detected in the images and estimates depth from information on the motion of those points. Approaches using motion vectors suffer from occlusion and missing-part problems, and image blur is ignored in the feature point detection. This paper presents a novel approach: defocus-technique-based depth from lens translation using sequential SVD factorization. Solving these problems requires modeling the mutual relationship between the light and the optics up to the image plane. We therefore first discuss the optical properties of the camera system, because image blur varies with camera parameter settings. The camera system is modeled by integrating a thin-lens camera model, which explains the light and optical properties, with a perspective projection camera model, which explains the depth from lens translation. Depth from lens translation is then computed from feature points detected at the edges of the image blur; these feature points carry depth information derived from the blur width. Shape and motion are estimated from the motion of the feature points using sequential SVD factorization (the factorization step is sketched after this entry). Experiments with sequences of real and synthetic images, comparing the presented method with plain depth from lens translation, have demonstrated the validity and applicability of the proposed method to depth estimation.

  • PDF
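
The SVD factorization the abstract builds on follows the classical Tomasi-Kanade pattern; below is a batch (non-sequential) sketch of that core step, with the defocus-based feature detection and the paper's sequential update omitted:

```python
import numpy as np

def factor_shape_motion(tracks):
    """Rank-3 factorization of feature tracks into motion and shape.

    tracks: array of shape (F, P, 2), P feature points over F frames.
    Returns M (2F x 3 camera motion) and S (3 x P shape), up to the
    usual affine ambiguity of the factorization method.
    """
    F, P, _ = tracks.shape
    # stack x-rows over y-rows into the 2F x P measurement matrix
    W = np.concatenate([tracks[..., 0], tracks[..., 1]], axis=0)
    W = W - W.mean(axis=1, keepdims=True)     # register to the centroid
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])             # motion from the top-3 subspace
    S = np.sqrt(s[:3])[:, None] * Vt[:3]      # shape from the same subspace
    return M, S
```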

Electrooptic pattern recognition system by the use of line-orientation and eigenvector features (방향선소와 고유벡터 특징을 이용한 전기광학적 패턴인식 시스템)

  • 신동학; 장주석
    • Korean Journal of Optics and Photonics / v.8 no.5 / pp.403-409 / 1997
  • We propose a system that performs pattern recognition based on parallel optical feature extraction, and we report experiments on it. The extracted features are six simple line orientations and the two eigenvectors of the covariance matrix of patterns that cannot be distinguished by the line orientation features alone. Our system consists of a feature extraction part and a pattern recognition part. The former, which extracts the features in parallel with multiplexed Vander Lugt filters, was implemented optically, while the latter, which performs the recognition using the extracted features, was implemented in a computer (a digital sketch of both feature families follows this entry). In the pattern recognition part, two methods are tested: one uses an artificial neural network trained to recognize the features directly; the other simply counts the numbers of specific features and compares them with stored reference feature counts. We report preliminary experimental results on 15 alphabet patterns consisting only of straight line segments.

  • PDF
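
A digital sketch of the two feature families this abstract combines, with plain 2-D correlation standing in for the multiplexed Vander Lugt filters; the kernel size and orientation count are assumptions:

```python
import numpy as np
from scipy.signal import correlate2d

def line_kernels(size=9, n_orient=6):
    """One straight-line kernel per orientation (0, 30, ..., 150 degrees)."""
    c = size // 2
    ys, xs = np.mgrid[:size, :size] - c
    kernels = []
    for k in range(n_orient):
        theta = np.pi * k / n_orient
        # pixels within half a pixel of the line through the kernel centre
        dist = np.abs(xs * np.sin(theta) - ys * np.cos(theta))
        kernels.append((dist < 0.5).astype(float))
    return kernels

def features(binary_pattern):
    """Six line-orientation responses plus the two covariance eigenvectors."""
    orient = np.array([correlate2d(binary_pattern, k, mode='same').max()
                       for k in line_kernels()])
    ys, xs = np.nonzero(binary_pattern)
    cov = np.cov(np.stack([xs, ys]).astype(float))   # 2x2 coordinate covariance
    _, eigvecs = np.linalg.eigh(cov)
    return orient, eigvecs
```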

Axial motion stereo method (로보트 팔에 부착된 카메라를 이용한 3차원 측정방법)

  • 이상용; 한민홍
    • Proceedings of the Institute of Control, Robotics and Systems Conference (제어로봇시스템학회 학술대회논문집) / 1991.10a / pp.1192-1197 / 1991
  • This paper describes a method of extracting the 3-D coordinates of feature points of an object from two images taken by one camera. The first image is taken by a CCD camera before approaching the object, and the second by the same camera after approaching the object along the optical axis. Because of image enlargement, the feature points appear at different positions on the screen in the two images, and their world coordinates are calculated from this change of position (the geometry is worked through after this entry). The correspondence problem is solved by image shrinking and correlation.

  • PDF
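
The underlying axial-motion-stereo geometry can be worked through directly. This sketch assumes a pinhole model and a known camera advance d along the optical axis; variable names are illustrative:

```python
import numpy as np

def axial_depth(r1, r2, d):
    """Depth of a feature from the first camera position.

    A point at lateral offset X and depth Z projects to image radius
    r = f*X/Z. After the camera advances d along the optical axis,
    r1/r2 = (Z1 - d)/Z1, which rearranges to Z1 = d * r2 / (r2 - r1).
    """
    r1, r2 = np.asarray(r1, float), np.asarray(r2, float)
    return d * r2 / (r2 - r1)

# example: the camera moves 0.10 m forward and a feature's image radius
# grows from 100 to 110 pixels -> the point was 1.1 m away
print(axial_depth(100.0, 110.0, 0.10))   # 1.1 (metres)
```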

A Study on Adaptive Feature-Factors Based Fingerprint Recognition (적응적 특징요소 기반의 지문인식에 관한 연구)

  • 노정석; 정용훈; 이상범
    • Proceedings of the IEEK Conference / 2003.07e / pp.1799-1802 / 2003
  • This paper studies adaptive feature-factor-based fingerprint recognition, one of many biometric techniques. We study preprocessing and matching methods for fingerprint images captured under various conditions with an optical fingerprint input device. Fingerprint recognition technology has developed considerably, but there remain many points where accuracy and operation speed can be improved. We first study fingerprint classification to reduce the existing preprocessing steps, and then extract feature factors carrying direction information from the fingerprint image (a standard orientation-field computation is sketched after this entry). We also consider noise minimization for an effective fingerprint recognition system.

  • PDF
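
The abstract does not define its feature factors beyond "direction information"; the sketch below shows a standard block-wise ridge orientation field, a common least-squares way to compute such direction features from gradients:

```python
import cv2
import numpy as np

def orientation_field(gray, block=16):
    """Block-wise ridge orientation of a grayscale fingerprint image."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    h, w = gray.shape
    H, W = h // block, w // block
    theta = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            sl = np.s_[i * block:(i + 1) * block, j * block:(j + 1) * block]
            vx = 2.0 * (gx[sl] * gy[sl]).sum()
            vy = (gx[sl] ** 2 - gy[sl] ** 2).sum()
            # least-squares gradient direction, rotated 90 deg to ridge direction
            theta[i, j] = 0.5 * np.arctan2(vx, vy) + np.pi / 2
    return theta
```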

A Hybrid Approach of Efficient Facial Feature Detection and Tracking for Real-time Face Direction Estimation (실시간 얼굴 방향성 추정을 위한 효율적인 얼굴 특성 검출과 추적의 결합방법)

  • Kim, Woonggi; Chun, Junchul
    • Journal of Internet Computing and Services / v.14 no.6 / pp.117-124 / 2013
  • In this paper, we present a new method that efficiently estimates the face direction from a sequence of input video images in real time. The proposed method first detects the facial region and the major facial features (both eyes, nose, and mouth) within it using Haar-like features, which are relatively insensitive to lighting variation. It then tracks the feature points from frame to frame using optical flow and determines the face direction from the tracked points. To avoid recognizing false feature positions when coordinates are lost during optical-flow tracking, the method validates the feature locations in real time by template matching against the detected facial features. Depending on the correlation score of this check, the process either re-detects the facial features or continues tracking them while determining the face direction. The template matching initially stores the locations of four facial features (the left and right eyes, the tip of the nose, and the mouth) in the feature detection phase, and re-evaluates this information by detecting new facial features from the input image when the similarity measure between the stored information and the optical-flow-tracked features exceeds a certain threshold. The proposed approach automatically alternates between the feature detection and feature tracking phases, enabling stable real-time face pose estimation; a condensed version of this loop is sketched below. Experiments show that the proposed method estimates face direction efficiently.
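
A condensed sketch of the detect-then-track loop, assuming OpenCV's stock Haar cascade for the face, Lucas-Kanade flow for tracking, and normalized template matching for the validation check; the threshold and feature-count parameters are illustrative, not the paper's values:

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
CORR_THRESH = 0.6    # below this, re-detect instead of tracking (assumed)

def detect_features(gray):
    """Detect the face and pick trackable feature points inside it."""
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    pts = cv2.goodFeaturesToTrack(gray[y:y + h, x:x + w], 40, 0.01, 5)
    return None if pts is None else pts.reshape(-1, 2) + (x, y)

def track(prev_gray, gray, pts):
    """Lucas-Kanade tracking of the feature points into the next frame."""
    p0 = pts.reshape(-1, 1, 2).astype(np.float32)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
    return p1[status.ravel() == 1].reshape(-1, 2)

def validate(gray, template):
    """Re-check a stored feature patch by normalized template matching."""
    res = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    return res.max() >= CORR_THRESH   # False -> fall back to detect_features
```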

CNN-based Opti-Acoustic Transformation for Underwater Feature Matching (수중에서의 특징점 매칭을 위한 CNN기반 Opti-Acoustic변환)

  • Jang, Hyesu; Lee, Yeongjun; Kim, Giseop; Kim, Ayoung
    • The Journal of Korea Robotics Society / v.15 no.1 / pp.1-7 / 2020
  • In this paper, we introduce a methodology that uses a deep-learning-based front end to enhance underwater feature matching. Both optical cameras and sonar are widely used sensors in underwater research, but each has its own weaknesses: lighting conditions and turbidity for the optical camera, and noise for sonar. To overcome these problems, we propose an opti-acoustic transformation method. Since feature detection in sonar images is challenging, we convert the sonar image to an optic-style image: a CNN-based style transfer method changes the style of the image while maintaining its main contents, which facilitates feature detection. Finally, we verify the result using cosine-similarity comparison and feature matching against the original optical image; the verification step is sketched below.
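
The style-transfer network itself is beyond an abstract-level sketch; the following covers only the verification step, with ORB standing in for the paper's unspecified feature detector and a mean-descriptor cosine score as one rough global similarity measure:

```python
import cv2
import numpy as np

def verify(optic_style_sonar, optic):
    """Match count and cosine similarity between the transformed and optic images."""
    orb = cv2.ORB_create(500)
    k1, d1 = orb.detectAndCompute(optic_style_sonar, None)
    k2, d2 = orb.detectAndCompute(optic, None)
    if d1 is None or d2 is None:
        return 0, 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)            # feature matching
    m1, m2 = d1.mean(axis=0), d2.mean(axis=0)  # mean descriptors
    cos = float(m1 @ m2 / (np.linalg.norm(m1) * np.linalg.norm(m2) + 1e-9))
    return len(matches), cos
```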

Recognition of Human Facial Expressions using Optical Flow of Feature Regions (얼굴 특징영역상의 광류를 이용한 표정 인식)

  • Lee, Mi-Ae; Park, Ki-Soo
    • Journal of KIISE: Software and Applications / v.32 no.6 / pp.570-579 / 2005
  • Facial expression recognition technology has potential applications in various fields, including man-machine interface development, human identification, and the restoration of facial expressions on virtual models. Using sequential facial images, this study proposes a simple method for detecting human facial expressions such as happiness, anger, surprise, and sadness. The proposed method can detect facial expressions even when the motion in the image sequence is not rigid. We identify the face and the elements of facial expression, and estimate the feature regions of these elements using information about color, size, and position. Next, the direction patterns of each element's feature region are determined from optical flow estimated by gradient methods. Using the direction model proposed in this study, we match the direction patterns: the method identifies a facial expression as the one with the lowest combined score between the direction model and the pattern matching (a minimal version of this matching is sketched below). Experiments verify the validity of the proposed method.
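
A minimal sketch of the direction-pattern matching this abstract describes; the number of quantized directions, the region names, and the expression models are all illustrative, not the paper's:

```python
import numpy as np

N_DIRS = 8   # quantized flow directions (assumed)

def direction_pattern(flows):
    """flows: (T, 2) mean optical-flow vector of one feature region per frame."""
    ang = np.arctan2(flows[:, 1], flows[:, 0]) % (2 * np.pi)
    return np.floor(ang / (2 * np.pi / N_DIRS)).astype(int)

def circular_dist(a, b):
    """Summed circular distance between two quantized direction patterns."""
    d = np.abs(a - b) % N_DIRS
    return np.minimum(d, N_DIRS - d).sum()

def classify(patterns, models):
    """patterns: {region: pattern}; models: {expression: {region: pattern}}.

    The expression with the least combined score over all regions wins,
    mirroring the minimum-score rule in the abstract.
    """
    scores = {expr: sum(circular_dist(patterns[r], m[r]) for r in m)
              for expr, m in models.items()}
    return min(scores, key=scores.get)
```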