• Title/Abstract/Keyword: Position recognition

Search results: 902 items (processing time: 0.025 s)

AGV의 장애물 판별을 위한 스테레오 비젼시스템의 거리오차 해석 (Analysis of Distance Error of Stereo Vision System for Obstacle Recognition System of AGV)

  • 조연상;배효준;원두원;박흥식
    • 한국정밀공학회:학술대회논문집 / 한국정밀공학회 2001년도 춘계학술대회 논문집 / pp.170-173 / 2001
  • To apply a stereo vision system to the obstacle recognition system of an AGV, we constructed a stereo matching and distance measurement algorithm that uses stereo images to determine the position of an object in the scene. Using this system, we examined the error between the real position and the measured position and studied how to compensate for it.

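A minimal sketch of the geometry behind the reported distance error, assuming an ideal rectified pinhole stereo rig with focal length f, baseline B, and disparity d (Z = f·B/d); the paper's matching algorithm and compensation model are not reproduced, and every numeric value below is hypothetical.

```python
# Sketch: depth from stereo disparity and its first-order sensitivity to
# disparity (matching) error. Assumes an ideal rectified pinhole stereo rig.

def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

def depth_error(f_px: float, baseline_m: float, disparity_px: float,
                disparity_err_px: float) -> float:
    """First-order depth error: |dZ| ≈ (Z^2 / (f*B)) * |dd|."""
    z = depth_from_disparity(f_px, baseline_m, disparity_px)
    return (z * z / (f_px * baseline_m)) * disparity_err_px

if __name__ == "__main__":
    f, B = 700.0, 0.12            # hypothetical focal length [px] and baseline [m]
    for d in (40.0, 20.0, 10.0):  # nearer objects -> larger disparity
        z = depth_from_disparity(f, B, d)
        e = depth_error(f, B, d, 0.5)  # assume 0.5 px matching error
        print(f"d={d:5.1f} px  Z={z:5.2f} m  ±{e:4.2f} m")
```

The quadratic growth of the error term with Z is why far obstacles dominate the distance error budget in such systems.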

회전된 지문에 강인한 지문 인식 시스템에 관한 연구 (Rotation Robust for Fingerprint Recognition System)

  • 김원중;조성원
    • 대한전기학회:학술대회논문집 / 대한전기학회 2002년도 합동 추계학술대회 논문집 정보 및 제어부문 / pp.542-545 / 2002
  • Translation and rotation between the fingerprint registered in an automatic fingerprint recognition system and the input fingerprint are among the main causes of false recognition. In this research, we therefore develop a matching algorithm for the feature-point matching step that is invariant to translation and rotation of the fingerprint, in order to secure a higher recognition rate.

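The abstract does not spell out the matching algorithm; the sketch below only illustrates one common way to make minutiae matching independent of fingerprint translation and rotation, by aligning each minutiae set to a reference minutia before comparison. The minutia format and tolerances are assumptions, not the paper's method.

```python
# Sketch: translation/rotation-invariant minutiae comparison by aligning each
# minutiae set to a chosen reference minutia. Illustrative only.
import math
from typing import List, Tuple

Minutia = Tuple[float, float, float]   # (x, y, orientation in radians)

def align_to_reference(minutiae: List[Minutia], ref: Minutia) -> List[Minutia]:
    """Translate so the reference minutia sits at the origin, then rotate so its
    orientation points along +x; the result no longer depends on the absolute
    position or rotation of the fingerprint in the image."""
    rx, ry, rtheta = ref
    cos_t, sin_t = math.cos(-rtheta), math.sin(-rtheta)
    aligned = []
    for x, y, theta in minutiae:
        dx, dy = x - rx, y - ry
        aligned.append((dx * cos_t - dy * sin_t,
                        dx * sin_t + dy * cos_t,
                        (theta - rtheta) % (2 * math.pi)))
    return aligned

def match_score(a: List[Minutia], b: List[Minutia],
                dist_tol: float = 10.0, ang_tol: float = 0.3) -> int:
    """Count minutiae in `a` that have a close, unused counterpart in `b`."""
    score, used = 0, set()
    for xa, ya, ta in a:
        for i, (xb, yb, tb) in enumerate(b):
            if i in used:
                continue
            d = math.hypot(xa - xb, ya - yb)
            dang = min((ta - tb) % (2 * math.pi), (tb - ta) % (2 * math.pi))
            if d < dist_tol and dang < ang_tol:
                score += 1
                used.add(i)
                break
    return score
```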

방사형 캘리브레이터를 이용한 웨이퍼 위치 인식시스템 (Wafer Position Recognition System Using Radial Shape Calibrator)

  • 이병국;이준재
    • 한국멀티미디어학회논문지 / Vol.14 No.5 / pp.632-641 / 2011
  • This paper proposes an image recognition system that recognizes the mounting position of a wafer in cleaning equipment used in the semiconductor production process. When a position error occurs because the wafer deviates from its proper position, the proposed system reports it to the cleaning equipment, preventing damage to the wafer cleaning machine and thereby improving the reliability and economy of the system. To reduce the error that arises when a conventional chessboard-type calibrator is used, the proposed method designs and fabricates a radial calibrator and derives its mapping function. The system uses a highly reliable, high-precision position recognition algorithm and is intended to be installed efficiently in an in-line wafer process; experimental results show that it detects errors well within the allowable tolerance, performing better than the conventional method.
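
The radial calibrator and its mapping function are specific to the paper; as a generic illustration of estimating an image-to-wafer-coordinate mapping from detected calibrator points, the sketch below fits an affine transform by least squares. The affine model, point data, and units are all assumptions.

```python
# Sketch: estimating a mapping from image coordinates to wafer/stage coordinates
# from calibrator correspondences. An affine least-squares fit is used purely as
# illustration; the paper's radial calibrator mapping is not reproduced.
import numpy as np

def fit_affine(img_pts: np.ndarray, world_pts: np.ndarray) -> np.ndarray:
    """Fit world = A @ [u, v, 1]^T in the least-squares sense.
    img_pts, world_pts: (N, 2) arrays of corresponding points, N >= 3."""
    n = img_pts.shape[0]
    X = np.hstack([img_pts, np.ones((n, 1))])            # (N, 3)
    A, *_ = np.linalg.lstsq(X, world_pts, rcond=None)    # (3, 2)
    return A.T                                           # (2, 3)

def apply_affine(A: np.ndarray, img_pts: np.ndarray) -> np.ndarray:
    ones = np.ones((img_pts.shape[0], 1))
    return (A @ np.hstack([img_pts, ones]).T).T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.uniform(0, 640, size=(12, 2))              # detected calibrator marks [px]
    true_A = np.array([[0.05, 0.0, -16.0],
                       [0.0, 0.05, -12.0]])              # hypothetical ground truth [mm/px]
    world = apply_affine(true_A, img) + rng.normal(0, 0.02, (12, 2))
    A = fit_affine(img, world)
    residual = np.abs(apply_affine(A, img) - world).max()
    print("max residual [mm]:", round(float(residual), 4))
```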

지정맥 인식을 위한 특징 검출 알고리즘 개발 (Development of Feature Extraction Algorithm for Finger Vein Recognition)

  • 김태훈;이상준
    • 정보처리학회논문지:소프트웨어 및 데이터공학 / Vol.7 No.9 / pp.345-350 / 2018
  • This study presents an algorithm for detecting the vein-pattern features that are important for finger vein recognition. The feature detection algorithm matters because it strongly affects the result of pattern recognition. The recognition rate tends to degrade because the reference changes as the finger position changes. In addition, in images acquired by illuminating the finger with infrared light, it is difficult to separate the vein pattern from the background, and detection time increases because image preprocessing must be performed. The proposed algorithm runs without an image preprocessing step, which reduces detection time, and by applying the SWDA (Shifted Waveform Data Analysis) algorithm to the finger vein image it can detect the positions of the finger joints as well as the vein pattern. It also minimizes detection errors even for relatively dark vein images with low infrared transmittance. Furthermore, using the finger-joint positions as a reference in the classification stage can compensate for the degradation of the recognition rate. If the proposed algorithm is applied to other biometric areas such as the palm and wrist, it is expected to contribute to improving the accuracy of biometric feature detection and reducing recognition time.
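
SWDA itself is not described in enough detail here to reproduce; the sketch below only illustrates the general idea of locating dark bands (vein or knuckle candidates) from 1-D intensity profiles of a near-infrared finger image without a separate preprocessing stage. The synthetic image and thresholds are assumptions.

```python
# Sketch: locating dark bands from a 1-D intensity profile of a near-infrared
# finger image. This is NOT the paper's SWDA algorithm; it only illustrates
# profile-based detection without a dedicated preprocessing step.
import numpy as np

def column_profile(image: np.ndarray) -> np.ndarray:
    """Average intensity of each column of the image."""
    return image.mean(axis=0)

def local_minima(profile: np.ndarray, window: int = 9, depth: float = 2.0) -> list:
    """Indices where the profile is the minimum of its neighborhood and sits at
    least `depth` below the local mean."""
    half, out = window // 2, []
    for i in range(half, len(profile) - half):
        seg = profile[i - half:i + half + 1]
        if profile[i] == seg.min() and seg.mean() - profile[i] >= depth:
            out.append(i)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = np.full((60, 200), 120.0) + rng.normal(0, 1.0, (60, 200))
    img[:, 50] -= 15   # two synthetic dark "vein" columns
    img[:, 140] -= 15
    print(local_minima(column_profile(img)))   # expected near [50, 140]
```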

3차원 안면 자동 인식기(3D-FARA)의 안면 위치변화에 따른 정확도 검사 (Precision Test of 3D Face Automatic Recognition Apparatus(3D-FARA) by Rotation)

  • 석재화;조경래;조용범;유정희;곽창규;이수경;고병희;김종원;김규곤;이의주
    • 사상체질의학회지 / Vol.18 No.3 / pp.57-63 / 2006
  • 1. Objectives: The face is an important standard for the classification of Sasang constitutions, and we are developing a 3D Face Automatic Recognition Apparatus (3D-FARA) to analyze facial characteristics; the apparatus produces a 3D image of a person's face and measures facial features. We therefore examined the accuracy of position recognition of the apparatus. 2. Methods: We photographed a face model fitted with landmarks eight times using the apparatus, rotating the model by 10 degrees before each capture; the last capture was of the model's lateral (profile) view. We then analyzed the average error of the distances between seven landmarks, thereby indirectly examining the accuracy of position recognition of the apparatus under rotation of the model. 3. Results and Conclusions: Across the rotations of the face model, the average error of the distances between the seven landmarks was 0.1848 mm. We conclude that the position-recognition accuracy of the 3D Face Automatic Recognition Apparatus remains quite good despite rotation of the model.

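A small sketch of the evaluation idea, assuming the reported figure is the mean absolute error of inter-landmark distances across repeated captures; the landmark coordinates and noise level below are synthetic, not the study's data.

```python
# Sketch: compare distances between landmark pairs measured over several poses
# against a reference capture and report the average absolute error.
import itertools
import numpy as np

def pairwise_distances(landmarks: np.ndarray) -> np.ndarray:
    """All pairwise Euclidean distances between N landmarks of shape (N, 3)."""
    pairs = itertools.combinations(range(len(landmarks)), 2)
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j]) for i, j in pairs])

def mean_distance_error(reference: np.ndarray, measurements: list) -> float:
    """Mean |d_measured - d_reference| over all landmark pairs and all poses."""
    ref_d = pairwise_distances(reference)
    errs = [np.abs(pairwise_distances(m) - ref_d) for m in measurements]
    return float(np.mean(errs))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    ref = rng.uniform(-60, 60, size=(7, 3))                           # 7 landmarks [mm]
    poses = [ref + rng.normal(0, 0.1, ref.shape) for _ in range(8)]   # 8 captures
    print(f"mean inter-landmark distance error: {mean_distance_error(ref, poses):.4f} mm")
```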

Analysis on the special quantitative variation of dot model by the position transform

  • Kim, Jeong-lae;Kim, Kyung-seop
    • International Journal of Advanced Culture Technology / Vol.5 No.3 / pp.67-72 / 2017
  • The transform variation technique constitutes the vibration status of the flash-gap recognition level (FGRL) on the distribution recognition function. The recognition-level condition of the distribution recognition function system is associated with the scattering vibration system. To search for a position of the dot model, we construct the distribution value with a character point from the output signal. The concept of recognition level is composed of the reference flash-gap level for the variation signal given by the distribution vibration function. The variation of the FGRL, as the maximum-average in terms of the vibration function and distribution position vibration, was: a distribution value of the far variation, Dis-rf-FA-α_MAX-AVG, of 5.74 ± 1.12 units; of the convenient variation, Dis-rf-CO-α_MAX-AVG, of 1.64 ± 0.16 units; of the flank variation, Dis-rf-FL-α_MAX-AVG, of 0.74 ± 0.24 units; and of the vicinage variation, Dis-rf-VI-α_MAX-AVG, of 0.12 ± 0.01 units. The scattering vibration will be evaluated through the ability of the vibration function with a character point by the distribution recognition level on the FGRL, which shows the flash-gap function by the recognition level system. The scattering recognition system will make it possible to control a function by the special signal and to use the distribution data of the scattering vibration level.
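
The FGRL and Dis-rf quantities are not defined precisely enough in the abstract to reproduce; the sketch below only illustrates, as an assumption, how a generic "maximum-average" statistic could be reported as mean ± standard deviation over repeated trials, which matches the style of the reported values.

```python
# Sketch: a generic "maximum-average" statistic reported as mean ± std over
# repeated trials. This is not the paper's FGRL computation; it only mirrors
# the reporting style of values such as 5.74 ± 1.12 units.
import numpy as np

def max_average(trials: np.ndarray) -> tuple:
    """trials: (n_trials, n_samples). Take the maximum of each trial, then
    return the mean and standard deviation of those maxima."""
    maxima = trials.max(axis=1)
    return float(maxima.mean()), float(maxima.std(ddof=1))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    trials = rng.normal(5.0, 1.0, size=(30, 200))   # hypothetical signal trials
    mean, std = max_average(trials)
    print(f"{mean:.2f} ± {std:.2f} units")
```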

Object Recognition Using Planar Surface Segmentation and Stereo Vision

  • Kim, Do-Wan;Kim, Sung-Il;Won, Sang-Chul
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2004년도 ICCAS / pp.1920-1925 / 2004
  • This paper describes a new method for 3D object recognition that uses surface-segment-based stereo vision. The position and orientation of objects are identified accurately enough for a robot to pick them up, even when multiple objects are present and partially occluded. Stereo vision provides the 3D information in the sensing step, and a CAD model with post-processing is used to build the object models. Matching is first performed using the model and object features to calculate the object's position and orientation roughly, and a fine-adjustment step then improves their accuracy.

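The full segmentation-and-matching pipeline is not reproduced here; the sketch below shows RANSAC plane fitting, the kind of primitive used to segment planar surfaces from a stereo point cloud. Thresholds and the synthetic point cloud are assumptions.

```python
# Sketch: RANSAC plane fitting as a building block for planar surface
# segmentation of a stereo point cloud. Parameters are illustrative.
import numpy as np

def fit_plane_ransac(points: np.ndarray, iters: int = 200, tol: float = 0.01,
                     rng=None) -> tuple:
    """Return (normal, d, inlier_mask) for the plane n·x + d = 0 with the most
    inliers among `iters` random 3-point hypotheses."""
    rng = rng or np.random.default_rng()
    best = (None, None, np.zeros(len(points), dtype=bool))
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        n = n / norm
        d = -n.dot(sample[0])
        inliers = np.abs(points @ n + d) < tol
        if inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    plane = np.c_[rng.uniform(-1, 1, (300, 2)), np.zeros(300)]   # points on z = 0
    clutter = rng.uniform(-1, 1, (60, 3))                        # off-plane points
    n, d, mask = fit_plane_ransac(np.vstack([plane, clutter]), rng=rng)
    print("normal ~", np.round(n, 2), " inliers:", int(mask.sum()))
```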

퍼지 클러스터링과 스트링 매칭을 통합한 형상 인식법 (Pattern Recognition Method Using Fuzzy Clustering and String Matching)

  • 남원우;이상조
    • 대한기계학회논문집 / Vol.17 No.11 / pp.2711-2722 / 1993
  • Most current 2-D object recognition systems are model-based. In such systems, representations of a known set of objects are precompiled and stored in a database of models, and later used to recognize the image of an object in each instance. In this work, the approach to 2-D object recognition treats an object boundary as a string of structural units and uses string matching to analyze the scene. To reduce string-matching time, the models are rebuilt by means of the fuzzy c-means clustering algorithm. In the experiments, images of objects were taken by a CCD camera at the robot's initial position, and the models were constructed by the proposed algorithm. The image of an unknown object was then taken by the camera at a random position, and the unknown object was identified by comparison with the models. Finally, the amount of translation and rotation of the object from its initial position was computed.
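
As an illustration of the string-matching half of the approach, the sketch below compares boundary strings with an edit distance and classifies an unknown shape against model strings; the symbol encoding of the "structural units" and the fuzzy c-means model rebuilding are not reproduced, and the codes are hypothetical.

```python
# Sketch: edit-distance comparison of boundary strings encoded as characters.
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance between two symbol strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def classify(unknown: str, models: dict) -> str:
    """Pick the model whose boundary string is closest to the unknown object.
    In practice the boundary string is cyclic, so all rotations of `unknown`
    would be tried; omitted here for brevity."""
    return min(models, key=lambda name: edit_distance(unknown, models[name]))

if __name__ == "__main__":
    models = {"wrench": "LLCSSRC", "bracket": "LLLLRRRR"}   # hypothetical codes
    print(classify("LLCSRRC", models))                      # -> "wrench"
```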

이동로봇의 안전한 엘리베이터 탑승을 위한 RGB-D 센서 기반의 엘리베이터 인식 및 위치추정 (Elevator Recognition and Position Estimation based on RGB-D Sensor for Safe Elevator Boarding)

  • 장민경;조현준;송재복
    • 로봇학회논문지 / Vol.15 No.1 / pp.70-76 / 2020
  • Multi-floor navigation of a mobile robot requires a technology that allows the robot to get on and off the elevator safely. In this study, we therefore propose a method of recognizing the elevator from the robot's current position and estimating its location locally, so that the robot can board safely regardless of the position error accumulated during autonomous navigation. The proposed method uses a deep-learning-based image classifier to identify the elevator in the image information obtained from an RGB-D sensor, and extracts the boundary points between the elevator and the surrounding wall from the point cloud. This enables the robot to estimate a reliable boarding position and direction in real time for general elevators. Various experiments demonstrate the effectiveness and accuracy of the proposed method.
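
The classifier and the point-cloud boundary extraction are not reproduced; the sketch below only illustrates, under assumed coordinates, how a boarding target and heading could be derived once the two boundary points between the elevator opening and the wall are known in the robot frame.

```python
# Sketch: boarding target and heading from the two door-frame boundary points
# in the robot frame. Coordinates and standoff distance are illustrative.
import math

def boarding_pose(left_pt, right_pt, standoff: float = 0.8):
    """left_pt/right_pt: (x, y) boundary points [m] in the robot frame.
    Returns (goal_x, goal_y, heading): a goal `standoff` meters in front of the
    door center, with the heading facing the door."""
    cx, cy = (left_pt[0] + right_pt[0]) / 2.0, (left_pt[1] + right_pt[1]) / 2.0
    dx, dy = right_pt[0] - left_pt[0], right_pt[1] - left_pt[1]
    norm = math.hypot(dx, dy)
    nx, ny = -dy / norm, dx / norm        # unit normal to the door line
    if nx * cx + ny * cy > 0:             # make the normal point toward the robot
        nx, ny = -nx, -ny
    goal = (cx + nx * standoff, cy + ny * standoff)
    heading = math.atan2(cy - goal[1], cx - goal[0])   # face the door center
    return goal[0], goal[1], heading

if __name__ == "__main__":
    # door ~2.5 m ahead, 1 m wide; robot at the origin looking along +x
    print(boarding_pose((2.5, 0.5), (2.5, -0.5)))       # -> (1.7, 0.0, 0.0)
```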

천장 부착 컬러 표식을 이용한 이동로봇의 자기위치추정 (Localization of Mobile Robot Using Color Landmark mounted on Ceiling)

  • 오종규;이찬호
    • 대한전기학회:학술대회논문집 / 대한전기학회 2001년도 합동 추계학술대회 논문집 정보 및 제어부문 / pp.91-94 / 2001
  • In this paper, we propose a localization method for a mobile robot using color landmarks mounted on the ceiling. The work is composed of two parts: a landmark recognition part, which finds the positions of multiple landmarks in the image and identifies them, and an absolute position estimation part, which estimates the location and orientation of the mobile robot in an indoor environment. In the landmark recognition part, the mobile robot detects artificial color landmarks using a simple histogram intersection method in rg color space, which is insensitive to changes in illumination. The absolute position estimation part then calculates the relative position of the mobile robot with respect to the detected landmarks. To verify the proposed algorithm, a ceiling-oriented camera was installed on a mobile robot and the localization performance was examined with the designed artificial color landmarks. As a result, the mobile robot achieved reliable landmark detection and accurately estimated its position in the indoor environment.

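A minimal sketch of the histogram-intersection cue in rg chromaticity space named in the abstract; the landmark colors, window sizes, and bin count are assumptions, and the subsequent pose-estimation step is omitted.

```python
# Sketch: histogram intersection in rg chromaticity space as a landmark
# detection cue. Images and parameters are illustrative.
import numpy as np

def rg_histogram(rgb: np.ndarray, bins: int = 16) -> np.ndarray:
    """Normalized 2-D histogram over r = R/(R+G+B), g = G/(R+G+B)."""
    rgb = rgb.reshape(-1, 3).astype(float)
    s = rgb.sum(axis=1) + 1e-9
    r, g = rgb[:, 0] / s, rgb[:, 1] / s
    hist, _, _ = np.histogram2d(r, g, bins=bins, range=[[0, 1], [0, 1]])
    return hist / (hist.sum() + 1e-9)

def histogram_intersection(h1: np.ndarray, h2: np.ndarray) -> float:
    """Swain & Ballard intersection: 1.0 means identical color distributions."""
    return float(np.minimum(h1, h2).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    landmark = np.tile([200, 40, 40], (32, 32, 1)) + rng.integers(0, 10, (32, 32, 3))
    window_a = np.tile([195, 45, 38], (32, 32, 1)) + rng.integers(0, 10, (32, 32, 3))
    window_b = rng.integers(0, 255, (32, 32, 3))            # background clutter
    h_ref = rg_histogram(landmark)
    print("match  :", round(histogram_intersection(h_ref, rg_histogram(window_a)), 2))
    print("clutter:", round(histogram_intersection(h_ref, rg_histogram(window_b)), 2))
```

Because the rg chromaticity coordinates divide out overall brightness, the intersection score stays stable under the illumination changes the abstract mentions.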