• Title/Summary/Keyword: Vision recognition


An Automated Machine-Vision-based Feeding System for Engine Mount Parts (머신비젼 기반의 엔진마운트 부품 자동공급시스템)

  • Lee, Hyeong-Geun;Lee, Moon-Kyu
    • Journal of the Korean Society for Precision Engineering / v.18 no.5 / pp.177-185 / 2001
  • This paper describes a machine-vision-based prototype system for automatically feeding engine-mount parts to a swaging machine that assembles engine mounts. The system consists of a robot, a feeding device with two cylinders and two photo sensors, and a machine vision system. The machine vision system recognizes the type of each part delivered by the feeding device and estimates the angular difference between the inner-hole center of the part and the point predetermined for assembly. The robot then picks up each part and rotates it through the estimated angle so that the parts are assembled together as specified. An algorithm has been developed to recognize the part types and estimate the angular difference. Test results obtained for a set of real specimens indicate that the algorithm performs well enough to be applied to the prototype system.
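
The angle-estimation step in this abstract can be sketched roughly as follows. The use of OpenCV, the binarization threshold, the assumption that the inner hole is the largest dark blob, and the assumption that the part is centered in the image are all illustrative choices, not details from the paper.

```python
# Rough sketch of the angular-alignment step (assumptions: OpenCV input,
# the inner hole is the largest dark blob, the assembly reference angle is known).
import cv2
import numpy as np

def estimate_rotation(image_gray, target_angle_deg):
    """Return the angle (deg) the robot would rotate the part by."""
    # Binarize so the inner hole shows up as a dark region (threshold is a guess).
    _, mask = cv2.threshold(image_gray, 80, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hole = max(contours, key=cv2.contourArea)          # assume largest blob = inner hole
    m = cv2.moments(hole)
    hx, hy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # hole centroid
    cy, cx = np.array(image_gray.shape) / 2.0          # part assumed centered in the image
    current_angle = np.degrees(np.arctan2(hy - cy, hx - cx))
    return target_angle_deg - current_angle            # angular difference to correct
```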


Design of Intelligent Robot Vision System for Automatic Inspection of Steam Generator of Nuclear Plant (원자력 발전소 스팀제너레이터의 자동검사를 위한 지능형 로봇 비젼 시스템 설계)

  • 한성현
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.9 no.6 / pp.19-33 / 2000
  • In this paper, we propose a new approach to the development of an automatic vision system to examine and repair steam generator tubes from a remote distance. In nuclear power plants, workers are reluctant to work inside the steam generator because of the high-radiation environment and the limited working space. It is strongly recommended that examination and maintenance work be done by an automatic system to protect operators from radiation exposure. Digital signal processors are used to implement real-time recognition and examination of the steam generator tubes in the proposed vision system. The performance of the proposed digital vision system is illustrated by simulation and experiments on a similar steam generator model.
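
The abstract does not spell out the recognition algorithm, so the following is only a hypothetical illustration of how circular tube openings in a tubesheet image might be located in software; the Hough-transform approach, the use of OpenCV, and every parameter value are assumptions.

```python
# Illustrative sketch only: locating circular tube openings in a tubesheet image
# with a Hough transform. Parameter values and the use of OpenCV are assumptions.
import cv2
import numpy as np

def find_tube_openings(image_gray):
    blurred = cv2.medianBlur(image_gray, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                               param1=100, param2=30, minRadius=8, maxRadius=25)
    if circles is None:
        return []
    return [(int(x), int(y), int(r)) for x, y, r in np.round(circles[0])]
```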


A Study on Joint Tracking for Multipass Arc Welding using Vision Sensor (비전 센서를 이용한 다층 아크 용접에서 용접선 추적에 관한 연구)

  • 이정익;장인선;이세현;엄기원
    • Journal of Welding and Joining / v.16 no.3 / pp.85-94 / 1998
  • Welding fabrication invariably involves three distinct sequential steps: preparation, actual process execution, and post-weld inspection. One of the major problems in automating these steps and developing an autonomous welding system is the lack of proper sensing strategies. Conventionally, machine vision is used in robotic arc welding only to correct pre-taught welding paths in a single pass. In this paper, the developed vision processing techniques are detailed and their application in welding fabrication is covered. Finally, the software for the joint tracking system is proposed.
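
As a loose illustration of vision-based joint tracking (not the paper's method), the sketch below assumes a structured-light sensor whose laser stripe is the brightest pixel in each image column and takes the extremum of the smoothed stripe profile as the joint position; the sensor model and the 5-pixel smoothing window are assumptions.

```python
# Hypothetical sketch of seam (joint) localization from a laser-stripe image.
# Assumes the stripe is the brightest pixel in each column and the groove shows up
# as the extremum of the stripe profile. Not taken from the paper.
import numpy as np

def locate_joint(stripe_image):
    """stripe_image: 2-D array, laser stripe bright on a dark background."""
    profile = stripe_image.argmax(axis=0)          # stripe row index per column
    smoothed = np.convolve(profile, np.ones(5) / 5, mode="same")
    joint_col = int(smoothed.argmax())             # deepest point of the groove (assumed)
    return joint_col, int(profile[joint_col])
```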


Vision based Fast Hand Motion Recognition Method for an Untouchable User Interface of Smart Devices (스마트 기기의 비 접촉 사용자 인터페이스를 위한 비전 기반 고속 손동작 인식 기법)

  • Park, Jae Byung
    • Journal of the Institute of Electronics and Information Engineers / v.49 no.9 / pp.300-306 / 2012
  • In this paper, we propose a vision-based hand motion recognition method for an untouchable user interface of smart devices. First, the original color image is converted into a gray-scale image and its spatial resolution is reduced, taking the small memory and low computational power of smart devices into consideration. For robust recognition of hand motions through separation of horizontal and vertical motions, a horizontal principal area (HPA) and a vertical principal area (VPA) are defined. From the difference images of consecutively captured frames, the center of gravity (CoG) of the pixels significantly changed by the hand motion is obtained, and the direction of the hand motion is detected by fitting a least-mean-squares line to the CoG over time. To verify the feasibility of the proposed method, experiments are carried out with a vision system.
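
The core of the described method, frame differencing, the center of gravity (CoG) of the changed pixels, and a least-squares line through the CoG trajectory, can be sketched as below. The difference threshold, the use of OpenCV, and the left/right/up/down labeling are assumptions for illustration.

```python
# Minimal sketch: frame differencing, CoG of changed pixels, and a least-squares
# fit of the CoG trajectory over time. Threshold and labels are assumptions.
import cv2
import numpy as np

def cog_of_motion(prev_gray, cur_gray, thresh=30):
    diff = cv2.absdiff(prev_gray, cur_gray)
    ys, xs = np.nonzero(diff > thresh)
    if xs.size == 0:
        return None
    return xs.mean(), ys.mean()                      # CoG of significantly changed pixels

def motion_direction(cog_trace):
    """cog_trace: list of (x, y) CoGs from consecutive frames."""
    xs, ys = np.array(cog_trace).T
    t = np.arange(len(xs))
    vx = np.polyfit(t, xs, 1)[0]                     # least-squares slope of x over time
    vy = np.polyfit(t, ys, 1)[0]
    if abs(vx) > abs(vy):
        return "right" if vx > 0 else "left"
    return "down" if vy > 0 else "up"
```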

Hardware Implementation of Depth Image Stabilization Method for Efficient Computer Vision System (효율적인 컴퓨터 비전 시스템을 위한 깊이 영상 안정화 방법의 하드웨어 구현)

  • Kim, Geun-Jun;Kang, Bongsoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.8 / pp.1805-1810 / 2015
  • As depth data has become more accessible, it is used in many research areas, and motion recognition in computer vision also makes wide use of depth images. A more accurate motion recognition system requires more stable depth data, but depth sensors are noisy, and this noise affects the accuracy of the motion recognition system, so it should be suppressed. In this paper, we propose spatial-domain and temporal-domain stabilization for depth images and implement it as a hardware IP. We applied the hardware to a floor-removal algorithm and verified its effect, and performed real-time verification using an FPGA and an APU. The designed hardware has a maximum frequency of 202.184 MHz.
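
A minimal software sketch of spatial- plus temporal-domain depth stabilization is shown below; the median filter, the exponential temporal blend, and the weight alpha = 0.6 are assumptions made for illustration, since the paper implements its own method as a hardware IP.

```python
# Sketch of spatial- plus temporal-domain depth stabilization in software.
# The median filter, exponential blend, and alpha value are illustrative assumptions.
import cv2
import numpy as np

class DepthStabilizer:
    def __init__(self, alpha=0.6):
        self.alpha = alpha
        self.prev = None

    def update(self, depth_frame):
        spatial = cv2.medianBlur(depth_frame.astype(np.float32), 5)   # spatial denoising
        if self.prev is None:
            self.prev = spatial
        # temporal smoothing: blend with the previously stabilized frame
        self.prev = self.alpha * spatial + (1.0 - self.alpha) * self.prev
        return self.prev
```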

CNN-based People Recognition for Vision Occupancy Sensors (비전 점유센서를 위한 합성곱 신경망 기반 사람 인식)

  • Lee, Seung Soo;Choi, Changyeol;Kim, Manbae
    • Journal of Broadcast Engineering / v.23 no.2 / pp.274-282 / 2018
  • Most occupancy sensors installed in buildings, households, and so forth are pyroelectric infrared (PIR) sensors. One of their disadvantages is that a PIR sensor cannot detect a stationary person, because it responds only to variations in thermal radiation. To overcome this problem, the use of camera vision sensors has gained interest, where object tracking is used to detect stationary persons. However, object tracking has an inherent problem, tracking drift, so recognizing whether a static tracker actually contains a human is an important task. In this paper, we propose a CNN-based human recognition method that determines whether a static tracker contains a human. Experimental results validated that humans and non-humans are classified with an accuracy of about 88% and that the proposed method can be incorporated into practical vision occupancy sensors.
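
A hypothetical human / non-human classifier for static-tracker patches might look like the sketch below. The 64x64 input size, the layer configuration, and the use of PyTorch are illustrative assumptions; the paper's actual CNN may differ.

```python
# Hypothetical binary human / non-human classifier for static-tracker patches.
# Layer sizes, 64x64 input, and PyTorch are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class OccupancyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Linear(64 * 8 * 8, 2)   # human vs. non-human

    def forward(self, x):                            # x: (N, 3, 64, 64) tracker patches
        return self.classifier(self.features(x).flatten(1))
```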

Active Vision from Image-Text Multimodal System Learning (능동 시각을 이용한 이미지-텍스트 다중 모달 체계 학습)

  • Kim, Jin-Hwa;Zhang, Byoung-Tak
    • Journal of KIISE / v.43 no.7 / pp.795-800 / 2016
  • In image classification, recent CNNs compete with human performance. However, there are limitations in more general recognition. Here we deal with indoor images that contain too much information to be processed directly and that require information reduction before recognition. To reduce the amount of data processing, variational inference or variational Bayesian methods are typically suggested for object detection, but these methods suffer from the difficulty of marginalizing over the given space. In this study, we propose an image-text integrated recognition system using active vision based on Spatial Transformer Networks. The system attempts to efficiently sample a partial region of a given image for the given language information. Our experimental results demonstrate a significant improvement over traditional approaches. We also discuss the results of a qualitative analysis of the sampled images, the model's characteristics, and its limitations.
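
The "sample a partial region" step can be illustrated with a generic spatial-transformer crop, as in the sketch below: an affine warp parameterized by a zoom factor and a translation, sampled differentiably. This is standard STN usage, not the paper's exact model, and the parameterization is an assumption.

```python
# Generic spatial-transformer glimpse: an affine crop parameterized by scale and
# translation, sampled differentiably. Illustrative only, not the paper's model.
import torch
import torch.nn.functional as F

def attend(image, scale, tx, ty):
    """image: (N, C, H, W); scale, tx, ty: (N,) tensors in normalized [-1, 1] coords."""
    n = image.size(0)
    theta = torch.zeros(n, 2, 3, device=image.device)
    theta[:, 0, 0] = scale          # isotropic zoom; scale < 1 picks a smaller glimpse
    theta[:, 1, 1] = scale
    theta[:, 0, 2] = tx             # horizontal shift of the glimpse
    theta[:, 1, 2] = ty             # vertical shift of the glimpse
    grid = F.affine_grid(theta, image.size(), align_corners=False)
    return F.grid_sample(image, grid, align_corners=False)
```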

Pose-normalized 3D Face Modeling for Face Recognition

  • Yu, Sun-Jin;Lee, Sang-Youn
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.12C / pp.984-994 / 2010
  • Pose variation is a critical problem in face recognition. Three-dimensional (3D) face recognition techniques have been proposed, as 3D data contains depth information that may allow the problems of pose variation to be handled more effectively than with 2D face recognition methods. This paper proposes a pose-normalized 3D face modeling method that translates and rotates any pose angle to a frontal pose using a plane fitting method based on Singular Value Decomposition (SVD). First, we reconstruct the 3D face data with a stereo vision method. Second, the nose peak point is estimated from depth information, and the pose angle is then estimated by a facial-plane fitting algorithm using four facial features. Next, using the estimated pose angle, the 3D face is translated and rotated to a frontal pose. To demonstrate the effectiveness of the proposed method, we designed 2D and 3D face recognition experiments. The experimental results show that the performance of the normalized 3D face recognition method is superior to that of an un-normalized 3D face recognition method in overcoming the problems of pose variation.
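
The plane-fitting and frontalization idea can be sketched as below: fit a facial plane to feature points with SVD and rotate the cloud so the plane normal aligns with the camera axis. The specific feature points and the Rodrigues-style rotation construction are illustrative assumptions.

```python
# Sketch of pose normalization: SVD plane fit to facial feature points, then a
# rotation aligning the plane normal with the camera (z) axis. Illustrative only.
import numpy as np

def frontalize(points, feature_points):
    """points: (N, 3) face cloud; feature_points: (M, 3), e.g. eye corners and nose base."""
    centroid = feature_points.mean(axis=0)
    _, _, vt = np.linalg.svd(feature_points - centroid)
    normal = vt[-1]                                   # plane normal = least-variance direction
    if normal[2] < 0:                                 # make the normal face the camera
        normal = -normal
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(normal, z)
    c = float(np.dot(normal, z))
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    R = np.eye(3) + vx + vx @ vx / (1.0 + c)          # rotation taking normal onto z
    return (points - centroid) @ R.T                  # frontalized, centered cloud
```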

A Survey of Shape Descriptors in Computer Vision (컴퓨터비전에서 사용되는 모양표시자의 현황)

  • 유헌우;장동식
    • Journal of Institute of Control, Robotics and Systems / v.9 no.2 / pp.131-139 / 2003
  • Shape descriptors play an important role in systems for object recognition, retrieval, registration, and analysis. Seven well-known descriptors, including the MPEG-7 visual descriptors, are briefly reviewed, and a new robust pattern recognition descriptor is proposed. A performance comparison among the descriptors is presented. Experiments show that the newly proposed descriptor yields better results than the Fourier, invariant-moment, and edge-histogram descriptors.
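
As a concrete example of one classical descriptor family covered by such surveys, the sketch below computes Hu's seven invariant moments on a binary silhouette with OpenCV; the log scaling is a common convention, not something prescribed by the survey.

```python
# Example of a classical shape descriptor: Hu's seven invariant moments of a
# binary silhouette. The log scaling is a common convention, assumed here.
import cv2
import numpy as np

def hu_descriptor(binary_mask):
    moments = cv2.moments(binary_mask.astype(np.uint8))
    hu = cv2.HuMoments(moments).flatten()
    # log scale so the seven values have comparable magnitudes
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
```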

Visual Attention Algorithm for Object Recognition (물체 인식을 위한 시각 주목 알고리즘)

  • Ryu, Gwang-Geun;Lee, Sang-Hoon;Suh, Il-Hong
    • Proceedings of the KIEE Conference / 2006.04a / pp.306-308 / 2006
  • We propose an attention-based object recognition system that recognizes objects quickly and robustly. For this we calculate visual stimulus degrees and build saliency maps. Through these maps we find the most strongly attended part of the image according to its stimulus degree, and local features are extracted there to recognize objects.
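
A minimal saliency-map sketch is given below, using the spectral-residual method as a stand-in for the stimulus-degree computation; the paper builds its own measure, so the method, the 64x64 working resolution, and the smoothing parameters are all assumptions.

```python
# Minimal saliency sketch (spectral residual) for picking the most stimulating
# region before local-feature extraction. The method choice is an assumption.
import cv2
import numpy as np

def saliency_map(image_gray):
    small = cv2.resize(image_gray, (64, 64)).astype(np.float32)
    spectrum = np.fft.fft2(small)
    log_amp = np.log(np.abs(spectrum) + 1e-8)
    residual = log_amp - cv2.blur(log_amp, (3, 3))            # spectral residual
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * np.angle(spectrum)))) ** 2
    sal = cv2.GaussianBlur(sal.astype(np.float32), (9, 9), 2.5)
    return cv2.resize(sal / sal.max(), image_gray.shape[::-1])

def most_salient_point(image_gray):
    sal = saliency_map(image_gray)
    y, x = np.unravel_index(sal.argmax(), sal.shape)
    return int(x), int(y)            # attend here, then extract local features
```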
