• Title/Summary/Keyword: Vision Sensor (비전 센서)

Learning Similarity between Hand-posture and Structure for View-invariant Hand-posture Recognition (관측 시점에 강인한 손 모양 인식을 위한 손 모양과 손 구조 사이의 학습 기반 유사도 결정 방법)

  • Jang Hyo-Yeong;Jeong Jin-U;Byeon Jeung-Nam
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2006.05a / pp.187-191 / 2006
  • This paper proposes a method that determines the similarity between hand posture and hand structure through learning, in order to improve the performance of a vision-based hand-posture recognition system. Hand-posture recognition based on a vision sensor is difficult because of self-occlusion caused by the high degrees of freedom of the hand and the wide variation of input images with changes in viewing direction. Accordingly, vision-based hand-posture recognition typically either restricts the relative angle between the camera and the hand or deploys multiple cameras. Restricting the camera-hand angle, however, constrains the user's movements, and using multiple cameras requires additional consideration of how the recognition result for each input image is combined into the final result. To alleviate these problems, this paper divides the hand-posture features used in recognition into structural joint-angle information and hand image features, and defines the association between the two kinds of features through learning.
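
The abstract above describes splitting hand-posture features into structural joint-angle information and image features and learning the association between them. The paper's model is not specified here, so the following is only a minimal, hypothetical sketch: a ridge-regression mapping from image features to joint-angle features, with candidate postures scored by cosine similarity against stored angle templates. All names, dimensions, and data are illustrative assumptions.

```python
import numpy as np

def fit_ridge_map(image_feats, angle_feats, lam=1e-2):
    """Learn a linear map W from image features X to joint-angle features Y (ridge regression)."""
    X, Y = np.asarray(image_feats), np.asarray(angle_feats)
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)   # shape (d_image, d_angle)

def posture_similarity(image_feat, angle_template, W):
    """Cosine similarity between the predicted joint-angle vector and a stored posture template."""
    pred = np.asarray(image_feat) @ W
    t = np.asarray(angle_template)
    return float(pred @ t / (np.linalg.norm(pred) * np.linalg.norm(t) + 1e-9))

# Toy usage: 200 training views, 32-D image features, 15 joint angles, 5 posture classes.
rng = np.random.default_rng(0)
X_train, Y_train = rng.normal(size=(200, 32)), rng.normal(size=(200, 15))
W = fit_ridge_map(X_train, Y_train)
templates = rng.normal(size=(5, 15))          # one joint-angle template per posture class
query = rng.normal(size=32)                   # image feature extracted from a new viewpoint
scores = [posture_similarity(query, t, W) for t in templates]
print("best-matching posture:", int(np.argmax(scores)))
```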

Real-time People Occupancy Detection by Camera Vision Sensor (카메라 비전 센서를 활용하는 실시간 사람 점유 검출)

  • Gil, Jong In;Kim, Manbae
    • Journal of Broadcast Engineering / v.22 no.6 / pp.774-784 / 2017
  • Occupancy sensors installed in buildings and households turn off the lights when a space is vacant. Currently, PIR (pyroelectric infrared) motion sensors are widely used. Recently, research using camera sensors has been carried out to overcome the drawback of PIR sensors, which cannot detect static people. If a suitable trade-off between cost and performance is achieved, camera sensors are expected to replace current PIRs. In this paper, we propose vision sensor-based occupancy detection composed of tracking, recognition, and detection. Our software is designed to meet real-time processing requirements. In experiments, 14.5 fps is achieved on a 15 fps USB input, and the detection accuracy reaches 82.0%.
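
As a rough illustration of this kind of detection/tracking/recognition pipeline (not the paper's implementation), the sketch below combines OpenCV background subtraction for moving people with the stock HOG person detector for static people and measures the processed frame rate. The camera index and thresholds are assumptions.

```python
import time
import cv2

cap = cv2.VideoCapture(0)                      # assumed USB camera index
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frames, start = 0, time.time()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    motion = bg.apply(frame)                                  # foreground mask (moving people)
    moving = cv2.countNonZero(motion) > 0.01 * motion.size    # assumed motion threshold
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))  # static people
    occupied = moving or len(boxes) > 0
    frames += 1
    if frames % 30 == 0:
        fps = frames / (time.time() - start)
        print(f"occupied={occupied}  fps={fps:.1f}")
cap.release()
```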

Implementation of the SLAM System Using a Single Vision and Distance Sensors (단일 영상과 거리센서를 이용한 SLAM시스템 구현)

  • Yoo, Sung-Goo;Chong, Kil-To
    • Journal of the Institute of Electronics Engineers of Korea SC / v.45 no.6 / pp.149-156 / 2008
  • A SLAM (Simultaneous Localization and Mapping) system finds a global position and builds a map from sensing data while an unmanned robot navigates an unknown environment. Two kinds of systems have been developed: one uses distance-measurement sensors such as ultrasonic and laser sensors, and the other uses a stereo vision system. SLAM with distance-measurement sensors has low computation time and low cost, but the precision of the system can be degraded by measurement error or sensor non-linearity. In contrast, a stereo vision system can measure the 3D space accurately, but it requires a high-end system for the complex calculations and is expensive. In this paper, we implement a SLAM system using a single camera image and a PSD sensor. It detects obstacles with the front PSD sensor and then perceives the size and features of the obstacles by image processing. Probabilistic SLAM was implemented using the sensor and image data, and we verify the performance of the system through real experiments.
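
As a hypothetical illustration of the fusion step only (the paper's probabilistic SLAM filter is not reproduced), the sketch below converts a PSD range reading plus an image-derived bearing into a world-frame obstacle position and marks it in a coarse occupancy grid. The grid size, resolution, and example numbers are assumptions.

```python
import math
import numpy as np

GRID_RES = 0.05                                  # metres per cell (assumed)
grid = np.zeros((200, 200), dtype=np.uint8)      # 10 m x 10 m map, origin at the centre

def mark_obstacle(pose, psd_range, bearing):
    """pose = (x, y, heading) of the robot; bearing of the obstacle from image processing (rad)."""
    x, y, th = pose
    ox = x + psd_range * math.cos(th + bearing)  # obstacle position in the world frame
    oy = y + psd_range * math.sin(th + bearing)
    i = int(oy / GRID_RES) + grid.shape[0] // 2
    j = int(ox / GRID_RES) + grid.shape[1] // 2
    if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
        grid[i, j] = 255                         # mark the cell as occupied

# Example: robot at (1.0, 0.5) heading 30 deg, PSD reads 0.8 m, obstacle 5 deg left in the image.
mark_obstacle((1.0, 0.5, math.radians(30)), 0.8, math.radians(5))
print("occupied cells:", int((grid == 255).sum()))
```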

Development of a Lane Departure Avoidance System using Vision Sensor and Active Steering Control (비전 센서 및 능동 조향 제어를 이용한 차선 이탈 방지 시스템 개발)

  • 허건수;박범찬;홍대건
    • Transactions of the Korean Society of Automotive Engineers / v.11 no.6 / pp.222-228 / 2003
  • A lane departure avoidance system is one of the key technologies for future active-safety passenger cars. The system is composed of two subsystems: a lane sensing algorithm and an active-steering controller. In this paper, the road image is obtained by a vision sensor, and the lane parameters are estimated using image processing and a Kalman filter. The active-steering controller is designed to prevent lane departure and can be realized by a steer-by-wire actuator. The lane-sensing algorithm and active-steering controller are implemented in a steering HILS (Hardware-In-the-Loop Simulation), and their performance is evaluated with a human driver in the loop.
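
The abstract does not give the control law, so the following is only a hypothetical sketch of an active-steering intervention: a proportional command on lateral offset and heading error that engages when the car approaches the lane marking. All gains, the lane half-width, and the engagement margin are assumed values, not the paper's.

```python
def steering_command(lateral_offset, heading_error, lane_half_width=1.75,
                     k_y=0.4, k_psi=1.2, margin=0.3):
    """Return a steer-by-wire road-wheel angle (rad) that pulls the car back toward the lane centre.

    lateral_offset: metres from the lane centre (+ = drifting left), from the vision/Kalman estimator.
    heading_error:  vehicle heading relative to the lane (rad).
    Gains and the departure margin are illustrative assumptions, not the paper's values.
    """
    departing = abs(lateral_offset) > lane_half_width - margin
    if not departing:
        return 0.0                         # driver keeps full authority inside the lane
    return -(k_y * lateral_offset + k_psi * heading_error)

print(steering_command(1.6, 0.02))         # near the left marking -> negative (rightward) steer
print(steering_command(0.2, 0.00))         # well inside the lane  -> no intervention
```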

Development of a Lane Sensing Algorithm Using Vision Sensors (비전 센서를 이용한 차선 감지 알고리듬 개발)

  • Park, Yong-Jun;Heo, Geon-Su
    • Transactions of the Korean Society of Mechanical Engineers A / v.26 no.8 / pp.1666-1671 / 2002
  • A lane sensing algorithm using vision sensors is developed based on lane geometry models. The parameters of the lane geometry models are estimated by a Kalman filter and used to reconstruct the lane geometry in the global coordinate frame. The inverse perspective mapping from the image plane to global coordinates assumes a flat road, but the roll and pitch motions of the vehicle are taken into account from the lane-sensing perspective. The proposed algorithm shows robust lane sensing performance compared to conventional algorithms.
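
A central step mentioned above is the inverse perspective mapping from the image plane to ground coordinates under the flat-road assumption. Below is a minimal pinhole-camera sketch of that mapping with a pitch correction; the intrinsics, mounting height, and pitch angle are placeholder values rather than the paper's calibration.

```python
import math

def pixel_to_ground(u, v, fx=800.0, fy=800.0, cx=320.0, cy=240.0,
                    cam_height=1.2, pitch=math.radians(2.0)):
    """Map an image pixel (u, v) on the road surface to ground-plane (X lateral, Z forward) in metres.

    Flat-road assumption: the pixel's viewing ray is intersected with the road plane located
    cam_height below the camera, after compensating the camera's downward pitch.
    All intrinsic and extrinsic values here are illustrative.
    """
    # Normalised viewing ray in the camera frame (x right, y down, z forward).
    x, y, z = (u - cx) / fx, (v - cy) / fy, 1.0
    # Rotate the ray from the camera frame into the road frame (camera pitched down by `pitch`).
    yr = y * math.cos(pitch) + z * math.sin(pitch)
    zr = -y * math.sin(pitch) + z * math.cos(pitch)
    if yr <= 1e-6:
        return None                        # ray does not hit the ground ahead of the car
    scale = cam_height / yr                # stretch the ray until it reaches the road plane
    return x * scale, zr * scale           # (lateral offset, forward distance)

print(pixel_to_ground(400, 300))           # a pixel below the horizon maps to a road point
```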

A Study on Automatic Seam Tracking using Vision Sensor (비전센서를 이용한 자동추적장치에 관한 연구)

  • 전진환;조택동;양상민
    • Proceedings of the Korean Society of Precision Engineering Conference / 1995.10a / pp.1105-1109 / 1995
  • A CCD camera integrated into a vision system was used to realize an automatic seam-tracking system, and the 3-D information needed to generate the torch path was obtained using a laser slit beam. An adaptive Hough transform was used to extract the laser stripe and obtain the welding-specific point. Although the basic Hough transform takes too much time for on-line image processing, it tends to be robust to noise such as spatter; for that reason it was complemented with the adaptive Hough transform to provide on-line processing capability for locating the welding-specific point. The dead zone, where sensing of the weld line is impossible, is eliminated by rotating the camera about an axis centered at the welding torch. The camera angle is controlled so as to acquire the minimum image data needed for sensing the weld line, which reduces the image processing time. A fuzzy controller is adopted to control the camera angle.
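
The sketch below illustrates the general stripe-to-seam-point idea using OpenCV's standard probabilistic Hough transform; it is a stand-in for, not a reproduction of, the adaptive Hough transform the paper develops, and the thresholds and file name are assumptions.

```python
import cv2
import numpy as np

def find_seam_point(stripe_img):
    """Return an approximate seam (torch target) point from a laser-stripe image, or None."""
    _, mask = cv2.threshold(stripe_img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    lines = cv2.HoughLinesP(mask, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=5)
    if lines is None or len(lines) < 2:
        return None
    # Take the two longest segments as the stripe on either side of the joint.
    segs = sorted(lines[:, 0, :], key=lambda s: np.hypot(s[2] - s[0], s[3] - s[1]))[-2:]
    (x1, y1, x2, y2), (x3, y3, x4, y4) = segs
    # Intersection of the two infinite lines through the segments.
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-6:
        return None                        # nearly parallel: no usable seam corner
    a, b = x1 * y2 - y1 * x2, x3 * y4 - y3 * x4
    px = (a * (x3 - x4) - (x1 - x2) * b) / d
    py = (a * (y3 - y4) - (y1 - y2) * b) / d
    return int(px), int(py)

stripe = cv2.imread("stripe.png", cv2.IMREAD_GRAYSCALE)   # hypothetical captured frame
if stripe is not None:
    print("seam point:", find_seam_point(stripe))
```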

Development of Vision system for Back Light Unit of Defect (백라이트 유닛의 결함 검사를 위한 비전 시스템 개발)

  • Cho, Sang-Hee;Han, Chang-Ho;Oh, Choon-Suk;Ryu, Young-Kee
    • Proceedings of the KIEE Conference / 2005.10b / pp.127-129 / 2005
  • This study builds a machine vision system for inspecting backlight units. The system is broadly divided into hardware and software; the hardware consists of a lighting unit, an image acquisition unit, and a robot-arm control unit. The lighting unit is composed of 36 W FPL lamps, and the backlight unit is mounted on an acrylic plate placed on top of the lighting unit as a holder. The robot-arm control unit drives a two-axis robot arm to move the CCD sensor attached to the arm's sensor mount, while the image acquisition unit captures images and transfers them to a PC. The image-processing inspection algorithm in the software is divided into an algorithm for light-guide plates with a regular printed pattern and an algorithm for backlight units without such a pattern. For panels with a printed pattern, a template-check method using morphological operations and a block-matching method are used; for units without a pattern, an improved Otsu method is used to detect stains and faint defects. Experimental results demonstrate excellent performance, with a detection rate above 90% even for non-uniform defects and defects of uneven brightness.
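
For the pattern-free branch, a minimal sketch of the general approach is given below: plain Otsu thresholding (the paper's improved Otsu method is not reproduced) followed by morphological cleanup and contour extraction to count blob-like defects. The file name, kernel size, and minimum blob area are assumptions.

```python
import cv2

img = cv2.imread("blu_panel.png", cv2.IMREAD_GRAYSCALE)    # hypothetical captured panel image
if img is not None:
    blur = cv2.GaussianBlur(img, (5, 5), 0)                 # suppress sensor noise
    # Plain Otsu threshold; the paper uses an improved Otsu method for faint, uneven defects.
    _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop isolated speckle
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    defects = [c for c in contours if cv2.contourArea(c) > 50.0]   # assumed minimum blob area
    print(f"{len(defects)} candidate defects found")
```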

The Multipass Joint Tracking System by Vision Sensor (비전센서를 이용한 다층 용접선 추적 시스템)

  • Lee, Jeong-Ick;Koh, Byung-Kab
    • Transactions of the Korean Society of Machine Tool Engineers / v.16 no.5 / pp.14-23 / 2007
  • Welding fabrication invariably involves three distinct sequential steps: preparation, actual process execution, and post-weld inspection. One of the major problems in automating these steps and developing an autonomous welding system is the lack of proper sensing strategies. Conventionally, machine vision is used in robotic arc welding only for correcting pre-taught welding paths in a single pass. In this paper, however, multipass tracking rather than single-pass tracking is performed with both a conventional seam-tracking algorithm and a newly developed one, and the tracking performance of the two algorithms in multipass welding is compared. As a result, the conventional seam-tracking algorithm shows superior tracking performance to the developed one in multipass welding.
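
The abstract does not disclose either algorithm, so the sketch below only illustrates the multipass bookkeeping involved: reusing a seam path sensed during the root pass and shifting it by per-pass lateral and vertical offsets for the later beads. The offset table and path samples are assumed values.

```python
# Per-pass torch offsets relative to the tracked seam line, in millimetres (assumed values).
PASS_OFFSETS = {1: (0.0, 0.0), 2: (-2.5, 1.5), 3: (2.5, 1.5)}   # (lateral, vertical)

def torch_path(seam_points, pass_no):
    """Shift the sensed seam path (x travel, y lateral, z vertical) for the given weld pass."""
    dy, dz = PASS_OFFSETS[pass_no]
    return [(x, y + dy, z + dz) for (x, y, z) in seam_points]

# Seam path sensed during the first (root) pass, sampled along the travel direction x.
seam = [(0.0, 0.0, 0.0), (10.0, 0.3, 0.0), (20.0, 0.5, 0.1)]
for p in (1, 2, 3):
    print(f"pass {p}:", torch_path(seam, p))
```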

A Study on Joint Tracking for Multipass Arc Welding using Vision Sensor (비전 센서를 이용한 다층 아크 용접에서 용접선 추적에 관한 연구)

  • 이정익;장인선;이세현;엄기원
    • Journal of Welding and Joining / v.16 no.3 / pp.85-94 / 1998
  • Welding fabrication invariably involves three distinct sequential steps: preparation, actual process execution, and post-weld inspection. One of the major problems in automating these steps and developing an autonomous welding system is the lack of proper sensing strategies. Conventionally, machine vision is used in robotic arc welding only for correcting pre-taught welding paths in a single pass. In this paper, the developed vision processing techniques are described in detail, and their application in welding fabrication is covered. Finally, software for a joint tracking system is proposed.

A Study of Inspection of Weld Bead Defects using Laser Vision Sensor (레이저 비전 센서를 이용한 용접비드의 외부결함 검출에 관한 연구)

  • 이정익;이세헌
    • Journal of Welding and Joining / v.17 no.2 / pp.53-60 / 1999
  • Conventionally, a CCD camera and a vision sensor using a projected pattern of light are used to inspect weld bead defects. With this method, however, a great deal of time is needed for image preprocessing, stripe extraction, thinning, and so on. In this study, a laser vision sensor using a scanning beam of light is used to shorten the time required for image preprocessing. Software is developed to decide in real time whether the weld bead has a proper shape. The criteria are based on the classification of imperfections in metallic fusion welds (ISO 6520) and the limits for imperfections (ISO 5817).
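
As a rough illustration of profile-based bead checking (not the paper's software, and not actual ISO 5817 limit values), the sketch below measures bead width, reinforcement height, and undercut depth from one scanned cross-section and compares them against user-supplied limits.

```python
import numpy as np

def inspect_profile(y, z, plate_level=0.0, limits=None):
    """Check one laser-scanned cross-section (y lateral, z height, both in mm) of a weld bead.

    The limit values below are illustrative placeholders; real acceptance limits
    come from ISO 5817 for the chosen quality level.
    """
    limits = limits or {"min_width": 4.0, "max_reinforcement": 3.0, "max_undercut": 0.5}
    y, z = np.asarray(y), np.asarray(z)
    above = z > plate_level + 0.1                   # samples belonging to the bead crown
    width = float(y[above].max() - y[above].min()) if above.any() else 0.0
    reinforcement = float(z.max() - plate_level)
    undercut = float(plate_level - z.min())         # depth of any groove beside the bead
    return {
        "width_ok": width >= limits["min_width"],
        "reinforcement_ok": reinforcement <= limits["max_reinforcement"],
        "undercut_ok": undercut <= limits["max_undercut"],
    }

y = np.linspace(-5, 5, 101)
z = np.where(np.abs(y) < 2.5, 1.5 * np.cos(np.pi * y / 5.0), 0.0)   # synthetic bead profile
print(inspect_profile(y, z))
```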
