• Title/Summary/Keyword: 비젼 센서 (vision sensor)

Search Results: 71

Control of Mobile Robot Navigation Using Vision Sensor Data Fusion by Nonlinear Transformation (비선형 변환의 비젼센서 데이터융합을 이용한 이동로봇 주행제어)

  • Jin Tae-Seok;Lee Jang-Myung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.11 no.4
    • /
    • pp.304-313
    • /
    • 2005
  • The robots that will be needed in the near future are human-friendly robots that are able to coexist with humans and support them effectively. To realize this, a robot needs to recognize its position and orientation for intelligent performance in an unknown environment. Mobile robots may navigate by means of a number of sensing systems, such as sonar or vision. Note that in conventional fusion schemes the measurement depends only on the current data sets; therefore, more sensors are required to measure a certain physical parameter or to improve the accuracy of the measurement. In this research, instead of adding more sensors to the system, the temporal sequence of the data sets is stored and utilized for accurate measurement. As a general approach to sensor fusion, a UT-Based Sensor Fusion (UTSF) scheme using the Unscented Transformation (UT) is proposed for either joint or disjoint data structures and applied to landmark identification for mobile robot navigation. The theoretical basis is illustrated by examples, and the effectiveness is demonstrated through simulations and experiments. The newly proposed UTSF scheme is applied to the navigation of a mobile robot in both unstructured and structured environments, and its performance is verified by computer simulation and experiment.
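The core building block named in the abstract is the unscented transformation. As a hedged illustration (not the paper's UTSF equations), the sketch below propagates a Gaussian state through a nonlinear measurement function using standard sigma points and weights; the range/bearing landmark model at the end is an assumed example.

```python
# Minimal unscented transformation sketch (standard Wan/van der Merwe form),
# not the paper's UTSF scheme; alpha, beta, kappa are common defaults.
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear function f."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    # Sigma points: the mean plus/minus scaled columns of the covariance square root.
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])  # (2n+1, n)
    # Weights for the mean and covariance estimates.
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    # Propagate sigma points through the nonlinearity and re-estimate the Gaussian.
    y = np.array([f(s) for s in sigma])
    y_mean = wm @ y
    diff = y - y_mean
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mean, y_cov

# Assumed example: range/bearing measurement of a landmark from a robot pose (x, y, heading).
h = lambda s: np.array([np.hypot(s[0], s[1]), np.arctan2(s[1], s[0]) - s[2]])
m, P = unscented_transform(np.array([2.0, 1.0, 0.1]), np.diag([0.05, 0.05, 0.01]), h)
```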

Design and Implementation of Vision Box Based on Embedded Platform (Embedded Platform 기반 Vision Box 설계 및 구현)

  • Kim, Pan-Kyu;Lee, Jong-Hyeok
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.1
    • /
    • pp.191-197
    • /
    • 2007
  • A vision system is an object recognition system that analyzes image information captured through a camera. Vision systems can be applied to various fields, and vehicle recognition is one of them. Many algorithms for vehicle recognition have been proposed, but they involve complex computation, require long processing times, and sometimes cause problems. In this research we propose a vehicle type recognition system using a vision box based on an embedded platform. In testing, the system achieves a 100% recognition rate under optimal conditions. However, when the conditions are changed by lighting, noise, and viewing angle, the recognition rate decreases as the pattern score is lowered, and the recognition speed slows down.
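The abstract does not specify the matching algorithm behind the "pattern score", so the following is only an assumed sketch of template matching by normalized cross-correlation, whose score degrades under lighting, noise, and angle changes in the way the abstract reports.

```python
# Illustrative NCC template matching; not the paper's actual recognition method.
import numpy as np

def ncc_score(patch, template):
    """Normalized cross-correlation between an image patch and a same-size template."""
    p = patch.astype(float) - patch.mean()
    t = template.astype(float) - template.mean()
    denom = np.sqrt((p**2).sum() * (t**2).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def recognize_vehicle_type(roi, templates, threshold=0.8):
    """Return the best-matching vehicle type, or None if the score is too low."""
    scores = {name: ncc_score(roi, tmpl) for name, tmpl in templates.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```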

Three-dimensional Machine Vision System based on moire Interferometry for the Ball Shape Inspection of Micro BGA Packages (마이크로 BGA 패키지의 볼 형상 시각검사를 위한 모아레 간섭계 기반 3차원 머신 비젼 시스템)

  • Kim, Min-Young
    • Journal of the Microelectronics and Packaging Society
    • /
    • v.19 no.1
    • /
    • pp.81-87
    • /
    • 2012
  • This paper focuses on an in-line three-dimensional measurement system for micro balls on micro Ball-Grid-Array (BGA) packages. Most visual inspection systems still suffer from the complicated reflection characteristics of micro balls. For accurate shape measurement, a specially designed visual sensor system is proposed based on the sensing principle of phase-shifting moire interferometry. The system consists of a pattern projection system with four projection subsystems and an imaging system. In the projection system, the four subsystems have spatially different projection directions so that the target objects are illuminated by patterns with different incident directions. For the phase shifting, the grating pattern of each subsystem is moved at regular steps by a PZT actuator. To efficiently remove specular noise and shadow areas of BGA balls, a compact multiple-pattern projection and imaging system is implemented and tested. In particular, a sensor fusion algorithm that integrates the four information sets acquired from the multiple projections into one is proposed on the basis of Bayesian sensor fusion theory. To evaluate the proposed system, a series of experiments is performed and the results are analyzed in detail.
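For context, here is a minimal sketch of the standard four-step phase-shifting calculation that such moire/fringe projection sensors typically rely on; the paper's projection geometry and Bayesian fusion of the four projection directions are not reproduced, and the phase-to-height scale factor is left as an assumed parameter.

```python
# Standard four-step phase-shifting formula; a generic sketch, not the paper's pipeline.
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase map from four fringe images shifted by 0, 90, 180, 270 degrees."""
    return np.arctan2(i4.astype(float) - i2, i1.astype(float) - i3)

def phase_to_height(unwrapped_phase, equivalent_wavelength):
    """Convert unwrapped phase to height; the scale factor depends on the optics (assumed)."""
    return unwrapped_phase * equivalent_wavelength / (2.0 * np.pi)
```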

The Technique of Human tracking using ultrasonic sensor for Human Tracking of Cooperation robot based Mobile Platform (모바일 플랫폼 기반 협동로봇의 사용자 추종을 위한 초음파 센서 활용 기법)

  • Yum, Seung-Ho;Eom, Su-Hong;Lee, Eung-Hyuk
    • Journal of IKEEE
    • /
    • v.24 no.2
    • /
    • pp.638-648
    • /
    • 2020
  • Currently, user-following methods for intelligent cooperative robots are usually based on vision systems or LiDAR and show excellent performance. However, in the closed spaces of the COVID-19 pandemic, which spread worldwide in 2020, robots cooperating with medical staff were scarce. This is because medical staff all wear protective clothing to prevent virus infection, a condition to which existing research techniques are not easy to apply. Therefore, to solve these problems, this paper separates the ultrasonic sensor into transmitting and receiving parts and, based on this, proposes a method that estimates the user's position so that the robot can actively follow and cooperate with people. An improved median filter was applied to the ultrasonic measurements to reduce errors caused by hard reflections and multiple reflections, and the driving behavior was improved by applying curvature trajectories for smooth operation in small areas. In the results, the median filter reduced the angle and distance errors by 70%, and driving stability was verified on 'S'- and '8'-shaped test courses.
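A minimal sketch of sliding-window median filtering on ultrasonic readings, in the spirit of the improved median filter the abstract mentions; the paper's specific modification is not described in the abstract, so only the plain filter is shown.

```python
# Plain sliding-window median filter for noisy ultrasonic range/angle readings.
from collections import deque
from statistics import median

class MedianFilter:
    """Median over the last `size` measurements; suppresses single-sample spikes."""
    def __init__(self, size=5):
        self.window = deque(maxlen=size)

    def update(self, measurement):
        self.window.append(measurement)
        return median(self.window)

# Example: a spike from a hard reflection (3.40 m) is suppressed.
f = MedianFilter(size=5)
smoothed = [f.update(d) for d in [1.02, 1.01, 3.40, 1.03, 1.02, 1.00]]
```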

Design and implementation of a 3-axis Motion Sensor based SWAT Hand-signal Motion-recognition System (3축 모션 센서 기반 SWAT 수신호 모션 인식 시스템 설계 및 구현)

  • Yun, June;Pyun, Kihyun
    • Journal of Internet Computing and Services
    • /
    • v.15 no.4
    • /
    • pp.33-42
    • /
    • 2014
  • A hand signal is an effective means of communication in situations where voice cannot be used, especially for soldiers. Vision-based approaches using cameras as input devices are widely suggested in the literature. However, these approaches are not suitable for soldiers, whose view is obstructed in many cases. In addition, existing special-glove approaches utilize finger information only; thus, they are still insufficient for soldiers' hand-signal recognition, which involves not only finger motions but also additional information such as the rotation of the hand. In this paper, we have designed and implemented a new recognition system for six military hand-signal motions: 'ready', 'move', 'quick move', 'crawl', 'stop', and 'lying-down'. For this purpose, we have proposed a finger-recognition method and motion-recognition methods. The finger-recognition method discriminates how much each finger is bent, i.e., 'completely flattened', 'slightly flattened', 'slightly bent', or 'completely bent'. The motion-recognition algorithms are based on the characterization of each hand-signal motion in terms of the three axes. Through repeated experiments, our system has shown 91.2% correct recognition.
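A hedged sketch of the four-level finger-bend discrimination described above; the normalized sensor reading and the threshold values are assumptions for illustration, not values from the paper.

```python
# Four-level bend classification from a normalized reading; thresholds are assumed.
BEND_LEVELS = ["completely flattened", "slightly flattened",
               "slightly bent", "completely bent"]

def classify_bend(raw_value, thresholds=(0.25, 0.5, 0.75)):
    """Map a normalized bend reading in [0, 1] to one of four discrete levels."""
    for level, t in zip(BEND_LEVELS, thresholds):
        if raw_value < t:
            return level
    return BEND_LEVELS[-1]

def classify_hand(readings):
    """Classify all five fingers from normalized readings (thumb .. little finger)."""
    return [classify_bend(v) for v in readings]
```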

Smooth Haptic Interaction Methods in Augmented Reality Haptics (증강 현실에서의 부드러운 촉각 상호작용 방법)

  • Lee, Beom-Chan;Hwang, Sun-Uk;Kim, Hyun-Gon;Lee, Yong-Gu;Ryu, Je-Ha
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집)
    • /
    • 2009.02a
    • /
    • pp.2072-2072
    • /
    • 2009
  • Recent studies have discussed the potential of haptic interaction in Augmented Reality (AR) environments. AR based on vision-based tracking uses predefined two-dimensional markers to augment virtual objects onto the live video captured by a camera. However, due to several error sources, such as errors in recognizing the marker position and sensor noise inside the camera, marker noise (noise arising while recognizing the marker) inevitably occurs. For this reason, when a user holds a marker in one hand and touches the object augmented on that marker with a haptic device in the other hand, the marker noise produces force trembling. This phenomenon occurs even when the marker is stationary. Moreover, when the object augmented on the marker moves at a slightly fast speed, the measured displacement can be discontinuous between consecutive frames. If the user haptically interacts with a virtual object whose position and orientation are updated at about 30 Hz, the computed reaction force may exhibit abrupt changes. To overcome this, a previous method combined a constant threshold for minimizing marker noise with interpolation. However, because that method relies on a constant threshold and assumes that the video frame rate and the haptic update rate are constant, force discontinuities still occur. Therefore, this paper proposes two methods to compensate for the force discontinuities that can occur in AR: an Extended Kalman Filter (EKF) to remove noise, and an adaptive extrapolation method to remove abrupt force changes caused by the difference between the video and haptic update rates.
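A hedged sketch of the underlying problem the paper addresses: bridging roughly 30 Hz marker pose updates to a roughly 1 kHz haptic loop by velocity extrapolation. This is not the paper's EKF or adaptive extrapolation formulation, only the basic mechanism those methods refine.

```python
# Simple linear extrapolation of slow video-rate marker positions for a fast haptic loop.
import numpy as np

class PoseExtrapolator:
    """Estimate the marker position at haptic time from the last two video updates."""
    def __init__(self):
        self.p_prev = None   # previous video-frame position
        self.p_last = None   # latest video-frame position
        self.t_prev = None
        self.t_last = None

    def video_update(self, position, timestamp):
        self.p_prev, self.t_prev = self.p_last, self.t_last
        self.p_last, self.t_last = np.asarray(position, float), timestamp

    def haptic_query(self, timestamp):
        if self.p_prev is None:
            return self.p_last  # not enough history yet
        velocity = (self.p_last - self.p_prev) / (self.t_last - self.t_prev)
        return self.p_last + velocity * (timestamp - self.t_last)
```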


Real 3-D Shape Restoration using Lookup Table (룩업 테이블을 이용한 물체의 3-D 형상복원)

  • Kim, Kuk-Se;Lee, Jeong-Gi;Song, Gi-Beom;Kim, Choong-Won;Lee, Joon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.8 no.5
    • /
    • pp.1096-1101
    • /
    • 2004
  • 3-D shapes are used for movie effects, animation, industrial design, medical services, education, engineering, and more. However, it is not easy to produce a 3-D shape from 2-D image information. There are two common methods for restoring a 3-D image from 2-D images: the first uses a laser, and the second acquires the 3-D image through stereo vision. Instead of these two methods, which involve many difficulties, this paper presents a simpler approach. We present here a simple and efficient method, called direct calibration, which does not require any equations at all. The direct calibration procedure builds a lookup table (LUT) linking image and 3-D coordinates using a real 3-D triangulation system. The LUT is built by measuring the image coordinates of a grid of known 3-D points and recording both the image and world coordinates for each point; the depth values of all other visible points are obtained by interpolation.
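A minimal sketch of the direct-calibration idea: record image and world coordinates for a grid of known 3-D points, then interpolate for other pixels. SciPy's griddata is used here for convenience; the paper's own interpolation scheme is not specified in the abstract.

```python
# LUT built from calibration points, queried by per-coordinate linear interpolation.
import numpy as np
from scipy.interpolate import griddata

def build_lut(image_points, world_points):
    """image_points: (N, 2) pixel coords; world_points: (N, 3) measured 3-D coords."""
    return np.asarray(image_points, float), np.asarray(world_points, float)

def lookup(lut, query_pixels):
    """Interpolate 3-D coordinates for arbitrary pixel positions inside the grid."""
    img_pts, wld_pts = lut
    q = np.asarray(query_pixels, float)
    return np.stack([griddata(img_pts, wld_pts[:, k], q, method="linear")
                     for k in range(3)], axis=-1)
```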

Real-Time Object Recognition Using Local Features (지역 특징을 사용한 실시간 객체인식)

  • Kim, Dae-Hoon;Hwang, Een-Jun
    • Journal of IKEEE
    • /
    • v.14 no.3
    • /
    • pp.224-231
    • /
    • 2010
  • Automatic detection of objects in images has been one of the core challenges in areas such as computer vision and pattern analysis. In particular, with the recent spread of personal mobile devices such as smartphones, such technology needs to be ported to them. These smartphones are equipped with devices such as a camera, GPS, and gyroscope and provide various services through user-friendly interfaces. However, smartphones fail to deliver high performance due to limited system resources. In this paper, we propose a new scheme to improve object recognition performance based on pre-computation and simple local features. In the pre-processing step, we first find several representative parts from objects of similar type and classify them. In addition, we extract features from each classified part and train them using regression functions. For a given query image, we first find candidate representative parts and compare them with the trained information to recognize objects. Through experiments, we have shown that our proposed scheme can achieve reasonable performance.
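As an illustration of recognition with simple local features, the sketch below scores a query image against stored model images using OpenCV ORB descriptors and a ratio test; the paper's representative-part extraction and regression-function training are not reproduced here.

```python
# Generic local-feature matching with ORB; illustrative only, not the paper's scheme.
import cv2

def count_good_matches(query_img, model_img, ratio=0.75):
    """Count ratio-test-filtered ORB matches between a query and a model image."""
    orb = cv2.ORB_create()
    _, d1 = orb.detectAndCompute(query_img, None)
    _, d2 = orb.detectAndCompute(model_img, None)
    if d1 is None or d2 is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(d1, d2, k=2)
    return sum(1 for pair in matches
               if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance)

def recognize(query_img, models):
    """Return the model name with the most good matches, or None if nothing matches."""
    scores = {name: count_good_matches(query_img, img) for name, img in models.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```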

Detecting Nighttime Pedestrians for PDS Using Camera in Visible Spectrum (가시 스펙트럼 대역 카메라를 사용하는 PDS를 위한 야간 보행자 검출)

  • Lee, Wang-Hee;Yoo, Hyeon-Joong;Kim, Hyoung-Suk;Jang, Young-Bum
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.10 no.9
    • /
    • pp.2280-2289
    • /
    • 2009
  • The death rate of pedestrians in car accidents in Korea is about 2.5 times higher than the OECD average. If a system that detects pedestrians and alerts the driver can reduce this rate, such a pedestrian detection system (PDS) is worth developing. Since the rate of accidents involving pedestrians is higher at nighttime than in daytime, the adoption of nighttime PDSs is being standardized by major auto companies. However, they usually use expensive night-vision devices or multiple sensors for their PDSs. In this paper we propose a method for a nighttime PDS using a monochrome visible-spectrum camera. Through tests against video data taken in several different environments, we verified its superiority over an existing algorithm in both performance and real-time operation.

Development and Evaluation of the V-Catch Vision System

  • Kim, Dong Keun;Cho, Yongjoo;Park, Kyoung Shin
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.3
    • /
    • pp.45-52
    • /
    • 2022
  • A tangible sports game is an exercise game that uses sensors or cameras to track the user's body movements and to feel a sense of reality. Recently, VR indoor sports room systems installed to utilize tangible sports game for physical activity in schools. However, these systems primarily use screen-touch user interaction. In this research, we developed a V-Catch Vision system that uses AI image recognition technology to enable tracking of user movements in three-dimensional space rather than two-dimensional wall touch interaction. We also conducted a usability evaluation experiment to investigate the exercise effects of this system. We tried to evaluate quantitative exercise effects by measuring blood oxygen saturation level, the real-time ECG heart rate variability, and user body movement and angle change of Kinect skeleton. The experiment result showed that there was a statistically significant increase in heart rate and an increase in the amount of body movement when using the V-Catch Vision system. In the subjective evaluation, most subjects found the exercise using this system fun and satisfactory.