• Title/Summary/Keyword: vision-based method


Vision-based Reduction of Gyro Drift for Intelligent Vehicles (지능형 운행체를 위한 비전 센서 기반 자이로 드리프트 감소)

  • Kyung, MinGi;Nguyen, Dang Khoi;Kang, Taesam;Min, Dugki;Lee, Jeong-Oog
    • Journal of Institute of Control, Robotics and Systems / v.21 no.7 / pp.627-633 / 2015
  • Accurate heading information is crucial for the navigation of intelligent vehicles. In outdoor environments, GPS is usually used for vehicle navigation. However, in GPS-denied environments such as dense building areas, tunnels, underground areas and indoor environments, non-GPS solutions are required. Yaw rates from a single gyro sensor could be one such solution, but the gyro's drift problem must then be resolved. HDR (Heuristic Drift Reduction) can reduce the average heading error during straight-line movement; however, it shows rather large errors in some moving environments, especially along curved paths. This paper presents VDR (Vision-based Drift Reduction), a system that uses a low-cost vision sensor to compensate for HDR errors.
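
The HDR idea referred to above can be sketched as a small binary correction loop; this is an illustrative reconstruction, not the paper's code, and the gain `ic` and the straight-line threshold are assumed values.

```python
# Illustrative sketch of Heuristic Drift Reduction (HDR): while the
# compensated yaw rate is near zero (likely straight-line driving),
# nudge the bias estimate toward the reading, slowly cancelling drift.
# The gain ic and the threshold are hypothetical values.
def hdr_step(measured_rate, correction, ic=0.01, threshold=0.5):
    if abs(measured_rate - correction) < threshold:  # likely straight
        if measured_rate - correction > 0:
            correction += ic
        else:
            correction -= ic
    return measured_rate - correction, correction

# A stationary gyro with a constant 0.2 deg/s bias: the correction
# converges to the bias, and the compensated rate settles near zero.
corr, rate = 0.0, 0.0
for _ in range(100):
    rate, corr = hdr_step(0.2, corr)
```

As the abstract notes, this heuristic assumes straight-line motion; along curves the correction fights the real turn rate, which is the error VDR compensates with the vision sensor.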

Vision-Based Piano Music Transcription System (비전 기반 피아노 자동 채보 시스템)

  • Park, Sang-Uk;Park, Si-Hyun;Park, Chun-Su
    • Journal of IKEEE / v.23 no.1 / pp.249-253 / 2019
  • Most commercialized music-transcription systems operate on audio information. However, these conventional systems suffer from environmental dependency, equipment dependency, and time latency. This paper studies a vision-based music-transcription system that utilizes video information rather than the audio information used by traditional transcription programs. Computer vision technology is widely used to analyze and apply information from equipment such as cameras. In this paper, we created a program that records piano playing with a smartphone camera and generates a MIDI file, i.e., electronic music notation.
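
As a sketch of the step from video analysis to MIDI output, per-frame key-press detections can be converted into timed note events; the frame rate, note numbers, and detection sequence below are hypothetical, and the serialization of the actual MIDI bytes is omitted.

```python
# Hypothetical sketch: convert per-frame key-press detections (sets of
# MIDI note numbers) into (note, onset_sec, duration_sec) events --
# the intermediate form before writing a MIDI file.
def frames_to_notes(key_states, fps=30):
    active = {}   # note -> onset frame
    notes = []
    for f, pressed in enumerate(key_states + [set()]):  # flush at end
        for n in [k for k in active if k not in pressed]:
            start = active.pop(n)                  # key released
            notes.append((n, start / fps, (f - start) / fps))
        for n in pressed:
            active.setdefault(n, f)                # key newly pressed
    return notes

# C4 (60) held for 3 frames, E4 (64) overlapping it for 2 frames
states = [set(), {60}, {60}, {60, 64}, {64}, set()]
notes = frames_to_notes(states)
```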

A Vision Based Guideline Interpretation Technique for AGV Navigation (AGV 운행을 위한 비전기반 유도선 해석 기술)

  • Byun, Sungmin;Kim, Minhwan
    • Journal of Korea Multimedia Society / v.15 no.11 / pp.1319-1329 / 2012
  • AGVs are increasingly utilized nowadays, and magnetic-guided AGVs are the most widely used because of their low cost and high speed. However, this type of AGV requires high infrastructure-building costs and offers poor flexibility when the navigation path layout changes. Thus it is hard to apply such AGVs to small-quantity batch production or to cooperative production systems with many AGVs. In this paper, we propose a vision-based guideline interpretation technique that uses cheap, easily installed and changed color tapes (or paint) as a guideline, so that a vision-based AGV is effectively applicable to these production systems. For easy setting and changing of the AGV navigation path, we suggest an automatic method for interpreting a complex guideline layout that includes multiple branches and joins of branches. We also suggest a trace-direction decision method for stable navigation of AGVs. Through several real-time navigation tests with an industrial AGV running the suggested technique, we confirmed that the technique is practically and stably applicable in real industrial fields.
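
One ingredient of multi-branch interpretation can be illustrated by scanning a row of the color-segmented guideline mask ahead of the AGV and counting connected tape segments; the mask values below are synthetic, not from the paper.

```python
# Hypothetical sketch: count connected runs of guideline pixels in a
# scan row of the segmented image; more than one run suggests a branch
# in the taped guideline ahead of the AGV.
def guideline_runs(row, thresh=1):
    runs, in_run = 0, False
    for v in row:
        if v >= thresh and not in_run:
            runs, in_run = runs + 1, True
        elif v < thresh:
            in_run = False
    return runs

# synthetic binary scan row: two tape segments -> a branch ahead
row = [0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0]
n = guideline_runs(row)
```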

Omni-directional Vision SLAM using a Motion Estimation Method based on Fisheye Image (어안 이미지 기반의 움직임 추정 기법을 이용한 전방향 영상 SLAM)

  • Choi, Yun Won;Choi, Jeong Won;Dai, Yanyan;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.20 no.8 / pp.868-874 / 2014
  • This paper proposes a novel mapping algorithm for omni-directional vision SLAM based on obstacle feature extraction using Lucas-Kanade Optical Flow (LKOF) motion detection and images obtained through fish-eye lenses mounted on robots. Omni-directional image sensors suffer from distortion because they use a fish-eye lens or mirror, but they allow real-time image processing for mobile robots because all information around the robot is measured at once. Previous omni-directional vision SLAM research used feature points from fully corrected fish-eye images, whereas the proposed algorithm corrects only the feature points of obstacles, yielding faster processing than previous systems. The core of the proposed algorithm may be summarized as follows. First, we capture instantaneous 360° panoramic images around the robot through fish-eye lenses mounted facing downward. Second, we remove the feature points of the floor surface using a histogram filter and label the extracted obstacle candidates. Third, we estimate the locations of obstacles from motion vectors using LKOF. Finally, the robot position is estimated with an Extended Kalman Filter based on the obstacle positions obtained by LKOF, and a map is created. We confirm the reliability of the mapping algorithm by comparing maps obtained with the proposed algorithm against real maps.
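
The LKOF motion-vector step can be sketched as the classic Lucas-Kanade least-squares solve over a window of gradients; the gradient values below are synthetic and the window is flattened to simple lists for illustration.

```python
# Sketch of the Lucas-Kanade flow solve: given spatial gradients Ix, Iy
# and temporal differences It over a window, solve the 2x2 normal
# equations for the motion vector (u, v). Window values are synthetic.
def lk_flow(Ix, Iy, It):
    a = sum(x * x for x in Ix)
    b = sum(x * y for x, y in zip(Ix, Iy))
    d = sum(y * y for y in Iy)
    e = -sum(x * t for x, t in zip(Ix, It))
    f = -sum(y * t for y, t in zip(Iy, It))
    det = a * d - b * b          # singular det means aperture problem
    return (d * e - b * f) / det, (a * f - b * e) / det

# gradients consistent with a true flow of (0.5, -0.25),
# since brightness constancy gives It = -(Ix*u + Iy*v)
Ix, Iy = [1, 0, 1], [0, 1, 1]
It = [-0.5, 0.25, -0.25]
u, v = lk_flow(Ix, Iy, It)
```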

Vision-based hand gesture recognition system for object manipulation in virtual space (가상 공간에서의 객체 조작을 위한 비전 기반의 손동작 인식 시스템)

  • Park, Ho-Sik;Jung, Ha-Young;Ra, Sang-Dong;Bae, Cheol-Soo
    • Proceedings of the IEEK Conference / 2005.11a / pp.553-556 / 2005
  • We present a vision-based hand gesture recognition system for object manipulation in virtual space. Most conventional hand gesture recognition systems use a simple method for hand detection, such as background subtraction under assumed static observation conditions, and such methods are not robust against camera motion, illumination changes, and so on. Therefore, we propose a statistical method that recognizes and detects hand regions in images using geometrical structures. Our hand tracking system also employs multiple cameras to reduce occlusion problems, and non-synchronous multiple observations enhance system scalability. Experimental results show the effectiveness of our method.
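
A minimal illustration of statistical hand-region detection (not the paper's exact model) is a per-pixel Mahalanobis-style test against a learned skin-color distribution; the mean, variances, and threshold below are all assumed values.

```python
# Hypothetical sketch: classify a pixel as hand/skin if its squared
# Mahalanobis distance to a learned skin-color mean (diagonal
# covariance assumed) is below a threshold. All numbers are invented.
def is_skin(px, mean=(150.0, 120.0), var=(100.0, 64.0), thresh=9.0):
    d2 = sum((p - m) ** 2 / v for p, m, v in zip(px, mean, var))
    return d2 < thresh

skin = is_skin((155, 118))       # close to the skin-color mean
not_skin = is_skin((200, 200))   # far from it
```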


Skeleton-Based Local-Path Planning for a Mobile Robot with a Vision System (비전센서를 사용하는 이동로봇의 골격지도를 이용한 지역경로계획 알고리즘)

  • Kwon, Ji-Wook;Yang, Dong-Hoon;Hong, Suk-Kyo
    • Proceedings of the KIEE Conference / 2006.07d / pp.1958-1959 / 2006
  • This paper proposes a local path-planning algorithm for a mobile robot with a vision sensor operating in a local area. The proposed method, based on projective geometry and a wavefront method, finds local paths that avoid collisions using a 3-D map of walls and obstacles generated with projective geometry. Simulation results show the feasibility of the proposed method.
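
The wavefront method mentioned above can be sketched as a breadth-first cost propagation from the goal over an occupancy grid, after which the robot follows descending costs from its start cell; the grid below is synthetic.

```python
from collections import deque

# Sketch of the wavefront method: propagate a cost-to-goal wave from
# the goal cell over a grid (0 = free, 1 = obstacle). Cells left as
# None are unreachable or blocked. The grid here is synthetic.
def wavefront(grid, goal):
    rows, cols = len(grid), len(grid[0])
    cost = [[None] * cols for _ in range(rows)]
    cost[goal[0]][goal[1]] = 0
    q = deque([goal])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and cost[nr][nc] is None):
                cost[nr][nc] = cost[r][c] + 1
                q.append((nr, nc))
    return cost

grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
cost = wavefront(grid, (0, 2))   # goal at the top-right corner
```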


Vision-based Kinematic Modeling of a Worm's Posture (시각기반 웜 자세의 기구학적 모형화)

  • Do, Yongtae;Tan, Kok Kiong
    • Journal of Institute of Control, Robotics and Systems / v.21 no.3 / pp.250-256 / 2015
  • We present a novel method to model the body posture of a worm for vision-based automatic monitoring and analysis. The worm considered in this study is Caenorhabditis elegans (C. elegans), which is widely used in biological science and engineering research. We model the posture by an open chain of a few curved or rigid line segments, in contrast to previously published approaches in which a large number of small rigid elements are connected. Each segment is represented by only two parameters: an arc angle and an arc length for a curved segment, or an orientation angle and a link length for a straight segment. Links in the proposed method can be readily related using the Denavit-Hartenberg convention, owing to similarities with the kinematics of an articulated manipulator. Our method was tested with real worm images, and accurate results were obtained.
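
The two-parameter segment chain can be sketched as a forward-kinematics walk along the body: each arc contributes a chord displacement and a heading change. The convention used here (heading change applied before a straight segment) is an assumption for illustration, not necessarily the paper's.

```python
from math import sin, cos, pi

# Sketch of the two-parameter chain: each segment is either a circular
# arc (arc angle phi, arc length s) or a straight line (orientation
# change dtheta, length L); we accumulate the endpoint and heading.
def chain_endpoint(segments):
    x = y = heading = 0.0
    for kind, angle, length in segments:
        if kind == 'line':
            heading += angle                      # dtheta, then move
            x += length * cos(heading)
            y += length * sin(heading)
        else:                                     # 'arc'
            r = length / angle                    # radius from s = r*phi
            chord = 2.0 * r * sin(angle / 2.0)    # chord of the arc
            x += chord * cos(heading + angle / 2.0)
            y += chord * sin(heading + angle / 2.0)
            heading += angle
    return x, y, heading

# a half-circle arc of radius 1 (phi = pi, s = pi) ends at (0, 2)
x, y, h = chain_endpoint([('arc', pi, pi)])
```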

A Lane Change Recognition System for Smart Cars (스마트카를 위한 차선변경 인식시스템)

  • Lee, Yong-Jin;Yang, Jeong-Ha;Kwak, Nojun
    • Journal of Institute of Control, Robotics and Systems / v.21 no.1 / pp.46-51 / 2015
  • In this paper, we propose a vision-based method to recognize lane changes of an autonomous vehicle. The proposed method is based on six driving-situation states defined by the positional relationship between the vehicle and its nearest detected lane. From the transitions between these states, a lane change is detected. The proposed method yields 98% recognition accuracy for lane changes, even in poor situations with partially invisible lanes.
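
The state-based idea can be sketched as follows, with illustrative states and thresholds rather than the paper's six exact definitions: classify each frame by the signed lateral offset to the nearest detected lane marking, and flag a lane change when the vehicle's side of the marking flips.

```python
# Hypothetical sketch: per-frame driving state from the signed lateral
# offset (m) to the nearest lane marking, plus a lane-change detector
# that fires when the vehicle crosses the marking. Thresholds invented.
def classify(offset, near=0.3):
    if offset > near:
        return 'FAR_LEFT'
    if offset > 0.05:
        return 'NEAR_LEFT'
    if offset >= -0.05:
        return 'ON_LINE'
    if offset >= -near:
        return 'NEAR_RIGHT'
    return 'FAR_RIGHT'

def lane_changes(offsets):
    changes, prev = 0, None
    for o in offsets:
        side = 'L' if o > 0 else 'R'
        if prev is not None and side != prev:
            changes += 1
        prev = side
    return changes

# drifting from well left of the marking to well right of it
n = lane_changes([0.5, 0.4, 0.2, 0.0, -0.1, -0.4])
```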

Implementation of Visual Data Compressor for Vision Sensor of Mobile Robot (이동로봇의 시각센서를 위한 동영상 압축기 구현)

  • Kim Hyung O;Cho Kyoung Su;Baek Moon Yeal;Kee Chang Doo
    • Journal of the Korean Society for Precision Engineering / v.22 no.9 s.174 / pp.99-106 / 2005
  • In recent years, vision sensors have been widely used in mobile robots for navigation or exploration. However, the analog transmission of visual data used in this area has disadvantages, including susceptibility to noise and difficulties in data storage. The large amount of data also makes this method hard to use on a mobile robot. In this paper, a digital data compression technology based on MPEG-4, substituting DWT (Discrete Wavelet Transform) for DCT (Discrete Cosine Transform), is proposed to overcome these disadvantages. Texas Instruments' TMS320C6711 DSP chip is used for the image encoder, and the performance of the proposed method is evaluated by PSNR (Peak Signal-to-Noise Ratio), QP (Quantization Parameter) and bitrate.
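
The PSNR figure used for evaluation can be computed as follows; the sample pixel values are synthetic.

```python
from math import log10

# PSNR (Peak Signal-to-Noise Ratio) between an original and a
# reconstructed image, flattened to pixel lists; peak = 255 for 8-bit.
def psnr(orig, recon, peak=255.0):
    mse = sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)
    if mse == 0:
        return float('inf')           # identical images
    return 10.0 * log10(peak ** 2 / mse)

# a uniform error of 10 gray levels gives MSE = 100
p = psnr([100, 100, 100, 100], [110, 110, 110, 110])
```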

A Vision System for Automatic Detection of Vehicle Passengers (차량 승객 자동탐지를 위한 비젼시스템)

  • Lee Young-Sik;Bae Cheol-Soo
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.3 / pp.622-626 / 2005
  • This paper presents an active vision system for ITS (Intelligent Transportation Systems). We describe a novel method to provide high-quality imaging signals to a system that performs passenger detection and counting on the roadway. The method calls for two co-registered near-infrared cameras with spectral sensitivity above (upper-band) and below (lower-band) a cutoff wavelength, so that signal quality is maintained. We propose a novel system based on the fusion of near-infrared imaging signals, and we demonstrate its adequacy with theoretical and experimental arguments.
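
The band-fusion idea can be illustrated as a pixelwise operation on the two co-registered NIR frames: human skin reflects strongly in the lower band and weakly in the upper band, so a thresholded difference highlights passenger pixels. The image values and threshold below are synthetic.

```python
# Hypothetical sketch: fuse co-registered lower- and upper-band NIR
# frames by pixelwise difference and threshold; large differences mark
# skin (passenger) pixels. Intensities and threshold are invented.
def fuse_bands(lower, upper, thresh=50):
    return [[1 if (l - u) > thresh else 0
             for l, u in zip(row_l, row_u)]
            for row_l, row_u in zip(lower, upper)]

lower = [[200, 30], [180, 20]]   # skin appears bright in the lower band
upper = [[60, 25], [70, 15]]     # skin appears dark in the upper band
mask = fuse_bands(lower, upper)
```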