• Title/Summary/Keyword: Vision Based Sensor


Resolution improvement of a CMOS vision chip for edge detection by separating photo-sensing and edge detection circuits (수광 회로와 윤곽 검출 회로의 분리를 통한 윤곽 검출용 시각칩의 해상도 향상)

  • Kong, Jae-Sung;Suh, Sung-Ho;Kim, Sang-Heon;Shin, Jang-Kyoo;Lee, Min-Ho
    • Journal of Sensor Science and Technology
    • /
    • v.15 no.2
    • /
    • pp.112-119
    • /
    • 2006
  • The resolution of an image sensor is a significant parameter to improve. It is hard to improve the resolution of a CMOS vision chip for edge detection based on a biological retina using a resistive network, because the vision chip contains additional circuits, such as the resistive network and some processing circuits, compared with general image sensors such as the CMOS image sensor (CIS). In this paper, we solved the problem of low resolution by separating the photo-sensing and signal processing circuits. This type of vision chip suffers from low operation speed because the signal processing circuits are shared by each row of photo-sensors. The low operation speed was addressed by using a reset decoder. A vision chip for edge detection with a 128 × 128 pixel array was designed and fabricated using 0.35 μm 2-poly 4-metal CMOS technology. The fabricated chip was integrated with an optical lens as a camera system and tested with real images. Using this chip, we obtained edge images of sufficient quality for real applications.
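
The subtract-a-smoothed-copy principle behind such retina-style edge detection can be sketched in software. The diffusion loop below is only a rough stand-in for the chip's resistive network; all parameter values are illustrative, not taken from the paper:

```python
import numpy as np

def edge_detect(image, iterations=10, coupling=0.25):
    """Approximate resistive-network smoothing, then subtract it
    from the input to obtain an edge (difference) signal."""
    smoothed = image.astype(float).copy()
    for _ in range(iterations):
        # 4-neighbour diffusion step, mimicking lateral resistive coupling
        padded = np.pad(smoothed, 1, mode="edge")
        neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1]
                      + padded[1:-1, :-2] + padded[1:-1, 2:])
        smoothed += coupling * (neighbours / 4.0 - smoothed)
    return image - smoothed  # large magnitude near intensity edges

# a step edge: left half dark, right half bright
img = np.zeros((8, 8))
img[:, 4:] = 100.0
edges = edge_detect(img)
```

The edge signal is largest near the brightness step and decays toward uniform regions, mirroring how a resistive-network retina responds mainly to spatial contrast.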

Asynchronous Sensor Fusion using Multi-rate Kalman Filter (다중주기 칼만 필터를 이용한 비동기 센서 융합)

  • Son, Young Seop;Kim, Wonhee;Lee, Seung-Hi;Chung, Chung Choo
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.63 no.11
    • /
    • pp.1551-1558
    • /
    • 2014
  • We propose a multi-rate sensor fusion of vision and radar using a Kalman filter to solve the problems of asynchronous, multi-rate sampling periods in object vehicle tracking. Model-based prediction of object vehicles is performed with a decentralized multi-rate Kalman filter for each sensor (vision and radar). To improve the performance of position prediction, different weights are applied to each sensor's predicted object position from the multi-rate Kalman filter. The proposed method can provide the estimated positions of object vehicles at every ECU sampling time. The Mahalanobis distance is used to establish correspondence between measured and predicted objects. The experimental results validate that the post-processed fusion data provide improved tracking performance. The proposed method achieved a twofold improvement in object tracking performance compared with a single-sensor method (camera or radar) in terms of root mean square error.
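
The Mahalanobis-distance association step mentioned in the abstract can be sketched as follows; the innovation covariance, gate threshold, and nearest-neighbour assignment are illustrative assumptions, not details from the paper:

```python
import numpy as np

def mahalanobis(z, z_pred, S):
    """Mahalanobis distance between a measurement z and a predicted
    position z_pred, given innovation covariance S."""
    d = z - z_pred
    return float(np.sqrt(d @ np.linalg.inv(S) @ d))

def associate(measurements, predictions, S, gate=3.0):
    """Assign each measurement to its nearest prediction, if within the gate."""
    pairs = []
    for i, z in enumerate(measurements):
        dists = [mahalanobis(z, p, S) for p in predictions]
        j = int(np.argmin(dists))
        if dists[j] < gate:
            pairs.append((i, j))
    return pairs

S = np.diag([1.0, 1.0])                     # assumed innovation covariance
preds = [np.array([10.0, 0.0]), np.array([30.0, 2.0])]
meas = [np.array([10.5, 0.2]), np.array([29.0, 1.5])]
matches = associate(meas, preds, S)
```

Unlike plain Euclidean distance, the Mahalanobis metric scales each residual by the filter's own uncertainty, so a noisy sensor axis does not dominate the association.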

Hierarchical Deep Belief Network for Activity Recognition Using Smartphone Sensor (스마트폰 센서를 이용하여 행동을 인식하기 위한 계층적인 심층 신뢰 신경망)

  • Lee, Hyunjin
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.8
    • /
    • pp.1421-1429
    • /
    • 2017
  • Human activity recognition has been studied using various sensors and algorithms, and recognition methods can be divided into sensor-based and vision-based approaches. In this paper, we propose an activity recognition system, among the sensor-based methods, that uses the acceleration and gyroscope sensors of a smartphone. We used the Deep Belief Network (DBN), one of the most popular deep learning methods, to improve the accuracy of human activity recognition. A DBN uses the entire input set as a common input. However, because the characteristic time window differs depending on the type of activity, the RBMs that compose the DBN are configured hierarchically by combining outputs from different time windows. When applied to real data, the proposed human activity recognition system showed stable precision.
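
The full hierarchical RBM stack is beyond a short sketch, but the underlying idea of extracting features over different time windows per activity type might look like this; the window lengths, features, and synthetic signal are made up for illustration:

```python
import numpy as np

def window_features(signal, window):
    """Mean and standard deviation over non-overlapping windows."""
    n = len(signal) // window
    chunks = signal[:n * window].reshape(n, window)
    return np.column_stack([chunks.mean(axis=1), chunks.std(axis=1)])

# synthetic accelerometer-magnitude stream (gravity plus noise)
rng = np.random.default_rng(0)
stream = rng.normal(9.8, 0.5, size=400)

short = window_features(stream, 50)    # short windows for fast activities
long_ = window_features(stream, 200)   # long windows for slow activities
# features from both time scales, concatenated per long window,
# would then feed the hierarchical model
combined = np.hstack([short[:len(long_) * 4].reshape(len(long_), -1), long_])
```

Each row of `combined` mixes fine-grained and coarse-grained statistics, which is the intuition behind feeding RBMs trained on different window lengths into one hierarchy.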

Implementation of Visual Data Compressor for Vision Sensor of Mobile Robot (이동로봇의 시각센서를 위한 동영상 압축기 구현)

  • Kim Hyung O;Cho Kyoung Su;Baek Moon Yeal;Kee Chang Doo
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.22 no.9 s.174
    • /
    • pp.99-106
    • /
    • 2005
  • In recent years, vision sensors have been widely used in mobile robots for navigation and exploration. The analog transmission of visual data used in this area, however, has disadvantages, including susceptibility to noise and difficulties in data storage. The large amount of data also makes this method difficult to use on a mobile robot. In this paper, a digital data compression technology based on MPEG-4, which substitutes for the analog technology, is proposed to overcome these disadvantages by using the DWT (Discrete Wavelet Transform) instead of the DCT (Discrete Cosine Transform). A Texas Instruments DSP chip, the TMS320C6711, is used for the image encoder, and the performance of the proposed method is evaluated by PSNR (Peak Signal-to-Noise Ratio), QP (Quantization Parameter), and bitrate.
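
A one-level Haar DWT with detail-band quantization and PSNR evaluation, as a minimal stand-in for the paper's MPEG-4/DWT encoder (the actual codec is far more involved; these signal values are illustrative):

```python
import numpy as np

def haar_dwt_1level(x):
    """One-level Haar wavelet transform of a 1-D signal of even length."""
    a = (x[0::2] + x[1::2]) / 2.0   # approximation (low-pass) band
    d = (x[0::2] - x[1::2]) / 2.0   # detail (high-pass) band
    return a, d

def haar_idwt_1level(a, d):
    """Inverse of the one-level Haar transform."""
    x = np.empty(2 * len(a))
    x[0::2] = a + d
    x[1::2] = a - d
    return x

def psnr(orig, recon, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB."""
    mse = np.mean((orig - recon) ** 2)
    return float(10 * np.log10(peak ** 2 / mse)) if mse > 0 else float("inf")

x = np.array([10., 12., 100., 104., 50., 52., 20., 24.])
a, d = haar_dwt_1level(x)
recon = haar_idwt_1level(a, np.round(d))   # coarse detail quantization
```

Compression comes from quantizing (and entropy-coding) the detail band, which carries little energy for smooth image regions; PSNR then measures the resulting loss.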

A Study on Detection of Object Position and Displacement for Obstacle Recognition of UCT (무인 컨테이너 운반차량의 장애물 인식을 위한 물체의 위치 및 변위 검출에 관한 연구)

  • 이진우;이영진;조현철;손주한;이권순
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 1999.10a
    • /
    • pp.321-332
    • /
    • 1999
  • It is important to detect object movement for the obstacle recognition and path searching of UCTs (unmanned container transporters) with a vision sensor. This paper presents a method to extract objects and trace the trajectory of a moving object using a CCD camera, and describes a method to recognize object shapes with a neural network. Pixel points can be transformed into object positions in real space using the proposed viewport. The proposed technique is applied to a single-vision system based on a floor map.
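
A linear viewport mapping from pixel coordinates to floor-map coordinates might be sketched as below; the rectangle sizes and the distortion-free, top-down-view assumption are illustrative, not the paper's actual calibration:

```python
def make_viewport(px_box, world_box):
    """Build a linear mapping from a pixel rectangle to a floor-map
    rectangle (assumes a top-down view with no lens distortion)."""
    (px0, py0, px1, py1) = px_box
    (wx0, wy0, wx1, wy1) = world_box
    sx = (wx1 - wx0) / (px1 - px0)   # metres per pixel, x axis
    sy = (wy1 - wy0) / (py1 - py0)   # metres per pixel, y axis
    def to_world(px, py):
        return (wx0 + (px - px0) * sx, wy0 + (py - py0) * sy)
    return to_world

# hypothetical 640x480 image covering a 3.2 m x 2.4 m patch of floor
to_world = make_viewport((0, 0, 640, 480), (0.0, 0.0, 3.2, 2.4))
```

A real oblique camera would need a full homography instead of independent axis scales, but the viewport idea of mapping pixel regions to floor-map positions is the same.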


Multiple Vehicle Recognition based on Radar and Vision Sensor Fusion for Lane Change Assistance (차선 변경 지원을 위한 레이더 및 비전센서 융합기반 다중 차량 인식)

  • Kim, Heong-Tae;Song, Bongsob;Lee, Hoon;Jang, Hyungsun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.2
    • /
    • pp.121-129
    • /
    • 2015
  • This paper presents a multiple vehicle recognition algorithm based on radar and vision sensor fusion for lane change assistance. To determine whether a lane change is possible, it is necessary to recognize not only the primary vehicle located in-lane, but also other adjacent vehicles in the left and/or right lanes. With the given sensor configuration, two challenging problems are considered. One is that a guardrail detected by the front radar might be recognized as a left or right vehicle due to its geometric characteristics. This problem is solved by a guardrail recognition algorithm based on motion and shape attributes. The other problem is that the recognition of rear vehicles in the left or right lanes might be wrong, especially on curved roads, due to the low accuracy of the lateral position measured by rear radars and a lack of knowledge of the road curvature in the backward direction. To solve this problem, it is proposed that the road curvature measured by the front vision sensor be used to derive the road curvature toward the rear. Finally, the proposed algorithm for multiple vehicle recognition is validated with field test data from real roads.
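
Projecting the front-camera curvature backward to gate rear-radar targets could be sketched with the common second-order road model y = c·x²/2; the lane width and gating margin here are assumptions, not the paper's values:

```python
def expected_lateral_offset(curvature, x):
    """Lateral offset of the lane centre at longitudinal distance x,
    using the common second-order road model y = c * x^2 / 2."""
    return curvature * x ** 2 / 2.0

def is_in_adjacent_lane(curvature, x, y, lane_width=3.5):
    """Check a rear-radar target (x negative behind the ego vehicle)
    against the curvature-compensated adjacent lane centre."""
    centre = expected_lateral_offset(curvature, x)
    return abs(y - centre - lane_width) < lane_width / 2.0
```

On a curve of 100 m radius (c = 0.01 1/m), a vehicle 30 m behind in the adjacent lane sits at a lateral offset of about 8 m; without projecting the curvature backward, such a target would be rejected as too far from the lane.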

Analysis of 3D Motion Recognition using Meta-analysis for Interaction (기존 3차원 인터랙션 동작인식 기술 현황 파악을 위한 메타분석)

  • Kim, Yong-Woo;Whang, Min-Cheol;Kim, Jong-Hwa;Woo, Jin-Cheol;Kim, Chi-Jung;Kim, Ji-Hye
    • Journal of the Ergonomics Society of Korea
    • /
    • v.29 no.6
    • /
    • pp.925-932
    • /
    • 2010
  • Most research in the field of three-dimensional interaction has reported different accuracies depending on sensing, mode, and method, and implementations of interaction have lacked consistency across application fields. Therefore, this study surveys research trends in three-dimensional interaction using meta-analysis. Searching relevant keywords in databases yielded 153 domestic and 188 international papers covering three-dimensional interaction. Analytical coding tables narrowed these to 18 domestic and 28 international papers for analysis. Frequency analysis was carried out on action method, element, number, and accuracy, and accuracy was then verified by the effect size of the meta-analysis. As a result, the effect size of sensor-based methods was higher than that of vision-based methods, but the effect size was small (0.02). The effect size of vision-based methods using hand motion was higher than that of sensor-based methods using hand motion. Therefore, sensor-based three-dimensional interaction and vision-based interaction using hand motions are more efficient to implement. This study provides a comprehensive analysis of three-dimensional motion recognition for interaction and suggests application directions for three-dimensional interaction.
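
Effect sizes in such meta-analyses are typically standardized mean differences; a minimal Cohen's-d computation with a pooled standard deviation is sketched below, with made-up accuracy numbers rather than the study's data:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference (Cohen's d) using the pooled SD."""
    pooled = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                       / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled

# hypothetical example: sensor-based vs vision-based mean accuracy
d = cohens_d(0.92, 0.05, 10, 0.90, 0.05, 10)
```

Values near 0.02, as the study reports, fall well below the conventional "small effect" threshold of 0.2, which is why the accuracy gap between the two approaches is judged negligible.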

Vision-Based Relative State Estimation Using the Unscented Kalman Filter

  • Lee, Dae-Ro;Pernicka, Henry
    • International Journal of Aeronautical and Space Sciences
    • /
    • v.12 no.1
    • /
    • pp.24-36
    • /
    • 2011
  • A new approach for spacecraft absolute attitude estimation based on the unscented Kalman filter (UKF) is extended to relative attitude estimation and navigation. This approach for nonlinear systems converges faster than the standard extended Kalman filter (EKF), even with inaccurate initial conditions, in attitude estimation and navigation problems. The filter formulation employs measurements obtained from a vision sensor to provide multiple line-of-sight vectors from one spacecraft to another. The line-of-sight measurements are coupled with gyro measurements and dynamic models in a UKF to determine relative attitude, position, and gyro biases. A vector of generalized Rodrigues parameters is used to represent the local error-quaternion between the two spacecraft. A multiplicative quaternion-error approach is derived from the local error-quaternion, which guarantees maintenance of the quaternion unit constraint in the filter. A bounded relative motion scenario is selected to verify this extended application of the UKF. Simulation results show that the UKF is more robust than the EKF under realistic initial attitude and navigation error conditions.
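
The mapping from a multiplicative error-quaternion to generalized Rodrigues parameters can be sketched as follows (scalar-last convention; the parameters a = 1, f = 4 are common choices in the UKF attitude literature, not necessarily the paper's):

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product, scalar-last convention [x, y, z, w]."""
    qv, qw = q[:3], q[3]
    rv, rw = r[:3], r[3]
    v = qw * rv + rw * qv + np.cross(qv, rv)
    w = qw * rw - qv @ rv
    return np.append(v, w)

def quat_conj(q):
    """Conjugate (inverse for a unit quaternion)."""
    return np.array([-q[0], -q[1], -q[2], q[3]])

def error_grp(q, q_ref, a=1.0, f=4.0):
    """Generalized Rodrigues parameters of the error quaternion
    dq = q * q_ref^-1 between an estimate and a reference."""
    dq = quat_mult(q, quat_conj(q_ref))
    return f * dq[:3] / (a + dq[3])
```

Working in this three-parameter error space lets the UKF propagate an unconstrained state, while the full quaternion recovered from it stays on the unit sphere.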

Vision Sensor and Deep Learning-based Around View Monitoring System for Ship Berthing (비전 센서 및 딥러닝 기반 선박 접안을 위한 어라운드뷰 모니터링 시스템)

  • Kim, Hanguen;Kim, Donghoon;Park, Byeolteo;Lee, Seung-Mok
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.15 no.2
    • /
    • pp.71-78
    • /
    • 2020
  • This paper proposes a vision sensor and deep learning-based around-view monitoring system for ship berthing. Berthing a ship at a port requires precise relative position and relative speed information between the mooring facility and the ship. Ships of Handysize or larger must be docked with the help of pilots and tugboats. In the case of ships handling dangerous cargo, tugboats push the ship and dock it in the port using the distance and velocity information received from a berthing aid system (BAS). However, the existing BAS is very expensive, and there is a limit on the size of the vessel that can be measured. It is also difficult to measure distance and speed when there are obstacles near the port. This paper proposes a relative distance and speed estimation system that can be used as a ship berthing assist system. The proposed system is verified by comparing its performance with an existing laser-based distance and speed measurement system through field tests at an actual port.
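
As a rough illustration of deriving approach speed from successive range measurements (the sampling interval, averaging, and sign convention here are assumptions, not the paper's method):

```python
def relative_speed(distances, dt):
    """Approach speed from range measurements taken at a fixed
    interval dt; positive means the ship is closing on the berth."""
    if len(distances) < 2:
        return 0.0
    diffs = [(distances[i] - distances[i + 1]) / dt
             for i in range(len(distances) - 1)]
    return sum(diffs) / len(diffs)   # averaged to suppress measurement noise

# example: ranges in metres sampled once per second
speed = relative_speed([10.0, 9.5, 9.0, 8.5], 1.0)
```

In practice a Kalman-style smoother would replace the plain average, but the principle of differencing consecutive ranges is the same.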

Lane Detection System Based on Vision Sensors Using a Robust Filter for Inner Edge Detection (차선 인접 에지 검출에 강인한 필터를 이용한 비전 센서 기반 차선 검출 시스템)

  • Shin, Juseok;Jung, Jehan;Kim, Minkyu
    • Journal of Sensor Science and Technology
    • /
    • v.28 no.3
    • /
    • pp.164-170
    • /
    • 2019
  • In this paper, a lane detection and tracking algorithm based on vision sensors, employing a filter robust to inner edge detection, is proposed for developing a lane departure warning system (LDWS). The lateral offset value is precisely calculated by applying the proposed inner edge detection filter in the region of interest. The proposed algorithm was compared with an existing algorithm in terms of the lateral offset-based warning alarm occurrence time, and an average error of approximately 15 ms was observed. Tests were also conducted to verify whether a warning alarm is generated when a driver departs from a lane, and an average accuracy of approximately 94% was observed. Additionally, the proposed LDWS was implemented as an embedded system, mounted on a test vehicle, and driven for approximately 100 km to obtain experimental results. The results indicate that the average lane detection rates during daytime and nighttime are approximately 97% and 96%, respectively. Furthermore, the processing speed of the embedded system is approximately 12 fps.
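
A time-to-line-crossing style warning check built on the lateral offset could look like the sketch below; the thresholds and vehicle dimensions are illustrative, not the paper's parameters:

```python
def ldw_alarm(lateral_offset, lateral_speed, lane_width=3.5,
              vehicle_width=1.8, warn_time=0.5):
    """Warn when the vehicle edge is predicted to cross the lane
    boundary within warn_time seconds (time-to-line-crossing check).
    lateral_offset: metres from lane centre; lateral_speed: m/s,
    positive toward the same side as the offset."""
    margin = lane_width / 2.0 - vehicle_width / 2.0 - abs(lateral_offset)
    if margin <= 0:
        return True                      # already on or over the boundary
    outward = lateral_speed * (1 if lateral_offset >= 0 else -1)
    if outward <= 0:
        return False                     # drifting back toward the centre
    return margin / outward < warn_time  # time-to-line-crossing test
```

The warn-time threshold directly trades false alarms against warning lead time, which is why the abstract's 15 ms alarm-timing error is a meaningful figure of merit.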