• Title/Summary/Keyword: vision tracking system

437 search results (processing time: 0.026 seconds)

Real-time pupil motion recognition and efficient character selection system using FPGA and OpenCV (FPGA와 OpenCV를 이용한 실시간 눈동자 모션인식과 효율적인 문자 선택 시스템)

  • Lee, Hee Bin;Heo, Seung Won;Lee, Seung Jun;Yu, Yun Seop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.10a
    • /
    • pp.393-394
    • /
    • 2018
  • This paper introduces a new system that improves on the previously reported "Implementation to human-computer interface system with motion tracking using OpenCV and FPGA" and proposes a character selection system for physically impaired patients. Using OpenCV, the eye region is detected and the pupil position is determined; the results are then sent to the FPGA, where the character is finally selected. To output characters according to the user's intention, the method minimizes the pupil movement required of the patient. With OpenCV, various computer vision algorithms can be applied easily, and with a programmable FPGA, the pupil motion recognition and character selection system is implemented at low cost.

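The pipeline described in the abstract (threshold the eye region, locate the pupil, map its position to a character) can be sketched in a few lines. The sketch below is a minimal pure-NumPy stand-in: the function names, threshold value, and 2x2 character grid are illustrative assumptions, not the paper's OpenCV/FPGA implementation.

```python
import numpy as np

def find_pupil_center(gray, thresh=60):
    """Locate the pupil as the centroid of pixels darker than `thresh`.

    gray: 2-D uint8 array (eye region); returns (row, col) or None.
    Equivalent OpenCV steps would be cv2.threshold + cv2.moments.
    """
    mask = gray < thresh                     # the pupil is the darkest region
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def pick_character(center, shape, grid=(2, 2)):
    """Map the pupil position to a cell of a character grid.

    Mimics the gaze-to-character idea: the eye region is divided into
    grid cells and the cell containing the pupil selects a character.
    """
    r, c = center
    h, w = shape
    gr = min(int(r / h * grid[0]), grid[0] - 1)
    gc = min(int(c / w * grid[1]), grid[1] - 1)
    return gr * grid[1] + gc

# Synthetic eye image: light sclera with a dark pupil in the lower right.
eye = np.full((80, 120), 200, dtype=np.uint8)
rr, cc = np.ogrid[:80, :120]
eye[(rr - 55) ** 2 + (cc - 90) ** 2 < 10 ** 2] = 20   # pupil disk

center = find_pupil_center(eye)
cell = pick_character(center, eye.shape)
```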

Distance Measurement Using a Single Camera with a Rotating Mirror

  • Kim Hyongsuk;Lin Chun-Shin;Song Jaehong;Chae Heesung
    • International Journal of Control, Automation, and Systems
    • /
    • v.3 no.4
    • /
    • pp.542-551
    • /
    • 2005
  • A new distance measurement method using a single camera and a rotating mirror is presented. A camera in front of a rotating mirror acquires a sequence of reflected images, from which distance information is extracted. The measurement is based on the idea that, in this type of setup, the pixel corresponding to an object point at a longer distance moves at a higher speed across the image sequence. Distance measurement based on such pixel movement is investigated. Like many other image-based techniques, the presented technique requires matching corresponding points in two images. To alleviate this difficulty, two techniques are described: image tracking through the sequence of images and the utilization of multiple sets of image frames. One attractive merit of the approach is that precision can be improved: the imprecision caused by physical limits can be reduced by making several measurements and averaging them, and the rotating-mirror setup is especially suitable for such repeated measurements. The mathematics necessary for implementing the technique is derived and presented, and the error sensitivities of the related parameters are analyzed. Experimental results using a real camera-mirror setup are reported.
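The core geometric idea (the virtual camera obtained by reflecting the real camera across the mirror translates as the mirror turns, so pixel displacement encodes distance) can be illustrated with a 2-D toy model. Everything below — the mirror offset, focal length, angles, and the bisection solver — is an assumption for illustration, not the paper's derivation.

```python
import numpy as np

MIRROR = np.array([0.0, 0.5])   # a point on the mirror line, 0.5 m from the camera

def virtual_camera(theta):
    """Reflect the real camera (at the origin, looking along +y) across the
    mirror line through MIRROR with unit normal n = (sin theta, -cos theta)."""
    n = np.array([np.sin(theta), -np.cos(theta)])
    centre = 2.0 * np.dot(MIRROR, n) * n                 # reflection of (0, 0)
    axis = np.array([0.0, 1.0])
    axis = axis - 2.0 * np.dot(axis, n) * n              # reflected view axis
    return centre, axis

def pixel(world, theta, f=500.0):
    """1-D pixel coordinate of a 2-D world point seen via the mirror."""
    centre, axis = virtual_camera(theta)
    ray = world - centre
    ang = np.arctan2(axis[0] * ray[1] - axis[1] * ray[0], np.dot(axis, ray))
    return f * np.tan(ang)

def recover_distance(du_obs, th1, th2, lo=0.6, hi=100.0):
    """Invert an observed pixel displacement `du_obs` (between mirror angles
    th1, th2) to a distance along th1's virtual optical axis, by bisection.
    Farther points show larger displacement, matching the paper's observation."""
    c1, a1 = virtual_camera(th1)
    def residual(d):
        point = c1 + d * a1
        return (pixel(point, th2) - pixel(point, th1)) - du_obs
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Forward-simulate a point 5 m away, then recover its distance.
th1, th2 = np.radians(45.0), np.radians(46.0)
c1, a1 = virtual_camera(th1)
target = c1 + 5.0 * a1
du = pixel(target, th2) - pixel(target, th1)
d_est = recover_distance(du, th1, th2)
```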

An Improved Cast Shadow Removal in Object Detection (객체검출에서의 개선된 투영 그림자 제거)

  • Nguyen, Thanh Binh;Chung, Sun-Tae;Kim, Yu-Sung;Kim, Jae-Min
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2009.05a
    • /
    • pp.889-894
    • /
    • 2009
  • With the rapid development of computer vision, visual surveillance has evolved greatly, with increasingly sophisticated processing. However, many problems remain to be resolved for robust and reliable visual surveillance, and the cast shadow arising in the motion detection process is one of them. Shadow pixels are often misclassified as object pixels, causing errors in the localization, segmentation, tracking, and classification of objects. This paper proposes a novel cast shadow removal method. As opposed to previous conventional methods, which consider pixel properties such as intensity, color distortion, and the HSV color system, the proposed method utilizes observations about edge patterns in the shadow region of the current frame and the corresponding region of the background scene: a Laplacian edge detector is applied to the blob regions in both the current frame and the background scene. The product of the two outcomes then determines whether the blob pixels in the foreground mask come from object regions or shadow regions. The proposed method is simple but proves very effective in practice with a Gaussian Mixture Model background, which is verified through experiments.

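The edge-product test can be sketched as follows, assuming a 3x3 Laplacian kernel and a normalized-correlation decision rule; the threshold and the synthetic patches are illustrative, not the paper's parameters.

```python
import numpy as np

LAPLACIAN = np.array([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])

def laplacian(img):
    """3x3 Laplacian response on the interior of `img` (valid convolution)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for dr in range(3):
        for dc in range(3):
            out += LAPLACIAN[dr, dc] * img[dr:dr + h - 2, dc:dc + w - 2]
    return out

def is_shadow(fg_patch, bg_patch, corr_thresh=0.7):
    """Decide shadow vs object for a foreground blob patch: a cast shadow
    darkens the background but keeps its edge pattern, so the Laplacian
    responses of patch and background stay strongly correlated."""
    e1 = laplacian(fg_patch.astype(float))
    e2 = laplacian(bg_patch.astype(float))
    corr = (e1 * e2).sum() / (np.sqrt((e1 ** 2).sum() * (e2 ** 2).sum()) + 1e-9)
    return bool(corr > corr_thresh)

# Background with vertical stripes; a shadow is a darkened copy of it,
# while an occluding object brings its own, different texture.
bg = np.where(np.arange(24)[None, :] % 6 < 3, 60.0, 200.0) * np.ones((24, 1))
shadow = 0.5 * bg                       # same edge pattern, half the intensity
obj = np.where(np.arange(24)[:, None] % 6 < 3, 60.0, 200.0) * np.ones((1, 24))
```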

A survey of traffic monitoring systems based on image analysis (영상 분석에 기반한 교통 모니터링 시스템에 관한 조사)

  • Lee Dae-Ho;Park Young-Tae
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.43 no.9 s.351
    • /
    • pp.69-79
    • /
    • 2006
  • A number of studies on vision-based traffic monitoring systems have been carried out. Most traffic monitoring schemes belong to one of two categories: analysis of the entire traffic scene and examination of local regions. However, the proposed methods suffer severe performance deterioration when applied under different operating conditions because of a loss of robustness. This paper surveys the various methods proposed and analyzes their advantages and disadvantages. We also propose and investigate appropriate approaches to solving the problems in specific applications.

The Development for Vision-Based Realtime Speed Measuring Algorithm (영상처리를 이용한 여행시간 및 속도 계측 알고리즘의 개발)

  • 오영태;조형기;정의환
    • Journal of Korean Society of Transportation
    • /
    • v.14 no.4
    • /
    • pp.107-129
    • /
    • 1996
  • Recently, surveillance systems designed to collect various traffic information have become a new area of development. Among these, the image detector is a system that can measure travel time and speed in real time, and it is emerging as the most efficient tool available for related future areas. However, in measuring wide-area information in real time, image detectors still have accuracy problems. The aim of this thesis is to develop algorithms that can collect wide-area information, such as travel time and travel speed, on urban networks and freeways in real time. Such wide-area information is important for accomplishing strategic functions in traffic control. The algorithm developed in this study is based on an image tracking model that tracks a moving vehicle through continuously collected image data, and it is constructed to perform real-time measurement. To evaluate the performance of the developed algorithms, a total of 600 individual vehicles were used as data, and the evaluation was carried out under both day and night conditions on the access roads in front of Ajou University. In the statistical analysis, the error rate was 5.69%, which proves the algorithm applicable in the field under both day and night conditions.

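Once a vehicle has been tracked across frames, the travel-time and speed computation reduces to mapping pixel displacement to ground distance. A minimal sketch, assuming a calibrated ground-plane scale factor and frame rate (illustrative values, not those of the study):

```python
import math

def track_speed_kmh(track, m_per_px, fps):
    """Average speed (km/h) of a vehicle from its per-frame pixel positions.

    track: [(x, y), ...] centroids in pixels for consecutive frames;
    m_per_px: ground-plane scale from camera calibration; fps: frame rate.
    """
    dist_px = sum(math.hypot(x2 - x1, y2 - y1)
                  for (x1, y1), (x2, y2) in zip(track, track[1:]))
    elapsed_s = (len(track) - 1) / fps
    return dist_px * m_per_px / elapsed_s * 3.6

def travel_time_s(entry_frame, exit_frame, fps):
    """Travel time across a measurement zone from frame indices."""
    return (exit_frame - entry_frame) / fps

# A vehicle moving 4 px per frame at 30 fps, with a 5 cm/px ground scale:
# 0.2 m per frame -> 6 m/s -> 21.6 km/h.
track = [(100.0, 20.0 + 4.0 * i) for i in range(10)]
v = track_speed_kmh(track, m_per_px=0.05, fps=30.0)
t = travel_time_s(0, 9, 30.0)
```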

The Comparative Analysis of Visual Perceptual Function and Impulse on Players Chagi in Taekwondo Events (태권도 종목별 선수들의 차기에 대한 시지각기능 및 충격량 비교 분석)

  • Lee, Young-Rim;Ha, Chul-Soo
    • Korean Journal of Applied Biomechanics
    • /
    • v.20 no.2
    • /
    • pp.205-212
    • /
    • 2010
  • The purpose of this study was to compare visual perceptual efficiency and impulse among three types of Taekwondo players in order to suggest an efficient training method. A total of 12 Korean national-team Taekwondo players weighing between 68 and 74 kg participated: 4 poomsae players, 4 kyokpa players, and 4 kyorugi players. The results from the motion analysis system, eye tracker, and electronic hogu are as follows. For visual perceptual function, total body reaction time was slowest in the kyokpa group, visible reaction and vision fixation time were longest in the poomsae group, and performance movement was fastest in the kyorugi group. For the two kicking motions, dollyo chagi and dolgae chagi, longer visual fixation helped the accuracy of the kick. In conclusion, as there were differences between the groups, this information could help train the visual perception of players according to the event in which they participate.

Global Map Building and Navigation of Mobile Robot Based on Ultrasonic Sensor Data Fusion

  • Kang, Shin-Chul;Jin, Tae-Seok
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.7 no.3
    • /
    • pp.198-204
    • /
    • 2007
  • In mobile robotics, ultrasonic sensors have become standard devices for collision avoidance, and their applicability to map building and navigation has been exploited in recent years. This work is a preliminary step toward developing a multi-purpose autonomous carrier mobile robot to transport trolleys or heavy goods and to serve as a robotic nursing assistant in hospital wards. The aim of this paper is to present the use of multi-sensor data fusion, such as ultrasonic and IR sensors, for mobile robot navigation, and it presents an experimental mobile robot designed to operate autonomously in both indoor and outdoor environments. Global map building based on multi-sensor data fusion is applied to recognize an obstacle-free path from a starting position to a known goal region while simultaneously building a map of straight-line-segment geometric primitives through application of the Hough transform to the actual, noisy sonar data. We explain the robot system architecture designed and implemented in this study and give a short review of existing Hough transform techniques. Experimental results with a real Pioneer DX2 mobile robot demonstrate the effectiveness of the discussed methods.
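The line-segment extraction step can be illustrated with a minimal Hough transform over 2-D range points. The accumulator resolution and the wall example below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def hough_strongest_line(points, n_theta=180, rho_res=0.05, rho_max=6.0):
    """Vote 2-D range points into a (theta, rho) accumulator and return the
    strongest line x*cos(theta) + y*sin(theta) = rho, plus its vote count."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    n_rho = int(round(2 * rho_max / rho_res)) + 1
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)   # one rho per theta
        idx = np.round((rho + rho_max) / rho_res).astype(int)
        valid = (idx >= 0) & (idx < n_rho)
        acc[np.nonzero(valid)[0], idx[valid]] += 1
    ti, ri = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[ti], ri * rho_res - rho_max, acc.max()

# Sonar-like returns along a wall at x = 2 m (expected: theta = 0, rho = 2).
wall = [(2.0, 0.1 * k) for k in range(40)]
theta, rho, votes = hough_strongest_line(wall)
```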

Sub-Frame Analysis-based Object Detection for Real-Time Video Surveillance

  • Jang, Bum-Suk;Lee, Sang-Hyun
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.11 no.4
    • /
    • pp.76-85
    • /
    • 2019
  • We introduce a vision-based object detection method for real-time video surveillance systems in low-end edge computing environments. Recently, the accuracy of object detection has improved thanks to deep learning approaches such as the Region Convolutional Neural Network (R-CNN), which uses two-stage inference. On the other hand, one-stage detection algorithms such as single-shot detection (SSD) and You Only Look Once (YOLO) have been developed at the expense of some accuracy and can be used in real-time systems. However, high-performance hardware such as general-purpose computing on graphics processing units (GPGPU) is still required to achieve excellent object detection performance and speed. To address this hardware requirement, which is burdensome in low-end edge computing environments, we propose a sub-frame analysis method for object detection. Specifically, we divide the whole image frame into smaller ones and run inference on them with a Convolutional Neural Network (CNN)-based detection network, which is much faster than a conventional network designed for full-frame images. With the proposed method, we reduce the computational requirement significantly without losing throughput or object detection accuracy.
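The sub-frame idea — tile the frame, run the detector per tile, translate boxes back to full-frame coordinates — can be sketched as below. The stand-in blob detector replaces the CNN only to keep the example self-contained; tile counts and thresholds are illustrative assumptions.

```python
import numpy as np

def subframes(shape, rows=2, cols=2):
    """Yield (r0, c0, h, w) tiles covering a frame of the given shape."""
    H, W = shape
    th, tw = H // rows, W // cols
    for i in range(rows):
        for j in range(cols):
            yield i * th, j * tw, th, tw

def detect_on_subframes(frame, detector, rows=2, cols=2):
    """Run `detector` (patch -> [(x, y, w, h)] in patch coordinates) on each
    tile and translate the boxes back into full-frame coordinates."""
    boxes = []
    for r0, c0, h, w in subframes(frame.shape, rows, cols):
        for x, y, bw, bh in detector(frame[r0:r0 + h, c0:c0 + w]):
            boxes.append((x + c0, y + r0, bw, bh))
    return boxes

def bright_blob_detector(patch, thresh=128):
    """Stand-in for the CNN: bounding box of above-threshold pixels, if any."""
    ys, xs = np.nonzero(patch > thresh)
    if xs.size == 0:
        return []
    return [(int(xs.min()), int(ys.min()),
             int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))]

frame = np.zeros((100, 100), dtype=np.uint8)
frame[70:76, 30:36] = 255                 # one bright object
boxes = detect_on_subframes(frame, bright_blob_detector)
```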

Development of Real-Time Vision Aided Navigation Using EO/IR Image Information of Tactical Unmanned Aerial System in GPS Denied Environment (GPS 취약 환경에서 전술급 무인항공기의 주/야간 영상정보를 기반으로 한 실시간 비행체 위치 보정 시스템 개발)

  • Choi, SeungKie;Cho, ShinJe;Kang, SeungMo;Lee, KilTae;Lee, WonKeun;Jeong, GilSun
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.48 no.6
    • /
    • pp.401-410
    • /
    • 2020
  • In this study, a real-time tactical UAS position compensation system based on image information is described; it was developed to compensate for the weakness of position navigation during GPS signal interference and jamming/spoofing attacks. The tactical UAS (KUS-FT) is capable of automatic flight by switching from GPS/INS integrated navigation to DR/AHRS when the GPS signal is lost. In that case, however, position errors accumulate over time because of dead reckoning (DR) using airspeed and azimuth, which causes problems in UAS positioning and data-link antenna tracking. To minimize this accumulation of position error, we developed a system that, based on target data of a specific region acquired through the image sensor, calculates the UAS position using the UAS attitude, the EO/IR (Electro-Optic/Infra-Red) azimuth and elevation, and numerical map data, and corrects the calculated position in real time. In addition, the function and performance of the image-based real-time UAS position compensation system have been verified by ground tests using a GPS simulator and by flight tests in DR mode.
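The position-fix geometry can be sketched under a flat-earth assumption: sighting a landmark of known map coordinates at a measured azimuth and depression angle pins down the UAS position. This toy omits the full attitude coupling and numerical map (DEM) lookup the real system uses; all names and values are illustrative.

```python
import math

def uav_position_fix(landmark_en, landmark_alt, uav_alt, az_deg, depr_deg):
    """East/north UAV position from a landmark with known map coordinates.

    az_deg: azimuth from the UAV to the landmark (deg, clockwise from north);
    depr_deg: depression of the line of sight below horizontal (deg).
    Flat-earth sketch: the horizontal range follows from the altitude
    difference and the depression angle.
    """
    ground_range = (uav_alt - landmark_alt) / math.tan(math.radians(depr_deg))
    de = ground_range * math.sin(math.radians(az_deg))   # east offset
    dn = ground_range * math.cos(math.radians(az_deg))   # north offset
    return landmark_en[0] - de, landmark_en[1] - dn

# Landmark 1000 m below the UAV, seen 45 deg down, due east (az = 90 deg):
# the UAV must sit 1000 m west of the landmark.
east, north = uav_position_fix((5000.0, 3000.0), 0.0, 1000.0, 90.0, 45.0)
```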

Visual Sensor Design and Environment Modeling for Autonomous Mobile Welding Robots (자율 주행 용접 로봇을 위한 시각 센서 개발과 환경 모델링)

  • Kim, Min-Yeong;Jo, Hyeong-Seok;Kim, Jae-Hun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.8 no.9
    • /
    • pp.776-787
    • /
    • 2002
  • Automation of the welding process in shipyards is ultimately necessary, since the welding site is spatially enclosed by floors and girders and welding operators are therefore exposed to hostile working conditions. To solve this problem, a mobile welding robot that can navigate autonomously within the enclosure has been developed. To achieve the welding task in the closed space, the robotic welding system needs a sensor system for working-environment recognition and weld-seam tracking, as well as a specially designed environment recognition strategy. In this paper, a three-dimensional laser vision system based on optical triangulation technology is developed to provide the robot with a 3D map of the work environment. For this sensor system, a spatial filter based on neural network technology is designed to extract the center of the laser stripe and is evaluated in various situations. An environment modeling algorithm is proposed and tested, composed of a laser scanning module for 3D voxel modeling and a plane reconstruction module for mobile robot localization. Finally, an environment recognition strategy is developed so that the mobile welding robot can recognize its work environment efficiently. The design of the sensor system, the algorithm for sensing the partially structured environment with plane segments, and the recognition strategy and tactics are described and discussed in detail with a series of experiments.
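Two of the described steps — laser-stripe center extraction and optical triangulation — can be sketched as follows. The intensity-weighted centroid stands in for the paper's neural-network spatial filter, and the camera/laser parameters are illustrative assumptions.

```python
import math

import numpy as np

def stripe_centers(img):
    """Sub-pixel laser-stripe center per column: intensity-weighted mean row.
    (A simple stand-in for the paper's neural-network spatial filter.)"""
    w = img.astype(float)
    colsum = w.sum(axis=0)
    rows = np.arange(img.shape[0], dtype=float)[:, None]
    centers = (w * rows).sum(axis=0) / np.where(colsum == 0.0, 1.0, colsum)
    centers[colsum == 0.0] = np.nan          # columns with no stripe
    return centers

def triangulate_depth(x_px, f=400.0, baseline=0.1, laser_deg=90.0):
    """Depth by optical triangulation: pinhole camera at the origin looking
    along +z, laser emitter offset by `baseline` along x, beam at `laser_deg`
    from the baseline. Intersecting the camera ray x/z = x_px/f with the
    laser ray from (baseline, 0) gives z = b*sin(a) / (cos(a) + u*sin(a))."""
    a = math.radians(laser_deg)
    u = x_px / f
    return baseline * math.sin(a) / (math.cos(a) + u * math.sin(a))

# Stripe three pixels tall, brightest in the middle, centered on row 6.
img = np.zeros((20, 8))
img[5, :], img[6, :], img[7, :] = 100.0, 200.0, 100.0
centers = stripe_centers(img)
z = triangulate_depth(80.0)     # laser parallel to optical axis: z = f*b/x_px
```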