• Title/Summary/Keyword: Vision-based tracking


Object detection within the region of interest based on gaze estimation (응시점 추정 기반 관심 영역 내 객체 탐지)

  • Seok-Ho Han;Hoon-Seok Jang
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.16 no.3
    • /
    • pp.117-122
    • /
    • 2023
  • Gaze estimation, which automatically recognizes where a user is currently looking, combined with object detection based on the estimated gaze point, can be a more accurate and efficient way to understand human visual behavior. In this paper, we propose a method to detect objects within a region of interest around the gaze point. Specifically, after estimating the 3D gaze point, a region of interest centered on it is created so that object detection runs only within that region. In our experiments, we compared general object detection with the proposed region-of-interest detection and found per-frame processing times of 1.4 ms and 1.1 ms, respectively, indicating that the proposed method is faster.
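
A minimal sketch of the ROI-gated detection idea described in this abstract, assuming the 3D gaze point has already been projected into image coordinates; `detect_fn` is a placeholder for any detector, not the authors' model:

```python
import numpy as np

def detect_in_gaze_roi(frame, gaze_xy, detect_fn, roi_size=200):
    """Run a detector only inside a square ROI centered on the gaze point.

    frame     : H x W x 3 image array
    gaze_xy   : (x, y) gaze point projected into image coordinates
    detect_fn : any detector taking an image crop and returning
                (x, y, w, h) boxes in crop coordinates (placeholder)
    """
    h, w = frame.shape[:2]
    half = roi_size // 2
    x, y = int(gaze_xy[0]), int(gaze_xy[1])
    # Clamp the ROI to the image bounds.
    x0, y0 = max(0, x - half), max(0, y - half)
    x1, y1 = min(w, x + half), min(h, y + half)
    crop = frame[y0:y1, x0:x1]
    boxes = detect_fn(crop)
    # Shift boxes back into full-frame coordinates.
    return [(bx + x0, by + y0, bw, bh) for (bx, by, bw, bh) in boxes]
```

Restricting inference to the crop is what yields the reported speedup: the detector processes a small window instead of the full frame.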

Real-Time Hand Pose Tracking and Finger Action Recognition Based on 3D Hand Modeling (3차원 손 모델링 기반의 실시간 손 포즈 추적 및 손가락 동작 인식)

  • Suk, Heung-Il;Lee, Ji-Hong;Lee, Seong-Whan
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.12
    • /
    • pp.780-788
    • /
    • 2008
  • Modeling hand poses and tracking their movement is one of the challenging problems in computer vision. There are two typical approaches to reconstructing hand poses in 3D, depending on the number of cameras used: capturing images from multiple cameras or a stereo camera, or capturing images from a single camera. The former is relatively limited because of the environmental constraints of setting up multiple cameras. In this paper we propose a method for reconstructing 3D hand poses from a 2D input image sequence captured by a single camera, using Belief Propagation in a graphical model, and for recognizing a finger-clicking motion with a hidden Markov model. We define a graphical model whose hidden nodes represent the joints of a hand and whose observable nodes hold the features extracted from the 2D input image sequence. To track hand poses in 3D, we use a Belief Propagation algorithm, which provides a robust and unified framework for inference in a graphical model. From the estimated 3D hand pose we extract each finger's motion information, which is then fed into a hidden Markov model. To recognize natural finger actions, we consider the movements of all the fingers when recognizing a single finger's action. We applied the proposed method to a virtual keypad system, which achieved a high recognition rate of 94.66% on 300 test samples.
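
The recognition stage feeds finger-motion features into a hidden Markov model. As a hedged illustration (not the authors' trained model), a self-contained forward-algorithm scorer over a quantized observation sequence might look like:

```python
import numpy as np

def hmm_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | model) for a discrete HMM.

    obs : sequence of observation symbol indices (e.g., quantized
          finger-motion features)
    pi  : (N,)   initial state distribution
    A   : (N, N) state transition matrix
    B   : (N, M) emission probabilities per state
    """
    alpha = pi * B[:, obs[0]]
    log_like = np.log(alpha.sum())
    alpha /= alpha.sum()                 # rescale to avoid underflow
    for t in range(1, len(obs)):
        alpha = (alpha @ A) * B[:, obs[t]]
        c = alpha.sum()
        log_like += np.log(c)
        alpha /= c
    return log_like
```

A clicking action would be recognized by scoring the observed sequence against one HMM per action class and picking the argmax.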

Development of Real-Time Vision Aided Navigation Using EO/IR Image Information of Tactical Unmanned Aerial System in GPS Denied Environment (GPS 취약 환경에서 전술급 무인항공기의 주/야간 영상정보를 기반으로 한 실시간 비행체 위치 보정 시스템 개발)

  • Choi, SeungKie;Cho, ShinJe;Kang, SeungMo;Lee, KilTae;Lee, WonKeun;Jeong, GilSun
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.48 no.6
    • /
    • pp.401-410
    • /
    • 2020
  • In this study, we describe a real-time tactical UAS position-compensation system based on image information, developed to compensate for the loss of position information during GPS signal interference and jamming/spoofing attacks. The tactical UAS (KUS-FT) is capable of automatic flight by switching from GPS/INS integrated navigation to DR/AHRS mode when the GPS signal is lost. In that case, however, position errors accumulate over time because dead reckoning (DR) relies on airspeed and azimuth, causing problems such as degraded UAS positioning and data-link antenna tracking. To minimize the accumulation of position error, we developed a system that, based on target data for a specific region from the image sensor, calculates the position using the UAS attitude, the EO/IR (Electro-Optic/Infra-Red) azimuth and elevation, and numerical map data, and corrects the estimated position in real time. The function and performance of the image-based real-time position-compensation system were verified by ground tests using a GPS simulator and by flight tests in DR mode.
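
A simplified, flat-terrain sketch of the underlying geometry; the paper intersects the line of sight with numerical map data and runs the correction in the opposite direction (from a recognized target back to the UAS position), and all names and the flat-plane assumption here are illustrative:

```python
import numpy as np

def target_from_los(uav_pos, heading_deg, gimbal_az_deg, gimbal_el_deg,
                    ground_alt=0.0):
    """Intersect the EO/IR line of sight with flat terrain.

    uav_pos       : (north, east, altitude) of the aircraft [m]
    heading_deg   : aircraft yaw [deg]
    gimbal_az_deg : gimbal azimuth relative to the nose [deg]
    gimbal_el_deg : gimbal depression angle below the horizon [deg]
    Returns the (north, east) ground coordinates of the aimpoint.
    A real system uses a digital elevation map instead of a flat
    plane and folds in the full attitude (roll/pitch) as well.
    """
    az = np.radians(heading_deg + gimbal_az_deg)
    el = np.radians(gimbal_el_deg)
    height = uav_pos[2] - ground_alt
    ground_range = height / np.tan(el)   # horizontal distance to aimpoint
    north = uav_pos[0] + ground_range * np.cos(az)
    east = uav_pos[1] + ground_range * np.sin(az)
    return north, east

# Position correction inverts this relation: given a landmark of known
# coordinates, solve for uav_pos so the predicted aimpoint matches it.
```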

Vision-based Motion Control for the Immersive Interaction with a Mobile Augmented Reality Object (모바일 증강현실 물체와 몰입형 상호작용을 위한 비전기반 동작제어)

  • Chun, Jun-Chul
    • Journal of Internet Computing and Services
    • /
    • v.12 no.3
    • /
    • pp.119-129
    • /
    • 2011
  • Vision-based human-computer interaction is an emerging field of science and industry that provides a natural way for humans and computers to communicate. In particular, the recent growth of mobile augmented reality demands efficient interaction techniques between augmented virtual objects and users. This paper presents a novel approach to constructing and controlling a marker-less mobile augmented-reality object. Replacing a traditional marker, the human hand serves as the interface for the marker-less system. To implement marker-less mobile augmentation within the limited resources of a mobile device, compared with desktop environments, we propose a method that extracts an optimal hand region, which plays the role of the marker, and augments the object in real time using the camera attached to the mobile device. Optimal hand-region detection consists of detecting the hand region with a YCbCr skin-color model and extracting the optimal rectangle with the Rotating Calipers algorithm. The extracted rectangle then takes the role of a traditional marker. The proposed method resolves the problem of losing track of the fingertips when the hand is rotated or occluded in hand-marker systems. The experiments show that the proposed framework can effectively construct and control augmented virtual objects in mobile environments.
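
A minimal OpenCV sketch of the two-step hand-marker extraction named in the abstract; the Cr/Cb thresholds are commonly used values, not the paper's exact skin model, and `cv2.minAreaRect` stands in for the Rotating Calipers step (it computes the minimal enclosing rotated rectangle via that idea):

```python
import cv2
import numpy as np

def optimal_hand_rect(frame_bgr):
    """Detect the hand by YCbCr skin color, then fit a rotated rectangle."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    # Widely used Cr/Cb skin ranges (assumed, not the paper's values).
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    # Minimal rotated bounding rectangle: this region substitutes for
    # a fiducial marker when anchoring the augmented object.
    return cv2.minAreaRect(hand)
```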

Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model (카메라-라이다 융합 모델의 오류 유발을 위한 스케일링 공격 방법)

  • Yi-ji Im;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.6
    • /
    • pp.1099-1110
    • /
    • 2023
  • The recognition systems of autonomous vehicles and navigation robots fuse multiple sensors to improve performance on vision tasks such as object recognition, tracking, and lane detection. Research on deep learning models that fuse a camera and a LiDAR sensor is currently very active. However, deep learning models are vulnerable to adversarial attacks that modulate the input data. Existing attacks on multi-sensor autonomous-driving recognition systems focus on disrupting obstacle detection by lowering the confidence score of the object recognition model, but such attacks work only against the targeted model. An attack at the sensor-fusion stage, in contrast, can cascade errors into the vision tasks downstream of the fusion, a risk that needs to be considered; moreover, an attack on the LiDAR point cloud, which is hard to judge visually, is difficult to even identify as an attack. In this study, we propose an image-scaling-based attack that reduces the accuracy of LCCNet, a camera-LiDAR calibration (fusion) model. The proposed method performs a scaling attack on the input LiDAR points. In experiments applying the scaling algorithm at various sizes, the attack induced fusion errors of more than 77% on average.
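
A hedged sketch of the core manipulation: scaling the LiDAR coordinates so the cloud projects to shifted pixel locations, which misleads a calibration network such as LCCNet. The scale factor and axis handling are illustrative, not the paper's settings:

```python
import numpy as np

def scaling_attack(points, scale=1.1, axis=None):
    """Scale LiDAR points to perturb camera-LiDAR extrinsic estimation.

    points : (N, 3) array of x, y, z coordinates
    scale  : multiplicative factor applied to the coordinates
    axis   : optionally restrict the attack to one axis (0, 1, or 2)
    """
    attacked = points.copy()
    if axis is None:
        attacked[:, :3] *= scale      # uniform scaling of the whole cloud
    else:
        attacked[:, axis] *= scale    # anisotropic scaling along one axis
    return attacked
```

Because the geometry of the cloud still looks plausible to a human inspector, such a perturbation is hard to flag as an attack, which is the risk the abstract highlights.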

Single Image Dehazing Using Dark Channel Prior and Minimal Atmospheric Veil

  • Zhou, Xiao;Wang, Chengyou;Wang, Liping;Wang, Nan;Fu, Qiming
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.1
    • /
    • pp.341-363
    • /
    • 2016
  • Haze or fog is a common natural phenomenon. In foggy weather, captured images are difficult to use in computer vision systems such as road-traffic detection and target tracking, so image dehazing has become a hotspot in image processing. This paper presents an overview of existing work on image dehazing. Rather than reviewing all the relevant literature, we focus on two main lines of work: dehazing based on the atmospheric veil and dehazing based on the dark channel prior. After the overview and a comparative study, we propose an improved dehazing method built on these two schemes. Our method obtains fog-free images by constructing a more desirable atmospheric veil and estimating the atmospheric light more accurately. In addition, we adjust the transmission in sky regions and apply tone mapping to the output images. Experimental results show that, compared with other state-of-the-art algorithms, images recovered by our algorithm are clearer and more natural, especially in distant scenes and where scene depth jumps abruptly.
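
For context, a baseline dark-channel-prior dehazer (after He et al.) is sketched below; the paper's refinements (the minimal atmospheric veil, sky-region transmission adjustment, and tone mapping) are not reproduced here:

```python
import cv2
import numpy as np

def dehaze_dark_channel(img, patch=15, omega=0.95, t0=0.1):
    """Baseline dark-channel-prior dehazing sketch."""
    img = img.astype(np.float64) / 255.0
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    # Dark channel: per-pixel min over channels, then a local min filter.
    dark = cv2.erode(img.min(axis=2), kernel)
    # Atmospheric light A: taken from the brightest dark-channel pixels.
    idx = np.unravel_index(np.argsort(dark, axis=None)[-100:], dark.shape)
    A = img[idx].max(axis=0)
    # Transmission estimate t(x) = 1 - omega * dark(I/A), clipped at t0.
    t = 1.0 - omega * cv2.erode((img / A).min(axis=2), kernel)
    t = np.clip(t, t0, 1.0)[..., None]
    # Scene radiance recovery: J = (I - A) / t + A.
    J = (img - A) / t + A
    return np.clip(J * 255, 0, 255).astype(np.uint8)
```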

Autonomous Mobile Robot System Using Adaptive Spatial Coordinates Detection Scheme based on Stereo Camera (스테레오 카메라 기반의 적응적인 공간좌표 검출 기법을 이용한 자율 이동로봇 시스템)

  • Ko Jung-Hwan;Kim Sung-Il;Kim Eun-Soo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.31 no.1C
    • /
    • pp.26-35
    • /
    • 2006
  • In this paper, an autonomous mobile robot system for intelligent path planning, using an adaptive spatial-coordinates detection scheme based on a stereo camera, is proposed. In the proposed system, the face of a moving person is detected in the left image of each stereo pair using the YCbCr color model, and its center coordinates are computed with the centroid method; using these data, the stereo camera mounted on the mobile robot is controlled to track the moving target in real time. Moreover, depth information is recovered from the disparity map of the left and right images captured by the tracking-controlled stereo camera, together with the perspective transformation between the 3D scene and the image plane. Finally, based on the analysis of these coordinates, intelligent path planning and position estimation for the mobile robot are derived. In driving experiments over 240 stereo frames, the error between the calculated and measured distances from the robot to objects, and between the other objects, was found to be very low, averaging 2.19% and 1.52%, respectively.
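
The depth step rests on the standard stereo relation Z = f * B / d (focal length in pixels, baseline in meters, disparity in pixels). A minimal sketch with OpenCV block matching, using illustrative parameters rather than the paper's tuned pipeline:

```python
import cv2

def depth_map(gray_left, gray_right, focal_px, baseline_m):
    """Depth from a rectified stereo pair via block matching."""
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparities scaled by 16.
    disp = stereo.compute(gray_left, gray_right).astype(float) / 16.0
    disp[disp <= 0] = 0.1                 # guard against division by zero
    return focal_px * baseline_m / disp   # Z = f * B / d, per pixel
```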

The Technique of Human tracking using ultrasonic sensor for Human Tracking of Cooperation robot based Mobile Platform (모바일 플랫폼 기반 협동로봇의 사용자 추종을 위한 초음파 센서 활용 기법)

  • Yum, Seung-Ho;Eom, Su-Hong;Lee, Eung-Hyuk
    • Journal of IKEEE
    • /
    • v.24 no.2
    • /
    • pp.638-648
    • /
    • 2020
  • Currently, user-following for intelligent cooperative robots is usually based on vision systems or LiDAR, and these approaches perform very well. But in the closed wards of COVID-19, which spread worldwide in 2020, robots that could cooperate with medical staff were scarce, because the staff all wear protective clothing to prevent viral infection, a condition existing techniques do not handle well. To solve this problem, in this paper the ultrasonic sensor is split into separate transmitting and receiving parts, and on this basis we propose estimating the user's position so the robot can actively follow and cooperate with people. An improved median filter was applied to the ultrasonic readings to reduce errors caused by dropped returns and multiple reflections, and a curvature-based trajectory was applied for smooth operation in small areas. The median filter reduced bearing and distance errors by 70%, and driving stability was verified on test courses such as 'S' and '8' shapes.
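
A minimal sliding-median sketch of the baseline filtering idea; the paper's improved variant is not reproduced, and the window size is an assumption:

```python
from collections import deque
import numpy as np

def median_filtered(readings, window=5):
    """Sliding median over recent ultrasonic range or bearing samples.

    A median discards isolated spikes from missed or multiple echoes
    without smearing genuine motion the way a moving average would.
    """
    buf = deque(maxlen=window)
    out = []
    for r in readings:
        buf.append(r)
        out.append(float(np.median(buf)))
    return out
```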

Visual Sensor Design and Environment Modeling for Autonomous Mobile Welding Robots (자율 주행 용접 로봇을 위한 시각 센서 개발과 환경 모델링)

  • Kim, Min-Yeong;Jo, Hyeong-Seok;Kim, Jae-Hun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.8 no.9
    • /
    • pp.776-787
    • /
    • 2002
  • Automation of the welding process in shipyards is ultimately necessary, since the welding site is spatially enclosed by floors and girders and welding operators are therefore exposed to hostile working conditions. To solve this problem, a mobile welding robot that can navigate autonomously within the enclosure has been developed. To carry out welding in this closed space, the robotic welding system needs a sensor system for recognizing the work environment and tracking the weld seam, together with a specially designed environment-recognition strategy. In this paper, a three-dimensional laser vision system based on optical triangulation is developed to provide the robot with a 3D map of the work environment. For this sensor system, a spatial filter based on neural-network technology is designed for extracting the center of the laser stripe and is evaluated in various situations. An environment-modeling algorithm is proposed and tested, composed of a laser-scanning module for 3D voxel modeling and a plane-reconstruction module for mobile-robot localization. Finally, an environment-recognition strategy is developed so the welding robot can recognize its work environment efficiently. The design of the sensor system, the algorithm for sensing the partially structured environment with plane segments, and the recognition strategy and tactics are described and discussed in detail with a series of experiments.
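
The paper replaces the classical stripe-center extractor with a neural-network spatial filter; for orientation, the standard intensity-centroid baseline it improves on looks roughly like this (threshold value is an assumption):

```python
import numpy as np

def stripe_centers(gray, threshold=50):
    """Sub-pixel laser-stripe center per image column.

    For each column, background is suppressed and the intensity
    centroid gives the stripe's row position, which triangulation
    then converts into a per-column depth measurement.
    """
    h, w = gray.shape
    rows = np.arange(h, dtype=np.float64)
    centers = np.full(w, np.nan)
    for c in range(w):
        col = gray[:, c].astype(np.float64)
        col[col < threshold] = 0.0          # suppress background
        s = col.sum()
        if s > 0:
            centers[c] = (rows * col).sum() / s   # intensity centroid
    return centers
```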

Hand posture recognition robust to rotation using temporal correlation between adjacent frames (인접 프레임의 시간적 상관 관계를 이용한 회전에 강인한 손 모양 인식)

  • Lee, Seong-Il;Min, Hyun-Seok;Shin, Ho-Chul;Lim, Eul-Gyoon;Hwang, Dae-Hwan;Ro, Yong-Man
    • Journal of Korea Multimedia Society
    • /
    • v.13 no.11
    • /
    • pp.1630-1642
    • /
    • 2010
  • Recently, there is an increasing need for Hand Gesture Recognition (HGR) techniques for vision-based interfaces. Since a hand gesture is defined as a consecutive change of hand postures, Hand Posture Recognition (HPR) algorithms are required first. Among the factors that degrade HPR performance, we focus on rotation. To achieve rotation-invariant HPR, we propose a method that exploits the property that adjacent frames of video are highly correlated, reflecting the conditions under which HGR operates. Using this property, the proposed method introduces the template update from object tracking, which distinguishes it from previous still-image approaches. We compared the proposed method with template matching, PCA, and LBP in experiments on video containing hand rotation. The accuracy of the proposed method is 22.7%, 14.5%, 10.7%, and 4.3% higher than ordinary template matching, template matching using the KL transform, PCA, and LBP, respectively.
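
A minimal sketch of template matching with a running template update, the mechanism by which temporal correlation buys rotation robustness; the blending rate `alpha` is illustrative, not a value from the paper:

```python
import cv2

def track_with_update(frame_gray, template, alpha=0.2):
    """Match the current template, then blend in the matched patch.

    Because adjacent frames are highly correlated, refreshing the
    template each frame keeps matching stable as the hand rotates.
    """
    res = cv2.matchTemplate(frame_gray, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(res)      # best-match location
    th, tw = template.shape
    x, y = top_left
    patch = frame_gray[y:y + th, x:x + tw]
    # Running-average template update exploiting temporal correlation.
    new_template = cv2.addWeighted(template, 1 - alpha, patch, alpha, 0)
    return (x, y, tw, th), score, new_template
```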