• Title/Summary/Keyword: Robot vision


Positive Random Forest based Robust Object Tracking (Positive Random Forest 기반의 강건한 객체 추적)

  • Cho, Yunsub;Jeong, Soowoong;Lee, Sangkeun
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.6 / pp.107-116 / 2015
  • With the growth of digital devices, the proliferation of high-performance computers, and the availability of high-quality, inexpensive video cameras, the demand for automated video analysis is increasing, especially in intelligent monitoring systems, video compression, and robot vision. For this reason, object tracking has come into the spotlight in computer vision. Tracking is the process of locating a moving object over time using a camera, and handling changes in an object's scale, rotation, and shape deformation is the most important requirement for robust tracking. In this paper, we propose a robust object tracking scheme using Random Forests. Specifically, an object detection scheme based on region covariance and ZNCC (zero-mean normalized cross-correlation) is adopted to estimate an accurate object location. Next, the detected region is divided into five regions for random-forest-based learning, and the five regions are verified by the random forest. The verified regions are put into a model pool. Finally, the input model is updated to correct the object location when a region does not contain the object. The experiments show that the proposed method locates objects more accurately than existing methods.
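The ZNCC term used in the detection step has a standard closed form. Below is a minimal Python sketch of ZNCC patch matching, assuming grayscale patches; the region-covariance features, the five-region split, and the random forest verification are not reproduced, and the function names are illustrative:

```python
import numpy as np

def zncc(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation between two equal-sized patches.

    Returns a score in [-1, 1]; values near 1 indicate a strong match.
    """
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0:
        return 0.0  # flat patches carry no correlation information
    return float(np.dot(a, b) / denom)

def best_match(search: np.ndarray, template: np.ndarray):
    """Slide the template over a search window and keep the best-scoring offset."""
    th, tw = template.shape
    best_score, best_pos = -2.0, (0, 0)
    for y in range(search.shape[0] - th + 1):
        for x in range(search.shape[1] - tw + 1):
            score = zncc(search[y:y + th, x:x + tw], template)
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_score, best_pos
```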

The Technique of Human tracking using ultrasonic sensor for Human Tracking of Cooperation robot based Mobile Platform (모바일 플랫폼 기반 협동로봇의 사용자 추종을 위한 초음파 센서 활용 기법)

  • Yum, Seung-Ho;Eom, Su-Hong;Lee, Eung-Hyuk
    • Journal of IKEEE / v.24 no.2 / pp.638-648 / 2020
  • Currently, user-following methods for intelligent cooperative robots are usually based on vision systems or LiDAR and show excellent performance. However, in the closed wards of the COVID-19 pandemic, which spread worldwide in 2020, robots cooperating with medical staff played almost no role, because the staff all wear protective clothing to prevent infection, which makes the existing techniques difficult to apply. To address this problem, this paper separates the transmitting and receiving parts of an ultrasonic sensor and, on that basis, proposes a method for estimating the user's position so that the robot can actively follow and cooperate with a person. An improved median filter is applied to the ultrasonic readings to reduce errors caused by communication dropouts between hard reflections and multiple reflections, and a curvature trajectory is applied so that the robot drives smoothly in small areas. The median filter reduced the angle and distance errors by 70%, and driving stability was verified on test courses such as an 'S' curve and a figure '8'.
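As a rough illustration of the filtering step, the sketch below applies a plain sliding-window median filter to range readings; the paper's improved median filter variant is not reproduced, and the window size and sample values are assumptions:

```python
from collections import deque
import statistics

class MedianFilter:
    """Sliding-window median filter for noisy ultrasonic range or angle readings."""

    def __init__(self, window: int = 5):
        self.buf = deque(maxlen=window)

    def update(self, measurement: float) -> float:
        self.buf.append(measurement)
        return statistics.median(self.buf)

# Example: suppress a spurious spike caused by a hard specular reflection
f = MedianFilter(window=5)
for d in [1.02, 1.01, 3.87, 1.03, 1.00]:   # metres; 3.87 is an outlier
    print(f.update(d))
```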

Design of Multi-Sensor-Based Open Architecture Integrated Navigation System for Localization of UGV

  • Choi, Ji-Hoon;Oh, Sang Heon;Kim, Hyo Seok;Lee, Yong Woo
    • Journal of Positioning, Navigation, and Timing / v.1 no.1 / pp.35-43 / 2012
  • The UGV is a special-purpose field robot developed for mine detection, surveillance, and transportation. To carry out its missions successfully, accurate and reliable navigation data must be provided. This paper presents the design and implementation of a multi-sensor-based open-architecture integrated navigation system for the localization of a UGV. The presented architecture hierarchically classifies the integrated system into four layers, and data communication between layers is based on distributed object-oriented middleware. A navigation manager determines the navigation mode from the QoS information of each navigation sensor, and the integrated filter performs navigation-mode-based data fusion in the filtering process. All navigation variables, including the filter parameters and the QoS of the navigation data, can be modified in a GUI, so the user can operate the integrated navigation system more conveniently. A conventional GPS/INS integrated system does not guarantee long-term localization reliability when the GPS solution is unavailable due to signal blockage or intentional jamming in outdoor environments. The presented integration algorithm, based on an adaptive federated filter structure with an FDI algorithm, can instead effectively integrate the outputs of multiple sensors such as 3D LADAR, vision, an odometer, a magnetic compass, and zero-velocity updates to enhance localization accuracy when GPS is unavailable. A field test was carried out with the UGV, and the results show that the presented integrated navigation system provides more robust and accurate localization than the conventional GPS/INS integrated system in outdoor environments.
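To illustrate the navigation manager's role, here is a hypothetical sketch of QoS-driven mode selection; the mode names, sensor set, and QoS fields are assumptions, and the adaptive federated filter and FDI logic themselves are not shown:

```python
from dataclasses import dataclass

@dataclass
class SensorQoS:
    name: str
    available: bool   # sensor currently producing output
    healthy: bool     # fault detection and isolation (FDI) check passed

def select_navigation_mode(qos):
    """Pick a fusion mode from per-sensor QoS flags (hypothetical mode names)."""
    ok = {s.name for s in qos if s.available and s.healthy}
    if "gps" in ok and "ins" in ok:
        return "GPS/INS"
    if "ins" in ok and ({"ladar", "vision", "odometer"} & ok):
        return "INS + aiding sensors"   # GPS denied: fall back to other sensors
    return "INS-only"

mode = select_navigation_mode([
    SensorQoS("gps", available=False, healthy=False),   # jammed or blocked
    SensorQoS("ins", available=True, healthy=True),
    SensorQoS("odometer", available=True, healthy=True),
])
```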

Boundary Depth Estimation Using Hough Transform and Focus Measure (허프 변환과 초점정보를 이용한 경계면 깊이 추정)

  • Kwon, Dae-Sun;Lee, Dae-Jong;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.25 no.1 / pp.78-84 / 2015
  • Depth estimation is often required for robot vision, 3D modeling, and motion control. Previous methods are based on focus measures calculated over a series of images taken by a single camera at different distances between the camera and the object. These methods, however, take a long time to compute the focus measure because a mask operation is performed for every pixel in the image. In this paper, we estimate depth using the focus measure of only the boundary pixels located between objects, in order to minimize the estimation time. To detect object boundaries consisting of straight lines and circles, we use the Hough transform and then estimate depth from the focus measure along those boundaries. We performed various experiments on PCB images and obtained more effective depth estimation results than previous methods.
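As a sketch of evaluating focus only along detected boundaries, the following uses OpenCV's probabilistic Hough transform for lines and a Laplacian-energy focus measure; the specific focus measure, the thresholds, and the circle-boundary case are assumptions rather than the authors' exact choices:

```python
import cv2
import numpy as np

def boundary_focus_measure(image_gray: np.ndarray) -> float:
    """Average a Laplacian-based focus measure over pixels lying on Hough lines."""
    edges = cv2.Canny(image_gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=30, maxLineGap=5)
    focus = cv2.Laplacian(image_gray, cv2.CV_64F) ** 2   # per-pixel focus energy
    if lines is None:
        return 0.0
    mask = np.zeros_like(image_gray)
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(mask, (int(x1), int(y1)), (int(x2), int(y2)), 255, 1)
    return float(focus[mask > 0].mean())

# Across a focus stack, the image whose boundary focus measure peaks indicates
# the lens setting (and hence depth) at which that boundary is in focus.
```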

Development of Automatic Hole Position Measurement System using the CCD-camera (CCD-카메라를 이용한 홀 변위 자동측정시스템 개발)

  • 김병규;최재영;강희준;노영식
    • Proceedings of the Korean Society of Precision Engineering Conference / 2004.10a / pp.127-130 / 2004
  • For the quality control of industrial products, an automatic hole measuring system has been developed. The measurement device allows X-Y movement due to the contact forces between a hole and its own circular cone, and the device is attached to an industrial robot; its measurement accuracy is about 0.04 mm. The movement of the plate was originally measured by a system of two LVDT sensors, but such a system is limited by high cost, measurement precision, and adaptability to the environment, so this paper instead discusses a vision system with a CCD camera for the same purpose. The device basically consists of two links jointed with hinge pins, which guarantee free in-plane movement of the touch probe attached to the second link; after each measurement the links are returned to the home position by spring plungers for the next one. The surface of the touch probe carries a circular white mark for camera recognition, and the system detects and reports the center coordinate of the captured mark image through image processing. Its measuring accuracy has been shown to be about ±0.01 mm over 200 repeated trials. This touch-indirect image capture approach with a cone-shaped touch probe is particularly advantageous for various symmetrically shaped holes such as tapped holes and chamfered holes. As a result, the objectives of accuracy, economic efficiency, and functionality were attained.
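A minimal sketch of locating the white mark's center by thresholding and image moments is shown below; the paper's actual image processing pipeline may differ, and the use of Otsu thresholding and largest-contour selection is an assumption:

```python
import cv2
import numpy as np

def mark_center(image_gray: np.ndarray):
    """Locate the centroid of a bright circular mark via thresholding and moments."""
    _, binary = cv2.threshold(image_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)   # assume the mark is the largest blob
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])   # sub-pixel (x, y) centroid
```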


Distance Measurement of the Multi Moving Objects using Parallel Stereo Camera in the Video Monitoring System (영상감시 시스템에서 평행식 스테레오 카메라를 이용한 다중 이동물체의 거리측정)

  • 김수인;이재수;손영우
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.18 no.1 / pp.137-145 / 2004
  • In this paper, a new algorithm for segmenting multiple moving objects in 3D space and a method for measuring the distance from the camera to each moving object using a stereo video monitoring system are proposed. The left and right input images are obtained from the stereo video monitoring system, and the regions of the multiple moving objects are segmented using adaptive thresholding and a PRA (pixel recursive algorithm). Each segmented object is enclosed by a window mask, and the coordinates and stereo disparity of each moving object are obtained from the window masks. The distance to each moving object can then be calculated from this disparity, the geometry of the stereo vision system, and trigonometry. Experimental results show that the distance measurement error remains within 7.28%; therefore, the proposed algorithm can be applied in practice to stereo security systems, autonomous mobile robot systems, and stereo remote control systems.
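The disparity-to-distance step for a parallel (rectified) stereo rig follows the standard triangulation relation Z = fB/d. A minimal sketch, with illustrative parameter values, is:

```python
def distance_from_disparity(disparity_px: float,
                            focal_length_px: float,
                            baseline_m: float) -> float:
    """Triangulation for a parallel stereo rig: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# Example: f = 800 px, baseline = 0.12 m, disparity = 16 px  ->  Z = 6.0 m
z = distance_from_disparity(16.0, 800.0, 0.12)
```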

A Defocus Technique based Depth from Lens Translation using Sequential SVD Factorization

  • Kim, Jong-Il;Ahn, Hyun-Sik;Jeong, Gu-Min;Kim, Do-Hyun
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2005.06a / pp.383-388 / 2005
  • Depth recovery in robot vision is an essential problem: inferring the three-dimensional geometry of a scene from a sequence of two-dimensional images. Many approaches to depth estimation have been proposed, such as stereopsis, motion parallax, and blurring phenomena. Among these cues, depth from lens translation is based on shape from motion using feature points: it relies on the correspondence of feature points detected in images and estimates depth from the motion of those points. Approaches using motion vectors suffer from occlusion and missing-part problems, and the image blur is ignored in feature point detection. This paper presents a novel defocus technique for depth from lens translation using sequential SVD factorization. Solving these problems requires modeling the relationship between the light and the optics up to the image plane, so we first discuss the optical properties of the camera system, because the image blur varies with the camera parameter settings. The camera system is described by a model that integrates a thin-lens camera model, which explains the light and optical properties, with a perspective projection camera model, which explains depth from lens translation. Depth from lens translation is then performed using feature points detected on the edges of the image blur; these feature points carry depth information derived from the blur width. The shape and motion are estimated from the motion of the feature points using sequential SVD factorization to obtain the orthogonal matrices of the singular value decomposition. Experiments with sequences of real and synthetic images compare the presented method with conventional depth from lens translation and demonstrate the validity and applicability of the proposed method for depth estimation.
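The factorization step can be sketched with the standard rank-3 SVD factorization of a feature measurement matrix (Tomasi-Kanade style); the sequential update, the defocus-derived feature points, and the metric upgrade are omitted here, so this is only a sketch of the decomposition itself:

```python
import numpy as np

def factorize_measurements(W: np.ndarray):
    """Rank-3 factorization of a 2F x P measurement matrix into motion and shape.

    W stacks the x- and y-coordinates of P tracked feature points over F frames.
    """
    W_centred = W - W.mean(axis=1, keepdims=True)      # remove per-row translation
    U, s, Vt = np.linalg.svd(W_centred, full_matrices=False)
    S3 = np.diag(np.sqrt(s[:3]))
    motion = U[:, :3] @ S3        # 2F x 3 motion matrix (up to an affine ambiguity)
    shape = S3 @ Vt[:3, :]        # 3 x P shape matrix
    return motion, shape
```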


OWC based Smart TV Remote Controller Design Using Flashlight

  • Mariappan, Vinayagam;Lee, Minwoo;Choi, Byunghoon;Kim, Jooseok;Lee, Jisung;Choi, Seongjhin
    • International Journal of Internet, Broadcasting and Communication / v.10 no.1 / pp.71-76 / 2018
  • The technological convergence of television, communication, and computing devices enables a rich social and entertainment experience through Smart TVs in personal living spaces. The powerful Smart TV computing platform supports various user interaction interfaces such as IR remote controls, web-based control, and body-gesture-based control. However, the user control methods currently used with Smart TVs are not efficient or user-friendly for accessing different types of media content and services, and an easier, more advanced way to control and access the Smart TV is strongly required. This paper proposes an optical wireless communication (OWC) based remote controller design for Smart TVs using the flashlight of a smart device. In this approach, the user's smart device acts as a remote controller with a touch-based interactive application and transfers the user control data to the Smart TV through the flashlight using visible light communication. The Smart TV's built-in camera follows the optical camera communication (OCC) principle to decode the data and control the Smart TV's functions accordingly. The proposed method is not harmful to human health in the way radio frequency (RF) radiation can be, is very simple to use, and does not require any gestures to control the Smart TV.
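As a rough sketch of the camera-side decoding, the following assumes simple on-off keying of the flashlight and recovers bits from per-frame mean brightness; the framing (frames per bit) and the thresholding are assumptions, not the paper's actual OCC scheme:

```python
import numpy as np

def decode_ook(frame_brightness: np.ndarray, frames_per_bit: int = 3):
    """Decode on-off-keyed bits from a sequence of mean frame brightness values.

    Assumes the flashlight holds each bit for `frames_per_bit` camera frames
    (hypothetical framing) and that light-on frames are brighter than average.
    """
    threshold = frame_brightness.mean()          # simple adaptive threshold
    bits = []
    for i in range(0, len(frame_brightness) - frames_per_bit + 1, frames_per_bit):
        window = frame_brightness[i:i + frames_per_bit]
        bits.append(int(window.mean() > threshold))
    return bits

# Example: brightness samples for the bit pattern 1, 0, 1
print(decode_ook(np.array([220, 218, 221, 40, 42, 39, 219, 222, 220])))
```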

Hand Gesture Recognition using Multivariate Fuzzy Decision Tree and User Adaptation (다변량 퍼지 의사결정트리와 사용자 적응을 이용한 손동작 인식)

  • Jeon, Moon-Jin;Do, Jun-Hyeong;Lee, Sang-Wan;Park, Kwang-Hyun;Bien, Zeung-Nam
    • The Journal of Korea Robotics Society / v.3 no.2 / pp.81-90 / 2008
  • With the increasing demand for services for the disabled and the elderly, assistive technologies have developed rapidly. Natural human signals such as voice and gesture have been applied to systems that assist these users. As an example of such a human-robot interface, the Soft Remote Control System has been developed by HWRS-ERC at KAIST [1]. This system is a vision-based hand gesture recognition system for controlling home appliances such as televisions, lamps, and curtains, and its most important component is the hand gesture recognition algorithm. The problems that most frequently lower the recognition rate are inter-person variation and intra-person variation; intra-person variation can be handled by introducing fuzzy concepts. In this paper, we propose a multivariate fuzzy decision tree (MFDT) learning and classification algorithm for hand motion recognition. To recognize the hand gestures of a new user, the most suitable recognition model among several well-trained models is selected by a model selection algorithm and incrementally adapted to the user's gestures. To show the general performance of the MFDT as a classifier, we report classification rates on benchmark data from the UCI repository. For hand gesture recognition, we tested the method on gesture data collected from 10 people over 15 days. The experimental results show that the classification and user adaptation performance of the proposed algorithm is better than that of a general fuzzy decision tree.
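As a building block of a fuzzy decision node, the sketch below shows triangular membership functions that weigh a sample into both children instead of making a hard split; the full MFDT learning and model selection procedures are not reproduced, and the membership parameters are illustrative:

```python
import numpy as np

def triangular_membership(x: np.ndarray, a: float, b: float, c: float) -> np.ndarray:
    """Triangular fuzzy membership function with support [a, c] and peak at b."""
    left = np.clip((x - a) / (b - a), 0.0, 1.0) if b > a else (x >= b).astype(float)
    right = np.clip((c - x) / (c - b), 0.0, 1.0) if c > b else (x <= b).astype(float)
    return np.minimum(left, right)

# A fuzzy decision node routes a sample to both children with these weights
# rather than committing to a single branch:
x = np.array([0.2, 0.5, 0.9])                       # one normalized feature
mu_low = triangular_membership(x, 0.0, 0.0, 0.6)    # membership in "low"
mu_high = triangular_membership(x, 0.4, 1.0, 1.0)   # membership in "high"
```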


Robust Human Silhouette Extraction Using Graph Cuts (그래프 컷을 이용한 강인한 인체 실루엣 추출)

  • Ahn, Jung-Ho;Kim, Kil-Cheon;Byun, Hye-Ran
    • Journal of KIISE: Software and Applications / v.34 no.1 / pp.52-58 / 2007
  • In this paper we propose a new robust method for extracting accurate human silhouettes indoors with an active stereo camera; a prime application is gesture recognition for mobile robots. Segmenting distant moving objects involves many problems, such as low resolution, shadows, poor stereo matching information, and instability of the object and background color distributions. Many object segmentation methods are based on color or stereo information, but each alone is prone to failure. Here, efficient color, stereo, and image segmentation methods are fused to infer object and background areas of high confidence, and the inferred areas are then incorporated into a graph cut to make human silhouette extraction robust and accurate. Experimental results are presented for image sequences taken with a pan-tilt stereo camera. The proposed algorithms are evaluated against ground truth data and are shown to outperform methods based on color/stereo or color/contrast alone.
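To illustrate seeding a graph cut with high-confidence regions, the sketch below uses OpenCV's GrabCut (a graph-cut based segmenter) initialized from assumed foreground and background masks; this is a stand-in for the idea, not the authors' exact energy formulation:

```python
import cv2
import numpy as np

def silhouette_from_seeds(image_bgr: np.ndarray,
                          sure_fg: np.ndarray,
                          sure_bg: np.ndarray) -> np.ndarray:
    """Graph-cut segmentation (OpenCV GrabCut) seeded with high-confidence masks.

    sure_fg / sure_bg are boolean masks derived from, e.g., stereo and color cues.
    """
    mask = np.full(image_bgr.shape[:2], cv2.GC_PR_BGD, dtype=np.uint8)
    mask[sure_bg] = cv2.GC_BGD          # pixels known to be background
    mask[sure_fg] = cv2.GC_FGD          # pixels known to be the person
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, None, bgd_model, fgd_model,
                iterCount=5, mode=cv2.GC_INIT_WITH_MASK)
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return np.where(fg, 255, 0).astype(np.uint8)   # binary silhouette
```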