• Title/Summary/Keyword: Vision problem

Search Results: 718

A Study on Obstacle Detection for Mobile Robot Navigation (이동형 로보트 주행을 위한 장애물 검출에 관한 연구)

  • Yun, Ji-Ho;Woo, Dong-Min
    • Proceedings of the KIEE Conference
    • /
    • 1995.11a
    • /
    • pp.587-589
    • /
    • 1995
  • The safe navigation of a mobile robot requires recognizing the environment through vision processing. To follow a given path, the robot must acquire information about where walls and corridors are located, and unexpected obstacles should be detected as rapidly as possible for safe obstacle avoidance. In this paper, we assume that the mobile robot navigates on a flat surface. Under this assumption, the correspondence problem is simplified by projecting features onto the free navigation surface and matching them in that coordinate system. The vision processing system adopts edge line segments as features. The line segments extracted from both images are matched on the free navigation surface; according to the matching result, each segment is labeled as belonging to an obstacle or to the free surface, and the 3D shape of the obstacle is interpreted. The proposed vision processing method is verified through various simulations and experiments using real images.
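
A minimal sketch of the ground-plane matching idea described in this abstract, assuming each camera comes with a known image-to-ground homography (`H_left`, `H_right`); the homographies, the matching tolerance, and the labeling rule are illustrative assumptions rather than the authors' implementation.

```python
# Sketch: match edge line segments from two views on an assumed flat ground plane.
# H_left / H_right are assumed image-to-ground homographies (not from the paper).
import numpy as np
import cv2

def to_ground(segments, H):
    """Project an Nx4 array of segment endpoints (x1, y1, x2, y2) onto the ground plane."""
    pts = segments.reshape(-1, 1, 2).astype(np.float32)
    ground = cv2.perspectiveTransform(pts, H)          # homogeneous warp of the endpoints
    return ground.reshape(-1, 4)

def label_segments(seg_left, seg_right, H_left, H_right, tol=0.05):
    """Label each left-image segment as 'free_surface' if some right-image segment
    lands at (almost) the same ground-plane location, else 'obstacle'."""
    gl, gr = to_ground(seg_left, H_left), to_ground(seg_right, H_right)
    labels = []
    for s in gl:
        # distance between endpoint pairs, allowing either endpoint ordering
        d = np.minimum(
            np.linalg.norm(gr - s, axis=1),
            np.linalg.norm(gr - s[[2, 3, 0, 1]], axis=1),
        )
        labels.append("free_surface" if d.min() < tol else "obstacle")
    return labels
```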


Motion and Structure Estimation Using Fusion of Inertial and Vision Data for Helmet Tracker

  • Heo, Se-Jong;Shin, Ok-Shik;Park, Chan-Gook
    • International Journal of Aeronautical and Space Sciences
    • /
    • v.11 no.1
    • /
    • pp.31-40
    • /
    • 2010
  • For weapon cueing and Head-Mounted Displays (HMDs), it is essential to continuously estimate the motion of the helmet. The problem of estimating and predicting the position and orientation of the helmet is approached by fusing measurements from inertial sensors and a stereo vision system. The sensor fusion approach in this paper is based on nonlinear filtering, specifically the extended Kalman filter (EKF). To reduce computation time and improve vision-processing performance, structure estimation and motion estimation are separated: the structure estimation tracks features belonging to the helmet model structure in the scene, while the motion estimation filter estimates the position and orientation of the helmet. The algorithm is tested with synthetic and real data, and the results show that the sensor fusion is successful.
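
The abstract does not give the filter equations, so the following is only a generic extended Kalman filter skeleton of the kind such a fusion scheme builds on: an inertial model propagates the state and a vision measurement corrects it. The state layout, the models `f`/`h`, and their Jacobians are placeholders; the paper's split between structure and motion estimation is not reproduced.

```python
# Minimal EKF skeleton as a hedged illustration of inertial/vision fusion
# (not the paper's actual filter design).
import numpy as np

class SimpleEKF:
    def __init__(self, x0, P0, Q, R):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R

    def predict(self, f, F):
        """Propagate the state with the (nonlinear) inertial model f and its Jacobian F."""
        self.x = f(self.x)
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z, h, H):
        """Correct the state with a vision measurement z, model h and Jacobian H."""
        y = z - h(self.x)                          # innovation
        S = H @ self.P @ H.T + self.R              # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)        # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
```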

Effects of Vision and Visual Feedback on Standing Posture in Patients With Hemiplegia (시각 및 시각되먹임이 편마비 환자의 서기자세에 미치는 영향)

  • Kim, Myoung-Jin
    • Physical Therapy Korea
    • /
    • v.5 no.3
    • /
    • pp.42-47
    • /
    • 1998
  • Patients with hemiplegia usually show a different body weight distribution from that of normal subjects, and asymmetrical posture during static stance has been identified as a common problem in this population. The purpose of this study was to identify the effects of vision and visual feedback on body weight distribution while standing under three conditions: eyes closed, eyes open, and visual feedback. Fourteen patients with hemiplegia participated in the study. Their body weight distribution during 20 seconds of standing was measured with a Limloader, and the data were analysed by repeated-measures one-way ANOVA. Weight bearing on the paretic limb in the eyes-open condition was significantly higher than in the eyes-closed condition, and weight bearing on the paretic limb in the visual feedback condition was significantly higher than in the eyes-open condition. These results suggest that patients with hemiplegia can improve their symmetrical stance ability using visual feedback.
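
A hedged sketch of the statistical step only, assuming long-format data with one paretic-limb weight-bearing value per subject and condition; the column names and the numbers below are placeholders, not the study's data.

```python
# Hedged example: repeated-measures one-way ANOVA over the three conditions,
# assuming long-format data; column names and values are illustrative only.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

data = pd.DataFrame({
    "subject":   [s for s in range(1, 15) for _ in range(3)],
    "condition": ["eyes_closed", "eyes_open", "visual_feedback"] * 14,
    "paretic_weight_pct": [40 + (s % 5) + c * 3          # placeholder numbers
                           for s in range(14) for c in range(3)],
})

result = AnovaRM(data, depvar="paretic_weight_pct",
                 subject="subject", within=["condition"]).fit()
print(result)
```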


A Study on Development of a Vision System for the Test of Steam Generator in Nuclear Power Plants (원전 증기 발생기 세관 검사용 비젼 시스템 개발에 관한 연구)

  • 왕한홍
    • Proceedings of the Korean Society of Machine Tool Engineers Conference
    • /
    • 1996.03a
    • /
    • pp.200-204
    • /
    • 1996
  • Maintenance and repair work in nuclear power plants is highly problematic for human workers owing to radioactive effusion from nuclear fuel and contamination of the related equipment. Therefore, the vision processing system presented in this research is required to maintain good performance under radioactive conditions and to satisfy real-time processing constraints. The proposed vision scheme adopts the gradient and Laplacian operators for high-speed edge detection and a centroid formula in each direction to obtain the center positions of the tube holes, implemented on DSPs.
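
A rough sketch of the two image-processing operations named above, gradient/Laplacian edge detection and a moment-based hole centroid, using standard OpenCV calls; the input image, the threshold, and the kernel sizes are assumptions.

```python
# Hedged sketch: gradient/Laplacian edge detection plus an intensity-based
# centroid of a tube-hole region. Parameters and file names are assumptions.
import cv2
import numpy as np

img = cv2.imread("tube_sheet.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image

# Edge maps from first- and second-order operators
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
grad_mag = cv2.magnitude(gx, gy)
laplacian = cv2.Laplacian(img, cv2.CV_32F, ksize=3)

# Centroid of a dark hole via image moments (threshold value is illustrative)
_, hole = cv2.threshold(img, 60, 255, cv2.THRESH_BINARY_INV)
m = cv2.moments(hole, binaryImage=True)
cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
print(f"hole center: ({cx:.1f}, {cy:.1f})")
```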


Control of Visual Tracking System with a Random Time Delay (랜덤한 시간 지연 요소를 갖는 영상 추적 시스템의 제어)

  • Oh, Nam-Kyu;Choi, Goon-Ho
    • Journal of the Semiconductor & Display Technology
    • /
    • v.10 no.3
    • /
    • pp.21-28
    • /
    • 2011
  • In recent years, owing to the development of image processing technology, research on building control systems with vision sensors has been stimulated. However, a random time delay must be considered, because the time required to obtain an image processing result varies in such a system; this delay is an obstacle to controlling visual tracking in a real system. In this paper, two vision controllers are implemented, the first built around a PID controller and the second around a Smith predictor, and the possibility of overcoming the random time delay problem in a visual tracking system is demonstrated. A number of simulations and experiments were performed to show the validity of this study.
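
A hedged discrete-time sketch of the Smith predictor structure mentioned above, assuming a first-order plant model with a fixed nominal delay; the paper's random delay and actual plant are not modeled, and all gains are illustrative.

```python
# Hedged Smith predictor sketch: a PI controller acts on a delay-free internal model,
# while the mismatch between the delayed model output and the delayed measurement is
# fed back. Plant model, delay length, and gains are assumptions.
from collections import deque

class SmithPredictorPI:
    def __init__(self, a=0.9, b=0.1, delay_steps=5, kp=1.2, ki=0.3, dt=0.033):
        self.a, self.b = a, b                 # assumed first-order plant: x+ = a*x + b*u
        self.kp, self.ki, self.dt = kp, ki, dt
        self.x_model = 0.0                    # delay-free internal model state
        self.integ = 0.0
        self.model_hist = deque([0.0] * delay_steps, maxlen=delay_steps)

    def step(self, reference, measurement):
        # predicted undelayed output plus the modeling-error correction
        y_model_delayed = self.model_hist[0]
        y_pred = self.x_model + (measurement - y_model_delayed)
        error = reference - y_pred
        self.integ += error * self.dt
        u = self.kp * error + self.ki * self.integ
        # advance the internal model and its delayed copy
        self.x_model = self.a * self.x_model + self.b * u
        self.model_hist.append(self.x_model)
        return u
```

The key point of this structure is that the controller reacts to a prediction of the undelayed output, so the loop gains do not have to be detuned to tolerate the measurement delay.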

Position Control of an Object Using Vision Sensor (비전 센서를 이용한 물체의 위치 제어)

  • Ha, Eun-Hyeon;Choi, Goon-Ho
    • Journal of the Semiconductor & Display Technology
    • /
    • v.10 no.2
    • /
    • pp.49-56
    • /
    • 2011
  • In recent years, owing to the development of image processing technology, research on building control systems with vision sensors has been stimulated. However, the time delay must be considered, because it takes time to obtain an image processing result in the system; this delay is an obstacle to real-time control. In this paper, the locations of two objects are recognized from a single camera image using a pattern matching technique and used as feedback data in a position control system. In addition, the possibility of overcoming the time delay problem with a PID controller is demonstrated. A number of experiments were performed to show the validity of this study.
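
A small sketch of the vision-feedback step, locating two objects in one frame by template matching and forming a relative-position error for the controller; the file names and the error definition are assumptions.

```python
# Hedged sketch: locate two objects in one camera frame by template matching.
# Templates, file names, and the feedback definition are illustrative only.
import cv2

def locate(frame_gray, template_gray):
    """Return the (x, y) center of the best template match in the frame."""
    res = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(res)
    h, w = template_gray.shape[:2]
    center = (top_left[0] + w // 2, top_left[1] + h // 2)
    return center, score

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)          # hypothetical files
obj_a = cv2.imread("object_a.png", cv2.IMREAD_GRAYSCALE)
obj_b = cv2.imread("object_b.png", cv2.IMREAD_GRAYSCALE)

pos_a, _ = locate(frame, obj_a)
pos_b, _ = locate(frame, obj_b)
error = (pos_b[0] - pos_a[0], pos_b[1] - pos_a[1])   # feedback signal for the controller
print("relative position error:", error)
```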

High Accuracy Vision-Based Positioning Method at an Intersection

  • Manh, Cuong Nguyen;Lee, Jaesung
    • Journal of information and communication convergence engineering
    • /
    • v.16 no.2
    • /
    • pp.114-124
    • /
    • 2018
  • This paper presents a vision-based vehicle positioning method at an intersection to support C-ITS. It removes minor shadows, which cause the merging problem, by simply eliminating the fractional parts of a quotient image. To separate occlusions, it first performs a distance transform on a merged foreground object to find seeds, each of which represents one vehicle, and then applies the watershed to find the natural border between two cars. In addition, a general vehicle model and a corresponding space estimation method are proposed. For performance evaluation, ground truth data are read and compared with the vision-based detection results, and two criteria, IOU and DEER, are defined to measure the accuracy of the extracted data. The evaluation shows that the average IOU is 0.65 with a hit ratio of 97%, and that the average DEER is 0.0467, which corresponds to a positioning error of 32.7 centimeters.
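
A compact sketch of the occlusion-splitting step described above, seeding vehicles with a distance transform and separating them with the watershed; the thresholds and marker bookkeeping are illustrative, and the shadow-removal and vehicle-model stages are omitted.

```python
# Hedged sketch: split merged vehicles in a foreground mask with distance-transform
# seeds and the watershed. Thresholds and file names are assumptions.
import cv2
import numpy as np

fg_mask = cv2.imread("foreground_mask.png", cv2.IMREAD_GRAYSCALE)   # hypothetical binary mask
_, fg = cv2.threshold(fg_mask, 127, 255, cv2.THRESH_BINARY)

# Seeds: points far from the blob boundary are likely vehicle centers
dist = cv2.distanceTransform(fg, cv2.DIST_L2, 5)
_, seeds = cv2.threshold(dist, 0.6 * dist.max(), 255, cv2.THRESH_BINARY)
seeds = seeds.astype(np.uint8)
num_seeds, markers = cv2.connectedComponents(seeds)

# Watershed needs 3-channel input and int32 markers; the unknown region is labeled 0
markers = markers + 1
markers[(fg > 0) & (seeds == 0)] = 0
color = cv2.cvtColor(fg, cv2.COLOR_GRAY2BGR)
cv2.watershed(color, markers)        # borders between touching vehicles become -1
print("vehicles found:", num_seeds - 1)
```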

Multi-Vision-based Inspection of Mask Ear Loops Attachment in Mask Production Lines (마스크 생산 라인에서 다중 영상 기반 마스크 이어링 검사 방법)

  • JiMyeong, Woo;SangHyeon, Lee;Heoncheol, Lee
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.17 no.6
    • /
    • pp.337-346
    • /
    • 2022
  • This paper addresses the problem of vision-based ear loop attachment inspection in mask production lines, focusing on the connections between the ear loops and the mask filter using an efficient combined approach. The proposed method uses template matching, shape detection, and histogram summation together with preprocessing. A heuristically chosen parameter is used for defect detection: if the number of shape vertices is lower than this parameter, the proposed method automatically flags the mask as defective. After identifying normal masks with the ear loop attachment status inspection algorithm, the proposed method then performs attachment amount inspection. Experimental results show a precision of 1 and a recall of 0.99 for both the attachment status inspection and the attachment amount inspection.
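
A hedged sketch of the vertex-count heuristic, approximating the largest contour in an ear-loop region and flagging a defect when it has too few vertices; the threshold value, preprocessing, and image crop are assumptions, not the paper's parameters.

```python
# Hedged sketch: flag a defective ear-loop weld when the approximated contour has
# fewer vertices than an assumed threshold. All parameters are illustrative.
import cv2

MIN_VERTICES = 8   # assumed heuristic parameter, not the paper's value

def is_defective(roi_gray):
    _, binary = cv2.threshold(roi_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return True                                   # nothing attached at all
    largest = max(contours, key=cv2.contourArea)
    approx = cv2.approxPolyDP(largest, 0.01 * cv2.arcLength(largest, True), True)
    return len(approx) < MIN_VERTICES                 # too simple a shape -> defect

roi = cv2.imread("ear_loop_roi.png", cv2.IMREAD_GRAYSCALE)   # hypothetical ROI crop
print("defective" if is_defective(roi) else "normal")
```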

Inspection of Weld Bead using High Speed Laser Vision Sensor

  • Lee, H.;Ahn, S.;Sung, K.;Rhee, S.
    • International Journal of Korean Welding Society
    • /
    • v.3 no.2
    • /
    • pp.53-59
    • /
    • 2003
  • Visual inspection using a laser vision sensor was proposed for fast and economical inspection and was verified experimentally. Welding is one of the most important manufacturing processes for the automotive and electronics industries as well as heavy industry, and the weld zone influences the reliability of the products. There are two kinds of weld inspection tests, destructive and non-destructive. Even though the destructive test is much more reliable, the product must be destroyed, so non-destructive tests such as ultrasonic or X-ray testing are used to overcome this problem. However, these tests are not suited to real-time inspection.


Image Enhancement for Visual SLAM in Low Illumination (저조도 환경에서 Visual SLAM을 위한 이미지 개선 방법)

  • Donggil You;Jihoon Jung;Hyeongjun Jeon;Changwan Han;Ilwoo Park;Junghyun Oh
    • The Journal of Korea Robotics Society
    • /
    • v.18 no.1
    • /
    • pp.66-71
    • /
    • 2023
  • As cameras have become primary sensors for mobile robots, vision-based Simultaneous Localization and Mapping (SLAM) has achieved impressive results with the recent development of computer vision and deep learning. However, visual information has the disadvantage that much of it disappears in a low-light environment. To overcome this problem, we propose an image enhancement method for performing visual SLAM in low-light environments. Using deep generative adversarial models and a modified gamma correction, the quality of low-light images is improved. The proposed method produces less sharp images than the existing method, but it can be applied to ORB-SLAM in real time because it dramatically reduces the amount of computation. Experimental results on the public TUM and VIVID++ datasets demonstrate the validity of the proposed method.
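
A small sketch of the gamma-correction half of the enhancement (the generative adversarial model is omitted), showing why a LUT-based correction is cheap enough for real-time SLAM; the gamma value and file names are illustrative.

```python
# Hedged sketch: LUT-based gamma correction for brightening low-light frames.
# The gamma value and file names are assumptions, not the paper's settings.
import cv2
import numpy as np

def gamma_correct(img_bgr, gamma=0.5):
    """Brighten a low-light image; gamma < 1 lifts dark regions."""
    lut = np.array([((i / 255.0) ** gamma) * 255 for i in range(256)],
                   dtype=np.uint8)
    return cv2.LUT(img_bgr, lut)

low_light = cv2.imread("low_light_frame.png")        # hypothetical input frame
enhanced = gamma_correct(low_light, gamma=0.45)
cv2.imwrite("enhanced_frame.png", enhanced)
```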