• Title/Summary/Keyword: vision-based method

Search results: 1,469

Controlling robot by image-based visual servoing with stereo cameras

  • Fan, Jun-Min; Won, Sang-Chul
    • Proceedings of the Korea Society of Information Technology Applications Conference / 2005.11a / pp.229-232 / 2005
  • In this paper, an image-based "approach-align-grasp" visual servo control design is proposed for the problem of object grasping, based on a binocular stand-alone system. The basic idea is to treat the vision system as a task-specific sensor included in a servo control loop; automatic grasping follows the classical approach of splitting the task into preparation and execution stages. During the execution stage, once the image-based control model is established, the control task can be performed automatically. The proposed visual servoing scheme ensures convergence of the image features to the desired trajectories by using the Jacobian matrix, which is proved by Lyapunov stability theory. The importance of projective-invariant object/gripper alignment is also stressed: the alignment between two solids in 3-D projective space can be represented in a view-invariant way and, more precisely, can be mapped into an image set-point without any knowledge of the camera parameters. The main feature of this method is that the accuracy of the task is not affected by discrepancies between the Euclidean setups at the preparation and execution stages. From the projective alignment, the set-point is computed, and the robot gripper moves to the desired position under the image-based control law. A constant Jacobian is used online. The method described herein integrates vision, robotics, and automatic control; it overcomes the discrepancies between the different Euclidean setups and provides a control law for the binocular stand-alone case. Experimental simulations show that this image-based approach is effective in performing precise alignment between the robot end-effector and the object.
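A minimal numpy sketch of the classical image-based visual-servoing step this abstract describes (constant image Jacobian, exponential decrease of the feature error); the feature values, gain, and Jacobian below are illustrative placeholders, not the authors' binocular setup.

```python
import numpy as np

def ibvs_velocity(s, s_star, J, gain=0.5):
    """Classical IBVS law: v = -gain * pinv(J) @ (s - s_star)."""
    error = s - s_star                        # image-feature error
    return -gain * np.linalg.pinv(J) @ error  # commanded camera/end-effector twist

# Hypothetical example: four point features (8 image coordinates), 8x6 constant Jacobian.
s      = np.array([0.12, 0.05, -0.10, 0.07, 0.11, -0.06, -0.09, -0.08])
s_star = np.array([0.10, 0.10, -0.10, 0.10, 0.10, -0.10, -0.10, -0.10])
J      = np.random.default_rng(0).normal(size=(8, 6))  # placeholder interaction matrix
print(ibvs_velocity(s, s_star, J))
```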

A design of window configuration for stereo matching (스테레오 매칭을 위한 Window 형상 설계)

  • 강치우; 정영덕; 이쾌희
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 1991.10a / pp.1175-1180 / 1991
  • The purpose of this paper is to improve the matching accuracy in identifying corresponding points in area-based matching for stereo vision processing. For the selection of window size, a new method based on frequency-domain analysis is proposed, and its effectiveness is confirmed through a series of experiments. To overcome disproportionate distortion in a stereo image pair, a new matching method using a warped window is also proposed, in which the window is warped according to the imaging geometry. Experiments on a synthetic image show that the matching accuracy is improved by 14.1% and 4.2% over the rectangular-window method and the image-warping method, respectively.
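As a point of reference for the area-based matching discussed above, here is a minimal sketch of fixed-window SSD matching; the paper's frequency-domain window-size selection and warped window are not reproduced, and the synthetic images below are illustrative.

```python
import numpy as np

def ssd_disparity(left, right, y, x, half=3, max_disp=16):
    """Disparity at (y, x) minimizing the SSD over a (2*half+1)^2 rectangular window."""
    patch_l = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(min(max_disp, x - half) + 1):
        patch_r = right[y - half:y + half + 1, x - d - half:x - d + half + 1].astype(float)
        cost = np.sum((patch_l - patch_r) ** 2)
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Synthetic pair: the left image is the right image shifted horizontally by 5 pixels.
rng = np.random.default_rng(0)
right = rng.integers(0, 255, size=(64, 64), dtype=np.uint8)
left = np.roll(right, 5, axis=1)
print(ssd_disparity(left, right, y=32, x=40))  # expected disparity: 5
```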

Vision-Based Identification of Personal Protective Equipment Wearing

  • Park, Man-Woo; Zhu, Zhenhua
    • International Conference on Construction Engineering and Project Management / 2015.10a / pp.313-316 / 2015
  • Construction is one of the most dangerous job sectors, with tens of thousands of lost-time injuries and deaths reported every year. These disasters incur delays and additional costs to projects. Safety management therefore needs to remain among the top priorities throughout construction in order to avoid fatal accidents and to foster a safe working environment. One of the safety regulations that is frequently violated is the wearing of personal protective equipment (PPE). To facilitate monitoring of compliance with PPE regulations, this paper proposes a vision-based method that automatically identifies whether workers wear hard hats and safety vests. The method involves three modules: human body detection, identification of safety-vest wearing, and hard-hat detection. First, human bodies are detected in video frames captured by real-time on-site construction cameras. The detected bodies are then classified as wearing or not wearing safety vests based on the color features of their upper parts. Finally, hard hats are detected in the regions near the detected bodies, and the locations of the detected hard hats and bodies are correlated to reveal corresponding matches. In this way, the proposed method flags any appearance of workers without hard hats or safety vests. The method has been tested on on-site videos, and the results signify its potential to facilitate site safety monitoring.
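A minimal sketch of the kind of pipeline described above (person detection followed by a color-based safety-vest check); it substitutes OpenCV's stock HOG people detector for the paper's detector, and the HSV thresholds for high-visibility vest colors are assumed values.

```python
import cv2
import numpy as np

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def wears_vest(frame, box, hsv_lo=(20, 120, 120), hsv_hi=(40, 255, 255), ratio=0.15):
    """Flag a detected person as wearing a high-visibility vest if enough pixels in the
    upper half of the bounding box fall inside the assumed HSV color range."""
    x, y, w, h = box
    upper = frame[y:y + h // 2, x:x + w]
    mask = cv2.inRange(cv2.cvtColor(upper, cv2.COLOR_BGR2HSV),
                       np.array(hsv_lo), np.array(hsv_hi))
    return mask.mean() / 255.0 > ratio

def check_frame(frame):
    """Return (bounding box, vest worn?) for every person detected in a BGR frame."""
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    return [(tuple(box), wears_vest(frame, box)) for box in boxes]
```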

Intelligent System based on Command Fusion and Fuzzy Logic Approaches - Application to mobile robot navigation (명령융합과 퍼지기반의 지능형 시스템-이동로봇주행적용)

  • Jin, Taeseok; Kim, Hyun-Deok
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.5 / pp.1034-1041 / 2014
  • This paper proposes a fuzzy inference model for obstacle avoidance by a mobile robot with an active camera that intelligently searches for the goal location in unknown environments using command fusion, based on situational commands derived from a vision sensor. Instead of the "physical sensor fusion" method, which generates the robot's trajectory from an environment model and sensory data, a "command fusion" method is used to govern the robot's motion. The navigation strategy is based on a combination of fuzzy rules tuned for both goal approach and obstacle avoidance. Experimental results obtained with the proposed method demonstrate successful navigation using real vision data.
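A minimal sketch of command fusion with simple triangular fuzzy memberships: a goal-seeking heading and an obstacle-avoidance heading are blended by weights derived from the nearest obstacle distance. The membership breakpoints are illustrative assumptions, not the paper's tuned rule base.

```python
import math

def tri(x, a, b, c):
    """Triangular fuzzy membership peaking at b over the support [a, c]."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuse_heading(goal_heading, avoid_heading, obstacle_dist):
    """Blend the goal-seeking and obstacle-avoidance headings (radians) by fuzzy weights."""
    near = tri(obstacle_dist, 0.0, 0.0, 1.0)  # obstacle is "near" (assumed breakpoints, meters)
    far  = tri(obstacle_dist, 0.5, 2.0, 2.0)  # obstacle is "far"
    return (near * avoid_heading + far * goal_heading) / (near + far + 1e-9)

print(fuse_heading(goal_heading=0.0, avoid_heading=math.pi / 3, obstacle_dist=0.6))
```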

A Hybrid Positioning System for Indoor Navigation on Mobile Phones using Panoramic Images

  • Nguyen, Van Vinh; Lee, Jong-Weon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.3 / pp.835-854 / 2012
  • In this paper, we propose a novel positioning system for indoor navigation that helps a user navigate easily to desired destinations in an unfamiliar indoor environment using a mobile phone. The system requires only the user's mobile phone with its basic built-in sensors, such as a camera and a compass. It tracks the user's position and orientation using a vision-based approach that utilizes 360° panoramic images captured in the environment. To improve the robustness of the vision-based method, we exploit the digital compass that is widely installed on modern mobile phones. This hybrid solution outperforms existing mobile-phone positioning methods, reducing the position estimation error to around 0.7 meters. In addition, to enable the proposed system to work independently on a mobile phone without additional hardware or external infrastructure, we employ a modified version of a fast and robust feature-matching scheme using Histogrammed Intensity Patches. Experiments show that the proposed positioning system achieves good performance while running on a mobile phone, with a response time of around 1 second.
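A minimal sketch of one ingredient of such a hybrid scheme: blending a vision-based heading estimate with a digital-compass reading via complementary weighting, with wrap-around handling; the weighting constant is an assumed value, not the paper's calibration.

```python
def fuse_heading(vision_deg, compass_deg, vision_weight=0.8):
    """Blend a vision heading and a compass heading (degrees), handling 360-degree wrap."""
    diff = (compass_deg - vision_deg + 180.0) % 360.0 - 180.0  # signed shortest difference
    return (vision_deg + (1.0 - vision_weight) * diff) % 360.0

print(fuse_heading(vision_deg=350.0, compass_deg=10.0))  # 354.0, pulled slightly toward the compass
```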

A Study on Three-Dimensional Model Reconstruction Based on Laser-Vision Technology (레이저 비전 기술을 이용한 물체의 3D 모델 재구성 방법에 관한 연구)

  • Nguyen, Huu Cuong; Lee, Byung Ryong
    • Journal of the Korean Society for Precision Engineering / v.32 no.7 / pp.633-641 / 2015
  • In this study, we propose a three-dimensional (3D) scanning system based on a laser-vision technique and a rotary mechanism for automatic 3D model reconstruction. The proposed scanning system consists of a laser projector, a camera, and a turntable. A new and simple method is proposed for laser-camera calibration. 3D point-cloud data of the scanned object's surface are collected by integrating the laser profiles extracted from laser-stripe images, each corresponding to a rotary angle of the mechanism. The obscured laser-profile problem is solved by adding an additional camera at another viewpoint. From the collected point-cloud data, the 3D model of the scanned object is reconstructed based on a facet representation. The reconstructed 3D models demonstrate the effectiveness and applicability of the proposed 3D scanning system to 3D model-based applications.
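A minimal sketch of the core geometric step in laser-stripe scanning: intersecting a camera ray with the calibrated laser plane and rotating the resulting point by the turntable angle. The intrinsics, the plane coefficients, and the omission of the camera-to-turntable extrinsics are simplifying assumptions for illustration, not values from the paper.

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])           # assumed camera intrinsics
plane = np.array([0.0, 0.0, 1.0, -0.5])       # assumed laser plane n.p + d = 0 (camera frame)

def pixel_to_point(u, v, turntable_deg):
    """Back-project a laser-stripe pixel, intersect the ray with the laser plane, and
    rotate the 3D point by the turntable angle (camera-to-turntable offset omitted)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray in the camera frame
    t = -plane[3] / (plane[:3] @ ray)                # ray/plane intersection parameter
    p = t * ray
    a = np.deg2rad(turntable_deg)
    Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
    return Rz @ p

print(pixel_to_point(400, 260, turntable_deg=30.0))
```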

Korean Wide Area Differential Global Positioning System Development Status and Preliminary Test Results

  • Yun, Ho; Kee, Chang-Don; Kim, Do-Yoon
    • International Journal of Aeronautical and Space Sciences / v.12 no.3 / pp.274-282 / 2011
  • This paper is focused on dynamic modeling and control system design, as well as vision-based collision avoidance, for multi-rotor unmanned aerial vehicles (UAVs). Multi-rotor UAVs are rotary-wing UAVs with multiple rotors. They can be utilized in various military situations, such as surveillance and reconnaissance, and can also be used for obtaining visual information from steep terrain or disaster sites. In this paper, a quad-rotor model is introduced together with its control system, which is designed around a proportional-integral-derivative controller and a vision-based collision avoidance control system. Additionally, for a UAV to navigate safely in areas such as buildings and offices with many obstacles, a collision avoidance algorithm must be installed in the UAV's hardware, including obstacle detection, avoidance maneuvering, and so on. In this paper, the optical flow method, one of the vision-based collision avoidance techniques, is introduced, and the multi-rotor UAV's collision avoidance is simulated in various virtual environments to demonstrate its avoidance performance.
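A minimal sketch of the optical-flow "balance strategy" commonly used for vision-based avoidance of the kind this abstract mentions: dense Farneback flow is computed between two grayscale frames and the vehicle steers away from the image half with the larger flow magnitude; the gain is an assumed value, not the paper's controller.

```python
import cv2
import numpy as np

def avoidance_command(prev_gray, curr_gray, gain=1.0):
    """Steer away from the image half with the larger mean flow magnitude
    (>0 suggests steering right, <0 steering left)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    half = mag.shape[1] // 2
    left, right = mag[:, :half].mean(), mag[:, half:].mean()
    return gain * (left - right) / (left + right + 1e-9)
```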

Predicting Accident Vulnerable Situation and Extracting Scenarios of Automated Vehicle Using Vision Transformer Method Based on Vision Data (Vision Transformer를 활용한 비전 데이터 기반 자율주행자동차 사고 취약상황 예측 및 시나리오 도출)

  • Lee, Woo seop; Kang, Min hee; Yoon, Young; Hwang, Kee yeon
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.21 no.5 / pp.233-252 / 2022
  • Recently, various studies have been conducted to improve automated vehicle (AV) safety ahead of AV commercialization. In particular, scenario-based methods are directly related to essential safety assessments. However, existing scenarios lack objectivity and explainability owing to a shortage of data and their reliance on experts' interventions. Therefore, this paper presents extended scenarios for AV safety assessment using real traffic accident data and a vision transformer (ViT) as an explainable artificial intelligence (XAI) approach. The optimal ViT showed 94% accuracy, and the scenarios were presented together with attention maps. This work provides a new framework for AV safety assessment that alleviates the shortage of existing scenarios.
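A minimal sketch of attention rollout, a common way a ViT's per-layer attention is aggregated into a single map over image patches of the sort used to present scenarios with attention maps; the random attention matrices below stand in for a trained model's outputs.

```python
import numpy as np

def attention_rollout(attentions):
    """Aggregate per-layer (tokens x tokens) attention matrices, already averaged over
    heads, into the CLS token's attention over the image patches."""
    n = attentions[0].shape[0]
    rollout = np.eye(n)
    for attn in attentions:
        attn = attn + np.eye(n)                        # account for the residual connection
        attn = attn / attn.sum(axis=-1, keepdims=True)
        rollout = attn @ rollout
    return rollout[0, 1:]                              # CLS attention to the patch tokens

rng = np.random.default_rng(0)
layers = [rng.random((197, 197)) for _ in range(12)]   # 196 patches + CLS token, 12 layers
layers = [a / a.sum(axis=-1, keepdims=True) for a in layers]
saliency = attention_rollout(layers).reshape(14, 14)   # 14x14 patch grid for a 224x224 input
print(saliency.shape)
```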

On-Line Two-Dimensional Conducting Motion Analysis (실시간 이차원 지휘운동의 해석)

  • Zeung nam Bien et al.
    • Journal of the Korean Institute of Telematics and Electronics B / v.28B no.11 / pp.876-885 / 1991
  • This paper proposes an on-line method for understanding a human's conducting action observed through a vision sensor. The vision system captures images of the conducting action and extracts the image coordinates of the endpoint of the baton. A proposed algorithm, based on expert knowledge about conducting, recognizes patterns of the conducting action from the extracted image coordinates and plays the corresponding music score. Complementary algorithms are also proposed for identifying the first-beat static point and the dynamics. Through extensive experiments, the algorithm is found to detect lower edges and upper edges without error.
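A minimal sketch of detecting the lower turning points (beats) of a baton-tip trajectory as local minima of its height; the smoothing window and the synthetic trajectory are illustrative assumptions, not the paper's expert-knowledge algorithm.

```python
import numpy as np

def beat_indices(y, window=5):
    """Indices where the smoothed height of the baton tip is a local minimum."""
    ys = np.convolve(y, np.ones(window) / window, mode="same")
    return [i for i in range(1, len(ys) - 1) if ys[i - 1] > ys[i] <= ys[i + 1]]

t = np.linspace(0.0, 4.0 * np.pi, 200)
y = 100.0 + 40.0 * np.cos(t)      # synthetic up-down baton motion with two downbeats
print(beat_indices(y))            # two indices, near the trajectory's lowest points
```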

A Study on the Estimation of Object's Dimension based on the Vision System Model of Extended Kalman filtering (확장칼만 필터링의 비젼시스템 모델을 이용한 물체 치수 측정에 관한 연구)

  • Jang, W.S.; Ahn, H.C.; Kim, K.S.
    • Journal of the Korean Society for Nondestructive Testing / v.25 no.2 / pp.110-116 / 2005
  • It is very important to reduce the computational processing time when applying a vision system in real time to tasks such as inspection, the determination of an object's dimensions, and welding, because the vision system model involves a large amount of measurement data acquired by a CCD camera. Moreover, much computation time is required to estimate the parameters of the vision system model if an iterative batch estimation method such as Newton-Raphson is used. Thus, an efficient computation method such as Extended Kalman Filtering (EKF) is required to solve these problems. The EKF has the advantages of explicitly taking measurement uncertainties into account and of being a simple, efficient recursive procedure. This study develops an EKF algorithm to compute the parameters of the vision system model in real time. The vision system model involves six parameters accounting for the camera's intrinsic and extrinsic parameters. The EKF is then applied to estimate the object's dimensions. Finally, the practicality of the EKF-based estimation scheme is verified experimentally by estimating an object's dimensions.
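A minimal sketch of the recursive EKF measurement update used for constant-parameter estimation; the two-parameter measurement model below is a toy placeholder, not the paper's six-parameter camera model (and, being linear, it reduces to the standard Kalman update).

```python
import numpy as np

def ekf_update(x, P, z, u, h, H_jac, R):
    """One EKF measurement update for a constant-parameter state x with covariance P."""
    H = H_jac(x, u)
    y = z - h(x, u)                         # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

# Toy example: recursively estimate the parameters (a, b) of z = a*u + b from noisy samples.
h = lambda x, u: np.array([x[0] * u + x[1]])
H_jac = lambda x, u: np.array([[u, 1.0]])
x, P, R = np.zeros(2), 10.0 * np.eye(2), np.array([[0.01]])
rng = np.random.default_rng(1)
for u in np.linspace(0.0, 1.0, 50):
    z = np.array([2.0 * u + 0.5 + rng.normal(0.0, 0.1)])
    x, P = ekf_update(x, P, z, u, h, H_jac, R)
print(x)  # converges toward [2.0, 0.5]
```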