• Title/Summary/Keyword: Vision Technique


VRML image overlay method for Robot's Self-Localization (VRML 영상오버레이기법을 이용한 로봇의 Self-Localization)

  • Sohn, Eun-Ho;Kwon, Bang-Hyun;Kim, Young-Chul;Chong, Kil-To
    • Proceedings of the KIEE Conference
    • /
    • 2006.04a
    • /
    • pp.318-320
    • /
    • 2006
  • Inaccurate localization exposes a robot to many dangerous conditions: it may move in the wrong direction or be damaged by collision with surrounding obstacles. There are numerous approaches to self-localization, using different modalities (vision, laser range finders, ultrasonic sonar). Since sensor information is generally uncertain and noisy, much research has aimed at reducing the noise, but accuracy remains limited because most of this work is based on statistical approaches. The goal of our research is to measure the robot's location more exactly by matching a pre-built VRML 3D model against the real vision image. To determine the position of the mobile robot, a landmark-localization technique is applied. Landmarks are any detectable structures in the physical environment; some approaches use vertical lines, others specially designed markers. In this paper, specially designed markers are used as landmarks. Given a known focal length and a single image of three landmarks, it is possible to compute the angular separation between the lines of sight of the landmarks. Image-processing and neural-network pattern-matching techniques are employed to recognize the landmarks placed in the robot's working environment. After self-localization, the 2D vision scene is overlaid with the VRML scene.

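The angular-separation computation mentioned in the abstract follows directly from the pinhole camera model: each landmark pixel defines a line of sight, and the angle between two lines of sight can be found from their dot product. A minimal sketch, assuming a focal length expressed in pixels and image coordinates measured from the principal point (function and parameter names are illustrative, not from the paper):

```python
import math

def ray(u, v, f, cx=0.0, cy=0.0):
    """Line-of-sight direction of pixel (u, v) in a pinhole camera with
    focal length f (in pixels) and principal point (cx, cy)."""
    return (u - cx, v - cy, f)

def angular_separation(p1, p2, f):
    """Angle (radians) between the lines of sight of two image points."""
    d1, d2 = ray(*p1, f), ray(*p2, f)
    dot = sum(a * b for a, b in zip(d1, d2))
    n1 = math.sqrt(sum(a * a for a in d1))
    n2 = math.sqrt(sum(a * a for a in d2))
    # clamp to [-1, 1] to guard against floating-point drift
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))
```

With three landmarks, the three pairwise separations constrain the camera position relative to the known landmark layout, which is the basis of the localization step the abstract describes.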

A Study on Three-Dimensional Model Reconstruction Based on Laser-Vision Technology (레이저 비전 기술을 이용한 물체의 3D 모델 재구성 방법에 관한 연구)

  • Nguyen, Huu Cuong;Lee, Byung Ryong
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.32 no.7
    • /
    • pp.633-641
    • /
    • 2015
  • In this study, we proposed a three-dimensional (3D) scanning system based on a laser-vision technique and a rotary mechanism for automatic 3D model reconstruction. The proposed scanning system consists of a laser projector, a camera, and a turntable. A new and simple method for laser-camera calibration was proposed. 3D point-cloud data of the scanned object's surface were collected by integrating the laser profiles extracted from the laser stripe images, each corresponding to a rotary angle of the turntable. The problem of obscured laser profiles was solved by adding an additional camera at another viewpoint. From the collected point-cloud data, the 3D model of the scanned object was reconstructed using a facet representation. The reconstructed 3D models demonstrate the effectiveness and applicability of the proposed 3D scanning system to 3D model-based applications.
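The integration step the abstract describes, placing each laser profile into a common object frame using the turntable angle, can be sketched as follows. This is a simplified model assuming each profile is given as (radius, height) pairs in the fixed scanner frame; the names and the representation are assumptions for illustration, not the paper's implementation:

```python
import math

def rotate_profile(profile, theta):
    """Place one laser profile, measured in the fixed scanner frame as
    (radius r, height z) pairs, into the object frame for turntable
    angle theta (radians): each point sweeps to (r*cos, r*sin, z)."""
    return [(r * math.cos(theta), r * math.sin(theta), z) for r, z in profile]

def merge_scans(profiles, step_deg):
    """Integrate profiles captured every step_deg degrees of turntable
    rotation into one 3D point cloud."""
    cloud = []
    for i, profile in enumerate(profiles):
        cloud.extend(rotate_profile(profile, math.radians(i * step_deg)))
    return cloud
```

The resulting cloud is what a facet (triangle-mesh) reconstruction would then consume.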

Stereo Vision-Based 3D Pose Estimation of Product Labels for Bin Picking (빈피킹을 위한 스테레오 비전 기반의 제품 라벨의 3차원 자세 추정)

  • Udaya, Wijenayake;Choi, Sung-In;Park, Soon-Yong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.1
    • /
    • pp.8-16
    • /
    • 2016
  • In the field of computer vision and robotics, bin picking is an important application area in which object pose estimation is necessary. Different approaches, such as 2D feature tracking and 3D surface reconstruction, have been introduced to estimate object pose accurately. We propose a new approach in which both 2D image features and 3D surface information are used to identify the target object and estimate its pose accurately. First, we introduce a label-detection technique based on Maximally Stable Extremal Regions (MSERs), whose results are used to identify the target objects individually. Then, the 2D image features within the detected label areas are used to generate 3D surface information. Finally, we calculate the 3D position and orientation of the target objects from this surface information.
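The step from matched 2D features to 3D surface information in a stereo setup rests on standard rectified-stereo triangulation: depth is focal length times baseline over disparity. A minimal sketch under that textbook model (the function name and argument layout are assumptions, not the paper's code):

```python
def triangulate(u_left, u_right, v, f, baseline, cx=0.0, cy=0.0):
    """Recover a 3D point (camera frame) from a rectified stereo
    correspondence: Z = f*B/disparity, X = (u-cx)*Z/f, Y = (v-cy)*Z/f.
    f is in pixels, baseline in meters, so Z comes out in meters."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at or behind infinity")
    z = f * baseline / disparity
    x = (u_left - cx) * z / f
    y = (v - cy) * z / f
    return (x, y, z)
```

Triangulating the features found inside each detected label region yields the local surface patch from which position and orientation are then estimated.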

Autonomous Control System of Compact Model-helicopter

  • Kang, Chul-Ung;Jun Satake;Takakazu Ishimatsu;Yoichi Shimomoto;Jun Hashimoto
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 1998.10a
    • /
    • pp.95-99
    • /
    • 1998
  • We introduce an autonomous flying system using a model helicopter. A feature of the system is that autonomous flight is realized on a low-cost compact model helicopter. Our helicopter system is divided into two parts: one on the helicopter and the other on the ground. The helicopter carries a vision sensor and an electronic compass that includes a tilt sensor. The ground control system monitors and controls the helicopter's movement. We first introduce the configuration of our helicopter system with its vision sensor and electronic compass. To determine the 3D position and attitude of the helicopter, an image-recognition technique using a monocular image is described, based on the idea of fusing the vision sensor and the electronic compass. Finally, we show an experimental result obtained during hovering, which demonstrates the effectiveness of our system on the compact model helicopter.

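The vision/compass fusion idea can be illustrated with a deliberately simplified planar sketch: the apparent pixel size of a landmark of known physical size gives range under the pinhole model, and the compass heading resolves that range into a world-frame position. Everything here (the planar assumption, the names, the geometry of camera looking along the heading) is an illustrative assumption, not the paper's method:

```python
import math

def helicopter_position(landmark_xy, px_size, real_size, f, heading):
    """Planar monocular + compass position fix (toy model).
    Range from apparent size: range = f * real_size / px_size (pinhole),
    with f in pixels. The compass heading (radians, world frame) gives
    the direction from the camera to the landmark, so the camera sits
    at the landmark minus range along that direction."""
    rng = f * real_size / px_size
    x = landmark_xy[0] - rng * math.cos(heading)
    y = landmark_xy[1] - rng * math.sin(heading)
    return (x, y)
```

The actual paper also recovers attitude using the tilt sensor; this sketch only shows why a single monocular image becomes sufficient once the compass removes the heading ambiguity.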

Vision-based Ground Test for Active Debris Removal

  • Lim, Seong-Min;Kim, Hae-Dong;Seong, Jae-Dong
    • Journal of Astronomy and Space Sciences
    • /
    • v.30 no.4
    • /
    • pp.279-290
    • /
    • 2013
  • Due to continuous space development, the number of space objects, including space debris, in orbits around the Earth has increased, and difficulties for space development and activities are therefore expected in the near future. In this study, among the stages of space debris removal, we describe the implementation of a vision-based technique for approaching space debris from a far-range rendezvous state to a proximity state, together with the ground-test results. For vision-based object tracking, the fast and robust CAM-shift algorithm was combined with a Kalman filter. A stereo camera was used to measure the distance to the tracked object. To construct a low-cost space-environment simulation test bed, a sun simulator was used, and a two-dimensional mobile robot served as the approaching platform. Tracking was examined while changing the position of the sun simulator; the results indicate that CAM-shift achieved a tracking rate of about 87% and that the relative distance could be measured down to 0.9 m. In addition, considerations for future space-environment simulation tests are proposed.
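The CAM-shift-plus-Kalman combination typically works by letting CAM-shift report a raw window center each frame and letting a Kalman filter smooth and predict it. A minimal sketch of the filtering half, for one coordinate under a constant-velocity model (the noise values and class name are illustrative assumptions, not the paper's tuning):

```python
class Kalman1D:
    """Minimal constant-velocity Kalman filter for one coordinate of a
    tracked window center (e.g., smoothing raw CAM-shift output).
    q is the process noise, r the measurement noise."""

    def __init__(self, q=1e-3, r=1.0):
        self.x, self.v = 0.0, 0.0          # state: position, velocity
        self.p = [[1.0, 0.0], [0.0, 1.0]]  # state covariance
        self.q, self.r = q, r

    def step(self, z, dt=1.0):
        # --- predict with the constant-velocity model ---
        self.x += self.v * dt
        p = self.p
        p00 = p[0][0] + dt * (p[0][1] + p[1][0]) + dt * dt * p[1][1] + self.q
        p01 = p[0][1] + dt * p[1][1]
        p10 = p[1][0] + dt * p[1][1]
        p11 = p[1][1] + self.q
        # --- correct with the measurement z ---
        s = p00 + self.r              # innovation covariance
        k0, k1 = p00 / s, p10 / s     # Kalman gain
        y = z - self.x                # innovation
        self.x += k0 * y
        self.v += k1 * y
        self.p = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
        return self.x
```

In practice one filter per coordinate (u, v) is enough for window tracking, and the predicted center can seed the next CAM-shift search window when the target is briefly lost.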

Vision-based support in the characterization of superelastic U-shaped SMA elements

  • Casciati, F.;Casciati, S.;Colnaghi, A.;Faravelli, L.;Rosadini, L.;Zhu, S.
    • Smart Structures and Systems
    • /
    • v.24 no.5
    • /
    • pp.641-648
    • /
    • 2019
  • The authors investigate the feasibility of applying a vision-based displacement-measurement technique to the characterization of an SMA damper recently introduced in the literature. The experimental campaign tests a steel frame on a uniaxial shaking table driven by sinusoidal signals in the frequency range from 1 Hz to 5 Hz. Three different cameras are used to collect the images: an industrial camera and two commercial smartphones. The achieved results are compared, and the camera showing the best performance is then used to test the same frame after its base isolation. U-shaped shape-memory-alloy (SMA) elements are installed as dampers at the isolation level. The accelerations of the shaking table and of the frame base are measured by accelerometers. A system of markers is glued on these components, as well as along the U-shaped elements serving as dampers. The different phases of the test are discussed in an attempt to obtain as much information as possible on the behavior of the SMA elements. Several tests were carried out until the thinner U-shaped element failed.

Interface Modeling for Digital Device Control According to Disability Type in Web

  • Park, Joo Hyun;Lee, Jongwoo;Lim, Soon-Bum
    • Journal of Multimedia Information System
    • /
    • v.7 no.4
    • /
    • pp.249-256
    • /
    • 2020
  • Learning methods using various assistive and smart devices have been developed to enable independent learning by people with disabilities. Pointer control is the most important consideration for disabled users when controlling a device and the contents of a conventional graphical user interface (GUI); however, using a pointer can be difficult, depending on the disability type. Although the difficulties differ among blind, low-vision, and upper-limb-disabled users, all commonly face problems with the accuracy of selecting and executing objects. We present a multimodal interface pilot solution that enables people with various disability types to control web interactions more easily. First, we classify the types of web interaction performed with digital devices and derive the essential interactions among them. Second, to solve the problems that occur when performing these interactions, we present the technology needed for each disability type. Finally, we propose a pilot multimodal interface solution for each disability type. We identified three disability types and developed a solution for each: a remote-control voice interface for blind users, a voice-output interface applying a selective-focusing technique for low-vision users, and a gaze-tracking and voice-command interface for GUI operation for users with upper-limb disabilities.

Preliminary Study for Vision A.I-based Automated Quality Supervision Technique of Exterior Insulation and Finishing System - Focusing on Form Bonding Method - (인공지능 영상인식 기반 외단열 공법 품질감리 자동화 기술 기초연구 - 단열재 습식 부착방법을 중심으로 -)

  • Yoon, Sebeen;Lee, Byoungmin;Lee, Changsu;Kim, Taehoon
    • Proceedings of the Korean Institute of Building Construction Conference
    • /
    • 2022.04a
    • /
    • pp.133-134
    • /
    • 2022
  • This study proposed a vision artificial-intelligence-based automated supervision technology for exterior insulation and finishing systems, and basic research was conducted toward it. The proposed technology consists of an object-detection model (YOLOv5) and a component that derives the necessary information from the detection results and then determines whether the adhesion regulations for exterior insulation are complied with. In a test, the proposed model achieved a judgment accuracy of about 70%. The results of this study are expected to help secure exterior-insulation quality and thereby contribute to realizing energy-saving, eco-friendly buildings. As further research, the accuracy of the object-detection model should be improved by enlarging the training dataset, and additional related regulations, such as the adhesive area ratio, should be checked.

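The rule-checking component downstream of the detector can be sketched as a simple geometric check over the detected boxes. This is a hypothetical illustration of one such rule (the adhesive-area-ratio check the abstract names as future work); the 0.4 threshold and the box format are assumptions, not values from the paper or from any regulation:

```python
def area(box):
    """Area of an axis-aligned box given as (x1, y1, x2, y2)."""
    return max(0.0, box[2] - box[0]) * max(0.0, box[3] - box[1])

def adhesion_compliant(board_box, dab_boxes, min_ratio=0.4):
    """Hypothetical compliance rule: the summed area of detected
    adhesive dabs must cover at least min_ratio of the insulation-board
    area. Boxes would come from an object detector such as YOLOv5."""
    covered = sum(area(b) for b in dab_boxes)
    return covered / area(board_box) >= min_ratio
```

A real implementation would also need to handle overlapping detections and perspective distortion before areas become comparable.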

Development of a Vision System for the Complete Inspection of CO2 Welding Equipment of Automotive Body Parts (자동차 차체부품 CO2용접설비 전수검사용 비전시스템 개발)

  • Ju-Young Kim;Min-Kyu Kim
    • Journal of Sensor Science and Technology
    • /
    • v.33 no.3
    • /
    • pp.179-184
    • /
    • 2024
  • In the automotive industry, welding is a fundamental joining technique used for components such as steel, molds, and automobile parts. However, accurate inspection is required to verify the reliability of the welded components. In this study, we investigate the detection of weld beads using 2D image processing in an automatic recognition system. The sample image is obtained with a 2D vision camera embedded in a lighting system, from which the bead region is successfully extracted after image processing. In this process, the soot-removal algorithm, which adopts adaptive local gamma correction and gray color coordinates, plays an important role in accurate weld-bead detection. Using this automatic recognition system, geometric parameters of the weld bead, such as its length, width, angle, and defect size, can also be determined. Finally, by comparing the obtained data with industrial standards, we can decide whether the weld bead is at an acceptable level.
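The gamma-correction building block of the soot-removal step can be sketched as follows. The per-block "adaptive" mapping shown here (gamma chosen from the block's mean brightness) is a toy heuristic for illustration only; the paper's actual adaptive local scheme is not specified in the abstract:

```python
def gamma_correct(gray, gamma):
    """Standard gamma correction of 8-bit gray levels:
    out = 255 * (g / 255) ** gamma. gamma < 1 brightens, > 1 darkens."""
    return [round(255 * (g / 255) ** gamma) for g in gray]

def adaptive_local_gamma(block):
    """Toy 'adaptive local' variant: pick gamma from the block's mean
    brightness so dark, soot-covered regions (low mean) are brightened
    (gamma < 1) while bright bead regions stay nearly unchanged.
    The mapping mean/128 is an illustrative assumption."""
    mean = sum(block) / len(block)
    gamma = max(0.3, mean / 128.0)
    return gamma_correct(block, gamma)
```

Applied tile by tile across the image, such a correction lifts soot-darkened areas toward the brightness of clean metal, making the bead boundary easier to segment.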

A Runge-Kutta scheme for smart control mechanism with computer-vision robotics

  • ZY Chen;Huakun Wu;Yahui Meng;Timothy Chen
    • Smart Structures and Systems
    • /
    • v.34 no.2
    • /
    • pp.117-127
    • /
    • 2024
  • This paper presents a novel approach in which the smart control of robotics is realized by a fuzzy controller and an appropriate Runge-Kutta scheme. A recently proposed integral inequality is selected based on the free-weighting-matrix approach, and a less conservative stability criterion is given in the form of linear matrix inequalities (LMIs). Target information obtained through image processing is passed to an Arduino for smart control of the computer-vision robot, and an infrared beacon is utilized in the practical demonstrations. A fuzzy controller derived from fuzzy Runge-Kutta-type functions is applied to the system, which is then stabilized asymptotically. In this study, a fuzzy controller and a fuzzy observer are designed via the parallel distributed compensation technique to stabilize the system. This paper achieves real-time following of three vehicles, with improvements made in many areas along the way. Finally, the information is transmitted to the Arduino via I2C so that it follows the self-propelled vehicle. The proposed scheme is validated in simulations and real-time smart control tests.
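For reference, the classical fourth-order Runge-Kutta scheme underlying the title reads as follows. This is the standard textbook method, shown here to fix notation; it is not the paper's specific fuzzy Runge-Kutta construction:

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, y0, h, steps):
    """Advance y' = f(t, y) from (t0, y0) by the given number of steps."""
    t, y = t0, y0
    for _ in range(steps):
        y = rk4_step(f, t, y, h)
        t += h
    return y
```

The method's global error is O(h^4), which is why it is a common choice for propagating controller and plant states between sampling instants.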