• Title/Summary/Keyword: vision sensor (비전 센서)

System for Measuring the Welding Profile Using Vision and Structured Light (비전센서와 구조화빔을 이용한 용접 형상 측정 시스템)

  • Kim, Chang-Hyeon;Choe, Tae-Yong;Lee, Ju-Jang;Seo, Jeong;Park, Gyeong-Taek;Gang, Hui-Sin
    • Proceedings of the Korean Society of Laser Processing Conference / 2005.11a / pp.50-56 / 2005
  • Robot systems are widely used in many industrial fields, including welding manufacturing. An essential task in operating a welding robot is acquiring the position and/or shape of the parent metal. For seam tracking or robot tracking, many kinds of contact and non-contact sensors are used, and vision has recently become the most popular. This paper describes the development of a system that measures the shape of the welded part using a line-type structured laser diode and a vision sensor. The system includes correction of the radial distortion that is often found in images taken by cameras with a short focal length. The Direct Linear Transformation (DLT) method is used for camera calibration, and the three-dimensional shape of the parent metal is obtained after a simple linear transformation. Several demonstrations illustrate the performance of the developed system.

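The abstract above names the Direct Linear Transformation (DLT) as its calibration step. Below is a minimal, illustrative NumPy sketch of plain DLT from 3D-2D correspondences; the function names and the omission of the radial-distortion correction mentioned in the paper are my assumptions, not details of the authors' implementation.

```python
# Illustrative sketch only: basic DLT camera calibration from 3D-2D points.
import numpy as np

def dlt_calibrate(points_3d, points_2d):
    """Estimate a 3x4 projection matrix P from >= 6 correspondences."""
    assert len(points_3d) == len(points_2d) >= 6
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    A = np.asarray(rows, dtype=float)
    # The solution is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    P = vt[-1].reshape(3, 4)
    return P / P[-1, -1]

def project(P, point_3d):
    """Project a 3D point with the estimated matrix (homogeneous division)."""
    x = P @ np.append(point_3d, 1.0)
    return x[:2] / x[2]
```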

A Practical Solution toward SLAM in Indoor environment Based on Visual Objects and Robust Sonar Features (가정환경을 위한 실용적인 SLAM 기법 개발 : 비전 센서와 초음파 센서의 통합)

  • Ahn, Sung-Hwan;Choi, Jin-Woo;Choi, Min-Yong;Chung, Wan-Kyun
    • The Journal of Korea Robotics Society / v.1 no.1 / pp.25-35 / 2006
  • Improving the practicality of SLAM requires various sensors to be fused effectively in order to cope with the uncertainty induced by both the environment and the sensors. Combining sonar and vision sensors offers economic efficiency and complementary cooperation. In particular, it can remedy the false data association and divergence problems of sonar sensors, and overcome the low-frequency SLAM updates caused by the computational burden and the sensitivity to illumination changes of vision sensors. In this paper, we propose a SLAM method that combines sonar sensors and a stereo camera. It consists of two schemes: extracting robust point and line features from sonar data, and recognizing planar visual objects using a multi-scale Harris corner detector and its SIFT descriptors against a pre-constructed object database. Fusing the sonar features and visual objects through EKF-SLAM provides correct data association via object recognition and high-frequency updates via the sonar features. As a result, the robustness and accuracy of SLAM in indoor environments are increased. The performance of the proposed algorithm was verified by experiments in a home-like environment.

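The fusion described above is an EKF-SLAM correction step driven by recognized objects and sonar features. The following is a generic, hedged sketch of such an EKF measurement update in Python/NumPy; the state layout, measurement model, and noise terms are assumptions rather than the paper's formulation.

```python
# Illustrative sketch only: a generic EKF measurement update of the kind used
# when a recognized object or sonar feature is re-observed in EKF-SLAM.
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One EKF correction step.
    x, P : state mean and covariance
    z    : actual measurement (e.g. range/bearing to a feature)
    h    : predicted measurement h(x)
    H    : Jacobian of the measurement model at x
    R    : measurement noise covariance
    """
    y = z - h                          # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Tiny usage example (assumed): a 2D position state observed directly.
x = np.array([1.0, 2.0]); P = np.eye(2) * 0.5
H = np.eye(2); R = np.eye(2) * 0.1
z = np.array([1.2, 1.9])
x, P = ekf_update(x, P, z, h=H @ x, H=H, R=R)
```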

Development of A Vision-based Lane Detection System with Considering Sensor Configuration Aspect (센서 구성을 고려한 비전 기반 차선 감지 시스템 개발)

  • Park Jaehak;Hong Daegun;Huh Kunsoo;Park Jahnghyon;Cho Dongil
    • Transactions of the Korean Society of Automotive Engineers / v.13 no.4 / pp.97-104 / 2005
  • Vision-based lane sensing systems require accurate and robust performance in lane detection. In addition, there is a trade-off between computational burden and processor cost, which must be considered when implementing such systems in passenger cars. In this paper, a stereo vision-based lane detection system is developed with sensor configuration aspects taken into account. An inverse perspective mapping method is formulated based on the relative correspondence between the left and right cameras so that the 3-dimensional road geometry can be reconstructed in a robust manner. A new monitoring model for estimating the road geometry parameters is constructed to reduce the number of measured signals. The selection of the sensor configuration and specifications is investigated by utilizing the characteristics of standard highways. Based on the sensor configuration, it is shown that an appropriate sensing region in the camera image coordinates can be determined. The proposed system is implemented on a passenger car and verified experimentally.
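
The abstract above formulates an inverse perspective mapping from the stereo correspondence. As a simpler illustration of the general idea, the sketch below warps a single road image to a bird's-eye view with an OpenCV homography; the pixel-to-ground correspondences are made-up values for a hypothetical camera, not the paper's calibration.

```python
# Illustrative sketch only: inverse perspective mapping (IPM) to a top-down
# road view via a homography. All point coordinates are assumed values.
import cv2
import numpy as np

# Four image points on the road plane (pixels) and their ground-plane
# coordinates (e.g. centimetres in a top-down view) -- assumed values.
img_pts = np.float32([[420, 450], [860, 450], [1180, 700], [100, 700]])
gnd_pts = np.float32([[0, 0], [400, 0], [400, 600], [0, 600]])

H = cv2.getPerspectiveTransform(img_pts, gnd_pts)

def to_birds_eye(frame):
    """Warp a camera frame into a 400x600 top-down view of the road."""
    return cv2.warpPerspective(frame, H, (400, 600))

# Lane pixels can then be found in the warped view, e.g. by thresholding a
# grayscale image and fitting lines or polynomials to the bright columns.
```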

Deep Image Retrieval using Attention and Semantic Segmentation Map (관심 영역 추출과 영상 분할 지도를 이용한 딥러닝 기반의 이미지 검색 기술)

  • Minjung Yoo;Eunhye Jo;Byoungjun Kim;Sunok Kim
    • Journal of Broadcast Engineering / v.28 no.2 / pp.230-237 / 2023
  • Self-driving is a key technology of the fourth industrial revolution and can be applied to various platforms such as cars, drones, and robots. Among its components, localization, which identifies the location of objects or users using GPS, sensors, and maps, is one of the key technologies for implementing autonomous driving. Localization can be performed using GPS or LiDAR, but such equipment is expensive and heavy, and precise location estimation is difficult in places with radio interference, such as underground spaces or tunnels. In this paper, to compensate for this, we propose an image retrieval method that uses an attention module and image segmentation maps, taking color images acquired with low-cost vision cameras as input.
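
A retrieval system of the kind described above typically ranks database images by the similarity of attention-weighted global descriptors. The sketch below shows that ranking step only, with NumPy; the masked pooling and cosine ranking are generic stand-ins, not the authors' network or training procedure.

```python
# Illustrative sketch only: retrieval by cosine similarity between globally
# pooled feature maps, where an attention/segmentation mask re-weights the
# pooling. Shapes and names are assumptions; features here are random stand-ins.
import numpy as np

def masked_global_descriptor(feature_map, mask):
    """feature_map: (C, H, W) activations; mask: (H, W) weights in [0, 1]."""
    w = mask / (mask.sum() + 1e-8)
    return (feature_map * w[None, :, :]).sum(axis=(1, 2))  # (C,) descriptor

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def retrieve(query_desc, database_descs):
    """Return database indices sorted from most to least similar."""
    scores = [cosine_similarity(query_desc, d) for d in database_descs]
    return np.argsort(scores)[::-1]

# Example with random stand-in features (C=64, H=W=8):
rng = np.random.default_rng(0)
db = [masked_global_descriptor(rng.random((64, 8, 8)), rng.random((8, 8))) for _ in range(5)]
q = masked_global_descriptor(rng.random((64, 8, 8)), rng.random((8, 8)))
print(retrieve(q, db))
```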

Primer Coating Inspection System Development for Automotive Windshield Assembly Automation Facilities (자동차 글라스 조립 자동화설비를 위한 프라이머 도포검사 비전시스템 개발)

  • Ju-Young Kim;Soon-Ho Yang;Min-Kyu Kim
    • Journal of Sensor Science and Technology / v.32 no.2 / pp.124-130 / 2023
  • Implementing flexible production systems for automotive design-part assembly, both domestically and abroad, has increased the demand for automation and power reduction. Consequently, a transition to a hybrid production method is observed, in which multiple vehicle models are assembled on a single assembly line. In the automotive glass mounting system, multiple robots, 3D vision sensors, mounting positions, and correction software form a complex configuration, so automation is required owing to the significant difficulty of the assembly process. This study presents primer lighting and an inspection algorithm that are robust to the assembly environment of real automotive design parts, using high-power 'ㄷ'-shaped inclined LED lighting. Furthermore, a 2D-camera-based primer coating inspection system, the core technology of the glass mounting system, was developed. A primer application demo line applicable to an actual automobile production line was established using the proposed high-power lighting and algorithm, and the coating inspection performance was verified on this demo system. The experimental results confirmed that the performance of the proposed system exceeded the level required to satisfy the automotive requirements.
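
As a rough illustration of what a 2D-camera primer inspection can check, the sketch below thresholds the dark primer band and verifies its width column by column with OpenCV; the threshold, the assumed horizontal bead orientation, and the pass criterion are arbitrary assumptions, not the paper's parameters.

```python
# Illustrative sketch only: a simple primer-bead coverage check.
import cv2
import numpy as np

def inspect_primer(gray, min_width_px=12, max_gap_ratio=0.02):
    """gray: grayscale image of the glass edge with the primer band visible.
    Assumes the bead runs roughly horizontally across the image."""
    # Primer is darker than the glass; isolate it with an inverted threshold.
    _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))

    widths = mask.sum(axis=0) / 255           # bead width per image column
    gaps = np.count_nonzero(widths < min_width_px)
    gap_ratio = gaps / mask.shape[1]
    return gap_ratio <= max_gap_ratio, widths  # (pass/fail, per-column widths)
```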

Development of a Vision System for the Complete Inspection of CO2 Welding Equipment of Automotive Body Parts (자동차 차체부품 CO2용접설비 전수검사용 비전시스템 개발)

  • Ju-Young Kim;Min-Kyu Kim
    • Journal of Sensor Science and Technology / v.33 no.3 / pp.179-184 / 2024
  • In the automotive industry, welding is a fundamental joining technique used for components such as steel, molds, and automobile parts. However, accurate inspection is required to verify the reliability of the welded components. In this study, we investigate the detection of weld beads using 2D image processing in an automatic recognition system. The sample image is obtained using a 2D vision camera embedded in a lighting system, from which a portion of the bead is successfully extracted after image processing. In this process, the soot removal algorithm, which adopts adaptive local gamma correction and gray color coordinates, plays an important role in accurate weld bead detection. Using this automatic recognition system, geometric parameters of the weld bead, such as its length, width, angle, and defect size, can also be determined. Finally, by comparing the obtained data with industrial standards, we can determine whether the weld bead is at an acceptable level.
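
The abstract above credits adaptive local gamma correction for soot removal before bead extraction. The sketch below shows one plausible tile-wise version in Python; the tile size and gamma mapping are assumptions, and the paper's exact formulation may differ.

```python
# Illustrative sketch only: local (tile-wise) adaptive gamma correction that
# brightens dark (sooty) tiles more than bright ones.
import numpy as np

def adaptive_local_gamma(gray, tile=64):
    """gray: uint8 grayscale image; returns a gamma-corrected uint8 image."""
    out = np.zeros_like(gray, dtype=np.float32)
    norm = gray.astype(np.float32) / 255.0
    h, w = gray.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            block = norm[y:y+tile, x:x+tile]
            mean = float(block.mean()) + 1e-6
            # Choose gamma so the tile mean maps toward mid-gray, within limits.
            gamma = np.clip(np.log(0.5) / np.log(mean), 0.5, 3.0)
            out[y:y+tile, x:x+tile] = block ** gamma
    return (out * 255).astype(np.uint8)

# The corrected image can then be thresholded (e.g. Otsu) to extract the bead
# region and measure its length, width and angle from the resulting contour.
```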

Distance measurement System from detected objects within Kinect depth sensor's field of view and its applications (키넥트 깊이 측정 센서의 가시 범위 내 감지된 사물의 거리 측정 시스템과 그 응용분야)

  • Niyonsaba, Eric;Jang, Jong-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2017.05a / pp.279-282 / 2017
  • The Kinect depth sensor, a depth camera developed by Microsoft as a natural user interface for games, has emerged as a very useful tool in the computer vision field. In this paper, taking advantage of the Kinect depth sensor and its high frame rate, we developed a distance measurement system using the Kinect camera and tested it for unmanned vehicles, which need vision systems to perceive the surrounding environment as humans do in order to detect objects in their path. The Kinect depth sensor is used to detect objects in its field of view and to measure the distance from those objects to the vision sensor. Each detected object is checked to determine whether it is a real object or pixel noise, which reduces processing time by ignoring pixels that are not part of a real object. Using depth segmentation techniques together with the OpenCV library for image processing, we can identify objects within the Kinect camera's field of view and measure their distance to the sensor. Tests show promising results, indicating that this system can also be used in autonomous vehicles equipped with a low-cost range sensor such as the Kinect camera for further processing, depending on the application, once they come within a certain distance of detected objects.

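A depth-segmentation pipeline like the one above can be reduced to masking nearby depth pixels, discarding small blobs as noise, and reporting a robust distance per object. The OpenCV sketch below illustrates this; the depth units, thresholds, and the synthetic frame are assumptions standing in for real Kinect frames.

```python
# Illustrative sketch only: segment nearby objects in a depth frame (assumed
# to be in millimetres) and report a robust distance for each.
import cv2
import numpy as np

def nearby_objects(depth_mm, max_range_mm=3000, min_area_px=500):
    """Return (distance_mm, bounding_box) for objects closer than max_range."""
    valid = (depth_mm > 0) & (depth_mm < max_range_mm)
    mask = valid.astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    results = []
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < min_area_px:
            continue  # ignore small blobs that are likely pixel noise
        x, y, w, h = cv2.boundingRect(c)
        region = depth_mm[y:y+h, x:x+w]
        distance = float(np.median(region[region > 0]))  # robust distance
        results.append((distance, (x, y, w, h)))
    return results

# Example with a synthetic frame (a real system would read Kinect frames):
frame = np.full((480, 640), 4000, dtype=np.uint16)
frame[200:300, 300:400] = 1500            # a fake object 1.5 m away
print(nearby_objects(frame))
```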

A Study on the Selection and Applicability Analysis of 3D Terrain Modeling Sensor for Intelligent Excavation Robot (지능형 굴삭 로봇의 개발을 위한 로컬영역 3차원 모델링 센서 선정 및 현장 적용성 분석에 관한 연구)

  • Yoo, Hyun-Seok;Kwon, Soon-Wook;Kim, Young-Suk
    • KSCE Journal of Civil and Environmental Engineering Research / v.33 no.6 / pp.2551-2562 / 2013
  • Since 2006, an intelligent excavation robot that automatically performs earthwork without an operator has been under development in Korea. Technologies for automatically recognizing the terrain of the work environment and detecting objects such as obstacles or dump trucks are essential for its work quality and safety. In several countries, terrestrial 3D laser scanners and stereo vision cameras have been used to model the local area around the workspace of automated construction equipment. However, these attempts have problems such as the high cost of building the sensor system and the long processing time required to eliminate noise from the resulting 3D model. The objectives of this study are to analyze the advantages of existing 3D modeling sensors and to examine their applicability for practical use by means of the Analytic Hierarchy Process (AHP). In this study, the 3D modeling quality and accuracy of the modeling sensors were tested in a real earthwork environment.
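
The study above ranks candidate sensors with the Analytic Hierarchy Process (AHP). The sketch below shows the standard AHP computation of criteria weights and a consistency ratio from a pairwise comparison matrix; the example matrix is a made-up assumption, not data from the study.

```python
# Illustrative sketch only: AHP weights via the principal eigenvector.
import numpy as np

def ahp_weights(pairwise):
    """Return (criteria weights, consistency ratio) for an AHP matrix."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()
    # Consistency index against Saaty's random-index values.
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.9, 5: 1.12, 6: 1.24}[n]
    ci = (eigvals[k].real - n) / (n - 1)
    cr = ci / ri if ri else 0.0
    return w, cr

# Example: criterion 1 is 3x as important as 2 and 5x as important as 3
# (e.g. accuracy vs. speed vs. cost -- assumed, not from the study).
M = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
weights, consistency_ratio = ahp_weights(M)
print(weights, consistency_ratio)
```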

A Study on forest fires Prediction and Detection Algorithm using Intelligent Context-awareness sensor (상황인지 센서를 활용한 지능형 산불 이동 예측 및 탐지 알고리즘에 관한 연구)

  • Kim, Hyeng-jun;Shin, Gyu-young;Woo, Byeong-hun;Koo, Nam-kyoung;Jang, Kyung-sik;Lee, Kang-whan
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.6 / pp.1506-1514 / 2015
  • In this paper, we propose a forest fire prediction and detection system that provides fire prediction and detection using context-awareness sensors. Because a forest fire spreads over a wide area, it is difficult to detect its occurrence with a single camera sensor. We therefore propose an algorithm that acquires temperature, humidity, CO2, and flame-presence information in real time, compares the data against multiple conditions, and analyzes and determines weightings for fire in complex situations. In addition, by dividing the state of the fire zone, differentiated management of intensive fire detection and prediction is possible. We thus propose an algorithm that performs prediction and detection from fire parameters such as temperature, humidity, CO2, and flame in real time using context-awareness sensors, and we also suggest an algorithm that provides the path of fire diffusion and predicts secure safety zones.
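
The algorithm above weights temperature, humidity, CO2, and flame information under multiple conditions. The sketch below is a toy, hedged version of such a weighted score and zone state; all weights and thresholds are arbitrary assumptions, not values from the paper.

```python
# Illustrative sketch only: a weighted multi-sensor fire score and zone state.
def fire_score(temp_c, humidity_pct, co2_ppm, flame_detected):
    """Return a score in [0, 1]; higher means a fire is more likely."""
    score = 0.0
    score += 0.30 * min(max((temp_c - 30.0) / 40.0, 0.0), 1.0)        # hot
    score += 0.20 * min(max((30.0 - humidity_pct) / 30.0, 0.0), 1.0)  # dry
    score += 0.25 * min(max((co2_ppm - 600.0) / 1400.0, 0.0), 1.0)    # CO2 rise
    score += 0.25 * (1.0 if flame_detected else 0.0)                  # flame
    return score

def fire_state(score, detect_threshold=0.6, warn_threshold=0.35):
    """Map the score to a zone state used for differentiated management."""
    if score >= detect_threshold:
        return "DETECTED"
    if score >= warn_threshold:
        return "WARNING"
    return "NORMAL"

print(fire_state(fire_score(55.0, 18.0, 1200.0, True)))  # -> DETECTED
```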

A Micro-robotic Platform for Micro/nano Assembly: Development of a Compact Vision-based 3 DOF Absolute Position Sensor (마이크로/나노 핸들링을 위한 마이크로 로보틱 플랫폼: 비전 기반 3자유도 절대위치센서 개발)

  • Lee, Jae-Ha;Breguet, Jean Marc;Clavel, Reymond;Yang, Seung-Han
    • Journal of the Korean Society for Precision Engineering / v.27 no.1 / pp.125-133 / 2010
  • A versatile micro-robotic platform for micro/nano-scale assembly is in demand in a variety of application areas such as micro-biology and nanotechnology. In the near future, a flexible and compact platform could be used effectively inside a scanning electron microscope chamber. We are developing a platform that consists of miniature mobile robots and a compact positioning stage with multiple degrees of freedom. This paper presents the design and implementation of a low-cost, compact, multi-degree-of-freedom position sensor that is capable of measuring absolute translational and rotational displacement. The proposed sensor is implemented using a CMOS-type image sensor and a target with specific hole patterns. Statistical design of experiments was applied to find the optimal design of the target. Efficient algorithms for image processing and absolute position decoding are discussed. A simple calibration that eliminates the influence of inaccuracies in the fabricated target on the measuring performance is also presented. The developed sensor was characterized using a laser interferometer. It can be concluded that the sensor system has submicron resolution and an accuracy of ±4 μm over the full travel range. The proposed vision-based sensor is cost-effective and serves as a compact feedback device for the implementation of a micro-robotic platform.
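
A vision-based absolute position sensor of this kind ultimately reduces to locating target features in the image and recovering translation and rotation. The sketch below estimates an in-plane pose from hole-pattern centroids with OpenCV; the absolute-position decoding from the specific hole pattern is omitted, and the thresholding assumptions (dark holes, Otsu) are mine, not the paper's.

```python
# Illustrative sketch only: estimate in-plane translation and rotation from
# the centroids of a hole-pattern target seen by an image sensor.
import cv2
import numpy as np

def pattern_pose(gray):
    """Return (cx, cy, angle_deg) of the detected hole pattern."""
    # Assume the holes appear darker than the background; invert + Otsu.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 20:                              # ignore tiny specks
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    pts = np.array(centroids, dtype=np.float32)
    cx, cy = pts.mean(axis=0)                          # pattern translation
    # Principal axis of the centroid cloud gives the in-plane rotation.
    _, _, vt = np.linalg.svd(pts - pts.mean(axis=0), full_matrices=False)
    angle = float(np.degrees(np.arctan2(vt[0, 1], vt[0, 0])))
    return float(cx), float(cy), angle
```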