• Title/Summary/Keyword: laser distance sensor


A Study of a Dual-Electromagnetic Sensor for Automatic Weld Seam Tracking (용접선 자동추적을 위한 이중 전자기센서의 개발에 관한 연구)

  • 신준호;김재응
    • Journal of Welding and Joining, v.18 no.4, pp.70-75, 2000
  • Weld seam tracking systems for arc welding use various kinds of sensors, such as arc sensors, vision sensors, and laser displacement sensors. Among the variety of sensors available, the electromagnetic sensor is one of the most useful, especially for sheet-metal butt-joint arc welding, primarily because it is hardly affected by the intense arc light and fume generated during welding, or by the surface condition of the weldments. In this study, a dual-electromagnetic sensor, which utilizes the variation of the current induced in the sensing coil by the eddy-current variation of the metal near the sensor, was developed for arc welding of sheet-metal butt joints. The dual-electromagnetic sensor thus detects the offset displacement of the weld line from the center of the sensor head even when there is no gap in the joint. The design variables of the sensor were determined for maximum sensing capability through repeated experiments. Seam tracking is performed by correcting the sensor position by the measured offset displacement every sampling period. The experimental results showed that the developed sensor detects the weld seam excellently when the sensor-to-workpiece distance is less than about 5 mm, and that the system tracks the seam of sheet-metal butt joints with excellent accuracy.

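The tracking scheme described above, correcting the sensor position by the measured offset every sampling period, can be sketched as a simple discrete correction loop. The sensor model and unity gain below are illustrative assumptions, not the paper's actual design.

```python
def track_seam(seam_offsets, gain=1.0):
    """Follow a weld line by correcting the torch/sensor position each
    sampling period by the offset the sensor reports.

    seam_offsets: hypothetical lateral position of the weld line relative
    to a fixed reference at each sampling instant.
    Returns the corrected positions after each sampling period.
    """
    position = 0.0
    trajectory = []
    for seam in seam_offsets:
        offset = seam - position   # what the dual-electromagnetic sensor would measure
        position += gain * offset  # correct the position by the measured offset
        trajectory.append(position)
    return trajectory
```

With a gain of 1.0, each correction places the head exactly on the seam one sampling period later; a smaller gain trades responsiveness for smoother motion.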

Person-following of a Mobile Robot using a Complementary Tracker with a Camera-laser Scanner (카메라-레이저스캐너 상호보완 추적기를 이용한 이동 로봇의 사람 추종)

  • Kim, Hyoung-Rae;Cui, Xue-Nan;Lee, Jae-Hong;Lee, Seung-Jun;Kim, Hakil
    • Journal of Institute of Control, Robotics and Systems, v.20 no.1, pp.78-86, 2014
  • This paper proposes a method of tracking an object for a person-following mobile robot by combining a monocular camera and a laser scanner, where each sensor compensates for the weaknesses of the other. For human-robot interaction, a mobile robot needs to maintain its distance from a moving person. Maintaining distance consists of two parts: object tracking and person-following. Object tracking consists of particle filtering and online learning using shape features extracted from an image. A monocular camera easily fails to track a person due to its narrow field of view and the influence of illumination changes, and has therefore been used together with a laser scanner. After constructing the geometric relation between the differently oriented sensors, the proposed method demonstrates its robustness in tracking and following a person with a success rate of 94.7% in indoor environments with varying lighting conditions, even when a moving object is located between the robot and the person.
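The object-tracking stage above combines particle filtering with online-learned shape features. A minimal one-dimensional particle filter with assumed Gaussian motion and measurement models (the paper's actual state and image-feature likelihood are richer) can be sketched as:

```python
import math
import random

def particle_filter_step(particles, measurement, motion_std=0.1, meas_std=0.5):
    """One predict-weight-resample cycle of a 1-D particle filter."""
    # Predict: diffuse each particle with random motion noise.
    predicted = [p + random.gauss(0.0, motion_std) for p in particles]
    # Weight: Gaussian likelihood of the measurement given each particle.
    weights = [math.exp(-0.5 * ((measurement - p) / meas_std) ** 2) for p in predicted]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: draw new particles in proportion to their weights.
    return random.choices(predicted, weights=weights, k=len(predicted))
```

Repeated calls pull the particle cloud toward the measurements; in the paper's setting the measurement likelihood would come from the learned shape features rather than a scalar Gaussian.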

A Study on the Multipurpose Golf Putting Range Finder using IR Laser Sensor and Inertial Sensor (IR 레이저 센서 및 관성 센서를 이용한 다목적 골프 퍼팅 거리 측정기에 대한 연구)

  • Min-Seoung Shin;Dae-Woong Kang;Ki-Deok Kim;Ji-Hwan Kim;Chul-Sun Lee;Yun-Seok Ko
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.18 no.4, pp.669-676, 2023
  • In this paper, a multi-purpose golf putting range finder based on an IR laser sensor and an inertial sensor was designed and built. Its main purpose is golf putting distance measurement: it measures distance and slope within a 50 m outdoor range and, at the same time, the temperature information that affects putting. In addition, the range finder supports house maintenance work by providing length and levelness measurements within an 80 m indoor range, and provides safety from indoor or vehicle fires by delivering indoor temperature readings to a mobile phone through a link with a web server. To evaluate the accuracy of the proposed method and its interworking with a smartphone, a prototype was produced and a web server was built; its usefulness was confirmed by an acceptable error rate within 5% in repeated experiments.

Experimental Study of Spacecraft Pose Estimation Algorithm Using Vision-based Sensor

  • Hyun, Jeonghoon;Eun, Youngho;Park, Sang-Young
    • Journal of Astronomy and Space Sciences, v.35 no.4, pp.263-277, 2018
  • This paper presents a vision-based relative pose estimation algorithm and its validation through both numerical and hardware experiments. The algorithm and the hardware system were designed simultaneously, considering the actual experimental conditions. Two estimation techniques were utilized to estimate the relative pose: a nonlinear least squares method for initial estimation, and an extended Kalman filter for subsequent on-line estimation. A measurement model of the vision sensor and equations of motion including nonlinear perturbations were utilized in the estimation process. Numerical simulations were performed and analyzed for both autonomous docking and formation flying scenarios. A configuration of LED-based beacons was designed to avoid measurement singularity, and its structural information was implemented in the estimation algorithm. The proposed algorithm was verified again in the experimental environment using the Autonomous Spacecraft Test Environment for Rendezvous In proXimity (ASTERIX) facility. Additionally, a laser distance meter was added to the estimation algorithm to improve the relative position estimation accuracy. Throughout this study, the performance required for autonomous docking was characterized by confirming how the estimation accuracy changes with the level of measurement error. In addition, the hardware experiments confirmed the effectiveness of the suggested algorithm and its applicability to actual tasks in the real world.
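The initial-estimation step above uses nonlinear least squares. As a hedged sketch, the Gauss-Newton iteration below recovers a 2-D position from range measurements to beacons at known locations; the range-only measurement model is an assumption for illustration, since the paper estimates a full pose from camera measurements of LED beacons.

```python
import math

def estimate_position(beacons, ranges, guess=(0.0, 0.0), iters=20):
    """Gauss-Newton nonlinear least squares: recover a 2-D position
    from range measurements to beacons at known locations."""
    x, y = guess
    for _ in range(iters):
        # Accumulate J^T J and J^T r for residuals r_i = |p - b_i| - range_i.
        JTJ = [[0.0, 0.0], [0.0, 0.0]]
        JTr = [0.0, 0.0]
        for (bx, by), r in zip(beacons, ranges):
            d = math.hypot(x - bx, y - by)
            if d == 0.0:
                continue
            jx, jy = (x - bx) / d, (y - by) / d  # Jacobian row of |p - b_i|
            res = d - r
            JTJ[0][0] += jx * jx; JTJ[0][1] += jx * jy
            JTJ[1][0] += jy * jx; JTJ[1][1] += jy * jy
            JTr[0] += jx * res;   JTr[1] += jy * res
        # Solve the 2x2 normal equations by Cramer's rule and step.
        det = JTJ[0][0] * JTJ[1][1] - JTJ[0][1] * JTJ[1][0]
        if abs(det) < 1e-12:
            break
        dx = (JTJ[1][1] * JTr[0] - JTJ[0][1] * JTr[1]) / det
        dy = (JTJ[0][0] * JTr[1] - JTJ[1][0] * JTr[0]) / det
        x, y = x - dx, y - dy
    return x, y
```

The converged estimate would then seed the extended Kalman filter for on-line tracking.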

Cylindrical Object Recognition using Sensor Data Fusion (센서데이터 융합을 이용한 원주형 물체인식)

  • Kim, Dong-Gi;Yun, Gwang-Ik;Yun, Ji-Seop;Gang, Lee-Seok
    • Journal of Institute of Control, Robotics and Systems, v.7 no.8, pp.656-663, 2001
  • This paper presents a sensor fusion method to recognize a cylindrical object using a CCD camera, a laser slit beam, and ultrasonic sensors mounted on a pan/tilt device. For object recognition with the vision sensor, an active light source projects a stripe pattern of light onto the object surface, and the 2D image data are transformed into 3D data using the geometry between the camera and the laser slit beam. The ultrasonic sensor uses a transducer array mounted horizontally on the pan/tilt device. The time of flight is estimated by finding the maximum correlation between the received ultrasonic pulse and a set of stored templates (a matched filter). The distance is calculated by simply multiplying the time of flight by the speed of sound, and the maximum amplitude of the filtered signal is used to determine the face angle to the object. To determine the position and radius of cylindrical objects, statistical sensor fusion is used. Experimental results show that the fused data increase the reliability of object recognition.

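The matched-filter time-of-flight step described above, finding the lag of maximum correlation between the received pulse and a stored template and multiplying the resulting time by the speed of sound, can be sketched as follows; the signals and sample rate are illustrative.

```python
def matched_filter_tof(received, template, sample_rate, speed_of_sound=343.0):
    """Estimate the time of flight as the lag that maximizes the
    cross-correlation between the received signal and a template,
    then convert it to a distance (time * speed of sound, as in the
    abstract). Also returns the peak correlation, whose amplitude the
    paper uses to determine the face angle to the object."""
    best_lag, best_corr = 0, float("-inf")
    for lag in range(len(received) - len(template) + 1):
        corr = sum(r * t for r, t in
                   zip(received[lag:lag + len(template)], template))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    tof = best_lag / sample_rate
    return tof * speed_of_sound, best_corr
```

A production implementation would correlate with several stored templates and interpolate around the peak for sub-sample resolution; this sketch keeps a single template and integer lags.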

Map Building Based on Sensor Fusion for Autonomous Vehicle (자율주행을 위한 센서 데이터 융합 기반의 맵 생성)

  • Kang, Minsung;Hur, Soojung;Park, Ikhyun;Park, Yongwan
    • Transactions of the Korean Society of Automotive Engineers, v.22 no.6, pp.14-22, 2014
  • An autonomous vehicle requires technology for generating maps by recognizing the surrounding environment. The vehicle's environment can be recognized using distance information from a 2D laser scanner and color information from a camera. Such sensor information is used to generate 2D or 3D maps. A 2D map is used mostly for generating routes, because it contains information about only one cross-section. In contrast, a 3D map also contains height values, and can therefore be used not only for generating routes but also for finding vehicle-accessible space. Nevertheless, an autonomous vehicle using 3D maps has difficulty recognizing the environment in real time. Accordingly, this paper proposes a technology for generating 2D maps that guarantees real-time recognition. The proposed technology uses only the color information obtained by removing the height values from 3D maps generated by fusing 2D laser scanner and camera data.
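Removing the height values from a fused 3-D map to obtain a real-time 2-D color map can be sketched as a simple grid projection; the cell size and the last-color-wins rule are assumptions for illustration, not the paper's exact procedure.

```python
def project_to_2d(points_3d, cell_size=0.5):
    """Collapse (x, y, z, color) points from a fused 3-D map into a
    2-D grid map by discarding z: each occupied cell keeps the color
    of the last point that fell into it."""
    grid = {}
    for x, y, z, color in points_3d:
        cell = (int(x // cell_size), int(y // cell_size))
        grid[cell] = color  # z is dropped; only position and color survive
    return grid
```

The resulting dictionary of occupied cells is small enough to update every scan, which is the real-time property the paper is after.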

Pattern Recognition Using 2D Laser Scanner Shaking (2D 레이저 스캐너 흔듦을 이용한 패턴인식)

  • Kwon, Seongkyung;Jo, Haejoon;Yoon, Jinyoung;Lee, Hoseung;Lee, Jaechun;Kwak, Sungwoo;Choi, Haewoon
    • Transactions of the Korean Society of Automotive Engineers, v.22 no.4, pp.138-144, 2014
  • Autonomous unmanned vehicles have become an issue in next-generation technology, and a 2D laser scanner is used as the distance measurement sensor. The 2D laser scanner detects distances up to 80 m over a measurement angle of -5 to 185 degrees. The laser scanner senses only a single plane, so it is swung using a motor; as a result, traffic signs can be detected and their patterns analyzed. When driving at low speed, the detected patterns of different traffic signs are very similar, but by shaking the laser scanner, traffic signs can be clearly distinguished from other obstacles.
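Swinging the planar scanner with a motor turns each (range, scan angle, tilt angle) measurement into a 3-D point. The abstract omits the geometry, so the conversion below, tilting the scan plane about one axis, is an assumed model.

```python
import math

def scan_point_to_3d(r, scan_angle_deg, tilt_angle_deg):
    """Convert a range r measured at scan angle theta (within the
    scanner's plane) with the plane tilted by phi about the x-axis
    into Cartesian (x, y, z) coordinates."""
    theta = math.radians(scan_angle_deg)
    phi = math.radians(tilt_angle_deg)
    # Point in the scanner plane before tilting.
    x, y = r * math.cos(theta), r * math.sin(theta)
    # Rotate the plane about the x-axis by the motor tilt angle.
    return x, y * math.cos(phi), y * math.sin(phi)
```

Collecting such points over a full motor swing yields the 3-D pattern of a sign face that the planar scan alone cannot resolve.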

3D Omni-directional Vision SLAM using a Fisheye Lens and Laser Scanner (어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM)

  • Choi, Yun Won;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems, v.21 no.7, pp.634-640, 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, sensors with multiple functions, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications, because RGB-D systems with multiple cameras are large and the calculation of depth information for omni-directional images is slow. In this paper, we used a fisheye camera installed facing downwards and a two-dimensional laser scanner separated from the camera by a constant distance. We calculated fusion points from the plane coordinates of obstacles obtained from the two-dimensional laser scanner and the outlines of obstacles obtained from the omni-directional image sensor, which can acquire a surround view at the same time. The effectiveness of the proposed method is confirmed by comparing maps obtained using the proposed algorithm with real maps.

Trends of Sensor-based Intelligent Arc Welding Robot System (센서기반 지능형 아크 용접 로봇 시스템의 동향)

  • Joung, Ji Hoon;Shin, Hyeon-Ho;Song, Young Hoon;Kim, SooJong
    • Journal of Institute of Control, Robotics and Systems, v.20 no.10, pp.1051-1056, 2014
  • In this paper, we introduce intelligent robotic arc welding systems that exploit sensors such as an LVS (Laser Vision Sensor), a Hall effect sensor, and a voltmeter. The use of industrial robots has become saturated because of their limitations, one of the major ones being that an industrial robot cannot recognize its environment. Lately, research on sensor-based environmental awareness for industrial robots has been performed actively to overcome this limitation; it can expand the application field and improve productivity. We classify sensor-based intelligent arc welding robot systems by their goal and their sensing data. The goals can be categorized into detection of the welding start point, tracking of the welding line, and correction of torch deformation. The sensing data can be categorized into welding data (i.e., current, voltage, and short-circuit detection) and displacement data (i.e., distance, position). This paper covers not only an explanation of each category but also its advantages and limitations.

The Weld Defects Expression Method by the Concept of Segment Splitting Method and Mean Distance (분할법과 평균거리 개념에 의한 용접 결함 표현 방법)

  • Lee, Jeong-Ick;Koh, Byung-Kab
    • Transactions of the Korean Society of Machine Tool Engineers, v.16 no.2, pp.37-43, 2007
  • In this paper, a laser vision sensor is used in hardware to detect defects in CO2-welded specimens. In software, the concepts of a segment splitting method and mean distance are introduced as the best way to express the defects of a welded specimen. The developed GUI software is used to decide in real time whether a welded specimen has a proper shape or contains defects. The criteria are based upon ISO 5817, limits of imperfections in metallic fusion welds.
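The segment splitting and mean distance concepts can be sketched as follows: split the measured deviations from a reference profile into segments and flag each segment whose mean absolute deviation exceeds a limit. The segment count and limit below are illustrative assumptions; the actual acceptance limits come from ISO 5817.

```python
def defect_segments(deviations, n_segments, limit):
    """Split a profile of deviations-from-reference into equal segments
    and flag each segment whose mean absolute deviation (the 'mean
    distance') exceeds the given limit."""
    seg_len = len(deviations) // n_segments
    flags = []
    for i in range(n_segments):
        seg = deviations[i * seg_len:(i + 1) * seg_len]
        mean_dist = sum(abs(d) for d in seg) / len(seg)
        flags.append(mean_dist > limit)
    return flags
```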