• Title/Abstract/Keywords: 2D vision sensor

Development of a 3D Point Cloud Mapping System Using a 2D LiDAR and a Commercial Visual-Inertial Odometry Sensor

  • 문종식;이병윤
    • 대한임베디드공학회논문지 / Vol. 16, No. 3 / pp. 107-111 / 2021
  • A 3D point cloud map is an essential element in various fields, including precise autonomous navigation systems. However, generating a 3D point cloud map with a single sensor is limited by the high cost of such sensors. To solve this problem, we propose a precise 3D mapping system based on low-cost sensor fusion. Generating a point cloud map requires estimating the current position and attitude and describing the surrounding environment. In this paper, we use a commercial visual-inertial odometry sensor to estimate the current position and attitude. Based on these state values, the 2D LiDAR measurements describe the surrounding environment to create a point cloud map. To analyze the performance of the proposed algorithm, we compared it with a 3D LiDAR-based SLAM (simultaneous localization and mapping) algorithm. The results confirm that a precise 3D point cloud map can be generated with the low-cost sensor-fusion system proposed in this paper.
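
The mapping step the abstract describes, draping each 2D LiDAR scan onto the pose reported by the visual-inertial odometry sensor, reduces to one rigid-body transform per scan. A minimal sketch of that step (not the authors' code; the synchronized measurement stream is a hypothetical input):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def scan_to_world(ranges, angles, position, quaternion):
    """Project one 2D LiDAR scan into the world frame using a VIO pose.

    ranges, angles: polar LiDAR measurements (meters, radians).
    position: (3,) world-frame position from the VIO sensor.
    quaternion: (x, y, z, w) world-frame attitude from the VIO sensor.
    """
    # Scan points lie in the sensor's scanning plane (z = 0).
    pts = np.column_stack([ranges * np.cos(angles),
                           ranges * np.sin(angles),
                           np.zeros_like(ranges)])
    R = Rotation.from_quat(quaternion).as_matrix()
    return pts @ R.T + position  # rotate into the world frame, then translate

# Accumulating transformed scans along the trajectory yields the 3D map;
# `scan_stream` is a hypothetical iterator of synchronized (scan, pose) data.
# cloud = np.vstack([scan_to_world(r, a, p, q) for r, a, p, q in scan_stream])
```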

Deep Learning Machine Vision System with High Object Recognition Rate Using a Multiple-Exposure Image Sensing Method

  • Park, Min-Jun;Kim, Hyeon-June
    • 센서학회지 / Vol. 30, No. 2 / pp. 76-81 / 2021
  • In this study, we propose a machine vision system with a high object recognition rate. By utilizing a multiple-exposure image sensing technique, the proposed deep learning-based machine vision system can cover a wide light-intensity range without additional training across that range. If the system fails to recognize object features, it switches to a multiple-exposure sensing mode and detects the target object hidden in near-dark or overly bright regions. Short- and long-exposure images from this mode are then synthesized to obtain accurate object-feature information, yielding image information with a wide dynamic range. Even though the deep learning process was trained with object recognition resources covering a light-intensity range of only 23 dB, the prototype machine vision system with the multiple-exposure imaging method demonstrated object recognition over a light-intensity range of up to 96 dB.
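
The synthesis of short- and long-exposure frames into wide-dynamic-range image information can be illustrated with a generic exposure-fusion sketch; the weighting scheme and the 0.8 saturation knee below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def fuse_exposures(short_img, long_img, exposure_ratio):
    """Blend short- and long-exposure frames of the same scene.

    short_img, long_img: float images scaled to [0, 1].
    exposure_ratio: long/short exposure-time ratio, used to bring the
    long exposure onto the short exposure's radiance scale.
    """
    # Trust the long exposure except where it saturates (above 0.8).
    w_long = 1.0 - np.clip((long_img - 0.8) / 0.2, 0.0, 1.0)
    fused = w_long * (long_img / exposure_ratio) + (1.0 - w_long) * short_img
    return fused / fused.max()  # normalize the extended range for display
```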

Development of a Vision System for the Complete Inspection of CO2 Welding Equipment of Automotive Body Parts

  • 김주영;김민규
    • 센서학회지 / Vol. 33, No. 3 / pp. 179-184 / 2024
  • In the automotive industry, welding is a fundamental joining technique for components such as steel, molds, and automobile parts. However, accurate inspection is required to verify the reliability of the welded components. In this study, we investigate the detection of weld beads using 2D image processing in an automatic recognition system. The sample image is obtained with a 2D vision camera embedded in a lighting system, and a portion of the bead is successfully extracted after image processing. In this process, the soot-removal algorithm, which adopts adaptive local gamma correction and gray color coordinates, plays an important role in accurate weld bead detection. Using this automatic recognition system, geometric parameters of the weld bead, such as its length, width, angle, and defect size, can also be determined. Finally, by comparing the obtained data with industrial standards, we can decide whether the weld bead is at an acceptable level.
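
Adaptive local gamma correction of the kind the soot-removal step adopts can be sketched as a per-block gamma driven by local mean brightness; the block size and target mean below are illustrative assumptions:

```python
import cv2
import numpy as np

def adaptive_local_gamma(gray, block=64, target=0.5):
    """Per-block gamma correction that lifts soot-darkened regions.

    gray: uint8 grayscale weld-bead image. Each neighborhood receives
    gamma = log(target) / log(local_mean), so dark (sooty) areas are
    brightened toward the target mean while bright areas barely move.
    """
    img = gray.astype(np.float32) / 255.0
    local_mean = cv2.blur(img, (block, block))          # adaptation signal
    gamma = np.log(target) / np.log(np.clip(local_mean, 1e-3, 0.99))
    corrected = np.power(img, gamma)
    return (np.clip(corrected, 0.0, 1.0) * 255).astype(np.uint8)
```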

Cylindrical Object Recognition Using Sensor Data Fusion

  • 김동기;윤광익;윤지섭;강이석
    • 제어로봇시스템학회논문지 / Vol. 7, No. 8 / pp. 656-663 / 2001
  • This paper presents a sensor fusion method that recognizes a cylindrical object using a CCD camera, a laser slit beam, and ultrasonic sensors mounted on a pan/tilt device. For object recognition with the vision sensor, an active light source projects a stripe pattern onto the object surface, and the 2D image data are transformed into 3D data using the geometry between the camera and the laser slit beam. The ultrasonic sensor uses a transducer array mounted horizontally on the pan/tilt device. The time of flight is estimated by finding the maximum correlation between the received ultrasonic pulse and a set of stored templates, also called a matched filter. The distance is then calculated by multiplying the time of flight by the speed of sound, and the maximum amplitude of the filtered signal is used to determine the face angle to the object. To determine the position and radius of cylindrical objects, we use statistical sensor fusion. Experimental results show that the fused data increase the reliability of object recognition.
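
The matched-filter time-of-flight estimate is straightforward to sketch with a cross-correlation; following the abstract, distance is taken as time of flight times the speed of sound (a pulse-echo setup would halve it):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def matched_filter_tof(received, template, fs):
    """Estimate time of flight and echo strength via a matched filter.

    received: sampled ultrasonic signal, template: stored pulse shape,
    fs: sampling rate (Hz). Returns (distance_m, peak_amplitude).
    """
    corr = np.correlate(received, template, mode="valid")
    lag = int(np.argmax(np.abs(corr)))       # best template alignment
    tof = lag / fs                           # time of flight (s)
    distance = tof * SPEED_OF_SOUND          # as stated in the abstract;
                                             # halve for round-trip pulse-echo
    return distance, float(np.abs(corr[lag]))  # amplitude cues the face angle
```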

Anomaly Event Detection Algorithm for Single-Person Households Fusing Vision, Activity, and LiDAR Sensors

  • Lee, Do-Hyeon;Ahn, Jun-Ho
    • 한국컴퓨터정보학회논문지 / Vol. 27, No. 6 / pp. 23-31 / 2022
  • Recently, owing to the COVID-19 pandemic together with an aging population and the growing number of single-person households, household members spend far more time at home engaged in various activities. In this study, we propose algorithms for detecting anomalous events affecting members of single-person households, including the elderly. The algorithms detect anomalies based on human-movement and fall detection results from a vision sensor algorithm using a home CCTV camera, an activity sensor algorithm using the accelerometer built into a smartphone, and a LiDAR sensor algorithm based on a 2D LiDAR sensor. However, each single-sensor algorithm has difficulty detecting anomalies in certain situations because of the limitations of its sensor. We therefore propose a fusion scheme that combines the algorithms to detect anomalies in a wider range of situations than any single-sensor algorithm alone. We evaluate the performance of the algorithms on data collected from each sensor and show, through specific scenarios, that even when an anomaly cannot be detected accurately by one algorithm alone, the fusion approach lets the algorithms complement one another and detect anomalies accurately and efficiently.
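
A decision-level fusion of the three single-sensor detectors could look like the weighted vote below; the paper's actual fusion rule is not spelled out in the abstract, so this is only a plausible sketch:

```python
from dataclasses import dataclass

@dataclass
class SensorDecision:
    name: str          # "vision", "activity", or "lidar"
    anomaly: bool      # did this single-sensor algorithm flag an event?
    confidence: float  # 0..1 reliability of the sensor in this context
    available: bool    # e.g. the camera cannot see into every room

def fuse_decisions(decisions, threshold=0.5):
    """Weighted vote over the single-sensor detectors.

    Sensors that are unavailable or unreliable in the current situation
    contribute less, so the others compensate, which is the motivation
    for fusion given in the paper.
    """
    usable = [d for d in decisions if d.available]
    if not usable:
        return False
    total = sum(d.confidence for d in usable)
    score = sum(d.confidence for d in usable if d.anomaly)
    return score / total >= threshold
```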

Hand/Eye Calibration of Robot Arms with a 3D Visual Sensing System

  • 김민영;노영준;조형석;김재훈
    • 제어로봇시스템학회 2000년도 제15차 학술회의논문집 / pp. 76-76 / 2000
  • The calibration of a robot system with a visual sensor consists of robot, hand-to-eye, and sensor calibration. This paper describes a new technique for computing the 3D position and orientation of a 3D sensor system relative to the end effector of a robot manipulator in an eye-on-hand configuration. When the 3D coordinates of the feature points at each robot movement and the relative robot motion between two movements are known, a homogeneous equation of the form AX = XB is derived. To solve for X uniquely, it is necessary to make two robot arm movements and form a system of two equations: A_1 X = X B_1 and A_2 X = X B_2. A closed-form solution to this system is developed, and the constraints for the existence of a solution are described in detail. Test results from a series of simulations show that this technique is simple, efficient, and accurate for hand/eye calibration.
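
One standard closed-form route to AX = XB from two motions, in the spirit of the paper, maps the rotation axes of A_i onto those of B_i and then recovers the translation by least squares; this is a generic sketch, not necessarily the authors' exact derivation:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def rotation_axis(R):
    """Unit rotation axis of a 3x3 rotation matrix."""
    rotvec = Rotation.from_matrix(R).as_rotvec()
    return rotvec / np.linalg.norm(rotvec)

def hand_eye_two_motions(A1, A2, B1, B2):
    """Closed-form X from A_i X = X B_i (all 4x4 homogeneous transforms).

    A_i: relative end-effector motions; B_i: matching sensor motions.
    Degenerates when the two rotation axes are parallel, mirroring the
    solvability constraints discussed in the paper.
    """
    a1, a2 = rotation_axis(A1[:3, :3]), rotation_axis(A2[:3, :3])
    b1, b2 = rotation_axis(B1[:3, :3]), rotation_axis(B2[:3, :3])

    def triad(u, v):
        # Orthonormal frame spanned by the two (non-parallel) axes.
        w = np.cross(u, v); w /= np.linalg.norm(w)
        return np.column_stack([u, w, np.cross(u, w)])

    # R_A = R_X R_B R_X^T implies axis(A_i) = R_X axis(B_i).
    RX = triad(a1, a2) @ triad(b1, b2).T
    # Translation: (R_Ai - I) t = RX t_Bi - t_Ai, stacked over both motions.
    C = np.vstack([A1[:3, :3] - np.eye(3), A2[:3, :3] - np.eye(3)])
    d = np.hstack([RX @ B1[:3, 3] - A1[:3, 3], RX @ B2[:3, 3] - A2[:3, 3]])
    tX = np.linalg.lstsq(C, d, rcond=None)[0]
    X = np.eye(4); X[:3, :3] = RX; X[:3, 3] = tX
    return X
```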

2D Map Generation Using an Omnidirectional Image Sensor and Stereo Vision for the Mobile Robot MAIRO

  • 김경호;이형규;손영준;송재근
    • 대한전기학회 2002년도 합동 추계학술대회 논문집 정보 및 제어부문 / pp. 495-500 / 2002
  • Recently, the service robot industry has been emerging as a promising next-generation industry, and there has been much research on self-steering movement (SSM). To implement SSM, a robot must effectively perceive its surroundings, detect objects, and build a map of the environment using its sensors. Many robots therefore carry sonar and infrared sensors, among others. However, these sensors provide only the distance between the robot and an object, and their resolution is poor. In this paper, we introduce a new algorithm that recognizes objects around the robot and builds a 2D map of the surroundings using an omnidirectional vision camera and two stereo vision cameras.
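
Once object points have been triangulated from the stereo pair or located via the omnidirectional image, registering them into a 2D map is a simple grid update; a generic sketch, since the paper's map representation is not specified in the abstract:

```python
import numpy as np

def mark_objects(grid, origin, resolution, points_xy):
    """Register detected object points on a 2D occupancy map.

    grid: 2D integer array (rows = y cells, cols = x cells).
    origin: world (x, y) of the grid's corner cell; resolution: m/cell.
    points_xy: Nx2 world-frame object points from the vision sensors.
    """
    cells = np.floor((points_xy - origin) / resolution).astype(int)
    ok = ((cells[:, 0] >= 0) & (cells[:, 0] < grid.shape[1]) &
          (cells[:, 1] >= 0) & (cells[:, 1] < grid.shape[0]))
    grid[cells[ok, 1], cells[ok, 0]] += 1   # count hits per cell
    return grid
```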

Machine Vision Platform for High-Precision Detection of Disease VOC Biomarkers Using a Colorimetric MOF-Based Gas Sensor Array

  • 이준영;오승윤;김동민;김영웅;허정석;이대식
    • 센서학회지 / Vol. 33, No. 2 / pp. 112-116 / 2024
  • Gas-sensor technology for volatile organic compound (VOC) biomarker detection offers significant advantages for noninvasive diagnostics, including rapid response times and low operational costs, and shows promising potential for disease diagnosis. Colorimetric gas sensors, which enable intuitive analysis of gas concentrations through color changes, offer additional benefits for the development of personal diagnostic kits. However, the traditional method of visually monitoring these sensors limits quantitative analysis and the consistency of detection-threshold evaluation, potentially affecting diagnostic accuracy. To address this, we developed a machine vision platform for metal-organic framework (MOF)-based colorimetric gas sensor arrays, designed to accurately detect disease-related VOC biomarkers. The platform integrates a CMOS camera module, a gas chamber, and a colorimetric MOF sensor jig to quantitatively assess color changes. A specialized machine vision algorithm identifies the color-change region of interest (ROI) in the captured images and monitors the color trends. Performance was evaluated in experiments with four types of low-concentration standard gases, and a limit of detection (LoD) at the 100 ppb level was observed. This approach significantly enhances the potential for noninvasive and accurate disease diagnosis by detecting low-concentration VOC biomarkers and offers a novel diagnostic tool.
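
Monitoring the color trend of a detected ROI reduces to tracking its mean color per frame; a minimal sketch assuming the ROI has already been located by the platform's detection step:

```python
import numpy as np

def roi_color_trend(frames, roi):
    """Track the mean color of one sensor spot across captured frames.

    frames: sequence of BGR images from the CMOS camera module.
    roi: (x, y, w, h) of a colorimetric spot in the sensor array.
    Returns per-frame mean (B, G, R) values.
    """
    x, y, w, h = roi
    return np.array([f[y:y + h, x:x + w].reshape(-1, 3).mean(axis=0)
                     for f in frames])

# The drift relative to the first (pre-exposure) frame is the signal that
# would be mapped to gas concentration against a calibration curve:
# delta = np.linalg.norm(trend - trend[0], axis=1)
```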

Autonomous Calibration of a 2D Laser Displacement Sensor by Matching a Single Point on a Flat Structure

  • 정지훈;강태선;신현호;김수종
    • 제어로봇시스템학회논문지 / Vol. 20, No. 2 / pp. 218-222 / 2014
  • In this paper, we introduce an autonomous calibration method for a 2D laser displacement sensor (e.g., a laser vision sensor or laser range finder) that matches a single point on a flat structure. Many arc welding robots carry a 2D laser displacement sensor to extend their capabilities by recognizing the environment (e.g., base metal and seam). In such systems, sensing data must be transformed into the robot's coordinates, which requires knowing the geometric relation (i.e., rotation and translation) between the robot and sensor coordinate frames. Calibration is the process of inferring this geometric relation. Generally, matching more than three points is required to infer it. We introduce a novel method that calibrates using only one point match, together with a specific flat structure (i.e., a circular hole) that makes the geometric relation recoverable from a single matched point. By moving the robot to a specific pose, we hold the rotation component of the calibration result constant so that a single point suffices. The flat structure is easy to install at a manufacturing site because it has essentially no volume (i.e., it is almost a 2D structure). The calibration process is fully autonomous and requires no manual operation. The robot carrying the sensor moves to the specific pose by sensing features of the circular hole, such as the chord length and the center position of the chord. We demonstrate the precision of the proposed method through repeated experiments in various situations. Furthermore, we applied the calibration result to sensor-based seam tracking with a robot and report the deviation of the robot's TCP (tool center point) trajectory; this experiment confirms the method's precision.
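
With the rotation fixed by the specific robot pose, the single matched point determines the remaining translation directly from the rigid-transform equation; a minimal sketch of that final step (the numeric values are illustrative only):

```python
import numpy as np

def translation_from_one_point(R, p_sensor, p_robot):
    """Solve p_robot = R @ p_sensor + t for t.

    With the rotation R between sensor and robot frames held constant
    (the paper fixes it by driving the robot to a specific pose), one
    matched point on the flat structure pins down the translation.
    """
    return p_robot - R @ p_sensor

# Illustrative values only: identity rotation and a made-up matched point.
t = translation_from_one_point(np.eye(3),
                               np.array([0.10, 0.02, 0.00]),   # sensor frame
                               np.array([0.85, 0.40, 0.30]))   # robot frame
```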

Optimal Grasp Planning Using a 3D Computer Vision Technique

  • 이현기;김성환;최상균;이상룡
    • 한국정밀공학회지 / Vol. 19, No. 11 / pp. 54-64 / 2002
  • This paper deals with the synthesis of stable and optimal grasps of unknown objects with a 3-finger hand. Previous robot grasp research has mainly analyzed either unknown objects 2-dimensionally with a vision sensor or known objects, such as cylindrical objects, 3-dimensionally. Extending this work, we propose an algorithm that analyzes grasps of unknown objects 3-dimensionally using a vision sensor. This is achieved in two steps. The first step builds a 3-dimensional geometric model of the unknown object using stereo matching. The second step finds the optimal grasping points; here we choose a 3-finger hand, which has the characteristics of a multi-fingered hand while remaining easy to model. To find the optimal grasping points, a genetic algorithm is employed whose objective function minimizes the admissible fingertip force applied to the object. The algorithm is verified by computer simulation, in which the optimal grasping points of known objects at different angles are checked.
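
A real-coded genetic algorithm of the kind the paper applies could be sketched as below; the gene encoding (three parameters locating the finger contacts) and the hyperparameters are illustrative assumptions, and the true objective, the admissible fingertip force, is left as a user-supplied function:

```python
import numpy as np

def genetic_search(cost, n_genes=3, pop=60, gens=200, rng=None):
    """Minimal real-coded GA for grasp-point search.

    cost: maps a gene vector in [0, 1]^n (e.g. three parameters locating
    the finger contacts on the object model) to the objective, here the
    admissible fingertip force to be minimized.
    """
    rng = rng or np.random.default_rng(0)
    P = rng.random((pop, n_genes))
    for _ in range(gens):
        fitness = np.array([cost(p) for p in P])
        elite = P[np.argsort(fitness)[: pop // 2]]       # truncation selection
        pairs = elite[rng.integers(0, len(elite), (pop, 2))]
        alpha = rng.random((pop, n_genes))               # blend crossover
        P = alpha * pairs[:, 0] + (1.0 - alpha) * pairs[:, 1]
        mutate = rng.random(P.shape) < 0.1               # sparse mutation
        P = np.clip(P + mutate * 0.1 * rng.standard_normal(P.shape), 0.0, 1.0)
    return P[np.argmin([cost(p) for p in P])]
```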