• Title/Abstract/Keyword: visual sensor

Search results: 450

Pose Tracking of Moving Sensor using Monocular Camera and IMU Sensor

  • Jung, Sukwoo;Park, Seho;Lee, KyungTaek
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 15, No. 8
    • /
    • pp.3011-3024
    • /
    • 2021
  • Pose estimation of the sensor is an important issue in many applications such as robotics, navigation, tracking, and Augmented Reality. This paper proposes a visual-inertial integration system appropriate for dynamically moving conditions of the sensor. The orientation estimated from the Inertial Measurement Unit (IMU) sensor is used to calculate the essential matrix based on the intrinsic parameters of the camera. Using epipolar geometry, the outliers of the feature point matching are eliminated in the image sequences. The pose of the sensor can then be obtained from the feature point matching. The use of the IMU sensor helps to eliminate erroneous point matches in images of dynamic scenes at an early stage. After the outliers are removed from the feature points, the matching relations of the selected feature points are used to calculate a precise fundamental matrix. Finally, with the feature point matching relation, the pose of the sensor is estimated. The proposed procedure was implemented, tested, and compared with existing methods. Experimental results show the effectiveness of the proposed technique.
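A minimal sketch of this kind of matching-and-pose pipeline with OpenCV: feature matches are screened with the epipolar constraint via RANSAC on the fundamental matrix, and the relative pose is then recovered from the essential matrix. The paper's IMU-derived essential matrix prefiltering is not reproduced here; the intrinsic matrix `K`, the ORB detector, and the RANSAC thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

# Hypothetical intrinsics; replace with your camera's calibrated values.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])

def relative_pose(img1, img2):
    """Estimate the relative camera pose between two grayscale frames.

    Outlier matches are rejected with RANSAC on the fundamental matrix
    (the paper instead pre-filters them with an IMU-derived essential
    matrix before refitting F); the pose is then recovered from the
    essential matrix.
    """
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Epipolar outlier rejection: keep only matches consistent with one F.
    F, inlier = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    if F is None:
        return None, None
    pts1, pts2 = pts1[inlier.ravel() == 1], pts2[inlier.ravel() == 1]

    # Essential matrix from the surviving matches and the intrinsics,
    # then decomposition into rotation R and translation direction t.
    E, _ = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t
```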

시각센서를 이용한 부품변형 및 상대오차 측정 실험 (Experiments for measuring parts deformation and misalignments using a visual sensor)

  • 김진영;조형석;김성권
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 1997년도 한국자동제어학술회의논문집; 한국전력공사 서울연수원; 17-18 Oct. 1997
    • /
    • pp.1395-1398
    • /
    • 1997
  • Flexible parts, unlike rigid parts, can be deformed by contact forces during assembly. For successful assembly, information about their deformation as well as possible misalignment between mating parts is essential. However, because of the complex relationship between parts deformation and reaction forces, it is difficult to acquire all the required information from the reaction forces alone. In this paper, we measure parts deformation and misalignments using the visual sensing system presented for flexible parts assembly. Experimental results show that the system can be effectively used for detecting parts deformation and misalignments between mating parts.


깊이 센서를 이용한 능동형태모델 기반의 객체 추적 방법 (Active Shape Model-based Object Tracking using Depth Sensor)

  • 정훈조;이동은
    • 디지털산업정보학회논문지
    • /
    • Vol. 9, No. 1
    • /
    • pp.141-150
    • /
    • 2013
  • This study proposes a technique that separates an object with a depth sensor and tracks it using an Active Shape Model (ASM). Unlike a common visual camera, the depth sensor is not affected by the intensity of illumination, and therefore the object can be extracted more robustly. The proposed algorithm removes the horizontal component from the initial depth map and separates the object using the vertical component. In addition, morphology and labeling operations are applied to correct the image and extract the object efficiently. By applying the ASM to the extracted object, the object can be tracked more robustly; the ASM is robust to object occlusion. Compared with visual camera-based object tracking algorithms, the proposed method, which uses the depth information of the sensor, is more efficient and robust at object tracking. Experimental results show that the proposed ASM-based algorithm using a depth sensor can robustly track objects in real time.
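As a rough illustration of the depth-based separation step (not the authors' exact horizontal/vertical decomposition), the sketch below segments a foreground object from a depth map within a depth band, cleans it with morphology, and keeps the largest labeled component; the ASM fitting itself is omitted. The band limits `near_mm` and `far_mm` are hypothetical.

```python
import cv2
import numpy as np

def segment_object(depth, near_mm=500, far_mm=1500):
    """Segment a foreground object from a depth map (uint16, millimetres).

    The depth band [near_mm, far_mm] is a hypothetical range in which the
    tracked object is assumed to lie; the paper instead separates the object
    from the vertical component after removing the horizontal part.
    """
    mask = cv2.inRange(depth, near_mm, far_mm)

    # Morphology cleans the speckle noise typical of depth sensors.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Labeling: keep the largest connected component as the object region.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n <= 1:
        return None
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return (labels == largest).astype(np.uint8) * 255
```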

GMA 용접에서 용접선 추적용 시각센서의 화상처리에 관한 연구 (A Study on the Image Processing of Visual Sensor for Weld Seam Tracking in GMA Welding)

  • 정규철;김재웅
    • Journal of Welding and Joining
    • /
    • Vol. 18, No. 3
    • /
    • pp.60-67
    • /
    • 2000
  • In this study, we constructed a preview-sensing visual sensor system for weld seam tracking in GMA welding. The visual sensor consists of a CCD camera, a diode laser with a cylindrical lens, and a band-pass filter to overcome image degradation due to spatter and/or arc light. To obtain the weld joint position and edge points accurately from the captured image, we compared the Hough transform method with the central difference method. As a result, we show that the Hough transform method extracts the points more accurately and can be applied to real-time weld seam tracking. Image processing is carried out to extract the straight lines that represent the laser stripe. After the lines are extracted, the weld joint position and edge points are determined from the intersection points of the lines. Even when a spatter trace appears in the image, the position of the weld joint can be recognized. Weld seam tracking was implemented precisely by adopting the Hough transform method, and the weld seam can be tracked when the offset angle is within $\pm15^{\circ}$.
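A minimal sketch of the Hough-transform step described above: the binarized laser-stripe image is reduced to straight lines with `cv2.HoughLines`, and the weld joint position is taken as the intersection of two lines with clearly different orientations. The threshold values are illustrative assumptions, and the preview-sensing geometry and spatter handling of the real system are not modeled.

```python
import cv2
import numpy as np

def weld_joint_from_stripe(stripe_img):
    """Locate the weld joint as the intersection of the two dominant
    straight lines of the laser stripe (Hough transform approach).

    stripe_img is an 8-bit grayscale image; a band-pass filter in front of
    the camera is assumed to have suppressed arc light and spatter already.
    """
    _, binary = cv2.threshold(stripe_img, 200, 255, cv2.THRESH_BINARY)
    lines = cv2.HoughLines(binary, 1, np.pi / 180, threshold=80)
    if lines is None or len(lines) < 2:
        return None

    # Take the first returned line and the next one with a clearly
    # different orientation (the two sides of the weld joint groove).
    r1, t1 = lines[0][0]
    r2, t2 = next(((r, t) for r, t in lines[:, 0]
                   if abs(t - t1) > np.deg2rad(10)), (None, None))
    if r2 is None:
        return None

    # Intersection of r = x*cos(t) + y*sin(t) for the two lines.
    A = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    b = np.array([r1, r2])
    x, y = np.linalg.solve(A, b)
    return float(x), float(y)
```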


A Study on the Image Processing of Visual Sensor for Weld Seam Tracking in GMA Welding

  • Kim, J.-W.;Chung, K.-C.
    • International Journal of Korean Welding Society
    • /
    • Vol. 1, No. 2
    • /
    • pp.23-29
    • /
    • 2001
  • In this study, a preview-sensing visual sensor system is constructed for weld seam tracking in GMA welding. The visual sensor system consists of a CCD camera, a diode laser with a cylindrical lens, and a band-pass filter to overcome image degradation due to spatter and/or arc light. Among the image processing methods, the Hough transform method is compared with the central difference method in terms of their ability to extract accurate feature positions. As a result, it was revealed that the Hough transform method can extract the feature positions more accurately and can be applied to real-time weld seam tracking. Image processing, which includes the Hough transform method, is carried out to extract the straight lines that represent the laser stripe. After the lines are extracted, the weld joint position and edge points are determined from the intersections of the lines. Even when the image includes a spatter trace, the position of the weld joint can be recognized. Weld seam tracking was implemented precisely by adopting the Hough transform method, and the weld seam can be tracked when the offset angle is within $\pm15^{\circ}$.
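For comparison with the Hough-transform sketch above, the following is one plausible reading of the central difference method mentioned in both abstracts: the stripe is reduced to a per-column row profile, and candidate edge points are located where the central-difference slope of that profile changes abruptly. The centroid reduction and the `slope_jump` threshold are assumptions, not the authors' exact procedure.

```python
import numpy as np

def stripe_profile(binary_stripe):
    """Reduce a binarised laser-stripe image to one row coordinate per
    column (the intensity centroid), giving a 1-D profile y(x)."""
    rows = np.arange(binary_stripe.shape[0])[:, None]
    weight = binary_stripe.astype(float)
    col_sum = weight.sum(axis=0)
    return np.where(col_sum > 0,
                    (rows * weight).sum(axis=0) / np.maximum(col_sum, 1e-9),
                    np.nan)

def edge_points_central_difference(y, slope_jump=0.5):
    """Locate candidate weld-joint edge points as places where the
    central-difference slope of the stripe profile changes abruptly.

    slope_jump is a hypothetical tuning parameter (pixels per column).
    """
    # Central difference: dy[i] = (y[i+1] - y[i-1]) / 2
    dy = np.full_like(y, np.nan)
    dy[1:-1] = (y[2:] - y[:-2]) / 2.0
    jumps = np.abs(np.diff(dy))
    return np.where(jumps > slope_jump)[0] + 1
```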


칼라 센서를 이용한 사과와 토마토의 색상 판정 (Evaluation of Surface Color of Apples and Tomatoes by Using Color Sensors)

  • 배영환;주철
    • Journal of Biosystems Engineering
    • /
    • Vol. 18, No. 4
    • /
    • pp.382-389
    • /
    • 1993
  • In this research, the surface colors of 'Fuji' apples and tomatoes were measured using Sharp PD 151 semiconductor color sensors. The measurements were compared with color-difference-meter readings and with visual sensory test scores. A negative exponential function was developed that describes the relationship between the dominant wavelength of the surface color of 'Fuji' apples and the ratio of the photoelectric currents of the color sensor. A linear relationship was also found between the surface color of tomatoes and the color sensor output. There were good correlations between the visual test scores and the color sensor output for both 'Fuji' apples and tomatoes.
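A negative exponential relationship of this kind can be fitted as sketched below with `scipy.optimize.curve_fit`. The model form $\lambda = a e^{-b r} + c$, the calibration data, and the initial guesses are hypothetical stand-ins, not the coefficients or measurements reported in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def neg_exponential(ratio, a, b, c):
    """Negative exponential model relating the color-sensor photocurrent
    ratio to the dominant wavelength of the fruit surface color."""
    return a * np.exp(-b * ratio) + c

# Hypothetical calibration data: (photocurrent ratio, dominant wavelength in nm).
ratio = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])
wavelength = np.array([660.0, 640.0, 625.0, 612.0, 603.0, 597.0])

params, _ = curve_fit(neg_exponential, ratio, wavelength, p0=(100.0, 1.0, 580.0))
print("fitted a, b, c:", params)
print("predicted wavelength at ratio 0.5:", neg_exponential(0.5, *params))
```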


선박 자동계류를 위한 LiDAR기반 시각센서 시스템 개발 (A LiDAR-based Visual Sensor System for Automatic Mooring of a Ship)

  • 김진만;남택근;김헌희
    • 해양환경안전학회지
    • /
    • Vol. 28, No. 6
    • /
    • pp.1036-1043
    • /
    • 2022
  • This paper discusses the development of a visual sensor that can be installed on an automatic mooring device to detect the berthing situation of a ship. Even though the speed of a ship is controlled and its position is monitored to prevent accidents during berthing, ship collision accidents at piers still occur every year, causing severe economic and environmental damage. Therefore, to secure the safety of ships berthing at a pier, it is important to develop a visual system that can quickly acquire the position and speed information of the ship. In this study, a visual sensor was developed that, much like a human observer, monitors a berthing ship through images and appropriately checks its berthing state according to the surrounding environment. First, to ensure the adequacy of the visual sensor to be developed, the characteristics of existing sensors were analyzed in terms of the information they provide, sensing range, real-time capability, accuracy, and precision. Based on this analysis, a 3D visual module that can acquire information on the target in real time was developed through the conceptual design of a LiDAR-type 3D visual system, the design of its drive mechanism, and the design of force and position controllers for the motion drive unit. Finally, the performance of the control system that drives the system and the scanning speed were evaluated, and the usefulness of the developed system was confirmed through experiments.
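As a minimal sketch of the kind of berthing information the system is meant to provide (ship position and approach speed), the code below reduces a sequence of LiDAR scans to a hull distance and a smoothed approach speed. The scan format, the minimum-range distance definition, and the smoothing window are assumptions; the paper's scanning mechanism and its force/position controllers are not modeled.

```python
import numpy as np

def berthing_state(scans, dt):
    """Estimate distance-to-berth and approach speed of a ship from a
    sequence of LiDAR scans (each scan: array of range readings in metres).

    The hull distance is taken as the minimum valid range in each scan, and
    the approach speed is its finite difference, smoothed with a short
    moving average. Negative speed means the ship is approaching.
    """
    dist = np.array([np.min(s[s > 0.0]) for s in scans])  # metres
    speed = np.gradient(dist, dt)                          # m/s
    kernel = np.ones(3) / 3.0
    return dist, np.convolve(speed, kernel, mode="same")

# Hypothetical usage with synthetic scans of a ship closing at about 0.1 m/s.
dt = 0.5
scans = [np.array([12.0 - 0.05 * k, 12.3 - 0.05 * k, 13.1]) for k in range(10)]
dist, speed = berthing_state(scans, dt)
print(dist[-1], speed[-1])
```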

가시광선 영역에서의 선면 감지 센서 (Line Edge Detection Sensor using Visual Spectral Wavelength)

  • 최규남
    • 한국전자통신학회논문지
    • /
    • Vol. 7, No. 2
    • /
    • pp.303-308
    • /
    • 2012
  • A one-dimensional line/edge detection sensor that can detect the lines or edges at the margins of fabric being wound on a cylindrical roller was studied. The sensor uses 1:1 optics with a single lens, which requires no optical alignment, and is implemented by processing the difference and sum signals from the two channels of a bi-cell silicon photodetector. The measured results show that line widths down to 0.1 mm could be detected for objects of various materials and colors, and that the fabric was wound within a deviation of 0.2 mm.
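A small sketch of the sum-and-difference processing of the bi-cell photodetector described above: the normalized difference $(A-B)/(A+B)$ of the two photocurrents gives a dimensionless edge-position estimate. The mapping to millimetres via `half_width_mm` is a hypothetical calibration, not the paper's.

```python
def edge_offset(channel_a, channel_b, half_width_mm=0.5):
    """Convert the two photocurrents of a bi-cell detector into an edge
    offset estimate using the normalised difference (A - B) / (A + B).

    half_width_mm is a hypothetical scale factor mapping the dimensionless
    ratio onto millimetres; the paper's actual calibration is not given here.
    """
    total = channel_a + channel_b
    if total <= 0.0:
        raise ValueError("no light on the detector")
    return half_width_mm * (channel_a - channel_b) / total

# Example: slightly more light on cell A than B -> edge shifted toward B.
print(edge_offset(1.2e-6, 1.0e-6))  # photocurrents in amperes
```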

깊이 센서를 이용한 등고선 레이어 생성 및 모델링 방법 (A Method for Generation of Contour lines and 3D Modeling using Depth Sensor)

  • 정훈조;이동은
    • 디지털산업정보학회논문지
    • /
    • Vol. 12, No. 1
    • /
    • pp.27-33
    • /
    • 2016
  • In this study, we propose a method for 3D landform reconstruction and object modeling that generates contour lines on a map using a depth sensor, by extracting the characteristics of geological layers from the depth map. Unlike a common visual camera, the depth sensor is not affected by the intensity of illumination, and therefore contours and objects can be extracted more robustly. The proposed algorithm first extracts the characteristics of each geological layer from the depth map image and rearranges them into the proper order, and then creates contour lines using Bezier curves. Using the generated contour lines, 3D images are reconstructed through rendering by mapping the RGB images of a visual camera. Experimental results show that the proposed method using a depth sensor can reconstruct the contour map and perform 3D modeling in real time. Generating the contours from depth data is more efficient and economical in terms of quality and accuracy.
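A minimal sketch of the layer-extraction and contour step, assuming equally spaced depth bands: each band is masked and its outlines are extracted with `cv2.findContours`. The ordering of geological layers and the Bezier smoothing described in the abstract are not reproduced here.

```python
import cv2
import numpy as np

def depth_contours(depth, n_levels=8):
    """Extract one set of contour lines per depth band from a depth map.

    The bands are simply equally spaced between the minimum and maximum
    depth; n_levels is a hypothetical parameter.
    """
    d_min, d_max = float(depth.min()), float(depth.max())
    levels = np.linspace(d_min, d_max, n_levels + 1)

    contours_per_level = []
    for low, high in zip(levels[:-1], levels[1:]):
        band = cv2.inRange(depth, low, high)          # binary mask of one layer
        contours, _ = cv2.findContours(band, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        contours_per_level.append(contours)
    return levels, contours_per_level
```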

센서융합을 통한 시맨틱 지도의 작성 (Sensor Fusion-Based Semantic Map Building)

  • 박중태;송재복
    • 제어로봇시스템학회논문지
    • /
    • Vol. 17, No. 3
    • /
    • pp.277-282
    • /
    • 2011
  • This paper describes sensor fusion-based semantic map building, which can improve the capabilities of a mobile robot in various domains including localization, path planning, and mapping. To build a semantic map, various types of environmental information, such as doors and cliff areas, must be extracted autonomously. Therefore, we propose a method to detect doors, cliff areas, and robust visual features using a laser scanner and a vision sensor. Doors are detected using GHT (Generalized Hough Transform)-based recognition of door handles together with the geometrical features of a door. To detect the cliff areas and the robust visual features, a tilting laser scanner and SIFT features are used, respectively. The proposed method was verified by various experiments, which showed that the robot could build a semantic map autonomously in various indoor environments.
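The sketch below illustrates, with OpenCV, the two vision components named in the abstract: Generalized Hough Transform (Ballard) detection of a door-handle template, and SIFT keypoints as the robust visual features. The template image and detector settings are assumptions, and how the candidates are fused with the door's geometric features and the laser data is left out.

```python
import cv2

def detect_door_handles(gray_scene, handle_template):
    """Detect door-handle candidates in a grayscale image with the
    Generalized Hough Transform (Ballard variant).

    Both images are 8-bit grayscale; edge and vote thresholds are left at
    the OpenCV defaults in this sketch.
    """
    ght = cv2.createGeneralizedHoughBallard()
    ght.setTemplate(handle_template)
    positions, votes = ght.detect(gray_scene)
    # positions: array of (x, y, scale, angle) candidates, or None if no hit.
    return positions

def visual_landmarks(gray_scene):
    """Extract SIFT keypoints and descriptors to serve as the robust visual
    features stored in the semantic map."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray_scene, None)
    return keypoints, descriptors
```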