• Title/Summary/Keyword: 3D image sensor


A New Object Region Detection and Classification Method using Multiple Sensors on the Driving Environment (다중 센서를 사용한 주행 환경에서의 객체 검출 및 분류 방법)

  • Kim, Jung-Un;Kang, Hang-Bong
    • Journal of Korea Multimedia Society / v.20 no.8 / pp.1271-1281 / 2017
  • It is essential to collect and analyze target information around the vehicle for autonomous driving. Based on this analysis, environmental information such as location and direction must be processed in real time to control the vehicle. In particular, occlusion or truncation of objects in the image must be handled to provide accurate information about the vehicle environment and to facilitate safe operation. In this paper, we propose a method to simultaneously generate 2D and 3D bounding box proposals using the LiDAR Edge generated by filtering LiDAR sensor information. We classify each proposal by connecting it to a Region-based Fully-Convolutional Network (R-FCN), a deep-learning-based object classifier that takes two-dimensional images as input. Each 3D box is then rearranged using the class label and the subcategory information of its class to complete the 3D bounding box corresponding to the object. Because the 3D bounding boxes are created in 3D space, object information such as spatial coordinates and object size can be obtained at once, and the 2D bounding boxes associated with the 3D boxes do not suffer from problems such as occlusion.
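
The pairing of a 3D box with a consistent 2D image box, as described above, can be sketched by projecting the eight corners of the 3D box into the image and taking their enclosing rectangle. This is a minimal illustration of the idea, not the paper's pipeline; the camera intrinsics, box center, and box size below are hypothetical values.

```python
import numpy as np

def box3d_corners(center, size):
    """Eight corners of an axis-aligned 3D box (x, y, z in the camera frame)."""
    cx, cy, cz = center
    dx, dy, dz = (s / 2.0 for s in size)
    return np.array([[cx + sx * dx, cy + sy * dy, cz + sz * dz]
                     for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])

def project_to_2d_box(corners, K):
    """Project 3D corners with intrinsics K; return (u_min, v_min, u_max, v_max)."""
    uvw = (K @ corners.T).T            # pinhole projection to homogeneous pixels
    uv = uvw[:, :2] / uvw[:, 2:3]      # perspective divide
    return (*uv.min(axis=0), *uv.max(axis=0))

# Hypothetical intrinsics: focal length 700 px, principal point (640, 360)
K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])
# Hypothetical car-sized box 10 m ahead of the camera
corners = box3d_corners(center=(2.0, 0.0, 10.0), size=(1.8, 1.5, 4.0))
bbox2d = project_to_2d_box(corners, K)
```

Because the 2D box is derived from the 3D one, the two remain consistent by construction, which is the property the abstract exploits.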

Transparent Manipulators Accomplished with RGB-D Sensor, AR Marker, and Color Correction Algorithm (RGB-D 센서, AR 마커, 색수정 알고리즘을 활용한 매니퓰레이터 투명화)

  • Kim, Dong Yeop;Kim, Young Jee;Son, Hyunsik;Hwang, Jung-Hoon
    • The Journal of Korea Robotics Society / v.15 no.3 / pp.293-300 / 2020
  • The purpose of our sensor system is to transparentize the large hydraulic manipulators of a six-ton dual-arm excavator in the operator's camera view. Almost 40% of the camera view is blocked by the manipulators; in other words, the operator loses 40% of the visual information that might be useful in many manipulator control scenarios, such as clearing debris on a disaster site. The proposed method is based on 3D reconstruction technology. By overlaying the camera image from the front top of the cabin with point cloud data from RGB-D (red, green, blue, and depth) cameras placed on the outer side of each manipulator, a manipulator-free camera image can be obtained. Two additional algorithms are proposed to further enhance the productivity of dual-arm excavators. First, a color correction algorithm copes with the different color distributions of the RGB and RGB-D sensors used in the system. Second, an edge overlay algorithm is proposed: although the manipulators often limit the operator's view, visual feedback on the manipulators' configuration or state may still be useful, so the algorithm draws the edges of the manipulators on the camera image. The experimental results show that the proposed transparentization algorithm helps the operator obtain information about the environment and objects around the excavator.
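
One common way to correct for different color distributions between two cameras, as the abstract's color correction step must, is per-channel mean/std matching. The sketch below shows that generic technique; the paper's actual algorithm may differ, and the function name and image shapes are our assumptions.

```python
import numpy as np

def match_color_stats(source, reference):
    """Shift and scale each channel of `source` so its mean and standard
    deviation match those of `reference`. Images are float arrays of
    shape (H, W, 3) with values in [0, 255]."""
    out = np.empty(source.shape, dtype=np.float64)
    for c in range(source.shape[-1]):
        src, ref = source[..., c], reference[..., c]
        scale = ref.std() / max(src.std(), 1e-8)   # avoid division by zero
        out[..., c] = (src - src.mean()) * scale + ref.mean()
    return np.clip(out, 0.0, 255.0)
```

Applied to the RGB-D point-cloud colors with the cabin camera as reference, such a step would make the overlaid regions blend into the surrounding image.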

Evaluating Modified IKONOS RPC Using Pseudo GCP Data Set and Sequential Solution

  • Bang, Ki-In;Jeong, Soo;Kim, Kyung-Ok
    • Proceedings of the KSRS Conference / 2002.10a / pp.82-87 / 2002
  • RFM (Rational Function Model) is the sensor model delivered to end-users of IKONOS imagery. IKONOS vendors provide RPC (Rational Polynomial Coefficients), the coefficients of the RFM, together with the imagery, so end-users can obtain geospatial information from their IKONOS imagery without any additional effort. However, there is still demand for more rigorous 3D positioning than the provided RPC can deliver: the vendor RPC alone cannot satisfy users who need to generate precise 3D terrain models. Physical sensor modeling of IKONOS imagery is difficult because the vendors do not release satellite ephemeris data, and abstract sensor modeling requires many GCPs well distributed over the whole image, as with other satellite imagery. RPC modification is therefore the better choice. If a few GCPs are available, the RPC can be modified by the method introduced in this paper. Evaluation of the modified RPC for IKONOS reports reasonable results. Pseudo GCPs generated with the vendor's RPC, combined with additional GCPs, make this possible through a sequential solution.
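
RPC refinement with a few GCPs is commonly realized as a bias/affine correction in image space estimated by least squares. The sketch below shows that standard batch variant, not the paper's sequential solution; the function names and GCP values are hypothetical.

```python
import numpy as np

def fit_affine_correction(rpc_proj, measured):
    """Least-squares affine correction in image space: maps vendor-RPC
    projections of the GCPs onto their measured (row, col) coordinates.
    Both inputs are (N, 2) arrays; returns a 2x3 matrix A such that
    corrected = A @ [row, col, 1]^T."""
    X = np.hstack([rpc_proj, np.ones((len(rpc_proj), 1))])  # design matrix
    coef, *_ = np.linalg.lstsq(X, measured, rcond=None)
    return coef.T

def apply_affine_correction(A, rpc_proj):
    """Apply the fitted correction to vendor-RPC image coordinates."""
    X = np.hstack([rpc_proj, np.ones((len(rpc_proj), 1))])
    return X @ A.T
```

With enough pseudo GCPs densified from the vendor RPC, such a correction can be estimated stably even when only a few real GCPs are measured.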


A Deep Convolutional Neural Network Based 6-DOF Relocalization with Sensor Fusion System (센서 융합 시스템을 이용한 심층 컨벌루션 신경망 기반 6자유도 위치 재인식)

  • Jo, HyungGi;Cho, Hae Min;Lee, Seongwon;Kim, Euntai
    • The Journal of Korea Robotics Society / v.14 no.2 / pp.87-93 / 2019
  • This paper presents 6-DOF relocalization using a 3D laser scanner and a monocular camera. The relocalization problem in robotics is to estimate the pose of a sensor when a robot revisits an area. A deep convolutional neural network (CNN) is designed to regress the 6-DOF sensor pose and is trained end-to-end on both RGB images and 3D point cloud information; we generate a new input that combines RGB and range information. After training, the relocalization system outputs the sensor pose corresponding to each new input. In most cases, however, a mobile robot navigation system receives successive sensor measurements, so to improve localization performance the CNN output is used as the measurement of a particle filter that smooths the trajectory. We evaluate our relocalization method on real-world datasets using a mobile robot platform.
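
Feeding per-frame pose regressions into a trajectory-smoothing particle filter can be sketched, for a single pose coordinate, as a bootstrap filter. All parameters below (particle count, noise levels, seed) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def particle_filter(measurements, n_particles=500, motion_std=0.5,
                    meas_std=1.0, seed=42):
    """Smooth noisy 1D position measurements (e.g. one coordinate of a
    CNN-regressed pose) with a bootstrap particle filter."""
    rng = np.random.default_rng(seed)
    particles = np.full(n_particles, measurements[0], dtype=float)
    estimates = []
    for z in measurements:
        particles += rng.normal(0.0, motion_std, n_particles)   # predict
        w = np.exp(-0.5 * ((particles - z) / meas_std) ** 2)    # weight by likelihood
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)    # resample
        particles = particles[idx]
        estimates.append(particles.mean())                      # posterior mean
    return np.array(estimates)
```

In the full 6-DOF case each particle would carry a pose (position and orientation), but the predict/weight/resample cycle is the same.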

3D PROCESSING OF HIGH-RESOLUTION SATELLITE IMAGES

  • Gruen, Armin;Li, Zhang
    • Proceedings of the KSRS Conference / 2003.11a / pp.24-27 / 2003
  • High-resolution satellite images at sub-5m footprint are becoming increasingly available to the earth observation community and their respective clients. The related cameras are all using linear array CCD technology for image sensing. The possibility and need for accurate 3D object reconstruction requires a sophisticated camera model, being able to deal with such sensor geometry. We have recently developed a full suite of new methods and software for the precision processing of this kind of data. The software can accommodate images from IKONOS, QuickBird, ALOS PRISM, SPOT5 HRS and sensors of similar type to be expected in the future. We will report about the status of the software, the functionality and some new algorithmic approaches in support of the processing concept. The functionality will be verified by results from various pilot projects. We put particular emphasis on the automatic generation of DSMs, which can be done at sub-pixel accuracy and on the semi-automated generation of city models.


A Moving Synchronization Technique for Virtual Target Overlay (가상표적 전시를 위한 이동 동기화 기법)

  • Kim Gye-Young;Jang Seok-Woo
    • Journal of Internet Computing and Services / v.7 no.4 / pp.45-55 / 2006
  • This paper proposes a virtual target overlay technique for realistic training simulation, which projects a virtual target onto ground-based CCD images according to a given scenario. The method creates a realistic 3D model for instructors using high-resolution GeoTIFF (Geographic Tag Image File Format) satellite images and DTED (Digital Terrain Elevation Data), and it extracts road areas from the given CCD images for both instructors and trainees. Since satellite images and ground-based sensor images differ greatly in observation position, resolution, and scale, feature-based matching is difficult. Hence, we propose a moving synchronization technique that projects the targets onto the sensor images according to the moving paths marked on the 3D satellite images. Experimental results with satellite and sensor images of Daejeon show the effectiveness of the proposed algorithm.


Cylindrical Object Recognition using Sensor Data Fusion (센서데이터 융합을 이용한 원주형 물체인식)

  • Kim, Dong-Gi;Yun, Gwang-Ik;Yun, Ji-Seop;Gang, Lee-Seok
    • Journal of Institute of Control, Robotics and Systems / v.7 no.8 / pp.656-663 / 2001
  • This paper presents a sensor fusion method to recognize a cylindrical object using a CCD camera, a laser slit beam, and ultrasonic sensors on a pan/tilt device. For object recognition with the vision sensor, an active light source projects a stripe pattern of light onto the object surface, and the 2D image data are transformed into 3D data using the geometry between the camera and the laser slit beam. The ultrasonic sensor uses an ultrasonic transducer array mounted horizontally on the pan/tilt device. The time of flight is estimated by finding the maximum correlation between the received ultrasonic pulse and a set of stored templates, also called a matched filter. The distance is calculated by simply multiplying the time of flight by the speed of sound, and the maximum amplitude of the filtered signal is used to determine the face angle to the object. To determine the position and the radius of cylindrical objects, we use statistical sensor fusion. Experimental results show that the fused data increase the reliability of the object recognition.
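
The matched-filter time-of-flight estimate described above (correlate with a template, take the peak lag, convert to distance via the speed of sound) can be sketched as follows. The pulse shape, sampling rate, and delay are illustrative values, not the paper's hardware parameters.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def estimate_distance(received, template, fs):
    """Matched-filter time of flight: correlate the received ultrasonic
    signal with a stored template, take the lag of maximum correlation,
    and convert the round-trip delay into a one-way distance."""
    corr = np.correlate(received, template, mode="full")
    lag = corr.argmax() - (len(template) - 1)   # delay in samples
    tof = lag / fs                              # delay in seconds
    return tof * SPEED_OF_SOUND / 2.0           # round trip -> one way

# Toy example: a 40 kHz burst echoed back with a 2 ms round-trip delay
fs = 100_000                                    # 100 kHz sampling
t = np.arange(0, 0.001, 1 / fs)                 # 1 ms pulse
template = np.sin(2 * np.pi * 40_000 * t)
received = np.zeros(1000)
delay = 200                                     # 200 samples = 2 ms
received[delay:delay + len(template)] = template
```

The peak amplitude of the same correlation output is what the abstract uses to infer the face angle to the object.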


Technology Trends of Range Image based Gesture Recognition (거리영상 기반 동작인식 기술동향)

  • Chang, J.Y.;Ryu, M.W.;Park, S.C.
    • Electronics and Telecommunications Trends / v.29 no.1 / pp.11-20 / 2014
  • Gesture recognition is a technology that recognizes the motions of people in input images, with diverse applications such as visual surveillance, human-computer interaction, and intelligent robots. Recently, with the advent of low-cost range sensors and efficient 3D pose estimation techniques, gesture recognition has overcome its earlier difficulties and has advanced to the point where it can be applied in a variety of industries. This article surveys recent research trends in such range-image-based gesture recognition.


Simulation of Ladar Range Images based on Linear FM Signal Analysis (Linear FM 신호분석을 통한 Ladar Range 영상의 시뮬레이션)

  • Min, Seong-Hong;Kim, Seong-Joon;Lee, Im-Pyeong
    • Journal of Korean Society for Geospatial Information Science / v.16 no.2 / pp.87-95 / 2008
  • Ladar (Laser Detection and Ranging, also Lidar) is a sensor that acquires precise distances to the surfaces of a target region using laser signals, and it has recently been applied to ATD (Automatic Target Detection) for guided missiles and aerial vehicles. It provides a range image in which each measured distance is expressed as the brightness of the corresponding pixel. Since precise 3D models can be generated from a Ladar range image, more robust identification and recognition of targets is possible. A simulator of Ladar sensor data can be used efficiently to design and develop Ladar sensors and systems and to develop data processing algorithms. The purposes of this study are thus to simulate the signals of a Ladar sensor based on linear frequency modulation and to create range images from the simulated signals. We first simulated the laser signals of a Ladar using an FM chirp modulator, then computed the distances from the sensor to a target by applying an FFT to the simulated signals, and finally created the range image from the computed distances.
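
The linear-FM ranging principle behind this simulation can be sketched as follows: mixing a delayed echo with the transmitted chirp (dechirping) leaves a beat tone at f_beat = 2BR/(cT), which an FFT locates and inverts to a range. The bandwidth, chirp duration, and sampling rate below are hypothetical, not the study's parameters.

```python
import numpy as np

C = 3e8      # speed of light, m/s
B = 150e6    # chirp bandwidth, Hz
T = 1e-3     # chirp duration, s
fs = 2e6     # sampling rate of the beat signal, Hz

def beat_signal(target_range):
    """Simulate the dechirped (beat) signal for a single target: the
    chirp slope B/T times the round-trip delay 2R/c gives the beat tone."""
    f_beat = 2 * B * target_range / (C * T)
    t = np.arange(int(fs * T)) / fs
    return np.cos(2 * np.pi * f_beat * t)

def range_from_fft(signal):
    """Locate the beat tone with an FFT and invert it to a range."""
    spectrum = np.abs(np.fft.rfft(signal))
    f_beat = spectrum.argmax() * fs / len(signal)
    return f_beat * C * T / (2 * B)
```

With these numbers one FFT bin corresponds to 1 m of range, so the simulated range image is quantized at that resolution.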


3-D vision sensor system for arc welding robot with coordinated motion by transputer system

  • Ishida, Hirofumi;Kasagami, Fumio;Ishimatsu, Takakazu
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1993.10b / pp.446-450 / 1993
  • In this paper we propose an arc welding robot system in which two robots work in coordination and employ a vision sensor. One robot arm holds the welding target as a positioning device, while the other robot moves the welding torch. The vision sensor consists of two laser slit-ray projectors and one CCD TV camera and is mounted on top of one robot. The vision sensor detects the 3-dimensional shape of the groove on the target workpiece that needs to be welded, and the two robots are moved in coordination to trace the groove accurately. To realize fast image processing, five sets of high-speed parallel processing units (Transputers) are employed in total. The teaching tasks for the coordinated motions are simplified considerably thanks to this vision sensor. Experimental results show the applicability of our system.
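
The 3D measurement a slit-ray sensor performs can be sketched as a ray-plane intersection: each pixel on the imaged laser stripe back-projects to a ray that is intersected with the known laser plane. The intrinsics and slit-plane parameters below are hypothetical calibration values, not the system's.

```python
import numpy as np

def slit_point(pixel, K, n, d):
    """Back-project `pixel` through intrinsics K and intersect the ray
    with the laser-slit plane n . X = d (all in the camera frame)."""
    u, v = pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction
    s = d / (n @ ray)                               # scale placing the point on the plane
    return s * ray

# Hypothetical calibration
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
n = np.array([1.0, 0.0, 1.0])   # slit plane x + z = 1 (metres)
d = 1.0
```

Sweeping the stripe across the groove and repeating this intersection per pixel yields the 3D groove profile the robots then trace.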
