• Title/Summary/Keyword: 3D image sensor


DECODE: A Novel Method of DEep CNN-based Object DEtection using Chirps Emission and Echo Signals in Indoor Environment (실내 환경에서 Chirp Emission과 Echo Signal을 이용한 심층신경망 기반 객체 감지 기법)

  • Nam, Hyunsoo;Jeong, Jongpil
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.3 / pp.59-66 / 2021
  • Humans recognize surrounding objects mainly through sight and hearing among the five senses (sight, hearing, smell, touch, taste), yet most recent object recognition research focuses on analyzing image sensor information. In this paper, various chirp audio signals are emitted into the observation space, the echoes are collected through a 2-channel receiving sensor and converted into spectral images, and an object recognition experiment in 3D space is conducted using a deep-learning-based image learning algorithm. The experiment was performed under the noise and reverberation of a general indoor environment rather than the ideal conditions of an anechoic chamber, and object recognition through echoes estimated object positions with 83% accuracy. In addition, by mapping the inference results onto the observation space as a 3D spatial sound signal and outputting it as audio, visual information could be conveyed through learned 3D sound. This suggests that object recognition research should exploit various echo information alongside image information, and that this technology could be applied to augmented reality through 3D sound.
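
A minimal sketch of the signal-to-image step the abstract describes: a linear chirp is emitted, a delayed and noisy echo is simulated, and the echo is converted into a log-power spectrogram of the kind a CNN would consume. The chirp parameters (3 kHz to 8 kHz over 50 ms at 44.1 kHz) and noise level are illustrative assumptions, not the paper's values.

```python
# Sketch: turning a chirp echo into a "spectral image" for a CNN.
# All signal parameters below are illustrative assumptions.
import numpy as np
from scipy.signal import chirp, spectrogram

fs = 44100                                      # sampling rate (Hz)
t = np.linspace(0, 0.05, int(fs * 0.05), endpoint=False)
emitted = chirp(t, f0=3000, t1=0.05, f1=8000)   # linear chirp emission

# A toy "echo": the emission delayed, attenuated, and buried in room noise.
delay = int(0.01 * fs)
echo = np.zeros(2 * len(t))
echo[delay:delay + len(t)] += 0.3 * emitted
echo += 0.02 * np.random.default_rng(0).standard_normal(len(echo))

# Time-frequency image: one such image per receiving channel feeds the CNN.
f, ts, Sxx = spectrogram(echo, fs=fs, nperseg=256, noverlap=192)
image = 10 * np.log10(Sxx + 1e-12)              # log-power spectral image
print(image.shape)
```

In the paper's setup there would be one such image per channel of the 2-channel receiver, stacked as CNN input.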

Bundle Adjustment and 3D Reconstruction Method for Underwater Sonar Image (수중 영상 소나의 번들 조정과 3차원 복원을 위한 운동 추정의 모호성에 관한 연구)

  • Shin, Young-Sik;Lee, Yeong-jun;Cho, Hyun-Taek;Kim, Ayoung
    • The Journal of Korea Robotics Society / v.11 no.2 / pp.51-59 / 2016
  • In this paper, we present (1) an analysis of imaging sonar measurements for two-view relative pose estimation of an autonomous vehicle and (2) a bundle adjustment and 3D reconstruction method using imaging sonar. Sonar has been a popular sensor for underwater applications due to its robustness to water turbidity and the limited visibility of the water medium. While vision-based motion estimation has been applied to many ground vehicles for motion estimation and 3D reconstruction, imaging sonar poses challenges for estimating relative sensor-frame motion. We focus on the fact that the sonar measurement inherently contains ambiguity. This paper illustrates the source of that ambiguity and summarizes the assumptions required for sonar-based robot navigation. For validation, we synthetically generated underwater seafloor scenes of varying complexity to analyze the error in the motion estimation.
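
The measurement ambiguity discussed above can be illustrated in a few lines: an imaging sonar reports range and bearing but integrates over the elevation angle, so distinct 3D points yield identical measurements. The geometry and point values below are illustrative, not the paper's model.

```python
# Sketch of the imaging-sonar ambiguity: elevation is not observed,
# so different 3D points can produce the same (range, bearing) pair.
import numpy as np

def sonar_measurement(p):
    """Map a 3D point (x, y, z) in the sonar frame to (range, bearing)."""
    x, y, z = p
    r = np.sqrt(x**2 + y**2 + z**2)        # slant range
    theta = np.arctan2(y, x)               # horizontal bearing
    return r, theta                        # elevation angle is lost

# Two points at different elevations on the same range/bearing arc:
p1 = np.array([3.0, 1.0, 0.5])
r, th = sonar_measurement(p1)
phi = 0.4                                  # a different elevation angle
p2 = np.array([r * np.cos(phi) * np.cos(th),
               r * np.cos(phi) * np.sin(th),
               r * np.sin(phi)])

m1, m2 = sonar_measurement(p1), sonar_measurement(p2)
print(np.allclose(m1, m2), np.allclose(p1, p2))
```

This is precisely why two-view relative pose from sonar needs extra assumptions (e.g., a locally planar seafloor) that a camera-based pipeline does not.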

Depthmap Generation with Registration of LIDAR and Color Images with Different Field-of-View (다른 화각을 가진 라이다와 칼라 영상 정보의 정합 및 깊이맵 생성)

  • Choi, Jaehoon;Lee, Deokwoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.6 / pp.28-34 / 2020
  • This paper proposes an approach to the fusion of two heterogeneous sensors with different fields of view (FOV): a LIDAR and an RGB camera. Registration between the data captured by the two sensors provides the fusion result, and registration is complete once a depthmap corresponding to the 2-dimensional RGB image has been generated. For this fusion, an RPLIDAR-A3 (manufactured by Slamtec) and a general digital camera were used to acquire depth and image data, respectively. The LIDAR sensor provides the distance between the sensor and objects in the nearby scene, while the RGB camera provides a 2-dimensional image with color information. Fusing the 2D image with depth information enables better performance in applications such as object detection and tracking; driver assistance systems, robotics, and other systems that require visual information processing may find this work useful. Since the LIDAR provides only depth values, a depthmap corresponding to the RGB image must be processed and generated. Experimental results are provided to validate the proposed approach.
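
The core registration step can be sketched as projecting LIDAR points, already expressed in the camera frame, through the camera intrinsics to fill a sparse depthmap aligned with the RGB pixel grid. The intrinsic matrix and point set below are illustrative assumptions, not the calibration of the RPLIDAR-A3 setup in the paper.

```python
# Sketch: LIDAR-to-camera registration as a pinhole projection.
# Intrinsics and points are illustrative, not calibrated values.
import numpy as np

K = np.array([[500.0, 0.0, 320.0],       # fx, 0, cx
              [0.0, 500.0, 240.0],       # 0, fy, cy
              [0.0,   0.0,   1.0]])
H, W = 480, 640

# LIDAR points already transformed into the camera frame (meters).
points = np.array([[0.2, 0.1, 2.0],
                   [-0.5, 0.0, 3.0],
                   [0.0, -0.2, 1.5]])

depthmap = np.zeros((H, W))
for X, Y, Z in points:
    if Z <= 0:
        continue                          # point behind the camera
    u, v, _ = (K @ np.array([X, Y, Z])) / Z
    u, v = int(round(u)), int(round(v))
    if 0 <= u < W and 0 <= v < H:
        depthmap[v, u] = Z                # depth aligned to the RGB pixel

print(int((depthmap > 0).sum()))
```

A dense depthmap then follows from interpolating between these sparse hits, which is the "processing and generation" the abstract refers to.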

Computational generation method of elemental images using a Kinect sensor in 3D depth-priority integral imaging (3D 깊이우선 집적영상 디스플레이에서의 키넥트 센서를 이용한 컴퓨터적인 요소영상 생성방법)

  • Ryu, Tae-Kyung;Oh, Yongseok;Jeong, Shin-Il
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.1 / pp.167-174 / 2016
  • In this paper, we propose a method for generating 2D elemental images of 3D objects using a Kinect sensor in a 3D depth-priority integral imaging (DPII) display. First, we analyze the principle of elemental-image pickup based on ray optics. Based on this analysis, elemental images are generated from the RGB and depth images recorded by the Kinect. We reconstruct 3D images from the elemental images using the computational integral imaging reconstruction technique and compare various perspective images. To show the usefulness of the proposed method, we carried out preliminary experiments. The experimental results reveal that our method can provide correct 3D images with full parallax.
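
A highly simplified sketch of elemental-image pickup from an RGB-D pair: each pinhole in the lens array sees the scene from a slightly shifted viewpoint, with a parallax shift proportional to 1/depth. The array sizes, shift gain, and flat-object scene are illustrative assumptions, far cruder than the ray-optics pickup analyzed in the paper.

```python
# Toy pickup: one elemental image per pinhole, pixels shifted by
# a disparity ~ 1/depth. Geometry and constants are illustrative.
import numpy as np

H, W = 64, 64
rgb = np.zeros((H, W), dtype=float)
rgb[24:40, 24:40] = 1.0                    # a bright square "object"
depth = np.full((H, W), 2.0)               # object plane at 2 m
gain = 20.0                                # px of shift per (1/m) per lens step

def elemental_image(ix, iy):
    """View through pinhole (ix, iy): shift pixels by disparity gain/depth."""
    out = np.zeros_like(rgb)
    ys, xs = np.indices(rgb.shape)
    dx = (ix * gain / depth).round().astype(int)
    dy = (iy * gain / depth).round().astype(int)
    nx, ny = xs + dx, ys + dy
    ok = (nx >= 0) & (nx < W) & (ny >= 0) & (ny < H)
    out[ny[ok], nx[ok]] = rgb[ys[ok], xs[ok]]
    return out

center = elemental_image(0, 0)
shifted = elemental_image(1, 0)            # neighboring lens: object shifts
print(center.sum(), shifted.sum())
```

Reconstruction (the computational integral imaging step) is essentially this mapping run in reverse, back-shifting and averaging the elemental images at a chosen depth plane.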

Land cover classification using LiDAR intensity data and neural network

  • Minh, Nguyen Quang;Hien, La Phu
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.29 no.4 / pp.429-438 / 2011
  • LiDAR technology combines laser ranging, satellite positioning, and digital imaging to determine true earth-surface features in 3D with high accuracy. Laser scanning data is typically a point cloud on the ground, including coordinates, altitude, and the intensity of the laser return from objects on the ground to the sensor (Wehr & Lohr, 1999). Laser scanning data can yield products such as digital elevation models (DEM), digital surface models (DSM), and intensity data. In Vietnam, LiDAR technology has been applied since 2005, but mostly for topographic mapping and DEM generation using 3D point-cloud coordinates. This study presents another application of LiDAR data: the intensity image is combined with other data sets (elevation data, panchromatic image, RGB image) of Bac Giang City to perform land cover classification using a neural network method. The results show that land cover classes can be obtained from LiDAR data; however, the highest classification accuracy is achieved by combining LiDAR data with the other data sets, and neural network classification is a more appropriate approach than conventional methods such as maximum likelihood classification.
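
The classification stage described above can be sketched as feeding per-pixel feature vectors, LiDAR intensity plus elevation plus RGB, into a small neural network. The synthetic feature distributions and class labels below are illustrative stand-ins, not the Bac Giang data set.

```python
# Sketch: per-pixel land cover classification with a small MLP.
# Feature means/spreads and classes are synthetic, for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 600
# Features per pixel: [intensity, elevation, R, G, B]
water = rng.normal([0.1, 0.0, 0.2, 0.3, 0.5], 0.05, (n, 5))
veg   = rng.normal([0.4, 0.2, 0.2, 0.6, 0.2], 0.05, (n, 5))
built = rng.normal([0.8, 0.5, 0.6, 0.6, 0.6], 0.05, (n, 5))

X = np.vstack([water, veg, built])
y = np.repeat([0, 1, 2], n)               # 0=water, 1=vegetation, 2=built-up

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X, y)
acc = clf.score(X, y)
print(round(acc, 3))
```

The study's comparison against maximum likelihood classification amounts to swapping this MLP for a per-class Gaussian model over the same feature vectors.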

Autonomous Control System of Compact Model-helicopter

  • Kang, Chul-Ung;Jun Satake;Takakazu Ishimatsu;Yoichi Shimomoto;Jun Hashimoto
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1998.10a / pp.95-99 / 1998
  • We introduce an autonomous flying system using a model helicopter. A feature of the system is that autonomous flight is realized on a low-cost compact model helicopter. The system is divided into two parts: one on the helicopter and the other on the ground. The helicopter carries a vision sensor and an electronic compass that includes a tilt sensor, while the control system on the ground monitors and controls the helicopter's movement. We first introduce the configuration of the helicopter system with its vision sensor and electronic compass. To determine the 3D position and posture of the helicopter, an image recognition technique using a monocular image is described, based on the idea of fusing the vision sensor and the electronic compass. Finally, we show an experimental result obtained during hovering, which demonstrates the effectiveness of our system on the compact model helicopter.


Comparison Among Sensor Modeling Methods in High-Resolution Satellite Imagery (고해상도 위성영상의 센서모형과 방법 비교)

  • Kim, Eui Myoung;Lee, Suk Kun
    • KSCE Journal of Civil and Environmental Engineering Research / v.26 no.6D / pp.1025-1032 / 2006
  • Sensor modeling of high-resolution satellites is a prerequisite for mapping and GIS applications. Sensor models, which describe the geometric relationship between scene and object space, fall into two main categories: rigorous and approximate. A rigorous model is based on the actual geometry of the image formation process, involving the internal and external characteristics of the sensor. Approximate models, by contrast, require neither a comprehensive understanding of the imaging geometry nor the internal and external characteristics of the imaging sensor, and have therefore attracted great interest in the photogrammetric community. This paper compares rigorous and various approximate sensor models used to determine three-dimensional positions, and proposes appropriate sensor models according to how the satellite imagery is to be used. In a case study using IKONOS scenes, rigorous and approximate sensor models were compared and evaluated for positional accuracy against the number of available ground control points. The bias-compensated RFM (Rational Function Model) performed best among the compared approximate sensor models, and both the modified parallel projection and the parallel-perspective model could be fitted with a small number of control points. The affine transformation, another approximate sensor model, can be used to determine the planimetric position of high-resolution satellite imagery and to perform image registration between scenes.
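
The simplest approximate model mentioned above, a 2D affine transform for planimetric positioning, reduces to a six-parameter least-squares fit against ground control points. The control-point coordinates and the "true" transform below are synthetic, for illustration only.

```python
# Sketch: fitting an affine sensor model to ground control points.
# Coordinates and the simulated transform are illustrative.
import numpy as np

# Ground coordinates (X, Y) and corresponding image coordinates (col, row).
ground = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_A = np.array([[0.5, 0.1], [-0.1, 0.5]])
true_t = np.array([10.0, 20.0])
image = ground @ true_A.T + true_t        # simulated GCP observations

# Solve image = ground @ A.T + t for the 6 affine parameters.
G = np.hstack([ground, np.ones((len(ground), 1))])
params, *_ = np.linalg.lstsq(G, image, rcond=None)
A_est, t_est = params[:2].T, params[2]

residual = np.abs(ground @ A_est.T + t_est - image).max()
print(residual)
```

The RFM generalizes this to ratios of cubic polynomials in (X, Y, Z); the "bias compensation" the paper evaluates adds a small image-space correction (typically an affine term like this one) on top of the vendor-supplied rational coefficients.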

Real-time 3D Converting System using Stereoscopic Video (스테레오 비디오를 이용한 실시간 3차원 입체 변환 시스템)

  • Seo, Young-Ho;Choi, Hyun-Jun;Kim, Dong-Wook
    • The Journal of Korean Institute of Communications and Information Sciences / v.33 no.10C / pp.813-819 / 2008
  • In this paper, we implemented a real-time system that displays 3-dimensional (3D) stereoscopic images from a stereo camera. The system consists of a stereo camera, an FPGA board, and a 3D stereoscopic LCD. Two CMOS image sensors were used for the stereo camera. The FPGA that processes the video data was designed in Verilog-HDL and can accommodate videos of various resolutions. The stereoscopic image is configured by two methods: side-by-side and up-down image configuration. After the left and right images are converted into the format required by the stereoscopic display, they are stored in SDRAM. While the next frame is input to the FPGA from the two CMOS image sensors, the previous video data is output to the DA converter for display. This pipelined operation makes real-time operation possible. After implementing the proposed system in hardware, we verified that it operates correctly.
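
The two frame layouts the FPGA produces can be sketched in software: each source view is subsampled to half resolution along one axis, then packed side-by-side or up-down into a single frame of the original size. Plain arrays stand in for sensor data; the 640x480 resolution is an illustrative assumption.

```python
# Sketch of the two stereoscopic frame layouts (side-by-side, up-down).
# Constant arrays stand in for the left/right CMOS sensor outputs.
import numpy as np

def side_by_side(left, right):
    """Halve horizontal resolution and pack left|right into one frame."""
    return np.hstack([left[:, ::2], right[:, ::2]])

def up_down(left, right):
    """Halve vertical resolution and stack left over right."""
    return np.vstack([left[::2, :], right[::2, :]])

left = np.full((480, 640), 1, dtype=np.uint8)    # stand-in left view
right = np.full((480, 640), 2, dtype=np.uint8)   # stand-in right view

sbs = side_by_side(left, right)
ud = up_down(left, right)
print(sbs.shape, ud.shape)    # both keep the original 480x640 frame size
```

In the hardware pipeline this packing happens on the fly as pixels stream in, with SDRAM double-buffering the previous frame for display.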

Extraction of 3D Building Information using Shadow Analysis from Single High Resolution Satellite Images (단일 고해상도 위성영상으로부터 그림자를 이용한 3차원 건물정보 추출)

  • Lee, Tae-Yoon;Lim, Young-Jae;Kim, Tae-Jung
    • Journal of Korean Society for Geospatial Information Science / v.14 no.2 s.36 / pp.3-13 / 2006
  • Extraction of man-made objects from high-resolution satellite images has been studied by many researchers. To reconstruct accurate 3D building structures, most previous approaches assumed 3D information obtained by stereo analysis, which requires sensor modeling and related processing. We argue that a single image itself contains many clues to 3D information. The proposed algorithm projects a virtual shadow onto the image; when the virtual shadow matches the actual shadow, the height of the building is determined. Once the height is determined, the algorithm draws the vertical lines of the building's sides onto the image, moves the roof boundary along these vertical lines, and extracts the building footprint. The algorithm can use shadows cast onto the ground surface as well as onto the facades of neighboring buildings. This study compared building heights determined by the proposed algorithm with those calculated by stereo analysis; the root mean square error of the building heights was about 1.5 m.
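
The geometric core of the virtual-shadow match is the relation between building height, shadow length, and sun elevation on flat ground; the paper's algorithm searches over candidate heights until the projected shadow fits. The numbers below are illustrative, not from the paper, and the flat-ground formula ignores the facade-shadow case the algorithm also handles.

```python
# Sketch: building height from shadow length under a known sun elevation.
# Values are illustrative; assumes a vertical wall on level ground.
import math

def building_height(shadow_length_m, sun_elevation_deg):
    """Height of a vertical structure from its shadow on level ground."""
    return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

# A 30 m shadow under a 45-degree sun implies a 30 m building.
h = building_height(30.0, 45.0)
print(round(h, 2))
```

Sun azimuth and elevation come from the satellite image metadata, which is what lets a single image stand in for a stereo pair here.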


View Point Tracking for Parallax Barrier Display Using a Low Cost 3D Imager

  • Wi, Sung-Min;Kim, Dong-Wook
    • Journal of the Korea Computer Industry Society / v.9 no.3 / pp.105-114 / 2008
  • We present an eye-tracking system using a low-cost 3D CMOS imager for 3D displays, which ensures a correct autostereoscopic view of position-dependent stereoscopic 3D images. The tracker can segment foreground objects (the viewer) from background objects using their relative distance from the camera. It is based on a novel 3D CMOS image sensor operating on the Time-of-Flight (TOF) principle with innovative photon-gating techniques. Its key feature is real-time depth imaging, achieved by capturing the shape of a light-pulse front as it is reflected from a three-dimensional object. The basic architecture and main building blocks of a real-time depth CMOS pixel are described. For this application, we use a stereoscopic display based on parallax barrier elements, which is described as well.
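
The TOF principle behind the depth pixel reduces to one relation: depth is half the round-trip distance of the reflected light pulse. The pulse timing below is an illustrative value, not a figure from the paper; the photon-gating hardware effectively measures this round-trip time per pixel.

```python
# Sketch of the time-of-flight depth relation: d = c * t_round_trip / 2.
# The 20 ns round-trip time is an illustrative value.
c = 299_792_458.0            # speed of light (m/s)

def tof_depth(round_trip_s):
    """Depth from the round-trip time of a reflected light pulse."""
    return c * round_trip_s / 2.0

# A pulse returning after 20 ns corresponds to roughly 3 m.
d = tof_depth(20e-9)
print(round(d, 3))
```

Thresholding such per-pixel depths against a cutoff distance is what lets the tracker separate the viewer from the background without any appearance model.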
