• Title/Summary/Keyword: 3D image sensor

Three-dimensional Head Tracking Using Adaptive Local Binary Pattern in Depth Images

  • Kim, Joongrock; Yoon, Changyong
    • International Journal of Fuzzy Logic and Intelligent Systems, v.16 no.2, pp.131-139, 2016
  • Recognition of human motion has become a major area of computer vision because of its potential for human-computer interfaces (HCI) and surveillance. Among existing recognition techniques for human motion, head detection and tracking is the basis for all human motion recognition. Various approaches have been tried to detect and trace the position of the human head precisely in two-dimensional (2D) images. However, it is still a challenging problem because human appearance varies greatly with pose, and images are affected by illumination changes. To enhance the performance of head detection and tracking, real-time three-dimensional (3D) data acquisition sensors such as time-of-flight and Kinect depth sensors have recently been used. In this paper, we propose an effective feature extraction method, called adaptive local binary pattern (ALBP), for depth-image-based applications. In contrast to the well-known conventional local binary pattern (LBP), the proposed ALBP not only extracts shape information without texture in depth images, but is also invariant to distance changes in range images. We apply the proposed ALBP to head detection and tracking in depth images to show its effectiveness and usefulness.
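
As a rough illustration of the idea described above, here is a minimal NumPy sketch of a depth LBP whose comparison threshold scales with the center pixel's depth, so the same surface yields the same code at different ranges. The scaling rule and the `alpha` parameter are assumptions for illustration; the paper's actual ALBP definition may differ.

```python
import numpy as np

def adaptive_lbp(depth, alpha=0.01):
    """8-neighbor LBP on a depth image with a depth-adaptive threshold.

    A neighbor contributes a 1-bit when it lies within alpha * center_depth
    of the center pixel, so shape is encoded independently of range.
    (Sketch only; the paper's exact adaptation rule may differ.)
    """
    h, w = depth.shape
    codes = np.zeros((h, w), dtype=np.uint8)
    # Offsets of the 8 neighbors, ordered clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = depth[1:-1, 1:-1]
    threshold = alpha * center            # tolerance grows with distance
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = depth[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        mask = (np.abs(neighbor - center) <= threshold).astype(np.uint8)
        codes[1:-1, 1:-1] |= mask << bit
    return codes

depth = np.random.uniform(500.0, 4000.0, (120, 160))  # synthetic range image (mm)
print(adaptive_lbp(depth).shape)                      # -> (120, 160)
```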

Development of the Program for Reconnaissance and Exploratory Drones based on Open Source

  • Chae, Bum-sug; Kim, Jung-hwan
    • IEMEK Journal of Embedded Systems and Applications, v.17 no.1, pp.33-40, 2022
  • With the recent increase in the development of military drones, they are being adopted into combat systems at the battalion level or higher. However, under the current conditions for unit formation in the Korean military, it is difficult to use drones in battles below the platoon level. In this paper, therefore, we develop a program for reconnaissance and exploration drones equipped with a thermal imaging camera and a LiDAR sensor that can be applied in battles below the platoon level. Using these drones, we study the possibility and feasibility of drones for small-scale combat that can find hidden enemies, search for an appropriate detour, and conduct reconnaissance and search of battlefields, hiding spots, and cover through image processing. Beyond searching for enemies lying in ambush on the battlefield, the proposed drone can be used to check the optimal movement path when a combat unit is moving, or to identify the best places for cover or concealment. In particular, because the features of the terrain can be inspected from various viewpoints through 3D modeling, routes other than the one recommended by the program can also be checked. We verified flight capability by designing and assembling a drone that adds LiDAR and thermal-imaging-camera modules to a platform built from open-source racing-drone parts, developed autonomous flight and search functions based on open-source software that can be used even by non-professional drone operators, and installed them to verify their feasibility.

Incorporation of Scene Geometry in Least Squares Correlation Matching for DEM Generation from Linear Pushbroom Images

  • Kim, Tae-Jung; Yoon, Tae-Hun; Lee, Heung-Kyu
    • Proceedings of the KSRS Conference, 1999.11a, pp.182-187, 1999
  • Stereo matching is one of the most crucial parts of DEM generation. Naive stereo matching algorithms often create many holes and blunders in a DEM, so a carefully designed strategy must be employed to guide stereo matching toward “good” 3D information. In this paper, we describe one such strategy, designed around scene geometry, in particular the epipolarity, for generating a DEM from linear pushbroom images. Epipolarity for perspective images is a well-known property: in a stereo image pair, a point in the reference image maps to a line in the search image uniquely defined by the sensor models of the image pair. This concept has been utilized in stereo matching by applying epipolar resampling prior to matching. However, epipolar matching for linear pushbroom images is more complicated. We found that the epipolarity can only be described by a hyperbola-shaped curve and that epipolar resampling cannot be applied to linear pushbroom images. Instead, we developed an algorithm that incorporates this epipolarity directly in least squares correlation matching. Experiments showed that this approach improves the quality of a DEM.
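
The strategy is easiest to see in code. The sketch below restricts normalized cross-correlation matching to candidate pixels lying on the epipolar curve; for linear pushbroom images that curve is hyperbola-shaped and comes from the sensor model, which is outside the scope of this sketch, so `curve_pts` is simply passed in. All names and parameters here are illustrative, not taken from the paper.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else -1.0

def match_along_epipolar_curve(ref, srch, p_ref, curve_pts, half=7):
    """Best match for the patch around p_ref (row, col), testing only
    candidate positions on the sensor-model epipolar curve curve_pts."""
    r, c = p_ref
    patch = ref[r - half:r + half + 1, c - half:c + half + 1]
    best_score, best_pt = -1.0, None
    for rr, cc in curve_pts:
        cand = srch[rr - half:rr + half + 1, cc - half:cc + half + 1]
        if cand.shape != patch.shape:     # candidate window leaves the image
            continue
        score = ncc(patch, cand)
        if score > best_score:
            best_score, best_pt = score, (rr, cc)
    return best_pt, best_score

rng = np.random.default_rng(0)
ref = rng.random((200, 200))
srch = ref.copy()
curve = [(100, c) for c in range(20, 180)]   # stand-in for the real curve
print(match_along_epipolar_curve(ref, srch, (100, 90), curve))
```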

Common Optical System for the Fusion of Three-dimensional Images and Infrared Images

  • Kim, Duck-Lae; Jung, Bo Hee; Kong, Hyun-Bae; Ok, Chang-Min; Lee, Seung-Tae
    • Current Optics and Photonics, v.3 no.1, pp.8-15, 2019
  • We describe a common optical system that merges a LADAR system, which generates a point cloud, and a more traditional imaging system operating in the LWIR, which generates image data. The optimum diameter of the entrance pupil was determined by analysis of the detection ranges of the LADAR sensor, and the result was applied to the design of a common optical system using LADAR and LWIR sensors; the performance of these sensors was then evaluated. The minimum detectable signal of the 128 × 128-pixel LADAR detector was calculated as 20.5 nW. The detection range of the LADAR optical system was calculated to be 1,000 m, and accordingly the optimum diameter of the entrance pupil was determined to be 15.7 cm. The modulation transfer function (MTF) in relation to the diffraction limit of the designed common optical system was analyzed: the MTF of the LADAR optical system was 98.8% at a spatial frequency of 5 cycles per millimeter, while that of the LWIR optical system was 92.4% at a spatial frequency of 29 cycles per millimeter. The detection, recognition, and identification distances of the LWIR optical system were determined to be 5.12, 2.82, and 1.96 km, respectively.
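
For context, "MTF in relation to the diffraction limit" compares the designed system's MTF against that of an ideal diffraction-limited circular-pupil optic, which has a standard closed form. The sketch below evaluates that ideal curve; the 10-µm wavelength and f/2 aperture are illustrative assumptions, not values from the paper.

```python
import numpy as np

def diffraction_mtf(nu, wavelength_mm, f_number):
    """MTF of an ideal (diffraction-limited) circular-pupil optic at
    spatial frequency nu (cycles/mm); the cutoff is 1 / (lambda * F#)."""
    nu_c = 1.0 / (wavelength_mm * f_number)
    x = np.clip(nu / nu_c, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x * x))

# Illustrative numbers only (not from the paper): 10-um LWIR band, f/2.
print(diffraction_mtf(29.0, 10e-3, 2.0))   # ideal MTF at 29 cycles/mm
```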

Dimensional Quality Assessment for Assembly Part of Prefabricated Steel Structures Using a Stereo Vision Sensor

  • Jonghyeok Kim; Haemin Jeon
    • Journal of the Computational Structural Engineering Institute of Korea, v.37 no.3, pp.173-178, 2024
  • This study presents a technique for assessing the dimensional quality of assembly parts in prefabricated steel structures (PSS) using a stereo vision sensor. The stereo vision system captures images and point cloud data of the assembly area, and image processing algorithms such as fuzzy-based edge detection and Hough-transform-based circular bolt hole detection are applied to identify bolt hole locations. The 3D center position of each bolt hole is determined by correlating 3D real-world position information from depth images with the extracted bolt hole positions. Principal component analysis (PCA) is then employed to calculate coordinate axes for precise measurement of distances between bolt holes, even when the sensor and structure orientations differ. Bolt holes are sorted based on their 2D positions, and the distances between sorted bolt holes are calculated to assess the assembly part's dimensional quality. Comparison with actual drawing data confirms measurement accuracy, with an absolute error of 1 mm and a relative error within 4% based on median criteria.
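
A compressed sketch of the pipeline as the abstract describes it: Hough circle detection for hole centers in the image, PCA over the corresponding 3-D centers to obtain a part-aligned frame, then sorting and hole-to-hole distances. The Hough parameters are illustrative, and the fuzzy edge-detection stage and the depth-image lookup that lifts 2-D detections to 3-D are omitted.

```python
import numpy as np
import cv2

def hole_centers_2d(gray):
    """Detect circular bolt holes with a Hough transform (parameters are
    illustrative; the paper applies fuzzy-based edge detection first)."""
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=20, param1=100, param2=30,
                               minRadius=5, maxRadius=40)
    return np.empty((0, 2)) if circles is None else circles[0][:, :2]

def pca_axes(points_3d):
    """Principal axes of the 3-D hole centers, so distances can be read
    in the part's own frame regardless of sensor orientation."""
    centered = points_3d - points_3d.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt                              # rows: principal directions

def sorted_hole_distances(points_3d):
    """Sort holes by position in the PCA frame, then measure the
    distance between consecutive holes."""
    axes = pca_axes(points_3d)
    local = (points_3d - points_3d.mean(axis=0)) @ axes.T
    order = np.lexsort((local[:, 1], local[:, 0]))
    p = points_3d[order]
    return np.linalg.norm(np.diff(p, axis=0), axis=1)
```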

CMOS Analog Front-End for CCD Image Sensors

  • Kim, Dae-Jeong; Nam, Jeong-Kwon
    • Journal of IKEEE, v.13 no.1, pp.41-48, 2009
  • This paper describes an implementation of the analog front end (AFE) integrated with the image signal processing (ISP) unit in an SoC, which dominates the performance of a CCD image sensor system. New schemes are exploited in high-frequency sampling, to reduce the sampling uncertainty that grows as the frequency increases, and in the structure of a wide-range variable gain amplifier (VGA) capable of 0-36 dB exponential gain control, meeting the needed bandwidth and accuracy by adopting a new parasitic-insensitive capacitor array. Moreover, double cancellation of black-level noise was efficiently achieved in both the analog and the digital domain. The proposed topology, fabricated in a 0.35-µm CMOS process, was proven in a full CCD camera system of 10-bit accuracy, dissipating 80 mA at 15 MHz with a 3.3 V supply voltage.
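
The "exponential gain control" mentioned here is a dB-linear characteristic: each step of the digital control code adds a fixed number of dB, so the voltage gain grows exponentially with the code. A minimal sketch, assuming a 10-bit control word (the resolution is not stated in the abstract):

```python
def vga_gain(code, n_bits=10, max_db=36.0):
    """dB-linear VGA: every control-code step adds the same number of dB,
    so the linear voltage gain is exponential in the code."""
    gain_db = max_db * code / (2 ** n_bits - 1)
    return 10.0 ** (gain_db / 20.0)        # linear voltage gain

# Code 0 gives 0 dB (gain 1.0); full scale gives 36 dB (gain ~63.1).
print(vga_gain(0), vga_gain(2 ** 10 - 1))
```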

Development of High-Sensitivity Detection Sensor and Module for Spatial Distribution Measurement of Multi Gamma Sources (Development of an Operating Environment for Visualization and 3D Modeling of the Spatial Distribution of Gamma-ray Sources)

  • Song, Keun-Young; Lim, Ji-Seok; Choi, Jung-Huk; Yuk, Young-Ho; Hwang, Young-Gwan; Lee, Nam-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2017.10a, pp.702-704, 2017
  • In the case of dismantling a nuclear power generation facility, or of a radiation accident, accurate information on the gamma-ray sources is essential for rapid decontamination. To represent the positions of the gamma-ray sources to be removed more efficiently, we create a spatial domain based on the real image, and by expressing the distribution of the radiation sources in it we can decontaminate them more quickly. The gamma-ray imaging device developed previously overlays the gamma-ray detection result on the visible image, but it provides only a two-dimensional image and does not show the distance to the source. In this paper, we develop an operating environment that uses a 3D visualization model for effective decontamination operations.
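
The 2-D overlay step that the paper extends can be sketched as simple alpha blending of a coarse gamma-ray intensity map onto the visible image; the 3-D operating environment itself is beyond a few lines. The function below is an illustrative OpenCV sketch, not the authors' code, and all parameters are assumptions.

```python
import numpy as np
import cv2

def overlay_gamma(visible_bgr, gamma_counts, alpha=0.45):
    """Blend a coarse gamma-ray intensity map over the visible image
    (sketch of the 2-D overlay step described in the abstract)."""
    norm = cv2.normalize(gamma_counts.astype(np.float32), None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)
    resized = cv2.resize(norm, (visible_bgr.shape[1], visible_bgr.shape[0]),
                         interpolation=cv2.INTER_CUBIC)
    heat = cv2.applyColorMap(resized, cv2.COLORMAP_JET)
    return cv2.addWeighted(visible_bgr, 1.0 - alpha, heat, alpha, 0.0)

visible = np.zeros((480, 640, 3), np.uint8)                   # camera frame
counts = np.random.poisson(3.0, (16, 16)).astype(np.float32)  # detector grid
fused = overlay_gamma(visible, counts)
```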

EMOS: Enhanced moving object detection and classification via sensor fusion and noise filtering

  • Dongjin Lee; Seung-Jun Han; Kyoung-Wook Min; Jungdan Choi; Cheong Hee Park
    • ETRI Journal, v.45 no.5, pp.847-861, 2023
  • Dynamic object detection is essential for ensuring safe and reliable autonomous driving. Recently, light detection and ranging (LiDAR)-based object detection has been introduced and has shown excellent performance on various benchmarks. Although LiDAR sensors have excellent accuracy in estimating distance, they lack texture or color information and have a lower resolution than conventional cameras. In addition, performance degradation occurs when a LiDAR-based object detection model is applied to different driving environments or when sensors from different LiDAR manufacturers are utilized, owing to the domain-gap phenomenon. To address these issues, a sensor-fusion-based object detection and classification method is proposed. The proposed method operates in real time, making it suitable for integration into autonomous vehicles. It performs well on our custom dataset and on publicly available datasets, demonstrating its effectiveness in real-world road environments. In addition, we will make available a novel three-dimensional moving object detection dataset called ETRI 3D MOD.
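
The core of any camera-LiDAR fusion is projecting each LiDAR return into the image so that pixel evidence (color, texture) can be attached to the point. A minimal sketch of that projection step, with assumed intrinsics `K` and extrinsics `(R, t)`; the paper's detection, classification, and noise-filtering stages sit on top of this.

```python
import numpy as np

def project_lidar_to_image(points, K, R, t, min_depth=0.1):
    """Project LiDAR points (N, 3) into pixel coordinates using camera
    intrinsics K (3x3) and LiDAR-to-camera extrinsics R (3x3), t (3,)."""
    cam = points @ R.T + t                 # LiDAR frame -> camera frame
    keep = cam[:, 2] > min_depth           # drop points behind the camera
    uvw = cam[keep] @ K.T
    return uvw[:, :2] / uvw[:, 2:3], keep  # pixel coords + visibility mask

# Toy usage with an identity mounting and a simple pinhole camera.
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 10.0], [1.0, -0.5, 5.0]])
print(project_lidar_to_image(pts, K, np.eye(3), np.zeros(3))[0])
```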

Calibration of VLP-16 Lidar Sensor and Vision Cameras Using the Center Coordinates of a Spherical Object

  • Lee, Ju-Hwan; Lee, Geun-Mo; Park, Soon-Yong
    • KIPS Transactions on Software and Data Engineering, v.8 no.2, pp.89-96, 2019
  • 360-degree 3-dimensional lidar sensors and vision cameras are commonly used in the development of autonomous driving technology for automobiles, drones, etc. However, existing calibration techniques for obtaining the external transformation between the lidar and the camera have the disadvantage that special calibration objects are required or the object size is too large. In this paper, we introduce a simple calibration method between the two sensors using a spherical object. We calculate the sphere center coordinates using four 3-D points selected by RANSAC from the range data of the sphere. The 2-dimensional coordinates of the object center in the camera image are also detected to calibrate the two sensors. Even when the range data are acquired from various angles, the image of the spherical object always maintains a circular shape. The proposed method results in a reprojection error of about 2 pixels, and its performance is analyzed by comparison with existing methods.
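
Two pieces of this method are easy to sketch: the sphere center from four range points (one RANSAC hypothesis; the consensus loop is omitted), and the extrinsic solve once several sphere centers have been matched to their image-circle centers. Here `cv2.solvePnP` stands in for whatever optimization the paper actually uses, so treat this as an assumption-laden sketch.

```python
import numpy as np
import cv2

def sphere_center(p):
    """Center of the sphere through four non-coplanar 3-D points p (4, 3).
    Subtracting |x - c|^2 = r^2 between point pairs leaves three linear
    equations in the center c."""
    A = 2.0 * (p[1:] - p[0])
    b = (p[1:] ** 2).sum(axis=1) - (p[0] ** 2).sum()
    return np.linalg.solve(A, b)

def lidar_camera_extrinsics(centers_3d, centers_2d, K):
    """Rigid transform from >= 4 matched sphere centers (3-D from the
    lidar, 2-D circle centers from the camera image) via EPnP."""
    _, rvec, tvec = cv2.solvePnP(np.asarray(centers_3d, np.float64),
                                 np.asarray(centers_2d, np.float64),
                                 np.asarray(K, np.float64), None,
                                 flags=cv2.SOLVEPNP_EPNP)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec

# Four points on a sphere centered at (1, 2, 3) with radius 2.
p = np.array([[3., 2., 3.], [1., 4., 3.], [1., 2., 5.], [-1., 2., 3.]])
print(sphere_center(p))   # -> [1. 2. 3.]
```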

Camera Calibration Using Neural Network with a Small Amount of Data

  • Do, Yongtae
    • Journal of Sensor Science and Technology, v.28 no.3, pp.182-186, 2019
  • When a camera is employed for 3D sensing, accurate camera calibration is vital, as it is a prerequisite for the subsequent steps of the sensing process. Camera calibration is usually performed by complex mathematical modeling and geometric analysis. In contrast, data learning using an artificial neural network can establish a transformation relation between 3D space and the 2D camera image without explicit camera modeling. However, a neural network requires a large amount of accurate data for learning, and collecting extensive data accurately in practice demands significant time and effort with a precise system setup. In this study, we propose a two-step neural calibration method that is effective when only a small amount of learning data is available. In the first step, the camera projection transformation matrix is determined using the limited available data. In the second step, the transformation matrix is used to generate a large amount of synthetic data, and the neural network is trained on the generated data. Results of a simulation study show that the proposed method is valid and effective.
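
Step one is a classical projection-matrix estimate; the direct linear transform (DLT) is the textbook route, though the abstract does not say which estimator the paper uses, so take that as an assumption. Step two then pours many synthetic 3-D/2-D pairs through that matrix; the network training itself is omitted here.

```python
import numpy as np

def dlt_projection(X, x):
    """Estimate the 3x4 projection matrix P (x ~ P X, up to scale) from
    n >= 6 3-D/2-D correspondences with the direct linear transform."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        A.append([Xw, Yw, Zw, 1, 0, 0, 0, 0, -u*Xw, -u*Yw, -u*Zw, -u])
        A.append([0, 0, 0, 0, Xw, Yw, Zw, 1, -v*Xw, -v*Yw, -v*Zw, -v])
    _, _, vt = np.linalg.svd(np.asarray(A, float))
    return vt[-1].reshape(3, 4)            # null vector of A gives P

def synthesize(P, n=10000, lo=-1.0, hi=1.0, seed=0):
    """Step-two input: project many random 3-D points through P to build
    a large synthetic training set (network training omitted)."""
    rng = np.random.default_rng(seed)
    Xw = rng.uniform(lo, hi, (n, 3))
    Xh = np.hstack([Xw, np.ones((n, 1))])
    xh = Xh @ P.T
    return Xw, xh[:, :2] / xh[:, 2:3]      # 3-D points and their pixels
```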