• Title/Summary/Keyword: depth information sensors


Implementation of Water Depth Indicator using Contactless Smart Sensors (비접촉식 스마트센서 기반 수위측정 방법 구현)

  • Kim, Minhwan;Lee, Jinhee;Song, Giltae
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.23 no.6
    • /
    • pp.733-739
    • /
    • 2019
  • Water level measurement is in high demand in IoT monitoring areas such as smart factories, smart farms, and smart fish farms. However, existing water level indicators see limited industrial use as commercial products due to the high cost of their sensors and the complexity of their algorithms. To solve these problems, this paper proposes water level measurement methods using an infrared distance sensor and a Hall sensor, both of which are contactless smart sensors. Data errors caused by sensor inaccuracy are reduced by applying new, simple structures, which enhances versatility. The performance of our method was validated with simulation-based experiments. We expect that this new water depth indicator can be extended to a general-purpose, IoT-based water level monitoring system.
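
The abstract gives no implementation details, so the following is only a minimal sketch of the contactless idea: an infrared distance sensor mounted above the tank measures the distance to the water surface, and the level is the mounting height minus a noise-smoothed reading. The tank depth, the noise model, and `read_distance_cm()` are illustrative assumptions, not the authors' design.

```python
# Hypothetical sketch of contactless water-level estimation with an
# IR distance sensor mounted above the tank (values are assumptions).
from collections import deque
import random

TANK_DEPTH_CM = 100.0   # distance from sensor to tank bottom (assumed)
WINDOW = 8              # moving-average window to damp sensor noise

def read_distance_cm():
    """Stand-in for an IR distance sensor read (noisy simulated value)."""
    true_distance = 40.0            # sensor-to-surface distance
    return true_distance + random.gauss(0.0, 0.8)

def main():
    samples = deque(maxlen=WINDOW)
    for _ in range(20):
        samples.append(read_distance_cm())
    smoothed = sum(samples) / len(samples)
    # Water level = mounting height minus smoothed surface distance.
    print(f"estimated water level: {TANK_DEPTH_CM - smoothed:.1f} cm")

if __name__ == "__main__":
    main()
```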

Autonomous Driving Platform using Hybrid Camera System (복합형 카메라 시스템을 이용한 자율주행 차량 플랫폼)

  • Eun-Kyung Lee
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.18 no.6
    • /
    • pp.1307-1312
    • /
    • 2023
  • In this paper, we propose a hybrid camera system that combines cameras with different focal lengths and a LiDAR (Light Detection and Ranging) sensor to address the core components of autonomous driving perception: object recognition and distance measurement. Using the proposed system, we extract objects within the scene and generate precise location and distance information for them. First, we employ the YOLOv7 algorithm, widely used in autonomous driving for its fast computation, high accuracy, and real-time processing, to recognize objects in the scene. We then use the multi-focal cameras to create depth maps and derive object positions and distances. To enhance distance accuracy, we integrate the 3D range information obtained from the LiDAR sensor with the generated depth maps. Based on the proposed hybrid camera system, we introduce an autonomous vehicle platform that perceives its surroundings more accurately during operation and provides precise 3D spatial location and distance information, which we expect to improve the safety and efficiency of autonomous vehicles.
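
As a rough illustration of the camera/LiDAR fusion step (not the paper's implementation), the sketch below attaches a distance to a detected bounding box by taking the median of the depth map inside the box and blending it with a LiDAR range where one is available. The blending weight and all data are assumptions.

```python
# Hedged sketch: per-object distance from a depth map, refined by LiDAR.
import numpy as np

def object_distance(depth_map, box, lidar_range=None, lidar_weight=0.7):
    """box = (x1, y1, x2, y2) in pixels; depth_map in meters."""
    x1, y1, x2, y2 = box
    patch = depth_map[y1:y2, x1:x2]
    cam_est = float(np.median(patch[patch > 0]))  # ignore invalid zeros
    if lidar_range is None:
        return cam_est
    # Blend: LiDAR is metrically accurate, the depth map is dense.
    return lidar_weight * lidar_range + (1 - lidar_weight) * cam_est

depth_map = np.full((480, 640), 12.0)             # synthetic stand-in data
print(object_distance(depth_map, (100, 100, 200, 220), lidar_range=11.6))
```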

Analysis of 3D Reconstruction Accuracy by ToF-Stereo Fusion (ToF와 스테레오 융합을 이용한 3차원 복원 데이터 정밀도 분석 기법)

  • Jung, Sukwoo;Lee, Youn-Sung;Lee, KyungTaek
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.466-468
    • /
    • 2022
  • 3D reconstruction is an important issue in many applications such as Augmented Reality (AR), eXtended Reality (XR), and the Metaverse. For 3D reconstruction, a depth map can be acquired with a stereo camera and a time-of-flight (ToF) sensor. We use both sensors complementarily to improve the accuracy of the 3D data. First, we apply a general multi-camera calibration technique that uses both color and depth information. Next, the depth maps of the two sensors are fused by a 3D registration and reprojection approach. The fused data are compared with ground truth data reconstructed using an RTC360 sensor. We used Geomagic Wrap to analyze the average RMSE between the two data sets. The proposed procedure was implemented and tested with real-world data.
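
The paper computes the average RMSE against the RTC360 ground truth in Geomagic Wrap; as a stand-alone approximation of that metric, the sketch below measures, for each fused point, the distance to its nearest ground-truth point and reports the RMSE. The synthetic clouds are placeholders, not the paper's data.

```python
# Approximate cloud-to-cloud RMSE via nearest-neighbor distances.
import numpy as np
from scipy.spatial import cKDTree

def cloud_rmse(fused_xyz, ground_truth_xyz):
    """Both arguments are (N, 3) arrays of 3D points in the same frame."""
    tree = cKDTree(ground_truth_xyz)
    dists, _ = tree.query(fused_xyz)   # nearest-neighbor distance per point
    return float(np.sqrt(np.mean(dists ** 2)))

rng = np.random.default_rng(0)
gt = rng.uniform(0, 1, size=(5000, 3))                     # synthetic scan
fused = gt[:2000] + rng.normal(0, 0.002, size=(2000, 3))   # noisy subset
print(f"RMSE: {cloud_rmse(fused, gt):.4f} m")
```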


Depth Map Interpolation Using High Frequency Components (고주파 성분을 이용한 깊이맵의 보간)

  • Jang, Seung-Eun;Kim, Sung-Yeol;Kim, Man-Bae
    • Journal of Broadcast Engineering
    • /
    • v.17 no.3
    • /
    • pp.459-470
    • /
    • 2012
  • In this paper, we propose a method to upsample a low-resolution depth map to a high-resolution version. While conventional camera sensors produce high-resolution color images, the depth maps delivered by range/depth sensors are usually of low resolution. We consider adding high-frequency components to conventional depth map interpolation methods such as bilinear, bicubic, and bilateral interpolation. The proposed method is composed of three steps: high-frequency component extraction, high-frequency component application, and interpolation. Two objective evaluation measures, sharpness degree and blur metric, are used to examine the performance. Experimental results show that the proposed method significantly outperforms the conventional methods, improving sharpness degree by a factor of two and reducing the blur metric by 14%.
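
A hedged sketch of the general idea, interpolation plus a re-injected high-frequency component, is shown below. The Gaussian radius, the gain, and the use of `scipy.ndimage` are illustrative choices, not the authors' exact three-step pipeline.

```python
# Sketch: upsample a depth map, then re-inject high-frequency detail.
import numpy as np
from scipy import ndimage

def hf_upsample(depth_lr, scale=2, sigma=1.0, gain=0.5):
    # Step 1: plain interpolation (cubic spline, order=3).
    up = ndimage.zoom(depth_lr, scale, order=3)
    # Step 2: extract the high-frequency component of the result.
    high_freq = up - ndimage.gaussian_filter(up, sigma)
    # Step 3: add it back to sharpen edges blurred by interpolation.
    return up + gain * high_freq

lr = np.tile(np.linspace(0, 1, 32), (32, 1))  # synthetic low-res depth ramp
print(hf_upsample(lr).shape)                  # (64, 64)
```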

Detection of Moving Objects using Depth Frame Data of 3D Sensor (3D센서의 Depth frame 데이터를 이용한 이동물체 감지)

  • Lee, Seong-Ho;Han, Kyong-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.14 no.5
    • /
    • pp.243-248
    • /
    • 2014
  • This study investigates ways to detect areas of object movement using Kinect depth frames, which provide 3D information regardless of external light sources. To remove noise along object boundaries in the depth data received from the sensor, a blurring technique is applied to the x and y pixel coordinates and a frequency filter to the z coordinate. In addition, a clustering filter based on the amount of change in adjacent pixels is applied to extract the areas of moving objects. The system is also designed to detect movements above a threshold set in the filters, making it applicable to mobile robots. Detected movements can serve security systems when delivered to remote sites over a network, and the approach can be extended to large-scale data.
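
A minimal sketch in the spirit of this pipeline: blur consecutive depth frames, threshold their difference, and keep connected clusters large enough to be real objects. The thresholds and the use of `scipy.ndimage` are assumptions rather than the study's settings.

```python
# Hedged sketch of depth-frame motion detection (parameters assumed).
import numpy as np
from scipy import ndimage

def moving_regions(prev_depth, curr_depth, thresh=30.0, min_pixels=50):
    prev_s = ndimage.gaussian_filter(prev_depth, 1.5)  # damp boundary noise
    curr_s = ndimage.gaussian_filter(curr_depth, 1.5)
    moving = np.abs(curr_s - prev_s) > thresh          # changed pixels
    labels, n = ndimage.label(moving)                  # cluster them
    sizes = ndimage.sum(moving, labels, range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_pixels]
    boxes = ndimage.find_objects(labels)               # bounding slices
    return [boxes[lab - 1] for lab in keep]

a = np.zeros((100, 100))
b = a.copy()
b[40:60, 40:60] = 200.0        # a synthetic object appears
print(moving_regions(a, b))    # one bounding box around the moved region
```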

A Robust Depth Map Upsampling Against Camera Calibration Errors (카메라 보정 오류에 강건한 깊이맵 업샘플링 기술)

  • Kim, Jae-Kwang;Lee, Jae-Ho;Kim, Chang-Ick
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.6
    • /
    • pp.8-17
    • /
    • 2011
  • Recently, fusion camera systems consisting of depth sensors and color cameras have been widely developed with the advent of a new type of sensor, the time-of-flight (ToF) depth sensor. The physical limitations of depth sensors usually yield images of low resolution compared to the corresponding color images. Therefore, a pre-processing module comprising camera calibration, three-dimensional warping, and hole filling is necessary to generate a high-resolution depth map aligned with the image plane of the color image. However, the result of this pre-processing step is usually inaccurate due to errors from the camera calibration and the depth measurement. In this paper, we therefore present a depth map upsampling method that is robust to these errors. First, the confidence of each measured depth value is estimated from the interrelation between the color image and the pre-upsampled depth map. Then, a detailed depth map is generated by a modified kernel regression method that excludes depth values with low confidence. The proposed algorithm guarantees high-quality results in the presence of camera calibration errors. Experimental comparison with other data fusion techniques shows the superiority of the proposed method.
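
As one plausible reading of the confidence-weighted regression step (not the authors' exact formulation), the sketch below averages neighboring depth samples with a spatial Gaussian kernel weighted by a per-sample confidence and excludes samples below a confidence floor. Kernel width, radius, and floor are assumptions.

```python
# Sketch: confidence-weighted kernel regression over a depth map.
import numpy as np

def kernel_regression_fill(depth, confidence, sigma=2.0,
                           conf_floor=0.3, radius=3):
    h, w = depth.shape
    out = np.zeros_like(depth)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma**2))   # Gaussian kernel
    d = np.pad(depth, radius, mode="edge")
    c = np.pad(confidence, radius, mode="edge")
    c = np.where(c >= conf_floor, c, 0.0)   # drop low-confidence samples
    k = 2 * radius + 1
    for y in range(h):
        for x in range(w):
            wd = spatial * c[y:y + k, x:x + k]
            s = wd.sum()
            if s > 0:
                out[y, x] = (wd * d[y:y + k, x:x + k]).sum() / s
            else:
                out[y, x] = depth[y, x]    # no trusted neighbors: keep as-is
    return out

rng = np.random.default_rng(0)
print(kernel_regression_fill(rng.uniform(1, 3, (32, 32)),
                             rng.uniform(0, 1, (32, 32))).shape)
```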

Three-dimensional Head Tracking Using Adaptive Local Binary Pattern in Depth Images

  • Kim, Joongrock;Yoon, Changyong
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.16 no.2
    • /
    • pp.131-139
    • /
    • 2016
  • Recognition of human motion has become a main area of computer vision due to its potential for human-computer interfaces (HCI) and surveillance. Among existing recognition techniques for human motion, head detection and tracking is the basis of all human motion recognition. Various approaches have been tried to detect and trace the position of the human head precisely in two-dimensional (2D) images. However, it remains a challenging problem because human appearance changes greatly with pose and images are affected by illumination changes. To enhance the performance of head detection and tracking, real-time three-dimensional (3D) data acquisition sensors such as time-of-flight sensors and the Kinect depth sensor have recently been used. In this paper, we propose an effective feature extraction method, called the adaptive local binary pattern (ALBP), for depth image based applications. In contrast to the well-known conventional local binary pattern (LBP), the proposed ALBP not only extracts shape information without texture in depth images but is also invariant to distance changes in range images. We apply the proposed ALBP to head detection and tracking in depth images to show its effectiveness and usefulness.
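
The abstract does not define ALBP precisely, so the following is only one plausible reading: a depth LBP whose comparison tolerance scales with the center depth, which is one way to obtain the distance invariance described. The scaling rule `k * center` is an assumption, not the paper's definition.

```python
# Hypothetical depth LBP with a range-adaptive comparison tolerance.
import numpy as np

OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]   # 8 neighbors, clockwise

def albp_code(depth, y, x, k=0.01):
    center = depth[y, x]
    tol = k * center                 # adaptive: tolerance grows with range
    code = 0
    for bit, (dy, dx) in enumerate(OFFSETS):
        if depth[y + dy, x + dx] > center + tol:
            code |= 1 << bit
    return code

d = np.fromfunction(lambda y, x: 1000 + 5 * x, (16, 16))  # synthetic ramp
print(albp_code(d, 8, 8, k=0.001))   # only rightward neighbors set bits
```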

3D object generation based on the depth information of an active sensor (능동형 센서의 깊이 정보를 이용한 3D 객체 생성)

  • Kim, Sang-Jin;Yoo, Ji-Sang;Lee, Seung-Hyun
    • Journal of the Korea Computer Industry Society
    • /
    • v.7 no.5
    • /
    • pp.455-466
    • /
    • 2006
  • In this paper, 3D objects are created from a real scene using an active sensor that captures depth and RGB information. To obtain the depth information, this paper uses the Zcam™ camera, which has a built-in active sensor module. [...] Thirdly, the detailed parameters are calibrated and a 3D mesh model is created from the depth information; neighboring points are then connected to complete the 3D mesh model. Finally, the color image data is applied to the mesh model and mapping is carried out to create the 3D object. Experiments show that creating 3D objects from the data of a camera with active sensors is feasible, and that this method is easier and more useful than using a 3D range scanner.
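
A minimal sketch of the mesh-from-depth step described here: back-project each depth pixel through pinhole intrinsics, then connect each 2x2 pixel neighborhood into two triangles. The intrinsics (`fx`, `fy`, `cx`, `cy`) are made-up values, not the Zcam parameters.

```python
# Sketch: depth map -> vertices (pinhole back-projection) -> triangle mesh.
import numpy as np

def depth_to_mesh(depth, fx=525.0, fy=525.0, cx=None, cy=None):
    h, w = depth.shape
    cx = w / 2 if cx is None else cx
    cy = h / 2 if cy is None else cy
    v, u = np.mgrid[0:h, 0:w]
    z = depth
    # Back-project pixel (u, v) with depth z to a 3D point.
    verts = np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=-1)
    verts = verts.reshape(-1, 3)
    idx = np.arange(h * w).reshape(h, w)
    tris = []
    for y in range(h - 1):
        for x in range(w - 1):
            a, b = idx[y, x], idx[y, x + 1]
            c, d = idx[y + 1, x], idx[y + 1, x + 1]
            tris += [(a, b, c), (b, d, c)]   # connect neighboring points
    return verts, np.array(tris)

verts, tris = depth_to_mesh(np.full((4, 4), 2.0))
print(verts.shape, tris.shape)   # (16, 3) (18, 3)
```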


Image Feature-Based Real-Time RGB-D 3D SLAM with GPU Acceleration (GPU 가속화를 통한 이미지 특징점 기반 RGB-D 3차원 SLAM)

  • Lee, Donghwa;Kim, Hyongjin;Myung, Hyun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.5
    • /
    • pp.457-461
    • /
    • 2013
  • This paper proposes an image feature-based real-time RGB-D (Red-Green-Blue Depth) 3D SLAM (Simultaneous Localization and Mapping) system. RGB-D data from Kinect-style sensors contain a 2D image and per-pixel depth information. 6-DOF (Degree-of-Freedom) visual odometry is obtained through the 3D-RANSAC (RANdom SAmple Consensus) algorithm with 2D image features and depth data. To speed up feature extraction, parallel computation is performed with GPU acceleration. After a feature manager detects a loop closure, a graph-based SLAM algorithm optimizes the trajectory of the sensor and builds a 3D point cloud based map.
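
A compact sketch of the 3D-RANSAC visual-odometry step the abstract mentions: given matched 3D feature points from two RGB-D frames, sample minimal sets, fit a rigid transform via SVD (the Kabsch method), and keep the hypothesis with the most inliers. The inlier threshold and iteration count are assumptions.

```python
# Sketch: rigid transform between matched 3D point sets via RANSAC + SVD.
import numpy as np

def fit_rigid(P, Q):
    """Least-squares rotation R and translation t with R @ p + t ~= q."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:     # fix an improper reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def ransac_pose(P, Q, iters=200, thresh=0.03, seed=0):
    rng = np.random.default_rng(seed)
    best = (None, None, -1)
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)  # minimal sample
        R, t = fit_rigid(P[idx], Q[idx])
        err = np.linalg.norm((P @ R.T + t) - Q, axis=1)
        n_in = int((err < thresh).sum())
        if n_in > best[2]:
            best = (R, t, n_in)
    return best

P = np.random.default_rng(1).uniform(-1, 1, (100, 3))
Q = P + np.array([0.1, 0.0, 0.0])        # pure translation between frames
R, t, inliers = ransac_pose(P, Q)
print(inliers, t)                        # 100 inliers, t ~= [0.1, 0, 0]
```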

The analysis on TMA gas-sensing characteristics of ZnO thin film sensors (ZnO 박막 센서의 TMA 가스 검지 특성 분석)

  • 류지열;박성현;최혁환;김진섭;이명교;권태하
    • Journal of the Korean Institute of Telematics and Electronics D
    • /
    • v.34D no.12
    • /
    • pp.46-53
    • /
    • 1997
  • TMA gas sensors were fabricated with ZnO-based thin films grown by an RF magnetron sputtering method. Hall effect measurements and AES analysis were carried out to investigate the effects of the sputtering gases and dopants on the electrical resistivity and the sensitivity to TMA gas. We measured the changes in surface carrier concentration, Hall electron mobility, electrical resistivity, surface condition, and depth profile of the films. The ZnO-based thin-film sensors sputtered in oxygen or doped showed a high surface carrier concentration. The film sensors sputtered in oxygen and doped with 4.0 wt.% $\mathrm{Al_2O_3}$, 1.0 wt.% $\mathrm{TiO_2}$, and 0.2 wt.% $\mathrm{V_2O_5}$ showed the highest surface carrier concentration of $5.952 \times 10^{20}\,\mathrm{cm^{-3}}$, the highest Hall electron mobility of $176.7\,\mathrm{cm^2/V{\cdot}s}$, the lowest electrical resistivity of $6 \times 10^{-5}\,\Omega{\cdot}\mathrm{cm}$, and the highest sensitivity of 12. These results were measured at a working temperature of $300\,^{\circ}\mathrm{C}$ under 8 ppm TMA gas.
