• Title/Summary/Keyword: depth information sensors

Efficient Filtering for Depth Sensors under Infrared Light Emitting Sources (적외선 방출 조명 조건 하에서 깊이 센서의 효율적인 필터링)

  • Park, Tae-Jung
    • Journal of Digital Contents Society, v.13 no.3, pp.271-278, 2012
  • Recently, infrared (IR)-based depth sensors have proliferated in consumer electronics thanks to falling prices, enabling applications such as gesture recognition in television virtual studios. However, these depth sensors fail to capture depth information correctly under strong IR-emitting light sources, which are very common in television studios. This paper analyzes the mechanism of the interference between depth sensors relying on certain IR frequencies and IR-emitting light sources, and provides methods to recover correct depth information by applying filters. It also describes the experimental methods and presents the results of applying multiple combinations of filters with different cut-off frequencies. Finally, experiments show that the IR interference can be filtered out in practice using the proposed filtering method.

Multiple Color and ToF Camera System for 3D Contents Generation

  • Ho, Yo-Sung
    • IEIE Transactions on Smart Processing and Computing, v.6 no.3, pp.175-182, 2017
  • In this paper, we present a multi-depth generation method using a time-of-flight (ToF) fusion camera system. Parallel multi-view color cameras and ToF depth sensors are used for 3D scene capture. Although each ToF depth sensor can measure scene depth in real time, it has several problems to overcome. Therefore, after capturing low-resolution depth images with the ToF depth sensors, we perform post-processing to resolve these problems. The depth information from each depth sensor is then warped to the color image positions and used as initial disparity values. In addition, the warped depth data are used to generate a depth-discontinuity map for efficient stereo matching. By applying belief-propagation stereo matching with the depth-discontinuity map and the initial disparity information, we obtain more accurate and stable multi-view disparity maps in reduced time.
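The depth-to-disparity warping step this abstract describes rests on the standard relation for parallel cameras, d = f·B/Z. A minimal sketch, with illustrative focal-length, baseline, and depth values (the abstract gives no specific numbers):

```python
# Minimal sketch: converting ToF depth measurements to initial disparity
# values for a parallel stereo rig, d = f * B / Z (in pixels). The focal
# length, baseline, and depths below are illustrative assumptions.

def depth_to_disparity(depth_mm, focal_px, baseline_mm):
    """Disparity in pixels for each depth sample (parallel camera setup)."""
    return [focal_px * baseline_mm / z for z in depth_mm]

# 1000 px focal length, 65 mm baseline; depths of 0.5 m, 1 m, and 2 m
print(depth_to_disparity([500.0, 1000.0, 2000.0], 1000.0, 65.0))
# → [130.0, 65.0, 32.5]  (closer points get larger initial disparity)
```

These per-pixel values would then seed the belief-propagation stereo matcher, narrowing its search range.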

Estimation of Disparity for Depth Extraction in Monochrome CMOS Image Sensors with Offset Pixel Apertures (깊이 정보 추출을 위한 오프셋 화소 조리개가 적용된 단색 CMOS 이미지 센서의 디스패리티 추정)

  • Lee, Jimin;Kim, Sang-Hwan;Kwen, Hyeunwoo;Chang, Seunghyuk;Park, JongHo;Lee, Sang-Jin;Shin, Jang-Kyoo
    • Journal of Sensor Science and Technology, v.29 no.2, pp.123-127, 2020
  • In this paper, the estimation of disparity for depth extraction in monochrome complementary metal-oxide-semiconductor (CMOS) image sensors with offset pixel apertures is presented. To obtain depth information, the disparity between the two channels of data from the offset pixel apertures is required. The disparity arises from the difference in response angle between the left- and right-offset pixel aperture images, and a depth map is built from the generated disparity. The disparity is therefore the most important factor in realizing 3D images from the designed CMOS image sensor with offset pixel apertures. The disparity is influenced by the pixel height and the offset value of the pixel aperture. To confirm this correlation, the offset value is set to the maximum within the pixel area, and the disparity values corresponding to different pixel heights are calculated and compared. The disparity is derived using the camera-lens formula. Two monochrome CMOS image sensors with offset pixel apertures are used in the disparity estimation.
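The camera-lens formula the abstract invokes is the thin-lens relation 1/f = 1/a + 1/b. A simplified illustrative model (our assumption, not the paper's exact derivation): an offset aperture shifts a defocused point laterally in proportion to the aperture offset and the relative defocus. All numeric values are made up for illustration:

```python
# Illustrative sketch, not the paper's exact model: the offset pixel
# aperture forms left/right images whose lateral shift grows with defocus.
# We use the thin-lens formula for the image distance, then scale the
# relative defocus by the aperture offset. All numbers are assumptions.

def image_distance(f_mm, object_mm):
    """Thin-lens formula: 1/f = 1/a + 1/b  ->  b = 1 / (1/f - 1/a)."""
    return 1.0 / (1.0 / f_mm - 1.0 / object_mm)

def disparity_estimate(f_mm, focus_mm, object_mm, offset_mm, pixel_mm):
    """Shift (pixels) between offset-aperture images of a defocused point."""
    b0 = image_distance(f_mm, focus_mm)   # sensor plane (in-focus distance)
    b = image_distance(f_mm, object_mm)   # image distance of actual object
    shift_mm = offset_mm * abs(b - b0) / b  # lateral shift at the sensor
    return shift_mm / pixel_mm

# 4 mm lens focused at 500 mm; object at 250 mm; 1 µm offset, 2 µm pixels
print(disparity_estimate(4.0, 500.0, 250.0, 0.001, 0.002))
```

In this toy model the disparity vanishes for in-focus objects and grows with defocus and with the aperture offset, matching the correlation the abstract investigates.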

Improved 3D Resolution Analysis of N-Ocular Imaging Systems with the Defocusing Effect of an Imaging Lens

  • Lee, Min-Chul;Inoue, Kotaro;Cho, Myungjin
    • Journal of information and communication convergence engineering, v.13 no.4, pp.270-274, 2015
  • In this paper, we propose an improved framework to analyze an N-ocular imaging system under fixed constrained resources such as the number of image sensors, their pixel size, the distance between adjacent sensors, their focal length, and their field of view. The proposed framework takes into consideration, for the first time, the defocusing effect of the imaging lenses as a function of object distance. Based on this framework, an N-ocular imaging system such as integral imaging is analyzed in terms of depth resolution using two-point-source resolution analysis. By modeling the defocusing effect of the imaging lenses with a ray projection model, we show that an improved depth resolution can be obtained near the central depth plane as the number of cameras increases. To validate the proposed framework, Monte Carlo simulations are carried out and the results are analyzed.
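The qualitative claim, that depth resolution improves as cameras are added, can be illustrated with a common first-order approximation (our simplification, not the paper's two-point-source analysis): ΔZ ≈ Z²·p / (f·B_eff), where the effective baseline B_eff grows with the number of cameras. All parameter values below are assumptions:

```python
# Rough first-order sketch (our simplification, not the paper's model):
# depth resolution of a linear camera array, dZ = Z^2 * p / (f * B_eff),
# where B_eff is the span of the outermost camera pair. Numbers assumed.

def depth_resolution_mm(z_mm, pixel_mm, focal_mm, spacing_mm, n_cameras):
    b_eff = spacing_mm * (n_cameras - 1)   # outermost pair's baseline
    return z_mm ** 2 * pixel_mm / (focal_mm * b_eff)

# 1 m object distance, 5 µm pixels, 8 mm lenses, 30 mm camera spacing
for n in (2, 4, 8):
    print(n, depth_resolution_mm(1000.0, 0.005, 8.0, 30.0, n))
```

The printed ΔZ shrinks monotonically with n, consistent with the abstract's finding that more cameras yield finer depth resolution near the central depth plane.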

A Study on Control of Drone Swarms Using Depth Camera (Depth 카메라를 사용한 군집 드론의 제어에 대한 연구)

  • Lee, Seong-Ho;Kim, Dong-Han;Han, Kyong-Ho
    • The Transactions of The Korean Institute of Electrical Engineers, v.67 no.8, pp.1080-1088, 2018
  • Methods of controlling a drone divide broadly into manual control and automatic control, in which the drone follows a route on its own. With manual control, an operator must be able to determine the location and status of the drone and have a controller to command it remotely. Human operators gather the drone's location and attitude by eye, while internal information such as battery voltage and atmospheric pressure is delivered through telemetry; they decide on the drone's movement based on this information and command it with a radio device. Automatic control, in which the drone finds its route itself, is not fundamentally different: attitude information is collected with gyro and accelerometer sensors, internal information is delivered digitally to the CPU, and location information is collected with GPS, atmospheric pressure sensors, camera sensors, and ultrasonic sensors. This paper investigates drone control by a remote computer. Instead of using the drone's on-board automatic control, a computer observes the drone with a depth camera, determines its movement from the observations, and commands it with a radio device. The computer thus collects information, makes decisions, and controls the drone much as a human operator would, which makes the approach applicable to various fields. Its usability is further enhanced because it can control common commercial drones rather than drones specially built for swarm flight. It can also be used to keep drones from colliding with each other, to control access to a drone, and to manage unauthorized drones.

Optimal Depth Calibration for Kinect™ Sensors via an Experimental Design Method (실험 계획법에 기반한 키넥트 센서의 최적 깊이 캘리브레이션 방법)

  • Park, Jae-Han;Bae, Ji-Hum;Baeg, Moon-Hong
    • Journal of Institute of Control, Robotics and Systems, v.21 no.11, pp.1003-1007, 2015
  • Depth calibration is the procedure of finding the conversion function that maps disparity data from a depth-sensing camera to actual distance information. In this paper, we present an optimal depth calibration method for Kinect™ sensors based on experimental design and convex optimization. The proposed method, which uses multiple measurements from only two points, yields a simplified calibration procedure. The confidence ellipsoids obtained from a series of simulations confirm that this simpler procedure produces a more reliable calibration function.
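For Kinect-style sensors, the disparity-to-distance conversion is commonly modeled as inverse depth being affine in raw disparity, 1/Z = α·d + β. A sketch of a two-point calibration under that assumed model (the paper's exact formulation, optimization, and numbers are not reproduced here):

```python
# Hedged sketch of the two-point calibration idea: assume the common
# Kinect-style model 1/Z = alpha * d + beta. Repeated measurements at two
# known distances are averaged to suppress noise, then alpha and beta
# follow from the resulting 2x2 linear system. All numbers are assumptions.

def calibrate(d1_samples, z1_mm, d2_samples, z2_mm):
    d1 = sum(d1_samples) / len(d1_samples)  # mean disparity at distance z1
    d2 = sum(d2_samples) / len(d2_samples)  # mean disparity at distance z2
    alpha = (1.0 / z1_mm - 1.0 / z2_mm) / (d1 - d2)
    beta = 1.0 / z1_mm - alpha * d1
    return alpha, beta

def to_depth(d, alpha, beta):
    """Convert a raw disparity reading to distance in millimeters."""
    return 1.0 / (alpha * d + beta)

# Noisy readings at two known distances: 1 m and 2 m
alpha, beta = calibrate([600, 602, 598], 1000.0, [400, 401, 399], 2000.0)
print(round(to_depth(500.0, alpha, beta), 1))  # → 1333.3 (mm)
```

The paper's contribution is choosing those two measurement points optimally; this sketch only shows why two points suffice for the assumed affine model.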

Detecting Complex 3D Human Motions with Body Model Low-Rank Representation for Real-Time Smart Activity Monitoring System

  • Jalal, Ahmad;Kamal, Shaharyar;Kim, Dong-Seong
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.3, pp.1189-1204, 2018
  • Detecting and capturing 3D human structures from intensity-based image sequences is an inherently challenging problem that has attracted the attention of many researchers, especially in real-time activity recognition (Real-AR). Real-AR systems have been significantly enhanced by depth sensors, which provide richer information than the RGB video sensors used in conventional systems. This study proposes a depth-based routine-logging Real-AR system that identifies daily human activity routines to make the surroundings an intelligent living space. Our real-time routine-logging Real-AR system consists of three stages: data collection with a depth camera, feature extraction based on joint information, and training/recognition of each activity. In addition, the recognition mechanism locates and pinpoints the learned activities and induces routine logs. Evaluation on depth datasets (a self-annotated dataset and MSRAction3D) demonstrates that the proposed system achieves better recognition rates and is more robust than state-of-the-art methods. Our Real-AR system can be feasibly deployed and permanently used in behavior-monitoring applications, humanoid-robot systems, and e-medical therapy systems.
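One common choice for "feature extraction based on joint information" (our illustrative assumption, not necessarily this paper's descriptor) is the set of pairwise 3D distances between tracked skeleton joints, which is invariant to the sensor's viewpoint translation:

```python
# Hedged illustration of joint-based feature extraction: pairwise 3D
# distances between skeleton joints reported by a depth sensor. This is a
# generic descriptor chosen for illustration, not the paper's exact one.
import math

def pairwise_joint_distances(joints):
    """joints: list of (x, y, z) positions in meters; returns C(n,2) feats."""
    feats = []
    for i in range(len(joints)):
        for j in range(i + 1, len(joints)):
            feats.append(math.dist(joints[i], joints[j]))
    return feats

# Hypothetical single frame: head and two hands, ~2 m from the sensor
frame = [(0.0, 0.0, 2.0), (0.2, -0.5, 2.1), (-0.2, -0.5, 2.1)]
print(pairwise_joint_distances(frame))  # 3 joints -> 3 distance features
```

A per-frame vector like this could then feed the training/recognition stage the abstract outlines.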

Active Shape Model-based Object Tracking using Depth Sensor (깊이 센서를 이용한 능동형태모델 기반의 객체 추적 방법)

  • Jung, Hun Jo;Lee, Dong Eun
    • Journal of Korea Society of Digital Industry and Information Management, v.9 no.1, pp.141-150, 2013
  • This study proposes a method that separates an object using a depth sensor and tracks it with an Active Shape Model (ASM). Unlike a common visual camera, a depth sensor is not affected by illumination intensity, so objects can be extracted more robustly. The proposed algorithm removes the horizontal component from the initial depth-map information and separates the object using the vertical component. Morphology operations and labeling are then applied for image correction and object extraction. By applying the Active Shape Model to the extracted object information, the object can be tracked more robustly; the ASM is particularly robust to object occlusion. Compared with visual camera-based object-tracking algorithms, the proposed technique, which uses depth-sensor information, is more efficient and more robust at object tracking. Experimental results show that the proposed depth-sensor-based ASM algorithm can robustly track objects in real time.

Optimization on the fabrication process of Si pressure sensors utilizing piezoresistive effect (압저항 효과를 이용한 실리콘 압력센서 제작공정의 최적화)

  • Yun Eui-Jung;Kim Jwayeon;Lee Seok-Tae
    • Journal of the Institute of Electronics Engineers of Korea SD, v.42 no.1, pp.19-24, 2005
  • In this paper, the fabrication process of Si pressure sensors utilizing the piezoresistive effect was optimized. The efficiency (yield) of the fabrication process for Si piezoresistive pressure sensors was improved by performing the Si anisotropic etching process after the piezoresistor and Al circuit-pattern processes. The positions and process parameters of the piezoresistors were determined with the ANSYS and SUPREM simulators, respectively. The thickness of the p-type Si piezoresistors measured from the boron depth profile was in good agreement with the value simulated with SUPREM. The Si anisotropic etching process for the diaphragm was optimized by adding ammonium persulfate (AP) to a tetramethyl ammonium hydroxide (TMAH) solution.

Enhancing Single Thermal Image Depth Estimation via Multi-Channel Remapping for Thermal Images (열화상 이미지 다중 채널 재매핑을 통한 단일 열화상 이미지 깊이 추정 향상)

  • Kim, Jeongyun;Jeon, Myung-Hwan;Kim, Ayoung
    • The Journal of Korea Robotics Society, v.17 no.3, pp.314-321, 2022
  • Depth information, essential in robotics, is used in SLAM and visual odometry. It is often obtained from sensors or learned by networks. While learning-based methods have gained popularity, they are mostly limited to RGB images, which break down in visually degraded environments. Thermal cameras are in the spotlight as a way to solve this problem: unlike RGB images, thermal images perceive the environment reliably regardless of illumination variance, but they lack contrast and texture. This low contrast prevents an algorithm from effectively learning the underlying scene details. To tackle these challenges, we propose multi-channel remapping for contrast enhancement. Our method allows a learning-based depth prediction model to predict depth accurately even in low-light conditions. We validate the feasibility of the approach and show that our multi-channel remapping method outperforms existing methods both visually and quantitatively on our dataset.
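The general idea of multi-channel remapping can be sketched as expanding one low-contrast thermal channel into several remapped views; the specific transforms below (min-max stretch, rank-based equalization, gamma correction) are illustrative assumptions, not the paper's exact channels:

```python
# Minimal sketch of multi-channel remapping (assumed transforms, not the
# paper's): one narrow-range thermal channel is expanded into several
# contrast-enhanced channels for a depth network to consume.

def minmax_stretch(pixels):
    """Linearly stretch raw values to [0, 1]."""
    lo, hi = min(pixels), max(pixels)
    return [(p - lo) / (hi - lo) for p in pixels]

def rank_equalize(pixels):
    """Rank-based equalization (ties simplified: first occurrence's rank)."""
    ranks = sorted(pixels)
    return [ranks.index(p) / (len(pixels) - 1) for p in pixels]

def gamma_correct(pixels, g=0.5):
    """Brighten dark regions of the stretched channel."""
    return [p ** g for p in minmax_stretch(pixels)]

def remap_multichannel(pixels):
    return [minmax_stretch(pixels), rank_equalize(pixels), gamma_correct(pixels)]

thermal = [120, 122, 121, 125, 123, 124]   # narrow raw-value range
channels = remap_multichannel(thermal)
print(channels[0])  # → [0.0, 0.4, 0.2, 1.0, 0.6, 0.8]
```

Each remapped channel spans the full [0, 1] range, so a pixel run that was nearly flat in the raw data carries visible contrast in every channel.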