• Title/Summary/Keyword: RGB-D camera

Search results: 96

Human-Object Interaction Framework Using RGB-D Camera (RGB-D 카메라를 사용한 사용자-실사물 상호작용 프레임워크)

  • Baek, Yong-Hwan;Lim, Changmin;Park, Jong-Il
    • Journal of Broadcast Engineering / v.21 no.1 / pp.11-23 / 2016
  • Touch is currently the most widely used interface for communicating with digital devices. Because of its usability, touch technology is applied almost everywhere, from watches to advertising boards, and its use keeps growing. However, the technology has a critical weakness: a touch input device normally needs a contact surface with touch sensors embedded in it, so touch interaction through ordinary objects such as books or documents is still unavailable. In this paper, a human-object interaction framework based on an RGB-D camera is proposed to overcome this limitation. The framework deals with occluded situations, such as a hand hovering on top of an object or an object being moved by hand, in which object recognition and hand gesture recognition algorithms may otherwise fail, and it handles these complicated circumstances without performance loss. It uses a fast and robust object recognition algorithm to determine whether an observed region is an object or a human hand (a toy sketch of such a depth-based separation follows below), and a hand gesture recognition algorithm then controls the context of each object almost simultaneously.
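
  The implementation is not included in this listing; as an illustration only, the Python sketch below separates a hovering hand from the object beneath it by thresholding height above a known tabletop depth. The function name, depths, and margins are assumptions, not values from the paper.

      import numpy as np

      def split_hand_and_object(depth_mm, table_depth_mm=1000, object_margin_mm=60):
          """Label pixels as object or hovering hand from one depth frame.

          Assumes a downward-facing RGB-D camera over a table at a known depth;
          pixels slightly above the table belong to the object, and pixels
          clearly above that layer belong to a hovering hand.
          """
          valid = depth_mm > 0                          # 0 = no depth reading
          height = table_depth_mm - depth_mm            # height above table (mm)
          object_mask = valid & (height > 5) & (height <= object_margin_mm)
          hand_mask = valid & (height > object_margin_mm)
          return object_mask, hand_mask

      # Synthetic frame: a 20 mm-thick object with a hand hovering over one corner.
      depth = np.full((120, 160), 1000, dtype=np.int32)  # empty table at 1 m
      depth[40:80, 50:110] = 980                         # object on the table
      depth[30:55, 90:130] = 850                         # hand 150 mm above it
      obj, hand = split_hand_and_object(depth)
      print(obj.sum(), hand.sum())                       # pixel count per class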

Depthmap Generation with Registration of LIDAR and Color Images with Different Field-of-View (다른 화각을 가진 라이다와 칼라 영상 정보의 정합 및 깊이맵 생성)

  • Choi, Jaehoon;Lee, Deokwoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.6 / pp.28-34 / 2020
  • This paper proposes an approach to fusing two heterogeneous sensors with different fields of view (FOV): a LIDAR and an RGB camera. Fusion is achieved by registering the data captured by the two sensors, and registration is complete once a depthmap corresponding to the 2-dimensional RGB image has been generated. An RPLIDAR-A3 (manufactured by Slamtec) and a general digital camera were used to acquire the depth and image data, respectively. The LIDAR provides the distance between the sensor and objects in the nearby scene, while the RGB camera provides a 2-dimensional color image. Fusing the 2D image with depth information enables better performance in applications such as object detection and tracking; driver assistance systems, robotics, and other systems requiring visual information processing may find this work useful. Since the LIDAR provides only depth values, a depthmap corresponding to the RGB image must be processed and generated (a projection sketch follows below). Experimental results are provided to validate the proposed approach.
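
  Concretely, this kind of registration amounts to projecting each LIDAR return into the camera image using calibrated extrinsics and intrinsics. The numpy sketch below illustrates the projection for a single 2D scan; all calibration values and names are assumed, not taken from the paper.

      import numpy as np

      def lidar_scan_to_depthmap(angles_rad, ranges_m, K, R, t, image_shape):
          """Project a 2D LIDAR scan into the RGB image plane as a sparse depthmap.

          angles_rad, ranges_m : polar scan (e.g. from an RPLIDAR-A3)
          K                    : 3x3 camera intrinsic matrix
          R, t                 : LIDAR-to-camera rotation (3x3) and translation (3,)
          """
          # Polar -> Cartesian in the LIDAR frame (scan plane, z = 0).
          pts_lidar = np.stack([ranges_m * np.cos(angles_rad),
                                ranges_m * np.sin(angles_rad),
                                np.zeros_like(ranges_m)], axis=1)
          pts_cam = pts_lidar @ R.T + t               # into the camera frame
          pts_cam = pts_cam[pts_cam[:, 2] > 0]        # keep points in front
          uvw = pts_cam @ K.T                         # pinhole projection
          u = (uvw[:, 0] / uvw[:, 2]).astype(int)
          v = (uvw[:, 1] / uvw[:, 2]).astype(int)
          h, w = image_shape
          depth = np.zeros((h, w), dtype=np.float32)  # 0 = no measurement
          ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
          depth[v[ok], u[ok]] = pts_cam[ok, 2]        # depth along camera z-axis
          return depth

      # Toy usage with assumed calibration: a wall 2 m in front of the sensor.
      K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])
      R = np.array([[0, -1, 0], [0, 0, -1], [1, 0, 0]], dtype=float)  # axis remap
      angles = np.linspace(-np.pi / 6, np.pi / 6, 181)
      ranges = np.full_like(angles, 2.0)
      sparse = lidar_scan_to_depthmap(angles, ranges, K, R, np.zeros(3), (480, 640))
      print((sparse > 0).sum(), "LIDAR returns projected into the image")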

Real-Time Motion Generation Method of Humanoid Robots based on RGB-D Camera for Interactive Performance and Exhibition (인터렉티브 공연·전시를 위한 RGB-D 카메라 기반 휴머노이드 로봇의 실시간 로봇 동작 생성 방법)

  • Seo, Bohyeong;Lee, Duk-Yeon;Choi, Dongwoon;Lee, Dong-Wook
    • Journal of Broadcast Engineering / v.25 no.4 / pp.528-536 / 2020
  • As humanoid robot technology advances, robots are increasingly used in performances, and studies are being conducted to broaden this use by making robot motion as natural as a human's. Motion capture technology is often used for this purpose, but preparing for motion capture is inconvenient: IMU sensors or markers must be attached to each part of the body, and precise high-performance cameras are required. In addition, robots used in performances must respond in real time to unexpected situations or to the audience's reactions. To address these problems, this paper proposes a real-time motion capture system that uses several RGB-D cameras and generates natural, human-like robot motion from the captured data (a joint-angle sketch follows below).
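
  One generic way to drive a robot from RGB-D skeleton data, sketched below as an assumption rather than the paper's actual method, is to convert captured 3D joint positions into joint angles that can be commanded to the robot's servos. The coordinates here are made up.

      import numpy as np

      def joint_angle(parent, joint, child):
          """Interior angle (rad) at `joint` formed by its parent and child joints."""
          u = parent - joint
          v = child - joint
          cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
          return np.arccos(np.clip(cos_a, -1.0, 1.0))

      # One frame of 3D joint positions (m) from an RGB-D skeleton tracker.
      shoulder = np.array([0.00, 1.40, 2.00])
      elbow    = np.array([0.25, 1.15, 2.00])
      wrist    = np.array([0.10, 0.95, 1.80])
      angle = joint_angle(shoulder, elbow, wrist)
      print(np.degrees(angle))   # command this angle to the robot's elbow joint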

A New Camera System Implementation for Realistic Media-based Contents (실감미디어 기반의 콘텐츠를 위한 카메라 시스템의 구현)

  • Seo, Young Ho;Lee, Yoon Hyuk;Koo, Ja Myung;Kim, Woo Youl;Kim, Bo Ra;Kim, Moon Seok;Kim, Dong Wook
    • Journal of Korea Society of Digital Industry and Information Management / v.9 no.2 / pp.99-109 / 2013
  • In this paper, we propose and implement a new system that captures real depth and color information from a natural scene. With it, we produced stereo and multiview images for 3-dimensional stereoscopic contents, and introduced the production of a digital hologram, considered the next-generation image format. The system consists of a camera rig that captures corresponding RGB and depth images, and software (SW) for the image-processing stages: pre-processing such as rectification and calibration, 3D warping, and computer-generated hologram (CGH) computation (a sketch follows below). The camera rig is vertical, with two pairs of depth and RGB cameras and a specially manufactured cold mirror whose transmittance differs by wavelength (the cut-off wavelength is about 850 nm), so that the cameras obtain images from the same viewpoint. Each algorithm was implemented in C and C++, and the implemented system operates in real time.
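
  For reference, a CGH can be computed as a superposition of spherical waves emitted by the captured depth points (the classic ray-summation formulation). The numpy sketch below is a slow, simplified illustration of that formula; the paper's real-time system is implemented in C/C++, and every parameter here is an assumption.

      import numpy as np

      def point_source_cgh(points, amplitudes, res=(256, 256), pitch=8e-6,
                           wavelength=532e-9):
          """Accumulate cos(k*r) spherical-wave contributions per scene point."""
          h, w = res
          k = 2 * np.pi / wavelength
          y, x = np.mgrid[0:h, 0:w]                  # hologram pixel grid
          xh = (x - w / 2) * pitch                   # metric pixel coordinates
          yh = (y - h / 2) * pitch
          hologram = np.zeros((h, w))
          for (px, py, pz), a in zip(points, amplitudes):
              r = np.sqrt((xh - px) ** 2 + (yh - py) ** 2 + pz ** 2)
              hologram += a * np.cos(k * r)          # interference pattern
          return hologram

      # Two scene points taken from an RGB+depth capture (meters).
      pts = [(0.0, 0.0, 0.1), (2e-4, 1e-4, 0.12)]
      H = point_source_cgh(pts, amplitudes=[1.0, 0.8])
      print(H.shape, float(H.min()), float(H.max()))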

High-performance of Deep learning Colorization With Wavelet fusion (웨이블릿 퓨전에 의한 딥러닝 색상화의 성능 향상)

  • Kim, Young-Back;Choi, Hyun;Cho, Joong-Hwee
    • IEMEK Journal of Embedded Systems and Applications / v.13 no.6 / pp.313-319 / 2018
  • We propose a post-processing algorithm that improves the quality of RGB images generated by deep-learning-based colorization of the gray-scale images of an infrared camera. Wavelet fusion is used to generate a new luminance component for the RGB image from the luminance component of the deep learning model's output and the luminance component of the infrared camera (a sketch follows below). Applying the proposed algorithm to RGB images generated by two deep learning models, SegNet and DCGAN, increases the PSNR for all experimental images: for SegNet, the average PSNR improves by 1.3906 dB at level 1 of the Haar wavelet, and for DCGAN it improves by 0.0759 dB on average at level 5 of the Daubechies wavelet. The post-processing is also confirmed to emphasize edge components and improve visibility.
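
  The abstract does not spell out the fusion rule. A common choice, used in the PyWavelets sketch below purely as an assumption, is to keep the approximation band of the colorized luminance (global tones) and take the per-coefficient maximum-magnitude detail bands (edges) from the two inputs.

      import numpy as np
      import pywt

      def fuse_luminance(y_colorized, y_infrared, wavelet="haar", level=1):
          """Fuse two luminance planes in the wavelet domain and reconstruct."""
          c1 = pywt.wavedec2(y_colorized, wavelet, level=level)
          c2 = pywt.wavedec2(y_infrared, wavelet, level=level)
          fused = [c1[0]]                            # approximation: model output
          for d1, d2 in zip(c1[1:], c2[1:]):         # (cH, cV, cD) per level
              fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                                 for a, b in zip(d1, d2)))
          return pywt.waverec2(fused, wavelet)

      # Toy luminance planes (float, 0..255).
      rng = np.random.default_rng(0)
      y_model = rng.uniform(0, 255, (64, 64))
      y_ir = rng.uniform(0, 255, (64, 64))
      print(fuse_luminance(y_model, y_ir).shape)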

RGB-Depth Camera for Dynamic Measurement of Liquid Sloshing (RGB-Depth 카메라를 활용한 유체 표면의 거동 계측분석)

  • Kim, Junhee;Yoo, Sae-Woung;Min, Kyung-Won
    • Journal of the Computational Structural Engineering Institute of Korea / v.32 no.1 / pp.29-35 / 2019
  • In this paper, a low-cost dynamic measurement system using an RGB-depth camera, the Microsoft Kinect® v2, is proposed for measuring the time-varying free-surface motion of the liquid dampers used in building vibration mitigation. A series of experimental studies is conducted: performance evaluation and validation of the Kinect® v2, real-time monitoring using the Kinect® v2 SDK (software development kit), point-cloud acquisition of the liquid free surface in 3D space, and comparison with existing video sensing technology. Using the proposed Kinect® v2-based measurement system, the dynamic behavior of liquid in a laboratory-scale small tank is analyzed experimentally under input excitation over a wide frequency range (a sketch follows below).
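
  As a minimal illustration of this kind of depth-based surface measurement (not the paper's code), the sketch below reduces a sequence of depth frames to a surface-elevation time series and reads off the dominant sloshing frequency; the frames here are synthetic.

      import numpy as np

      def sloshing_frequency(depth_frames_mm, fps, roi, still_depth_mm):
          """Dominant free-surface frequency from (T, H, W) depth frames.

          roi is (row0, row1, col0, col1) over the tank; elevation is the
          at-rest depth minus the measured depth (sensor looks down).
          """
          r0, r1, c0, c1 = roi
          elev = still_depth_mm - depth_frames_mm[:, r0:r1, c0:c1].mean(axis=(1, 2))
          elev = elev - elev.mean()                  # remove the DC offset
          spectrum = np.abs(np.fft.rfft(elev))
          freqs = np.fft.rfftfreq(len(elev), d=1.0 / fps)
          return freqs[1:][np.argmax(spectrum[1:])]  # skip the zero-frequency bin

      # Synthetic 1 Hz sloshing recorded at 30 fps for 10 s.
      t = np.arange(300) / 30.0
      wave = 20 * np.sin(2 * np.pi * 1.0 * t)        # surface amplitude in mm
      frames = 900 - wave[:, None, None] * np.ones((1, 40, 40))
      print(sloshing_frequency(frames, fps=30, roi=(0, 40, 0, 40), still_depth_mm=900))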

Deep learning based Person Re-identification with RGB-D sensors

  • Kim, Min;Park, Dong-Hyun
    • Journal of the Korea Society of Computer and Information / v.26 no.3 / pp.35-42 / 2021
  • In this paper, we propose a deep learning-based person re-identification method that uses a three-dimensional RGB-Depth Xtion2 camera and considers joint coordinates and dynamic features (velocity, acceleration). The main idea is that gait data such as joint coordinates and dynamic features are easy to extract with an RGB-D camera, and gait patterns can then be identified automatically by a self-designed one-dimensional convolutional neural network classifier (1D-ConvNet; a sketch follows below). Accuracy was measured with the F1 score, and the influence of the dynamic features was measured by comparison with a classifier model (JC) that did not consider them. The proposed classifier model that considers the dynamic characteristics (JCSpeed) showed an F1 score about 8% higher than JC.
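
  The exact 1D-ConvNet architecture is not given in the abstract; the PyTorch sketch below shows the general pattern, with joint coordinates stacked together with finite-difference velocity and acceleration as input channels. Layer sizes, joint counts, and class counts are assumptions.

      import torch
      import torch.nn as nn

      class GaitConvNet1D(nn.Module):
          """Minimal 1D CNN over (batch, channels, time) gait sequences."""
          def __init__(self, in_channels, num_classes):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
                  nn.MaxPool1d(2),
                  nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                  nn.AdaptiveAvgPool1d(1),           # length-independent pooling
              )
              self.classifier = nn.Linear(64, num_classes)

          def forward(self, x):
              return self.classifier(self.features(x).squeeze(-1))

      # 15 joints x 3 coordinates, plus velocity and acceleration channels.
      coords = torch.randn(8, 45, 120)               # (batch, joints*3, frames)
      vel = torch.gradient(coords, dim=2)[0]         # finite-difference velocity
      acc = torch.gradient(vel, dim=2)[0]            # ... and acceleration
      x = torch.cat([coords, vel, acc], dim=1)       # 135 input channels
      model = GaitConvNet1D(in_channels=135, num_classes=10)
      print(model(x).shape)                          # torch.Size([8, 10])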

Confidence Measure of Depth Map for Outdoor RGB+D Database (야외 RGB+D 데이터베이스 구축을 위한 깊이 영상 신뢰도 측정 기법)

  • Park, Jaekwang;Kim, Sunok;Sohn, Kwanghoon;Min, Dongbo
    • Journal of Korea Multimedia Society / v.19 no.9 / pp.1647-1658 / 2016
  • RGB+D databases are widely used in object recognition, object tracking, and robot control, to name a few applications. While the rapid advance of active depth sensing technologies has enabled widespread indoor RGB+D databases, only a few outdoor RGB+D databases exist, largely due to an inherent limitation of active depth cameras. In this paper, we propose a novel method for building outdoor RGB+D databases. Instead of using active depth cameras such as Kinect or LIDAR, we acquire a pair of stereo images with a high-resolution stereo camera and obtain a depth map by applying a stereo matching algorithm (a sketch follows below). To deal with the estimation errors that inevitably exist in depth maps obtained by stereo matching, we develop an approach that estimates the confidence of the depth map based on unsupervised learning. Unlike existing confidence estimation approaches, we explicitly consider the spatial correlation that may exist in the confidence map: we refine the confidence feature under the assumption that the confidence feature and the resulting confidence map vary smoothly in the spatial domain and are highly correlated with each other. Experimental results show that the proposed method outperforms existing confidence-measure-based approaches on various benchmark datasets.
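
  As a rough illustration of the pipeline's first stages (not the paper's learned confidence measure), the OpenCV sketch below computes SGBM disparity, derives a simple left-right-consistency confidence, and smooths it, echoing the spatial-smoothness assumption the paper exploits. Parameters are assumptions.

      import cv2
      import numpy as np

      def disparity_with_confidence(left_gray, right_gray, max_disp=64):
          """SGBM disparity plus a box-filtered left-right consistency confidence."""
          sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=max_disp,
                                       blockSize=5)
          disp_l = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0
          # Right-reference disparity via the mirrored pair (common LR-check trick).
          rf = np.ascontiguousarray(right_gray[:, ::-1])
          lf = np.ascontiguousarray(left_gray[:, ::-1])
          disp_r = (sgbm.compute(rf, lf).astype(np.float32) / 16.0)[:, ::-1]
          h, w = disp_l.shape
          xs = np.clip(np.arange(w) - disp_l.astype(int), 0, w - 1)
          lr_diff = np.abs(disp_l - disp_r[np.arange(h)[:, None], xs])
          confidence = np.exp(-lr_diff)              # 1 = consistent, ~0 = error
          return disp_l, cv2.blur(confidence, (9, 9))

      # Toy pair: the right view is the left view shifted by 8 pixels.
      left = np.random.randint(0, 255, (120, 160), dtype=np.uint8)
      right = np.roll(left, -8, axis=1)
      d, c = disparity_with_confidence(left, right)
      print(d.shape, float(c.mean()))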

Real-Virtual Fusion Hologram Generation System using RGB-Depth Camera (RGB-Depth 카메라를 이용한 현실-가상 융합 홀로그램 생성 시스템)

  • Song, Joongseok;Park, Jungsik;Park, Hanhoon;Park, Jong-Il
    • Journal of Broadcast Engineering / v.19 no.6 / pp.866-876 / 2014
  • Generating a digital hologram of video content with computer graphics (CG) requires natural fusion of real and virtual 3D information. In this paper, we propose a system that fuses real and virtual 3D information naturally and quickly generates a digital hologram of the fused result using a multiple-GPU-based computer-generated hologram (CGH) computing stage. The system calculates the camera projection matrix of an RGB-depth camera and estimates the 3D information of the virtual object. The 3D information of the virtual object obtained from the projection matrix and that of the real space are sent to a Z-buffer, which fuses them naturally (a sketch follows below). The fused result in the Z-buffer is then passed to the multiple-GPU CGH stage, which computes the digital hologram quickly. In experiments, the 3D information of the virtual object produced by the proposed system had a mean relative error (MRE) of about 0.5138% with respect to the real 3D information, i.e., roughly 99% accuracy. We also verify that the proposed system generates the digital hologram of the fused result quickly using the multiple-GPU-based CGH calculation.
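
  The Z-buffer step itself is simple: at every pixel the nearer of the real and virtual depth values wins, and that source's color is kept. A numpy sketch with made-up frames (all names and values illustrative):

      import numpy as np

      def z_buffer_fusion(real_rgb, real_depth, virt_rgb, virt_depth):
          """Per-pixel Z-buffer fusion of a real RGB-D frame and a rendered object."""
          virt_depth = np.where(virt_depth > 0, virt_depth, np.inf)  # 0 = no surface
          virt_wins = virt_depth < real_depth
          fused_depth = np.where(virt_wins, virt_depth, real_depth)
          fused_rgb = np.where(virt_wins[..., None], virt_rgb, real_rgb)
          return fused_rgb, fused_depth

      # Toy frames: a virtual square floating in front of the real scene.
      real_rgb = np.zeros((100, 100, 3), np.uint8); real_rgb[..., 2] = 200
      real_depth = np.full((100, 100), 2.0)
      virt_rgb = np.zeros((100, 100, 3), np.uint8); virt_rgb[40:60, 40:60, 1] = 255
      virt_depth = np.zeros((100, 100)); virt_depth[40:60, 40:60] = 1.5
      rgb, depth = z_buffer_fusion(real_rgb, real_depth, virt_rgb, virt_depth)
      print(depth[50, 50], depth[0, 0])              # 1.5 (virtual) vs 2.0 (real)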

A Study on Vision-based Calibration Method for Bin Picking Robots for Semiconductor Automation (반도체 자동화를 위한 빈피킹 로봇의 비전 기반 캘리브레이션 방법에 관한 연구)

  • Kyo Mun Ku;Ki Hyun Kim;Hyo Yung Kim;Jae Hong Shim
    • Journal of the Semiconductor & Display Technology / v.22 no.1 / pp.72-77 / 2023
  • In many manufacturing settings, including the semiconductor industry, products are completed by producing and assembling various components, and sorting and classifying randomly mixed parts takes a lot of time and labor. Recently, many efforts have been made to have robots select and assemble the correct parts from mixed ones. Automating this sorting and classification is difficult because the various objects, and the positions and attitudes of the robot and camera in 3D space, must all be known; previously, robots grasped only objects in specific positions, or people sorted the items directly. For a robot to pick up randomly placed objects in 3D space, bin picking technology is required, and realizing it requires knowing the coordinate-system relationships between the robot, the grasping target, and the camera. Calibration is therefore necessary so that the robot can grasp the object recognized by the camera; the difficulty is that the depth value cannot be recovered from a 2D image alone in the 3D reconstruction that bin picking needs. In this paper, we propose using the depth information of an RGB-D camera as the Z value in the rotation and translation conversion used in calibration (a back-projection sketch follows below). We perform camera calibration for accurate coordinate conversion of objects in 2D images, and then calibrate between the robot and the camera. We prove the effectiveness of the proposed method through accuracy evaluations of the camera calibration and of the robot-camera calibration.
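
  The core idea of using the RGB-D depth as the Z value is standard pinhole back-projection. The sketch below back-projects a detected pixel into camera coordinates and maps it into the robot base frame with a hand-eye transform; all calibration numbers are illustrative assumptions.

      import numpy as np

      def pixel_to_camera_xyz(u, v, depth_z, K):
          """Back-project pixel (u, v) using the RGB-D depth reading as Z."""
          fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
          return np.array([(u - cx) * depth_z / fx,
                           (v - cy) * depth_z / fy,
                           depth_z])

      def camera_to_robot(p_cam, T_robot_cam):
          """Map a camera-frame point into the robot base frame (4x4 transform)."""
          return (T_robot_cam @ np.append(p_cam, 1.0))[:3]

      # Assumed calibration results.
      K = np.array([[910.0, 0, 640], [0, 910.0, 360], [0, 0, 1]])
      T_robot_cam = np.eye(4)
      T_robot_cam[:3, 3] = [0.4, 0.0, 0.6]           # camera pose in robot frame
      p_cam = pixel_to_camera_xyz(700, 400, depth_z=0.85, K=K)
      print(camera_to_robot(p_cam, T_robot_cam))     # pick target, robot frame (m)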
