• Title/Summary/Keyword: Kinect depth camera


Convenient View Calibration of Multiple RGB-D Cameras Using a Spherical Object (구형 물체를 이용한 다중 RGB-D 카메라의 간편한 시점보정)

  • Park, Soon-Yong;Choi, Sung-In
    • KIPS Transactions on Software and Data Engineering / v.3 no.8 / pp.309-314 / 2014
  • To generate a complete 3D model from the depth images of multiple RGB-D cameras, it is necessary to find the 3D transformations between the cameras. This paper proposes a convenient view calibration technique using a spherical object. Conventional view calibration methods use either planar checkerboards or 3D objects with coded patterns, and detecting and matching the pattern features and codes takes significant time. In this paper, we propose a convenient view calibration method that uses the 3D depth and 2D texture images of a spherical object simultaneously. First, while the spherical object is moved freely in the modeling space, depth and texture images of the object are acquired from all RGB-D cameras simultaneously. Then, the external parameters of each RGB-D camera are calibrated so that the coordinates of the sphere center coincide in the world coordinate system.
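
The final step above — aligning each camera so the sphere-center coordinates coincide — reduces to estimating a rigid transform between matched 3D points. A minimal sketch of that step using the classic SVD (Kabsch) solution; the paper's exact formulation may differ:

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t such that dst ≈ R @ src + t,
    from matched 3D points (e.g. sphere centers seen by two RGB-D cameras)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    c_src = src.mean(axis=0)                 # centroids of both point sets
    c_dst = dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the SVD solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Running this on the sphere centers recorded by each camera pair yields the external parameters directly, with no pattern-feature matching.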

HEVC Encoder Optimization using Depth Information (깊이정보를 이용한 HEVC의 인코더 고속화 방법)

  • Lee, Yoon Jin;Bae, Dong In;Park, Gwang Hoon
    • Journal of Broadcast Engineering / v.19 no.5 / pp.640-655 / 2014
  • Many of today's video systems have an additional depth camera to provide extra features such as 3D support. Thanks to these changes in multimedia systems, it is now much easier to obtain depth information for a video. Depth information can be used in various areas such as object classification and background-area recognition. With depth information, we can achieve even higher coding efficiency than with conventional methods alone. In this paper, we therefore propose a 2D video coding algorithm that uses depth information on top of the next-generation 2D video codec HEVC. The background area can be recognized from depth information, and coding complexity can be reduced by exploiting it during HEVC encoding. If the current CU is in the background area, we apply the following three methods: 1) early termination of the CU split structure with the PU SKIP mode, 2) limiting the CU split structure using the CU information at the co-located temporal position, and 3) limiting the motion search range. We implemented our proposal in the HEVC HM 12.0 reference software. Results show that encoding complexity is reduced by more than 40% with only a 0.5% BD-bitrate loss. In particular, for video acquired with the Kinect developed by Microsoft Corp., encoding complexity is reduced by up to 53% without a loss of quality. These techniques are therefore expected to apply to real-time online communication, mobile or handheld video services, and so on.
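
The background-driven early termination (method 1 above) can be sketched as a per-CU decision; the depth threshold, variance threshold, and maximum quadtree depth below are illustrative placeholders, not values from the paper:

```python
import numpy as np

# Illustrative thresholds (assumptions, not from the paper)
BG_DEPTH = 60       # 8-bit depth values below this are treated as "far"
BG_VARIANCE = 4.0   # nearly-flat depth block -> likely static background

def is_background_cu(depth_block):
    """Classify a CU's co-located depth block as background: far and flat."""
    d = np.asarray(depth_block, float)
    return d.mean() < BG_DEPTH and d.var() < BG_VARIANCE

def cu_split_decision(depth_block, cu_depth, max_depth=3):
    """Return True if the encoder should keep evaluating the CU split."""
    if cu_depth >= max_depth:
        return False
    if is_background_cu(depth_block):
        return False   # early termination: try SKIP mode, do not split
    return True        # foreground: evaluate the full quadtree as usual
```

Skipping the split evaluation for background CUs is where the bulk of the reported complexity saving comes from, since those blocks rarely benefit from finer partitioning.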

Temporally-Consistent High-Resolution Depth Video Generation in Background Region (배경 영역의 시간적 일관성이 향상된 고해상도 깊이 동영상 생성 방법)

  • Shin, Dong-Won;Ho, Yo-Sung
    • Journal of Broadcast Engineering / v.20 no.3 / pp.414-420 / 2015
  • The quality of depth images is important in a 3D video system for representing complete 3D content. However, the original depth image from a depth camera has low resolution and a flickering problem, in which depth values vibrate over time. This problem causes discomfort when we view 3D content. To solve the low-resolution problem, we employ 3D warping and a depth-weighted joint bilateral filter. A temporal mean filter can be applied to solve the flickering problem, but we then encounter a residual-spectrum problem in the depth image. Thus, after classifying foreground and background regions, we use the upsampled depth image for the foreground region and the temporal mean image for the background region. Test results show that the proposed method generates a temporally consistent depth video with high resolution.
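
The per-pixel composition described above can be sketched as follows; the boolean mask convention and sliding-window length are assumptions for illustration:

```python
import numpy as np

def temporal_mean(depth_frames):
    """Mean over a window of aligned depth frames; suppresses the flicker
    of vibrating depth values in static background regions."""
    return np.mean(np.stack(depth_frames), axis=0)

def fuse_depth(upsampled, temporal_mean_img, foreground_mask):
    """Per-pixel selection: upsampled (joint-bilateral) depth where the
    mask marks foreground, temporal-mean depth in the background."""
    return np.where(foreground_mask, upsampled, temporal_mean_img)
```

The mask keeps moving objects sharp (no temporal smearing) while the averaged background stays stable from frame to frame.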

Realistic Visual Simulation of Water Effects in Response to Human Motion using a Depth Camera

  • Kim, Jong-Hyun;Lee, Jung;Kim, Chang-Hun;Kim, Sun-Jeong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.2 / pp.1019-1031 / 2017
  • In this study, we propose a new method for simulating water responding to human motion. Motion data obtained from motion-capture devices are represented as a jointed skeleton, which interacts with the velocity field in the water simulation. To integrate the motion data into the water simulation space, it is necessary to establish a mapping relationship between two fields with different properties. However, there can be severe numerical instability if the mapping breaks down, with the realism of the human-water interaction being adversely affected. To address this problem, our method extends the joint velocity mapped to each grid point to neighboring nodes. We refine these extended velocities to enable increased robustness in the water solver. Our experimental results demonstrate that water animation can be made to respond to human motions such as walking and jumping.
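
Extending a joint's velocity to neighbouring grid nodes, as described above, might look like the following 2-D sketch; the linear distance falloff and blending rule are our assumptions, not the paper's exact scheme:

```python
import numpy as np

def splat_joint_velocity(grid, pos, vel, radius=2):
    """Write a skeleton joint's velocity into a simulation velocity grid
    and extend it to neighbouring nodes with a linear distance falloff,
    so the mapping stays robust even between grid points.
    grid: (H, W, 2) velocity field, pos: (row, col), vel: 2-vector."""
    h, w, _ = grid.shape
    i0, j0 = int(round(pos[0])), int(round(pos[1]))
    for i in range(max(0, i0 - radius), min(h, i0 + radius + 1)):
        for j in range(max(0, j0 - radius), min(w, j0 + radius + 1)):
            d = np.hypot(i - pos[0], j - pos[1])
            if d <= radius:
                wt = 1.0 - d / radius          # 1 at the joint, 0 at the rim
                grid[i, j] = wt * np.asarray(vel) + (1 - wt) * grid[i, j]
    return grid
```

Spreading the velocity over a neighbourhood, rather than a single cell, is what prevents the mapping from "breaking down" into an isolated spike that destabilises the solver.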

Contactless Chroma Key System Using Gesture Recognition (제스처 인식을 이용한 비 접촉식 크로마키 시스템)

  • Jeong, Jongmyeon;Jo, HongLae;Kim, Hoyoung;Song, Sion;Lee, Junseo
    • Proceedings of the Korean Society of Computer Information Conference / 2015.07a / pp.159-160 / 2015
  • In this paper, we propose a contactless chroma key system that operates by recognizing the user's gestures. To this end, depth and RGB images are received from a Kinect camera. First, the disparity caused by the positional difference between the depth camera and the RGB camera is corrected; a morphology operation is then performed on the depth image to remove noise, and the result is combined with the RGB image to extract the object region. The extracted object region is analyzed to recognize the position and shape of the user's hand, which are treated as a pointing device to control the chroma key system. Experiments confirmed that the contactless chroma key system operates in real time.
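
The morphology step used above to denoise the depth mask can be sketched in plain NumPy (in practice OpenCV's `morphologyEx` would do this; the square structuring-element size is an assumption):

```python
import numpy as np

def erode(mask, k=1):
    """Binary erosion with a (2k+1)x(2k+1) square structuring element."""
    p = np.pad(mask, k, constant_values=0)
    h, w = mask.shape
    windows = [p[di:di + h, dj:dj + w]
               for di in range(2 * k + 1) for dj in range(2 * k + 1)]
    return np.min(windows, axis=0)

def dilate(mask, k=1):
    """Binary dilation with the same square structuring element."""
    p = np.pad(mask, k, constant_values=0)
    h, w = mask.shape
    windows = [p[di:di + h, dj:dj + w]
               for di in range(2 * k + 1) for dj in range(2 * k + 1)]
    return np.max(windows, axis=0)

def open_mask(mask, k=1):
    """Morphological opening (erosion then dilation): removes isolated
    speckle noise from a binary depth mask while keeping large regions."""
    return dilate(erode(mask, k), k)
```

Opening removes single-pixel depth noise while leaving the large connected object region (the user's silhouette) intact for the chroma key composite.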


The Individual Discrimination Location Tracking Technology for Multimodal Interaction at the Exhibition (전시 공간에서 다중 인터랙션을 위한 개인식별 위치 측위 기술 연구)

  • Jung, Hyun-Chul;Kim, Nam-Jin;Choi, Lee-Kwon
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.19-28 / 2012
  • After the internet era, we are moving toward a ubiquitous society. People are now interested in multimodal interaction technology, which enables an audience to interact naturally with the computing environment at exhibitions such as galleries, museums, and parks. There are also attempts to provide additional services based on the location information of the audience, or to improve interaction between exhibits and audience by analyzing people's usage patterns. To provide multimodal interaction services to the audience at an exhibition, it is important to distinguish individuals and trace their locations and routes. For outdoor location tracking, GPS is widely used; it can obtain the real-time location of fast-moving subjects, so it is one of the key technologies in fields requiring location tracking services. However, because GPS tracks location using satellites, it cannot be used indoors, where the satellite signal cannot be received. For this reason, studies on indoor location tracking use very short-range communication services such as ZigBee, UWB, and RFID, as well as mobile communication networks and wireless LAN services. These technologies have shortcomings, however: the audience needs to carry an additional sensor device, and the system becomes difficult and expensive as the density of the target area increases. In addition, the usual exhibition environment has many obstacles for the network, which degrades system performance. Above all, the biggest problem is that interaction methods using devices based on these older technologies cannot provide natural service to users. Moreover, because the systems rely on sensor recognition, every user must be equipped with a device, which limits the number of users who can use the system simultaneously.
To make up for these shortcomings, in this study we suggest a technology that obtains exact user location information through a location-mapping technique using Wi-Fi and a 3D camera together with the users' smartphones. We use the signal strength of wireless LAN access points to develop an indoor location tracking system at lower cost. An AP is cheaper than the devices used in other tracking techniques, and by installing software on the user's mobile device, the device itself can serve as the tracking-system terminal. For the 3D camera we used the Microsoft Kinect sensor. The Kinect can discriminate depth and human information inside the shooting area, so it is appropriate for extracting the user's body, vector, and acceleration information at low cost. We confirm the location of the audience using the cell ID obtained from the Wi-Fi signal. By using smartphones as the basic device for the location service, we remove the need for an additional tagging device and provide an environment in which multiple users can receive interaction services simultaneously. The 3D cameras located in each cell area obtain the exact location and status information of the users. They are connected to the Camera Client, which calculates the mapping information aligned to each cell, obtains the exact information of the users, and collects the status and pattern information of the audience. The location-mapping technique of the Camera Client decreases the error rate of the indoor location service, increases the accuracy of individual discrimination in the area through body-information-based discrimination, and establishes the foundation of multimodal interaction technology at exhibitions. The calculated data and information enable users to receive appropriate interaction services through the main server.
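
At its simplest, the Wi-Fi cell-ID step — confirming which cell an audience member is in from access-point signal strength — is a strongest-AP lookup. A hypothetical sketch (the cell names and RSSI values are invented; the paper's mapping may be more elaborate):

```python
def locate_cell(rssi_by_cell):
    """Return the exhibition cell whose access point shows the strongest
    received signal. RSSI values are negative dBm; larger (closer to zero)
    means a stronger signal, so the plain max works directly."""
    return max(rssi_by_cell, key=rssi_by_cell.get)
```

The coarse cell from Wi-Fi narrows the search area; the Kinect in that cell then refines the position and identifies the individual by body information.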

Tire wear judgment system implementation using depth camera (깊이 카메라를 이용한 타이어 마모도 판단 시스템 구현)

  • Kim, Min-joon;Jang, Jong-wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2016.05a / pp.262-264 / 2016
  • To check the state of tire wear, a driver or auto mechanic generally inspects the tires with the naked eye or with a coin. These methods are easy for anyone, but they make it difficult to obtain precise information: the result depends on subjective judgment, so the stability of the tire cannot be measured correctly, which may lead to an accident. Therefore, a system is needed that checks tires precisely, accurately, and easily while making up for these defects. This paper implements such a system, consisting of a scanner unit that acquires tire-surface data to determine tread wear, a storage unit that saves the data, and a web service unit that allows the user to check the information easily.
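
One way the scanner unit could turn a depth-camera scan line across the tire into a wear measurement is sketched below; the percentile heuristic and the 1.6 mm legal limit are our assumptions, not details from the paper:

```python
import numpy as np

def tread_depth_mm(profile):
    """Estimate remaining tread depth from a 1-D depth scan line across the
    tire (distances from the camera, in mm): groove bottoms are farther
    from the camera than the tread ribs, so their difference is the depth."""
    p = np.asarray(profile, float)
    rib = np.percentile(p, 10)      # nearest surface: top of the tread ribs
    groove = np.percentile(p, 90)   # farthest points: groove bottoms
    return groove - rib

def is_worn(profile, legal_limit_mm=1.6):
    """Flag a tire whose remaining tread is below the (assumed) legal limit."""
    return tread_depth_mm(profile) < legal_limit_mm
```

Percentiles rather than plain min/max make the estimate tolerant of the speckle noise typical of consumer depth cameras.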


Three-dimensional Map Construction of Indoor Environment Based on RGB-D SLAM Scheme

  • Huang, He;Weng, FuZhou;Hu, Bo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.2 / pp.45-53 / 2019
  • RGB-D SLAM (Simultaneous Localization and Mapping) refers to the technique of using a depth camera as the visual sensor for SLAM. In view of the disadvantages of high cost for laser sensors and indefinite scale for traditional monocular and binocular cameras in map construction, we study a method for creating a three-dimensional map of an indoor environment that combines depth data with an RGB-D SLAM scheme. The method uses a mobile robot system equipped with a consumer-grade RGB-D sensor (Kinect) to acquire depth data, and then creates an indoor three-dimensional point-cloud map in real time through key techniques such as positioning-point generation, closed-loop detection, and map construction. Field experiment results show that the average error of the point-cloud map created by the algorithm is 0.0045 m, which demonstrates the stability of construction using depth data and the ability to accurately create real-time three-dimensional maps of unknown indoor environments.
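
Turning each Kinect depth frame into the point cloud that such a map accumulates is standard pinhole back-projection; the intrinsic parameters in the usage below are placeholders, not the sensor's calibrated values:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into a 3-D point cloud using the
    pinhole camera model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    Returns an (H*W, 3) array of camera-frame points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

Each frame's cloud is then transformed by the current pose estimate and merged into the global map, with closed-loop detection correcting accumulated drift.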

Motion Control of a Mobile Robot Using Natural Hand Gesture (자연스런 손동작을 이용한 모바일 로봇의 동작제어)

  • Kim, A-Ram;Rhee, Sang-Yong
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.1 / pp.64-70 / 2014
  • In this paper, we propose a method for giving motion commands to a mobile robot by recognizing human hand gestures. Previous systems for controlling a robot by hand movements used several kinds of pre-arranged gestures, so the commanding motions were unnatural; they also forced people to learn the pre-arranged gestures, making them more inconvenient. To solve this problem, much research is under way on other ways to make a machine recognize hand movements. In this paper, we use a three-dimensional (depth) camera to obtain color and depth data, from which the human hand is located and its movement recognized. We use an HMM to make the proposed system perceive the movement, and the observed data are transferred to the robot, making it move in the desired direction.
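
The HMM decoding at the core of such a recogniser can be sketched with the Viterbi algorithm; the two-state model and probabilities in the usage below are toy values, not trained gesture models:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for an observation sequence — the
    decoding step an HMM gesture recogniser runs per gesture model.
    pi: (N,) initial probs, A: (N, N) transitions, B: (N, M) emissions."""
    T, N = len(obs), len(pi)
    delta = np.zeros((T, N))           # best path probability per state
    psi = np.zeros((T, N), dtype=int)  # backpointers
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A   # scores[i, j]: come from i to j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[:, obs[t]]
    # Backtrack from the best final state
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]
```

In a full recogniser, the quantised hand-trajectory features form `obs`, one trained HMM per gesture scores the sequence, and the highest-likelihood model wins.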

Recognition of Natural Hand Gesture by Using HMM (HMM을 이용한 자연스러운 손동작 인식)

  • Kim, A-Ram;Rhee, Sang-Yong
    • Journal of the Korean Institute of Intelligent Systems / v.22 no.5 / pp.639-645 / 2012
  • In this paper, we propose a method for giving motion commands to a mobile robot by recognizing human hand gestures. Previous systems for controlling a robot by hand movements used several kinds of pre-arranged gestures, so the commanding motions were unnatural; they also forced people to learn the pre-arranged gestures, making them more inconvenient. To solve this problem, much research is under way on other ways to make a machine recognize hand movements. In this paper, we use a three-dimensional (depth) camera to obtain color and depth data, from which the human hand is located and its movement recognized. We use an HMM to make the proposed system perceive the movement, and the observed data are transferred to the robot, making it move in the desired direction.