• Title/Summary/Keyword: 깊이 이미지 (depth image)

Search results: 243

Indoor Scene Classification based on Color and Depth Images for Automated Reverberation Sound Editing (자동 잔향 편집을 위한 컬러 및 깊이 정보 기반 실내 장면 분류)

  • Jeong, Min-Heuk;Yu, Yong-Hyun;Park, Sung-Jun;Hwang, Seung-Jun;Baek, Joong-Hwan
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.3 / pp.384-390 / 2020
  • The reverberation applied to sound when producing movies or VR content is a very important factor for realism and liveliness. The recommended reverberation time for each kind of space is specified in a standard called RT60 (Reverberation Time 60 dB). In this paper, we propose a scene recognition technique for automatic reverberation editing. To this end, we devised a classification model that trains color images and predicted depth images independently within the same model. Indoor scene classification is limited when trained on color information alone because of the similarity of interior structures, so deep-learning-based depth extraction is used to obtain spatial depth information. Based on RT60, 10 scene classes were constructed, and model training and evaluation were conducted. The proposed SCR+DNet (Scene Classification for Reverb + Depth Net) classifier achieves 92.4% accuracy, higher than conventional CNN classifiers.
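
The abstract describes a model that trains color images and predicted depth images independently within one network. Purely as an editorial illustration (the listing does not publish the actual SCR+DNet architecture), the sketch below shows one common way to build such a two-branch classifier in PyTorch; the backbone choice, layer sizes, and late-fusion strategy are assumptions, not the authors' design.

```python
import torch
import torch.nn as nn
from torchvision import models

class TwoBranchSceneClassifier(nn.Module):
    """Hypothetical late-fusion classifier: one CNN branch for the RGB image,
    one for the single-channel predicted depth map, fused before the head."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.rgb_branch = models.resnet18(weights=None)
        self.depth_branch = models.resnet18(weights=None)
        # adapt the depth branch to a 1-channel input
        self.depth_branch.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                            padding=3, bias=False)
        feat = self.rgb_branch.fc.in_features          # 512 for resnet18
        self.rgb_branch.fc = nn.Identity()
        self.depth_branch.fc = nn.Identity()
        self.head = nn.Linear(feat * 2, num_classes)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.rgb_branch(rgb), self.depth_branch(depth)], dim=1)
        return self.head(fused)

model = TwoBranchSceneClassifier(num_classes=10)      # 10 RT60-based classes
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 1, 224, 224))
print(logits.shape)  # torch.Size([2, 10])
```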

A Study on 2D-3D Image Conversion using Depth Map Chart Analysis (깊이정보 지도 분석을 통한 2D-3D 영상 변환 연구)

  • Kim, In-Su;Kim, Hyung-Taek;Youn, Joo-Sang;Oh, Se-Woong;Seo, in-Seok;Kim, Nam-Gyu
    • Proceedings of the Korean Society of Computer Information Conference / 2015.01a / pp.205-208 / 2015
  • Producing 3D stereoscopic video takes much longer and costs much more than producing 2D video. To reduce cost, research on converting existing 2D video into 3D stereoscopic video is ongoing. 2D-to-3D conversion methods can be divided into automatic and manual approaches, and manual conversion using a depth map chart is widely used to obtain high-quality converted video. However, because quantitative analysis data for the depth map charts used in manual 2D-3D conversion are scarce, it is difficult for users to set an accurate reference depth value for the converted image. In this paper, we present a range of 2D-3D manual conversion adjustments based on quantitative analysis of the depth values in depth map charts, so that appropriate image changes can be guided.
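
The abstract argues that quantitative statistics of the depth map are needed before reference depth values can be set for manual conversion. The snippet below is a small, hypothetical illustration of such an analysis (histogram-style percentile summary of an 8-bit depth map); the percentile choices are assumptions for illustration only.

```python
import numpy as np

def depth_map_statistics(depth: np.ndarray) -> dict:
    """Summarize an 8-bit depth map (0 = far, 255 = near, by the convention
    assumed here) so a reference depth value and a working adjustment range
    can be chosen from data rather than by eye."""
    d = depth.astype(np.float64).ravel()
    return {
        "min": float(d.min()),
        "max": float(d.max()),
        "mean": float(d.mean()),
        "median": float(np.median(d)),
        # candidate working range for manual adjustment (assumed percentiles)
        "p5": float(np.percentile(d, 5)),
        "p95": float(np.percentile(d, 95)),
    }

# toy example: a synthetic horizontal depth ramp
depth_map = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
print(depth_map_statistics(depth_map))
```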


The Center of Hand Detection Using Geometric feature of Hand Image (손 이미지의 기하학적 특징을 이용한 중심 검출)

  • Kim, Min-Ha;Lee, Sang-Geol;Cho, Jae-Hyun;Cha, Eui-Young
    • Proceedings of the Korean Society of Computer Information Conference / 2012.07a / pp.311-313 / 2012
  • In this paper, we propose a method for detecting the center of a hand using the depth information of an image obtained with an RGBD (Red Green Blue Depth) sensor and the geometric features of the hand image. The hand region is detected using the image's depth information and skin-color information. From the geometric information of the detected hand, a convex hull of the hand is formed, and the center of the hand is found using the positions of the vertices of the convex hull. The hand center can be used to track the hand's position, count the number of fingers, and so on, and such applications can be built into systems based on human-computer interaction (HCI).
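
Since the abstract only outlines the pipeline (depth plus skin-color segmentation, convex hull, center from hull vertices), here is a minimal OpenCV sketch of that idea; the depth and skin-color thresholds and the choice of averaging hull vertices for the center are assumptions, not the paper's exact parameters.

```python
import cv2
import numpy as np

def hand_center(color_bgr: np.ndarray, depth_mm: np.ndarray,
                near_mm: int = 400, far_mm: int = 900):
    """Segment the hand by depth range and skin color, then estimate its
    center from the vertices of the convex hull of the largest contour."""
    # 1) keep only pixels inside the assumed hand depth range
    in_range = (depth_mm >= near_mm) & (depth_mm <= far_mm)
    depth_mask = (in_range.astype(np.uint8)) * 255
    # 2) rough skin-color mask in YCrCb space (thresholds are illustrative)
    ycrcb = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))
    mask = cv2.bitwise_and(depth_mask, skin_mask)
    # 3) largest contour -> convex hull -> mean of hull vertices as the center
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand)              # (N, 1, 2) array of hull vertices
    cx, cy = hull.reshape(-1, 2).mean(axis=0)
    return int(cx), int(cy)
```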


Facial animation production method based on depth images (깊이 이미지 이용한 페이셜 애니메이션 제작 방법)

  • Fu, Linwei;Jiang, Haitao;Ji, Yun;Qu, Lin;Yun, Taesoo
    • Proceedings of the Korea Contents Association Conference / 2018.05a / pp.49-50 / 2018
  • This paper introduces a method for producing facial animation using depth images. The TrueDepth camera of the iPhone X is used to capture the depth of a human face accurately, and every change of facial expression is recorded as mobile data through evenly distributed dots; this recorded data is then used to produce the facial animation. The method described here omits the rigging step of the conventional facial animation pipeline, so the recorded facial expression data can be transferred directly to the 3D model. This shortens the overall facial animation production process and makes it simpler and more efficient.
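
The abstract's point is that per-frame expression data captured on the device can drive a 3D model directly, without rigging. Purely as an editorial illustration (independent of ARKit and not the authors' pipeline), the sketch below shows the usual blendshape idea behind that claim: each frame's captured weights linearly blend precomputed per-expression vertex offsets onto a neutral mesh.

```python
import numpy as np

def apply_blendshapes(neutral: np.ndarray, deltas: np.ndarray,
                      weights: np.ndarray) -> np.ndarray:
    """Blendshape evaluation: vertices = neutral + sum_i weights[i] * deltas[i].

    neutral: (V, 3) neutral-pose vertex positions
    deltas:  (K, V, 3) per-expression vertex offsets (e.g. a 'jaw open' shape)
    weights: (K,) captured per-frame activations in [0, 1]
    """
    return neutral + np.tensordot(weights, deltas, axes=1)

# toy numbers: 4 vertices, 2 hypothetical expression shapes, one frame
neutral = np.zeros((4, 3))
deltas = np.random.randn(2, 4, 3) * 0.01
frame_weights = np.array([0.8, 0.1])
posed = apply_blendshapes(neutral, deltas, frame_weights)
print(posed.shape)  # (4, 3)
```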


Correlation Analysis between Crack Depth of Concrete and Characteristics of Images (콘크리트 균열 깊이와 이미지 특성정보간의 상관성 분석)

  • Jung, Seo-Young;Yu, Jung-Ho
    • Proceedings of the Korean Institute of Building Construction Conference / 2021.05a / pp.162-163 / 2021
  • Currently, in maintenance practice, the depth of cracks is measured using ultrasonic detectors. This method measures crack depth by attaching ultrasonic depth-measuring equipment to the concrete surface, which places restrictions on when and where inspections can be carried out. These limitations can be addressed by developing image-based crack depth measurement AI technology: if crack depth is measured from images, the restrictions on the timing and location of inspections can be lifted, because images acquired with simple filming equipment can serve as the input. To develop such artificial intelligence technology efficiently, it is essential to identify the relationship between crack depth measurements and image characteristic information. Thus, this study is a preliminary investigation toward image-based crack depth measurement AI technology and aims to identify the image characteristics related to crack depth.
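
Because the study's goal is to find image characteristics that correlate with measured crack depth, a tiny correlation check of the kind described could look as follows; the chosen feature (mean crack width in pixels) and the sample values are invented for illustration only.

```python
import numpy as np

# hypothetical paired observations: an image-derived feature vs. measured depth
crack_width_px = np.array([2.1, 3.4, 4.0, 5.2, 6.8, 7.1])       # from images
crack_depth_mm = np.array([8.0, 12.5, 14.0, 19.0, 25.5, 27.0])  # ultrasonic

# Pearson correlation coefficient between the image feature and crack depth
r = np.corrcoef(crack_width_px, crack_depth_mm)[0, 1]
print(f"Pearson r = {r:.3f}")
```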


Development of Real-Time Objects Segmentation for Dual-Camera Synthesis in iOS (iOS 기반 실시간 객체 분리 및 듀얼 카메라 합성 개발)

  • Jang, Yoo-jin;Kim, Ji-yeong;Lee, Ju-hyun;Hwang, Jun
    • Journal of Internet Computing and Services / v.22 no.3 / pp.37-43 / 2021
  • In this paper, we study how objects seen by the front and back cameras can be recognized in real time in a mobile environment, so that regions of object pixels are segmented and then synthesized through image processing. To this end, we applied the DeepLabV3 machine learning model to the dual cameras provided by Apple's iOS. We also propose methods that use Apple's Core Image and Core Graphics libraries for image synthesis and post-processing. Furthermore, we reduced CPU usage compared with previous work and compared the throughput and results of the Depth-based and DeepLabV3-based approaches. Finally, we developed a camera application using these two methods.
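
The paper combines DeepLabV3 person segmentation with dual-camera compositing on iOS (Core Image / Core Graphics). As a rough, platform-neutral illustration only, the sketch below uses torchvision's DeepLabV3 to mask the person from a "front camera" frame and paste it over a "back camera" frame; it is not the authors' Core Image pipeline, and the frame sources are assumed.

```python
import cv2
import numpy as np
import torch
from torchvision.models.segmentation import (deeplabv3_resnet50,
                                              DeepLabV3_ResNet50_Weights)

weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()
PERSON = 15  # 'person' class index in the model's label set

def composite(front_rgb: np.ndarray, back_rgb: np.ndarray) -> np.ndarray:
    """Mask the person in front_rgb (H, W, 3 uint8) and paste those pixels
    onto back_rgb of the same shape."""
    inp = preprocess(torch.from_numpy(front_rgb).permute(2, 0, 1)).unsqueeze(0)
    with torch.no_grad():
        scores = model(inp)["out"][0]                # (21, h', w') class scores
    mask_small = (scores.argmax(0) == PERSON).byte().cpu().numpy()
    # resize the mask back to the original frame size before compositing
    mask = cv2.resize(mask_small, (front_rgb.shape[1], front_rgb.shape[0]),
                      interpolation=cv2.INTER_NEAREST).astype(bool)
    result = back_rgb.copy()
    result[mask] = front_rgb[mask]
    return result
```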

A Study on the VR Convergence Contents Creation Process Using Ink Painting (수묵화를 이용한 VR 융합콘텐츠 제작공정 연구)

  • Hou, Zheng-Dong;Choi, Chul-Young
    • Journal of the Korea Convergence Society / v.9 no.7 / pp.193-198 / 2018
  • Applying VR technology to animation has emerged as a trend in recent years. If VR technology is applied to traditional ink-wash animation, the 2D artwork is expected to gain a new narrative style and new visual and auditory language, making it a new animation genre. There are many technical difficulties in putting an existing 2D ink-wash image onto a 360-degree display. A VR ink animation was created that gives depth to the VR space by extracting layers from the ink-painting image that serves as the background of a traditional ink animation, based on their distance, and placing the extracted layers on curved surfaces aligned with that depth in 360-degree space. In the text, we review the problems that arise when extracting distant-view, middle-view, and close-view layers from an existing ink-painting image and suggest effective ways to approach them.
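
Since the method rests on splitting the ink-painting background into distant, middle, and close layers by depth, a minimal sketch of that layer extraction is given below, assuming a grayscale depth map and simple band thresholds; the paper's actual placement of the layers on curved surfaces in 360-degree VR space is not reproduced here.

```python
import numpy as np

def split_into_depth_layers(image_rgba: np.ndarray, depth: np.ndarray,
                            bounds=(85, 170)):
    """Split an RGBA image into far / middle / near layers using a depth map
    (uint8, larger = nearer). Pixels outside a layer's depth band become
    transparent, so each layer can later be placed at its own distance."""
    low, high = bounds
    bands = [depth < low, (depth >= low) & (depth < high), depth >= high]
    layers = []
    for band in bands:                                  # far, middle, near
        layer = image_rgba.copy()
        layer[..., 3] = np.where(band, layer[..., 3], 0)  # zero alpha outside band
        layers.append(layer)
    return layers
```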

Real-time monitoring system with Kinect v2 using notifications on mobile devices (Kinect V2를 이용한 모바일 장치 실시간 알림 모니터링 시스템)

  • Eric, Niyonsaba;Jang, Jong Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2016.05a / pp.277-280 / 2016
  • A real-time remote monitoring system has important value in many surveillance situations: it keeps someone informed of what is happening at the locations being monitored. Kinect v2 is a new kind of camera that gives computers eyes and can generate several kinds of data, such as color and depth images, audio input, and skeletal data. In this paper, using the Kinect v2 sensor and its depth image, we present a monitoring system for the space covered by the Kinect. Within the space covered by the Kinect camera, we define a target area to monitor as a depth range set by minimum and maximum distances. Using a computer vision library (Emgu CV), when an object is tracked in the target area, the Kinect camera captures the whole color image and sends it to a database, and at the same time the user receives a notification on his mobile device wherever he has internet access.
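
The core of the monitoring logic is a depth-range test on each Kinect depth frame: if enough pixels fall between the configured minimum and maximum distances, the color frame is captured and a notification is sent. A small, library-agnostic sketch of that trigger follows (the original work uses Emgu CV in .NET; the pixel-count threshold and the helper functions in the comments are assumptions).

```python
import numpy as np

def object_in_target_area(depth_mm: np.ndarray,
                          min_mm: int = 800, max_mm: int = 2500,
                          min_pixels: int = 2000) -> bool:
    """Return True when a sufficiently large number of depth pixels lies
    within the monitored distance range [min_mm, max_mm]."""
    in_range = (depth_mm >= min_mm) & (depth_mm <= max_mm)
    return int(in_range.sum()) >= min_pixels

# per-frame loop (frame sources and the notifier are hypothetical helpers):
# if object_in_target_area(depth_frame):
#     save_color_frame_to_database(color_frame)
#     push_mobile_notification("Object detected in target area")
```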


Stereoscopic Effect of 3D images according to the Quality of the Depth Map and the Change in the Depth of a Subject (깊이맵의 상세도와 주피사체의 깊이 변화에 따른 3D 이미지의 입체효과)

  • Lee, Won-Jae;Choi, Yoo-Joo;Lee, Ju-Hwan
    • Science of Emotion and Sensibility / v.16 no.1 / pp.29-42 / 2013
  • In this paper, we analyze the effects on depth perception, volume perception, and visual discomfort of changes in the quality of the depth image and in the depth of the major object. For the analysis, a 2D image was converted into eighteen 3D images using depth images generated for different depth positions of the major object and background, each represented at three levels of detail. A subjective test was carried out with the eighteen 3D images to investigate the degrees of depth perception, volume perception, and visual discomfort reported by the subjects according to the change in the depth position of the major object and the quality of the depth map. The absolute depth position of the major object and the relative depth difference between the background and the major object were each adjusted in three levels, and the detail of the depth map was also represented in three levels. Experimental results showed that the quality of the depth image affected depth perception, volume perception, and visual discomfort differently according to the absolute and relative depth position of the major object. A cardboard-style depth image severely damaged volume perception regardless of the depth position of the major object; in particular, depth perception was degraded more severely by the cardboard depth image when the major object was located inside the screen than outside it. Furthermore, the subjects did not feel any difference in depth perception, volume perception, or visual comfort between the 3D images generated with the detailed depth map and those generated with the rough depth map. As a result, it was concluded that an excessively detailed depth map is not necessary for enhancing stereoscopic perception in 2D-to-3D conversion.
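
The stimuli in this study come from 2D-to-3D conversion with depth maps, i.e. depth-image-based rendering: each pixel is shifted horizontally by a disparity derived from its depth to synthesize the second view. The sketch below illustrates that shift in its simplest form, with no hole filling; the maximum-disparity value is an assumption, not a parameter from the study.

```python
import numpy as np

def synthesize_right_view(image: np.ndarray, depth: np.ndarray,
                          max_disparity_px: int = 16) -> np.ndarray:
    """Naive depth-image-based rendering: shift each pixel horizontally in
    proportion to its depth (uint8, larger = nearer). Holes stay black."""
    h, w = depth.shape
    disparity = (depth.astype(np.float32) / 255.0 * max_disparity_px).astype(int)
    right = np.zeros_like(image)
    xs = np.arange(w)
    for y in range(h):
        new_x = xs - disparity[y]                 # nearer pixels shift more
        valid = (new_x >= 0) & (new_x < w)
        right[y, new_x[valid]] = image[y, xs[valid]]
    return right
```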


A Ground Penetrating Radar Detection of Buried Cavities and Pipes and Development of an Image Processing Program (지반 공동 및 매립관의 지반 투과 레이더 탐사 및 이미지 처리 프로그램 개발)

  • Lee, Hyun-Ho
    • Journal of the Korean Recycled Construction Resources Institute / v.5 no.2 / pp.177-184 / 2017
  • Many ground subsidence accidents have occurred in Korea, caused by the settlement and leakage of deteriorated sewage pipes. This study aims to establish empirical data on ground penetrating radar (GPR) detection of ground subsidence, and a test bed was manufactured for that purpose. The GPR detection variables are the embedment depth and horizontal distance of the embedded cast iron pipe and expanded polystyrene (EPS). The detection results show that EPS embedded at a depth of 1.5 m was difficult to detect, and EPS embedded within 0.5 m of the cast iron pipe could not be detected at all because the cast iron pipe's signal was very strong. This study also developed an image processing program, called the GPR image processing program (GPRiPP), to process the GPR detection results. Its major feature is a gain function that amplifies the wiggle-trace signal. Compared with existing programs, GPRiPP shows similar image processing performance.
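
The program's main feature is a gain function that amplifies the wiggle (trace) signal, which normally weakens with two-way travel time. A generic illustration of such a time-varying gain applied to GPR traces is sketched below; the linear-plus-exponential gain profile and its constants are assumptions for illustration, not the GPRiPP implementation.

```python
import numpy as np

def apply_time_gain(traces: np.ndarray, a: float = 0.5, b: float = 3.0) -> np.ndarray:
    """Amplify late-time samples of GPR traces.

    traces: (n_samples, n_traces) array, one column per scan position.
    gain(t) = 1 + a*t + (exp(b*t) - 1), with t normalized to [0, 1] (assumed form).
    """
    n_samples = traces.shape[0]
    t = np.linspace(0.0, 1.0, n_samples)[:, None]     # normalized travel time
    gain = 1.0 + a * t + (np.exp(b * t) - 1.0)
    return traces * gain

# toy example: an exponentially decaying reflection has its amplitude restored
t = np.linspace(0, 1, 512)[:, None]
raw = np.sin(40 * t) * np.exp(-4 * t)
balanced = apply_time_gain(raw)
```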