• Title/Abstract/Keyword: 3D Environment Reconstruction

Search results: 82 items (processing time: 0.029 s)

다 시점 영상 콘텐츠 특성에 따른 딥러닝 기반 깊이 추정 방법론 (Deep learning-based Multi-view Depth Estimation Methodology of Contents' Characteristics)

  • 손호성;신민정;김준수;윤국진;정원식;이현우;강석주
    • 한국방송∙미디어공학회:학술대회논문집 / 한국방송∙미디어공학회 2022 Summer Conference / pp.4-7 / 2022
  • Deep learning networks for multi-view depth estimation, aimed at 3D scene reconstruction from multi-view video content, have recently been studied widely. Multi-view content has diverse characteristics depending on the capture configuration, environment, and settings, and for high-quality 3D reconstruction it is important to understand these characteristics and apply the appropriate depth estimation techniques. Capture configurations include convergent and divergent layouts, and capture settings include the baseline, the physical distance between camera viewpoints. Based on these content types and their characteristics, this study addresses which depth estimation network methodology suits a given dataset. The experimental results show that directly applying existing multi-view depth estimation networks to datasets with a divergent layout or a large baseline has clear limitations. Accordingly, we verify the need for a suitable number of reference views and an appropriate reference-view selection algorithm for each imaging environment. In conclusion, when implementing a deep learning-based depth estimation network for 3D scene reconstruction, the results of this study can serve as a guideline for selecting a multi-view depth estimation technique for the content at hand.

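The reference-view selection the abstract argues for can be sketched in a few lines. This is an illustrative baseline-distance heuristic, not the paper's network or algorithm; the function name, pose format, and threshold values are all hypothetical:

```python
import numpy as np

def select_reference_views(poses, target_idx, num_refs=2, max_baseline=0.5):
    """Pick reference views for a target view by inter-camera distance."""
    centers = np.asarray(poses, dtype=float)
    dists = np.linalg.norm(centers - centers[target_idx], axis=1)
    dists[target_idx] = np.inf                  # never pick the target itself
    order = np.argsort(dists, kind="stable")    # nearest baselines first
    refs = [int(i) for i in order if dists[i] <= max_baseline]
    return refs[:num_refs]

# Four cameras 0.2 m apart along a rail; choose references for camera 1
poses = [[0.0, 0.0, 0.0], [0.2, 0.0, 0.0], [0.4, 0.0, 0.0], [0.6, 0.0, 0.0]]
print(select_reference_views(poses, target_idx=1))  # [0, 2]
```

For a divergent or large-baseline rig, tightening `max_baseline` or changing `num_refs` is exactly the kind of per-dataset tuning the study motivates.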

영상합성을 위한 3D 공간 해석 및 조명환경의 재구성 (3D Analysis of Scene and Light Environment Reconstruction for Image Synthesis)

  • 황용호;홍현기
    • 한국게임학회 논문지 / Vol. 6, No. 2 / pp.45-50 / 2006
  • To composite virtual objects realistically into a real-world scene, the lighting present in the scene must be analyzed. This paper proposes a new light-environment reconstruction method that estimates camera and light positions without prior camera calibration. First, an HDR (High Dynamic Range) radiance map is generated from omni-directional multi-exposure images captured with a fisheye lens. The camera positions are then estimated from a set of corresponding points, and the light positions are reconstructed using direction vectors. The light environment is reconstructed by classifying lights into global lights, which affect the whole scene, and directional local lights, which affect only limited regions. Rendering the reconstructed environment with distributed ray tracing confirms that realistic composite images are obtained. The proposed method requires no prior camera calibration and reconstructs the light environment automatically.

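The radiance-map step can be illustrated with a minimal multi-exposure merge. This sketch assumes a linear camera response and a simple hat weighting; the full pipeline (as in Debevec-style HDR recovery) first estimates the response curve, which is omitted here:

```python
import numpy as np

def merge_hdr(exposures, times):
    """Merge multi-exposure images into a radiance map (simplified sketch).

    exposures: list of float arrays in [0, 1]; times: exposure times in s.
    """
    num = np.zeros_like(exposures[0], dtype=float)
    den = np.zeros_like(exposures[0], dtype=float)
    for img, t in zip(exposures, times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight, peaks at mid-gray
        num += w * img / t                   # per-exposure radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)

# One pixel of true radiance 0.4/s observed at two exposure times
short = np.array([[0.2]])    # 0.4 * 0.5 s
long_ = np.array([[0.8]])    # 0.4 * 2.0 s
radiance = merge_hdr([short, long_], [0.5, 2.0])
print(radiance)  # ~0.4
```

The hat weight downweights near-saturated and near-black pixels, which is why multiple exposures recover a usable radiance value where any single exposure would clip.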

A kinect-based parking assistance system

  • Bellone, Mauro;Pascali, Luca;Reina, Giulio
    • Advances in Robotics Research / Vol. 1, No. 2 / pp.127-140 / 2014
  • This work presents an IR-based system for parking assistance and obstacle detection in the automotive field that employs the Microsoft Kinect camera for fast 3D point-cloud reconstruction. In contrast to previous research that attempts to explicitly identify obstacles, the proposed system detects the "reachable regions" of the environment, i.e., those regions the vehicle can drive to from its current position. A user-friendly 2D traversability grid of cells is generated and used as a visual aid for parking assistance. Given a raw 3D point cloud, each point is first mapped into an individual cell; the elevation information is then used within a graph-based algorithm to label each cell as traversable or non-traversable. Following this rationale, positive and negative obstacles, as well as unknown regions, can be implicitly detected. Additionally, no flat-world assumption is required. Experimental results obtained from the system in typical parking scenarios are presented, showing its effectiveness for scene interpretation and the detection of several types of obstacles.
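The cell-mapping and elevation-labeling steps can be sketched as follows. The paper uses a graph-based labeling, so this per-cell elevation-span rule, the cell size, and the step threshold are simplifications for illustration only:

```python
import numpy as np

def traversability_grid(points, cell=0.5, max_step=0.15):
    """Label grid cells from point-cloud elevation (simplified sketch).

    points: (N, 3) array of (x, y, z). A cell is non-traversable (0) when
    its elevation span exceeds max_step, and unknown (-1) when empty.
    """
    pts = np.asarray(points, dtype=float)
    ij = np.floor(pts[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)                       # shift indices to start at 0
    shape = ij.max(axis=0) + 1
    zmin = np.full(shape, np.inf)
    zmax = np.full(shape, -np.inf)
    for (i, j), z in zip(ij, pts[:, 2]):       # per-point cell statistics
        zmin[i, j] = min(zmin[i, j], z)
        zmax[i, j] = max(zmax[i, j], z)
    grid = np.full(shape, -1, dtype=int)       # -1: unknown (no points)
    seen = np.isfinite(zmin)
    grid[seen] = (zmax[seen] - zmin[seen] <= max_step).astype(int)
    return grid                                # 1: traversable, 0: obstacle

# Flat ground in two cells plus a raised box in a third
flat = [[0.1, 0.1, 0.0], [0.6, 0.1, 0.02]]
box = [[1.1, 0.1, 0.0], [1.2, 0.2, 0.4]]
print(traversability_grid(flat + box))  # [[1] [1] [0]]
```

Note how the empty-cell default (-1) gives the "unknown regions" of the abstract for free, and a positive or negative obstacle both register as a large elevation span.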

3D Segmentation for High-Resolution Image Datasets Using a Commercial Editing Tool in the IoT Environment

  • Kwon, Koojoo;Shin, Byeong-Seok
    • Journal of Information Processing Systems / Vol. 13, No. 5 / pp.1126-1134 / 2017
  • A variety of medical service applications in the field of the Internet of Things (IoT) are being studied. Segmentation, which identifies meaningful regions in images, is important and is also required for 3D images. Previous methods have been based on gray values and shape. The Visible Korean dataset consists of serially sectioned high-resolution color images. Unlike computed tomography or magnetic resonance images, color images are difficult to segment automatically, because object boundaries are much harder to detect in color images than in grayscale images. Therefore, skilled anatomists usually segment color images manually or semi-automatically. We present an out-of-core 3D segmentation method for large-scale image datasets. Our method can segment significant regions in the coronal and sagittal planes, as well as the axial plane, to produce a 3D image. Our system verifies the result interactively with a multi-planar reconstruction view and a 3D view. The system can be used to train unskilled anatomists and medical students, and it also enables a skilled anatomist to segment images remotely, since transferring such large amounts of data is difficult.
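The multi-planar reconstruction view amounts to re-slicing the stacked section images along the three anatomical planes. A small in-memory sketch (unlike the paper's out-of-core implementation) shows the idea; the (z, y, x) index order is an assumption for stacked serial images:

```python
import numpy as np

def mpr_slices(volume, x, y, z):
    """Return axial, coronal, and sagittal slices of a (z, y, x) volume."""
    vol = np.asarray(volume)
    axial = vol[z, :, :]      # one sectioned image from the stack
    coronal = vol[:, y, :]    # re-sliced across all sections
    sagittal = vol[:, :, x]
    return axial, coronal, sagittal

vol = np.arange(2 * 3 * 4).reshape(2, 3, 4)   # tiny 2x3x4 stand-in volume
a, c, s = mpr_slices(vol, x=1, y=2, z=0)
print(a.shape, c.shape, s.shape)  # (3, 4) (2, 4) (2, 3)
```

For the real high-resolution dataset the whole volume cannot be held in memory, which is why the paper's out-of-core design loads only the rows and columns each view needs.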

원격지 환경 모니터링을 위한 적응형 증강 휴먼 합성 기법 (Augmented Human Synthesis for Remote Monitoring)

  • 최서현;조동식
    • 한국정보통신학회:학술대회논문집 / 한국정보통신학회 2021 Fall Conference / pp.427-428 / 2021
  • Recently, various capture and guidance interface methods have been proposed for monitoring remote factories and diagnosing their faults. The 2D manuals used for repair are not intuitive and provide only a limited sense of space, so augmented reality techniques in which a repair expert interacts with a virtual object (or a captured remote person) are being applied. This paper presents a guidance method based on a virtual human (an augmented human) rendered in augmented reality within a 3D capture environment. This allows repairs to be carried out quickly, as if the expert were present on site. To this end, we present an adaptive augmented human synthesis technique that adapts the displayed augmented human to the remote environment. With the proposed approach, repair guidance can be provided through an augmented human across distant sites such as factories, offices, medical facilities, and schools.


Essential Computer Vision Methods for Maximal Visual Quality of Experience on Augmented Reality

  • Heo, Suwoong;Song, Hyewon;Kim, Jinwoo;Nguyen, Anh-Duc;Lee, Sanghoon
    • Journal of International Society for Simulation Surgery / Vol. 3, No. 2 / pp.39-45 / 2016
  • Augmented reality is an environment that combines a real-world view with information drawn by a computer. Since the image the user sees through an augmented reality device is a synthetic image composed of a real view and a virtual image, it is important that the computer-generated virtual image harmonize well with the real-view image. In this paper, we review several computer vision and graphics methods that give users a realistic augmented reality experience. To generate a visually harmonized synthetic image from a real and a virtual image, the computer must know the 3D geometry and environmental information such as lighting and material surface reflectivity, and many computer vision methods aim to estimate these. We introduce approaches for acquiring geometric information, the lighting environment, and material surface properties from monocular or multi-view images. We expect this paper to give readers an intuition for the computer vision methods that underpin a realistic augmented reality experience.

CT 영상을 이용한 무릎관절 모의 치환 시술 환경 (Surgical Simulation Environment for Replacement of Artificial Knee Joint)

  • 김동민
    • 전기전자학회논문지 / Vol. 7, No. 1 / pp.119-126 / 2003
  • This paper presents a method for building a surgical simulation environment for artificial knee joint replacement using CT images. The information needed to reconstruct the 3D shape of the joint is obtained from serial CT images through image processing, including noise removal and point-data extraction; the extracted point data are reconstructed into a true-scale 3D model using the image resolution and the mechanical axis defined in biomechanics. The reconstructed model supports joint-motion analysis and contact-surface analysis through a PC-based simulation program, and replacement and cutting of the artificial joint can be performed virtually with a joystick and mouse. In addition, contact-surface analysis of the artificial joint interface visualizes the fit of the procedure and predicts wear in the cartilage region.

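The slice-to-model step, extracting true-scale point data from serial CT images, can be sketched as below. A simple intensity threshold stands in for the paper's noise-removal and extraction processing, and every value here is illustrative:

```python
import numpy as np

def slices_to_points(slices, pixel_mm, slice_mm, threshold):
    """Extract scaled 3D point data from serial CT slices (sketch).

    pixel_mm and slice_mm convert pixel/slice indices to millimetres,
    which is how image resolution yields a true-scale reconstruction.
    """
    points = []
    for k, img in enumerate(slices):
        ys, xs = np.nonzero(np.asarray(img) >= threshold)  # bone-like pixels
        for x, y in zip(xs, ys):
            points.append((x * pixel_mm, y * pixel_mm, k * slice_mm))
    return np.array(points)

slice0 = [[0, 0], [0, 900]]       # one bright (bone-like) pixel per slice
slice1 = [[0, 950], [0, 0]]
pts = slices_to_points([slice0, slice1], pixel_mm=0.5, slice_mm=1.0,
                       threshold=800)
print(pts)
```

The resulting point set would then be registered to the mechanical axis and surfaced into the model the simulation program manipulates.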

Development of underwater 3D shape measurement system with improved radiation tolerance

  • Kim, Taewon;Choi, Youngsoo;Ko, Yun-ho
    • Nuclear Engineering and Technology / Vol. 53, No. 4 / pp.1189-1198 / 2021
  • When performing remote tasks with robots in nuclear power plants, a 3D shape measurement system improves the efficiency of remote operations by making the current state of the target object, for example its size, shape, and distance, easy to identify. Nuclear power plants present high-radiation, underwater environments; the electronic parts that make up 3D shape measurement systems therefore degrade and cannot be used for long periods. In addition, given the refraction caused by the medium change in the underwater environment, optical design constraints and corresponding calibration methods are required. The present study proposes a method for developing an underwater 3D shape measurement system with improved radiation tolerance, composed of commercial electronic parts and a stereo camera, that can easily and readily correct for underwater refraction. To improve radiation tolerance, the number of parts exposed to the radiation environment was minimized to only the necessary components: a line-beam laser, a motor to rotate the laser, and a stereo camera. Because the camera's signal-processing and control circuits are susceptible to radiation, the image sensor and lens were separated from the camera's main body. The prototype developed in this study was built from commercial electronic parts, so overall radiation tolerance could be improved at relatively low cost, and it was easy to manufacture because it imposes few optical design constraints.
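The underwater refraction that the calibration must correct follows Snell's law at the flat port of the housing. The sketch below shows only this basic ray bending; the port normal and refractive indices are assumed inputs, not the paper's calibration procedure:

```python
import numpy as np

def refract_ray(d, n, n1=1.0, n2=1.33):
    """Refract unit ray direction d at an interface with normal n (Snell).

    n1 -> n2 is the air-to-water transition; n points against the
    incoming ray. Returns None on total internal reflection.
    """
    d = np.asarray(d, dtype=float)
    d = d / np.linalg.norm(d)
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    cos_i = -np.dot(n, d)
    r = n1 / n2
    sin_t2 = r * r * (1.0 - cos_i * cos_i)
    if sin_t2 > 1.0:
        return None
    cos_t = np.sqrt(1.0 - sin_t2)
    return r * d + (r * cos_i - cos_t) * n   # refracted unit direction

# A ray 30 degrees off the port normal bends toward the normal in water
d = np.array([np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))])
t = refract_ray(d, n=[0.0, 0.0, -1.0])
print(np.degrees(np.arcsin(t[0])))  # ~22.1 degrees
```

Applying this bending to every camera ray is what makes standard stereo triangulation usable behind a flat underwater port.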

Design and Implementation of a Real-time Region Pointing System using Arm-Pointing Gesture Interface in a 3D Environment

  • Han, Yun-Sang;Seo, Yung-Ho;Doo, Kyoung-Soo;Choi, Jong-Soo
    • 한국방송∙미디어공학회:학술대회논문집 / 한국방송공학회 2009 IWAIT / pp.290-293 / 2009
  • In this paper, we propose a method to estimate the pointed-at region in the real world from camera images. In general, an arm-pointing gesture encodes a direction extending from the user's fingertip to a target point. In the proposed work, we assume that the pointing ray can be approximated by a straight line passing through the user's face and fingertip. The proposed method therefore extracts two end points for estimating the pointing direction: one from the user's face region and another from the user's fingertip region. The pointing direction and its target region are then estimated from the 2D-3D projective mapping between the camera images and the real-world scene. To demonstrate an application of the proposed method, we constructed an ICGS (interactive cinema guiding system) that employs two CCD cameras and a monitor. The accuracy and robustness of the proposed method are verified by experimental results on several real video sequences.

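The geometric core, intersecting the face-to-fingertip ray with a known plane, can be sketched directly. The 3D face and fingertip points are assumed to be already triangulated from the two cameras; the coordinates below are purely illustrative:

```python
import numpy as np

def pointing_target(face, fingertip, plane_point, plane_normal):
    """Intersect the face->fingertip ray with a plane (pointing sketch)."""
    face = np.asarray(face, dtype=float)
    d = np.asarray(fingertip, dtype=float) - face   # pointing direction
    n = np.asarray(plane_normal, dtype=float)
    denom = np.dot(n, d)
    if abs(denom) < 1e-9:
        return None                                 # ray parallel to plane
    t = np.dot(n, np.asarray(plane_point, dtype=float) - face) / denom
    if t < 0:
        return None                                 # plane behind the user
    return face + t * d

face = [0.0, 1.6, 0.0]          # triangulated head position (m)
fingertip = [0.0, 1.4, 0.4]     # 40 cm ahead, 20 cm lower
target = pointing_target(face, fingertip,
                         plane_point=[0, 0, 2], plane_normal=[0, 0, 1])
print(target)  # hits the wall 2 m ahead at 0.6 m height
```

Mapping the resulting 3D hit point back onto screen regions is then a per-setup calibration, which is what the ICGS demonstration provides.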

자율 주행 용접 로봇을 위한 시각 센서 개발과 환경 모델링 (Visual Sensor Design and Environment Modeling for Autonomous Mobile Welding Robots)

  • 김민영;조형석;김재훈
    • 제어로봇시스템학회논문지 / Vol. 8, No. 9 / pp.776-787 / 2002
  • Automation of the welding process in shipyards is ultimately necessary, since the welding site is spatially enclosed by floors and girders and welding operators are therefore exposed to hostile working conditions. To address this problem, a mobile welding robot that can navigate autonomously within the enclosure has been developed. To perform the welding task in the closed space, the robotic welding system needs a sensor system for recognizing the working environment and tracking the weld seam, together with a specially designed environment recognition strategy. In this paper, a three-dimensional laser vision system based on optical triangulation is developed to provide the robot with a 3D map of the work environment. Using this sensor system, a spatial filter based on neural network technology is designed for extracting the center of the laser stripe and is evaluated in various situations. An environment modeling algorithm is proposed and tested, composed of a laser scanning module for 3D voxel modeling and a plane reconstruction module for mobile robot localization. Finally, an environment recognition strategy is developed so that the mobile welding robot can recognize its work environment efficiently. The design of the sensor system, the algorithm for sensing the partially structured environment with plane segments, and the strategy and tactics for sensing the work environment are described and discussed in detail with a series of experiments.
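The laser-stripe center extraction can be illustrated with the classical intensity-centroid method. The paper uses a neural-network spatial filter instead, so this is only a simple stand-in for the same step:

```python
import numpy as np

def stripe_centers(image):
    """Sub-pixel laser-stripe row per column via intensity centroid."""
    img = np.asarray(image, dtype=float)
    rows = np.arange(img.shape[0])[:, None]
    weight = img.sum(axis=0)                         # total light per column
    centers = (rows * img).sum(axis=0) / np.maximum(weight, 1e-8)
    centers[weight == 0] = np.nan                    # no stripe seen here
    return centers

# A stripe between rows 1 and 2, brighter toward row 2 in column 0
img = [[0, 0, 0],
       [10, 20, 10],
       [30, 20, 10],
       [0, 0, 0]]
print(stripe_centers(img))  # column centers: 1.75, 1.5, 1.5
```

Each sub-pixel center, combined with the triangulation geometry of the laser and camera, yields one 3D profile point per column, which is what feeds the voxel model.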