• Title/Abstract/Keyword: 3D Environment Reconstruction

Deep Learning-based Multi-view Depth Estimation Methodology According to Contents' Characteristics (다 시점 영상 콘텐츠 특성에 따른 딥러닝 기반 깊이 추정 방법론)

  • Son, Hosung;Shin, Minjung;Kim, Joonsoo;Yun, Kug-jin;Cheong, Won-sik;Lee, Hyun-woo;Kang, Suk-ju
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.06a / pp.4-7 / 2022
  • Recently, multi-view depth estimation methods that use deep learning networks for 3D scene reconstruction have attracted considerable attention. Multi-view video contents have various characteristics depending on their camera composition, environment, and setting. Understanding these characteristics and applying the proper depth estimation method is important for high-quality 3D reconstruction. The camera setting is characterized by the physical distance, called the baseline, between camera viewpoints. The proposed methods focus on selecting appropriate depth estimation methodologies according to the characteristics of the multi-view video contents. Empirical results revealed limitations when existing multi-view depth estimation methods were applied to divergent or large-baseline datasets. Therefore, we verified the necessity of obtaining the proper number of source views and of applying a source-view selection algorithm suited to each dataset's capturing environment. In conclusion, when implementing a deep learning-based depth estimation network for 3D scene reconstruction, the results of this study can be used as a guideline for finding adaptive depth estimation methods.
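The baseline-aware source-view selection the abstract argues for can be illustrated with a minimal sketch. Everything below (`select_source_views`, the `max_baseline` cutoff, the camera positions) is hypothetical, not taken from the paper:

```python
import math

def select_source_views(ref_pos, cam_positions, k=2, max_baseline=None):
    """Pick the k source views whose baseline (distance between camera
    centers) to the reference view is smallest, optionally rejecting
    views whose baseline exceeds a maximum."""
    baselines = [math.dist(ref_pos, c) for c in cam_positions]
    order = sorted(range(len(cam_positions)), key=lambda i: baselines[i])
    if max_baseline is not None:
        order = [i for i in order if baselines[i] <= max_baseline]
    return order[:k]

# Reference camera at the origin, four candidate source cameras
cams = [(0.1, 0, 0), (2.0, 0, 0), (0, 0.2, 0), (5.0, 0, 0)]
print(select_source_views((0, 0, 0), cams, k=2))  # two nearest views
```

A large-baseline (divergent) rig would simply raise `max_baseline` or pick views by overlap rather than distance; the point is only that the selection rule should follow the capturing setup.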

3D Analysis of Scene and Light Environment Reconstruction for Image Synthesis (영상합성을 위한 3D 공간 해석 및 조명환경의 재구성)

  • Hwang, Yong-Ho;Hong, Hyun-Ki
    • Journal of Korea Game Society / v.6 no.2 / pp.45-50 / 2006
  • In order to generate a photo-realistic synthesized image, the light environment should be reconstructed through 3D analysis of the scene. This paper presents a novel method for identifying the positions and characteristics of the lights (global and local) in a real image, which are used to illuminate synthetic objects. First, we generate a High Dynamic Range (HDR) radiance map from omni-directional images taken by a digital camera with a fisheye lens. Then, the positions of the camera and light sources in the scene are identified automatically from the correspondences between images, without a priori camera calibration. Light sources are classified according to whether they illuminate the whole scene, and the 3D illumination environment is then reconstructed. Experimental results showed that the proposed method, combined with distributed ray tracing, achieves photo-realistic image synthesis. Animators and lighting experts in the film and animation industry are expected to benefit from it.
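The HDR radiance-map step can be sketched as a weighted merge of differently exposed images, in the spirit of Debevec-style HDR assembly. This is a simplified stand-in, assuming a linear sensor response; `merge_hdr`, the hat weighting, and the flat-list image layout are all assumptions, not the paper's pipeline:

```python
def merge_hdr(exposures, times, eps=1e-6):
    """Merge differently exposed images (flat lists of pixel values in
    [0, 1]) into one radiance map. Each exposure votes for the radiance
    estimate z / t, weighted by a hat function that trusts mid-tones and
    distrusts under/over-exposed pixels."""
    n_pix = len(exposures[0])
    radiance = []
    for p in range(n_pix):
        num = den = 0.0
        for img, t in zip(exposures, times):
            z = img[p]
            w = min(z, 1.0 - z)       # hat weight: peak at mid-gray
            num += w * (z / t)        # linear response assumed: E ~ z / t
            den += w
        radiance.append(num / (den + eps))
    return radiance

# One pixel seen at 0.2 with a 1 s exposure and 0.4 with a 2 s exposure
# should yield a radiance estimate near 0.2.
print(merge_hdr([[0.2], [0.4]], [1.0, 2.0]))
```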

A kinect-based parking assistance system

  • Bellone, Mauro;Pascali, Luca;Reina, Giulio
    • Advances in robotics research / v.1 no.2 / pp.127-140 / 2014
  • This work presents an IR-based system for parking assistance and obstacle detection in the automotive field that employs the Microsoft Kinect camera for fast 3D point cloud reconstruction. In contrast to previous research that attempts to explicitly identify obstacles, the proposed system aims to detect "reachable regions" of the environment, i.e., those regions the vehicle can drive to from its current position. A user-friendly 2D traversability grid of cells is generated and used as a visual aid for parking assistance. Given a raw 3D point cloud, each point is first mapped into an individual cell; then the elevation information is used within a graph-based algorithm to label each cell as traversable or non-traversable. Following this rationale, positive and negative obstacles, as well as unknown regions, can be implicitly detected. Additionally, no flat-world assumption is required. Experimental results obtained from the system in typical parking scenarios are presented, showing its effectiveness for scene interpretation and detection of several types of obstacles.
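The point-to-cell mapping and elevation-based labeling can be sketched as follows. The elevation-spread rule is a simplified stand-in for the paper's graph-based labeling, and `traversability_grid`, the cell size, and the rise threshold are hypothetical:

```python
def traversability_grid(points, cell=0.5, max_rise=0.15):
    """Map 3D points (x, y, z) into a 2D grid of cells keyed by integer
    (x, y) indices, then label each cell traversable (True) if the
    elevation spread inside it stays below max_rise metres."""
    cells = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))   # which 2D cell the point falls in
        cells.setdefault(key, []).append(z)
    return {k: (max(zs) - min(zs) <= max_rise) for k, zs in cells.items()}

cloud = [(0.1, 0.1, 0.0), (0.2, 0.3, 0.02),   # flat ground -> traversable
         (1.1, 0.1, 0.0), (1.2, 0.2, 0.5)]    # curb/obstacle -> blocked
print(traversability_grid(cloud))
```

Cells that receive no points at all simply never appear in the result, which is one way the "unknown regions" mentioned in the abstract fall out implicitly.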

3D Segmentation for High-Resolution Image Datasets Using a Commercial Editing Tool in the IoT Environment

  • Kwon, Koojoo;Shin, Byeong-Seok
    • Journal of Information Processing Systems / v.13 no.5 / pp.1126-1134 / 2017
  • A variety of medical service applications in the field of the Internet of Things (IoT) are being studied. Segmentation is important for identifying meaningful regions in images and is also required in 3D images. Previous methods have been based on gray value and shape. The Visible Korean dataset consists of serially sectioned high-resolution color images. Unlike computed tomography or magnetic resonance images, color images are difficult to segment automatically because object boundaries are much harder to detect in color than in grayscale images. Therefore, skilled anatomists usually segment color images manually or semi-automatically. We present an out-of-core 3D segmentation method for large-scale image datasets. Our method can segment significant regions in the coronal and sagittal planes, as well as the axial plane, to produce a 3D image. Our system verifies the result interactively with a multi-planar reconstruction view and a 3D view. The system can be used to train unskilled anatomists and medical students, and it also allows a skilled anatomist to segment an image remotely, since transferring such large amounts of data is difficult.
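The multi-planar reconstruction view mentioned above amounts to re-slicing one volume along three orthogonal planes. A minimal sketch, assuming the volume is stored as nested lists indexed `vol[z][y][x]` (`slice_volume` is a hypothetical name, not the paper's API):

```python
def slice_volume(vol, plane, index):
    """Extract a 2D slice from a volume vol[z][y][x]:
    'axial' fixes z, 'coronal' fixes y, 'sagittal' fixes x --
    the three views a multi-planar reconstruction display shows."""
    if plane == "axial":
        return [row[:] for row in vol[index]]
    if plane == "coronal":
        return [vol[z][index][:] for z in range(len(vol))]
    if plane == "sagittal":
        return [[vol[z][y][index] for y in range(len(vol[0]))]
                for z in range(len(vol))]
    raise ValueError(plane)

# Tiny 2x2x2 volume: axial slice 0 is just the first z-plane.
vol = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
print(slice_volume(vol, "coronal", 0))
```

An out-of-core system would read only the rows each slice needs from disk instead of holding `vol` in memory; the indexing logic stays the same.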

Adaptive Augmented Human Synthesis for Remote Environment Monitoring (원격지 환경 모니터링을 위한 적응형 증강 휴먼 합성 기법)

  • Choi, Seohyun;Jo, Dongsik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.427-428 / 2021
  • Recently, various augmented reality (AR) methods for monitoring situations in a remote factory have been proposed. A classic 2D manual is not intuitive to understand and offers limited support for interacting with the objects to be repaired in the remote factory. In this paper, we present a guide method for the remote environment based on a virtual human expressed in augmented reality. The remote factory environment is periodically captured through 3D reconstruction, and our virtual human is adaptively matched to the reconstructed 3D environment. Our method makes it possible to perform repairs quickly with the feeling of being accompanied by an expert, and it can provide a visualized troubleshooting guide in various remote environments such as factories, offices, medical facilities, and schools.

Essential Computer Vision Methods for Maximal Visual Quality of Experience on Augmented Reality

  • Heo, Suwoong;Song, Hyewon;Kim, Jinwoo;Nguyen, Anh-Duc;Lee, Sanghoon
    • Journal of International Society for Simulation Surgery / v.3 no.2 / pp.39-45 / 2016
  • Augmented reality is an environment that combines a real-world view with information drawn by computer. Since the image a user sees through an augmented reality device is a synthetic image composed of a real view and a virtual image, it is important that the computer-generated virtual image harmonizes well with the real-view image. In this paper, we review several computer vision and graphics methods that give the user a realistic augmented reality experience. To generate a visually harmonized synthetic image from a real and a virtual image, the computer must know the 3D geometry and environmental information such as lighting or material surface reflectivity, and many computer vision methods aim to estimate these. We introduce approaches for acquiring geometric information, the lighting environment, and material surface properties from monocular or multi-view images. We expect this paper to give readers an intuition for the computer vision methods that provide a realistic augmented reality experience.

Surgical Simulation Environment for Replacement of Artificial Knee Joint (CT 영상을 이용한 무릎관절 모의 치환 시술 환경)

  • Kim, Dong-Min
    • Journal of IKEEE / v.7 no.1 s.12 / pp.119-126 / 2003
  • This paper presents a methodology for constructing a surgical simulation environment for artificial knee joint replacement using CT image data. We provide a user interface for a preoperative planning system that supports complex 3D spatial manipulation and reasoning tasks. Simple manipulation with a joystick and mouse has proved both intuitive and accurate for assessing the fit and expected wear of the joint. The proposed methodology is useful for future virtual medical systems in which visualization, automated model generation, and surgical simulation are all integrated.

Development of underwater 3D shape measurement system with improved radiation tolerance

  • Kim, Taewon;Choi, Youngsoo;Ko, Yun-ho
    • Nuclear Engineering and Technology / v.53 no.4 / pp.1189-1198 / 2021
  • When performing remote tasks using robots in nuclear power plants, a 3D shape measurement system improves the efficiency of remote operations by easily identifying the current state of the target object (e.g., its size, shape, and distance). Nuclear power plants present high-radiation, underwater environments; therefore, the electronic parts that comprise 3D shape measurement systems are prone to degradation and cannot be used for long periods. In addition, given the refraction caused by the medium change in the underwater environment, optical design constraints and calibration methods for them are required. The present study proposes a method for developing an underwater 3D shape measurement system with improved radiation tolerance, composed of commercial electronic parts and a stereo camera, that can easily and readily correct for underwater refraction. To improve radiation tolerance, the number of parts exposed to the radiation environment was minimized to only the necessary components: a line-beam laser, a motor to rotate the laser, and a stereo camera. Because the signal-processing and control circuits of the camera are susceptible to radiation, the image sensor and lens were separated from the camera's main body. The prototype developed in the present study was made of commercial electronic parts, so its overall radiation tolerance could be improved at relatively low cost, and it was easy to manufacture because there are few optical design constraints.
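The refraction the abstract corrects for follows Snell's law at the flat air/water interface of a camera housing. A minimal sketch of the geometry (the function name and refractive indices are assumptions; real housings also involve the port glass and its thickness):

```python
import math

def refracted_angle(theta_air_deg, n_air=1.0, n_water=1.333):
    """Snell's law at a flat viewport: a ray meeting the air/water
    interface at theta_air bends to a smaller angle in water, which is
    why an in-air stereo calibration over-estimates angles underwater."""
    s = n_air / n_water * math.sin(math.radians(theta_air_deg))
    return math.degrees(math.asin(s))

# A 30-degree ray in air continues at roughly 22 degrees in water.
print(refracted_angle(30.0))
```

A practical calibration either models this refraction explicitly per ray or re-calibrates the stereo pair underwater so the distortion model absorbs it.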

Design and Implementation of a Real-time Region Pointing System using Arm-Pointing Gesture Interface in a 3D Environment

  • Han, Yun-Sang;Seo, Yung-Ho;Doo, Kyoung-Soo;Choi, Jong-Soo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.290-293 / 2009
  • In this paper, we propose a method to estimate the pointing region in the real world from camera images. In general, an arm-pointing gesture encodes a direction that extends from the user's fingertip to the target point. In the proposed work, we assume that the pointing ray can be approximated by a straight line passing through the user's face and fingertip. The proposed method therefore extracts two end points for estimating the pointing direction: one from the user's face region and another from the fingertip region. The pointing direction and its target region are then estimated from the 2D-3D projective mapping between the camera images and the real-world scene. To demonstrate an application of the proposed method, we constructed an ICGS (interactive cinema guiding system) that employs two CCD cameras and a monitor. The accuracy and robustness of the proposed method are verified through experiments on several real video sequences.
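Once the face and fingertip are located in 3D, the pointed-at spot is just the intersection of the face-to-fingertip ray with a target surface. A minimal sketch for a ground plane at z = 0 (`pointing_target` and the coordinates are illustrative, not the paper's implementation, which maps to the real scene via a 2D-3D projective mapping):

```python
def pointing_target(face, fingertip):
    """Extend the ray from the 3D face position through the 3D fingertip
    position until it hits the ground plane z = 0; return the (x, y)
    point being pointed at."""
    fx, fy, fz = face
    tx, ty, tz = fingertip
    dx, dy, dz = tx - fx, ty - fy, tz - fz   # ray direction
    if dz >= 0:
        raise ValueError("ray does not point toward the ground")
    s = -fz / dz                              # parameter where z reaches 0
    return (fx + s * dx, fy + s * dy)

# Eyes at 1.7 m, fingertip 0.3 m forward and 0.3 m lower:
print(pointing_target((0.0, 0.0, 1.7), (0.3, 0.0, 1.4)))
```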

Visual Sensor Design and Environment Modeling for Autonomous Mobile Welding Robots (자율 주행 용접 로봇을 위한 시각 센서 개발과 환경 모델링)

  • Kim, Min-Yeong;Jo, Hyeong-Seok;Kim, Jae-Hun
    • Journal of Institute of Control, Robotics and Systems / v.8 no.9 / pp.776-787 / 2002
  • Automation of the welding process in shipyards is ultimately necessary, since the welding site is spatially enclosed by floors and girders and welding operators are therefore exposed to hostile working conditions. To solve this problem, a mobile welding robot that can navigate autonomously within the enclosure has been developed. To achieve the welding task in the closed space, the robotic welding system needs a sensor system for recognizing the working environment and tracking the weld seam, together with a specially designed environment recognition strategy. In this paper, a three-dimensional laser vision system based on optical triangulation is developed to provide the robot with a 3D map of the work environment. Using this sensor system, a spatial filter based on neural network technology is designed for extracting the center of the laser stripe and is evaluated in various situations. An environment modeling algorithm is proposed and tested, composed of a laser scanning module for 3D voxel modeling and a plane reconstruction module for mobile robot localization. Finally, an environment recognition strategy is developed so that the welding mobile robot can recognize its work environment efficiently. The design of the sensor system, the algorithm for sensing the partially structured environment with plane segments, and the recognition strategy and tactics are described and discussed in detail with a series of experiments.
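The two low-level steps of such a laser vision sensor, finding the stripe center in each image row and converting its offset to depth by triangulation, can be sketched as follows. The intensity-centroid center finder is a simple alternative to the paper's neural-network spatial filter, and the pinhole-style depth formula with `focal_px`, `baseline_m`, and `pixel_offset` is an assumed simplified model:

```python
def stripe_center(row):
    """Sub-pixel laser-stripe center of one image row via the intensity
    centroid (a simple stand-in for a learned stripe filter)."""
    total = sum(row)
    if total == 0:
        return None            # no stripe visible in this row
    return sum(i * v for i, v in enumerate(row)) / total

def triangulate_depth(focal_px, baseline_m, pixel_offset):
    """Simplified optical triangulation: a stripe seen pixel_offset
    pixels from the principal point, with the laser a baseline away
    from the camera, gives depth = focal * baseline / offset."""
    if pixel_offset <= 0:
        raise ValueError("stripe must be offset from the principal point")
    return focal_px * baseline_m / pixel_offset

# A stripe profile peaking at pixel 2, then a depth from its offset.
print(stripe_center([0, 1, 3, 1, 0]))
print(triangulate_depth(800, 0.1, 40))
```

Scanning the stripe across the scene and stacking these per-row depths is what fills the 3D voxel map used for plane reconstruction and localization.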