• Title/Summary/Keyword: depth

Search results: 26,341

Assessment of Impaired Depth due to Fire of Mock-up Concrete with 40MPa Using Drying Method After Water Immersion (수중 침지 건조방법을 이용한 40MPa Mock-up부재의 화해피해 깊이진단)

  • Lim, Gun Su;Han, Soo Hwan;Baek, Seung Bok;Kim, Jong;Han, Min Cheol;Han, Cheon Goo
    • Proceedings of the Korean Institute of Building Construction Conference
    • /
    • 2021.05a
    • /
    • pp.245-246
    • /
    • 2021
  • In this study, we develop a diagnostic technique for the damage depth of fire-damaged concrete and propose a method for assessing the impaired depth due to fire by drying the damaged concrete after water immersion. Test results indicated that, when the impaired depth due to fire was assessed with the drying method after water immersion, the impaired depth was clearly identifiable; furthermore, an additional 15 mm of damage depth was identified that could not be detected when the phenolphthalein method was applied.

An Efficient Monocular Depth Prediction Network Using Coordinate Attention and Feature Fusion

  • Xu, Huihui;Li, Fei
    • Journal of Information Processing Systems
    • /
    • v.18 no.6
    • /
    • pp.794-802
    • /
    • 2022
  • The recovery of reasonable depth information from different scenes is a popular topic in the field of computer vision. For generating depth maps with better details, we present an efficacious monocular depth prediction framework with coordinate attention and feature fusion. Specifically, the proposed framework contains attention, multi-scale and feature fusion modules. The attention module improves features based on coordinate attention to enhance the predicted effect, whereas the multi-scale module integrates useful low- and high-level contextual features with higher resolution. Moreover, we developed a feature fusion module to combine the heterogeneous features to generate high-quality depth outputs. We also designed a hybrid loss function that measures prediction errors from the perspective of depth and scale-invariant gradients, which contribute to preserving rich details. We conducted the experiments on public RGBD datasets, and the evaluation results show that the proposed scheme can considerably enhance the accuracy of depth prediction, achieving 0.051 for log10 and 0.992 for δ < 1.25³ on the NYUv2 dataset.
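
The log10 error and δ-threshold accuracies quoted above are the standard evaluation metrics for monocular depth prediction. As a point of reference, the following minimal NumPy sketch (array names, shapes, and the toy data are illustrative, not taken from the paper) shows how they are commonly computed from a predicted and a ground-truth depth map:

```python
import numpy as np

def depth_metrics(pred, gt, eps=1e-8):
    """Mean log10 error and delta-threshold accuracies (delta < 1.25^k)."""
    pred = np.clip(pred, eps, None)
    gt = np.clip(gt, eps, None)
    log10_err = np.mean(np.abs(np.log10(pred) - np.log10(gt)))
    ratio = np.maximum(pred / gt, gt / pred)
    deltas = {f"delta<1.25^{k}": float(np.mean(ratio < 1.25 ** k)) for k in (1, 2, 3)}
    return {"log10": float(log10_err), **deltas}

# toy example; in practice pred comes from the network and gt from NYUv2
pred = np.random.uniform(0.5, 10.0, (480, 640))
gt = np.random.uniform(0.5, 10.0, (480, 640))
print(depth_metrics(pred, gt))
```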

Stereoscopic Effect of 3D images according to the Quality of the Depth Map and the Change in the Depth of a Subject (깊이맵의 상세도와 주피사체의 깊이 변화에 따른 3D 이미지의 입체효과)

  • Lee, Won-Jae;Choi, Yoo-Joo;Lee, Ju-Hwan
    • Science of Emotion and Sensibility
    • /
    • v.16 no.1
    • /
    • pp.29-42
    • /
    • 2013
  • In this paper, we analyze the effects on depth perception, volume perception, and visual discomfort of changes in the quality of the depth map and in the depth of the major object. For the analysis, a 2D image was converted into eighteen 3D images using depth maps generated from different depth positions of the major object and the background, each represented at three levels of detail. A subjective test was carried out with the eighteen 3D images to investigate the degrees of depth perception, volume perception, and visual discomfort reported by the subjects according to the change in the depth position of the major object and the quality of the depth map. The absolute depth position of the major object and the relative depth difference between the background and the major object were each adjusted in three levels, and the detail of the depth map was also represented in three levels. Experimental results showed that the quality of the depth map affected depth perception, volume perception, and visual discomfort differently according to the absolute and relative depth position of the major object. The cardboard depth map severely degraded volume perception regardless of the depth position of the major object; in particular, depth perception deteriorated more severely with the cardboard depth map when the major object was located behind the screen than in front of it. Furthermore, the subjects did not perceive any difference in depth perception, volume perception, or visual comfort between the 3D images generated with the detailed depth map and those generated with the rough depth map. As a result, we conclude that an excessively detailed depth map is not necessary to enhance stereoscopic perception in 2D-to-3D conversion.

Region-Based Error Concealment of Depth Map in Multiview Video (영역 구분을 통한 다시점 영상의 깊이맵 손상 복구 기법)

  • Kim, Wooyeun;Shin, Jitae;Oh, Byung Tae
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.40 no.12
    • /
    • pp.2530-2538
    • /
    • 2015
  • In a depth image, the pixel value represents depth, so different objects located at similar distances have similar pixel values. Moreover, pixels in a depth image can differ sharply from their neighbours, whereas adjacent pixels in a color image tend to have very similar values. Accordingly, a distorted depth image in multiview video plus depth (MVD) requires error concealment methods that take these characteristics of depth images into account when transmission errors occur. In this paper, we propose classifying the regions of a depth image according to edge direction and then applying an adaptive error concealment method to each region. The recovered depth images are used together with the multiview video data to synthesize an intermediate-viewpoint video, and the synthesized view is evaluated with objective quality metrics to demonstrate the performance of the proposed method.
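
As a rough illustration of region-adaptive concealment (the classification and filling rules below are simplified stand-ins, not the paper's exact algorithm), a lost block can be labelled as a smooth or an edge region from the depth gradients of its received surroundings and then filled accordingly:

```python
import numpy as np
import cv2

def conceal_depth_blocks(depth8, lost_mask, block=16, edge_thresh=30.0):
    # depth8: uint8 depth map with corrupted blocks; lost_mask: 255 where lost
    out = depth8.copy()
    gx = cv2.Sobel(depth8, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(depth8, cv2.CV_32F, 0, 1)
    grad = np.sqrt(gx * gx + gy * gy)
    h, w = depth8.shape
    edge_mask = np.zeros_like(lost_mask)

    for y in range(0, h, block):
        for x in range(0, w, block):
            blk = (slice(y, min(y + block, h)), slice(x, min(x + block, w)))
            if not lost_mask[blk].any():
                continue
            # inspect a one-block margin of received pixels around the lost block
            nb = (slice(max(y - block, 0), min(y + 2 * block, h)),
                  slice(max(x - block, 0), min(x + 2 * block, w)))
            valid = lost_mask[nb] == 0
            if valid.any() and grad[nb][valid].max() > edge_thresh:
                edge_mask[blk] = 255                           # edge region
            elif valid.any():
                out[blk] = int(np.median(depth8[nb][valid]))   # smooth region: flat fill

    if edge_mask.any():
        # edge regions: edge-preserving inpainting instead of a flat fill
        out = cv2.inpaint(out, edge_mask, 3, cv2.INPAINT_NS)
    return out
```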

Intermediate View Synthesis Method using Kinect Depth Camera (Kinect 깊이 카메라를 이용한 가상시점 영상생성 기술)

  • Lee, Sang-Beom;Ho, Yo-Sung
    • Smart Media Journal
    • /
    • v.1 no.3
    • /
    • pp.29-35
    • /
    • 2012
  • Depth image-based rendering (DIBR) is a technique for rendering virtual views from a color image and its corresponding depth map. The most important issue in DIBR is that the virtual view has no information in newly exposed areas, the so-called dis-occlusions. In this paper, we propose an intermediate view generation algorithm using the Kinect depth camera, which relies on infrared structured light. After capturing a color image and its corresponding depth map, we pre-process the depth map. The pre-processed depth map is warped to the virtual viewpoint and median-filtered to reduce truncation errors, and the color image is then back-projected to the virtual viewpoint using the warped depth map. To fill the remaining holes caused by dis-occlusions, we perform a background-based image in-painting operation and finally obtain a synthesized image without dis-occlusions. Experimental results show that the proposed algorithm generates very natural images in real time.
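
For readers unfamiliar with DIBR, the sketch below illustrates its forward-warping core for a purely horizontal camera shift: depth is converted to a per-pixel disparity, pixels are painted into the virtual view far-to-near (a simple z-buffer), and remaining dis-occlusion holes are crudely filled from a horizontal neighbour. Parameter values and the hole-filling rule are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def dibr_horizontal_shift(color, depth, baseline=0.05, focal=525.0):
    # color: HxWx3 uint8, depth: HxW metric depth in metres (illustrative defaults)
    h, w = depth.shape
    z = np.maximum(depth, 1e-3)
    disparity = np.round(baseline * focal / z).astype(np.int32)  # pixel shift

    virt = np.zeros_like(color)
    filled = np.zeros((h, w), dtype=bool)
    ys, xs = np.mgrid[0:h, 0:w]
    xt = xs + disparity
    ok = (xt >= 0) & (xt < w)

    # paint far pixels first so nearer pixels overwrite them (z-buffering)
    order = np.argsort(-z[ok])
    sy, sx, tx = ys[ok][order], xs[ok][order], xt[ok][order]
    virt[sy, tx] = color[sy, sx]
    filled[sy, tx] = True

    # crude dis-occlusion filling: copy the nearest valid horizontal neighbour
    for y, x in zip(*np.nonzero(~filled)):
        if x > 0 and filled[y, x - 1]:
            virt[y, x] = virt[y, x - 1]
        elif x < w - 1 and filled[y, x + 1]:
            virt[y, x] = virt[y, x + 1]
    return virt
```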

Decision of Interface and Depth Scale Calibration of Multilayer Films by SIMS Depth Profiling

  • Hwang, Hye-Hyun;Jang, Jong-Shik;Kang, Hee-Jae;Kim, Kyung-Joong
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2012.02a
    • /
    • pp.274-274
    • /
    • 2012
  • In-depth analysis by secondary ion mass spectrometry (SIMS) is very important for the development of electronic devices using multilayered structures, because the quantity and depth distribution of certain elements are critical to the electronic properties. Correct determination of the interface locations is essential for calibrating the depth scale in SIMS depth profiling of multilayer films; however, the apparent interface locations are shifted from the real ones by several effects caused by sputtering with energetic ions. In this study, the determination of interface locations in SIMS depth profiling of multilayer films was investigated using Si/Ge and Ti/Si multilayer systems. The original SIMS depth profiles were converted into compositional depth profiles using relative sensitivity factors (RSF) derived from the atomic compositions of Si-Ge and Si-Ti alloy reference films determined by Rutherford backscattering spectroscopy. The thicknesses of the Si/Ge and Ti/Si multilayer films measured by SIMS depth profiling with ion beams of various impact energies were compared with those measured by TEM. Two methods were examined for determining the interface locations: the 50 atomic % definition in the compositional depth profile, and the distribution of SiGe and SiTi dimer ions. The study showed that the layer thicknesses measured with low-energy oxygen and Cs ion beams, using the 50 atomic % definition, correlated well with the real thicknesses determined by TEM.
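
The 50 atomic % interface definition mentioned above can be illustrated with a small numerical sketch: raw intensity profiles are first converted to atomic fractions with a relative sensitivity factor (RSF), and the interface is then taken as the sputter depth at which the fraction crosses 50 at.%. The profile shapes, RSF value, and function names below are purely illustrative:

```python
import numpy as np

def composition_from_sims(i_a, i_b, rsf_b_over_a=1.0):
    """Atomic fraction of element A from two raw SIMS intensity profiles,
    using a relative sensitivity factor to put B on A's intensity scale."""
    i_a = np.asarray(i_a, dtype=float)
    i_b = np.asarray(i_b, dtype=float) * rsf_b_over_a
    return i_a / (i_a + i_b)

def interface_depths(depth, x_a, threshold=0.5):
    """Depths where the A fraction crosses the 50 at.% level,
    linearly interpolated between bracketing measurement points."""
    depth = np.asarray(depth, dtype=float)
    x_a = np.asarray(x_a, dtype=float)
    above = x_a > threshold
    idx = np.nonzero(above[1:] != above[:-1])[0]
    t = (threshold - x_a[idx]) / (x_a[idx + 1] - x_a[idx])
    return depth[idx] + t * (depth[idx + 1] - depth[idx])

# toy Si-on-Ge profile, sputter depth in nm
depth = np.linspace(0, 100, 201)
i_si = 1e5 * (depth < 40) + 1e3          # step-like Si signal
i_ge = 1e5 * (depth >= 40) + 1e3         # complementary Ge signal
x_si = composition_from_sims(i_si, i_ge, rsf_b_over_a=1.2)
print(interface_depths(depth, x_si))     # interface near 40 nm
```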

The Enhancement of the Boundary-Based Depth Image (경계 기반의 깊이 영상 개선)

  • Ahn, Yang-Keun;Hong, Ji-Man
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.4
    • /
    • pp.51-58
    • /
    • 2012
  • Recently, 3D technology based on depth images has been widely used in various fields including 3D space recognition, image acquisition, interaction, and games. Depth cameras are used to produce depth images, and various efforts are made to improve the quality of these images. In this paper, we suggest using an area-based Canny edge detector to improve depth images when applying 3D technology based on a depth camera. The proposed method provides an improved depth image through pre-processing and post-processing, correcting the image quality degradation that may occur when acquiring depth images in a constrained environment. For objective image quality evaluation, we applied the improved depth image to virtual view reference software and confirmed that the result is improved by up to 0.42 dB. In addition, the effectiveness of the improved depth image was confirmed through subjective quality evaluation using the DSCQS (Double Stimulus Continuous Quality Scale) method.
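
A minimal sketch of the boundary-aware idea, assuming a registered colour image and a single-channel depth map (the OpenCV calls are real, but the thresholds and the exact pre-/post-processing are illustrative simplifications rather than the paper's pipeline):

```python
import numpy as np
import cv2

def refine_depth_with_edges(depth, color, low=50, high=150, ksize=5):
    """Median-filter the depth map to suppress noise, but keep the original
    depth values in a thin band around Canny edges of the colour image so
    that object boundaries are not smeared."""
    gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low, high)
    band = cv2.dilate(edges, np.ones((3, 3), np.uint8), iterations=1)
    smoothed = cv2.medianBlur(depth, ksize)   # depth: single-channel uint8/uint16
    return np.where(band > 0, depth, smoothed)
```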

A Study on the Optimum Range of Space Depth for Hospital Architecture Planning Focused on System (체계중심병원건축계획을 위한 공간깊이의 적정범위에 관한 연구)

  • Kim, Eun Seok;Yang, Nae Won
    • Journal of The Korea Institute of Healthcare Architecture
    • /
    • v.22 no.4
    • /
    • pp.47-55
    • /
    • 2016
  • Purpose: Growth and change are the most important considerations in hospital architecture planning; in particular, the countless changes that take place after a hospital opens must be accommodated at the planning stage. The space depth of a hospital plays a crucial role in accepting these changes. The purpose of this study is to provide basic data for planning space depth in preparation for change, based on a chronological analysis of how space depth in hospital architecture has changed. Methods: The study analyzes the change in space depth in 19 hospitals, from the 1980s, a period of quantitative growth, until recently, focusing in particular on the maximum and minimum space depth as the medical environment has changed. Based on this, the study suggests a form of space depth and an optimum range of space depth that respond to the growth and change of hospital architecture. Results: Considering these findings, a double-linear system is most appropriate for the space depth of system-oriented hospital architecture planning. The optimal range of space depth is at least 21.6 m for clinic rooms and from 27 m to 37 m for examination and treatment rooms. Implications: Space depth is a key element in determining the system in system-oriented hospital architecture planning, and the results of this paper can serve as data for planning hospital architecture systems that cope with change.

3D Depth Estimation by a Single Camera (단일 카메라를 이용한 3D 깊이 추정 방법)

  • Kim, Seunggi;Ko, Young Min;Bae, Chulkyun;Kim, Dae Jin
    • Journal of Broadcast Engineering
    • /
    • v.24 no.2
    • /
    • pp.281-291
    • /
    • 2019
  • Depth from defocus estimates 3D depth by exploiting the phenomenon that an object in the camera's focal plane forms a sharp image while an object away from the focal plane produces a blurred one. In this paper, we study algorithms that estimate 3D depth by analyzing the degree of blur in images taken with a single camera. The optimal object range was obtained from depth-from-defocus estimation using either one image from a single camera or two images of the same scene captured with different focus settings. For depth estimation using one image, the best performance was achieved with a focal length of 250 mm for both smartphone and DSLR cameras. Depth estimation using two images showed the best 3D depth estimation range when the focal lengths were set to 150 mm and 250 mm for smartphone camera images and 200 mm and 300 mm for DSLR camera images.
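
The single-image case rests on the thin-lens relation between defocus blur and distance: for a lens of focal length f and f-number N focused at distance s_f, an object at distance s produces a blur circle whose diameter grows with |s - s_f|, so measuring the blur lets one solve for s. A small worked example with illustrative numbers (not the paper's camera settings):

```python
def blur_circle_diameter(s, s_f, f, N):
    """Thin-lens blur-circle (circle of confusion) diameter on the sensor for
    an object at distance s, with the lens focused at distance s_f.
    All lengths in millimetres; depth from defocus inverts this relation."""
    aperture = f / N
    return aperture * f * abs(s - s_f) / (s * (s_f - f))

# 50 mm f/2 lens focused at 1 m (illustrative setup)
for s in (500.0, 1000.0, 2000.0, 4000.0):
    print(f"object at {s / 1000:.1f} m -> blur {blur_circle_diameter(s, 1000.0, 50.0, 2.0):.3f} mm")
```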

Deep Learning-based Depth Map Estimation: A Review

  • Jan, Abdullah;Khan, Safran;Seo, Suyoung
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.1
    • /
    • pp.1-21
    • /
    • 2023
  • In this technically advanced era, we are surrounded by smartphones, computers, and cameras, which help us store visual information in 2D image planes. However, such images lack the 3D spatial information about the scene that is very useful for scientists, surveyors, engineers, and even robots. To tackle this problem, depth maps are generated for the respective image planes. A depth map, or depth image, is a single-image representation that carries information along the three spatial axes, i.e., xyz coordinates, where z is the object's distance from the camera axis. Depth estimation is a fundamental task for many applications, including augmented reality, object tracking, segmentation, scene reconstruction, distance measurement, autonomous navigation, and autonomous driving, and much work has been done on computing depth maps. We review the status of depth map estimation across different techniques, study areas, and models applied over the last 20 years, surveying depth-mapping approaches based on both traditional methods and newly developed deep-learning methods. The primary purpose of this study is to present a detailed review of state-of-the-art traditional depth mapping techniques and recent deep learning methodologies. The study covers the critical points of each method from different perspectives, such as datasets, procedures, types of algorithms, loss functions, and well-known evaluation metrics, and also discusses the subdomains within each method, such as supervised, unsupervised, and semi-supervised approaches, along with the challenges of the different methods. We conclude by discussing ideas for future research on depth map estimation.
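
Among the loss functions such a review typically covers, one of the most widely used in supervised monocular depth estimation is the scale-invariant log loss of Eigen et al. (2014). A minimal NumPy sketch, assuming dense predicted and ground-truth depth arrays:

```python
import numpy as np

def scale_invariant_log_loss(pred, gt, lam=0.5, eps=1e-8):
    """Scale-invariant log loss (Eigen et al., 2014):
        d_i = log(pred_i) - log(gt_i)
        L   = mean(d^2) - lam * mean(d)^2
    lam = 0 reduces to an L2 loss in log space; lam = 1 is fully scale-invariant."""
    d = np.log(np.clip(pred, eps, None)) - np.log(np.clip(gt, eps, None))
    return float(np.mean(d ** 2) - lam * np.mean(d) ** 2)
```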