• Title/Summary/Keyword: data depth

Human Action Recognition Using Deep Data: A Fine-Grained Study

  • Rao, D. Surendra;Potturu, Sudharsana Rao;Bhagyaraju, V
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.6
    • /
    • pp.97-108
    • /
    • 2022
  • Video-based human action recognition [1] is one of the most active fields in computer vision research. Since the depth data [2] obtained by Kinect cameras offer advantages over traditional RGB data, the availability of the Kinect camera has led to a recent increase in research on human action recognition. In this article, we conducted a systematic study of strategies for recognizing human activity from depth data. All methods are grouped into depth-map-based and skeleton-based approaches, and a comparison with some of the more traditional strategies is also covered. We then examined the specifics of different depth action databases and drew clear distinctions between them. Finally, we discuss the advantages and disadvantages of depth-based and skeleton-based techniques.

Bathymetric mapping in Dong-Sha Atoll using SPOT data

  • Huang, Shih-Jen;Wen, Yao-Chung
    • Proceedings of the KSRS Conference
    • /
    • v.2
    • /
    • pp.525-528
    • /
    • 2006
  • Remote sensing data can be used to calculate water depth, especially in clear and shallow water areas. In this study, SPOT data were used for bathymetric mapping of Dong-Sha Atoll, located in the northern South China Sea. In situ sea depths were collected with an echo sounder, and a global positioning system was employed to locate the sampling points accurately. An empirical model relating measured sea depth to band digital count was determined by least-squares regression analysis (see the sketch below). Both a non-classification approach and unsupervised classification were used in this study. The results show that the standard error is less than 0.9 m for the non-classification approach, and a relative error within 10% of the measured water depth is satisfied for more than 85% of the in situ data points. When supervised classification is applied, the 10% relative-error criterion is met for more than 97%, 69%, and 51% of the data points in classes 4, 5, and 6, respectively. We also find that the unsupervised classification estimates water depth more accurately, with standard errors of less than 0.63, 0.93, and 0.68 m for classes 4, 5, and 6, respectively.
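
The empirical depth-retrieval step described in this abstract can be sketched as a least-squares fit between band digital counts and echo-sounder depths. The arrays, the log-linear model form, and the single-band choice below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical in situ data: echo-sounder depths (m) and SPOT band digital counts
depth_m = np.array([2.1, 3.5, 5.0, 7.2, 9.8, 12.4])
dn_band = np.array([180.0, 150.0, 120.0, 95.0, 70.0, 52.0])

# Log-linear empirical model: depth ~ a + b * ln(DN), fitted by least squares
X = np.column_stack([np.ones_like(dn_band), np.log(dn_band)])
coeffs, *_ = np.linalg.lstsq(X, depth_m, rcond=None)
a, b = coeffs

predicted = a + b * np.log(dn_band)
std_err = np.sqrt(np.mean((predicted - depth_m) ** 2))
print(f"a={a:.2f}, b={b:.2f}, standard error={std_err:.2f} m")
```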

Validation of the semi-analytical algorithm for estimating vertical underwater visibility using MODIS data in the waters around Korea

  • Kim, Sun-Hwa;Yang, Chan-Su;Ouchi, Kazuo
    • Korean Journal of Remote Sensing
    • /
    • v.29 no.6
    • /
    • pp.601-610
    • /
    • 2013
  • As a standard water-clarity variable, the vertical underwater visibility, called the Secchi depth, can be estimated from ocean color satellite data. In the present study, Moderate Resolution Imaging Spectroradiometer (MODIS) data are used to estimate the Secchi depth, a useful indicator of ocean transparency for assessing water quality and productivity. To estimate the Secchi depth $Z_v$, empirical regression models have typically been developed from satellite optical data and in-situ data. In a previous study, a semi-analytical algorithm for estimating $Z_v$ was developed and validated for Case 1 and Case 2 waters, in both coastal and oceanic regions, using extensive sets of satellite and in-situ data. The algorithm uses the vertical diffuse attenuation coefficient $K_d$ ($m^{-1}$) and the beam attenuation coefficient c ($m^{-1}$) obtained from satellite ocean color data to estimate $Z_v$ (see the illustrative relation below). In this study, the semi-analytical algorithm is validated using time series of MODIS data and in-situ data over the Yellow Sea, the Southern Sea, and the East Sea, covering both Case 1 and Case 2 waters. Using a total of 156 matched data points, the MODIS $Z_v$ estimates showed an RMSE of about 3.6 m and a bias of 1.7 m. The $Z_v$ values for the East Sea and the Southern Sea showed higher RMSE than those for the Yellow Sea. Although the semi-analytical algorithm uses a fixed coupling constant (= 6.0) to convert the Inherent Optical Properties (IOP) and Apparent Optical Properties (AOP) to Secchi depth, different coupling constants are needed for different sea types and water depths for optimum estimation of $Z_v$.
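
A commonly cited semi-analytical form relates the Secchi depth to the two attenuation coefficients named in the abstract through a coupling constant; the exact parameterization of the validated algorithm may differ, so this is only an illustrative form using the abstract's value $\Gamma = 6.0$:

$$Z_v \approx \frac{\Gamma}{K_d + c}, \qquad \Gamma = 6.0,$$

where $K_d$ ($m^{-1}$) is the vertical diffuse attenuation coefficient and c ($m^{-1}$) is the beam attenuation coefficient retrieved from the ocean color data.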

Classification of Side Somatotype of the Trunk by Analysing Photographic Data (사진자료에 의한 여성 상반신 측면체형 분류)

  • Jung, Myong-Seok
    • Korean Journal of Human Ecology
    • /
    • v.12 no.5
    • /
    • pp.767-776
    • /
    • 2003
  • The purpose of this study was to classify side somatotypes of the trunk by analysing photographic data, and then to examine their distribution across age groups. The subjects were 315 females aged 18 to 49, and thirty-one photographic measurements were taken for each subject. The factors affecting the side somatotype of the trunk, obtained by principal component analysis, were vertical size, posterior/anterior depth, and neck posture. The side somatotypes of the trunk were classified into four types, and their differences were shown by analysing the photographic data; the side silhouettes of the four types were compared with a balanced type. By providing a canonical discriminant function with unstandardized canonical coefficients, an individual's trunk somatotype could be discriminated from the photographic measurements of anterior neck height, anterior waist height, posterior waist depth, buttock height, and anterior depth at the level of back protrusion (see the sketch below). The frequency distribution of the side somatotypes of the trunk across age groups could be applied to clothing construction and to setting production rates in clothing manufacture.
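
As an illustration of how a canonical discriminant function of this kind is applied, the sketch below scores a subject from the five photographic measurements named in the abstract; the coefficient values, intercept, and measurement values are hypothetical placeholders, not the paper's fitted function:

```python
# Hypothetical unstandardized canonical discriminant coefficients for the five
# photographic measurements named in the abstract (all values are placeholders).
coefficients = {
    "anterior_neck_height": 0.042,
    "anterior_waist_height": -0.031,
    "posterior_waist_depth": 0.118,
    "buttock_height": -0.027,
    "anterior_depth_at_back_protrusion": 0.095,
}
constant = -4.8  # hypothetical intercept of the discriminant function

def discriminant_score(measurements_cm: dict) -> float:
    """Linear combination of the measurements with unstandardized coefficients."""
    return constant + sum(coefficients[k] * v for k, v in measurements_cm.items())

subject = {
    "anterior_neck_height": 138.5,
    "anterior_waist_height": 98.2,
    "posterior_waist_depth": 3.4,
    "buttock_height": 78.9,
    "anterior_depth_at_back_protrusion": 2.1,
}
print(f"discriminant score: {discriminant_score(subject):.2f}")
```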

A Simulation Study on Regularization Method for Generating Non-Destructive Depth Profiles from Angle-Resolved XPS Data

  • Ro, Chul-Un
    • Analytical Science and Technology
    • /
    • v.8 no.4
    • /
    • pp.707-714
    • /
    • 1995
  • Two types of regularization method (singular-system and HMP approaches) for generating depth-concentration profiles from angle-resolved XPS data were evaluated. Both approaches gave qualitatively similar results, although they employ different numerical algorithms. Application of the regularization method to simulated data demonstrates its utility for complex depth-profile systems, including stable restoration of depth-concentration profiles from data with considerable random error and automatic selection of the smoothing parameter, which is imperative for successful application of the regularization method. The smoothing parameter is chosen by the generalized cross-validation method, which lets the data themselves select the optimal value of the parameter (a generic sketch of this idea follows below).
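
The abstract describes regularized inversion with the smoothing parameter chosen by generalized cross-validation (GCV). The sketch below shows that idea in a generic Tikhonov form for a discretized angle-resolved XPS forward model; the exponential kernel, attenuation length, depth grid, and parameter grid are assumptions and do not reproduce the paper's singular-system or HMP implementations:

```python
import numpy as np

# Hypothetical discretized forward model: intensities at several emission angles
# from a layered depth-concentration profile, y = K @ x + noise.
rng = np.random.default_rng(0)
depths = np.linspace(0.0, 10.0, 50)           # depth grid (nm)
angles = np.deg2rad(np.arange(10, 81, 10))    # emission angles
lam = 3.0                                     # assumed attenuation length (nm)
K = np.array([np.exp(-depths / (lam * np.cos(a))) for a in angles])
x_true = np.exp(-((depths - 4.0) ** 2) / 2.0) # assumed true profile
y = K @ x_true + rng.normal(0, 0.02, K.shape[0])

I = np.eye(K.shape[1])

def gcv(alpha: float) -> float:
    """Generalized cross-validation score for Tikhonov parameter alpha."""
    A = K.T @ K + alpha * I
    H = K @ np.linalg.solve(A, K.T)           # influence (hat) matrix
    resid = (np.eye(len(y)) - H) @ y
    return (resid @ resid) / (np.trace(np.eye(len(y)) - H) ** 2)

alphas = np.logspace(-6, 1, 30)
best = min(alphas, key=gcv)                   # data-driven smoothing parameter
x_est = np.linalg.solve(K.T @ K + best * I, K.T @ y)
print(f"GCV-selected smoothing parameter: {best:.2e}")
```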

Real-time Multiple Stereo Image Synthesis using Depth Information (깊이 정보를 이용한 실시간 다시점 스테레오 영상 합성)

  • Jang Se hoon;Han Chung shin;Bae Jin woo;Yoo Ji sang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.4C
    • /
    • pp.239-246
    • /
    • 2005
  • In this paper, we generate a virtual right image corresponding to an input left image by using the given RGB texture data and an 8-bit gray-scale depth map. We first transform the depth data into disparity data and then produce the virtual right image from this disparity (a minimal sketch of this step follows below). We also propose a stereo image synthesis algorithm that adapts to the viewer's position, together with a real-time processing scheme based on a fast look-up table (LUT) method. Finally, we could synthesize a total of eleven stereo images with different viewpoints in real time from an SD-quality texture image with 8-bit depth information.
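
A minimal sketch of the depth-to-disparity look-up table and pixel-shift warping described in the abstract, assuming an 8-bit depth map and a hypothetical maximum disparity; the paper's exact disparity mapping, viewer-position adaptation, and hole filling are not reproduced:

```python
import numpy as np

MAX_DISPARITY = 32  # hypothetical maximum disparity in pixels

# 256-entry LUT mapping an 8-bit depth value to an integer disparity
# (assuming larger 8-bit values mean nearer objects, which shift more).
lut = np.round(np.arange(256) / 255.0 * MAX_DISPARITY).astype(np.int32)

def synthesize_right_view(left_rgb: np.ndarray, depth8: np.ndarray) -> np.ndarray:
    """Shift each left-view pixel by its LUT disparity to form a virtual right view."""
    h, w, _ = left_rgb.shape
    right = np.zeros_like(left_rgb)
    disparity = lut[depth8]                    # per-pixel disparity via the LUT
    for y in range(h):
        for x in range(w):
            xr = x - disparity[y, x]           # right view: pixels shift left
            if 0 <= xr < w:
                right[y, xr] = left_rgb[y, x]
    return right                               # holes remain where nothing mapped

# Tiny synthetic example
left = np.random.randint(0, 256, (4, 8, 3), dtype=np.uint8)
depth = np.random.randint(0, 256, (4, 8), dtype=np.uint8)
print(synthesize_right_view(left, depth).shape)
```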

H.264 Encoding Technique of Multi-view Image expressed by Layered Depth Image (계층적 깊이 영상으로 표현된 다시점 영상에 대한 H.264 부호화 기술)

  • Kim, Min-Tae;Jee, Inn-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.10 no.1
    • /
    • pp.81-90
    • /
    • 2010
  • This paper presents H.264 coding schemes for multi-view video using the concept of the layered depth image (LDI) representation, together with an efficient compression technique for LDI (a generic sketch of this structure is given below). After converting the multi-view data to the proposed representation, we encode the color, depth, and auxiliary data describing the hierarchical structure, respectively. Two kinds of preprocessing approaches are proposed for the multiple color and depth components. To compress the auxiliary data, we employ a near-lossless coding method. Finally, we successfully reconstruct the original viewpoints from the decoded data, showing that the proposed approach is useful for handling multiple color and depth data simultaneously.
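
For reference, the generic layered depth image concept mentioned in the abstract stores several (color, depth) samples per pixel, with the per-pixel layer count acting as auxiliary structural information. The sketch below is this generic structure only, not the paper's exact representation or its H.264 packing:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LDIPixel:
    """All surface samples seen along one ray, front-most layer first."""
    colors: List[Tuple[int, int, int]] = field(default_factory=list)
    depths: List[float] = field(default_factory=list)

class LayeredDepthImage:
    def __init__(self, width: int, height: int):
        self.width, self.height = width, height
        self.pixels = [[LDIPixel() for _ in range(width)] for _ in range(height)]

    def add_layer(self, x: int, y: int, color: Tuple[int, int, int], depth: float) -> None:
        """Insert a sample, keeping layers ordered from near to far."""
        p = self.pixels[y][x]
        i = sum(d < depth for d in p.depths)   # insertion index by depth
        p.colors.insert(i, color)
        p.depths.insert(i, depth)

    def number_of_layers(self, x: int, y: int) -> int:
        """Per-pixel layer count: the kind of auxiliary data an encoder must carry."""
        return len(self.pixels[y][x].depths)

ldi = LayeredDepthImage(width=4, height=3)
ldi.add_layer(1, 2, (255, 0, 0), depth=2.5)
ldi.add_layer(1, 2, (0, 255, 0), depth=1.0)   # nearer layer is stored first
print(ldi.number_of_layers(1, 2))  # 2
```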

Rainfall Recognition from Road Surveillance Videos Using TSN (TSN을 이용한 도로 감시 카메라 영상의 강우량 인식 방법)

  • Li, Zhun;Hyeon, Jonghwan;Choi, Ho-Jin
    • Journal of Korean Society for Atmospheric Environment
    • /
    • v.34 no.5
    • /
    • pp.735-747
    • /
    • 2018
  • Rainfall depth is an important piece of meteorological information. In general, rainfall data with high spatial resolution, such as road-level rainfall data, are more beneficial, but it is expensive to set up enough Automatic Weather Systems to obtain road-level measurements. In this paper, we propose to use deep learning to recognize rainfall depth from road surveillance videos. To achieve this goal, we collect a new video dataset and propose a procedure to calculate refined rainfall depth from the original meteorological data. We also propose to utilize the differential frame as well as the optical flow image for better recognition of rainfall depth (a minimal sketch of the differential-frame computation follows below). Under the Temporal Segment Networks (TSN) framework, the experimental results show that the combination of the video frame and the differential frame is the superior input for rainfall depth recognition. The final model achieves high performance on the single-location, low-sensitivity classification task and reasonable accuracy on the higher-sensitivity classification task for both the single-location and multi-location cases.
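
A minimal sketch of the differential-frame modality mentioned in the abstract (simple differencing of consecutive frames); the tensor layout and the use of absolute differences are assumptions:

```python
import numpy as np

def differential_frames(frames: np.ndarray) -> np.ndarray:
    """Absolute differences between consecutive frames: (T, H, W, C) -> (T-1, H, W, C)."""
    frames = frames.astype(np.int16)           # avoid uint8 wrap-around
    diff = np.abs(frames[1:] - frames[:-1])
    return diff.astype(np.uint8)

# Tiny synthetic clip: 5 frames of 32x32 RGB
clip = np.random.randint(0, 256, (5, 32, 32, 3), dtype=np.uint8)
print(differential_frames(clip).shape)  # (4, 32, 32, 3)
```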

A Study on the Variation of Aerosol Optical Depth according to Aerosol Types in Northeast Asia using Aeronet Sun/Sky Radiometer Data (AERONET 선포토미터 데이터를 이용한 동북아시아 지역 대기 에어로졸 종류별 광학적 농도 변화 특성 연구)

  • Noh, Youngmin
    • Journal of Korean Society for Atmospheric Environment
    • /
    • v.34 no.5
    • /
    • pp.668-676
    • /
    • 2018
  • This study developed a technique to divide the aerosol optical depth of the entire aerosol load (${\tau}_{total}$) into a dust optical depth (${\tau}_D$) and a pollution-particle optical depth (${\tau}_P$) using the AERONET sun/sky radiometer data provided in Version 3 (an illustrative form of the partitioning is given below). The method was applied to AERONET data observed from 2006 to 2016 at Beijing (China), Seoul and Gosan (Korea), and Osaka (Japan), and the optical-depth trends of the different aerosol types in Northeast Asia were analyzed. The annual variation of ${\tau}_{total}$ showed a decreasing tendency except at Seoul, where the observation data were limited. When ${\tau}_{total}$ was separated into ${\tau}_D$ and ${\tau}_P$, however, ${\tau}_D$ tended to decrease while ${\tau}_P$ tended to increase except at Osaka. This indicates that the loading of dust aerosols, represented by Asian dust in Northeast Asia, is decreasing in both mass concentration and optical depth, whereas, even though the mass concentration of pollution particles generated by human activity tends to decrease, their optical concentration, represented by the aerosol optical depth, is increasing in Northeast Asia.
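
The partitioning itself is ${\tau}_{total} = {\tau}_D + {\tau}_P$. One widely used way to obtain the dust fraction from sun/sky radiometer or lidar products is through the particle depolarization ratio $\delta$; whether this is exactly the Version 3 retrieval used in the study is not stated in the abstract, so the form below is only illustrative:

$$\tau_D = R_D\,\tau_{total}, \qquad \tau_P = \tau_{total} - \tau_D, \qquad R_D = \frac{(\delta - \delta_{nd})(1 + \delta_d)}{(\delta_d - \delta_{nd})(1 + \delta)},$$

where $\delta_d$ and $\delta_{nd}$ are assumed reference depolarization ratios for pure dust and non-dust (pollution) particles, respectively.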

Fusion System of Time-of-Flight Sensor and Stereo Cameras Considering Single Photon Avalanche Diode and Convolutional Neural Network (SPAD과 CNN의 특성을 반영한 ToF 센서와 스테레오 카메라 융합 시스템)

  • Kim, Dong Yeop;Lee, Jae Min;Jun, Sewoong
    • The Journal of Korea Robotics Society
    • /
    • v.13 no.4
    • /
    • pp.230-236
    • /
    • 2018
  • 3D depth perception has played an important role in robotics, and many sensing methods have been proposed for it. As a photodetector for 3D sensing, the single photon avalanche diode (SPAD) has been suggested because of its sensitivity and accuracy. We have investigated applying a SPAD chip in a fusion system of a time-of-flight (ToF) sensor and a stereo camera. Our goal is to upsample the SPAD depth resolution using an RGB stereo camera. Our current SPAD ToF sensor has a resolution of only 64 x 32, whereas higher-resolution depth sensors such as the Kinect V2 and Cube-Eye are available. Although this could be a weak point of our system, we exploit the resolution gap instead: a convolutional neural network (CNN) is designed to upsample our low-resolution depth map, using data from the higher-resolution depth sensors as label data (a minimal upsampling-and-fusion sketch follows below). The upsampled CNN depth data and the stereo camera depth data are then fused using the semi-global matching (SGM) algorithm. We propose a simplified fusion method designed for embedded systems.
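
A minimal sketch of the upsample-then-fuse idea in the abstract, assuming PyTorch is available: a small CNN maps the 64 x 32 ToF depth map to a 4x denser grid, and the result is combined with a stereo depth map by a placeholder rule. The layer sizes, scale factor, and fusion rule are illustrative assumptions, not the paper's design (which fuses via the SGM pipeline):

```python
import torch
import torch.nn as nn

class DepthUpsampler(nn.Module):
    """Small CNN that upsamples a low-resolution ToF depth map (illustrative layout)."""
    def __init__(self, scale: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False),
            nn.Conv2d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def fuse(upsampled_tof: torch.Tensor, stereo_depth: torch.Tensor) -> torch.Tensor:
    """Placeholder fusion: average where the stereo depth is valid, else keep ToF."""
    valid = stereo_depth > 0
    return torch.where(valid, 0.5 * (upsampled_tof + stereo_depth), upsampled_tof)

# 64 x 32 SPAD ToF depth map (batch of 1, single channel) and a 4x stereo depth map
tof = torch.rand(1, 1, 32, 64)
stereo = torch.rand(1, 1, 128, 256)
fused = fuse(DepthUpsampler(scale=4)(tof), stereo)
print(fused.shape)  # torch.Size([1, 1, 128, 256])
```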