• Title/Summary/Keyword: Multiple 2D Images

Search results: 160

3-D Location Estimation of Airborne Targets Using a Modified Radon Transform (레이돈 변환 방식을 이용한 비행 물체의 3차원 위치 추정)

  • 최재호;곽훈성
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.6
    • /
    • pp.25-32
    • /
    • 1994
  • A new projection-based approach derived from the Radon transform for detecting and estimating the 3-D locations of unresolved targets in a time-sequential set of infrared images is presented. The signal-to-noise ratio per pixel is very low (a dim target), and the target tracks span many image frames. By forming multiple 2-D representations along arbitrary orientations with the 3-D Radon transform, our projection-based method allows the 3-D problem to be analyzed in terms of its 2-D projections. The method not only alleviates the heavy computational expense of processing the entire set of images as a whole, but the results also show that the proposed strategy yields robust detection and estimation of 3-D target trajectories even at low SNRs. A minimal Radon-transform sketch is given after this entry.

  • PDF
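The projection idea described above, in which a dim, linearly moving target integrates into a single strong peak when the image stack is projected along many orientations, can be illustrated with the ordinary 2-D Radon transform. The following is a minimal sketch, not the paper's modified 3-D transform: it builds a synthetic (time, position) slice with a faint drifting target and locates the strongest sinogram peak. The array sizes, target amplitude, and angle sampling are illustrative assumptions.

```python
# Toy sketch: detect a faint linear track with the 2-D Radon transform.
# Illustrates the projection idea only, not the paper's modified 3-D method.
import numpy as np
from skimage.transform import radon

rng = np.random.default_rng(0)
n_frames, size = 32, 64

# Build a (time, position) slice of a synthetic sequence: noise plus a dim moving target.
slice_tx = rng.normal(0.0, 1.0, (n_frames, size))
for t in range(n_frames):
    x = 10 + t          # target drifts one pixel per frame
    if x < size:
        slice_tx[t, x] += 1.5   # low SNR per pixel

# Project along many orientations; a straight track integrates into one peak.
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(slice_tx, theta=theta, circle=False)

# The strongest sinogram bin gives the track's orientation and offset.
offset_idx, angle_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
print(f"peak at angle {theta[angle_idx]:.1f} deg, projection bin {offset_idx}")
```

A straight track in the slice shows up as a single bright sinogram bin, which is what makes detection possible even when each individual pixel is barely above the noise.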

Multiple Homographies Estimation using a Guided Sequential RANSAC (가이드된 순차 RANSAC에 의한 다중 호모그래피 추정)

  • Park, Yong-Hee;Kwon, Oh-Seok
    • The Journal of the Korea Contents Association
    • /
    • v.10 no.7
    • /
    • pp.10-22
    • /
    • 2010
  • This study proposes a new method for estimating multiple homographies between two images. With a large proportion of outliers, RANSAC is a general and very successful robust parameter estimator. However, it is limited by the assumption that a single model accounts for all of the data inliers. It has therefore been suggested to apply RANSAC sequentially to estimate multiple 2D projective transformations. In that case, because outliers remain in the correspondence data set throughout the sequential estimation process, estimation tends to progress slowly for all models. Moreover, the sequential process is difficult to parallelize, since the models must be estimated in order of their inlier counts. We introduce a guided sequential RANSAC algorithm that uses the local model instances obtained during the RANSAC procedure, which reduces the number of random samples and deals with multiple models simultaneously.
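For context, the baseline that the guided method above improves on, plain sequential RANSAC, can be sketched in a few lines with OpenCV's homography estimator. This is only the naive sequential loop (fit one homography, remove its inliers, repeat), not the paper's guided sampling; the reprojection threshold and stopping rule are assumptions.

```python
# Minimal sketch of plain sequential RANSAC for multiple homographies
# (the baseline the guided method improves on), using OpenCV.
import numpy as np
import cv2

def sequential_ransac_homographies(src, dst, reproj_thresh=3.0, min_inliers=20):
    """Repeatedly fit a homography with RANSAC and remove its inliers."""
    src = np.asarray(src, dtype=np.float32)
    dst = np.asarray(dst, dtype=np.float32)
    remaining = np.arange(len(src))
    models = []
    while len(remaining) >= min_inliers:
        H, mask = cv2.findHomography(src[remaining], dst[remaining],
                                     cv2.RANSAC, reproj_thresh)
        if H is None:
            break
        inlier_mask = mask.ravel().astype(bool)
        if inlier_mask.sum() < min_inliers:
            break  # remaining correspondences are mostly outliers
        models.append((H, remaining[inlier_mask]))
        remaining = remaining[~inlier_mask]  # this model's outliers stay for the next pass
    return models
```

Because each pass re-samples from all remaining correspondences, including the outliers and the inliers of models not yet found, later models take progressively more iterations, which is exactly the inefficiency the guided variant targets.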

Influence of Two-Dimensional and Three-Dimensional Acquisitions of Radiomic Features for Prediction Accuracy

  • Ryohei Fukui;Ryutarou Matsuura;Katsuhiro Kida;Sachiko Goto
    • Progress in Medical Physics
    • /
    • v.34 no.3
    • /
    • pp.23-32
    • /
    • 2023
  • Purpose: In radiomics analysis, the pixel values of lesions depicted in computed tomography (CT) and magnetic resonance imaging (MRI) images are used to evaluate features and to predict genetic characteristics and survival time. CT and MRI provide three-dimensional images and thus yield three-dimensional features (Features_3d). However, previous reports are not consistent on whether Features_3d are superior to two-dimensional features (Features_2d). In this study, we aimed to investigate whether a difference exists in the prediction accuracy of radiomics analysis of lung cancer using Features_2d versus Features_3d. Methods: A total of 38 cases of large cell carcinoma (LCC) and 40 cases of squamous cell carcinoma (SCC) were selected for this study. Two- and three-dimensional lesion segmentations were performed, yielding a total of 774 features. Using least absolute shrinkage and selection operator (LASSO) regression, seven Features_2d and six Features_3d were selected. Results: Linear discriminant analysis revealed that the sensitivities of Features_2d and Features_3d to LCC were 86.8% and 89.5%, respectively. The coefficients of determination from multiple regression analysis were 0.68 and 0.70, and the areas under the receiver operating characteristic curve (AUC) were 0.93 and 0.94, respectively. The P-value for the comparison of the estimated AUCs was 0.87. Conclusions: No difference was found in the prediction accuracy for LCC and SCC between Features_2d and Features_3d.
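The analysis pipeline described above (LASSO-based feature selection followed by linear discriminant analysis and AUC evaluation) can be sketched with scikit-learn. The feature matrix, label coding, split ratio, and hyperparameters below are assumptions; the sketch mirrors the general workflow, not the study's exact settings.

```python
# Hedged sketch of the reported workflow: LASSO feature selection,
# then linear discriminant analysis, evaluated by ROC AUC.
# X (n_cases x n_features) and y (0 = LCC, 1 = SCC) are assumed inputs.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

def lasso_lda_auc(X, y, random_state=0):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=random_state)

    # Standardize, then let LASSO shrink uninformative radiomic features to zero.
    scaler = StandardScaler().fit(X_tr)
    lasso = LassoCV(cv=5, random_state=random_state).fit(scaler.transform(X_tr), y_tr)
    selected = np.flatnonzero(lasso.coef_)
    if selected.size == 0:
        selected = np.argsort(np.abs(lasso.coef_))[-1:]  # fall back to the strongest feature

    # Classify with LDA on the surviving features and report the AUC.
    lda = LinearDiscriminantAnalysis().fit(scaler.transform(X_tr)[:, selected], y_tr)
    scores = lda.decision_function(scaler.transform(X_te)[:, selected])
    return selected, roc_auc_score(y_te, scores)
```

Running the same pipeline on a feature matrix built from the two-dimensional segmentation and again on one from the three-dimensional segmentation gives directly comparable AUCs, mirroring the comparison reported above.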

Multi-camera-based 3D Human Pose Estimation for Close-Proximity Human-robot Collaboration in Construction

  • Sarkar, Sajib;Jang, Youjin;Jeong, Inbae
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.328-335
    • /
    • 2022
  • With the advance of robot capabilities and functionalities, construction robots assisting construction workers have been increasingly deployed on construction sites to improve safety, efficiency, and productivity. For close-proximity human-robot collaboration on construction sites, robots need to be aware of the context, especially construction workers' behavior, in real time to avoid collisions with workers. To recognize human behavior, most previous studies obtained 3D human poses using a single camera or an RGB-depth (RGB-D) camera. However, single-camera detection has limitations such as occlusion, detection failure, and sensor malfunction, and an RGB-D camera may suffer from interference from lighting conditions and surface materials. To address these issues, this study proposes a novel method of 3D human pose estimation that extracts the 2D location of each joint from multiple images captured at the same time from different viewpoints, fuses each joint's 2D locations, and estimates the 3D joint location. For higher accuracy, a probabilistic representation is used to extract the 2D locations of the joints, treating each joint location extracted from an image as a noisy partial observation. The 3D human pose is then estimated by fusing the probabilistic 2D joint locations to maximize the likelihood. The proposed method was evaluated in both simulation and laboratory settings, and the results demonstrated its estimation accuracy and practical feasibility. This study contributes to ensuring human safety in close-proximity human-robot collaboration by providing a novel method of 3D human pose estimation. A generic multi-view triangulation sketch follows this entry.

  • PDF
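The fusion step described above, combining each joint's noisy 2D detections from several calibrated cameras into one 3D estimate, is commonly solved with a confidence-weighted direct linear transform (DLT). The sketch below is a generic triangulation of that kind, not the paper's exact likelihood formulation; the camera matrices, weights, and input shapes are assumptions.

```python
# Generic confidence-weighted DLT triangulation of one joint from multiple views.
# A hedged stand-in for the probabilistic 2D-to-3D fusion described above.
import numpy as np

def triangulate_joint(proj_mats, points_2d, weights):
    """proj_mats: list of 3x4 camera projection matrices.
    points_2d: list of (u, v) detections of the same joint, one per camera.
    weights: per-view confidences in [0, 1] (e.g. detector scores)."""
    rows = []
    for P, (u, v), w in zip(proj_mats, points_2d, weights):
        # Each view contributes two linear constraints on the homogeneous 3D point.
        rows.append(w * (u * P[2] - P[0]))
        rows.append(w * (v * P[2] - P[1]))
    A = np.stack(rows)
    # The 3D point is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

Calling triangulate_joint once per joint, with detector confidences as weights, yields a full 3D pose; down-weighting low-confidence views is a simple way to reflect that each 2D detection is only a noisy partial observation.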

Automatic Generation of 3D Face Model from Trinocular Images (Trinocular 영상을 이용한 3D 얼굴 모델 자동 생성)

  • Yi, Kwang-Do;Ahn, Sang-Chul;Kwon, Yong-Moo;Ko, Han-Seok;Kim, Hyoung-Gon
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.36S no.7
    • /
    • pp.104-115
    • /
    • 1999
  • This paper proposes an efficient method for 3D modeling of a human face from trinocular images by reconstructing the face surface from range data. By using a trinocular camera system, we mitigate the tradeoff between the occlusion problem and the limited range resolution that is a critical limitation of binocular camera systems. We also propose an MPC_MBS (Matching Pixel Count Multiple Baseline Stereo) area-based matching method to reduce the boundary overreach phenomenon and to improve both accuracy and precision in matching. In this method, the computing time can be reduced significantly by removing redundancies. In the model generation step, sub-pixel-accurate surface data are obtained by 2D interpolation of disparity values and sampled to make regular triangular meshes. The data size of the triangular mesh model can be controlled by merging vertices that lie on the same plane within a user-defined error threshold. A simplified matching sketch is given after this entry.

  • PDF
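The MPC_MBS matching named above combines a matching-pixel-count cost with multiple-baseline aggregation. The sketch below is a heavily simplified, hedged version for rectified horizontal baselines and integer disparities: the window size, difference threshold, and baseline ratios are illustrative assumptions, and none of the paper's boundary-overreach handling or redundancy removal is reproduced.

```python
# Hedged sketch of matching-pixel-count, multiple-baseline stereo matching.
# Simplified: rectified horizontal baselines, integer disparities, box window.
import numpy as np
from scipy.ndimage import uniform_filter

def mpc_mbs_disparity(ref, others, baseline_ratios, max_disp=32,
                      window=5, diff_thresh=10):
    """ref: reference image (H x W). others: images from the other cameras.
    baseline_ratios: disparity scale of each other camera relative to the reference pair."""
    h, w = ref.shape
    ref = ref.astype(np.float32)
    best_score = np.full((h, w), -1.0, dtype=np.float32)
    disparity = np.zeros((h, w), dtype=np.int32)

    for d in range(max_disp):
        score = np.zeros((h, w), dtype=np.float32)
        for img, ratio in zip(others, baseline_ratios):
            dd = int(round(d * ratio))   # disparity scaled by this camera's baseline
            shifted = np.zeros((h, w), dtype=np.float32)
            if dd < w:
                shifted[:, dd:] = img[:, :w - dd]
            match = (np.abs(ref - shifted) <= diff_thresh).astype(np.float32)
            # Fraction of matching pixels in the window (normalized pixel count).
            score += uniform_filter(match, size=window)
        better = score > best_score
        disparity[better] = d
        best_score[better] = score[better]
    return disparity
```

Counting matching pixels instead of summing squared differences makes the cost less sensitive to outlier pixels near depth boundaries, which is the intuition behind the boundary-overreach reduction claimed above.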

A study of multiple-exposure nanosphere lithography for photonic quasi-crystals fabrication (광자 준결정 제작을 위한 다중 노광 나노구 리소그라피 연구)

  • Yeo, Jong-Bin;Lee, Hyun-Yong
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference
    • /
    • 2010.06a
    • /
    • pp.62-62
    • /
    • 2010
  • Photonic quasi-crystals (PQCs) have been fabricated by a multiple-exposure nanosphere lithography (MENSL) method using self-assembled nanospheres as lens-mask patterns. The multiple-exposure source is a collimated laser beam combined with a rotation and tilting system. The PQC arrays exhibited lattice structures and shapes that vary with the rotating angle ($\theta$), the tilting angle ($\gamma$), and the exposure conditions. The nanospheres used were up to $1\;{\mu}m$ in size. Images of the prepared 2D PQCs were observed by SEM. We believe that the MENSL method is a suitable tool for realizing large-area PQC arrays.

  • PDF

Autostereoscopic Multiview 3D Display System based on Volume Hologram (체적 홀로그램을 이용한 무안경 다안식 3D 디스플레이 시스템)

  • 이승현;이상훈
    • Journal of the Korea Computer Industry Society
    • /
    • v.2 no.12
    • /
    • pp.1609-1616
    • /
    • 2001
  • We present an autostereoscopic 3D display system using a volume hologram. In the proposed system, the interference pattern of angularly multiplexed plane reference beams and object beams is recorded into a volume hologram, which guides the object beams of the multi-view images into the desired perspective directions. For reconstruction, object beams containing the desired multi-view image information, which satisfy the Bragg matching condition, are illuminated onto the crystal in a time-division-multiplexed manner. Multiple stereoscopic images are then projected onto the display plane for autostereoscopic 3D viewing. This makes it possible to build a high-resolution multiview 3D display system that is independent of the viewpoint.

  • PDF

ERS-1 AND CCRS C-SAR Data Integration For Look Direction Bias Correction Using Wavelet Transform

  • Won, J.S.;Moon, Woo-Il M.;Singhroy, Vern;Lowman, Paul-D.Jr.
    • Korean Journal of Remote Sensing
    • /
    • v.10 no.2
    • /
    • pp.49-62
    • /
    • 1994
  • Look direction bias in a single-look SAR image can often be misinterpreted in geological applications of radar data. This paper investigates digital processing techniques for SAR image data integration and compensation of the SAR look direction bias. The two main approaches for reducing look direction bias and integrating multiple SAR data sets are (1) principal component analysis (PCA) and (2) wavelet transform (WT) integration. These two methods were investigated and tested with the ERS-1 (VV-polarization) and CCRS's airborne (HH-polarization) C-SAR image data sets recorded over the Sudbury test site, Canada. The PCA technique has been very effective for integrating more than two layers of digital image data. When only two sets of SAR data are available, the PCA technique requires at least one additional set of auxiliary data for proper rendition of fine surface features. The WT approach to SAR data integration exploits the decomposition of an image into an approximation image (low frequencies), which characterizes the spatially large and relatively distinct structures, and detail images (high frequencies), in which information on fine structures is preserved. The test results with the ERS-1 and CCRS's C-SAR data indicate that the WT approach is more efficient and robust than the PCA approach in enhancing the fine details of the multiple SAR images.
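The approximation/detail decomposition described above is the basis of standard wavelet image fusion. The sketch below is a generic two-image fusion with PyWavelets, not the paper's exact integration scheme: it averages the approximation coefficients of two co-registered images and keeps the stronger detail coefficient at each position. The wavelet family, decomposition level, and fusion rules are assumptions.

```python
# Hedged sketch of wavelet-based fusion of two co-registered SAR images.
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2", level=2):
    """Fuse two images of equal shape: average the approximations,
    keep the larger-magnitude detail coefficient at each position."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)

    fused = [0.5 * (ca[0] + cb[0])]          # approximation: average the two looks
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)
```

Keeping the stronger detail coefficient preserves fine structure from whichever look direction imaged it best, which is the property the WT integration above relies on.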

Validation of a low-cost portable 3-dimensional face scanner

  • Liu, Catherine;Artopoulos, Andreas
    • Imaging Science in Dentistry
    • /
    • v.49 no.1
    • /
    • pp.35-43
    • /
    • 2019
  • Purpose: The goal of this study was to assess the accuracy and reliability of a low-cost portable scanner (Scanify) for imaging facial casts compared to a previously validated portable digital stereophotogrammetry device (Vectra H1). This in vitro study was performed using 2 facial casts obtained by recording impressions of the authors, at King's College London Academic Centre of Reconstructive Science. Materials and Methods: The casts were marked with anthropometric landmarks, then digitised using Scanify and Vectra H1. Computed tomography (CT) scans of the same casts were performed to verify the validation of Vectra H1. The 3-dimensional (3D) images acquired with each device were compared using linear measurements and 3D surface analysis software. Results: Overall, 91% of the linear Scanify measurements were within 1 mm of the corresponding reference values. The mean overall surface difference between the Scanify and Vectra images was <0.3 mm. Significant differences were detected in depth measurements. Merging multiple Scanify images produced significantly greater registration error. Conclusion: Scanify is a very low-cost device that could have clinical applications for facial imaging if imaging errors could be corrected by a future software update or hardware revision.
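The "3D surface analysis" comparison reported above is typically computed as the mean nearest-neighbor deviation between two registered surface scans. The sketch below is a generic version of that measurement using SciPy, assuming both scans are already aligned and given as point clouds; it is not the specific software used in the study.

```python
# Generic mean surface deviation between two registered 3D scans (point clouds).
# A hedged stand-in for the study's 3D surface analysis; alignment is assumed done.
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_deviation(scan_a, scan_b):
    """scan_a: (N, 3) array, scan_b: (M, 3) array of surface points in mm."""
    tree = cKDTree(scan_b)
    dists, _ = tree.query(scan_a)   # nearest point on scan_b for each point of scan_a
    return dists.mean(), dists.max()

# Linear landmark measurements can be compared the same way:
# abs(dist_scanify - dist_reference) <= 1.0 mm, as reported for 91% of measurements.
```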

A Method for Generating Multiple Virtual Hairstyles Based on 2D Photo-realistic Images (2D 실사 영상에 기반한 다중 가상 헤어스타일 생성 방법)

  • Lee, Hyoung-Jin;Kwak, No-Yoon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2005.05a
    • /
    • pp.1627-1630
    • /
    • 2005
  • This paper presents a virtual hairstyle generation method that can automatically produce multiple types of hairstyles: a hairstyle extracted from a 2D photo-realistic image is aligned to the head of an arbitrary portrait image, and semi-automatic field morphing is performed from the original hairstyle toward the extracted one. The proposed method has the advantage that, in addition to graphic objects prepared in advance, hairstyles extracted directly from photo-realistic images can be used, and that various types of hairstyles beyond the extracted one can be generated automatically. In addition, because it provides a convenient user interface based on semi-automatic field morphing, it reduces operator fatigue and working time, and even non-experts can generate natural virtual hairstyles with simple user input. A minimal field-morphing sketch is given after this entry.

  • PDF
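Field morphing in the sense used above is usually implemented with the Beier-Neely feature-line warp. The sketch below is a minimal single-line-pair version of that warp with nearest-neighbor sampling; the feature-line coordinates are assumptions, and the paper's semi-automatic multi-line interface and cross-dissolve blending are not reproduced.

```python
# Minimal single-line-pair Beier-Neely field-morphing warp (hedged sketch).
# Pure-Python loop for clarity; far too slow for production use.
import numpy as np

def perp(v):
    return np.array([-v[1], v[0]], dtype=np.float64)

def field_warp(src, p_dst, q_dst, p_src, q_src):
    """Warp `src` so that the source feature line (p_src, q_src)
    maps onto the destination feature line (p_dst, q_dst).
    Feature-line endpoints are 2-vectors, e.g. np.array([x, y], float)."""
    h, w = src.shape[:2]
    out = np.zeros_like(src)
    pq_d = q_dst - p_dst
    pq_s = q_src - p_src
    len_d2 = pq_d @ pq_d
    len_s = np.sqrt(pq_s @ pq_s)
    for y in range(h):
        for x in range(w):
            X = np.array([x, y], dtype=np.float64)
            # Position of X relative to the destination line (u along, v across).
            u = (X - p_dst) @ pq_d / len_d2
            v = (X - p_dst) @ perp(pq_d) / np.sqrt(len_d2)
            # Corresponding location relative to the source line.
            Xs = p_src + u * pq_s + v * perp(pq_s) / len_s
            xs, ys = int(round(Xs[0])), int(round(Xs[1]))
            if 0 <= xs < w and 0 <= ys < h:
                out[y, x] = src[ys, xs]
    return out
```

In a full morph, both the original and the extracted hairstyle images would be warped toward interpolated feature-line positions and cross-dissolved; extending the loop to several weighted line pairs gives the standard Beier-Neely formulation.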