• Title/Abstract/Keyword: Synthetic images

573 search results (processing time: 0.023 s)

A Novel Approach to Mugshot Based Arbitrary View Face Recognition

  • Zeng, Dan;Long, Shuqin;Li, Jing;Zhao, Qijun
    • Journal of the Optical Society of Korea / Vol. 20, No. 2 / pp.239-244 / 2016
  • Mugshot face images, routinely collected by police, usually contain both frontal and profile views. Existing automated face recognition methods have exploited mugshot databases by enlarging the gallery with synthetic multi-view face images generated from the mugshot images. This paper instead proposes to match a query face image of arbitrary view directly to the enrolled frontal and profile face images. During matching, a 3D face shape model reconstructed from the mugshot images is used to establish corresponding semantic parts between the query and gallery face images, based on which the comparison is performed. The final recognition result is obtained by fusing the matching results against the frontal and profile face images. Compared with previous methods, the proposed method makes better use of mugshot databases without resorting to synthetic face images that may contain artifacts. Its effectiveness has been demonstrated on the Color FERET and CMU PIE databases.
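
The fusion step described in this abstract can be illustrated with a simple weighted sum rule over per-identity similarity scores. This is a minimal sketch; the function name, weight, and scores are hypothetical, and the paper's actual fusion rule may differ.

```python
def fuse_and_identify(frontal_scores, profile_scores, w=0.5):
    """Sum-rule fusion of per-identity similarity scores.

    frontal_scores / profile_scores: dicts mapping identity -> similarity
    from matching the query against the frontal and profile gallery images.
    Returns the identity with the highest fused score.
    """
    fused = {k: w * frontal_scores[k] + (1 - w) * profile_scores[k]
             for k in frontal_scores}
    return max(fused, key=fused.get)
```

For example, an identity that matches the profile view strongly can win even when its frontal score is mediocre.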

Synthetic aperture 집적 영상을 이용한 3D 영상 디스플레이 방법 (3D Image Display Method Using Synthetic Aperture Integral Imaging)

  • 신동학;유훈
    • 한국정보통신학회논문지 / Vol. 16, No. 9 / pp.2037-2042 / 2012
  • Synthetic aperture integral imaging (SAII) is a promising 3D imaging technique that acquires high-resolution elemental images using multiple cameras. In this paper, we propose a method of displaying spatial 3D images via integral imaging display using SAII. Since the elemental images obtained from SAII cannot be used directly as a spatial 3D image, a depth map is extracted and the elemental images are converted into new elemental images for display, which then produce the spatial 3D image. To show the usefulness of the proposed method, we perform preliminary experiments using toy 3D objects and present experimental results in which spatial 3D images are realized.
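
A simplified sketch of the depth-based view conversion: given a depth map, each pixel of a laterally shifted elemental image can be resampled from the central view with a disparity inversely proportional to depth. The pinhole-array model, function names, and the constant `k` are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def shift_view(img, depth, dx, k=1.0):
    """Resample a view laterally offset by dx from a central image plus depth map.

    Assumes a simple pinhole-array model in which a pixel at depth z shifts
    horizontally by the disparity k * dx / z (nearest-neighbor resampling).
    """
    H, W = img.shape
    out = np.zeros_like(img)
    xs = np.arange(W)
    for y in range(H):
        disp = np.round(k * dx / depth[y]).astype(int)  # per-pixel disparity
        src = xs - disp
        valid = (src >= 0) & (src < W)                  # drop out-of-frame samples
        out[y, xs[valid]] = img[y, src[valid]]
    return out
```

Repeating this for each lens position yields a set of display elemental images from a single view and its depth map.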

이산 웨이블릿 합성 영상을 이용한 철강 후판 검사의 조명 메커니즘에 관한 연구 (A Study on Illumination Mechanism of Steel Plate Inspection Using Wavelet Synthetic Images)

  • 조은덕;김경범
    • 반도체디스플레이기술학회지 / Vol. 17, No. 2 / pp.26-31 / 2018
  • In this paper, surface defects and typical illumination mechanisms for steel plates are analyzed, and the optimum illumination mechanism is then selected using discrete wavelet transform (DWT) synthetic images and a discriminant measure (DM). The DWT synthetic images are generated from component images decomposed by the Haar wavelet transform filter. The best synthetic image for each surface defect is determined using the signal-to-noise ratio (SNR). The optimum illumination mechanism is selected by applying the DM, computed with a Tenengrad-Euclidean function, to the best synthetic images; the DM evaluates the degree of contrast using the defect boundary information. The performance of the selected illumination mechanism is verified by quantitative data and intuitive visual inspection of the images.
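
The two ingredients named in the abstract, a Haar decomposition and a Tenengrad-style contrast measure, can be sketched as follows. The unnormalized averaging variant of the Haar transform and the use of `np.gradient` in place of the Sobel operator are simplifications, not the paper's exact formulation.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar decomposition into four subbands (averaging variant)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 4  # approximation subband
    LH = (a - b + c - d) / 4  # detail subbands (naming conventions vary)
    HL = (a + b - c - d) / 4
    HH = (a - b - c + d) / 4
    return LL, LH, HL, HH

def tenengrad(img):
    """Gradient-energy contrast measure: sum of squared gradient magnitudes."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    return float(np.sum(gx ** 2 + gy ** 2))
```

A synthetic image built from a subset of subbands can then be scored with `tenengrad` around the defect boundary to compare candidate illumination mechanisms.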

Enhancing Automated Recognition of Small-Sized Construction Tools Using Synthetic Images: Validating Practical Applicability Through Confidence Scores

  • Soeun HAN;Choongwan KOO
    • International Conference Proceedings / The 10th International Conference on Construction Engineering and Project Management / pp.1308-1308 / 2024
  • Computer vision techniques have been widely employed in automated construction management to enhance safety and prevent accidents at construction sites. However, previous research on vision-based approaches has often overlooked small-sized construction tools. These tools present unique challenges both in data collection, due to their diverse shapes and sizes, and in improving model performance to accurately detect and classify them. To address these challenges, this study aimed to enhance the performance of vision-based classifiers for small-sized construction tools, including buckets, cord reels, hammers, and tackers, by leveraging synthetic images generated from a 3D virtual environment. Three classifiers were developed using the YOLOv8 algorithm, each differing in the composition of the training dataset: (i) 'Real-4000', trained on 4,000 authentic images collected through web crawling (1,000 images per object); (ii) 'Hybrid-4000', consisting of 2,000 authentic images and 2,000 synthetic images; and (iii) 'Hybrid-8000', incorporating 4,000 authentic images and 4,000 synthetic images. To validate the performance of the classifiers, 144 directly captured images per object were collected from real construction sites as the test dataset. The mean Average Precision at an IoU threshold of 0.5 (mAP_0.5) was 79.6%, 90.8%, and 94.8%, respectively, with the 'Hybrid-8000' model demonstrating the highest performance. Notably, for objects with significant shape variations, the use of synthetic images enhanced the performance of the vision-based classifiers. Moreover, the practical applicability of the proposed classifiers was validated through confidence scores, particularly between the 'Hybrid-4000' and 'Hybrid-8000' models. Statistical analysis using t-tests indicated that, based on confidence scores, the performance of the 'Hybrid-4000' model matched or exceeded that of the 'Hybrid-8000' model. Thus, the 'Hybrid-4000' model may be preferable in terms of data collection efficiency and processing time, contributing to enhanced safety and real-time automation and robotics in construction practices.
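
The t-test comparison of confidence scores mentioned above can be sketched with Welch's t statistic, which does not assume equal variances between the two models' score distributions. The sample values below are illustrative only, not data from the study.

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples with unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = mean(sample_a), mean(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances (n - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)
```

A t statistic near zero (with a correspondingly large p-value) would support the conclusion that the 'Hybrid-4000' confidence scores are not meaningfully below those of 'Hybrid-8000'.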

Preliminary Application of Synthetic Computed Tomography Image Generation from Magnetic Resonance Image Using Deep-Learning in Breast Cancer Patients

  • Jeon, Wan;An, Hyun Joon;Kim, Jung-in;Park, Jong Min;Kim, Hyoungnyoun;Shin, Kyung Hwan;Chie, Eui Kyu
    • Journal of Radiation Protection and Research / Vol. 44, No. 4 / pp.149-155 / 2019
  • Background: Magnetic resonance (MR) image guided radiation therapy systems enable real-time MR-guided radiotherapy (RT) without additional radiation exposure to patients during treatment. However, MR images lack the electron density information required for dose calculation. An image fusion algorithm with deformable registration between MR and computed tomography (CT) images was developed to solve this issue; however, the delivered dose may differ due to volumetric changes during the registration process. In this respect, a synthetic CT generated from the MR image would provide more accurate information for real-time RT. Materials and Methods: We analyzed 1,209 MR images from 16 patients who underwent MR-guided RT. Structures were divided into five tissue types (air, lung, fat, soft tissue, and bone) according to the Hounsfield units of the deformed CT. Using a deep learning model (U-NET), synthetic CT images were generated from the MR images acquired during RT. These synthetic CT images were compared to the deformed CT generated using deformable registration, with a pixel-to-pixel match between the two. Results and Discussion: In two test image sets, the average pixel match rate per section was more than 70% (67.9 to 80.3% and 60.1 to 79%; synthetic CT pixel/deformed planning CT pixel), and the average pixel match rate over the entire patient image set was 69.8%. Conclusion: The synthetic CT generated from the MR images was comparable to the deformed CT, suggesting possible use for real-time RT. A deep learning model trained on larger MR imaging datasets may further improve the match rate of the synthetic CT.
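
The pixel-to-pixel match between synthetic and deformed CT can be sketched by classifying each pixel's Hounsfield unit into one of the five tissue types and counting agreements. The HU cut-offs below are illustrative assumptions; the abstract does not state the exact thresholds.

```python
import numpy as np

# Illustrative HU edges separating air / lung / fat / soft tissue / bone.
HU_EDGES = [-950.0, -500.0, -100.0, 300.0]

def tissue_map(hu):
    """Map Hounsfield units to integer tissue classes 0..4."""
    return np.digitize(np.asarray(hu, dtype=float), HU_EDGES)

def pixel_match_rate(synthetic_hu, reference_hu):
    """Fraction of pixels whose tissue class agrees between the synthetic CT
    and the deformed (reference) CT."""
    return float(np.mean(tissue_map(synthetic_hu) == tissue_map(reference_hu)))
```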

Spine Computed Tomography to Magnetic Resonance Image Synthesis Using Generative Adversarial Networks : A Preliminary Study

  • Lee, Jung Hwan;Han, In Ho;Kim, Dong Hwan;Yu, Seunghan;Lee, In Sook;Song, You Seon;Joo, Seongsu;Jin, Cheng-Bin;Kim, Hakil
    • Journal of Korean Neurosurgical Society / Vol. 63, No. 3 / pp.386-396 / 2020
  • Objective : To generate synthetic spine magnetic resonance (MR) images from spine computed tomography (CT) using generative adversarial networks (GANs), and to determine the similarities between synthesized and real MR images. Methods : GANs were trained to transform spine CT image slices into spine magnetic resonance T2-weighted (MRT2) axial image slices by combining adversarial loss and voxel-wise loss. Experiments were performed using 280 pairs of lumbar spine CT scans and MRT2 images. MRT2 images were then synthesized from 15 other spine CT scans. To evaluate whether the synthetic MR images were realistic, two radiologists, two spine surgeons, and two residents blindly classified the real and synthetic MRT2 images. Two experienced radiologists then evaluated the similarities between subdivisions of the real and synthetic MRT2 images. Quantitative analysis of the synthetic MRT2 images was performed using the mean absolute error (MAE) and peak signal-to-noise ratio (PSNR). Results : The mean overall similarity of the synthetic MRT2 images evaluated by the radiologists was 80.2%. In the blind classification of the real MRT2 images, the failure rate ranged from 0% to 40%. The MAE of each image ranged from 13.75 to 34.24 pixels (mean, 21.19 pixels), and the PSNR of each image ranged from 61.96 to 68.16 dB (mean, 64.92 dB). Conclusion : This was the first study to apply GANs to synthesize spine MR images from CT images. Despite the small dataset of 280 pairs, the synthetic MR images were relatively well implemented. Synthesis of medical images using GANs is a new paradigm of artificial intelligence application in medical imaging. We expect that synthesis of MR images from spine CT images using GANs will improve the diagnostic usefulness of CT. To better inform the clinical applications of this technique, further studies are needed involving a larger dataset, a variety of pathologies, and other MR sequences of the lumbar spine.
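
The two quantitative metrics used in the abstract, MAE and PSNR, can be computed as follows (an 8-bit peak value of 255 is assumed here; the paper reports MAE in pixel-value units):

```python
import numpy as np

def mae(a, b):
    """Mean absolute error between two images."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.mean(np.abs(a - b)))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; `peak` is the maximum pixel value."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    mse = np.mean((a - b) ** 2)
    return float(20 * np.log10(peak / np.sqrt(mse)))
```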

물리 기반 인공신경망을 이용한 PIV용 합성 입자이미지 생성 (Generation of Synthetic Particle Images for Particle Image Velocimetry using Physics-Informed Neural Network)

  • 최현조;신명현;박종호;박진수
    • 한국가시화정보학회지 / Vol. 21, No. 1 / pp.119-126 / 2023
  • Acquiring experimental data for PIV verification or for machine learning training is resource-demanding, leading to increasing interest in synthetic particle images as simulation data. Conventional synthetic particle image generation algorithms do not follow physical laws, and the use of CFD is time-consuming and requires computing resources. In this study, we propose a new method for synthetic particle image generation based on a physics-informed neural network (PINN). The PINN is used to infer the flow fields, enabling the generation of synthetic particle images that follow physical laws with reduced computation time and, unlike CFD, with no constraints on spatial resolution. The proposed method is expected to contribute to the verification of PIV algorithms.
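
The standard synthetic PIV image model, Gaussian particle intensity profiles displaced by a velocity field, can be sketched as below. Here an arbitrary callable stands in for the PINN-inferred flow; the function names and parameters are illustrative.

```python
import numpy as np

def render_particles(xs, ys, H=32, W=32, sigma=1.0, I0=1.0):
    """Render particles as 2D Gaussian intensity blobs on an H x W image."""
    yy, xx = np.mgrid[0:H, 0:W]
    img = np.zeros((H, W))
    for x, y in zip(xs, ys):
        img += I0 * np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * sigma ** 2))
    return img

def advect(xs, ys, u, v, dt):
    """Displace particles by a velocity field u(x, y), v(x, y) over time dt
    (explicit Euler step; the field could come from a PINN or CFD)."""
    return xs + dt * u(xs, ys), ys + dt * v(xs, ys)
```

Rendering the particle positions before and after `advect` yields an image pair whose displacement field follows the prescribed flow.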

Synthetic MR 기법을 이용한 금속 인공물 감소 효과 평가 (Evaluation of the Effect of Metallic Artifact Reduction Using the Synthetic MR Technique)

  • 권순용;안남용;오정은;김성호
    • 한국방사선학회논문지 / Vol. 16, No. 7 / pp.835-842 / 2022
  • This study evaluated the effectiveness of the synthetic MR technique in reducing metallic artifacts. Using a phantom built with spinal surgery screws, in-plane and through-plane images were acquired with both the synthetic MR technique and the fast spin echo technique, and the areas of the metallic artifacts were compared. The metallic artifacts were measured separately as signal-loss and signal pile-up regions, and the total artifact area was calculated as their sum. As a result, the metallic artifacts were relatively reduced with the synthetic MR technique for both in-plane and through-plane images. By sequence, the in-plane artifact area decreased by 23.45% for T1, 20.85% for T2, 19.67% for PD, and 22.12% for FLAIR images; the through-plane artifact area decreased by 62.95% for T1, 73.93% for T2, 74.68% for PD, and 66.43% for FLAIR images. These results are attributed to the absence of signal pile-up distortion when the synthetic MR technique is applied, which reduces the overall size of the metallic artifacts. Therefore, the synthetic MR technique can reduce metallic artifacts very effectively and can help increase the diagnostic value of the images.
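
The artifact-area measurement described above can be sketched by thresholding a normalized image into signal-loss (dark) and signal pile-up (bright) regions and summing their pixel counts. The thresholds are illustrative assumptions, not the study's measurement protocol.

```python
import numpy as np

def artifact_areas(img, loss_thr=0.1, pileup_thr=0.9):
    """Count signal-loss and signal pile-up pixels on an image normalized to [0, 1]."""
    img = np.asarray(img, dtype=float)
    loss = int(np.sum(img < loss_thr))      # signal-loss region
    pileup = int(np.sum(img > pileup_thr))  # signal pile-up region
    return loss, pileup, loss + pileup     # total artifact area

def reduction_percent(area_ref, area_new):
    """Percent decrease of artifact area relative to a reference sequence."""
    return 100.0 * (area_ref - area_new) / area_ref
```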

Web-based synthetic-aperture radar data management system and land cover classification

  • Dalwon Jang;Jaewon Lee;Jong-Seol Lee
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 17, No. 7 / pp.1858-1872 / 2023
  • With the advance of radar technologies, the availability of synthetic aperture radar (SAR) images has increased. To improve the application of SAR images, a management system for SAR images is proposed in this paper. The system provides a trainable land cover classification module and display of SAR images on a map. Users of the system can create their own classifiers with their data and obtain classified results for newly captured SAR images by applying the classifier to them. The classifier is based on a convolutional neural network structure. Since SAR images differ depending on the capturing method and device, a fixed classifier cannot cover all types of SAR land cover classification problems; the system therefore lets each user create his or her own classifier. In our experiments, the module is shown to work well with two different SAR datasets. With this system, SAR data and land cover classification results are managed and easily displayed.
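
The structure of a small convolutional classifier like the one described (convolution, ReLU, pooling, and a linear layer) can be sketched in plain NumPy; in the real system the weights would be trained per user on their own SAR data, and the actual architecture is not specified here.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' 2D cross-correlation (the 'convolution' of DL frameworks)."""
    H, W = img.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def tiny_cnn_logits(img, kernels, weights, bias):
    """Conv -> ReLU -> global average pool -> linear: one class logit per row
    of `weights`. A skeleton of a CNN land-cover classifier, not the paper's
    actual network."""
    feats = np.array([np.maximum(conv2d_valid(img, k), 0).mean() for k in kernels])
    return weights @ feats + bias
```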

BORA IN THE ADRIATIC SEA AND BLACK SEA IMAGED BY THE ENVISAT SYNTHETIC APERTURE RADAR

  • Ivanov, Andrei Yu.;Alpers, Werner
    • Korean Society of Remote Sensing / Proceedings of ISRS 2006 PORSEC Volume II / pp.964-968 / 2006
  • Bora events over the Adriatic Sea and Black Sea are investigated using synthetic aperture radar (SAR) images acquired by the Advanced Synthetic Aperture Radar (ASAR) onboard the European Envisat satellite. These images show pronounced elongated patterns of increased sea surface roughness caused by bora winds. Comparison of the SAR images with wind fields derived from QuikSCAT data confirms that in all cases a strong northeasterly wind was blowing from the mountains onto the sea. It is shown that the SAR images reveal details of the spatial extent of the bora wind fields over the sea which cannot be obtained by other instruments. Furthermore, quantitative information on the wind field is extracted from the SAR images by using a wind scatterometer model.
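
Wind retrieval from SAR backscatter typically inverts a geophysical model function (GMF) that maps wind speed to the normalized radar cross section. The sketch below inverts a monotone toy power-law GMF by bisection; real retrievals use CMOD-family models that also depend on incidence angle and wind direction, so the model here is purely illustrative.

```python
def invert_gmf(sigma0, gmf, lo=0.1, hi=40.0, tol=1e-6):
    """Recover wind speed U from backscatter sigma0 given a monotonically
    increasing GMF, by bisection on the interval [lo, hi] m/s."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if gmf(mid) < sigma0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy stand-in for a CMOD-type model (power law in wind speed only).
toy_gmf = lambda u: 0.01 * u ** 1.5
```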
