• Title/Summary/Keyword: Synthetic images

Search Results: 573

A Novel Approach to Mugshot Based Arbitrary View Face Recognition

  • Zeng, Dan;Long, Shuqin;Li, Jing;Zhao, Qijun
    • Journal of the Optical Society of Korea
    • /
    • v.20 no.2
    • /
    • pp.239-244
    • /
    • 2016
  • Mugshot face images, routinely collected by police, usually contain both frontal and profile views. Existing automated face recognition methods have exploited mugshot databases by enlarging the gallery with synthetic multi-view face images generated from the mugshot face images. This paper, instead, proposes to match a query face image of arbitrary view directly to the enrolled frontal and profile face images. During matching, the 3D face shape model reconstructed from the mugshot face images is used to establish corresponding semantic parts between the query and gallery face images, based on which the comparison is performed. The final recognition result is obtained by fusing the matching results against the frontal and profile face images. Compared with previous methods, the proposed method better utilizes mugshot databases without using synthetic face images that may contain artifacts. Its effectiveness has been demonstrated on the Color FERET and CMU PIE databases.
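The abstract does not specify how the frontal and profile matching results are fused; a common choice, assumed here purely for illustration, is a weighted sum of min-max normalized similarity scores:

```python
# Hypothetical sketch of score-level fusion (the fusion rule is an
# assumption; the abstract only says the two matching results are fused).

def minmax_normalize(scores):
    """Rescale a list of similarity scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(frontal_scores, profile_scores, w_frontal=0.5):
    """Fuse per-subject frontal and profile matching scores."""
    f = minmax_normalize(frontal_scores)
    p = minmax_normalize(profile_scores)
    return [w_frontal * fs + (1.0 - w_frontal) * ps for fs, ps in zip(f, p)]

# The recognized identity is the gallery subject with the highest fused score.
frontal = [0.2, 0.9, 0.4]   # query vs. each enrolled frontal image
profile = [0.3, 0.7, 0.8]   # query vs. each enrolled profile image
fused = fuse_scores(frontal, profile)
best = max(range(len(fused)), key=lambda i: fused[i])
```

With these toy scores, subject 1 scores highest in the fused ranking even though neither single view ranks it first by a large margin, which is the point of fusing the two views.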

3D Image Display Method using Synthetic Aperture integral imaging (Synthetic aperture 집적 영상을 이용한 3D 영상 디스플레이 방법)

  • Shin, Dong-Hak;Yoo, Hoon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.9
    • /
    • pp.2037-2042
    • /
    • 2012
  • Synthetic aperture integral imaging (SAII) is a promising 3D imaging technique that captures high-resolution elemental images using multiple cameras. In this paper, we propose a method of displaying 3D images in space using the synthetic aperture integral imaging technique. Since the elemental images captured by SAII cannot be used directly to display 3D images in an integral imaging display system, we first extract a depth map from the elemental images and then transform them into novel elemental images for 3D image display. The newly generated elemental images are displayed on a display panel to generate 3D images in space. To show the usefulness of the proposed method, we carry out preliminary experiments using a 3D toy object and present the experimental results.

A Study on Illumination Mechanism of Steel Plate Inspection Using Wavelet Synthetic Images (이산 웨이블릿 합성 영상을 이용한 철강 후판 검사의 조명 메커니즘에 관한 연구)

  • Cho, Eun Deok;Kim, Gyung Bum
    • Journal of the Semiconductor & Display Technology
    • /
    • v.17 no.2
    • /
    • pp.26-31
    • /
    • 2018
  • In this paper, surface defects and typical illumination mechanisms for steel plates are analyzed, and then the optimum illumination mechanism is selected using discrete wavelet transform (DWT) synthetic images and a discriminant measure (DM). The DWT synthetic images are generated from component images decomposed by a Haar wavelet transform filter. The best synthetic image for each surface defect is determined using the signal-to-noise ratio (SNR). The optimum illumination mechanism is then selected by applying the discriminant measure to the best synthetic images. The DM is computed using the Tenengrad-Euclidean function and evaluated as the degree of contrast using the defect boundary information. The performance of the optimum illumination mechanism is verified by quantitative data and visual inspection of the images.
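The SNR-based selection step above can be sketched in a few lines. This is an illustrative toy, not the authors' code: the defect mask and the particular SNR definition (defect-to-background contrast over background noise) are assumptions.

```python
# Illustrative sketch: pick the synthetic image with the highest defect SNR,
# treating the defect region as signal and the plate surface as noise.
# The mask and SNR definition are assumptions for illustration only.
import math

def defect_snr(image, defect_mask):
    """SNR = |mean(defect) - mean(background)| / std(background)."""
    defect = [v for v, m in zip(image, defect_mask) if m]
    backgr = [v for v, m in zip(image, defect_mask) if not m]
    mu_d = sum(defect) / len(defect)
    mu_b = sum(backgr) / len(backgr)
    var_b = sum((v - mu_b) ** 2 for v in backgr) / len(backgr)
    return abs(mu_d - mu_b) / math.sqrt(var_b)

def best_synthetic(images, defect_mask):
    """Index of the synthetic image with the highest defect SNR."""
    snrs = [defect_snr(img, defect_mask) for img in images]
    return max(range(len(snrs)), key=lambda i: snrs[i])

# Flattened 1-D toy "images": two synthetic candidates, defect in the middle.
mask  = [0, 0, 1, 1, 0, 0]
img_a = [10, 11, 40, 42, 10, 9]   # defect clearly separated from background
img_b = [10, 11, 14, 15, 10, 9]   # weak contrast
```

Here `img_a` would be kept as the best synthetic image, since its defect region stands far above the background noise level.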

Enhancing Automated Recognition of Small-Sized Construction Tools Using Synthetic Images: Validating Practical Applicability Through Confidence Scores

  • Soeun HAN;Choongwan KOO
    • International conference on construction engineering and project management
    • /
    • 2024.07a
    • /
    • pp.1308-1308
    • /
    • 2024
  • Computer vision techniques have been widely employed in automated construction management to enhance safety and prevent accidents at construction sites. However, previous research on vision-based approaches has often overlooked small-sized construction tools. These tools present unique challenges both in data collection, due to their diverse shapes and sizes, and in improving model performance to accurately detect and classify them. To address these challenges, this study aimed to enhance the performance of vision-based classifiers for small-sized construction tools, including buckets, cord reels, hammers, and tackers, by leveraging synthetic images generated from a 3D virtual environment. Three classifiers were developed using the YOLOv8 algorithm, each differing in the composition of the training dataset: (i) 'Real-4000', trained on 4,000 authentic images collected through web crawling (1,000 images per object); (ii) 'Hybrid-4000', consisting of 2,000 authentic images and 2,000 synthetic images; and (iii) 'Hybrid-8000', incorporating 4,000 authentic images and 4,000 synthetic images. To validate the performance of the classifiers, 144 directly captured images per object were collected from real construction sites as the test dataset. The mean Average Precision at an IoU threshold of 0.5 (mAP_0.5) of the classifiers was 79.6%, 90.8%, and 94.8%, respectively, with the 'Hybrid-8000' model demonstrating the highest performance. Notably, for objects with significant shape variations, the use of synthetic images enhanced the performance of the vision-based classifiers. Moreover, the practical applicability of the proposed classifiers was validated through confidence scores, particularly between the 'Hybrid-4000' and 'Hybrid-8000' models. Statistical analysis using t-tests indicated that, based on confidence scores, the performance of the 'Hybrid-4000' model either matched or exceeded that of the 'Hybrid-8000' model. Thus, employing the 'Hybrid-4000' model may be preferable in terms of data collection efficiency and processing time, contributing to enhanced safety and real-time automation and robotics in construction practices.
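The t-test comparison of per-image confidence scores can be sketched as follows. The scores below are made-up placeholders, and Welch's unequal-variance variant is assumed (the abstract does not specify which t-test was used):

```python
# Sketch of comparing two models' per-image confidence scores with a t-test.
# Scores are hypothetical; Welch's unequal-variance test is assumed.
import math

def welch_t(sample_a, sample_b):
    """Return Welch's t statistic for two independent samples."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)  # sample variance
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical per-image confidence scores for one tool class.
hybrid_4000 = [0.91, 0.88, 0.93, 0.90, 0.89]
hybrid_8000 = [0.90, 0.87, 0.92, 0.91, 0.88]
t = welch_t(hybrid_4000, hybrid_8000)
# A |t| well below the critical value indicates no significant difference,
# matching the paper's conclusion that 'Hybrid-4000' suffices in practice.
```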

Preliminary Application of Synthetic Computed Tomography Image Generation from Magnetic Resonance Image Using Deep-Learning in Breast Cancer Patients

  • Jeon, Wan;An, Hyun Joon;Kim, Jung-in;Park, Jong Min;Kim, Hyoungnyoun;Shin, Kyung Hwan;Chie, Eui Kyu
    • Journal of Radiation Protection and Research
    • /
    • v.44 no.4
    • /
    • pp.149-155
    • /
    • 2019
  • Background: A magnetic resonance (MR) image guided radiation therapy system enables real-time MR-guided radiotherapy (RT) without additional radiation exposure to patients during treatment. However, MR images lack the electron density information required for dose calculation. An image fusion algorithm with deformable registration between MR and computed tomography (CT) images was developed to solve this issue. However, the delivered dose may differ due to volumetric changes during the image registration process. In this respect, a synthetic CT generated from the MR image would provide more accurate information for real-time RT. Materials and Methods: We analyzed 1,209 MR images from 16 patients who underwent MR-guided RT. Structures were divided into five tissue types (air, lung, fat, soft tissue, and bone) according to the Hounsfield units of the deformed CT. Using a deep-learning model (U-Net), synthetic CT images were generated from the MR images acquired during RT. These synthetic CT images were compared to the deformed CT generated by deformable registration. A pixel-to-pixel match was conducted to compare the synthetic and deformed CT images. Results and Discussion: In two test image sets, the average pixel match rate per section was more than 70% (67.9 to 80.3% and 60.1 to 79%; synthetic CT pixel/deformed planning CT pixel), and the average pixel match rate over the entire patient image set was 69.8%. Conclusion: The synthetic CT images generated from the MR images were comparable to the deformed CT, suggesting possible use for real-time RT. A deep-learning model may further improve the match rate of synthetic CT with larger MR imaging datasets.
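The pixel-to-pixel comparison described above reduces to a simple agreement rate over tissue-class labels. A minimal sketch, with illustrative class maps rather than real CT data:

```python
# Minimal sketch of the pixel-to-pixel match: each pixel carries one of the
# five tissue classes, and the match rate is the fraction of pixels whose
# class agrees between the synthetic CT and the deformed planning CT.
# The label maps below are illustrative, not patient data.

TISSUES = ("air", "lung", "fat", "soft", "bone")

def pixel_match_rate(synthetic, deformed):
    """Fraction of pixels with identical tissue class in both images."""
    assert len(synthetic) == len(deformed)
    matches = sum(1 for s, d in zip(synthetic, deformed) if s == d)
    return matches / len(synthetic)

synth    = ["air", "fat", "soft", "soft", "bone", "lung", "soft", "fat", "air", "soft"]
deformed = ["air", "fat", "soft", "bone", "bone", "lung", "soft", "fat", "air", "fat"]
rate = pixel_match_rate(synth, deformed)  # 8 of 10 pixels agree -> 0.8
```

Averaging this rate per section and over all sections gives the per-section and whole-set figures the abstract reports.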

Spine Computed Tomography to Magnetic Resonance Image Synthesis Using Generative Adversarial Networks : A Preliminary Study

  • Lee, Jung Hwan;Han, In Ho;Kim, Dong Hwan;Yu, Seunghan;Lee, In Sook;Song, You Seon;Joo, Seongsu;Jin, Cheng-Bin;Kim, Hakil
    • Journal of Korean Neurosurgical Society
    • /
    • v.63 no.3
    • /
    • pp.386-396
    • /
    • 2020
  • Objective : To generate synthetic spine magnetic resonance (MR) images from spine computed tomography (CT) using generative adversarial networks (GANs), and to determine the similarities between synthesized and real MR images. Methods : GANs were trained to transform spine CT image slices into spine magnetic resonance T2-weighted (MRT2) axial image slices by combining an adversarial loss and a voxel-wise loss. Experiments were performed using 280 pairs of lumbar spine CT scans and MRT2 images. MRT2 images were then synthesized from 15 other spine CT scans. To evaluate whether the synthetic MR images were realistic, two radiologists, two spine surgeons, and two residents blindly classified the real and synthetic MRT2 images. Two experienced radiologists then evaluated the similarities between subdivisions of the real and synthetic MRT2 images. Quantitative analysis of the synthetic MRT2 images was performed using the mean absolute error (MAE) and peak signal-to-noise ratio (PSNR). Results : The mean overall similarity of the synthetic MRT2 images evaluated by the radiologists was 80.2%. In the blind classification of the real MRT2 images, the failure rate ranged from 0% to 40%. The MAE value of each image ranged from 13.75 to 34.24 pixels (mean, 21.19 pixels), and the PSNR of each image ranged from 61.96 to 68.16 dB (mean, 64.92 dB). Conclusion : This was the first study to apply GANs to synthesize spine MR images from CT images. Despite the small dataset of 280 pairs, the synthetic MR images were synthesized relatively well. Synthesis of medical images using GANs is a new paradigm of artificial intelligence application in medical imaging. We expect that synthesis of MR images from spine CT images using GANs will improve the diagnostic usefulness of CT. To better inform the clinical applications of this technique, further studies are needed involving a large dataset, a variety of pathologies, and other MR sequences of the lumbar spine.
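The two quantitative metrics above have standard definitions, sketched below for flattened toy images (the peak value of 255 for 8-bit intensities is an assumption; the abstract reports MAE in pixels and PSNR in dB):

```python
# Sketch of the two reported metrics: mean absolute error (MAE) and
# peak signal-to-noise ratio (PSNR). MAX = 255 (8-bit) is assumed.
import math

def mae(real, synthetic):
    """Mean absolute error between two equally sized images."""
    return sum(abs(r - s) for r, s in zip(real, synthetic)) / len(real)

def psnr(real, synthetic, max_val=255.0):
    """Peak signal-to-noise ratio in dB (higher = closer match)."""
    mse = sum((r - s) ** 2 for r, s in zip(real, synthetic)) / len(real)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy flattened images.
real_img  = [100, 120, 130, 140]
synth_img = [102, 118, 131, 138]
```

A lower MAE and a higher PSNR both indicate that the synthetic image is closer to the real one.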

Generation of Synthetic Particle Images for Particle Image Velocimetry using Physics-Informed Neural Network (물리 기반 인공신경망을 이용한 PIV용 합성 입자이미지 생성)

  • Choi, Hyeon Jo;Shin, Myeong Hyeon;Park, Jong Ho;Park, Jinsoo
    • Journal of the Korean Society of Visualization
    • /
    • v.21 no.1
    • /
    • pp.119-126
    • /
    • 2023
  • Acquiring experimental data for PIV verification or for machine-learning training is resource-demanding, leading to increasing interest in synthetic particle images as simulation data. Conventional synthetic particle image generation algorithms do not follow physical laws, while the use of CFD is time-consuming and requires computing resources. In this study, we propose a new method for synthetic particle image generation based on a physics-informed neural network (PINN). The PINN is used to infer the flow fields, enabling the generation of synthetic particle images that follow physical laws, with reduced computation time and no spatial-resolution constraints compared to CFD. The proposed method is expected to contribute to the verification of PIV algorithms.
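For context, the conventional baseline the paper improves on can be sketched as follows: particles are rendered as Gaussian intensity spots, and the second frame displaces them by a prescribed velocity field. The particle size, count, and uniform flow here are assumptions for illustration, not the paper's PINN method.

```python
# Hedged sketch of conventional synthetic particle image generation
# (the baseline, not the PINN): Gaussian particle spots, then a second
# frame with particles displaced by a prescribed flow.
import math

def render(positions, size=16, sigma=1.2):
    """Render particles as Gaussian spots on a size x size grid."""
    img = [[0.0] * size for _ in range(size)]
    for px, py in positions:
        for y in range(size):
            for x in range(size):
                r2 = (x - px) ** 2 + (y - py) ** 2
                img[y][x] += math.exp(-r2 / (2 * sigma ** 2))
    return img

def advect(positions, u, v, dt=1.0):
    """Displace each particle by a uniform flow (u, v) over time dt."""
    return [(x + u * dt, y + v * dt) for x, y in positions]

particles = [(4.0, 4.0), (10.0, 7.0)]
frame_a = render(particles)
frame_b = render(advect(particles, u=2.0, v=0.0))  # shifted 2 px in x
```

A PIV algorithm cross-correlating `frame_a` and `frame_b` should recover the imposed 2-pixel displacement, which is what makes such image pairs useful as verification data.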

Evaluation of Effect of Decrease in Metallic Artifacts using the Synthetic MR Technique (Synthetic MR 기법을 이용한 금속 인공물 감소 효과 평가)

  • Kwon, Soon-Yong;Ahn, Nam-Yong;Oh, Jeong-Eun;Kim, Seong-Ho
    • Journal of the Korean Society of Radiology
    • /
    • v.16 no.7
    • /
    • pp.835-842
    • /
    • 2022
  • This study aimed to evaluate the effect of a synthetic MR technique in reducing metal artifacts. In the experiment, in-plane and through-plane images were acquired by applying a synthetic MR technique and a fast spin-echo technique to a phantom manufactured with a spinal-surgery screw, and the areas of the metal artifacts were compared. The metal artifacts were measured by dividing them into signal-loss and signal pile-up areas, and the final artifact area was calculated as the sum of the two. As a result, the metal artifacts were reduced when the synthetic MR technique was applied to both the in-plane and through-plane images. By sequence, the in-plane artifacts decreased by 23.45% for T1 images, 20.85% for T2 images, 19.67% for PD images, and 22.12% for FLAIR images. For the through-plane images, the artifacts decreased by 62.95% for T1, 73.93% for T2, 74.68% for PD, and 66.43% for FLAIR. The cause of this result is that when the synthetic MR technique is applied, distortion due to signal pile-up does not occur, so the size of the entire metal artifact is reduced. Therefore, the synthetic MR technique can effectively reduce metal artifacts, which can help increase the diagnostic value of the images.
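The artifact measurement above is a simple sum-and-compare; a minimal sketch with placeholder areas (the numbers are not the paper's measurements):

```python
# Sketch of the artifact measurement: total artifact area is the sum of the
# signal-loss and signal pile-up areas, and the improvement is reported as
# a percentage decrease versus the fast spin-echo baseline.
# All areas below are placeholders for illustration.

def total_artifact_area(signal_loss_area, pile_up_area):
    """Final artifact area = signal-loss area + signal pile-up area."""
    return signal_loss_area + pile_up_area

def percent_reduction(baseline_area, synthetic_area):
    """Percentage decrease of the artifact with the synthetic MR technique."""
    return 100.0 * (baseline_area - synthetic_area) / baseline_area

fse_area = total_artifact_area(signal_loss_area=300.0, pile_up_area=100.0)
syn_area = total_artifact_area(signal_loss_area=280.0, pile_up_area=30.0)
drop = percent_reduction(fse_area, syn_area)  # 22.5% smaller artifact
```

Note that in this toy example most of the reduction comes from the pile-up term, mirroring the paper's explanation that the synthetic technique avoids pile-up distortion.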

Web-based synthetic-aperture radar data management system and land cover classification

  • Dalwon Jang;Jaewon Lee;Jong-Seol Lee
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.7
    • /
    • pp.1858-1872
    • /
    • 2023
  • With the advance of radar technologies, the availability of synthetic aperture radar (SAR) images has increased. To facilitate the use of SAR images, a management system for SAR images is proposed in this paper. The system provides a trainable land-cover classification module and displays SAR images on a map. Users of the system can create their own classifiers with their data and obtain classified results for newly captured SAR images by applying the classifier to them. The classifier is based on a convolutional neural network structure. Since SAR images differ depending on the capturing method and device, a single fixed classifier cannot cover all types of SAR land-cover classification problems; the system is therefore designed to let each user create their own classifier. In our experiments, the module is shown to work well with two different SAR datasets. With this system, SAR data and land-cover classification results are managed and easily displayed.

BORA IN THE ADRIATIC SEA AND BLACK SEA IMAGED BY THE ENVISAT SYNTHETIC APERTURE RADAR

  • Ivanov, Andrei Yu.;Alpers, Werner
    • Proceedings of the KSRS Conference
    • /
    • v.2
    • /
    • pp.964-968
    • /
    • 2006
  • Bora events over the Adriatic Sea and Black Sea are investigated using synthetic aperture radar (SAR) images acquired by the Advanced Synthetic Aperture Radar (ASAR) onboard the European Envisat satellite. These images show pronounced elongated patterns of increased sea surface roughness caused by bora winds. Comparison of the SAR images with wind fields derived from QuikSCAT data confirms that in all cases a strong northeasterly wind was blowing from the mountains onto the sea. It is shown that the SAR images reveal details of the spatial extent of the bora wind fields over the sea which cannot be obtained by other instruments. Furthermore, quantitative information on the wind field is also extracted from the SAR images using a wind scatterometer model.
