• Title/Summary/Keyword: Generated Synthetic Images

Preliminary Application of Synthetic Computed Tomography Image Generation from Magnetic Resonance Image Using Deep-Learning in Breast Cancer Patients

  • Jeon, Wan;An, Hyun Joon;Kim, Jung-in;Park, Jong Min;Kim, Hyoungnyoun;Shin, Kyung Hwan;Chie, Eui Kyu
    • Journal of Radiation Protection and Research / v.44 no.4 / pp.149-155 / 2019
  • Background: An MR image-guided radiation therapy system enables real-time MR-guided radiotherapy (RT) without additional radiation exposure to patients during treatment. However, MR images lack the electron density information required for dose calculation. An image fusion algorithm with deformable registration between MR and computed tomography (CT) images was developed to solve this issue. However, the delivered dose may differ due to volumetric changes during the image registration process. In this respect, a synthetic CT generated from the MR image would provide more accurate information for real-time RT. Materials and Methods: We analyzed 1,209 MR images from 16 patients who underwent MR-guided RT. Structures were divided into five tissue types (air, lung, fat, soft tissue, and bone) according to the Hounsfield units of the deformed CT. Using a deep learning model (U-Net), synthetic CT images were generated from the MR images acquired during RT. These synthetic CT images were compared to deformed CT images generated using deformable registration, with a pixel-to-pixel match between the two. Results and Discussion: In two test image sets, the average pixel match rate per section was more than 70% (67.9% to 80.3% and 60.1% to 79%; synthetic CT pixels vs. deformed planning CT pixels), and the average pixel match rate over the entire patient image set was 69.8%. Conclusion: The synthetic CT generated from the MR images was comparable to the deformed CT, suggesting possible use for real-time RT. A deep learning model may further improve the match rate of synthetic CT with larger MR imaging datasets.
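
The pixel-to-pixel comparison described above can be illustrated with a short sketch: tissue classes are assigned from Hounsfield units, and the agreement between the synthetic and deformed CT is counted per pixel. The HU thresholds below are illustrative assumptions, not values reported in the paper.

```python
import numpy as np

# Illustrative HU class edges for air, lung, fat, soft tissue, bone
# (assumed values, not taken from the paper).
HU_BINS = [-1000, -850, -200, -30, 150, 3000]

def tissue_class(ct_hu):
    """Map a CT image in Hounsfield units to integer tissue labels 0-4."""
    return np.clip(np.digitize(ct_hu, HU_BINS) - 1, 0, 4)

def pixel_match_rate(synthetic_ct, deformed_ct):
    """Fraction of pixels whose tissue class agrees between the two CTs."""
    syn_labels = tissue_class(synthetic_ct)
    ref_labels = tissue_class(deformed_ct)
    return float(np.mean(syn_labels == ref_labels))

# Toy usage with random placeholder images of matching shape
syn = np.random.uniform(-1000, 1500, size=(256, 256))
ref = np.random.uniform(-1000, 1500, size=(256, 256))
print(f"pixel match rate: {pixel_match_rate(syn, ref):.1%}")
```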

A Study on Illumination Mechanism of Steel Plate Inspection Using Wavelet Synthetic Images (이산 웨이블릿 합성 영상을 이용한 철강 후판 검사의 조명 메커니즘에 관한 연구)

  • Cho, Eun Deok;Kim, Gyung Bum
    • Journal of the Semiconductor & Display Technology / v.17 no.2 / pp.26-31 / 2018
  • In this paper, surface defects and typical illumination mechanisms for steel plates are analyzed, and then the optimum illumination mechanism is selected using discrete wavelet transform (DWT) synthetic images and a discriminant measure (DM). The DWT synthetic images are generated from component images decomposed with a Haar wavelet transform filter. The best synthetic image for each surface defect is determined using the signal-to-noise ratio (SNR). The optimum illumination mechanism is then selected by applying the DM to the best synthetic images. The DM is computed with the Tenengrad-Euclidean function and evaluates the degree of contrast using defect boundary information. The performance of the optimum illumination mechanism is verified with quantitative data and visual inspection of the images.
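
As a rough illustration of the kind of processing described, the sketch below decomposes an image with a single-level Haar DWT, recombines selected subbands into a synthetic image, and scores it with a Tenengrad-style gradient measure. The subband weights and the plain Tenengrad score are assumptions; the paper's SNR-based selection and Tenengrad-Euclidean discriminant measure are more specific.

```python
import numpy as np
import pywt
from scipy import ndimage

def haar_components(image):
    """Single-level 2-D Haar DWT: approximation plus three detail subbands."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), 'haar')
    return {'A': cA, 'H': cH, 'V': cV, 'D': cD}

def synthetic_image(components, weights):
    """Weighted recombination of subbands into one synthetic image."""
    cA = weights.get('A', 1.0) * components['A']
    cH = weights.get('H', 0.0) * components['H']
    cV = weights.get('V', 0.0) * components['V']
    cD = weights.get('D', 0.0) * components['D']
    return pywt.idwt2((cA, (cH, cV, cD)), 'haar')

def tenengrad(image):
    """Tenengrad contrast measure: mean squared Sobel gradient magnitude."""
    gx = ndimage.sobel(image, axis=1)
    gy = ndimage.sobel(image, axis=0)
    return float(np.mean(gx**2 + gy**2))

# Toy usage: emphasize horizontal detail and score the result
img = np.random.rand(128, 128)
syn = synthetic_image(haar_components(img), {'A': 1.0, 'H': 2.0})
print(f"Tenengrad score: {tenengrad(syn):.3f}")
```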

A Novel Approach to Mugshot Based Arbitrary View Face Recognition

  • Zeng, Dan;Long, Shuqin;Li, Jing;Zhao, Qijun
    • Journal of the Optical Society of Korea / v.20 no.2 / pp.239-244 / 2016
  • Mugshot face images, routinely collected by police, usually contain both frontal and profile views. Existing automated face recognition methods exploit mugshot databases by enlarging the gallery with synthetic multi-view face images generated from the mugshot face images. This paper instead proposes matching the arbitrary-view query face image directly to the enrolled frontal and profile face images. During matching, the 3D face shape model reconstructed from the mugshot face images is used to establish corresponding semantic parts between the query and gallery face images, on which the comparison is based. The final recognition result is obtained by fusing the matching results with the frontal and profile face images. Compared with previous methods, the proposed method makes better use of mugshot databases without relying on synthetic face images that may contain artifacts. Its effectiveness has been demonstrated on the Color FERET and CMU PIE databases.

3D Image Display Method using Synthetic Aperture integral imaging (Synthetic aperture 집적 영상을 이용한 3D 영상 디스플레이 방법)

  • Shin, Dong-Hak;Yoo, Hoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.9 / pp.2037-2042 / 2012
  • Synthetic aperture integral imaging (SAII) is a promising 3D imaging technique that captures high-resolution elemental images using multiple cameras. In this paper, we propose a method for displaying 3D images in space using SAII. Since the elemental images captured by SAII cannot be used directly in an integral imaging display system, we first extract a depth map from the elemental images and then transform them into novel elemental images for 3D display. The newly generated elemental images are shown on a display panel to produce 3D images in space. To show the usefulness of the proposed method, we carry out preliminary experiments with a 3D toy object and present the results.
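
A common building block in SAII processing is shift-and-sum reconstruction, where each elemental image is shifted in proportion to its camera offset and the chosen depth, then averaged. The sketch below shows that step only, under assumed parameter names; the paper's full pipeline (depth-map extraction and generation of novel elemental images for display) goes further.

```python
import numpy as np

def shift_and_sum(elemental_images, pitch_px, depth_ratio):
    """
    Reconstruct one depth plane from a grid of SAII elemental images by
    shifting each view in proportion to its camera offset and averaging.
    elemental_images: array of shape (rows, cols, H, W)
    pitch_px: camera spacing expressed in sensor pixels
    depth_ratio: focal length divided by the reconstruction depth (g / z)
    """
    rows, cols, H, W = elemental_images.shape
    recon = np.zeros((H, W))
    for r in range(rows):
        for c in range(cols):
            dy = int(round((r - rows // 2) * pitch_px * depth_ratio))
            dx = int(round((c - cols // 2) * pitch_px * depth_ratio))
            recon += np.roll(elemental_images[r, c], shift=(dy, dx), axis=(0, 1))
    return recon / (rows * cols)

# Toy usage with a 3x3 grid of random elemental images
views = np.random.rand(3, 3, 64, 64)
plane = shift_and_sum(views, pitch_px=10, depth_ratio=0.2)
```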

Game Engine Driven Synthetic Data Generation for Computer Vision-Based Construction Safety Monitoring

  • Lee, Heejae;Jeon, Jongmoo;Yang, Jaehun;Park, Chansik;Lee, Dongmin
    • International conference on construction engineering and project management / 2022.06a / pp.893-903 / 2022
  • Recently, computer vision (CV)-based safety monitoring (i.e., object detection) systems have been widely researched in the construction industry. Sufficient, high-quality data collection is required to detect objects accurately, and it is especially important for detecting small objects or objects captured from different camera angles. Although several previous studies have proposed novel data augmentation and synthetic data generation approaches, the problem is still not thoroughly addressed (i.e., accuracy remains limited) in dynamic construction work environments. In this study, we propose a game engine-driven synthetic data generation model to enhance the accuracy of a CV-based object detection model, mainly targeting small objects. In a virtual 3D environment, we generate synthetic data to complement the training images by altering the virtual camera angles. The main contribution of this paper is to confirm whether synthetic data generated in a game engine can improve the accuracy of a CV-based object detection model.

Improve object recognition using UWB SAR imaging with compressed sensing

  • Pham, The Hien;Hong, Ic-Pyo
    • Journal of IKEEE / v.25 no.1 / pp.76-82 / 2021
  • In this paper, the compressed-sensing basis pursuit denoising algorithm applied to synthetic aperture radar (SAR) imaging is investigated to improve object recognition. The compressed sensing algorithm is used to recover the data from incomplete data sets before the conventional back-projection algorithm is applied to obtain the SAR images. This method reduces the number of measurements required while scanning the objects. An ultra-wideband (UWB) radar scheme using a stripmap SAR algorithm was utilized to detect objects hidden inside a box. The UWB radar system, with a 3.1-4.8 GHz bandwidth and a UWB antenna, was implemented to transmit and receive signal data from two conductive cylinders located inside a paper box. The results confirm that the images can be reconstructed from a 30% randomly selected dataset without noticeable distortion compared to the images generated from the full data using the conventional back-projection algorithm.
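
Basis pursuit denoising in its Lagrangian (LASSO) form can be solved with a simple iterative soft-thresholding scheme. The sketch below recovers a sparse signal from roughly 30% random measurements; it is a toy stand-in for the paper's SAR pipeline, which combines recovery with back-projection imaging, and all parameter values are assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_bpdn(A, y, lam=0.1, n_iter=200):
    """
    Solve the Lagrangian form of basis pursuit denoising,
    min_x 0.5 * ||A x - y||_2^2 + lam * ||x||_1, with ISTA.
    """
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Toy usage: recover a 5-sparse signal from ~30% random measurements
rng = np.random.default_rng(0)
n, m = 200, 60
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 1.0
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = ista_bpdn(A, y)
print(f"recovery error: {np.linalg.norm(x_hat - x_true):.3f}")
```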

Transfer Learning-based Generated Synthetic Images Identification Model (전이 학습 기반의 생성 이미지 판별 모델 설계)

  • Chaewon Kim;Sungyeon Yoon;Myeongeun Han;Minseo Park
    • The Journal of the Convergence on Culture Technology / v.10 no.2 / pp.465-470 / 2024
  • The advancement of AI-based image generation technology has resulted in the creation of a wide variety of images, emphasizing the need for technology capable of accurately discriminating them. Because the amount of generated image data is limited, this study proposes a model for discriminating generated images using transfer learning to achieve high performance with a limited dataset. By applying models pre-trained on the ImageNet dataset directly to the CIFAKE input dataset, we reduce training time, and we then add three hidden layers and one output layer to fine-tune the model. The modeling results revealed an improvement in performance when the final layers were adjusted. By using transfer learning and then adjusting the layers close to the output, accuracy issues caused by small image datasets can be reduced and generated images can be classified.
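
A hedged sketch of the transfer-learning setup described above: a backbone pre-trained on ImageNet is frozen, and three hidden layers plus a one-unit output layer are added for the real-versus-generated decision. The ResNet50 backbone, layer widths, and optimizer are assumptions, as the abstract does not specify them.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical configuration (backbone, widths, optimizer are not from the paper).
base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=(32, 32, 3), pooling="avg")
base.trainable = False  # freeze the pre-trained ImageNet features

model = models.Sequential([
    base,
    layers.Dense(256, activation="relu"),   # three added hidden layers
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # real vs. generated output layer
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # CIFAKE-style dataset
```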

Assessment of DEM Generated by Stereo C-band and X-band SAR images using Radargrammetry (Radargrammetry를 이용한 C-밴드 및 X-밴드 SAR 위성영상의 DEM 생성 평가)

  • Song, Yeong Sun;Kim, Gi Hong
    • Journal of Korean Society for Geospatial Information Science / v.21 no.4 / pp.109-116 / 2013
  • To extract 3D geometric information from SAR (Synthetic Aperture Radar) images, two different techniques, interferometric SAR (InSAR) and radargrammetry, have been widely used. InSAR has so far been the most widely used technique for generating precise DEMs (Digital Elevation Models). However, InSAR suffers from severe temporal decorrelation over areas covered with vegetation and over high-relief areas. Because radargrammetry is less sensitive to temporal decorrelation, it can provide better results than InSAR in such areas, especially with X-band SAR. In this paper, we assess the properties of DEMs generated by radargrammetry using stereo C-band RADARSAT-1 images and X-band TerraSAR-X images.

3D Building Detection and Reconstruction from Aerial Images Using Perceptual Organization and Fast Graph Search

  • Woo, Dong-Min;Nguyen, Quoc-Dat
    • Journal of Electrical Engineering and Technology / v.3 no.3 / pp.436-443 / 2008
  • This paper presents a new method for building detection and reconstruction from aerial images. In our approach, we extract useful building location information from the generated disparity map to segment the objects of interest and consequently reduce the unnecessary line segments extracted in the low-level feature extraction step. Hypothesis selection is carried out using an undirected graph, in which closed cycles represent complete rooftop hypotheses. We test the proposed method with synthetic images generated from the Avenches dataset of Ascona aerial images. The experimental results show that the extracted 3D line segments of the reconstructed buildings have an average error of 1.69 m and that our method can be used efficiently for building detection and reconstruction from aerial images.
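
The rooftop-hypothesis idea can be illustrated with a small graph sketch: junctions become nodes, extracted line segments become edges, and closed cycles are candidate rooftops. This uses a simple cycle basis rather than the paper's fast graph search, and all names and inputs are illustrative.

```python
import networkx as nx

def rooftop_hypotheses(line_segments, junctions):
    """
    Build an undirected graph whose nodes are line-segment junctions and
    whose edges are extracted line segments; closed cycles in the graph
    are candidate rooftop hypotheses (inputs here are hypothetical).
    """
    g = nx.Graph()
    g.add_nodes_from(junctions)
    g.add_edges_from(line_segments)   # each segment joins two junctions
    return nx.cycle_basis(g)          # each basis cycle = one rooftop hypothesis

# Toy usage: four corners of one rectangular rooftop
corners = [(0, 0), (10, 0), (10, 6), (0, 6)]
segments = [(corners[i], corners[(i + 1) % 4]) for i in range(4)]
print(rooftop_hypotheses(segments, corners))
```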

Dosimetric Evaluation of Synthetic Computed Tomography Technique on Position Variation of Air Cavity in Magnetic Resonance-Guided Radiotherapy

  • Hyeongmin Jin;Hyun Joon An;Eui Kyu Chie;Jong Min Park;Jung-in Kim
    • Progress in Medical Physics / v.33 no.4 / pp.142-149 / 2022
  • Purpose: This study compares the dosimetric parameters of the bulk electron density (ED) approach and synthetic computed tomography (CT) images with respect to position variation of the air cavity in magnetic resonance-guided radiotherapy (MRgRT) for patients with pancreatic cancer. Methods: This study included nine patients who previously received MRgRT; their simulation CT and magnetic resonance (MR) images were collected. Air cavities were manually delineated on the simulation CT and MR images in the treatment planning system for each patient. The synthetic CT images were generated using the deep learning model trained in a prior study. Two additional plans with identical beam parameters were recalculated with ED maps that were either manually overridden by the cavities or derived from the synthetic CT. Dose calculation accuracy was explored in terms of dose-volume histogram parameters and gamma analysis. Results: The D95% averages were 48.80 Gy, 48.50 Gy, and 48.23 Gy for the original, manually assigned, and synthetic CT-based dose distributions, respectively. The greatest deviation was observed for one patient, whose D95% on the synthetic CT was 1.84 Gy higher than in the original plan. Conclusions: The variation of the air cavity position in the gastrointestinal area affects the treatment dose calculation. Synthetic CT-based ED modification would be a viable option for shortening the time-consuming manual override process and improving MRgRT treatment accuracy.
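
D95%, the dose-volume histogram parameter compared above, is the minimum dose received by the best-covered 95% of a structure. The sketch below computes it directly from voxel doses inside a structure mask; the dose grid and target mask are random placeholders, not data from the study.

```python
import numpy as np

def d95(dose_volume, structure_mask):
    """
    Minimum dose received by the hottest 95% of a structure (D95%),
    computed directly from voxel doses inside the structure mask.
    """
    doses = np.sort(dose_volume[structure_mask])[::-1]   # descending order
    n95 = int(np.ceil(0.95 * doses.size))
    return float(doses[n95 - 1])

# Toy usage with a random dose grid and a spherical target mask (illustrative)
rng = np.random.default_rng(1)
dose = rng.normal(50.0, 1.0, size=(64, 64, 64))           # Gy
zz, yy, xx = np.mgrid[:64, :64, :64]
target = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
print(f"D95% = {d95(dose, target):.2f} Gy")
```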