• Title/Abstract/Keyword: Generated Synthetic Images

Search results: 76 items

Preliminary Application of Synthetic Computed Tomography Image Generation from Magnetic Resonance Image Using Deep-Learning in Breast Cancer Patients

  • Jeon, Wan;An, Hyun Joon;Kim, Jung-in;Park, Jong Min;Kim, Hyoungnyoun;Shin, Kyung Hwan;Chie, Eui Kyu
    • Journal of Radiation Protection and Research / Vol. 44, No. 4 / pp.149-155 / 2019
  • Background: A magnetic resonance (MR) image guided radiation therapy system enables real-time MR guided radiotherapy (RT) without additional radiation exposure to patients during treatment. However, MR images lack the electron density information required for dose calculation. An image fusion algorithm with deformable registration between MR and computed tomography (CT) images was developed to address this issue. However, the delivered dose may differ because of volumetric changes during the registration process. In this respect, a synthetic CT generated from the MR image would provide more accurate information for real-time RT. Materials and Methods: We analyzed 1,209 MR images from 16 patients who underwent MR guided RT. Structures were divided into five tissue types (air, lung, fat, soft tissue, and bone) according to the Hounsfield units of the deformed CT. Using a deep learning model (U-Net), synthetic CT images were generated from the MR images acquired during RT. These synthetic CT images were compared with the deformed CT generated using deformable registration. A pixel-to-pixel comparison was conducted between the synthetic and deformed CT images. Results and Discussion: In two test image sets, the average pixel match rate per section was more than 70% (67.9 to 80.3% and 60.1 to 79%; synthetic CT pixels/deformed planning CT pixels), and the average pixel match rate over the entire patient image set was 69.8%. Conclusion: The synthetic CT generated from the MR images was comparable to the deformed CT, suggesting its possible use for real-time RT. The deep learning model may further improve the match rate of the synthetic CT with larger MR imaging datasets.
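A minimal sketch of the pixel-to-pixel comparison described above: both CT volumes are binned into the five tissue classes by Hounsfield unit, and the fraction of pixels assigned the same class is reported. The HU cut-offs and the random stand-in slices are illustrative assumptions, not values from the paper.

```python
# Sketch: tissue-class pixel match rate between a synthetic CT and a deformed CT.
import numpy as np

HU_EDGES = [-1024, -950, -500, -100, 150, 3000]  # air, lung, fat, soft tissue, bone (assumed cut-offs)

def tissue_labels(ct_hu: np.ndarray) -> np.ndarray:
    """Map a CT slice (HU values) to integer tissue classes 0..4."""
    return np.clip(np.digitize(ct_hu, HU_EDGES[1:-1]), 0, 4)

def pixel_match_rate(synthetic_ct: np.ndarray, deformed_ct: np.ndarray) -> float:
    """Fraction of pixels assigned the same tissue class in both images."""
    return float(np.mean(tissue_labels(synthetic_ct) == tissue_labels(deformed_ct)))

# Example with random slices standing in for real data:
rng = np.random.default_rng(0)
fake_synth = rng.uniform(-1000, 1500, size=(256, 256))
fake_deformed = fake_synth + rng.normal(0, 50, size=(256, 256))
print(f"match rate: {pixel_match_rate(fake_synth, fake_deformed):.1%}")
```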

이산 웨이블릿 합성 영상을 이용한 철강 후판 검사의 조명 메커니즘에 관한 연구 (A Study on Illumination Mechanism of Steel Plate Inspection Using Wavelet Synthetic Images)

  • 조은덕;김경범
    • 반도체디스플레이기술학회지 / Vol. 17, No. 2 / pp.26-31 / 2018
  • In this paper, surface defects and typical illumination mechanisms for steel plates are analyzed, and then the optimum illumination mechanism is selected using discrete wavelet transform (DWT) synthetic images and a discriminant measure (DM). The DWT synthetic images are generated from component images decomposed by a Haar wavelet transform filter. The best synthetic image for each surface defect is determined using the signal-to-noise ratio (SNR). The optimum illumination mechanism is then selected by applying the DM to the best synthetic images. The DM is computed using the Tenengrad-Euclidean function and evaluates the degree of contrast using defect boundary information. The performance of the optimum illumination mechanism is verified with quantitative data and visual inspection of the images.
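The selection step above can be illustrated with a small sketch, assuming PyWavelets: an inspection image is decomposed with the Haar DWT, the sub-bands are recombined into candidate synthetic images, and the candidate with the highest SNR-style score over a defect mask is kept. The recombination rules and the SNR definition are illustrative assumptions, not the paper's exact formulas.

```python
# Sketch: Haar DWT synthetic images ranked by an SNR-style contrast score.
import numpy as np
import pywt

def dwt_synthetic_images(img: np.ndarray) -> dict:
    """Decompose with a Haar DWT and recombine sub-bands into candidates."""
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
    return {"A": cA, "A+H": cA + cH, "A+V": cA + cV, "A+H+V": cA + cH + cV}

def snr(img: np.ndarray, defect_mask: np.ndarray) -> float:
    """Contrast between defect and background relative to background spread."""
    fg, bg = img[defect_mask], img[~defect_mask]
    return abs(fg.mean() - bg.mean()) / (bg.std() + 1e-9)

def best_synthetic(img: np.ndarray, defect_mask: np.ndarray):
    """Return (name, image) of the highest-SNR candidate (even image sizes assumed)."""
    candidates = dwt_synthetic_images(img)
    mask_lo = defect_mask[::2, ::2]          # match the sub-band resolution
    return max(candidates.items(), key=lambda kv: snr(kv[1], mask_lo))
```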

A Novel Approach to Mugshot Based Arbitrary View Face Recognition

  • Zeng, Dan;Long, Shuqin;Li, Jing;Zhao, Qijun
    • Journal of the Optical Society of Korea / Vol. 20, No. 2 / pp.239-244 / 2016
  • Mugshot face images, routinely collected by police, usually contain both frontal and profile views. Existing automated face recognition methods exploit mugshot databases by enlarging the gallery with synthetic multi-view face images generated from the mugshot images. This paper instead proposes matching the arbitrary-view query face image directly to the enrolled frontal and profile face images. During matching, the 3D face shape model reconstructed from the mugshot face images is used to establish corresponding semantic parts between the query and gallery face images, on which the comparison is based. The final recognition result is obtained by fusing the matching results against the frontal and profile face images. Compared with previous methods, the proposed method better utilizes mugshot databases without using synthetic face images that may contain artifacts. Its effectiveness has been demonstrated on the Color FERET and CMU PIE databases.
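A minimal sketch of the fusion step described above: per-subject similarity scores against the enrolled frontal and profile images are combined before the identity decision. The sum-rule weighting and the toy scores are illustrative assumptions, not the paper's exact fusion rule.

```python
# Sketch: sum-rule fusion of frontal and profile matching scores.
import numpy as np

def fuse_scores(frontal_scores: np.ndarray,
                profile_scores: np.ndarray,
                w_frontal: float = 0.5) -> np.ndarray:
    """Weighted sum-rule fusion of per-gallery-subject matching scores."""
    return w_frontal * frontal_scores + (1.0 - w_frontal) * profile_scores

def identify(frontal_scores, profile_scores, subject_ids):
    """Return the gallery identity with the highest fused score."""
    fused = fuse_scores(np.asarray(frontal_scores), np.asarray(profile_scores))
    return subject_ids[int(np.argmax(fused))]

# Toy example with three gallery subjects:
print(identify([0.42, 0.71, 0.55], [0.48, 0.62, 0.80], np.array(["s01", "s02", "s03"])))
```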

Synthetic aperture 집적 영상을 이용한 3D 영상 디스플레이 방법 (3D Image Display Method using Synthetic Aperture integral imaging)

  • 신동학;유훈
    • 한국정보통신학회논문지 / Vol. 16, No. 9 / pp.2037-2042 / 2012
  • Synthetic aperture integral imaging (SAII) is a promising 3D imaging technique that acquires high-resolution elemental images using multiple cameras. In this paper, we propose a method for displaying volumetric 3D images on an integral imaging display using SAII. Since the elemental images obtained from SAII cannot be used directly for volumetric 3D display, a depth map is extracted and converted into new elemental images for display, which are then used to present the volumetric 3D image. To demonstrate the usefulness of the proposed method, basic experiments were performed with a toy 3D object, and experimental results showing the reconstructed volumetric 3D image are presented.
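A minimal sketch of the kind of shift-and-average reconstruction commonly used with SAII to bring a chosen depth plane into focus, which is one way to derive a depth map from camera-array elemental images. The camera pitch and shift model are illustrative assumptions, not the paper's pipeline.

```python
# Sketch: SAII-style computational reconstruction of one depth plane.
import numpy as np

def reconstruct_plane(elemental: np.ndarray, pitch_px: float,
                      depth: float, focal: float) -> np.ndarray:
    """elemental: (rows, cols, H, W) images from a camera array.
    Each image is shifted in proportion to its camera offset and averaged."""
    rows, cols, H, W = elemental.shape
    out = np.zeros((H, W), dtype=np.float64)
    for r in range(rows):
        for c in range(cols):
            # disparity of this camera for a plane at distance `depth`
            dy = int(round((r - rows // 2) * pitch_px * focal / depth))
            dx = int(round((c - cols // 2) * pitch_px * focal / depth))
            out += np.roll(elemental[r, c], shift=(-dy, -dx), axis=(0, 1))
    return out / (rows * cols)
```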

Game Engine Driven Synthetic Data Generation for Computer Vision-Based Construction Safety Monitoring

  • Lee, Heejae;Jeon, Jongmoo;Yang, Jaehun;Park, Chansik;Lee, Dongmin
    • 국제학술발표논문집 / The 9th International Conference on Construction Engineering and Project Management / pp.893-903 / 2022
  • Recently, computer vision (CV)-based safety monitoring (i.e., object detection) systems have been widely researched in the construction industry. Sufficient and high-quality data collection is required to detect objects accurately, and such data collection is especially important for detecting small objects or objects seen from different camera angles. Although several previous studies proposed novel data augmentation and synthetic data generation approaches, the problem is still not thoroughly addressed (i.e., accuracy remains limited) in dynamic construction work environments. In this study, we proposed a game engine-driven synthetic data generation model to enhance the accuracy of the CV-based object detection model, mainly targeting small objects. In a virtual 3D environment, we generated synthetic data to complement the training images by altering the virtual camera angles. The main contribution of this paper is to confirm whether synthetic data generated in a game engine can improve the accuracy of a CV-based object detection model.
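A minimal sketch of the camera-angle variation idea: virtual camera poses are sampled on a sphere around a target object so that each rendered frame (and its automatically generated label) views the scene from a different angle. The pose parameterization is an illustrative assumption; the rendering itself would run inside the game engine.

```python
# Sketch: sample virtual camera poses around a target for synthetic rendering.
import math
import random

def sample_camera_poses(n: int, radius: float, target=(0.0, 0.0, 0.0)):
    """Return n camera poses on a sphere of given radius, looking at the target."""
    poses = []
    for _ in range(n):
        azimuth = random.uniform(0.0, 2.0 * math.pi)
        elevation = random.uniform(math.radians(10), math.radians(80))
        x = target[0] + radius * math.cos(elevation) * math.cos(azimuth)
        y = target[1] + radius * math.cos(elevation) * math.sin(azimuth)
        z = target[2] + radius * math.sin(elevation)
        poses.append({"position": (x, y, z), "look_at": target})
    return poses

for pose in sample_camera_poses(3, radius=15.0):
    print(pose)
```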


Improve object recognition using UWB SAR imaging with compressed sensing

  • Pham, The Hien;Hong, Ic-Pyo
    • 전기전자학회논문지 / Vol. 25, No. 1 / pp.76-82 / 2021
  • In this paper, the compressed sensing basis pursuit denoising algorithm applied to synthetic aperture radar (SAR) imaging is investigated to improve object recognition. Starting from incomplete data sets, the compressed sensing algorithm is used to recover the data before the conventional back-projection algorithm is applied to obtain the SAR images. This method reduces the number of measurement events while scanning the objects. An ultra-wideband (UWB) radar scheme using a stripmap SAR algorithm was utilized to detect objects hidden behind the box. The UWB radar system, with a 3.1~4.8 GHz bandwidth and UWB antennas, was implemented to transmit and receive signal data from two conductive cylinders located inside the paper box. The results confirm that the images can be reconstructed from a randomly selected 30% of the data without noticeable distortion compared with the images generated from the full data using the conventional back-projection algorithm.
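A minimal sketch of the recovery idea above: only a randomly selected 30% of the measurements are kept, and a sparse signal is recovered with an ISTA-style basis pursuit denoising solver before any imaging (back-projection) step. The 1-D problem size and solver settings are illustrative assumptions, not the paper's radar processing chain.

```python
# Sketch: sparse recovery from 30% of measurements via ISTA (basis pursuit denoising).
import numpy as np

def ista_bpdn(A, b, lam=0.05, n_iter=500):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient descent."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = x - step * grad
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
n, k = 200, 5                                     # signal length, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
m = int(0.3 * n)                                  # keep only 30% as many measurements
A = rng.normal(0, 1, (m, n)) / np.sqrt(m)
b = A @ x_true
x_hat = ista_bpdn(A, b)
print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```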

전이 학습 기반의 생성 이미지 판별 모델 설계 (Transfer Learning-based Generated Synthetic Images Identification Model)

  • 김채원;윤성연;한명은;박민서
    • 문화기술의 융합 / Vol. 10, No. 2 / pp.465-470 / 2024
  • With the advance of artificial intelligence (AI)-based image generation, a wide variety of images are being generated, and techniques to identify them accurately are needed. Because the amount of generated image data is limited, this study proposes a model that identifies generated images using transfer learning to achieve high performance with limited data. A model pre-trained on the ImageNet dataset is applied directly to the CIFAKE input dataset to reduce training time, and the model is then tuned by adding three hidden layers and one output layer. The modeling results confirm that adjusting the final layers improves performance. The significance of this work is that, by training with transfer learning and then adding and adjusting the layers close to the output to fit the characteristics of the data, the accuracy issues caused by small amounts of image data can be reduced and generated images can be identified.
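A minimal sketch of the transfer-learning setup described above, assuming Keras: an ImageNet-pretrained backbone is frozen, and three hidden layers plus one output layer are added for real-versus-generated classification. The ResNet50 backbone and layer widths are illustrative assumptions; CIFAKE images (32x32) would need to be resized to the backbone's input size.

```python
# Sketch: frozen ImageNet backbone + three added hidden layers + one output layer.
import tensorflow as tf
from tensorflow.keras import layers, models

backbone = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")
backbone.trainable = False                      # reuse ImageNet features as-is

model = models.Sequential([
    backbone,
    layers.Dense(256, activation="relu"),       # three added hidden layers
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),      # real vs. generated
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```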

Radargrammetry를 이용한 C-밴드 및 X-밴드 SAR 위성영상의 DEM 생성 평가 (Assessment of DEM Generated by Stereo C-band and X-band SAR images using Radargrammetry)

  • 송영선;김기홍
    • 대한공간정보학회지 / Vol. 21, No. 4 / pp.109-116 / 2013
  • Two approaches are available for extracting three-dimensional information from SAR (Synthetic Aperture Radar) images: InSAR and radargrammetry. Until now, InSAR has mainly been used to generate precise DEMs, but InSAR requires high coherence between the two images even over rugged terrain or vegetated areas. In contrast, radargrammetry is less sensitive to the correlation between the two images, so in some cases it can be more effective for DEM generation. In particular, for X-band SAR satellite images, where maintaining coherence between two images is difficult, radargrammetry can be more useful for DEM generation. In this study, DEMs were generated by applying radargrammetry to stereo images from the C-band RADARSAT-1 satellite and the X-band TerraSAR-X satellite, and their characteristics were analyzed.
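A minimal sketch of the radargrammetric principle: for same-side stereo SAR, a height difference produces a ground-range parallax that depends on the two incidence angles, so height can be estimated from the measured parallax. The simplified flat-Earth relation below is an illustrative assumption, not the processing chain used in the study.

```python
# Sketch: height from parallax for same-side stereo SAR (simplified relation).
import math

def height_from_parallax(parallax_m: float, inc1_deg: float, inc2_deg: float) -> float:
    """h ~ parallax / (cot(theta1) - cot(theta2)) for same-side stereo SAR."""
    cot1 = 1.0 / math.tan(math.radians(inc1_deg))
    cot2 = 1.0 / math.tan(math.radians(inc2_deg))
    return parallax_m / (cot1 - cot2)

# e.g., a 25 m parallax observed between 25 deg and 40 deg incidence images
print(f"estimated height: {height_from_parallax(25.0, 25.0, 40.0):.1f} m")
```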

3D Building Detection and Reconstruction from Aerial Images Using Perceptual Organization and Fast Graph Search

  • Woo, Dong-Min;Nguyen, Quoc-Dat
    • Journal of Electrical Engineering and Technology / Vol. 3, No. 3 / pp.436-443 / 2008
  • This paper presents a new method for building detection and reconstruction from aerial images. In our approach, we extract useful building location information from the generated disparity map to segment the objects of interest and consequently reduce the unnecessary line segments extracted in the low-level feature extraction step. Hypothesis selection is carried out using an undirected graph, in which closed cycles represent complete rooftop hypotheses. We test the proposed method with synthetic images generated from the Avenches dataset of Ascona aerial images. The experimental results show that the extracted 3D line segments of the reconstructed buildings have an average error of 1.69 m and that our method can be used efficiently for building detection and reconstruction from aerial images.
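A minimal sketch of the hypothesis-selection idea above, assuming networkx: extracted line segments become edges of an undirected graph, and closed cycles in that graph are treated as candidate rooftop outlines. The toy corner labels and segments are assumptions for illustration only.

```python
# Sketch: closed cycles in an undirected line-segment graph as rooftop hypotheses.
import networkx as nx

# each line segment connects two corner nodes (ids are illustrative)
segments = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"),  # a closed rooftop outline
            ("d", "e")]                                      # a dangling segment
G = nx.Graph(segments)

rooftop_hypotheses = nx.cycle_basis(G)   # closed cycles = complete rooftop candidates
print(rooftop_hypotheses)                # e.g. [['a', 'b', 'c', 'd']]
```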

Dosimetric Evaluation of Synthetic Computed Tomography Technique on Position Variation of Air Cavity in Magnetic Resonance-Guided Radiotherapy

  • Hyeongmin Jin;Hyun Joon An;Eui Kyu Chie;Jong Min Park;Jung-in Kim
    • 한국의학물리학회지:의학물리 / Vol. 33, No. 4 / pp.142-149 / 2022
  • Purpose: This study seeks to compare the dosimetric parameters of the bulk electron density (ED) approach and the synthetic computed tomography (CT) approach with respect to position variation of the air cavity in magnetic resonance-guided radiotherapy (MRgRT) for patients with pancreatic cancer. Methods: Nine patients who previously received MRgRT were included, and their simulation CT and magnetic resonance (MR) images were collected. Air cavities were manually delineated on the simulation CT and MR images in the treatment planning system for each patient. The synthetic CT images were generated using the deep learning model trained in a prior study. Two additional plans with identical beam parameters were recalculated with ED maps that were either manually overridden for the cavities or derived from the synthetic CT. Dose calculation accuracy was explored in terms of dose-volume histogram parameters and gamma analysis. Results: The D95% averages were 48.80 Gy, 48.50 Gy, and 48.23 Gy for the original, manually assigned, and synthetic CT-based dose distributions, respectively. The greatest deviation was observed for one patient, whose D95% on the synthetic CT was 1.84 Gy higher than in the original plan. Conclusions: The variation of the air cavity position in the gastrointestinal area affects the treatment dose calculation. Synthetic CT-based ED modification would be a significant option for shortening the time-consuming process and improving MRgRT treatment accuracy.
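A minimal sketch of the D95% metric compared above: D95% is the dose covering 95% of a structure's volume, i.e., the 5th percentile of the voxel dose distribution. The random dose values are placeholders, not data from the study.

```python
# Sketch: D95% read off the dose distribution of a structure's voxels.
import numpy as np

def d_percent(dose_in_structure: np.ndarray, volume_percent: float = 95.0) -> float:
    """Dose (Gy) received by at least `volume_percent` of the structure volume."""
    # the dose exceeded by volume_percent of voxels = the (100 - volume_percent) percentile
    return float(np.percentile(dose_in_structure, 100.0 - volume_percent))

rng = np.random.default_rng(2)
plan_a = rng.normal(50.0, 1.0, size=100_000)   # stand-in target voxel doses (Gy)
plan_b = plan_a + 0.3                          # a slightly shifted comparison plan
print(f"D95% A: {d_percent(plan_a):.2f} Gy, D95% B: {d_percent(plan_b):.2f} Gy")
```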