• Title/Summary/Keyword: 실험 영상 (experimental images)

Perceptual Generative Adversarial Network for Single Image De-Snowing (단일 영상에서 눈송이 제거를 위한 지각적 GAN)

  • Wan, Weiguo; Lee, Hyo Jong
    • KIPS Transactions on Software and Data Engineering, v.8 no.10, pp.403-410, 2019
  • Image de-snowing aims to eliminate the negative influence of snow particles and improve scene understanding in images. In this paper, a single-image snow removal method based on a perceptual generative adversarial network is proposed. A residual U-Net is designed as the generator to produce the snow-free image. To handle snow particles of various sizes, an inception module with different filter kernels is adopted to extract multi-resolution features from the input snow image. In addition to the adversarial loss, a perceptual loss and a total variation loss are employed to improve the quality of the resulting image. Experimental results indicate that the method achieves excellent performance on both synthetic and real snow images in terms of visual observation and commonly used visual quality indices.
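
The composite objective described above (adversarial, perceptual, and total variation terms on top of the residual U-Net generator) can be sketched roughly as follows in PyTorch; the loss weights, the VGG16 feature layer used for the perceptual term, and all helper names are assumptions for illustration, not the authors' settings.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Frozen VGG16 feature extractor for the perceptual loss (layer choice is an assumption;
# input normalization is omitted for brevity).
_vgg = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

def total_variation(x):
    # Mean absolute difference between horizontally and vertically adjacent pixels.
    return (x[..., :, 1:] - x[..., :, :-1]).abs().mean() + \
           (x[..., 1:, :] - x[..., :-1, :]).abs().mean()

def generator_loss(fake, clean, d_fake_logits, w_adv=1e-3, w_perc=1.0, w_tv=1e-5):
    """Adversarial + perceptual + total-variation loss; weights are illustrative."""
    adv = F.binary_cross_entropy_with_logits(d_fake_logits, torch.ones_like(d_fake_logits))
    perc = F.mse_loss(_vgg(fake), _vgg(clean))
    tv = total_variation(fake)
    return w_adv * adv + w_perc * perc + w_tv * tv
```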

Design of Facial Image Data Collection System for Heart Rate Measurement (심박수 측정을 위한 안면 얼굴 영상 데이터 수집 시스템 설계)

  • Jang, Seung-Ju
    • Journal of the Korea Institute of Information and Communication Engineering, v.25 no.7, pp.971-976, 2021
  • In this paper, we design a facial image data collection system for heart rate measurement using a web camera. The system collects the user's face images with a web camera and measures the heart rate from that image information. Because non-contact heart rate measurement with a web camera can introduce errors, the collected data are classified into error and normal cases and used to correct errors in the heart rate program; the error cases can then be used to reduce the measurement error. Experiments were conducted on the ideas proposed and designed in this paper, and the results confirmed that the system operates normally.
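
The abstract does not state how the heart rate is computed from the collected face images. As one common non-contact approach (an illustration only, not the paper's method), the mean green-channel intensity of a face region can be tracked over time and the dominant frequency in the physiological band taken as the pulse rate:

```python
import numpy as np

def estimate_heart_rate(frames, fps, face_box):
    """Rough remote-PPG estimate: dominant frequency of the mean green-channel signal.

    frames   : list of HxWx3 BGR uint8 arrays (e.g., captured with OpenCV)
    fps      : capture frame rate of the web camera
    face_box : (x, y, w, h) face region, e.g., from a face detector
    """
    x, y, w, h = face_box
    signal = np.array([f[y:y + h, x:x + w, 1].mean() for f in frames])  # green channel
    signal -= signal.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)   # 42-240 bpm physiological band
    return freqs[band][np.argmax(spectrum[band])] * 60.0
```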

Video Stabilization Algorithm of Shaking image using Deep Learning (딥러닝을 활용한 흔들림 영상 안정화 알고리즘)

  • Lee, Kyung Min; Lin, Chi Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.19 no.1, pp.145-152, 2019
  • In this paper, we propose a video stabilization algorithm for shaky footage that uses deep learning, unlike 2D, 2.5D, and 3D based stabilization techniques. The algorithm extracts and compares features of shaky frames through CNN and LSTM network structures, and warps each image in the direction opposite to the estimated motion, using the magnitude and direction of the feature-point displacement between the previous and current frames. Feature extraction and per-frame comparison are implemented with a CNN and LSTM built in TensorFlow, and the image stabilization itself is implemented with the OpenCV open-source library. Experimental results show that the proposed algorithm can stabilize videos shaking in the up, down, left, and right directions.
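
The CNN/LSTM motion model itself is not detailed in the abstract; the sketch below only illustrates the compensation step it describes, warping each frame opposite to an estimated translation with OpenCV. The motion estimator (predict_motion) is a hypothetical placeholder for the network's output.

```python
import cv2
import numpy as np

def compensate_shift(frame, dx, dy):
    """Warp a frame by the inverse of the estimated inter-frame translation (dx, dy)."""
    h, w = frame.shape[:2]
    M = np.float32([[1, 0, -dx],
                    [0, 1, -dy]])  # move opposite to the detected motion
    return cv2.warpAffine(frame, M, (w, h), borderMode=cv2.BORDER_REPLICATE)

# Hypothetical usage with the learned motion model:
# dx, dy = predict_motion(prev_frame, cur_frame)   # CNN + LSTM output
# stabilized = compensate_shift(cur_frame, dx, dy)
```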

Acquisition of HDR image using estimation of scenic dynamic range in images with various exposures (다중 노출 복수 영상에서 장면의 다이내믹 레인지 추정을 통한 HDR 영상 획득)

  • Park, Dae-Geun; Park, Kee-Hyon; Ha, Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP, v.45 no.2, pp.10-20, 2008
  • Generally, acquiring an HDR image requires many images taken with different exposure times that together cover the entire dynamic range of the scene, and these images are then fused into one HDR image. This paper proposes an efficient method for HDR image acquisition with a small number of images. First, the scenic dynamic range is estimated using two images with different exposure times, one containing the upper limit and the other the lower limit of the scenic dynamic range. Independently of the scene, as the exposure time varies, the maximum gray levels of images that include the upper limit and the minimum gray levels of images that include the lower limit show similar characteristics. After modeling these characteristics, the scenic dynamic range is estimated from the modeling results and then used to select proper exposure times for HDR acquisition. Only three exposure times are selected, because the estimated scenic dynamic range can be covered by the camera's dynamic range at three different exposure times. To evaluate the error of the HDR image, experiments using virtual digital camera images were carried out. For several test images, the error of the HDR image obtained with the proposed method was comparable to that of an HDR image acquired from more than ten images.
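
The paper's gray-level modeling is not reproduced here; the sketch below only illustrates the underlying idea that, for a roughly linear sensor, the radiance interval captured in one shot scales with the exposure time, so a few exposures can be spaced to tile an estimated scene range. The sensor range, spacing rule, and function names are assumptions.

```python
import numpy as np

def select_exposures(e_min, e_max, sensor_range_stops=8.0, n=3):
    """Pick n relative exposure times whose combined coverage spans [e_min, e_max].

    Assumes an idealized linear sensor recording 'sensor_range_stops' stops per shot,
    with the recordable radiance window shifting in proportion to 1/exposure_time.
    """
    scene_stops = np.log2(e_max / e_min)                    # estimated scenic dynamic range
    step = max(scene_stops - sensor_range_stops, 0) / max(n - 1, 1)
    t_longest = 1.0 / e_min                                 # darkest radiance just recordable
    return [t_longest / (2.0 ** (i * step)) for i in range(n)]

# Example: a ~14-stop scene with an ~8-stop sensor -> three exposures about 3 stops apart.
print(select_exposures(e_min=1.0, e_max=2.0 ** 14))
```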

Multiple Camera Based Imaging System with Wide-view and High Resolution and Real-time Image Registration Algorithm (다중 카메라 기반 대영역 고해상도 영상획득 시스템과 실시간 영상 정합 알고리즘)

  • Lee, Seung-Hyun; Kim, Min-Young
    • Journal of the Institute of Electronics Engineers of Korea SC, v.49 no.4, pp.10-16, 2012
  • For high-speed visual inspection in the semiconductor industry, it is essential to acquire two-dimensional images of regions of interest with a large field of view (FOV) and high resolution simultaneously. In this paper, an imaging system composed of a single lens, a beam splitter, two camera sensors, and a stereo image grabbing board is newly proposed to achieve high image quality in terms of both precision and FOV. For the object images acquired simultaneously from the two camera sensors, Zhang's camera calibration method is first applied to calibrate each camera. Second, to find a mathematical mapping between the two images acquired from the different viewpoints, a matching matrix from multiview camera geometry is calculated based on their image homography. Through this homography, the two images are registered to secure a large inspection FOV. Because an inspection system that uses multiple images from multiple cameras needs a very fast processing unit for real-time matching, parallel processing hardware and software such as the Compute Unified Device Architecture (CUDA) are utilized. As a result, a matched image can be obtained from the two separate images in real time. Finally, the acquired homography is evaluated in terms of accuracy through a series of experiments, and the results show the effectiveness of the proposed system and method.
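
A minimal CPU-only sketch of the homography-based registration step is shown below (the paper runs the real-time processing on CUDA); the matched point sets are assumed to come from the calibrated camera pair, and the simple compositing rule is illustrative only.

```python
import cv2
import numpy as np

def register_pair(img_a, img_b, pts_a, pts_b):
    """Warp img_b into img_a's frame using a homography fitted to matched points (Nx2 arrays)."""
    H, inliers = cv2.findHomography(pts_b, pts_a, cv2.RANSAC, 3.0)
    h, w = img_a.shape[:2]
    warped_b = cv2.warpPerspective(img_b, H, (w, h))
    # Simple composite: keep img_a where it has content, use the warped view elsewhere.
    mask = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY) > 0
    mosaic = warped_b.copy()
    mosaic[mask] = img_a[mask]
    return mosaic, H
```

The warp and compositing are the natural candidates to move onto parallel hardware, in the spirit of the CUDA stage the abstract mentions for real-time matching.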

Automated Image Matching for Satellite Images with Different GSDs through Improved Feature Matching and Robust Estimation (특징점 매칭 개선 및 강인추정을 통한 이종해상도 위성영상 자동영상정합)

  • Ban, Seunghwan; Kim, Taejung
    • Korean Journal of Remote Sensing, v.38 no.6_1, pp.1257-1271, 2022
  • Recently, many Earth observation optical satellites have been developed as demand for them increases, so rapid preprocessing of satellite data has become one of the most important problems for active utilization of satellite images. Satellite image matching is a technique in which two images are transformed and represented in one common coordinate system; it is used for aligning different bands or correcting relative position errors between two satellite images. In this paper, we propose an automatic image matching method for satellite images with different ground sampling distances (GSDs). Our method is based on improved feature matching and robust estimation of the transformation between the images, and consists of five steps: calculation of the overlapping area, improved feature detection, feature matching, robust estimation of the transformation, and image resampling. For feature detection, the overlapping areas are extracted and resampled to equalize their GSDs. For feature matching, Oriented FAST and Rotated BRIEF (ORB) is used to improve matching performance. Image registration experiments were performed with KOMPSAT-3A and RapidEye images, and the performance of the proposed method was verified qualitatively and quantitatively. The reprojection errors of the image matching ranged from 1.277 to 1.608 pixels with respect to the GSD of the RapidEye images. These results confirm the feasibility of matching satellite images with heterogeneous GSDs using the proposed method.
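
The ORB matching and robust estimation steps map closely onto standard OpenCV calls; a minimal sketch follows. The transformation model (an affine transform here) and the RANSAC threshold are assumptions, and the GSD-equalizing resampling described in the abstract is presumed to have been done beforehand.

```python
import cv2
import numpy as np

def match_and_estimate(img_ref, img_tgt):
    """ORB feature matching followed by RANSAC-based robust estimation of an affine transform."""
    orb = cv2.ORB_create(nfeatures=5000)
    kp1, des1 = orb.detectAndCompute(img_ref, None)
    kp2, des2 = orb.detectAndCompute(img_tgt, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts_ref = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts_tgt = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Robust estimation: RANSAC rejects outlier correspondences.
    A, inliers = cv2.estimateAffine2D(pts_tgt, pts_ref, method=cv2.RANSAC,
                                      ransacReprojThreshold=3.0)
    h, w = img_ref.shape[:2]
    return cv2.warpAffine(img_tgt, A, (w, h)), A, inliers
```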

Metal artifact SUV estimation by using attenuation correction image and non attenuation correction image in PET-CT (PET-CT에서 감쇠보정 영상과 비감쇠보정 영상을 통한 Metal Artifact 보정에 대한 고찰)

  • Kim, June; Kim, Jae-II; Lee, Hong-Jae; Kim, Jin-Eui
    • The Korean Journal of Nuclear Medicine Technology, v.20 no.2, pp.21-26, 2016
  • Purpose: Because of their many advantages, PET-CT scanners generally use CT data for attenuation correction. CT-based attenuation correction provides anatomical information, reduces scan time, and yields more accurate attenuation correction. However, when a metal artifact occurs during the CT scan, CT-based attenuation correction can induce artifacts and quantitative errors in the PET images. This study therefore infers the true SUV of a metal artifact region from the ratio of the attenuation-corrected image count to the non-attenuation-corrected image count. Materials and Methods: A micro phantom filled with 4 mCi of 18F-FDG was used for the phantom test, and a Biograph mCT S(40) was used as the test equipment. A metal artifact was generated in the micro phantom with a piece of metal, and correction factors for the metal artifact region and the non-artifact region were obtained from the ratio of attenuation-corrected to non-attenuation-corrected image counts. For the clinical images, attenuation-corrected and non-attenuation-corrected images were reconstructed for 10 normal patients (66±15 years) who underwent PET-CT scans at SNUH. Correction factors for several organs were then standardized using the same count ratio, the correction factor of the metal artifact region was computed in the same way, and the standard organ correction factors were compared with the metal artifact region correction factor. Results: According to the phantom test, the metal artifact causes overestimation of the correction factor, so the correction factors of the metal artifact region are 12% larger than those of the non-artifact region. In the clinical test, the correction factor of organs with high CT numbers (>1000) is 8±0.5%, that of organs with CT numbers similar to soft tissue is 6±2%, and that of organs with low CT numbers (<-100) is 3±1%. Metal artifact correction factors are also 20% larger than the correction factors of soft tissue without metal artifacts. Conclusion: Metal artifacts lead to overestimation of the attenuation coefficient, and consequently the SUV of the metal artifact region is overestimated. Thus, for more accurate quantitative evaluation, using the ratio of attenuation-corrected to non-attenuation-corrected image counts is one method to reduce the effect of metal artifacts.
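
The study's correction is built on the ratio of attenuation-corrected (AC) to non-attenuation-corrected (NAC) counts. A minimal numeric sketch of that idea is given below; the ROI values, the SUV rescaling rule, and all names are made up for illustration and are not taken from the paper.

```python
import numpy as np

def correction_factor(ac_counts, nac_counts):
    """Ratio of AC to NAC mean counts within a region of interest."""
    return np.mean(ac_counts) / np.mean(nac_counts)

# Illustrative (made-up) ROI counts: a metal-artifact region shows an inflated AC/NAC
# ratio compared with a reference soft-tissue region, so its measured SUV could be
# scaled down toward a more plausible value using the reference ratio.
cf_metal = correction_factor(ac_counts=[1200.0], nac_counts=[100.0])   # 12.0
cf_soft = correction_factor(ac_counts=[1000.0], nac_counts=[100.0])    # 10.0
suv_measured = 6.0                                                      # made-up SUV
suv_estimated = suv_measured * cf_soft / cf_metal                       # 5.0
```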

Preliminary Study on the MR Temperature Mapping using Center Array-Sequencing Phase Unwrapping Algorithm (Center Array-Sequencing 위상펼침 기법의 MR 온도영상 적용에 관한 기초연구)

  • Tan, Kee Chin; Kim, Tae-Hyung; Chun, Song-I; Han, Yong-Hee; Choi, Ki-Seung; Lee, Kwang-Sig; Jun, Jae-Ryang; Eun, Choong-Ki; Mun, Chi-Woong
    • Investigative Magnetic Resonance Imaging, v.12 no.2, pp.131-141, 2008
  • Purpose: To investigate the feasibility and accuracy of proton resonance frequency (PRF) shift based magnetic resonance (MR) temperature mapping utilizing a self-developed center array-sequencing phase unwrapping (PU) method for non-invasive temperature monitoring. Materials and Methods: A computer simulation of the PU algorithm was performed for performance evaluation before applying it to MR thermometry. The MR experiments were conducted in two stages, a PU experiment and a temperature mapping experiment based on the PU technique, with all image postprocessing implemented in MATLAB. A 1.5T MR scanner with a knee coil and a T2* GRE (gradient recalled echo) pulse sequence was used throughout the experiments. Various subjects, such as a water phantom, an orange, and an agarose gel phantom, were used to assess the self-developed PU algorithm. The MR temperature mapping experiment was initially attempted only on the agarose gel phantom, with a custom-made thermoregulating water pump as the heating source. Heat was delivered to the phantom via hot water circulation while the temperature variation was monitored with a T-type thermocouple. The PU program was applied to the reconstructed wrapped phase images before mapping the temperature distribution of the subjects. Since the temperature change is directly proportional to the phase difference map, the absolute temperature could be estimated by adding the computed temperature difference to the measured ambient temperature of the subjects. Results: The PU technique successfully recovered and removed phase wrapping artifacts in the MR phase images of the various subjects, producing a smooth and continuous phase map and thus a more reliable temperature map. Conclusion: This work presented a rapid and robust self-developed center array-sequencing PU algorithm that is feasible for MR temperature mapping based on the PRF phase shift property.
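
The center array-sequencing unwrapping algorithm itself is not reproduced here; the sketch below only shows the standard PRF-shift conversion from an already-unwrapped phase-difference map to a temperature-change map, with a typical PRF coefficient assumed.

```python
import numpy as np

GAMMA_HZ_PER_T = 42.58e6    # proton gyromagnetic ratio
ALPHA_PPM_PER_C = -0.01     # typical PRF temperature coefficient (assumed value)

def phase_to_delta_t(dphi, b0=1.5, te=0.02):
    """Convert an unwrapped phase difference (rad) to a temperature change (deg C).

    Standard PRF relation: dT = dphi / (2*pi * gamma * alpha * B0 * TE).
    """
    return dphi / (2.0 * np.pi * GAMMA_HZ_PER_T * ALPHA_PPM_PER_C * 1e-6 * b0 * te)

# As in the abstract, the absolute temperature is the measured ambient (baseline)
# temperature plus the computed change:
# temperature_map = ambient_temp + phase_to_delta_t(unwrapped_phase_difference)
```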

The Evaluation of Reconstructed Images in 3D OSEM According to Iteration and Subset Number (3D OSEM 재구성 법에서 반복연산(Iteration) 횟수와 부분집합(Subset) 개수 변경에 따른 영상의 질 평가)

  • Kim, Dong-Seok; Kim, Seong-Hwan; Shim, Dong-Oh; Yoo, Hee-Jae
    • The Korean Journal of Nuclear Medicine Technology, v.15 no.1, pp.17-24, 2011
  • Purpose: In the nuclear medicine field, high-speed image reconstruction algorithms such as OSEM are now widely used as an alternative to filtered back projection owing to the rapid development of digital computers. However, there is no clear standard for choosing the optimal parameters when applying them. In this study, the change in image quality of a Jaszczak phantom experiment and brain SPECT patient data according to the number of iterations and subsets was analyzed for a 3D OSEM reconstruction method that applies 3D beam modeling. Materials and Methods: Data from 5 patients who underwent brain SPECT in the nuclear medicine department of ASAN Medical Center from August to September 2010 were studied and analyzed. For the phantom images, a Jaszczak phantom filled with water and 99mTc (500 MBq) was acquired on a Siemens Symbia T2 dual-head gamma camera. When reconstructing each patient and phantom image, the number of iterations was varied as 1, 4, 8, 12, 24, and 30 and the number of subsets as 2, 4, 8, 16, and 32. For each reconstructed image, the coefficient of variation (as a noise estimate), image contrast, and FWHM were calculated and compared. Results: For both patient and phantom data, image contrast and spatial resolution tended to improve approximately linearly as the number of iterations and subsets increased, whereas the coefficient of variation did not improve with either parameter. In the comparison by scan time, image contrast and FWHM also improved linearly with increasing iteration and subset numbers for the 10-, 20-, and 30-second-per-projection images, but the coefficient of variation again showed no improvement. Conclusion: This experiment confirmed that, as with the existing 1D and 2D OSEM reconstruction methods, image contrast in 3D OSEM reconstruction with 3D beam modeling improves linearly as the number of iterations and subsets increases. However, since these results come from a simple phantom experiment and a limited number of patients, and various other variables may exist, it would be premature to generalize from them; further evaluation of the 3D OSEM reconstruction method through additional experiments is needed.
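
Two of the comparison metrics above, the coefficient of variation and image contrast, can be computed from regions of interest as in the sketch below; the ROI choices and the exact contrast definition are assumptions, not taken from the paper.

```python
import numpy as np

def coefficient_of_variation(uniform_roi):
    """Noise estimate: standard deviation divided by mean within a uniform ROI."""
    roi = np.asarray(uniform_roi, dtype=float)
    return roi.std() / roi.mean()

def image_contrast(hot_roi, background_roi):
    """Simple hot-region contrast: (hot - background) / (hot + background)."""
    hot, bkg = np.mean(hot_roi), np.mean(background_roi)
    return (hot - bkg) / (hot + bkg)
```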

Boundary Noise Removal in Synthesized Intermediate Viewpoint Images for 3D Video (3차원 비디오의 중간시점 합성영상의 경계 잡음 제거 방법)

  • Lee, Cheon; Ho, Yo-Sung
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2008.11a, pp.109-112, 2008
  • The three-dimensional video system currently being standardized by MPEG (Moving Picture Experts Group) is a next-generation broadcasting system that uses multi-view video and depth images together, allowing the user to select an arbitrary viewpoint or providing 3D video through a 3D display device such as a stereoscopic display. To provide more viewpoints than the limited number of input views, a module that interpolates intermediate-viewpoint images is essential. The depth values given as input to this system make viewpoint shifting easy, but the quality of the interpolated image depends on the accuracy of these depth values. Depth maps are usually obtained with computer-vision-based stereo matching, and depth errors occur mainly in regions of depth discontinuity such as object boundaries. These errors produce unwanted noise in the background of the synthesized intermediate image. Previous methods synthesized the intermediate image under the assumption that object boundaries in the estimated depth map coincide with object boundaries in the image; in practice, however, the two boundaries do not coincide because of the depth estimation process, so part of the foreground is blended into the background and creates noise. In this paper, we propose a method for handling the boundary noise that occurs when interpolating intermediate-viewpoint images based on depth maps. When synthesizing the intermediate image, after filling the disoccluded regions, the areas where boundary noise may occur are identified along the disocclusion boundaries, and the boundary noise is then removed by using a noise-free reference image. Experimental results show that natural synthesized images free of background noise were generated.
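
The abstract does not give the exact rule for locating the noise-prone band along the disocclusions; the sketch below is one plausible reading, dilating the disocclusion mask and replacing that thin band with pixels from a noise-free reference view already warped to the intermediate viewpoint.

```python
import cv2
import numpy as np

def suppress_boundary_noise(synth, hole_mask, reference, band_px=5):
    """Replace a thin band around disoccluded regions with pixels from a clean reference view.

    synth     : synthesized intermediate-view image
    hole_mask : uint8 mask, 255 where disocclusions (holes) were filled
    reference : reference view warped to the intermediate viewpoint
    """
    kernel = np.ones((2 * band_px + 1, 2 * band_px + 1), np.uint8)
    band = cv2.dilate(hole_mask, kernel) & ~hole_mask   # ring just outside the holes
    out = synth.copy()
    out[band > 0] = reference[band > 0]
    return out
```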
