• Title/Summary/Keyword: noise in image data


Atmospheric correction by Spectral Shape Matching Method (SSMM): Accounting for horizontal inhomogeneity of the atmosphere

  • Shanmugam Palanisamy;Ahn Yu-Hwan
    • Proceedings of the KSRS Conference
    • /
    • 2006.03a
    • /
    • pp.341-343
    • /
    • 2006
  • The current spectral shape matching method (SSMM), developed by Ahn and Shanmugam (2004), relies on the assumption that the path radiance resulting from photons scattered by air molecules and aerosols, and possibly light directly reflected from the air-sea interface, is spatially homogeneous over the sub-scene of interest, enabling the retrieval of water-leaving radiances ($L_w$) from satellite ocean color image data. This assumption remains valid under clear atmospheric conditions, but when the distribution of aerosol loadings varies dramatically, the above postulation of spatial homogeneity is violated. In this study, we present the second version of SSMM, which takes into account the horizontal variations of aerosol loading in the correction of atmospheric effects in SeaWiFS ocean color image data. The new version includes models for the correction of the effects of aerosols and Rayleigh scattering and a method for the computation of diffuse transmittance ($t_{os}$) similar to that of SeaWiFS. We tested this method over different optical environments and compared its effectiveness with the results of the standard atmospheric correction (SAC) algorithm (Gordon and Wang, 1994) and with in-situ observations. Findings revealed that the SAC algorithm distorted the spectral shape of water-leaving radiance spectra in areas dominated by suspended sediments (SS) and algal blooms, and frequently yielded underestimated or even negative values in the lower green and blue parts of the electromagnetic spectrum. Retrieval of water-leaving radiances in coastal waters with very high sediment loads, for instance ≥ 8 g $m^{-3}$, was not possible with the SAC algorithm.
As the current SAC algorithm does not include models for Asian aerosols, the water-leaving radiances over aerosol-dominated areas could not be retrieved from the image, and large errors often resulted from an inappropriate extrapolation of the estimated aerosol radiance from the two IR bands to the visible spectrum. In contrast to the above results, the new SSMM enabled accurate retrieval of water-leaving radiances over a wide range of turbid waters with SS concentrations from 1 to 100 g $m^{-3}$ that closely matched the in-situ observations. Regardless of the spectral band, the RMS error ranged from a minimum of 0.003 to a maximum of 0.46, in contrast with 0.26 and 0.81, respectively, for the SAC algorithm. The new SSMM also removed all aerosol effects, excluding areas for which the signal-to-noise ratio is much lower than the water signal.
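The radiative decomposition that both SSMM and the SAC algorithm build on — total sensor radiance as the sum of Rayleigh path radiance, aerosol path radiance, and diffusely transmitted water-leaving radiance — can be sketched in simplified single-band scalar form (the function names and this reduced form are illustrative, not the paper's implementation):

```python
import numpy as np

def retrieve_lw(lt, lr, la, t_diffuse):
    """Invert L_t = L_r + L_a + t * L_w for the water-leaving radiance L_w.

    lt: total radiance at the sensor; lr: Rayleigh path radiance;
    la: aerosol path radiance; t_diffuse: diffuse transmittance (0..1).
    """
    return (lt - lr - la) / t_diffuse

def rms_error(retrieved, in_situ):
    """RMS deviation between retrieved and in-situ radiances."""
    r = np.asarray(retrieved, dtype=float)
    s = np.asarray(in_situ, dtype=float)
    return float(np.sqrt(np.mean((r - s) ** 2)))
```

The accuracy figures quoted in the abstract (0.003 to 0.46 for SSMM) are RMS deviations of this kind, computed per spectral band against the in-situ radiances.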


Deep Video Stabilization via Optical Flow in Unstable Scenes (동영상 안정화를 위한 옵티컬 플로우의 비지도 학습 방법)

  • Bohee Lee;Kwangsu Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.2
    • /
    • pp.115-127
    • /
    • 2023
  • Video stabilization is a camera technology whose importance is gradually increasing as the personal media market grows. For deep learning-based video stabilization, existing methods collect pairs of videos before and after stabilization, but it takes a lot of time and effort to create such synchronized data. Recently, to solve this problem, an unsupervised learning method using only unstable video data has been proposed. In this paper, we propose a network structure that learns a stabilized trajectory from only the unstable video, without pairs of unstable and stable videos, using the Convolutional Auto Encoder structure, one of the unsupervised learning methods. Optical flow data is used as the network input and output, and the optical flow data is mapped into grid units to simplify the network and minimize noise. In addition, to generate a stabilized trajectory with an unsupervised learning method, we define a loss function that smooths the input optical flow data. Through comparison of the results, we confirmed that the network learns as intended by the loss function.
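The abstract does not give the exact form of the smoothing loss; a minimal sketch of one plausible temporal-smoothness term on grid-mapped flow (the array layout and the mean-squared form are assumptions) might be:

```python
import numpy as np

def smoothness_loss(flow_grid):
    """Penalize frame-to-frame changes of grid-mapped optical flow.

    flow_grid: (T, H, W, 2) array -- per-frame flow vectors averaged
    into grid cells. A small loss means a smooth camera trajectory.
    """
    temporal_diff = np.diff(flow_grid, axis=0)  # flow change between frames
    return float(np.mean(temporal_diff ** 2))
```

Minimizing such a term drives the predicted flow toward a slowly varying trajectory, which is the stabilization objective the paper describes.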

Usefulness of Deep Learning Image Reconstruction in Pediatric Chest CT (소아 흉부 CT 검사 시 딥러닝 영상 재구성의 유용성)

  • Do-Hun Kim;Hyo-Yeong Lee
    • Journal of the Korean Society of Radiology
    • /
    • v.17 no.3
    • /
    • pp.297-303
    • /
    • 2023
  • Pediatric Computed Tomography (CT) examinations can often result in exam failures or the need for frequent retests due to the difficulty of cooperation from young patients. Deep Learning Image Reconstruction (DLIR) methods offer the potential to obtain diagnostically valuable images while reducing the retest rate in CT examinations of pediatric patients with high radiation sensitivity. In this study, we investigated the possibility of applying DLIR to reduce artifacts caused by respiration or motion and obtain clinically useful images in pediatric chest CT examinations. Retrospective analysis was conducted on chest CT examination data of 43 children under the age of 7 from P Hospital in Gyeongsangnam-do. The images reconstructed using Filtered Back Projection (FBP), Adaptive Statistical Iterative Reconstruction (ASIR-50), and the deep learning algorithm TrueFidelity-Middle (TF-M) were compared. Regions of interest (ROI) were drawn on the right ascending aorta (AA) and back muscle (BM) in contrast-enhanced chest images, and noise (standard deviation, SD) was measured using Hounsfield units (HU) in each image. Statistical analysis was performed using SPSS (ver. 22.0), analyzing the mean values of the three measurements with one-way analysis of variance (ANOVA). The results showed that the SD values for AA were FBP=25.65±3.75, ASIR-50=19.08±3.93, and TF-M=17.05±4.45 (F=66.72, p=0.00), while the SD values for BM were FBP=26.64±3.81, ASIR-50=19.19±3.37, and TF-M=19.87±4.25 (F=49.54, p=0.00). Post-hoc tests revealed significant differences among the three groups. DLIR using TF-M demonstrated significantly lower noise values compared to conventional reconstruction methods. Therefore, the application of the deep learning algorithm TrueFidelity-Middle (TF-M) is expected to be clinically valuable in pediatric chest CT examinations by reducing the degradation of image quality caused by respiration or motion.
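The noise metric used in the study — the standard deviation of Hounsfield-unit values inside an ROI — is straightforward to compute; a sketch (the choice of sample rather than population SD is an assumption, since the paper does not specify it):

```python
import numpy as np

def roi_noise_sd(image_hu, mask):
    """Noise (SD) of Hounsfield-unit values inside a region of interest.

    image_hu: 2-D array of HU values; mask: boolean array of the same
    shape marking the ROI (e.g. ascending aorta or back muscle).
    """
    return float(np.std(image_hu[mask], ddof=1))  # sample SD (assumed)
```

Comparing the mean of such SD values across FBP, ASIR-50, and TF-M reconstructions is what the one-way ANOVA in the study operates on.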

An Adaptive Block Matching Algorithm Based on Temporal Correlations (시간적 상관성을 이용한 적응적 블록 정합 알고리즘)

  • Yoon, Hyo-Sun;Lee, Guee-Sang
    • The KIPS Transactions:PartB
    • /
    • v.9B no.2
    • /
    • pp.199-204
    • /
    • 2002
  • Since motion estimation and motion compensation remove redundant data by exploiting the temporal redundancy in images, they play an important role in digital video compression. Because of its high computational complexity, however, motion estimation is difficult to apply to high-resolution applications in real-time environments. If we have information about the motion of an image block before the motion estimation, the location of a better starting point for the search for an exact motion vector can be determined to expedite the searching process. In this paper, we present an adaptive motion estimation approach based on the temporal correlations of consecutive image frames that defines the search pattern and determines the location of the initial search point adaptively. Experiments show that the proposed algorithm is about 0.1∼0.5 dB better than the DS (Diamond Search) algorithm in terms of PSNR (Peak Signal to Noise Ratio) and reduces the average number of search points per motion vector estimation by as much as 50% compared with DS.
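The two quantities in the comparison, and the core idea of seeding the search from the co-located block's motion in the previous frame, can be sketched as follows (the helper names are illustrative, not the paper's code):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio (dB) between two frames."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return float("inf") if mse == 0 else float(10.0 * np.log10(peak ** 2 / mse))

def initial_search_point(block_pos, prev_mv):
    """Predict the starting point of the motion-vector search from the
    co-located block's motion vector in the previous frame
    (the temporal correlation the paper exploits)."""
    return (block_pos[0] + prev_mv[0], block_pos[1] + prev_mv[1])
```

Starting near the true motion vector lets a small search pattern converge in fewer steps, which is where the reported ~50% reduction in search points comes from.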

Monitoring System for TV Advertisement Using Watermark (워터마크를 이용한 TV방송 광고모니터링 시스템)

  • Shin, Dong-Hwan;Kim, Geung-Sun;Kim, Jong-Weon;Choi, Jong-Uk
    • Proceedings of the KIEE Conference
    • /
    • 2004.11c
    • /
    • pp.15-18
    • /
    • 2004
  • In this paper, a monitoring system for TV advertisements is implemented using a video watermark. The functions of an advertisement monitoring system are automatically monitoring the time, length, and index of the on-air advertisement, saving the log data, and reporting the monitoring result. The performance of the video watermark used in this paper is tested for TV advertisement monitoring. The test includes a LAB test, done in a laboratory environment, and a field test, done in an actual broadcasting environment. The LAB test covers PSNR, distortion measures in the image, and the watermark detection rate under various attacks such as AD/DA (analog-to-digital and digital-to-analog) conversion, noise addition, and MPEG compression. The results of the LAB test are good for TV advertisement monitoring. KOBACO and SBS participated in the field test. The watermark detection rate is 100% in both real-time processing and saved-file processing. The average deviation of the watermark detection time is 0.2 seconds, which is good because the permissible average error is 0.5 seconds.


A Study on the design of Video Watermarking System for TV Advertisement Monitoring (TV광고 모니터링을 위한 비디오 워터마킹 시스템의 설계에 관한 연구)

  • Shin, Dong-Hwan;Kim, Sung-Hwan
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.56 no.1
    • /
    • pp.206-213
    • /
    • 2007
  • In this paper, a monitoring system for TV advertisements is implemented using a video watermark. The functions of the advertisement monitoring system are monitoring the time, length, and index of the on-air advertisement, saving the log data, and reporting the monitoring result. The performance of the video watermark used in this paper is tested for TV advertisement monitoring. The test includes a LAB test, done in a laboratory environment, and a field test, done in an actual broadcasting environment. The LAB test covers PSNR, distortion measures in the image, and the watermark detection rate under various attacks such as AD/DA (analog-to-digital and digital-to-analog) conversion, noise addition, and MPEG compression. The results of the LAB test are good for TV advertisement monitoring. KOBACO and SBS participated in the field test. The watermark detection rate is 100% in both real-time processing and saved-file processing. The average deviation of the watermark detection time is 0.2 seconds, which is good because the permissible average error is 0.5 seconds.
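The two figures reported from the field test — detection rate and average timing deviation — reduce to simple statistics over the monitored ads; a sketch (the event-tuple layout is an assumption for illustration):

```python
def detection_stats(events):
    """Summarize watermark-monitoring results.

    events: list of (detected, time_error_sec) tuples, one per aired ad.
    Returns (detection_rate, average_absolute_timing_deviation).
    """
    deviations = [abs(t) for ok, t in events if ok]
    rate = len(deviations) / len(events)
    avg_dev = sum(deviations) / len(deviations) if deviations else float("nan")
    return rate, avg_dev
```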

Study on Characteristic difference of Semiconductor Radiation Detectors fabricated with a wet coating process

  • Choi, Chi-Won;Cho, Sung-Ho;Yun, Min-Suk;Kang, Sang-Sik;Park, Ji-Koon;Nam, Sang-Hee
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference
    • /
    • 2006.06a
    • /
    • pp.192-193
    • /
    • 2006
  • The wet coating process can easily produce large-area films at room temperature from a printing paste mixed with semiconductor and binder material. A semiconductor film fabricated to a thickness of about 25 mm was evaluated by field emission scanning electron microscopy (FE-SEM). X-ray performance data such as dark current, sensitivity, and signal-to-noise ratio (SNR) were evaluated. The $HgI_2$ semiconductor showed a much lower dark current than the others, along with the best sensitivity. In this paper, the reactivity and combination characteristics of the semiconductor and binder material that affect the electrical and X-ray detection properties are demonstrated through experimental results.
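A common figure of merit for such detectors relates the three measured quantities: the dark-current-corrected signal divided by the noise. The exact definition used in the paper is not given in the abstract, so this is an assumed, generic form:

```python
def detector_snr(signal, dark_current, noise_sd):
    """Signal-to-noise ratio of an X-ray detector reading, with the
    dark-current offset subtracted from the raw signal (assumed definition)."""
    return (signal - dark_current) / noise_sd
```

Under this definition, a material with lower dark current and higher sensitivity (signal per exposure), such as the $HgI_2$ film here, yields a higher SNR at the same noise level.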


AWGN Removal Algorithm using Switching Fuzzy Function and Weight (스위칭 퍼지 함수와 가중치를 사용한 AWGN 제거 알고리즘)

  • Cheon, Bong-Won;Kim, Nam-Ho
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.121-123
    • /
    • 2021
  • Image processing is being used in various forms in important fields of the 4th industrial revolution, such as artificial intelligence, smart factories, and the IoT industry. In particular, in systems that require data processing, such as object tracking, medical imaging, and object recognition, noise removal is used as a preprocessing step, but existing algorithms have the drawback that blurring occurs in the filtering process. Therefore, in this paper, we propose a filter algorithm using switching fuzzy weights. The proposed algorithm switches the fuzzy function according to the standard deviation of the filtering mask, distinguishing low-frequency from high-frequency regions, and obtains the final output according to the fuzzy weight. The proposed algorithm showed improved results compared to the existing methods, and showed excellent characteristics in regions where the high-frequency component is strong.
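A minimal sketch of the switching idea — plain averaging in flat (low-SD) regions, fuzzy closeness weights in high-SD regions so edges are preserved — under assumed membership and threshold choices (the paper's exact fuzzy functions are not given in the abstract):

```python
import numpy as np

def fuzzy_switch_filter(window, sd_threshold=20.0):
    """Denoise the centre pixel of a square window of pixel values.

    Low-SD (flat) region: a plain mean suppresses AWGN.
    High-SD (edge/detail) region: weight samples by closeness to the
    centre pixel (a simple fuzzy-style membership), preserving edges.
    """
    w = np.asarray(window, dtype=float)
    centre = w[w.shape[0] // 2, w.shape[1] // 2]
    if np.std(w) < sd_threshold:                # low-frequency region
        return float(np.mean(w))
    weights = 1.0 / (1.0 + np.abs(w - centre))  # fuzzy closeness weights
    return float(np.sum(weights * w) / np.sum(weights))
```

On an edge window the weighted output stays close to the centre's side of the edge, rather than the blurred overall mean — the behavior the paper targets in strong high-frequency regions.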


A Spline-Regularized Sinogram Smoothing Method for Filtered Backprojection Tomographic Reconstruction

  • Lee, S.J.;Kim, H.S.
    • Journal of Biomedical Engineering Research
    • /
    • v.22 no.4
    • /
    • pp.311-319
    • /
    • 2001
  • Statistical reconstruction methods in the context of a Bayesian framework have played an important role in emission tomography, since they allow a priori information to be incorporated into the reconstruction algorithm. Given the ill-posed nature of tomographic inversion and the poor quality of projection data, the Bayesian approach uses regularizers to stabilize solutions by incorporating suitable prior models. In this work we show that, while the quantitative performance of the standard filtered backprojection (FBP) algorithm is not as good as that of Bayesian methods, applying spline-regularized smoothing in the sinogram space can improve the performance of the FBP algorithm by inheriting the advantages of the spline priors used in Bayesian methods. We first show how to implement the spline-regularized smoothing filter by deriving the mathematical relationship between regularization and lowpass filtering. We then compare the quantitative performance of our new FBP algorithms using the quantitation of bias/variance and the total squared error (TSE) measured over noise trials. Our numerical results show that the second-order spline filter applied to FBP yields the best results in terms of TSE among the three spline orders considered in our experiments.
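The link the paper draws between regularization and lowpass filtering can be illustrated with a second-order (spline-like) penalty applied to a 1-D sinogram row; the closed-form solve below is a generic sketch of that idea, not the authors' implementation:

```python
import numpy as np

def spline_smooth(y, lam=10.0):
    """Second-order regularized smoothing of a 1-D sinogram row.

    Minimizes ||x - y||^2 + lam * ||D2 x||^2, where D2 is the
    second-difference operator; the solution
    x = (I + lam * D2^T D2)^{-1} y acts as a lowpass filter.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]   # discrete second difference
    A = np.eye(n) + lam * (D2.T @ D2)
    return np.linalg.solve(A, y)
```

Constant (and linear) signals pass through unchanged because their second differences vanish, while high-frequency noise is attenuated; in the paper's pipeline each smoothed sinogram row is then fed to the standard FBP reconstruction.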


Visualization and classification of hidden defects in triplex composites used in LNG carriers by active thermography

  • Hwang, Soonkyu;Jeon, Ikgeun;Han, Gayoung;Sohn, Hoon;Yun, Wonjun
    • Smart Structures and Systems
    • /
    • v.24 no.6
    • /
    • pp.803-812
    • /
    • 2019
  • Triplex composite is an epoxy-bonded joint structure, which constitutes the secondary barrier in a liquefied natural gas (LNG) carrier. Defects in the triplex composite weaken its shear strength and may cause leakage of the LNG, thus compromising the structural integrity of the LNG carrier. This paper proposes an autonomous triplex composite inspection (ATCI) system for visualizing and classifying hidden defects in the triplex composite installed inside an LNG carrier. First, heat energy is generated on the surface of the triplex composite using halogen lamps, and the corresponding heat response is measured by an infrared (IR) camera. Next, the region of interest (ROI) is traced and noise components are removed to minimize false indications of defects. After a defect is identified, it is classified as an internal void or uncured adhesive, and its size and shape are quantified and visualized. The proposed ATCI system allows fully automated, contactless detection, classification, and quantification of hidden defects inside the triplex composite. The effectiveness of the proposed ATCI system is validated using data obtained from an actual triplex composite installed in an LNG carrier membrane system.
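The defect-identification step — flagging pixels whose heat response deviates from the background after noise removal — can be sketched with a robust threshold; the median/MAD rule here is an assumed stand-in for the paper's actual processing chain:

```python
import numpy as np

def detect_defects(thermal_frame, k=3.0):
    """Flag pixels whose temperature deviates strongly from the frame median.

    A defect (void or uncured adhesive) alters local heat diffusion, so its
    pixels stand out against the sound-material background.
    """
    f = np.asarray(thermal_frame, dtype=float)
    med = np.median(f)
    mad = np.median(np.abs(f - med)) + 1e-9  # robust noise scale
    return np.abs(f - med) > k * mad         # boolean defect map
```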