• Title/Summary/Keyword: residual image

Search results: 358

Landsat TM Image Compression Using Classified Bidirectional Prediction and KLT (영역별 양방향 예측과 KLT를 이용한 인공위성 화상데이터 압축)

  • Kim Seung-Jin;Kim Tae-Su;Park Kyung-Nam;Kim Young-Choon;Lee Kuhn-Il
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.1 / pp.1-7 / 2005
  • We propose an effective Landsat TM image compression method using classified bidirectional prediction (CBP), a classified KLT, and SPIHT. SPIHT is used to exploit the spatial redundancy of feature bands selected separately in the visible and infrared ranges. Regions of the prediction bands are classified into three classes in the wavelet domain, and the CBP is then performed to exploit the spectral redundancy. Residual bands, which consist of the difference values between the original band and the predicted band, are decorrelated by a spectral KLT. Finally, three-dimensional (3-D) SPIHT is used to encode the decorrelated coefficients. Experimental results show that the proposed method reconstructs higher-quality Landsat TM images than conventional methods at the same bit rate.
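
As a concrete illustration of the decorrelation step described above, here is a minimal sketch of a spectral KLT over residual bands: the band-to-band covariance is eigendecomposed and each pixel's spectral vector is projected onto the eigenvectors. The band count, synthetic data, and function names are hypothetical, not the paper's code.

```python
# Hedged sketch: spectral KLT decorrelation of residual bands.
import numpy as np

def spectral_klt(residual_bands):
    """residual_bands: (n_bands, height, width) prediction residuals.
    Returns decorrelated coefficients and the KLT basis."""
    n_bands, h, w = residual_bands.shape
    X = residual_bands.reshape(n_bands, -1)    # each pixel = one spectral sample
    X = X - X.mean(axis=1, keepdims=True)      # zero-mean per band
    cov = np.cov(X)                            # (n_bands, n_bands) spectral covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    basis = eigvecs[:, np.argsort(eigvals)[::-1]]  # sort by decreasing variance
    coeffs = basis.T @ X                       # decorrelated spectral coefficients
    return coeffs.reshape(n_bands, h, w), basis

# Synthetic residuals standing in for 7 Landsat TM bands, 64x64 pixels.
rng = np.random.default_rng(0)
coeffs, basis = spectral_klt(rng.normal(size=(7, 64, 64)))
```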

A Case Study on the Cross-Well Travel-Time Tomography Regulated by the Error in the Measurement of the First Arrival Time (초동 주시 측정 오차로 제어된 공대공 주시 토모그래피 사례연구)

  • Lee, Doo-Sung
    • Geophysics and Geophysical Exploration / v.12 no.3 / pp.233-238 / 2009
  • An inversion method regulated by the error in the measurement of the first arrival time was developed, and we conducted a feasibility study by applying the method to real cross-well seismic data. The inversion is a two-step regulation process: 1) derive the measurement error bound based on the resolution of the velocity image we want to derive, and exclude the records whose picking error is larger than the error bound; 2) set the travel-time residual to zero if the residual is less than the measurement error. This process prevents trivial residuals from accumulating and contributing to the velocity-model update. A comparison of two velocity images, one obtained using all records and the other using the regulated inversion method, shows that the latter exhibits fewer numerical artifacts and, according to Fermat's principle, is a more feasible velocity model.
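
The two-step regulation is simple enough to sketch directly. The following is a minimal, hypothetical implementation, assuming picked and modeled travel times plus per-record picking-error estimates are already available as arrays; the names and error bound are illustrative.

```python
# Hedged sketch of the two-step residual regulation described above.
import numpy as np

def regulate_residuals(picked_times, modeled_times, picking_errors, error_bound):
    """1) Drop records whose picking error exceeds the error bound.
       2) Zero travel-time residuals smaller than the measurement error,
          so trivial residuals do not drive the velocity-model update."""
    keep = picking_errors <= error_bound                  # step 1: exclude bad picks
    residuals = picked_times[keep] - modeled_times[keep]
    errs = picking_errors[keep]
    residuals[np.abs(residuals) < errs] = 0.0             # step 2: clamp trivial residuals
    return residuals, keep
```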

Median Filtering Detection of Digital Images Using Pixel Gradients

  • RHEE, Kang Hyeon
    • IEIE Transactions on Smart Processing and Computing / v.4 no.4 / pp.195-201 / 2015
  • For median filtering (MF) detection in altered digital images, this paper presents a new feature vector formed from autoregressive (AR) coefficients via an AR model of the gradients between neighboring row and column lines in an image. The resulting 10-D feature vector is then trained in a support vector machine (SVM) for MF detection among forged images. The MF classification is compared to the median filter residual (MFR) scheme, which uses the same 10-D feature vector. The experiment uses three test items: area under the receiver operating characteristic (ROC) curve (AUC), classification ratio, and minimal average decision error. The performance is excellent for unaltered (ORI) or once-altered images, such as 3×3 average-filtered (AVE3), QF=90 JPEG (JPG90), 90% down-scaled (DN0.9), and 110% up-scaled (UP1.1) images, versus 3×3 and 5×5 median-filtered (MF3 and MF5, respectively) and MF3/MF5 composite (MF35) images. When the forged image was post-altered with AVE3, DN0.9, UP1.1, or JPG70 after MF3, MF5, or MF35, the performance of the proposed scheme is lower than that of the MFR scheme, although the proposed feature vector achieves a superior classification ratio on AVE3 images. However, in the measured performances with unaltered, once-altered, and post-altered images versus MF3, MF5, and MF35, the resulting AUC from 'sensitivity' (true positive rate) and '1-specificity' (false positive rate) approaches 1. Thus, it is confirmed that the grade evaluation of the proposed scheme can be rated as 'Excellent (A)'.
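
To make the feature construction concrete, here is a minimal sketch: Yule-Walker AR coefficients are estimated from row and column gradients and averaged into a 10-D vector, matching the abstract's dimensionality. How the paper pools row/column gradients is not specified, so the averaging and all names here are illustrative assumptions.

```python
# Hedged sketch: AR-coefficient feature from image gradients, fed to an SVM.
import numpy as np
from sklearn.svm import SVC

def yule_walker_ar(x, order):
    """Estimate AR coefficients of a 1-D signal via the Yule-Walker equations."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order] / len(x)
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def mf_feature(image, order=10):
    """10-D feature: AR coefficients of pooled row/column gradients
    (averaging the two directions is an assumption, not the paper's rule)."""
    image = np.asarray(image, dtype=float)
    row_grad = np.diff(image, axis=1).ravel()
    col_grad = np.diff(image, axis=0).ravel()
    return 0.5 * (yule_walker_ar(row_grad, order) + yule_walker_ar(col_grad, order))

# Hypothetical training on original vs. median-filtered images:
# X = np.stack([mf_feature(img) for img in images]); y = labels
# clf = SVC(kernel="rbf").fit(X, y)
```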

Deriving the Effective Atomic Number with a Dual-Energy Image Set Acquired by the Big Bore CT Simulator

  • Jung, Seongmoon;Kim, Bitbyeol;Kim, Jung-in;Park, Jong Min;Choi, Chang Heon
    • Journal of Radiation Protection and Research / v.45 no.4 / pp.171-177 / 2020
  • Background: This study aims to determine the effective atomic number (Zeff) from dual-energy image sets obtained using a conventional computed tomography (CT) simulator. The estimated Zeff can be used for deriving the stopping power and for material decomposition of CT images, thereby improving dose calculations in radiation therapy. Materials and Methods: An electron-density phantom was scanned using a Philips Brilliance CT Big Bore at 80 and 140 kVp. The estimated Zeff values were compared with those obtained using the calibration phantom by applying the Rutherford, Schneider, and Joshi methods. The fitting parameters were optimized using a nonlinear least-squares regression algorithm; the fitting curve and mass attenuation data were obtained from the National Institute of Standards and Technology. The fitting parameters were validated by estimating the residual errors between the reference and calculated Zeff values. Next, the calculation accuracy of Zeff was evaluated by comparing the calculated values with the reference Zeff values of the insert plugs. The exposure levels of patients under additional CT scanning at 80, 120, and 140 kVp were evaluated by measuring the weighted CT dose index (CTDIw). Results and Discussion: The residual errors of the fitting parameters were lower than 2%. The best and worst Zeff values were obtained using the Schneider and Joshi methods, respectively. The maximum differences between the reference and calculated values were 11.3% (lung during inhalation), 4.7% (adipose tissue), and 9.8% (lung during inhalation) for the Rutherford, Schneider, and Joshi methods, respectively. Under dual-energy scanning (80 and 140 kVp), the patient exposure level was approximately twice that of general single-energy scanning (120 kVp). Conclusion: Zeff was calculated from two image sets scanned by a conventional single-energy CT simulator, and the results obtained using three different methods were compared. The Zeff calculation based on single-energy CT exhibited appropriate feasibility.
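
The calibration workflow (fit a model of a dual-energy measurement against known Zeff, then invert it for unknown voxels) can be sketched generically. The power-law ratio model and all numbers below are illustrative stand-ins; they are not the paper's Rutherford, Schneider, or Joshi formulations, only a nonlinear least-squares calibration in the same spirit.

```python
# Hedged sketch: dual-energy Zeff calibration via nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def ratio_model(z_eff, a, b, c):
    # Illustrative model: low/high-kVp attenuation ratio as a power law in Zeff.
    return a + b * np.power(z_eff, c)

# Hypothetical calibration data from an electron-density phantom's inserts.
z_ref = np.array([6.0, 7.4, 8.0, 11.6, 13.8])       # reference Zeff values
ratio = np.array([1.02, 1.10, 1.15, 1.45, 1.80])    # mu(80 kVp) / mu(140 kVp)

params, _ = curve_fit(ratio_model, z_ref, ratio, p0=(1.0, 0.001, 3.0))

def estimate_zeff(measured_ratio, params, grid=np.linspace(1, 30, 2901)):
    """Invert the fitted curve numerically for a new voxel's ratio."""
    return grid[np.argmin(np.abs(ratio_model(grid, *params) - measured_ratio))]
```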

Multispectral Image Data Compression Using Classified Prediction and KLT in Wavelet Transform Domain

  • Kim, Tae-Su;Kim, Seung-Jin;Kim, Byung-Ju;Lee, Jong-Won;Kwon, Seong-Geun;Lee, Kuhn-Il
    • Proceedings of the IEEK Conference / 2002.07a / pp.204-207 / 2002
  • The current paper proposes a new multispectral image data compression algorithm that can efficiently reduce spatial and spectral redundancies by applying classified prediction, a Karhunen-Loeve transform (KLT), and the three-dimensional set partitioning in hierarchical trees (3-D SPIHT) algorithm in the wavelet transform (WT) domain. The classification is performed in the WT domain to exploit the interband classified dependency, and the resulting class information is used for the interband prediction. The residual image data, i.e., the prediction errors between the original and predicted image data, are decorrelated by a KLT. Finally, the 3-D SPIHT algorithm is used to encode the transformed coefficients, listed in descending order spatially and spectrally as a result of the WT and KLT. Simulation results showed that images reconstructed using the proposed algorithm exhibited better quality and a higher compression ratio than those using conventional algorithms.
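
The classified-prediction idea admits a compact sketch: group pixels into classes and fit one linear predictor per class from a reference band to the target band. The activity-quantile classifier and gain/offset form below are illustrative assumptions, not the paper's classifier or predictor.

```python
# Hedged sketch: per-class interband prediction with least-squares gain/offset.
import numpy as np

def classified_prediction(ref_band, target_band, n_classes=3):
    """Predict target_band from ref_band with one gain/offset per class;
    returns the prediction and the residual band."""
    activity = np.abs(ref_band - ref_band.mean())
    edges = np.quantile(activity, np.linspace(0, 1, n_classes + 1))
    labels = np.clip(np.digitize(activity, edges[1:-1]), 0, n_classes - 1)
    pred = np.zeros_like(target_band, dtype=float)
    for c in range(n_classes):
        m = labels == c
        if m.sum() < 2:
            continue  # skip degenerate classes
        A = np.stack([ref_band[m].ravel(), np.ones(m.sum())], axis=1)
        gain, offset = np.linalg.lstsq(A, target_band[m].ravel(), rcond=None)[0]
        pred[m] = gain * ref_band[m] + offset
    return pred, target_band - pred
```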

PM2.5 Estimation Based on Image Analysis

  • Li, Xiaoli;Zhang, Shan;Wang, Kang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.2 / pp.907-923 / 2020
  • For the severe haze situation in the Beijing-Tianjin-Hebei region, conventional fine particulate matter (PM2.5) concentration prediction methods based on pollutant data face problems such as incomplete data, which may lead to poor prediction performance. Therefore, this paper proposes a method for predicting the PM2.5 concentration based on image analysis technology that combines image data, which can reflect the original weather conditions, with currently popular machine learning methods. First, based on local parameter estimation, autoregressive (AR) model analysis, and local estimation of the increase in image blur, we extract features from the weather images using an approach inspired by free energy and a no-reference robust metric model. Next, we compare the coefficient energy and contrast difference of each pixel in the AR model and use the percentages to calculate the image sharpness and derive the overall mass fraction. The relationship between the residual value and the PM2.5 concentration is then fitted by a generalized Gaussian distribution (GGD) model. Finally, nonlinear mapping is performed via a wavelet neural network (WNN) to obtain the PM2.5 concentration. Experimental results obtained on real data show that the proposed method offers improved prediction accuracy and a lower root mean square error (RMSE).
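
The GGD-fitting step maps directly onto SciPy's generalized normal distribution. The sketch below fits a GGD to placeholder residuals; the synthetic data and the idea of feeding the fitted parameters to a downstream regressor are illustrative assumptions, not the paper's pipeline.

```python
# Hedged sketch: fitting a generalized Gaussian distribution (GGD) to residuals.
import numpy as np
from scipy.stats import gennorm

rng = np.random.default_rng(1)
residuals = rng.laplace(scale=0.5, size=5000)   # stand-in for AR-model residuals

# gennorm's shape parameter beta: 2 -> Gaussian, 1 -> Laplacian.
beta, loc, scale = gennorm.fit(residuals)
print(f"GGD shape={beta:.2f}, loc={loc:.3f}, scale={scale:.3f}")
# The fitted (beta, scale) could then feed a regressor, e.g. the wavelet
# neural network mentioned in the abstract, to map onto PM2.5 concentration.
```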

Interframe Coding for 3-D Medical Images Using an Adaptive Mode Selection Technique in Wavelet Transform Domain (웨이블릿 변환 영역에서의 적응적 모드 선택 기법을 이용한 3차원 의료 영상을 위한 interframe 부호화)

  • 조현덕;나종범
    • Journal of Biomedical Engineering Research / v.20 no.3 / pp.265-274 / 1999
  • In this paper, we propose a novel interframe coding algorithm especially appropriate for 3-D medical images. The proposed algorithm is based on a video coding algorithm using motion estimation/compensation and transform coding. In the algorithm, warping is adopted for motion compensation (MC). Then, by using adaptive mode selection, the motion-compensated residual image and the original image are mixed in the wavelet transform domain to improve coding performance. The mixed image is then compressed by the zerotree coding method. We show that the adaptive mode selection technique in the wavelet transform domain is very useful for 3-D medical image coding. Simulation results show that the proposed scheme provides good performance regardless of inter-slice distance and is promising for 3-D medical image compression.
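
A minimal sketch of per-block mode selection in the transform domain follows: for each block of coefficients, keep the motion-compensated residual if its energy is lower than the original's, otherwise keep the original. The block size, energy criterion, and names are assumptions; the paper's actual selection rule is not specified in the abstract.

```python
# Hedged sketch: block-wise adaptive mode selection on one wavelet subband.
import numpy as np

def select_modes(orig_band, resid_band, block=8):
    """For each block, pick residual coefficients if their energy is lower
    than the original's; return the mixed band and the mode map."""
    h, w = orig_band.shape
    mixed = orig_band.copy()
    modes = np.zeros((h // block, w // block), dtype=np.uint8)  # 1 = residual mode
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            o = orig_band[i:i + block, j:j + block]
            r = resid_band[i:i + block, j:j + block]
            if (r ** 2).sum() < (o ** 2).sum():
                mixed[i:i + block, j:j + block] = r
                modes[i // block, j // block] = 1
    return mixed, modes

# Hypothetical usage with PyWavelets on one subband of the current slice:
# cA_o, _ = pywt.dwt2(frame, "haar")
# cA_r, _ = pywt.dwt2(frame - warped_reference, "haar")
# mixed, modes = select_modes(cA_o, cA_r)
```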

Characteristics of Motion-blur Free TFT-LCD using Short Persistent CCFL in Blinking Backlight Driving

  • Han, Jeong-Min;Ok, Chul-Ho;Hwang, Jeoung-Yeon;Seo, Dae-Shik
    • Transactions on Electrical and Electronic Materials / v.8 no.4 / pp.166-169 / 2007
  • In applying LCDs to TV applications, one of the most significant factors to be improved is image sticking in moving pictures. An LCD differs from a CRT in that it is a continuous hold-type device, which holds an image for the entire frame period, whereas an impulse-type device generates an image in a very short time. To reduce the image-sticking problem related to the hold-type display mode, we conducted an experiment to drive a TN-LCD like a CRT. We produced impulse-like images by turning the backlight on and off, and we set the ratio of backlight on-off time by counting the on and off times for the video signal input during one frame (16.7 ms). A conventional CCFL (cold cathode fluorescent lamp) cannot follow such fast on-off switching, so we evaluated new fluorescent substances for the light source to improve the residual light characteristic of the CCFL. We achieved impulse-like image generation similar to a CRT by CCFL blinking drive and TN-LCD overdriving. As a result, the reduced image-sticking phenomenon was validated by the naked eye and by response-time measurement.

Context-Based Minimum MSE Prediction and Entropy Coding for Lossless Image Coding

  • Musik-Kwon;Kim, Hyo-Joon;Kim, Jeong-Kwon;Kim, Jong-Hyo;Lee, Choong-Woong
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1999.06a / pp.83-88 / 1999
  • In this paper, a novel gray-scale lossless image coder combining context-based minimum mean squared error (MMSE) prediction and entropy coding is proposed. To obtain the prediction context, this paper first defines directional differences according to the sharpness of edges and the gradients of localities in the image data. Classification of the four directional differences forms a "geometry context" model that characterizes general two-dimensional image behaviors such as directional edge regions, smooth regions, or texture. Based on this context model, adaptive DPCM prediction coefficients are calculated in the MMSE sense and the prediction is performed. The MMSE method on a context-by-context basis is more in accord with the minimum entropy condition, one of the major objectives of predictive coding. Context modeling is also useful in the entropy coding stage. To reduce the statistical redundancy of the residual image, many contexts are preset to take full advantage of conditional probability in entropy coding and are merged into a small number of contexts in an efficient way to reduce complexity. The proposed lossless coding scheme slightly outperforms CALIC, the state of the art, in compression ratio.
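
Context-based MMSE prediction can be sketched compactly: directional differences classify each pixel into a context, and per-context predictor coefficients over causal neighbors are solved by least squares (the MMSE solution). The two-difference/four-context classifier, the neighbor set, and the threshold below are simplifications of the abstract's four-difference scheme, chosen for brevity.

```python
# Hedged sketch: context-classified MMSE (least-squares) DPCM prediction.
import numpy as np

def causal_neighbors(img, i, j):
    # West, North, North-West, North-East causal neighbors.
    return np.array([img[i, j - 1], img[i - 1, j],
                     img[i - 1, j - 1], img[i - 1, j + 1]], dtype=float)

def context_of(img, i, j, thresh=8):
    dh = abs(int(img[i, j - 1]) - int(img[i - 1, j - 1]))  # horizontal difference
    dv = abs(int(img[i - 1, j]) - int(img[i - 1, j - 1]))  # vertical difference
    return (dh > thresh) * 2 + (dv > thresh)               # 4 geometry contexts

def mmse_predictors(img):
    """Collect (neighbors, pixel) samples per context, then solve for
    per-context predictor coefficients in the MMSE sense."""
    h, w = img.shape
    samples = {c: ([], []) for c in range(4)}
    for i in range(1, h):
        for j in range(1, w - 1):
            c = context_of(img, i, j)
            samples[c][0].append(causal_neighbors(img, i, j))
            samples[c][1].append(float(img[i, j]))
    coeffs = {}
    for c, (A, y) in samples.items():
        if len(A) >= 4:  # need enough samples for a stable solve
            coeffs[c] = np.linalg.lstsq(np.array(A), np.array(y), rcond=None)[0]
    return coeffs
```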

MRU-Net: A remote sensing image segmentation network for enhanced edge contour Detection

  • Jing Han;Weiyu Wang;Yuqi Lin;Xueqiang LYU
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.12 / pp.3364-3382 / 2023
  • Remote sensing image segmentation plays an important role in realizing intelligent city construction. Current mainstream segmentation networks effectively improve the segmentation of remote sensing images by deeply mining their rich texture and semantic features, but problems remain, such as rough segmentation of small target regions and poor edge-contour segmentation. To overcome these challenges, we propose an improved semantic segmentation model, referred to as MRU-Net, which adopts the U-Net architecture as its backbone. First, the convolutional layers in the U-Net are replaced by BasicBlock structures to extract features, and the activation function is replaced to reduce the model's computational load. Second, a hybrid multi-scale recognition module is added to the encoder to improve the segmentation accuracy for small targets and edge regions. Finally, experiments on the Massachusetts Buildings Dataset and the WHU Dataset show that, compared with the original network, the ACC, mIoU, and F1 values are improved, and the proposed network exhibits good robustness and portability across datasets.
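
To illustrate the first modification, here is a generic ResNet-style BasicBlock in PyTorch that could replace a U-Net double-convolution stage. This is a standard block, not the authors' exact MRU-Net definition, and the activation choice is only a placeholder for the abstract's unspecified replacement.

```python
# Hedged sketch: a residual BasicBlock as a drop-in for U-Net conv stages.
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """ResNet-style block: two 3x3 convs with a skip connection."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        # 1x1 projection so the skip path matches the channel count.
        self.skip = (nn.Identity() if in_ch == out_ch
                     else nn.Conv2d(in_ch, out_ch, 1, bias=False))
        # A cheaper activation could stand in for the abstract's replacement;
        # ReLU keeps the sketch simple.
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.act(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.act(out + self.skip(x))

# Quick shape check: x = torch.randn(1, 64, 128, 128); y = BasicBlock(64, 128)(x)
```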