• Title/Summary/Keyword: Image decomposition


Multi-resolution hierarchical motion estimation in the wavelet transform domain (웨이브렛 변환된 다해상도 영상을 이용한 계층적 움직임 추정)

  • 김진태;장준필;김동욱;최종수
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.8
    • /
    • pp.50-59
    • /
    • 1996
  • In this paper, a new hierarchical motion estimation scheme using wavelet-transformed multi-resolution image layers is proposed. Compared with the full-search motion estimation method, existing hierarchical methods remarkably reduce the amount of computation, but their efficiency is degraded by the local-minima problem. To address this problem, the multi-resolution image layers are composed using the wavelet transform, and the number of layers participating in the motion estimation for a block is determined by considering its low-band and higher-band energies on the first wavelet-transformed layer. The ratio between the higher-band energy and the low-band energy of each block is evaluated; for blocks containing relatively large higher-band energy, the motion estimation is carried out in the high-resolution layer only, while all layers are used otherwise. The final motion vectors are obtained in the first wavelet-transformed layer, so fewer bits for motion vectors are transmitted, and reconstruction of the received image using the inverse wavelet transform decreases the blocking effect.

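The block-adaptive layer selection described in this abstract hinges on the per-block ratio of high-band to low-band energy on the first wavelet layer. A minimal numpy sketch of that measurement using a one-level Haar decomposition (the 8×8 block size and any decision threshold are assumptions, not the paper's values):

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar decomposition into LL, LH, HL, HH sub-bands."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 2.0
    lh = (a - b + c - d) / 2.0
    hl = (a + b - c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return ll, lh, hl, hh

def band_energy_ratio(img, block=8):
    """Per-block ratio of high-band to low-band energy on the first layer."""
    ll, lh, hl, hh = haar_dwt2(img)
    h, w = ll.shape
    ratios = np.zeros((h // block, w // block))
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            low = np.sum(ll[i:i+block, j:j+block] ** 2)
            high = (np.sum(lh[i:i+block, j:j+block] ** 2)
                    + np.sum(hl[i:i+block, j:j+block] ** 2)
                    + np.sum(hh[i:i+block, j:j+block] ** 2))
            ratios[i // block, j // block] = high / (low + 1e-12)
    return ratios
```

Blocks with a large ratio (detail-rich) would then be matched only in the high-resolution layer, while low-ratio blocks use all layers.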

Development of Visual Odometry Estimation for an Underwater Robot Navigation System

  • Wongsuwan, Kandith;Sukvichai, Kanjanapan
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.4 no.4
    • /
    • pp.216-223
    • /
    • 2015
  • The autonomous underwater vehicle (AUV) is being widely researched in order to achieve superior performance when working in hazardous environments. This research focuses on using image processing techniques to estimate the AUV's ego-motion and changes in orientation, based on image frames captured at different times by a single high-definition web camera attached to the bottom of the AUV. The visual odometry application is integrated with other sensors: an inertial measurement unit (IMU) is used to select the correct solution of the homography motion equation, and a pressure sensor is used to resolve the image-scale ambiguity. Uncertainty estimation is computed to correct drift that occurs in the system, using a Jacobian method, singular value decomposition, and backward and forward error propagation.
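The homography mentioned above is typically estimated from point correspondences. A minimal sketch of the standard direct linear transform (DLT) solved via SVD, for orientation only and not the authors' exact pipeline:

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct linear transform (DLT): solve A h = 0 via SVD.

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    Returns the 3x3 homography mapping src -> dst (up to scale)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    # The null-space vector is the right singular vector belonging
    # to the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]          # normalise so H[2, 2] == 1
```

With noise-free correspondences the recovery is exact; in practice one would add normalization and robust outlier rejection.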

Texture Image Retrieval Using DTCWT-SVD and Local Binary Pattern Features

  • Jiang, Dayou;Kim, Jongweon
    • Journal of Information Processing Systems
    • /
    • v.13 no.6
    • /
    • pp.1628-1639
    • /
    • 2017
  • A combined texture feature extraction approach for texture image retrieval is proposed in this paper. Two kinds of low-level texture features are combined: one is extracted from singular value decomposition (SVD) of dual-tree complex wavelet transform (DTCWT) coefficients, and the other from multi-scale local binary patterns (LBPs). The fusion of the SVD-based multi-directional wavelet features and the multi-scale LBP features yields a feature vector of short dimension. Comparative experiments are conducted on the Brodatz and Vistex datasets. According to the experimental results, the proposed method performs better than existing methods in terms of retrieval accuracy and time complexity.
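A basic single-scale LBP descriptor, one ingredient of the fused feature above, can be sketched as follows (the multi-scale and DTCWT-SVD parts are omitted; this is illustrative, not the authors' implementation):

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour local binary pattern for interior pixels."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]
    # Neighbours in a fixed circular order; each contributes one bit.
    shifts = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        codes |= ((n >= c).astype(np.uint8) << bit)
    return codes

def lbp_histogram(img, bins=256):
    """Normalised LBP histogram, usable as a texture feature vector."""
    h, _ = np.histogram(lbp_codes(img), bins=bins, range=(0, bins))
    return h / max(h.sum(), 1)
```

Multi-scale variants repeat this at several neighbourhood radii and concatenate the histograms.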

DIRECT COMPARISON STUDY OF THE CAHN-HILLIARD EQUATION WITH REAL EXPERIMENTAL DATA

  • Jeong, Darae;Ham, Seokjun;Kim, Junseok
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • v.26 no.4
    • /
    • pp.333-342
    • /
    • 2022
  • In this paper, we perform a direct comparison study between real experimental data for domain rearrangement and the Cahn-Hilliard (CH) equation describing the dynamics of morphological evolution. To validate the mathematical model against physical phenomena, we take initial conditions from experimental images using an image segmentation technique. The image segmentation algorithm is based on the Mumford-Shah functional and the Allen-Cahn (AC) equation. The segmented phase-field profile is similar to the solution of the CH equation; that is, it has a hyperbolic tangent profile across the interfacial transition region. We use unconditionally stable schemes to solve the governing equations. As a test problem, we take the domain rearrangement of lipid bilayers. Numerical results demonstrate that comparing the evolutions with experimental data is a good benchmark test for validating a mathematical model.
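The hyperbolic-tangent interface profile mentioned above is the equilibrium of the Allen-Cahn equation. A small explicit-Euler sketch (the paper uses unconditionally stable schemes; the simple scheme and parameters here are assumptions for illustration) relaxes a sharp step toward tanh(x/(√2·ε)):

```python
import numpy as np

def allen_cahn_relax(eps=0.1, n=100, steps=5000, dt=1e-4):
    """Relax a sharp step under the 1-D Allen-Cahn equation
    phi_t = phi_xx - (phi^3 - phi) / eps^2  on [-1, 1],
    with far-field values clamped to -1 and +1."""
    dx = 2.0 / n
    x = -1.0 + (np.arange(n) + 0.5) * dx   # cell centres, no point at x = 0
    phi = np.sign(x)                        # sharp interface at x = 0
    for _ in range(steps):
        lap = np.zeros_like(phi)
        lap[1:-1] = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2
        phi = phi + dt * (lap - (phi**3 - phi) / eps**2)
        phi[0], phi[-1] = -1.0, 1.0         # clamp far-field values
    return x, phi
```

One can check that φ(x) = tanh(x/(√2·ε)) satisfies ε²φ'' = φ³ − φ, which is why the relaxed profile matches the segmented phase field's shape.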

An image sequence coding using motion-compensated transform technique based on the sub-band decomposition (움직임 보상 기법과 분할 대역 기법을 사용한 동영상 부호화 기법)

  • Paek, Hoon;Kim, Rin-Chul;Lee, Sang-Uk
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.21 no.1
    • /
    • pp.1-16
    • /
    • 1996
  • In this paper, by combining motion-compensated transform coding with the sub-band decomposition technique, we present a motion-compensated sub-band coding (MCSBC) technique for image sequence coding. Several problems related to MCSBC are discussed, such as a scheme for motion compensation in each sub-band and the efficient variable word-length (VWL) coding of the DCT coefficients in each sub-band. For efficient coding, motion estimation and compensation are performed only on the LL sub-band, but the discrete cosine transform (DCT) is employed to encode all sub-bands in our approach. The transform coefficients in each sub-band are then scanned in a different manner depending on the energy distribution in the DCT domain, and coded using separate 2-D Huffman code tables optimized to the probability distribution of each sub-band. The performance of the proposed MCSBC technique is intensively examined by computer simulations on HDTV image sequences. The simulation results reveal that the proposed MCSBC technique outperforms other coding techniques, particularly the well-known motion-compensated transform coding technique, by about 1.5 dB in terms of average peak signal-to-noise ratio.

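Motion estimation on the LL sub-band, as in the abstract above, is commonly a full-search block match minimizing the sum of absolute differences (SAD). A minimal sketch (block and search-range sizes are illustrative assumptions):

```python
import numpy as np

def full_search(ref, cur, bi, bj, block=8, search=4):
    """Full-search block matching: find the motion vector (dy, dx)
    minimising SAD for the block of `cur` at (bi, bj), searching
    within +-search pixels in the reference frame `ref`."""
    target = cur[bi:bi + block, bj:bj + block].astype(float)
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            i, j = bi + dy, bj + dx
            if i < 0 or j < 0 or i + block > ref.shape[0] or j + block > ref.shape[1]:
                continue            # candidate block falls outside the frame
            cand = ref[i:i + block, j:j + block].astype(float)
            sad = np.abs(cand - target).sum()
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv
```

Hierarchical schemes such as those in the surrounding papers run this search on coarse layers first to shrink the search range at full resolution.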

Comparison of Thresholding Techniques for SVD Coefficients in CT Perfusion Image Analysis (CT 관류 영상 해석에서의 SVD 계수 임계화 기법의 성능 비교)

  • Kim, Nak Hyun
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.6
    • /
    • pp.276-286
    • /
    • 2013
  • The SVD-based deconvolution algorithm is known as the most effective technique for CT perfusion image analysis. In this algorithm, in order to reduce noise effects, SVD coefficients smaller than a certain threshold are removed. As the truncation threshold, either a fixed value or a variable threshold yielding a predetermined oscillation index (OI) is frequently employed. Each of these two thresholding methods has an advantage over the other in either accuracy or efficiency. In this paper, we propose a Monte Carlo simulation method to evaluate the accuracy of the two methods. An extension of the proposed method is also presented to measure the effect of image smoothing on the accuracy of the thresholding methods. After the simulation method is described, experimental results are presented using both simulated data and real CT images.
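Truncating SVD coefficients below a fixed threshold, as discussed above, amounts to a regularized pseudo-inverse. A minimal sketch (the 10% relative threshold is a common illustrative default, not a value taken from the paper):

```python
import numpy as np

def tsvd_solve(A, b, rel_threshold=0.1):
    """Solve A x = b by truncated SVD: singular values below
    rel_threshold * s_max are discarded to suppress noise."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s >= rel_threshold * s[0]        # s is sorted descending
    # Invert only the retained singular values; zero out the rest.
    s_inv = np.where(keep, 1.0 / np.where(s == 0, 1.0, s), 0.0)
    return Vt.T @ (s_inv * (U.T @ b))
```

The OI-based variant in the paper instead raises the cut until an oscillation measure of the recovered residue function falls below a target.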

3-D Lossy Volumetric Medical Image Compression with Overlapping method and SPIHT Algorithm and Lifting Steps (Overlapping method와 SPIHT Algorithm과 Lifting Steps을 이용한 3차원 손실 의료 영상 압축 방법)

  • 김영섭
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.4 no.3
    • /
    • pp.263-269
    • /
    • 2003
  • This paper focuses on lossy compression methods for medical images that operate on a three-dimensional (3-D) irreversible integer wavelet transform. We offer an application of the Set Partitioning in Hierarchical Trees (SPIHT) algorithm [1-3] to medical images, using a 3-D wavelet decomposition and a 3-D spatial dependence tree. The wavelet decomposition is accomplished with integer wavelet filters implemented with the lifting method, where careful scaling and truncation keep the integer precision small and the transform unitary. As the compression rate increases, the boundaries between adjacent coding units become increasingly visible. Unlike video, a volume image is examined under static conditions and must not exhibit such boundary artifacts. In order to eliminate them, we utilize overlapping at the axial boundaries between adjacent coding units. We have tested our encoder on medical images using different integer filters. Results show that our algorithm performs well with certain filters. The improvement is visibly manifested as fewer ringing artifacts and noticeably better reconstruction of low-contrast regions.

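The integer wavelet transform via lifting, used in the abstract above, can be illustrated in one dimension with the 5/3 filter: prediction and update use floor-rounded integer arithmetic, so the inverse reconstructs the input exactly. The boundary-extension convention here is one simple choice, not necessarily the paper's:

```python
def lifting_53_forward(x):
    """Reversible integer 5/3 lifting (one level, 1-D, even length).
    Returns (low, high) integer sub-bands."""
    n = len(x) // 2
    even = [x[2 * i] for i in range(n)]
    odd = [x[2 * i + 1] for i in range(n)]
    # Predict: high band = odd sample minus rounded average of neighbours.
    d = [odd[i] - (even[i] + even[min(i + 1, n - 1)]) // 2 for i in range(n)]
    # Update: low band = even sample plus rounded quarter-sum of highs.
    s = [even[i] + (d[max(i - 1, 0)] + d[i] + 2) // 4 for i in range(n)]
    return s, d

def lifting_53_inverse(s, d):
    """Exact inverse: undo the update step, then the predict step."""
    n = len(s)
    even = [s[i] - (d[max(i - 1, 0)] + d[i] + 2) // 4 for i in range(n)]
    odd = [d[i] + (even[i] + even[min(i + 1, n - 1)]) // 2 for i in range(n)]
    x = [0] * (2 * n)
    x[0::2] = even
    x[1::2] = odd
    return x
```

Because each lifting step is undone by the same rounded expression, the scheme is losslessly invertible even though individual steps round.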

3D volumetric medical image coding using unbalanced tree structure (불균형 트리 구조를 이용한 3차원 의료 영상 압축)

  • Kim Young-Seop
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.7 no.4
    • /
    • pp.567-574
    • /
    • 2006
  • This paper focuses on lossy compression methods for medical images that operate on a three-dimensional (3-D) irreversible integer wavelet transform. We offer an application of an unbalanced tree structure algorithm to medical images, using a 3-D unbalanced wavelet decomposition and a 3-D unbalanced spatial dependence tree. The wavelet decomposition is accomplished with integer wavelet filters implemented with the lifting method. We have tested our encoder on volumetric medical images using different integer filters and a coding unit size of 16 slices. Coding units of 16 slices save considerable dynamic memory (RAM) and coding delay compared with the full-sequence coding units used in previous works. If we allow the formation of trees of different lengths, then we can accommodate more than three transaxial scales; the encoder and decoder can then keep track of the length of the tree in which each pixel resides through the sequence of decompositions. Results show that, even with these small coding units, our algorithm with the I(5,3) filter performs as well as or better in lossy coding than previous coding systems using 3-D integer unbalanced wavelet transforms on volumetric medical images.


Flame Detection Using Haar Wavelet and Moving Average in Infrared Video (적외선 비디오에서 Haar 웨이블릿과 이동평균을 이용한 화염검출)

  • Kim, Dong-Keun
    • The KIPS Transactions:PartB
    • /
    • v.16B no.5
    • /
    • pp.367-376
    • /
    • 2009
  • In this paper, we propose a flame detection method using the Haar wavelet and moving averages in outdoor infrared video sequences. The proposed method is composed of three steps: Haar wavelet decomposition, flame candidate detection, and tracking with flame classification. In the Haar wavelet decomposition step, each frame is decomposed into four sub-images (LL, LH, HL, HH), and high-frequency energy components are computed from the LH, HL, and HH sub-images. In the flame candidate detection step, a binary image is computed by thresholding the LL sub-image, and morphology operations are applied to the binary image to remove noise. After finding initial boundaries, final candidate regions are extracted by expanding the initial boundary regions into their neighborhoods. In the tracking and flame classification step, region-size and high-frequency-energy features are calculated from the candidate regions and tracked using queues, and the tracked regions are classified as flames or not by the temporal changes of their moving averages.
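The moving-average test on high-frequency energy described above can be sketched as follows (region extraction and queue bookkeeping are simplified away; the window size and crossing criterion are assumptions, since flame flicker shows up as the energy repeatedly crossing its own moving average):

```python
import numpy as np
from collections import deque

def haar_energy(frame):
    """High-frequency energy of one frame from a one-level Haar split."""
    a = frame[0::2, 0::2].astype(float)
    b = frame[0::2, 1::2].astype(float)
    c = frame[1::2, 0::2].astype(float)
    d = frame[1::2, 1::2].astype(float)
    lh, hl, hh = (a - b + c - d) / 2, (a + b - c - d) / 2, (a - b - c + d) / 2
    return float((lh**2 + hl**2 + hh**2).sum())

class FlickerTracker:
    """Track a candidate region's energy with a moving average and count
    how often the instantaneous energy crosses it (flame-like flicker)."""
    def __init__(self, window=8):
        self.history = deque(maxlen=window)
        self.crossings = 0
        self.above = None

    def update(self, frame):
        e = haar_energy(frame)
        if self.history:
            avg = sum(self.history) / len(self.history)
            above = e > avg
            if self.above is not None and above != self.above:
                self.crossings += 1
            self.above = above
        self.history.append(e)
        return self.crossings
```

A static region keeps a flat energy trace (no crossings), while a flickering one crosses its moving average almost every frame.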

Deriving the Effective Atomic Number with a Dual-Energy Image Set Acquired by the Big Bore CT Simulator

  • Jung, Seongmoon;Kim, Bitbyeol;Kim, Jung-in;Park, Jong Min;Choi, Chang Heon
    • Journal of Radiation Protection and Research
    • /
    • v.45 no.4
    • /
    • pp.171-177
    • /
    • 2020
  • Background: This study aims to determine the effective atomic number (Zeff) from dual-energy image sets obtained using a conventional computed tomography (CT) simulator. The estimated Zeff can be used for deriving the stopping power and material decomposition of CT images, thereby improving dose calculations in radiation therapy. Materials and Methods: An electron-density phantom was scanned using a Philips Brilliance CT Big Bore at 80 and 140 kVp. The estimated Zeff values were compared with those obtained using the calibration phantom by applying the Rutherford, Schneider, and Joshi methods. The fitting parameters were optimized using a nonlinear least-squares regression algorithm. The fitting curve and mass attenuation data were obtained from the National Institute of Standards and Technology. The fitting parameters were validated by estimating the residual errors between the reference and calculated Zeff values. Next, the calculation accuracy of Zeff was evaluated by comparing the calculated values with the reference Zeff values of insert plugs. The exposure levels of patients under additional CT scanning at 80, 120, and 140 kVp were evaluated by measuring the weighted CT dose index (CTDIw). Results and Discussion: The residual errors of the fitting parameters were lower than 2%. The best and worst Zeff values were obtained using the Schneider and Joshi methods, respectively. The maximum differences between the reference and calculated values were 11.3% (lung during inhalation), 4.7% (adipose tissue), and 9.8% (lung during inhalation) for the Rutherford, Schneider, and Joshi methods, respectively. Under dual-energy scanning (80 and 140 kVp), the patient exposure level was approximately twice that of general single-energy scanning (120 kVp). Conclusion: Zeff was calculated from two image sets scanned by a conventional single-energy CT simulator. The results obtained using the three methods were compared, and Zeff calculation based on sequential single-energy scans was shown to be feasible.
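For orientation, the classic single-exponent (Mayneord) power-law definition of the effective atomic number can be computed directly from a material's composition; note this is not the Rutherford, Schneider, or Joshi dual-energy parameterization evaluated in the paper above:

```python
def effective_atomic_number(electron_fractions, m=2.94):
    """Mayneord power-law effective atomic number:
    Zeff = (sum_i a_i * Z_i**m) ** (1/m),
    where a_i is element i's fractional contribution to the total
    electron count. The exponent m = 2.94 is the classic choice."""
    total = sum(a * z**m for z, a in electron_fractions.items())
    return total ** (1.0 / m)

# Water H2O: 2 electrons from hydrogen, 8 from oxygen, 10 in total.
zeff_water = effective_atomic_number({1: 0.2, 8: 0.8})
```

For water this gives roughly 7.4, the commonly quoted value; a pure element trivially returns its own Z.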