• Title/Summary/Keyword: Wavelet coefficient

Implementation of JBIG2 CODEC with Effective Document Segmentation (문서의 효율적 영역 분할과 JBIG2 CODEC의 구현)

  • 백옥규;김현민;고형화
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.27 no.6A
    • /
    • pp.575-583
    • /
    • 2002
  • JBIG2 is an international standard for compression of bi-level images and documents. JBIG2 supports three encoding modes for high compression according to the region features of documents. One of these is generic region coding for bitmap coding, where the basic bitmap coder is either MMR or arithmetic coding. A pattern matching coding method is used for text regions, and halftone pattern coding is used for halftone regions. In this paper, a document is segmented into line-art, halftone, and text regions for JBIG2 encoding, and a JBIG2 CODEC is implemented. For efficient region segmentation of documents, a segmentation method using wavelet coefficients is applied together with an existing boundary extraction technique. For the facsimile test image (IEEE-167a), the compression ratio improves by about 2% and the subjective quality is enhanced. We also propose arbitrary-shape halftone region coding, which improves subjective quality in the text neighboring the halftone region.
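As a hypothetical sketch of wavelet-coefficient-based block classification (the Haar basis, 2x2 block handling, and threshold below are illustrative choices, not the authors' exact segmentation method), a block with high detail-band energy can be flagged as halftone/text rather than background:

```python
# Sketch: classify a square document block by its 1-level Haar detail energy.
# Hypothetical simplification of wavelet-coefficient segmentation; the
# threshold value is illustrative only.

def haar_detail_energy(block):
    """Sum of squared 1-level Haar detail coefficients of a 2-D block."""
    n = len(block)
    energy = 0.0
    for i in range(0, n, 2):
        for j in range(0, n, 2):
            a, b = block[i][j], block[i][j + 1]
            c, d = block[i + 1][j], block[i + 1][j + 1]
            lh = (a + b - c - d) / 2.0   # horizontal detail
            hl = (a - b + c - d) / 2.0   # vertical detail
            hh = (a - b - c + d) / 2.0   # diagonal detail
            energy += lh * lh + hl * hl + hh * hh
    return energy

def classify_block(block, threshold=0.5):
    """High detail energy suggests halftone/text; low suggests background."""
    return "halftone/text" if haar_detail_energy(block) > threshold else "background"

flat = [[0, 0], [0, 0]]        # uniform region
checker = [[1, 0], [0, 1]]     # high-frequency (halftone-like) region
```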

Image Retrieval Using Spatial Color Correlation and Local Texture Characteristics (칼라의 공간적 상관관계 및 국부 질감 특성을 이용한 영상검색)

  • Sung, Joong-Ki;Chun, Young-Deok;Kim, Nam-Chul
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.5 s.305
    • /
    • pp.103-114
    • /
    • 2005
  • This paper presents a content-based image retrieval (CBIR) method using a combination of color and texture features. As the color feature, a color autocorrelogram is chosen, extracted from the hue and saturation components of a color image. As texture features, BDIP (block difference of inverse probabilities) and BVLC (block variation of local correlation coefficients) are chosen, extracted from the value component. When the features are extracted, the color autocorrelogram and the BVLC are simplified in consideration of their calculation complexity. After feature extraction, the vector components of these features are efficiently quantized in consideration of their storage space. Experiments on the Corel and VisTex DBs show that the proposed retrieval method yields a maximum precision gain of 9.5% over the method using only the color autocorrelogram and 4.0% over BDIP-BVLC. The proposed method also yields maximum precision gains of 12.6%, 14.6%, and 27.9% over the methods using wavelet moments, CSD, and the color histogram, respectively.
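A simplified color autocorrelogram of the kind used above can be sketched as follows. This version counts only axis-aligned neighbors at distance d on a pre-quantized color grid, a hypothetical simplification of the paper's feature:

```python
# Sketch: for each quantized color c and distance d, estimate the probability
# that a pixel at distance d from a c-colored pixel also has color c.
# Hypothetical simplification (axis-aligned neighbors only).

def autocorrelogram(img, colors, d):
    h, w = len(img), len(img[0])
    counts = {c: 0 for c in colors}   # same-color neighbor pairs
    totals = {c: 0 for c in colors}   # all in-bounds neighbor pairs
    for y in range(h):
        for x in range(w):
            c = img[y][x]
            for dy, dx in ((0, d), (0, -d), (d, 0), (-d, 0)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    totals[c] += 1
                    if img[ny][nx] == c:
                        counts[c] += 1
    return {c: counts[c] / totals[c] if totals[c] else 0.0 for c in colors}
```

A uniform image gives probability 1.0 for its color; a checkerboard gives 0.0 at distance 1, so the feature captures the spatial coherence of each color, not just its frequency.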

A differential image quantizer based on wavelet for low bit rate video coding (저비트율 동영상 부호화에 적합한 웨이블릿 기반의 차영상 양자화기)

  • 주수경;유지상
    • Journal of Broadcast Engineering
    • /
    • v.8 no.4
    • /
    • pp.473-480
    • /
    • 2003
  • In this paper, we propose a new quadtree coding algorithm to improve the performance of the conventional one. The new algorithm can process frames of any standard size and reduces encoding and decoding time by decreasing the computational load. It also improves image quality compared with previous quantizers based on quadtree and zerotree structures. To make the new algorithm applicable to a real video codec, we analyze the statistical characteristics of the coefficients of the differential image and add a function that handles an arbitrary image size with a new technique, whereas the old algorithm processes images by block unit. We also improve image quality by scaling the coefficient values of the differential image. Comparing the performance of the new algorithm with quadtree and SPIHT coding, it is shown that PSNR is improved and that the computational load in encoding and decoding is reduced.
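The quadtree decomposition underlying such a coder can be sketched as a recursive split of the differential image until blocks are near-uniform (the uniformity test and threshold below are illustrative, not the paper's quantizer):

```python
# Sketch: recursive quadtree split of a square region of a differential image.
# A region becomes a leaf when its value range is within `thresh`; otherwise
# it splits into four quadrants. Returns the number of leaves produced.

def quadtree(img, x, y, size, thresh):
    vals = [img[y + i][x + j] for i in range(size) for j in range(size)]
    if size == 1 or max(vals) - min(vals) <= thresh:
        return 1
    h = size // 2
    return (quadtree(img, x, y, h, thresh) +
            quadtree(img, x + h, y, h, thresh) +
            quadtree(img, x, y + h, h, thresh) +
            quadtree(img, x + h, y + h, h, thresh))

uniform = [[0] * 4 for _ in range(4)]
corner = [[0] * 4 for _ in range(4)]
corner[0][0] = 9   # a single large differential value forces local splits
```

A uniform 4x4 block is one leaf, while a single outlier pixel splits only its own quadrant down to pixel level, which is the source of the coding gain on sparse differential images.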

Effective Drought Prediction Based on Machine Learning (머신러닝 기반 효과적인 가뭄예측)

  • Kim, Kyosik;Yoo, Jae Hwan;Kim, Byunghyun;Han, Kun-Yeun
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2021.06a
    • /
    • pp.326-326
    • /
    • 2021
  • Many technical and academic attempts have been made to predict droughts, which occur over long periods and wide areas. In this study, future drought outlooks were produced using both a scenario-based drought forecasting method and non-scenario-based methods that predict drought in real time, among the approaches for forecasting droughts with complex time series. As the scenario-based method, the 2009 PDSI (Palmer Drought Severity Index) was computed from three-month GCM (General Circulation Model) forecast results to give a short-term prediction of drought severity. Non-scenario-based drought prediction was performed using statistical methods and deterministic numerical methods based on physical models. As a representative statistical approach, to overcome the forecasting limitations of the ARIMA (Autoregressive Integrated Moving Average) model, SPI was estimated using support vector regression (SVR) and a wavelet neural network. The optimal model structure was selected using RMSE (root mean square error), MAE (mean absolute error), and R (correlation coefficient), and droughts were forecast with lead times of one to six months. Using SPI, a Markov chain and a log-linear model were applied to verify the accuracy of SPI-based drought prediction, and a neuro-fuzzy model was applied to the Anatolia region of Turkey to predict droughts from monthly mean precipitation and SPI for the period 1964-2006. As drought frequency and patterns change irregularly and the regional polarization of precipitation intensifies, the demand for more accurate drought prediction is growing. This study aims to develop an improved prediction model by applying monthly and daily values of the SPEI (Standardized Precipitation Evapotranspiration Index), a meteorological drought index, to machine learning models for the complex and nonlinear patterns of drought.
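The Markov-chain step mentioned above can be sketched in minimal form, assuming drought classes have already been derived from SPI values (the class labels, example sequence, and helper names here are hypothetical, not from the study):

```python
# Sketch: first-order Markov chain over discrete drought classes.
# Estimates a transition matrix from an observed class sequence and
# predicts the most likely next class.

def transition_matrix(states, n_classes):
    """Row-normalized transition counts: P[a][b] = P(next=b | current=a)."""
    counts = [[0] * n_classes for _ in range(n_classes)]
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    probs = []
    for row in counts:
        s = sum(row)
        probs.append([c / s if s else 0.0 for c in row])
    return probs

def most_likely_next(probs, state):
    """Predicted next class = argmax of the current state's row."""
    row = probs[state]
    return row.index(max(row))

# Hypothetical sequence: 0 = normal, 1 = drought
tm = transition_matrix([0, 0, 1, 0, 0, 1], 2)
```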

Gabor Wavelet Analysis for Face Recognition in Medical Asset Protection (의료자산보호에서 얼굴인식을 위한 가보 웨이블릿 분석)

  • Jun, In-Ja;Chung, Kyung-Yong;Lee, Young-Ho
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.11
    • /
    • pp.10-18
    • /
    • 2011
  • Medical asset protection is important in every medical institution, especially because of the law on private medical record protection, and face recognition for this protection is one of the most interesting and challenging problems. In recognizing human faces, the distortion of face images can be caused by changes of pose, illumination, expression, and scale, and it is difficult to recognize faces because of the locations and directions of lights. To overcome these problems, this paper presents an analysis of the coefficients of Gabor wavelets, kernel decision, feature points, and kernel size for face recognition in CCTV surveillance. The proposed method consists of three analyses: the first selects the kernel from images, the second analyzes coefficients for kernel sizes, and the last measures changes in Gabor kernel size according to changes of image size. Face recognition is processed using the coefficients from the experimental results, with a success rate of 97.3%. Ultimately, this paper suggests an empirical application to verify the adequacy and validity of the proposed method. Accordingly, satisfaction and the quality of services in the face recognition area will be improved.
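A Gabor kernel of the kind analyzed above can be generated as follows; this is the generic textbook form with illustrative parameters (size, wavelength, orientation, sigma), not the paper's tuned kernel:

```python
# Sketch: real part of a 2-D Gabor kernel -- a Gaussian envelope modulating
# a cosine carrier at orientation theta. Parameter values are illustrative.
import math

def gabor_kernel(size, wavelength, theta, sigma):
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)   # rotated coordinate
            env = math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
            row.append(env * math.cos(2.0 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel

k = gabor_kernel(5, 4.0, 0.0, 2.0)   # 5x5 kernel, horizontal carrier
```

Face features are then typically extracted by convolving the image with a bank of such kernels at several sizes and orientations and collecting the response coefficients at feature points.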

FPGA-based One-Chip Architecture and Design of Real-time Video CODEC with Embedded Blind Watermarking (블라인드 워터마킹을 내장한 실시간 비디오 코덱의 FPGA기반 단일 칩 구조 및 설계)

  • 서영호;김대경;유지상;김동욱
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.8C
    • /
    • pp.1113-1124
    • /
    • 2004
  • In this paper, we propose a hardware (H/W) structure which can compress and reconstruct the input image in real time, and we implement it on an FPGA platform using VHDL (VHSIC Hardware Description Language). All the image processing elements needed for both compression and reconstruction in an FPGA were considered, and each of them was mapped into H/W with a structure efficient for the FPGA. We used the DWT (discrete wavelet transform), which transforms the data from the spatial domain to the frequency domain, because we considered Motion JPEG2000 as the application. The implemented H/W is separated into a data path part and a control part. The data path part consists of image processing blocks and data processing blocks. The image processing blocks consist of the DWT kernel for DWT filtering, a quantizer/Huffman encoder, an inverse adder/buffer for adding the low-frequency coefficients to the high-frequency ones in the inverse DWT operation, and a Huffman decoder. There are also interface blocks for communicating with the external application environment and timing blocks for buffering between the internal blocks. The global operations of the designed H/W are image compression and reconstruction, operated by the unit of a field synchronized with the A/D converter. The implemented H/W used 69% (16980) of the LABs (Logic Array Blocks) and 9% (28352) of the ESBs (Embedded System Blocks) in an ALTERA APEX20KC EP20K600CB652-7 FPGA chip, and it operated stably at a 70 MHz clock frequency, verifying real-time operation of 60 fields/sec (30 frames/sec).
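The inverse adder step described above (adding low-frequency to high-frequency coefficients during the inverse DWT) can be illustrated with a 1-D Haar analysis/synthesis pair in software; this is a conceptual sketch of the transform, not the VHDL design:

```python
# Sketch: 1-level 1-D Haar DWT. Analysis splits a signal into low- and
# high-frequency halves; synthesis reconstructs by adding/subtracting the
# high-frequency coefficient to/from the low-frequency one.

def haar_analysis(sig):
    """Even-length input -> (averages, half-differences)."""
    low = [(a + b) / 2.0 for a, b in zip(sig[::2], sig[1::2])]
    high = [(a - b) / 2.0 for a, b in zip(sig[::2], sig[1::2])]
    return low, high

def haar_synthesis(low, high):
    """Perfect reconstruction: x0 = l + h, x1 = l - h."""
    out = []
    for l, h in zip(low, high):
        out += [l + h, l - h]
    return out

low, high = haar_analysis([1.0, 2.0, 3.0, 4.0])
restored = haar_synthesis(low, high)
```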

Identification of Subsurface Discontinuities via Analyses of Borehole Synthetic Seismograms (시추공 합성탄성파 기록을 통한 지하 불연속 경계면의 파악)

  • Kim, Ji-Soo;Lee, Jae-Young;Seo, Yong-Seok;Ju, Hyeon-Tae
    • The Journal of Engineering Geology
    • /
    • v.23 no.4
    • /
    • pp.457-465
    • /
    • 2013
  • We integrated and correlated datasets from surface and subsurface geophysics, drilling cores, and engineering geology to identify geological interfaces and characterize the joints and fracture zones within the rock mass. The regional geometry of a geologically weak zone was investigated via a fence projection of electrical resistivity data and a borehole image-processing system. Subsurface discontinuities and intensive fracture zones within the rock mass are delineated by cross-hole seismic tomography and analyses of dip directions in rose diagrams. The dynamic elastic modulus is studied in terms of the P-wave velocity and Poisson's ratio. Subsurface discontinuities, which are conventionally identified using the N value and from core samples, can now be identified from anomalous reflection coefficients (i.e., acoustic impedance contrast) calculated using a pair of well logs, comprising seismic velocity from suspension-PS logging and density from logging. Intensive fracture zones identified in the synthetic seismogram are matched to core loss zones in the drilling core data and to a high concentration of joints in the borehole imaging system. The upper boundaries of fracture zones are correlated to strongly negative amplitude in the synthetic trace, which is constructed by convolution of the optimal Ricker wavelet with a reflection coefficient. The standard deviations of dynamic elastic moduli are higher for fracture zones than for a compact rock mass, due to the wide range of velocities resulting from the large numbers of joints and fractures within the zone.
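The synthetic seismogram construction described above (convolving a Ricker wavelet with reflection coefficients computed from velocity and density logs) can be sketched as follows; the log values used are made-up illustrations, not data from the study:

```python
# Sketch: synthetic trace = reflectivity series (from impedance contrasts)
# convolved with a zero-phase Ricker wavelet.
import math

def ricker(f, dt, n):
    """Zero-phase Ricker wavelet, peak frequency f (Hz), sample interval dt (s)."""
    w = []
    for i in range(-n, n + 1):
        t = i * dt
        a = (math.pi * f * t) ** 2
        w.append((1.0 - 2.0 * a) * math.exp(-a))
    return w

def reflection_coeffs(velocity, density):
    """R = (Z2 - Z1) / (Z2 + Z1), with acoustic impedance Z = density * velocity."""
    z = [v * d for v, d in zip(velocity, density)]
    return [(b - a) / (b + a) for a, b in zip(z, z[1:])]

def convolve(a, b):
    """Plain O(n*m) convolution of the reflectivity series with the wavelet."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

# Illustrative two-layer log: velocity in m/s, density in g/cm^3 (made-up values)
rc = reflection_coeffs([2000.0, 3000.0], [2.0, 2.5])
trace = convolve(rc, ricker(25.0, 0.002, 50))
```

A fracture zone's drop in velocity and density produces a negative impedance contrast, hence the strongly negative amplitudes at fracture-zone tops noted in the abstract.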

Seismic AVO Analysis, AVO Modeling, AVO Inversion for understanding the gas-hydrate structure (가스 하이드레이트 부존층의 구조파악을 위한 탄성파 AVO 분석 AVO모델링, AVO역산)

  • Kim Gun-Duk;Chung Bu-Heung
    • Proceedings of the Korea Society for New and Renewable Energy Conference
    • /
    • 2005.06a
    • /
    • pp.643-646
    • /
    • 2005
  • In gas hydrate exploration using seismic reflection data, detection of the BSR (Bottom Simulating Reflector) on the seismic section is the most important step, because the BSR has been interpreted as forming at the base of a gas hydrate zone. The BSR usually shows several dominant qualitative characteristics on a seismic section: wavelet phase reversal compared to the sea-bottom signal, parallelism with the sea bottom, strong amplitude, a masking phenomenon above the BSR, and cross bedding with other geological layers. Even though a candidate BSR can be selected on a seismic section with this guidance, that is not enough to confirm it as a true BSR. Other methods are available for verifying the BSR with reliable quantitative analysis, e.g., interval velocity analysis and AVO (Amplitude Variation with Offset) analysis. AVO analysis can usually be divided into three main parts: AVO analysis itself, AVO modeling, and AVO inversion. AVO analysis is a unique method for detecting a free gas zone on a seismic section directly. It can therefore be a useful method for discriminating a true BSR, which may arise from a Poisson ratio contrast between a high-velocity, partially hydrated sediment layer and a low-velocity, water-saturated gas sediment layer. During AVO interpretation, because the AVO response changes depending on the water saturation ratio, it can be difficult to discriminate the AVO response of a gas layer from that of a dry layer. In that case, AVO modeling is necessary to generate a synthetic seismogram for comparison with the real data; conclusions can then be drawn from the correspondence, or lack of correspondence, between the two seismograms. AVO inversion is a method for deriving a geological model by an iterative operation in which the resulting synthetic seismogram matches the real data seismogram within some tolerance level. AVO inversion is a topic of current research, and for now there is no general consensus on how the process should be done or even whether it is valid for standard seismic data. Unfortunately, no well log data have been acquired from the gas hydrate exploration area in Korea. Instead, well log data and seismic data acquired from a gas sand area located near the gas hydrate exploration area were used for AVO analysis. As a result of AVO modeling, a type III AVO anomaly was confirmed on the gas sand layer. The constants in Castagna's equation for estimating the S-wave velocity were evaluated as A=0.86190 and B=-3845.14431, and the water saturation ratio was 50%. The Zoeppritz equations were used to calculate the reflection coefficients of the synthetic seismogram. For the AVO inversion process, the dataset provided by Hampson-Russell Co. was used.
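As a sketch of how AVO responses vary with offset, the two-term Shuey approximation below is a common linearization of the exact Zoeppritz equations used in the paper; the intercept and gradient values are illustrative, not the study's results:

```python
# Sketch: two-term Shuey approximation of the P-P reflection coefficient,
# R(theta) ~= R0 + G * sin^2(theta), where R0 is the normal-incidence
# intercept and G the gradient. A class III anomaly has R0 < 0 and G < 0,
# so amplitude grows more negative with angle.
import math

def shuey_two_term(r0, g, theta_deg):
    s = math.sin(math.radians(theta_deg))
    return r0 + g * s * s

# Illustrative class III parameters (made-up values)
near = shuey_two_term(-0.1, -0.2, 0.0)    # normal incidence
far = shuey_two_term(-0.1, -0.2, 30.0)    # 30-degree incidence
```

The brightening of a negative reflection with offset (far more negative than near) is the signature that distinguishes a gas-related class III anomaly from an ordinary lithologic reflector.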

Improvement of Flexible Zerotree Coder by Efficient Transmission of Wavelet Coefficients (웨이블렛 계수의 효율적인 전송에 따른 가변제로트리코더의 성능개선)

  • Joo, Sang-Hyun;Shin, Jae-Ho
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.36C no.9
    • /
    • pp.76-84
    • /
    • 1999
  • EZW, proposed by Shapiro, is based on a zerotree constructed so that a parent coefficient in a subband is related to four child coefficients in the next finer subband of similar orientation. This fixed 1-to-4 parent-child treeing is suitable for exploiting hierarchical correlations among subbands, but not for exploiting spatial correlations within a subband. A new treeing by Joo et al. was suggested to exploit those two correlations simultaneously by extending the parent-child relationship in a flexible way. The flexible treeing increases the number of symbols and lowers the entropy compared to the fixed treeing, and therefore better compression can result. In this paper, we suggest two techniques to suppress the increase in symbols. First, a probing bit is generated to avoid redundant scans for insignificant coefficients. Second, since all subbands do not always require the same kind of symbol set, the produced symbols are re-symbolized into binary codes according to a pre-defined procedure. Owing to these techniques, all symbols are generated as binary codes. The binary symbols can be entropy-coded by adaptive arithmetic coding; moreover, the binary symbol stream gives comparatively good performance even without additional entropy coding. Our proposed coding scheme is suggested in two modes: a binary coding mode and an arithmetic coding mode. We evaluate the effectiveness of our modifications by comparing with the original EZW.
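The dominant-pass classification at the heart of EZW can be sketched as follows; the tree encoding (a dict of value and child list) and symbol names are a hypothetical simplification of the fixed 1-to-4 treeing described above:

```python
# Sketch: classify one coefficient for EZW's dominant pass at threshold T.
#   POS/NEG : significant (|value| >= T), positive/negative
#   ZTR     : insignificant zerotree root (all descendants insignificant)
#   IZ      : isolated zero (insignificant, but some descendant significant)

def ezw_symbol(tree, node, T):
    """tree: dict mapping node id -> (value, list of child ids)."""
    val, children = tree[node]
    if abs(val) >= T:
        return "POS" if val >= 0 else "NEG"
    # Insignificant: walk all descendants to decide ZTR vs IZ.
    stack = list(children)
    while stack:
        n = stack.pop()
        v, ch = tree[n]
        if abs(v) >= T:
            return "IZ"
        stack.extend(ch)
    return "ZTR"
```

A ZTR symbol lets the coder skip the entire descendant subtree in one symbol, which is where the zerotree compression gain comes from; the flexible treeing in the paper changes which coefficients count as descendants.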
