• Title/Summary/Keyword: transform domain processing

Color Component Analysis For Image Retrieval (이미지 검색을 위한 색상 성분 분석)

  • Choi, Young-Kwan;Choi, Chul;Park, Jang-Chun
    • The KIPS Transactions:PartB / v.11B no.4 / pp.403-410 / 2004
  • Studies of image analysis as a preprocessing stage for medical image analysis or image retrieval have recently been carried out actively. This paper proposes a way of utilizing color components for image retrieval: retrieval is based on color components, and the color is analyzed with a CLCM (Color Level Co-occurrence Matrix) and statistical techniques. The CLCM proposed in this paper projects color components onto a 3D space through a geometric rotation transform and then interprets the distribution formed by their spatial relationships; it is a 2D histogram built in a color model created by geometrically rotating the original color model, and it is analyzed with statistical techniques. Like the CLCM, GLCM (Gray Level Co-occurrence Matrix) [1] and Invariant Moments [2,3] use a 2D distribution and interpret it with basic statistical techniques. However, even though GLCM and Invariant Moments are optimized for their respective domains, they cannot fully interpret irregular data on spatial coordinates; because they rely only on basic statistics, the reliability of the extracted features is low. To interpret the spatial relationships and weights of the data, this study uses Principal Component Analysis [4,5] from multivariate statistics, and to increase accuracy it projects the color components onto a 3D space, rotates them, and extracts features of the data from all angles.
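As a rough illustration of the PCA step described in the abstract (not the authors' CLCM construction), the following minimal NumPy sketch projects an image's color components into 3D color space and finds the rotation aligned with the directions of greatest variance; the function name and the synthetic test image are hypothetical.

```python
import numpy as np

def principal_color_axes(image_rgb):
    """Project RGB pixels into 3D color space and find the principal axes
    of the color distribution via PCA (eigen-decomposition of the 3x3
    color covariance matrix)."""
    points = image_rgb.reshape(-1, 3).astype(np.float64)
    centered = points - points.mean(axis=0)        # center the color cloud
    cov = np.cov(centered, rowvar=False)           # 3x3 covariance of R, G, B
    eigvals, eigvecs = np.linalg.eigh(cov)         # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]              # largest variance first
    rotation = eigvecs[:, order].T                 # rows = principal directions
    rotated = centered @ rotation.T                # color cloud in rotated axes
    return eigvals[order], rotation, rotated

# Usage with synthetic data; a real application would load an image instead.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
variances, rotation, rotated_cloud = principal_color_axes(image)
print(variances, rotated_cloud.shape)
```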

DCT-domain MPEG-2/H.264 Video Transcoder System Architecture for DMB Services (DMB 서비스를 위한 DCT 기반 MPEG-2/H.264 비디오 트랜스코더 시스템 구조)

  • Lee Joo-Kyong;Kwon Soon-Young;Park Seong-Ho;Kim Young-Ju;Chung Ki-Dong
    • The KIPS Transactions:PartB / v.12B no.6 s.102 / pp.637-646 / 2005
  • Most of the multimedia contents for DMB services are provided as MPEG-2 bit streams. However, they have to be transcoded into H.264 bit streams for practical services because the standard video codec for DMB is H.264. The existing transcoder architecture is the Cascaded Pixel-Domain Transcoding (CPDT) architecture, which consists of an MPEG-2 decoding phase and an H.264 encoding phase. This architecture can easily be implemented from an MPEG-2 decoder and an H.264 encoder without modifying their source code, but it has disadvantages in transcoding time and suffers from the DCT-mismatch problem. In this paper, we propose two kinds of transcoder architecture, DCT-OPEN and DCT-CLOSED, to complement the CPDT architecture. Although DCT-OPEN has lower PSNR than CPDT due to the drift problem, it is efficient for real-time transcoding. In contrast, the DCT-CLOSED architecture has a PSNR advantage over CPDT at the cost of transcoding time.
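As a small illustration of what "DCT domain" means in this context, the sketch below moves an 8x8 block between the pixel and transform domains using SciPy's floating-point type-II DCT. The actual MPEG-2 and H.264 transforms are integer approximations, so this is a conceptual stand-in rather than either codec's transform.

```python
import numpy as np
from scipy.fft import dctn, idctn

def block_to_dct(block):
    """Forward 8x8 2D DCT (type-II, orthonormal): the representation a
    DCT-domain transcoder works on instead of reconstructed pixels."""
    return dctn(block, type=2, norm="ortho")

def dct_to_block(coeffs):
    """Inverse 8x8 2D DCT back to the pixel domain."""
    return idctn(coeffs, type=2, norm="ortho")

# Round-trip check on a random 8x8 block (synthetic data).
block = np.random.default_rng(1).integers(0, 256, (8, 8)).astype(float)
coeffs = block_to_dct(block)
assert np.allclose(dct_to_block(coeffs), block)
```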

The Consideration for Optimum 3D Seismic Processing Procedures in Block II, Northern Part of South Yellow Sea Basin (대륙붕 2광구 서해분지 북부지역의 3D전산처리 최적화 방안시 고려점)

  • Ko, Seung-Won;Shin, Kook-Sun;Jung, Hyun-Young
    • The Korean Journal of Petroleum Geology / v.11 no.1 s.12 / pp.9-17 / 2005
  • In the main target area of Block II, large-scale faults occur below the unconformity developed at around 1 km depth. The contrast of seismic velocity around the unconformity is generally so large that strong multiples and abrupt velocity variation would deteriorate the quality of the migrated section through serious distortion. More than 15 kinds of data processing techniques have been applied to improve the image resolution for the structures formed by this active crustal activity. Bad and noisy traces were first edited on the common shot gathers to get rid of acquisition problems arising from unfavorable conditions such as weather changes during data acquisition. Correction of amplitude attenuation caused by spherical divergence and inelastic attenuation has also been applied. A mild F-K filter was used to attenuate coherent noise such as guided waves and side scatter. Predictive deconvolution has been applied before stacking to remove peg-leg multiples and water reverberations. Velocity analysis was conducted at 2 km intervals to analyze the migration velocity, and it was iterated to obtain a high-fidelity image. The strum noise caused by the streamer was completely removed by applying predictive deconvolution in the time-space and ${\tau}-P$ domains. Residual multiples caused by thin layers or the water bottom were eliminated through a parabolic Radon transform demultiple process. Migration using a curved-ray Kirchhoff-style algorithm has been applied to the stacked data. The velocity obtained after several iterations of MVA (migration velocity analysis) was used as the migration velocity instead of the DMO velocity. Using various testing methods, optimum seismic processing parameters can be obtained for structural and stratigraphic interpretation in Block II of the Yellow Sea Basin.
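Of the steps listed in the abstract, gapped predictive deconvolution is compact enough to sketch. The single-trace NumPy/SciPy example below designs a prediction filter from the trace autocorrelation and subtracts the predictable part (e.g. reverberations); the filter length, gap, and prewhitening values are illustrative assumptions, not the parameters used in the processing described above.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def predictive_decon(trace, filt_len=20, gap=1, prewhite=0.01):
    """Gapped predictive deconvolution of a single trace: design a
    prediction filter from the autocorrelation (with slight prewhitening)
    and subtract the delayed prediction, leaving the prediction error."""
    n = len(trace)
    full = np.correlate(trace, trace, mode="full")
    r = full[n - 1 : n - 1 + filt_len + gap]       # autocorrelation lags 0..
    col = r[:filt_len].copy()
    col[0] *= 1.0 + prewhite                       # prewhitening for stability
    rhs = r[gap : gap + filt_len]                  # lags gap .. gap+filt_len-1
    a = solve_toeplitz((col, col), rhs)            # prediction filter
    predicted = np.zeros(n)
    for k, ak in enumerate(a):                     # build the delayed prediction
        lag = gap + k
        predicted[lag:] += ak * trace[: n - lag]
    return trace - predicted
```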

An Iterative, Interactive and Unified Seismic Velocity Analysis (반복적 대화식 통합 탄성파 속도분석)

  • Suh Sayng-Yong;Chung Bu-Heung;Jang Seong-Hyung
    • Geophysics and Geophysical Exploration / v.2 no.1 / pp.26-32 / 1999
  • Among the various seismic data processing steps, velocity analysis is the most time-consuming and labor-intensive. For production seismic data processing, a good velocity analysis tool as well as a high-performance computer is required, and the tool must give fast and accurate velocity analysis. There are two different approaches to velocity analysis, batch and interactive. In batch processing, a velocity plot is made at every analysis point; generally, the plot consists of a semblance contour, a super gather, and a stack panel, and the interpreter chooses the velocity function by analyzing the plot. This technique is highly dependent on the interpreter's skill and requires considerable human effort. As high-speed graphics workstations have become more popular, various interactive velocity analysis programs have been developed. Although these programs enable faster picking of velocity nodes with a mouse, their main improvement is simply the replacement of the paper plot by the graphics screen. The velocity spectrum is highly sensitive to the presence of noise, especially the coherent noise often found in the shallow region of marine seismic data. For accurate velocity analysis, this noise must be removed before the spectrum is computed; the analysis must also be carried out by carefully choosing the location of the analysis point and accurately computing the spectrum. The analyzed velocity function must be verified by mute and stack, and the sequence usually must be repeated. Therefore an iterative, interactive, and unified velocity analysis tool is highly desirable. An interactive velocity analysis program, xva (X-Window based Velocity Analysis), was developed. The program handles all processes required in velocity analysis, such as composing the super gather, computing the velocity spectrum, NMO correction, mute, and stack. Most parameter changes lead to the final stack via a few mouse clicks, thereby enabling iterative and interactive processing. A simple trace indexing scheme is introduced, and a program to make the index of the Geobit seismic disk file was developed; the index is used to reference the original input, i.e., the CDP sort, directly. A technique for transforming the mute function between the T-X domain and the NMOC domain is introduced and adopted in the program. The result of the transform is similar to the remove-NMO technique in suppressing shallow noise such as the direct and refracted waves, but it has two improvements: no interpolation error and very fast computation. With this technique, mute times can easily be designed in the NMOC domain and applied to the super gather in the T-X domain, thereby producing a more accurate velocity spectrum interactively. The xva program consists of 28 files, 12,029 lines, 34,990 words, and 304,073 characters. It references Geobit utility libraries and can be installed in an environment where Geobit is preinstalled. The program runs in the X-Window/Motif environment, and its menu is designed according to the Motif style guide. A brief usage of the program is discussed. The program allows fast and accurate seismic velocity analysis, which is necessary for computing AVO (Amplitude Versus Offset) based DHI (Direct Hydrocarbon Indicator) and for making high-quality seismic sections.
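The xva program itself is a Geobit/Motif application and its source is not reproduced here. As a rough sketch of the central computation it automates, the NumPy function below builds an NMO semblance velocity spectrum for one CMP gather; the array layout, nearest-sample NMO interpolation, and boxcar smoothing window are assumptions.

```python
import numpy as np

def semblance_spectrum(gather, offsets, dt, velocities, win=11):
    """Semblance velocity spectrum for a CMP (super) gather.

    gather     : (n_samples, n_traces) array
    offsets    : (n_traces,) source-receiver offsets in metres
    dt         : sample interval in seconds
    velocities : trial stacking velocities in m/s
    """
    n_samp, n_tr = gather.shape
    t0 = np.arange(n_samp) * dt                       # zero-offset times
    spec = np.zeros((n_samp, len(velocities)))
    kernel = np.ones(win) / win                       # boxcar time smoothing
    for iv, v in enumerate(velocities):
        # Hyperbolic moveout time for every trace at every t0.
        t = np.sqrt(t0[:, None] ** 2 + (offsets[None, :] / v) ** 2)
        idx = np.clip(np.round(t / dt).astype(int), 0, n_samp - 1)
        nmo = gather[idx, np.arange(n_tr)]            # NMO-corrected gather
        num = np.sum(nmo, axis=1) ** 2                # stacked energy
        den = n_tr * np.sum(nmo ** 2, axis=1) + 1e-12
        spec[:, iv] = np.convolve(num / den, kernel, mode="same")
    return spec
```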

Correction of Accelerogram in Frequency Domain (주파수영역에서의 가속도 기록 보정)

  • Park, Chang Ho;Lee, Dong Guen
    • KSCE Journal of Civil and Environmental Engineering Research / v.12 no.4 / pp.71-79 / 1992
  • In general, the accelerograms of earthquake ground motion, or accelerograms obtained from dynamic tests, contain various errors. These include instrumental errors (magnitude and phase distortion) due to the response characteristics of the accelerometer, digitizing errors concentrated in the low- and high-frequency components, and random errors, and they may be detrimental to the results of data processing and dynamic analysis. An efficient method that can correct the errors of the accelerogram is proposed in this study. The correction is accomplished through four steps: 1) a data form appropriate for error correction is prepared using an interpolation method, 2) low- and high-frequency errors of the accelerogram are removed by a band-pass filter between prescribed frequency limits, 3) instrumental errors are corrected using the dynamic equilibrium equation of the accelerometer, and 4) velocity and displacement are obtained by integrating the corrected accelerogram. Infinite impulse response (IIR) and finite impulse response (FIR) filters are generally used as band-pass filters. In the proposed error correction procedure, the deficiencies of FIR and IIR filters are reduced, and by using the differentiation and integration properties of the Fourier transform, the accuracy of instrument correction and integration is improved.
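A minimal sketch of steps 2) and 4) is given below, assuming an evenly sampled accelerogram and an ideal brick-wall band-pass; the instrument correction of step 3) via the accelerometer's dynamic equilibrium equation is not modeled. Integration is done in the frequency domain by dividing by iω, which is the Fourier-transform property the abstract refers to.

```python
import numpy as np

def bandpass_and_integrate(accel, dt, f_lo=0.1, f_hi=25.0):
    """Band-pass an accelerogram in the frequency domain and integrate it
    twice (division by i*omega) to obtain velocity and displacement."""
    n = len(accel)
    freq = np.fft.rfftfreq(n, d=dt)
    spec = np.fft.rfft(accel)
    band = (freq >= f_lo) & (freq <= f_hi)        # ideal band-pass mask
    spec_bp = np.where(band, spec, 0.0)
    omega = 2.0 * np.pi * freq
    with np.errstate(divide="ignore", invalid="ignore"):
        vel_spec = np.where(band, spec_bp / (1j * omega), 0.0)
        disp_spec = np.where(band, vel_spec / (1j * omega), 0.0)
    return (np.fft.irfft(spec_bp, n),             # corrected acceleration
            np.fft.irfft(vel_spec, n),            # velocity
            np.fft.irfft(disp_spec, n))           # displacement
```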

DCT Based Watermarking Technique Using Region of Interest (관심영역을 이용한 DCT기반 워터마킹 기법)

  • Shin, Jae-Wook;Jeong, Dong-Seok
    • Journal of the Institute of Electronics Engineers of Korea SP / v.37 no.1 / pp.16-26 / 2000
  • The proposed method inserts watermark information not into the whole image region but only into regions of interest (ROIs). To extract the ROIs, we divide the original image into sub-blocks and use a modified version of Shi-Kuo Chang's PIM (picture information measure) as the criterion for selecting the ROIs. Considering directional information and frequency bands, we insert the watermark information into the sub-blocks in the DCT domain. The proposed method reduces distortion compared with other methods that utilize the whole image as the ROI, producing much less damaged images, and images processed by the proposed algorithm are more robust to changes caused by signal processing operations such as resampling, clipping, noise, and so on. Also, due to the block-based watermark insertion, the proposed method is robust to image compression processes such as JPEG and MPEG.
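As a hedged illustration of block-based embedding in the DCT domain, the sketch below inserts one watermark bit per 8x8 block by perturbing a mid-frequency coefficient; it omits the paper's PIM-based ROI selection and directional/frequency-band weighting, and the block size, coefficient position, and embedding strength are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_watermark(image, watermark_bits, strength=8.0, block=8):
    """Embed one bit per 8x8 block by shifting a mid-frequency DCT
    coefficient up or down. Blocks are taken in raster order; an ROI
    scheme would instead embed only in selected high-information blocks."""
    img = image.astype(float).copy()
    h, w = img.shape
    bits = iter(watermark_bits)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            try:
                bit = next(bits)
            except StopIteration:
                return img                       # all bits embedded
            coeffs = dctn(img[y:y + block, x:x + block], norm="ortho")
            coeffs[3, 2] += strength if bit else -strength
            img[y:y + block, x:x + block] = idctn(coeffs, norm="ortho")
    return img
```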

Reduction of Structural and Computational Complexity in IMD Reduction Method of the PTS-based OFDM Communication System (PTS 방식의 OFDM 통신 시스템에서 IMD 저감 기법의 복잡도와 계산량 저감)

  • Kim, Seon-Ae;Lee, Il-Jin;Baek, Gwang-Hoon;Ryu, Heung-Gyoon
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.8A / pp.583-591 / 2009
  • An OFDM (orthogonal frequency division multiplexing) signal with high PAPR (peak-to-average power ratio) produces nonlinear distortion and/or decreases the power efficiency of the HPA (high power amplifier). The IMD (inter-modulation distortion) reduction method was therefore proposed to reduce the nonlinear distortion, and it shows better BER (bit error rate) performance than PAPR reduction methods. However, the IMD reduction method has an inherent problem: system complexity and processing time increase because an FFT (fast Fourier transform) processor is added to the transmitter and the decision criterion is computed in the frequency domain. In this paper, therefore, we propose a new IMD reduction method that simplifies the structure and computation of the IMD calculation. We apply the proposed method to an OFDM system using the PTS (partial transmit sequence) scheme and compare the computational complexity of the conventional and proposed IMD reduction methods. The proposed method reduces system size and computational complexity while achieving almost the same BER performance as the conventional IMD reduction method.
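For context, the sketch below implements the conventional PTS phase search that both the PAPR and IMD approaches build on, using PAPR as the selection metric; the paper's IMD criterion and its proposed low-complexity computation are not reproduced here, and the interleaved partition and four-phase candidate set are assumptions.

```python
import numpy as np
from itertools import product

def pts_min_papr(symbols, n_blocks=4, phases=(1, -1, 1j, -1j)):
    """Classic PTS: split the subcarriers into disjoint blocks, weight each
    block's partial transmit sequence by a candidate phase, and keep the
    combination with the lowest PAPR."""
    n = len(symbols)
    parts = []
    for b in range(n_blocks):                        # partial transmit sequences
        masked = np.zeros(n, dtype=complex)
        masked[b::n_blocks] = symbols[b::n_blocks]   # interleaved partition
        parts.append(np.fft.ifft(masked))
    best_papr, best_signal = np.inf, None
    for combo in product(phases, repeat=n_blocks - 1):
        weights = (1,) + combo                       # first phase fixed to 1
        x = sum(w * p for w, p in zip(weights, parts))
        papr = np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2)
        if papr < best_papr:
            best_papr, best_signal = papr, x
    return best_signal, 10.0 * np.log10(best_papr)
```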

The Redundancy Reduction Using Fuzzy C-means Clustering and Cosine Similarity on a Very Large Gas Sensor Array for Mimicking Biological Olfaction (생물학적 후각 시스템을 모방한 대규모 가스 센서 어레이에서 코사인 유사도와 퍼지 클러스터링을 이용한 중복도 제거 방법)

  • Kim, Jeong-Do;Kim, Jung-Ju;Park, Sung-Dae;Byun, Hyung-Gi;Persaud, K.C.;Lim, Seung-Ju
    • Journal of Sensor Science and Technology / v.21 no.1 / pp.59-67 / 2012
  • It has been reported that the latest sensor technology allows a 65,536-element conductive polymer sensor array to be made with broad but overlapping selectivity to different families of chemicals, emulating the characteristics found in biological olfaction. However, such massive redundancy brings considerable error and risk, as well as an inordinate amount of computation time and local minima in signal processing, e.g., in neural networks. In this paper, we propose a new method to reduce the number of sensors used for analysis by reducing redundancy between sensors and removing unstable sensors using cosine similarity, and by selecting representative sensors with the FCM (Fuzzy C-Means) algorithm. Only the representative sensors are then used in the analysis. We also introduce the DWT (Discrete Wavelet Transform) for data compression in the time domain as preprocessing. Through experimental trials, we performed a comparative analysis of gas sensor data with and without reduced redundancy, and the feasibility and superiority of the proposed methods are confirmed.
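The cosine-similarity screening step can be sketched compactly. The function below keeps only sensors whose response vectors are not nearly parallel to an already-kept sensor; the similarity threshold and the "first sensor in a group is the representative" rule are assumptions, whereas the paper selects representatives with fuzzy C-means and first compresses the time-domain responses with a DWT.

```python
import numpy as np

def drop_redundant_sensors(responses, threshold=0.98):
    """Greedy redundancy reduction: keep a sensor only if its cosine
    similarity to every already-kept sensor is below the threshold.

    responses : (n_sensors, n_samples) array of sensor time responses.
    Returns the indices of the retained (representative) sensors.
    """
    norms = np.linalg.norm(responses, axis=1, keepdims=True)
    unit = responses / np.maximum(norms, 1e-12)    # unit-length response vectors
    similarity = unit @ unit.T                     # pairwise cosine similarity
    keep = []
    for i in range(len(responses)):
        if all(similarity[i, j] < threshold for j in keep):
            keep.append(i)                         # not redundant with kept set
    return keep
```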

Analysis of Utility Metering Data for Estimation of User Abnormal Life Status (사용자 비정상 생활상태 추정을 위한 유틸리티 검침 데이터 분석)

  • Baek, Jong-Mock;Kim, Byung-Gi
    • Journal of the Korea Society of Computer and Information / v.16 no.8 / pp.85-93 / 2011
  • In this paper, we analyzed the functional elements of the PLC-based integrated meter-reading system operating in Mok-dong, Seoul, and studied how to improve its vulnerabilities. We also propose an efficient method for estimating abnormal life status through frequency-domain processing of utility meter readings. We found that even after removing the high-frequency components from the raw meter data, the graph still maintains its original characteristics, and the inverse-transformed data gives a simpler and smoother curve than the original pattern. The original graph is not well suited for deciding whether a resident's life pattern is normal, whereas the frequency-processed graph has a simple and intuitive pattern.
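A minimal sketch of the frequency-domain smoothing described above: transform the metering time series, zero out the high-frequency part of the spectrum, and inverse-transform. The fraction of the spectrum kept is an assumption, not a value from the paper.

```python
import numpy as np

def lowpass_meter_series(readings, keep_fraction=0.1):
    """Remove high-frequency components from a metering time series and
    inverse-transform, yielding the smoother curve used to judge whether
    the usage pattern looks normal."""
    spec = np.fft.rfft(np.asarray(readings, dtype=float))
    cutoff = max(1, int(len(spec) * keep_fraction))
    spec[cutoff:] = 0.0                            # discard high frequencies
    return np.fft.irfft(spec, n=len(readings))
```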

Tracking and Interpretation of Moving Object in MPEG-2 Compressed Domain (MPEG-2 압축 영역에서 움직이는 객체의 추적 및 해석)

  • Mun, Su-Jeong;Ryu, Woon-Young;Kim, Joon-Cheol;Lee, Joon-Hoan
    • The KIPS Transactions:PartB / v.11B no.1 / pp.27-34 / 2004
  • This paper proposes a method to trace and interpret a moving object based on information that can be obtained directly from an MPEG-2 compressed video stream without a decoding process. In the proposed method, the motion flow is constructed from the motion vectors included in the compressed video. We calculate the amount of pan, tilt, and zoom associated with camera operations using the generalized Hough transform. The local object motion can then be extracted from the motion flow after compensation with the parameters of the global camera motion. Initially, the moving object to be traced is designated by the user via a bounding box; thereafter, automatic tracking is performed based on the accumulated motion flows according to the area contributions. Also, in order to reduce the cumulative tracking error, the object area is reshaped in the first I-frame of each GOP by matching the DCT coefficients. The proposed method improves computation speed because the information is obtained directly from the MPEG-2 compressed video, but the object boundary is limited to macroblocks rather than pixels. Consequently, the method is suited to approximate object tracking rather than accurate object tracing because of the limited information available in the compressed video data.
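The paper estimates pan, tilt, and zoom with a generalized Hough transform over the macroblock motion vectors; as a simpler stand-in for that step, the sketch below fits a linear global-motion model to the motion vectors by least squares. It only illustrates camera-motion compensation and is not the authors' algorithm.

```python
import numpy as np

def estimate_pan_tilt_zoom(positions, motion_vectors):
    """Least-squares fit of the model  mv = (zoom - 1) * position + (pan, tilt).

    positions      : (N, 2) macroblock center coordinates
    motion_vectors : (N, 2) motion vectors (u, v) from the compressed stream
    Returns (zoom, pan, tilt).
    """
    x, y = positions[:, 0], positions[:, 1]
    u, v = motion_vectors[:, 0], motion_vectors[:, 1]
    ones, zeros = np.ones_like(x), np.zeros_like(x)
    # Stack the u- and v-equations into one system in (zoom - 1, pan, tilt).
    A = np.vstack([np.column_stack([x, ones, zeros]),
                   np.column_stack([y, zeros, ones])])
    b = np.concatenate([u, v])
    (zoom_minus_1, pan, tilt), *_ = np.linalg.lstsq(A, b, rcond=None)
    return 1.0 + zoom_minus_1, pan, tilt
```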