• Title/Summary/Keyword: feature compression

Search results: 209

Feature Extraction of Disease Region in Stomach Images Based on DCT (DCT기반 위장영상 질환부위의 특징추출)

  • Ahn, Byeoung-Ju;Lee, Sang-Bock
    • Journal of the Korean Society of Radiology / v.6 no.3 / pp.167-171 / 2012
  • In this paper, we present an algorithm to extract features of disease regions in digital stomach images. For feature extraction, the DCT coefficients of the gastrointestinal image matrix were obtained. Since the DCT coefficient matrix concentrates its energy in the low-frequency region, we extracted 128 feature parameters from that region. The extracted feature parameters can be used for differential compression in PACS and as input parameters for CAD.
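
The pipeline this abstract describes is straightforward to sketch. Below is a minimal Python illustration of extracting low-frequency DCT features from an image patch; the `low_freq_features` helper and the zigzag-like ordering are assumptions for illustration, since the abstract does not specify how the 128 coefficients are ordered.

```python
# Minimal sketch of DCT-based low-frequency feature extraction.
import numpy as np
from scipy.fftpack import dct

def dct2(block: np.ndarray) -> np.ndarray:
    """2-D type-II DCT with orthonormal scaling."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def low_freq_features(image: np.ndarray, n_features: int = 128) -> np.ndarray:
    """Return the n_features lowest-frequency DCT coefficients.

    Coefficients are ranked by the sum of their row and column indices,
    a simple stand-in for a zigzag scan (an assumption, not the paper's rule).
    """
    coeffs = dct2(image.astype(np.float64))
    rows, cols = np.indices(coeffs.shape)
    order = np.argsort((rows + cols).ravel(), kind="stable")
    return coeffs.ravel()[order][:n_features]

# Example on a synthetic 64x64 patch standing in for a stomach-image region.
patch = np.random.default_rng(0).integers(0, 256, (64, 64))
features = low_freq_features(patch)
print(features.shape)  # (128,)
```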

CNN based Image Restoration Method for the Reduction of Compression Artifacts (압축 왜곡 감소를 위한 CNN 기반 이미지 화질개선 알고리즘)

  • Lee, Yooho;Jun, Dongsan
    • Journal of Korea Multimedia Society / v.25 no.5 / pp.676-684 / 2022
  • As realistic media spread across various image processing areas, image and video compression is one of the key technologies enabling real-time applications over limited network bandwidth. Image and video compression generally causes unnecessary compression artifacts, such as blocking artifacts and ringing effects. In this study, we propose a Deep Residual Channel-attention Network, called DRCAN, which consists of an input layer, a feature extractor, and an output layer. Experimental results showed that the proposed DRCAN reduced the total memory size and the inference time by as much as 47% and 59%, respectively. In addition, DRCAN achieves a better peak signal-to-noise ratio and structural similarity index measure for compressed images than previous methods.
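
The abstract names the network's parts but not its internals. The following PyTorch sketch shows what a residual channel-attention block, the construct DRCAN's name points to, typically looks like; the layer sizes and reduction ratio are assumptions, not the paper's actual architecture.

```python
# Sketch of a residual channel-attention block (sizes are illustrative).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: global spatial average
        self.fc = nn.Sequential(              # excitation: per-channel weights
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(self.pool(x))      # rescale each channel

class ResidualChannelAttentionBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            ChannelAttention(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)               # residual connection

x = torch.randn(1, 64, 32, 32)
print(ResidualChannelAttentionBlock()(x).shape)  # torch.Size([1, 64, 32, 32])
```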

On the laboratory investigations into the one-dimensional compression behaviour of iron tailings

  • Ismail A. Okewale;Matthew R. Coop;Christoffel H. Grobler
    • Geomechanics and Engineering / v.35 no.4 / pp.437-447 / 2023
  • Failures of tailings dams have caused irreparable damage to human lives, assets, and the environment, resulting in great economic, social, and environmental challenges worldwide. The mechanical behaviour of tailings has therefore received some attention, but knowledge and understanding of the mechanics of iron tailings are still limited. This study investigates the mechanics of iron tailings from Nigeria, considering grading, the effects of fabric resulting from different sample preparations, and the possibility of non-convergent behaviour. This was achieved by conducting a series of one-dimensional compression tests in conjunction with index, microstructural, chemical and mineralogical tests. The materials are predominantly poorly graded, non-clayey and non-plastic. The tailings are characterised by angular particles with no obvious particle aggregations and are dominated by silicon, iron, aluminium, haematite and quartz. The compression paths do not converge and unique normal compression lines are not found, an important feature of the transitional mode of behaviour. The behaviour of these iron tailings therefore depends on the initial specific volume. The preparation methods also affect the compression paths of the samples. The grading of the samples influences the degree of transitional behaviour, while the preparation methods affect the degree of convergence. The transitional mode of behaviour in the iron tailings investigated here is very strong.

Compression method of feature based on CNN image classification network using Autoencoder (오토인코더를 이용한 CNN 이미지 분류 네트워크의 feature 압축 방안)

  • Go, Sungyoung;Kwon, Seunguk;Kim, Kyuheon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.11a / pp.280-282 / 2020
  • As services requiring machine-to-machine communication, such as the Internet of Things (IoT) and autonomous driving, continue to grow, the need to generate and compress data optimized for machine tasks is increasing. With technologies combining IoT and artificial intelligence (AI) attracting attention, research is under way on transmitting the features extracted by deep learning models from devices to the cloud, and the international standardization body MPEG is developing a standard for Video Coding for Machines (VCM). The convolutional neural network (CNN) is the most representative deep learning method for feature extraction. An autoencoder is a deep-learning-based image compression scheme that gives the input and output layers the same structure so that the output approximates the input as closely as possible, while compressing the data by making the hidden layer smaller than the input layer to reduce dimensionality. In this paper, we exploit this property of autoencoders and propose a method that applies an autoencoder to the features extracted from the convolutional layers of a CNN-based image classification network in order to compress them.
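
A minimal PyTorch sketch of the proposed idea: a small autoencoder whose bottleneck is narrower than the feature map it receives, trained to reconstruct its input. The channel counts and the 1x1-convolution design are illustrative assumptions; the paper's actual autoencoder layout is not given in the abstract.

```python
# Sketch: compress a CNN feature map with a narrow-bottleneck autoencoder.
import torch
import torch.nn as nn

class FeatureAutoencoder(nn.Module):
    def __init__(self, channels: int = 256, bottleneck: int = 64):
        super().__init__()
        # Encoder: shrink the channel dimension (device side).
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, bottleneck, 1),
            nn.ReLU(inplace=True),
        )
        # Decoder: reconstruct the original feature map (cloud side).
        self.decoder = nn.Conv2d(bottleneck, channels, 1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(feat))

# Example: a feature map from a hypothetical CNN classifier backbone.
feat = torch.randn(1, 256, 14, 14)
model = FeatureAutoencoder()
recon = model(feat)
loss = nn.functional.mse_loss(recon, feat)  # train to approximate the input
print(recon.shape, loss.item())
```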

Design of Adaptive Quantization Tables and Huffman Tables for JPEG Compression of Medical Images (의료영상의 JPEG 압축을 위한 적응적 양자화 테이블과 허프만 테이블의 설계)

  • 양시영;정제창;박상규
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.6C / pp.824-833 / 2004
  • Due to bandwidth and storage limitations, medical images need to be compressed before transmission and storage. The DICOM (Digital Imaging and Communications in Medicine) specification, the standard for medical images, provides a mechanism for using the JPEG still image compression standard. In this paper, we explain how medical images are compressed with the JPEG standard and propose two methods for JPEG compression. First, because medical images differ from natural images in their optical characteristics, we propose a method that adaptively designs the quantization table using spectrum analysis. Second, because medical images have a higher pixel depth than natural images, we propose a method that designs the Huffman table around the probability distribution of the symbols. Together these yield a quantization table and a Huffman table suited to medical images. Simulation results show improved performance compared to the default quantization table and the adjusted Huffman table of the JPEG standard. The proposed methods conform to the JPEG standard and can be applied to PACS (Picture Archiving and Communications Systems).
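
To make the role of the quantization table concrete, here is a hedged Python sketch of the JPEG-style quantize/dequantize step with a placeholder table; the paper derives its table from spectrum analysis of the medical images, which is not reproduced here.

```python
# Sketch: where an adaptively designed quantization table slots into a
# JPEG-style pipeline. The flat frequency-ramp table is a placeholder.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):  return dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")
def idct2(b): return idct(idct(b, axis=0, norm="ortho"), axis=1, norm="ortho")

def quantize_block(block: np.ndarray, qtable: np.ndarray) -> np.ndarray:
    """Forward DCT followed by rounding against the quantization table."""
    return np.round(dct2(block) / qtable)

def dequantize_block(q: np.ndarray, qtable: np.ndarray) -> np.ndarray:
    return idct2(q * qtable)

# Placeholder table: coarser quantization at higher frequencies. An
# adaptive design would tune these entries from the image spectrum.
rows, cols = np.indices((8, 8))
qtable = 8.0 + 4.0 * (rows + cols)

# 12-bit pixel depth, as is common for medical images.
block = np.random.default_rng(1).integers(0, 4096, (8, 8)).astype(float)
coded = quantize_block(block, qtable)
decoded = dequantize_block(coded, qtable)
print(np.abs(block - decoded).mean())  # mean reconstruction error
```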

Segmented Douglas-Peucker Algorithm Based on the Node Importance

  • Wang, Xiaofei;Yang, Wei;Liu, Yan;Sun, Rui;Hu, Jun;Yang, Longcheng;Hou, Boyang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.4 / pp.1562-1578 / 2020
  • Vector data compression algorithms can meet requirements at different levels and scales by reducing the data volume of vector graphics, thereby reducing transmission time, processing time, and storage overhead. Because a large threshold leads to comparatively large errors in the Douglas-Peucker vector data compression algorithm, which has difficulty maintaining shape features and selecting thresholds, a segmented Douglas-Peucker algorithm based on node importance is proposed. First, the algorithm uses the vertical chord ratio as the main feature to detect and extract the critical points that contribute most to the shape of the curve, ensuring its basic shape. Then, combined with a radial distance constraint, it selects the maximum point as a critical point and introduces a scale-related threshold to merge and adjust the critical points, realizing local feature extraction between two critical points to meet the accuracy requirements. Finally, the improved algorithm is analyzed and evaluated qualitatively and quantitatively on a large number of different vector data sets. Experimental results indicate that the improved vector data compression algorithm is better than the Douglas-Peucker algorithm in shape retention, compression error, result simplification, and time efficiency.
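
For reference, a compact implementation of the classic Douglas-Peucker algorithm the paper builds on; the segmented, node-importance variant it proposes (vertical chord ratio, radial distance constraint) is not reproduced here.

```python
# Classic Douglas-Peucker polyline simplification.
import numpy as np

def point_line_distance(pts: np.ndarray, a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Perpendicular distance of each point in pts from the line through a and b."""
    if np.allclose(a, b):
        return np.linalg.norm(pts - a, axis=1)
    d = b - a
    cross = d[0] * (pts[:, 1] - a[1]) - d[1] * (pts[:, 0] - a[0])
    return np.abs(cross) / np.linalg.norm(d)

def douglas_peucker(pts: np.ndarray, epsilon: float) -> np.ndarray:
    """Keep the endpoints; recurse on the farthest point if it exceeds epsilon."""
    if len(pts) <= 2:
        return pts
    dists = point_line_distance(pts[1:-1], pts[0], pts[-1])
    if dists.max() <= epsilon:
        return np.vstack([pts[0], pts[-1]])
    i = int(dists.argmax()) + 1
    left = douglas_peucker(pts[: i + 1], epsilon)
    right = douglas_peucker(pts[i:], epsilon)
    return np.vstack([left[:-1], right])  # drop the duplicated split point

t = np.linspace(0, 2 * np.pi, 200)
curve = np.column_stack([t, np.sin(t)])
print(len(douglas_peucker(curve, 0.05)))  # far fewer than 200 points
```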

ECG Signal Compression using Feature Points based on Curvature (곡률을 이용한 특징점 기반 심전도 신호 압축)

  • Kim, Tae-Hun;Kim, Sung-Wan;Ryu, Chun-Ha;Yun, Byoung-Ju;Kim, Jeong-Hong;Choi, Byung-Jae;Park, Kil-Houm
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.5 / pp.624-630 / 2010
  • As electrocardiogram (ECG) signals are generally sampled at frequencies above 200 Hz, a method that compresses them without losing diagnostic information is required for efficient storage and transmission. In this paper, an ECG signal compression method using feature points based on curvature is proposed. The feature points of the P, Q, R, S, and T waves, the critical components of the ECG signal, have large curvature values compared to other vertices, so the proposed method extracts these vertices from the local extrema of the curvature. Furthermore, to minimize reconstruction error, extra vertices are added by an iterative vertex selection method. Experimental results on ECG signals from the MIT-BIH Arrhythmia database show that the vertices selected by the proposed method preserve all feature points of the ECG signals and are more efficient than the AZTEC (Amplitude Zone Time Epoch Coding) method.
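
A minimal sketch of curvature-based feature-point selection on a sampled signal, treating the ECG as the plane curve (t, y) and keeping local curvature maxima. The paper's iterative extra-vertex step for error control is omitted, and the curvature formula used is the standard one for a graph y(t).

```python
# Sketch: pick feature points of a signal at local maxima of curvature.
import numpy as np

def curvature(y: np.ndarray, dt: float = 1.0) -> np.ndarray:
    """Curvature of the curve (t, y): |y''| / (1 + y'^2)^(3/2)."""
    dy = np.gradient(y, dt)
    d2y = np.gradient(dy, dt)
    return np.abs(d2y) / (1.0 + dy**2) ** 1.5

def feature_points(y: np.ndarray, k: int = 20) -> np.ndarray:
    """Indices of the k samples with the largest local-maximum curvature."""
    c = curvature(y)
    local_max = (c[1:-1] > c[:-2]) & (c[1:-1] > c[2:])
    idx = np.where(local_max)[0] + 1
    return np.sort(idx[np.argsort(c[idx])[::-1][:k]])

t = np.linspace(0, 1, 360)
ecg_like = np.exp(-((t - 0.5) ** 2) / 0.0005)  # crude R-wave stand-in
print(feature_points(ecg_like, k=5))
```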

A Preprocessing Algorithm for Efficient Lossless Compression of Gray Scale Images

  • Kim, Sun-Ja;Hwang, Doh-Yeun;Yoo, Gi-Hyoung;You, Kang-Soo;Kwak, Hoon-Sung
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2005.06a / pp.2485-2489 / 2005
  • This paper introduces a new preprocessing scheme that replaces the original data of gray-scale images with particularly ordered data so that lossless compression performs more efficiently. As a preprocessing technique for maximizing the performance of an entropy encoder, the proposed method converts the input image into a more compressible form. Before encoding the input image, the proposed preprocessor counts the co-occurrence frequencies of neighbouring pixel pairs. It then replaces each pair of adjacent gray values with particular ordered numbers based on the measured co-occurrence frequencies. When the reordered image is compressed with an entropy encoder, a higher compression rate can be expected because of the enhanced statistical features of the input image. We show that the lossless compression rate increases by up to 37.85% when comparing preprocessed and non-preprocessed image data compressed with entropy encoders such as Huffman and arithmetic coders.
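
One way to realize the described pair reordering is to rank each pixel value by its co-occurrence frequency given the left neighbour, so that common pairs map to small numbers. The paper's exact mapping is not specified; the sketch below is one invertible variant, assuming the rank tables are shared with the decoder.

```python
# Sketch: rank-based remapping of pixel values conditioned on the left
# neighbour, to skew the distribution toward small values before entropy
# coding (an assumed variant of the paper's pair reordering).
import numpy as np

def build_rank_tables(img: np.ndarray, levels: int = 256) -> np.ndarray:
    """rank[left, v] = frequency rank of value v given left neighbour."""
    counts = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(counts, (img[:, :-1].ravel(), img[:, 1:].ravel()), 1)
    # Higher count -> smaller rank (stable sort keeps ties in value order).
    order = np.argsort(-counts, axis=1, kind="stable")
    rank = np.empty_like(order)
    rows = np.arange(levels)[:, None]
    rank[rows, order] = np.arange(levels)[None, :]
    return rank

def remap(img: np.ndarray, rank: np.ndarray) -> np.ndarray:
    out = img.copy()
    out[:, 1:] = rank[img[:, :-1], img[:, 1:]]  # first column left unchanged
    return out

img = np.random.default_rng(2).integers(0, 256, (64, 64))
remapped = remap(img, build_rank_tables(img))
# The remapped image is skewed toward small values, which lowers the
# entropy seen by a Huffman or arithmetic coder; decoding proceeds column
# by column, so the original left neighbour is always available to invert.
print(img.mean(), remapped.mean())
```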

Blind Classification of Speech Compression Methods using Structural Analysis of Bitstreams (비트스트림의 구조 분석을 이용한 음성 부호화 방식 추정 기법)

  • Yoo, Hoon;Park, Cheol-Sun;Park, Young-Mi;Kim, Jong-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.1 / pp.59-64 / 2012
  • This paper addresses a blind estimation and classification algorithm for speech compression methods that analyzes the structure of compressed bitstreams. Various speech compression methods, including vocoders, have been developed to transmit or store speech signals at very low bitrates, and as a key feature, vocoders inevitably impose a block structure on the bitstream. To classify each compression method, we use the Measure of Inter-Block Correlation (MIBC) to check whether the bitstream contains a block structure and to estimate the block length. Moreover, for compression methods with the same block length, the proposed algorithm identifies the compression method by exploiting the fact that each method has different correlation characteristics at each bit location. Experimental results indicate that the proposed algorithm classifies speech compression methods robustly for various types and lengths of speech signals in noisy environments.
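
MIBC itself is not defined in the abstract, but the underlying idea, that a fixed frame structure makes some bit positions nearly constant across frames, can be sketched. Below, a per-column bit-bias score over candidate frame lengths stands in for the paper's measure; the threshold and the synthetic stream are illustrative assumptions.

```python
# Sketch: estimate a vocoder frame length from bitstream structure by
# folding the stream at candidate lengths and looking for near-constant
# bit positions (a stand-in for the paper's MIBC measure).
import numpy as np

def max_column_bias(bits: np.ndarray, length: int) -> float:
    """Fold the stream into rows of `length` bits and return the largest
    deviation of any column mean from 0.5."""
    n = (len(bits) // length) * length
    cols = bits[:n].reshape(-1, length)
    return float(np.abs(cols.mean(axis=0) - 0.5).max())

def estimate_block_length(bits: np.ndarray, max_len: int = 256,
                          thresh: float = 0.4):
    """Smallest candidate length whose folding exposes near-constant bits."""
    for length in range(8, max_len + 1):
        if max_column_bias(bits, length) >= thresh:
            return length
    return None

# Synthetic vocoder-like stream: 200 frames of 80 bits with a few
# header-like constant bit positions.
rng = np.random.default_rng(3)
frames = rng.integers(0, 2, (200, 80))
frames[:, [0, 1, 13, 57]] = 1
print(estimate_block_length(frames.ravel()))  # -> 80
```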

Vector Quantization for Medical Image Compression Based on DCT and Fuzzy C-Means

  • Supot, Sookpotharom;Nopparat, Rantsaena;Surapan, Airphaiboon;Manas, Sangworasil
    • Proceedings of the IEEK Conference / 2002.07a / pp.285-288 / 2002
  • Compression of magnetic resonance images (MRI) has proved more difficult than for other medical imaging modalities. In an average-sized hospital, many terabytes of digital imaging data (MRI) are generated every year, almost all of which must be kept. Medical image compression is currently performed with a variety of algorithms. In this paper, the Fuzzy C-Means (FCM) algorithm is used for Vector Quantization (VQ). First, a digital image is divided into fixed-size subblocks of 4×4 pixels. By performing a 2-D Discrete Cosine Transform (DCT), we select six DCT coefficients to form the feature vector, and we use the FCM algorithm to construct the VQ codebook. In this way, the algorithm achieves good quality and reduces the processing time of constructing the VQ codebook.
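
A hedged sketch of the pipeline: 4×4 block DCT, six low-frequency coefficients per block as the feature vector, and a minimal fuzzy C-means loop to build the codebook. Which six coefficients the paper keeps, and its FCM settings, are not stated; the choices below are assumptions.

```python
# Sketch: 4x4 block DCT features clustered with a minimal fuzzy C-means
# loop to form a VQ codebook (coefficient choice and FCM settings assumed).
import numpy as np
from scipy.fftpack import dct

def block_features(img: np.ndarray) -> np.ndarray:
    """Six low-frequency DCT coefficients for every 4x4 block."""
    h, w = (s - s % 4 for s in img.shape)
    blocks = img[:h, :w].reshape(h // 4, 4, w // 4, 4).transpose(0, 2, 1, 3)
    coeffs = dct(dct(blocks, axis=2, norm="ortho"), axis=3, norm="ortho")
    idx = [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]  # zigzag-first
    return np.stack([coeffs[..., i, j] for i, j in idx], axis=-1).reshape(-1, 6)

def fuzzy_cmeans(X: np.ndarray, c: int = 16, m: float = 2.0,
                 iters: int = 50, seed: int = 0) -> np.ndarray:
    """Minimal FCM: alternate centroid and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))          # fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]    # weighted centroids
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        U = 1.0 / d ** (2 / (m - 1))                    # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return centers

img = np.random.default_rng(4).integers(0, 256, (64, 64)).astype(float)
codebook = fuzzy_cmeans(block_features(img))
print(codebook.shape)  # (16, 6)
```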
