• Title/Summary/Keyword: Data Compression (데이터 압축)

Design of Spatial Data Compression Methods for Mobile Vector Map Services (모바일 벡터 지도 서비스를 위한 공간 데이터 압축 기법의 설계)

  • 최진오
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2004.05b
    • /
    • pp.358-362
    • /
    • 2004
  • With the rapid advance of computer and communication techniques, demand for mobile internet services is increasing sharply. The main obstacles in mobile vector map service environments, however, are large data volumes and narrow wireless bandwidth. Among the possible solutions, spatial data compression can reduce both the bandwidth load and the client response time. This paper proposes two methods for spatial data compression: a relative coordinates transformation method and a client coordinates transformation method. It also proposes a system architecture for experiments, with which the two methods are evaluated for compression effect and response time.

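The relative coordinates idea lends itself to a short sketch: send the first vertex of a polyline absolutely and every following vertex as a small delta that fits in fewer bytes. This is a minimal illustration of delta coding under assumed integer map coordinates, not the paper's actual encoding; the struct layouts are invented for the example.

```python
import struct

def encode_polyline(points):
    """Delta-encode a polyline: absolute first vertex, then
    per-vertex offsets small enough to pack into 2 bytes each."""
    x0, y0 = points[0]
    out = struct.pack("<2i", x0, y0)           # 8 bytes for the anchor
    px, py = x0, y0
    for x, y in points[1:]:
        dx, dy = x - px, y - py
        # assume neighboring map vertices are close; 'h' = int16
        out += struct.pack("<2h", dx, dy)      # 4 bytes per vertex
        px, py = x, y
    return out

def decode_polyline(buf):
    x, y = struct.unpack_from("<2i", buf, 0)
    points = [(x, y)]
    for off in range(8, len(buf), 4):
        dx, dy = struct.unpack_from("<2h", buf, off)
        x, y = x + dx, y + dy
        points.append((x, y))
    return points

# A 100-vertex line shrinks from 800 bytes (two int32 per vertex)
# to 8 + 99 * 4 = 404 bytes, roughly the halving delta coding targets.
pts = [(131072 + 3 * i, 262144 - 2 * i) for i in range(100)]
assert decode_polyline(encode_polyline(pts)) == pts
```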

Application of wavelet scheme to transmission data for web-monitoring system (웹-모니터링용 전송데이터의 압축을 위한 wavelet 기법의 적용)

  • 이영삼;배금동;김성호
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2004.04a
    • /
    • pp.483-486
    • /
    • 2004
  • Disaster-prevention systems typically store measurements in a data logger installed in the field and, after a fixed interval, transmit the logged data to a remote server for systematic analysis. Building a highly reliable disaster-prevention system calls for installing as many sensors as possible at the measurement site while transmitting their data to the remote server at short intervals. The volume of transmitted data grows as the observation area widens, so compressing the measurements before transmission is desirable for maximizing transmission efficiency. In this study, we apply wavelet theory, known for its strong data compression performance, to a web-monitoring system in order to improve the transmission efficiency of measurement data and to confirm its usefulness.

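As a rough illustration of why wavelets suit measurement streams, the sketch below applies a one-level Haar transform and zeroes small detail coefficients; a smooth sensor signal concentrates its energy in the averages, so most details can be dropped with little reconstruction error. This is a generic Haar example, not the scheme evaluated in the paper.

```python
import numpy as np

def haar_forward(x):
    """One-level Haar transform: pairwise averages and differences."""
    x = np.asarray(x, dtype=float)
    avg = (x[0::2] + x[1::2]) / 2.0
    diff = (x[0::2] - x[1::2]) / 2.0
    return avg, diff

def haar_inverse(avg, diff):
    x = np.empty(2 * len(avg))
    x[0::2] = avg + diff
    x[1::2] = avg - diff
    return x

def compress(x, threshold):
    avg, diff = haar_forward(x)
    diff[np.abs(diff) < threshold] = 0.0   # discard small details
    return avg, diff

# Slowly varying "sensor" signal plus noise: most detail
# coefficients fall below the threshold and need not be sent.
t = np.linspace(0, 1, 512)
signal = np.sin(2 * np.pi * 3 * t) + 0.01 * np.random.randn(512)
avg, diff = compress(signal, threshold=0.02)
recon = haar_inverse(avg, diff)
print(f"details kept: {np.count_nonzero(diff)}/{len(diff)}, "
      f"max error: {np.max(np.abs(recon - signal)):.4f}")
```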

Data Transmitting and Storing Scheme based on Bandwidth in Hadoop Cluster (하둡 클러스터의 대역폭을 고려한 압축 데이터 전송 및 저장 기법)

  • Kim, Youngmin;Kim, Heejin;Kim, Younggwan;Hong, Jiman
    • Smart Media Journal
    • /
    • v.8 no.4
    • /
    • pp.46-52
    • /
    • 2019
  • The volume of data generated and collected at industrial sites and public institutions is growing rapidly. Conventional data processing servers often cope with growing data by scaling up, but in the big data era, when the rate of data generation is exploding, a single server reaches its limits. To overcome them, distributed cluster computing systems that spread data in a scale-out manner have been introduced. Because such systems distribute data across nodes, however, inefficient use of network bandwidth can degrade the performance of the cluster as a whole. In this paper, we propose a scheme that compresses data before transmission in a Hadoop cluster, taking the network bandwidth into account. The proposed scheme considers the bandwidth and the characteristics of each compression algorithm and selects the optimal compression and transmission scheme before transmitting. Experimental results show that the proposed scheme reduces both data transfer time and size.
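
The selection step the abstract describes can be modeled as a simple cost comparison: for each candidate codec, total time ≈ compression time + compressed size / bandwidth, and the cheapest option wins (sending raw data is itself a candidate). The sketch below uses Python's standard-library compressors as hypothetical stand-ins for the codecs the paper actually benchmarks.

```python
import bz2, time, zlib

CODECS = {
    "none": lambda d: d,
    "zlib-1": lambda d: zlib.compress(d, 1),   # fast, modest ratio
    "zlib-9": lambda d: zlib.compress(d, 9),   # slower, better ratio
    "bz2": lambda d: bz2.compress(d),          # slowest, best ratio here
}

def pick_codec(data, bandwidth_bytes_per_s):
    """Choose the codec minimizing compression time + transfer time."""
    best, best_cost = None, float("inf")
    for name, fn in CODECS.items():
        t0 = time.perf_counter()
        out = fn(data)
        cost = (time.perf_counter() - t0) + len(out) / bandwidth_bytes_per_s
        if cost < best_cost:
            best, best_cost = name, cost
    return best

data = b"sensor,timestamp,value\n" * 50000
print(pick_codec(data, 10 * 1024**2))   # fast LAN: light compression wins
print(pick_codec(data, 128 * 1024))     # slow link: heavy compression wins
```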

Edge Preserving Image Compression with Weighted Centroid Neural Network (신경망에 의한 테두리를 보존하는 영상압축)

  • 박동철;우영준
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.24 no.10B
    • /
    • pp.1946-1952
    • /
    • 1999
  • A new image compression method that preserves edge characteristics in reconstructed images using an unsupervised learning neural network is proposed in this paper. Through unsupervised competitive learning, which generalizes the previously proposed Centroid Neural Network (CNN) algorithm with the geometric characteristics of edge areas and the statistical characteristics of image data, more codevectors are allocated to edge areas, yielding more accurate edges in the reconstructed image. Experimental results show that the proposed method produces better edges in reconstructed images than SOM, Modified SOM, and M/R-CNN.

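The core idea, giving edge blocks extra pull so that more codevectors settle in edge regions, can be approximated as weighted k-means over image blocks with weights derived from each block's gradient energy. This is a schematic stand-in for the competitive-learning training the paper proposes; the edge-weight heuristic is an assumption.

```python
import numpy as np

def weighted_vq(blocks, n_codes, weights, iters=20, seed=0):
    """k-means with per-sample weights: the weighted centroid update
    pulls codevectors toward high-weight (edge) blocks."""
    rng = np.random.default_rng(seed)
    codes = blocks[rng.choice(len(blocks), n_codes, replace=False)].copy()
    for _ in range(iters):
        # assign each block to its nearest codevector
        d = ((blocks[:, None, :] - codes[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        for k in range(n_codes):
            m = assign == k
            if m.any():
                w = weights[m][:, None]
                codes[k] = (w * blocks[m]).sum(0) / w.sum()
    return codes, assign

# 4x4 blocks flattened to 16-dim vectors; weight = gradient energy,
# so blocks containing edges count more in the centroid update.
blocks = np.random.rand(500, 16)
weights = 1.0 + 5.0 * np.abs(np.diff(blocks, axis=1)).mean(1)  # crude edge proxy
codes, assign = weighted_vq(blocks, n_codes=16, weights=weights)
print(codes.shape)   # (16, 16): the learned codebook
```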

Efficient CAN Data Compression Algorithm Using Signal Length (신호의 길이 특성을 이용한 효율적인 CAN 데이터 압축 알고리즘)

  • Wu, Yujing;Chung, Jin-Gyun
    • Smart Media Journal
    • /
    • v.3 no.3
    • /
    • pp.9-14
    • /
    • 2014
  • The increasing number of ECUs in automobiles overloads the CAN bus, and consequently the error probability of data transmission rises. Since the transmission time is proportional to the CAN frame length, it is desirable to reduce the frame length. In this paper, we present a CAN message compression method using the Data Length Code (DLC) and bit rearrangement. Simulations using actual CAN data show that the proposed method reduces the transmitted CAN data by up to 54% compared with conventional methods.
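
The paper's exact bit rearrangement cannot be reconstructed from the abstract, but the DLC mechanism it exploits is easy to illustrate: XOR the new frame against the previous one, permute the bytes so the volatile ones come first, and let the DLC shrink the frame by dropping trailing zero bytes. The byte ordering below is an invented example.

```python
def compress_frame(prev, curr, order):
    """Delta + rearrangement: XOR with the previous frame, permute so
    volatile bytes come first, then truncate trailing zeros (the DLC
    field of the CAN frame records the shortened length)."""
    delta = bytes(p ^ c for p, c in zip(prev, curr))
    rearranged = bytes(delta[i] for i in order)
    payload = rearranged.rstrip(b"\x00")
    return payload, len(payload)          # payload plus DLC value

def decompress_frame(prev, payload, order):
    rearranged = payload + b"\x00" * (8 - len(payload))
    delta = bytearray(8)
    for pos, src in enumerate(order):
        delta[src] = rearranged[pos]
    return bytes(p ^ d for p, d in zip(prev, delta))

# Hypothetical 8-byte frames where only bytes 2 and 5 change often:
order = [2, 5, 0, 1, 3, 4, 6, 7]          # volatile bytes first
prev = bytes([10, 20, 30, 40, 50, 60, 70, 80])
curr = bytes([10, 20, 33, 40, 50, 61, 70, 80])
payload, dlc = compress_frame(prev, curr, order)
print(dlc)                                 # 2 bytes instead of 8
assert decompress_frame(prev, payload, order) == curr
```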

Performance Evaluation of ECG Compression Algorithms using Classification of Signals based on PQRST Wave Features (PQRST파 특징 기반 신호의 분류를 이용한 심전도 압축 알고리즘 성능 평가)

  • Koo, Jung-Joo;Choi, Goang-Seog
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.4C
    • /
    • pp.313-320
    • /
    • 2012
  • ECG (electrocardiogram) compression can increase system processing speed as well as reduce the amount of signal transmission and the data storage of long-term records. Whereas conventional performance evaluations of lossy or lossless compression algorithms measure PRD (Percent RMS Difference) and CR (Compression Ratio) from an engineering viewpoint, this paper focuses on evaluating compression algorithms from the viewpoint of the diagnostician who reads the ECG. In general, for compression not to affect diagnosis, the position, length, amplitude, and waveform of the restored PQRST waves should not be damaged. AZTEC, a typical ECG compression algorithm, has had its effectiveness validated in conventional performance evaluations. In this paper, we propose a novel performance evaluation of AZTEC from the diagnostician's viewpoint.
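
The conventional engineering metrics the paper contrasts with its diagnosis-oriented evaluation are simple to compute: PRD = 100 * sqrt(sum((x - x̂)²) / sum(x²)) and CR = original size / compressed size. A minimal computation on toy data:

```python
import numpy as np

def prd(original, reconstructed):
    """Percent RMS Difference between an ECG and its reconstruction."""
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))

def compression_ratio(original_bits, compressed_bits):
    return original_bits / compressed_bits

# Toy example: a sine-like "ECG" with small reconstruction error.
t = np.linspace(0, 1, 360)
x = np.sin(2 * np.pi * t)
x_hat = x + 0.01 * np.random.randn(360)
print(f"PRD = {prd(x, x_hat):.2f}%")
print(f"CR  = {compression_ratio(360 * 11, 1200):.2f}")  # 11-bit samples
```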

Design of high-speed block transmission technology for real-time data duplication (실시간 데이터 이중화를 위한 고속 블록 전송기술 설계)

  • Han, JaeSeung;An, Jae-Hoon;Kim, Young-Hwan;Park, Chang-Won
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2018.07a
    • /
    • pp.445-448
    • /
    • 2018
  • This paper proposes the design of a lossless real-time data duplication system that protects backup-server data from loss when a failure occurs in a duplicated storage system. The design replaces the asynchronous approach, in which the source server's data and the backup server's data do not match 100% at a given time T, with a synchronous one, so that data is backed up in real time as soon as it is created on the source server. To this end, the system builds on LZ4, the fastest compression algorithm suitable for the transmission stage, and further increases compression speed with Intel AVX instructions. The paper also describes a key-exchange scheme and the AES encryption algorithm used to protect the data from security threats during transmission.

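A minimal version of the compress-then-encrypt pipeline the abstract outlines, assuming the third-party lz4 and cryptography packages (the AVX-accelerated LZ4 the paper builds sits below this layer); key exchange is reduced to a locally generated key for illustration.

```python
import os
import lz4.frame                                   # pip install lz4
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def pack_block(block: bytes, key: bytes) -> bytes:
    """Compress a replication block with LZ4, then AES-GCM encrypt it."""
    compressed = lz4.frame.compress(block)
    nonce = os.urandom(12)                         # unique per block
    return nonce + AESGCM(key).encrypt(nonce, compressed, None)

def unpack_block(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return lz4.frame.decompress(AESGCM(key).decrypt(nonce, ciphertext, None))

key = AESGCM.generate_key(bit_length=256)
block = b"journal record " * 4096
sealed = pack_block(block, key)
assert unpack_block(sealed, key) == block
print(f"{len(block)} -> {len(sealed)} bytes on the wire")
```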

Deletion-Based Sentence Compression Using Sentence Scoring Reflecting Linguistic Information (언어 정보가 반영된 문장 점수를 활용하는 삭제 기반 문장 압축)

  • Lee, Jun-Beom;Kim, So-Eon;Park, Seong-Bae
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.3
    • /
    • pp.125-132
    • /
    • 2022
  • Sentence compression is a natural language processing task that generates a concise sentence preserving the important meaning of the original. For grammatically appropriate compression, early studies relied on human-defined linguistic rules. Sequence-to-sequence models, which perform well on various natural language processing tasks such as machine translation, have also been applied to sentence compression. However, rule-based studies require every rule to be defined by hand, and sequence-to-sequence studies require a large amount of parallel data for model training. To address these challenges, Deleter, a sentence compression model that leverages the pre-trained language model BERT, was proposed. Because Deleter compresses sentences using a perplexity-based score computed with BERT, it needs neither linguistic rules nor a parallel dataset. However, because Deleter considers only perplexity, it does not reflect the linguistic information of the words in a sentence, and since the corpora used to pre-train BERT are far from compressed sentences, this can lead to incorrect compression. To address these problems, this paper proposes a method to quantify the importance of linguistic information and reflect it in the perplexity-based sentence score. Furthermore, by fine-tuning BERT with a corpus of news articles, which often contain proper nouns and omit unnecessary modifiers, we let BERT measure a perplexity appropriate for sentence compression. Evaluations on English and Korean datasets confirm that the proposed method improves the compression performance of sentence-scoring-based models.
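
Deleter's perplexity-based deletion score can be sketched with BERT's pseudo-perplexity: mask each token in turn, average the negative log-likelihoods, and prefer the deletion candidate whose remaining sentence scores as most fluent. A minimal sketch assuming the Hugging Face transformers library and the stock bert-base-uncased checkpoint (the paper's linguistic-information weighting would be layered on top of this score):

```python
import math
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def pseudo_perplexity(sentence: str) -> float:
    """Mask each token in turn; average the negative log-likelihoods."""
    ids = tok(sentence, return_tensors="pt")["input_ids"][0]
    nll, count = 0.0, 0
    for i in range(1, len(ids) - 1):            # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = mlm(masked.unsqueeze(0)).logits[0, i]
        nll -= torch.log_softmax(logits, -1)[ids[i]].item()
        count += 1
    return math.exp(nll / count)

# Greedy deletion step: drop whichever word leaves the most fluent rest.
words = "the very large model compresses long sentences well".split()
candidates = [" ".join(words[:i] + words[i + 1:]) for i in range(len(words))]
print(min(candidates, key=pseudo_perplexity))
```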

Introduction to the MPEG Video-based Point Cloud Compression Standard (MPEG Video-based Point Cloud Compression 표준 소개)

  • Jang, Ui-Seon
    • Broadcasting and Media Magazine
    • /
    • v.26 no.2
    • /
    • pp.18-30
    • /
    • 2021
  • This article introduces the MPEG Video-based Point Cloud Compression (V-PCC) standard, recently completed as an international standard. With the emergence of new media applications such as AR/VR, attention is increasingly turning to 3D graphics data; this article reviews the standardization status, main application areas, and key compression techniques of V-PCC, a standard compression technology for point cloud data, whose efficient compression had until now received little attention.

Comparison and analysis of compression algorithms to improve transmission efficiency of manufacturing data (제조 현장 데이터 전송효율 향상을 위한 압축 알고리즘 비교 및 분석)

  • Lee, Min Jeong;Oh, Sung Bhin;Kim, Jin Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.1
    • /
    • pp.94-103
    • /
    • 2022
  • As large amounts of data generated by sensors and devices at manufacturing sites are transmitted to servers or clients, network processing delays and rising storage costs become problems. To solve them while respecting the manufacturing environment, where real-time responsiveness and uninterrupted processes are essential, the QRC (Quotient Remainder Compression) and BL_beta algorithms, which enable real-time lossless compression, were applied to actual manufacturing-site sensor data for the first time. In the experiments, BL_beta achieved a higher compression rate than QRC. When the same data was compressed with QRC after slightly adjusting the data size, the size-adjusted QRC achieved compression rates 35.48% and 20.3% higher than the original QRC and BL_beta algorithms, respectively.
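
The abstract does not define QRC's codeword layout, but the name suggests the quotient/remainder split familiar from Rice-Golomb codes, where a divisor parameter splits each value into a unary quotient and a fixed-width remainder; the sensitivity to "data size" is consistent with tuning that parameter. A generic Rice coder, offered as one plausible reading:

```python
def rice_encode(values, k):
    """Rice code: quotient in unary, remainder in k binary bits.
    Small values (typical of sensor deltas) yield short codewords."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.extend([1] * q + [0])                 # unary quotient
        bits.extend((r >> i) & 1 for i in reversed(range(k)))
    return bits

def rice_decode(bits, k, count):
    values, pos = [], 0
    for _ in range(count):
        q = 0
        while bits[pos] == 1:                      # read unary quotient
            q, pos = q + 1, pos + 1
        pos += 1                                   # skip the 0 terminator
        r = 0
        for _ in range(k):
            r, pos = (r << 1) | bits[pos], pos + 1
        values.append((q << k) | r)
    return values

# Sensor deltas cluster near zero, so k = 2 keeps codewords short.
deltas = [0, 1, 3, 2, 5, 0, 1, 9, 2, 1]
encoded = rice_encode(deltas, k=2)
assert rice_decode(encoded, k=2, count=len(deltas)) == deltas
print(f"{len(deltas) * 8} raw bits -> {len(encoded)} coded bits")
```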