• Title/Summary/Keyword: Redundancy method


Analysis of Feature Map Compression Efficiency and Machine Task Performance According to Feature Frame Configuration Method (피처 프레임 구성 방안에 따른 피처 맵 압축 효율 및 머신 태스크 성능 분석)

  • Rhee, Seongbae;Lee, Minseok;Kim, Kyuheon
    • Journal of Broadcast Engineering
    • /
    • v.27 no.3
    • /
    • pp.318-331
    • /
    • 2022
  • With the recent development of hardware computing devices and software-based frameworks, machine tasks using deep learning networks are expected to be utilized in various industrial fields and in personal IoT devices. However, running a deep learning network on an end device is costly, and transmitting only the machine-task results from a server may not give users the output they requested; to overcome these limitations, Collaborative Intelligence (CI) proposed the transmission of feature maps as a solution. In this paper, an efficient compression method for feature maps, whose data sizes are vast, is analyzed and presented through experiments to support the CI paradigm. The method applies feature map reordering to increase redundancy and thereby improve compression efficiency in traditional video codecs, and proposes a feature frame configuration that uses an image compression format and a video compression format together, improving compression efficiency while maintaining machine-task performance. In the experiments, the proposed method shows a 14.29% BD-rate gain, measured in BPP and mAP, over the feature compression anchor of MPEG-VCM.
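The channel reordering and frame tiling described above can be illustrated with a minimal sketch. The ordering criterion (channel mean) and the tiling layout are assumptions for illustration, not the paper's exact configuration:

```python
import numpy as np

def reorder_channels_by_mean(fmap):
    # Sort channels by ascending mean so neighbouring channels are
    # statistically similar; similar neighbours raise the redundancy a
    # conventional codec can exploit (the criterion here is an assumption).
    order = np.argsort(fmap.mean(axis=(1, 2)))
    return fmap[order], order

def tile_to_frame(fmap, cols):
    # Tile the C channels (each H x W) into one 2-D frame of shape
    # (rows*H, cols*W), zero-padding any unused tile positions.
    c, h, w = fmap.shape
    rows = -(-c // cols)  # ceiling division
    frame = np.zeros((rows * h, cols * w), dtype=fmap.dtype)
    for i in range(c):
        r, q = divmod(i, cols)
        frame[r * h:(r + 1) * h, q * w:(q + 1) * w] = fmap[i]
    return frame
```

The resulting frame can then be fed to an ordinary image or video encoder; the stored `order` is needed at the decoder to undo the reordering.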

A Feature Map Compression Method for Multi-resolution Feature Map with PCA-based Transformation (PCA 기반 변환을 통한 다해상도 피처 맵 압축 방법)

  • Park, Seungjin;Lee, Minhun;Choi, Hansol;Kim, Minsub;Oh, Seoung-Jun;Kim, Younhee;Do, Jihoon;Jeong, Se Yoon;Sim, Donggyu
    • Journal of Broadcast Engineering
    • /
    • v.27 no.1
    • /
    • pp.56-68
    • /
    • 2022
  • In this paper, we propose a compression method for multi-resolution feature maps for VCM. The proposed method removes the redundancy between the channels and resolution levels of the multi-resolution feature map through a PCA-based transformation. The basis vectors and mean vector used for the transformation, and the transform coefficients obtained from it, are compressed according to their characteristics using a VVC-based coder and DeepCABAC. To evaluate the performance of the proposed method, object detection performance was measured on the OpenImageV6 and COCO 2017 validation sets, and the BD-rate against the MPEG-VCM anchor and the feature map compression anchor proposed in this paper was computed using bpp and mAP. In the experiments, the proposed method shows a 25.71% BD-rate improvement over the feature map compression anchor on OpenImageV6. Furthermore, for large objects in the COCO 2017 validation set, the BD-rate improves by up to 43.72% compared to the MPEG-VCM anchor.
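A toy version of the PCA-based transform (mean vector, basis vectors, transform coefficients) can be sketched with NumPy; the paper's multi-resolution handling and the VVC/DeepCABAC coding stages are omitted:

```python
import numpy as np

def pca_transform(channels, k):
    # channels: (C, D) matrix, one flattened feature channel per row.
    # Returns the mean vector, k basis vectors, and transform coefficients.
    mean = channels.mean(axis=0)
    centered = channels - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]               # (k, D) principal directions
    coeffs = centered @ basis.T  # (C, k) transform coefficients
    return mean, basis, coeffs

def pca_reconstruct(mean, basis, coeffs):
    # Invert the transform from the coded mean, basis, and coefficients.
    return coeffs @ basis + mean
```

Keeping fewer components (`k` small) is what removes inter-channel redundancy at the cost of reconstruction error; in the paper the three outputs are then entropy-coded separately.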

Resistance Factors of Driven Steel Pipe Piles for LRFD Design in Korea (LRFD 설계를 위한 국내 항타강관말뚝의 저항계수 산정)

  • Park, Jae Hyun;Huh, Jungwon;Kim, Myung Mo;Kwak, Kiseok
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.28 no.6C
    • /
    • pp.367-377
    • /
    • 2008
  • As part of a study to develop LRFD (Load and Resistance Factor Design) codes for foundation structures in Korea, resistance factors for the static bearing capacity of driven steel pipe piles were calibrated in the framework of reliability theory. Fifty-seven data sets of static load tests and soil property tests conducted throughout Korea were collected, and the test piles were sorted into two cases: SPT N at the pile tip less than 50, and SPT N at the pile tip equal to or greater than 50. The static bearing capacity formula and the Meyerhof method using N values were applied to calculate the expected design bearing capacities of the piles. Resistance bias factors were evaluated for the two static design methods by comparing the representative measured bearing capacities with the expected design values. Reliability analysis was performed by two advanced methods, the First Order Reliability Method (FORM) and the Monte Carlo Simulation (MCS) method, using the resistance bias factor statistics. The target reliability indices were selected as 2.0 and 2.33 for the group pile case and 2.5 for the single pile case, in consideration of the reliability level of current design practice, the redundancy of pile groups, the acceptable risk level, construction quality control, and the significance of individual structures. Resistance factors for driven steel pipe piles are recommended based on the results of the FORM and MCS analyses.
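The Monte Carlo Simulation side of such a calibration can be sketched as follows. The lognormal bias-factor model and the example statistics are illustrative assumptions, not the paper's measured data:

```python
import numpy as np
from statistics import NormalDist

def sample_lognormal(rng, mean, cov, n):
    # Lognormal sampler parameterized by mean and coefficient of variation.
    sigma = np.sqrt(np.log(1.0 + cov ** 2))
    mu = np.log(mean) - 0.5 * sigma ** 2
    return rng.lognormal(mu, sigma, n)

def mc_reliability_index(lam_r, cov_r, lam_q, cov_q, phi, n=200_000, seed=1):
    # Simulate the limit state g = R/phi - Q for a trial resistance
    # factor phi; beta is recovered from the simulated failure rate.
    rng = np.random.default_rng(seed)
    r = sample_lognormal(rng, lam_r / phi, cov_r, n)  # factored resistance bias
    q = sample_lognormal(rng, lam_q, cov_q, n)        # load-effect bias
    pf = np.mean(r < q)                               # probability of failure
    return -NormalDist().inv_cdf(pf)                  # reliability index beta
```

In a calibration, `phi` would be adjusted until the simulated beta matches the target index (e.g. 2.33 or 2.5).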

A Study on Motion Estimator Design Using DCT DC Value (DCT 직류 값을 이용한 움직임 추정기 설계에 관한 연구)

  • Lee, Gwon-Cheol;Park, Jong-Jin;Jo, Won-Gyeong
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.38 no.3
    • /
    • pp.258-268
    • /
    • 2001
  • Compression is indispensable for transmitting high-quality moving pictures, which contain large amounts of data, in image processing. In moving-picture compression, motion estimation algorithms are used to reduce temporal redundancy. Block matching algorithms, the most commonly used approach, are divided into partial search algorithms and the full search algorithm. The full search algorithm used in this paper compares the reference block with every candidate block in the search window; it is very effective and has a simple data flow and control circuit, but the larger the search window, the larger the hardware, because of the heavy computational load. In this paper, we design a full-search block-matching motion estimator. Using the DCT DC values, we determine luminance and apply a 3-bit compare-selector based on bit planes to I (intra-coded) pictures instead of using the 8-bit luminance signals; the same selected bits are also used for P (predictive-coded) and B (bidirectionally-coded) pictures. We compared the proposed method with the baseline full search in terms of PSNR (Peak Signal-to-Noise Ratio) using a C-language model, with an 8×8 reference block, a 24×24 search window, and 352×288 gray-scale standard video sequences; the difference is too small to be noticeable. The proposed motion estimator reduces hardware size by 38.3% for structure I and 30.7% for structure II, and memory by 31.3% for both structures.
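The full-search block matching step the paper builds on can be sketched in a few lines (a software model only; the paper's bit-plane compare-selector hardware is not reproduced here):

```python
import numpy as np

def full_search(ref_block, search_win, block=8):
    # Exhaustive block matching: evaluate the sum of absolute differences
    # (SAD) at every candidate position in the search window and keep the
    # motion vector with the minimum SAD.
    best, mv = float("inf"), (0, 0)
    h, w = search_win.shape
    for dy in range(h - block + 1):
        for dx in range(w - block + 1):
            cand = search_win[dy:dy + block, dx:dx + block].astype(int)
            sad = np.abs(cand - ref_block.astype(int)).sum()
            if sad < best:
                best, mv = sad, (dy, dx)
    return mv, best
```

With an 8×8 block and a 24×24 window this is 289 SAD evaluations per block, which is the computational load the paper's reduced-bit comparison targets.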


Parameter Estimation for Multipath Error in GPS Dual Frequency Carrier Phase Measurements Using Unscented Kalman Filters

  • Lee, Eun-Sung;Chun, Se-Bum;Lee, Young-Jae;Kang, Tea-Sam;Jee, Gyu-In;Kim, Jeong-Rae
    • International Journal of Control, Automation, and Systems
    • /
    • v.5 no.4
    • /
    • pp.388-396
    • /
    • 2007
  • This paper describes a multipath estimation method for Global Positioning System (GPS) dual frequency carrier phase measurements. Multipath is a major error source in high-precision GPS applications, i.e., carrier phase measurements for precise positioning and attitude determination. In order to estimate and remove multipath in carrier phase measurements, an array GPS antenna system has been used: the known geometry between the antennas is used to estimate the multipath parameters. Dual frequency carrier phase measurements increase the redundancy of the measurements, so the number of antennas can be reduced. The unscented Kalman filter (UKF) has recently been applied in many areas to overcome some of the limitations of the extended Kalman filter (EKF), such as its weakness under severe nonlinearity. This paper uses the UKF to estimate the multipath parameters. A series of simulations was performed with GPS antenna arrays located on a straight line with one reflector. The geometry information of the antenna array reduces the number of estimated multipath parameters from four to three. Both the EKF and the UKF are used as estimation algorithms and their results are compared. When the initial parameters are far from the true parameters, the UKF shows better performance than the EKF.
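The unscented transform at the heart of the UKF can be sketched as follows (generic sigma-point propagation only; the paper's multipath measurement model is not reproduced):

```python
import numpy as np

def sigma_points(mean, cov, kappa=1.0):
    # Generate the 2n+1 sigma points and weights of the unscented transform.
    n = len(mean)
    s = np.linalg.cholesky((n + kappa) * cov)  # matrix square root scaled by n+kappa
    pts = [mean] + [mean + s[:, i] for i in range(n)] \
                 + [mean - s[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def unscented_transform(f, mean, cov, kappa=1.0):
    # Propagate mean and covariance through a nonlinearity f by pushing
    # each sigma point through f and re-estimating the moments.
    pts, w = sigma_points(mean, cov, kappa)
    y = np.array([f(p) for p in pts])
    my = w @ y
    d = y - my
    return my, (w[:, None] * d).T @ d
```

Unlike the EKF, no Jacobian of `f` is needed, which is why the UKF degrades more gracefully when the initial parameters are far from the truth.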

A Novel Error Detection Algorithm Based on the Structural Pattern of LZ78-Compression Data (LZ78 압축 데이터의 구조적 패턴에 기반한 새로운 오류 검출 알고리즘)

  • Gong, Myongsik;Kwon, Beom;Kim, Jinwoo;Lee, Sanghoon
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.41 no.11
    • /
    • pp.1356-1363
    • /
    • 2016
  • In this paper, we propose a novel error detection algorithm for LZ78-compressed data. Conventional error detection methods add a certain number of parity bits in transmission, and the receiver checks the number of bits set to '1' to detect errors. These methods use additional bits, which increases the redundancy of the compressed data and thus reduces the effectiveness of the final compression. We instead propose an error detection algorithm that uses the structural properties of LZ78 compression without adding bits to the compressed data. Simulation results show that the error detection ratio of the proposed algorithm is about 1.3 times higher than that of conventional algorithms.
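A minimal LZ78 encoder/decoder shows the kind of structural property such a scheme can exploit: every emitted dictionary index must refer to an entry created earlier, so an index that violates this ordering signals corruption. This sketch illustrates the codec, not the paper's exact detection rule:

```python
def lz78_encode(data):
    # Emit (dictionary index, next symbol) pairs; index 0 means "empty
    # prefix". Indices grow with the dictionary, which is the structural
    # pattern an error detector can check.
    table, out, prefix = {}, [], ""
    for ch in data:
        if prefix + ch in table:
            prefix += ch
        else:
            out.append((table.get(prefix, 0), ch))
            table[prefix + ch] = len(table) + 1
            prefix = ""
    if prefix:
        out.append((table[prefix], ""))  # flush a trailing match
    return out

def lz78_decode(pairs):
    entries = [""]  # entry 0 is the empty prefix
    chunks = []
    for idx, ch in pairs:
        s = entries[idx] + ch
        chunks.append(s)
        entries.append(s)
    return "".join(chunks)
```

A corrupted index that points past the entries emitted so far can be flagged at the decoder with no transmitted redundancy at all.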

An Experimental Study on Automatic Summarization of Multiple News Articles (복수의 신문기사 자동요약에 관한 실험적 연구)

  • Kim, Yong-Kwang;Chung, Young-Mee
    • Journal of the Korean Society for information Management
    • /
    • v.23 no.1 s.59
    • /
    • pp.83-98
    • /
    • 2006
  • This study proposes a template-based method for automatic summarization of multiple news articles using the semantic categories of sentences. First, the semantic categories for the core information to be included in a summary are identified from a training set of documents and their summaries. Then, cue words for each slot of the template are selected for later classification of news sentences into the relevant slots. When a news article is input, its event/accident category is identified, and key sentences are extracted from the article and filled into the relevant slots. The template, filled with simple sentences rather than the original long sentences, is used to generate a summary for an event/accident. In the user evaluation of the generated summaries, the results showed a 54.1% recall ratio and a 58.1% precision ratio in essential information extraction, and an 11.6% redundancy ratio.
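The recall and precision figures reported above follow the usual set-overlap definitions; as a sketch (representing information units as sets is a simplification of the user-evaluation scoring):

```python
def summary_scores(extracted, essential):
    # extracted / essential: sets of information units. Recall is the
    # fraction of essential units recovered; precision is the fraction
    # of extracted units that are essential.
    hit = len(extracted & essential)
    return hit / len(essential), hit / len(extracted)
```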

Reliable Data Transmission Based on Erasure-resilient Code in Wireless Sensor Networks

  • Lei, Jian-Jun;Kwon, Gu-In
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.4 no.1
    • /
    • pp.62-77
    • /
    • 2010
  • Emerging applications with high data rates will need to transport bulk data reliably in wireless sensor networks. ARQ (Automatic Repeat request) or Forward Error Correction (FEC) schemes can be used to provide reliable transmission in a sensor network. However, the naive ARQ approach drops a whole frame even if there is a single bit error in it, and bit-level FEC schemes may require highly complex methods to adjust the amount of FEC redundancy. To overcome these inefficiencies, we propose a bulk data transmission scheme based on an erasure-resilient code. The sender fragments bulk data into many small blocks, encodes the blocks with LT codes, and packages several such blocks into a frame. The receiver drops only the corrupted blocks (rather than the entire frame), and the original data can be reconstructed once sufficient error-free blocks are received. An incidental benefit is that the frame error rate (FER) becomes irrelevant to error recovery, so a frame can be large enough to utilize the wireless channel bandwidth efficiently without sacrificing the effectiveness of error recovery. The scheme has been implemented as a new data link layer in TinyOS and evaluated through experiments on a testbed of Zigbex motes. Results show that single-hop transmission throughput improves by at least 20% under typical wireless channel conditions. The scheme also reduces the transmission time of files over a reasonable range of sizes by more than 30%, and the total number of bytes sent by all nodes in multi-hop communication by more than 60%, compared to a frame-level ARQ scheme.
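The block-level encode/peel-decode idea can be sketched with a toy LT-style code over integer blocks. The uniform degree distribution here is a simplification (real LT codes use the robust soliton distribution), and block contents are modeled as small integers rather than byte arrays:

```python
import random

def lt_encode(blocks, n_out, seed=0):
    # Each output symbol XORs a random subset of the k source blocks
    # and records which blocks it combines.
    rng = random.Random(seed)
    k = len(blocks)
    out = []
    for _ in range(n_out):
        d = rng.randint(1, k)
        idxs = frozenset(rng.sample(range(k), d))
        val = 0
        for i in idxs:
            val ^= blocks[i]
        out.append((idxs, val))
    return out

def lt_decode(symbols, k):
    # Peeling decoder: substitute known blocks into each symbol; a symbol
    # with exactly one unknown block resolves that block. Repeat until no
    # progress or all k blocks are known.
    known = {}
    syms = [(set(i), v) for i, v in symbols]
    progress = True
    while progress and len(known) < k:
        progress = False
        for idxs, val in syms:
            rem = idxs - known.keys()
            for i in idxs & known.keys():
                val ^= known[i]
            if len(rem) == 1:
                i = rem.pop()
                if i not in known:
                    known[i] = val
                    progress = True
    return [known.get(i) for i in range(k)]
```

Because each symbol stands alone, a receiver can discard corrupted symbols and still decode from any sufficient subset, which is what decouples frame size from error recovery.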

The Analysis of Parallel Operating Characteristics for DC-DC Converter Using the Parallel Operation Model (병렬운전 모델을 이용한 DC-DC 컨버터의 병렬운전 특성해석)

  • Kim, Soo-Seok
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers
    • /
    • v.18 no.5
    • /
    • pp.174-182
    • /
    • 2004
  • Interest in the parallel operation of high-power systems has increased owing to advantages such as high productivity, simplicity of design, and redundancy of power. Based on the small-signal model of the DC-DC converter, a simple and exact power-stage model of the parallel operation system is derived, and a parallel operation system using a current-balance method for uniform current distribution among the modules is proposed. In simulations that use non-identical converter modules and change the master and slave roles automatically, the current distribution error is kept within the limit. To verify the performance of the proposed converter system, a parallel operation test with two 1 kW converter modules was carried out; the simulation results are in good agreement with the experimental results for the transient and starting characteristics.
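The master-slave current-balance idea can be caricatured as a proportional correction loop; this is illustrative dynamics only, not the paper's small-signal converter model:

```python
def balance_currents(currents, gain=0.5, iters=50):
    # Module 0 is the master; each slave nudges its current toward the
    # master's by a proportional correction each control cycle
    # (gain and iteration count are arbitrary illustrative values).
    cur = list(currents)
    for _ in range(iters):
        for i in range(1, len(cur)):
            cur[i] += gain * (cur[0] - cur[i])
    return cur
```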

A Length-based File Fuzzing Test Suite Reduction Algorithm for Evaluation of Software Vulnerability (소프트웨어 취약성 평가를 위한 길이기반 파일 퍼징 테스트 슈트 축약 알고리즘)

  • Lee, Jaeseo;Kim, Jong-Myong;Kim, SuYong;Yun, Young-Tae;Kim, Yong-Min;Noh, Bong-Nam
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.23 no.2
    • /
    • pp.231-242
    • /
    • 2013
  • Recently, automated software testing methods such as fuzzing have been researched to find software vulnerabilities. The purpose of fuzzing is to disclose software vulnerabilities by feeding malformed data to the software under test. To increase the probability of discovering vulnerabilities by fuzzing, the test suite reduction problem must be solved, because that probability depends on the quality of the test cases. In this paper, we propose a new method for the test suite reduction problem that is suitable for long test cases such as files. First, we suggest the length of a test case as a measure, in addition to conventional measures such as coverage and redundancy. Next, we design a test suite reduction algorithm using the new measure. In the experiments, the proposed algorithm showed better performance than previous studies in both the size and the length reduction ratio of the test suite. Finally, the results of an empirical study support the viability of the proposed measure and algorithm for file fuzzing.
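A greedy sketch of test-suite reduction with length as a tie-breaking measure conveys the general idea (this is the classic greedy heuristic, not necessarily the paper's exact algorithm):

```python
def reduce_suite(suite):
    # Greedy reduction: repeatedly pick the test case covering the most
    # still-uncovered items, preferring the shorter case on ties.
    # suite: list of (name, covered_items, length) tuples.
    uncovered = set().union(*(c for _, c, _ in suite))
    remaining, chosen = list(suite), []
    while uncovered:
        best = max(remaining, key=lambda t: (len(t[1] & uncovered), -t[2]))
        if not best[1] & uncovered:
            break  # nothing left adds coverage
        chosen.append(best[0])
        uncovered -= best[1]
        remaining.remove(best)
    return chosen
```

Folding length into the selection key is one way a length-based measure can shrink both the number and the total size of retained test cases while preserving coverage.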