• Title/Summary/Keyword: 패리티비트 (parity bit)

Search Results: 79

An Enhancement of Learning Speed of the Error-Backpropagation Algorithm (오류 역전도 알고리즘의 학습속도 향상기법)

  • Shim, Bum-Sik;Jung, Eui-Yong;Yoon, Chung-Hwa;Kang, Kyung-Sik
    • The Transactions of the Korea Information Processing Society / v.4 no.7 / pp.1759-1769 / 1997
  • The Error Backpropagation (EBP) algorithm for multi-layered neural networks is widely used in areas such as associative memory, speech recognition, pattern recognition, and robotics. Nevertheless, researchers continue to publish improvements over the original EBP algorithm, mainly because EBP is exceedingly slow when the number of neurons and the size of the training set are large. In this study, we developed new learning-speed acceleration methods that use a variable learning rate, a variable momentum rate, and a variable slope for the sigmoid function. During learning, these parameters are adjusted continuously according to the total error of the network, and the methods are shown to reduce learning time significantly compared with the original EBP. To demonstrate their efficiency, we first used binary data produced by a random number generator and observed large improvements in the number of epochs. We also applied the methods to the binary-valued Monk's data, 4-, 5-, 6-, and 7-bit parity checkers, and the real-valued Iris data, which are well-known benchmark training sets for machine learning.

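As a rough illustration of the variable-parameter idea in the abstract above, the sketch below trains a tiny network on the 4-bit parity problem and scales the learning rate up or down according to the change in total error between epochs. The network size, scaling factors, and bounds are illustrative assumptions; the paper's scheme additionally varies the momentum rate and the sigmoid slope.

```python
import numpy as np

# Toy setup: 4-bit parity checker, one hidden layer, plain backpropagation.
# The adaptive learning-rate rule (scale up when total error falls, scale down
# when it rises) is an illustrative assumption, not the paper's exact schedule.

rng = np.random.default_rng(0)
X = np.array([[int(b) for b in f"{i:04b}"] for i in range(16)], dtype=float)
y = (X.sum(axis=1) % 2).reshape(-1, 1)           # parity target

W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, prev_error = 0.5, np.inf
for epoch in range(20000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    error = np.sum((y - out) ** 2)

    # adapt the learning rate from the change in total error
    lr = min(lr * 1.05, 5.0) if error < prev_error else max(lr * 0.7, 0.01)
    prev_error = error

    # backward pass (squared-error loss, sigmoid derivatives)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

    if error < 0.01:
        print(f"converged at epoch {epoch}, total error {error:.4f}")
        break
```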

DCGAN-based Compensation for Soft Errors in Face Recognition Systems Based on a Cross-Layer Approach (얼굴인식 시스템의 소프트에러에 대한 DCGAN 기반의 크로스 레이어 보상 방법)

  • Cho, Young-Hwan;Kim, Do-Yun;Lee, Seung-Hyeon;Jeong, Gu-Min
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.14 no.5 / pp.430-437 / 2021
  • In this paper, we propose a face recognition method that is robust against soft errors, using a deep convolutional generative adversarial network (DCGAN) based compensation method in a cross-layer approach. When soft errors occur in the block data of JPEG files, those blocks can be decoded incorrectly. In previous work, such blocks were replaced with a mean face, which improved the recognition ratio to a certain degree. This paper extends that work with a DCGAN-based compensation approach. When soft errors are detected in the embedded-system layer using parity-bit checkers, they are compensated in the application layer with block data reconstructed by the DCGAN-based method. To handle soft errors and block-data loss in facial images, the DCGAN architecture is redesigned to compensate for the lost blocks. Simulation results show that the proposed method effectively compensates for the performance degradation caused by soft errors.
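
The cross-layer flow described above can be pictured with a minimal sketch: a lower layer attaches parity bits to block words and reports parity violations, and the application layer collects the indices of damaged blocks to hand to a generative model. The word size, block layout, and the `attach_parity`/`failed_blocks` helpers are hypothetical, and the DCGAN itself is not shown.

```python
import numpy as np

# Hypothetical sketch of the cross-layer flow: the lower layer attaches an even
# parity bit to each 8-bit word and reports which blocks fail the check; the
# application layer would then ask a (not shown) DCGAN generator to fill in
# those blocks. Word size, block layout and helper names are assumptions.

def attach_parity(words: np.ndarray) -> np.ndarray:
    """Append an even-parity bit to each 8-bit word (uint8 -> 9-bit rows)."""
    bits = np.unpackbits(words.reshape(-1, 1), axis=1)           # (n, 8)
    parity = bits.sum(axis=1, keepdims=True) % 2                 # even parity
    return np.hstack([bits, parity])

def failed_blocks(coded: np.ndarray, words_per_block: int) -> list[int]:
    """Return indices of blocks containing at least one parity violation."""
    bad_word = coded.sum(axis=1) % 2 != 0                        # parity mismatch
    return sorted({i // words_per_block for i in np.where(bad_word)[0]})

# Example: 4 blocks of 16 words, flip one bit in block 2 to simulate a soft error.
data = np.arange(64, dtype=np.uint8)
coded = attach_parity(data)
coded[2 * 16 + 3, 5] ^= 1                                        # single bit flip
damaged = failed_blocks(coded, words_per_block=16)
print("blocks to regenerate with the DCGAN:", damaged)           # -> [2]
```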

Advanced Multi-Pass Fast Correlation Attack on Stream Ciphers (스트림 암호에 대한 개선된 다중 경로 고속 상관 공격)

  • Kim, Hyun;Sung, Jae-Chul;Lee, Sang-Jin;Park, Hae-Ryong;Chun, Kil-Soo;Hong, Seok-Hie
    • Journal of the Korea Institute of Information Security & Cryptology / v.17 no.4 / pp.53-60 / 2007
  • In a known-plaintext scenario, the fast correlation attack is a very powerful attack on stream ciphers. Most fast correlation attacks treat the cryptographic problem as a suitable decoding problem. In this paper, we introduce an advanced multi-pass fast correlation attack based on the fast correlation attack of Chose et al., which uses parity-check equations and the Fast Walsh Transform, and on the multi-pass fast correlation attack proposed by Zhang et al. We guess some bits of the initial state of the target LFSR in the same way as the previously proposed methods, but we obtain one more bit in each pass and thus recover the initial state more efficiently.
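
A minimal sketch of the Fast Walsh Transform step that this family of attacks relies on: correlations for every candidate assignment of k guessed bits are evaluated in a single transform. The toy model below (noisy samples of a k-bit inner product standing in for parity-check sums over the keystream) and all parameters are assumptions, not the attack from the paper.

```python
import numpy as np

# Illustrative sketch of the Fast Walsh Transform step used in fast correlation
# attacks: correlations for all 2^k guesses of k unknown bits are computed in
# one O(k * 2^k) pass. The "cipher" is reduced to a toy model (noisy samples of
# a k-bit inner product), so every parameter here is an assumption.

def fwht(t: np.ndarray) -> np.ndarray:
    """Fast Walsh-Hadamard Transform of a length-2^k vector."""
    t = t.astype(np.int64)
    h = 1
    while h < len(t):
        for i in range(0, len(t), 2 * h):
            a, b = t[i:i + h].copy(), t[i + h:i + 2 * h].copy()
            t[i:i + h], t[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return t

rng = np.random.default_rng(1)
k, n, p_noise = 10, 4000, 0.3
secret = rng.integers(0, 2, k)                        # bits we want to recover

xs = rng.integers(0, 2, (n, k))                       # public "parity check" inputs
zs = (xs @ secret + (rng.random(n) < p_noise)) % 2    # noisy keystream-side bits

# Accumulate (-1)^z per input pattern, then one FWHT scores every candidate guess.
table = np.zeros(2 ** k)
idx = xs @ (1 << np.arange(k))                        # integer index of each input
np.add.at(table, idx, 1 - 2 * zs)
scores = fwht(table)
best = int(np.argmax(np.abs(scores)))
recovered = np.array([(best >> i) & 1 for i in range(k)])
print("recovered == secret:", np.array_equal(recovered, secret))
```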

Improving SSD Reliability through Error-Rate-Based Cluster Analysis of 3D-NAND Flash Memory and Application of a Differentiated Protection Policy (3D-NAND 플래시 메모리의 오류율 기반 군집분석과 차별화된 보호정책 적용을 통한 SSD의 신뢰성 향상 방안)

  • Son, Seung woo;Oh, Min jin;Kim, Jaeho
    • Proceedings of the Korean Society of Computer Information Conference / 2021.07a / pp.1-2 / 2021
  • 3D NAND flash memory provides high capacity per unit area by stacking planar (2D) NAND cells. However, due to the nature of the stacking process, the frequency of errors can vary by layer or by cell position, and this phenomenon becomes more pronounced as the number of write/erase (P/E) cycles increases. Most flash-based storage devices such as SSDs use ECC for error correction. Because ECC provides a fixed protection strength for every flash memory page, it shows limitations on 3D NAND flash memory, where the error rate differs by physical location. In this paper, we therefore classify pages and layers that exhibit different error rates and apply a differentiated protection strength to each region. Based on error rates measured at 3K P/E cycles, where per-page and per-layer error rates differ markedly, we classify the pages and layers and add parity data for error-prone regions to provide stronger protection. The K-means machine-learning algorithm is used to partition the regions according to their error counts. We show that such a differentiated protection policy has the potential to improve the reliability and lifetime of 3D NAND flash memory.

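A minimal sketch of the clustering step described above, assuming synthetic per-page error counts and a hypothetical three-level protection policy; the paper clusters error rates measured at 3K P/E cycles.

```python
import numpy as np
from sklearn.cluster import KMeans

# Per-page error counts are grouped with K-means, and clusters with higher mean
# error counts get a stronger (parity-augmented) protection level. The synthetic
# error counts and the three-level policy below are assumptions.

rng = np.random.default_rng(7)
pages = 1536                                        # e.g. pages across stacked layers
errors = np.concatenate([                           # synthetic per-page error counts
    rng.poisson(2, pages // 3),                     # robust pages
    rng.poisson(10, pages // 3),                    # average pages
    rng.poisson(40, pages // 3),                    # error-prone pages
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(errors.reshape(-1, 1))

# Rank clusters by mean error count and map them to protection levels.
order = np.argsort([errors[labels == c].mean() for c in range(3)])
policy = {int(order[0]): "ECC only",
          int(order[1]): "ECC + light parity",
          int(order[2]): "ECC + extra parity stripe"}

for c in range(3):
    pages_in_c = int(np.sum(labels == c))
    print(f"cluster {c}: {pages_in_c:4d} pages, "
          f"mean errors {errors[labels == c].mean():5.1f} -> {policy[c]}")
```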

XOR-based High Quality Information Hiding Technique Utilizing Self-Referencing Virtual Parity Bit (자기참조 가상 패리티 비트를 이용한 XOR기반의 고화질 정보은닉 기술)

  • Choi, YongSoo;Kim, HyoungJoong;Lee, DalHo
    • Journal of the Institute of Electronics and Information Engineers / v.49 no.12 / pp.156-163 / 2012
  • Recently, information hiding technology has been in increasing demand in fields such as security, the military, and medical imaging. This paper proposes a data hiding technique that utilizes a parity checker for gray-level images. Many steganography studies have adopted LSB substitution and XOR operations for their low complexity, high embedding capacity, and high image quality, but LSB substitution alone is not secure because of its naive mechanism, even though it achieves high embedding capacity. The proposed method replaces the LSB of each pixel with the XOR of the parity bit of that pixel's other seven MSBs and one secret bit. As a result, the stego-image (steganogram) shows little degradation, and an eavesdropper cannot easily detect the embedded message. This approach applies the concept of a symmetric-key encryption protocol to steganography, with one key bit generated by self-reference within each pixel. The proposed method provides about a 25% higher embedding rate than existing XOR-based methods and improves the LSB reversal rate by about 2%.
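
The embedding rule stated in the abstract maps directly to code: the LSB of each pixel becomes the XOR of the parity of that pixel's upper seven bits and one secret bit, and the receiver recomputes the same parity to read the bit back. The sketch below assumes a tiny random cover image and message; the helper names are hypothetical.

```python
import numpy as np

# Sketch of the stated embedding rule: new LSB = parity(upper 7 bits) XOR secret
# bit; extraction recomputes the parity. Image and message are toy assumptions.

def msb_parity(pixels: np.ndarray) -> np.ndarray:
    """Even parity of the 7 most significant bits of each 8-bit pixel."""
    bits = np.unpackbits(pixels.reshape(-1, 1), axis=1)[:, :7]   # drop the LSB column
    return bits.sum(axis=1) % 2

def embed(cover: np.ndarray, message_bits: np.ndarray) -> np.ndarray:
    flat = cover.flatten().copy()
    n = len(message_bits)
    new_lsb = msb_parity(flat[:n]) ^ message_bits                # self-referencing key bit
    flat[:n] = (flat[:n] & 0xFE) | new_lsb                       # rewrite only the LSB
    return flat.reshape(cover.shape)

def extract(stego: np.ndarray, n: int) -> np.ndarray:
    flat = stego.flatten()
    return (msb_parity(flat[:n]) ^ (flat[:n] & 1)).astype(np.uint8)

cover = np.random.default_rng(3).integers(0, 256, (4, 4), dtype=np.uint8)
message = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
stego = embed(cover, message)
assert np.array_equal(extract(stego, len(message)), message)
print("max per-pixel change:", int(np.abs(stego.astype(int) - cover.astype(int)).max()))  # <= 1
```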

An Effective Error-Concealment Approach for Video Data Transmission over the Internet (인터넷상의 비디오 데이타 전송에 효과적인 오류 은닉 기법)

  • 김진옥
    • Journal of KIISE: Computing Practices and Letters / v.8 no.6 / pp.736-745 / 2002
  • In network delivery of compressed video, packets may be lost if the channel is unreliable, as on the Internet. Such losses tend to occur in bursts, like continuous bit-stream errors. In this paper, we propose an effective error-concealment approach that applies error-resilient video encoding against burst errors and reduces the complexity of error concealment at the decoder by using data hiding. To improve concealment performance, a temporally and spatially error-resilient encoding scheme is developed at the encoder to be robust against burst errors. For the spatial domain, a block-shuffling scheme is introduced to isolate erroneous blocks caused by packet losses. For the temporal domain, we embed parity bits in the content data for motion vectors between intra frames or consecutive inter frames, and use them at the decoder to recover lost packets after transmission. While error concealment is performed on erroneous blocks at the decoder, interpolating an erroneous video block from neighboring information is computationally costly. Therefore, a set of features is extracted at the encoder and embedded imperceptibly into the original media; if part of the media data is damaged during transmission, the embedded features can be extracted and used to recover the lost data with bi-directional interpolation. The use of data hiding leads to reduced complexity at the decoder. Experimental results suggest that our approach can achieve reasonable quality for packet losses of up to 30% over a wide range of video material.
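
A rough sketch of the block-shuffling component described above: blocks are permuted with a key shared by encoder and decoder before packetization, so a burst of consecutive losses maps back to scattered, isolated holes that can be interpolated from intact neighbours. The frame size, block size, and permutation seed are illustrative assumptions.

```python
import numpy as np

# Block shuffling for error resilience: a shared permutation spreads a burst of
# consecutive packet losses across the frame. All sizes here are assumptions.

BLOCK, W, H = 16, 176, 144                      # QCIF-like frame split into 16x16 blocks
blocks_per_row = W // BLOCK
n_blocks = blocks_per_row * (H // BLOCK)        # 99 blocks

rng = np.random.default_rng(42)                 # shared "shuffling key"
perm = rng.permutation(n_blocks)                # wire position i carries frame block perm[i]

burst = range(30, 40)                           # 10 consecutive blocks lost in one burst
holes = sorted(int(perm[i]) for i in burst)     # where the losses land in the frame
rows = sorted({h // blocks_per_row for h in holes})
print("lost frame blocks:", holes)
print("affected block rows:", rows)             # spread out instead of one solid stripe
```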

Distributed Multi-view Video Coding Based on Illumination Compensation (조명보상 기반 분산 다시점 비디오 코딩)

  • Park, Sea-Nae;Sim, Dong-Gyu;Jeon, Byeung-Woo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.6 / pp.17-26 / 2008
  • In this paper, we propose a distributed multi-view video coding method that employs illumination compensation. Distributed multi-view video coding (DMVC) methods can be classified into temporal and inter-view interpolation-based ones according to how they generate side information. DMVC with inter-view interpolation exploits the characteristics of multi-view video to improve coding efficiency by generating side information through inter-view interpolation. However, mismatched camera parameters and illumination changes between two views can lead to inaccurate side information. In this paper, a modified distributed multi-view coding method is presented that applies illumination compensation when generating the side information. In the proposed encoder, DC coefficients are transmitted to the decoder in addition to the parity bits for the AC coefficients. The decoder can then generate more accurate side information by compensating illumination changes with the transmitted DC coefficients. We found that the proposed algorithm is 0.1~0.2 dB better than the conventional algorithm that does not use illumination compensation.
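
A minimal sketch of the decoder-side illumination compensation described above: each inter-view-interpolated side-information block is shifted so that its mean matches the DC coefficient sent by the encoder, before the AC parity bits are used. The block size and synthetic data are assumptions.

```python
import numpy as np

# Decoder-side illumination compensation: shift the interpolated block so its
# DC level equals the transmitted DC. Block size and data are assumptions.

def compensate_illumination(side_info_block: np.ndarray, sent_dc: float) -> np.ndarray:
    """Shift the interpolated block so its mean equals the transmitted DC."""
    shifted = side_info_block + (sent_dc - side_info_block.mean())
    return np.clip(shifted, 0, 255)

rng = np.random.default_rng(0)
true_block = rng.integers(60, 200, (8, 8)).astype(float)   # block seen by the encoder
side_info = true_block - 25                                 # inter-view estimate from a darker view

fixed = compensate_illumination(side_info, sent_dc=true_block.mean())
print("mean error before:", abs(side_info.mean() - true_block.mean()))   # 25.0
print("mean error after :", abs(fixed.mean() - true_block.mean()))       # ~0.0
```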

Implementation of LDPC Decoder using High-speed Algorithms in Standard of Wireless LAN (무선 랜 규격에서의 고속 알고리즘을 이용한 LDPC 복호기 구현)

  • Kim, Chul-Seung;Kim, Min-Hyuk;Park, Tae-Doo;Jung, Ji-Won
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.12 / pp.2783-2790 / 2010
  • In this paper, we first review LDPC codes in general and a belief-propagation algorithm that works in the logarithm domain. LDPC codes, adopted in the IEEE 802.11n wireless local area network (WLAN) standard, require a large amount of computation because of the large coded-block size and the number of iterations. We therefore present three low-computation algorithms for LDPC codes. First, sequential decoding with partial groups is proposed; it has the same hardware complexity but needs fewer iterations than the conventional decoding algorithm to reach the same performance. Second, we apply an early-stop algorithm, which removes unnecessary iterations. Third, an early-detection method for reducing computational complexity is proposed: using a confidence criterion, some bit nodes and check-node edges are detected early during decoding. Simulations show that the number of iterations is roughly halved by the partial-group (subset) algorithm, the early-stop algorithm saves more than one iteration, and the early-detection method cuts the computation of check-node updates by about 30% and of bit-node updates by about 94% compared with the conventional scheme. The LDPC decoder was implemented in Xilinx System Generator and targeted to a Xilinx Virtex5-xc5vlx155t FPGA. When the three algorithms are used, device utilization is reduced by about 45% and the decoding speed is about twice that of the conventional scheme.
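
A toy sketch of the early-stop idea only: an iterative decoder recomputes the syndrome after every iteration and stops as soon as all parity checks are satisfied, rather than always running the maximum iteration count. A small hand-made parity-check matrix and a hard-decision bit-flipping rule stand in here for the 802.11n LDPC code and the log-domain belief-propagation decoder.

```python
import numpy as np

# Early stop via syndrome check: return as soon as H·c = 0 (mod 2). The tiny
# parity-check matrix and bit-flipping rule are assumptions for illustration.

H = np.array([[1, 1, 1, 0, 0, 0],      # 4 checks x 6 bits, every column has weight 2
              [1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]], dtype=np.uint8)

def decode_with_early_stop(received: np.ndarray, max_iters: int = 50):
    c = received.copy()
    for it in range(max_iters):
        syndrome = (H @ c) % 2
        if not syndrome.any():                  # early stop: all checks satisfied
            return c, it
        failed_per_bit = syndrome @ H           # how many failed checks touch each bit
        c[int(np.argmax(failed_per_bit))] ^= 1  # flip the most suspicious bit
    return c, max_iters

sent = np.zeros(6, dtype=np.uint8)              # the all-zero codeword is always valid
received = sent.copy(); received[3] ^= 1        # inject a single bit error
decoded, iters = decode_with_early_stop(received)
print("decoded:", decoded, "| correction iterations before early stop:", iters)
```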

K-means Clustering Analysis and a Differentiated Protection Policy Based on 3D NAND Flash Memory Error Rates to Improve SSD Reliability

  • Son, Seung-Woo;Kim, Jae-Ho
    • Journal of the Korea Society of Computer and Information / v.26 no.11 / pp.1-9 / 2021
  • 3D-NAND flash memory provides high capacity per unit area by stacking 2D-NAND cells, which have a planar structure. However, owing to the nature of the stacking process, the frequency of errors may vary with the layer or the physical cell location, and this phenomenon becomes more pronounced as the number of write/erase (P/E) operations increases. Most flash-based storage devices such as SSDs use ECC for error correction. Since ECC provides a fixed strength of data protection for all flash memory pages, it has limitations on 3D NAND flash memory, where the error rate varies with physical location. In this paper, pages and layers with different error rates are therefore grouped into clusters with the K-means machine-learning algorithm, and a differentiated data-protection strength is applied to each cluster. We classify pages and layers based on the number of errors measured after an endurance test, in which the error rate varies significantly from page to page and layer to layer, and we add parity data to the stripes of error-prone areas to provide differentiated protection. We show that this differentiated data-protection policy can contribute to improving the reliability and lifespan of 3D NAND flash memory compared with protection techniques that use RAID-like schemes or ECC alone.
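
A small sketch of the "extra parity for vulnerable stripes" idea described above: pages classified as error-prone are grouped into stripes with one XOR parity page each, so a single unrecoverable page per stripe can be rebuilt on top of per-page ECC. The page size, stripe width, and sample data are assumptions.

```python
import numpy as np

# RAID-like stripe parity for error-prone pages: the parity page is the XOR of
# the data pages, so any one lost page can be rebuilt from the rest. Page size,
# stripe width and the sample data are assumptions for illustration.

PAGE_BYTES, STRIPE_WIDTH = 4096, 4

rng = np.random.default_rng(11)
stripe = rng.integers(0, 256, (STRIPE_WIDTH, PAGE_BYTES), dtype=np.uint8)

# Parity page = XOR of all data pages in the stripe.
parity_page = np.bitwise_xor.reduce(stripe, axis=0)

# Simulate losing page 2 of the stripe (per-page ECC exhausted for that page).
lost = 2
survivors = np.delete(stripe, lost, axis=0)

# Rebuild the lost page from the survivors and the parity page.
rebuilt = np.bitwise_xor.reduce(survivors, axis=0) ^ parity_page
print("page recovered intact:", np.array_equal(rebuilt, stripe[lost]))
```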