• Title/Summary/Keyword: Belief propagation


Convergence of Min-Sum Decoding of LDPC codes under a Gaussian Approximation (MIN-SUM 복호화 알고리즘을 이용한 LDPC 오류정정부호의 성능분석)

  • Heo, Jun
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.10C
    • /
    • pp.936-941
    • /
    • 2003
  • Density evolution was developed as a method for computing the capacity of low-density parity-check (LDPC) codes under the sum-product algorithm [1]. Based on the assumption that the messages passed in the belief propagation model can be approximated well by Gaussian random variables, a modified and simplified version of the density evolution technique was introduced in [2]. Recently, the min-sum algorithm was applied to the density evolution of LDPC codes as an alternative decoding algorithm in [3]. The next question is how the min-sum algorithm can be combined with a Gaussian approximation. In this paper, the capacity of LDPC codes of various rates is obtained using the min-sum algorithm combined with the Gaussian approximation, which gives one of the simplest ways to analyze LDPC codes. Unlike the sum-product algorithm, the min-sum algorithm does not maintain the symmetry condition [4]. Therefore, both the mean and the variance of the Gaussian distribution are recursively computed in this analysis. It is also shown that the min-sum threshold under a Gaussian approximation matches the simulation results well.
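As background for the abstract above, the step that distinguishes min-sum from sum-product decoding is the check-node update: the outgoing message on each edge takes the product of the signs and the minimum of the magnitudes of the other incoming messages. A minimal illustrative sketch (the function name and message layout are assumptions, not the paper's code):

```python
import numpy as np

def min_sum_check_update(msgs):
    """Min-sum check-node update: for each edge, the outgoing message is
    the product of the signs of the other incoming LLRs times the
    minimum of their magnitudes."""
    msgs = np.asarray(msgs, dtype=float)
    out = np.empty_like(msgs)
    for i in range(len(msgs)):
        others = np.delete(msgs, i)          # all incoming messages except edge i
        sign = np.prod(np.sign(others))
        out[i] = sign * np.min(np.abs(others))
    return out

# Incoming LLRs on a degree-4 check node
out = min_sum_check_update([2.0, -1.5, 3.0, 0.5])   # -> [-0.5, 0.5, -0.5, -1.5]
```

Because this rule is not symmetric in the sense of [4], a Gaussian-approximation analysis of it must track both the mean and the variance of the message distribution, as the paper does.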

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.2
    • /
    • pp.127-142
    • /
    • 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised models have gained more popularity than unsupervised models such as deep belief networks, because they have produced impressive applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. This architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rest on three basic ideas: local receptive fields, shared weights, and pooling.
By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to only a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each local receptive field, so all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers. Pooling layers simplify the information in the output of the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks a few years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state that allows the network to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem: gradients can vanish or explode. A gradient can get smaller and smaller as it is propagated back through layers, making learning in early layers extremely slow. The problem gets worse in RNNs, since gradients are propagated backward not just through layers but also through time. If the network runs for a long time, the gradient can become extremely unstable and hard to learn from.
It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
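The three convolutional ideas described in the abstract can be sketched in a few lines of NumPy (a toy illustration under assumed names, not any particular library's API): one shared kernel slid over local receptive fields, followed by max pooling to simplify the feature map.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide one shared kernel (the 'shared weights') over the image.
    Each output unit sees only a small local receptive field, and every
    position reuses the same weights, so the layer detects the same
    feature at every location."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Max pooling: keep the strongest activation in each size x size
    block, simplifying the convolutional layer's output."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

image = np.arange(16, dtype=float).reshape(4, 4)
feature = conv2d(image, np.array([[1.0, 0.0], [0.0, -1.0]]))  # 3x3 feature map
pooled = max_pool(feature)                                    # 1x1 after 2x2 pooling
```

In a real network, many kernels run in parallel (one feature map each), the weights are learned by backpropagation, and the conv/pool pairs are stacked into the 10-to-20-layer architectures mentioned above.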

A Comparative Study on Daesoon (大巡) Thought and Dangun (檀君) Thought: Focused on the Analysis of Narrative Structure and Motifs (대순사상과 단군사상 비교연구 - 서사구조와 모티프 분석을 중심으로 -)

  • Cha, Seon-keun
    • Journal of the Daesoon Academy of Sciences
    • /
    • v.31
    • /
    • pp.199-235
    • /
    • 2018
  • Most of the new religions derived from Jeungsan have claimed that Jeungsan's religious thought reproduced Dangun [檀君] Thought in its original form. However, Daesoon Jinrihoe is the only religious order among the many new religions within the Jeungsan lineage that has constantly kept its distance from Dangun Thought, ever since 1909, the earliest period of proto-Daesoon Jinrihoe. Even a mere trace of Dangun cannot be found in the subject of faith or the doctrinal system of Daesoon Jinrihoe. In this context, this study examines possible connections between Daesoon Thought and Dangun Thought in order to determine why other Jeungsanist religions frequently exhibit Dangunist features. Specifically, a major part of the study is devoted to comparing and analyzing the narrative structures of Daesoon Thought and Dangun Thought as well as their respective motifs. In fact, Jeungsan does not seem to have ever mentioned Dangun in his recorded teachings; therefore, after his passing into Heaven, most of the religious orders derived from him, including Daesoon Jinrihoe, paid no attention to Dangun Thought for almost 40 years. These orders did not originally perceive Dangun as an object of belief. After Korea's liberation, Dangun came to play a widely accepted, pivotal role among the Korean people. As Dangun-nationalism claimed to unify Koreans into one great Korean ethnic society, the religious orders of the Jeungsan lineage also climbed aboard this creed, and their faiths and doctrines were acculturated to reflect this change. The reason for this has been attributed to following modern trends in order to increase success in propagation. Meanwhile, Daesoon Jinrihoe was the only order that did not accept Dangun-nationalism, because it was not a teaching given by the order's founder, and because the two systems of thought have more dissimilarity than parallelism in terms of philosophical ideology.
These seem to be the main reasons why Daesoon Jinrihoe did not adopt Dangun into its doctrine or belief system.

A Performance Analysis of Distributed Storage Codes for RGG/WSN (RGG/WSN을 위한 분산 저장 부호의 성능 분석)

  • Cheong, Ho-Young
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.10 no.5
    • /
    • pp.462-468
    • /
    • 2017
  • In this paper, an IoT/WSN (Internet of Things / Wireless Sensor Network) is modeled as a random geometric graph, and the performance of decentralized codes for the efficient storage of the data generated by the WSN is analyzed. WSNs with n = 100 and n = 200 total nodes were modeled as random geometric graphs and simulated. For both network sizes, the successful decoding probability as a function of the decoding ratio $\eta$ depends more on the number of source nodes k than on the total number of nodes n; in particular, the simulation results show that the successful decoding rate was above 70% when $\eta \leq 2.0$. By simulating the number of operations as a function of $\eta$, we showed that the number of operations of the BP (belief propagation) decoding scheme increases exponentially with k. This is probably because the length of the LT code grows as the number of source nodes increases, so the amount of decoding computation grows greatly.
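For LT codes over an erasure channel, BP decoding reduces to the classic peeling process: repeatedly find an encoded symbol that still depends on exactly one unresolved source symbol, recover that source, and substitute it back into the remaining symbols. A hedged sketch under assumed names and data layout (not the paper's simulator):

```python
def lt_peel(encoded):
    """Peeling (belief-propagation) decoder for an LT code over an
    erasure channel. 'encoded' is a list of (source_index_set, xor_value)
    pairs. Returns the recovered {index: value} map, possibly partial."""
    encoded = [(set(idx), val) for idx, val in encoded]
    recovered = {}
    progress = True
    while progress:
        progress = False
        for idx, val in encoded:
            remaining = idx - set(recovered)
            if len(remaining) == 1:              # effective degree-1 symbol found
                (i,) = remaining
                v = val
                for j in idx - remaining:        # subtract already-known sources
                    v ^= recovered[j]
                recovered[i] = v
                progress = True
    return recovered

# Toy example: k = 3 source symbols, 4 received encoded symbols
src = {0: 5, 1: 9, 2: 12}
enc = [({0}, 5), ({0, 1}, 5 ^ 9), ({1, 2}, 9 ^ 12), ({0, 1, 2}, 5 ^ 9 ^ 12)]
assert lt_peel(enc) == src
```

Each recovered source must be XOR-subtracted from every encoded symbol that references it, so the work per recovery grows with the code length, which is consistent with the abstract's observation that decoding cost rises steeply with k.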

Implementation of LDPC Decoder using High-speed Algorithms in Standard of Wireless LAN (무선 랜 규격에서의 고속 알고리즘을 이용한 LDPC 복호기 구현)

  • Kim, Chul-Seung;Kim, Min-Hyuk;Park, Tae-Doo;Jung, Ji-Won
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.14 no.12
    • /
    • pp.2783-2790
    • /
    • 2010
  • In this paper, we first review LDPC codes in general and a belief propagation algorithm that works in the logarithm domain. LDPC codes, which were chosen for the 802.11n wireless local area network (WLAN) standard, require a large amount of computation due to the large coded block size and the number of iterations. We therefore present three low-complexity algorithms for LDPC decoding. First, sequential decoding with partial groups is proposed; it has the same hardware complexity as the conventional decoder, and fewer iterations are required for the same performance. Second, we apply an early-stop algorithm, which eliminates unnecessary iterations. Third, an early detection method for reducing the computational complexity is proposed: using a confidence criterion, some bit nodes and check-node edges are detected early during decoding. Through simulation, we found that the number of iterations is cut in half by the subset algorithm, that the early-stop algorithm saves more than one further iteration, and that the early detection method reduces computational complexity by about 30% for check-node updates and about 94% for bit-node updates compared to the conventional scheme. The LDPC decoder has been implemented in Xilinx System Generator and targeted to a Xilinx Virtex-5 xc5vlx155t FPGA. When the three algorithms are used, device utilization is about 45% lower and the decoding speed is about two times faster than the conventional scheme.
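A common form of the early-stop test mentioned above (a standard variant, not necessarily the paper's exact criterion) is to take hard decisions from the current LLRs after each iteration and stop as soon as the syndrome is zero, i.e. the tentative word satisfies every parity check:

```python
import numpy as np

def syndrome_is_zero(H, llr):
    """Early-stop test for iterative LDPC decoding: hard-decide each bit
    from its LLR (negative LLR -> bit 1) and check whether H @ x = 0 (mod 2).
    If so, a valid codeword has been found and iteration can stop."""
    x = (np.asarray(llr) < 0).astype(int)
    return not np.any((H @ x) % 2)

# Toy parity-check matrix for the (7,4) Hamming code
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

valid_llr   = [2.1, 3.0, 1.5, -2.2, -1.1, -0.7, -0.9]   # decodes to a codeword
invalid_llr = [-2.1, 3.0, 1.5, -2.2, -1.1, -0.7, -0.9]  # first bit flipped
```

Saved iterations translate directly into decoder throughput, which is how schemes like this contribute to the roughly two-fold speedup the paper reports.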

A Study on the Divinity of 'the Supreme God and Celestial Worthy of the Ninth Heaven Who Spreads the Sound of the Thunder Corresponding to Primordial Origin': Focusing on the Relationship between the Divine Qualities of Being 'the Celestial Worthy of Universal Transformation' and 'the Lord God of Great Creation in the Ninth Heaven' (구천응원뇌성보화천존상제 신격 연구 - '보화천존'과 '구천대원조화주신'의 관계를 중심으로 -)

  • Park, Yong-cheol
    • Journal of the Daesoon Academy of Sciences
    • /
    • v.29
    • /
    • pp.71-100
    • /
    • 2017
  • This study focuses on examining 'the Supreme God and Celestial Worthy of the Ninth Heaven Who Spreads the Sound of the Thunder Corresponding to Primordial Origin', whom Daesoon Jinrihoe worships as its highest divinity. The name of this divinity is first found in Chinese Daoist scriptures. This study starts by considering the global propagation of virtue and then reviews research connected to this topic. There are two alternative names for this divinity in relation to his human avatar, Kang Jeungsan, the subject of faith in Daesoon Jinrihoe. One is 'the Lord God of Great Creation in the Ninth Heaven', meaning the divinity before assuming a human avatar, and the other is 'the Celestial Worthy of Universal Transformation', meaning the same divinity after he discarded his human avatar and returned to his celestial post. To understand how the belief system of Daesoon Jinrihoe differs from that of Daoism, it is necessary to study the divinity's change from being 'the Lord God of Great Creation in the Ninth Heaven' to becoming 'the Celestial Worthy of Universal Transformation'. If this distinction is not made clear, confusing arguments arise concerning the term 'Supreme God (Sangje)' as used in Daoism and in Daesoon Jinrihoe. In order to offer a specific explanation, this study suggests three possible directions. The first hypothesis is that although the two names, 'the Celestial Worthy of the Ninth Heaven Who Spreads the Sound of the Thunder Corresponding to Primordial Origin' from Daoism and 'the Supreme God of the Ninth Heaven Who Spreads the Sound of the Thunder Corresponding to Primordial Origin' from Daesoon Jinrihoe, are similar, they actually have nothing to do with one another. The second hypothesis is that they are in fact the same divinity.
Lastly, the third hypothesis is that they are closely connected; however, the former (the Celestial Worthy of the Ninth Heaven Who Spreads the Sound of the Thunder Corresponding to Primordial Origin) is a position needed to fulfill the mission of Jeungsan, whereas the latter (the Supreme God of the Ninth Heaven Who Spreads the Sound of the Thunder Corresponding to Primordial Origin) is a name received after the human avatar passes and the deity returns to the Noebu, 'the department of lightning'. Each of these hypotheses faces certain problems, such as arbitrary mixing, a need for greater theoretical clarity, and weak argumentation. Therefore, leaving some questions unresolved, this study encourages future follow-up studies.