• Title/Summary/Keyword: Communication layer

Performance and Iteration Number Statistics of Flexible Low Density Parity Check Codes (가변 LDPC 부호의 성능과 반복횟수 통계)

  • Seo, Young-Dong;Kong, Min-Han;Song, Moon-Kyou
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.1
    • /
    • pp.189-195
    • /
    • 2008
  • The OFDMA physical layer in the IEEE 802.16e WiMAX standard adopts 114 LDPC codes with various code rates and block sizes as a channel coding scheme to meet varying channel environments and different transmission-performance requirements. In this paper, the performances of the LDPC codes are evaluated for various code rates and block lengths through simulation studies using the min-sum decoding algorithm over AWGN channels. As the block length increases and the code rate decreases, the BER performance improves. For code rates 2/3 and 3/4, where two different codes are specified for each rate, the codes of rates 2/3A and 3/4B outperform those of rates 2/3B and 3/4A, respectively. Through statistical analyses of the number of decoding iterations, the decoding complexity and the word error rates of the LDPC codes are estimated. The results can be used to trade off performance against complexity in LDPC decoder designs.
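A minimal sketch of min-sum decoding, including the iteration count that feeds the kind of statistics discussed above, is given below. The toy parity-check matrix, LLR values, and iteration limit are hypothetical and are not the 802.16e codes evaluated in the paper.

```python
# Minimal min-sum LDPC decoding sketch over an AWGN channel (illustrative only).
import numpy as np

# Toy parity-check matrix (3 checks x 6 bits); the real 802.16e codes are much larger.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])

def min_sum_decode(llr, H, max_iter=20):
    """Min-sum decoding; returns the hard decision and the number of iterations used."""
    m, n = H.shape
    msg_c2v = np.zeros((m, n))                               # check-to-variable messages
    hard = (llr < 0).astype(int)
    for it in range(1, max_iter + 1):
        msg_v2c = (llr + msg_c2v.sum(axis=0) - msg_c2v) * H  # variable-to-check, on edges only
        for i in range(m):
            idx = np.flatnonzero(H[i])
            for j in idx:
                others = idx[idx != j]
                sign = np.prod(np.sign(msg_v2c[i, others]))
                msg_c2v[i, j] = sign * np.min(np.abs(msg_v2c[i, others]))
        posterior = llr + msg_c2v.sum(axis=0)
        hard = (posterior < 0).astype(int)                   # LLR < 0 -> bit 1
        if not np.any(H @ hard % 2):                         # all parity checks satisfied
            return hard, it                                  # iteration count feeds the statistics
    return hard, max_iter

llr = np.array([2.1, -0.3, 1.7, 0.9, 1.2, 2.5])              # received channel LLRs (AWGN)
codeword, iterations = min_sum_decode(llr, H)
print(codeword, iterations)
```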

A Design of AES-based Key Wrap/Unwrap Core for WiBro Security (와이브로 보안용 AES기반의 Key Wrap/Unwrap 코어 설계)

  • Kim, Jong-Hwan;Jeon, Heung-Woo;Shin, Kyung-Wook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.7
    • /
    • pp.1332-1340
    • /
    • 2007
  • This paper describes an efficient hardware design of a key wrap/unwrap algorithm for the security layer of the WiBro system. The key wrap/unwrap core (WB_KeyWuW) is based on the AES (Advanced Encryption Standard) algorithm and performs encryption/decryption of a 128-bit TEK (Traffic Encryption Key) with a 128-bit KEK (Key Encryption Key). In order to achieve an area-efficient implementation, two design techniques are considered. First, the round transformation block within the AES core is designed using a structure shared between encryption and decryption. Second, the SubByte/InvSubByte blocks, which require the largest hardware in the AES core, are implemented using a field transformation technique. As a result, the gate count of the WB_KeyWuW core is reduced by about 25% compared with a conventional LUT (Lookup Table)-based design. The WB_KeyWuW core designed in Verilog-HDL has about 14,300 gates, and the estimated throughput is about 16~22 Mbps at 100 MHz and 3.3 V, so the designed core can be used as an IP for the hardware design of a WiBro security system.
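For a quick software reference point, the snippet below wraps a 128-bit TEK with a 128-bit KEK using the RFC 3394 AES key wrap from the Python `cryptography` package. This is only an assumption for illustration: the paper describes a hardware (Verilog-HDL) core, and the WiBro/802.16e key-wrap variant may differ in detail from RFC 3394.

```python
# Software sketch of AES key wrap/unwrap of a 128-bit TEK under a 128-bit KEK.
# Uses the RFC 3394 key wrap from the `cryptography` package; the WiBro-specific
# variant implemented in the WB_KeyWuW core may differ in detail.
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

kek = os.urandom(16)                      # 128-bit Key Encryption Key
tek = os.urandom(16)                      # 128-bit Traffic Encryption Key to protect

wrapped = aes_key_wrap(kek, tek)          # 24 bytes: key material plus 64-bit integrity block
assert aes_key_unwrap(kek, wrapped) == tek
print(wrapped.hex())
```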

The System of Microarray Data Classification Using a Significant Gene Combination Method Based on Neural Network (신경망 기반의 유전자조합을 이용한 마이크로어레이 데이터 분류 시스템)

  • Park, Su-Young;Jung, Chai-Yeoung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.7
    • /
    • pp.1243-1248
    • /
    • 2008
  • As recent advances in bioinformatics technology make micro-level experiments possible, the expression pattern of an entire genome can be observed on a single chip and the interactions of thousands of genes can be analyzed at the same time. In this study, we used cDNA microarrays of 3,840 genes obtained from a neuronal differentiation experiment on cortical stem cells of white mice with cancer. After normalization, a significant gene list was extracted with the similarity-scale combination method proposed in this paper, a class classification model was constructed, and the performance of the existing DT, NB, and SVM classifiers was analyzed and compared with that of a multi-layer perceptron neural network classifier combined with the proposed method. Classification with the multi-layer perceptron neural network classifier on the 200 genes selected by the combination of PC (Pearson correlation coefficient) and ED (Euclidean distance coefficient) achieved an accuracy of 98.84%, showing improved classification performance over the experiments using the other classifiers.
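As a rough illustration of the similarity-scale combination idea, the sketch below scores genes with a Pearson-correlation term and a Euclidean-distance term against the class labels, combines the two rankings, keeps the top 200 genes, and trains a multi-layer perceptron on them. The data shapes, per-gene scoring details, and rank-combination rule are assumptions, not the exact procedure of the paper.

```python
# Hypothetical sketch: combine PC and ED gene scores, keep the top 200 genes,
# and classify with a multi-layer perceptron.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3840))                 # 40 samples x 3840 genes (cDNA microarray)
y = rng.integers(0, 2, size=40)                 # binary class labels

pc = np.abs([np.corrcoef(X[:, g], y)[0, 1] for g in range(X.shape[1])])   # PC score per gene
ed = np.abs(X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0))              # ED-style class separation

combined_rank = np.argsort(np.argsort(-pc)) + np.argsort(np.argsort(-ed)) # lower = better on both
top200 = np.argsort(combined_rank)[:200]

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X[:, top200], y)
print(clf.score(X[:, top200], y))               # training accuracy on the toy data
```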

Performance of 802.11b/g under Different Data Rates and Traffic Rates in Mobile IPv4 Environment (모바일 IPv4환경에서 802.11b/g 데이터 전송률과 트래픽 수신에 따른 효율)

  • Shrestha, Anish Prasad;Pyun, Jae-Young
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.12
    • /
    • pp.2195-2200
    • /
    • 2008
  • WLAN technology, especially 802.11b and g, has seen huge subscriber growth in a relatively short time period due to the low cost and ease of installation of WLAN hardware. These technologies provide multiple data rates ranging from 1 to 54 Mbps depending on the modulation used in the physical layer. This paper presents the performance of 802.11b and g under different data rates in a mobile IPv4 environment. OPNET is used as the network simulator. The performance metric used in this work is the data traffic received by a WLAN station roaming among Access Points (APs) of different ESSs. It is found that many data packets are lost during handover between mobile agents. The traffic received by the roaming station varies with the data rates provided by the two different WLAN technologies.

A Scheme to Reduce the Transmission Delay for Real-Time Applications in Sensor Networks (센서 네트워크에서 실시간 응용을 위한 전송 지연 개선 기법)

  • Bin, Bong-Uk;Lee, Jong-Hyup
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.8
    • /
    • pp.1493-1499
    • /
    • 2007
  • Real-time applications in a wireless sensor network environment require real-time transmissions from sensing nodes to sink nodes. Existing congestion control mechanisms have addressed congestion problems in sensor networks, but they only adjust the reporting frequency or the sending rate at intermediate nodes, and they are not suitable for real-time applications from the transmission-delay point of view. In this paper, we propose a new mechanism that can reduce the transmission delay and increase the throughput for real-time applications in sensor networks. The mechanism classifies data by their real-time characteristics and processes data that retain real-time characteristics ahead of other data, such as non-real-time data or data whose real-time characteristics have expired. A modified frame format is also proposed in order to apply the mechanism to the IEEE 802.15.4 MAC layer. A simulation based on ns-2 is performed to verify the performance of the proposed scheme from the standpoints of transmission delay and throughput. The simulation results show that the proposed algorithm performs better, particularly when it is applied to real-time applications in sensor networks.
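The core of the scheme, forwarding real-time data ahead of other traffic at intermediate nodes, can be illustrated with a simple priority queue. The frame fields, priority rule, and deadline handling below are hypothetical stand-ins, not the modified IEEE 802.15.4 MAC frame format defined in the paper.

```python
# Hypothetical sketch: real-time frames with a still-valid deadline are relayed first.
import heapq, time
from dataclasses import dataclass, field

@dataclass(order=True)
class Frame:
    priority: int                      # 0 = real-time, 1 = non-real-time or expired
    enqueued: float
    payload: bytes = field(compare=False)
    deadline: float = field(compare=False, default=float("inf"))

class RelayQueue:
    """Outgoing queue of an intermediate sensor node."""
    def __init__(self):
        self._q = []

    def push(self, payload, realtime=False, deadline=float("inf")):
        now = time.monotonic()
        prio = 0 if realtime and now < deadline else 1   # expired real-time data loses priority
        heapq.heappush(self._q, Frame(prio, now, payload, deadline))

    def pop(self):
        return heapq.heappop(self._q).payload

q = RelayQueue()
q.push(b"temperature log")                                        # non-real-time data
q.push(b"fire alarm", realtime=True, deadline=time.monotonic() + 0.05)
print(q.pop())                                                    # the real-time frame goes out first
```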

The Design and Practice of Disaster Response RL Environment Using Dimension Reduction Method for Training Performance Enhancement (학습 성능 향상을 위한 차원 축소 기법 기반 재난 시뮬레이션 강화학습 환경 구성 및 활용)

  • Yeo, Sangho;Lee, Seungjun;Oh, Sangyoon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.7
    • /
    • pp.263-270
    • /
    • 2021
  • Reinforcement learning (RL) is a method for finding an optimal policy through training, and it is one of the popular methods for solving lifesaving and disaster response problems effectively. However, conventional reinforcement learning approaches to disaster response use either simple environments, such as grids and graphs, or self-developed environments whose practical effectiveness is hard to verify. In this paper, we propose the design of a disaster response RL environment that utilizes the detailed property information of a disaster simulation, so that the reinforcement learning method can be applied to the real world. For the RL environment, we design and build the reinforcement learning communication as well as the interface between the RL agent and the disaster simulation. We also apply a dimension reduction method that converts non-image feature vectors into an image format, which can be used effectively with convolution layers, in order to exploit the high-dimensional and detailed properties of the disaster simulation. To verify the effectiveness of the proposed method, we conducted empirical evaluations, which show that the proposed method outperforms conventional methods on the building-fire damage scenario.
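A minimal sketch of the vector-to-image conversion step is given below: a long, non-image state vector from the simulator is reduced to a fixed number of values and reshaped into a single-channel 2-D observation that a convolution layer can consume. The truncate-and-pad reduction and the 16x16 size are illustrative assumptions, not the exact dimension reduction method used with the disaster simulation.

```python
# Hypothetical sketch: reduce a non-image simulation state vector and reshape it
# into a single-channel 2-D observation for a convolutional policy network.
import numpy as np

def to_image(state, side=16):
    """Keep (or zero-pad to) side*side values and reshape them into an image."""
    target = side * side
    v = np.zeros(target, dtype=np.float32)
    n = min(target, state.size)
    v[:n] = state[:n]
    v = (v - v.min()) / (v.max() - v.min() + 1e-8)   # normalize like pixel intensities
    return v.reshape(1, side, side)                   # (channel, height, width)

state = np.random.rand(1000).astype(np.float32)       # detailed property vector from the simulator
obs = to_image(state)
print(obs.shape)                                      # (1, 16, 16)
```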

Deep Learning Music genre automatic classification voting system using Softmax (소프트맥스를 이용한 딥러닝 음악장르 자동구분 투표 시스템)

  • Bae, June;Kim, Jangyoung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.23 no.1
    • /
    • pp.27-32
    • /
    • 2019
  • Research that implements classification, one of the outstanding human abilities, through deep learning algorithms includes unimodal models, multi-modal models, and multi-modal methods using music videos. In this study, better results were obtained by proposing a system that analyzes each song's spectrum as short samples and votes on the results. Among deep learning algorithms, CNN showed superior performance in music genre classification compared to RNN, and performance improved further when CNN and RNN were applied together. The system that deep-learns short samples of music and lets each CNN result cast a vote showed better results than the previous model, and the model with a Softmax layer added performed best. With the explosive growth of digital media and numerous streaming services, the need for automatic classification of music genres is increasing. Future research will need to reduce the proportion of unclassified songs and develop algorithms for the final category classification of the remaining songs.
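The song-level voting described above can be sketched as follows: each short sample is classified from the CNN's softmax output and casts one vote, and the genre with the most votes wins. The mock model, sample shapes, and ten-genre setup are placeholders, not the trained network from the paper.

```python
# Hypothetical sketch of per-sample softmax voting for song-level genre classification.
import numpy as np

def predict_song_genre(model, samples, n_genres=10):
    """Each short sample votes with its argmax softmax class; the majority wins."""
    votes = np.zeros(n_genres)
    for s in samples:
        probs = model.predict(s[np.newaxis, ...])[0]   # softmax probabilities for one sample
        votes[np.argmax(probs)] += 1
    return int(np.argmax(votes))

class MockModel:                                       # stand-in so the sketch runs without a trained CNN
    def predict(self, x):
        return np.random.dirichlet(np.ones(10), size=len(x))

samples = np.random.rand(20, 128, 128)                 # 20 short spectrogram samples from one song
print(predict_song_genre(MockModel(), samples))
```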

Active pulse classification algorithm using convolutional neural networks (콘볼루션 신경회로망을 이용한 능동펄스 식별 알고리즘)

  • Kim, Geunhwan;Choi, Seung-Ryul;Yoon, Kyung-Sik;Lee, Kyun-Kyung;Lee, Donghwa
    • The Journal of the Acoustical Society of Korea
    • /
    • v.38 no.1
    • /
    • pp.106-113
    • /
    • 2019
  • In this paper, we propose an algorithm to classify received active pulses when the active sonar system is operated in a non-cooperative mode. The proposed algorithm uses a CNN (Convolutional Neural Network), which shows good performance in various fields. As the input of the CNN, time-frequency analysis data obtained by the STFT (Short Time Fourier Transform) of the received signal is used. The CNN used in this paper consists of two convolution and pooling layers. We designed a database-based neural network and a pulse-feature-based neural network according to the output-layer design. To verify the performance of the algorithm, data from 3,110 CW (Continuous Wave) and LFM (Linear Frequency Modulated) pulses received from the actual ocean were processed to construct training and test data. As a result of the simulation, the database-based neural network showed 99.9 % accuracy, and the feature-based neural network showed about 96 % accuracy when allowing a 2-pixel error.
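The pipeline described above, STFT of the received pulse followed by a CNN with two convolution and pooling layers, is sketched below in PyTorch. The sampling rate, layer sizes, and the two-class (CW vs. LFM) output head are assumptions made for the sake of a runnable example, not the paper's network configuration.

```python
# Hypothetical sketch: STFT time-frequency image of a received pulse fed to a
# CNN with two convolution + pooling layers.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft

fs = 8000                                                  # assumed sampling rate
t = np.arange(0, 0.1, 1 / fs)
pulse = np.cos(2 * np.pi * (1000 * t + 4000 * t ** 2))     # toy LFM pulse

f, frames, Z = stft(pulse, fs=fs, nperseg=128)
img = torch.tensor(np.abs(Z), dtype=torch.float32)[None, None]   # (batch, channel, freq, time)

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),    # first conv + pooling layer
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # second conv + pooling layer
    nn.Flatten(), nn.LazyLinear(2), nn.Softmax(dim=1),            # CW vs. LFM class probabilities
)
print(model(img).shape)                                           # (1, 2)
```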

Optimal Parameter Extraction based on Deep Learning for Premature Ventricular Contraction Detection (심실 조기 수축 비트 검출을 위한 딥러닝 기반의 최적 파라미터 검출)

  • Cho, Ik-sung;Kwon, Hyeog-soong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.23 no.12
    • /
    • pp.1542-1550
    • /
    • 2019
  • Legacy studies on arrhythmia classification, such as neural networks and fuzzy methods, have sought to improve classification accuracy. Deep learning, which overcomes the limit on the number of hidden layers that constrains neural networks trained with the error backpropagation algorithm, is now most frequently used for arrhythmia classification. In order to apply a deep learning model to an ECG signal, it is necessary to select an optimal model and parameters. In this paper, we propose an optimal parameter extraction method based on deep learning. For this purpose, the R-wave is detected in the ECG signal from which noise has been removed, and QRS and RR-interval segments are modelled. The weights are then learned by supervised learning through deep learning, and the model is evaluated on verification data. The detection and classification rates of the R-wave and PVC are evaluated on the MIT-BIH arrhythmia database. The performance results indicate an average of 99.77% for R-wave detection and 97.84% for PVC classification.
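The feature stage described above can be sketched as follows: detect R-waves in a denoised ECG trace, form RR-interval features around each beat, and feed them to a small supervised network. The thresholds, feature choices, toy signal, and classifier shape are illustrative assumptions, not the tuned parameters found in the paper.

```python
# Hypothetical sketch: R-wave detection, per-beat RR-interval features, and a small classifier.
import numpy as np
from scipy.signal import find_peaks
from sklearn.neural_network import MLPClassifier

def rr_features(ecg, fs=360):
    """Return per-beat features: previous RR, current RR, and their ratio."""
    r_peaks, _ = find_peaks(ecg, height=0.5 * ecg.max(), distance=int(0.2 * fs))
    rr = np.diff(r_peaks) / fs
    feats = [(rr[i - 1], rr[i], rr[i] / rr[i - 1]) for i in range(1, len(rr))]
    return np.array(feats), r_peaks

# Toy ECG: a regular beat train with one beat squeezed in early (PVC-like)
fs, beats = 360, [0.2, 1.0, 1.8, 2.3, 3.2, 4.0]
ecg = np.zeros(int(5 * fs))
for b in beats:
    ecg[int(b * fs)] = 1.0                      # impulses stand in for R-waves

X, _ = rr_features(ecg, fs)
y = np.array([0, 1, 0, 0])                      # hypothetical labels: 1 marks the early beat
clf = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0).fit(X, y)
print(clf.predict(X))
```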

Parameter Extraction Based on AR and Arrhythmia Classification through Deep Learning (AR 기반의 특징점 추출과 딥러닝을 통한 부정맥 분류)

  • Cho, Ik-sung;Kwon, Hyeog-soong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.10
    • /
    • pp.1341-1347
    • /
    • 2020
  • Legacy studies on arrhythmia classification, such as neural networks, fuzzy methods, and machine learning, have sought to improve classification accuracy. In particular, deep learning, which overcomes the limit on the number of hidden layers that constrains neural networks trained with the error backpropagation algorithm, is most frequently used for arrhythmia classification. In order to apply a deep learning model to an ECG signal, it is necessary to select an optimal model and parameters. In this paper, we propose parameter extraction based on AR and arrhythmia classification through deep learning. For this purpose, the R-wave is detected in the ECG signal from which noise has been removed, and the QRS and RR interval are modelled. The weights are then learned by supervised learning through deep learning, and the model is evaluated on verification data. The classification rate of PVC is evaluated on the MIT-BIH arrhythmia database. The achieved scores indicate an arrhythmia classification rate of over 97%.
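As a rough illustration of the AR feature stage, the sketch below estimates low-order AR coefficients on a QRS segment around a detected R-wave and appends the RR interval to form a classifier input vector. The Yule-Walker estimator, AR order, window length, and toy segment are illustrative assumptions, not the exact parameters of the paper.

```python
# Hypothetical sketch: AR coefficients of a QRS segment plus the RR interval as features.
import numpy as np

def ar_coefficients(x, order=4):
    """Yule-Walker estimate of AR coefficients from one QRS segment."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order] / len(x)   # autocorrelation r[0..order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

fs = 360
rng = np.random.default_rng(1)
t = np.arange(0, 0.1, 1 / fs)                                   # 100 ms window around the R-wave
qrs = np.sin(2 * np.pi * 12 * t) + 0.05 * rng.standard_normal(t.size)   # toy QRS-like segment
rr_interval = 0.82                                              # seconds, hypothetical

feature_vector = np.append(ar_coefficients(qrs, order=4), rr_interval)
print(feature_vector)                                           # 4 AR coefficients + RR interval
```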