• Title/Summary/Keyword: Convolutional encoder

Automated Design of Viterbi Decoder using Specification Parameters (사양변수를 이용한 비터비 복호기의 자동설계)

  • Kong, Myoung-Seok;Bae, Sung-Il;Kim, Jae-Seok
    • Journal of the Korean Institute of Telematics and Electronics C, v.36C no.1, pp.1-11, 1999
  • In this paper, we propose a design method for a parameterized Viterbi decoder that automatically synthesizes the diverse Viterbi decoders used in digital mobile communication systems. It is designed to synthesize a Viterbi decoder specified by user-provided parameters: the constraint length, code rate, and generator polynomials of the convolutional encoder; the data rate and bits/frame of the data transmission; and the soft-decision bits of the Viterbi decoder. For the design of the parameterized Viterbi decoder, we designed a user interface module in the C language and a Viterbi decoder module in a hierarchical structure using the VHDL language and its generic statement. For verification, we compared our synthesized Viterbi decoder with a conventional Viterbi decoder designed for the IS-95 CDMA system. The proposed design method offers a new way to obtain a required Viterbi decoder in a very short time, simply by supplying the design parameters.
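
As a companion to the parameters listed above, here is a minimal Python sketch of a rate-1/n convolutional encoder driven by the same kind of specification parameters (constraint length K and generator polynomials). It illustrates the parameterization only; the paper's actual output is synthesizable VHDL, not Python:

```python
def conv_encode(bits, K, generators):
    """Rate-1/n convolutional encoder: constraint length K and one
    generator polynomial (tap mask) per output stream."""
    state = 0                                  # shift register of K bits
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)
        for g in generators:
            # Output bit = parity of the register bits tapped by g.
            out.append(bin(state & g).count("1") % 2)
    return out

# Small K=3, rate-1/2 example with the classic (7, 5) octal generators;
# IS-95 itself uses a constraint length of 9.
print(conv_encode([1, 0, 1, 1], K=3, generators=[0b111, 0b101]))
# -> [1, 1, 1, 0, 0, 0, 0, 1]
```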

An Efficient Decoding Method for High Throughput in Underwater Communication (수중통신에서 고 전송률을 위한 효율적인 복호 방법)

  • Baek, Chang-Uk;Jung, Ji-Won;Chun, Seung-Yong;Kim, Woo-Sik
    • The Journal of the Acoustical Society of Korea, v.34 no.4, pp.295-302, 2015
  • Acoustic channels are characterized by long multipath spreads that cause inter-symbol interference, and we consider how this fact influences the design of the receiver structure. To satisfy both performance and throughput requirements, we present a consecutive iterative BCJR (Bahl, Cocke, Jelinek, Raviv) equalization scheme. To achieve low error rates, we rely on powerful BCJR equalization algorithms that iteratively exchange probabilistic information between the inner and outer decoders. To achieve high throughput, we divide a long packet into consecutive small packets, and the channel estimate from each packet is carried over to compensate the next. Based on experimentally measured channel responses, we confirmed that performance improves for long packet sizes.
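
A rough sketch of the throughput idea in the second half of the abstract: the long packet is split into consecutive sub-packets, and each sub-packet's channel estimate seeds the next. The `estimate_channel` and `bcjr_equalize` callables are hypothetical stand-ins for the paper's channel estimator and iterative BCJR equalizer:

```python
import numpy as np

def equalize_consecutive(rx, n_sub, estimate_channel, bcjr_equalize):
    """Split a long received packet into n_sub consecutive sub-packets;
    the channel estimate from each sub-packet is carried forward to
    compensate the next one (hypothetical sketch of the paper's idea)."""
    h = None                                   # no prior channel knowledge
    decoded = []
    for sub in np.array_split(rx, n_sub):
        h = estimate_channel(sub, prior=h)     # refine previous estimate
        decoded.append(bcjr_equalize(sub, h))  # equalize + decode sub-packet
    return np.concatenate(decoded)
```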

Transmission Error Detection and Copyright Protection for MPEG-2 Video Based on Channel Coded Watermark (채널 부호화된 워터마크 신호에 기반한 MPEG-2 비디오의 전송 오류 검출과 저작권 보호)

  • Bae, Chang-Seok;Yuk, Ying-Chung
    • The KIPS Transactions:PartB, v.12B no.7 s.103, pp.745-754, 2005
  • This paper proposes an information hiding algorithm using a channel coding technique that can be used to detect transmission errors and to protect the copyright of MPEG-2 video. The watermark signal is generated by feeding the copyright information of the video data into a convolutional encoder, and the signal is embedded into macroblocks of every frame during encoding of the MPEG-2 video stream. In the decoder, the embedded signal is detected from the macroblocks of every frame and used to localize transmission errors in the video stream. The detected signal can also be used to claim ownership of the video data by decoding it back to the copyright information; at this stage, errors in the detected watermark signal can be corrected by the channel decoder. Three video sequences of 300 frames each were applied to the proposed MPEG-2 codec. Experimental results show that the proposed method can detect transmission errors in the video stream during decoding and can reconstruct the copyright information more reliably than the conventional method.
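
A toy sketch of the error-localization idea: the copyright bits are convolutionally encoded into a watermark (same illustrative K=3 code as in the first sketch; the paper's actual code parameters and embedding details are not stated above), and macroblocks whose detected bits disagree with the expected watermark are flagged:

```python
import numpy as np

def conv_encode(bits, K=3, generators=(0b111, 0b101)):
    # Rate-1/2 convolutional encoder; K and the generators here are
    # illustrative assumptions, not the paper's stated code.
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)
        out += [bin(state & g).count("1") % 2 for g in generators]
    return out

def localize_errors(detected, expected, bits_per_mb=2):
    # Flag macroblocks whose detected watermark bits disagree with the
    # re-encoded copyright bits, as a proxy for transmission errors.
    d = np.array(detected) != np.array(expected)
    return np.flatnonzero(d.reshape(-1, bits_per_mb).any(axis=1))

owner_bits = [int(x) for c in "KIPS" for x in format(ord(c), "08b")]
expected = conv_encode(owner_bits)
detected = expected.copy()
detected[10] ^= 1                               # simulate a channel error
print(localize_errors(detected, expected))      # -> [5]
```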

2D and 3D Hand Pose Estimation Based on Skip Connection Form (스킵 연결 형태 기반의 손 관절 2D 및 3D 검출 기법)

  • Ku, Jong-Hoe;Kim, Mi-Kyung;Cha, Eui-Young
    • Journal of the Korea Institute of Information and Communication Engineering, v.24 no.12, pp.1574-1580, 2020
  • Traditional pose estimation methods rely on special devices or on image processing of captured images. The disadvantage of using a device is that the usable environments are limited and the cost is high. Using cameras and image processing reduces environmental constraints and cost, but performance is lower. CNNs (Convolutional Neural Networks) have been studied for pose estimation using only a camera, without these disadvantages, and various techniques have been proposed to increase recognition performance. In this paper, the effect of skip connections on the network was examined by applying various skip connection forms to hand joint recognition. The experiments confirmed that adding skip connections beyond the basic ones improves performance, with the network using downward skip connections performing best.
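
A minimal PyTorch sketch of the mechanism under study: one convolutional block that can be built with or without an identity skip connection. The paper's full network and its "downward" skip variant are not reproduced here:

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """3x3 conv block with an optional identity skip connection."""
    def __init__(self, channels, use_skip=True):
        super().__init__()
        self.use_skip = use_skip
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = self.body(x)
        if self.use_skip:          # skip connection: add the input back
            y = y + x
        return self.act(y)

x = torch.randn(1, 32, 64, 64)
print(ConvBlock(32)(x).shape)      # torch.Size([1, 32, 64, 64])
```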

Structural health monitoring data anomaly detection by transformer enhanced densely connected neural networks

  • Jun, Li;Wupeng, Chen;Gao, Fan
    • Smart Structures and Systems, v.30 no.6, pp.613-626, 2022
  • Guaranteeing the quality and integrity of structural health monitoring (SHM) data is very important for an effective assessment of structural condition. However, a sensory system may malfunction due to sensor faults or a harsh operational environment, resulting in multiple types of data anomalies in the measured data. Efficiently and automatically identifying anomalies in the vast amounts of measured data is essential for assessing structural conditions and providing early warning of structural failure in SHM. A major challenge for current automated data anomaly detection methods is the imbalance of dataset categories. Considering the characteristics of actual anomalous data, this paper proposes a data anomaly detection method based on a data-level technique and deep learning for SHM of civil engineering structures. The proposed method consists of a data balancing phase, which prepares a comprehensive training dataset using a data-level technique, and an anomaly detection phase based on a carefully designed network. A densely connected convolutional network (DenseNet) and a Transformer encoder are embedded in the network to facilitate extraction of both detailed and global features of the response data and to establish the mapping between the highest-level abstract features and the data anomaly classes. Numerical studies on a steel frame model are conducted to evaluate the performance and noise immunity of the proposed network for data anomaly detection. The applicability of the proposed method to data anomaly classification is validated with measured data from a practical supertall structure. The proposed method performs remarkably well, reaching 95.7% overall accuracy on practical engineering structural monitoring data, which demonstrates the effectiveness of the data balancing and the robust classification capability of the proposed network.
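
A condensed PyTorch sketch of the two named ingredients, densely connected 1-D convolutional feature extraction followed by a Transformer encoder and a classification head, for response-signal anomaly classification. All layer sizes and the class count are assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class DenseTransformerClassifier(nn.Module):
    """Sketch: densely connected 1-D conv features -> Transformer
    encoder -> anomaly-class logits (all sizes are assumptions)."""
    def __init__(self, growth=16, layers=3, d_model=64, n_classes=7):
        super().__init__()
        self.convs = nn.ModuleList()
        ch = 1
        for _ in range(layers):    # dense connectivity: inputs concatenate
            self.convs.append(nn.Conv1d(ch, growth, 9, padding=4))
            ch += growth
        self.proj = nn.Conv1d(ch, d_model, 1)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):          # x: (batch, 1, time)
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        z = self.proj(torch.cat(feats, dim=1)).transpose(1, 2)
        z = self.encoder(z).mean(dim=1)   # global pooling over time
        return self.head(z)

print(DenseTransformerClassifier()(torch.randn(2, 1, 256)).shape)  # (2, 7)
```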

A novel framework for correcting satellite-based precipitation products in Mekong river basin with discontinuous observed data

  • Xuan-Hien Le;Giang V. Nguyen;Sungho Jung;Giha Lee
    • Proceedings of the Korea Water Resources Association Conference, 2023.05a, pp.173-173, 2023
  • The Mekong River Basin (MRB) is a crucial watershed in Asia, impacting over 60 million people across six developing nations. Accurate satellite-based precipitation products (SPPs) are essential for effective hydrological and watershed management in this region. However, the performance of SPPs has been variable and limited. The APHRODITE product, a unique gauge-based dataset for the MRB, is widely used but is only available until 2015. In this study, we present a novel deep learning framework (DLF) for correcting SPPs in the MRB that combines convolutional neural networks with an encoder-decoder architecture to address pixel-by-pixel bias and enhance accuracy. The DLF was applied to four widely used SPPs (TRMM, CMORPH, CHIRPS, and PERSIANN-CDR) in the MRB. Among the original SPPs, the TRMM product outperformed the others. Results revealed that the DLF effectively bridged the spatial-temporal gap between the SPPs and the gauge-based dataset (APHRODITE). Among the four corrected products, ADJ-TRMM demonstrated the best performance, followed by ADJ-CDR, ADJ-CHIRPS, and ADJ-CMORPH. The DLF offers a robust and adaptable solution for bias correction in the MRB and beyond, capable of detecting intricate patterns and learning from data to make appropriate adjustments. With the discontinuation of the APHRODITE product, the DLF represents a promising way to generate a more current and reliable dataset for MRB research. This research showcases the potential of deep learning-based methods for improving the accuracy of SPPs, particularly in regions like the MRB where gauge-based datasets are limited or discontinued.
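
A minimal PyTorch sketch of a convolutional encoder-decoder doing pixel-wise correction of a precipitation grid, in the spirit of the DLF; the layer sizes are assumptions, and training would regress against the APHRODITE grid for the same day:

```python
import torch
import torch.nn as nn

class BiasCorrectionNet(nn.Module):
    """Sketch: conv encoder-decoder mapping a satellite precipitation
    grid to a pixel-wise corrected grid (sizes are assumptions)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, spp):               # spp: (batch, 1, H, W)
        return self.decoder(self.encoder(spp))

net = BiasCorrectionNet()
print(net(torch.randn(4, 1, 64, 64)).shape)   # (4, 1, 64, 64)
```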

Density map estimation based on deep-learning for pest control drone optimization (드론 방제의 최적화를 위한 딥러닝 기반의 밀도맵 추정)

  • Baek-gyeom Seong;Xiongzhe Han;Seung-hwa Yu;Chun-gu Lee;Yeongho Kang;Hyun Ho Woo;Hunsuk Lee;Dae-Hyun Lee
    • Journal of Drive and Control, v.21 no.2, pp.53-64, 2024
  • Global population growth has resulted in an increased demand for food production. Simultaneously, aging rural communities have led to a decrease in the workforce, increasing the demand for automation in agriculture. Drones are particularly useful for unmanned pest control in the field. However, the current method of uniform spraying leads to environmental damage from pesticide overuse and wind drift. To address this issue, spraying performance must be enhanced through precise performance evaluation. Therefore, as a foundational study aimed at optimizing drone-based pest control technologies, this research evaluated water-sensitive paper (WSP) via density map estimation using convolutional neural networks (CNN) with an encoder-decoder structure. To achieve more accurate estimation, this study implemented multi-task learning, incorporating an additional classifier for image segmentation alongside the density map estimation classifier. The proposed model achieved an R-squared (R2) of 0.976 for coverage area on the evaluation dataset, demonstrating satisfactory performance in evaluating WSP at various density levels. Further research is needed to improve the accuracy of spray result estimation and to develop a real-time assessment technology for the field.
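
A minimal PyTorch sketch of the multi-task arrangement described: a shared encoder-decoder trunk feeding a density-map regression head and a segmentation head. The layer sizes and head designs are assumptions, not the paper's exact network:

```python
import torch
import torch.nn as nn

class MultiTaskWSPNet(nn.Module):
    """Sketch: shared encoder-decoder trunk with two heads, one for
    droplet density-map regression and one for WSP segmentation."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.density_head = nn.Conv2d(32, 1, 1)   # droplets per pixel
        self.seg_head = nn.Conv2d(32, 2, 1)       # paper vs. background

    def forward(self, x):
        z = self.trunk(x)
        return self.density_head(z), self.seg_head(z)

density, seg = MultiTaskWSPNet()(torch.randn(1, 3, 128, 128))
# A combined loss would weight both tasks, e.g. MSE(density) + CE(seg).
```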

Generation of Daily High-resolution Sea Surface Temperature for the Seas around the Korean Peninsula Using Multi-satellite Data and Artificial Intelligence (다종 위성자료와 인공지능 기법을 이용한 한반도 주변 해역의 고해상도 해수면온도 자료 생산)

  • Jung, Sihun;Choo, Minki;Im, Jungho;Cho, Dongjin
    • Korean Journal of Remote Sensing, v.38 no.5_2, pp.707-723, 2022
  • Although satellite-based sea surface temperature (SST) data are advantageous for monitoring large areas, spatiotemporal data gaps frequently occur due to various environmental or mechanical causes, so filling the gaps is crucial to maximizing usability. In this study, daily SST composite fields with a resolution of 4 km were produced through a two-step machine learning approach using polar-orbiting and geostationary satellite SST data. The first step was SST reconstruction based on the Data Interpolating Convolutional AutoEncoder (DINCAE) using multi-satellite-derived SST data. The second step refined the reconstructed SST toward in situ measurements using a light gradient boosting machine (LGBM) to produce the final daily SST composite fields. The DINCAE model was validated using random masks for 50 days, whereas the LGBM model was evaluated using leave-one-year-out cross-validation (LOYOCV). The SST reconstruction accuracy was high, with an R2 of 0.98 and a root-mean-square error (RMSE) of 0.97℃. The second step also substantially increased accuracy against in situ measurements, decreasing RMSE by 0.21-0.29℃ and MAE by 0.17-0.24℃. The SST composite fields generated using all in situ data in this study were comparable with existing data-assimilated SST composite fields. In addition, the LGBM model in the second step greatly reduced the overfitting that was reported as a limitation of a previous study that used random forest. The spatial distribution of the corrected SST was similar to that of existing high-resolution SST composite fields, showing that spatial details of oceanic phenomena such as fronts, eddies, and SST gradients were well reproduced. This research demonstrates the potential to produce high-resolution seamless SST composite fields using multi-satellite data and artificial intelligence.
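
A sketch of the second step only, using the public LightGBM API to nudge reconstructed SST toward in situ values. The feature set and the synthetic data below are stand-ins, not the paper's actual predictors:

```python
import numpy as np
import lightgbm as lgb

# Hypothetical predictors: reconstructed SST plus simple auxiliaries.
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.uniform(0, 30, 1000),      # DINCAE-reconstructed SST (degC)
    rng.uniform(33, 39, 1000),     # latitude
    rng.uniform(124, 132, 1000),   # longitude
    rng.integers(1, 366, 1000),    # day of year
])
y = X[:, 0] + rng.normal(0, 0.5, 1000)   # stand-in for in situ SST

# Gradient-boosted trees learn the residual mapping to in situ SST.
model = lgb.LGBMRegressor(n_estimators=200, learning_rate=0.05)
model.fit(X, y)
corrected_sst = model.predict(X)
```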

Detection of Zebra-crossing Areas Based on Deep Learning with Combination of SegNet and ResNet (SegNet과 ResNet을 조합한 딥러닝에 기반한 횡단보도 영역 검출)

  • Liang, Han;Seo, Suyoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.39 no.3, pp.141-148, 2021
  • This paper presents a method to detect zebra-crossings using deep learning that combines SegNet and ResNet. For the blind, a safe crossing system depends on knowing exactly where the zebra-crossings are. Zebra-crossing detection by deep learning can be a good solution to this problem, and robotic vision-based assistive technologies have sprung up over the past few years, focusing on specific scene objects using monocular detectors. These traditional methods achieved significant results with relatively long processing times and enhanced zebra-crossing perception to a large extent. However, running all detectors jointly incurs long latency and becomes computationally prohibitive on wearable embedded systems. In this paper, we propose a model for fast and stable segmentation of zebra-crossings from captured images. The model combines SegNet and ResNet and consists of three steps. First, the input image is subsampled to extract image features, and the convolutional neural network of ResNet is modified to serve as the new encoder. Second, through SegNet's original up-sampling network, the abstract features are restored to the original image size. Finally, the method classifies all pixels and calculates the per-pixel accuracy. The experimental results demonstrate the efficiency of the modified semantic segmentation algorithm at a relatively high computing speed.
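
A compact PyTorch sketch of the encoder swap in the first step: torchvision's ResNet-18 layers serve as the encoder, and a simple SegNet-style upsampling decoder restores full resolution. The layer choices are assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn
from torchvision import models

class ResNetSegNet(nn.Module):
    """Sketch: ResNet-18 layers as the encoder, a SegNet-style
    upsampling decoder (2 classes: zebra-crossing vs. background)."""
    def __init__(self, n_classes=2):
        super().__init__()
        r = models.resnet18(weights=None)
        self.encoder = nn.Sequential(r.conv1, r.bn1, r.relu, r.maxpool,
                                     r.layer1, r.layer2)   # 1/8 scale
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(32, n_classes, 3, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

print(ResNetSegNet()(torch.randn(1, 3, 224, 224)).shape)  # (1, 2, 224, 224)
```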

Experimental Comparison of Network Intrusion Detection Models Solving Imbalanced Data Problem (데이터의 불균형성을 제거한 네트워크 침입 탐지 모델 비교 분석)

  • Lee, Jong-Hwa;Bang, Jiwon;Kim, Jong-Wouk;Choi, Mi-Jung
    • KNOM Review, v.23 no.2, pp.18-28, 2020
  • With the development of the virtual community, the benefits that IT technology provides to people in fields such as healthcare, industry, communication, and culture are increasing, and the quality of life is improving as well. Accordingly, various malicious attacks target the developed network environment. Firewalls and intrusion detection systems exist to detect these attacks in advance, but they are limited in detecting malicious attacks that evolve day by day. To solve this problem, intrusion detection research using machine learning is being actively conducted, but false positives and false negatives occur due to the imbalance of the training dataset. In this paper, a Random Oversampling method is used to solve the imbalance problem of the UNSW-NB15 dataset used for network intrusion detection. Through experiments, we compared and analyzed the accuracy, precision, recall, F1-score, training and prediction time, and hardware resource consumption of the models. Building on this study of the Random Oversampling method, we plan to develop a more efficient network intrusion detection model using other methods and high-performance models that can solve the imbalanced data problem.
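
A minimal sketch of the Random Oversampling step using imbalanced-learn, with a synthetic dataset standing in for the preprocessed UNSW-NB15 features:

```python
from collections import Counter
from imblearn.over_sampling import RandomOverSampler
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for UNSW-NB15: a 95/5 imbalanced binary dataset.
X, y = make_classification(n_samples=5000, weights=[0.95, 0.05],
                           random_state=0)
print("before:", Counter(y))

# Random oversampling duplicates minority-class rows until balanced.
X_bal, y_bal = RandomOverSampler(random_state=0).fit_resample(X, y)
print("after:", Counter(y_bal))

# Any downstream model is then trained on the balanced data.
clf = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
```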