• Title/Summary/Keyword: Encoder-decoder

451 search results

Signaling Method of Multiple Motion Vector Resolutions Using Contradiction Testing (모순 검증을 통한 다중 움직임 벡터 해상도 시그널링 방법)

  • Won, Kwanghyun;Park, Younghyeon;Jeon, Byeungwoo
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.7 / pp.107-118 / 2015
  • Although most current video coding standards fix the motion vector resolution, e.g., at quarter-pel accuracy, a scheme supporting multiple motion vector resolutions can improve coding efficiency, since it can use only the motion vector accuracy actually required by the video content while generating a more accurate motion predictor. However, signaling the selected resolution for each motion vector incurs overhead. This paper proposes a contradiction testing-based signaling scheme for the motion vector resolution. The proposed method selects the best resolution for each motion vector among multiple candidates so as to produce the minimum number of coded bits for the motion vector. The signaling overhead is reduced by a contradiction test that operates under a predefined criterion at both encoder and decoder to prune irrelevant candidate resolutions from the signaled set. Experimental results verify that the proposed scheme effectively reduces coded motion information, achieving a Bjøntegaard delta bit rate (BDBR) gain of about 4.01% on average (and up to 15.17%) over a conventional scheme with a fixed motion vector resolution.
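
A deliberately simplified sketch of the pruning idea behind this abstract: any test computable from information available to both encoder and decoder can remove candidate resolutions from the signaled set. The criterion below (an MVD in quarter-pel units is "contradicted" by a resolution whose step does not divide it) is an illustrative assumption, not the paper's exact test.

```python
# Toy contradiction test shared by encoder and decoder: candidates that
# fail the test never need to be signaled.
RESOLUTION_STEPS = {"quarter": 1, "half": 2, "integer": 4}  # step in quarter-pel units

def surviving_resolutions(mvd_x, mvd_y):
    """Keep only resolutions whose step size can represent the MVD."""
    return [name for name, step in RESOLUTION_STEPS.items()
            if mvd_x % step == 0 and mvd_y % step == 0]

def signaling_bits(mvd_x, mvd_y):
    """A resolution index is coded only when more than one candidate
    survives; a fixed-length code over survivors is assumed."""
    n = len(surviving_resolutions(mvd_x, mvd_y))
    return 0 if n <= 1 else (n - 1).bit_length()

# (8, 4) quarter-pel units fits all three resolutions -> 2 bits needed;
# (3, 1) fits only quarter-pel -> the resolution is inferred, 0 bits.
print(surviving_resolutions(8, 4), signaling_bits(8, 4))
print(surviving_resolutions(3, 1), signaling_bits(3, 1))
```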

Automatic Text Summarization Based on a Selective Copy Mechanism for Addressing OOV (미등록 어휘에 대한 선택적 복사를 적용한 문서 자동요약)

  • Lee, Tae-Seok;Seon, Choong-Nyoung;Jung, Youngim;Kang, Seung-Shik
    • Smart Media Journal / v.8 no.2 / pp.58-65 / 2019
  • Automatic text summarization shortens a text document by either extraction or abstraction. Recent work applies the abstractive approach, inspired by deep learning methods that scale to large document collections. Abstractive text summarization relies on pre-generated word embedding information, but low-frequency yet salient words such as terminology are seldom included in the dictionary; these are the so-called out-of-vocabulary (OOV) words. OOV words degrade the performance of encoder-decoder neural models. To address OOV words in abstractive text summarization, we propose a copy mechanism that facilitates copying new words from the input document when generating summary sentences. Unlike previous studies, the proposed approach combines accurate pointing information with a selective copy mechanism based on a bidirectional RNN and bidirectional LSTM. In addition, a neural gate model that estimates the generation probability and a loss function that optimizes the entire abstraction model are applied. The dataset was constructed from a collection of journal article abstracts and titles. Experimental results demonstrate that both ROUGE-1 (based on word recall) and ROUGE-L (based on the longest common subsequence) of the proposed encoder-decoder model improved, to 47.01 and 29.55, respectively.
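
A minimal sketch of the copy-gate idea this abstract describes: the output distribution mixes a vocabulary distribution with a copy distribution over source positions, weighted by a generation probability, so OOV source words can still be emitted. Shapes and the stand-in gate are assumptions; a real model computes the gate from a learned layer over the decoder state.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def final_distribution(vocab_logits, attn_scores, src_token_ids, vocab_size, n_oov):
    """Extend the vocabulary with per-document OOV slots and scatter
    attention mass onto (possibly OOV) source tokens."""
    p_vocab = softmax(vocab_logits)                      # (vocab_size,)
    attn = softmax(attn_scores)                          # (src_len,)
    p_gen = 1.0 / (1.0 + np.exp(-vocab_logits.mean()))   # stand-in gate, not a learned layer
    p_final = np.zeros(vocab_size + n_oov)
    p_final[:vocab_size] = p_gen * p_vocab               # generate from the vocabulary
    for pos, tok in enumerate(src_token_ids):            # tok >= vocab_size marks an OOV slot
        p_final[tok] += (1.0 - p_gen) * attn[pos]        # copy from the source
    return p_final

# Vocab of 10 ids plus one OOV slot (id 10) at source positions 2 and 4.
p = final_distribution(np.random.randn(10), np.random.randn(5), [1, 4, 10, 3, 10], 10, 1)
print(p.sum())  # ~1.0: a valid distribution over vocabulary + OOV slots
```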

Development of Fender Segmentation System for Port Structures using Vision Sensor and Deep Learning (비전센서 및 딥러닝을 이용한 항만구조물 방충설비 세분화 시스템 개발)

  • Min, Jiyoung;Yu, Byeongjun;Kim, Jonghyeok;Jeon, Haemin
    • Journal of the Korea institute for structural maintenance and inspection / v.26 no.2 / pp.28-36 / 2022
  • As port structures are exposed to various extreme external loads such as wind (typhoons), sea waves, and collisions with ships, it is important to evaluate their structural safety periodically. To monitor port structures, especially rubber fenders, a fender segmentation system using a vision sensor and deep learning is proposed in this study. For fender segmentation, we propose a new deep learning network that improves the encoder-decoder framework by incorporating a receptive field block convolution module, inspired by the eccentricity of receptive fields in the human visual system, into a DenseNet-style architecture. To train the network, fender images of various types, such as BP, V, cell, cylindrical, and tire types, were collected and augmented with four methods: elastic distortion, horizontal flip, color jitter, and affine transforms. The proposed algorithm was trained and verified on the collected fender images, and the results showed that the system segments precisely in real time with a high IoU (84%) and F1 score (90%) in comparison with a conventional segmentation model, U-Net with a VGG16 encoder. The trained network was then applied to real images taken at a port in the Republic of Korea, where the fenders were segmented with high accuracy even with a small dataset.
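
An illustrative pipeline for the four augmentation methods named in this abstract, using torchvision equivalents (v0.15+ for ElasticTransform); all parameter values are assumptions, since the paper's exact settings are not given here.

```python
import torchvision.transforms as T

# The four augmentations from the abstract: flip, color jitter, affine, elastic.
augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    T.RandomAffine(degrees=10, translate=(0.1, 0.1), scale=(0.9, 1.1)),
    T.ElasticTransform(alpha=50.0),
    T.ToTensor(),
])
# For segmentation training, the geometric transforms (flip, affine, elastic)
# must be applied identically to the mask, e.g., via torchvision.transforms.v2,
# which operates on image/mask pairs.
```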

Prediction of Music Generation on Time Series Using Bi-LSTM Model (Bi-LSTM 모델을 이용한 음악 생성 시계열 예측)

  • Kim, Kwangjin;Lee, Chilwoo
    • Smart Media Journal / v.11 no.10 / pp.65-75 / 2022
  • Deep learning is used as a creative tool that can overcome the limitations of existing analysis models and generate various types of output such as text, images, and music. In this paper, we propose a method for preprocessing audio data, using the Niko's MIDI Pack sound-source files as a dataset, and for generating music with a Bi-LSTM model. Starting from a generated root note, multiple hidden layers are stacked to create new notes suitable for the composition, and an attention mechanism is applied to the decoder's output gate to weight the factors that most affect the data fed in from the encoder. Settings such as the loss function and the optimization method are applied as hyperparameters for improving the LSTM model. The proposed model is a multi-channel Bi-LSTM with attention that uses note pitches (obtained by separating the treble and bass clefs), note lengths, rests, rest lengths, and chords to improve the efficiency and prediction quality of the MIDI deep learning process. The trained model generates sound that follows a musical scale, distinct from noise, and we aim to contribute to the generation of harmonically stable music.
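
A minimal sketch (shapes and names assumed) of a Bi-LSTM encoder with an attention-weighted step for next-note prediction, in the spirit of this abstract; the paper's multi-channel design (pitch, duration, rests, chords) would run one such stream per channel.

```python
import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    def __init__(self, n_notes, emb=64, hid=128):
        super().__init__()
        self.embed = nn.Embedding(n_notes, emb)
        self.encoder = nn.LSTM(emb, hid, num_layers=2, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hid, 1)       # scores each encoder state
        self.out = nn.Linear(2 * hid, n_notes)  # next-note logits

    def forward(self, notes):                   # notes: (batch, seq) of note ids
        h, _ = self.encoder(self.embed(notes))  # (batch, seq, 2*hid)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over the sequence
        context = (w * h).sum(dim=1)            # weighted sum of encoder states
        return self.out(context)                # logits over the next note

model = BiLSTMAttention(n_notes=128)
logits = model(torch.randint(0, 128, (4, 32)))  # 4 sequences of 32 notes
print(logits.shape)                             # torch.Size([4, 128])
```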

Cross-Lingual Style-Based Title Generation Using Multiple Adapters (다중 어댑터를 이용한 교차 언어 및 스타일 기반의 제목 생성)

  • Yo-Han Park;Yong-Seok Choi;Kong Joo Lee
    • KIPS Transactions on Software and Data Engineering / v.12 no.8 / pp.341-354 / 2023
  • The title of a document is a brief summary of the document. Readers can understand a document more easily if we provide its title in their preferred style and language. In this research, we propose a cross-lingual, style-based title generation model using multiple adapters. Training such a model requires a parallel corpus of titles in several languages and styles, which is quite difficult to construct, whereas a monolingual title generation corpus of a single style can be built easily. We therefore apply a zero-shot strategy to generate a title in a different language and with a different style for an input document. The baseline model is a Transformer, consisting of an encoder and a decoder, pre-trained on several languages. The model is then equipped with multiple adapters for translation, languages, and styles. After the model learns a translation task from the parallel corpus, it learns a title generation task from the monolingual title generation corpus. When training the model on a task, we activate only the adapter that corresponds to that task. When generating a cross-lingual, style-based title, we activate only the adapters that correspond to the target language and the target style. Experimental results show that our proposed model performs on par with a pipeline model that first translates into the target language and then generates a title. Natural language generation has changed significantly with the emergence of large-scale language models, but research on improving its performance with limited resources and limited data should continue; this study explores the significance of such research.
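
A sketch of the adapter idea described in this abstract: small residual bottleneck layers sit inside a frozen pre-trained Transformer, and only the adapters matching the current task combination are activated. The module, task, and style names below are hypothetical.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, dim=512, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):                       # residual bottleneck
        return x + self.up(torch.relu(self.down(x)))

class AdapterBank(nn.Module):
    def __init__(self, names, dim=512):
        super().__init__()
        self.adapters = nn.ModuleDict({n: Adapter(dim) for n in names})

    def forward(self, x, active):
        for name in active:                     # only activated adapters run
            x = self.adapters[name](x)
        return x

bank = AdapterBank(["translate", "lang_ko", "lang_en", "style_news", "style_sns"])
h = torch.randn(2, 10, 512)                     # hidden states from a frozen layer
# Zero-shot combination at inference: target language + target style only.
out = bank(h, active=["lang_en", "style_news"])
print(out.shape)
```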

A Review of Seismic Full Waveform Inversion Based on Deep Learning (딥러닝 기반 탄성파 전파형 역산 연구 개관)

  • Pyun, Sukjoon;Park, Yunhui
    • Geophysics and Geophysical Exploration / v.25 no.4 / pp.227-241 / 2022
  • Full waveform inversion (FWI) in seismic data processing is an inversion technique used to estimate the subsurface velocity model for oil and gas exploration. Recently, deep learning (DL) has been increasingly used for seismic data processing, and its combination with FWI has attracted remarkable research effort. For example, DL-based techniques have been utilized to preprocess input data for FWI, and FWI itself has been implemented directly with DL. DL-based FWI can be divided into the following approaches: pure data-based, physics-based neural network, encoder-decoder, reparameterized FWI, and physics-informed neural network. In this review, we describe the theory and characteristics of these methods, systematizing them in the order of their development. In the early days of DL-based FWI, a large training dataset was prepared so that a purely data-driven model could predict the velocity model, faithfully following the basic principles of data science. The current research trend is to supplement the shortcomings of the pure data-based approach with loss functions that incorporate seismic data or physical information from the wave equation itself into deep neural networks. Through these developments, DL-based FWI has evolved to require less training data, to alleviate the cycle-skipping problem that is an intrinsic limitation of FWI, and to reduce computation time dramatically. The value of DL-based FWI is expected to increase continually in seismic data processing.
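
As a rough illustration of the physics-informed direction this review describes, the sketch below combines a data-misfit term with a wave-equation residual so gradients reach both the wavefield network and the velocity. The 1D acoustic setting, names, and shapes are all assumptions, not taken from any specific paper in the review.

```python
import torch

def wave_residual(u_net, v, x, t):
    """Residual u_tt - v^2 * u_xx of the 1D acoustic wave equation via autograd."""
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = u_net(torch.stack([x, t], dim=-1)).squeeze(-1)
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_tt = torch.autograd.grad(u_t.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_tt - v**2 * u_xx

def fwi_loss(u_net, v, obs, x_rec, t_rec, x_col, t_col, lam=1.0):
    pred = u_net(torch.stack([x_rec, t_rec], dim=-1)).squeeze(-1)
    data_misfit = ((pred - obs) ** 2).mean()                       # seismic data term
    physics = (wave_residual(u_net, v, x_col, t_col) ** 2).mean()  # wave-equation term
    return data_misfit + lam * physics

u_net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
v = torch.tensor(1.5, requires_grad=True)       # velocity to invert
loss = fwi_loss(u_net, v, torch.zeros(8), torch.rand(8), torch.rand(8),
                torch.rand(64), torch.rand(64))
loss.backward()                                 # gradients flow to both u_net and v
```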

Speech Reinforcement Based on G.729A Speech Codec Parameter Under Near-End Background Noise Environments (근단 배경 잡음 환경에서 G.729A 음성부호화기 파라미터에 기반한 새로운 음성 강화 기법)

  • Choi, Jae-Hun;Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea / v.28 no.4 / pp.392-400 / 2009
  • In this paper, we propose an effective speech reinforcement technique based on the ITU-T G.729A CS-ACELP codec for near-end background noise environments. Since the intelligibility of far-end speech for the near-end listener is significantly reduced under near-end noise, a far-end speech reinforcement approach is required to counter this phenomenon. In contrast to conventional speech reinforcement algorithms, we reinforce the excitation signal among the codec parameters received from the far end, based on the G.729A speech codec, under various background noise environments. Specifically, we first estimate the excitation signal of the ambient noise at the near end through the encoder of the G.729A codec, and then reinforce the excitation signal of the far-end speech accordingly. In particular, we propose a novel approach that directly reinforces the excitation signal of the far-end speech within the G.729A decoder. The performance of the proposed algorithm is evaluated with the Comparison Category Rating (CCR) test, a method for the subjective determination of transmission quality in ITU-T P.800, under various noise environments, and shows better performance than conventional SNR recovery methods.
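
A conceptual sketch of reinforcing in the codec parameter domain rather than on PCM samples, as this abstract describes: the far-end excitation is scaled by a gain derived from the near-end noise excitation energy. The gain rule and names are illustrative assumptions, not the G.729A bitstream API; the 40-sample subframe matches G.729's 5 ms subframes at 8 kHz.

```python
import numpy as np

def reinforcement_gain(far_exc, noise_exc, target_snr_db=15.0, g_max=4.0):
    """Gain that lifts the far-end excitation energy toward a target SNR
    against the near-end noise excitation energy."""
    e_far = np.mean(far_exc ** 2) + 1e-12
    e_noise = np.mean(noise_exc ** 2) + 1e-12
    snr_db = 10.0 * np.log10(e_far / e_noise)
    gain = 10.0 ** (max(0.0, target_snr_db - snr_db) / 20.0)
    return min(gain, g_max)          # cap to avoid overdriving the synthesis filter

far_exc = np.random.randn(40) * 0.1  # one 5 ms subframe of decoded excitation (assumed)
noise_exc = np.random.randn(40) * 0.2
reinforced = reinforcement_gain(far_exc, noise_exc) * far_exc
```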

Development of an Anomaly Detection Algorithm for Verification of Radionuclide Analysis Based on Artificial Intelligence in Radioactive Wastes (방사성폐기물 핵종분석 검증용 이상 탐지를 위한 인공지능 기반 알고리즘 개발)

  • Seungsoo Jang;Jang Hee Lee;Young-su Kim;Jiseok Kim;Jeen-hyeng Kwon;Song Hyun Kim
    • Journal of Radiation Industry / v.17 no.1 / pp.19-32 / 2023
  • The amount of radioactive waste is expected to increase dramatically with the decommissioning of nuclear power plants such as Kori-1, the first nuclear power plant in South Korea. Accurate nuclide analysis is necessary to manage radioactive waste safely, but research on verifying radionuclide analysis has yet to be well established. This study aimed to develop a technology that can verify the results of radionuclide analysis using artificial intelligence, and we propose an anomaly detection algorithm for inspecting errors in radionuclide analysis. We used the data from 'Updated Scaling Factors in Low-Level Radwaste' (NP-5077), published by EPRI (Electric Power Research Institute), and resampled them with the SMOTE (Synthetic Minority Oversampling Technique) algorithm for data augmentation. The 149,676 SMOTE-augmented data points were used to train the artificial neural networks (classification and anomaly detection networks), and 324 NP-5077 report data points were used to verify network performance. The anomaly detection algorithm consists of two modules: one detects cases where radioactive waste is incorrectly classified, and the other discriminates abnormal data, such as missing or incorrectly entered data. The classification network is built from fully connected layers, and the anomaly detection network is composed of an encoder and a decoder; the latter operates on the latent vector loaded from the final layer of the classification network. This study also conducted exploratory data analysis (statistics, histograms, correlation, covariance, PCA, k-means clustering, and DBSCAN), which showed that distinguishing the types of radioactive waste is complicated because their data distributions overlap. Despite this complexity, our deep learning-based algorithm can distinguish abnormal data from normal data. Radionuclide analysis was verified using the proposed anomaly detection algorithm, and meaningful results were obtained.
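
A structural sketch (all dimensions assumed) of the two-module design this abstract describes: a fully connected classifier assigns the waste type, and an encoder-decoder reconstructs the classifier's latent vector, with a large reconstruction error flagging abnormal data.

```python
import torch
import torch.nn as nn

n_features, n_classes, latent = 16, 4, 8

classifier_body = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, latent))
classifier_head = nn.Linear(latent, n_classes)

autoencoder = nn.Sequential(           # operates on the classifier's latent vector
    nn.Linear(latent, 4), nn.ReLU(),   # encoder
    nn.Linear(4, latent),              # decoder
)

def is_anomalous(x, threshold=0.1):
    z = classifier_body(x)
    recon_err = ((autoencoder(z) - z) ** 2).mean().item()
    return recon_err > threshold       # threshold would be calibrated on normal data

x = torch.randn(n_features)
print(classifier_head(classifier_body(x)).argmax().item(), is_anomalous(x))
```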

Improved AR-FGS Coding Scheme for Scalable Video Coding (확장형 비디오 부호화(SVC)의 AR-FGS 기법에 대한 부호화 성능 개선 기법)

  • Seo, Kwang-Deok;Jung, Soon-Heung;Kim, Jin-Soo;Kim, Jae-Gon
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.12C / pp.1173-1183 / 2006
  • In this paper, we propose an efficient method for improving the visual quality of AR-FGS (Adaptive Reference FGS), which is adopted as a key scheme in SVC (Scalable Video Coding), the scalable extension of H.264. The standard FGS (Fine Granularity Scalability) adopts AR-FGS, which introduces temporal prediction into the FGS layer by using a high-quality reference signal constructed as the weighted average of the base-layer reconstructed image and the enhancement reference, to improve coding efficiency in the FGS layer. However, when the enhancement stream is truncated at some bitstream position during transmission, the rest of the FGS-layer data is not available at the FGS decoder. Thus, the most noticeable problem of using the enhancement layer in prediction is degraded visual quality caused by drift, due to the mismatch between the reference frame used by the FGS encoder and that used by the decoder. To solve this problem, we exploit the principle of cyclical block coding, which encodes the quantized transform coefficients of the FGS layer in a cyclical manner. Encoding block coefficients cyclically places 'higher-value' bits earlier in the bitstream, so the quantized transform coefficients included in the early coding cycles have a higher probability of being correctly received and decoded than those in later cycles. Therefore, we can minimize the visual quality degradation caused by bitstream truncation by adjusting a weighting factor that controls the contribution of the bitstream produced in each coding cycle when constructing the enhancement-layer reference frame. Simulations show that the improved AR-FGS scheme outperforms standard AR-FGS by up to about 1 dB in reconstructed visual quality.
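
An illustrative sketch of the reference-construction idea in this abstract: the enhancement reference is built from the base-layer reconstruction plus per-cycle enhancement contributions, with smaller weights for later cycles, which are more likely to be truncated. The weight values and shapes below are assumptions for illustration.

```python
import numpy as np

def build_reference(base, enh_by_cycle, cycle_weights):
    """base: base-layer block; enh_by_cycle: residual contribution decoded
    from each coding cycle; cycle_weights: per-cycle contribution factors."""
    ref = base.astype(float)
    for enh, w in zip(enh_by_cycle, cycle_weights):
        ref += w * enh               # early cycles (higher weight) dominate
    return ref

base = np.full((4, 4), 128.0)
cycles = [np.random.randn(4, 4) * s for s in (8.0, 4.0, 2.0)]  # coarse-to-fine residuals
ref = build_reference(base, cycles, cycle_weights=(0.9, 0.6, 0.3))
```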

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.175-197 / 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been actively conducted, showing remarkable results in various fields such as classification, summarization, and generation. Among text analysis tasks, text classification is the most widely used in academia and industry. Text classification includes binary classification with one label out of two classes, multi-class classification with one label out of several classes, and multi-label classification with multiple labels out of several classes. Multi-label classification in particular requires a different training method from binary and multi-class classification because instances carry multiple labels. Moreover, since the size of the prediction target grows with the number of labels and classes, prediction becomes harder and performance improvement becomes difficult. To overcome these limitations, research on label embedding has been actively conducted: (i) the initially given high-dimensional label space is compressed into a low-dimensional latent label space, (ii) the model is trained to predict the compressed label, and (iii) the predicted label is restored to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, since these techniques consider only linear relationships between labels or compress the labels by random transformation, they cannot capture non-linear relationships between labels, and thus cannot create a latent label space that sufficiently preserves the information of the original labels. Recently, there have been increasing attempts to improve performance by applying deep learning to label embedding. Label embedding using an autoencoder, a deep learning model effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding incurs a large amount of information loss when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space; this is related to the vanishing gradient problem that occurs during backpropagation. Skip connections were devised to solve this problem: by adding a layer's input to its output, they prevent gradient loss during backpropagation and enable efficient learning even in deep networks. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies applying them to autoencoders or to the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder to form a low-dimensional latent label space that reflects the information of the high-dimensional label space well. The proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space.
Using these, we conducted an experiment in which the compressed keyword vector in the latent label space is predicted from the paper abstract, and multi-label classification is evaluated after restoring the predicted keyword vector to the original label space. As a result, the accuracy, precision, recall, and F1 score of multi-label classification based on the proposed methodology were far superior to those of traditional multi-label classification methods. This indicates that the low-dimensional latent label space derived through the proposed methodology reflects the information of the high-dimensional label space well, which ultimately improves the performance of multi-label classification itself. In addition, the utility of the proposed methodology was confirmed by comparing its performance across domain characteristics and across different numbers of dimensions of the latent label space.
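
A minimal sketch (layer sizes assumed) of the idea this abstract proposes: an autoencoder compresses the multi-hot label vector, with skip connections added around hidden layers of both the encoder and the decoder to limit information loss and vanishing gradients.

```python
import torch
import torch.nn as nn

class SkipBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Linear(dim, dim)

    def forward(self, x):
        return x + torch.relu(self.fc(x))    # skip connection: input added to output

class SkipLabelAutoencoder(nn.Module):
    def __init__(self, n_labels=1000, latent=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_labels, 256), SkipBlock(256),
                                     nn.Linear(256, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 256), SkipBlock(256),
                                     nn.Linear(256, n_labels))

    def forward(self, y):                     # y: multi-hot label vector
        z = self.encoder(y)                   # low-dimensional latent label space
        return torch.sigmoid(self.decoder(z)), z

model = SkipLabelAutoencoder()
y = (torch.rand(8, 1000) < 0.01).float()      # sparse multi-label targets
y_hat, z = model(y)
loss = nn.functional.binary_cross_entropy(y_hat, y)  # reconstruction objective
```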