• Title/Summary/Keyword: Encoder/Decoder (인코더/디코더)

88 search results

Empirical Study for Automatic Evaluation of Abstractive Summarization by Error-Types (오류 유형에 따른 생성요약 모델의 본문-요약문 간 요약 성능평가 비교)

  • Seungsoo Lee;Sangwoo Kang
    • Korean Journal of Cognitive Science
    • /
    • v.34 no.3
    • /
    • pp.197-226
    • /
    • 2023
  • Abstractive text summarization is a natural language processing task that generates a short summary while preserving the content of a long source text. ROUGE, a lexical-overlap metric, is widely used to evaluate summarization models on generative summarization benchmarks. Although models score highly on ROUGE, studies report that roughly 30% of generated summaries are still inconsistent with their source texts. This paper proposes a methodology for evaluating a summarization model without using a reference (gold) summary. AggreFact is a human-annotated dataset that classifies the types of errors made by neural text summarization models. Among the settings tested, the two cases of generated summaries and of errors occurring throughout the summary showed the highest correlation results. We observed that the proposed evaluation score correlated highly with models fine-tuned from BART and PEGASUS, which are pretrained with large-scale Transformer architectures.
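
To make the reference-free evaluation idea concrete, here is a minimal, illustrative sketch (not the authors' actual metric) that scores each summary by lexical overlap with its source document and then measures Spearman rank correlation against human consistency labels. The scoring rule and the example data are assumptions.

```python
# Minimal sketch: reference-free summary scoring by source-summary overlap,
# then rank correlation against human consistency annotations.
# The scoring rule and the example data are illustrative assumptions,
# not the evaluation metric proposed in the paper.
from scipy.stats import spearmanr

def overlap_score(source: str, summary: str) -> float:
    """Fraction of summary tokens that also appear in the source text."""
    src_tokens = set(source.lower().split())
    sum_tokens = summary.lower().split()
    if not sum_tokens:
        return 0.0
    return sum(t in src_tokens for t in sum_tokens) / len(sum_tokens)

# Hypothetical (source, summary, human consistency label) triples.
examples = [
    ("the port was closed due to the typhoon", "the port closed because of the typhoon", 1.0),
    ("the port was closed due to the typhoon", "the airport reopened after the storm", 0.0),
    ("the coder runs at 12.2 kbps on the dsp", "the coder runs at 12.2 kbps", 1.0),
]

scores = [overlap_score(src, summ) for src, summ, _ in examples]
labels = [label for _, _, label in examples]
rho, p_value = spearmanr(scores, labels)
print(f"Spearman correlation with human labels: {rho:.3f} (p={p_value:.3f})")
```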

Real-time Implementation of the AMR Speech Coder Using OakDSPCore® (OakDSPCore®를 이용한 적응형 다중 비트(AMR) 음성 부호화기의 실시간 구현)

  • 이남일;손창용;이동원;강상원
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.6
    • /
    • pp.34-39
    • /
    • 2001
  • The adaptive multi-rate (AMR) speech coder was adopted as a W-CDMA standard by 3GPP and ETSI. The AMR coder is based on the CELP algorithm, operates at rates ranging from 12.2 kbps down to 4.75 kbps, and is a source-controlled codec that adapts to channel error conditions and traffic loading. In this paper, we implement the DSP software of the AMR coder using the OakDSPCore. The implementation is based on the CSD17C00A chip developed by C&S Technology and is verified for bit-exactness using the AMR speech codec test vectors provided by ETSI. The DSP software requires 20.6 MIPS for the encoder and 2.7 MIPS for the decoder. The memory required by the AMR coder was 21.97 kwords, 6.64 kwords, and 15.1 kwords for the code, data sections, and data ROM, respectively. An actual sound input/output test using a microphone and speaker also demonstrated proper real-time operation without distortion or delay.
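
A minimal sketch of the kind of bit-exactness check described above: comparing the implementation's encoder output against an ETSI reference vector file frame by frame. The file names and packed-frame size are hypothetical.

```python
# Illustrative bit-exactness check between a DSP implementation's output and an
# ETSI reference test vector. File names and the frame size are assumptions.
from pathlib import Path

FRAME_BYTES = 32  # hypothetical packed-frame size for one AMR mode

def compare_bitstreams(produced: Path, reference: Path) -> int:
    """Return the index of the first mismatching frame, or -1 if bit-exact."""
    a = produced.read_bytes()
    b = reference.read_bytes()
    if len(a) != len(b):
        print(f"length mismatch: {len(a)} vs {len(b)} bytes")
    n_frames = min(len(a), len(b)) // FRAME_BYTES
    for i in range(n_frames):
        lo, hi = i * FRAME_BYTES, (i + 1) * FRAME_BYTES
        if a[lo:hi] != b[lo:hi]:
            return i
    return -1

if __name__ == "__main__":
    first_bad = compare_bitstreams(Path("encoder_out.cod"), Path("etsi_ref.cod"))
    print("bit-exact" if first_bad < 0 else f"first mismatch at frame {first_bad}")
```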


Semi-supervised domain adaptation using unlabeled data for end-to-end speech recognition (라벨이 없는 데이터를 사용한 종단간 음성인식기의 준교사 방식 도메인 적응)

  • Jeong, Hyeonjae;Goo, Jahyun;Kim, Hoirin
    • Phonetics and Speech Sciences
    • /
    • v.12 no.2
    • /
    • pp.29-37
    • /
    • 2020
  • Recently, neural network-based deep learning algorithms have dramatically improved performance compared to the classical Gaussian mixture model based hidden Markov model (GMM-HMM) automatic speech recognition (ASR) systems. In addition, research on end-to-end (E2E) speech recognition systems, which integrate the language modeling and decoding processes, has been actively conducted to better exploit the advantages of deep learning. In general, E2E ASR systems consist of multiple layers in an encoder-decoder structure with attention, and they therefore require a large amount of speech-text paired data to achieve good performance. Obtaining speech-text paired data requires a lot of human labor and time, which is a high barrier to building an E2E ASR system. Previous studies have tried to improve E2E ASR performance with relatively small amounts of paired data, but most of them used only speech-only data or only text-only data. In this study, we propose a semi-supervised training method that enables an E2E ASR system to perform well on corpora from different domains by using both speech-only and text-only data. The proposed method adapts effectively to the new domain, showing good performance in the target domain while not degrading much in the source domain.
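
A minimal, runnable sketch of the attention-based encoder-decoder structure mentioned in the abstract, with random features standing in for real speech. The single-layer design, dimensions, and the simplification of attending from token embeddings (rather than decoder states) are assumptions chosen for brevity, not the paper's model.

```python
# Minimal, illustrative attention-based encoder-decoder for ASR-style inputs
# (random features stand in for real speech).
import torch
import torch.nn as nn

class TinyAttentionASR(nn.Module):
    def __init__(self, feat_dim=40, hidden=128, vocab=100):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.embed = nn.Embedding(vocab, hidden)
        self.attn = nn.MultiheadAttention(embed_dim=hidden, kdim=2 * hidden,
                                          vdim=2 * hidden, num_heads=1,
                                          batch_first=True)
        self.decoder = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, speech, tokens):
        enc, _ = self.encoder(speech)            # (B, T_enc, 2*hidden)
        emb = self.embed(tokens)                 # (B, T_dec, hidden)
        # Simplification: attention queries come from token embeddings rather
        # than decoder states, keeping the example to a single forward pass.
        context, _ = self.attn(emb, enc, enc)    # (B, T_dec, hidden)
        dec_out, _ = self.decoder(torch.cat([emb, context], dim=-1))
        return self.out(dec_out)                 # (B, T_dec, vocab)

model = TinyAttentionASR()
speech = torch.randn(2, 200, 40)         # batch of 2, 200 frames, 40-dim features
tokens = torch.randint(0, 100, (2, 12))  # 12 decoder input tokens per utterance
print(model(speech, tokens).shape)       # torch.Size([2, 12, 100])
```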

Development of Fender Segmentation System for Port Structures using Vision Sensor and Deep Learning (비전센서 및 딥러닝을 이용한 항만구조물 방충설비 세분화 시스템 개발)

  • Min, Jiyoung;Yu, Byeongjun;Kim, Jonghyeok;Jeon, Haemin
    • Journal of the Korea Institute for Structural Maintenance and Inspection
    • /
    • v.26 no.2
    • /
    • pp.28-36
    • /
    • 2022
  • As port structures are exposed to various extreme external loads such as wind (typhoons), sea waves, and collisions with ships, it is important to evaluate their structural safety periodically. To monitor port structures, especially rubber fenders, a fender segmentation system using a vision sensor and deep learning has been proposed in this study. For fender segmentation, a new deep learning network has been proposed that improves the encoder-decoder framework by incorporating a receptive field block convolution module, inspired by the eccentricity of the human visual system, into a DenseNet-style architecture. To train the network, images of various fender types such as BP, V, cell, cylindrical, and tire types were collected, and the images were augmented with four methods: elastic distortion, horizontal flip, color jitter, and affine transforms. The proposed algorithm was trained and verified on the collected fender images, and the results showed that the system segmented fenders precisely in real time, achieving a higher IoU (84%) and F1 score (90%) than a conventional segmentation model, U-Net with a VGG16 backbone. The trained network was then applied to real images taken at a port in the Republic of Korea, where the fenders were segmented with high accuracy even with a small dataset.
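
For reference, the IoU and F1 metrics reported above can be computed from binary masks as in the following sketch; the random example masks are placeholders for a predicted and a ground-truth fender mask.

```python
# Illustrative computation of IoU and F1 for binary segmentation masks.
import numpy as np

def iou_and_f1(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    """pred, gt: boolean arrays of the same shape (H, W)."""
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    iou = tp / (tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    return iou, f1

rng = np.random.default_rng(0)
gt = rng.random((256, 256)) > 0.5       # placeholder ground-truth mask
pred = gt.copy()
pred[:, :32] = ~pred[:, :32]            # perturb the prediction slightly
iou, f1 = iou_and_f1(pred, gt)
print(f"IoU: {iou:.3f}, F1: {f1:.3f}")
```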

Efficient CT Image Denoising Using Deformable Convolutional AutoEncoder Model

  • Eon Seung, Seong;Seong Hyun, Han;Ji Hye, Heo;Dong Hoon, Lim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.3
    • /
    • pp.25-33
    • /
    • 2023
  • Noise generated during the acquisition and transmission of CT images degrades image quality, so noise removal is an important preprocessing step in image processing. In this paper, we remove noise using a deformable convolutional autoencoder (DeCAE) model, in which deformable convolution operations replace the conventional convolution operations of the deep learning convolutional autoencoder (CAE) model. Deformable convolution can extract image features over a more flexible region than conventional convolution. The proposed DeCAE model has the same encoder-decoder structure as the existing CAE model, but for efficient noise removal the encoder is composed of deformable convolutional layers while the decoder is composed of conventional convolutional layers. To evaluate the performance of the proposed DeCAE model, experiments were conducted on CT images corrupted by various types of noise, namely Gaussian, impulse, and Poisson noise. The results show that the DeCAE model outperforms traditional filters (the Mean, Median, Bilateral, and NL-means filters) as well as the existing CAE model, both qualitatively and quantitatively in terms of MAE (Mean Absolute Error), PSNR (Peak Signal-to-Noise Ratio), and SSIM (Structural Similarity Index Measure).
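
A minimal sketch of the encoder-decoder arrangement described above, assuming torchvision's DeformConv2d for the deformable encoder layers and plain convolutions for the decoder; the layer counts and channel widths are illustrative assumptions, not the paper's exact architecture.

```python
# Illustrative DeCAE-style autoencoder: deformable convolutions in the encoder,
# conventional convolutions in the decoder.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformConvBlock(nn.Module):
    """A deformable conv whose sampling offsets are predicted by a regular conv."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.offset = nn.Conv2d(in_ch, 2 * k * k, k, padding=k // 2)
        self.deform = DeformConv2d(in_ch, out_ch, k, padding=k // 2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.deform(x, self.offset(x)))

class DeCAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(          # deformable encoder
            DeformConvBlock(1, 32),
            nn.MaxPool2d(2),
            DeformConvBlock(32, 64),
            nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(          # conventional decoder
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

noisy = torch.randn(1, 1, 128, 128)  # placeholder noisy CT slice
print(DeCAE()(noisy).shape)          # torch.Size([1, 1, 128, 128])
```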

Simplification Method for Lightweighting of Underground Geospatial Objects in a Mobile Environment (모바일 환경에서 지하공간객체의 경량화를 위한 단순화 방법)

  • Jong-Hoon Kim;Yong-Tae Kim;Hoon-Joon Kouh
    • Journal of Industrial Convergence
    • /
    • v.20 no.12
    • /
    • pp.195-202
    • /
    • 2022
  • The Underground Geospatial Information Map Management System (UGIMMS) integrates various underground facilities in the underground space into 3D mesh data and supports checking the 3D shape and location of underground facilities in a mobile app. However, because many underground facilities can exist in an area viewed in the app and each is displayed layer by layer, the app can take a long time to render them. In this paper, we propose a deep learning-based K-means vertex clustering algorithm that reduces execution time in the app by reducing the number of vertices in the 3D mesh data, and hence the data size, within a range that does not harm visibility. First, the proposed method obtains refined vertex feature information through a deep learning encoder-decoder based model. Second, the mesh is simplified by grouping similar vertices through K-means clustering of these features. In the experiments, when the vertices of various underground facilities were reduced by 30% with the proposed method, the 3D model was slightly deformed but no parts were missing, so it could still be checked in the app without problems.
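
A minimal sketch of K-means vertex clustering for mesh simplification: vertices are grouped with scikit-learn's KMeans, collapsed to cluster centroids, and the faces are remapped. Raw vertex coordinates stand in here for the learned encoder-decoder features, and the toy random mesh is a placeholder; both are assumptions.

```python
# Illustrative K-means vertex clustering for mesh simplification.
import numpy as np
from sklearn.cluster import KMeans

def simplify_mesh(vertices, faces, n_clusters):
    """Collapse vertices to K-means centroids and remap faces, dropping degenerates."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(vertices)
    new_vertices = km.cluster_centers_            # (n_clusters, 3)
    remap = km.labels_                            # old vertex id -> cluster id
    new_faces = []
    for f in faces:
        g = [remap[i] for i in f]
        if len(set(g)) == 3:                      # drop degenerate triangles
            new_faces.append(g)
    return new_vertices, np.array(new_faces)

rng = np.random.default_rng(0)
vertices = rng.random((200, 3))                   # 200 random vertices
faces = rng.integers(0, 200, size=(300, 3))       # 300 random triangles
simple_v, simple_f = simplify_mesh(vertices, faces, n_clusters=140)  # ~30% fewer vertices
print(vertices.shape, "->", simple_v.shape, "|", faces.shape, "->", simple_f.shape)
```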

A Review of Seismic Full Waveform Inversion Based on Deep Learning (딥러닝 기반 탄성파 전파형 역산 연구 개관)

  • Sukjoon, Pyun;Yunhui, Park
    • Geophysics and Geophysical Exploration
    • /
    • v.25 no.4
    • /
    • pp.227-241
    • /
    • 2022
  • Full waveform inversion (FWI) in seismic data processing is an inversion technique used to estimate the subsurface velocity model for oil and gas exploration. Recently, deep learning (DL) technology has been used increasingly in seismic data processing, and its combination with FWI has attracted remarkable research effort. For example, DL-based data processing techniques have been used to preprocess input data for FWI, and FWI itself can be implemented directly with DL technology. DL-based FWI can be divided into the following approaches: pure data-based methods, physics-based neural networks, encoder-decoder methods, reparameterized FWI, and physics-informed neural networks. In this review, we describe the theory and characteristics of these methods, organizing them in the order of their development. In the early days of DL-based FWI, following the basic data-science approach, a large training data set was prepared and a purely data-driven model was trained to predict the velocity model. The current research trend is to supplement the shortcomings of the pure data-based approach with loss functions built from the seismic data or from physical information in the wave equation itself within deep neural networks. With these developments, DL-based FWI has evolved to require less training data, to alleviate the cycle-skipping problem that is an intrinsic limitation of FWI, and to reduce computation time dramatically. The value of DL-based FWI is expected to increase continually in seismic data processing.
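
A minimal sketch of the pure data-based approach mentioned above: a small encoder-decoder CNN maps a seismic shot gather to a velocity model and is trained with an MSE loss on synthetic pairs. The network size, the matching input/output dimensions, and the random stand-in data are assumptions.

```python
# Illustrative pure data-based DL "FWI": an encoder-decoder CNN maps a shot
# gather (time x receivers) to a velocity model (depth x distance), trained
# with MSE. Shapes and the random training pair are placeholders.
import torch
import torch.nn as nn

net = nn.Sequential(                       # tiny encoder-decoder
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 2, stride=2),
)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

shot_gather = torch.randn(4, 1, 128, 128)           # placeholder seismic data
velocity = torch.rand(4, 1, 128, 128) * 3.0 + 1.5   # placeholder km/s models

for step in range(5):                                # a few training steps
    optimizer.zero_grad()
    loss = loss_fn(net(shot_gather), velocity)
    loss.backward()
    optimizer.step()
    print(f"step {step}: MSE = {loss.item():.4f}")
```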

Latent Shifting and Compensation for Learned Video Compression (신경망 기반 비디오 압축을 위한 레이턴트 정보의 방향 이동 및 보상)

  • Kim, Yeongwoong;Kim, Donghyun;Jeong, Se Yoon;Choi, Jin Soo;Kim, Hui Yong
    • Journal of Broadcast Engineering
    • /
    • v.27 no.1
    • /
    • pp.31-43
    • /
    • 2022
  • Traditional video compression has developed based on hybrid compression methods combining motion prediction, residual coding, and quantization. With the rapid development of artificial neural network technology in recent years, research on neural network-based image and video compression has also progressed rapidly, showing competitive performance against traditional video codecs. This paper presents a new method for improving the performance of such a neural network-based video compression model. Starting from the rate-distortion optimization approach with an autoencoder and entropy model adopted by existing learned video compression models, our method shifts, on the encoder side, those components of the latent information that are difficult for the entropy model to estimate before the compressed latent representation is transmitted to the decoder side, and then compensates for the distortion of the lost information. With this method, the existing neural network-based video compression framework MFVC (Motion Free Video Compression) is improved: the BDBR (Bjøntegaard Delta-Rate) calculated against H.264 reaches nearly twice the bit savings (-27%) of MFVC (-14%). The proposed method has the advantage of being widely applicable not only to MFVC but also to other neural network-based image or video compression models that use latent information and an entropy model.
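
A minimal sketch of the rate-distortion objective that such learned compression models optimize, assuming a factorized Gaussian entropy model over a noisily quantized latent: the rate term estimates the bits of the latent and the distortion term is reconstruction MSE. The entropy model parameters, the lambda weighting, and the placeholder tensors are assumptions, not the paper's specific formulation.

```python
# Illustrative rate-distortion loss: rate from a factorized Gaussian entropy
# model over the (noisily quantized) latent, distortion as reconstruction MSE.
import torch

def rate_distortion_loss(latent, recon, target, mu, sigma, lam=0.01):
    # Training-time quantization surrogate: additive uniform noise.
    y = latent + torch.empty_like(latent).uniform_(-0.5, 0.5)
    # Probability mass of each quantized value under the Gaussian entropy model.
    gauss = torch.distributions.Normal(mu, sigma)
    p = gauss.cdf(y + 0.5) - gauss.cdf(y - 0.5)
    num_pixels = recon.shape[0] * recon.shape[2] * recon.shape[3]
    bpp = -torch.log2(p.clamp_min(1e-9)).sum() / num_pixels   # bits per pixel
    mse = torch.mean((recon - target) ** 2)
    return bpp + lam * 255.0 ** 2 * mse                       # R + lambda * D

# Placeholder tensors standing in for encoder/decoder outputs.
B, C, H, W = 2, 64, 16, 16
latent = torch.randn(B, C, H, W)
mu, sigma = torch.zeros_like(latent), torch.ones_like(latent)
recon = torch.rand(B, 3, 256, 256)
target = torch.rand(B, 3, 256, 256)
print(rate_distortion_loss(latent, recon, target, mu, sigma).item())
```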