• Title/Summary/Keyword: GRU

Search results: 195

Implementation of FPGA-based Accelerator for GRU Inference with Structured Compression (구조적 압축을 통한 FPGA 기반 GRU 추론 가속기 설계)

  • Chae, Byeong-Cheol
    • Journal of the Korea Institute of Information and Communication Engineering, v.26 no.6, pp.850-858, 2022
  • To deploy Gated Recurrent Units (GRU) on resource-constrained embedded devices, this paper presents a reconfigurable FPGA-based GRU accelerator that enables structured compression. First, a dense GRU model is significantly reduced in size by hybrid quantization and structured top-k pruning. Second, the energy consumed by external memory access is greatly reduced by the proposed reuse computing pattern. Finally, the accelerator handles a structured sparse model thanks to an algorithm-hardware co-design workflow. Moreover, inference can be performed flexibly across feature dimensions, sequence lengths, and numbers of layers. Implemented on the Intel DE1-SoC FPGA, the proposed accelerator achieves 45.01 GOPs on a structured sparse GRU network without batching. Compared to CPU and GPU implementations, the low-cost FPGA accelerator achieves 57x and 30x improvements in latency and 300x and 23.44x improvements in energy efficiency, respectively. The proposed accelerator thus serves as an early study toward real-time embedded applications and demonstrates potential for further development.
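The abstract does not spell out the compression pipeline, so the following is only a minimal NumPy sketch of one common reading of "structured top-k pruning" plus 8-bit quantization: each block of rows in a GRU weight matrix keeps only its k strongest columns, and the survivors are quantized symmetrically. The block size, k, and quantization scheme are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def structured_topk_prune(W: np.ndarray, block_rows: int, k: int) -> np.ndarray:
    """Keep, per block of rows, only the k columns with the largest L2 norm.

    One structured-pruning variant; the paper's exact block shape is not
    given in the abstract, so block_rows and k are illustrative.
    """
    Wp = np.zeros_like(W)
    for r in range(0, W.shape[0], block_rows):
        block = W[r:r + block_rows]                  # (block_rows, cols)
        col_norms = np.linalg.norm(block, axis=0)    # score each column
        keep = np.argsort(col_norms)[-k:]            # indices of the top-k columns
        Wp[r:r + block_rows, keep] = block[:, keep]
    return Wp

def quantize_int8(W: np.ndarray):
    """Symmetric per-matrix 8-bit quantization (assumed scheme)."""
    scale = np.abs(W).max() / 127.0 if np.abs(W).max() > 0 else 1.0
    q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
    return q, scale

# Example: compress one GRU gate matrix (hidden size 64, input size 32).
W = np.random.randn(64, 32).astype(np.float32)
W_sparse = structured_topk_prune(W, block_rows=8, k=8)
W_q, s = quantize_int8(W_sparse)
print("kept weights:", np.count_nonzero(W_sparse), "of", W.size)
```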

Development of a Speed Prediction Model for Urban Network Based on Gated Recurrent Unit (GRU 기반의 도시부 도로 통행속도 예측 모형 개발)

  • Hoyeon Kim;Sangsoo Lee;Jaeseong Hwang
    • The Journal of The Korea Institute of Intelligent Transport Systems, v.22 no.1, pp.103-114, 2023
  • This study collected various urban-roadway data to analyze the factors behind travel-speed changes, and a GRU-based short-term travel-speed prediction model was developed using this big data. A baseline model and a double exponential smoothing model were selected for comparison, and prediction errors were evaluated with the RMSE index. The evaluation showed average RMSE values of 7.46 for the baseline model and 5.94 for the double exponential smoothing model, while the GRU model achieved an average RMSE of 5.08. Although results varied across the 15 links, the GRU model produced the smallest errors in most cases, and an additional scatter-plot analysis showed the same pattern. These results indicate that applying a GRU-based model to the generation of travel-speed information for urban roadways can reduce prediction error and speed up model application.
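For reference, the two comparison pieces that are fully determined by their names in this abstract, double exponential smoothing and RMSE, can be sketched as follows. The smoothing parameters and the one-step-ahead usage are assumptions, not the paper's settings.

```python
import numpy as np

def double_exponential_smoothing(series, alpha=0.5, beta=0.3):
    """Holt's double exponential smoothing; returns one-step-ahead forecasts.

    alpha/beta are illustrative; the paper does not report its parameters.
    """
    level, trend = series[0], series[1] - series[0]
    forecasts = [series[0]]
    for x in series[1:]:
        forecasts.append(level + trend)              # forecast made before seeing x
        new_level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return np.array(forecasts)

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

# Example on a toy 5-minute travel-speed series (km/h).
speeds = np.array([42.0, 40.5, 39.0, 37.5, 38.0, 41.0, 43.5, 44.0])
print("DES RMSE:", rmse(speeds, double_exponential_smoothing(speeds)))
```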

Development of Demand Forecasting Model for Public Bicycles in Seoul Using GRU (GRU 기법을 활용한 서울시 공공자전거 수요예측 모델 개발)

  • Lee, Seung-Woon;Kwahk, Kee-Young
    • Journal of Intelligence and Information Systems, v.28 no.4, pp.1-25, 2022
  • After the first confirmed Covid-19 case in Korea in January 2020, interest in personal mobility such as public bicycles, rather than public transportation such as buses and subways, increased. Demand for 'Ddareungi', the public bicycle service operated by the Seoul Metropolitan Government, also grew. In this study, a GRU (Gated Recurrent Unit) demand prediction model was built from the hourly rental history of Seoul's public bicycles (2019~2021). The usefulness of the GRU method was verified on the rental history around Exit 1 of Yeouido Station, Yeongdeungpo-gu, Seoul, and the model was compared with multiple linear regression and recurrent neural network models under the same conditions. In addition to weather factors, the Seoul living population was used as an input variable when developing the model. MAE and RMSE were used as performance indicators. The proposed GRU model showed higher prediction accuracy than the traditional multiple linear regression model and than the recently popular LSTM and Conv-LSTM models, and it was also faster than the LSTM and Conv-LSTM models. By predicting demand for Seoul's public bicycles more quickly and accurately, this study can help address the bicycle relocation problem in the future.
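The abstract does not give the network configuration; below is a minimal PyTorch sketch of the general setup it describes, a GRU regressor whose hourly input concatenates rentals with exogenous features such as weather and the Seoul living population. The feature count, layer sizes, and single-step output are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BikeDemandGRU(nn.Module):
    """GRU regressor over hourly sequences of [rentals, weather..., living population]."""
    def __init__(self, n_features: int = 5, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, num_layers=1, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # next-hour rental count

    def forward(self, x):                  # x: (batch, seq_len, n_features)
        out, _ = self.gru(x)
        return self.head(out[:, -1])       # regress from the last hidden state

# Example: 24-hour windows of 5 features -> next-hour demand.
model = BikeDemandGRU()
x = torch.randn(8, 24, 5)                  # dummy batch
y_hat = model(x)                           # (8, 1)
mae = nn.L1Loss()                          # MAE, one of the paper's metrics
print(y_hat.shape, mae(y_hat, torch.zeros_like(y_hat)).item())
```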

Bidirectional GRU-GRU CRF based Citation Metadata Recognition (Bidirectional GRU-GRU CRF 기반 참고문헌 메타데이터 인식)

  • Kim, Seon-wu;Ji, Seon-young;Seol, Jae-wook;Jeong, Hee-seok;Choi, Sung-pil
    • Annual Conference on Human and Language Technology, 2018.10a, pp.461-464, 2018
  • With the recent rapid growth in the number of academic publications, there has been increasing research on exploiting references, which can serve as a key resource for linking scholarly documents and extracting metadata. This study approaches citation metadata recognition, the task of automatically recognizing and extracting each metadata field from the references of domestic (Korean) journal papers, as a sequence labeling problem. A Bidirectional GRU-GRU CRF model, which performs strongly on sequence labeling among deep learning techniques, was applied to citation metadata recognition, and experiments were conducted on 169,668 references extracted and curated from 144,786 papers published since 2010 in 10 journals. The model achieved an excellent F1 score of 97.21% on the test set.
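The paper's model is a Bidirectional GRU-GRU CRF; the sketch below shows only the token-level bidirectional GRU encoder with per-token tag scores in PyTorch. The character-level GRU and the CRF output layer (available in third-party packages such as pytorch-crf) are omitted, and the tag set and dimensions are assumptions.

```python
import torch
import torch.nn as nn

TAGS = ["B-AUTHOR", "I-AUTHOR", "B-TITLE", "I-TITLE", "B-YEAR", "O"]  # illustrative tag set

class BiGRUTagger(nn.Module):
    """Bidirectional GRU encoder emitting per-token tag scores.

    A CRF layer over these emissions (as in the paper's BiGRU-GRU CRF model)
    would replace the plain per-token argmax used in this sketch.
    """
    def __init__(self, vocab_size=10000, emb=100, hidden=128, n_tags=len(TAGS)):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb, padding_idx=0)
        self.bigru = nn.GRU(emb, hidden, batch_first=True, bidirectional=True)
        self.emit = nn.Linear(2 * hidden, n_tags)

    def forward(self, token_ids):                   # (batch, seq_len)
        h, _ = self.bigru(self.emb(token_ids))
        return self.emit(h)                         # (batch, seq_len, n_tags)

# Example: tag a dummy reference string of 12 tokens.
model = BiGRUTagger()
scores = model(torch.randint(1, 10000, (1, 12)))
print([TAGS[i] for i in scores.argmax(-1)[0].tolist()])
```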


Korean Sentiment Analysis Using Natural Network: Based on IKEA Review Data

  • Sim, YuJeong;Yun, Dai Yeol;Hwang, Chi-gon;Moon, Seok-Jae
    • International Journal of Internet, Broadcasting and Communication, v.13 no.2, pp.173-178, 2021
  • In this paper, we identify a suitable methodology for Korean sentiment analysis through a comparative experiment that looks for the combination of embedding method and neural network model with the highest accuracy and fastest speed. For embeddings, a plain word embedding layer is compared with Word2Vec. For models, the representative neural network architectures CNN, RNN, LSTM, GRU, Bi-LSTM, and Bi-GRU are compared on IKEA review data. The experiments show that Word2Vec with Bi-GRU achieved the highest accuracy and the second-fastest speed, at 94.23% accuracy and 42.30 seconds, while Word2Vec with GRU achieved the third-highest accuracy and the fastest speed, at 92.53% accuracy and 26.75 seconds.
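The best configuration reported above pairs Word2Vec with a Bi-GRU. A minimal PyTorch sketch of that pairing follows: a Bi-GRU sentiment classifier initialized from a pretrained embedding matrix, such as one exported from a gensim Word2Vec model. The dimensions, frozen embeddings, and binary output are assumptions not stated in the abstract.

```python
import torch
import torch.nn as nn

class W2VBiGRUClassifier(nn.Module):
    """Binary sentiment classifier: pretrained embeddings -> Bi-GRU -> sigmoid."""
    def __init__(self, embedding_matrix: torch.Tensor, hidden: int = 64):
        super().__init__()
        # embedding_matrix: (vocab_size, emb_dim), e.g. exported from a Word2Vec model.
        self.emb = nn.Embedding.from_pretrained(embedding_matrix, freeze=True)
        self.bigru = nn.GRU(embedding_matrix.size(1), hidden,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, 1)

    def forward(self, token_ids):                    # (batch, seq_len)
        _, h_n = self.bigru(self.emb(token_ids))     # h_n: (2, batch, hidden)
        h = torch.cat([h_n[0], h_n[1]], dim=-1)      # final forward + backward states
        return torch.sigmoid(self.fc(h)).squeeze(-1) # positive-review probability

# Example with a random 5000 x 100 embedding table standing in for Word2Vec vectors.
model = W2VBiGRUClassifier(torch.randn(5000, 100))
print(model(torch.randint(0, 5000, (4, 30))).shape)  # torch.Size([4])
```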

MAGRU: Multi-layer Attention with GRU for Logistics Warehousing Demand Prediction

  • Ran Tian;Bo Wang;Chu Wang
    • KSII Transactions on Internet and Information Systems (TIIS), v.18 no.3, pp.528-550, 2024
  • Warehousing demand prediction is an essential part of the supply chain, providing a fundamental basis for product manufacturing, replenishment, warehouse planning, and so on. Existing forecasting methods struggle to produce accurate forecasts because warehouse demand is affected by external factors such as holidays and seasons, some aspects such as consumer psychology and producer reputation are difficult to quantify, and the data can fluctuate widely or show no obvious trend cycles. We introduce a new model for warehouse demand prediction called MAGRU (Multi-layer Attention with GRU). In the model, we first apply an embedding operation to the input sequence to quantify the external influences, and then implement an encoder using a GRU with an attention mechanism, whose hidden states capture the essential time-series information. In the decoder, attention is used again to select the key hidden states among all time slices as the input to the decoder GRU network. Experimental results show that this model achieves higher accuracy than RNN, LSTM, GRU, Prophet, XGBoost, and DARNN. Evaluated with mean absolute error (MAE), root mean square error (RMSE), and symmetric mean absolute percentage error (SMAPE), MAGRU's MAE, RMSE, and SMAPE decreased by 7.65%, 10.03%, and 8.87% relative to GRU-LSTM, the current best model for this type of problem.
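The exact MAGRU formulation is not given in the abstract; the sketch below only illustrates the general pattern it describes, additive attention that weights the GRU encoder's hidden states at every time slice to form a context vector for the decoder. All sizes are assumed.

```python
import torch
import torch.nn as nn

class AttentionOverGRU(nn.Module):
    """Additive attention: score each encoder hidden state against the decoder state."""
    def __init__(self, hidden: int):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.Tanh(),
                                   nn.Linear(hidden, 1))

    def forward(self, enc_states, dec_state):
        # enc_states: (batch, T, hidden), dec_state: (batch, hidden)
        T = enc_states.size(1)
        dec_exp = dec_state.unsqueeze(1).expand(-1, T, -1)
        weights = torch.softmax(self.score(torch.cat([enc_states, dec_exp], -1)), dim=1)
        return (weights * enc_states).sum(dim=1)      # context vector: (batch, hidden)

# Example: encode a 30-step demand sequence, attend from the last encoder state.
hidden = 32
encoder = nn.GRU(1, hidden, batch_first=True)
enc_states, h_n = encoder(torch.randn(4, 30, 1))
context = AttentionOverGRU(hidden)(enc_states, h_n[-1])
print(context.shape)                                  # torch.Size([4, 32])
```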

Natural Language Generation Using SC-GRU Encoder-Decoder Model (SC-GRU encoder-decoder 모델을 이용한 자연어생성)

  • Kim, Geonyeong;Lee, Changki
    • Annual Conference on Human and Language Technology, 2017.10a, pp.167-171, 2017
  • Natural language generation is the task of producing sentences that satisfy given conditions; these conditions are usually provided as a condensed, structured meaning representation such as a table, and the task can be applied in any domain where users need to receive the output as natural-language sentences. This paper proposes a natural language generation model based on an SC (Semantically Conditioned)-GRU encoder-decoder. The proposed model achieved a BLEU score of 0.8645 on the SF Hotel data and 0.7570 on the SF Restaurant data.
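The abstract does not define the SC (Semantically Conditioned) mechanism. By analogy with SC-LSTM, one plausible reading is a GRU cell that carries a dialogue-act vector, consumes it through a learned reading gate, and injects it into the hidden-state update; the PyTorch sketch below follows that reading and is not the authors' published formulation.

```python
import torch
import torch.nn as nn

class SCGRUCell(nn.Module):
    """GRU cell with a semantically conditioned dialogue-act (DA) vector.

    Assumed analog of SC-LSTM: a reading gate consumes the DA vector over time,
    and its projection is added to the hidden-state update.
    """
    def __init__(self, input_size, hidden_size, da_size):
        super().__init__()
        self.gru = nn.GRUCell(input_size, hidden_size)
        self.read_gate = nn.Linear(input_size + hidden_size, da_size)
        self.da_proj = nn.Linear(da_size, hidden_size)

    def forward(self, x, h, d):
        r = torch.sigmoid(self.read_gate(torch.cat([x, h], dim=-1)))
        d_next = r * d                        # keep only the not-yet-realized semantics
        h_next = self.gru(x, h) + torch.tanh(self.da_proj(d_next))
        return h_next, d_next

# Example: run one step for a batch of 2 with a 6-slot dialogue-act vector.
cell = SCGRUCell(input_size=16, hidden_size=32, da_size=6)
h, d = torch.zeros(2, 32), torch.ones(2, 6)
h, d = cell(torch.randn(2, 16), h, d)
print(h.shape, d.shape)
```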


Forecasting of erythrocyte sedimentation rate using gated recurrent unit (GRU) neural network (Gated recurrent unit (GRU) 신경망을 이용한 적혈구 침강속도 예측)

  • Lee, Jaejin;Hong, Hyeonji;Song, Jae Min;Yeom, Eunseop
    • Journal of the Korean Society of Visualization, v.19 no.1, pp.57-61, 2021
  • The Westergren method has been widely used to determine the erythrocyte sedimentation rate (ESR), an indicator of acute-phase inflammation, because it is cheap and easy to implement. However, the Westergren method takes a long time, about one hour. In this study, a gated recurrent unit (GRU) neural network was used to reduce the measurement time of ESR evaluation. Sedimentation sequences of the erythrocytes were acquired with a camera, and the data obtained through image processing were used as input to the neural network models. The performance of the proposed models was evaluated with the mean absolute error. The results show that the GRU model provides the most accurate predictions among the compared models within 30 minutes.
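As a minimal illustration of the setup described above, predicting the one-hour ESR from the early portion of an image-derived sedimentation sequence, here is a short PyTorch training sketch with MAE as the loss. The sequence length, sampling rate, and network sizes are placeholders rather than the paper's actual preprocessing.

```python
import torch
import torch.nn as nn

# Dummy data: 64 samples, 30 one-minute sedimentation measurements each,
# standing in for the image-processed erythrocyte interface positions.
x = torch.randn(64, 30, 1)
y = torch.randn(64, 1)                       # one-hour ESR value to predict

class ESRGRU(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(1, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, seq):
        _, h_n = self.gru(seq)
        return self.fc(h_n[-1])

model, loss_fn = ESRGRU(), nn.L1Loss()       # L1 loss = mean absolute error
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(5):                       # a few epochs on the dummy data
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: MAE {loss.item():.3f}")
```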

A Study on Korean Sentiment Analysis Rate Using Neural Network and Ensemble Combination

  • Sim, YuJeong;Moon, Seok-Jae;Lee, Jong-Youg
    • International Journal of Advanced Culture Technology, v.9 no.4, pp.268-273, 2021
  • In this paper, we propose a sentiment analysis model that improves performance on small-scale data and verify it through experiments. To this end, we propose Bagging-Bi-GRU, which combines Bi-GRU, a bidirectional application of GRU (a variant of LSTM (Long Short-Term Memory) with strong performance on sequential data), with bagging, one of the ensemble learning methods. To verify the performance of the proposed model, it is applied to both small-scale and large-scale data and compared with the existing machine learning algorithm Bi-GRU. The results show that the performance of the proposed model improves not only on small data but also on large data.
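The sketch below illustrates only the bagging wrapper around bidirectional GRU classifiers: each base learner is trained on a bootstrap resample, and their predicted probabilities are averaged at inference. The number of estimators, training schedule, and base-model sizes are assumptions.

```python
import torch
import torch.nn as nn

class BiGRUClassifier(nn.Module):
    """Small Bi-GRU binary classifier used as the bagging base learner."""
    def __init__(self, vocab=5000, emb=64, hidden=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.gru = nn.GRU(emb, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, 1)

    def forward(self, ids):
        _, h_n = self.gru(self.emb(ids))
        return torch.sigmoid(self.fc(torch.cat([h_n[0], h_n[1]], -1))).squeeze(-1)

def bagging_fit(x, y, n_models=5, epochs=3):
    """Train each base learner on a bootstrap resample of (x, y)."""
    models = []
    for _ in range(n_models):
        idx = torch.randint(0, len(x), (len(x),))    # sample with replacement
        model = BiGRUClassifier()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(epochs):
            opt.zero_grad()
            loss = nn.functional.binary_cross_entropy(model(x[idx]), y[idx])
            loss.backward()
            opt.step()
        models.append(model)
    return models

def bagging_predict(models, x):
    """Average the member probabilities (soft voting)."""
    with torch.no_grad():
        return torch.stack([m(x) for m in models]).mean(dim=0)

# Example on dummy token ids and binary labels.
x, y = torch.randint(0, 5000, (32, 20)), torch.randint(0, 2, (32,)).float()
ensemble = bagging_fit(x, y)
print(bagging_predict(ensemble, x).shape)            # torch.Size([32])
```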