Title/Summary/Keyword: Deep Networks

Application of deep neural networks for high-dimensional large BWR core neutronics

  • Abu Saleem, Rabie;Radaideh, Majdi I.;Kozlowski, Tomasz
    • Nuclear Engineering and Technology, v.52 no.12, pp.2709-2716, 2020
  • Compositions of large nuclear cores (e.g. boiling water reactors) are highly heterogeneous in terms of fuel composition, control rod insertions, and flow regimes. For this reason, they usually lack a high order of symmetry (e.g. 1/4, 1/8), making it difficult to estimate their neutronic parameters over large spaces of possible loading patterns. A detailed hyperparameter optimization technique (a combination of manual and Gaussian process search) is used to train and optimize deep neural networks for the prediction of three neutronic parameters of the Ringhals-1 BWR unit: power peaking factors (PPF), control rod bank level, and cycle length. Simulation data are generated based on half-symmetry using the PARCS core simulator by shuffling a total of 196 assemblies. The results demonstrate promising performance by the deep networks, with acceptable mean absolute error values for the global maximum PPF (~0.2) and for the radially and axially averaged PPF (~0.05). The mean difference between targets and predictions for the control rod level is about 5% insertion depth. Lastly, cycle length labels are predicted with 82% accuracy. The results also show that 10,000 samples are adequate to capture about 80% of the high-dimensional space, with only minor improvements for larger numbers of samples. These findings confirm the ability of deep neural networks to resolve the high-dimensionality issues of large cores in the nuclear field.
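
As a concrete illustration of the search strategy described above, the sketch below pairs random hyperparameter proposals with a Gaussian-process surrogate that steers sampling toward low validation error. It is a minimal sketch under stated assumptions, not the authors' code: the two-parameter search space, the synthetic validation_error stand-in, and the lower-confidence-bound selection rule are all illustrative.

    # Minimal GP-guided hyperparameter search (illustrative, not the paper's setup).
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def validation_error(hparams):
        # Placeholder: would train a DNN with `hparams` and return validation MAE.
        lr, width = hparams
        return (np.log10(lr) + 3) ** 2 + (width - 128) ** 2 / 1e4

    rng = np.random.default_rng(0)

    def sample():
        # Search space: log-uniform learning rate, hidden-layer width.
        return np.array([10 ** rng.uniform(-5, -1), rng.integers(16, 512)])

    X = np.array([sample() for _ in range(5)])
    y = np.array([validation_error(x) for x in X])
    gp = GaussianProcessRegressor(normalize_y=True)
    for _ in range(20):
        gp.fit(np.log10(X), y)                     # fit surrogate on log scale
        cand = np.array([sample() for _ in range(256)])
        mu, sigma = gp.predict(np.log10(cand), return_std=True)
        best = cand[np.argmin(mu - sigma)]         # lower-confidence-bound pick
        X = np.vstack([X, best])
        y = np.append(y, validation_error(best))
    print("best hyperparameters:", X[np.argmin(y)], "error:", y.min())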

Predicting the Real Estate Price Index Using Deep Learning (딥 러닝을 이용한 부동산가격지수 예측)

  • Bae, Seong Wan;Yu, Jung Suk
    • Korea Real Estate Review, v.27 no.3, pp.71-86, 2017
  • The purpose of this study was to apply the deep learning method to real estate price index prediction and to compare it with the time series analysis method, testing its applicability to real estate market forecasting. Various real estate price indices were predicted using the DNN (deep neural networks) and LSTM (long short-term memory networks) models, both of which draw on the deep learning method, and the ARIMA (autoregressive integrated moving average) model, which is based on the time series analysis method. The results of the study showed the following. First, the predictive power of the deep learning method is superior to that of the time series analysis method. Second, among the deep learning models, the predictability of the DNN model is slightly superior to that of the LSTM model. Third, among the real estate price indices considered, the housing sales price index is the one for which both the deep learning method and the ARIMA model are least reliable. Drawing on the deep learning method, it is hoped that this study will help enhance the accuracy of predictions of real estate market dynamics.
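
To make the forecasting setup concrete, the following sketch frames an index series as sliding windows and fits a small LSTM, the general shape of the DNN/LSTM side of the comparison. The synthetic series, window length, and model sizes are illustrative assumptions, not the paper's configuration.

    # Windowed LSTM forecaster for a univariate index (illustrative sketch).
    import numpy as np
    import torch
    import torch.nn as nn

    series = np.cumsum(np.random.default_rng(1).normal(0.2, 1.0, 400))  # toy index

    def windows(x, lag=12):
        X = np.stack([x[i:i + lag] for i in range(len(x) - lag)])
        y = x[lag:]
        return (torch.tensor(X, dtype=torch.float32).unsqueeze(-1),
                torch.tensor(y, dtype=torch.float32))

    class LSTMForecaster(nn.Module):
        def __init__(self, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(1, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)
        def forward(self, x):
            out, _ = self.lstm(x)
            return self.head(out[:, -1]).squeeze(-1)  # predict next index value

    X, y = windows(series)
    split = int(0.8 * len(X))
    model = LSTMForecaster()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):                              # full-batch training
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X[:split]), y[:split])
        loss.backward()
        opt.step()
    with torch.no_grad():
        print(f"test MAE: {(model(X[split:]) - y[split:]).abs().mean():.3f}")

An ARIMA baseline (e.g. statsmodels) fit on the same train/test split would complete the comparison the abstract reports.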

An Approximate DRAM Architecture for Energy-efficient Deep Learning

  • Nguyen, Duy Thanh;Chang, Ik-Joon
    • Journal of Semiconductor Engineering, v.1 no.1, pp.31-37, 2020
  • We present an approximate DRAM architecture for energy-efficient deep learning. Our key premise is that by bounding memory errors to non-critical information, we can significantly reduce DRAM refresh energy without compromising the recognition accuracy of deep neural networks. To validate this premise, we perform extensive Monte-Carlo simulations for several well-known convolutional neural networks, namely LeNet, ConvNet, and AlexNet, with MNIST, CIFAR-10, and ImageNet as inputs, respectively. We assume that the highest-order 8 bits (in single precision) and 4 bits (in half precision) are protected from retention errors under the proposed architecture, and then randomly inject bit errors into the unprotected bits at various bit-error rates. The recognition accuracies of the above networks are successfully maintained up to bit-error rates on the order of 10^-5. We also simulate DRAM energy during inference of the above networks, where the proposed architecture shows the possibility of considerable energy savings of 10-37.5% of total DRAM energy.
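
The error-injection experiment lends itself to a direct sketch: protect the highest-order bits of each float32 word, flip the remaining bits at a chosen rate, and measure the damage. The protection policy below (top 8 bits, i.e. sign plus exponent) follows the abstract; operating on a bare weight vector instead of a full network is a simplifying assumption.

    # Bit-error injection into unprotected float32 bits (illustrative sketch).
    import numpy as np

    def inject_bit_errors(weights, ber, protected_msbs=8, seed=0):
        rng = np.random.default_rng(seed)
        bits = weights.astype(np.float32).view(np.uint32)
        flips = np.zeros_like(bits)
        for b in range(32 - protected_msbs):        # only unprotected low bits
            mask = rng.random(bits.shape) < ber
            flips |= mask.astype(np.uint32) << np.uint32(b)
        return (bits ^ flips).view(np.float32)

    w = np.random.default_rng(1).normal(0, 0.05, 10000).astype(np.float32)
    for ber in (1e-6, 1e-5, 1e-4):
        w_err = inject_bit_errors(w, ber)
        print(f"BER {ber:.0e}: mean |dw| = {np.abs(w_err - w).mean():.2e}")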

Accurate Human Localization for Automatic Labelling of Human from Fisheye Images

  • Than, Van Pha;Nguyen, Thanh Binh;Chung, Sun-Tae
    • Journal of Korea Multimedia Society, v.20 no.5, pp.769-781, 2017
  • Deep learning networks such as Convolutional Neural Networks (CNNs) perform successfully in many computer vision applications, including image classification and object detection. To implement deep learning networks in embedded systems with limited processing power and memory, the networks may need to be simplified. However, a simplified network cannot learn every possible scene. One realistic strategy for embedded deep learning is to construct a simplified network model optimized for the scene images of the installation place; automatic training is then necessary for commercialization. In this paper, as an intermediate step toward automatic training under fisheye camera environments, we study more precise human localization in fisheye images and propose an accurate human localization method, the Automatic Ground-Truth Labelling Method (AGTLM). AGTLM first localizes candidate human bounding boxes using a GoogLeNet-LSTM approach, verifies them with a GoogLeNet-based CNN, and finally refines them more precisely (tightly) by applying a salient object detection technique. The improvement of the proposed method with respect to accuracy and tightness is shown through several experiments.
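
One step of the pipeline, tightening a candidate box to the salient region inside it, can be sketched directly. As assumptions, the saliency map below is a toy stand-in, and AGTLM's actual GoogLeNet-LSTM detector, verification network, and saliency model are not reproduced.

    # Shrink a candidate box to the salient pixels inside it (illustrative).
    import numpy as np

    def tighten_box(saliency, box, thresh=0.5):
        """Return the bounding box of salient pixels inside (x0, y0, x1, y1)."""
        x0, y0, x1, y1 = box
        patch = saliency[y0:y1, x0:x1] > thresh
        if not patch.any():
            return box                              # nothing salient: keep as-is
        ys, xs = np.nonzero(patch)
        return (int(x0 + xs.min()), int(y0 + ys.min()),
                int(x0 + xs.max()) + 1, int(y0 + ys.max()) + 1)

    sal = np.zeros((100, 100))
    sal[30:70, 40:60] = 1.0                         # toy saliency blob
    print(tighten_box(sal, (20, 20, 90, 90)))       # -> (40, 30, 60, 70)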

Deep learning-based scalable and robust channel estimator for wireless cellular networks

  • Anseok Lee;Yongjin Kwon;Hanjun Park;Heesoo Lee
    • ETRI Journal, v.44 no.6, pp.915-924, 2022
  • In this paper, we present the two-stage scalable channel estimator (TSCE), a scalable and robust deep learning (DL)-based channel estimator for wireless cellular networks, made up of two DL networks to efficiently support different resource allocation sizes and reference signal configurations. Both networks use the transformer, a cutting-edge neural network architecture, as their backbone for accurate estimation. For computation-efficient global feature extraction, we propose window-based and window-averaging-based self-attention. Our results show that TSCE learns wireless propagation channels correctly and outperforms both traditional estimators and baseline DL-based estimators. Additionally, scalability and robustness evaluations reveal that TSCE is more robust in various environments than the baseline DL-based estimators.
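
Window-restricted self-attention, the computation-saving device the abstract mentions, can be sketched by folding fixed-size windows into the batch dimension before a standard attention call. This is a sketch under assumptions: TSCE's exact layer layout and its window-averaging variant are not reproduced here.

    # Self-attention restricted to fixed windows (illustrative sketch).
    import torch
    import torch.nn as nn

    class WindowSelfAttention(nn.Module):
        def __init__(self, dim=64, heads=4, window=16):
            super().__init__()
            self.window = window
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, x):                       # x: (batch, seq, dim)
            b, s, d = x.shape
            w = self.window
            assert s % w == 0, "sequence length must be divisible by window"
            xw = x.reshape(b * s // w, w, d)        # fold windows into batch
            out, _ = self.attn(xw, xw, xw)          # attend within each window
            return out.reshape(b, s, d)

    x = torch.randn(2, 64, 64)                      # e.g. tokens along subcarriers
    print(WindowSelfAttention()(x).shape)           # torch.Size([2, 64, 64])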

Improving the performance for Relation Networks using parameters tuning (파라미터 튜닝을 통한 Relation Networks 성능개선)

  • Lee, Hyun-Ok;Lim, Heui-Seok
    • Proceedings of the Korea Information Processing Society Conference, 2018.05a, pp.377-380, 2018
  • Human reasoning is the ability to examine the conditions given in a problem, think logically about what is needed to solve it, discover rules or regularities within the problem situation, and derive or apply laws mathematically. A key challenge in developing artificial intelligence systems with similar cognitive abilities is endowing them with the capacity to reason about objects and the relations between them from unstructured data. Deep learning has brought enormous progress in solving problems from unstructured data, but it has done so without explicitly considering the relations between entities. Recently published deep neural networks that perform complex relational reasoning over unstructured data offer a promising approach to relational reasoning. The first is RN (Relation Networks), a simple neural network module for relational reasoning; the second is VIN (Visual Interaction Networks), a general-purpose model that predicts the future state of physical objects from visual observations. These networks decompose the world into objects and their relations, and show that neural networks can hold a powerful ability to reason, generalizing to new combinations of objects and relations across scenes that look superficially very different but share underlying relational structure. In this paper, we reproduce and examine the performance of RN (Relation Networks) on the Sort-of-CLEVR dataset and, going further, propose a way to improve the RN model's performance through parameter tuning.
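
The RN core reproduced in the paper is compact, following Santoro et al.: a shared MLP g scores every ordered object pair, the scores are summed, and a readout MLP f produces the answer. The sketch below shows that core with illustrative sizes rather than the tuned parameters of the paper, and omits the question embedding that the full Sort-of-CLEVR model concatenates to each pair.

    # Relation Network core: f(sum over pairs of g(o_i, o_j)) (illustrative sizes).
    import torch
    import torch.nn as nn

    class RelationNetwork(nn.Module):
        def __init__(self, obj_dim=8, hidden=64, out_dim=10):
            super().__init__()
            self.g = nn.Sequential(nn.Linear(2 * obj_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
            self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                   nn.Linear(hidden, out_dim))

        def forward(self, objects):                 # (batch, n_objects, obj_dim)
            b, n, d = objects.shape
            oi = objects.unsqueeze(2).expand(b, n, n, d)
            oj = objects.unsqueeze(1).expand(b, n, n, d)
            pairs = torch.cat([oi, oj], dim=-1)     # every ordered pair (o_i, o_j)
            relations = self.g(pairs).sum(dim=(1, 2))
            return self.f(relations)

    print(RelationNetwork()(torch.randn(4, 6, 8)).shape)  # torch.Size([4, 10])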

A study on Deep Q-Networks based Auto-scaling in NFV Environment (NFV 환경에서의 Deep Q-Networks 기반 오토 스케일링 기술 연구)

  • Lee, Do-Young;Yoo, Jae-Hyoung;Hong, James Won-Ki
    • KNOM Review, v.23 no.2, pp.1-10, 2020
  • Network Function Virtualization (NFV) is a key technology of 5G networks that enables networks to be built and operated flexibly. However, NFV can complicate network management because it creates numerous virtual resources that must be managed. In NFV environments, service function chaining (SFC), composed of virtual network functions (VNFs), is widely used to apply a series of network functions to traffic. It is therefore necessary to dynamically allocate the right amount of computing resources or instances to an SFC to meet service requirements. In this paper, we propose Deep Q-Networks (DQN)-based auto-scaling to operate the appropriate number of VNF instances in an SFC. The proposed approach not only resizes the number of VNF instances in a multi-tier SFC but also selects which tier to scale in response to the dynamic traffic forwarded through the SFC.
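
The scaling decision itself can be sketched as a Q-network that maps an SFC state to discrete actions, chosen epsilon-greedily during training. As assumptions, the state encoding (per-tier load plus instance counts) and the (operation, tier) action set below are illustrative, not the paper's exact formulation, and the replay and training loop are omitted.

    # Epsilon-greedy action selection for DQN-based SFC auto-scaling (sketch).
    import random
    import torch
    import torch.nn as nn

    N_TIERS = 3
    ACTIONS = [("noop", None)] + [(op, t) for t in range(N_TIERS)
                                  for op in ("scale_out", "scale_in")]

    q_net = nn.Sequential(nn.Linear(2 * N_TIERS, 64), nn.ReLU(),
                          nn.Linear(64, len(ACTIONS)))

    def select_action(state, epsilon=0.1):
        """Pick a scaling action: explore randomly or follow max-Q."""
        if random.random() < epsilon:
            return random.choice(ACTIONS)
        with torch.no_grad():
            return ACTIONS[q_net(torch.tensor(state)).argmax().item()]

    state = [0.7, 0.4, 0.9,     # per-tier load
             2.0, 1.0, 3.0]     # per-tier VNF instance counts
    print(select_action(state))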

Prediction of Static and Dynamic Behavior of Truss Structures Using Deep Learning (딥러닝을 이용한 트러스 구조물의 정적 및 동적 거동 예측)

  • Sim, Eun-A;Lee, Seunghye;Lee, Jaehong
    • Journal of Korean Association for Spatial Structures, v.18 no.4, pp.69-80, 2018
  • In this study, an algorithm applying deep learning to truss structures was proposed. Deep learning raises the accuracy of machine learning by building neural networks, which consist of input layers, hidden layers, and output layers. Numerous studies have introduced neural networks but were performed under limited examples and conditions; this study focuses on two- and three-dimensional truss structures to demonstrate the effectiveness of the algorithm, with the training phase divided into training models by dataset size and number of epochs. In each case, specific data values were selected and the error rate was obtained by comparing actual and predicted values; the error rate decreases as the dataset size and the number of hidden layers increase. Consequently, applying the deep learning technique to structural analysis makes it possible to predict results quickly and accurately without using a numerical analysis program.
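
The dataset-size study invites a short sketch: train the same network on growing training sets and watch the test error fall. The truss_response function below is a synthetic stand-in for a structural solver (an assumption, since no solver is included here), so the numbers only illustrate the trend the abstract reports.

    # Error vs. training-set size for a fixed network (illustrative sketch).
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    def truss_response(areas):      # stand-in for e.g. max nodal displacement
        return 1.0 / areas.sum(axis=1) + 0.1 * areas[:, 0]

    X_test = rng.uniform(0.5, 2.0, (500, 10))       # 10 member cross-sections
    y_test = truss_response(X_test)
    for n in (100, 1000, 5000):
        X_train = rng.uniform(0.5, 2.0, (n, 10))
        model = MLPRegressor((64, 64), max_iter=2000, random_state=0)
        model.fit(X_train, truss_response(X_train))
        mae = np.abs(model.predict(X_test) - y_test).mean()
        print(f"n={n:5d}: test MAE = {mae:.4f}")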

Sound Event Detection based on Deep Neural Networks (딥 뉴럴네트워크 기반의 소리 이벤트 검출)

  • Chung, Suk-Hwan;Chung, Yong-Joo
    • The Journal of the Korea institute of electronic communication sciences, v.14 no.2, pp.389-396, 2019
  • In this paper, various architectures of deep neural networks were applied to sound event detection and their performances were compared on a common audio database. The FNN, CNN, RNN, and CRNN were implemented with hyper-parameters optimized for the database as well as for the architecture of each network. Among the implemented networks, the CRNN performed best under all testing conditions, followed by the CNN. Although the RNN has merit in tracking temporal correlations in audio signals, it performed poorly compared with the CNN and CRNN.
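
A typical CRNN for this task stacks convolutions over the mel spectrogram, a recurrent layer over time, and a per-frame sigmoid over event classes. The sketch below shows that shape with illustrative sizes, not the hyper-parameters tuned in the paper.

    # CRNN for frame-wise sound event detection (illustrative sizes).
    import torch
    import torch.nn as nn

    class CRNN(nn.Module):
        def __init__(self, n_mels=64, n_classes=10):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d((1, 4)),               # pool frequency, keep time
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d((1, 4)))
            self.gru = nn.GRU(64 * (n_mels // 16), 64,
                              batch_first=True, bidirectional=True)
            self.head = nn.Linear(128, n_classes)

        def forward(self, spec):                    # (batch, time, n_mels)
            x = self.conv(spec.unsqueeze(1))        # (batch, 64, time, n_mels/16)
            x = x.permute(0, 2, 1, 3).flatten(2)    # (batch, time, features)
            out, _ = self.gru(x)
            return torch.sigmoid(self.head(out))    # per-frame class activity

    print(CRNN()(torch.randn(2, 100, 64)).shape)    # torch.Size([2, 100, 10])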

Generative Adversarial Networks: A Literature Review

  • Cheng, Jieren;Yang, Yue;Tang, Xiangyan;Xiong, Naixue;Zhang, Yuan;Lei, Feifei
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.12, pp.4625-4647, 2020
  • Generative Adversarial Networks (GANs), among the most creative deep learning models of recent years, have achieved great success in computer vision and natural language processing. They use game theory, training a generator against a discriminator to produce the best possible samples. Recently, many deep learning models have been applied to the security field. Following the ideas of "generative" and "adversarial", researchers are trying to apply GANs to the security field as well. This paper presents the development of GANs. We review traditional generative models and typical GAN models, and analyze their applications in natural language processing and computer vision. To show that GAN models are feasible for use in security, we separately review their contributions to defenses in information security, cyber security, and artificial intelligence security. Finally, drawing on the reviewed literature, we provide a broader outlook on this research direction.
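
The adversarial game at the heart of GANs fits in a few lines: a discriminator learns to separate real from generated samples while the generator learns to fool it. The sketch below plays this game on a toy 1-D Gaussian; the architectures and hyperparameters are illustrative, and the non-saturating generator loss is a common practical choice rather than the original minimax form.

    # Minimal GAN on a toy 1-D distribution (illustrative sketch).
    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
    D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(2000):
        real = torch.randn(64, 1) * 0.5 + 2.0       # target distribution N(2, 0.5)
        fake = G(torch.randn(64, 4))
        # Discriminator step: push real toward 1, fake toward 0.
        d_loss = (bce(D(real), torch.ones(64, 1))
                  + bce(D(fake.detach()), torch.zeros(64, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Generator step (fresh forward pass): make D call fakes real.
        g_loss = bce(D(G(torch.randn(64, 4))), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    with torch.no_grad():
        s = G(torch.randn(1000, 4))
    print(f"generated mean {s.mean():.2f}, std {s.std():.2f}")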