• Title/Summary/Keyword: deep neural networks


Physics informed neural networks for surrogate modeling of accidental scenarios in nuclear power plants

  • Federico Antonello;Jacopo Buongiorno;Enrico Zio
    • Nuclear Engineering and Technology / v.55 no.9 / pp.3409-3416 / 2023
  • Licensing the next generation of nuclear reactor designs requires extensive use of Modeling and Simulation (M&S) to investigate system response under many operational conditions, identify possible accidental scenarios, and predict their evolution to undesirable consequences that are to be prevented or mitigated via the deployment of adequate safety barriers. Deep Learning (DL) and Artificial Intelligence (AI) can support M&S computationally by providing surrogates of the complex multi-physics, high-fidelity models used for design. However, DL and AI models are, generally, low-fidelity 'black boxes' that do not guarantee any structure based on physical laws and constraints, and may, thus, lack interpretability and accuracy. This limits their credibility and raises doubts about their adoption for the safety assessment and licensing of novel reactor designs. In this regard, Physics Informed Neural Networks (PINNs) are receiving growing attention for their ability to integrate fundamental physics laws and domain knowledge into neural networks, thus providing credible generalization capabilities and predictions. This paper presents the use of PINNs as surrogate models for simulating accidental scenarios in Nuclear Power Plants (NPPs). A case study of a Loss of Heat Sink (LOHS) accidental scenario in a Nuclear Battery (NB), a unique class of transportable, plug-and-play microreactors, is considered. A PINN is developed and compared with a Deep Neural Network (DNN). The results show the advantages of PINNs in providing accurate solutions, avoiding overfitting and underfitting, and intrinsically ensuring physics-consistent results.
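A minimal sketch of the general PINN idea described in this abstract is given below: the training loss combines a data-fitting term with a penalty on the residual of a governing equation evaluated at collocation points. The lumped thermal ODE dT/dt = -k(T - T_env), the network size, and the coefficients are hypothetical stand-ins, not the paper's actual LOHS model.

    # Sketch of a physics-informed loss, assuming a simple lumped thermal ODE
    # dT/dt = -k * (T - T_env) as a hypothetical stand-in for the LOHS transient.
    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                        nn.Linear(64, 64), nn.Tanh(),
                        nn.Linear(64, 1))
    k, T_env = 0.05, 300.0  # hypothetical coefficients

    def pinn_loss(t_data, T_data, t_colloc):
        # Data term: fit the (sparse) simulated temperature samples
        data_loss = ((net(t_data) - T_data) ** 2).mean()

        # Physics term: penalize the ODE residual at collocation points
        t = t_colloc.clone().requires_grad_(True)
        T = net(t)
        dT_dt = torch.autograd.grad(T, t, grad_outputs=torch.ones_like(T),
                                    create_graph=True)[0]
        physics_loss = ((dT_dt + k * (T - T_env)) ** 2).mean()

        return data_loss + physics_loss  # relative weighting is a tuning choice

    # Usage: an optimizer would minimize pinn_loss over training epochs
    t_data = torch.rand(32, 1)
    T_data = T_env + 100 * torch.exp(-k * t_data)  # toy "high-fidelity" samples
    t_colloc = torch.rand(256, 1)
    loss = pinn_loss(t_data, T_data, t_colloc)
    loss.backward()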

Interworking technology of neural network and data among deep learning frameworks

  • Park, Jaebok;Yoo, Seungmok;Yoon, Seokjin;Lee, Kyunghee;Cho, Changsik
    • ETRI Journal / v.41 no.6 / pp.760-770 / 2019
  • Based on the growing demand for neural network technologies, various neural network inference engines are being developed. However, each inference engine has its own neural network storage format, and there is a growing demand for standardization to solve this problem. This study presents interworking techniques for ensuring the compatibility of neural networks and data among various deep learning frameworks. The proposed technique standardizes the graph expression grammar and learning data storage format using the Neural Network Exchange Format (NNEF) of Khronos. The proposed converter includes a lexical analyzer, a syntax analyzer, and a parser. This NNEF parser converts neural network information into a parsing tree and quantizes the data. To validate the proposed system, we verified that MNIST inference is executed immediately after importing AlexNet's neural network and learned data. Therefore, this study contributes an efficient design technique for a converter that can execute a neural network and learned data in various frameworks regardless of the storage format of each framework.
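The toy sketch below illustrates the two converter roles mentioned in the abstract, parsing a textual graph description into a tree-like structure and quantizing trained weights. The miniature grammar and the int8 scheme are illustrative assumptions only; they are not the actual NNEF syntax or the paper's quantizer.

    # Toy illustration: parse a tiny graph description and quantize weights.
    import numpy as np

    def parse_graph(text):
        """Turn lines like 'conv1 = conv(input, filter=w1)' into node dicts."""
        nodes = []
        for line in text.strip().splitlines():
            name, expr = (s.strip() for s in line.split("=", 1))
            op, args = expr.split("(", 1)
            nodes.append({"name": name, "op": op.strip(),
                          "args": [a.strip() for a in args.rstrip(")").split(",")]})
        return nodes  # a flat "parsing tree" of operation nodes

    def quantize_int8(weights):
        """Linear symmetric quantization of float weights to int8 plus a scale."""
        scale = np.abs(weights).max() / 127.0
        return np.round(weights / scale).astype(np.int8), scale

    graph = parse_graph("""
    conv1 = conv(input, filter=w1)
    relu1 = relu(conv1)
    """)
    q, s = quantize_int8(np.random.randn(3, 3).astype(np.float32))
    print(graph[0]["op"], q.dtype, round(float(s), 4))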

Artificial intelligence, machine learning, and deep learning in women's health nursing

  • Jeong, Geum Hee
    • Women's Health Nursing / v.26 no.1 / pp.5-9 / 2020
  • Artificial intelligence (AI), which includes machine learning and deep learning, has been introduced to nursing care in recent years. The present study reviews the following topics: the concepts of AI, machine learning, and deep learning; examples of AI-based nursing research; the necessity of education on AI in nursing schools; and the areas of nursing care where AI is useful. AI refers to an intelligent system consisting not of a human but of a machine. Machine learning refers to computers' ability to learn without being explicitly programmed. Deep learning is a subset of machine learning that uses artificial neural networks consisting of multiple hidden layers. It is suggested that the educational curriculum should include big data, the concept of AI, algorithms and models of machine learning, models of deep learning, and coding practice. The standard curriculum should be organized by the nursing society. Examples of areas of nursing care where AI is useful are prenatal nursing interventions based on pregnant women's nursing records and AI-based prediction of the risk of delivery according to pregnant women's age. Nurses should be able to cope with the rapidly developing environment of nursing care influenced by AI and should understand how to apply AI in their field. It is time for Korean nurses to take steps to become familiar with AI in their research, education, and practice.

Deep learning-based scalable and robust channel estimator for wireless cellular networks

  • Anseok Lee;Yongjin Kwon;Hanjun Park;Heesoo Lee
    • ETRI Journal / v.44 no.6 / pp.915-924 / 2022
  • In this paper, we present the two-stage scalable channel estimator (TSCE), a deep learning (DL)-based scalable and robust channel estimator for wireless cellular networks, which is made up of two DL networks to efficiently support different resource allocation sizes and reference signal configurations. Both networks use the transformer, one of the cutting-edge neural network architectures, as a backbone for accurate estimation. For computation-efficient global feature extraction, we propose window-based and window-averaging-based self-attention. Our results show that TSCE learns wireless propagation channels correctly and outperforms both traditional estimators and baseline DL-based estimators. Additionally, scalability and robustness evaluations are performed, revealing that TSCE is more robust in various environments than the baseline DL-based estimators.
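The following sketch shows the generic windowed self-attention idea referred to in the abstract: attention is computed only within fixed-size, non-overlapping windows along the sequence, which keeps the cost roughly linear in sequence length. The window size, dimensions, and grouping are assumptions; this is not the TSCE architecture itself.

    # Generic windowed self-attention sketch (not the TSCE network).
    import torch

    def window_self_attention(x, window):
        """x: (batch, seq_len, dim) with seq_len divisible by window."""
        b, n, d = x.shape
        xw = x.reshape(b * n // window, window, d)   # group tokens into windows
        attn = torch.softmax(xw @ xw.transpose(1, 2) / d ** 0.5, dim=-1)
        return (attn @ xw).reshape(b, n, d)          # attend only within each window

    x = torch.randn(2, 64, 32)
    y = window_self_attention(x, window=8)
    print(y.shape)  # torch.Size([2, 64, 32])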

A Study on the Accuracy Improvement of Movie Recommender System Using Word2Vec and Ensemble Convolutional Neural Networks (Word2Vec과 앙상블 합성곱 신경망을 활용한 영화추천 시스템의 정확도 개선에 관한 연구)

  • Kang, Boo-Sik
    • Journal of Digital Convergence / v.17 no.1 / pp.123-130 / 2019
  • One of the most commonly used web recommendation techniques is collaborative filtering, and many studies on collaborative filtering have suggested ways to improve its accuracy. This study proposes a method of movie recommendation using Word2Vec and an ensemble of convolutional neural networks. First, user sentences and movie sentences are constructed from the user, movie, and rating information. The user sentences and movie sentences are input to Word2Vec to obtain user vectors and movie vectors. The user vectors are fed into a user convolution model and the movie vectors into a movie convolution model, and the two convolution models are linked to a fully connected neural network model. Finally, the output layer of the fully connected neural network predicts user movie ratings. Experimental results showed that the accuracy of the proposed technique was improved compared with that of a conventional collaborative filtering technique and that of a technique using Word2Vec and deep neural networks proposed in a similar study.
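A minimal sketch of the first stage described above is shown here: each user's rated movie IDs are treated as a "sentence" so that Word2Vec yields movie embeddings that could then feed the convolutional models. The ratings data, hyperparameters, and the averaging used for user vectors are made-up assumptions.

    # Sketch: movie "sentences" -> Word2Vec embeddings (hypothetical data).
    from gensim.models import Word2Vec

    ratings = {  # hypothetical user -> list of rated movie IDs
        "u1": ["m10", "m42", "m7"],
        "u2": ["m42", "m7", "m99"],
        "u3": ["m10", "m99", "m42"],
    }
    movie_sentences = list(ratings.values())

    w2v = Word2Vec(sentences=movie_sentences, vector_size=32, window=5,
                   min_count=1, sg=1, epochs=50)  # skip-gram over co-rated movies

    movie_vec = w2v.wv["m42"]   # movie embedding
    print(movie_vec.shape)      # (32,)
    # A user vector could then be formed, e.g., by averaging that user's movie vectors.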

A Hierarchical Deep Convolutional Neural Network for Crop Species and Diseases Classification (Deep Convolutional Neural Network(DCNN)을 이용한 계층적 농작물의 종류와 질병 분류 기법)

  • Borin, Min;Rah, HyungChul;Yoo, Kwan-Hee
    • Journal of Korea Multimedia Society / v.25 no.11 / pp.1653-1671 / 2022
  • Crop diseases affect crop production, causing losses of more than 30 billion USD globally. We propose a classification study of crop species and diseases using deep learning algorithms for corn, cucumber, pepper, and strawberry. Our study has three steps of species classification, disease detection, and disease classification, and is noteworthy for using captured images without additional processing. We designed a deep learning approach of convolutional neural networks based on the Mask R-CNN model to classify crop species, and Inception and ResNet models are presented for disease detection and classification, which are performed sequentially. For species classification, we trained the Mask R-CNN network and achieved a loss value of 0.72 for crop species classification and segmentation. For disease detection, InceptionV3 and ResNet101-V2 models were trained for each crop species node on 1,500 images with normal and diseased labels, resulting in accuracies of 0.984, 0.969, 0.956, and 0.962 for corn, cucumber, pepper, and strawberry, respectively, with the InceptionV3 model giving the higher accuracy and AUC. For disease classification, InceptionV3 and ResNet101-V2 models were trained for each crop species node on 1,500 images with diseased labels, resulting in accuracies of 0.995 and 0.992 for corn and cucumber with ResNet101-V2, which gave the higher accuracy and AUC, and 0.940 and 0.988 for pepper and strawberry with InceptionV3.
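The sketch below illustrates the hierarchical routing idea in the abstract: a species-level classifier decides which per-species disease classifier (here built on a pretrained InceptionV3 backbone) handles the image. The class counts, the simplified species stage, and the frozen backbone are assumptions; the paper's actual pipeline uses Mask R-CNN for species segmentation and both InceptionV3 and ResNet101-V2 heads.

    # Sketch of hierarchical species -> disease classification (simplified).
    import tensorflow as tf

    def disease_head(num_diseases):
        base = tf.keras.applications.InceptionV3(weights="imagenet",
                                                 include_top=False,
                                                 input_shape=(299, 299, 3))
        base.trainable = False  # fine-tune later if needed
        x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
        out = tf.keras.layers.Dense(num_diseases, activation="softmax")(x)
        return tf.keras.Model(base.input, out)

    species_names = ["corn", "cucumber", "pepper", "strawberry"]
    disease_models = {s: disease_head(num_diseases=3) for s in species_names}  # hypothetical counts

    def classify(image, species_model):
        # species_model is any classifier returning per-species probabilities
        species = species_names[int(tf.argmax(species_model(image), axis=-1)[0])]
        disease_probs = disease_models[species](image)
        return species, disease_probs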

Performance Comparisons of GAN-Based Generative Models for New Product Development (신제품 개발을 위한 GAN 기반 생성모델 성능 비교)

  • Lee, Dong-Hun;Lee, Se-Hun;Kang, Jae-Mo
    • The Journal of the Convergence on Culture Technology / v.8 no.6 / pp.867-871 / 2022
  • Amid recent rapid changes in trends, design changes have a great impact on the sales of fashion companies, so new designs must be chosen carefully. With the recent development of the artificial intelligence field, various machine learning techniques are widely used in the fashion market to increase consumers' preferences. To contribute to increasing reliability in the development of new products by quantifying abstract concepts such as preference, we generate new images that do not exist using three generative adversarial networks (GANs) and numerically compare the abstract concept of preference using pre-trained convolutional neural networks (CNNs). The models compared are the deep convolutional generative adversarial network (DCGAN), the progressive growing generative adversarial network (PGGAN), and the dual discriminator generative adversarial network (D2GAN), each trained to produce comparable images. The measured degree of similarity was regarded as a preference, and the experimental results showed that D2GAN achieved a relatively high similarity compared with DCGAN and PGGAN.
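A minimal sketch of the preference-scoring step described above: features from a pretrained CNN are extracted for a generated image and a set of reference images, and their cosine similarity serves as a numeric proxy for preference. The choice of VGG16, the input sizes, and the random stand-in images are assumptions; the abstract only states that pre-trained CNNs are used.

    # Sketch: preference as feature similarity under a pretrained CNN (assumed VGG16).
    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()

    def embed(img):                           # img: (N, 3, 224, 224), normalized
        with torch.no_grad():
            return backbone(img).flatten(1)   # (N, feature_dim)

    generated = torch.randn(1, 3, 224, 224)   # stand-in for a GAN output
    references = torch.randn(8, 3, 224, 224)  # stand-in for preferred designs

    preference = F.cosine_similarity(embed(generated), embed(references)).mean()
    print(float(preference))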

Residual Learning Based CNN for Gesture Recognition in Robot Interaction

  • Han, Hua
    • Journal of Information Processing Systems / v.17 no.2 / pp.385-398 / 2021
  • The complexity of deep learning models affects the real-time performance of gesture recognition, thereby limiting the application of gesture recognition algorithms in actual scenarios. Hence, a residual learning neural network based on a deep convolutional neural network is proposed. First, small convolution kernels are used to extract the local details of gesture images. Subsequently, a shallow residual structure is built to share weights, thereby avoiding gradient vanishing or explosion as the network deepens; consequently, the difficulty of model optimisation is reduced. Additional convolutional neural networks are used to accelerate the refinement of deep abstract features based on the spatial importance of the gesture feature distribution. Finally, a fully connected cascade softmax classifier is used to complete the gesture recognition. Compared with a densely connected feature-reuse network, the proposed algorithm optimises feature reuse to avoid performance fluctuations caused by feature redundancy. Experimental results on the IsoGD gesture dataset and the Gesture dataset show that the proposed algorithm affords a fast convergence speed and high accuracy.
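The sketch below shows the generic building block the abstract refers to: a shallow residual unit with small 3x3 convolution kernels and an identity shortcut, so gradients can bypass the convolutions as depth grows. Channel sizes and normalization choices are illustrative, not the paper's exact configuration.

    # Generic residual block sketch (not the paper's exact network).
    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.bn1 = nn.BatchNorm2d(channels)
            self.bn2 = nn.BatchNorm2d(channels)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            out = self.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            return self.relu(out + x)   # identity shortcut eases optimisation

    x = torch.randn(1, 32, 64, 64)
    print(ResidualBlock(32)(x).shape)   # torch.Size([1, 32, 64, 64])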

Extraction of Protein-Protein Interactions based on Convolutional Neural Network (CNN) (Convolutional Neural Network (CNN) 기반의 단백질 간 상호 작용 추출)

  • Choi, Sung-Pil
    • KIISE Transactions on Computing Practices / v.23 no.3 / pp.194-198 / 2017
  • In this paper, we propose a revised Deep Convolutional Neural Network (DCNN) model to extract Protein-Protein Interactions (PPIs) from the scientific literature. The proposed method has the merit of improving performance by applying various global features in addition to the simple lexical features used in conventional relation extraction approaches. In experiments using AIMed, the best-known collection used for PPI extraction, the proposed model achieves state-of-the-art scores (78.0 F-score), the best performance so far in this domain. The paper also shows that, without feature engineering based on complicated language processing, convolutional neural networks with embeddings can achieve superior PPI extraction performance.
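The sketch below illustrates the core idea mentioned in the abstract, a convolutional network over word embeddings for sentence-level interaction classification: word indices are embedded, a 1-D convolution with global max pooling produces a sentence representation, and a sigmoid output decides interaction vs. no interaction. The vocabulary size, dimensions, and omission of the paper's extra global features are assumptions.

    # Sketch: Conv1D over word embeddings for binary PPI classification.
    import tensorflow as tf

    vocab_size, embed_dim, max_len = 20000, 100, 80   # hypothetical values

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(max_len,)),
        tf.keras.layers.Embedding(vocab_size, embed_dim),
        tf.keras.layers.Conv1D(filters=128, kernel_size=3, activation="relu"),
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # interaction vs. none
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.summary()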

Object Tracking Method using Deep Learning and Kalman Filter (딥 러닝 및 칼만 필터를 이용한 객체 추적 방법)

  • Kim, Gicheol;Son, Sohee;Kim, Minseop;Jeon, Jinwoo;Lee, Injae;Cha, Jihun;Choi, Haechul
    • Journal of Broadcast Engineering / v.24 no.3 / pp.495-505 / 2019
  • Typical deep learning algorithms include CNNs (Convolutional Neural Networks), which are mainly used for image recognition, and RNNs (Recurrent Neural Networks), which are mainly used for speech recognition and natural language processing. Among them, CNNs automatically learn features from data through filters that generate feature maps, and they have become mainstream thanks to their excellent performance in image recognition. Various algorithms such as R-CNN have since been developed for object detection to improve upon CNN performance, and algorithms such as YOLO (You Only Look Once) and SSD (Single Shot Multi-box Detector) have been proposed more recently. However, since these deep learning-based detection algorithms evaluate detection success on still images, stable object tracking and detection in video requires a separate tracking capability. Therefore, this paper proposes a method of combining a Kalman filter with a deep learning-based detection network for improved object tracking and detection performance in video. The detection network used was YOLO v2, which is capable of real-time processing, and the proposed method achieved a 7.7% IoU improvement over the baseline YOLO v2 network and a processing speed of 20 fps on FHD images.
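A minimal sketch of the filtering side of this combination is shown below: a constant-velocity Kalman filter predicts and corrects the center of the box produced by the detector each frame. The detector itself (YOLO v2 in the paper) is replaced here by made-up measurements, and the noise settings are assumptions.

    # Constant-velocity Kalman filter over detected box centers (sketch).
    import numpy as np

    dt = 1.0
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                  [0, 0, 1, 0], [0, 0, 0, 1]], float)   # state transition
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # we only measure (x, y)
    Q = np.eye(4) * 0.01                                 # process noise (assumed)
    R = np.eye(2) * 1.0                                  # detection noise (assumed)

    x = np.zeros(4)          # state: [cx, cy, vx, vy]
    P = np.eye(4) * 10.0

    def kalman_step(x, P, z):
        # Predict with the motion model, then correct with the detected center z
        x = F @ x
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        return x, P

    for z in [np.array([100.0, 50.0]), np.array([104.0, 52.0]), np.array([108.5, 54.0])]:
        x, P = kalman_step(x, P, z)
    print(x[:2], x[2:])   # smoothed center and estimated velocity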