• Title/Summary/Keyword: Deep Learning AI

Analysis of the Status of Natural Language Processing Technology Based on Deep Learning (딥러닝 중심의 자연어 처리 기술 현황 분석)

  • Park, Sang-Un
    • The Journal of Bigdata
    • /
    • v.6 no.1
    • /
    • pp.63-81
    • /
    • 2021
  • The performance of natural language processing (NLP) is improving rapidly due to the recent development and application of machine learning and deep learning technologies, and as a result, its field of application is expanding. In particular, as the demand for analysis of unstructured text data increases, interest in NLP is also increasing. However, due to the complexity and difficulty of natural language preprocessing and of machine learning and deep learning theory, the barriers to using NLP remain high. In this paper, to provide an overall understanding of NLP, we examine the main fields of NLP that are currently being actively researched and the current state of major technologies centered on machine learning and deep learning, so that readers can understand and utilize NLP more easily. We first trace the position of NLP within AI (artificial intelligence) through changes in the taxonomy of AI technology. The main areas of NLP, which consist of language modeling, text classification, text generation, document summarization, question answering, and machine translation, are explained together with state-of-the-art deep learning models. In addition, the major deep learning models utilized in NLP are explained, and the data sets and evaluation measures used for performance evaluation are summarized. We hope that researchers who want to utilize NLP for various purposes in their own fields will be able to understand the overall technical status and the main technologies of NLP through this paper.
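
To illustrate one of the surveyed NLP areas, text classification with a pretrained deep learning language model, the following minimal Python sketch uses the Hugging Face transformers pipeline; the checkpoint name and the example sentences are illustrative choices, not taken from the paper.

# Minimal text-classification sketch with a pretrained transformer.
# The checkpoint below is a commonly used public model, chosen for illustration only.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # example checkpoint
)

texts = [
    "The new deep learning model clearly improves translation quality.",
    "The preprocessing pipeline keeps failing on unstructured text.",
]
for text, result in zip(texts, classifier(texts)):
    print(f"{result['label']:>8}  {result['score']:.3f}  {text}")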

A study on estimating the main dimensions of a small fishing boat using deep learning (딥러닝을 이용한 연안 소형 어선 주요 치수 추정 연구)

  • JANG, Min Sung;KIM, Dong-Joon;ZHAO, Yang
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.58 no.3
    • /
    • pp.272-280
    • /
    • 2022
  • The first step in designing a new vessel is to determine the principal dimensions of the design ship, such as length between perpendiculars, beam, draft, and depth. To make this process easier, a database with a large amount of existing ship data and a regression analysis technique are needed. Recently, deep learning, a branch of artificial intelligence (AI), has been used for regression analysis. In this paper, deep learning neural networks are used for regression analysis to find the regression function between the input and output data. To find the neural network structure with the highest accuracy, the errors of neural network structures with varying numbers of layers and nodes are compared. The Python TensorFlow Keras API and the MATLAB Deep Learning Toolbox are used to build the deep learning neural networks. The constructed DNN (deep neural network) is helpful in determining the principal dimensions of the ship and saves much time in the ship design process.
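
As a rough illustration of the regression setup described above, the following Keras sketch maps one known particular to four principal dimensions with a small fully connected network; the input/output variables, layer sizes, and synthetic data are assumptions for demonstration, not the paper's fishing-boat database or its best-performing structure.

# Fully connected regression network sketch (Python TensorFlow Keras API).
import numpy as np
import tensorflow as tf

# Toy stand-in for a small-fishing-boat database: one input feature, four outputs
# (length between perpendiculars, beam, depth, draft), all normalized.
x = np.random.rand(200, 1).astype("float32")
y = np.random.rand(200, 4).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(4),                      # linear outputs for regression
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(x, y, epochs=50, batch_size=16, validation_split=0.2, verbose=0)

# Comparing structures with varying numbers of layers and nodes, as in the paper,
# amounts to looping over candidate architectures and tracking the validation error.
print(model.evaluate(x, y, verbose=0))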

Deep Learning Model Validation Method Based on Image Data Feature Coverage (영상 데이터 특징 커버리지 기반 딥러닝 모델 검증 기법)

  • Lim, Chang-Nam;Park, Ye-Seul;Lee, Jung-Won
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.9
    • /
    • pp.375-384
    • /
    • 2021
  • Deep learning techniques have been proven to have high performance in image processing and are applied in various fields. The most widely used methods for validating a deep learning model are the holdout method, k-fold cross-validation, and the bootstrap method. These legacy methods consider the balance of the ratio between classes when dividing the data set, but they do not consider the ratio of the various features that exist within the same class. If these features are not considered, validation results may be biased toward some features. Therefore, we propose a deep learning model validation method based on data feature coverage for image classification that improves on the legacy methods. The proposed technique introduces a data feature coverage measure that numerically captures how well the training data set used for training and validation of the deep learning model, and the evaluation data set, reflect the features of the entire data set. With this method, the data set can be divided while ensuring coverage of all features of the entire data set, and the evaluation result of the model can be analyzed in units of feature clusters. As a result, by providing feature cluster information for the evaluation result of the trained model, feature information of the data that affects the trained model can be provided.
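
The following Python sketch illustrates the general idea of coverage-aware splitting under stated assumptions: image features are grouped with k-means, the split is stratified over (class, feature-cluster) pairs, and coverage is reported as the fraction of feature clusters represented in each split. It is a simplified stand-in, not the authors' exact coverage measure.

# Coverage-aware data splitting sketch.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

features = np.random.rand(1000, 64)            # stand-in for extracted image features
labels = np.random.randint(0, 10, size=1000)   # class labels

n_clusters = 8
clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)

# Stratify on (class, feature-cluster) pairs so every cluster appears in both splits.
strata = labels * n_clusters + clusters
train_idx, test_idx = train_test_split(
    np.arange(len(features)), test_size=0.2, random_state=0, stratify=strata)

def cluster_coverage(idx):
    """Fraction of feature clusters represented in the given subset."""
    return len(np.unique(clusters[idx])) / n_clusters

print("train coverage:", cluster_coverage(train_idx))
print("test coverage:", cluster_coverage(test_idx))
# Evaluation results can likewise be grouped by clusters[test_idx] to analyze
# the trained model in units of feature clusters.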

Analysis and Study for Appropriate Deep Neural Network Structures and Self-Supervised Learning-based Brain Signal Data Representation Methods (딥 뉴럴 네트워크의 적절한 구조 및 자가-지도 학습 방법에 따른 뇌신호 데이터 표현 기술 분석 및 고찰)

  • Won-Jun Ko
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.19 no.1
    • /
    • pp.137-142
    • /
    • 2024
  • Recently, deep learning has become the de facto standard method in the area of medical data representation. However, deep learning inherently requires a large amount of training data, which poses a challenge for its direct application in the medical field, where acquiring large-scale data is not straightforward. Brain signal modalities suffer from this problem as well, owing to their high variability. Research has therefore focused on designing deep neural network structures capable of effectively extracting the spectro-spatio-temporal characteristics of brain signals, or on employing self-supervised learning methods to pre-learn the neurophysiological features of brain signals. This paper analyzes the methodologies used to handle small-scale data in emerging fields such as brain-computer interfaces and brain signal-based state prediction, and presents future directions for these technologies. First, this paper examines deep neural network structures for representing brain signals, then analyzes self-supervised learning methodologies aimed at efficiently learning the characteristics of brain signals. Finally, the paper discusses key insights and future directions for deep learning-based brain signal analysis.
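
As a minimal illustration of self-supervised pre-training on brain signals, the sketch below trains a small one-dimensional convolutional encoder on a masked-segment reconstruction pretext task; the channel count, window length, and encoder structure are assumptions, not the architectures analyzed in the paper.

# Masked-segment reconstruction as a self-supervised pretext task for brain signals.
import numpy as np
import tensorflow as tf

n_channels, n_samples = 22, 400                       # e.g. 22 EEG channels, 400 time samples
x = np.random.randn(256, n_samples, n_channels).astype("float32")

# Zero out a random contiguous time segment; the network learns to restore it.
x_masked = x.copy()
for i in range(len(x_masked)):
    start = np.random.randint(0, n_samples - 50)
    x_masked[i, start:start + 50, :] = 0.0

inputs = tf.keras.Input(shape=(n_samples, n_channels))
h = tf.keras.layers.Conv1D(32, 25, padding="same", activation="relu")(inputs)  # temporal filtering
h = tf.keras.layers.Conv1D(32, 1, activation="relu")(h)                        # cross-channel (spatial) mixing
outputs = tf.keras.layers.Conv1D(n_channels, 25, padding="same")(h)            # reconstruction head
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.fit(x_masked, x, epochs=3, batch_size=32, verbose=0)
# After pre-training, the convolutional encoder can be reused and fine-tuned on the
# small labeled data set of the downstream brain-signal task.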

Implementation of an alarm system with AI image processing to detect whether a helmet is worn or not and a fall accident (헬멧 착용 여부 및 쓰러짐 사고 감지를 위한 AI 영상처리와 알람 시스템의 구현)

  • Yong-Hwa Jo;Hyuek-Jae Lee
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.23 no.3
    • /
    • pp.150-159
    • /
    • 2022
  • This paper presents an implementation that extracts the image objects of several workers active in an industrial field and, through individual real-time image analysis, detects whether each worker is wearing a helmet and whether a fall accident has occurred. To detect the image objects of workers, YOLO, a deep learning-based computer vision model, was used, and helmet wearing was determined by applying a model trained on 5,000 helmet training images to the extracted worker images. To detect a fall accident, the position of the head was tracked using the real-time Pose body-tracking algorithm of MediaPipe, and its movement speed was calculated to determine whether the person had fallen. In addition, to increase the reliability of the fall-accident decision, a method of inferring the posture of an object from the size of its YOLO bounding box was proposed and implemented. Finally, a Telegram API bot and a Firebase DB server were implemented to provide a notification service for administrators.
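
The fall-detection idea can be sketched as follows: track the head landmark with MediaPipe Pose and flag a fall when its downward speed exceeds a threshold. The threshold, frame rate, and video source are illustrative assumptions, and the YOLO-based worker extraction, bounding-box posture check, and Telegram/Firebase notification steps are omitted.

# Head-speed fall detection sketch using MediaPipe Pose.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
cap = cv2.VideoCapture(0)                        # or a path to a recorded site video
prev_y, fps, speed_threshold = None, 30.0, 0.5   # speed in normalized image units per second

with mp_pose.Pose() as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # Landmark y is normalized to [0, 1] and increases downward in the image.
            head_y = results.pose_landmarks.landmark[mp_pose.PoseLandmark.NOSE].y
            if prev_y is not None and (head_y - prev_y) * fps > speed_threshold:
                print("Possible fall detected")  # in the paper, an alarm is sent via Telegram
            prev_y = head_y
cap.release()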

Application of Deep Learning to Solar Data: 3. Generation of Solar images from Galileo sunspot drawings

  • Lee, Harim;Moon, Yong-Jae;Park, Eunsu;Jeong, Hyunjin;Kim, Taeyoung;Shin, Gyungin
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.44 no.1
    • /
    • pp.81.2-81.2
    • /
    • 2019
  • We develop an image-to-image translation model, a popular deep learning method based on conditional Generative Adversarial Networks (cGANs), to generate solar magnetograms and EUV images from sunspot drawings. For this, we train the model using pairs of sunspot drawings from the Mount Wilson Observatory (MWO) and their corresponding SDO/HMI magnetograms and SDO/AIA EUV images (512 by 512) from January 2012 to September 2014. We test the model by comparing pairs of actual SDO images (magnetograms and EUV images) and the corresponding AI-generated ones from October to December 2014. Our results show that the bipolar structures and coronal loop structures of the AI-generated images are consistent with those of the original ones. We find that their unsigned magnetic fluxes correlate well with those of the original ones, with a good correlation coefficient of 0.86. We also compute pixel-to-pixel correlations between the EUV images and the AI-generated ones. The average correlations of 92 test samples for several SDO lines are very good: 0.88 for AIA 211, 0.87 for AIA 1600, and 0.93 for AIA 1700. These facts imply that the AI-generated EUV images are quite similar to the AIA ones. Applying this model to the Galileo sunspot drawings of 1612, we generate HMI-like magnetograms and AIA-like EUV images of the sunspots. This application will be used to generate solar images from historical sunspot drawings.
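
The pixel-to-pixel comparison reported above can be reproduced in a few lines; the arrays below are random placeholders standing in for an actual SDO image and its AI-generated counterpart.

# Pixel-to-pixel Pearson correlation between a real and a generated image.
import numpy as np

real = np.random.rand(512, 512)        # e.g. an actual AIA EUV image (512 by 512)
generated = np.random.rand(512, 512)   # the cGAN output for the same time

r = np.corrcoef(real.ravel(), generated.ravel())[0, 1]
print(f"pixel-to-pixel correlation: {r:.3f}")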

SAR Recognition of Target Variants Using Channel Attention Network without Dimensionality Reduction (차원축소 없는 채널집중 네트워크를 이용한 SAR 변형표적 식별)

  • Park, Ji-Hoon;Choi, Yeo-Reum;Chae, Dae-Young;Lim, Ho
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.25 no.3
    • /
    • pp.219-230
    • /
    • 2022
  • In implementing a robust automatic target recognition (ATR) system with synthetic aperture radar (SAR) imagery, one of the most important issues is the accurate classification of target variants, which are the same targets with different serial numbers, configurations, versions, etc. In this paper, a deep learning network with channel attention modules is proposed to cope with the recognition problem for target variants, based on the previous research finding that the channel attention mechanism selectively emphasizes the features useful for target recognition. Unlike other existing attention methods, this paper employs channel attention modules without dimensionality reduction along the channel direction, so that the direct correspondence between feature map channels is preserved and the features valuable for recognizing SAR target variants can be effectively derived. Experiments with the public benchmark dataset demonstrate that the proposed scheme is superior to networks with other existing channel attention modules.
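
A channel attention module without dimensionality reduction can be sketched as below, in the spirit of ECA-style attention: a small one-dimensional convolution over the pooled channel descriptor replaces the squeeze-and-excitation bottleneck, so each channel keeps a direct correspondence with its attention weight. The kernel size and feature-map shape are assumptions, not the paper's exact module.

# Channel attention without channel-wise dimensionality reduction (Keras sketch).
import tensorflow as tf

def channel_attention_no_reduction(x, kernel_size=3):
    """x: feature map of shape (batch, H, W, C); returns the channel-reweighted map."""
    gap = tf.keras.layers.GlobalAveragePooling2D()(x)               # (batch, C)
    w = tf.keras.layers.Reshape((-1, 1))(gap)                       # (batch, C, 1)
    w = tf.keras.layers.Conv1D(1, kernel_size, padding="same")(w)   # cross-channel interaction, no bottleneck
    w = tf.keras.layers.Activation("sigmoid")(w)
    w = tf.keras.layers.Reshape((1, 1, -1))(w)                      # (batch, 1, 1, C)
    return x * w                                                    # reweight each channel

inputs = tf.keras.Input(shape=(64, 64, 32))     # e.g. a SAR feature map
outputs = channel_attention_no_reduction(inputs)
model = tf.keras.Model(inputs, outputs)
model.summary()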

Comparative Study of Deep Learning Model for Semantic Segmentation of Water System in SAR Images of KOMPSAT-5 (아리랑 5호 위성 영상에서 수계의 의미론적 분할을 위한 딥러닝 모델의 비교 연구)

  • Kim, Min-Ji;Kim, Seung Kyu;Lee, DoHoon;Gahm, Jin Kyu
    • Journal of Korea Multimedia Society
    • /
    • v.25 no.2
    • /
    • pp.206-214
    • /
    • 2022
  • The extent of damage from floods and droughts can be measured by identifying changes in the extent of water systems, and satellite images are used to grasp such changes effectively at a glance. KOMPSAT-5 uses Synthetic Aperture Radar (SAR) to capture images regardless of weather conditions such as clouds and rain. In this paper, various deep learning models are applied to perform semantic segmentation of the water system in these SAR images, and their performance is compared. The models used are U-Net, V-Net, U2-Net, UNet 3+, PSPNet, DeepLab-V3, DeepLab-V3+, and PAN. In addition, performance was compared when the data were augmented by applying elastic deformation to the existing SAR image dataset. As a result, without data augmentation, U-Net performed best, with an IoU of 97.25% and a pixel accuracy of 98.53%. With data augmentation, DeepLab-V3 showed an IoU of 95.15% and V-Net showed the best pixel accuracy of 96.86%.
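
The two reported metrics, IoU and pixel accuracy, can be computed directly from a binary water-system mask and a model prediction, as in the following sketch; the arrays are random placeholders.

# IoU and pixel accuracy for binary water-system segmentation.
import numpy as np

ground_truth = np.random.randint(0, 2, size=(512, 512))   # 1 = water, 0 = background
prediction = np.random.randint(0, 2, size=(512, 512))

intersection = np.logical_and(ground_truth == 1, prediction == 1).sum()
union = np.logical_or(ground_truth == 1, prediction == 1).sum()
iou = intersection / union if union > 0 else 0.0
pixel_accuracy = (ground_truth == prediction).mean()

print(f"IoU: {iou:.4f}, pixel accuracy: {pixel_accuracy:.4f}")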

Comparative Analysis of Baseflow Separation using Conventional and Deep Learning Techniques

  • Yusuff, Kareem Kola;Shiksa, Bastola;Park, Kidoo;Jung, Younghun
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2022.05a
    • /
    • pp.149-149
    • /
    • 2022
  • An accurate quantitative evaluation of the baseflow contribution to streamflow is imperative for addressing seasonal drought vulnerability, flood occurrence, and groundwater management concerns, and for efficient and sustainable water resources management in watersheds. Several baseflow separation algorithms using recursive filters, graphical methods, and tracer or chemical balance have been developed, but the resulting baseflow outputs show wide variations, making it hard to determine the best separation technique. Therefore, in line with the current global shift toward artificial intelligence (AI) in water resources, this study compares the performance of deep learning models with conventional hydrograph separation techniques to quantify the baseflow contribution to streamflow of the Piney River watershed, Tennessee, from 2001 to 2021. Streamflow values are obtained from USGS station 03602500 and modeled to generate Baseflow Index (BFI) values using the Web-based Hydrograph Analysis Tool (WHAT) model. Annual and seasonal baseflow outputs from the traditional separation techniques are compared with the results of Long Short-Term Memory (LSTM) and simple Gated Recurrent Unit (GRU) models. The GRU model gave optimal BFI values during the four seasons, with average NSE = 0.98, KGE = 0.97, and r = 0.89, and future baseflow volumes are predicted. AI offers an easier and more accurate approach to groundwater management and surface runoff modeling for creating effective water policy frameworks for disaster management.
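
The NSE and KGE scores reported for the GRU model follow the standard Nash-Sutcliffe and Kling-Gupta efficiency formulas, sketched below; the observed and simulated baseflow series are placeholders.

# Nash-Sutcliffe efficiency (NSE) and Kling-Gupta efficiency (KGE).
import numpy as np

observed = np.random.rand(365) * 100               # e.g. daily baseflow from WHAT
simulated = observed + np.random.randn(365) * 5    # e.g. GRU-predicted baseflow

def nse(obs, sim):
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    r = np.corrcoef(obs, sim)[0, 1]       # linear correlation
    alpha = sim.std() / obs.std()         # variability ratio
    beta = sim.mean() / obs.mean()        # bias ratio
    return 1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

print(f"NSE = {nse(observed, simulated):.3f}, KGE = {kge(observed, simulated):.3f}")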

Deep learning-based AI constitutive modeling for sandstone and mudstone under cyclic loading conditions

  • Luyuan Wu;Meng Li;Jianwei Zhang;Zifa Wang;Xiaohui Yang;Hanliang Bian
    • Geomechanics and Engineering
    • /
    • v.37 no.1
    • /
    • pp.49-64
    • /
    • 2024
  • Rocks undergoing repeated loading and unloading over an extended period, for example due to earthquakes, human excavation, and blasting, may gradually accumulate stress and deformation within the rock mass, eventually reaching an unstable state. In this study, a CNN-CCM is proposed to model this mechanical behavior. The structure and hyperparameters of the CNN-CCM include 5 Conv2D layers, 4 MaxPooling2D layers, 4 Dense layers, a learning rate of 0.001, 50 epochs, a batch size of 64, and a dropout rate of 0.5. The training and validation data for deep learning comprise 71 rock samples and 122,152 data points. The AI rock constitutive model learned by the CNN-CCM can predict strain values (ε1) from mass (M), axial stress (σ1), density (ρ), cycle number (N), confining pressure (σ3), and Young's modulus (E). Five evaluation indicators, R2, MAPE, RMSE, MSE, and MAE, yield respective values of 0.929, 16.44%, 0.954, 0.913, and 0.542, illustrating the good predictive performance and generalization ability of the model. Finally, interpreting the AI rock constitutive model using the SHAP explanation method reveals that feature importance follows the order N > M > σ1 > E > ρ > σ3. Positive SHAP values indicate positive effects on predicting strain ε1 for N, M, σ1, and σ3, while negative SHAP values indicate negative effects. For E, a positive value has a negative effect on predicting strain ε1, consistent with the influence patterns of conventional physical rock constitutive equations. The present study offers a novel approach to the investigation of the mechanical constitutive model of rocks under cyclic loading and unloading conditions.
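
A Keras model matching the listed hyperparameter counts (5 Conv2D, 4 MaxPooling2D, 4 Dense layers, dropout 0.5, Adam with learning rate 0.001, 50 epochs, batch size 64) might look like the sketch below; how the six input quantities are arranged into a 2D tensor is not specified here, so the input shape, filter counts, and synthetic data are illustrative assumptions.

# CNN regression sketch with the hyperparameter counts listed in the abstract.
import numpy as np
import tensorflow as tf

x = np.random.rand(1024, 64, 6, 1).astype("float32")    # assumed layout: 64 load steps x 6 features
y = np.random.rand(1024, 1).astype("float32")           # target strain (epsilon_1)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 6, 1)),
    tf.keras.layers.Conv2D(16, (3, 3), padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 1)),
    tf.keras.layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 1)),
    tf.keras.layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 1)),
    tf.keras.layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 1)),
    tf.keras.layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                            # predicted strain value
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss="mse")
model.fit(x, y, epochs=50, batch_size=64, validation_split=0.2, verbose=0)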