• Title/Summary/Keyword: deep-learning


Analysis of Temporal Variations in Marine Debris using Raspberry Pi and YOLOv5 (라즈베리파이와 YOLOv5를 이용한 해양쓰레기 시계열 변화량 분석)

  • Bo-Ram, Kim;Mi-So, Park;Jea-Won, Kim;Ye-Been, Do;Se-Yun, Oh;Hong-Joo, Yoon
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.6 / pp.1249-1258 / 2022
  • Marine debris is defined as material that is intentionally or inadvertently left on the shore, or introduced or discharged into the ocean, and that has or is likely to have a harmful effect on the marine environment. In this study, object detection was used as an efficient way to identify the quantity of marine debris and analyze how it changes over time. The study area is Yuho Mongdol Beach in the northeastern part of Geoje Island, and the change was analyzed through images collected at 15-minute intervals for 32 days, from September 12 to October 14, 2022. Marine debris detection using YOLOv5x, a one-stage object detection model, achieved mAP 0.869 for plastic bottles and mAP 0.862 for styrofoam buoys. The results showed that marine debris decreased sharply at roughly 8-day intervals, and that the quantity of styrofoam buoys was about three times larger and showed a larger range of change.
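The abstract describes counting YOLOv5x detections of plastic bottles and styrofoam buoys over a time-lapse image series. The following is a minimal sketch of such per-class counting, assuming hypothetical weight and directory names (`debris_yolov5x.pt`, `timelapse_images/`) and class labels; it is not the authors' code.

```python
# Hypothetical sketch: count detected debris per class over a time-lapse series
# with a custom-trained YOLOv5 model loaded through the official torch.hub entry.
from pathlib import Path
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="debris_yolov5x.pt")

counts = {}  # timestamp -> {class_name: count}
for img_path in sorted(Path("timelapse_images").glob("*.jpg")):
    results = model(str(img_path))              # run inference on one frame
    detections = results.pandas().xyxy[0]       # DataFrame: one row per detected box
    per_class = detections["name"].value_counts().to_dict()
    counts[img_path.stem] = {
        "plastic_bottle": per_class.get("plastic_bottle", 0),
        "styrofoam_buoy": per_class.get("styrofoam_buoy", 0),
    }

for timestamp, c in counts.items():
    print(timestamp, c)
```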

Artificial Intelligence for Assistance of Facial Expression Practice Using Emotion Classification (감정 분류를 이용한 표정 연습 보조 인공지능)

  • Dong-Kyu, Kim;So Hwa, Lee;Jae Hwan, Bong
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.6 / pp.1137-1144 / 2022
  • In this study, an artificial intelligence (AI) system was developed to help users practice facial expressions that convey emotions. The developed AI feeds multimodal inputs, consisting of sentences and facial images, into deep neural networks (DNNs). The DNNs calculate the similarity between the emotion predicted from the sentence and the emotion predicted from the facial image. The user practices facial expressions based on the situation given by the sentence, and the AI provides numerical feedback based on this similarity. A ResNet34 network was trained on the public FER2013 dataset to predict emotions from facial images. To predict emotions from sentences, a KoBERT model was fine-tuned via transfer learning on the conversational speech dataset for emotion classification released publicly by AIHub. The DNN that predicts emotions from facial images reached 65% accuracy, which is comparable to human emotion classification ability, and the DNN that predicts emotions from sentences achieved 90% accuracy. The performance of the developed AI was evaluated through facial expression experiments in which ordinary people participated.
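The core feedback idea, comparing the emotion predicted from the face with the emotion predicted from the sentence, can be sketched as a similarity between two probability distributions. The snippet below is a minimal illustration assuming dummy logits over a shared emotion label set; the trained ResNet34 and KoBERT classifiers from the paper are only stand-ins here.

```python
# Minimal sketch: score how well a practiced facial expression matches the
# emotion implied by the prompt sentence, via cosine similarity of the two
# predicted emotion distributions.
import torch
import torch.nn.functional as F

def expression_feedback(face_logits: torch.Tensor, text_logits: torch.Tensor) -> float:
    """Return a 0-100 score from the similarity of the two emotion distributions."""
    face_probs = F.softmax(face_logits, dim=-1)
    text_probs = F.softmax(text_logits, dim=-1)
    sim = F.cosine_similarity(face_probs, text_probs, dim=-1)  # non-negative here
    return float(sim.clamp(0, 1) * 100)

# Example with dummy logits over 7 emotion classes (e.g., the FER2013 label set).
face_logits = torch.randn(7)
text_logits = torch.randn(7)
print(f"feedback score: {expression_feedback(face_logits, text_logits):.1f}")
```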

Prospect of future water resources in the basins of Chungju Dam and Soyang-gang Dam using a physics-based distributed hydrological model and a deep-learning-based LSTM model (물리기반 분포형 수문 모형과 딥러닝 기반 LSTM 모형을 활용한 충주댐 및 소양강댐 유역의 미래 수자원 전망)

  • Kim, Yongchan;Kim, Youngran;Hwang, Seonghwan;Kim, Dongkyun
    • Journal of Korea Water Resources Association / v.55 no.12 / pp.1115-1124 / 2022
  • The impact of climate change on water resources was evaluated for the Chungju Dam and Soyang-gang Dam basins by constructing an integrated modeling framework consisting of a dam inflow prediction model based on the Variable Infiltration Capacity (VIC) model, a distributed hydrologic model, and an LSTM-based dam outflow prediction model. Considering the uncertainty of future climate data, four CMIP6 GCMs were used as input to the VIC model for the future period (2021-2100). When the future climate data were applied, the period-averaged inflow increased as the future progressed, and the inflow in the far future (2070-2100) increased by up to 22% compared to that of the observation period (1986-2020). The minimum dam discharge lasting 4~50 days was significantly lower than the observed value. This indicates that droughts may occur over longer periods than observed in the past, meaning that residents of the Seoul metropolitan area may experience severe water shortages during future droughts. In addition, compared to the near and middle future, changes in water storage occurred rapidly in the far future, suggesting that the difficulties of water resource management may increase.
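The outflow component of the framework is an LSTM-based predictor. The snippet below is a minimal sketch of such a model, with input features (e.g., daily inflow, storage, precipitation), window length, and layer sizes chosen purely for illustration; it is not the authors' configuration.

```python
# Minimal sketch of an LSTM-based dam outflow predictor of the kind coupled
# to the VIC inflow model in the framework described above.
import torch
import torch.nn as nn

class OutflowLSTM(nn.Module):
    def __init__(self, n_features: int = 3, hidden: int = 64, layers: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # predict next-day outflow

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, sequence_length, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :]).squeeze(-1)

model = OutflowLSTM()
dummy = torch.randn(8, 30, 3)      # 8 samples, 30-day windows, 3 assumed features
print(model(dummy).shape)          # torch.Size([8])
```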

A study on the improvement of artificial intelligence-based Parking control system to prevent vehicle access with fake license plates (위조번호판 부착 차량 출입 방지를 위한 인공지능 기반의 주차관제시스템 개선 방안)

  • Jang, Sungmin;Iee, Jeongwoo;Park, Jonghyuk
    • Journal of Intelligence and Information Systems / v.28 no.2 / pp.57-74 / 2022
  • Recently, artificial intelligence parking control systems have increased the recognition rate of vehicle license plates using deep learning, but they cannot identify vehicles with fake license plates. Despite this security problem, several institutions continue to use the existing systems; for example, in an experiment using a counterfeit license plate, there were cases of successful entry into major government agencies. This paper proposes an improvement over the existing artificial intelligence parking control system to prevent vehicles with such fake license plates from entering. Just as the existing system uses the matching of the vehicle license plate as the passing criterion, the proposed method uses the degree of matching of feature points on the front of the vehicle, extracted with the ORB algorithm, as the passing criterion. In addition, a procedure for checking whether the vehicle is already inside was included in the proposed system, to prevent the entry of a vehicle of the same model carrying a fake license plate. Experiments showed improved performance in identifying vehicles with fake license plates compared to the existing system. These results confirm that the proposed methods can be applied to existing parking control systems while retaining the workflow of the original artificial intelligence parking control system.
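The feature-point matching step can be sketched with OpenCV's ORB detector and a Hamming-distance matcher. The ratio test, match-count threshold, and image paths below are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch: compare ORB feature points of a registered front view and an
# entering vehicle's front view, and gate on the number of good matches.
import cv2

def front_view_matches(img_path_a: str, img_path_b: str, ratio: float = 0.75) -> int:
    img_a = cv2.imread(img_path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(img_path_b, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=1000)
    _, des_a = orb.detectAndCompute(img_a, None)
    _, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des_a, des_b, k=2)
    # Lowe's ratio test to keep only distinctive matches.
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)

MATCH_THRESHOLD = 40  # assumed value for illustration
if front_view_matches("registered_front.jpg", "entering_front.jpg") >= MATCH_THRESHOLD:
    print("front view matches the registered vehicle")
else:
    print("possible fake plate: front view mismatch")
```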

DNN Model for Calculation of UV Index at The Location of User Using Solar Object Information and Sunlight Characteristics (태양객체 정보 및 태양광 특성을 이용하여 사용자 위치의 자외선 지수를 산출하는 DNN 모델)

  • Ga, Deog-hyun;Oh, Seung-Taek;Lim, Jae-Hyun
    • Journal of Internet Computing and Services / v.23 no.2 / pp.29-35 / 2022
  • UV rays have beneficial or harmful effects on the human body depending on the degree of exposure, so accurate UV information is required for each individual to manage exposure properly. In Korea, UV information is provided by the Korea Meteorological Administration as one component of daily weather information; however, it offers only a regional ultraviolet index (UVI) and not an accurate UVI at the user's location. Operating a measuring instrument to obtain an accurate UVI is costly and inconvenient. Studies that estimate the UVI from environmental factors such as solar radiation and cloud cover have been introduced, but they also could not provide a service to individuals. Therefore, this paper proposes a deep learning model that calculates the UVI using solar object information and sunlight characteristics, to provide an accurate UVI at an individual's location. A data set for a DNN model was constructed by selecting factors considered highly correlated with the UVI, such as the location, size, and illuminance of the sun, obtained through the analysis of sky images and solar characteristics data. A DNN model that calculates the UVI was then realized by feeding in the solar object information extracted through Mask R-CNN together with the sunlight characteristics. Considering the domestic UVI recommendation standards, the model calculated the UVI within an MAE of 0.26 relative to standard equipment in a performance evaluation covering days with UVI above and below 8.
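The regression stage of such a pipeline can be sketched as a small fully connected network mapping solar-object features to a UVI value. Feature names, layer sizes, and the dummy data below are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of a fully connected regressor from solar-object features
# (e.g., sun position, apparent size, illuminance) to a UVI value, evaluated
# in the same MAE terms the paper reports.
import torch
import torch.nn as nn

class UVIRegressor(nn.Module):
    def __init__(self, n_features: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

model = UVIRegressor()
loss_fn = nn.L1Loss()                 # mean absolute error
features = torch.randn(16, 4)         # [sun_x, sun_y, apparent_size, illuminance] (assumed)
target_uvi = torch.rand(16) * 11      # dummy targets on the 0-11 UVI scale
print("MAE:", loss_fn(model(features), target_uvi).item())
```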

A modified U-net for crack segmentation by Self-Attention-Self-Adaption neuron and random elastic deformation

  • Zhao, Jin;Hu, Fangqiao;Qiao, Weidong;Zhai, Weida;Xu, Yang;Bao, Yuequan;Li, Hui
    • Smart Structures and Systems / v.29 no.1 / pp.1-16 / 2022
  • Despite recent breakthroughs in deep learning and computer vision, the pixel-wise identification of tiny objects in high-resolution images with complex disturbances remains challenging. This study proposes a modified U-net for tiny crack segmentation in real-world steel-box-girder bridges. The modified U-net adopts the common U-net framework and a novel Self-Attention-Self-Adaption (SASA) neuron as the fundamental computing element. The Self-Attention module applies softmax and gate operations to obtain an attention vector, which enables the neuron to focus on the most significant receptive fields when processing large-scale feature maps. The Self-Adaption module consists of a multilayer perceptron subnet and achieves deeper feature extraction inside a single neuron. For data augmentation, a grid-based crack random elastic deformation (CRED) algorithm is designed to enrich the diversity and irregular shapes of distributed cracks: grid-based uniform control nodes are first set on both the input images and the binary labels, random offsets are then applied to these control nodes, and bilinear interpolation is performed for the remaining pixels. The proposed SASA neuron and CRED algorithm are deployed together to train the modified U-net. 200 raw images with a high resolution of 4928 × 3264 are collected, 160 for training and the remaining 40 for testing, and 512 × 512 patches are generated from the original images by a sliding window with an overlap of 256 as inputs. Results show that the average IoU between the recognized and ground-truth cracks reaches 0.409, which is 29.8% higher than that of the regular U-net. A five-fold cross-validation study verifies that the proposed method is robust to different training and test images, and ablation experiments further demonstrate the effectiveness of the proposed SASA neuron and CRED algorithm: the improvements in average IoU obtained by using the SASA and CRED modules individually add up to the overall improvement of the full model, indicating that the two modules contribute to different stages of the model and data in the training process.
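The CRED augmentation described above, random offsets on a coarse control-point grid interpolated to a dense displacement field, can be sketched as follows. Grid size and offset magnitude are illustrative assumptions, and this is a generic grid-based elastic deformation rather than the authors' exact implementation.

```python
# Minimal sketch of a grid-based random elastic deformation applied jointly to
# an image and its binary crack label (nearest-neighbor for the label so it
# stays binary).
import cv2
import numpy as np

def grid_elastic_deform(image, label, grid=(8, 8), max_offset=20.0, seed=None):
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    # Random offsets at coarse control nodes, bilinearly upsampled to full size.
    dx = cv2.resize(rng.uniform(-max_offset, max_offset, grid).astype(np.float32),
                    (w, h), interpolation=cv2.INTER_LINEAR)
    dy = cv2.resize(rng.uniform(-max_offset, max_offset, grid).astype(np.float32),
                    (w, h), interpolation=cv2.INTER_LINEAR)
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))
    map_x, map_y = xs + dx, ys + dy
    warped_img = cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)
    warped_lbl = cv2.remap(label, map_x, map_y, interpolation=cv2.INTER_NEAREST)
    return warped_img, warped_lbl
```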

A Multi-speaker Speech Synthesis System Using X-vector (x-vector를 이용한 다화자 음성합성 시스템)

  • Jo, Min Su;Kwon, Chul Hong
    • The Journal of the Convergence on Culture Technology / v.7 no.4 / pp.675-681 / 2021
  • With the recent growth of the AI speaker market, the demand for speech synthesis technology that enables natural conversation with users is increasing, so there is a need for a multi-speaker speech synthesis system that can generate voices with various tones. Synthesizing natural speech requires training with a large-capacity, high-quality speech DB, but collecting such a database uttered by many speakers is very difficult in terms of recording time and cost. Therefore, the speech synthesis system must be trained using a speech DB from a very large number of speakers with only a small amount of training data per speaker, and a technique for naturally expressing the tone and prosody of multiple speakers is required. In this paper, we propose a technique that builds a speaker encoder by applying the deep-learning-based x-vector technique used in speaker recognition, and synthesizes a new speaker's tone from a small amount of data through the speaker encoder. In the multi-speaker speech synthesis system, the module that synthesizes a mel-spectrogram from the input text is composed of Tacotron2, and the vocoder that generates the synthesized speech is a WaveNet with a mixture of logistic distributions. The x-vector extracted from the trained speaker-embedding neural network is added to Tacotron2 as an input to express the desired speaker's tone.
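The conditioning step, adding an x-vector to the Tacotron2 pipeline as a speaker input, can be sketched as a projection of the x-vector broadcast over the encoder outputs. Dimensions and the projection layer are illustrative assumptions; the paper's exact integration point may differ.

```python
# Minimal sketch: project a speaker x-vector and add it to Tacotron2-style
# encoder outputs so the decoder is conditioned on the target speaker.
import torch
import torch.nn as nn

class SpeakerConditioner(nn.Module):
    def __init__(self, xvector_dim: int = 512, encoder_dim: int = 512):
        super().__init__()
        self.proj = nn.Linear(xvector_dim, encoder_dim)

    def forward(self, encoder_outputs: torch.Tensor, xvector: torch.Tensor) -> torch.Tensor:
        # encoder_outputs: (batch, text_length, encoder_dim); xvector: (batch, xvector_dim)
        speaker = self.proj(xvector).unsqueeze(1)   # (batch, 1, encoder_dim)
        return encoder_outputs + speaker            # broadcast over text positions

cond = SpeakerConditioner()
enc = torch.randn(2, 40, 512)      # dummy encoder outputs for 2 utterances
xvec = torch.randn(2, 512)         # dummy x-vectors for the target speakers
print(cond(enc, xvec).shape)       # torch.Size([2, 40, 512])
```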

Development of Graph based Deep Learning methods for Enhancing the Semantic Integrity of Spaces in BIM Models (BIM 모델 내 공간의 시멘틱 무결성 검증을 위한 그래프 기반 딥러닝 모델 구축에 관한 연구)

  • Lee, Wonbok;Kim, Sihyun;Yu, Youngsu;Koo, Bonsang
    • Korean Journal of Construction Engineering and Management / v.23 no.3 / pp.45-55 / 2022
  • BIM models allow building spaces to be instantiated and recognized as unique objects independently of model elements. These instantiated spaces provide the semantics required for building code checking, energy analysis, and evacuation route analysis. However, these spaces or rooms need to be designated manually, which in practice leads to errors and omissions. Thus, most BIM models today do not guarantee the semantic integrity of space designations, limiting their potential applicability. Recent studies have explored ways to automate space allocation in BIM models using artificial intelligence algorithms, but they are limited in scope and achieve relatively low classification accuracy. This study explored the use of Graph Convolutional Networks (GCNs), an algorithm tailored specifically for graph data structures, with the goal of utilizing not only geometric information but also the semantic relational data between spaces and elements in the BIM model. The results confirmed that accuracy was improved by about 8% compared to algorithms that use only geometric distinctions of the individual spaces.
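A node-classification GCN over a BIM-derived graph can be sketched with PyTorch Geometric as below. Feature and class counts, and the toy edge list, are illustrative assumptions rather than values from the study.

```python
# Minimal sketch of a two-layer GCN that classifies space nodes in a graph
# built from BIM spaces/elements and their relations.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class SpaceGCN(torch.nn.Module):
    def __init__(self, n_features: int = 16, n_classes: int = 10, hidden: int = 32):
        super().__init__()
        self.conv1 = GCNConv(n_features, hidden)
        self.conv2 = GCNConv(hidden, n_classes)

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # x: node features (geometric + semantic); edge_index: relations between nodes
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)        # per-node class logits

model = SpaceGCN()
x = torch.randn(5, 16)                                   # 5 nodes with assumed features
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])  # edges as a (2, num_edges) tensor
print(model(x, edge_index).shape)                        # torch.Size([5, 10])
```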

Performance Evaluation of YOLOv5s for Brain Hemorrhage Detection Using Computed Tomography Images (전산화단층영상 기반 뇌출혈 검출을 위한 YOLOv5s 성능 평가)

  • Kim, Sungmin;Lee, Seungwan
    • Journal of the Korean Society of Radiology / v.16 no.1 / pp.25-34 / 2022
  • Brain computed tomography (CT) is useful for diagnosing brain lesions such as brain hemorrhage because it is non-invasive, provides 3-dimensional images, and delivers a low radiation dose. However, there have been numerous misdiagnoses owing to a shortage of radiologists and heavy workloads. Recently, object detection technologies based on artificial intelligence have been developed to overcome the limitations of traditional diagnosis. In this study, the applicability of a deep learning-based YOLOv5s model was evaluated for brain hemorrhage detection using brain CT images, and the effect of hyperparameters on the trained YOLOv5s model was analyzed. The YOLOv5s model consists of backbone, neck and output modules, and the trained model detects a region of brain hemorrhage and provides information about that region. The YOLOv5s model was trained with various activation functions, optimizer functions, loss functions and numbers of epochs, and the performance of the trained model was evaluated in terms of brain hemorrhage detection accuracy and training time. The results showed that the trained YOLOv5s model provides a bounding box for a region of brain hemorrhage and the accuracy of the corresponding box. The performance of the YOLOv5s model was improved by using the Mish activation function, the stochastic gradient descent (SGD) optimizer and the complete intersection over union (CIoU) loss function, and the accuracy and training time increased with the number of epochs. Therefore, the YOLOv5s model is suitable for brain hemorrhage detection using brain CT images, and the performance of the model can be maximized by using appropriate hyperparameters.
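Of the hyperparameter choices reported as helpful, the Mish activation and SGD optimizer are straightforward to illustrate in PyTorch. The block below is only a generic sketch: in YOLOv5 itself the activation is changed inside the model's Conv blocks rather than in a standalone module, and the learning rate and momentum values here are assumptions.

```python
# Minimal sketch of a conv block using the Mish activation and an SGD optimizer,
# illustrating (not reproducing) the hyperparameter choices the study favored.
import torch
import torch.nn as nn

conv_block = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1, bias=False),
    nn.BatchNorm2d(16),
    nn.Mish(),                       # Mish(x) = x * tanh(softplus(x))
)

optimizer = torch.optim.SGD(conv_block.parameters(), lr=0.01, momentum=0.9)

x = torch.randn(1, 3, 64, 64)        # dummy single-channel-expanded CT patch
print(conv_block(x).shape)           # torch.Size([1, 16, 64, 64])
```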

A Korean menu-ordering sentence text-to-speech system using conformer-based FastSpeech2 (콘포머 기반 FastSpeech2를 이용한 한국어 음식 주문 문장 음성합성기)

  • Choi, Yerin;Jang, JaeHoo;Koo, Myoung-Wan
    • The Journal of the Acoustical Society of Korea / v.41 no.3 / pp.359-366 / 2022
  • In this paper, we present a Korean menu-ordering sentence Text-to-Speech (TTS) system using Conformer-based FastSpeech2. The Conformer is a convolution-augmented Transformer originally proposed for speech recognition; by combining the two structures, it extracts better local and global features. It comprises two half feed-forward modules, one at the front and one at the end, sandwiching a multi-head self-attention module and a convolution module. We introduce the Conformer into Korean TTS, as it is known to work well in Korean speech recognition. To compare a Transformer-based TTS model with a Conformer-based one, we trained FastSpeech2 and a Conformer-based FastSpeech2. We collected a phoneme-balanced data set and used it to train our models. This corpus comprises not only general conversation but also menu-ordering conversation consisting mainly of loanwords, addressing the degradation of current Korean TTS models on loanwords. When synthesized speech was generated using Parallel WaveGAN, the Conformer-based FastSpeech2 achieved a superior MOS of 4.04. We confirm that model performance improved when the same structure was changed from Transformer to Conformer in Korean TTS.
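The Macaron-style block structure described in the abstract (half feed-forward, self-attention, convolution module, half feed-forward) can be sketched as below. Dimensions, kernel size, and the omission of dropout and relative positional encoding are simplifying assumptions; this is a generic Conformer block, not the paper's exact model.

```python
# Minimal sketch of a Conformer block: two half feed-forward modules sandwiching
# multi-head self-attention and a convolution module, with residual connections.
import torch
import torch.nn as nn

class ConformerBlock(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4, conv_kernel: int = 31):
        super().__init__()
        self.ff1 = nn.Sequential(
            nn.LayerNorm(dim), nn.Linear(dim, dim * 4), nn.SiLU(), nn.Linear(dim * 4, dim))
        self.attn_norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.conv_norm = nn.LayerNorm(dim)
        self.conv = nn.Sequential(
            nn.Conv1d(dim, 2 * dim, kernel_size=1),          # pointwise expansion
            nn.GLU(dim=1),
            nn.Conv1d(dim, dim, kernel_size=conv_kernel,      # depthwise conv
                      padding=conv_kernel // 2, groups=dim),
            nn.BatchNorm1d(dim),
            nn.SiLU(),
            nn.Conv1d(dim, dim, kernel_size=1),               # pointwise projection
        )
        self.ff2 = nn.Sequential(
            nn.LayerNorm(dim), nn.Linear(dim, dim * 4), nn.SiLU(), nn.Linear(dim * 4, dim))
        self.final_norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim)
        x = x + 0.5 * self.ff1(x)                              # first half feed-forward
        a = self.attn_norm(x)
        x = x + self.attn(a, a, a, need_weights=False)[0]      # multi-head self-attention
        c = self.conv_norm(x).transpose(1, 2)                  # to (batch, dim, time)
        x = x + self.conv(c).transpose(1, 2)                   # convolution module
        x = x + 0.5 * self.ff2(x)                              # second half feed-forward
        return self.final_norm(x)

block = ConformerBlock()
print(block(torch.randn(2, 50, 256)).shape)   # torch.Size([2, 50, 256])
```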