• Title/Summary/Keyword: Deep neural networks


A Study on the Improvement of Source Code Static Analysis Using Machine Learning (기계학습을 이용한 소스코드 정적 분석 개선에 관한 연구)

  • Park, Yang-Hwan;Choi, Jin-Young
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.6 / pp.1131-1139 / 2020
  • Static analysis of source code finds the security weaknesses remaining across a wide range of source code. A static analysis tool produces the findings, and a static analysis expert then classifies each finding as a true positive or a false positive. Because the volume of findings is large and the false positive rate is high, this process demands considerable time and effort, and a more efficient analysis method is needed. Moreover, when performing true/false positive analysis, experts rarely examine only the line where the defect occurred; depending on the defect type, they analyze the surrounding source code as well before delivering the final analysis result. To relieve experts of the difficulty of discriminating true and false positives from these static analysis tools, this paper proposes a method that uses artificial intelligence, rather than an expert, to determine whether a security weakness reported by a static analysis tool is a true positive. In addition, experiments examined how the size of the training data (the source code surrounding each defect) affects performance, and the optimal size was identified. These results are expected to help static analysis experts classify true and false positives after static analysis.
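Since the study above varies how much surrounding source code is fed to the model, a minimal sketch of such a context-window extractor may clarify the idea; the function and parameter names are hypothetical, not from the paper:

```python
def context_window(source_lines, defect_line, radius):
    """Return the lines within `radius` lines of the 1-indexed defect
    line, clipped to the file boundaries. This window, rather than the
    defect line alone, would be fed to the classifier."""
    start = max(0, defect_line - 1 - radius)
    end = min(len(source_lines), defect_line + radius)
    return source_lines[start:end]
```

Varying `radius` corresponds to the paper's search for the training-window size that gives the best classification performance.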

EF Sensor-Based Hand Motion Detection and Automatic Frame Extraction (EF 센서기반 손동작 신호 감지 및 자동 프레임 추출)

  • Lee, Hummin;Jung, Sunil;Kim, Youngchul
    • Smart Media Journal / v.9 no.4 / pp.102-108 / 2020
  • In this paper, we propose a real-time method for detecting hand motions and extracting the signal frame induced by EF (Electric Field) sensors. The signal induced by a hand motion includes not only noise caused by various environmental sources and the sensor's physical placement, but also differing initial offset conditions. Thus, detecting the motion signal and extracting the motion frame automatically in real time has been considered a challenging problem. In this study, we remove the PLN (Power Line Noise) using an LPF with a 10 Hz cut-off and successively apply an MA (Moving Average) filter to obtain clean and smooth input motion signals. To sense a hand motion, we use two thresholds (positive and negative) with an offset value to detect both the starting and the ending moment of the motion. Using this approach, we achieve a correct motion detection rate above 98%. Once the final motion frame is determined, the motion signals are normalized for use in a subsequent classification or recognition stage such as LSTM deep neural networks. Our experiments and analysis show that the proposed methods achieve better than 98% performance in both correct motion detection rate and frame-matching rate.
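The smoothing-plus-dual-threshold pipeline described above can be sketched as follows. The window length and threshold values are illustrative assumptions, and the 10 Hz low-pass PLN-removal stage is omitted:

```python
def moving_average(signal, window):
    """Causal moving-average smoother over the last `window` samples."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        out.append(sum(signal[lo:i + 1]) / (i - lo + 1))
    return out

def detect_frame(signal, pos_th, neg_th):
    """Return (start, end) indices of the first span where the smoothed
    signal leaves the [neg_th, pos_th] band (motion start) and returns
    to it (motion end), or None if no motion is found."""
    start = end = None
    for i, x in enumerate(signal):
        if start is None and (x > pos_th or x < neg_th):
            start = i
        elif start is not None and neg_th <= x <= pos_th:
            end = i
            break
    if start is None:
        return None
    return (start, end if end is not None else len(signal))
```

The extracted frame would then be normalized before being passed to a recognition stage such as an LSTM network, as the abstract describes.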

Research Status of Satellite-based Evapotranspiration and Soil Moisture Estimations in South Korea (위성기반 증발산량 및 토양수분량 산정 국내 연구동향)

  • Choi, Ga-young;Cho, Younghyun
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1141-1180 / 2022
  • The application of satellite imagery in the fields of hydrology and water resources has increased in recent years. However, obtaining accurate evapotranspiration and soil moisture estimates remains challenging, so recent research has emphasized the need for satellite-based evapotranspiration and soil moisture estimation and for related development research. In this study, we present the research status in Korea by investigating current trends and methodologies for evapotranspiration and soil moisture. Examining the detailed methodologies, we found that evapotranspiration is generally estimated using energy balance models such as the Surface Energy Balance Algorithm for Land (SEBAL) and Mapping Evapotranspiration with Internalized Calibration (METRIC); the Penman-Monteith and Priestley-Taylor equations are also used. For soil moisture, passive (AMSR-E, AMSR2, MIRAS, and SMAP) and active (ASCAT and SAR) sensors are generally used for estimation. On the statistical side, deep learning, as well as linear regression equations and artificial neural networks, is used to estimate these parameters. In a number of studies, various indices were calculated using satellite-based data and applied to the characterization of drought. In some cases, the hydrological cycle components of evapotranspiration and soil moisture were calculated based on a Land Surface Model (LSM). By comparing, reviewing, and presenting the major detailed methodologies, we intend these references to serve related research and to lay the foundation for advancing research on the calculation of satellite-based hydrological cycle data in the future.

Artificial Intelligence for Assistance of Facial Expression Practice Using Emotion Classification (감정 분류를 이용한 표정 연습 보조 인공지능)

  • Dong-Kyu, Kim;So Hwa, Lee;Jae Hwan, Bong
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.6 / pp.1137-1144 / 2022
  • In this study, an artificial intelligence (AI) system was developed to assist facial expression practice for expressing emotions. The developed AI uses multimodal inputs, consisting of sentences and facial images, for deep neural networks (DNNs). The DNNs calculate the similarity between the emotion predicted from a sentence and the emotion predicted from a facial image. The user practices facial expressions for the situation given by a sentence, and the AI provides the user with numerical feedback based on the similarity between the two predicted emotions. A ResNet34 architecture was trained on the public FER2013 dataset to predict emotions from facial images. To predict emotions from sentences, a KoBERT model was trained by transfer learning on the conversational speech dataset for emotion classification released publicly by AIHub. The DNN predicting emotions from facial images achieved 65% accuracy, comparable to human emotion classification ability, and the DNN predicting emotions from sentences achieved 90% accuracy. The performance of the developed AI was evaluated through experiments in which an ordinary person participated while changing facial expressions.
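The abstract does not state which similarity measure compares the sentence-predicted and face-predicted emotions. One plausible choice, shown purely as an assumption, is cosine similarity between the two class-probability vectors:

```python
import math

def cosine_similarity(p, q):
    """Cosine similarity between two equal-length probability vectors,
    e.g. the softmax outputs of the sentence and face emotion models."""
    dot = sum(a * b for a, b in zip(p, q))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    return dot / norm
```

A score near 1 would indicate that the practiced expression matches the emotion implied by the sentence, which is the kind of numerical feedback the abstract describes.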

A Multi-speaker Speech Synthesis System Using X-vector (x-vector를 이용한 다화자 음성합성 시스템)

  • Jo, Min Su;Kwon, Chul Hong
    • The Journal of the Convergence on Culture Technology / v.7 no.4 / pp.675-681 / 2021
  • With the recent growth of the AI speaker market, demand for speech synthesis technology that enables natural conversation with users is increasing, and a multi-speaker speech synthesis system that can generate voices with various tones is therefore needed. Synthesizing natural speech requires training on a large, high-quality speech DB, but collecting such a database uttered by many speakers is very difficult in terms of recording time and cost. It is therefore necessary to train the speech synthesis system on a speech DB covering a very large number of speakers with only a small amount of training data per speaker, and a technique for naturally expressing the tone and prosody of multiple speakers is required. In this paper, we propose a technique for constructing a speaker encoder by applying the deep learning-based x-vector technique used in speaker recognition, and for synthesizing a new speaker's tone from a small amount of data through the speaker encoder. In the multi-speaker speech synthesis system, the module that synthesizes a mel-spectrogram from input text is based on Tacotron2, and the vocoder that generates the synthesized speech is a WaveNet with a mixture of logistic distributions. The x-vector extracted from the trained speaker embedding neural network is added to Tacotron2 as an input to express the desired speaker's tone.
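A common way to feed a fixed-length speaker embedding such as an x-vector into a Tacotron2-style model is to tile it across time and concatenate it to each encoder frame. This sketch (plain lists, hypothetical names) illustrates that general idea, not the paper's exact wiring:

```python
def condition_on_speaker(encoder_outputs, x_vector):
    """Concatenate the fixed-length speaker x-vector onto every encoder
    time step, so the attention mechanism and decoder see the speaker
    identity at each frame."""
    return [frame + x_vector for frame in encoder_outputs]
```

In a real implementation the frames and x-vector would be tensors and the concatenation would happen along the feature axis, but the broadcasting pattern is the same.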

Chest CT Image Patch-Based CNN Classification and Visualization for Predicting Recurrence of Non-Small Cell Lung Cancer Patients (비소세포폐암 환자의 재발 예측을 위한 흉부 CT 영상 패치 기반 CNN 분류 및 시각화)

  • Ma, Serie;Ahn, Gahee;Hong, Helen
    • Journal of the Korea Computer Graphics Society / v.28 no.1 / pp.1-9 / 2022
  • Non-small cell lung cancer (NSCLC) accounts for 85% of all lung cancers and has a significantly higher mortality rate (22.7%) compared to other cancers, so predicting the postoperative prognosis of NSCLC patients is very important. In this study, preoperative chest CT image patches centered on the tumor as the region of interest were diversified into five types according to tumor-related information, and the performance of a single classifier model, an ensemble classifier model with soft voting, and an ensemble classifier model using three input channels for a combination of three different patches, built on pre-trained ResNet and EfficientNet CNNs, was analyzed through misclassification cases and Grad-CAM visualization. In the experiments, the ResNet152 single model and the EfficientNet-b7 single model trained on the peritumoral patch achieved accuracies of 87.93% and 81.03%, respectively. The ResNet152 ensemble model, which placed the image, peritumoral, and shape-focused intratumoral patches in separate input channels, showed stable performance with an accuracy of 87.93%, and the EfficientNet-b7 soft-voting ensemble using the image and peritumoral patches achieved an accuracy of 84.48%.
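Soft voting, as used in the ensembles above, averages the class-probability vectors of the member classifiers before taking the argmax; a minimal sketch:

```python
def soft_vote(prob_lists):
    """Given one class-probability vector per classifier, average them
    element-wise and return the index of the highest mean probability."""
    n = len(prob_lists)
    k = len(prob_lists[0])
    mean = [sum(p[i] for p in prob_lists) / n for i in range(k)]
    return max(range(k), key=mean.__getitem__)
```

Unlike hard (majority) voting, this lets a single very confident classifier outweigh several uncertain ones, which is why it often performs better when member models are well calibrated.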

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.1-19 / 2019
  • Artificial intelligence (AI) is one of the main driving forces of the Fourth Industrial Revolution. Technologies associated with AI have already demonstrated abilities equal to or better than humans in many fields, including image and speech recognition. Because AI technologies can be applied across a wide range of fields, including medicine, finance, manufacturing, services, and education, many efforts have been actively devoted to identifying current technology trends and analyzing their development directions. Major platforms for developing complex AI algorithms for learning, reasoning, and recognition have been released to the public as open source projects, and the technologies and services that utilize them have increased rapidly; this has been confirmed as one of the major reasons for the fast development of AI technologies. The spread of the technology also owes much to open source software developed by major global companies that supports natural language recognition, speech recognition, and image recognition. This study therefore aimed to identify practical trends in AI technology development by analyzing AI-related OSS projects developed through the online collaboration of many parties. We searched and collected a list of major AI-related projects created on Github from 2000 to July 2018, and confirmed the development trends of major technologies in detail by applying text mining to topic information, which indicates the characteristics and technical fields of the collected projects. The analysis showed that the number of software development projects per year was below 100 until 2013, but rose to 229 projects in 2014 and 597 in 2015; the number of AI-related open source projects then increased sharply in 2016 (2,559 OSS projects).
The number of projects initiated in 2017 was 14,213, almost four times the total number of projects created from 2009 to 2016 (3,555), and 8,737 projects were initiated from January to July 2018. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. Natural language processing remained the top topic in all years, implying continuous OSS development in that area. Until 2015, the programming languages Python, C++, and Java appeared among the top ten topics; after 2016, programming languages other than Python disappeared from the top ten, replaced by platforms supporting AI algorithm development, such as TensorFlow and Keras, which show high appearance frequency. Reinforcement learning algorithms and convolutional neural networks, which are used in various fields, were also frequent topics. The topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency; the main difference was that visualization and medical imaging topics rose to the top of the list, although they were not at the top from 2009 to 2012, indicating that OSS was being developed in the medical field to utilize AI technology. Moreover, although computer vision was in the top 10 by appearance frequency from 2013 to 2015, it was not in the top 10 by degree centrality. The topics at the top of the degree centrality list were otherwise similar to those at the top of the appearance frequency list, with slight rank changes for convolutional neural networks and reinforcement learning.
The trend of technology development was also examined using topic appearance frequency together with degree centrality. Machine learning showed the highest frequency and the highest degree centrality in all years. Notably, although the deep learning topic had low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both machine learning and deep learning have had high appearance frequency and degree centrality. TensorFlow first appeared in the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning showed no abrupt increase or decrease and had relatively low appearance frequency and degree centrality compared with the topics above. Based on these results, it is possible to identify the fields in which AI technologies are being actively developed, and the results of this study can serve as a baseline dataset for more empirical analysis of future converging technology trends.
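Degree centrality, the network measure used throughout the study above, is simply each node's degree divided by n - 1. A self-contained sketch over a toy topic co-occurrence network (the topic names are hypothetical):

```python
def degree_centrality(edges):
    """Degree centrality of an undirected topic co-occurrence network:
    each node's degree divided by (n - 1), where n is the node count."""
    nodes = set()
    degree = {}
    for u, v in edges:
        nodes.update((u, v))
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    n = len(nodes)
    return {node: degree.get(node, 0) / (n - 1) for node in nodes}
```

A topic connected to every other topic scores 1.0, which is how a hub such as "machine learning" would dominate the centrality ranking in the analysis above.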

Multi-task Learning Based Tropical Cyclone Intensity Monitoring and Forecasting through Fusion of Geostationary Satellite Data and Numerical Forecasting Model Output (정지궤도 기상위성 및 수치예보모델 융합을 통한 Multi-task Learning 기반 태풍 강도 실시간 추정 및 예측)

  • Lee, Juhyun;Yoo, Cheolhee;Im, Jungho;Shin, Yeji;Cho, Dongjin
    • Korean Journal of Remote Sensing / v.36 no.5_3 / pp.1037-1051 / 2020
  • Accurate monitoring and forecasting of tropical cyclone (TC) intensity can effectively reduce the overall cost of disaster management. In this study, we proposed a multi-task learning (MTL) based deep learning model for real-time TC intensity estimation and forecasting with lead times of 6-12 hours, based on the fusion of geostationary satellite images and numerical forecast model output. A total of 142 TCs that developed in the Northwest Pacific from 2011 to 2016 were used. Communication, Ocean and Meteorological Satellite (COMS) Meteorological Imager (MI) data were used to extract typhoon images, and the Climate Forecast System version 2 (CFSv2) provided by the National Centers for Environmental Prediction (NCEP) was used to extract atmospheric and oceanic forecast data. Two schemes with different input variables to the MTL models were examined: scheme 1 used only satellite-based input data, while scheme 2 used both satellite images and numerical forecast model output. For real-time TC intensity estimation, both schemes exhibited similar performance. For TC intensity forecasting with lead times of 6 and 12 hours, scheme 2 improved performance by 13% and 16%, respectively, in terms of root mean squared error (RMSE) compared to scheme 1. Relative root mean squared errors (rRMSE) for most intensity levels were less than 30%, and lower mean absolute error (MAE) and RMSE were found for the lower TC intensity levels. In the test on typhoon HALONG in 2014, scheme 1 tended to overestimate the intensity by about 20 kts at the early development stage, while scheme 2 reduced the error to an overestimation of about 5 kts. The MTL models also reduced the computational cost to roughly one third of that of the single-tasking model, suggesting the feasibility of rapid production of TC intensity forecasts.
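The RMSE and relative RMSE (rRMSE) metrics reported above can be computed as follows; normalizing rRMSE by the observed mean is one common convention and an assumption here, since the abstract does not give the exact definition:

```python
import math

def rmse(obs, pred):
    """Root mean squared error between observed and predicted values."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def rrmse(obs, pred):
    """Relative RMSE as a percentage of the observed mean (one common
    convention; the paper's normalization may differ)."""
    return 100.0 * rmse(obs, pred) / (sum(obs) / len(obs))
```

Under this convention, the paper's "rRMSE below 30% for most intensity levels" would mean the RMSE stays under 30% of the mean observed intensity at each level.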

Evaluation of Future Turbidity Water and Eutrophication in Chungju Lake by Climate Change Using CE-QUAL-W2 (CE-QUAL-W2를 이용한 충주호의 기후변화에 따른 탁수 및 부영양화 영향평가)

  • Ahn, So Ra;Ha, Rim;Yoon, Sung Wan;Kim, Seong Joon
    • Journal of Korea Water Resources Association / v.47 no.2 / pp.145-159 / 2014
  • This study evaluates the impact of future climate change on turbid water and eutrophication in Chungju Lake using the CE-QUAL-W2 reservoir water quality model coupled with the SWAT watershed model. SWAT was calibrated and validated using 11 years (2000-2010) of daily streamflow data at three locations and monthly stream water quality data at two locations. CE-QUAL-W2 was calibrated and validated against 2 years (2008 and 2010) of water temperature, suspended solids, total nitrogen, total phosphorus, and Chl-a data. For the future assessment, the SWAT results were used as boundary conditions for the CE-QUAL-W2 model runs. To evaluate future water quality variation in the reservoir, climate data predicted by the MM5 RCM (Regional Climate Model) under the Special Report on Emissions Scenarios (SRES) A1B scenario for three periods (2013-2040, 2041-2070, and 2071-2100) were downscaled by an artificial neural network method to account for typhoon effects. The RCM temperature and precipitation outputs and historical records were used to generate pollutant loadings from the watershed. Under the future temperature increase, lake water temperature rose by 0.5°C at shallow depths but fell by 0.9°C at deep depths. The future annual maximum sediment concentration entering the lake from the watershed increased by 17% in wet years. The future residence time of lake water with suspended solids (SS) above 10 mg/L increased by 6 and 17 days in wet and dry years, respectively, compared with a normal year, and the SS occupancy rate of the lake increased by 24% and 26% in wet and dry years, respectively. In summary, future lake turbidity lasts longer at higher concentrations than at present. Under this future lake environment driven by the watershed and in-lake conditions, the future maximum Chl-a concentration increased by 19% in wet years and 3% in dry years.