• Title/Summary/Keyword: deep-learning


A Study on the Build of Equipment Predictive Maintenance Solutions Based on On-device Edge Computer

  • Lee, Yong-Hwan;Suh, Jin-Hyung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.4
    • /
    • pp.165-172
    • /
    • 2020
  • In this paper we propose an equipment predictive-maintenance approach that combines on-device edge computing with big data analysis. Edge computing is a distributed computing paradigm that places computation and storage where they are needed, addressing problems such as the transmission delays that occur when data from today's smart factories is sent to central data centers for processing. However, even where edge computing is applied in practice, the growing number of devices at the network edge still pushes large volumes of data toward the data center, driving the network toward its bandwidth limits; despite improvements in network technology, this does not guarantee the transfer speeds and response times that are critical requirements for many applications. By combining integrated hardware able to accommodate these requirements with factory management and control technology, this research supports intelligent facility management that can raise productivity in the facility preservation and smart factory industries, and provides a basis for future development into an AI-based facility predictive-maintenance analysis tool that applies deep learning to big data.

A Development for Sea Surface Salinity Algorithm Using GOCI in the East China Sea (GOCI를 이용한 동중국해 표층 염분 산출 알고리즘 개발)

  • Kim, Dae-Won;Kim, So-Hyun;Jo, Young-Heon
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_2
    • /
    • pp.1307-1315
    • /
    • 2021
  • The Changjiang Diluted Water (CDW) spreads over the East China Sea every summer and significantly affects sea surface salinity in the seas around Jeju Island and the southern coast of the Korean Peninsula; at times its effect extends through the Korea Strait to the east coast. The CDW has a significant impact on marine physics and ecology and causes damage to fisheries and aquaculture. However, because field surveys are limited, continuous observation of the CDW in the East China Sea is practically difficult, and many studies have used satellite measurements to monitor CDW distribution in near-real time. In this study, an algorithm for estimating Sea Surface Salinity (SSS) in the East China Sea was developed using the Geostationary Ocean Color Imager (GOCI). A Multilayer Perceptron Neural Network (MPNN) was employed, with Soil Moisture Active Passive (SMAP) SSS data selected as the output. In a previous study, an algorithm for estimating SSS from GOCI was trained on 2016 observation data; here, the training period was extended to 2015-2020 to improve performance. Validation against the National Institute of Fisheries Science (NIFS) serial oceanographic observation data from 2011 to 2019 shows a coefficient of determination (R2) of 0.61 and a Root Mean Square Error (RMSE) of 1.08 psu. This study was carried out to develop an algorithm for monitoring the surface salinity of the East China Sea using GOCI and is expected to contribute to the development of an SSS estimation algorithm for GOCI-II.
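The MPNN regression above can be sketched in miniature. The code below is a toy, assuming synthetic "band" features and synthetic salinity targets in place of real GOCI reflectances and SMAP SSS, and trains a one-hidden-layer network by plain gradient descent; it is an illustration of the model family, not the paper's implementation.

```python
import numpy as np

# Synthetic stand-ins: 4 toy "band" features and a linear toy salinity (psu).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 4))
y = (30.0 + 4.0 * X[:, 0] - 3.0 * X[:, 1]).reshape(-1, 1)

# One hidden layer with tanh activation, trained by plain gradient descent.
W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

_, pred0 = forward(X)
mse0 = float(np.mean((pred0 - y) ** 2))   # error before training

for _ in range(2000):
    H, pred = forward(X)
    err = (pred - y) / len(X)             # gradient of MSE w.r.t. prediction
    W2 -= lr * (H.T @ err); b2 -= lr * err.sum(0)
    dH = (err @ W2.T) * (1 - H ** 2)      # backprop through tanh
    W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(0)

_, pred1 = forward(X)
mse1 = float(np.mean((pred1 - y) ** 2))   # error after training
print(mse0, mse1)
```

The training loss should drop by orders of magnitude on this easy synthetic target.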

A Study of the Definition and Components of Data Literacy for K-12 AI Education (초·중등 AI 교육을 위한 데이터 리터러시 정의 및 구성 요소 연구)

  • Kim, Seulki;Kim, Taeyoung
    • Journal of The Korean Association of Information Education
    • /
    • v.25 no.5
    • /
    • pp.691-704
    • /
    • 2021
  • The development of AI technology has brought about a big change in our lives, and the importance of AI and data education is growing as AI's influence spreads from daily life to society and the economy. In response, the OECD education research report and various domestic informatics curriculum studies deal with data literacy and present it as an essential competency. However, the definition of data literacy and the content and scope of its components vary among researchers. We therefore analyze the definitions used in key data literacy studies, the word frequencies in their components, and the semantic similarity of words using the Word2Vec deep learning natural language processing method, in order to present an objective and comprehensive definition and set of components. After revision and supplementation through expert review, we define data literacy as the 'basic ability of knowledge construction and communication to collect, analyze, and use data and process it as information for problem solving'. Furthermore, we propose components in the categories of knowledge, skills, and values and attitudes. We hope that the definition and components of data literacy derived from this study will serve as a good foundation for the systematization of, and educational research on, AI education related to students' future competencies.
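The Word2Vec similarity analysis mentioned above boils down to comparing word vectors by cosine similarity. A minimal sketch, with made-up 3-dimensional vectors standing in for embeddings that a real study would train on its corpus:

```python
from math import sqrt

# Invented toy vectors; real Word2Vec embeddings are learned from text.
toy_vectors = {
    "data":        [0.9, 0.1, 0.2],
    "information": [0.8, 0.2, 0.3],
    "attitude":    [0.1, 0.9, 0.4],
}

def cosine(u, v):
    # Cosine similarity: dot product normalized by the vector lengths.
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

sim_info = cosine(toy_vectors["data"], toy_vectors["information"])
sim_att = cosine(toy_vectors["data"], toy_vectors["attitude"])
print(round(sim_info, 3), round(sim_att, 3))
```

With these toy vectors, "data" is far more similar to "information" than to "attitude", which is the kind of signal the study aggregates across many terms.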

Conformer with lexicon transducer for Korean end-to-end speech recognition (Lexicon transducer를 적용한 conformer 기반 한국어 end-to-end 음성인식)

  • Son, Hyunsoo;Park, Hosung;Kim, Gyujin;Cho, Eunsoo;Kim, Ji-Hwan
    • The Journal of the Acoustical Society of Korea
    • /
    • v.40 no.5
    • /
    • pp.530-536
    • /
    • 2021
  • Recently, due to the development of deep learning, end-to-end speech recognition, which directly maps speech signals to graphemes, shows good performance; among end-to-end models, the conformer performs best. However, end-to-end models focus only on the probability of which grapheme will appear at each time step, and the decoding process uses greedy or beam search, which is easily swayed by the final probability output of the model. In addition, end-to-end models cannot use external pronunciation and language information due to structural constraints. Therefore, in this paper a conformer with a lexicon transducer is proposed. We compare a phoneme-based model with a lexicon transducer against a grapheme-based model with beam search, on a test set consisting of words that do not appear in the training data. The grapheme-based conformer with beam search shows a CER of 3.8 %, while the phoneme-based conformer with the lexicon transducer shows a CER of 3.4 %.
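The greedy vs. beam-search contrast described above can be illustrated on a toy two-step lattice; the conditional grapheme probabilities below are invented for the example, and real decoders score whole hypotheses over many frames.

```python
import heapq

# Invented toy lattice: P(first grapheme) and P(next | previous).
P_FIRST = {"a": 0.6, "b": 0.4}
P_NEXT = {
    "a": {"c": 0.55, "d": 0.45},
    "b": {"c": 0.90, "d": 0.10},
}

def greedy():
    # Commit to the locally most likely grapheme at each step.
    first = max(P_FIRST, key=P_FIRST.get)
    second = max(P_NEXT[first], key=P_NEXT[first].get)
    return (first, second), P_FIRST[first] * P_NEXT[first][second]

def beam(width=2):
    # Keep the `width` best partial hypotheses, then expand and re-rank.
    beams = heapq.nlargest(width, P_FIRST.items(), key=lambda kv: kv[1])
    expanded = [((tok, nxt), p * q)
                for tok, p in beams
                for nxt, q in P_NEXT[tok].items()]
    return max(expanded, key=lambda x: x[1])

g_seq, g_p = greedy()
b_seq, b_p = beam()
print(g_seq, g_p, b_seq, b_p)
```

Greedy picks "a" first (0.6) and ends at probability 0.33, while the beam keeps "b" alive and finds the better sequence "b c" at 0.36, which is why decoding strategy matters for the final output.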

News Article Analysis of the 4th Industrial Revolution and Advertising before and after COVID-19: Focusing on LDA and Word2vec (코로나 이전과 이후의 4차 산업혁명과 광고의 뉴스기사 분석 : LDA와 Word2vec을 중심으로)

  • Cha, Young-Ran
    • The Journal of the Korea Contents Association
    • /
    • v.21 no.9
    • /
    • pp.149-163
    • /
    • 2021
  • The 4th industrial revolution refers to the next-generation industrial revolution led by information and communication technologies such as artificial intelligence (AI), the Internet of Things (IoT), robotics, drones, autonomous driving, and virtual reality (VR), and it has had a significant impact on the development of the advertising industry. However, the world is rapidly shifting to a non-contact, non-face-to-face living environment to prevent the spread of COVID-19, and the roles of the 4th industrial revolution and of advertising are changing accordingly. In this study, text analysis was therefore performed using Big Kinds to examine the 4th industrial revolution and changes in advertising before and after COVID-19, comparing 2019 (before) with 2020 (after). Main topics and documents were classified through LDA topic model analysis and Word2vec, a deep learning technique. The results show that before COVID-19, topics such as policy, content, and AI appeared, whereas after COVID-19 the field gradually expanded to finance, advertising, and data-driven delivery services, with education emerging as an important issue. In addition, whereas uses of advertising related to 4th-industrial-revolution technology were mainstream before COVID-19, keywords such as participation, cooperation, and daily necessities were afterwards used more actively in education on advanced technology, and talent cultivation appeared prominently. These results are meaningful in suggesting a multifaceted strategy that can be applied theoretically and practically, while pointing to the future direction of advertising in the 4th industrial revolution after COVID-19.
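As a simplified stand-in for the LDA/Word2vec pipeline (which requires trained models), the before/after comparison can be sketched as a keyword-frequency shift between two invented article sets; this is a frequency count, not topic modeling, but it shows the shape of the comparison.

```python
from collections import Counter

# Invented toy "articles" (keyword lists) for the two periods.
before = ["ai policy content advertising", "ai content policy"]
after = ["ai education delivery finance", "education advertising delivery data"]

def freqs(docs):
    # Count keyword occurrences across all documents in a period.
    c = Counter()
    for doc in docs:
        c.update(doc.split())
    return c

f_before, f_after = freqs(before), freqs(after)
# Keywords that appear more often after than before.
rising = sorted(w for w in f_after if f_after[w] > f_before.get(w, 0))
print(rising)
```

On this toy data the rising keywords are the "after" themes (data, delivery, education, finance), mirroring the kind of shift the study reports.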

Design of Fetal Health Classification Model for Hospital Operation Management (효율적인 병원보건관리를 위한 태아건강분류 모델)

  • Chun, Je-Ran
    • Journal of Digital Convergence
    • /
    • v.19 no.5
    • /
    • pp.263-268
    • /
    • 2021
  • The purpose of this study was to propose a model suitable for the actual delivery system by designing a delivery-hospital operation management and fetal health classification model. Maternal deaths numbered about 295,000 as of 2017, and about 94% of these deaths are preventable in most cases. In this paper, we therefore propose a model that predicts the health condition of the fetus with a random forest, using features such as fetal heart rate, fetal movements, and uterine contractions extracted from Cardiotocogram (CTG) tests. Even when the data are imbalanced, the proposed model supports stable operation of the fetal delivery health management system. To secure accuracy, we removed outliers by setting upper and lower thresholds based on the standard deviation. In addition, because the class distribution of fetal health status is skewed, minority classes were replicated by data resampling to balance the classes. This yielded a 4~5% improvement, reaching an accuracy of 97.75%. The developed model is expected to contribute to effective fetal health management and to the prevention of fetal deaths and diseases by predicting them accurately in advance.
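Two of the preprocessing steps described above, standard-deviation-based outlier removal and minority-class resampling, can be sketched on made-up CTG-like rows (the values and class labels below are invented):

```python
import random
from collections import Counter
from statistics import mean, stdev

random.seed(0)
# Invented rows: (feature value, class label). Two rows are extreme outliers.
rows = [(120, "normal")] * 40 + [(140, "suspect")] * 8 + [(300, "normal")] * 2

# Step 1: drop rows whose value lies outside mean +/- k standard deviations.
values = [v for v, _ in rows]
mu, sigma = mean(values), stdev(values)
k = 2.0
clean = [(v, c) for v, c in rows if abs(v - mu) <= k * sigma]

# Step 2: replicate minority-class rows until every class matches the largest.
counts = Counter(c for _, c in clean)
target = max(counts.values())
balanced = list(clean)
for cls, n in counts.items():
    pool = [r for r in clean if r[1] == cls]
    balanced += random.choices(pool, k=target - n)

class_counts = Counter(c for _, c in balanced)
print(len(clean), class_counts)
```

The two 300-valued outliers are dropped, and the "suspect" class is oversampled from 8 rows up to 40 to match "normal".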

Performance Analysis of Object Detection Neural Network According to Compression Ratio of RGB and IR Images (RGB와 IR 영상의 압축률에 따른 객체 탐지 신경망 성능 분석)

  • Lee, Yegi;Kim, Shin;Lim, Hanshin;Lee, Hee Kyung;Choo, Hyon-Gon;Seo, Jeongil;Yoon, Kyoungro
    • Journal of Broadcast Engineering
    • /
    • v.26 no.2
    • /
    • pp.155-166
    • /
    • 2021
  • Most object detection algorithms are studied on RGB images. However, because RGB cameras capture images from visible light, detection performance is poor when lighting conditions are bad, e.g., at night or on foggy days. In contrast, infrared (IR) sensors form images from heat information, so high-quality IR images can be acquired regardless of weather and light conditions. In this paper, we analyzed object detection performance on RGB and IR images as a function of compression ratio. We selected RGB and IR images taken at night from the Free FLIR Thermal dataset for ADAS (Advanced Driver Assistance Systems) research, and used both a pre-trained object detection network for RGB images and a network fine-tuned on night RGB and IR images. Experimental results show that higher object detection performance is obtained with IR images than with RGB images in both networks.
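Detection comparisons like the one above are typically scored by the Intersection-over-Union (IoU) between predicted and ground-truth boxes; a minimal sketch with invented box coordinates:

```python
# Boxes are (x1, y1, x2, y2); the coordinates below are made up.
def iou(a, b):
    # Intersection rectangle, clipped to zero if the boxes do not overlap.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

gt = (0, 0, 10, 10)     # ground-truth box
pred = (5, 0, 15, 10)   # predicted box, shifted right by half its width
score = iou(gt, pred)
print(score)
```

Here the boxes overlap on half of each area, giving IoU = 50 / 150 = 1/3; a detection usually counts as correct when IoU exceeds a threshold such as 0.5.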

An Interpretable Log Anomaly System Using Bayesian Probability and Closed Sequence Pattern Mining (베이지안 확률 및 폐쇄 순차패턴 마이닝 방식을 이용한 설명가능한 로그 이상탐지 시스템)

  • Yun, Jiyoung;Shin, Gun-Yoon;Kim, Dong-Wook;Kim, Sang-Soo;Han, Myung-Mook
    • Journal of Internet Computing and Services
    • /
    • v.22 no.2
    • /
    • pp.77-87
    • /
    • 2021
  • With the development of the Internet and personal computers, varied and complex attacks have begun to emerge. As attacks become more complex, signature-based detection becomes difficult, which has led to research on behavior-based log anomaly detection. Recent work utilizes deep learning to learn the order of log events and shows good performance, but it provides no explanation for its predictions. This lack of explanation makes it hard to detect contaminated data or vulnerabilities in the model itself, so users lose trust in the model. To address this problem, this work proposes an explainable log anomaly detection system. Log parsing is performed first; sequential rules are then extracted via Bayesian posterior probability, yielding a rule set of the form "if condition then result, with posterior probability". If a sample matches the rule set it is classified as normal, otherwise as an anomaly. We use HDFS datasets for the experiment, achieving an F1-score of 92.7% on the test dataset.
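The rule-extraction step can be sketched by counting: for toy parsed log event pairs, the posterior P(result | condition) is estimated from frequencies, and a sample is flagged as an anomaly when it matches no sufficiently probable rule. The event names below are invented, not taken from the paper's HDFS templates.

```python
from collections import Counter

# Invented (condition, result) pairs extracted from parsed logs.
sequences = [
    ("open", "read"), ("open", "read"), ("open", "read"),
    ("open", "write"),
    ("login", "read"),
]

# Estimate P(result | condition) by counting.
cond_counts = Counter(c for c, _ in sequences)
pair_counts = Counter(sequences)
rules = {pair: n / cond_counts[pair[0]] for pair, n in pair_counts.items()}

def is_normal(sample, threshold=0.5):
    # Normal if the sample matches a rule with high enough posterior.
    return rules.get(sample, 0.0) >= threshold

p_open_read = rules[("open", "read")]
print(p_open_read, is_normal(("open", "read")), is_normal(("open", "delete")))
```

The rule ("open" then "read", posterior 0.75) explains *why* a matching sample is normal, while an unseen pair like ("open", "delete") is flagged as an anomaly.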

An Empirical Study on Predictive Modeling to enhance the Product-Technical Roadmap (제품-기술로드맵 개발을 강화하기 위한 예측모델링에 관한 실증 연구)

  • Park, Kigon;Kim, YoungJun
    • Journal of Technology Innovation
    • /
    • v.29 no.4
    • /
    • pp.1-30
    • /
    • 2021
  • Due to the recent development of system semiconductors, technical innovation in automotive electronics is progressing rapidly. In particular, automotive electronics are accelerating technology development competition among automobile parts makers, and the development cycle is also changing rapidly. These changes further strengthen the importance of strategic R&D planning. Under this paradigm shift in the automobile industry, the Product-Technical Roadmap (P/TRM), one such R&D strategy, covers technology forecasting, technology level evaluation, and technology acquisition method (Make/Collaborate/Buy) at the planning stage. The product-technical roadmap is a tool that identifies customer needs for products and technologies, selects technologies, and sets development directions. However, most companies develop the roadmap through qualitative methods that rely mainly on technical papers, patent analysis, and the expert Delphi method. In this study, empirical research was conducted through simulations that supplement and strengthen the product-technical roadmap, centered on the automobile industry, by fusing Gartner's hype cycle, cumulative-moving-average-based data preprocessing, and deep learning (LSTM) time series analysis. The empirical study presented here can be used not only in the automobile industry but also in other manufacturing fields. From a corporate point of view, a more accurate product-technical roadmap, breaking away from preparation methods that have relied on qualitative judgment, can provide a foundation for becoming a leading company by bringing products to market in a timely manner.
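Of the techniques fused above, the cumulative-moving-average preprocessing is the simplest to sketch; the interest-score series below is invented, standing in for whatever indicator series feeds the LSTM.

```python
# Cumulative moving average (CMA): each point becomes the running mean of
# all points up to and including it, smoothing short-term fluctuations.
def cumulative_moving_average(series):
    out, total = [], 0.0
    for i, x in enumerate(series, start=1):
        total += x
        out.append(total / i)
    return out

scores = [10, 30, 20, 40]  # invented indicator values
cma = cumulative_moving_average(scores)
print(cma)
```

The jagged input [10, 30, 20, 40] becomes the smoother [10.0, 20.0, 20.0, 25.0], which is the kind of stabilized series that is easier for a time-series model to learn from.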

Development of a method for urban flooding detection using unstructured data and deep learning (비정형 데이터와 딥러닝을 활용한 내수침수 탐지기술 개발)

  • Lee, Haneul;Kim, Hung Soo;Kim, Soojun;Kim, Donghyun;Kim, Jongsung
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.12
    • /
    • pp.1233-1242
    • /
    • 2021
  • In this study, a model was developed to determine whether flooding has occurred using image data, which is unstructured data. CNN-based VGG16 and VGG19 were used to develop the flood classification model. Images of flooded and non-flooded scenes were collected by web crawling; since data collected this way contains noise, images irrelevant to this study were first deleted, and the remaining images were then resized to 224×224 for model application. In addition, image augmentation was performed by changing image angles for diversity. Finally, training was performed using 2,500 flooding images and 2,500 non-flooding images. In the model evaluation, the average classification performance was 97%. In the future, if the model developed in this study is mounted on the CCTV control center system, the response to flood damage could be made quickly.
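The angle-based augmentation step can be sketched on a toy 2-D array standing in for an image; a real pipeline would rotate actual pixel data and resize it to 224×224 with an image library, but the idea of generating extra training views is the same.

```python
# Toy 2x2 "image" as a nested list of pixel values (invented).
def rotate90(image):
    # Rotate clockwise: reversed rows become the new columns.
    return [list(col) for col in zip(*reversed(image))]

image = [[1, 2],
         [3, 4]]

# Original plus two rotated views, tripling the data for this one image.
augmented = [image, rotate90(image), rotate90(rotate90(image))]
print(augmented[1])
```

One clockwise rotation turns [[1, 2], [3, 4]] into [[3, 1], [4, 2]], and four rotations return the original, which is a quick sanity check on the transform.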