• Title/Summary/Keyword: Machine Learning Procedure

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.27-65 / 2020
  • Many information and communication technology companies have made their internally developed AI technologies public, for example, Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software to the public, the relationship with the developer community and the artificial intelligence (AI) ecosystem can be strengthened, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although various analyses of open source software have been conducted, there is a lack of studies that help develop or use deep learning open source software in industry. This study thus attempts to derive a strategy for adopting such a framework through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on the adoption of open source software, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two successes and one failure) and revealed that seven out of the eight TOE factors, as well as several factors regarding the company, team, and resources, are significant for the adoption of a deep learning open source framework. By organizing the case study results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. In order for an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, it is necessary to support research developers' use of deep learning frameworks by collecting and managing data inside and outside the company with a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures in the usage stage, companies will increase the number of deep learning research developers, the ability to use the deep learning framework, and the support of GPU resources. In the proliferation stage of the deep learning framework, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of the developers, for example, by automatically optimizing the hardware (GPU) environment. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by activating developer retraining and seminars.
To implement the identified five success factors, a step-by-step enterprise procedure for adopting the deep learning framework was proposed: defining the project problem, confirming whether the deep learning methodology is the right method, confirming whether the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework across the enterprise. The first three steps (defining the project problem, confirming whether the deep learning methodology is the right method, and confirming whether the deep learning framework is the right tool) are pre-considerations for adopting a deep learning open source framework. After these three pre-consideration steps are cleared, the next two steps (using the deep learning framework in the enterprise and spreading the framework across the enterprise) can proceed. In the fourth step, the knowledge and expertise of developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, the five important factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

A Noise-Tolerant Hierarchical Image Classification System based on Autoencoder Models (오토인코더 기반의 잡음에 강인한 계층적 이미지 분류 시스템)

  • Lee, Jong-kwan
    • Journal of Internet Computing and Services / v.22 no.1 / pp.23-30 / 2021
  • This paper proposes a noise-tolerant image classification system using multiple autoencoders. The development of deep learning technology has dramatically improved the performance of image classifiers. However, if the images are contaminated by noise, the performance degrades rapidly. Noise is inevitably introduced in the process of acquiring and transmitting images, so in order to use a classifier in a real environment, we have to deal with the noise. Meanwhile, the autoencoder is an artificial neural network model that is trained to have similar input and output values. If the input data is similar to the training data, the error between the input data and the output data of the autoencoder will be small; if the input data is not similar to the training data, the error will be large. The proposed system uses this relationship between the input and output of the autoencoder, and it classifies images in two phases. In the first phase, the classes with the highest likelihood of classification are selected and subjected to the procedure again in the second phase. For the performance analysis of the proposed system, classification accuracy was tested on a Gaussian noise-contaminated MNIST dataset. The experiment confirmed that, in a noisy environment, the proposed system achieves higher accuracy than a CNN-based classification technique.
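
The abstract above hinges on per-class reconstruction error: an autoencoder trained on one class reconstructs members of that class well and everything else poorly. The exact architecture and two-phase selection rule are not spelled out in the abstract, so the sketch below is only a minimal illustration of that idea, with an assumed dense autoencoder for 784-dimensional MNIST vectors and an assumed top-k candidate step standing in for the first phase.

```python
# Minimal sketch of per-class autoencoders and reconstruction-error classification.
# The dense architecture and the top-k two-phase rule are illustrative assumptions.
import torch
import torch.nn as nn

class DenseAutoencoder(nn.Module):
    """One autoencoder per class, trained only on clean images of that class."""
    def __init__(self, dim=784, bottleneck=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(),
                                     nn.Linear(256, bottleneck), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(bottleneck, 256), nn.ReLU(),
                                     nn.Linear(256, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def reconstruction_error(ae, x):
    """Per-sample mean squared error between the input and its reconstruction."""
    with torch.no_grad():
        return torch.mean((ae(x) - x) ** 2, dim=1)

def classify(x, autoencoders, top_k=3):
    """Phase 1: keep the top-k classes with the smallest reconstruction error.
    Phase 2: decide among those candidates only (same criterion in this sketch)."""
    errors = torch.stack([reconstruction_error(ae, x) for ae in autoencoders])   # (C, N)
    candidates = torch.topk(errors, k=top_k, dim=0, largest=False).indices       # (k, N)
    refined = torch.gather(errors, 0, candidates)
    best = refined.argmin(dim=0)                                                  # (N,)
    return candidates[best, torch.arange(x.shape[0])]

# Usage with untrained autoencoders, just to show the shapes involved.
aes = [DenseAutoencoder() for _ in range(10)]
labels = classify(torch.rand(5, 784), aes)
print(labels.shape)   # torch.Size([5])
```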

Land Use Feature Extraction and Sprawl Development Prediction from Quickbird Satellite Imagery Using Dempster-Shafer and Land Transformation Model

  • Saharkhiz, Maryam Adel;Pradhan, Biswajeet;Rizeei, Hossein Mojaddadi;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing / v.36 no.1 / pp.15-27 / 2020
  • Accurate knowledge of land use/land cover (LULC) features and their changes over time is essential for sustainable urban management. Urban sprawl has also always been a worldwide concern that needs to be carefully monitored, particularly in developing countries where unplanned building construction has been expanding at a high rate. Recently, remotely sensed imagery with very high spatial/spectral resolution and state-of-the-art machine learning approaches have taken urban classification and growth monitoring to a higher level. In this research, we classified Quickbird satellite imagery by object-based image analysis with Dempster-Shafer theory (OBIA-DS) for the years 2002 and 2015 in Karbala, Iraq. The real LULC changes between these years, including residential sprawl expansion, were identified via a change detection procedure. In accordance with the extracted LULC features and the detected trend of the urban pattern, future LULC dynamics were simulated using the land transformation model (LTM) on a geospatial information system (GIS) platform. Both the classification and prediction stages were successfully validated against ground control points (GCPs) through the accuracy assessment metric of the Kappa coefficient, which indicated 0.87 and 0.91 for the 2002 and 2015 classifications, respectively, and 0.79 for the prediction part. Detailed results revealed substantial growth in built-up area over the fifteen years, mostly replacing agriculture and orchard fields. The prediction scenario of LULC sprawl development for 2030 revealed a substantial decline in green and agricultural land as well as an extensive increase in built-up area, especially on the outskirts of the city, without following residential pattern standards. The proposed method helps urban decision-makers identify the detailed temporal-spatial growth pattern of highly populated cities like Karbala. Additionally, the results of this study can be considered a probable future map for designing sufficient future social services and amenities for the local inhabitants.
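
The validation above relies on the Kappa coefficient computed against ground control points. As a reminder of how that metric works, the sketch below computes Cohen's kappa from a confusion matrix; the matrix values are illustrative and are not data from the paper.

```python
# Minimal sketch of the Kappa coefficient used for accuracy assessment above.
# The confusion matrix here is illustrative, not the paper's data.
import numpy as np

def cohen_kappa(confusion):
    """Kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and
    p_e is the agreement expected by chance from the row/column marginals."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    p_o = np.trace(confusion) / total
    p_e = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total ** 2
    return (p_o - p_e) / (1.0 - p_e)

# Example: rows = reference class from GCPs, columns = predicted class.
cm = [[50, 3, 2],
      [4, 60, 5],
      [1, 2, 40]]
print(round(cohen_kappa(cm), 3))
```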

Cyclist's Performance Evaluation Used Ergonomic Method (인간공학적 방법을 이용한 사이클 선수의 경기력 평가 (우수선수의 경기력 벤치마킹을 중심으로))

  • Hah, Chong-Ku;Jang, Young-Kwan;Ki, Jae-Sug
    • Proceedings of the Safety Management and Science Conference / 2009.11a / pp.15-24 / 2009
  • Cycling, which transforms human energy into mechanical energy, is one of the man-machine systems in the sports field. Benchmarking means "improving ourselves by learning from others"; therefore, benchmarking against a dominant cyclist is necessary in the field. The goals of this study were to provide important factors across multiple disciplines (kinematics, physiology, power, psychology) for a tailored training program suited to individual characteristics. Two cyclists participated in this study and gave consent to the experimental procedure: one was a dominant cyclist (age: 21 yrs, height: 177 cm, mass: 70 kg) and the other a non-dominant cyclist (age: 21 yrs, height: 176 cm, mass: 70 kg). Kinematic data were recorded using six infrared cameras (240 Hz) and QTM software. Physiological data (VO2max, AT) were acquired through a graded exercise test on a cycle ergometer, power through the Wingate test of Bar-Or et al. (1977), and muscle function through Cybex evaluation. Psychological data were collected with the Competitive State Anxiety Inventory (CSAI-2) devised by Martens et al. (1990) and with the Athletes' Self-Management Questionnaire (ASMQ) of Huh (2003). The dominant cyclist's CV (coefficient of variability) was higher than the non-dominant's CV in the sports biomechanics domain, the dominant's values for all factors were higher than the non-dominant's in the physical and physiological domains, and their values for cognitive anxiety and somatic anxiety were contrary to each other in the psychological domain. Further research across these disciplines may lead to the development of tailored optimal training programs with key factors to enhance athletic performance, involving athletes, coaches, and parents.
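
The biomechanics comparison above rests on the coefficient of variability (CV). A minimal sketch of that computation follows; the joint-angle series is illustrative and does not come from the study's measurements.

```python
# Minimal sketch of the coefficient of variability (CV) used in the comparison above;
# the sample joint-angle values are illustrative, not study data.
import numpy as np

def coefficient_of_variation(series):
    """CV = sample standard deviation / mean, reported here as a percentage."""
    series = np.asarray(series, dtype=float)
    return series.std(ddof=1) / series.mean() * 100.0

knee_angle_per_cycle = [72.1, 73.4, 71.8, 74.0, 72.9]   # degrees, one value per pedal cycle
print(round(coefficient_of_variation(knee_angle_per_cycle), 2))
```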

State Feedback Control for Model Matching Inclusion of Asynchronous Sequential Machines with Model Uncertainty (모델 불확실성을 가진 비동기 순차 머신의 모델 정합 포함을 위한 상태 피드백 제어)

  • Yang, Jung-Min;Park, Yong-Kuk
    • Journal of the Institute of Electronics Engineers of Korea SC / v.47 no.4 / pp.7-14 / 2010
  • Stable-state behaviors of asynchronous sequential machines represented as finite state machines can be corrected by feedback control schemes. In this paper, we propose a state feedback control scheme for input/state asynchronous machines with uncertain transitions. The considered asynchronous machine is deterministic, but its state transition function is partially known due to model uncertainty or inner logic errors. The control objective is to compensate the behavior of the closed-loop system so that it matches a sub-behavior of a prescribed model despite uncertain transitions. Furthermore, during the execution of corrective action, the controller reflects the exact knowledge of transitions into the next step, i.e., the range of the behavior of the closed-loop system can be enlarged through learning. The design procedure for the proposed controller is described in a case study.
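
The abstract above describes corrective state feedback in which the controller records each transition it observes, so the usable behavior of the closed-loop system grows over time. The sketch below is only a toy illustration of that learning idea, assuming a small dictionary-based machine with hypothetical states and inputs rather than the paper's case study.

```python
# Toy sketch of corrective control with learned transitions (states, inputs,
# and the target below are hypothetical; this is not the paper's case study).
class UncertainAsyncMachine:
    """Plant whose true stable-state transition map is hidden from the controller."""
    def __init__(self, transitions):
        self._transitions = transitions              # (state, input) -> next stable state

    def step(self, state, u):
        return self._transitions[(state, u)]

class CorrectiveController:
    def __init__(self, known_transitions, inputs):
        self.known = dict(known_transitions)         # controller's partial model
        self.inputs = inputs

    def drive_to(self, machine, state, target, max_steps=10):
        """Steer the machine toward `target`, recording every observed transition
        so later corrective actions can use the enlarged model."""
        for _ in range(max_steps):
            if state == target:
                return state
            direct = [u for u in self.inputs if self.known.get((state, u)) == target]
            unexplored = [u for u in self.inputs if (state, u) not in self.known]
            u = direct[0] if direct else (unexplored[0] if unexplored else self.inputs[0])
            nxt = machine.step(state, u)
            self.known[(state, u)] = nxt             # learning step: remember the transition
            state = nxt
        return state

plant = UncertainAsyncMachine({('a', 'u1'): 'b', ('a', 'u2'): 'c',
                               ('b', 'u1'): 'c', ('b', 'u2'): 'a',
                               ('c', 'u1'): 'a', ('c', 'u2'): 'b'})
ctrl = CorrectiveController({('a', 'u1'): 'b'}, inputs=['u1', 'u2'])
print(ctrl.drive_to(plant, 'a', 'c'))                # reaches 'c' while filling in its model
```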

Evolutionary Design of Radial Basis Function-based Polynomial Neural Network with the aid of Information Granulation (정보 입자화를 통한 방사형 기저 함수 기반 다항식 신경 회로망의 진화론적 설계)

  • Park, Ho-Sung;Jin, Yong-Ha;Oh, Sung-Kwun
    • The Transactions of The Korean Institute of Electrical Engineers / v.60 no.4 / pp.862-870 / 2011
  • In this paper, we introduce a new topology of Radial Basis Function-based Polynomial Neural Networks (RPNN) that is based on a genetically optimized multi-layer perceptron with Radial Polynomial Neurons (RPNs). This study offers a comprehensive design methodology involving mechanisms of optimization algorithms, especially the Fuzzy C-Means (FCM) clustering method and Particle Swarm Optimization (PSO) algorithms. In contrast to the typical architectures encountered in Polynomial Neural Networks (PNNs), our main objective is to develop a design strategy of RPNNs as follows: (a) The architecture of the proposed network consists of Radial Polynomial Neurons (RPNs). Here, the RPN is fully reflective of the structure encountered in the numeric data, which are granulated with the aid of the Fuzzy C-Means (FCM) clustering method. The RPN dwells on the concepts of a collection of radial basis functions and function-based nonlinear (polynomial) processing. (b) The PSO-based design procedure applied at each layer of the RPNN leads to the selection of preferred nodes of the network (RPNs) whose local characteristics (such as the number of input variables, the specific subset of input variables, the order of the polynomial, and the number of clusters as well as the fuzzification coefficient in the FCM clustering) can be easily adjusted. The performance of the RPNN is quantified through experimentation on a number of modeling benchmarks: NOx emission process data from a gas turbine power plant and machine learning data (Automobile Miles Per Gallon data) already experimented with in fuzzy or neurofuzzy modeling. A comparative analysis reveals that the proposed RPNN exhibits higher accuracy and superb predictive capability in comparison to some previous models available in the literature.
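
The information granulation step above relies on Fuzzy C-Means memberships, which then act as the radial receptive fields of the RPNs. The sketch below implements the standard FCM prototype and membership updates; the cluster count, fuzzification coefficient, and sample data are assumptions for illustration and do not reproduce the paper's PSO-driven design.

```python
# Minimal sketch of the Fuzzy C-Means granulation step that the RPNs build on.
# The cluster count c, fuzzification coefficient m, and sample data are illustrative.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    """Return prototypes V with shape (c, d) and memberships U with shape (n, c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]                       # prototype update
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1))                                  # membership update
        U /= U.sum(axis=1, keepdims=True)
    return V, U

# The membership degrees then serve as the radial (receptive-field) activations
# feeding the polynomial part of each radial polynomial neuron.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.5, size=(50, 2)) for loc in ([0, 0], [4, 4], [0, 5])])
V, U = fuzzy_c_means(X, c=3)
print(V.shape, U.shape)   # (3, 2) (150, 3)
```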

Improved Environment Recognition Algorithms for Autonomous Vehicle Control (자율주행 제어를 위한 향상된 주변환경 인식 알고리즘)

  • Bae, Inhwan;Kim, Yeounghoo;Kim, Taekyung;Oh, Minho;Ju, Hyunsu;Kim, Seulki;Shin, Gwanjun;Yoon, Sunjae;Lee, Chaejin;Lim, Yongseob;Choi, Gyeungho
    • Journal of Auto-vehicle Safety Association / v.11 no.2 / pp.35-43 / 2019
  • This paper describes improved environment recognition algorithms using several types of sensors, such as LiDAR and cameras, together with an integrated control algorithm for an autonomous vehicle. The integrated algorithm was based on a C++ environment and supported the stability of the whole driving control stack. Regarding the improved vision algorithms, lane tracing and traffic sign recognition were mainly operated with three cameras. Two algorithms were developed for lane tracing, Improved Lane Tracing (ILT) and Histogram Extension (HIX), and the two independent algorithms were combined into one: Enhanced Lane Tracing with Histogram Extension (ELIX). For the enhanced traffic sign recognition algorithm, an integrated Mutual Validation Procedure (MVP) using three algorithms (Cascade, Reinforced DSIFT SVM, and YOLO) was developed. Compared with the previous results, the precision of traffic sign recognition is substantially increased. With the LiDAR sensor, the focus was on static and dynamic obstacle detection and obstacle avoidance algorithms. As a result, improved environment recognition algorithms with higher accuracy and faster processing speed than the previous algorithms were proposed. Moreover, by optimizing the integrated control algorithm, the memory issue that caused irregular system shutdowns was prevented, and the maneuvering stability of the autonomous vehicle in severe environments was enhanced.
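
The abstract names three traffic-sign recognizers (Cascade, Reinforced DSIFT SVM, YOLO) combined through a Mutual Validation Procedure, but does not give the exact validation rule. The sketch below shows one plausible reading, a 2-of-3 agreement check, as an assumption rather than the authors' actual MVP.

```python
# Hypothetical mutual-validation rule for the three traffic-sign detectors named
# above; the abstract does not specify the MVP, so 2-of-3 agreement is an assumption.
from collections import Counter

def mutual_validation(cascade_label, dsift_svm_label, yolo_label, min_agree=2):
    """Accept a sign label only when at least `min_agree` detectors agree."""
    votes = Counter([cascade_label, dsift_svm_label, yolo_label])
    label, count = votes.most_common(1)[0]
    return label if count >= min_agree else None   # None -> reject, wait for the next frame

print(mutual_validation("stop", "stop", "speed_30"))   # -> "stop"
print(mutual_validation("stop", "yield", "speed_30"))  # -> None (no consensus)
```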

Fault Classification of a Blade Pitch System in a Floating Wind Turbine Based on a Recurrent Neural Network

  • Cho, Seongpil;Park, Jongseo;Choi, Minjoo
    • Journal of Ocean Engineering and Technology / v.35 no.4 / pp.287-295 / 2021
  • This paper describes a recurrent neural network (RNN) for the fault classification of the blade pitch system of a spar-type floating wind turbine. An artificial neural network (ANN) can effectively recognize multiple faults of a system and build a training model with training data for decision-making. The ANN comprises an encoder and a decoder. The encoder uses a gated recurrent unit, which is a recurrent neural network, for dimensionality reduction of the input data. The decoder uses a multilayer perceptron (MLP) for diagnosis decision-making. To create data, we use a wind turbine simulator that enables fully coupled nonlinear time-domain numerical simulations of offshore wind turbines, considering six fault types including biases and fixed outputs in pitch sensors and excessive friction, slit lock, incorrect voltage, and short circuits in actuators. The input data are time-series data collected by two sensors and two control inputs under the condition that one fault of the six types occurs. A gated recurrent unit (GRU), which is one of the RNNs, classifies the suggested faults of the blade pitch system. The performance of fault classification based on the gated recurrent unit is evaluated by a test procedure, and the results indicate that the proposed scheme works effectively. The proposed ANN shows a 1.4% improvement in performance compared to an MLP-based approach.
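
The classifier above is a GRU encoder for dimensionality reduction followed by an MLP decoder for the diagnosis decision. The sketch below mirrors that encoder-decoder split in PyTorch; the layer sizes, sequence length, and the four input channels (two sensors plus two control inputs) are assumptions for illustration.

```python
# Minimal sketch of the GRU-encoder / MLP-decoder classifier described above.
# Layer sizes, sequence length, and the four input channels are assumed.
import torch
import torch.nn as nn

class PitchFaultClassifier(nn.Module):
    def __init__(self, n_channels=4, hidden=32, n_faults=6):
        super().__init__()
        self.encoder = nn.GRU(input_size=n_channels, hidden_size=hidden, batch_first=True)
        self.decoder = nn.Sequential(             # MLP for diagnosis decision-making
            nn.Linear(hidden, 64), nn.ReLU(),
            nn.Linear(64, n_faults))

    def forward(self, x):                          # x: (batch, time, channels)
        _, h = self.encoder(x)                     # h: (1, batch, hidden), last hidden state
        return self.decoder(h.squeeze(0))          # logits over the six fault classes

model = PitchFaultClassifier()
logits = model(torch.randn(8, 200, 4))             # 8 sequences, 200 time steps, 4 channels
print(logits.shape)                                # torch.Size([8, 6])
```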

Enhancing Acute Kidney Injury Prediction through Integration of Drug Features in Intensive Care Units

  • Gabriel D. M. Manalu;Mulomba Mukendi Christian;Songhee You;Hyebong Choi
    • International journal of advanced smart convergence / v.12 no.4 / pp.434-442 / 2023
  • The relationship between acute kidney injury (AKI) prediction and nephrotoxic drugs, or drugs that adversely affect kidney function, is one that has yet to be explored in the critical care setting. One contributing factor to this gap in research is the limited investigation of drug modalities in the intensive care unit (ICU) context, due to the challenges of processing prescription data into the corresponding drug representations and a lack of comprehensive understanding of these drug representations. This study addresses this gap by proposing a novel approach that leverages patient prescription data as a modality to improve existing models for AKI prediction. We base our research on Electronic Health Record (EHR) data, extracting the relevant patient prescription information and converting it into the selected drug representation for our research, the extended-connectivity fingerprint (ECFP). Furthermore, we adopt a unique multimodal approach, developing machine learning models and 1D Convolutional Neural Networks (CNN) applied to clinical drug representations, establishing a procedure which has not been used by any previous studies predicting AKI. The findings showcase a notable improvement in AKI prediction through the integration of drug embeddings and other patient cohort features. By using drug features represented as ECFP molecular fingerprints along with common cohort features such as demographics and lab test values, we achieved a considerable improvement in model performance for the AKI prediction task over the baseline model, which does not include the drug representations as features, indicating that our distinct approach enhances existing baseline techniques and highlights the relevance of drug data in predicting AKI in the ICU setting.
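
The drug modality above is the extended-connectivity fingerprint (ECFP). The sketch below shows one common way to obtain such fingerprints, using RDKit's Morgan fingerprints (radius 2 roughly corresponds to ECFP4) and concatenating them with cohort features; the toolkit choice, the example SMILES, and the cohort vector are assumptions, not details from the paper.

```python
# Sketch of ECFP-style drug features, assuming RDKit; the SMILES string and the
# cohort feature values below are illustrative, not the study's data.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
import numpy as np

def ecfp_features(smiles, radius=2, n_bits=2048):
    """ECFP (Morgan) fingerprint as a numpy bit vector; radius 2 ~ ECFP4."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return np.zeros(n_bits, dtype=np.int8)     # unparsable prescription entry
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

# Concatenate drug bits with cohort features (demographics, lab values) for a model.
drug_vec = ecfp_features("CC(=O)Nc1ccc(O)cc1")     # acetaminophen, illustrative only
cohort_vec = np.array([65.0, 1.0, 1.8])            # e.g., age, sex flag, creatinine (assumed)
x = np.concatenate([drug_vec, cohort_vec])
print(x.shape)                                      # (2051,)
```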

Performance Optimization Strategies for Fully Utilizing Apache Spark (아파치 스파크 활용 극대화를 위한 성능 최적화 기법)

  • Myung, Rohyoung;Yu, Heonchang;Choi, Sukyong
    • KIPS Transactions on Computer and Communication Systems / v.7 no.1 / pp.9-18 / 2018
  • Enhancing the performance of big data analytics in distributed environments has become an important issue because most big data applications, such as machine learning techniques and streaming services, generally utilize distributed computing frameworks. Thus, optimizing the performance of those applications on Spark has been actively researched. Optimizing application performance in a distributed environment is challenging because it requires not only optimizing the applications themselves but also tuning the distributed system configuration parameters. Although prior research has made a huge effort to improve execution performance, most of it focused on only one of three performance optimization aspects (application design, system tuning, or hardware utilization) and therefore could not orchestrate those aspects together. In this paper, we deeply analyze and model the application processing procedure of Spark. Based on the analysis, we propose performance optimization schemes for each step of the procedure: the inner stage and the outer stage. We also propose an appropriate partitioning mechanism by analyzing the relationship between partitioning parallelism and application performance. We applied these three performance optimization schemes to WordCount, PageRank, and K-means, which are basic big data analytics workloads, and found nearly 50% performance improvement when all of the schemes were applied.
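
One of the schemes above ties partitioning parallelism to application performance. The sketch below shows where that knob appears in a PySpark WordCount job; the partition count, configuration value, and file path are illustrative, and a suitable value generally depends on the cluster's core count and data size rather than the fixed number used here.

```python
# Minimal sketch of the partitioning-parallelism idea applied to WordCount.
# The partition count (96) and the input path are illustrative assumptions.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("wordcount-tuning")
         .config("spark.sql.shuffle.partitions", "96")   # shuffle parallelism (assumed cluster size)
         .getOrCreate())

lines = spark.sparkContext.textFile("hdfs:///data/corpus.txt", minPartitions=96)
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b, numPartitions=96))  # reduce-side parallelism
counts.persist()                                          # cache for reuse across later stages
print(counts.take(5))
```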