• Title/Summary/Keyword: Smart Machine

Search Results: 863

Explainable Photovoltaic Power Forecasting Scheme Using BiLSTM (BiLSTM 기반의 설명 가능한 태양광 발전량 예측 기법)

  • Park, Sungwoo;Jung, Seungmin;Moon, Jaeuk;Hwang, Eenjun
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.8
    • /
    • pp.339-346
    • /
    • 2022
  • Recently, resource depletion and climate change caused by the massive use of fossil fuels for electric power generation have become critical issues worldwide. Accordingly, interest in renewable energy resources that can replace fossil fuels is increasing. Photovoltaic power in particular has gained much attention because, unlike other energy resources, it carries no risk of resource exhaustion and imposes few restrictions on where a photovoltaic system can be installed. To use the power generated by a photovoltaic system efficiently, a more accurate photovoltaic power forecasting model is required. Although many machine learning and deep learning-based photovoltaic power forecasting models have been proposed, they have shown limited success in terms of interpretability: deep learning-based forecasting models make it difficult to explain how their forecasts are derived. To solve this problem, many studies are being conducted on explainable artificial intelligence (XAI) techniques. If it is possible to interpret how a model derives its results, the reliability of the model can be secured, and the model can be improved to increase forecasting accuracy based on the analysis. Therefore, in this paper, we propose an explainable photovoltaic power forecasting scheme based on BiLSTM (Bidirectional Long Short-Term Memory) and SHAP (SHapley Additive exPlanations).
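
The following is a minimal, hedged sketch of the approach this abstract describes: a Keras BiLSTM forecaster over a multivariate weather/PV window, explained with SHAP. The feature names, window length, and toy data are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch (not the paper's implementation): a BiLSTM that forecasts the
# next-hour PV output from a 24-hour window of weather/PV features, explained
# with SHAP. Feature names, window length, and the toy data are assumptions.
import numpy as np
import shap
from tensorflow.keras import layers, models

FEATURES = ["irradiance", "temperature", "humidity", "cloud_cover", "pv_lag_1h"]
WINDOW = 24  # hours of history per sample (assumed)

model = models.Sequential([
    layers.Input(shape=(WINDOW, len(FEATURES))),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),  # next-hour PV generation
])
model.compile(optimizer="adam", loss="mse")

# Toy data standing in for a real PV/weather time series.
X = np.random.rand(500, WINDOW, len(FEATURES)).astype("float32")
y = 3.0 * X[:, -1, 0:1] + np.random.normal(0, 0.05, (500, 1)).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

# SHAP attributions: GradientExplainer handles Keras recurrent models; the
# attributions have shape (samples, window, features), so summing over the
# time axis gives a per-feature importance for each forecast.
explainer = shap.GradientExplainer(model, X[:100])
shap_values = explainer.shap_values(X[:10])
sv = shap_values[0] if isinstance(shap_values, list) else shap_values
per_feature = np.abs(np.squeeze(np.asarray(sv))).sum(axis=1).mean(axis=0)
for name, score in sorted(zip(FEATURES, per_feature), key=lambda t: -t[1]):
    print(f"{name}: {score:.4f}")
```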

Analysis of the Effectiveness of Big Data-Based Six Sigma Methodology: Focus on DX SS (빅데이터 기반 6시그마 방법론의 유효성 분석: DX SS를 중심으로)

  • Kim Jung Hyuk;Kim Yoon Ki
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.13 no.1
    • /
    • pp.1-16
    • /
    • 2024
  • Over recent years, 6 Sigma has become a key methodology in manufacturing for quality improvement and cost reduction. However, challenges have arisen from the difficulty of analyzing the large-scale data generated by smart factories and from the rigid, formalized way the methodology has traditionally been applied. To address these limitations, a big data-based 6 Sigma approach has been developed that integrates the strengths of 6 Sigma and big data analysis, including statistical verification, mathematical optimization, interpretability, and machine learning. Despite its potential, the practical impact of this big data-based 6 Sigma on manufacturing processes and management performance has not been adequately verified, limiting its perceived reliability and its use in practice. This study investigates the effect of DX SS, a big data-based 6 Sigma methodology, on manufacturing process efficiency and identifies key success policies for its effective introduction and implementation in enterprises. The study highlights the importance of involving all executives and employees and of researching key success policies, as demonstrated by cases in which implementation failed because of incorrect policies. This research aims to help manufacturing companies achieve successful outcomes by actively adopting and utilizing the methodologies presented.

Performance Evaluation and Analysis on Single and Multi-Network Virtualization Systems with Virtio and SR-IOV (가상화 시스템에서 Virtio와 SR-IOV 적용에 대한 단일 및 다중 네트워크 성능 평가 및 분석)

  • Jaehak Lee;Jongbeom Lim;Heonchang Yu
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.2
    • /
    • pp.48-59
    • /
    • 2024
  • As hardware functions that natively support virtualization have been developed, user applications with various workloads can now run efficiently in virtualization systems. SR-IOV is a virtualization support function that provides direct access to PCI devices, delivering high I/O performance by minimizing hypervisor and operating system intervention. With SR-IOV, network I/O acceleration can be realized in virtualization systems, which have relatively long I/O paths compared to bare-metal systems and frequent context switches between the user area and the kernel area. To exploit the performance advantages of SR-IOV, network resource management policies that derive optimal network performance when SR-IOV is applied to an instance such as a virtual machine (VM) or container are being actively studied. This paper evaluates and analyzes the network performance of SR-IOV, which implements I/O acceleration, against Virtio in terms of 1) network delay, 2) network throughput, 3) network fairness, 4) performance interference, and 5) multi-network configurations. The contributions of this paper are as follows. First, the network I/O paths of Virtio and SR-IOV in a virtualization system are clearly explained; second, the network performance of Virtio and SR-IOV is analyzed based on various performance metrics; third, the system overhead and the potential for optimization of SR-IOV networking in a virtualization system with high VM density are experimentally confirmed. The experimental results and analysis are expected to serve as a reference for network resource management policies in virtualization systems that operate network-intensive services such as smart factories, connected cars, deep learning inference models, and crowdsourcing.
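
As a small, hedged illustration of the host-side bookkeeping such resource management policies rely on, the Python sketch below lists which NICs on a Linux host expose SR-IOV and how many virtual functions (VFs) are currently enabled, using the standard sysfs attributes; it is not part of the paper's evaluation setup.

```python
# Minimal sketch (assumes a Linux host with sysfs): report SR-IOV capability
# and the number of enabled virtual functions (VFs) per network interface.
from pathlib import Path

def sriov_status():
    """Yield (interface, total_vfs, enabled_vfs) for SR-IOV-capable NICs."""
    for dev in sorted(Path("/sys/class/net").iterdir()):
        total = dev / "device" / "sriov_totalvfs"
        enabled = dev / "device" / "sriov_numvfs"
        if total.exists() and enabled.exists():
            yield dev.name, int(total.read_text()), int(enabled.read_text())

if __name__ == "__main__":
    for name, total, enabled in sriov_status():
        print(f"{name}: {enabled}/{total} VFs enabled")
```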

Development of Algorithm for Vibration Analysis Automation of Rotating Equipments Based on ISO 20816 (ISO 20816 기반 회전기기 진동분석 자동화 알고리즘 개발)

  • JaeWoong Lee;Ugiyeon Lee;Jeongseok Oh
    • Journal of the Korean Institute of Gas
    • /
    • v.28 no.2
    • /
    • pp.93-104
    • /
    • 2024
  • Facility diagnosis is essential for the smooth operation and life extension of rotating equipment used at industrial sites. Compared to other diagnostic methods, vibration diagnosis can detect most early defects, such as unbalance, misalignment, bearing defects, and resonance. Vibration analysis is therefore the most commonly used facility diagnosis method at industrial sites and serves as a useful predictive maintenance (PdM) technology for managing the condition of equipment. However, because vibration diagnosis is performed on the basis of expert experience applied on top of the standard, it has been carried out only by specialists. This study therefore aims to contribute to facility reliability by codifying this experience-based diagnosis method into a knowledge-based code system with which anyone can easily judge defects. An algorithm was developed based on the ISO 20816 standard for vibration measurement, and its reliability was verified by comparing vibration measurements from various demonstration sites, such as petrochemical plant compressors, hydrogen charging stations, and industrial machinery, with the results produced by the developed system. The developed algorithm can contribute to predictive maintenance (PdM) by enabling anyone to diagnose the condition of rotating machinery on site, identify defects early, and replace parts at exactly the right time. Furthermore, when applied to various industrial sites such as oil refining facilities, transportation, production facilities, and aviation facilities, it is expected to reduce the maintenance costs and downtime caused by rotating machine failures.
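
As a hedged illustration of the kind of rule such an automation algorithm encodes, the sketch below maps a measured RMS vibration velocity to an ISO 20816-style severity zone. The zone boundaries are illustrative placeholders only; the normative limits depend on the machine group, mounting, and the applicable part of ISO 20816.

```python
# Minimal sketch of the rule being automated: map a measured RMS vibration
# velocity (mm/s) to an ISO 20816-style severity zone. The boundaries below
# are illustrative placeholders, not the normative ISO table.
ZONE_LIMITS = [(2.8, "A (newly commissioned)"),
               (4.5, "B (acceptable for long-term operation)"),
               (7.1, "C (restricted operation, plan maintenance)")]

def severity_zone(rms_velocity_mm_s: float) -> str:
    for limit, zone in ZONE_LIMITS:
        if rms_velocity_mm_s <= limit:
            return zone
    return "D (risk of damage, corrective action required)"

print(severity_zone(3.9))  # -> "B (acceptable for long-term operation)"
```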

Optimizing Clustering and Predictive Modelling for 3-D Road Network Analysis Using Explainable AI

  • Rotsnarani Sethy;Soumya Ranjan Mahanta;Mrutyunjaya Panda
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.9
    • /
    • pp.30-40
    • /
    • 2024
  • Building an accurate 3-D spatial road network model has become an active area of research and promises a new paradigm for developing smart roads and intelligent transportation systems (ITS), helping public and private road operators deliver better road mobility and eco-routing so that smoother traffic, lower carbon emissions, and improved road safety can be ensured. Dealing with such large-scale 3-D road network data poses challenges in obtaining accurate elevation information, which is needed to better estimate CO2 emissions and to provide accurate routing for vehicles in an Internet of Vehicles (IoV) scenario. Clustering and regression techniques are suitable for recovering the missing elevation information at some points of a 3-D spatial road network, which is expected to give the public a better eco-routing experience. Further, Explainable Artificial Intelligence (xAI) has recently drawn researchers' attention as a way to make models more interpretable, transparent, and comprehensible, enabling the design of models that suit users' requirements. The 3-D road network dataset, comprising spatial attributes (longitude, latitude, altitude) of North Jutland, Denmark, and collected from the publicly available UCI repository, is preprocessed through feature engineering and scaling to ensure optimal accuracy for the clustering and regression tasks. K-Means clustering and regression using a Support Vector Machine (SVM) with a radial basis function (RBF) kernel are employed for the 3-D road network analysis. Silhouette scores and the number of clusters are used to measure cluster quality, whereas error metrics such as MAE (Mean Absolute Error) and RMSE (Root Mean Square Error) are used to evaluate the regression. To improve the interpretability of the clustering and regression models, SHAP (SHapley Additive exPlanations), a powerful xAI technique, is employed in this research. Extensive experiments show that SHAP analysis validated the importance of latitude and altitude in predicting longitude, particularly in the four-cluster setup, providing critical insights into model behavior and feature contributions, with an accuracy of 97.22% and strong performance metrics across all classes (MAE of 0.0346 and MSE of 0.0018). The ten-cluster setup, while faster in SHAP analysis, presented challenges in interpretability due to the increased clustering complexity. Hence, the hybrid of K-Means clustering with K=4 and SVM demonstrated superior performance and interpretability, highlighting the importance of careful cluster selection to balance model complexity and predictive accuracy.
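
The following is a minimal sketch, on synthetic stand-in data, of the cluster-then-regress pattern the abstract describes: K-Means with K=4 over scaled coordinates, followed by an RBF-kernel support vector regressor per cluster, evaluated with the silhouette score and MAE. It is illustrative only and does not reproduce the paper's dataset or results.

```python
# Minimal sketch (not the authors' pipeline): cluster 3-D road-network points
# with K-Means (k=4), then fit an RBF-kernel SVR per cluster to predict
# longitude from latitude and altitude. Synthetic data stands in for the
# UCI North Jutland dataset.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, mean_absolute_error
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
lat = rng.uniform(56.5, 57.5, 2000)
alt = rng.uniform(0, 120, 2000)
lon = 9.5 + 0.8 * (lat - 57.0) + 0.002 * alt + rng.normal(0, 0.01, 2000)
X = StandardScaler().fit_transform(np.column_stack([lat, alt]))

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print("silhouette:", round(silhouette_score(X, km.labels_), 3))

errors = []
for c in range(4):
    idx = km.labels_ == c
    svr = SVR(kernel="rbf", C=10.0, epsilon=0.001).fit(X[idx], lon[idx])
    errors.append(mean_absolute_error(lon[idx], svr.predict(X[idx])))
print("per-cluster MAE:", [round(e, 4) for e in errors])
```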

A Study on the RFID's Application Environment and Application Measure for Security (RFID의 보안업무 적용환경과 적용방안에 관한 연구)

  • Chung, Tae-Hwang
    • Korean Security Journal
    • /
    • no.21
    • /
    • pp.155-175
    • /
    • 2009
  • RFID, which provides automatic identification by reading a tag attached to an item via radio frequency without direct contact, offers rapid identification, long-distance identification, and penetration, so it is used in distribution, transportation, and safety applications at frequencies of 125 kHz, 134 kHz, 13.56 MHz, 433.92 MHz, 900 MHz, and 2.45 GHz. It is also a key component of ubiquitous computing, which means being connected to a network at any time and in any place. RFID is expected to become a new growth industry worldwide, so the Korean government regards it as a promising field and promotes research projects and exhibition programs to link it effectively with industry. RFID can be used for access control of people and vehicles by zone and for personal authentication with a password. RFID can provide stronger security than magnetic cards, so it can be used to prevent forgery of registration cards, passports, and other documents. Active RFID, with its long-distance data transmission, can be combined with positioning systems for protective operation services. RFID's identification and tracking functions can also support effective visitor management through visitor registration, personal identification, and position checking, and can restrict visitors from moving into secure areas without approval. RFID also enables the efficient management and loss prevention of portable equipment and other items. Applied to copying machines, RFID can manage and control users and copy volume, and can provide functions such as monitoring of copied content and user access control. An RFID tag attached to a small storage device can prevent items from being carried out by using the position tracking function and can control the carrying-in and carrying-out of material efficiently. Magnetic cards and smart cards have served well for personal identification and access control, but RFID can perform all of the functions above. RFID is a very useful technology, but privacy protection should be considered in its application.


A Checklist to Improve the Fairness in AI Financial Service: Focused on the AI-based Credit Scoring Service (인공지능 기반 금융서비스의 공정성 확보를 위한 체크리스트 제안: 인공지능 기반 개인신용평가를 중심으로)

  • Kim, HaYeong;Heo, JeongYun;Kwon, Hochang
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.3
    • /
    • pp.259-278
    • /
    • 2022
  • With the spread of Artificial Intelligence (AI), various AI-based services are expanding in the financial sector, such as service recommendation, automated customer response, fraud detection systems (FDS), and credit scoring services. At the same time, problems related to reliability and unexpected social controversy are also arising from the nature of data-based machine learning. Against this background, this study aims to contribute to improving trust in AI-based financial services by proposing a checklist to secure fairness in AI-based credit scoring services, which directly affect consumers' financial lives. Among the key elements of trustworthy AI, such as transparency, safety, accountability, and fairness, fairness was selected as the subject of the study so that everyone can enjoy the benefits of automated algorithms from the perspective of inclusive finance, without social discrimination. Through a literature review, we divided the entire fairness-related operation process into three areas: data, algorithm, and user. For each area, we constructed four detailed considerations for evaluation, resulting in a 12-item checklist. The relative importance and priority of the categories were evaluated through the analytic hierarchy process (AHP), using three groups that represent the full range of financial stakeholders: financial-sector workers, AI-sector workers, and general users. The three groups were classified and analyzed according to the importance each assigned; from a practical perspective, specific checks were identified, such as feasibility verification for the use of training data and non-financial information, and monitoring of newly inflowing data. Moreover, financial consumers in general were found to be highly concerned with the accuracy of result analysis and with bias checks. We expect these results to contribute to the design and operation of fair AI-based financial services.
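
As a hedged illustration of the AHP weighting step mentioned above, the sketch below derives priority weights for the three checklist areas from a pairwise comparison matrix via the principal eigenvector and reports a consistency ratio. The comparison values are invented for illustration and are not the study's survey results.

```python
# Minimal AHP sketch: weights for three checklist areas from an illustrative
# pairwise comparison matrix (Saaty scale); not the study's survey data.
import numpy as np

AREAS = ["data", "algorithm", "user"]
# pairwise[i, j] = how much more important area i is than area j
pairwise = np.array([[1.0, 2.0, 3.0],
                     [1/2, 1.0, 2.0],
                     [1/3, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(pairwise)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = len(AREAS)
ci = (eigvals[k].real - n) / (n - 1)  # consistency index
cr = ci / 0.58                        # random index for n=3 is 0.58
for area, w in zip(AREAS, weights):
    print(f"{area}: {w:.3f}")
print(f"consistency ratio: {cr:.3f}")  # < 0.10 is conventionally acceptable
```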

Validation of nutrient intake of smartphone application through comparison of photographs before and after meals (식사 전후의 사진 비교를 통한 스마트폰 앱의 영양소섭취량 타당도 평가)

  • Lee, Hyejin;Kim, Eunbin;Kim, Su Hyeon;Lim, Haeun;Park, Yeong Mi;Kang, Joon Ho;Kim, Heewon;Kim, Jinho;Park, Woong-Yang;Park, Seongjin;Kim, Jinki;Yang, Yoon Jung
    • Journal of Nutrition and Health
    • /
    • v.53 no.3
    • /
    • pp.319-328
    • /
    • 2020
  • Purpose: This study was conducted to evaluate the validity of the Gene-Health application in estimating energy and macronutrient intakes. Methods: The subjects were 98 healthy adults participating in a weight-control intervention study. They recorded their diets in the Gene-Health application, took photographs before and after every meal on the same day, and uploaded them to the application. The amounts of foods and drinks consumed were estimated from the photographs by trained experts, and the nutrient intakes were calculated using the CAN-Pro 5.0 program; this method was named 'Photo Estimation'. The energy and macronutrient intakes estimated by the Gene-Health application were compared with those from Photo Estimation, and the mean differences between the two methods were tested with a paired t-test. Results: The mean energy intakes of Gene-Health and Photo Estimation were 1,937.0 kcal and 1,928.3 kcal, respectively. There were no significant differences between the two methods in the intakes of energy, carbohydrate, and fat, or in the percentage of energy from fat. The protein intake and percentage of energy from protein of the Gene-Health application were higher than those of Photo Estimation, whereas the percentage of energy from carbohydrate was higher for Photo Estimation. The Pearson correlation coefficients, weighted kappa coefficients, and adjacent agreements for energy and macronutrient intakes between the two methods ranged from 0.382 to 0.607, 0.588 to 0.649, and 79.6% to 86.7%, respectively. Conclusion: The Gene-Health application shows acceptable validity as a dietary intake assessment tool for energy and macronutrients. Further studies with female subjects and various age groups will be needed.
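
A minimal sketch of the agreement statistics used in this comparison, a paired t-test and a Pearson correlation between the two methods' energy estimates, is given below with toy values; it does not use the study's data.

```python
# Minimal sketch: paired t-test and Pearson correlation between energy intakes
# estimated by two methods. The arrays are toy values, not the study's data.
import numpy as np
from scipy import stats

app_kcal   = np.array([1820, 2105, 1660, 1990, 2230, 1750, 1930, 2010])
photo_kcal = np.array([1795, 2150, 1700, 1955, 2200, 1780, 1905, 2040])

t_stat, p_value = stats.ttest_rel(app_kcal, photo_kcal)
r, _ = stats.pearsonr(app_kcal, photo_kcal)
print(f"paired t-test: t={t_stat:.2f}, p={p_value:.3f}")
print(f"Pearson r: {r:.3f}")
```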

A Methodology of Customer Churn Prediction based on Two-Dimensional Loyalty Segmentation (이차원 고객충성도 세그먼트 기반의 고객이탈예측 방법론)

  • Kim, Hyung Su;Hong, Seung Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.111-126
    • /
    • 2020
  • Most industries have recently become aware of the importance of customer lifetime value as they are exposed to a competitive environment. As a result, preventing customer churn is becoming a more important business issue than acquiring new customers, because retaining existing customers is far more economical than acquiring new ones; in fact, the cost of acquiring a new customer is known to be five to six times the cost of retaining an existing one. Companies that effectively prevent customer churn and improve retention rates are also known to benefit not only from increased profitability but also from an improved brand image through higher customer satisfaction. Customer churn prediction, long conducted as a sub-area of CRM research, has recently become more important as a big data-based performance marketing theme thanks to the development of business machine learning technology. Until now, research on churn prediction has been carried out actively in sectors such as mobile telecommunications, finance, distribution, and gaming, which are highly competitive and where churn management is urgent. These studies focused on improving the performance of the churn prediction model itself, for example by simply comparing the performance of various models, exploring features that are effective for forecasting churn, or developing new ensemble techniques, and they were limited in terms of practical utility because most of them treated the entire customer base as a single group when developing a predictive model. In other words, the main purpose of the existing research was to improve the performance of the predictive model itself, and there was relatively little research on improving the overall churn prediction process. In practice, customers exhibit different behavioral characteristics due to heterogeneous transaction patterns, and their churn rates differ accordingly, so it is unreasonable to treat all customers as a single group. It is therefore desirable, in heterogeneous industries, to segment customers according to classification criteria such as loyalty and to operate an appropriate churn prediction model for each segment. Some studies have indeed segmented customers using clustering techniques and applied a churn prediction model to each group. Although this can produce better predictions than a single model for the entire customer population, there is still room for improvement, because clustering is a mechanical, exploratory grouping technique that computes distances over the inputs and does not reflect the strategic intent of the company, such as loyalty. This study proposes a segment-based customer churn prediction process (CCP/2DL: Customer Churn Prediction based on Two-Dimensional Loyalty segmentation) based on two-dimensional customer loyalty, on the assumption that successful churn management is achieved more through improvements in the overall process than through the performance of the model itself.
CCP/2DL is a churn prediction process that segments customers along two loyalty dimensions, quantitative and qualitative, performs a secondary grouping of the resulting segments according to their churn patterns, and then independently applies heterogeneous churn prediction models to each churn-pattern group. Performance comparisons were carried out against the most commonly applied general churn prediction process and a clustering-based churn prediction process to assess the relative merit of the proposed process. The general churn prediction process refers to applying a single machine learning model to the entire customer base using the most commonly used churn prediction approach, while the clustering-based process first segments customers with clustering techniques and then builds a churn prediction model for each group. In an evaluation conducted in cooperation with a global NGO, the proposed CCP/2DL showed better churn prediction performance than the other methodologies. This churn prediction process is not only effective in predicting churn but can also serve as a strategic basis for obtaining a variety of customer insights and carrying out related performance marketing activities.
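
The sketch below illustrates the segment-then-predict idea behind CCP/2DL under simplifying assumptions: customers are split into four segments by two loyalty dimensions (above/below the median), and a separate churn classifier is trained per segment instead of one model for the whole base. The features, data, and gradient-boosting model are placeholders, not the paper's actual pipeline.

```python
# Minimal sketch of two-dimensional loyalty segmentation followed by
# per-segment churn models. All data and features are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 4000
quant_loyalty = rng.gamma(2.0, 50, n)   # e.g., spend or visit frequency
qual_loyalty = rng.uniform(0, 1, n)     # e.g., survey-based attitude score
behavior = rng.normal(size=(n, 5))      # other behavioral features
churn = (rng.uniform(size=n) < 0.6 / (1 + quant_loyalty / 50)).astype(int)

X = np.column_stack([quant_loyalty, qual_loyalty, behavior])
segment = ((quant_loyalty > np.median(quant_loyalty)).astype(int) * 2
           + (qual_loyalty > np.median(qual_loyalty)).astype(int))

for seg in range(4):
    idx = segment == seg
    X_tr, X_te, y_tr, y_te = train_test_split(
        X[idx], churn[idx], test_size=0.3, random_state=0, stratify=churn[idx])
    clf = GradientBoostingClassifier().fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"segment {seg}: churn rate={churn[idx].mean():.2f}, AUC={auc:.3f}")
```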

Development Process for User Needs-based Chatbot: Focusing on Design Thinking Methodology (사용자 니즈 기반의 챗봇 개발 프로세스: 디자인 사고방법론을 중심으로)

  • Kim, Museong;Seo, Bong-Goon;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.221-238
    • /
    • 2019
  • Recently, companies and public institutions have been actively introducing chatbot services in the field of customer counseling and response. Introducing a chatbot service not only saves labor costs for companies and organizations but also enables rapid communication with customers. Advances in data analytics and artificial intelligence are driving the growth of these chatbot services. Current chatbots can understand users' questions and offer the most appropriate answers through machine learning and deep learning. The advancement of core chatbot technologies such as NLP, NLU, and NLG has made it possible to understand words, paragraphs, meanings, and emotions. For this reason, the value of chatbots continues to rise. However, technology-driven chatbots can be inconsistent with what users inherently want, so chatbots need to be addressed from the perspective of user experience, not just technology. The Fourth Industrial Revolution highlights the importance of user experience as well as the advancement of artificial intelligence, big data, cloud, and IoT technologies. The development of IT technology and the growing importance of user experience have provided people with a variety of environments and changed lifestyles, which means that experiences in interactions with people, services (products), and the environment become very important. Therefore, it is time to develop user needs-based services (products) that can provide new experiences and value to people. This study proposes a chatbot development process based on user needs by applying the design thinking approach, a representative methodology in the field of user experience, to chatbot development. The proposed process consists of four steps. The first step, 'Setting up the knowledge domain', establishes the chatbot's area of expertise. The second step, 'Knowledge accumulation and insight identification', accumulates information corresponding to the configured domain and derives insights. The third step is 'Opportunity development and prototyping', where full-scale development begins. Finally, the 'User feedback' step collects feedback from users on the developed prototype. This produces a user needs-based service (product) that meets the objectives of the process. Beginning with fact gathering through user observation, the process abstracts the findings to derive insights and explore opportunities; a chatbot that meets the user's needs is then developed by concretizing these insights, structuring the desired information, and providing functions that fit the user's mental model. To confirm the effectiveness of the proposed process, we present an actual construction example for the domestic cosmetics market. This market was chosen because user experience plays a strong role in it, so responses from users can be understood quickly. This study has a theoretical implication in that it proposes a new chatbot development process by incorporating the design thinking methodology, and it differs from existing chatbot development research in that it focuses on user experience rather than technology. It also has practical implications in that it offers companies and institutions a realistic method that can be applied immediately.
In particular, the process proposed in this study can be used by anyone, since user needs-based chatbots can be developed even by non-experts. Because only one field was examined, further studies are needed: in addition to the cosmetics market, additional research should be conducted in various fields where user experience matters, such as the smartphone and automotive markets. Through this, the approach can mature into a general process for developing chatbots centered on user experience rather than technology.