• Title/Summary/Keyword: AI 기법 (AI techniques)

Search results: 539

Domain Knowledge Incorporated Counterfactual Example-Based Explanation for Bankruptcy Prediction Model (부도예측모형에서 도메인 지식을 통합한 반사실적 예시 기반 설명력 증진 방법)

  • Cho, Soo Hyun;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.2
    • /
    • pp.307-332
    • /
    • 2022
  • One of the most intensively studied areas in business applications is the bankruptcy prediction model, a representative classification problem related to loan lending, investment decision making, and profitability for financial institutions. Many studies have demonstrated outstanding performance of bankruptcy prediction models built with artificial intelligence techniques. However, since most machine learning algorithms are "black boxes," explainable AI has emerged as a prominent research topic for providing users with explanations. Although there are many approaches to explanation, this study focuses on explaining a bankruptcy prediction model using counterfactual examples. With a counterfactual-based explanation, which provides an alternative case, users can learn how to obtain the desired output from the model. This study introduces a counterfactual generation technique based on a genetic algorithm (GA) that leverages both domain knowledge (i.e., causal feasibility) and feature importance from the black-box model, along with other critical counterfactual properties including proximity, distribution, and sparsity. The proposed method was evaluated quantitatively and qualitatively to measure its quality and validity.
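
The GA-based counterfactual generation above is described only at a high level; a minimal evolutionary sketch of the idea, assuming a fitted probabilistic classifier `model`, a query instance `x`, feature-importance weights `importance`, and a user-supplied `causally_feasible` check (all hypothetical names, and the weighting constants are arbitrary), might look like this:

```python
import numpy as np

def fitness(cand, x, model, importance, causally_feasible, target=0):
    """Score a candidate counterfactual: reach the target class while
    staying close and sparse, and respecting domain (causal) constraints."""
    pred = model.predict_proba(cand.reshape(1, -1))[0, target]
    proximity = -np.sum(importance * np.abs(cand - x))    # importance-weighted closeness
    sparsity = -np.count_nonzero(cand != x)               # change few features
    penalty = 0.0 if causally_feasible(cand) else -1e3    # domain knowledge
    return pred + 0.1 * proximity + 0.05 * sparsity + penalty

def generate_counterfactual(x, model, importance, causally_feasible,
                            pop_size=50, gens=100, sigma=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    pop = x + sigma * rng.standard_normal((pop_size, x.size))
    for _ in range(gens):
        scores = np.array([fitness(c, x, model, importance, causally_feasible)
                           for c in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]          # selection
        children = parents[rng.integers(len(parents), size=pop_size)]
        mask = rng.random(children.shape) < 0.2                      # mutation
        children = children + mask * sigma * rng.standard_normal(children.shape)
        pop = children
    return max(pop, key=lambda c: fitness(c, x, model, importance, causally_feasible))
```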

An Exploratory Research Trends Analysis in Journal of the Korea Contents Association using Topic Modeling (토픽 모델링을 활용한 한국콘텐츠학회 논문지 연구 동향 탐색)

  • Seok, Hye-Eun;Kim, Soo-Young;Lee, Yeon-Su;Cho, Hyun-Young;Lee, Soo-Kyoung;Kim, Kyoung-Hwa
    • The Journal of the Korea Contents Association
    • /
    • v.21 no.12
    • /
    • pp.95-106
    • /
    • 2021
  • The purpose of this study is to derive major topics in content R&D and provide directions for academic development by exploring research trends over the past 20 years, using topic modeling applied to 9,858 papers published in the Journal of the Korea Contents Association. To secure the reliability and validity of the extracted topics, both quantitative evaluation and qualitative review were applied step by step and repeated until a corpus at a level agreed upon by the researchers was generated, and the detailed analysis procedure is presented accordingly. As a result of the analysis, 8 core topics were extracted. This shows that the Korea Contents Association publishes convergent and interdisciplinary research papers across various fields without being limited to a specific academic field. Before 2012 the proportion of topics in engineering and technology was relatively high, while after 2012 the proportion of topics in the social sciences was relatively high. In particular, the topic of 'social welfare' increased fourfold in the second half compared to the first half. Through topic-specific trend analysis, we focused on the turning points at which the trend lines showed inflection, explored the external variables that affected the research trend of each topic, and identified the relationship between topics and external variables. It is hoped that the results of this study can inform active discussions in domestic content-related R&D and industrial fields.
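
The topic-modeling workflow summarized above broadly follows the standard LDA pipeline; a minimal scikit-learn sketch (the placeholder corpus and preprocessing are illustrative, and only the choice of 8 topics echoes the paper):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder corpus: in practice, the preprocessed abstracts of the journal papers.
docs = ["preprocessed abstract text one", "preprocessed abstract text two",
        "preprocessed abstract text three"]

# Build a document-term matrix, then fit LDA with a chosen number of topics.
vectorizer = CountVectorizer()
dtm = vectorizer.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=8, random_state=0)  # 8 topics, as in the paper
lda.fit(dtm)

# Inspect the top words per topic for the qualitative (human) validation step.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-10:][::-1]]
    print(f"Topic {k}: {', '.join(top)}")
```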

Study on future advertising change according to the development of artificial intelligence and metaverse (인공지능과 메타버스 발전에 따른 미래 광고 변화에 관한 연구)

  • Ahn, Jong-Bae
    • The Journal of the Convergence on Culture Technology
    • /
    • v.8 no.6
    • /
    • pp.873-879
    • /
    • 2022
  • AI and the metaverse are becoming so powerful that their application areas and influence are spreading across the world. The advertising field is no exception, and it is becoming ever more important to predict and analyze these future changes and build strategies for them. To study how advertising will change with the development of artificial intelligence and the metaverse, this study combines a literature review on the development of AI and metaverse technologies and the resulting changes in the advertising environment, in-depth interviews with futures and advertising experts, and the Delphi technique. First, through in-depth interviews, we examine expert opinions on the development of artificial intelligence and metaverse technology and on changes in the advertising sector in the post-coronavirus era of civilizational transformation. In addition, the Delphi technique is used to determine how important the changes are in the areas of future advertising technology, media, form, effect, application, and process, and at what point in the future they will occur. We also study in detail how future advertising forms will change and, based on this, propose countermeasures for the advertising industry.

Deep Learning-based Object Detection of Panels Door Open in Underground Utility Tunnel (딥러닝 기반 지하공동구 제어반 문열림 인식)

  • Gyunghwan Kim;Jieun Kim;Woosug Jung
    • Journal of the Society of Disaster Information
    • /
    • v.19 no.3
    • /
    • pp.665-672
    • /
    • 2023
  • Purpose: An underground utility tunnel is a facility that jointly houses urban infrastructure such as electricity, water, and gas, and it suffers from condensation problems due to lack of airflow. This paper aims to prevent electrical leakage fires caused by condensation by using a deep learning model to detect whether control panel doors in the underground utility tunnel are open. Method: YOLO, a deep learning object recognition model, is trained to recognize the opening and closing of control panel doors using video data captured by a robot patrolling the underground utility tunnel. Image augmentation is used to improve the recognition rate. Result: Among the image augmentation techniques, we compared the performance of the YOLO model trained with mosaic augmentation against the model trained without it, and found that the mosaic technique performed better. The mAP over all classes was 0.994, a high evaluation result. Conclusion: The model was able to detect the control panel even when the lights were off or other objects were present in the tunnel, which makes it possible to manage the underground utility tunnel effectively and to prevent disasters.
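
The abstract does not include training code; a hedged sketch of how such a door-open/closed detector might be trained and compared with and without mosaic augmentation using the Ultralytics YOLO package (the dataset file `panel_door.yaml`, the YOLOv8 checkpoint, and the hyperparameters are illustrative assumptions, not the paper's actual configuration):

```python
from ultralytics import YOLO

# Hypothetical dataset config listing door-open / door-closed classes.
DATA = "panel_door.yaml"

# Train once with mosaic augmentation enabled and once with it disabled,
# mirroring the paper's comparison of the two settings.
for mosaic in (1.0, 0.0):
    model = YOLO("yolov8n.pt")                         # small pretrained checkpoint
    model.train(data=DATA, epochs=100, imgsz=640, mosaic=mosaic)
    metrics = model.val()                              # reports mAP over the validation set
    print(f"mosaic={mosaic}: mAP50-95={metrics.box.map:.3f}")
```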

Method of Biological Information Analysis Based-on Object Contextual (대상객체 맥락 기반 생체정보 분석방법)

  • Kim, Kyung-jun;Kim, Ju-yeon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.41-43
    • /
    • 2022
  • To prevent and block infectious diseases in the wake of the COVID-19 pandemic, non-contact biometric information acquisition and analysis technology is attracting attention. Invasive and attached acquisition methods have the advantage of measuring biometric information accurately, but carry the risk of spreading contagious diseases through close contact. To solve this problem, non-contact methods that extract biometric information such as fingerprints, faces, irises, veins, voice, and signatures with automated devices are increasingly used in various industries as data processing speed and recognition accuracy improve. However, although the accuracy of non-contact biometric data acquisition has improved, non-contact methods are strongly influenced by the surrounding environment of the measured subject, resulting in distorted measurements and poor accuracy. In this paper, we propose a context-based bio-signal modeling technique for interpreting personalized information (images, signals, etc.) in biometric information analysis. The technique presents a model that considers contextual and user information during biometric measurement in order to improve performance. The proposed model analyzes signal information based on the feature probability distribution through context-based signal analysis that maximizes the probability of the predicted value.
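
The model itself is described only as context-conditioned feature probability analysis; the following loose sketch (not the authors' model) merely illustrates how per-context feature likelihoods could be mixed with context priors to maximize a predicted-value probability:

```python
import numpy as np

def contextual_posterior(feature, likelihoods, context_priors):
    """Mix per-context feature likelihoods with context priors and return
    a normalized posterior over candidate states (e.g., identities)."""
    posterior = None
    for ctx, prior in context_priors.items():
        lik = np.asarray(likelihoods[ctx](feature), dtype=float)  # P(feature | state, ctx)
        posterior = lik * prior if posterior is None else posterior + lik * prior
    return posterior / posterior.sum()

# Toy usage: two lighting contexts, three candidate identities.
likelihoods = {
    "bright": lambda f: [0.6, 0.3, 0.1],
    "dim":    lambda f: [0.2, 0.3, 0.5],
}
print(contextual_posterior(feature=None, likelihoods=likelihoods,
                           context_priors={"bright": 0.7, "dim": 0.3}))
```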

Development of real-time defect detection technology for water distribution and sewerage networks (시나리오 기반 상·하수도 관로의 실시간 결함검출 기술 개발)

  • Park, Dong Chae;Choi, Young Hwan
    • Journal of Korea Water Resources Association
    • /
    • v.55 no.spc1
    • /
    • pp.1177-1185
    • /
    • 2022
  • The water and sewage system is infrastructure that provides safe and clean water to people. In particular, since water and sewage pipelines are buried underground, it is very difficult to detect system defects. For this reason, pipeline diagnosis has been limited to after-the-fact defect detection, such as diagnosis based on pictures and videos taken with cameras and drones inside the pipelines. Therefore, real-time defect detection technology for pipelines is required. Recently, pipeline diagnosis technologies using advanced equipment and artificial intelligence techniques have been developed, but AI-based defect detection requires diverse training data because the types and amount of defect data affect detection performance. Therefore, in this study, various defect scenarios are implemented using 3D-printed models to improve detection performance for pipeline defects. The collected images then undergo pre-processing, such as classification according to the degree of risk and object labeling, and real-time defect detection is performed. The proposed technique can provide real-time feedback during pipeline defect detection, minimizing the possibility of missed diagnoses and improving the existing water and sewerage pipe diagnosis capability.
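
The real-time detection pipeline is described only in outline; a hedged sketch of the inference side, assuming a detector already trained on the 3D-printed defect scenarios (the weights file `pipe_defect.pt`, the video source, and the confidence threshold are placeholders):

```python
import cv2
from ultralytics import YOLO

model = YOLO("pipe_defect.pt")                       # hypothetical weights trained on scenario images
cap = cv2.VideoCapture("inspection_feed.mp4")        # or a live camera index, e.g. 0

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)[0]         # detect defects in this frame
    for box in results.boxes:
        cls = results.names[int(box.cls)]            # e.g. a crack or joint-displacement class
        conf = float(box.conf)
        if conf > 0.5:
            print(f"defect: {cls} ({conf:.2f})")     # real-time feedback to the operator
cap.release()
```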

Super High-Resolution Image Style Transfer (초-고해상도 영상 스타일 전이)

  • Kim, Yong-Goo
    • Journal of Broadcast Engineering
    • /
    • v.27 no.1
    • /
    • pp.104-123
    • /
    • 2022
  • Style transfer based on neural networks provides very high quality results by reflecting the high-level structural characteristics of images, and has therefore recently attracted great attention. This paper deals with the resolution limitation imposed by GPU memory when performing such neural style transfer. Because the receptive field has a fixed size, we can expect the gradient operation for style transfer computed on a partial image to produce the same result as the gradient operation on the entire image. Based on this idea, each component of the style transfer loss function is analyzed to obtain the necessary conditions for partitioning and padding, and to identify which of the information required for gradient calculation depends on the entire input. By structuring such information for use as an auxiliary constant input to partition-based gradient calculation, this paper develops a recursive algorithm for super high-resolution image style transfer. Since the proposed method performs style transfer by partitioning the input image into sizes a GPU can handle, it is not limited by input image resolution relative to GPU memory size. With such super high-resolution support, the proposed method can reproduce the unique style characteristics of detailed areas that can only be appreciated in super high-resolution style transfer.
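
A small sketch of the tiling idea behind partition-based processing, padding each tile by the receptive-field radius so that the valid center region matches full-image computation; the `process` callable and the padding size are stand-ins, not the paper's actual loss-gradient routine:

```python
import torch
import torch.nn.functional as F

def process_in_tiles(image, process, tile=512, pad=64):
    """Apply `process` to overlapping tiles of a CHW image and stitch the
    valid centers back together. `pad` should cover the receptive-field
    radius so each center matches what full-image processing would give."""
    c, h, w = image.shape
    out = torch.zeros_like(image)
    padded = F.pad(image.unsqueeze(0), (pad, pad, pad, pad), mode="reflect")[0]
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            th, tw = min(tile, h - y), min(tile, w - x)
            patch = padded[:, y:y + th + 2 * pad, x:x + tw + 2 * pad]
            result = process(patch.unsqueeze(0))[0]        # same spatial size as patch
            out[:, y:y + th, x:x + tw] = result[:, pad:pad + th, pad:pad + tw]
    return out

# Usage sketch: an identity "processing" step just to show the stitching is lossless.
img = torch.rand(3, 1200, 1600)
restored = process_in_tiles(img, process=lambda t: t)
assert torch.allclose(img, restored)
```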

Study on the Selection of Optimal Operation Position Using AI Techniques (인공지능 기법에 의한 최적 운항자세 선정에 관한 연구)

  • Dong-Woo Park
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.29 no.6
    • /
    • pp.681-687
    • /
    • 2023
  • The optimal operation position selection technique presents the initial bow and stern draft that gives minimum resistance, that is, optimal fuel consumption efficiency, at a given operating displacement and speed. The main purpose of this study is to develop a program that selects the operating position with maximum energy efficiency under given operating conditions, based on the effective power data of the target ship. The program was written as a Python-based GUI (Graphical User Interface) using artificial intelligence techniques so that ship owners can easily use it. The introduction of the target ship, the collection of effective power data through computational fluid dynamics (CFD), the training of the effective power model using deep learning, and the program for presenting the optimal operation position using the deep neural network (DNN) model are described in detail. Ships are loaded and unloaded on each voyage, which changes the cargo load and therefore the displacement. Shipowners want to know the operating position with minimum resistance, that is, maximum energy efficiency, for each displacement at a given speed. The developed GUI can be installed on the ship's tablet PC as an application and used to select the optimal operating position.
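
A hedged sketch of the two pieces the abstract describes, a DNN regressor for effective power and a search over bow/stern drafts at a given displacement and speed; the layer sizes, input features, value ranges, and grid search are illustrative assumptions, not the paper's actual model or program:

```python
import torch
import torch.nn as nn

# Simple MLP: (displacement, speed, fore draft, aft draft) -> effective power.
power_model = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
# ... train power_model on CFD-derived effective power data (omitted) ...

def best_trim(displacement, speed, draft_range=(4.0, 7.0), steps=31):
    """Grid-search fore/aft drafts for minimum predicted effective power."""
    drafts = torch.linspace(*draft_range, steps)
    best = (None, float("inf"))
    with torch.no_grad():
        for tf in drafts:                              # candidate fore draft
            for ta in drafts:                          # candidate aft draft
                x = torch.tensor([[displacement, speed, float(tf), float(ta)]])
                p = power_model(x).item()
                if p < best[1]:
                    best = ((float(tf), float(ta)), p)
    return best                                        # ((fore, aft), predicted power)

print(best_trim(displacement=5000.0, speed=12.0))
```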

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.27-65
    • /
    • 2020
  • Many information and communication technology companies have released their own AI technologies to the public, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software, a company can strengthen its relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although various analyses of open source software have been conducted, there is a lack of studies that help develop or use deep learning open source software in industry. This study therefore attempts to derive a strategy for adopting such a framework through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on the adoption of open source software, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two successes and one failure) and found that seven of the eight TOE factors, along with several factors regarding company, team, and resources, are significant for the adoption of a deep learning open source framework. By organizing the case study results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. For an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, it is necessary to support research developers' use of deep learning frameworks by collecting and managing data inside and outside the company with a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures in the stage of using the deep learning framework, companies can increase the number of deep learning research developers, their ability to use the framework, and the availability of GPU resources. In the proliferation stage, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of the developers, for example by automatically optimizing the hardware (GPU) environment. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by activating developer retraining and seminars.
To implement the identified five success factors, a step-by-step enterprise procedure for adopting the deep learning framework was proposed: defining the project problem, confirming whether the deep learning methodology is the right method, confirming whether the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework across the enterprise. The first three steps (defining the project problem, confirming whether the deep learning methodology is the right method, and confirming whether the deep learning framework is the right tool) are pre-considerations for adopting a deep learning open source framework. Once these three pre-consideration steps are cleared, the next two steps (using the deep learning framework in the enterprise and spreading the framework across the enterprise) can proceed. In the fourth step, the knowledge and expertise of developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, the five important factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

Real-Time Scheduling Scheme based on Reinforcement Learning Considering Minimizing Setup Cost (작업 준비비용 최소화를 고려한 강화학습 기반의 실시간 일정계획 수립기법)

  • Yoo, Woosik;Kim, Sungjae;Kim, Kwanho
    • The Journal of Society for e-Business Studies
    • /
    • v.25 no.2
    • /
    • pp.15-27
    • /
    • 2020
  • This study starts from the idea that the process of creating a Gantt chart for scheduling is similar to a Tetris game with only straight-line pieces. In this Tetris-like game, the X axis represents M machines and the Y axis represents time. It is assumed that all order types can be processed without splitting on all machines, but if consecutive order types differ, a setup cost is incurred. In this study, the game described above was named Gantris and its game environment was implemented. The schedule generated in real time by an agent trained with deep reinforcement learning is compared with the schedule created by a human playing the game. In the comparative study, a single order-list learning environment and a random order-list learning environment were examined. The two systems compared are a four-machine, two-order-type system (4M2T) and a ten-machine, six-order-type system (10M6T). As a performance indicator of the generated schedule, a weighted sum of setup cost, makespan, and idle time was measured while scheduling 100 orders. In the 4M2T system, regardless of the learning environment, the trained system generated schedules with better performance indices than the human experimenter. In the 10M6T system, the AI generated schedules with better performance indices than the experimenter in the single-order learning environment, but with worse indices in the random learning environment. However, in terms of the number of job changes, the trained system showed better results than the experimenter in both 4M2T and 10M6T, demonstrating excellent scheduling performance.
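
The Gantris environment is described only informally; a minimal sketch of such an environment's state transition and setup-cost logic (the machine count, order types, costs, and random policy are illustrative, not the paper's actual settings or its reinforcement learning agent):

```python
import random

class GantrisEnv:
    """Toy scheduling environment: assign each incoming order to a machine;
    a setup cost is incurred when a machine switches order types."""

    def __init__(self, n_machines=4, n_types=2, n_orders=100, setup_cost=5, proc_time=3):
        self.n_machines, self.n_types = n_machines, n_types
        self.n_orders, self.setup_cost, self.proc_time = n_orders, setup_cost, proc_time
        self.reset()

    def reset(self):
        self.machine_time = [0] * self.n_machines      # when each machine frees up
        self.machine_type = [None] * self.n_machines   # last order type per machine
        self.orders = [random.randrange(self.n_types) for _ in range(self.n_orders)]
        self.t = 0
        return (self.orders[0], tuple(self.machine_time), tuple(self.machine_type))

    def step(self, machine):
        order = self.orders[self.t]
        cost = self.setup_cost if self.machine_type[machine] not in (None, order) else 0
        self.machine_time[machine] += cost + self.proc_time
        self.machine_type[machine] = order
        self.t += 1
        done = self.t == self.n_orders
        reward = -(cost + self.proc_time)              # penalize setup cost and time used
        next_order = None if done else self.orders[self.t]
        return (next_order, tuple(self.machine_time), tuple(self.machine_type)), reward, done

# Random-policy baseline on the 4M2T configuration.
env, total = GantrisEnv(), 0
state, done = env.reset(), False
while not done:
    state, r, done = env.step(random.randrange(env.n_machines))
    total += r
print("makespan:", max(env.machine_time), "total reward:", total)
```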