• Title/Abstract/Keyword: Artificial Intelligence Applications


Corporate Bankruptcy Prediction Model using Explainable AI-based Feature Selection (설명가능 AI 기반의 변수선정을 이용한 기업부실예측모형)

  • Gundoo Moon;Kyoung-jae Kim
    • Journal of Intelligence and Information Systems / v.29 no.2 / pp.241-265 / 2023
  • A corporate insolvency prediction model serves as a vital tool for objectively monitoring the financial condition of companies. It enables timely warnings, facilitates responsive actions, and supports the formulation of effective management strategies to mitigate bankruptcy risks and enhance performance. Investors and financial institutions utilize default prediction models to minimize financial losses. As interest in utilizing artificial intelligence (AI) technology for corporate insolvency prediction grows, extensive research has been conducted in this domain. However, there is an increasing demand for explainable AI models in corporate insolvency prediction, emphasizing interpretability and reliability. The SHAP (SHapley Additive exPlanations) technique has gained significant popularity and has demonstrated strong performance in various applications. Nonetheless, it has limitations, such as computational cost, processing time, and scalability concerns that grow with the number of variables. This study introduces a novel approach to variable selection that reduces the number of variables by averaging SHAP values computed on bootstrapped data subsets instead of on the entire dataset. The technique aims to improve computational efficiency while maintaining excellent predictive performance. For classification, random forest, XGBoost, and C5.0 models are trained on the carefully selected, highly interpretable variables. An ensemble model is then generated through soft voting, with high-performance model design as the goal, and its classification accuracy is compared with that of the individual models. The study leverages data from 1,698 Korean light-industry companies and employs bootstrapping to create distinct data groups. Logistic regression is employed to calculate SHAP values for each data group, and their averages are computed to derive the final SHAP values. The proposed model enhances interpretability and aims to achieve superior predictive performance.
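
The variable-selection idea above lends itself to a short sketch. The following is a minimal, illustrative Python version of bootstrap-averaged SHAP feature ranking, not the authors' code; the data loading, subset fraction, and number of bootstraps are assumptions.

```python
# Minimal sketch of bootstrap-averaged SHAP feature selection; illustrative
# only, not the paper's implementation. Assumes X is a pandas DataFrame of
# financial ratios and y a pandas Series of binary insolvency labels.
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression

def bootstrap_shap_importance(X, y, n_bootstraps=10, sample_frac=0.5, seed=0):
    """Average |SHAP| values over bootstrapped subsets instead of the full data."""
    rng = np.random.default_rng(seed)
    importance = np.zeros(X.shape[1])
    for _ in range(n_bootstraps):
        idx = rng.choice(len(X), size=int(len(X) * sample_frac), replace=True)
        Xb, yb = X.iloc[idx], y.iloc[idx]
        model = LogisticRegression(max_iter=1000).fit(Xb, yb)  # the paper uses logistic regression here
        shap_values = shap.LinearExplainer(model, Xb).shap_values(Xb)
        importance += np.abs(shap_values).mean(axis=0)  # mean |SHAP| per variable on this subset
    return importance / n_bootstraps

# The top-ranked variables would then feed the random forest, XGBoost,
# and C5.0 classifiers and the soft-voting ensemble.
```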

A Design of Smart Sensor Framework for Smart Home System Based on Layered Architecture (계층 구조에 기반을 둔 스마트 홈 시스템를 위한 스마트 센서 프레임워크의 설계)

  • Chung, Won-Ho;Kim, Yu-Bin
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.17 no.4 / pp.49-59 / 2017
  • Smart sensing plays a key role in a variety of IoT applications, and its importance continues to grow with the development of artificial intelligence; the importance of smart sensors can hardly be overemphasized. However, most studies related to smart sensors have focused on specific application purposes, for example security, energy saving, and monitoring, and there has been little research on how to efficiently configure the various types of smart sensors that will be needed in the future. In this paper, a component-based framework with a hierarchical structure for the efficient construction of smart sensors is proposed, and its application to a smart home is designed and implemented. The proposed method shows that the various types of smart sensors expected in the near future can be configured through the design and development of the necessary components within the proposed software framework. In addition, since it has a layered architecture, the configuration of a smart sensor can be extended by inserting internal or external layers. In particular, it is possible to design internal and external modules independently when designing an IoT application service through connection with the external device layer. A small-scale smart home system is designed and implemented using the proposed method, and a home cloud, operating as an external layer, is further designed to accommodate and manage multiple smart homes. By developing and adding the components of each layer, it will be possible to efficiently extend the range of applications to smart cars, smart buildings, smart factories, and so on.
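
To make the layered, component-based idea above concrete, here is a minimal Python sketch under assumed names; the layer classes and their responsibilities are hypothetical illustrations, not the paper's actual design.

```python
# Hypothetical sketch of a layered smart sensor: each layer is a component
# that processes data and hands it to the layer stacked above it, so new
# internal or external layers can be inserted without touching the others.
from abc import ABC, abstractmethod

class Layer(ABC):
    def __init__(self):
        self.upper = None          # the next layer up, if any

    def stack(self, upper):        # insert a layer on top; returns it for chaining
        self.upper = upper
        return upper

    @abstractmethod
    def process(self, data): ...

    def emit(self, data):          # process locally, then pass upward
        out = self.process(data)
        if self.upper is not None:
            self.upper.emit(out)

class SensingLayer(Layer):
    def process(self, data):       # wrap a raw hardware reading
        return {"raw": data}

class FilteringLayer(Layer):
    def process(self, data):       # an internal preprocessing component
        data["filtered"] = True
        return data

class ExternalDeviceLayer(Layer):
    def process(self, data):       # hand off to an external system, e.g. a home cloud
        print("to home cloud:", data)
        return data

# Compose a smart sensor by stacking components.
sensor = SensingLayer()
sensor.stack(FilteringLayer()).stack(ExternalDeviceLayer())
sensor.emit(23.5)                  # to home cloud: {'raw': 23.5, 'filtered': True}
```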

Analysis of functions and applications of intelligent tutoring system for personalized adaptive learning in mathematics (개인 맞춤형 수학 학습을 위한 인공지능 교육시스템의 기능과 적용 사례 분석)

  • Sung, Jihyun
    • The Mathematical Education / v.62 no.3 / pp.303-326 / 2023
  • Mathematics is a discipline with a strongly systematic structure, and learning deficits at earlier stages greatly influence later stages of learning. It is therefore necessary to check frequently whether students have learned well and to provide immediate feedback, and for this purpose an intelligent tutoring system (ITS) can be used in math education. For this reason, it is necessary to reveal how effective intelligent tutoring systems are for personalized adaptive learning. The purpose of this study is to investigate the functions and applications of intelligent tutoring systems for personalized adaptive learning in mathematics. To achieve this goal, a literature review and student surveys were used to derive implications. Based on the literature review, the functions of an intelligent tutoring system for personalized adaptive learning were derived; they can be broadly divided into diagnosis and evaluation, analysis and prediction, and feedback and content delivery. Learning and lesson plans were designed around these functions and applied to fifth graders in elementary school for about three months. As a result of this study, the intelligent tutoring system supported personalized adaptive learning in mathematics in several ways. The researcher also suggests that more sophisticated materials and technologies should be developed for effective personalized adaptive learning in mathematics using intelligent tutoring systems.

Analysis of Users' Sentiments and Needs for ChatGPT through Social Media on Reddit (Reddit 소셜미디어를 활용한 ChatGPT에 대한 사용자의 감정 및 요구 분석)

  • Hye-In Na;Byeong-Hee Lee
    • Journal of Internet Computing and Services / v.25 no.2 / pp.79-92 / 2024
  • ChatGPT, as a representative chatbot leveraging generative artificial intelligence technology, is used valuable not only in scientific and technological domains but also across diverse sectors such as society, economy, industry, and culture. This study conducts an explorative analysis of user sentiments and needs for ChatGPT by examining global social media discourse on Reddit. We collected 10,796 comments on Reddit from December 2022 to August 2023 and then employed keyword analysis, sentiment analysis, and need-mining-based topic modeling to derive insights. The analysis reveals several key findings. The most frequently mentioned term in ChatGPT-related comments is "time," indicative of users' emphasis on prompt responses, time efficiency, and enhanced productivity. Users express sentiments of trust and anticipation in ChatGPT, yet simultaneously articulate concerns and frustrations regarding its societal impact, including fears and anger. In addition, the topic modeling analysis identifies 14 topics, shedding light on potential user needs. Notably, users exhibit a keen interest in the educational applications of ChatGPT and its societal implications. Moreover, our investigation uncovers various user-driven topics related to ChatGPT, encompassing language models, jobs, information retrieval, healthcare applications, services, gaming, regulations, energy, and ethical concerns. In conclusion, this analysis provides insights into user perspectives, emphasizing the significance of understanding and addressing user needs. The identified application directions offer valuable guidance for enhancing existing products and services or planning the development of new service platforms.

Analysis of Emerging Geo-technologies and Markets Focusing on Digital Twin and Environmental Monitoring in Response to Digital and Green New Deal (디지털 트윈, 환경 모니터링 등 디지털·그린 뉴딜 정책 관련 지질자원 유망기술·시장 분석)

  • Ahn, Eun-Young;Lee, Jaewook;Bae, Junhee;Kim, Jung-Min
    • Economic and Environmental Geology / v.53 no.5 / pp.609-617 / 2020
  • After introducing its Industry 4.0 policy, the Korean government announced the 'Digital New Deal' and 'Green New Deal' as the 'Korean New Deal' in 2020. We analyzed Korea Institute of Geoscience and Mineral Resources (KIGAM)'s research projects related to that policy and conducted market analysis focused on digital twin and environmental monitoring technologies. Regarding the 'Data Dam' policy, we suggested digital geo-contents with Augmented Reality (AR) & Virtual Reality (VR) and a public geo-data collection & sharing system. It is necessary to expand and support smart mining and digital oil field research for the '5th generation mobile communication (5G) and artificial intelligence (AI) convergence into all industries' policy. The Korean government is proposing downtown 3D maps under the 'Digital Twin' policy; KIGAM can provide 3D geological maps and Internet of Things (IoT) systems for social overhead capital (SOC) management. The 'Green New Deal' proposes developing technologies for green industries, including resource circulation, Carbon Capture Utilization and Storage (CCUS), and electric & hydrogen vehicles. KIGAM has carried out related research projects and currently conducts research on domestic energy storage minerals. Oil and gas industries are presented as representative applications of the digital twin. Much progress has been made in mining automation and digital mapping, and Digital Twin Earth (DTE) is an emerging research subject. These emerging research subjects are deeply related to data analysis, simulation, AI, and the IoT; KIGAM should therefore collaborate with sensor and computing software & system companies.

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.127-142 / 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models, and in recent years these supervised models have gained more popularity than unsupervised models such as deep belief networks, because they have produced notable applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. Using this architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rest on three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each local receptive field, so all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers. What pooling layers do is simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state of the network, which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, such as vanishing and exploding gradients: the gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients aren't just propagated backward through layers, they're propagated backward through time. If the network runs for a long time, the gradient can become extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
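
To ground the three CNN ideas above (local receptive fields, shared weights, pooling), here is a minimal PyTorch sketch; the layer sizes and the 28x28 grayscale input shape are arbitrary choices for illustration, not taken from the paper.

```python
# Tiny CNN sketch: small kernels give local receptive fields, each kernel's
# weights are shared across all image positions, and pooling layers
# simplify the convolutional output.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5),  # 5x5 local receptive field, weights shared
            nn.ReLU(),
            nn.MaxPool2d(2),                  # pooling: 24x24 -> 12x12
            nn.Conv2d(16, 32, kernel_size=5),
            nn.ReLU(),
            nn.MaxPool2d(2),                  # pooling: 8x8 -> 4x4
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):
        h = self.features(x)
        return self.classifier(h.flatten(1))

x = torch.randn(8, 1, 28, 28)   # a batch of 8 dummy 28x28 images
logits = SmallCNN()(x)          # trained with backpropagation + gradient descent in practice
print(logits.shape)             # torch.Size([8, 10])
```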

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.1-21 / 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from the unstructured text data that constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, text mining has been employed to discover new market and/or technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies. There has been continuous demand in various fields for market information at the specific product level. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific and appropriate information. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows: first, data related to product information is collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data is embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data on the extracted products is summed to estimate the market size of each product group. As experimental data, product-name text from Statistics Korea's microdata (345,103 cases) was mapped into a multidimensional vector space by Word2Vec training. We performed parameter optimization and then applied a vector dimension of 300 and a window size of 15 in further experiments. We employed index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently. Product names similar to the KSIC indexes were extracted based on cosine similarity, and the market size of the extracted products, taken as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market size of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional methods that rely on sampling or multiple assumptions. In addition, the level of market category can be easily and efficiently adjusted to the purpose of the information by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application, since it can address unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports published by private firms. The limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantic word embedding module could be advanced by imposing a proper order on the preprocessed dataset or by combining another measure, such as Jaccard similarity, with Word2Vec. The product group clustering could also be replaced with other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, which we expect will further improve the performance of the basic model proposed conceptually in this study.
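
The bottom-up estimation process above can be sketched in a few lines of Python with gensim; the product names, sales figures, and similarity threshold below are made up, and only the vector dimension (300) and window size (15) follow the paper.

```python
# Illustrative sketch of the Word2Vec-based market size estimation; not
# the authors' code. Product names are treated as pre-tokenized sequences.
from gensim.models import Word2Vec

product_names = [  # hypothetical product-name tokens
    ["stainless", "kitchen", "knife"],
    ["kitchen", "knife", "set"],
    ["ceramic", "coffee", "mug"],
]
sales = {"knife": 120.0, "set": 80.0, "mug": 45.0}  # hypothetical sales per token

# The paper used vector_size=300 and window=15 after parameter tuning;
# a real run needs far more data than this toy corpus.
model = Word2Vec(product_names, vector_size=300, window=15, min_count=1, seed=0)

def estimate_market_size(index_word, threshold=0.0):
    """Group products similar to an index word (e.g. a KSIC index word)
    by cosine similarity, then sum their sales as the group's market size."""
    similar = [w for w, sim in model.wv.most_similar(index_word, topn=10)
               if sim >= threshold]
    group = [index_word] + similar
    return sum(sales.get(w, 0.0) for w in group)

print(estimate_market_size("knife"))
```

Raising or lowering the `threshold` argument corresponds to the paper's point that the granularity of a market category can be adjusted by changing the cosine similarity cutoff.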

Big Data Based Dynamic Flow Aggregation over 5G Network Slicing

  • Sun, Guolin;Mareri, Bruce;Liu, Guisong;Fang, Xiufen;Jiang, Wei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.10 / pp.4717-4737 / 2017
  • Today, smart grids, smart homes, smart water networks, and intelligent transportation are infrastructure systems that connect our world more than we ever thought possible and are associated with a single concept, the Internet of Things (IoT). The number of devices connected to the IoT, and hence the number of traffic flows, increases continuously, along with the emergence of new applications. Although cutting-edge hardware technology can be employed to achieve a fast implementation that handles these huge data streams, there will always be a limit on the size of traffic supported by a given architecture. Fortunately, recent cloud-based big data technologies offer an ideal environment for handling this issue. Moreover, the ever-increasing volume of traffic created on demand presents great challenges for flow management. As a solution, flow aggregation decreases the number of flows the network needs to process. Previous works in the literature show that most aggregation strategies designed for smart grids aim at optimizing system operation performance; they consider a common identifier to aggregate traffic on each device, with each device having its own independent, static aggregation policy. In this paper, we propose a dynamic approach that aggregates flows based on traffic characteristics and device preferences. Our algorithm runs on a big data platform to provide end-to-end network visibility of flows, performing high-speed, high-volume computations to identify clusters of similar flows and aggregate a massive number of mice flows into a few meta-flows. Compared with existing solutions, our approach dynamically aggregates a large number of such small flows into fewer flows, based on traffic characteristics and access node preferences. Using this approach, we alleviate the problem of processing a large number of micro flows and significantly improve the accuracy of meeting the access nodes' QoS demands. We conducted experiments using a dataset of up to 100,000 flows and studied the performance of our algorithm analytically. The experimental results show the promising effectiveness and scalability of our proposed approach.
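
As a conceptual sketch of clustering similar flows into meta-flows, here is a small Python example using k-means; the flow features, counts, and cluster number are assumptions for illustration, not the paper's actual algorithm.

```python
# Conceptual sketch: cluster flows by traffic characteristics, then treat
# each cluster as one meta-flow. Features and cluster count are invented.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# One row per flow: [packet rate, mean packet size, burstiness]
flows = rng.random((100_000, 3))   # echoing the paper's 100,000-flow dataset size

kmeans = KMeans(n_clusters=50, n_init=10, random_state=0).fit(flows)

# Each cluster of similar mice flows becomes one meta-flow whose demand is
# the sum over its members, so the network manages 50 flows, not 100,000.
meta_flow_demand = np.array([
    flows[kmeans.labels_ == c].sum(axis=0) for c in range(50)
])
print(meta_flow_demand.shape)      # (50, 3)
```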

Spine Computed Tomography to Magnetic Resonance Image Synthesis Using Generative Adversarial Networks : A Preliminary Study

  • Lee, Jung Hwan;Han, In Ho;Kim, Dong Hwan;Yu, Seunghan;Lee, In Sook;Song, You Seon;Joo, Seongsu;Jin, Cheng-Bin;Kim, Hakil
    • Journal of Korean Neurosurgical Society / v.63 no.3 / pp.386-396 / 2020
  • Objective : To generate synthetic spine magnetic resonance (MR) images from spine computed tomography (CT) using generative adversarial networks (GANs), and to determine the similarities between synthesized and real MR images. Methods : GANs were trained to transform spine CT image slices into spine magnetic resonance T2-weighted (MRT2) axial image slices by combining an adversarial loss and a voxel-wise loss. Experiments were performed using 280 pairs of lumbar spine CT scans and MRT2 images. MRT2 images were then synthesized from 15 other spine CT scans. To evaluate whether the synthetic MR images were realistic, two radiologists, two spine surgeons, and two residents blindly classified the real and synthetic MRT2 images. Two experienced radiologists then evaluated the similarities between subdivisions of the real and synthetic MRT2 images. Quantitative analysis of the synthetic MRT2 images was performed using the mean absolute error (MAE) and peak signal-to-noise ratio (PSNR). Results : The mean overall similarity of the synthetic MRT2 images evaluated by the radiologists was 80.2%. In the blind classification of real MRT2 images, the failure rate ranged from 0% to 40%. The MAE value of each image ranged from 13.75 to 34.24 pixels (mean, 21.19 pixels), and the PSNR of each image ranged from 61.96 to 68.16 dB (mean, 64.92 dB). Conclusion : This was the first study to apply GANs to synthesize spine MR images from CT images. Despite the small dataset of 280 pairs, the synthetic MR images were implemented relatively well. Synthesis of medical images using GANs is a new paradigm of artificial intelligence application in medical imaging. We expect that synthesizing MR images from spine CT images using GANs will improve the diagnostic usefulness of CT. To better inform the clinical applications of this technique, further studies are needed involving a larger dataset, a variety of pathologies, and other MR sequences of the lumbar spine.
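
The combined objective in Methods (adversarial loss plus voxel-wise loss) resembles the pix2pix formulation; below is a schematic PyTorch sketch, where the generator G, discriminator D, and loss weight are assumptions rather than the paper's networks.

```python
# Schematic sketch of an adversarial + voxel-wise GAN objective for
# CT-to-MR translation; G and D are assumed nn.Module networks, and
# fake_mr = G(ct) during training. Not the authors' implementation.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()   # adversarial term
l1 = nn.L1Loss()               # voxel-wise term
lam = 100.0                    # weight on the voxel-wise term (assumption)

def generator_loss(D, ct, real_mr, fake_mr):
    # Fool the discriminator on (CT, synthetic MR) pairs...
    pred_fake = D(torch.cat([ct, fake_mr], dim=1))
    adv = bce(pred_fake, torch.ones_like(pred_fake))
    # ...while staying voxel-wise close to the real MRT2 slice.
    return adv + lam * l1(fake_mr, real_mr)

def discriminator_loss(D, ct, real_mr, fake_mr):
    # Score real pairs as real and synthetic pairs as fake.
    pred_real = D(torch.cat([ct, real_mr], dim=1))
    pred_fake = D(torch.cat([ct, fake_mr.detach()], dim=1))
    return 0.5 * (bce(pred_real, torch.ones_like(pred_real)) +
                  bce(pred_fake, torch.zeros_like(pred_fake)))
```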

Generation of High-Resolution Chest X-rays using Multi-scale Conditional Generative Adversarial Network with Attention (주목 메커니즘 기반의 멀티 스케일 조건부 적대적 생성 신경망을 활용한 고해상도 흉부 X선 영상 생성 기법)

  • Ann, Kyeongjin;Jang, Yeonggul;Ha, Seongmin;Jeon, Byunghwan;Hong, Youngtaek;Shim, Hackjoon;Chang, Hyuk-Jae
    • Journal of Broadcast Engineering / v.25 no.1 / pp.1-12 / 2020
  • In the medical field, numerical imbalance of data due to differences in disease prevalence is a common problem. It reduces the performance of an artificial intelligence network, making it difficult to train a network with good performance. Recently, generative adversarial network (GAN) technology has been introduced as a way to address this problem, and its ability has been demonstrated by successful applications in various fields. However, it is still difficult to achieve good results on problems degraded by numerical imbalance, because the image resolution achieved in previous studies is not yet high enough and image structure is modeled only locally. In this paper, we propose a multi-scale conditional generative adversarial network based on an attention mechanism, which can produce high-resolution images to address the numerical imbalance problem of chest X-ray image data. A single network can produce images for various diseases by controlling condition variables. This is efficient and effective in that the network does not need to be trained independently for every disease class, and the self-attention mechanism solves the problem of long-distance dependency in image generation.
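
As a sketch of the kind of self-attention block used to capture long-distance dependencies in GAN generators (in the style of SAGAN), consider the following PyTorch module; it illustrates the mechanism only and is not the paper's exact architecture.

```python
# SAGAN-style self-attention over a 2D feature map: every spatial position
# attends to every other, so structure is no longer modeled only locally.
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 8, 1)
        self.k = nn.Conv2d(channels, channels // 8, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.k(x).flatten(2)                   # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)        # (b, hw, hw) attention map
        v = self.v(x).flatten(2)                   # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                # residual connection

feat = torch.randn(2, 64, 32, 32)                  # dummy feature map
print(SelfAttention2d(64)(feat).shape)             # torch.Size([2, 64, 32, 32])
```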