• Title/Summary/Keyword: AI Service (AI서비스)


How Market Reacts on the Metaverse Initiatives? An Event Study (메타버스 투자 추진이 기업 가치에 미치는 영향 분석: 이벤트 연구 방법론)

  • Mina Baek;Jeongha Kim;Dongwon Lee
    • Information Systems Review
    • /
    • v.25 no.4
    • /
    • pp.183-204
    • /
    • 2023
  • Due to the COVID-19 pandemic, many events had to be held online, which is why the "Metaverse" attracted so much attention in 2021. A number of companies made announcements about the Metaverse, and this also boosted the stock market. This paper investigates the relationship between Metaverse initiatives and the business value of the firm (i.e., stock prices). We examine this relationship using the event study method with Lexis-Nexis news data from 2019 to 2021. The results indicate that Metaverse initiatives have a significantly positive influence on firm value. From a technological perspective, technical factors are associated with more positive market returns, including Metaverse enablers (e.g., NFT, VR devices, digital twin) and common infrastructure (e.g., semiconductors, AI, cloud), with virtual environments standing out in particular. From a strategic perspective, radical innovation (e.g., pivoting, acquisition) yields more positive market returns than incremental innovation (e.g., partnership, investment). In addition, firms from non-service industries benefit from Metaverse initiatives to a somewhat greater degree than firms in the service industry.
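
The announcement-window abnormal returns behind an event study like this one are typically computed with a market model; the sketch below is only an illustration of that standard procedure (the return series, column names, estimation window, and event window are assumptions, not the authors' code).

```python
# Minimal market-model event study sketch (illustrative assumptions, not the paper's code).
import pandas as pd
import statsmodels.api as sm

def cumulative_abnormal_return(stock_returns, market_returns, event_date,
                               est_window=120, event_window=(-1, 1)):
    """Estimate r_i = a + b*r_m over the trading days before the event,
    then sum abnormal returns over the event window around event_date."""
    data = pd.concat([stock_returns, market_returns], axis=1,
                     keys=["r_i", "r_m"]).dropna()
    est = data.loc[:event_date].iloc[-(est_window + 1):-1]   # estimation period
    model = sm.OLS(est["r_i"], sm.add_constant(est["r_m"])).fit()

    pos = data.index.get_loc(event_date)                     # event_date assumed a trading day
    win = data.iloc[pos + event_window[0]: pos + event_window[1] + 1]
    expected = model.params["const"] + model.params["r_m"] * win["r_m"]
    return (win["r_i"] - expected).sum()                     # CAR over the event window
```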

Comparative analysis of the digital circuit designing ability of ChatGPT (ChatGPT을 활용한 디지털회로 설계 능력에 대한 비교 분석)

  • Kihun Nam
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.6
    • /
    • pp.967-971
    • /
    • 2023
  • Recently, a variety of AI-based platform services have become available; one of them is ChatGPT, which processes large quantities of natural-language data and generates answers after self-learning. ChatGPT can perform various tasks in the IT sector, including software programming. In particular, it can help generate simple programs and correct errors in C, a major programming language. Accordingly, ChatGPT is expected to be able to use Verilog HDL effectively, a hardware description language with C-like syntax. However, Verilog HDL synthesis translates imperative statements into a logic-circuit form, so it must be verified whether the resulting circuits operate properly. In this paper, we select small-scale logic circuits for ease of experimentation and verify the circuits generated by ChatGPT against human-designed circuits. As for the experimental environment, Xilinx ISE 14.7 was used for module modeling, and the xc3s1000 FPGA chip was used for module implementation. Comparative analysis was performed on FPGA area utilization and processing time to compare the performance of the ChatGPT-generated and human-written Verilog HDL designs.

AI-based stuttering automatic classification method: Using a convolutional neural network (인공지능 기반의 말더듬 자동분류 방법: 합성곱신경망(CNN) 활용)

  • Jin Park;Chang Gyun Lee
    • Phonetics and Speech Sciences
    • /
    • v.15 no.4
    • /
    • pp.71-80
    • /
    • 2023
  • This study primarily aimed to develop an automated stuttering identification and classification method using artificial intelligence technology. In particular, it aimed to develop a deep learning-based identification model using a convolutional neural network (CNN) algorithm for Korean speakers who stutter. To this end, speech data were collected from 9 adults who stutter and 9 normally fluent speakers. The data were automatically segmented at the phrasal level using Google Cloud speech-to-text (STT), and labels such as 'fluent', 'blockage', 'prolongation', and 'repetition' were assigned to the segments. Mel-frequency cepstral coefficients (MFCCs) and a CNN-based classifier were used to detect and classify each type of stuttered disfluency. However, only five instances of prolongation were found, so this type was excluded from the classifier model. Results showed that the accuracy of the CNN classifier was 0.96, and the F1-scores for classification performance were as follows: 'fluent' 1.00, 'blockage' 0.67, and 'repetition' 0.74. Although the effectiveness of the automatic classifier was validated using CNNs to detect stuttered disfluencies, its performance was found to be inadequate, especially for the blockage and prolongation types. Consequently, establishing a large speech database that collects data by type of stuttered disfluency was identified as a necessary foundation for improving classification performance.
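
The MFCC-plus-CNN pipeline described above can be sketched as follows; this is a minimal illustration assuming librosa for feature extraction and Keras for the classifier, with input shapes, label names, and file handling chosen for the example rather than taken from the paper.

```python
# Illustrative MFCC + CNN sketch; paths, shapes, and labels are assumptions.
import librosa
import numpy as np
from tensorflow import keras

LABELS = ["fluent", "blockage", "repetition"]   # prolongation excluded, as in the abstract

def mfcc_features(wav_path, sr=16000, n_mfcc=40, max_frames=200):
    y, _ = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    mfcc = librosa.util.fix_length(mfcc, size=max_frames, axis=1)  # pad/trim time axis
    return mfcc[..., np.newaxis]                                   # shape (n_mfcc, frames, 1)

model = keras.Sequential([
    keras.layers.Input(shape=(40, 200, 1)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(len(LABELS), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, validation_split=0.2, epochs=30)  # X: stacked MFCC arrays
```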

A Time Series Forecasting Model with the Option to Choose between Global and Clustered Local Models for Hotel Demand Forecasting (호텔 수요 예측을 위한 전역/지역 모델을 선택적으로 활용하는 시계열 예측 모델)

  • Keehyun Park;Gyeongho Jung;Hyunchul Ahn
    • The Journal of Bigdata
    • /
    • v.9 no.1
    • /
    • pp.31-47
    • /
    • 2024
  • With the advancement of artificial intelligence, the travel and hospitality industry is also adopting AI and machine learning technologies for various purposes. In the tourism industry, demand forecasting is recognized as a very important factor because it directly impacts service efficiency and revenue maximization. Demand forecasting requires consideration of time-varying data flows, which is why statistical techniques and machine learning models are used. In recent years, variations and integrations of existing models have been studied to account for the diversity of demand forecasting data and the complexity of the real world, and these have been reported to improve forecasting performance under uncertainty and variability. This study likewise proposes a new model that integrates several machine-learning approaches to improve the accuracy of hotel sales demand forecasting. Specifically, it proposes a new XGBoost-based time series forecasting model that selectively utilizes local models, built by clustering with DTW K-means, and a global model trained on the entire data set. The hotel demand forecasting model that selectively utilizes global and local models proposed in this study is expected to contribute positively to the growth of the hotel and travel industry and can be applied to forecasting in other business fields in the future.
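
The global/local idea, clustering demand series with DTW K-means and fitting an XGBoost model per cluster alongside a global model, might look roughly like the sketch below; the lag length, cluster count, and data layout are illustrative assumptions, not the authors' configuration.

```python
# Sketch of global + clustered local XGBoost models (illustrative assumptions).
import numpy as np
from tslearn.clustering import TimeSeriesKMeans
from xgboost import XGBRegressor

def make_supervised(series, n_lags=7):
    """Turn a 1-D demand series into lagged features and next-day targets."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

def fit_global_and_local(demand_matrix, n_clusters=3, n_lags=7):
    """demand_matrix: (n_hotels, n_days) array of daily demand."""
    labels = TimeSeriesKMeans(n_clusters=n_clusters, metric="dtw",
                              random_state=0).fit_predict(demand_matrix)

    global_X = np.vstack([make_supervised(s, n_lags)[0] for s in demand_matrix])
    global_y = np.hstack([make_supervised(s, n_lags)[1] for s in demand_matrix])
    global_model = XGBRegressor(n_estimators=300).fit(global_X, global_y)

    local_models = {}
    for c in range(n_clusters):
        members = demand_matrix[labels == c]
        X = np.vstack([make_supervised(s, n_lags)[0] for s in members])
        y = np.hstack([make_supervised(s, n_lags)[1] for s in members])
        local_models[c] = XGBRegressor(n_estimators=300).fit(X, y)
    return global_model, local_models, labels
```

Per-series selection between the global model and the matching local model would then be made on a validation split, in line with the selective use described in the abstract.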

Exploring the Effects of Passive Haptic Factors When Interacting with a Virtual Pet in Immersive VR Environment (몰입형 VR 환경에서 가상 반려동물과 상호작용에 관한 패시브 햅틱 요소의 영향 분석)

  • Donggeun KIM;Dongsik Jo
    • Journal of the Korea Computer Graphics Society
    • /
    • v.30 no.3
    • /
    • pp.125-132
    • /
    • 2024
  • Recently, immersive virtual reality (IVR) technologies have been applied to various services such as education, training, entertainment, industry, healthcare, and remote collaboration. In particular, research on visualizing and interacting with virtual humans is being actively conducted, and research on virtual pets in IVR is also emerging. As in real-world interaction scenarios, the most important aspect of interacting with a virtual pet is providing physical contact such as haptic feedback and non-verbal interaction (e.g., gestures). This paper investigates the effects of passive haptic factors (e.g., shape and texture) implemented by mapping physical props to the virtual pet. Experimental results show significant differences in immersion, co-presence, realism, and friendliness depending on the level of the texture element when interacting with virtual pets through passive haptic feedback. Additionally, as a main finding obtained from the statistical interaction between the two variables, we observed an uncanny valley effect with respect to friendliness. We expect our results to provide guidelines for creating interactive content with virtual pets in immersive VR environments.
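
The shape-by-texture interaction reported above is the kind of effect a two-way ANOVA exposes; the snippet below is a generic illustration with hypothetical column names, not the authors' analysis.

```python
# Generic two-way ANOVA sketch for a shape x texture design.
# The dataframe columns ("shape", "texture", "friendliness") are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def interaction_anova(df: pd.DataFrame) -> pd.DataFrame:
    model = smf.ols("friendliness ~ C(shape) * C(texture)", data=df).fit()
    return anova_lm(model, typ=2)   # includes the C(shape):C(texture) interaction row
```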

AI-based early detection to prevent user churn in MMORPG (MMORPG 게임의 이탈 유저에 대한 인공지능 기반 조기 탐지)

  • Minhyuk Lee;Sunwoo Park;Sunghwan Lee;Suin Kim;Yoonyoung Cho;Daesub Song;Moonyoung Lee;Yoonsuh Jung
    • The Korean Journal of Applied Statistics
    • /
    • v.37 no.4
    • /
    • pp.525-539
    • /
    • 2024
  • Massively multiplayer online role-playing games (MMORPGs) are a common type of game these days, and predicting user churn in an MMORPG is a crucial task. The retention rate of users is closely associated with the lifespan and revenue of the service. If the churn of a specific user can be predicted in advance, targeted promotions can be used to encourage them to stay. Therefore, not only the accuracy of churn prediction but also the speed at which signs of churn can be detected is important. In this paper, we propose methods to identify early signs of churn by utilizing daily predicted user retention probabilities. We train various deep learning and machine learning models on log data and estimate user retention probabilities. By analyzing the change patterns in these probabilities, we provide empirical rules for early identification of users at high risk of churn. Performance evaluations confirm that our methodology is more effective at detecting high-risk users than existing methods based on login days. Finally, we suggest new methods for customized marketing strategies; for this purpose, we provide guidelines on the percentage of active users who are at risk of churn.
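
One simple way to operationalize "early signs of churn" from daily retention probabilities is a declining-trend rule over the model's recent outputs; the window and thresholds below are illustrative assumptions, not the paper's empirical rules.

```python
# Flag users whose predicted daily retention probability shows a sustained decline.
# The probability source, window, and thresholds are illustrative assumptions.
import numpy as np

def churn_risk_flags(retention_probs, window=5, drop_threshold=0.15, floor=0.5):
    """retention_probs: dict {user_id: sequence of daily predicted retention probs}."""
    flagged = []
    for user, probs in retention_probs.items():
        if len(probs) < window:
            continue
        recent = np.asarray(probs[-window:])
        declining = np.all(np.diff(recent) <= 0)          # monotone non-increasing trend
        big_drop = recent[0] - recent[-1] >= drop_threshold
        if (declining and big_drop) or recent[-1] < floor:
            flagged.append(user)
    return flagged
```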

Analysis of Success Cases of InsurTech and Digital Insurance Platform Based on Artificial Intelligence Technologies: Focused on Ping An Insurance Group Ltd. in China (인공지능 기술 기반 인슈어테크와 디지털보험플랫폼 성공사례 분석: 중국 평안보험그룹을 중심으로)

  • Lee, JaeWon;Oh, SangJin
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.71-90
    • /
    • 2020
  • Recently, the global insurance industry has been rapidly pursuing digital transformation through the use of artificial intelligence technologies such as machine learning, natural language processing, and deep learning. As a result, more and more foreign insurers have succeeded with AI technology-based InsurTech and platform businesses, and Ping An Insurance Group Ltd., China's largest private company, is leading China's fourth industrial revolution with remarkable achievements in InsurTech and digital platforms through constant innovation, with 'finance and technology' and 'finance and ecosystem' as its corporate keywords. Against this backdrop, this study analyzed the InsurTech and platform business activities of Ping An Insurance Group Ltd. through the ser-M analysis model to provide strategic implications for revitalizing the AI technology-based businesses of domestic insurers. The ser-M analysis model is a framework in which the vision and leadership of the CEO, the historical environment of the enterprise, the utilization of various resources, and the unique mechanism relationships among them can be interpreted in an integrated manner in terms of subject, environment, resource, and mechanism. The case analysis shows that Ping An Insurance Group Ltd. has achieved cost reduction and customer service improvement by digitally innovating its entire business, including sales, underwriting, claims, and loan services, using core AI technologies such as facial, voice, and facial expression recognition. In addition, "online data in China" and "the vast offline data and insights accumulated by the company" were combined with new technologies such as artificial intelligence and big data analysis to build a digital platform that integrates financial services and digital service businesses. Ping An Insurance Group Ltd. pursued constant innovation, and as of 2019 its sales reached $155 billion, ranking seventh among all companies in the Global 2000 list selected by Forbes Magazine. Analyzing the background of the company's success from the ser-M perspective, founder Ma Mingzhe quickly grasped the development of digital technology, market competition, and changes in population structure in the era of the fourth industrial revolution, established a new vision, and displayed agile, digital technology-focused leadership. Based on this strong founder-led leadership in response to environmental changes, the company has successfully led its InsurTech and platform businesses through the innovation of internal resources, such as investment in AI technology, securing excellent professionals, and strengthening big data capabilities, combined with external absorptive capacity and strategic alliances across various industries. This success story of Ping An Insurance Group Ltd. offers the following implications for domestic insurance companies preparing for digital transformation. First, CEOs of domestic companies also need to recognize the paradigm shift in the industry caused by changes in digital technology and quickly arm themselves with digital technology-oriented leadership to spearhead the digital transformation of their enterprises.
Second, the Korean government should urgently overhaul related laws and systems to further promote the use of data across different industries and provide strong support, such as deregulation, tax benefits, and platform provision, to help the domestic insurance industry secure global competitiveness. Third, Korean companies also need to make bolder investments in the development of AI technology so that the systematic acquisition of internal and external data, the training of technical personnel, and patent applications can be expanded, and digital platforms should be established quickly so that diverse customer experiences can be integrated through trained AI technology. Finally, since generalization from a single case of an overseas insurance company may be limited, we hope that future research will examine various management strategies related to AI technology more extensively by analyzing cases from multiple industries or multiple companies, or by conducting empirical research.

Development of the Regulatory Impact Analysis Framework for the Convergence Industry: Case Study on Regulatory Issues by Emerging Industry (융합산업 규제영향분석 프레임워크 개발: 신산업 분야별 규제이슈 사례 연구)

  • Song, Hye-Lim;Seo, Bong-Goon;Cho, Sung-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.199-230
    • /
    • 2021
  • Innovative new products and services are being launched through convergence between heterogeneous industries, and social interest and investment in convergence industries such as AI, big data-based future cars, and robots are continuously increasing. However, in the process of commercializing convergence products and services, there are many cases where they do not conform to the existing regulatory and legal system, which causes considerable difficulty for companies launching their products and services into the market. In response to these industrial changes, the current government is promoting improvements to the existing regulatory mechanisms applied to the relevant industries along with expanded investment in new industries. Against these convergence industry trends, this study aimed to analyze the existing regulatory system that obstructs the market entry of innovative new products and services in order to preemptively predict the regulatory issues that will arise in emerging industries. In addition, it was intended to establish a regulatory impact analysis system to evaluate adequacy and prepare improvement measures. The flow of this study is divided into three parts. In the first part, previous studies on regulatory impact analysis and evaluation systems are investigated; these were used as basic data for the development direction of the regulatory impact framework and its indicators and items. In the second part, on developing the regulatory impact analysis framework, indicators and items are developed based on the previously investigated data and applied to each stage of the framework. In the last part, a case study is presented that resolves the regulatory issues faced by actual companies by applying the developed regulatory impact analysis framework. The case study covered the autonomous/electric vehicle industry and the Internet of Things (IoT) industry, because these are among the emerging industries in which the Korean government has recently been most interested and are judged to be most relevant to the realization of an intelligent information society. Specifically, the regulatory impact analysis framework proposed in this study consists of a total of five steps. The first step is to identify the industrial size of the target products and services, related policies, and regulatory issues. In the second stage, regulatory issues are discovered through a review of regulatory improvement items at each stage of commercialization (planning, production, commercialization). In the next step, factors related to regulatory compliance costs are derived and the costs incurred for existing regulatory compliance are calculated. In the fourth stage, an alternative is prepared by gathering opinions from the relevant industry and experts in the field, and the necessity, validity, and adequacy of the alternative are reviewed. Finally, in the last stage, the adopted alternatives are formulated so that they can be applied to legislation, and the alternatives are reviewed by legal experts. The implications of this study are summarized as follows. From a theoretical point of view, it is meaningful in that it clearly presents a series of procedures for regulatory impact analysis as a framework. Although previous studies mainly discussed the importance and necessity of regulatory impact analysis, this study presents a systematic framework that considers the various factors required for regulatory impact analysis suggested by prior studies.
From a practical point of view, this study has significance in that it was applied to actual regulatory issues based on the regulatory impact analysis framework proposed above. The results of this study show that proposals related to regulatory issues were submitted to government departments and finally the current law was revised, suggesting that the framework proposed in this study can be an effective way to resolve regulatory issues. It is expected that the regulatory impact analysis framework proposed in this study will be a meaningful guideline for technology policy researchers and policy makers in the future.

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • The development of artificial intelligence technologies has accelerated rapidly with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. Thanks to recent interest in the technology and research on various algorithms, the field of artificial intelligence has achieved more technological advances than ever. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable and processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. More recently, the purpose of a knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. These knowledge bases are used for intelligent processing in various areas of artificial intelligence, such as the question answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task that still requires a great deal of expert effort. In recent years, much knowledge-based AI research and technology has used DBpedia, one of the largest knowledge bases, which aims to extract structured content from the information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of some unifying aspect of an article. This knowledge is created through the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can achieve high reliability in terms of knowledge accuracy, because the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in the Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model based on the DBpedia ontology schema, trained on Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the appropriate sentences from which to extract triples, and selecting values and transforming them into the RDF triple structure. The structure of Wikipedia infoboxes is defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify the input document into infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert it into the form of triples.
To train the models, we generated a training data set from the Wikipedia dump by adding BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction process. Through the proposed process, structured knowledge can be utilized by extracting it from text documents according to the ontology schema. In addition, this methodology can significantly reduce the expert effort required to construct instances according to the ontology schema.
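
The final step of the pipeline, converting an extracted attribute value into an RDF triple aligned with the DBpedia ontology, can be sketched with rdflib as follows; the namespaces and the example subject, property, and value are hypothetical.

```python
# Hypothetical sketch of the last step: turning an extracted (subject, attribute,
# value) into an RDF triple. Namespaces and example values are illustrative only.
from rdflib import Graph, Literal, Namespace, URIRef

DBO = Namespace("http://dbpedia.org/ontology/")
KB = Namespace("http://example.org/resource/")        # placeholder resource namespace

def to_triple(graph: Graph, subject: str, attribute: str, value: str) -> Graph:
    s = URIRef(KB[subject.replace(" ", "_")])
    p = URIRef(DBO[attribute])
    graph.add((s, p, Literal(value)))
    return graph

g = to_triple(Graph(), "Seoul", "populationTotal", "9700000")  # illustrative value
print(g.serialize(format="turtle"))
```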

Research Trends of Health Recommender Systems (HRS): Applying Citation Network Analysis and GraphSAGE (건강추천시스템(HRS) 연구 동향: 인용네트워크 분석과 GraphSAGE를 활용하여)

  • Haryeom Jang;Jeesoo You;Sung-Byung Yang
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.2
    • /
    • pp.57-84
    • /
    • 2023
  • With the development of information and communications technology (ICT) and big data technology, anyone can easily obtain and utilize vast amounts of data through the Internet. Therefore, the ability to select high-quality data from a large amount of information is becoming more important than the ability merely to collect it. This trend continues in academia: literature reviews, both systematic and non-systematic, have been conducted in various research fields to construct a healthy knowledge structure by selecting high-quality research from accumulated research materials. Meanwhile, after the COVID-19 pandemic, remote healthcare services, which had not previously been agreed upon, are now allowed to a limited extent, and new healthcare services such as health recommender systems (HRS) equipped with artificial intelligence (AI) and big data technologies are in the spotlight. Although, in practice, HRS are considered one of the most important technologies for leading the future healthcare industry, literature reviews on HRS are relatively rare compared to other fields. In addition, although HRS is a convergence field with a strongly interdisciplinary nature, prior literature review studies have mainly applied either systematic or non-systematic review methods; hence, they are limited in analyzing interactions or dynamic relationships with other research fields. Therefore, in this study, the overall network structure of HRS and its surrounding research fields was identified using citation network analysis (CNA). In this process, the GraphSAGE algorithm was applied to address the problem that the latest papers are underestimated in citation relationships. As a result, this study identified 'recommender systems', 'wireless & IoT', 'computer vision', and 'text mining' as increasingly important research fields related to HRS research, and confirmed that 'personalization' and 'privacy' are emerging issues in HRS research. The findings provide both academic and practical insights for identifying the structure of the HRS research community, examining related research trends, and designing future HRS research directions.
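
A GraphSAGE encoder of the kind applied to the citation network above can be sketched with PyTorch Geometric; the two-layer SAGEConv model and the toy edge list below are generic illustrations, not the authors' setup.

```python
# Generic two-layer GraphSAGE encoder for a citation graph (PyTorch Geometric).
# Node features, dimensions, and the tiny edge list are illustrative assumptions.
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv

class GraphSAGE(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hidden_dim)
        self.conv2 = SAGEConv(hidden_dim, out_dim)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)      # node embeddings for downstream analysis

# Toy citation graph: 4 papers with directed "cites" edges.
x = torch.randn(4, 16)                                   # 16-dim paper features
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])  # source -> target
emb = GraphSAGE(16, 32, 8)(x, edge_index)
print(emb.shape)   # torch.Size([4, 8])
```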