• Title/Summary/Keyword: Knowledge about Artificial Intelligence

Interpretation of depositional setting and sedimentary facies of the late Cenozoic sediments in the southern Ulleung Basin margin, East Sea(Sea of Japan), by an expert system, PLAYMAKER2 (PLAYMAKER2, 전문가 시스템을 이용한 동해 울릉분지 남부 신생대 후기 퇴적층의 퇴적환경 해석)

  • Cheong Daekyo
    • The Korean Journal of Petroleum Geology / v.6 no.1_2 s.7 / pp.20-24 / 1998
  • An expert system is a type of artificial intelligence software that incorporates the problem-solving knowledge and experience of human experts through symbolic reasoning and rules about a specific topic. In this study, an expert system, PLAYMAKER2, is used to interpret the sedimentary facies and depositional settings of a sedimentary sequence. The original version of the system, PLAYMAKER, was developed at the University of South Carolina in 1990 and was modified into the present PLAYMAKER2 with some changes to the knowledge base of the previous system. The late Cenozoic sedimentary sequence, up to 10,000 m thick, located in the Korean Oil Exploration Block VI-1 at the southwestern margin of the Ulleung Basin, is analysed with PLAYMAKER2. The Cenozoic sequence is divided into two units: lower Miocene sediments and upper Pliocene-Pleistocene sediments. The depositional settings and sedimentary facies of the Miocene sediments interpreted by PLAYMAKER2, in terms of belief values, are: for depositional settings, slope 57.4%, shelf 21.4%, basin 10.1%; and for sedimentary facies, submarine fan 35.7%, continental slope 26.3%, delta 16.1%, deep basin plain 6.1%, continental shelf 3.2%, shelf margin 1.4%. For the Pliocene-Pleistocene sediments, the belief values are: for depositional settings, slope 59.0%, shelf 22.8%, basin 7.0%; and for sedimentary facies, delta 24.1%, continental slope 22.2%, submarine fan 17.3%, continental shelf 7.0%, deep basin plain 4.8%, shelf margin 2.6%. Comparing the depositional settings and sedimentary facies consulted by PLAYMAKER2 with the classical interpretations from previous studies shows reasonable similarity for both sedimentary units.
This demonstrates that PLAYMAKER2 is an efficient tool for interpreting the depositional setting and sedimentary facies of sediments. However, for the system to become more reliable, many sedimentologists should work to refine and extend the geological rules in its knowledge base.
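
The belief-value consultation described above can be sketched as a minimal rule-based scorer. All rules, observation names, and weights below are hypothetical illustrations, not PLAYMAKER2's actual knowledge base.

```python
# Minimal sketch of a rule-based expert system that accumulates belief
# values for candidate depositional settings and facies. Every rule and
# weight here is hypothetical; PLAYMAKER2's real knowledge base is far larger.

def consult(observations, rules):
    """Sum the belief contributed by every rule whose condition holds,
    then normalize so the beliefs for all hypotheses total 100%."""
    belief = {}
    for condition, hypothesis, weight in rules:
        if condition(observations):
            belief[hypothesis] = belief.get(hypothesis, 0.0) + weight
    total = sum(belief.values()) or 1.0
    return {h: round(100.0 * b / total, 1) for h, b in belief.items()}

# Hypothetical rules: (condition over observations, hypothesis, weight).
RULES = [
    (lambda o: o.get("turbidites"), "submarine fan", 0.6),
    (lambda o: o.get("turbidites"), "slope", 0.3),
    (lambda o: o.get("slump_structures"), "slope", 0.5),
    (lambda o: o.get("shallow_marine_fossils"), "shelf", 0.7),
]

result = consult({"turbidites": True, "slump_structures": True}, RULES)
```

A consultation thus returns percentage belief values for competing hypotheses, the same form of output the abstract reports (e.g., slope 57.4%, shelf 21.4%).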

  • PDF

Semantic Visualization of Dynamic Topic Modeling (다이내믹 토픽 모델링의 의미적 시각화 방법론)

  • Yeon, Jinwook;Boo, Hyunkyung;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.28 no.1 / pp.131-154 / 2022
  • Recently, research on unstructured data analysis has been actively conducted with the development of information and communication technology. In particular, topic modeling is a representative technique for discovering core topics in massive text data. In its early stages, most topic modeling studies focused only on topic discovery; as the field matured, studies began to examine how topics change over time, and interest in dynamic topic modeling, which handles changes in the keywords constituting each topic, is also increasing. Dynamic topic modeling identifies major topics in the data of the initial period and manages the change and flow of topics by using the topic information of each period to derive the topics of subsequent periods. However, the results of dynamic topic modeling are very difficult to understand and interpret: traditional results simply reveal changes in keywords and their rankings, which is insufficient to show how the meaning of a topic has changed. Therefore, in this study, we propose a method to visualize topics by period that reflects the meaning of the keywords in each topic, together with a method for intuitively interpreting changes in topics and the relationships among topics. The method proceeds in three steps. In the first step, dynamic topic modeling is applied to derive the top keywords of each period and their weights from the text data. In the second step, we obtain vectors for the top keywords of each topic from a pre-trained word embedding model and reduce their dimensionality; we then form a semantic vector for each topic as the weighted sum of its keyword vectors, using each keyword's topic weight.
In the third step, we visualize the semantic vector of each topic using matplotlib and analyze the relationships among the topics based on the visualized result. Topic change is interpreted as follows: from the dynamic topic modeling result, we identify the top five rising and top five falling keywords for each period. Many existing topic visualization studies visualize the keywords of each topic, but the approach proposed here differs in that it attempts to visualize each topic itself. To evaluate the practical applicability of the proposed methodology, we performed an experiment on 1,847 abstracts of artificial intelligence-related papers, divided into three periods (2016-2017, 2018-2019, 2020-2021). We selected seven topics based on the coherence score and used a pre-trained Word2vec embedding model trained on Wikipedia, an Internet encyclopedia. Based on the proposed methodology, we generated a semantic vector for each topic and, by reflecting the meaning of keywords, visualized and interpreted the topics by period. These experiments confirmed that the rise and fall of a keyword's topic weight can be used to interpret the semantic change of the corresponding topic and to grasp the relationships among topics. To overcome the limitations of dynamic topic modeling results, this study used word embedding and dimension reduction techniques to visualize topics by period. The results are meaningful in that they broaden the scope of topic understanding through the visualization of dynamic topic modeling results.
In addition, an academic contribution can be acknowledged in that the study lays a foundation for follow-up work using various word embeddings and dimensionality reduction techniques to improve the performance of the proposed methodology.
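
The second and third steps above (weighted sum of keyword embeddings, then dimension reduction for plotting) can be sketched as follows. The toy 4-D embeddings are hypothetical stand-ins for a pre-trained Word2Vec model, and PCA via SVD stands in for whichever dimension reduction the study used.

```python
# Sketch of the topic-vector construction: a topic's semantic vector is
# the topic-weighted sum of its keywords' embedding vectors, followed by
# dimension reduction to 2-D for matplotlib plotting. Embeddings below
# are hypothetical; in the paper they come from Word2Vec trained on Wikipedia.
import numpy as np

EMB = {  # hypothetical word embeddings (would come from Word2Vec)
    "learning": np.array([0.9, 0.1, 0.0, 0.2]),
    "network":  np.array([0.8, 0.2, 0.1, 0.0]),
    "robot":    np.array([0.1, 0.9, 0.3, 0.1]),
    "vision":   np.array([0.2, 0.8, 0.2, 0.3]),
}

def topic_vector(keywords):
    """Weighted sum of keyword embeddings using the topic weights."""
    return sum(w * EMB[k] for k, w in keywords)

topics = [
    topic_vector([("learning", 0.6), ("network", 0.4)]),
    topic_vector([("robot", 0.7), ("vision", 0.3)]),
]

# PCA via SVD reduces the semantic vectors to 2-D; each row of coords_2d
# is one topic's (x, y) plotting position for a given period.
X = np.stack(topics)
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
coords_2d = Xc @ vt[:2].T
```

Plotting `coords_2d` per period (e.g., with `matplotlib.pyplot.scatter`) then shows how each topic's position, and hence its meaning, drifts over time.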

The Impact of O4O Selection Attributes on Customer Satisfaction and Loyalty: Focusing on the Case of Fresh Hema in China (O4O 선택속성이 고객만족도 및 고객충성도에 미치는 영향: 중국 허마셴셩 사례를 중심으로)

  • Cui, Chengguo;Yang, Sung-Byung
    • Knowledge Management Research / v.21 no.3 / pp.249-269 / 2020
  • Recently, as the online market has matured, it faces many problems that hinder further growth. The most common is the homogenization of online products, which makes it difficult to attract additional customers. Moreover, although the share of the online market has increased significantly, expanding offline has now become essential for further development. In response, many online firms have recently sought to expand their businesses and marketing channels by securing offline spaces that can complement the limitations of online platforms, on top of the existing advantages of their online channels. Based on their competitive advantage in analyzing large volumes of customer data using information technologies (e.g., big data and artificial intelligence), they are also reinforcing their offline influence through this online-for-offline (O4O) business model. However, most existing research has focused primarily on the online-to-offline (O2O) business model, and there is still a lack of research on O4O business models, which have been actively attempted in various industries in recent years. Since the few O4O-related studies to date have been conducted only in experience marketing settings using a case study method, it is important to conduct an empirical study of O4O selection attributes and their impact on customer satisfaction and loyalty. Therefore, focusing on China's representative O4O business model, 'Fresh Hema,' this study identifies key selection attributes specialized for O4O services from the customers' viewpoint and examines the impact of these attributes on customer satisfaction and loyalty.
The results of structural equation modeling (SEM) with 300 customers experienced with O4O (Fresh Hema) reveal that, of seven O4O selection attributes, four (mobile app quality, mobile payment, product quality, and store facilities) affect customer satisfaction, which in turn leads to customer loyalty (reuse intention, recommendation intention, and brand attachment). This study should help managers in the O4O area adapt to rapidly changing customer needs and provides guidelines for enhancing both customer satisfaction and loyalty by allocating more resources to the more significant selection attributes.

Bus-only Lane and Traveling Vehicle's License Plate Number Recognition for Realizing V2I in C-ITS Environments (C-ITS 환경에서 V2I 실현을 위한 버스 전용 차선 및 주행 차량 번호판 인식)

  • Im, Changjae;Kim, Daewon
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.11 / pp.87-104 / 2015
  • Currently, IoT (Internet of Things) environments and related technologies are developing rapidly through networks that connect many intelligent objects. The IoT provides artificial intelligence services that combine context-recognition-based knowledge with communication between humans and objects and among objects themselves. With the help of IoT technology, much research is being conducted on the C-ITS (Cooperative Intelligent Transport System), which uses road infrastructure and traveling vehicles as traffic control resources to improve drivers' convenience and safety through two-way communication, covering topics such as bus-only lane enforcement, license plate recognition, and reporting of road accidents and works ahead, all ultimately aimed at improving traffic effectiveness. In this paper, a system is studied that decides whether a traveling vehicle is permitted to drive in the bus-only lane on a highway, using lane and number plate recognition in a C-ITS traffic infrastructure environment. After the location of the bus-only lane is determined through lane recognition, the number plates of vehicles straight ahead and to the sides are identified. Research results and experimental outcomes are presented, which are expected to be used by future traffic management infrastructure and control systems.
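
The decision step that follows lane and plate recognition can be sketched as below. The whitelist and the simplified plate pattern are hypothetical, not the paper's actual enforcement rules or the full Korean plate specification.

```python
# Sketch of the final decision the paper describes: once the lane position
# and a vehicle's plate string have been recognized, decide whether that
# vehicle may travel in the bus-only lane. The registered-bus whitelist and
# plate format are hypothetical simplifications.
import re

AUTHORIZED_PLATES = {"75바1234", "70아5678"}  # hypothetical registered buses

def may_use_bus_lane(in_bus_lane, plate):
    """A vehicle detected in the bus-only lane is compliant only if its
    recognized plate is well-formed and belongs to a registered bus."""
    if not in_bus_lane:
        return True                      # not in the restricted lane
    if not re.fullmatch(r"\d{2,3}[가-힣]\d{4}", plate):
        return False                     # unreadable or malformed plate
    return plate in AUTHORIZED_PLATES

violations = [p for p in ("75바1234", "12가3456") if not may_use_bus_lane(True, p)]
```

In a deployed C-ITS setting the recognized plate and decision would be reported back to the traffic management infrastructure over V2I communication.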

The Transformation of Norms and Social Problems: Focusing on the COVID-19 Pandemic (규범의 전환과 사회문제: 코로나를 중심으로)

  • Lee, Jangju
    • Korean Journal of Culture and Social Issue / v.28 no.3 / pp.513-527 / 2022
  • This study was conducted to examine the socio-cultural impact of the COVID-19 pandemic that swept the world around 2020 and the transformation of norms and social problems it caused. To this end, the characteristic changes in socio-cultural norms during the 14th-century European Black Death, a representative example of a pandemic, were derived, and the COVID-19 pandemic was analyzed on that basis. The Black Death served as an opportunity to shift social norms from the existing religious authority and the power of the feudal system toward the Enlightenment. The population decline and labor shortage also promoted commercialization and mechanization. Printing, which spread during this period, led to the popularization of knowledge, which raised the level of thinking and led to epochal scientific development, becoming the foundation of the Industrial Revolution. Like the Black Death, COVID-19 has triggered changes in social norms. The technological environment of the metaverse, a mixture of the virtual and the real, has changed the norm of a single consistent identity into free and open identities that exert various potentials through alternate characters. In addition, memes, through which people on the metaverse befriend those who share their worldview, weakened the sense of isolation in non-face-to-face situations. Artificial intelligence (AI), which developed during the COVID-19 pandemic, has entered the stage of being used for creative activities beyond merely assisting humans. Discussions are presented on what new social problems may be created by the social norms changed by the COVID-19 pandemic.

Intrusion Detection Method Using Unsupervised Learning-Based Embedding and Autoencoder (비지도 학습 기반의 임베딩과 오토인코더를 사용한 침입 탐지 방법)

  • Junwoo Lee;Kangseok Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.8 / pp.355-364 / 2023
  • As advanced cyber threats continue to increase, it is difficult to detect new types of cyber attacks with existing pattern- or signature-based intrusion detection methods. Therefore, research on anomaly detection methods using data-driven artificial intelligence technology is increasing. Supervised learning-based anomaly detection methods are difficult to use in real environments because they require sufficient labeled data for training, so unsupervised methods, which learn from normal data and detect anomalies by finding patterns in the data itself, have been actively studied. This study therefore aims to extract a latent vector that preserves useful sequence information from sequence log data and to develop an anomaly detection model using the extracted latent vector. Word2Vec was used to create a dense vector representation corresponding to the characteristics of each sequence, and unsupervised autoencoders were developed to extract latent vectors from the sequence data expressed as dense vectors. Three autoencoder models were developed: a denoising autoencoder based on the recurrent GRU (Gated Recurrent Unit), which is suited to sequence data; a one-dimensional convolutional autoencoder, which addresses the limited short-term memory problem the GRU can have; and an autoencoder combining the GRU and one-dimensional convolution. The data used in the experiment is the time-series-based NGIDS (Next Generation IDS Dataset). The experimental results show that the autoencoder combining the GRU and one-dimensional convolution was more efficient than the GRU-based or one-dimensional convolution-based autoencoders in terms of the training time needed to extract useful latent patterns, and showed stable anomaly detection performance with smaller fluctuations.
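
The core unsupervised principle above (train an autoencoder on normal data only, then flag inputs that reconstruct poorly) can be shown with a deliberately tiny linear autoencoder in NumPy. This is a sketch of the thresholding idea only; the paper's actual models are GRU/Conv1D denoising autoencoders over Word2Vec sequence embeddings, and the data here is synthetic.

```python
# Reconstruction-error anomaly detection with a linear (PCA-based)
# autoencoder: learn to reconstruct "normal" data, set a threshold on the
# training reconstruction errors, and flag inputs above the threshold.
import numpy as np

rng = np.random.default_rng(0)
# "Normal" training vectors lie near a 1-D subspace of 4-D space.
direction = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2)
train = rng.normal(size=(200, 1)) @ direction[None, :] \
        + 0.01 * rng.normal(size=(200, 4))

# Fit the "encoder": top principal component of the normal data.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:1]                       # latent dimension = 1

def reconstruction_error(x):
    z = (x - mean) @ components.T         # encode into the latent space
    x_hat = z @ components + mean         # decode back to input space
    return np.linalg.norm(x - x_hat, axis=-1)

# Threshold: a generous multiple of the errors seen on normal data.
threshold = np.quantile(reconstruction_error(train), 0.99) * 2

normal_point = 3.0 * direction
attack_point = np.array([0.0, 0.0, 5.0, 5.0])   # far off the normal subspace
is_anomaly = reconstruction_error(np.stack([normal_point, attack_point])) > threshold
```

A GRU- or Conv1D-based autoencoder replaces the linear encode/decode with learned sequence layers, but the scoring and thresholding logic is the same.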

A Study on Dataset Generation Method for Korean Language Information Extraction from Generative Large Language Model and Prompt Engineering (생성형 대규모 언어 모델과 프롬프트 엔지니어링을 통한 한국어 텍스트 기반 정보 추출 데이터셋 구축 방법)

  • Jeong Young Sang;Ji Seung Hyun;Kwon Da Rong Sae
    • KIPS Transactions on Software and Data Engineering / v.12 no.11 / pp.481-492 / 2023
  • This study explores how to build a Korean dataset for extracting information from text using generative large language models. In modern society, mixed information circulates rapidly, and effectively categorizing and extracting it is crucial to the decision-making process. However, there is still a lack of Korean datasets for training. To overcome this, this study extracts information through text-based zero-shot learning with a generative large language model in order to build a purpose-specific Korean dataset. The language model is instructed to produce the desired result through prompt engineering in the form "system"-"instruction"-"source input"-"output format", and the dataset is built by exploiting the in-context learning behavior of the language model through the input sentences. We validate the approach by comparing the generated dataset with an existing benchmark dataset, achieving 25.47% higher performance than the KLUE-RoBERTa-large model on the relation information extraction task. The results are expected to contribute to AI research by demonstrating the feasibility of extracting knowledge elements from Korean text. Furthermore, the methodology can be applied to various fields and purposes and has potential for building a variety of Korean datasets.
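
The four-part prompt layout the study describes can be sketched as a template function. The wording of each part and the example triple task are hypothetical illustrations, not the paper's actual prompts.

```python
# Sketch of the "system"-"instruction"-"source input"-"output format"
# prompt layout used for zero-shot information extraction. Section wording
# is a hypothetical example, not the study's actual prompt text.
def build_extraction_prompt(system, instruction, source, output_format):
    """Assemble one zero-shot information-extraction prompt for a
    generative large language model."""
    return (
        f"[system]\n{system}\n\n"
        f"[instruction]\n{instruction}\n\n"
        f"[source input]\n{source}\n\n"
        f"[output format]\n{output_format}\n"
    )

prompt = build_extraction_prompt(
    system="You are an information-extraction assistant for Korean text.",
    instruction="Extract every (subject, relation, object) triple.",
    source="세종대왕은 1443년에 훈민정음을 창제하였다.",
    output_format='JSON list: [{"subject": ..., "relation": ..., "object": ...}]',
)
```

Sending such a prompt for each source sentence and parsing the model's structured output yields the labeled examples that make up the generated dataset.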

Contactless Data Society and Reterritorialization of the Archive (비접촉 데이터 사회와 아카이브 재영토화)

  • Jo, Min-ji
    • The Korean Journal of Archival Studies / no.79 / pp.5-32 / 2024
  • The Korean government ranked 3rd among 193 UN member countries in the UN's 2022 e-Government Development Index. Korea, which has consistently been rated among the top countries, can clearly be called a world leader in e-government. The lubricant of e-government is data. Data itself is neither information nor a record, but it is the source of information and records and a resource of knowledge. Since administrative actions through electronic systems have become widespread, the production and technology of data-based records have naturally expanded and evolved. Technology may seem value-neutral, but in fact technology itself reflects a specific worldview. The digital order of new technologies, armed with hyper-connectivity and super-intelligence, profoundly influences not only traditional power structures but also the existing media for transmitting information and knowledge. Moreover, new technologies and media, including data-based generative artificial intelligence, are by far the hottest topic. The all-around growth and spread of digital technology has led to the augmentation of human capabilities and the outsourcing of thinking. This also involves a variety of problems, ranging from deepfakes and other fake images, automated profiling, and AI hallucinations that present fabrications as if they were real, to copyright infringement on machine learning data. Moreover, radical connectivity enables the instantaneous sharing of vast amounts of data and relies on the technological unconscious to generate actions without awareness. Another irony of the digital world and online networks, which are based on immaterial distribution and logical existence, is that access and contact can only be made through physical tools. Digital information is a logical object, but digital resources cannot be read or utilized without some device to relay them.
In that respect, machines in today's technological society have gone beyond simple assistance, and there are points at which it is difficult to call the entry of machines into human society a natural pattern of change driven by advanced technological development, because perspectives on machines will change over time. What matters are the social and cultural implications of the changes in how records are produced as a result of communication and action through machines. In the archives field as well, it is time to research what problems a data-based archival society will face amid the technological shift toward a hyper-intelligent, hyper-connected society, who will attest to the continuing activity of records and data, and what the main drivers of media change will be. This study began from the need to recognize that archives are not only records resulting from actions but also data as strategic assets. On this basis, the author considers how archives can expand their traditional boundaries and achieve reterritorialization in a data-driven society.

An Analysis of the Dynamics between Media Coverage and Stock Market on Digital New Deal Policy: Focusing on Companies Related to the Fourth Industrial Revolution (디지털 뉴딜 정책에 대한 언론 보도량과 주식 시장의 동태적 관계 분석: 4차산업혁명 관련 기업을 중심으로)

  • Sohn, Kwonsang;Kwon, Ohbyung
    • The Journal of Society for e-Business Studies / v.26 no.3 / pp.33-53 / 2021
  • At the crossroads of social change caused by the spread of the Fourth Industrial Revolution and the prolonged COVID-19 pandemic, the Korean government announced the Digital New Deal policy on July 14, 2020. The policy's primary goal is to create new businesses by accelerating digital transformation in the public sector and in industries centered on data, networks, and artificial intelligence technologies. However, in a rapidly changing social environment, information asymmetry about the future benefits of technology can cause differences in the public's ability to analyze the direction and effectiveness of policies, resulting in uncertainty about their practical effects. The media, for its part, leads the formation of discourse by disseminating government policies to the public and provides knowledge about specific issues through the news. In other words, as media coverage of a particular policy increases, issue concentration increases, which in turn affects public decision-making. Therefore, the purpose of this study is to verify the dynamic relationship between media coverage and the stock market regarding the Korean government's Digital New Deal policy, using Granger causality tests, impulse response functions, and variance decomposition analysis. To this end, the daily stock turnover ratio, daily price-earnings ratio, and EWMA volatility of digital-technology-based companies related to the Digital New Deal policy among KOSDAQ-listed companies were set as variables. As a result, keyword search volume, daily stock turnover ratio, and EWMA volatility each have a bidirectional Granger-causal relationship with media coverage, and an increase in media coverage has a strong impact on keyword search volume for the Digital New Deal policy. The impulse response analysis of media coverage also showed a sharp drop in EWMA volatility, whose influence gradually increased over time and played a role in mitigating stock market volatility.
Based on these findings, the amount of media coverage of the Digital New Deal policy has a significant dynamic relationship with the stock market.
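
The EWMA volatility variable above can be sketched with the standard RiskMetrics-style recursion sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_t^2. The decay factor 0.94 is the conventional RiskMetrics value, an assumption rather than the paper's documented setting.

```python
# Exponentially weighted moving average (EWMA) volatility of daily returns:
# a recursive variance estimate that weights recent squared returns more
# heavily. lam=0.94 is the conventional RiskMetrics decay, assumed here.
import numpy as np

def ewma_volatility(returns, lam=0.94):
    """Recursive EWMA variance of returns, returned as volatility
    (the square root of the variance estimate)."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = returns[0] ** 2            # initialize with first squared return
    for t in range(1, len(returns)):
        sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * returns[t] ** 2
    return np.sqrt(sigma2)

vol = ewma_volatility(np.array([0.01, -0.02, 0.015, 0.03]))
```

Such a per-day volatility series is then usable alongside turnover and price-earnings ratios as an endogenous variable in the Granger causality and impulse response analyses.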

Current status and future of insect smart factory farm using ICT technology (ICT기술을 활용한 곤충스마트팩토리팜의 현황과 미래)

  • Seok, Young-Seek
    • Food Science and Industry / v.55 no.2 / pp.188-202 / 2022
  • In the insect industry, as the scope of application expands from pet insects and natural enemies to feed, edible, and medicinal insects, demand for quality control of insect raw materials is increasing, as is interest in securing the safety of insect products. In scaling up the industry, controlling the temperature, humidity, and air quality in the insect breeding room and preventing the spread of pathogens and other pollutants are important success factors, which requires an environment regulated by a control system. European commercial insect breeding facilities have attracted considerable investor interest, and insect companies are building large-scale production facilities; this became possible after the EU approved the use of insect protein as feedstock for fish farming in July 2017. Other fields, such as food and medicine, have also accelerated the application of cutting-edge technology. In the future, the global insect industry is expected to subdivide increasingly into systems that purchase eggs or small larvae from suppliers and focus on larval fattening, i.e., the production raw material, until the insects mature; systems that handle the entire production process from egg laying through harvesting and initial pre-treatment of larvae; and large-scale production systems that cover all stages of insect larva production plus further processing steps such as milling, fat removal, and protein or fat fractionation. In Korea, research and development on insect smart factory farms using artificial intelligence and ICT is accelerating, so that insects can be used not only in existing feed and food but also as carbon-free materials in secondary industries such as natural plastics and natural molding materials. A Korean-style customized breeding system that shortens the breeding period or enhances functionality is expected to be developed soon.
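
The environmental control mentioned above can be sketched as a simple hysteresis (on/off) controller that holds a breeding room inside a target temperature band. The setpoints are hypothetical illustrations, not taken from any real smart factory farm specification.

```python
# Sketch of a hysteresis controller for an insect breeding room: turn the
# heater on below the band, off above it, and hold the previous state
# inside the band to avoid rapid switching. Setpoints are hypothetical.
def heater_command(temp_c, heating_on, low=27.0, high=29.0):
    """Return the next heater state for the given temperature reading."""
    if temp_c < low:
        return True
    if temp_c > high:
        return False
    return heating_on        # within the dead band: hold the current state

# Simulated readings: the heater state changes only at the band edges.
states = []
heating = False
for reading in (26.5, 27.5, 28.5, 29.5, 28.0):
    heating = heater_command(reading, heating)
    states.append(heating)
```

In an ICT-based smart factory farm, the same loop would run against networked sensors, with humidity and air quality handled by analogous controllers.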