• Title/Summary/Keyword: Complex systems

Ontology-Based Process-Oriented Knowledge Map Enabling Referential Navigation between Knowledge (지식 간 상호참조적 네비게이션이 가능한 온톨로지 기반 프로세스 중심 지식지도)

  • Yoo, Kee-Dong
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.61-83 / 2012
  • A knowledge map describes the network of related knowledge in the form of a diagram, and therefore underpins the structure of knowledge categorization and archiving by defining the referential-navigation relationships between pieces of knowledge. Referential navigation between knowledge refers to the cross-referencing relationships exhibited when a piece of knowledge is used: to understand its contents, a user usually requires additional information or knowledge related to it in a cause-and-effect relation. This relation expands as the effective connections between knowledge increase, and finally forms a network of knowledge. A network display of knowledge, using nodes and links to arrange and represent the relationships between concepts, can express a more complex knowledge structure than a hierarchical display, and can also help a user draw inferences through the links shown on the network. For this reason, building a knowledge map based on ontology technology has been emphasized as a way to describe knowledge and its relationships formally as well as objectively. As this necessity has been emphasized, numerous studies have been proposed to fulfill the need. However, most studies that applied ontologies to knowledge maps focused only on formally expressing knowledge and its relationships with other knowledge to promote the possibility of knowledge reuse. Although many types of ontology-based knowledge maps have been proposed, no research has tried to design and implement a knowledge map enabling referential navigation. This paper addresses a methodology to build an ontology-based knowledge map enabling referential navigation between knowledge.
The ontology-based knowledge map resulting from the proposed methodology can not only express referential navigation between knowledge but also infer additional relationships among knowledge based on the referential relationships. The most notable benefits of applying ontology technology to the knowledge map include: formal expression of knowledge and its relationships with others, automatic identification of the knowledge network through self-inference on the referential relationships, and automatic expansion of the knowledge base designed to categorize and store knowledge according to the network between knowledge. To enable referential navigation between the knowledge included in the map, and therefore to form the map as a network, the ontology must describe knowledge in relation to processes and tasks. A process is composed of component tasks, and a task is activated after its required knowledge is provided as input. Since the cause-and-effect relation between knowledge is inherently determined by the sequence of tasks, the referential relationship between knowledge can be indirectly implemented if each piece of knowledge is modeled as an input or output of a task. To describe knowledge with respect to its related process and task, Protege-OWL, an editor that enables users to build ontologies for the Semantic Web, is used. An OWL ontology-based knowledge map includes descriptions of classes (process, task, and knowledge), properties (relationships between process and task, and between task and knowledge), and their instances. Given such an ontology, the OWL formal semantics specifies how to derive its logical consequences, i.e., facts not literally present in the ontology but entailed by its semantics. A knowledge network can therefore be formulated automatically from the defined relationships, enabling referential navigation between knowledge.
To verify the validity of the proposed concepts, two real business process-oriented knowledge maps are exemplified: the knowledge maps of the 'Business Trip Application' and 'Purchase Management' processes. The performance of the implemented ontology-based knowledge map was examined by applying DL-Query, provided by Protege-OWL as a plug-in module. Two kinds of queries were tested: whether the knowledge is networked according to the referential relations, and whether the ontology-based knowledge network can infer further facts that are not literally described. The test results show that referential navigation between knowledge was correctly realized and that the additional inference was performed accurately.
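The process-task-knowledge chain described above can be sketched in a few lines of Python (a toy illustration with invented task and knowledge names, not the paper's OWL implementation; the actual system uses Protege-OWL classes and DL-Query inference):

```python
# Toy 'Business Trip Application' process: each task consumes and produces
# knowledge; referential links between knowledge are derived from the tasks,
# and navigation follows those links transitively (the inference step).
tasks = [
    {"name": "fill_form",   "inputs": ["trip_policy"],  "outputs": ["trip_request"]},
    {"name": "approve",     "inputs": ["trip_request"], "outputs": ["approval_doc"]},
    {"name": "book_travel", "inputs": ["approval_doc"], "outputs": ["itinerary"]},
]

def referential_links(tasks):
    """Derive knowledge-to-knowledge links from task input/output pairs."""
    links = set()
    for t in tasks:
        for k_in in t["inputs"]:
            for k_out in t["outputs"]:
                links.add((k_in, k_out))
    return links

def navigate(links, start):
    """Transitively follow referential links from a starting knowledge item."""
    reachable, frontier = set(), {start}
    while frontier:
        nxt = {b for (a, b) in links for s in frontier if a == s}
        nxt -= reachable
        reachable |= nxt
        frontier = nxt
    return reachable

links = referential_links(tasks)
print(navigate(links, "trip_policy"))
```

The `navigate` call returns every piece of knowledge reachable from 'trip_policy', mirroring the automatic knowledge-network formulation the abstract describes.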

An Integrated Model of Land/Transportation System

  • 이상용
    • Proceedings of the KOR-KST Conference / 1995.12a / pp.45-73 / 1995
  • This paper presents a system dynamics model which can generate land use and transportation system performance simultaneously. The model system consists of seven submodels (population, migration of population, household, job growth-employment-land availability, housing development, travel demand, and traffic congestion level), each designed on the basis of causality functions and feedback-loop structures among a large number of physical, socio-economic, and policy variables. The important advantages of the system dynamics model are as follows. First, the model can address the complex interactions between land use and transportation system performance dynamically; therefore, it can be an effective tool for evaluating the period-by-period effect of a policy over time horizons. Second, the system dynamics model does not rely on the assumption of an equilibrium state of urban systems, as conventional models do, since it determines the state of model components directly through dynamic system simulation. Third, the system dynamics model is very flexible in reflecting new features, such as a policy, a new phenomenon which has not existed in the past, a special event, or a useful concept from another methodology, since it consists of many separate equations. In Chapters I, II, and III, the overall approach and structure of the model system are discussed with causal-loop diagrams and major equations. In Chapter V, the developed model is applied to the analysis of the impact of highway capacity expansion on land use in Montgomery County, MD. The year-by-year impacts of highway capacity expansion on congestion level and land use are analyzed under several possible scenarios for the expansion. This is the first comprehensive attempt to use dynamic system simulation modeling in the simultaneous treatment of land use and transportation system interactions.
The model structure is not very elaborate, mainly due to the limited availability of behavioral data, but the model performance results indicate that the proposed approach is a promising one for dealing comprehensively with the complicated urban land use/transportation system.
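The feedback-loop idea behind such a model can be illustrated with a minimal sketch (invented toy stocks and coefficients, not the paper's seven-submodel system): job growth raises congestion, and congestion in turn dampens further growth, stepped year by year.

```python
# Minimal system-dynamics sketch: two coupled state variables with a
# negative feedback loop, simulated in discrete yearly steps.
def simulate(years, jobs=100.0, congestion=0.1,
             base_growth=0.05, congestion_per_job=0.001, damping=0.5):
    history = []
    for _ in range(years):
        growth = base_growth * (1.0 - damping * congestion)  # congestion slows growth
        jobs += jobs * growth
        congestion = min(1.0, congestion_per_job * jobs)     # activity raises congestion
        history.append((jobs, congestion))
    return history

for year, (j, c) in enumerate(simulate(5), start=1):
    print(year, round(j, 1), round(c, 3))
```

Because each period's state feeds the next, no equilibrium assumption is needed, which is exactly the advantage the abstract claims for system dynamics over conventional equilibrium models.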

Current Development of Company Law in the European Union (유럽주식회사법의 최근 동향에 관한 연구)

  • Choi, Yo-Sop
    • Journal of Legislation Research / no.41 / pp.229-260 / 2011
  • European Union (EU) law has been a complex but at the same time fascinating subject of study due to its dynamic evolution. In particular, the Lisbon Treaty, which entered into force in December 2009, represents the culmination of a decade of attempts at Treaty reform and harmonisation in diverse sectors. Among the fields of EU private law, company law harmonisation has been one of the most hotly debated issues with regard to the freedom of establishment in the internal market. Due to the significant differences between national provisions on company law, harmonisation seemed somewhat difficult. However, Council Regulation 2157/2001, adopted in 2001, now provides the basis for the Statute for a European Company (Societas Europaea: SE), and the Statute is supplemented by Council Directive 2001/86 on the involvement of employees. The SE Statute is a legal measure intended to contribute to the internal market, and it provides a choice for companies that wish to merge, create a joint subsidiary, or convert a subsidiary into an SE. Through this option, the SE became a corporate form available only to existing companies incorporated in different Member States of the EU. The important question about the meaning of the SE Statute is whether the distinctive characteristics of the SE make it an attractive enough option to ensure significant numbers of SE registrations. In fact, the outcome produced by the SE Statute is an example of regulatory competition. The traditional regulatory competition regarding the freedom of establishment has been between the national statutes of Member States.
However, this time it is not a competition between Member States: the Union has joined the competition between legal orders and is now in competition with the company-law systems of the Member States. Quite a number of scholars expect that the number of SEs will increase significantly. Of course, there is no evidence of regulatory competition that Korea currently faces. However, because of the increasing volume of international trade and the expansion of regional economic blocs, it is necessary to consider the example of the development of EU company law. In addition to the existing SE Statute, the EU Commission has also proposed a new corporate form, the Societas Privata Europaea (private limited liability company). All of these developments in European company law will help firms make the best choice for company establishment. A Delaware-style development in the EU will foster the race to the bottom, thereby improving the contents of company law. To conclude, the study of the development of European company law is important for understanding the evolution of company law and harmonisation efforts in the EU. Key Words: European Union, EU Company Law, Societas Europaea, SE Statute, One-tier System, Two-tier System, Race to the Bottom

"As the Scientific Witness Is a Court Witness and Is Not a Party Witness" ("과학의 승리"는 어떻게 선언될 수 있는가? 친자 확인을 위한 혈액형 검사가 법원으로 들어갔던 과정)

  • Kim, Hyomin
    • Journal of Science and Technology Studies / v.19 no.1 / pp.1-51 / 2019
  • The understanding of law and science as two fundamentally different systems, in which fact stands against justice and rapid progress against prudent process, is far too simple to be valid. Nonetheless, such an account is commonly employed to explain the tension between law and science, or justice and truth. Previous STS research raises fundamental doubts about the off-the-shelf concept of "scientific truth" that can be introduced to the court for legal judgment. Delimiting the qualifications of the expert, the value of expert knowledge, or the criteria of scientific expertise has always involved social negotiation. What values affect the boundary-making of the thing called "modern science" that is supposedly useful in solving legal conflicts? How do the value of law and the meaning of justice change as the boundaries of modern science take shape? What is the significance of "science" when it is emphasized, particularly in relation to the legal provisions on paternity, and how does this perception of science affect the unfolding of legal disputes? To explore answers to these questions, we follow a process in which a kind of "knowledge-deficit model" of the court — that is, the view that law lags behind science and thus under-employs its useful functions — can be closely examined. We attend to a series of discussions and subsequent changes that occurred in US courts between the 1930s and 1970s, when blood type tests began to be used to determine parental relations. In conclusion, we argue that it was neither nature nor truth in itself that was excavated by the forensic scientists and legal practitioners who regarded blood type tests as a truth machine. Rather, it was their careful practices and crafty narratives that made the roadmaps of modern science, technology, and society on which complex tensions between modern states, families, and courts were seen to be "resolved".

An Investigation on the Periodical Transition of News related to North Korea using Text Mining (텍스트마이닝을 활용한 북한 관련 뉴스의 기간별 변화과정 고찰)

  • Park, Chul-Soo
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.63-88 / 2019
  • The goal of this paper is to investigate changes in North Korea's domestic and foreign policies through automated text analysis of North Korea as represented in South Korean mass media. Based on that data, we analyze the status of text mining research, using text mining techniques to find the topics, methods, and trends of such research, and we investigate the characteristics and analysis methods of these techniques as confirmed by the data. In this study, the R program, free software for statistical computing and graphics, was used to apply the text mining techniques. Text mining methods make it possible to highlight the most frequently used keywords in a body of text, for example by creating a word cloud (also referred to as a text cloud or tag cloud). This study proposes a procedure to find meaningful tendencies based on a combination of word clouds and co-occurrence networks. It aims to explore more objectively the images of North Korea represented in South Korean newspapers by quantitatively reviewing patterns of language use related to North Korea in newspaper big data from November 1, 2016 to May 23, 2019. Considering recent inter-Korean relations, the data were divided into three periods. The period before January 1, 2018 was set as the Before Phase of Peace Building. From January 1, 2018 to February 24, 2019, we set the Peace Building Phase, when Kim Jong-un's New Year's message and the PyeongChang Olympics formed an atmosphere of peace on the Korean peninsula. After the Hanoi peace summit, the third period was marked by silence in the relationship between North Korea and the United States, and was therefore called the Depression Phase of Peace Building. This study analyzes news articles related to North Korea from the Korea Press Foundation database (www.bigkinds.or.kr) through text mining, to investigate the characteristics of the Kim Jong-un regime's South Korea policy and unification discourse.
The main results of this study show that trends in the North Korean national policy agenda can be discovered using clustering and visualization algorithms. In particular, the study examines changes in international circumstances, domestic conflicts, the living conditions of North Korea, the South's aid projects for the North, inter-Korean conflicts, the North Korean nuclear issue, and the North Korean refugee problem through co-occurrence word analysis. It also offers an analysis of the South Korean mentality toward North Korea in terms of semantic prosody. In the Before Phase of Peace Building, the analysis extracted keywords in the order 'Missile', 'North Korea Nuclear', 'Diplomacy', 'Unification', and 'South-North Korean'. In the Peace Building Phase, the order was 'Panmunjom', 'Unification', 'North Korea Nuclear', 'Diplomacy', and 'Military'. In the Depression Phase of Peace Building, the order was 'North Korea Nuclear', 'North and South Korea', 'Missile', 'State Department', and 'International'. Sixteen words appeared in all three periods, including: 'Missile', 'North Korea Nuclear', 'Diplomacy', 'Unification', 'North and South Korea', 'Military', 'Kaesong Industrial Complex', 'Defense', 'Sanctions', 'Denuclearization', 'Peace', 'Exchange and Cooperation', and 'South Korea'. We expect that the results of this study will contribute to analyzing trends in news content about North Korea associated with North Korea's provocations, and future research on North Korean trends will be conducted based on these results. We will continue to study the development of a model for measuring North Korea risk that can anticipate and respond to North Korea's behavior in advance, and we expect that the text mining analysis method and scientific data analysis techniques will be applied to North Korea and unification research. Through such academic studies, we hope to see many works that make important contributions to the nation.
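The word-frequency step underlying a word cloud can be sketched as follows (a stand-in English example with invented headlines; the study used Korean headlines with the R environment and KrKwic):

```python
from collections import Counter
import re

# Tokenize a small corpus, drop stopwords, and count keyword frequencies —
# the raw counts a word cloud visualizes by font size.
docs = [
    "North Korea missile test raises diplomacy concerns",
    "Summit on North Korea nuclear issue renews diplomacy hopes",
    "Missile launch follows stalled North Korea nuclear talks",
]
stopwords = {"on", "the", "of", "a", "and", "to"}

def keyword_counts(docs, stopwords):
    tokens = []
    for d in docs:
        tokens += [w for w in re.findall(r"[a-z]+", d.lower()) if w not in stopwords]
    return Counter(tokens)

counts = keyword_counts(docs, stopwords)
print(counts.most_common(3))
```

The same frequency table also feeds the period-by-period keyword rankings reported in the abstract.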

Predicting stock movements based on financial news with systematic group identification (시스템적인 군집 확인과 뉴스를 이용한 주가 예측)

  • Seong, NohYoon;Nam, Kihwan
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.1-17 / 2019
  • Because stock price forecasting is an important issue both academically and practically, research on stock price prediction has been actively conducted. Stock price forecasting research can be classified by whether it uses structured or unstructured data. With structured data such as historical stock prices and financial statements, past studies usually took technical-analysis and fundamental-analysis approaches. In the big data era, the amount of information has increased rapidly, and artificial intelligence methodologies that can find meaning by quantifying textual information — unstructured data that makes up a large share of that information — have developed rapidly. With these developments, many attempts are being made to predict stock prices from online news by applying text mining to stock price forecasting. The methodology adopted in many papers is to forecast stock prices using news about the target companies. However, according to previous research, not only news about a target company but also news about related companies can affect its stock price. Finding highly relevant companies is not easy, however, because of market-wide effects and random signals. Thus, existing studies have identified relevant companies based primarily on pre-determined international industry classification standards. Yet, according to recent research, the Global Industry Classification Standard has varying homogeneity within its sectors, so forecasting stock prices by taking whole sectors together, without isolating only the relevant companies, can adversely affect predictive performance. To overcome this limitation, we are the first to use random matrix theory together with text mining for stock prediction. When the dimension of the data is large, the classical limit theorems are no longer suitable, because statistical efficiency is reduced.
Therefore, a simple correlation analysis in the financial market does not reveal the true correlation. To solve this issue, we adopt random matrix theory, which is mainly used in econophysics, to remove market-wide effects and random signals and find the true correlations between companies. With the true correlations, we perform cluster analysis to find relevant companies. Based on the clustering, we use a multiple kernel learning algorithm — an ensemble of support vector machines — to incorporate the effects of the target firm and its relevant firms simultaneously; each kernel is assigned to predict stock prices with features from the financial news of the target firm or its relevant firms. The results of this paper are as follows. (1) Following the existing research flow, we confirmed that using news from relevant companies is an effective way to forecast stock prices. (2) When looking for relevant companies, looking in the wrong way can lower AI prediction performance. (3) The proposed approach with random matrix theory performs better than previous studies when cluster analysis is based on the true correlations, with market-wide effects and random signals removed. The contributions of this study are as follows. First, it shows that random matrix theory, used mainly in econophysics, can be combined with artificial intelligence to produce good methodologies, suggesting that it is important not only to develop AI algorithms but also to adopt theory from physics. This extends existing research that integrated artificial intelligence with complex systems theory through transfer entropy. Second, this study stresses that finding the right companies in the stock market is an important issue, suggesting that it matters not only to study artificial intelligence algorithms but also how to adjust the input values theoretically.
Third, we confirmed that firms classified together under the Global Industry Classification Standard (GICS) may have low relevance to one another, and suggested that it is necessary to define relevance theoretically rather than simply reading it off the GICS.
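The random-matrix filtering step can be illustrated with the standard Marchenko-Pastur bounds (a generic sketch of the technique, not the paper's exact procedure; the eigenvalues shown are invented): for T observations of N assets, eigenvalues of a purely random correlation matrix fall inside a known interval, so eigenvalues above the upper bound are treated as genuine structure.

```python
import math

# Marchenko-Pastur bounds for the eigenvalue spectrum of a random
# correlation matrix; eigenvalues above lambda_plus carry signal.
def mp_bounds(n_assets, n_obs):
    q = n_assets / n_obs  # assumes n_obs >= n_assets for this form
    return (1 - math.sqrt(q)) ** 2, (1 + math.sqrt(q)) ** 2

lo, hi = mp_bounds(n_assets=100, n_obs=400)  # q = 0.25
eigenvalues = [9.1, 2.6, 1.8, 0.9, 0.4]      # toy correlation-matrix spectrum
signal = [ev for ev in eigenvalues if ev > hi]
print(lo, hi, signal)
```

Eigenvectors belonging to the retained eigenvalues then define the "true" correlation structure on which the cluster analysis runs.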

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • Development of technologies in artificial intelligence has been accelerating with the Fourth Industrial Revolution, and research related to AI has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. The field of artificial intelligence has achieved more technological advances than ever, due to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable, processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used with statistical artificial intelligence such as machine learning. More recently, the purpose of the knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. These knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task that still requires a great deal of expert effort. In recent years, much research and technology in knowledge-based artificial intelligence uses DBpedia, one of the biggest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, user-created summaries of an article's unifying aspects.
This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of the accuracy of knowledge, since the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we explain a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the proper sentences from which to extract triples, and selecting values and transforming them into RDF triple structure. The structure of Wikipedia infoboxes is defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training data set from a Wikipedia dump using a method that adds BIO tags to sentences, training about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we performed comparative evaluations of CRF and Bi-LSTM-CRF for the knowledge extraction process.
Through the proposed process, structured knowledge can be utilized by extracting knowledge according to the ontology schema from text documents. In addition, this methodology can significantly reduce the effort experts spend constructing instances according to the ontology schema.
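The BIO-tagging step used to auto-generate training data can be sketched as follows (hypothetical English sentence and label; the actual system labels Korean Wikipedia sentences with infobox attribute values):

```python
# Given a tokenized sentence and a known attribute value, mark the value
# span with B-/I- tags and everything else with O — the label format used
# to train sequence models such as CRF or Bi-LSTM-CRF.
def bio_tags(tokens, value_tokens, label):
    tags = ["O"] * len(tokens)
    n = len(value_tokens)
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == value_tokens:
            tags[i] = "B-" + label
            for j in range(i + 1, i + n):
                tags[j] = "I-" + label
            break
    return tags

tokens = "Seoul is the capital of South Korea".split()
print(bio_tags(tokens, ["South", "Korea"], "COUNTRY"))
```

Pairing each token with its tag yields exactly the kind of distantly supervised training examples the paper generates from infobox values found in article text.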

Positron Annihilation Spectroscopy of Active Galactic Nuclei

  • Doikov, Dmytry N.;Yushchenko, Alexander V.;Jeong, Yeuncheol
    • Journal of Astronomy and Space Sciences / v.36 no.1 / pp.21-33 / 2019
  • This paper focuses on the interpretation of radiation fluxes from active galactic nuclei (hereafter AGN). The advantage of positron annihilation spectroscopy over other methods of spectral diagnostics of AGN is demonstrated. A relationship between the regular and random components in both the bolometric and spectral composition of the fluxes of quanta and particles generated in AGN is found. We consider their diffuse component separately and also detect radiative feedback after the passage of high-velocity cosmic rays and hard quanta through the gas-and-dust aggregates surrounding massive black holes in AGN. The motion of relativistic positrons and electrons in such complex systems produces secondary radiation throughout the whole investigated region of the AGN, in the form of a cylinder with radius R = 400-1000 pc and height H = 200-400 pc, thus causing visible luminescence across all spectral bands. We obtain radiation and electron energy distribution functions depending on the spatial distribution of the investigated bulk of matter in the AGN. Radiation luminescence of the non-central part of the AGN is the response of the atoms, molecules, and dust of its diffuse component to the particles and quanta arriving from its center. The cross-sections for single-photon annihilation of positrons of different energies with atoms in these active galactic nuclei are determined. For the first time, we use data on the change in chemical composition due to spallation reactions induced by high-energy particles. We establish, or define more accurately, how the energies of the incident positron, the emitted γ-quantum, and the recoiling nucleus correlate with the atomic number and weight of the target nucleus. For light elements, we provide detailed tables of all the indicated parameters. A new criterion is proposed, based on the ratio of the fluxes of γ-quanta formed in one- and two-photon annihilation of positrons in a diffuse medium.
It is concluded that, as in young supernova remnants, two-photon annihilation tends to occur in solid-state grains as a result of active loss of the positrons' kinetic energy through ionisation down to the thermal energy of free electrons. Single-photon annihilation of positrons manifests itself in the gas component of active galactic nuclei. Such annihilation occurs as an interaction between positrons and K-shell electrons; hence, it is suitable for identifying the chemical state of the substances comprising the gas component of the investigated media. Specific physical media producing high fluxes of positrons are discussed, which allowed a significant reduction in the number of reaction channels generating positrons. We estimate the brightness distribution in the γ-ray spectra of the gas-and-dust media through which positron fluxes travel, with an energy range similar to that recorded by the Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics (PAMELA) research module. Based on the results of our calculations, we analyse the reasons for the high penetrating power of positrons through gas-and-dust aggregates. The energy loss of positrons by ionisation is compared to the production of secondary positrons by high-energy cosmic rays in order to determine the depth of their penetration into the gas-and-dust aggregations clustered in AGN. The relationship between the energy of γ-quanta emitted upon single-photon annihilation and the energy of the incident electrons is established. The obtained cross-sections for positron interactions with bound electrons of the diffuse component of the non-central, peripheral AGN regions allowed us to obtain new spectroscopic characteristics of the atoms involved in single-photon annihilation.
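The single-photon annihilation channel discussed above obeys a standard energy balance (textbook relation, not taken from the paper): for a positron of kinetic energy $T_{+}$ annihilating with a K-shell electron of binding energy $E_{B}$,

```latex
E_{\gamma} \approx T_{+} + 2 m_{e} c^{2} - E_{B} - E_{\mathrm{rec}}
```

where $2 m_{e} c^{2} \approx 1022$ keV is the rest energy of the pair and $E_{\mathrm{rec}}$ is the recoil energy carried off by the nucleus, which absorbs the excess momentum; this is why the single-photon channel requires a bound electron, whereas two-photon annihilation can proceed with a free electron.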

A Study on the Determinants of Demand for Visiting Department Stores Using Big Data (POS) (빅데이터(POS)를 활용한 백화점 방문수요 결정요인에 관한 연구)

  • Shin, Seong Youn;Park, Jung A
    • Land and Housing Review / v.13 no.4 / pp.55-71 / 2022
  • Recently, the domestic department store industry has been growing into a complex shopping and cultural space, advanced and differentiated by changes in consumption patterns, and competition is intensifying across the 70 stores operated by five large companies. This study investigates the determinants of visits to department stores using data from automatic vehicle access systems (POS), a form of big data, and proposes ways to strengthen the competitiveness of the department store industry. We use a negative binomial regression to predict the frequency of visits to 67 branches, excluding three branches whose annual sales data were incomplete because they newly opened in 2021. The results show that the demand for visiting department stores is positively associated with proximity to airports, terminals, and train stations, land area, parking lots, the number of VIP lounges, the ratio of luxury stores, the number of F&B stores, non-commercial areas, and hotels. We suggest four strategies to enhance the competitiveness of domestic department stores. First, department store consumers have a high preference for luxury brands; therefore, department stores need to form their own overseas buyer teams to discover and attract new luxury brands and draw in customers with high demand for them. In addition, to attract consumers with high purchasing power and loyalty, it is necessary to provide more differentiated products and services for VIP customers than before. Second, it is desirable to focus on transportation hub areas such as train stations, airports, and terminals in Gyeonggi and Incheon. Third, department stores should attract tenants who can satisfy customers, given that key tenants are an important component of advanced shopping centers.
Finally, the department store, as a top-end shopping center, should be developed into a space with differentiated shopping, culture, dining, and leisure services, such as "The Hyundai", which opened in 2021, to ensure future growth potential.
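The count-data distribution behind such a regression can be sketched with the negative binomial (NB2) probability mass function (toy parameters, standard library only; the paper fits this distribution to branch visit counts, which a Poisson model would handle poorly under overdispersion):

```python
import math

# NB2 probability mass function with mean mu and dispersion alpha,
# parameterized as in count-data regression (variance = mu + alpha * mu^2).
def nb2_pmf(y, mu, alpha):
    r = 1.0 / alpha           # size parameter
    p = r / (r + mu)          # success probability
    log_pmf = (math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
               + r * math.log(p) + y * math.log(1.0 - p))
    return math.exp(log_pmf)

mu, alpha = 4.0, 0.5          # toy values: mean 4 visits, overdispersed
total = sum(nb2_pmf(y, mu, alpha) for y in range(200))
print(round(total, 6))        # probabilities sum to ~1
```

In the regression itself, each branch's mean is tied to its covariates (parking lots, luxury-store ratio, and so on) via a log link, and the dispersion parameter absorbs the extra variance.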

Analysis of News Agenda Using Text mining and Semantic Network Analysis: Focused on COVID-19 Emotions (텍스트 마이닝과 의미 네트워크 분석을 활용한 뉴스 의제 분석: 코로나 19 관련 감정을 중심으로)

  • Yoo, So-yeon;Lim, Gyoo-gun
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.47-64
    • /
    • 2021
  • The global spread of COVID-19 has affected many parts of daily life and has had a huge impact on the economy and society. As the numbers of confirmed cases and deaths increase, medical staff and the public are reported to experience psychological problems such as anxiety, depression, and stress. The collective tragedy that accompanies an epidemic raises fear and anxiety, which are known to cause enormous disruption to the behavior and psychological well-being of many people. Because prolonged negative emotions can reduce immunity and disturb physical balance, it is essential to understand people's psychological state during COVID-19. This study suggests a method of monitoring media news in the prolonged COVID-19 situation, which requires not only physical but also psychological quarantine, and shows how a relatively simple method of social media network analysis can be applied to such cases. The aim of this study is to assist health policymakers in fast and complex decision-making. News plays a major role in setting the policy agenda, and among various major media elements, news headlines are considered important in communication science as summaries of the core content that the media want to convey to their audiences. The news data used in this study were collected through "Bigkinds", a service built on big data technology. From the collected news data, keywords were extracted through text mining, and the relationships between words were visualized through semantic network analysis. Using the KrKwic program, a Korean semantic network analysis tool, text mining was performed and word frequencies were calculated to identify keywords.
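The keyword-frequency step performed here with KrKwic can be approximated in plain Python. The headlines below are hypothetical English stand-ins for the Korean headlines the study collected from Bigkinds:

```python
import re
from collections import Counter

# Hypothetical stand-ins for news headlines; the study used Korean
# headlines collected via Bigkinds and processed with KrKwic.
headlines = [
    "COVID anxiety rises as confirmed cases grow",
    "COVID blue: depression spreads with social distancing",
    "Health experts warn of COVID stress and anxiety",
]

# Tokenize, drop very short tokens as a crude stopword filter,
# and count word frequencies to surface keywords.
tokens = [w.lower()
          for h in headlines
          for w in re.findall(r"[A-Za-z]+", h)
          if len(w) > 3]
freq = Counter(tokens)
print(freq.most_common(5))  # top keywords by frequency
```

A real pipeline would add morphological analysis (essential for Korean) and a proper stopword list, but the frequency-counting logic is the same.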
The frequencies of words appearing in the keywords of articles related to COVID-19 emotions were computed and visualized in a word cloud; 'China', 'anxiety', 'situation', 'mind', 'social', and 'health' appeared frequently in relation to COVID-19 emotions. In addition, UCINET, a specialized social network analysis program, was used for connection centrality analysis and cluster analysis, and the network was visualized as a graph using NetDraw. The connection centrality analysis showed that the most central keywords in the keyword network were 'psychology', 'COVID-19', 'blue', and 'anxiety'. The co-occurrence frequencies of the keywords appearing in news headlines were also visualized as a graph, in which the thickness of a line is proportional to the co-occurrence frequency: if two words frequently appear together, they are connected by a thick line. The 'COVID-blue' pair is drawn with the boldest line, and the 'COVID-emotion' and 'COVID-anxiety' pairs with relatively thick lines. 'Blue' in the context of COVID-19 means depression, confirming that COVID-19 and depression are keywords that deserve attention now. The research methodology used in this study can quickly measure social phenomena and changes at low cost. By analyzing news headlines, we were able to identify people's feelings and perceptions on issues related to COVID-19 depression and to derive the important keywords that define the main agendas for analysis. By presenting and visualizing the topics and key keywords related to COVID-19 emotions at a glance, the method can provide medical policymakers with a variety of perspectives when examining the phenomenon.
These results are expected to serve as basic data for developing support, treatment, and services for psychological quarantine issues related to COVID-19.
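The co-occurrence network and centrality analysis that the study performed in UCINET/NetDraw can be sketched with the open-source `networkx` library. The keyword sets per headline below are hypothetical, and degree centrality is used here as a stand-in for the "connection centrality" the paper reports:

```python
from itertools import combinations
import networkx as nx

# Hypothetical keyword sets, one per headline; the study extracted
# these from Korean headlines with KrKwic and analyzed them in UCINET.
headline_keywords = [
    {"covid", "anxiety", "psychology"},
    {"covid", "blue", "depression"},
    {"covid", "blue", "psychology"},
    {"covid", "emotion", "anxiety"},
]

# Build a co-occurrence graph: edge weight = number of headlines in
# which two keywords appear together (the "line thickness" in NetDraw).
G = nx.Graph()
for kws in headline_keywords:
    for a, b in combinations(sorted(kws), 2):
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Degree centrality approximates the connection centrality used to
# identify the most central keywords in the network.
centrality = nx.degree_centrality(G)
top = sorted(centrality, key=centrality.get, reverse=True)
print(top[:3])
print(G["covid"]["blue"]["weight"])  # co-occurrence count of the pair
```

Plotting with `nx.draw` and edge widths scaled by `weight` would reproduce the thick-line visualization the abstract describes.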