• Title/Summary/Keyword: News Analysis


A Study on Commodity Asset Investment Model Based on Machine Learning Technique (기계학습을 활용한 상품자산 투자모델에 관한 연구)

  • Song, Jin Ho; Choi, Heung Sik; Kim, Sun Woong
    • Journal of Intelligence and Information Systems / v.23 no.4 / pp.127-146 / 2017
  • Services using artificial intelligence have begun to appear in daily life. Artificial intelligence is applied to consumer electronics and communications products such as AI refrigerators and speakers. In the financial sector, Goldman Sachs improved its stock trading process using Kensho's artificial intelligence technology: two stock traders could handle the work of 600, and analytical work that once took 15 people four weeks could be processed in five minutes. Big data analysis through machine learning, in particular, is actively applied throughout the financial industry. Stock market analysis and investment modeling through machine learning are also actively studied. The linearity limitations of traditional financial time series studies are overcome by machine learning approaches such as artificial intelligence prediction models. Quantitative studies based on past stock market numerical data widely use artificial intelligence to forecast future movements of stock prices or indices, and various other studies predict the future direction of the market or of individual stock prices by learning from large amounts of text data such as news and comments related to the stock market. Investment in commodity assets, a class of alternative assets, is usually used to enhance the stability and safety of a traditional stock and bond portfolio. There is relatively little research on investment models for commodity assets compared with mainstream assets such as equities and bonds. Recently, machine learning techniques have been widely applied in finance, especially to stock and bond investment models, producing better trading models and changing the whole financial area. In this study we built an investment model using the Support Vector Machine (SVM), one of the machine learning models.
There is some research on commodity assets focusing on price prediction for specific commodities, but it is hard to find research on commodity investment models for asset allocation using machine learning. We propose a method of forecasting four major commodity indices, a portfolio made of commodity futures, and individual commodity futures using an SVM model. The four major commodity indices are the Goldman Sachs Commodity Index (GSCI), the Dow Jones UBS Commodity Index (DJUI), the Thomson Reuters/Core Commodity CRB Index (TRCI), and the Rogers International Commodity Index (RI). We selected two individual futures from each of three sectors (energy, agriculture, and metals) that are actively traded on the CME market and have enough liquidity: Crude Oil, Natural Gas, Corn, Wheat, Gold, and Silver futures. We built an equally weighted portfolio of the six commodity futures for comparison with the other commodity indices. Because commodity assets are closely related to macroeconomic activity, we set 19 macroeconomic indicators as the model's input data, including stock market indices, export and import trade data, labor market data, and composite leading indicators: 14 US, two Chinese, and two Korean economic indicators. The data period runs from January 1990 to May 2017; the first 195 monthly observations serve as training data and the remaining 125 as test data. We verified that the performance of the equally weighted commodity futures portfolio rebalanced by the SVM model is better than that of the other commodity indices. The model's prediction accuracy for the commodity indices does not exceed 50% regardless of the SVM kernel function, whereas the prediction accuracy for the equally weighted commodity futures portfolio is 53%. The prediction accuracy of the individual commodity futures models is better than that of the commodity index models, especially in the agriculture and metal sectors.
The individual commodity futures portfolio excluding the energy sector outperformed the portfolio covering all three sectors. To verify the validity of the model, the analysis results should be similar despite variations in the data period, so we also used odd-numbered-year data as training data and even-numbered-year data as test data and confirmed that the results are similar. In conclusion, when allocating commodity assets to a traditional portfolio of stocks, bonds, and cash, we can obtain more effective investment performance by investing in commodity futures rather than commodity indices, and especially by using the rebalanced commodity futures portfolio designed by the SVM model.
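The direction-forecasting setup described in this abstract can be sketched as follows. This is an illustrative toy, not the paper's implementation: it trains a minimal linear SVM by hinge-loss subgradient descent (Pegasos-style) to predict the up/down direction of a commodity series from indicator vectors, with synthetic data standing in for the 19 real macroeconomic indicators, and a chronological 195-month training split mirroring the paper's.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200):
    """Return a weight vector w; labels y are +1 (up) or -1 (down)."""
    d = len(X[0])
    w = [0.0] * d
    t = 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            t += 1
            eta = 1.0 / (lam * t)
            margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
            # Subgradient step: always shrink (regularization),
            # add the hinge term only when the margin is violated.
            w = [(1 - eta * lam) * wj for wj in w]
            if margin < 1:
                w = [wj + eta * yi * xj for wj, xj in zip(w, xi)]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

# Synthetic monthly data: direction loosely driven by the first indicator.
random.seed(0)
X = [[random.gauss(0, 1) for _ in range(5)] for _ in range(320)]
y = [1 if x[0] + 0.1 * random.gauss(0, 1) > 0 else -1 for x in X]

# Chronological split echoing the paper: first 195 months train, rest test.
w = train_linear_svm(X[:195], y[:195])
acc = sum(predict(w, x) == yi for x, yi in zip(X[195:], y[195:])) / len(X[195:])
print(f"directional accuracy on held-out months: {acc:.2f}")
```

In practice one would use a library SVM with the kernel functions the paper compares; the subgradient loop above only illustrates the linear case.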

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan; Han, Nam-Gi; Song, Min
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.109-122 / 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive data generation, greatly influencing society. This is an unmatched phenomenon in history, and now we live in the Age of Big Data. SNS data satisfies the conditions that define Big Data: the amount of data (volume), data input and output speeds (velocity), and the variety of data types (variety). If someone intends to discover the trend of an issue in SNS Big Data, this information can be used as an important new source for the creation of new value, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the needs of analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides four functions: (1) provide the topic keyword set corresponding to the daily ranking; (2) visualize the daily time-series graph of a topic over a month; (3) show the importance of a topic through a treemap based on a scoring system and frequency; and (4) visualize the daily time-series graph of keywords retrieved by keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, to process various unrefined forms of unstructured data. In addition, such analysis requires the latest big data technology to process a large amount of real-time data rapidly, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize big data processing, because Hadoop is designed to scale from single-node computing up to thousands of machines.
Furthermore, we use MongoDB, a NoSQL, open source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables; its most important goals are data accessibility and data processing performance. In the Age of Big Data, visualization is attractive to the Big Data community because it helps analysts examine data easily and clearly, so TITS uses the d3.js library as its visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to data; the interaction with data is easy, and it is useful for managing a real-time data stream with smooth animation. In addition, TITS uses Bootstrap, a set of pre-configured plug-in style sheets and JavaScript libraries, to build the web system. The TITS Graphical User Interface (GUI) is designed using these libraries and is capable of detecting issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the effectiveness of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique used in various research areas, including Library and Information Science (LIS), and confirm its utility for storytelling and time series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets collected in Korea during March 2013.
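The first TITS function, a daily topic-keyword ranking, can be sketched in miniature as below. This is a simplified assumption-laden stand-in: the real system uses Korean noun extraction, Hadoop, and MongoDB, while here the tweets, the stop-word list, and the `top_k` parameter are all invented for illustration.

```python
from collections import Counter

# Illustrative English stop-word list; the real system filters Korean text.
STOP_WORDS = {"the", "a", "an", "is", "on", "of", "to", "and", "in", "rt"}

def daily_keyword_ranking(tweets, top_k=3):
    """Rank the most frequent non-stop-word terms in one day's tweets."""
    counts = Counter()
    for text in tweets:
        for token in text.lower().split():
            if token not in STOP_WORDS and len(token) > 1:
                counts[token] += 1
    return counts.most_common(top_k)

tweets_today = [
    "Breaking news on the election results",
    "RT election results are in and counting continues",
    "News desk: counting of election ballots continues",
]
ranking = daily_keyword_ranking(tweets_today)
print(ranking)
```

In the full system this counting step would run as a distributed job over the day's tweet stream, with the per-day rankings persisted to the document store for the web front end to visualize.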

Analysis of Twitter for 2012 South Korea Presidential Election by Text Mining Techniques (텍스트 마이닝을 이용한 2012년 한국대선 관련 트위터 분석)

  • Bae, Jung-Hwan; Son, Ji-Eun; Song, Min
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.141-156 / 2013
  • Social media is a representative form of Web 2.0 that shapes changes in users' information behavior by allowing users to produce their own content without any expert skills. In particular, as a new communication medium, it has a profound impact on social change by enabling users to communicate their opinions and thoughts to the masses and to acquaintances. Social media data plays a significant role in the emerging Big Data arena, and a variety of research areas, such as social network analysis and opinion mining, have therefore sought to discover meaningful information from the vast amounts of data buried in social media. Social media has recently become a main focus of Information Retrieval and Text Mining because it not only produces massive unstructured textual data in real time but also serves as an influential channel for opinion leading. Most previous studies, however, have adopted broad-brush and limited approaches that make it difficult to find and analyze new information. To overcome these limitations, we developed a real-time Twitter trend mining system that captures trends by processing big stream datasets from Twitter. The system offers term co-occurrence retrieval, visualization of Twitter users by query, similarity calculation between two users, topic modeling to track changes in topical trends, and mention-based user network analysis. In addition, we conducted a case study on the 2012 Korean presidential election. We collected 1,737,969 tweets containing candidates' names and election-related terms on Twitter in Korea (http://www.twitter.com/) during one month in 2012 (October 1 to October 31). The case study shows that the system provides useful information and detects societal trends effectively. The system also retrieves the list of terms that co-occur with given query terms.
We compare the results of term co-occurrence retrieval for the influential candidates' names 'Geun Hae Park', 'Jae In Moon', and 'Chul Su Ahn' as query terms. General terms related to the presidential election, such as 'Presidential Election', 'Proclamation in Support', and 'Public opinion poll', appear frequently. The results also show specific terms that differentiate each candidate: 'Park Jung Hee' and 'Yuk Young Su' for the query 'Geun Hae Park', 'a single candidacy agreement' and 'Time of voting extension' for the query 'Jae In Moon', and 'a single candidacy agreement' and 'down contract' for the query 'Chul Su Ahn'. Our system not only extracts 10 topics along with related terms but also shows topics' dynamic changes over time by employing the multinomial Latent Dirichlet Allocation technique. Each topic can show one of two patterns, a rising tendency or a falling tendency, depending on the change of its probability distribution. To determine the relationship between topic trends on Twitter and social issues in the real world, we compare topic trends with related news articles, and we find that Twitter can track issues faster than other media such as newspapers. The user network on Twitter differs from those of other social media because of the distinctive way relationships are formed on Twitter: users build relationships by exchanging mentions. We visualize and analyze the mention-based networks of 136,754 users, using the three candidates' names as query terms. The results show that Twitter users mention all candidates' names regardless of their political tendencies. This case study shows that Twitter can be an effective tool to detect and predict dynamic changes in social issues, and that mention-based user networks reveal aspects of user behavior uniquely found on Twitter.
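The rising/falling classification of topic trends mentioned in the abstract can be sketched as follows: given a topic's daily probability series (here synthetic; the paper derives it from multinomial LDA over the tweets), label the topic by the sign of its least-squares trend slope. The example probability values are illustrative assumptions.

```python
def trend_of(topic_probs):
    """Classify a daily topic-probability series as 'rising' or 'falling'
    by the sign of its least-squares slope."""
    n = len(topic_probs)
    xs = range(n)
    mean_x = (n - 1) / 2
    mean_y = sum(topic_probs) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, topic_probs))
    den = sum((x - mean_x) ** 2 for x in xs)
    return "rising" if num / den > 0 else "falling"

# Hypothetical daily probabilities for two topics over five days in October.
print(trend_of([0.02, 0.03, 0.05, 0.08, 0.12]))  # steadily gaining share
print(trend_of([0.20, 0.15, 0.11, 0.07, 0.04]))  # fading topic
```

A slope-sign rule is the simplest possible trend detector; the actual system tracks the full time-series graph so analysts can see when a topic turns.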

Analysis of Metadata Standards of Record Management for Metadata Interoperability From the viewpoint of the Task model and 5W1H (메타데이터 상호운용성을 위한 기록관리 메타데이터 표준 분석 5W1H와 태스크 모델의 관점에서)

  • Baek, Jae-Eun; Sugimoto, Shigeo
    • The Korean Journal of Archival Studies / no.32 / pp.127-176 / 2012
  • Metadata is well recognized as one of the foundations of archiving and long-term preservation of digital resources. There are several metadata standards for records management, archives, and preservation, e.g. ISAD(G), EAD, AGRkMS, PREMIS, and OAIS. Careful selection of appropriate metadata standards is important in designing a metadata schema that meets the requirements of a particular archival system, and interoperability with other systems should also be considered in schema design. In our previous research, we presented a feature analysis of metadata standards by identifying the primary resource lifecycle stages where each standard is applied, and clarified that no single metadata standard can cover the whole records lifecycle for archiving and preservation. Through this feature analysis, we analyzed the features of metadata across the whole records lifecycle and clarified the relationships between the metadata standards and the stages of the lifecycle, leaving more detailed analysis for future study. This paper proposes to analyze the metadata schemas from the viewpoint of the tasks performed in the lifecycle. Metadata schemas are primarily defined to describe properties of a resource in accordance with the purposes of description, e.g. finding aids, records management, and preservation. In other words, the metadata standards are resource- and purpose-centric, and the resource lifecycle is not explicitly reflected in the standards. There are no systematic methods for mapping between different metadata standards in accordance with the lifecycle. This paper proposes a method for mapping between metadata standards based on the tasks contained in the resource lifecycle. We first propose a Task Model to clarify the tasks applied to resources in each stage of the lifecycle.
This model is created as a task-centric model to identify features of metadata standards and to create mappings among elements of those standards. It is important to categorize the elements in order to limit the semantic scope of mapping and to decrease the number of element combinations to consider. This paper proposes to use the 5W1H (Who, What, Why, When, Where, How) model to categorize the elements. 5W1H categories are generally used for describing events, e.g. in news articles. Since performing a task on a resource causes an event, and metadata elements are used in that event, we consider the 5W1H categories adequate for categorizing the elements. Using these categories, we determine the features of every element of the metadata standards AGLS, AGRkMS, PREMIS, EAD, and OAIS, plus an attribute set extracted from the DPC decision flow. Then we perform element mapping between the standards and find the relationships between them. In this study, we defined a set of terms for each 5W1H category that typically appear in element definitions, and used those terms to categorize the elements. For example, if the definition of an element includes terms such as person and organization, meaning a subject that contributes to creating or modifying a resource, the element is categorized into the Who category. A single element can be categorized into one or more 5W1H categories. We categorized every element of the metadata standards using the 5W1H model and then carried out mapping among the elements in each category. We conclude that the Task Model provides a new viewpoint on metadata schemas and helps us understand the features of metadata standards for records management and archives. The 5W1H model, defined on top of the Task Model, provides a core set of categories to semantically classify metadata elements from the viewpoint of an event caused by a task.
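The term-based categorization step described above can be sketched as a small dictionary lookup: each 5W1H category gets a term set, and a metadata element is assigned every category whose terms appear in its definition. The term sets and the sample element definition below are simplified illustrative assumptions, not the paper's actual lists.

```python
# Illustrative (abbreviated) term sets per 5W1H category.
CATEGORY_TERMS = {
    "Who":   {"person", "organization", "agent", "creator"},
    "What":  {"content", "title", "format", "object"},
    "Why":   {"purpose", "function", "mandate"},
    "When":  {"date", "time", "period"},
    "Where": {"location", "place", "repository"},
    "How":   {"method", "process", "software", "procedure"},
}

def categorize(definition):
    """Return every 5W1H category whose terms appear in the definition."""
    words = set(definition.lower().replace(",", " ").split())
    return [cat for cat, terms in CATEGORY_TERMS.items() if words & terms]

# A single element can land in more than one category, as the paper notes.
element = "The person or organization responsible, and the date of creation"
print(categorize(element))
```

Restricting the mapping to elements that share a category is what keeps the number of cross-standard element pairs tractable.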

Automatic Quality Evaluation with Completeness and Succinctness for Text Summarization (완전성과 간결성을 고려한 텍스트 요약 품질의 자동 평가 기법)

  • Ko, Eunjung; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.125-148 / 2018
  • Recently, as the demand for big data analysis increases, cases of analyzing unstructured data and using the results are also increasing. Among the various types of unstructured data, text is used as a means of communicating information in almost all fields, and it attracts many analysts because the amount of text data is very large and it is relatively easy to collect compared with other unstructured and structured data. Among the many text analysis applications, active research topics include document classification, which assigns documents to predetermined categories; topic modeling, which extracts major topics from a large number of documents; sentiment analysis or opinion mining, which identifies emotions or opinions contained in texts; and text summarization, which condenses the main contents of one or several documents. The text summarization technique in particular is actively applied in business through news summary services, privacy policy summary services, etc. In academia, much research follows either the extraction approach, which selectively provides the main elements of a document, or the abstraction approach, which extracts the elements of a document and composes new sentences by combining them. However, techniques for evaluating the quality of automatically summarized documents have not progressed as much as automatic summarization itself. Most existing studies on summarization quality evaluation manually summarize documents, use these as reference documents, and measure the similarity between the automatic summary and the reference document. Specifically, automatic summarization is performed from the full text through various techniques, and the quality of the automatic summary is measured by comparison with the reference document, which serves as an ideal summary.
Reference documents are provided in two major ways; the most common is manual summarization, in which a person creates an ideal summary by hand. Since this method requires human intervention, preparing the summary takes a lot of time and cost, and the evaluation result may differ depending on who writes the summary. To overcome these limitations, attempts have been made to measure the quality of summary documents without human intervention. One representative attempt is a recently devised method that reduces the size of the full text and measures the similarity between the reduced full text and the automatic summary. In this method, the more often the frequent terms of the full text appear in the summary, the better the quality of the summary is judged to be. However, since summarization essentially means condensing a large amount of content while minimizing omissions, a summary judged "good" on frequency alone is not always a good summary in this essential sense. To overcome the limitations of these previous evaluation studies, this study proposes an automatic quality evaluation method for text summarization based on the essential meaning of summarization. Specifically, succinctness is defined as an element indicating how little duplicated content there is among the sentences of the summary, and completeness as an element indicating how little of the original content is omitted from the summary. In this paper, we propose a method for automatic quality evaluation of text summarization based on these concepts of succinctness and completeness.
To evaluate the practical applicability of the proposed methodology, we extracted 29,671 sentences from TripAdvisor's hotel reviews, summarized the reviews for each hotel, and present the results of experiments evaluating the quality of the summaries in accordance with the proposed methodology. We also provide a way to integrate completeness and succinctness, which stand in a trade-off relationship, into an F-score, and propose a method to perform optimal summarization by changing the sentence-similarity threshold.
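The completeness/succinctness/F-score idea can be sketched as below. This is an illustrative reading of the abstract, not the paper's actual formulas: sentence similarity here is token-set Jaccard with a threshold, completeness rewards covering the source sentences, succinctness penalizes near-duplicate summary sentences, and the two are combined as a harmonic mean.

```python
def jaccard(a, b):
    """Token-set Jaccard similarity between two sentences (an assumption;
    the paper may use a different sentence-similarity measure)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def evaluate_summary(source, summary, threshold=0.3):
    # Completeness: share of source sentences covered by some summary sentence.
    covered = sum(any(jaccard(s, t) >= threshold for t in summary) for s in source)
    completeness = covered / len(source)
    # Succinctness: 1 minus the average pairwise similarity inside the summary.
    pairs = [(i, j) for i in range(len(summary)) for j in range(i + 1, len(summary))]
    dup = sum(jaccard(summary[i], summary[j]) for i, j in pairs) / len(pairs) if pairs else 0.0
    succinctness = 1.0 - dup
    # Harmonic mean combines the two trade-off quantities into one F-score.
    f = (2 * completeness * succinctness / (completeness + succinctness)
         if completeness + succinctness else 0.0)
    return completeness, succinctness, f

source = ["the room was clean and bright",
          "breakfast was excellent and varied",
          "staff were friendly and helpful"]
summary = ["the room was clean", "breakfast was excellent"]
c, s, f = evaluate_summary(source, summary)
print(f"completeness={c:.2f} succinctness={s:.2f} F={f:.2f}")
```

Raising or lowering `threshold` trades the two off against each other, which is exactly the knob the paper tunes to find an optimal summary.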

WHICH INFORMATION MOVES PRICES: EVIDENCE FROM DAYS WITH DIVIDEND AND EARNINGS ANNOUNCEMENTS AND INSIDER TRADING

  • Kim, Chan-Wung; Lee, Jae-Ha
    • The Korean Journal of Financial Studies / v.3 no.1 / pp.233-265 / 1996
  • We examine the impact of public and private information on price movements using the thirty DJIA stocks and twenty-one NASDAQ stocks. We find that the standard deviation of daily returns on information days (dividend announcement, earnings announcement, insider purchase, or insider sale) is much higher than on no-information days. Both types of public information matter at the NYSE, probably due to the masked identification of insiders. Earnings announcements have the greatest impact for both DJIA and NASDAQ stocks, and there is some evidence of a positive impact of insider sales on the return volatility of NASDAQ stocks. There has been considerable debate, e.g. French and Roll (1986), over whether market volatility is due to public information or private information, the latter gathered through costly search and revealed only through trading. Public information is composed of (1) marketwide public information such as regularly scheduled federal economic announcements (e.g., employment, GNP, leading indicators) and (2) company-specific public information such as dividend and earnings announcements. Policy makers and corporate insiders have better access to marketwide private information (e.g., a new monetary policy decision made in a Federal Reserve Board meeting) and company-specific private information, respectively, compared with the general public. Ederington and Lee (1993) show that marketwide public information accounts for most of the observed volatility patterns in interest rate and foreign exchange futures markets. Company-specific public information is explored by Patell and Wolfson (1984) and Jennings and Starks (1985), who show that dividend and earnings announcements induce higher than normal volatility in equity prices.
Kyle (1985), Admati and Pfleiderer (1988), Barclay, Litzenberger and Warner (1990), Foster and Viswanathan (1990), Back (1992), and Barclay and Warner (1993) show that the private information held by informed traders and revealed through trading influences market volatility. Cornell and Sirri (1992) and Meulbroek (1992) investigate actual insider trading activities in a tender offer case and in prosecuted illegal trading cases, respectively. This paper examines the aggregate and individual impact of marketwide information, company-specific public information, and company-specific private information on equity prices. Specifically, we use the thirty common stocks in the Dow Jones Industrial Average (DJIA) and twenty-one National Association of Securities Dealers Automated Quotations (NASDAQ) common stocks to examine how their prices react to information. Marketwide information (public and private) is estimated by the movement in the Standard and Poor's (S&P) 500 Index price for the DJIA stocks and the movement in the NASDAQ Composite Index price for the NASDAQ stocks. Dividend and earnings announcements are used as a subset of company-specific public information. The trading activity of corporate insiders (major corporate officers, members of the board of directors, and owners of at least 10 percent of any equity class) serves as our proxy for company-specific private information, even though insiders with access to private information cannot legally trade on it, so most insider transactions are not necessarily based on private information. Nevertheless, we hypothesize that market participants observe how insiders trade in order to infer information they cannot possess, because insiders tend to buy (sell) when they have good (bad) information about their company. For example, Damodaran and Liu (1993) show that insiders of real estate investment trusts buy (sell) after they receive favorable (unfavorable) appraisal news, before the information in these appraisals is released to the public.
Price discovery in a competitive multiple-dealership market (NASDAQ) would be different from that in a monopolistic specialist system (NYSE). Consequently, we hypothesize that NASDAQ stocks are affected more by private information (or more precisely, insider trading) than the DJIA stocks. In the next section, we describe our choices of the fifty-one stocks and the public and private information set. We also discuss institutional differences between the NYSE and the NASDAQ market. In Section II, we examine the implications of public and private information for the volatility of daily returns of each stock. In Section III, we turn to the question of the relative importance of individual elements of our information set. Further analysis of the five DJIA stocks and the four NASDAQ stocks that are most sensitive to earnings announcements is given in Section IV, and our results are summarized in Section V.
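The paper's core volatility comparison reduces to computing the standard deviation of daily returns separately on information days and no-information days. A minimal sketch follows; the return series and day labels are synthetic placeholders, not the DJIA/NASDAQ samples the paper uses.

```python
from statistics import pstdev

# Synthetic daily returns and a flag marking information days
# (dividend/earnings announcement or insider purchase/sale).
returns = [0.004, -0.031, 0.002, 0.026, -0.001,
           0.019, -0.003, -0.024, 0.001, 0.002]
info_day = [False, True, False, True, False,
            True, False, True, False, False]

# Population standard deviation of returns on each type of day.
info_vol = pstdev(r for r, d in zip(returns, info_day) if d)
quiet_vol = pstdev(r for r, d in zip(returns, info_day) if not d)
print(f"info-day vol={info_vol:.4f}  no-info-day vol={quiet_vol:.4f}")
```

With the announcement days carrying the large moves, the information-day volatility comes out higher, which is the pattern the paper documents on real data.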


An Exploratory Study about the Importance of Selected Nursing Activities during the Puerperal Period, as Viewed by Women in the Puerperal Period and by Nurses Caring for Them (산모와 간호원이 본 선택된 산욕기 간호활동의 중요도에 관한 탐색적 연구)

  • 박주봉
    • Journal of Korean Academy of Nursing / v.8 no.1 / pp.152-162 / 1978
  • The desire to maintain health is increasing; consequently, the role of nursing, which has as one chief aim the solving of man's basic problems, is more and more important. Today, despite a growing concern about the nursing activities that nurses provide for individuals with specific needs, it is questionable in clinical practice whether individuals' expectations of nursing activities agree with nurses' performance of those activities. In this study, the importance of selected nursing activities during the hospitalized puerperal period, and the agreement on that importance, as viewed by women in the puerperal period and by the nurses caring for them, were assessed. The study was undertaken to furnish basic data for expediting research in this area and to be helpful in planning maternity nursing practice. The study population consisted of the nurses (13) caring for women in the puerperal period on the obstetric & gynecologic ward at Y hospital, and the women in the puerperal period (39), three selected by each nurse, during the period May 13 to June 4, 1976. The data were collected by direct interview based on a questionnaire prepared by the investigator and analyzed by percentage and t-test. The findings can be summarized as follows. 1. General characteristics of the nurses on the puerperal ward: a. The nurses' average age was 24.8 years. b. 84.6% had an educational background of 4 years of college. c. 69.2% had a religion. d. 53.8% were married. e. 53.8% had 1-3 years of clinical experience. f. 61.5% had served on the puerperal ward for 1-3 years. g. 46.2% desired to work on the obstetric & gynecologic ward. 2. General characteristics of the women studied during their puerperal period: a. The women's average age was 26.4 years. b. 79.5% had an educational background above high school. c. 56.4% had a religion. d. 84.6% had a living standard above medium. e. 89.7% had no occupation. f. 53.8% had previous hospitalization experience. g. 56.4% had previous delivery experience. 3. Examining the importance of the 39 nursing activities during the puerperal period selected by the investigator, the women considered the most important nursing activity to be "Record precisely the condition, medical treatment, and nursing activity results, etc.", while the nurses considered the most important to be "Notice whether she is having pain and care for it". Both groups considered the least important activity to be "Talk with her about topics such as news, hobbies, and other interests". 4. Examining the importance of nursing activities in four specific categories: in the physical nursing category, the women considered the most important activity to be "Ensure safety measures to prevent accidents and injuries", while the nurses chose "Make sure she sleeps and rests sufficiently". In the psychological category, the women chose "Explain medical treatment and nursing activity ahead of time so she knows what to expect", while the nurses chose "Explain the puerperal period so she understands". In the category of nursing in relation to medical care, the women chose "Record precisely the condition, medical treatment, and nursing activity results, etc.", while the nurses chose "Observing, cleaning, and protecting the perineum". In the category of nursing in preparation for discharge, the women chose "Instruct about personal hygiene during the puerperal period", while the nurses chose "Instruct in self-care to protect the perineum". 5. The analysis showed a significant amount of disagreement, computed by subtracting the nurses' scores from the women's scores. The women placed greater importance on the physical nursing category, the psychological nursing category, and nursing in relation to medical care than the nurses did. These results were statistically significant at the 0.01 level.


A Study on the Cause Analysis and Countermeasures of the Traditional Market for Fires in the TRIZ Method (TRIZ 기법에 의한 재래시장 화재의 원인분석과 대책에 관한 연구)

  • Seo, Yong-Goo; Min, Se-Hong
    • Fire Science and Engineering / v.31 no.4 / pp.95-102 / 2017
  • Fires in traditional markets occur frequently, and most of them expand into large fires, so the damage is very serious. The status of traditional markets, which handle distribution for ordinary people, has greatly shrunk under the aggressive marketing of local large companies and foreign large distribution companies after the overall opening of the local distribution market. Most traditional markets have histories and traditions ranging from decades to centuries and have grown steadily alongside the joys and sorrows of ordinary people and the development of the local economy. Fires there readily develop into large fires because the fire risk is high: almost all goods are flammable, facilities are deteriorated, equipment has been arbitrarily modified, and goods for sale are crowded together. Furthermore, most stores are small, so the passages are narrow, impeding the movement of pedestrians. Accordingly, traditional markets are vulnerable to fire due to their initially unplanned structure, resulting in large-scale fire damage. This study systematically classifies and analyzes fire risk factors by applying the TRIZ tool in order to extract the fundamental problems of traditional market fires and respond to them actively. On this basis, the study aims at preventing fires and their expansion into large fires, and at preparing specific measures to minimize fire damage. Based on the derived fire-expansion risk factors of traditional markets, the study presents passive measures, such as improvement of fire resistance capacity and fire safety islands, and active and institutional measures, such as mandatory automatic fire notification facilities, application of extra-high-pressure pump systems, and division of electric lines.

Analysis of Rice Blast Outbreaks in Korea through Text Mining (텍스트 마이닝을 통한 우리나라의 벼 도열병 발생 개황 분석)

  • Song, Sungmin;Chung, Hyunjung;Kim, Kwang-Hyung;Kim, Ki-Tae
    • Research in Plant Disease
    • /
    • v.28 no.3
    • /
    • pp.113-121
    • /
    • 2022
  • Rice blast is a major plant disease that occurs worldwide and significantly reduces rice yields. It occurs periodically in Korea and causes significant socio-economic damage because of the unique status of rice as the major staple crop. A disease outbreak prediction system is therefore needed to prevent rice blast, and epidemiological investigation of outbreaks can aid decision-making in plant disease management. Currently, plant disease prediction and epidemiological investigation rely mainly on quantitatively measurable, structured data such as crop growth and damage, weather, and other environmental factors. Meanwhile, text data related to plant disease occurrence accumulate alongside these structured data, but epidemiological investigations using such unstructured data have not been conducted, even though the information extracted from them could support more effective plant disease management. This study analyzed news articles related to rice blast through text mining to identify the years and provinces in which the disease occurred most frequently in Korea. The average temperature, total precipitation, sunshine hours, and rice varieties supplied in those regions were also analyzed. From these data, it was estimated that meteorological factors were the primary causes of the nationwide outbreak in 2020 and of the major outbreak in the Jeonbuk region in 2021. These text mining results can be combined with deep learning technology in the future as a tool for investigating the epidemiology of rice blast.
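The core of the text-mining approach described above — tallying disease-related news mentions by year and region as a proxy for outbreak severity — can be sketched minimally as follows. The headlines, dates, and region list here are illustrative assumptions, not the study's actual corpus or results.

```python
from collections import Counter

# Hypothetical mini-corpus of news headlines about rice blast;
# (date, text) pairs are invented for illustration only.
articles = [
    ("2020-07-21", "Rice blast spreads nationwide amid prolonged monsoon"),
    ("2020-08-02", "Jeonbuk farmers report rice blast damage"),
    ("2021-08-15", "Jeonbuk hit again by rice blast outbreak"),
    ("2021-08-20", "Rice blast outbreak reported in Jeonbuk paddies"),
    ("2019-09-01", "Mild rice blast season in Gyeonggi"),
]

# Assumed region vocabulary; a real study would use a full gazetteer.
REGIONS = ["Jeonbuk", "Gyeonggi", "Chungnam"]

def outbreak_counts(docs):
    """Count article mentions per (year, region) as a crude outbreak proxy."""
    counts = Counter()
    for date, text in docs:
        year = date[:4]  # ISO date prefix gives the publication year
        for region in REGIONS:
            if region in text:
                counts[(year, region)] += 1
    return counts

counts = outbreak_counts(articles)
print(counts[("2021", "Jeonbuk")])  # 2 mentions in this toy corpus
```

In practice such counts would be normalized against overall news volume and cross-referenced with the weather variables the abstract mentions (temperature, precipitation, sunshine hours) before drawing epidemiological conclusions.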

A Study on Contents Activism Analysis using Social Media - Focusing on Cases Related to Tom Moore's 100 Laps Challenge and the Exhibition of the Statue of Peace - (소셜미디어를 활용한 콘텐츠 액티비즘 분석 연구 - 톰 무어의 '100바퀴 챌린지'와 '평화의 소녀상' 전시를 중심으로-)

  • Shin, Jung-Ah
    • Journal of Korea Entertainment Industry Association
    • /
    • v.15 no.8
    • /
    • pp.91-106
    • /
    • 2021
  • The purpose of this study is to define Contents Activism as the process of reaching self-realization and social solidarity through the planning, production, and distribution of contents, and to categorize its concrete execution steps. On this basis, concrete cases are analyzed to identify the social meaning and effect of practicing Contents Activism. As for the research method, after examining the differences between traditional activism and Contents Activism through a review of previous studies, the implementation process of Contents Activism was categorized into seven steps. Applying this model, the study analyzed two cases. The first is the 100-lap backyard challenge planned by an elderly man ahead of his 100th birthday in early 2020, when fear of COVID-19 was spreading. Sir Tom Moore, who lived in the UK, undertook to walk 100 laps of his backyard to help medical staff of the National Health Service as COVID-19 infections and deaths increased amid a shortage of protective equipment. His challenge, carried out despite being unable to walk without assistive devices after cancer surgery and the aftereffects of a fall, drew sympathy and participation from many people and led to global solidarity. The second case analyzes 'The Unfreedom of Expression, After' by Kim Seo-kyung and Kim Woon-seong, who were invited to the special exhibition of the 2019 Aichi Triennale in Japan. The exhibition was a project to display the Statue of Peace and the lives of Japanese military comfort women, but it was withdrawn three days after opening due to threats and attacks from far-right groups. Overseas artists who heard this news resisted the Triennale's decision and, empathizing with the historical significance of the Statue of Peace, took and shared photos of themselves in the same pose as the statue on social media such as Twitter and Instagram. The activism that began with artists expanded through social media into the homes, workplaces, and streets of ordinary citizens in various regions. Both cases can be seen as Contents Activism that led to social practice through solidarity and communication mediated by contents.