• Title/Summary/Keyword: semantic processing (의미처리)

Search results: 3,531 (processing time: 0.041 seconds)

The Role of Protein Kinase C in Acute Lung Injury Induced by Endotoxin (내독소에 의한 급성폐손상에서 Protein Kinase C의 역할)

  • Kim, Yong-Hoon;Moon, Seung-Hyug;Kee, Sin-Young;Ju, Jae-Hak;Park, Tae-Eung;Im, Keon-Il;Cheong, Seung-Whan;Kim, Hyeon-Tae;Park, Choon-Sik;Jin, Byung-Won
    • Tuberculosis and Respiratory Diseases / v.44 no.2 / pp.349-359 / 1997
  • Background: The signal pathways involved in acute respiratory distress syndrome caused by endotoxin (ETX), and their precise roles, have not been established. Since several in vitro experiments have suggested that activation of the protein kinase C (PKC) pathway may be responsible for the endotoxin-induced inflammatory reaction, we performed in vivo experiments in rats with the hypothesis that PKC inhibition can effectively prevent endotoxin-induced acute lung injury. Methods: We studied the role of PKC in ETX-induced ALI using a PKC inhibitor (staurosporine, STP) in the rat. Specific-pathogen-free male Sprague-Dawley rats weighing 165 to 270 g were used for the study. Animals were divided into the normal control (NC)-, vehicle control (VC)-, ETX-, PMA (phorbol myristate acetate)-, STP+PMA-, and STP+ETX-group. PMA (50 mg/kg) or ETX (7 mg/kg) was instilled through a polyethylene catheter after aseptic tracheostomy, with and without STP (0.2 mg/kg) pretreatment. STP was injected via the tail vein 30 min before intratracheal injection (IT) of PMA or ETX. Bronchoalveolar lavage (BAL) was done 3 or 6 hrs after IT of PMA or ETX, respectively, to measure protein concentration and total and differential cell counts. Results: The protein concentrations in BALF in the PMA- and ETX-group were significantly higher than that of the VC-group (p<0.001). When animals were pretreated with STP, the percentage reduction of the protein concentration in BALF was 64.8±8.5% and 30.4±2.5% in the STP+PMA- and STP+ETX-group, respectively (p = 0.028). There was no difference in the total cell counts between the PMA- and VC-group (p = 0.26). However, the ETX-group showed markedly increased total cell counts as compared to the VC- (p = 0.003) and PMA-group (p = 0.0027), respectively. The total cell counts in BALF were not changed after pretreatment with STP compared to the PMA- (p = 0.22) and ETX-group (p = 0.46).
The percentage of PMN, but not alveolar macrophages, was significantly elevated in the PMA- and ETX-group. Especially in the ETX-group, the percentage of PMN was 17 times higher than that of the PMA-group (p < 0.001). The differential cell counts were not different between the PMA- and STP+PMA-group. In contrast, the STP+ETX-group showed a decreased percentage of PMN (p = 0.016). There was no significant relationship between the protein concentration and the total or differential cell counts in any group. Conclusion: Pretreatment with the PKC inhibitor staurosporine partially but significantly inhibited ETX-induced ALI.

A Study on Differences of Contents and Tones of Arguments among Newspapers Using Text Mining Analysis (텍스트 마이닝을 활용한 신문사에 따른 내용 및 논조 차이점 분석)

  • Kam, Miah;Song, Min
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.53-77 / 2012
  • This study analyzes the differences in content and tone of argument among three major Korean newspapers: the Kyunghyang Shinmun, the HanKyoreh, and the Dong-A Ilbo. It is commonly accepted that newspapers in Korea explicitly convey their own tone of argument when covering sensitive issues and topics. Reading the news without being aware of a newspaper's tone can be problematic, because both the content and the tone of argument can easily influence readers. Thus it is very desirable to have a tool that can inform readers of the tone of argument a newspaper takes. This study presents the results of clustering and classification techniques as part of a text mining analysis. We focus on six main subjects in the newspapers, Culture, Politics, International, Editorial-opinion, Eco-business, and National issues, and attempt to identify differences and similarities among the newspapers. The basic unit of the text mining analysis is a paragraph of a news article. This study uses a keyword-network analysis tool and visualizes relationships among keywords to make the differences easier to see. Newspaper articles were gathered from KINDS, the Korean Integrated News Database System, which preserves news articles of the Kyunghyang Shinmun, the HanKyoreh, and the Dong-A Ilbo and is open to the public. About 3,030 articles from 2008 to 2012 were used. The International, National issues, and Politics sections were gathered with specific issues: the International section with the keyword 'Nuclear weapon of North Korea,' the National issues section with the keyword '4-major-river,' and the Politics section with the keyword 'Tonghap-Jinbo Dang.' All of the articles from April 2012 to May 2012 in the Eco-business, Culture, and Editorial-opinion sections were also collected.
All of the collected data were edited into paragraphs. We removed stop words using the Lucene Korean Module. We calculated keyword co-occurrence counts from the paired co-occurrence list of keywords within each paragraph and built a co-occurrence matrix from the list. Once the co-occurrence matrix was built, we used the cosine coefficient matrix as input for PFNet (Pathfinder Network). To analyze the three newspapers and find the significant keywords in each paper, we examined the list of the 10 most frequent keywords and the keyword networks of the 20 most frequent keywords, in order to closely examine the relationships and show a detailed network map among keywords. We used the NodeXL software to visualize the PFNet. After drawing all the networks, we compared the results with the classification results. Classification was performed first, to identify how the tone of argument of each newspaper differs from the others. To analyze tones of argument, all the paragraphs were divided into two types of tone, positive and negative. To identify and classify the tones of all the paragraphs and articles we had collected, a supervised learning technique was used: the Naïve Bayesian classifier algorithm provided in the MALLET package classified all the paragraphs in the articles. After classification, precision, recall, and F-value were used to evaluate the results. Based on the results of this study, three subjects, Culture, Eco-business, and Politics, showed some differences in content and tone of argument among the three newspapers. In addition, for the National issues section, the tones of argument on the 4-major-rivers project differed from each other. The three newspapers appear to have their own specific tones of argument in those sections. The keyword networks also showed different shapes from each other for the same period in the same section.
This means that the keywords appearing frequently in the articles differ and that their contents are composed of different keywords. The positive-negative classification also showed the possibility of distinguishing newspapers' tones of argument from one another. These results indicate that the approach in this study is promising as a new tool to identify the different tones of argument of newspapers.
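The co-occurrence and cosine-coefficient step described above can be illustrated with a minimal sketch. This is not the authors' actual pipeline (which used the Lucene Korean Module, PFNet, and NodeXL); the toy paragraphs and keywords below are invented for illustration, and tokenization is assumed to have already happened.

```python
import math
from collections import Counter
from itertools import combinations

def cooccurrence_counts(paragraphs):
    """Count how often each keyword pair appears in the same paragraph."""
    counts = Counter()
    for words in paragraphs:
        for a, b in combinations(sorted(set(words)), 2):
            counts[(a, b)] += 1
    return counts

def keyword_vector(counts, keyword):
    """Co-occurrence profile of one keyword against all others."""
    vec = {}
    for (a, b), n in counts.items():
        if a == keyword:
            vec[b] = n
        elif b == keyword:
            vec[a] = n
    return vec

def cosine(u, v):
    """Cosine coefficient between two sparse co-occurrence vectors."""
    dot = sum(w * v.get(k, 0) for k, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy keyword lists standing in for tokenized news paragraphs.
paras = [
    ["river", "project", "budget"],
    ["river", "project", "environment"],
    ["election", "party", "budget"],
]
counts = cooccurrence_counts(paras)
sim = cosine(keyword_vector(counts, "river"), keyword_vector(counts, "project"))
```

A matrix of such cosine values over all keyword pairs is what would feed the Pathfinder Network pruning step.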

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.109-122 / 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, greatly influencing society. This is an unmatched phenomenon in history, and we now live in the Age of Big Data. SNS data satisfies the defining conditions of Big Data: the amount of data (volume), the data input and output speed (velocity), and the variety of data types (variety). If the trend of an issue can be discovered in SNS Big Data, this information can be used as an important new source for the creation of value, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the needs of analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides the following four functions: (1) provide the topic keyword set that corresponds to the daily ranking; (2) visualize the daily time-series graph of a topic over the duration of a month; (3) present the importance of a topic through a treemap based on the score system and frequency; (4) visualize the daily time-series graph of a keyword via keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including the removal of stop words and noun extraction, to process various unrefined forms of unstructured data. In addition, such analysis requires the latest big data technology to rapidly process large amounts of real-time data, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize the processing of big data, because Hadoop is designed to scale from single-node computing up to thousands of machines.
Furthermore, we use MongoDB, which is classified as a NoSQL database. MongoDB is an open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables, and its most important goals are data accessibility and data processing performance. In the Age of Big Data, visualization is attractive to the Big Data community because it helps analysts examine data easily and clearly. Therefore, TITS uses the d3.js library as a visualization tool. This library is designed for creating Data-Driven Documents that bind the Document Object Model (DOM) to data; the interaction with data is easy and useful for managing a real-time data stream with smooth animation. In addition, TITS uses Bootstrap, with its pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS Graphical User Interface (GUI) is designed using these libraries and is capable of detecting issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique used in various research areas, including Library and Information Science (LIS), and on this basis we confirm the utility of storytelling and time-series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets in Korea during March 2013.
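The per-day topic keyword ranking in function (1) can be sketched in miniature. This toy version tokenizes by whitespace with a hypothetical stop-word list instead of real Korean noun extraction, and it skips the Hadoop/MongoDB infrastructure entirely; the tweets are invented for the example.

```python
from collections import Counter, defaultdict

# Hypothetical stop-word list; the real system removes Korean stop words
# and extracts nouns with NLP tools before ranking.
STOPWORDS = {"the", "a", "is", "rt", "to"}

def daily_keyword_ranking(tweets, top_n=3):
    """Rank keywords per day, analogous to TITS's daily topic keyword sets.

    `tweets` is a list of (date, text) pairs; this toy version splits on
    whitespace instead of performing real noun extraction.
    """
    by_day = defaultdict(Counter)
    for date, text in tweets:
        words = [w for w in text.lower().split() if w not in STOPWORDS]
        by_day[date].update(words)
    return {d: [w for w, _ in c.most_common(top_n)] for d, c in by_day.items()}

tweets = [
    ("2013-03-01", "election news election debate"),
    ("2013-03-01", "election result"),
    ("2013-03-02", "storm warning storm"),
]
ranking = daily_keyword_ranking(tweets, top_n=2)
```

In the actual system the same counting would run as a distributed job over the tweet stream, with the per-day rankings persisted for the time-series and treemap views.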

A Study on Industries's Leading at the Stock Market in Korea - Gradual Diffusion of Information and Cross-Asset Return Predictability- (산업의 주식시장 선행성에 관한 실증분석 - 자산간 수익률 예측 가능성 -)

  • Kim Jong-Kwon
    • Proceedings of the Safety Management and Science Conference / 2004.11a / pp.355-380 / 2004
  • I test the hypothesis that the gradual diffusion of information across asset markets leads to cross-asset return predictability in Korea. Using thirty-six industry portfolios and the broad market index as the test assets, I establish several key results. First, a number of industries, such as semiconductors, electronics, metals, and petroleum, lead the stock market by up to one month. In contrast, the market, which is widely followed, leads only a few industries. Importantly, an industry's ability to lead the market is correlated with its propensity to forecast various indicators of economic activity, such as industrial production growth. Consistent with the hypothesis, these findings indicate that the market reacts with a delay to information in industry returns about its fundamentals, because information diffuses only gradually across asset markets. Traditional theories of asset pricing assume that investors have unlimited information-processing capacity. However, this assumption does not hold for many traders, even the most sophisticated ones. Many economists recognize that investors are better characterized as only boundedly rational (see Shiller (2000), Sims (2001)). Even from casual observation, few traders can pay attention to all sources of information, much less understand their impact on the prices of the assets that they trade. Indeed, a large literature in psychology documents the extent to which even attention is a precious cognitive resource (see, e.g., Kahneman (1973), Nisbett and Ross (1980), Fiske and Taylor (1991)). A number of papers have explored the implications of limited information-processing capacity for asset prices; I review this literature in Section II. For instance, Merton (1987) develops a static model of multiple stocks in which investors have information about only a limited number of stocks and trade only those.
Related models of limited market participation include Brennan (1975) and Allen and Gale (1994). As a result, stocks that are less recognized by investors have a smaller investor base ("neglected" stocks) and trade at a greater discount because of limited risk sharing. More recently, Hong and Stein (1999) develop a dynamic model of a single asset in which information diffuses gradually across the investing public and investors are unable to perform the rational-expectations trick of extracting information from prices. My hypothesis is that the gradual diffusion of information across asset markets leads to cross-asset return predictability. This hypothesis relies on two key assumptions. The first is that valuable information originating in one asset reaches investors in other markets only with a lag, i.e., news travels slowly across markets. The second is that, because of limited information-processing capacity, many (though not necessarily all) investors may not pay attention to, or be able to extract the information from, the asset prices of markets they do not participate in. These two assumptions taken together lead to cross-asset return predictability. The hypothesis appears quite plausible for a few reasons. To begin with, as pointed out by Merton (1987) and the subsequent literature on segmented markets and limited market participation, few investors trade all assets. Put another way, limited participation is a pervasive feature of financial markets. Indeed, even among equity money managers there is specialization along industries, such as sector or market-timing funds. Some reasons for this limited market participation include tax, regulatory, or liquidity constraints. More plausibly, investors have to specialize because they have their hands full trying to understand the markets they do participate in.
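The core lead-lag test can be illustrated with a toy correlation between an industry's return this month and the market's return next month; a high value is consistent with the industry leading the market. The return series below are invented, and the paper's actual tests would use regressions with controls rather than this bare correlation.

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length return series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def lead_lag_corr(industry, market, lag=1):
    """Correlation between industry returns and market returns `lag` periods later.

    A high value suggests the industry leads the market, as tested for the
    semiconductor, electronics, metal, and petroleum portfolios.
    """
    return pearson(industry[:-lag], market[lag:])

# Invented monthly returns in which the market echoes the industry one month late.
industry = [0.02, -0.01, 0.03, 0.00, 0.01, -0.02]
market = [0.00, 0.02, -0.01, 0.03, 0.00, 0.01]
rho = lead_lag_corr(industry, market, lag=1)
```

With these constructed series the lagged market exactly tracks the industry, so the lead-lag correlation is 1; real data would of course yield much weaker, noisier values.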

Export Control System based on Case Based Reasoning: Design and Evaluation (사례 기반 지능형 수출통제 시스템 : 설계와 평가)

  • Hong, Woneui;Kim, Uihyun;Cho, Sinhee;Kim, Sansung;Yi, Mun Yong;Shin, Donghoon
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.109-131 / 2014
  • As worldwide demand for nuclear power plant equipment continues to grow, the importance of handling nuclear strategic materials is also increasing. While the number of cases submitted for the export of nuclear-power commodities and technology is increasing dramatically, preadjudication (or prescreening, for short) of strategic materials has so far been done by experts with long experience and extensive field knowledge. However, there is a severe shortage of experts in this domain, not to mention that it takes a long time to develop an expert. Because human experts must manually evaluate all the documents submitted for export permission, the current practice of nuclear material export control is neither time-efficient nor cost-effective. To alleviate the problem of relying only on costly human experts, our research proposes a new system designed to help field experts make their decisions more effectively and efficiently. The proposed system is built on case-based reasoning, which in essence extracts key features from existing cases, compares them with the features of a new case, and derives a solution for the new case by referencing similar cases and their solutions. Our research proposes a framework for a case-based reasoning system, designs a case-based reasoning system for the control of nuclear material exports, and evaluates the performance of alternative keyword extraction methods (fully automatic, fully manual, and semi-automatic). A keyword extraction method is an essential component of the case-based reasoning system, as it is used to extract the key features of the cases. The fully automatic method was implemented using TF-IDF, a widely used de facto standard method for representative keyword extraction in text mining.
TF (Term Frequency) is based on the frequency count of a term within a document, showing how important the term is within that document, while IDF (Inverse Document Frequency) is based on the infrequency of the term across the document set, showing how uniquely the term represents the document. The results show that the semi-automatic approach, based on the collaboration of machine and human, is the most effective solution regardless of whether the human is a field expert or a student majoring in nuclear engineering. Moreover, we propose a new approach to computing nuclear document similarity along with a new framework of document analysis. The proposed algorithm for nuclear document similarity considers both document-to-document similarity (α) and document-to-nuclear-system similarity (β) in order to derive the final score (γ) for deciding whether the presented case concerns strategic material or not. The final score (γ) represents the document similarity between the past cases and the new case. The score is induced not only by exploiting conventional TF-IDF but also by utilizing a nuclear system similarity score, which takes the context of the nuclear system domain into account. Finally, the system retrieves the top-3 documents stored in the case base that are considered the most similar to the new case, and provides them with a degree of credibility. With this final score and the credibility score, it becomes easier for a user to see which documents in the case base are most worth looking up, so that the user can make a proper decision at relatively low cost. The evaluation of the system was conducted by developing a prototype and testing it with field data. The system workflows and outcomes have been verified by the field experts.
This research is expected to contribute to the growth of the knowledge service industry by proposing a new system that can effectively reduce the burden of relying on costly human experts for the export control of nuclear materials, and that can be considered a meaningful example of knowledge service application.
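The scoring scheme can be sketched as follows: TF-IDF cosine similarity supplies the document-to-document term α, and a placeholder stands in for the nuclear-system similarity β. The equal weighting used to combine them into γ is an assumption, since the abstract does not state the actual combination rule, and all case documents are invented.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF vectors: term frequency weighted by inverse document frequency."""
    n = len(docs)
    df = Counter()
    for d in docs:
        df.update(set(d))
    return [{t: c * math.log(n / df[t]) for t, c in Counter(d).items()}
            for d in docs]

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * v.get(k, 0.0) for k, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def final_score(alpha, beta, weight=0.5):
    """Combine document similarity (alpha) with nuclear-system similarity
    (beta) into a final score (gamma). Equal weighting is an assumption."""
    return weight * alpha + (1 - weight) * beta

# Invented tokenized case documents plus a new case to adjudicate.
cases = [["valve", "reactor", "coolant"], ["pump", "turbine"], ["valve", "pipe"]]
new_case = ["valve", "reactor"]
vecs = tfidf_vectors(cases + [new_case])
case_vecs, new_vec = vecs[:-1], vecs[-1]
betas = [0.6, 0.1, 0.3]  # hypothetical system-similarity scores per case

# Retrieve the top-3 most similar past cases, as the system does.
scored = sorted(
    ((final_score(cosine(v, new_vec), b), i)
     for i, (v, b) in enumerate(zip(case_vecs, betas))),
    reverse=True)
top3 = scored[:3]
```

The case sharing both terms with the new submission ranks first; in the real system each retrieved case would also carry its credibility score for the reviewer.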

System Development for Measuring Group Engagement in the Art Center (공연장에서 다중 몰입도 측정을 위한 시스템 개발)

  • Ryu, Joon Mo;Choi, Il Young;Choi, Lee Kwon;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.45-58 / 2014
  • Korean cultural contents have spread worldwide as the Korean Wave sweeps the globe, and such contents stand at the center of that wave. Each country keeps developing its culture industry to improve its national brand and create high added value. Performing contents are an important arousal factor in the entertainment industry: building strong audience confidence in a product and a positive public attitude is one of the important goals for advertisers, and cultural contents are no different. If cultural contents are trusted, audiences will pass information to those around them and spread word of mouth. Accordingly, many researchers have studied measuring a person's arousal through statistical surveys, physiological responses, body movement, and facial expression. First, statistical surveys cannot measure each person's arousal in real time, and survey results collected after the contents have been watched are unreliable. Second, measuring physiological responses requires instrumenting each seat or space with sensors, and it is difficult to handle the volume of sensor data in real time. Third, body movement is easy to capture with a camera, but it is difficult to set up the experimental conditions and to measure and interpret body language. Lastly, many researchers study facial expression, measuring expressions, eye tracking, and head pose. Most previous studies of arousal and interest are limited to the reaction of a single person and are hard to apply to multiple audiences: they require particular conditions, for example controlled room lighting, a single subject, and a special laboratory environment. Arousal during the contents also needs to be measured, but it is hard to define, and audience reactions are not easy to collect immediately. In a theater, many audience members watch a performance at once.
We propose a system that measures multi-audience reactions in real time during a performance. We use a difference-image analysis method for the multi-audience setting, but it is weak in dark conditions; to overcome the dark environment during recording, an IR camera captures images in dark areas. In addition, we present the Multi-Audience Engagement Index (MAEI), calculated by an algorithm from sound, audience movement, and eye-tracking values. The algorithm calculates audience arousal from the mobile survey, the sound level, audience reactions, and eye tracking. To improve the accuracy of the MAEI, we compare it against the mobile survey, and the results are then sent to a reporting system and presented to interested parties. Mobile surveys are easy and fast, minimize visitors' discomfort, and can provide additional information. The mobile application communicates with the database, storing real-time information on visitors' attitudes toward the content, and the database can provide a different survey each time based on the stored information. Example survey items include: impressive scene, satisfied, touched, interested, didn't pay attention, and so on. The proposed system consists of three parts: the External Device, the Server, and the Internal Device. The External Device records the audience in the dark with an IR camera and captures the sound signal; the mobile survey application also sends its data to the server database. The Server holds the contents data, such as each scene's weight value and the group audience weight index, along with the camera control program and the algorithm that calculates the MAEI. The Internal Device presents the MAEI through a web UI, printouts, and a field display monitor. Our system is test-operated by Mogencelab in the DMC display exhibition hall located in Sangam-dong, Mapo-gu, Seoul, where visitor data are still being collected daily.
If audience arousal factors can be identified with this system, they will be very useful for creating contents.
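The difference-image step and the index combination can be sketched with toy numbers. The frames below are tiny invented grayscale grids, and the weights in the index are assumptions; the actual MAEI also incorporates eye tracking and mobile-survey responses, which are omitted here.

```python
def frame_difference(prev, cur):
    """Mean absolute pixel difference between consecutive frames,
    a crude proxy for audience movement in difference-image analysis."""
    total = sum(abs(a - b)
                for row_p, row_c in zip(prev, cur)
                for a, b in zip(row_p, row_c))
    return total / (len(cur) * len(cur[0]))

def engagement_index(motion, sound, w_motion=0.6, w_sound=0.4):
    """Toy Multi-Audience Engagement Index: a weighted sum of normalized
    motion and sound scores. The weights are assumptions; the real MAEI
    also folds in eye tracking and mobile-survey responses."""
    return w_motion * motion + w_sound * sound

# Two tiny 2x3 grayscale frames from a hypothetical IR camera.
frame_a = [[10, 10, 10], [10, 10, 10]]
frame_b = [[10, 40, 10], [10, 10, 70]]
motion = frame_difference(frame_a, frame_b)
```

In practice the per-scene weights stored on the Server would rescale these raw scores before they reach the reporting UI.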

A Study on the Funerary Mean of the Vertical Plate Armour from the 4th Century - Mainly Based on the Burial Patterns Shown by the Ancient Tombs No.164 and No.165 in Bokcheon-dong - (종장판갑(縱長板甲) 부장의 다양성과 의미 - 부산 복천동 164·165호분 출토 자료를 중심으로 -)

  • Lee, Yu Jin
    • Korean Journal of Heritage: History & Science / v.44 no.3 / pp.178-199 / 2011
  • The ancient tombs found in Bokcheon-dong, Busan, originate from the period between the 4th and 5th centuries, the Three Kingdoms period. They are known as the tombs where the Vertical Plate Armour was mainly buried. In 2006, two units of the Vertical Plate Armour were additionally investigated in the tombs No.164 and No.165, which had been constructed at the end of the eastern slope near the hill of the group of ancient tombs in Bokcheon-dong. In this study, the contents of the two units of the Vertical Plate Armour, whose preservation process has been completed, have been documented, while the group of ancient tombs constructed in Bokcheon-dong in the 4th century has been examined through consideration of the burial pattern. The units of the Vertical Plate Armour from the tombs No.164 and No.165 can be classified as the IIa-type armor showing the Gyeongju and Ulsan patterns, according to the attributes of the manufacturing technology. They can also be dated to the early period of Stage II among the three stages in the chronology of the Vertical Plate Armour. While more than two units of the Vertical Plate Armour were buried in the large-sized tomb on the top of the hill of the group of ancient tombs, one unit was buried in the small-sized tomb. Considering this trend, it can be said that in the stage of burying the armor showing the Gyeongju and Ulsan patterns (I-type and IIa-type), different numbers of units of the Vertical Plate Armour were buried according to the size of the tomb. However, once the armor showing the Busan pattern (IIb-type) was established, only one unit was buried. Meanwhile, the tombs No.164 and No.165 can be classed with the wooden chamber tombs showing the Gyeongju pattern, which are slender rectangular wooden chamber tombs with an aspect ratio of more than 1:3.
However, judging by the buried earthenware, there seem to be common types and patterns shared with the earthenware found in the Gimhae area, known as the Geumgwan Gaya pattern. In other words, there seem to be close relationships between the subject tombs and the tomb No.3 in Gujeong-dong and the tomb No.55 in Sara-ri, Gyeongju, regarding the types of armor and tombs and the arrangement of buried artifacts, whereas the buried earthenware shows a relationship with the areas of Busan and Gimhae. Considering the combination of Gyeongju and Gimhae elements found in one tomb, it is possible to assume that the group who constructed the ancient tombs in Bokcheon-dong was actively engaged with both areas. Until now it has been thought that the Vertical Plate Armour was the exclusive property of the upper hierarchy, since it was buried in the large-sized tombs located on the top of the hill of the group of ancient tombs in Bokcheon-dong. However, as shown in the case of the tombs No.164 and No.165, it has been verified that the Vertical Plate Armour was also buried in small-sized tombs, judging by such factors as location, size, the amount of buried artifacts, and their quality. Therefore, it is impossible to discuss the hierarchical character of a tomb based only on the buried units of the Vertical Plate Armour, and it is difficult to assume that the armor symbolized domination of the military forces. The hierarchical character of the group of ancient tombs constructed in Bokcheon-dong in the 4th century can be verified according to the location and size of each tomb. As a result, there seem to be some differences regarding the buried units of the Vertical Plate Armour.
However, a more multilateral examination would be necessary to find out whether the buried Vertical Plate Armour can be regarded as an artifact symbolizing the status or class of the deceased.

An Analytical Approach Using Topic Mining for Improving the Service Quality of Hotels (호텔 산업의 서비스 품질 향상을 위한 토픽 마이닝 기반 분석 방법)

  • Moon, Hyun Sil;Sung, David;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.21-41 / 2019
  • Thanks to the rapid development of information technology, the data available on the Internet has grown rapidly. In this era of big data, many studies have attempted to offer insights and demonstrate the value of data analysis. In the tourism and hospitality industry, many firms and studies have paid attention to online reviews on social media because of their large influence over customers. As tourism is an information-intensive industry, the effect of these information networks on social media platforms is more remarkable than in any other type of media. However, there are some limitations to the improvements in service quality that can be made based on opinions on social media platforms. Users on social media platforms express their opinions as text, images, and so on, so the raw data sets from these reviews are unstructured. Moreover, these data sets are too large for new information and hidden knowledge to be extracted by human effort alone. To use them for business intelligence and analytics applications, proper big data techniques such as natural language processing and data mining are needed. This study suggests an analytical approach to directly yield insights from these reviews to improve the service quality of hotels. Our proposed approach consists of topic mining, to extract the topics contained in the reviews, and decision tree modeling, to explain the relationship between topics and ratings. Topic mining refers to a method for finding a group of words in a collection of documents that represents a document. Among several topic mining methods, we adopted the Latent Dirichlet Allocation (LDA) algorithm, which is considered the most universal. However, LDA alone is not enough to find insights that can improve service quality, because it cannot find the relationship between topics and ratings. To overcome this limitation, we also use the Classification and Regression Tree (CART) method, a kind of decision tree technique.
Through the CART method, we can find which topics are related to positive or negative ratings of a hotel and visualize the results. Therefore, this study investigates an analytical approach for improving hotel service quality from unstructured review data sets. Through experiments on four hotels in Hong Kong, we identify the strengths and weaknesses of each hotel's services and suggest improvements to aid customer satisfaction. In particular, from positive reviews we find what these hotels should maintain for service quality; for example, compared with the other hotels, one hotel has a good location and room condition, as extracted from its positive reviews. In contrast, from negative reviews we find what they should modify in their services; for example, one hotel should improve room conditions related to soundproofing. These results show that our approach is useful in finding insights about hotel service quality. That is, from an enormous volume of review data, our approach can provide practical suggestions for hotel managers to improve their service quality. In the past, studies for improving service quality relied on surveys or interviews of customers. However, these methods are often costly and time-consuming, and the results may be distorted by biased sampling or untrustworthy answers. The proposed approach directly obtains honest feedback from customers' online reviews and draws insights through a form of big data analysis, so it is a more useful tool for overcoming the limitations of surveys or interviews. Moreover, our approach can easily obtain service quality information for other hotels or services in the tourism industry, because it needs only open online reviews and ratings as input data. Furthermore, the performance of our approach will improve if other structured and unstructured data sources are added.
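The topic-to-rating link via CART can be illustrated with a single regression split, the elementary step a CART tree repeats recursively. The topic weights and star ratings below are invented; in practice one would fit a full tree (e.g. with a library implementation) over the LDA topic proportions of each review.

```python
def best_split(topic_weight, ratings):
    """One CART regression step: find the threshold on a topic's weight
    that minimizes the summed squared error of the two child means."""
    def sse(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    best_t, best_err = None, float("inf")
    for t in sorted(set(topic_weight))[:-1]:  # candidate thresholds
        left = [r for w, r in zip(topic_weight, ratings) if w <= t]
        right = [r for w, r in zip(topic_weight, ratings) if w > t]
        err = sse(left) + sse(right)
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err

# Invented data: share of a "room condition" topic per review vs. star rating.
weights = [0.05, 0.10, 0.60, 0.70, 0.80]
stars = [5, 5, 2, 1, 2]
threshold, err = best_split(weights, stars)
```

Here the split falls between the low-weight, high-rating reviews and the high-weight, low-rating ones, which is exactly the kind of topic-rating relationship the CART step is meant to surface.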

The Study of Korean-style Leadership (The Great Cause?Oriented and Confidence-Oriented Leadership) (대의와 신뢰 중시의 한국형 리더십 연구)

  • Park, sang ree
    • The Journal of Korean Philosophical History
    • /
    • no.23
    • /
    • pp.99-128
    • /
    • 2008
  • This research analyzes several Korean historical figures and presents the core values of their leadership, so that we can build a theory of leadership compatible with the current circumstances surrounding Korea. Through this work, we expected not only to find exemplary models among historical leaders but also to reaffirm our identities in our history. As a result of the research, it was possible to classify figures in history into several patterns and discover their archetypal qualities: 'transform(實事)', 'challenge(決死)', 'energize(風流)', 'create(創案)', and 'envision(開新)'. Among these qualities, this research concentrated on the quality of 'challenge', specifically the 'death-defying spirit' with which historical leaders could sacrifice their lives for their great causes. This research selected twelve figures as incarnations of this death-defying spirit: Gyebaek(階伯), Ganggamchan(姜邯贊), Euljimundeok(乙支文德), Choeyoung(崔瑩), ChungMongju(鄭夢周), Seongsammun(成三問), Yisunsin(李舜臣), Gwakjaewoo(郭再祐), Choeikhyeon(崔益鉉), Anjunggeun(安重根), Yunbonggil(尹奉吉), and Yijun(李儁). By analyzing their core values and abilities and categorizing historical cases into four spheres, namely a private sphere, a relational sphere, a community sphere, and a social sphere, we came to find an element common to those figures: they took the lead by always showing their people the goal and the ideal, and their goals were not only obvious but also unwavering. In the second chapter, I described the core value in the private sphere, the so-called '志靑靑'. It implies that a leader should set his ultimate goal and then try to attain it with an unyielding will; obvious self-confidence and unfailing self-belief are the core values in the private sphere. In the third chapter, I described the core value in the relational sphere, the relationship between oneself and others: '守信結義'.
It indicates that a leader should win confidence from others by discharging his duties in his relations with them. Confidence is the highest level of affection toward others; thus, mutual reliance should be based on truthful sincerity and affection toward others. At the same time, firmness and strictness are needed so that one is not swayed by pity. In the fourth chapter, I described the core value in the community sphere: '丹心合力'. For this value, what is required of a leader is both community spirit and loyalty to one's community, together with a strong sense of responsibility and the attitude of taking the initiative among others. Thus, it can be said that the great power that moves a community is fine teamwork, and the attitude of the leader can exert a great influence on his community. In the fifth chapter, I described the core value of the death-defying spirit in the social sphere. This value might be more definite and explicit than the others described above: a leader should willingly prepare for his own death in order to fulfill his great duties. 'What to do' is more important for a leader than 'how to do'; that is to say, a leader should always do righteous things, and efficiency is merely one of his concerns. A leader must be one who always behaves according to righteousness. Unless a leader's behavior is based on righteousness, it is impossible for him to lead people effectively. Thus, it can be said that a true leader is one who is not only moral but also strives to fulfill his duties.

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. But since Black Monday in 1987, stock market prices have become very complex and exhibit considerable noise. Recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH estimation process and compares it with the MLE-based process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial and radial. We analyzed the suggested models with the KOSPI 200 Index, which is composed of 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1487 daily observations; 1187 days were used to train the suggested GARCH models, and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis.
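The MLE baseline the abstract describes can be sketched in a few lines: the GARCH(1,1) conditional variance recursion is sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1}, and the parameters are found by minimizing the Gaussian negative log-likelihood. This is a minimal sketch under stated assumptions: the return series is simulated (the paper uses KOSPI 200 daily returns), and `scipy.optimize.minimize` stands in for whatever optimizer the authors used.

```python
# Hedged sketch: GARCH(1,1) parameter estimation by Gaussian MLE.
# Returns are simulated here; the paper estimates on KOSPI 200 data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
r = rng.standard_normal(1000) * 0.01  # hypothetical daily returns

def neg_loglik(params, r):
    omega, alpha, beta = params
    n = len(r)
    sigma2 = np.empty(n)
    sigma2[0] = r.var()  # initialize with the sample variance
    for t in range(1, n):
        # GARCH(1,1): sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1}
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    # Gaussian negative log-likelihood (additive constants dropped)
    return 0.5 * np.sum(np.log(sigma2) + r ** 2 / sigma2)

res = minimize(
    neg_loglik, x0=[1e-5, 0.05, 0.90], args=(r,),
    bounds=[(1e-8, None), (0.0, 1.0), (0.0, 1.0)],
    method="L-BFGS-B",
)
omega, alpha, beta = res.x
print(omega, alpha, beta)
```

The SVR-based alternative in the paper replaces this likelihood maximization with a regression fit of the variance recursion; libraries such as the `arch` package offer production-grade versions of the MLE step.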
Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility, although the polynomial kernel function shows exceptionally low forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility results. The IVTS entry rules are as follows: if tomorrow's forecasted volatility increases, buy volatility today; if tomorrow's forecasted volatility decreases, sell volatility today; if the forecasted volatility direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because we cannot trade historical volatility values themselves, but our simulation results are meaningful since the Korea Exchange introduced a volatility futures contract in November 2014 that traders can trade. The trading systems with SVR-based GARCH models show higher returns than those with MLE-based GARCH in the testing period. The profitable-trade percentages of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return and SVR-based symmetric S-GARCH a +526.4% return; MLE-based asymmetric E-GARCH shows a -72% return and SVR-based asymmetric E-GARCH a +245.6% return; MLE-based asymmetric GJR-GARCH shows a -98.7% return and SVR-based asymmetric GJR-GARCH a +126.3% return. The linear kernel function shows higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4%, against +150.2% for the MLE-based IVTS; the SVR-based GARCH IVTS also shows higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models need to be explored in search of better performance. We also do not consider costs incurred in the trading process, including brokerage commissions and slippage costs.
Moreover, the IVTS trading performance is unrealistic since we use historical volatility values as trading objects. Accurate forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine learning-based GARCH models can give better information to stock market investors.
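The IVTS entry rules stated in the abstract can be sketched as a simple signal over a volatility series. This is a hypothetical illustration: the six-point volatility series is invented, the same series stands in for both the forecast and the traded historical volatility (as the paper's simulation does), and costs are ignored.

```python
# Hedged sketch of the IVTS entry rules on a hypothetical series.
import numpy as np

def ivts_positions(forecast):
    """+1 = long volatility, -1 = short; hold when direction is unchanged."""
    pos = np.zeros(len(forecast), dtype=int)
    for t in range(1, len(forecast)):
        if forecast[t] > forecast[t - 1]:    # forecasted volatility rises
            pos[t] = 1                       # buy volatility
        elif forecast[t] < forecast[t - 1]:  # forecasted volatility falls
            pos[t] = -1                      # sell volatility
        else:                                # direction unchanged
            pos[t] = pos[t - 1]              # hold the existing position
    return pos

vol = np.array([10.0, 11.0, 11.0, 9.5, 9.0, 10.5])  # hypothetical values
pos = ivts_positions(vol)

# P&L from trading the volatility values themselves, as in the paper's
# simulation: position for day t is applied to the change vol[t]-vol[t-1].
pnl = pos[1:] * np.diff(vol)
print(pos, pnl.sum())
```

Because the signal and the traded series coincide here, the sketch has perfect foresight; with a genuine out-of-sample forecast (or the 2014 volatility futures contract the abstract mentions), the realized change would differ from the forecasted one.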