• Title/Summary/Keyword: SIMPLE method


Lung Clearance of Inhaled $^{99m}Tc$-DTPA by Urine Excretion Ratio (소변내 방사능배설량비를 이용한 $^{99m}Tc$-DTPA 폐청소율에 관한 연구)

  • Suh, G.Y.;Park, K.Y.;Jung, M.P.;Yoo, C.G.;Lee, D.S.;Kim, Y.W.;Han, S.K.;Jung, J.K.;Lee, M.C.;Shim, Y.S.;Kim, K.Y.;Han, Y.C.
    • Tuberculosis and Respiratory Diseases
    • /
    • v.40 no.4
    • /
    • pp.357-366
    • /
    • 1993
  • Background: Lung clearance of inhaled $^{99m}Tc$-DTPA reflects alveolar epithelial permeability and has been reported to be more sensitive than conventional pulmonary function tests in detecting lung epithelial damage. However, measuring lung clearance of inhaled $^{99m}Tc$-DTPA with a gamma camera may not always reflect alveolar epithelial permeability exactly, because it is influenced by mucociliary clearance depending on the site of particle deposition. Moreover, this method demands considerable time and effort from the patient, who has to sit or lie still in front of the camera for a prolonged period. Most of the absorbed DTPA is excreted in urine within 24 hours, and the amount of DTPA excreted in urine during the first few hours after inhalation is influenced by the absorption rate, which is correlated with alveolar-epithelial permeability, suggesting that urinary excretion, especially in the first few hours, may be an alternative index of lung clearance. The purpose of this study was to evaluate the usefulness of the ratio of $^{99m}Tc$-DTPA excreted in 2-hour and 24-hour urine as an index of alveolar-epithelial damage. Methods: Pulmonary function tests including diffusing capacity, lung clearance of $^{99m}Tc$-DTPA measured by gamma camera ($T_{1/2}$), and the 2hr/24hr urine excretion ratio (Ratio) of inhaled $^{99m}Tc$-DTPA were compared in 8 normal subjects and 14 patients with diffuse interstitial lung disease. Results: 1) In the normal controls, there was a significant negative correlation between $T_{1/2}$ and the Ratio (r=-0.77, p<0.05). In patients with diffuse interstitial lung disease, there was also a significant negative correlation between $T_{1/2}$ and the Ratio (r=-0.63, p<0.05). 2) In the diffuse interstitial lung disease patients, $T_{1/2}$ was $38.65{\pm}11.63$ min, significantly lower than that of the normal controls, $55.53{\pm}11.15$ min, and the Ratio was $52.15{\pm}10.07%$, significantly higher than that of the normal controls, $40.43{\pm}5.53%$ (p<0.05). 3) There were no significant correlations between $T_{1/2}$ or the Ratio and the diffusing capacity of the lung in either patients or controls (p>0.05). Conclusion: These results suggest that the 2hr/24hr urine excretion ratio of inhaled $^{99m}Tc$-DTPA is a useful, simple bedside test for assessing alveolar epithelial permeability and may be used as an additional follow-up test, complementing conventional pulmonary function tests, in patients with diffuse interstitial lung disease.
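A minimal computational sketch (with hypothetical counts and subject values, not data from the study) of how the 2hr/24hr urine excretion ratio could be computed and correlated with the gamma-camera clearance half-time:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-subject measurements (counts corrected for decay and background).
urine_activity_2h = np.array([1.8e5, 2.1e5, 1.5e5, 2.6e5])   # counts in 0-2 h urine
urine_activity_24h = np.array([3.9e5, 4.0e5, 3.8e5, 4.5e5])  # counts in 0-24 h urine
t_half_min = np.array([55.0, 48.0, 61.0, 39.0])              # gamma-camera T1/2 (min)

# 2hr/24hr excretion ratio, expressed as a percentage as in the abstract.
ratio_pct = 100.0 * urine_activity_2h / urine_activity_24h

# The study reports a negative correlation between T1/2 and the ratio.
r, p = pearsonr(t_half_min, ratio_pct)
print(f"Ratio (%): {np.round(ratio_pct, 1)}")
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```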

Pulmonary Mycoses in Immunocompromised Hosts (면역기능저하 환자에서 폐진균증에 대한 임상적 고찰)

  • Suh, Gee-Young;Park, Sang-Joon;Kang, Kyeong-Woo;Koh, Young-Min;Kim, Tae-Sung;Chung, Man-Pyo;Kim, Ho-Joong;Han, Jong-Ho;Choi, Dong-Chull;Song, Jae-Hoon;Kwon, O-Jung;Rhee, Chong-H.
    • Tuberculosis and Respiratory Diseases
    • /
    • v.45 no.6
    • /
    • pp.1199-1213
    • /
    • 1998
  • Background : The number of immunocompromised hosts has been increasing steadily, and a new pulmonary infiltrate in these patients is a potentially lethal condition that requires rapid diagnosis and treatment. In this study we sought to examine the clinical manifestations, radiologic findings, and therapeutic outcomes of pulmonary mycoses presenting as a new pulmonary infiltrate in immunocompromised hosts. Method : All cases presenting as a new pulmonary infiltrate in immunocompromised hosts and confirmed to be pulmonary mycoses by pathologic examination or by positive culture from a sterile site between October of 1996 and April of 1998 were included in the study, and their charts and radiologic findings were retrospectively reviewed. Results : In all, 14 cases of pulmonary mycoses from 13 patients (male : female ratio = 8 : 5, median age 47 yr) were found. Twelve cases were diagnosed as aspergillosis and two as mucormycosis. Major risk factors for fungal infection were chemotherapy for hematologic malignancy (10 cases) and organ transplantation (4 cases). Three cases were receiving empirical amphotericin B at the time the new lung infiltrates appeared. Cases in the hematologic malignancy group had more prominent symptoms: fever (9/10), cough (6/10), sputum (5/10), dyspnea (4/10), and chest pain (5/10). Patients in the organ transplant group had minimal symptoms (p<0.05). On simple chest films, all of the cases presented as single or multiple nodules (6/14) or consolidations (8/14). High-resolution computed tomography showed peri-lesional ground-glass opacities (14/14), pleural effusions (5/14), and cavitary changes (7/14). Definitive diagnostic methods were as follows: 10 cases underwent minithoracotomy, 2 underwent video-assisted thoracoscopic surgery, 1 underwent percutaneous needle aspiration, and 1 case was diagnosed by culture of abscess fluid. All cases received treatment with amphotericin B, with 1 case each being treated with liposomal amphotericin B and itraconazole because of renal toxicity. The lung lesion improved in 12 of the 14 cases, but 4 patients died before completing therapy. Conclusion : When a new lung infiltrate presenting either as a nodule or as consolidation develops in a neutropenic patient with hematologic malignancy or in a transplant recipient, pulmonary mycosis should always be considered in the differential diagnosis. Aggressive work-up and early treatment may improve the prognosis of these patients.

Term Mapping Methodology between Everyday Words and Legal Terms for Law Information Search System (법령정보 검색을 위한 생활용어와 법률용어 간의 대응관계 탐색 방법론)

  • Kim, Ji Hyun;Lee, Jong-Seo;Lee, Myungjin;Kim, Wooju;Hong, June Seok
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.137-152
    • /
    • 2012
  • In the era of Web 2.0, as many users create large amounts of web content (so-called user-created content) by themselves, the World Wide Web is overflowing with information, and finding meaningful information among so many resources has become the key problem. Information retrieval is now important across every field, and several types of search services have been developed and are widely used to retrieve the information users really want. In particular, legal information search is an indispensable service, a channel through which people can conveniently find the law relevant to their present situation and gain knowledge about it. Since 2009, the Office of Legislation in Korea has provided the Korean Law Information portal service for searching law information such as legislation, administrative rules, and judicial precedents, so that people can conveniently find information related to the law. However, this service has a limitation, because current search engine technology basically returns documents depending on whether or not the query terms are included in them. Despite these efforts of the Office of Legislation, it is therefore difficult for general users who are not familiar with legal terms to retrieve law information through simple keyword matching, because there is a wide divergence between everyday words and legal terms, many of which originate from Chinese characters. People generally try to access law information using everyday words, so they have difficulty getting the results they actually want. In this paper, we propose a term mapping methodology between everyday words and legal terms for general users who do not have a sufficient background in legal terminology, and we develop a search service that can provide law information search results from everyday words. This makes it possible to search law information accurately without knowledge of legal terminology; in other words, our research goal is a law information search system with which general users can retrieve law information using everyday words. First, this paper takes advantage of internet blog tags, using the concept of collective intelligence, to find the mapping relationship between everyday words and legal terms: we collect tags related to an everyday word from blog posts. When people write a post on an internet blog, they generally add non-hierarchical keywords or terms, called tags, to describe, classify, and manage their posts. Second, the collected tags are clustered using the K-means cluster analysis method. Then we find a mapping relationship between an everyday word and a legal term, using our estimation measure to select the legal term that best matches the everyday word. Selected legal terms are given a definite relationship, and the relations between everyday words and legal terms are described using SKOS, an ontology for describing knowledge related to thesauri, classification schemes, taxonomies, and subject headings. Thus, based on the proposed mapping and searching methodologies, when users try to retrieve law information using an everyday word, our legal information search system finds the legal term mapped to the user query and retrieves law information using the matched legal term. Therefore, users can get exact results even if they have no knowledge of legal terms. As a result of this research, we expect that general users without a professional legal background will be able to retrieve legal information conveniently and efficiently using everyday words.
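A minimal sketch of the mapping pipeline described above, with hypothetical tags, a hypothetical candidate list of legal terms, and a simple cosine-similarity score standing in for the paper's estimation measure; the SKOS publishing step is omitted:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical inputs: tags harvested from blog posts about the everyday word
# "집주인" (landlord), and a candidate vocabulary of legal terms.
tags = ["전세", "보증금", "임대차", "월세", "계약서", "이사", "부동산", "집주인", "임대인"]
legal_terms = ["임대인", "임차인", "주택임대차보호법", "전세권"]

# Vectorize tags and legal terms in a shared character n-gram space
# (a simple way to compare short Korean terms for this illustration).
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(1, 2))
tag_vecs = vectorizer.fit_transform(tags)
legal_vecs = vectorizer.transform(legal_terms).toarray()

# Cluster the tags with K-means, as in the paper, and take the largest
# cluster as the dominant usage context of the everyday word.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(tag_vecs)
dominant = np.bincount(km.labels_).argmax()
centroid = km.cluster_centers_[dominant].reshape(1, -1)

# Pick the legal term closest to the dominant cluster centroid; in the paper
# a dedicated estimation measure plays this role, and the resulting mapping
# would then be published as a SKOS relation.
scores = cosine_similarity(centroid, legal_vecs)[0]
print("Mapped legal term:", legal_terms[int(scores.argmax())])
```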

Prognostic Value of TNM Staging in Small Cell Lung Cancer (소세포폐암의 TNM 병기에 따른 예후)

  • Park, Jae-Yong;Kim, Kwan-Young;Chae, Sang-Cheol;Kim, Jeong-Seok;Kim, Kwon-Yeop;Park, Ki-Su;Cha, Seung-Ik;Kim, Chang-Ho;Kam, Sin;Jung, Tae-Hoon
    • Tuberculosis and Respiratory Diseases
    • /
    • v.45 no.2
    • /
    • pp.322-332
    • /
    • 1998
  • Background: Accurate staging is important for determining treatment modalities and predicting prognosis in patients with lung cancer. The simple two-stage system of the Veterans Administration Lung Cancer Study Group has been used for staging small cell lung cancer (SCLC), because treatment usually consists of chemotherapy with or without radiotherapy. However, this system does not accurately segregate patients into homogeneous prognostic groups. Therefore, a variety of new staging systems have been proposed as more intensive treatments, including intensive radiotherapy or surgery, enter clinical trials. We evaluated the prognostic importance of TNM staging, which has the advantage of providing a uniform, detailed classification of tumor spread, in patients with SCLC. Methods: The medical records of 166 patients diagnosed with SCLC between January 1989 and December 1996 were reviewed retrospectively. The influence of TNM stage on survival was analyzed in the 147 of these 166 patients who had complete TNM staging data. Results: Three patients were classified as stage I/II, 15 as stage IIIa, 78 as stage IIIb, and 48 as stage IV. Survival rates at 1 and 2 years for these patients were as follows: stage I/II, 75% and 37.5%; stage IIIa, 46.7% and 25.0%; stage IIIb, 34.3% and 11.3%; and stage IV, 2.6% and 0%. The 2-year survival rates for the 84 patients who received chemotherapy (more than 2 cycles) with or without radiotherapy were as follows: stage I/II, 37.5%; stage IIIa, 31.3%; stage IIIb, 13.5%; and stage IV, 0%. Overall outcome according to TNM stage was significantly different whether or not patients received treatment. However, there was no significant difference between stage IIIa and stage IIIb, although median survival and the 2-year survival rate were higher in stage IIIa than in stage IIIb. Conclusion: These results suggest that the TNM staging system may be helpful for predicting the prognosis of patients with SCLC.

Characteristics of Everyday Movement Represented in Steve Paxton's Works: Focused on Satisfyin' Lover, Bound, and Contact at 10th & 2nd (스티브 팩스톤(Steve Paxton)의 작품에서 나타난 일상적 움직임의 특성에 관한 연구: <Satisfyin' Lover>, <Bound>, <Contact at 10th & 2nd>를 중심으로)

  • KIM, Hyunhee
    • Trans-
    • /
    • v.3
    • /
    • pp.109-135
    • /
    • 2017
  • The purpose of this thesis is to analyze the characteristics of everyday movement shown in the performances of Steve Paxton. For a long time, the work of art was treated as a special object enjoyed by the upper classes as high culture. A wide gap therefore existed between everyday life and art, and the emergence of everyday elements in works of art signals a change in public awareness accompanying social change. Postmodernism, the period in which the boundary between art and everyday life became uncertain, arose in the postwar society after the Second World War, a social situation rapidly changing into a capitalistic one. The changes of this time led scholars to approach concepts related to everyday life academically and affected artists through the pluralistic, postmodern spirit of the age, which rejected totality. In the same period, modern dance also faced a turning point with post-modern dance. After the Second World War, modern dance began to be seen as having reached its limit, and at this juncture a new direction was led by dancers, including those of the Judson Dance Theatre. Steve Paxton, one of the founders of the Judson Dance Theatre who had danced in Merce Cunningham's company, was critical of the conditions of the dance company, its social structure, and the process by which movement is made. This thinking is shown in his early performances as an attempt to bring everyday motion itself into performance. His early activity, represented by the walking motion, attracted attention as a simple motion that excludes all the artful elements of existing dance performance and can be performed by someone who is not a dancer. Although the use of everyday movement is regarded as a defining, open characteristic of post-modern dance, prior research on it has been rare, which motivated this study. In addition, studies related to Steve Paxton are skewed toward Contact Improvisation, of which he was an active practitioner. This study therefore examines his use of ordinary movement before he concentrated on Contact Improvisation, as well as his later attempts, including Contact Improvisation. The study analyzes Paxton's performances Satisfyin' Lover, Contact at 10th & 2nd, and Bound, and on this basis draws out their everyday characteristics. Related books, academic essays, dance articles, and reviews are consulted to consider the concept of everyday life and to understand the dance-historical movement of post-modern dance. Paxton attracted attention for an activity that began as a critical approach to the movement of existing modern dance. Performed by people who were not dancers, the walking motion shown in Satisfyin' Lover gave aesthetic meaning to everyday movement. Later, influenced by Eastern ideas, he developed Contact Improvisation, in which motion arises from the energy of natural laws. He also brought everyday objects into his performances and used mundane movement and impromptu gestures originating from a relaxed body to deliver various images. The everyday movement in his performances represents a change in the traditionally maintained awareness of dance performance, including a change in the genre itself. His unprecedented attempts and experimentation should be highly regarded as efforts to overcome the limits of modern dance.

Effect of Sanitation Treatment of Extending Shelf-life on Fresh Poultry Meats (계육(鷄肉)의 유통기간연장(流通期間延長)을 위(爲)한 위생처리방법(衛生處理方法)에 관(關)한 연구(硏究))

  • Cho, M.J.;Jang, P.H.;Park, K.B.;Lee, B.M.
    • Korean Journal of Food Science and Technology
    • /
    • v.14 no.4
    • /
    • pp.291-300
    • /
    • 1982
  • In order to develop an effective and simple sanitation method for extending the shelf-life of fresh poultry meat, the effects of sanitizers, sanitation methods, and packaging materials on the shelf-life of poultry meat were observed at $4^{\circ}C$ and at room temperature $(10{\sim}20^{\circ}C)$. The results are summarized as follows: 1. The autochthonous skin microflora of the poultry, present before processing, were believed to be removed or killed during scalding and plucking, and the exposed dermal tissue was contaminated by microorganisms during the subsequent stages of processing. 2. In the final stage of poultry processing, total viable counts of microorganisms and coliforms averaged $3.5{\times}10^4/cm^2$ and $400/cm^2$, respectively. 3. The refrigerated shelf-life of fresh whole poultry carcasses at $3\;to\;4^{\circ}C$ was extended to 7 to 16 days, compared to the control, by treatment with various sanitizers, either by dipping freshly chilled carcasses for 5 min or by spraying 1 liter of sanitizer per carcass. For storage at $10\;to\;15^{\circ}C$, the shelf-life of poultry carcasses was extended by one to two days by the sanitation treatments compared to the control. 4. Spray sanitation was more effective than dip sanitation, and a 5-minute dip or one liter of spray per carcass was enough for effective sanitation of poultry carcasses with most sanitizers. 5. Packaging with an oxygen-impermeable polyvinylidene chloride film extended the shelf-life to 10 days, and polyethylene to 5 days, compared to the control. When poultry carcasses were sanitized by continuous spraying with one liter of 30 ppm chlorine and another liter of 5% potassium sorbate and then packaged with polyvinylidene chloride, the shelf-life was extended to about 30 days compared to the control.

Analysis of the Time-dependent Relation between TV Ratings and the Content of Microblogs (TV 시청률과 마이크로블로그 내용어와의 시간대별 관계 분석)

  • Choeh, Joon Yeon;Baek, Haedeuk;Choi, Jinho
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.163-176
    • /
    • 2014
  • Social media is becoming the platform through which users communicate their activities, status, emotions, and experiences to other people. In recent years, microblogs such as Twitter have gained popularity because of their ease of use, speed, and reach. Compared to a conventional web blog, a microblog lowers users' effort and investment in content generation by encouraging shorter posts. There has been a lot of research into capturing social phenomena and analyzing the chatter of microblogs, but measuring television ratings has received little attention so far. Currently, the most common method of measuring TV ratings uses an electronic metering device installed in a small number of sampled households. Microblogs allow users to post short messages, share daily updates, and conveniently keep in touch, and microblog users interact with each other while watching television or movies or visiting a new place. When measuring TV ratings, some features are significant during certain hours of the day or days of the week, whereas the same features are meaningless during other time periods. The importance of features can therefore change during the day, and a model capturing this time-sensitive relevance is required to estimate TV ratings. Modeling the time-related characteristics of features is thus key when measuring TV ratings through microblogs, and we show that capturing the time-dependency of features is vital for improving accuracy. To explore the relationship between the content of microblogs and TV ratings, we collected Twitter data using the Get Search component of the Twitter REST API from January 2013 to October 2013. There are about 300 thousand posts in our data set; after excluding data such as advertising or promoted tweets, we selected 149 thousand tweets for analysis. The number of tweets reaches its maximum on the broadcasting day and increases rapidly around the broadcasting time; this stems from the nature of the public channel, which broadcasts the program at a predetermined time. From our analysis, we find that count-based features such as the number of tweets or retweets have a low correlation with TV ratings, which implies that a simple tweet rate does not reflect satisfaction with, or the response to, the TV programs. Content-based features extracted from the content of tweets have a relatively high correlation with TV ratings. Further, some emoticons or newly coined words that are not tagged in the morpheme extraction process have a strong relationship with TV ratings. We find a time-dependency in the correlation of features between the periods before and after the broadcast. Since the TV program is broadcast regularly at a predetermined time, users post tweets expressing their expectation for the program or their disappointment at not being able to watch it, and the features that are highly correlated before the broadcast differ from those after the broadcast. This shows that the relevance of words to TV programs can change according to the time of the tweets. Among the 336 words that fulfill the minimum requirements for candidate features, 145 words have their highest correlation before the broadcasting time, whereas 68 words reach their highest correlation after broadcasting. Interestingly, some words expressing the impossibility of watching the program show high relevance despite carrying a negative meaning. Understanding the time-dependency of features can help improve the accuracy of TV ratings measurement. This research contributes a basis for estimating the response to, or satisfaction with, broadcast programs using the time dependency of words in Twitter chatter. More research is needed to refine the methodology for predicting or measuring TV ratings.
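A minimal sketch, with hypothetical episode data and word features, of the kind of time-windowed correlation analysis the abstract describes (pre- versus post-broadcast word counts correlated with ratings):

```python
import pandas as pd

# Hypothetical data: per-episode word counts from tweets posted before and
# after the broadcast, plus the measured rating for each episode.
data = pd.DataFrame({
    "episode":       [1, 2, 3, 4, 5],
    "rating":        [8.2, 9.1, 7.5, 10.3, 9.8],
    "pre_기대":      [120, 150, 90, 210, 180],   # "anticipation", before airtime
    "post_본방사수": [300, 340, 250, 420, 390],  # "watching live", after airtime
})

# Correlate each candidate word feature with ratings, keeping the pre- and
# post-broadcast windows separate to expose the time-dependency described above.
features = [c for c in data.columns if c.startswith(("pre_", "post_"))]
corr = {f: data[f].corr(data["rating"]) for f in features}
for f, r in sorted(corr.items(), key=lambda kv: -abs(kv[1])):
    print(f"{f}: r = {r:.2f}")
```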

A Study of the Reactive Movement Synchronization for Analysis of Group Flow (그룹 몰입도 판단을 위한 움직임 동기화 연구)

  • Ryu, Joon Mo;Park, Seung-Bo;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.79-94
    • /
    • 2013
  • Recently, high value-added business has been growing steadily in the culture and art sector. To generate high value from a performance, audience satisfaction is necessary, and flow is a critical factor in that satisfaction; it must be induced in the audience and measured. To evaluate the audience's interest in and emotion toward the content, producers or investors need an index for measuring flow. However, it is neither easy to define flow quantitatively nor to collect the audience's reactions immediately. Previous studies evaluated group flow by summing the average value of each person's reaction: the flow, or "good feeling", of each audience member was extracted from the face, especially changes in expression, and from body movement. However, it was not easy to handle the large amount of real-time data from the individual sensor signals, and it was difficult to set up the experimental devices, for economic and environmental reasons, because every participant needed a personal sensor to record physical signals and a camera positioned in front of the head to capture facial expressions. Therefore, a simpler system for analyzing group flow is needed. This study provides a method for measuring audience flow through group synchronization at the same time and place. To measure synchronization, we built a real-time processing system using differential images and the Group Emotion Analysis (GEA) system. A differential image is obtained from the camera by subtracting the previous frame from the present frame, which yields the movement variation of the audience's reaction. We then developed a program, GEA, as the flow judgment model. After measuring the audience's reactions, synchronization is divided into Dynamic State Synchronization and Static State Synchronization: Dynamic State Synchronization accompanies the audience's active reactions, while Static State Synchronization corresponds to the audience remaining still. Dynamic State Synchronization can be caused by surprised reactions to scary, creepy, or reversal scenes, while Static State Synchronization is triggered by touching or sad scenes. We therefore showed the audience several short movies containing such scenes, which made them sad, made them clap, gave them the creeps, and so on. To assess the movement of the audience, we defined the critical points ${\alpha}$ and ${\beta}$: Dynamic State Synchronization is meaningful when the movement value is over critical point ${\beta}$, while Static State Synchronization is effective under critical point ${\alpha}$. ${\beta}$ was derived from the clapping movement of 10 teams instead of using the average amount of movement. After checking the audience's reactive movement, the percentage ratio was calculated by dividing "people having a reaction" by "total people". A total of 37 teams took part in the experiments at the "2012 Seoul DMC Culture Open". First, they were induced to clap by the staff; second, a basic scene was shown to neutralize the audience's emotions; third, a flow scene was displayed; and fourth, a reversal scene was introduced. Then 24 of the teams were shown amusing and creepy scenes, and the other 10 teams were exposed to the sad scene. The audience clapped and laughed at the amusing scene, shook their heads or hid by closing their eyes at the creepy one, and fell silent at the sad or touching scene. If the result was over about 80%, the group could be judged as synchronized and flow was judged to have been achieved. As a result, the audience showed similar reactions to similar stimulation at the same time and place. With additional normalization and experiments, the flow factor could be obtained through synchronization in much bigger groups, which should be useful for planning content.
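A minimal sketch of the differential-image step, assuming OpenCV and a hypothetical recording; per-pixel change is used here as a stand-in for counting "people having a reaction", and ALPHA/BETA are placeholder values for the paper's critical points:

```python
import cv2
import numpy as np

# Hypothetical thresholds standing in for the paper's alpha/beta critical points.
ALPHA = 0.02   # below this motion level: Static State Synchronization
BETA = 0.15    # above this motion level: Dynamic State Synchronization

def motion_level(prev_gray, curr_gray, pixel_thresh=25):
    """Differential image: subtract the previous frame from the current one
    and return the fraction of pixels whose change exceeds pixel_thresh."""
    diff = cv2.absdiff(curr_gray, prev_gray)
    return float((diff > pixel_thresh).mean())

cap = cv2.VideoCapture("audience.mp4")  # hypothetical recording of the audience
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

levels = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    levels.append(motion_level(prev_gray, gray))
    prev_gray = gray
cap.release()

levels = np.array(levels)
dynamic_sync = (levels > BETA).mean()   # share of frames with strong, shared motion
static_sync = (levels < ALPHA).mean()   # share of frames with collective stillness
print(f"dynamic sync: {dynamic_sync:.0%}, static sync: {static_sync:.0%}")
```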

Individual Thinking Style leads its Emotional Perception: Development of Web-style Design Evaluation Model and Recommendation Algorithm Depending on Consumer Regulatory Focus (사고가 시각을 바꾼다: 조절 초점에 따른 소비자 감성 기반 웹 스타일 평가 모형 및 추천 알고리즘 개발)

  • Kim, Keon-Woo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.171-196
    • /
    • 2018
  • With the development of the web, two-way communication and evaluation became possible and marketing paradigms shifted. To meet the needs of consumers, web design trends continuously respond to consumer feedback. As the web becomes more and more important, both academics and businesses study consumer emotion and satisfaction on the web. However, some consumer characteristics are not well considered: demographic characteristics such as age and sex have been studied extensively, but few studies consider psychological characteristics such as regulatory focus (i.e., emotional regulation). In this study, we analyze the effect of web style on consumer emotion. Many studies analyze the relationship between the web and regulatory focus, but most concentrate on the purpose of web use, particularly motivation and information search, rather than on web style and design. The web communicates with users through visual elements, and because the human brain is influenced by all five senses, both design factors and emotional responses are important in the web environment. Therefore, in this study, we examine the relationship between web style and design and consumer emotion and satisfaction. Previous studies have considered the effects of web layout, structure, and color on emotions. In this study, however, we excluded these web components and analyzed the relationship between consumer satisfaction and the emotional indexes of web style only. To perform this analysis, we collected consumer surveys presenting 40 web style themes to 204 consumers, each of whom evaluated four themes. The emotional adjectives evaluated by consumers comprised 18 contrast pairs, and the higher-level emotional indexes were extracted through factor analysis. The emotional indexes were 'softness', 'modernity', 'clearness', and 'jam'. Hypotheses were established based on the assumption that the emotional indexes have different effects on consumer satisfaction. After the analysis, hypotheses 1, 2, and 3 were accepted and hypothesis 4 was rejected; although hypothesis 4 was rejected, its effect on consumer satisfaction was negative rather than positive. This means that emotional indexes such as 'softness', 'modernity', and 'clearness' have a positive effect on consumer satisfaction: consumers prefer emotions that are soft, emotional, natural, rounded, dynamic, modern, elaborate, unique, bright, pure, and clear. 'Jam' has a negative effect on consumer satisfaction, meaning that consumers prefer emotions that are empty, plain, and simple. Regulatory focus shows differences in motivation and propensity across various domains. It is important to consider organizational behavior and decision making according to regulatory focus tendency, which affects not only political, cultural, and ethical judgments and behavior but also broad psychological problems. Regulatory focus also differs in emotional response: promotion focus responds more strongly to positive emotions, whereas prevention focus responds strongly to negative emotions. Web style is a type of service, and consumer satisfaction is affected not only by cognitive evaluation but also by emotion; this emotional response depends on whether the consumer expects benefit or harm. Therefore, it is necessary to confirm differences in consumers' emotional responses to web style according to regulatory focus, one of the consumers' psychological characteristics and viewpoints. According to the MMR analysis, hypothesis 5.3 was accepted and hypothesis 5.4 was rejected, although hypothesis 5.4 was supported in the direction opposite to what was hypothesized. After validation, we confirmed the mechanism of emotional response according to regulatory focus tendency. Using these results, we developed the structure of a web-style recommendation system and recommendation methods based on regulatory focus. We classified consumers into three regulatory-focus groups, promotion, grey, and prevention, and suggest a web-style recommendation method for each group. If this study is developed further, we expect that existing regulatory focus theory can be extended not only to motivation but also to emotional and behavioral responses according to regulatory focus tendency. Moreover, we believe it is possible to recommend web styles according to regulatory focus and the emotions consumers most prefer.
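A minimal sketch of the recommendation step, with hypothetical theme scores on the four emotional indexes and hypothetical per-group weights (the paper's fitted coefficients are not reproduced here):

```python
import pandas as pd

# Hypothetical theme scores on the four emotional indexes from the abstract:
# softness, modernity, clearness (positive for satisfaction) and jam (negative).
themes = pd.DataFrame({
    "theme":     ["A", "B", "C", "D"],
    "softness":  [0.7, 0.2, 0.5, 0.9],
    "modernity": [0.6, 0.8, 0.4, 0.3],
    "clearness": [0.8, 0.5, 0.9, 0.4],
    "jam":       [0.2, 0.6, 0.1, 0.5],
})

# Hypothetical per-group weights: the promotion group responds more to the
# positive indexes, the prevention group penalizes 'jam' more strongly, and
# the grey group sits in between.
weights = {
    "promotion":  {"softness": 0.4, "modernity": 0.3, "clearness": 0.3, "jam": -0.2},
    "grey":       {"softness": 0.3, "modernity": 0.3, "clearness": 0.3, "jam": -0.3},
    "prevention": {"softness": 0.3, "modernity": 0.2, "clearness": 0.3, "jam": -0.5},
}

def recommend(group: str, top_k: int = 2) -> pd.DataFrame:
    """Score every theme with the group's weights and return the top-k themes."""
    w = weights[group]
    score = sum(themes[idx] * coef for idx, coef in w.items())
    return themes.assign(score=score).nlargest(top_k, "score")[["theme", "score"]]

print(recommend("prevention"))
```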

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • In addition to the stakeholders of bankrupt companies, including managers, employees, creditors, and investors, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models; as a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, the analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it focused only on a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, in which everything collapses in a single moment. The key variables associated with corporate default vary over time: Deakin's (1972) study, revisiting the analyses of Beaver (1967, 1968) and Altman (1968), shows that the major factors affecting corporate failure have changed, and Grice's (2001) study likewise found changes in the importance of the predictive variables of Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most of them do not consider changes that occur over time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for the time-dependent bias by means of a time series analysis algorithm that reflects dynamic change. Motivated by the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively. In order to construct a bankruptcy model that is consistent over time, we first train a deep learning time series model using the data before the financial crisis (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted with validation data that include the financial crisis period (2007~2008). As a result, we construct a model that shows a pattern similar to the results on the training data and has excellent predictive power. After that, each bankruptcy prediction model is re-trained on the combined training and validation data (2000~2008), applying the optimal parameters found in the previous validation. Finally, each corporate default prediction model trained over the nine years is evaluated and compared using the test data (2009), and the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis, logit model), we show that the deep learning time series model based on these three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). The independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. 
The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms are compared. Corporate data suffer from nonlinear variables, multi-collinearity among variables, and lack of data: the logit model addresses the nonlinearity, the Lasso regression model solves the multi-collinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis to automated AI analysis and, finally, toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis at corporate default prediction modeling and has greater predictive power. With the Fourth Industrial Revolution, the current government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry is still insufficient. This is an initial study of deep learning time series algorithms for analyzing corporate defaults, and we hope it will serve as comparative material for non-specialists beginning studies that combine financial data with deep learning time series algorithms.
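A minimal sketch, assuming PyTorch and placeholder data, of an LSTM classifier over multi-year sequences of financial ratios of the kind the study describes; the actual variable selection, data splits, and tuning are not reproduced:

```python
import torch
import torch.nn as nn

# Hypothetical setup: for each firm, a 7-year sequence of k financial ratios
# (e.g., those kept by the Lasso variable selection), labeled 1 if the firm
# later defaulted and 0 otherwise.
n_firms, seq_len, n_features = 256, 7, 12
X = torch.randn(n_firms, seq_len, n_features)   # placeholder data
y = torch.randint(0, 2, (n_firms, 1)).float()   # placeholder labels

class DefaultLSTM(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        _, (h_n, _) = self.lstm(x)   # h_n: final hidden state per sequence
        return self.head(h_n[-1])    # one default logit per firm

model = DefaultLSTM(n_features)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):               # tiny training loop for illustration only
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: loss = {loss.item():.4f}")
```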