• Title/Summary/Keyword: Attempts


A study of the Medical System in the Early Chosun-Dynasty (조선시대(朝鮮時代) 전기(前期)의 의료제도(醫療制度)에 대한 연구(硏究))

  • Han, Dae-Hee;Kang, Hyo-Shin
    • Journal of Korean Medical classics
    • /
    • v.9
    • /
    • pp.555-652
    • /
    • 1996
  • Up to the present, scholarly achievements in the history of the medical system have been rather scarce despite its importance in Korean history. Hence, this dissertation attempts to examine the significance of the institution in Korean history, covering the period from ancient times through the early Chosun-Dynasty. In ancient times, medical practice relied primarily upon human instinct and experience; at the same time, shamans' incantations were widely believed to cure diseases, which were supposedly the workings of evil spirits. For the period from Old Chosun through Samhan(三韓), Chinese refugees brought along the medical knowledge and skills of the continent. New Chinese medicine, traditional practices, and incantations were all in general use at this time. Medicine and the medical system were organized by the period of the Three Kingdoms(三國時代). No definite record concerning Koguryo remains now. As for Paekje, however, history shows that they set up the system under Chinese influence, assigning medical posts such as Euibaksa (medical doctor), Chaeyaksa (pharmacist), and Jukeumsa (medicine man) within Yakbu (department of medicine). Scientifically advanced, they sent experts to Japan, exerting a tremendous influence on the development of the science in ancient Japan. After the unification of the three countries, Shilla had its own system after the model of Dang(唐). This system of the Unified Shilla was handed down to Koryo and became the backbone of future ones. In ancient times, religion and medicine were closely related. The curative function of the shaman was absolute. Buddhism played a notable part in medical practice, too, producing numerous medical monks. The medical system of Koryo followed the model of Dang with some borrowings from Song(宋). 
Sangyakkuk(尙藥局) dealt exclusively with the diseases of the monarch, whereas Taeeuigam(太醫監) was the central office handling the national medical administration as well as the qualification examination and education of doctors. In addition, Dongsodaebiwon(東西大悲院), Jewibo(濟危寶), and Hyeminkuk(惠民局) were public hospitals for the people, and a few aristocrats practiced medicine privately. In 987, the 6th year of Songjong(成宗), local medical offices were installed for curing the sick and educating medical students. Later, Hyonjong(顯宗) established Yakjom (clinics, 藥店) throughout the country, and officials were sent there to see patients. Foreign experts, mainly from Song, were frequently invited to deliver their advanced technology, and contributed to the great progress of the science in Korea. Medical officials were provided with better land and salaries than others, enjoying appropriate social respect. Koryo exchanged doctors, medicine, and books mainly with Song, but also had substantial interrelations with Yuan(元), Ming(明), Kitan(契丹), Yojin(女眞), and Japan. Among them, however, Song was the most influential in the development of medicine in Koryo. During the Koryo Dynasty, Buddhism, the national religion at the time, exerted a greater effect on medicine than in any other period. By conducting national ceremonies and public rituals to cure diseases, Taoism also affected the way people regarded illness. Curative shamanism was still in practice as well. These religious practices, however, were now engaged only when medication was already in use or when medicine could no longer help. The advanced medical system of Koryo was handed down to Chosun and served as the basis for further progress. Hence, it played well the role of connecting ancient medicine with modern medicine. The early Chosun followed and systematized the scientific and technical achievements in medicine of the Koryo Dynasty and, furthermore, laid the foundation for future developments. 
Especially the approximately 70 years from the reign of Sejong(世宗) to that of Songjong(成宗) witnessed a tremendous progress in the field with the reestablishment of the medical system. The functions of the three medical institutes Naeeuiwon(內醫院), Joneuigam(典醫監), and Hyeminkuk(惠民局) were expanded. The second, particularly, not only systematized all the medical practices of the whole nation, but also grew and distributed domestic medicaments, which had been continually developed since the late Koryo period. In addition, Hyeminso(惠民局, Hwarinwon(活人院)) and Jesaenwon(濟生院) (later merged into the first) played certain parts in curing illness. Despite the active medical education in the capital and the country, the results were not substantial, for the aristocracy avoided the profession due to the social prejudice against technicians, including medical doctors. During the early Chosun-Dynasty, the science was divided into Chimgueui (acupuncturist), Naryogeui (specialist in scrofula), and Chijongeui (specialist in boils). For the textbooks, those for the qualification exam were used, including several written by natives. With the introduction of Neo-Confucianism(性理學), which reinforced sexual segregation, female doctors appeared for the female patients who refused to be seen by male doctors. This system first appeared in 1406, the sixth year of Taejong(太宗), but was finally established during the reign of Sejong. As slaves to the offices, the lowest class, female doctors drew no respect. However, this is still significant in terms of women's participation in society. They were the precedents of midwives. Medical officials were selected through the civil exam and a special test. Those who passed the exams were given temporary jobs and took permanent posts later. At that time, the test score, the work experience, and the performance record of the prospective doctor were all taken into consideration, for it was a specialized office. 
Most doctors were given posts that changed every six months, and therefore had fewer chances for a government office than the aristocracy. At the beginning, the social status of those in medicine was not that low, but with the prejudice gradually rising among the aristocracy, they came to be generally regarded as belonging to the upper-middle technician class. Dealing with life, however, they received social respect and courtesy from the public. Sometimes they amassed wealth with their skills. They kept improving their techniques and finally came to take an important share in the modernization process during the late Chosun-Dynasty.


The Early Experience with a Totally Laparoscopic Distal Gastrectomy (전(全)복강경하 원위부 위절제술의 초기 경험)

  • Kim Jin Jo;Song Gyo Young;Chin Hyung Min;Kim Wook;Jeon Hae Myoung;Park Cho Hyun;Park Seung Man;Lim Keun Woo;Park Woo Bae;Kim Seung Nam
    • Journal of Gastric Cancer
    • /
    • v.5 no.1
    • /
    • pp.16-22
    • /
    • 2005
  • Purpose: In Korea, the number of laparoscopy-assisted distal gastrectomies for early gastric cancer patients has been increasing lately. Although minimally invasive surgery is more beneficial, no case of a totally laparoscopic distal gastrectomy has been reported because of the difficulty of intracorporeal anastomosis. This study attempts, through our experiences, to determine the feasibility of a totally laparoscopic distal gastrectomy using an intracorporeal gastroduodenostomy in treating early gastric carcinoma. Materials and Methods: We investigated the surgical results and clinicopathologic characteristics of eight (8) patients with an early gastric carcinoma who underwent a totally laparoscopic distal gastrectomy at the Department of Surgery, Our Lady of Mercy Hospital, The Catholic University of Korea, between June 2004 and September 2004. The intracorporeal gastroduodenostomy was performed with a delta-shaped anastomosis by using only laparoscopic linear staplers (Endocutter 45 mm; Ethicon Endosurgery, OH, USA). Results: The operative time was $369.4\pm62.5$ minutes (range $275\sim465$ minutes), and the anastomotic time was $45.1\pm14.4$ minutes (range $32\sim70$ minutes). The anastomotic time was shortened as surgical experience was gained. The number of laparoscopic linear staplers used per operation was $7.1\pm0.6$. The number of lymph nodes harvested was $31.9\pm13.1$. There was 1 case of transfusion and no case of conversion to an open procedure. The time to the first flatus was $2.8\pm0.5$ days, and the time to the first food intake was $4.1\pm0.8$ days. There were no early postoperative complications, and the postoperative hospital stay was $10.0\pm3.9$ days. Conclusion: A totally laparoscopic distal gastrectomy using an intracorporeal gastroduodenostomy with a delta-shaped anastomosis is technically feasible and can maximize the benefit of laparoscopic surgery for early gastric cancer.


If This Brand Were a Person, or Anthropomorphism of Brands Through Packaging Stories (가설품패시인(假设品牌是人), 혹통과고사포장장품패의인화(或通过故事包装将品牌拟人化))

  • Kniazeva, Maria;Belk, Russell W.
    • Journal of Global Scholars of Marketing Science
    • /
    • v.20 no.3
    • /
    • pp.231-238
    • /
    • 2010
  • The anthropomorphism of brands, defined as seeing human beings in brands (Puzakova, Kwak, and Rocereto, 2008), is the focus of this study. Specifically, the research objective is to understand the ways in which brands are rendered humanlike. By analyzing consumer readings of stories found on food product packages, we intend to show how marketers and consumers humanize a spectrum of brands and create meanings. Our research question considers the possibility that a single brand may host multiple or single meanings, associations, and personalities for different consumers. We start by highlighting the theoretical and practical significance of our research, explain why we turn our attention to packages as vehicles of brand meaning transfer, then describe our qualitative methodology, discuss findings, and conclude with a discussion of managerial implications and directions for future studies. The study was designed to directly expose consumers to potential vehicles of brand meaning transfer and then engage these consumers in free verbal reflections on their perceived meanings. Specifically, we asked participants to read non-nutritional stories on selected branded food packages in order to elicit data about received meanings. Packaging has yet to receive due attention in consumer research (Hine, 1995). Until now, attention has focused solely on its utilitarian function and has generated a body of research that has explored the impact of nutritional information and claims on consumer perceptions of products (e.g., Loureiro, McCluskey and Mittelhammer, 2002; Mazis and Raymond, 1997; Nayga, Lipinski and Savur, 1998; Wansink, 2003). An exception is a recent study that turns its attention to non-nutritional packaging narratives and treats them as cultural productions and vehicles for mythologizing the brand (Kniazeva and Belk, 2007). 
The next step in this stream of research is to explore how such mythologizing activity affects brand personality perception and how these perceptions relate to consumers. These are the questions that our study aimed to address. We used in-depth interviews to help overcome the limitations of quantitative studies. Our convenience sample was formed with the objective of providing demographic and psychographic diversity in order to elicit variations in consumer reflections on food packaging stories. Our informants represent middle-class residents of the US and do not exhibit the extreme alternative lifestyles described by Thompson (2004) as "cultural creatives". Nine people were individually interviewed on their food consumption preferences and behavior. Participants were asked to look at the twelve displayed food product packages and read all the textual information on the package, after which we continued with questions that focused on the consumer interpretations of the reading material (Scott and Batra, 2003). On average, each participant reflected on 4-5 packages. Our in-depth interviews lasted one to one and a half hours each. The interviews were tape-recorded and transcribed, providing 140 pages of text. The products came from local grocery stores on the West Coast of the US and represented a basic range of food product categories, including snacks, canned foods, cereals, baby foods, and tea. The data were analyzed using the procedures for developing grounded theory delineated by Strauss and Corbin (1998). As a result, our study does not support the notion of one brand/one personality assumed by prior work. Thus, we reveal multiple brand personalities peacefully cohabiting in the same brand as seen by different consumers, despite marketer attempts to create more singular brand personalities. We extend Fournier's (1998) proposition that one's life projects shape the intensity and nature of brand relationships. 
We find that these life projects also affect perceived brand personifications and meanings. While Fournier provides a conceptual framework that links together consumers' life themes (Mick and Buhl, 1992) and the relational roles assigned to anthropomorphized brands, we find that consumer life projects mold both the ways in which brands are rendered humanlike and the ways in which brands connect to consumers' existential concerns. We find two modes through which brands are anthropomorphized by our participants. First, brand personalities are created by seeing them through perceived demographic, psychographic, and social characteristics that are to some degree shared by consumers. Second, brands in our study further relate to consumers' existential concerns either by being blended with consumer personalities in order to connect to them (the brand as a friend, a family member, a next-door neighbor) or by consumers distancing themselves from the brand personalities and estranging them (the brand as a used car salesman, a "bunch of executives"). By focusing on food product packages, we illuminate a very specific, widely used, but little-researched vehicle of marketing communication: brand storytelling. Recent work that has approached packages as mythmakers finds it increasingly challenging for marketers to produce textual stories that link the personalities of products to the personalities of those consuming them, and suggests that "a multiplicity of building material for creating desired consumer myths is what a postmodern consumer arguably needs" (Kniazeva and Belk, 2007). Used as vehicles for storytelling, food packages can exploit both rational and emotional approaches, offering consumers either a "lecture" or "drama" (Randazzo, 2006), myths (Kniazeva and Belk, 2007; Holt, 2004; Thompson, 2004), or meanings (McCracken, 2005) as necessary building blocks for anthropomorphizing their brands. 
The craft of giving birth to brand personalities is in the hands of writers/marketers and in the minds of readers/consumers who individually and sometimes idiosyncratically put a meaningful human face on a brand.

Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin;Ryoo, Eunchung;Jung, Min Kyu;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.185-202
    • /
    • 2012
  • Since the value of information has been realized in the information society, the usage and collection of information have become important. Like an artistic painting, a facial expression contains thousands of pieces of information and can be described in thousands of words. Following this idea, there have recently been a number of attempts to provide customers and companies with an intelligent service that enables the perception of human emotions through one's facial expressions. For example, MIT Media Lab, the leading organization in this research area, has developed a human emotion prediction model and has applied its studies to commercial business. In the academic area, a number of conventional methods such as Multiple Regression Analysis (MRA) or Artificial Neural Networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized because of its low prediction accuracy. This is inevitable since MRA can only explain the linear relationship between the dependent variable and the independent variables. To mitigate the limitations of MRA, some studies like Jung and Kim (2012) have used ANN as the alternative, and they reported that ANN generated more accurate predictions than statistical methods like MRA. However, it has also been criticized due to overfitting and the difficulty of network design (e.g., setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) in order to increase prediction accuracy. SVR is an extended version of the Support Vector Machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that is close (within a threshold ${\varepsilon}$) to the model prediction. 
Using SVR, we tried to build a model that can measure the level of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions when providing appropriate visually stimulating contents, and extracted features from the data. Next, preprocessing steps were taken to choose statistically significant variables. In total, 297 cases were used for the experiment. As the comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the '${\varepsilon}$-insensitive loss function' and the 'grid search' technique to find the optimal values of the parameters C, d, ${\sigma}^2$, and ${\varepsilon}$. In the case of ANN, we adopted a standard three-layer backpropagation network, which has a single hidden layer. The learning rate and momentum rate of the ANN were set to 10%, and we used the sigmoid function as the transfer function of the hidden and output nodes. We performed the experiments repeatedly, varying the number of nodes in the hidden layer over n/2, n, 3n/2, and 2n, where n is the number of input variables. The stopping condition for the ANN was set to 50,000 learning events. We used MAE (Mean Absolute Error) as the measure for performance comparison. From the experiment, we found that SVR achieved the highest prediction accuracy for the hold-out data set compared to MRA and ANN. Regardless of the target variable (the level of arousal, or the level of positive/negative valence), SVR showed the best performance on the hold-out data set. ANN also outperformed MRA; however, it showed considerably lower prediction accuracy than SVR for both target variables. The findings of our research are expected to be useful to researchers or practitioners who are willing to build models for recognizing human emotions.
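The comparison described in this abstract (SVR with an ${\varepsilon}$-insensitive loss tuned by grid search, benchmarked against MRA by MAE) can be sketched with scikit-learn. This is a minimal illustration on synthetic data; the parameter grid, feature count, and nonlinear target are assumptions for demonstration, not the study's actual settings.

```python
# Sketch: SVR with grid search vs. linear regression (MRA stand-in), compared by MAE.
# Synthetic data only; 297 cases mirrors the study's sample size.
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(297, 5))                                      # 5 illustrative features
y = np.sin(X[:, 0]) + 0.3 * X[:, 1] + 0.1 * rng.normal(size=297)   # nonlinear target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Grid search over C, gamma (playing the role of sigma^2), and epsilon.
grid = GridSearchCV(
    SVR(kernel="rbf"),
    {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0], "epsilon": [0.01, 0.1]},
    scoring="neg_mean_absolute_error",
    cv=3,
)
grid.fit(X_tr, y_tr)

mae_svr = mean_absolute_error(y_te, grid.predict(X_te))
mae_mra = mean_absolute_error(y_te, LinearRegression().fit(X_tr, y_tr).predict(X_te))
print(f"SVR MAE={mae_svr:.3f}  MRA MAE={mae_mra:.3f}")
```

On a nonlinear target like this, the tuned SVR's hold-out MAE comes in below the linear baseline, mirroring the pattern the abstract reports.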

An Intelligent Decision Support System for Selecting Promising Technologies for R&D based on Time-series Patent Analysis (R&D 기술 선정을 위한 시계열 특허 분석 기반 지능형 의사결정지원시스템)

  • Lee, Choongseok;Lee, Suk Joo;Choi, Byounggu
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.79-96
    • /
    • 2012
  • As the pace of competition dramatically accelerates and the complexity of change grows, a variety of research has been conducted to improve firms' short-term performance and to enhance firms' long-term survival. In particular, researchers and practitioners have paid attention to identifying promising technologies that lead to competitive advantage for a firm. The discovery of promising technology depends on how a firm evaluates the value of technologies, and thus many evaluation methods have been proposed. Approaches based on experts' opinions have been widely accepted for predicting the value of technologies. Whereas this approach provides in-depth analysis and ensures the validity of analysis results, it is usually cost- and time-ineffective and is limited to qualitative evaluation. Considerable studies have attempted to forecast the value of technology by using patent information to overcome the limitations of the experts' opinion based approach. Patent-based technology evaluation has served as a valuable assessment approach for technological forecasting because a patent contains a full and practical description of technology with a uniform structure. Furthermore, it provides information that is not divulged in any other sources. Although the patent information based approach has contributed to our understanding of the prediction of promising technologies, it has some limitations because prediction has been made based on past patent information, and the interpretations of patent analyses are not consistent. In order to fill this gap, this study proposes a technology forecasting methodology that integrates the patent information approach and an artificial intelligence method. The methodology consists of three modules: evaluation of technology promise, implementation of a technology value prediction model, and recommendation of promising technologies. In the first module, technology promise is evaluated from three different and complementary dimensions: impact, fusion, and diffusion perspectives. 
The impact of technologies refers to their influence on future technology development and improvement, and is also clearly associated with their monetary value. The fusion of technologies denotes the extent to which a technology fuses different technologies, and represents the breadth of search underlying the technology. The fusion of technologies can be calculated based on either technology or patent, thus this study measures two types of fusion index: fusion index per technology and fusion index per patent. Finally, the diffusion of technologies denotes their degree of applicability across scientific and technological fields. In the same vein, diffusion index per technology and diffusion index per patent are considered respectively. In the second module, the technology value prediction model is implemented using an artificial intelligence method. This study uses the values of the five indexes (i.e., impact index, fusion index per technology, fusion index per patent, diffusion index per technology, and diffusion index per patent) at different times (e.g., t-n, t-n-1, t-n-2, ${\cdots}$) as input variables. The output variables are the values of the five indexes at time t, which are used for learning. The learning method adopted in this study is the backpropagation algorithm. In the third module, this study recommends final promising technologies based on the analytic hierarchy process (AHP). AHP provides the relative importance of each index, leading to a final promising index for each technology. The applicability of the proposed methodology is tested by using U.S. patents in international patent class G06F (i.e., electronic digital data processing) from 2000 to 2008. The results show that the mean absolute error value for prediction produced by the proposed methodology is lower than the value produced by multiple regression analysis in the cases of the fusion indexes. However, for the remaining indexes, the mean absolute error value of the proposed methodology is slightly higher than that of multiple regression analysis. 
These unexpected results may be explained, in part, by the small number of patents. Since this study only uses patent data in class G06F, the number of sample patent data is relatively small, leading to incomplete learning for such a complex artificial intelligence structure. In addition, fusion index per technology and impact index are found to be important criteria for predicting promising technology. This study attempts to extend the existing knowledge by proposing a new methodology for predicting technology value that integrates patent information analysis and an artificial intelligence network. It helps managers who want to plan technology development and policy makers who want to implement technology policy by providing a quantitative prediction methodology. In addition, this study could help other researchers by providing a deeper understanding of the complex field of technological forecasting.
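The second module described above, predicting the five index values at time t from their lagged values with a backpropagation network, can be sketched as follows. The series is synthetic and the lag depth and network size are illustrative assumptions, not the paper's configuration.

```python
# Sketch: forecast five patent-based indexes at time t from their lagged values
# with a small backpropagation network. All data here is synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
T, n_idx, lags = 60, 5, 3          # 5 indexes: impact, fusion x2, diffusion x2
series = np.cumsum(rng.normal(size=(T, n_idx)), axis=0)  # smooth index trajectories

# Build (lagged values -> current values) training pairs.
X = np.hstack([series[i:T - lags + i] for i in range(lags)])  # shape (T - lags, lags * n_idx)
y = series[lags:]                                             # shape (T - lags, n_idx)

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=1)
net.fit(X[:-1], y[:-1])            # hold out the last period
pred = net.predict(X[-1:])         # forecast all five indexes for the held-out period
print(pred.shape)
```

In the paper's pipeline, the five predicted index values would then be weighted by AHP-derived importances to rank candidate technologies.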

Clinical Study of Corrosive Esophagitis (부식성 식도염에 관한 임상적 고찰)

  • 이원상;정승규;최홍식;김상기;김광문;홍원표
    • Proceedings of the KOR-BRONCHOESO Conference
    • /
    • 1981.05a
    • /
    • pp.6-7
    • /
    • 1981
  • With the improvement of the living standard and educational level of the people, there is an increasing awareness of the dangers of toxic substances and lethal drugs. In addition, governmental control of these substances has led to a progressive decrease in accidents with corrosive substances. However, there are still sporadic incidences of suicidal attempts with these substances due to the imbalance between the cultural development of society and individual emotion. The problem is explained by the fact that a variety of corrosive agents are easily available to the people due to considerable industrial development and industrialization. Salzen (1920) and Bokey (1924) were pioneers on the subject of corrosive esophagitis and esophageal stenosis treated by the dilatation method. Since then there has been continuing improvement on the subject, with research on various acid (Pitkin, 1935; Carmody, 1936) and alkali (Tree, 1942; Tucker, 1951) corrosive agents, and the use of steroids (Spain, 1950) and antibiotics. Recently, early esophagoscopic examination has been emphasized for the purpose of determining the course of treatment in corrosive esophagitis patients. In order to find an effective treatment for such patients in the future, the authors selected 96 corrosive esophagitis patients who were admitted and treated at the ENT department of Severance Hospital from 1971 to March 1981 for a clinical study. 1. Sex incidence……male : female = 1 : 1.7; age incidence……21-30 years age group, 38 cases (39.6%). 2. Suicidal attempt……80 cases (83.3%); accidental ingestion……16 cases (16.7%). Among those who ingested the substance accidentally, children below ten years were most numerous, with nine patients. 3. Incidence: acetic acid……41 cases (41.8%), lye……20 cases (20.4%), HCl……17 cases (17.3%). There was a trend of rapid rise in the incidence of acidic corrosive agents, especially acetic acid. 4. Lavage……57 cases (81.1%). 5. 
Nasogastric tube insertion……80 cases (83.3%); no insertion……16 cases (16.7%): late admittance……10 cases, failure……4 cases, other……2 cases. 6. Tracheostomy……17 cases (17.7%): respiratory problems (75.0%), mental problems (25.0%). 7. Early endoscopy……11 cases (11.5%), within 48 hours……6 cases (54.4%). Endoscopic results: moderate mucosal ulceration……8 cases (72.7%), mild mucosal erythema……2 cases (18.2%), severe mucosal ulceration……1 case (9.1%); among those who took early endoscopic examination, 6 patients were confirmed to have mild lesions and so were discharged after endoscopy. The average period of admittance in the cases of nasogastric tube insertion was 4 weeks. 8. Nasogastric tube indwelling period……average 11.6 days; recently our treatment trend for corrosive esophagitis patients with nasogastric tube indwelling is determined according to the findings of early endoscopy. 9. Patients in whom steroid administration was withheld or delayed……47 cases (48.9%); causes: kind of drug (acid, unknown)……12 cases, late admittance……11 cases, mild case……9 cases, contraindication……7 cases, other……8 cases. 10. Management of stricture: bougienage……7 cases, feeding gastrostomy……6 cases, other surgical management……4 cases. 11. Complications……27 cases (28.1%): cardio-pulmonary……10 cases, visceral rupture……8 cases, massive bleeding……6 cases, renal failure……4 cases, other……2 cases; expired or moribund discharge……8 cases. 12. Number of follow-up cases……23 cases; esophageal stricture……13 cases, and site of stricture: hypopharynx……1 case, mid third of esophagus……5 cases, upper third of esophagus……3 cases, lower third of esophagus……3 cases, pylorus……1 case, diffuse esophageal stenosis……1 case.


Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.69-94
    • /
    • 2017
  • Recently, the increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration rate of smart devices are producing a large amount of data. Owing to this phenomenon, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis have been continuously increasing. This means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each demander of analysis. However, the increase of interest in big data analysis has spurred computer programming education and the development of many programs for data analysis. Accordingly, the entry barriers to big data analysis are gradually lowering and data analysis technology is spreading out. As a result, big data analysis is expected to be performed by the demanders of analysis themselves. Along with this, interest in various types of unstructured data is continually increasing. In particular, a lot of attention is focused on using text data. The emergence of new platforms and techniques using the web brings about the mass production of text data and active attempts to analyze it. Furthermore, the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques utilized for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large number of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster. It is evaluated as a very useful technique in that it reflects the semantic elements of the document. 
Traditional topic modeling is based on the distribution of key terms across the entire document collection. Thus, it is essential to analyze the entire collection at once to identify the topic of each document. This condition makes the analysis process time-consuming when topic modeling is applied to a large number of documents. In addition, it has a scalability problem: processing time increases exponentially with the number of analysis objects. This problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide and conquer approach can be applied to topic modeling. This means dividing a large number of documents into sub-units and deriving topics through the repetition of topic modeling on each unit. This method can be used for topic modeling on a large number of documents with limited system resources, and can improve the processing speed of topic modeling. It can also significantly reduce analysis time and cost through the ability to analyze documents in each location or place without combining the analysis object documents. However, despite many advantages, this method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear. This means that in each document, local topics can be identified, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology should be established. That is to say, assuming that the global topic is the ideal answer, the difference of a local topic from a global topic needs to be measured. Because of these difficulties, this method has not been studied sufficiently compared with other studies dealing with topic modeling. In this paper, we propose a topic modeling approach to solve the above two problems. 
First, we divide the entire document cluster (the global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by detecting whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirm that it can provide results similar to topic modeling over the entire collection, and we also propose a reasonable method for comparing the results of the two approaches.
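The divide-and-conquer workflow described in this abstract can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: a toy frequency-based `toy_topic` stands in for a real topic model such as LDA, and the delegate-selection rule (the document most overlapping its local topic terms) is an assumption for illustration only.

```python
from collections import Counter

STOPWORDS = {"the", "a", "of", "and", "to", "in", "is"}

def toy_topic(docs, n_terms=3):
    """Toy 'topic': the most frequent content words in a document set.
    (A stand-in for a real topic model such as LDA.)"""
    counts = Counter(w for d in docs for w in d.lower().split()
                     if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(n_terms)]

def delegate(docs, topic):
    """Pick the document that best represents the local topic (assumed rule)."""
    return max(docs, key=lambda d: sum(d.lower().split().count(t) for t in topic))

def divide_and_conquer(docs, n_units=2):
    # 1. Divide the global set into local sets (sub-clusters).
    size = (len(docs) + n_units - 1) // n_units
    local_sets = [docs[i:i + size] for i in range(0, len(docs), size)]
    # 2. Derive a local topic and a delegate document per local set.
    local_topics = [toy_topic(ls) for ls in local_sets]
    rgs = [delegate(ls, t) for ls, t in zip(local_sets, local_topics)]
    # 3. Model the reduced global set (RGS) and map local topics onto it
    #    by term overlap, so local results can be related to global topics.
    rgs_topic = toy_topic(rgs)
    mapping = [len(set(t) & set(rgs_topic)) for t in local_topics]
    return local_topics, rgs_topic, mapping
```

In this sketch the RGS is modeled instead of the full collection, which is the source of the speed-up; the `mapping` list records how strongly each local topic overlaps the RGS topic.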

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings with statistical and machine learning techniques has been a popular research topic. Statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. One major drawback, however, is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables to the predictor variables. These strict assumptions have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). SVM in particular is recognized as a new and promising method for classification and regression analysis. SVM learns a separating hyperplane that maximizes the margin between two categories. It is simple enough to be analyzed mathematically, yet achieves high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the SVM solution may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have shown that SVM can be successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance.
First, SVM was originally proposed for binary classification. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not perform as well on multi-class problems as SVM does on binary problems. Second, approximation algorithms (e.g., decomposition methods and the sequential minimal optimization algorithm) can be used to reduce computation time in multi-class settings, but they can deteriorate classification performance. Third, multi-class prediction suffers from the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another. Such data sets often produce a default classifier with a skewed decision boundary and thus reduced classification accuracy. SVM ensemble learning is one machine learning approach for coping with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms, and AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations: observations incorrectly predicted by previous classifiers are chosen more often than those correctly predicted. Boosting thus attempts to produce new classifiers that better predict the examples on which the current ensemble performs poorly, which reinforces the training of misclassified observations in the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multi-class prediction problem.
Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process considers the geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross-validation is performed three times with different random seeds to ensure that the comparison among the three classifiers does not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine; thus the cross-validated folds are tested independently for each algorithm. Through these steps, we obtained results for each classifier on 30 experiments. In terms of arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) outperforms both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also outperforms AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of the classifiers over the 30 folds differs significantly; the results indicate that MGM-Boost differs significantly from the AdaBoost and SVM classifiers at the 1% level. These results suggest that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
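The role of the geometric mean in MGM-Boost can be illustrated with a small sketch. The helper names below (`per_class_recall`, `geometric_mean_acc`) are illustrative assumptions, not the paper's code; the point is that a geometric mean of per-class accuracies collapses to zero whenever any single class is entirely misclassified, whereas the arithmetic mean can remain high, which is why a geometric mean-based objective forces a booster to attend to minority classes.

```python
from math import prod

def per_class_recall(y_true, y_pred):
    """Accuracy computed separately for each class (class-wise recall)."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return recalls

def arithmetic_mean_acc(recalls):
    return sum(recalls) / len(recalls)

def geometric_mean_acc(recalls):
    # Zero if any class is never predicted correctly.
    return prod(recalls) ** (1 / len(recalls))

# A classifier that ignores the minority class "B" entirely:
y_true = ["A"] * 8 + ["B"] * 2
y_pred = ["A"] * 10
recalls = per_class_recall(y_true, y_pred)          # [1.0, 0.0]
```

Here the arithmetic mean of recalls is 0.5 despite class "B" never being predicted, while the geometric mean is 0.0, exposing the imbalance.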

Surgical Treatment for Isolated Aortic Endocarditis: a Comparison with Isolated Mitral Endocarditis (대동맥 판막만을 침범한 감염성 심내막염의 수술적 치료: 승모판막만을 침범한 경우와 비교 연구)

  • Hong, Seong-Beom;Park, Jeong-Min;Lee, Kyo-Seon;Ryu, Sang-Woo;Yun, Ju-Sik;CheKar, Jay-Key;Yun, Chi-Hyeong;Kim, Sang-Hyung;Ahn, Byoung-Hee
    • Journal of Chest Surgery
    • /
    • v.40 no.9
    • /
    • pp.600-606
    • /
    • 2007
  • Background: Infective endocarditis shows high surgical mortality and morbidity rates, especially for aortic endocarditis. This study investigates the clinical characteristics and operative results of isolated aortic endocarditis. Material and Method: From July 1990 to May 2005, 25 patients with isolated aortic endocarditis (Group I; male : female = 18 : 7, mean age 43.2±18.6 years) and 23 patients with isolated mitral endocarditis (Group II; male : female = 10 : 13, mean age 43.2±17.1 years) underwent surgical treatment in our hospital. All Group I patients had native valve endocarditis, and 7 had a bicuspid aortic valve. In Group II, two patients had prosthetic valve endocarditis and one patient developed mitral endocarditis after a mitral valvuloplasty. Positive blood cultures were obtained from 11 (44.0%) patients in Group I and 10 (43.3%) patients in Group II. The preoperative left ventricular ejection fraction was 60.8±8.7% and 62.1±8.1% for the two groups, respectively (p=0.945). In Group I there was moderate to severe aortic regurgitation in 18 patients, and vegetations were detected in 17. In Group II there was moderate to severe mitral regurgitation in 19 patients, and vegetations were found in 18. One patient had a ventricular septal defect, and another underwent a Maze operation with microwaves due to atrial fibrillation. We performed echocardiography before discharge and each year during follow-up. The mean follow-up period was 37.2±23.5 (range 9~123) months. Result: Postoperative complications included three cases of low cardiac output in Group I, and one case each of re-operation for bleeding and of low cardiac output in Group II. One patient in Group I died from an intracranial hemorrhage on the first day after surgery, but there were no early deaths in Group II.
The 1-, 3-, and 5-year valve-related event-free rates were 92.0%, 88.0%, and 88.0% for Group I and 91.3%, 76.0%, and 76.0% for Group II, respectively. The 1-, 3-, and 5-year survival rates were 96.0%, 96.0%, and 96.0% for Group I and 100%, 84.9%, and 84.9% for Group II, respectively. Conclusion: Acceptable surgical and mid-term clinical results were achieved for aortic endocarditis.

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets the interests and needs of users from the overflowing volume of content is becoming ever more important. In this flood of information, attempts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft are focusing on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful, because new information is constantly generated and the earlier the information is obtained, the more valuable it is. Automatic knowledge extraction can be effective in such areas, where the flow of information is vast and new information continually emerges. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and it is difficult to extract high-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of the knowledge grow and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome these limits and improve the semantic performance of searches for stock-related information, this study extracts knowledge entities using a neural tensor network and evaluates the results. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items.
Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. This study therefore has three significances. First, it presents a practical and simple automatic knowledge extraction method. Second, it shows the possibility of performance evaluation through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study, experts' reports about 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. Of the 5,600 reports in total, 3,074 (about 55%) are designated as the training set, and the remaining 45% as the test set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. A neural tensor network is then used to train one score function per stock. When a new entity from the test set appears, its score is computed with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, we confirm its predictive power and whether the score functions are well constructed by calculating the hit ratio over all reports in the test set.
As a result of the empirical study, the presented model shows 69.3% hit accuracy on the test set of 2,526 reports. This hit ratio is meaningfully high despite several constraints on the research. Looking at prediction performance by stock, only three stocks (LG ELECTRONICS, KiaMtr, and Mando) show performance far below average; this may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of entities, needed to search for related information in accordance with the user's investment intention. Graph data is generated using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain and there are things to complement: notably, the especially poor performance on a few stocks shows the need for further research. Finally, the empirical study confirms that the learning method presented here can be used to semantically match new text information with the related stocks.
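The scoring step described in this abstract (one-hot entity vectors, one score function per stock, argmax prediction) might be sketched roughly as follows. All names, dimensions, and the random weights here are illustrative assumptions; the paper's trained neural tensor network and its KKMA-extracted vocabularies are not reproduced, only the shape of the computation, a bilinear-tensor score of the form u . tanh(eᵀW_k e + V e + b).

```python
import math
import random

def one_hot(entity, vocab):
    """One-hot encode an entity against a stock's top-entity vocabulary."""
    return [1.0 if w == entity else 0.0 for w in vocab]

class NTNScore:
    """Minimal neural-tensor-style score function (untrained, for shape only)."""
    def __init__(self, dim, slices, seed):
        rnd = random.Random(seed)  # fixed weights stand in for trained ones
        self.W = [[[rnd.uniform(-1, 1) for _ in range(dim)]
                   for _ in range(dim)] for _ in range(slices)]
        self.V = [[rnd.uniform(-1, 1) for _ in range(dim)] for _ in range(slices)]
        self.b = [rnd.uniform(-1, 1) for _ in range(slices)]
        self.u = [rnd.uniform(-1, 1) for _ in range(slices)]

    def score(self, e):
        h = []
        for k in range(len(self.u)):
            bilinear = sum(e[i] * self.W[k][i][j] * e[j]
                           for i in range(len(e)) for j in range(len(e)))
            linear = sum(self.V[k][i] * e[i] for i in range(len(e)))
            h.append(math.tanh(bilinear + linear + self.b[k]))
        return sum(u_k * h_k for u_k, h_k in zip(self.u, h))

def predict_stock(entity, vocabs, models):
    """Score the entity with every stock's function; pick the highest scorer."""
    scores = {s: models[s].score(one_hot(entity, vocabs[s])) for s in models}
    return max(scores, key=scores.get)
```

The hit ratio reported in the abstract would then be the fraction of test entities for which `predict_stock` returns the stock the source report actually discusses.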