
Effect of Nasal Continuous Positive Airway Pressure after Early Surfactant Therapy in Moderate Respiratory Distress Syndrome (중등도 신생아 호흡 곤란 증후군에서 폐 표면 활성제 조기 투여 후 Nasal CPAP의 치료 효과)

  • Kim, Eun Ji; Kim, Hae Sook; Hur, Man Hoe; Lee, Sang Geel
    • Clinical and Experimental Pediatrics / v.45 no.10 / pp.1204-1212 / 2002
  • Purpose: Early surfactant therapy combined with gentle ventilation, high-frequency ventilation, or aggressive weaning from mechanical ventilation is a mainstay of treatment for respiratory distress syndrome (RDS). We studied whether noninvasive nasal continuous positive airway pressure (CPAP), rather than mechanical ventilation via invasive intubation, is a feasible option after early surfactant therapy. Methods: The study group consisted of 14 infants who were diagnosed with moderate respiratory distress syndrome and received early surfactant therapy with nasal CPAP at a PEEP of 5-6 cmH2O within two hours after birth in the Fatima neonatal intensive care unit from January 1999 to August 2001. The control group consisted of 15 infants who were diagnosed with the disease during the same period and could be weaned from mechanical ventilation within five days after birth. Results: The characteristics, severity of clinical symptoms, and laboratory findings of the two groups at birth showed no significant difference, nor did the interim analysis of laboratory data. Of the 14 infants in the study group who received nasal CPAP after early surfactant therapy, only two failed weaning with this therapy. In the responding cases, the duration of CPAP was five days and the mean airway pressure was 5.4 ± 0.5 cmH2O. Two infants had abdominal distension as a complication of CPAP. Final complications and outcomes in the two groups showed no significant difference (P>0.05). Conclusion: The clinical courses of the two groups showed no significant difference. We therefore suggest that early surfactant therapy with noninvasive nasal CPAP is a simple and safe alternative to aggressive weaning after invasive mechanical ventilation in moderate respiratory distress syndrome.

THE EFFECTS OF THE PLATELET-DERIVED GROWTH FACTOR-BB ON THE PERIODONTAL TISSUE REGENERATION OF THE FURCATION INVOLVEMENT OF DOGS (혈소판유래성장인자-BB가 성견 치근이개부병변의 조직재생에 미치는 효과)

  • Cho, Moo-Hyun; Park, Kwang-Beom; Park, Joon-Bong
    • Journal of Periodontal and Implant Science / v.23 no.3 / pp.535-563 / 1993
  • New techniques for regenerating destroyed periodontal tissue have been studied for many years. Currently accepted methods of promoting periodontal regeneration are based on removal of diseased soft tissue, root treatment, guided tissue regeneration, graft materials, and biological mediators. Platelet-derived growth factor (PDGF) is a polypeptide growth factor that has been reported to act as a biological mediator regulating wound-healing activities, including cell proliferation, migration, and metabolism. The purpose of this study was to evaluate the possibility of using PDGF as a regeneration-promoting agent for furcation involvement defects. Eight adult mongrel dogs were used in this experiment. The dogs were anesthetized with pentobarbital sodium (25-30 mg/kg of body weight, Tokyo Chemical Co., Japan) and conventional periodontal prophylaxis was performed with an ultrasonic scaler. After intrasulcular and crestal incisions, a mucoperiosteal flap was elevated. Following decortication with a 1/2 high-speed round bur, a degree III furcation defect was made on the mandibular second (P2) and fourth (P4) premolars. As basic root-surface treatment, fully saturated citric acid was applied to the exposed root surface for 3 minutes. On the right P4, 20 µg of human recombinant PDGF-BB dissolved in acetic acid was applied with a polypropylene autopipette. On the left P2 and right P2, PDGF-BB was applied after insertion of β-tricalcium phosphate (TCP) and collagen (Collatape), respectively. The left mandibular P4 was used as a control. Systemic antibiotics (penicillin-G benzathine and penicillin-G procaine, 1 ml per 10-25 lbs of body weight) were administered intramuscularly for 2 weeks after surgery. Irrigation with 0.1% chlorhexidine gluconate around the operated sites was performed throughout the experimental period except the day immediately after surgery. Soft diets were fed throughout the experimental period. After 2, 4, 8, and 12 weeks, the animals were sacrificed by perfusion, and tissue blocks including the teeth were excised and prepared for light microscopy with H-E staining. At 2 weeks after surgery, rapid osteogenesis was observed in the defect area of the PDGF-only treated group, and an early trabeculation pattern had formed with new osteoid tissue produced by activated osteoblasts. Bone formation was almost complete to the fornix of the furcation by 8 weeks after surgery. New cementum formation was observed from 2 weeks after surgery, and its thickness increased until 8 weeks, with typical Sharpey's fibers re-embedded into the new bone and cementum. In both the PDGF-BB with TCP group and the PDGF-BB with collagen group, the regeneration process, including new bone and new cementum formation, appeared delayed compared with the PDGF-only group, especially in the early weeks; it may be that the migration of actively proliferating cells was hindered by the graft materials. In conclusion, platelet-derived growth factor can promote rapid osteogenesis during the early stage of periodontal tissue regeneration.

Measuring Consumer-Brand Relationship Quality (소비자-브랜드 관계 품질 측정에 관한 연구)

  • Kang, Myung-Soo; Kim, Byoung-Jai; Shin, Jong-Chil
    • Journal of Global Scholars of Marketing Science / v.17 no.2 / pp.111-131 / 2007
  • As a brand becomes a core asset in creating a corporation's value, brand marketing has become one of the core strategies that corporations pursue. Recently, customer relationship management has come to center on the brand through which goods are possessed and consumed, and brand-centered management has developed accordingly. The main reason for the increased interest in the relationship between the brand and the consumer is the acquisition of individual consumers and the development of relationships with them. By developing such relationships, a corporation can establish long-term relationships, which become a competitive advantage; all of these processes become the corporation's strategic assets. The importance of brands and the growing interest in them have also become a big issue academically. Brand equity, brand extension, brand identity, brand relationship, and brand community are results derived from this interest in brands. More specifically, in marketing, the study of brands has led to the study of factors related to building powerful brands and of the brand-building process. Recently, studies have concentrated primarily on the consumer-brand relationship, because brand loyalty cannot explain the dynamic quality aspects of loyalty, the consumer-brand relationship building process, and especially the interactions between brands and consumers. In studies of the consumer-brand relationship, a brand is not limited to an object of possession or consumption but is conceptualized as a partner. Most past studies concentrated on the results of qualitative analyses of the consumer-brand relationship to show the depth and width of its performance, and studies in Korea have done the same. Recently, studies of the consumer-brand relationship have started to concentrate on quantitative rather than qualitative analysis, or have gone further with quantitative analysis to identify factors affecting the consumer-brand relationship. These new quantitative approaches show the possibility of using the results as a new way of viewing the consumer-brand relationship and of applying these new concepts to marketing. Quantitative studies of the consumer-brand relationship already exist, but none of them provide theoretical proof for the measurement of its sub-dimensions. In other words, most studies add up or average the sub-dimensions of the consumer-brand relationship; however, such procedures presuppose that the sub-dimensions form an identical construct. Most past studies do not meet the condition that the sub-dimensions constitute a one-dimensional construct, which leads us to question their validity and limits. The main purpose of this paper is to overcome these limits by drawing on previous studies that treat sub-dimensions as a one-dimensional construct (Narver & Slater, 1990; Cronin & Taylor, 1992; Chang & Chen, 1998). In this study, two arbitrary groups were formed to evaluate the reliability of the measurements, and reliability analyses were performed on each group. For convergent validity, correlations, Cronbach's alpha, and a one-factor-solution exploratory factor analysis were used. For discriminant validity, the correlation of the consumer-brand relationship was compared with that of involvement, a concept similar to the consumer-brand relationship. We also tested dependent correlations following Cohen and Cohen (1975, p. 35), and the results showed that involvement is a construct distinct from the six sub-dimensions of the consumer-brand relationship. Through these results, we concluded that the sub-dimensions of the consumer-brand relationship can be viewed as a one-dimensional construct, and that this one-dimensional construct can be measured with reliability and validity. The result of this research is theoretically meaningful in that it treats the consumer-brand relationship as a one-dimensional construct and provides a basis for the methodologies previously performed. This research also opens the possibility of new research on the consumer-brand relationship, in that it establishes that a one-dimensional construct of the consumer-brand relationship can be operationalized. Previous research classified the consumer-brand relationship into several types on the basis of its components, and a number of studies were performed with priority given to these types. However, as this research makes it possible to operationalize a one-dimensional construct, it is expected that various studies will put the level or strength of the consumer-brand relationship to practical use as a construct, rather than focusing on separate types of consumer-brand relationship. Additionally, we provide a theoretical basis for operationalizing the consumer-brand relationship as a one-dimensional construct, and we anticipate studies that put this construct to practical use as a dependent variable, parameter, mediator, and so on.
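The convergent-validity step above relies on Cronbach's alpha over the sub-dimension scores. As a minimal sketch of that computation (the respondent data and six-item scale below are hypothetical, not the paper's survey), alpha can be derived directly from the item variances and the variance of the summed scale:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items (sub-dimension scores)
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical example: 5 respondents rating 6 sub-dimensions of brand relationship
scores = np.array([
    [5, 4, 5, 4, 5, 4],
    [3, 3, 4, 3, 3, 3],
    [4, 4, 4, 5, 4, 4],
    [2, 3, 2, 2, 3, 2],
    [5, 5, 4, 5, 5, 5],
])
print(f"alpha = {cronbach_alpha(scores):.3f}")
```

Values above roughly 0.7 are conventionally taken to support treating the items as a single scale, which is the precondition the paper tests before summing the sub-dimensions.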

  • PDF

Clinical Indices Predicting Resorption of Pleural Effusion in Tuberculous Pleurisy (결핵성 늑막염에서 삼출액의 흡수에 영향을 미치는 임상적 지표)

  • Lee, Joe-Ho; Chung, Hee-Soon; Lee, Jeong-Sang; Cho, Sang-Rok; Yoon, Hae-Kyung; Song, Chee-Sung
    • Tuberculosis and Respiratory Diseases / v.42 no.5 / pp.660-668 / 1995
  • Background: Tuberculous pleuritis is said to respond well to anti-tuberculous drugs in general, so no aggressive therapeutic management beyond diagnostic thoracentesis is usually considered necessary. In clinical practice, however, we often see patients who later need decortication because of dyspnea caused by pleural loculation or thickening despite several months of anti-tuberculous drug therapy. We therefore wanted to identify the clinical differences between a group who required decortication for complications of tuberculous pleuritis despite anti-tuberculous drugs and a group who improved after 9 months of anti-tuberculous drugs alone. Methods: We reviewed 20 tuberculous pleuritis patients (group 1) who underwent decortication due to dyspnea caused by pleural loculation or severe pleural thickening despite anti-tuberculous drug therapy for 9 or more months, and 20 other tuberculous pleuritis patients (group 2) who improved with anti-tuberculous drugs only and had a similar degree of initial pleural effusion and similar age and sex distribution. We compared between the two groups the duration of symptoms before anti-tuberculous treatment, pleural fluid biochemistry (glucose, LDH, protein), and pleural fluid cell count with WBC differential count; we also examined whether preoperative and postoperative PFT values differed in the patients who underwent decortication. Results: 1) Group 1 patients had a lower glucose level (63.3 ± 30.8 mg/dl) than group 2 (98.5 ± 34.2 mg/dl, p<0.05), a higher LDH level (776.3 ± 266.0 IU/L) than group 2 (376.3 ± 123.1 IU/L, p<0.05), and a longer duration of symptoms before treatment (2.0 ± 1.7 months) than group 2 (1.1 ± 1.2 months, p<0.05). 2) In group 1, FVC increased from a preoperative 2.55 ± 0.80 L to a postoperative 2.99 ± 0.78 L (p<0.05), and FEV1 increased from a preoperative 2.19 ± 0.70 L/sec to a postoperative 2.50 ± 0.69 L/sec (p<0.05). 3) There was no difference between groups 1 and 2 in pleural fluid protein level (5.05 ± 1.01 g/dl vs. 5.15 ± 0.77 g/dl, p>0.05) or WBC differential count. Conclusion: In tuberculous pleuritis, a relatively low pleural fluid glucose level, a high LDH level, or a long duration of symptoms before treatment probably indicates a risk of complications. A prospective study should be performed to confirm this.

Clinical Study of Corrosive Esophagitis (부식성 식도염에 관한 임상적 고찰)

  • 이원상; 정승규; 최홍식; 김상기; 김광문; 홍원표
    • Proceedings of the KOR-BRONCHOESO Conference / 1981.05a / pp.6-7 / 1981
  • With the improvement in living standards and the educational level of the people, there is increasing awareness of the dangers of toxic substances and lethal drugs. Together with governmental control of these substances, this has led to a progressive decrease in accidents with corrosive substances. However, there are still sporadic suicide attempts with such substances, owing to the imbalance between society's cultural development and individual emotion; the problem is compounded by the variety of corrosive agents easily available to the public as a result of considerable industrial development and industrialization. Salzen (1920) and Bokey (1924) were pioneers on the subject of corrosive esophagitis and esophageal stenosis treated by dilatation. Since then there has been continuing progress, with research on various acid (Pitkin, 1935; Carmody, 1936) and alkali (Tree, 1942; Tucker, 1951) corrosive agents and on the use of steroids (Spain, 1950) and antibiotics. Recently, early esophagoscopic examination has been emphasized for the purpose of determining the course of treatment in corrosive esophagitis patients. In order to find effective treatment for such patients in the future, the authors selected 96 corrosive esophagitis patients who were admitted and treated at the ENT department of Severance Hospital from 1971 to March 1981 for a clinical study. 1. Sex incidence: male : female = 1 : 1.7; age incidence: the 21-30-year age group was largest with 38 cases (39.6%). 2. Suicide attempts: 80 cases (83.3%); accidental ingestion: 16 cases (16.7%). Among those who ingested the substance accidentally, children under ten years were most numerous, with nine patients. 3. Incidence by agent: acetic acid, 41 cases (41.8%); lye, 20 cases (20.4%); HCl, 17 cases (17.3%). There was a trend of rapid rise in the incidence of acidic corrosive agents, especially acetic acid. 4. Lavage: 57 cases (81.1%). 5. Nasogastric tube insertion: 80 cases (83.3%); no insertion: 16 cases (16.7%), comprising late admittance, 10 cases; failure, 4 cases; other, 2 cases. 6. Tracheostomy: 17 cases (17.7%), for respiratory problems (75.0%) and mental problems (25.0%). 7. Early endoscopy: 11 cases (11.5%), of which within 48 hours, 6 cases (54.4%). Endoscopic results: moderate mucosal ulceration, 8 cases (72.7%); mild mucosal erythema, 2 cases (18.2%); severe mucosal ulceration, 1 case (9.1%). Among those who underwent early endoscopic examination, 6 patients were confirmed to have mild lesions and were discharged after endoscopy. The average period of admittance in the cases of nasogastric tube insertion was 4 weeks. 8. Nasogastric tube indwelling period: average 11.6 days; recently, our decision on nasogastric tube indwelling in corrosive esophagitis patients has been made according to the findings of early endoscopy. 9. Patients in whom steroids were withheld or delayed: 47 cases (48.9%); causes: kind of drug (acid, unknown), 12 cases; late admittance, 11 cases; mild case, 9 cases; contraindication, 7 cases; other, 8 cases. 10. Management of stricture: bougienage, 7 cases; feeding gastrostomy, 6 cases; other surgical management, 4 cases. 11. Complications: 27 cases (28.1%): cardio-pulmonary, 10 cases; visceral rupture, 8 cases; massive bleeding, 6 cases; renal failure, 4 cases; other, 2 cases; expired or moribund discharge, 8 cases. 12. Follow-up: 23 cases; esophageal stricture, 13 cases; site of stricture: hypopharynx, 1 case; middle third of esophagus, 5 cases; upper third, 3 cases; lower third, 3 cases; pylorus, 1 case; diffuse esophageal stenosis, 1 case.

The Framework of Research Network and Performance Evaluation on Personal Information Security: Social Network Analysis Perspective (개인정보보호 분야의 연구자 네트워크와 성과 평가 프레임워크: 소셜 네트워크 분석을 중심으로)

  • Kim, Minsu; Choi, Jaewon; Kim, Hyun Jin
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.177-193 / 2014
  • Over the past decade, there has been a rapid diffusion of electronic commerce and a rising number of interconnected networks, resulting in an escalation of security threats and privacy concerns. Electronic commerce has a built-in trade-off between the necessity of providing at least some personal information to consummate an online transaction and the risk of negative consequences from providing such information. More recently, frequent disclosures of private information have raised concerns about privacy and its impacts, motivating researchers in various fields to explore information privacy issues. Accordingly, the need for information privacy policies and technologies for collecting and storing data has increased, as has information privacy research in fields such as medicine, computer science, business, and statistics. The occurrence of various information security incidents has made finding experts in the information security field an important issue, and objective measures for finding such experts are required, as current practice is rather subjective. Based on social network analysis, this paper proposes a framework to evaluate the process of finding experts in the information security field. We collected data from the National Discovery for Science Leaders (NDSL) database, initially gathering about 2,000 papers covering the period between 2005 and 2013. Outliers and irrelevant papers were dropped, leaving 784 papers to test the suggested hypotheses. The co-authorship network data on co-author relationships, publishers, affiliations, and so on were analyzed using social network measures including centrality and structural holes. The results of our model estimation are as follows. With the exception of Hypothesis 3, which deals with the relationship between eigenvector centrality and performance, all of our hypotheses were supported. In line with our hypothesis, degree centrality (H1) had a positive influence on researchers' publishing performance (p<0.001), indicating that the greater the degree of cooperation, the better the researchers' publishing performance. In addition, closeness centrality (H2) was also positively associated with publishing performance (p<0.001), suggesting that the more efficient a researcher's information acquisition, the better the publishing performance. This paper identified differences in publishing performance among researchers. The analysis can be used to identify core experts and evaluate their performance in the information privacy research field, and the co-authorship network for information privacy can aid in understanding the deep relationships among researchers. By extracting characteristics of publishers and affiliations, this paper also suggests how social network measures can be used to find experts in the information privacy field. Social concerns about securing the objectivity of experts have increased, because experts in the information privacy field frequently participate in political consultation and in business education support and evaluation. In terms of practical implications, this research suggests an objective framework for identifying experts in the information privacy field and is useful for those in charge of managing research human resources. This study has some limitations, which provide opportunities and suggestions for future research. The small sample size makes it difficult to generalize differences in information diffusion according to media and proximity; further studies could consider a larger sample and greater media diversity, and explore in more detail differences in information diffusion according to media type and information proximity. Moreover, previous network research has commonly assumed a causal relationship between independent and dependent variables (Kadushin, 2012); in this study, degree centrality as an independent variable might have a causal relationship with performance as the dependent variable. However, in network analysis research, network indices can only be computed after the network relationship has been created. An annual analysis could help mitigate this limitation.
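The centrality measures behind H1-H3 and the structural-hole measure are standard graph computations. As a small sketch (the toy co-authorship graph is invented, and Burt's constraint stands in here for whichever structural-hole index the paper used), networkx computes all of them directly:

```python
import networkx as nx

# Hypothetical co-authorship network: nodes are researchers,
# an edge means the two researchers co-authored at least one paper.
G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("A", "C"), ("B", "C"),
    ("C", "D"), ("D", "E"), ("E", "F"),
])

degree = nx.degree_centrality(G)             # H1: extent of direct cooperation
closeness = nx.closeness_centrality(G)       # H2: efficiency of information acquisition
eigenvector = nx.eigenvector_centrality(G)   # H3: ties to well-connected researchers
constraint = nx.constraint(G)                # structural-hole measure (Burt's constraint)

for node in G:
    print(node, round(degree[node], 2), round(closeness[node], 2),
          round(eigenvector[node], 2), round(constraint[node], 2))
```

In the paper's setting, each centrality score would then enter a regression with publishing performance as the dependent variable.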

Growth Efficiency, Carcass Quality Characteristics and Profitability of 'High'-Market Weight Pigs ('고체중' 출하돈의 성장효율, 도체 품질 특성 및 수익성)

  • Park, M.J.; Ha, D.M.; Shin, H.W.; Lee, S.H.; Kim, W.K.; Ha, S.H.; Yang, H.S.; Jeong, J.Y.; Joo, S.T.; Lee, C.Y.
    • Journal of Animal Science and Technology / v.49 no.4 / pp.459-470 / 2007
  • Domestically, finishing pigs are marketed at 110 kg on average. However, it is thought to be feasible to increase the market weight to 120 kg or greater without decreasing carcass quality, because most domestic pigs for pork production have descended from lean-type lineages. The present study was undertaken to investigate the growth efficiency and profitability of 'high'-market-weight pigs and the physicochemical characteristics and consumer acceptability of the high-weight carcass. A total of 96 (Yorkshire × Landrace) × Duroc crossbred gilts and barrows were fed a finisher diet ad libitum in 16 pens beginning from 90 kg BW, after which the animals were slaughtered at 110 kg (control) or 'high' market weight (135 and 125 kg in gilts and barrows, respectively) and their carcasses were analyzed. Average daily gain and gain:feed did not differ between the two sex or market weight groups, whereas average daily feed intake was greater in the barrow and high-market-weight groups than in the gilt and 110-kg market weight groups, respectively (P<0.01). Backfat thicknesses of the high-market-weight gilts and barrows corrected for 135- and 125-kg live weight, which were 23.7 and 22.5 mm, respectively, were greater (P<0.01) than those of their corresponding 110-kg counterparts (19.7 and 21.1 mm). Percentages of the trimmed primal cuts per total trimmed lean (w/w), except for that of loin, differed statistically (P<0.05) between the two sex or market weight groups, but the numerical differences were rather small. Crude protein content of the loin was greater in the high versus 110-kg market group (P<0.01), but crude fat and moisture contents and other physicochemical characteristics, including the color of this primal cut, did not differ between the two sexes or market weights. Aroma, marbling, and overall acceptability scores were greater in the high versus 110-kg market weight group in sensory evaluation of fresh loin (P<0.01); however, overall acceptability of cooked loin, belly, and ham did not differ between the two market weight groups. Marginal profits of the 135- and 125-kg high-market-weight gilt and barrow relative to their corresponding 110-kg counterparts were approximately -35,000 and 3,500 won per head under the current carcass grading standard and price. However, had it not been for the upper weight limits for the A- and B-grade carcasses, the marginal profits of the high-market-weight gilt and barrow would have amounted to 22,000 and 11,000 won per head, respectively. In summary, 120-125-kg market pigs are likely to meet consumer preferences better than 110-kg ones and to bring a profit equal to or slightly greater than the latter, even under the current carcass grading standard. Moreover, if the upper weight limits of the A- and B-grade carcasses were removed or increased to accommodate the high-weight carcass, the optimum market weights for the gilt and barrow would fall upon the target weights of the present study, i.e., 135 and 125 kg, respectively.

A Thermal Time-Driven Dormancy Index as a Complementary Criterion for Grape Vine Freeze Risk Evaluation (포도 동해위험 판정기준으로서 온도시간 기반의 휴면심도 이용)

  • Kwon, Eun-Young; Jung, Jea-Eun; Chung, U-Ran; Lee, Seung-Jong; Song, Gi-Cheol; Choi, Dong-Geun; Yun, Jin-I.
    • Korean Journal of Agricultural and Forest Meteorology / v.8 no.1 / pp.1-9 / 2006
  • Despite the warmer winters recently observed in Korea, more freeze injuries and associated economic losses are reported in the fruit industry than ever before. Existing freeze-frost forecasting systems employ only daily minimum temperature for judging potential damage to dormant flowering buds and cannot accommodate biological responses such as short-term acclimation of plants to severe weather episodes or annual variation in climate. We introduce 'dormancy depth', in addition to daily minimum temperature, as a complementary criterion for judging the potential damage of freezing temperatures to dormant flowering buds of grape vines. Dormancy depth can be estimated by a phenology model driven by daily maximum and minimum temperature and is expected to be a reasonable proxy for the physiological tolerance of buds to low temperature. Dormancy depth at a selected site was estimated for a climatological normal year by this model, and we found a close similarity between the time-course pattern of the estimated dormancy depth and the known cold tolerance of fruit trees. Inter-annual and spatial variation in dormancy depth were identified by this method, showing the feasibility of using dormancy depth as a proxy indicator for tolerance to low temperature during the winter season. The model was applied to 10 vineyards recently damaged by a cold spell, and a temperature-dormancy depth-freeze injury relationship was formulated as an exponential-saturation model that can be used for judging freeze risk under a given combination of temperature and dormancy depth. Based on this model and the expected lowest temperature with a 10-year recurrence interval, a freeze risk probability map was produced for Hwaseong County, Korea. The results seem to explain why the vineyards in the warmer part of Hwaseong County have suffered more freeze damage than those in the cooler part of the county. A dual-engine (dormancy depth and minimum temperature) freeze warning system was designed for vineyards in major production counties in Korea by combining site-specific dormancy depth and minimum temperature forecasts with the freeze risk model. In this system, daily accumulation of thermal time since last fall yields the dormancy state (depth) for today; the regional minimum temperature forecast for tomorrow by the Korea Meteorological Administration is converted to a site-specific forecast at 30 m resolution; and these data are input to the freeze risk model, from which the percent damage probability is calculated for each grid cell and mapped for the entire county. Similar approaches may be used to develop freeze warning systems for other deciduous fruit trees.
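The abstract does not give the phenology model or the coefficients of the exponential-saturation injury function, so the sketch below is only a toy illustration of the stated shape: chill accumulation deepens dormancy, and injury rises as the minimum temperature drops but is damped by dormancy depth. All function forms and constants here are placeholder assumptions, not the paper's model.

```python
import math

def dormancy_depth(tmax_series, tmin_series, base_temp=5.0):
    """Toy thermal-time proxy: accumulate daily chill units (degrees below a
    base temperature) since last fall. The paper's actual phenology model
    is not specified in the abstract; this is an assumed stand-in."""
    depth = 0.0
    for tmax, tmin in zip(tmax_series, tmin_series):
        tmean = (tmax + tmin) / 2.0
        depth += max(0.0, base_temp - tmean)  # colder days deepen dormancy
    return depth

def freeze_injury(tmin, depth, a=0.3, b=0.02):
    """Assumed exponential-saturation form: injury probability rises with
    colder minima, damped by dormancy depth. a and b are placeholders."""
    return 1.0 - math.exp(-a * max(0.0, -tmin) * math.exp(-b * depth))

# Deeper dormancy -> lower predicted injury at the same minimum temperature
print(freeze_injury(-15.0, depth=50.0))   # shallow dormancy: high risk
print(freeze_injury(-15.0, depth=200.0))  # deep dormancy: low risk
```

Run per grid cell with the site-specific minimum temperature forecast, this is the kind of computation the described warning system would map across a county.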

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon; Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.141-154 / 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish poor from high-quality content through the text data of products, and it has proliferated with the growth of text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. It has been studied in various directions with respect to accuracy, from simple rule-based to dictionary-based approaches using predefined labels. In fact, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Real online reviews are not only easy to collect openly; they also affect business. In marketing, real-world information from customers is gathered on websites rather than through surveys. Depending on whether a website's posts are positive or negative, the customer response is reflected in sales, so firms try to identify this information. However, many reviews on a website are not uniformly good and are difficult to classify. Earlier studies in this research area used review data from the Amazon.com shopping mall, but recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, and so on. However, a lack of accuracy is recognized because sentiment calculations change according to the subject, paragraph, direction of the sentiment lexicon, and sentence strength. This study aims to classify the polarity of sentiment analysis into positive and negative categories and to increase the prediction accuracy of polarity analysis using the labeled IMDB review data set. First, as text classification algorithms related to sentiment analysis, popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting were adopted as comparative models. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data; representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). CNN can be used similarly to a bag-of-words approach when processing a sentence in vector format, but it does not consider sequential data attributes. RNN handles order well because it takes the temporal information of the data into account, but it suffers from the long-term dependency problem; LSTM is used to solve this problem. For comparison, CNN and LSTM were chosen as simple deep learning models. In addition to the classical machine learning algorithms, CNN, LSTM, and the integrated model were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and we tried to figure out how well the models work for sentiment analysis and how they work. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text analysis. The reasons for combining these two algorithms are as follows. CNN can extract features for classification automatically by applying convolution layers with massively parallel processing, whereas LSTM is not capable of highly parallel processing. Like faucets, the LSTM's input, output, and forget gates can be opened and closed at the desired time, and these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can compensate for the CNN's inability to capture long-term dependencies. Furthermore, when LSTM is used after CNN's pooling layer, the model has an end-to-end structure, so spatial and temporal features can be modeled simultaneously. With the combined CNN-LSTM, 90.33% accuracy was measured; this is slower than CNN but faster than LSTM, and the presented model was more accurate than the other models. In addition, the word embedding layer can be improved as the kernels are trained step by step. CNN-LSTM can remedy the weakness of each model, and the end-to-end structure improves learning layer by layer. For these reasons, this study tries to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
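As a minimal sketch of the integrated architecture described above (the vocabulary size, sequence length, layer sizes, and training settings below are assumptions, not the paper's reported configuration), a Conv1D front end extracts local n-gram features, pooling downsamples them while keeping temporal order, and an LSTM models the remaining sequence before a sigmoid polarity output:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE, MAX_LEN = 20000, 200  # assumed preprocessing choices

# CNN front end extracts local features; the LSTM consumes the pooled
# feature maps in order, giving the end-to-end CNN-LSTM structure.
model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),        # word embedding layer
    layers.Conv1D(64, 5, activation="relu"),  # local n-gram feature extraction
    layers.MaxPooling1D(4),                   # downsample, preserve order
    layers.LSTM(64),                          # long-range sequential modeling
    layers.Dense(1, activation="sigmoid"),    # positive / negative polarity
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# IMDB reviews ship with Keras as integer word-index sequences
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(
    num_words=VOCAB_SIZE)
x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=MAX_LEN)
x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test, maxlen=MAX_LEN)
model.fit(x_train, y_train, epochs=2, batch_size=128,
          validation_data=(x_test, y_test))
```

Placing the LSTM after the pooling layer, rather than global pooling straight to a dense layer, is what lets the model combine CNN's spatial features with sequential information.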

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul; Kim, Jaeseong; Choi, Sangok
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.105-129 / 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used the default event itself as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This solves the data imbalance problem caused by the scarcity of default events, which had been pointed out as a limitation of the existing methodology, as well as the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risk of unlisted companies without stock price information can also be derived appropriately. The model can thus provide stable default risk assessment services to unlisted companies, such as small and medium-sized companies and startups, whose default risk is difficult to determine with traditional credit rating models. Although the prediction of corporate default risk using machine learning has recently been studied actively, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for calculating default risk, given that an entity's default risk information is very widely used in the market and sensitivity to differences in default risk is high; strict standards are also required for the calculation methods. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for evaluation methods, including verification of their adequacy, that consider past statistical data and experience with credit ratings as well as changes in future market conditions. This study reduced individual models' bias by using stacking ensemble techniques that synthesize various machine learning models. This makes it possible to capture complex nonlinear relationships between default risk and various corporate information while retaining the short calculation time of machine learning-based default risk prediction models. To produce the sub-model forecasts used as input data for the stacking ensemble model, the training data were divided into seven folds, and the sub-models were trained on these folds. To compare the predictive power of the stacking ensemble model, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which had the best performance among the single models. Next, to check for statistically significant differences between the stacking ensemble model's forecasts and those of each individual model, pairs between the stacking ensemble model and each individual model were constructed. Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, we used the nonparametric Wilcoxon rank-sum test to check whether the two sets of forecasts making up each pair differed significantly. The analysis showed that the forecasts of the stacking ensemble model differed statistically significantly from those of the MLP and CNN models. In addition, this study provides a methodology by which existing credit rating agencies can apply machine learning-based default risk prediction, given that traditional credit rating models can also be included as sub-models in calculating the final default probability. The stacking ensemble techniques proposed in this study can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope this research will be used as a resource to increase practical adoption by overcoming and improving on the limitations of existing machine learning-based models.
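As a minimal sketch of the stacking setup described above (the synthetic data, choice of sub-models, and hyperparameters are stand-ins, not the paper's configuration), scikit-learn's StackingClassifier trains the sub-models on cross-validated folds, here cv=7 to mirror the paper's seven-way split, and fits a meta-learner on their out-of-fold probability forecasts:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in for the 160-column financial dataset (not the paper's data)
X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Sub-models produce out-of-fold probability forecasts on 7 folds;
# the meta-learner combines them, reducing any single model's bias.
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                              random_state=0)),
    ],
    final_estimator=LogisticRegression(),
    cv=7,                           # mirrors the seven-way training split
    stack_method="predict_proba",   # stack probabilities, not hard labels
)
stack.fit(X_tr, y_tr)
print("held-out accuracy:", stack.score(X_te, y_te))
```

A traditional credit rating model could be wrapped as one more estimator in the list, which is the integration path the paper suggests for existing rating agencies.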