• Title/Summary/Keyword: Improving System


The Effect of Nasal BiPAP Ventilation in Acute Exacerbation of Chronic Obstructive Airway Disease (만성 기도폐쇄환자에서 급성 호흡 부전시 BiPAP 환기법의 치료 효과)

  • Cho, Young-Bok;Kim, Ki-Beom;Lee, Hak-Jun;Chung, Jin-Hong;Lee, Kwan-Ho;Lee, Hyun-Woo
    • Tuberculosis and Respiratory Diseases / v.43 no.2 / pp.190-200 / 1996
  • Background: Mechanical ventilation constitutes the last therapeutic method for acute respiratory failure when oxygen therapy and medical treatment fail to improve the respiratory status of the patient. This invasive ventilation, classically administered by endotracheal intubation or by tracheostomy, is associated with significant mortality and morbidity. Consequently, any less invasive method able to avoid endotracheal ventilation would appear to be useful in high-risk patients. Over recent years, the efficacy of nasal mask ventilation has been demonstrated in the treatment of chronic restrictive respiratory failure, particularly in patients with neuromuscular diseases. More recently, this method has been successfully used in the treatment of acute respiratory failure due to parenchymal disease. Method: We assessed the efficacy of bilevel positive airway pressure (BiPAP) in the treatment of acute exacerbation of chronic obstructive pulmonary disease (COPD). This study prospectively evaluated the clinical effectiveness of a treatment schedule with positive pressure ventilation via nasal mask (Respironics BiPAP device) in 22 patients with acute exacerbations of COPD. Eleven patients with acute exacerbations of COPD were treated with nasal pressure support ventilation delivered via a nasal ventilatory support system plus standard treatment for 3 consecutive days. An additional 11 control patients were treated only with standard treatment. The standard treatment consisted of medical and oxygen therapy. The nasal BiPAP was delivered by a pressure support ventilator in spontaneous timed mode, at an inspiratory positive airway pressure of $6-8cmH_2O$ and an expiratory positive airway pressure of $3-4cmH_2O$. Patients were evaluated with physical examination (respiratory rate), the modified Borg scale, and arterial blood gases before and after the acute therapeutic intervention. Results: Before treatment and after 3 days of treatment, mean $PaO_2$ was 56.3mmHg and 79.1mmHg (p<0.05) in the BiPAP group and 56.9mmHg and 70.2mmHg (p<0.05) in the conventional treatment (CT) group, and $PaCO_2$ was 63.9mmHg and 56.9mmHg (p<0.05) in the BiPAP group and 53mmHg and 52.8mmHg in the CT group, respectively. pH was 7.36 and 7.41 (p<0.05) in the BiPAP group and 7.37 and 7.38 in the CT group, respectively. Before and after treatment, mean respiratory rate was 28 and 23 breaths/min in the BiPAP group and 25 and 20 breaths/min in the CT group, respectively. The Borg scale was 7.6 and 4.7 in the BiPAP group and 6.4 and 3.8 in the CT group, respectively. There were significant differences between the two groups in the changes of mean $PaO_2$, $PaCO_2$, and pH. Conclusion: We conclude that short-term nasal pressure-support ventilation delivered via nasal BiPAP in the treatment of acute exacerbation of COPD is an efficient mode of assisted ventilation for improving blood gas values and dyspnea sensation, and may reduce the need for endotracheal intubation with mechanical ventilation.
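The comparison described here is a within-group change (before vs. after treatment) plus a between-group comparison of those changes. A minimal sketch of that kind of analysis in Python, using made-up placeholder values rather than the study's data, might look like this:

```python
# Illustrative sketch (not the study's data or code): comparing pre/post
# arterial blood gas values within a group, and change scores between groups.
import numpy as np
from scipy import stats

# Hypothetical placeholder PaCO2 values (mmHg) for 11 patients per group
bipap_pre  = np.array([68, 60, 65, 70, 62, 59, 66, 64, 61, 63, 65])
bipap_post = np.array([60, 55, 58, 62, 56, 54, 59, 57, 55, 56, 58])
ct_pre     = np.array([55, 52, 54, 56, 51, 53, 55, 52, 54, 53, 48])
ct_post    = np.array([54, 52, 53, 55, 51, 52, 55, 52, 53, 53, 48])

# Within-group change ("pretreatment vs. after 3 days"): paired t-test
t_within, p_within = stats.ttest_rel(bipap_pre, bipap_post)

# Between-group comparison of the change scores: independent t-test
t_between, p_between = stats.ttest_ind(bipap_pre - bipap_post, ct_pre - ct_post)

print(f"BiPAP within-group change: t={t_within:.2f}, p={p_within:.3f}")
print(f"Between-group difference in change: t={t_between:.2f}, p={p_between:.3f}")
```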


Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.29-45 / 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. However, one major drawback is that they are based on strict assumptions. Such strict assumptions include linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions of traditional statistics have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. SVM is simple enough to be analyzed mathematically and leads to high performance in practical applications. SVM implements the structural risk minimization principle and searches to minimize an upper bound of the generalization error. In addition, the solution of SVM may be a global optimum, and thus overfitting is unlikely to occur with SVM. Moreover, SVM does not require many data samples for training, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for solving binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class classification problems as much as SVM does for binary-class classification. Second, approximation algorithms (e.g., decomposition methods, the sequential minimal optimization algorithm) can be used for effective multi-class computation to reduce computation time, but they can deteriorate classification performance. Third, a difficulty in multi-class prediction problems is the data imbalance problem, which can occur when the number of instances in one class greatly outnumbers the number of instances in another class. Such data sets often cause a default classifier to be built due to a skewed boundary, reducing the classification accuracy of the classifier. SVM ensemble learning is one machine learning approach to cope with the above drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations. Observations that are incorrectly predicted by previous classifiers are chosen more often than examples that are correctly predicted. Thus, boosting attempts to produce new classifiers that are better able to predict examples for which the current ensemble's performance is poor. In this way, it can reinforce the training of misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, it can perform the learning process considering the geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. 10-fold cross-validation is performed three times with different random seeds in order to ensure that the comparison among the three different classifiers does not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine sets. That is, cross-validated folds have been tested independently for each algorithm. Through these steps, we obtained results for the classifiers on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier over the 30 folds is significantly different. The results indicate that the performance of MGM-Boost is significantly different from the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
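MGM-Boost itself is the paper's contribution and is not reproduced here, but the evaluation protocol it describes (repeated stratified 10-fold cross-validation, with accuracy summarized by the geometric mean of per-class recall) can be sketched with off-the-shelf baselines. The synthetic data set, class weights, and model settings below are illustrative assumptions, not the paper's:

```python
# Sketch of the evaluation protocol: repeated stratified 10-fold CV comparing
# SVM and AdaBoost baselines with a geometric mean-based accuracy measure.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

# Imbalanced 5-class stand-in for bond rating grades
X, y = make_classification(n_samples=600, n_classes=5, n_informative=8,
                           weights=[0.4, 0.3, 0.15, 0.1, 0.05], random_state=0)

def geometric_mean_accuracy(y_true, y_pred):
    # Geometric mean of per-class recall: heavily penalizes minority-class errors.
    per_class = recall_score(y_true, y_pred, average=None, zero_division=0)
    return float(np.prod(per_class) ** (1.0 / len(per_class)))

models = {"SVM": SVC(kernel="rbf"), "AdaBoost": AdaBoostClassifier(n_estimators=100)}
for seed in (0, 1, 2):                       # three repetitions with different seeds
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    for name, model in models.items():
        gmeans = []
        for train_idx, test_idx in skf.split(X, y):
            model.fit(X[train_idx], y[train_idx])
            gmeans.append(geometric_mean_accuracy(y[test_idx], model.predict(X[test_idx])))
        print(f"seed {seed} {name}: mean G-mean accuracy = {np.mean(gmeans):.3f}")
```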

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.141-154 / 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish poor from high-quality content through the text data about products, and it has proliferated along with text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning them to predefined categories such as positive and negative. It has been studied in various directions, in terms of accuracy, from simple rule-based to dictionary-based approaches using predefined labels. In fact, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Real online reviews are openly accessible to other consumers, easy to collect, and directly affect a business. In marketing, real-world information from customers is gathered on websites, not through surveys. Whether a website's posts are positive or negative is reflected in sales, so businesses try to identify this information. However, many reviews on a website are not well written and are difficult to classify. Earlier studies in this research area used review data from the Amazon.com shopping mall, but recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, etc. However, accuracy remains limited because sentiment calculations change according to the subject, paragraph, sentiment lexicon polarity, and sentence strength. This study aims to classify the polarity of sentiment analysis into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the IMDB review data set. First, as text classification algorithms related to sentiment analysis, the popular machine learning algorithms NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and Gradient Boosting are adopted as comparative models. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data. Representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). CNN can be used similarly to BoW when processing a sentence in vector format, but it does not consider sequential data attributes. RNN handles order well because it takes the temporal information of the data into account, but it suffers from the long-term dependency problem. To solve the problem of long-term dependence, LSTM is used. For comparison, CNN and LSTM were chosen as simple deep learning models. In addition to the classical machine learning algorithms, CNN, LSTM, and the integrated models were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and accuracy to find the optimal combination, and we tried to figure out how well the models work for sentiment analysis and how they work. This study proposes integrated CNN and LSTM algorithms to extract the positive and negative features of text. The reasons for combining these two algorithms are as follows. CNN can extract features for classification automatically by applying convolution layers and massively parallel processing, whereas LSTM is not capable of highly parallel processing. Like faucets, the LSTM has input, output, and forget gates that can be opened and closed at the desired time. These gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can address the long-term dependency problem that CNN cannot handle. Furthermore, when LSTM is applied after CNN's pooling layer, the model has an end-to-end structure, so that spatial and temporal features can be learned simultaneously. The combined CNN-LSTM model achieved 90.33% accuracy; it is slower than CNN but faster than LSTM, and it was more accurate than the other models. In addition, each word embedding layer can be improved when training the kernel step by step. CNN-LSTM can compensate for the weaknesses of each model, and the end-to-end structure of the LSTM has the advantage of improving learning layer by layer. For these reasons, this study tries to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
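A minimal sketch of the combined architecture described (word embedding, a convolution layer for local features, pooling, an LSTM over the pooled sequence, and a sigmoid output) on the Keras IMDB data set follows; all layer sizes and training settings are illustrative assumptions, not the paper's configuration:

```python
# Minimal CNN-LSTM sentiment classifier on the Keras IMDB data set.
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras import layers, models

VOCAB, MAXLEN = 20000, 200
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=VOCAB)
x_train = pad_sequences(x_train, maxlen=MAXLEN)
x_test = pad_sequences(x_test, maxlen=MAXLEN)

model = models.Sequential([
    layers.Embedding(VOCAB, 128),             # word embeddings
    layers.Conv1D(64, 5, activation="relu"),  # local n-gram features
    layers.MaxPooling1D(pool_size=4),         # downsample before the LSTM
    layers.LSTM(64),                          # sequential dependencies
    layers.Dense(1, activation="sigmoid"),    # positive vs. negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=64, epochs=2, validation_split=0.1)
print("test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])
```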

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.105-129 / 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial ratio indices. Unlike most prior studies, which used default events as the basis for learning default risk, this study calculated default risk using the market capitalization and stock price volatility of each company based on the Merton model. This solves the problem of data imbalance caused by the scarcity of default events, which has been pointed out as a limitation of the existing methodology, and the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risk of unlisted companies without stock price information can be appropriately derived. Through this, the model can provide stable default risk assessment services to unlisted companies, such as small and medium-sized companies and startups, whose default risk is difficult to determine with traditional credit rating models. Although there have recently been active studies on predicting corporate default risk using machine learning, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for calculating default risk, given that a company's default risk information is very widely utilized in the market and sensitivity to differences in default risk is high. Strict standards are also required for the calculation methods. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and changes in future market conditions. This study reduced individual models' bias by utilizing stacking ensemble techniques that synthesize various machine learning models. This allows us to capture complex nonlinear relationships between default risk and various corporate information and to maximize the advantages of machine learning-based default risk prediction models, which take less time to calculate. To calculate the sub-model forecasts used as input data for the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts. To compare the predictive power of the stacking ensemble model, Random Forest, MLP, and CNN models were trained with the full training data, and then the predictive power of each model was verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which had the best performance among the single models. Next, to check for statistically significant differences between the stacking ensemble model's forecasts and those of each individual model, pairs between the stacking ensemble model and each individual model were constructed. Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, we used the nonparametric Wilcoxon rank-sum test to check whether the two model forecasts making up each pair showed statistically significant differences. The analysis showed that the forecasts of the stacking ensemble model differed statistically significantly from those of the MLP model and the CNN model. In addition, this study provides a methodology that allows existing credit rating agencies to apply machine learning-based bankruptcy risk prediction methodologies, given that traditional credit rating models can also be included as sub-models in calculating the final default probability. The stacking ensemble techniques proposed in this study can also help design models that meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource to increase practical use by overcoming and improving the limitations of existing machine learning-based models.
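The stacking idea described, out-of-fold sub-model forecasts feeding a meta-model, plus a nonparametric comparison of model errors, can be sketched with scikit-learn and SciPy. The synthetic data, the choice of sub-models, and the use of the paired signed-rank variant of the Wilcoxon test below are illustrative assumptions, not the study's actual setup:

```python
# Stacking sketch: out-of-fold predictions from sub-models feed a meta-model;
# a nonparametric test then compares paired absolute errors of two models.
import numpy as np
from scipy.stats import wilcoxon
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Stand-in for financial-ratio features and a continuous default-risk target
X, y = make_regression(n_samples=2000, n_features=30, noise=0.3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
                ("mlp", MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500))],
    final_estimator=Ridge(),
    cv=7,                       # 7-way split for out-of-fold sub-model forecasts
)
rf_only = RandomForestRegressor(n_estimators=200, random_state=0)

stack.fit(X_tr, y_tr)
rf_only.fit(X_tr, y_tr)

err_stack = np.abs(stack.predict(X_te) - y_te)
err_rf = np.abs(rf_only.predict(X_te) - y_te)
print("mean |error|, stacking vs. RF:", err_stack.mean(), err_rf.mean())

# Paired nonparametric comparison of the two models' absolute errors
stat, p = wilcoxon(err_stack, err_rf)
print(f"Wilcoxon signed-rank: statistic={stat:.1f}, p={p:.4f}")
```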

An Exploratory Study on the Demand for Training Programs to Improve Real Estate Agents' Job Performance -Focused on Cheonan, Chungnam- (부동산중개인의 직무능력 향상을 위한 교육프로그램 욕구에 관한 탐색적 연구 -충청남도 천안지역을 중심으로-)

  • Lee, Jae-Beom
    • Journal of the Korea Academia-Industrial cooperation Society / v.12 no.9 / pp.3856-3868 / 2011
  • Until recently, research trends in real estate have focused on the real estate market and market analysis, while studies on developing training programs to help real estate agents improve their job performance are relatively few. Thus, this study empirically analyzes the needs for training programs for real estate agents in Cheonan to improve their job performance. The results are as follows. First, when asked what educational contents they need in order to improve their job performance, most of the respondents indicated needs for analysis of housing value, legal knowledge, real estate management, accounting, real estate marketing, and understanding of real estate policy. This is because they are well aware that the best way of responding to changing clients' needs comes from training programs. Second, asked about real estate marketing strategies, most of the respondents showed awareness of new strategies to meet the needs of clients. This is because new forms of marketing strategies, including internet advertising, are needed in the field as the paradigm, including information technology, changes. Third, asked about the need for real estate-related training programs, 92% of the respondents answered that they need real estate education programs run by the continuing education centers of universities. In addition, the survey showed their needs for retraining programs that utilize the resources of local universities. Beyond this, to have effective and efficient training programs, they called for a training system that utilizes the human resources of the universities, under the name of a department of 'Real Estate Contract', for real estate agents' job performance. Fourth, the survey revealed that real estate management (44.2%) and real estate marketing (42.3%) are the most frequently chosen contents they want to take in a regular course for improving real estate agents' job performance. This shows their will to understand clients' needs through the mindset of real estate management and real estate marketing. The survey also showed that they prefer training programs offered as an irregular course to those in a regular one. Despite the above results, this study chose subjects only in Cheonan, so more diverse areas need to be researched. The needs for programs to improve real estate agents' job performance should be analyzed empirically, targeting real estate agents not just in Cheonan but also in cities such as Pyeongchon, Ilsan, and Bundang, where the real estate business is booming, as well as undergraduate and graduate students majoring in real estate studies. Such studies will be able to provide information to help develop customized training programs by evaluating the elements that real estate agents need in order to satisfy clients and improve their job performance. Many of the program development variables learned through these studies can be incorporated into the curriculum of real estate studies and used practically as information for the development of real estate studies in this fast-changing era.

Chinese Communist Party's Management of Records & Archives during the Chinese Revolution Period (혁명시기 중국공산당의 문서당안관리)

  • Lee, Won-Kyu
    • The Korean Journal of Archival Studies / no.22 / pp.157-199 / 2009
  • The organization for managing records and archives did not emerge together with the founding of the Chinese Communist Party. Such management became active with the establishment of the Department of Documents (文書科) and its affiliated offices overseeing reading and safekeeping of official papers, after the formation of the Central Secretariat(中央秘書處) in 1926. Improving the work of the Secretariat's organization became the focus of critical discussions in the early 1930s. The main criticism was that the Secretariat had failed to be cognizant of its political role and degenerated into a mere "functional organization." The solution to this was the "politicization of the Secretariat's work." Moreover, influenced by the "Rectification Movement" in the 1940s, the party emphasized the responsibility of the Resources Department (材料科) that extended beyond managing documents to collecting, organizing and providing various kinds of important information data. In the mean time, maintaining security with regard to composing documents continued to be emphasized through such methods as using different names for figures and organizations or employing special inks for document production. In addition, communications between the central political organs and regional offices were emphasized through regular reports on work activities and situations of the local areas. The General Secretary not only composed the drafts of the major official documents but also handled the reading and examination of all documents, and thus played a central role in record processing. The records, called archives after undergoing document processing, were placed in safekeeping. This function was handled by the "Document Safekeeping Office(文件保管處)" of the Central Secretariat's Department of Documents. Although the Document Safekeeping Office, also called the "Central Repository(中央文庫)", could no longer accept, beginning in the early 1930s, additional archive transfers, the Resources Department continued to strengthen throughout the 1940s its role of safekeeping and providing documents and publication materials. In particular, collections of materials for research and study were carried out, and with the recovery of regions which had been under the Japanese rule, massive amounts of archive and document materials were collected. After being stipulated by rules in 1931, the archive classification and cataloguing methods became actively systematized, especially in the 1940s. Basically, "subject" classification methods and fundamental cataloguing techniques were adopted. The principle of assuming "importance" and "confidentiality" as the criteria of management emerged from a relatively early period, but the concept or process of evaluation that differentiated preservation and discarding of documents was not clear. While implementing a system of secure management and restricted access for confidential information, the critical view on providing use of archive materials was very strong, as can be seen in the slogan, "the unification of preservation and use." Even during the revolutionary movement and wars, the Chinese Communist Party continued their efforts to strengthen management and preservation of records & archives. The results were not always desirable nor were there any reasons for such experiences to lead to stable development. The historical conditions in which the Chinese Communist Party found itself probably made it inevitable. 
The most pronounced characteristics of this process can be found in the fact that they not only pursued efficiency of records & archives management at the functional level but, while strengthening their self-awareness of the political significance impacting the Chinese Communist Party's revolution movement, they also paid attention to the value possessed by archive materials as actual evidence for revolutionary policy research and as historical evidence of the Chinese Communist Party.

Improvement of Certification Criteria based on Analysis of On-site Investigation of Good Agricultural Practices(GAP) for Ginseng (인삼 GAP 인증기준의 현장실천평가결과 분석에 따른 인증기준 개선방안)

  • Yoon, Deok-Hoon;Nam, Ki-Woong;Oh, Soh-Young;Kim, Ga-Bin
    • Journal of Food Hygiene and Safety / v.34 no.1 / pp.40-51 / 2019
  • Ginseng has a unique production system that is different from those used for other crops. It is subject to the Ginseng Industry Act, requires a long-term cultivation period of 4-6 years, involves complicated cultivation characteristics whereby ginseng is not produced in a single location, and many ginseng farmers engage in mixed farming. Therefore, to bring the production of ginseng in line with GAP standards, it is necessary to better understand the on-site practices of ginseng farmers according to the established control points and to provide a proper action plan for improving efficiency. Among ginseng farmers in Korea who applied for GAP certification, 77.6% obtained it, which is lower than the 94.1% of farmers who obtained certification for other products. 13.7% of the applicants were judged unsuitable during document review due to their use of unregistered pesticides and soil heavy metals, and another 8.7% failed to obtain certification due to inadequate on-site management results. This is a considerably higher failure rate than the 5.3% document-review failure rate and 0.6% on-site inspection failure rate for other crops, which suggests that it is relatively more difficult to obtain GAP certification for ginseng farming than for other crops. Ginseng farmers were given an average of 2.65 points on the 10 essential control points among the total of 72 control points, which was slightly lower than the 2.81 points obtained for other crops. In particular, ginseng farmers were given an average of 1.96 points in the evaluation of compliance with the safe-use standards for pesticides, which was much lower than the average of 2.95 points for other crops. Therefore, it is necessary to train ginseng farmers to comply with the safe use of pesticides. On the other essential control points, the ginseng farmers were rated at an average of 2.33 points, lower than the 2.58 points given for other crops. Several other areas of compliance in which the ginseng farmers also rated low in comparison to other crops were found. These included record keeping for over one year, records of pesticide use, pesticide storage, post-harvest storage management, hand washing before and after work, hygiene related to work clothing, training of workers in safety and hygiene, and a written plan for hazard management. Also, among the total of 72 control points, there are 12 control points (10 required, 2 recommended) that do not apply to ginseng. Therefore, it is considered inappropriate to conduct an effective evaluation of the ginseng production process based on the existing certification standards. In conclusion, differentiated certification standards are needed to expand GAP certification among ginseng farmers, and it is also necessary to develop programs that can be implemented in a more systematic and field-oriented manner to provide farmers with proper GAP management education.

KNU Korean Sentiment Lexicon: Bi-LSTM-based Method for Building a Korean Sentiment Lexicon (Bi-LSTM 기반의 한국어 감성사전 구축 방안)

  • Park, Sang-Min;Na, Chul-Won;Choi, Min-Seong;Lee, Da-Hee;On, Byung-Won
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.219-240 / 2018
  • Sentiment analysis, one of the text mining techniques, is a method for extracting subjective content embedded in text documents. Recently, sentiment analysis methods have been widely used in many fields. As good examples, data-driven surveys are based on analyzing the subjectivity of text data posted by users, and market research is conducted by analyzing users' review posts to quantify users' reputation of a target product. The basic method of sentiment analysis is to use a sentiment dictionary (or lexicon), a list of sentiment vocabularies with positive, neutral, or negative semantics. In general, the meaning of many sentiment words is likely to differ across domains. For example, the sentiment word 'sad' indicates a negative meaning in most domains, but not in the movie domain. In order to perform accurate sentiment analysis, we need to build a sentiment dictionary for the given domain. However, such a method of building the sentiment lexicon is time-consuming, and many sentiment vocabularies are missed without the use of a general-purpose sentiment lexicon. In order to address this problem, several studies have been carried out to construct sentiment lexicons suitable for a specific domain based on 'OPEN HANGUL' and 'SentiWordNet', which are general-purpose sentiment lexicons. However, OPEN HANGUL is no longer being serviced, and SentiWordNet does not work well because of language differences in the process of converting Korean words into English words. There are therefore restrictions on using such general-purpose sentiment lexicons as seed data for building the sentiment lexicon of a specific domain. In this article, we construct the 'KNU Korean Sentiment Lexicon (KNU-KSL)', a new general-purpose Korean sentiment dictionary that is more advanced than existing general-purpose lexicons. The proposed dictionary, which is a list of domain-independent sentiment words such as 'thank you', 'worthy', and 'impressed', is built to quickly construct the sentiment dictionary for a target domain. In particular, it constructs sentiment vocabularies by analyzing the glosses contained in the Standard Korean Language Dictionary (SKLD) through the following procedure: First, we propose a sentiment classification model based on Bidirectional Long Short-Term Memory (Bi-LSTM). Second, the proposed deep learning model automatically classifies each gloss as having either positive or negative meaning. Third, positive words and phrases are extracted from the glosses classified as positive, while negative words and phrases are extracted from the glosses classified as negative. Our experimental results show that the average accuracy of the proposed sentiment classification model is up to 89.45%. In addition, the sentiment dictionary is further extended using various external sources, including SentiWordNet, SenticNet, Emotional Verbs, and Sentiment Lexicon 0603. Furthermore, we add sentiment information about frequently used coined words and emoticons that are used mainly on the Web. The KNU-KSL contains a total of 14,843 sentiment vocabularies, each of which is a 1-gram, 2-gram, phrase, or sentence pattern. Unlike existing sentiment dictionaries, it is composed of words that are not affected by particular domains. The recent trend in sentiment analysis is to use deep learning techniques without sentiment dictionaries, and the importance of developing sentiment dictionaries has gradually declined. However, a recent study shows that the words in a sentiment dictionary can be used as features of deep learning models, resulting in sentiment analysis performed with higher accuracy (Teng, Z., 2016). This result indicates that the sentiment dictionary is useful not only for sentiment analysis itself but also as a source of features that improve the accuracy of deep learning models. The proposed dictionary can be used as basic data for constructing the sentiment lexicon of a particular domain and as features of deep learning models. It is also useful for automatically and quickly building large training sets for deep learning models.
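The gloss-classification step, a Bi-LSTM deciding whether each dictionary gloss carries positive or negative meaning, can be sketched as follows; the toy English glosses, tokenizer, and hyperparameters are hypothetical stand-ins for the paper's SKLD data and settings:

```python
# Sketch: a Bi-LSTM classifies dictionary glosses as positive or negative;
# words/phrases from positively classified glosses feed the positive lexicon.
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras import layers, models

glosses = ["feeling pleased and thankful", "causing sorrow or distress",
           "worthy of praise", "full of anger and hate"]     # hypothetical glosses
labels = np.array([1, 0, 1, 0])                              # 1=positive, 0=negative

tok = Tokenizer(num_words=5000, oov_token="<unk>")
tok.fit_on_texts(glosses)
X = pad_sequences(tok.texts_to_sequences(glosses), maxlen=20)

model = models.Sequential([
    layers.Embedding(5000, 64),
    layers.Bidirectional(layers.LSTM(32)),    # reads each gloss in both directions
    layers.Dense(1, activation="sigmoid"),    # positive vs. negative gloss
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=5, verbose=0)

print(model.predict(X, verbose=0).ravel())    # per-gloss positivity scores
```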

The Present Status and a Proposal of the Prospective Measures for Parasitic Diseases Control in Korea (우리나라 기생충병관리의 현황(現況)과 효율적방안에 관(關)한 연구(硏究))

  • Loh, In-Kyu
    • Journal of Preventive Medicine and Public Health / v.3 no.1 / pp.1-16 / 1970
  • The present status of control measures for helminthic infections of public health importance in Korea was surveyed in 1969, and the following results were obtained. The activities of parasitic examination and Ascaris treatment for positives, carried out from 1966 to 1969, produced poor results and could not decrease the infection rate; these activities need to be improved or strengthened. The mass treatment activities for paragonimiasis and clonorchiasis in the areas designated by the Ministry of Health were carried out from 1965 to 1968 with no good results in decreasing the estimated number of patients. There were too many pharmaceutical companies producing many kinds of anthelmintics; it may be better to reduce the number of anthelmintics produced and control their quality. Human feces, the most important source of helminthic infections, were generally not treated in sanitary ways because of the poor sewerage system and lack of sewage treatment plants in urban areas and insanitary latrines in rural areas. Field soils of 170 specimens were collected from 34 areas out of 55 urban and tourist areas where the use of night soil as fertilizer has been prohibited by regulation, and examined for parasite contamination; Ascaris eggs were detected in 44%. Vegetables of 64 specimens each, from the supply agents of parasite-free vegetables and from general markets, were collected and examined for parasite contamination; Ascaris eggs were detected in 25% and 36%, respectively. The parasite control activities and the parasitological examination techniques in the health centers of the country were not satisfactory, and the budget of the Ministry of Health for parasite control was very poor. The actual expenditure needed for the cellophane thick smear technique was 8 won per specimen. As a principle, the control of helminthic infections should be directed toward breaking the chain of events in the life cycle of the parasites and eliminating the environmental and host factors concerned with the infections, and the following methods may be pointed out. 1) Mass treatment might be done to eliminate human reservoirs of an infection. 2) Animal reservoirs which are related to human infections might be eliminated. 3) The excreta of reservoirs, particularly human feces, should be treated in sanitary ways by means of a sanitary sewerage system and sewage treatment plants in urban areas and sanitary latrines such as waterborne latrines, aqua privies, and pit latrines in rural areas. Increasing national economic development and prohibiting the habit of using night soil as fertilizer might be very important factors in achieving this purpose. 4) The control of vehicles and intermediate hosts might be done by means of prohibition of soil contamination with parasites, food sanitation, insect control, and snail control. 5) The improvement of insanitary attitudes and bad habits related to parasitic infections might be achieved by prohibiting the habit of using night soil as fertilizer and by improving eating habits and personal hygiene. 6) Chemoprophylactic measures and vaccination may be effective in preventing infection or the development of a parasite to the adult stage when the body is invaded by parasites; further studies and development of these kinds of measures are needed.


The Effects of Online Service Quality on Consumer Satisfaction and Loyalty Intention -About Booking and Issuing Air Tickets on Website- (온라인 서비스 품질이 고객만족 및 충성의도에 미치는 영향 -항공권 예약.발권 웹사이트를 중심으로-)

  • Park, Jong-Gee;Ko, Do-Eun;Lee, Seung-Chang
    • Journal of Distribution Research / v.15 no.3 / pp.71-110 / 2010
  • 1. Introduction Today the Internet is recognized as an important channel for the transaction of products and services. According to data surveyed by the National Statistical Office, on-line transactions in 2007 totaled 15.7656 trillion won, a 17.1% (2.3060 trillion won) increase over the previous year; of this, B2C transactions increased 12.0% to 10.2258 trillion won. Because the entry barrier of the Korean on-line market is low, many retailers could easily enter the market; the larger its scale grew, the tougher its competition became. In particular, due to the Internet and the innovation of IT, the existing market has changed into a perfectly competitive market (Srinivasan, Rolph & Kishore, 2002). In the early years of on-line business, a moderate price was thought to be the main reason for success, but with tough competition the importance of on-line service quality has become apparent. If customers are not sure whether a Web site can provide what they want, or whether they can trust the products they have already bought there, they doubt its viability (Parasuraman, Zeithaml & Malhotra, 2005). Customers can directly reserve and issue their air tickets, irrespective of place and time, at the Web sites of travel agencies or airlines, but empirical studies of these Web sites for reserving and issuing air tickets are insufficient. Therefore, this study pursues the following specific objectives. The first objective is to measure the service quality and service recovery of Web sites for reserving and issuing air tickets. The second is to examine whether this on-line service quality and on-line service recovery have an impact on overall service quality. The third is to examine the relation between overall service quality and customer satisfaction, and between customer satisfaction and loyalty intention.
2. Theoretical Background 2.1 On-line Service Quality Barnes & Vidgen (2000; 2001a; 2001b; 2002) developed a tool to measure Web site quality (called WebQual) in four stages. WebQual 1.0, the first stage, developed measurement items for information quality based on QFD, verified by students of a UK business school. WebQual 2.0, the second stage, was developed for interaction quality and judged by customers of an on-line bookshop. WebQual 3.0, the third stage, was created by consolidating WebQual 1.0 for information quality and WebQual 2.0 for interaction quality. It includes three quality dimensions (information quality, interaction quality, and site design) and was assessed and confirmed on auction sites (eBay, Amazon, QXL). Further, based on the earlier empirical studies, the authors changed site quality into usability, judging that usability is a concept of how customers interact with or perceive Web sites and is widely used in assessing Web sites. Through this process, WebQual 4.0 was developed, consisting of three quality dimensions (information quality, interaction quality, and usability) and 22 items. However, because WebQual 4.0 focuses on the technical part, it is usable for the Web site's design but not for the Web site's experiential quality. Parasuraman, Zeithaml & Malhotra (2002; 2005) developed measures of on-line service quality in 2002 and 2005. The 2002 study divided on-line service quality into 5 dimensions, but these were not well organized, so they needed to be studied again. So Parasuraman, Zeithaml & Malhotra (2005) reworked the on-line service quality measure based on the 2002 study and developed E-S-QUAL. After developing a preliminary measure for on-line service quality, they administered a questionnaire to customers who had purchased at amazon.com and walmart.com and reassessed the measure. They completed E-S-QUAL, which consists of 4 dimensions and 22 items: efficiency, system availability, fulfillment, and privacy. Efficiency measures ease of access to sites and usability, system availability measures the accurate technical functioning of sites, fulfillment measures promptness in delivering products and the sufficiency of goods, and privacy measures the degree of protection of customer data. 2.2 Service Recovery Service industries tend to minimize losses by coping with service failure promptly. These responses of service providers to service failure are called service recovery (Kelly & Davis, 1994). Bitner (1990) studied, from the customers' point of view, service providers' behavior and how customers come to recognize their satisfaction/dissatisfaction at the service encounter. According to this work, to manage service failure successfully, exact recognition of the service problem, an apology, a sufficient explanation of the service failure, and some tangible compensation are important. Parasuraman, Zeithaml & Malhotra (2005) approached service recovery from the perspective of measurement rather than management, moved from the off-line to the on-line market, and developed E-RecS-QUAL, a tool for measuring on-line service recovery. 2.3 Customer Satisfaction The definition of customer satisfaction can be divided into two points of view. First, customer satisfaction is approached as an outcome of consumption. Howard & Sheth (1969) defined satisfaction as 'a cognitive state of being adequately or inadequately rewarded for the sacrifice undergone,' and Westbrook & Reilly (1983) defined customer satisfaction/dissatisfaction as 'a psychological reaction to the behavior pattern of shopping and purchasing, the display condition of the retail store, and the outcome of purchased goods and services as well as the whole market.' Second, customer satisfaction is approached as a process. Engel & Blackwell (1982) defined satisfaction as 'an assessment of the consistency between the chosen alternative and the beliefs one had about it,' and Tse & Wilton (1988) defined customer satisfaction as 'a customer's reaction to the discrepancy between advance expectation and ex post facto outcome.' That is, from this point of view, customer satisfaction is the process of comparing and assessing what consumers expect and the outcome they receive. Unlike the outcome-oriented approach, the process-oriented approach has many advantages. As the process-oriented approach deals with the customer's whole consumption experience, it examines the main process by measuring, one by one, each factor that plays an essential role at each step. This approach also enables us to examine the perceptual/psychological process that forms customer satisfaction. Because of these advantages, many studies now adopt this process-oriented approach (Yi, 1995). 2.4 Loyalty Intention Loyalty has been studied through behavioral approaches, attitudinal approaches, and composite approaches (Dekimpe et al., 1997). In the early years of study, loyalty was defined with a focus on the behavioral concept; behavioral approaches regard customer loyalty as 'a tendency to purchase periodically within a certain period of time at a specific retail store.' But the loyalty of behavioral approaches focuses only on the outcome of customer behavior, so some have pointed out the limitation that customers' decision-making situations or processes are neglected (Enis & Paul, 1970; Raj, 1982; Lee, 2002). So the attitudinal approaches were suggested. The attitudinal approaches consider loyalty to contain cognitive, emotional, and volitional factors (Oliver, 1997), defining customer loyalty as 'friendly behaviors toward specific retail stores.' However, these attitudinal approaches can explain how customer loyalty forms and changes, but cannot say positively whether it leads to actual purchasing in the future; this is a shortcoming (Oh, 1995).
3. Research Design 3.1 Research Model Based on the objectives of this study, the research model was derived.

3.2 Hypotheses 3.2.1 The Hypothesis of On-line Service Quality and Overall Service Quality The relation between on-line service quality and overall service quality: I-1. Efficiency of on-line service quality may have a significant effect on overall service quality. I-2. System availability of on-line service quality may have a significant effect on overall service quality. I-3. Fulfillment of on-line service quality may have a significant effect on overall service quality. I-4. Privacy of on-line service quality may have a significant effect on overall service quality. 3.2.2 The Hypothesis of On-line Service Recovery and Overall Service Quality The relation between on-line service recovery and overall service quality: II-1. Responsiveness of on-line service recovery may have a significant effect on overall service quality. II-2. Compensation of on-line service recovery may have a significant effect on overall service quality. II-3. Contact of on-line service recovery may have a significant effect on overall service quality. 3.2.3 The Hypothesis of Overall Service Quality and Customer Satisfaction The relation between overall service quality and customer satisfaction: III-1. Overall service quality may have a significant effect on customer satisfaction. 3.2.4 The Hypothesis of Customer Satisfaction and Loyalty Intention The relation between customer satisfaction and loyalty intention: IV-1. Customer satisfaction may have a significant effect on loyalty intention. 3.2.5 The Hypothesis of a Mediating Variable Wolfinbarger & Gilly (2003) and Parasuraman, Zeithaml & Malhotra (2005) made clear that each dimension of service quality has a significant effect on overall service quality. In addition, the authors analyzed empirically that each dimension of on-line service quality has a positive effect on customer satisfaction. From that viewpoint, this study examines whether overall service quality mediates between each dimension of on-line service quality and customer satisfaction, while also looking into the relation between on-line service quality and overall service quality, and between overall service quality and customer satisfaction. And as this study understands that each dimension of on-line service recovery also has an effect on overall service quality, it also examines whether overall service quality mediates between each dimension of on-line service recovery and customer satisfaction. Therefore, the following hypotheses are set up to examine whether overall service quality plays a role as the mediating variable. The relation between on-line service quality and customer satisfaction: V-1. Overall service quality may mediate the effects of efficiency of on-line service quality on customer satisfaction. V-2. Overall service quality may mediate the effects of system availability of on-line service quality on customer satisfaction. V-3. Overall service quality may mediate the effects of fulfillment of on-line service quality on customer satisfaction. V-4. Overall service quality may mediate the effects of privacy of on-line service quality on customer satisfaction. The relation between on-line service recovery and customer satisfaction: VI-1. Overall service quality may mediate the effects of responsiveness of on-line service recovery on customer satisfaction. VI-2. Overall service quality may mediate the effects of compensation of on-line service recovery on customer satisfaction. VI-3. Overall service quality may mediate the effects of contact of on-line service recovery on customer satisfaction.
4. Empirical Analysis 4.1 Research Design and Characteristics of the Data This empirical study targeted customers who had purchased air tickets at reservation and issuing Web sites. A total of 430 questionnaires were distributed, and 400 were collected. After the survey, a frequency analysis of sex and age, demographic factors, was performed to analyze the general characteristics of the sample. The sample consists of 146 males (42.7%) and 196 females (57.3%), so the proportion of females is a little higher. By age, 11 respondents are in their 10s (3.2%), 199 in their 20s (58.2%), 105 in their 30s (30.7%), 22 in their 40s (6.4%), and 5 in their 50s (1.5%). The higher proportions of those in their 20s and 30s can be presumed to reflect that they use the Internet frequently and purchase air tickets directly. 4.2 Assessment of Measuring Scales This study used internal consistency analysis to measure reliability, assessed with Cronbach's $\alpha$. As a result of the reliability test, the Cronbach's $\alpha$ value of every component exceeds 0.6, so the reliability of the measured variables is ensured. After the reliability test, exploratory factor analysis was performed; factor extraction used Principal Component Analysis (PCA), and factor rotation used Varimax, which is good for verifying mutual independence between factors. Based on the initial factor analysis, items impairing construct validity were removed, and a final factor analysis was performed to verify construct validity. 4.3 Hypothesis Testing 4.3.1 Hypothesis Testing by Regression Analysis (SPSS) 4.3.2 Analysis of the Mediation Effect To verify the mediation effect of overall service quality, this study used the phased analysis method proposed by Baron & Kenny (1986), which is generally used. As the analysis shows, Step 1 and Step 2 are significant, and at Step 3 the mediating variable has a significant effect on the dependent variable, as do the independent variables. To establish the partial mediation effect, the independent variables' estimates at Step 3 (standardized coefficient $\beta$: efficiency=.164, system availability=.074, fulfillment=.108, privacy=.107) are smaller than their estimates at Step 2 (standardized coefficient $\beta$: efficiency=.409, system availability=.227, fulfillment=.386, privacy=.237), so it was shown that overall service quality plays a partial mediating role between on-line service quality and satisfaction. Likewise, Step 1 and Step 2 are significant, and at Step 3 the mediating variable has a significant effect on the dependent variable, as do the independent variables. The independent variables' estimates at Step 3 (standardized coefficient $\beta$: responsiveness=.164, compensation=.117, contact=.113) are smaller than their estimates at Step 2 (standardized coefficient $\beta$: responsiveness=.409, compensation=.386, contact=.237), so it was shown that overall service quality plays a partial mediating role between on-line service recovery and satisfaction. Verified results based on the empirical analysis are as follows. First, all of the related hypotheses were supported, so on-line service quality has a positive effect on overall service quality; fulfillment has the largest effect on overall service quality, followed by efficiency, system availability, and privacy. Second, all of the related hypotheses were supported, so on-line service recovery has a positive effect on overall service quality; responsiveness has the largest effect, followed by contact and compensation. Third, the corresponding hypotheses were supported, so overall service quality has a positive effect on customer satisfaction, and customer satisfaction has a positive effect on loyalty intention. Fourth, the corresponding hypotheses were supported, so overall service quality plays a partial mediating role between on-line service quality and customer satisfaction, and between on-line service recovery and customer satisfaction.
5. Conclusion This study measured and analyzed the service quality and service recovery of Web sites where customers reserve and issue air tickets; by improving customer satisfaction based on the results, its final goal is to find how to retain loyal customers. Based on the results of the empirical analysis, the implications of this study are as follows. First, this study used E-S-QUAL, which measures on-line service quality, and E-RecS-QUAL, which measures on-line service recovery, as variables, overcoming the limitation of existing studies that used a modified SERVQUAL to measure the service quality of Web sites. Second, fulfillment and efficiency of on-line service quality have the most significant effects on overall service quality; therefore, Web sites for reserving and issuing air tickets should try harder to improve efficiency and fulfillment. Third, privacy of on-line service quality has the least significant effect on overall service quality, but this may be because customers are unsure whether the Web sites safely protect their confidential information, so the sites need to make this clear to customers. Fourth, customers often do not recognize the importance of on-line service recovery, but since on-line service recovery affects customer satisfaction and loyalty intention, its importance is significant and sites should prepare for it. Fifth, because overall service quality has a positive effect on customer satisfaction and loyalty intention, the Web sites for reserving and issuing air tickets should try harder to improve service quality and service recovery to maximize customer satisfaction and to secure loyal customers. Sixth, it is found that overall service quality plays a partial mediating role, but there are few existing studies on this, so more studies are needed.
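The Baron & Kenny procedure referenced in the abstract can be sketched as three ordinary regressions; the synthetic data below (one quality dimension X, overall service quality M as mediator, satisfaction Y) is a placeholder, not the study's survey data:

```python
# Sketch of the Baron & Kenny (1986) phased mediation test on synthetic data:
# Step 1: X -> M, Step 2: X -> Y, Step 3: X + M -> Y (X's coefficient shrinks
# while M stays significant => partial mediation).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
x = rng.normal(size=n)                                   # e.g., "efficiency" (synthetic)
m = 0.6 * x + rng.normal(scale=0.8, size=n)              # overall service quality
y = 0.5 * m + 0.2 * x + rng.normal(scale=0.8, size=n)    # customer satisfaction

def fit(dep, *preds):
    # OLS with an intercept on the given predictors
    design = sm.add_constant(np.column_stack(preds))
    return sm.OLS(dep, design).fit()

step1 = fit(m, x)        # X must predict the mediator M
step2 = fit(y, x)        # X must predict Y
step3 = fit(y, x, m)     # with M included, M significant and X's beta smaller

print(f"Step 2 beta for X: {step2.params[1]:.3f}")
print(f"Step 3 beta for X: {step3.params[1]:.3f}  (smaller => partial mediation)")
print(f"Step 3 beta for M: {step3.params[2]:.3f}  (p={step3.pvalues[2]:.4f})")
```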

