• Title/Summary/Keyword: Application Case


Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin; Ryoo, Eunchung; Jung, Min Kyu; Kim, Jae Kyeong; Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.185-202 / 2012
  • Since the value of information has been recognized in the information society, the usage and collection of information have become important. A facial expression, like an artistic painting, contains information that can be described in thousands of words. Following this idea, there have recently been a number of attempts to provide customers and companies with intelligent services that perceive human emotions through facial expressions. For example, MIT Media Lab, the leading organization in this research area, has developed a human emotion prediction model and applied its studies to commercial business. In the academic area, a number of conventional methods such as Multiple Regression Analysis (MRA) or Artificial Neural Networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy. This is inevitable, since MRA can only explain the linear relationship between the dependent variable and the independent variables. To mitigate the limitations of MRA, some studies, such as Jung and Kim (2012), have used ANN as the alternative and reported that ANN generated more accurate predictions than statistical methods like MRA. However, ANN has also been criticized for overfitting and the difficulty of network design (e.g. setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) in order to increase prediction accuracy. SVR is an extended version of the Support Vector Machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that is close (within a threshold ${\varepsilon}$) to the model prediction. Using SVR, we tried to build a model that can measure the level of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions to appropriate visual stimulating contents and extracted features from the data. Next, preprocessing steps were taken to choose statistically significant variables. In total, 297 cases were used for the experiment. As comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the ${\varepsilon}$-insensitive loss function and a grid search technique to find the optimal values of the parameters C, d, ${\sigma}^2$, and ${\varepsilon}$. For ANN, we adopted a standard three-layer backpropagation network with a single hidden layer. The learning rate and momentum rate of the ANN were set to 10%, and we used the sigmoid function as the transfer function of the hidden and output nodes. We performed the experiments repeatedly, varying the number of nodes in the hidden layer over n/2, n, 3n/2, and 2n, where n is the number of input variables. The stopping condition for the ANN was set to 50,000 learning events. We used MAE (Mean Absolute Error) as the measure for performance comparison. From the experiment, we found that SVR achieved the highest prediction accuracy on the hold-out data set compared to MRA and ANN. Regardless of the target variable (the level of arousal or the level of positive/negative valence), SVR showed the best performance on the hold-out data set.
ANN also outperformed MRA, but it showed considerably lower prediction accuracy than SVR for both target variables. The findings of our research are expected to be useful to researchers or practitioners who wish to build models for recognizing human emotions.
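
As a rough sketch of the modelling setup this abstract describes (an RBF-kernel SVR with the ${\varepsilon}$-insensitive loss, a grid search over its parameters, and MAE on a hold-out set), the following Python fragment uses scikit-learn. The data, the placeholder features, and the grid values are illustrative assumptions; the degree parameter d mentioned in the abstract would only apply if a polynomial kernel were chosen.

```python
# Sketch of the SVR setup described above: epsilon-insensitive loss with a
# grid search over C, gamma (related to 1/sigma^2 for the RBF kernel) and
# epsilon, evaluated by MAE on a hold-out set. Data and columns are placeholders.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(297, 10))          # 297 cases of facial features (placeholder)
y = rng.normal(size=297)                # arousal (or valence) level (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

param_grid = {
    "C": [0.1, 1, 10, 100],
    "gamma": [0.01, 0.1, 1],            # RBF kernel width
    "epsilon": [0.01, 0.1, 0.5],        # width of the insensitive tube
}
search = GridSearchCV(SVR(kernel="rbf"), param_grid,
                      scoring="neg_mean_absolute_error", cv=5)
search.fit(X_tr, y_tr)

pred = search.best_estimator_.predict(X_te)
print("best params:", search.best_params_)
print("hold-out MAE:", mean_absolute_error(y_te, pred))
```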

Measuring Consumer-Brand Relationship Quality (소비자-브랜드 관계 품질 측정에 관한 연구)

  • Kang, Myung-Soo; Kim, Byoung-Jai; Shin, Jong-Chil
    • Journal of Global Scholars of Marketing Science / v.17 no.2 / pp.111-131 / 2007
  • As brands have become core assets in creating corporate value, brand marketing has become one of the core strategies that corporations pursue. Recently, customer relationship management has increasingly centered on the brand around which goods are possessed and consumed, and management practices have developed accordingly. The main reason for the increased interest in the relationship between the brand and the consumer is the acquisition of individual consumers and the development of relationships with those consumers. Through the development of such relationships, a corporation is able to establish long-term ties with customers, which has become a competitive advantage for the corporation, and all of these processes have become strategic corporate assets. The importance of and growing interest in brands have also become a major academic issue. Brand equity, brand extension, brand identity, brand relationship, and brand community are the results derived from this interest. More specifically, in marketing, the study of brands has led to the study of the factors involved in building powerful brands and of the brand-building process. Recently, studies have concentrated primarily on the consumer-brand relationship, because brand loyalty cannot explain the dynamic quality aspects of loyalty, the consumer-brand relationship building process, and especially the interactions between brands and consumers. In studies of the consumer-brand relationship, a brand is not limited to an object of possession or consumption but is conceptualized as a partner. Most past studies concentrated on qualitative analyses of the consumer-brand relationship to show the depth and breadth of its performance, and studies in Korea have been the same. Recently, studies of the consumer-brand relationship have begun to concentrate on quantitative rather than qualitative analysis, or have gone further to identify the factors affecting the consumer-brand relationship. These new quantitative approaches show the possibility of using their results as a new way of viewing the consumer-brand relationship and of applying these concepts to marketing. Quantitative studies of the consumer-brand relationship already exist, but none of them treat its sub-dimensions in a way that provides theoretical support for the measurement. In other words, most studies simply add up or average the sub-dimensions of the consumer-brand relationship. However, such an approach presupposes that the sub-dimensions form a single identical construct, and most past studies do not meet the condition that the sub-dimensions constitute a one-dimensional construct. From this, we question the validity of past studies and note their limits. The main purpose of this paper is to overcome those limits by making practical use of previous studies that treated sub-dimensions as a one-dimensional construct (Narver & Slater, 1990; Cronin & Taylor, 1992; Chang & Chen, 1998). In this study, two arbitrary groups were formed to evaluate the reliability of the measurements, and reliability analyses were performed on each group. For convergent validity, correlations, Cronbach's alpha, and a one-factor-solution exploratory analysis were used. For discriminant validity, the correlations of the consumer-brand relationship were compared with those of involvement, a concept similar to the consumer-brand relationship.
The test of dependent correlations by Cohen and Cohen (1975, p.35) also indicated that the six sub-dimensions of the consumer-brand relationship form a construct distinct from involvement. Through these results, we were able to conclude that the sub-dimensions of the consumer-brand relationship can be viewed as a one-dimensional construct, and that this one-dimensional construct can be measured with reliability and validity. The result of this research is theoretically meaningful in that it treats the consumer-brand relationship as a one-dimensional construct and provides a methodological basis for doing so. This research also opens the possibility of new research on the consumer-brand relationship, since it shows that the one-dimensional construct constituting the relationship can be operationalized. In previous research, the consumer-brand relationship was classified into several types on the basis of its components, and a number of studies were performed with priority given to those types. As this research shows that a one-dimensional construct can be operationalized, it is expected that various studies making practical use of the level or strength of the consumer-brand relationship as a construct will be performed, rather than research focused only on separate relationship types. Additionally, we now have a theoretical basis for operationalizing the consumer-brand relationship as a one-dimensional construct, and it is anticipated that studies will use this construct as a dependent variable, parameter, mediator, and so on.
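
A minimal sketch of the reliability and convergent-validity checks the abstract mentions (Cronbach's alpha, inter-item correlations, and a one-factor solution), assuming synthetic survey items in place of the authors' data:

```python
# Sketch of the reliability checks mentioned above: Cronbach's alpha over the
# items of a scale and a one-factor principal-component check.
# The DataFrame and column names are placeholders, not the authors' data.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha = k/(k-1) * (1 - sum(item variances) / variance of total)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=200)
df = pd.DataFrame({f"item{i}": latent + rng.normal(scale=0.5, size=200)
                   for i in range(1, 7)})   # six sub-dimension scores (placeholder)

print("alpha:", round(cronbach_alpha(df), 3))
print("inter-item correlations:\n", df.corr().round(2))

# One-factor solution: share of variance captured by the first principal component.
evals = np.linalg.eigvalsh(np.corrcoef(df.T))
print("variance explained by first factor:", round(evals[-1] / evals.sum(), 3))
```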


Clinical Applications and Efficacy of Korean Ginseng (고려인삼의 주요 효능과 그 임상적 응용)

  • Nam, Ki-Yeul
    • Journal of Ginseng Research / v.26 no.3 / pp.111-131 / 2002
  • Korean ginseng (Panax ginseng C.A. Meyer) has received a great deal of attention in both the Orient and the West as a tonic agent, health food and/or alternative herbal therapeutic agent. However, controversy remains with respect to the scientific evidence on its pharmacological effects, especially the evaluation of clinical efficacy and the methodological approach. The author reviewed articles published since 1980, when pharmacodynamic studies on ginseng began in earnest. Special attention was paid to metabolic disorders including diabetes mellitus, circulatory disorders, malignant tumors, sexual dysfunction, and physical and mental performance, to give clear information to those who are interested in the pharmacological study of ginseng and to promote its clinical use. With respect to chronic diseases such as diabetes mellitus, atherosclerosis, high blood pressure, malignant disorders, and sexual disorders, it seems that ginseng plays a preventive and restorative role rather than a therapeutic one. In particular, ginseng plays a significant role in ameliorating subjective symptoms and preventing quality of life from deteriorating under long-term exposure to chemical therapeutic agents. The potency of ginseng also seems to be mild; therefore it could be more effective when used concomitantly with conventional therapy. Clinical studies on the tonic effect of ginseng on work performance demonstrated that physical and mental dysfunction induced by various stresses is improved by increasing the adaptability of the physical condition. However, the results obtained from clinical studies cannot be stated as indications, as they vary with the scientists who performed the studies. In this respect, standardized ginseng products and systematic planning of clinical research with double-blind randomized controlled trials are needed to assess the real efficacy of ginseng and to propose indications. The pharmacological mode of action of ginseng has not yet been fully elucidated. Pharmacodynamic and pharmacokinetic research reveals that the role of ginseng does not seem to be confined to a single organ. It has been shown that ginseng plays a beneficial role in general organ systems such as the central nervous, endocrine, metabolic, and immune systems, which means that ginseng improves general physical and mental conditions. Such a multivalent effect of ginseng can be attributed to its main active components, the ginsenosides, or to non-saponin compounds which have recently been suggested to be additional active ingredients. As is generally the case with other herbal medicines, the effects of ginseng cannot be attributed to a single compound or group of components. Diverse ingredients play synergistic or antagonistic roles with each other and act in a harmonized manner. A few cases of adverse effects in clinical use have been reported; however, they are not observed when standardized ginseng products are used and the recommended dose is administered. Unfavorable interactions with other drugs have also been suggested, although information on the products and administered dosages is not available. Efficacy, safety, and interactions or contraindications with other medicines have to be investigated more intensively in order to promote the clinical application of ginseng. For example, the recommended daily doses are not in agreement: 1-2 g in the West and 3-6 g in the Orient. The duration of administration also seems to vary according to the purpose.
Two to three months are generally recommended to feel the benefit, but the time- and dose-dependent effects of ginseng still need to be clarified. Furthermore, the effect of ginsenosides transformed by the intestinal microflora, and the differential effects associated with ginsenoside content and composition, should also be clinically evaluated in the future. In conclusion, the more widespread use of ginseng as a herbal medicine or nutraceutical supplement warrants more rigorous investigation to assess its efficacy and safety. In addition, careful quality control of ginseng preparations should be carried out to ensure acceptable standardization of commercial products.

Conclusion of Conventions on Compensation for Damage Caused by Aircraft in Flight to Third Parties (항공운항 시 제3자 피해 배상 관련 협약 채택 -그 혁신적 내용과 배경 고찰-)

  • Park, Won-Hwa
    • The Korean Journal of Air & Space Law and Policy / v.24 no.1 / pp.35-58 / 2009
  • A treaty governing compensation for damage caused by aircraft to third parties on the surface was first adopted in Rome in 1933, but without support from the international aviation community it was replaced by another convention adopted, again in Rome, in 1952. Despite the increase in the compensation amount and some improvements over the old version, the Rome Convention 1952, with 49 State parties as of today, is not considered universally accepted. Neither is the Montreal Protocol 1978 amending the Rome Convention 1952, with only 12 State parties excluding major aviation powers such as the USA, Japan, the UK, and Germany. Consequently, it is mostly local laws that apply to compensation cases of surface damage caused by aircraft, contrary to the intention of the countries and people involved in drafting the early conventions on surface damage. The terrorist attacks of 9/11 proved that even the strongest power in the world, the USA, cannot easily bear all the damage done to third parties by terrorist acts involving aircraft. Accordingly, as a matter of urgency, the International Civil Aviation Organization (ICAO) took up the matter and had it considered among member States for a few years through its Legal Committee before proposing it for adoption at the Diplomatic Conference held in Montreal, Canada, from 20 April to 2 May 2009. Two treaties based on the drafts of the Legal Committee were adopted in Montreal by consensus: one on compensation for general risk damage caused by aircraft, the other on compensation for damage from acts of unlawful interference involving aircraft. Both Conventions improve on the old Convention/Protocol in many respects. Deleting 'surface' from the definition of damage to third parties in the title and contents of the Conventions is the first improvement, because third-party damage is not necessarily limited to the surface of the soil and sea of the Earth; mid-air collision thus now falls within the scope of application. A large increase in the compensation limits is another improvement, as is the inclusion of mental injury accompanying bodily injury as damage to be compensated. In fact, recent jurisprudence in cases involving passengers in aircraft accidents has held aircraft operators liable for such mental injuries. However, the "Terror Convention" on unlawful interference with aircraft contains some provisions that are innovative and others that are questionable. While establishing the International Civil Aviation Compensation Fund to supplement, when necessary, damages that exceed the limit covered by aircraft operators through insurance is an innovation, leaving the fate of the Convention to a single State Party, implying in fact the USA, harms its universality. Furthermore, taking into account that damage incurred by terrorist acts, wherever they take place and whichever sector or industry they target, falls within the domain of State responsibility, imposing the burden of compensation for terrorist acts in the air industry on aircraft operators and passengers/shippers is a source of serious concern for the prospects of the Convention. This is all the more so because the risks of terrorist acts, normally aimed at a few countries because of the current international political situation, are spread out to many innocent countries without quid pro quo.


Intelligent Brand Positioning Visualization System Based on Web Search Traffic Information : Focusing on Tablet PC (웹검색 트래픽 정보를 활용한 지능형 브랜드 포지셔닝 시스템 : 태블릿 PC 사례를 중심으로)

  • Jun, Seung-Pyo; Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.93-111 / 2013
  • As Internet and information technology (IT) continues to develop and evolve, the issue of big data has emerged at the foreground of scholarly and industrial attention. Big data is generally defined as data that exceed the range that can be collected, stored, managed and analyzed by existing conventional information systems and it also refers to the new technologies designed to effectively extract values from such data. With the widespread dissemination of IT systems, continual efforts have been made in various fields of industry such as R&D, manufacturing, and finance to collect and analyze immense quantities of data in order to extract meaningful information and to use this information to solve various problems. Since IT has converged with various industries in many aspects, digital data are now being generated at a remarkably accelerating rate while developments in state-of-the-art technology have led to continual enhancements in system performance. The types of big data that are currently receiving the most attention include information available within companies, such as information on consumer characteristics, information on purchase records, logistics information and log information indicating the usage of products and services by consumers, as well as information accumulated outside companies, such as information on the web search traffic of online users, social network information, and patent information. Among these various types of big data, web searches performed by online users constitute one of the most effective and important sources of information for marketing purposes because consumers search for information on the internet in order to make efficient and rational choices. Recently, Google has provided public access to its information on the web search traffic of online users through a service named Google Trends. Research that uses this web search traffic information to analyze the information search behavior of online users is now receiving much attention in academia and in fields of industry. Studies using web search traffic information can be broadly classified into two fields. The first field consists of empirical demonstrations that show how web search information can be used to forecast social phenomena, the purchasing power of consumers, the outcomes of political elections, etc. The other field focuses on using web search traffic information to observe consumer behavior, identifying the attributes of a product that consumers regard as important or tracking changes on consumers' expectations, for example, but relatively less research has been completed in this field. In particular, to the extent of our knowledge, hardly any studies related to brands have yet attempted to use web search traffic information to analyze the factors that influence consumers' purchasing activities. This study aims to demonstrate that consumers' web search traffic information can be used to derive the relations among brands and the relations between an individual brand and product attributes. When consumers input their search words on the web, they may use a single keyword for the search, but they also often input multiple keywords to seek related information (this is referred to as simultaneous searching). A consumer performs a simultaneous search either to simultaneously compare two product brands to obtain information on their similarities and differences, or to acquire more in-depth information about a specific attribute in a specific brand. 
Web search traffic information shows that the quantity of simultaneous searches using certain keywords increases when the relation is closer in the consumer's mind and it will be possible to derive the relations between each of the keywords by collecting this relational data and subjecting it to network analysis. Accordingly, this study proposes a method of analyzing how brands are positioned by consumers and what relationships exist between product attributes and an individual brand, using simultaneous search traffic information. It also presents case studies demonstrating the actual application of this method, with a focus on tablets, belonging to innovative product groups.
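
A small sketch of the kind of network analysis described here: simultaneous-search volumes between keyword pairs become edge weights, and centrality or brand-attribute edge weights indicate positioning. The keywords and volumes below are invented placeholders, not Google Trends data.

```python
# Sketch of the co-search network analysis described above: treat simultaneous-
# search volume between two keywords (brands or attributes) as an edge weight
# and inspect which brands and attributes cluster together.
import networkx as nx

co_search = {                      # placeholder simultaneous-search volumes
    ("BrandA", "BrandB"): 90,
    ("BrandA", "battery"): 40,
    ("BrandB", "display"): 55,
    ("BrandB", "battery"): 15,
    ("BrandC", "price"): 70,
}

G = nx.Graph()
for (a, b), volume in co_search.items():
    G.add_edge(a, b, weight=volume)

# Weighted degree as a rough measure of how central each brand/attribute is.
centrality = dict(G.degree(weight="weight"))
print(sorted(centrality.items(), key=lambda kv: -kv[1]))

# Edge weights between a brand and attributes indicate which attributes
# consumers associate most strongly with that brand.
print({n: w for _, n, w in G.edges("BrandB", data="weight")})
```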

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.29-45 / 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. Statistical techniques, including multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis, have traditionally been used in bond rating. However, one major drawback is that they rest on strict assumptions. Such strict assumptions include linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions of traditional statistics have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising method for classification and regression analysis. SVM learns a separating hyperplane that maximizes the margin between two categories. SVM is simple enough to be analyzed mathematically and leads to high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound of the generalization error. In addition, the solution of SVM may be a global optimum, and thus overfitting is unlikely to occur with SVM. Moreover, SVM does not require many data samples for training, since it builds prediction models using only some representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can potentially degrade SVM's performance. First, SVM was originally proposed for solving binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class classification problems as much as SVM does in binary-class classification. Second, approximation algorithms (e.g. decomposition methods, the sequential minimal optimization algorithm) can be used for effective multi-class computation to reduce computation time, but they may deteriorate classification performance. Third, a difficulty in multi-class prediction problems is the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers the number of instances in another class. Such data sets often cause a default classifier to be built due to the skewed boundary, and thus reduce the classification accuracy of such a classifier. SVM ensemble learning is one of the machine learning methods that cope with the above drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on the misclassified observations through iterations. Observations that are incorrectly predicted by previous classifiers are chosen more often than examples that are correctly predicted.
Thus Boosting attempts to produce new classifiers that are better able to predict examples for which the current ensemble's performance is poor. In this way, it can reinforce the training of the misclassified observations of the minority class. This paper proposes a multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, it can perform the learning process while considering the geometric mean-based accuracy and errors over the multiple classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine the feasibility of MGM-Boost. 10-fold cross-validation was performed three times with different random seeds in order to ensure that the comparison among the three different classifiers does not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and then each set is in turn used as the test set while the classifier trains on the other nine sets; that is, the cross-validated folds were tested independently for each algorithm. Through these steps, we obtained the results for the classifiers on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier over the 30 folds is significantly different. The results indicate that the performance of MGM-Boost differs significantly from the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
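
The evaluation protocol described above (three repetitions of 10-fold cross-validation, geometric mean-based accuracy, and a paired t-test) can be sketched as follows. Note that this uses scikit-learn's standard AdaBoost and SVM on synthetic data as stand-ins and does not implement MGM-Boost itself.

```python
# Sketch of the evaluation protocol described above (not of MGM-Boost itself):
# 10-fold cross-validation repeated three times with different seeds,
# geometric mean-based accuracy per fold, and a paired t-test between classifiers.
import numpy as np
from scipy import stats
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

def geometric_mean_accuracy(y_true, y_pred):
    """Geometric mean of per-class recalls (the notion MGM-Boost builds on)."""
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.prod(recalls) ** (1.0 / len(classes)))

# Synthetic imbalanced 3-class data in place of the bond-rating data set.
X, y = make_classification(n_samples=600, n_classes=3, n_informative=6,
                           weights=[0.6, 0.3, 0.1], random_state=0)

scores = {"AdaBoost": [], "SVM": []}
for seed in (0, 1, 2):                                   # three repetitions
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    for train, test in cv.split(X, y):
        for name, clf in (("AdaBoost", AdaBoostClassifier(random_state=seed)),
                          ("SVM", SVC())):
            clf.fit(X[train], y[train])
            scores[name].append(
                geometric_mean_accuracy(y[test], clf.predict(X[test])))

t, p = stats.ttest_rel(scores["AdaBoost"], scores["SVM"])
print({k: round(float(np.mean(v)), 4) for k, v in scores.items()},
      "t=%.2f p=%.4f" % (t, p))
```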

Deep Learning-based Professional Image Interpretation Using Expertise Transplant (전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론)

  • Kim, Taejin; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.79-104 / 2020
  • Recently, as deep learning has attracted attention, its use is being considered as a method for solving problems in various fields. In particular, deep learning is known to perform very well when applied to unstructured data such as text, sound, and images, and many studies have proven its effectiveness. Owing to the remarkable development of text and image deep learning technology, interest in image captioning technology and its applications is rapidly increasing. Image captioning is a technique that automatically generates relevant captions for a given image by handling both image comprehension and text generation simultaneously. In spite of the high entry barrier of image captioning, which requires analysts to be able to process both image and text data, it has established itself as one of the key fields in A.I. research owing to its wide applicability. In addition, much research has been conducted to improve the performance of image captioning in various respects. Recent studies attempt to create advanced captions that not only describe an image accurately but also convey the information contained in the image more sophisticatedly. Despite these many efforts, it is difficult to find research that interprets images from the perspective of domain experts in each field rather than from the perspective of the general public. Even for the same image, the parts of interest may differ according to the professional field of the person who encounters the image. Moreover, the way of interpreting and expressing the image also differs according to the level of expertise. The public tends to recognize an image from a holistic and general perspective, that is, by identifying the image's constituent objects and their relationships. By contrast, domain experts tend to recognize an image by focusing on the specific elements necessary to interpret it based on their expertise. This implies that the meaningful parts of an image differ depending on the viewer's perspective, even for the same image, and image captioning needs to reflect this phenomenon. Therefore, in this study, we propose a method to generate domain-specialized captions for an image by utilizing the expertise of experts in the corresponding domain. Specifically, after pre-training on a large amount of general data, the expertise of the field is transplanted through transfer learning with a small amount of expertise data. However, a simple application of transfer learning to expertise data may invoke another type of problem: simultaneous learning with captions of various characteristics may cause a so-called 'inter-observation interference' problem, which makes it difficult to learn each characteristic point of view purely. When learning with a vast amount of data, most of this interference is self-purified and has little impact on the learning results. By contrast, in the case of fine-tuning, where learning is performed on a small amount of data, the impact of such interference on learning can be relatively large. To solve this problem, we therefore propose a novel 'Character-Independent Transfer-learning' that performs transfer learning independently for each characteristic.
In order to confirm the feasibility of the proposed methodology, we performed experiments utilizing the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. Additionally, with the advice of an art therapist, about 300 pairs of images and expertise captions were created, and these data were used for the expertise transplantation experiments. The experiment confirmed that captions generated according to the proposed methodology are written from the perspective of the implanted expertise, whereas captions generated through learning on general data contain much content irrelevant to expert interpretation. In this paper, we propose a novel approach to specialized image interpretation. To achieve this goal, we present a method that uses transfer learning to generate captions specialized in a specific domain. In the future, by applying the proposed methodology to expertise transplantation in various fields, we expect that much research will be actively conducted to solve the problem of the lack of expertise data and to improve the performance of image captioning.
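
A minimal PyTorch sketch of the transfer-learning step described here: a pretrained image encoder is kept frozen and only a small caption decoder is fine-tuned on the expertise captions. The architecture, dimensions, and vocabulary are assumptions for illustration, not the authors' actual model.

```python
# Minimal sketch of encoder-frozen fine-tuning on a small expertise caption set.
# Shapes, vocabulary size, and the decoder design are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

class CaptionDecoder(nn.Module):
    def __init__(self, feat_dim=512, vocab_size=5000, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.init_h = nn.Linear(feat_dim, hidden)      # image feature -> initial state
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, feats, tokens):
        h0 = self.init_h(feats).unsqueeze(0)
        emb = self.embed(tokens)
        out, _ = self.rnn(emb, h0)
        return self.out(out)

# Pretrained encoder stands in for the "general knowledge" learned in pre-training.
encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder.fc = nn.Identity()                             # use pooled 512-d features
encoder.eval()
for p in encoder.parameters():                         # freeze the general knowledge
    p.requires_grad = False

decoder = CaptionDecoder()
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# One fine-tuning step on a (tiny) expertise batch: images + tokenized captions.
images = torch.randn(4, 3, 224, 224)                   # placeholder expertise images
tokens = torch.randint(0, 5000, (4, 12))               # placeholder caption token ids
logits = decoder(encoder(images), tokens[:, :-1])      # predict next token
loss = loss_fn(logits.reshape(-1, 5000), tokens[:, 1:].reshape(-1))
loss.backward()
optimizer.step()
print("fine-tuning loss:", float(loss))
```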

Compensation for Personal Injury and the Insurer's Claim for Indemnity - Focused on the NHIC's Claim for Indemnity - (인신사고로 인한 손해배상과 보험자의 구상권 - 국민건강보험공단의 구상권을 중심으로 -)

  • Noh, Tae Heon
    • The Korean Society of Law and Medicine / v.16 no.2 / pp.87-130 / 2015
  • In a case in which National Health Insurance Corporation (NHIC) pays medical care expenses to a victim of a traffic accident resulting in injury or death and asks the assailant for compensation of its share in the medical care expenses, as the precedent treats the subrogation of a claim set by National Health Insurance Act the same as that set by Industrial Accident Compensation Insurance Act, it draws the range of its compensation from the range of deduction, according to the principle of deduction after offsetting and acknowledges the compensation of all medical care expenses borne by the NHIC, within the amount of compensation claimed by the victim. However, both the National Health Insurance Act and the Industrial Accident Compensation Insurance Act are laws that regulate social insurance, but medical care expenses in the National Health Insurance Act have a character of 'an underinsurance that fixes the ratio of indemnification,' while insurance benefit on the Industrial Accident Compensation Insurance Act has a character of full insurance, or focuses on helping the insured that suffered an industrial accident lead a life, approximate to that in the past, regardless of the amount of damages according to its character of social insurance. Therefore, there is no reason to treat the subrogation of a claim on the National Health Insurance Act the same as that on the Industrial Accident Compensation Insurance Act. Since the insured loses the right of claim acquired by the insurer by subrogation in return for receiving a receipt, there is no benefit from receiving insurance in the range. Thus, in a suit in which the insured seeks compensation for damages from the assailant, there is no room for the application of the legal principle of offset of profits and losses, and the range of subrogation of a claim or the amount of deduction from compensation should be decided by the contract between the persons directly involved or a related law. Therefore, it is not reasonable that the precedent draws the range of the NHIC's compensation from the principle of deduction after offsetting. To interpret Clause 1, Article 58 of the National Health Insurance Act that sets the range of the NHIC's compensation uniformly and systematically in combination with Clause 2 of the same article that sets the range of exemption, if the compensation is made first, it is reasonable to fix the range of the NHIC's compensation by multiplying the medical care expenses paid by the ratio of the assailant's liability. This is contrasted with the range of the Korea Labor Welfare Corporation's compensation which covers the total amount of the claim of the insured within the insurance benefit paid in the interpretation of Clauses 1 and 2, Article 87 of the Industrial Accident Compensation Insurance Act. In the meantime, there are doubts about why the profit should be deducted from the amount of compensation claimed, though it is enough for the principle of deduction after offsetting that the precedent took as the premise in judging the range of the NHIC's compensation to deduct the profit made by the victim from the amount of damages, so as to achieve the goal of not attributing profit more than the amount of damage to a victim; whether it is reasonable to attribute all the profit made by the victim to the assailant, while the damages suffered by the victim are distributed fairly; and whether there is concrete validity in actual cases. 
Therefore, the legal principle of the precedent concerning the range of the NHIC's compensation and the legal principle of the precedent following the principle of deduction after offsetting should be reconsidered.
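
A purely illustrative calculation of the compensation range the article proposes (medical care expenses paid by the NHIC multiplied by the assailant's liability ratio), contrasted with recovery of the full amount paid; the figures are invented.

```python
# Illustrative arithmetic only: the article proposes limiting the NHIC's recovery
# to (medical care expenses paid) x (assailant's liability ratio), while the
# precedent allows recovery of the full paid amount within the victim's claim.
medical_expenses_paid_by_nhic = 10_000_000   # KRW, hypothetical figure
assailant_liability_ratio = 0.7              # e.g. the assailant is 70% at fault

proposed_recovery = medical_expenses_paid_by_nhic * assailant_liability_ratio
precedent_recovery = medical_expenses_paid_by_nhic   # up to the victim's claim

print(f"proposed rule : {proposed_recovery:,.0f} KRW")
print(f"precedent     : {precedent_recovery:,.0f} KRW")
```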


An Empirical Study on Perceived Value and Continuous Intention to Use of Smart Phone, and the Moderating Effect of Personal Innovativeness (스마트폰의 지각된 가치와 지속적 사용의도, 그리고 개인 혁신성의 조절효과)

  • Han, Joonhyoung; Kang, Sungbae; Moon, Taesoo
    • Asia pacific journal of information systems / v.23 no.4 / pp.53-84 / 2013
  • With the rapid development of ICT (Information and Communications Technology), new services created by the convergence of mobile networks and application technologies have begun to appear. Today, the smart phone, with its new ICT convergence capabilities, is exceedingly popular and very useful as a new tool for developing business opportunities. Previous studies based on the Technology Acceptance Model (TAM) suggested critical factors that should be considered for acquiring new customers and retaining existing users in the smart phone market. However, they were limited to a technology-acceptance view rather than a value-based approach. Prior studies on customers' adoption of electronic utilities such as smart phones showed that antecedents such as perceived benefit and perceived sacrifice can explain the causality between what is perceived and what is acquired across diverse contexts. This research therefore conceptualizes perceived value as a trade-off between perceived benefit and perceived sacrifice, and investigates perceived value in order to grasp users' continuous intention to use a smart phone. The purpose of this study is to investigate the structured relationship between the benefits (quality, usefulness, playfulness) and sacrifices (technicality, cost, security risk) perceived by smart phone users, perceived value, and continuous intention to use. In addition, this study analyzes the differences between two subgroups of smart phone users classified by their degree of personal innovativeness. Personal innovativeness helps us understand the moderating effect between how perceptions are formed and continuous intention to use a smart phone. This study surveyed smart phone users through e-mail, direct mail, and interviews, and an empirical analysis based on 330 respondents was conducted to test the hypotheses. First, the results showed that perceived usefulness, among the three factors of perceived benefit, has the strongest positive impact on perceived value, followed by perceived playfulness and perceived quality. Second, the results showed that perceived cost, among the three factors of perceived sacrifice, has a significantly negative impact on perceived value, whereas technicality and security risk have no significant impact. The results also showed that perceived value has a significant direct impact on continuous intention to use a smart phone. In this regard, marketing managers of smart phone companies should pay more attention to improving the task efficiency and performance of smart phones, including their rate systems. Additionally, to test the moderating effect of personal innovativeness, this research conducted a multi-group analysis by the degree of personal innovativeness of smart phone users. In the group with a high level of innovativeness, perceived usefulness had the strongest positive influence on perceived value among the factors, whereas in the group with a low level of innovativeness perceived playfulness was the strongest positive factor. The result for the group with a high level of innovativeness indicates that innovators and early adopters can cope with higher levels of cost and risk and expect to develop more positive intentions toward higher performance through the use of an innovation.
Hedonic behavior in the group with a low level of innovativeness, by contrast, aims to provide self-fulfilling value to users, in contrast to the utilitarian perspective, which aims to provide instrumental value. With regard to perceived sacrifice, both groups in general showed a negative impact on perceived value, and the group with a high level of innovativeness showed a smaller overall negative impact on perceived value than the group with a low level of innovativeness across all factors. In both groups, perceived cost had the strongest negative influence on perceived value among the sacrifice factors. In addition, the analysis for the group with a high level of innovativeness showed that perceived technicality was a positive factor influencing perceived value, whereas the analysis for the group with a low level of innovativeness showed that perceived security risk was the second strongest negative factor influencing perceived value. Unlike previous studies, this study focuses on the factors influencing continuous intention to use a smart phone, rather than on initial purchase and adoption. First, perceived value, which was used to explain users' adoption behavior, has a mediating effect among perceived benefit, perceived sacrifice, and continuous intention to use a smart phone. Second, perceived usefulness has the strongest positive influence on perceived value, while perceived cost has a significant negative influence on perceived value. Third, perceived value, as in prior studies, has a strong positive influence on continuous intention to use a smart phone. Fourth, in the multi-group analysis by degree of personal innovativeness, perceived usefulness has the strongest positive influence on perceived value in the group with a high level of innovativeness, whereas perceived playfulness has the strongest positive influence in the group with a low level of innovativeness. This result shows that early adopters intend to adopt a smart phone as a tool to make their work useful, whereas market followers intend to adopt a smart phone as a tool to make their time enjoyable. In terms of marketing strategy, smart phone marketing managers should pay more attention to identifying their customers' lifetime value by phase of smart phone adoption, as well as to understanding their behavioral intention to accept risk and uncertainty positively. The primary academic contribution of this study is to employ the VAM (Value-based Adoption Model) as a conceptual foundation, in contrast to the TAM (Technology Acceptance Model) widely used in previous studies. VAM is more useful than TAM for understanding continuous intention to use a smart phone as a new IT utility adopted by individuals. Perceived value dominantly influences continuous intention to use a smart phone, and the results of this study justify the research model's treatment of each antecedent of perceived value as a benefit or a sacrifice component. While TAM is widely used for user acceptance of new technology, it is limited in explaining the adoption of new IT such as the smart phone, because customers' behavioral intention depends on the value they attach to the object. In terms of theoretical approach, this study contributes to the development, design, and marketing of smart phones.
The practical contribution of this study is to suggest useful decision alternatives for formulating marketing strategies to acquire and retain long-term customers in the smart phone business. Since potential customers are interested in both benefit and sacrifice when evaluating the value of a smart phone, marketing managers in smart phone companies have to put more effort into creating customer value of low sacrifice and high benefit so that customers will continue to adopt smart phones. In particular, this study shows that innovators and early adopters with a high level of innovativeness show higher adoption than market followers with a low level of innovativeness in terms of perceived usefulness and perceived cost. To formulate a marketing strategy for smart phone diffusion, marketing managers have to pay more attention to identifying not only their customers' benefit and sacrifice components but also their customers' lifetime value in adopting smart phones.
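
As a simplified stand-in for the multi-group analysis described in this abstract, one can split respondents by personal innovativeness and regress perceived value on the benefit and sacrifice components within each group. The sketch below uses synthetic data and ordinary least squares rather than the structural (latent-variable) model actually employed; all variable names and coefficients are placeholders.

```python
# Simplified stand-in for the multi-group comparison: median-split the sample by
# personal innovativeness, then regress perceived value on benefit and sacrifice
# components within each group. Data is synthetic, not the 330-respondent survey.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 330
df = pd.DataFrame({
    "usefulness": rng.normal(size=n), "playfulness": rng.normal(size=n),
    "quality": rng.normal(size=n), "technicality": rng.normal(size=n),
    "cost": rng.normal(size=n), "security_risk": rng.normal(size=n),
    "innovativeness": rng.normal(size=n),
})
df["perceived_value"] = (0.5 * df.usefulness + 0.3 * df.playfulness
                         - 0.4 * df.cost + rng.normal(scale=0.5, size=n))

predictors = ["usefulness", "playfulness", "quality",
              "technicality", "cost", "security_risk"]
high = df[df.innovativeness >= df.innovativeness.median()]
low = df[df.innovativeness < df.innovativeness.median()]

for name, group in (("high innovativeness", high), ("low innovativeness", low)):
    X = sm.add_constant(group[predictors])
    model = sm.OLS(group["perceived_value"], X).fit()
    print(name, model.params.round(3).to_dict())
```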

Application of LCA on Lettuce Cropping System by Bottom-up Methodology in Protected Cultivation (시설상추 농가를 대상으로 하는 bottom-up 방식 LCA 방법론의 농업적 적용)

  • Ryu, Jong-Hee; Kim, Kye-Hoon; Kim, Gun-Yeob; So, Kyu-Ho; Kang, Kee-Kyung
    • Korean Journal of Soil Science and Fertilizer / v.44 no.6 / pp.1195-1206 / 2011
  • This study was conducted to apply LCA (Life Cycle Assessment) methodology to lettuce (Lactuca sativa L.) production systems in Namyangju as a case study. Five lettuce-growing farms with three different farming systems (two farms with an organic farming system, one farm with a system without agricultural chemicals, and two farms with a conventional farming system) were selected in Namyangju city of Gyeonggi province in Korea. The input data for the LCA were collected by interviewing the farmers. The system boundary was set at one cropping season without heating and cooling systems, in order to reduce uncertainties in data collection and calculation. Sensitivity analysis was carried out to find the effect of the type and amount of fertilizer and of energy use on GHG (greenhouse gas) emission. The results of establishing the GTG (gate-to-gate) inventory revealed that the quantities of fertilizer and energy input had the largest values in producing 1 kg of lettuce, while the amount of pesticide input was the smallest. The amount of electricity input was the largest in all farms except farm 1, which purchased seedlings from outside. The quantities of direct field emissions of $CO_2$, $CH_4$ and $N_2O$ from farm 1 to farm 5 were 6.79E-03 (farm 1), 8.10E-03 (farm 2), 1.82E-02 (farm 3), 7.51E-02 (farm 4) and 1.61E-02 (farm 5) kg $kg^{-1}$ lettuce, respectively. According to the result of the LCI analysis focused on GHG, $CO_2$ emission was 2.92E-01 (farm 1), 3.76E-01 (farm 2), 4.11E-01 (farm 3), 9.40E-01 (farm 4) and 5.37E-01 (farm 5) kg $CO_2$ $kg^{-1}$ lettuce, respectively. Carbon dioxide contributed the most to GHG emission. Carbon dioxide was mainly emitted in the process of energy production, which accounted for 67~91% of the $CO_2$ emission from every production process of the 5 farms. Due to the higher proportion of $CO_2$ emission from the production of compound fertilizer in the conventional crop system, the conventional crop system had a lower proportion of $CO_2$ emission from energy production than the organic crop system did. With increasing inorganic fertilizer input, the lettuce cultivation stage accounted for a higher proportion of $N_2O$ emission; accordingly, this stage accounted for 87% of total $N_2O$ emission in farms 1 and 2, and 64% in farm 3. The carbon footprints from farm 1 to farm 5 were 3.40E-01 (farm 1), 4.31E-01 (farm 2), 5.32E-01 (farm 3), 1.08E+00 (farm 4) and 6.14E-01 (farm 5) kg $CO_2$-eq. $kg^{-1}$ lettuce, respectively. The results of the sensitivity analysis revealed that soybean meal was the most sensitive among the 4 types of fertilizer, while compound fertilizer was the least sensitive among the fertilizer inputs. Electricity showed the largest sensitivity on $CO_2$ emission, whereas the variation in $N_2O$ was almost zero.
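
The carbon-footprint figures above aggregate the inventoried gases into $CO_2$-equivalents. A minimal sketch of that aggregation step, with placeholder emission values and commonly used 100-year GWP factors, follows.

```python
# Sketch of the carbon-footprint aggregation used in an LCA like the one above:
# convert each greenhouse gas to CO2-equivalents with a 100-year GWP factor and
# sum direct field emissions with upstream (energy, fertilizer) emissions per kg
# of lettuce. Emission figures below are placeholders, not the paper's inventory.
GWP = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}   # commonly used 100-year factors

def carbon_footprint(direct_field, upstream):
    """kg CO2-eq per kg lettuce: per-gas field emissions plus upstream CO2-eq."""
    field = sum(GWP[gas] * kg for gas, kg in direct_field.items())
    return field + upstream

farm = {
    "direct_field": {"CO2": 5.0e-03, "CH4": 1.0e-05, "N2O": 1.0e-04},  # kg per kg lettuce
    "upstream": 3.0e-01,  # kg CO2-eq per kg lettuce from energy/fertilizer production
}
print(round(carbon_footprint(farm["direct_field"], farm["upstream"]), 3),
      "kg CO2-eq per kg lettuce")
```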