• Title/Summary/Keyword: Rule Evaluation

Search Results: 580

Clinical Significance of Enterovirus in Febrile Illness of Young Children (하절기에 발열을 주소로 입원한 3개월 이하의 영아에서 장바이러스 감염)

  • Kwak, Ji-Yeon;Cho, Mi-Hyun;Kim, Sung-Eun;Kang, Suk-Ho;Kim, Mi-Ok;Ma, Sang-Hyuk;Lee, Kyu-Man
    • Pediatric Infection and Vaccine / v.8 no.1 / pp.94-100 / 2001
  • Purpose: Enterovirus is a common cause of aseptic meningitis and nonspecific febrile illness in young children. During the summer and fall months, enterovirus-infected young children are frequently admitted and evaluated to rule out bacterial sepsis and/or meningitis. The purpose of this study was to evaluate the relationship between nonpolio enterovirus infection and febrile illness in infants under 3 months of age during the summer and fall months, using stool culture to identify the presence of enterovirus. Methods: Patients were febrile infants under 3 months of age admitted to Masan Fatima Hospital for sepsis evaluation from May 1999 to September 1999. Cultures were performed from stool and cerebrospinal fluid samples and tested for enterovirus infection. Viral isolation and serotype identification were performed by cell culture and immunofluorescent testing. Enteroviruses not typed by immunofluorescent testing were confirmed by reverse transcription-polymerase chain reaction. Results: A total of 44 febrile infants were enrolled; of those, 20 (45%) were positive for enterovirus. Two enterovirus culture-positive infants had concomitant urinary tract infections and one had Kawasaki disease. All infants infected with an enterovirus recovered without complications. Of the 20 enteroviruses isolated from stool, 3 were echovirus type 9, 1 was echovirus type 11, 1 was Coxsackievirus type B4, and 15 were untyped enteroviruses. One untyped enterovirus was isolated from the CSF. Conclusion: Nonpolio enterovirus infections are associated with nonspecific febrile illnesses in infants under 3 months of age.

Applying Meta-model Formalization of Part-Whole Relationship to UML: Experiment on Classification of Aggregation and Composition (UML의 부분-전체 관계에 대한 메타모델 형식화 이론의 적용: 집합연관 및 복합연관 판별 실험)

  • Kim, Taekyung
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.99-118 / 2015
  • Object-oriented programming languages have been widely adopted for developing modern information systems. The use of object-oriented (OO) concepts has reduced the effort of reusing pre-existing code, and these concepts have proved useful in interpreting system requirements. In line with this, modern conceptual modeling approaches support features of object-oriented programming. Unified Modeling Language (UML) has become one of the de facto standards for information system designers, since the language provides a set of visual diagrams, comprehensive frameworks, and flexible expressions. In a modeling process, UML users need to consider relationships between classes. Based on an explicit and clear representation of classes, a UML conceptual model necessarily gathers the attributes and methods that guide software engineers. In particular, identifying an association between a part class and a whole class is included in the standard grammar of UML. Representing part-whole relationships is natural in real-world domains, since many physical objects are perceived in part-whole terms; even abstract concepts such as roles are easily identified through part-whole perception. A representation of part-whole in UML is therefore reasonable and useful. However, it should be admitted that the use of UML is limited by the lack of practical guidelines on how to identify a part-whole relationship and how to classify it as an aggregate or a composite association. Research on developing such procedural knowledge is meaningful and timely, because misleading perceptions of part-whole relationships are hard to filter out during initial conceptual modeling, which degrades system usability. The current method of identifying and classifying part-whole relationships relies mainly on linguistic expression. This simple approach is rooted in the idea that a phrase expressing 'has-a' constructs a part-whole perception between objects: if the relationship is strong, the association is classified as a composite association; otherwise, it is an aggregate association. Admittedly, linguistic expressions contain clues to part-whole relationships, so the approach is reasonable and cost-effective in general. Nevertheless, it does not address concerns about accuracy and theoretical legitimacy, and research on guidelines for part-whole identification and classification has not accumulated sufficient results to resolve this issue. The purpose of this study is to provide step-by-step guidelines for identifying and classifying part-whole relationships in the context of UML use. Based on theoretical work on Meta-model Formalization, self-check forms were developed that help conceptual modelers work on part-whole classes. To evaluate the suggested idea, an experimental approach was adopted. The findings show that UML users obtain better results with the guidelines based on Meta-model Formalization than with the natural-language classification scheme conventionally recommended by UML theorists. This study contributes to the stream of research on part-whole relationships by extending the applicability of Meta-model Formalization. Compared to traditional approaches that aim to establish criteria for evaluating the result of conceptual modeling, this study expands the scope to the modeling process itself.
Traditional theories on evaluating part-whole relationships in conceptual modeling aim to rule out incomplete or wrong representations. Such qualification remains important, but without a practical alternative, posterior inspection is of limited help to modelers who want to reduce errors or misperceptions in part-whole identification and classification. The findings of this study can be further developed by introducing more comprehensive variables and real-world settings. It is also highly recommended to replicate and extend the suggested idea of utilizing Meta-model Formalization by creating alternative forms of guidelines, including plugins for integrated development environments.
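
A short illustrative sketch may help fix the distinction the guidelines classify; the classes below are invented examples, not the paper's experimental material. Aggregation merely references parts that exist independently of the whole, while composition creates and owns parts whose lifecycle is bound to it.

```python
# Illustrative sketch of aggregation vs. composition; class names are
# hypothetical examples, not taken from the study.

class Wheel:
    """A part that exists independently of any particular car."""

class Engine:
    """A part whose lifecycle is bound to the whole that creates it."""
    def __init__(self, serial: str):
        self.serial = serial

class Car:
    def __init__(self, wheels: list[Wheel]):
        # Aggregation: wheels are constructed elsewhere and merely
        # referenced here; they outlive the Car if it is discarded.
        self.wheels = wheels
        # Composition: the engine is created and owned by the Car;
        # discarding the Car conceptually discards the engine too.
        self.engine = Engine(serial="E-001")

wheels = [Wheel() for _ in range(4)]
car = Car(wheels)  # the same wheel objects could be shared or reused
print(len(car.wheels), car.engine.serial)
```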

A Study on the Appropriate Management of Maritime Police Authority in Korea Coast Guard: Focusing on the Japan Coast Guard (해양경비안전본부의 해양경찰권 적정 운영방안에 관한 연구: -일본 해상보안청과의 비교를 중심으로-)

  • Son, Yeong-Tae
    • Korean Security Journal / no.42 / pp.361-391 / 2015
  • Under the Government Organization Act as amended on 19 November 2014, the Korea Coast Guard (KCG) was reorganized: it was moved from under the Minister of Oceans and Fisheries to the newly created Ministry of Public Safety and Security (MPSS), and the Commissioner of the National Police Agency (NPA) took over the KCG Commissioner's duties concerning investigation and intelligence. Authority formerly held by the KCG is thus now divided between the MPSS and the NPA, a dual structure. This organizational change directly affects the investigative functions the KCG used to exercise and raises a variety of problems. In other words, the revised Government Organization Act brought substantial changes to the KCG's scope of work, duties, and organization. However, this change, including the government-led reorganization, failed to take into account the MPSS's particular features, such as its organizational specialty and legal authority. Nor was the change reflected in the current legal system, including the Criminal Procedure Act, and no preparation was made for the stable exercise of maritime police authority. In sum, the revised Government Organization Act was supposed to provide comprehensive, rapid security services by establishing a strong control tower for disasters and safety, but it covered only a few areas, such as the organizational revisions prompted by the Sewol ferry disaster, and failed to address other parts of society. This article therefore re-evaluates the changes made by the KCG's organizational revision and aims to improve legal stability by clarifying the division of work among the government agencies that exercise maritime police authority.

A Study for operation results of the comprehensive examination on tendering system in the cultural heritage repair and restoration, focusing on the cause of the decline in the winning bid rate (문화재수리 종합심사낙찰제·종합평가낙찰제 운영결과 및 낙찰률하락 원인 분석)

  • JUNG, Younghun;YUN, Hyundo
    • Korean Journal of Heritage: History & Science / v.55 no.1 / pp.111-132 / 2022
  • The comprehensive examination on tendering system was introduced to the cultural heritage repair and restoration field in 2016 to remedy the issues exposed by the 2014 South Gate repair scandal. Through it, the Cultural Heritage Administration sought to attain high performance in cultural heritage repair and restoration works while securing proper payment for them. With the system now in operation for over 5 years, it is high time to review the operating performance of the comprehensive examination on tendering system (hereinafter the "CEOTS") against its original goal, i.e., "proper payment in return for high performance of repair and restoration works." This study analyzes 114 CEOTS tenders from 2016 to 2020. The analysis shows, first, that more than half of the bid winners were in the top 20% of the repair & restoration capacity disclosure amount list, which largely fulfilled the goal of 'attaining high performance.' Second, as the winning bid rate decreased from 86.847% in 2017 to 85.488% in 2020, the goal of 'guaranteeing a proper payment' has not yet been achieved. Third, the influence of the Economic Evaluation section in CEOTS has grown since the scoring system was changed in 2019. The study identifies two reasons why the winning bid rate of CEOTS has decreased. First, 'the group that got more than 1st place' and 'the first place group', which together account for more than half of all bidders, show a decreasing bidding-rate trend over the years. Second, the exclusion rate of 'the group that got more than 1st place' is higher than that of 'the group that got less than 1st place', which lowers the expected winning rate. It is proposed that the CEOTS rules be revised, i.e., by easing the strict rule concerning the exclusion rate and by setting a lower bidding limit, to prevent an excessive decline in the winning bid rate.

Development and Evaluation of Multiplex PCR for the Detection of Carbapenemase-Producing Enterobacteriaceae (카바페넴분해효소 생성 장내세균 검출을 위한 Multiplex PCR의 개발 및 평가)

  • Kim, Si Hyun;Bae, Il Kwon;Kim, Na Young;Song, Sae Am;Kim, Sunjoo;Jeong, Joseph;Shin, Jeong Hwan
    • Annals of Clinical Microbiology / v.22 no.1 / pp.9-13 / 2019
  • Background: The isolation of carbapenemase-producing Enterobacteriaceae (CPE) has become increasingly common. Continuous surveillance for these organisms is essential because their infections are closely related to outbreaks of illness and are associated with high mortality rates. The aim of this study was to develop and evaluate a multiplex PCR as a means of detecting several important CPE genes simultaneously. Methods: We aimed to develop a multiplex PCR that could detect seven CPE genes simultaneously. The multiplex PCR was composed of seven primer sets for the detection of KPC, IMP, VIM, NDM-1, GES, OXA-23, and OXA-48. We designed the PCR products so that their sizes differed by at least 100 bp, allowing each target to be distinguished. We evaluated the performance of this new test using 69 CPE-positive clinical isolates. We also confirmed its specificity, to rule out false-positive reactions, using 71 carbapenem-susceptible clinical strains. Results: All 69 CPE clinical isolates showed positive results and were correctly identified as KPC (N=14), IMP (N=13), OXA-23 (N=12), OXA-48 (N=11), VIM (N=9), GES (N=5), and NDM (N=5) by the multiplex PCR. All 71 carbapenem-susceptible clinical isolates, including Enterococcus faecalis, Escherichia coli, Klebsiella pneumoniae, Acinetobacter baumannii, and Pseudomonas aeruginosa, showed negative results. Conclusion: This multiplex PCR can detect seven CPE genes at a time and will be useful in clinical laboratories.
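
As a rough sketch of the design constraint above, the check below verifies that a set of multiplex amplicon sizes are pairwise at least 100 bp apart, so each target yields a distinguishable band; the sizes assigned to the genes are invented placeholders, not the study's actual products.

```python
# Sketch: confirm multiplex PCR product sizes are pairwise >= 100 bp
# apart so bands are resolvable on a gel. Sizes are hypothetical
# placeholders, not the primer set reported in the paper.
from itertools import combinations

amplicons = {
    "KPC": 200, "IMP": 300, "VIM": 400, "NDM-1": 500,
    "GES": 600, "OXA-23": 700, "OXA-48": 800,
}

MIN_GAP = 100
for (a, sa), (b, sb) in combinations(amplicons.items(), 2):
    assert abs(sa - sb) >= MIN_GAP, f"{a} and {b} bands too close"
print("all", len(amplicons), "products are resolvable on a gel")
```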

Evaluation of Applicability of Sea Ice Monitoring Using Random Forest Model Based on GOCI-II Images: A Study of Liaodong Bay 2021-2022 (GOCI-II 영상 기반 Random Forest 모델을 이용한 해빙 모니터링 적용 가능성 평가: 2021-2022년 랴오둥만을 대상으로)

  • Jinyeong Kim;Soyeong Jang;Jaeyeop Kwon;Tae-Ho Kim
    • Korean Journal of Remote Sensing / v.39 no.6_2 / pp.1651-1669 / 2023
  • Sea ice currently covers approximately 7% of the world's ocean area, concentrated mainly in polar and high-latitude regions and subject to seasonal and annual variations. Analyzing the area and type of sea ice through time-series monitoring is very important, because sea ice forms in various types on a large spatial scale while oil and gas exploration and other marine activities are rapidly increasing. Research on the type and area of sea ice is currently conducted using high-resolution satellite images and field measurement data, but field measurements alone impose limits on sea ice monitoring. High-resolution optical satellite images can visually detect and identify types of sea ice over a wide area, and they can fill gaps in sea ice monitoring based on the Geostationary Ocean Color Imager-II (GOCI-II), an ocean satellite with high temporal resolution. This study examined the feasibility of sea ice monitoring by training a rule-based machine learning model on training data produced from high-resolution optical satellite images and then performing detection on GOCI-II images. Training data were extracted for Liaodong Bay in the Bohai Sea over 2021-2022, and a Random Forest (RF) model using GOCI-II was constructed and compared, qualitatively and quantitatively, with sea ice areas obtained from the existing normalized difference snow index (NDSI)-based method and from high-resolution satellite images. Unlike the NDSI-based results, which underestimated the sea ice area, this study detected sea ice areas in relatively fine detail and confirmed that sea ice can be classified by type, enabling sea ice monitoring. If the accuracy of the detection model is improved through continuous construction of training data and the inclusion of factors influencing sea ice formation, it is expected to be useful for sea ice monitoring in high-latitude ocean areas.
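
A minimal sketch of the kind of Random Forest classification described here might look as follows, assuming per-pixel band reflectances as features and labels digitized from high-resolution imagery; the band count, class scheme, and synthetic data are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch: Random Forest sea-ice classification from per-pixel
# spectral features. Feature layout and class labels are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((5000, 8))     # stand-in for 8 GOCI-II band reflectances
y = rng.integers(0, 3, 5000)  # 0 = open water, 1 = thin ice, 2 = thick ice

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

rf = RandomForestClassifier(n_estimators=300, random_state=0, n_jobs=-1)
rf.fit(X_tr, y_tr)
print(classification_report(y_te, rf.predict(X_te)))
```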

A Study on the Establishment of Comparison System between the Statement of Military Reports and Related Laws (군(軍) 보고서 등장 문장과 관련 법령 간 비교 시스템 구축 방안 연구)

  • Jung, Jiin;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.3 / pp.109-125 / 2020
  • The Ministry of National Defense is pushing the Defense Acquisition Program to build strong defense capabilities, and it spends more than 10 trillion won annually on defense improvement. As the Defense Acquisition Program is directly related to the security of the nation as well as the lives and property of the people, it must be carried out very transparently and efficiently by experts. However, the excessive diversification of laws and regulations related to the Defense Acquisition Program has made it challenging for many working-level officials to carry out the program smoothly; many reportedly discover regulations they were unaware of only after pushing ahead with their work. In addition, statutory statements related to the Defense Acquisition Program tend to cause serious issues even if only a single expression within a sentence is wrong. Despite this, efforts to establish a sentence comparison system that corrects such issues in real time have been minimal. This paper therefore proposes an implementation plan for a "Comparison System between the Statement of Military Reports and Related Laws" that uses a Siamese Network-based artificial neural network, a model from the field of natural language processing (NLP), to measure the similarity between sentences likely to appear in Defense Acquisition Program-related documents and sentences from the related statutory provisions, so as to determine and classify the risk of illegality and make users aware of the consequences. Several artificial neural network models (Bi-LSTM, Self-Attention, D_Bi-LSTM) were studied using 3,442 pairs of "Original Sentences" (described in actual statutes) and "Edited Sentences" (sentences derived by editing an "Original Sentence"). Among the many Defense Acquisition Program-related statutes, the DEFENSE ACQUISITION PROGRAM ACT, the ENFORCEMENT RULE OF THE DEFENSE ACQUISITION PROGRAM ACT, and the ENFORCEMENT DECREE OF THE DEFENSE ACQUISITION PROGRAM ACT were selected. The "Original Sentence" set consists of the 83 clauses of these Acts that are most accessible to working-level officials in their work. For each clause, the "Edited Sentence" set comprises 30 to 50 similar sentences likely to appear, in modified form, in military reports. The edited sentences were produced by modifying the original sentences according to 12 predefined rules, in proportion to the number of rules applicable to each original sentence. In 1:1 sentence-similarity evaluation experiments, each "Edited Sentence" could be classified as legal or illegal with considerable accuracy. However, while the "Edited Sentence" dataset used to train the neural network models covers a variety of actual statutory statements ("Original Sentences") characterized by the 12 rules, models fed only the "Original Sentence" and "Edited Sentence" dataset could not effectively classify other sentences that appear in actual military reports; the dataset was not ample enough for the models to recognize new incoming sentences. Hence, model performance was reassessed with an additional 120 newly written sentences that more closely resemble those in actual military reports while remaining associated with the original sentences.
We were thereby able to confirm that the models' performance surpassed a certain level even when trained merely on "Original Sentence" and "Edited Sentence" data. If sufficient model learning is achieved by improving and expanding the full training set with sentences that actually appear in reports, the models will be better able to classify sentences from military reports as legal or illegal. Based on the experimental results, this study confirms the possibility and value of building a "Real-Time Automated Comparison System Between Military Documents and Related Laws". The approach developed in this experiment can identify which specific clause, among the several that appear in the related laws, is most similar to a sentence appearing in Defense Acquisition Program-related military reports, which helps determine whether the contents of the report sentences are at risk of illegality when compared with the law clauses.
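
A minimal sketch of one of the compared architectures, a Siamese Bi-LSTM scored with cosine similarity, is given below; the vocabulary size, mean-pooling choice, and toy inputs are assumptions for illustration, not the study's exact configuration.

```python
# Sketch of a Siamese Bi-LSTM sentence-similarity model; dimensions
# and inputs are illustrative assumptions.
import torch
import torch.nn as nn

class SiameseBiLSTM(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.encoder = nn.LSTM(embed_dim, hidden_dim,
                               batch_first=True, bidirectional=True)

    def encode(self, tokens):
        out, _ = self.encoder(self.embed(tokens))
        return out.mean(dim=1)  # mean-pool over time steps

    def forward(self, sent_a, sent_b):
        # Cosine similarity between the two encodings; thresholding the
        # score flags a report sentence as matching or deviating from
        # its closest statute clause.
        return nn.functional.cosine_similarity(
            self.encode(sent_a), self.encode(sent_b))

model = SiameseBiLSTM()
a = torch.randint(1, 10000, (4, 20))  # 4 report sentences, 20 token ids
b = torch.randint(1, 10000, (4, 20))  # 4 statute sentences
print(model(a, b))                    # similarity scores in [-1, 1]
```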

Chinese Communist Party's Management of Records & Archives during the Chinese Revolution Period (혁명시기 중국공산당의 문서당안관리)

  • Lee, Won-Kyu
    • The Korean Journal of Archival Studies / no.22 / pp.157-199 / 2009
  • The organization for managing records and archives did not emerge together with the founding of the Chinese Communist Party. Such management became active with the establishment of the Department of Documents (文書科) and its affiliated offices overseeing the reading and safekeeping of official papers, after the formation of the Central Secretariat (中央秘書處) in 1926. Improving the work of the Secretariat's organization became the focus of critical discussions in the early 1930s. The main criticism was that the Secretariat had failed to be cognizant of its political role and had degenerated into a mere "functional organization." The solution to this was the "politicization of the Secretariat's work." Moreover, influenced by the "Rectification Movement" of the 1940s, the party emphasized the responsibility of the Resources Department (材料科), which extended beyond managing documents to collecting, organizing, and providing various kinds of important information data. In the meantime, security in composing documents continued to be emphasized through such methods as using different names for figures and organizations or employing special inks for document production. In addition, communication between the central political organs and regional offices was reinforced through regular reports on work activities and local conditions. The General Secretary not only composed the drafts of the major official documents but also handled the reading and examination of all documents, and thus played a central role in record processing. The records, called archives after undergoing document processing, were placed in safekeeping. This function was handled by the "Document Safekeeping Office (文件保管處)" of the Central Secretariat's Department of Documents. Although the Document Safekeeping Office, also called the "Central Repository (中央文庫)", could no longer accept additional archive transfers beginning in the early 1930s, the Resources Department continued throughout the 1940s to strengthen its role of safekeeping and providing documents and publication materials. In particular, materials were collected for research and study, and with the recovery of regions that had been under Japanese rule, massive amounts of archive and document materials were gathered. After being stipulated in rules in 1931, archive classification and cataloguing methods were actively systematized, especially in the 1940s. Basically, "subject" classification methods and fundamental cataloguing techniques were adopted. The principle of taking "importance" and "confidentiality" as the criteria of management emerged relatively early, but the concept or process of appraisal that differentiated preservation from discarding of documents was not clear. While a system of secure management and restricted access for confidential information was implemented, the emphasis on making archive materials available for use remained very strong, as can be seen in the slogan, "the unification of preservation and use." Even during the revolutionary movement and wars, the Chinese Communist Party continued its efforts to strengthen the management and preservation of records & archives. The results were not always desirable, nor did such experiences necessarily lead to stable development; the historical conditions in which the Chinese Communist Party found itself probably made this inevitable.
The most pronounced characteristic of this process is that the party not only pursued functional efficiency in records & archives management but, while strengthening its awareness of the political significance of such work for the revolutionary movement, also paid attention to the value of archive materials as practical evidence for revolutionary policy research and as historical evidence of the Chinese Communist Party.

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.83-102 / 2021
  • The government recently announced various policies for developing the big-data and artificial intelligence fields, providing the public a great opportunity through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea and is strongly committed to backing export companies through various systems. Nevertheless, there are still few realized business models based on big-data analyses. Against this background, this paper aims to develop a new business model for ex-ante prediction of the likelihood of credit guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, and apply machine learning models, comparing the performance of Logistic Regression, Random Forest, XGBoost, LightGBM, and DNN (Deep Neural Network). For decades, researchers have sought better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. Work on predicting financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), based on multiple discriminant analysis and still widely used in both research and practice; it uses five key financial ratios to predict the probability of bankruptcy within the next two years. Ohlson (1980) introduced a logit model to complement some limitations of previous models, and Elmer and Borowski (1988) developed and examined a rule-based, automated system that conducts financial analysis of savings and loans. Since the 1980s, researchers in Korea have examined the prediction of financial distress or bankruptcy: Kim (1987) analyzed financial ratios and developed a prediction model; Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques including artificial neural networks; Yang (1996) introduced multiple discriminant analysis and a logit model; and Kim and Kim (2001) utilized artificial neural network techniques for ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with models such as Random Forest or SVM. One major distinction of our research from previous work is that we examine the predicted probability of default for each sample case, rather than only the classification accuracy of each model over the entire sample. Most of the predictive models in this paper classify about 70% of the entire sample correctly; specifically, the LightGBM model shows the highest accuracy at 71.1% and the logit model the lowest at 69%. However, these results are open to multiple interpretations. In a business context, more emphasis must be placed on minimizing type 2 errors, which cause the more harmful operating losses for the guaranty company. Thus, we also compare classification accuracy by splitting the predicted probability of default into ten equal intervals.
Examining the classification accuracy for each interval, the logit model has the highest accuracy (100%) for predicted default probabilities of 0~10% but a relatively low accuracy (61.5%) for probabilities of 90~100%. Random Forest, XGBoost, LightGBM, and DNN give more desirable results: they are more accurate at both the 0~10% and 90~100% ends, though less accurate around a predicted probability of 50%. As for the distribution of samples across intervals, the LightGBM and XGBoost models place a relatively large number of samples in both the 0~10% and 90~100% intervals. Although the Random Forest model has an accuracy advantage on a small number of cases, LightGBM or XGBoost may be more desirable, since they classify a large number of cases into the two extreme intervals, even allowing for their relatively lower classification accuracy there. Considering the importance of type 2 errors and total prediction accuracy, XGBoost and DNN show superior performance, followed by Random Forest and LightGBM, while logistic regression performs worst. Each predictive model nevertheless has a comparative advantage under particular evaluation standards; for instance, the Random Forest model shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, a more comprehensive ensemble that combines multiple classification models with majority voting could maximize overall performance.
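
The interval-wise comparison can be outlined as follows; the synthetic dataset and the two models are stand-ins for KSURE's internal data and the paper's full model set.

```python
# Sketch: compare classifiers by accuracy within deciles of the
# predicted default probability. Data are synthetic stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

for name, model in [("logit", LogisticRegression(max_iter=1000)),
                    ("rf", RandomForestClassifier(random_state=0))]:
    proba = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    pred = (proba >= 0.5).astype(int)
    print(name)
    for lo in np.arange(0.0, 1.0, 0.1):
        mask = (proba >= lo) & (proba < lo + 0.1)
        if mask.sum() == 0:
            continue  # no test samples fell in this interval
        acc = (pred[mask] == y_te[mask]).mean()
        print(f"  {lo:.1f}-{lo + 0.1:.1f}: n={mask.sum():4d} acc={acc:.3f}")
```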

A Study on the Relationship Between Online Community Characteristics and Loyalty : Focused on Mediating Roles of Self-Congruency, Consumer Experience, and Consumer to Consumer Interactivity (온라인 커뮤니티 특성과 충성도 간의 관계에 대한 연구: 자아일치성, 소비자 체험, 상호작용성의 매개적 역할을 중심으로)

  • Kim, Moon-Tae;Ock, Jung-Won
    • Journal of Global Scholars of Marketing Science / v.18 no.4 / pp.157-194 / 2008
  • The popularity of communities on the internet has captured the attention of marketing scholars and practitioners. By adapting to the culture of the internet and providing consumers with the ability to interact with one another as well as with the company, businesses can build new and deeper relationships with customers. The economic potential of online communities has been discussed with much hope in many popular papers. In contrast to these enthusiastic prognostications, empirical and practical evidence regarding the economic potential of online communities points to a rather different conclusion: to date, even communities with high levels of membership and vibrant social arenas have failed to achieve financial viability. From this perspective, this study investigates the role of various factors influencing online community loyalty and suggests a framework that explains the process of building purchase loyalty. Although the importance of building loyalty in an online environment has been emphasized by marketing theorists and practitioners, there is no settled conclusion about how purchase loyalty is built or which factors most powerfully influence it. In this study, the process of building purchase loyalty is divided into three levels: characteristics of the community site, such as content superiority, site vividness, navigation easiness, and customerization; mediating variables, namely self-congruency, consumer experience, and consumer-to-consumer interactivity; and loyalty factors for the online community, namely visit loyalty, affect, trust, and purchase loyalty. The findings of this research are as follows. First, consumer-to-consumer interactivity is an important factor for online community purchase loyalty and the other loyalty factors. This means that, in order to interact with other people more actively, many participants in online communities are willing to buy products such as music, content, avatars, and the like. Marketers of online communities should therefore create environments in which consumers can easily interact with other consumers, and site environments in which consumers find the experience interesting and feel higher self-congruency than at other community sites. It has been argued that giving consumers a good experience is vital in cyberspace, and websites by their nature create an active (rather than passive) customer. Some researchers have tried to pin down this positive experience, with limited success and little empirical support. Websites can provide a cognitively stimulating experience for the user. Based on past studies, we define the online community experience as playfulness: playfulness is created by the excitement generated through a website's content and is measured using three descriptors. Marketers can promote the use of and visits to online communities that deliver a superior web experience, influencing their customers' attitudes and actions and encouraging high involvement with those communities. In particular, we suggest that transcendent customer experiences (TCEs), which have aspects of flow and/or peak experience, can generate lasting shifts in beliefs and attitudes, including subjective self-transformation, and can facilitate strong consumer ties to an online community.
We also find that website success is closely related to positive website experiences: consumers will spend more time on the site, interacting with other users. Second, as Figure 2 shows, visit loyalty and consumer affect toward the online community site did not directly influence purchase loyalty. This implies that the situation in online community sites differs somewhat from that in online shopping mall studies, which show close relations between revisit intention and purchase intention. With so many alternative sites on the web, consumers do not want to spend money on content and the like. In this sense, marketers of community websites must recognize that consumers' affect toward the site is not an end goal, nor by itself an important factor influencing consumers' purchases. Third, building a good content environment can be a really important marketing tool for creating competitive advantage in cyberspace. For example, Cyworld, Korea's number one community site, shows distinctive superiority in consumer evaluations of content characteristics such as content superiority, site vividness, and customerization; in particular, consumer evaluation of its customerization was remarkably higher than that of the other sites. We can therefore conclude that providing consumers with good, unique, and highly customized content is an urgent and important task, directly and indirectly affecting self-congruency, consumer experience, consumer-to-consumer interactivity, and the various loyalty factors of an online community. By creating enjoyable, useful, and unique online community environments, portals such as Daum, Naver, and Cyworld are able to build customer loyalty to a degree that many of today's online marketers can only dream of; this loyalty, in turn, generates strong economic returns. Another way to build a good online community site is to provide consumers with an interactive, fun, experience-oriented or experiential website. Elements that can make a dot-com's website experiential include graphics, 3-D images, animation, and video and audio capabilities. In addition, chat rooms and real-time customer service applications (which link site visitors directly to other visitors, or to company support personnel, respectively) are being used to make websites more interactive. Researchers note that online communities are increasingly incorporating such applications in their websites, in order to make consumers' online shopping experience more similar to that of an offline store. That is, if consumers are able to experience sensory stimulation (e.g., via 3-D images and audio), interact with other consumers (e.g., via chat rooms), and interact with sales or support people (e.g., via a real-time chat interface or e-mail), they are likely to have a more positive dot-com experience and develop a more positive image of the online company itself. Analysts caution, however, that while high-quality graphics, animation, and the like may create a fun experience for consumers, when heavily used they can slow site navigation, frustrating consumers who may never return to the site. Consequently, some analysts suggest that, at least with current technology, the rule of thumb is that less is more: while graphics and the like can draw consumers to a site, they should be kept to a minimum so as not to impact negatively on consumers' overall site experience.
