• Title/Summary/Keyword: complexity analysis


A Study on Differentiation and Improvement in Arbitration Systems in Construction Disputes (건설분쟁 중재제도의 차별화 및 개선방안에 관한 연구)

  • Lee, Sun-Jae
    • Journal of Arbitration Studies
    • /
    • v.29 no.2
    • /
    • pp.239-282
    • /
    • 2019
  • The importance of ADR (Alternative Dispute Resolution), which offers expertise, speed, and neutrality, has grown with the increase in arbitration cases arising from domestic and international construction disputes. For the nation's arbitration system and arbitration organization to join the ranks of advanced international arbitration institutions, it is necessary to identify the characteristics and advantages of leading institutions through a review of prior domestic and foreign research and of how international arbitration institutions are operated. Three problem areas are examined: first, education for the effective development of arbitrators (compulsory education, maintenance education, specialized education, seminars, etc.); second, the effectiveness of arbitration in resolving construction disputes (hearing methods, composition of the tribunal, and speed); and third, the flexibility and diversity of arbitration solutions (the practical problems of methodologies such as mediation-arbitration), which must be addressed in terms of arbitration laws, rules, and guidelines. The study therefore identifies the problems presented in the preceding literature and diagnoses the defects and shortcomings of the KCAB by drawing on the features and benefits of the arbitration systems operated by international arbitration institutions. At the same time, results of an empirical analysis concerning arbitrators are derived through a perception survey. The proposed improvements are as follows. First, an optimal combination of arbitration hearing and decision-making in the settlement of construction disputes (to improve speed): (1) improving the composition of the tribunal according to the complexity, specificity, and scale of arbitration cases, by (a) responding to the increased role of non-lawyer arbitrators (specialists and technical experts) and (b) securing technical arbitrators in each specialty for large and special corporate arbitration cases; and (2) improving how the arbitration guidelines are written for each area. Second, introducing an intensive hearing system for hearing efficiency, together with institutional improvements: (1) optimizing the hearing procedure and the speed of arbitral decisions; and (2) improving the management of technical arbitrators in tribunals, by (a) expanding the hearing work of technical arbitrators (reviewing the introduction of an assistant system as a member of the arbitral tribunal) and (b) improving tribunals' use of alternative appraisers (cost analysis and use of specialized institutions for calculating construction costs), with direct management of technical arbitrators to improve the reliability of appraisals and shorten the appraisal period. Third, improving the expert committee system and introducing new methods: (1) creating a non-standing technical committee for special technical affairs (supporting pre-qualification of special cases and coordination between the parties); and (2) expanding the standing committee (adding expert technicians for important, special, and large cases; pre-consultation, pre-coordination, and mediation-arbitration).
In addition, as institutional differentiation to enhance the flexibility and diversity of arbitration: first, offering the options of "Med-Arb", "Arb-Med", and "Arb-Med-Arb"; second, revising the dispute-resolution agreement clause [Article 28-2 (Agreement on Dispute Resolution)] of the Act governing contracts to which the State is a party, so as to expand the available means of resolving disputes by arbitration; third, strengthening the status, role, and activities of expert technical arbitrators under the Arbitration Industry Promotion Act and its Enforcement Decree, in force since June 28, 2017; and fourth, expanding the role of expert technical arbitrators through legislation promoting the arbitration industry. In particular, the Arbitration Industry Promotion Act should serve as the basis for developing the arbitration system into an internationally competitive arbitration institution. Accordingly, the study proposes detailed measures for improvement and differentiation, together with policy, legal, and institutional improvements and supporting legislation.

A Study on the Determinants of Blockchain-oriented Supply Chain Management (SCM) Services (블록체인 기반 공급사슬관리 서비스 활용의 결정요인 연구)

  • Kwon, Youngsig;Ahn, Hyunchul
    • Knowledge Management Research
    • /
    • v.22 no.2
    • /
    • pp.119-144
    • /
    • 2021
  • Recently, as competition in the market has evolved from competition among companies to competition among their supply chains, companies are striving to enhance their supply chain management (hereinafter SCM). In particular, as blockchain technology with various technical advantages is combined with SCM, many domestic manufacturing and distribution companies are now considering the adoption of blockchain-oriented SCM (BOSCM) services. Thus, examining the factors affecting the use of blockchain-oriented SCM is an important academic topic. However, most prior studies on blockchain and SCM have designed their research models based on the Technology Acceptance Model (TAM) or the Unified Theory of Acceptance and Use of Technology (UTAUT), which are better suited to explaining individuals' acceptance of information technology than companies'. Against this background, this study presents a novel blockchain-oriented SCM acceptance model based on the Technology-Organization-Environment (TOE) framework, which takes companies as the unit of analysis. In addition, the Value-based Adoption Model (VAM) is applied to the research model in order to comprehensively consider the benefits and sacrifices caused by a new information system. To validate the proposed research model, survey responses were collected from 126 companies. The research model was then verified by applying PLS-SEM (Partial Least Squares Structural Equation Modeling) to the data of 122 of these companies. As a result, 'business innovation', 'tracking and tracing', 'security enhancement' and 'cost' from the technology viewpoint are found to significantly affect 'perceived value', which in turn affects 'intention to use blockchain-oriented SCM'. Also, 'organization readiness' is found to affect 'intention to use' with statistical significance. However, 'complexity' and 'regulation environment' are found to have little impact on 'perceived value' and 'intention to use', respectively. The findings of this study are expected to contribute to preparing practical and policy alternatives for facilitating blockchain-oriented SCM adoption in Korean firms.
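The paper validates its path model with PLS-SEM; as a rough illustration of how such a structural model can be encoded and estimated in Python, the sketch below uses the semopy package, which fits covariance-based SEM rather than PLS-SEM, so it is a stand-in for the authors' procedure rather than a reproduction of it. All construct and column names are hypothetical.

```python
# Illustrative sketch only: covariance-based SEM via semopy as a stand-in for the
# paper's PLS-SEM analysis. Construct names and the CSV file are hypothetical.
import pandas as pd
from semopy import Model

model_desc = """
perceived_value ~ business_innovation + tracking_tracing + security_enhancement + cost
intention_to_use ~ perceived_value + organization_readiness + regulation_environment
"""

df = pd.read_csv("boscm_survey.csv")   # one row per responding company, averaged construct scores
model = Model(model_desc)
model.fit(df)                          # estimate path coefficients
print(model.inspect())                 # coefficients, standard errors, p-values
```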

Peirce and the Problem of Symbols (퍼스와 상징의 문제)

  • Noh, Yang-jin
    • Journal of Korean Philosophical Society
    • /
    • v.152
    • /
    • pp.59-79
    • /
    • 2019
  • The main purpose of this paper is to critically examine the intractable problems of Peirce's notion of 'symbol' as a higher and perfect mode of sign, and to present a more appropriate account of the higher status of symbol from an experientialist perspective. Peirce distinguished between icon, index, and symbol, and suggested symbol to be a higher mode of sign in that it additionally requires "interpretation." Within Peirce's picture, the interpretation of a symbol is to be explained in terms of the "interpretant," whereas icon and index are not. However, Peirce's conception of the "interpretant" itself remains fraught with intractable opacities, thereby leaving the nature of symbol in a misty conundrum. Drawing largely on the experientialist account of the nature and structure of symbolic experience, I try to explicate the complexity of symbol in terms of "the symbolic mapping." According to experientialism, our experience consists of two levels, i.e., physical and symbolic. Physical experience can be extended to the symbolic level largely by means of "symbolic mapping," yet the symbolic level remains strongly constrained by physical experience. Symbolic mapping is the way in which we map part of certain physical experience onto some other area, thereby understanding the other area in terms of the mapped part of the physical experience. According to this account, all the signs, icon, index, and symbol à la Peirce, are constructed by way of symbolic mapping. While icon and index are constructed by mapping physical-level experience onto some signifier (i.e., Peirce's "representamen"), symbol is constructed by mapping abstract-level experience onto some signifier. Considering the experientialist account that the abstract level of experience is itself constructed by way of symbolic mapping of the physical level of experience, the symbolic mapping of abstract-level experience onto some other area is a secondary one. Thus, symbol, being constructed by way of secondary or higher-order mapping, becomes a higher-level sign. This analysis is based on the idea that explaining the nature of signs is a matter of explaining symbolic experience, leaving behind Peirce's realist conception of the sign as an event or state of affairs out there. In conclusion, I suggest that this analysis will open up new possibilities for a more appropriate account of the nature of signs, beyond Peirce's complicated riddles.

Identifying sources of heavy metal contamination in stream sediments using machine learning classifiers (기계학습 분류모델을 이용한 하천퇴적물의 중금속 오염원 식별)

  • Min Jeong Ban;Sangwook Shin;Dong Hoon Lee;Jeong-Gyu Kim;Hosik Lee;Young Kim;Jeong-Hun Park;ShunHwa Lee;Seon-Young Kim;Joo-Hyon Kang
    • Journal of Wetlands Research
    • /
    • v.25 no.4
    • /
    • pp.306-314
    • /
    • 2023
  • Stream sediments are an important component of water quality management because they are receptors of various pollutants such as heavy metals and organic matters emitted from upland sources and can be secondary pollution sources, adversely affecting water environment. To effectively manage the stream sediments, identification of primary sources of sediment contamination and source-associated control strategies will be required. We evaluated the performance of machine learning models in identifying primary sources of sediment contamination based on the physico-chemical properties of stream sediments. A total of 356 stream sediment data sets of 18 quality parameters including 10 heavy metal species(Cd, Cu, Pb, Ni, As, Zn, Cr, Hg, Li, and Al), 3 soil parameters(clay, silt, and sand fractions), and 5 water quality parameters(water content, loss on ignition, total organic carbon, total nitrogen, and total phosphorous) were collected near abandoned metal mines and industrial complexes across the four major river basins in Korea. Two machine learning algorithms, linear discriminant analysis (LDA) and support vector machine (SVM) classifiers were used to classify the sediments into four cases of different combinations of the sampling period and locations (i.e., mine in dry season, mine in wet season, industrial complex in dry season, and industrial complex in wet season). Both models showed good performance in the classification, with SVM outperformed LDA; the accuracy values of LDA and SVM were 79.5% and 88.1%, respectively. An SVM ensemble model was used for multi-label classification of the multiple contamination sources inlcuding landuses in the upland areas within 1 km radius from the sampling sites. The results showed that the multi-label classifier was comparable performance with sinlgle-label SVM in classifying mines and industrial complexes, but was less accurate in classifying dominant land uses (50~60%). The poor performance of the multi-label SVM is likely due to the overfitting caused by small data sets compared to the complexity of the model. A larger data set might increase the performance of the machine learning models in identifying contamination sources.

Analysis on dynamic numerical model of subsea railway tunnel considering various ground and seismic conditions (다양한 지반 및 지진하중 조건을 고려한 해저철도 터널의 동적 수치모델 분석)

  • Changwon Kwak;Jeongjun Park;Mintaek Yoo
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.25 no.6
    • /
    • pp.583-603
    • /
    • 2023
  • Recently, the advancement of mechanical tunnel boring machine (TBM) technology and the characteristics of subsea railway tunnels subjected to hydrostatic pressure have led to the widespread application of shield TBM methods in the design and construction of subsea railway tunnels. Subsea railway tunnels are exposed to constant pore water pressure and are influenced by the amplification of seismic waves during earthquakes. In particular, seismic loads acting on subsea railway tunnels under various ground conditions, such as soft ground, soft soil-rock composite ground, and fractured zones, can cause significant changes in tunnel displacement and stress, thereby affecting tunnel safety. Additionally, the dynamic response of the ground and tunnel varies with seismic load parameters such as frequency characteristics, seismic waveform, and peak acceleration, adding complexity to the behavior of the ground-tunnel structure system. In this study, a finite difference method is employed to model the entire ground-tunnel structure system, considering hydrostatic pressure, in order to investigate the dynamic behavior of a subsea railway tunnel during earthquakes. Since the key factors influencing dynamic behavior during seismic events are the ground conditions and the seismic waves, six analysis cases are established based on virtual ground conditions: Case-1 with weathered soil, Case-2 with hard rock, Case-3 with a composite ground of soil and hard rock in the tunnel longitudinal direction, Case-4 with the tunnel passing through a narrow fault zone, Case-5 with a composite ground of soft soil and hard rock in the tunnel longitudinal direction, and Case-6 with the tunnel passing through a wide fractured zone. As a result, horizontal displacements due to earthquakes tend to increase with increasing ground stiffness; however, the displacements tend to be restrained by the confining effects of the ground and the rigid shield segments. In contrast, the peak compressive stress of the segment increases significantly with weaker ground stiffness, and the displacement-restraining effects contribute to the increase in the peak compressive stress of the segment.
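The study models the full ground-tunnel system with a finite difference code; as a much-reduced illustration of the underlying numerics, the sketch below time-steps a 1D shear-wave column with an explicit finite difference scheme. All geometry, material properties, and the input motion are hypothetical and are not taken from the paper.

```python
# Explicit finite-difference sketch of vertically propagating shear waves in a uniform
# soil column (1D wave equation u_tt = vs^2 * u_zz). Hypothetical parameters only; the
# paper's ground-tunnel-segment model is far more elaborate.
import numpy as np

H, nz = 60.0, 121                      # column depth (m), number of grid points
dz = H / (nz - 1)
vs = 300.0                             # shear wave velocity (m/s)
dt = 0.5 * dz / vs                     # time step satisfying the CFL stability condition
nt = 4000
c2 = (vs * dt / dz) ** 2

u = np.zeros(nz)
u_prev = np.zeros(nz)
peak_surface = 0.0

for n in range(nt):
    t = n * dt
    u_new = np.empty_like(u)
    # central-difference update of the interior nodes
    u_new[1:-1] = 2*u[1:-1] - u_prev[1:-1] + c2*(u[2:] - 2*u[1:-1] + u[:-2])
    u_new[0] = u_new[1]                               # free surface (zero shear strain)
    u_new[-1] = 0.01 * np.sin(2*np.pi*2.0*t)          # prescribed 2 Hz base motion (hypothetical)
    u_prev, u = u, u_new
    peak_surface = max(peak_surface, abs(u[0]))

print(f"peak surface displacement: {peak_surface:.4f} m")
```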

A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu;Park, Hyun-Jung;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.17 no.4
    • /
    • pp.31-59
    • /
    • 2007
  • We frequently use search engines to find relevant information on the Web but still end up with too much information. In order to solve this problem of information overload, ranking algorithms have been applied to various domains. As more information becomes available in the future, effectively and efficiently ranking search results will become even more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page is estimated based on the number of keywords found in the page, which is subject to manipulation. In contrast, link analysis methods such as Google's PageRank capitalize on the information inherent in the link structure of the Web graph. PageRank considers a certain page highly important if it is referred to by many other pages. The degree of importance also increases if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure-based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm utilizes two kinds of scores: the authority score and the hub score. If a page has a high authority score, it is an authority on a given topic and many pages refer to it. A page with a high hub score links to many authoritative pages. As mentioned above, link-structure-based ranking methods have played an essential role in the World Wide Web (WWW), and nowadays many people recognize their effectiveness and efficiency. On the other hand, as the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, making ranking algorithms for RDF knowledge bases greatly important. The RDF graph consists of nodes and directional links similar to the Web graph. As a result, the link-structure-based ranking method seems highly applicable to ranking Semantic Web resources. However, the information space of the Semantic Web is more complex than that of the WWW. For instance, the WWW can be considered one huge class, i.e., a collection of Web pages, which has only a recursive property, i.e., a 'refers to' property corresponding to the hyperlinks. However, the Semantic Web encompasses various kinds of classes and properties, and consequently, ranking methods used in the WWW should be modified to reflect the complexity of the information space in the Semantic Web. Previous research addressed the ranking problem of query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm in order to rank Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, which correspond to the authority score and the hub score of Kleinberg's algorithm, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of a resource on another resource depending on the characteristic of the property linking the two resources. A node with a high objectivity score becomes the object of many RDF triples, and a node with a high subjectivity score becomes the subject of many RDF triples. They developed several kinds of Semantic Web systems in order to validate their technique and showed some experimental results verifying the applicability of their method to the Semantic Web. Despite their efforts, however, there remained some limitations, which they reported in their paper.
First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain. In other words, the ratio of links to nodes should be high, or overall resources should be described in detail to a certain degree, for their algorithm to work properly. Second, the Tightly-Knit Community (TKC) effect, the phenomenon that pages which are less important but densely connected receive higher scores than pages that are more important but sparsely connected, remains problematic. Third, a resource may have a high score not because it is actually important, but simply because it is very common and consequently has many links pointing to it. In this paper, we examine such ranking problems from a novel perspective and propose a new algorithm which can solve the problems identified in the previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach adopted by the previous research, under our approach a user determines the weight of a property by comparing its relative significance to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are supposed to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. This approach closely reflects the way that people, in the real world, evaluate something, and turns out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm can resolve the TKC effect and can further shed light on other limitations posed by the previous research. In addition, we propose two ways to incorporate data-type properties, which have not been employed even when they have some significance for resource importance. We designed an experiment to show the effectiveness of our proposed algorithm and the validity of the ranking results, which had not been attempted in previous research. We also conducted a comprehensive mathematical analysis, which was overlooked in previous research. The mathematical analysis enabled us to simplify the calculation procedure. Finally, we summarize our experimental results and discuss further research issues.
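Since the approach above is grounded in PageRank- and HITS-style link analysis with per-property weights, a minimal sketch of that underlying idea may help: weighted PageRank over a toy RDF-like graph, where the edge weights stand in for user-assigned property weights. This illustrates the general technique only, not the authors' class-oriented algorithm; the nodes, triples, and weights are hypothetical.

```python
# Property-weighted PageRank over a toy RDF-style graph. Illustrates the link-analysis
# foundation discussed above; not the class-oriented algorithm proposed in the paper.
import numpy as np

nodes = ["paperA", "paperB", "authorX", "topicY"]
# (subject, property, object, property weight) -- all hypothetical
triples = [
    ("paperA", "cites",     "paperB",  1.0),
    ("paperA", "writtenBy", "authorX", 0.5),
    ("paperB", "writtenBy", "authorX", 0.5),
    ("paperB", "about",     "topicY",  0.8),
]

idx = {name: i for i, name in enumerate(nodes)}
n = len(nodes)
W = np.zeros((n, n))
for s, _, o, w in triples:
    W[idx[s], idx[o]] += w                       # weighted subject -> object adjacency

row_sums = W.sum(axis=1, keepdims=True)
P = np.divide(W, row_sums, out=np.zeros_like(W), where=row_sums > 0)  # row-stochastic

d = 0.85                                         # damping factor
rank = np.full(n, 1.0 / n)
for _ in range(100):                             # power iteration
    dangling = rank[row_sums.ravel() == 0].sum() # mass held by nodes with no outgoing links
    rank = (1 - d) / n + d * (rank @ P + dangling / n)

for name, score in sorted(zip(nodes, rank), key=lambda x: -x[1]):
    print(f"{name}: {score:.3f}")
```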

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.27-65
    • /
    • 2020
  • Many information and communication technology companies have made their internally developed AI technologies public, for example, Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software to the public, the relationship with the developer community and the artificial intelligence (AI) ecosystem can be strengthened, and users can experiment with, implement, and improve it. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although various analyses of open source software have been conducted, there is a lack of studies that help industry develop or use deep learning open source software. This study thus attempts to derive a strategy for adopting deep learning open source frameworks through case studies. Based on the technology-organization-environment (TOE) framework and a literature review on the adoption of open source software, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two cases of success and one case of failure) and revealed that seven out of the eight TOE factors, as well as several factors regarding the company, team, and resources, are significant for the adoption of a deep learning open source framework. By organizing the case study results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. In order for an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, it is necessary to support research developers' use of deep learning frameworks by collecting and managing data inside and outside the company through a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures at the stage of using the deep learning framework, companies can increase the number of deep learning research developers, their ability to use the deep learning framework, and the available GPU resources. In the proliferation stage of the deep learning framework, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of the developers, for example, by automatically optimizing the hardware (GPU) environment. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by activating developer retraining and seminars.
To implement the five identified success factors, a step-by-step enterprise procedure for adopting the deep learning framework was proposed: defining the project problem, confirming whether the deep learning methodology is the right method, confirming whether the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework across the enterprise. The first three steps (defining the project problem, confirming whether the deep learning methodology is the right method, and confirming whether the deep learning framework is the right tool) are pre-considerations for adopting a deep learning open source framework. After these three preliminary steps are clear, the next two steps (using the deep learning framework in the enterprise and spreading the framework across the enterprise) can proceed. In the fourth step, the knowledge and expertise of developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, the five important factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

$V_H$ Gene Expression and its Regulation on Several Different B Cell Population by using in situ Hybridization technique

  • Jeong, Hyun-Do
    • Journal of fish pathology
    • /
    • v.6 no.2
    • /
    • pp.111-122
    • /
    • 1993
  • The mechanism by which $V_H$ region gene segments are selected in B lymphocytes is not known. Moreover, evidence for both random and nonrandom expression of $V_H$ genes in mature B cells has been presented previously. In this report, the technique of in situ hybridization allowed us to analyze expressed $V_H$ gene families in normal B lymphocytes at the single cell level. The analysis of normal B cells in this study eliminated any possible bias resulting from the transformation protocols used previously and minimized limitations associated with sample size. Therefore, an accurate measure of the functional and expressed $V_H$ gene repertoire in B lymphocytes could be made. One of the most important controls for the optimization of in situ hybridization is to establish probe concentration and washing stringency, because the degree of nucleotide sequence similarity between different families can in some cases be as high as 70%. When the radioactive $C{\mu}$ and $V_{H}J558$ RNA probes are tested on LPS-stimulated adult spleen cells, $2{\sim}4{\times}10^6$ cpm/slide shows low background and a reasonable frequency of specific positive cells. For the washing condition, 40~50% formamide at $54^{\circ}C$ is found to be optimal for the $C{\mu}$, $V_{H}S107$, and $V_{H}J558$ probes. The results clearly demonstrate that the level of expression of each $V_H$ gene family is dependent upon the complexity, or size, of that family. These findings also extend to the level of $V_H$ gene family expression in separated bone marrow B cells at various stages of differentiation, and lead to the conclusion that there is no preferential utilization of a specific $V_H$ gene family. Thus, the utilization of $V_H$ gene segments in B lymphocytes of adult BALB/c mice is random and is not regulated or changed during the differentiation of B cells.


A Study on the recognition and Attitude of Home Health Nursing System (가정간호사 제도에 대한 인식 및 태도 조사연구)

  • Lee Sung Ja
    • Journal of Korean Public Health Nursing
    • /
    • v.12 no.1
    • /
    • pp.132-146
    • /
    • 1998
  • This study was conducted to provide basic data for the development and introduction of a Home Health Nursing System by investigating the level of recognition of and attitudes toward such a system. The data were collected by means of questionnaires given to 74 patients who had been admitted to C General Hospital in Chonju, from June 30, 1997. As the study instrument, the questionnaires developed by Kim Yong Soon et al. (1990) and Han Bok Hee (1993) were modified and supplemented for the aims of this study. Data were analyzed by computer. The items about the characteristics of the subjects and attitudes toward the management plan of the Home Health Nursing System were presented as frequencies and percentages. Means and standard deviations were calculated for the items related to the definition, recognition, necessity, and expected effects of the Home Health Nursing System and for the items related to admission. The ANOVA test was used, according to the characteristics of the variables, to analyze differences in the perceived necessity of the Home Health Nursing System. The results of this study were as follows. 1) The general characteristics of the subjects were: for sex, male, 58.1%; for age, 50-59 years, 29.7%; for level of education, high school, 51.4%; 79.7% of them were married; for family form, small family, 73.0%; and 68.9% of them had a monthly income over 100 million won. 2) The characteristics related to admission were: for clinical department, surgery, 78.4%; admission of not more than 7 days, 47.3%; 71.6% had undergone an operation; for admission route, via the outpatient clinic, 54.1%; for the waiting period to the admission day, 1-2 days, 71.6%. 3) The difficulties arising from hospitalization were related mostly to the feeling that hospital life is more inconvenient than home (3.66). The difficulty of admission due to insufficient hospital beds was attributed to the concentration of patients in general hospitals following universal national medical insurance (4.05). 4) Regarding previous information about the Home Health Nursing System, 42 (56.8%) had heard only the name; regarding recognition of it, they understood it as periodic care by licensed nurses for recovering patients after early discharge (3.73). On the necessity of the Home Health Nursing System, they considered it necessary because of the increasing trend of psychological illness caused by environmental change and the complexity of the social structure (4.24). On the expected effects, they answered that it would be convenient for the patient's family to care for the patient (4.18). 5) On attitudes toward the management plan of the Home Health Nursing System, 42 (56.8%) intended to participate in the system given systematic support. For visiting time, 'visit periodically' and 'visit when the patient needs it' were each chosen by 28 (37.8%). For the application of medical insurance, 91.9% said they would use the service if it were covered; for the method of payment, 'pay by the time required' was chosen by 23 (31.1%); for the managing body, 'a national public institution should operate it' was chosen by 33 (44.6%).
6) The relationship between the general characteristics of the subjects and the perceived necessity of the Home Health Nursing System showed significant differences by age (F=3.508, p<0.05) and marital status (F=5.402, p<0.023).
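A minimal sketch of the one-way ANOVA used in finding 6) is shown below, with hypothetical necessity scores grouped by age band; it only illustrates the form of the test, not the study's data.

```python
# One-way ANOVA sketch for perceived-necessity scores across age groups.
# The scores below are hypothetical placeholders, not the study's data.
from scipy import stats

necessity_by_age = {
    "30s": [3.8, 4.0, 3.5, 4.2, 3.9],
    "40s": [4.1, 4.3, 4.0, 4.4, 4.2],
    "50s": [4.5, 4.6, 4.3, 4.7, 4.4],
}

f_stat, p_value = stats.f_oneway(*necessity_by_age.values())
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")   # the study reports F=3.508, p<0.05 for age
```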


Validation of the Korean Version of the Trauma Symptom Checklist-40 among Psychiatric Outpatients (정신건강의학과 외래환자 대상 한국판 외상 증상 체크리스트(Trauma Symptom Checklist-40)의 타당도 연구)

  • Park, Jin;Kim, Daeho;Kim, Eunkyung;Kim, Seokhyun;Yun, Mirim
    • Korean Journal of Psychosomatic Medicine
    • /
    • v.26 no.1
    • /
    • pp.35-43
    • /
    • 2018
  • Objectives : The effects of multiple trauma are complex and extend beyond core PTSD symptoms. However, few psychological instruments for trauma assessment address this issue of symptom complexity. The Trauma Symptom Checklist-40 (TSC-40) is a self-report scale that assesses a wide range of symptoms associated with childhood or adult traumatic experience. The purpose of the present study was to evaluate the validity of the Korean version of the TSC-40 in a sample of psychiatric outpatients. Methods : Data from 367 treatment-seeking patients with DSM-IV diagnoses were obtained from the psychiatric outpatient department of a university hospital. The diagnoses were anxiety disorder, posttraumatic stress disorder, depressive disorder, adjustment disorder, and others. The psychometric data included the TSC-40, the Life Events Checklist, the Impact of Event Scale-Revised, Zung's Self-report Depression Scale, and Zung's Self-report Anxiety Scale. Cronbach's α was calculated for internal consistency. Convergent and concurrent validity were assessed through correlations between the TSC-40 and the other scales (PTSD, anxiety, and depression). Results : Exploratory factor analysis of the Korean version of the TSC-40 extracted a seven-factor structure accounting for 59.55% of the total variance, which was contextually similar to the six-factor and five-factor structures of the original English version. The Korean version of the TSC-40 demonstrated a high level of internal consistency (Cronbach's α = 0.94) and good concurrent and convergent validity with another PTSD scale and with anxiety and depression scales. Conclusions : Excellent construct validity of the Korean version of the TSC-40 was demonstrated in this study. The subtle difference in the factor structure may reflect cultural issues and sample characteristics, such as the heterogeneous clinical population (including non-trauma-related disorders) and outpatient status. Overall, this study demonstrated that the Korean version of the TSC-40 is psychometrically sound and can be used for the Korean clinical population.
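Two of the analyses reported above, internal consistency via Cronbach's α and an exploratory seven-factor solution, can be sketched as follows. The data here are random placeholders (367 respondents by 40 items), so the printed values are illustrative only.

```python
# Sketch of Cronbach's alpha and a varimax-rotated exploratory factor analysis for a
# 40-item checklist. Random placeholder data; not the study's dataset.
import numpy as np
from sklearn.decomposition import FactorAnalysis

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
scores = rng.integers(0, 4, size=(367, 40)).astype(float)   # hypothetical 0-3 item responses

print("Cronbach's alpha:", round(cronbach_alpha(scores), 3))

fa = FactorAnalysis(n_components=7, rotation="varimax")     # seven factors, as extracted above
fa.fit(scores)
loadings = fa.components_.T                                  # items x factors loading matrix
print("loadings shape:", loadings.shape)
```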