• Title/Summary/Keyword: accuracy index


The NCAM Land-Atmosphere Modeling Package (LAMP) Version 1: Implementation and Evaluation (국가농림기상센터 지면대기모델링패키지(NCAM-LAMP) 버전 1: 구축 및 평가)

  • Lee, Seung-Jae;Song, Jiae;Kim, Yu-Jung
    • Korean Journal of Agricultural and Forest Meteorology / v.18 no.4 / pp.307-319 / 2016
  • A Land-Atmosphere Modeling Package (LAMP) for supporting agricultural and forest management was developed at the National Center for AgroMeteorology (NCAM). The package comprises two components: one is the Weather Research and Forecasting modeling system (WRF) coupled with the Noah-Multiparameterization options (Noah-MP) Land Surface Model (LSM), and the other is an offline one-dimensional LSM. The objective of this paper is to briefly describe the two components of the NCAM-LAMP and to evaluate their initial performance. The coupled WRF/Noah-MP system is configured with a parent domain over East Asia and three nested domains with a finest horizontal grid size of 810 m. The innermost domain covers two Gwangneung deciduous and coniferous KoFlux sites (GDK and GCK). The model is integrated for about 8 days with the initial and boundary conditions taken from the National Centers for Environmental Prediction (NCEP) Final Analysis (FNL) data. The verification variables are 2-m air temperature, 10-m wind, 2-m humidity, and surface precipitation for the WRF/Noah-MP coupled system. Skill scores are calculated for each domain and for two dynamic vegetation options using the differences between observations from the Korea Meteorological Administration (KMA) and the simulations from the WRF/Noah-MP coupled system. The accuracy of the precipitation simulation is examined using a contingency table, from which the Probability of Detection (POD) and the Equitable Threat Score (ETS) are calculated. The standalone LSM simulation is conducted for one year with the original settings and is compared with the KoFlux site observations for net radiation, sensible heat flux, latent heat flux, and soil moisture. According to the results, the innermost domain (810 m resolution) showed the minimum root mean square error among all domains for 2-m air temperature, 10-m wind, and 2-m humidity. Turning on the dynamic vegetation tended to reduce 10-m wind simulation errors in all domains. The first nested domain (7,290 m resolution) showed the highest precipitation score, but gained little from the dynamic vegetation option. On the other hand, the offline one-dimensional Noah-MP LSM simulation captured the observed pattern and magnitude of the radiative fluxes and soil moisture, leaving room for further improvement by supplementing the model input of leaf area index and finding a proper combination of model physics.
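The precipitation skill mentioned above is scored from a standard 2x2 contingency table of forecast versus observed rain occurrence. Below is a minimal sketch of how POD and ETS are computed from such a table; the rain threshold and the sample values are illustrative assumptions, not figures from the paper.

```python
import numpy as np

def precipitation_scores(forecast, observed, threshold=0.1):
    """POD and ETS from paired forecast/observed rain amounts.

    `threshold` (mm) decides whether a point counts as "rain";
    the default here is an illustrative choice, not the paper's.
    """
    f = np.asarray(forecast) >= threshold
    o = np.asarray(observed) >= threshold

    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    total = f.size

    # Probability of Detection: fraction of observed rain events that were forecast
    pod = hits / (hits + misses)

    # Equitable Threat Score: hits adjusted for those expected by chance
    hits_random = (hits + misses) * (hits + false_alarms) / total
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    return pod, ets

# Example with made-up values
print(precipitation_scores([0.0, 1.2, 0.4, 0.0], [0.0, 0.8, 0.0, 0.3]))
```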

Bankruptcy Prediction Modeling Using Qualitative Information Based on Big Data Analytics (빅데이터 기반의 정성 정보를 활용한 부도 예측 모형 구축)

  • Jo, Nam-ok;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.33-56 / 2016
  • Many researchers have focused on developing bankruptcy prediction models using modeling techniques such as statistical methods, including multiple discriminant analysis (MDA) and logit analysis, or artificial intelligence techniques, including artificial neural networks (ANN), decision trees, and support vector machines (SVM), to secure enhanced performance. Most bankruptcy prediction models in academic studies have used financial ratios as the main input variables. The bankruptcy of firms is associated with both the firm's financial state and the external economic situation. However, the inclusion of qualitative information, such as the economic atmosphere, has not been actively discussed, despite the fact that relying only on financial ratios has drawbacks. Accounting information, such as financial ratios, is based on past data, and it is usually determined one year before bankruptcy. Thus, a time lag exists between the point of closing financial statements and the point of credit evaluation. In addition, financial ratios do not capture environmental factors, such as the external economic situation. Therefore, using only financial ratios may be insufficient for constructing a bankruptcy prediction model, because they essentially reflect past corporate internal accounting information while neglecting recent information. Thus, qualitative information must be added to the conventional bankruptcy prediction model to supplement the accounting information. Due to the lack of an analytic mechanism for obtaining and processing qualitative information from various information sources, however, previous studies have made little use of such information. Recently, big data analytics, such as text mining techniques, have been drawing much attention in academia and industry, with an increasing amount of unstructured text data available on the web. A few previous studies have sought to adopt big data analytics in business prediction modeling. Nevertheless, the use of qualitative information on the web for business prediction modeling is still in its early stage, restricted to limited applications such as stock prediction and movie revenue prediction. Thus, it is necessary to apply big data analytics techniques, such as text mining, to various business prediction problems, including credit risk evaluation. Analytic methods are required for processing qualitative information represented in unstructured text form because of the complexity of managing and processing unstructured text data. This study proposes a bankruptcy prediction model for Korean small- and medium-sized construction firms that uses both quantitative information, such as financial ratios, and qualitative information acquired from economic news articles. The performance of the proposed method depends on how well the qualitative information is transformed into quantitative information suitable for incorporation into the bankruptcy prediction model. We employ big data analytics techniques, especially text mining, as a mechanism for processing the qualitative information. A sentiment index is produced at the industry level by extracting sentiment from a large amount of text data to quantify the external economic atmosphere represented in the media. The proposed method involves keyword-based sentiment analysis using a domain-specific sentiment lexicon to extract sentiment from economic news articles.
The generated sentiment lexicon is designed to represent sentiment for the construction business by considering the relationship between an occurring term and the actual economic condition of the industry, rather than the inherent semantics of the term. The experimental results showed that incorporating qualitative information based on big data analytics into the traditional accounting-based bankruptcy prediction model is effective for enhancing predictive performance. The sentiment variable extracted from economic news articles had an impact on corporate bankruptcy. In particular, the negative sentiment variable improved the accuracy of corporate bankruptcy prediction, because the bankruptcy of construction firms is sensitive to poor economic conditions. The bankruptcy prediction model using qualitative information based on big data analytics contributes to the field in that it reflects not only relatively recent information but also environmental factors, such as external economic conditions.
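A keyword-based sentiment index of the kind described can be sketched as follows; the lexicon entries, weights, and names are purely illustrative stand-ins for the paper's domain-specific construction-industry lexicon.

```python
# Hypothetical domain lexicon: phrase -> polarity weight (illustrative only)
CONSTRUCTION_LEXICON = {
    "housing starts rise": +1.0,
    "order backlog grows": +1.0,
    "unsold apartments": -1.0,
    "construction bankruptcy": -1.5,
    "project financing freeze": -1.5,
}

def sentiment_index(articles):
    """Average keyword-based sentiment over a set of news articles.

    Each article contributes the sum of lexicon weights for the
    phrases it contains; the index is the mean over all articles.
    """
    scores = []
    for text in articles:
        text = text.lower()
        scores.append(sum(weight for phrase, weight in CONSTRUCTION_LEXICON.items()
                          if phrase in text))
    return sum(scores) / len(scores) if scores else 0.0

news = [
    "Unsold apartments pile up amid a project financing freeze.",
    "Housing starts rise on the back of new public orders.",
]
print(sentiment_index(news))  # negative value -> poor industry sentiment
```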

A Study of Traffic Incident Flow Characteristics on Korean Highway Using Multi-Regime (Multi-Regime에 의한 돌발상황 시 교통류 분석)

  • Lee, Seon-Ha;Kang, Hee-Chan
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.4 no.1 s.6 / pp.43-56 / 2005
  • This research examines a time series analysis (TSA) of hourly traffic information such as occupancy, traffic flow, and speed, a statistical model of the surveyed data on the traffic fundamental diagram, and how a traffic jam expands across different parts of the traffic flow. Based on data detected from traffic accidents on the Cheonan-Nonsan highway, events in which the road capacity decreases dramatically, such as traffic accidents, can be estimated from the change of occupancy right after the accident. For jam-like events, the change in occupancy and mean speed is gentle, so the speed and accuracy of detection by time series analysis of a simple traffic index is weak. Under stable flow, the relationship between occupancy and flow is linear and has very high reliability. In contrast, platoon formation, marked by a wide deviation from drivers' desired speed, is difficult to express with a statistical model of the speed-occupancy relationship; in this case the speed drops sharply at 6~8% occupancy. For unstable flow, it is difficult to adopt a single statistical model, because the formation-clearance process of a traffic jam has to be analyzed in separate parts. Considering the formation-clearance process of a traffic jam as a two-part division, the flow containing an accident transitions to a stopped flow and the occupancy increases dramatically. When the flow recovers to free flow, the occupancy that had increased dramatically decreases gradually and the traffic flow then increases, according to the multi-regime time series analysis. During a traffic jam, the traffic flow transitions from an impeded free flow to a congested flow and then to a jammed flow, which is more complicated than the accident case, and the gap in traffic volume among traffic conditions at the same occupancy becomes large. This research demonstrates the need for a multi-regime division when analyzing traffic flow; future work should provide a quantitative division of, and models for, each traffic regime.
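The multi-regime idea, fitting separate flow-occupancy relationships for stable and congested conditions rather than one global model, can be illustrated with a small sketch; the regime boundary, the data, and the function name are illustrative assumptions, not the study's values.

```python
import numpy as np

def fit_regimes(occupancy, flow, boundary=8.0):
    """Fit separate linear flow-occupancy models below and above an
    occupancy boundary (%), returning (slope, intercept) per regime.
    The 8% default echoes the 6~8% occupancy range where speed was
    observed to drop, but it is only an illustrative choice."""
    occ = np.asarray(occupancy, dtype=float)
    q = np.asarray(flow, dtype=float)
    models = {}
    for name, mask in [("stable", occ < boundary), ("congested", occ >= boundary)]:
        slope, intercept = np.polyfit(occ[mask], q[mask], 1)
        models[name] = (slope, intercept)
    return models

# Synthetic example: flow grows linearly at low occupancy, then breaks down
occ = [2, 4, 6, 7, 10, 14, 20, 30]
flow = [300, 620, 900, 1050, 980, 900, 760, 520]
print(fit_regimes(occ, flow))
```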


Quality Evaluation through Inter-Comparison of Satellite Cloud Detection Products in East Asia (동아시아 지역의 위성 구름탐지 산출물 상호 비교를 통한 품질 평가)

  • Byeon, Yugyeong;Choi, Sungwon;Jin, Donghyun;Seong, Noh-hun;Jung, Daeseong;Sim, Suyoung;Woo, Jongho;Jeon, Uujin;Han, Kyung-soo
    • Korean Journal of Remote Sensing / v.37 no.6_2 / pp.1829-1836 / 2021
  • Cloud detection determines the presence or absence of clouds in each pixel of a satellite image, and it is an important factor affecting the utility and accuracy of satellite products. In this study, among the satellites of various agencies that provide cloud detection data, we perform a quantitative and qualitative comparative analysis of the differences between the cloud detection products of GK-2A/AMI, Terra/MODIS, and Suomi-NPP/VIIRS. In the quantitative comparison, the Proportion Correct (PC) index values in January were 74.16% for GK-2A & MODIS and 75.39% for GK-2A & VIIRS, while in April they were 87.35% for GK-2A & MODIS and 87.71% for GK-2A & VIIRS; the April values were higher than those of January, with little difference between satellites. In the qualitative comparison with RGB images, it was confirmed that, consistent with the quantitative results, the April cases detected clouds better than the January cases. However, when thin clouds or snow cover were present, the satellites showed some differences in their cloud detection results.
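The Proportion Correct (PC) index used above is simply the share of pixels on which two cloud masks agree. A minimal sketch, assuming the two products have already been remapped onto a common grid:

```python
import numpy as np

def proportion_correct(mask_a, mask_b):
    """Proportion Correct between two binary cloud masks (1 = cloud).

    PC = (both cloudy + both clear) / all compared pixels, in percent.
    Assumes the masks are already collocated on the same grid.
    """
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    return 100.0 * np.sum(a == b) / a.size

# Toy 2 x 3 masks standing in for collocated GK-2A and MODIS scenes
gk2a  = np.array([[1, 0, 1], [0, 0, 1]])
modis = np.array([[1, 0, 0], [0, 1, 1]])
print(f"PC = {proportion_correct(gk2a, modis):.2f}%")  # 66.67% here
```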

Prediction of a hit drama with a pattern analysis on early viewing ratings (초기 시청시간 패턴 분석을 통한 대흥행 드라마 예측)

  • Nam, Kihwan;Seong, Nohyoon
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.33-49 / 2018
  • The impact of a TV drama's success on channel ratings and promotion effectiveness is very high, and its cultural and business impact has been demonstrated through the Korean Wave. Therefore, early prediction of a blockbuster TV drama is very important from the strategic perspective of the media industry. Previous studies have tried to predict audience ratings and the success of dramas with various methods, but most have made simple predictions using intuitive factors such as the main actor and the time slot, which limits their predictive power. In this study, we propose a model for predicting the popularity of a drama by analyzing customers' viewing patterns on the basis of various theories. This is not only a theoretical contribution but also a practical one that can be used by actual broadcasting companies. We collected data on 280 TV mini-series dramas broadcast over terrestrial channels during the 10 years from 2003 to 2012. From these data, we selected the most highly ranked and the least highly ranked 45 TV dramas and analyzed their viewing patterns in 11 steps. The assumptions and conditions for modeling are based on existing studies, on the opinions of actual broadcasters, and on data mining techniques. We then developed a prediction model by measuring the viewing-time distance (difference) using Euclidean and correlation methods, which we term similarity (the sum of distances). Through this similarity measure, we predicted the success of dramas from viewers' initial viewing-time pattern distribution over episodes 1~5. To check whether the model is sensitive to the choice of measurement method, various distance measures were applied and the robustness of the model was verified. Once the model was established, we improved its predictive power using a grid search. Furthermore, when a new drama is broadcast, we classify viewers who watched more than 70% of the total airtime as "passionate viewers" and compare the percentage of passionate viewers between the most highly ranked and the least highly ranked dramas, so that the possibility of a blockbuster TV mini-series can be determined. We find that the initial viewing-time pattern is the key factor for predicting blockbuster dramas: our model correctly classified blockbuster dramas with 75.47% accuracy using the initial viewing-time pattern analysis. This paper shows a high prediction rate while suggesting an audience measurement method different from existing ones. Currently, broadcasters rely heavily on a few famous actors, the so-called star system, and face more severe competition than ever due to rising production costs, a long-term recession, aggressive investment by comprehensive programming channels, and large corporations, leaving everyone in a financially difficult situation. The basic revenue model of these broadcasters is advertising, and the execution of advertising is based on audience ratings as a basic index. The drama market carries high uncertainty, because demand is difficult to forecast owing to the nature of the product, while dramas contribute heavily to the financial success of a broadcaster's content. Therefore, minimizing the risk of failure is essential.
Thus, analyzing the distribution of initial viewing time can provide practical help in establishing a response strategy (programming, marketing, story changes, etc.) for the related company. We also found that audience behavior is crucial to the success of a program. In this paper, we define viewing loyalty as a measure of how enthusiastically a program is watched, and we can predict the success of a program by calculating the loyalty of these passionate viewers. This way of calculating loyalty can also be used to measure loyalty to various platforms, and it can be applied to marketing activities such as highlights, script previews, making-of films, characters, games, and other marketing projects.
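The core of the model, measuring how close a new drama's early viewing-time pattern is to those of past hits and flops with a Euclidean or correlation distance, can be sketched as below; the pattern values, step counts, and labels are illustrative assumptions, not the paper's data.

```python
import numpy as np

def euclidean(a, b):
    return np.linalg.norm(np.asarray(a) - np.asarray(b))

def correlation_distance(a, b):
    return 1.0 - np.corrcoef(a, b)[0, 1]

def predict_hit(new_pattern, hit_patterns, flop_patterns, dist=euclidean):
    """Label a new drama by whichever group its early viewing-time
    pattern is closer to overall (similarity = sum of distances)."""
    d_hit = sum(dist(new_pattern, p) for p in hit_patterns)
    d_flop = sum(dist(new_pattern, p) for p in flop_patterns)
    return "hit" if d_hit < d_flop else "flop"

# Toy viewing-time distributions over the first episodes (illustrative)
hits = [[0.20, 0.30, 0.50, 0.70, 0.90], [0.25, 0.35, 0.55, 0.75, 0.95]]
flops = [[0.20, 0.22, 0.25, 0.27, 0.30], [0.15, 0.18, 0.20, 0.22, 0.25]]
print(predict_hit([0.22, 0.30, 0.50, 0.65, 0.85], hits, flops))  # -> "hit"
```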

Very short-term rainfall prediction based on radar image learning using deep neural network (심층신경망을 이용한 레이더 영상 학습 기반 초단시간 강우예측)

  • Yoon, Seongsim;Park, Heeseong;Shin, Hongjoon
    • Journal of Korea Water Resources Association / v.53 no.12 / pp.1159-1172 / 2020
  • This study applied deep convolutional neural networks based on U-Net and SegNet to very short-term rainfall prediction using a long period of weather radar data, and the results were compared with those of a translation model. For training and validation of the deep neural networks, Mt. Gwanak and Mt. Gwangdeoksan radar data were collected from 2010 to 2016 and converted to gray-scale image files in HDF5 format with a 1 km spatial resolution. The deep neural network model was trained to predict precipitation 10 minutes ahead from four consecutive radar images, and a recursive method of repeated forecasts was applied to extend the lead time to 60 minutes with the pretrained model. To evaluate the performance of the deep neural network prediction model, rainfall was forecast up to 60 minutes in advance for 24 rain cases in 2017. Evaluating the predictions with the mean absolute error (MAE) and the critical success index (CSI) at thresholds of 0.1, 1, and 5 mm/hr, the deep neural network model performed better in terms of MAE for the 0.1 and 1 mm/hr thresholds and performed better than the translation model in terms of CSI for a lead time of 50 minutes. In particular, although the deep neural network model generally performed better than the translation model for weak rainfall of 5 mm/hr or less, the evaluation at the 5 mm/hr threshold showed that it had limitations in predicting distinct high-intensity precipitation features: as the lead time increases, spatial smoothing grows, thereby reducing the accuracy of the rainfall prediction. The translation model turned out to be superior in predicting exceedances of higher intensity thresholds (> 5 mm/hr) because it preserves distinct precipitation features, although the rainfall position tends to shift incorrectly. This study is expected to be helpful for improving radar rainfall prediction models using deep neural networks in the future. In addition, the massive weather radar data set established in this study will be provided through open repositories for use in subsequent studies.
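The evaluation metrics mentioned above, MAE and the Critical Success Index at a rain-rate threshold, can be computed as in the following sketch; the arrays and the 1 mm/hr default threshold are illustrative.

```python
import numpy as np

def mae(forecast, observed):
    """Mean absolute error of rain rates (mm/hr)."""
    return np.mean(np.abs(np.asarray(forecast) - np.asarray(observed)))

def csi(forecast, observed, threshold=1.0):
    """Critical Success Index = hits / (hits + misses + false alarms)
    at a given rain-rate threshold (mm/hr)."""
    f = np.asarray(forecast) >= threshold
    o = np.asarray(observed) >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    denom = hits + misses + false_alarms
    return hits / denom if denom else np.nan

fcst = np.array([0.0, 0.5, 2.0, 4.0])
obs  = np.array([0.0, 1.2, 1.5, 5.0])
print(mae(fcst, obs), csi(fcst, obs, threshold=1.0))
```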

Comparative assessment and uncertainty analysis of ensemble-based hydrologic data assimilation using airGRdatassim (airGRdatassim을 이용한 앙상블 기반 수문자료동화 기법의 비교 및 불확실성 평가)

  • Lee, Garim;Lee, Songhee;Kim, Bomi;Woo, Dong Kook;Noh, Seong Jin
    • Journal of Korea Water Resources Association / v.55 no.10 / pp.761-774 / 2022
  • Accurate hydrologic prediction is essential for analyzing the effects of drought, flood, and climate change on flow rates, water quality, and ecosystems. Disentangling the uncertainty of the hydrological model is one of the important issues in hydrology and water resources research. Hydrologic data assimilation (DA), a technique that updates the states or parameters of a hydrological model to produce the most likely estimates of its initial conditions, is one way to minimize uncertainty in hydrological simulations and improve predictive accuracy. In this study, two ensemble-based sequential DA techniques, the ensemble Kalman filter and the particle filter, are comparatively analyzed for daily discharge simulation at the Yongdam catchment using airGRdatassim. The results showed that the Kling-Gupta efficiency (KGE) improved from 0.799 in the open-loop simulation to 0.826 with the ensemble Kalman filter and to 0.933 with the particle filter. In addition, we analyzed the effects of hyper-parameters related to the data assimilation methods, such as the precipitation and potential evaporation forcing error parameters and the selection of perturbed and updated states. For the forcing error conditions, the particle filter was superior to the ensemble Kalman filter in terms of the KGE index, and the optimal forcing noise was relatively smaller for the particle filter than for the ensemble Kalman filter. In addition, with more state variables included in the updating step, the performance of data assimilation improved, implying that an adequate selection of updated states can itself be considered a hyper-parameter. The simulation experiments in this study imply that DA hyper-parameters need to be carefully optimized to exploit the potential of DA methods.
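The Kling-Gupta efficiency reported above combines correlation, variability, and bias ratios into a single score. A minimal sketch of the standard KGE formulation (not the airGRdatassim internals), with made-up discharge series:

```python
import numpy as np

def kge(simulated, observed):
    """Kling-Gupta efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2),
    where r is the correlation coefficient, alpha the ratio of standard
    deviations, and beta the ratio of means (simulated over observed)."""
    sim = np.asarray(simulated, dtype=float)
    obs = np.asarray(observed, dtype=float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = np.std(sim) / np.std(obs)
    beta = np.mean(sim) / np.mean(obs)
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([1.0, 2.5, 4.0, 3.0, 2.0])
sim = np.array([1.2, 2.3, 3.8, 3.2, 1.9])
print(kge(sim, obs))  # values close to 1 indicate a good fit
```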

Prediction of Maximal Oxygen Uptake in Males Aged 18~34 Years (18~34 남성의 최대산소 섭취량 추정)

  • Jeon, Yoo-Joung;Im, Jae-Hyeng;Lee, Byung-Kun;Kim, Chang-Hwan;Kim, Byeong-Wan
    • 한국체육학회지인문사회과학편 / v.51 no.3 / pp.373-382 / 2012
  • The purpose of this study is to predict VO2max from body indices and submaximal metabolic responses. The subjects were 250 males aged 18 to 34, randomly separated into two groups: 179 for the estimation sample and 71 for the cross-validation group. They underwent maximal exercise testing with the Bruce protocol, and metabolic responses were measured at the end of the first stage (3 minutes) and the second stage (6 minutes). To predict VO2max, we applied stepwise multiple regression analysis to the estimation sample. Model 1's variables are weight, 6-minute HR, and 6-minute VO2 (R=0.64, SEE=4.74, CV=11.7%, p<.01), and the equation is VO2max(ml/kg/min) = 72.256 - 0.340(Weight) - 0.220(6minHR) + 0.013(6minVO2). Model 2's variables are weight, 6-minute HR, 6-minute VO2, and 6-minute VCO2 (R=0.67, SEE=4.59, CV=11.3%, p<.01), and the equation is VO2max(ml/kg/min) = 68.699 - 0.277(Weight) - 0.206(6minHR) + 0.020(6minVO2) - 0.009(6minVCO2). Neither model showed multicollinearity, and Model 2 had a higher correlation than Model 1. In the cross-validation with the 71 men, measured and estimated VO2max were significantly correlated (R=0.53 and 0.56, p<.01). Although both models are valid and useful considering their simplicity and utility, Model 2 is more accurate.
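The two reported regression equations can be written directly as functions. The coefficients below are those quoted in the abstract; the example inputs, and the assumption that HR is in beats/min and VO2/VCO2 in ml/min, are illustrative.

```python
def vo2max_model1(weight_kg, hr_6min, vo2_6min):
    """Model 1 (R=0.64, SEE=4.74): weight, 6-min HR, 6-min VO2."""
    return 72.256 - 0.340 * weight_kg - 0.220 * hr_6min + 0.013 * vo2_6min

def vo2max_model2(weight_kg, hr_6min, vo2_6min, vco2_6min):
    """Model 2 (R=0.67, SEE=4.59): adds 6-min VCO2."""
    return (68.699 - 0.277 * weight_kg - 0.206 * hr_6min
            + 0.020 * vo2_6min - 0.009 * vco2_6min)

# Arbitrary example inputs, not subject data from the study
print(vo2max_model1(70, 140, 1800))        # estimated VO2max in ml/kg/min
print(vo2max_model2(70, 140, 1800, 1600))  # estimated VO2max in ml/kg/min
```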

Emoticon by Emotions: The Development of an Emoticon Recommendation System Based on Consumer Emotions (Emoticon by Emotions: 소비자 감성 기반 이모티콘 추천 시스템 개발)

  • Kim, Keon-Woo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.227-252 / 2018
  • The evolution of instant communication has mirrored the development of the Internet, and messenger applications are among the most representative manifestations of instant communication technologies. In messenger applications, senders use emoticons to supplement the emotions conveyed in the text of their messages. Because communication via messenger applications is not face-to-face, it is difficult for senders to communicate their emotions to message recipients. Emoticons have long been used as symbols that indicate the mood of the speaker. At present, however, emoticon use is evolving into a means of conveying the psychological states of consumers who want to express individual characteristics and personality quirks while communicating their emotions to others. The fact that companies like KakaoTalk, Line, and Apple have entered the emoticon business, and that sales of related content are expected to grow, testifies to the significance of this phenomenon. Nevertheless, despite the development of emoticons themselves and the growth of the emoticon market, no suitable emoticon recommendation system has yet been developed. Even KakaoTalk, a messenger application that commands more than 90% of the domestic market share in South Korea, merely groups emoticons into popularity, most-recent, or brief categories, so consumers face the inconvenience of constantly scrolling to locate the emoticons they want. An emoticon recommendation system would improve consumer convenience and satisfaction and increase the sales revenue of companies that sell emoticons. To recommend appropriate emoticons, it is necessary to quantify the emotions that consumers perceive in emoticons. Such quantification enables us to analyze the characteristics and emotions felt by consumers who used similar emoticons, which in turn facilitates emoticon recommendations. One way to quantify emoticon use is metadata-ization, a means of structuring or organizing unstructured and semi-structured data to extract meaning. By structuring unstructured emoticon data through metadata-ization, we can easily classify emoticons based on the emotions consumers want to express. To determine emoticons' precise emotions, we considered sub-detail expressions: not only the seven common emotional adjectives but also the metaphorical expressions that appear only in Korean, as identified by previous studies of emotion focused on emoticon characteristics. We therefore collected the sub-detail expressions of emotion based on "Shape", "Color", and "Adumbration". Moreover, to design a highly accurate recommendation system, we considered both emoticon-technical indexes and emoticon-emotional indexes. We identified 14 features for the emoticon-technical indexes and selected 36 emotional adjectives; the 36 emotional adjectives consisted of contrasting pairs, which we reduced to 18, and we measured the 18 emotional adjectives using 40 emoticon sets randomly selected from the top-ranked emoticons in the KakaoTalk shop. We surveyed 277 consumers in their mid-twenties who had experience purchasing emoticons; we recruited them online and asked each to evaluate five different emoticon sets. After data acquisition, we conducted a factor analysis of the emoticon-emotional factors and extracted four factors that we named "Comic", "Softness", "Modernity", and "Transparency".
We analyzed both the relationship between the indexes and consumer attitude and the relationship between the emoticon-technical indexes and the emoticon-emotional factors. Through this process, we confirmed that the emoticon-technical indexes did not directly affect consumer attitudes but had a mediating effect on them through the emoticon-emotional factors. The results of the analysis revealed the mechanism consumers use to evaluate emoticons: the emoticon-technical indexes affect the emoticon-emotional factors, and the emoticon-emotional factors affect consumer satisfaction. We therefore designed the emoticon recommendation system using only the four emoticon-emotional factors, with a recommendation method that calculates the Euclidean distance on each factor's emotion score. To assess the accuracy of the emoticon recommendation system, we compared the emotional patterns of the selected emoticons with those of the recommended emoticons; the emotional patterns corresponded in principle. We verified the emoticon recommendation system by testing prediction accuracy: the predictions were 81.02% accurate in the first test, 76.64% in the second, and 81.63% in the third. This study developed a methodology that can be used in various fields, both academically and practically. We expect that the emoticon recommendation system we designed will increase emoticon sales for companies in this domain and make consumer experiences more convenient. In addition, this study serves as an important first step toward an intelligent emoticon recommendation system. The emotional factors proposed in this study could be collected in an emotional library that serves as an emotion index for evaluation when new emoticons are released. Moreover, by combining the accumulated emotional library with company sales data, sales information, and consumer data, companies could develop hybrid recommendation systems that bolster convenience for consumers and serve as intellectual assets that companies can strategically deploy.
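The recommendation step described above, scoring emoticon sets on the four emotional factors and ranking them by Euclidean distance from the consumer's desired profile, can be sketched as follows; the factor scores and set names are invented for illustration.

```python
import math

# Hypothetical factor scores: (Comic, Softness, Modernity, Transparency)
EMOTICON_FACTORS = {
    "set_A": (0.8, 0.2, 0.5, 0.1),
    "set_B": (0.1, 0.9, 0.4, 0.6),
    "set_C": (0.5, 0.5, 0.9, 0.3),
}

def recommend(target, catalog, top_n=2):
    """Rank emoticon sets by Euclidean distance between the consumer's
    desired factor profile and each set's factor profile."""
    def distance(profile):
        return math.sqrt(sum((t - p) ** 2 for t, p in zip(target, profile)))
    ranked = sorted(catalog.items(), key=lambda item: distance(item[1]))
    return [name for name, _ in ranked[:top_n]]

# A consumer who wants something soft and transparent rather than comic
print(recommend((0.2, 0.8, 0.5, 0.7), EMOTICON_FACTORS))  # ['set_B', 'set_C']
```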

Clinical Application of Serum CEA, SCC, Cyfra21-1, and TPA in Lung Cancer (폐암환자에서 혈청 CEA, SCC, Cyfra21-1, TPA-M 측정의 의의)

  • Lee, Jun-Ho;Kim, Kyung-Chan;Lee, Sang-Jun;Lee, Jong-Kook;Jo, Sung-Jae;Kwon, Kun-Young;Han, Sung-Beom;Jeon, Young-June
    • Tuberculosis and Respiratory Diseases / v.44 no.4 / pp.785-795 / 1997
  • Background: Tumor markers have been used in diagnosis, prediction of disease extent, monitoring of recurrence after therapy, and prediction of prognosis, but their utility in lung cancer has been limited by low sensitivity and specificity. TPA-M is a recently developed marker that uses combined monoclonal antibodies against cytokeratins 8, 18, and 19. This study was conducted to evaluate the efficacy of the new tumor marker TPA-M by comparing it with the established markers SCC, CEA, and Cyfra21-1 in lung cancer. Method: An immunoradiometric assay of serum CEA, SCC, Cyfra21-1, and TPA-M was performed in 49 pathologically confirmed lung cancer patients who visited Keimyung University Hospital from April 1996 to August 1996 and in 29 patients with benign lung diseases. Commercially available kits were used: Ab bead CEA (Eiken) for CEA, SCC RIA BEAD (DAINABOT) for SCC, CA2H (TFB) for Cyfra21-1, and TPA-M (DAIICHI) for TPA-M. Results: The mean serum values of the lung cancer group and the control group were 10.05±38.39 μg/L and 1.59±0.94 μg/L for CEA, 3.04±5.79 μg/L and 1.58±2.85 μg/L for SCC, 8.27±11.96 μg/L and 1.77±2.72 μg/L for Cyfra21-1, and 132.02±209.35 U/L and 45.86±75.86 U/L for TPA-M, respectively. Serum values of Cyfra21-1 and TPA-M in the lung cancer group were higher than in the control group (p<0.05). Using the cutoff values recommended by the manufacturers (2.5 μg/L for CEA, 3.0 μg/L for Cyfra21-1, 70.0 U/L for TPA-M, and 2.0 μg/L for SCC), the sensitivity and specificity for lung cancer were 33.3% and 78.6% for CEA, 50.0% and 89.7% for Cyfra21-1, 52.3% and 89.7% for TPA-M, and 23.8% and 89.3% for SCC. For non-small cell lung cancer, sensitivity and specificity were 36.1% and 78.1% for CEA, 50.1% and 89.7% for Cyfra21-1, 53.1% and 89.7% for TPA-M, and 33.8% and 89.3% for SCC. For small cell lung cancer, sensitivity and specificity were 25.0% and 78.5% for CEA, 50.0% and 89.6% for Cyfra21-1, 50.0% and 89.6% for TPA-M, and 0% and 89.2% for SCC. The cutoff values determined from ROC (receiver operating characteristic) curves were 1.25 μg/L for CEA, 1.5 μg/L for Cyfra21-1, 35 U/L for TPA-M, and 0.6 μg/L for SCC; with these cutoff values, the sensitivity, specificity, accuracy, and kappa index of Cyfra21-1 and TPA-M were better than those of CEA and SCC. Only SCC was significantly related to TNM stage when patients were divided into operable stages (TNM stage I to IIIA) and inoperable stages (IIIB and IV) (p<0.05), and no tumor marker showed a significant correlation with tumor size (p>0.05). Conclusion: Serum TPA-M and Cyfra21-1 show higher sensitivity and specificity than CEA and SCC in pathologically confirmed lung cancer overall and in non-small cell lung cancer. SCC has higher specificity in non-small cell lung cancer, and the level of serum SCC is significantly related to TNM staging.
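The sensitivity and specificity figures quoted above follow from counting cancer patients and controls above a marker cutoff. A minimal sketch; the sample values are invented, and the 3.0 μg/L cutoff simply echoes the manufacturer's Cyfra21-1 cutoff.

```python
import numpy as np

def sensitivity_specificity(cancer_values, control_values, cutoff):
    """Sensitivity/specificity of a serum marker at a given cutoff:
    sensitivity = cancer patients above the cutoff / all cancer patients,
    specificity = controls at or below the cutoff / all controls."""
    cancer = np.asarray(cancer_values, dtype=float)
    control = np.asarray(control_values, dtype=float)
    sensitivity = np.mean(cancer > cutoff)
    specificity = np.mean(control <= cutoff)
    return sensitivity, specificity

cancer = [1.2, 4.5, 8.3, 2.9, 12.0, 0.8]   # illustrative marker levels
control = [0.5, 1.1, 2.2, 0.9, 3.5, 1.8]
print(sensitivity_specificity(cancer, control, cutoff=3.0))
```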
