• Title/Summary/Keyword: Predicting Patterns

Chronic HBV Infection in Children: The histopathologic classification and its correlation with clinical findings (소아의 만성 B형 간염: 새로운 병리조직학적 분류와 임상 소견의 상관 분석)

  • Lee, Seon-Young;Ko, Jae-Sung;Kim, Chong-Jai;Jang, Ja-June;Seo, Jeong-Kee
    • Pediatric Gastroenterology, Hepatology & Nutrition / v.1 no.1 / pp.56-78 / 1998
  • Objective: Chronic hepatitis B (CHB) infection occurs in 6% to 10% of the population in Korea. In ethnic communities where the prevalence of chronic infection is high, such as Korea, transmission of hepatitis B is either vertical (i.e., by perinatal infection) or through close family contact (usually with mothers or siblings) during the first 5 years of life. The development of chronic hepatitis B infection becomes increasingly common the earlier a person is exposed to the virus, particularly in fetal and neonatal life, and it may progress to cirrhosis and hepatocellular carcinoma, especially with severe liver damage and perinatal infection. The histopathology of CHB is important when evaluating final outcomes. A numerical scoring system, a semiquantitative, objective, and reproducible classification of chronic viral hepatitis, is a valuable tool for statistical analysis when predicting the outcome and evaluating antiviral and other therapies. In this study, a numerical scoring system (the Ludwig system) was applied and compared with the conventional histological classification of De Groote, and clinical findings, family history, serology, and liver function tests were analyzed comparatively by histopathological findings in children with chronic hepatitis B. Methods: Ninety-nine patients [mean age 9 years (range 17 months to 16 years)] with clinical, biochemical, serological, and histological patterns of chronic HBV infection were included in this study. Five of these children had hepatocellular carcinoma. There were 83 boys and 16 girls. All underwent liver biopsy, and histologic evaluation was performed by a single pathologist. The biopsy specimens were classified according to the standard criteria of De Groote as follows: normal, chronic lobular hepatitis (CLH), chronic persistent hepatitis (CPH), mild to severe chronic active hepatitis (CAH), active cirrhosis, inactive cirrhosis, or hepatocellular carcinoma (HCC). The biopsy specimens were also assessed and scored semiquantitatively with the numerical Ludwig scoring system. Serum HBsAg, anti-HBs, HBeAg, anti-HBe, anti-HBc (IgG, IgM), and HDV were measured by radioimmunoassay. Results: Males predominated, with a male-to-female ratio of 5.2:1 for all patients. Of the 99 patients, 2 had normal histology, 2 had CLH, 22 had CPH, 40 had mild CAH, 19 had moderate CAH, 1 had severe CAH, 7 had active cirrhosis, 1 had inactive cirrhosis, and 5 had HCC. The mean age, sex distribution, symptoms, signs, and family history did not differ statistically among the histologic groups. The numerical scoring system correlated well with the conventional histological classification. The histological activity evaluated by both the conventional classification and the scoring system was more severe at higher levels of serum aminotransferases; however, the serum aminotransferase levels were not useful for predicting the degree of histologic activity because of their widely overlapping ranges. The more severe the histological activity, and especially the more advanced the cirrhosis, the more prolonged the prothrombin time. Histological severity was inversely related to the time since HBeAg seroconversion. Conclusions: Histological activity could not be accurately predicted from clinical and biochemical findings, but only by proper histological classification of the biopsy specimen with the numerical scoring system. The numerical scoring system correlated well with the conventional histological classification and appears to be a valuable tool for statistical analysis when predicting the outcome and evaluating the effects of antiviral and other therapies in chronic hepatitis B in children.

Characterizing the Spatial Distribution of Oak Wilt Disease Using Remote Sensing Data (원격탐사자료를 이용한 참나무시들음병 피해목의 공간분포특성 분석)

  • Cha, Sungeun;Lee, Woo-Kyun;Kim, Moonil;Lee, Sle-Gee;Jo, Hyun-Woo;Choi, Won-Il
    • Journal of Korean Society of Forest Science / v.106 no.3 / pp.310-319 / 2017
  • This study categorized damaged trees by supervised classification of time-series aerial photographs of Bukhan, Cheonggye, and Suri mountains, because oak wilt disease appeared to be concentrated in the metropolitan region. In order to analyze the spatial characteristics of the damaged areas, geographical characteristics such as elevation and slope were statistically analyzed to confirm their strong correlation. The statistical analysis of Moran's I yielded the following: (i) Moran's I for Bukhan mountain was estimated to be 0.25, 0.32, and 0.24 in 2009, 2010, and 2012, respectively; (ii) Moran's I for Cheonggye mountain was estimated to be 0.26, 0.32, and 0.22 in 2010, 2012, and 2014, respectively; and (iii) Moran's I for Suri mountain was estimated to be 0.42 and 0.42 in 2012 and 2014, respectively. These values suggest that the damaged trees are distributed in clusters. In addition, we conducted hotspot analysis to identify how the damaged-tree clusters shift over time, and we verified that the hotspots move in time series. According to the analysis of the entire hotspot areas (z-score > 1.65), there was an 80 percent probability of oak wilt disease occurring in broadleaf or mixed-stand forests at elevations of 200~400 m and slopes of 20~40 degrees. This result indicates that oak wilt disease hotspots can occur in, or shift into, areas with these geographical features or forest conditions. Therefore, this outcome can be used as a basic resource when predicting oak wilt disease spread patterns, and it can help prevent damage from diseases and insect pests by assisting policy makers in implementing the necessary measures.
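
For reference, the global Moran's I statistic reported above is conventionally defined as

$$ I = \frac{n}{\sum_{i=1}^{n}\sum_{j=1}^{n} w_{ij}} \cdot \frac{\sum_{i=1}^{n}\sum_{j=1}^{n} w_{ij}\,(x_i - \bar{x})(x_j - \bar{x})}{\sum_{i=1}^{n} (x_i - \bar{x})^{2}} $$

where $x_i$ is the observed value (here, damaged-tree occurrence) at location $i$, $\bar{x}$ its mean, and $w_{ij}$ the spatial weight between locations $i$ and $j$ (the specific weight matrix used in the study is not stated in the abstract). Values near +1 indicate spatial clustering, values near 0 spatial randomness, and negative values dispersion, which is why the reported values of 0.22-0.42 are read as clustered damage.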

High-Resolution Numerical Simulations with WRF/Noah-MP in Cheongmicheon Farmland in Korea During the 2014 Special Observation Period (2014년 특별관측 기간 동안 청미천 농경지에서의 WRF/Noah-MP 고해상도 수치모의)

  • Song, Jiae;Lee, Seung-Jae;Kang, Minseok;Moon, Minkyu;Lee, Jung-Hoon;Kim, Joon
    • Korean Journal of Agricultural and Forest Meteorology / v.17 no.4 / pp.384-398 / 2015
  • In this paper, the high-resolution Weather Research and Forecasting/Noah-MultiParameterization (WRF/Noah-MP) modeling system is configured for the Cheongmicheon Farmland site in Korea (CFK), and its performance in land and atmospheric simulation is evaluated using the observed data at CFK during the 2014 special observation period (21 August-10 September). In order to explore the usefulness of turning on Noah-MP dynamic vegetation in midterm simulations of surface and atmospheric variables, two numerical experiments are conducted without dynamic vegetation and with dynamic vegetation (referred to as CTL and DVG experiments, respectively). The main results are as following. 1) CTL showed a tendency of overestimating daytime net shortwave radiation, thereby surface heat fluxes and Bowen ratio. The CTL experiment showed reasonable magnitudes and timing of air temperature at 2 m and 10 m; especially the small error in simulating minimum air temperature showed high potential for predicting frost and leaf wetness duration. The CTL experiment overestimated 10-m wind and precipitation, but the beginning and ending time of precipitation were well captured. 2) When the dynamic vegetation was turned on, the WRF/Noah-MP system showed more realistic values of leaf area index (LAI), net shortwave radiation, surface heat fluxes, Bowen ratio, air temperature, wind and precipitation. The DVG experiment, where LAI is a prognostic variable, produced larger LAI than CTL, and the larger LAI showed better agreement with the observed. The simulated Bowen ratio got closer to the observed ratio, indicating reasonable surface energy partition. The DVG experiment showed patterns similar to CTL, with differences for maximum air temperature. Both experiments showed faster rising of 10-m air temperature during the morning growth hours, presumably due to the rapid growth of daytime mixed layers in the Yonsei University (YSU) boundary layer scheme. The DVG experiment decreased errors in simulating 10-m wind and precipitation. 3) As horizontal resolution increases, the models did not show practical improvement in simulation performance for surface fluxes, air temperature, wind and precipitation, and required three-dimensional observation for more agricultural land spots as well as consistency in model topography and land cover data.
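
As background, the Bowen ratio referred to above is the standard ratio of sensible to latent heat flux at the surface,

$$ \beta = \frac{H}{\lambda E} $$

where $H$ is the sensible heat flux and $\lambda E$ the latent heat flux. An overestimate of daytime net shortwave radiation inflates the energy available for both fluxes, which is why the abstract links the radiation bias to biased fluxes and Bowen ratio, and why a more realistic LAI (more transpiring leaf area, hence more latent heat) pulls the simulated ratio toward the observed one.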

The feasibility evaluation of Respiratory Gated radiation therapy simulation according to the Respiratory Training with lung cancer (폐암 환자의 호흡훈련에 의한 호흡동조 방사선치료계획의 유용성 평가)

  • Hong, Mi Ran;Kim, Cheol Jong;Park, Soo Yeon;Choi, Jae Won;Pyo, Hong Ryeol
    • The Journal of Korean Society for Radiation Therapy / v.28 no.2 / pp.149-159 / 2016
  • Purpose: To evaluate the usefulness of breathing training, we analyzed changes in the RPM signal and the diaphragm image before 4D respiratory-gated radiation therapy planning for lung cancer patients. Materials and Methods: Breathing training was given to 11 patients who received 4D respiratory-gated radiation therapy from April to August 2016. The RPM signal and diaphragm image were acquired over three training steps: step 1, signal acquisition in the free-breathing state; step 2, acquisition while the patient followed a respiratory-signal guide; and step 3, acquisition after instruction and repeated training for regular breathing. For each step, the minimum, maximum, mean, and standard deviation of inspiration and expiration were obtained from the RPM signal and the diaphragm image. The values were normalized to step 1, and steps 2 and 3 were converted to relative ratios (%), so that the change in each patient's internal respiratory motion could be evaluated and the usefulness of breathing training assessed for each patient. Results: The mean and standard deviation of each step were obtained with the step-1 RPM signal and diaphragm amplitude taken as the 100% reference. In the RPM signal, the amplitude and standard deviation of 4 of the 11 patients (36.4%) decreased by 18.1% and 27.6% on average in step 3, and in 2 patients (18.2%) the standard deviation decreased by 36.5% on average, whereas in the other 4 patients (36.4%) only the amplitude decreased, by an average of 13.1%. In step 3, the amplitude of the diaphragm image decreased by 30% on average in 9 patients (81.8%) and increased by 7.3% on average in 2 patients (18.2%). Compared with step 2, however, the step-3 amplitudes of the RPM signal and the diaphragm image decreased by 52.6% and 42.1% on average across all patients, respectively. The amplitude changes of the RPM signal and the diaphragm image showed consistent movement patterns across steps 1, 2, and 3, except for patients No. 2 and No. 10. Conclusion: An optimized respiratory cycle can be induced when breathing training is performed. By conducting breathing training before treatment, the movement of the lung, which reflects the patient's respiration, could be anticipated and controlled more effectively. Ultimately, breathing training is useful because it can minimize the systematic error of radiotherapy and allow more accurate treatment. This study is limited to analysis of data on breathing training acquired before treatment; verification with the actual CT plan and data acquired during treatment will be necessary in the future.
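
The step-1-referenced normalization described above amounts to expressing each step's amplitude as a percentage of the free-breathing baseline. A minimal sketch, with illustrative amplitude values rather than the paper's measurements:

```python
# Minimal sketch of the step-1-referenced normalization described above.
# Amplitude values are illustrative placeholders, not the paper's data.
step_amplitudes = {"step1": 12.0, "step2": 9.5, "step3": 6.8}  # e.g., mm of diaphragm excursion

baseline = step_amplitudes["step1"]
relative = {step: 100.0 * amp / baseline for step, amp in step_amplitudes.items()}

for step, pct in relative.items():
    print(f"{step}: {pct:.1f}% of free-breathing amplitude")
```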

Development of a Predictive Model Describing the Growth of Staphylococcus aureus in Pyeonyuk marketed (시중 유통판매 중인 편육에서의 Staphylococcus aureus 성장예측모델 개발)

  • Kim, An-Na;Cho, Joon-Il;Son, Na-Ry;Choi, Won-Seok;Yoon, Sang-Hyun;Suh, Soo-Hwan;Kwak, Hyo-Sun;Joo, In-Sun
    • Journal of Food Hygiene and Safety / v.32 no.3 / pp.206-210 / 2017
  • This study was performed to develop mathematical models for predicting the growth kinetics of Staphylococcus aureus in the processed meat product pyeonyuk. Growth patterns of S. aureus in pyeonyuk were determined at storage temperatures of 4, 10, 20, and $37^{\circ}C$. The number of S. aureus in pyeonyuk increased at all storage temperatures. The maximum specific growth rate (${\mu}_{max}$) and lag phase duration (LPD) were calculated with the Baranyi model. The ${\mu}_{max}$ values increased, while the LPD values decreased, as the storage temperature rose from $4^{\circ}C$ to $37^{\circ}C$. A square root model and a polynomial model were used to develop the secondary models for ${\mu}_{max}$ and LPD, respectively. The Root Mean Square Error (RMSE) was used to evaluate the developed model, and the fitness was determined to be 0.42. The developed predictive model is therefore useful for predicting the growth of S. aureus in pyeonyuk, and it will help prevent food-borne disease when extended to microbial sanitary management guidelines.
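
The square-root secondary model mentioned above (a Ratkowsky-type model) relates the square root of ${\mu}_{max}$ linearly to storage temperature. A minimal sketch, with made-up (temperature, ${\mu}_{max}$) pairs rather than the paper's data:

```python
# Minimal sketch of a square-root (Ratkowsky-type) secondary model for mu_max.
# The (temperature, mu_max) pairs below are illustrative, not the paper's data.
import numpy as np
from scipy.optimize import curve_fit

def sqrt_model(T, b, T_min):
    """sqrt(mu_max) = b * (T - T_min), with T_min a notional minimum growth temperature."""
    return b * (T - T_min)

temps = np.array([4.0, 10.0, 20.0, 37.0])       # storage temperatures (deg C)
mu_max = np.array([0.01, 0.05, 0.25, 0.80])     # hypothetical maximum specific growth rates (1/h)

params, _ = curve_fit(sqrt_model, temps, np.sqrt(mu_max), p0=[0.02, 0.0])
b, T_min = params
print(f"b = {b:.4f}, T_min = {T_min:.1f} C")

# Predicted mu_max at a new storage temperature, e.g., 15 C
T_new = 15.0
print(f"predicted mu_max at {T_new} C: {sqrt_model(T_new, b, T_min)**2:.3f} 1/h")
```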

Social Network Analysis for the Effective Adoption of Recommender Systems (추천시스템의 효과적 도입을 위한 소셜네트워크 분석)

  • Park, Jong-Hak;Cho, Yoon-Ho
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.305-316 / 2011
  • A recommender system is a system that, using automated information-filtering technology, recommends products or services to customers who are likely to be interested in them. Such systems are widely used by many Web retailers, such as Amazon.com, Netflix.com, and CDNow.com. Various recommender systems have been developed; among them, Collaborative Filtering (CF) is known as the most successful and most commonly used approach. CF identifies customers whose tastes are similar to those of a given customer and recommends items those customers have liked in the past. Numerous CF algorithms have been developed to increase the performance of recommender systems, but the relative performance of CF algorithms is known to be domain- and data-dependent. It is very time-consuming and expensive to implement and launch a CF recommender system, and a system unsuited to the given domain provides customers with poor-quality recommendations that easily annoy them. Therefore, predicting in advance whether the performance of a CF recommender system will be acceptable is practically important and needed. In this study, we propose a decision-making guideline that helps decide whether CF is adoptable for a given application with certain transaction-data characteristics. Several previous studies reported that sparsity, gray sheep, cold start, coverage, and serendipity could affect the performance of CF, but theoretical and empirical justification for such factors is lacking. Recently, many studies have paid attention to Social Network Analysis (SNA) as a method for analyzing social relationships among people. SNA measures and visualizes linkage structure and status, focusing on interaction among objects within a communication group. CF analyzes the similarity among each customer's previous ratings or purchases, finds relationships among customers with similar tastes, and then uses those relationships for recommendations. Thus CF can be modeled as a social network in which customers are nodes and purchase relationships between customers are links. Under the assumption that SNA can facilitate exploration of the topological properties of the network structure implicit in transaction data for CF recommendations, we focus on density, clustering coefficient, and centralization, which are among the most commonly used measures for capturing topological properties of a social network. While network density, expressed as a proportion of the maximum possible number of links, captures the density of the whole network, the clustering coefficient captures the degree to which the overall network contains localized pockets of dense connectivity. Centralization reflects the extent to which connections are concentrated in a small number of nodes rather than distributed equally among all nodes. We explore how these SNA measures affect CF performance and how they interact with each other. Our experiments used sales transaction data from H department store, one of the well-known department stores in Korea. A total of 396 data sets were sampled to construct various types of social networks. The measurement process consists of three steps: analysis of customer similarities, construction of a social network, and analysis of social network patterns. We used UCINET 6.0 for SNA. The experiments conducted a 3-way ANOVA that employs the three SNA measures as factors and the recommendation accuracy, measured by the F1-measure, as the dependent variable. The experiments report that 1) each of the three SNA measures affects recommendation accuracy; 2) the effect of density on performance overrides those of clustering coefficient and centralization (i.e., CF adoption is not a good decision if the density is low); and 3) even when the density is low, the performance of CF is comparatively good if the clustering coefficient is low. We expect these experimental results to help firms decide whether a CF recommender system is adoptable for their business domain with certain transaction-data characteristics.
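
The three topological measures named above can be computed directly from a customer-similarity graph. A minimal sketch using networkx, with a toy graph standing in for the customer network (the study itself built its networks from department-store transactions and used UCINET):

```python
# Minimal sketch: density, average clustering coefficient, and Freeman degree
# centralization for a toy customer-similarity network (illustrative only).
import networkx as nx

G = nx.Graph()
G.add_edges_from([(1, 2), (1, 3), (2, 3), (3, 4), (4, 5)])  # customers as nodes, similarity links as edges

n = G.number_of_nodes()
density = nx.density(G)                # proportion of the maximum possible number of links
clustering = nx.average_clustering(G)  # localized pockets of dense connectivity

degrees = [d for _, d in G.degree()]
max_deg = max(degrees)
# Freeman degree centralization: concentration of links on a few nodes,
# normalized by the star graph's maximum, (n-1)(n-2).
centralization = sum(max_deg - d for d in degrees) / ((n - 1) * (n - 2))

print(f"density={density:.3f}, clustering={clustering:.3f}, centralization={centralization:.3f}")
```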

Issue tracking and voting rate prediction for 19th Korean president election candidates (댓글 분석을 통한 19대 한국 대선 후보 이슈 파악 및 득표율 예측)

  • Seo, Dae-Ho;Kim, Ji-Ho;Kim, Chang-Ki
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.199-219 / 2018
  • With the everyday use of the Internet and the spread of various smart devices, users can now communicate in real time, and existing communication styles have changed. As the Internet has shifted who produces information, the volume of data has grown enormously, giving rise to what is called big data. Big data is seen as a new opportunity to understand social issues. In particular, text mining explores patterns in unstructured text data to find meaningful information. Since text data exist in many places, such as newspapers, books, and the web, the data are diverse and very large in volume, making them suitable for understanding social reality. In recent years there have been increasing attempts to analyze text from the web, such as SNS and blogs, where the public can communicate freely. This is recognized as a useful way to grasp public opinion immediately, so it can be used for political, social, and cultural issue research. Text mining has received much attention as a means of investigating the public's perception of candidates and of predicting voting rates in place of polls, because many people question the credibility of surveys and tend to refuse to answer, or not to reveal their real intentions, when asked to respond to a poll. This study collected comments from the largest Internet portal site in Korea and examined the 19th Korean presidential election in 2017. We collected 226,447 comments from April 29, 2017 to May 7, 2017, a span that includes the period just before election day during which the publication of opinion polls is prohibited. We analyzed word frequencies, associated emotional words, topic emotions, and candidate voting rates. Frequency analysis identified the words that were the most important issues each day; in particular, following each presidential debate, the candidate who had become an issue appeared at the top of the frequency ranking. The analysis of associated emotional words identified the issues most relevant to each candidate. Topic emotion analysis was used to identify each candidate's topics and to express the public's emotions on those topics. Finally, we estimated the voting rate by combining the volume of comments with the sentiment score. In this way, we explored the issues for each candidate and predicted the voting rate. The analysis showed that news comments are an effective tool for tracking the issues surrounding presidential candidates and for predicting the voting rate. In particular, this study provided daily issues and a quantitative index for sentiment, predicted the voting rate for each candidate, and precisely matched the ranking of the top five candidates. Each candidate can thus objectively grasp public opinion and reflect it in election strategy: positive issues can be used more actively in campaign strategies, and negative issues can be addressed. In particular, candidates should be aware that their reputations can be severely damaged if they face a moral problem. Voters can objectively look at the issues and public opinion about each candidate and make more informed decisions when voting. If they refer to results such as these before voting, they will be able to see the opinions of the public in the big data and vote for a candidate from a more objective perspective. If candidates run campaigns informed by big data analysis, the public will be more active on the web, recognizing that their wants are being reflected, and will express their political views in various places on the web. This can contribute to political participation by the people.
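
The abstract does not spell out how comment volume and sentiment score were combined into a voting-rate estimate. Purely as an illustration of the general idea, and not the authors' formula, one simple way to turn per-candidate comment volume and mean sentiment into predicted vote shares:

```python
# Illustrative only: one simple way to combine comment volume and sentiment into
# vote-share estimates. The weighting below is a hypothetical choice, not the
# formula used in the paper; the values are toy numbers.
candidates = {
    # candidate: (number of comments, mean sentiment score in [-1, 1])
    "A": (50_000, 0.20),
    "B": (30_000, 0.05),
    "C": (20_000, -0.10),
}

raw = {name: volume * (1.0 + sentiment) for name, (volume, sentiment) in candidates.items()}
total = sum(raw.values())
shares = {name: 100.0 * score / total for name, score in raw.items()}

for name, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"candidate {name}: predicted share {share:.1f}%")
```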

Corporate Bond Rating Using Various Multiclass Support Vector Machines (다양한 다분류 SVM을 적용한 기업채권평가)

  • Ahn, Hyun-Chul;Kim, Kyoung-Jae
    • Asia Pacific Journal of Information Systems / v.19 no.2 / pp.157-178 / 2009
  • Corporate credit rating is a very important factor in the market for corporate debt. Information concerning corporate operations is often disseminated to market participants through changes in the credit ratings published by professional rating agencies, such as Standard and Poor's (S&P) and Moody's Investors Service. Since these agencies generally charge a large fee for the service, and the periodically provided ratings sometimes do not reflect the default risk of the company at the time, it may be advantageous for bond-market participants to be able to classify credit ratings before the agencies actually publish them. As a result, it is very important for companies (especially financial companies) to develop a proper credit-rating model. From a technical perspective, credit rating constitutes a typical multiclass classification problem, because rating agencies generally have ten or more rating categories; for example, S&P's ratings range from AAA for the highest-quality bonds to D for the lowest-quality bonds. The professional rating agencies emphasize the importance of analysts' subjective judgments in determining credit ratings. In practice, however, a mathematical model using companies' financial variables plays an important role in determining credit ratings, since it is convenient to apply and cost-efficient. These financial variables include ratios that represent a company's leverage, liquidity, and profitability. Several statistical and artificial intelligence (AI) techniques have been applied as tools for predicting credit ratings. Among them, artificial neural networks are the most prevalent in finance because of their broad applicability to many business problems and their preeminent ability to adapt. However, artificial neural networks also have many shortcomings, including the difficulty of determining the values of the control parameters and the number of processing elements in each layer, as well as the risk of over-fitting. More recently, because of their robustness and high accuracy, support vector machines (SVMs) have become popular for generating accurate predictions. An SVM's solution may be globally optimal because SVMs seek to minimize structural risk, whereas artificial neural network models tend to find locally optimal solutions because they seek to minimize empirical risk. In addition, no parameters need to be tuned in SVMs, apart from the upper bound for non-separable cases in linear SVMs. Since SVMs were originally devised for binary classification, however, they are not intrinsically geared to multiclass classification problems such as credit rating. Researchers have therefore tried to extend the original SVM to multiclass classification, and a variety of techniques for extending standard SVMs to multiclass SVMs (MSVMs) has been proposed in the literature. Only a few types of MSVM, however, have been tested in prior studies applying MSVMs to credit rating. In this study, we examined six different MSVM techniques: (1) One-Against-One, (2) One-Against-All, (3) DAGSVM, (4) ECOC, (5) the method of Weston and Watkins, and (6) the method of Crammer and Singer. In addition, we examined the prediction accuracy of some modified versions of conventional MSVM techniques. To find the most appropriate MSVM technique, we applied all of them to a real-world case of credit rating in Korea: corporate bond rating, which is the most frequently studied area of credit rating for specific debt issues or other financial obligations. The research data were collected from National Information and Credit Evaluation, Inc., a major bond-rating company in Korea. The data set comprises the bond ratings for the year 2002 and various financial variables for 1,295 companies from the manufacturing industry in Korea. We compared the results of these techniques with one another and with those of traditional methods for credit rating, such as multiple discriminant analysis (MDA), multinomial logistic regression (MLOGIT), and artificial neural networks (ANNs). As a result, we found that DAGSVM with an ordered list was the best approach for predicting bond ratings. In addition, we found that the modified version of the ECOC approach can yield higher prediction accuracy for cases showing clear patterns.
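
For orientation, the one-against-one decomposition examined in the study is available off the shelf in scikit-learn. A minimal sketch on synthetic data (the NICE bond-rating data used in the study are not reproduced here; the feature set, kernel, and hyperparameters below are illustrative choices, not the paper's):

```python
# Minimal sketch of a one-against-one multiclass SVM, one of the MSVM schemes
# compared in the study; synthetic features stand in for the financial ratios.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: 1,295 firms, a handful of financial ratios, 5 rating classes.
X, y = make_classification(n_samples=1295, n_features=10, n_informative=6,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="rbf", C=10.0, gamma="scale",
          decision_function_shape="ovo")  # pairwise (one-against-one) decomposition
clf.fit(scaler.transform(X_train), y_train)

pred = clf.predict(scaler.transform(X_test))
print(f"holdout accuracy: {accuracy_score(y_test, pred):.3f}")
```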

Epidemiologic Study of Zoophilic Dermatophytoses between 2010 and 2016 (2010~2016년 동안 동물친화성 피부 사상균 감염의 역학적 연구)

  • Kim, Su Jung
    • Korean Journal of Clinical Laboratory Science / v.49 no.4 / pp.439-445 / 2017
  • In recent years, changing lifestyles have increased the number of families with companion animals, and as a result frequent dermatophyte infections have been reported. Microsporum canis, Trichophyton mentagrophytes, and Trichophyton verrucosum are among these species of zoophilic dermatophytes. Trichophyton mentagrophytes is transmitted to humans by contact with wild animals and causes strong inflammation in humans. Trichophyton verrucosum, by contrast, is transmitted by contact with cattle. Cats and dogs can be latent carriers of Microsporum canis, which causes infectious disease when it comes into contact with humans. We investigated the zoophilic dermatophytes isolated between 2010 and 2016 by year, sex, age, season, body site, and clinical type. According to our results, the isolation rate of zoophilic dermatophytes was 0.37%; 88 T. mentagrophytes, 228 Microsporum canis, and 18 Trichophyton verrucosum isolates were obtained from humans. It is interesting to note that Microsporum canis has been on the rise since 2014. Microsporum canis and Trichophyton verrucosum were isolated more often in females, whereas T. mentagrophytes was isolated similarly in both sexes. By age, the isolation rate was highest in children younger than 10 years. Our results provide valuable data for predicting and studying the isolation of zoophilic dermatophytes in the future.

A Study on the Calculation of Evapotranspiration Crop Coefficient in the Cheongmi-cheon Paddy Field (청미천 논지에서의 증발산량 작물계수 산정에 관한 연구)

  • Kim, Kiyoung;Lee, Yongjun;Jung, Sungwon;Lee, Yeongil
    • Korean Journal of Remote Sensing / v.35 no.6_1 / pp.883-893 / 2019
  • In this study, crop coefficients were calculated by two different methods and the results were evaluated. In the first method, the appropriateness of GLDAS-based evapotranspiration was evaluated by comparing it with data observed at the Cheongmi-cheon (CMC) flux tower; the crop coefficient was then calculated by dividing the actual evapotranspiration by the potential evapotranspiration derived from GLDAS. In the second method, the crop coefficient was determined by Multiple Linear Regression (MLR) analysis using vegetation indices (NDVI, EVI, LAI, and SAVI) derived from MODIS and in-situ soil moisture data observed at CMC. Over the entire period, the two crop coefficients, GLDAS Kc and SM&VI Kc, showed mean values of 0.412 and 0.378, biases of 0.031 and -0.004, RMSEs of 0.092 and 0.069, and Index of Agreement (IOA) values of 0.944 and 0.958, respectively. Overall, both methods showed patterns similar to the observed evapotranspiration, but the SM&VI-based method gave better results. Going one step further, GLDAS Kc and SM&VI Kc were statistically evaluated for specific periods according to the growth phase of the crop. The results show that GLDAS Kc was better in the early and middle phases of crop growth, and SM&VI Kc was better in the latter phase, which appears to be due to the reduced accuracy of the MODIS sensors caused by yellow dust in spring and rain clouds in summer. If the observational accuracy of the MODIS sensor is improved in subsequent studies, the accuracy of the SM&VI-based method will also improve, and the method will be applicable to determining the crop coefficient of ungauged basins or predicting the crop coefficient of a given area.
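
A minimal sketch of the two Kc approaches described above, with synthetic numbers standing in for the GLDAS, MODIS, and in-situ observations used in the paper (the regression target here is the ratio-based Kc, purely for illustration):

```python
# Minimal sketch of the two crop-coefficient approaches described above.
# All numbers are illustrative placeholders, not the paper's observations.
import numpy as np
from sklearn.linear_model import LinearRegression

# Method 1: Kc as the ratio of actual to potential evapotranspiration (GLDAS-based).
et_actual = np.array([1.6, 1.9, 2.4, 3.1, 2.9, 2.2])      # mm/day
et_potential = np.array([4.2, 4.6, 5.0, 5.8, 5.5, 4.8])   # mm/day
kc_gldas = et_actual / et_potential                        # Kc = ET_a / ET_p

# Method 2: Kc from multiple linear regression on vegetation indices and soil moisture.
# Columns: NDVI, EVI, LAI, SAVI, soil moisture.
X = np.array([
    [0.42, 0.28, 1.1, 0.33, 0.21],
    [0.48, 0.33, 1.5, 0.37, 0.23],
    [0.55, 0.38, 2.0, 0.42, 0.25],
    [0.68, 0.47, 3.1, 0.51, 0.28],
    [0.62, 0.43, 2.7, 0.47, 0.24],
    [0.52, 0.36, 1.8, 0.40, 0.22],
])
mlr = LinearRegression().fit(X, kc_gldas)  # in the paper the reference Kc comes from the flux tower
kc_smvi = mlr.predict(X)

print("GLDAS Kc:", np.round(kc_gldas, 3))
print("SM&VI Kc:", np.round(kc_smvi, 3))
```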