• Title/Summary/Keyword: threshold level method

Search Results: 430

Mercury Contents of Paddy Soil in Korea and its Uptake to Rice Plant (우리나라 논 토양 중 수은함량과 벼 흡수이행)

  • Park, Sang-Won;Yang, Ju-Seok;Kim, Jin-Kyoung;Park, Byung-Jun;Kim, Won-Il;Choi, Ju-Hyeon;Kwon, Oh-Kyung;Ryu, Gab-Hee
    • Journal of Food Hygiene and Safety, v.23 no.1, pp.6-14, 2008
  • The objective of this study was to investigate residual levels of mercury (Hg) in paddy soil of the "Top-rice" producing areas and its uptake into the rice plant, to verify the food safety of "Top-rice" and common rice produced from 2005 to 2006. Hg was analyzed with a direct mercury analyzer (DMA 80, Milestone, Italy), which implements US EPA Method 7473. The average Hg concentration in paddy soil was 0.031 mg/kg, which was 1/25-1/65 of the threshold levels (concern level 4 mg/kg, action level 10 mg/kg) for soil contamination designated by the Soil Environment Conservation Law in Korea. A maximum residue limit (MRL) for Hg in polished rice has not been designated in Korea; therefore, Hg contents in polished rice of the "Top-rice" brand and common rice were compared with other countries' criteria. The Hg content in polished rice of the "Top-rice" brand was 0.0018 mg/kg, 1/10-1/30 of the MRLs of 0.02 mg/kg (China) and 0.05 mg/kg (Taiwan), respectively. Hg contents were 0.02788, 0.00896, 0.00182, 0.00189, 0.00166, 0.00452, and 0.00145 mg/kg in soil, rice straw, unhulled rice, rice hulls, brown rice, rice bran, and polished rice produced in the 2006 "Top-rice" area, respectively. The ratios of Hg content in each plant part to that in soil were 0.321 for rice straw ≫ 0.162 for rice bran ≫ 0.068 for rice hulls > 0.065 for unhulled rice > 0.060 for brown rice > 0.052 for polished rice. The slope of Hg uptake steepened in the same order (rice straw ≫ rice bran ≫ rice hulls > unhulled rice > brown rice > polished rice); the steeper the slope, the greater the uptake. The distribution of absorbed Hg was 83.8% into rice straw, 16.2% into unhulled rice, 2.8% into rice hulls, 12.4% into brown rice, 3.5% into rice bran, and 9.7% into polished rice. Consequently, Hg contamination of polished rice appears to be of no concern in Korea.
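The part-to-soil ratios above can be reproduced directly from the reported concentrations; a minimal sketch using only the values stated in the abstract:

```python
# Hg concentrations (mg/kg) reported for the 2006 "Top-rice" area
hg_soil = 0.02788
hg_parts = {
    "rice straw": 0.00896,
    "rice bran": 0.00452,
    "rice hulls": 0.00189,
    "unhulled rice": 0.00182,
    "brown rice": 0.00166,
    "polished rice": 0.00145,
}

# Ratio of Hg in each plant part to Hg in soil, highest first
ratios = {part: round(c / hg_soil, 3) for part, c in hg_parts.items()}
for part, r in sorted(ratios.items(), key=lambda kv: -kv[1]):
    print(f"{part}: {r}")
```

The printed ratios match the ordering reported in the abstract (rice straw 0.321 down to polished rice 0.052).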

RGB Channel Selection Technique for Efficient Image Segmentation (효율적인 이미지 분할을 위한 RGB 채널 선택 기법)

  • 김현종;박영배
    • Journal of KIISE: Software and Applications, v.31 no.10, pp.1332-1344, 2004
  • With the recent development of the information superhighway and multimedia-related technologies, more efficient technologies to transmit, store, and retrieve multimedia data are required. Among them, semantic-based image retrieval typically requires separate annotation to attach meaning to image data, in addition to low-level property information such as color, texture, and shape. Although semantic-based retrieval has been implemented using vocabulary dictionaries of given keywords, it has not escaped the limits of existing keyword-based text retrieval. The second problem is decreased retrieval performance in content-based image retrieval systems: it is difficult to separate an object from an image with a complex background, difficult to extract a region because of excessive division of regions, and difficult to separate objects from an image containing multiple objects in a complex scene. To solve these problems, this paper establishes a content-based retrieval system processed in five steps. The most critical of these steps extracts, among the RGB channel images, those with the largest and the smallest background. In particular, we propose a method that extracts both the subject and the background using the channel image with the largest background. To solve the second problem, we propose a method that separates multiple objects using RGB channel selection, limiting excessive division of regions by means of Watermerge's threshold value combined with object separation by RGB channel separation. Experiments showed that the proposed methods outperform existing methods in retrieval performance, enough to replace methods developed for retrieving complex objects that were previously difficult to retrieve.
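One plausible reading of the channel-selection step can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: for each RGB channel, estimate the background as the fraction of pixels near the channel's dominant intensity, then pick the channels with the largest and smallest background.

```python
from collections import Counter

def background_fraction(channel, tolerance=10):
    """Estimate background size as the share of pixels within
    `tolerance` gray levels of the channel's modal intensity."""
    flat = [p for row in channel for p in row]
    mode, _ = Counter(flat).most_common(1)[0]
    near = sum(1 for p in flat if abs(p - mode) <= tolerance)
    return near / len(flat)

def select_channels(r, g, b):
    """Return (largest-background, smallest-background) channel names."""
    fractions = {"R": background_fraction(r),
                 "G": background_fraction(g),
                 "B": background_fraction(b)}
    return max(fractions, key=fractions.get), min(fractions, key=fractions.get)

# Toy 4x4 channels: R is nearly uniform (large background),
# B varies the most (small background).
r = [[200, 200, 200, 200]] * 4
g = [[10, 10, 50, 50]] * 4
b = [[i * 16 + j for j in range(4)] for i in range(4)]
print(select_channels(r, g, b))  # → ('R', 'B')
```

The channel with the largest background would then be used to split subject from background, per the abstract.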

Microbiological Hazard Analysis for HACCP System Application to Vinegared Pickle Radishes (식초절임 무의 HACCP 시스템 적용을 위한 미생물학적 위해 분석)

  • Kwon, Sang-Chul
    • Journal of Food Hygiene and Safety, v.28 no.1, pp.69-74, 2013
  • This study was performed for 150 days, from February 1 to June 30, 2012, to analyze biological hazards for developing a HACCP system for vinegared pickle radishes. A process chart was prepared, as shown in Fig. 1, referring to the manufacturing process of a general producer of vinegared pickle radishes: warehousing of raw agricultural products, water, additives, and packing materials, storage, sorting, washing, peeling, cutting, selection, stuffing (filling), internal packing, metal detection, external packing, storage, and delivery. Measuring coliforms, Staphylococcus aureus, Salmonella spp., Bacillus cereus, Listeria monocytogenes, E. coli O157:H7, Clostridium perfringens, and yeasts and molds before and after washing raw radishes, B. cereus was 5.00×10 CFU/g before washing but was not detected after washing; yeasts and molds were 3.80×10² CFU/g before washing but were reduced to 10 CFU/g after washing; other pathogenic bacteria were not detected. Testing microbial variation over seasoning fluid pH 2-5, pH 3-4 was chosen for the seasoning fluid because no bacteria were detected in that range. Testing air-borne bacteria (total aerobic bacteria, coliforms, fungi) in each workplace, microbial counts in the internal packing room, seasoning fluid processing room, washing room, and storage room were 10, 2, 60, and 20 CFU/plate, respectively. On workers' palms, total aerobic bacteria and coliforms were as high as 346 CFU/cm² and 23 CFU/cm², respectively, so education and training in personal sanitation control are considered necessary. Inspecting the surface contamination of manufacturing facilities and devices, coliforms were not detected in any specimen, but total aerobic bacteria were most dominant on the PP packing machine and the Siuping machine (PE bulk), at 4.2×10³ CFU/cm² and 2.6×10³ CFU/cm², respectively. From the above hazard analysis, the seasoning fluid processing step, where pathogenic bacteria can be prevented, reduced, or removed, should be controlled as a biological CCP (CCP-B), with the critical limit set at pH 3-4. Therefore, a thorough HACCP control plan is required, including control criteria for the seasoning fluid process, corrective actions upon deviation, verification methods, education/training, and record keeping.

The Comparison of Susceptibility Changes in 1.5T and 3.0T MRIs due to TE Change in Functional MRI (뇌 기능영상에서의 TE값의 변화에 따른 1.5T와 3.0T MRI의 자화율 변화 비교)

  • Kim, Tae;Choe, Bo-Young;Kim, Euy-Neyng;Suh, Tae-Suk;Lee, Heung-Kyu;Shinn, Kyung-Sub
    • Investigative Magnetic Resonance Imaging, v.3 no.2, pp.154-158, 1999
  • Purpose: To find the optimum TE value that enhances the T2* weighting effect while minimizing SNR degradation, and to compare BOLD effects according to TE changes on 1.5T and 3.0T MRI systems. Materials and Methods: Healthy normal volunteers (eight males and two females, 24-38 years old) participated in this study. Each volunteer performed a simple finger-tapping task (sequential opposition of the thumb to each of the other four fingers) with the right hand at a mean frequency of about 2 Hz. The stimulus was initially off for 3 images and then alternately switched on and off for 2 cycles of 6 images. Images were acquired on the 1.5T and 3.0T MRI systems with the FLASH (fast low angle shot) pulse sequence (TR: 100 ms, FA: 20°, FOV: 230 mm), using TEs of 26, 36, 46, 56, 66, and 76 ms at 1.5T and 16, 26, 36, 46, 56, and 66 ms at 3.0T. After scanning, MR images were transferred to a PC and processed with a home-made analysis program based on the correlation coefficient method with a threshold of 0.45. To find the optimum TE for fMRI, the difference between activation and rest caused by the susceptibility change at each TE was used at 1.5T and 3.0T, respectively, and a functional T2* map was calculated to quantify the susceptibility change. Results: The calculated optimum TE was 61.89±2.68 ms at 1.5T and 47.64±13.34 ms at 3.0T. The maximum percentage signal change due to the susceptibility effect in the activation region was 3.36% at TE 66 ms at 1.5T and 10.05% at TE 46 ms at 3.0T; the signal change at 3.0T was about 3 times larger than at 1.5T. The calculated optimum TE agreed with the TE giving the maximum signal change. Conclusion: The 3.0T MRI was clearly more sensitive, by about a factor of three, than 1.5T in detecting the susceptibility change due to deoxyhemoglobin level changes in functional MR imaging, so 3.0T fMRI is more useful than 1.5T.
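The correlation-coefficient activation analysis described above can be sketched roughly as follows (an illustrative reconstruction, not the authors' program): correlate each voxel's time series with the on/off stimulus reference and keep voxels whose correlation exceeds the 0.45 threshold.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Paradigm from the abstract: 3 rest images, then 2 cycles of
# 6 images alternating on/off (on = 1, off = 0).
reference = [0] * 3 + ([1] * 6 + [0] * 6) * 2

def active_voxels(voxel_series, threshold=0.45):
    """Indices of voxels correlating with the reference above threshold."""
    return [i for i, ts in enumerate(voxel_series)
            if pearson(ts, reference) > threshold]

# Toy data: voxel 0 tracks the stimulus; voxel 1 is unrelated drift.
signal = [100 + 5 * s for s in reference]
noise = [100 + (i % 3) for i in range(len(reference))]
print(active_voxels([signal, noise]))  # → [0]
```

Only the stimulus-locked voxel survives the 0.45 threshold, mirroring the thresholded activation maps the study computes per TE.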


Optimization of Biotransformation Process for Sodium Gluconate Production by Aspergillus niger (Aspergillus niger를 이용한 글루콘산 나트륨 생산 생변환 공정의 최적화)

  • 박부수;조병관;이상윤;임승환;김동일;김병기
    • KSBB Journal, v.14 no.3, pp.309-314, 1999
  • In order to produce a high concentration of sodium gluconate, fermentation conditions such as glucose concentration, inoculum size, dissolved oxygen concentration, and glucose feeding method were optimized. When the glucose concentration was maintained in the range of 30-50 g/L during batch fermentation, the glucose conversion yield and productivity were 92.2% and 6.0 g/L/hr, respectively. At concentrations below 30 g/L, the yield decreased by about 25%. As the inoculum size increased above 20% (w/v), the lag phase was shortened but productivity decreased. A dissolved oxygen level of 60-70% was the threshold point for a 75% increase in sodium gluconate productivity. Finally, the optimal glucose feeding rate was determined by comparing various feeding methods, such as exponential feeding and feeding based on the average glucose consumption rate or on the oxygen uptake rate. Our results show that glucose feeding based on the oxygen uptake rate is a very simple, efficient, and robust method, especially when oxygen is consumed as a substrate for the bioconversion. Using this feeding strategy under the optimized conditions, a sodium gluconate concentration of 255 g/L, a productivity of 12 g/L/hr, and a glucose conversion yield of 95% were achieved with A. niger ACM53.
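Feeding glucose in proportion to the measured oxygen uptake rate (OUR) can be sketched as below. This is a hypothetical illustration, not the paper's controller: the stoichiometric ratio and the flow values are assumptions (0.5 mol O2 per mol glucose corresponds to the net oxidation to gluconate when catalase recycles half the oxygen from H2O2).

```python
MW_GLUCOSE = 180.16  # g/mol

def glucose_feed_rate(our_mmol_per_l_h, volume_l, o2_per_glucose=0.5):
    """Glucose feed (g/h) matching the measured oxygen uptake rate.

    o2_per_glucose: assumed net mol O2 consumed per mol glucose
    oxidized to gluconate.
    """
    glucose_mmol_per_h = our_mmol_per_l_h * volume_l / o2_per_glucose
    return glucose_mmol_per_h * MW_GLUCOSE / 1000.0

# e.g. an OUR of 25 mmol O2/L/h in a hypothetical 5 L culture
rate = glucose_feed_rate(25, 5)
print(round(rate, 1))  # g glucose per hour
```

Because OUR is measured online, this kind of feed law is simple and robust whenever oxygen is consumed as a substrate, which is the property the abstract highlights.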


A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems, v.26 no.1, pp.1-21, 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications; in the field of business intelligence, they have been employed to discover new market and technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting business strategies, and there has been continuous demand in various fields for market information at the specific product level. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific, appropriate figures. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic, bottom-up market size estimation from individual companies' product information. The overall process is as follows. First, data related to product information is collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data is embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data of the extracted products are summed to estimate the market size of each product group. As experimental data, text data of product names from Statistics Korea's microdata (345,103 cases) were mapped into a multidimensional vector space by Word2Vec training. We optimized the training parameters and applied a vector dimension of 300 and a window size of 15 in further experiments. We employed index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently; product names similar to KSIC index terms were extracted based on cosine similarity, and the market size of each extracted product category was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional methods that depend on sampling or multiple assumptions. In addition, the level of market category can be adjusted easily and efficiently according to the purpose of the information by changing the cosine similarity threshold. Furthermore, the approach has high practical potential, since it can resolve unmet needs for detailed market size information in the public and private sectors: it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports by private firms. A limitation of our study is that the presented model needs improvement in accuracy and reliability. The semantic word embedding module could be advanced by imposing a proper ordering on the preprocessed dataset or by combining another measure, such as Jaccard similarity, with Word2Vec; the product group clustering could also be replaced with other unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect them to further improve the performance of the basic model proposed conceptually in this study.
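The grouping-and-summation step can be sketched as follows. The vectors, product names, and sales figures here are toy stand-ins; a real pipeline would take the embeddings from a trained Word2Vec model rather than hand-written lists.

```python
import math

def cosine(u, v):
    """Cosine similarity of two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy embeddings (a real run would use the trained Word2Vec vectors)
vectors = {
    "led lamp": [0.9, 0.1, 0.0],
    "led bulb": [0.85, 0.15, 0.05],
    "rice cooker": [0.0, 0.9, 0.4],
}
sales = {"led lamp": 120.0, "led bulb": 80.0, "rice cooker": 300.0}

def market_size(index_vector, threshold=0.8):
    """Sum sales of products whose embedding is cosine-similar to the
    category index term's embedding above the threshold."""
    group = [name for name, vec in vectors.items()
             if cosine(index_vector, vec) >= threshold]
    return group, sum(sales[name] for name in group)

# Hypothetical index-term embedding close to the lighting products
group, size = market_size([0.88, 0.12, 0.02])
print(group, size)
```

Raising or lowering the threshold widens or narrows the product group, which is how the paper adjusts the granularity of the market category.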

Enhancing Predictive Accuracy of Collaborative Filtering Algorithms using the Network Analysis of Trust Relationship among Users (사용자 간 신뢰관계 네트워크 분석을 활용한 협업 필터링 알고리즘의 예측 정확도 개선)

  • Choi, Seulbi;Kwahk, Kee-Young;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems, v.22 no.3, pp.113-127, 2016
  • Among recommendation techniques, collaborative filtering (CF) is commonly recognized as the most effective for implementing recommender systems, and it has been widely studied and adopted in both academia and real-world applications. The basic idea of CF is to create recommendations by finding correlations between users of a recommender system: the system compares users by how similar they are and recommends products that like-minded users evaluated highly. Computing evaluation similarity among users is therefore crucial, because recommendation quality depends on it. Typical CF uses users' explicit numeric ratings of items (quantitative information) when computing similarity; in other words, numeric ratings have been the sole source of preference information in traditional CF. However, ratings do not always fully reflect users' actual preferences. According to several studies, users more readily accept recommendations from people they trust when purchasing goods, so trust relationships can be regarded as an informative source for identifying user preferences accurately. Against this background, we propose a new hybrid recommender system that fuses CF and social network analysis (SNA). The proposed system adopts a recommendation algorithm that additionally reflects results from SNA. In detail, it is based on conventional memory-based CF but is designed to use both numeric ratings and trust relationships between users when calculating user similarity; for this, it builds not only a user-item rating matrix but also a user-to-user trust network. As methods for calculating similarity between users, we propose two alternatives. The first calculates similarity using in-degree and out-degree centrality, the indices representing central location in a social network; we name these approaches 'Trust CF - All' and 'Trust CF - Conditional'. The second weights a neighbor's score higher when the target user trusts the neighbor directly or indirectly, identified by searching the users' trust network; we call this approach 'Trust CF - Search'. To validate the applicability of the proposed system, we used experimental data provided by LibRec, crawled from the entire FilmTrust website, consisting of movie ratings and a trust network indicating who trusts whom. The experimental system was implemented using Microsoft Visual Basic for Applications (VBA) and UCINET 6. To examine effectiveness, we compared the performance of the proposed methods with a conventional CF system, evaluating recommender performance by average MAE (mean absolute error). The analysis confirmed that applying the in-degree centrality of the trust network unconditionally (Trust CF - All) yielded lower accuracy (MAE = 0.565134) than conventional CF (MAE = 0.564966). Applying the in-degree centrality index only to users whose out-degree centrality exceeds a certain threshold (Trust CF - Conditional) improved accuracy slightly (MAE = 0.564909) over traditional CF. The algorithm searching the trust network (Trust CF - Search) showed the best performance (MAE = 0.564846), and a paired-samples t-test showed that Trust CF - Search outperformed conventional CF at the 10% significance level. Our study sheds light on applying users' trust relationship network information to facilitate electronic commerce by recommending proper items to users.
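A minimal sketch of the trust-augmented memory-based CF idea follows. The ratings, trust links, and weighting scheme are illustrative assumptions, not the paper's exact formulation: a neighbor's rating simply counts more when the target user trusts that neighbor.

```python
def predict(target, item, ratings, trust, base_weight=1.0, trust_bonus=1.0):
    """Weighted-average rating prediction; trusted neighbors count extra."""
    num = den = 0.0
    for user, user_ratings in ratings.items():
        if user == target or item not in user_ratings:
            continue
        w = base_weight + (trust_bonus if user in trust.get(target, set()) else 0.0)
        num += w * user_ratings[item]
        den += w
    return num / den if den else None

ratings = {
    "alice": {"m1": 4.0},
    "bob":   {"m1": 2.0, "m2": 5.0},
    "carol": {"m2": 1.0},
}
trust = {"alice": {"bob"}}  # alice trusts bob

# Without trust, bob and carol average to 3.0; trusting bob pulls it up.
print(predict("alice", "m2", ratings, trust))
```

The paper's 'Trust CF - Search' variant would set the bonus from direct or indirect paths in the trust network rather than a fixed constant.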

Modeling of Sensorineural Hearing Loss for the Evaluation of Digital Hearing Aid Algorithms (디지털 보청기 알고리즘 평가를 위한 감음신경성 난청의 모델링)

  • 김동욱;박영철
    • Journal of Biomedical Engineering Research, v.19 no.1, pp.59-68, 1998
  • Digital hearing aids offer many advantages over conventional analog hearing aids, and with the advent of high-speed digital signal processing chips, new digital techniques have been introduced to them. However, the evaluation of new ideas in hearing aids is necessarily accompanied by intensive subject-based clinical tests, which require much time and cost. In this paper, we present an objective method to evaluate and predict the performance of hearing aid systems without such subject-based tests. In the hearing impairment simulation (HIS) algorithm, a sensorineural hearing impairment model is established from auditory test data of the impaired subject being simulated, and the nonlinear behavior of loudness recruitment is defined using hearing loss functions generated from the measurements. To transform natural input sound into its impaired version, a frequency sampling filter is designed; the filter is continuously refreshed with the level-dependent frequency response function provided by the impairment model. To assess performance, the HIS algorithm was implemented in real time using a floating-point DSP. Signals processed with the real-time system were presented to normal subjects, and their auditory data as modified by the system were measured. The sensorineural hearing impairment was simulated and tested; hearing threshold and speech discrimination tests demonstrated the efficiency of the system for hearing impairment simulation. Using the HIS system, we evaluated three typical hearing aid algorithms.
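The loudness-recruitment mapping at the core of such a hearing impairment simulation can be sketched per frequency band as follows; the band threshold and the linear input/output mapping are illustrative assumptions, not the paper's measured hearing loss functions.

```python
def recruited_level(input_db, hearing_loss_db, max_db=100.0):
    """Map a normal-hearing level to the simulated impaired level.

    Below the elevated threshold the sound is inaudible (0);
    above it, loudness grows abnormally fast (recruitment), so both
    normal and impaired ears reach the same loudness at max_db.
    """
    if input_db <= hearing_loss_db:
        return 0.0
    # Linear recruitment: compress [loss, max] back onto [0, max]
    return max_db * (input_db - hearing_loss_db) / (max_db - hearing_loss_db)

# Assumed 40 dB HL loss in one band: a 70 dB input maps to 50 dB,
# while a 30 dB input falls below threshold and is inaudible.
print(recruited_level(70, 40), recruited_level(30, 40))  # → 50.0 0.0
```

Applying a mapping like this independently in each band, and refreshing the resulting level-dependent frequency response, is the role the abstract assigns to the frequency sampling filter.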


Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems, v.26 no.4, pp.127-148, 2020
  • A data center is a physical facility accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, causing enormous damage. IT facility failures in particular are irregular, because of interdependence among devices, and their causes are hard to determine. Previous studies on failure prediction in data centers predicted failures by treating each server as a single, isolated state, without assuming that devices interact. In this study, therefore, data center failures were classified into failures occurring inside a server (Outage A) and failures occurring outside a server (Outage B), with the focus on analyzing complex failures occurring within servers. Server-external failures include power, cooling, user errors, and so on; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. The causes of failures inside a server, on the other hand, are difficult to determine, and adequate prevention has not yet been achieved, in particular because server failures do not occur singly: one server's failure can cause failures of other servers, or be triggered by them. In other words, while existing studies analyzed failures assuming a single server unaffected by others, this study assumes that failures propagate between servers. To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center was used. Four major failure types were considered: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device were sorted in chronological order, and when a failure occurred on one piece of equipment, any failure occurring on another within 5 minutes of that time was defined as occurring simultaneously. After configuring sequences of devices that failed at the same time, 5 devices that frequently failed together within the configured sequences were selected, and the cases where the selected devices failed simultaneously were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states, was used. In addition, unlike the single-server case, the Hierarchical Attention Network deep learning model structure was used, considering that the level of influence on a complex failure differs by server: this architecture improves prediction accuracy by weighting a server more heavily as its impact on the failure increases. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data was modeled both as single-server states and as a multi-server state, and the results were compared. The second experiment improved prediction accuracy for complex failures by optimizing a threshold for each server. In the first experiment, under the single-server assumption, three of the five servers were predicted not to fail even though failures actually occurred; under the multi-server assumption, all five servers were correctly predicted to fail. This result supports the hypothesis that servers affect one another, and confirms that prediction performance is superior when multiple servers are assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, under the assumption that each server's influence differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose causes are hard to determine can be predicted from historical data, and presents a model that predicts failures occurring on servers in data centers. The results are expected to help prevent failures in advance.
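The simultaneous-failure definition (failures within 5 minutes of each other treated as one complex event) can be sketched as below. This is one possible reading of the grouping rule, with illustrative event data; a window is anchored at the first failure of each group.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)

def complex_failures(events):
    """Group (timestamp, device) failure events: an event joins the
    current group if it occurs within 5 minutes of the group's first
    event; otherwise it opens a new group."""
    groups, current = [], []
    for ts, device in sorted(events):
        if current and ts - current[0][0] <= WINDOW:
            current.append((ts, device))
        else:
            if current:
                groups.append([d for _, d in current])
            current = [(ts, device)]
    if current:
        groups.append([d for _, d in current])
    return groups

events = [
    (datetime(2020, 1, 1, 10, 0), "server-A"),
    (datetime(2020, 1, 1, 10, 3), "db-service"),
    (datetime(2020, 1, 1, 10, 4), "network-node"),
    (datetime(2020, 1, 1, 11, 0), "server-B"),
]
print(complex_failures(events))
```

The resulting device sequences are the units from which the study selects the frequently co-failing servers to feed the LSTM/HAN model.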

Modeling of Estimating Soil Moisture, Evapotranspiration and Yield of Chinese Cabbages from Meteorological Data at Different Growth Stages (기상자료(氣象資料)에 의(依)한 배추 생육시기별(生育時期別) 토양수분(土壤水分), 증발산량(蒸發散量) 및 수량(收量)의 추정모형(推定模型))

  • Im, Jeong-Nam;Yoo, Soon-Ho
    • Korean Journal of Soil Science and Fertilizer, v.21 no.4, pp.386-408, 1988
  • A study was conducted from 1981 to 1986 in Suweon, Korea, to develop a model for estimating evapotranspiration and yield of Chinese cabbages from meteorological factors. Lysimeters with the water table maintained at 50 cm depth were used to measure the potential evapotranspiration and the maximum evapotranspiration in situ. The actual evapotranspiration and yield were measured in field plots irrigated at soil moisture regimes of -0.2, -0.5, and -1.0 bars, respectively. Soil water content throughout the profile was monitored by a neutron moisture depth gauge, and soil water potentials were measured using gypsum blocks and tensiometers. The fresh weight of Chinese cabbages at harvest was taken as yield. The data collected in situ were analyzed to obtain model parameters. The results are summarized as follows. 1. The 5-year mean potential evapotranspiration (PET) gradually increased from 2.38 mm/day in early April to 3.98 mm/day in mid-June, and thereafter decreased to 1.06 mm/day in mid-November. PET estimated by the Penman, Radiation, or Blanney-Criddle methods was overestimated compared with the measured PET, while the pan-evaporation method underestimated it. The correlation between estimated and measured PET, however, was highly significant except for July and August by the Blanney-Criddle method, implying that the coefficients should be adjusted to Korean conditions. 2. The meteorological factors highly correlated with measured PET were temperature, vapour pressure deficit, sunshine hours, solar radiation, and pan-evaporation. Several multiple regression equations using meteorological factors were formulated to estimate PET; the equation with pan-evaporation (Eo) was the simplest yet highly accurate: PET = 0.712 + 0.705Eo. 3. The crop coefficient of Chinese cabbages (Kc), the ratio of the maximum evapotranspiration (ETm) to PET, ranged from 0.5 to 0.7 at the early growth stage and from 0.9 to 1.2 at the mid and late growth stages. The regression equations with respect to the growth progress degree (G), ranging from 0.0 at transplanting to 1.0 at harvest, were: Kc = 0.598 + 0.959G - 0.501G² for spring cabbages and Kc = 0.402 + 1.887G - 1.432G² for autumn cabbages. 4. The soil factor (Kf), the ratio of the actual evapotranspiration to the maximum evapotranspiration, was 1.0 when the available soil water fraction (f) was higher than a threshold value (fp) and decreased linearly with decreasing f below fp: Kf = 1.0 for f ≥ fp; Kf = a + bf for f < fp. 5. Es = I for I ≤ Esm; Es = Esm for I > Esm. 6. The model for estimating actual evapotranspiration (ETa) was based on the water balance, neglecting capillary rise: ETa = PET·Kc·Kf + Es. 7. The model for estimating relative yield (Y/Ym) was selected from regression equations on the measured ETa as Y/Ym = a + b·ln(ETa); the coefficients a and b were 0.07 and 0.73 for spring Chinese cabbages and 0.37 and 0.66 for autumn Chinese cabbages, respectively. 8. The estimated ETa and Y/Ym were compared with measured values to verify the model. Estimated ETa deviated by at most 0.29 mm/day for spring and 0.19 mm/day for autumn Chinese cabbages, and the average deviations of the estimated relative yield were 0.14 and 0.09, respectively. 9. The deviations between values estimated by the model and values obtained from three cropping field experiments after model calibration were within a reasonable confidence range; the model was therefore validated for practical use.
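The chain of relations above (Kc from growth stage, Kf from soil water, ETa, then relative yield) can be assembled into a small calculation. The sketch below uses the spring coefficients reported in the abstract; the Kf parameters a and b and the Es term are placeholder inputs, since their fitted values are not given here.

```python
import math

def kc_spring(G):
    """Crop coefficient vs growth progress degree G in [0, 1] (spring)."""
    return 0.598 + 0.959 * G - 0.501 * G ** 2

def kf(f, fp, a, b):
    """Soil factor: 1.0 above the threshold water fraction fp, else linear."""
    return 1.0 if f >= fp else a + b * f

def relative_yield_spring(eta):
    """Y/Ym = a + b ln(ETa) with the spring coefficients from the abstract."""
    return 0.07 + 0.73 * math.log(eta)

# Example: mid-season (G = 0.5), moderate water stress (f = 0.3 < fp = 0.5),
# PET = 3.98 mm/day from the abstract; a, b, fp and Es = 0 are assumed values.
pet, es = 3.98, 0.0
eta = pet * kc_spring(0.5) * kf(0.3, 0.5, 0.2, 1.6) + es
print(round(eta, 2), round(relative_yield_spring(eta), 2))
```

With ample soil water (f ≥ fp) the soil factor drops out and ETa reduces to PET·Kc + Es, matching the water-balance model in item 6.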
