• Title/Summary/Keyword: Accurate calculation method

Study on Glomerular Filtration Rate comparison according to renal depth measurement of kidney donors (신 공여자에서 신장 깊이 측정에 따른 사구체여과율의 비교에 관한 고찰)

  • Lee, Han Wool;Park, Min Soo;Kang, Chun Goo;Cho, Seok Won;Kim, Joo Yeon;Kwon, O Jun;Lim, Han Sang;Kim, Jae Sam;Park, Hoon-Hee
    • The Korean Journal of Nuclear Medicine Technology / v.18 no.2 / pp.48-56 / 2014
  • Purpose: $^{99m}Tc$-DTPA renal scintigraphy serves as a key examination for measuring a kidney donor's glomerular filtration rate (GFR) and determining eligibility for kidney transplantation. The Gates method used to measure GFR depends on three variables: renal depth, injected dose, and net kidney counts. In this study, we compared changes in kidney donors' GFR according to the renal depth measurement method. Materials and Methods: We investigated 32 kidney donors who visited the hospital from October 2013 to March 2014 and underwent abdominal CT and a $^{99m}Tc$-DTPA GFR examination. Renal depth was measured on the cross-sectional CT image and on the lateral gamma-camera image and compared with the renal depth calculation equations of Tonnesen, Taylor, and Itoh. Depth-specific GFR was calculated using GE Xeleris Ver. 2.1220, and the results were compared with MDRD (Modification of Diet in Renal Disease) GFRs based on serum creatinine level. Results: The renal depths measured from the CT and gamma-camera images showed high correlation. The Tonnesen equation gave the lowest GFR value, while the value calculated from the CT-measured renal depth was the highest, with a 16.62% gap. MDRD GFR showed no statistically significant difference from the values calculated with the Taylor, Itoh, CT, and gamma-camera renal depths (P>0.05), but differed significantly from the value based on the Tonnesen equation (P<0.05). Conclusion: In GFR evaluation of kidney donors using $^{99m}Tc$-DTPA, the Tonnesen-based Gates method underestimated GFR relative to the MDRD GFR. Therefore, when the MDRD GFR differs markedly from the measured value, using an image-based renal depth instead of the Tonnesen equation in the Gates method is expected to yield a more accurate GFR for kidney donors.
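The Gates calculation compared above can be sketched in code. The Tonnesen depth equations and Gates regression constants below are the commonly published values, and the attenuation coefficient for $^{99m}$Tc is an assumed textbook figure, not data from this paper:

```python
import math

MU_TC99M = 0.153  # assumed linear attenuation coefficient of 99mTc in soft tissue (cm^-1)

def tonnesen_depth(weight_kg, height_cm):
    """Tonnesen renal depth estimates (cm) for the right and left kidney."""
    right = 13.3 * weight_kg / height_cm + 0.7
    left = 13.2 * weight_kg / height_cm + 0.7
    return right, left

def gates_gfr(right_counts, left_counts, right_depth_cm, left_depth_cm, injected_counts):
    """Gates-method GFR (mL/min): depth-corrected percent renal uptake of the
    injected dose, put through the published Gates regression."""
    uptake_pct = 100.0 * (
        right_counts * math.exp(MU_TC99M * right_depth_cm)
        + left_counts * math.exp(MU_TC99M * left_depth_cm)
    ) / injected_counts
    return uptake_pct * 9.8127 - 6.82519  # Gates regression constants
```

Because the depth enters through an exponential, the 16.62% gap reported above follows directly from small differences between equation-based and image-based depths.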

Utility Evaluation on Application of Geometric Mean Depending on Depth of Kidney in Split Renal Function Test Using 99mTc-MAG3 (99mTc-MAG3를 이용한 상대적 신장 기능 평가 시 신장 깊이에 따른 기하평균 적용의 유용성 평가)

  • Lee, Eun-Byeul;Lee, Wang-Hui;Ahn, Sung-Min
    • Journal of radiological science and technology / v.39 no.2 / pp.199-208 / 2016
  • A $^{99m}Tc-MAG_3$ renal scan acquires dynamic images that continuously visualize the uptake and excretion of the radiopharmaceutical by the kidneys. Split renal function is obtained from the count ratio between the two kidneys at 1~2.5 minutes after the start of the examination and is expressed as each kidney's share of overall renal function. This study compares the split renal function obtained from posterior-detector data alone, the conventional method, with that obtained from the geometric mean of the anterior- and posterior-detector counts, and evaluates the usefulness of this attenuation compensation when the depths of the two kidneys differ. From July 2015 to February 2016, 33 patients who underwent a $^{99m}Tc-MAG_3$ renal scan (13 male, 20 female; mean age 44.66, range 5~70; mean height 160.40 cm; mean weight 55.40 kg) were enrolled. Mean kidney depth was 65.82 mm on the left and 71.62 mm on the right. In the supine position, 30 of 33 patients showed a higher ratio for the deeper-seated kidney and a lower ratio for the shallower one with the geometric-mean method. This result is attributed to compensation for attenuation between the deeper kidney and the detector. Where the depths of the two kidneys differ, as with lesions in or around the kidney, spinal malformation, or an ectopic kidney, the ratio of the deeper kidney must be compensated to calculate split renal function more accurately than the conventional posterior-detector counting allows.
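The geometric-mean compensation compared above can be sketched as follows; the counts and variable names are hypothetical:

```python
import math

def split_function_posterior(left_post, right_post):
    """Conventional split renal function from posterior-detector counts only."""
    total = left_post + right_post
    return left_post / total, right_post / total

def split_function_geometric(left_ant, left_post, right_ant, right_post):
    """Split renal function from the geometric mean of anterior and posterior
    counts, which compensates for depth-dependent attenuation."""
    left_gm = math.sqrt(left_ant * left_post)
    right_gm = math.sqrt(right_ant * right_post)
    total = left_gm + right_gm
    return left_gm / total, right_gm / total
```

A kidney that sits deeper relative to the posterior detector is attenuated more in the posterior view but less in the anterior view, so the geometric mean raises its share, which is the effect observed in 30 of the 33 patients.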

The characteristics on dose distribution of a large field (넓은 광자선 조사면($40{\times}40cm^2$ 이상)의 선량분포 특성)

  • Lee Sang Rok;Jeong Deok Yang;Lee Byoung Koo;Kwon Young Ho
    • The Journal of Korean Society for Radiation Therapy / v.15 no.1 / pp.19-27 / 2003
  • I. Purpose: In special cases such as total body irradiation (TBI), half body irradiation (HBI), non-Hodgkin's lymphoma, Ewing's sarcoma, lymphosarcoma, and neuroblastoma, a large field can be used clinically. In practice, the dose distribution of a large field is usually derived by calibration from measurements made for small fields (standard SSD 100 cm, field size under $40{\times}40cm^2$). With such a simple calculation alone, however, it is difficult to know the dose and its uniformity in the actual body region because of the various scatter contributions. II. Method & Materials: In this study, the basic parameters (PDD, TMR, Output, Sc, Sp) were measured with a Multidata water phantom at standard SSD 100 cm as a function of field size. The same parameters were then measured with increasing field size at SSD 180 cm (phantom vertical, beam directed at the floor) and at SSD 350 cm (small water phantom against the wall, with a mylar window allowing measurement of a horizontal beam), and the results were compared. III. Results & Conclusion: Compared with the standard dose data, the parameters measured at SSD 180 cm and 350 cm showed little difference, and the discrepancies did not exceed the experimental error. To obtain accurate data, dose measurements in an anthropomorphic phantom, or in a specially devised quasi-infinite phantom capable of providing absolute values, are required. In addition, the use of a small-volume ionization chamber and the stem effect of the cable should be considered for a large field.
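For context, the measured parameters feed into the standard monitor-unit formalism; the sketch below is a generic textbook calculation, not the paper's method, and the extended-distance factor is only a first-order inverse-square approximation (the scatter changes at very large fields are exactly what the measurements above quantify):

```python
def monitor_units(dose_cgy, output_cgy_per_mu, sc, sp, tmr, inverse_square=1.0):
    """Monitor units for an isocentric treatment:
    MU = D / (output * Sc * Sp * TMR * ISF)."""
    return dose_cgy / (output_cgy_per_mu * sc * sp * tmr * inverse_square)

def extended_distance_factor(ref_dist_cm=100.0, treat_dist_cm=350.0):
    """Inverse-square correction for treating at an extended distance,
    as in the TBI/HBI setups at 180 cm and 350 cm above."""
    return (ref_dist_cm / treat_dist_cm) ** 2
```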

Introduction to the Benthic Health Index Used in Fisheries Environment Assessment (어장환경평가에 사용하는 저서생태계 건강도지수(Benthic Health Index)에 대한 소개)

  • Rae Hong Jung;Sang-Pil Yoon;Sohyun Park;Sok-Jin Hong;Youn Jung Kim;Sunyoung Kim
    • Journal of the Korean Society of Marine Environment & Safety / v.29 no.7 / pp.779-793 / 2023
  • Intensive and long-term aquaculture activities in Korea have generated considerable amounts of organic matter, deteriorating the sedimentary environment and ecosystem. The Korean government enacted the Fishery Management Act to preserve and manage the environment of fish farms. On this basis, a fisheries environment assessment has been conducted on fish cage farms since 2014, necessitating the development of a scientific and objective evaluation method suitable for the domestic environment. Therefore, a benthic health index (BHI) was developed using the relationship between benthic polychaete communities and organic matter, a major source of pollution in fish farms. In this study, the development process and calculation method of the BHI are introduced. The BHI was calculated by classifying 225 species of polychaetes appearing in domestic coastal and aquaculture areas into four groups, linking the concentration gradient of total organic carbon in the sediment with the distributional characteristics of each species, and assigning differential weights to each group. Using the BHI, benthic fauna communities were assigned to one of four ecological classes (Grade 1: Normal, Grade 2: Slightly polluted, Grade 3: Moderately polluted, and Grade 4: Heavily polluted). Applied in the field, the developed index evaluated the Korean environment effectively, being relatively more accurate and less affected by season than existing evaluation methods such as the diversity index or AZTI's Marine Biotic Index developed overseas. In addition, the BHI will be useful for the environmental management of fish farms, as the environment can be graded in quantified figures.
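The abstract describes an AMBI-style abundance-weighted calculation over four tolerance groups. A generic sketch follows; the group weights and grade thresholds here are hypothetical, since the actual BHI weights are defined in the paper itself:

```python
def biotic_index(group_abundance, weights):
    """AMBI-style weighted index: each polychaete species is assigned to a
    pollution-tolerance group, and the index is the abundance-weighted mean
    of the group weights over all individuals counted."""
    total = sum(group_abundance.values())
    if total == 0:
        raise ValueError("no individuals counted")
    return sum(weights[g] * n for g, n in group_abundance.items()) / total

def ecological_grade(index, thresholds):
    """Map an index value onto ecological classes Grade 1..4 using ascending
    threshold boundaries (hypothetical values)."""
    for grade, t in enumerate(thresholds, start=1):
        if index <= t:
            return grade
    return len(thresholds) + 1
```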

Calculation of Unit Hydrograph from Discharge Curve, Determination of Sluice Dimension and Tidal Computation for Determination of the Closure curve (단위유량도와 비수갑문 단면 및 방조제 축조곡선 결정을 위한 조속계산)

  • 최귀열
    • Magazine of the Korean Society of Agricultural Engineers / v.7 no.1 / pp.861-876 / 1965
  • During my stay in the Netherlands, I studied the following topics, primarily in relation to the Mokpo Yong-san project, which had been studied by NEDECO for a feasibility report. 1. Unit hydrograph at Naju. There are many ways to construct a unit hydrograph, but here I explain how to derive one from an actual discharge curve at Naju. A discharge curve from a single rain storm depends on the rainfall intensity per hour. After finding the hydrograph at two-hour intervals, the two-hour unit hydrograph is obtained by dividing each ordinate of the two-hour hydrograph by the rainfall intensity. I used one storm from June 24 to June 26, 1963, with an average rainfall intensity of 9.4 mm per hour for 12 hours. If several rain gauge stations had been established in the catchment area above Naju before this storm, I could have gathered accurate data on rainfall intensity throughout the catchment; as it was, I used the automatic rain gauge record of the Mokpo meteorological station to determine the rainfall intensity. To develop the unit hydrograph at Naju, I subtracted the base flow from the total runoff. I also tried to keep the difference between the calculated and measured discharge under 10%. The discharge period of a unit graph depends on the length of the catchment area. 2. Determination of sluice dimension. According to the design principles presently used in our country, a one-day storm with a frequency of 20 years must be discharged in 8 hours. These design criteria are not adequate, and several dams have washed out in past years. The design of the spillway and sluice dimensions must be based on the maximum peak discharge flowing into the reservoir to avoid crop and structural damage. The total flow into the reservoir is the sum of the flow described by the Mokpo hydrograph, the base flow from all the catchment areas, and the rainfall on the reservoir area.
To calculate the amount of water discharged through the sluice (per half hour), the average head during that interval must be known. This can be calculated from the known water level outside the sluice (determined by the tide) and from an estimated water level inside the reservoir at the end of each time interval. The total amount of water discharged through the sluice can then be calculated from this average head, the time interval, and the cross-sectional area of the sluice. From the inflow into the reservoir and the outflow through the sluice gates, I calculated the change in the volume of water stored in the reservoir at half-hour intervals. From the stored volume and the known storage capacity of the reservoir, I calculated the water level in the reservoir; this calculated level must agree with the estimated level. Mean tide is adequate for determining the sluice dimension, because spring tide is the worst case and neap tide the best for the result of the calculation. 3. Tidal computation for determination of the closure curve. During the construction of a dam, whether by building up a succession of horizontal layers or by building in from both sides, the velocity of the water flowing through the closing gap increases because of the gradual decrease in the cross-sectional area of the gap. I calculated the velocities in the closing gap during flood and ebb for the first-mentioned construction method until the cross-sectional area had been reduced to about 25% of the original area, the change in tidal movement within the reservoir being negligible. Up to that point, the increase in velocity is more or less hyperbolic. During the closing of the last 25% of the gap, less water can flow out of the reservoir, which raises the mean water level of the reservoir. The difference in hydraulic head is then no longer negligible and must be taken into account.
When, during the course of construction, the submerged weir becomes a free weir, critical flow occurs. Critical flow is the point, during either ebb or flood, at which the velocity reaches a maximum. When the dam is raised further, the velocity decreases because of the decrease in the height of water above the weir. The currents and velocities for a stage in the closure of the final gap are calculated as follows. Using an average tide with a negligible daily inequality, I estimated the water level on the upstream side of the dam (inner water level). I determined the current through the gap for each hour by multiplying the storage area by the increment of the rise in water level. The velocity at a given moment can be determined from the calculated current in m³/sec and the cross-sectional area at that moment. At the same time, the velocity can be calculated from the difference between the inner water level and the tidal (outer) water level with the formula $h=\frac{V^2}{2g}$, and it must equal the velocity determined from the current. If the two velocities differ, a new estimate of the inner water level must be made and the entire procedure repeated. When the higher water level is equal to or more than 2/3 of the difference between the lower water level and the crest of the dam, we speak of a "free weir"; the flow over the weir then depends on the higher water level and not on the difference between high and low water levels. When the weir is "submerged", that is, when the higher water level is less than 2/3 of that difference, the difference between the high and low levels is decisive. The free weir normally occurs first during ebb, because the mean level in the estuary is higher than the mean level of the tide.
In building dams with barges, the maximum velocity in the closing gap may not exceed 3 m/sec. As the calculated maximum velocities are higher than this limit, other construction methods must be used in closing the gap, such as dump cars from each side or a cableway.
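Two of the calculations described above can be sketched directly; the numbers in the comments come from the abstract, while the function names are illustrative:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def unit_hydrograph(discharge_m3s, base_flow_m3s, rain_intensity_mm_per_h):
    """Derive unit-hydrograph ordinates from a single-storm discharge record:
    subtract the base flow, then divide each ordinate by the rainfall
    intensity (9.4 mm/h in the June 1963 storm used by the author)."""
    return [(q - base_flow_m3s) / rain_intensity_mm_per_h for q in discharge_m3s]

def gap_velocity(head_m):
    """Velocity through the closing gap from the head difference between the
    inner (reservoir) and outer (tidal) water levels: h = V^2 / (2g)."""
    return math.sqrt(2 * G * head_m)
```

With this relation, the 3 m/sec barge limit corresponds to a head difference of roughly 0.46 m, which is why higher calculated velocities force a change of construction method.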

The Effect of Photoneutron Dose in High Energy Radiotherapy (10 MV 이상 고에너지 치료 시 발생되는 광중성자의 영향)

  • Park, Byoung Suk;Ahn, Jong Ho;Kwon, Dong Yeol;Seo, Jeong Min;Song, Ki Weon
    • The Journal of Korean Society for Radiation Therapy / v.25 no.1 / pp.9-14 / 2013
  • Purpose: High-energy radiotherapy at 10 MV or above produces photoneutrons through photonuclear reactions. Photoneutrons have a higher radiation weighting factor than X-rays, so even a low dose can greatly affect the human body, and accurate dosimetric calculation and consultation are needed. This study compared and analyzed the spatial change of photoneutron dose according to photon beam energy and treatment method. Materials and Methods: To measure the change of photoneutron dose with photon beam energy, patients with the same treatment area were recruited, and conventional plans with 10 MV and 15 MV were each made. To measure the difference between treatment methods, a 10 MV conventional plan and a 10 MV IMRT plan were made. A $^3He$ proportional counter was placed at a point 100 cm from the photon beam isocenter, and the photoneutron dose was measured. The counter was then placed 50 cm longitudinally superior and inferior along the couch, with the central point as reference, to measure dose changes with position. A commercial program was used for dose analysis. Results: The average integral dose was $220.27{\mu}Sv$ in 10 MV and $526.61{\mu}Sv$ in 15 MV conventional RT; the average dose increased 2.39 times at 15 MV. With the same energy, the average photoneutron integral dose was $220.27{\mu}Sv$ in conventional RT and $308.27{\mu}Sv$ in IMRT, a 1.40-fold increase with IMRT. By measurement location, the average photoneutron integral dose was significantly higher at point 2 than at point 3 in conventional RT, by 7.1% at 10 MV and 3.0% at 15 MV. Conclusion: In high-energy radiotherapy, energy selection, treatment method, and patient position should be considered to reduce unnecessary photoneutron dose. The photoneutron dose data also need to be systematized so that they can be applied in computerized programs. This is expected to decrease the probability of secondary cancer and side effects from radiation therapy and to minimize unnecessary dose to patients.

Construction and Application of Intelligent Decision Support System through Defense Ontology - Application example of Air Force Logistics Situation Management System (국방 온톨로지를 통한 지능형 의사결정지원시스템 구축 및 활용 - 공군 군수상황관리체계 적용 사례)

  • Jo, Wongi;Kim, Hak-Jin
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.77-97 / 2019
  • The large amount of data that emerges from the hyper-connected environment of the Fourth Industrial Revolution is a major factor distinguishing it from existing production environments. This environment has the two-sided feature of producing data while using it, and the data so produced creates further value. Because of this massive scale, future information systems must process more data than existing systems in terms of quantity and, in terms of quality, must be able to extract the needed meaning from large amounts of data. In a small-scale information system a person can understand the system accurately and obtain the necessary information, but in complex systems that are difficult to understand accurately, acquiring the desired information becomes increasingly hard. In other words, more accurate processing of large amounts of data has become a basic requirement for future information systems. This problem of efficient information-system performance can be addressed by building a semantic web, which enables varied information processing by expressing the collected data as an ontology that can be understood not only by people but also by computers. As in most other organizations, IT has been introduced in the military, and most work is now done through information systems. As existing systems come to contain increasingly large amounts of data, efforts are needed to make them easier to use through better data utilization. An ontology-based system forms a large semantic network of data through connection with other systems, has a wide range of usable databases, and has the advantage of more precise and faster search through the relationships between predefined concepts.
In this paper, we propose a defense ontology as a method for effective data management and decision support. To judge its applicability and effectiveness in a real system, we reconstructed the existing Air Force logistics situation management system as an ontology-based system. That system was built to strengthen commanders' and practitioners' management and control of the logistics situation by providing real-time information on maintenance and distribution, since the complicated logistics information system with its large amount of data had become difficult to use. It takes pre-specified information from the existing logistics system and displays it as web pages, so little can be confirmed beyond the few items specified in advance; extending it with additional functions when necessary is time-consuming, and it is organized by category with no search function. It therefore has the disadvantage of being easy to use only for those who already know the system well, as with the existing system. The ontology-based logistics situation management system is designed to provide intuitive visualization of the complex information in the existing logistics information system through the ontology. To construct it, useful functions such as performance-based logistics support contract management and a component dictionary were additionally identified and included in the ontology. To confirm that the constructed ontology can support decisions, meaningful analysis functions such as calculating aircraft utilization rates and querying performance-based military contracts were implemented.
In particular, in contrast to ontology databases built in past ontology studies, this study builds time-series data whose values change over time, such as the state of each aircraft by date, into the ontology, and confirms through it that utilization rates can be calculated on various criteria. In addition, data related to performance-based logistics contracts, introduced as a new maintenance method for aircraft and other munitions, can be queried in various ways, and the performance indexes used in such contracts are easy to calculate through reasoning and functions. We also propose a new performance index that complements the limitations of the currently applied indicators and calculate it through the ontology, confirming the usability of the constructed ontology. Finally, the failure rate or reliability of each component can be calculated, including MTBF data for selected fault-tolerant items based on actual part consumption, and the mission reliability and system reliability are computed. To confirm the usability of the constructed ontology-based logistics situation management system, we show through the Technology Acceptance Model (TAM), a representative model for measuring the acceptability of a technology, that the proposed system is more useful and convenient than the existing system.
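The time-series utilization-rate calculation described above can be illustrated with a minimal in-memory triple store; all identifiers, dates, and statuses below are hypothetical stand-ins for the defense ontology's actual vocabulary:

```python
# Toy (subject, predicate, object) triples: each aircraft's status on a date
# is stored as a time-series fact, as in the ontology described above.
triples = [
    ("aircraft:001", "hasStatusOn", ("2019-03-01", "available")),
    ("aircraft:001", "hasStatusOn", ("2019-03-02", "maintenance")),
    ("aircraft:002", "hasStatusOn", ("2019-03-01", "available")),
    ("aircraft:002", "hasStatusOn", ("2019-03-02", "available")),
]

def utilization_rate(triples, date):
    """Utilization (availability) rate on a given date: the share of aircraft
    whose status triple for that date is 'available'."""
    statuses = [obj[1] for subj, pred, obj in triples
                if pred == "hasStatusOn" and obj[0] == date]
    return sum(s == "available" for s in statuses) / len(statuses)
```

Because the facts are stored as relations rather than pre-rendered pages, the same data answers utilization queries over any date range or fleet subset, which is the advantage the paper claims over the fixed category pages of the legacy system.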

A Study on Web-based Technology Valuation System (웹기반 지능형 기술가치평가 시스템에 관한 연구)

  • Sung, Tae-Eung;Jun, Seung-Pyo;Kim, Sang-Gook;Park, Hyun-Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.23-46 / 2017
  • Although there have been cases of evaluating the value of specific companies or projects since the early 2000s, centered on developed countries in North America and Europe, systems and methodologies for estimating the economic value of individual technologies or patents have become active only gradually. There are several online systems that qualitatively evaluate a technology's grade or patent rating, such as 'KTRS' of KIBO and 'SMART 3.1' of the Korea Invention Promotion Association. Recently, however, a web-based technology valuation system, the 'STAR-Value system', which calculates quantitative values of a subject technology for purposes such as business feasibility analysis, investment attraction, and tax/litigation, has been officially opened and is spreading. In this study, we introduce the types of methodology and valuation models, the reference information supporting these theories, and how the associated databases are utilized, focusing on the modules and frameworks embedded in the STAR-Value system. In particular, there are six valuation methods, including the discounted cash flow (DCF) method, a representative income-approach method that discounts anticipated future economic income to present value, and the relief-from-royalty method, which calculates the present value of royalties, taking the royalty rate as the contribution of the subject technology to the business value created. We examine how these models and the related supporting information (technology life, corporate (business) financial information, discount rate, industrial technology factors, etc.) can be used and linked in an intelligent manner.
Based on classifications of the technology to be evaluated such as the International Patent Classification (IPC) or the Korea Standard Industry Classification (KSIC), the STAR-Value system automatically returns metadata such as technology cycle time (TCT), sales growth rate and profitability data of similar companies or industry sectors, weighted average cost of capital (WACC), and indices of industrial technology factors, and applies adjustment factors to them, so that the calculated technology value has high reliability and objectivity. Furthermore, if the potential market size of the target technology and the market share of the commercializing entity are drawn from data-driven information, or if the estimated value ranges of similar technologies by industry sector are provided from completed evaluation cases accumulated in the database, the STAR-Value system is anticipated to present highly accurate value ranges in real time by intelligently linking its various support modules. Beyond explaining the various valuation models and their primary variables as presented in this paper, the STAR-Value system aims to operate more systematically and in a data-driven way by supporting an optimal model selection guideline module, an intelligent technology value range reasoning module, and a market share prediction module based on similar-company selection. The development and intelligence of the web-based STAR-Value system is significant in that it widely spreads a web-based system usable for validating and applying the theory of technology valuation in practice, and it is expected to be utilized in various fields of technology commercialization.
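The two income-approach methods named above can be sketched as follows; this is a generic textbook formulation, not the STAR-Value system's internal implementation:

```python
def dcf_value(cash_flows, discount_rate):
    """Discounted cash flow: present value of projected yearly free cash
    flows over the technology's remaining economic life."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

def relief_from_royalty(revenues, royalty_rate, discount_rate, tax_rate=0.0):
    """Relief-from-royalty: present value of the after-tax royalties the
    owner is 'relieved' from paying, with the royalty rate standing in for
    the technology's contribution to the business value created."""
    royalties = [r * royalty_rate * (1 - tax_rate) for r in revenues]
    return dcf_value(royalties, discount_rate)
```

The supporting data the system returns automatically map onto these inputs: TCT bounds the projection horizon, similar-company sales growth and profitability shape the cash flows, and the WACC supplies the discount rate.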

Determination of Minimal Pressure Support Level During Weaning from Pressure Support Ventilation (압력보조 환기법으로 기계호흡 이탈시 최소압력보조(Minimal Pressure Support) 수준의 결정)

  • Jung, Bock-Hyun;Koh, Youn-Suck;Lim, Chae-Man;Lee, Sang-Do;Kim, Woo-Sung;Kim, Dong-Soon;Kim, Won-Dong
    • Tuberculosis and Respiratory Diseases / v.45 no.2 / pp.380-387 / 1998
  • Background: Minimal pressure support (PSmin) is the level of pressure support that offsets the imposed work of breathing (WOBimp) caused by the endotracheal tube and ventilator circuit during pressure support ventilation. While a pressure support level lower than PSmin can induce respiratory muscle fatigue, a level higher than PSmin keeps the respiratory muscles at rest, prolonging the weaning period. PSmin is usually applied at 5~10 cm$H_2O$, but the accurate level is difficult to determine in individual cases. PSmin is known to be calculable with the equation "PSmin = peak inspiratory flow rate during spontaneous ventilation $\times$ total ventilatory system resistance", but the correlation between calculated and measured PSmin has not been established. The aims of this study were, first, to assess whether the customary pressure support level of 5~10 cm$H_2O$ is appropriate to offset the imposed work of breathing in patients undergoing weaning and, second, to estimate the correlation between measured and calculated PSmin. Method: 1) Measurement of PSmin: Intratracheal pressure changes were measured through a Hi-Lo jet tracheal tube (8 mm in diameter, Mallinckrodt, USA) using a pulmonary monitor (CP-100 pulmonary monitor, Bicore, USA), and the pressure support level of the mechanical ventilator was increased until WOBimp reached 0.01 J/L or less. Measured PSmin was defined as the lowest pressure that reduced WOBimp to 0.01 J/L or less. 2) Calculation of PSmin: Peak airway pressure (Ppeak), plateau airway pressure (Pplat), and mean inspiratory flow rate were measured on volume-control ventilation after sedation. Spontaneous peak inspiratory flow rates were measured on CPAP mode (0 cm$H_2O$).
PSmin was then calculated using the equation "PSmin = peak inspiratory flow rate $\times$ R", where R = (Ppeak − Pplat)/mean inspiratory flow rate during volume-control ventilation. Results: Sixteen patients considered candidates for weaning from mechanical ventilation were included. Mean age was 64 (${\pm}14$) years, and mean total ventilation time was 9 (${\pm}4$) days. All patients except one were male. The measured PSmin ranged from 4.0 to 12.5 cm$H_2O$ in 14 patients. The mean PSmin was 7.6 (${\pm}2.5\;cmH_2O$) for measured PSmin and 8.6 (${\pm}3.25\;cmH_2O$) for calculated PSmin. The correlation between measured and calculated PSmin was significantly high (n=9, r=0.88, p=0.002). The calculated PSmin tended to be higher than the corresponding measured PSmin in 8 of 9 subjects (p=0.09); the ratio of measured to calculated PSmin was 0.81 (${\pm}0.05$). Conclusion: Minimal pressure support levels varied among individuals in the range of 4 to 12.5 cm$H_2O$. Because the equation-derived PSmin showed a good correlation with the measured PSmin, applying the equation-derived PSmin would be more appropriate than the conventional 5~10 cm$H_2O$ in patients undergoing difficult weaning with pressure support ventilation.
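The calculation equation quoted above can be sketched directly (units as in the paper: pressures in cmH2O, flows in L/s; the example values in the test are illustrative, not patient data):

```python
def minimal_pressure_support(ppeak, pplat, mean_insp_flow, peak_spont_flow):
    """PSmin = spontaneous peak inspiratory flow x R, where the total
    ventilatory system resistance R = (Ppeak - Pplat) / mean inspiratory
    flow, measured on volume-control ventilation."""
    resistance = (ppeak - pplat) / mean_insp_flow
    return peak_spont_flow * resistance
```

For instance, with Ppeak 30 and Pplat 20 cmH2O at a mean inspiratory flow of 1.0 L/s, R is 10 cmH2O/(L/s), so a spontaneous peak flow of 0.8 L/s gives PSmin of 8 cmH2O, inside the 4.0~12.5 cmH2O range measured in the study.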

Modeling of Estimating Soil Moisture, Evapotranspiration and Yield of Chinese Cabbages from Meteorological Data at Different Growth Stages (기상자료(氣象資料)에 의(依)한 배추 생육시기별(生育時期別) 토양수분(土壤水分), 증발산량(蒸發散量) 및 수량(收量)의 추정모형(推定模型))

  • Im, Jeong-Nam;Yoo, Soon-Ho
    • Korean Journal of Soil Science and Fertilizer / v.21 no.4 / pp.386-408 / 1988
  • A study was conducted from 1981 to 1986 in Suweon, Korea, to develop a model for estimating the evapotranspiration and yield of Chinese cabbages from meteorological factors. Lysimeters with the water table maintained at 50 cm depth were used to measure the potential and maximum evapotranspiration in situ. The actual evapotranspiration and yield were measured in field plots irrigated at soil moisture regimes of -0.2, -0.5, and -1.0 bars, respectively. Soil water content throughout the profile was monitored with a neutron moisture depth gauge, and soil water potentials were measured using gypsum blocks and tensiometers. The fresh weight of Chinese cabbages at harvest was taken as the yield. The data collected in situ were analyzed to obtain the model parameters. The results are summarized as follows: 1. The 5-year mean potential evapotranspiration (PET) increased gradually from 2.38 mm/day in early April to 3.98 mm/day in mid-June, and thereafter decreased to 1.06 mm/day in mid-November. PET estimated by the Penman, Radiation, or Blaney-Criddle methods was overestimated compared with the measured PET, while that by the pan-evaporation method was underestimated. The correlation between estimated and measured PET, however, was highly significant except for July and August with the Blaney-Criddle method, implying that the coefficients should be adjusted to Korean conditions. 2. The meteorological factors highly correlated with the measured PET were temperature, vapour pressure deficit, sunshine hours, solar radiation, and pan evaporation. Several multiple regression equations using meteorological factors were formulated to estimate PET; the equation with pan evaporation (Eo) was the simplest yet highly accurate: PET = 0.712 + 0.705Eo 3.
The crop coefficient of Chinese cabbages (Kc), the ratio of the maximum evapotranspiration (ETm) to PET, ranged from 0.5 to 0.7 at the early growth stage and from 0.9 to 1.2 at the mid and late growth stages. The regression equations with respect to the growth progress degree (G), ranging from 0.0 on the transplanting day to 1.0 on the harvesting day, were: $$Kc=0.598+0.959G-0.501G^2$$ for spring cabbages and $$Kc=0.402+1.887G-1.432G^2$$ for autumn cabbages. 4. The soil factor (Kf), the ratio of the actual evapotranspiration to the maximum evapotranspiration, was 1.0 when the available soil water fraction (f) was higher than a threshold value (fp) and decreased linearly with decreasing f below fp: Kf = 1.0 for $$f{\geq}fp$$ and Kf = a + bf for f < fp. 5. Soil evaporation (Es) was limited by the maximum soil evaporation (Esm): Es = I for $$I{\leq}Esm$$ and Es = Esm for I > Esm. 6. The model for estimating actual evapotranspiration (ETa) was based on the water balance, neglecting capillary rise: ETa = PET · Kc · Kf + Es. 7. The model for estimating relative yield (Y/Ym) was selected among the regression equations with the measured ETa as: Y/Ym = a + b·ln(ETa). The coefficients a and b were 0.07 and 0.73 for spring Chinese cabbages and 0.37 and 0.66 for autumn Chinese cabbages, respectively. 8. The estimated ETa and Y/Ym were compared with the measured values to verify the model established above. The estimated ETa showed disparities within 0.29 mm/day for spring and 0.19 mm/day for autumn Chinese cabbages, and the average deviations of the estimated relative yield were 0.14 and 0.09, respectively. 9. The deviations between the values estimated by the model and the actual values obtained from three cropping field experiments after model calibration were within a reasonable confidence range; the model was therefore validated for practical use.
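The model equations summarized above can be collected into a small sketch; the fitted soil-factor coefficients a, b, and the threshold fp are values from the paper not given in the abstract, so they appear here as parameters:

```python
import math

def crop_coefficient(g, spring=True):
    """Growth-stage crop coefficient Kc from growth progress G in [0, 1]."""
    if spring:
        return 0.598 + 0.959 * g - 0.501 * g ** 2
    return 0.402 + 1.887 * g - 1.432 * g ** 2

def soil_factor(f, fp, a, b):
    """Kf = 1 when the available soil water fraction f >= threshold fp,
    otherwise it declines linearly as a + b*f (a, b fitted in the paper)."""
    return 1.0 if f >= fp else a + b * f

def actual_et(pet, kc, kf, es):
    """ETa = PET * Kc * Kf + Es (water balance, capillary rise neglected)."""
    return pet * kc * kf + es

def relative_yield(eta, a, b):
    """Y/Ym = a + b*ln(ETa); a=0.07, b=0.73 for spring cabbages and
    a=0.37, b=0.66 for autumn cabbages."""
    return a + b * math.log(eta)
```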