• Title/Summary/Keyword: model of computation

Search Results: 2,056 (processing time: 0.032 seconds)

Estimation of Threshold Runoff on Han River Watershed (한강유역 한강유출량 산정)

  • Kim, Jin-Hoon;Bae, Deg-Hyo
    • Journal of Korea Water Resources Association
    • /
    • v.39 no.2 s.163
    • /
    • pp.151-160
    • /
    • 2006
  • In this study, threshold runoff, the hydrologic component of flash flood guidance (FFG), is estimated for the Han River watershed using Manning's bankfull flow and the Geomorphoclimatic Instantaneous Unit Hydrograph (GcIUH) methods. A Geographic Information System (GIS) and a 3' Digital Elevation Model database were used to prepare the basin parameters for the threshold runoff computation: very fine drainage areas (1.02–56.41 km²), stream length, and stream slope. Cross-sectional data of the basin and stream channel were also collected for a statistical analysis of regional regression relationships, which were then used to estimate the stream parameters. The estimated threshold runoff values on the Han River headwater basin range from 2 mm/hr to 14 mm/6hr; 97% of the 1-hour duration values are below 8 mm, and 98% of the 6-hour values are below 14 mm. The sensitivity analysis shows that threshold runoff is more sensitive to stream channel cross-sectional factors, such as stream slope, top width, and friction slope, than to drainage area. Comparisons between the threshold runoffs computed for this study area and for three other regions in the United States indicate that the results for the Han River watershed are reasonable.
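
The bankfull-flow component of such a threshold runoff computation can be sketched with Manning's equation; the parameter values below are illustrative and not taken from the study.

```python
def manning_bankfull_flow(area_m2, hydraulic_radius_m, slope, n=0.035):
    """Bankfull discharge by Manning's equation (SI units):
    Q = (1/n) * A * R^(2/3) * S^(1/2), with roughness coefficient n."""
    return (1.0 / n) * area_m2 * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5
```

Threshold runoff is then the rainfall rate that, routed through the unit hydrograph peak, just produces this bankfull discharge.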

Analysis of RTM Process Using the Extended Finite Element Method (확장 유한 요소 법을 적용한 RTM 공정 해석)

  • Jung, Yeonhee;Kim, Seung Jo;Han, Woo-Suck
    • Composites Research
    • /
    • v.26 no.6
    • /
    • pp.363-372
    • /
    • 2013
  • Numerical simulation of the Resin Transfer Molding (RTM) manufacturing process is performed using the eXtended Finite Element Method (XFEM) combined with the level set method. XFEM makes it possible to obtain good numerical precision for the pressure near the resin flow front, where its gradient is discontinuous. The enriched shape functions of XFEM are derived from the level set values so as to correctly describe the interpolation at the resin flow front. In addition, the level set method is used to transport the resin flow front at each time step during mold filling. The level set values are calculated by an implicit characteristic Galerkin FEM. The multi-frontal solver of IPSAP is adopted to solve the system. This work is validated by comparing the obtained results with analytic solutions. Moreover, a localization method for the XFEM and level set computations is proposed to increase computing efficiency: the computation domain is reduced to a small region near the resin flow front, which strongly reduces the total computing time. The efficiency test is made with a simple channel flow model, and several application examples are analyzed to demonstrate the ability of this method.
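
The front-transport step of the level set method can be illustrated with a minimal 1-D explicit upwind update. This is a sketch only; the paper uses an implicit characteristic Galerkin FEM rather than this finite-difference form.

```python
def advect_front(phi, speed, dx, dt):
    """One explicit upwind step of the level set equation
    d(phi)/dt + F * d(phi)/dx = 0 for a front moving with speed F > 0.
    phi is a signed-distance-like list; the zero crossing marks the front.
    The inflow boundary value phi[0] is held fixed."""
    new = phi[:]
    for i in range(1, len(phi)):
        # Backward (upwind) difference, valid for F > 0
        new[i] = phi[i] - dt * speed * (phi[i] - phi[i - 1]) / dx
    return new
```

After one step with speed 1 and dt = 0.05, the zero crossing (the resin front) moves 0.05 units downstream.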

A Unifying Solution Method for Logical Topology Design on Wavelength-Routed Optical Networks (WDM의 논리망 구성과 파장할당 그리고 트래픽 라우팅을 위한 개선된 통합 해법)

  • 홍성필
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.25 no.9A
    • /
    • pp.1452-1460
    • /
    • 2000
  • A series of recent papers on logical topology design for wavelength-routed optical networks has proposed mathematical models and solution methods that unify logical topology design, wavelength assignment, and traffic routing. The most recent, by Krishnaswamy and Sivarajan, is more unifying and complete than the previous models. In particular, its mathematical formulation is an integer linear program, and hence regarded as ready for an efficient solution method, in contrast to the previous nonlinear programming models. The solution method in [7], however, is an elementary one relying on rounding of the linear program relaxation. When the rounding happens to be successful, it tends to produce near-optimal solutions; in general, there is no such guarantee, so the obtained solution may not satisfy essential constraints such as the logical-path hop-count and even the wavelength number constraints. Also, the computational effort for the linear program relaxation seems excessive. In this paper we propose an improved, unifying solution method based on the same model. First, its computation is considerably smaller. Second, it guarantees that the solution satisfies all the constraints. Finally, applied to the same instances, the quality of the solution is fairly competitive with the previous near-optimal solutions.
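
The abstract's point about rounding an LP relaxation can be seen in miniature: a fractional solution can satisfy a constraint that its rounded version violates. This toy check is illustrative only and unrelated to the paper's actual formulation.

```python
def round_and_check(x_frac, constraints):
    """Round each variable to the nearest integer and report which
    constraints (a, b), meaning a . x <= b, the rounded solution violates."""
    x = [round(v) for v in x_frac]
    violated = [(a, b) for (a, b) in constraints
                if sum(ai * xi for ai, xi in zip(a, x)) > b]
    return x, violated
```

For example, the fractional point (0.55, 0.55) satisfies x1 + x2 <= 1.2, but it rounds to (1, 1), which does not; this is why rounded solutions can break hop-count or wavelength constraints.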


Application of the Homogenization Analysis to Calculation of a Permeability Coefficient (투수계수 산정을 위한 균질화 해석법의 적응)

  • 채병곤
    • Journal of Soil and Groundwater Environment
    • /
    • v.9 no.1
    • /
    • pp.79-86
    • /
    • 2004
  • Hydraulic conductivity along a rock fracture depends mainly on fracture geometry: orientation, aperture, roughness, and connectivity. A numerical fracture model for calculating the permeability coefficient of a fracture therefore needs to account for these geometries sufficiently. This study performed a new type of numerical analysis, using the homogenization analysis (HA) method, to accurately calculate permeability coefficients along single fractures, with fracture models that incorporate fracture geometry as fully as possible. First, fracture roughness and the aperture variation under normal stress applied to a fracture were directly measured under a confocal laser scanning microscope (CLSM). The acquired geometric data were used as input to construct fracture models for the HA. Using these models, the HA method can compute a permeability coefficient that accounts for material properties at both the microscale and the macroscale. The HA is a perturbation-theory-based method developed to characterize the behavior of a micro-inhomogeneous material with a periodic microstructure. It calculates a microscale permeability coefficient on the homogeneous microscale and then computes a homogenized permeability coefficient (C-permeability coefficient) on the macroscale, making it possible to analyze permeability characteristics that reflect the local effects of fracture geometry. Several HA computations were conducted on the constructed 2-D fracture models to validate the HA results against the empirical permeability equations of previous studies. The models can be classified as parallel-plate models with fracture roughness and an identical aperture along the fracture.
According to the computation results, the C-permeability coefficients are within the same order of magnitude as, or differ by at most one order from, the permeability coefficients calculated by an empirical equation. This means the HA results are valid for calculating the permeability coefficient along a fracture. Moreover, the C-permeability coefficient should be more accurate than the pre-existing permeability equations, because the HA considers the permeability characteristics of locally inhomogeneous fracture geometries and material properties at both the microscale and the macroscale.
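
For the parallel-plate case used as the empirical benchmark above, the standard result is the cubic law, under which the hydraulic conductivity of a fracture scales with the square of its aperture. A minimal sketch, assuming water properties at about 20 °C:

```python
def cubic_law_conductivity(aperture_m, rho=998.2, g=9.81, mu=1.002e-3):
    """Parallel-plate (cubic law) hydraulic conductivity of a fracture:
    K = rho * g * b^2 / (12 * mu)  [m/s], for aperture b in metres."""
    return rho * g * aperture_m ** 2 / (12.0 * mu)
```

A 100-micron aperture gives a conductivity of roughly 8e-3 m/s; a homogenized (C-) permeability within an order of magnitude of this is the kind of agreement the study reports.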

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination (OD) surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network, providing the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement; consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS cannot project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the OD survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" and "micro-scale" calibrations are performed.
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results, because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the micro-scale calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These partial GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration is the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database; given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. For this research, however, the information available for developing the GM model is limited to ground counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but those data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation, and Selected Link (SELINK) analyses are then used to adjust the productions and attractions and, if necessary, recalibrate the GM.
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (the ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that selected link. Selected-link analyses are conducted using both 16 and 32 selected links. SELINK analysis with 32 selected links provides the smallest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved when using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The zonal coverage provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable, because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by the 32 selected links is 107% of total trip productions; more importantly, SELINK adjustment factors can be computed for all of the zones. The travel demand model resulting from the SELINK adjustments is evaluated using screenline volume analysis, functional class and route-specific volume analysis, area-specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis, using four screenlines with 28 check points, is used to evaluate the adequacy of the overall model: the total trucks crossing the screenlines are compared to the ground count totals, yielding LV/GC ratios of 0.958 with 32 selected links and 1.001 with 16 selected links.
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The %RMSE for the four screenlines resulting from the fourth and last GM run is 22% using 32 selected links and 31% using 16 selected links. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, 19% and 33% respectively, implying that the SELINK analysis results are reasonable for all sections of the state. Functional class and route-specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH, 37 check points), the US highways (USH, 50 check points), and the State highways (STH, 67 check points) is compared to the actual ground count totals. The magnitude of the overall link-volume-to-ground-count ratio by route does not show any specific pattern of over- or underestimation; however, the %RMSE is smallest for the ISH and largest for the STH. This pattern is consistent with the screenline analysis and with the overall relationship between %RMSE and ground count volume groups. Area-specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area (26 check points), the West area (36 check points), the East area (29 check points), and the South area (64 check points) is compared to the actual ground count totals. The four areas show similar results, with no specific patterns in the LV/GC ratio by area. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As with the screenline and volume-range analyses, the %RMSE is inversely related to average link volume.
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions divided by total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. The revised production adjustment factors are analyzed by plotting them on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin, suggesting that independent variables beyond population are needed for the development of the heavy truck trip generation model. Additional independent variables, including zonal employment data (office and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, none of which are currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process: the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found, but the revised SELINK adjustments overall appear to be reasonable.
The heavy truck VMT analysis compares the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT-computed value of 3.642 billion VMT; the forecast is 18.3% lower. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that E-E VMT is 21.24% of total VMT. In addition, tabulation of heavy truck VMT by route functional class shows that 76.5% of truck traffic traverses the freeways and expressways. Only 14.1% of total freeway truck traffic consists of I-I trips, while 80% of total collector truck traffic is I-I trips, implying that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners seeking to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted with the GM truck forecasting model under four scenarios. For better forecasting, ground-count-based segment adjustment factors are developed and applied, with ISH 90 & 94 and USH 41 used as example routes. The forecasting results using the ground-count-based segment adjustment factors are satisfactory for long-range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes.
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
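
The production-constrained gravity model and the SELINK link adjustment factor described above can be sketched as follows; the zone data are hypothetical, not the study's.

```python
def gravity_trips(productions, attractions, friction):
    """Production-constrained gravity model:
    T_ij = P_i * A_j * F_ij / sum_k (A_k * F_ik),
    so each row of trips sums to that zone's production P_i."""
    trips = []
    for i, p in enumerate(productions):
        denom = sum(a * friction[i][j] for j, a in enumerate(attractions))
        trips.append([p * attractions[j] * friction[i][j] / denom
                      for j in range(len(attractions))])
    return trips

def link_adjustment_factor(ground_count, assigned_volume):
    """SELINK factor: actual link volume (ground count) over assigned volume."""
    return ground_count / assigned_volume
```

In the SELINK process, this factor is applied to the productions and attractions of every zone whose trips use the selected link, and the assignment is repeated.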


Multiple Linear Analysis for Generating Parametric Images of Irreversible Radiotracer (비가역 방사성추적자 파라메터 영상을 위한 다중선형분석법)

  • Kim, Su-Jin;Lee, Jae-Sung;Lee, Won-Woo;Kim, Yu-Kyeong;Jang, Sung-June;Son, Kyu-Ri;Kim, Hyo-Cheol;Chung, Jin-Wook;Lee, Dong-Soo
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.41 no.4
    • /
    • pp.317-325
    • /
    • 2007
  • Purpose: Biological parameters can be quantified from dynamic PET data using compartment modeling and Nonlinear Least Squares (NLS) estimation. However, generating parametric images with NLS is not practical because of the initial-value problem and excessive computation time. For irreversible models, Patlak graphical analysis (PGA) has commonly been used as an alternative to NLS; in PGA, however, the start time $t^*$ (the time at which the linear phase starts) has to be determined. In this study, we suggest a new Multiple Linear Analysis for Irreversible Radiotracers (MLAIR) to estimate the fluoride bone influx rate ($K_i$). Methods: $[^{18}F]Fluoride$ dynamic PET scans were acquired for 60 min in three normal mini-pigs. The plasma input curve was derived from blood sampling of the femoral artery. Tissue time-activity curves were measured by drawing regions of interest (ROIs) on the femur head, vertebra, and muscle. Parametric images of $K_i$ were generated using the MLAIR and PGA methods. Results: In the ROI analysis, the $K_i$ values estimated with MLAIR and PGA were slightly higher than those of NLS, but the MLAIR and PGA results were equivalent. Patlak slopes ($K_i$) changed with different $t^*$ in low-uptake regions. Compared with PGA, the quality of the parametric image was considerably improved with the new method. Conclusion: The results showed that MLAIR is an efficient and robust method for generating $K_i$ parametric images from $[^{18}F]Fluoride$ PET. It will also be a good alternative to PGA for radiotracers that follow an irreversible three-compartment model.
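
For reference, the PGA baseline against which MLAIR is compared fits a straight line to normalized data after $t^*$: it plots $C_T(t)/C_p(t)$ against $\int_0^t C_p(\tau)d\tau / C_p(t)$, and the slope of the linear phase estimates $K_i$. A sketch with synthetic curves (not the study's data):

```python
def patlak_ki(t, ct, cp, t_star):
    """Patlak graphical analysis: regress y = Ct/Cp on
    x = (integral of Cp up to t) / Cp for samples with t >= t_star;
    the slope of the fitted line estimates the influx rate Ki."""
    integral, xs, ys = 0.0, [], []
    for k in range(1, len(t)):
        # Cumulative plasma integral by the trapezoidal rule
        integral += 0.5 * (cp[k] + cp[k - 1]) * (t[k] - t[k - 1])
        if t[k] >= t_star:
            xs.append(integral / cp[k])
            ys.append(ct[k] / cp[k])
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

The choice of `t_star` is exactly the free parameter that MLAIR is designed to remove.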

GPU Based Feature Profile Simulation for Deep Contact Hole Etching in Fluorocarbon Plasma

  • Im, Yeon-Ho;Chang, Won-Seok;Choi, Kwang-Sung;Yu, Dong-Hun;Cho, Deog-Gyun;Yook, Yeong-Geun;Chun, Poo-Reum;Lee, Se-A;Kim, Jin-Tae;Kwon, Deuk-Chul;Yoon, Jung-Sik;Kim, Dae-Woong;You, Shin-Jae
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2012.08a
    • /
    • pp.80-81
    • /
    • 2012
  • Recently, one of the critical issues in the etching processes of nanoscale devices has been achieving ultra-high aspect ratio contact (UHARC) profiles without anomalous behaviors such as sidewall bowing and twisting. To achieve this goal, fluorocarbon plasmas, whose major advantage is sidewall passivation, have commonly been used with numerous additives to obtain ideal etch profiles. However, they still face formidable challenges, such as tight limits on sidewall bowing and controlling randomly distorted features in nanoscale etch profiles. Furthermore, the absence of available plasma simulation tools has made it difficult to develop revolutionary technologies, including novel plasma chemistries and plasma sources, to overcome these process limitations. To address these issues, we performed fluorocarbon surface kinetic modeling, based on experimental plasma diagnostic data, for the silicon dioxide etching process under inductively coupled C4F6/Ar/O2 plasmas. For this work, the SiO2 etch rates were investigated with bulk plasma diagnostic tools such as a Langmuir probe, a cutoff probe, and a quadrupole mass spectrometer (QMS). The surface chemistries of the etched samples were measured by X-ray photoelectron spectroscopy. To measure plasma parameters in the polymer-depositing environment, a self-cleaning RF Langmuir probe was used at the probe tip and double-checked against the cutoff probe, which is known to be a precise diagnostic for electron density measurement. In addition, neutral and ion fluxes from the bulk plasma were monitored with appearance methods using the QMS signal. Based on these experimental data, we propose a phenomenological, realistic two-layer surface reaction model of the SiO2 etch process under the overlying polymer passivation layer, considering the material balance of deposition and etching through a steady-state fluorocarbon layer.
The predicted surface reaction modeling results showed good agreement with the experimental data. Building on these plasma-surface reaction studies, we developed a 3D topography simulator using a multi-layer level set algorithm and a new memory-saving technique suitable for 3D UHARC etch simulation. Ballistic transport of neutral and ion species inside the feature profile was treated by deterministic and Monte Carlo methods, respectively. For ultra-high aspect ratio contact hole etching, it is well known that realistic treatment of this ballistic transport imposes a huge computational burden. To address this issue, the relevant computational codes were efficiently parallelized for GPU (Graphics Processing Unit) computing, so that the total computation time improved by more than a few hundred times compared to the serial version. Finally, the 3D topography simulator was integrated with the ballistic transport module and the etch reaction model, and realistic etch-profile simulations accounting for the sidewall polymer passivation layer were demonstrated.


An Alternative Perspective to Resolve Modelling Uncertainty in Reliability Analysis for D/t Limitation Models of CFST (CFST의 D/t 제한모델들에 대한 신뢰성해석에서 모델링불확실성을 해결하는 선택적 방법)

  • Han, Taek Hee;Kim, Jung Joong
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.28 no.4
    • /
    • pp.409-415
    • /
    • 2015
  • For the design of Concrete-Filled Steel Tube (CFST) columns, the ratio of the outside diameter D to the steel tube thickness t (the D/t ratio) is limited to prevent local buckling of the steel tube. Each design code proposes its own model for computing the maximum D/t ratio from the yield strength of steel $f_y$, or from $f_y$ and the elastic modulus of steel E. Considering the uncertainty in $f_y$ and E, the reliability index $\beta$ for local buckling of a CFST section can be calculated by formulating a limit state function that includes the maximum-D/t model. The resulting $\beta$ depends on which maximum-D/t model is used in the reliability analysis. This variability is due to ambiguity in choosing among computational models and is called "modelling uncertainty." It can be treated as the "non-specificity" form of epistemic uncertainty and modelled by constructing possibility distribution functions. In this study, three different computational models for the maximum D/t ratio are used to conduct reliability analyses for local buckling of a CFST section, and the reliability index $\beta$ is computed for each. The "non-specific $\beta$s" are modelled by a possibility distribution function, and a metric, the degree of confirmation, is measured from that function. It is shown that the degree of confirmation increases as $\beta$ decreases. Conclusively, a new set of reliability indices associated with degrees of confirmation is determined, allowing the reliability index for local buckling of a CFST section to be decided with an acceptable confirmation level.
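
A first-order sketch of how one such reliability index can be computed: the maximum-D/t model is assumed here to take the common code form $0.15E/f_y$ (the paper's three models are not reproduced), and $\beta$ is taken as mean(g)/std(g) from Monte Carlo samples of the limit state g.

```python
import math
import random

def reliability_index(dt, fy_mean, fy_cov, e_mean, e_cov, n=200000, seed=7):
    """First-order reliability index beta = mean(g) / std(g) for the
    limit state g = 0.15 * E / fy - D/t, sampling fy and E as
    independent normals (an assumed maximum-D/t model, for illustration)."""
    random.seed(seed)
    g = [0.15 * random.gauss(e_mean, e_cov * e_mean)
         / random.gauss(fy_mean, fy_cov * fy_mean) - dt
         for _ in range(n)]
    mean = sum(g) / n
    var = sum((v - mean) ** 2 for v in g) / (n - 1)
    return mean / math.sqrt(var)
```

Swapping in a different maximum-D/t expression changes g, and hence $\beta$, which is exactly the modelling uncertainty the paper addresses.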

External Gravity Field in the Korean Peninsula Area (한반도 지역에서의 상층중력장)

  • Jung, Ae Young;Choi, Kwang-Sun;Lee, Young-Cheol;Lee, Jung Mo
    • Economic and Environmental Geology
    • /
    • v.48 no.6
    • /
    • pp.451-465
    • /
    • 2015
  • Free-air anomalies are computed using a data set drawn from various types of gravity measurements in the Korean Peninsula area; gravity values extracted from the Earth Gravitational Model 2008 are used in the surrounding region. The upward continuation technique suggested by Dragomir is used to compute the external free-air anomalies at various altitudes. An integration radius of 10 times the altitude is used to balance the accuracy of the results against computational resources, and the direct geodesic formula developed by Bowring is employed in the integration. At 1-km altitude, the free-air anomalies vary from -41.315 to 189.327 mGal with a standard deviation of 22.612 mGal; at 3-km altitude, from -36.478 to 156.209 mGal with a standard deviation of 20.641 mGal; and at 1,000-km altitude, from 3.170 to 5.864 mGal with a standard deviation of 0.670 mGal. The predicted free-air anomalies at 3-km altitude are compared to published free-air anomalies reduced from airborne gravity measurements at the same altitude. The rms difference is 3.88 mGal; considering the reported 2.21-mGal airborne gravity cross-over accuracy, this difference is not serious. Possible causes of the difference are external free-air anomaly simulation errors in this work and/or gravity reduction errors in the airborne data. The external gravity field is predicted by adding the external free-air anomaly to the normal gravity computed with the closed-form formula for gravity above and below the surface of the ellipsoid. The predicted external gravity field is expected to represent the real external gravity field reasonably well. This appears to be the first structured research on the external free-air anomaly in the Korean Peninsula area, and the resulting external gravity field can be used to improve the accuracy of inertial navigation systems.
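
The closed-form normal gravity on the ellipsoid surface is Somigliana's formula; the sketch below uses GRS80 constants and, as a simplification, the linear free-air gradient of about 0.3086 mGal/m above the ellipsoid rather than the full closed-form height dependence used in the paper.

```python
import math

GAMMA_E = 9.7803267715     # GRS80 equatorial normal gravity, m/s^2
K_SOMIG = 0.001931851353   # GRS80 Somigliana constant k
E2 = 0.00669438002290      # GRS80 first eccentricity squared

def normal_gravity(lat_deg, height_m=0.0):
    """Somigliana closed-form normal gravity on the GRS80 ellipsoid,
    gamma = gamma_e * (1 + k sin^2(phi)) / sqrt(1 - e^2 sin^2(phi)),
    reduced by the linear free-air gradient above the ellipsoid."""
    s2 = math.sin(math.radians(lat_deg)) ** 2
    gamma0 = GAMMA_E * (1 + K_SOMIG * s2) / math.sqrt(1 - E2 * s2)
    return gamma0 - 3.086e-6 * height_m   # m/s^2
```

Adding the upward-continued free-air anomaly to this normal gravity gives the predicted external gravity field.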

Speed-up Techniques for High-Resolution Grid Data Processing in the Early Warning System for Agrometeorological Disaster (농업기상재해 조기경보시스템에서의 고해상도 격자형 자료의 처리 속도 향상 기법)

  • Park, J.H.;Shin, Y.S.;Kim, S.K.;Kang, W.S.;Han, Y.K.;Kim, J.H.;Kim, D.J.;Kim, S.O.;Shim, K.M.;Park, E.W.
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.19 no.3
    • /
    • pp.153-163
    • /
    • 2017
  • The objective of this study is to improve the speed with which the model estimates the weather variables (e.g., minimum/maximum temperature, sunshine hours, and PRISM (Parameter-elevation Regression on Independent Slopes Model) based precipitation) that feed the Agrometeorological Early Warning System (http://www.agmet.kr). The current weather estimation process runs on high-performance multi-core CPUs with 8 physical cores and 16 logical threads. Nonetheless, the server is not dedicated to handling even a single county, indicating that very high overhead is involved in calculating the 10 counties of the Seomjin River Basin. To reduce this overhead, several caching and parallelization techniques were used to measure performance and check applicability. The results are as follows: (1) for simple calculations such as Growing Degree Days accumulation, the time required for input and output (I/O) greatly exceeds the calculation time, suggesting the need for techniques that reduce disk I/O bottlenecks; (2) when there are many I/O operations, it is advantageous to distribute them across several servers, but each server must have its own cache of input data so that the servers do not compete for the same resource; and (3) a GPU-based parallel processing method is most suitable for models with large computational loads, such as PRISM.
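
The Growing Degree Days accumulation in result (1) is indeed computationally trivial, which is why disk I/O dominates; a sketch follows, with a simple in-process cache standing in for the per-server input cache of result (2). The cache key and reader interface are hypothetical, not the system's.

```python
def growing_degree_days(tmin, tmax, base=10.0):
    """Accumulated GDD: sum over days of max(0, (Tmin + Tmax) / 2 - base)."""
    return sum(max(0.0, (lo + hi) / 2.0 - base) for lo, hi in zip(tmin, tmax))

_grid_cache = {}

def load_grid(path, reader):
    """Cache grid reads so repeated requests hit memory instead of disk,
    avoiding the I/O bottleneck for cheap per-cell calculations."""
    if path not in _grid_cache:
        _grid_cache[path] = reader(path)
    return _grid_cache[path]
```

With such a cache, each server reads a given input grid once per run, so the cheap GDD arithmetic is no longer stalled behind repeated disk reads.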