• Title/Summary/Keyword: Default factor


Influence of N Fertilization Level, Rainfall, and Temperature on the Emission of N2O in the Jeju Black Volcanic Ash Soil with Soybean Cultivation (콩 재배 화산회토양에서 질소시비 수준 및 강우, 온도 환경 변화에 따른 아산화질소 배출 특성)

  • Yang, Sang-Ho;Kang, Ho-Jun;Lee, Shin-Chan;Oh, Han-Jun;Kim, Gun-Yeob
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.45 no.3
    • /
    • pp.451-458
    • /
    • 2012
  • This study was conducted to investigate the factors influencing nitrous oxide ($N_2O$) emissions under changing nitrogen application levels, rainfall, and temperature during soybean cultivation on black volcanic ash soil from 2010 to 2011. During cultivation, $N_2O$ emissions increased with the amount of nitrogen fertilizer applied. Over the cultivation period, emissions were high during the early and middle stages, which coincided with heavy rainfall, but remained very low through the late stage and the dry season. $N_2O$ emissions were governed mainly by rainfall and soil water content. In 2010, the correlations ($r$) of $N_2O$ emissions with soil water, soil temperature, and soil EC were highly significant at $0.4591^{**}$, $0.6312^{**}$, and $0.3691^{**}$, respectively. In 2011, soil water remained highly significant at $0.4821^{**}$, but soil temperature and soil EC were not significant at 0.1646 and 0.1543, respectively. $NO_3$-N and total soil nitrogen ($NO_3$-N + $NH_4$-N) were also significant at $0.6902^{**}$ and $0.6277^*$, respectively, whereas $NH_4$-N was not significant at 0.1775. The two-year average emission factor for applied nitrogen fertilizer during soybean cultivation was estimated at 0.0202 kg $N_2O$-N kg$^{-1}$ N, about 2.8 times Japan's value (0.0073 kg $N_2O$-N kg$^{-1}$ N) and about 2 times the 2006 IPCC guideline default value (0.0100 kg $N_2O$-N kg$^{-1}$ N).
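The emission factor quoted in the abstract follows the standard IPCC Tier-1 definition: emitted $N_2O$-N from the fertilized plot, minus the control plot, per kilogram of fertilizer N applied. A minimal sketch of that arithmetic is below; the plot values are hypothetical and chosen only so that the result lands on the 0.0202 scale reported for the soybean field.

```python
# Sketch of the IPCC Tier-1 style emission factor (EF):
# EF = (fertilized-plot N2O-N - control-plot N2O-N) / N applied.
# The plot totals below are hypothetical, not the paper's raw data.

def emission_factor(n2o_n_fertilized_kg, n2o_n_control_kg, n_applied_kg):
    """kg of N2O-N emitted per kg of fertilizer N applied."""
    return (n2o_n_fertilized_kg - n2o_n_control_kg) / n_applied_kg

ef = emission_factor(n2o_n_fertilized_kg=3.06, n2o_n_control_kg=1.04,
                     n_applied_kg=100.0)
print(round(ef, 4))  # on the order of the 0.0202 two-year average reported
```

Comparing such a measured factor against the 0.0100 default is exactly the comparison the abstract makes in its final sentence.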

Influence of N Fertilization Level, Rainfall and Temperature on the Emission of N2O in the Jeju Black Volcanic Ash Soil with Potato Cultivation (감자 재배 화산회토양에서 질소시비 수준, 강우 및 온도 환경 변화에 따른 아산화질소 배출 특성)

  • Yang, Sang-Ho;Kang, Ho-Jun;Lee, Shin-Chan;Oh, Han-Jun;Kim, Gun-Yeob
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.45 no.4
    • /
    • pp.544-550
    • /
    • 2012
  • This study was conducted to investigate the factors influencing nitrous oxide ($N_2O$) emissions under changing nitrogen application levels, rainfall, and temperature during potato cultivation on black volcanic ash soil from 2010 to 2011. During cultivation, $N_2O$ emissions increased with the amount of nitrogen fertilizer applied. Over the cultivation period, emissions were high during the early and middle stages, which coincided with heavy rainfall, but remained very low through the late stage and the dry season. $N_2O$ emissions were governed mainly by rainfall and soil water content. In 2010, the correlations ($r$) of $N_2O$ emissions with soil water and soil temperature were highly significant at $0.6251^{**}$ and $0.6082^{**}$, respectively, but soil EC was not significant at 0.10824. In 2011, soil temperature was highly significant at $0.4879^{**}$, but soil water and soil EC were not significant at 0.0468 and 0.0400, respectively. $NH_4$-N was highly significant at $0.7476^{**}$, but $NO_3$-N and total soil nitrogen ($NO_3$-N + $NH_4$-N) were not significant at 0.0843 and 0.1797, respectively. The two-year average emission factor for applied nitrogen fertilizer during potato cultivation was estimated at 0.0040 kg $N_2O$-N kg$^{-1}$ N, about 2.5 times lower than the 2006 IPCC guideline default value (0.0100 kg $N_2O$-N kg$^{-1}$ N).
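The $r$ values quoted in both abstracts are ordinary Pearson correlations between the $N_2O$ flux series and each soil variable. A minimal self-contained sketch of that computation follows; the data points are hypothetical and only illustrate the strongly positive flux-vs-soil-water relationship the papers report.

```python
# Pearson's r between an N2O flux series and a soil variable.
# The six data points are hypothetical, not the papers' measurements.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

n2o_flux   = [0.8, 1.5, 2.9, 3.4, 1.1, 0.6]  # hypothetical flux units
soil_water = [18., 24., 31., 33., 21., 17.]  # hypothetical % water content
r = pearson_r(n2o_flux, soil_water)
print(round(r, 3))
```

The `**` marks in the abstracts flag correlations that passed a significance test (typically p < 0.01), which additionally depends on the sample size.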

Evaluation of Greenhouse Gas Emissions in Cropland Sector on Local Government Levels based on 2006 IPCC Guideline (2006 IPCC 가이드라인을 적용한 지자체별 경종부문 온실가스 배출량 평가)

  • Jeong, Hyun-Cheol;Kim, Gun-Yeob;Lee, Seul-Bi;Lee, Jong-Sik;Lee, Jung-Hwan;So, Kyu-Ho
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.45 no.5
    • /
    • pp.842-847
    • /
    • 2012
  • This study estimated greenhouse gas emissions in the cropland sector for 16 local governments from 1990 to 2010 using the 2006 IPCC guideline methodology. Emissions were calculated using default emission factors and scaling factors, with activity data taken from the food, agriculture, forestry and fisheries statistical yearbook of MIFAFF (Ministry for Food, Agriculture, Forestry, and Fisheries). Total cropland-sector emissions gradually decreased from 1990 to 2010 owing to declines in agricultural land area and nitrogen fertilizer use. The annual average emission was highest in Jeonnam (JN) at 1,698 Gg $CO_2$-eq, followed by Chungnam (CN), Gyungbuk (GB), Jeonbuk (JB), and Gyunggi (GG). The six highest-emitting local governments together accounted for 83.4% of total cropland-sector emissions. The annual average emissions for 1990 calculated with the 2006 IPCC guideline were approximately 43% lower than the national greenhouse gas inventory based on the 1996 IPCC guideline. Jeonnam (JN) province also ranked highest when emissions were broken down by gas type (methane, nitrous oxide, and carbon dioxide) and by emission source, such as rice cultivation, agricultural soils, field burning of crop residues, and urea fertilizer.
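The Gg $CO_2$-eq totals above aggregate the three gases with their global warming potentials (GWPs). The sketch below assumes the SAR GWPs (CH4 = 21, N2O = 310) used in Korea's earlier national inventories, and the per-gas amounts are hypothetical, chosen only to land on the 1,698 Gg scale reported for Jeonnam.

```python
# CO2-equivalent aggregation: each gas total (Gg) times its GWP.
# SAR GWPs are assumed here; the activity numbers are hypothetical.
GWP = {"CO2": 1, "CH4": 21, "N2O": 310}

def co2_eq(emissions_gg):
    """Sum per-gas emissions (Gg) into a single Gg CO2-eq total."""
    return sum(GWP[gas] * amount for gas, amount in emissions_gg.items())

province = {"CH4": 60.0, "N2O": 1.0, "CO2": 128.0}  # hypothetical Gg
print(co2_eq(province))  # 1698.0 Gg CO2-eq
```

Because CH4 and N2O carry large multipliers, rice cultivation (CH4) and fertilized soils (N2O) dominate cropland-sector totals even when their mass emissions are small.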

Understanding the Mismatch between ERP and Organizational Information Needs and Its Responses: A Study based on Organizational Memory Theory (조직의 정보 니즈와 ERP 기능과의 불일치 및 그 대응책에 대한 이해: 조직 메모리 이론을 바탕으로)

  • Jeong, Seung-Ryul;Bae, Uk-Ho
    • Asia pacific journal of information systems
    • /
    • v.22 no.2
    • /
    • pp.21-38
    • /
    • 2012
  • Until recently, successful implementation of ERP systems has been a popular topic among ERP researchers, who have attempted to identify its various contributing factors. None of these efforts, however, explicitly recognizes the need to identify disparities that can exist between organizational information requirements and ERP systems. Since ERP systems are in fact "packages" (that is, software programs developed by independent software vendors for sale to the organizations that use them), they are designed to meet the general needs of numerous organizations, rather than the unique needs of a particular organization, as is the case with custom-developed software. By adopting standard packages, organizations can substantially reduce many of the potential implementation risks commonly associated with custom-developed software. However, it is also true that the nature of the package itself could be a risk factor, as the features and functions of the ERP system may not completely match a particular organization's informational requirements. In this study, based on the organizational memory mismatch perspective derived from organizational memory theory and cognitive dissonance theory, we define the nature of these disparities, which we call "mismatches," and propose that the mismatch between organizational information requirements and ERP systems is one of the primary determinants of successful ERP implementation. Furthermore, we suggest that customization efforts, as a coping strategy for mismatches, can play a significant role in increasing the possibility of success. In order to examine the contention we propose in this study, we employed a survey-based field study of ERP project team members, resulting in a total of 77 responses.
The results of this study show that, as anticipated from the organizational memory mismatch perspective, the mismatch between organizational information requirements and ERP systems makes a significantly negative impact on the implementation success of ERP systems. This finding confirms our hypothesis that the more mismatch there is, the more difficult successful ERP implementation is, and thus requires more attention to be drawn to mismatch as a major failure source in ERP implementation. This study also found that as a coping strategy on mismatch, the effects of customization are significant. In other words, utilizing the appropriate customization method could lead to the implementation success of ERP systems. This is somewhat interesting because it runs counter to the argument of some literature and ERP vendors that minimized customization (or even the lack thereof) is required for successful ERP implementation. In many ERP projects, there is a tendency among ERP developers to adopt default ERP functions without any customization, adhering to the slogan of "the introduction of best practices." However, this study asserts that we cannot expect successful implementation if we don't attempt to customize ERP systems when mismatches exist. For a more detailed analysis, we identified three types of mismatches-Non-ERP, Non-Procedure, and Hybrid. Among these, only Non-ERP mismatches (a situation in which ERP systems cannot support the existing information needs that are currently fulfilled) were found to have a direct influence on the implementation of ERP systems. Neither Non-Procedure nor Hybrid mismatches were found to have significant impact in the ERP context. These findings provide meaningful insights since they could serve as the basis for discussing how the ERP implementation process should be defined and what activities should be included in the implementation process. 
They show that ERP developers may not want to include organizational (or business process) changes in the implementation process, suggesting that doing so could lead to failed implementation. And in fact, this suggestion eventually turned out to be true when we found that the application of process customization led to higher possibilities of failure. From these discussions, we are convinced that Non-ERP is the only type of mismatch we need to focus on during the implementation process, implying that organizational changes must be made before, rather than during, the implementation process. Finally, this study found that among the various customization approaches, bolt-on development methods in particular seemed to have significantly positive effects. Interestingly again, this finding is not in the same line of thought as that of the vendors in the ERP industry. The vendors' recommendations are to apply as many best practices as possible, thereby minimizing customization, and to rely on bolt-on development methods. They particularly advise against changing the source code and rather recommend employing, when necessary, the method of programming additional software code using the computer language of the vendor. As previously stated, however, our study found active customization, especially bolt-on development methods, to have positive effects on ERP, and found source code changes in particular to have the most significant effects. Moreover, our study found programming additional software to be ineffective, suggesting there is much difference between ERP developers and vendors in viewpoints and strategies toward ERP customization. In summary, mismatches are inherent in the ERP implementation context and play an important role in determining its success.
Considering the significance of mismatches, this study proposes a new model for successful ERP implementation, developed from the organizational memory mismatch perspective, and provides many insights by empirically confirming the model's usefulness.


The Effect of Accounts Receivable Management on Business Performance & Organizational Satisfaction: Focused on Micro Manufacturing Industries (매출채권관리가 재무적 경영성과와 조직만족에 미치는 영향: 도시형소공인을 중심으로)

  • Lee, Jong Gab;Ha, Kyu Soo
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.12 no.6
    • /
    • pp.13-24
    • /
    • 2017
  • The purpose of this study is to examine the effect of accounts receivable management on the business performance of micro manufacturing firms. The survey results are as follows. First, among the pre- and post-transaction receivables management factors, the subordinate factors of proactive credit control (management organization and regulations, contract execution management, and bad debt control) had a significant positive (+) effect on stability, and management organization and regulations also had a significant positive (+) effect on profitability. Recovery management, a factor of post-transaction receivables management, had no significant effect on either the stability or the profitability dimensions of financial performance. Second, regarding the effect of financial performance on organizational satisfaction, stability had a significant positive effect, whereas profitability did not. The implication of this study is that pre-transaction receivables management is more important than post-transaction management in the accounts receivable management of micro manufacturers. Proactive credit management refers to the procedures carried out before and at the conclusion of a contract, such as credit investigation, analysis and evaluation, and the sales decision, together with establishing and managing personal and physical guarantees so that obligations can be discharged smoothly. Post-transaction receivables management, which presumes default, covers the procedures from receipt of already-defaulted receivables through bad-debt processing. If collection is delayed or bad debts grow, a firm may even face bankruptcy risk (insolvency despite paper profits). This study is therefore meaningful in that it suggests a direction for inducing changes in contract terms in advance, by assessing the likelihood of settling receivables and recovering bad debts at the time a micro manufacturer concludes a contract.


Corporate Bond Rating Using Various Multiclass Support Vector Machines (다양한 다분류 SVM을 적용한 기업채권평가)

  • Ahn, Hyun-Chul;Kim, Kyoung-Jae
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.157-178
    • /
    • 2009
  • Corporate credit rating is a very important factor in the market for corporate debt. Information concerning corporate operations is often disseminated to market participants through the changes in credit ratings that are published by professional rating agencies, such as Standard and Poor's (S&P) and Moody's Investor Service. Since these agencies generally require a large fee for the service, and the periodically provided ratings sometimes do not reflect the default risk of the company at the time, it may be advantageous for bond-market participants to be able to classify credit ratings before the agencies actually publish them. As a result, it is very important for companies (especially, financial companies) to develop a proper model of credit rating. From a technical perspective, the credit rating constitutes a typical, multiclass, classification problem because rating agencies generally have ten or more categories of ratings. For example, S&P's ratings range from AAA for the highest-quality bonds to D for the lowest-quality bonds. The professional rating agencies emphasize the importance of analysts' subjective judgments in the determination of credit ratings. However, in practice, a mathematical model that uses the financial variables of companies plays an important role in determining credit ratings, since it is convenient to apply and cost efficient. These financial variables include the ratios that represent a company's leverage status, liquidity status, and profitability status. Several statistical and artificial intelligence (AI) techniques have been applied as tools for predicting credit ratings. Among them, artificial neural networks are most prevalent in the area of finance because of their broad applicability to many business problems and their preeminent ability to adapt. 
However, artificial neural networks also have many defects, including the difficulty of determining the values of the control parameters and the number of processing elements in each layer, as well as the risk of over-fitting. Of late, because of their robustness and high accuracy, support vector machines (SVMs) have become popular as a solution for problems requiring accurate prediction. An SVM's solution may be globally optimal because SVMs seek to minimize structural risk. On the other hand, artificial neural network models may tend to find locally optimal solutions because they seek to minimize empirical risk. In addition, no parameters need to be tuned in SVMs, barring the upper bound for non-separable cases in linear SVMs. Since SVMs were originally devised for binary classification, however, they are not intrinsically geared for multiclass classification as in credit ratings. Thus, researchers have tried to extend the original SVM to multiclass classification. Hitherto, a variety of techniques to extend standard SVMs to multiclass SVMs (MSVMs) have been proposed in the literature. However, only a few types of MSVM have been tested in prior studies that apply MSVMs to credit ratings. In this study, we examined six different MSVM techniques: (1) One-Against-One, (2) One-Against-All, (3) DAGSVM, (4) ECOC, (5) the method of Weston and Watkins, and (6) the method of Crammer and Singer. In addition, we examined the prediction accuracy of some modified versions of conventional MSVM techniques. To find the most appropriate MSVM technique for corporate bond rating, we applied all of the techniques to a real-world case of credit rating in Korea. Corporate bond rating is the most frequently studied area of credit rating for specific debt issues or other financial obligations. For our study the research data were collected from National Information and Credit Evaluation, Inc., a major bond-rating company in Korea.
The data set is comprised of the bond-ratings for the year 2002 and various financial variables for 1,295 companies from the manufacturing industry in Korea. We compared the results of these techniques with one another, and with those of traditional methods for credit ratings, such as multiple discriminant analysis (MDA), multinomial logistic regression (MLOGIT), and artificial neural networks (ANNs). As a result, we found that DAGSVM with an ordered list was the best approach for the prediction of bond rating. In addition, we found that the modified version of ECOC approach can yield higher prediction accuracy for the cases showing clear patterns.
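Two of the MSVM strategies compared in the study, One-Against-One and One-Against-All, can be sketched in a few lines with scikit-learn (not the paper's own toolkit). The synthetic four-class dataset below stands in for the Korean bond-rating data, which are not public.

```python
# Hedged sketch of two multiclass-SVM strategies on synthetic data:
# One-Against-One trains a binary SVM per class pair; One-Against-All
# trains one binary SVM per class against the rest.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("one-vs-one", OneVsOneClassifier(SVC(kernel="rbf"))),
                  ("one-vs-all", OneVsRestClassifier(SVC(kernel="rbf")))]:
    acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: {acc:.3f}")
```

DAGSVM, the study's best performer, also builds pairwise classifiers but routes a test sample down a directed acyclic graph of those classifiers at prediction time instead of voting.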

Assessment of Inhalation Dose Sensitivity by Physicochemical Properties of Airborne Particulates Containing Naturally Occurring Radioactive Materials (천연방사성물질을 함유한 공기 중 부유입자 흡입 시 입자의 물리화학적 특성에 따른 호흡방사선량 민감도 평가)

  • Kim, Si Young;Choi, Cheol Kyu;Park, Il;Kim, Yong Geon;Choi, Won Chul;Kim, Kwang Pyo
    • Journal of Radiation Protection and Research
    • /
    • v.40 no.4
    • /
    • pp.216-222
    • /
    • 2015
  • Facilities processing raw materials containing naturally occurring radioactive materials (NORM) may give rise to enhanced radiation doses to workers due to chronic inhalation of airborne particulates. Internal radiation dose from particulate inhalation varies with particulate properties, including size, shape, density, and absorption type. The objective of the present study was to assess the sensitivity of inhalation dose to the physicochemical properties of airborne particulates. Committed effective doses to workers resulting from inhalation of airborne particulates were calculated with the International Commission on Radiological Protection (ICRP) Publication 66 human respiratory tract model. Inhalation dose generally increased with decreasing particulate size: committed effective doses from $0.01{\mu}m$ particulates were higher than those from $100{\mu}m$ particulates by factors of about 100 and 50 for $^{238}U$ and $^{230}Th$, respectively. Inhalation dose also increased with decreasing shape factor; shape factors of 1 and 2 produced a dose difference of about 18%. Inhalation dose increased with particulate mass density; densities of $11g{\cdot}cm^{-3}$ and $0.7g{\cdot}cm^{-3}$ produced a dose difference of about 60%. For $^{238}U$, inhalation doses were higher for absorption types S, M, and F, in that order; the committed effective dose for absorption type S was about 9 times higher than that for type F. For $^{230}Th$, inhalation doses were higher for absorption types F, M, and S, in that order; the committed effective dose for type F was about 16 times higher than that for type S. Consequently, using default values for particulate properties without considering site-specific physicochemical properties may skew radiation dose estimates by as much as 1-2 orders of magnitude. It is therefore highly recommended to consider site-specific working materials and conditions, and to use site-specific particulate properties, in order to accurately assess radiation doses to workers at NORM processing facilities.

A study on the effect of collimator angle on PAN-Pelvis volumetric modulated arc therapy (VMAT) including junction (접합부를 포함한 PAN-전골반암 VMAT 치료 계획 시 콜리메이터 각도의 영향에 관한 고찰)

  • Kim, Hyeon Yeong;Chang, Nam Jun;Jung, Hae Youn;Jeong, Yun Ju;Won, Hui Su;Seok, Jin Yong
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.32
    • /
    • pp.61-71
    • /
    • 2020
  • Purpose: To investigate the effect of collimator angle on the plan quality of multi-isocenter PAN-pelvis VMAT plans, on dose reproducibility at the junction, and on the impact of set-up errors at the junction. Materials and methods: Ten adult patients with whole-pelvis cancer including the para-aortic nodes (PAN) were selected. Using a TrueBeam STx equipped with HD MLC, four plans were generated per patient in Eclipse (version 13.7) at collimator angles of 10° (the default), 20°, 30°, and 45°, with all other treatment conditions kept identical. Plans were evaluated by comparing PTV coverage, the coverage index (CVI), and the homogeneity index (HI), and by analyzing clinical indicators for the normal tissues at each treatment site. To evaluate dose reproducibility at the junction, the absolute dose was measured with a Farmer-type ionization chamber, and junction dose changes were evaluated by shifting the isocenter inward and outward by 1-3 mm with a virtual volume set up at the junction. Results: The mean CVI was closest to 1 at 45° (PTV-45: 0.985±0.004; PTV-55: 0.998±0.003), as was the mean HI (PTV-45: 1.140±0.074; PTV-55: 1.031±0.074). For the critical organs, kidney V20Gy decreased by 9.66%, and bladder mean dose and V30 decreased by 1.88% and 2.16%, respectively, at 45° compared with 10°. Planned and measured junction doses agreed within 0.3%, within tolerance. With simulated set-up error at the junction, the maximum dose increased by 14.56%, 9.88%, 8.03%, and 7.05%, and the minimum dose decreased by 13.18%, 10.91%, 8.42%, and 4.53%, at 10°, 20°, 30°, and 45°, respectively. Conclusion: PTV CVI and HI and critical-organ sparing generally improved as the collimator angle increased, and the dosimetric impact of junction set-up errors decreased with increasing angle, easing concerns about set-up error. The collimator angle should therefore be recognized as a factor that can affect both the quality of a multi-isocenter VMAT plan and the dose at the junction, and it should be chosen carefully in treatment planning.
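The two plan-quality indices compared above can be computed from the PTV dose distribution. Definitions vary across the literature; the sketch below assumes one common convention (CVI as the fraction of the target receiving at least the prescription dose, HI as the maximum target dose over the prescription dose, both approaching 1 for an ideal plan), which may differ from the paper's exact formulas. The voxel doses are hypothetical.

```python
# Hedged sketch of the coverage index (CVI) and homogeneity index (HI),
# under one common convention; voxel doses below are hypothetical.

def cvi(voxel_doses_gy, rx_gy):
    """Fraction of target voxels receiving at least the prescription dose."""
    covered = sum(1 for d in voxel_doses_gy if d >= rx_gy)
    return covered / len(voxel_doses_gy)

def hi(voxel_doses_gy, rx_gy):
    """Maximum target dose relative to the prescription dose."""
    return max(voxel_doses_gy) / rx_gy

ptv = [45.2, 45.9, 46.1, 45.0, 44.8, 46.4, 45.5, 45.7]  # hypothetical Gy
print(round(cvi(ptv, 45.0), 3), round(hi(ptv, 45.0), 3))
```

Values of both indices near 1, as reported for the 45° plans, indicate that the target is fully covered without hot spots.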

Changes in Meteorological Variables by SO2 Emissions over East Asia using a Linux-based U.K. Earth System Model (리눅스 기반 U.K. 지구시스템모형을 이용한 동아시아 SO2 배출에 따른 기상장 변화)

  • Youn, Daeok;Song, Hyunggyu;Lee, Johan
    • Journal of the Korean earth science society
    • /
    • v.43 no.1
    • /
    • pp.60-76
    • /
    • 2022
  • This study presents the full software setup of the United Kingdom Earth System Model (UKESM) in a Linux cluster, reports the subsequent test execution times, and compares control and experimental UKESM simulations against various observations. Despite its low resolution, the latest version of the UKESM can simulate tropospheric chemistry-aerosol processes and stratospheric ozone chemistry using the United Kingdom Chemistry and Aerosol (UKCA) module; the UKESM with UKCA (UKESM-UKCA) can treat atmospheric chemistry-aerosol-cloud-radiation interactions throughout the whole atmosphere. In addition to the control run with the default CMIP5 SO2 emission dataset, an experimental run was conducted to evaluate aerosol effects on meteorology by changing the atmospheric SO2 loading over East Asia with the newest REAS data. Both simulations covered 28 years, from January 1, 1982 to December 31, 2009. Spatial distributions of monthly mean aerosol optical depth, 2-m temperature, and precipitation intensity from the simulations were compared with observations over East Asia. The simulated spatial patterns of surface temperature and precipitation were generally in reasonable agreement with the observations, and the simulated ozone concentration and total column ozone also agreed reasonably with the ERA5 reanalysis. Comparisons of spatial patterns and linear trends showed that the simulation with the newest East Asian SO2 emission dataset better reproduced temporal changes in temperature and precipitation over the western Pacific and inland China. These results are in line with previous findings that SO2 emissions over East Asia are an important factor for the atmospheric environment and climate change. This study confirms that the UKESM can be installed and operated in a Linux cluster-computing environment, giving researchers in various fields better access to a model that handles the carbon cycle and the atmospheric environment with interactions between the atmosphere, ocean, sea ice, and land.

A Study on the Use of GIS-based Time Series Spatial Data for Streamflow Depletion Assessment (하천 건천화 평가를 위한 GIS 기반의 시계열 공간자료 활용에 관한 연구)

  • YOO, Jae-Hyun;KIM, Kye-Hyun;PARK, Yong-Gil;LEE, Gi-Hun;KIM, Seong-Joon;JUNG, Chung-Gil
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.21 no.4
    • /
    • pp.50-63
    • /
    • 2018
  • Rapid urbanization has distorted the natural hydrological cycle, and the resulting changes in its structure are causing streamflow depletion and shifting existing patterns of water resource use. Managing this phenomenon requires an impact assessment technology that can forecast streamflow depletion, which in turn depends on GIS-based spatial data as fundamental input; however, related research is scarce. This study therefore examined the use of GIS-based time series spatial data for streamflow depletion assessment. Nationwide GIS data covering several decades of change were constructed for six streamflow depletion impact factors (weather, soil depth, forest density, road network, groundwater usage, and land use) and used as the basic input for a continuous hydrologic model. The causes of streamflow depletion were analyzed over time with respect to these factors, and annual runoff was simulated for each impact factor with DrySAT, a distributed continuous hydrologic model, to conduct the depletion assessment. Under the given weather conditions alone, without considering the other factors, the baseline annual runoff was 977.9 mm. When additionally considering the decrease in soil depth, the increase in forest density, road development, groundwater usage, and land use and development change, annual runoff was 1,003.5 mm, 942.1 mm, 961.9 mm, 915.5 mm, and 1,003.7 mm, respectively.
The results indicate the major causes of streamflow depletion: reduced soil depth decreases infiltration and surface runoff, thereby decreasing streamflow; increased forest density decreases surface runoff; a denser road network decreases sub-surface flow; increased groundwater use from indiscriminate development decreases baseflow; and expanded impervious areas increase surface runoff. Each standard watershed was also graded for depletion according to the definition of streamflow depletion and the grade ranges: considering weather, the decrease in soil depth, the increase in forest density, road development, groundwater usage, and land use and development change, the depletion grades were 2.1, 2.2, 2.5, 2.3, 2.8, and 2.2, respectively. Among the five impact factors other than rainfall, the change in groundwater usage had the largest influence on depletion, followed by forest density, road construction, land use, and soil depth. A national streamflow depletion assessment system to be developed in the future is expected to provide customized depletion management and prevention plans based on assessments of future changes in the six impact factors and of the projected progress of depletion.
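The factor-by-factor comparison above amounts to measuring each simulated annual runoff against the weather-only baseline of 977.9 mm. The sketch below uses the runoff values reported in the abstract to express each factor's effect as a percentage change, which makes the ranking (groundwater usage largest) immediately visible.

```python
# Relative change in simulated annual runoff per impact factor,
# using the DrySAT results reported in the abstract (mm per year).
baseline_mm = 977.9
runoff_mm = {
    "soil depth decrease": 1003.5,
    "forest density increase": 942.1,
    "road development": 961.9,
    "groundwater usage": 915.5,
    "land-use change": 1003.7,
}

for factor, mm in runoff_mm.items():
    change_pct = 100.0 * (mm - baseline_mm) / baseline_mm
    print(f"{factor}: {change_pct:+.1f}% vs. baseline")
```

Groundwater usage shows the largest deviation from the baseline (roughly a 6% reduction), consistent with the abstract's conclusion that it is the dominant depletion driver.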