• Title/Summary/Keyword: distribution management model


Development of Stand Yield Table Based on Current Growth Characteristics of Chamaecyparis obtusa Stands (현실임분 생장특성에 의한 편백 임분수확표 개발)

  • Jung, Su Young;Lee, Kwang Soo;Lee, Ho Sang;Ji Bae, Eun;Park, Jun Hyung;Ko, Chi-Ung
    • Journal of Korean Society of Forest Science
    • /
    • v.109 no.4
    • /
    • pp.477-483
    • /
    • 2020
  • We constructed a stand yield table for Chamaecyparis obtusa based on data from actual forests. The previous stand yield table had a number of disadvantages because it was not based on current actual-forest information. In the present study we used data from more than 200 sampling plots in Chamaecyparis obtusa stands. The analysis included the estimation, recovery, and prediction of the distribution of values for diameter at breast height (DBH), a process that is valuable for the preparation of stand yield tables. The DBH distribution model uses a Weibull function, and the site index (base age: 30 years), the standard for assessing forest productivity, was derived using the Chapman-Richards formula. Several estimation formulas for the preparation of the stand yield table were compared by fitness index, and the optimal formula was chosen. The analysis shows that the site index is in the range of 10 to 18 in the Chamaecyparis obtusa stands. The estimated stand volume of each sample plot was found to have an accuracy of 62%. The residual analysis showed an even distribution around zero, which indicates that the results are useful in the field. Comparing the table constructed in this study to the existing stand yield table, we found that our table yielded comparatively higher values for growth. This is probably because the existing table was based on a small amount of research data that did not properly reflect actual stand conditions. We hope that the stand yield table of Chamaecyparis obtusa, a representative species of the southern regions, will be widely used for forest management. As these forests stabilize and growth progresses, we plan to construct an additional yield table applicable to the production of developed stands.
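As an illustrative sketch (not the paper's fitted model), the two functions named in the abstract — the Chapman-Richards growth curve used to derive the site index and the Weibull density used for the DBH distribution — can be written as follows; all parameter values below are assumptions for demonstration only:

```python
import math

def chapman_richards(age, asymptote, rate, shape):
    """Chapman-Richards growth curve: H(t) = A * (1 - exp(-k*t))**m."""
    return asymptote * (1.0 - math.exp(-rate * age)) ** shape

def weibull_pdf(dbh, scale, shape, location=0.0):
    """Three-parameter Weibull density, a common choice for DBH distributions."""
    x = dbh - location
    if x <= 0:
        return 0.0
    return (shape / scale) * (x / scale) ** (shape - 1.0) * math.exp(-(x / scale) ** shape)

# Site index is stand height at the base age (30 years in the study);
# the parameter values here are purely illustrative.
site_index = chapman_richards(30, asymptote=18.0, rate=0.05, shape=1.3)
```

With these illustrative parameters the height at age 30 lands inside the 10-18 site-index range reported in the abstract, which is the only property the sketch is meant to show.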

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.83-102
    • /
    • 2021
  • The government recently announced various policies for developing the big-data and artificial intelligence fields, providing the public with a great opportunity through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea, and the company is strongly committed to backing export companies with various systems. Nevertheless, there are still few realized business models based on big-data analyses. In this situation, this paper aims to develop a new business model that can be applied to ex-ante prediction of the likelihood of credit guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, and apply machine learning models. We then compare the performance of predictive models including logistic regression, Random Forest, XGBoost, LightGBM, and a DNN (deep neural network). For decades, many researchers have tried to find better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. The prediction of financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), which is based on multiple discriminant analysis and remains widely used in both research and practice. The model uses five key financial ratios to predict the probability of bankruptcy within the next two years. Ohlson (1980) introduced a logit model to complement some limitations of previous models. Furthermore, Elmer and Borowski (1988) developed and examined a rule-based, automated system that conducts financial analysis of savings and loans. 
Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy. Kim (1987) analyzed financial ratios and developed a prediction model. Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques, including artificial neural networks. Yang (1996) introduced multiple discriminant analysis and a logit model, and Kim and Kim (2001) utilized artificial neural network techniques for ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with diverse models such as Random Forest or SVM. One major distinction of our research from previous work is that we focus on the predicted probability of default for each sample case, not only on the classification accuracy of each model over the entire sample. Most predictive models in this paper achieve a classification accuracy of about 70% on the entire sample: the LightGBM model shows the highest accuracy at 71.1% and the logit model the lowest at 69%. However, these results are open to multiple interpretations. In the business context, more emphasis should be placed on minimizing type 2 errors, which cause more harmful operating losses for the guaranty company. Thus, we also compare classification accuracy after splitting the predicted probability of default into ten equal intervals. When we examine the classification accuracy for each interval, the logit model has the highest accuracy, 100%, for the 0~10% interval of predicted default probability, but a relatively low accuracy of 61.5% for the 90~100% interval. 
On the other hand, Random Forest, XGBoost, LightGBM, and the DNN show more desirable results, since they achieve higher accuracy for both the 0~10% and 90~100% intervals of predicted default probability but lower accuracy around the 50% interval. With regard to the distribution of samples across predicted default probabilities, both the LightGBM and XGBoost models place a relatively large number of samples in the 0~10% and 90~100% intervals. Although the Random Forest model has an advantage in classification accuracy for a small number of cases, LightGBM or XGBoost could be more desirable models, since they classify a large number of cases into the two extreme intervals of predicted default probability, even allowing for their relatively lower classification accuracy there. Considering the importance of type 2 errors and total prediction accuracy, XGBoost and the DNN show superior performance; Random Forest and LightGBM show good results, while logistic regression shows the worst performance. However, each predictive model has a comparative advantage under different evaluation standards. For instance, the Random Forest model shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, we can construct more comprehensive ensemble models that contain multiple classification machine learning models and conduct majority voting to maximize overall performance.
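The per-interval comparison the abstract describes — splitting predicted default probabilities into ten equal bins and computing classification accuracy within each — can be sketched in a few lines; this is a generic illustration of the procedure, not the authors' code, and the 0.5 decision threshold is an assumption:

```python
def interval_accuracy(y_true, y_prob, n_bins=10):
    """Classification accuracy within equal-width bins of predicted default
    probability (the abstract's 0~10%, ..., 90~100% intervals).
    Decision rule assumed here: predict default (1) iff probability >= 0.5.
    Returns {(lo, hi): (accuracy, n_cases)} for each non-empty bin."""
    bins = [[] for _ in range(n_bins)]
    for y, p in zip(y_true, y_prob):
        idx = min(int(p * n_bins), n_bins - 1)   # p == 1.0 falls in the last bin
        bins[idx].append((y, p))
    report = {}
    for i, members in enumerate(bins):
        if not members:
            continue
        correct = sum(1 for y, p in members if (p >= 0.5) == (y == 1))
        report[(i / n_bins, (i + 1) / n_bins)] = (correct / len(members), len(members))
    return report
```

Running each fitted model's predicted probabilities through such a function yields exactly the kind of per-interval table the abstract compares (e.g. logit: 100% in the 0~10% bin, 61.5% in the 90~100% bin).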

Human Health Risk, Environmental and Economic Assessment Based on Multimedia Fugacity Model for Determination of Best Available Technology (BAT) for VOC Reduction in Industrial Complex (산업단지 VOC 저감 최적가용기법(BAT) 선정을 위한 다매체 거동모델 기반 인체위해성·환경성·경제성 평가)

  • Kim, Yelin;Rhee, Gahee;Heo, Sungku;Nam, Kijeon;Li, Qian;Yoo, ChangKyoo
    • Korean Chemical Engineering Research
    • /
    • v.58 no.3
    • /
    • pp.325-345
    • /
    • 2020
  • A methodology for determining the best available technology (BAT) to reduce volatile organic compounds (VOCs) in a petrochemical industrial complex is suggested, based on human health risk, environmental, and economic assessments built on a multimedia fugacity model. The fate and distribution of benzene, toluene, ethylbenzene, and xylene (BTEX), which represent the VOCs emitted from the industrial complex in U-city, were predicted by the multimedia fugacity model. Media-integrated human health risk assessment and sensitivity analysis were conducted to predict the human health risk of BTEX and to identify the critical variables with adverse effects on human health. In addition, environmental and economic assessments were conducted to determine the BAT for VOC reduction. BTEX remained mainly in the soil medium (60%, 61%, 64%, and 63%, respectively), and xylene showed the highest proportion of BTEX in each environmental medium. From the BAT candidates, absorption was excluded due to its high human health risk. Moreover, the sensitivity analysis identified that the half-life and the exposure coefficient of each exposure route are highly correlated with human health risk. Lastly, considering the environmental and economic assessments, regenerative thermal oxidation, regenerative catalytic oxidation, bio-filtration, UV oxidation, and activated carbon adsorption were determined as BAT for reducing VOCs in the petrochemical industrial complex. The suggested media-integrated BAT determination methodology can contribute to the application of BAT in the workplace to efficiently manage discharge facilities and operate an integrated environmental management system.
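To illustrate how a multimedia fugacity model partitions a chemical among environmental media, here is the simplest (Level I, equilibrium) member of Mackay's fugacity-model family. The study's dynamic media model is more elaborate than this, and the volumes and Z-values below are toy assumptions, chosen only so the soil fraction comes out near the ~60% the abstract reports:

```python
def level_one_fugacity(total_mol, volumes, z_values):
    """Mackay Level-I fugacity model: at equilibrium one fugacity f applies to
    all compartments, f = M / sum(V_i * Z_i), and the amount held in
    compartment i is f * V_i * Z_i.
    volumes: m^3 per compartment; z_values: fugacity capacities, mol/(m^3*Pa)."""
    vz = {k: volumes[k] * z_values[k] for k in volumes}
    f = total_mol / sum(vz.values())              # common fugacity, Pa
    amounts = {k: f * vz[k] for k in vz}          # mol in each compartment
    fractions = {k: amounts[k] / total_mol for k in vz}
    return f, amounts, fractions

# Toy three-compartment world (all numbers illustrative, not from the paper):
f, amounts, fractions = level_one_fugacity(
    100.0,
    volumes={"air": 1e10, "water": 2e8, "soil": 9e6},
    z_values={"air": 4e-4, "water": 1e-2, "soil": 1.0},
)
```

The mass fractions always sum to one, and with these assumed capacities the soil compartment dominates, mirroring the qualitative finding for BTEX in the abstract.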

Impacts of Three-dimensional Land Cover on Urban Air Temperatures (도시기온에 작용하는 입체적 토지피복의 영향)

  • Jo, Hyun-Kil;Ahn, Tae-Won
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.37 no.3
    • /
    • pp.54-60
    • /
    • 2009
  • The purpose of this study is to analyze the impacts of three-dimensional land cover on urban air temperatures and to explore strategies of urban landscaping for mitigating heat build-up. This study located study spaces within a diameter of 300 m around 24 Automatic Weather Stations (AWS) in Seoul and collected data on diverse variables that could affect summer energy budgets and air temperatures. The study also selected, reflecting the study objectives, six smaller-scale spaces with a diameter of 30 m in Chuncheon, and measured summer air temperatures and three-dimensional land cover to compare their relationships with the results from Seoul's AWS. Linear regression models derived from the data of Seoul's AWS revealed that vegetation volume, greenspace area, building volume, building area, population density, and pavement area contributed to statistically significant changes in summer air temperatures. Of these variables, vegetation and building volume accounted for the largest share of the total variability in air temperatures. Multiple regression models derived from combinations of the significant variables also showed that vegetation and building volume together generated the model with the best fit. Based on this multiple regression model, a 10% increase in vegetation volume decreased air temperatures by approximately 0.14%, while a 10% increase in building volume raised them by 0.26%. Relationships between Chuncheon's summer air temperatures and land cover distribution for the smaller-scale spaces also showed that air temperatures were negatively correlated with vegetation volume and greenspace area, and positively correlated with hardscape area. Similarly to the case of Seoul's AWS, the air temperatures for the smaller-scale spaces decreased by 0.32% ($0.08^{\circ}C$) as vegetation volume increased by 10%, based on the most appropriate linear model. 
Thus, urban landscaping to reduce summer air temperatures requires strategies that improve vegetation volume while simultaneously decreasing building volume. For Seoul's AWS, the impact of building volume on air temperatures was about two times greater than that of vegetation volume. Wall and rooftop greening for shading and evapotranspiration is suggested to control atmospheric heating by three-dimensional building surfaces, together with enlarging vegetation volume through multilayered plantings on soil surfaces.
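The abstract's regression result can be turned into a back-of-the-envelope calculator. The two elasticities below (-0.14% per +10% vegetation volume, +0.26% per +10% building volume) come directly from the abstract; the assumption that the responses scale linearly and add is mine, for illustration only:

```python
def temp_change_percent(d_veg_pct, d_bldg_pct,
                        veg_elasticity=-0.14, bldg_elasticity=0.26):
    """Percent change in summer air temperature implied by the Seoul
    multiple-regression model reported in the abstract: +10% vegetation
    volume lowers temperature ~0.14%, +10% building volume raises it ~0.26%.
    Linearity and additivity of the two effects are assumed here."""
    return (d_veg_pct / 10.0) * veg_elasticity + (d_bldg_pct / 10.0) * bldg_elasticity

# e.g. a district plan with +20% vegetation volume and -10% building volume:
delta = temp_change_percent(20, -10)   # roughly -0.28 + -0.26 = -0.54 (%)
```

This kind of first-order estimate is what makes the paper's "increase vegetation, decrease building volume" recommendation quantitative.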

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations in future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types, Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. 
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. 
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted by using both 16 selected links and 32 selected links. The result of SELINK analysis by using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments, that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. But more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used for evaluation of the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 by using 32 selected links and 1.001 by using 16 selected links are obtained. 
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of %RMSE for the four screenlines resulting from the fourth and last GM run by using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible by using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not provide any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the least value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. 
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production trip rate (total adjusted productions/total population) and a new trip attraction trip rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimate of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not available currently, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8 while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. 
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT that is forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results by using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates including information about changes in the trip types using key routes. 
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
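The two core computations in this abstract — a calibrated gravity model distributing trips with friction factors, and the SELINK link adjustment factor (ground count over assigned volume) — can be sketched generically. This is a textbook doubly-constrained gravity model with iterative (Furness) balancing, not the study's actual implementation:

```python
def gravity_trips(productions, attractions, friction, n_iter=50):
    """Doubly-constrained gravity model: T_ij = a_i * b_j * P_i * A_j * F_ij,
    with balancing factors a_i, b_j found by iterative (Furness) adjustment so
    row sums match zonal productions and column sums match zonal attractions.
    friction[i][j] plays the role of the calibrated friction factor curve
    evaluated at the i->j impedance."""
    n, m = len(productions), len(attractions)
    a = [1.0] * n
    b = [1.0] * m
    for _ in range(n_iter):
        for i in range(n):
            s = sum(b[j] * attractions[j] * friction[i][j] for j in range(m))
            a[i] = 1.0 / s if s else 0.0
        for j in range(m):
            s = sum(a[i] * productions[i] * friction[i][j] for i in range(n))
            b[j] = 1.0 / s if s else 0.0
    return [[a[i] * b[j] * productions[i] * attractions[j] * friction[i][j]
             for j in range(m)] for i in range(n)]

def selink_factor(ground_count, assigned_volume):
    """SELINK link adjustment factor: observed link volume over the total
    volume assigned to that link by the model."""
    return ground_count / assigned_volume
```

Balancing requires total productions to equal total attractions; the factors returned by `selink_factor` are then applied to the origin and destination zones of every trip using the selected link, as the abstract describes.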


The Gains To Bidding Firms' Stock Returns From Merger (기업합병의 성과에 영향을 주는 요인에 대한 실증적 연구)

  • Kim, Yong-Kap
    • Management & Information Systems Review
    • /
    • v.23
    • /
    • pp.41-74
    • /
    • 2007
  • In Korea, corporate merger activity has been active since 1980, and recent changes (particularly since 1986) in domestic and international economic circumstances have given corporate managers a strong interest in mergers. Korea and America have different business environments, and there are conceivably many differences in the motives, methods, and effects of mergers between the two countries. According to recent studies on takeover bids in America, takeover bids have information effects, tax implications, and co-insurance effects, and the form of payment (cash versus securities), the relative size of target and bidder, the leverage effect, Tobin's q, the number of bidders (single versus multiple), the time period (before 1968, 1968-1980, 1981 and later), and the target firm's reaction (hostile versus friendly) are important determinants of the magnitude of takeover gains and their distribution between targets and bidders at the announcement of takeover bids. This study examines the theory of takeover bids and the status quo and problems of mergers in Korea, then investigates how merger announcements are reflected in the common stock returns of bidding firms, and finally explores empirically the factors influencing the abnormal returns of bidding firms' stock prices. The hypotheses of this study are as follows: shareholders of bidding firms benefit from mergers, and the common stock returns of bidding firms at the announcement of takeover bids show significant differences according to the ratio of target size relative to the bidding firm, whether the target is a member of the conglomerate to which the bidding firm belongs, whether the target is a listed company, the time period (before 1986, 1986 and later), the number of the bidding firm's shares exchanged for a share of the target, whether the merger is a horizontal or vertical merger or a conglomerate merger, and the debt-to-equity ratios of the target and bidding firm. 
The data analyzed in this study were drawn from public announcements of proposals to acquire a target firm by means of merger. The sample contains all bidding firms which were listed in the stock market and also engaged in successful mergers in the period 1980 through 1992 for which there are daily stock returns. A merger bid was considered successful if it resulted in a completed merger and the target firm disappeared as a separate entity. The final sample contains 113 acquiring firms. The research hypotheses examined in this study are tested by applying an event-study methodology similar to that described in Dodd and Warner. The ordinary-least-squares coefficients of the market-model regression were estimated over the period t=-135 to t=-16 relative to the date of the proposal's initial announcement, t=0. Daily abnormal common stock returns were calculated for each firm i over the interval t=-15 to t=+15. A daily average abnormal return (AR) for each day t was computed. Average cumulative abnormal returns ($CAR_{T_1,T_2}$) were also derived by summing the $AR_t$'s over various intervals. The expected values of $AR_t$ and $CAR_{T_1,T_2}$ are zero in the absence of abnormal performance. The test statistics of $AR_t$ and $CAR_{T_1,T_2}$ are based on the average standardized abnormal return ($ASAR_t$) and the average standardized cumulative abnormal return ($ASCAR_{T_1,T_2}$), respectively. Assuming that the individual abnormal returns are normal and independent across t and across securities, the statistics $Z_t$ and $Z_{T_1,T_2}$, which follow a unit-normal distribution (Dodd and Warner), are used to test the hypotheses that the average standardized abnormal returns and the average cumulative standardized abnormal returns equal zero.
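The market-model event-study machinery described above — estimating alpha and beta over the pre-event window, computing abnormal returns over the event window, and cumulating them into a CAR — can be sketched as follows. This is a minimal single-firm illustration of the standard procedure, not the authors' code, and it omits the standardization and Z-statistics:

```python
import statistics

def market_model(stock, market):
    """OLS estimates of the market model R_t = alpha + beta * R_m,t + e_t,
    fitted over an estimation window (t = -135 ... -16 in the study)."""
    mb, sb = statistics.mean(market), statistics.mean(stock)
    var = sum((m - mb) ** 2 for m in market)
    cov = sum((m - mb) * (s - sb) for m, s in zip(market, stock))
    beta = cov / var
    return sb - beta * mb, beta            # (alpha, beta)

def abnormal_returns(stock, market, alpha, beta):
    """AR_t = R_t - (alpha + beta * R_m,t) over the event window."""
    return [s - (alpha + beta * m) for s, m in zip(stock, market)]

def car(ar, t1, t2):
    """Cumulative abnormal return: sum of AR_t over [t1, t2] (inclusive,
    0-indexed here rather than relative to the announcement date)."""
    return sum(ar[t1:t2 + 1])
```

Averaging the per-firm ARs across the sample for each event day, and standardizing before testing, yields the $AR_t$, $CAR_{T_1,T_2}$, and Z-statistics the abstract refers to.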


A Hydrodynamic Modeling Study to Analyze the Water Plume and Mixing Pattern of the Lake Euiam (의암호 수체 흐름과 혼합 패턴에 관한 모델 연구)

  • Park, Seongwon;Lee, Hye Won;Lee, Yong Seok;Park, Seok Soon
    • Korean Journal of Ecology and Environment
    • /
    • v.46 no.4
    • /
    • pp.488-498
    • /
    • 2013
  • A three-dimensional hydrodynamic model was applied to Lake Euiam. The lake has three inflows, of which Gongji Stream has the smallest flow rate and the poorest water quality. The dam-storage volume, watershed area, lake shape, and discharge type of the Chuncheon Dam and the Soyang Dam differ, which makes it difficult to analyze the water plume and mixing pattern arising from the two dams' differences in outflow volume and water temperature. In this study, we analyzed the effects of these different characteristics on temperature and conductivity using a model appropriate for Lake Euiam. We selected an integrated system supporting 3-D time-varying modeling (GEMSS) to represent the large temporal and spatial variations in the hydrodynamics and transport of Lake Euiam. The model represents the water temperature and hydrodynamics in the lake reasonably well. We examined residence time and spreading patterns of the incoming flows in the lake based on the results of the validated model. The water temperature and conductivity distributions indicated that the characteristics of the upstream dams greatly influence Lake Euiam. In this study, the three-dimensional time-variable water quality model successfully simulated the temporal and spatial variations of the hydrodynamics in Lake Euiam. The model may be used for efficient water quality management.

Spatio-Temporal Incidence Modeling and Prediction of the Vector-Borne Disease Using an Ecological Model and Deep Neural Network for Climate Change Adaption (기후 변화 적응을 위한 벡터매개질병의 생태 모델 및 심층 인공 신경망 기반 공간-시간적 발병 모델링 및 예측)

  • Kim, SangYoun;Nam, KiJeon;Heo, SungKu;Lee, SunJung;Choi, JiHun;Park, JunKyu;Yoo, ChangKyoo
    • Korean Chemical Engineering Research
    • /
    • v.58 no.2
    • /
    • pp.197-208
    • /
    • 2020
  • This study was carried out to analyze the spatial and temporal incidence characteristics of scrub typhus and to predict its future incidence, since the incidence of scrub typhus has rapidly increased among vector-borne diseases. A maximum entropy (MaxEnt) ecological model was implemented to predict the spatial distribution and incidence rate of scrub typhus using spatial data sets on environmental and social variables. Additionally, relationships between the incidence of scrub typhus and critical spatial data were analyzed. Elevation and temperature were identified as the dominant spatial factors influencing the growth environment of Leptotrombidium scutellare (L. scutellare), the primary vector of scrub typhus. The temporal case count of scrub typhus was predicted by a deep neural network (DNN). The model considered the time-lagged effect of scrub typhus. The DNN-based prediction model showed that temperature, precipitation, and humidity in summer were significant factors influencing the activity of L. scutellare and the number of cases in fall. Moreover, the DNN-based prediction model had superior performance compared to a conventional statistical prediction model. Finally, the spatial and temporal models were applied under a climate change scenario. The projected characteristics of scrub typhus showed that the maximum incidence rate would increase by 8%, areas with a high potential incidence rate would increase by 9%, and the disease occurrence duration would expand by 2 months. These results can contribute to disease management and prediction for the health of residents in terms of public health.

Population Phenology and an Early Season Adult Emergence Model of Pumpkin Fruit Fly, Bactrocera depressa (Diptera: Tephritidae) (호박과실파리 발생생태 및 계절초기 성충우화시기 예찰 모형)

  • Kang, Taek-Jun;Jeon, Heung-Yong;Kim, Hyeong-Hwan;Yang, Chang-Yeol;Kim, Dong-Soon
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.10 no.4
    • /
    • pp.158-166
    • /
    • 2008
  • The pumpkin fruit fly, Bactrocera depressa (Diptera: Tephritidae), is one of the most important pests of cucurbit crops. This study investigated the basic ecology of B. depressa and developed a forecasting model for predicting the timing of adult emergence in the early season. In farms producing green pumpkins, oviposition punctures caused by B. depressa first appeared between mid- and late July, peaked in late August, decreased in mid-September, and disappeared in late September; oviposition activity is considered active during this period. In farms producing fully ripened pumpkins, damaged fruits increased abruptly from early August, when decay caused by larval development began. B. depressa produced a mean of 2.2 oviposition punctures per fruit and a total of 28.8-29.8 eggs per fruit. Adult emergence from overwintering pupae, monitored using ground emergence traps, was first observed between mid- and late May and peaked from late May to early June. The development time from overwintering pupa to adult emergence decreased with increasing temperature: 59.0 days at 15°C, 39.3 days at 20°C, 25.8 days at 25°C, and 21.4 days at 30°C. Pupae did not develop to adults at 35°C. The lower developmental threshold temperature was calculated as 6.8°C by linear regression, and the thermal constant was 482.3 degree-days. A non-linear Gaussian model explained the relationship between development rate and temperature well, and the Weibull function provided a good fit to the distribution of development times of overwintering pupae. The date of 50% adult emergence predicted by a degree-day model deviated by only one day from the observed date. In addition, the output of a rate summation model, which combined the developmental model and the Weibull function, closely followed the observed cumulative emergence curve of B. depressa adults. These results are expected to be useful in establishing management strategies for B. depressa.
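The degree-day logic above can be sketched numerically using the two constants the abstract reports (lower threshold 6.8°C, thermal constant 482.3 degree-days), with emergence spread by a Weibull distribution over normalized physiological age. The Weibull shape and scale values and the temperature series below are hypothetical; the paper's fitted parameters are not reproduced here.

```python
# Hedged sketch of a degree-day + Weibull emergence model. The threshold
# and thermal constant come from the abstract; the Weibull parameters and
# temperature series are hypothetical illustrations.
import math

BASE_TEMP = 6.8        # lower developmental threshold (C), from the paper
THERMAL_CONST = 482.3  # degree-days from overwintering pupa to adult

def degree_days(daily_mean_temps):
    """Average-method degree-day accumulation above the base temperature."""
    return sum(max(t - BASE_TEMP, 0.0) for t in daily_mean_temps)

def weibull_cdf(x, shape=4.0, scale=1.05):
    """Cumulative emergence fraction vs. normalized physiological age
    (hypothetical shape/scale)."""
    return 1.0 - math.exp(-((x / scale) ** shape))

# Hypothetical spring: daily means warming from 10 C to 25 C over 90 days.
temps = [10.0 + 15.0 * d / 89 for d in range(90)]
cum_dd = 0.0
for day, t in enumerate(temps, start=1):
    cum_dd += max(t - BASE_TEMP, 0.0)
    frac = weibull_cdf(cum_dd / THERMAL_CONST)
    if frac >= 0.5:
        print(f"50% adult emergence predicted on day {day}")
        break
```

The loop is the rate-summation step: physiological age advances by each day's degree-day increment, and the Weibull CDF converts accumulated age into a cumulative emergence fraction.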

Regional Inequalities in Healthcare Indices in Korea: Geo-economic Review and Action Plan (우리나라 보건지표의 지역 격차: 지경학적 고찰과 대응방안)

  • Kim, Chun-Bae;Chung, Moo-Kwon;Kong, In Deok
    • Health Policy and Management
    • /
    • v.28 no.3
    • /
    • pp.240-250
    • /
    • 2018
  • By the end of 2017, in a world of 7.6 billion people, inequalities in healthcare indices existed both within and between nations, and this gap continues to widen. This study therefore examines the current status of regional inequalities in healthcare indices in Korea and seeks an action plan to tackle regional health inequality through a geo-economic review. Since 2008, life expectancy and healthy life expectancy have varied greatly by region, not only across metropolitan cities but also across districts. Community health statistics for 2008-2017 show a continuous increase in inequality over the last 10 years in most healthcare indices related to noncommunicable diseases (with some exceptions, such as smoking), with inequality doubling across the 254 districts. Health inequality also intensified as the gap between urban regions (metropolitan cities) and rural regions (counties) in the rates of obesity (self-reported), sufficient walking practice, and healthy lifestyle practice widened from twofold to fivefold. Although the Constitution of the Republic of Korea recognizes the legal value of balanced regional development by specifying "the balanced development of the state" and "ensuring the balanced development of all regions," regionalism and uneven development are, from a spatial perspective, natural consequences of the state-led developmentalism that Korea adopted as its growth model, rooted in export-led industrialization in the 1960s and heavy and chemical industrialization in the 1970s. In addition, with declining and aging populations and the approaching demographic cliff, some 30% of local governments nationwide are expected to decline or face extinction by 2040. Thus, the government should continuously operate the "Special Committee on Regional Balanced Development" through a government-wide effort until 2030 to prevent disparities in the health conditions of local residents, a national responsibility that requires strengthened governance. To address inequalities between rural and urban regions, the basic subsidy and the cost-sharing rates of current national subsidies with local governments should be re-adjusted, based not only on population scale but also on the financial independence of local governments and on the distribution of healthcare resources and of the healthcare indices showing high inequality.