• Title/Summary/Keyword: exponential distribution


Estimation of Groundwater Recharge by Considering Runoff Process and Groundwater Level Variation in Watershed (유역 유출과정과 지하수위 변동을 고려한 분포형 지하수 함양량 산정방안)

  • Chung, Il-Moon;Kim, Nam-Won;Lee, Jeong-Woo
    • Journal of Soil and Groundwater Environment / v.12 no.5 / pp.19-32 / 2007
  • In Korea, there have been various methods of estimating groundwater recharge, which can generally be subdivided into three types: the baseflow separation method based on the groundwater recession curve, water budget analysis based on a lumped conceptual watershed model, and the water table fluctuation (WTF) method using data from groundwater monitoring wells. However, the groundwater recharge rate shows spatio-temporal variability due to climatic conditions, land use, and hydrogeological heterogeneity, so these methods have various limitations in dealing with these characteristics. To overcome these limitations, we present a new method of estimating recharge based on water balance components from SWAT-MODFLOW, an integrated surface water-groundwater model. Groundwater levels in the area of interest close to the stream have dynamics similar to stream flow, whereas levels further upslope respond to precipitation with a delay. As these behaviours are related to the physical process of recharge, it is necessary to account for the time delay in aquifer recharge once water exits the soil profile. In SWAT, a single linear reservoir storage module with an exponential decay weighting function is used to compute the recharge from soil to aquifer on a given day. However, this module has limitations in expressing recharge variation when the delay time is long and the transient recharge trend does not match the groundwater table time series. Therefore, a multi-reservoir storage routing module, which represents a more realistic time delay through the vadose zone, is newly suggested in this study. In this module, the parameter related to the delay time is optimized by checking the correlation between simulated recharge and observed groundwater levels. The final step of the procedure is to compare the simulated groundwater table with the observed one, as well as the simulated watershed runoff with the observed runoff.
This method is applied to the Mihocheon watershed in Korea to test the procedure for properly estimating the spatio-temporal distribution of groundwater recharge. As the newly suggested method combines the effectiveness of a watershed model with the accuracy of the WTF method, the estimated daily recharge rate reflects the heterogeneity of hydrogeology, climatic conditions, and land use, as well as the physical behaviour of water in soil layers and aquifers.
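The contrast between SWAT's single-reservoir recharge and multi-reservoir routing can be sketched as follows. This is an illustrative reconstruction, not the study's code: the single-reservoir recursion follows SWAT's exponential-decay form, while `cascade_recharge`, its `n` parameter, and the equal split of the delay time across reservoirs are assumptions made for demonstration.

```python
import math

def swat_recharge(seepage, delay):
    """Single linear reservoir with exponential decay (SWAT-style):
    rchrg_i = (1 - exp(-1/delay)) * seepage_i + exp(-1/delay) * rchrg_{i-1}."""
    k = math.exp(-1.0 / delay)
    rchrg, out = 0.0, []
    for w in seepage:
        rchrg = (1.0 - k) * w + k * rchrg
        out.append(rchrg)
    return out

def cascade_recharge(seepage, delay, n=3):
    """Hypothetical multi-reservoir routing: pass seepage through n linear
    reservoirs in series, each carrying delay/n of the total delay time."""
    stores = [0.0] * n
    k = math.exp(-n / delay)  # decay factor of each small reservoir
    out = []
    for w in seepage:
        inflow = w
        for i in range(n):
            stores[i] = (1.0 - k) * inflow + k * stores[i]
            inflow = stores[i]  # outflow of one reservoir feeds the next
        out.append(inflow)
    return out
```

For the same total delay, the cascade spreads an infiltration pulse over a smoother, longer-tailed response, which is the behaviour needed for wells far from the stream.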

Regeneration Processes of Nutrients in the Polar Front Area of the East Sea III. Distribution Patterns of Water Masses and Nutrients in the Middle-Northern East Sea of Korea in October, 1995 (동해 극전선역의 영양염류 순환 과정 III. 1995년 10월 동해 중부 및 북부 해역의 수괴와 영양염의 분포)

  • CHO Hyun-Jin;MOON Chang-Ho;YANG Han-Seob;KANG Won-Bae;LEE Kwang-Woo
    • Korean Journal of Fisheries and Aquatic Sciences / v.30 no.3 / pp.393-407 / 1997
  • A survey of biological and chemical characteristics in the middle-northern East Sea of Korea was carried out at 28 stations in October 1995 on board R/V Tam-Yang. On the basis of the vertical profiles of temperature, salinity, and dissolved oxygen, the water masses in the study area were divided into five major groups: (1) Low Saline Surface Water (LSSW), (2) Tsushima Surface Water (TSW), (3) Tsushima Middle Water (TMW), (4) North Korean Cold Water (NKCW), and (5) East Sea Proper Water (ESPW). Four other mixed water masses were also observed. It is highly possible that the LSSW, which occurred at depths of 0~30 m in the southernmost part of the study area, originated from the Yangtze River (Kiang) of China, given its very low salinity (<32.0‰), its relatively high concentration of dissolved silicate, and the absence of freshwater sources in that area. The oxygen maximum layer in the vertical profile was located near the surface in the northern cold waters and became deeper in the warm southern area. An oxygen minimum layer at depths of 50~100 m, corresponding to the TMW, was found only in the southern area. In the vertical profiles of nutrients, the concentrations were very low in the surface layer and increased dramatically near the thermocline. The highest concentrations occurred in the ESPW. The relatively low Si/P ratio in the ESPW (13.63), compared to other reports in the East Sea, was due to the continuous increase of P as well as Si with depth. The N:P ratio was about 6.92, showing that nitrogenous nutrient is the limiting factor for phytoplankton growth. The exponential relationship between Si and P, compared to the linear relationship between N and P, indicates that nitrate and phosphate have approximately the same regenerative pattern, whereas silicate has a delayed regenerative pattern.
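The linear N-P and exponential Si-P relationships described above can be recovered by ordinary and log-linear least squares, respectively. A minimal sketch with synthetic data; the survey's actual measurements are not reproduced in the abstract, so the arrays below are illustrative only.

```python
import numpy as np

# Hypothetical nutrient concentrations (the true survey data are not given)
P  = np.array([0.1, 0.4, 0.8, 1.2, 1.6, 2.0])   # phosphate
N  = 6.92 * P                                    # linear N-P relation
Si = 2.0 * np.exp(1.1 * P)                       # exponential Si-P relation

# Linear fit N = m*P + c: the slope m estimates the N:P ratio
m, c = np.polyfit(P, N, 1)

# Exponential fit Si = a*exp(b*P) via log-linear least squares
b, log_a = np.polyfit(P, np.log(Si), 1)
a = np.exp(log_a)
```

A log-linear fit is a standard way to estimate exponential parameters: taking the logarithm of Si turns the exponential relation into a straight line.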


Evaluation for Rock Cleavage Using Distribution of Microcrack Spacings (IV) (미세균열의 간격 분포를 이용한 결의 평가(IV))

  • Park, Deok-Won
    • The Journal of the Petrological Society of Korea / v.26 no.2 / pp.127-141 / 2017
  • Jurassic granite from Geochang was analysed with respect to the characteristics of its rock cleavage. A multicriteria evaluation of the six directions of rock cleavage was performed using microcrack spacing-related parameters derived from enlarged photomicrographs (${\times}6.7$) of thin sections and from spacing-cumulative frequency diagrams. The results of the analysis of the representative values of these spacing parameters with respect to the rock cleavage are summarized as follows. First, an analysis was performed to derive the main parameter indicating the order of arrangement among the six diagrams; for this, the values of five parameters for the six directions of rock cleavage were arranged in increasing or decreasing order. The decreasing order of the values of the main parameter (mean spacing minus median spacing, $S_{mean}-S_{median}$) and of the mean spacing is consistent with the order of the H1, H2, G1, G2, R1, and R2 directions. These sequential arrangements of the six directions of rock cleavage can provide a basis for arranging the six spacing-related diagrams. Second, nine correlation charts between the above main parameter and various parameters were arranged in decreasing order of correlation coefficient ($R^2$). These charts commonly show a strong power-law correlation. The values of the mean spacing, density (${\rho}$), and length of line oa are directly proportional to the value of the main parameter, while the values of the constant (a), exponent (${\lambda}$), spacing frequency (N), length of line oa', slope of the exponential straight line (${\theta}$), and total length ($1mm{\geq}$) are inversely proportional to it. Third, the results of the correlation analysis between the values of the parameters for the three planes and those for the three rock cleavages are as follows.
The values of frequency, total spacing, constant, exponent, slope, and length of line oa' for the three planes and the three rock cleavages show the orders R' < G' < H' and H < G < R, respectively. On the other hand, the values of mean spacing, (mean spacing minus median spacing), density, and length of line oa show the orders H' < G' < R' and R < G < H, respectively. Thus, the order of parameter values for the three planes is the mutual reverse of that for the three rock cleavages. This type of correlation analysis is useful for discriminating the three quarrying planes.
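The main parameter, mean spacing minus median spacing, is straightforward to compute from a set of measured spacings. A minimal sketch; the density definition used here (cracks per unit scanline length) is an assumption, since the abstract does not define ${\rho}$ explicitly.

```python
import statistics

def spacing_parameters(spacings):
    """Summarize microcrack spacings (e.g. in mm) along one direction:
    mean and median spacing, their difference (the study's main parameter,
    which grows with the skewness of the spacing distribution), and an
    assumed density measure (cracks per unit scanline length)."""
    mean = statistics.mean(spacings)
    median = statistics.median(spacings)
    return {
        "mean": mean,
        "median": median,
        "mean_minus_median": mean - median,
        "density": len(spacings) / sum(spacings),
    }
```

A heavily right-skewed spacing set (a few very wide gaps) pulls the mean above the median, so directions can be ranked by this single difference.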

Export Prediction Using Separated Learning Method and Recommendation of Potential Export Countries (분리학습 모델을 이용한 수출액 예측 및 수출 유망국가 추천)

  • Jang, Yeongjin;Won, Jongkwan;Lee, Chaerok
    • Journal of Intelligence and Information Systems / v.28 no.1 / pp.69-88 / 2022
  • One of the characteristics of South Korea's economic structure is that it is highly dependent on exports, so many businesses are closely tied to the global economy and the diplomatic situation. In addition, small and medium-sized enterprises (SMEs) specialized in exporting are struggling due to the spread of COVID-19. Therefore, this study aimed to develop a model to forecast exports for the next year to support SMEs' export strategy and decision making. This study also proposes a strategy to recommend promising export countries for each item based on the forecasting model. We analyzed important variables used in previous studies, such as country-specific, item-specific, and macro-economic variables, and collected those variables to train our prediction model. Next, through exploratory data analysis (EDA), it was found that exports, the target variable, have a highly skewed distribution. To deal with this issue and improve predictive performance, we suggest a separated learning method, in which the whole dataset is divided into homogeneous subgroups and a prediction algorithm is applied to each group; the characteristics of each group can thus be trained more precisely using different input variables and algorithms. In this study, we divided the dataset into five subgroups based on export value to decrease the skewness of the target variable. After the separation, we found that each group has different characteristics in countries and goods. For example, in Group 1, most of the exporting countries are developing countries, and the majority of exported goods are low-value products such as glass and prints. On the other hand, major export destinations of South Korea such as China, the USA, and Vietnam are included in Groups 4 and 5, and most exported goods in these groups are high-value products. We then used LightGBM (LGBM) and the Exponential Moving Average (EMA) for prediction.
Considering the characteristics of each group, models were built using LGBM for Groups 1 to 4 and EMA for Group 5. To evaluate the performance of the model, we compared different model structures and algorithms; as a result, the separated learning model had the best performance among the compared models. After the model was built, we also provided the variable importance of each group using SHAP values to add explainability to our model. Based on the prediction model, we proposed a two-stage recommendation strategy for potential export countries. In the first phase, the BCG matrix was used to find Star and Question Mark markets that are expected to grow rapidly. In the second phase, we calculated scores for each country, and recommendations were made according to ranking. Using this framework, potential export countries were selected and information about those countries for each item was presented. There are several implications of this study. First of all, most of the preceding studies have conducted research on a specific situation or country, whereas this study uses various variables and develops a machine learning model for a wide range of countries and items. Second, to our knowledge, it is the first attempt to adopt a separated learning method for export prediction; by separating the dataset into five homogeneous subgroups, we could enhance the predictive performance of the model, and a more detailed explanation of the models by group is provided using SHAP values. Lastly, this study has several practical implications. There are some platforms that serve trade information, including KOTRA, but most of them are based on past data, so it is not easy for companies to predict future trends. By utilizing the model and recommendation strategy in this research, trade-related services on each platform can be improved so that companies, including SMEs, can fully utilize them when making export strategies and decisions.
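Two core mechanics of the approach, quantile-based group separation and the EMA forecast, can be sketched as follows. This is a simplified illustration: the quantile split and the EMA span are assumed choices (the paper does not specify them here), and the per-group LGBM models are omitted.

```python
import numpy as np

def separate_groups(y, n_groups=5):
    """Split samples into homogeneous subgroups by quantiles of the
    (highly skewed) export target, one assumed way to realize the
    'separated learning' dataset division."""
    edges = np.quantile(y, np.linspace(0, 1, n_groups + 1)[1:-1])
    return np.digitize(y, edges)  # group index 0..n_groups-1 per sample

def ema_forecast(series, span=3):
    """Exponential moving average over a series; the last EMA value
    serves as the next-period forecast (as used for the top group)."""
    alpha = 2.0 / (span + 1)
    ema = series[0]
    for x in series[1:]:
        ema = alpha * x + (1 - alpha) * ema
    return ema
```

After the split, each group would get its own model and input variables; the quantile edges simply guarantee that each subgroup is roughly equal-sized and internally homogeneous in export scale.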

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration rate of smart devices are producing a large amount of data. As a result, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis have been continuously increasing, meaning that big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each demander of analysis. However, growing interest in big data analysis has spurred computer programming education and the development of many programs for data analysis. Accordingly, the entry barriers to big data analysis are gradually lowering and data analysis technology is spreading, so big data analysis is expected to be performed by the demanders of analysis themselves. Along with this, interest in various kinds of unstructured data is continually increasing, with much attention focused on text data in particular. The emergence of new platforms and techniques using the web has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques utilized for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large number of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster. It is evaluated as a very useful technique in that it reflects the semantic elements of the documents.
Traditional topic modeling is based on the distribution of key terms across the entire document collection, so it is essential to analyze the entire collection at once to identify the topic of each document. This causes long analysis times when topic modeling is applied to a large number of documents, and it creates a scalability problem: the processing time increases exponentially with the number of analysis objects. The problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method can perform topic modeling on a large number of documents with limited system resources and can improve processing speed. It can also significantly reduce analysis time and cost, since documents can be analyzed in each location without first combining them. However, despite many advantages, this method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear; in each unit, local topics can be identified, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology must be established; that is, assuming the global topics are the ideal answer, the deviation of the local topics from the global topics needs to be measured. Because of these difficulties, this approach has not been studied sufficiently compared with other work on topic modeling. In this paper, we propose a topic modeling approach that solves the above two problems.
First of all, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. Along with this, we verify the accuracy of the proposed methodology by detecting whether each document is assigned the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirmed that the proposed methodology can provide results similar to topic modeling on the entire collection. We also propose a reasonable method for comparing the results of the two approaches.
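The local-to-global mapping step can be illustrated by matching each local topic's word distribution to its nearest RGS topic. The cosine-similarity criterion below is an assumed choice for illustration; the abstract does not specify which similarity measure the authors use.

```python
import math

def cosine(u, v):
    """Cosine similarity between two word-probability vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def map_local_to_global(local_topics, global_topics):
    """For each local topic (a vector over a shared vocabulary), return the
    index of the most similar global/RGS topic."""
    return [
        max(range(len(global_topics)), key=lambda g: cosine(lt, global_topics[g]))
        for lt in local_topics
    ]
```

With such a mapping, a document's local topic label can be translated into a global label, which is what makes the per-unit results comparable to full-corpus topic modeling.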