The Generalized Hoek-Brown (GHB) criterion is a nonlinear failure criterion specialized for rock engineering applications and has recently seen increased use. However, the GHB criterion expresses the relationship between the minimum and maximum principal stresses at failure, and when GSI≠100 it has the disadvantage of being difficult to express as an explicit relationship between the normal and shear stresses acting on the failure plane, i.e., as a Mohr failure envelope. This disadvantage makes it challenging to apply the GHB criterion in numerical analysis techniques such as limit equilibrium analysis, upper-bound limit analysis, and the critical plane approach. Consequently, recent studies have attempted to express the GHB Mohr failure envelope as an approximate analytical formula, and continued research interest in this topic is still needed. This study presents improved formulations for the approximate GHB Mohr failure envelope, offering higher accuracy in predicting shear strength than existing formulas. The improved formulation process employs a method to enhance the approximation accuracy of the tangential friction angle and utilizes the tangent line equation of the nonlinear GHB failure envelope to improve the accuracy of the shear strength approximation. In the latter part of this paper, the advantages and limitations of the proposed approximate GHB failure envelopes in terms of shear strength prediction accuracy and calculation time are discussed.
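For reference, the sketch below shows how the GHB criterion itself is evaluated and how points on its exact Mohr envelope can be generated numerically from the standard GSI-based parameter relations and a Balmer-type stress transformation. The input values are purely illustrative, and this is not the approximate closed-form envelope proposed in the study.

```python
# Minimal sketch: Generalized Hoek-Brown criterion and numerical Mohr-envelope points.
# Uses the standard 2002-edition parameter relations; GSI, mi, sigma_ci values are illustrative.
import numpy as np

def ghb_parameters(GSI, mi, D=0.0):
    """GHB strength parameters mb, s, a from GSI, mi and disturbance factor D."""
    mb = mi * np.exp((GSI - 100.0) / (28.0 - 14.0 * D))
    s = np.exp((GSI - 100.0) / (9.0 - 3.0 * D))
    a = 0.5 + (np.exp(-GSI / 15.0) - np.exp(-20.0 / 3.0)) / 6.0
    return mb, s, a

def ghb_sigma1(sig3, sci, mb, s, a):
    """Major principal stress at failure for a given minor principal stress."""
    return sig3 + sci * (mb * sig3 / sci + s) ** a

def mohr_point(sig3, sci, mb, s, a):
    """Normal and shear stress on the failure plane for a given sigma3
    (parametric point on the exact Mohr envelope, Balmer-type relations)."""
    sig1 = ghb_sigma1(sig3, sci, mb, s, a)
    dsig1_dsig3 = 1.0 + a * mb * (mb * sig3 / sci + s) ** (a - 1.0)
    sn = sig3 + (sig1 - sig3) / (dsig1_dsig3 + 1.0)
    tau = (sn - sig3) * np.sqrt(dsig1_dsig3)
    return sn, tau

# Example: GSI = 60, mi = 10, sigma_ci = 50 MPa (illustrative values only)
mb, s, a = ghb_parameters(60.0, 10.0)
print(mohr_point(1.0, 50.0, mb, s, a))
```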
Journal of the Korean Association of Geographic Information Studies / v.22 no.4 / pp.116-130 / 2019
In South Korea, forest fires are increasing in size and duration due to various factors such as the increase in fuel materials and frequent drying conditions in forests. Therefore, it is necessary to minimize the damage caused by forest fires by appropriately providing the probability of forest fire risk. The purpose of this study is to improve the Daily Weather Index (DWI) provided by the current forest fire forecasting system in South Korea. A new Fire Risk Index (FRI) is proposed in this study, which is provided on a 5 km grid through the synergistic use of numerical weather forecast data, satellite-based drought indices, and forest fire-prone areas. The FRI is calculated as the product of the Fine Fuel Moisture Code (FFMC) optimized for Korea, an integrated drought index, and spatio-temporal weights. In order to improve the temporal accuracy of forest fire risk, monthly weights were applied based on the forest fire occurrences by month. Similarly, spatial weights were applied using forest fire density information to improve the spatial accuracy of forest fire risk. In the time series analysis of the number of monthly forest fires and the FRI, the relationship between the two was well simulated. In addition, it was possible to provide more spatially detailed information on forest fire risk when using the FRI on the 5 km grid than with the DWI based on administrative units. The research findings from this study can help make appropriate decisions before and after forest fire occurrences.
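The composition of the FRI described above can be summarized by the following sketch. The product structure follows the abstract, but the normalization and the specific weight values are assumptions for illustration only, not the paper's calibrated tables.

```python
# Hedged sketch of the FRI composition: product of an FFMC term, an integrated
# drought index, and monthly/spatial weights. All numeric values are hypothetical.
def fire_risk_index(ffmc, drought_index, monthly_weight, spatial_weight):
    """FRI for one 5 km grid cell, assuming each component is scaled to [0, 1]."""
    return ffmc * drought_index * monthly_weight * spatial_weight

# Illustrative call for a single grid cell (hypothetical values).
fri = fire_risk_index(ffmc=0.8, drought_index=0.6, monthly_weight=0.9, spatial_weight=0.7)
print(round(fri, 3))
```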
The Sea: Journal of the Korean Society of Oceanography / v.19 no.3 / pp.169-179 / 2014
Monthly mean surface heat fluxes in the southeastern Yellow Sea are calculated using directly observed air-sea variables, including short- and longwave radiation, from an ocean buoy station and the COARE 3.0 bulk flux algorithm. The calculated monthly mean heat fluxes are then compared with previous estimates of climatological monthly mean surface heat fluxes near the buoy location. The sea surface receives heat through net shortwave radiation ($Q_i$) and loses heat as net longwave radiation ($Q_b$), sensible heat flux ($Q_h$), and latent heat flux ($Q_e$). $Q_e$ makes the largest contribution to the total heat loss, about 51%, while $Q_b$ and $Q_h$ account for 34% and 15% of the total heat loss, respectively. Net heat flux ($Q_n$) shows its maximum in May ($191.4W/m^2$), when $Q_i$ reaches its annual maximum, and its minimum in December ($-264.9W/m^2$), when the heat loss terms reach their annual maximum values. The annual mean $Q_n$ is estimated to be $1.9W/m^2$, which is negligibly small considering instrument errors (maximum of ${\pm}19.7W/m^2$). In the previous estimates, summertime incoming radiation ($Q_i$) is underestimated by about $10{\sim}40W/m^2$, and wintertime heat losses due to $Q_e$ and $Q_h$ are overestimated by about $50W/m^2$ and $30{\sim}70W/m^2$, respectively. Consequently, compared to $Q_n$ from the present study, the net heat gain during the period of net oceanic heat gain between April and August is underestimated, while the ocean's net heat loss in winter is overestimated in other studies. The difference in $Q_n$ is as large as $70{\sim}130W/m^2$ in December and January. Analysis of a long-term reanalysis product (MERRA) indicates that the difference in the monthly mean heat fluxes between the present and previous studies is not due to the temporal variability of fluxes but due to inaccurate data used for the calculation of the heat fluxes. This study suggests that caution should be exercised in using the climatological monthly mean surface heat fluxes documented previously for various research and numerical modeling purposes.
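The sign convention implied in the text can be written out explicitly as below. The flux values in the example are hypothetical and serve only to show how a negative (ocean-cooling) December-like budget arises; they are not the buoy measurements.

```python
# Hedged sketch of the surface heat budget described above: the ocean gains heat
# by net shortwave radiation Qi and loses heat by Qb, Qh and Qe.
def net_heat_flux(Qi, Qb, Qh, Qe):
    """Net surface heat flux Qn in W/m^2, positive into the ocean."""
    return Qi - (Qb + Qh + Qe)

# Illustrative winter-like values (hypothetical, not the observed data).
print(net_heat_flux(Qi=100.0, Qb=110.0, Qh=90.0, Qe=160.0))  # -> -260.0 W/m^2
```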
Anomaly detection was once dominated by methods that decided whether an observation was abnormal based on statistics derived from the data. This was feasible because data in the past were low-dimensional, so classical statistical methods worked effectively. However, as the characteristics of data have become more complex in the era of big data, it has become difficult to accurately analyze and predict the data generated throughout industry in the conventional way, and supervised learning algorithms such as SVM and decision trees came into use. Supervised models, however, predict test data accurately only when the abnormal and normal classes are reasonably balanced, whereas most data generated in industry have imbalanced classes, so the predicted results are not always valid when a supervised learning model is applied. To overcome these drawbacks, many studies now use unsupervised learning-based models that are not influenced by the class distribution, such as autoencoders or generative adversarial networks. In this paper, we propose a method to detect anomalies using generative adversarial networks. AnoGAN, introduced by Schlegl et al. (2017), is a model that performs anomaly detection on medical images; it is built on convolutional neural networks and has been used for image-based detection. In contrast, anomaly detection for sequence data using generative adversarial networks has far fewer published studies than for image data. Li et al. (2018) proposed a model using LSTM, a type of recurrent neural network, to classify anomalies in numerical sequence data, but it was not applied to categorical sequence data and did not use the feature matching method introduced by Salimans et al. (2016). This suggests that there is still considerable room for research on the classification of sequence data with generative adversarial networks. To learn the sequence data, both networks in our generative adversarial model are built from LSTMs: the generator is a two-layer stacked LSTM with 32-dimensional and 64-dimensional hidden unit layers, and the discriminator is an LSTM with a 64-dimensional hidden unit layer. Existing work on anomaly detection for sequence data derives anomaly scores from the entropy of the predicted probabilities of the actual data; in this paper, as mentioned earlier, anomaly scores are derived using the feature matching technique. In addition, the process of optimizing the latent variables was designed with an LSTM to improve model performance. The modified generative adversarial model was more accurate than the autoencoder in all experiments in terms of precision and was approximately 7% higher in accuracy. In terms of robustness, the generative adversarial network also performed better than the autoencoder: because it learns the data distribution from real categorical sequence data, it is not biased toward a single normal pattern, whereas the autoencoder is. In the robustness test, the accuracy of the autoencoder was 92% and that of the generative adversarial network was 96%; in terms of sensitivity, the autoencoder reached 40% and the generative adversarial network 51%.
In this paper, experiments were also conducted to show how much performance changes depending on the optimization structure of the latent variables. As a result, sensitivity improved by about 1%. These results offer a new perspective on latent variable optimization, which has so far received relatively little attention.
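A minimal sketch of the feature-matching anomaly score described above is given below, assuming PyTorch. Only the 64-dimensional discriminator LSTM follows the text; the tensor shapes, the way the generated counterpart of a test sequence is obtained (e.g., by optimizing a latent code as in AnoGAN), and the distance metric are illustrative assumptions, not the paper's exact implementation.

```python
# Hedged sketch: feature-matching anomaly score with an LSTM discriminator.
# Input sequences are assumed to have shape (batch, seq_len, n_features).
import torch
import torch.nn as nn

class LSTMDiscriminator(nn.Module):
    def __init__(self, n_features, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def features(self, x):
        # Intermediate features used for feature matching (final hidden state).
        _, (h_n, _) = self.lstm(x)
        return h_n[-1]                      # (batch, hidden_dim)

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x)))

def feature_matching_score(disc, x_real, x_gen):
    """Anomaly score = distance between discriminator features of a test sequence
    and those of its generated counterpart (feature matching, Salimans et al. 2016 style)."""
    with torch.no_grad():
        f_real = disc.features(x_real)
        f_gen = disc.features(x_gen)
    return torch.norm(f_real - f_gen, dim=1)   # one score per sequence
```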
The deep geological repository for high-level radioactive waste disposal is a multi-barrier system comprised of engineered barriers and a natural barrier. The long-term integrity of the deep geological repository is affected by the coupled interactions between the individual barrier components. Erosion and piping phenomena in the compacted bentonite buffer due to buffer-rock interactions result in the removal of bentonite particles via groundwater flow and can negatively impact the integrity and performance of the buffer. Rapid groundwater inflow at the early stages of disposal can lead to piping in the bentonite buffer due to the buildup of pore water pressure. The physicochemical processes between the bentonite buffer and groundwater lead to bentonite swelling and gelation, resulting in bentonite erosion from the buffer surface. Hence, the evaluation of erosion and piping occurrence and their effects on the integrity of the bentonite buffer is crucial in determining the long-term integrity of the deep geological repository. Previous studies on bentonite erosion and piping failed to consider the complex coupled thermo-hydro-mechanical-chemical behavior of bentonite-groundwater interactions and lacked a comprehensive model that can capture the complex phenomena observed in experimental tests. In this technical note, previous studies on the mechanisms, lab-scale experiments, and numerical modeling of bentonite buffer erosion and piping are introduced, and the challenges expected in future investigations of bentonite buffer erosion and piping are summarized.
Much attention has been focused on the Baekpo Bay area due to past archaeological achievements, but studies on the prehistoric period, when villages began to form, are insufficient; the Bronze Age village landscape was therefore examined to supplement this. In the Baekpo Bay area, the natural geographical boundary connected to the inland was culturally confirmed by the distribution density of dolmens, and the general character of Bronze Age settlement was confirmed through the Hwangsan-ri settlement. Bunto Village in Hwangsan-ri represents a farming-based village in the Baekpo Bay area: the residential group and the tomb group are located on the same hill, the village consists of three individual residential groups, and its landscape included attached buildings used as warehouses and storage facilities. In the Baekpo Bay area, settlement spread in the Tamjin River and Yeongsan River basins, where Songguk-ri culture and dolmen culture were integrated, and the density of villages was considered to correspond to the distribution density of dolmens. To examine the landscape of village distribution, the classification of Sochon-Jungchon-Daechon (small-medium-large village) was applied, and most Bronze Age villages were classified as Sochon, the sub-unit constituting a village, in that the number of settlements constituting a Bronze Age village was mostly fewer than five. There are numerical differences between Jungchon and Daechon, and the distribution pattern does not necessarily coincide with hierarchy. The three individual residential groups of Bunto Village in Hwangsan-ri form a Jungchon composed of complex communities of blood relatives, each with its own family community, and a stabilized village landscape was created in the Gusancheon area. In the Baekpo Bay area, Bronze Age villages formed a landscape in which small villages were scattered along the rivers and formed a single-layered relationship. Dolmens (tombs) were built between the villages and seem to have coexisted with them. The Sochon were family communities based on agriculture, and they are believed to have formed a landscape of self-sufficient, stabilized rural villages that lived by acquiring various wild resources from the rivers, mountains, and the sea.
Internet commerce has been growing at a rapid pace for the last decade. Many firms try to reach wider consumer markets by adding the Internet channel to their existing traditional channels. Despite the various benefits of the Internet channel, a significant number of firms have failed in managing the new type of channel. Previous studies could not clearly explain these conflicting results associated with the Internet channel. One of the major reasons is that most previous studies conducted analyses under a specific market condition and attributed the outcome to the impact of Internet channel introduction; therefore, their results are strongly influenced by the specific market settings. However, firms face various market conditions in the real world. The purpose of this study is to investigate the impact of various market environments on a firm's optimal channel strategy by employing a flexible game theory model. We capture various market conditions with consumer density and disutility of using the Internet.
The channel structures analyzed in this study are as follows. Before the Internet channel is introduced, a monopoly manufacturer sells its products through an independent physical store. From this structure, the manufacturer could introduce its own Internet channel (MI). The independent physical store could also introduce its own Internet channel and coordinate it with the existing physical store (RI). An independent Internet retailer such as Amazon could enter this market (II); in this case, two types of independent retailers compete with each other. In this model, consumers are uniformly distributed over a two-dimensional space. Consumer heterogeneity is captured by a consumer's geographical location ($c_i$) and his disutility of using the Internet channel (${\delta}_{N_i}$).
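To make the consumer-choice mechanics concrete, the sketch below simulates consumers drawn uniformly over location and Internet disutility and lets each pick the channel with the higher surplus. The linear travel-cost utility, prices, and parameter values are illustrative assumptions only, not the paper's exact game-theoretic specification or equilibrium.

```python
# Hedged sketch of consumer heterogeneity and channel choice under an assumed
# linear surplus specification (illustrative, not the paper's model).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.uniform(-100, 100, n)           # consumer location c_i
delta_n = rng.uniform(0, 50, n)         # disutility of using the Internet, delta_Ni

v = 200.0                               # gross valuation of the product (assumed)
t = 1.0                                 # travel cost per unit distance (assumed)
p_store, p_net = 120.0, 115.0           # hypothetical retail prices

u_store = v - p_store - t * np.abs(x)   # surplus from the physical store at location 0
u_net = v - p_net - delta_n             # surplus from the Internet channel

choice = np.where((u_store < 0) & (u_net < 0), "none",
                  np.where(u_store >= u_net, "store", "internet"))
shares = {c: float(np.mean(choice == c)) for c in ("store", "internet", "none")}
print(shares)                           # market share of each channel in this scenario
```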
These two consumer heterogeneities capture various market conditions. Case (a) illustrates a market with symmetric consumer distributions. The model also explicitly captures asymmetric distributions of consumer disutility in a market. In a market like case (c), the average consumer disutility of using an Internet store is relatively smaller than that of using a physical store. For example, this case represents a market in which 1) the product is suitable for Internet transactions (e.g., books) or 2) the level of E-Commerce readiness is high, as in Denmark or Finland. On the other hand, the average consumer disutility of using an Internet store is relatively greater than that of using a physical store in a market like case (b). Countries like Ukraine and Bulgaria, or the market for "experience goods" such as shoes, could be examples of this market condition.
The various scenarios of consumer distributions analyzed in this study are as follows: the range of the disutility of using the Internet (${\delta}_{N_i}$) is held constant, while the range of the consumer location distribution (${\chi}_i$) varies from -25 to 25, from -50 to 50, from -100 to 100, from -150 to 150, and from -200 to 200.
The analysis results can be summarized as follows. As the average travel cost in a market decreases while the average disutility of Internet use remains the same, the average retail price, total quantity sold, physical store profit, monopoly manufacturer profit, and thus total channel profit increase. On the other hand, the quantity sold through the Internet and the profit of the Internet store decrease with a decreasing average travel cost relative to the average disutility of Internet use. We find that a channel that has an advantage over the other kind of channel serves a larger portion of the market. In a market with a high average travel cost, in which the Internet store has a relative advantage over the physical store, for example, the Internet store becomes a mass retailer serving a larger portion of the market. This result implies that the Internet becomes a more significant distribution channel in markets characterized by greater geographical dispersion of buyers, or as consumers become more proficient in Internet usage. The results indicate that the degree of price discrimination also varies depending on the distribution of consumer disutility in a market. The manufacturer in a market in which the average travel cost is higher than the average disutility of using the Internet has a stronger incentive for price discrimination than the manufacturer in a market where the average travel cost is relatively lower. We also find that the manufacturer has a stronger incentive to maintain a high price level when the average travel cost in a market is relatively low. Additionally, the retail competition effect due to Internet channel introduction strengthens as the average travel cost in a market decreases. This result indicates that a manufacturer's channel power relative to that of the independent physical retailer becomes stronger with decreasing average travel cost. This implication is counter-intuitive, because it is widely believed that the negative impact of Internet channel introduction on a competing physical retailer is more significant in a market like Russia, where consumers are more geographically dispersed, than in a market like Hong Kong, which has a condensed geographic distribution of consumers.