• Title/Summary/Keyword: dynamic variable


Assessment of water supply reliability in the Geum River Basin using univariate climate response functions: a case study for changing instreamflow managements (단변량 기후반응함수를 이용한 금강수계 이수안전도 평가: 하천유지유량 관리 변화를 고려한 사례연구)

  • Kim, Daeha;Choi, Si Jung;Jang, Su Hyung;Kang, Dae Hu
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.12
    • /
    • pp.993-1003
    • /
    • 2023
  • Due to increasing greenhouse gas emissions, the global mean temperature has risen by 1.1℃ above pre-industrial levels, and significant changes are expected in the functioning of water supply systems. In this study, we assessed the impacts of climate change and instreamflow management on water supply reliability in the Geum River basin, Korea. We proposed univariate climate response functions, in which mean precipitation and potential evaporation are coupled into a single explanatory variable, to assess the impacts of climate stress on multiple water supply reliabilities. To this end, natural streamflows were generated for the 19 sub-basins with the conceptual GR6J model. The simulated streamflows were then input into the Water Evaluation And Planning (WEAP) model. The dynamic optimization performed by WEAP allowed us to assess water supply reliability against the 2020 water demand projections. Results showed that when minimizing the water shortage of the entire river basin under the 1991-2020 climate, water supply reliability was lowest in the Bocheongcheon among the sub-basins. In a scenario where the priority of instreamflow maintenance was raised to match municipal and industrial water use, water supply reliability in the Bocheongcheon, Chogang, and Nonsancheon sub-basins decreased significantly. Stress tests with 325 sets of climate perturbations showed that water supply reliability in these three sub-basins decreased considerably under all climate stresses, while the sub-basins connected to large infrastructure did not change significantly. When the 2021-2050 climate projections were combined with the stress test results, water supply reliability in the Geum River basin was expected to improve overall, but if the priority of instreamflow maintenance is raised, water shortages are expected to worsen in geographically isolated sub-basins. We suggest that a climate response function built on a single explanatory variable can assess climate change impacts on the performance of many sub-basins simultaneously.
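The coupling of the two climate drivers into one explanatory variable can be illustrated with a small sketch. This is a hypothetical illustration only: the abstract does not give the paper's exact coupling form or reliability model, so an aridity-style ratio P/PET and synthetic reliability values are assumed here.

```python
import numpy as np

# Hypothetical stress-test grid of mean precipitation P and potential
# evaporation PET; 25 x 13 = 325 perturbation sets, matching the study's count.
P = np.linspace(800, 1600, 25)      # mm/yr, assumed range
PET = np.linspace(900, 1500, 13)    # mm/yr, assumed range
PP, EE = np.meshgrid(P, PET)

# Couple both drivers into ONE explanatory variable; an aridity-style
# ratio P/PET is one plausible choice (the paper's coupling may differ).
x = (PP / EE).ravel()

# Stand-in "simulated reliability" from a hydrologic/WEAP run, noisy in x.
rng = np.random.default_rng(1)
rel = np.clip(0.4 + 0.5 * (x - x.min()) / (x.max() - x.min())
              + rng.normal(scale=0.02, size=x.size), 0, 1)

# Univariate climate response function: reliability as a 1-D fit in x.
crf = np.poly1d(np.polyfit(x, rel, deg=2))
```

Because the response surface collapses to one axis, the same fitted curve can be evaluated for every sub-basin's perturbation set at negligible cost.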

INTRINSIC NMR ISOTOPE SHIFTS OF CYCLOOCTANONE AT LOW TEMPERATURE (저온에서의 싸이클로옥타논에 대한 고유동위원소 효과)

  • Jung, Miewon
    • Analytical Science and Technology
    • /
    • v.7 no.2
    • /
    • pp.213-224
    • /
    • 1994
  • Several isotopomers of cyclooctanone were prepared by selective deuterium substitution. Intrinsic isotope effects on the $^{13}C$ NMR chemical shifts of these isotopomers were investigated systematically at low temperature. These isotope effects were discussed in relation to the preferred boat-chair conformation of cyclooctanone. Deuterium isotope effects on NMR chemical shifts have been known for a long time. Especially in a conformationally mobile molecule, isotope perturbation can affect NMR signals through a combination of isotope effects on equilibria and intrinsic effects. The distinction between intrinsic and nonintrinsic effects is quite difficult at ambient temperature because both equilibrium and intrinsic isotope effects are involved. However, if the equilibria between possible conformers of cyclooctanone are slowed down enough on the NMR time scale by lowering the temperature, it should be possible to measure intrinsic isotope shifts from the separated signals at low temperature. $^{13}C$ NMR has been successfully utilized in studies of molecular conformation in solution, whether one deals with stable conformers or with molecules in which rapid interconversion occurs at ambient temperature. The study of dynamic processes in general requires analysis of spectra at several temperatures. Anet et al. carried out a $^1H$ NMR study of cyclooctanone at low temperature to freeze out a stable conformation, but were not initially able to deduce which conformation was stable because of the complexity of the alkyl region in the $^1H$ NMR spectrum. They also reported the $^1H$ and $^{13}C$ NMR spectra of the $C_9-C_{16}$ cycloalkanones at temperatures from $-80^{\circ}C$ to $-170^{\circ}C$, but they did not report a variable-temperature $^{13}C$ NMR study of cyclooctanone. For the analysis of the intrinsic isotope effect in relation to cyclooctanone conformation, $^{13}C$ NMR spectra were obtained in the present work at low temperatures (down to $-150^{\circ}C$) in order to find the chemical shifts at the temperature at which the dynamic process is "frozen out" on the NMR time scale and cyclooctanone can be observed as a stable conformation. Both the ring inversion and pseudorotational processes must be "frozen out" in order to see separate resonances for all eight carbons in cyclooctanone. In contrast to the $^1H$ spectra, slowing down only the ring inversion process has no apparent effect on the $^{13}C$ spectra, because exchange of environments within the pairs of methylene carbons can still occur through the pseudorotational process. Several isotopomers of cyclooctanone were prepared by selective deuterium substitution (Fig. 1): complete deuterium labeling at the C-2 and C-8 positions gave cyclooctanone-2,2,8,$8-D_4$; complete labeling at the C-2 and C-7 positions afforded the 2,2,7,$7-D_4$ isotopomer; di-deuteration at C-3 gave the 3,$3-D_2$ isotopomer; mono-deuteration provided the cyclooctanone-2-D, 4-D, and 5-D isotopomers; and partial deuteration at the C-2 and C-8 positions, with a chiral and difunctional base catalyst, gave the trans-2,$8-D_2$ isotopomer. These isotopomers were investigated systematically in relation to cyclooctanone conformation and intrinsic isotope effects on $^{13}C$ NMR chemical shifts at low temperature. The determination of the intrinsic effects could help in the analysis of the more complex effects at higher temperature. For quantitative analysis of the intrinsic isotope effects, the $^{13}C$ NMR spectrum was obtained for a mixture of the labeled and unlabeled compounds because the signal separations are very small.


Differentiation of True Recurrence from Delayed Radiation Therapy-related Changes in Primary Brain Tumors Using Diffusion-weighted Imaging, Dynamic Susceptibility Contrast Perfusion Imaging, and Susceptibility-weighted Imaging (확산강조영상, 역동적조영관류영상, 자화율강조영상을 이용한 원발성 뇌종양환자에서의 종양재발과 지연성 방사선치료연관변화의 감별)

  • Kim, Dong Hyeon;Choi, Seung Hong;Ryoo, Inseon;Yoon, Tae Jin;Kim, Tae Min;Lee, Se-Hoon;Park, Chul-Kee;Kim, Ji-Hoon;Sohn, Chul-Ho;Park, Sung-Hye;Kim, Il Han
    • Investigative Magnetic Resonance Imaging
    • /
    • v.18 no.2
    • /
    • pp.120-132
    • /
    • 2014
  • Purpose: To compare dynamic susceptibility contrast imaging, diffusion-weighted imaging, and susceptibility-weighted imaging (SWI) for the differentiation of tumor recurrence and delayed radiation therapy (RT)-related changes in patients treated with RT for primary brain tumors. Materials and Methods: We enrolled 24 patients treated with RT for various primary brain tumors, who showed newly appearing enhancing lesions more than one year after completion of RT on follow-up MRI. The enhancing lesions were confirmed as recurrences (n=14) or RT-changes (n=10). We calculated the mean values of normalized cerebral blood volume (nCBV), apparent diffusion coefficient (ADC), and the proportion of dark signal intensity on SWI (proSWI) for the enhancing lesions. All the values between the two groups were compared using the t-test. A multivariable logistic regression model was used to determine the best predictor for the differential diagnosis. The cutoff value of the best predictor obtained from receiver-operating characteristic curve analysis was applied to calculate the sensitivity, specificity, and accuracy of the diagnosis. Results: The mean nCBV value was significantly higher in the recurrence group than in the RT-change group (P=.004), and the mean proSWI was significantly lower in the recurrence group (P<.001). However, no significant difference was observed in the mean ADC values between the two groups. A multivariable logistic regression analysis showed that proSWI was the only independent variable for the differentiation; the sensitivity, specificity, and accuracy were 78.6% (11 of 14), 100% (10 of 10), and 87.5% (21 of 24), respectively. Conclusion: The proSWI was the most promising parameter for the differentiation of newly developed enhancing lesions more than one year after RT completion in brain tumor patients.
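As a quick arithmetic check, the reported diagnostic metrics follow directly from the confusion counts stated in the abstract (recurrence taken as the positive class):

```python
# Confusion counts from the abstract (recurrence = positive class).
tp, fn = 11, 3    # recurrences classified correctly / missed (11 of 14)
tn, fp = 10, 0    # RT-changes classified correctly / falsely flagged (10 of 10)

sensitivity = tp / (tp + fn)                  # 11/14
specificity = tn / (tn + fp)                  # 10/10
accuracy = (tp + tn) / (tp + tn + fp + fn)    # 21/24
print(f"sens={sensitivity:.1%} spec={specificity:.0%} acc={accuracy:.1%}")
# → sens=78.6% spec=100% acc=87.5%
```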

Cooperative Sales Promotion in Manufacturer-Retailer Channel under Unplanned Buying Potential (비계획구매를 고려한 제조업체와 유통업체의 판매촉진 비용 분담)

  • Kim, Hyun Sik
    • Journal of Distribution Research
    • /
    • v.17 no.4
    • /
    • pp.29-53
    • /
    • 2012
  • As marketers increasingly use diverse sales promotion methods, manufacturers and retailers in a channel often use them as well. In this context, diverse issues in sales promotion management arise. One of them is the issue of unplanned buying. Consumers' unplanned buying clearly benefits the retailer but not the manufacturer. This asymmetric influence of unplanned buying should be handled prudently because it can provoke channel conflict. However, there have been few studies on sales promotion management strategy that consider unplanned buying and its asymmetric effect on the retailer and manufacturer. In this paper, we seek a better way for a manufacturer in a channel to promote performance through the retailer's sales promotion efforts when there is a potential unplanned buying effect. Using game-theoretic modeling, we investigate the optimal cost-sharing level between the manufacturer and retailer when there is an unplanned buying effect. We investigate the following issues: (1) What structure of cost-sharing mechanism should the manufacturer and retailer in a channel choose when the unplanned buying effect is strong (or weak)? (2) How much payoff can the manufacturer and retailer in a channel obtain when the unplanned buying effect is strong (or weak)? We focus on the impact of the unplanned buying effect on the optimal cost-sharing mechanism for sales promotions between a manufacturer and a retailer in the same channel, so we consider two players interacting in the same distribution channel. The model is a complete-information game in which the manufacturer is the Stackelberg leader and the retailer is the follower. The variables in the model are listed in the following table. The manufacturer's objective function in the basic game is ${\Pi}={\Pi}_1+{\Pi}_2$, where ${\Pi}_1=w_1(1+L-p_1)-{\psi}^2$ and ${\Pi}_2=w_2(1-{\epsilon}L-p_2)$. The retailer's is ${\pi}={\pi}_1+{\pi}_2$, where ${\pi}_1=(p_1-w_1)(1+L-p_1)-L(L-{\psi})+p_u(b+L-p_u)$ and ${\pi}_2=(p_2-w_2)(1-{\epsilon}L-p_2)$. The model has four stages over two periods. The stages of the game are as follows. (Stage 1) The manufacturer sets the wholesale price of the first period ($w_1$) and the cost-sharing level of the channel sales promotion (${\Psi}$). (Stage 2) The retailer sets the retail price of the focal brand ($p_1$), the price of the unplanned buying item ($p_u$), and the sales promotion level ($L$). (Stage 3) The manufacturer sets the wholesale price of the second period ($w_2$). (Stage 4) The retailer sets the retail price of the second period ($p_2$). Since the model is a dynamic game, we derive a subgame perfect equilibrium to obtain theoretical and managerial implications. To obtain the subgame perfect equilibrium, we use backward induction, solving the problems backward from Stage 4 to Stage 1. By fully knowing the follower's optimal reaction to the leader's potential actions, we can fold the game tree backward. The equilibrium value of each variable in the basic game is given in the following table. We also analyzed an additional game with varying levels of the retailer's procurement cost. The manufacturer's objective function in the additional game is the same as in the basic game: ${\Pi}={\Pi}_1+{\Pi}_2$, where ${\Pi}_1=w_1(1+L-p_1)-{\psi}^2$ and ${\Pi}_2=w_2(1-{\epsilon}L-p_2)$. The retailer's objective function, however, differs from that of the basic game: ${\pi}={\pi}_1+{\pi}_2$, where ${\pi}_1=(p_1-w_1)(1+L-p_1)-L(L-{\psi})+(p_u-c)(b+L-p_u)$ and ${\pi}_2=(p_2-w_2)(1-{\epsilon}L-p_2)$. The equilibrium value of each variable in this additional game is given in the following table. The major findings of the current study are as follows: (1) As the unplanned buying effect gets stronger, the manufacturer and retailer should increase the cost for sales promotion. (2) As the unplanned buying effect gets stronger, the manufacturer should decrease its sharing portion of the total cost for sales promotion. (3) The manufacturer's profit is an increasing function of the unplanned buying effect. (4) All of the effects in (1)-(3) are attenuated as the retailer's procurement cost to acquire unplanned buying items increases. The authors discuss the implications of these results for marketers in manufacturers and retailers. The current study is the first to suggest managerial implications for how a manufacturer should share the sales promotion cost with the retailer in a channel according to the level of consumers' unplanned buying potential.
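The backward induction for the second period can be sketched symbolically from the objective functions above. This is a minimal illustration covering Stages 4 and 3 only; Stages 2 and 1 are solved the same way by folding the results back into the first-period problems.

```python
import sympy as sp

# Symbols from the abstract's objective functions.
w2, p2, L, eps = sp.symbols('w_2 p_2 L epsilon', positive=True)

# Stage 4: the retailer chooses the second-period retail price p2.
pi2 = (p2 - w2) * (1 - eps * L - p2)
p2_star = sp.solve(sp.diff(pi2, p2), p2)[0]   # p2* = (1 - eps*L + w2)/2

# Stage 3: the manufacturer, anticipating p2*, chooses the wholesale price w2.
Pi2 = w2 * (1 - eps * L - p2_star)
w2_star = sp.solve(sp.diff(Pi2, w2), w2)[0]   # w2* = (1 - eps*L)/2

# Folding back: the retailer's equilibrium second-period profit, a function
# of L that then enters the Stage 2 promotion-level decision.
pi2_star = sp.simplify(pi2.subs(p2, p2_star).subs(w2, w2_star))
print(pi2_star)  # equals (1 - eps*L)**2 / 16
```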


Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Corporate defaults affect not only stakeholders such as managers, employees, creditors, and investors of the bankrupt companies, but also ripple through the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, analysis of past corporate defaults focused on specific variables, and when the government carried out restructuring immediately after the global financial crisis, it focused only on certain main variables such as the 'debt ratio'. A multifaceted study of corporate default prediction models is essential to serve diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, a total collapse in a single moment. The key variables used in corporate default prediction vary over time: comparing the analyses of Beaver (1967, 1968) and Altman (1968) with Deakin's (1972) study confirms that the major factors affecting corporate failure have changed. Grice's (2001) study of Zmijewski's (1984) and Ohlson's (1980) models likewise found that the importance of predictive variables shifts. However, past studies use static models, and most do not consider the changes that occur over time. Therefore, to construct consistent prediction models, it is necessary to compensate for the time-dependent bias by means of a time series analysis algorithm that reflects dynamic change. Motivated by the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets of 7, 2, and 1 years, respectively. To construct a bankruptcy model that is consistent over time, we first train a time series deep learning model using the data before the financial crisis (2000-2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted with validation data covering the financial crisis period (2007-2008). As a result, we construct a model that shows a pattern similar to the results on the training data and excellent predictive power. After that, each bankruptcy prediction model is retrained on the combined training and validation data (2000-2008), applying the optimal parameters found in validation. Finally, each corporate default prediction model is evaluated and compared using the test data (2009), based on the models trained over the nine years. The usefulness of the corporate default prediction model based on the deep learning time series algorithm is thereby demonstrated. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time series model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms is compared. Corporate data pose the difficulties of 'nonlinear variables', 'multicollinearity' among variables, and 'lack of data'. The logit model handles the nonlinearity, the Lasso regression model addresses the multicollinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis to automated AI analysis and, ultimately, toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis at corporate default prediction modeling and is more effective in predictive power. Amid the Fourth Industrial Revolution, the Korean government and governments overseas are working to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry is still insufficient. As an initial study on deep learning time series analysis of corporate defaults, it is hoped that this work will serve as comparative reference material for non-specialists beginning studies that combine financial data with deep learning time series algorithms.
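The chronological 7/2/1-year split and Lasso-style variable selection can be illustrated with a small sketch. Synthetic data and an L1-penalized logistic regression stand in for the paper's actual financial ratios and models; all names and parameter values here are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical panel: 10 annual cross-sections (2000-2009), 100 firms each,
# with 8 financial ratios per firm (stand-ins for the study's real variables).
years = np.repeat(np.arange(2000, 2010), 100)
X = rng.normal(size=(1000, 8))
# Synthetic default flag driven by two ratios (e.g. a debt-ratio-like signal).
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=1000) > 1.0).astype(int)

# Chronological split as in the study: 7y train / 2y validation / 1y test.
train = years <= 2006
valid = (years >= 2007) & (years <= 2008)   # would drive hyperparameter tuning
test = years == 2009

# L1 (Lasso-style) penalty zeroes out uninformative ratios during fitting.
clf = LogisticRegression(penalty='l1', solver='liblinear', C=0.5)
clf.fit(X[train], y[train])
selected = np.flatnonzero(clf.coef_[0])
print('selected ratio indices:', selected)
print('2009 test accuracy: %.2f' % clf.score(X[test], y[test]))
```

Splitting by calendar year rather than at random is the key design choice here: it prevents post-crisis observations from leaking into a model that is supposed to be evaluated on them.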

Clinical and Electrophysiological Study on Guillain-Barré Syndrome (Guillain-Barré 증후군의 임상적 및 전기생리학적 연구)

  • Yun, Sung-Hwan;Hah, Jung-Sang;Joo, Sung-Gyun;Cho, Yong-Kook;Kim, Jung-Hyun;Chung, Ji-Yeun
    • Journal of Yeungnam Medical Science
    • /
    • v.22 no.1
    • /
    • pp.52-61
    • /
    • 2005
  • Background: Guillain-Barré syndrome is a recognizable clinical entity characterized by rapidly evolving symmetric limb weakness, loss of tendon reflexes, absent or mild sensory signs, and variable autonomic dysfunction. This study evaluated the clinical and electrophysiological findings retrospectively. Materials and Methods: Forty-five patients with Guillain-Barré syndrome, admitted to Yeungnam University Hospital over the six years from Jan. 1994 to Dec. 1999, were investigated. The correlation between the clinical manifestations and the electrophysiological findings was evaluated. Results: The male-to-female ratio was 1.8:1, and there was a peak seasonal incidence in winter. A preceding illness was noted in 66.7% of cases, with upper respiratory tract infection the most common. The most common clinical manifestations were loss of tendon reflexes and ascending muscle weakness and paralysis. Cerebrospinal fluid examinations revealed albuminocytologic dissociation in 33 cases (73.3%). Intravenous immunoglobulin therapy was performed in 29 cases (64.4%). The sequential electrophysiological abnormalities were most marked at 2 to 4 weeks after onset; at that time the most significant change was a decrease in compound muscle action potential amplitude. The 45 patients were subclassified using the clinical and electrophysiological data. Conclusion: The results of this study concurred with other research on the clinical and electrophysiological features of Guillain-Barré syndrome. However, an extensive and dynamic investigation is necessary to determine the reason for the peak seasonal incidence in winter.


A Real-Time Stock Market Prediction Using Knowledge Accumulation (지식 누적을 이용한 실시간 주식시장 예측)

  • Kim, Jin-Hwa;Hong, Kwang-Hun;Min, Jin-Young
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.109-130
    • /
    • 2011
  • One of the major problems in the area of data mining is the size of the data, as most data sets these days have huge volume. Streams of data are normally accumulated into data storage or databases: transactions on the internet, mobile devices, and ubiquitous environments produce streams of data continuously. Some data sets are simply buried unused inside huge data storage because of their size; others are lost as soon as they are created because, for many reasons, they are never saved. How to use such large data sets, and how to use data on a stream efficiently, are challenging questions in the study of data mining. Stream data is a data set that is accumulated into data storage from a data source continuously, and its size in many cases becomes increasingly large over time. Mining information from this massive data takes too many resources, such as storage, money, and time. These characteristics of stream data make it difficult and expensive to store all the stream data accumulated over time. On the other hand, if one uses only recent or partial data to mine information or patterns, valuable and useful information can be lost. To avoid these problems, this study suggests a method that efficiently accumulates information or patterns in the form of a rule set over time. A rule set is mined from a data set in the stream, and this rule set is accumulated into a master rule set storage, which also serves as a model for real-time decision making. One of the main advantages of this method is that it takes much smaller storage space than the traditional method of saving the whole data set. Another advantage is that the accumulated rule set is used as a prediction model: prompt responses to user requests are possible at any time because the rule set is always ready for decision making. This makes real-time decision making possible, which is the greatest advantage of this method. Based on theories of ensemble approaches, combining many different models can produce a better-performing prediction model, and the consolidated rule set covers the entire data set, while the traditional sampling approach covers only part of it. This study uses stock market data, a heterogeneous data set whose characteristics vary over time: the indexes in stock market data fluctuate in different situations whenever an event influences the stock market index, so the variance of the values in each variable is large compared with that of a homogeneous data set. Prediction with a heterogeneous data set is naturally much more difficult than with a homogeneous one, as it is more difficult to predict in unpredictable situations. This study tests two general mining approaches and compares their prediction performance with that of the method suggested in this study. The first approach induces a rule set from the recent data set to predict the new data set. The second induces a rule set, every time one has to predict a new data set, from all the data accumulated from the beginning. We found that neither of these performs as well as the accumulated rule set method. Furthermore, the study reports experiments with different prediction models. The first builds a prediction model with only the more important rule sets, and the second uses all the rule sets, assigning weights to the rules based on their performance; the second approach shows better performance. The experiments also show that the suggested method can be an efficient approach for mining information and patterns from stream data. One limitation is that its application here is bounded to stock market data; a more dynamic real-time stream data set is desirable for further application of this method. Another open problem is that, as the number of rules increases over time, the method must efficiently manage special rules such as redundant or conflicting rules.
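The master-rule-set idea can be sketched in a few lines. This is a hypothetical illustration: the abstract does not specify the paper's actual rule representation, weighting scheme, or voting mechanism, so a running-average accuracy weight and a weighted vote are assumed here.

```python
from collections import defaultdict

# Master rule set: rule -> running-average accuracy used as its vote weight.
# A rule is a (condition, predicted_label) pair; both are assumed stand-ins.
master = defaultdict(lambda: {'weight': 0.0, 'n': 0})

def accumulate(master, batch_rules):
    """Merge rules mined from one stream batch, updating each rule's
    running-average accuracy instead of storing the raw data."""
    for rule, acc in batch_rules:
        e = master[rule]
        e['weight'] = (e['weight'] * e['n'] + acc) / (e['n'] + 1)
        e['n'] += 1

def predict(master, fired_conditions):
    """Weighted vote of the accumulated rules whose conditions fire."""
    votes = defaultdict(float)
    for (cond, label), e in master.items():
        if cond in fired_conditions:
            votes[label] += e['weight']
    return max(votes, key=votes.get) if votes else None

# Two batches arrive over time; each mined rule carries its batch accuracy.
accumulate(master, [(('ma5>ma20', 'up'), 0.7), (('vol>avg', 'down'), 0.6)])
accumulate(master, [(('ma5>ma20', 'up'), 0.9)])
print(predict(master, {'ma5>ma20', 'vol>avg'}))  # 'up' outvotes 'down'
```

Only the rules and their statistics persist between batches, which is what keeps storage small relative to retaining the full stream.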

A Scalable and Modular Approach to Understanding of Real-time Software: An Architecture-based Software Understanding (ARSU) and the Software Re/reverse-engineering Environment (SRE) (실시간 소프트웨어의 조절적·단위적 이해 방법: ARSU(Architecture-based Software Understanding)와 SRE(Software Re/reverse-engineering Environment))

  • Lee, Moon-Kun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.12
    • /
    • pp.3159-3174
    • /
    • 1997
  • This paper reports research to develop a methodology and a tool for understanding very large and complex real-time software. The methodology and the tool, mostly developed by the author, are called Architecture-based Real-time Software Understanding (ARSU) and the Software Re/reverse-engineering Environment (SRE), respectively. Due to size and complexity, such software is commonly very hard to understand during the reengineering process. This research, however, facilitates scalable re/reverse-engineering of real-time software based on its architecture, viewed in three dimensions: structural, functional, and behavioral. First, the structural view reveals the overall architecture, the specification (outline) view, and the algorithm (detail) view of the software, based on a hierarchically organized parent-child relationship. The basic building block of the architecture is the Software Unit (SWU), generated by user-defined criteria. The architecture facilitates navigation of the software in a top-down or bottom-up manner. It captures the specification and algorithm views at different levels of abstraction, and it also shows the functional and behavioral information at these levels. Second, the functional view includes graphs of data/control flow, input/output, definition/use, variable/reference, etc. Each feature of the view conveys a different kind of functional information about the software. Third, the behavioral view includes state diagrams, interleaved event lists, etc., showing the dynamic properties of the software at runtime. Besides these views, there are a number of other documents: capabilities, interfaces, comments, code, etc. One of the most powerful characteristics of this approach is the capability of abstracting and exploding this dimensional information in the architecture through navigation. These capabilities establish the foundation for scalable and modular understanding of the software, allowing engineers to extract reusable components from it during the reengineering process.
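The parent-child SWU hierarchy and its top-down navigation can be sketched as follows. The names, fields, and example tree are hypothetical; the ARSU tool's actual data model is not given in the abstract.

```python
from dataclasses import dataclass, field
from typing import Iterator, List, Tuple

@dataclass
class SWU:
    """Software Unit: the basic building block of the ARSU architecture
    (sketch; real SWUs would also carry specification/algorithm views)."""
    name: str
    children: List['SWU'] = field(default_factory=list)

    def top_down(self, depth: int = 0) -> Iterator[Tuple[int, str]]:
        """Navigate the hierarchy from this unit toward the leaves."""
        yield depth, self.name
        for child in self.children:
            yield from child.top_down(depth + 1)

root = SWU('system', [SWU('scheduler', [SWU('dispatch')]), SWU('io')])
for depth, name in root.top_down():
    print('  ' * depth + name)
```

Bottom-up navigation would simply keep a parent reference on each SWU and walk it toward the root.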


Analysis of Slope Stability Considering the Saturation Depth Ratio by Rainfall Infiltration in Unsaturated Soil (불포화토 내 강우침투에 따른 포화깊이비를 고려한 사면안정해석)

  • Chae, Byung-Gon;Park, Kyu-Bo;Park, Hyuck-Jin;Choi, Jung-Hae;Kim, Man-Il
    • The Journal of Engineering Geology
    • /
    • v.22 no.3
    • /
    • pp.343-351
    • /
    • 2012
  • This study proposes a modified equation to calculate the factor of safety of an infinite slope considering the saturation depth ratio, a new variable calculated from rainfall infiltration into unsaturated soil. For the proposed equation, this study introduces the concepts of the saturation depth ratio and the subsurface flow depth. The factor of safety of an infinite slope is analyzed by sequentially calculating the effective upslope contributing area, the subsurface flow depth, and the saturation depth ratio based on quasi-dynamic wetness index theory. This calculation process makes it possible to understand changes in the factor of safety and the infiltration behavior for individual rainfall events. This study analyzes stability changes in an infinite slope, considering the saturation depth ratio of the soil, based on the proposed equation and the results of soil column tests performed by Park et al. (2011a). The analysis results show that changes in the factor of safety depend on the saturation depth ratio, which reflects rainfall infiltration into unsaturated weathered gneiss soil. Under continuous rainfall with intensities of 20 and 50 mm/h, the times taken for the factor of safety to decrease below 1.3 were 2.86-5.38 hours and 1.34-2.92 hours, respectively; for repeated rainfall events, the time taken was between 3.27 and 5.61 hours. These results demonstrate that changes in the factor of safety of an infinite slope can be understood in terms of the saturation depth ratio.
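The abstract does not give the authors' modified equation, but the classical infinite-slope factor of safety with a saturation depth ratio m (saturated thickness divided by total soil thickness above the slip surface) shows the mechanism they build on. The parameter values below are illustrative only, not taken from the study.

```python
import math

def infinite_slope_fs(c, phi_deg, beta_deg, gamma, z, m, gamma_w=9.81):
    """Classical infinite-slope factor of safety (textbook baseline, not the
    authors' modified equation).
    c: effective cohesion (kPa), phi_deg: friction angle (deg),
    beta_deg: slope angle (deg), gamma: soil unit weight (kN/m^3),
    z: soil depth above the slip surface (m), m: saturation depth ratio (0..1)."""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    resisting = c + (gamma - m * gamma_w) * z * math.cos(beta) ** 2 * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# A rising saturation depth ratio (rainfall infiltration) lowers the safety factor.
for m in (0.0, 0.5, 1.0):
    fs = infinite_slope_fs(c=5, phi_deg=30, beta_deg=25, gamma=19, z=2, m=m)
    print(f"m={m}: FS={fs:.2f}")
```

For c = 0 and m = 0 the expression reduces to the familiar dry cohesionless result FS = tan φ / tan β, which is a convenient sanity check.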

The Kinematic Analysis of Jumeok Jireugi in Taekwondo of Security Martial Arts (경호무도의 태권도 주먹 지르기 동작 운동학적 분석)

  • Lee, See-Hwan;Yang, Young-Mo
    • Korean Security Journal
    • /
    • no.31
    • /
    • pp.187-207
    • /
    • 2012
  • The purpose of this study was to analyze the punching movement from the horseback-riding stance, one of the basic movements in Taekwondo, with 3D images, together with kinematic variables such as time, velocity, angle, angular velocity, and angular acceleration according to movement type. It also aimed to examine the characteristics of each type and suggest instructional methods for the correct punching movement. For these purposes, three members of the College Taekwondo Poomsae Demonstration Squad served as subjects. The research findings led to the following conclusions: 1. Performance time of the punching movement: In Section 1, Types 1 and 2 recorded $0.24{\pm}0.07s$ and $0.42{\pm}0.08s$, respectively, for the punching movement from the horseback-riding stance. While Type 1 took less time for the punching movement, Type 2 took less time for the take-back relative to each section's share of the total performance time. 2. Linear velocity and linear acceleration: Each type recorded a different linear velocity in each aspect, but for both types the highest linear velocity occurred at the moment of impact. Type 2 recorded its highest linear velocity in Aspect 4, the moment of impact. 3. Joint angle: There were no large outward differences in joint angle during the punching movement between Types 1 and 2 at the moment of impact, but in Type 2 the subjects assumed more dynamic positions, with more diverse changes in joint angle. 4. Angular velocity and angular acceleration: During the punching movement of Type 1, Aspect 3, the moment of impact, recorded angular velocities of $0.79{\pm}0.02deg/s$, $0.91{\pm}0.04deg/s$, and $5.24{\pm}0.09deg/s$ at the pelvis, shoulder, and wrist, respectively. During the punching movement of Type 2, Aspect 3 recorded angular velocities of $1.32{\pm}0.03deg/s$, $0.21{\pm}0.03deg/s$, and $4.98{\pm}0.08deg/s$ at the shoulder, wrist, and pelvis, respectively. In Aspect 3 of Type 2, the angular acceleration at the right wrist joint was $176.24{\pm}1.11deg/s^2$, larger than that at the moment of impact in Type 1.
