• Title/Summary/Keyword: forecast lead time (예측 선행시간)

Development and Evaluation of Traffic Conflict Criteria at an Intersection (교차로 교통상충기준 개발 및 평가에 관한 연구)

  • 하태준;박형규;박제진;박찬모
    • Journal of Korean Society of Transportation
    • /
    • v.20 no.2
    • /
    • pp.105-115
    • /
    • 2002
  • For many years, traffic accident statistics have been the most direct measure of safety for a signalized intersection. However, it takes more than 2 or 3 years to collect accident data of adequate sample size, and the accident data itself is unreliable because of the difference between the accidents recorded and the accidents that actually occurred. It is therefore difficult to evaluate the safety of an intersection using accident data. For these reasons, the traffic conflict technique (TCT) was developed as a quick and accurate means of evaluating intersection safety. However, collected conflict data is not always reliable because clear criteria for what constitutes a conflict have been absent. This study developed objective and accurate conflict criteria, described below, based on traffic engineering theory. First, a rear-end conflict is recorded when the following vehicle takes an evasive maneuver against the lead vehicle within a certain distance, according to car-following theory. Second, a lane-change conflict is recorded when the following vehicle takes an evasive maneuver against a lead vehicle that is changing lanes within the minimum stopping distance of the following vehicle. Third, cross and opposing-left-turn conflicts are recorded when a vehicle that receives a green signal takes an evasive maneuver against a vehicle that has lost its right-of-way while crossing the intersection. A correlation analysis between conflicts and accidents verified that the conflict criteria suggested in this study are applicable. Regression analyses between accidents and conflicts, and between EPDO accidents and conflicts, showed that evaluating intersection safety from conflict data is possible. Adopting the conflict criteria suggested in this study would be a quick and accurate method for diagnosing safety and operational deficiencies and for evaluating improvements at intersections. Further research is required to refine the suggested conflict criteria and extend their application. In addition, other types of conflict criteria, not included in this study, need to be developed in later work.
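
The rear-end criterion above rests on the follower's minimum stopping distance from car-following theory. A minimal sketch of how such a stopping-distance check might look; the reaction-time and deceleration values are illustrative assumptions, not the paper's calibrated criteria:

```python
# Illustrative rear-end conflict check based on stopping distance.
# reaction_time_s and decel_ms2 are assumed values for this sketch,
# not the calibrated criteria from the paper.

def stopping_distance(speed_ms: float, reaction_time_s: float = 1.0,
                      decel_ms2: float = 3.4) -> float:
    """Minimum stopping distance: reaction distance plus braking distance."""
    return speed_ms * reaction_time_s + speed_ms ** 2 / (2.0 * decel_ms2)

def is_rear_end_conflict(gap_m: float, follower_speed_ms: float,
                         evasive_maneuver: bool) -> bool:
    """Flag a conflict when the follower takes an evasive maneuver while the
    gap to the lead vehicle is shorter than its minimum stopping distance."""
    return evasive_maneuver and gap_m < stopping_distance(follower_speed_ms)

# Example: follower at 60 km/h, 25 m behind a braking lead vehicle.
print(is_rear_end_conflict(25.0, 60 / 3.6, True))  # True: gap < ~57 m
```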

Relationship Between Standardized Precipitation Index and Groundwater Levels: A Proposal for Establishment of Drought Index Wells (표준강수지수와 지하수위의 상관성 평가 및 가뭄관측정 설치 방안 고찰)

  • Kim Gyoo-Bum;Yun Han-Heum;Kim Dae-Ho
    • Journal of Soil and Groundwater Environment
    • /
    • v.11 no.3
    • /
    • pp.31-42
    • /
    • 2006
  • Drought indices such as the PDSI (Palmer Drought Severity Index), SWSI (Surface Water Supply Index), and SPI (Standardized Precipitation Index) have been developed to assess and forecast drought intensity. To examine the applicability of groundwater level data to drought assessment, a correlation analysis between SPI and groundwater levels was conducted for each time series during the 2001 drought season. Comparison of SPI with groundwater levels in the shallow wells of three national groundwater monitoring stations, Chungju Gageum, Yangpyung Gaegun, and Yeongju Munjeong, shows that the two factors are highly correlated. For SPI with a duration of 1 month, the cross-correlation coefficients between the two factors are 0.843 at Chungju Gageum, 0.825 at Yangpyung Gaegun, and 0.737 at Yeongju Munjeong. The time lag between the peak values of the two factors is nearly zero for SPI with a 1-month duration, which means that groundwater level fluctuation closely tracks the SPI. Moreover, for SPI with a duration of 3 months, the groundwater level can serve as a leading indicator that predicts the SPI value 1 week ahead. Some of the national groundwater monitoring stations can be designated as DIWs (Drought Index Wells) based on a detailed survey of site characteristics, and new DIWs also need to be drilled to assess and forecast drought across the country.
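
The lagged cross-correlation analysis described above can be sketched as follows; the series here are synthetic stand-ins for the SPI and groundwater-level data, and NumPy is assumed:

```python
import numpy as np

def lagged_correlation(spi: np.ndarray, gw_level: np.ndarray, max_lag: int = 30):
    """Pearson correlation between SPI and groundwater level at each lag.
    A positive lag means the groundwater level leads SPI by `lag` steps."""
    results = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = spi[lag:], gw_level[:len(gw_level) - lag]
        else:
            a, b = spi[:lag], gw_level[-lag:]
        results[lag] = np.corrcoef(a, b)[0, 1]
    return results

# Synthetic example: groundwater level leading SPI by 7 days.
rng = np.random.default_rng(0)
gw = rng.standard_normal(365).cumsum()
spi = np.roll(gw, 7) + 0.3 * rng.standard_normal(365)
corrs = lagged_correlation(spi, gw)
best_lag = max(corrs, key=corrs.get)
print(best_lag, round(corrs[best_lag], 3))  # expect best_lag near 7
```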

A Study on the Forecasting Model on Market Share of a Retail Facility -Focusing on Extension of Interaction Model- (유통시설의 시장점유율 예측 모델에 관한 연구 -상호작용 모델의 확장을 중심으로)

  • 최민성
    • Journal of Distribution Research
    • /
    • v.5 no.2
    • /
    • pp.49-68
    • /
    • 2001
  • This study summarizes results on optimal location selection and presents the limitations and directions of the research. To reach this objective, the study selected and tested an interaction model that obtains the coordinates of a location through an optimization technique. The study first used the model's original variables, but the results diverged from reality. To overcome this gap, a market survey was performed and new variables were identified (primary data such as price, quality, and assortment of goods, and secondary data such as aggregate area, shop area, and the number of cars in the parking lot). An optimal variable was then determined by empirical analysis comparing the actual market share in 1988 with the market share produced by the model. However, the market share under each variable did not reflect reality, owing to the assumed λ value in the model. To improve on this, a sensitivity analysis was performed, increasing the λ value incrementally from 1.0 to 2.9. The result showed the highest agreement with the 1998 market share ratio at a λ of 1.0. Applying weights to variables from the primary and secondary data showed that more variables from the primary data coincided with the actual sales ranking. Although this study has some limitations and room for improvement, a marketer using this extended model should obtain more meaningful results.
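
The interaction model and the λ sensitivity sweep can be illustrated with a Huff-type formulation (one common form of spatial interaction model, assumed here since the abstract does not spell out its equation); the store attractiveness values and distances are made up, and the attractiveness exponent is fixed at 1 for simplicity:

```python
import numpy as np

def huff_market_share(attractiveness, distances, lam):
    """Huff-type interaction model: a store's share at a demand point is
    S_j / d_j**lam, normalized over all competing stores."""
    utility = np.asarray(attractiveness) / np.asarray(distances) ** lam
    return utility / utility.sum()

# Illustrative attractiveness (e.g., floor area) and distances for 3 stores.
attract = [1200.0, 800.0, 1500.0]
dist = [2.0, 1.0, 3.5]

# Sensitivity analysis over lambda, mirroring the 1.0-2.9 sweep in the abstract.
for lam in np.arange(1.0, 3.0, 0.1):
    shares = huff_market_share(attract, dist, lam)
    print(f"lambda={lam:.1f}  shares={np.round(shares, 3)}")
```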

THE EFFECT OF ORTHODONTIC TREATMENT BY PREMOLAR EXTRACTION ON THE PRONUNCIATION OF THE KOREAN CONSONANTS (소구치 발거를 통한 교정치료가 한국어 자음의 발음에 미치는 영향)

  • Lee, Jeong-Hee;Yoon, Young-Jooh;Kim, Kwang-Won
    • The Korean Journal of Orthodontics
    • /
    • v.27 no.1
    • /
    • pp.91-103
    • /
    • 1997
  • This paper studied the influence of orthodontic treatment on pronunciation. We compared the duration and acoustic wave patterns of Korean consonants pronounced by a control group with those of a patient who had four premolars extracted and had received orthodontic treatment. The results were as follows: 1. Compared to the control group, the treatment group had a longer consonant duration for all consonants but "ㅅ(s)" and "ㅌ(tʰ)" in CV (consonant-vowel) pairs. Especially for "ㅈ(dz)" and "ㅆ(φʰ)" in CV pairs, and "ㄷ(d)" in VCV (vowel-consonant-vowel) clusters, consonant duration showed a sharp contrast between the control and treatment groups. 2. There were clear differences in the acoustic wave patterns of "ㅉ(ts)", "ㅆ(φʰ)", and "ㅊ(cʰ)", all in VCV clusters. The acoustic wave of "ㅉ(ts)" pronounced by the treatment group was stronger than the control group's, most remarkably in the transition where the "ㅉ(ts)" sound flows into the following vowel. When a preceding vowel shifted to the consonant "ㅆ(φʰ)", the attack property of the consonant appeared clearly in the acoustic waves of the treatment group, while in the control group the starting point of the consonant was indistinct. Consonant duration for the treatment group was longer, and zero-crossing points appeared more frequently in the acoustic wave. For "ㅊ(cʰ)", the treatment group produced a strong acoustic wave with an obvious aspiration property. 3. When the treatment group pronounced "ㄷ(d)" and "ㅈ(dz)" in CV pairs, the acoustic wave was similar to that of the aspirated "ㅌ(tʰ)" and "ㅊ(cʰ)". 4. The aspirated "ㅌ(tʰ)" and "ㅊ(cʰ)" pronounced by the treatment group showed a stronger airstream and acoustic waveform.
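
The duration and zero-crossing measurements underlying these comparisons can be sketched with simple waveform statistics; this assumes a mono PCM signal held in a NumPy array, and is generic signal processing rather than the paper's own analysis pipeline:

```python
import numpy as np

def zero_crossing_count(signal: np.ndarray) -> int:
    """Number of sign changes (zero crossings) in the waveform."""
    signs = np.sign(signal)
    signs[signs == 0] = 1          # treat exact zeros as positive
    return int(np.sum(signs[1:] != signs[:-1]))

def segment_duration(n_samples: int, sample_rate: int) -> float:
    """Duration of a consonant segment in seconds."""
    return n_samples / sample_rate

# Example on a synthetic 100 Hz tone sampled at 8 kHz for 50 ms.
sr = 8000
t = np.arange(int(0.05 * sr)) / sr
tone = np.sin(2 * np.pi * 100 * t)
print(zero_crossing_count(tone), segment_duration(len(tone), sr))
```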

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data comprised 10,545 rows and 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used the default event itself as the learning target, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This resolves the data imbalance caused by the scarcity of default events, a limitation of the existing methodology, as well as the failure to reflect the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risk of unlisted companies without stock price information can also be derived appropriately. The model can therefore provide stable default risk assessment for companies that are difficult to rate with traditional credit rating models, such as small and medium-sized companies and startups. Although machine learning has recently been applied actively to corporate default prediction, most studies make predictions with a single model, so model bias remains an issue. Given that corporate default risk information is widely used in the market and sensitivity to differences in default risk is high, a stable and reliable valuation methodology is required, and strict standards apply to the calculation method: the credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, that consider past statistical data, experience with credit ratings, and changes in future market conditions. This study reduced individual-model bias by using stacking ensemble techniques that synthesize various machine learning models, capturing complex nonlinear relationships between default risk and corporate information while retaining the low computation time of machine-learning-based default risk prediction. To produce the sub-model forecasts used as input to the stacking ensemble, the training data were divided into seven pieces and the sub-models were trained on the divided sets. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and each model's predictive power was verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, the best-performing single model. Next, to check for statistically significant differences, pairs were constructed between the stacking ensemble model's forecasts and those of each individual model. Because Shapiro-Wilk normality tests showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank-sum test was used to check whether the two forecasts in each pair differed significantly. The forecasts of the stacking ensemble model showed statistically significant differences from those of the MLP and CNN models. In addition, this study provides a methodology by which existing credit rating agencies can adopt machine-learning-based default risk prediction, since traditional credit rating models can be included as sub-models in calculating the final default probability. The stacking ensemble technique proposed here can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope this research will be used as a resource to overcome the limitations of existing machine-learning-based models and to increase their practical use.
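
A minimal sketch of the stacking procedure described above, assuming scikit-learn. The two sub-models and the seven-fold out-of-fold scheme mirror the abstract; the synthetic features, the logistic-regression meta-learner, and all hyperparameters are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the financial-ratio features and default label.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

sub_models = [RandomForestClassifier(n_estimators=200, random_state=0),
              MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)]

# Out-of-fold predictions from each sub-model become meta-features,
# mirroring the paper's division of the training data into seven pieces.
kf = KFold(n_splits=7, shuffle=True, random_state=0)
meta_features = np.zeros((len(X), len(sub_models)))
for train_idx, hold_idx in kf.split(X):
    for j, model in enumerate(sub_models):
        model.fit(X[train_idx], y[train_idx])
        meta_features[hold_idx, j] = model.predict_proba(X[hold_idx])[:, 1]

# The meta-learner combines the sub-model forecasts into a final probability.
meta_learner = LogisticRegression().fit(meta_features, y)
print(meta_learner.predict_proba(meta_features)[:5, 1])
```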

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • A data center is a physical facility for housing computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. IT facility failures in particular are irregular because of interdependence among devices, which makes their causes difficult to identify. Previous studies on failure prediction in data centers treated each server as a single, isolated state, without assuming interaction among devices. This study therefore classified data center failures into those occurring inside the server (Outage A) and those occurring outside the server (Outage B), and focused on analyzing complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. On the other hand, the causes of failures occurring within servers are hard to determine, and adequate prevention has not yet been achieved, precisely because server failures do not occur in isolation: a failure can trigger failures in other servers or be triggered by them. In other words, while existing studies assumed independent servers, this study assumes that failures propagate between servers. To define the complex failure situation, the failure history of each piece of equipment in the data center was used. Four major failure types were considered: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures for each device were sorted chronologically, and when a failure occurred on one device, any failure occurring on another device within 5 minutes was defined as simultaneous. After constructing sequences of devices that failed at the same time, the 5 devices that most frequently failed together within the sequences were selected, and their simultaneous failures were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, Long Short-Term Memory (LSTM), a deep learning algorithm that predicts the next state from previous states, was used. In addition, unlike the single-server case, the Hierarchical Attention Network model structure was used to account for the differing failure levels of individual servers; this method improves prediction accuracy by giving greater weight to servers with greater impact on the failure. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data were treated as a single-server state and as a multiple-server state, then compared and analyzed. The second experiment improved prediction accuracy for complex failures by optimizing each server's threshold. In the first experiment, under the single-server assumption, three of the five servers were predicted not to have failed even though failures actually occurred; under the multiple-server assumption, all five servers were correctly predicted to have failed. This result supports the hypothesis that servers affect one another, and it confirms that prediction performance is superior when multiple servers are assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, which assumes that each server's influence differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose causes are hard to determine can be predicted from historical data, and it presents a model for predicting failures of servers in data centers. The results are expected to help prevent failures in advance.
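
The 5-minute simultaneity rule can be sketched as a simple grouping pass over chronologically sorted failure events. Device names and timestamps below are made up, and events are chained into a group whenever they fall within the window of the previous event, which is one plausible reading of the rule described above:

```python
from datetime import datetime, timedelta

# (timestamp, device) failure events, sorted chronologically as in the paper.
events = [
    (datetime(2020, 1, 1, 9, 0, 0), "server-A"),
    (datetime(2020, 1, 1, 9, 3, 0), "db-B"),
    (datetime(2020, 1, 1, 9, 4, 30), "network-C"),
    (datetime(2020, 1, 1, 11, 0, 0), "server-D"),
]

def group_simultaneous(events, window=timedelta(minutes=5)):
    """Group failures: an event within `window` of the previous event in the
    current group is treated as part of the same complex failure."""
    groups, current = [], [events[0]]
    for ev in events[1:]:
        if ev[0] - current[-1][0] <= window:
            current.append(ev)
        else:
            groups.append(current)
            current = [ev]
    groups.append(current)
    return groups

for g in group_simultaneous(sorted(events)):
    print([device for _, device in g])
# -> ['server-A', 'db-B', 'network-C'] then ['server-D']
```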

Perception of Internet Cyber Community Participants on Reconciliation of Divorced Couple (이혼 후 재결합에 대한 인터넷 사이버공동체 참여자들의 인식)

  • Lim, Choon-Hee
    • The Journal of the Korea Contents Association
    • /
    • v.12 no.10
    • /
    • pp.237-253
    • /
    • 2012
  • The purpose of this study was to examine how cyber community participants perceive the reconciliation of divorced couples and to find the reasons given for and against reconciliation after divorce. The data were collected by searching websites and Internet cafés related to 'reconciliation after divorce'. Seven cases asking for advice on reuniting, together with the opinions of cyber community participants on them, were analyzed. Participants expressed approval of a divorced couple's reuniting for the following reasons: 'strong motive for reunion', 'sexual relation with ex-partner', 'parental responsibility', 'regarding reconciliation as a better choice than remarriage', 'regarding it as a good choice for the child', etc. Participants opposed reuniting for the following reasons: 'doubt about the real intention to reunite', 'no self-reflection on the previous marriage and ex-spouse', 'concern about recurrence of the former marital conflict', 'reuniting only for the child, not for the couple', 'no prior settlement of the former marital conflict', 'no forgiveness and tolerance for the ex-spouse', 'no reflection and change', 'no effort on the ex-spouse's side', etc. Though the results are limited, this study identified the issues surrounding reconciliation after divorce through the anonymous asking and giving of advice in cyberspace. The findings imply that we should take more interest in reconciliation as a realistic post-divorce alternative and consider what matters for successful reuniting after divorce.

Association of Positive Ureaplasma in Gastric Fluid with Clinical Features in Preterm Infants

  • Jung, Yu-Jin
    • Neonatal Medicine
    • /
    • v.18 no.2
    • /
    • pp.280-287
    • /
    • 2011
  • Purpose: The purpose of the present study was to determine the association of Ureaplasma urealyticum-positive gastric fluid with clinical features and outcomes in preterm infants. Methods: Gastric fluid from preterm infants was aspirated within 30 minutes of birth and cultured within 24 hours to check for U. urealyticum. Infants were divided into two groups on the basis of the presence or absence of U. urealyticum. Results: U. urealyticum in gastric fluid was identified in 17 of 91 (19%) preterm infants. Compared with the U. urealyticum-negative group, the positive group had a significantly higher percentage of infants with gestational age ≤30 weeks (P=0.020), higher Apgar scores at 1 and 5 minutes (P=0.017 and P=0.048, respectively), and a higher rate of vaginal delivery (P=0.000). Although the incidence of bronchopulmonary dysplasia did not differ between the two groups, the frequency of bronchopulmonary dysplasia without previous respiratory distress syndrome was significantly higher in the positive group (11%) than in the negative group (1%) (P=0.030). Conclusion: Detection of U. urealyticum in gastric fluid is more frequent in infants with gestational age ≤30 weeks. It can help predict the development of bronchopulmonary dysplasia without previous respiratory distress syndrome in preterm infants.

Probable Volcanic Flood of the Cheonji Caldera Lake Triggered by Volcanic Eruption of Mt. Baekdusan (백두산 화산분화로 인해 천지에서 발생 가능한 화산홍수)

  • Lee, Khil-Ha;Kim, Sung-Wook;Yoo, Soon-Young;Kim, Sang-Hyun
    • Journal of the Korean earth science society
    • /
    • v.34 no.6
    • /
    • pp.492-506
    • /
    • 2013
  • Historical accounts and recent geological survey observations suggest that Mt. Baekdusan is showing signs of waking from a long slumber. In the event of a volcanic eruption of Mt. Baekdusan, water stored in the Cheonji caldera lake may be released. Such a volcanic flood is critical because it carries enormous potential energy, capable of destroying all kinds of man-made structures, and its velocity can reach up to 100 km/hr, covering hundreds of kilometers downstream of Lake Cheonji. The ultimate goal of the study is to estimate the level of damage caused by a volcanic flood from the Cheonji caldera lake. As a preliminary study, a scenario-based numerical analysis was performed to build hydrographs as a function of time. The analysis was performed for each scenario (breach, magma uplift, combination of uplift and breach, precipitation, etc.), and the parameters required by the model structure were chosen on the basis of the historical records of other volcanoes. This study considers only the amount of water at the rim site as a function of time; the downstream routing process is not considered.
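
A minimal sketch of a scenario-type breach hydrograph of the kind the abstract describes, assuming a simple broad-crested-weir outflow and a constant lake surface area; every parameter value is illustrative, not one of the paper's calibrated scenarios:

```python
# Illustrative parameters (not the paper's calibrated values).
AREA = 9.8e6         # lake surface area, m^2 (roughly Lake Cheonji scale)
C_WEIR = 1.7         # broad-crested weir coefficient, SI units
BREACH_WIDTH = 50.0  # breach width, m
H0 = 10.0            # initial head of water above the breach invert, m
DT = 60.0            # time step, s

def breach_hydrograph(h0=H0, area=AREA, b=BREACH_WIDTH, c=C_WEIR, dt=DT):
    """Drain the lake through a weir-type breach: Q = c * b * h**1.5,
    with the head falling as dV = -Q * dt over a constant surface area."""
    t, h, series = 0.0, h0, []
    while h > 0.01:
        q = c * b * h ** 1.5
        series.append((t / 3600.0, q))   # (hours, m^3/s)
        h = max(h - q * dt / area, 0.0)
        t += dt
    return series

hydro = breach_hydrograph()
peak_t, peak_q = max(hydro, key=lambda p: p[1])
print(f"peak {peak_q:.0f} m3/s at t={peak_t:.2f} h; drains in {hydro[-1][0]:.0f} h")
```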

Internal Dose Assessment of Worker by Radioactive Aerosol Generated During Mechanical Cutting of Radioactive Concrete (원전 방사성 콘크리트 기계적 절단의 방사성 에어로졸에 대한 작업자 내부피폭선량 평가)

  • Park, Jihye;Yang, Wonseok;Chae, Nakkyu;Lee, Minho;Choi, Sungyeol
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT)
    • /
    • v.18 no.2
    • /
    • pp.157-167
    • /
    • 2020
  • Removing radioactive concrete is a crucial part of decommissioning nuclear power plants. However, this process generates radioactive aerosols, exposing workers to radiation. Although large amounts of radioactive concrete are generated during decommissioning, studies on workers' internal exposure to the aerosols generated by cutting radioactive concrete are very limited. In this study, therefore, we calculate the internal radiation doses of workers exposed to radioactive aerosols during activities such as drilling and cutting of radioactive concrete, using previous research data. The electrical-mobility-equivalent diameter measured in a previous study was converted to aerodynamic diameter using the Newton-Raphson method. The specific activity of each nuclide in radioactive concrete 10 years after plant shutdown was calculated using the ORIGEN code, and the committed effective dose for each nuclide was then calculated using the IMBA software. The effective dose from 152Eu constituted 83.09% of the total dose, and the five highest-ranked nuclides (152Eu, 154Eu, 60Co, 239Pu, 55Fe) together constituted 99.63%. We therefore suggest that these major nuclides could be measured first for rapid radiation exposure management of workers involved in decommissioning nuclear power plants, even if not all radioactive elements in the concrete are considered.
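
The mobility-to-aerodynamic diameter conversion can be sketched with the standard slip-corrected equivalence rho_p * d_m^2 * Cc(d_m) = rho_0 * d_a^2 * Cc(d_a), solved by Newton-Raphson; particles are treated as spheres (shape factor ignored), and the mean free path and density values are illustrative assumptions rather than the paper's inputs:

```python
import math

MFP = 0.0665e-6   # assumed mean free path of air at room conditions, m

def cunningham(d: float) -> float:
    """Cunningham slip correction factor for particle diameter d (m)."""
    kn = 2.0 * MFP / d
    return 1.0 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))

def aerodynamic_diameter(d_m: float, rho_p: float, rho_0: float = 1000.0,
                         tol: float = 1e-12, max_iter: int = 50) -> float:
    """Solve rho_0 * d_a^2 * Cc(d_a) = rho_p * d_m^2 * Cc(d_m) for d_a
    by Newton-Raphson with a central-difference numerical derivative."""
    target = rho_p * d_m ** 2 * cunningham(d_m)
    f = lambda d: rho_0 * d ** 2 * cunningham(d) - target
    d_a = d_m * math.sqrt(rho_p / rho_0)   # slip-free initial guess
    for _ in range(max_iter):
        eps = d_a * 1e-6
        step = f(d_a) * (2 * eps) / (f(d_a + eps) - f(d_a - eps))
        d_a -= step
        if abs(step) < tol:
            break
    return d_a

# Example: 1 um mobility diameter, concrete-like particle density 2300 kg/m3.
print(aerodynamic_diameter(1e-6, rho_p=2300.0))
```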