• Title/Summary/Keyword: state prediction


Variation of Inflow Density Currents with Different Flood Magnitude in Daecheong Reservoir (홍수 규모별 대청호에 유입하는 하천 밀도류의 특성 변화)

  • Yoon, Sung-Wan;Chung, Se-Woong;Choi, Jung-Kyu
    • Journal of Korea Water Resources Association / v.41 no.12 / pp.1219-1230 / 2008
  • Stream inflows induced by flood runoff have a higher density than the ambient reservoir water because of their lower water temperature and elevated suspended sediment (SS) concentration. Because the propagation of the density currents formed by the density difference between inflow and ambient water affects reservoir water quality and ecosystems, an understanding of reservoir density currents is essential for optimizing field monitoring, analyzing and forecasting SS and nutrient transport, and managing and controlling them properly. This study aimed to quantify the characteristics of the inflow density current, including plunge depth (d_p) and plunge distance (X_p), separation depth (d_s), interflow thickness (h_i), arrival time at the dam (t_a), and the reduction ratio (β) of SS contained in the stream inflow, for different flood magnitudes in Daecheong Reservoir using a validated two-dimensional (2D) numerical model. Ten flood scenarios corresponding to inflow densimetric Froude numbers (Fr_i) ranging from 0.920 to 9.205 were set up based on the hydrograph observed from June 13 to July 3, 2004. A fully developed stratification condition was assumed as the initial water temperature profile. A higher Fr_i (inertia-to-buoyancy ratio) resulted in greater d_p, X_p, d_s, and h_i and in faster propagation of the interflow, while the effect of reservoir geometry on these characteristics was also significant. The Hebbert equation, which estimates d_p assuming steady-state flow in a triangular cross-section, substantially overestimated d_p because it does not consider the spatial variation of reservoir geometry or water surface changes during flood events. The β values between the inflow and dam sites decreased as Fr_i increased, but the trend reversed for Fr_i > 9.0 because of turbulent mixing effects. The results provide a practical and effective prediction measure for reservoir operators to capture the behavior of turbid inflows early.
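The two quantities at the center of this abstract are easy to make concrete. The sketch below computes the inflow densimetric Froude number and a steady-state Hebbert-type plunge depth for a triangular cross-section; the critical plunge Froude number Fr_p, the half-angle, the rectangular inflow section used for Fr_i, and all numeric inputs are illustrative assumptions, not values taken from the paper.

```python
import math

def densimetric_froude(Q, width, depth, rho_in, rho_amb, g=9.81):
    """Inflow densimetric Froude number Fr_i = u / sqrt(g' * h), the
    inertia-to-buoyancy ratio used to rank the flood scenarios; a
    rectangular inflow section is assumed here for simplicity."""
    g_prime = g * (rho_in - rho_amb) / rho_amb   # reduced gravity
    u = Q / (width * depth)                      # mean inflow velocity
    return u / math.sqrt(g_prime * depth)

def hebbert_plunge_depth(Q, half_angle_deg, rho_in, rho_amb, Fr_p=0.5, g=9.81):
    """Steady-state plunge depth for a triangular cross-section, obtained
    by setting Fr = Fr_p at the plunge point:
    d_p = [Q^2 / (g' * tan^2(phi) * Fr_p^2)]^(1/5)."""
    g_prime = g * (rho_in - rho_amb) / rho_amb
    tan_phi = math.tan(math.radians(half_angle_deg))
    return (Q**2 / (g_prime * tan_phi**2 * Fr_p**2)) ** 0.2

# Hypothetical flood inflow, 1 kg/m^3 denser than the ambient water
d_p = hebbert_plunge_depth(Q=500.0, half_angle_deg=85.0,
                           rho_in=1000.0, rho_amb=999.0)
print(f"Hebbert-type plunge depth ~ {d_p:.1f} m")
```

Because this closed form freezes the geometry and the water surface, it omits exactly the effects the paper's 2D model resolves, which is consistent with the reported overestimation of d_p.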

A STUDY ON THE MEASUREMENT OF THE IMPLANT STABILITY USING RESONANCE FREQUENCY ANALYSIS (공진 주파수 분석법에 의한 임플랜트의 안정성 측정에 관한 연구)

  • Park Cheol;Lim Ju-Hwan;Cho In-Ho;Lim Heon-Song
    • The Journal of Korean Academy of Prosthodontics / v.41 no.2 / pp.182-206 / 2003
  • Statement of problem: Successful osseointegration of endosseous threaded implants depends on many factors, including the surface characteristics and gross geometry of the implant, the quality and quantity of bone where the implant is placed, and the magnitude and direction of stress under functional occlusion. A clinical, quantitative measurement of primary stability at placement and of the implant's functional state could therefore help predict possible clinical symptoms and guide revision of implant geometry, type, and surface characteristics to each patient's condition, ultimately increasing the success rate of implants. Purpose: Available non-invasive techniques for the clinical measurement of implant stability and osseointegration include percussion, radiography, the Periotest®, the Dental Fine Tester®, and so on. There is, however, relatively little research standardizing the quantitative measurement of implant stability and osseointegration, because clinical application varies between individual operators. Therefore, to develop a non-invasive experimental method for measuring implant stability quantitatively, a resonance frequency analyzer that measures the natural frequency of a given structure was developed in this study. Material & methods: To test the stability of the resonance frequency analyzer, the following methods and materials were used. 1) In-vitro study: implants were placed both in epoxy resin, whose physical properties are similar to the stiffness of human bone, and in fresh cow rib bone specimens, and their resonance frequency values were measured and analyzed. To test the reliability of the data gathered with the resonance frequency analyzer, a comparative analysis against Periotest® data was conducted. 2) In-vivo study: implants were inserted into the tibiae of 10 New Zealand rabbits, and the resonance frequency values of the implants with connected abutments were measured immediately after insertion and every 4 weeks thereafter for 16 weeks. Results: Implants of the same length placed in Hot Melt showed repeatable resonance frequency values. As the length of the abutment increased, the resonance frequency value changed significantly (p<0.01). As the thickness of the transducer increased through 0.5, 1.0, and 2.0 mm, the resonance frequency value increased significantly (p<0.05). For implants placed in PL-2 and epoxy resin at different exposure degrees, the resonance frequency value increased as the exposure degree of the implant and the length of the abutment decreased. In the comparative experiment based on physical properties, the resonance frequency value increased significantly as the thickness of the transducer increased (p<0.01). As the stiffness of the substance in which the implants were placed increased and the effective length of the implant decreased, the resonance frequency values increased significantly (p<0.05). In the experiment with cow rib bone specimens, increasing the abutment length produced a significant difference between the results from the resonance frequency analyzer and the Periotest®. There was no statistically significant difference between the resonance frequency value and the Periotest® value with respect to the direction of measurement (p<0.05). The in-vivo experiment yielded repeatable patterns of resonance frequency: as time elapsed, the resonance frequency value increased significantly, with the exception of the 4th and 8th weeks (p<0.05). Conclusion: The resonance frequency analyzer was developed as an attempt to standardize the quantitative measurement of implant stability and osseointegration and to complement the reliability of data from other non-invasive measuring devices. Further research is needed to improve the efficiency of its clinical application and to establish a standardized quantitative analysis of implant stability.
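The reported trends (higher frequency with stiffer support and with shorter effective length) follow from first-mode beam dynamics. As a rough illustration only, the sketch below treats the implant-abutment-transducer assembly as a uniform titanium cantilever; this is a textbook simplification, not the authors' apparatus, and all dimensions and material constants are assumed.

```python
import math

def cantilever_f1(E, I, rho, A, L):
    """First natural frequency (Hz) of a uniform cantilever beam:
    f1 = (lambda1^2 / (2*pi)) * sqrt(E*I / (rho*A*L^4)), lambda1 ~ 1.875."""
    lam = 1.875104
    return (lam**2 / (2.0 * math.pi)) * math.sqrt(E * I / (rho * A * L**4))

E, rho = 110e9, 4500.0          # titanium: Young's modulus (Pa), density (kg/m^3)
d = 4e-3                        # 4 mm rod diameter (assumed)
A = math.pi * d**2 / 4          # cross-sectional area
I = math.pi * d**4 / 64         # second moment of area
for L in (12e-3, 10e-3, 8e-3):  # shorter effective length -> higher f1
    print(f"L = {L*1e3:.0f} mm -> f1 ~ {cantilever_f1(E, I, rho, A, L)/1e3:.1f} kHz")
```

Shortening the effective length L, or stiffening the system (analogous to denser bone or less implant exposure), raises f1, mirroring the trends reported above.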

Analytical Review of the Forensic Anthropological Techniques for Stature Estimation in Korea (한국에서 사용되는 법의인류학적 키 추정 방법에 대한 제언)

  • Jeong, Yangseung;Woo, Eun Jin
    • Anatomy & Biological Anthropology / v.31 no.4 / pp.121-131 / 2018
  • Stature is one of the unique biological properties of a person and can be used to identify an individual; accordingly, statures are estimated for unknown victims of crimes and disasters. However, the accuracy of the estimates may be compromised by inappropriate methodologies and practices. This study discusses methodological issues in the current practice of forensic anthropological stature estimation in Korea, followed by suggestions for enhancing the accuracy of the estimates. Summaries of forensic anthropological examinations of 560 skeletal remains, conducted at the National Forensic Service (NFS), were reviewed, and Mr. Yoo Byung-eun's case is used as an example of NFS practice. To estimate Mr. Yoo's stature, Trotter's (1970) femur equation was applied even though the fibula equation, which has a lower standard error, was available. In his case report, the standard error associated with the equation (±3.8 cm) was interpreted as an 'error range', giving the hasty impression that the prediction interval is that narrow. Also, stature shrinkage with aging was not considered, so the estimated stature in Mr. Yoo's case report should be regarded as his maximum living stature rather than his stature at death. Lastly, applying Trotter's (1970) White female equations to Korean female remains is likely to underestimate their statures. The anatomical method will enhance the accuracy of stature estimates; in cases where it is not feasible, a mathematical method based on Korean samples should be considered. Since the 1980s, efforts have been made to generate stature estimation equations from Korean samples, and applying such equations to Korean skeletal remains will enhance the accuracy of the estimates and, eventually, the likelihood of successfully identifying the unknown.
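The 'error range' misreading flagged above is easy to state in code. The sketch below shows a generic regression-based stature estimate with an approximate 95% prediction interval of about ±1.96×SE; the slope, intercept, and bone length are hypothetical placeholders, not Trotter's (1970) published coefficients, and only the ±3.8 cm standard error echoes the case report.

```python
def estimate_stature(bone_length_cm, slope=2.38, intercept=61.4, se=3.8):
    """Point estimate of stature plus an approximate 95% prediction
    interval (~ +/-1.96*SE). Note the interval is much wider than the
    bare SE that the case report presented as an 'error range'.
    Slope and intercept are hypothetical, not Trotter's coefficients."""
    est = slope * bone_length_cm + intercept
    half = 1.96 * se
    return est, (est - half, est + half)

est, (low, high) = estimate_stature(45.0)
print(f"estimate {est:.1f} cm, 95% PI {low:.1f}-{high:.1f} cm")
# An age-shrinkage correction would then convert this maximum living
# stature into stature-at-death (not modeled here).
```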

Prediction of Expected Residual Useful Life of Rubble-Mound Breakwaters Using Stochastic Gamma Process (추계학적 감마 확률과정을 이용한 경사제의 기대 잔류유효수명 예측)

  • Lee, Cheol-Eung
    • Journal of Korean Society of Coastal and Ocean Engineers / v.31 no.3 / pp.158-169 / 2019
  • A probabilistic model that can predict the residual useful lifetime of a structure is formulated using the gamma process, one of the standard stochastic processes. The formulated model can account both for the sampling uncertainty of damage measured up to the present and for the temporal uncertainty of cumulative damage over time. A method for estimating the parameters of the stochastic model is additionally proposed, based on the least squares method and the method of moments, so that the age of a structure, its operational environment, and the evolution of damage with time can all be considered. Features of the residual useful lifetime are first investigated through a sensitivity analysis on the parameters under a simple setting of a single damage value measured at the current age. The stochastic model is then applied directly to rubble-mound breakwaters: the gamma process parameters are estimated for several experimental datasets on the damage process of the armor rocks. The expected damage levels over time, numerically simulated with the estimated parameters, agree very well with those from the flume tests. Various numerical calculations show that the probabilities of exceeding the failure limit converge, after a long time, to the constraint that the model must satisfy. Meanwhile, the expected residual useful lifetimes evaluated from the failure probabilities differ according to the behavior of the damage history. In particular, as the coefficient of variation of cumulative damage becomes large, the expected residual useful lifetimes deviate significantly from those of a deterministic regression model, mainly because the sampling and temporal uncertainties associated with damage cause the first time to failure to be widely distributed. The stochastic model presented in this paper can therefore properly implement a probabilistic assessment of the current damage state of a structure as well as account for the temporal uncertainty of future cumulative damage.
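To make the mechanics of such a model concrete, here is a minimal Monte Carlo sketch of the expected residual useful life under a non-stationary gamma process. A shape function v(t) = c·t^b with scale 1/u gives E[X(t)] = v(t)/u; in the paper the parameters are fitted by least squares and the method of moments, whereas the values below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def residual_life_mc(x_now, t_now, limit, c, b, u,
                     dt=0.25, horizon=100.0, n_paths=2000):
    """Simulate damage paths X(t) forward from the current state and
    record the first-passage time over the failure limit. Increments
    over [t0, t1] are Gamma(v(t1) - v(t0), 1/u) with v(t) = c*t**b."""
    steps = np.arange(t_now, t_now + horizon + dt, dt)
    dv = c * (steps[1:]**b - steps[:-1]**b)        # shape increments
    rul = np.full(n_paths, horizon)                # censored at the horizon
    for i in range(n_paths):
        path = x_now + np.cumsum(rng.gamma(dv, 1.0 / u))
        crossed = np.nonzero(path >= limit)[0]
        if crossed.size:
            rul[i] = steps[crossed[0] + 1] - t_now
    return rul

rul = residual_life_mc(x_now=0.3, t_now=10.0, limit=1.0, c=0.02, b=1.2, u=2.0)
print(f"expected RUL ~ {rul.mean():.1f} "
      f"(P10-P90: {np.percentile(rul, 10):.1f}-{np.percentile(rul, 90):.1f})")
```

The spread between the P10 and P90 first-passage times is the stochastic effect the abstract describes: a larger coefficient of variation of cumulative damage widens the distribution of the first time to failure and pulls the expected residual life away from a deterministic regression.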

Comparative assessment and uncertainty analysis of ensemble-based hydrologic data assimilation using airGRdatassim (airGRdatassim을 이용한 앙상블 기반 수문자료동화 기법의 비교 및 불확실성 평가)

  • Lee, Garim;Lee, Songhee;Kim, Bomi;Woo, Dong Kook;Noh, Seong Jin
    • Journal of Korea Water Resources Association / v.55 no.10 / pp.761-774 / 2022
  • Accurate hydrologic prediction is essential for analyzing the effects of drought, floods, and climate change on flow rates, water quality, and ecosystems. Disentangling the uncertainty of a hydrological model is one of the important issues in hydrology and water resources research. Hydrologic data assimilation (DA), a technique that updates the states or parameters of a hydrological model to produce the most likely estimates of its initial conditions, is one way to minimize uncertainty in hydrological simulations and improve predictive accuracy. In this study, two ensemble-based sequential DA techniques, the ensemble Kalman filter and the particle filter, are comparatively analyzed for daily discharge simulation in the Yongdam catchment using airGRdatassim. The results show that the Kling-Gupta efficiency (KGE) improved from 0.799 in the open-loop simulation to 0.826 with the ensemble Kalman filter and 0.933 with the particle filter. In addition, we analyzed the effects of DA hyper-parameters such as the precipitation and potential evaporation forcing error parameters and the selection of perturbed and updated states. Under the forcing error conditions tested, the particle filter was superior to the ensemble Kalman filter in terms of the KGE index, and the optimal forcing noise was relatively smaller for the particle filter. Moreover, performance improved as more state variables were included in the updating step, implying that the selection of updated states can itself be treated as a hyper-parameter. The simulation experiments in this study suggest that DA hyper-parameters need to be carefully optimized to exploit the potential of DA methods.
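For reference, the KGE metric quoted above is straightforward to compute. This is the standard Gupta et al. (2009) formulation; airGRdatassim itself is an R package, so this Python function is only a stand-alone illustration of the metric, run here on a synthetic series.

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2),
    with r the correlation, alpha the std-dev ratio, beta the mean ratio.
    KGE = 1 is perfect; the paper reports 0.799 (open loop), 0.826 (EnKF),
    and 0.933 (particle filter)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1)**2 + (alpha - 1)**2 + (beta - 1)**2)

# toy check on a synthetic discharge series
obs = np.sin(np.linspace(0, 10, 365)) + 2.0
print(round(kge(obs * 1.05 + 0.02, obs), 3))   # close to, but below, 1.0
```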

Utilization of Smart Farms in Open-field Agriculture Based on Digital Twin (디지털 트윈 기반 노지스마트팜 활용방안)

  • Kim, Sukgu
    • Proceedings of the Korean Society of Crop Science Conference / 2023.04a / pp.7-7 / 2023
  • Currently, the main technologies of the fourth industrial revolution are big data, the Internet of Things, artificial intelligence, blockchain, mixed reality (MR), and drones. In particular, the "digital twin," which has recently become a global technological trend, is a virtual model that mirrors a physical object in a computer. By creating and simulating a digital twin of software-virtualized assets instead of real physical assets, accurate information about the characteristics of real farming (current state, agricultural productivity, agricultural work scenarios, etc.) can be obtained. This study aims to streamline agricultural work through automatic water management, remote growth forecasting, drone control, and pest forecasting, operated from an integrated control system, by constructing digital twin data for major open-field production areas and designing and building a smart farm complex. It also aims to disseminate digital environment-controlled agriculture in Korea that can reduce labor and improve crop productivity while minimizing environmental load through big-data-guided use of appropriate amounts of fertilizers and pesticides. These open-field agricultural technologies can reduce labor through digital farming and cultivation management, optimize water use and prevent soil pollution in preparation for climate change, and enable quantitative growth management of open-field crops by securing digital data on the national cultivation environment. Improving agricultural productivity in this way is also a direct means of implementing carbon-neutral RED++ activities. The analysis and prediction of growth status based on acquired high-precision, high-definition crop images are highly effective for digital farm-work management. The Southern Crop Department of the National Institute of Food Science has conducted research and development on various types of open-field smart farms, such as underground-point and underground-drainage types, and, starting this year, commercialization is underway in earnest through the establishment of smart farm facilities and technology dissemination to agricultural technology complexes across the country. In this study, we describe a field case combining digital twin technology with open-field smart farm technology and discuss future utilization plans.


Analysis and Forecast of Venture Capital Investment on Generative AI Startups: Focusing on the U.S. and South Korea (생성 AI 스타트업에 대한 벤처투자 분석과 예측: 미국과 한국을 중심으로)

  • Lee, Seungah;Jung, Taehyun
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.18 no.4 / pp.21-35 / 2023
  • Expectations surrounding generative AI technology and its ramifications are sweeping across industrial domains. Given the pivotal role the startup ecosystem is expected to play in the utilization and advancement of generative AI, a deeper understanding of the current state and characteristics of venture capital (VC) investment in this domain is needed. This study examines South Korea's landscape of VC investment deals and forecasts future VC investment by comparison with the United States, the frontrunner in the generative AI industry and its ecosystem. For the analysis, new datasets were constructed from 286 investment deals in 117 U.S. generative AI startups from 2008 to 2023 and 144 investment deals in 42 South Korean generative AI startups from 2011 to 2023. The results show an upward trajectory in the number of VC investment deals in both countries in recent years, concentrated predominantly in early-stage investment. Noteworthy disparities between the two countries also emerged. In the U.S., unlike in South Korea, the amount invested in recent VC deals has increased by 285% to 488% at the corresponding development stages. The interval between investment stages was slightly longer in South Korea than in the U.S., but the difference was not statistically significant. Furthermore, the proportion of VC investments in generative AI firms, relative to the total number of deals, was higher in South Korea than in the U.S. A sectoral breakdown shows that 59.2% of U.S. deals were concentrated in the text and model sectors, whereas 61.9% of South Korean deals centered on the video, image, and chat sectors. Using four distinct models, the anticipated VC investment in South Korea from 2023 to 2029 was forecast at an average of 3.4 trillion Korean won (from a minimum of 2.408 trillion won to a maximum of 5.919 trillion won). This research has practical significance in that it methodically dissects VC investment in generative AI in both the U.S. and South Korea and presents an investment projection for the latter. Its academic significance lies in laying the groundwork for future scholarly inquiry into an area that has so far lacked rigorous empirical investigation. The study also introduces two methodologies for predicting VC investment amounts, which, with broader application and refinement, could enhance the forecasting of VC investment.
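The abstract does not specify the four forecasting models, so the sketch below shows only one generic approach to such a projection: a log-linear trend fitted to annual investment totals and extrapolated to 2029. All figures are made up for the example and are not the paper's data.

```python
import numpy as np

# Hypothetical annual VC investment totals (billion KRW), for illustration only
years = np.array([2018, 2019, 2020, 2021, 2022])
totals = np.array([120.0, 180.0, 250.0, 410.0, 530.0])

slope, intercept = np.polyfit(years, np.log(totals), 1)  # log-linear trend fit
future = np.arange(2023, 2030)
forecast = np.exp(intercept + slope * future)            # extrapolate the trend

for y, f in zip(future, forecast):
    print(f"{y}: {f:,.0f} billion KRW (extrapolated)")
print(f"2023-2029 total: {forecast.sum():,.0f} billion KRW")
```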


Long-term Predictability for El Nino/La Nina using PNU/CME CGCM (PNU/CME CGCM을 이용한 엘니뇨/라니냐 장기 예측성 연구)

  • Jeong, Hye-In;Ahn, Joong-Bae
    • The Sea: JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY / v.12 no.3 / pp.170-177 / 2007
  • In this study, the long-term predictability of El Nino and La Nina events by the Pusan National University Coupled General Circulation Model (PNU/CME CGCM), developed under a Research and Development Grant funded by the Korea Meteorological Administration (KMA), was examined in terms of the correlation coefficients of sea surface temperature between model and observation, and of skill scores over the tropical Pacific. For this purpose, the long-term global climate was hindcast with PNU/CME CGCM for 12 months starting from April, July, October, and January (APR RUN, JUL RUN, OCT RUN, and JAN RUN, respectively) of every year between 1979 and 2004, with each 12-month hindcast consisting of 5 ensemble members. Relatively high correlation was maintained throughout the 12-month lead hindcasts over the equatorial Pacific for the four RUNs starting in different months. In particular, the model's skill in forecasting equatorial SST anomalies was most pronounced within 6 months of lead time. To assess the model's capability in predicting El Nino and La Nina, skill scores such as the hit rate and false alarm rate were calculated. According to the results, PNU/CME CGCM has good predictability in forecasting warm and cold events, despite relatively poor capability in predicting the normal state of the equatorial Pacific. The predictability of our CGCM was also compared with that of the CGCMs participating in the DEMETER project, and the comparison showed that it has reasonable long-term predictability comparable to theirs. In conclusion, PNU/CME CGCM can predict El Nino and La Nina events at least 12 months ahead in terms of the Nino 3.4 SST anomaly, with much better predictability within a 6-month lead time.
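The hit rate and false alarm rate mentioned above come from a 2x2 contingency table of forecast versus observed events. The sketch below uses the standard forecast-verification definitions; the paper does not state which false-alarm variant it uses, so both common ones are shown, and the counts are a toy example, not the paper's verification data.

```python
def skill_scores(hits, misses, false_alarms, correct_negatives):
    """Categorical skill scores for event forecasts (e.g., warm events).
    hit_rate: fraction of observed events that were forecast (POD).
    false_alarm_rate: fraction of non-events forecast as events (POFD).
    false_alarm_ratio: fraction of forecast events that did not occur."""
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    false_alarm_ratio = false_alarms / (hits + false_alarms)
    return hit_rate, false_alarm_rate, false_alarm_ratio

# toy example: 26 hindcast years classified as warm event vs. no warm event
pod, pofd, far = skill_scores(hits=8, misses=2, false_alarms=3,
                              correct_negatives=13)
print(f"hit rate {pod:.2f}, false alarm rate {pofd:.2f}, "
      f"false alarm ratio {far:.2f}")
```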

The Clinical Outcomes of Marginal Donor Hearts: A Single Center Experience

  • Soo Yong Lee;Seok Hyun Kim;Min Ho Ju;Mi Hee Lim;Chee-hoon Lee;Hyung Gon Je;Ji Hoon Lim;Ga Yun Kim;Ji Soo Oh;Jin Hee Choi;Min Ku Chon;Sang Hyun Lee;Ki Won Hwang;Jeong Su Kim;Yong Hyun Park;June Hong Kim;Kook Jin Chun
    • Korean Circulation Journal / v.53 no.4 / pp.254-267 / 2023
  • Background and Objectives: Although donor shortage is a common problem worldwide, a significant portion of unutilized hearts are classified as marginal donor (MD) hearts. However, research on the correlation between MD hearts and the prognosis of heart transplantation (HTx) is lacking. This study investigated the clinical impact of MD hearts in HTx. Methods: Seventy-three consecutive HTxs performed between 2014 and 2021 in a tertiary hospital were analyzed. An MD was defined by any of the following: donor age >55 years, left ventricular ejection fraction <50%, cold ischemic time >240 minutes, or significant cardiac structural problems. Preoperative characteristics, postoperative hemodynamic data, primary graft dysfunction (PGD), and survival rates were analyzed. Risk stratification by the Index for Mortality Prediction after Cardiac Transplantation (IMPACT) score was performed to examine outcomes according to recipient state, with each group sub-divided into two risk groups by IMPACT score (low <10 vs. high ≥10). Results: A total of 32 (43.8%) patients received an organ from an MD. Extracorporeal membrane oxygenation was more frequent in the non-MD group (34.4% vs. 70.7%, p=0.007). There was no significant difference in PGD, 30-day mortality, or long-term survival between the groups. In the subgroup analysis, early outcomes did not differ between the low- and high-risk groups, but long-term survival was better in the low-risk group (p=0.01). Conclusions: The outcomes of the MD group did not differ significantly from those of the non-MD group. In particular, in low-risk recipients, the MD group showed excellent early and long-term outcomes. These results suggest that selected MD hearts can be used without increasing adverse events.

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.63-83 / 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than on complex analyses such as corporate intrinsic value analysis or technical auxiliary indices. However, pattern analysis remains difficult and is less computerized than users need. In recent years, there have been many studies of stock price patterns using machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, advances in IT have made it easier to analyze huge volumes of chart data to find patterns that can predict stock prices. Although short-term price forecasting performance has improved, long-term forecasting power remains limited, so such methods are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier techniques missed, but this can be fragile in practice, because whether the patterns found are suitable for trading is a separate matter. Such studies find a point matching a pattern and then measure performance n days later, assuming a purchase at that point; since this calculates virtual returns, it can diverge considerably from reality. Whereas existing research tries to discover patterns with predictive power, this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple because they can be distinguished by five turning points. Despite reports that some of these patterns have price predictability, no performance in actual markets had been reported. The simplicity of a five-turning-point pattern has the advantage of reducing the cost of increasing pattern recognition accuracy. In this study, 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented by the system, and only one pattern with a high success rate per group is selected for trading: patterns that had a high probability of success in the past are assumed likely to succeed in the future, so we trade when such a pattern occurs. The measurement reflects real trading because it assumes that both the buy and the sell have been executed. We tested three ways to calculate the turning points. The first, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and then computes the vertices. In the second, the high-low-line zig-zag method, a high that meets the n-day high line is taken as a peak, and a low that meets the n-day low line as a valley. In the third, the swing wave method, a central high that is higher than the n highs on its left and right is taken as a peak, and a central low that is lower than the n lows on its left and right as a valley. The swing wave method was superior to the others in our tests, which we interpret to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished. Because the number of cases was far too large to search exhaustively for high-success-rate patterns, genetic algorithms (GA) were the most suitable solution. We also ran the simulation using Walk-forward Analysis (WFA), which separates the test section from the application section, allowing the system to respond appropriately to market changes. We optimized at the portfolio level, since optimizing the variables for each individual stock risks over-fitting; twenty constituent stocks were selected to gain the benefit of diversification while avoiding over-optimization. We tested the KOSPI market divided into six categories: the small-cap portfolio was the most successful, and the high-volatility portfolio was second best. This shows that patterns need some price volatility to take shape, but that the highest volatility is not necessarily best.
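Of the three turning-point methods compared above, the swing wave method is the simplest to state in code. The sketch below is a minimal interpretation under stated assumptions: a bar is a peak if its high exceeds the highs of the n bars on each side, and a valley if its low is below the lows of the n bars on each side; the DataFrame column names are assumptions, not identifiers from the paper.

```python
import pandas as pd

def swing_turning_points(df: pd.DataFrame, n: int = 5) -> pd.DataFrame:
    """Mark swing-wave peaks and valleys; df needs 'high' and 'low' columns.
    A bar is a peak (valley) when it is the max (min) of the centered
    window spanning n bars on each side of it."""
    window = 2 * n + 1
    out = df.copy()
    out["peak"] = df["high"] == df["high"].rolling(window, center=True).max()
    out["valley"] = df["low"] == df["low"].rolling(window, center=True).min()
    return out

# Five alternating turning points extracted this way form one M- or
# W-shaped candidate pattern for the GA to score.
```

Because the centered window needs n future bars, a turning point is only confirmed with a delay, which matches the finding that trading after a pattern completes beats trading while it is still forming.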