• Title/Summary/Keyword: Field failure data analysis


Informed Consent as a Litigation Strategy in the Field of Aesthetic Surgery: An Analysis Based on Court Precedents

  • Park, Bo Young;Kwon, Jungwoo;Kang, So Ra;Hong, Seung Eun
    • Archives of Plastic Surgery
    • /
    • v.43 no.5
    • /
    • pp.402-410
    • /
    • 2016
  • Background Doctors are losing an increasing number of lawsuits, despite providing preoperative patient education, because they fail to prove that informed consent was obtained. We analyzed judicial precedents associated with insufficient informed consent to identify judicial factors and trends related to aesthetic surgery medical litigation. Methods We collected data from civil trials between 1995 and 2015 that were related to aesthetic surgery and resulted in findings of insufficient informed consent. Based on these data, we analyzed the lawsuits, including the distribution of surgeries, dissatisfactions, litigation expenses, and relationship to informed consent. Results Cases were found involving the following types of surgery: facial rejuvenation (38 cases), facial contouring surgery (27 cases), mammoplasty (16 cases), blepharoplasty (29 cases), rhinoplasty (21 cases), body-contouring surgery (15 cases), and breast reconstruction (2 cases). Common reasons for postoperative dissatisfaction were deformities (22%), scars (17%), asymmetry (14%), and infections (6%). Most of the malpractice lawsuits arose in Seoul (population 10 million; home to 54% of plastic surgeons) and in primary-level local clinics (113 cases, 82.5%). In cases in which only invalid informed consent was recognized, the average amount of consolation money was KRW 9,107,143 (USD 8,438). In cases in which both a violation of non-maleficence and invalid informed consent were recognized, the average amount of consolation money was KRW 12,741,857 (USD 11,806), corresponding to 38.6% of the amount of the judgment. Conclusions Surgeons should pay special attention to obtaining informed consent, because it is a double-edged sword: it serves clinical purposes for doctors and patients but may also become a litigation strategy for lawyers.

Performance Analysis of Access Channel Decoder Implemented for CDMA2000 1X Smart Antenna Base Station (CDMA2000 1X 스마트 안테나 기지국용으로 구현된 액세스 채널 복조기의 성능 분석)

  • 김성도;현승헌;최승원
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.2A
    • /
    • pp.147-156
    • /
    • 2004
  • This paper presents the implementation and performance analysis of an access channel decoder that exploits the diversity gain arising from the independent received-signal energies at the antenna elements of a smart antenna BTS (Base-station Transceiver Subsystem) operating in a CDMA2000 1X signal environment. The proposed access channel decoder consists of a searcher supporting 4 fingers, a Walsh demodulator, and a demodulator controller. These were implemented with five 1-million-gate FPGAs (Field Programmable Gate Arrays, Altera APEX EP20K1000EBC652) and a TMS320C6203 DSP (digital signal processor). The objective of the proposed access channel decoder is to enhance data retrieval at the cell site during the access period, during which the optimal weight vector of the smart antenna BTS is not available. Through experimental tests, we confirmed that the proposed access channel decoder exploiting the diversity technique outperforms the conventional single-antenna decoder in terms of the detection probability of the access probe, the access channel failure probability, and $E_b/N_0$ in the Walsh demodulator.
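
The diversity gain described above can be illustrated with a small simulation: with independent signal energies at each antenna element, combining the elements lowers the error rate relative to a single-antenna receiver. All parameters (4 elements, per-element SNR, symbol count, equal-gain combining) are illustrative assumptions, not the paper's hardware configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_elements, n_symbols = 4, 10_000
snr_linear = 1.0  # per-element average SNR (illustrative)

# BPSK symbols received independently on each antenna element,
# with element-dependent Rayleigh fading amplitudes (independent energies).
symbols = rng.choice([-1.0, 1.0], size=n_symbols)
fading = rng.rayleigh(scale=np.sqrt(0.5), size=(n_elements, n_symbols))
noise = rng.normal(scale=1.0 / np.sqrt(2 * snr_linear),
                   size=(n_elements, n_symbols))
received = fading * symbols + noise

# Single-antenna decision vs. equal-gain combining across all elements.
single_ber = np.mean(np.sign(received[0]) != symbols)
combined = received.sum(axis=0)
diversity_ber = np.mean(np.sign(combined) != symbols)

print(f"single-antenna BER : {single_ber:.4f}")
print(f"4-element EGC BER  : {diversity_ber:.4f}")
```

The fading amplitudes are always positive, so the sign decision on the equal-gain sum is valid; the combined branch benefits because a deep fade on one element is unlikely to coincide with fades on all the others.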

The Basic Study on the Method of Acoustic Emission Signal Processing for the Failure Detection in the NPP Structures (원전 구조물 결함 탐지를 위한 음향방출 신호 처리 방안에 대한 기초 연구)

  • Kim, Jong-Hyun;Korea Aerospace University, Jae-Seong;Lee, Jung;Kwag, No-Gwon;Lee, Bo-Young
    • Journal of the Korean Society for Nondestructive Testing
    • /
    • v.29 no.5
    • /
    • pp.485-492
    • /
    • 2009
  • Thermal fatigue cracking (TFC) is one of the life-limiting mechanisms under nuclear power plant operating conditions. To evaluate structural integrity, various non-destructive test methods such as radiographic testing, ultrasonic testing, and eddy current testing are used in the industrial field. However, these methods can detect a defect only after the crack has grown. For this reason, acoustic emission testing (AET) is becoming a powerful inspection method, because it can monitor a structure continuously. In general, every mechanism that affects the integrity of a structure or piece of equipment is a source of acoustic emission signals, so noise filtering is a major task for most AET researchers. In this study, acoustic emission signals were collected from pipes subjected to successive thermal fatigue cycles. The data were filtered based on the results of previous experiments. Through the data analysis, the waveform differences that distinguish effective TFC signals from noise were identified. The experimental results provide preliminary information for applying the acoustic emission technique to continuous monitoring for structural failure detection.

Evaluation of Optimal Time Between Overhaul Period of the First Driving Devices for High-Speed Railway Vehicle (고속철도차량 1차 구동장치에 대한 완전분해정비의 최적 주기 평가)

  • Jung, Jin-Tae;Kim, Chul-Su
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.16 no.12
    • /
    • pp.8700-8706
    • /
    • 2015
  • The first driving device of the power bogies of the Korean high-speed railway vehicle consists of the traction motor (TM) and the motor reduction gear unit (MRU). Although the TM and MRU form a mechanically integrated structure, their times between overhauls (TBO) are specified separately because of different technical requirements (TBO of MRU: $1.8 \times 10^6$ km; TBO of TM: $2.5 \times 10^6$ km). Therefore, to reduce the number of unnecessary preventive maintenance actions, it is important to evaluate the optimal TBO from the viewpoint of reliability-centered maintenance as a cost-effective solution. In this study, a fault tree analysis and the failure rates of the subsystems, considering the criticality of the components, are evaluated from field maintenance data. To minimize the conventional total maintenance cost, a common optimal TBO for the components is derived with a genetic algorithm that considers the target reliability and an improvement factor. In this algorithm, the chromosome of each individual encodes a minimum preventive-maintenance interval, and the fitness of an individual in each generation is the inverse of the total maintenance cost. Whereas the lowest-common-multiple method produces only a four-percent reduction compared with the existing practice, the optimal common TBO obtained by the genetic algorithm is $2.25 \times 10^6$ km, which reduces cost by about 14% compared with the conventional method.
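
The optimization scheme in the abstract (chromosome = maintenance interval, fitness = inverse of total cost) can be sketched as follows. The cost model, a Weibull minimal-repair failure process, and every numerical constant are illustrative assumptions, not the paper's field data or cost figures.

```python
import random

LIFE = 10.0e6               # vehicle life to cover, km (assumed)
C_PREV, C_FAIL = 1.0, 8.0   # relative overhaul / failure costs (assumed)
ETA, BETA = 3.0e6, 2.5      # Weibull scale [km] and shape (assumed)

def total_cost(tbo):
    """Preventive overhauls plus expected minimal-repair failures."""
    overhauls = LIFE / tbo
    expected_failures = overhauls * (tbo / ETA) ** BETA
    return overhauls * C_PREV + expected_failures * C_FAIL

def fitness(tbo):
    return 1.0 / total_cost(tbo)   # inverse of total cost, as in the abstract

random.seed(1)
pop = [random.uniform(0.5e6, 5.0e6) for _ in range(40)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                            # keep the fittest intervals
    children = []
    while len(children) < len(pop) - len(elite):
        a, b = random.sample(elite, 2)
        child = 0.5 * (a + b)                   # arithmetic crossover
        child *= random.uniform(0.95, 1.05)     # small mutation
        children.append(min(max(child, 0.1e6), 8.0e6))
    pop = elite + children

best = max(pop, key=fitness)
print(f"GA optimum TBO ~ {best / 1e6:.2f} million km, "
      f"total cost {total_cost(best):.2f}")
```

With this toy cost model the analytic optimum is $t^* = \eta\,(C_p/(C_f(\beta-1)))^{1/\beta} \approx 1.1 \times 10^6$ km, and the GA converges to its neighbourhood; the paper's own optimum comes from its field-data cost model, not from these assumed constants.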

Change of Fractured Rock Permeability due to Thermo-Mechanical Loading of a Deep Geological Repository for Nuclear Waste - a Study on a Candidate Site in Forsmark, Sweden

  • Min, Ki-Bok;Stephansson, Ove
    • Proceedings of the Korean Radioactive Waste Society Conference
    • /
    • 2009.06a
    • /
    • pp.187-187
    • /
    • 2009
  • Opening of fractures induced by shear dilation or normal deformation can be a significant source of fracture permeability change in fractured rock, which is important for the performance assessment of geological repositories for spent nuclear fuel. As the repository generates heat and later cools, the fluid-carrying ability of the rock becomes a dynamic variable during the lifespan of the repository. Heating causes expansion of the rock close to the repository and, at the same time, contraction close to the surface; during the cooling phase, the opposite takes place. Heating and cooling, together with the virgin stress, can induce shear dilation of fractures and deformation zones and change the flow field around the repository. The objectives of this work are to examine the contribution of thermal stress to fracture shear slip in the mid- and far-field around a KBS-3 type repository and to investigate the effect of the stress evolution on rock mass permeability. In the first part of this study, zones of fracture shear slip were examined by conducting a three-dimensional thermo-mechanical analysis of a spent fuel repository model measuring 2 km $\times$ 2 km $\times$ 800 m. The stress evolutions of importance for fracture shear slip are: (1) comparatively high horizontal compressive thermal stress at the repository level, (2) generation of vertical tensile thermal stress directly above the repository, (3) horizontal tensile stress near the surface, which can induce tensile failure, and (4) generation of shear stresses at the corners of the repository. In the second part of the study, fracture data from Forsmark, Sweden are used to establish discrete fracture network (DFN) models. Stress paths obtained from the thermo-mechanical analysis were used as boundary conditions in DFN-DEM (Discrete Element Method) analyses of six DFN models at the repository level.
Increases in permeability of up to a factor of four were observed during the thermal loading history, and the shear dilation of fractures did not recover after cooling of the repository. An understanding of the stress path, the potential areas of slip-induced shear dilation, and the related permeability changes during the lifetime of a repository for spent nuclear fuel is of utmost importance for analysing long-term safety. The results of this study will assist in identifying critical areas around a repository where fracture shear slip is likely to develop. The presentation also includes a brief introduction to the ongoing site investigations at two candidate sites for a geological repository in Sweden.
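
The reported factor-of-four permeability increase can be connected to fracture aperture with the standard cubic law, under which single-fracture transmissivity scales with the hydraulic aperture cubed, so a modest irreversible dilation suffices. The apertures below are illustrative, not Forsmark measurements.

```python
def fracture_transmissivity(aperture_m, viscosity=1.0e-3, rho_g=9810.0):
    """Cubic-law transmissivity T = rho*g*b^3 / (12*mu), SI units."""
    return rho_g * aperture_m ** 3 / (12.0 * viscosity)

b0 = 50e-6                      # initial hydraulic aperture, 50 um (assumed)
b1 = b0 * 4.0 ** (1.0 / 3.0)    # aperture after shear dilation (~79 um)

ratio = fracture_transmissivity(b1) / fracture_transmissivity(b0)
print(f"aperture increase : {b1 / b0:.3f}x")
print(f"permeability gain : {ratio:.2f}x")
```

That is, an aperture increase of only about 59% (4^(1/3)) is enough to quadruple the fracture's fluid-carrying capacity, which is why shear dilation that does not recover on cooling matters for long-term safety.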


Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • In addition to stakeholders such as the managers, employees, creditors, and investors of bankrupt companies, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models. As a result, even large corporations, the so-called chaebol enterprises, went bankrupt. Even after that, the analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it concentrated only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid a sudden total collapse such as the Lehman Brothers case of the global financial crisis. The key variables that drive corporate defaults vary over time: Deakin's (1972) study shows that the major factors affecting corporate failure changed from those identified in the analyses of Beaver (1967, 1968) and Altman (1968), and Grice's (2001) study likewise examined the shifting importance of the predictive variables in Zmijewski's (1984) and Ohlson's (1980) models. However, past studies used static models, and most do not consider the changes that occur over time. Therefore, to construct consistent prediction models, it is necessary to compensate for time-dependent bias by means of a time series algorithm that reflects dynamic change. Motivated by the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets of 7, 2, and 1 years, respectively.
To construct a bankruptcy model that is consistent across time, we first train a deep learning time series model using the data before the financial crisis (2000-2006). Parameter tuning of the existing models and of the deep learning time series algorithm is conducted with validation data that include the financial crisis period (2007-2008). As a result, we construct a model that shows patterns similar to the training data and excellent predictive power. After that, each bankruptcy prediction model is retrained on the combined training and validation data (2000-2008), applying the optimal parameters found during validation. Finally, each corporate default prediction model is evaluated and compared using the test data (2009), which demonstrates the usefulness of the corporate default prediction model based on the deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable-selection methods (multivariate discriminant analysis and the logit model), we show that the deep learning time series model is robust across all three bundles of selected variables. The definition of bankruptcy used is the same as that of Lee (2015). The independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms are compared. Corporate data pose the limitations of nonlinear variables, multicollinearity, and lack of data.
The logit model addresses nonlinearity, the Lasso regression model mitigates the multicollinearity problem, and the deep learning time series algorithm, combined with a variable-data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis to automated AI analysis, and ultimately towards intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis in corporate default prediction modeling and more effective in predictive power. Through the Fourth Industrial Revolution, the Korean government and governments overseas are working hard to integrate these systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains insufficient. This is an initial study of deep learning time series analysis of corporate defaults; it is therefore hoped that it will serve as comparative material for non-specialists beginning to combine financial data with deep learning time series algorithms.
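
The chronological 7/2/1 split described above, with the training and validation years merged again before the final test, can be sketched as pure bookkeeping (firm-level features are left out of this sketch):

```python
def chronological_split(periods, n_train, n_val, n_test):
    """Split an ordered list of periods without shuffling, preserving time order."""
    assert len(periods) == n_train + n_val + n_test
    train = periods[:n_train]
    val = periods[n_train:n_train + n_val]
    test = periods[n_train + n_val:]
    return train, val, test

years = list(range(2000, 2010))                # 10 annual cross-sections
train, val, test = chronological_split(years, 7, 2, 1)
refit = train + val                            # retrain on 2000-2008 before testing

print("train :", train)   # 2000-2006, pre-crisis
print("val   :", val)     # 2007-2008, crisis period
print("test  :", test)    # 2009
```

The point of splitting by calendar time rather than randomly is exactly the time-dependent bias the abstract mentions: shuffling would leak crisis-period patterns into the training set and overstate out-of-time predictive power.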

Reliable Assessment of Rainfall-Induced Slope Instability (강우로 인한 사면의 불안정성에 대한 신뢰성 있는 평가)

  • Kim, Yun-Ki;Choi, Jung-Chan;Lee, Seung-Rae;Seong, Joo-Hyun
    • Journal of the Korean Geotechnical Society
    • /
    • v.25 no.5
    • /
    • pp.53-64
    • /
    • 2009
  • Many slope failures are induced by rainfall infiltration, so much recent research has focused on rainfall-induced slope instability, with rainfall infiltration recognized as the important triggering factor. Infiltrating rainfall destroys the matric suction in the soil slope, and positive pore water pressure can even develop near the slope surface; both effects decrease the resisting shear strength. In Korea, a few public institutions have suggested conservative slope design guidelines that assume a fully saturated soil condition. However, this assumption is unrealistic, and soil properties are sometimes misused in the slope design method to satisfy the requirement. In this study, a more relevant slope stability evaluation method that takes the real rainfall infiltration phenomenon into account is suggested. Unsaturated soil properties of Korean weathered soils, such as the shear strength, soil-water characteristic curve, and permeability, were obtained by laboratory tests and also estimated by artificial neural network models. For real-time assessment of slope instability, failure warning criteria based on deterministic and probabilistic analyses were introduced to complement the uncertainties of field measurement data. The slope stability evaluation technique can be combined with field measurements of important factors, such as matric suction and water content, to develop an early warning system for slopes that may become unstable due to rainfall.
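
The mechanism described above can be illustrated with the extended Mohr-Coulomb criterion for unsaturated soils (Fredlund's phi_b term) in an infinite-slope model: as infiltration destroys matric suction, the suction contribution to shear strength vanishes and the factor of safety drops. All soil parameters below are generic assumptions for a weathered soil, not the paper's test results.

```python
import math

def infinite_slope_fs(c_eff, phi_eff_deg, phi_b_deg, suction,
                      gamma, depth, beta_deg):
    """FS = resisting shear strength / driving shear stress on the slip plane.

    Resisting strength: tau_f = c' + sigma_n * tan(phi') + suction * tan(phi_b).
    Units: stresses in kPa, gamma in kN/m3, depth in m, angles in degrees.
    """
    beta = math.radians(beta_deg)
    normal_stress = gamma * depth * math.cos(beta) ** 2
    driving = gamma * depth * math.sin(beta) * math.cos(beta)
    resisting = (c_eff
                 + normal_stress * math.tan(math.radians(phi_eff_deg))
                 + suction * math.tan(math.radians(phi_b_deg)))
    return resisting / driving

# Assumed: c' = 5 kPa, phi' = 32 deg, phi_b = 15 deg,
# gamma = 18 kN/m3, slip depth 2 m, slope angle 35 deg.
fs_dry = infinite_slope_fs(5.0, 32.0, 15.0, suction=30.0,
                           gamma=18.0, depth=2.0, beta_deg=35.0)
fs_wet = infinite_slope_fs(5.0, 32.0, 15.0, suction=0.0,
                           gamma=18.0, depth=2.0, beta_deg=35.0)
print(f"FS with 30 kPa suction : {fs_dry:.2f}")
print(f"FS with suction lost   : {fs_wet:.2f}")
```

With these numbers the factor of safety falls from roughly 1.66 to roughly 1.19 when the suction is lost, which is the quantitative shape of the hazard the warning criteria in the study are designed to catch.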

2-D/3-D Seismic Data Acquisition and Quality Control for Gas Hydrate Exploration in the Ulleung Basin (울릉분지 가스하이드레이트 2/3차원 탄성파 탐사자료 취득 및 품질관리)

  • Koo, Nam-Hyung;Kim, Won-Sik;Kim, Byoung-Yeop;Cheong, Snons;Kim, Young-Jun;Yoo, Dong-Geun;Lee, Ho-Young;Park, Keun-Pil
    • Geophysics and Geophysical Exploration
    • /
    • v.11 no.2
    • /
    • pp.127-136
    • /
    • 2008
  • To identify potential gas hydrate areas in the Ulleung Basin, 2-D and 3-D seismic surveys using R/V Tamhae II were conducted in 2005 and 2006. The seismic survey equipment consisted of a navigation system, a recording system, a streamer cable, and an air-gun source. For reliable velocity analysis in a deep-sea area where water depths are mostly greater than 1,000 m and the target depth is up to about 500 msec below the seafloor, a 3-km-long streamer and a 1,035 $in^3$ tuned air-gun array were used. During the survey, a suite of quality control operations was performed, including source signature analysis, 2-D brute stacking, RMS noise analysis, and FK analysis. The source signature was calculated to verify its conformity to the quality specification, and a gun dropout test was carried out to examine signature changes caused by the failure of a single air gun. From the online quality analysis, we concluded that the overall data quality was very good, even though some seismic data were affected by swell noise, parity errors, spike noise, and current rip noise. In particular, after FK filtering and a missing-trace restoration technique demonstrated that the 3-D seismic data inevitably contaminated with current rip noise could be enhanced, the acquired data were accepted and the field survey could continue. Even in survey areas where the acquired data initially fall short of the quality specification, the efficiency of a marine seismic survey can be improved by demonstrating through onboard data processing that the noise can be suppressed.

Study on Shear Strength Using a Portable Dynamic Cone Penetration Test and Relationship between N-Nc (소형동적콘관입시험을 이용한 전단강도 산정 및 N-Nc 상관관계 연구)

  • Kim, Hyukho;Lim, Heuidae
    • Economic and Environmental Geology
    • /
    • v.50 no.2
    • /
    • pp.145-157
    • /
    • 2017
  • Because of recent intense rainfall, landslides and slope failures have occurred frequently nationwide. To provide countermeasures for the natural disasters that occur in these localities and on slopes, ground strength parameters (shear strength) must be derived as design input data. However, a conventional ground survey of such areas and slopes requires additional deforestation and a high economic cost to gain access. A portable dynamic cone penetration tester that a person can carry into the field makes it possible to measure the characteristics and strength constants of the ground easily at multiple locations. In this study, through a review of domestic and foreign portable dynamic cone penetration test methods, a cone material and test procedure suitable for Korea are proposed. The cone penetration resistance Nc measured in the field was compared with the standard penetration test N value. In addition, direct shear tests and borehole shear tests were performed, and the correlations of the Nc value with depth, bedrock, soil type, and fines content (percent passing the #200 sieve) were analyzed. In particular, for the sandy soils that are widely distributed in mountainous areas, relationships between the Nc value and the shear strength parameters (cohesion and internal friction angle) are proposed so that the effective ground shear strength can be estimated.

Observational failure analysis of precast buildings after the 2012 Emilia earthquakes

  • Minghini, Fabio;Ongaretto, Elena;Ligabue, Veronica;Savoia, Marco;Tullini, Nerio
    • Earthquakes and Structures
    • /
    • v.11 no.2
    • /
    • pp.327-346
    • /
    • 2016
  • The 2012 Emilia (Italy) earthquakes struck a highly industrialized area containing several thousand prefabricated industrial buildings. Owing to the lack of specific design and detailing for earthquake resistance, precast reinforced concrete (RC) buildings suffered severe damage and, in many cases, partial or total collapse. The present study reports a data inventory of damage from a field survey of prefabricated buildings. The damage database covers more than 1400 buildings (about 30% of the total precast building stock in the struck region). Making use of the available shakemaps of the two mainshocks, the damage distributions were related to the distance from the nearest epicentre and the corresponding pseudo-spectral acceleration at a period of 1 second (PSA at 1 s) or peak ground acceleration (PGA). It was found that about 90% of the severely damaged to collapsed buildings in the database lie within 16 km of the epicentre and experienced a PSA larger than 0.12 g. Moreover, 90% of the slightly to moderately damaged buildings are located less than 25 km from the epicentre and were affected by a PSA larger than 0.06 g. Nevertheless, the undamaged buildings examined are almost uniformly distributed over the struck region, and 10% of them experienced a PSA of not less than 0.19 g. The damage distributions in terms of the maximum experienced PGA show a sudden increase for PGA ≥ 0.28 g: of the 442 buildings in this PGA interval, 55% suffered severe damage up to collapse, 32% reported slight to moderate damage, and the remaining 13% remained undamaged.
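
The kind of empirical tally reported above (the share of severely damaged buildings inside a distance/PSA envelope) can be sketched as follows. The five records are invented placeholders, not entries from the 1400-building database, and the field names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Building:
    distance_km: float   # distance from the nearest epicentre
    psa_1s_g: float      # pseudo-spectral acceleration at 1 s, in g
    damage: str          # "severe", "moderate", or "none"

# Placeholder records standing in for the survey database.
db = [
    Building(5.0, 0.25, "severe"),
    Building(12.0, 0.15, "severe"),
    Building(18.0, 0.10, "moderate"),
    Building(22.0, 0.08, "moderate"),
    Building(30.0, 0.05, "none"),
]

# Share of severely damaged buildings inside the 16 km / 0.12 g envelope.
severe = [b for b in db if b.damage == "severe"]
inside = [b for b in severe
          if b.distance_km <= 16.0 and b.psa_1s_g > 0.12]
share = len(inside) / len(severe)
print(f"severely damaged within 16 km and PSA > 0.12 g: {share:.0%}")
```

Sweeping the distance and PSA thresholds over a real database of this shape is what produces the 90%-within-16-km style statements quoted in the abstract.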