• Title/Summary/Keyword: probability density

Optimal Seismic Rehabilitation of Structures Using Probabilistic Seismic Demand Model (확률적 지진요구모델을 이용한 구조물의 최적 내진보강)

  • Park, Joo-Nam;Choi, Eun-Soo
    • Journal of the Earthquake Engineering Society of Korea
    • /
    • v.12 no.3
    • /
    • pp.1-10
    • /
    • 2008
  • The seismic performance of a structure designed without consideration of seismic loading can be effectively enhanced through seismic rehabilitation. The appropriate level of rehabilitation should be determined based on decision criteria that minimize the anticipated earthquake-related losses. To estimate the anticipated losses, seismic risk analysis should be performed considering the probabilistic characteristics of the hazard and the structural damage. This study presents a decision procedure in which a probabilistic seismic demand model is utilized for the effective estimation and minimization of the total seismic losses through rehabilitation. The probability density function and the cumulative distribution function of the structural damage over a specified time period are established in closed form and combined with loss functions to derive the expected seismic loss. The procedure presented in this study can be used effectively for making decisions on the seismic rehabilitation of structural systems.
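
A minimal numerical sketch of the expected-loss computation described above: integrate a loss function against a damage pdf. The lognormal damage model and the capped-linear loss function are illustrative assumptions, not the paper's closed-form demand model.

```python
# Expected seismic loss E[L] = integral of L(d) * f(d) dd, evaluated numerically.
# The damage pdf and the loss function below are assumed for illustration only.
import numpy as np
from scipy import stats

damage_pdf = stats.lognorm(s=0.6, scale=0.3).pdf      # pdf of a damage index (assumed)
loss = lambda d: np.minimum(d / 0.8, 1.0) * 1.0e6     # loss, capped at total replacement cost

d = np.linspace(1e-6, 3.0, 10_000)
dd = d[1] - d[0]
expected_loss = np.sum(loss(d) * damage_pdf(d)) * dd  # Riemann-sum approximation of E[L]
print(f"expected seismic loss ≈ {expected_loss:,.0f} (currency units)")
```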

An Empirical Study on the Size Distribution of Venture Firms in the center of KOSDAQ Listed Companies (국내 벤처기업 진화과정에 관한 실증분석 - 코스닥상장 기술벤처기업 분석을 중심으로 -)

  • Cho, Sang-Sup;Yang, Young-Seok
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.6 no.1
    • /
    • pp.23-37
    • /
    • 2011
  • This paper carries out an empirical study of whether the size evolution of venture firms follows Gibrat's law (a random growth process) or the Pareto law (a self-organizing process). The empirical test, together with its theoretical motivation, uses serial data on 92 KOSDAQ-listed companies from 2005 through 2008. The results are summarized as follows. First, the Gini coefficient of venture firm size has decreased steadily since 2005 when size is measured by number of employees, while it increased over the same period when measured by sales volume. Second, the size evolution of Korean venture firms follows a power law related to the Pareto law; in particular, the estimated Pareto coefficient α is significantly below 1. Third, the probability that a firm starting from the early growth stage joins the top-tier group is forecast at 6.9%, a result which emphasizes that a venture firm's initial size plays an important role in its long-term evolution.
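
A hedged sketch of the tail-exponent estimation mentioned above: the maximum-likelihood (Hill) estimator of the Pareto coefficient α for sizes above a threshold. The firm-size sample is synthetic; the paper's data and estimator details may differ.

```python
# Hill / maximum-likelihood estimate of the Pareto exponent alpha
# for observations above a threshold x_min. Firm sizes are synthetic.
import numpy as np

rng = np.random.default_rng(0)
sizes = (rng.pareto(0.9, 92) + 1.0) * 10.0          # 92 synthetic "firms", true alpha = 0.9

x_min = 10.0
tail = sizes[sizes >= x_min]
alpha_hat = len(tail) / np.sum(np.log(tail / x_min))
print(f"estimated Pareto alpha = {alpha_hat:.2f}")  # alpha < 1 indicates a very heavy tail
```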

Improvement of Keyword Spotting Performance Using Normalized Confidence Measure (정규화 신뢰도를 이용한 핵심어 검출 성능향상)

  • Kim, Cheol;Lee, Kyoung-Rok;Kim, Jin-Young;Choi, Seung-Ho
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.4
    • /
    • pp.380-386
    • /
    • 2002
  • Conventional post-processing, such as the confidence measure (CM) proposed by Rahim, calculates each phone's CM using the likelihood between the phoneme model and an anti-model, and then obtains a word's CM by averaging the phone-level CMs [1]. With the conventional method, the CMs of some specific keywords are very low and those keywords are usually rejected. The reason is that the statistics of the phone-level CMs are not consistent; in other words, the phone-level CMs have different probability density functions (pdfs) for each phone, and especially for each tri-phone. To overcome this problem, we propose a normalized confidence measure (NCM). Our approach is to transform the CM pdf of each tri-phone to the same pdf under the assumption that the CM pdfs are Gaussian. To evaluate our method we use a common keyword spotting system, in which context-dependent HMMs model keyword utterances and context-independent HMMs model non-keyword utterances. The experimental results show that the proposed NCM reduced the FAR (false alarm rate) from 0.44 to 0.33 FA/KW/HR (false alarms per keyword per hour) at an MDR of about 8%, a 25% improvement in FAR.
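
The normalization step can be sketched as a per-tri-phone z-score under the Gaussian assumption stated in the abstract. The tri-phone labels and statistics below are hypothetical; in practice they would be estimated from a development set.

```python
# Per-tri-phone Gaussian normalization: map each tri-phone's CM scores
# onto a common N(0, 1) scale, then average to get the word-level NCM.
import numpy as np

phone_stats = {"a-b+c": (-1.2, 0.4), "s-i+l": (-0.6, 0.9)}  # (mean, std) per tri-phone (toy values)

def normalized_cm(phone: str, cm: float) -> float:
    mu, sigma = phone_stats[phone]
    return (cm - mu) / sigma  # same pdf for every tri-phone after normalization

word_ncm = np.mean([normalized_cm("a-b+c", -1.0), normalized_cm("s-i+l", -0.8)])
print(f"word-level NCM = {word_ncm:.3f}")
```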

Optimization of Stream Gauge Network Using the Entropy Theory (엔트로피 이론을 이용한 수위관측망의 최적화)

  • Yoo, Chul-Sang;Kim, In-Bae
    • Journal of Korea Water Resources Association
    • /
    • v.36 no.2
    • /
    • pp.161-172
    • /
    • 2003
  • This study evaluates the stream gauge network with the main emphasis on whether the current network can capture the runoff characteristics of the basin. As the evaluation considers neither the special purpose of any individual gauge nor the effect of hydraulic structures, it amounts to an optimization of the current network under the condition that each stream gauge measures the natural runoff volume. The study was applied to the Nam-Han River Basin to optimize a total of 31 stream gauge stations using the entropy concept. The results are summarized as follows. (1) The unit hydrograph representing the basin response to rainfall can be transformed into a probability density function so that the entropy concept can be applied to optimize the stream gauge network. (2) Accurate derivation of the unit hydrographs representing the stream gauge sites was found to be the most important part of the evaluation, which was verified in this research by comparing measured and derived unit hydrographs. (3) The Nam-Han River Basin was found to need at least 28 stream gauge stations when both the shape of the unit hydrograph and the runoff volume are considered; if only the shape of the unit hydrograph is considered, the required number of stream gauges decreases to 23.
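
Result (1) can be sketched directly: normalize a unit hydrograph to unit area so that it becomes a probability density over time, then compute its Shannon entropy. The triangular hydrograph below is an illustrative stand-in for a derived unit hydrograph.

```python
# Treat a unit hydrograph as a pdf over time and compute its entropy.
import numpy as np

t = np.linspace(0.0, 10.0, 101)                        # time (h)
dt = t[1] - t[0]
uh = np.interp(t, [0.0, 2.0, 10.0], [0.0, 30.0, 0.0])  # triangular UH ordinates (m^3/s)

p = uh / (np.sum(uh) * dt)                             # normalize to unit area -> pdf
entropy = -np.sum(p * np.log(p + 1e-12)) * dt          # Shannon entropy in nats
print(f"entropy of the unit-hydrograph pdf = {entropy:.3f} nats")
```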

A Study of the Failure Distribution and the Failure Difference by the Stress on the K-1 Tracked Vehicle (K-1전차의 고장분포와 부하에 따른 고장률 차이에 대한 연구)

  • Lee, Sang-Jin;Choi, Seok-Yoon
    • Journal of the Military Operations Research Society of Korea
    • /
    • v.35 no.2
    • /
    • pp.33-49
    • /
    • 2009
  • The objectives of this study are twofold: first, to show that the hazard function derived from the failure probability density function of the K-1 tracked vehicle takes the form of a bathtub curve; second, to show that the failure mode may differ between two operational situations. The results show that the bathtub curve can be fitted with the Weibull distribution, which assumes different shapes according to the specific stage of the system's life cycle. The K-1 tracked vehicle has a relatively high hazard (failure) rate when it first enters service, and the failure rate decreases for a time immediately afterward. After this break-in period, the surviving components have a fairly constant hazard rate. As the K-1 system ages, its various parts deteriorate and the hazard rate starts increasing. The results also show that the failure rate in a harsh operational environment is higher than that in a mild one. In conclusion, the bathtub curve is logically appropriate for establishing the depot overhaul cycle; moreover, determining the right time for depot overhaul requires considering not only the age of the equipment but also its operational environment.
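
The bathtub stages map onto the Weibull shape parameter, as a quick sketch shows: shape < 1 gives a decreasing hazard (break-in), shape = 1 a constant hazard (useful life), and shape > 1 an increasing hazard (wear-out). All parameter values here are illustrative, not fitted K-1 values.

```python
# Weibull hazard h(t) = (beta/eta) * (t/eta)**(beta - 1); the shape
# parameter beta selects the section of the bathtub curve.
import numpy as np

def weibull_hazard(t, beta, eta):
    return (beta / eta) * (t / eta) ** (beta - 1.0)

t = np.array([100.0, 1000.0, 5000.0])   # operating hours (hypothetical)
for beta, stage in [(0.7, "break-in"), (1.0, "useful life"), (2.5, "wear-out")]:
    print(f"{stage:11s} h(t) =", weibull_hazard(t, beta, eta=3000.0))
```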

Sample thread based real-time BRDF rendering (샘플 쓰레드 기반 실시간 BRDF 렌더링)

  • Kim, Soon-Hyun;Kyung, Min-Ho;Lee, Joo-Haeng
    • Journal of the Korea Computer Graphics Society
    • /
    • v.16 no.3
    • /
    • pp.1-10
    • /
    • 2010
  • In this paper, we propose a novel noiseless method for real-time BRDF rendering on a GPU. Illumination at a surface point is formulated as the integral of the BRDF multiplied by the incident radiance over the hemisphere. The most popular method for computing this integral is the Monte Carlo method, which needs a large number of samples to achieve good image quality, increasing rendering time; conversely, a small number of sample points causes serious image noise. The main contribution of our work is a new importance sampling scheme that produces a set of incoming-ray samples varying continuously with respect to the eye ray. An incoming ray is importance-sampled at different latitude angles of the eye ray, and the ray samples are then linearly connected to form a curve called a thread. These threads give incident rays that move continuously as the eye ray changes, so they do not introduce image noise. Since even a small number of threads achieves plausible quality, and the threads can be precomputed before rendering, they enable real-time BRDF rendering on the GPU.
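
Only the underlying importance-sampling step is sketched below, using cosine-weighted hemisphere sampling, a standard choice for diffuse BRDFs; the paper's actual contribution, linking samples into continuous threads across eye rays, is not reproduced here.

```python
# Cosine-weighted hemisphere sampling: directions with pdf cos(theta)/pi
# about the local +z axis, the kind of importance sampling a thread
# construction would build on.
import numpy as np

rng = np.random.default_rng(1)

def cosine_weighted_sample(n: int) -> np.ndarray:
    u1, u2 = rng.random(n), rng.random(n)
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    x, y = r * np.cos(phi), r * np.sin(phi)
    z = np.sqrt(np.maximum(0.0, 1.0 - u1))     # cos(theta)
    return np.column_stack([x, y, z])

dirs = cosine_weighted_sample(8)               # a small sample set per eye ray
print("mean cos(theta) =", dirs[:, 2].mean())  # ~2/3 in expectation
```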

Application of an Automated Time Domain Reflectometry to Solute Transport Study at Field Scale: Transport Concept (시간영역 광전자파 분석기 (Automatic TDR System)를 이용한 오염물질의 거동에 관한 연구: 오염물질 운송개념)

  • Kim, Dong-Ju
    • Economic and Environmental Geology
    • /
    • v.29 no.6
    • /
    • pp.713-724
    • /
    • 1996
  • The time-series resident solute concentrations, monitored at two field plots using the automated 144-channel TDR system of Kim (this issue), are used to investigate the dominant transport mechanism at the field scale. Two models, based on contradictory assumptions about solute transport in the vadose zone, are fitted to the measured mean breakthrough curves (BTCs): the deterministic one-dimensional convection-dispersion model (CDE) and the stochastic-convective lognormal transfer function model (CLT). In addition, moment analysis has been performed using the probability density functions (pdfs) of the travel time of the resident concentration. The moment analysis shows that the first and second time moments of the resident pdf are larger than those of the flux pdf. From the time moments, expressed as functions of the model parameters, the variance and dispersion of resident solute travel times are derived. The relationship between the variance or dispersion of solute travel time and depth was found to be identical for the time-series flux and resident concentrations, and the two models were tested against this relationship. However, due to significant variations of the transport properties across depth, this test gave unreliable results. Consequently, model performance was evaluated by how well each model predicted the time-series resident BTCs at other depths after calibration at the first depth. This evaluation led to the clear conclusion that for both experimental sites the CLT model gives more accurate predictions than the CDE model, suggesting that solute transport in natural field soils is governed by a stream-tube concept with correlated flow rather than by a complete-mixing model. The poor prediction of the CDE model is attributed to its underestimation of solute spreading, which results in overprediction of the peak concentration.
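
The moment analysis can be sketched as follows: normalize a breakthrough curve into a travel-time pdf and take its first and second time moments. The BTC below is synthetic, not the TDR data from the field plots.

```python
# First and second time moments of a breakthrough curve treated as a pdf.
import numpy as np

t = np.linspace(0.01, 100.0, 5000)                   # time (h)
dt = t[1] - t[0]
btc = np.exp(-0.5 * ((np.log(t) - 3.0) / 0.5) ** 2)  # synthetic breakthrough curve

p = btc / (np.sum(btc) * dt)                         # normalize to a travel-time pdf
m1 = np.sum(t * p) * dt                              # first time moment (mean travel time)
m2 = np.sum(t**2 * p) * dt                           # second time moment
print(f"mean = {m1:.1f} h, variance = {m2 - m1**2:.1f} h^2")
```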

Application of Multi-Dimensional Precipitation Models to the Sampling Error Problem (관측오차문제에 대한 다차원 강우모형의 적용)

  • Yu, Cheol-Sang
    • Journal of Korea Water Resources Association
    • /
    • v.30 no.5
    • /
    • pp.441-447
    • /
    • 1997
  • Rainfall observation using a rain gage network or satellites involves sampling error that depends on the observation method and design. For example, sampling with rain gages is continuous in time but discontinuous in space, which is precisely the source of the sampling error; satellite sampling is the reverse case, continuous in space and discontinuous in time. The sampling error may be quantified using the temporal-spatial characteristics of rainfall and the sampling design. One recent work on this problem is that of North and Nakamoto (1989), who derived a formulation for estimating the sampling error based on the temporal-spatial rainfall spectrum and the design scheme. The formula enables the design of an optimal rain gage network or satellite operation plan, given the statistical characteristics of rainfall. In this paper the formula is reviewed and applied to sampling error problems using several multi-dimensional precipitation models. The results show a limitation of the formulation: it cannot distinguish between models whose parameters reproduce similar second-order statistics of rainfall. This limitation could be overcome by developing a way to consider higher-order statistics, and eventually the probability density function (PDF) of rainfall.
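
A toy illustration of gauge sampling error, continuous in time but discrete in space: compare the areal mean of a synthetic rainfall field with the average over a handful of point gauges. The uncorrelated Gaussian field is a crude stand-in for a multi-dimensional precipitation model.

```python
# Point-gauge estimate of an areal rainfall mean on a synthetic field.
import numpy as np

rng = np.random.default_rng(2)
field = np.abs(rng.normal(5.0, 2.0, size=(100, 100)))  # rainfall depth (mm), no spatial correlation

true_mean = field.mean()                               # "truth": full spatial coverage
gauges = field[rng.integers(0, 100, 5), rng.integers(0, 100, 5)]  # 5 point gauges
print(f"areal mean = {true_mean:.2f} mm, gauge error = {gauges.mean() - true_mean:+.2f} mm")
```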

Microstructure, Tensile Strength and Probabilistic Fatigue Life Evaluation of Gray Cast Iron (회주철의 미세구조와 인장거동 분석 및 확률론적 피로수명평가)

  • Sung, Yong Hyeon;Han, Seung-Wook;Choi, Nak-Sam
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.41 no.8
    • /
    • pp.721-728
    • /
    • 2017
  • High-grade gray cast iron (HCI350) was prepared by adding Cr, Mo, and Cu to gray cast iron (GC300), and its microstructure, mechanical properties, and fatigue strength were studied. Round-bar and plate-type castings were cut and polished to measure the fraction of each microstructural constituent. The size of the flake graphite decreased due to the additives, while the volume percentage of high-density pearlite increased, improving the tensile and fatigue strength. Based on the fatigue life data obtained from the fatigue tests, the probability-stress-life (P-S-N) curve was calculated using the 2-parameter Weibull distribution fitted by the maximum likelihood method. The P-S-N curve showed that the fatigue strength of HCI350 was significantly improved and the scatter of its life data was lower than that of GC300, and that the fatigue life increased further as the fatigue stress was reduced. Data for reliability-based life design were presented by quantitatively showing the allowable stress for the required number of life cycles using the calculated P-S-N curve.
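
The Weibull fit underlying the P-S-N curve can be sketched with SciPy: a maximum-likelihood fit of the 2-parameter Weibull distribution to fatigue lives at a single stress level. The life data are synthetic; `floc=0` pins the location parameter so only shape and scale are estimated.

```python
# ML fit of a 2-parameter Weibull to fatigue lives, plus a B10 life estimate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
lives = stats.weibull_min.rvs(c=2.0, scale=2e5, size=12, random_state=rng)  # cycles (synthetic)

shape, _, scale = stats.weibull_min.fit(lives, floc=0)  # 2-parameter ML fit
b10 = stats.weibull_min.ppf(0.10, shape, scale=scale)   # life at 10% failure probability
print(f"shape = {shape:.2f}, scale = {scale:.3g} cycles, B10 = {b10:.3g} cycles")
```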

Design and Evaluation of a Fuzzy Logic based Multi-hop Broadcast Algorithm for IoT Applications (IoT 응용을 위한 퍼지 논리 기반 멀티홉 방송 알고리즘의 설계 및 평가)

  • Bae, Ihn-han;Kim, Chil-hwa;Noh, Heung-tae
    • Journal of Internet Computing and Services
    • /
    • v.17 no.6
    • /
    • pp.17-23
    • /
    • 2016
  • In future networks such as the Internet of Things (IoT), the number of computing devices is expected to grow exponentially, with each thing communicating with others and acquiring information by itself. Given the growing interest in IoT applications, broadcasting in opportunistic ad hoc networks such as Machine-to-Machine (M2M) is an important transmission strategy that allows fast data dissemination, and in distributed networks for the IoT the energy efficiency of the nodes is a key factor in network performance. In this paper, we propose a fuzzy logic based probabilistic multi-hop broadcast (FPMCAST) algorithm that statistically disseminates data according to the remaining energy rate, the replication density rate of the sending node, and the distance rate between the sending and receiving nodes. In the proposed FPMCAST, the inference engine is based on a fuzzy rule base consisting of 27 if-then rules, which map the input and output parameters to their membership functions. The output of the fuzzy system defines the fuzzy set for the rebroadcasting probability, and defuzzification extracts a numeric result from this fuzzy set; here the Center of Gravity (COG) method is used. The performance of FPMCAST is then evaluated through a simulation study, which demonstrates that the proposed algorithm significantly outperforms flooding and gossiping algorithms. In particular, FPMCAST achieves a longer network lifetime because the residual energy of each node is consumed evenly.
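
A minimal Mamdani-style sketch of the inference-and-defuzzification pipeline described above: fuzzify crisp inputs, fire if-then rules by max-min composition, and defuzzify the aggregated rebroadcast-probability set with the Center of Gravity method. Only two hypothetical rules of the paper's 27 are shown, and all membership functions are illustrative triangles.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

p = np.linspace(0.0, 1.0, 201)                    # universe: rebroadcast probability
low_out, high_out = tri(p, 0.0, 0.2, 0.5), tri(p, 0.5, 0.8, 1.0)

energy, density = 0.7, 0.3                        # crisp inputs (rates in [0, 1])
# Rule 1: IF energy is high AND density is low THEN probability is high
w1 = min(tri(energy, 0.5, 1.0, 1.5), tri(density, -0.5, 0.0, 0.5))
# Rule 2: IF energy is low THEN probability is low
w2 = tri(energy, -0.5, 0.0, 0.5)

agg = np.maximum(np.minimum(w1, high_out), np.minimum(w2, low_out))  # aggregate rule outputs
cog = np.sum(p * agg) / np.sum(agg)                                  # COG defuzzification
print(f"rebroadcast probability = {cog:.2f}")
```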