• Title/Summary/Keyword: Empirical Probability

Search Result 332, Processing Time 0.023 seconds

Evaluation and Comparison of the Solubility Models for Solute in Monosolvents

  • Min-jie Zhi;Wan-feng Chen;Yang-bo Xi
    • Korean Chemical Engineering Research
    • /
    • v.62 no.1
    • /
    • pp.53-69
    • /
    • 2024
  • The solubility of Cloxacillin sodium in ethanol, 1-propanol, isopropanol, and acetone was measured at different temperatures, and the melting properties were tested with a differential scanning calorimeter (DSC). The solubility data were then fitted with the Apelblat equation and the λh equation. The Wilson and NRTL models were not used to correlate the data, since Cloxacillin sodium decomposes directly after melting. For comparison, the four empirical models (Apelblat equation, λh equation, Wilson model, and NRTL model) were evaluated using 1155 solubility curves of 103 solutes measured in different monosolvents and at different temperatures. The comparison indicates that the Apelblat equation is superior to the others. Furthermore, a new method (named the calculation method) for determining the Apelblat equation from only three data points was proposed to address cases where there is not enough solute for a full solubility determination. The log-logistic distribution function was used to further capture the trend of the correlation and to enable a better quantitative comparison between the predicted and experimental data for the Apelblat equation determined by the different methods (fitting method or calculation method). The proposed calculation method not only greatly reduces the number of test data points required, but also has satisfactory prediction accuracy.
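
As an illustration of the two ways of determining the Apelblat equation mentioned above, the sketch below (Python; not the paper's code, with made-up solubility values) fits ln x = A + B/T + C ln T by least squares and, as one plausible reading of the proposed calculation method, solves for A, B, and C exactly from three points, since the equation is linear in its parameters.

```python
# Minimal sketch (not the paper's code): fitting the Apelblat equation
# ln x = A + B/T + C*ln T to solubility data, and determining it exactly
# from three data points. The data values below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def apelblat(T, A, B, C):
    """Modified Apelblat equation: returns ln(x) for temperature T in kelvin."""
    return A + B / T + C * np.log(T)

# Hypothetical mole-fraction solubility data (illustrative only)
T = np.array([278.15, 288.15, 298.15, 308.15, 318.15])
ln_x = np.log(np.array([0.010, 0.014, 0.020, 0.029, 0.041]))

# Fitting method: least squares over all points
params, _ = curve_fit(apelblat, T, ln_x, p0=(1.0, -1000.0, 1.0))
print("fitted A, B, C:", params)

# "Calculation method" reading: solve exactly from three points,
# since ln x = A + B/T + C*ln T is linear in (A, B, C).
T3, ln_x3 = T[[0, 2, 4]], ln_x[[0, 2, 4]]
M = np.column_stack([np.ones(3), 1.0 / T3, np.log(T3)])
A, B, C = np.linalg.solve(M, ln_x3)
print("three-point A, B, C:", A, B, C)
```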

Promoting the Consumption of Electric Vehicles: an Empirical Study in Vietnam

  • Cuong NGUYEN;Thao TRAN;Khanh HA;Han PHAN
    • The Journal of Industrial Distribution & Business
    • /
    • v.15 no.3
    • /
    • pp.21-29
    • /
    • 2024
  • Purpose: Electric vehicle (EV) consumption has become more prevalent among Vietnamese consumers. This paper aims to empirically assess the determinants of EV purchase intention among Vietnamese consumers; the findings are expected to promote the consumption of electric vehicles in Vietnam. Research design, data and methodology: The quantitative research approach employed Exploratory Factor Analysis (EFA). The sample comprises 301 respondents. The research design drew on the Unified Theory of Acceptance and Use of Technology (UTAUT) and UTAUT2. Data were collected through non-probability sampling using a 24-question questionnaire distributed to respondents via a Google Forms link, and were processed with SPSS version 20. Results: The results propose four determinants of the intention to buy electric vehicles: Government Support, Environmental Concern, Price Value, and Performance. Conclusions: Theoretical and managerial implications are discussed to promote the consumption of electric vehicles in Vietnam. The findings show that Price Value, Environmental Concern, and Performance positively affect the purchase intention of EVs among Vietnamese consumers, while Government Support proves to be an insignificant factor in EV purchase intention. Further research should examine the role of government support in promoting EV consumption in Vietnam and other emerging markets worldwide.
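
For readers who want to reproduce the general workflow, the sketch below runs an exploratory factor analysis on synthetic Likert-scale responses with scikit-learn; the simulated responses, the use of Python instead of SPSS, and the factor interpretation are all assumptions for illustration only.

```python
# Minimal sketch (not the paper's SPSS workflow): an exploratory factor
# analysis on synthetic 5-point Likert responses, keeping four factors as in
# the study. The item responses here are random, so the loadings are only a
# demonstration of the mechanics.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items = 301, 24          # sample size and item count from the abstract
X = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)

fa = FactorAnalysis(n_components=4, random_state=0)
fa.fit(X)

# Loadings: rows are factors, columns are questionnaire items.
loadings = fa.components_
for i, row in enumerate(loadings, start=1):
    top_items = np.argsort(np.abs(row))[::-1][:3]
    print(f"Factor {i}: strongest items -> {top_items + 1}")
```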

Characterizing and modelling nonstationary tri-directional thunderstorm wind time histories

  • Y.X. Liu;H.P. Hong
    • Wind and Structures
    • /
    • v.38 no.4
    • /
    • pp.277-293
    • /
    • 2024
  • The recorded thunderstorm winds at a point contain tri-directional components. The probabilistic characteristics of such recorded winds in terms of instantaneous mean wind speed and direction, and the probability distribution and the time-frequency dependent crossed and non-crossed power spectral density functions for the high-frequency fluctuating wind components are unclear. In the present study, we analyze the recorded tri-directional thunderstorm wind components by separating the recorded winds in terms of low-frequency time-varying mean wind speed and high-frequency fluctuating wind components in the alongwind direction and two orthogonal crosswind directions. We determine the time-varying mean wind speed and direction defined by azimuth and elevation angles, and analyze the spectra of high-frequency wind components in three orthogonal directions using continuous wavelet transforms. Additionally, we evaluate the coherence between each pair of fluctuating winds. Based on the analysis results, we develop empirical spectral models and lagged coherence models for the tri-directional fluctuating wind components, and we indicate that the fluctuating wind components can be treated as Gaussian. We show how they can be used to generate time histories of the tri-directional thunderstorm winds.
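
A rough illustration of this decomposition workflow is sketched below (Python on synthetic signals, with PyWavelets and SciPy standing in for the authors' tools): a slowly varying mean is removed with a moving average, the high-frequency fluctuation is examined with a continuous wavelet transform, and an ordinary magnitude-squared coherence (not the paper's lagged coherence) is estimated between two fluctuating components.

```python
# Minimal sketch under stated assumptions (not the paper's code): separate a
# synthetic thunderstorm wind record into a slowly varying mean and a
# high-frequency fluctuation, inspect the fluctuation's time-frequency content
# with a continuous wavelet transform, and estimate coherence between two
# fluctuating components.
import numpy as np
import pywt
from scipy.signal import coherence

fs = 10.0                                  # sampling rate in Hz (assumed)
t = np.arange(0, 600, 1 / fs)              # 10-minute record
rng = np.random.default_rng(1)

# Synthetic alongwind and crosswind records: nonstationary mean + turbulence
mean_u = 15 * np.exp(-((t - 300) / 120) ** 2)      # downburst-like ramp
u = mean_u + rng.normal(0, 2.0, t.size)
v = 0.5 * mean_u + rng.normal(0, 1.5, t.size)

# Low-frequency time-varying mean via a 30 s moving average; the residual is
# the high-frequency fluctuating component.
win = int(30 * fs)
kernel = np.ones(win) / win
u_fluc = u - np.convolve(u, kernel, mode="same")
v_fluc = v - np.convolve(v, kernel, mode="same")

# Time-frequency spectrum of the fluctuation with a complex Morlet wavelet
scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(u_fluc, scales, "cmor1.5-1.0", sampling_period=1 / fs)
print("scalogram shape:", np.abs(coeffs).shape)

# Ordinary magnitude-squared coherence between the two fluctuating components
f, Cxy = coherence(u_fluc, v_fluc, fs=fs, nperseg=1024)
print("peak coherence:", Cxy.max())
```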

A study on a tendency of parameters for nonstationary distribution using ensemble empirical mode decomposition method (앙상블 경험적 모드분해법을 활용한 비정상성 확률분포형의 매개변수 추세 분석에 관한 연구)

  • Kim, Hanbeen;Kim, Taereem;Shin, Hongjoon;Heo, Jun-Haeng
    • Journal of Korea Water Resources Association
    • /
    • v.50 no.4
    • /
    • pp.253-261
    • /
    • 2017
  • Many nonstationary frequency analyses have been studied in recent years as nonstationarity appears in hydrologic time series data. In nonstationary frequency analysis, various forms of probability distributions have been proposed to account for the time-dependent statistical characteristics of nonstationary data, and various parameter estimation methods have also been studied. In this study, we introduce a parameter estimation method for the nonstationary Gumbel distribution using ensemble empirical mode decomposition (EEMD) and compare the results with the method of maximum likelihood. Annual maximum rainfall data with a trend, observed by the Korea Meteorological Administration (KMA), were used. As a result, both EEMD and the method of maximum likelihood selected an appropriate nonstationary Gumbel distribution for linearly trending data, while EEMD selected a more appropriate nonstationary Gumbel distribution than the method of maximum likelihood for quadratically trending data.
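
The sketch below illustrates the general idea on synthetic data (it is not the authors' procedure): EEMD from the third-party PyEMD package extracts a slowly varying component that is treated as the time-varying Gumbel location, and the scale is estimated from the detrended residual.

```python
# Minimal sketch under stated assumptions (not the authors' code): EEMD-based
# trend extraction feeding a time-dependent Gumbel model. Assumes the PyEMD
# package ("EMD-signal") is installed; the rainfall series is synthetic.
import numpy as np
from PyEMD import EEMD
from scipy.stats import gumbel_r

rng = np.random.default_rng(2)
years = np.arange(1973, 2017)
# Synthetic annual maximum rainfall (mm) with a linear trend plus Gumbel noise
amax = 150 + 1.2 * (years - years[0]) + gumbel_r.rvs(scale=30, size=years.size,
                                                     random_state=2)

# EEMD decomposition; the last component is taken as the slowly varying trend.
eemd = EEMD(trials=100, noise_width=0.2)
imfs = eemd.eemd(amax, years.astype(float))
trend = imfs[-1]

# Time-dependent Gumbel: location follows the trend, scale from the residual.
residual = amax - trend
loc0, beta = gumbel_r.fit(residual)
p = 0.99                                   # e.g. the 100-year quantile
quantile_t = trend + loc0 - beta * np.log(-np.log(p))
print("100-year quantile in first and last year:", quantile_t[0], quantile_t[-1])
```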

An Assessment of Groundwater Pollution Potential of a Proposed Petrochemical Plant Site in Ulsan, South Korea: Hydrogeologic and site characterization and groundwater pollution potential by utilizing several empirical assessment methodologies (지하수 오염 가능성 평가 -수리지질 및 부지특성 조사와 경험적 평가 방법을 이용한 지하수 오염 가능성-)

  • Han, Jeong Sang;Han, Kyu Sang;Lee, Yong Dong;Yoo, Dae Ho
    • Economic and Environmental Geology
    • /
    • v.23 no.4
    • /
    • pp.425-452
    • /
    • 1990
  • A tentative hydrogeologic and hydrodispersive study was carried out to evaluate the groundwater pollution potential at a selected site by utilizing empirical assessment methodologies ahead of a quantitative, computer-aided assessment. The uppermost aquifer is defined as the saturated overburden and weathered zone, including the upper part of the highly fractured rock. The representative hydraulic conductivity and storativity of the uppermost aquifer are estimated at 2.88E-6 m/s and 0.09, respectively. The calculated Darcian and average linear velocities of groundwater along the major pathway are 0.011 m/d and 0.12 m/d, with an average hydraulic gradient of 4.6% at the site. The empirical assessment methodologies indicate that 1) DRASTIC places the site in a non-sensitive, non-vulnerable area; 2) the Legrand numerical rating system classes the probability of contamination and degree of acceptability as "Maybe-Improbable, and Probable Acceptable and Marginally Unacceptable" with a situation grade of "B"; and 3) the waste soil-site interaction matrix assessment categorizes the study site as a "Class-8 Site".
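
The velocity figures quoted in the abstract follow from Darcy's law in a few lines; the short check below reproduces them, assuming the reported storativity of 0.09 is used as an estimate of effective porosity (an assumption made here for illustration).

```python
# Darcian velocity q = K * i and average linear (seepage) velocity v = q / n_e,
# using the values quoted in the abstract. The use of storativity as effective
# porosity is an assumption for this check, not a statement from the paper.
K = 2.88e-6            # hydraulic conductivity, m/s
i = 0.046              # average hydraulic gradient (4.6 %)
n_e = 0.09             # effective porosity (storativity used as a proxy)

q = K * i                              # Darcian velocity (specific discharge), m/s
v = q / n_e                            # average linear velocity, m/s
print(f"Darcian velocity : {q * 86400:.3f} m/d")   # ~0.011 m/d, as reported
print(f"Linear velocity  : {v * 86400:.3f} m/d")   # ~0.127 m/d, close to the reported 0.12 m/d
```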

A Quantitative Trust Model based on Empirical Outcome Distributions and Satisfaction Degree (경험적 확률분포와 만족도에 기반한 정량적 신뢰 모델)

  • Kim, Hak-Joon;Sohn, Bong-Ki;Lee, Seung-Joo
    • The KIPS Transactions:PartB
    • /
    • v.13B no.7 s.110
    • /
    • pp.633-642
    • /
    • 2006
  • In the Internet environment, many interactions take place between users and unknown users, and trust information about the other party is usually scarce. Due to this lack of trust information, entities have to take risks in transactions with others. From this perspective, it is crucial for entities to be able to accumulate and manage trust information on other entities in order to reduce risk and uncertainty in their transactions. This paper is concerned with a quantitative computational trust model that takes multiple evaluation criteria into account and uses recommendations from others to obtain the trust for an entity. In the proposed trust model, the trust for an entity is defined as the expectation that the entity yields satisfactory outcomes in the given situation. Once an interaction has been made with an entity, outcomes are assumed to be observed with respect to the evaluation criteria. When trust information is needed, the satisfaction degree, i.e., the probability of generating satisfactory outcomes for each evaluation criterion, is computed from the empirical outcome distributions and the entity's preference degrees over the outcomes. The satisfaction degrees for the evaluation criteria are then aggregated into a trust value, into which the reputation information is also incorporated. Experiments in an e-commerce setting show that the model can help entities choose transaction partners effectively.
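
A minimal sketch of the general idea follows (invented criteria, outcomes, weights, and preference degrees; not the authors' exact formulation): each criterion's satisfaction degree is the expected preference under the empirical outcome distribution, the degrees are aggregated with criterion weights, and reputation is blended in with a fixed weight.

```python
# Minimal sketch of the general idea: per-criterion satisfaction degree as the
# expected preference over an empirical outcome distribution, weighted
# aggregation into a direct trust value, and a convex combination with
# reputation from recommenders. All numbers below are invented.
import numpy as np

# Observed outcome classes per evaluation criterion (hypothetical)
outcomes = {
    "delivery": np.array([0, 0, 1, 2, 1, 0, 2, 1]),   # 0=fast, 1=ok, 2=late
    "quality":  np.array([1, 1, 0, 1, 1, 1, 0, 1]),   # 0=poor, 1=good
}
# Preference degree (degree of satisfaction) assigned to each outcome class
preference = {
    "delivery": np.array([1.0, 0.6, 0.1]),
    "quality":  np.array([0.2, 1.0]),
}
criterion_weight = {"delivery": 0.4, "quality": 0.6}

def satisfaction_degree(obs, pref):
    """Expected preference under the empirical outcome distribution."""
    counts = np.bincount(obs, minlength=pref.size)
    empirical = counts / counts.sum()
    return float(empirical @ pref)

direct_trust = sum(criterion_weight[c] * satisfaction_degree(outcomes[c], preference[c])
                   for c in outcomes)

reputation = 0.7        # aggregated recommendation from other entities (assumed)
alpha = 0.8             # weight on own experience vs. reputation (assumed)
trust = alpha * direct_trust + (1 - alpha) * reputation
print(f"direct trust = {direct_trust:.3f}, combined trust = {trust:.3f}")
```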

An Empirical Study on the Size Distribution of Venture Firms in the center of KOSDAQ Listed Companies (국내 벤처기업 진화과정에 관한 실증분석 - 코스닥상장 기술벤처기업 분석을 중심으로 -)

  • Cho, Sang-Sup;Yang, Young-Seok
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.6 no.1
    • /
    • pp.23-37
    • /
    • 2011
  • This paper carries out an empirical study of whether the evolution of venture firms' size follows Gibrat's law (a random evolution process) or the Pareto law (a self-organizing process). The empirical test, together with a theoretical explanation, uses serial data on 92 KOSDAQ-listed companies from 2005 through 2008. The results are summarized as follows. First, the Gini coefficient representing the concentration of venture firm size has steadily decreased since 2005 in terms of the number of employees, while it increased over the same period in terms of sales volume. Second, the evolution of Korean venture firms' size follows a power law related to the Pareto law; in particular, the estimated Pareto coefficient α is significantly lower than 1. Third, the probability that a firm starting from the early growth stage joins the top-tier group is forecast at 6.9%, a result emphasizing that a venture firm's starting size plays an important role in its long-term evolution.
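
The two quantities used in the analysis are straightforward to compute; the sketch below does so on synthetic Pareto-distributed firm sizes (not the study's KOSDAQ data): a Gini coefficient and a Pareto exponent α estimated from a log-log rank-size regression.

```python
# Minimal sketch on synthetic data (not the study's panel): Gini coefficient of
# firm size and a Pareto (power-law) exponent from a rank-size regression.
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical firm sizes (e.g. sales volume) drawn from a Pareto distribution
sizes = (rng.pareto(a=0.9, size=92) + 1) * 10.0

def gini(x):
    """Gini coefficient computed from the sorted cumulative sums."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# Rank-size (Zipf-type) regression: log(rank) = const - alpha * log(size)
sorted_sizes = np.sort(sizes)[::-1]
ranks = np.arange(1, sorted_sizes.size + 1)
slope, intercept = np.polyfit(np.log(sorted_sizes), np.log(ranks), 1)
alpha_hat = -slope

print(f"Gini coefficient : {gini(sizes):.3f}")
print(f"Estimated alpha  : {alpha_hat:.2f}")   # alpha < 1 indicates heavy concentration
```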

A Stochastic Study for the Emergency Treatment of Carbon Monoxide Poisoning in Korea (일산화탄소중독(一酸化炭素中毒)의 진료대책(診療對策) 수립(樹立)을 위한 추계학적(推計學的) 연구(硏究))

  • Kim, Yong-Ik;Yun, Dork-Ro;Shin, Young-Soo
    • Journal of Preventive Medicine and Public Health
    • /
    • v.16 no.1
    • /
    • pp.135-152
    • /
    • 1983
  • Emergency medical service is an important part of the health care delivery system, and the optimal allocation of resources and their efficient utilization are essential, since these conditions are prerequisites to prompt treatment, which in turn is crucial for saving lives and reducing the undesirable sequelae of the event. This study, taking the hyperbaric chamber for carbon monoxide poisoning as an example, develops a stochastic approach to the optimal allocation of such emergency medical facilities in Korea. In Korea, the hyperbaric chamber is used almost exclusively for the treatment of acute carbon monoxide poisoning, most of which occurs at home, since coal briquettes are used as domestic fuel by 69.6 per cent of the Korean population. The annual incidence rate of comatose and fatal carbon monoxide poisoning is estimated at 45.5 per 10,000 of the coal briquette-using population. It poses a serious public health problem and accounts for a large portion of emergency outpatients, especially in the winter season. The required number of hyperbaric chambers can be calculated by setting the level of the annual queueing rate, defined here as the proportion of queued patients among the total patients in a year. The rate is determined by the size of the coal briquette-using population, which generates a certain number of carbon monoxide poisoning patients according to the annual incidence rate, and by the number of hyperbaric chambers per hospital to which the patients are sent, assuming no referral of patients among hospitals. Queueing occurs due to the conflicting events of the 'arrival' of patients and the 'service' of the hyperbaric chambers. Here we assume that the service time of a hyperbaric chamber is fixed at sixty minutes and that the service discipline is 'first come, first served'. The arrival pattern of carbon monoxide poisoning is relatively unique, because it usually occurs while people are in bed. The diurnal variation of carbon monoxide poisoning can hardly be formulated mathematically, so the empirical cumulative distribution of the probability of hourly patient arrivals was used in a Monte Carlo simulation to calculate the probability of queueing by the number of patients per day, for the cases of one, two, or three hyperbaric chambers assumed to be available per hospital. The incidence of carbon monoxide poisoning also has strong seasonal variation, because of the four distinct seasons in Korea, so the number of patients per day could not be assumed to follow a Poisson distribution. Testing the fitness of various distributions of rare events, it turned out that the daily distribution of carbon monoxide poisoning fits the Polya-Eggenberger distribution well. With this model, we could forecast the number of poisonings per day by the size of the coal briquette-using population. By combining the probability of queueing by the number of patients per day with the probability of the number of patients per day in a year, we can estimate the number of queued patients and the total number of patients in a year, by the number of hyperbaric chambers per hospital and by the size of the coal briquette-using population.
Setting 5 per cent as the annual queueing rate, the required number of hyperbaric chambers was calculated for each province and for the whole country, for treatment rates of 25, 50, 75 and 100 per cent, where the treatment rate is the proportion of patients treated with a hyperbaric chamber among all patients who are to be treated. The findings of the study were as follows. 1. The probability of the number of patients per day follows the Polya-Eggenberger distribution:
$$P(X=\gamma)=\frac{\prod_{k=1}^{\gamma}\left[m+(k-1)\times 10.86\right]}{\gamma!}\times 11.86^{-\left(\frac{m}{10.86}+\gamma\right)}, \quad \gamma=1,2,\ldots,n$$
$$P(X=0)=11.86^{-m/10.86}$$
The hourly arrival pattern of the patients turned out to be bimodal; the large peak was observed at 7:00~8:00 a.m. and the small peak at 11:00~12:00 p.m. 2. With only one or two hyperbaric chambers installed per hospital, the annual queueing rate will exceed 5 per cent. Only with three chambers does the rate reach 5 per cent, when the average number of patients per day is 0.481. 3. According to the results above, a hospital equipped with three hyperbaric chambers will be able to serve populations of 166,485, 83,242, 55,495 and 41,620 when the treatment rates are 25, 50, 75 and 100 per cent, respectively. 4. The required numbers of hyperbaric chambers are estimated at 483, 963, 1,441 and 1,923 when the treatment rates are taken as 25, 50, 75 and 100 per cent; the shortages therefore turn out to be 312, 791, 1,270 and 1,752, respectively. The author believes that the methodology developed in this study will also be applicable to resource allocation problems for other kinds of emergency medical facilities.
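
The fitted daily-count model from finding 1 can be checked numerically; the sketch below implements the Polya-Eggenberger pmf exactly as written above and verifies that it sums to one and has mean m (the value 0.481 quoted in finding 2 is used here purely as an example).

```python
# Minimal numerical check of the Polya-Eggenberger model quoted in the abstract:
#   P(X = r) = [prod_{k=1..r} (m + (k-1)*10.86)] / r! * 11.86^-(m/10.86 + r)
# where m is the mean number of patients per day.
import math

D = 10.86  # clustering (contagion) parameter from the fitted model

def polya_pmf(r, m, d=D):
    """Polya-Eggenberger probability of r carbon monoxide poisoning cases in a day."""
    if r == 0:
        return (1 + d) ** (-(m / d))
    num = 1.0
    for k in range(1, r + 1):
        num *= m + (k - 1) * d
    return num / math.factorial(r) * (1 + d) ** (-(m / d + r))

m = 0.481                      # example: mean patients/day quoted for the 5 % queueing case
probs = [polya_pmf(r, m) for r in range(0, 500)]
print("pmf sums to      :", round(sum(probs), 4))                              # ~1.0
print("mean of the model:", round(sum(r * p for r, p in enumerate(probs)), 4)) # ~m
```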

Development of Empirical Fragility Function for High-speed Railway System Using 2004 Niigata Earthquake Case History (2004 니가타 지진 사례 분석을 통한 고속철도 시스템의 지진 취약도 곡선 개발)

  • Yang, Seunghoon;Kwak, Dongyoup
    • Journal of the Korean Geotechnical Society
    • /
    • v.35 no.11
    • /
    • pp.111-119
    • /
    • 2019
  • The high-speed railway system is mainly composed of tunnels, bridges, and viaducts to provide the straightness needed to maintain speeds of up to 400 km/h. The seismic fragility of high-speed railway infrastructure can be assessed in two ways: one is to study each element of the infrastructure analytically or numerically, which requires a large research effort because of the wide range of the railway system; alternatively, an empirical method can be used to assess the fragility of the entire system efficiently, provided case history data are available. In this study, we collect case history data from the 2004 Mw 6.6 Niigata earthquake to develop an empirical seismic fragility function for a railway system. Five types of intensity measures (IMs) and damage levels are assigned to all segments of the target system, with a unit segment length of 200 m. From statistical analysis, the probability of exceedance for a certain damage level (DL) is calculated as a function of the IM. A log-normal CDF is then fitted to these probability data points using the MLE method, forming the fragility function for each damage level of exceedance. Evaluating the resulting fragility functions, we observe that the spectral acceleration at T = 3.0 s (SAT3.0) is superior to the other IMs, with a lower standard deviation of the log-normal CDF and a lower fitting error. This indicates that long-period ground motion has a greater impact on railway infrastructure such as tunnels and bridges. It is observed that P(DL>1) = 2% when SAT3.0 = 0.1 g, and P(DL>1) = 23.9% when SAT3.0 = 0.2 g.
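
A minimal sketch of this kind of empirical fragility fit is shown below on synthetic data (not the Niigata dataset): binary damage observations per segment are generated from an assumed log-normal fragility curve, and the median and log-standard deviation are recovered by maximum likelihood.

```python
# Minimal sketch (synthetic data, assumed parameters): fitting a log-normal CDF
# fragility function P(DL >= 1 | IM) by maximum likelihood on per-segment
# binary damage observations, as commonly done for empirical fragility curves.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)
# Hypothetical intensity measure (e.g. spectral acceleration at T = 3.0 s, in g)
im = rng.lognormal(mean=np.log(0.15), sigma=0.6, size=500)
true_theta, true_beta = 0.25, 0.5                       # assumed "true" curve
damaged = rng.random(500) < norm.cdf(np.log(im / true_theta) / true_beta)

def neg_log_like(params):
    ln_theta, ln_beta = params
    p = norm.cdf((np.log(im) - ln_theta) / np.exp(ln_beta))
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(damaged * np.log(p) + (~damaged) * np.log(1 - p))

res = minimize(neg_log_like, x0=[np.log(0.2), np.log(0.4)], method="Nelder-Mead")
theta_hat, beta_hat = np.exp(res.x)
print(f"median theta ~ {theta_hat:.3f} g, log-std beta ~ {beta_hat:.3f}")
print("P(DL>=1 | IM=0.1 g):", norm.cdf(np.log(0.1 / theta_hat) / beta_hat))
print("P(DL>=1 | IM=0.2 g):", norm.cdf(np.log(0.2 / theta_hat) / beta_hat))
```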

Efficient Correlation Channel Modeling for Transform Domain Wyner-Ziv Video Coding (Transform Domain Wyner-Ziv 비디오 부호를 위한 효과적인 상관 채널 모델링)

  • Oh, Ji-Eun;Jung, Chun-Sung;Kim, Dong-Yoon;Park, Hyun-Wook;Ha, Jeong-Seok
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.3
    • /
    • pp.23-31
    • /
    • 2010
  • Increasing demand for low-power, low-complexity video encoders has motivated extensive research on distributed video coding (DVC), in which the encoder compresses frames without exploiting inter-frame statistical correlation. In a DVC encoder, in contrast to a conventional video encoder, an error-control code compresses the video frames by representing them as syndrome bits. Meanwhile, the DVC decoder generates side information, modeled as a noisy version of the original video frames, and a decoder of the error-control code corrects the errors in the side information using the syndrome bits. The noisy observation, i.e., the side information, can be understood as the output of a virtual channel corresponding to the original video frames, and the conditional probability of the virtual channel model is assumed to follow a Laplacian distribution. Thus, the performance of a DVC system depends on the performance of the error-control code and the optimal reconstruction step in the DVC decoder, and the performance of these two constituent blocks is in turn directly related to better estimation of the correlation channel parameter. In this paper, we propose an algorithm to estimate the parameter of the correlation channel, along with a low-complexity version of the proposed algorithm. In particular, the proposed algorithm minimizes the squared error between the Laplacian probability distribution and the empirical observations. Finally, we show that the conventional algorithm can be improved by adopting a confidence window. The proposed algorithm yields PSNR gains of up to 1.8 dB and 1.1 dB on the Mother and Foreman video sequences, respectively.
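
The sketch below illustrates the flavor of such an estimator on synthetic residuals (it is not the paper's algorithm): the Laplacian parameter is chosen to minimize the squared error between the Laplacian pdf and the empirical residual histogram, and is compared with the classical moment-based estimate.

```python
# Minimal sketch of the general idea on synthetic data: estimate the Laplacian
# parameter of the DVC correlation noise by least squares between the Laplacian
# pdf and the empirical residual histogram; compare with the moment estimate.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
# Synthetic residual between side information and original transform coefficients
residual = rng.laplace(loc=0.0, scale=4.0, size=20000)

# Empirical histogram, normalized to a density
counts, edges = np.histogram(residual, bins=101, range=(-50, 50), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

def laplace_pdf(x, alpha):
    """Laplacian pdf in the alpha parameterization: f(x) = alpha/2 * exp(-alpha*|x|)."""
    return 0.5 * alpha * np.exp(-alpha * np.abs(x))

def squared_error(alpha):
    return np.sum((laplace_pdf(centers, alpha) - counts) ** 2)

res = minimize_scalar(squared_error, bounds=(1e-3, 10.0), method="bounded")
alpha_lse = res.x
alpha_moment = np.sqrt(2.0 / np.var(residual))   # alpha = sqrt(2 / sigma^2)
print(f"least-squares alpha: {alpha_lse:.4f}")
print(f"moment-based alpha : {alpha_moment:.4f}")  # true value here is 1/4 = 0.25
```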