• Title/Summary/Keyword: Random measures


Bio-Equivalence Analysis using Linear Mixed Model (선형혼합모형을 활용한 생물학적 동등성 분석)

  • An, Hyungmi;Lee, Youngjo;Yu, Kyung-Sang
    • The Korean Journal of Applied Statistics
    • /
    • v.28 no.2
    • /
    • pp.289-294
    • /
    • 2015
  • Linear mixed models are commonly used in clinical pharmaceutical studies to analyze repeated measures such as the crossover data of bioequivalence studies. In these models, random effects describe the correlation between repeated outcomes, and the variance-covariance matrix explains the within-subject variability. Bioequivalence analysis verifies whether the 90% confidence interval for the geometric mean ratio of Cmax and AUC between the reference and test drugs lies within the bioequivalence margin [0.8, 1.25]; the analysis is performed with a linear mixed model that treats period, sequence, and treatment as fixed effects and subject nested within sequence as a random effect. A levofloxacin study is presented as a real-data example.
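The acceptance criterion described in the abstract can be sketched numerically. The following is a minimal illustration of the log-scale 90% confidence interval for a geometric mean ratio, using hypothetical paired Cmax values and a simple paired analysis rather than the full mixed model with period and sequence effects:

```python
import math
from statistics import mean, stdev

# Hypothetical paired Cmax values (ng/mL) for 12 subjects; illustration only.
ref  = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3, 5.5, 4.7, 5.2, 5.0, 4.6, 5.4]
test = [5.0, 4.9, 5.4, 5.2, 4.8, 5.1, 5.6, 4.6, 5.3, 4.9, 4.7, 5.5]

# On the log scale the geometric mean ratio becomes a mean difference.
d = [math.log(t) - math.log(r) for t, r in zip(test, ref)]
n = len(d)
se = stdev(d) / math.sqrt(n)
t_crit = 1.796  # two-sided 90% Student-t quantile for df = n - 1 = 11

lo = math.exp(mean(d) - t_crit * se)
hi = math.exp(mean(d) + t_crit * se)
bioequivalent = 0.8 <= lo and hi <= 1.25
print(round(lo, 3), round(hi, 3), bioequivalent)
```

Bioequivalence is concluded only when the entire interval [lo, hi] falls inside [0.8, 1.25].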

An Exploratory Observation of Analyzing Event-Related Potential Data on the Basis of Random-Resampling Method (무선재추출법에 기초한 사건관련전위 자료분석에 대한 탐색적 고찰)

  • Hyun, Joo-Seok
    • Science of Emotion and Sensibility
    • /
    • v.20 no.2
    • /
    • pp.149-160
    • /
    • 2017
  • In hypothesis testing, the interpretation of a statistic obtained from data analysis relies on a probability distribution of that statistic constructed according to statistical theory. For instance, the statistical significance of a mean difference between experimental conditions is determined according to a probability distribution of mean differences (e.g., Student's t) constructed under theoretical assumptions about population characteristics. The present study explores the logic and advantages of the random-resampling approach for analyzing event-related potentials (ERPs), in which a hypothesis is tested against a distribution of empirical statistics built from randomly resampled datasets of real measurements rather than against a theoretical distribution. To aid ERP researchers' understanding of the approach, the study further introduces a specific example in which a random-permutation procedure is applied according to the random-resampling principle, and discusses several cautions regarding its practical application to ERP data analysis.
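The random-permutation procedure described above can be sketched as follows. The amplitudes are hypothetical, and an unpaired permutation scheme is used for simplicity (a within-subject ERP design would instead permute condition labels within each participant):

```python
import random
from statistics import mean

random.seed(0)

# Hypothetical mean ERP amplitudes (microvolts) per participant, two conditions.
cond_a = [2.1, 1.8, 2.5, 2.9, 2.2, 2.7, 1.9, 2.4]
cond_b = [1.6, 1.5, 2.0, 2.3, 1.8, 2.1, 1.4, 1.9]

observed = mean(cond_a) - mean(cond_b)

# Build the empirical null distribution by randomly re-assigning measurements
# to the two conditions and recomputing the mean difference each time.
pooled = cond_a + cond_b
n_perm, n_extreme = 5000, 0
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = mean(pooled[:8]) - mean(pooled[8:])
    if abs(diff) >= abs(observed):
        n_extreme += 1

p_value = n_extreme / n_perm  # two-sided empirical p-value
print(round(observed, 4), p_value)
```

The p-value here is the fraction of resampled mean differences at least as extreme as the observed one, so no distributional assumption about the population is needed.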

A Constant Pitch Based Time Alignment for Power Analysis with Random Clock Power Trace (전력분석 공격에서 랜덤클럭 전력신호에 대한 일정피치 기반의 시간적 정렬 방법)

  • Park, Young-Goo;Lee, Hoon-Jae;Moon, Sang-Jae
    • The KIPS Transactions:PartC
    • /
    • v.18C no.1
    • /
    • pp.7-14
    • /
    • 2011
  • Power analysis attacks on low-power security devices such as smart cards are very powerful, but they require that the measured power signal and the intermediate estimated signal be correlated at consistent time instants while the encryption algorithm runs. Power signals measured from a device that applies a random clock do not line up at the analysis time points, so random clocking is used as a countermeasure against power analysis attacks. This paper proposes a new constant-pitch-based time alignment for power analysis of random-clock power traces. The proposed method neutralizes the effect of the random-clock countermeasure by aligning the irregular power signals in time location and size using a constant pitch. Finally, we apply the proposed method to the AES algorithm in a randomly clocked environment to evaluate it.
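The paper's alignment works on detected clock pitches; purely as an illustration of the underlying idea of mapping irregularly timed samples onto a constant pitch, here is a generic linear-interpolation resampler (the function name and data are hypothetical, not the authors' algorithm):

```python
# Resample an irregularly clocked trace onto a constant pitch by linear
# interpolation between neighbouring samples (generic illustration).
def align_constant_pitch(times, values, pitch):
    """Return `values` interpolated onto a uniform grid with spacing `pitch`."""
    aligned, t, i = [], times[0], 0
    while t <= times[-1]:
        while times[i + 1] < t:
            i += 1
        frac = (t - times[i]) / (times[i + 1] - times[i])
        aligned.append(values[i] + frac * (values[i + 1] - values[i]))
        t += pitch
    return aligned

times  = [0.0, 0.9, 2.2, 2.8, 4.1, 5.0]  # jittered sampling instants
values = [0.0, 0.9, 2.2, 2.8, 4.1, 5.0]  # a linear ramp, so the result is exact
aligned = align_constant_pitch(times, values, pitch=1.0)
print(aligned)
```

Once all traces share the same pitch, sample-by-sample correlation with the estimated intermediate values becomes meaningful again.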

Studying the Comparative Analysis of Highway Traffic Accident Severity Using the Random Forest Method (Random Forest를 활용한 고속도로 교통사고 심각도 비교분석에 관한 연구)

  • Sun-min Lee;Byoung-Jo Yoon;Wut Yee Lwin
    • Journal of the Society of Disaster Information
    • /
    • v.20 no.1
    • /
    • pp.156-168
    • /
    • 2024
  • Purpose: Highway traffic accidents show a repeating pattern of increase and decrease, and the fatality rate is highest on highways among all road types, so improvement measures reflecting the domestic situation are needed. Method: We conducted accident severity analysis using Random Forest on data from accidents occurring on 10 routes with high accident rates among national highways from 2019 to 2021, and identified the factors influencing severity. Result: The analysis, which used the SHAP package to determine the top 10 variables by importance, revealed that the variables with a significant impact on accident severity are: perpetrator age between 20 and 39 years, daytime occurrence (06:00-18:00), weekend occurrence (Sat-Sun), summer and winter seasons, violation of traffic regulations (failure to comply with safe driving), tunnel road type, and geometric structure with a high number of lanes and a high speed limit. In total, 10 independent variables showed a positive correlation with highway traffic accident severity. Conclusion: Because highway accidents arise from the complex interaction of many factors, predicting them is challenging. Building on the results obtained in this study, the factors influencing highway accident severity should be analyzed in depth, and efficient, rational response measures should be established.
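The modelling step can be sketched with a dependency-light stand-in. The study used real 2019-2021 accident records and SHAP values; here synthetic data and impurity-based importances are substituted, and the feature names and the toy severity rule are assumptions for illustration only:

```python
import random
from sklearn.ensemble import RandomForestClassifier

random.seed(1)
features = ["driver_age_20_39", "daytime", "weekend", "summer_or_winter",
            "safe_driving_violation", "tunnel", "lanes", "speed_limit"]

# Synthetic records: six binary factors plus lane count and speed limit.
X, y = [], []
for _ in range(300):
    row = ([random.randint(0, 1) for _ in range(6)]
           + [random.randint(2, 5), random.choice([80, 100, 110])])
    severe = int(row[4] == 1 and row[7] >= 100)  # toy severity rule
    X.append(row)
    y.append(severe)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
for name, imp in sorted(zip(features, model.feature_importances_),
                        key=lambda pair: -pair[1]):
    print(f"{name:24s} {imp:.3f}")
```

With real data, `shap.TreeExplainer` would replace `feature_importances_` to obtain the per-variable attributions reported in the paper.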

A Study on the Uncertainty of MVRS (포구속도측정레이더의 불확도에 관한 연구)

  • Park, Yong-Suk;Choi, Ju-Ho
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.10 no.1
    • /
    • pp.94-100
    • /
    • 2007
  • The MVRS measures velocity based on the Doppler principle. It measures velocities near the muzzle using the Doppler signal output from the antenna and then predicts the velocity of the bullet leaving the muzzle by performing regression analysis on the previously measured velocities. There are a number of error sources in calculating the muzzle velocity: the antenna has a long-term frequency-stability error, and the Doppler signal from the antenna contains noise. These two error sources affect the accuracy of the velocities estimated from the Doppler signal. Errors in the estimated velocities appear as random error in the data statistics, and in the regression analysis these random-error components propagate into the fitting error. This study also analyzes the error components arising from the hardware limitations of the MVRS-700 and its signal-processing method, and presents the calculated uncertainty of the muzzle velocity.

Finite Source Queueing Models for Analysis of Complex Communication Systems (복잡한 통신 시스템의 성능분석을 위한 유한소스 대기 모형)

  • Che-Soong Kim
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.26 no.2
    • /
    • pp.62-67
    • /
    • 2003
  • This paper deals with a first-come, first-served queueing model to analyze the behavior of a heterogeneous finite-source system with a single server. The sources and the processor are assumed to operate in independent Markovian environments. Each request is characterized by its own exponentially distributed source and service times, with parameters depending on the state of the corresponding environment; that is, the arrival and service rates are subject to random fluctuations. Our aim is to obtain the usual stationary performance measures of the system, such as utilizations, the mean number of requests at the server, mean queue lengths, and average waiting and sojourn times. In the case of fast arrivals or fast service, asymptotic methods can be applied; in intermediate situations, stochastic simulation is used. As applications of this model, some problems in the field of telecommunications are treated.
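As a simplified special case of the system analysed here (a single fixed environment and homogeneous sources), the stationary distribution of the finite-source M/M/1//N "machine repair" model can be computed in closed form; this sketch is not the paper's heterogeneous Markov-modulated model:

```python
from math import factorial

def finite_source_mm1(N, lam, mu):
    """Stationary distribution of the homogeneous finite-source M/M/1//N queue."""
    rho = lam / mu
    # Unnormalized weights: P(n) is proportional to N!/(N-n)! * rho^n.
    w = [factorial(N) // factorial(N - n) * rho ** n for n in range(N + 1)]
    total = sum(w)
    p = [x / total for x in w]
    utilization = 1 - p[0]                      # probability the server is busy
    mean_in_system = sum(n * pn for n, pn in enumerate(p))
    return p, utilization, mean_in_system

p, util, mean_n = finite_source_mm1(N=5, lam=0.2, mu=1.0)
print(round(util, 3), round(mean_n, 3))
```

Utilization and the mean number of requests in the system correspond to the stationary measures listed in the abstract; the heterogeneous, randomly fluctuating-rate case requires the asymptotic or simulation methods the paper describes.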

Analysis of $M^{X}/G/1$ and $GEO^{X}/G/1$ Queues with Random Number of Vacations (임의의 횟수의 휴가를 갖는 $M^{X}/G/1$과 $GEO^{X}/G/1$ 대기행렬의 분석)

  • 채경철;김남기;이호우
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.27 no.2
    • /
    • pp.51-61
    • /
    • 2002
  • By using the arrival time approach of Chae et al. [6], we derive various performance measures, including the queue length distributions (in PGFs) and the waiting time distributions (in LST and PGF), for both $M^{X}/G/1$ and $Geo^{X}/G/1$ queueing systems, under the assumption that the server, when it becomes idle, takes multiple vacations up to a random maximum number. This is an extension of both Choudhury [7] and Zhang and Tian [11]. A few mistakes in Zhang and Tian are corrected and meaningful interpretations are supplemented.

Stochastic analysis of a non-identical two-unit parallel system with common-cause failure, critical human error, non-critical human error, preventive maintenance and two type of repair

  • El-Sherbeny, M.S.
    • International Journal of Reliability and Applications
    • /
    • v.11 no.2
    • /
    • pp.123-138
    • /
    • 2010
  • This paper investigates a mathematical model of a parallel system composed of two non-identical units, with common-cause failure, critical human error, non-critical human error, preventive maintenance, and two types of repair (cheaper and costlier). The system goes for preventive maintenance at random epochs. We assume that the failure, repair, and maintenance times are independent random variables, and that the failure, repair, and preventive maintenance rates are constant for each unit. The system is analyzed using the graphical evaluation and review technique (GERT) to obtain various related measures, and we study the effect of preventive maintenance on system performance. Certain important results have been derived as special cases. Plots of the mean time to system failure and the steady-state availability A(${\infty}$) of the system are drawn for different parameter values.


Prediction of Academic Performance of College Students with Bipolar Disorder using different Deep learning and Machine learning algorithms

  • Peerbasha, S.;Surputheen, M. Mohamed
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.7
    • /
    • pp.350-358
    • /
    • 2021
  • In recent years, analysing student performance has become an important and difficult problem for academic institutions. The main idea of this paper is to analyze and evaluate the academic performance of college students with bipolar disorder by applying data-mining classification algorithms in a Jupyter Notebook using Python, which serves as a decision-making tool for assessing students' academic performance. The classifiers used are logistic regression, random forest (Gini), random forest (entropy), decision tree, k-nearest neighbours, AdaBoost, extra trees, Gaussian naive Bayes, and Bernoulli naive Bayes. The classification models are evaluated on measures including accuracy, precision, recall, F1 measure, sensitivity, specificity, R squared, mean absolute error, mean squared error, root mean squared error, TPR, TNR, FPR, and FNR. The results indicate that the decision tree classifier performs better than the other algorithms.
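Most of the rate-style measures listed above follow directly from a confusion matrix. A small sketch with hypothetical counts (not the paper's results):

```python
# Hypothetical confusion-matrix counts for one classifier on a held-out set.
tp, tn, fp, fn = 40, 45, 5, 10

accuracy    = (tp + tn) / (tp + tn + fp + fn)
precision   = tp / (tp + fp)
recall      = tp / (tp + fn)                 # also called sensitivity / TPR
f1          = 2 * precision * recall / (precision + recall)
specificity = tn / (tn + fp)                 # also called TNR
fpr         = fp / (fp + tn)                 # false positive rate
fnr         = fn / (fn + tp)                 # false negative rate
print(accuracy, round(precision, 3), round(f1, 3), specificity)
```

Note that recall, sensitivity, and TPR are the same quantity, as are specificity and TNR, which is why reported measure lists of this kind often contain duplicates.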

A Study on the Methods for Assessing Construct Validity (구성 타당성 평가방법에 관한 연구)

  • 이광희;이선규;장성호
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.22 no.50
    • /
    • pp.1-9
    • /
    • 1999
  • The purpose of this study is to establish a basis for assessing the construct validity of measures used in organizational research. The classic Campbell and Fiske (1959) criteria are found to be lacking in their assumptions, diagnostic information, and power; the inherent confounding of measurement error with systematic trait and method effects is a severe limitation for a proper interpretation of convergent and discriminant validity. The confirmatory factor analysis (CFA) approach overcomes most of the limitations found in Campbell and Fiske's (1959) method, but it confounds random error with unique variance specific to a measure. The second-order confirmatory factor analysis (SOCFA) approach is shown to rest on rather restrictive assumptions that are unlikely to be met in practice. The first-order, multiple-informant, multiple-item (FOMIMI) model is a viable option, but it may also be of limited use because of the large number of measures required.
