• Title/Summary/Keyword: random parameter


Research on Speed Estimation Method of Induction Motor based on Improved Fuzzy Kalman Filtering

  • Chen, Dezhi;Bai, Baodong;Du, Ning;Li, Baopeng;Wang, Jiayin
    • Journal of International Conference on Electrical Machines and Systems / v.3 no.3 / pp.272-275 / 2014
  • An improved fuzzy Kalman filtering speed-estimation scheme was proposed that measures the stator-side voltage and current, based on the vector-control state equation of the induction motor. The designed fuzzy adaptive controller recursively corrected the measurement noise covariance matrix online by monitoring the ratio of the theoretical to the actual residuals, making the matrix gradually approach the real noise level so that the filter performs optimal estimation, which improves the estimation accuracy of the EKF. A co-simulation scheme based on MATLAB and Ansoft was also proposed to improve simulation accuracy; it solved the field-circuit coupling problems of the induction motor under vector control and dramatically improved the parameter optimization accuracy. The simulation and experimental results show that the algorithm strongly suppresses random measurement noise, estimates motor speed accurately, and has superior static and dynamic characteristics.
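
The residual-ratio correction described above can be sketched as a scalar adaptive Kalman filter. This is an illustrative stand-in, not the authors' implementation: the paper uses a fuzzy controller inside a full EKF of the motor state equation, while here a simple multiplicative update on a random-walk model plays the same role, and all tuning values are assumptions.

```python
import numpy as np

def adaptive_kf(measurements, q=1e-4, r0=1.0, window=10):
    """Scalar Kalman filter (random-walk state model) whose measurement
    noise variance R is corrected recursively online from the ratio of
    the actual residual power to its theoretical value, so that R
    gradually approaches the real noise level."""
    x, p, r = measurements[0], 1.0, r0
    residuals, estimates = [], []
    for z in measurements:
        p += q                       # predict step
        nu = z - x                   # innovation (residual)
        residuals.append(nu)
        if len(residuals) >= window:
            s_theory = p + r                                    # theoretical residual covariance
            s_actual = np.mean(np.square(residuals[-window:]))  # actual residual power
            r = max(1e-6, r * s_actual / s_theory)              # recursive correction of R
        k = p / (p + r)              # Kalman gain
        x += k * nu                  # update step
        p *= (1 - k)
        estimates.append(x)
    return np.array(estimates), r
```

When the actual residuals run hotter than the theory predicts, R is inflated, de-weighting the measurements; when they run cooler, R shrinks, which is the adaptation the abstract describes.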

Estimation of Convolutional Interleaver Parameters using Linear Characteristics of Channel Codes (채널 부호의 선형성을 이용한 길쌈 인터리버의 파라미터 추정)

  • Lee, Ju-Byung;Jeong, Jeong-Hoon;Kim, Sang-Goo;Kim, Tak-Kyu;Yoon, Dong-Weon
    • Journal of the Institute of Electronics Engineers of Korea TC / v.48 no.4 / pp.15-23 / 2011
  • An interleaver rearranges channel-encoded data symbol by symbol so that burst errors occurring in the channel are spread into random errors. The interleaving process therefore makes it difficult for a receiver that has no information about the interleaver parameters used in the transmitter to de-interleave an unknown interleaved signal. Recently, various studies have reconstructed unknown interleaved signals by estimating the interleaver parameters, but they have mainly focused on estimating the block interleaver parameters required to reconstruct the de-interleaver. In this paper, as an extension of the previous research, we estimate the convolutional interleaver parameters required to de-interleave an unknown data stream, namely the number of shift registers, the shift register depth, and the codeword length, and propose a de-interleaving procedure that reconstructs the de-interleaver.
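
A minimal model of the parameters in question (the number of shift registers and the register depth) is a convolutional interleaver/de-interleaver pair. The commutator convention below is an assumption for illustration, not taken from the paper; the point is that both parameters must be known to invert the interleaving.

```python
def conv_interleave(data, n_regs, depth):
    """Convolutional interleaver: symbols are fed cyclically into
    n_regs branches, and branch i is a FIFO of i*depth cells."""
    regs = [[0] * (i * depth) for i in range(n_regs)]
    out = []
    for k, sym in enumerate(data):
        i = k % n_regs
        if regs[i]:
            regs[i].append(sym)
            out.append(regs[i].pop(0))  # oldest symbol leaves the FIFO
        else:
            out.append(sym)             # branch 0 is a passthrough
    return out

def conv_deinterleave(data, n_regs, depth):
    """Matching de-interleaver: branch i has the complementary delay
    of (n_regs - 1 - i)*depth cells, equalizing all branch delays."""
    regs = [[0] * ((n_regs - 1 - i) * depth) for i in range(n_regs)]
    out = []
    for k, sym in enumerate(data):
        i = k % n_regs
        if regs[i]:
            regs[i].append(sym)
            out.append(regs[i].pop(0))
        else:
            out.append(sym)
    return out
```

Every branch then carries a total of (n_regs - 1)*depth FIFO cells, so the round trip reproduces the input after a fixed latency of n_regs*(n_regs - 1)*depth symbols; a receiver with wrong parameter estimates gets a permuted, not delayed, stream.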

Probabilistic Modeling of Photovoltaic Power Systems with Big Learning Data Sets (대용량 학습 데이터를 갖는 태양광 발전 시스템의 확률론적 모델링)

  • Cho, Hyun Cheol;Jung, Young Jin
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.5 / pp.412-417 / 2013
  • Analytical modeling of photovoltaic power systems has received significant attention in recent years because it is readily applied to the prediction of system dynamics and to fault detection and diagnosis in advanced engineering technologies. This paper presents a probabilistic modeling approach for such power systems with big learning data sets. First, we express the input/output function of a photovoltaic power system in which solar irradiation and ambient temperature are the input variables and electric power is the output variable. Based on this functional relationship, the conditional probability over these three random variables (irradiation, temperature, and electric power) is defined mathematically and estimated from ratios of sample counts, dividing the count for each case by the number of samples matching the two input variables, which is efficient in particular for a big data sequence from a photovoltaic power system. Lastly, we predict output values from the probabilistic model by taking expectations. Two case studies test the reliability of the proposed modeling methodology.
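
The count-ratio estimate and expectation step can be sketched as follows. The binning scheme, bin counts, and variable handling here are assumptions for illustration, not the paper's exact discretization.

```python
import numpy as np

def fit_conditional_model(irr, temp, power, n_bins=5):
    """Bin all three variables; the conditional probability of a power
    bin given an (irradiation, temperature) cell is its sample count
    divided by the cell's total count, and the model's prediction is
    the expectation of the power-bin centers under that distribution."""
    edges = [np.linspace(v.min(), v.max(), n_bins + 1) for v in (irr, temp, power)]
    counts, _ = np.histogramdd(np.column_stack([irr, temp, power]), bins=edges)
    cell_totals = counts.sum(axis=2, keepdims=True)
    cond_prob = np.divide(counts, cell_totals, out=np.zeros_like(counts),
                          where=cell_totals > 0)          # ratio-of-counts estimate
    p_centers = 0.5 * (edges[2][:-1] + edges[2][1:])
    expected = (cond_prob * p_centers).sum(axis=2)        # E[power | cell]
    return edges[0], edges[1], expected

def predict(model, irr_val, temp_val):
    """Look up the (irradiation, temperature) cell and return E[power]."""
    i_edges, t_edges, expected = model
    i = int(np.clip(np.searchsorted(i_edges, irr_val, side="right") - 1,
                    0, expected.shape[0] - 1))
    t = int(np.clip(np.searchsorted(t_edges, temp_val, side="right") - 1,
                    0, expected.shape[1] - 1))
    return expected[i, t]
```

Because the model is just counting, fitting cost grows linearly with the number of samples, which is what makes this attractive for big data sequences.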

Standard Operating Procedure of Tongue-image Analysis System to Improve the Reliability (설진기 시스템의 혀 영상 획득과정에 대한 표준운영절차 제안)

  • Lee, Hyun-Joo;Kim, Su-Ryun;Nam, Dong-Hyun
    • The Journal of the Society of Korean Medicine Diagnostics / v.20 no.2 / pp.51-65 / 2016
  • Objectives: This study was conducted to suggest a standard operating procedure (SOP) to improve the reliability of a tongue-image analysis system. Methods: A list was made of factors that may affect the diagnostic parameters of the tongue-image analysis system, sub-classified into two groups, controllable and uncontrollable. Only the controllable factors, those that could affect the results and could easily be set up, were included in the SOP draft. A control experiment was performed to investigate the effects of the controllable factors whose influence on the diagnostic parameters was ambiguous: rehearsal of tongue extrusion, alignment of the camera axis, and presentation of a guideline. Three subjects volunteered for this experiment. Each of the three variables was implemented twice, in random order, by two operators on the subjects; in aggregate, 96 tongue images were obtained. The diagnostic parameter set as the primary outcome was the percentage of tongue coating. Results: None of the control variables was significant for either operator; however, the presentation of a guideline had a relatively larger effect than the other two variables. Interaction effects among the variables were also insignificant. Therefore, the presentation of a guideline was included in the final SOP and the other variables were not. Conclusions: We suggest an SOP that can be used by both the experimenter and the subject. Moreover, SOPs adapted to the various types of tongue-image analysis systems should be developed to improve their reliability.

Uncertainty Assessment of Gas Flow Measurement Using Multi-Point Pitot Tubes (다점 피토관을 이용한 기체 유량 측정의 불확도 평가)

  • Yang, Inyoung;Lee, Bo-Hwa
    • The KSFM Journal of Fluid Machinery / v.19 no.2 / pp.5-10 / 2016
  • Gas flow measurement in a closed duct was performed using multi-point Pitot tubes, and the measurement uncertainty of this method was assessed. The method was applied to the measurement of air flow into a gas turbine engine in an altitude engine test facility: 46 Pitot tubes, 15 total-temperature Kiel probes, and 9 static pressure taps were installed in the engine inlet duct, of inner diameter 264 mm. Five tests were run over an air flow range of 2~10 kg/s. The flow was compressible, with Reynolds numbers between 450,000 and 2,220,000. The measurement uncertainty was highest, at 6.1%, for the lowest flow rate and lowest, at 0.8%, for the highest flow rate, because the difference between the total and static pressures, which is related to the flow velocity, approaches zero at low flow rates. It was found that this measurement method can be used only when the flow velocity is relatively high, e.g., 50 m/s. Static pressure was the parameter that most influenced the flow rate measurement uncertainty, while temperature measurement uncertainty was not very important. Measurement of the boundary layer was found to be important for this type of flow rate measurement, but measurement of flow non-uniformity was not, provided that the non-uniformity behaves randomly in the duct.
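
Why the uncertainty grows at low flow rate can be illustrated with the compressible Pitot relation. This is a sketch of the physics, not the facility's actual data reduction: the atmospheric-like inlet conditions and the 50 Pa perturbation below are assumptions; only the 264 mm duct diameter is taken from the abstract.

```python
import math

GAMMA = 1.4          # ratio of specific heats for air
R_AIR = 287.05       # specific gas constant of air, J/(kg K)

def mass_flow(p_total, p_static, t_total, area):
    """Compressible Pitot relation: static temperature from the
    isentropic total-to-static pressure ratio, velocity from the
    total-temperature drop, then m_dot = rho * V * A."""
    t_static = t_total * (p_static / p_total) ** ((GAMMA - 1) / GAMMA)
    cp = GAMMA * R_AIR / (GAMMA - 1)
    v = math.sqrt(max(0.0, 2.0 * cp * (t_total - t_static)))
    rho = p_static / (R_AIR * t_static)
    return rho * v * area

def rel_sensitivity(p_total, p_static, t_total, area, dp=50.0):
    """Relative change of m_dot for a small static-pressure error dp;
    it blows up as (p_total - p_static) -> 0, i.e. at low velocity."""
    m0 = mass_flow(p_total, p_static, t_total, area)
    m1 = mass_flow(p_total, p_static - dp, t_total, area)
    return abs(m1 - m0) / m0
```

At low velocity the same static-pressure error is a large fraction of the total-to-static difference, which is the mechanism behind the 6.1% versus 0.8% spread reported above.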

A Critical Evaluation of Dichotomous Choice Responses in Contingent Valuation Method (양분선택형 조건부가치측정법 응답자료의 실증적 쟁점분석)

  • Eom, Young Sook
    • Environmental and Resource Economics Review / v.20 no.1 / pp.119-153 / 2011
  • This study reviews the model formulation process for dichotomous choice responses in the contingent valuation method (CVM), which has been used increasingly in the preliminary feasibility tests of Korean public investment projects. The theoretical review emphasizes consistency between the WTP estimation process and the WTP measurement process. The empirical analysis suggests that the two common parametric models for dichotomous choice responses (RUM and RWTP) and the two commonly used probability distributions for the random components (probit and logit) yield almost the same empirical WTP distributions, as long as the WTP function is specified as linear in the bid amount. However, the efficiency gain of double-bounded (DB) over single-bounded (SB) responses was supported only on the assumption that the two CV responses derive from the same WTP distribution. Moreover, for the exponential WTP function, which guarantees non-negative WTP measures, the sample mean WTP was quite different from the median WTP when the scale parameter of the WTP function turned out to be large.
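
For the linear-in-bid case, the link between the response model and the WTP measure can be sketched with a single-bounded logit fitted by Newton's method. The simulated survey below and all parameter values are illustrative assumptions, not the study's data or its full RUM/RWTP comparison.

```python
import numpy as np

def fit_single_bounded(bids, yes, iters=25):
    """Single-bounded dichotomous-choice logit fitted by Newton's
    method: P(yes | bid) = sigmoid(a + b * bid). With a WTP function
    linear in the bid, mean and median WTP coincide at -a/b."""
    x = bids / bids.max()            # scale the bid for a stable fit
    a = b = 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(a + b * x)))
        w = p * (1.0 - p)                                       # logistic weights
        g = np.array([np.sum(yes - p), np.sum((yes - p) * x)])  # score vector
        h = np.array([[np.sum(w), np.sum(w * x)],
                      [np.sum(w * x), np.sum(w * x * x)]])      # information matrix
        da, db = np.linalg.solve(h, g)
        a, b = a + da, b + db
    return -(a / b) * bids.max()     # undo the bid scaling
```

The mean/median coincidence at -a/b is exactly the linear-WTP property the review relies on; with an exponential WTP function that equivalence breaks, which is the divergence the abstract reports.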


CLUSTERING DNA MICROARRAY DATA BY STOCHASTIC ALGORITHM

  • Shon, Ho-Sun;Kim, Sun-Shin;Wang, Ling;Ryu, Keun-Ho
    • Proceedings of the KSRS Conference / 2007.10a / pp.438-441 / 2007
  • Thanks to recent advances in molecular biology and engineering technology, DNA microarrays let researchers observe thousands of genes and their variation in tissue samples from a living body. With DNA microarrays it is possible to construct groups of genes that have similar expression patterns and to follow the progress and variation of genes. This paper performs cluster analysis, whose purpose is the discovery of biological subgroups or classes, using gene expression information; the goal is to predict a class that is previously unknown. Public leukaemia data are used for the experiment, and the MCL (Markov CLustering) algorithm is applied as the analysis method. MCL is based on probability and graph flow theory: it simulates random walks on a graph using Markov matrices to determine the transition probabilities among the nodes of the graph. In our procedure, the MCL algorithm is applied after computing Euclidean distances; the inflation and diagonal factors, which are tuning moduli, are then tuned; and finally a threshold, taken as the average of each column, is used to distinguish one class from another. Using this threshold improved the accuracy: our experiments show about 70% accuracy on average compared with the known classes. For comparison, the proposed method was also evaluated against the SOM (Self-Organizing Map) neural-network clustering algorithm and against hierarchical clustering, and it gave better results than hierarchical clustering. In further work, we will study whether similar results are obtained when the inflation parameter found in our experiments is applied to other gene expression data, and we will seek a systematic method of improving accuracy by regulating the factors mentioned above.
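
The expansion/inflation loop at the heart of MCL can be sketched as follows. The toy graph and the cluster read-out via attractors are illustrative assumptions; the paper additionally tunes a diagonal factor and applies a column-average threshold.

```python
import numpy as np

def mcl(adj, expansion=2, inflation=2.0, iters=50):
    """Markov CLustering sketch: add self-loops, column-normalize the
    adjacency matrix into a transition matrix, then alternate
    expansion (matrix power, simulating the spread of random walks)
    and inflation (elementwise power that sharpens strong transitions)
    until the flow settles; each node's attractor labels its cluster."""
    m = adj.astype(float) + np.eye(len(adj))       # self-loops
    m /= m.sum(axis=0, keepdims=True)              # column-stochastic
    for _ in range(iters):
        m = np.linalg.matrix_power(m, expansion)   # expansion
        m = m ** inflation                         # inflation
        m /= m.sum(axis=0, keepdims=True)          # re-normalize columns
    return m.argmax(axis=0)    # attractor index per node = cluster label
```

On a weighted graph of two triangles joined by a weak bridge, the inflation step starves the bridge of flow, so the two triangles emerge as separate clusters.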


An Evaluation of Sampling Design for Estimating an Epidemiologic Volume of Diabetes and for Assessing Present Status of Its Control in Korea (우리나라 당뇨병의 역학적 규모와 당뇨병 관리현황 파악을 위한 표본설계의 평가)

  • Lee, Ji-Sung;Kim, Jai-Yong;Baik, Sei-Hyun;Park, Ie-Byung;Lee, June-Young
    • Journal of Preventive Medicine and Public Health / v.42 no.2 / pp.135-142 / 2009
  • Objectives: An appropriate sampling strategy for estimating the epidemiologic volume of diabetes was evaluated through simulation. Methods: We analyzed about 250 million medical insurance claims submitted to the Health Insurance Review & Assessment Service with diabetes as a principal or subsequent diagnosis at least once in 2003. The database was re-constructed into a 'patient-hospital profile' of 3,676,164 cases and then into a 'patient profile' of 2,412,082 observations. The patient profile data were then used to test the validity of a proposed sampling frame and sampling methods for developing diabetes-related epidemiologic indices. Results: The simulation study showed that a stratified two-stage cluster sampling design with a total sample size of 4,000 provides an estimate of 57.04% (95% prediction range, 49.83-64.24%) for the treatment prescription rate of diabetes. The proposed design first stratifies the nation by area into "metropolitan/city/county" and by hospital type into "tertiary/secondary/primary/clinic" in the proportion 5:10:10:75. Hospitals are then randomly selected within the strata as the primary sampling unit, followed by a random selection of patients within the hospitals as the secondary sampling unit. The difference between the estimate and the parameter value was projected to be less than 0.3%. Conclusions: The proposed sampling scheme will be applied to a subsequent nationwide field survey, not only to estimate the epidemiologic volume of diabetes but also to assess the present status of nationwide diabetes control.
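
The proposed design (strata, then hospitals as primary sampling units, then patients as secondary units) can be sketched as follows. The synthetic population, hospital counts, and per-hospital take below are assumptions for illustration; only the 5:10:10:75 hospital-type allocation and the roughly 4,000-patient sample follow the abstract.

```python
import random

def two_stage_sample(frame, hospitals_per_stratum, patients_per_hospital, rng):
    """Stratified two-stage cluster sampling: within each stratum,
    hospitals (primary sampling units) are drawn at random; within
    each selected hospital, patients (secondary sampling units) are
    drawn at random."""
    chosen = []
    for stratum, n_hosp in hospitals_per_stratum.items():
        for hospital in rng.sample(frame[stratum], n_hosp):
            k = min(patients_per_hospital, len(hospital))
            chosen.extend(rng.sample(hospital, k))
    return chosen
```

Estimating a treatment prescription rate is then just the sample mean of the selected patients' 0/1 outcomes; a design-based variance estimate would additionally account for the clustering of patients within hospitals.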

Opto-Digital Implementation of Multiple Information Hiding & Real-time Extraction System (다중 정보 은폐 및 실시간 추출 시스템의 광-디지털적 구현)

  • 김정진;최진혁;김은수
    • The Journal of Korean Institute of Communications and Information Sciences / v.28 no.1C / pp.24-31 / 2003
  • In this paper, a new opto-digital system for hiding multiple pieces of information and extracting them in real time is implemented. Multiple pieces of information are hidden in a cover image using stego keys generated by the combined use of a random sequence (RS) and a Hadamard matrix (HM), and the hidden information is extracted in real time by a new optical correlator-based extraction system. In the experiment, three pieces of information, the English letters "N", "R", and "L" at 512×512 pixels, are formatted into 8×8 blocks, and each is multiplied, block by block, with its corresponding 64×64-pixel stego key. By adding this modulated data to a 512×512-pixel cover image of "Lena", a stego image is generated. As the extraction system, a new optical nonlinear joint transform correlator (NJTC) is introduced to extract the hidden data from the stego image in real time: optical correlation between the stego image and each stego key is performed, and from these correlation outputs the hidden data can easily be extracted in real time. In particular, the SNRs of the correlation outputs in the proposed optical NJTC-based extraction system improved by 7 dB on average over those of the conventional JTC system for nonlinear parameters below k=0.4. These experimental results suggest the feasibility of an opto-digital system for multiple information hiding and real-time extraction.
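
The RS + HM stego-key construction and correlation-based extraction can be sketched digitally as follows. This is a heavily simplified assumption-laden model: 1-D signals instead of images, an informed extractor that subtracts the known cover, and digital dot-product correlation in place of the paper's optical NJTC.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

def make_stego_keys(n, count, seed=0):
    """Stego keys: Hadamard rows scrambled elementwise by one random
    +/-1 sequence (RS); scrambling preserves their mutual orthogonality."""
    rs = np.random.default_rng(seed).choice([-1.0, 1.0], n)
    return hadamard(n)[1:count + 1] * rs      # skip the all-ones row

def embed(cover, bits, keys, alpha=8.0):
    """Each hidden bit modulates the sign of its stego key; the
    modulated keys are summed into the cover signal."""
    signs = 2.0 * np.asarray(bits, float) - 1.0
    return cover + alpha * (signs[:, None] * keys).sum(axis=0)

def extract(stego, cover, keys):
    """Correlate the cover-subtracted stego signal with each key;
    the sign of each correlation peak recovers the hidden bit."""
    return ((stego - cover) @ keys.T > 0).astype(int)
```

Because the keys are orthogonal, each correlation isolates exactly one hidden bit, which is why several messages can share a single cover image.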

Probabilistic Analysis of Repairing Cost Considering Random Variables of Durability Design Parameters for Chloride Attack (염해-내구성 설계 변수에 변동성에 따른 확률론적 보수비용 산정 분석)

  • Lee, Han-Seung;Kwon, Seung-Jun
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.22 no.1 / pp.32-39 / 2018
  • The timing of repairs and the service life extension they provide are very important for cost estimation during operation. The conventionally used model for repair cost shows a step-shaped cost increase and does not consider the variability of the service life extension due to repair. In this work, an RC (reinforced concrete) column is considered for probabilistic evaluation of the number and cost of repairs. Two mix proportions are prepared, and chloride behavior is evaluated under quantitative exterior conditions. The repair frequency and cost are investigated for varying service lives and for the repair-extended service lives derived from the chloride behavior analysis. The effect of the COV (coefficient of variation) on repair frequency is small, but the timing of the first repair is shown to be a major parameter. Unlike the deterministic model of repair cost, the probabilistic model provides a continuous repair cost over time, and so changing the intended service life can reduce the number of repairs.
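
The contrast between the step-shaped deterministic model and the probabilistic one can be sketched with a Monte Carlo count of repairs over the intended service life. The normal variability model, the truncation, and all timing values below are assumptions for illustration, not the paper's chloride-based service-life analysis.

```python
import random

def expected_repairs(t_first, t_ext, cov, service_life, n_sims=10000, seed=0):
    """Monte Carlo sketch: the first repair time and each subsequent
    service-life extension vary with the given coefficient of
    variation (normal, truncated at 1 year); the deterministic
    step-cost model is recovered as the special case cov = 0."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_sims):
        t = max(1.0, rng.gauss(t_first, cov * t_first))   # 1st repair timing
        n = 0
        while t < service_life:
            n += 1                                        # repair now
            t += max(1.0, rng.gauss(t_ext, cov * t_ext))  # extended life
        total += n
    return total / n_sims
```

Multiplying the expected repair count by a unit repair cost then gives a repair cost that varies continuously with the intended service life, instead of jumping in steps at fixed repair years.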