• Title/Summary/Keyword: Type-I error


A Study on the Development of a Simulation Model for Predicting Soil Moisture Content and Scheduling Irrigation (토양수분함량 예측 및 계획관개 모의 모형 개발에 관한 연구(I))

  • 김철회;고재군
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.19 no.1
    • /
    • pp.4279-4295
    • /
    • 1977
  • Two types of model were established in order to predict the soil moisture content, from which information on irrigation could be obtained. Model-I represented the soil moisture depletion and was established based on the concept of water balance in a given soil profile. Model-II was a mathematical model derived from the analysis of soil moisture variation curves drawn from the observed data. In establishing Model-I, the method and procedure for estimating the parameters used to determine variables such as evapotranspiration, effective rainfall, and drainage amounts were discussed. As Model-II, empirical equations representing the soil moisture variation curves were derived from the observed data. The procedure for forecasting the timing and amounts of irrigation under a given soil moisture content was discussed. The established models were checked by comparing the observed data with those predicted by the models. The results obtained are summarized as follows: 1. As a water balance model of a given soil profile, the soil moisture depletion D could be represented by equation (2). 2. Among the various empirical formulae for potential evapotranspiration (Etp), Penman's formula best fit the data observed with the evaporation pans and tanks in the Suweon area. A high degree of positive correlation between Penman's predicted data and the data observed with a large evaporation pan was confirmed, and the regression equation was Y = 0.7436X + 17.2918, where Y is the evaporation rate from the large evaporation pan in mm/10 days and X is the potential evapotranspiration rate estimated by Penman's formula. 3. Evapotranspiration, Et, could be estimated from the potential evapotranspiration, Etp, by introducing the consumptive use coefficient, Kc, represented by the relationship $Kc = Kco \cdot Ka + Ks$ (Eq. 6), where Kco is the crop coefficient, Ka is a coefficient depending on the soil moisture content, and Ks is a correction coefficient. a. Crop coefficient, Kco: the crop coefficients of barley, bean, and wheat for each growth stage were found to depend on the crop. b. Coefficient depending on the soil moisture content, Ka: the values of Ka for clay loam, sandy loam, and loamy sand showed a tendency similar to those of the Pierce type. c. Correction coefficient, Ks: the relationship $Ks = Kc - Kco \cdot Ka$ was established to estimate Ks, with Ks = 0 if $Kco \cdot Ka \geq 1.0$ and $Ks = 1 - Kco \cdot Ka$ otherwise. 4. Effective rainfall, Re, was estimated by the relationship Re = D if $R - D \geq 0$, and Re = R otherwise. 5. The difference between the rainfall, R, and the soil moisture depletion, D, was taken as the drainage amount, Wd. The depletion was accumulated as $D = \sum_{i=1}^{n}(Et - Re - I + Wd)$ if Wd = 0, and as $D = \sum_{i=t_f}^{n}(Et - Re - I + Wd)$ otherwise, where $t_f$ = 2~3 days. 6. The curves and their corresponding empirical equations for the variation of soil moisture depending on soil type and soil depth are shown in Fig. 8 (a, b, c, d). The general mathematical model for soil moisture variation depending on season, weather, and soil type was $SMC = \sum_i \left( C_i \exp(-\lambda_i t_i) + Re_i - Excess_i \right)$, where SMC is the soil moisture content, C is a constant depending on the initial soil moisture content, $\lambda$ is a constant depending on the season, t is time, Re is the effective rainfall, and Excess is the drainage and excess soil moisture other than drainage. The values of $\lambda$ are shown in Table 1. 7. The timing and amount of irrigation could be predicted by equations (9-a) and (9-b, c), respectively. 8. Under the given conditions, the model for scheduling irrigation was completed; Fig. 9 shows the computer flow charts of the model. a. To estimate the potential evapotranspiration, Penman's equation was used when complete observed meteorological data were available, and the Jensen-Haise equation was used when forecast meteorological data were available; when neither observed nor forecast data were available, equation (15) was used. b. As input time data, a crop calendar was used, built around the time when the growth stage of the crop shows its maximum effective leaf coverage. 9. To validate the models, soil moisture content data observed under various conditions from May 1975 to July 1975 were compared with the data predicted by Model-I and Model-II. Model-I showed a relative error of 4.6 to 14.3 percent, an acceptable range of error for engineering purposes. Model-II showed a relative error of 3 to 16.7 percent, somewhat larger than that of Model-I. 10. Comparing the two models, the following is concluded: Model-I, established on a theoretical background, can predict with satisfactory reliability for practical use provided that forecast meteorological data are available. Model-II, on the other hand, is superior to Model-I in its simplicity, but it needs observed data covering a long period and a wide scope to predict the soil moisture content acceptably. Further studies are needed to make Model-II acceptable for practical use.
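Read as a recipe, item 5 above is daily bookkeeping of the depletion D from evapotranspiration, effective rainfall, and irrigation. The sketch below illustrates that bookkeeping only; the input series, the field-capacity reset, and the omission of the drainage term and the tf restart rule are simplifying assumptions of this illustration, not the paper's Model-I.

```python
# Minimal sketch of the daily water-balance bookkeeping behind a depletion model
# such as Model-I. All inputs are hypothetical; effective rainfall is capped at
# the current depletion (Re = R if R < D, else Re = D), and any surplus rainfall
# is assumed to leave the profile rather than reduce D below zero.
def soil_moisture_depletion(et, rain, irrigation):
    """Daily soil-moisture depletion D (mm) from daily evapotranspiration,
    rainfall, and irrigation series (mm/day)."""
    d, history = 0.0, []
    for et_i, r_i, i_i in zip(et, rain, irrigation):
        re_i = min(r_i, d)                    # effective rainfall
        d = max(d + et_i - re_i - i_i, 0.0)   # depletion cannot go negative
        history.append(d)
    return history

# Example: five dry days followed by a 30 mm rain, with no irrigation.
print(soil_moisture_depletion(et=[4, 4, 5, 5, 4], rain=[0, 0, 0, 0, 30], irrigation=[0] * 5))
```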


Comparison of methods for the proportion of true null hypotheses in microarray studies

  • Kang, Joonsung
    • Communications for Statistical Applications and Methods
    • /
    • v.27 no.1
    • /
    • pp.141-148
    • /
    • 2020
  • We consider estimating the proportion of true null hypotheses in multiple testing problems. The traditional multiple testing rate, the family-wise error rate, is too conservative for controlling Type I error in modern multiple testing setups; the false discovery rate (FDR) has instead received significant attention in many research areas such as GWAS data, fMRI data, and signal processing. Identifying differentially expressed genes in microarray studies involves estimating the proportion of true null hypotheses in FDR procedures. However, since the genuine dependence structure of microarray data is unknown, we need to account for unknown dependence structures among genes in order to estimate the proportion of true null hypotheses. We compare various procedures on simulated data and real microarray data. We consider a hidden Markov model for the simulated data with dependency. The Cai procedure (2007) and a sliding linear model procedure (2011) have relatively smaller bias and standard errors, making them more suitable for estimating the proportion of true null hypotheses in simulated data under various setups. Real data analysis shows that 5 of the 9 estimation procedures give almost identical estimates of the proportion of true null hypotheses in the microarray data.
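For orientation, the quantity being compared here can be illustrated with the simplest λ-threshold (Storey-type) estimator of the proportion of true nulls. This is only a generic baseline that assumes roughly uniform null p-values; it is not necessarily one of the nine procedures evaluated in the paper.

```python
import numpy as np

def storey_pi0(pvals, lam=0.5):
    """Storey-type estimate of the proportion of true null hypotheses:
    pi0_hat = #{p > lambda} / ((1 - lambda) * m). Null p-values are uniform,
    so p-values above lambda come mostly from true nulls."""
    pvals = np.asarray(pvals)
    return min(1.0, np.mean(pvals > lam) / (1.0 - lam))

# Toy example: 9,000 null p-values and 1,000 small (non-null) p-values.
rng = np.random.default_rng(0)
p = np.concatenate([rng.uniform(size=9000), rng.beta(0.2, 5.0, size=1000)])
print(storey_pi0(p))   # close to the true proportion of 0.9
```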

A Kinetic Monte Carlo Simulation of Individual Site Type of Ethylene and α-Olefins Polymerization

  • Zarand, S.M. Ghafelebashi;Shahsavar, S.;Jozaghkar, M.R.
    • Journal of the Korean Chemical Society
    • /
    • v.62 no.3
    • /
    • pp.191-202
    • /
    • 2018
  • The aim of this work is a Monte Carlo simulation study of ethylene (co)polymerization over a Ziegler-Natta catalyst as investigated by Chen et al. The results revealed that the Monte Carlo simulation was similar to the sum of squared errors (SSE) model in predicting stages II and III of polymerization. For the activation stage (stage I), both models deviated slightly from the experimental results. The modeling results demonstrated that in homopolymerization the SSE model was superior in predicting the polymerization rate in this stage, while for copolymerization the Monte Carlo simulation gave the preferable prediction. The Monte Carlo simulation confirmed the SSE results on the role of each site in the total polymerization rate and revealed that the homopolymerization rate changed from site to site and that the order of the centers differed from that in copolymerization. The polymer yield was reduced by the addition of hydrogen; however, there was no specific effect on the uptake curve, which was predicted by the Monte Carlo simulation with good accuracy. For copolymerization, it was found that the comonomer chain length and the monomer concentration influenced the rate of polymerization: the rate decreased from 1-hexene to 1-octene and increased when the monomer concentration rose.
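As an illustration of the simulation technique named in the title, the following is a toy Gillespie-style kinetic Monte Carlo loop for a single catalytic site with activation, propagation, and deactivation events. The rate constants, species counts, and single-site assumption are inventions for this sketch and are far simpler than the multi-site Ziegler-Natta model studied in the paper.

```python
import math
import random

# Toy Gillespie-style kinetic Monte Carlo for a single-site polymerization with
# three event types: site activation, chain propagation, and site deactivation.
# Rate constants and species counts are invented for illustration only.
def kmc_polymerization(steps=100_000, k_act=1.0, k_prop=50.0, k_deact=0.05,
                       n_dormant=200, monomer=5_000, seed=1):
    rng = random.Random(seed)
    monomer0 = monomer
    n_active, chain_units, t = 0, 0, 0.0
    for _ in range(steps):
        rates = [
            k_act * n_dormant,                        # dormant site -> active site
            k_prop * n_active * monomer / monomer0,   # active site adds one monomer
            k_deact * n_active,                       # active site -> dead site
        ]
        total = sum(rates)
        if total == 0.0:
            break
        t += -math.log(1.0 - rng.random()) / total    # exponential waiting time
        r, acc = rng.random() * total, 0.0
        for event, rate in enumerate(rates):
            acc += rate
            if r < acc:
                break
        if event == 0:
            n_dormant -= 1
            n_active += 1
        elif event == 1:
            monomer -= 1
            chain_units += 1
        else:
            n_active -= 1
    return {"time": t, "units_polymerized": chain_units,
            "active_sites": n_active, "monomer_left": monomer}

print(kmc_polymerization())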

A Study on the Improvement of Driving of Educational Robots with OID Sensors (OID센서로 주행하는 교육용 로봇의 주행 개선을 위한 연구)

  • Song, Hyun-Joo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.4
    • /
    • pp.549-557
    • /
    • 2021
  • In this research, we keep the existing OID sensor environment of smart robots, a type of educational robot, but propose handling the driving problems in software. A driving test environment was built focusing on position recognition, route planning, obstacle avoidance, and path resetting, and the experiments were designed to capture not the average final error rate but the moments when the error grows and recalibration is needed. Through this process, more stable driving results were obtained than in the previous experiments. We expect this to be a development approach that can improve the driving performance of educational robots equipped with the low-cost sensors currently on the market.

Analysis of Confidence Interval of Design Wave Height Estimated Using a Finite Number of Data (한정된 자료로 추정한 설계파고의 신뢰구간 분석)

  • Jeong, Weon-Mu;Cho, Hong-Yeon;Kim, Gunwoo
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.25 no.4
    • /
    • pp.191-199
    • /
    • 2013
  • The design wave height and its confidence interval (hereafter CI) for each return period are estimated and analyzed using fourteen years of wave data obtained at Pusan New Port. The functions used in the extreme value analysis are the Gumbel function, the Weibull function, and the kernel function. The CI of the estimated wave heights was predicted using the bootstrap method, one of the Monte Carlo simulation methods. The analysis of the estimated CI of the design wave height indicates that over 150 years of data are necessary in order to attain an approximately ±10% CI. Also, taking the number of practically obtainable data to be around 25~50, the allowable error was found to be approximately ±16~22% for the Type I PDF and ±18~24% for the Type III PDF. The kernel distribution method, a typical non-parametric method, on the other hand, shows a CI below 40% of that of the other methods, and its estimated design wave height is 1.2~1.6 m lower than those of the other methods.
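The bootstrap idea behind the reported CIs can be sketched as follows for the Gumbel case, fitting by the method of moments and resampling a short record. The 14 wave heights and the fitting choice below are hypothetical stand-ins, not the Pusan New Port data or the paper's exact procedure.

```python
import numpy as np

def gumbel_return_value(sample, return_period):
    """Method-of-moments Gumbel fit and the quantile for a given return period.
    (The paper also considers Weibull and kernel fits; this is the simplest case.)"""
    mean, std = np.mean(sample), np.std(sample, ddof=1)
    beta = std * np.sqrt(6.0) / np.pi          # scale parameter
    mu = mean - 0.5772 * beta                  # location (Euler-Mascheroni constant)
    p = 1.0 - 1.0 / return_period
    return mu - beta * np.log(-np.log(p))

def bootstrap_ci(sample, return_period=50, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval of the design wave height."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample)
    reps = [gumbel_return_value(rng.choice(sample, size=sample.size, replace=True),
                                return_period) for _ in range(n_boot)]
    return np.percentile(reps, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Hypothetical 14 annual-maximum wave heights (m), mimicking a short record.
heights = [5.1, 6.3, 4.8, 7.2, 5.9, 6.8, 5.5, 8.1, 6.1, 5.3, 7.5, 6.6, 5.8, 6.9]
print(gumbel_return_value(heights, 50), bootstrap_ci(heights, 50))
```

With only 14 values the resulting interval is wide, which is exactly the point the paper makes about short records.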

The Development and Its Characteristics of New Film Dosimetry Algorithm for Personal Dosimetry (개인피폭 선량 측정을 위한 필름 배지 선량계의 새로운 알고리즘 개발 및 특성)

  • 이병용;장혜숙;봉정균;권수일
    • Progress in Medical Physics
    • /
    • v.6 no.2
    • /
    • pp.35-40
    • /
    • 1995
  • Purpose: We have developed a new film dosimetry algorithm for personal dosimetry and examined its characteristics. Materials and methods: Agfa-Gevaert personal monitoring 2/10 films were used. Films in film badges with Cu 0.3 mm, plastic 1.5 mm, aluminum 0.6 mm, and tin 0.8 mm filters were exposed by a standard dosimetry laboratory. The irradiated energy categories were ANSI N13.11 Categories III and IV. A manual-type film processor and an X-Rite film densitometer were used. The relations between filtered densities and energy, and between dose and transformed densities, can be obtained after transforming the H&D curves to a linear shape by polynomial fitting. Results: The personal dose could be determined within 25% error for Category III and 15% for Category IV, and the exposure energy could be evaluated. Conclusion: The new algorithm developed in this study is suitable for personal dosimetry within a 30% error range for Categories III and IV. It is expected to become a complete personal dosimetry algorithm with further study of the other categories.
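The core calibration step described here, linearizing a density-versus-dose (H&D) curve by polynomial fitting and then inverting it for an unknown reading, can be sketched as follows. The calibration points and the single-filter treatment are made up for illustration; the paper's algorithm additionally uses the four filtered densities to infer the beam energy category.

```python
import numpy as np

# Illustrative only: calibrate one filter of a film badge by fitting net optical
# density as a polynomial in log-dose, then invert the fit for an unknown reading.
cal_dose = np.array([0.2, 0.5, 1.0, 2.0, 5.0, 10.0])          # mSv (hypothetical)
cal_density = np.array([0.12, 0.27, 0.48, 0.85, 1.60, 2.40])   # net optical density

# Density is nearly linear in log-dose, so a low-order polynomial fits well.
coef = np.polyfit(np.log(cal_dose), cal_density, deg=2)

def density_to_dose(net_density):
    """Invert the fitted curve by nearest lookup on a fine log-dose grid."""
    grid = np.linspace(np.log(cal_dose[0]), np.log(cal_dose[-1]), 2000)
    fitted = np.polyval(coef, grid)
    return float(np.exp(grid[np.argmin(np.abs(fitted - net_density))]))

print(density_to_dose(0.9))   # roughly 2 mSv with these made-up calibration points
```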


An Intelligent Intrusion Detection Model Based on Support Vector Machines and the Classification Threshold Optimization for Considering the Asymmetric Error Cost (비대칭 오류비용을 고려한 분류기준값 최적화와 SVM에 기반한 지능형 침입탐지모형)

  • Lee, Hyeon-Uk;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.157-173
    • /
    • 2011
  • As Internet use has exploded in recent years, malicious attacks and hacking against systems connected to the network occur frequently. This means these intrusions can cause fatal damage to government agencies, public offices, and companies operating various systems. For such reasons, there is growing interest in and demand for intrusion detection systems (IDS), the security systems for detecting, identifying, and responding appropriately to unauthorized or abnormal activities. The intrusion detection models applied in conventional IDS are generally designed by modeling the experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. These kinds of intrusion detection models perform well under normal situations, but they show poor performance when they meet a new or unknown pattern of network attack. For this reason, several recent studies try to adopt various artificial intelligence techniques, which can proactively respond to unknown threats. In particular, artificial neural networks (ANNs) have been popular in prior studies because of their superior prediction accuracy. However, ANNs have some intrinsic limitations such as the risk of overfitting, the requirement of a large sample size, and the black-box nature of the prediction process. As a result, the most recent studies on IDS have started to adopt the support vector machine (SVM), a classification technique that is more stable and powerful compared to ANNs and is known for relatively high predictive power and generalization capability. Under this background, this study proposes a novel intelligent intrusion detection model that uses SVM as the classification model in order to improve the predictive ability of IDS. Our model is also designed to consider the asymmetric error cost by optimizing the classification threshold. Generally, there are two common forms of error in intrusion detection. The first is the false positive error (FPE), where the wrong judgment may result in unnecessary fixes. The second is the false negative error (FNE), which misjudges malicious programs as normal. Compared to FPE, FNE is more fatal. Thus, when considering the total cost of misclassification in IDS, it is more reasonable to assign heavier weights to FNE than to FPE. Therefore, we designed our proposed intrusion detection model to optimize the classification threshold in order to minimize the total misclassification cost. In this case, conventional SVM cannot be applied because it is designed to generate discrete output (i.e., a class). To resolve this problem, we used the revised SVM technique proposed by Platt (2000), which is able to generate probability estimates. To validate the practical applicability of our model, we applied it to a real-world dataset for network intrusion detection. The experimental dataset was collected from the IDS sensor of an official institution in Korea from January to June 2010. We collected 15,000 log records in total and selected 1,000 samples from them by random sampling. In addition, the SVM model was compared with logistic regression (LOGIT), decision trees (DT), and ANN to confirm the superiority of the proposed model. LOGIT and DT were run using PASW Statistics v18.0, ANN using Neuroshell 4.0, and SVM using LIBSVM v2.90, a freeware package for training SVM classifiers.
Empirical results showed that our proposed model based on SVM outperformed all the other comparative models in detecting network intrusions from the accuracy perspective. They also showed that our model reduced the total misclassification cost compared to the ANN-based intrusion detection model. As a result, it is expected that the intrusion detection model proposed in this paper would not only enhance the performance of IDS, but also lead to better management of FNE.
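A sketch of the asymmetric-cost threshold optimization on top of a Platt-scaled SVM is given below, using synthetic data and an arbitrary 5:1 FNE:FPE cost ratio. The dataset, cost values, and grid search are illustrative assumptions, not the paper's data or settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Fit an SVM that outputs Platt-scaled probabilities, then pick the cutoff that
# minimizes an asymmetric misclassification cost in which a false negative
# (missed intrusion) is weighted more heavily than a false positive.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8, 0.2],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X_tr, y_tr)
probs = clf.predict_proba(X_te)[:, 1]          # P(intrusion) via Platt scaling

COST_FN, COST_FP = 5.0, 1.0                    # hypothetical asymmetric costs

def total_cost(threshold):
    pred = (probs >= threshold).astype(int)
    fn = np.sum((pred == 0) & (y_te == 1))     # missed intrusions
    fp = np.sum((pred == 1) & (y_te == 0))     # false alarms
    return COST_FN * fn + COST_FP * fp

thresholds = np.linspace(0.05, 0.95, 91)
best = min(thresholds, key=total_cost)
print(f"cost-minimizing threshold: {best:.2f}, total cost: {total_cost(best):.0f}")
```

In practice the cutoff would be tuned on a separate validation split; it is chosen on the test split here only to keep the sketch short.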

Image Rejection Method with Circular Trajectory Characteristic of Single-Frequency Continuous-Wave Signal (단일 주파수 연속파 신호의 원형 궤도 특성을 이용한 영상 제거 방법)

  • Park, Hyung-Chul
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.4
    • /
    • pp.148-156
    • /
    • 2009
  • This paper presents a new image rejection algorithm based on the analysis of the distortion of a single-frequency continuous-wave (CW) signal due to I/Q mismatch. Existing methods estimate the gain mismatch and phase mismatch of RF receivers and compensate for them. This paper, however, shows that the circular trajectory of a single-frequency CW signal is distorted into an elliptic trajectory by the I/Q mismatch. Utilizing this analysis, we propose an I/Q mismatch compensation method with two processing steps. In the first step, the generated signal is rotated to align the major axis of its elliptic trajectory with the x-axis. In the second step, the Q-channel signal in the regenerated signal is scaled to align the regenerated signal with the transmitted single-frequency CW signal. Simulation results show that a receiver using the proposed image rejection algorithm can achieve an image rejection ratio of more than 70 dB, and that the bit error rate performance of receivers using the proposed algorithm is almost the same as that of conventional coherent demodulators, even in fading channels.
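The two-step correction (rotate the elliptic trajectory so its major axis lies on the I axis, then rescale the Q channel) can be demonstrated on synthetic data as follows. The mismatch values and the covariance-eigenvector estimate of the ellipse orientation are illustrative choices for this sketch, not the paper's estimator.

```python
import numpy as np

# An I/Q gain/phase mismatch turns the single-tone circle into an ellipse;
# step 1 rotates the ellipse's major axis onto the I axis, step 2 rescales Q
# so both axes have equal spread, restoring a circle.
n = 4096
phase = 2.0 * np.pi * 0.01 * np.arange(n)
gain_mism, phase_mism = 1.2, np.deg2rad(10.0)        # hypothetical I/Q mismatch

i_sig = np.cos(phase)
q_sig = gain_mism * np.sin(phase + phase_mism)        # distorted Q branch
xy = np.column_stack([i_sig, q_sig])

# Step 1: estimate the ellipse orientation and rotate its major axis onto the I axis.
_, vecs = np.linalg.eigh(np.cov(xy.T))
major = vecs[:, -1]                                   # eigenvector of largest eigenvalue
angle = np.arctan2(major[1], major[0])
rot = np.array([[np.cos(-angle), -np.sin(-angle)],
                [np.sin(-angle),  np.cos(-angle)]])
xy_rot = xy @ rot.T

# Step 2: scale the Q channel so the trajectory becomes circular again.
scale = np.std(xy_rot[:, 0]) / np.std(xy_rot[:, 1])
xy_corr = np.column_stack([xy_rot[:, 0], xy_rot[:, 1] * scale])

radius = np.hypot(xy_corr[:, 0], xy_corr[:, 1])
print("relative radius spread after correction:", radius.std() / radius.mean())
```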

Why A Multimedia Approach to English Education?

  • Keem, Sung-uk
    • Proceedings of the KSPS conference
    • /
    • 1997.07a
    • /
    • pp.176-178
    • /
    • 1997
  • To make a long story short, I made up my mind to experiment with a multimedia approach to my classroom presentations two years ago because my ways of giving instruction bored the pants off me as well as my students. My favorite ways used to be what are sometimes referred to as classical or traditional ones, heavily dependent on three elements: the teacher's mouth, books, and chalk. Some call it the 'MBC method'. To top it off, I tried audio-visuals such as tape recorders, cassette players, VTRs, pictures, and you name it, that could help improve my teaching method. And yet I had been unhappy about the results of this trial-and-error approach. I was determined to look for a better way that would ensure my satisfaction in the first place. What really turned me on was a multimedia CD-ROM title, ELLIS (English Language Learning Instructional Systems), developed by Dr. Frank Otto. This is an integrated system of learning English based on advanced computer technology. Inspired by the utility and potential of such a multimedia system for regular classroom or lab instruction, I designed a simple but practical multimedia language learning laboratory in 1994, for the first time in Korea (perhaps for the first time in the world). It was high time that the conventional type of language laboratory (audio-passive) at Hahnnam be replaced because of wear and tear. Prior to this development, in 1991, I set up the first CALL (Computer Assisted Language Learning) laboratory equipped with 35 personal computers (286), where students were encouraged to practise English typing and word processing and to study English grammar, vocabulary, and composition. The first multimedia language learning laboratory was composed of 1) a multimedia personal computer (486DX2 then, now 586), 2) VGA multipliers that enable simultaneous viewing of the screen under the control of the instructor, 3) an amplifier, 4) loudspeakers, 5) student monitors, 6) student tables to seat three students (a monitor for two students is more realistic, though), 7) student chairs, 8) an instructor table, and 9) cables. It was augmented later with an Internet hookup. The beauty of this type of multimedia language learning laboratory is the economy of furnishing and maintaining it. There is no need to darken the facilities, which is a must when an LCD/beam projector is preferred in the laboratory. It is headset-free, which matters because headsets proved to exasperate students when worn more than twenty minutes. In the previous semester I taught three different subjects: Freshman English Lab, English Phonetics, and Listening Comprehension Intermediate. I used CD-ROM titles like ELLIS, Master Pronunciation, English Tripple Play Plus, English Arcade, Living Books, Q-Steps, English Discoveries, and Compton's Encyclopedia. On the other hand, I managed to put all my teaching materials into PowerPoint, where text, photo, graphic, animation, audio, and video files are stored in order as slides. It takes time for me to prepare my teaching materials via PowerPoint, but it is a wonderful tool for presentations. And it is worth trying as long as I can entertain my students in such a way. Once everything is put into the computer, I feel relaxed and a bit excited watching my students enjoy my presentations. It appears to be great fun for students because they have never experienced this type of instruction. This is how I freed myself from having to manipulate a cassette tape player and VTR and from writing on the board.
The student monitors in front of them seem to help them concentrate on what they see, combined with what they hear. All I have to do is simply click a mouse to give presentations and explanations when necessary. I use a remote mouse, which frees me from sitting at the instructor table. Instead, I can walk around the room and enjoy freer interactions with students. Using this instrument, I can also have my students participate in the presentation. In particular, I invite my students to manipulate the computer using the remote mouse from their own seats rather than from the instructor's seat. Every student appears to be fascinated with my multimedia approach to English teaching because of its unique nature as a new teaching tool as we face the 21st century. They all agree that the multimedia way is an interesting and fascinating way of learning that satisfies their needs. Above all, it helps lighten their drudgery in the classroom. They feel other subjects taught by other teachers should be treated in the same fashion. A multimedia approach to education is impossible without the advent of high-tech computers, whose multiple functions are integrated into a unified system, i.e., a personal computer. If you have computer-phobia, make quick friends with it; the sooner, the better. It can be a wonderful assistant to you. It is the Internet that I pay close attention to in conjunction with the multimedia approach to English education. Via e-mail, I encourage my students to write to me in English. I encourage them to enjoy chatting with people all over the world. I also encourage them to visit sites that offer study courses in English conversation, vocabulary, idiomatic expressions, reading, and writing. I help them search for any subject they want via the World Wide Web. Some day in the near future it will be the hub of learning for everybody. It will eventually free students from books, teachers, libraries, classrooms, and boredom. I will keep exploring better ways to give satisfying instruction to my students, who deserve my entertainment.


A Selection of Threshold for the Generalized Hough Transform: A Probabilistic Approach (일반화된 허프변환의 임계값 선택을 위한 확률적 접근방식)

  • Chang, Ji Y.
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.1
    • /
    • pp.161-171
    • /
    • 2014
  • When the Hough transform is applied to identify an instance of a given model, the output is typically a histogram of votes cast by a set of image features into a parameter space. The next step is to threshold the histogram of counts to hypothesize a match. The question is: what is a reasonable choice of the threshold? In a standard implementation of the Hough transform, the threshold is selected heuristically, e.g., as some fraction of the highest cell count. Setting the threshold too low can give rise to a false alarm of the given shape (Type I error); setting it too high can result in missed detection of the given shape (Type II error). In this paper, we derive two conditional probability functions of the cell counts in the accumulator array of the generalized Hough transform (GHough), which can be used to select a scientific threshold at the peak detection stage of the GHough.
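The Type I side of this threshold choice can be illustrated with a simple probabilistic rule: if clutter votes into a given accumulator cell are modeled as Binomial(n, p), pick the smallest count whose upper-tail probability is below a target false-alarm level. The binomial model and the numbers below are assumptions of this sketch, not the conditional probability functions derived in the paper.

```python
from scipy.stats import binom

def ghough_threshold(n_features, p_random, alpha=0.01):
    """Smallest threshold T such that P(count >= T | no instance present) <= alpha,
    with the no-instance vote count modeled as Binomial(n_features, p_random)."""
    for t in range(n_features + 1):
        if binom.sf(t - 1, n_features, p_random) <= alpha:   # P(count >= t)
            return t
    return n_features + 1

# 500 clutter edge points, each landing in a given cell with probability 0.01:
print(ghough_threshold(n_features=500, p_random=0.01, alpha=0.01))
```

Raising the threshold beyond this value trades a lower false-alarm (Type I) rate for a higher chance of missing a true instance (Type II), which is exactly the trade-off the abstract describes.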