• Title/Summary/Keyword: Error level

Search Results: 2,511

Sample Size Determination for O/D Estimation under Budget Constraint (예산제약하에서 O/D 추정을 위한 최소표본율 결정)

  • Sin, Hui-Cheol;Lee, Hyang-Suk
    • Journal of Korean Society of Transportation
    • /
    • v.24 no.3 s.89
    • /
    • pp.7-15
    • /
    • 2006
  • A large sample can provide more information about the population. As the sample size increases, analysts will be more confident about the survey results; on the other hand, the cost of the survey will increase in time and manpower. Determination of the sample size is therefore a trade-off between the required accuracy and the cost. In addition, the permitted error and the significance level should be considered. Sample size determination in surveys for O/D estimation is likewise tied to the confidence of the survey results. However, past methods were usually too simple to account for confidence. A more recent method proposed for O/D surveys is accurate enough, but it requires too large a sample under the current budget constraint. In this research, several minimum sample size determination methods for origin-destination surveys under a budget constraint are proposed. Each method decreases the sample size while retaining its own advantages. Selection of the sample size will depend on the study purpose and the budget constraint.
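
The trade-off the abstract describes can be made concrete with the standard sample-size formula for estimating a proportion; the formula, cost figures, and confidence level below are textbook illustrations, not the paper's method.

```python
import math

def sample_size(p=0.5, error=0.05, z=1.96):
    """Minimum sample size for estimating a proportion p within
    +/- error at the confidence level implied by z (1.96 ~ 95%).
    Textbook formula, not the paper's O/D-specific method."""
    return math.ceil(z * z * p * (1.0 - p) / (error * error))

def size_under_budget(budget, cost_per_sample):
    """Largest sample the survey budget allows."""
    return budget // cost_per_sample

required = sample_size(error=0.03)           # accuracy requirement
affordable = size_under_budget(100_000, 25)  # budget constraint (assumed costs)
n = min(required, affordable)                # the trade-off in one line
```

Tightening the permitted error grows the required sample quadratically, which is why a budget cap bites quickly.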

Development of Shock Wave Delay Estimation Model for Mixed Traffic at Unsaturated Signalized Intersection (충격파를 이용한 신호교차로 지체산정 모형 개발 (비포화 2차로 신호교차로 상에서의 버스혼합교통류 지체산정모형))

  • Kim, Won-Gyu;Kim, Byeong-Jong;Park, Myeong-Gyu
    • Journal of Korean Society of Transportation
    • /
    • v.28 no.6
    • /
    • pp.75-84
    • /
    • 2010
  • A signal-controlled intersection is a critical point in transportation network performance, where most traffic congestion arises. One of the most important and widely used measures of effectiveness at a signalized intersection is approach delay. Although many efforts to develop traffic delay estimation models have been made over the years, most of them focused on homogeneous traffic flow. The purpose of this research is to develop a delay estimation model, based on horizontal shockwave theory, for traffic flow mixed with buses. Traffic simulation was performed to test how well the model adapts to a generic environment. The results show that delay increases with increasing bus traffic. The overall accuracy of the model compared with the simulation results is acceptable, with an error range of around 10 percent.
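
The horizontal shockwave theory the model builds on rests on one relation: the boundary between two traffic states moves at the ratio of the flow difference to the density difference. A minimal sketch with illustrative numbers, not the paper's calibrated model:

```python
def shockwave_speed(q1, k1, q2, k2):
    """Speed (km/h) of the shockwave between two traffic states,
    from the classic flow-density relation w = (q2 - q1) / (k2 - k1).
    q in veh/h, k in veh/km. Textbook relation, not the paper's full model."""
    return (q2 - q1) / (k2 - k1)

# Arriving flow meeting a stopped queue at a red signal (illustrative numbers):
w = shockwave_speed(q1=900, k1=15, q2=0, k2=120)  # negative: queue grows upstream
```

Delay models of this family integrate the area swept between the queueing and discharge shockwaves over a signal cycle; mixing buses into the stream changes the densities and hence the wave speeds.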

Comparison of the Family Based Association Test and Sib Transmission Disequilibrium Test for Dichotomous Trait (이산형 형질에 대한 가족자료 연관성 검정법 FBAT와 형제 전달 불균형 연관성 검정법 S-TDT의 비교)

  • Kim, Han-Sang;Oh, Young-Sin;Song, Hae-Hiang
    • The Korean Journal of Applied Statistics
    • /
    • v.23 no.6
    • /
    • pp.1103-1113
    • /
    • 2010
  • An extensively used family-based association test (FBAT) is compared with the sib transmission/disequilibrium test (S-TDT) and, in particular, with the adjusted S-TDT, which takes the covariance among related siblings into consideration and can provide a more sensitive test statistic for association. A simulation study comparing the three test statistics demonstrates that the type I error rates of all three tests are larger than the prespecified significance level and that the power of the FBAT is lower than those of the other two tests. More detailed studies are required to assess the influence of the conditions assumed in FBAT on the efficiency of the test.
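
The type I error rates the abstract compares are the kind of quantity a Monte Carlo simulation estimates directly: generate data under the null hypothesis many times and count how often the test rejects. A generic sketch, not the paper's family-data design:

```python
import random

def type_one_error_rate(test, n_sim=2000, alpha=0.05, seed=1):
    """Monte Carlo estimate of a test's type I error rate: call `test`
    to get a p-value simulated under the null, count rejections at alpha.
    Generic recipe, not the paper's sibling-genotype simulation."""
    random.seed(seed)
    rejections = sum(1 for _ in range(n_sim) if test() < alpha)
    return rejections / n_sim

# A perfectly calibrated test has a uniform p-value under the null,
# so its estimated type I error rate should sit near alpha:
rate = type_one_error_rate(lambda: random.random())
```

A rate consistently above alpha, as the abstract reports for all three tests, indicates an anti-conservative test.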

A Study of Compensation Algorithm for Localization based on Equivalent Distance Rate using Estimated Location Coordinator Searching Scheme (예상 위치좌표 탐색기법을 적용한 균등거리비율 기반 위치인식 보정 알고리즘 연구)

  • Kwon, Seong-Ki;Lee, Dong-Myung;Lee, Chang-Bum
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.11 no.9
    • /
    • pp.3571-3577
    • /
    • 2010
  • An estimated location coordinator exploration scheme and the E&E (Equivalent distance rate & Estimated location coordinator exploration) compensation algorithm for localization are proposed, and the performance of E&E is analyzed in this paper. The proposed scheme is applied to AEDR (Algorithm for localization using the concept of Equivalent Distance Rate). Several experiments confirm that the localization compensation performance in SDS-TWR improves from 0.60m to 0.34m across four experimental scenarios, and that the localization compensation ratio of E&E exceeds that of AEDR by up to 15%. The proposed compensation algorithm E&E can be considered sufficiently applicable to various localization applications, because its localization error is measured at less than 1m in 99% of the performance experiments.

Effect of Touch-key Sizes on Usability of Driver Information Systems and Driving Safety (터치키 크기가 운전자 정보 시스템의 사용성과 운전의 안전성에 미치는 영향 분석)

  • Kim, Hee-Hin;Kwon, Sung-Hyuk;Heo, Ji-Yoon;Chung, Min-K.
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.37 no.1
    • /
    • pp.30-40
    • /
    • 2011
  • In recent years, driver information systems (DISs) have become popular and their use has increased significantly. A majority of DISs provide touch-screen interfaces because of the intuitiveness of the interaction and the flexibility of interface design. In many cases, touch-screen interfaces are manipulated mainly with the fingers, so investigating the effect of touch-key size on usability is one of the most important research issues, and many studies address the effect of touch-key size for mobile devices or kiosks. However, there are few such studies on DISs. The importance of studying touch-key size for DISs should be emphasized because it is closely related to safety issues in addition to usability issues. In this study, we investigated the effect of DIS touch-key size during simulated driving (0, 50, and 100 km/h), considering driving safety (lateral deviation, velocity deviation, total glance time, mean glance time, total time between glances, mean number of glances) and DIS usability (task completion time, error rate, subjective preference, NASA TLX) simultaneously. Both driving safety and DIS usability increased as driving speed decreased and touch-key size increased. However, there were no significant differences once the touch-key size exceeded a certain level (in this study, 17.5 mm).

Implementation of the Classification using Neural Network in Diagnosis of Liver Cirrhosis (간 경변 진단시 신경망을 이용한 분류기 구현)

  • Park, Byung-Rae
    • Journal of Intelligence and Information Systems
    • /
    • v.11 no.1
    • /
    • pp.17-33
    • /
    • 2005
  • This paper presents a classifier of liver cirrhosis stage using MR (magnetic resonance) imaging and a hierarchical neural network. The data set for classifying each stage (normal, type 1, type 2, and type 3) comprised 231 cases. We extracted the liver region and nodule regions from T1-weighted MR liver images to build an objective classifier of liver cirrhosis stages. The classifier was implemented as a hierarchical neural network that uses gray-level analysis and texture feature descriptors to distinguish a normal liver from the three types of liver cirrhosis, and it was trained with the error back-propagation algorithm. The classification results show recognition rates of 100% for normal, 82.8% for type 1, 87.1% for type 2, and 84.2% for type 3. The recognition ratio is very high when the quantified results are compared with the doctors' decisions. If enough data are available and further parameters are considered, we expect that the neural network could perform as well as human experts and serve as a clinical decision support tool for liver cirrhosis patients.
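
Error back-propagation, the training rule the abstract names, can be sketched in a few lines; the one-hidden-layer network, layer sizes, and toy data below are illustrative stand-ins for the paper's hierarchical classifier and MR texture features.

```python
import numpy as np

# Minimal network trained with error back-propagation on a toy problem.
# Architecture and data are assumptions for illustration only.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)  # toy inputs
y = np.array([[0], [1], [1], [0]], float)              # toy targets (XOR)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                  # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)       # backward pass (squared error)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

accuracy = ((out > 0.5) == (y > 0.5)).mean()
```

The paper's hierarchical variant stacks such stages, first separating normal from cirrhotic livers and then discriminating among the cirrhosis types.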

Variable Length Optimum Convergence Factor Algorithm for Adaptive Filters (적응 필터를 위한 가변 길이 최적 수렴 인자 알고리듬)

  • Boo, In-Hyoung;Kang, Chul-Ho
    • The Journal of the Acoustical Society of Korea
    • /
    • v.13 no.4
    • /
    • pp.77-85
    • /
    • 1994
  • In this study, an adaptive algorithm with an optimum convergence factor for the steepest descent method is proposed, which automatically adjusts the filter order to an appropriate level. So far, fixed-order filters have been used in various adaptive signal processing applications, with the order chosen from prior knowledge or experience. Because it is difficult to know the filter order needed in real implementations, high-order filters have to be used, which increases redundant computation. The proposed variable length optimum convergence factor (VLOCF) algorithm selects an appropriate filter order within the given one, so that redundant computation is decreased, convergence speed is enhanced, and the convergence error in the steady state is smaller. The validity of the proposed algorithm is demonstrated by computer simulation for system identification.
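
For a quadratic mean-square-error surface, the optimum convergence factor for steepest descent has a closed form, mu = g'g / (g'Rg), where g is the gradient and R the input autocorrelation matrix. The sketch below applies it to a toy system-identification problem; the variable-order control that distinguishes VLOCF is not reproduced here.

```python
import numpy as np

# Steepest descent with a per-iteration optimum convergence factor.
# System, signal lengths, and seed are illustrative assumptions.
rng = np.random.default_rng(3)
h_true = np.array([0.8, -0.4, 0.2])           # unknown system to identify
x = rng.normal(size=2000)                     # white input signal
d = np.convolve(x, h_true)[:x.size]           # desired (noiseless) output

N, M = x.size, 3
X = np.stack([np.concatenate([np.zeros(k), x[:N - k]]) for k in range(M)])
R = X @ X.T / N                               # input autocorrelation estimate
p = X @ d / N                                 # cross-correlation estimate

w = np.zeros(M)
for _ in range(50):
    g = R @ w - p                             # gradient of the MSE surface
    if g @ g < 1e-20:                         # converged
        break
    mu = (g @ g) / (g @ R @ g)                # optimum convergence factor
    w -= mu * g

error = np.linalg.norm(w - h_true)
```

With a white input, R is well conditioned and the optimum step drives w to the true coefficients in a handful of iterations; a fixed step must trade speed against stability instead.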

Ovarian Cancer Microarray Data Classification System Using Marker Genes Based on Normalization (표준화 기반 표지 유전자를 이용한 난소암 마이크로어레이 데이타 분류 시스템)

  • Park, Su-Young;Jung, Chai-Yeoung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.9
    • /
    • pp.2032-2037
    • /
    • 2011
  • Marker genes are defined as genes whose expression level characterizes a specific experimental condition. Genes whose expression levels differ significantly between groups are highly informative about the studied phenomenon. In this paper, the system first detects marker genes by ranking genes according to statistics, after normalizing the data with the most widely used of the normalization methods proposed to date; the performance of each normalization method is then compared and analyzed with a multi-layer perceptron neural network. Applying the multi-layer perceptron to a microarray data set containing eight marker genes selected with the ANOVA method after Lowess normalization yields the highest classification accuracy, 99.32%, and the lowest prediction error estimate.
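
Ranking genes by a one-way ANOVA statistic after per-array normalization, as the abstract describes, can be sketched as follows; the data are synthetic, the gene counts are assumptions, and simple median-centering stands in for Lowess normalization.

```python
import numpy as np

rng = np.random.default_rng(7)
n_genes, n_a, n_b = 100, 10, 10                # genes, group A arrays, group B arrays
expr = rng.normal(0.0, 1.0, (n_genes, n_a + n_b))
expr[:5, n_a:] += 3.0                          # five synthetic "true" marker genes
expr -= np.median(expr, axis=0)                # per-array normalization (stand-in)

def f_statistic(row, n_a):
    """One-way ANOVA F statistic for one gene across two groups."""
    a, b = row[:n_a], row[n_a:]
    grand = row.mean()
    between = n_a * (a.mean() - grand) ** 2 + b.size * (b.mean() - grand) ** 2
    within = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
    return between / (within / (row.size - 2))

scores = np.array([f_statistic(g, n_a) for g in expr])
markers = np.argsort(scores)[::-1][:8]         # keep the top-ranked genes
```

The selected marker genes then serve as the reduced input to the classifier, as in the abstract's eight-gene perceptron experiment.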

Optical Character Recognition for Hindi Language Using a Neural-network Approach

  • Yadav, Divakar;Sanchez-Cuadrado, Sonia;Morato, Jorge
    • Journal of Information Processing Systems
    • /
    • v.9 no.1
    • /
    • pp.117-140
    • /
    • 2013
  • Hindi is the most widely spoken language in India, with more than 300 million speakers. Because texts written in Hindi have no separation between characters, as English does, Optical Character Recognition (OCR) systems developed for Hindi have a very poor recognition rate. In this paper we propose an OCR for printed Hindi text in Devanagari script, using an Artificial Neural Network (ANN), which improves its efficiency. One of the major reasons for the poor recognition rate is error in character segmentation; the presence of touching characters in scanned documents further complicates segmentation, creating a major problem when designing an effective character segmentation technique. Preprocessing, character segmentation, feature extraction, and, finally, classification and recognition are the major steps followed by a general OCR. The preprocessing tasks considered in this paper are the conversion of gray-scale images to binary images, image rectification, and segmentation of the document's textual contents into paragraphs, lines, words, and then basic symbols. The basic symbols, obtained as the fundamental units of the segmentation process, are recognized by the neural classifier. Three feature extraction techniques: histogram of projection based on mean distance, histogram of projection based on pixel value, and vertical zero crossing, have been used to improve the recognition rate; they are powerful enough to extract features even from distorted characters/symbols. For the neural classifier, a back-propagation neural network with two hidden layers is used. The classifier is trained and tested on printed Hindi texts, achieving a correct recognition rate of approximately 90%.
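
Of the three feature extraction techniques listed, the projection histograms are the simplest to illustrate: a binary glyph image is reduced to per-row and per-column ink counts. A minimal sketch on a toy glyph (the paper's exact variants, such as the mean-distance weighting, are not reproduced):

```python
import numpy as np

def projection_histograms(glyph):
    """glyph: 2-D 0/1 array. Returns (row_profile, col_profile),
    the ink count in each row and each column."""
    glyph = np.asarray(glyph)
    return glyph.sum(axis=1), glyph.sum(axis=0)

glyph = np.array([[0, 1, 0],
                  [1, 1, 1],
                  [0, 1, 0]])                 # toy 3x3 "plus" symbol
rows, cols = projection_histograms(glyph)     # rows=[1,3,1], cols=[1,3,1]
```

Profiles like these are cheap, tolerant of small distortions, and feed directly into a fixed-size input layer of the neural classifier.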

A Packet Loss Control Scheme based on Network Conditions and Data Priority (네트워크 상태와 데이타 중요도에 기반한 패킷 손실 제어 기법)

  • Park, Tae-Uk;Chung, Ki-Dong
    • Journal of KIISE:Information Networking
    • /
    • v.31 no.1
    • /
    • pp.1-10
    • /
    • 2004
  • This study discusses application-layer FEC using erasure codes. Because of their simple decoding process, erasure codes are used effectively in application-layer FEC to deal with packet-level losses. A large number of parity packets keeps the loss rate small but worsens network congestion, so a redundancy control algorithm that can adjust the number of parity packets depending on network conditions is necessary. In addition, high-priority frames such as I frames should naturally be protected with more parity packets than low-priority frames such as P and B frames. In this paper, we propose a redundancy control algorithm that adjusts the amount of redundancy depending on both the network conditions and the data priority, and we test its performance on simple links and congested links.
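
A redundancy control rule of the kind the abstract describes maps the measured loss rate and the frame priority to a parity-packet count; the weights and bounds below are illustrative assumptions, not the paper's algorithm.

```python
import math

def parity_count(data_packets, loss_rate, priority_weight=1.0, max_ratio=0.5):
    """Number of parity packets to generate for a block of data_packets.
    loss_rate: recent packet-loss estimate in [0, 1].
    priority_weight: >1 for I frames, <=1 for P/B frames (assumed scaling).
    max_ratio caps redundancy so parity traffic cannot worsen congestion."""
    ratio = min(max_ratio, loss_rate * priority_weight * 2.0)
    return math.ceil(data_packets * ratio)

i_parity = parity_count(20, loss_rate=0.05, priority_weight=2.0)  # I frame
b_parity = parity_count(20, loss_rate=0.05, priority_weight=0.5)  # B frame
```

With an erasure code, any `data_packets` of the `data_packets + parity` transmitted packets suffice to decode the block, so the rule directly trades bandwidth for loss protection, and the priority weight concentrates that protection on I frames.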