• Title/Summary/Keyword: Estimate Information


A New Compensated Criterion in Testing Trained Codebooks

  • Kim, Dong-Sik
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.24 no.7A
    • /
    • pp.1052-1063
    • /
    • 1999
  • In designing the quantizer of a coding scheme using a training sequence (TS), the training algorithm tries to find a quantizer that minimizes the distortion measured on the TS. To evaluate the trained quantizer or to compare coding schemes, we can observe this minimized distortion. However, the minimized distortion is a biased estimate of the minimal distortion for the input distribution; hence we may overestimate a quantizer or make a wrong comparison even if we use a validating sequence. In this paper, a new estimate is proposed by compensating the distortion minimized on the TS. The compensating term is a function of the training ratio, i.e., the ratio of the TS size to the codebook size. Several numerical results for the proposed estimate are also presented.

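The bias discussed in this abstract can be seen with a toy vector quantizer: a codebook trained on a finite training sequence reports a lower distortion on that TS than on fresh data from the same source. The sketch below is a generic illustration only, using plain Lloyd/k-means training and a Gaussian source; it does not implement the paper's compensating term.

```python
# Toy illustration of training-set distortion bias for a trained codebook.
# Assumptions: 2-D Gaussian source, plain Lloyd/k-means training (the paper's
# compensated criterion is not reproduced here).
import numpy as np

rng = np.random.default_rng(0)

def train_codebook(ts, k, iters=50):
    """Lloyd's algorithm: return a k-point codebook trained on ts."""
    codebook = ts[rng.choice(len(ts), k, replace=False)]
    for _ in range(iters):
        # assign each training vector to its nearest codeword
        d = np.linalg.norm(ts[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = ts[labels == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

def distortion(x, codebook):
    """Mean squared error of quantizing x with the codebook."""
    d = np.linalg.norm(x[:, None, :] - codebook[None, :, :], axis=2)
    return float((d.min(axis=1) ** 2).mean())

k = 16                                       # codebook size
validation = rng.normal(size=(100_000, 2))   # stands in for the source distribution
for ts_size in (64, 256, 4096):              # training ratio = ts_size / k
    ts = rng.normal(size=(ts_size, 2))
    cb = train_codebook(ts, k)
    print(f"training ratio {ts_size // k:4d}: "
          f"TS distortion {distortion(ts, cb):.3f}  "
          f"validation distortion {distortion(validation, cb):.3f}")
```

As the training ratio grows, the gap between the TS distortion and the validation distortion shrinks, which is the quantity a compensating term would correct for.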

A Novel Posterior Probability Estimation Method for Multi-label Naive Bayes Classification

  • Kim, Hae-Cheon;Lee, Jaesung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.23 no.6
    • /
    • pp.1-7
    • /
    • 2018
  • Multi-label classification finds the multiple labels associated with an input pattern, and it can be achieved by extending conventional single-label classification; common extension techniques are binary relevance, label powerset, and classifier chains. However, most extended multi-label naive Bayes classifiers cannot estimate posterior probabilities accurately because they do not reflect label dependency, and the remaining ones estimate posterior probabilities unstably, depending on the label selection order. To estimate posterior probabilities well, we propose a new estimation method that efficiently reflects the relations among all labels. The proposed method reflects the correlation between labels, and experiments confirm that an extended multi-label naive Bayes classifier using the proposed method achieves higher accuracy than the existing multi-label naive Bayes classifiers.
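
For context, binary relevance, the simplest of the extension techniques named in the abstract, trains one independent naive Bayes classifier per label and therefore ignores label dependency entirely. The sketch below shows only this baseline on synthetic scikit-learn data; it does not implement the paper's correlation-aware posterior estimation.

```python
# Binary-relevance multi-label naive Bayes: one classifier per label,
# label dependencies ignored.  Synthetic data, illustration only.
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

X, Y = make_multilabel_classification(n_samples=1000, n_features=20,
                                       n_classes=5, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# Binary relevance: fit an independent naive Bayes model for each label.
br = MultiOutputClassifier(MultinomialNB()).fit(X_tr, Y_tr)
Y_hat = br.predict(X_te)

# Per-label posterior P(label = 1 | x) for the first test instance.
posteriors = [est.predict_proba(X_te[:1])[0, 1] for est in br.estimators_]
print("per-label posteriors:", np.round(posteriors, 3))
print("subset accuracy:", accuracy_score(Y_te, Y_hat))
```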

Analysis of Effect of an Additional Edge on Eigenvector Centrality of Graph

  • Han, Chi-Geun;Lee, Sang-Hoon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.21 no.1
    • /
    • pp.25-31
    • /
    • 2016
  • There are many centrality measures that describe the importance of a node in a graph; this paper focuses on eigenvector centrality. An analytical method is proposed to estimate the change in centrality caused by adding an edge to the graph. To validate the method, two problems are solved on three kinds of random graphs: deciding the additional edge that maximizes the total centrality change over all nodes, and deciding the additional edge that maximizes the centrality of a specific node; the estimated edge and the observed edge are then compared. Although the estimated centrality difference differs slightly from the observed one in some cases, the proposed method is shown to estimate the centrality difference effectively with a short running time.
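
The brute-force counterpart of such an analytical estimate is simply to recompute the eigenvector centrality after adding each candidate edge and pick the edge with the largest total change. A small sketch on a random graph, assuming networkx, is shown below; it produces the "observed" difference, not the paper's analytical approximation.

```python
# Brute-force baseline: measure how much each candidate edge changes the
# eigenvector centrality of all nodes and pick the edge with the largest change.
import itertools
import networkx as nx

G = nx.gnp_random_graph(30, 0.15, seed=1)
base = nx.eigenvector_centrality_numpy(G)

def total_change(graph, edge):
    """Sum over all nodes of |centrality difference| after adding edge."""
    H = graph.copy()
    H.add_edge(*edge)
    new = nx.eigenvector_centrality_numpy(H)
    return sum(abs(new[v] - base[v]) for v in graph)

candidates = [e for e in itertools.combinations(G.nodes, 2) if not G.has_edge(*e)]
best = max(candidates, key=lambda e: total_change(G, e))
print("edge with largest total centrality change:", best)
```

The point of an analytical estimate is to avoid the full eigenvector recomputation performed here for every candidate edge.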

Genotype-Calling System for Somatic Mutation Discovery in Cancer Genome Sequence (암 유전자 배열에서 체세포 돌연변이 발견을 위한 유전자형 조사 시스템)

  • Park, Su-Young;Jung, Chai-Yeoung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.17 no.12
    • /
    • pp.3009-3015
    • /
    • 2013
  • Next-generation sequencing (NGS) has enabled whole-genome and transcriptome single nucleotide variant (SNV) discovery in cancer, and one of the most fundamental steps is determining an individual's genotype from multiple aligned short reads at a position. A Bayesian algorithm estimates parameters using posterior genotype probabilities, whereas the EM algorithm estimates parameters by maximum likelihood from the observed data. Here, we propose a genotype-calling system and analyze the effect of sample size (S = 50, 100, and 500) on the posterior estimates of the sequencing error rate, somatic mutation status, and genotype probability. The results show that, even for a small sample size of 50, the estimates obtained with the Bayesian algorithm approach the true parameters more closely than those obtained with the EM algorithm.
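
To make the genotype-calling step concrete, a minimal Bayesian genotype posterior can be computed at a single position from the counts of reference and alternate bases, given an assumed sequencing error rate and genotype priors. The sketch below is a generic illustration of that calculation, not the system proposed in the paper; the error rate and priors are assumptions.

```python
# Minimal Bayesian genotype call at one position from aligned read counts.
# Diploid genotypes AA/AB/BB, a fixed sequencing error rate, and example
# genotype priors are assumed for illustration.
from math import comb

def genotype_posteriors(n_ref, n_alt, error=0.01,
                        priors=(0.45, 0.45, 0.10)):
    """Posterior P(genotype | read counts) for genotypes (AA, AB, BB)."""
    n = n_ref + n_alt
    # probability that a single read shows the ALT base under each genotype
    p_alt = (error, 0.5, 1.0 - error)
    likelihoods = [comb(n, n_alt) * p**n_alt * (1 - p)**n_ref for p in p_alt]
    joint = [lik * pr for lik, pr in zip(likelihoods, priors)]
    z = sum(joint)
    return [j / z for j in joint]

# Example: 12 reads support the reference base, 7 support the alternate.
post = genotype_posteriors(n_ref=12, n_alt=7)
for g, p in zip(("AA", "AB", "BB"), post):
    print(f"P({g} | data) = {p:.4f}")
```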

Comparative analysis of Bayesian and maximum likelihood estimators in change point problems with Poisson process

  • Kitabo, Cheru Atsmegiorgis;Kim, Jong Tae
    • Journal of the Korean Data and Information Science Society
    • /
    • v.26 no.1
    • /
    • pp.261-269
    • /
    • 2015
  • Change point analysis has become indispensable in a wide range of areas such as quality control, finance, environmetrics, medicine, geographics, and engineering; identifying the times at which a process changes helps minimize the consequences that might follow. The main objective of this paper is to compare the change-point detection capabilities of the Bayesian estimator and the maximum likelihood estimator. We apply Bayesian and maximum likelihood techniques to formulate both a single step change and multiple change points in a Poisson rate. After a signal is detected from the c-chart and the Poisson cumulative sum control chart, Monte Carlo simulation is used to investigate the performance of the two estimation techniques. The Bayesian estimator is found to outperform standard control charts, especially when the step change is of small to medium size; it also compares convincingly well with the maximum likelihood estimator and remains a good choice, especially for confidence-interval inference.
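
For reference, the maximum likelihood change point for a single step change in a Poisson rate can be found by scanning all candidate split points and maximizing the two-segment log-likelihood; a Bayesian treatment would instead place a posterior over the split point. The sketch below is a bare simulation of the MLE scan under assumed rates, not the paper's control-chart setup.

```python
# MLE of a single change point in a Poisson rate via a scan over split points.
# Simulated counts; the c-chart / CUSUM signalling step is not modelled.
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)
counts = np.concatenate([rng.poisson(4.0, 60),   # rate before the change
                         rng.poisson(7.0, 40)])  # rate after the change

def poisson_loglik(x):
    lam = x.mean()
    if lam == 0.0:            # all-zero segment: likelihood is exactly 1
        return 0.0
    return float(np.sum(x * np.log(lam) - lam - gammaln(x + 1)))

# Evaluate the two-segment log-likelihood for every candidate change point.
n = len(counts)
ll = [poisson_loglik(counts[:t]) + poisson_loglik(counts[t:])
      for t in range(1, n)]
tau_hat = int(np.argmax(ll)) + 1
print("estimated change point:", tau_hat, "(true value: 60)")
```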

Performance Comparison of Background Estimation in the Video (영상에서의 배경추정알고리즘 성능 비교)

  • Do, Jin-Kyu;Kim, Gyu-Yeong;Park, Jang-Sik;Kim, Hyun-Tae;Yu, Yun-Sik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2011.05a
    • /
    • pp.808-810
    • /
    • 2011
  • Background estimation algorithms have a significant impact on the performance of image processing and recognition. In this paper, background estimation algorithms are analyzed in terms of complexity and performance as a preprocessing step for image recognition. The Gaussian running average, mixture of Gaussians, and KDE algorithms are evaluated, and the simulation results show that the KDE algorithm outperforms the other algorithms.

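Of the three methods compared, the Gaussian running average is the cheapest: each pixel's background value is an exponentially weighted mean of incoming frames, and a pixel is flagged as foreground when it deviates too far from that mean. The sketch below runs this idea on synthetic frames; it is an illustration under assumed parameters, not the paper's evaluation code.

```python
# Gaussian running average background estimation on synthetic grayscale frames.
# Each pixel's background is an exponential moving average; pixels far from it
# are marked as foreground.  Parameters (alpha, thresh) are assumptions.
import numpy as np

rng = np.random.default_rng(0)
H, W, alpha, thresh = 64, 64, 0.05, 25.0

background = np.full((H, W), 128.0)                 # initial background estimate
for t in range(200):
    frame = 128.0 + rng.normal(0.0, 5.0, (H, W))    # static scene + sensor noise
    if t >= 150:
        frame[20:40, 20:40] += 80.0                 # a bright object appears
    foreground = np.abs(frame - background) > thresh
    # update the running average only where the scene looks like background
    background[~foreground] += alpha * (frame - background)[~foreground]

print("foreground pixels in last frame:", int(foreground.sum()), "(expected ~400)")
```

The mixture-of-Gaussians and KDE methods replace the single running mean with a multi-modal model per pixel, which is what makes them more robust and more expensive.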

A FFP-based Model to Estimate Software Development Cost (소프트웨어 개발비용을 추정하기 위한 FFP 기반 모델)

  • Park, Ju-Seok;Chong, Ki-Won
    • The KIPS Transactions: Part D
    • /
    • v.10D no.7
    • /
    • pp.1137-1144
    • /
    • 2003
  • The existing Function Point method for estimating software size has been used frequently for management information systems, and with the expanding use of real-time and embedded systems the Full Function Point (FFP) method has been proposed. However, although much research has been carried out on software size, research on models that estimate development cost from the measured size is inadequate. This paper analyzes a linear regression model and a power regression model that estimate development cost from the software FFP count; the power model is selected because its estimates are the most adequate.
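
A power regression model of the form cost = a · FFP^b can be fitted by ordinary least squares on log-transformed data, which is presumably close to the comparison described above. The data points in the sketch below are made up for illustration; they are not the paper's project data.

```python
# Fitting a power-model cost estimator  cost = a * FFP**b  by linear
# regression in log-log space.  Hypothetical data, for illustration only.
import numpy as np

ffp  = np.array([120, 250, 310, 480, 760, 1020, 1500])    # measured FFP size
cost = np.array([35, 80, 95, 160, 270, 340, 520])          # cost (e.g. person-months)

b, log_a = np.polyfit(np.log(ffp), np.log(cost), 1)
a = np.exp(log_a)
print(f"fitted model: cost = {a:.3f} * FFP^{b:.3f}")

# Predict the cost of a hypothetical 600-FFP project.
print("predicted cost for FFP = 600:", round(a * 600**b, 1))
```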

Method for estimating workability of self-compacting concrete using mixing process images

  • Li, Shuyang;An, Xuehui
    • Computers and Concrete
    • /
    • v.13 no.6
    • /
    • pp.781-798
    • /
    • 2014
  • Estimating the workability of self-compacting concrete (SCC) is very important both in laboratories and on construction sites. A method using visual information from the mixing process is proposed in this paper to estimate the workability of SCC. First, fourteen concrete specimens were produced with a single-shaft mixer, and a digital camera recorded all of the mixing processes. Second, digital image processing was employed to extract visual information from the mixing-process images: the concrete pushed by the rotating blades forms two boundaries in the images, and the shape of the upper boundary and the vertical distance between the upper and lower boundaries were used as two visual features. Third, slump flow and V-funnel tests were carried out to measure the workability of each SCC. Finally, the two visual features were used as indicators to estimate the workability: the vertical distance between the boundaries was related to the slump flow, and the shape of the upper boundary was related to the V-funnel flow time. Based on these relationships, the workability of SCC could be estimated from the mixing-process images. The method was verified with three additional experiments, and the results indicate that it can be used to estimate SCC workability automatically.
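
The core measurement described here, the vertical distance between the upper and lower boundaries of the pushed concrete, can be approximated by thresholding a frame and scanning each column for the first and last "concrete" pixels. The sketch below does this on a synthetic frame; it is a toy stand-in, not the authors' processing pipeline, and the threshold is an assumption.

```python
# Toy boundary measurement: threshold a synthetic grayscale mixing-process
# frame, find the upper and lower boundary row in each column, and report
# the mean vertical distance between them.
import numpy as np

rng = np.random.default_rng(0)
frame = rng.normal(40.0, 5.0, (240, 320))        # dark background
frame[90:170, :] += 120.0                        # bright band = pushed concrete

mask = frame > 100.0                             # threshold: "concrete" pixels
distances = []
for col in range(frame.shape[1]):
    rows = np.flatnonzero(mask[:, col])
    if rows.size:                                # column actually hits concrete
        upper, lower = rows[0], rows[-1]
        distances.append(lower - upper)

print("mean vertical distance (pixels):", round(float(np.mean(distances)), 1))
```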

Color Constancy Algorithm using the Maximum Luminance Surface (최대휘도표면을 이용한 색 항상성 알고리즘)

  • 안강식;조석제
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.27 no.3A
    • /
    • pp.276-283
    • /
    • 2002
  • This paper proposes a new color constancy algorithm using the maximum luminance surface. The method uses a linear model that represents characteristics of the human visual system, and the most important step of the linear model is estimating the spectral distribution of the illumination from an input image. To do so, we first estimate the spectral distribution functions of the light reflected from the brightest surface; we then estimate the surface reflectance functions corresponding to the maximum luminance surface using a principal component analysis of the given Munsell chips, and finally estimate the spectral distribution of the illumination in the image. Using the estimated illumination, the image is recovered by scaling it uniformly for lightness calibration. Experimental results show that the proposed method is effective in recovering color images compared with other methods.
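
A much simpler relative of this idea is white-patch (max-RGB) style correction: estimate the illuminant color from the brightest pixels and divide it out per channel. The sketch below shows only that simplified per-channel version on a synthetic scene; the paper's linear spectral model and Munsell-chip PCA are not reproduced.

```python
# Simplified "brightest surface" color constancy (white-patch / max-RGB style).
# The illuminant is estimated from the most luminous pixels and divided out,
# channel by channel.  Synthetic scene and a 1% brightness quantile are assumed.
import numpy as np

rng = np.random.default_rng(0)
reflectance = rng.uniform(0.1, 1.0, (100, 100, 3))   # random surface reflectances
illuminant = np.array([1.0, 0.9, 0.6])               # yellowish light source
image = reflectance * illuminant

# Take the top 1% most luminous pixels as the "maximum luminance surface".
luminance = image.sum(axis=2)
threshold = np.quantile(luminance, 0.99)
bright = image[luminance >= threshold]

est_illuminant = bright.mean(axis=0)
est_illuminant /= est_illuminant.max()               # normalize to the strongest channel
corrected = np.clip(image / est_illuminant, 0.0, 1.0)

print("true illuminant (normalized):", illuminant / illuminant.max())
print("estimated illuminant:        ", np.round(est_illuminant, 3))
```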

IoT-based Indoor Localization Scheme (IoT 기반의 실내 위치 추정 기법)

  • Kim, Tae-Kook
    • Journal of Internet of Things and Convergence
    • /
    • v.2 no.4
    • /
    • pp.35-39
    • /
    • 2016
  • This paper presents an IoT (Internet of Things)-based indoor localization scheme. GPS and WiFi are widely used to estimate the location of things, but GPS suffers from poor reception and radio disturbance indoors. In the WiFi-based method, the user scans nearby WiFi access points and transfers the information to a WiFi database server; this fingerprinting approach has the disadvantage of requiring an additional DB server. IoT, the internetworking of things, is growing rapidly. In the proposed scheme, a device internetworking with other devices that know their own locations, such as GPS coordinates, can estimate its own location through RSSI, and as more devices localize themselves the localization accuracy increases. The proposed method thus allows the user to estimate the location without GPS or a WiFi DB server.
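
To illustrate the RSSI step, distance can be recovered from RSSI with a log-distance path-loss model, and the position can then be solved by least squares from several neighbors with known coordinates. The sketch below uses assumed path-loss parameters and made-up device positions; nothing in it is specified by the paper.

```python
# RSSI-based localization sketch: log-distance path-loss model plus a
# linearized least-squares fix from neighbors with known coordinates.
# TX_POWER, PATH_LOSS_N, and all positions are assumptions for illustration.
import numpy as np

TX_POWER = -40.0   # RSSI at 1 m (dBm), assumed
PATH_LOSS_N = 2.5  # path-loss exponent, assumed

def rssi_to_distance(rssi):
    """Invert the log-distance model: rssi = TX_POWER - 10*n*log10(d)."""
    return 10 ** ((TX_POWER - rssi) / (10 * PATH_LOSS_N))

def localize(anchors, distances):
    """Least-squares position from anchor coordinates and estimated ranges."""
    x0, y0 = anchors[0]
    d0 = distances[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], distances[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

# Neighboring devices that already know their own positions (meters).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_pos = np.array([3.0, 6.0])
rssi = [TX_POWER - 10 * PATH_LOSS_N * np.log10(np.hypot(*(true_pos - a)))
        for a in anchors]

est = localize(anchors, [rssi_to_distance(r) for r in rssi])
print("true position:", true_pos, " estimated:", np.round(est, 2))
```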