• Title/Summary/Keyword: consistent algorithms


Imputation Accuracy from Low to Moderate Density Single Nucleotide Polymorphism Chips in a Thai Multibreed Dairy Cattle Population

  • Jattawa, Danai;Elzo, Mauricio A.;Koonawootrittriron, Skorn;Suwanasopee, Thanathip
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.29 no.4
    • /
    • pp.464-470
    • /
    • 2016
  • The objective of this study was to investigate the accuracy of imputation from low density (LDC) to moderate density SNP chips (MDC) in a Thai Holstein-Other multibreed dairy cattle population. Dairy cattle with complete pedigree information (n = 1,244) from 145 dairy farms were genotyped with GeneSeek GGP20K (n = 570), GGP26K (n = 540) and GGP80K (n = 134) chips. After checking for single nucleotide polymorphism (SNP) quality, 17,779 SNP markers in common between the GGP20K, GGP26K, and GGP80K were used to represent MDC. Animals were divided into two groups, a reference group (n = 912) and a test group (n = 332). The SNP markers chosen for the test group were those located in positions corresponding to GeneSeek GGP9K (n = 7,652). The LDC to MDC genotype imputation was carried out using three different software packages, namely Beagle 3.3 (population-based algorithm), FImpute 2.2 (combined family- and population-based algorithms) and Findhap 4 (combined family- and population-based algorithms). Imputation accuracies within and across chromosomes were calculated as ratios of correctly imputed SNP markers to overall imputed SNP markers. Imputation accuracy for the three software packages ranged from 76.79% to 93.94%. FImpute had higher imputation accuracy (93.94%) than Findhap (84.64%) and Beagle (76.79%). Imputation accuracies were similar and consistent across chromosomes for FImpute, but not for Findhap and Beagle. Most chromosomes that showed either high (73%) or low (80%) imputation accuracies were the same chromosomes that had above and below average linkage disequilibrium (LD; defined here as the correlation between pairs of adjacent SNP within chromosomes less than or equal to 1 Mb apart). Results indicated that FImpute was more suitable than Findhap and Beagle for genotype imputation in this Thai multibreed population. Perhaps additional increments in imputation accuracy could be achieved by increasing the completeness of pedigree information.
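The accuracy measure used above (correctly imputed SNP markers divided by all imputed markers, within and across chromosomes) is simple to compute. A minimal sketch, assuming the true and imputed genotypes are available as 0/1/2-coded arrays with a per-marker chromosome label; the function name and array layout are illustrative, not taken from the imputation software compared in the study:

```python
import numpy as np

def imputation_accuracy(true_geno, imputed_geno, chrom, masked):
    """Ratio of correctly imputed genotypes to all imputed genotypes.

    true_geno, imputed_geno : (n_animals, n_markers) arrays coded 0/1/2
    chrom  : (n_markers,) chromosome label of each marker
    masked : (n_markers,) boolean, True for markers absent from the LDC panel
    Returns the overall accuracy and a per-chromosome breakdown.
    """
    correct = (true_geno[:, masked] == imputed_geno[:, masked])
    overall = correct.mean()
    per_chromosome = {c: correct[:, chrom[masked] == c].mean()
                      for c in np.unique(chrom[masked])}
    return overall, per_chromosome
```

Comparing the per-chromosome values against the mean LD of adjacent SNP pairs within 1 Mb reproduces the kind of chromosome-level comparison the abstract describes.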

Uncertainty Analysis on the Simulations of Runoff and Sediment Using SWAT-CUP (SWAT-CUP을 이용한 유출 및 유사모의 불확실성 분석)

  • Kim, Minho;Heo, Tae-Young;Chung, Sewoong
    • Journal of Korean Society on Water Environment
    • /
    • v.29 no.5
    • /
    • pp.681-690
    • /
    • 2013
  • Watershed models have been increasingly used to support integrated management of land and water and of non-point source pollutants, and to implement total maximum daily load policies. However, these models demand a great amount of input data and process parameters as well as a proper calibration, and they sometimes produce significant uncertainty in the simulation results. For this reason, uncertainty analysis is necessary to minimize the risk of using the models for important decision making. The objectives of this study were to evaluate three uncertainty analysis algorithms (SUFI-2: Sequential Uncertainty Fitting Ver. 2, GLUE: Generalized Likelihood Uncertainty Estimation, ParaSol: Parameter Solution) that were used to analyze the sensitivity of the SWAT (Soil and Water Assessment Tool) parameters and to auto-calibrate the model in a watershed, to evaluate the uncertainties in the simulations of runoff and sediment load, and to suggest alternatives for reducing the uncertainty. The results confirmed that the parameters most sensitive to the runoff and sediment simulations were consistent across the three algorithms, although the order of importance differed slightly. In addition, there was no significant difference in the performance of the auto-calibration results for runoff simulations. On the other hand, the sediment calibration results showed lower modeling efficiency than the runoff simulations, probably due to the lack of measurement data. It is clear that the parameter uncertainty in the sediment simulation is much greater than that in the runoff simulation. To decrease the uncertainty of SWAT simulations, it is recommended to estimate feasible ranges of the model parameters and to obtain sufficient and reliable measurement data for the study site.
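Of the three algorithms compared, GLUE is the easiest to illustrate: draw parameter sets from their feasible ranges, score each simulation with a likelihood measure (Nash-Sutcliffe efficiency is a common choice), discard non-behavioral sets below a threshold, and weight the rest to form prediction bounds. A minimal sketch under those assumptions; run_model() and sample_params() are hypothetical stand-ins for a SWAT run and the prior parameter ranges:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency, a common GLUE likelihood measure."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def glue_bounds(obs, sample_params, run_model, n_samples=2000, threshold=0.5):
    """GLUE-style 5%/95% prediction limits from behavioral parameter sets."""
    sims, weights = [], []
    for _ in range(n_samples):
        theta = sample_params()            # draw from the prior ranges
        sim = run_model(theta)             # e.g., simulated runoff or sediment
        score = nse(obs, sim)
        if score > threshold:              # keep only behavioral sets
            sims.append(sim)
            weights.append(score)
    sims, weights = np.array(sims), np.array(weights)
    weights = weights / weights.sum()
    lower, upper = [], []
    for t in range(sims.shape[1]):         # weighted percentiles per time step
        order = np.argsort(sims[:, t])
        cdf = np.cumsum(weights[order])
        lower.append(sims[order, t][np.searchsorted(cdf, 0.05)])
        upper.append(sims[order, t][np.searchsorted(cdf, 0.95)])
    return np.array(lower), np.array(upper)
```

The width of the resulting band is one way to express the larger parameter uncertainty the abstract reports for sediment relative to runoff.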

Sea Ice Extents and global warming in Okhotsk Sea and surrounding Ocean - sea ice concentration using airborne microwave radiometer -

  • Nishio, Fumihiko
    • Proceedings of the KSRS Conference
    • /
    • 1998.09a
    • /
    • pp.76-82
    • /
    • 1998
  • An increase in greenhouse gases such as CO₂ and CH₄ would cause global warming in the atmosphere. According to global circulation models, a large increase in atmospheric temperature might occur in the Okhotsk Sea region under global warming due to a doubling of greenhouse gases. Therefore, it is very important to monitor sea ice extent in the Okhotsk Sea. To estimate sea ice extent and concentration with higher accuracy, field experiments were begun to compare data from an Airborne Microwave Radiometer (AMR) with video images acquired by cameras installed on the aircraft (Beach-200). Sea ice concentration is generally proportional to the brightness temperature, and accurate retrieval of sea ice concentration from the brightness temperature is important because of the sensitivity of the multi-channel data to the amount of open water in the sea ice pack. During the airborne AMR field experiments, the multi-frequency data suggested that sea ice concentration depends slightly on sea ice type, since the brightness temperature differs between thin, small pieces of sea ice floes and large ice floes with different surface signatures. Based on the classification of the two sea ice types, thin ice and large ice floes are clearly distinguished in the scatter plot of 36.5 and 89.0 GHz, but not in the scatter plot of 18.7 and 36.5 GHz. Two algorithms that have been used for deriving sea ice concentration from airborne multi-channel data are compared: the NASA Team Algorithm and the Bootstrap Algorithm. Intercomparison of both algorithms with the airborne data and the sea ice concentration derived from video images has shown that the Bootstrap Algorithm is more consistent with the binary maps of the video images.
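As background on the two retrieval schemes compared above: NASA Team-style algorithms work with polarization and spectral gradient ratios of the brightness temperatures, while the Bootstrap approach interpolates between open-water and consolidated-ice tie points in brightness-temperature space. A minimal sketch of those building blocks; the tie-point values are placeholders, not the calibrated constants used in either operational algorithm:

```python
def polarization_ratio(tb19v, tb19h):
    """NASA Team PR: decreases as ice concentration increases."""
    return (tb19v - tb19h) / (tb19v + tb19h)

def gradient_ratio(tb37v, tb19v):
    """NASA Team GR: used mainly to separate ice types."""
    return (tb37v - tb19v) / (tb37v + tb19v)

def bootstrap_style_concentration(tb, tb_open_water=180.0, tb_ice=250.0):
    """Bootstrap-style idea: linear interpolation between an open-water tie
    point and a consolidated-ice tie point (placeholder values, in kelvin)."""
    c = (tb - tb_open_water) / (tb_ice - tb_open_water)
    return max(0.0, min(1.0, c))
```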

Fast Content-preserving Seam Estimation for Real-time High-resolution Video Stitching (실시간 고해상도 동영상 스티칭을 위한 고속 콘텐츠 보존 시접선 추정 방법)

  • Kim, Taeha;Yang, Seongyeop;Kang, Byeongkeun;Lee, Hee Kyung;Seo, Jeongil;Lee, Yeejin
    • Journal of Broadcast Engineering
    • /
    • v.25 no.6
    • /
    • pp.1004-1012
    • /
    • 2020
  • We present a novel content-preserving seam estimation algorithm for real-time high-resolution video stitching. Seam estimation is one of the fundamental steps in image/video stitching; its purpose is to minimize visual artifacts in the transition areas between images. Typical seam estimation algorithms are based on optimization methods that demand intensive computation and large memory. These algorithms, however, often fail to avoid objects and result in cropped or duplicated objects. They also lack temporal consistency and induce flickering between frames. Hence, we propose an efficient and temporally consistent seam estimation algorithm that utilizes a straight line. The proposed method also uses convolutional neural network-based instance segmentation to locate seams outside of objects. Experimental results demonstrate that the proposed method produces visually plausible stitched videos with minimal visual artifacts in real time.
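The object-avoiding behavior described above can be sketched with a generic dynamic-programming seam: build a per-pixel cost over the overlap region that is high inside instance-segmentation masks and, for a straight-line variant, grows with the distance from a reference line, then trace the minimum-cost vertical path. This is an illustrative sketch, not the paper's exact method; the weights and the mask input are assumptions:

```python
import numpy as np

def seam_cost(diff, object_mask, line_x, w_obj=1e3, w_line=0.01):
    """Cost map: photometric difference + heavy penalty inside object masks
    + a mild pull toward a straight reference line at column line_x."""
    xs = np.arange(diff.shape[1])[None, :]
    return diff + w_obj * object_mask + w_line * (xs - line_x) ** 2

def find_seam(cost):
    """Classic dynamic-programming minimum-cost vertical seam."""
    h, w = cost.shape
    acc = cost.astype(float).copy()
    for y in range(1, h):
        left = np.r_[np.inf, acc[y - 1, :-1]]
        right = np.r_[acc[y - 1, 1:], np.inf]
        acc[y] += np.minimum(np.minimum(left, acc[y - 1]), right)
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(acc[-1]))
    for y in range(h - 2, -1, -1):
        lo = max(0, seam[y + 1] - 1)
        hi = min(w, seam[y + 1] + 2)
        seam[y] = lo + int(np.argmin(acc[y, lo:hi]))
    return seam   # seam x-coordinate for each row of the overlap
```

Reusing the same reference line across frames is one simple way to obtain the temporal consistency the abstract emphasizes.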

Parallelism and Straightness Measurement of a Pair of Rails for Ultra Precision Guide-ways (초정밀 안내면 레일의 평행도 및 진직도 동시측정)

  • Hwang, Joo-Ho;Park, Chun-Hong;Wei, Gao;Kim, Seung-Woo
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.24 no.3 s.192
    • /
    • pp.117-123
    • /
    • 2007
  • This paper describes a three-probe system that can be used to measure the parallelism and straightness of a pair of rails simultaneously. The parallelism is measured using a modified reversal method, while the straightness is measured using a sequential two-point method. The measurement algorithms were analyzed numerically using a pair of functionally defined rails to validate the three-probe system. Tests were also performed on a pair of straightedge rails with a length of 250 mm and a maximum straightness deviation of 0.05 μm, as certified by the supplier. The experimental results demonstrated that the parallelism-measurement algorithm had a cancellation effect on the motion error of the probe stage. They also confirmed that the proposed system could measure the relative slope of a pair of rails to about 0.06 μrad. Therefore, by combining this technique with a sequential differential method to measure the straightness of the rails simultaneously, the surface profiles could be determined accurately while eliminating the stage error. The measured straightness deviation of each straightedge was less than 0.05 μm, consistent with the certified value.
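As general background on the sequential two-point method mentioned above: two probes separated by one sampling interval scan the rail, the difference of their readings cancels the common vertical translation error of the stage, and the profile is rebuilt by cumulative summation. A minimal sketch of that reconstruction idea only; it omits the pitch handling and the reversal step the authors use for parallelism:

```python
import numpy as np

def two_point_straightness(probe_a, probe_b):
    """Generic sequential two-point reconstruction of a straightness profile.

    probe_a[i], probe_b[i] : readings of two probes one sampling interval
    apart at scan position i. Their difference removes the stage's vertical
    translation error; cumulative summation recovers the surface profile.
    """
    diff = np.asarray(probe_b, float) - np.asarray(probe_a, float)
    profile = np.concatenate(([0.0], np.cumsum(diff)))
    # Remove the best-fit line so only the straightness deviation remains
    x = np.arange(profile.size)
    slope, intercept = np.polyfit(x, profile, 1)
    return profile - (slope * x + intercept)
```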

Comprehensive evaluation of structural geometrical nonlinear solution techniques Part I: Formulation and characteristics of the methods

  • Rezaiee-Pajand, M.;Ghalishooyan, M.;Salehi-Ahmadabad, M.
    • Structural Engineering and Mechanics
    • /
    • v.48 no.6
    • /
    • pp.849-878
    • /
    • 2013
  • This paper consists of two parts, which broadly examine the abilities of solution techniques for structures with geometrically nonlinear behavior. In Part I of the article, formulations of several well-known approaches will be presented. These solution strategies fall into different groups, such as: residual load minimization, normal plane, updated normal plane, cylindrical arc length, work control, residual displacement minimization, generalized displacement control, modified normal flow, and three-parameter ellipsoidal, hyperbolic, and polynomial schemes. For better understanding and easier application of the solution techniques, a consistent mathematical notation is employed in all formulations of the predictor and corrector steps. Moreover, other features of these approaches and their algorithms will be investigated. Common methods of determining the magnitude and sign of the load factor increment in the predictor step, and of choosing the correct root in the predictor and corrector steps, will be reviewed. The way these features are determined is very important for tracing the structural equilibrium path. In the second part of the article, the robustness and efficiency of the solution schemes will be comprehensively evaluated by performing numerical analyses.
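Among the strategies listed, the arc-length family is the most widely used and shows the common predictor/corrector structure: the load factor becomes an unknown and each step enforces a constraint on the size of the increment so that limit points can be passed. The single-degree-of-freedom sketch below uses a spherical arc-length constraint solved by Newton iteration on the augmented system; it is illustrative only and not the formulation of this paper:

```python
import numpy as np

def arc_length_trace(f_int, k_tan, P=1.0, dl=0.05, psi=1.0, steps=60, tol=1e-10):
    """Trace the equilibrium path of a 1-DOF system f_int(u) = lam * P.

    Each increment solves the augmented system
        R1 = f_int(u) - lam * P                              = 0
        R2 = (u - u0)^2 + (psi * (lam - lam0) * P)^2 - dl^2  = 0
    by Newton iteration, so load-limit points do not stop the continuation.
    """
    path, (u, lam) = [(0.0, 0.0)], (0.0, 0.0)
    du_prev, dlam_prev = 0.0, 1.0                  # previous increment direction
    for _ in range(steps):
        u0, lam0 = u, lam
        norm = np.hypot(du_prev, psi * dlam_prev * P)
        u, lam = u0 + dl * du_prev / norm, lam0 + dl * dlam_prev / norm  # predictor
        for _ in range(50):                        # corrector: Newton on (u, lam)
            R = np.array([f_int(u) - lam * P,
                          (u - u0) ** 2 + (psi * (lam - lam0) * P) ** 2 - dl ** 2])
            if np.linalg.norm(R) < tol:
                break
            J = np.array([[k_tan(u), -P],
                          [2 * (u - u0), 2 * psi ** 2 * (lam - lam0) * P ** 2]])
            du, dlam = np.linalg.solve(J, -R)
            u, lam = u + du, lam + dlam
        du_prev, dlam_prev = u - u0, lam - lam0
        path.append((u, lam))
    return np.array(path)

# Example: a softening spring with a limit point that pure load control cannot pass
path = arc_length_trace(f_int=lambda u: u - u ** 3, k_tan=lambda u: 1 - 3 * u ** 2)
```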

A Video Deblurring Algorithm based on Sharpness Metric for Uniform Sharpness between Frames (프레임 간 선명도 균일화를 위한 선명도 메트릭 기반의 동영상 디블러링 알고리즘)

  • Lee, Byung-Ju;Lee, Dong-Bok;Song, Byung Cheol
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.4
    • /
    • pp.127-136
    • /
    • 2013
  • This paper proposes a video deblurring algorithm which maintains uniform sharpness between frames. Unlike the previous algorithms using fixed parameters, the proposed algorithm keeps uniform sharpness by adjusting parameters for each frame. First, we estimate the initial blur kernel and perform deconvolution, then measure the sharpness of the deblurred image. In order to maintain uniform sharpness, we adjust the regularization parameter and kernel according to the examined sharpness, and perform deconvolution again. The experimental results show that the proposed algorithm achieves outstanding deblurring results while providing consistent sharpness.
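The per-frame adjustment described above can be pictured as a small feedback loop: measure a sharpness score on the deblurred frame and nudge the regularization weight until the score matches a target shared by all frames. The sketch below uses a gradient-energy score as a stand-in for the paper's sharpness metric, and deconvolve() is a placeholder for any non-blind deconvolution routine:

```python
import numpy as np

def sharpness(img):
    """Stand-in sharpness metric: mean gradient magnitude of the frame."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def deblur_with_uniform_sharpness(frames, kernels, deconvolve, target,
                                  lam0=0.01, step=1.5, iters=5):
    """Adjust the regularization weight per frame so every deblurred frame
    lands near the same target sharpness (deconvolve() is hypothetical)."""
    restored_frames = []
    for frame, kernel in zip(frames, kernels):
        lam = lam0
        for _ in range(iters):
            restored = deconvolve(frame, kernel, lam)
            s = sharpness(restored)
            if abs(s - target) / target < 0.05:    # within 5% of the target
                break
            # Weaker regularization -> sharper result, and vice versa
            lam = lam / step if s < target else lam * step
        restored_frames.append(restored)
    return restored_frames
```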

A Study on Tool for Software Architecture Design (소프트웨어 구조 설계 지원 도구 개발에 관한 연구)

  • 강병도;이미경
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.7 no.3
    • /
    • pp.15-22
    • /
    • 2002
  • As the size and complexity of software systems increase, the design and specification of the overall system structure become more significant issues than the choice of algorithms and data structures for computation. A software architecture serves as a framework for understanding system components and their interrelationships. Software architectures can be reusable assets that help achieve low cost, high productivity, and consistent quality. We have developed a software architecture design environment called Happy Work. In this paper, we present the structure and functions of Happy Work. Happy Work has two main functions. First, it provides a graphic editor for modeling software architecture diagrams. Second, it provides an ADL called HWL (Happy Work Language). HWL is a language that describes software architectures.

A Conveyor Algorithm for Complete Consistency of Materialized View in a Self-Maintenance (실체 뷰의 자기관리에서 완전일관성을 위한 컨베이어 알고리듬)

  • Hong, In-Hoon;Kim, Yon-Soo
    • IE interfaces
    • /
    • v.16 no.2
    • /
    • pp.229-239
    • /
    • 2003
  • On-Line Analytical Processing (OLAP) tools access data from the data warehouse for complex data analysis, such as multidimensional data analysis and decision support activities. Current research has led to new developments in all aspects of data warehousing; however, there are still a number of problems that need to be solved to make data warehousing effective. View maintenance, one of these problems, is the task of maintaining views in response to updates in the source data. Keeping a view consistent with updates to the base relations, however, can be expensive, since it may involve querying external sources where the base relations reside. In order to reduce maintenance costs, it is possible to maintain the views using information that is strictly local to the data warehouse. This process is usually referred to as "self-maintenance of views". A number of algorithms have been proposed for self-maintenance of views that keep some additional information in the data warehouse in the form of auxiliary views. However, those algorithms did not consider the consistency of materialized views under view self-maintenance. The purpose of this paper is to study the consistency problem that arises when self-maintenance of views is implemented. The proposed "conveyor algorithm" achieves complete consistency of materialized views under self-maintenance while taking network delay into account. The rationale for the conveyor algorithm and its performance characteristics are described in detail.
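The details of the conveyor algorithm itself are not reproduced here, but the self-maintenance setting it builds on is easy to illustrate: a materialized join view is updated from source deltas using only auxiliary views stored in the warehouse, so no query is ever sent back to the sources. A minimal sketch of that general idea only (not the conveyor algorithm), with in-memory dictionaries standing in for warehouse tables:

```python
from collections import defaultdict

class SelfMaintainedJoinView:
    """Materialized view V = R join S on a key, maintained with auxiliary
    copies of R and S kept locally in the warehouse (generic sketch)."""

    def __init__(self):
        self.aux_r = defaultdict(list)   # auxiliary view of R, keyed by join key
        self.aux_s = defaultdict(list)   # auxiliary view of S, keyed by join key
        self.view = []                   # materialized join result

    def insert_r(self, key, row):
        """Apply an insertion delta on R using only local data."""
        self.aux_r[key].append(row)
        for s_row in self.aux_s[key]:    # no query to the remote source of S
            self.view.append((row, s_row))

    def insert_s(self, key, row):
        """Apply an insertion delta on S symmetrically."""
        self.aux_s[key].append(row)
        for r_row in self.aux_r[key]:
            self.view.append((r_row, row))

v = SelfMaintainedJoinView()
v.insert_r(1, {"r": "a"})
v.insert_s(1, {"s": "b"})                # the view now holds the joined tuple
```

Consistency questions arise when deltas from different sources arrive out of order or after a network delay, which is the situation the conveyor algorithm addresses.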

The Research About Free Piston Linear Engine with Artificial Neural Network (인공 신경망을 이용한 프리피스톤 리니어 엔진의 연구)

  • AHMED, TUSHAR;HUNG, NGUYEN BA;LIM, OCKTAECK
    • Transactions of the Korean hydrogen and new energy society
    • /
    • v.26 no.3
    • /
    • pp.294-299
    • /
    • 2015
  • The free piston linear engine (FPLE) is a promising concept that has been explored since the mid-20th century. Artificial neural networks (ANNs), on the other hand, are non-linear computational algorithms that can model the behavior of complicated non-linear processes. Some researchers have already applied this method to predict internal combustion engine characteristics. However, no investigation predicting the performance of an FPLE using an ANN approach appears to have been published in the literature to date. In this study, an artificial neural network model using a back-propagation learning algorithm has been used to predict the in-cylinder pressure, frequency, and maximum stroke length of a free piston linear engine. Well-trained neural network models can provide fast and consistent results, making them an easy-to-use tool in preliminary studies of such thermal engineering problems.
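A back-propagation regression network of the kind described can be assembled in a few lines. The sketch below uses scikit-learn's MLPRegressor with made-up inputs (e.g., operating conditions) and the three outputs named in the abstract; the data, features, and layer sizes are placeholders, not the authors' architecture:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder training data: rows are operating points (hypothetical features),
# columns of y are in-cylinder pressure, frequency, and maximum stroke length.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))
y = rng.uniform(size=(200, 3))

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), activation="tanh",
                 solver="adam", max_iter=5000, random_state=0),
)
model.fit(X, y)                      # training by back-propagation
prediction = model.predict(X[:1])    # -> pressure, frequency, stroke length
```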