• Title/Abstract/Keyword: set-partitioning model

Search results: 33 items (processing time: 0.02 seconds)

이중분기 확장을 통한 등치선 삼각화의 다중분기 알고리즘 (A Multiple Branching Algorithm of Contour Triangulation by Cascading Double Branching Method)

  • 최영규
    • 한국정보과학회논문지:시스템및이론
    • /
    • Vol. 27, No. 2
    • /
    • pp.123-134
    • /
    • 2000
  • We propose a method for reconstructing a three-dimensional surface from the volume information of an object represented as wire-frame contours. In this approach, commonly called contour triangulation, the most problematic situation is surface branching between adjacent slices, which occurs when one contour is connected to two or more contours on the adjacent slice and introduces many ambiguities into surface generation. In this paper, the branching problem is divided into the most common double-branching problem and the more general multiple-branching problem. We first propose a double-branching algorithm, and then a multiple-branching algorithm that reduces a multiple branch to a series of double branches. The proposed double-branching algorithm splits the parent contour: it defines a strait polygon and triangulates it to obtain the dividing line. The method works well even when the double branching is very complex, and by adjusting the level of the dividing line it can produce very realistic surfaces. The multiple-branching problem is treated as a problem of slice spacing and is solved by inserting virtual contours between two adjacent slices and merging the branch contours successively. The proposed approach is a highly structured way of handling the branching problem, the most difficult issue in contour triangulation, and it performed well on a variety of real contour data.
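As background for the abstract above, the sketch below shows only the basic, non-branching contour-triangulation step that such methods build on: two adjacent closed contours are stitched into a triangle band by repeatedly advancing along whichever contour yields the shorter new edge. It is not the paper's double- or multiple-branching algorithm (strait-polygon splitting and virtual contours are not implemented), and the square contours are made-up test data.

```python
# Greedy stitching of two adjacent closed contours into a triangle band.
# Illustrative only; branching between slices is not handled here.
import numpy as np

def stitch_contours(upper, lower):
    """Triangulate the band between two closed contours (N x 3 vertex arrays)."""
    triangles = []
    i, j = 0, 0
    # Walk once around both contours, always creating the shorter new edge.
    while i < len(upper) or j < len(lower):
        ui, uin = i % len(upper), (i + 1) % len(upper)
        lj, ljn = j % len(lower), (j + 1) % len(lower)
        d_upper = np.linalg.norm(upper[uin] - lower[lj])   # edge if we advance on upper
        d_lower = np.linalg.norm(lower[ljn] - upper[ui])   # edge if we advance on lower
        if j >= len(lower) or (i < len(upper) and d_upper <= d_lower):
            triangles.append((('u', ui), ('u', uin), ('l', lj)))
            i += 1
        else:
            triangles.append((('u', ui), ('l', ljn), ('l', lj)))
            j += 1
    return triangles

# Example: two square contours on adjacent slices.
upper = np.array([[0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
lower = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
print(len(stitch_contours(upper, lower)), "triangles")
```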


마이크로어레이 유전자 발현 자료에 대한 군집 방법 비교 (Comparison of clustering methods of microarray gene expression data)

  • 임진수;임동훈
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 23, No. 1
    • /
    • pp.39-51
    • /
    • 2012
  • Cluster analysis is an important tool for investigating association structures among genes or samples with similar characteristics in microarray expression data. In this paper, we compare the performance of hierarchical clustering, K-means, PAM (partitioning around medoids), SOM (self-organizing maps), and model-based clustering on microarray data using three cluster validity measures: internal, stability, and biological measures. The methods were compared on simulated data and on the real SRBCT (small round blue cell tumor) data. On the simulated data, almost all methods produced good clustering results that agreed with the original data under all three validity measures. On the SRBCT data, the results were less clear-cut than for the simulated data, but in terms of silhouette width (an internal measure) PAM, SOM, and model-based clustering, and in terms of the biological measure PAM and model-based clustering, obtained results similar to those of the simulation, while on the stability measure model-based clustering showed better clustering results than the other methods.
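A minimal sketch of the kind of comparison described above, using scikit-learn on simulated expression-like data with the silhouette width as the internal validity measure. PAM and SOM are omitted because they are not available in scikit-learn, and none of this is the paper's code or the SRBCT data.

```python
# Compare several clustering methods by silhouette width on simulated data.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Simulated samples-by-features matrix: three groups with shifted means.
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(50, 20)) for m in (-2, 0, 2)])

k = 3
methods = {
    "hierarchical": AgglomerativeClustering(n_clusters=k, linkage="average"),
    "k-means": KMeans(n_clusters=k, n_init=10, random_state=0),
    "model-based": GaussianMixture(n_components=k, random_state=0),
}
for name, model in methods.items():
    labels = model.fit_predict(X)   # GaussianMixture also supports fit_predict
    print(f"{name:12s} silhouette width = {silhouette_score(X, labels):.3f}")
```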

군집 특정 변량효과를 포함한 유한 혼합 모형의 베이지안 분석 (Bayesian analysis of finite mixture model with cluster-specific random effects)

  • 이혜진;경민정
    • 응용통계연구
    • /
    • Vol. 30, No. 1
    • /
    • pp.57-68
    • /
    • 2017
  • Cluster analysis is used in many fields because it is useful for identifying the overall characteristics and structure of large data sets. The expectation-maximization (EM) algorithm defined in Dempster et al. (1977) is the most widely used clustering approach. The finite mixture of linear models is another popular clustering method, and a Bayesian clustering method was first applied by Bernardo and Giron (1988) for the case where only the cluster weight probabilities are unknown. In this study, rather than a general finite mixture of linear models, we include cluster-specific random effects in the model and use Gibbs sampling for the Bayesian analysis. We describe the properties of the proposed model and the sampling scheme, and assess the usefulness of the model through simulation studies and real data analysis. We apply the model to the CO2 data of Hurn et al. (2003) and compare it with a model without random effects and a model with subject-specific random effects.
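To illustrate the Gibbs-sampling machinery mentioned above, here is a deliberately simplified sampler for a two-component univariate Gaussian mixture with known variance. It is an expository stand-in only, not the paper's finite mixture of linear models with cluster-specific random effects, and the data, priors, and iteration count are arbitrary.

```python
# Gibbs sampler for a two-component Gaussian mixture with known variance.
import numpy as np

rng = np.random.default_rng(1)
# Simulated data from two components.
y = np.concatenate([rng.normal(-2, 1, 150), rng.normal(3, 1, 100)])
n, K, sigma2 = len(y), 2, 1.0

mu = np.array([-1.0, 1.0])     # component means (initial values)
w = np.array([0.5, 0.5])       # mixture weights
mu0, tau2 = 0.0, 100.0         # N(mu0, tau2) prior on each mean
alpha = np.ones(K)             # Dirichlet prior on the weights

for it in range(500):
    # 1. Sample component labels z_i given mu and w.
    logp = np.log(w) - 0.5 * (y[:, None] - mu[None, :]) ** 2 / sigma2
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    z = np.array([rng.choice(K, p=pi) for pi in p])

    # 2. Sample means given the labels (conjugate normal update).
    for k in range(K):
        nk = np.sum(z == k)
        prec = nk / sigma2 + 1.0 / tau2
        mean = (y[z == k].sum() / sigma2 + mu0 / tau2) / prec
        mu[k] = rng.normal(mean, np.sqrt(1.0 / prec))

    # 3. Sample weights given the labels (Dirichlet update).
    w = rng.dirichlet(alpha + np.bincount(z, minlength=K))

print("draws at last iteration: mu =", mu.round(2), " w =", w.round(2))
```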

A New Method to Retrieve Sensible Heat and Latent Heat Fluxes from the Remote Sensing Data

  • Liou Yuei-An;Chen Yi-Ying;Chien Tzu-Chieh;Chang Tzu-Yin
    • 대한원격탐사학회:학술대회논문집
    • /
    • Proceedings of ISRS 2005, Korean Society of Remote Sensing
    • /
    • pp.415-417
    • /
    • 2005
  • To retrieve the latent and sensible heat fluxes, this paper utilizes high-resolution airborne imagery with visible, near-infrared, and thermal-infrared bands together with ground-based meteorological measurements. The retrieval scheme is based on the surface energy budget and momentum balance equations. Three basic surface parameters are used: surface albedo (α), the normalized difference vegetation index (NDVI), and the surface kinetic temperature (T0). The LOWTRAN 7 code is used to correct for atmospheric effects. The imagery was acquired on 28 April and 5 May 2003. From the scatter plot of the data set, the extreme dry and wet pixels were identified and used to fit the dry and wet control lines, respectively. The sensible and latent heat fluxes are then derived through a partitioning factor Λ. The retrieved latent and sensible heat fluxes are compared with in situ measurements, including eddy-correlation and porometer measurements. The fluxes retrieved by our scheme match the measurements better than those derived from the S-SEBI model.
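The sketch below illustrates the general dry/wet-edge partitioning idea referenced above (the S-SEBI-style scheme the paper compares against): once dry and wet control lines in albedo-temperature space are fitted, a partitioning factor splits the available energy into sensible and latent heat. The line coefficients and pixel values are invented for illustration and do not reproduce the paper's retrieval scheme.

```python
# Partition available energy into H and LE with a dry/wet-edge factor.
import numpy as np

def partition_fluxes(albedo, T_s, Rn, G, dry_line, wet_line):
    """dry_line/wet_line: (intercept, slope) of surface temperature vs. albedo."""
    T_dry = dry_line[0] + dry_line[1] * albedo   # temperature of a fully dry pixel
    T_wet = wet_line[0] + wet_line[1] * albedo   # temperature of a fully wet pixel
    lam = np.clip((T_dry - T_s) / (T_dry - T_wet), 0.0, 1.0)  # partitioning factor
    LE = lam * (Rn - G)           # latent heat flux
    H = (1.0 - lam) * (Rn - G)    # sensible heat flux
    return H, LE

# Hypothetical pixel values (K, W/m^2) and fitted control lines.
albedo = np.array([0.15, 0.20, 0.25])
T_s = np.array([300.0, 305.0, 310.0])
Rn, G = np.array([550.0, 520.0, 500.0]), np.array([60.0, 55.0, 50.0])
H, LE = partition_fluxes(albedo, T_s, Rn, G,
                         dry_line=(320.0, -20.0), wet_line=(295.0, -10.0))
print("H =", H.round(1), " LE =", LE.round(1))
```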


관계형 데이터베이스의 물리적 설계에서 유전해법을 이용한 속성 중복 수직분할 방법 (An Attribute Replicating Vertical Partition Method by Genetic Algorithm in the Physical Design of Relational Database)

  • 유종찬;김재련
    • 산업경영시스템학회지
    • /
    • Vol. 21, No. 46
    • /
    • pp.33-49
    • /
    • 1998
  • To improve the performance of relational databases, one has to reduce the number of disk accesses necessary to transfer data from disk to main memory. This paper proposes to reduce the number of disk I/O accesses by vertically partitioning a relation into fragments and allowing attributes to be replicated across fragments when necessary. When the zero-one integer programming model is solved by the branch-and-bound method, it requires a great deal of computing time for large problems. Therefore, heuristic solutions using a genetic algorithm (GA) are presented. The GA in this paper adopts several ideas that differ from traditional genetic algorithms, such as a rank-based sharing fitness function and elitism. To improve the performance of the GA, a set of optimal parameter levels is determined experimentally and used. When relations are vertically partitioned with attribute replication and stored on disk, the GA-based attribute-replicating vertical partitioning method attains a lower access cost than the non-replicating method and requires less computing time than the branch-and-bound method on large problems. It also obtains solutions close to the optimum on small problems.
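A minimal genetic-algorithm sketch in the spirit of the abstract above: attributes are assigned to fragments, and the fitness is a simple query-access cost. The paper's attribute replication, rank-based sharing fitness function, and disk-access cost model are not reproduced; the queries, frequencies, and GA parameters are hypothetical.

```python
# GA for vertical partitioning: minimize fragments touched, weighted by query frequency.
import random

random.seed(0)
N_ATTR, N_FRAG = 8, 2
# Each query: (set of attribute indices it reads, frequency).
QUERIES = [({0, 1, 2}, 10), ({2, 3}, 5), ({4, 5, 6}, 8), ({6, 7}, 4), ({1, 5}, 3)]

def cost(chrom):
    """Total number of fragment accesses over all queries (lower is better)."""
    return sum(freq * len({chrom[a] for a in attrs}) for attrs, freq in QUERIES)

def ga(pop_size=30, generations=100, p_mut=0.05):
    pop = [[random.randrange(N_FRAG) for _ in range(N_ATTR)] for _ in range(pop_size)]
    best = min(pop, key=cost)
    for _ in range(generations):
        new_pop = [best[:]]                          # elitism: keep the best individual
        while len(new_pop) < pop_size:
            p1, p2 = (min(random.sample(pop, 3), key=cost) for _ in range(2))  # tournaments
            cut = random.randrange(1, N_ATTR)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [g if random.random() > p_mut else random.randrange(N_FRAG)
                     for g in child]                 # mutation
            new_pop.append(child)
        pop = new_pop
        best = min(pop + [best], key=cost)
    return best, cost(best)

print(ga())
```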


Prediction of compressive strength of bacteria incorporated geopolymer concrete by using ANN and MARS

  • X., John Britto;Muthuraj, M.P.
    • Structural Engineering and Mechanics
    • /
    • Vol. 70, No. 6
    • /
    • pp.671-681
    • /
    • 2019
  • This paper examines the applicability of artificial neural networks (ANN) and multivariate adaptive regression splines (MARS) for predicting the compressive strength of bacteria-incorporated geopolymer concrete (GPC). The mix is composed of a new bacterial strain, manufactured sand, ground granulated blast furnace slag, silica fume, metakaolin, and fly ash. The concentration of sodium hydroxide (NaOH) is maintained at 8 M, the sodium silicate (Na2SiO3) to NaOH weight ratio is 2.33, the alkaline liquid-to-binder ratio is 0.35, and an ambient curing temperature (28°C) is maintained for all mixtures. In the ANN, the back-propagation training technique was employed to update the weights of each layer based on the error in the network output, with the Levenberg-Marquardt algorithm used for the feed-forward back-propagation. The MARS model was developed by establishing a relationship between a set of predictors and the dependent variables. MARS is based on a divide-and-conquer strategy that partitions the training data into separate regions, each of which gets its own regression line. Six models based on ANN and MARS were developed to predict the compressive strength of bacteria-incorporated GPC at 1, 3, 7, 28, 56, and 90 days. About 70% of the 84 data sets obtained from experiments were used for model development and the remaining 30% for testing. The study shows that the predicted values from the models are in good agreement with the corresponding experimental values and that the developed models are robust and reliable.
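A minimal sketch of the ANN part of the workflow described above: a feed-forward network trained on a 70/30 split to predict compressive strength. The features and strengths are randomly generated placeholders rather than the paper's 84 experimental data sets, scikit-learn's L-BFGS solver stands in for the Levenberg-Marquardt algorithm, and MARS is omitted because it is not available in scikit-learn.

```python
# Feed-forward network on a 70/30 split for a synthetic strength-prediction task.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.uniform(size=(84, 6))                                  # placeholder mix proportions
y = 30 + 40 * X[:, 0] - 10 * X[:, 1] + rng.normal(0, 2, 84)    # synthetic strength (MPa)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs",
                                   max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
print("test R^2:", round(r2_score(y_te, model.predict(X_te)), 3))
```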

Genomic partitioning of growth traits using a high-density single nucleotide polymorphism array in Hanwoo (Korean cattle)

  • Park, Mi Na;Seo, Dongwon;Chung, Ki-Yong;Lee, Soo-Hyun;Chung, Yoon-Ji;Lee, Hyo-Jun;Lee, Jun-Heon;Park, Byoungho;Choi, Tae-Jeong;Lee, Seung-Hwan
    • Asian-Australasian Journal of Animal Sciences
    • /
    • Vol. 33, No. 10
    • /
    • pp.1558-1565
    • /
    • 2020
  • Objective: The objective of this study was to characterize the number of loci affecting growth traits and the distribution of single nucleotide polymorphism (SNP) effects on growth traits, and to understand the genetic architecture of growth traits in Hanwoo (Korean cattle) using a genome-wide association study (GWAS), genomic partitioning, and hierarchical Bayesian mixture models. Methods: GWAS: A single-marker regression-based mixed model was used to test the association between SNPs and causal variants. A genomic relationship matrix was fitted as a random effect in this linear mixed model to correct for the genetic structure of the sire families. Genomic restricted maximum likelihood and BayesR: The a priori information included setting the additive genetic variance to a pre-specified value; the first mixture component was fixed at zero, the second at 0.0001 × σ²g, the third at 0.001 × σ²g, and the fourth at 0.01 × σ²g, so that the BayesR prior allows each SNP in the mixture distribution to explain no more than 1% of the genetic variance. Results: The GWAS revealed common 2-Mb genomic regions on bovine chromosomes 14 (BTA14) and 3 with a moderate effect that may contain causal variants for body weight at 6, 12, 18, and 24 months. This genomic region explained approximately 10% of the total additive genetic variance and of body weight heritability at 12, 18, and 24 months. BayesR identified the exact genomic regions containing causal SNPs on BTA14, 3, and 22. However, the genetic variance explained by each chromosome or SNP was estimated to be very small relative to the total additive genetic variance; causal SNPs for growth traits on BTA14 explained only 0.04% to 0.5% of the genetic variance. Conclusion: Segregating mutations have a moderate effect on BTA14, 3, and 19; many other loci with small effects on growth traits at different ages were also identified.
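A simplified sketch of a single-marker association scan like the GWAS step above, using ordinary regression of a simulated phenotype on each SNP's allele count. The genomic relationship matrix fitted as a random effect in the paper's mixed model is omitted, and the genotypes, effect sizes, and phenotypes are simulated.

```python
# Single-marker association scan on simulated genotypes and phenotypes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_animals, n_snps = 500, 1000
geno = rng.binomial(2, 0.3, size=(n_animals, n_snps)).astype(float)  # 0/1/2 allele counts
beta = np.zeros(n_snps)
beta[[10, 250, 700]] = [2.0, 1.5, -1.8]             # a few simulated causal SNPs
pheno = geno @ beta + rng.normal(0, 5, n_animals)    # simulated body weight

# Test each SNP separately and report the strongest associations.
pvals = np.array([stats.linregress(geno[:, j], pheno).pvalue for j in range(n_snps)])
top = np.argsort(pvals)[:5]
print("top SNPs:", top, "p-values:", pvals[top])
```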

Precise-Optimal Frame Length Based Collision Reduction Schemes for Frame Slotted Aloha RFID Systems

  • Dhakal, Sunil;Shin, Seokjoo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 8, No. 1
    • /
    • pp.165-182
    • /
    • 2014
  • RFID systems employ efficient anti-collision algorithms (ACAs) to enhance performance in various applications. The EPC-Global G2 RFID system utilizes Frame Slotted Aloha (FSA) as its ACA. One common approach to maximizing the system performance (tag identification efficiency) of FSA-based RFID systems is to find the optimal frame length relative to the contending population size of the RFID tags. Several analytical models for finding the optimal frame length have been developed; however, they are not perfectly optimized because they lack a precise characterization of the timing details of the underlying ACA. In this paper, we investigate this promising direction by precisely characterizing the timing details of the EPC-Global G2 protocol and use them to derive a precise-optimal frame length model. The main objective of the model is to determine the optimal frame length for the estimated number of tags that maximizes the performance of an RFID system. However, because precise estimation of the contending tags is difficult, we utilize a parametric-heuristic approach to maximize the system performance and propose two simple schemes based on the obtained optimal frame length, namely Improved Dynamic-Frame Slotted Aloha (ID-FSA) and Exponential Random Partitioning-Frame Slotted Aloha (ERP-FSA). The ID-FSA scheme is based on tag set estimation and frame size update mechanisms, whereas the ERP-FSA scheme adjusts the contending tag population so that the applied frame size becomes optimal. Simulation results indicate that the ID-FSA scheme performs better than several well-known schemes under various conditions, while the ERP-FSA scheme performs well when the frame size is small.
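For context on the abstract above, the classical slot-count model behind frame-length optimization in FSA is sketched below: with n contending tags and frame length L, the expected number of singleton (successful) slots is n(1 − 1/L)^(n−1), and identification efficiency peaks near L ≈ n. This baseline ignores the per-slot timing details of the EPC-Global G2 protocol that the paper's precise-optimal model accounts for, so it is only the starting point the paper improves on.

```python
# Classical FSA frame-length optimization: successful slots per slot spent.
def expected_successes(n_tags, frame_len):
    return n_tags * (1 - 1 / frame_len) ** (n_tags - 1)

def best_frame_length(n_tags, candidates=range(1, 1025)):
    # Maximize identification efficiency: expected successes divided by frame length.
    return max(candidates, key=lambda L: expected_successes(n_tags, L) / L)

for n in (20, 100, 250):
    L = best_frame_length(n)
    eff = expected_successes(n, L) / L
    print(f"n = {n:3d}  optimal L = {L:4d}  efficiency = {eff:.3f}")
```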

Development of Models for Regional Cardiac Surgery Centers

  • Park, Choon Seon;Park, Nam Hee;Sim, Sung Bo;Yun, Sang Cheol;Ahn, Hye Mi;Kim, Myunghwa;Choi, Ji Suk;Kim, Myo Jeong;Kim, Hyunsu;Chee, Hyun Keun;Oh, Sanggi;Kang, Shinkwang;Lee, Sok-Goo;Shin, Jun Ho;Kim, Keonyeop;Lee, Kun Sei
    • Journal of Chest Surgery
    • /
    • Vol. 49, Suppl. 1
    • /
    • pp.28-36
    • /
    • 2016
  • Background: This study aimed to develop models for regional cardiac surgery centers that take regional characteristics into consideration, as a policy measure that could alleviate the concentration of cardiac surgery in the metropolitan area and enhance accessibility for patients who reside in the regions. Methods: To develop the models and set standards for the personnel and facilities necessary for the initial management plan, we held workshops, debates, and conference meetings with various experts. Results: After partitioning the plan into two parts (operational autonomy and functional comprehensiveness), three models were developed: the 'independent regional cardiac surgery center' model, the 'satellite cardiac surgery center within hospitals' model, and the 'extended cardiac surgery department within hospitals' model. Proposals on personnel and facility management for each of the models were also presented. A regional cardiac surgery center model that could be applied to each treatment area was proposed, developed on the basis of the anticipated demand for cardiac surgery. The independent model or the satellite model was proposed for the Chungcheong, Jeolla, North Gyeongsang, and South Gyeongsang areas, where more than 500 cardiac surgeries are performed annually. The extended model was proposed as most effective for the Gangwon and Jeju areas, where more than 200 cardiac surgeries are performed annually. Conclusion: The operation of regional cardiac surgery centers with high-caliber professionals and quality resources, such as optimal equipment and facility size, should enhance regional healthcare accessibility and the quality of cardiac surgery in South Korea.

정보이득 분할을 이용한 분류기법의 지배적 초월평면 생성기법 (A dominant hyperrectangle generation technique of classification using IG partitioning)

  • 이형일
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 19, No. 1
    • /
    • pp.149-156
    • /
    • 2014
  • The nested generalized exemplar (NGE) approach uses distance-based classification as its best-match rule; it increases robustness to noise while reducing model size. However, the cross and overlap phenomena that arise among hyperrectangles during NGE learning degrade classification performance. This paper therefore proposes the DHGen (Dominant Hyperrectangle Generation) algorithm, which, for hyperrectangles in which crossing or overlap has occurred during NGE learning, splits off the interval with the highest information gain to form new hyperrectangles, thereby improving classification performance and reducing the number of hyperrectangles. Experiments on benchmark data sets drawn from the UCI Machine Learning Repository show that the proposed DHGen is comparable to kNN and superior to EACH, an implementation of the NGE theory, in terms of classification performance.
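A small sketch of the information-gain criterion referenced above: the entropy of the class labels before a split minus the weighted entropy after it. It illustrates only the splitting criterion, not the full DHGen hyperrectangle construction; the values, labels, and thresholds are toy data.

```python
# Information gain of splitting a numeric interval at a threshold.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(values, labels, threshold):
    left, right = labels[values <= threshold], labels[values > threshold]
    weighted = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
    return entropy(labels) - weighted

values = np.array([1.0, 1.5, 2.0, 2.5, 6.0, 6.5, 7.0, 7.5])
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
for t in (2.25, 4.0, 6.75):
    print(f"threshold {t}: IG = {information_gain(values, labels, t):.3f}")
```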