• Title/Summary/Keyword: Random partition


A Study on the Usage of the Color and the Pattern of Materials in Villa Savoye (빌라 사보아의 재료 패턴 및 색채 사용 연구)

  • Kim, So-Hee
    • Korean Institute of Interior Design Journal
    • /
    • v.17 no.5
    • /
    • pp.133-140
    • /
    • 2008
  • This study aims to understand in detail how color and material patterns were used and to apply the findings to interior architecture. Many documents have described the colors and patterns only incidentally; while the use of color and materials has been neglected in favor of architectural form, their importance stands out in modern interior space. The villa, the weekend home of the Savoye family, was built between 1928 and 1931, and the Villa Savoye is the particular focus of this study. Le Corbusier viewed case pieces for storage and partitions for dividing space as architecture, and he united furniture and architecture by developing partitions that could be painted either the wall color, to become part of the wall, or in contrasting colors, to stand out as wall partitions. He loved white and believed in using it for interiors, but noted that it should also be balanced with a wall-related polychromy. Color became an integral part of the white structure of the Villa Savoye, which was raised on stilts with an exterior wall at the base painted green as a visual connection with the lawn. Color was also used architecturally in the interior, with white walls interrupted by planes of pink, blue, and red ocher, giving the space an unexpected playfulness through the color of the finishing materials. The varied use of color and material patterns constitutes an element of great architectural richness. It follows a distinct principle based on emotional order, moving people from one space to another and letting them experience the spatial connection.

Salient Object Detection via Multiple Random Walks

  • Zhai, Jiyou;Zhou, Jingbo;Ren, Yongfeng;Wang, Zhijian
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.4
    • /
    • pp.1712-1731
    • /
    • 2016
  • In this paper, we propose a novel saliency detection framework via multiple random walks (MRW), which simulates multiple agents on a graph simultaneously. In the MRW system, two agents, representing the seeds of the background and the foreground, traverse the graph according to a transition matrix and interact with each other until a state of equilibrium is reached. The proposed algorithm is divided into three steps. First, an initial segmentation partitions an input image into homogeneous regions (i.e., superpixels) for saliency computation. Based on these regions, we construct a graph in which the nodes correspond to the superpixels and the edges between neighboring nodes represent the similarities of the corresponding superpixels. Second, to generate the background seeds, we first filter out the one of the four image boundaries that is least likely to belong to the background; the superpixels on the three remaining sides of the image are labeled as background seeds. To generate the foreground seeds, we use the center prior that foreground objects tend to appear near the image center. In the last step, the foreground and background seeds are treated as two different agents in the multiple random walks to complete salient object detection. Experimental results on three benchmark databases demonstrate that the proposed method performs well against state-of-the-art methods in terms of accuracy and robustness.
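
A minimal sketch of the graph setup and per-agent walk described in this abstract, assuming a row-normalized similarity matrix over superpixels and restart-style walks from the foreground and background seeds. The interaction rule by which the two agents reach equilibrium in the MRW system is not reproduced here; all names and the toy data are illustrative.

```python
# Random walks with restart from foreground/background seeds on a superpixel graph.
import numpy as np

def transition_matrix(W):
    """Row-normalize a symmetric similarity matrix W into a transition matrix P."""
    d = W.sum(axis=1, keepdims=True)
    return W / np.maximum(d, 1e-12)

def random_walk_with_restart(P, seed, restart=0.15, n_iter=100):
    """Iterate p <- (1 - restart) * P^T p + restart * seed until (near) equilibrium."""
    s = seed / seed.sum()
    p = s.copy()
    for _ in range(n_iter):
        p = (1.0 - restart) * P.T @ p + restart * s
    return p

# Toy example: 6 "superpixels" with a hand-made similarity matrix.
rng = np.random.default_rng(0)
W = rng.random((6, 6)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
P = transition_matrix(W)
fg_seed = np.array([0, 0, 1, 1, 0, 0], float)   # assumed foreground (center) seeds
bg_seed = np.array([1, 0, 0, 0, 0, 1], float)   # assumed background (boundary) seeds
saliency = random_walk_with_restart(P, fg_seed) - random_walk_with_restart(P, bg_seed)
print(np.round(saliency, 3))
```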

Feature selection and prediction modeling of drug responsiveness in Pharmacogenomics (약물유전체학에서 약물반응 예측모형과 변수선택 방법)

  • Kim, Kyuhwan;Kim, Wonkuk
    • The Korean Journal of Applied Statistics
    • /
    • v.34 no.2
    • /
    • pp.153-166
    • /
    • 2021
  • A main goal of pharmacogenomics studies is to predict an individual's drug responsiveness from high-dimensional genetic variables. Because of the large number of variables, feature selection is required to reduce their number, and the selected features are then used to build a predictive model with machine learning algorithms. In the present study, we applied several hybrid feature selection methods, such as combinations of logistic regression, ReliefF, TuRF, random forest, and LASSO, to a next-generation sequencing data set of 400 epilepsy patients. We then applied the selected features to machine learning methods including random forest, gradient boosting, and support vector machines, as well as a stacking ensemble method. Our results showed that the stacking model with a hybrid feature selection of random forest and ReliefF performs better than the other combinations of approaches. Based on a 5-fold cross-validation partition, the mean test accuracy of the best model was 0.727 and its mean test AUC was 0.761. The stacking models also appeared to outperform single machine learning predictive models when using the same selected features.
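
A minimal sketch, not the authors' pipeline, of random-forest-based feature screening followed by a stacking ensemble evaluated with 5-fold cross-validation. The ReliefF/TuRF steps are omitted here (they would need an external package such as skrebate), and the data are synthetic placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, StackingClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=500, n_informative=20, random_state=0)

# Step 1: screen features by random-forest importance (stand-in for the hybrid selection).
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:30]
X_sel = X[:, top]

# Step 2: stacking ensemble of random forest, gradient boosting, and SVM.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
)

# Step 3: 5-fold cross-validated accuracy (the paper also reports AUC).
print("mean CV accuracy:", cross_val_score(stack, X_sel, y, cv=5).mean())
```

Note that, for brevity, the feature screening above is done once on the full data; in a real evaluation it should be nested inside the cross-validation folds to avoid optimistic bias.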

A Study on Predictive Modeling of I-131 Radioactivity Based on Machine Learning (머신러닝 기반 고용량 I-131의 용량 예측 모델에 관한 연구)

  • Yeon-Wook You;Chung-Wun Lee;Jung-Soo Kim
    • Journal of radiological science and technology
    • /
    • v.46 no.2
    • /
    • pp.131-139
    • /
    • 2023
  • High-dose I-131 used for the treatment of thyroid cancer causes localized exposure among the radiology technologists handling it. Because there is a delay between the calibration date and the date on which a dose of I-131 is administered to a patient, it is necessary to measure the radioactivity of the administered dose directly with a dose calibrator. In this study, we applied machine learning models to external dose rates measured from shielded I-131 in order to predict its radioactivity. External dose rates were measured at distances of 1 m, 0.3 m, and 0.1 m from a shielded container holding the I-131, for a total of 868 sets of measurements. For modeling, we used the hold-out method to partition the data with a 7:3 ratio (609 in the training set, 259 in the test set). For the machine learning algorithms, we chose linear regression, decision tree, random forest, and XGBoost. To evaluate the models, we calculated root mean square error (RMSE), mean square error (MSE), and mean absolute error (MAE) for accuracy, and R2 for explanatory power. The evaluation results are as follows: linear regression (RMSE 268.15, MSE 71901.87, MAE 231.68, R2 0.92), decision tree (RMSE 108.89, MSE 11856.92, MAE 19.24, R2 0.99), random forest (RMSE 8.89, MSE 79.10, MAE 6.55, R2 0.99), and XGBoost (RMSE 10.21, MSE 104.22, MAE 7.68, R2 0.99). The random forest model achieved the highest predictive ability. Improving the model's performance in the future is expected to help lower exposure among radiology technologists.
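
A minimal sketch of the hold-out workflow described above: a 7:3 split, a random-forest regressor, and RMSE/MSE/MAE/R2 metrics. The feature layout (dose rates at 1 m, 0.3 m, and 0.1 m) is assumed, and the data are synthetic, not the study's measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

rng = np.random.default_rng(0)
n = 868
dose_rates = rng.uniform(10, 500, size=(n, 3))                       # measured at 1 m, 0.3 m, 0.1 m
activity = dose_rates @ np.array([5.0, 2.0, 1.0]) + rng.normal(0, 20, n)  # toy target radioactivity

X_train, X_test, y_train, y_test = train_test_split(dose_rates, activity,
                                                    test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)

mse = mean_squared_error(y_test, pred)
print("RMSE:", np.sqrt(mse), "MSE:", mse,
      "MAE:", mean_absolute_error(y_test, pred), "R2:", r2_score(y_test, pred))
```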

Near infrared spectroscopy for classification of apples using K-mean neural network algorism

  • Muramatsu, Masahiro;Takefuji, Yoshiyasu;Kawano, Sumio
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference
    • /
    • 2001.06a
    • /
    • pp.1131-1131
    • /
    • 2001
  • To develop a nondestructive quality evaluation technique for fruits, a K-means algorithm is applied to near infrared (NIR) spectroscopy of apples. The K-means algorithm is one of the neural-network partitioning methods; the goal is to partition the set of objects O into K disjoint clusters, where K is assumed to be known a priori. The algorithm introduced by MacQueen draws an initial partition of the objects at random, then computes the cluster centroids, assigns objects to the closest of them, and iterates until a local minimum is obtained. The advantage of using a neural network is that the spectra at the wavelengths showing absorptions of chemical bonds, including C-H and O-H types, can be selected directly as input data. In conventional multiple regression approaches, the first wavelength is selected manually around the absorbance wavelengths as the one showing a high correlation coefficient between the NIR 2nd-derivative spectrum and the Brix value in a single regression; the second and following wavelengths are then selected statistically so that the calibration equation shows a high correlation. The second and following wavelengths are therefore selected not in an NIR-spectroscopic way but in a statistical way. In this research, the spectra at six wavelengths (900, 904, 914, 990, 1000, and 1016 nm) are selected as input data for the K-means analysis: 904 nm is selected because it shows the highest correlation coefficient and is regarded as an absorbance wavelength, while the others show relatively high correlation coefficients and have been identified by B. G. Osborne as absorbance wavelengths of the relevant chemical structures. The experiment was performed in two phases. In the first phase, reflectance was acquired using fiber optics; it was calculated by comparison with the near infrared energy reflected from a Teflon sphere used as a standard reference, and the 2nd-derivative spectra were used for the K-means analysis. The samples were 67 intact Fuji apples cultivated in Aomori prefecture, Japan. In the second phase, Brix values were measured with a commercially available refractometer in order to assess the result of the K-means approach. The result is a partition of the spectral data sets of the 67 samples into eight clusters, with the apples classified into samples of high and low Brix value. Consequently, the K-means analysis realized a classification of apples on the basis of Brix values.
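
A minimal sketch of the clustering step: partitioning 2nd-derivative spectra at the six selected wavelengths into eight clusters with K-means and summarizing the mean Brix value per cluster. The spectra and Brix values below are synthetic placeholders, not the paper's measurements.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_apples = 67
spectra = rng.normal(size=(n_apples, 6))   # 2nd-derivative values at 900, 904, 914, 990, 1000, 1016 nm
brix = rng.uniform(11, 16, n_apples)       # refractometer Brix values

km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(spectra)
for k in range(8):
    members = km.labels_ == k
    print(f"cluster {k}: {members.sum():2d} apples, mean Brix {brix[members].mean():.2f}")
```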


Single-step genomic evaluation for growth traits in a Mexican Braunvieh cattle population

  • Jonathan Emanuel Valerio-Hernandez;Agustin Ruiz-Flores;Mohammad Ali Nilforooshan;Paulino Perez-Rodriguez
    • Animal Bioscience
    • /
    • v.36 no.7
    • /
    • pp.1003-1009
    • /
    • 2023
  • Objective: The objective was to compare (pedigree-based) best linear unbiased prediction (BLUP), genomic BLUP (GBLUP), and single-step GBLUP (ssGBLUP) methods for the genomic evaluation of growth traits in a Mexican Braunvieh cattle population. Methods: Birth weight (BW), weaning weight (WW), and yearling weight (YW) data of a Mexican Braunvieh cattle population were analyzed with the BLUP, GBLUP, and ssGBLUP methods, which differ in the additive genetic relationship matrix included in the model and in the animals under evaluation. The predictive ability of each model was evaluated using random partitions of the data into training and testing sets, consistently predicting about 20% of the genotyped animals on each occasion. For each partition, the Pearson correlation coefficient between phenotypes adjusted for fixed effects and non-genetic random effects and the estimated breeding values (EBV) was computed. Results: The random contemporary group (CG) effect explained about 50%, 45%, and 35% of the phenotypic variance in BW, WW, and YW, respectively. For the three methods, the CG effect explained the highest proportion of the phenotypic variance (except for YW-GBLUP). The heritability estimate obtained with GBLUP was the lowest for BW, while the highest was obtained with BLUP. For WW, the highest heritability estimate was obtained with BLUP, and the estimates obtained with GBLUP and ssGBLUP were similar. For YW, the heritability estimates obtained with GBLUP and BLUP were similar, and the lowest was obtained with ssGBLUP. Pearson correlation coefficients between the adjusted phenotypes and the EBVs were highest for BLUP, followed by ssGBLUP and GBLUP. Conclusion: The successful implementation of genetic evaluations that include genotyped and non-genotyped animals in our study indicates a promising method for genetic improvement programs of Braunvieh cattle. Our findings showed that the simultaneous evaluation of genotyped and non-genotyped animals improved prediction accuracy for growth traits even with a limited number of genotyped animals.
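
A minimal sketch of the validation scheme described above: repeated random partitions that hold out roughly 20% of the genotyped animals, followed by the Pearson correlation between adjusted phenotypes and predicted EBVs on the held-out set. The genetic evaluation itself (BLUP/GBLUP/ssGBLUP) is replaced by a placeholder predictor, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_genotyped = 300
adj_pheno = rng.normal(size=n_genotyped)     # phenotypes adjusted for fixed and non-genetic random effects

def predict_ebv(train_idx, test_idx):
    """Placeholder for a BLUP/GBLUP/ssGBLUP evaluation; returns toy EBVs for the test animals."""
    return adj_pheno[test_idx] * 0.6 + rng.normal(0, 0.5, size=len(test_idx))

correlations = []
for _ in range(10):                                          # 10 random partitions
    idx = rng.permutation(n_genotyped)
    test_idx, train_idx = idx[:n_genotyped // 5], idx[n_genotyped // 5:]   # ~20% held out
    ebv = predict_ebv(train_idx, test_idx)
    correlations.append(np.corrcoef(adj_pheno[test_idx], ebv)[0, 1])
print("mean predictive correlation:", np.mean(correlations))
```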

Partitioning likelihood method in the analysis of non-monotone missing data

  • Kim Jae-Kwang
    • Proceedings of the Korean Statistical Society Conference
    • /
    • 2004.11a
    • /
    • pp.1-8
    • /
    • 2004
  • We address the problem of parameter estimation in multivariate distributions under ignorable non-monotone missing data. The factoring likelihood method for monotone missing data, termed by Rubin (1974), is extended to the more general case of non-monotone missing data. The proposed method is algebraically equivalent to the Newton-Raphson method for the observed likelihood, but avoids the burden of computing the first and second partial derivatives of the observed likelihood. Instead, the maximum likelihood estimates and their information matrices for each partition of the data set are computed separately and combined naturally using the generalized least squares method. A numerical example is also presented to illustrate the method.
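
A hedged illustration of the combination step: if $\hat{\theta}_k$ denotes the maximum likelihood estimate computed from the $k$-th partition of the data and $I_k$ its information matrix, a generalized least squares combination with the information matrices as weights takes the form

$$\hat{\theta}_{GLS} = \Big(\sum_k I_k\Big)^{-1} \sum_k I_k \hat{\theta}_k, \qquad \widehat{\mathrm{Var}}\big(\hat{\theta}_{GLS}\big) = \Big(\sum_k I_k\Big)^{-1}.$$

This is only an illustrative form; in the factoring construction the partitions estimate parameters of different conditional factors, so the exact design matrix and weighting follow the paper.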


The System of Non-Linear Detector over Wireless Communication (무선통신에서의 Non-Linear Detector System 설계)

  • 공형윤
    • Proceedings of the IEEK Conference
    • /
    • 1998.06a
    • /
    • pp.106-109
    • /
    • 1998
  • Wireless communication systems, in particular, must operate in a crowded electromagnetic environment in which in-band undesired signals are treated as noise by the receiver. These interfering signals are often random but not Gaussian. Because of the non-Gaussian noise, the distribution of the observables cannot be specified by a finite set of parameters; instead, the r-dimensional sample space of pure noise samples is equiprobably partitioned into a finite number of disjoint regions using quantiles and a vector quantizer based on training samples. If we assume that the detected symbols are correct, then we can observe the pure noise samples during the training and transmitting modes. The proposed algorithm is based on a piecewise approximation to a regression function built from quantiles and conditional partition moments, which are estimated by a Robbins-Monro stochastic approximation (RMSA) algorithm. In this paper, we develop a diversity combiner with a modified detector, called the Non-Linear Detector; the receiver has a differential phase detector in each diversity branch, and at the combiner each detector output is proportional to the second power of the branch envelope. Monte Carlo simulations were used to evaluate the system performance.
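
A minimal sketch of a Robbins-Monro stochastic approximation recursion, the type of update the abstract cites (RMSA) for estimating conditional partition moments. It is shown here on a toy problem, estimating an unknown mean from noisy observations with a decreasing step size; the target and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.0                                    # running estimate
for n in range(1, 5001):
    x = rng.normal(loc=2.5, scale=1.0)         # noisy observation of the unknown mean
    a_n = 1.0 / n                              # Robbins-Monro step sizes: sum a_n = inf, sum a_n^2 < inf
    theta = theta + a_n * (x - theta)          # stochastic approximation update
print("estimated mean:", round(theta, 3))      # should be close to 2.5
```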


Improving the Performances of the Neural Network for Optimization by Optimal Estimation of Initial States (초기값의 최적 설정에 의한 최적화용 신경회로망의 성능개선)

  • 조동현;최흥문
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.30B no.8
    • /
    • pp.54-63
    • /
    • 1993
  • This paper proposes a method for improving the performance of neural networks for optimization by optimally estimating the initial states. The optimal initial state that leads to the global minimum is estimated using stochastic approximation, and then the update rule of the Hopfield model, a high-speed deterministic algorithm based on the steepest descent rule, is applied to speed up the optimization. The proposed method was applied to travelling salesman problems and optimal task partition problems to evaluate its performance. The simulation results show that the convergence speed of the proposed method is higher than that of the conventional Hopfield model, Abe's method, and the Boltzmann machine with random initial neuron output settings, and that convergence to the global minimum is achieved with probability 1. The proposed method gives better results as the problem size increases, where it is more difficult for a randomized initial setting to yield good convergence.
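
A minimal sketch of the deterministic Hopfield-style relaxation the abstract refers to, descending a quadratic energy E = -1/2 v^T W v - b^T v with a sigmoid activation. The stochastic-approximation estimation of the optimal initial state proposed in the paper is not reproduced; the initial state below is simply a constant, and the weights are random placeholders.

```python
import numpy as np

def hopfield_descent(W, b, v0, lr=0.05, n_iter=200):
    """Continuous Hopfield-type relaxation that lowers the quadratic energy."""
    u = v0.copy()
    for _ in range(n_iter):
        v = 1.0 / (1.0 + np.exp(-u))       # neuron outputs in (0, 1)
        u += lr * (W @ v + b)              # move along the negative energy gradient direction
    return 1.0 / (1.0 + np.exp(-u))

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)   # symmetric weights
b = rng.normal(size=8)
v = hopfield_descent(W, b, v0=np.zeros(8))
energy = -0.5 * v @ W @ v - b @ v
print("final energy:", round(float(energy), 3))
```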


A Fast Partition Structure Decision Method in a Coding Tree Block of HEVC (HEVC 코딩 트리 블록 분할 구조 고속 결정 방법)

  • Jung, Soon-heung;Kim, Hui Yong
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2016.11a
    • /
    • pp.53-56
    • /
    • 2016
  • This paper proposes a method for rapidly deciding the partition structure of a coding tree block (CTB) in HEVC encoding. A coding tree block is composed of coding blocks of various sizes, which improves coding efficiency, but the process of deciding which coding blocks to use requires a large amount of computation and therefore increases encoding time. The proposed method decides the partition structure of a coding tree block quickly by exploiting the correlation between the residual signal reconstructed during encoding and the CTB partition structure. Experimental results show that, compared with HM16.0, the proposed method reduces encoding time by 50.98% in the random-access configuration and by 43.77% in the low-delay configuration, while the BD-rate (YUV) increases are only 2.42% and 2.35%, respectively, so the impact on coding efficiency is small.
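
A minimal, hypothetical sketch of the kind of decision rule the abstract describes: using the energy of a block's reconstructed residual to decide whether the quad-tree split of a coding tree block needs to be evaluated further. The threshold, the recursion, and the stopping rule are illustrative, not the paper's actual criterion.

```python
import numpy as np

def decide_partition(residual, min_size=8, threshold=40.0):
    """Return a nested structure of block sizes chosen from residual energy."""
    size = residual.shape[0]
    energy = float(np.mean(residual.astype(np.float64) ** 2))
    if size <= min_size or energy < threshold:
        return size                                   # keep the block whole (skip deeper splits)
    half = size // 2
    return [decide_partition(residual[i:i + half, j:j + half], min_size, threshold)
            for i in (0, half) for j in (0, half)]    # evaluate the four sub-blocks

rng = np.random.default_rng(0)
residual = rng.normal(0, 7, size=(64, 64))            # stand-in for a 64x64 CTB residual
print(decide_partition(residual))
```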
