• Title/Summary/Keyword: Random sample

A New Heuristic for the Generalized Assignment Problem

  • 주재훈
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.14 no.1
    • /
    • pp.31-31
    • /
    • 1989
  • The Generalized Assignment Problem (GAP) seeks a minimum-cost assignment of n tasks to m workstations such that each task is assigned to exactly one workstation, subject to the capacity of each workstation. In this paper, we present a new heuristic search algorithm for GAPs and test it on four benchmark sets of random problems generated from a uniform distribution, using a microcomputer.
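
The heuristic itself is only summarized above, so the following Python sketch illustrates nothing more than the problem structure: a hypothetical greedy assignment under capacity constraints, with made-up cost, resource, and capacity data (none of it from the paper).

```python
# Minimal greedy sketch of the Generalized Assignment Problem (GAP).
# This is NOT the heuristic proposed in the paper; it only illustrates
# the structure: assign each task to exactly one workstation while
# respecting workstation capacities and keeping total cost low.

def greedy_gap(cost, resource, capacity):
    """cost[i][j]: cost of task i on workstation j,
    resource[i][j]: capacity consumed, capacity[j]: workstation limit."""
    n, m = len(cost), len(capacity)
    remaining = list(capacity)
    assignment = [None] * n

    # Assign tasks in order of "regret" (gap between best and second-best
    # cost), a common rule of thumb for GAP-style greedy heuristics.
    def regret(i):
        costs = sorted(cost[i])
        return costs[1] - costs[0] if m > 1 else costs[0]

    for i in sorted(range(n), key=regret, reverse=True):
        feasible = [j for j in range(m) if resource[i][j] <= remaining[j]]
        if not feasible:
            return None  # the greedy sketch found no feasible assignment
        j = min(feasible, key=lambda j: cost[i][j])
        assignment[i] = j
        remaining[j] -= resource[i][j]
    return assignment

# Tiny hypothetical instance: 4 tasks, 2 workstations.
cost = [[3, 5], [2, 4], [6, 1], [4, 4]]
resource = [[2, 3], [1, 2], [3, 1], [2, 2]]
capacity = [5, 5]
print(greedy_gap(cost, resource, capacity))
```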

Estimation of Radial Spectrum for Orographic Storm (산지성호우의 환상스팩트럼 추정)

  • Lee, Jae Hyoung;Sonu, Jung Ho;Kim, Min Hwan;Shim, Myung Pil
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.10 no.4
    • /
    • pp.53-66
    • /
    • 1990
  • Rainfall is a phenomenon that shows high variability in both space and time, and hydrologists are usually interested in describing the spatial distribution of rainfall over a watershed. The theory of Kriging, a generalized covariance technique that allows a nonstationary mean in regions under orographic influence, was chosen to construct a random surface of total storm depth. For the constructed random surface, a double Fourier analysis of the total storm depths was performed and the principal harmonics of the storm were determined. The local component, or storm residuals, was obtained by subtracting the periodic component of the storm from the total storm depths. The residuals are assumed to be a sample function of a homogeneous random field, which can be characterized by an isotropic one-dimensional autocorrelation function or its corresponding spectral density function. Under this assumption, this study proposes a theoretical model for the spectral density function adapted to two watersheds.
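
As a rough illustration of the last step described above, the Python sketch below estimates a radially averaged ("radial") spectrum from a gridded residual field by binning its 2-D periodogram over radial wavenumber; the synthetic residual grid and the bin count are assumptions, not the paper's data or estimator.

```python
import numpy as np

# Hedged sketch (not the paper's estimator): estimate a radially averaged
# power spectrum of a gridded storm-depth residual field by binning its
# 2-D periodogram over radial wavenumber.  The residual grid is synthetic.

rng = np.random.default_rng(0)
residuals = rng.standard_normal((64, 64))      # hypothetical residual field

field = residuals - residuals.mean()           # zero-mean residuals
power = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2 / field.size

# Radial wavenumber of every cell of the shifted 2-D spectrum.
ny, nx = field.shape
ky = np.fft.fftshift(np.fft.fftfreq(ny))
kx = np.fft.fftshift(np.fft.fftfreq(nx))
kr = np.hypot(*np.meshgrid(ky, kx, indexing="ij"))

# Average the periodogram within radial bins -> isotropic 1-D spectrum.
bins = np.linspace(0.0, kr.max(), 30)
idx = np.digitize(kr.ravel(), bins)
sums = np.bincount(idx, weights=power.ravel(), minlength=bins.size + 1)
counts = np.bincount(idx, minlength=bins.size + 1)
radial_spectrum = sums[1:bins.size] / np.maximum(counts[1:bins.size], 1)
print(radial_spectrum[:5])
```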

A Clustering Approach for Feature Selection in Microarray Data Classification Using Random Forest

  • Aydadenta, Husna;Adiwijaya, Adiwijaya
    • Journal of Information Processing Systems
    • /
    • v.14 no.5
    • /
    • pp.1167-1175
    • /
    • 2018
  • Microarray data play an essential role in diagnosing and detecting cancer. Microarray analysis allows the expression levels of thousands of genes to be examined simultaneously in specific cell samples. However, microarray datasets contain very few samples and have high dimensionality; therefore, classifying microarray data requires a dimensionality reduction step. Dimensionality reduction removes redundancy from the data, so that the features used in classification are only those highly correlated with their class. There are two types of dimensionality reduction, namely feature selection and feature extraction. In this paper, we use the k-means algorithm as the clustering approach for feature selection. The proposed approach groups features with similar characteristics into one cluster, so that redundancy in the microarray data is removed. The clustering result is ranked using the Relief algorithm, and the best-scoring element of each cluster is selected and used as a feature in the classification process. The Random Forest algorithm is then applied. Based on the simulation, the accuracy of the proposed approach on the Colon, Lung Cancer, and Prostate Tumor datasets is 85.87%, 98.9%, and 89%, respectively, which is higher than the approach using Random Forest without clustering.
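
A minimal Python sketch of the described pipeline (k-means clustering of features, Relief-style scoring, best feature per cluster, Random Forest classification) follows; the synthetic data, the number of clusters, and the simplified Relief implementation are illustrative assumptions rather than the authors' exact settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def relief_scores(X, y, n_samples=100, random_state=0):
    """Very small binary-class Relief: reward features that separate the
    nearest 'miss' and resemble the nearest 'hit' of sampled instances."""
    rng = np.random.default_rng(random_state)
    n, p = X.shape
    w = np.zeros(p)
    for i in rng.integers(0, n, size=min(n_samples, n)):
        dists = np.abs(X - X[i]).sum(axis=1)
        dists[i] = np.inf
        same, diff = (y == y[i]), (y != y[i])
        hit = np.argmin(np.where(same, dists, np.inf))
        miss = np.argmin(np.where(diff, dists, np.inf))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w

# Synthetic stand-in for a microarray matrix: 60 samples x 500 genes.
rng = np.random.default_rng(1)
X = rng.standard_normal((60, 500))
y = rng.integers(0, 2, size=60)
X[y == 1, :10] += 1.5                      # a few informative "genes"

# 1) Cluster the *features* (transpose so each gene is an observation).
k = 20
clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X.T)

# 2) Relief score per feature; 3) keep the top-scoring feature per cluster.
scores = relief_scores(X, y)
selected = [np.flatnonzero(clusters == c)[np.argmax(scores[clusters == c])]
            for c in range(k)]

# 4) Random Forest on the selected features.
Xtr, Xte, ytr, yte = train_test_split(X[:, selected], y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
print("accuracy:", clf.score(Xte, yte))
```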

Efficient Prediction in the Semi-parametric Non-linear Mixed effect Model

  • So, Beong-Soo
    • Journal of the Korean Statistical Society
    • /
    • v.28 no.2
    • /
    • pp.225-234
    • /
    • 1999
  • We consider the following semi-parametric non-linear mixed effect regression model: $y_i = f(x_i;\beta) + \sigma\mu(x_i) + \sigma\varepsilon_i$, $i=1,\dots,n$, and $y^* = f(x^*;\beta) + \sigma\mu(x^*)$, where $y'=(y_1,\dots,y_n)$ is a vector of $n$ observations, $y^*$ is an unobserved new random variable of interest, $f(x;\beta)$ represents a fixed effect of known functional form containing the unknown parameter vector $\beta'=(\beta_1,\dots,\beta_p)$, $\mu(x)$ is a random function with mean zero and known covariance function $r(\cdot,\cdot)$, $\varepsilon'=(\varepsilon_1,\dots,\varepsilon_n)$ is a set of uncorrelated measurement errors with zero mean and unit variance, and $\sigma$ is an unknown dispersion (scale) parameter. On the basis of a finite-sample, small-dispersion asymptotic framework, we derive an absolute lower bound for the asymptotic mean squared error of prediction (AMSEP) of regular consistent non-linear predictors of the new random variable of interest $y^*$. We then construct an optimal predictor of $y^*$ which attains the lower bound irrespective of the distributions of the random effect $\mu(\cdot)$ and the measurement errors $\varepsilon$.
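
For orientation only, the sketch below illustrates a kriging-type plug-in predictor of the general form $\hat{y}^* = f(x^*;\hat{\beta}) + r(x^*,X)\{r(X,X)+I\}^{-1}\{y - f(X;\hat{\beta})\}$; whether this coincides with the paper's optimal predictor is not claimed, and the regression function, covariance, and data are assumed for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

# Hedged sketch, not the paper's derivation: a kriging-type plug-in predictor
#   y*_hat = f(x*; beta_hat) + r(x*, X) (r(X, X) + I)^{-1} (y - f(X; beta_hat)),
# i.e. the estimated fixed effect plus the conditional mean of the random
# function mu(.) given the residuals.  f, r and the data are assumptions.

def f(x, beta):                      # assumed nonlinear fixed effect
    return beta[0] * np.exp(-beta[1] * x)

def r(a, b, scale=0.5):              # assumed covariance of mu(.)
    return np.exp(-np.abs(a[:, None] - b[None, :]) / scale)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0, 30)
beta_true, sigma = np.array([2.0, 1.2]), 0.1
mu = rng.multivariate_normal(np.zeros(x.size), r(x, x) + 1e-9 * np.eye(x.size))
y = f(x, beta_true) + sigma * mu + sigma * rng.standard_normal(x.size)

# Estimate beta by nonlinear least squares (one simple regular estimator).
beta_hat = least_squares(lambda b: y - f(x, b), x0=np.array([1.0, 1.0])).x

# Predict at a new site x*; the identity accounts for the unit-variance errors.
x_new = np.array([1.3])
R = r(x, x) + np.eye(x.size)
y_pred = f(x_new, beta_hat) + r(x_new, x) @ np.linalg.solve(R, y - f(x, beta_hat))
print("beta_hat:", beta_hat, "prediction at x* = 1.3:", y_pred)
```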

Monte Carlo simulation for the response analysis of long-span suspended cables under wind loads

  • Di Paola, M.;Muscolino, G.;Sofi, A.
    • Wind and Structures
    • /
    • v.7 no.2
    • /
    • pp.107-130
    • /
    • 2004
  • This paper presents a time-domain approach for analyzing nonlinear random vibrations of long-span suspended cables under transversal wind. A consistent continuous model of the cable, fully accounting for the geometrical nonlinearities inherent in cable behavior, is adopted. The effects of spatial correlation are properly included by modeling the wind velocity fluctuation as a random function of time and of a single spatial variable ranging over the cable span, namely as a one-variate bi-dimensional (1V-2D) random field. Within the context of a Galerkin discretization of the equations governing cable motion, a very efficient Monte Carlo-based technique for second-order analysis of the response is proposed. This procedure starts by generating sample functions of the generalized aerodynamic loads using the spectral decomposition of the cross-power spectral density function of the wind turbulence field. Relying on the physical meaning of both the spectral properties of the wind velocity fluctuation and the mode shapes of the vibrating cable, computational efficiency is greatly enhanced by a truncation procedure in which only the first few significant loading and structural modal contributions are retained.
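
As a generic illustration of this simulation strategy (not the paper's formulation), the Python sketch below generates correlated wind-fluctuation sample functions at points along a span by eigen-decomposing an assumed cross-power spectral density at each frequency and superposing harmonics with random phases, keeping only the dominant spectral modes; the point spectrum, coherence model, span, and mean wind speed are all hypothetical.

```python
import numpy as np

# Hedged sketch of spectral-representation Monte Carlo simulation of a
# wind-turbulence field along a span.  All numbers are hypothetical.

rng = np.random.default_rng(0)

x = np.linspace(0.0, 800.0, 9)            # points along the span [m] (assumed)
U, coh_decay = 30.0, 7.0                  # mean wind speed, coherence decay (assumed)
omega = np.linspace(0.05, 3.0, 120)       # angular frequencies [rad/s]
d_omega = omega[1] - omega[0]
t = np.linspace(0.0, 600.0, 2048)         # 10-minute record

def psd(w):                               # assumed one-sided point PSD of turbulence
    return 200.0 / (1.0 + 50.0 * w) ** (5.0 / 3.0)

samples = np.zeros((x.size, t.size))
for w in omega:
    # Cross-PSD at this frequency: point PSD times exponential coherence.
    coh = np.exp(-coh_decay * np.abs(x[:, None] - x[None, :]) * w / (2 * np.pi * U))
    S = psd(w) * coh
    # Spectral (eigen) decomposition of the cross-PSD matrix.
    lam, phi = np.linalg.eigh(S)
    keep = lam > 1e-8 * lam.max()          # truncation: keep dominant spectral modes
    theta = rng.uniform(0.0, 2 * np.pi, size=keep.sum())
    amp = phi[:, keep] * np.sqrt(2.0 * lam[keep] * d_omega)
    samples += amp @ np.cos(w * t[None, :] + theta[:, None])

print(samples.shape, samples.std())       # one Monte Carlo realization per point
```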

RADIO VARIABILITY AND RANDOM WALK NOISE PROPERTIES OF FOUR BLAZARS

  • PARK, JONG-HO;TRIPPE, SASCHA
    • Publications of The Korean Astronomical Society
    • /
    • v.30 no.2
    • /
    • pp.433-437
    • /
    • 2015
  • We show the results of a time series analysis of the long-term light curves of four blazars: 3C 279, 3C 345, 3C 446, and BL Lacertae. We used densely sampled light curves spanning 32 years in three frequency bands (4.8, 8, and 14.5 GHz), provided by the University of Michigan Radio Astronomy Observatory monitoring program. The spectral indices of our sources are mostly flat or inverted (-0.5 < ${\alpha}$ < 0), which is consistent with optically thick emission. Strong variability was seen in all light curves on various time scales. From analyses of the time lags between the light curves at different frequency bands and the evolution of the spectral indices with time, we find that we can distinguish high-peaking and low-peaking flares according to the Valtaoja et al. classification. The periodograms (temporal power spectra) of the light curves are in good agreement with random-walk power-law noise without any indication of (quasi-)periodic variability. We note that random-walk noise light curves can originate from multiple shocks in jets. The fact that all our sources are consistent with being random-walk noise emitters at radio wavelengths suggests that such behavior is a general property of blazars. We plan to generalize our approach by applying this methodology to a much larger blazar sample in the near future.
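
The basic diagnostic can be sketched with synthetic, evenly sampled data (the UMRAO light curves themselves are not used here): a random-walk light curve should produce a periodogram close to $P(f) \propto f^{-2}$ and no isolated periodic peaks.

```python
import numpy as np
from scipy.signal import periodogram

# Hedged sketch of the diagnostic: random-walk ("red") noise gives a
# power-law periodogram with index near 2.  Data below are synthetic.

rng = np.random.default_rng(0)
n, dt = 2048, 7.0                        # ~weekly sampling (assumed cadence)
flux = np.cumsum(rng.standard_normal(n)) # random-walk light curve

freq, power = periodogram(flux, fs=1.0 / dt, detrend="linear")
freq, power = freq[1:], power[1:]        # drop the zero-frequency bin

# Fit log10 P = -beta * log10 f + const over the sampled frequency range.
beta, _ = np.polyfit(np.log10(freq), np.log10(power), 1)
print(f"fitted power-law index: {-beta:.2f} (random walk expects ~2)")
```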

A Comparative Case Study on Sampling Methods for Cost-Effective Forest Inventory: Focused on Random, Systematic and Line Sampling (비용 효율적 표준지 조사를 위한 표본추출방법 비교 사례연구: 임의추출법, 계통추출법, 선상추출법을 중심으로)

  • Park, Joowon;Cho, Seungwan;Kim, Dong-geun;Jung, Geonhwi;Kim, Bomi;Woo, Heesung
    • Journal of Korean Society of Forest Science
    • /
    • v.109 no.3
    • /
    • pp.291-299
    • /
    • 2020
  • The purpose of this study was to propose the most cost-effective sampling method by analyzing the cost of forest resource investigation for each sampling method in a planned harvesting area in Chunyang-myeon, Bonghwa-gun, Gyeongsangbuk-do, Korea. Three sampling methods were compared: random sampling, systematic sampling, and line transect sampling. For each method, the sample size, hourly wage, number of sample points, survey time, travel time, sampling error of the estimated average volume, and desired sampling error rate were used to calculate the cost of the forest resource inventory. Ten sampling points were extracted for each sampling method, and the factors required for the cost analysis were obtained through a field survey. As a result, the field survey cost per ha using the random sampling method was found to be the lowest, regardless of the desired sampling error rate, followed by the systematic sampling method and the line transect method.
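
A generic version of such a cost comparison can be sketched as follows; the sample-size rule $n = (t \cdot CV / E)^2$ is a standard textbook formula, and the coefficient-of-variation, time, and wage figures are hypothetical, not the values measured in this study.

```python
import math

# Hedged, generic illustration (not the authors' exact cost model): the
# required number of plots for a desired sampling error E% is often taken
# as n = (t * CV / E)^2, and survey cost then scales with per-plot survey
# and travel time.  All numbers below are hypothetical.

def required_plots(cv_percent, target_error_percent, t_value=2.0):
    """Sample size for a target sampling error of the mean volume."""
    return math.ceil((t_value * cv_percent / target_error_percent) ** 2)

def survey_cost(n_plots, survey_h_per_plot, travel_h_per_plot, wage_per_h):
    return n_plots * (survey_h_per_plot + travel_h_per_plot) * wage_per_h

# Hypothetical figures for three sampling designs (CV and times differ
# mainly through plot layout and travel between plots).
designs = {
    "random":     dict(cv=35.0, survey_h=1.0, travel_h=0.8),
    "systematic": dict(cv=35.0, survey_h=1.0, travel_h=1.0),
    "line":       dict(cv=40.0, survey_h=1.0, travel_h=1.3),
}
for name, d in designs.items():
    n = required_plots(d["cv"], target_error_percent=10.0)
    cost = survey_cost(n, d["survey_h"], d["travel_h"], wage_per_h=12.0)
    print(f"{name:>10}: {n} plots, cost ~ {cost:,.0f}")
```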

A Method for Improving Object Recognition Using Pattern Recognition Filtering (패턴인식 필터링을 적용한 물체인식 성능 향상 기법)

  • Park, JinLyul;Lee, SeungGi
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.6
    • /
    • pp.122-129
    • /
    • 2016
  • There has been extensive research on object recognition in computer vision. The SURF (Speeded Up Robust Features) algorithm, based on feature detection, is faster and more accurate than other methods. However, it has the shortcoming of producing errors due to feature-point mismatches when extracting feature points. In order to increase the success rate of object recognition, we build an object recognition system based on the SURF and RANSAC (Random Sample Consensus) algorithms and propose a pattern recognition filtering step. We also present experimental results showing an improved object recognition success rate.
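
The generic SURF-plus-RANSAC matching stage that the paper builds on can be sketched as follows (the proposed pattern recognition filtering itself is not reproduced). SURF requires opencv-contrib, so the sketch falls back to ORB when it is unavailable, and the image paths are placeholders.

```python
import cv2
import numpy as np

# Hedged sketch of generic SURF + RANSAC matching.  SURF lives in
# opencv-contrib (cv2.xfeatures2d); ORB is used as a stand-in if it is
# unavailable.  Image paths are placeholders.

def detect(img):
    try:
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
        return surf.detectAndCompute(img, None), cv2.NORM_L2
    except (AttributeError, cv2.error):
        orb = cv2.ORB_create(nfeatures=2000)
        return orb.detectAndCompute(img, None), cv2.NORM_HAMMING

obj = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)     # placeholder image
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)    # placeholder image

(kp1, des1), norm = detect(obj)
(kp2, des2), _ = detect(scene)

# Brute-force matching, then RANSAC to reject feature-point mismatches
# while estimating the object-to-scene homography.
matches = cv2.BFMatcher(norm, crossCheck=True).match(des1, des2)
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)

inliers = int(mask.sum()) if mask is not None else 0
print(f"{len(matches)} matches, {inliers} RANSAC inliers")
```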

Histogram of Gradient based Efficient Image Quality Assessment (그래디언트 히스토그램 기반의 효율적인 영상 품질 평가)

  • No, Se-Yong;Ahn, Sang-Woo;Chong, Jong-Wha
    • Journal of IKEEE
    • /
    • v.16 no.3
    • /
    • pp.182-188
    • /
    • 2012
  • We propose an image quality assessment (IQA) method based on the histogram of oriented gradients (HOG). The method exploits the fact that the histogram of the gradient image describes the state of the input image. In the proposed method, image quality is derived from the slope of the HOG obtained from the target image, where the line representing the HOG is estimated by random sample consensus (RANSAC). Simulation results on the LIVE image quality assessment database suggest that the proposed method aligns better with how the human visual system perceives image quality than several state-of-the-art IQA methods.
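
A rough sketch of the idea, with an assumed bin count, log scaling, and a synthetic test image (not the authors' exact settings): build a histogram from the image gradients and fit a line to it with RANSAC, taking the slope as the quality score.

```python
import numpy as np
from scipy import ndimage
from sklearn.linear_model import RANSACRegressor

# Hedged sketch: gradient-magnitude histogram, RANSAC line fit, slope as
# a no-reference quality score.  Settings and data are assumptions.

def hog_slope(image, bins=32):
    gx = ndimage.sobel(image.astype(float), axis=1)
    gy = ndimage.sobel(image.astype(float), axis=0)
    magnitude = np.hypot(gx, gy)

    counts, edges = np.histogram(magnitude, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])

    # RANSAC line fit on (bin center, log count); the slope characterizes
    # how quickly strong gradients die out.
    model = RANSACRegressor(random_state=0).fit(centers.reshape(-1, 1),
                                                np.log1p(counts))
    return model.estimator_.coef_[0]

rng = np.random.default_rng(0)
sharp = rng.integers(0, 256, size=(128, 128)).astype(float)
blurred = ndimage.gaussian_filter(sharp, sigma=2.0)
print("sharp slope:", hog_slope(sharp), "blurred slope:", hog_slope(blurred))
```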

Error Correction of Interested Points Tracking for Improving Registration Accuracy of Aerial Image Sequences (항공연속영상 등록 정확도 향상을 위한 특징점추적 오류검정)

  • Sukhee, Ochirbat;Yoo, Hwan-Hee
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.18 no.2
    • /
    • pp.93-97
    • /
    • 2010
  • This paper presents an improved KLT (Kanade-Lucas-Tomasi) method for registering image sequences captured by a camera mounted on an unmanned helicopter, assuming no camera attitude information is available. The proposed image registration consists of the following procedures. Initial interest points are detected by characteristic curve matching via dynamic programming, which is used to detect and track corner points throughout the image sequence. Outliers among the tracked points are then removed using Random Sample Consensus (RANSAC) robust estimation, and the remaining corner points are classified as inliers by a homography algorithm. The rectified images are then resampled by bilinear interpolation. Experiments show that the method achieves suitable registration of image sequences with large motion.
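
The standard KLT-plus-RANSAC registration chain that this method refines can be sketched as follows (the characteristic-curve matching step is not reproduced); the frame file names are placeholders.

```python
import cv2
import numpy as np

# Hedged sketch of KLT tracking + RANSAC homography + bilinear resampling
# for registering two consecutive aerial frames.  Paths are placeholders.

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Corner detection and KLT (pyramidal Lucas-Kanade) tracking.
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=500, qualityLevel=0.01, minDistance=7)
p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None,
                                         winSize=(21, 21), maxLevel=3)
good_prev = p0[status.ravel() == 1]
good_curr = p1[status.ravel() == 1]

# RANSAC rejects mistracked points while estimating the frame-to-frame
# homography; the remaining correspondences are the inliers.
H, inlier_mask = cv2.findHomography(good_curr, good_prev, cv2.RANSAC, 3.0)

# Resample (register) the current frame into the previous frame's geometry.
registered = cv2.warpPerspective(curr, H, (prev.shape[1], prev.shape[0]),
                                 flags=cv2.INTER_LINEAR)
print("tracked:", len(good_prev), "inliers:", int(inlier_mask.sum()))
```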