• Title/Summary/Keyword: resampling method


Resampling Method to Improve Performance of Point Cloud Registration (포인트 클라우드 정합 성능 향상을 위한 리샘플링 방법)

  • Kim, Jongwook;Park, Jong-Il
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.11a / pp.187-189 / 2020
  • In this paper, we propose a point cloud resampling method that minimizes the influence of vertices with low geometric complexity in order to improve point cloud registration performance. Point cloud registration based on 3D feature descriptors uses the variation of vertex normal vectors as its feature. Robust features are therefore mostly extracted from regions where the variation of vertex normal vectors is large. In contrast, planar regions with little variation in vertex normal vectors can act as outliers during registration, so the influence of those vertices on the registration process must be minimized. The proposed method resamples the model point cloud according to its geometric complexity, lowering the proportion of low-complexity vertices relative to the total number of vertices, thereby minimizing the influence of outliers on the registration process and improving registration performance.

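A minimal NumPy sketch of the resampling idea in the entry above, assuming unit normals are already available and using scikit-learn's NearestNeighbors. The complexity score (mean angular deviation of neighboring normals), the threshold, and the keep ratio are illustrative choices, not the authors' exact formulation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def resample_by_complexity(points, normals, k=16, keep_flat=0.2, thresh=0.1):
    """Keep all geometrically complex points but only a fraction of flat ones."""
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(points)
    _, idx = nbrs.kneighbors(points)              # idx[:, 0] is the point itself
    neigh = normals[idx[:, 1:]]                   # (N, k, 3) neighbor normals
    # Complexity = mean angle between each normal and its neighbors' normals.
    cosines = np.clip(np.einsum('nj,nkj->nk', normals, neigh), -1.0, 1.0)
    complexity = np.arccos(cosines).mean(axis=1)  # radians; thresh is illustrative
    complex_idx = np.where(complexity > thresh)[0]
    flat_idx = np.where(complexity <= thresh)[0]
    rng = np.random.default_rng(0)
    kept_flat = rng.choice(flat_idx, size=int(keep_flat * len(flat_idx)), replace=False)
    keep = np.concatenate([complex_idx, kept_flat])
    return points[keep], normals[keep]
```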

A Method for Real-Time Face Detection through Optical Flow and Scale Resampling (광학 흐름과 스케일 리샘플링을 통한 실시간 얼굴 탐지 기법)

  • Sang-Jeong Kim;Dong-Gun Lee;Yeong-Seok Seo
    • Proceedings of the Korea Information Processing Society Conference / 2024.05a / pp.862-863 / 2024
  • Face detection systems built on existing deep learning models process an excessive number of images when handling video, so inference runs slower than video playback and latency results. This paper proposes a technique that reduces the amount of inference required for face detection by resizing images and exploiting optical flow. The proposed technique consists of three processing stages. The first stage reduces the frame size, effectively improving frame processing speed. The second stage shortens processing time by batch-processing only the frames outside non-detection intervals through the deep learning model. The third stage uses an optical flow algorithm to track faces within non-detection intervals, shortening detection time while maintaining accuracy. The proposed face detection system based on image resizing and optical flow shortens processing time by a factor of several dozen or more, demonstrating excellent performance for face detection in video.
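
A rough OpenCV sketch of the three-stage pipeline described in the entry above: frames are downscaled, an expensive detector runs only periodically, and Lucas-Kanade optical flow carries face positions across non-detection intervals. The `detect_faces` callable and the fixed detection interval are hypothetical stand-ins for the paper's batching scheme.

```python
import cv2
import numpy as np

def detect_and_track(video_path, detect_faces, scale=0.5, detect_every=10):
    """Run the expensive detector only every `detect_every` frames on a
    downscaled frame; carry face centers with optical flow in between."""
    cap = cv2.VideoCapture(video_path)
    prev_gray, points, frame_no = None, None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        small = cv2.resize(frame, None, fx=scale, fy=scale)   # stage 1: downscale
        gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
        if frame_no % detect_every == 0 or points is None or len(points) == 0:
            boxes = detect_faces(small)           # stage 2: hypothetical detector
            points = np.float32([[x + w / 2, y + h / 2]
                                 for (x, y, w, h) in boxes]).reshape(-1, 1, 2)
        else:
            # Stage 3: Lucas-Kanade optical flow through the non-detection interval.
            points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
            points = points[status.flatten() == 1].reshape(-1, 1, 2)
        prev_gray, frame_no = gray, frame_no + 1
        yield frame_no, points / scale            # centers at full resolution
```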

A Procedure for Identifying Outliers in Multivariate Data (다변량 자료에서 다수 이상치 인식의 절차)

  • Yum, Joon-Keun;Park, Jong-Goo;Kim, Jong-Woo
    • Journal of Korean Society for Quality Management / v.23 no.4 / pp.28-41 / 1995
  • We consider the problem of identifying multiple outliers in a linear model. The available regression diagnostic methods often fail to detect multiple outliers because of masking and swamping effects. Among the various robust estimators for reducing the effect of outliers, the LMS (Least Median of Squares) estimator has recently been proposed as a suitable method for exposing outliers and leverage points. The LMS method fits elemental subsets drawn from the sample and takes the median of the squared residuals. This causes trouble, however, because the number of candidate subsets is $\binom{n}{p}$, which grows rapidly as the sample size $n$ increases; moreover, the covariance matrix may be singular, i.e., approaching collinearity. We therefore propose ELMS, a procedure for the resampling in the LMS method, and study the size of the effective elementary set in this algorithm.

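Since evaluating all $\binom{n}{p}$ elemental sets is infeasible for large $n$, LMS is commonly approximated by random elemental-set resampling, which is the spirit of the ELMS procedure above. A minimal NumPy sketch under that reading; the subset count and the singularity guard are illustrative, not the paper's tuning.

```python
import numpy as np

def lms_fit(X, y, n_subsets=500, seed=0):
    """Approximate Least Median of Squares: instead of all C(n, p) elemental
    sets, fit p-point subsets drawn at random and keep the coefficients that
    minimize the median squared residual."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    best_beta, best_med = None, np.inf
    for _ in range(n_subsets):
        idx = rng.choice(n, size=p, replace=False)    # one elemental set
        try:
            beta = np.linalg.solve(X[idx], y[idx])    # exact fit through p points
        except np.linalg.LinAlgError:
            continue                                  # skip (near-)singular sets
        med = np.median((y - X @ beta) ** 2)
        if med < best_med:
            best_beta, best_med = beta, med
    return best_beta, best_med
```

Outliers are then flagged as points whose residuals under `best_beta` are large relative to a robust scale estimate derived from `best_med`.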

Analysis of Albedo by Level-2 Land Use Using VIIRS and MODIS Data (VIIRS와 MODIS 자료를 활용한 중분류 토지이용별 알베도 분석)

  • Lee, Yonggwan;Chung, Jeehun;Jang, Wonjin;Kim, Jinuk;Kim, Seongjoon
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1385-1394 / 2022
  • This study analyzed the change in albedo by level-2 land-cover class over 20 years (2002-2021) using MODerate resolution Imaging Spectroradiometer (MODIS) data, and the difference from MODIS was analyzed using 10 years (2012-2021) of Visible Infrared Imaging Radiometer Suite (VIIRS) data. For the MODIS and VIIRS albedo, the daily 500 m products MCD43A3 and VNP43IA, produced on a sinusoidal tile grid by the Bidirectional Reflectance Distribution Function (BRDF) model, were prepared for the extent of South Korea. Reprojection was performed using code written in Python 3.9, with nearest neighbor applied as the resampling method. Shortwave white-sky albedo and black-sky albedo were used for the analysis. The 20-year analysis of MODIS data shows that albedo tends to rise for all land-use classes. Compared to the 2000s (2002-2011), the average albedo of the 2010s (2012-2021) showed the largest increase of 0.0027 in forest areas, followed by grassland with an increase of 0.0024. A comparison of VIIRS and MODIS albedo found the VIIRS albedo to be larger by 0.001 to 0.1, which is attributed to differences in surface reflectivity according to image capture time and sensor characteristics.
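
The entry above reprojects the tiles with Python code and nearest-neighbor resampling. A minimal NumPy sketch of nearest-neighbor resampling onto a regular target grid; the regular-spacing assumption and the coordinate handling are simplifications of a real sinusoidal-grid reprojection.

```python
import numpy as np

def nearest_neighbor_resample(src, src_x, src_y, dst_x, dst_y):
    """Resample a 2-D raster onto a new grid by snapping each target
    coordinate to the index of the nearest source cell (no interpolation).
    Assumes regularly spaced source coordinates src_x, src_y."""
    ix = np.round((dst_x - src_x[0]) / (src_x[1] - src_x[0])).astype(int)
    iy = np.round((dst_y - src_y[0]) / (src_y[1] - src_y[0])).astype(int)
    ix = np.clip(ix, 0, len(src_x) - 1)           # clamp to the raster bounds
    iy = np.clip(iy, 0, len(src_y) - 1)
    return src[np.ix_(iy, ix)]                    # fancy-index the nearest cells
```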

Preoperative Prediction for Early Recurrence Can Be as Accurate as Postoperative Assessment in Single Hepatocellular Carcinoma Patients

  • Dong Ik Cha;Kyung Mi Jang;Seong Hyun Kim;Young Kon Kim;Honsoul Kim;Soo Hyun Ahn
    • Korean Journal of Radiology / v.21 no.4 / pp.402-412 / 2020
  • Objective: To evaluate the performance of predicting early recurrence using preoperative factors only, in comparison with using both pre-/postoperative factors. Materials and Methods: We retrospectively reviewed 549 patients who had undergone curative resection for a single hepatocellular carcinoma (HCC) within the Milan criteria. Multivariable analysis was performed to identify pre-/postoperative high-risk factors of early recurrence after hepatic resection for HCC. Two prediction models for early HCC recurrence, determined by stepwise variable selection based on the Akaike information criterion, were built: one on preoperative factors alone and one on both pre-/postoperative factors. The area under the curve (AUC) of each model's receiver operating characteristic curve was calculated, and the two curves were compared for non-inferiority testing. The predictive models of early HCC recurrence were internally validated by the bootstrap resampling method. Results: Multivariable analysis on preoperative factors alone identified aspartate aminotransferase/platelet ratio index (OR, 1.632; 95% CI, 1.056-2.522; p = 0.027), tumor size (OR, 1.025; 95% CI, 0.002-1.049; p = 0.031), arterial rim enhancement of the tumor (OR, 2.350; 95% CI, 1.297-4.260; p = 0.005), and presence of nonhypervascular hepatobiliary hypointense nodules (OR, 1.983; 95% CI, 1.049-3.750; p = 0.035) on gadoxetic acid-enhanced magnetic resonance imaging as significant factors. After adding postoperative histopathologic factors, presence of microvascular invasion (OR, 1.868; 95% CI, 1.155-3.022; p = 0.011) became an additional significant factor, while tumor size became insignificant (p = 0.119). Comparison of the AUCs of the two models showed that the prediction model built on preoperative factors alone was not inferior to the one including both pre-/postoperative factors {AUC for preoperative factors only, 0.673 (95% confidence interval [CI], 0.623-0.723) vs. AUC after adding postoperative factors, 0.691 (95% CI, 0.639-0.744); p = 0.0013}. Bootstrap resampling showed that both models were valid. Conclusion: Risk stratification based solely on preoperative imaging and laboratory factors was not inferior to that based on postoperative histopathologic risk factors in predicting early recurrence after curative resection in patients with a single HCC within the Milan criteria.
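
The entry above validates its models internally by bootstrap resampling. One standard form of that idea is optimism-corrected validation; the sketch below shows it for an AUC using scikit-learn's logistic regression and NumPy arrays, and is not the authors' exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def bootstrap_corrected_auc(X, y, n_boot=200, seed=0):
    """Optimism-corrected AUC: refit on bootstrap resamples and average the
    gap between bootstrap (apparent) and original-sample performance."""
    rng = np.random.default_rng(seed)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])
    optimism = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), size=len(y))    # resample with replacement
        if len(np.unique(y[idx])) < 2:
            continue                                  # need both classes to fit
        m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
        auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
        optimism.append(auc_boot - auc_orig)
    return apparent - np.mean(optimism)               # corrected estimate
```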

Land-cover Change Detection on the Korean Peninsula Using NOAA AVHRR Data (NOAA AVHRR 자료를 이용한 한반도 토지피복 변화 연구)

  • 김의홍;이석민
    • Spatial Information Research / v.4 no.1 / pp.13-20 / 1996
  • This study detected land-cover change on the Korean peninsula (including North Korean territory) between May 1990 and May 1995 using NOAA AVHRR data. To recognize land-cover change, the imagery had to be registered to a common reference and could not deviate much in seasonal timing. Atmospheric effects such as cloud and dust were removed by the maximum NDVI (Normalized Difference Vegetation Index) method, where $$NDVI(i,j,d)=\frac{ch2(i,j,d)-ch1(i,j,d)}{ch2(i,j,d)+ch1(i,j,d)}$$ Each maximum-NDVI image for 1990 and 1995 was classified into 8 categories using the ISO-clustering method: water, wet barren and urban, crop field, field, mixed vegetation, shrub, forest, and evergreen.

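A minimal NumPy sketch of the maximum-NDVI compositing step from the entry above, assuming a stack of co-registered daily channel-1 and channel-2 arrays indexed (d, i, j); the small epsilon guarding the denominator is an implementation convenience.

```python
import numpy as np

def max_ndvi_composite(ch1_stack, ch2_stack):
    """Per-pixel maximum-value NDVI composite over daily images shaped (d, i, j);
    clouds depress NDVI, so taking the per-pixel maximum suppresses them."""
    ndvi = (ch2_stack - ch1_stack) / (ch2_stack + ch1_stack + 1e-9)
    best_day = np.argmax(ndvi, axis=0)            # clearest acquisition per pixel
    i, j = np.indices(best_day.shape)
    return ndvi[best_day, i, j], best_day         # composite and the day it came from
```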

Combined Use of Data Imbalance Resolution Techniques Using a Genetic Algorithm (유전자 알고리즘을 활용한 데이터 불균형 해소 기법의 조합적 활용)

  • Jang, Yeong-Sik;Kim, Jong-U;Heo, Jun
    • Proceedings of the Korea Intelligent Information Systems Society Conference / 2007.05a / pp.309-320 / 2007
  • The data imbalance problem, often encountered in data mining classification tasks, means that one class has far more or fewer instances than the others. It causes low prediction accuracy for the minority class, because classifiers tend to assign instances to the majority classes and ignore the minority class in order to reduce the overall misclassification rate. To solve the data imbalance problem, a number of techniques have been proposed based on resampling with replacement, adjusting decision thresholds, and adjusting the costs of the different classes. In this paper, we study the feasibility of using these previously proposed techniques in combination, and we suggest a method that uses a genetic algorithm to find the optimal combination ratio of the techniques. To improve the prediction accuracy of the minority class, we determine the combination ratio with the F-value of the minority class as the fitness function of the genetic algorithm. To compare performance against single techniques and a matrix-style combination of random percentages, we performed experiments on four public datasets commonly used to benchmark methods for the data imbalance problem. The experimental results confirm the usefulness of the proposed method.

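A toy sketch of the idea in the entry above: a genetic algorithm searches for combination ratios of imbalance-handling techniques, with the minority-class F-value as fitness. The `techniques` interface, `clf_factory`, the arithmetic crossover, and the Gaussian mutation are all illustrative assumptions, not the authors' operators.

```python
import numpy as np
from sklearn.metrics import f1_score

def ga_combine(techniques, X, y, X_val, y_val, clf_factory,
               pop=20, gens=30, seed=0):
    """Toy GA over combination ratios of imbalance techniques; `techniques`
    is a list of functions (X, y, ratio) -> (X_res, y_res), and fitness is
    the minority-class F-value (minority assumed labeled 1) on a holdout."""
    rng = np.random.default_rng(seed)
    pop_w = rng.random((pop, len(techniques)))

    def fitness(w):
        w = w / w.sum()                           # normalize to a ratio vector
        parts = [t(X, y, r) for t, r in zip(techniques, w)]
        Xr = np.vstack([p[0] for p in parts])
        yr = np.concatenate([p[1] for p in parts])
        clf = clf_factory().fit(Xr, yr)
        return f1_score(y_val, clf.predict(X_val), pos_label=1)

    for _ in range(gens):
        scores = np.array([fitness(w) for w in pop_w])
        parents = pop_w[np.argsort(scores)[-pop // 2:]]   # keep the fittest half
        pa = parents[rng.integers(0, len(parents), pop - len(parents))]
        pb = parents[rng.integers(0, len(parents), pop - len(parents))]
        children = np.abs((pa + pb) / 2 + rng.normal(0, 0.1, pa.shape))
        pop_w = np.vstack([parents, children])    # arithmetic crossover + mutation
    best = pop_w[np.argmax([fitness(w) for w in pop_w])]
    return best / best.sum()
```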

Generation of Simulation Input Stream Using Threshold Bootstrap (임계값 부트스트랩을 사용한 시뮬레이션 입력 시나리오의 생성)

  • Kim Yun Bae;Kim Jae Bum
    • Korean Management Science Review / v.22 no.1 / pp.15-26 / 2005
  • The bootstrap is a method of computational inference that simulates the creation of new data by resampling from a single data set. We propose a new job for the bootstrap: generating simulation inputs from one historical trace using the threshold bootstrap. In this regard, the most important quality of bootstrap samples is that they be functionally indistinguishable from independent samples of the same stochastic process. We describe a quantitative measure of difference between two time series and demonstrate the sensitivity of this measure for discriminating between two data-generating processes. Utilizing this distance measure for the task of generating inputs, we show a way of tuning the bootstrap using a single observed trace. This application of the threshold bootstrap can be a powerful tool for Monte Carlo simulation, which otherwise relies on built-in input generators that make unrealistic assumptions about independence and marginal distributions. The alternative source of inputs, historical trace data, though realistic by definition, provides only a single input stream for simulation. One benefit of our method is that it expands the set of realistic inputs by driving system models with series resampled from actual historical input series. Another is the automatic generation of lifelike scenarios for the field of finance.
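
One common formulation of the threshold bootstrap segments the series into cycles at upcrossings of a threshold (here taken as the sample mean) and resamples whole cycles with replacement, preserving short-range dependence. A minimal NumPy sketch under that assumption; the paper's tuning via its distance measure is not reproduced.

```python
import numpy as np

def threshold_bootstrap(series, length=None, seed=0):
    """Rebuild an input stream by resampling whole threshold-crossing cycles
    with replacement, rather than individual (dependent) observations."""
    rng = np.random.default_rng(seed)
    x = np.asarray(series, dtype=float)
    above = x >= x.mean()                         # threshold = sample mean
    # Cycle boundaries: indices where the series crosses the threshold upward.
    cuts = np.where(~above[:-1] & above[1:])[0] + 1
    segments = np.split(x, cuts)
    n_target = length or len(x)
    out = []
    while sum(len(s) for s in out) < n_target:
        out.append(segments[rng.integers(len(segments))])  # draw a whole cycle
    return np.concatenate(out)[:n_target]
```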

Practice of causal inference with the propensity of being zero or one: assessing the effect of arbitrary cutoffs of propensity scores

  • Kang, Joseph;Chan, Wendy;Kim, Mi-Ok;Steiner, Peter M.
    • Communications for Statistical Applications and Methods / v.23 no.1 / pp.1-20 / 2016
  • Causal inference methodologies have been developed over the past decade to estimate the unconfounded effect of an exposure under several key assumptions. These assumptions include, but are not limited to, the stable unit treatment value assumption, the strong ignorability of treatment assignment assumption, and the assumption that propensity scores be bounded away from zero and one (the positivity assumption). Of these assumptions, the first two have received much attention in the literature, yet the positivity assumption has been discussed in only a few recent papers. Propensity scores of zero or one are indicative of deterministic exposure, so that causal effects cannot be defined for such subjects; these subjects therefore need to be removed, because no comparable comparison groups can be found for them. In this paper, using currently available causal inference methods, we evaluate the effect of arbitrary cutoffs in the distribution of propensity scores and the impact of those decisions on bias and efficiency. We propose a tree-based method that performs well in terms of bias reduction when the definition of positivity is based on a single confounder. This tree-based method can be easily implemented using the statistical software program R. R code for the studies is available online.
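
A minimal sketch of the trimming question studied in the entry above: estimate propensity scores, discard units outside an arbitrary cutoff, and compute an inverse-probability-weighted effect. The logistic-regression propensity model and the [0.05, 0.95] default are illustrative choices, assuming NumPy arrays.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def trimmed_ipw(X, treat, y, low=0.05, high=0.95):
    """Estimate propensity scores, drop units whose scores fall outside the
    arbitrary [low, high] cutoff (positivity trimming), and return the
    inverse-probability-weighted treatment effect."""
    ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
    keep = (ps > low) & (ps < high)               # the cutoff drives bias/efficiency
    t, yk, p = treat[keep], y[keep], ps[keep]
    ate = np.mean(t * yk / p) - np.mean((1 - t) * yk / (1 - p))
    return ate, keep.sum()                        # effect and retained sample size
```

Rerunning `trimmed_ipw` with different `low`/`high` values shows the trade-off the paper examines: tighter cutoffs reduce extreme weights but shrink (and redefine) the analysis sample.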

Text-independent Speaker Identification by Bagging VQ Classifier

  • Kyung, Youn-Jeong;Park, Bong-Dae;Lee, Hwang-Soo
    • The Journal of the Acoustical Society of Korea / v.20 no.2E / pp.17-24 / 2001
  • In this paper, we propose a bootstrap and aggregating (bagging) vector quantization (VQ) classifier to improve the performance of the text-independent speaker recognition system. This method generates multiple training data sets by resampling the original training data set, constructs the corresponding VQ classifiers, and then integrates the multiple VQ classifiers into a single classifier by voting. The bagging method has been proven to greatly improve the performance of unstable classifiers. Through two different experiments, this paper shows that the VQ classifier is unstable. In the first experiment, the bias and variance of a VQ classifier are computed with a waveform database; the variance of the VQ classifier is shown to be as large as that of the classification and regression tree (CART) classifier [1]. The second experiment involves speaker recognition, where the recognition rates vary significantly with minor changes in the training data set. Closed-set, text-independent speaker identification experiments are then performed on the TIMIT database to compare the performance of the bagging VQ classifier with that of the conventional VQ classifier. The bagging VQ classifier yields improved performance over the conventional VQ classifier, and it also outperforms the conventional classifier on small training data sets.

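A compact sketch of a bagging VQ classifier in the spirit of the entry above, using scikit-learn's KMeans as the codebook trainer: each bootstrap replicate trains one codebook per speaker, utterances are scored by average distortion, and replicates vote. Codebook size, bag count, and the feature format (one frames-by-dims array per utterance) are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

class BaggingVQ:
    """Bagged VQ speaker classifier: each bootstrap replicate trains one
    k-means codebook per speaker; a test utterance is classified by minimum
    average distortion, and the replicates vote."""
    def __init__(self, n_codewords=64, n_bags=10, seed=0):
        self.n_codewords, self.n_bags = n_codewords, n_bags
        self.rng = np.random.default_rng(seed)
        self.bags = []                            # list of {speaker: codebook}

    def fit(self, features_by_speaker):
        for _ in range(self.n_bags):
            books = {}
            for spk, feats in features_by_speaker.items():
                idx = self.rng.integers(0, len(feats), len(feats))  # bootstrap resample
                books[spk] = KMeans(self.n_codewords, n_init=3).fit(feats[idx]).cluster_centers_
            self.bags.append(books)
        return self

    def predict(self, utterance):                 # utterance: (frames, dims)
        votes = []
        for books in self.bags:
            distortions = {spk: np.mean(np.min(
                np.linalg.norm(utterance[:, None, :] - cb[None, :, :], axis=2), axis=1))
                for spk, cb in books.items()}
            votes.append(min(distortions, key=distortions.get))
        return max(set(votes), key=votes.count)   # majority vote over replicates
```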