• Title/Summary/Keyword: Resampling

Effects of Spatial Resolution on PSO Target Detection Results of Airplane and Ship (항공기와 선박의 PSO 표적탐지 결과에 공간해상도가 미치는 영향)

  • Yeom, Jun Ho; Kim, Byeong Hee; Kim, Yong Il
    • Journal of Korean Society for Geospatial Information Science, v.22 no.1, pp.23-29, 2014
  • The emergence of high-resolution satellite images and advances in spatial resolution have facilitated a variety of studies based on such imagery. In particular, target detection algorithms are effective for traffic flow monitoring and for military surveillance and reconnaissance, because vehicles, airplanes, and ships over broad areas can be detected easily in high-resolution satellite images. Many satellites have recently been launched around the world, and the diversity of satellite imagery has increased accordingly. Nevertheless, comparative studies on spatial resolution, especially for target detection, remain scarce both domestically and abroad. In this study, therefore, the effects of spatial resolution on target detection are analyzed using the PSO target detection algorithm. Resampling techniques, namely nearest neighbor, bilinear, and cubic convolution, are adopted to resize the original image to 0.5 m, 1 m, 2 m, and 4 m spatial resolutions, and target detection accuracy is then assessed according to both spatial resolution and resampling method. The results show that 0.5 m resolution combined with nearest-neighbor resampling gives the best accuracy, and that resolutions of at least 2 m and 4 m are needed to detect airplanes and ships, respectively. Airplane detection requires higher spatial resolution than ship detection because of the greater complexity of airplane shapes. This research suggests appropriate spatial resolutions for airplane and ship target detection and contributes to criteria for satellite sensor design.
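For illustration, the resampling step described in this abstract can be reproduced with a short script. The sketch below (not the authors' code) uses scipy.ndimage.zoom to degrade a 0.5 m image to coarser resolutions with three interpolation orders; note that scipy's order-3 spline stands in for the cubic-convolution kernel named in the paper.

```python
# Hypothetical sketch: degrade a high-resolution image to coarser ground
# sample distances (GSD) with three interpolation kernels.
# order 0 = nearest neighbor, 1 = bilinear, 3 = cubic (a spline here,
# standing in for the paper's cubic convolution).
import numpy as np
from scipy import ndimage

def resample(image: np.ndarray, src_gsd: float, dst_gsd: float, order: int) -> np.ndarray:
    """Resize an image from src_gsd (m/pixel) to dst_gsd (m/pixel)."""
    factor = src_gsd / dst_gsd            # e.g. 0.5 m -> 2.0 m gives 0.25
    return ndimage.zoom(image, zoom=factor, order=order)

image = np.random.rand(1024, 1024)        # placeholder for a 0.5 m image
for name, order in [("nearest", 0), ("bilinear", 1), ("cubic", 3)]:
    for gsd in (1.0, 2.0, 4.0):
        out = resample(image, src_gsd=0.5, dst_gsd=gsd, order=order)
        print(name, gsd, out.shape)
```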

Real-Time 3D Volume Deformation and Visualization by Integrating NeRF, PBD, and Parallel Resampling (NeRF, PBD 및 병렬 리샘플링을 결합한 실시간 3D 볼륨 변형체 시각화)

  • Sangmin Kwon; Sojin Jeon; Juni Park; Dasol Kim; Heewon Kye
    • Journal of the Korea Computer Graphics Society, v.30 no.3, pp.189-198, 2024
  • Research combining deep learning-based models and physical simulation is making important advances in the medical field. Such work extracts the necessary information from medical image data and enables fast, accurate prediction of the deformation of the skeleton and soft tissue based on physical laws. This study proposes a system that integrates Neural Radiance Fields (NeRF), Position-Based Dynamics (PBD), and parallel resampling to generate 3D volume data and to deform and visualize it in real time. NeRF uses 2D images and camera coordinates to produce high-resolution 3D volume data, while PBD enables real-time deformation and interaction through physics-based simulation. Parallel resampling improves rendering efficiency by dividing the volume into tetrahedral meshes and utilizing GPU parallel processing. The system renders the deformed volume data using ray casting, leveraging GPU parallel processing for fast real-time visualization. Experimental results show that the system can generate and deform 3D data without expensive equipment, demonstrating potential applications in engineering, education, and medicine.
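The PBD component that the system couples with NeRF volumes and parallel resampling can be illustrated with a generic, self-contained sketch. The distance-constraint step below is a textbook PBD iteration, not the authors' implementation; all names and parameters are illustrative.

```python
# Minimal sketch of one Position-Based Dynamics step on distance
# constraints (equal-mass particles); purely illustrative.
import numpy as np

def pbd_step(x, v, edges, rest_len, dt=1/60, iters=10, gravity=(0, -9.8, 0)):
    x_pred = x + dt * (v + dt * np.asarray(gravity))   # explicit prediction
    for _ in range(iters):                             # Gauss-Seidel projection
        for (i, j), L in zip(edges, rest_len):
            d = x_pred[j] - x_pred[i]
            dist = np.linalg.norm(d)
            if dist < 1e-9:
                continue
            corr = 0.5 * (dist - L) * d / dist         # split correction evenly
            x_pred[i] += corr
            x_pred[j] -= corr
    v_new = (x_pred - x) / dt                          # velocity update
    return x_pred, v_new

x = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])       # two linked particles
v = np.zeros_like(x)
x, v = pbd_step(x, v, edges=[(0, 1)], rest_len=[1.0])
print(x)
```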

Monte Carlo Bayesian Analysis and an Applied Example (몬테칼로 베이지안 분석과 응용 사례)

  • Kang, Seung-Ho; Park, Tae-Sung
    • Communications for Statistical Applications and Methods, v.3 no.1, pp.169-177, 1996
  • In this paper, the Sampling-Importance-Resampling (SIR) algorithm, one of the Monte Carlo Bayesian analysis methods, is used to estimate a famous basketball player's mean points and mean field-goal percentage in future games, based on his yearly mean points and field-goal percentages from past seasons. Specifically, prior density functions for the mean points and the mean field-goal percentage are constructed from the past data, posterior density functions are obtained via the SIR algorithm, and Bayesian inference is then carried out on that basis.
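A minimal sketch of the SIR algorithm used in the paper is given below. The normal prior and likelihood are illustrative placeholders, not the paper's actual model for mean points and field-goal percentage.

```python
# Sampling-Importance-Resampling: draw from a prior, weight by the
# likelihood, resample to approximate the posterior. Toy model only.
import numpy as np

rng = np.random.default_rng(0)

scores = np.array([22.0, 25.0, 19.0, 27.0, 24.0])   # observed points per game

def log_likelihood(theta):
    # N(theta, 4^2) likelihood for each prior draw, summed over games
    return (-0.5 * ((scores[None, :] - theta[:, None]) / 4.0) ** 2).sum(axis=1)

def sir(log_lik, prior_draws, n_out):
    logw = log_lik(prior_draws)                      # log importance weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(len(prior_draws), size=n_out, replace=True, p=w)
    return prior_draws[idx]                          # approximate posterior

theta = rng.normal(20.0, 5.0, size=100_000)          # draws from the prior
posterior = sir(log_likelihood, theta, n_out=10_000)
print(posterior.mean(), posterior.std())
```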

Confidence Interval for Capability Process Indices by the Resampling Method (재표집방법에 의한 공정관리지수의 신뢰구간)

  • Nam, Kyung-Hyun
    • Journal of Applied Reliability, v.1 no.1, pp.55-63, 2001
  • In this paper, we utilize the asymptotic variance of $C_{pk}$ to propose a two-sided confidence interval based on the percentile-t bootstrap method. This confidence interval is compared with those based on the standard and percentile bootstrap methods. Simulation results show that the percentile-t bootstrap method is preferable to the other methods for constructing the confidence interval.
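The percentile-t (bootstrap-t) construction can be sketched as follows. The paper plugs the asymptotic variance of $C_{pk}$ into the studentized statistic; since that formula is not reproduced here, an inner bootstrap stands in for the standard error.

```python
# Percentile-t bootstrap confidence interval for C_pk (illustrative).
import numpy as np

rng = np.random.default_rng(1)

def cpk(x, lsl, usl):
    m, s = x.mean(), x.std(ddof=1)
    return min(usl - m, m - lsl) / (3 * s)

def boot_se(x, lsl, usl, B=200):
    # Stand-in for the asymptotic standard error of C_pk.
    stats = [cpk(rng.choice(x, size=len(x)), lsl, usl) for _ in range(B)]
    return np.std(stats, ddof=1)

def percentile_t_ci(x, lsl, usl, B=500, alpha=0.05):
    est, se = cpk(x, lsl, usl), boot_se(x, lsl, usl)
    t_stats = []
    for _ in range(B):
        xb = rng.choice(x, size=len(x))
        t_stats.append((cpk(xb, lsl, usl) - est) / boot_se(xb, lsl, usl))
    lo, hi = np.quantile(t_stats, [alpha / 2, 1 - alpha / 2])
    return est - hi * se, est - lo * se     # note the quantile reversal

x = rng.normal(10.0, 1.0, size=50)          # illustrative process data
print(percentile_t_ci(x, lsl=7.0, usl=13.0))
```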

Visual Attention Model Based on Particle Filter

  • Liu, Long; Wei, Wei; Li, Xianli; Pan, Yafeng; Song, Houbing
    • KSII Transactions on Internet and Information Systems (TIIS), v.10 no.8, pp.3791-3805, 2016
  • The visual attention mechanism includes two attention models, bottom-up (B-U) and top-down (T-D), whose physiology has not yet been accurately described. In this paper, the visual attention mechanism is regarded as a Bayesian fusion process, and a visual attention model based on the particle filter is proposed. Under certain assumed conditions, a calculation formula for the Bayesian posterior probability is deduced. The visual attention fusion process based on the particle filter is realized through importance sampling, particle weight updating, and resampling, and visual attention is finally determined by the particle distribution state. Test results on multiple groups of images show that the model achieves better subjective and objective results than other models.
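The loop of importance sampling, weight updating, and resampling described in the abstract can be illustrated with a generic one-dimensional sketch (not the authors' visual attention model); systematic resampling is one common choice for the resampling step.

```python
# Generic particle filter step: propagate, reweight by the observation
# likelihood, and resample when the effective sample size drops.
import numpy as np

rng = np.random.default_rng(2)

def systematic_resample(weights):
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cumsum = np.cumsum(weights)
    cumsum[-1] = 1.0                     # guard against floating-point drift
    return np.searchsorted(cumsum, positions)

def particle_filter_step(particles, weights, observation, obs_std=0.5):
    particles = particles + rng.normal(0.0, 0.2, size=len(particles))  # propagate
    weights = weights * np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    weights /= weights.sum()                                           # update
    if 1.0 / np.sum(weights**2) < len(particles) / 2:                  # low ESS
        particles = particles[systematic_resample(weights)]            # resample
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

particles = rng.normal(0.0, 1.0, size=1000)
weights = np.full(1000, 1e-3)
for z in [0.1, 0.3, 0.2, 0.5]:
    particles, weights = particle_filter_step(particles, weights, z)
print(np.sum(particles * weights))       # posterior mean estimate
```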

Incorporation of Scene Geometry in Least Squares Correlation Matching for DEM Generation from Linear Pushbroom Images

  • Kim, Tae-Jung; Yoon, Tae-Hun; Lee, Heung-Kyu
    • Proceedings of the KSRS Conference, 1999.11a, pp.182-187, 1999
  • Stereo matching is one of the most crucial parts of DEM generation. Naive stereo matching algorithms often create many holes and blunders in a DEM, so a carefully designed strategy must be employed to guide stereo matching algorithms toward “good” 3D information. In this paper, we describe one such strategy designed around scene geometry, in particular the epipolarity, for generating a DEM from linear pushbroom images. The epipolarity of perspective images is a well-known property: in a stereo image pair, a point in the reference image maps to a line in the search image uniquely defined by the sensor models of the pair. This concept has been utilized in stereo matching by applying epipolar resampling prior to matching. However, the epipolar geometry of linear pushbroom images is more complicated. It was found that the epipolarity can only be described by a hyperbola-shaped curve and that epipolar resampling cannot be applied to linear pushbroom images. Instead, we developed an algorithm that incorporates this epipolarity directly in least squares correlation matching. Experiments showed that this approach could improve the quality of a DEM.
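The idea of constraining correlation matching to an epipolar curve can be sketched as follows. Normalized cross-correlation stands in for full least squares matching, and the candidate curve, which the paper derives from the pushbroom sensor model, is a placeholder list of positions.

```python
# Correlation matching restricted to candidate positions on a curve.
import numpy as np

def ncc(a, b):
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def match_along_curve(ref, search, ref_xy, curve_xy, win=7):
    h = win // 2
    r, c = ref_xy
    template = ref[r - h:r + h + 1, c - h:c + h + 1]
    scores = []
    for (rr, cc) in curve_xy:                  # candidates on the curve
        patch = search[rr - h:rr + h + 1, cc - h:cc + h + 1]
        scores.append(ncc(template, patch) if patch.shape == template.shape else -1.0)
    best = int(np.argmax(scores))
    return curve_xy[best], scores[best]

rng = np.random.default_rng(3)
ref = rng.random((100, 100))
search = np.roll(ref, shift=3, axis=1)         # known 3-pixel shift
curve = [(50, c) for c in range(40, 60)]       # stand-in epipolar samples
print(match_along_curve(ref, search, (50, 50), curve))   # finds (50, 53)
```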

Developing a Molecular Prognostic Predictor of a Cancer based on a Small Sample

  • Kim, Inyoung; Lee, Sunho; Rha, Sun Young; Kim, Byungsoo
    • Proceedings of the Korean Statistical Society Conference, 2004.11a, pp.195-198, 2004
  • One important problem in a cancer microarray study is to identify a set of genes from which a molecular prognostic indicator can be developed. A parallel problem is to validate the chosen set of genes. In this note we develop a K-fold cross-validation procedure by combining a 'pre-validation' technique and a bootstrap resampling procedure in the Cox regression. The pre-validation technique predicts the microarray predictor of a case without having seen the true class label of that case; it was suggested by Tibshirani and Efron (2002) to avoid possible over-fitting in a regression that employs a microarray-based predictor. The bootstrap resampling procedure for the Cox regression was proposed by Sauerbrei and Schumacher (1992) as a means of overcoming the instability of a stepwise selection procedure. We apply this K-fold cross-validation to microarray data from 92 gastric cancers, for which the experiment was conducted at the Cancer Metastasis Research Center, Yonsei University. We also share some of our experience with 'false positive' results due to information leak.
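The pre-validation idea can be sketched generically: each case's microarray score is produced by a model that never saw that case, so the downstream outcome model receives an honestly held-out covariate. In the sketch below a logistic model stands in for the paper's Cox regression, and all data are simulated.

```python
# Pre-validation via K-fold cross-validation (toy data, stand-in model).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(4)
X = rng.normal(size=(92, 500))                 # 92 cases x 500 genes (toy)
y = rng.integers(0, 2, size=92)                # toy outcome labels

prevalidated = np.empty(92)
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    prevalidated[test] = model.predict_proba(X[test])[:, 1]

# The pre-validated score now enters the outcome model as one covariate,
# avoiding the optimism of scoring cases the predictor was trained on.
outcome_model = LogisticRegression().fit(prevalidated.reshape(-1, 1), y)
print(outcome_model.coef_)
```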

RPC-based epipolar image resampling of Kompsat-2 across-track stereos (RPC를 기반으로 한 아리랑 2호 에피폴라 영상제작)

  • Oh, Jae-Hong; Lee, Hyo-Seong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.29 no.2, pp.157-164, 2011
  • As high-resolution satellite images have enabled large-scale topographic mapping and global monitoring with short revisit times, agile sensor orientation, and large swath widths, many countries are making efforts to secure satellite image information. In Korea, KOMPSAT-2 (KOrea Multi-Purpose SATellite-2) was launched on July 28, 2006, with high specifications. These satellites have stereo image acquisition capability for 3D mapping and monitoring. To handle stereo images efficiently, for example for stereo display and monitoring, an accurate epipolar image generation process is a prerequisite. However, this process has been highly limited by the complexity of the epipolar geometry of pushbroom sensors. Recently, a piecewise approach to generating epipolar images using RPCs was developed and tested on in-track IKONOS stereo images. In this paper, the piecewise approach is tested on KOMPSAT-2 across-track stereo images to see how accurately KOMPSAT-2 epipolar images can be generated for 3D geospatial applications. In the experiment, two across-track stereo sets from three KOMPSAT-2 images of different dates were tested using the RPC as the sensor model. The test results showed that a one-pixel level of y-parallax was achieved for manually measured tie points.
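For reference, the RPC sensor model named above expresses image coordinates as ratios of cubic polynomials in normalized ground coordinates. The sketch below follows the common RPC00B term ordering, with toy offsets, scales, and coefficients; only the row (line) equation is shown, and the sample equation would use a second coefficient pair.

```python
# Evaluate an RPC (rational polynomial coefficient) sensor model.
import numpy as np

def rpc_terms(P, L, H):
    # Cubic polynomial basis in the usual RPC00B term order
    # (P = normalized latitude, L = longitude, H = height).
    return np.array([
        1, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H,
        P*L*H, L**3, L*P*P, L*H*H, L*L*P, P**3, P*H*H, L*L*H, P*P*H, H**3,
    ])

def rpc_project(lat, lon, h, num, den, offs, scales):
    # Normalize ground coordinates, evaluate the ratio, de-normalize.
    P = (lat - offs["lat"]) / scales["lat"]
    L = (lon - offs["lon"]) / scales["lon"]
    H = (h - offs["h"]) / scales["h"]
    t = rpc_terms(P, L, H)
    r_n = (num @ t) / (den @ t)
    return r_n * scales["row"] + offs["row"]

offs = {"lat": 37.5, "lon": 127.0, "h": 100.0, "row": 5000.0}
scales = {"lat": 0.05, "lon": 0.05, "h": 500.0, "row": 5000.0}
num, den = np.zeros(20), np.zeros(20)
num[2], den[0] = 1.0, 1.0                 # toy RPC: row tracks latitude
print(rpc_project(37.52, 127.01, 150.0, num, den, offs, scales))
```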

ALGORITHM OF REVISED-OTFTOOL

  • Chung, Eun-Jung; Kim, Hyor-Young; Rhee, Myung-Hyun
    • Journal of Astronomy and Space Sciences, v.23 no.3, pp.269-288, 2006
  • We revised OTFTOOL, which was developed at the Five College Radio Astronomy Observatory (FCRAO) for On-The-Fly (OTF) observations. In addition to improving the data resampling function of the conventional OTFTOOL, we added a new SELF referencing mode and a data pre-reduction function. Since OTF observation data have large redundancy, we can select and use only good-quality samples, excluding bad ones. Bad samples are identified based on the floating level, rms level, antenna trajectory, elevation, $T_{sys}$, and number of samples; spikes are also removed. The referencing method can be chosen between a CLASSICAL mode, in which references are taken from the OFF observations, and an ELLIPSOIDAL mode, in which references are taken from the source-free inner region (named the SELF reference). The baseline is subtracted using source-free channel windows and a baseline order chosen by the user. After these procedures, the raw OTF data become a FITS data cube. The revised-OTFTOOL maximizes the advantages of OTF observation by removing bad samples at the earliest stage, and the new self-referencing ELLIPSOIDAL mode is very effective for data reduction. Moreover, since the data cube can be inspected directly without moving it to another data reduction program, it is easy to check whether the data resampling worked well. We expect that the revised-OTFTOOL can be applied to OTF-capable facilities such as SRAO, NRAO, and FCRAO.
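The bad-sample rejection stage can be illustrated with a minimal sketch. The thresholds, quantities, and simulated data below are illustrative only, not revised-OTFTOOL's actual criteria, which also include floating level, antenna trajectory, and elevation.

```python
# Keep only OTF dumps whose rms level and system temperature fall
# within user-set bounds (illustrative thresholds and data).
import numpy as np

def good_samples(spectra, tsys, rms_max=0.5, tsys_max=300.0):
    rms = spectra.std(axis=1)                      # per-dump rms level
    keep = (rms < rms_max) & (tsys < tsys_max)
    return spectra[keep], keep

rng = np.random.default_rng(5)
spectra = rng.normal(0.0, 0.3, size=(1000, 512))   # 1000 dumps x 512 channels
tsys = rng.normal(200.0, 40.0, size=1000)          # simulated T_sys per dump
clean, mask = good_samples(spectra, tsys)
print(mask.mean())                                  # fraction retained
```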

Jensen's Alpha Estimation Models in Capital Asset Pricing Model

  • Phuoc, Le Tan
    • The Journal of Asian Finance, Economics and Business, v.5 no.3, pp.19-29, 2018
  • This research examined alternatives for estimating Jensen's alpha (α) in the Capital Asset Pricing Model, discussed by Treynor (1961), Sharpe (1964), and Lintner (1965), using the robust maximum likelihood-type M-estimator (MM estimator) and a Bayes estimator with a conjugate prior. In the finance literature and in practice, alpha has often been estimated using the ordinary least squares (OLS) regression method and monthly return data. A sample of 50 securities was randomly selected from the list of the S&P 500 index, and their daily and monthly returns were collected over the last five years. This research showed that the robust MM estimator performed considerably better than the OLS and Bayes estimators in terms of efficiency, while the Bayes estimator did not perform better than the OLS estimator, contrary to expectation. Interestingly, we also found that daily return data give more accurate alpha estimates than monthly return data for all three estimators (MM, OLS, and Bayes). We also proposed an alternative market efficiency test with the hypothesis Ho: α = 0 and were able to show that the S&P 500 index is efficient, though not perfectly so. More importantly, the findings above were checked and validated using jackknife resampling.
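The jackknife validation mentioned at the end can be sketched for Jensen's alpha as follows: the CAPM regression is re-fit leaving one observation out at a time, yielding a bias-corrected estimate and a standard error. Returns here are simulated, purely for illustration.

```python
# Jackknife resampling applied to Jensen's alpha from a CAPM regression.
import numpy as np

rng = np.random.default_rng(6)

def jensens_alpha(r_asset, r_market):
    # OLS intercept of excess asset returns on excess market returns.
    beta, alpha = np.polyfit(r_market, r_asset, deg=1)
    return alpha

def jackknife(estimator, *data):
    n = len(data[0])
    leave_one_out = np.array([
        estimator(*(np.delete(d, i) for d in data)) for i in range(n)
    ])
    full = estimator(*data)
    bias = (n - 1) * (leave_one_out.mean() - full)
    se = np.sqrt((n - 1) / n * ((leave_one_out - leave_one_out.mean())**2).sum())
    return full - bias, se

r_m = rng.normal(0.0005, 0.01, size=250)           # daily market excess returns
r_a = 0.0002 + 1.1 * r_m + rng.normal(0, 0.008, 250)
print(jackknife(jensens_alpha, r_a, r_m))          # (alpha estimate, std error)
```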