• Title/Summary/Keyword: 리샘플링 (resampling)


Particle filter for Correction of GPS location data of a mobile robot (이동로봇의 GPS위치 정보 보정을 위한 파티클 필터 방법)

  • Noh, Sung-Woo;Kim, Tae-Gyun;Ko, Nak-Yong;Bae, Young-Chul
    • The Journal of the Korea institute of electronic communication sciences / v.7 no.2 / pp.381-389 / 2012
  • This paper proposes a method which corrects GPS location data for navigation of an outdoor mobile robot. The method uses a Bayesian filter approach called the particle filter (PF) and iterates two procedures: prediction and correction. The prediction procedure calculates the robot location based on the translational and rotational velocity data given by the robot command, incorporating uncertainty into the predicted location by adding noise to the translational and rotational velocity commands. Using the sensor characteristics of the GPS, the belief that a particle represents the true location of the robot is calculated, and resampling the particles according to this belief constitutes the correction procedure. Since usual GPS data includes abrupt and random noise, robot motion commands based directly on the GPS data suffer from sudden and unexpected changes, resulting in jerky robot motion. The PF reduces corruption in the GPS data and prevents unexpected location error. The proposed method was used for navigation of a mobile robot in the 2011 Robot Outdoor Navigation Competition, held at Gwangju on 16 August 2011, where it kept the robot location error below 0.5 m over the 300 m navigation course.
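The predict-correct-resample loop this abstract describes can be sketched in a minimal 1-D form. All parameter values, noise models, and function names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, v, dt, vel_noise=1.0):
    # Prediction: propagate each particle by the commanded velocity,
    # adding noise to the velocity to model command uncertainty.
    return particles + (v + rng.normal(0.0, vel_noise, len(particles))) * dt

def correct(particles, gps, gps_sigma=2.0):
    # Correction: weight particles by a Gaussian GPS likelihood,
    # then resample particles in proportion to those weights.
    w = np.exp(-0.5 * ((particles - gps) / gps_sigma) ** 2)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

particles = rng.normal(0.0, 1.0, 500)   # initial belief about position
for step in range(50):
    particles = predict(particles, v=1.0, dt=0.1)
    # Simulated noisy GPS fix around the true position 0.1 * (step + 1).
    noisy_gps = 0.1 * (step + 1) + rng.normal(0.0, 2.0)
    particles = correct(particles, noisy_gps)
estimate = particles.mean()
```

Because each GPS fix only reweights (rather than replaces) the belief, a single wild GPS outlier shifts the estimate gradually, which is the smoothing effect the paper relies on.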

Mobile Finger Signature Verification Robust to Skilled Forgery (모바일환경에서 위조서명에 강건한 딥러닝 기반의 핑거서명검증 연구)

  • Nam, Seng-soo;Seo, Chang-ho;Choi, Dae-seon
    • Journal of the Korea Institute of Information Security & Cryptology / v.26 no.5 / pp.1161-1170 / 2016
  • In this paper, we provide an authentication technology for verifying dynamic signatures made by finger on a smartphone. The proposed method uses an autoencoder-based one-class model to effectively distinguish skilled forgery signatures. In addition to basic dynamic signature characteristics such as the appearance and velocity of a signature, we use accelerometer values supported by most smartphones. Signature data is resampled to a uniform length and normalized to a constant size. We built a test set for evaluation and conducted experiments in three ways. As a result, the proposed combination of acceleration sensor values and the one-class model shows a 6.9% lower EER than the previous method.
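The "resampled to a uniform length and normalized to a constant size" preprocessing step can be sketched as follows; the point count, arc-length parameterization, and unit-square normalization are illustrative choices, not the paper's exact scheme:

```python
import numpy as np

def resample_signature(points, n=128):
    """Resample a variable-length 2-D stroke to n evenly spaced points
    and normalize it into the unit square."""
    points = np.asarray(points, dtype=float)
    # Arc-length parameterization so resampled points are evenly
    # spaced along the stroke, not along the time axis.
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    t /= t[-1]
    u = np.linspace(0.0, 1.0, n)
    resampled = np.column_stack(
        [np.interp(u, t, points[:, k]) for k in range(2)])
    # Normalize to a constant size: shift to the origin, then scale
    # by the larger extent so coordinates fall in [0, 1].
    resampled -= resampled.min(axis=0)
    resampled /= resampled.max()
    return resampled

sig = resample_signature([[0, 0], [3, 1], [7, 2], [10, 4]], n=64)
```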

Design of CNN-based Gastrointestinal Landmark Classifier for Tracking the Gastrointestinal Location (캡슐내시경의 위치추적을 위한 CNN 기반 위장관 랜드마크 분류기 설계)

  • Jang, Hyeon-Woong;Lim, Chang-Nam;Park, Ye-Seul;Lee, Kwang-Jae;Lee, Jung-Won
    • Proceedings of the Korea Information Processing Society Conference / 2019.10a / pp.1019-1022 / 2019
  • With the performance of deep learning techniques now well established, image processing research is actively applying them to classification, analysis, and detection across many domains. In particular, expectations are growing for medical image analysis software that can assist diagnosis, and this study focuses on capsule endoscopy images. Capsule endoscopy mainly targets the small intestine and records for about 8-10 hours from the esophagus to the large intestine, so unlike other medical imaging such as CT, MR, and X-ray, a single data set contains 100,000-150,000 images. Capsule endoscopy video is typically read by first dividing the gastrointestinal tract into landmarks (esophagus, stomach, small intestine, large intestine) at the gastrointestinal transition points (Z-line, pylorus, ileocecal valve), and then searching each landmark for lesion information. Because the image data is so large, physicians and medical experts spend considerable time and effort reading it. The goal of this paper is to find the gastrointestinal landmarks, a task that is common to all patients and takes up a large share of reading time. To this end, we designed a CNN model that identifies the gastrointestinal landmarks; for more effective training, we removed noisy images that hinder learning as a preprocessing step and analyzed the distinguishing features of each landmark. The model was trained and evaluated on data from eight patients: evaluation on randomly sampled patient data gave an average accuracy of 95%, while per-patient cross-validation gave an average accuracy of 67%.
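The gap between the 95% and 67% results comes down to how the data is split: a random frame-level split lets frames from the same patient leak into both train and test sets, while per-patient cross-validation does not. A minimal sketch of the two protocols (patient counts and helper names are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy frame table: 8 patients x 100 frames each, labeled by patient id.
patients = np.repeat(np.arange(8), 100)

def random_split(patients, test_frac=0.2):
    """Random frame-level split: frames from the same patient can land
    in both train and test, which tends to inflate accuracy."""
    idx = rng.permutation(len(patients))
    cut = int(len(idx) * (1 - test_frac))
    return idx[:cut], idx[cut:]

def leave_one_patient_out(patients, held_out):
    """Patient-level split: every frame of one patient is held out,
    the stricter protocol behind the cross-validation figure."""
    test = np.where(patients == held_out)[0]
    train = np.where(patients != held_out)[0]
    return train, test

train_r, test_r = random_split(patients)
train_p, test_p = leave_one_patient_out(patients, held_out=3)
```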

Realistic and Fast Depth-of-Field Rendering in Direct Volume Rendering (직접 볼륨 렌더링에서 사실적인 고속 피사계 심도 렌더링)

  • Kang, Jiseon;Lee, Jeongjin;Shin, Yeong-Gil;Kim, Bohyoung
    • The Journal of Korean Institute of Next Generation Computing / v.15 no.5 / pp.75-83 / 2019
  • Direct volume rendering is a widely used method for visualizing three-dimensional volume data such as medical images. This paper proposes a method for applying depth-of-field effects to volume ray-casting to enable more realistic depth-of-field rendering in direct volume rendering. The proposed method exploits a camera model based on the human perceptual model and can obtain realistic images with a limited number of rays using jittered lens sampling. It also enables interactive exploration of volume data by calculating depth-of-field on the fly in the GPU pipeline without preprocessing. In experiments with various data including medical images, we demonstrate that depth-of-field images with better depth perception are generated 2.6 to 4 times faster than with the conventional method.
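Jittered lens sampling means stratifying the aperture into a grid and taking one randomized sample per cell, so a small ray budget still covers the lens evenly. A generic sketch (grid size, aperture radius, and the concentric sqrt-mapping are assumptions, not the paper's camera model):

```python
import numpy as np

rng = np.random.default_rng(0)

def jittered_lens_samples(n_side, radius):
    """One jittered sample per cell of an n_side x n_side grid,
    mapped onto a thin-lens disk of the given radius."""
    i, j = np.meshgrid(np.arange(n_side), np.arange(n_side), indexing="ij")
    u = (i + rng.random((n_side, n_side))) / n_side  # jitter within cell
    v = (j + rng.random((n_side, n_side))) / n_side
    r = radius * np.sqrt(u)          # sqrt for uniform density over the disk area
    theta = 2.0 * np.pi * v
    return np.column_stack([(r * np.cos(theta)).ravel(),
                            (r * np.sin(theta)).ravel()])

pts = jittered_lens_samples(4, radius=0.5)   # 16 lens samples per pixel
```

Each ray is then cast from its lens point through the focal plane and the results averaged, which blurs objects away from the focal distance.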

Generation of optical fringe patterns using deep learning (딥러닝을 이용한 광학적 프린지 패턴의 생성)

  • Kang, Ji-Won;Kim, Dong-Wook;Seo, Young-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.12 / pp.1588-1594 / 2020
  • In this paper, we discuss a data balancing method for training a neural network that generates digital holograms using a deep neural network (DNN). The network is based on deep learning (DL) technology and belongs to the generative adversarial network (GAN) family. The fringe pattern, which is the basic unit of the hologram to be created by the network, varies greatly depending on the hologram plane and the position of the object. However, because the criteria for classifying the data are not clear, an imbalance in the training data may occur, and imbalanced training data is a source of instability in training. We therefore present a method for classifying and balancing data whose classification criteria are unclear, and show that it stabilizes training.
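Once the fringe data has been grouped into classes, balancing can be done by resampling minority classes with replacement. This is one generic balancing scheme, not necessarily the paper's; the class labels and array shapes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def balance_by_oversampling(X, labels):
    """Equalize class counts by resampling every class (with
    replacement) up to the size of the largest class."""
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()
    keep = np.concatenate([
        rng.choice(np.where(labels == c)[0], size=target, replace=True)
        for c in classes
    ])
    return X[keep], labels[keep]

X = rng.normal(size=(100, 8))            # toy fringe-pattern features
y = np.array([0] * 80 + [1] * 20)        # imbalanced class labels
Xb, yb = balance_by_oversampling(X, y)
```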

Statistical estimation of the epochs of observation for the 28 determinative stars in the Shi Shi Xing Jing and the table in Cheonsang Yeolcha Bunyajido (석씨성경과 천상열차분야지도의 이십팔수 수거성 관측 연도의 통계적 추정)

  • Ahn, Sang-Hyeon
    • The Bulletin of The Korean Astronomical Society / v.44 no.2 / pp.61.3-61.3 / 2019
  • The epochs of observation for the 28 determinative stars in the Shi Shi Xing Jing and Cheonsang Yeolcha Bunyajido are estimated by using two fitting methods. The coordinate values in these tables are thought to have been measured with meridian instruments, and so they contain axis-misalignment errors and random errors. We adopt a Fourier method, and we also devise a least-squares fitting method. We perform bootstrap resampling to estimate the variance of the epochs. As a result, we find that both data sets were made during the 1st century BCE, i.e., the latter period of the Former Han dynasty. The sample mean of the epoch for the Shi Shi Xing Jing data is earlier by about 15-20 years than that for the Cheonsang Yeolcha Bunyajido. However, their variances are so large that we cannot decide whether the Shi Shi Xing Jing data was formed around 77 BCE and the Cheonsang Yeolcha Bunyajido measured in 52 BCE. We need either more data points or data points measured with better precision. We will also discuss the other 120 coordinates of stars listed in the Shi Shi Xing Jing.

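The bootstrap step in the abstract above estimates how much the fitted epoch would vary under resampling of the star sample. A minimal sketch (the data here is simulated; the paper fits epochs per star with Fourier and least-squares methods):

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_variance(data, estimator, n_boot=2000):
    """Estimate the sampling variance of `estimator` by drawing
    n_boot resamples of the data with replacement."""
    data = np.asarray(data)
    stats = np.array([
        estimator(rng.choice(data, size=len(data), replace=True))
        for _ in range(n_boot)
    ])
    return stats.var(ddof=1)

# 28 simulated per-star epoch estimates (years, negative = BCE),
# standing in for the fitted epochs of the determinative stars.
epochs = rng.normal(-70.0, 30.0, size=28)
var_hat = bootstrap_variance(epochs, np.mean)
```

A large `var_hat` relative to the 25-year gap between the two candidate epochs is exactly the situation the abstract describes: the two data sets cannot be statistically separated.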

Directionally Adaptive Aliasing and Noise Removal Using Dictionary Learning and Space-Frequency Analysis (사전 학습과 공간-주파수 분석을 사용한 방향 적응적 에일리어싱 및 잡음 제거)

  • Chae, Eunjung;Lee, Eunsung;Cheong, Hejin;Paik, Joonki
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.8 / pp.87-96 / 2014
  • In this paper, we propose directionally adaptive aliasing and noise removal using dictionary learning based on space-frequency analysis. The proposed algorithm consists of two modules: i) aliasing and noise detection using dictionary learning and analysis of frequency characteristics from the combined wavelet-Fourier transform, and ii) aliasing removal with noise suppression based on directional shrinkage in the detected regions. The proposed method can preserve high-frequency details because aliasing and noise regions are explicitly detected. Experimental results show that the proposed algorithm can efficiently reduce aliasing and noise while minimizing the loss of high-frequency details and the generation of artifacts compared with conventional methods. The proposed algorithm is suitable for various applications such as image resampling, super-resolution imaging, and robot vision.
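The basic operation behind shrinkage-based suppression is soft-thresholding of transform coefficients: coefficients below a threshold (mostly noise and aliasing energy) are zeroed, larger ones are shrunk toward zero. A generic sketch, not the paper's directional variant:

```python
import numpy as np

def soft_shrink(coeffs, threshold):
    """Soft-threshold shrinkage: zero small coefficients and shrink
    the magnitude of large ones by the threshold."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - threshold, 0.0)

c = np.array([-3.0, -0.5, 0.2, 1.5])   # toy transform coefficients
out = soft_shrink(c, 1.0)
```

The directional part of the paper's method amounts to applying such shrinkage only along the orientation of the detected aliasing, so true edges in other directions are left intact.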

A comparison of imputation methods using nonlinear models (비선형 모델을 이용한 결측 대체 방법 비교)

  • Kim, Hyein;Song, Juwon
    • The Korean Journal of Applied Statistics / v.32 no.4 / pp.543-559 / 2019
  • Data often include missing values due to various reasons. If the missing data mechanism is not MCAR, analysis based only on fully observed cases may cause estimation bias and decrease the precision of the estimates, since partially observed cases are excluded. Missing values cause more serious problems especially when data include many variables. Many imputation techniques have been suggested to overcome this difficulty. However, imputation methods using parametric models may not fit real data well when the model assumptions are not satisfied. In this study, we review imputation methods using nonlinear models such as kernel, resampling, and spline methods, which are robust to model assumptions. In addition, we suggest utilizing imputation classes to improve imputation accuracy, and adding random errors to correctly estimate the variance of the estimates, in nonlinear imputation models. The performance of these imputation methods is compared under various simulated data settings. Simulation results indicate that performance differs as data settings change; however, imputation based on kernel regression or the penalized spline performs better in most situations, and utilizing imputation classes or adding random errors improves the performance of imputation methods using nonlinear models.
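One of the nonlinear methods mentioned, kernel regression, imputes each missing value as a locally weighted average of observed cases. A minimal Nadaraya-Watson sketch (the bandwidth and toy data are assumptions; the paper's simulation settings differ):

```python
import numpy as np

def kernel_impute(x, y, bandwidth=1.0):
    """Impute missing y values (NaN) with Nadaraya-Watson kernel
    regression on the observed (x, y) pairs."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    obs = ~np.isnan(y)
    y_imp = y.copy()
    for i in np.where(~obs)[0]:
        # Gaussian kernel weights: nearby observed cases count more.
        w = np.exp(-0.5 * ((x[obs] - x[i]) / bandwidth) ** 2)
        y_imp[i] = np.dot(w, y[obs]) / w.sum()
    return y_imp

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 1.0, np.nan, 9.0, 16.0])  # y ~ x**2, one value missing
y_filled = kernel_impute(x, y, bandwidth=1.0)
```

To propagate imputation uncertainty into variance estimates, the paper's suggestion corresponds to adding a random residual to each imputed value rather than using the smooth fit directly.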

Monitoring of Groundwater quality according to groundwater use for agriculture (농업용 지하수 사용에 따른 지하수질 모니터링 평가)

  • Ha, Kyoochul;Ko, Kyung-Seok;Lee, Eunhee;Kim, Sunghyun;Park, Changhui;Kim, Gyoo-Bum
    • Proceedings of the Korea Water Resources Association Conference / 2020.06a / pp.30-30 / 2020
  • This study evaluated seasonal changes in groundwater quality in an area where groundwater is used intensively for agriculture (rice farming) in summer. The study area covers 2.83 km² (283.3 ha) including parts of Yanggok-ri and Singok-ri, Hongseong-gun, Chungcheongnam-do. To assess the spatial distribution and temporal change of groundwater quality, 21 wells were surveyed and analyzed twice in 2019 (July and October). Temperature (T), pH, dissolved oxygen (DO), electrical conductivity (EC), and redox potential (Eh) were measured in the field, while major cations and trace elements (Ca, Mg, Na, K, Si, Sr), anions (F, Cl, Br, NO2, NO3, PO4, SO4), alkalinity, dissolved organic carbon (DOC), and dissolved organic matter (DOM) were analyzed in the laboratory. Of the sampled wells, 14-15 (67-71%) were classified as the Ca-HCO3 type, followed by 4-5 wells (19-24%) of the Ca-Cl type. Shallow wells generally showed higher concentrations of most components (TDS, Ca, Mg, Na, K, Cl, SO4, HCO3, DOC) than deeper wells. Multivariate statistical analyses of the water quality data, principal component analysis (PCA) and hierarchical cluster analysis (HCA), showed that the first three principal components (eigenvalues) explained 88.3% of the total variance (PC1 54.0%, PC2 14.2%, PC3 12.3%). PC1 was mainly influenced by Ca, Mg, Na, K, Cl, SO4, and DOC, while PC2 was influenced by HCO3, NO3, and DO. In the cluster analysis, the groundwater of the study area divided into two main groups, apart from well C-3 of the Na-Cl type. The multivariate results were consistent with the water quality characteristics indicated by the hydrogeochemical, isotopic, and dissolved organic matter data. No significant water quality change between the two survey periods was observed except at a few wells, and groundwater levels recovered quickly after use.

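The PCA step in the abstract above reduces the multi-parameter water quality table to a few components ranked by explained variance. A minimal sketch via SVD of the centered data matrix (the simulated 21-well-by-8-parameter table is an assumption standing in for the real data):

```python
import numpy as np

rng = np.random.default_rng(1)

def explained_variance_ratio(X):
    """Fraction of total variance carried by each principal component,
    from the singular values of the centered data matrix."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    return s ** 2 / np.sum(s ** 2)

# Simulated table: 21 wells x 8 hydrochemical parameters, built so
# three latent factors dominate (as in the paper's PCA result).
factors = rng.normal(size=(21, 3))
loadings = rng.normal(size=(3, 8))
X = factors @ loadings + 0.1 * rng.normal(size=(21, 8))
ratio = explained_variance_ratio(X)
```

Inspecting the component loadings (columns of the right singular vectors) then identifies which parameters, e.g. Ca, Mg, Na, drive each component.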

Spectral Characteristics of Sea Surface Height in the East Sea from Topex/Poseidon Altimeter Data (Topex/Poseidon에서 관측된 동해 해수면의 주기특성 연구)

  • 황종선;민경덕;이준우;원중선;김정우
    • Economic and Environmental Geology / v.34 no.4 / pp.375-383 / 2001
  • We extracted sea surface heights (SSH) from the Topex/Poseidon (T/P) radar altimeter data to compare with the SSH estimated from in-situ tide gauges (T/G) at the Ulleungdo, Pohang, and Sokcho/Mukho sites. Selection criteria such as wet/dry troposphere, ionosphere, and ocean tide were used to estimate accurate SSH. For time series analysis, the one-hour-interval tide gauge SSHs were resampled at the 10-day interval of the satellite SSHs. The ocean tide model applied in the altimeter data processing showed periodic aliasings of 175.5 days, 87.8 days, 62.1 days, 58.5 days, 49.5 days, and 46.0 days, and hence 200-day filtering was applied to reduce these spectral noises. Wavenumber correlation analysis was also applied to extract common components between the two SSHs, enhancing the correlation coefficient (CC) dramatically. The original CCs between the satellite and tide gauge SSHs are 0.46, 0.26, and 0.15, respectively. Ulleungdo shows the largest CC because the site is far from the coast, minimizing the error in the satellite observations. The CCs increased to 0.59, 0.30, and 0.30, respectively, after 200-day filtering, and to 0.69, 0.63, and 0.59 after removing inversely correlative components using wavenumber correlation analysis. The CCs were greatly increased, by 87, 227, and 460%, when the wavenumber correlation analysis was followed by 200-day filtering, resulting in final CCs of 0.86, 0.85, and 0.84, respectively. The best SSH estimates were thus obtained when both methods were applied to the original data, and the low-pass filtered T/P SSHs were found to be well correlated with the T/G SSHs from the tide gauges.

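The resampling step in the abstract above, putting hourly tide gauge heights onto the satellite's roughly 10-day sampling grid, can be sketched with simple linear interpolation (the toy 30-day signal and the interpolation scheme are assumptions; the paper's exact processing may differ):

```python
import numpy as np

def resample_to_interval(t_hours, values, interval_days=10.0):
    """Linearly interpolate an hourly time series onto a coarser
    grid, e.g. the ~10-day Topex/Poseidon repeat cycle."""
    t_days = np.asarray(t_hours, float) / 24.0
    grid = np.arange(t_days[0], t_days[-1] + 1e-9, interval_days)
    return grid, np.interp(grid, t_days, values)

hours = np.arange(0, 24 * 100)                 # 100 days of hourly samples
ssh = np.sin(2 * np.pi * hours / (24 * 30))    # toy 30-day sea-surface signal
grid, ssh10 = resample_to_interval(hours, ssh, interval_days=10.0)
```

Sampling a tidal signal at 10-day intervals is also what produces the aliased periods listed in the abstract: tidal constituents with periods of about half a day fold into apparent cycles of tens to hundreds of days.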