• Title/Summary/Keyword: Ground truth

Search Results: 298

A Cost Effective Reference Data Sampling Algorithm Using Fractal Analysis (프랙탈 분석을 통한 비용효과적인 기준 자료추출알고리즘에 관한 연구)

  • 김창재
    • Spatial Information Research, v.8 no.1, pp.171-182, 2000
  • Random sampling or systematic sampling is commonly used to assess the accuracy of classification results. In remote sensing, these sampling methods require much time and tedious work to acquire sufficient ground truth data, so a more effective sampling method that retains the characteristics of the population is needed. In this study, fractal analysis is adopted as an index for reference sampling. The fractal dimensions of the whole study area and of its sub-regions are calculated, and the sub-regions whose dimensionality is most similar to that of the whole area are chosen. The classification accuracy of the whole area is then compared with that of each selected sub-region, and it is verified that the accuracies of the selected sub-regions are similar to that of the full area. Based on this procedure, a new reference sampling method is proposed. The results show that the sampling area and sample size can be reduced while producing the same accuracy-test results as existing methods; thus, the proposed method proves cost-effective for reference data sampling.

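The box-counting estimate at the heart of the fractal analysis above can be sketched as follows (a minimal illustration, not the authors' implementation; the function names, box sizes, and binary-mask input are assumptions):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal (box-counting) dimension of a binary mask.

    Counts occupied boxes N(s) at several box sizes s and fits
    log N(s) = -D * log s + c, returning the slope D.
    """
    counts = []
    for s in sizes:
        h, w = mask.shape
        # Trim so the grid divides evenly, then count boxes containing any pixel.
        trimmed = mask[: h - h % s, : w - w % s]
        boxes = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        occupied = boxes.any(axis=(1, 3)).sum()
        counts.append(max(occupied, 1))
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

def closest_subregion(whole, subregions):
    """Pick the sub-region whose dimension is closest to the whole area's."""
    d_whole = box_counting_dimension(whole)
    return min(subregions, key=lambda r: abs(box_counting_dimension(r) - d_whole))
```

A filled region has dimension near 2 and a thin line near 1, so sub-regions can be ranked by how closely their dimension matches the full scene's.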

Classification of Land Cover over the Korean Peninsula Using Polar Orbiting Meteorological Satellite Data (극궤도 기상위성 자료를 이용한 한반도의 지면피복 분류)

  • Suh, Myoung-Seok;Kwak, Chong-Heum;Kim, Hee-Soo;Kim, Maeng-Ki
    • Journal of the Korean Earth Science Society, v.22 no.2, pp.138-146, 2001
  • Land cover over the Korean peninsula was classified using multi-temporal NOAA/AVHRR (Advanced Very High Resolution Radiometer) data. Four types of phenological data derived from 10-day composited NDVI (Normalized Difference Vegetation Index), maximum and annual mean land surface temperature, and topographical data were used, not only to reduce the data volume but also to increase classification accuracy. A self-organizing feature map (SOFM), a kind of neural network, was used to cluster the satellite data, and a decision tree was used to classify the clusters. When the classification results were compared with the NDVI time series and other available ground truth data, urban, agricultural, deciduous and evergreen areas were clearly distinguished.

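The SOFM clustering step described above can be sketched with a tiny 1-D self-organizing map (a minimal sketch, not the authors' network; the unit count, learning-rate schedule, and function names are assumptions):

```python
import numpy as np

def train_sofm(data, n_units=4, epochs=50, lr0=0.5, seed=0):
    """Train a tiny 1-D self-organizing feature map (SOFM).

    Each unit has a weight vector; for every sample the best-matching
    unit (BMU) and its grid neighbors move toward the sample.
    Returns the trained unit weights.
    """
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=(n_units, data.shape[1]))
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                      # decaying learning rate
        radius = max(1.0, n_units / 2 * (1 - epoch / epochs))  # shrinking neighborhood
        for x in data:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Gaussian neighborhood around the BMU on the 1-D grid.
            dist = np.abs(np.arange(n_units) - bmu)
            h = np.exp(-(dist ** 2) / (2 * radius ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights

def cluster(data, weights):
    """Assign each sample to its best-matching unit."""
    return np.array([np.argmin(np.linalg.norm(weights - x, axis=1)) for x in data])
```

In the paper's pipeline the cluster indices produced this way would then be mapped to land-cover classes by a decision tree.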

A Study on Extracting the Landuse Change Information of Seoul Using LANDSAT(MSS, TM) Data (1972~1985) (LANDSAT(MSS, TM) Data를 이용(利用)한 서울시(市)의 토지이용(土地利用) 경년변화(經年變化)의 추출(抽出)에 관한 연구(硏究) (1972~1985년))

  • Ahn, Chul Ho;Ahn, Ki Won;Kim, Yong Il
    • KSCE Journal of Civil and Environmental Engineering Research, v.9 no.4, pp.113-124, 1989
  • In this study, we extracted land-use change information for Seoul using multiple-date images of the same geographic area: LANDSAT MSS ('72, '79, '81, '83) and TM ('85). Pre-processing comprised geometric correction and digitizing of the administrative boundary. We then performed land-use classification with a maximum likelihood classifier (MLC), after improving classification accuracy with a filtering technique. At the classification stage, ground truth data, topographic maps and aerial photographs were used to select the training fields, and statistical data from the period were compared with the classification results to verify their accuracy. As expected, the urban area of Seoul increased ('72: 25.8 % → '81: 43.0 % → '85: 51.9 %) while forest area decreased ('72: 39.0 % → '85: 28.4 %). We conclude that satellite imagery is an effective, economical and helpful tool for monitoring urban land use and land cover.

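The maximum likelihood classification step can be sketched as per-class Gaussian modeling with equal priors (a hypothetical minimal version; the class labels and function names are assumptions):

```python
import numpy as np

def train_mlc(samples_by_class):
    """Fit a Gaussian (mean, inverse covariance, log-determinant)
    to the training pixels of each class."""
    params = {}
    for label, X in samples_by_class.items():
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        params[label] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return params

def classify_mlc(pixels, params):
    """Assign each pixel to the class with the highest Gaussian
    log-likelihood (equal priors assumed)."""
    labels = list(params)
    scores = []
    for label in labels:
        mu, cov_inv, logdet = params[label]
        d = pixels - mu
        mahal = np.einsum('ij,jk,ik->i', d, cov_inv, d)  # Mahalanobis distances
        scores.append(-0.5 * (mahal + logdet))
    return np.array(labels)[np.argmax(scores, axis=0)]
```

Training fields drawn from ground truth, maps, or photographs would supply `samples_by_class`; every remaining pixel is then labeled by its most likely class.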

Bayesian Parameter Estimation for Prognosis of Crack Growth under Variable Amplitude Loading (변동진폭하중 하에서 균열성장예지를 위한 베이지안 모델변수 추정법)

  • Leem, Sang-Hyuck;An, Da-Wn;Choi, Joo-Ho
    • Transactions of the Korean Society of Mechanical Engineers A, v.35 no.10, pp.1299-1306, 2011
  • In this study, crack-growth model parameters under variable amplitude loading are estimated in the form of a probability distribution using Bayesian parameter estimation. Huang's model is employed to describe the retardation and acceleration of crack growth during the loadings. The Markov chain Monte Carlo (MCMC) method is used to obtain samples of the parameters following the probability distribution. Because the conventional MCMC method often fails to converge to the equilibrium distribution owing to the increased complexity of the model under variable amplitude loading, an improved MCMC method is introduced in which a marginal probability density function (PDF) is employed as the proposal density. The model parameters are estimated from data for several test specimens subjected to constant amplitude loading. A prediction is then made under variable amplitude loading for the same specimens using the estimated parameters, and validated against ground-truth data.
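The conventional random-walk Metropolis-Hastings chain, the baseline the paper improves upon, can be sketched as follows (a generic illustration, not Huang's model or the improved marginal-PDF proposal; the toy data and step size are assumptions):

```python
import math
import random

def metropolis_hastings(log_post, x0, n_samples=5000, step=0.5, seed=1):
    """Sample from an unnormalized log-posterior with a random-walk
    Metropolis-Hastings chain using a symmetric Gaussian proposal."""
    random.seed(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        cand = x + random.gauss(0.0, step)
        lp_cand = log_post(cand)
        # Accept with probability min(1, p(cand) / p(x)).
        if math.log(random.random()) < lp_cand - lp:
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

# Toy example: posterior of a mean parameter given noisy measurements
# (unit-variance Gaussian likelihood, flat prior).
data = [1.1, 0.9, 1.3, 1.0, 0.8]
log_post = lambda m: -0.5 * sum((d - m) ** 2 for d in data)
samples = metropolis_hastings(log_post, x0=0.0)
```

After a burn-in period the sample mean approximates the posterior mean; the improved method in the paper replaces the Gaussian proposal with a marginal PDF to aid convergence.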

Estimation of ambient PM10 and PM2.5 concentrations in Seoul, South Korea, using empirical models based on MODIS and Landsat 8 OLI imagery

  • Lee, Peter Sang-Hoon;Park, Jincheol;Seo, Jung-young
    • Korean Journal of Agricultural Science, v.47 no.1, pp.59-66, 2020
  • Particulate matter (PM) is regarded as a major threat to public health and safety in urban areas. Despite a variety of efforts to systematically monitor the distribution of PM, the limited number of sampling sites may not provide sufficient coverage of areas that are not close to a monitoring station. This study examined the capacity of remotely sensed data to estimate PM10 and PM2.5 concentrations in Seoul, South Korea. Multiple linear regression models were developed using meteorological parameters and multispectral band data from the Moderate Resolution Imaging Spectroradiometer on board Terra (MODIS) and the Operational Land Imager on board Landsat 8 (Landsat 8). Compared to the MODIS-derived models (r2 = 0.25 for PM10, r2 = 0.30 for PM2.5), the Landsat 8-derived models showed improved reliability (r2 = 0.17 to 0.57 for PM10, r2 = 0.47 to 0.71 for PM2.5). Landsat 8 model-derived PM concentrations and ground-truth PM measurements were cross-validated against each other to examine the models' capability for estimating PM concentrations. The modeled PM concentrations showed a stronger correlation with PM10 (r = 0.41 to 0.75) than with PM2.5 (r = 0.14 to 0.82). Overall, the results indicate that the Landsat 8-derived models were more suitable for estimating PM concentrations. Despite day-to-day fluctuation in model reliability, several models showed strong correspondence between the modeled PM concentrations and the PM measurements.
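The multiple-linear-regression fitting used in such studies can be sketched with ordinary least squares (a generic sketch, not the paper's exact models; the predictor layout and function names are assumptions):

```python
import numpy as np

def fit_linear_model(X, y):
    """Ordinary least squares: prepend an intercept column and solve
    for the coefficients minimizing ||X_aug @ beta - y||^2."""
    X_aug = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X_aug, y, rcond=None)
    return beta

def predict(X, beta):
    """Apply the fitted coefficients (intercept first) to new predictors."""
    return np.column_stack([np.ones(len(X)), X]) @ beta

def r_squared(y, y_hat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot
```

Here `X` would hold per-date spectral band values and meteorological parameters, and `y` the station-measured PM concentrations; r2 scores like those quoted above come from `r_squared`.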

Heart Rate Monitoring Using Motion Artifact Modeling with MISO Filters (MISO 필터 기반의 동잡음 모델링을 이용한 심박수 모니터링)

  • Kim, Sunho;Lee, Jungsub;Kang, Hyunil;Ohn, Baeksan;Baek, Gyehyun;Jung, Minkyu;Im, Sungbin
    • Journal of the Institute of Electronics and Information Engineers, v.52 no.8, pp.18-26, 2015
  • Measuring the heart rate during exercise is important for properly controlling the amount of exercise. With the recent spread of smart devices, interest in devices for real-time measurement of the heart rate during exercise has increased dramatically. During intensive exercise, accurate heart rate estimation from wrist-type photoplethysmography (PPG) signals is very difficult because of motion artifacts (MA). In this study, we propose an efficient algorithm for accurately estimating the heart rate from wrist-type PPG signals. On twelve data sets, the proposed algorithm achieves an average absolute error of 1.38 beats per minute (BPM), and the Pearson correlation between the estimates and the ground truth heart rate is 0.9922. The proposed algorithm combines accurate estimation with fast computation, which makes it attractive for wearable devices.
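A common building block of PPG heart-rate estimation, picking the dominant spectral peak in a physiologically plausible band, can be sketched as follows (this is an assumed baseline step, not the paper's MISO-filter MA-cancellation algorithm; the band limits are assumptions):

```python
import numpy as np

def estimate_heart_rate(ppg, fs, lo=0.7, hi=3.5):
    """Estimate heart rate (BPM) as the dominant spectral peak of a PPG
    segment within a plausible band (0.7-3.5 Hz, i.e. 42-210 BPM)."""
    ppg = ppg - np.mean(ppg)                       # remove the DC component
    spectrum = np.abs(np.fft.rfft(ppg))
    freqs = np.fft.rfftfreq(len(ppg), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_freq
```

During intensive exercise the motion-artifact spectrum overlaps this band, which is why MA modeling (as in the paper) is needed before the peak can be trusted.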

A Comparison of Deep Reinforcement Learning and Deep learning for Complex Image Analysis

  • Khajuria, Rishi;Quyoom, Abdul;Sarwar, Abid
    • Journal of Multimedia Information System, v.7 no.1, pp.1-10, 2020
  • Image analysis is an important and predominant task for classifying the different parts of an image. The analysis of complex images such as histopathological slides is a crucial factor in oncology because of its ability to help pathologists interpret images, and various feature extraction techniques have evolved over time for such analysis. Although deep reinforcement learning is a new and emerging technique, very little effort has been made to compare deep learning and deep reinforcement learning for image analysis. This paper highlights how the two techniques differ in extracting features from complex images and discusses their potential pros and cons. Convolutional neural networks (CNNs) are important for image segmentation, tumour detection and diagnosis, and feature extraction, but several challenges must be overcome before deep learning can be applied to digital pathology: the availability of sufficient training examples in medical image datasets, feature extraction from the whole image, localized ground truth annotations, adversarial effects of input representations, and the extremely large size of digital pathology slides (gigabytes each). Formulating histopathological image analysis (HIA) as a multiple instance learning (MIL) problem, in which a histopathological image is divided into high-resolution patches and patch-level predictions are combined into an overall slide prediction, is a remarkable step, but it suffers from loss of contextual and spatial information. In such cases, deep reinforcement learning techniques can be used to learn features from the limited data without losing contextual and spatial information.

An Automated Technique for Detecting Axon Structure in Time-Lapse Neural Image Sequence (시간 경과 신경계 영상 시퀀스에서의 축삭돌기 추출 기법)

  • Kim, Nak Hyun
    • Journal of the Korean Institute of Intelligent Systems, v.24 no.3, pp.251-258, 2014
  • The purpose of neural image analysis is to trace the velocities and directions of mitochondria migrating through axons. This paper proposes an automated technique for detecting axon structure; previously, detection was carried out with a partially automated technique that required some human intervention. In our algorithm, a consolidated image is built by taking the maximum intensity over all image frames at each pixel. Axon detection is then performed through vessel enhancement filtering followed by a peak detection procedure. To remove errors among the detected ridge points, a filtering process based on a local reliability measure is devised. Experiments were performed on real neural image sequences with manually extracted ground truth data, and the proposed algorithm achieved a high detection rate and precision.
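The consolidation step, taking the per-pixel maximum over all frames, can be sketched directly (a minimal sketch; the function name is an assumption):

```python
import numpy as np

def max_intensity_projection(frames):
    """Consolidate a time-lapse stack by taking, at each pixel, the
    maximum intensity over all frames, so transient bright structures
    (e.g. moving mitochondria) accumulate into a single image."""
    stack = np.asarray(frames)   # shape (n_frames, height, width)
    return stack.max(axis=0)
```

The resulting consolidated image is what the vessel-enhancement filter and peak detector then operate on.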

Weighted cost aggregation approach for depth extraction of stereo images (영상의 깊이정보 추출을 위한 weighted cost aggregation 기반의 스테레오 정합 기법)

  • Yoon, Hee-Joo;Cha, Eui-Young
    • Journal of the Korea Institute of Information and Communication Engineering, v.13 no.6, pp.1194-1199, 2009
  • A stereo vision system is a useful means of inferring 3D depth information from two or more images, and has therefore long been a focus of attention in this field. Stereo matching is the process of finding corresponding points in two or more images; a central problem is that it is difficult to achieve fast computation and high accuracy at the same time. To resolve this problem, we propose a new stereo matching technique using weighted cost aggregation. First, we extract feature-based weights from the given stereo images. We then compute the cost of the pixels in a given window using the correlation of weighted color and brightness information, and match windows between the reference and target images of the stereo pair. To demonstrate the effectiveness of the algorithm, we provide experimental results on several synthetic and real scenes, which show the improved accuracy of the proposed method.
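Window-based stereo matching of the kind described above can be sketched with plain SAD block matching (a simplified sketch that omits the paper's feature-based weighting; the window size and disparity range are assumptions):

```python
import numpy as np

def disparity_map(left, right, max_disp=8, win=3):
    """Block matching: for each pixel, pick the disparity whose window in
    the right image minimizes the sum of absolute differences (SAD)
    against the window around the pixel in the left image."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best, best_cost = 0, float('inf')
            # Only disparities whose right-image window stays in bounds.
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.abs(patch.astype(float) - cand).sum()
                if cost < best_cost:
                    best, best_cost = d, cost
            disp[y, x] = best
    return disp
```

The weighted-cost-aggregation approach replaces the uniform per-pixel SAD terms with feature-based color and brightness weights before summing.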

The Early Wittgenstein on the Theory of Types (전기 비트겐슈타인과 유형 이론)

  • Park, Jeong-il
    • Korean Journal of Logic, v.21 no.1, pp.1-37, 2018
  • As is well known, Wittgenstein explicitly criticizes Russell's theory of types in the Tractatus. What, then, is the point of his criticism? To answer this question I consider the theory of types in both its philosophical and its logical aspects. Roughly speaking, in the Tractatus Wittgenstein's logical syntax is the alternative to Russell's theory of types. Logical syntax comprises the sign rules, in particular the formation rules, of the notation of the Tractatus. Wittgenstein's distinction between saying and showing is the most fundamental ground of logical syntax. His criticism of Russell's theory of types leads him a step further, to the view that logical grammar is arbitrary and a priori. That criticism is, after all, a challenge to the Frege-Russell conception of logic: logic is not concerned with general truths or features of the world, and the tautologies of which logic consists say nothing.