• Title/Summary/Keyword: kernel estimate

Monitoring the presence of wild boar and land mammals using environmental DNA metabarcoding - Case study in Yangpyeong-gun, Gyeonggi-do - (환경 DNA 메타바코딩을 활용한 멧돼지 및 육상 포유류 출현 모니터링 - 경기도 양평군 일대를 중심으로 -)

  • Kim, Yong-Hwan; Han, Youn-Ha; Park, Ji-Yun; Kim, Ho Gul; Cho, Soo-Hyun; Song, Young-Keun
    • Journal of the Korean Society of Environmental Restoration Technology / v.24 no.6 / pp.133-144 / 2021
  • This study aims to estimate the locations of land-mammal habitats by analyzing spatial data and to investigate how to apply an environmental DNA monitoring methodology to a lotic system in Yangpyeong-gun, Gyeonggi-do. Environmental DNA sampling points were selected through spatial analysis in the open-source program QGIS by overlaying the kernel density of wild boar (Sus scrofa) occurrences with elevation, slope, and land-cover maps, and 81 samples were collected. After 240 mL of water was filtered from each sample, a metabarcoding technique using the MiMammal universal primer was applied to obtain a full list of the mammal species whose DNA was contained in the filtered water. Wild boar DNA was detected in 8 samples and water deer DNA in 22; DNA of raccoon dog, Eurasian otter, and Siberian weasel was also detected through the metabarcoding analysis. The study is notable for having been conducted in an outdoor lotic system, and it suggests a new wildlife monitoring methodology integrating overlaid geographic data and environmental DNA.
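
The kernel-density step described above is a standard 2D KDE; below is a minimal sketch of that step, assuming hypothetical occurrence coordinates and a top-20% density threshold (the study performed the actual overlay in QGIS together with elevation, slope, and land-cover layers).

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Hypothetical wild boar occurrence records (x, y) in projected coordinates.
occurrences = rng.normal(loc=[5000.0, 3000.0], scale=800.0, size=(60, 2))

# 2D Gaussian kernel density estimate fitted to the occurrence points.
kde = gaussian_kde(occurrences.T)

# Evaluate the density on a regular grid covering the study area.
xs, ys = np.linspace(2000, 8000, 100), np.linspace(1000, 5000, 100)
gx, gy = np.meshgrid(xs, ys)
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

# Keep cells in the top 20% of density as candidate sampling areas; the study
# further filtered candidates by elevation, slope, and land cover.
candidates = density >= np.quantile(density, 0.8)
print(f"{candidates.sum()} of {candidates.size} grid cells retained")
```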

236U accelerator mass spectrometry with a time-of-flight and energy detection system

  • Li Zheng; Hiroyuki Matsuzaki; Takeyasu Yamagata
    • Nuclear Engineering and Technology / v.54 no.12 / pp.4636-4643 / 2022
  • A time-of-flight and energy (TOF-E) detection system for 236U accelerator mass spectrometry (AMS) has been developed to improve the 236U/238U sensitivity at the Micro Analysis Laboratory, Tandem accelerator (MALT), The University of Tokyo. By observing the TOF distributions of 235U, 236U and 238U, the TOF-E detection system clearly separated 236U from the interference of 235U and 238U when measuring three kinds of uranium standards. In addition, we developed a novel method combining kernel-based density estimation and multi-Gaussian fitting to estimate the 236U/238U sensitivity of the TOF-E detection system. Using this new estimation method, a 236U/238U sensitivity of 3.4 × 10⁻¹² and a time resolution of 1.9 ns were obtained; this sensitivity is two orders of magnitude better than that of the previous gas ionization chamber. Moreover, unknown species other than the uranium isotopes were also observed in the measurement of a surface soil sample, which demonstrates that the TOF-E detection system has higher sensitivity in particle identification. With its high sensitivity in mass determination, this TOF-E detection system could also be used for AMS of other heavy isotopes.
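
The combined estimation method is described only at a high level; the sketch below shows one plausible reading, smoothing a synthetic TOF spectrum with a kernel density estimate and then fitting a sum of three Gaussians so a trace peak sitting between two large ones can be quantified. Peak positions, widths, and counts are invented for illustration, not taken from the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
# Synthetic TOF events (ns): two large peaks and a trace peak in between.
tof = np.concatenate([
    rng.normal(480.0, 2.0, 100_000),  # 238U
    rng.normal(470.0, 2.0, 1_000),    # 235U
    rng.normal(475.0, 2.0, 50),       # 236U (trace)
])

# Kernel density estimate of the spectrum on a fine grid.
grid = np.linspace(460, 495, 700)
density = gaussian_kde(tof, bw_method=0.05)(grid)

def three_gaussians(t, a1, m1, s1, a2, m2, s2, a3, m3, s3):
    g = lambda a, m, s: a * np.exp(-0.5 * ((t - m) / s) ** 2)
    return g(a1, m1, s1) + g(a2, m2, s2) + g(a3, m3, s3)

# Initial guesses roughly matching the KDE peak heights.
p0 = [2e-3, 470, 2, 1e-4, 475, 2, 0.2, 480, 2]
popt, _ = curve_fit(three_gaussians, grid, density, p0=p0)

# Relative area of the trace peak vs. the main peak (area ∝ amplitude × sigma).
a236, s236, a238, s238 = popt[3], popt[5], popt[6], popt[8]
print("236U/238U peak-area ratio:", (a236 * s236) / (a238 * s238))
```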

An Estimation of Concentration of Asian Dust (PM10) Using WRF-SMOKE-CMAQ (MADRID) During Springtime in the Korean Peninsula (WRF-SMOKE-CMAQ(MADRID)을 이용한 한반도 봄철 황사(PM10)의 농도 추정)

  • Moon, Yun-Seob; Lim, Yun-Kyu; Lee, Kang-Yeol
    • Journal of the Korean Earth Science Society / v.32 no.3 / pp.276-293 / 2011
  • In this study, a modeling system consisting of the Weather Research and Forecasting (WRF) model, Sparse Matrix Operator Kernel Emissions (SMOKE), the Community Multiscale Air Quality (CMAQ) model, and the CMAQ-Model of Aerosol Dynamics, Reaction, Ionization, and Dissolution (MADRID) was applied to estimate enhancements of PM10 during Asian dust events in Korea. In particular, five experimental formulas were applied within the WRF-SMOKE-CMAQ (MADRID) model to estimate Asian dust emissions from the major source regions in China and Mongolia: the US Environmental Protection Agency (EPA) model, the Goddard Global Ozone Chemistry Aerosol Radiation and Transport (GOCART) model, and the Dust Entrainment and Deposition (DEAD) model, as well as the formulas of Park and In (2003) and Wang et al. (2000). According to the weather maps, backward trajectories, and satellite image analyses, Asian dust is generated by a strong downdraft associated with the upper trough of a stagnation wave caused by development of the upper jet stream, and its transport to Korea appears behind a surface front related to a cut-off low (the comma-shaped cloud in satellite images). In the WRF-SMOKE-CMAQ modeling of the PM10 concentration, Wang et al.'s experimental formula reproduced the temporal and spatial distribution of Asian dust well, and the GOCART model showed low mean bias errors and root mean square errors. In the vertical profile analysis using Wang et al.'s formula, strong Asian dust with concentrations above 800 μg/m³ during March 31 to April 1, 2007 was transported below the boundary layer (about 1 km high), while weak Asian dust with concentrations below 400 μg/m³ during 16-17 March 2009 was transported above the boundary layer (about 1-3 km high). Furthermore, the difference in PM10 concentration between the CMAQ model and the CMAQ-MADRID model for March 31 to April 1, 2007 was large over East Asia: the CMAQ-MADRID model produced concentrations about 25 μg/m³ higher than the CMAQ model. In addition, the PM10 concentration removed by the cloud liquid-phase mechanism within the CMAQ-MADRID model reached a maximum of 15 μg/m³ over East Asia.
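
For reference, the two skill metrics used above to rank the emission formulas are defined as follows; the numbers in this sketch are hypothetical placeholders, not values from the study.

```python
import numpy as np

observed = np.array([420.0, 650.0, 810.0, 540.0, 300.0])  # PM10, μg/m³
modeled  = np.array([390.0, 700.0, 760.0, 580.0, 260.0])  # PM10, μg/m³

mbe  = np.mean(modeled - observed)                  # signed average error
rmse = np.sqrt(np.mean((modeled - observed) ** 2))  # error magnitude

print(f"MBE = {mbe:.1f} μg/m³, RMSE = {rmse:.1f} μg/m³")
```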

Analysis of PM2.5 Impact and Human Exposure from Worst-Case of Mt. Baekdu Volcanic Eruption (백두산 분화 Worst-case로 인한 우리나라 초미세먼지(PM2.5) 영향분석 및 노출평가)

  • Park, Jae Eun; Kim, Hyerim; Sunwoo, Young
    • Korean Journal of Remote Sensing / v.36 no.5_4 / pp.1267-1276 / 2020
  • To quantitatively predict the impact of a large-scale volcanic eruption of Mt. Baekdu on air quality and damage around the Korean Peninsula, a three-dimensional chemistry-transport modeling system (Weather Research and Forecasting - Sparse Matrix Operator Kernel Emissions - Community Multiscale Air Quality) was adopted. A worst-case meteorology scenario was selected to estimate the direct impact on Korea. This study applied the typical worst-case scenarios most likely to cause significant damage to Korea among the worst-case eruptions of Mt. Baekdu in the past decade (2005-2014) and assumed a massive VEI 4 eruption on May 16, 2012, to analyze the PM2.5 concentration caused by the eruption. The air quality effects on each region (cities, counties, and boroughs) were estimated, and vulnerable areas were derived by conducting an exposure assessment reflecting vulnerable groups. The effects were analyzed at a high-resolution scale (9 km × 9 km) to derive vulnerable areas within regions. The analysis of the typical worst case showed a discrepancy between the areas of high PM2.5 concentration, high population density, and concentrated vulnerable groups. The PM2.5 peak concentration was about 24,547 μg/m³, which is estimated to be more severe than the 1980 eruption of Mt. St. Helens, known for having released 540 million tons of volcanic ash. Paju, Gimpo, Goyang, Ganghwa, Sancheong, and Hadong showed high PM2.5 concentrations, and Paju appeared to be the most vulnerable area in the exposure assessment. While areas with high estimated pollutant concentrations are important, it is also necessary to develop plans and measures considering densely populated areas and areas with high concentrations of susceptible or vulnerable groups. Measures should also be established for each vulnerable area by selecting high-concentration areas within cities, counties, and boroughs rather than applying uniform measures to all regions. This study provides a foundation for developing standards for disaster declaration and preemptive response systems for volcanic eruptions.
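
A population-weighted exposure index is one common way to implement the kind of exposure assessment described above; this sketch, with entirely hypothetical grid values, illustrates why the highest-concentration area and the highest-exposure area can diverge.

```python
import numpy as np

pm25       = np.array([120.0, 800.0, 2400.0, 300.0])  # μg/m³ per grid cell
population = np.array([50_000, 200_000, 30_000, 10_000])
vulnerable = np.array([5_000, 30_000, 6_000, 1_000])   # e.g., children, elderly

def weighted_exposure(conc, pop):
    # Population-weighted mean concentration over the grid.
    return np.sum(conc * pop) / np.sum(pop)

print("all residents :", weighted_exposure(pm25, population))
print("vulnerable    :", weighted_exposure(pm25, vulnerable))
```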

Graph Cut-based Automatic Color Image Segmentation using Mean Shift Analysis (Mean Shift 분석을 이용한 그래프 컷 기반의 자동 칼라 영상 분할)

  • Park, An-Jin; Kim, Jung-Whan; Jung, Kee-Chul
    • Journal of KIISE: Software and Applications / v.36 no.11 / pp.936-946 / 2009
  • The graph cuts method has recently attracted a lot of attention for image segmentation, as it can globally minimize energy functions composed of a data term, which reflects how well each pixel fits the prior information for each class, and a smoothness term, which penalizes discontinuities between neighboring pixels. Previous approaches to graph cuts-based automatic image segmentation generally use GMMs (Gaussian mixture models), with the means and covariance matrices calculated by the EM algorithm serving as the prior information for each cluster. However, this is practicable only for clusters with a hyper-spherical or hyper-ellipsoidal shape, as each cluster is represented by a covariance matrix centered on its mean. For arbitrarily shaped clusters, this paper proposes graph cuts-based image segmentation using mean shift analysis. As the prior information for estimating the data term, we use the set of mean trajectories toward each mode, starting from initial means randomly selected in L*u*v* color space. Since the mean shift procedure is computationally expensive, we transform the features from the continuous feature space into a 3D discrete grid and use a 3D kernel based on the first moment in the grid to move the means toward the modes. In the experiments, we investigate the problems of the recently popular mean shift-based and normalized cuts-based segmentation methods, and the proposed method showed better performance than these two methods and than graph cuts-based automatic segmentation using GMMs on the Berkeley segmentation dataset.
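
The core mean shift iteration that the paper accelerates can be sketched as follows: each initial mean is repeatedly moved to the average of the samples inside its kernel window until it converges to a mode. This plain version uses a flat kernel and omits the paper's 3D-grid/first-moment speedup; the data are synthetic stand-ins for pixels in L*u*v* space.

```python
import numpy as np

def mean_shift(points, seeds, bandwidth=1.0, tol=1e-3, max_iter=100):
    modes = seeds.astype(float).copy()
    for i in range(len(modes)):
        m = modes[i]
        for _ in range(max_iter):
            # Flat kernel: average all points within `bandwidth` of m.
            nearby = points[np.linalg.norm(points - m, axis=1) < bandwidth]
            if len(nearby) == 0:
                break
            new_m = nearby.mean(axis=0)
            if np.linalg.norm(new_m - m) < tol:
                break
            m = new_m
        modes[i] = m           # trajectory endpoint: a density mode
    return modes

rng = np.random.default_rng(2)
# Two synthetic color clusters standing in for pixel features.
colors = np.vstack([rng.normal(0, 0.3, (200, 3)), rng.normal(2, 0.3, (200, 3))])
seeds = colors[rng.choice(len(colors), size=5, replace=False)]
print(np.round(mean_shift(colors, seeds), 2))
```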

Static Worst-Case Execution Time Analysis Tool for Scheduling Primitives about Embedded OS (임베디드 운영체제의 스케줄링 프리미티브를 고려한 정적 최악실행시간 분석도구)

  • Park, Hyeon-Hui; Yang, Seung-Min; Choi, Yong-Hoon
    • Journal of KIISE: Computing Practices and Letters / v.13 no.5 / pp.271-281 / 2007
  • Real-time support in an embedded OS is not optional but essential in contemporary embedded systems. To achieve such a system's real-time properties, it is crucial that schedulability analysis of the tasks be accomplished before system execution, and acquiring the Worst-Case Execution Time (WCET) of each task is a core part of schedulability analysis. Because traditional WCET tools analyze only the application task itself (i.e., the program), they do not consider that application tasks are affected by the scheduling primitives of the OS (e.g., the scheduler and interrupt service routines) when the OS schedules them. In this paper, we design and implement a WCET analysis tool that takes the scheduling primitives of the system into account, targeting embedded Linux, which is widely used as an embedded OS. The tool can estimate the WCET of normal application programs as well as of the corresponding primitives that influence scheduling behavior in the embedded Linux kernel, so the precision of the schedulability analysis is improved. We developed the tool as an Eclipse plug-in so that it works properly on any platform and provides a convenient interface and functionality for the user.
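
As a toy illustration of why scheduling primitives matter for schedulability (not the paper's method), this sketch runs a Liu and Layland (1973) rate-monotonic utilization test with each task's WCET inflated by an assumed per-job scheduler overhead; the task set and the two-invocations-per-job overhead model are assumptions for illustration.

```python
def schedulable(tasks, overhead_us):
    """tasks: list of (wcet_us, period_us) under rate-monotonic scheduling."""
    n = len(tasks)
    # Assume each job pays roughly two scheduler invocations (dispatch + preempt).
    utilization = sum((c + 2 * overhead_us) / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)  # Liu & Layland utilization bound
    return utilization <= bound

tasks = [(4_000, 10_000), (6_000, 20_000), (3_000, 50_000)]  # hypothetical
print(schedulable(tasks, overhead_us=0))    # True: ignoring OS overhead
print(schedulable(tasks, overhead_us=300))  # False: overhead breaks the bound
```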

Multivariate Time Series Simulation With Component Analysis (독립성분분석을 이용한 다변량 시계열 모의)

  • Lee, Tae-Sam; Salas, Jose D.; Karvanen, Juha; Noh, Jae-Kyoung
    • Proceedings of the Korea Water Resources Association Conference / 2008.05a / pp.694-698 / 2008
  • In hydrology, it is a difficult task to deal with multivariate time series, such as modeling the streamflows of an entire complex river system. Normal distribution-based models such as MARMA (Multivariate Autoregressive Moving Average) have been a major approach for modeling multivariate time series, but they have some limitations. One is the unfavorable data transformation needed to force the data to follow the normal distribution; furthermore, a high-dimensional multivariate model requires a very large parameter matrix. As an alternative, one might decompose the multivariate data into independent components and model each individually. In 1985, Lins used Principal Component Analysis (PCA): five scores, the data decomposed from the original data, were modeled individually, one with an AR(2) model and the others with AR(1) models. From the time series analysis of the five component scores, he noted that "principal component time series might provide a relatively simple and meaningful alternative to conventional large MARMA models". Inspired by this observation, this study develops a multivariate simulation model using Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Three modeling steps are applied for simulation, as sketched after this abstract: (1) PCA is used to decompose the correlated multivariate data into uncorrelated data, while ICA decomposes the data into independent components; the autocorrelation structure of the decomposed data remains dominant, inherited from the original domain. (2) Each component is resampled by block bootstrapping or K-nearest-neighbor resampling. (3) The resampled components are brought back to the original domain. From the suggested approach one might expect that (a) the simulated data differ from the historical data, (b) no data transformation is required (in the case of ICA), and (c) a complex system can be decomposed into independent components and modeled individually. The models with PCA and ICA are compared using various statistics, such as basic statistics (mean, standard deviation, skewness, autocorrelation), reservoir-related statistics, and the kernel density estimate.
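
A minimal sketch of the three modeling steps, using PCA via SVD for step (1) and a simple block bootstrap for step (2) on synthetic data; the ICA and K-nearest-neighbor variants mentioned above are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4))
X[:, 1] += 0.8 * X[:, 0]           # hypothetical correlated streamflow series

# (1) Decompose into uncorrelated component scores with PCA (via SVD).
mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
scores = (X - mean) @ Vt.T          # uncorrelated columns

# (2) Resample each component independently by block bootstrap, which
# preserves the autocorrelation inherited from the original domain.
def block_bootstrap(series, block=20):
    n = len(series)
    starts = rng.integers(0, n - block, size=int(np.ceil(n / block)))
    return np.concatenate([series[st:st + block] for st in starts])[:n]

sim_scores = np.column_stack([block_bootstrap(scores[:, j])
                              for j in range(scores.shape[1])])

# (3) Bring the resampled components back to the original domain.
X_sim = sim_scores @ Vt + mean
print(np.corrcoef(X_sim[:, 0], X_sim[:, 1])[0, 1])  # cross-correlation restored
```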

The Estimation of Link Travel Time for the Namsan Tunnel #1 using Vehicle Detectors (지점검지체계를 이용한 남산1호터널 구간통행시간 추정)

  • Hong, Eunjoo; Kim, Youngchan
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.1 no.1 / pp.41-51 / 2002
  • As the Advanced Traveler Information System (ATIS) is the kernel of the Intelligent Transportation System, it is very important to manage the data from traffic information collectors on the road and to grasp changes in travel time quickly and exactly. Link travel time can be obtained by two methods: measurement by area detection systems or estimation from point detection systems. Travel time measured by area detection systems is of limited use for real-time information because it is calculated from probes that have already passed through the link. Travel time estimated from point detection systems is calculated from data collected at the same time in each section; that is, it uses the characteristics of the various vehicles in each section, and for this reason it differs from the real travel time. In this study, an artificial neural network is used to estimate link travel time by learning the relationship between vehicle detector data and link travel time. The estimation methods are classified according to the kind of input data; when tested with the vehicle detector data and AVI data of Namsan Tunnel #1, the absolute error between estimated and real travel times fell within 5-15 minutes in over 90 percent of cases. The approach also reduces the time lag of the provided information and captures the generation and dissolution of delays with less latency.
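
A minimal sketch of the estimation idea, assuming hypothetical detector features (speed, volume, occupancy) and travel times as stand-ins for the Namsan Tunnel detector and AVI data; the network size and data-generating relationship are illustrative choices.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
# Hypothetical per-interval detector features: [speed km/h, volume veh, occupancy %].
X = rng.uniform([10, 50, 5], [80, 400, 60], size=(1000, 3))
# Hypothetical "true" link travel time (minutes), roughly inverse to speed.
y = 120.0 / X[:, 0] + 0.02 * X[:, 2] + rng.normal(0, 0.2, 1000)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                   random_state=0))
model.fit(X[:800], y[:800])                      # train
err = np.abs(model.predict(X[800:]) - y[800:])   # test absolute error
print(f"mean absolute error: {err.mean():.2f} minutes")
```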

Suspension of Sediment over Swash Zone (Swash대역에서의 해빈표사 부유거동에 관한 연구)

  • Cho, Yong Jun; Kim, Kwon Soo; Ryu, Ha Sang
    • KSCE Journal of Civil and Environmental Engineering Research / v.28 no.1B / pp.95-109 / 2008
  • We numerically analyzed nonlinear shoaling, a plunging breaker and its accompanying energetic suspension of sediment at the bed, and the redistribution of suspended sediment by the downrush of preceding waves and the following plunger, using SPH with a Gaussian kernel function, the Lagrangian Dynamic Smagorinsky model (LDS), and Van Rijn's pick-up function. In the process, we concluded that conventional models for the tractive force at the bottom, such as a quadratic law, cannot accurately describe the rapidly accelerating flow over the swash zone, and we propose a new methodology to estimate the bottom tractive force accurately. Using the wave model newly proposed in this study, we successfully duplicated the severely deformed water surface profile, free-falling water particles, the queuing splash after water particles land on the free surface, the wave finger due to the structured vortex on the rear side of the wave crest (Narayanaswamy and Dalrymple, 2002), the circulation of suspended sediment over the swash zone, and the net offshore transfer of sediment clouds suspended over the swash zone, all of which have so far been regarded as very difficult features to mimic in computational fluid mechanics.
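
The SPH building block named above, density estimation with a Gaussian smoothing kernel, can be sketched as follows; the 2D setting, unit masses, and smoothing length are arbitrary choices here, and the full wave model (LDS turbulence closure, pick-up function) is beyond this snippet.

```python
import numpy as np

def gaussian_kernel(r, h):
    # Normalized 2D Gaussian smoothing kernel W(r, h).
    return np.exp(-(r / h) ** 2) / (np.pi * h ** 2)

def sph_density(positions, masses, h=0.1):
    # rho_i = sum_j m_j * W(|x_i - x_j|, h)
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * gaussian_kernel(r, h)).sum(axis=1)

rng = np.random.default_rng(5)
pos = rng.uniform(0.0, 1.0, size=(300, 2))  # hypothetical particle positions
rho = sph_density(pos, np.ones(300))
print(rho.min(), rho.max())
```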

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong; Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.107-122 / 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in the volatility of stock market returns. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. But since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1487 daily observations; 1187 days were used to train the suggested GARCH models and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted the KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; the polynomial kernel function shows exceptionally lower forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility will increase, buy volatility today; if it will decrease, sell volatility today; if the forecasted direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because we cannot trade historical volatility values themselves, but our simulation results are meaningful since the Korea Exchange introduced a tradable volatility futures contract in November 2014. The trading systems with SVR-based GARCH models show higher returns than the MLE-based GARCH systems in the testing period. The trading profitable percentages of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return and SVR-based symmetric S-GARCH a +526.4% return; MLE-based asymmetric E-GARCH shows a -72% return and SVR-based asymmetric E-GARCH +245.6%; MLE-based asymmetric GJR-GARCH shows -98.7% and SVR-based asymmetric GJR-GARCH +126.3%. The linear kernel function shows higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4%, versus +150.2% for the MLE-based IVTS, and the SVR-based GARCH IVTS shows higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider the costs incurred in trading, including brokerage commissions and slippage. The IVTS trading performance is unrealistic since we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models, and further studies on other machine learning-based GARCH models can give better information for stock market investors.
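
A minimal sketch of an SVR-based GARCH(1,1)-style estimation on synthetic returns, comparing the three kernel families named above; the volatility proxy (squared returns), the lagged features, and the hyperparameters are simplifications for illustration, not the paper's exact setup.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(6)
# Synthetic GARCH(1,1)-like returns as a stand-in for KOSPI 200 data.
n, omega, alpha, beta = 1500, 1e-6, 0.08, 0.90
sig2 = np.full(n, omega / (1 - alpha - beta))
r = np.zeros(n)
for t in range(1, n):
    sig2[t] = omega + alpha * r[t - 1] ** 2 + beta * sig2[t - 1]
    r[t] = np.sqrt(sig2[t]) * rng.standard_normal()

# Features known at t-1: last squared return and a 20-day rolling mean of
# squared returns (a crude volatility proxy); target: next squared return.
proxy = r ** 2
roll = np.convolve(proxy, np.ones(20) / 20, mode="valid")
X = np.column_stack([proxy[19:-1], roll[:-1]])
y = proxy[20:]

split = len(X) - 300  # hold out 300 days for testing, mirroring the paper
for kernel in ("linear", "poly", "rbf"):
    svr = SVR(kernel=kernel, C=1.0, epsilon=1e-6)
    svr.fit(X[:split], y[:split])
    mse = np.mean((svr.predict(X[split:]) - y[split:]) ** 2)
    print(f"{kernel:6s} test MSE: {mse:.3e}")
```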