• Title/Summary/Keyword: Smoothing Algorithm


Theoretical Investigations on Compatibility of Feedback-Based Cellular Models for Dune Dynamics: Sand Fluxes, Avalanches, and Wind Shadow ('되먹임 기반' 사구 역학 모형의 호환 가능성에 대한 이론적 고찰 - 플럭스, 사면조정, 바람그늘 문제를 중심으로 -)

  • RHEW, Hosahng
    • Journal of the Korean Association of Regional Geographers / v.22 no.3 / pp.681-702 / 2016
  • Two different modelling approaches to dune dynamics have been established thus far: continuous models that emphasize the precise representation of the wind field, and feedback-based models that focus on the interactions between dunes rather than on aerodynamics. Although feedback-based models have proven capable of capturing the essence of dune dynamics, their compatibility issues have received less attention. This research investigated, mostly from a theoretical point of view, the algorithmic compatibility of three feedback-based dune models: sand slab models, the Nishimori model, and the de Castro model. Major findings are as follows. First, sand slab models and the de Castro model are compatible from a flux perspective, whereas the Nishimori model needs a tuning factor. Second, the algorithm of avalanching can be easily implemented via repetitive spatial smoothing (sketched below), showing high compatibility between the models. Finally, the wind shadow rule might not be a necessary component for reproducing dune patterns, unlike the interpretation or assumption of previous studies; rather, it might be more important for understanding bedform-level interactions. Overall, the three models show high compatibility, or seem to require relatively small modifications, though more thorough investigation is needed.

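The avalanching step described above can be illustrated as repeated local smoothing of a height field until no slope exceeds the angle of repose. The sketch below is a minimal interpretation of that idea, not the paper's implementation: the grid, periodic boundaries, slope threshold, and transfer fraction are all illustrative assumptions.

```python
import numpy as np

def avalanche(height, max_slope=0.6, transfer=0.25, max_iters=500):
    """Relax a sand-slab height field by repeated local smoothing.

    Wherever the drop to a neighbour exceeds `max_slope`, a fraction of the
    excess is moved downhill. Iterating this rule acts as a repetitive
    spatial smoothing that stops once every slope is below the threshold.
    """
    h = height.astype(float).copy()
    for _ in range(max_iters):
        moved = False
        # Compare each cell with its four neighbours (periodic boundaries).
        for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            drop = h - np.roll(h, shift, axis=(0, 1))
            excess = np.clip(drop - max_slope, 0.0, None)
            if excess.any():
                flux = transfer * excess
                h -= flux                                              # sand leaves the steep cell
                h += np.roll(flux, tuple(-s for s in shift), axis=(0, 1))  # and lands on the lower neighbour
                moved = True
        if not moved:
            break
    return h

# Example: relax a random dune-like surface.
surface = np.random.rand(64, 64) * 5.0
relaxed = avalanche(surface)
```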

Building of Prediction Model of Wind Power Generation using Power Ramp Rate (Power Ramp Rate를 이용한 풍력 발전량 예측모델 구축)

  • Hwang, Mi-Yeong;Kim, Sung-Ho;Yun, Un-Il;Kim, Kwang-Deuk;Ryu, Keun-Ho
    • Journal of the Korea Society of Computer and Information / v.17 no.1 / pp.211-218 / 2012
  • Fossil fuels are used all over the world, and their use produces greenhouse gases, causing global warming and serious environmental pollution. To reduce this pollution we should use renewable energy, which is clean; among renewable sources, wind energy is the most promising. Wind power generation produces no pollution and cannot be exhausted. However, because wind power output is irregular, it is important to predict the generated electrical energy accurately in order to smooth the wind energy supply. We therefore use the ramp characteristic to forecast wind power output accurately. A ramp is a rapid increase or decrease in wind power generation over a short time; it can unbalance power supply and demand and can damage wind turbines. In this paper, we build prediction models using the power ramp rate (PRR) as well as wind speed and wind direction to increase prediction accuracy. The prediction models were constructed with a multilayer neural network (see the sketch below). We built four prediction models with PRR, wind speed, and wind direction and then evaluated their performance. The values predicted by the model that uses all of the attributes are close to the observed values. Therefore, using the PRR attribute increases the prediction accuracy of wind power generation.
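
A rough sketch of the modelling setup described in the abstract: a multilayer neural network trained on wind speed, wind direction, and a PRR feature to predict the next step's output. The synthetic data, network size, and use of scikit-learn are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical history: wind speed (m/s), wind direction (deg), generated power (kW).
rng = np.random.default_rng(0)
speed = rng.uniform(3, 20, 500)
direction = rng.uniform(0, 360, 500)
power = 50 * speed + rng.normal(0, 30, 500)

# Power ramp rate: change in output between consecutive time steps.
prr = np.diff(power, prepend=power[0])

# Predict the next step's power from the current speed, direction, and PRR.
X = np.column_stack([speed[:-1], direction[:-1], prr[:-1]])
y = power[1:]

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
model.fit(X, y)
print("next-step prediction:", model.predict(X[-1:]))
```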

Improvement of Reverse-time Migration using Homogenization of Acoustic Impedance (음향 임피던스 균질화를 이용한 거꿀시간 참반사보정 성능개선)

  • Lee, Gang Hoon;Pyun, Sukjoon;Park, Yunhui;Cheong, Snons
    • Geophysics and Geophysical Exploration / v.19 no.2 / pp.76-83 / 2016
  • The migration image can be distorted by reflected waves in the source and receiver wavefields when the input velocity model contains discontinuities. To remove reflections from layer interfaces, it is common practice to smooth the velocity model before migration. If the velocity model is smoothed, however, the subsurface image can be distorted because the velocities around the interfaces change. In this paper, we attempt to minimize this distortion by reducing the reflection energy in the source and receiver wavefields through acoustic impedance homogenization. To make the acoustic impedance constant, we define a fake density model and use it for migration (see the sketch below). When the acoustic impedance is constant over all layers, the reflection coefficient at normal incidence becomes zero, and the minimized reflection energy improves the migration result. To verify our algorithm, we implement reverse-time migration using a cell-based finite-difference method. Numerical examples show that the migration image is improved at layer interfaces with high velocity contrast, with marked improvement particularly in the shallow part.
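
The homogenization idea can be written down directly: with acoustic impedance Z = ρv, choosing a fake density ρ(x) = Z0 / v(x) makes Z constant, so the normal-incidence reflection coefficient (Z2 − Z1)/(Z2 + Z1) vanishes at every interface. A minimal sketch with an illustrative two-layer velocity model and an arbitrary reference impedance Z0 (both assumptions, not values from the paper):

```python
import numpy as np

def fake_density(velocity, z0=1.0e6):
    """Density model that makes acoustic impedance Z = rho * v constant (= z0)."""
    return z0 / velocity

def normal_incidence_reflectivity(velocity, density):
    """Reflection coefficient R = (Z2 - Z1) / (Z2 + Z1) between adjacent layers."""
    z = density * velocity
    return (z[1:] - z[:-1]) / (z[1:] + z[:-1])

# Two-layer velocity model (m/s) with a sharp contrast.
v = np.array([1500.0, 1500.0, 3500.0, 3500.0])

rho_true = np.full_like(v, 2000.0)   # realistic constant density
rho_fake = fake_density(v)           # density chosen to homogenize impedance

print(normal_incidence_reflectivity(v, rho_true))  # non-zero R at the interface
print(normal_incidence_reflectivity(v, rho_fake))  # all zeros: no spurious reflections
```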

Comparison of Forest Carbon Stocks Estimation Methods Using Forest Type Map and Landsat TM Satellite Imagery (임상도와 Landsat TM 위성영상을 이용한 산림탄소저장량 추정 방법 비교 연구)

  • Kim, Kyoung-Min;Lee, Jung-Bin;Jung, Jaehoon
    • Korean Journal of Remote Sensing / v.31 no.5 / pp.449-459 / 2015
  • The conventional National Forest Inventory (NFI)-based forest carbon stock estimation method is suitable for national-scale estimation, but not for regional-scale estimation, due to the lack of NFI plots. In this study, for the purpose of regional-scale carbon stock estimation, we created grid-based forest carbon stock maps using spatial ancillary data and two up-scaling methods. Chungnam province was chosen as the study area, for which the 5th NFI (2006~2009) data were collected. The first method (method 1) uses the forest type map as ancillary data and a regression model for forest carbon stock estimation, whereas the second method (method 2) uses satellite imagery and the k-Nearest Neighbor (k-NN) algorithm (see the sketch below). Additionally, in order to account for uncertainty, the final AGB carbon stock maps were generated by performing 200 iterations of a Monte Carlo simulation. As a result, compared to the NFI-based estimate (21,136,911 tonC), the total carbon stock was over-estimated by method 1 (22,948,151 tonC) and under-estimated by method 2 (19,750,315 tonC). In a paired t-test with 186 independent data points, the average carbon stock estimate from the NFI-based method was statistically different from method 2 (p<0.01) but not from method 1 (p>0.01). In particular, the Monte Carlo simulation showed that the smoothing effect of the k-NN algorithm and the mis-registration error between NFI plots and the satellite image can lead to large uncertainty in carbon stock estimation. Although method 1 was found suitable for carbon stock estimation of the heterogeneous forest stands typical of Korea, a satellite-based method is still needed to provide periodic estimates for large, un-investigated forest areas. In this respect, future work will extend the spatial and temporal scope of the study and pursue robust carbon stock estimation with various satellite images and estimation methods.
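
A compressed sketch of the second up-scaling approach (method 2): k-NN imputation of plot-level carbon onto satellite pixels, repeated under perturbation to mimic the Monte Carlo uncertainty runs. The band values, noise model, and choice of k are illustrative assumptions; only the 200-iteration count is taken from the abstract.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(42)

# Hypothetical NFI plots: Landsat TM band reflectances and measured carbon (tonC/ha).
plot_bands = rng.uniform(0, 0.4, size=(200, 6))          # 6 TM bands per plot
plot_carbon = 80 * plot_bands[:, 3] + rng.normal(0, 5, 200)

# Pixels of the study area to be mapped.
pixel_bands = rng.uniform(0, 0.4, size=(10_000, 6))

def knn_carbon_map(bands_train, carbon_train, bands_target, k=5):
    """Impute per-pixel carbon as the mean of the k spectrally nearest plots."""
    knn = KNeighborsRegressor(n_neighbors=k)
    knn.fit(bands_train, carbon_train)
    return knn.predict(bands_target)

# Monte Carlo loop: perturb plot carbon within its assumed measurement error,
# re-map, and accumulate a distribution of total-stock estimates.
totals = []
for _ in range(200):
    noisy_carbon = plot_carbon + rng.normal(0, 5, plot_carbon.size)
    carbon_map = knn_carbon_map(plot_bands, noisy_carbon, pixel_bands)
    totals.append(carbon_map.sum())

print("total stock:", np.mean(totals), "+/-", np.std(totals))
```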

Accelerometer-based Gesture Recognition for Robot Interface (로봇 인터페이스 활용을 위한 가속도 센서 기반 제스처 인식)

  • Jang, Min-Su;Cho, Yong-Suk;Kim, Jae-Hong;Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.53-69 / 2011
  • Vision and voice-based technologies are commonly utilized for human-robot interaction, but it is widely recognized that the performance of vision and voice-based interaction systems deteriorates by a large margin in real-world situations due to environmental and user variance. Human users need to be very cooperative to get reasonable performance, which significantly limits the usability of vision and voice-based human-robot interaction technologies. As a result, touch screens are still the major medium of human-robot interaction in real-world applications. To broaden the usability of robots for various services, alternative interaction technologies should be developed to complement the problems of vision and voice-based technologies. In this paper, we propose an accelerometer-based gesture interface as one such alternative, because accelerometers are effective in detecting the movements of the human body while their performance is not limited by environmental factors such as lighting conditions or a camera's field of view. Moreover, accelerometers are widely available nowadays in many mobile devices. We tackle the problem of classifying the acceleration signal patterns of the 26 English alphabet letters, an essential repertoire for robot-based education services. Recognizing 26 English handwriting patterns from accelerometers is a very difficult task because of the large number of pattern classes and the complexity of each pattern. The most difficult comparable problem previously undertaken was recognizing the acceleration signal patterns of 10 handwritten digits; most previous studies dealt with sets of 8~10 simple and easily distinguishable gestures useful for controlling home appliances, computer applications, robots, etc. Good features are essential for the success of pattern recognition. To improve the discriminative power for complex alphabet patterns, we extracted 'motion trajectories' from the input acceleration signal and used them as the main feature. Investigative experiments showed that classifiers based on trajectories performed 3%~5% better than those using raw features, e.g. the acceleration signal itself or statistical figures. To minimize the distortion of the trajectories, we applied a simple but effective set of smoothing filters and band-pass filters. It is well known that acceleration patterns for the same gesture differ greatly between performers. To tackle this problem, online incremental learning is applied to make our system adaptive to each user's distinctive motion properties. Our system is based on instance-based learning (IBL), in which each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which is a problem because it not only slows down classification but also degrades recall performance. Regarding the latter phenomenon, we observed that as the number of reference patterns grows, some reference patterns contribute more to false positive classifications. Thus, we devised an algorithm for optimizing the reference pattern set based on the positive and negative contributions of each reference pattern (sketched below). The algorithm is run periodically to remove reference patterns that have a very low positive contribution or a high negative contribution.
Experiments were performed on 6,500 gesture patterns collected from 50 adults aged 30~50. Each alphabet letter was performed 5 times per participant using a Nintendo® Wii™ remote. The acceleration signal was sampled at 100 Hz on 3 axes. The mean recall rate over all letters was 95.48%. Some letters recorded a very low recall rate and exhibited a very high pairwise confusion rate. Major confusion pairs were D(88%) and P(74%), I(81%) and U(75%), and N(88%) and W(100%). Though W was recalled perfectly, it contributed heavily to the false positive classification of N. Compared with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures), and Samsung Electronics (97% for 10 digits and a control gesture), the performance of our system is superior considering the number of pattern classes and the complexity of the patterns. Using our gesture interaction system, we conducted 2 case studies of robot-based edutainment services. The services were implemented on various robot platforms and mobile devices, including the iPhone™. The participating children exhibited improved concentration and active reactions to the service with our gesture interface. To prove the effectiveness of the gesture interface, a test was taken by the children after experiencing an English teaching service. The results showed that those who played with the gesture interface-based robot content scored 10% higher than those taught conventionally. We conclude that the accelerometer-based gesture interface is a promising technology for flourishing real-world robot-based services and content by complementing the limits of today's conventional interfaces, e.g. touch screens, vision, and voice.
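
The pruning step of the instance-based learner can be sketched as a periodic sweep over the stored reference patterns: each pattern keeps counters of how often it supported a correct classification (positive contribution) and how often it caused a false positive (negative contribution), and patterns outside the chosen thresholds are dropped. The field names and thresholds below are assumptions for illustration, not the paper's parameters.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ReferencePattern:
    label: str                 # gesture class, e.g. an alphabet letter
    trajectory: List[float]    # smoothed motion-trajectory feature vector
    positive: int = 0          # times it helped classify its own class correctly
    negative: int = 0          # times it contributed to a false positive

def prune_references(patterns, min_positive=2, max_negative=5):
    """Periodically drop reference patterns with too little positive or too
    much negative contribution, keeping the instance base compact."""
    return [p for p in patterns
            if p.positive >= min_positive and p.negative <= max_negative]

# Example sweep over a toy reference set.
refs = [
    ReferencePattern("N", [0.1, 0.2], positive=9, negative=1),
    ReferencePattern("W", [0.3, 0.1], positive=8, negative=12),  # causes false positives, pruned
    ReferencePattern("D", [0.5, 0.4], positive=0, negative=0),   # never useful, pruned
]
refs = prune_references(refs)
print([r.label for r in refs])   # -> ['N']
```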

Development of Gated Myocardial SPECT Analysis Software and Evaluation of Left Ventricular Contraction Function (게이트 심근 SPECT 분석 소프트웨어의 개발과 좌심실 수축 기능 평가)

  • Lee, Byeong-Il;Lee, Dong-Soo;Lee, Jae-Sung;Chung, June-Key;Lee, Myung-Chul;Choi, Heung-Kook
    • The Korean Journal of Nuclear Medicine / v.37 no.2 / pp.73-82 / 2003
  • Objectives: New software (Cardiac SPECT Analyzer: CSA) was developed for the quantification of volumes and ejection fraction on gated myocardial SPECT. Volumes and ejection fractions from CSA were validated by comparison with those quantified by the Quantitative Gated SPECT (QGS) software. Materials and Methods: Gated myocardial SPECT was performed in 40 patients with ejection fractions from 15% to 85%. In 26 patients, gated myocardial SPECT was acquired again with the patients in situ. A cylinder model was used to eliminate noise semi-automatically, and profile data were extracted using Gaussian fitting after smoothing. The boundary points of the endo- and epicardium were found using an iterative learning algorithm. End-diastolic volume (EDV), end-systolic volume (ESV), and ejection fraction (EF) were calculated (see the sketch below). These values were compared with those calculated by QGS, and the same gated SPECT data were repeatedly quantified by CSA to assess the variation of the values on sequential measurements of the same patients and on the repeated acquisition. Results: For the 40 patients, EF, EDV, and ESV by CSA were correlated with those by QGS, with correlation coefficients of 0.97, 0.92, and 0.96. Two standard deviations (SD) of EF on the Bland-Altman plot was 10.1%. Repeated measurements of EF, EDV, and ESV by CSA were correlated with each other with coefficients of 0.96, 0.99, and 0.99, respectively. On repeated acquisition, reproducibility was also excellent, with correlation coefficients of 0.89, 0.97, and 0.98; coefficients of variation of 8.2%, 5.4 mL, and 8.5 mL; and 2SD of 10.6%, 21.2 mL, and 16.4 mL on the Bland-Altman plot for EF, EDV, and ESV. Conclusion: We developed the CSA software for the quantification of volumes and ejection fraction on gated myocardial SPECT. The volumes and ejection fractions quantified using this software were found to be valid in terms of correctness and precision.
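
The derived quantity reported by the software follows directly from the segmented volumes: EF = (EDV − ESV) / EDV × 100. The limits of agreement quoted in the results correspond to the mean difference ± 2 SD on a Bland-Altman plot. A trivial sketch with illustrative numbers (not values from the study):

```python
import numpy as np

def ejection_fraction(edv_ml, esv_ml):
    """Left-ventricular ejection fraction (%) from end-diastolic and end-systolic volumes."""
    return (edv_ml - esv_ml) / edv_ml * 100.0

def bland_altman_limits(values_a, values_b):
    """Mean difference and +/- 2 SD limits of agreement between two methods."""
    diff = np.asarray(values_a) - np.asarray(values_b)
    return diff.mean(), 2 * diff.std(ddof=1)

print(ejection_fraction(edv_ml=120.0, esv_ml=55.0))   # ~54.2 %

# Hypothetical paired EF measurements from two quantification methods.
ef_csa = [55.0, 42.0, 63.0, 30.0]
ef_qgs = [53.0, 45.0, 60.0, 33.0]
print(bland_altman_limits(ef_csa, ef_qgs))
```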

The Effect of Data Size on the k-NN Predictability: Application to Samsung Electronics Stock Market Prediction (데이터 크기에 따른 k-NN의 예측력 연구: 삼성전자주가를 사례로)

  • Chun, Se-Hak
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.239-251 / 2019
  • Statistical methods such as moving averages, Kalman filtering, exponential smoothing, regression analysis, and ARIMA (autoregressive integrated moving average) have been used for stock market prediction. However, these statistical methods have not produced superior performance. In recent years, machine learning techniques have been widely used in stock market prediction, including artificial neural networks, SVM, and genetic algorithms. In particular, a case-based reasoning method known as k-nearest neighbor is also widely used for stock price prediction. Case-based reasoning retrieves several similar cases from previous cases when a new problem occurs and combines the class labels of the similar cases to create a classification for the new problem. However, case-based reasoning has some problems. First, it tends to search for a fixed number of neighbors in the observation space and always selects the same number of neighbors rather than the best similar neighbors for the target case, so it may have to take more cases into account even when fewer cases are applicable. Second, it may select neighbors that are far away from the target case. Thus, case-based reasoning does not guarantee an optimal pseudo-neighborhood for various target cases, and predictability can be degraded by deviation from the desired similar neighbors. This paper examines how the size of the learning data affects stock price predictability through k-nearest neighbor and compares the predictability of k-nearest neighbor with the random walk model according to the size of the learning data and the number of neighbors. In this study, Samsung Electronics stock prices were predicted by dividing the learning dataset into two types. For the prediction of the next day's closing price, we used four variables: opening value, daily high, daily low, and daily close (see the sketch below). In the first experiment, data from January 1, 2000 to December 31, 2017 were used for the learning process. In the second experiment, data from January 1, 2015 to December 31, 2017 were used for the learning process. The test data are from January 1, 2018 to August 31, 2018 for both experiments. We compared the performance of k-NN with the random walk model using the two learning datasets. The mean absolute percentage error (MAPE) was 1.3497 for the random walk model and 1.3570 for k-NN in the first experiment, when the learning data were small. However, MAPE was 1.3497 for the random walk model and 1.2928 for k-NN in the second experiment, when the learning data were large. These results show that predictive power is higher when more learning data are used than when less learning data are used. This paper also shows that k-NN generally produces better predictive power than the random walk model for larger learning datasets, but not when the learning dataset is relatively small. Future studies need to consider macroeconomic variables related to stock price forecasting, in addition to the opening, low, high, and closing prices. Also, to produce better results, it is recommended that the k-nearest neighbor method find its nearest neighbors using a second-step filtering method that considers fundamental economic variables as well as a sufficient amount of learning data.
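
The experimental comparison can be reproduced in outline: predict the next day's close from (open, high, low, close) with k-NN, benchmark against a random walk (tomorrow's close = today's close), and score both with MAPE. The synthetic price series, the train/test split index, and k below are placeholders, not the study's data or settings.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return np.mean(np.abs((actual - predicted) / actual)) * 100.0

def evaluate(ohlc, split, k=5):
    """Compare k-NN and random-walk forecasts of the next day's close.

    ohlc: array of shape (days, 4) with columns open, high, low, close.
    split: index where the test period starts.
    """
    X, y = ohlc[:-1], ohlc[1:, 3]             # today's OHLC -> tomorrow's close
    X_train, y_train = X[:split], y[:split]
    X_test, y_test = X[split:], y[split:]

    knn = KNeighborsRegressor(n_neighbors=k).fit(X_train, y_train)
    knn_pred = knn.predict(X_test)
    rw_pred = X_test[:, 3]                    # random walk: tomorrow = today

    return mape(y_test, knn_pred), mape(y_test, rw_pred)

# Synthetic price series standing in for the Samsung Electronics data.
rng = np.random.default_rng(1)
close = 40000 + np.cumsum(rng.normal(0, 300, 800))
ohlc = np.column_stack([close * 0.995, close * 1.01, close * 0.99, close])

print("k-NN MAPE, random-walk MAPE:", evaluate(ohlc, split=600))
```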