• Title/Summary/Keyword: random grid

A Study on the Evaluation of Interpolation Methods in PIV (PIV에서의 보간기법의 평가에 관한 연구)

  • Choi, J.W;Cho, D.H;Choi, M.S;Lee, Y.H
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.20 no.4
    • /
    • pp.412-412
    • /
    • 1996
  • To maintain high spatial accuracy and short CPU time when interpolating data from a grid to random positions, or the inverse, in PIV, many proposed techniques are compared and discussed, mainly in terms of interpolation error and computing time. Artificial PIV-like data are furnished by a CFD result. First, for interpolation from grid to random positions, the multiquadric method gives the highest accuracy at the longest CPU time, while Taylor series expansion methods give reasonable accuracy at a lower computational load. Secondly, the sub-pixel resolution analysis for estimating the coordinates of the maximum correlation coefficient, essential in grey-level correlation PIV, reveals that 8-neighbour second-order least-squares interpolation gives the highest accuracy under realistic flow conditions.
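
The multiquadric interpolation mentioned above can be sketched in a few lines of NumPy. This is a minimal illustration of grid-to-random-position interpolation, not the authors' code; the shape parameter `eps`, the test field, and the grid size are assumptions.

```python
# Minimal sketch of multiquadric interpolation from grid points to random
# positions (illustrative only; parameter values are assumptions).
import numpy as np

def multiquadric_interpolate(grid_xy, grid_values, query_xy, eps=1.0):
    """Interpolate grid samples at arbitrary query points with multiquadric RBFs."""
    # Pairwise distances between the grid nodes.
    d = np.linalg.norm(grid_xy[:, None, :] - grid_xy[None, :, :], axis=-1)
    phi = np.sqrt(d**2 + eps**2)             # multiquadric basis
    weights = np.linalg.solve(phi, grid_values)
    # Distances from every query point to every grid node.
    dq = np.linalg.norm(query_xy[:, None, :] - grid_xy[None, :, :], axis=-1)
    return np.sqrt(dq**2 + eps**2) @ weights

# Example: sample a smooth field on a 10x10 grid, evaluate at random positions.
x, y = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
grid_xy = np.column_stack([x.ravel(), y.ravel()])
values = np.sin(2 * np.pi * grid_xy[:, 0]) * np.cos(2 * np.pi * grid_xy[:, 1])
queries = np.random.rand(5, 2)
print(multiquadric_interpolate(grid_xy, values, queries))
```

Solving the dense N-by-N kernel system is what drives the long CPU time the abstract reports for this method.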

Heterogeneous Resource Management for Adaptive Grid System (적응형 그리드 시스템을 위한 이질적인 자원 관리)

  • Eui-Nam Huh;Woong-Jae Lee;Jong-Sook Lee
    • Journal of the Korea Society for Simulation
    • /
    • v.12 no.4
    • /
    • pp.51-59
    • /
    • 2003
  • Real-time applications in a Grid environment face several resource-management problems: (1) dynamic resource allocation to meet QoS objectives, (2) heterogeneous resources that differ in scale or capacity for the same unit, and (3) varying resource availability and resource needs. This paper describes the techniques of a resource manager (RM) that handles the above problems to support the QoS of dynamic real-time applications on the Grid. The contributions of this paper are as follows: unification of dynamic resource requirements among heterogeneous hosts, control of resources in heterogeneous environments, and dynamic load balancing/sharing. Our heuristic allocation scheme performs not only 257% better than random, 142% better than round robin, and 36.4% better than least load in QoS sensitivity, but also 38.6% better than random, 28.5% better than round robin, and 31.6% better than least load in QoS.
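
For context on the baselines the scheme is compared against, a minimal sketch of random, round-robin, and least-load allocation is shown below; the host set, job costs, and makespan metric are illustrative assumptions, not the paper's RM.

```python
# Minimal sketch contrasting the baseline allocation policies the paper compares
# against (random, round-robin, least-load); the job/host model is hypothetical.
import random

hosts = {"h1": 0.0, "h2": 0.0, "h3": 0.0}            # host -> accumulated load
jobs = [random.uniform(1, 10) for _ in range(20)]    # job costs (arbitrary units)

def allocate(policy):
    load = dict.fromkeys(hosts, 0.0)
    rr = 0
    names = list(load)
    for cost in jobs:
        if policy == "random":
            h = random.choice(names)
        elif policy == "round_robin":
            h = names[rr % len(names)]
            rr += 1
        else:  # "least_load": pick the currently least-loaded host
            h = min(load, key=load.get)
        load[h] += cost
    return max(load.values())                         # makespan: lower is better

for p in ("random", "round_robin", "least_load"):
    print(p, round(allocate(p), 2))
```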

Random imperfection effect on reliability of space structures with different supports

  • Roudsari, Mehrzad Tahamouli;Gordini, Mehrdad
    • Structural Engineering and Mechanics
    • /
    • v.55 no.3
    • /
    • pp.461-472
    • /
    • 2015
  • The existence of initial imperfections in the manufacturing or assembly of double-layer space structures having hundreds or thousands of members is inevitable. Many of the imperfections, such as the initial curvature of the members and residual stresses in members, are random in nature. In this paper, the probabilistic effect of initial curvature imperfections on the load-bearing capacity of double-layer grid space structures with different types of supports has been investigated. First, for the initial curvature imperfection of each member, a random number is generated from a gamma distribution. Then, by employing the same probabilistic model, the imperfections are randomly distributed amongst the members of the structure. Afterwards, the collapse behavior and the ultimate bearing capacity of the structure are determined by using nonlinear push-down analysis, and this procedure is repeated many times. Ultimately, based on the maximum bearing-capacity values acquired from the analysis of the different samples, the structure's reliability is obtained by using the Monte Carlo simulation method. The results show the sensitivity of the collapse behavior of double-layer grid space structures to the random distribution of initial imperfections and to the support type.
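
The sampling logic described above can be outlined as follows. This is a minimal Monte Carlo sketch under stated assumptions: the `capacity` function is a placeholder standing in for the nonlinear push-down analysis, and the gamma parameters, member count, and demand level are illustrative.

```python
# Minimal Monte Carlo sketch: draw gamma-distributed member imperfections,
# evaluate a (placeholder) capacity model, and estimate reliability as
# P(capacity >= demand).
import numpy as np

rng = np.random.default_rng(0)
n_members, n_samples = 500, 2000
demand = 0.80                       # normalized load demand (assumed)

def capacity(imperfections):
    # Placeholder: capacity degrades with the mean member imperfection.
    return 1.0 - 2.0 * imperfections.mean()

caps = np.empty(n_samples)
for i in range(n_samples):
    # Gamma-distributed initial curvature imperfections, one per member.
    imp = rng.gamma(shape=2.0, scale=0.02, size=n_members)
    caps[i] = capacity(imp)

reliability = np.mean(caps >= demand)
print(f"estimated reliability: {reliability:.3f}")
```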

Application of machine learning for merging multiple satellite precipitation products

  • Van, Giang Nguyen;Jung, Sungho;Lee, Giha
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2021.06a
    • /
    • pp.134-134
    • /
    • 2021
  • Precipitation is a crucial component of the water cycle and plays a key role in hydrological processes. Traditionally, gauge-based precipitation is the main method for achieving high accuracy of rainfall estimation, but gauge networks are sparse in mountainous areas. Recently, satellite-based precipitation products (SPPs) have provided grid-based precipitation with spatio-temporal variability, but SPPs contain a lot of uncertainty in the estimated precipitation, and their spatial resolution is quite coarse. To overcome these limitations, this study aims to generate new grid-based daily precipitation using Automatic Weather System (AWS) data in Korea and multiple SPPs (i.e., CHIRPSv2, CMORPH, GSMaP, TRMMv7) during the period 2003-2017. This study used a machine-learning-based Random Forest (RF) model to generate the new merged precipitation. In addition, several statistical linear merging methods are used for comparison with the results of the RF model. In order to investigate the efficiency of RF, observed data from 64 Automated Synoptic Observation System (ASOS) stations were collected to evaluate the accuracy of the products through the Kling-Gupta efficiency (KGE), probability of detection (POD), false alarm rate (FAR), and critical success index (CSI). As a result, the new precipitation generated through the random forest model showed higher accuracy than each satellite rainfall product and reflected spatio-temporal variability better than the other statistical merging methods. Therefore, a random-forest-based ensemble satellite precipitation product can be efficiently used for hydrological simulations in ungauged basins such as the Mekong River.
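
A minimal sketch of the merging step follows, assuming synthetic data in place of the AWS gauge observations and SPP grids; the column names, noise levels, and scikit-learn Random Forest settings are assumptions for illustration.

```python
# Minimal sketch of merging several satellite precipitation products with a
# Random Forest regressor trained against gauge values (synthetic data).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
truth = rng.gamma(2.0, 3.0, n)                      # "gauge" daily rainfall (mm)
df = pd.DataFrame({
    "chirps": truth + rng.normal(0, 2, n),          # noisy SPP estimates
    "cmorph": truth + rng.normal(0, 3, n),
    "gsmap":  truth + rng.normal(0, 2.5, n),
    "trmm":   truth + rng.normal(0, 3.5, n),
    "gauge":  truth,
})

X_train, X_test, y_train, y_test = train_test_split(
    df[["chirps", "cmorph", "gsmap", "trmm"]], df["gauge"], test_size=0.2)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

merged = rf.predict(X_test)                         # merged precipitation estimate
print("RF R^2 on held-out days:", round(rf.score(X_test, y_test), 3))
```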

Service Prediction-Based Job Scheduling Model for Computational Grid (계산 그리드를 위한 서비스 예측 기반의 작업 스케쥴링 모델)

  • Jang Sung-Ho;Lee Jong-Sik
    • Proceedings of the Korea Society for Simulation Conference
    • /
    • 2005.05a
    • /
    • pp.29-33
    • /
    • 2005
  • Grid computing is widely applicable to various fields of industry, including process control and manufacturing, military command and control, transportation management, and so on. From the viewpoint of application area, grid computing can be classified into three types: computational grid, data grid, and access grid. This paper focuses on the computational grid, which handles complex and large-scale computing problems. The computational grid is characterized by system dynamics, handling a variety of processors and jobs in continuous time. To solve the problems of system complexity and reliability caused by complex system dynamics, the computational grid needs scheduling policies that allocate various jobs to proper processors and decide the processing order of the allocated jobs. This paper proposes the service prediction-based job scheduling model and presents its algorithm, which is applicable to the computational grid. The service prediction-based job scheduling model can minimize overall system execution time since the model predicts the processing time of each processing component and distributes a job to the processing component with the minimum predicted processing time. This paper implements the job scheduling model in the DEVSJAVA modeling and simulation environment and simulates a case study to evaluate its efficiency and reliability. Empirical results, compared to conventional scheduling policies such as random scheduling and round-robin scheduling, show the usefulness of service prediction-based job scheduling.
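
The scheduling idea can be sketched as below: dispatch each job to the component with the minimum predicted completion time and compare against random and round-robin baselines. The prediction model (backlog plus job size times mean service time) and the workload are assumptions, not the paper's DEVSJAVA model.

```python
# Minimal sketch of service prediction-based dispatching versus random and
# round-robin baselines (hypothetical processors and workload).
import random

class Processor:
    def __init__(self, mean_service):
        self.mean_service = mean_service
        self.backlog = 0.0                  # total predicted work queued

    def predicted_finish(self, job_size):
        return self.backlog + job_size * self.mean_service

def schedule(policy, jobs, procs):
    rr = 0
    for size in jobs:
        if policy == "predictive":          # pick the minimum predicted finish time
            p = min(procs, key=lambda q: q.predicted_finish(size))
        elif policy == "round_robin":
            p = procs[rr % len(procs)]; rr += 1
        else:
            p = random.choice(procs)
        p.backlog += size * p.mean_service
    return max(q.backlog for q in procs)    # overall completion time

jobs = [random.uniform(1, 5) for _ in range(50)]
for policy in ("predictive", "round_robin", "random"):
    procs = [Processor(m) for m in (0.5, 1.0, 2.0)]
    print(policy, round(schedule(policy, jobs, procs), 1))
```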

A New Integral Representation of the Coverage Probability of a Random Convex Hull

  • Son, Won;Ng, Chi Tim;Lim, Johan
    • Communications for Statistical Applications and Methods
    • /
    • v.22 no.1
    • /
    • pp.69-80
    • /
    • 2015
  • In this paper, the probability that a given point is covered by a random convex hull generated by independent and identically distributed random points in a plane is studied. It is shown that such a probability can be expressed in terms of an integral that can be approximated numerically by function evaluations over the grid points in a 2-dimensional space. The new integral representation allows such a probability to be computed efficiently. The computational burdens under the proposed integral representation and those in the existing literature are compared. The proposed method is illustrated through numerical examples where the random points are drawn from (i) the uniform distribution over a square and (ii) the bivariate normal distribution over the two-dimensional Euclidean space. The applications of the proposed method in statistics are discussed.
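
As a point of reference, the quantity in question can also be estimated by brute-force Monte Carlo simulation, which is what the proposed integral representation is meant to outperform. The sketch below assumes uniform points on the unit square and uses SciPy's Delaunay triangulation for the point-in-hull test.

```python
# Brute-force Monte Carlo estimate (not the paper's integral representation) of
# the probability that a fixed point lies inside the convex hull of n i.i.d.
# random points.
import numpy as np
from scipy.spatial import Delaunay

def coverage_prob(point, n_points, n_trials=5000, sampler=None, seed=0):
    rng = np.random.default_rng(seed)
    if sampler is None:                     # default: uniform on the unit square
        sampler = lambda k: rng.random((k, 2))
    hits = 0
    for _ in range(n_trials):
        cloud = sampler(n_points)
        if Delaunay(cloud).find_simplex(point) >= 0:   # point inside the hull?
            hits += 1
    return hits / n_trials

print(coverage_prob(np.array([0.5, 0.5]), n_points=10))   # centre of the square
print(coverage_prob(np.array([0.9, 0.9]), n_points=10))   # near a corner
```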

Power Factor Improvement of Distribution System with EV Chargers based on SMC Method for SVC

  • Farkoush, Saeid Gholami;Kim, Chang-Hwan;Jung, Ho-Chul;Lee, Sanghyuk;Theera-Umpon, Nipon;Rhee, Sang-Bong
    • Journal of Electrical Engineering and Technology
    • /
    • v.12 no.4
    • /
    • pp.1340-1347
    • /
    • 2017
  • The utilization of Electric Vehicles (EVs) has been growing in popularity in recent years due to rising fuel prices and the scarcity of natural resources. Random, unexpected charging by home EV chargers in distribution networks is predicted for the future. Power quality problems, such as power factor fluctuation in a residential distribution network with random EV chargers, are explored. This paper proposes a high-performance nonlinear sliding mode controller (SMC) for an EV charging system to compensate voltage distortions and to enhance the power factor against unbalanced EV chargers. For verification of the proposed scheme, MATLAB-Simulink simulations are performed on a 22.9-kV grid. The results show that the proposed scheme can improve the power factor of a smart grid with EV chargers connected to it.
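
The power-factor arithmetic underlying SVC compensation can be illustrated with a short calculation (this is not the sliding mode controller itself); the feeder load values and target power factor below are assumptions.

```python
# Simple power-factor arithmetic behind SVC compensation: how much reactive
# power must be injected to raise a feeder's power factor to a target value.
import math

def pf(p_kw, q_kvar):
    return p_kw / math.hypot(p_kw, q_kvar)

def svc_kvar_for_target_pf(p_kw, q_kvar, pf_target):
    # Reactive power the load may keep while meeting the target power factor.
    q_allowed = p_kw * math.tan(math.acos(pf_target))
    return max(q_kvar - q_allowed, 0.0)     # kvar the SVC must supply

p, q = 800.0, 600.0                         # feeder load with EV chargers (kW, kvar)
print("power factor before:", round(pf(p, q), 3))
q_svc = svc_kvar_for_target_pf(p, q, 0.98)
print("SVC injection needed:", round(q_svc, 1), "kvar")
print("power factor after:", round(pf(p, q - q_svc), 3))
```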

Reduction in Sample Size for Efficient Monte Carlo Localization (효율적인 몬테카를로 위치추정을 위한 샘플 수의 감소)

  • Yang Ju-Ho;Song Jae-Bok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.12 no.5
    • /
    • pp.450-456
    • /
    • 2006
  • Monte Carlo localization (MCL) is known to be one of the most reliable methods for pose estimation of a mobile robot. Although MCL is capable of estimating the robot pose even for a completely unknown initial pose in a known environment, it takes considerable time to give an initial pose estimate because the number of random samples is usually very large, especially for a large-scale environment. For practical implementation of MCL, therefore, a reduction in sample size is desirable. This paper presents a novel approach to reducing the number of samples used in the particle filter for efficient implementation of MCL. To this end, the topological information generated through the thinning technique, which is commonly used in image processing, is employed. The global topological map is first created from the given grid map of the environment. The robot then scans the local environment using a laser rangefinder and generates a local topological map. The robot navigates only on this local topological edge, which is likely to be similar to the one obtained off-line from the given grid map. Random samples are drawn near the topological edge instead of being taken with a uniform distribution over the whole environment, since the robot traverses along the edge. Experimental results using the proposed method show that the number of samples can be reduced considerably, and that the time required for robot pose estimation can also be substantially decreased, without adverse effects on the performance of MCL.
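
The sampling idea can be sketched as follows: draw initial particles near a topological edge rather than uniformly over the map. The edge polyline, map size, noise level, and sample counts below are hypothetical.

```python
# Minimal sketch of edge-constrained particle sampling versus uniform sampling
# for MCL initialization (hypothetical map and edge polyline).
import numpy as np

rng = np.random.default_rng(1)
map_size = (50.0, 30.0)                     # metres (assumed)
# A topological edge as a polyline of waypoints (e.g., a corridor skeleton).
edge = np.array([[5, 15], [20, 15], [35, 15], [45, 25]], dtype=float)

def uniform_samples(n):
    return rng.random((n, 2)) * map_size

def edge_samples(n, sigma=0.5):
    # Pick random points along the polyline segments, then add Gaussian noise.
    seg = rng.integers(0, len(edge) - 1, size=n)
    t = rng.random(n)[:, None]
    pts = edge[seg] * (1 - t) + edge[seg + 1] * t
    return pts + rng.normal(0.0, sigma, size=(n, 2))

# Far fewer edge-constrained particles can cover the plausible pose region.
print(uniform_samples(5000).shape, edge_samples(500).shape)
```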

Comparison of Hyper-Parameter Optimization Methods for Deep Neural Networks

  • Kim, Ho-Chan;Kang, Min-Jae
    • Journal of IKEEE
    • /
    • v.24 no.4
    • /
    • pp.969-974
    • /
    • 2020
  • Research into hyper-parameter optimization (HPO) has recently been revived by interest in models containing many hyper-parameters, such as deep neural networks. In this paper, we introduce the most widely used HPO methods, such as grid search, random search, and Bayesian optimization, and investigate their characteristics through experiments. The MNIST data set is used to compare the experimental results and find the method that achieves higher accuracy in a relatively short simulation time. The learning rate and weight decay were chosen for this experiment because they are the most commonly tuned hyper-parameters in this kind of experiment.
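
A minimal sketch of grid search versus random search over the two hyper-parameters named above is given below; the validation-accuracy function is a synthetic stand-in for training a network on MNIST, and the search ranges are assumptions.

```python
# Minimal sketch of grid search versus random search over learning rate and
# weight decay, with a synthetic objective standing in for model training.
import itertools
import numpy as np

rng = np.random.default_rng(0)

def validation_accuracy(lr, wd):
    # Hypothetical response surface peaking near lr=1e-3, wd=1e-4.
    return np.exp(-((np.log10(lr) + 3) ** 2 + (np.log10(wd) + 4) ** 2) / 2)

# Grid search: 4 x 4 fixed grid in log-space (16 trials).
lrs = np.logspace(-5, -1, 4)
wds = np.logspace(-6, -2, 4)
grid_best = max(itertools.product(lrs, wds),
                key=lambda p: validation_accuracy(*p))

# Random search: the same budget of 16 trials drawn log-uniformly.
trials = [(10 ** rng.uniform(-5, -1), 10 ** rng.uniform(-6, -2)) for _ in range(16)]
rand_best = max(trials, key=lambda p: validation_accuracy(*p))

print("grid best  :", grid_best, validation_accuracy(*grid_best))
print("random best:", rand_best, validation_accuracy(*rand_best))
```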

Performance Comparison of Machine Learning Models for Grid-Based Flood Risk Mapping - Focusing on the Case of Typhoon Chaba in 2016 - (격자 기반 침수위험지도 작성을 위한 기계학습 모델별 성능 비교 연구 - 2016 태풍 차바 사례를 중심으로 -)

  • Jihye Han;Changjae Kwak;Kuyoon Kim;Miran Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_2
    • /
    • pp.771-783
    • /
    • 2023
  • This study aims to compare the performance of machine learning models for preparing a grid-based disaster risk map of the flooding in Jung-gu, Ulsan, caused by Typhoon Chaba in 2016. Dynamic data such as rainfall and river height, and static data such as building, population, and land cover data, were used to conduct a risk analysis of flooding disasters. The data were constructed as 10 m grid data based on the national point number, and a sample dataset was built using the risk value calculated for each grid cell as the dependent variable and the values of five influencing factors as the independent variables. The total number of samples is 15,910, and the training, validation, and test datasets were randomly extracted at a 6:2:2 ratio to build the machine-learning models. The machine-learning techniques were random forest (RF), support vector machine (SVM), and k-nearest neighbor (KNN), and prediction accuracy was highest for SVM (91.05%), followed by RF (83.08%) and KNN (76.52%). Deriving the priority of the influencing factors through the RF model confirmed that rainfall and river water level greatly influence the risk.
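
The comparison set-up can be sketched as below, assuming synthetic grid features in place of the study's five influencing factors; the 6:2:2 split and the three classifiers follow the abstract, while the model settings are assumptions.

```python
# Minimal sketch of the model comparison: a 6:2:2 train/validation/test split
# and the three classifiers named in the abstract, on synthetic grid data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((15910, 5))                        # five influencing factors per grid cell
y = (X[:, 0] + 0.5 * X[:, 1] > 0.9).astype(int)   # hypothetical risk label

# 6:2:2 split: first split off 40%, then halve it into validation and test.
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, round(accuracy_score(y_te, model.predict(X_te)), 3))
```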