• Title/Summary/Keyword: Scaling Approach

Search results: 223

Estimation of design floods for ungauged watersheds using a scaling-based regionalization approach (스케일링 기법 기반의 지역화를 통한 미계측 유역의 설계 홍수량 산정)

  • Kim, Jin-Guk;Kim, Jin-Young;Choi, Hong-Geun;Kwon, Hyun-Han
    • Journal of Korea Water Resources Association, v.51 no.9, pp.769-782, 2018
  • Estimation of design floods is typically required for hydrologic design purposes. Design floods are routinely estimated for water resources planning and for assessing the safety and risk of existing water-related structures. However, hydrologic data, especially the streamflow data needed for design purposes, are still very limited in South Korea, and the streamflow records are relatively short compared to the rainfall records. This study therefore collected a large number of design flood estimates and watershed characteristics (e.g. area, slope, and altitude) from the national river database. We formulated a scaling approach that estimates the design flood as a function of the watershed characteristics. A Hierarchical Bayesian model was then adopted to evaluate both the parameters and their uncertainties in the regionalization approach, which models the hydrologic response of ungauged basins through regression relationships between watershed structure and model parameters. The proposed modeling framework was validated on ungauged watersheds. It achieves a higher correlation coefficient than the existing approach, which relies solely on area as a predictor. Moreover, the proposed approach quantifies the uncertainty associated with the model parameters, better characterizing design floods at ungauged watersheds.
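The core idea of a scaling approach is a power-law relation between the design flood and watershed characteristics. A minimal single-predictor sketch (ordinary least squares in log-log space on synthetic data; the paper itself uses a hierarchical Bayesian model with several predictors, so all numbers here are illustrative assumptions):

```python
import numpy as np

# Hypothetical scaling relation Q = a * A^b between design flood Q and
# watershed area A, fitted by least squares on log-transformed data.
rng = np.random.default_rng(0)
area = rng.uniform(50, 2000, size=100)            # synthetic watershed areas (km^2)
true_a, true_b = 4.0, 0.75                        # assumed scaling coefficients
flood = true_a * area**true_b * np.exp(rng.normal(0, 0.1, 100))  # noisy floods

# log Q = log a + b * log A  ->  linear regression
X = np.column_stack([np.ones_like(area), np.log(area)])
coef, *_ = np.linalg.lstsq(X, np.log(flood), rcond=None)
log_a, b = coef
print(f"fitted a = {np.exp(log_a):.2f}, scaling exponent b = {b:.2f}")
```

A hierarchical Bayesian treatment would additionally place priors on `log_a` and `b` and pool them across regions, yielding uncertainty intervals rather than point estimates.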

An Efficient VM-Level Scaling Scheme in an IaaS Cloud Computing System: A Queueing Theory Approach

  • Lee, Doo Ho
    • International Journal of Contents, v.13 no.2, pp.29-34, 2017
  • Cloud computing is becoming an effective and efficient way of integrating computing resources and services. Through centralized management of resources and services, cloud computing delivers hosted services over the internet, such that access to shared hardware, software, applications, information, and all other resources is elastically provided to the consumer on demand. The main enabling technology for cloud computing is virtualization. Virtualization software creates a temporarily simulated or extended version of computing and network resources. The objectives of virtualization are as follows: first, to fully utilize shared resources through partitioning and time-sharing; second, to centralize resource management; third, to enhance cloud data center agility and provide the scalability and elasticity required for on-demand capabilities; fourth, to improve testing and running software diagnostics on different operating platforms; and fifth, to improve the portability of applications and workload migration capabilities. One of the key features of cloud computing is elasticity. It enables users to create and remove virtual computing resources dynamically according to changing demand, but it is not easy to decide on the right amount of resources. Indeed, proper provisioning of resources to applications is an important issue in IaaS cloud computing. Most web applications encounter large and fluctuating volumes of task requests. In predictable situations, resources can be provisioned in advance through capacity-planning techniques. But in the case of unplanned, spiky requests, it is desirable to scale the resources automatically (auto-scaling), adjusting the resources allocated to an application based on its needs at any given time. This frees the user from the burden of deciding how many resources are necessary each time. In this work, we propose an analytical and efficient VM-level scaling scheme by modeling each VM in a data center as an M/M/1 processor-sharing queue. The proposed VM-level scaling scheme is validated via a numerical experiment.
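For an M/M/1 processor-sharing queue with arrival rate λ and service rate μ (λ < μ), the mean response time is 1/(μ − λ). A hedged sketch of a scale-out rule built on that formula (the function name and SLA threshold are illustrative, not from the paper):

```python
# Queueing-based scaling rule, assuming each VM is an M/M/1-PS queue with
# mean response time T = 1/(mu - lambda), and total traffic split evenly.
def vms_needed(arrival_rate, service_rate, t_sla):
    """Smallest number of identical VMs keeping mean response time <= t_sla."""
    n = 1
    while True:
        lam = arrival_rate / n                      # per-VM arrival rate
        if lam < service_rate and 1.0 / (service_rate - lam) <= t_sla:
            return n
        n += 1

# One VM at lambda=90, mu=100 gives T = 0.1 s; a 50 ms SLA forces a second VM.
print(vms_needed(arrival_rate=90.0, service_rate=100.0, t_sla=0.05))  # → 2
```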

An Efficient Multidimensional Scaling Method based on CUDA and Divide-and-Conquer (CUDA 및 분할-정복 기반의 효율적인 다차원 척도법)

  • Park, Sung-In;Hwang, Kyu-Baek
    • Journal of KIISE: Computing Practices and Letters, v.16 no.4, pp.427-431, 2010
  • Multidimensional scaling (MDS) is a widely used method for dimensionality reduction, whose purpose is to represent high-dimensional data in a low-dimensional space while preserving the distances among objects as much as possible. MDS has mainly been applied to data visualization and feature selection. Among the various MDS methods, classical MDS is not readily applicable on normal desktop computers to data with large numbers of objects, due to its computational complexity: it needs to solve eigenpair problems on dissimilarity matrices based on Euclidean distance. Thus, the running time and memory requirements of classical MDS grow rapidly as n (the number of objects) increases, restricting its use in large-scale domains. In this paper, we propose an efficient approximation algorithm for classical MDS based on divide-and-conquer and CUDA. Through a set of experiments, we show that our approach is highly efficient and effective for the analysis and visualization of data consisting of several thousands of objects.
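The eigenpair step the paper accelerates is the heart of classical MDS. A minimal CPU-only sketch of that base algorithm (no CUDA, no divide-and-conquer; function name is illustrative):

```python
import numpy as np

# Classical MDS: double-center the squared-distance matrix, then embed with
# the top-k eigenpairs. This is the O(n^3) step that motivates acceleration.
def classical_mds(X, k=2):
    """Embed points X (n x d) into k dimensions from pairwise Euclidean distances."""
    D2 = np.square(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1))
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    B = -0.5 * J @ D2 @ J                          # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                       # eigenpairs (ascending order)
    idx = np.argsort(w)[::-1][:k]                  # top-k eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

pts = np.random.default_rng(1).normal(size=(200, 10))
emb = classical_mds(pts, k=2)
print(emb.shape)  # → (200, 2)
```

For data that is exactly k-dimensional, this embedding reproduces the original pairwise distances up to rotation and reflection.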

Qualitative study on the scaling experience through the application of comprehensive dental hygiene care : A grounded theory approach (포괄치위생관리 과정을 적용한 스케일링 수행 경험에 관한 질적 연구 : 근거이론적 접근)

  • Park, Seon-Mi;Moon, Sang-Eun;Kim, Yun-Jeong;Kim, Seon-Yeong;Cho, Hye-Eun;Kang, Hyun-Joo
    • Journal of Korean Society of Dental Hygiene, v.20 no.4, pp.395-408, 2020
  • Objectives: This study was performed to provide evidence-based data on the expected professional impact of dental hygienists and to apply and document the comprehensive dental hygiene care process, through an in-depth analysis of their scaling experience and an investigation of the importance of evidence-based scaling practice. Methods: The data were collected from June 3, 2019 to October 3, 2019 through in-depth individual interviews with 10 dental hygienists working in dental clinics and hospitals across regions. The data were analyzed using the grounded theory methodology, a qualitative research method. Results: The core category derived from the paradigm model and change process in this study was 'a process of becoming a mature professional beyond practical work'. Conclusions: In this study, the participants were able to gain a sense of occupational accomplishment as dental hygienists by performing scaling based on the comprehensive dental hygiene care (CDHC) process, and to develop into professionals through continuous effort and research to enhance their job competencies.

ISAR Cross-Range Scaling for a Maneuvering Target (기동표적에 대한 ISAR Cross-Range Scaling)

  • Kang, Byung-Soo;Bae, Ji-Hoon;Kim, Kyung-Tae;Yang, Eun-Jung
    • The Journal of Korean Institute of Electromagnetic Engineering and Science, v.25 no.10, pp.1062-1068, 2014
  • In this paper, a novel approach for estimating a target's rotation velocity (RV) is proposed for inverse synthetic aperture radar (ISAR) cross-range scaling (CRS). The scale-invariant feature transform (SIFT) is applied to two sequentially generated ISAR images to extract non-fluctuating scatterers. Using the fact that the distance between the target's rotation center (RC) and each SIFT feature is the same in both images, we can set a criterion for estimating the RV. The criterion is then optimized with the proposed method, based on particle swarm optimization (PSO) combined with an exhaustive search. Simulation results show that the proposed algorithm can precisely estimate the RV of a scenario-based maneuvering target without RC information. Using the estimated RV, the ISAR image can be correctly re-scaled along the cross-range direction.
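For background on why the RV matters here (a standard ISAR relation, not taken from this paper): a scatterer at cross-range $x$ on a target rotating at rate $\Omega$ produces a Doppler shift proportional to both, so once $\Omega$ is estimated, Doppler bins can be mapped to meters:

```latex
f_d = \frac{2\,\Omega\, x}{\lambda}
\quad\Longrightarrow\quad
x = \frac{\lambda\, f_d}{2\,\Omega},
```

where $\lambda$ is the radar wavelength. An error in the estimated $\Omega$ therefore stretches or compresses the image along the cross-range axis, which is exactly what CRS corrects.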

A study on Deep Q-Networks based Auto-scaling in NFV Environment (NFV 환경에서의 Deep Q-Networks 기반 오토 스케일링 기술 연구)

  • Lee, Do-Young;Yoo, Jae-Hyoung;Hong, James Won-Ki
    • KNOM Review, v.23 no.2, pp.1-10, 2020
  • Network Function Virtualization (NFV) is a key technology of 5G networks that enables networks to be built and operated flexibly. However, NFV can complicate network management because it creates numerous virtual resources that must be managed. In NFV environments, service function chaining (SFC), composed of virtual network functions (VNFs), is widely used to apply a series of network functions to traffic. It is therefore necessary to dynamically allocate the right amount of computing resources, or the right number of instances, to an SFC to meet service requirements. In this paper, we propose Deep Q-Networks (DQN)-based auto-scaling to operate the appropriate number of VNF instances in an SFC. The proposed approach not only resizes the number of VNF instances in a multi-tier SFC but also selects which tier to scale in response to the dynamic traffic forwarded through the SFC.
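The scaling decision can be framed as a Markov decision process. A toy sketch using tabular Q-learning as a stand-in for the paper's DQN, on a single VNF tier (the load model, reward weights, and bounds are illustrative assumptions, not values from the paper):

```python
import random

# State = (discretized load, instance count); actions scale the tier in, out,
# or not at all. Tabular Q-learning replaces the paper's neural Q-function.
ACTIONS = (-1, 0, 1)          # scale-in, no-op, scale-out
Q = {}

def choose(state, eps=0.1):
    """Epsilon-greedy action selection."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def update(s, a, r, s2, alpha=0.5, gamma=0.9):
    """One-step Q-learning update."""
    best_next = max(Q.get((s2, b), 0.0) for b in ACTIONS)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))

def reward(load, instances):
    """Penalize both SLA risk (utilization far from target) and idle instances."""
    return -abs(load / instances - 0.7) - 0.1 * instances

random.seed(0)
instances = 3
for _ in range(2000):
    load = random.uniform(0.5, 5.0)                  # offered traffic (arbitrary units)
    state = (int(load), instances)
    action = choose(state)
    instances = min(10, max(1, instances + action))  # bounded scaling
    update(state, action, reward(load, instances), (int(load), instances))
```

The paper's setting extends this to multi-tier SFCs, where the action additionally selects which tier to scale.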

Performance Reengineering of Embedded Real-Time Systems (내장형 실시간 시스템의 성능 개선을 위한 리엔지니어링 기법)

  • Hong, Seongsoo
    • Journal of KIISE: Computer Systems and Theory, v.30 no.5_6, pp.299-306, 2003
  • This paper formulates the problem of embedded real-time system re-engineering and presents a solution approach. Embedded system re-engineering is defined as the task of meeting performance requirements newly imposed on a system after its hardware and software have been fully implemented. The performance requirements may include a real-time throughput and an input-to-output latency. The proposed solution approach is based on bottleneck analysis and nonlinear optimization. The inputs to the approach include a system design specified with a process network and a set of task graphs, a task allocation and schedule, and a new real-time throughput requirement specified as a period constraint on the system. The approach works in two steps. In the first step, it determines bottleneck processes in the process network by estimating process latencies. In the second step, it derives a system of constraints whose variables are the performance scaling factors of the processing elements, and solves these constraints for the scaling factors with the objective of minimizing the total hardware cost of the resulting system. The scaling factors suggest the minimal-cost hardware upgrade that meets the new performance requirement. Since this approach does not modify carefully designed software structures, it helps shorten the re-engineering cycle.
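The second step can be pictured as a small constrained optimization. A brute-force sketch under strongly simplified assumptions (latency shrinks as base_latency / factor, cost grows linearly with the factor, and factors come from a discrete catalog; all names and numbers are illustrative, not the paper's formulation):

```python
from itertools import product

# Choose per-processing-element scaling factors meeting a new period
# constraint at minimum hardware cost, by exhaustive search over a catalog.
def min_cost_scaling(base_latency, unit_cost, period, factors=(1.0, 1.5, 2.0, 3.0)):
    best = None
    for combo in product(factors, repeat=len(base_latency)):
        latency = sum(l / f for l, f in zip(base_latency, combo))  # end-to-end latency
        if latency <= period:
            cost = sum(c * f for c, f in zip(unit_cost, combo))    # hardware cost
            if best is None or cost < best[0]:
                best = (cost, combo)
    return best

# Three elements totalling 20 time units must fit into a period of 12.
print(min_cost_scaling([10.0, 6.0, 4.0], [1.0, 2.0, 1.0], period=12.0))
```

A real instance would replace the exhaustive search with the nonlinear optimization the paper describes, but the structure of the constraints is the same.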

Width Operator for Resonance Width Determination

  • Park, Tae-Jun
    • Bulletin of the Korean Chemical Society, v.17 no.2, pp.198-200, 1996
  • The resonance width may be determined directly by solving an eigenvalue equation for the width operator derived in this work, based on the method of complex scaling transformation. The width-operator approach has two advantages over the conventional rotating-coordinate method: 1) the calculation can be done in real arithmetic, and 2) the so-called θ-trajectory is not required for determining the resonance widths. Application to one- and two-dimensional model problems can be easily implemented.
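For context (standard complex-scaling theory, not specific to this paper's operator): under the dilation $r \to r e^{i\theta}$, the scaled Hamiltonian $H(\theta)$ acquires complex eigenvalues for resonance states, with the imaginary part giving the width:

```latex
H(\theta)\,\psi = E\,\psi,
\qquad
E = E_r - \tfrac{i}{2}\,\Gamma,
```

where $E_r$ is the resonance position and $\Gamma$ the resonance width (the inverse lifetime). The conventional method locates these eigenvalues by tracing their stability along a trajectory in $\theta$; the width operator of this paper avoids that step.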

Development of Scaled Explosion Logit Model Considering Reliability of Ranking Data (SP 순위 자료별 오차를 고려하는 순위로짓 모형 추정에 관한 연구)

  • Kim, Kang-Soo;Cho, Hye-Jin
    • Journal of Korean Society of Transportation, v.22 no.6, pp.197-206, 2004
  • In ranking data, respondents rank a number of alternatives in order of preference, and an exploded logit model is generally used, under the assumption that each rank contains the same amount of random noise. This study investigates the reliability of ranking data and identifies whether different decision rules operate at each rank stage. The results show differences in the amount of unexplained variation across ranking stages. A single scaling parameter could not explain the differences in the variances of the individual coefficients between the two ranking data sets. This paper also investigated the optimal explosion depth of the exploded logit model using the suggested scaling approach. The scaling should be based on the particular variables that have different variances rather than on the whole data set. The empirical analysis shows that an explosion depth of 2 is appropriate after scaling the second-rank data set, while an explosion including the third rank is inappropriate even when the third-rank data set is scaled.
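As background (the textbook exploded logit, not this paper's estimates): a ranking $r_1 \succ r_2 \succ \cdots \succ r_J$ is "exploded" into a sequence of independent choices, each made from the alternatives not yet ranked, with a rank-specific scale parameter $\mu_j$ on the systematic utilities $V$:

```latex
P(r_1 \succ r_2 \succ \cdots \succ r_J)
= \prod_{j=1}^{J-1}
\frac{\exp\!\left(\mu_j V_{r_j}\right)}
     {\sum_{k=j}^{J} \exp\!\left(\mu_j V_{r_k}\right)}.
```

The standard model fixes $\mu_j = \mu$ for all ranks, i.e. equal noise at every rank stage; that is precisely the assumption this study relaxes by estimating separate scales.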

Detecting outliers in multivariate data and visualization-R scripts (다변량 자료에서 특이점 검출 및 시각화 - R 스크립트)

  • Kim, Sung-Soo
    • The Korean Journal of Applied Statistics, v.31 no.4, pp.517-528, 2018
  • We provide R scripts to detect outliers in multivariate data and to visualize them. Outlier detection is provided through three approaches: 1) robust Mahalanobis distance, 2) a method for high-dimensional data, and 3) a density-based method. We use the following techniques to visualize the detected potential outliers: 1) multidimensional scaling (MDS) and a minimal spanning tree (MST) with k-means clustering, 2) MDS with fviz_cluster, and 3) principal component analysis (PCA) with fviz_cluster. As a real data set, we use MLB pitching data, including Ryu, Hyun-jin, for 2013 and 2014. The developed R scripts and R package can be downloaded at "http://www.knou.ac.kr/~sskim/ddpoutlier.html".
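The paper ships R scripts; a Python analogue of the Mahalanobis-distance step looks like the sketch below. Note it uses the plain sample covariance rather than a robust MCD estimator, so it is illustrative only (the function name and cutoff are assumptions):

```python
import numpy as np

# Flag points whose squared Mahalanobis distance exceeds a chi-square cutoff.
def mahalanobis_outliers(X, cutoff=7.38):     # 7.38 ~ chi^2 0.975-quantile, 2 d.f.
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mu
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)  # squared distances
    return np.where(d2 > cutoff)[0]

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 2))
X[:3] += 8.0                                  # plant three obvious outliers
print(sorted(mahalanobis_outliers(X)))        # indices include 0, 1, 2
```

With a robust estimator (as in the paper's first approach), the mean and covariance would be computed from the uncontaminated bulk of the data, making the distances of the planted outliers even more extreme.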