• Title/Summary/Keyword: Physical Network Approach


Exploring Support Vector Machine Learning for Cloud Computing Workload Prediction

  • ALOUFI, OMAR
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.10
    • /
    • pp.374-388
    • /
    • 2022
  • Cloud computing has been one of the most important technologies of the last few decades. It was invented to meet user requirements and to satisfy users' needs in simple ways. Since its invention, cloud computing has followed traditional approaches to elasticity, its key characteristic. Elasticity is the feature of cloud computing that seeks to meet users' needs without interruption at run time. Traditional approaches to elasticity have been studied for years using various mathematical models. Although mathematical modelling has been a step forward in meeting users' needs, the optimisation of elasticity is still lacking. To optimise elasticity in the cloud, Machine Learning algorithms can be used to predict upcoming workloads and pass them to the scheduling algorithm, which would achieve excellent provisioning of cloud services, improve the Quality of Service (QoS), and save power. Therefore, this paper investigates the use of machine learning techniques to predict the workload of Physical Hosts (PH) on the cloud and their energy consumption. The experiments are hosted on the School of Computing cloud testbed (SoC) and run real applications with different behaviours, changing workloads over time. The results demonstrate that the machine learning techniques used in the scheduling algorithm can predict the workload of physical hosts (CPU utilisation), which contributes to reducing power consumption by scheduling upcoming virtual machines to the physical host with the lowest CPU utilisation. Additionally, a number of tools are used and explored in this paper, such as the WEKA tool to train on real data and explore machine learning algorithms, and the Zabbix tool to monitor power consumption before and after scheduling the virtual machines to physical hosts. Moreover, the paper follows an agile methodology, which helped in achieving the solution and managing the work effectively.
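
The scheduling idea in the abstract above can be sketched simply: predict each physical host's upcoming CPU utilisation, then place the next virtual machine on the host with the lowest prediction. The sketch below is illustrative only; it substitutes a moving-average predictor for the paper's trained machine learning model, and the host names and window size are hypothetical.

```python
from statistics import mean

def predict_cpu(history, window=3):
    """Naive stand-in for the paper's ML predictor: moving average
    of the last `window` CPU-utilisation samples (percent)."""
    return mean(history[-window:])

def schedule_vm(host_histories):
    """Pick the physical host with the lowest predicted CPU utilisation."""
    predictions = {host: predict_cpu(h) for host, h in host_histories.items()}
    return min(predictions, key=predictions.get)

hosts = {
    "ph1": [60, 65, 70],   # rising load
    "ph2": [30, 28, 26],   # falling load
    "ph3": [50, 50, 50],   # steady load
}
print(schedule_vm(hosts))  # → ph2
```

In the paper itself the predictor is trained offline (e.g. with WEKA) and the placement decision feeds the scheduling algorithm; the greedy lowest-utilisation choice shown here is only the final step of that pipeline.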

A Metamodeling Approach for Leader Progression Model-based Shielding Failure Rate Calculation of Transmission Lines Using Artificial Neural Networks

  • Tavakoli, Mohammad Reza Bank;Vahidi, Behrooz
    • Journal of Electrical Engineering and Technology
    • /
    • v.6 no.6
    • /
    • pp.760-768
    • /
    • 2011
  • The performance of transmission lines and their shielding design during a lightning phenomenon are quite essential to maintaining a reliable power supply to consumers. The leader progression model, as an advanced approach, has recently been developed to calculate the shielding failure rate (SFR) of transmission lines using geometrical data and the physical behavior of upward and downward lightning leaders. However, this method is quite time consuming. The present paper introduces an effective method that uses artificial neural networks (ANNs) to create a metamodel for calculating the SFR of a transmission line based on shielding angle and height. The results of investigations on a real case study reveal that, through proper selection of the ANN structure and good training, the ANN prediction is very close to the result of the detailed simulation, whereas its processing time is far lower than that of the detailed model.

Probabilistic approach to time varying Available Transfer Capability calculation (확률론적 기법을 이용한 시변 ATC 용량 결정)

  • Shin Dong Joon;Lee Jun Kyung;Lee Hyo Sang;Kim Jin O;Chung Hyun Soo
    • Proceedings of the KIEE Conference
    • /
    • summer
    • /
    • pp.645-647
    • /
    • 2004
  • According to the NERC definition, Available Transfer Capability (ATC) is a measure of the transfer capability remaining in the physical transmission network for future commercial activity. To calculate ATC, accurate and defensible TTC, CBM, and TRM should be calculated in advance. This paper proposes a method to quantify time-varying ATC based on a probabilistic approach. The uncertainties of the power system and the market are considered as complex random variables. TRM with the desired probabilistic margin is calculated based on PLF analysis, and CBM is evaluated using the LOLE of the system. The suggested ATC quantification method is verified using the 72-bus IEEE RTS. The proposed method shows efficiency and flexibility in the quantification of ATC.

Distribution Channel Performance Measurement: Valid Measures From Customers' Perspective

  • Kim, Sang-Youl
    • Journal of Navigation and Port Research
    • /
    • v.32 no.2
    • /
    • pp.141-148
    • /
    • 2008
  • This paper is structured into three main parts and a conclusion. The main sections provide definitions of efficiency, effectiveness, and performance in terms of the distribution channel, followed by a review of related performance measurement, before discussing the difficulties of measurement. According to the theoretical approach, the key theoretical issues centre on customer service, logistics excellence, time compression, the use of IT, and a move towards integrated logistics. The empirical approach shows that, in the past, various financial performance indicators were regarded as relevant management information; today, however, management needs additional performance indicators. Therefore, external assessments of effectiveness must be performed to measure customers' satisfaction with the physical flow of products through the distribution channel network. What is needed, then, is to take previous normative and explorative research forward through a framework, by developing valid measures of the distribution channel's effectiveness and efficiency and identifying research methodologies suited to the data collection requirements.

A Logical Hierarchy Architecture of Location Registers for Supporting Mobility in Wireless ATM Networks (무선 ATM 망에서 이동성 지원을 위한 위치 등록기의 논리적 계층 구조)

  • 김도현;조유제
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.6A
    • /
    • pp.361-370
    • /
    • 2003
  • This paper attempts to improve the existing architecture of location registers for location management in Private Network-to-Network Interface (PNNI)-based wireless ATM networks. Our approach enhances the hierarchical architecture of location registers based on the PNNI hierarchical architecture, referred to as the logical hierarchy architecture of location registers, in order to reduce the cost of location management. The logical hierarchy begins with the lowest-level physical location registers, which are organized into clusters called logical groups. These logical groups are then represented in higher layers by logical nodes, and these logical nodes are again grouped into clusters that are treated as single nodes by the next higher layer. In this way, all location registers are included in a tree-type logical hierarchy. Compared with the existing physical hierarchy architecture of location registers, the analysis results show that the proposed logical hierarchy can reduce the number of databases and thereby the average total location management cost.
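
The recursive clustering described above, where physical location registers are grouped into logical nodes, which are grouped again until a single root remains, can be sketched as repeated ceiling division. The register count and group size below are hypothetical, chosen only to make the tree shape visible.

```python
def build_hierarchy(num_registers, group_size):
    """Repeatedly cluster nodes into logical groups of `group_size`
    until a single root logical node remains; return nodes per level."""
    levels = [num_registers]
    while levels[-1] > 1:
        levels.append(-(-levels[-1] // group_size))  # ceiling division
    return levels

print(build_hierarchy(27, 3))  # → [27, 9, 3, 1]
```

Only the lowest level corresponds to physical databases; the higher levels are logical nodes, which is why the logical hierarchy can cut the database count relative to a physical hierarchy with a register at every level.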

Multimedia Information and Authoring for Personalized Media Networks

  • Choi, Insook;Bargar, Robin
    • Journal of Multimedia Information System
    • /
    • v.4 no.3
    • /
    • pp.123-144
    • /
    • 2017
  • Personalized media includes user-targeted and user-generated content (UGC) exchanged through social media and interactive applications. The increased consumption of UGC presents challenges and opportunities to multimedia information systems. We work towards modeling a deep structure for content networks. To gain insights, a hybrid practice with the Media Framework (MF) is presented for network creation of personalized media, which leverages an authoring methodology with user-generated semantics. The system's vertical integration allows users to audition their personalized media networks in the context of a global system network. A navigation scheme with a dynamic GUI shifts the interaction paradigm for content query and sharing. MF adopts a multimodal architecture anticipating emerging use cases and genres. To model the diversification of platforms, information processing is robust across multiple technology configurations. Physical and virtual networks are integrated with distributed services and transactions, IoT, and semantic networks representing media content. MF applies spatiotemporal and semantic signal processing to differentiate action responsiveness from information responsiveness. The extension of multimedia information processing into authoring enables generating interactive and impermanent media on computationally enabled devices. The outcome of this integrated approach with the presented methodologies demonstrates a paradigmatic shift in the concept of UGC as a personalized media network, which is dynamic and evolvable.

A comparative study on applicability and efficiency of machine learning algorithms for modeling gamma-ray shielding behaviors

  • Bilmez, Bayram;Toker, Ozan;Alp, Selcuk;Oz, Ersoy;Icelli, Orhan
    • Nuclear Engineering and Technology
    • /
    • v.54 no.1
    • /
    • pp.310-317
    • /
    • 2022
  • The mass attenuation coefficient is the primary physical parameter for modeling narrow-beam gamma-ray attenuation. A new machine learning based approach is proposed to model the gamma-ray shielding behavior of composites as an alternative to theoretical calculations. Two fuzzy logic algorithms and a neural network algorithm were trained and tested with different mixture ratios of vanadium slag/epoxy resin/antimony in the 0.05 MeV-2 MeV energy range. Two of the algorithms showed excellent agreement with the testing data after optimizing adjustable parameters, with root mean squared error (RMSE) values down to 0.0001. These results are remarkable because mass attenuation coefficients are often presented with four significant figures. Different training data sizes were tried to determine the least number of data points required to train sufficient models. A data set of more than 1000 points is seen to be required for modeling above 0.05 MeV; below this energy, more data points with finer energy resolution might be required. Neuro-fuzzy models were three times faster to train than neural network models, while neural network models achieved lower RMSE. Fuzzy logic algorithms are often overlooked in complex function approximation, yet grid-partitioned fuzzy algorithms showed excellent calculation efficiency and good convergence in predicting the mass attenuation coefficient.
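
The RMSE figure quoted above (down to 0.0001, against coefficients usually given to four significant figures) is easy to make concrete. The sketch below computes RMSE from scratch; the coefficient values are hypothetical illustrations, not data from the paper.

```python
import math

def rmse(predicted, actual):
    """Root mean squared error between model predictions and reference values."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                     / len(actual))

# Hypothetical mass attenuation coefficients (cm^2/g), for illustration only:
# each prediction is off by 0.0001, i.e. one unit in the fourth decimal place.
reference = [0.1704, 0.1536, 0.1418]
model_out = [0.1705, 0.1535, 0.1419]
print(round(rmse(model_out, reference), 6))  # → 0.0001
```

An RMSE of 0.0001 therefore means the model's error is at the level of the last quoted digit of a typical tabulated coefficient.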

Data Sorting-based Adaptive Spatial Compression in Wireless Sensor Networks

  • Chen, Siguang;Liu, Jincheng;Wang, Kun;Sun, Zhixin;Zhao, Xuejian
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.8
    • /
    • pp.3641-3655
    • /
    • 2016
  • Wireless sensor networks (WSNs) provide a promising approach to monitoring physical environments, and prolonging the network lifetime by exploiting the mutual correlation among sensor readings has become a research focus. In this paper, we design a hierarchical network framework that guarantees layered compression. Meanwhile, a data sorting-based adaptive spatial compression scheme (DS-ASCS) is proposed to exploit the spatial correlation among signals. The proposed scheme reduces the amount of data transmission and alleviates network congestion. It also achieves high compression performance by sorting the original sensor readings and selectively discarding the small coefficients in the transformed matrix. Moreover, the compression ratio of the scheme varies with the correlation among signals and the value of an adaptive threshold, so it adapts to various deployment environments. Finally, the simulation results show that the energy of sorted data is more concentrated than that of unsorted data, and that the proposed scheme achieves higher reconstruction precision and compression ratio than other spatial compression schemes.
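
The core intuition of DS-ASCS, that sorting readings concentrates their energy into a few transform coefficients so the small ones can be discarded, can be shown with a toy sketch. This is not the paper's algorithm: it uses a simple first-order difference transform and a hypothetical threshold purely to illustrate why sorted data compresses better.

```python
def diff_transform(values):
    """First-order difference transform: keep the first value, then deltas."""
    return [values[0]] + [values[i] - values[i - 1] for i in range(1, len(values))]

def compress(values, threshold):
    """Zero out transform coefficients whose magnitude is below threshold."""
    return [c if abs(c) >= threshold else 0 for c in diff_transform(values)]

readings = [21.3, 19.8, 22.1, 20.2, 21.9, 20.0]   # unsorted sensor readings
sorted_readings = sorted(readings)

# After sorting, successive readings are close, so most deltas are small,
# fall below the threshold, and need not be transmitted.
unsorted_kept = sum(c != 0 for c in compress(readings, 0.5))
sorted_kept = sum(c != 0 for c in compress(sorted_readings, 0.5))
print(unsorted_kept, sorted_kept)  # → 6 3
```

Fewer nonzero coefficients after sorting is exactly the "energy concentration" effect the simulations above report.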

Characterization and Detection of Location Spoofing Attacks

  • Lee, Jeong-Heon;Buehrer, R. Michael
    • Journal of Communications and Networks
    • /
    • v.14 no.4
    • /
    • pp.396-409
    • /
    • 2012
  • With the proliferation of diverse wireless devices, there is an increasing concern about the security of location information which can be spoofed or disrupted by adversaries. This paper investigates the characterization and detection of location spoofing attacks, specifically those which are attempting to falsify (degrade) the position estimate through signal strength based attacks. Since the physical-layer approach identifies and assesses the security risk of position information based solely on using received signal strength (RSS), it is applicable to nearly any practical wireless network. In this paper, we characterize the impact of signal strength and beamforming attacks on range estimates and the resulting position estimate. It is shown that such attacks can be characterized by a scaling factor that biases the individual range estimators either uniformly or selectively. We then identify the more severe types of attacks, and develop an attack detection approach which does not rely on a priori knowledge (either statistical or environmental). The resulting approach, which exploits the dissimilar behavior of two RSS-based estimators when under attack, is shown to be effective at detecting both types of attacks with the detection rate increasing with the severity of the induced location error.
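
The signal-strength attack characterized above can be made concrete with the standard log-distance path-loss model, under which an attacker who offsets RSS by a fixed number of dB scales every range estimate by a constant factor. The reference power, path-loss exponent, and offsets below are illustrative assumptions, not values from the paper.

```python
def rss_to_range(rss_dbm, p0_dbm=-40.0, n=2.0, d0=1.0):
    """Invert the log-distance path-loss model to estimate range (metres):
    RSS = P0 - 10*n*log10(d/d0)  =>  d = d0 * 10**((P0 - RSS) / (10*n))."""
    return d0 * 10 ** ((p0_dbm - rss_dbm) / (10 * n))

true_rss = -60.0                # honest received signal strength
attacked_rss = true_rss - 10.0  # attacker attenuates the signal by 10 dB

d_true = rss_to_range(true_rss)
d_attacked = rss_to_range(attacked_rss)

# A uniform RSS offset biases every range by the same scaling factor:
# 10 dB with n = 2 scales each estimate by 10**(10/20) ≈ 3.16.
print(round(d_true, 2), round(d_attacked, 2), round(d_attacked / d_true, 2))
```

This constant scaling factor on each range estimate is what biases the final position fix, and it is the uniform (versus selective, per-anchor) version of this bias that distinguishes the two attack types the paper detects.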

Performance Comparison between Neural Network and Genetic Programming Using Gas Furnace Data

  • Bae, Hyeon;Jeon, Tae-Ryong;Kim, Sung-Shin
    • Journal of information and communication convergence engineering
    • /
    • v.6 no.4
    • /
    • pp.448-453
    • /
    • 2008
  • This study describes design and development techniques for process-modeling estimation models. A case study is undertaken to design a model using the standard gas furnace data. Neural networks (NN) and genetic programming (GP) are each employed to model the crucial relationships between input factors and output responses. In the case study, the two models were built using 70% of the data for training and evaluated using the remaining 30% for testing. Model performance was compared using RMSE values calculated from the model outputs. The average RMSE values were 0.8925 (training) and 0.9951 (testing) for the NN model, and 0.707227 (training) and 0.673150 (testing) for the GP model. From these results, the NN model has a strong advantage in model training (using all the data for training), and the GP model appears to have an advantage in model testing (using separate data for training and testing). The performance reproducibility of the GP model is good, so this approach appears suitable for modeling physical fabrication processes.
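
The 70/30 evaluation protocol described above can be sketched as a simple ordered split. This is a generic illustration rather than the authors' code, and the stand-in data is hypothetical.

```python
def split_70_30(data):
    """Split a data set into a 70% training portion and a 30% testing portion."""
    cut = int(len(data) * 0.7)
    return data[:cut], data[cut:]

samples = list(range(10))          # stand-in for the gas furnace records
train, test = split_70_30(samples)
print(len(train), len(test))  # → 7 3
```

Each model is then fit on `train` and its RMSE reported separately on `train` and `test`, which is what makes the NN's better training RMSE and the GP's better testing RMSE directly comparable.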