• Title/Summary/Keyword: Value function network

Migration and Energy Aware Network Traffic Prediction Method Based on LSTM in NFV Environment

  • Ying Hu;Liang Zhu;Jianwei Zhang;Zengyu Cai;Jihui Han
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.3
    • /
    • pp.896-915
    • /
    • 2023
  • Network function virtualization (NFV) uses virtualization technology to separate software from hardware. One of the most important challenges of NFV is the resource management of virtual network functions (VNFs). Because of the dynamic nature of NFV, the resource allocation of VNFs must be changed to adapt to variations in incoming network traffic. However, reallocating resources may introduce significant delay. To balance delay against quality of service, this paper first makes a compromise between VNF migration and energy consumption. A long short-term memory (LSTM) network is then used to forecast network traffic, and an asymmetric loss function for LSTM (LO-LSTM) is proposed to raise the predicted value to a certain extent. Finally, experiments evaluating LO-LSTM demonstrate that it not only reduces the number of migrations but also keeps the increase in energy consumption within an acceptable range.
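  • A minimal PyTorch sketch of the idea behind an asymmetric forecasting loss is given below; it assumes a simple weighted squared-error form and an illustrative penalty factor alpha, since the paper's exact LO-LSTM loss is not reproduced in this abstract. Penalizing under-prediction more than over-prediction nudges the LSTM toward slightly higher forecasts, which is what allows resources to be provisioned ahead of traffic peaks and migrations to be avoided.

      # Hedged sketch: an asymmetric ("rather over-predict than under-predict") loss for an
      # LSTM traffic forecaster. The exact LO-LSTM loss is not reproduced here; `alpha` and
      # the weighted quadratic form are illustrative assumptions.
      import torch
      import torch.nn as nn

      class AsymmetricMSE(nn.Module):
          def __init__(self, alpha: float = 3.0):   # alpha > 1 penalizes under-prediction more
              super().__init__()
              self.alpha = alpha

          def forward(self, pred, target):
              err = target - pred                    # positive err = under-prediction
              weight = torch.where(err > 0,
                                   torch.full_like(err, self.alpha),
                                   torch.ones_like(err))
              return (weight * err ** 2).mean()

      class TrafficLSTM(nn.Module):
          def __init__(self, hidden=32):
              super().__init__()
              self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
              self.head = nn.Linear(hidden, 1)

          def forward(self, x):                      # x: (batch, seq_len, 1)
              out, _ = self.lstm(x)
              return self.head(out[:, -1, :])        # predict the next traffic value

      model, loss_fn = TrafficLSTM(), AsymmetricMSE(alpha=3.0)
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      x = torch.rand(16, 24, 1)                      # 16 windows of 24 past samples (synthetic)
      y = torch.rand(16, 1)                          # next-step traffic (synthetic)
      opt.zero_grad(); loss = loss_fn(model(x), y); loss.backward(); opt.step()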

Cascaded Residual Densely Connected Network for Image Super-Resolution

  • Zou, Changjun;Ye, Lintao
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.9
    • /
    • pp.2882-2903
    • /
    • 2022
  • Image super-resolution (SR) processing is of great value in digital image processing, intelligent security, film and television production, and other fields. This paper proposes a densely connected deep learning network based on a cascade architecture, which can be used to solve the super-resolution problem in the field of image quality enhancement. We propose a more efficient residual scaling dense block (RSDB) and a multi-channel cascade architecture to realize more efficient feature reuse, as well as a hybrid loss function based on the L1 error and the L error to achieve better L error performance. The experimental results show that the cascade architecture and residual scaling effectively improve the overall performance of the network. Compared with the residual dense network (RDN), the PSNR / SSIM of the new method is improved by 2.24% / 1.44%, respectively, and the L performance is improved by 3.64%. This shows that the cascade connections and residual scaling effectively realize feature reuse, improving the residual convergence speed and learning efficiency of the network. After adopting the new loss function, the L performance is improved by 11.09% with only a minimal loss of 1.14% / 0.60% in PSNR / SSIM performance. In other words, the L performance can be improved greatly with the new loss function at a minor cost in PSNR / SSIM performance, which is of great value in L-error-sensitive tasks.
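  • The abstract does not give the exact layout of the RSDB, so the PyTorch sketch below is an assumption-laden illustration of the two ingredients it names: dense connectivity (each convolution sees the concatenation of all earlier features in the block) and residual scaling (the fused output is multiplied by a small constant before being added back to the input). The channel counts, growth rate, and scale factor are placeholders.

      # Hedged sketch of a residual-scaling dense block; growth rate, layer count and the
      # scaling constant are illustrative, not the paper's settings.
      import torch
      import torch.nn as nn

      class ResidualScalingDenseBlock(nn.Module):
          def __init__(self, channels=64, growth=32, n_layers=4, res_scale=0.2):
              super().__init__()
              self.layers = nn.ModuleList()
              c = channels
              for _ in range(n_layers):
                  self.layers.append(nn.Sequential(
                      nn.Conv2d(c, growth, kernel_size=3, padding=1),
                      nn.ReLU(inplace=True)))
                  c += growth                                   # dense connectivity: features accumulate
              self.fuse = nn.Conv2d(c, channels, kernel_size=1)  # local feature fusion
              self.res_scale = res_scale

          def forward(self, x):
              feats = [x]
              for layer in self.layers:
                  feats.append(layer(torch.cat(feats, dim=1)))  # reuse all earlier features
              out = self.fuse(torch.cat(feats, dim=1))
              return x + self.res_scale * out                   # residual scaling

      block = ResidualScalingDenseBlock()
      y = block(torch.rand(1, 64, 48, 48))                      # -> torch.Size([1, 64, 48, 48])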

Nonlinear Function Approximation by Fuzzy-neural Interpolating Networks

  • Suh, Il-Hong;Kim, Tae-Won
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1177-1180
    • /
    • 1993
  • In this paper, a fuzzy-neural interpolating network is proposed to efficiently approximate a nonlinear function. Specifically, basis functions are first constructed by Fuzzy Membership Function based Neural Networks (FMFNN). Then, fuzzy similarity, defined as the degree of matching between the actual output value and the output of each basis function, is employed to determine the initial weightings of the proposed network. The weightings are then updated in such a way that the squared error is minimized. To show the function-approximation capability of the proposed fuzzy-neural interpolating network, a numerical example is illustrated.
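  • As a rough, hedged illustration of the interpolating scheme described above (with Gaussian bases standing in for the FMFNN-constructed basis functions, which the abstract does not specify), the numpy sketch below initializes combination weights from a simple similarity between each basis output and the target, then updates them to minimize the squared error.

      # Hedged numpy sketch: similarity-initialized weights over basis-function outputs,
      # refined by a squared-error (least-squares) update. Gaussian bases are stand-ins.
      import numpy as np

      x = np.linspace(-1, 1, 50)
      target = np.sin(np.pi * x)                        # nonlinear function to approximate

      centers = np.linspace(-1, 1, 7)
      bases = np.exp(-((x[:, None] - centers[None, :]) ** 2) / 0.1)   # (50, 7) basis outputs

      # initial weights from a simple similarity between each basis output and the target
      sim = bases.T @ target
      w = sim / np.abs(sim).sum()
      init_rmse = np.sqrt(np.mean((bases @ w - target) ** 2))

      # update the weightings so that the squared error is minimized
      w_ls, *_ = np.linalg.lstsq(bases, target, rcond=None)
      final_rmse = np.sqrt(np.mean((bases @ w_ls - target) ** 2))
      print("RMSE before / after weight update:", round(init_rmse, 3), "/", round(final_rmse, 4))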

POI Recommendation Method Based on Multi-Source Information Fusion Using Deep Learning in Location-Based Social Networks

  • Sun, Liqiang
    • Journal of Information Processing Systems
    • /
    • v.17 no.2
    • /
    • pp.352-368
    • /
    • 2021
  • Sign-in point-of-interest (POI) data are extremely sparse in location-based social networks, hindering recommendation systems from capturing users' deep-level preferences. To solve this problem, we propose a content-aware POI recommendation algorithm based on a convolutional neural network. First, convolutional neural networks are used to process comment text and to model the latent factors of POIs and users. The objective function is then constructed by fusing users' geographical information with the extracted emotional category information; it comprises a matrix decomposition term and the maximisation of a probability objective function. Finally, we solve the objective function efficiently. The precision rate and F1 value on the Instagram-NewYork dataset are 78.32% and 76.37%, respectively, and those on the Instagram-Chicago dataset are 85.16% and 83.29%, respectively. Comparative experiments show that the proposed method achieves a higher precision rate than several other recent recommendation methods.
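  • The following numpy sketch is a loose, illustrative stand-in for the kind of fused objective the abstract describes: a matrix-decomposition reconstruction term over the sparse check-in matrix plus a term tying POI latent factors to auxiliary content features (here a random placeholder for the CNN-derived comment features). The actual CNN, the geographical and emotional terms, and the weights used in the paper are not reproduced; `text_feat` and the `lambda_*` coefficients are hypothetical.

      # Hedged numpy sketch of a multi-source fused objective: matrix decomposition plus a
      # content-fusion term. All names and weights are illustrative placeholders.
      import numpy as np

      rng = np.random.default_rng(0)
      n_users, n_pois, k = 20, 30, 8
      R = (rng.random((n_users, n_pois)) < 0.05).astype(float)   # sparse check-in matrix
      U, V = rng.normal(size=(n_users, k)), rng.normal(size=(n_pois, k))
      text_feat = rng.normal(size=(n_pois, k))                   # stand-in for CNN comment features

      def objective(U, V, lambda_reg=0.1, lambda_text=0.5):
          recon = np.sum((R - U @ V.T) ** 2)                     # matrix-decomposition term
          reg = lambda_reg * (np.sum(U ** 2) + np.sum(V ** 2))   # latent-factor regularization
          text = lambda_text * np.sum((V - text_feat) ** 2)      # fuse content information into V
          return recon + reg + text

      print("initial objective value:", round(objective(U, V), 2))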

Fuzzy Regression Analysis Using Fuzzy Neural Networks (퍼지 신경망에 의한 퍼지 회귀분석)

  • Kwon, Ki-Taek
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.23 no.2
    • /
    • pp.371-383
    • /
    • 1997
  • This paper proposes a fuzzy regression method using fuzzy neural networks for the case where a membership value is attached to each input-output pair. First, a method of linear fuzzy regression analysis is described that interprets the reliability of each input-output pair as its membership value. Next, an architecture of fuzzy neural networks with fuzzy weights and fuzzy biases is presented. The fuzzy neural network maps a crisp input vector to a fuzzy output. A cost function is defined using the fuzzy output of the network and the corresponding target output with its membership value, and a learning algorithm is derived from this cost function. The derived learning algorithm trains the fuzzy neural network so that the level set of the fuzzy output includes the target output. Finally, the proposed method is illustrated by computer simulations on numerical examples.
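  • A simplified numpy sketch of the inclusion idea follows: a model with an interval (level-set) output [center - spread, center + spread], and a cost that penalizes any target falling outside that interval, weighted by the membership value of its input-output pair, plus a small penalty on the interval width. The fuzzy-weight network architecture and the derived learning rule themselves are not reproduced; a crude random search stands in for training.

      # Hedged numpy sketch: an interval output and a membership-weighted inclusion cost.
      # The linear interval model and the random search are illustrative stand-ins.
      import numpy as np

      rng = np.random.default_rng(1)
      x = rng.uniform(0, 1, 40)
      y = 2.0 * x + 0.5 + rng.normal(0, 0.1, 40)      # crisp targets
      h = rng.uniform(0.5, 1.0, 40)                   # membership value of each input-output pair

      def cost(params, width_penalty=0.05):
          a, b, s = params                            # center line a*x + b, constant spread s
          center, spread = a * x + b, abs(s)
          lo, hi = center - spread, center + spread
          outside = np.maximum(lo - y, 0) + np.maximum(y - hi, 0)   # target outside the level set
          return np.sum(h * outside ** 2) + width_penalty * spread * len(x)

      # crude random search, just to show the cost can drive learning
      candidates = rng.normal(size=(2000, 3)) * np.array([3.0, 3.0, 1.0])
      best = min(candidates, key=lambda p: cost(p))
      print("best (a, b, spread):", np.round(best, 2), "cost:", round(cost(best), 3))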

Development of a Neural Network for Optimization and Its Application to the Traveling Salesman Problem

  • Sun, Hong-Dae;Jae, Ahn-Byoung;Jee, Chung-Won;Suck, Cho-Hyung
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2001.10a
    • /
    • pp.169.5-169
    • /
    • 2001
  • This study proposes a neural network for solving optimization problems such as the TSP (Travelling Salesman Problem), scheduling, and line balancing. The Hopfield network has been used for solving such problems, but it frequently gives abnormal or non-optimal solutions. Moreover, the Hopfield network takes considerable time, especially on large problems. To overcome these disadvantages, this study adopts nodes whose outputs change by a fixed value at every evolution. The proposed network is applied to solving a TSP, i.e., finding the shortest path that visits all the cities, each of which is visited only once. Here, the travelling path is reflected in the energy function of the network. The proposed network evolves to globally minimize the energy function, and a ...
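  • For context, the sketch below computes the classic Hopfield-Tank TSP energy that this family of networks minimizes: rows of the state matrix V are cities, columns are visit positions, and the terms penalize a city occupying several positions, a position holding several cities, the wrong number of active units, and long tours. The proposed network's modified node-update rule is not given in the abstract, so only the energy function is shown, with illustrative A, B, C, D coefficients.

      # Hedged numpy sketch of the classic Hopfield-Tank TSP energy function.
      import numpy as np

      def tsp_energy(V, dist, A=500.0, B=500.0, C=200.0, D=1.0):
          n = V.shape[0]
          row = sum(V[x, i] * V[x, j] for x in range(n)
                    for i in range(n) for j in range(n) if i != j)       # city in >1 position
          col = sum(V[x, i] * V[y, i] for i in range(n)
                    for x in range(n) for y in range(n) if x != y)       # position holds >1 city
          total = (V.sum() - n) ** 2                                     # wrong number of active units
          tour = sum(dist[x, y] * V[x, i] * (V[y, (i + 1) % n] + V[y, (i - 1) % n])
                     for x in range(n) for y in range(n) if x != y
                     for i in range(n))                                  # tour length term
          return A / 2 * row + B / 2 * col + C / 2 * total + D / 2 * tour

      rng = np.random.default_rng(2)
      pts = rng.random((5, 2))
      dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
      V = np.eye(5)                                   # a valid (identity-order) tour
      print("energy of a valid tour:", round(tsp_energy(V, dist), 3))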

The Development of an IDMLP Neural Network for Chip Implementation and Its Application to Speech Recognition (Chip 구현을 위한 IDMLP 신경 회로망의 개발과 음성인식에 대한 응용)

  • 김신진;박정운;정호선
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.28B no.5
    • /
    • pp.394-403
    • /
    • 1991
  • This paper describes the development of an input-driven multilayer perceptron (IDMLP) neural network and its application to Korean spoken digit recognition. The IDMLP neural network used here and the learning algorithm for it are newly proposed. In this model, the weight values are integers and the transfer function of each neuron is a hard-limit function. Depending on how difficult the input data are to classify, the learned network has one or more layers. We tested the recognition of binarized data for the spoken digits 0 to 9 using the proposed network. The recognition rates are 100% and 96% for the training data and test data, respectively.
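  • The numpy sketch below illustrates the two ingredients the abstract names, integer weights and a hard-limit transfer function; the paper's learning algorithm is not shown, so the weights are set by hand to realize XOR, a standard example of an input set that a single such layer cannot classify but two layers can.

      # Hedged numpy sketch of an IDMLP-style layer: integer weights, hard-limit activation.
      # The hand-chosen weights implement XOR with two layers (OR and NAND, then AND).
      import numpy as np

      def hard_limit(v):
          return (v >= 0).astype(int)                 # step (hard-limit) transfer function

      def layer(x, W, b):
          return hard_limit(x @ W + b)                # integer weights, binary outputs

      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
      W1 = np.array([[1, -1],
                     [1, -1]])                        # column 0: OR, column 1: NAND
      b1 = np.array([-1, 1])
      W2 = np.array([[1],
                     [1]])                            # AND of the two hidden units
      b2 = np.array([-2])

      h = layer(X, W1, b1)
      y = layer(h, W2, b2)
      print(y.ravel())                                # -> [0 1 1 0], i.e. XOR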

A Design of Reconfigurable Neural Network Processor (재구성 가능한 신경망 프로세서의 설계)

  • 장영진;이현수
    • Proceedings of the IEEK Conference
    • /
    • 1999.11a
    • /
    • pp.368-371
    • /
    • 1999
  • In this paper, we propose a neural network processor architecture with on-chip learning that can be reconfigured according to the data dependencies of the applied algorithm. Depending on the neural network model applied, the proposed architecture can be configured as either a SIMD array or an SRA (Systolic Ring Array) without any change to the on-chip configuration, so as to obtain high throughput; the system configuration is controlled by a user program. To process the activation function, which would otherwise require many cycles to compute, we design an activation unit based on a PWL (Piece-Wise Linear) function approximation method. This unit has a single-cycle latency and can process nonlinear functions such as the sigmoid and Gaussian functions. We verified the processing mechanism with the EBP (Error Back-Propagation) model.
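  • A small numpy sketch of a piece-wise linear (PWL) sigmoid approximation of the kind such an activation unit evaluates in a single cycle is given below; the breakpoints and segment values are illustrative, not the ones used in the processor.

      # Hedged numpy sketch: PWL approximation of the sigmoid with a few linear segments.
      import numpy as np

      BREAKPOINTS = np.array([-4.0, -2.0, 0.0, 2.0, 4.0])
      VALUES = 1.0 / (1.0 + np.exp(-BREAKPOINTS))       # exact sigmoid at the breakpoints

      def pwl_sigmoid(x):
          x = np.clip(x, BREAKPOINTS[0], BREAKPOINTS[-1])
          return np.interp(x, BREAKPOINTS, VALUES)      # linear segments between breakpoints

      x = np.linspace(-6, 6, 1001)
      err = np.abs(pwl_sigmoid(x) - 1.0 / (1.0 + np.exp(-x)))
      print("max absolute error of the PWL approximation:", round(err.max(), 4))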

Improvement of Network Traffic Monitoring Performance by Extending SNMP Function

  • Youn Chun-Kyun
    • Proceedings of the IEEK Conference
    • /
    • summer
    • /
    • pp.171-175
    • /
    • 2004
  • Network management for detailed analysis can slow down applications when bandwidth is scarce owing to the explosive growth of Internet traffic. A manager requests the MIB values of the desired objects from an agent according to a management policy, and the agent then responds to the manager; because these exchanges are repeated, they increase network traffic. In particular, repeatedly exchanging information about the same object is very inefficient when a trend analysis of traffic is performed. In this paper, an extended SNMP is proposed that adds new PDUs to the existing SNMP in order to support a time function. Using these PDUs, unnecessary message exchanges are minimized and information for network trend management is collected efficiently. The proposed SNMP is shown to be compatible with the existing SNMP and to reduce the amount of network traffic considerably.
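  • The toy Python simulation below illustrates only the message-count argument; TrendRequest and ToyAgent are hypothetical stand-ins, not the paper's actual PDUs or a real SNMP implementation. Polling one object N times with ordinary request/response pairs costs 2N messages, whereas a single time-aware PDU that asks the agent to sample the object N times itself costs two.

      # Hedged, purely illustrative simulation: per-sample polling vs. one hypothetical
      # "trend" PDU that returns all samples in a single response.
      from dataclasses import dataclass

      @dataclass
      class TrendRequest:                      # hypothetical time-aware request
          oid: str
          interval_s: int                      # sampling period on the agent side
          samples: int                         # how many samples to collect before replying

      class ToyAgent:
          def get(self, oid):                  # standard one-shot read: 1 request + 1 response
              return 42
          def get_trend(self, req: TrendRequest):   # extended read: 1 request + 1 response
              return [self.get(req.oid) for _ in range(req.samples)]

      agent, samples = ToyAgent(), 60
      standard_msgs = 2 * samples                       # 60 polls -> 120 messages on the wire
      extended_msgs = 2                                 # one TrendRequest, one bulk response
      agent.get_trend(TrendRequest("ifInOctets", 10, samples))
      print(f"standard polling: {standard_msgs} messages, trend PDU: {extended_msgs} messages")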

A Method of Determining the Scale Parameter for Robust Supervised Multilayer Perceptrons

  • Park, Ro-Jin
    • Communications for Statistical Applications and Methods
    • /
    • v.14 no.3
    • /
    • pp.601-608
    • /
    • 2007
  • Lee et al. (1999) proposed a unique but universal robust objective function that replaces the squared objective function for the radial basis function network, and demonstrated some of its advantages. In this article, the robust objective function of Lee et al. (1999) is adapted for a multilayer perceptron (MLP). The shape of the robust objective function is determined by a scale parameter, and another method of determining a proper value for that parameter is proposed.
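  • Since neither the objective of Lee et al. (1999) nor the proposed selection rule is spelled out in this abstract, the numpy sketch below uses the Huber loss and a MAD-based scale purely as generic robust-statistics stand-ins, to show the role the scale parameter plays: it sets where the objective switches from quadratic to linear growth, and hence how strongly outliers are down-weighted.

      # Hedged numpy sketch: the scale parameter of a robust objective (Huber used as a
      # stand-in), with a MAD-based scale choice as one generic way to pick its value.
      import numpy as np

      def huber(residual, scale):
          r = np.abs(residual)
          quad = 0.5 * r ** 2                       # quadratic near zero
          lin = scale * (r - 0.5 * scale)           # linear growth beyond the scale
          return np.where(r <= scale, quad, lin)

      rng = np.random.default_rng(3)
      residuals = np.concatenate([rng.normal(0, 1, 95), rng.normal(0, 20, 5)])  # 5% outliers

      scale = 1.4826 * np.median(np.abs(residuals - np.median(residuals)))      # MAD-based scale
      print("chosen scale:", round(scale, 3))
      print("robust loss:", round(huber(residuals, scale).sum(), 2),
            "vs squared loss:", round(0.5 * np.sum(residuals ** 2), 2))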