• Title/Summary/Keyword: Stochastic Network Simulation

Simulation of the Phase-Type Distribution Based on the Minimal Laplace Transform (최소 표현 라플라스 변환에 기초한 단계형 확률변수의 시뮬레이션에 관한 연구)

  • Sunkyo Kim
    • Journal of the Korea Society for Simulation / v.33 no.1 / pp.19-26 / 2024
  • The phase-type (PH) distribution is defined as the time to absorption into a terminal state in a continuous-time Markov chain. As the PH distribution includes the family of exponential distributions, it has been widely used in stochastic models. Since a PH distribution is represented and generated by an initial probability vector and a generator matrix, together called the Markovian representation, we need to find a vector and a matrix consistent with a given set of moments if we want to simulate a PH distribution. In this paper, we propose an approach to simulating a PH distribution based on the distribution function, which can be obtained directly from the moments. For the simulation of a PH distribution of order 2, closed-form formulas and streamlined procedures are given based on the Jordan decomposition and the minimal Laplace transform, which is computationally more efficient than moment-matching methods for the Markovian representation. Our approach can be used more effectively than the Markovian representation in generating higher-order PH distributions in queueing network simulation.
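
To make the setup concrete, here is a minimal Python sketch that draws phase-type variates from a given Markovian representation (alpha, T) by simulating the underlying continuous-time Markov chain until absorption. This illustrates the baseline representation the abstract refers to; the paper's own contribution, a closed-form moment-based procedure via the minimal Laplace transform, is not reproduced here, and the example parameters are illustrative assumptions.

```python
import numpy as np

def sample_ph(alpha, T, rng):
    """Draw one PH-distributed variate by simulating the CTMC to absorption."""
    m = len(alpha)
    exit_rates = -T.sum(axis=1)               # rates into the absorbing state
    state = rng.choice(m, p=alpha)             # initial phase
    t = 0.0
    while True:
        rate = -T[state, state]
        t += rng.exponential(1.0 / rate)       # holding time in the current phase
        probs = np.append(T[state].astype(float), exit_rates[state])
        probs[state] = 0.0                     # no self-jump
        probs /= rate
        nxt = rng.choice(m + 1, p=probs)
        if nxt == m:                           # jumped to the absorbing state
            return t
        state = nxt

rng = np.random.default_rng(0)
alpha = np.array([0.6, 0.4])                   # initial probability vector (assumed)
T = np.array([[-3.0, 1.0],                     # PH(2) sub-generator (assumed)
              [ 0.5, -2.0]])
samples = np.array([sample_ph(alpha, T, rng) for _ in range(10_000)])
print(samples.mean())                          # compare with alpha @ inv(-T) @ ones
```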

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character based on sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently since they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) models and long short-term memory (LSTM) models have been widely used for neural language modeling (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependency between the objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). In order to train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can generate only the vocabulary contained in the training set. Furthermore, with highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to cause errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that comprises Korean text. We construct the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm and more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted with Old Testament texts using the deep learning package Keras with the Theano backend. After pre-processing the texts, the dataset included 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed an input vector from 20 consecutive characters and an output from the following 21st character. In total, 1,023,411 input-output pairs were included in the dataset, which we divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the time required to train each model. As a result, all the optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, which were clearly superior to those of the stochastic gradient algorithm. The stochastic gradient algorithm also took the longest training time for both the 3- and 4-LSTM-layer models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model; however, its validation loss and perplexity were not improved significantly and even became worse under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM-layer model. Although there were slight differences in the completeness of the generated sentences between the models, the sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely used for Korean language processing and speech recognition, which are the basis of artificial intelligence systems.
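
For orientation, a minimal sketch of the kind of stacked-LSTM phoneme-level model described above, written with the Keras API (the abstract used Keras on Theano; modern tf.keras is assumed here). The layer widths and the choice of Adam are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE = 74      # unique phonemes/punctuation marks after pre-processing
WINDOW = 20          # 20 consecutive symbols predict the 21st

model = models.Sequential([
    layers.LSTM(256, return_sequences=True, input_shape=(WINDOW, VOCAB_SIZE)),
    layers.LSTM(256, return_sequences=True),
    layers.LSTM(256),                                   # three stacked LSTM layers
    layers.Dense(VOCAB_SIZE, activation="softmax"),     # next-symbol distribution
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# x: (N, 20, 74) one-hot input windows, y: (N, 74) one-hot next symbols
# model.fit(x, y, validation_split=0.15, epochs=20)
```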

Coverage and Energy Modeling of HetNet Under Base Station On-Off Model

  • Song, Sida;Chang, Yongyu;Wang, Xianling;Yang, Dacheng
    • ETRI Journal / v.37 no.3 / pp.450-459 / 2015
  • Small cell networks, as an important evolution path for next-generation cellular networks, have drawn much attention. In contrast to the traditional always-on model for base stations (BSs), we propose a BS on-off model, from which a new, simple expression for the probability that a BS in a heterogeneous network is active is derived. This model is more suitable for application in practical networks. Based on this, we develop an analytical framework for the performance evaluation of small cell networks, adopting stochastic geometry theory. We derive the system coverage probability, average energy efficiency (AEE), and average uplink power consumption (AUPC) for different association strategies: maximum biased received power (MaBRP) and minimum association distance (MiAD). It is analytically shown that MaBRP is beneficial for coverage but incurs some loss in energy saving. On the contrary, MiAD is not advocated from the point of view of coverage but is more energy efficient. The simulation results show that the use of range expansion in MaBRP helps to save energy but that this is not so in MiAD. Furthermore, an optimal AEE can be achieved by setting an appropriate density of small cells.
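
A rough Monte Carlo sketch of the kind of experiment this abstract analyses: BSs drawn from a Poisson point process, each independently active with some probability (the on-off model), association with the strongest received active BS, and coverage declared when the SIR exceeds a threshold. The density, path-loss exponent, activity probability, and threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
LAM, P_ACTIVE, ALPHA, THETA = 5e-5, 0.6, 4.0, 1.0   # density /m^2, on-probability, path loss, SIR threshold
R = 2000.0                                          # half-width of the simulation window (m)

def coverage_trial():
    n = rng.poisson(LAM * (2 * R) ** 2)             # number of BSs in the window
    xy = rng.uniform(-R, R, size=(n, 2))
    active = rng.random(n) < P_ACTIVE               # BS on-off model
    if not active.any():
        return False
    d = np.linalg.norm(xy[active], axis=1)
    power = rng.exponential(1.0, size=d.size) * d ** (-ALPHA)  # Rayleigh fading + path loss
    sig = power.max()                               # strongest-BS association
    interf = power.sum() - sig
    return sig / interf > THETA if interf > 0 else True

cov = np.mean([coverage_trial() for _ in range(2000)])
print(f"estimated coverage probability: {cov:.3f}")
```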

Resource Allocation for D2D Communication in Cellular Networks Based on Stochastic Geometry and Graph-coloring Theory

  • Xu, Fangmin;Zou, Pengkai;Wang, Haiquan;Cao, Haiyan;Fang, Xin;Hu, Zhirui
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.12 / pp.4946-4960 / 2020
  • In a device-to-device (D2D) underlaid cellular network, there exist two types of co-channel interference. One type is inter-layer interference caused by spectrum reuse between D2D transmitters and cellular users (CUEs). The other type is intra-layer interference caused by spectrum sharing among D2D pairs. To mitigate the inter-layer interference, we first derive the interference limited area (ILA) that protects the coverage probability of cellular users by modeling the D2D users' locations as a Poisson point process; a D2D transmitter is allowed to reuse the spectrum of a CUE only if it is outside the ILA of that CUE. To coordinate the intra-layer interference, the spectrum sharing criterion for D2D pairs is derived based on the signal-to-interference ratio (SIR) requirement of D2D communication. Under this criterion, D2D pairs are allowed to share the spectrum only when one D2D pair is sufficiently far from the other. Furthermore, to maximize the energy efficiency of the system, a resource allocation scheme is proposed based on weighted graph coloring theory and the proposed ILA restriction. Simulation results show that our proposed scheme provides significant performance gains over the conventional scheme and the random allocation scheme.
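
A simplified sketch of the intra-layer coordination idea: build a conflict graph over D2D pairs (an edge when two pairs are closer than a reuse distance implied by the SIR requirement) and greedily color it so that conflicting pairs get different channels. The reuse distance and positions are assumed for illustration; the paper additionally weights the graph and applies the ILA restriction towards cellular users.

```python
import numpy as np

rng = np.random.default_rng(2)
positions = rng.uniform(0, 500, size=(20, 2))   # D2D transmitter locations (m), assumed
D_MIN = 80.0                                    # minimum reuse distance (assumed)

dist = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
conflict = (dist < D_MIN) & ~np.eye(len(positions), dtype=bool)

colors = {}
for node in range(len(positions)):              # greedy graph coloring
    used = {colors[n] for n in np.flatnonzero(conflict[node]) if n in colors}
    colors[node] = next(c for c in range(len(positions)) if c not in used)

print("channels used:", len(set(colors.values())))
```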

Fluid Flow and Solute Transport in a Discrete Fracture Network Model with Nonlinear Hydromechanical Effect (비선형 hydromechanic 효과를 고려한 이산 균열망 모형에서의 유체흐름과 오염물질 이송에 관한 수치모의 실험)

  • Jeong, U-Chang
    • Journal of Korea Water Resources Association / v.31 no.3 / pp.347-360 / 1998
  • Numerical simulations of fluid flow and solute transport in fractured rock masses are performed using a transient flow model, which is based on a three-dimensional stochastic discrete fracture network model (DFN model) and couples a hydraulic model with a mechanical model. In the numerical simulations of solute transport, we used a particle-following algorithm similar to an advective biased random walk. The purpose of this study is to predict the response of a tracer test between two deep boreholes (GPK1 and GPK2) drilled at Soultz-sous-Forêts in France, in the context of geothermal research. The data sets used were obtained from in situ circulation experiments conducted in 1995. As a result of the transport simulation, the mean transit time of the non-reactive particles between the two boreholes is about 5 days.
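
A toy sketch of the advective, biased random-walk particle tracking mentioned above: particles drift along a flow path with a random dispersive step, and the transit time to an outlet distance is recorded. The velocity, dispersion coefficient, path length, and time step are purely illustrative assumptions, not the site parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
V, D, L, DT = 90.0, 5.0, 450.0, 0.01   # velocity (m/day), dispersion (m^2/day), path length (m), step (days)

def transit_time():
    x, t = 0.0, 0.0
    while x < L:
        x += V * DT + rng.normal(0.0, np.sqrt(2.0 * D * DT))  # advection + dispersion
        t += DT
    return t

times = np.array([transit_time() for _ in range(200)])
print(f"mean transit time: {times.mean():.1f} days")
```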

Development of A Dynamic Departure Time Choice Model based on Heterogeneous Transit Passengers (이질적 지하철승객 기반의 동적 출발시간선택모형 개발 (도심을 목적지로 하는 단일 지하철노선을 중심으로))

  • 김현명;임용택;신동호;백승걸
    • Journal of Korean Society of Transportation / v.19 no.5 / pp.119-134 / 2001
  • This paper proposes a dynamic transit vehicle simulation model and a dynamic transit passenger simulation model, which can simultaneously simulate the transit vehicles and passengers traveling on a transit network, and develops a dynamic departure time choice algorithm based on individual passengers. The proposed model relaxes the assumption of homogeneity among passengers and assumes that each passenger's behavior is heterogeneous and stochastic, and that travelers have imperfect information and bounded rationality, in order to represent and simulate each passenger's behavior more realistically. The model integrates an inference and preference-reforming procedure into the learning and decision-making process in order to describe and analyze the departure time choices of transit passengers. To analyze and evaluate the model, an example transit line heading to a central workplace was used. Numerical results indicated that in the model based on heterogeneous passengers, travelers' preferences strongly influenced departure time choice behavior, whereas in the model based on homogeneous passengers they did not. The results based on homogeneous passengers appeared unrealistic from the viewpoint of rational behavior. These results imply that aggregated travel demand models, such as traditional network assignment models based on user equilibrium, which assume perfect information on the network, homogeneity, and rationality, may differ from the real dynamic travel demand patterns observed on actual networks.
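
A highly simplified sketch of the day-to-day learning loop described above: each passenger keeps a perceived travel time per departure slot, updates it from experience, and chooses a slot via a utility that penalizes travel time and schedule delay with passenger-specific (heterogeneous) weights. The utility form, learning rate, and crowding delay are assumptions for illustration, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(5)
n_pass, n_slots, n_days = 200, 12, 30
pref_slot = rng.integers(6, 10, size=n_pass)          # desired arrival slot per passenger
beta_time = rng.uniform(0.5, 1.5, size=n_pass)        # heterogeneous travel-time weights
beta_delay = rng.uniform(0.5, 2.0, size=n_pass)       # heterogeneous schedule-delay weights
perceived = np.full((n_pass, n_slots), 10.0)          # perceived travel time (min)

for day in range(n_days):
    util = -(beta_time[:, None] * perceived
             + beta_delay[:, None] * np.abs(np.arange(n_slots) - pref_slot[:, None]))
    p = np.exp(util - util.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    choice = np.array([rng.choice(n_slots, p=pi) for pi in p])   # logit-style choice
    load = np.bincount(choice, minlength=n_slots)
    experienced = 10.0 + 0.3 * load                   # in-vehicle time grows with crowding
    # bounded-rationality style update: blend experience into perception
    perceived[np.arange(n_pass), choice] = (
        0.7 * perceived[np.arange(n_pass), choice] + 0.3 * experienced[choice])

print("final departure loads per slot:", load)
```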

Actor-Critic Reinforcement Learning System with Time-Varying Parameters

  • Obayashi, Masanao;Umesako, Kosuke;Oda, Tazusa;Kobayashi, Kunikazu;Kuremoto, Takashi
    • 제어로봇시스템학회:학술대회논문집 / 2003.10a / pp.138-141 / 2003
  • Recently, reinforcement learning has attracted the attention of many researchers because of its simple and flexible learning ability in arbitrary environments. So far, many reinforcement learning methods have been proposed, such as Q-learning, actor-critic, and the stochastic gradient ascent method. A reinforcement learning system is able to adapt to changes of the environment through its interaction with it. However, when the environment changes periodically, it cannot adapt to the changes well. In this paper, we propose a reinforcement learning system that is able to adapt to periodic changes of the environment by introducing time-varying adjustable parameters. A simulation study of a maze problem with an aisle that opens and closes periodically shows that the proposed method works well, whereas the conventional method with constant adjustable parameters does not work well in such an environment.
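
As a reference point, here is a minimal sketch of a tabular actor-critic update (TD-error critic, preference-based actor) with a sinusoidal per-action bias standing in for the paper's time-varying adjustable parameters. The sinusoidal form, table sizes, and step sizes are assumptions for illustration only; no maze environment is included.

```python
import numpy as np

rng = np.random.default_rng(6)
n_states, n_actions = 16, 4
V = np.zeros(n_states)                       # critic: state-value estimates
theta = np.zeros((n_states, n_actions))      # actor: action preferences
alpha_c, alpha_a, gamma = 0.1, 0.05, 0.95

def policy(s, t, period=50.0, amp=0.5):
    # time-varying per-action bias (assumed form of the time-varying parameters)
    prefs = theta[s] + amp * np.sin(2 * np.pi * t / period + np.arange(n_actions))
    e = np.exp(prefs - prefs.max())
    return e / e.sum()

def actor_critic_update(s, a, r, s_next, done):
    target = r + (0.0 if done else gamma * V[s_next])
    td = target - V[s]                       # TD error drives both updates
    V[s] += alpha_c * td                     # critic step
    theta[s, a] += alpha_a * td              # actor step for the taken action
    return td

a = rng.choice(n_actions, p=policy(s=3, t=17))   # sample an action at time step 17
```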

Complexity Control Method of Chaos Dynamics in Recurrent Neural Networks

  • Sakai, Masao;Honma, Noriyasu;Abe, Kenichi
    • 제어로봇시스템학회:학술대회논문집 / 2000.10a / pp.494-494 / 2000
  • This paper demonstrates that the largest Lyapunov exponent $\lambda$ of recurrent neural networks can be controlled by a gradient method. The method minimizes a squared error $e_{\lambda}=(\lambda-\lambda^{obj})^2$, where $\lambda^{obj}$ is the desired exponent. The exponent $\lambda$ can be given as a function of the network parameters $P$, such as connection weights and thresholds of the neurons' activation. Changes of the parameters that minimize the error are then given by calculating their gradients $\partial\lambda/\partial P$. In a previous paper, we derived a control method for $\lambda$ via a direct calculation of $\partial\lambda/\partial P$ with a gradient collection through time. This method, however, is computationally expensive for large-scale recurrent networks, and the control is unstable for recurrent networks with chaotic dynamics. Our new method proposed in this paper is based on a stochastic relation between the complexity $\lambda$ and the parameters $P$ of the network configuration under a restriction. The new method allows us to approximate the gradient collection without time evolution. This approximation requires only $O(N^2)$ run time, while our previous method needs $O(N^{5}T)$ run time for networks with $N$ neurons and $T$ evolution steps. Simulation results show that the new method can realize a "stable" control for large-scale networks with chaotic dynamics.
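
To make the control objective concrete, here is a small numerical sketch: the largest Lyapunov exponent of a toy recurrent network is estimated by a standard Benettin-type method, and the weights are nudged down the gradient of $(\lambda-\lambda^{obj})^2$, with the gradient approximated by a random-direction finite difference. This stand-in gradient estimate is an assumption for illustration; it is not the paper's $O(N^2)$ stochastic approximation.

```python
import numpy as np

rng = np.random.default_rng(4)
N, T = 8, 300
W = rng.normal(0.0, 1.2 / np.sqrt(N), size=(N, N))   # recurrent weights
x0 = rng.normal(size=N)                              # fixed initial state

def largest_lyapunov(W, T=T):
    """Benettin-style estimate of the largest Lyapunov exponent."""
    x = x0.copy()
    d = np.full(N, 1e-8 / np.sqrt(N))                # small perturbation
    lam = 0.0
    for _ in range(T):
        x_new = np.tanh(W @ x)
        d = np.tanh(W @ (x + d)) - x_new             # evolve the perturbation
        norm = np.linalg.norm(d)
        lam += np.log(norm / 1e-8)
        d *= 1e-8 / norm                             # renormalize
        x = x_new
    return lam / T

lam_obj, eta, eps = 0.0, 0.5, 1e-3
for step in range(20):                               # descend on (lambda - lambda_obj)^2
    lam = largest_lyapunov(W)
    direction = rng.normal(size=W.shape)
    direction /= np.linalg.norm(direction)
    grad_dir = (largest_lyapunov(W + eps * direction) - lam) / eps
    W -= eta * 2.0 * (lam - lam_obj) * grad_dir * direction
    print(f"step {step:2d}  lambda = {lam:+.3f}")
```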

Fuzzy Logic based Admission Control for On-grid Energy Saving in Hybrid Energy Powered Cellular Networks

  • Wang, Heng;Tang, Chaowei;Zhao, Zhenzhen;Tang, Hui
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.10 / pp.4724-4747 / 2016
  • To efficiently reduce on-grid energy consumption, we study the admission control algorithm in a hybrid energy powered cellular network (HybE-Net) whose base stations (BSs) are powered by both on-grid energy and solar energy. In a HybE-Net, fluctuations in solar energy harvesting and energy consumption may result in an imbalance of solar energy utilization among BSs, i.e., some BSs may have surplus solar energy while others must maintain operation with the on-grid energy supply. This means that solar energy is not fully utilized and on-grid energy consumption cannot be reduced to its full potential. Thus, how to control user admission so as to improve solar energy utilization and reduce on-grid energy consumption is a great challenge. Motivated by this, we first model the energy flow behavior using a stochastic queue model, and the dynamic energy characteristics are analyzed mathematically. Then, a fuzzy logic based admission control algorithm is proposed, which comprehensively considers admission judgment parameters, e.g., transmission rate, bandwidth, and the energy state of BSs. Moreover, an index of solar energy utilization balancing is proposed to improve the balance of energy utilization among different BSs in the proposed algorithm. Finally, simulation results demonstrate that the proposed algorithm performs excellently in improving solar energy utilization and reducing the on-grid energy consumption of the HybE-Net.
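
A toy sketch of a fuzzy admission decision in the spirit of the abstract: the BS's stored solar energy and the requested rate are fuzzified with simple triangular memberships, a few rules are combined, and the request is admitted when the aggregated score passes a threshold. The membership shapes, rules, and threshold are illustrative assumptions, not the paper's rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def admission_score(solar_level, requested_rate):
    # inputs normalized to [0, 1]
    energy_low  = tri(solar_level, -0.01, 0.0, 0.5)
    energy_high = tri(solar_level, 0.5, 1.0, 1.01)
    rate_low    = tri(requested_rate, -0.01, 0.0, 0.5)
    rate_high   = tri(requested_rate, 0.5, 1.0, 1.01)
    # Rules: admit when solar energy is high; be reluctant when the request is
    # heavy and solar energy is low (leave it to an on-grid-powered neighbour).
    admit  = max(energy_high, min(energy_low, rate_low))
    reject = min(energy_low, rate_high)
    return admit / (admit + reject) if admit + reject > 0 else 0.5

print(admission_score(solar_level=0.8, requested_rate=0.3))  # likely admit
print(admission_score(solar_level=0.1, requested_rate=0.9))  # likely reject
```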

Improved AntHocNet with Bidirectional Path Setup and Loop Avoidance (양방향 경로 설정 및 루프 방지를 통한 개선된 AntHocNet)

  • Rahman, Shams ur;Nam, Jae-Choong;Khan, Ajmal;Cho, You-Ze
    • The Journal of Korean Institute of Communications and Information Sciences / v.42 no.1 / pp.64-76 / 2017
  • Routing in mobile ad hoc networks (MANETs) is highly challenging because of the dynamic nature of the network topology. AntHocNet is a bio-inspired routing protocol for MANETs that mimics the foraging behavior of ants. However, unlike many other MANET routing protocols, the paths constructed in AntHocNet are unidirectional, which requires a separate path setup if a route in the reverse direction is also needed. Because most communication sessions are bidirectional, this unidirectional path setup approach is often inefficient. Moreover, AntHocNet suffers from looping problems because of its multipath property and stochastic data routing. In this paper, we propose a modified path setup procedure that constructs bidirectional paths, and we propose solutions to some of the looping problems in AntHocNet. Simulation results show that performance is significantly enhanced in terms of overhead, end-to-end delay, and delivery ratio when loops are prevented. Performance is further improved in terms of overhead when bidirectional path setup is employed.
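
A tiny sketch of one loop-avoidance idea consistent with the abstract: when forwarding a stochastically routed packet, refuse any next hop that already appears in the packet's recorded path. The pheromone values and routing-table layout here are assumptions for illustration only, not AntHocNet's actual data structures.

```python
import random

def choose_next_hop(pheromone, visited, beta=2.0):
    """pheromone: dict mapping next_hop -> pheromone value toward the destination."""
    candidates = {n: p ** beta for n, p in pheromone.items() if n not in visited}
    if not candidates:
        return None                        # dead end: drop or backtrack
    total = sum(candidates.values())
    r, acc = random.random() * total, 0.0
    for node, w in candidates.items():     # roulette-wheel (stochastic) selection
        acc += w
        if r <= acc:
            return node
    return node

path = ["S", "A"]                          # path recorded in the packet so far
table_at_A = {"S": 0.4, "B": 0.9, "C": 0.2}   # returning to S would form a loop
print(choose_next_hop(table_at_A, set(path)))  # only B or C are eligible
```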