• Title/Summary/Keyword: distributed learning

Search Results: 591

Identification of nonlinear dynamical systems based on self-organized distributed networks (자율분산 신경망을 이용한 비선형 동적 시스템 식별)

  • 최종수;김형석;김성중;권오신;김종만
    • The Transactions of the Korean Institute of Electrical Engineers
    • /
    • v.45 no.4
    • /
    • pp.574-581
    • /
    • 1996
  • The neural network approach has been shown to be a general scheme for nonlinear dynamical system identification. Unfortunately, the error surface of the widely used Multilayer Neural Network (MNN) is often highly complex, a disadvantage that may create traps in the identification procedure. The objective of this paper is to identify nonlinear dynamical systems with Self-Organized Distributed Networks (SODN). Learning with the SODN is fast and precise; these properties stem from its local learning mechanism, in which each local network learns only the data in its subregion. This paper also discusses the neural network as an identifier of nonlinear dynamical systems. The identification structure employs a series-parallel model, and the procedure is based on a discrete-time formulation. Extensive simulation shows the SODN to be effective for identification of nonlinear dynamical systems. (author). 13 refs., 7 figs., 2 tabs.
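The abstract attributes the SODN's speed and precision to local learning, with each local network fitting only the data in its own subregion. As a minimal sketch of that idea (not the paper's architecture; the uniform partitioning and the local linear model form are assumptions made here for illustration), one can identify a nonlinear map with piecewise-local learners:

```python
import numpy as np

def fit_local_models(x, y, n_regions=4):
    """Partition the input range into subregions and fit one local
    linear model (slope, intercept) per region via least squares."""
    edges = np.linspace(x.min(), x.max(), n_regions + 1)
    models = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x >= lo) & (x <= hi)
        A = np.vstack([x[mask], np.ones(mask.sum())]).T
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        models.append((lo, hi, coef))
    return models

def predict(models, x0):
    """Route the query to the local model whose subregion contains it."""
    for lo, hi, (a, b) in models:
        if lo <= x0 <= hi:
            return a * x0 + b
    # fall back to the nearest boundary region
    a, b = models[0][2] if x0 < models[0][0] else models[-1][2]
    return a * x0 + b

# Identify a nonlinear map y = sin(x) with piecewise-local learners
x = np.linspace(0, np.pi, 200)
y = np.sin(x)
models = fit_local_models(x, y, n_regions=8)
print(predict(models, 1.0))
```

Because each local model sees only its subregion, fitting is cheap and the error surface of each learner is simple, which is the property the abstract credits for the SODN's fast, precise learning.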


Dynamic Resource Allocation in Distributed Cloud Computing (분산 클라우드 컴퓨팅을 위한 동적 자원 할당 기법)

  • Ahn, TaeHyoung;Kim, Yena;Lee, SuKyoung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.38B no.7
    • /
    • pp.512-518
    • /
    • 2013
  • A resource allocation algorithm has a high impact on user satisfaction as well as on the ability to accommodate and process services in distributed cloud computing. Service rejections, which occur when datacenters do not have enough resources, degrade the user satisfaction level. Therefore, in this paper, we propose a resource allocation algorithm that considers the cloud domain's remaining resources to minimize the number of service rejections. The Q-Learning-based resource allocation rate increases toward the maximum when the remaining resources are sufficient and decreases otherwise, avoiding service rejections. To demonstrate its effectiveness, we compare the proposed algorithm with two previous works and show that it yields the smallest number of service rejections.
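The allocation policy described above can be sketched with tabular Q-learning. Everything concrete below (the two resource levels, the candidate allocation rates, and the reward values) is a hypothetical stand-in for the paper's formulation, not its actual state or reward design:

```python
import random

# Hypothetical tabular Q-learning sketch: states are coarse levels of
# remaining datacenter resources, actions are allocation rates.
STATES = ["low", "high"]          # remaining-resource level
ACTIONS = [0.25, 0.5, 1.0]       # fraction of requested resources to grant
ALPHA, GAMMA = 0.1, 0.9

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def reward(state, rate):
    # Granting the full rate when resources are low triggers a rejection.
    if state == "low" and rate == 1.0:
        return -1.0          # service rejection penalty
    return rate              # otherwise, a higher allocation rate is better

def step(state):
    # epsilon-greedy action selection, then a standard Q-learning update
    a = max(ACTIONS, key=lambda x: Q[(state, x)]) if random.random() > 0.1 \
        else random.choice(ACTIONS)
    r = reward(state, a)
    nxt = random.choice(STATES)
    Q[(state, a)] += ALPHA * (r + GAMMA * max(Q[(nxt, b)] for b in ACTIONS)
                              - Q[(state, a)])
    return nxt

random.seed(0)
s = "high"
for _ in range(5000):
    s = step(s)

best = lambda s: max(ACTIONS, key=lambda a: Q[(s, a)])
print(best("high"), best("low"))
```

The learned values drive Q(low, 1.0) below the safer rates, i.e., the agent learns to scale back allocation when remaining resources are scarce, which is the rejection-avoiding behavior the abstract describes.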

Deep Reinforcement Learning-Based C-V2X Distributed Congestion Control for Real-Time Vehicle Density Response (실시간 차량 밀도에 대응하는 심층강화학습 기반 C-V2X 분산혼잡제어)

  • Byeong Cheol Jeon;Woo Yoel Yang;Han-Shin Jo
    • Journal of IKEEE
    • /
    • v.27 no.4
    • /
    • pp.379-385
    • /
    • 2023
  • Distributed congestion control (DCC) is a technology that mitigates channel congestion and improves communication performance in high-density vehicular networks. Traditional DCC techniques operate to reduce channel congestion without considering quality of service (QoS) requirements. Such a design can lead to excessive DCC actions, potentially degrading other aspects of QoS. To address this issue, we propose a deep reinforcement learning-based QoS-adaptive DCC algorithm. The simulation was conducted using a quasi-real environment simulator that generates dynamic vehicular densities for evaluation. The simulation results indicate that our proposed DCC algorithm achieves results closer to the targeted QoS than existing DCC algorithms.
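A QoS-adaptive reward of the kind the abstract describes might look like the following sketch, where the channel-busy-ratio limit, the weight, and the metric names are illustrative assumptions rather than values from the paper:

```python
def dcc_reward(channel_busy_ratio, qos_metric, qos_target,
               cbr_limit=0.65, w=1.0):
    """Hypothetical reward for a QoS-adaptive DCC agent: penalize
    congestion above the limit, but also penalize deviation from the
    QoS target, so the agent does not throttle more than necessary."""
    congestion_penalty = max(0.0, channel_busy_ratio - cbr_limit)
    qos_penalty = abs(qos_metric - qos_target)
    return -(congestion_penalty + w * qos_penalty)

# Uncongested channel, QoS exactly on target: no penalty at all
print(dcc_reward(0.5, qos_metric=1.0, qos_target=1.0))
```

The second term is what distinguishes this from a traditional DCC objective: an agent rewarded only for low congestion would keep throttling past the QoS target, which is exactly the excessive-DCC-action problem the paper targets.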

U-Learning: An Interactive Social Learning Model

  • Caytiles, Ronnie D.;Kim, Hye-jin
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.5 no.1
    • /
    • pp.9-13
    • /
    • 2013
  • This paper presents the concepts of ubiquitous computing technology for constructing a ubiquitous learning environment that enables learning to take place anywhere at any time. This ubiquitous learning environment is described as one that supports students' learning using digital media in geographically distributed environments. The u-learning model is a web-based e-learning system that enables learners to acquire knowledge and skills through interaction with the ubiquitous learning environment. Students are allowed to be in an environment of their interest, and the communication between devices and the computers embedded in the environment allows learners to learn while they are moving, thereby keeping them connected to their learning environment.

Autonomous and Asynchronous Triggered Agent Exploratory Path-planning Via a Terrain Clutter-index using Reinforcement Learning

  • Kim, Min-Suk;Kim, Hwankuk
    • Journal of information and communication convergence engineering
    • /
    • v.20 no.3
    • /
    • pp.181-188
    • /
    • 2022
  • An intelligent distributed multi-agent system (IDMS) using reinforcement learning (RL) poses a challenging and intricate problem in which one or more agents aim to achieve their specific goals (sub-goals and a final goal) while moving through a complex and cluttered environment. The IDMS environment provides a cumulative optimal reward for each action based on the policy of the learning process. Most actions involve interacting with the given IDMS environment, which therefore provides the following elements: a starting agent state, multiple obstacles, agent goals, and a clutter index. The environment's reward also guides the RL-based agents, which can move randomly or intelligently to reach their respective goals, improving learning performance. We extend different cases of intelligent multi-agent systems from our previous works: (a) a proposed environment clutter index for agent sub-goal selection and an analysis of its effect, and (b) a newly proposed RL reward scheme based on the environmental clutter index, identifying and analyzing the prerequisites and conditions for improving the overall system.
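The clutter-index reward scheme can be illustrated with a small sketch. The neighborhood-density definition of the clutter index and the reward weights below are assumptions for illustration, not the paper's definitions:

```python
import numpy as np

def clutter_index(grid, pos, radius=1):
    """Fraction of occupied cells in the square neighborhood of `pos`
    (a simple stand-in for the paper's terrain clutter index)."""
    r, c = pos
    patch = grid[max(0, r - radius):r + radius + 1,
                 max(0, c - radius):c + radius + 1]
    return patch.mean()

def shaped_reward(grid, pos, goal, step_cost=-0.01, clutter_weight=-0.5):
    """Hypothetical RL reward: goal bonus, per-step cost, and a penalty
    proportional to local clutter so agents prefer open terrain."""
    if pos == goal:
        return 1.0
    return step_cost + clutter_weight * clutter_index(grid, pos)

# 1 = obstacle, 0 = free; the right edge of this map is cluttered
grid = np.array([[0, 0, 1],
                 [0, 0, 1],
                 [0, 0, 0]])
print(shaped_reward(grid, (0, 0), (2, 2)))   # open corner
print(shaped_reward(grid, (1, 2), (2, 2)))   # cluttered cell
```

Shaping the per-step reward by local clutter steers exploratory agents toward open sub-goals without changing the terminal goal bonus, which is one way to read the paper's clutter-index-based reward scheme.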

Deep Learning Based Security Model for Cloud based Task Scheduling

  • Devi, Karuppiah;Paulraj, D.;Muthusenthil, Balasubramanian
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.9
    • /
    • pp.3663-3679
    • /
    • 2020
  • Scheduling plays a dynamic role in cloud computing, both in generating and in efficiently distributing the resources of each task. The principal goal of scheduling is to limit resource starvation and to guarantee fairness among the parties using the resources. Because the demand for resources fluctuates dynamically, prearranging resources is a challenging task. Many task-scheduling approaches have been used in the cloud-computing environment, and security in that environment is one of the core issues in distributed computing. We have designed a deep learning-based security model for scheduling tasks in cloud computing and implemented it using the CloudSim 3.0 simulator, written in Java. The results are verified from different perspectives, such as response time with and without security factors, makespan, cost, CPU utilization, I/O utilization, memory utilization, and execution time, and are compared against the Round Robin (RR) and Weighted Round Robin (WRR) algorithms.

On iterative learning control for some distributed parameter system

  • Kim, Won-Cheol;Lee, Kwang-Soon;Kim, Arkadii-V.
    • Institute of Control, Robotics and Systems: Conference Proceedings (제어로봇시스템학회: 학술대회논문집)
    • /
    • 1994.10a
    • /
    • pp.319-323
    • /
    • 1994
  • In this paper, we discuss a design method for iterative learning control systems for parabolic linear distributed parameter systems (DPSs). First, we discuss some aspects of boundary control of the DPS, and then propose employing the Karhunen-Loeve procedure to reduce the infinite-dimensional problem to a low-order finite-dimensional one. Finally, an iterative learning control (ILC) scheme for a non-square transfer function matrix is introduced for the reduced-order system.
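The two-step recipe, Karhunen-Loeve (POD) reduction followed by an ILC update that handles a non-square plant, can be sketched as follows. The dimensions, the random snapshot data, and the pseudo-inverse learning gain are illustrative choices, not the paper's design:

```python
import numpy as np

# Karhunen-Loeve (POD) reduction: keep the dominant modes of snapshot
# data, then run a P-type ILC update u_{k+1} = u_k + L (y_ref - y_k)
# with a pseudo-inverse gain, which works for a non-square plant G.
rng = np.random.default_rng(0)

snapshots = rng.normal(size=(50, 200))       # 50 spatial dofs, 200 snapshots
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
Phi = U[:, :3]                               # 3 dominant KL modes

G = rng.normal(size=(3, 2))                  # reduced plant: 2 inputs, 3 outputs
y_ref = np.array([1.0, -0.5, 0.2])           # reference in reduced coordinates
L = np.linalg.pinv(G)                        # learning gain for non-square G

u = np.zeros(2)
for _ in range(20):                          # ILC trials
    y = G @ u                                # run the (static) reduced model
    u = u + L @ (y_ref - y)                  # learning update

print(np.linalg.norm(y_ref - G @ u))
```

With the pseudo-inverse gain, the trial-to-trial error converges to the least-squares residual of the non-square plant, i.e., the best the underactuated reduced model can track.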


Distributed Carrier Aggregation in Small Cell Networks: A Game-theoretic Approach

  • Zhang, Yuanhui;Kan, Chunrong;Xu, Kun;Xu, Yuhua
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.12
    • /
    • pp.4799-4818
    • /
    • 2015
  • In this paper, we investigate the problem of achieving global optimization for distributed carrier aggregation (CA) in small cell networks using a game-theoretic solution. To cope with local interference and the distinct costs of intra-band and inter-band CA, we propose a non-cooperative game, which is proved to be an exact potential game. Furthermore, we propose a spatial adaptive play learning algorithm with heterogeneous learning parameters that converges toward a Nash equilibrium (NE) of the game; the heterogeneous learning parameters are introduced to accelerate convergence. It is shown that with the proposed game-theoretic approach, global optimization is achieved with only local information exchange. Simulation results validate the effectiveness of the proposed game-theoretic CA approach.
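Spatial adaptive play with heterogeneous learning parameters can be illustrated on a toy potential game. The two-cell interference game, the Boltzmann update, and the beta values below are assumptions for illustration, not the paper's model:

```python
import math
import random

# Two neighboring cells each pick a carrier; colliding on the same
# carrier costs each of them 1 (an exact potential game). In each round,
# one randomly chosen player resamples its carrier from a Boltzmann
# distribution over its utilities, using its own (heterogeneous)
# learning parameter beta.
CARRIERS = [0, 1]
BETAS = [4.0, 8.0]                       # heterogeneous learning parameters

def utility(mine, other):
    return -1.0 if mine == other else 0.0

def sample(other, beta):
    weights = [math.exp(beta * utility(c, other)) for c in CARRIERS]
    return random.choices(CARRIERS, weights=weights)[0]

random.seed(1)
choice = [0, 0]                          # both start on carrier 0 (collision)
collisions = 0
for t in range(400):
    i = random.randrange(2)              # asynchronous, one updater per round
    choice[i] = sample(choice[1 - i], BETAS[i])
    if t >= 200:                         # count collisions after burn-in
        collisions += choice[0] == choice[1]
print(collisions)
```

Because the game is an exact potential game, this log-linear dynamic concentrates on the potential maximizer (distinct carriers), and a larger beta sharpens a player's updates, which is how heterogeneous parameters can speed convergence.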

Deep Reinforcement Learning-based Distributed Routing Algorithm for Minimizing End-to-end Delay in MANET (MANET에서 종단간 통신지연 최소화를 위한 심층 강화학습 기반 분산 라우팅 알고리즘)

  • Choi, Yeong-Jun;Seo, Ju-Sung;Hong, Jun-Pyo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.9
    • /
    • pp.1267-1270
    • /
    • 2021
  • In this paper, we propose a distributed routing algorithm for mobile ad hoc networks (MANET) in which mobile devices can be utilized as relays for communication between remote source-destination nodes. The objective of the proposed algorithm is to minimize the end-to-end communication delay caused by transmission failures under deep channel fading. At each hop, a node selects the next relaying node by considering the tradeoff between link stability and forward link distance. Based on this feature, we formulate the problem as a partially observable Markov decision process (POMDP) and apply deep reinforcement learning to derive an effective routing strategy for the formulated POMDP. Simulation results show that the proposed algorithm outperforms other baseline schemes in terms of average end-to-end delay.
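The stability-versus-progress tradeoff in next-hop selection can be sketched with a simple scoring heuristic standing in for the learned policy. All names, weights, and thresholds below are illustrative assumptions, not the paper's parameters:

```python
import math

def next_hop_score(distance_gain, link_snr_db, snr_threshold_db=10.0,
                   w_progress=1.0, w_stability=2.0):
    """Hypothetical heuristic standing in for the learned policy: favor
    relays that make forward progress toward the destination, but
    penalize links likely to fail under deep fading (low SNR margin)."""
    stability = 1.0 / (1.0 + math.exp(-(link_snr_db - snr_threshold_db)))
    return w_progress * distance_gain + w_stability * stability

def choose_relay(candidates):
    """candidates: list of (node_id, forward_distance_gain, snr_db)."""
    return max(candidates, key=lambda c: next_hop_score(c[1], c[2]))[0]

# A far relay over a marginal link vs. a nearer relay over a solid link
candidates = [("far", 0.9, 6.0), ("near", 0.5, 16.0)]
print(choose_relay(candidates))
```

A greedy distance-only policy would pick the far relay and risk a retransmission under fading; weighting link stability is what lets the learned policy trade one extra hop for a lower expected end-to-end delay.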

Performance Analysis of a Federated Learning-Based Building Change Detection Algorithm by Administrative District (연합학습 기반 자치구별 건물 변화탐지 알고리즘 성능 분석)

  • Kim Younghyun
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.19 no.3
    • /
    • pp.233-244
    • /
    • 2023
  • Although artificial intelligence and machine learning technologies have been used in various fields, centralized data collection and processing raise problems with personal information protection. Federated learning has been proposed to solve this problem: clients who own data in a distributed data environment each train a model on their own data, and an artificial intelligence model is created collectively by centrally aggregating the learning results. Unlike the centralized method, federated learning has the advantage that clients never send their data to the central server. In this paper, we quantitatively present the performance improvement obtained when federated learning is applied to the building change detection training data. The results confirm that performance with federated learning was about 29% higher on average than without it. As future work, we plan to propose a method that effectively reduces the number of federated learning rounds to improve the convergence time of federated learning.
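The abstract does not specify the aggregation rule; federated averaging (FedAvg) is the common choice, and the train-locally-then-aggregate loop it describes can be sketched on a toy linear-regression task in which each client's data stays local:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Client-side step: a few epochs of gradient descent on a linear
    model using only this client's data (the data never leaves it)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(weights, clients):
    """Server-side step: collect the client models and average them,
    weighted by each client's sample count (FedAvg)."""
    updates = [(local_update(weights, X, y), len(y)) for X, y in clients]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):                       # three districts, disjoint data
    X = rng.normal(size=(40, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(30):                      # communication rounds
    w = federated_round(w, clients)
print(w)
```

Only model weights cross the network in each round, which is the privacy property the abstract highlights; the number of rounds in the outer loop is exactly the quantity the authors propose to reduce in future work.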