• Title/Abstract/Keywords: distributed learning


자율분산 신경망을 이용한 비선형 동적 시스템 식별 (Identification of nonlinear dynamical systems based on self-organized distributed networks)

  • 최종수;김형석;김성중;권오신;김종만
    • 대한전기학회논문지 / Vol. 45, No. 4 / pp.574-581 / 1996
  • The neural network approach has been shown to be a general scheme for nonlinear dynamical system identification. Unfortunately, the error surface of the widely used Multilayer Neural Network (MNN) is often highly complex, which is a disadvantage and means that potential traps may exist in the identification procedure. The objective of this paper is to identify nonlinear dynamical systems based on Self-Organized Distributed Networks (SODN). Learning with the SODN is fast and precise; these properties result from its local learning mechanism, in which each local network learns only the data in its own subregion. This paper also discusses the neural network as an identifier of nonlinear dynamical systems. The identification structure employs a series-parallel model, and the identification procedure is based on a discrete-time formulation. Through extensive simulation, the SODN is shown to be effective for the identification of nonlinear dynamical systems. 13 refs., 7 figs., 2 tabs.
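The abstract attributes the SODN's speed and precision to a local learning mechanism in which each local network is trained only on the data falling in its own subregion, combined with a series-parallel, discrete-time identification structure. As a rough illustration of that idea only (the paper's actual network equations are not given here), the following sketch partitions the regressor space into subregions and fits a separate small local model per subregion; the grid partition, the linear local models, and the toy plant are all assumptions.

```python
# Minimal sketch of the local-learning idea behind self-organized distributed
# networks: each local model is trained only on the data that falls in its own
# subregion of the regressor space. The grid partition and the linear local
# models are illustrative assumptions, not the paper's method.
import numpy as np

class LocalModelIdentifier:
    def __init__(self, n_bins=8, lo=-1.0, hi=1.0):
        self.n_bins, self.lo, self.hi = n_bins, lo, hi
        self.models = {}  # subregion index -> local model coefficients

    def _region(self, x):
        # Crude partition: a uniform grid over the first regressor component.
        i = int((x[0] - self.lo) / (self.hi - self.lo) * self.n_bins)
        return min(max(i, 0), self.n_bins - 1)

    def fit(self, X, y):
        # Train each local model only on the samples of its subregion.
        for r in range(self.n_bins):
            mask = np.array([self._region(x) == r for x in X])
            if mask.sum() < 2:
                continue
            A = np.hstack([X[mask], np.ones((mask.sum(), 1))])
            coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
            self.models[r] = coef

    def predict(self, x):
        coef = self.models.get(self._region(x))
        return 0.0 if coef is None else float(np.append(x, 1.0) @ coef)

# Series-parallel identification: the regressor uses the measured past output
# y[k-1] together with the input u[k-1] to predict y[k].
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 400)
y = np.zeros(401)
for k in range(1, 401):
    y[k] = 0.5 * np.sin(np.pi * y[k - 1]) + u[k - 1] ** 3   # toy nonlinear plant
X = np.array([[y[k - 1], u[k - 1]] for k in range(1, 401)])
ident = LocalModelIdentifier()
ident.fit(X, y[1:])
print("prediction at k=201:", ident.predict(X[200]), "true:", y[201])
```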


분산 클라우드 컴퓨팅을 위한 동적 자원 할당 기법 (Dynamic Resource Allocation in Distributed Cloud Computing)

  • 안태형;김예나;이수경
    • 한국통신학회논문지 / Vol. 38B, No. 7 / pp.512-518 / 2013
  • In distributed cloud computing, the resource allocation algorithm is important because it is closely related to user satisfaction and to the capacity to accept and process services. In particular, service rejection, which occurs when no resources are available to process a service in a distributed cloud, sharply reduces user satisfaction. This paper therefore proposes a resource allocation algorithm that takes the resource status of the data centers into account in order to minimize service rejection. In the proposed algorithm, the allocation amount is learned with Q-Learning: if a cloud data center can allocate up to the maximum allocation amount, the allocation amount is increased; otherwise, it is decreased. We evaluate the proposed algorithm against two existing algorithms and show that it achieves a lower service rejection rate than both.
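Since the abstract specifies only that a Q-Learning agent raises the allocation amount while the data center can still grant it and lowers it otherwise, the following is a minimal sketch of that adjustment loop under assumed state, action, and reward definitions; the names `capacity` and `max_alloc` and the reward shape are illustrative, not the paper's.

```python
# Minimal sketch of the Q-Learning-driven adjustment the abstract describes:
# the allocation amount grows while the data center can still grant it up to
# the maximum, and shrinks otherwise. State, action, and reward definitions
# here are illustrative assumptions.
import random

ACTIONS = (+1, -1)                   # increase / decrease the allocation amount
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1    # learning rate, discount factor, exploration

def learn_allocation(capacity=100, max_alloc=20, episodes=2000, seed=0):
    random.seed(seed)
    Q = {}                           # (allocation_amount, action) -> value
    alloc = 1
    for _ in range(episodes):
        used = random.randint(0, capacity)            # current data-center load
        if random.random() < EPS:                     # epsilon-greedy choice
            act = random.choice(ACTIONS)
        else:
            act = max(ACTIONS, key=lambda a: Q.get((alloc, a), 0.0))
        new_alloc = min(max(alloc + act, 1), max_alloc)
        granted = used + new_alloc <= capacity        # can the center allocate it?
        reward = new_alloc if granted else -new_alloc # rejection is penalised
        best_next = max(Q.get((new_alloc, a), 0.0) for a in ACTIONS)
        old = Q.get((alloc, act), 0.0)
        Q[(alloc, act)] = old + ALPHA * (reward + GAMMA * best_next - old)
        alloc = new_alloc
    return alloc

print("allocation amount after learning:", learn_allocation())
```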

실시간 차량 밀도에 대응하는 심층강화학습 기반 C-V2X 분산혼잡제어 (Deep Reinforcement Learning-Based C-V2X Distributed Congestion Control for Real-Time Vehicle Density Response)

  • 전병철;양우열;조한신
    • 전기전자학회논문지 / Vol. 27, No. 4 / pp.379-385 / 2023
  • Distributed congestion control (DCC) is a technique that mitigates channel congestion and improves communication performance in high-density vehicular networks. Existing DCC techniques operate to reduce channel congestion without considering quality-of-service (QoS) requirements, and such designs can degrade other QoS metrics through excessive DCC operation. To address this problem, we propose a deep reinforcement learning-based QoS-adaptive DCC algorithm. The algorithm was evaluated on a semi-realistic simulator with dynamically generated vehicle densities, and the simulation results show that it stays closer to the target QoS than existing DCC algorithms.
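The abstract does not give the DRL state, action, or reward design, so the snippet below only illustrates, under assumed quantities, what a QoS-adaptive congestion reward of the kind described might look like: it rewards staying near a target QoS value while penalizing channel congestion.

```python
# Illustrative sketch only: the target packet-delivery ratio, the channel-busy-
# ratio limit, and the additive reward shape are assumptions used to show what
# a QoS-adaptive DCC reward could look like, not the paper's formulation.
def dcc_reward(measured_pdr, target_pdr, channel_busy_ratio, cbr_limit=0.65):
    """Highest when the measured QoS sits at its target while the channel
    stays below the assumed congestion limit."""
    qos_term = -abs(measured_pdr - target_pdr)                     # track target QoS
    congestion_term = -max(0.0, channel_busy_ratio - cbr_limit)    # punish congestion
    return qos_term + congestion_term

# Example: close to the target QoS on a mildly congested channel.
print(dcc_reward(measured_pdr=0.93, target_pdr=0.95, channel_busy_ratio=0.70))
```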

U-Learning: An Interactive Social Learning Model

  • Caytiles, Ronnie D.;Kim, Hye-jin
    • International Journal of Internet, Broadcasting and Communication / Vol. 5, No. 1 / pp.9-13 / 2013
  • This paper presents the concepts of ubiquitous computing technology used to construct a ubiquitous learning environment that enables learning to take place anywhere at any time. This ubiquitous learning environment is described as an environment that supports students' learning with digital media in geographically distributed settings. The u-learning model is a web-based e-learning system that enables learners to acquire knowledge and skills through interaction with the ubiquitous learning environment. Students are allowed to remain in an environment of their interest, and the communication between devices and the computers embedded in the environment allows learners to learn while they are moving, keeping them attached to their learning environment.

Autonomous and Asynchronous Triggered Agent Exploratory Path-planning Via a Terrain Clutter-index using Reinforcement Learning

  • Kim, Min-Suk;Kim, Hwankuk
    • Journal of information and communication convergence engineering / Vol. 20, No. 3 / pp.181-188 / 2022
  • An intelligent distributed multi-agent system (IDMS) using reinforcement learning (RL) poses a challenging and intricate problem in which one or more agents aim to achieve their specific goals (sub-goals and a final goal) by moving through their states in a complex and cluttered environment. The IDMS environment provides a cumulative optimal reward for each action based on the policy of the learning process. Most actions involve interacting with the given IDMS environment, which therefore provides the following elements: a starting agent state, multiple obstacles, agent goals, and a clutter index. The reward in the environment also shapes the behavior of the RL-based agents, which can move randomly or intelligently to reach their respective goals, improving agent learning performance. We extend different cases of intelligent multi-agent systems from our previous works: (a) a proposed environment clutter-based index for agent sub-goal selection and an analysis of its effect, and (b) a newly proposed RL reward scheme based on the environmental clutter index, used to identify and analyze the prerequisites and conditions for improving the overall system.
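As a rough illustration of the clutter-index-based reward scheme mentioned in item (b), the sketch below defines a clutter index as the obstacle density in a local window of a toy grid world and subtracts it from the step reward; the grid, window size, and weights are assumptions rather than the paper's definitions.

```python
# Sketch of a clutter-index-aware RL reward in the spirit of the abstract.
# The grid world, the clutter definition (fraction of obstacle cells in a
# local window), and the weights are illustrative assumptions.
import numpy as np

def clutter_index(grid, pos, radius=1):
    """Fraction of obstacle cells (value 1) in the window around `pos`."""
    r0, c0 = pos
    window = grid[max(0, r0 - radius):r0 + radius + 1,
                  max(0, c0 - radius):c0 + radius + 1]
    return float(window.mean())

def reward(grid, pos, goal, step_cost=-0.01, goal_bonus=1.0, clutter_weight=0.5):
    if pos == goal:
        return goal_bonus
    # Penalise states in cluttered areas so the agent prefers open sub-goals.
    return step_cost - clutter_weight * clutter_index(grid, pos)

grid = np.zeros((5, 5), dtype=int)
grid[1:3, 2] = 1                      # a small obstacle wall
print(reward(grid, pos=(1, 1), goal=(4, 4)))
```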

Deep Learning Based Security Model for Cloud based Task Scheduling

  • Devi, Karuppiah;Paulraj, D.;Muthusenthil, Balasubramanian
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 9 / pp.3663-3679 / 2020
  • Scheduling plays a dynamic role in cloud computing, both in generating and in efficiently distributing the resources of each task. The principal goal of scheduling is to limit resource starvation and to guarantee fairness among the parties using the resources. Because the demand for resources fluctuates dynamically, pre-arranging resources is a challenging task. Many task-scheduling approaches have been used in the cloud-computing environment, and security in the cloud computing environment is one of the core issues in distributed computing. We have designed a deep learning-based security model for scheduling tasks in cloud computing and implemented it using the CloudSim 3.0 simulator written in Java. The results are verified from different perspectives, such as response time with and without security factors, makespan, cost, CPU utilization, I/O utilization, memory utilization, and execution time, and are compared with the Round Robin (RR) and Weighted Round Robin (WRR) algorithms.
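The baselines named in the abstract, Round Robin and Weighted Round Robin, are standard dispatching policies; for reference, a minimal generic WRR dispatcher is sketched below. The VM names and weights are illustrative and unrelated to the paper's CloudSim setup.

```python
# Minimal generic Weighted Round Robin (WRR) dispatcher, shown only as a
# reference for the baseline named in the abstract; VM names and weights are
# illustrative assumptions.
from itertools import cycle

def weighted_round_robin(tasks, vm_weights):
    """Dispatch tasks over VMs, giving each VM `weight` consecutive slots per cycle."""
    slots = [vm for vm, w in vm_weights for _ in range(w)]
    return {task: vm for task, vm in zip(tasks, cycle(slots))}

print(weighted_round_robin(
    tasks=[f"task{i}" for i in range(6)],
    vm_weights=[("vm-a", 3), ("vm-b", 1)],   # vm-a gets 3x the share of vm-b
))
```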

On iterative learning control for some distributed parameter system

  • Kim, Won-Cheol;Lee, Kwang-Soon;Kim, Arkadii-V.
    • 제어로봇시스템학회 학술대회논문집 / Proceedings of the Korea Automatic Control Conference, 9th (KACC); Taejeon, Korea; 17-20 Oct. 1994 / pp.319-323 / 1994
  • In this paper, we discuss a design method for iterative learning control systems for parabolic linear distributed parameter systems (DPSs). First, we discuss some aspects of boundary control of the DPS, and then propose to employ the Karhunen-Loeve procedure to reduce the infinite-dimensional problem to a low-order finite-dimensional problem. Finally, an iterative learning control (ILC) scheme for a non-square transfer function matrix is introduced for the reduced-order system.
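The trial-to-trial idea behind ILC can be summarized by the update u_{k+1}(t) = u_k(t) + L e_k(t); the sketch below applies it to a toy first-order discrete plant standing in for a reduced-order model. The learning gain and plant are assumptions, and the paper's Karhunen-Loeve reduction and non-square ILC design are not reproduced.

```python
# Minimal sketch of the basic iterative learning control (ILC) update
# u_{k+1}(t) = u_k(t) + L * e_k(t) on a toy first-order discrete plant.
# It only illustrates trial-to-trial learning; the gain and plant are
# illustrative assumptions.
import numpy as np

def plant(u, a=0.8, b=0.5):
    """Toy reduced-order model: y[t+1] = a*y[t] + b*u[t], returning y[1..T]."""
    y = np.zeros(len(u) + 1)
    for t in range(len(u)):
        y[t + 1] = a * y[t] + b * u[t]
    return y[1:]

T = 50
y_ref = np.sin(np.linspace(0, np.pi, T))      # desired trajectory over one trial
u = np.zeros(T)
L_gain = 0.8                                   # illustrative learning gain
for trial in range(30):
    e = y_ref - plant(u)                       # error over the whole trial
    u = u + L_gain * e                         # ILC update from trial to trial
print("max tracking error after learning:", float(np.max(np.abs(y_ref - plant(u)))))
```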


Distributed Carrier Aggregation in Small Cell Networks: A Game-theoretic Approach

  • Zhang, Yuanhui;Kan, Chunrong;Xu, Kun;Xu, Yuhua
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 9, No. 12 / pp.4799-4818 / 2015
  • In this paper, we investigate the problem of achieving global optimization for distributed carrier aggregation (CA) in small cell networks using a game-theoretic solution. To cope with the local interference and the distinct costs of intra-band and inter-band CA, we propose a non-cooperative game which is proved to be an exact potential game. Furthermore, we propose a spatial adaptive play learning algorithm with heterogeneous learning parameters that converges towards a Nash equilibrium (NE) of the game; the heterogeneous learning parameters are introduced to accelerate the convergence speed. It is shown that with the proposed game-theoretic approach, global optimization is achieved with only local information exchange. Simulation results validate the effectiveness of the proposed game-theoretic CA approach.
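Spatial adaptive play generally lets one randomly activated player revise its action according to a Boltzmann rule over its utilities, with the temperature acting as the learning parameter. The sketch below runs that rule with heterogeneous per-player parameters on a toy interference-avoidance potential game; the utility, topology, and parameter values are illustrative assumptions, not the paper's carrier-aggregation model.

```python
# Generic sketch of spatial adaptive play with heterogeneous learning
# parameters on a toy interference-avoidance (potential) game. Utility,
# topology and per-player parameters are illustrative assumptions.
import math, random

random.seed(1)
N_PLAYERS, CHANNELS = 6, [0, 1, 2]
neighbors = {i: [(i - 1) % N_PLAYERS, (i + 1) % N_PLAYERS] for i in range(N_PLAYERS)}
beta = [2.0 + i for i in range(N_PLAYERS)]       # heterogeneous learning parameters
action = [random.choice(CHANNELS) for _ in range(N_PLAYERS)]

def utility(i, a_i, profile):
    # Negative number of neighbors on the same channel (interference cost).
    return -sum(1 for j in neighbors[i] if profile[j] == a_i)

for _ in range(2000):
    i = random.randrange(N_PLAYERS)              # one randomly activated player
    weights = [math.exp(beta[i] * utility(i, a, action)) for a in CHANNELS]
    r, acc = random.random() * sum(weights), 0.0
    for a, w in zip(CHANNELS, weights):          # sample from the Boltzmann rule
        acc += w
        if r <= acc:
            action[i] = a
            break

print("channel profile:", action)
print("interfering links:", -sum(utility(i, action[i], action) for i in range(N_PLAYERS)) // 2)
```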

MANET에서 종단간 통신지연 최소화를 위한 심층 강화학습 기반 분산 라우팅 알고리즘 (Deep Reinforcement Learning-based Distributed Routing Algorithm for Minimizing End-to-end Delay in MANET)

  • Choi, Yeong-Jun;Seo, Ju-Sung;Hong, Jun-Pyo
    • 한국정보통신학회논문지 / Vol. 25, No. 9 / pp.1267-1270 / 2021
  • In this paper, we propose a distributed routing algorithm for mobile ad hoc networks (MANETs) in which mobile devices can be utilized as relays for communication between remote source-destination nodes. The objective of the proposed algorithm is to minimize the end-to-end communication delay caused by transmission failures under deep channel fading. In each hop, a node needs to select the next relaying node by considering the tradeoff between link stability and forward link distance. Based on this feature, we formulate the problem as a partially observable Markov decision process (MDP) and apply deep reinforcement learning to derive an effective routing strategy for the formulated MDP. Simulation results show that the proposed algorithm outperforms other baseline schemes in terms of average end-to-end delay.
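The per-hop tradeoff the abstract describes, where a longer forward link shortens the route but is less stable under deep fading, can be illustrated with a simple scoring rule. The stability model and weight below are assumptions, and the paper learns this decision with deep reinforcement learning rather than a fixed formula.

```python
# Illustrative sketch of the link-stability vs. forward-distance tradeoff in
# next-hop selection. The exponential stability model and the weight are
# assumptions, not the paper's learned policy.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def link_stability(distance, fading_scale=50.0):
    """Assumed probability that a link of this length survives transmission."""
    return math.exp(-distance / fading_scale)

def choose_next_hop(current, destination, candidates, w_progress=0.02):
    def score(node):
        progress = dist(current, destination) - dist(node, destination)
        return link_stability(dist(current, node)) + w_progress * progress
    return max(candidates, key=score)

print(choose_next_hop(current=(0, 0), destination=(200, 0),
                      candidates=[(40, 5), (90, 10), (20, -5)]))
```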

연합학습 기반 자치구별 건물 변화탐지 알고리즘 성능 분석 (Performance Analysis of Building Change Detection Algorithm)

  • 김영현
    • 디지털산업정보학회논문지 / Vol. 19, No. 3 / pp.233-244 / 2023
  • Although artificial intelligence and machine learning technologies have been used in various fields, problems with personal information protection have arisen from centralized data collection and processing. Federated learning has been proposed to solve this problem. In federated learning, clients that own data in a distributed data environment train a model using their own data, and an artificial intelligence model is built collectively by centrally collecting only the learning results. Unlike the centralized method, federated learning has the advantage of not having to send the clients' data to the central server. In this paper, we quantitatively present the performance improvement obtained when federated learning is applied to the building change detection training data. As a result, it was confirmed that the performance with federated learning applied was about 29% higher on average than without it. As future work, we plan to propose a method that can effectively reduce the number of federated learning rounds in order to improve the convergence time of federated learning.
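The aggregation pattern described here matches the usual federated averaging scheme: each client trains on its own data, and the server averages only the returned parameters, weighted by client data size. The sketch below shows that pattern with a tiny linear model standing in for the building change detection network; all data, sizes, and hyperparameters are illustrative.

```python
# Minimal sketch of the federated averaging step the abstract describes: each
# client (e.g., each district) trains locally and only model parameters are
# sent to the server, which averages them weighted by client data size. The
# tiny linear model is an illustrative stand-in for the change detection network.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)      # gradient of mean squared error
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    sizes = np.array(client_sizes, dtype=float)
    return np.average(np.stack(client_weights), axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):                              # four clients with private data
    X = rng.normal(size=(30, 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=30)))

global_w = np.zeros(2)
for rnd in range(10):                           # federated learning rounds
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    global_w = fed_avg(local_ws, [len(y) for _, y in clients])
print("aggregated weights:", global_w)
```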