• Title/Summary/Keyword: Q algorithm

Efficiency Optimization Control of SynRM with ANN Speed Estimation (ANN의 속도 추정에 의한 SynRM의 효율 최적화 제어)

  • Choi, Jung-Sik;Ko, Jae-Sub;Chung, Dong-Hwa
    • The Transactions of the Korean Institute of Electrical Engineers P / v.55 no.3 / pp.133-140 / 2006
  • This paper proposes an efficiency optimization control algorithm for a synchronous reluctance motor (SynRM) that minimizes the copper and iron losses. It also presents a speed-estimation control scheme for the SynRM using an artificial neural network (ANN). A variety of combinations of d- and q-axis currents can produce a given motor torque, and the objective of the efficiency optimization controller is to find the combination of d- and q-axis current components that yields minimum losses at a given steady-state operating point. It is shown that the current components that directly govern torque production are well regulated by the efficiency optimization control scheme. The proposed algorithm reduces the electromagnetic losses in variable-speed, variable-torque drives while keeping good torque-control dynamics. The control performance of the ANN is evaluated by analysis under various operating conditions, and the results demonstrate the validity of the proposed algorithm.
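
As a rough numerical illustration of this d-q current search (not the authors' controller), the sketch below scans current pairs that satisfy a fixed torque constraint and keeps the pair with the lowest combined copper and iron loss. The motor parameters, the torque expression, and the simple iron-loss model are assumptions made only for this example.

```python
import numpy as np

# Hypothetical SynRM parameters -- illustrative only, not taken from the paper.
P_PAIRS = 2            # pole pairs
LD, LQ = 0.30, 0.06    # d/q-axis inductances [H]
RS = 1.2               # stator resistance [ohm]
K_FE = 2e-4            # crude iron-loss coefficient

def losses(i_d, i_q, w_e):
    """Copper loss plus a simple speed-dependent iron-loss term."""
    p_cu = 1.5 * RS * (i_d**2 + i_q**2)
    p_fe = K_FE * w_e**2 * ((LD * i_d)**2 + (LQ * i_q)**2)
    return p_cu + p_fe

def min_loss_currents(torque, w_e, i_max=10.0, n=2000):
    """Scan (i_d, i_q) pairs that produce the requested torque and
    return the pair with the smallest total loss."""
    best = None
    for i_d in np.linspace(0.1, i_max, n):
        i_q = torque / (1.5 * P_PAIRS * (LD - LQ) * i_d)  # torque constraint
        if abs(i_q) > i_max:
            continue
        p = losses(i_d, i_q, w_e)
        if best is None or p < best[0]:
            best = (p, i_d, i_q)
    return best

print(min_loss_currents(torque=5.0, w_e=2 * np.pi * 50))
```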

A FASTER LU DECOMPOSITION FOR PARALLEL C PROGRAMS

  • Lee, Sang-Moon;Lee, Chin-Young
    • Journal of applied mathematics & informatics / v.3 no.2 / pp.217-234 / 1996
  • This report introduces a faster parallel LU decomposition algorithm that achieves a speedup almost equal to the number of nodes used. The new algorithm takes advantage of an important C feature, the row-major layout of matrices, and is based on the currently widely used LU decomposition algorithm with one major modification that eliminates most of the communication overhead. Empirical results are included in this report. For example, solving a dense matrix containing 100,000,000 elements gives a speedup of 50 when executed in parallel on 50 nodes of an Intel Paragon.
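
For reference, a minimal sequential Doolittle-style LU factorization is sketched below; the paper's contribution is the parallel, communication-reduced variant, which this single-process sketch does not attempt to reproduce.

```python
import numpy as np

def lu_decompose(a):
    """Doolittle LU factorization without pivoting: A = L @ U, with the
    unit lower triangle of L and the upper triangle U stored together."""
    a = a.astype(float).copy()
    n = a.shape[0]
    for k in range(n):
        a[k+1:, k] /= a[k, k]                              # column of multipliers
        a[k+1:, k+1:] -= np.outer(a[k+1:, k], a[k, k+1:])  # trailing-block update
    return a

a = np.array([[4.0, 3.0], [6.0, 3.0]])
lu = lu_decompose(a)
l = np.tril(lu, -1) + np.eye(2)
u = np.triu(lu)
print(np.allclose(l @ u, a))  # True
```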

NEWTON SCHULZ METHOD FOR SOLVING NONLINEAR MATRIX EQUATION $X^p + A^*XA = Q$

  • Kim, Hyun-Min;Kim, Young-jin;Meng, Jie
    • Journal of the Korean Mathematical Society / v.55 no.6 / pp.1529-1540 / 2018
  • The matrix equation $X^p+A^*XA=Q$ has been studied in several works with the aim of finding its positive definite solution. In this paper, we consider fixed-point iteration and Newton's method for finding the matrix p-th root. From these two considerations, we use the Newton-Schulz algorithm (N.S.A). We show the residual relation and the local convergence of the fixed-point iteration; this local convergence guarantees the convergence of the N.S.A. We also present numerical experiments, which readily confirm that the Newton-Schulz algorithm reduces the CPU time significantly.
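
A minimal sketch of the plain fixed-point iteration $X_{k+1} = (Q - A^*X_kA)^{1/p}$ mentioned above (not the Newton-Schulz acceleration itself) is given below, assuming the iterates stay symmetric positive definite so that the p-th root is well defined.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def fixed_point_solve(a, q, p, tol=1e-10, max_iter=200):
    """Iterate X_{k+1} = (Q - A* X_k A)^(1/p) starting from X_0 = Q."""
    x = q.copy()
    for _ in range(max_iter):
        x_next = fractional_matrix_power(q - a.conj().T @ x @ a, 1.0 / p).real
        if np.linalg.norm(x_next - x, ord='fro') < tol:
            return x_next
        x = x_next
    return x

a = np.array([[0.2, 0.1], [0.0, 0.3]])
q = np.eye(2)
x = fixed_point_solve(a, q, p=3)
residual = np.linalg.matrix_power(x, 3) + a.conj().T @ x @ a - q
print(np.linalg.norm(residual))  # should be close to zero
```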

Efficiency Optimization Control for High Performance Operation of Synchronous Reluctance Motor (동기 리럭턴스 전동기의 고성능 운전을 위한 효율 최적화 제어)

  • 정동화;이정철;이홍균
    • Journal of the Korean Society of Safety / v.16 no.2 / pp.51-56 / 2001
  • This paper proposes an efficiency optimization control algorithm for a synchronous reluctance motor (SynRM) that minimizes the copper and iron losses. There exists a variety of combinations of d- and q-axis currents that produce a given motor torque. The objective of the efficiency optimization controller is to find the combination of d- and q-axis current components that yields minimum losses at a given steady-state operating point. It is shown that the current components that directly govern torque production are well regulated by the efficiency optimization control scheme. The proposed algorithm reduces the electromagnetic losses in variable-speed, variable-torque drives while keeping good torque-control dynamics. Simulation results are presented to show the validity of the proposed algorithm.
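
For orientation, the trade-off can be stated with the standard SynRM relations (textbook forms, not equations quoted from the paper): the reluctance torque is $T_e = \frac{3}{2}P(L_d-L_q)i_di_q$ and the copper loss is $P_{cu} = \frac{3}{2}R_s(i_d^2+i_q^2)$, where $P$ is the number of pole pairs and $R_s$ the stator resistance. Minimizing $P_{cu}$ alone at fixed $T_e$ gives $i_d = i_q$ (a 45° current angle); including the iron-loss term shifts this optimum, and that shifted operating point is what the efficiency optimization controller searches for.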

Reinforcement Learning Using State Space Compression (상태 공간 압축을 이용한 강화학습)

  • Kim, Byeong-Cheon;Yun, Byeong-Ju
    • The Transactions of the Korea Information Processing Society / v.6 no.3 / pp.633-640 / 1999
  • Reinforcement learning learns through trial-and-error interaction with a dynamic environment. In such environments, reinforcement learning methods such as Q-learning and TD (Temporal Difference) learning therefore learn faster than conventional stochastic learning methods. However, because many of the proposed reinforcement learning algorithms grant the reinforcement value only when the learning agent reaches its goal state, most of them converge to the optimal solution too slowly. In this paper, we present the COMREL (COMpressed REinforcement Learning) algorithm for quickly finding the shortest path in a maze environment: it selects the candidate states that can guide the shortest path in the compressed maze environment and learns only those candidate states. Comparing the COMREL algorithm with the existing Q-learning and Prioritized Sweeping algorithms, we found that the learning time was shortened considerably.
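
As background for the comparison above, the one-step Q-learning backup that both COMREL and the Q-learning baseline rely on is sketched below (the state-compression step itself is not shown; the maze actions and parameter values are illustrative).

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
ACTIONS = ['up', 'down', 'left', 'right']
q = defaultdict(float)  # maps (state, action) -> estimated return

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

def update(state, action, reward, next_state):
    """Standard one-step Q-learning backup."""
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
```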

A Genetic Algorithm for Solving a QFD(Quality Function Deployment) Optimization Problem

  • Yoo, Jaewook
    • International Journal of Contents / v.16 no.4 / pp.26-38 / 2020
  • Determining the optimal levels of the technical attributes (TAs) of a product to achieve a high level of customer satisfaction is the main activity in the planning process for quality function deployment (QFD). In real applications, the number of customer requirements for developing a single product is quite large and the number of converted TAs is also high, so the house of quality (HoQ) becomes huge. Furthermore, the TA levels are often discrete instead of continuous, and the product market can be divided into several market segments, each with its own HoQ, which increases the size of the QFD optimization problem and the decision-making time to an unacceptable degree. This paper proposes a genetic algorithm (GA) solution approach for finding the optimum set of TAs in QFD in this situation. A numerical example is provided to illustrate the proposed approach. To assess the computational performance of the GA, tests were performed on problems of various sizes using a fractional factorial design.
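
A minimal GA skeleton for this kind of discrete TA-level selection might look like the following; the chromosome encoding, the stand-in fitness function, and the parameter values are illustrative assumptions, not the formulation used in the paper.

```python
import random

N_TA, LEVELS = 10, [0, 1, 2, 3]   # technical attributes and their discrete levels
POP, GENS, P_MUT = 40, 200, 0.05

def fitness(chromosome):
    """Stand-in objective: weighted customer satisfaction minus a cost penalty."""
    satisfaction = sum((i % 3 + 1) * lvl for i, lvl in enumerate(chromosome))
    cost = sum(lvl ** 2 for lvl in chromosome)
    return satisfaction - 0.3 * cost

def crossover(a, b):
    cut = random.randint(1, N_TA - 1)     # single-point crossover
    return a[:cut] + b[cut:]

def mutate(c):
    return [random.choice(LEVELS) if random.random() < P_MUT else g for g in c]

pop = [[random.choice(LEVELS) for _ in range(N_TA)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]              # truncation selection
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(POP - len(parents))]
    pop = parents + children
print(max(pop, key=fitness))              # best TA-level vector found
```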

Object tracking algorithm of Swarm Robot System for using SVM and Polygon based Q-learning (SVM과 다각형 기반의 Q-learning 알고리즘을 이용한 군집로봇의 목표물 추적 알고리즘)

  • Seo, Sang-Wook;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2008.04a / pp.143-146 / 2008
  • In this paper, we propose a dodecagon-based Q-learning algorithm using SVM for object tracking in a swarm robot system. To demonstrate the validity of the proposed algorithm, we set up an experiment in which several robots, a number of obstacles, and a single target are placed together and each robot must find the hidden target. The experiment is carried out with a random strategy, a fusion model of DBAM and ABAM, and finally the SVM-based dodecagon Q-learning algorithm proposed in this paper, and the validity of this work is verified by comparing the three methods.
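
As a rough guess at the "dodecagon" mechanism (not the authors' implementation), the heading toward a target can be quantized into 12 equal sectors that serve as the discrete action set of a Q-learner:

```python
import math

N_SECTORS = 12  # a dodecagon: twelve 30-degree sectors around the robot

def heading_to_action(dx, dy):
    """Map a continuous direction into one of 12 discrete actions."""
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return int(angle // (2 * math.pi / N_SECTORS))

def action_to_heading(action):
    """Center angle (radians) of the chosen sector."""
    return (action + 0.5) * 2 * math.pi / N_SECTORS

print(heading_to_action(1.0, 1.0))  # a 45-degree direction falls in sector 1
```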

A Distributed Scheduling Algorithm based on Deep Reinforcement Learning for Device-to-Device communication networks (단말간 직접 통신 네트워크를 위한 심층 강화학습 기반 분산적 스케쥴링 알고리즘)

  • Jeong, Moo-Woong;Kim, Lyun Woo;Ban, Tae-Won
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.11 / pp.1500-1506 / 2020
  • In this paper, we study a scheduling problem based on reinforcement learning for overlay device-to-device (D2D) communication networks. Although various technologies for D2D communication networks using Q-learning, one of the reinforcement learning models, have been studied, Q-learning incurs tremendous complexity as the number of states and actions increases. To solve this problem, D2D communication technologies based on Deep Q Network (DQN) have been studied. In this paper, we therefore design a DQN model that considers the characteristics of wireless communication systems and propose a distributed scheduling scheme based on this DQN model that can reduce feedback and signaling overhead. The proposed model trains all parameters in a centralized manner and transfers the final trained parameters to all mobiles; each mobile then determines its actions individually using the transferred parameters. We analyze the performance of the proposed scheme by computer simulation and compare it with an optimal scheme, an opportunistic selection scheme, and a full transmission scheme.
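
The decentralized execution step described above, in which centrally trained weights are broadcast to every mobile and each mobile runs its own forward pass, could be sketched roughly as follows; the tiny numpy Q-network and the state/action sizes are invented for illustration and are not the paper's DQN architecture.

```python
import numpy as np

STATE_DIM, N_ACTIONS, HIDDEN = 4, 2, 16  # e.g. local channel/queue features -> transmit or stay silent

def init_params(rng):
    """Parameters trained centrally, then broadcast to all mobiles."""
    return {
        "w1": rng.normal(0, 0.1, (STATE_DIM, HIDDEN)), "b1": np.zeros(HIDDEN),
        "w2": rng.normal(0, 0.1, (HIDDEN, N_ACTIONS)), "b2": np.zeros(N_ACTIONS),
    }

def q_values(params, state):
    """Forward pass of a small two-layer Q-network."""
    h = np.maximum(0.0, state @ params["w1"] + params["b1"])  # ReLU
    return h @ params["w2"] + params["b2"]

def local_decision(params, state):
    """Each mobile picks its own action from the shared parameters -- no extra feedback needed."""
    return int(np.argmax(q_values(params, state)))

params = init_params(np.random.default_rng(0))  # stands in for the trained weights
print(local_decision(params, np.array([0.8, 0.1, 0.3, 0.5])))
```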

A Research on Low-power Buffer Management Algorithm based on Deep Q-Learning approach for IoT Networks (IoT 네트워크에서의 심층 강화학습 기반 저전력 버퍼 관리 기법에 관한 연구)

  • Song, Taewon
    • Journal of Internet of Things and Convergence / v.8 no.4 / pp.1-7 / 2022
  • As the number of IoT devices increases, power management of the cluster head, which acts as a gateway between the cluster and sink nodes in the IoT network, becomes crucial. Particularly when the cluster head is a mobile wireless terminal, the power consumption of the IoT network must be minimized over its lifetime. In addition, the information transmission delay is one of the primary metrics for rapid information collection in the IoT network. In this paper, we propose a low-power buffer management algorithm that takes the information transmission delay of an IoT network into account. By forwarding or skipping received packets using deep Q-learning, a deep reinforcement learning method, the proposed scheme reduces power consumption while keeping the transmission delay low. The proposed approach is shown to reduce power consumption and improve delay relative to the existing buffer management technique used as a baseline under the slotted ALOHA protocol.
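
The forward-or-skip decision described above can be pictured as a two-action trade-off between energy and delay; the sketch below is only a schematic reward shaping with invented coefficients, and the paper's actual DQN state, action, and reward design may differ.

```python
FORWARD, SKIP = 0, 1
TX_ENERGY = 1.0        # energy cost of forwarding one packet (arbitrary units)
DELAY_WEIGHT = 0.2     # penalty per packet left waiting in the buffer

def reward(action, queue_len):
    """Trade transmission energy against the delay of packets held in the buffer."""
    if action == FORWARD:
        return -TX_ENERGY - DELAY_WEIGHT * max(queue_len - 1, 0)
    return -DELAY_WEIGHT * queue_len  # skipping saves energy but delays everything queued

print(reward(FORWARD, 3), reward(SKIP, 3))
```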

Equal Energy Consumption Routing Protocol Algorithm Based on Q-Learning for Extending the Lifespan of Ad-Hoc Sensor Network (애드혹 센서 네트워크 수명 연장을 위한 Q-러닝 기반 에너지 균등 소비 라우팅 프로토콜 기법)

  • Kim, Ki Sang;Kim, Sung Wook
    • KIPS Transactions on Computer and Communication Systems / v.10 no.10 / pp.269-276 / 2021
  • Recently, smart sensors have been deployed in various environments, and the implementation of ad-hoc sensor networks (ASNs) is a hot research topic. Unfortunately, traditional sensor network routing algorithms focus on specific control issues and cannot be applied directly to ASN operation. In this paper, we propose a new routing protocol based on Q-learning. The main challenge of the proposed approach is to extend the lifespan of ASNs through efficient energy allocation while maintaining balanced system performance. The proposed method enhances the Q-learning effect by considering various environmental factors. When a transmission fails, a node penalty is accumulated to increase the probability of successful communication. In particular, each node stores the Q-values of its adjacent nodes in its own Q-table; every time a data transfer is executed, the Q-values are updated and accumulated so that the node learns to select the optimal routing route. Simulation results confirm that the proposed method chooses energy-efficient routing paths and achieves excellent network performance compared with existing ASN routing protocols.
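
A bare-bones version of the per-node update described above (next-hop Q-values kept locally, with an extra penalty accumulated on failed transmissions) might look like this; the reward model and constants are illustrative assumptions rather than the paper's exact scheme.

```python
ALPHA, GAMMA, FAIL_PENALTY = 0.2, 0.9, 1.0

class SensorNode:
    def __init__(self, neighbors):
        self.q = {n: 0.0 for n in neighbors}        # Q-value per adjacent node
        self.penalty = {n: 0.0 for n in neighbors}  # accumulated failure penalty

    def best_next_hop(self):
        return max(self.q, key=lambda n: self.q[n] - self.penalty[n])

    def update(self, next_hop, reward, neighbor_best_q, success):
        """One Q-learning backup after a transmission attempt."""
        if not success:
            self.penalty[next_hop] += FAIL_PENALTY
        target = reward + GAMMA * neighbor_best_q
        self.q[next_hop] += ALPHA * (target - self.q[next_hop])

node = SensorNode(neighbors=["B", "C"])
node.update("B", reward=-0.1, neighbor_best_q=0.5, success=True)
print(node.best_next_hop())
```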