• Title/Summary/Keyword: Markov Decision Processing

27 search results (processing time: 0.028 seconds)

Korean Speech Segmentation and Recognition by Frame Classification via GMM (GMM을 이용한 프레임 단위 분류에 의한 우리말 음성의 분할과 인식)

  • 권호민;한학용;고시영;허강인
    • Proceedings of the Korea Institute of Convergence Signal Processing / 2003.06a / pp.18-21 / 2003
  • Segmenting continuous speech into short intervals of uniform phonemic quality is generally considered a difficult problem. In this paper we used Gaussian Mixture Models (GMMs), based on probability densities, to segment speech into phonemes: initial, medial, and final sounds. From these segments we performed continuous speech recognition. Phoneme decision boundaries are determined by an algorithm that takes the most frequent class label within a short interval. Recognition is performed with a Continuous Hidden Markov Model (CHMM), and the results are compared against phoneme boundaries obtained by visual inspection. The experimental results confirm that the presented method is comparatively superior for automatic segmentation of Korean speech.

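The frame-classification idea above can be sketched with scikit-learn's `GaussianMixture`; the features below are synthetic stand-ins for real acoustic frames (e.g. MFCCs), and the two-class setup, component count, and the feature dimensions are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical stand-in for per-frame acoustic features (e.g. 13 MFCCs):
# two well-separated synthetic "phoneme" clusters instead of real speech.
frames_a = rng.normal(loc=0.0, scale=1.0, size=(200, 13))
frames_b = rng.normal(loc=5.0, scale=1.0, size=(200, 13))

# One GMM per phoneme class, fit on that class's training frames.
gmm_a = GaussianMixture(n_components=4, random_state=0).fit(frames_a)
gmm_b = GaussianMixture(n_components=4, random_state=0).fit(frames_b)

def classify_frame(frame):
    """Assign a frame to the class whose GMM gives the higher log-likelihood."""
    scores = [gmm_a.score_samples(frame[None]), gmm_b.score_samples(frame[None])]
    return int(np.argmax(scores))

# A segment boundary falls where the per-frame label changes; the paper
# additionally smooths labels by majority vote over a short interval.
labels = [classify_frame(f) for f in np.vstack([frames_a[:5], frames_b[:5]])]
print(labels)   # -> [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
```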

A Study on the Design of a Four-Legged Intelligent Walking Robot for Overcoming Non-Planar Terrain Using Deep Learning (딥러닝을 이용한 비평탄 지형 극복용 4족 보행 지능로봇의 설계에 관한 연구)

  • Han, Seong-Min;Pak, Myeong-Suk;Kim, Sang-Hoon
    • Proceedings of the Korea Information Processing Society Conference / 2022.05a / pp.288-291 / 2022
  • To implement non-planar terrain negotiation for a four-legged intelligent robot, this paper analyzes the inverse kinematics provided by a simulation environment together with an improved reinforcement learning method (Partially Observable Markov Decision Process), and applies the resulting algorithm to an embedded board for motion verification. Through this study we propose an efficient design method for terrain-adaptive gaits of quadruped walking robots; in particular, we evaluate an intelligent balance-control method based on an IMU sensor, and experiment with and implement various communication schemes and servo-motor control methods. In addition, smoother and more stable walking is achieved through motor acceleration/deceleration control.
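The POMDP formulation mentioned above rests on maintaining a belief over hidden state from noisy observations; a minimal Bayes-filter belief update can be sketched as follows, where the transition and observation models are hypothetical, not the robot's.

```python
import numpy as np

# Two hidden terrain states: 0 = flat, 1 = rough. All probabilities are
# hypothetical illustrations, not values from the paper.
T = np.array([[0.8, 0.2],     # T[s, s']: P(next terrain s' | terrain s)
              [0.3, 0.7]])
O = np.array([[0.9, 0.1],     # O[s', z]: P(IMU signature z | terrain s')
              [0.2, 0.8]])

def belief_update(b, z):
    """b'(s') proportional to O[s', z] * sum_s T[s, s'] * b(s)."""
    b_pred = b @ T                # predict through the transition model
    b_new = O[:, z] * b_pred      # weight by the observation likelihood
    return b_new / b_new.sum()    # renormalize

b = np.array([0.5, 0.5])
b = belief_update(b, z=1)         # observe a "rough-terrain" IMU signature
print(b.round(3))
```

A POMDP policy then maps this belief vector, rather than a raw state, to an action.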

Evaluating a successor representation-based reinforcement learning algorithm in the 2-stage Markov decision task (2-stage 마르코프 의사결정 상황에서 Successor Representation 기반 강화학습 알고리즘 성능 평가)

  • Kim, So-Hyeon;Lee, Jee Hang
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.910-913 / 2021
  • Successor representation (SR) is a reinforcement learning method that mimics the mechanism by which place cells in the hippocampus construct a cognitive map to learn the environment, and then use it to flexibly establish optimal strategies in a changing environment. In particular, by exploiting learned environmental information, SR responds robustly when the goal changes within the environment's structure, and is known to adapt to reward changes and find optimal strategies faster than ordinary model-free reinforcement learning. In this paper we examined how an SR-based reinforcement learning algorithm performs when, in addition to reward changes, the environment's structure changes; specifically, when changes in the environment's state-transition probabilities induce changes in reward. As a benchmark algorithm we used SR-Dyna, which integrates the properties of SR into goal-directed reinforcement learning, and as the experimental environment we used a 2-stage Markov decision task in which state-transition uncertainty and reward changes occur simultaneously. Simulation results showed that SR-Dyna failed to respond adequately to reward changes caused by changes in the environment's state-transition probabilities. These results help clarify the difference between reinforcement learning in the brain and algorithmic reinforcement learning, and may inform the design of reinforcement learning algorithms that are robust to environmental change.
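The core SR idea can be sketched as a temporal-difference update of an occupancy matrix; this is a minimal illustration on a toy chain, not the SR-Dyna implementation evaluated in the paper.

```python
import numpy as np

n_states, gamma, alpha = 5, 0.95, 0.1

# M[s, s'] estimates the expected discounted future occupancy of s' from s.
# Identity init: each state trivially "occupies" itself.
M = np.eye(n_states)

def sr_update(M, s, s_next):
    """One TD update of the successor representation."""
    one_hot = np.eye(n_states)[s]
    td_error = one_hot + gamma * M[s_next] - M[s]
    M[s] = M[s] + alpha * td_error
    return M

# Learn M on a deterministic chain 0 -> 1 -> 2 -> 3 -> 4 (4 is terminal).
for _ in range(2000):
    for s in range(n_states - 1):
        M = sr_update(M, s, s + 1)

# With M learned, state values are a single dot product with the reward
# vector, so a change in reward alone needs no re-learning of the map --
# but a change in transition structure invalidates M itself, which is
# the failure mode the paper probes.
reward = np.zeros(n_states)
reward[4] = 1.0
values = M @ reward
print(values.round(2))   # discounted proximity to the rewarded state
```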

Real-time human detection method based on quadrupedal walking robot (4족 보행 로봇 기반의 실시간 사람 검출 방법)

  • Han, Seong-Min;Yu, Sang-jung;Lee, Geon;Pak, Myeong-Suk;Kim, Sang-Hoon
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.468-470 / 2022
  • This paper designs a four-legged intelligent walking robot that negotiates non-planar terrain, such as gravel fields, using the reinforcement learning POMDP (Partially Observable Markov Decision Process) algorithm, and detects people using deep learning techniques. In the robot's embedded environment, we compare the performance of the one-stage detection algorithms YOLO-v7 and SSD in their base models and in lightweight or backbone-replaced variants, and optimize the detection speed of the selected SSD MobileNet-v2 model using TensorRT.
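Detector speed comparisons like the one described are typically made with a small timing harness; the sketch below is a generic latency measurement, not the TensorRT pipeline itself, and the dummy workload stands in for a model's inference call.

```python
import time

def mean_latency_ms(infer, warmup=10, iters=100):
    """Average wall-clock latency of one call to `infer`, in milliseconds."""
    for _ in range(warmup):      # warm-up runs are excluded from timing
        infer()
    t0 = time.perf_counter()
    for _ in range(iters):
        infer()
    return (time.perf_counter() - t0) / iters * 1000.0

# Dummy stand-in for a detector's inference call (~2 ms of "work").
def fake_inference():
    time.sleep(0.002)

print(f"{mean_latency_ms(fake_inference):.1f} ms")
```

Running the same harness against each candidate model on the target embedded board gives directly comparable numbers.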

A Reinforcement Learning Approach to Collaborative Filtering Considering Time-sequence of Ratings (평가의 시간 순서를 고려한 강화 학습 기반 협력적 여과)

  • Lee, Jung-Kyu;Oh, Byong-Hwa;Yang, Ji-Hoon
    • The KIPS Transactions: Part B / v.19B no.1 / pp.31-36 / 2012
  • In recent years there has been increasing interest in recommender systems, which provide users with personalized suggestions for products or services. In particular, research on collaborative filtering, which analyzes the relations between users and items, has become more active because of the Netflix Prize competition. This paper presents a reinforcement learning approach to collaborative filtering. By applying reinforcement learning techniques to movie ratings, we discovered the connection between the time sequence of past ratings and current ratings. To do so, we first formulated the collaborative filtering problem as a Markov Decision Process, and then trained a learning model that reflects this connection using Q-learning. The experimental results indicate that the time sequence of past ratings has a significant effect on current ratings.
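The MDP-plus-Q-learning recipe described above can be sketched in tabular form; the toy environment below is a hypothetical stand-in (states might encode a coarse summary of recent ratings), not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 3, 2
gamma, alpha, eps = 0.9, 0.1, 0.1
Q = np.zeros((n_states, n_actions))

def step(s, a):
    """Hypothetical environment: action 1 pays off only in state 2."""
    reward = 1.0 if (s == 2 and a == 1) else 0.0
    return reward, int(rng.integers(n_states))

s = 0
for _ in range(20000):
    # epsilon-greedy action selection
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
    r, s_next = step(s, a)
    # Q-learning update: bootstrap on the greedy value of the next state
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

print(np.argmax(Q, axis=1))   # greedy action per state
```

In the paper's setting, the learned greedy policy would correspond to the rating-dependent recommendation rule.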

Approximate Dynamic Programming Based Interceptor Fire Control and Effectiveness Analysis for M-To-M Engagement (근사적 동적계획을 활용한 요격통제 및 동시교전 효과분석)

  • Lee, Changseok;Kim, Ju-Hyun;Choi, Bong Wan;Kim, Kyeongtaek
    • Journal of the Korean Society for Aeronautical &amp; Space Sciences / v.50 no.4 / pp.287-295 / 2022
  • As the threat of low-altitude, long-range artillery has grown, development of an anti-artillery interception system to protect assets against such attacks is being launched. We view defense against long-range artillery attacks as a typical dynamic weapon target assignment (DWTA) problem. DWTA is a sequential decision process in which decisions made under uncertainty about future attacks affect the subsequent decision processes and their results; these are the defining characteristics of a Markov decision process (MDP). We formulate the problem as an MDP to examine the assignment policy for the defender. The proximity of the capital of South Korea to the North Korean border limits the computation time for a solution to a few seconds, within which it is impossible to compute the exact optimal solution. We apply an approximate dynamic programming (ADP) approach to check whether it can solve the MDP model within the processing-time limit. We employ the Shoot-Shoot-Look policy as a baseline strategy and compare it with the ADP approach for three scenarios. Simulation results show that the ADP approach provides better solutions than the baseline strategy.
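The Shoot-Shoot-Look baseline named above is easy to simulate: fire a two-shot salvo, assess, and fire again only if the target survives. The single-shot kill probability below is a hypothetical value, and the sketch illustrates only the baseline policy, not the paper's ADP solution.

```python
import numpy as np

rng = np.random.default_rng(0)
p_kill = 0.7   # hypothetical single-shot kill probability

def shoot_shoot_look():
    """SSL: fire two interceptors, assess, fire two more if the target survives."""
    shots = 2
    killed = bool((rng.random(2) < p_kill).any())
    if not killed:                 # "look": target survived the first salvo
        shots += 2
        killed = bool((rng.random(2) < p_kill).any())
    return killed, shots

results = [shoot_shoot_look() for _ in range(100_000)]
kill_rate = sum(k for k, _ in results) / len(results)
mean_shots = sum(s for _, s in results) / len(results)
print(round(kill_rate, 3), round(mean_shots, 2))

# Analytically: P(kill) = 1 - (1 - p)^4 and E[shots] = 2 + 2 * (1 - p)^2.
```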

Markov Modeling of Screening Strategies for Colorectal Cancer

  • Barouni, Mohsen;Larizadeh, Mohammad Hassan;Sabermahani, Asma;Ghaderi, Hossien
    • Asian Pacific Journal of Cancer Prevention / v.13 no.10 / pp.5125-5129 / 2012
  • Economic decision models are being increasingly used to assess medical interventions. Advances in this field are mainly due to the enhanced processing capacity of computers, the availability of specific software to perform the necessary tasks, and refined mathematical techniques. Here we estimated the incremental cost-effectiveness of ten strategies for colon cancer screening, as well as no screening, incorporating quality of life, noncompliance, and data on the costs and benefits of chemotherapy in Iran. We used a Markov model to measure the costs and quality-adjusted life expectancy of a 50-year-old average-risk Iranian without screening and with screening by each test. We tested the model with data from the Ministry of Health and the published literature. We considered costs from the perspective of a health insurance organization, inflated to 2011, with the Iranian Rial converted into US dollars. For the 10 strategies considered, we focused on three tests currently used for population screening in some Iranian provinces (Kerman, Golestan, Mazandaran, Ardabil, and Tehran): low-sensitivity guaiac fecal occult blood testing, performed annually; fecal immunochemical testing, performed annually; and colonoscopy, performed every 10 years. These strategies reduced the incidence of colorectal cancer by 39%, 60%, and 76%, and mortality by 50%, 69%, and 78%, respectively, compared with no screening. They generated incremental cost-effectiveness ratios (ICERs) of $9067, $654, and $8700 per quality-adjusted life year (QALY), respectively. Sensitivity analyses were conducted to assess the influence of various parameters on the economic evaluation of screening; the results were sensitive to probabilistic sensitivity analysis. Colonoscopy every ten years yielded the greatest net health value. Screening for colon cancer is economical and cost-effective at conventional levels of willingness to pay (WTP).
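The Markov cohort calculation behind such ICERs can be sketched in a few lines; every transition probability, cost, and utility below is a hypothetical illustration, not a value estimated in the paper.

```python
import numpy as np

# Illustrative 3-state Markov cohort model: Well, Cancer, Dead.
P_no_screen = np.array([[0.970, 0.020, 0.010],
                        [0.000, 0.800, 0.200],
                        [0.000, 0.000, 1.000]])
P_screen    = np.array([[0.985, 0.005, 0.010],   # screening cuts incidence
                        [0.000, 0.800, 0.200],
                        [0.000, 0.000, 1.000]])

utility = np.array([1.0, 0.6, 0.0])        # QALY weight per state-year
state_cost = np.array([0.0, 5000.0, 0.0])  # hypothetical annual treatment cost

def run_cohort(P, screen_cost, years=30, discount=0.03):
    """Discounted QALYs and costs for a cohort starting in 'Well'."""
    dist = np.array([1.0, 0.0, 0.0])
    qalys = costs = 0.0
    for t in range(years):
        d = 1.0 / (1.0 + discount) ** t
        qalys += d * dist @ utility
        costs += d * (dist @ state_cost + screen_cost * dist[0])
        dist = dist @ P                    # advance the cohort one cycle
    return qalys, costs

q0, c0 = run_cohort(P_no_screen, 0.0)
q1, c1 = run_cohort(P_screen, 500.0)       # hypothetical annual screening cost
icer = (c1 - c0) / (q1 - q0)               # incremental cost per QALY gained
print(f"ICER: ${icer:,.0f} per QALY")
```

Comparing the ICER against a willingness-to-pay threshold is then the cost-effectiveness decision rule the abstract applies.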

Resource Allocation Strategy of Internet of Vehicles Using Reinforcement Learning

  • Xi, Hongqi;Sun, Huijuan
    • Journal of Information Processing Systems / v.18 no.3 / pp.443-456 / 2022
  • An efficient and reasonable resource allocation strategy can greatly improve the service quality of the Internet of Vehicles (IoV). However, most current allocation methods suffer from an overestimation problem, making it difficult to provide high-performance IoV network services. To solve this problem, this paper proposes a network resource allocation strategy based on the deep reinforcement learning model DDQN (Double Deep Q-Network). First, the method builds a refined IoV model, including a communication model, a user-layer computing model, an edge-layer offloading model, and a mobility model, resembling actual complex IoV application scenarios. Then the DDQN model is used to solve the mathematical model of resource allocation. By decoupling the selection of the target Q-value action from the calculation of the target Q value, the overestimation phenomenon is avoided, providing higher-quality network services and superior computing and processing performance in complex real-world scenarios. Finally, simulation results show that the proposed method keeps network delay within 65 ms and shows excellent network performance in highly concurrent, complex scenes with a task data volume of 500 kbits.
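The decoupling that DDQN uses to avoid overestimation can be shown in two lines; the Q-values below are random stand-ins for the outputs of the online and target networks at the next state.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, n_actions = 0.99, 4

# Stand-ins for the two networks' Q-value vectors at the next state s'.
q_online_next = rng.normal(size=n_actions)
q_target_next = rng.normal(size=n_actions)
reward = 1.0

# Standard DQN target: max over the *target* network -> overestimation bias.
dqn_target = reward + gamma * q_target_next.max()

# Double DQN: the online network *selects* the action, the target network
# *evaluates* it, decoupling selection from evaluation.
best_action = int(np.argmax(q_online_next))
ddqn_target = reward + gamma * q_target_next[best_action]

print(dqn_target >= ddqn_target)   # True: the DDQN target never exceeds DQN's
```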

Optimal Dynamic Operating Policies for a Tandem Queueing Service System

  • Hwang, Dong-Joon
    • Journal of the Korean Operations Research and Management Science Society / v.4 no.1 / pp.51-67 / 1979
  • This paper considers the problem of determining an optimal dynamic operating policy for a two-stage tandem queueing service system in which the service facilities (or stages) can be operated at more than one service rate. At each period of the system's operation, the system manager must specify which of the available service rates is to be employed at each stage. The cost structure includes an operating cost for running each stage and a service facility profit earned when a service completion occurs at Stage 2. We assume that the system has a finite waiting capacity in front of each station and each customer requires two services which must be done in sequence, that is, customers must pass through Stage 1 and Stage 2 in that order. Processing must be in the order of arrival at each station. The objective is to minimize the total discounted expected cost in a two-stage tandem queueing service system, which we formulate as a Discrete-Time Markov Decision Process. We present analytical and numerical results that specify the form of the optimal dynamic operating policy for a two-stage tandem queueing service system.

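A discrete-time MDP of this kind can be solved by value iteration over the joint queue-length state; the sketch below uses hypothetical capacities, rates, costs, and profit, approximates the dynamics with at most one event per period, and maximizes discounted profit minus operating cost rather than reproducing the paper's exact cost-minimization model.

```python
from itertools import product

CAP = 3                                   # waiting capacity at each stage
RATES = [0.2, 0.35]                       # selectable completion probabilities
OP_COST = {0.2: 0.2, 0.35: 0.6}           # per-period cost of running a rate
ARRIVAL_P, PROFIT, BETA = 0.2, 10.0, 0.95

states = list(product(range(CAP + 1), repeat=2))   # (queue 1, queue 2)
V = {s: 0.0 for s in states}

for _ in range(300):                      # value iteration to convergence
    V_new = {}
    for n1, n2 in states:
        best = float("-inf")
        for m1, m2 in product(RATES, repeat=2):    # choose a rate per stage
            val = -(OP_COST[m1] + OP_COST[m2])
            moves = []                    # (prob, next state, reward)
            if n1 < CAP:                  # arrival joins queue 1
                moves.append((ARRIVAL_P, (n1 + 1, n2), 0.0))
            if n1 > 0 and n2 < CAP:       # stage-1 completion feeds stage 2
                moves.append((m1, (n1 - 1, n2 + 1), 0.0))
            if n2 > 0:                    # stage-2 completion earns the profit
                moves.append((m2, (n1, n2 - 1), PROFIT))
            stay_p = 1.0 - sum(p for p, _, _ in moves)
            moves.append((stay_p, (n1, n2), 0.0))
            val += sum(p * (r + BETA * V[s2]) for p, s2, r in moves)
            best = max(best, val)
        V_new[(n1, n2)] = best
    V = V_new

print(round(V[(0, 0)], 2))   # value of starting with both queues empty
```

The optimal dynamic operating policy is recovered by recording, for each state, the rate pair attaining the maximum.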

Reinforcement Learning-based Dynamic Weapon Assignment to Multi-Caliber Long-Range Artillery Attacks (다종 장사정포 공격에 대한 강화학습 기반의 동적 무기할당)

  • Hyeonho Kim;Jung Hun Kim;Joohoe Kong;Ji Hoon Kyung
    • Journal of Korean Society of Industrial and Systems Engineering / v.45 no.4 / pp.42-52 / 2022
  • North Korea continues to upgrade and display its long-range rocket launchers to emphasize its military strength. Recently the Republic of Korea kicked off the development of an anti-artillery interception system similar to Israel's "Iron Dome", designed to protect against North Korea's arsenal of long-range rockets. The system may not work smoothly without a function that assigns interceptors to incoming artillery rockets of various calibers. We view the assignment task as a dynamic weapon target assignment (DWTA) problem. DWTA is a multistage decision process in which a decision at one stage affects the decision processes and their results at subsequent stages. We represent the DWTA problem as a Markov decision process (MDP). The distance from Seoul to North Korea's multiple rocket launchers positioned near the border limits the processing time of the model solver to only a few seconds. It is impossible to compute the exact optimal solution within the allowed time interval due to the curse of dimensionality inherent in the MDP model of a practical DWTA problem. We apply two reinforcement-learning-based algorithms to obtain an approximate solution of the MDP model within the time limit. To check the quality of the approximate solution, we adopt the Shoot-Shoot-Look (SSL) policy as a baseline. Simulation results show that both algorithms provide better solutions than the baseline strategy.
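For intuition about what a single stage of weapon-target assignment optimizes, a common greedy marginal-return heuristic can be sketched as follows; this is a textbook baseline, not one of the paper's reinforcement-learning algorithms, and the threat values and kill probability are hypothetical.

```python
# Each interceptor goes to the target whose expected surviving threat
# value it reduces the most.
values = [10.0, 6.0, 3.0]       # value protected by destroying each rocket
p_kill = 0.7                    # single-interceptor kill probability
n_interceptors = 4

surviving = values[:]           # expected surviving threat value per target
assignment = [0] * len(values)
for _ in range(n_interceptors):
    # marginal gain of one more interceptor on target t: surviving[t] * p_kill
    t = max(range(len(values)), key=lambda t: surviving[t] * p_kill)
    assignment[t] += 1
    surviving[t] *= 1.0 - p_kill
print(assignment)   # -> [2, 1, 1]
```

The dynamic (multistage) problem the paper studies is harder precisely because the stage-by-stage assignments interact through future arrivals and remaining inventory.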