• Title/Summary/Keyword: Q, S start and end point

Search Result: 2

Detection of QRS Feature Based on Phase Transition Tracking for Premature Ventricular Contraction Classification (조기심실수축 분류를 위한 위상 변이 추적 기반의 QRS 특징점 검출)

  • Cho, Ik-sung;Yoon, Jeong-oh;Kwon, Hyeog-soong
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.2 / pp.427-436 / 2016
  • In general, the QRS duration represents the interval from the Q start point to the S end point. However, because the criteria for QRS duration are vague and the Q and S points are not detected accurately, arrhythmia classification performance can be degraded. In this paper, we propose extracting the Q and S start and end points as QRS features, based on a phase transition tracking method, after detecting the R wave, the largest peak of the electrocardiogram (ECG) signal. To this end, we detect the R wave from the noise-free ECG signal obtained through a preprocessing step. We then classify the QRS pattern using the differentiated ECG signal and extract the Q and S start and end points by tracking the direction and count of phase transitions around the R wave. R-wave detection performance is evaluated on 48 records of the MIT-BIH arrhythmia database, achieving an average detection rate of 99.60%. PVC classification is evaluated on 9 records of the MIT-BIH arrhythmia database that each contain more than 30 premature ventricular contractions (PVCs), achieving an average PVC detection rate of 94.12%.
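The phase-tracking idea described in the abstract above can be sketched roughly as follows: find the R peak as the largest sample, then walk outward along the differentiated signal until the slope (the "phase") flattens below a threshold. The function name, the threshold value, and the toy Gaussian beat are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def detect_qrs_points(ecg, deriv_thresh=0.01):
    """Rough sketch of differentiation-based QRS landmark tracking.
    The threshold is illustrative; real ECG needs band-pass filtering
    and per-record calibration as described in the paper."""
    r = int(np.argmax(ecg))                 # R wave: largest peak of the beat
    d = np.diff(ecg)                        # first difference (slope proxy)
    # Walk left from R: take Q start where the slope flattens out
    q = r
    while q > 1 and abs(d[q - 1]) > deriv_thresh:
        q -= 1
    # Walk right from R: take S end where the slope flattens out
    s = r
    while s < len(d) - 1 and abs(d[s]) > deriv_thresh:
        s += 1
    return q, r, s

# Toy "beat": a narrow Gaussian spike on a flat baseline
t = np.linspace(-1, 1, 201)
beat = np.exp(-(t / 0.05) ** 2)
q, r, s = detect_qrs_points(beat)
```

On this toy beat the R index falls at the spike center and the Q/S indices bracket it, so `s - q` gives a crude QRS-duration estimate in samples.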

Path Planning with Obstacle Avoidance Based on Double Deep Q Networks (이중 심층 Q 네트워크 기반 장애물 회피 경로 계획)

  • Yongjiang Zhao;Senfeng Cen;Seung-Je Seong;J.G. Hur;Chang-Gyoon Lim
    • The Journal of the Korea institute of electronic communication sciences / v.18 no.2 / pp.231-240 / 2023
  • It remains a challenge for robots to learn to avoid obstacles automatically during path planning with deep reinforcement learning (DRL). More and more researchers use DRL to train a robot in a simulated environment and verify that DRL can achieve automatic obstacle avoidance. However, because of influence factors from different environments, robots, and sensors, automatic obstacle avoidance is rarely realized in real scenarios. To learn automatic path planning with obstacle avoidance in an actual scene, we designed a simple testbed with a wall and an obstacle, and mounted a camera on the robot. The robot's goal is to travel from the start point to the end point as quickly as possible without hitting the wall. To teach the robot to avoid the wall and the obstacle, we use double deep Q networks (DDQN) to verify the feasibility of DRL for automatic obstacle avoidance. In the experiment, the robot used is a Jetbot, and the approach can be applied to robot task scenarios that require obstacle avoidance in automated path planning.
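The core of the DDQN update used in the abstract above is that the online network *selects* the next action while the target network *evaluates* it, which reduces the overestimation bias of plain DQN. A minimal sketch of that target computation, with illustrative Q-values and a hypothetical four-action space (not the paper's actual network or action set):

```python
import numpy as np

def ddqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Double DQN target: the online network picks the greedy next action,
    and the target network supplies its value. gamma is illustrative."""
    if done:
        return reward                              # terminal: no bootstrap
    a_star = int(np.argmax(next_q_online))         # selection: online net
    return reward + gamma * next_q_target[a_star]  # evaluation: target net

# Illustrative next-state Q-values for 4 actions (e.g. forward/left/right/stop)
q_online = np.array([1.0, 3.0, 2.0, 0.5])
q_target = np.array([1.2, 2.5, 2.8, 0.4])
y = ddqn_target(reward=1.0, next_q_online=q_online, next_q_target=q_target)
```

Note that the target uses `q_target[1] = 2.5` (the value of the action the online net prefers), not the target net's own maximum `2.8`; that decoupling is what distinguishes DDQN from DQN.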