• Title/Summary/Keyword: Data-driven gating


Quantitative Comparison of Motion Artifacts in PET Images using Data-Based Gating (데이터 기반 게이팅을 이용한 PET 영상의 움직임 인공물의 정량적 비교)

  • Kim, Jin Young; Jin, Gye Hwan
    • Journal of the Korean Society of Radiology, v.17 no.1, pp.91-98, 2023
  • PET can quantify physiological indicators in the human body by imaging the distribution of various biochemical substances, so it is used effectively for studying biochemical and pathological phenomena, diagnosing disease, determining prognosis after treatment, and planning treatment. However, since respiratory motion artifacts may occur due to movement of the diaphragm during breathing, this study evaluates the practical effect of a device-less data-driven gating (DDG) technique called MotionFree combined with a phase-based gating correction method called Q.static scan mode. Images at varying moving distances (0 cm, 1 cm, 2 cm, 3 cm) were acquired using a breathing-simulating moving phantom. The diameters of the six spheres in the phantom are 10 mm, 13 mm, 17 mm, 22 mm, 28 mm, and 37 mm. According to maximum standardized uptake value (SUVmax) measurements, when DDG was applied, the average SUVmax improved by 1.92, 2.48, 3.23, and 3.00 at each moving distance, respectively, and by 2.37, 2.02, 1.44, 1.20, 0.42, and 0.52 for each sphere diameter, respectively.
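The SUVmax metric underlying the comparison above follows the standard definition: tissue activity concentration normalized by injected dose per unit body weight, with the maximum taken over the region of interest. A minimal sketch, using hypothetical voxel values (the function name, ROI numbers, dose, and weight are illustrative, not from the study):

```python
import numpy as np

def suv_max(voxels_bq_per_ml, injected_dose_bq, body_weight_g):
    """SUVmax for a region of interest.

    SUV = tissue activity concentration / (injected dose / body weight);
    SUVmax takes the hottest voxel in the ROI.
    """
    suv = np.asarray(voxels_bq_per_ml) / (injected_dose_bq / body_weight_g)
    return float(suv.max())

# Hypothetical ROI: respiratory motion smears activity over more voxels,
# lowering the peak, so gating (which restores the peak) raises SUVmax.
roi_ungated = [1200.0, 1500.0, 1800.0]   # Bq/mL, motion-blurred
roi_gated   = [1200.0, 1500.0, 2400.0]   # Bq/mL, motion-corrected
dose, weight = 185e6, 70e3               # 185 MBq injected, 70 kg patient

improvement = suv_max(roi_gated, dose, weight) - suv_max(roi_ungated, dose, weight)
```

A positive `improvement` is what the study reports as the "correction effect": the gated reconstruction recovers activity that motion had spread out, and the effect is largest for small spheres and long moving distances, where blur dominates.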

A study on the waveform-based end-to-end deep convolutional neural network for weakly supervised sound event detection (약지도 음향 이벤트 검출을 위한 파형 기반의 종단간 심층 콘볼루션 신경망에 대한 연구)

  • Lee, Seokjin; Kim, Minhan; Jeong, Youngho
    • The Journal of the Acoustical Society of Korea, v.39 no.1, pp.24-31, 2020
  • In this paper, a deep convolutional neural network for sound event detection is studied. In particular, an end-to-end network that generates detection results directly from the input audio waveform is studied for the weakly supervised problem, which involves weakly-labeled and unlabeled data. The proposed system is based on a network of deeply stacked 1-dimensional convolutional layers, enhanced by skip connections and a gating mechanism, and further improved by detection post-processing; a training step using the mean-teacher model is added to handle the weakly supervised data. The system was evaluated on the Detection and Classification of Acoustic Scenes and Events (DCASE) 2019 Task 4 dataset, and the results show F1-scores of 54 % (segment-based) and 32 % (event-based).
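The "gating mechanism with a skip connection" described in the abstract can be sketched as a gated 1-D convolution block: a tanh feature branch multiplied element-wise by a sigmoid gate, added back to the input. This is a generic illustration of the technique, not the paper's actual architecture (the function names, kernel size, and single-channel simplification are assumptions):

```python
import numpy as np

def conv1d(x, w):
    """'Same'-padded single-channel 1-D convolution (cross-correlation)."""
    k = len(w)
    xp = np.pad(x, (k // 2, k - 1 - k // 2))
    return np.array([np.dot(xp[i:i + k], w) for i in range(len(x))])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_conv_block(x, w_feat, w_gate):
    """One gated convolution block with a residual (skip) connection:

        output = x + tanh(conv(x, w_feat)) * sigmoid(conv(x, w_gate))

    The sigmoid branch gates how much of each learned feature passes
    through, while the skip connection lets the signal (and gradients
    during training) bypass the block, which helps deep stacks train.
    """
    return x + np.tanh(conv1d(x, w_feat)) * sigmoid(conv1d(x, w_gate))

rng = np.random.default_rng(0)
x = rng.standard_normal(16)                     # toy waveform segment
y = gated_conv_block(x, rng.standard_normal(3), rng.standard_normal(3))
```

In an end-to-end waveform model, many such blocks would be stacked on multi-channel feature maps before a pooling stage produces frame-level event probabilities; because tanh and sigmoid are both bounded, each block's update to the skip path stays in (-1, 1).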