• Title/Summary/Keyword: Image and Video


The Importance of Video Fluoroscopy Swallowing Study for Nasogastric Tube Removal in Rehabilitation Patients (재활치료환자의 비위관(nasogastric tube)제거에 따른 비디오 투시연하검사(VFSS)의 중요성 평가)

  • Jung, Myoyoung;Choi, Namgil;Han, Jaebok;Song, Jongnam;Kim, Weonjin
    • Journal of the Korean Society of Radiology
    • /
    • v.9 no.1
    • /
    • pp.1-7
    • /
    • 2015
  • Acute-phase patients who are unconscious due to cerebral infarction, cranial nerve disorders, or cerebral apoplexy are susceptible to aspiration pneumonia caused by dysphagia. In these cases, a nasogastric tube is inserted to supply nutrients. Although bedside screening tests are administered during recovery after rehabilitation, clinical examination may fail to detect asymptomatic aspiration. Therefore, a videofluoroscopic swallowing study (VFSS) was performed in 10 patients with dysphagia after rehabilitation therapy; these patients had nasogastric tubes inserted, and a rehabilitation specialist assessed the degree of swallowing according to the patients' diet and posture. If aspiration or swallowing difficulties were observed, dysphagia rehabilitation therapy was administered. The patients were reassessed approximately 30-50 days after therapy, depending on their condition, and if aspiration was not observed, the nasogastric tube was removed. A functional dysphagia scale was used to analyze the VFSS images, and the scores were analyzed statistically. The mean score of patients with nasogastric tubes was 49.79 ± 9.431, indicating aspiration risk, whereas the group without nasogastric tubes had a mean score of 11.20 ± 1.932, indicating low aspiration risk; a significantly lower score was thus associated with nasogastric tube removal. The Mann-Whitney test was used to compare the two groups, and the difference was statistically significant (P < 0.001). In conclusion, VFSS can effectively assess movements and structural abnormalities of the oral cavity, pharynx, and esophagus; it can also determine aspiration status and identify an appropriate diet or swallowing posture for the patient. VFSS can therefore serve as a reliable standard test of swallowing for deciding on nasogastric tube removal.
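The group comparison above can be sketched in a few lines: Functional Dysphagia Scale scores of the tube-retained and tube-removed groups are compared with a Mann-Whitney U test. The scores below are hypothetical stand-ins for illustration, not the study's actual data.

```python
# Minimal Mann-Whitney U sketch; the score lists are hypothetical, not the study's data.
def mann_whitney_u(a, b):
    """Return the Mann-Whitney U statistic for two independent samples."""
    # Rank all observations together, assigning mean ranks to tied values.
    combined = sorted((x, g) for g, xs in enumerate((a, b)) for x in xs)
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        mean_rank = (i + j + 1) / 2  # ranks are 1-based
        ranks.setdefault(combined[i][0], mean_rank)
        i = j
    r_a = sum(ranks[x] for x in a)           # rank sum of group a
    u_a = r_a - len(a) * (len(a) + 1) / 2    # U statistic for group a
    return min(u_a, len(a) * len(b) - u_a)

with_tube = [48, 52, 61, 45, 43]    # hypothetical FDS scores (higher = worse)
without_tube = [10, 12, 9, 13, 11]  # hypothetical scores after tube removal

u = mann_whitney_u(with_tube, without_tube)
print(u)  # U = 0.0 here: the two groups' scores do not overlap at all
```

A U near zero, as in this toy example, is what drives the very small P value reported in the abstract; in practice one would use a library routine that also returns the P value.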

A Fast 4X4 Intra Prediction Method using Motion Vector Information and Statistical Mode Correlation between 16X16 and 4X4 Intra Prediction in H.264|MPEG-4 AVC (H.264|MPEG-4 AVC 비디오 부호화에서 움직임 벡터 정보와 16X16 및 4X4 화면 내 예측 최종 모드간 통계적 연관성을 이용한 화면 간 프레임에서의 4X4 화면 내 예측 고속화 방법)

  • Na, Tae-Young;Jung, Yun-Sik;Kim, Mun-Churl;Hahm, Sang-Jin;Park, Chang-Seob;Park, Keun-Soo
    • Journal of Broadcast Engineering
    • /
    • v.13 no.2
    • /
    • pp.200-213
    • /
    • 2008
  • H.264|MPEG-4 AVC is a video coding standard defined by the JVT (Joint Video Team), formed by ITU-T and ISO/IEC. Many techniques are adopted for compression efficiency; intra prediction in an inter frame is one example, but it leads to an excessive amount of encoding time due to candidate mode decision and RD cost calculation. For this reason, fast determination of the best intra prediction mode is the main issue for saving encoding time. In this paper, the number of candidate modes for 4×4 intra prediction is reduced using the statistical relation between 16×16 and 4×4 intra predictions. First, the motion vector obtained after inter prediction is used to predict a block mode for each macroblock. If intra prediction is needed, a correlation table between the 16×16 and 4×4 intra predicted modes is created from the probabilities gathered during each I-frame coding process. Second, using this result, only the candidate modes for 4×4 intra prediction that reach a predefined probability value are considered within the same GOP. For the experiments, JM11.0, the reference software of H.264|MPEG-4 AVC, is used, and the results show that the encoding time could be reduced by up to 51.24% with negligible PSNR drop and bitrate increase.
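The candidate-mode reduction described above can be sketched as follows: conditional frequencies of 4×4 intra modes given the chosen 16×16 intra mode, gathered while coding I frames, are used to keep only the most likely 4×4 modes until a preset cumulative-probability threshold is reached. The co-occurrence counts and the threshold value below are hypothetical, not taken from the paper.

```python
# Sketch of candidate-mode reduction from a 16x16 -> 4x4 mode correlation table.
# Counts and threshold are hypothetical illustrations.
THRESHOLD = 0.9  # hypothetical cumulative-probability cutoff

def candidate_modes(cooccurrence_counts, mode16, threshold=THRESHOLD):
    """Return the reduced 4x4 candidate mode list for a given 16x16 mode."""
    counts = cooccurrence_counts[mode16]
    total = sum(counts.values())
    cumulative, kept = 0.0, []
    # Consider 4x4 modes from most to least frequently co-occurring.
    for mode4, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        kept.append(mode4)
        cumulative += n / total
        if cumulative >= threshold:
            break
    return kept

# Hypothetical co-occurrence counts collected during I-frame coding:
# {16x16 mode: {4x4 mode: count}}
stats = {0: {0: 60, 1: 25, 2: 10, 3: 3, 4: 2}}

print(candidate_modes(stats, 0))  # [0, 1, 2] -- covers >= 90% of past cases
```

Only the surviving modes then undergo the expensive RD cost evaluation, which is where the reported encoding-time saving comes from.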

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB
    • /
    • v.14B no.4
    • /
    • pp.311-320
    • /
    • 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has focused on facial expression control itself rather than on 3D head motion tracking; however, head motion tracking is one of the critical issues to be solved for developing realistic facial animation. In this research, we developed an integrated animation system that includes 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, the facial region is detected efficiently from the video frame with a non-parametric HT skin color model and template matching. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced based on the optical flow method. For facial expression cloning, we utilize a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-stage fitting process: the control points of the 3D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are changed by use of a Radial Basis Function (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from the input video images.
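The RBF step described above can be sketched as follows: displacements measured at a few control (feature) points are propagated to surrounding non-feature vertices with a Radial Basis Function interpolant. The Gaussian kernel, its width, and all point coordinates below are our assumptions for illustration; the abstract only states that an RBF is used.

```python
# Sketch of RBF-based deformation of non-feature points from control-point
# displacements. Kernel choice and sample coordinates are assumptions.
import numpy as np

def rbf_deform(controls, displacements, vertices, sigma=1.0):
    """Interpolate control-point displacements onto arbitrary vertices."""
    def kernel(a, b):
        # Gaussian RBF on pairwise squared distances between point sets.
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    # Solve for per-control weights so the interpolant reproduces the
    # given displacements exactly at the control points.
    w = np.linalg.solve(kernel(controls, controls), displacements)
    return kernel(vertices, controls) @ w

controls = np.array([[0.0, 0.0], [1.0, 0.0]])        # feature (control) points
displacements = np.array([[0.0, 0.2], [0.0, -0.1]])  # their measured motion
vertices = np.array([[0.0, 0.0], [0.5, 0.0]])        # nearby non-feature points

moved = vertices + rbf_deform(controls, displacements, vertices)
```

By construction the interpolant reproduces the displacements exactly at the control points, while non-feature points in between receive a smooth blend of the neighboring control motions, which is what makes the cloned expression deform the whole mesh coherently.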