• Title/Summary/Keyword: Video Training

Search Result 409, Processing Time 0.022 seconds

Links to Prosocial Factors and Alpha Asymmetry in Adolescents during Violent and Non-Violent Video Game Play

  • Lianekhammy, Joann;Werner-Wilson, Ronald
    • Child Studies in Asia-Pacific Contexts / v.5 no.2 / pp.63-81 / 2015
  • The present study examined electrical brain activity in participants playing three different video games. Forty-five adolescents aged 13-17 (M=14.3 years, SD=1.5) were randomly assigned to play a violent game, a non-violent game, or a brain training game. Electroencephalography (EEG) was recorded during game play. Following game play, participants completed a questionnaire measuring prosocial personality. Results show an association between prosocial personality factors and differential patterns of brain activation across game groups. Among adolescents playing the brain training game, empathy was positively correlated with frontal asymmetry scores, whereas empathy scores in the non-violent and violent game groups were negatively linked to frontal asymmetric activation. Higher helpfulness scores in the non-violent game group were positively associated with left-hemisphere activation. Implications of these findings are discussed in the manuscript.
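As a generic illustration (not code from the study), frontal alpha asymmetry is conventionally scored as ln(right alpha power) minus ln(left alpha power) over homologous frontal electrodes such as F3/F4, and can then be correlated with questionnaire scores; every numeric value below is hypothetical:

```python
import math

def frontal_alpha_asymmetry(left_alpha_power, right_alpha_power):
    """Conventional asymmetry score: ln(right) - ln(left) alpha power.
    Positive values indicate relatively greater left-frontal activation,
    since alpha power is inversely related to cortical activity."""
    return math.log(right_alpha_power) - math.log(left_alpha_power)

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-participant alpha powers at F3 (left) and F4 (right)
powers = [(4.1, 5.0), (3.8, 4.2), (5.2, 4.9)]
asym = [frontal_alpha_asymmetry(l, r) for l, r in powers]
empathy = [21, 18, 12]  # hypothetical questionnaire scores
r = pearson_r(empathy, asym)
```

The sign convention (right minus left, in log units) is the common one in this literature; the study's exact electrode montage and scoring are not given in the abstract.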

Training-Based Noise Reduction Method Considering Noise Correlation for Visual Quality Improvement of Recorded Analog Video (녹화된 아날로그 영상의 화질 개선을 위한 잡음 연관성을 고려한 학습기반 잡음개선 기법)

  • Kim, Sung-Deuk;Lim, Kyoung-Won
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.6 / pp.28-38 / 2010
  • In order to remove the noise contained in recorded analog video, it is important to recognize the real characteristics and strength of the noise. This paper presents an efficient training-based noise reduction method for recorded analog video, based on an analysis of the noise characteristics of analog video captured in a real broadcasting system. First, we show that recorded analog video contains non-negligible noise correlation and describe the limitations of traditional noise estimation and reduction methods based on the additive white Gaussian noise (AWGN) model. We then show that an auto-regressive (AR) model that accounts for noise correlation can successfully estimate and synthesize the noise contained in the recorded video, and the estimated AR parameters are used in the training-based noise reduction scheme to suppress the video noise. Experimental results show that the proposed method can be efficiently applied to noise reduction of recorded analog video with non-negligible noise correlation.
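The abstract's central point, that correlated noise violates the AWGN assumption and is better captured by an AR model, can be sketched generically (this is not the paper's algorithm): an AR(1) process has a nonzero lag-1 autocorrelation equal to its coefficient, which the Yule-Walker relation lets us recover from a noise sample, whereas white noise gives a lag-1 autocorrelation near zero.

```python
import random

def autocorr(x, lag):
    """Sample autocorrelation of sequence x at the given lag."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x)
    cov = sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag))
    return cov / var

def synthesize_ar1(a1, n, sigma=1.0, seed=0):
    """Generate correlated noise x[t] = a1*x[t-1] + e[t], e ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n - 1):
        x.append(a1 * x[-1] + rng.gauss(0.0, sigma))
    return x

correlated = synthesize_ar1(0.6, 5000)
a1_hat = autocorr(correlated, 1)  # Yule-Walker estimate of the AR(1) coefficient
white = synthesize_ar1(0.0, 5000, seed=1)
r1_white = autocorr(white, 1)     # near zero for uncorrelated (AWGN-like) noise
```

The estimated coefficient can then drive noise synthesis for training data, which is the role the abstract assigns to the AR parameters; the paper's AR order and estimation details are not stated in the abstract.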

The necessity for education on endotracheal intubation through video laryngoscope - A focused on paramedic students - (비디오 후두경을 통한 기관내 삽관 교육의 필요성 - 응급구조과 학생을 중심으로 -)

  • Ham, Young-Lim;Kim, Jin-Hwa;Lee, Jae-Gook
    • The Korean Journal of Emergency Medical Services / v.23 no.1 / pp.7-17 / 2019
  • Purpose: The aim of this study was to verify the necessity of endotracheal intubation training with the video laryngoscope and to provide basic data to inform video laryngoscope education. Methods: Eighty paramedic students participated in this study. A survey was conducted from November 5 to December 7, 2018. Data were analyzed with independent t-tests and the chi-squared test. Results: The video laryngoscope is a highly usable instrument that can easily be applied during training. It provided better visual evaluation of the normal airway (p=.004), of the airway with a cervical collar and head fixation (p<.001), and of the airway with tongue edema (p<.001). Endotracheal intubation of the normal airway was significantly faster with the video laryngoscope than with the direct laryngoscope, and the intubation success rate was significantly higher in the video laryngoscope group. Conclusion: This study suggests the necessity of education on endotracheal intubation with the video laryngoscope in the professional airway management training course for paramedic students. The video laryngoscope is easier to apply than the direct laryngoscope for intubation in various clinical situations.

Comparison of Postural Control Ability according to the Various Video Contents during Action Observations

  • Goo, Bon Wook;Lee, Mi Young
    • The Journal of Korean Physical Therapy / v.33 no.1 / pp.16-20 / 2021
  • Purpose: This study examined the effects of the type of video content used for action observation on postural control ability. Methods: The participants were 48 healthy adults. Participants stood with arms crossed, hands on opposite shoulders, and one foot placed directly in line in front of the other, while watching videos on a monitor. Three 30-second videos (natural, stable balance posture, and unstable balance posture) were presented in random order, with a 15-second rest between videos. During action observation, postural control ability was measured using a TekScan MatScan® system. Results: The area and distance of centre-of-pressure (COP) movement, including the anteroposterior and mediolateral distances, differed significantly by video type (p<0.05). The stable and unstable balance posture videos differed significantly in COP distance and in the anteroposterior and mediolateral distances (p<0.05). Conclusion: This study suggests that the choice of video content is important in action-observation training, and that action-observation training can help improve postural control.
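The COP outcomes named in the abstract (movement area proxies, path distance, anteroposterior and mediolateral excursion) can be illustrated with a generic computation over pressure-plate samples; this is a sketch with made-up coordinates, not the study's analysis pipeline:

```python
import math

def cop_metrics(cop_samples):
    """Compute total COP path length and mediolateral/anteroposterior ranges
    from a list of (x, y) centre-of-pressure samples (x: ML axis, y: AP axis)."""
    path = sum(math.dist(a, b) for a, b in zip(cop_samples, cop_samples[1:]))
    xs = [p[0] for p in cop_samples]
    ys = [p[1] for p in cop_samples]
    return {"path_length": path,
            "ml_range": max(xs) - min(xs),
            "ap_range": max(ys) - min(ys)}

# Hypothetical 4-sample sway trace (cm)
metrics = cop_metrics([(0.0, 0.0), (0.3, 0.1), (0.1, 0.4), (-0.2, 0.2)])
```

Shorter path length and smaller ranges are the usual markers of better postural control, which is the direction of comparison the abstract reports between video types.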

The Effects of Communication Training Applying Gagné's Nine Events of Instruction and Video Clips upon Communication Competency and Interpersonal Relations in Nursing Students (가네의 9가지 수업사태와 비디오 클립을 적용한 의사소통 훈련이 간호대학생의 의사소통 능력과 대인관계에 미치는 효과)

  • Cha, Jin-Gyung;Kim, Hyang-Ha
    • Journal of East-West Nursing Research / v.23 no.2 / pp.115-123 / 2017
  • Purpose: The purpose of this study was to determine the effects of communication training using Gagné's nine events of instruction and video clips on the communication competency and interpersonal relations of nursing students. Methods: The participants were 79 nursing students (41 in the intervention group, 38 in the control group). A nonequivalent control group design was used. The research was carried out from September 7 to November 20, 2015. Communication training using Gagné's nine events of instruction and video clips was provided to the intervention group in one 180-minute session per week for 9 weeks. Data were analyzed using the chi-square test, independent t-test, and ANCOVA with SPSS/PC version 21.0. Results: The intervention group reported significantly higher scores for communication competency (F=8.41, p=.005) and interpersonal relations (F=8.97, p=.004) than the control group at the completion of the intervention. Conclusion: The findings of this study show that instruction based on Gagné's nine events and video clips is an effective communication training method for improving communication competency and interpersonal relations in nursing students. A randomized clinical trial is needed to confirm its value for nursing students.

Style Synthesis of Speech Videos Through Generative Adversarial Neural Networks (적대적 생성 신경망을 통한 얼굴 비디오 스타일 합성 연구)

  • Choi, Hee Jo;Park, Goo Man
    • KIPS Transactions on Software and Data Engineering / v.11 no.11 / pp.465-472 / 2022
  • In this paper, a style synthesis network is trained to generate style-synthesized video by combining StyleGAN-based style synthesis with a video synthesis network. To address the problem that gaze or expression does not transfer stably, 3D face reconstruction is applied so that important features such as head pose, gaze, and expression can be controlled using 3D face information. In addition, by training the Head2Head network's discriminators for dynamics, mouth shape, image, and gaze, it is possible to create a stable style-synthesized video that maintains plausibility and consistency. Using the FaceForensics dataset and the MetFaces dataset, it was confirmed that performance improved: one video was converted into another while maintaining consistent movement of the target face, and natural results were generated through video synthesis using 3D face information from the source video's face.

A Study on the Alternative Method of Video Characteristics Using Captioning in Text-Video Retrieval Model (텍스트-비디오 검색 모델에서의 캡션을 활용한 비디오 특성 대체 방안 연구)

  • Lee, Dong-hun;Hur, Chan;Park, Hyeyoung;Park, Sang-hyo
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.6 / pp.347-353 / 2022
  • In this paper, we propose a text-video retrieval model that replaces visual video features with captions. In general, existing embedding-based models involve both joint embedding space construction and CNN-based video encoding, which require heavy computation at training as well as inference time. To overcome this problem, we introduce a video-captioning module and replace the visual features of each video with the captions it generates. Specifically, we adopt a caption generator that converts candidate videos into captions at inference time, enabling direct comparison between the text query and candidate videos without a joint embedding space. Experiments on two benchmark datasets, MSR-VTT and VATEX, show that the proposed model successfully reduces computation and inference time by skipping visual processing and joint embedding space construction.
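The retrieval step the abstract describes, comparing the text query directly against per-video captions instead of against video embeddings, can be sketched in a toy form. This generic bag-of-words cosine ranking stands in for whatever text-matching model the paper actually uses, and the captions below are invented stand-ins for captioner output:

```python
import math
from collections import Counter

def cosine_sim(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, video_captions):
    """Rank candidate videos by text similarity between the query and the
    caption generated for each video -- no joint embedding space needed."""
    q = Counter(query.lower().split())
    scored = [(vid, cosine_sim(q, Counter(cap.lower().split())))
              for vid, cap in video_captions.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)

captions = {  # hypothetical captioner outputs for two candidate videos
    "v1": "a man is playing a guitar on stage",
    "v2": "a dog runs across a grassy field",
}
ranking = retrieve("man playing guitar", captions)
```

The design point mirrors the abstract: once videos are reduced to text, the expensive CNN video encoder and joint embedding space drop out of the inference path.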

The Effects of Child Cardiopulmonary Resuscitation Education for Childcare Teachers with a Video Self-Instruction Program (Video Self-Instruction Program을 이용한 보육교사의 소아심폐소생술 교육의 효과)

  • Kim, Geon-Hee
    • The Korean Journal of Emergency Medical Services / v.13 no.2 / pp.87-98 / 2009
  • Purpose: This study compared the educational effects of a video self-instruction (VSI) program for child CPR education on childcare teachers, applying the 2006 KACPR Guideline. Using a nonequivalent control group posttest quasi-experimental design, the study examined the educational effects on a group that received no instruction from an instructor, a group that received instruction, and a group that received an extra three-minute practice session in addition to instruction. Methods: Data were gathered from August 6 to 18, 2008. The Knowledge Instrument of CPR by Connolly (2006) was used along with the National Practice Test Protocol for Class 1 Emergency Medical Technicians (2007) and the Common Protocol for CPR (2006) to examine child CPR performance. The subjects' technical accuracy of child CPR was evaluated by recording the guide screen of a Laerdal Resusci® Junior CPR Manikin with a video camera and applying the Skill Guide Checklist of the Common Protocol for CPR (2006). There were three subject groups: the control group, 29 childcare teachers randomly assigned to receive the VSI program for child CPR without instructor guidance; experimental group I, 22 teachers randomly assigned to receive the program with instructor guidance; and experimental group II, 23 teachers randomly assigned to receive an extra three-minute practice session in addition to the program and instructor guidance. The data were analyzed with SPSS/PC+ (version 14.0) using frequency, percentage, the chi-square test, ANOVA, and the Scheffé test. Results: 1) There were no statistically significant differences among the groups in knowledge scores after the child CPR education (F=1.030, p=.362). 2) There were statistically significant differences among the groups in performance abilities (F=13.625, p<.001). 3) There were no statistically significant differences among the groups in technical accuracy of mouth-to-mouth resuscitation (F=1.610, p=.207). 4) There were no statistically significant differences among the groups in technical accuracy of chest compression (F=1.484, p=.234). Conclusion: The results indicate that childcare teachers can improve their child CPR performance when instructors teach actively and extra practice time is secured through a VSI program. Education should also concentrate on the items with lower knowledge scores so that teachers learn accurate child CPR theory, and VSI programs under diverse conditions are needed to increase the effectiveness of child CPR training among childcare teachers.


Video Object Segmentation with Weakly Temporal Information

  • Zhang, Yikun;Yao, Rui;Jiang, Qingnan;Zhang, Changbin;Wang, Shi
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.3 / pp.1434-1449 / 2019
  • Video object segmentation is a significant task in computer vision, but its performance is still not satisfactory. This paper presents a video object segmentation method that uses weak temporal information. Motivated by the observation that an object's motion is continuous and smooth and that its appearance changes little between adjacent frames of a video sequence, we use a feed-forward architecture with motion estimation to predict the mask of the current frame. We extend the network with an additional input channel for the previous frame's segmentation result: the previous mask, after processing, is fed into this expanded channel, and the temporal feature of the object extracted from it is fused with the other feature maps to generate the final mask. In addition, we introduce multi-mask guidance to improve the stability of the model, and we further enhance segmentation performance by additional training with the masks already obtained. Experiments show that our method achieves competitive results on single-object segmentation on DAVIS-2016 compared to state-of-the-art algorithms.
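The "extra mask channel" idea above amounts to stacking the previous frame's predicted mask alongside the current frame's image channels before the network sees them. A minimal structural sketch (plain lists standing in for tensors; the paper's actual preprocessing and network are not shown):

```python
def build_input_tensor(frame_channels, prev_mask):
    """Stack the current frame's image channels with the previous frame's
    predicted mask as one extra channel -- the (C+1, H, W) input that a
    mask-propagation segmentation network would consume."""
    h, w = len(prev_mask), len(prev_mask[0])
    for ch in frame_channels:
        assert len(ch) == h and len(ch[0]) == w, "channel/mask size mismatch"
    return frame_channels + [prev_mask]

# Hypothetical 2x2 frame with three colour channels plus a previous-frame mask
rgb = [[[0.1, 0.2], [0.3, 0.4]],
       [[0.1, 0.2], [0.3, 0.4]],
       [[0.1, 0.2], [0.3, 0.4]]]
mask = [[1, 0], [0, 0]]
x = build_input_tensor(rgb, mask)
```

In a real implementation this concatenation would be a single channel-wise `cat` of tensors; the point here is only the shape: the network's first convolution must accept C+1 input channels rather than C.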

5D Light Field Synthesis from a Monocular Video (단안 비디오로부터의 5차원 라이트필드 비디오 합성)

  • Bae, Kyuho;Ivan, Andre;Park, In Kyu
    • Journal of Broadcast Engineering / v.24 no.5 / pp.755-764 / 2019
  • Commercially available light field cameras make it difficult to acquire 5D light field video, since they either capture only still images or are prohibitively expensive. To solve these problems, we propose a deep learning based method for synthesizing light field video from monocular video. To address the difficulty of obtaining light field video training data, we use UnrealCV to generate synthetic light field data by realistic rendering of 3D graphic scenes. The proposed deep learning framework synthesizes light field video with 9×9 sub-aperture images (SAIs) from the input monocular video. The network consists of one sub-network that predicts the appearance flow from the input image converted to a luminance image, and another that predicts the optical flow between adjacent light field video frames obtained from the appearance flow.
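For readers unfamiliar with the sub-aperture layout mentioned above: a 9×9 light field stores 81 views per frame, indexed by an angular coordinate (u, v). A trivial sketch of the indexing convention (row-major flattening is an assumption here, not taken from the paper):

```python
def sai_index(u, v, grid=9):
    """Map an angular coordinate (u, v) on a grid x grid sub-aperture
    array to a flat index, assuming row-major storage of the SAIs."""
    assert 0 <= u < grid and 0 <= v < grid, "coordinate outside angular grid"
    return u * grid + v

center = sai_index(4, 4)  # the central view of a 9x9 light field
```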