• Title/Abstract/Keywords: Video Software Method


Relationship between the Biomechanical Analysis and the Qualitative Analysis of Video Software for the Walking Movement

  • 배영상;우오구;이정민
    • Korean Journal of Sport Biomechanics, Vol. 20, No. 4, pp. 421-427, 2010
  • The purpose of this study was to investigate the relationship between quantitative biomechanical analysis and qualitative video-software analysis in evaluating the walking movement. Fourteen collegiate students who agreed with the purpose and method of this study participated as subjects. The subjects' slow and fast walking were filmed at the experimental site, and several mechanical factors were calculated. The empirical evidence indicated a significant difference (p<.001) between the two analysis methods for each distance factor of the walking movement, but no statistically significant difference for the spatial factors observed in the experiment. In particular, no significant difference was found between the walking ratios, which express the coordination between stride length and stride frequency. The findings also indicated a high correlation coefficient (over r=.9), which supports strong explanatory power for both the biomechanical method and the Dartfish video software method. Therefore, if the data are gathered with a proper experimental method, the video software method can be used like the quantitative data of the biomechanical method.
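For illustration, the agreement the abstract reports can be checked with a correlation coefficient and a paired test. A minimal sketch, using hypothetical stride-length values rather than the study's data:

```python
# A minimal sketch (hypothetical stride lengths, not the study's data) of how
# agreement between the biomechanical method and the video-software method can
# be quantified for one gait variable.
import numpy as np
from scipy import stats

biomech = np.array([1.32, 1.28, 1.41, 1.35, 1.30, 1.44, 1.38, 1.29])  # metres
video   = np.array([1.30, 1.27, 1.43, 1.34, 1.31, 1.45, 1.36, 1.28])  # metres

r, p_r = stats.pearsonr(biomech, video)   # strength of agreement
t, p_t = stats.ttest_rel(biomech, video)  # systematic bias between methods
print(f"Pearson r = {r:.3f} (p = {p_r:.4f}); paired t p = {p_t:.4f}")
```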

Accurate Pig Detection for Video Monitoring Environment

  • 안한세;손승욱;유승현;서유일;손준형;이세준;정용화;박대희
    • Journal of Korea Multimedia Society, Vol. 24, No. 7, pp. 890-902, 2021
  • Although object detection accuracy on still images has improved significantly with advances in deep learning techniques, object detection on video data remains challenging due to the real-time requirement and the accuracy drop caused by occlusion. In this research, we propose a pig detection method for a video monitoring environment. First, from video data obtained with a tilted-down-view camera, we determine motion based on the average size of each pig at each location in the training data, and extract key frames based on this motion information. For each key frame, we then apply YOLO, which is known to have a superior trade-off between accuracy and execution speed among deep learning-based object detectors, to obtain the pigs' bounding boxes. Finally, we merge the bounding boxes between consecutive key frames to reduce false positives and false negatives. Based on experiments with a video data set obtained from a pig farm, we confirmed that the pigs could be detected with an accuracy of 97% at a processing speed of 37 fps.
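The merging step lends itself to a short sketch. The box format, IoU threshold, and occlusion heuristic below are assumptions; the detector (YOLO) is abstracted away:

```python
# A minimal sketch (boxes assumed to be (x1, y1, x2, y2) tuples) of merging
# detections from consecutive key frames, so that a pig missed in the current
# frame but found in the previous one is still reported.
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def merge_key_frames(prev_boxes, curr_boxes, thr=0.5):
    """Keep current detections and recover boxes seen only in the previous
    key frame (a crude way to reduce false negatives from occlusion)."""
    merged = list(curr_boxes)
    for pb in prev_boxes:
        if all(iou(pb, cb) < thr for cb in curr_boxes):
            merged.append(pb)  # assume the pig is still there but occluded
    return merged
```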

Quality of Experience Experiment Method and Statistical Analysis for 360-degree Video with Sensory Effect

  • Jin, Hoe-Yong;Kim, Sang-Kyun
    • Journal of Broadcast Engineering, Vol. 25, No. 7, pp. 1063-1072, 2020
  • This paper proposes an experimental method for measuring quality of experience, namely the effect that applying sensory effects to 360-degree video has on participants' immersion, satisfaction, and presence. Participants watched 360-degree videos on an HMD while receiving sensory effects from scent-diffusing and wind devices, and then completed a questionnaire on their immersion, satisfaction, and presence for the video they had watched. Correlation analysis of the survey results showed that providing sensory effects increases satisfaction with 360-degree video viewing and that the experimental method was appropriate. In addition, the P.910 method was found to be unsuitable for measuring the immersion and presence of 360-degree video under sensory-effect provision.
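A minimal sketch of the kind of survey analysis described, with illustrative Likert-scale ratings (not the paper's data): Spearman correlation between two QoE dimensions, and a paired non-parametric test of the sensory-effect condition:

```python
# Illustrative ratings only; the participants, scales, and values are assumed.
import numpy as np
from scipy import stats

immersion        = np.array([4, 5, 4, 5, 3, 4, 5, 4])  # with sensory effects
satisfaction_on  = np.array([4, 4, 5, 5, 3, 4, 4, 5])  # with sensory effects
satisfaction_off = np.array([3, 3, 4, 4, 2, 3, 2, 4])  # without

rho, p = stats.spearmanr(immersion, satisfaction_on)    # relate QoE dimensions
w, p_w = stats.wilcoxon(satisfaction_on, satisfaction_off)  # paired ordinal test
print(f"Spearman rho = {rho:.3f} (p = {p:.4f}); Wilcoxon p = {p_w:.4f}")
```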

Suboptimal video coding for machines method based on selective activation of in-loop filter

  • Ayoung Kim;Eun-Vin An;Soon-heung Jung;Hyon-Gon Choo;Jeongil Seo;Kwang-deok Seo
    • ETRI Journal, Vol. 46, No. 3, pp. 538-549, 2024
  • A conventional codec aims to increase the compression efficiency for transmission and storage while maintaining video quality. However, as the number of platforms using machine vision rapidly increases, a codec that increases the compression efficiency and maintains the accuracy of machine vision tasks must be devised. Hence, the Moving Picture Experts Group created a standardization process for video coding for machines (VCM) to reduce bitrates while maintaining the accuracy of machine vision tasks. In particular, in-loop filters have been developed for improving the subjective quality and machine vision task accuracy. However, the high computational complexity of in-loop filters limits the development of a high-performance VCM architecture. We analyze the effect of an in-loop filter on the VCM performance and propose a suboptimal VCM method based on the selective activation of in-loop filters. The proposed method reduces the computation time for video coding by approximately 5% when using the enhanced compression model and 2% when employing a Versatile Video Coding test model while maintaining the machine vision accuracy and compression efficiency of the VCM architecture.
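A rough sketch of what per-frame selective activation could look like; the decision rule and the Frame fields below are hypothetical, not the paper's actual criterion:

```python
# Hypothetical rule for selectively activating in-loop filters: keep filters
# only where their quality gain plausibly matters for machine-vision accuracy
# (here: reference frames coded at high QP), skip them elsewhere to save time.
from dataclasses import dataclass

@dataclass
class Frame:
    poc: int            # picture order count
    is_reference: bool  # referenced by later frames?
    qp: int             # quantisation parameter

def use_in_loop_filters(frame: Frame, qp_threshold: int = 37) -> bool:
    # Illustrative criterion; the paper derives its own from filter analysis.
    return frame.is_reference and frame.qp >= qp_threshold

for f in [Frame(0, True, 42), Frame(1, False, 42), Frame(2, True, 30)]:
    print(f.poc, "filter" if use_in_loop_filters(f) else "skip")
```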

Accuracy Improvement of Pig Detection using Image Processing and Deep Learning Techniques on an Embedded Board

  • 유승현;손승욱;안한세;이세준;백화평;정용화;박대희
    • Journal of Korea Multimedia Society, Vol. 25, No. 4, pp. 583-599, 2022
  • Although object detection accuracy on a single image has improved significantly with advances in deep learning techniques, detection accuracy for pig monitoring is challenged by occlusion caused by the complex structure of a pig room, such as feeding facilities. These detection difficulties with a single image can be mitigated by using video data. In this research, we propose a pig detection method for a video monitoring environment with a static camera. That is, by combining image processing and deep learning techniques, we can recognize the complex structure of a pig room, and this information can be used to improve the detection accuracy of pigs in the monitored room. Furthermore, we reduce the execution-time overhead by applying a pruning technique, enabling real-time video monitoring on an embedded board. Based on experiments with a video data set obtained from a commercial pig farm, we confirmed that the pigs could be detected more accurately in real time, even on an embedded board.
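The pruning step can be sketched with standard tooling. A minimal example, assuming a PyTorch model, an illustrative 30% pruning ratio, and a stand-in for the real detector backbone:

```python
# Magnitude-based pruning sketch; the architecture and ratio are assumptions.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(            # stand-in for the real detector backbone
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)

for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)  # zero 30% of weights
        prune.remove(module, "weight")  # make the pruning permanent

zeroed = sum((m.weight == 0).sum().item() for m in model.modules()
             if isinstance(m, nn.Conv2d))
print("zeroed weights:", zeroed)
```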

Privacy-Preserving H.264 Video Encryption Scheme

  • Choi, Su-Gil;Han, Jong-Wook;Cho, Hyun-Sook
    • ETRI Journal, Vol. 33, No. 6, pp. 935-944, 2011
  • As a growing number of individuals are exposed to surveillance cameras, the need to prevent captured video from being used inappropriately has increased. Privacy-related information can be protected through video encryption during transmission or storage, and several algorithms have been proposed for this purpose. However, simply counting the number of brute-force trials is not a proper way to measure the security of video encryption algorithms, because attackers can craft attacks for specific purposes by exploiting the characteristics of the target video codec. In this paper, we introduce a new attack for recovering contour information from encrypted H.264 video, which can be used to extract face outlines for personal identification. We analyze the security of previous video encryption schemes against the proposed attack and show that their security is lower than expected in terms of privacy protection. To enhance security, an advanced block shuffling method is proposed; our analysis shows that it is more secure than the previous method against the proposed attack.
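A minimal sketch of keyed block shuffling, operating on raw pixel blocks for simplicity (the paper shuffles within the H.264 codec itself; the key and block size here are illustrative):

```python
# Keyed block shuffling sketch: a pseudo-random permutation derived from a
# secret key rearranges blocks, hiding contours; the key holder can invert it.
import numpy as np

def shuffle_blocks(frame: np.ndarray, key: int, block: int = 16) -> np.ndarray:
    h, w = frame.shape[:2]
    bh, bw = h // block, w // block
    perm = np.random.default_rng(key).permutation(bh * bw)
    blocks = [frame[r*block:(r+1)*block, c*block:(c+1)*block].copy()
              for r in range(bh) for c in range(bw)]
    out = frame.copy()
    for dst, src in enumerate(perm):
        r, c = divmod(dst, bw)
        out[r*block:(r+1)*block, c*block:(c+1)*block] = blocks[src]
    return out

frame = np.arange(64 * 64, dtype=np.uint8).reshape(64, 64)
enc = shuffle_blocks(frame, key=0xC0FFEE)  # same key + inverse perm decrypts
```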

Gradient Fusion Method for Night Video Enhancement

  • Rao, Yunbo;Zhang, Yuhong;Gou, Jianping
    • ETRI Journal, Vol. 35, No. 5, pp. 923-926, 2013
  • To address night-video enhancement, a novel gradient-domain fusion method is proposed in which gradient-domain frames of the daytime video background are fused with nighttime video frames. To verify the superiority of the proposed method, it is compared with conventional techniques; the output of our implementation shows enhanced visual quality.
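A minimal sketch of the gradient-domain selection at the heart of such a method, assuming grayscale float images; reconstruction back to the image domain (e.g., via a Poisson solver) is omitted:

```python
# Per-pixel gradient selection sketch: keep whichever gradient is stronger,
# the daytime background's or the nighttime frame's.
import numpy as np

def fused_gradients(day_bg: np.ndarray, night: np.ndarray):
    gy_d, gx_d = np.gradient(day_bg)
    gy_n, gx_n = np.gradient(night)
    take_day = np.hypot(gx_d, gy_d) > np.hypot(gx_n, gy_n)
    gx = np.where(take_day, gx_d, gx_n)
    gy = np.where(take_day, gy_d, gy_n)
    return gx, gy  # feed to a Poisson solver to obtain the fused frame
```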

Energy-Aware Video Coding Selection for Solar-Powered Wireless Video Sensor Networks

  • Yi, Jun Min;Noh, Dong Kun;Yoon, Ikjune
    • Journal of the Korea Society of Computer and Information, Vol. 22, No. 7, pp. 101-108, 2017
  • A wireless image sensor node that collects image data for environmental monitoring or surveillance requires a large amount of energy to transmit its voluminous video data. Solar energy can ease this constraint, but since the harvested energy is also limited, an efficient energy-management scheme for transmitting large amounts of video data is needed. In this paper, we propose a method that reduces the number of blacked-out nodes and increases the amount of gathered data by selecting an appropriate video coding method according to each node's energy condition in a solar-powered wireless video sensor network. The scheme allocates the energy that can be used over time so that data are collected seamlessly regardless of night or day, and it selects a high-compression coding method when the allocated energy is large and a low-compression one when the quota is low. This reduces blackouts at relay nodes and increases the amount of data obtained at the sink node by allowing data to be transmitted continuously. In addition, if the energy falls below that needed for normal operation, the frame rate is adjusted to prevent nodes from exhausting their energy. Simulation results show that the proposed scheme suppresses energy exhaustion at relay nodes and collects more data than other schemes.
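A minimal sketch of the selection logic; the energy thresholds and codec labels are illustrative, not the paper's parameters:

```python
# Energy-aware coding selection sketch: a large quota affords expensive,
# high-compression coding that shrinks transmission cost; a small quota forces
# cheaper coding; below a floor, the frame rate is throttled to avoid blackout.
def select_coding(quota_mj: float, fps: int):
    if quota_mj >= 200:
        return "high-compression (e.g. H.264)", fps
    if quota_mj >= 80:
        return "low-compression (e.g. MJPEG)", fps
    # energy below normal operation: keep cheap coding, reduce frame rate
    return "low-compression (e.g. MJPEG)", max(1, fps // 2)

for quota in (250, 120, 40):
    print(quota, "mJ ->", select_coding(quota, fps=10))
```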

Video Quality Assessment based on Deep Neural Network

  • Zhiming Shi
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 17, No. 8, pp. 2053-2067, 2023
  • This paper proposes two video quality assessment methods based on deep neural networks. (i) The first method uses IQF-CNN (a convolutional neural network based on image quality features) to build an image quality assessment model. Tested on the LIVE image database, the method proves effective and is therefore extended to video quality assessment: each image frame of the video is first predicted, and then the relationships between frames are analyzed with a hysteresis function and different window functions to improve the accuracy of the assessment. (ii) The second method combines a convolutional neural network (CNN) with a gated recurrent unit (GRU) network. The spatial features of video frames are extracted with the CNN and the temporal features with the GRU, and the extracted spatiotemporal features are passed through a fully connected layer to obtain the video quality score. Both proposed methods are verified on video databases and compared with other methods.
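A minimal sketch of the second method's CNN + GRU pipeline; the layer sizes are illustrative, not the paper's:

```python
# CNN + GRU quality-assessment sketch: per-frame spatial features from a small
# CNN pass through a GRU, and a final linear layer regresses a quality score.
import torch
import torch.nn as nn

class VQANet(nn.Module):
    def __init__(self, feat_dim=64, hidden=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, video):                 # video: (B, T, 3, H, W)
        b, t = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1)).view(b, t, -1)  # spatial features
        _, h = self.gru(feats)                # temporal aggregation
        return self.head(h[-1]).squeeze(-1)   # one quality score per clip

scores = VQANet()(torch.randn(2, 8, 3, 64, 64))  # two 8-frame clips -> (2,)
```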

Extraction of User Preference for Video Stimuli Using EEG-Based User Responses

  • Moon, Jinyoung;Kim, Youngrae;Lee, Hyungjik;Bae, Changseok;Yoon, Wan Chul
    • ETRI Journal, Vol. 35, No. 6, pp. 1105-1114, 2013
  • Owing to the large number of video programs available, a method for efficiently accessing preferred videos through personalized video summaries and clips is needed. Automatic recognition of user states while viewing a video is essential for extracting meaningful video segments. Although there have been many studies on emotion recognition using various user responses, electroencephalogram (EEG)-based research on video preference recognition is at a very early stage. This paper proposes classification models based on linear and nonlinear classifiers using EEG features of band power (BP) values and asymmetry scores for four preference classes. The quadratic-discriminant-analysis-based model using BP features achieves a classification accuracy of 97.39% (±0.73%), and the models based on the other nonlinear classifiers with BP features achieve an accuracy of over 96%, exceeding that of previous work, which addressed only binary preference classification. These results show that the proposed approach is sufficiently accurate and discriminative for use in personalized video segmentation.
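A minimal sketch of fitting a quadratic-discriminant-analysis classifier on band-power features; random features stand in for real EEG data, keeping the paper's four-class setup:

```python
# QDA on band-power features, sketched with synthetic data; only the
# four-class structure follows the paper, everything else is illustrative.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))    # stand-in BP features per EEG segment
y = rng.integers(0, 4, size=200)  # four preference classes

qda = QuadraticDiscriminantAnalysis()
print("CV accuracy:", cross_val_score(qda, X, y, cv=5).mean())
```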