• Title/Summary/Keyword: video analysis


Scalable Big Data Pipeline for Video Stream Analytics Over Commodity Hardware

  • Ayub, Umer;Ahsan, Syed M.;Qureshi, Shavez M.
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.4
    • /
    • pp.1146-1165
    • /
    • 2022
  • A huge amount of data in the form of videos and images is being produced owing to advancements in sensor technology. The use of low-performance commodity hardware coupled with resource-heavy image processing and analysis approaches to infer and extract actionable insights from this data poses a bottleneck for timely decision making. Current GPU-assisted and cloud-based video analysis architectures give significant performance gains, but their usage is constrained by financial considerations and extremely complex architecture-level details. In this paper we propose a data pipeline system that uses open-source tools such as Apache Spark, Kafka and OpenCV running over commodity hardware for video stream processing and image processing in a distributed environment. Experimental results show that our proposed approach eliminates the need for GPU-based hardware and cloud computing infrastructure to achieve efficient video stream processing for face detection with increased throughput, scalability and better performance.
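The abstract describes the Spark/Kafka/OpenCV pipeline only at a high level. As a rough illustration of the idea, the sketch below mimics Kafka-style key partitioning of a frame stream across independent workers, with a stub standing in for the OpenCV face-detection stage; the function names, partitioning rule, and detector are illustrative assumptions, not the authors' implementation.

```python
def partition_for(frame_id: int, n_partitions: int) -> int:
    """Kafka-style partitioning: assign each frame to a partition by key."""
    return frame_id % n_partitions

def detect_faces(frame_id: int) -> int:
    """Stub detector; a real pipeline would run e.g. an OpenCV cascade here."""
    return frame_id % 3  # pretend number of faces found

def process_stream(frame_ids, n_partitions=4):
    """Group frames by partition, as a Kafka topic with n partitions would,
    so each worker consumes an independent slice of the stream."""
    partitions = {p: [] for p in range(n_partitions)}
    for fid in frame_ids:
        partitions[partition_for(fid, n_partitions)].append(fid)
    # Each partition is processed independently (here, sequentially).
    return {p: [(fid, detect_faces(fid)) for fid in fids]
            for p, fids in partitions.items()}
```

In the real system each partition would be consumed in parallel by a Spark executor; the per-partition independence shown here is what makes that scale-out possible.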

A Study on COP-Transformation Based Metadata Security Scheme for Privacy Protection in Intelligent Video Surveillance (지능형 영상 감시 환경에서의 개인정보보호를 위한 COP-변환 기반 메타데이터 보안 기법 연구)

  • Lee, Donghyeok;Park, Namje
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.28 no.2
    • /
    • pp.417-428
    • /
    • 2018
  • The intelligent video surveillance environment is a system that extracts various information about video objects and enables automated processing through the analysis of video data collected by CCTV. However, since privacy exposure can occur in the course of intelligent video surveillance, security measures are necessary. Video metadata is especially vulnerable because it can include various kinds of personal information analyzed on the basis of big data. In this paper, we propose a COP-Transformation scheme to protect video metadata. The proposed scheme greatly enhances both security and efficiency in processing video metadata.

A Study on Video Transmission Rate Smoothing for Delay-Sensitive Interactive Services (지연에 민감한 대화형 서비스를 위한 동영상 전송율 평활화 연구)

  • 장승기;서덕영
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1996.06a
    • /
    • pp.147-151
    • /
    • 1996
  • This paper describes two traffic shaping methods for VBR (variable bit rate) video bit streams encoded in the MPEG-2 syntax. Difficulties in controlling VBR video traffic are lessened by traffic shaping. The burstiness of single-layer MPEG-2 video can be reduced by performing intra-refresh over more than one consecutive frame. In two-layer encoding with spatial scalability, burstiness can be reduced by setting the temporal location of the GOP starting frame of one layer differently from the other. Queueing analysis shows that these two methods outperform conventional encoding schemes in terms of temporal and semantic QoS requirements.
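To make the intra-refresh idea concrete, the toy model below compares the per-frame bit cost of a GOP carrying one full intra frame against the same GOP with its intra cost spread over several consecutive frames. The bit figures and GOP length are arbitrary illustrative numbers, not measurements from the paper.

```python
def gop_bits(gop_len=12, i_bits=10.0, p_bits=2.0):
    """Per-frame bit costs of one GOP with a single full intra frame."""
    return [i_bits] + [p_bits] * (gop_len - 1)

def intra_refresh(gop_len=12, i_bits=10.0, p_bits=2.0, spread=4):
    """Spread the extra intra cost over `spread` consecutive frames
    (intra-refresh), leaving the total bits per GOP unchanged."""
    extra = (i_bits - p_bits) / spread
    return [p_bits + (extra if i < spread else 0.0) for i in range(gop_len)]
```

With these numbers the peak frame cost drops from 10 to 4 units while the total bits per GOP stay the same, which is exactly the burstiness reduction the abstract claims.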


A Fast Scalable Video Encoding Algorithm (고속 스케일러블 동영상 부호화 알고리듬)

  • Moon, Yong Ho
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.7 no.5
    • /
    • pp.285-290
    • /
    • 2012
  • In this paper, we propose a fast encoding algorithm for scalable video coding that does not compromise coding performance. Through an analysis of the multiple motion estimation processes performed at the enhancement layer, we identify redundant motion estimations and derive a condition under which the redundant ones can be determined efficiently without additional memory. Based on this condition, the redundant motion estimation processes are excluded in the proposed algorithm. Simulation results show that the proposed algorithm is faster than the conventional fast encoding method without performance degradation or additional memory.

Analysis of the Users' Viewing Characteristics of YouTube Video Contents Related to Science Education (과학교육 관련 유튜브 동영상 콘텐츠 이용자들의 시청 특징 분석)

  • Jeong, Eunju;Son, Jeongwoo
    • Journal of Science Education
    • /
    • v.45 no.1
    • /
    • pp.118-128
    • /
    • 2021
  • In this study, two viewing characteristics of users of YouTube video content related to science education are analyzed: 'Inflow and Access,' to examine the interaction between learners and the system, and 'Reaction and Subscription,' to examine the interaction between learners and content. To this end, the YouTube channel 'Elementary Science TV' was selected as the subject of research. The channel mainly covers the contents of elementary science textbooks, STEAM, and gifted education, and its YouTube Studio data was analyzed. The following results were obtained: first, the 'Inflow and Access' analysis showed that the science education video content was most often reached through external links, and the access device was mainly a computer. Second, the 'Reaction and Subscription' analysis showed that 'likes' and comments posted in reaction to a video amounted to less than 1 % of the number of views. Most users watch without subscribing, and watch longer when viewing is self-directed. Although this study analyzed only a single channel, 'Elementary Science TV,' it offers some insight into the viewing characteristics of users of science education YouTube content. These findings are expected to serve as basic material for creating science education videos for remote classes and for establishing a science education video platform.

A Novel Perceptual No-Reference Video-Quality Measurement With the Histogram Analysis of Luminance and Chrominance (휘도, 색차의 분포도 분석을 이용한 인지적 무기준법 영상 화질 평가방법)

  • Kim, Yo-Han;Sung, Duk-Gu;Han, Jung-Hyun;Shin, Ji-Tae
    • Journal of Broadcast Engineering
    • /
    • v.14 no.2
    • /
    • pp.127-133
    • /
    • 2009
  • With advances in video technology, many researchers are interested in video quality assessment to demonstrate the better performance of proposed algorithms. Since the human visual system is too complex to be formulated exactly, much research on video quality assessment is still in progress. No-reference video quality assessment is suitable for various video streaming services because it requires no additional data or network capacity to perform the assessment. In this paper, we propose a novel no-reference video quality assessment method based on estimating dynamic range distortion. To measure its performance, we obtained mean opinion score (MOS) data through subjective video quality tests with the ITU-T P.910 Absolute Category Rating (ACR) method, and compared the MOS with the proposed algorithm's output over 363 video sequences. Experimental results show that the proposed algorithm has a higher correlation with the obtained MOS.
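The abstract does not give the exact metric, but a luminance-histogram dynamic-range estimate of the kind it alludes to can be sketched as follows. The score below is a hypothetical proxy (fraction of the 8-bit range occupied after clipping histogram tails), with the `clip` parameter chosen here for illustration, not the paper's formula.

```python
import numpy as np

def dynamic_range_score(luma, clip=0.01):
    """Estimate effective dynamic range from the luminance histogram.

    A heavily distorted or washed-out frame tends to occupy a narrow band
    of luma codes; the score is the fraction of the 8-bit range actually
    used after discarding a `clip` tail on each side of the histogram.
    """
    hist, _ = np.histogram(luma, bins=256, range=(0, 256))
    cdf = np.cumsum(hist) / hist.sum()
    lo = int(np.searchsorted(cdf, clip))          # lower used luma code
    hi = int(np.searchsorted(cdf, 1.0 - clip))    # upper used luma code
    return (hi - lo) / 255.0
```

A full-range frame scores near 1.0 and a flat (single-luma) frame scores 0.0; a chrominance histogram could be treated analogously.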

Virtual Contamination Lane Image and Video Generation Method for the Performance Evaluation of the Lane Departure Warning System (차선 이탈 경고 시스템의 성능 검증을 위한 가상의 오염 차선 이미지 및 비디오 생성 방법)

  • Kwak, Jae-Ho;Kim, Whoi-Yul
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.24 no.6
    • /
    • pp.627-634
    • /
    • 2016
  • In this paper, an augmented video generation method for evaluating the performance of lane departure warning systems is proposed. The input to our system is a video of a road scene with ordinary clean lanes; the output video has the same content, but the lanes are synthesized with a contamination image. Two approaches were used to synthesize the contaminated lane image: example-based image synthesis, which assumes the situation where contamination is applied to the lane, and background-based image synthesis, which targets the situation where the lane has been erased by aging. A new contamination pattern generation method using a Gaussian function is also proposed in order to produce contamination of various shapes and sizes. The contaminated lane video is generated by shifting the synthesized image by an empirically obtained lane movement amount. Our experiments showed that the similarity between the generated contaminated lane images and real lane images is over 90 %. Furthermore, we verified the reliability of the video generated by the proposed method through an analysis of the change in lane recognition rate: the recognition rate on video generated by the proposed method is very similar to that on real contaminated lane video.
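The Gaussian contamination pattern mentioned above can be sketched minimally as a 2-D Gaussian blob alpha-blended onto the lane image; the parameter names (`sigma`, `peak`, `color`) are chosen here for illustration, and the actual blending used in the paper may differ.

```python
import numpy as np

def gaussian_blob(h, w, cy, cx, sigma, peak=1.0):
    """2-D Gaussian contamination pattern centred at (cy, cx).

    Varying `sigma` and `peak` yields contamination patches of different
    size and opacity, matching the paper's goal of producing contamination
    with various shapes and sizes.
    """
    y, x = np.mgrid[0:h, 0:w]
    return peak * np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))

def apply_contamination(lane, blob, color=0.4):
    """Alpha-blend a contamination colour onto the lane image via the blob."""
    return (1 - blob) * lane + blob * color
```

Shifting the blended patch frame by frame by the measured lane movement would then yield the contaminated lane video described in the abstract.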

Automatic Extraction of Focused Video Object from Low Depth-of-Field Image Sequences (낮은 피사계 심도의 동영상에서 포커스 된 비디오 객체의 자동 검출)

  • Park, Jung-Woo;Kim, Chang-Ick
    • Journal of KIISE:Software and Applications
    • /
    • v.33 no.10
    • /
    • pp.851-861
    • /
    • 2006
  • This paper proposes a novel unsupervised video object segmentation algorithm for image sequences with low depth-of-field (DOF), a popular photographic technique that conveys the photographer's intention by keeping only an object of interest (OOI) in sharp focus. The proposed algorithm consists of two main modules. The first module automatically extracts OOIs from the first frame by separating sharply focused OOIs from out-of-focus foreground or background objects. The second module tracks the OOIs over the rest of the video sequence and is designed to run in real time, or at least semi-real time. The experimental results indicate that the proposed algorithm provides an effective tool that can serve as a basis for applications such as video analysis for virtual reality, immersive video systems, photo-realistic video scene generation, and video indexing systems.
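Separating sharply focused regions from out-of-focus ones typically starts from a per-pixel sharpness measure. The snippet below is a simplified stand-in for that first module, assuming a discrete Laplacian as the focus measure and a fixed threshold; the paper's actual OOI extraction is more elaborate.

```python
import numpy as np

def focus_map(gray, thresh):
    """Per-pixel sharpness via a 4-neighbour discrete Laplacian.

    In-focus regions contain high-frequency detail and thus large Laplacian
    responses; out-of-focus (blurred) regions respond weakly. The boolean
    map marks pixels whose response exceeds `thresh`.
    Note: np.roll wraps at the borders, which is acceptable for a sketch.
    """
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
           + np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4 * gray)
    return np.abs(lap) > thresh
```

Grouping the flagged pixels into connected regions would then give candidate OOIs for the tracking module.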

Overview and Performance Analysis of the Emerging Scalable Video Coding (스케일러블 비디오 부호화의 개요 및 성능 분석)

  • Choi, Hae-Chul;Lee, Kyung-Il;Kang, Jung-Woo;Bae, Seong-Jun;Yoo, Jeong-Ju
    • Journal of Broadcast Engineering
    • /
    • v.12 no.6
    • /
    • pp.542-554
    • /
    • 2007
  • Seamless streaming of multimedia content via heterogeneous networks to viewers using a variety of devices has long been desired for many multimedia services, for which the content should be adapted to usage environments such as network characteristics, terminal capabilities, and user preferences. Scalability in video coding is one of the attractive features for meeting the dynamically changing requirements of heterogeneous networks. Currently a new scalable video coding (SVC) standard is being developed in the Joint Video Team (JVT) of the ISO/IEC Moving Picture Experts Group (MPEG) and the ITU-T Video Coding Experts Group (VCEG), to be released as Extension 3 of H.264/MPEG-4 AVC. In this paper, we introduce the new technologies of SVC and evaluate its performance, especially regarding overhead bit-rate and coding efficiency in supporting spatial, temporal, and quality scalability.

Orientation Analysis between UAV Video and Photos for 3D Measurement of Bridges (교량의 3차원 측정을 위한 UAV 비디오와 사진의 표정 분석)

  • Han, Dongyeob;Park, Jae Bong;Huh, Jungwon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.36 no.6
    • /
    • pp.451-456
    • /
    • 2018
  • UAVs (Unmanned Aerial Vehicles) are widely used for the maintenance and monitoring of facilities. High-resolution images are necessary for evaluating the appearance of a facility during safety inspection, and video data is essential for covering a wide area rapidly. In general, however, video data does not include position information, so it is difficult to analyze the actual size of the inspected object quantitatively. In this study, we evaluated the usability of 3D point cloud data of bridges generated by matching video frames against reference photos. Drones were used to acquire the video and photographs, and exterior orientations of the video frames were generated through feature point matching with the reference photos. Experimental results showed that the accuracy of the video frame data is similar to that of the reference photos. Furthermore, the point cloud data generated from the video frames represented the shape and size of the bridges with usable accuracy. If the stability of the results is verified through matching tests under various conditions in the future, video-based facility modeling and inspection is expected to be conducted effectively.
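The feature point matching that ties video frames to oriented reference photos can be sketched with a Lowe-style ratio test over descriptor distances. This is a generic matching sketch under the assumption of SIFT/ORB-like descriptors, not the authors' exact pipeline; in practice a library such as OpenCV would supply both detection and matching.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Ratio-test matching between two descriptor sets (rows = descriptors).

    A match (i, j) is accepted only when descriptor i's nearest neighbour j
    in desc_b is clearly closer than its second-nearest, which suppresses
    ambiguous correspondences before orientation estimation.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, int(j)))
    return matches
```

Given such matches, the exterior orientation of each video frame can be resected from the 3D points already triangulated in the reference photos.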