• Title/Abstract/Keyword: real video

Search Results: 2,058 (processing time: 0.031 seconds)

Advanced Real-Time Rate Control for Low Bit Rate Video Communication

  • Kim, Yoon
    • Journal of the Korea Computer Industry Society
    • /
    • v.7 no.5
    • /
    • pp.513-520
    • /
    • 2006
  • In this paper, we propose a novel real-time frame-layer rate control algorithm using a sliding window method for low bit rate video coding. The proposed rate control method performs bit allocation at the frame level to minimize both the average distortion over an entire sequence and the variation in distortion between frames. A new frame-layer rate-distortion model is derived, and a non-iterative optimization method is used for low computational complexity. To reduce quality fluctuation, we use a sliding window scheme that does not require a pre-analysis pass. The proposed algorithm therefore introduces no encoding delay and is suitable for real-time, low-complexity video encoders. Experimental results indicate that the proposed method provides better visual and PSNR performance than the existing TMN8 rate control method.

  • PDF
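The sliding-window idea above can be illustrated with a minimal sketch. This is not the paper's rate-distortion model; it only shows how feeding back the average size of recently coded frames over a window can damp bit-budget (and hence quality) fluctuation without any pre-analysis pass. The window length and the 0.5 feedback gain are assumptions for illustration.

```python
from collections import deque

class SlidingWindowRateControl:
    """Hypothetical sketch of frame-layer bit allocation over a sliding window."""

    def __init__(self, target_bps, fps, win_size=10):
        self.base = target_bps / fps          # nominal bits per frame
        self.window = deque(maxlen=win_size)  # sizes of recently coded frames

    def budget(self):
        """Bit budget for the next frame, corrected toward the target."""
        if not self.window:
            return self.base
        spent = sum(self.window) / len(self.window)
        # Feed back half of the deviation from the nominal budget,
        # never dropping below 10% of the nominal frame size.
        return max(self.base + 0.5 * (self.base - spent), 0.1 * self.base)

    def update(self, actual_bits):
        """Record the actual size of the frame just coded."""
        self.window.append(actual_bits)
```

For a 640 kbps target at 25 fps the nominal budget is 25,600 bits per frame; after an oversized 30,000-bit frame, the next budget shrinks to 23,400 bits.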

Telemedicine for Real-Time Multi-Consultation

  • Chun Hye J.;Youn HY;Yoo Sun K.
    • Journal of Biomedical Engineering Research
    • /
    • v.26 no.5
    • /
    • pp.301-307
    • /
    • 2005
  • We introduce a new multimedia telemedicine system, called Telemedicine for Real-time Emergency Multi-consultation (TREM), based on multiple connections between medical specialists. Owing to the subdivision of medical specialties, the existing one-to-one telemedicine systems need to be modified into simultaneous multi-consulting systems. To facilitate consultation, the designed system includes the following modules: high-quality video, video conferencing, bio-signal transmission, and file transmission. To enhance the operability of the system in different network environments, we let the user choose appropriate acquisition sources for the multimedia data as well as the video resolutions. We tested the system in three different places: an emergency room, a radiologist's office, and a surgeon's office. All three communicating systems successfully connected to the multi-consultation center and exchanged data simultaneously in real time.

A Study on Effective Optimization by Comparison with FMQ of Real-time Rendering for Variable Surface Formats (다양한 텍스처 형식에 따른 실시간 렌더링의 FMQ 비교를 통한 효과적인 최적화(Optimization) 기법에 관한 연구)

  • Chae Heon-Joo;Ryu Seuc-Ho;Kyung Byung-Pyo
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2005.11a
    • /
    • pp.13-18
    • /
    • 2005
  • Textures used in games and VR environments with real-time rendering produce different results depending on the texture format supported by the video card. We propose guidelines for choosing texture formats based on a comparison of frame rate, video memory usage, and quality.

  • PDF

ESTIMATION OF PEDESTRIAN FLOW SPEED IN SURVEILLANCE VIDEOS

  • Lee, Gwang-Gook;Ka, Kee-Hwan;Kim, Whoi-Yul
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.330-333
    • /
    • 2009
  • This paper proposes a method to estimate the flow speed of pedestrians in surveillance videos. In the proposed method, the average moving speed of pedestrians is measured by estimating the size of real-world motion from the observed motion vectors. For this purpose, pixel-to-meter conversion factors are calculated from the camera geometry. Also, the height information, which is missing because of camera projection, is predicted statistically from simulation experiments. Compared with previous work on flow speed estimation, our method can be applied to various camera views because it separates the scene parameters explicitly. Experiments were performed on both simulated image sequences and real video. In the experiments on simulated videos, the proposed method estimated the flow speed with an average error of about 0.1 m/s. The proposed method also showed promising results on the real video.

  • PDF
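The core conversion described above is simple once the pixel-to-meter factor is known. A minimal sketch, with the conversion factor simply given (the paper derives it from camera geometry and corrects statistically for the unknown pedestrian height):

```python
def flow_speed_mps(motion_px_per_frame, fps, meters_per_pixel):
    """Convert an observed motion-vector magnitude (pixels/frame)
    into a real-world speed (m/s) via a pixel-to-meter factor."""
    return motion_px_per_frame * meters_per_pixel * fps

# Illustrative numbers: 3 px/frame at 30 fps with 0.015 m/px.
speed = flow_speed_mps(3.0, 30, 0.015)   # about 1.35 m/s
```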

Intelligent Composition of CG and Dynamic Scene (CG와 동영상의 지적합성)

  • 박종일;정경훈;박경세;송재극
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1995.06a
    • /
    • pp.77-81
    • /
    • 1995
  • Video composition integrates multiple image materials into one scene. It considerably enhances the degree of freedom in producing various scenes. However, high-quality video composition requires adjusting the viewpoints and image planes of the image materials. In this paper, we propose an intelligent video composition technique concentrating on the composition of CG and real scenes. We first model the camera system: the projection is assumed to be perspective, and the camera motion is assumed to consist of 3D rotation and 3D translation. We then automatically extract the camera parameters of this model from the real scene with a dedicated algorithm. The CG scene is generated according to the camera parameters of the real scene, and finally the two are composed into one scene. Experimental results justify the validity of the proposed method.
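The camera model assumed above (perspective projection with 3D rotation and translation) can be sketched as follows. This is only a generic pinhole-projection illustration of why the CG must be rendered with the same recovered parameters as the real footage, not the paper's parameter-extraction algorithm:

```python
def project_point(Xw, R, t, f, cx=0.0, cy=0.0):
    """Perspective (pinhole) projection of a world point Xw.

    R is a 3x3 rotation (nested lists), t a translation, f the focal
    length, (cx, cy) the principal point. Rendering CG with the same
    (R, t, f) recovered from the real scene makes the two line up.
    """
    # Camera coordinates: Xc = R @ Xw + t
    Xc = [sum(R[i][j] * Xw[j] for j in range(3)) + t[i] for i in range(3)]
    # Perspective divide onto the image plane.
    return (f * Xc[0] / Xc[2] + cx, f * Xc[1] / Xc[2] + cy)

IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
# A point 5 m in front of an unrotated camera with f = 100.
uv = project_point([1.0, 2.0, 0.0], IDENTITY, [0.0, 0.0, 5.0], 100.0)
```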

Software-defined Radio (SDR): An Approach to Real-Time Video Data Transceiver Implementation (소프트웨어 정의 라디오: 실시간 동영상 데이터 송수신기 구현에 대한 접근)

  • Dongho You
    • Journal of Broadcast Engineering
    • /
    • v.28 no.1
    • /
    • pp.149-152
    • /
    • 2023
  • In this paper, I present an approach to implementing a real-time video transceiver using software-defined radio (SDR). This approach is expected to lower the barrier to entry and to offer new perspectives and insights to researchers who want to study the recently spotlighted Open Radio Access Network (O-RAN) and implement it with SDR devices and open software.

AnoVid: A Deep Neural Network-based Tool for Video Annotation (AnoVid: 비디오 주석을 위한 심층 신경망 기반의 도구)

  • Hwang, Jisu;Kim, Incheol
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.8
    • /
    • pp.986-1005
    • /
    • 2020
  • In this paper, we propose AnoVid, an automated video annotation tool based on deep neural networks that generates various metadata for each scene or shot in a long drama video containing rich elements. To this end, a novel metadata schema for drama video is designed. Based on this schema, the AnoVid video annotation tool runs a total of six deep neural network models for object detection, place recognition, time zone recognition, person recognition, activity detection, and description generation. Using these models, AnoVid can generate rich video annotation data. In addition, AnoVid not only automatically generates a JSON-format video annotation data file but also provides various visualization facilities for checking the video content analysis results. Through experiments with a real drama video, "Misaeng", we show the practical effectiveness and performance of the proposed video annotation tool.
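The six model outputs listed above map naturally onto a per-shot JSON record. The field names and values below are a hypothetical illustration of what such an annotation file might contain, not the schema actually defined in the paper:

```python
import json

# Hypothetical AnoVid-style annotation record, one entry per shot.
annotation = {
    "video": "drama_ep01.mp4",
    "shot_id": 12,
    "time_zone": "night",          # time zone recognition
    "place": "office",             # place recognition
    "persons": ["protagonist"],    # person recognition
    "objects": ["desk", "monitor"],  # object detection
    "activities": ["typing"],      # activity detection
    "description": "A man works at a desk in a dark office.",
}

# Serialize to the JSON-format annotation file the tool would emit.
record = json.dumps(annotation, ensure_ascii=False)
```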

An Efficient Video Retrieval Algorithm Using Color and Edge Features

  • Kim Sang-Hyun
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.7 no.1
    • /
    • pp.11-16
    • /
    • 2006
  • To manage large video databases, effective video indexing and retrieval are required. A large number of video indexing and retrieval algorithms have been presented for frame-wise user queries or video content queries, whereas relatively few video sequence matching algorithms have been proposed for video sequence queries. In this paper, we propose an efficient algorithm that extracts key frames using color histograms and matches video sequences using edge features. To match video sequences effectively with a low computational load, we use the key frames extracted by the cumulative measure and the distance between key frames, and compare two sets of key frames using the modified Hausdorff distance. Experimental results with several real sequences show that the proposed video retrieval algorithm using color and edge features yields higher accuracy and performance than conventional methods such as histogram difference, Euclidean metric, Bhattacharyya distance, and directed divergence.

  • PDF
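The modified Hausdorff distance used to compare the two key-frame sets can be sketched generically. The pairwise frame-feature distance is abstracted into a callable here; the paper computes it from edge features:

```python
def modified_hausdorff(A, B, dist):
    """Modified Hausdorff distance between two key-frame sets A and B.

    Replaces the max in the directed Hausdorff distance with a mean,
    which is less sensitive to a single outlier frame. `dist` is any
    pairwise distance between frame features.
    """
    def directed(X, Y):
        # Average, over X, of each element's distance to its nearest
        # neighbor in Y.
        return sum(min(dist(x, y) for y in Y) for x in X) / len(X)
    return max(directed(A, B), directed(B, A))

# Toy usage with scalar "features" and absolute difference.
d = modified_hausdorff([0.0, 1.0], [0.0, 3.0], lambda a, b: abs(a - b))
```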

A Mask Wearing Detection System Based on Deep Learning

  • Yang, Shilong;Xu, Huanhuan;Yang, Zi-Yuan;Wang, Changkun
    • Journal of Multimedia Information System
    • /
    • v.8 no.3
    • /
    • pp.159-166
    • /
    • 2021
  • COVID-19 has dramatically changed people's daily lives. Wearing masks is considered a simple but effective way to defend against the spread of the epidemic, so a real-time and accurate mask wearing detection system is important. In this paper, a deep learning-based mask wearing detection system is developed to help people defend against the epidemic. The system provides three important functions: image detection, video detection, and real-time detection. To keep a high detection rate, a deep learning-based method is adopted to detect masks. Because of the suddenness of the epidemic, mask wearing datasets are scarce, so a mask wearing dataset is collected in this paper. In addition, to reduce computational cost and runtime, a simple online and real-time tracking method is adopted for video detection and monitoring. Furthermore, a function is implemented that calls the camera to perform mask wearing detection in real time. The results show that the developed system performs well in the mask wearing detection task: precision, recall, mAP, and F1 reach 86.6%, 96.7%, 96.2%, and 91.4%, respectively.
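The reported F1 score is consistent with the reported precision and recall: F1 is their harmonic mean, and 86.6% precision with 96.7% recall gives 91.4%.

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reproduces the reported F1 from the reported precision and recall.
f1 = round(f1_score(0.866, 0.967), 3)  # → 0.914
```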

Dynamic Modeling and Georegistration of Airborne Video Sequences

  • Lee, Changno
    • Korean Journal of Geomatics
    • /
    • v.3 no.1
    • /
    • pp.23-32
    • /
    • 2003
  • Rigorous sensor and dynamic modeling techniques are required if spatial information is to be accurately extracted from video imagery. First, a mathematical model for an uncalibrated video camera and a description of a bundle adjustment with added parameters, for purposes of general block triangulation, are presented. This is followed by the application of invariance-based techniques, with constraints, to derive initial approximations for the camera parameters. Finally, dynamic modeling using the Kalman filter is discussed. Results of various experiments applying the developed techniques to real video imagery are given.

  • PDF
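The predict/update cycle at the heart of the Kalman filtering mentioned above can be sketched in the scalar case. The paper's filter tracks the full exterior-orientation state of the airborne camera; this toy version, with identity dynamics, only shows the mechanics of one cycle:

```python
def kalman_step(x, P, z, Q, R_meas):
    """One predict/update cycle of a scalar Kalman filter.

    x, P  : prior state estimate and its variance
    z     : new measurement
    Q     : process noise variance (added during prediction)
    R_meas: measurement noise variance
    """
    # Predict (identity dynamics): state unchanged, uncertainty grows.
    P = P + Q
    # Update: blend prediction and measurement by the Kalman gain.
    K = P / (P + R_meas)
    x = x + K * (z - x)
    P = (1.0 - K) * P
    return x, P

# One step: prior x=0 with variance 1, measurement z=1 with variance 1.
x, P = kalman_step(0.0, 1.0, 1.0, 0.0, 1.0)  # → x=0.5, P=0.5
```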