• Title/Abstract/Keyword: Video stream

573 results found (processing time: 0.028 s)

적외선 영상해석을 이용한 이중목적탄 자탄계수 계측기법연구 (DPICM subprojectile counting technique using image analysis of infrared camera)

  • 박원우;최주호;유준
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 1997년도 한국자동제어학술회의논문집; 한국전력공사 서울연수원; 17-18 Oct. 1997
    • /
    • pp.11-16
    • /
    • 1997
  • This paper describes a grenade counting system developed for DPICM submunition analysis using infrared video streams, together with the video stream processing techniques it employs. The video stream data processing procedure consists of four stages: analog infrared video recording, video stream capture, video stream pre-processing, and video stream analysis including grenade counting. Application of these algorithms to real bursting tests has shown that automated submunition counting is feasible.

  • PDF

웨이브릿 기반 비디오 신호의 멀티 스트림 전송 기법 (Multi-stream Delivery Method of the Video Signal based on Wavelet)

  • 강경원;류권열;권기룡;문광석;김문수
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2001년도 하계종합학술대회 논문집(3)
    • /
    • pp.101-104
    • /
    • 2001
  • Over the last few years, streaming audio and video content on Internet sites has increased at unprecedented rates. The predominant method of delivering video over the current Internet is video streaming such as SureStream or Intelligent Stream. Since each method provides the client with only one data stream from one server, it often suffers from poor picture quality when a network link is congested. In this paper, we propose a novel method of delivering a wavelet-based video stream to a client by utilizing multi-threaded parallel connections from the client to multiple servers, which also provides a better way to address scalability. The experimental results show that the video quality delivered by the proposed multi-threaded stream is significantly improved over conventional single-stream methods.

  • PDF

모바일 환경을 위한 준-동적 디지털 비디오 어댑테이션 시스템 (Semi-Dynamic Digital Video Adaptation System for Mobile Environment)

  • 추진호;이상민;낭종호
    • 한국정보과학회논문지:소프트웨어및응용
    • /
    • Vol. 31, No. 10
    • /
    • pp.1320-1331
    • /
    • 2004
  • A video adaptation system converts a video so that its quality is maximized while satisfying network and client constraints. In this paper, we propose a semi-dynamic adaptation system that statically generates intermediate videos and quality-measurement information in advance. The intermediate videos are generated by repeatedly halving the resolution of the original video and are stored on the server. The quality-measurement information consists of statically generated tables of smoothness scores per frame rate and sharpness scores per bits-per-pixel. These intermediate results allow the dynamic adaptation step, which converts the video while considering the client's quality of service, to run as quickly as possible. Experimental results show that the proposed system performs adaptation about 30 times faster than a conventional dynamic adaptation system, at the cost of about a 2% quality degradation and additional server space to store the intermediate videos.
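The request-time selection step described in the abstract above can be sketched roughly as follows; the version list, quality tables, and weighting are made-up illustrations of the paper's precomputed intermediate videos and quality-measurement tables, not its actual data:

```python
# Intermediate versions stored on the server: (width, height, bitrate_kbps),
# each generated by halving the resolution of the previous one.
VERSIONS = [(640, 480, 1000), (320, 240, 400), (160, 120, 150)]

# Statically generated quality tables: smoothness per frame rate and
# sharpness per bits-per-pixel (all values invented for illustration).
SMOOTHNESS = {30: 1.0, 15: 0.8, 10: 0.6}
SHARPNESS = {0.1: 0.5, 0.3: 0.8, 0.6: 1.0}

def sharpness_for(bpp):
    """Look up the sharpness score at the nearest tabulated bits-per-pixel."""
    key = min(SHARPNESS, key=lambda k: abs(k - bpp))
    return SHARPNESS[key]

def select_version(max_bitrate_kbps, frame_rate):
    """Pick the stored version with the best estimated quality that fits the
    client's bitrate constraint -- no re-encoding at request time."""
    full_area = VERSIONS[0][0] * VERSIONS[0][1]
    best, best_q = None, -1.0
    for w, h, kbps in VERSIONS:
        if kbps > max_bitrate_kbps:
            continue
        bpp = kbps * 1000 / (w * h * frame_rate)
        # penalize downscaled versions (illustrative weighting)
        q = SMOOTHNESS.get(frame_rate, 0.5) * sharpness_for(bpp) * (w * h / full_area)
        if q > best_q:
            best, best_q = (w, h, kbps), q
    return best
```

Because every candidate version already exists on disk, the whole selection is a table lookup rather than a transcode, which is where the reported ~30x speedup over dynamic adaptation comes from.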

대화형 연산 후 수렴을 이용한 저장된 비디오의 효율적인 전송 스케줄 작성 방안 (An Efficient Scheme to write a Transmission Schedule using Convergence after Interactive Operations in a Stored Video)

  • 이재홍;김승환
    • 한국정보처리학회논문지
    • /
    • Vol. 7, No. 7
    • /
    • pp.2050-2059
    • /
    • 2000
  • In a Video-on-Demand (VOD) service, a server has to return to normal playback quickly at a new frame position after interactive operations such as jump or fast playback. In this paper, we propose an efficient scheme to write a transmission schedule for restarting playback of a video stream at a new frame position after interactive operations. The proposed scheme is based on a convergence characteristic: transmission schedules with different playback startup frame positions in a video stream meet each other at some frame position. The scheme applies bandwidth smoothing from the new frame position only up to the convergence position, without considering all remaining frames of the video stream. It then transmits video data according to the new schedule from the new frame position to the convergence position, and transmits the remaining video data according to the reference schedule from the convergence position to the last frame position. Through many experiments based on MPEG-1 bit-trace data, we showed that a convergence position exists for any frame position in a video stream. With the convergence we reduced the computational overhead of the bandwidth smoothing applied to find a new transmission schedule after interactive operations. Storage overhead is also greatly reduced by storing pre-calculated schedule information, up to the convergence position for each I-frame position of a video stream, off-line along with the video data. By saving transmission-schedule information off-line along with the video data and looking up the schedule corresponding to the specified restart frame position, we expect normal playback of a video stream to resume with a small, tolerable startup delay.

  • PDF
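A minimal sketch of the convergence idea (not the paper's exact smoothing algorithm): minimal-peak-rate smoothing started at two different frame positions eventually yields identical schedules, so only the segment up to the convergence position needs fresh computation:

```python
def smooth_schedule(cum, start):
    """Minimal-peak-rate smoothing from frame `start`: repeatedly send at the
    smallest constant rate that never underflows, i.e. the rate required to
    reach the most demanding future frame (a critical point).
    `cum` is cumulative frame sizes; returns {frame: cumulative bytes sent}."""
    sent = cum[start - 1] if start > 0 else 0.0
    sched, i = {}, start
    while i < len(cum):
        j, rate = max(
            ((k, (cum[k] - sent) / (k - i + 1)) for k in range(i, len(cum))),
            key=lambda t: t[1],
        )
        for k in range(i, j + 1):
            sent += rate
            sched[k] = sent
        i = j + 1
    return sched

def convergence_position(ref, new):
    """First frame at which the restart schedule meets the reference schedule."""
    for k in sorted(new):
        if abs(ref[k] - new[k]) < 1e-6:
            return k
    return None

sizes = [10, 2, 2, 2, 10, 2, 2, 2]              # per-frame bytes (toy example)
cum = [sum(sizes[: i + 1]) for i in range(len(sizes))]
ref = smooth_schedule(cum, 0)                   # reference schedule, stored off-line
new = smooth_schedule(cum, 2)                   # schedule after a jump to frame 2
```

Past the convergence position the two schedules coincide, so the server can switch from the freshly computed restart schedule back to the stored reference schedule, which is exactly what bounds both the computation and the off-line storage.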

SPATIOTEMPORAL MARKER SEARCHING METHOD IN VIDEO STREAM

  • Shimizu, Noriyuki;Miyao, Jun'ichi
    • 한국방송∙미디어공학회:학술대회논문집
    • /
    • 한국방송공학회 2009년도 IWAIT
    • /
    • pp.812-815
    • /
    • 2009
  • This paper discusses a method for searching for special markers attached to persons in a surveillance video stream. The marker is a small plate with infrared LEDs, called a spatiotemporal marker because it shows a 2-D sequential pattern synchronized with video frames. The search is based on motion vectors of the same kind as those used in video compression. Experiments using prototype markers show that the proposed method is practical. Though the method is applicable to a video stream independently, it can decrease total computation cost if the motion vector analysis of video compression and the proposed method are unified.

  • PDF
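The motion vectors the method relies on are the block-matching kind computed during video compression; a toy exhaustive-search version (illustrative only, with made-up block and search sizes) looks like this:

```python
import numpy as np

def motion_vector(prev, curr, y, x, block=8, search=4):
    """Best (dy, dx) minimizing the sum of absolute differences (SAD)
    between the block at (y, x) in `curr` and candidate blocks in `prev`
    within +/- `search` pixels -- the same computation a video encoder
    performs for motion estimation."""
    ref = curr[y:y + block, x:x + block].astype(np.int32)
    best, best_sad = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            py, px = y + dy, x + dx
            if py < 0 or px < 0 or py + block > prev.shape[0] or px + block > prev.shape[1]:
                continue  # candidate block falls outside the previous frame
            cand = prev[py:py + block, px:px + block].astype(np.int32)
            sad = np.abs(ref - cand).sum()
            if best_sad is None or sad < best_sad:
                best, best_sad = (dy, dx), sad
    return best
```

If the encoder already produces these vectors, the marker tracker can consume them directly, which is the cost saving the abstract points to.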

Dual-stream Co-enhanced Network for Unsupervised Video Object Segmentation

  • Hongliang Zhu;Hui Yin;Yanting Liu;Ning Chen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 18, No. 4
    • /
    • pp.938-958
    • /
    • 2024
  • Unsupervised Video Object Segmentation (UVOS) is a highly challenging problem in computer vision, as the annotation of the target object in the testing video is entirely unknown. The main difficulty is to effectively handle the complicated and changeable motion state of the target object and the confusion with similar background objects in the video sequence. In this paper, we propose a novel deep Dual-stream Co-enhanced Network (DC-Net) for UVOS via bidirectional motion-cue refinement and multi-level feature aggregation, which can fully exploit motion cues and effectively integrate features at different levels to produce high-quality segmentation masks. DC-Net is a dual-stream architecture in which the two streams are co-enhanced by each other. One is a motion stream with a Motion-cues Refine Module (MRM), which learns from bidirectional optical-flow images and produces a fine-grained, complete, and distinctive motion saliency map; the other is an appearance stream with a Multi-level Feature Aggregation Module (MFAM) and a Context Attention Module (CAM), which are designed to integrate features at different levels effectively. Specifically, the motion saliency map obtained by the motion stream is fused with each stage of the decoder in the appearance stream to improve the segmentation, and in turn the segmentation loss in the appearance stream feeds back into the motion stream to enhance the motion refinement. Experimental results on three datasets (DAVIS2016, VideoSD, SegTrack-v2) demonstrate that DC-Net achieves results comparable to some state-of-the-art methods.

동영상 스트림 크기 및 품질 예측에 기반한 동적 동영상 적응변환 방법 (A Dynamic Video Adaptation Scheme based on Size and Quality Predictions)

  • 김종항;낭종호
    • 한국정보과학회논문지:시스템및이론
    • /
    • Vol. 32, No. 2
    • /
    • pp.95-105
    • /
    • 2005
  • In this paper, we propose a new dynamic video adaptation scheme that generates a video stream suitable for a mobile device and the current network conditions without repeated encoding/decoding. In the proposed scheme, the characteristics of video codecs such as MPEG-1/-2/-4 are analyzed in advance, focusing on the size and quality of the encoded video stream, and stored at a proxy as codec-dependent characteristic tables. Using these tables together with the size and quality of the highest-quality stream of the requested video, the size and quality of a video stream suitable for the requesting mobile device can be predicted dynamically. Based on this prediction, the proposed scheme generates an adapted video stream version that maximizes video quality while satisfying the space constraint of the mobile device, without repeated encoding/decoding. Experimental results show that the proposed scheme performs dynamic video adaptation very quickly with an error rate below 5%. The proposed scheme can be used in a proxy server for mobile devices that quickly adapts Internet videos encoded with various video codecs.
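The prediction step can be sketched as follows; the codec-characteristic table entries (size ratios and quality scores per quantization level) are invented for illustration, not measured values from the paper:

```python
# Codec-dependent characteristic table stored at the proxy: for each
# (codec, quantization level), the fraction of the full-quality stream size
# and the relative quality, measured once off-line (values made up here).
CHAR_TABLE = {
    ("mpeg4", 4): (0.55, 0.92),
    ("mpeg4", 8): (0.30, 0.78),
    ("mpeg4", 16): (0.15, 0.60),
}

def predict(codec, full_size_bytes, size_budget_bytes):
    """Predict, from the full-quality stream's size alone, which quantization
    level fits the device's space budget with the best quality -- no trial
    encode/decode needed. Returns (level, quality, predicted_size) or None."""
    best = None
    for (c, q), (size_ratio, quality) in CHAR_TABLE.items():
        if c != codec:
            continue
        predicted = full_size_bytes * size_ratio
        if predicted <= size_budget_bytes and (best is None or quality > best[1]):
            best = (q, quality, predicted)
    return best
```

The single trial-free table lookup is what keeps the reported prediction error below 5% while avoiding the encode/decode loop of conventional adaptation.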

Video Quality for DTV Essential Hidden Area Utilization

  • Han, Chan-Ho
    • Journal of Multimedia Information System
    • /
    • Vol. 4, No. 1
    • /
    • pp.19-26
    • /
    • 2017
  • The compression of video for both full HD and UHD requires the inclusion of extra vertical lines in every video frame, known as the DTV essential hidden area (DEHA), for the effective functioning of the MPEG-2/4/H encoder, stream, and decoder. However, while the encoding/decoding process depends on the DEHA, the DEHA is conventionally viewed as a redundancy in terms of channel utilization or storage efficiency. This paper proposes a block-mode DEHA method to utilize the DEHA more effectively. Partitioning video block images and then evenly filling the representative DEHA macroblocks with the average DC coefficient of the active video macroblock minimizes the amount of DEHA data entering the compressed video stream. Theoretically, this process results in less DEHA data entering the video stream. Experimental testing of the proposed block-mode DEHA method revealed a slight improvement in the quality of the active video. Beyond this improvement in video quality, the attractiveness of the proposed DEHA method is heightened by the ease with which it can be implemented with existing video encoders.
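A rough sketch of the block-mode filling idea, assuming the hidden area sits below the active picture and approximating a macroblock's DC coefficient by its mean pixel value (both simplifications of the paper's method):

```python
import numpy as np

MB = 16  # macroblock size in pixels

def fill_deha(frame, active_height):
    """Fill each hidden-area macroblock column with the DC (mean) value of
    the last active macroblock above it, instead of black padding, so the
    hidden area costs almost no bits in the compressed stream."""
    out = frame.astype(np.float64).copy()
    h, w = frame.shape
    last_mb_top = active_height - MB  # top row of the last active macroblock
    for x in range(0, w, MB):
        dc = frame[last_mb_top:active_height, x:x + MB].mean()
        out[active_height:h, x:x + MB] = dc  # flat block -> DC-only macroblock
    return out
```

A flat block has only a DC term after the transform, so the filled hidden-area macroblocks compress to almost nothing, which is the bit saving the paper redirects toward the active picture.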

Two-Stream Convolutional Neural Network for Video Action Recognition

  • Qiao, Han;Liu, Shuang;Xu, Qingzhen;Liu, Shouqiang;Yang, Wanggan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 15, No. 10
    • /
    • pp.3668-3684
    • /
    • 2021
  • Video action recognition is widely used in video surveillance, behavior detection, human-computer interaction, medically assisted diagnosis, and motion analysis. However, video action recognition can be disturbed by many factors, such as background and illumination. A two-stream convolutional neural network trains spatial and temporal models of the video separately and fuses them at the output. The multi-segment two-stream convolutional neural network model trains on temporal and spatial information from the video to extract their features and fuse them, then determines the category of the video action. The Google Xception model and transfer learning are adopted in this paper, with the Xception model trained on ImageNet used as the initial weights. This largely overcomes the problem of model underfitting caused by an insufficient video behavior dataset, and it can effectively reduce the influence of various disturbing factors in the video. This approach also greatly improves accuracy and reduces training time. Furthermore, to make up for the shortage of data, the Kinetics400 dataset was used for pre-training, which greatly improved the accuracy of the model. In this applied research, the expected goal is essentially achieved, and the design of the original two-stream model is improved.
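The output-end fusion of the two streams can be illustrated with a simple weighted average of per-stream class probabilities; the logits below are arbitrary examples, not outputs of a trained Xception model:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse(spatial_logits, temporal_logits, w_spatial=0.5):
    """Late fusion: weighted average of the spatial (RGB) and temporal
    (optical-flow) streams' class probabilities. Returns (class, probs)."""
    p = (w_spatial * softmax(spatial_logits)
         + (1 - w_spatial) * softmax(temporal_logits))
    return int(np.argmax(p)), p
```

With equal weights, a confident temporal stream can override a weakly confident spatial one, which is why the two independently trained streams complement each other at the output end.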

멀티스트림을 이용한 비디오 스트림의 평활화 (Video Stream Smoothing Using Multistreams)

  • 강경원;문광석
    • 융합신호처리학회논문지
    • /
    • Vol. 3, No. 1
    • /
    • pp.21-26
    • /
    • 2002
  • Video streams generate various traffic patterns depending on the structure of the compression algorithm used and the complexity of the scene, which makes resource allocation between sender and receiver difficult and makes continuous playback hard to achieve over packet networks such as the current Internet. Therefore, this paper proposes a video stream smoothing method using multistreams. The proposed method defines a logical data unit (LDU) according to the form of the stream and then generates and transmits multiple streams of fixed-size units. This reduces the buffering time incurred during smoothing and prefetching, is robust against network jitter, and achieves efficient transmission characteristics that fully utilize the client's bandwidth.

  • PDF
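The LDU-based multistream idea can be sketched as follows (names and sizes illustrative): the stream is cut into fixed-size LDUs, spread round-robin over several streams, and interleaved back into playback order at the client:

```python
def split_into_ldus(data, ldu_size):
    """Cut the video byte stream into fixed-size logical data units (LDUs)."""
    return [data[i:i + ldu_size] for i in range(0, len(data), ldu_size)]

def distribute(ldus, n_streams):
    """Round-robin assignment of LDUs to parallel streams, so each stream
    carries a near-constant share of the traffic."""
    streams = [[] for _ in range(n_streams)]
    for i, ldu in enumerate(ldus):
        streams[i % n_streams].append(ldu)
    return streams

def reassemble(streams):
    """Client side: interleave LDUs from the parallel streams back into
    the original playback order."""
    out, i = [], 0
    while True:
        s = streams[i % len(streams)]
        idx = i // len(streams)
        if idx >= len(s):
            break
        out.append(s[idx])
        i += 1
    return b"".join(out)
```

Because every stream carries equally sized units, the aggregate rate seen by the client is flat even when the underlying video bitrate is bursty, which is the smoothing effect the abstract describes.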