• Title/Summary/Keyword: Video-based technique


A Novel Approach for Object Detection in Illuminated and Occluded Video Sequences Using Visual Information with Object Feature Estimation

  • Sharma, Kajal
    • IEIE Transactions on Smart Processing and Computing / v.4 no.2 / pp.110-114 / 2015
  • This paper reports a novel object-detection technique for video sequences. The proposed algorithm detects objects in illuminated and occluded videos using object features and a neural network, and consists of two functional modules: region-based object feature extraction and continuous detection of objects across video frames using the region features. The scheme is proposed as an enhancement of Lowe's scale-invariant feature transform (SIFT) object-detection method and addresses the high computation time of feature generation in SIFT. The improvement is achieved by region-based classification of features in the objects to be detected; an optimal neural-network-based feature reduction is presented to reduce the object region feature dataset, with winner-pixel estimation between frames of the video sequence. Simulation results show that the proposed scheme achieves better overall performance than other object-detection techniques and that region-based feature detection is faster than other recent techniques.
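
As a hedged illustration of the region-based idea in this abstract (not the authors' code), the sketch below restricts SIFT keypoint extraction to candidate object regions with OpenCV; the region boxes and the matching step mentioned in the trailing comment are assumptions.

```python
# Sketch: region-restricted SIFT feature extraction (assumed OpenCV usage,
# not the paper's implementation). Limiting detection to candidate object
# regions reduces the number of keypoints that must be generated and matched.
import cv2
import numpy as np

def region_sift_features(frame_gray, regions):
    """Extract SIFT descriptors only inside the given (x, y, w, h) regions."""
    sift = cv2.SIFT_create()
    features = []
    for (x, y, w, h) in regions:
        mask = np.zeros(frame_gray.shape, dtype=np.uint8)
        mask[y:y + h, x:x + w] = 255          # restrict detection to the region
        kps, desc = sift.detectAndCompute(frame_gray, mask)
        if desc is not None:
            features.append((kps, desc))
    return features

# Usage idea: match a detected object's descriptors against the next frame's
# regions with a brute-force matcher and keep the region with the most matches.
```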

Real-Time Panoramic Video Streaming Technique with Multiple Virtual Cameras (다중 가상 카메라의 실시간 파노라마 비디오 스트리밍 기법)

  • Ok, Sooyol;Lee, Suk-Hwan
    • Journal of Korea Multimedia Society / v.24 no.4 / pp.538-549 / 2021
  • In this paper, we introduce a technique for real-time 360-degree panoramic video streaming with multiple virtual cameras. The proposed technique consists of generating 360-degree panoramic video data via ORB feature-point detection, texture transformation, panoramic video data compression, and RTSP-based video streaming. In particular, the generation of the 360-degree panoramic video data and the texture transformation are accelerated with CUDA for complex processing such as camera calibration, stitching, blending, and encoding. Our experiments evaluated the frame rate (fps) of the transmitted 360-degree panoramic video. The results verify that the technique sustains at least 30 fps at 4K output resolution, indicating that it can both generate and transmit 360-degree panoramic video data in real time.
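
A rough sketch of the feature-matching and stitching core follows; it uses CPU OpenCV ORB and a single homography, whereas the paper's CUDA acceleration, camera calibration, blending, and RTSP transmission are not reproduced here.

```python
# Sketch: ORB feature matching and homography-based stitching of two views
# (CPU OpenCV only; CUDA acceleration and RTSP streaming are omitted).
import cv2
import numpy as np

def stitch_pair(img_left, img_right):
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

    # Points in the right image (src) mapped onto the left image (dst).
    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the right image into the left image's plane and paste the left image.
    h, w = img_left.shape[:2]
    pano = cv2.warpPerspective(img_right, H, (w * 2, h))
    pano[0:h, 0:w] = img_left
    return pano
```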

A Scalable Video Coding(SVC) and Balanced Selection Algorithm based P2P Streaming Technique for Efficient Military Video Information Transmission (효율적인 국방 영상정보 전송을 위한 확장비디오코딩(SVC) 및 균형선택 알고리즘 기반의 피투피(P2P) 비디오 스트리밍 기법 연구)

  • Shin, Kyuyong;Kim, Kyoung Min;Lee, Jongkwan
    • Convergence Security Journal / v.19 no.4 / pp.87-96 / 2019
  • Recently, with the rapid development of video equipment and technology, a tremendous amount of video information is produced and used in the military domain to acquire battlefield information and to support effective command and control. The video playback devices currently used in the military range from low-performance tactical multi-functional terminals (TMFT) to high-performance video servers, and the networks over which the video information is transmitted likewise range from the low-speed tactical information and communication network (TICN) to ultra-high-speed defense broadband converged networks such as the M-BcN. There is therefore a need for a streaming technique that can transmit defense video information efficiently across heterogeneous communication equipment and network environments. To solve this problem, this paper proposes a Scalable Video Coding (SVC) and balanced-selection-algorithm-based Peer-to-Peer (P2P) streaming technique, and the feasibility of the proposed technique is verified by simulation. The simulation results, obtained with our BitTorrent simulator, show that the proposed balanced selection scheme outperforms the sequential and rarest-first selection algorithms.
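
The sketch below models one plausible reading of a balanced piece-selection policy over SVC layers, contrasted with plain rarest-first; the layer/piece model and the round-robin rule are assumptions, not the authors' BitTorrent simulator.

```python
# Sketch: balanced piece selection over SVC layers (illustrative model only).
# Base-layer pieces are needed by every receiver, so selection is balanced
# across layers instead of always taking the globally rarest piece.
from collections import defaultdict

def balanced_select(missing, availability):
    """missing: {layer: set(piece_ids)}, availability: {piece_id: peer_count}."""
    by_layer = defaultdict(list)
    for layer, pieces in missing.items():
        for p in pieces:
            by_layer[layer].append(p)
    # Visit layers round-robin from the base layer up, taking the rarest
    # piece inside each layer, so no single layer starves the others.
    order = []
    layers = sorted(by_layer)
    while any(by_layer[l] for l in layers):
        for l in layers:
            if by_layer[l]:
                p = min(by_layer[l], key=lambda q: availability.get(q, 0))
                by_layer[l].remove(p)
                order.append(p)
    return order

# Example: base-layer (0) pieces are interleaved with enhancement-layer (1) pieces.
missing = {0: {1, 2}, 1: {10, 11}}
availability = {1: 3, 2: 1, 10: 5, 11: 2}
print(balanced_select(missing, availability))  # -> [2, 11, 1, 10]
```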

Monitoring System of Sandbar Variation of Estuary using Video-based Technique (비디오를 이용한 하구 사주 변화 모니터링 시스템(I) - Hardware System 구축을 중심으로 -)

  • Yoon, Han-Sam;Ryu, Seung-Woo;Kang, Tae-Soon
    • Journal of Advanced Marine Engineering and Technology / v.32 no.4 / pp.630-636 / 2008
  • Monitoring shoreline location and foreshore change over time is a core task carried out by coastal engineers for a wide range of research. With the advent of digital imaging technology, shore-based video monitoring systems offer many advantages over field surveys. This study presents the development and installation of a video monitoring system to support the study of coastal and shoreline dynamics and evolution, especially sandbar variation at the Nakdong river estuary. For this study, five video cameras were installed on tall buildings near the Dadea-po beach (St. 2) and Jinudo island (St. 1) foreshore regions, where coastline variation is highly active, and coastline movement has been monitored with these systems since August 2007. The video images show a spit-type sandbar appearing in the foreshore region of Doyodeung and Dadea-po beach, and the deposition process in the Jinudo island foreshore region was measured. The video-based monitoring system built in this study should therefore be able to identify changes in the area and width of the shoreline and beach of the Nakdong river estuary.

Implementation of a video service system for internet based on H.263 (H.263을 기반으로하는 인터넷용 동영상 서비스 시스템 구현)

  • 이성수;남재열
    • Proceedings of the IEEK Conference / 1998.06a / pp.737-740 / 1998
  • With the worldwide boom of the Internet, demand for various multimedia services has been increasing, and the demand for effective video services in particular has grown rapidly. In this paper, we describe the implementation of a video service system for Internet use based on the H.263 video compression technique and UDP sockets in a TCP/IP environment. In addition, by using the plug-in-play technique, the implemented system improves the user interface for correct retrieval and easy usage.

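For readers unfamiliar with the transport side, the sketch below shows a minimal UDP sender for pre-encoded video data; the H.263 encoding itself and the plug-in player are out of scope, and the host, port, and chunk size are assumptions.

```python
# Sketch: sending pre-encoded video data over UDP (the H.263 encoding step
# is not shown; host, port, and chunk size are illustrative assumptions).
import socket

MAX_PAYLOAD = 1400  # keep each datagram under a typical Ethernet MTU

def stream_file(path, host="127.0.0.1", port=5004):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    with open(path, "rb") as f:
        seq = 0
        while chunk := f.read(MAX_PAYLOAD):
            # 2-byte sequence number so the receiver can detect lost datagrams
            sock.sendto(seq.to_bytes(2, "big") + chunk, (host, port))
            seq = (seq + 1) % 65536
    sock.close()
```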

An Optimized e-Lecture Video Search and Indexing framework

  • Medida, Lakshmi Haritha;Ramani, Kasarapu
    • International Journal of Computer Science & Network Security / v.21 no.8 / pp.87-96 / 2021
  • The demand for e-learning through video lectures is rapidly increasing due to its diverse advantages over traditional learning methods, which has led to massive volumes of web-based lecture videos. Indexing and retrieval of a lecture video or a lecture video topic has thus proved to be an exceptionally challenging problem. Many techniques in the literature are either visual- or audio-based, but not both. Since the visual and audio components are equally important for content-based indexing and retrieval, the current work focuses on both. A framework for automatic topic-based indexing and search based on the innate content of lecture videos is presented. The text on the slides is extracted using the proposed Merged Bounding Box (MBB) text detector, and the audio text is extracted using Google Speech Recognition (GSR) technology. This hybrid approach generates the indexing keywords from the merged transcripts of both the video and audio extractors. Search within the indexed documents is optimized using Naïve Bayes (NB) classification and K-Means clustering models: the optimized search retrieves results by searching only the relevant document cluster within predefined categories rather than the whole lecture video corpus. The work is carried out on a dataset generated by assigning categories to lecture video transcripts gathered from e-learning portals. Search performance is assessed in terms of accuracy and time taken, and the improved accuracy of the proposed indexing technique is compared with the established chain indexing technique.
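
A small sketch of the optimized-search idea follows: transcripts are clustered with K-Means and a Naïve Bayes classifier routes a query to a single cluster before searching. The toy corpus, the choice of k, and the use of scikit-learn are assumptions, not the paper's pipeline.

```python
# Sketch: cluster lecture transcripts with K-Means and route a query to one
# cluster with Naive Bayes so only that cluster is searched.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.naive_bayes import MultinomialNB

transcripts = [
    "gradient descent minimizes a loss function",
    "sorting algorithms quicksort mergesort complexity",
    "backpropagation trains neural networks with gradients",
    "binary search trees and balanced tree rotations",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(transcripts)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
nb = MultinomialNB().fit(X, kmeans.labels_)        # learn the cluster labels

query = vec.transform(["how does gradient descent work"])
cluster = nb.predict(query)[0]
hits = [t for t, c in zip(transcripts, kmeans.labels_) if c == cluster]
print(cluster, hits)                               # search only this cluster
```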

Removing Shadows for the Surveillance System Using a Video Camera (비디오 카메라를 이용한 감시 장치에서 그림자의 제거)

  • Kim, Jung-Dae;Do, Yong-Tae
    • Proceedings of the KIEE Conference / 2005.05a / pp.176-178 / 2005
  • In the images of a video camera employed for surveillance, detecting targets by extracting the foreground image is of great importance. The detected foreground regions, however, include not only moving targets but also their shadows. This paper presents a novel technique for detecting shadow pixels in the foreground image of a video camera. The image characteristics of the cameras employed, a web-cam and a CCD camera, are first analysed in the HSV color space, and a pixel-level shadow detection technique is proposed based on this analysis. Compared with existing techniques, in which unified criteria are applied to all pixels, the proposed technique determines shadow pixels by exploiting the fact that the effect of shadowing on each pixel differs depending on its brightness in the background image. Such an approach can accommodate local features in an image and maintain consistent performance even in a changing environment. In experiments on pedestrians, the proposed technique showed better results than an existing technique.

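The sketch below shows a generic pixel-level HSV shadow test with a brightness-ratio criterion; the paper's exact brightness-dependent thresholds are not reproduced, and the tolerance values are assumptions.

```python
# Sketch: pixel-level shadow test in HSV (generic criteria). A foreground
# pixel is treated as shadow when hue and saturation stay close to the
# background but brightness (V) drops within an allowed range.
import cv2
import numpy as np

def shadow_mask(frame_bgr, background_bgr,
                hue_tol=10, sat_tol=40, v_lo=0.4, v_hi=0.95):
    fg = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    bg = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)

    dh = np.abs(fg[..., 0] - bg[..., 0])
    dh = np.minimum(dh, 180 - dh)                 # hue is circular in OpenCV
    ds = np.abs(fg[..., 1] - bg[..., 1])
    ratio = fg[..., 2] / (bg[..., 2] + 1e-6)      # brightness ratio fg / bg

    return ((dh < hue_tol) & (ds < sat_tol) &
            (ratio > v_lo) & (ratio < v_hi)).astype(np.uint8) * 255
```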

Video Scene Segmentation Technique based on Color and Motion Features (칼라 및 모션 특징 기반 비디오 씬 분할 기법)

  • 송창준;고한석;권용무
    • Journal of Broadcast Engineering / v.5 no.1 / pp.102-112 / 2000
  • Previous video structuring techniques are mainly limited to the shot or shot-group level, but shot-level structure cannot provide the semantics within a video, so research on higher-level structuring has recently been pursued to overcome this drawback. To that end, we propose a video scene segmentation technique based on color and motion features. To account for varying color distributions, each shot is divided into sub-shots based on the color feature, and a key frame is extracted from each sub-shot. The motion feature of a shot is extracted from the motion vectors of MPEG-1 video. Moreover, adaptive weights based on the motion characteristics within the search range are applied to the color and motion features. Experimental results show that the proposed technique outperforms previous techniques with respect to over-segmentation and the reflection of semantics. The proposed technique decomposes video into a meaningful hierarchical structure and supports scene-based video browsing and retrieval.

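A minimal sketch of grouping shots into scenes by key-frame color-histogram similarity combined with a motion term is shown below; the fixed weights and threshold are assumptions standing in for the paper's adaptive weighting.

```python
# Sketch: shot-to-scene grouping by key-frame color histograms plus a motion
# term (fixed weights and threshold are illustrative assumptions).
import cv2
import numpy as np

def color_hist(key_frame_bgr):
    hsv = cv2.cvtColor(key_frame_bgr, cv2.COLOR_BGR2HSV)
    h = cv2.calcHist([hsv], [0, 1], None, [16, 8], [0, 180, 0, 256])
    return cv2.normalize(h, h).flatten()

def segment_scenes(key_frames, motion_mags, w_color=0.7, w_motion=0.3, thr=0.5):
    """key_frames: one BGR image per shot; motion_mags: per-shot average
    motion magnitude, normalized to [0, 1]."""
    boundaries = [0]
    for i in range(1, len(key_frames)):
        color_sim = cv2.compareHist(color_hist(key_frames[i - 1]),
                                    color_hist(key_frames[i]),
                                    cv2.HISTCMP_CORREL)
        motion_sim = 1.0 - abs(motion_mags[i] - motion_mags[i - 1])
        score = w_color * color_sim + w_motion * motion_sim
        if score < thr:                  # low similarity => a new scene starts
            boundaries.append(i)
    return boundaries                    # indices of shots that start a scene
```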

Design of VCR Functions With MPEG Characteristics for VOD based on Multicast (멀티캐스트 기반의 VOD 시스템에서 MPEG의 특성을 고려한 VCR 기능의 설계)

  • Lee, Joa-Hyoung;Jung, In-Bum
    • The KIPS Transactions: Part C / v.16C no.4 / pp.487-494 / 2009
  • VOD (Video On Demand), which provides streaming service in real time according to the user's requests, consists of a video streaming server and client systems. Because the traditional server-client model, in which a server communicates with each of many clients over a 1:1 connection, requires very high network bandwidth and is therefore hard to apply to VOD systems, much research has been done to address this problem. The batching technique is a multicast-based VOD approach that requires very little network bandwidth; however, a batching-based VOD system has the limitation that it is difficult to provide VCR (Video Cassette Recorder) functions. In this paper, we propose a technique that reduces the network bandwidth required to provide VCR functions by using the characteristics of MPEG, an international video compression standard. In the proposed technique, a new video stream for VCR functions is constructed from I-pictures, which can be decoded independently, and this stream is transmitted together with the normal-play video stream in the batching manner. The performance evaluation results show that the proposed technique not only reduces the required network bandwidth and memory usage but also decreases CPU usage.
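
The sketch below is a toy model of the core idea that a VCR (fast-forward) stream can be built from independently decodable I-pictures only; the frame records and GOP pattern are assumptions, not the paper's stream format.

```python
# Sketch: building a fast-forward (VCR) stream from I-pictures only.
# I-pictures decode independently, so B/P pictures can simply be dropped.
from dataclasses import dataclass

@dataclass
class Frame:
    index: int
    kind: str      # 'I', 'P', or 'B'
    data: bytes

def vcr_stream(frames, speed=2):
    """Keep only I-pictures, then subsample them to approximate the
    requested fast-forward speed."""
    i_frames = [f for f in frames if f.kind == "I"]
    return i_frames[::speed]

# Example GOP pattern IBBPBBPBB...: the VCR stream skips all B/P pictures.
gop = "IBBPBBPBBIBBPBBPBBIBBPBBPBB"
frames = [Frame(i, k, b"") for i, k in enumerate(gop)]
print([f.index for f in vcr_stream(frames, speed=1)])   # -> [0, 9, 18]
```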

Joint Source Channel Coding for H.263+ Based Video by Using the Unequal Error Protection Technique (H.263+ 기반 영상 소스 채널 결합 부호화기의 불균등 오류 보호(UEP) 기법 연구)

  • 이상훈;최윤식
    • Proceedings of the IEEK Conference / 2001.06d / pp.45-48 / 2001
  • Unequal Error Protection (UEP) is a reasonable scheme for transmitting compressed video at low bit rates, because it provides a different error-correction capability for each class of data according to its source significance information, and it can therefore adapt flexibly to the given channel environment during video transmission. This paper proposes joint source-channel coding using UEP that takes into account the hierarchical structure of H.263+ based video and the influence of transmission errors. In particular, it proposes an error-resilient video transmission technique that can reduce the complexity of the channel encoder and decoder by partitioning the video data within a frame. The results for the proposed algorithm show that it can increase the quality of the reconstructed video in an error-prone environment without generating additional bits.

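As a loose illustration of unequal error protection by data class, the sketch below applies a stronger repetition code to the more significant partitions of a frame; repetition coding stands in for a real channel code, and the header/motion/texture split is an assumption, not the paper's partitioning.

```python
# Sketch: unequal error protection by class (illustration only; a simple
# repetition code stands in for the real channel code).
def protect(frame_partitions):
    """frame_partitions: {'header': bytes, 'motion': bytes, 'texture': bytes}.
    More significant partitions get a higher repetition factor."""
    repeat = {"header": 3, "motion": 2, "texture": 1}
    coded = {}
    for name, payload in frame_partitions.items():
        r = repeat.get(name, 1)
        coded[name] = bytes(b for b in payload for _ in range(r))
    return coded

def recover(byte_triplet):
    """Majority vote over a 3-repetition byte (used for the header class)."""
    a, b, c = byte_triplet
    return a if a in (b, c) else b

coded = protect({"header": b"\x01", "motion": b"\x02", "texture": b"\x03"})
print({k: len(v) for k, v in coded.items()})  # {'header': 3, 'motion': 2, 'texture': 1}
```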