• Title/Summary/Keyword: RTSP (Real-Time Streaming Protocol)


A Kernel-level RTP for Efficient Support of Multimedia Service on Embedded Systems (내장형 시스템의 원활한 멀티미디어 서비스 지원을 위한 커널 수준의 RTP)

  • Sun Dong Guk;Kim Tae Woong;Kim Sung Jo
    • Journal of KIISE: Computing Practices and Letters / v.10 no.6 / pp.460-471 / 2004
  • Since RTP is suitable for real-time data transmission in multimedia services such as VoD, AoD, and VoIP, it has been adopted as the real-time transport protocol by RTSP, H.323, and SIP. Although an RTP protocol stack for embedded systems has been in great need for efficient support of multimedia services, such a stack has not been developed yet. In this paper, we describe embeddedRTP, which supports the RTP protocol stack at the kernel level so that it is suitable for embedded systems. Since embeddedRTP is designed to reside in the UDP module, existing applications that rely on TCP/IP services can run the same as before, while applications that rely on the RTP protocol stack can request RTP services through the embeddedRTP API. EmbeddedRTP stores transmitted RTP packets in per-session packet buffers, using each packet's port number and multimedia session information. Communication between applications and embeddedRTP is performed through system calls and signal mechanisms. Additionally, the embeddedRTP API makes it possible to develop applications more conveniently. Our performance tests show that embeddedRTP processes packets about 7.5 times faster than UCL RTP for multimedia streaming services on a PDA, even though its object code size is about 58% smaller than UCL RTP's.
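
For illustration only, the per-session buffering idea described in this abstract can be pictured with a small user-space sketch: RTP packets arriving on a UDP port are parsed and appended to a buffer keyed by that port. This is a Python analogue of the concept, not the authors' code; embeddedRTP itself lives in the kernel's UDP module and is reached through system calls and signals, and every name below is hypothetical.

```python
# Illustrative sketch only: a user-space analogue of per-session RTP buffering.
# embeddedRTP performs the equivalent demultiplexing inside the kernel.
import socket
import struct
from collections import defaultdict

RTP_HEADER_LEN = 12  # fixed RTP header size (RFC 3550), no CSRC/extensions

def parse_rtp_header(packet: bytes):
    """Unpack the fixed 12-byte RTP header into a small dict."""
    if len(packet) < RTP_HEADER_LEN:
        return None
    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:RTP_HEADER_LEN])
    return {
        "version": b0 >> 6,
        "payload_type": b1 & 0x7F,
        "sequence": seq,
        "timestamp": timestamp,
        "ssrc": ssrc,
        "payload": packet[RTP_HEADER_LEN:],
    }

# One packet buffer per session, keyed by the local UDP port.
session_buffers = defaultdict(list)

def receive_loop(port: int):
    """Demultiplex incoming RTP packets into the buffer for this port/session."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:  # runs until the process is stopped; a sketch, not a daemon
        data, _addr = sock.recvfrom(2048)
        pkt = parse_rtp_header(data)
        if pkt is not None:
            session_buffers[port].append(pkt)  # per-session buffering by port
```

A real deployment would run one such loop per RTP session; according to the abstract, embeddedRTP does this work in the kernel and notifies applications through signal mechanisms rather than polling.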

An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon;Yun, Chang Ho;Park, Jong Won;Lee, Yong Woo
    • Journal of Internet Computing and Services / v.15 no.3 / pp.45-52 / 2014
  • The Ubiquitous-City (U-City) is a smart or intelligent city built to satisfy human beings' desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Everything or Things (IoE or IoT). It includes a large number of video cameras that are networked together. Together with sensors, the networked video cameras provide one of the main input data sources for many U-City services. They constantly generate a huge amount of video information, real big data for the U-City. The U-City is usually required to process this big data in real time, which is not easy at all. It is also often required that the accumulated video data be analyzed to detect an event or find a figure in them, which demands a lot of computational power and usually takes a long time. Current research tries to reduce the processing time of such big video data, and cloud computing can be a good solution. Among the many cloud computing methodologies that could address the problem, MapReduce is an interesting and attractive one: it has many advantages and is gaining popularity in many areas. Video cameras evolve day by day and their resolution improves sharply, which leads to exponential growth in the data produced by the networked video cameras. We are dealing with real big data when we have to handle video image data produced by high-quality video cameras. Video surveillance systems were of limited use before cloud computing, but they are now spreading widely in U-Cities since useful methodologies have been found. Video data are unstructured, so it is not easy to find good research results on analyzing them with MapReduce. This paper presents an analysis system for video surveillance, a cloud-computing-based video data management system that is easy to deploy, flexible, and reliable. It consists of the video manager, the video monitors, the storage for the video images, the storage client, and the streaming IN component. The "video monitor" consists of the "video translator" and the "protocol manager", and the "storage" contains the MapReduce analyzer. All components were designed according to the functional requirements of a video surveillance system. The "streaming IN" component receives the video data from the networked video cameras and delivers them to the "storage client"; it also manages network bottlenecks to smooth the data stream. The "storage client" receives the video data from the "streaming IN" component and stores them in the storage; it also helps other components access the storage. The "video monitor" component streams the video data smoothly and manages the protocols. The "video translator" sub-component enables users to manage the resolution, codec, and frame rate of the video image, while the "protocol" sub-component manages the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the cloud storage: Hadoop stores the data in HDFS and provides a platform that can process the data with the simple MapReduce programming model. We propose our own methodology for analyzing the video images using MapReduce; that is, the workflow of video analysis is presented and explained in detail in this paper. We carried out a performance evaluation and found that our proposed system worked well; the results are presented with analysis. On our cluster system we used compressed 1920×1080 (FHD) video data, the H.264 codec, and HDFS as video storage. We measured the processing time according to the number of frames per mapper. Tracing the optimal splitting size of the input data and the processing time according to the number of nodes, we found that the system performance scales linearly.
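
As a rough illustration of the MapReduce workflow this abstract describes, the sketch below shows a Hadoop-Streaming-style mapper and reducer in which each mapper inspects frame references and emits event counts that a reducer aggregates. The detect_event stub, the key/value layout, and the use of Hadoop Streaming (rather than the authors' own analyzer code) are assumptions for illustration only.

```python
# Hypothetical Hadoop-Streaming-style sketch of a per-frame event analysis.
# Not the paper's implementation: detect_event() and the key layout are invented.
import sys

def detect_event(frame_ref: str) -> bool:
    """Placeholder for the per-frame analysis done inside each mapper."""
    return frame_ref.endswith("person")  # dummy rule, for illustration only

def mapper(lines):
    # Each input line is assumed to reference one frame (or frame group) in HDFS.
    for line in lines:
        frame_ref = line.strip()
        if frame_ref and detect_event(frame_ref):
            yield ("event", 1)

def reducer(pairs):
    # Aggregate the event counts emitted by all mappers.
    total = 0
    for _key, value in pairs:
        total += value
    yield ("event_total", total)

if __name__ == "__main__":
    # Trivial local run; under Hadoop Streaming the mapper and reducer would be
    # separate scripts reading stdin and writing tab-separated key/value pairs.
    mapped = list(mapper(sys.stdin))
    for key, count in reducer(mapped):
        print(f"{key}\t{count}")
```

On a real cluster the mapper and reducer run as separate processes over HDFS splits, which is where the abstract's measurements of frames per mapper, input split size, and node count come into play.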

Applying Two-channel Video Streaming Technology to a Front and Rear Vehicle Wireless Video Monitoring System (2채널 영상 스트리밍 기술을 적용한 차량용 전. 후방 무선 영상 모니터링 시스템)

  • Na, HeeSu;Won, YoungJin;Yoon, JungGeun;Lee, SangMin;Ahn, MyeongIl;Kim, DongHyun;Moon, JongHoon
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.12 / pp.210-216 / 2014
  • In this paper, a front and rear video monitoring system for vehicles is proposed that helps a driver cope with urgent situations caused by dangerous elements. When parking, the risks created by blind spots can be resolved by using the vehicle's front and rear cameras. In an embedded system environment, a SoC (System on Chip) and two high-resolution CMOS (complementary metal-oxide-semiconductor) image sensors were used to transfer two high-resolution video streams over a TCP/IP-based network. To do so, the images captured by the two cameras were compressed with H.264 and transmitted wirelessly (Wi-Fi) using the Real-time Transport Protocol (RTP). Transmission loss, transmission delay, and transmission limits in the wireless (Wi-Fi) environment were addressed by adjusting the bit-rate of the two H.264-compressed video streams, and a system for optimal transmission in the wireless (Wi-Fi) environment was implemented and tested.
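
As a rough sketch of the two-channel bit-rate adjustment mentioned above, the snippet below splits a measured Wi-Fi throughput estimate between the front and rear H.264 streams. The weighting, headroom, and bitrate bounds are assumptions for illustration, not values from the paper.

```python
# Illustrative sketch only: one plausible way to divide measured Wi-Fi capacity
# between two H.264 streams. All constants and names are assumptions.
MIN_BITRATE_KBPS = 512      # floor so each stream stays decodable
MAX_BITRATE_KBPS = 8000     # ceiling for a single high-resolution H.264 stream

def split_bitrates(available_kbps: float, front_weight: float = 0.6):
    """Divide the measured link capacity between the front and rear encoders.

    The front camera is assumed to be the primary driving view, so it gets the
    larger share (front_weight); the rear camera gets the remainder.
    """
    usable = available_kbps * 0.8  # leave ~20% headroom for RTP/Wi-Fi overhead

    def clamp(value: float) -> float:
        return max(MIN_BITRATE_KBPS, min(MAX_BITRATE_KBPS, value))

    front = clamp(usable * front_weight)
    rear = clamp(usable * (1.0 - front_weight))
    return front, rear

if __name__ == "__main__":
    # Example: the wireless link currently sustains about 12 Mbit/s.
    front_kbps, rear_kbps = split_bitrates(12000)
    print(f"front camera: {front_kbps:.0f} kbps, rear camera: {rear_kbps:.0f} kbps")
```

In practice such a function would be re-evaluated periodically from throughput or loss feedback and the new targets pushed to the two H.264 encoders, which is the kind of adaptation the abstract attributes to its wireless transmission design.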