• Title/Summary/Keyword: video resolution


Real-Time Panoramic Video Streaming Technique with Multiple Virtual Cameras (다중 가상 카메라의 실시간 파노라마 비디오 스트리밍 기법)

  • Ok, Sooyol;Lee, Suk-Hwan
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.4
    • /
    • pp.538-549
    • /
    • 2021
  • In this paper, we introduce a technique for real-time 360-degree panoramic video streaming with multiple virtual cameras. The proposed technique consists of generating 360-degree panoramic video data via ORB feature point detection, texture transformation, panoramic video data compression, and RTSP-based streaming transmission. In particular, the generation of the 360-degree panoramic video data and the texture transformation are accelerated with CUDA to handle complex processing such as camera calibration, stitching, blending, and encoding. Our experiment evaluated the frame rate (fps) of the transmitted 360-degree panoramic video. The results verified that our technique achieves at least 30 fps at 4K output resolution, which indicates that it can both generate and transmit 360-degree panoramic video data in real time.
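The blending step mentioned in this abstract can be illustrated with a minimal sketch: linear (feather) blending across the overlap between two adjacent camera views. This is a generic illustration, not the paper's CUDA implementation; the function name, the single-channel grayscale input, and the horizontal-seam layout are assumptions.

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Blend two horizontally adjacent views whose last/first `overlap`
    columns cover the same scene region, using linear (feather) weights."""
    h, wl = left.shape[:2]
    wr = right.shape[1]
    out = np.zeros((h, wl + wr - overlap), dtype=np.float64)
    # non-overlapping parts are copied directly
    out[:, :wl - overlap] = left[:, :wl - overlap]
    out[:, wl:] = right[:, overlap:]
    # in the overlap, the left image's weight ramps from 1 down to 0
    alpha = np.linspace(1.0, 0.0, overlap)
    out[:, wl - overlap:wl] = alpha * left[:, wl - overlap:] + (1 - alpha) * right[:, :overlap]
    return out
```

A real stitcher would apply this after warping each view with the calibrated homographies; the feathering itself is the part that parallelizes trivially on the GPU.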

An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon;Yun, Chang Ho;Park, Jong Won;Lee, Yong Woo
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.45-52
    • /
    • 2014
  • The Ubiquitous-City (U-City) is a smart, intelligent city that satisfies the desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Everything/Things (IoE/IoT) and includes a large number of networked video cameras. Together with sensors, these cameras provide the main input data for many U-City services, continuously generating a huge amount of video information: real big data for the U-City. The U-City is usually required to manipulate this big data in real time, which is far from easy. It is also often necessary to analyze the accumulated video data to detect an event or find a figure among them, which demands a great deal of computational power and usually takes a long time. Current research tries to reduce the processing time of big video data, and cloud computing is a good way to address this problem. Among the many applicable cloud computing methodologies, MapReduce is an interesting and attractive one: it has many advantages and is gaining popularity in many areas. Video cameras evolve day by day and their resolution improves sharply, leading to exponential growth in the data produced by networked cameras; we face real big data whenever we deal with images from high-quality video cameras. Video surveillance systems were of limited use before cloud computing, but they are now widely deployed in U-Cities thanks to these methodologies. Because video data are unstructured, good research results on analyzing them with MapReduce are hard to find. This paper presents an analysis system for video surveillance: a cloud-computing-based video data management system that is easy to deploy, flexible, and reliable.
It consists of a video manager, video monitors, storage for the video images, a storage client, and a streaming-IN component. The "video monitor" consists of a "video translator" and a "protocol manager", and the "storage" contains the MapReduce analyzer. All components were designed according to the functional requirements of a video surveillance system. The "streaming IN" component receives the video data from the networked cameras, delivers them to the "storage client", and manages network bottlenecks to smooth the data stream. The "storage client" receives the video data from the "streaming IN" component, stores them, and helps other components access the storage. The "video monitor" component streams the video data smoothly and manages the protocols. The "video translator" sub-component lets users manage the resolution, codec, and frame rate of the video, while the "protocol" sub-component handles the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the cloud storage: Hadoop stores the data in HDFS and provides a platform that processes them with the simple MapReduce programming model. We suggest our own methodology for analyzing video images with MapReduce: the workflow of the video analysis is presented and explained in detail. The performance evaluation was carried out experimentally, and we found that our proposed system worked well; the results are presented with analysis. On our cluster system, we used compressed 1920×1080 (FHD) video data, the H.264 codec, and HDFS as the video storage, and measured the processing time according to the number of frames per mapper. Tracing the optimal split size of the input data and the processing time according to the number of nodes, we found that the system performance scales linearly.
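The MapReduce workflow this abstract describes (frames split across mappers, per-frame analysis, aggregated by a reducer) can be sketched in miniature. The sketch below is a plain-Python stand-in, not Hadoop: the brightness-threshold "event detector" and the frames-per-mapper split are illustrative assumptions.

```python
from collections import defaultdict

def map_frames(split):
    """Mapper: emit (label, 1) for each frame in the split. The mean-brightness
    threshold here is a stand-in for a real event detector."""
    for frame_id, brightness in split:
        label = "event" if brightness > 128 else "normal"
        yield (label, 1)

def reduce_counts(pairs):
    """Reducer: sum the counts per label, word-count style."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

# frames split across two mappers (frames-per-mapper = 3)
splits = [[(0, 40), (1, 200), (2, 90)], [(3, 130), (4, 50), (5, 250)]]
pairs = [p for split in splits for p in map_frames(split)]
result = reduce_counts(pairs)
```

Tuning the frames-per-mapper split size is exactly the knob the paper measures: too few frames per mapper and task-startup overhead dominates; too many and the cluster loads unevenly.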

High Resolution Video Synthesis with a Hybrid Camera (하이브리드 카메라를 이용한 고해상도 비디오 합성)

  • Kim, Jong-Won;Kyung, Min-Ho
    • Journal of the Korea Computer Graphics Society
    • /
    • v.13 no.4
    • /
    • pp.7-12
    • /
    • 2007
  • With the advent of digital cinema, more and more movies are digitally produced, distributed via digital media such as hard drives and networks, and finally projected with a digital projector. However, digital cameras capable of shooting at 2K or higher resolution for digital cinema are still very expensive and bulky, which impedes a rapid transition to digital production. As a low-cost solution for acquiring high-resolution digital video, we propose a hybrid camera consisting of a low-resolution CCD for capturing video and a high-resolution CCD for capturing still images at regular intervals. From the output of the hybrid camera, we synthesize high-resolution video in software as follows: for each frame, (1) find pixel correspondences from the current frame to the previous and subsequent keyframes associated with high-resolution still images, (2) synthesize a high-resolution image for the current frame by copying the image blocks associated with the corresponding pixels from the high-resolution keyframe images, and (3) complete the synthesis by filling the holes in the synthesized image. This framework can be extended to NPR video effects and HDR video capture.
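Step (2) of the synthesis, copying blocks from a high-resolution keyframe along pixel correspondences, can be sketched as follows. This is a simplified illustration under assumed inputs: a correspondence dictionary from low-res pixels to keyframe pixels and a fixed 2x scale factor; unmatched regions are marked -1 as holes for step (3).

```python
import numpy as np

def synthesize_frame(corr, lr_shape, keyframe_hr, block=2):
    """For each low-res pixel with a correspondence, copy the matching
    `block`x`block` patch from the high-res keyframe. Pixels without a
    correspondence remain holes (-1) to be filled in a later pass."""
    h, w = lr_shape
    out = np.full((h * block, w * block), -1.0)
    for (y, x), (ky, kx) in corr.items():
        out[y*block:(y+1)*block, x*block:(x+1)*block] = \
            keyframe_hr[ky*block:(ky+1)*block, kx*block:(kx+1)*block]
    return out
```

In the paper's setting the correspondences would come from matching the current low-res frame against the previous and next keyframes; here the dictionary is simply given.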


Video Watermarking Algorithm for H.264 Scalable Video Coding

  • Lu, Jianfeng;Li, Li;Yang, Zhenhua
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.1
    • /
    • pp.56-67
    • /
    • 2013
  • Because H.264/SVC can meet the needs of different networks and user terminals, it has become increasingly popular. In this paper, we focus on the spatial resolution scalability of H.264/SVC and propose a blind video watermarking algorithm for the copyright protection of H.264/SVC-coded video. The watermark is embedded before H.264/SVC encoding, and only the original enhancement layer sequence is watermarked. However, because the watermark is embedded into the average matrix of each macroblock, it can be detected in both the enhancement layer and the base layer after downsampling, video encoding, and video decoding. The proposed algorithm is examined using JSVM, and experimental results show that it is robust to H.264/SVC coding and has little influence on video quality.
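Why a watermark carried by block averages survives the base layer can be illustrated with a quantization-index-modulation (QIM) sketch: the block mean is nudged to an even or odd quantizer cell, and 2x2 average downsampling preserves the mean exactly. This is not the paper's exact algorithm; the QIM step size, the block size, and the mean-shifting embed rule are assumptions.

```python
import numpy as np

def embed_bit(block, bit, step=8.0):
    """Shift the whole block so its mean lands on an even (bit 0) or
    odd (bit 1) multiple of step/2 -- QIM on the block average."""
    mean = block.mean()
    target = np.round(mean / step) * step + (step / 2.0 if bit else 0.0)
    return block + (target - mean)

def detect_bit(block, step=8.0):
    """Recover the bit from the parity of the quantized mean."""
    return int(np.round(block.mean() / (step / 2.0))) % 2

def downsample2(img):
    """2x2 average downsampling, as in spatial base-layer generation;
    it preserves the overall mean, so the bit remains detectable."""
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))
```

Lossy encoding perturbs the mean only slightly relative to the step size, which is the intuition behind robustness to H.264/SVC coding.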

Development of a CMOS Sensor-Based Portable Video Scope and Its Image Processing Application (CMOS 센서를 이용한 휴대용 비디오스코프 및 영상처리 응용환경 개발)

  • 김상진;김기만;강진영;김영욱;백준기
    • Proceedings of the IEEK Conference
    • /
    • 2003.11a
    • /
    • pp.517-520
    • /
    • 2003
  • Commercial video scopes use a CCD sensor and a frame grabber for image capture and A/D interfacing, but their applications are limited by input resolution and high cost. In this paper we introduce a portable video scope that uses a CMOS sensor, a USB pen camera, and a tuner card (a low-cost frame grabber) in place of the commercial CCD sensor and frame grabber. Our video scope serves as an essential link between advancing commercial technology and research, providing cost-effective solutions for educational, engineering, and medical applications across an entire spectrum of needs. The software was first implemented with VFW (Video for Windows), which gave a very low frame rate, and then reimplemented with DirectShow in the second version. Our video scope runs on Windows 98, ME, XP, and 2000. Its drawback is a crossover problem in the output images caused by interpolation, which must be rectified for more efficient performance.


Transforming Text into Video: A Proposed Methodology for Video Production Using the VQGAN-CLIP Image Generative AI Model

  • SukChang Lee
    • International Journal of Advanced Culture Technology
    • /
    • v.11 no.3
    • /
    • pp.225-230
    • /
    • 2023
  • With the development of AI technology, there is growing discussion of text-to-image generative AI. We present a generative AI video production method and delineate a methodology for producing personalized AI-generated videos, with the objective of broadening the landscape of the video domain. We examine the procedural steps involved in AI-driven video production and directly implement a video creation approach using the VQGAN-CLIP model. The outcomes produced by the VQGAN-CLIP model exhibit relatively moderate resolution and frame rates and predominantly manifest as abstract images. These characteristics indicate potential applicability to OTT-based video content or the visual arts. We anticipate that AI-driven video production techniques will see increased use in forthcoming work.

Exploring Image Processing and Image Restoration Techniques

  • Omarov, Batyrkhan Sultanovich;Altayeva, Aigerim Bakatkaliyevna;Cho, Young Im
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.15 no.3
    • /
    • pp.172-179
    • /
    • 2015
  • Because of the development of computers and high-technology applications, all the devices we use have become more intelligent. In recent years, security and surveillance systems have become more complicated as well. Before new technologies brought video surveillance systems, security cameras were used only for recording events as they occurred, and a human had to analyze the recorded data. Nowadays, computers are used for video analytics, and video surveillance systems have become more autonomous and automated. The types of security cameras have also changed, and the market offers different kinds of cameras with integrated software. Even though there is a variety of hardware, its capabilities leave a lot to be desired, so software solutions try to compensate for this drawback. Image processing is a very important part of video surveillance and security systems. Capturing an image exactly as it appears in the real world is difficult, if not impossible: there is always noise to deal with, caused by the graininess of the emulsion, the low resolution of the camera sensors, motion blur from movement and drag, focus problems, depth-of-field issues, or the imperfect nature of the camera lens. This paper reviews image processing, pattern recognition, and image digitization techniques that are useful in security services for analyzing bio-images, restoring images, and classifying objects.
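As a small concrete example of the restoration techniques this survey covers, a median filter is the classic remedy for impulse (salt-and-pepper) noise of the kind described above. The sketch below is a plain-Python 3x3 version under assumed grayscale input; it is not tied to any specific system in the paper.

```python
def median_filter(img, radius=1):
    """3x3 median filter (radius=1): replace each pixel with the median
    of its neighborhood. Outliers like salt-and-pepper speckles never
    become the median, so they are suppressed while edges survive."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [img[cy][cx]
                      for cy in range(max(0, y - radius), min(h, y + radius + 1))
                      for cx in range(max(0, x - radius), min(w, x + radius + 1))]
            window.sort()
            out[y][x] = window[len(window) // 2]  # median of the window
    return out
```

Unlike a mean filter, the median is robust to a single extreme outlier, which is why it works so well on sensor speckle noise.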

Video Transcoding Scheme for N-Screen Service Based on Cloud Computing (클라우드 컴퓨팅에서 N-스크린 서비스를 위한 동영상 트랜스 코딩 기법)

  • Lim, Heon-Yong;Lee, Won-Joo;Jeon, Chang-Ho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.19 no.9
    • /
    • pp.11-19
    • /
    • 2014
  • In this paper, we propose a real-time video transcoding scheme for N-screen service based on cloud computing. The scheme splits the original video into an intro block and several playback blocks. On the first service request, the intro block is transmitted immediately, and the playback blocks are then transcoded and transmitted in real time. To complete the transcoding of each block within its playback time, we split the blocks and allocate them to nodes according to each node's performance. Previous schemes provide real-time playback by converting the original video into every format and resolution in advance; in contrast, the proposed scheme reduces storage usage by converting the original video only into the format and resolution suited to the client's device and platform. Through simulation, we show that the proposed scheme is more effective for real-time video playback in N-screen service than the previous method, and that it uses less storage.
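The performance-proportional allocation of a playback block across nodes can be sketched as follows. The speed metric (frames per second per node) and the rounding policy are illustrative assumptions, not the paper's exact scheme; the idea is simply that every node should finish its share at roughly the same time.

```python
def allocate_frames(total_frames, node_speeds):
    """Split a playback block's frames across nodes in proportion to each
    node's transcoding speed, so all nodes finish near-simultaneously."""
    total_speed = sum(node_speeds)
    shares = [int(total_frames * s / total_speed) for s in node_speeds]
    # hand the frames lost to rounding-down to the fastest nodes
    leftover = total_frames - sum(shares)
    fastest_first = sorted(range(len(node_speeds)), key=lambda i: -node_speeds[i])
    for i in fastest_first[:leftover]:
        shares[i] += 1
    return shares
```

With such shares, each node's time is roughly `share / speed`, the same for every node, which is the condition for finishing the block within its playback time.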

A network-adaptive SVC Streaming Architecture

  • Chen, Peng;Lim, Jeong-Yeon;Lee, Bum-Shik;Kim, Mun-Churl;Hahm, Sang-Jin;Kim, Byung-Sun;Lee, Keun-Sik;Park, Keun-Soo
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2006.11a
    • /
    • pp.257-260
    • /
    • 2006
  • In a video streaming environment, we must consider terminal and network characteristics such as display resolution, frame rate, computational resources, and network bandwidth. The JVT (Joint Video Team) of ISO/IEC MPEG and ITU-T VCEG is currently standardizing Scalable Video Coding (SVC), which can represent video bitstreams in different scalable layers for flexible adaptation to terminal and network characteristics. This property is very useful in video streaming applications: from one fully scalable video, a bitstream with a specific target spatial resolution, temporal frame rate, and quality level can be extracted to match the requirements of terminals and networks. Moreover, the extraction process is fast and consumes little computational resource, so partial bitstreams can be extracted online to accommodate changing network conditions. Exploiting these advantages of SVC, we design and implement a network-adaptive SVC streaming system with an SVC extractor and a streamer that extract the appropriate amount of bitstream to meet the required target bitrates and spatial resolutions. The proposed SVC extraction allows flexible online switching from layer to layer in the SVC bitstream to cope with changes in network bandwidth; the extraction is performed at every GOP unit. We present the implementation of our SVC streaming system with experimental results.
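The per-GOP layer-switching decision described above can be sketched as a simple selection: pick the highest layer whose cumulative bitrate fits the currently measured bandwidth, falling back to the base layer. The bitrate figures in the example are hypothetical, and real extractors would also hold hysteresis state to avoid oscillating between layers.

```python
def select_layer(layers, bandwidth_kbps):
    """Return the index of the highest layer whose cumulative bitrate
    fits the measured bandwidth; the base layer (index 0) is the floor."""
    best = 0  # base layer is always transmitted
    for i, bitrate in enumerate(layers):
        if bitrate <= bandwidth_kbps:
            best = i
    return best

# hypothetical cumulative bitrates (kbps) for base + two enhancement layers,
# re-evaluated at every GOP boundary as the bandwidth estimate changes
layers = [400, 1200, 3000]
```

Because SVC enhancement layers refine the layers below them, the cumulative bitrate of a layer is the cost of sending it together with everything underneath, which is why the comparison is against cumulative figures.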


Video Quality Improvement Method of Up-sampling Video by Relationship of Intra Prediction Data and DCT Coefficient (화면 내 예측 정보와 DCT 계수의 관계에 의한 상향 표본화 영상의 화질 개선 방법)

  • Lee, Yoon-Soo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.7
    • /
    • pp.59-65
    • /
    • 2011
  • The Korean DMB service has become popular and is used by many people, but the latest display devices support higher resolutions than DMB content provides, so a variety of video resampling technologies have been used. In general, subjective video quality is determined by the object recognition rate in the video, and it increases as the edges between objects become clearer. An edge is the boundary between an object and the background, or between overlapping objects. The prediction directions of the intra prediction used in H.264/AVC agree with the edge information up to 80% of the time. In this study, we propose an effective up-sampling method that uses edge information extracted from the relationship between the intra prediction data and the DCT coefficient data of H.264-encoded video.
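The link between intra prediction directions and edge orientations can be sketched with the standard H.264/AVC 4x4 intra modes. The directional interpolation rule below is an illustrative simplification of edge-directed up-sampling, not the paper's exact method: a new pixel is interpolated along the edge implied by the block's intra mode, never across it, so edges stay sharp.

```python
# H.264/AVC 4x4 intra prediction modes and the edge orientations they
# imply (degrees). Mode 2 (DC) carries no direction, so those blocks
# fall back to plain non-directional interpolation.
MODE_ANGLE = {
    0: 90.0,    # vertical
    1: 0.0,     # horizontal
    3: 135.0,   # diagonal down-left
    4: 45.0,    # diagonal down-right
    5: 67.5,    # vertical-right
    6: 22.5,    # horizontal-down
    7: 112.5,   # vertical-left
    8: 157.5,   # horizontal-up
}

def edge_direction(mode):
    """Edge angle implied by an intra mode, or None for DC ('no edge')."""
    return MODE_ANGLE.get(mode)

def interpolate_pixel(above, below, left, right, mode):
    """Interpolate an up-sampled pixel along the block's edge direction,
    avoiding averaging across the edge."""
    angle = edge_direction(mode)
    if angle is None:
        return (above + below + left + right) / 4.0
    if abs(angle - 90.0) <= 22.5:        # near-vertical edge
        return (above + below) / 2.0
    if angle <= 22.5 or angle >= 157.5:  # near-horizontal edge
        return (left + right) / 2.0
    return (above + below + left + right) / 4.0
```

In the paper's method the direction would additionally be checked against the DCT coefficient distribution before being trusted; here the mode alone decides.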