• Title/Summary/Keyword: video to images

An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon;Yun, Chang Ho;Park, Jong Won;Lee, Yong Woo
    • Journal of Internet Computing and Services / v.15 no.3 / pp.45-52 / 2014
  • The Ubiquitous-City (U-City) is a smart, intelligent city designed to satisfy people's desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Things or Internet of Everything (IoT or IoE), and it includes a large number of networked video cameras. Together with other sensors, these cameras serve as one of the main input sources for many U-City services, and they continuously generate an enormous amount of video information: genuine big data for the U-City. The U-City usually has to process this big data in real time, which is not easy at all. In many cases, the accumulated video data must also be analyzed to detect an event or to find a particular figure, which requires substantial computational power and usually takes a long time. Research is under way to reduce the processing time of big video data, and cloud computing is a good way to address this problem. Among the many applicable cloud-computing methodologies, MapReduce is an attractive one: it has many advantages and is gaining popularity in many areas. As video cameras evolve and their resolution improves sharply, the amount of data produced by networked cameras grows exponentially, so the video image data produced by high-quality cameras is real big data. Video surveillance systems have become practical at this scale largely thanks to cloud computing and are now spreading widely in U-Cities. However, because video data is unstructured, good research results on analyzing it with MapReduce are hard to find. This paper presents an analysis system for video surveillance, a cloud-computing-based video data management system that is easy to deploy, flexible, and reliable. It consists of a video manager, video monitors, storage for the video images, a storage client, and a streaming-in component. The video monitor consists of a video translator and a protocol manager, and the storage contains a MapReduce analyzer. All components were designed according to the functional requirements of a video surveillance system. The streaming-in component receives video data from the networked cameras, delivers it to the storage client, and manages network bottlenecks to smooth the data stream. The storage client receives the video data from the streaming-in component, stores it in the storage, and helps other components access the storage. The video monitor component streams the video data smoothly and manages the protocols: the video translator sub-component lets users manage the resolution, codec, and frame rate of the video images, and the protocol sub-component handles the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the cloud-computing storage: Hadoop stores the data in HDFS and provides a platform that can process it with the simple MapReduce programming model. We propose our own methodology for analyzing the video images with MapReduce, presenting and explaining the video-analysis workflow in detail. The performance evaluation showed that the proposed system works well, and the results are presented and analyzed in this paper. On our cluster we used compressed 1920×1080 (FHD) H.264 video data with HDFS as the video storage and measured the processing time according to the number of frames per mapper. By tracing the optimal split size of the input data and the processing time as a function of the number of nodes, we found that the system performance scales linearly.
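
As an illustration of the MapReduce-style analysis described above, here is a minimal Hadoop-Streaming-style sketch in Python. It assumes the frames have already been extracted and are listed one path per line on standard input; detect_event is a hypothetical placeholder, not the paper's analyzer.

```python
#!/usr/bin/env python3
# mapper.py -- minimal Hadoop Streaming-style sketch (not the paper's implementation).
# Assumes stdin carries one pre-extracted frame path per line, e.g. "video01/frame_000123.jpg".
import sys

def detect_event(frame_path):
    """Hypothetical placeholder for per-frame analysis (e.g. event or figure detection)."""
    return 1 if frame_path.endswith("0.jpg") else 0  # dummy rule, for illustration only

for line in sys.stdin:
    frame_path = line.strip()
    if not frame_path:
        continue
    video_id = frame_path.split("/")[0]              # group frames by source video
    print(f"{video_id}\t{detect_event(frame_path)}")
```

```python
#!/usr/bin/env python3
# reducer.py -- sums the per-frame results emitted by mapper.py for each video.
import sys
from itertools import groupby

def parse(lines):
    for line in lines:
        key, _, value = line.strip().partition("\t")
        yield key, int(value)

for video_id, frames in groupby(parse(sys.stdin), key=lambda kv: kv[0]):
    print(f"{video_id}\t{sum(v for _, v in frames)}")
```

Submitted through Hadoop Streaming, the number of frames handed to each mapper corresponds to the input split size whose effect on processing time the paper measures.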

A Study of Video Synchronization Method for Live 3D Stereoscopic Camera (실시간 3D 영상 카메라의 영상 동기화 방법에 관한 연구)

  • Han, Byung-Wan;Lim, Sung-Jun
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.13 no.6 / pp.263-268 / 2013
  • A stereoscopic image is produced by three-dimensional image processing that combines the images from a left and a right camera, so it is very important to synchronize the input images from the two cameras. This paper proposes a synchronization method for the two camera inputs. A software approach is used so that various video formats can be supported, and the method will also be applied in a glasses-free stereoscopic imaging system that uses several cameras.
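
As a rough illustration of the synchronization problem described above, the following Python sketch pairs frames from two cameras by their capture timestamps; the tolerance value and the queue-based buffering are assumptions, not the paper's method.

```python
# Timestamp-based pairing of two camera streams (illustrative sketch only).
# Frames are assumed to arrive as (timestamp_seconds, frame) tuples in two deques.
from collections import deque

SYNC_TOLERANCE = 1.0 / 60.0  # pair frames closer than half a 30 fps frame interval

def pair_frames(left_queue: deque, right_queue: deque):
    """Yield (left, right) frame pairs whose capture times are close enough to combine."""
    while left_queue and right_queue:
        t_left, f_left = left_queue[0]
        t_right, f_right = right_queue[0]
        if abs(t_left - t_right) <= SYNC_TOLERANCE:
            yield f_left, f_right                    # synchronized pair for stereoscopic combining
            left_queue.popleft(); right_queue.popleft()
        elif t_left < t_right:
            left_queue.popleft()                     # discard the older, unmatched left frame
        else:
            right_queue.popleft()                    # discard the older, unmatched right frame
```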

A motion classification and retrieval system in baseball sports video using Convolutional Neural Network model

  • Park, Jun-Young;Kim, Jae-Seung;Woo, Yong-Tae
    • Journal of the Korea Society of Computer and Information / v.26 no.8 / pp.31-37 / 2021
  • In this paper, we propose a method for effective retrieval that automatically classifies scenes in which specific motions, such as a pitch or a swing, appear in baseball game videos using a CNN (Convolutional Neural Network) model. In addition, we propose a video scene search system that links the classification results for specific motions with the game records. To test the efficiency of the proposed system, we conducted an experiment that classified Korean professional baseball game videos from 2018 and 2019 by scene type. In the experiment on classifying pitching scenes, the accuracy was about 90% per game, and in the scene search experiment that linked the game records by extracting the scoreboard shown in the video, the accuracy was about 80% per game. The results of this study are expected to be useful for systematically analyzing past game videos in Korean professional baseball and for establishing strategies to improve performance.
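
For illustration only, a minimal Keras sketch of a frame classifier for motion classes such as pitch or swing is shown below; the input resolution, layer sizes, and class labels are assumptions and do not reflect the authors' network.

```python
# Minimal CNN frame classifier sketch (illustrative stand-in, not the authors' model).
import tensorflow as tf

NUM_CLASSES = 3  # e.g. pitch, swing, other -- assumed labels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),          # one video frame, resized
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(frame_dataset, epochs=10)  # frame_dataset: labeled frames extracted from game videos
```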

Interleaved Multiple Frame Coding using JPEG2000

  • Takagi, Ayuko;Kiya, Hitoshi
    • Proceedings of the IEEK Conference / 2002.07a / pp.706-709 / 2002
  • This paper describes an effective technique for coding video sequences with the JPEG2000 codec. In the proposed method, multiple frames are combined into one large picture by interleaving their pixel data. The large picture can be coded more efficiently and image quality is improved, because the temporal correlation between frames is converted into spatial correlation that the still-image codec can exploit. We demonstrated the effectiveness of this method by encoding video sequences with JPEG2000.
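
The interleaving idea can be sketched in NumPy as follows: four consecutive frames are woven into one picture of twice the width and height so that temporally correlated samples become spatially adjacent for the still-image codec. This is an illustrative sketch assuming equal-sized frames, not the paper's exact layout.

```python
# 2x2 pixel interleaving of four frames into one large picture, and its inverse.
import numpy as np

def interleave4(f0, f1, f2, f3):
    """Combine four HxW(xC) frames into one 2Hx2W(xC) picture by 2x2 pixel interleaving."""
    h, w = f0.shape[:2]
    big = np.zeros((2 * h, 2 * w) + f0.shape[2:], dtype=f0.dtype)
    big[0::2, 0::2] = f0
    big[0::2, 1::2] = f1
    big[1::2, 0::2] = f2
    big[1::2, 1::2] = f3
    return big                     # encode this picture with a JPEG2000 encoder

def deinterleave4(big):
    """Recover the four original frames from the interleaved picture after decoding."""
    return big[0::2, 0::2], big[0::2, 1::2], big[1::2, 0::2], big[1::2, 1::2]
```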

Video Sequences Registration by using Interested Points Extraction (특징점 추출에 의한 비디오 영상등록)

  • Kim, Seong-Sam;Lee, Hye-Suk;Kim, Eui-Myoung;Yoo, Hwan-Hee
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2007.04a / pp.127-130 / 2007
  • The increased availability of portable, low-cost, high-resolution video devices has led to rapid growth in applications for video sequences. These devices can be mounted on handheld units, mobile units, and airborne platforms such as manned or unmanned helicopters, planes, and airships. A core technique in using video sequences is aligning neighboring video frames to each other or to reference images. For video sequence registration, we extracted interest points from aerial video sequences using the Harris, Förstner, and KLT operators and performed image matching with these points. We then analyzed the matching results for each operator and evaluated the accuracy of the aerial video registration.
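
A minimal OpenCV sketch of this kind of interest-point matching between neighboring frames is given below; it uses the Harris corner measure with the pyramidal KLT tracker, the parameters are assumptions, and the Förstner operator evaluated in the paper is not shown.

```python
# Interest-point extraction and matching between two grayscale frames (illustrative sketch).
import cv2
import numpy as np

def register(frame_a_gray, frame_b_gray):
    # Extract strong corners in the first frame using the Harris measure.
    pts_a = cv2.goodFeaturesToTrack(frame_a_gray, maxCorners=500, qualityLevel=0.01,
                                    minDistance=7, useHarrisDetector=True, k=0.04)
    # Track the corners into the second frame with the pyramidal KLT tracker.
    pts_b, status, _ = cv2.calcOpticalFlowPyrLK(frame_a_gray, frame_b_gray, pts_a, None)
    good_a = pts_a[status.flatten() == 1]
    good_b = pts_b[status.flatten() == 1]
    # Estimate a similarity/affine alignment from the matches; RANSAC rejects outliers.
    transform, inliers = cv2.estimateAffinePartial2D(good_a, good_b, method=cv2.RANSAC)
    return transform
```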

Real-Time Image-Based Relighting for Tangible Video Teleconference (실감화상통신을 위한 실시간 재조명 기술)

  • Ryu, Sae-Woon;Park, Jong-Il
    • Journal of Broadcast Engineering / v.14 no.6 / pp.807-810 / 2009
  • This paper presents a real-time image-based relighting system for tangible video teleconferencing. The proposed system renders the extracted human object using virtual environment images, so it can virtually homogenize the lighting environments of remote users in a teleconference or render the participants as if they were in a virtual place. To realize this, the system obtains 3D object models of the users in real time using a controlled lighting setup: a single color camera and two synchronized directional flash lights. The system generates pure shading images by subtracting the flash-off image from the flash-on image. From each pure shading reflectance map, a directional normal map is generated by multiplying the reflectance map with a basic normal vector map, where each directional basic normal map is computed from the inner product of the incident light vector and the camera viewing vector, and the basic normal vector is a basis component of the real surface normal. The proposed system lets users feel immersed in the video teleconference as if they were in the virtual environment.
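
As a very rough sketch of the flash on/off idea, the NumPy snippet below isolates the flash-only shading by image subtraction and blends two directional shading images to mimic a new lighting direction; this is an illustrative simplification under Lambertian assumptions, not the authors' normal-map pipeline.

```python
# Flash/no-flash subtraction and simple directional blending (illustrative only).
import numpy as np

def pure_shading(flash_on, flash_off):
    """Flash-only shading: the ambient contribution cancels in the difference."""
    return np.clip(flash_on.astype(np.float32) - flash_off.astype(np.float32), 0, None)

def relight(shading_left, shading_right, weight_left, weight_right):
    """Blend the two directional shading images to approximate a new light direction."""
    relit = weight_left * shading_left + weight_right * shading_right
    return np.clip(relit, 0, 255).astype(np.uint8)
```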

A Method for Reconstructing Original Images for Captions Areas in Videos Using Block Matching Algorithm (블록 정합을 이용한 비디오 자막 영역의 원 영상 복원 방법)

  • 전병태;이재연;배영래
    • Journal of Broadcast Engineering / v.5 no.1 / pp.113-122 / 2000
  • It is sometimes necessary to remove captions from previously broadcast video and recover the original images underneath. When only a few images require such recovery, manual processing is possible, but as the number grows it becomes very difficult to do by hand, so an automatic method for recovering the original image in caption areas is needed. Traditional research on image restoration has focused on restoring blurred images to sharp ones using frequency-domain filtering, or on video coding for transmission. This paper proposes a method for automatically recovering the original image using a block matching algorithm (BMA). We extract information on caption regions and scene changes, which serves as prior knowledge for the recovery: the caption detection result tells us the start and end frames of each caption and the character areas within the caption regions. The recovery direction is decided from the scene-change information and the caption-region information (the start and end frames of the captions), and the original image is then recovered by block matching of the character components in the extracted caption region along that direction. Experimental results show that stationary scenes with little camera or object motion are recovered well, and scenes with motion against a complex background are also recovered.
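
A generic sum-of-absolute-differences block matcher, sketched below in NumPy, illustrates the BMA step; in the paper the matching is applied to character components of the detected caption region against frames before or after the caption appears, and the block and search sizes here are assumptions.

```python
# Exhaustive SAD block matching within a small search window (illustrative sketch).
import numpy as np

def match_block(current, reference, top, left, block=16, search=8):
    """Find the reference-frame block most similar to the current-frame block at (top, left)."""
    target = current[top:top + block, left:left + block].astype(np.float32)
    best_cost, best_block = np.inf, None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > reference.shape[0] or x + block > reference.shape[1]:
                continue  # candidate window falls outside the reference frame
            candidate = reference[y:y + block, x:x + block].astype(np.float32)
            cost = np.abs(candidate - target).sum()   # sum of absolute differences
            if cost < best_cost:
                best_cost, best_block = cost, candidate
    return best_block                                 # pixels used to overwrite the caption block
```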

A Study on the Gender Identity in Madonna Costume - Focusing on the Music Video Texts - (마돈나 의상에 나타난 젠더 정체성 - 뮤직비디오 텍스트를 중심으로 -)

  • 김주영;양숙희
    • The Research Journal of the Costume Culture / v.10 no.1 / pp.60-75 / 2002
  • The purpose of this research is to understand the gender identity expressed in Madonna's music video texts and performances. Madonna has reconstructed fluid identities through variations of body, image, costume, and attitude. The results are as follows: ① Her punky sexuality appears in the flash-trash look and kitsch fashion, which reconstruct good/bad taste, modesty/immodesty, and the relation of under/outer wear through bawdy sexuality in her early Virgin tour. ② Her heterosexuality appears in the glamorous look and traditional images of women, which represent the passive femininity of patriarchy. ③ Her sadomasochistic sexuality appears in the bondage look of the dominatrix image, which deconstructs sexual taboos and represents sexual power. ④ Her bisexuality appears in the androgynous, third-sex look that mixes masculine and feminine signifiers, deconstructing stereotyped gender roles. ⑤ Her homosexuality appears in fetish fashion through drag and lesbian imagery, which deconstructs the dichotomy of normality/perversion and opens a possibility for women's subjectivity of sexual desire.

A study on Web-based Video Panoramic Virtual Reality for Hose Cyber Shell Museum (비디오 파노라마 가상현실을 기반으로 하는 호서 사이버 패류 박물관의 연구)

  • Hong, Sung-Soo;khan, Irfan;Kim, Chang-ki
    • Proceedings of the Korea Information Processing Society Conference / 2012.11a / pp.1468-1471 / 2012
  • Recreating the experience of a particular place has always been a dream, and panoramic virtual reality is a technology for creating virtual environments in which the viewer can change the viewing angle and choose the path of view through a dynamic scene. In this paper we examine efficient algorithms for registering and stitching images captured from a video stream. Two approaches are studied. In the first, dynamic programming is used to locate suitable key points and match them so that adjacent images can be merged, and image blending is then applied for smooth color transitions. In the second, FAST and SURF detection are used to find distinctive features in the images, a nearest-neighbor algorithm matches the corresponding features, and a homography is estimated from the matched key points using RANSAC. The paper also covers how images are automatically selected (recognized and compared) for stitching.
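
The second approach can be sketched with OpenCV as below; ORB is used here as a freely available stand-in for the FAST/SURF detectors named in the abstract, and the match count and RANSAC threshold are assumptions.

```python
# Feature matching and RANSAC homography estimation between two overlapping images.
import cv2
import numpy as np

def estimate_homography(img_a, img_b):
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)       # nearest-neighbour matching
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:200]
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)          # RANSAC rejects mismatches
    return H                      # pass to cv2.warpPerspective to stitch img_a onto img_b
```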

Implementation of Real-Time Video Transfer System on Android Environment (안드로이드 기반의 실시간 영상전송시스템의 구현)

  • Lee, Kang-Hun;Kim, Dong-Il;Kim, Dae-Ho;Sung, Myung-Yoon;Lee, Young-Kil;Jung, Suk-Yong
    • Journal of the Korea Convergence Society / v.3 no.1 / pp.1-5 / 2012
  • In this paper, we developed a real-time video transfer system for the Android environment. An Android device with an embedded camera captures images and sends the image frames to a video server, and the video server relays the images from the client to a peer client, which is also implemented on Android. The system can send 16 image frames per second without any loss in a 3G mobile network environment.
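
The relay idea (a client sends frames to a server, which forwards them to a peer) can be sketched with plain Python sockets as below; the paper's client runs on Android, so this is only a conceptual illustration, with the length-prefixed framing, host, and port assumed.

```python
# Length-prefixed frame streaming over TCP (conceptual sketch, not the Android implementation).
import socket
import struct

def send_frames(jpeg_frames, host="127.0.0.1", port=5000):
    """Send each JPEG-encoded frame preceded by a 4-byte big-endian length header."""
    with socket.create_connection((host, port)) as sock:
        for frame in jpeg_frames:                 # frame: bytes of one JPEG image
            sock.sendall(struct.pack(">I", len(frame)) + frame)

def _recv_exact(conn, n):
    """Read exactly n bytes, or return None if the connection closes early."""
    data = b""
    while len(data) < n:
        chunk = conn.recv(n - len(data))
        if not chunk:
            return None
        data += chunk
    return data

def recv_frames(conn):
    """Yield complete frames from a connected socket by reading the length header first."""
    while True:
        header = _recv_exact(conn, 4)
        if header is None:
            return
        (length,) = struct.unpack(">I", header)
        frame = _recv_exact(conn, length)
        if frame is None:
            return
        yield frame
```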