• Title/Summary/Keyword: Video Images


Image Mood Classification Using Deep CNN and Its Application to Automatic Video Generation (심층 CNN을 활용한 영상 분위기 분류 및 이를 활용한 동영상 자동 생성)

  • Cho, Dong-Hee; Nam, Yong-Wook; Lee, Hyun-Chang; Kim, Yong-Hyuk
    • Journal of the Korea Convergence Society / v.10 no.9 / pp.23-29 / 2019
  • In this paper, the mood of images was classified into eight categories by a deep convolutional neural network, and a video was automatically generated with appropriate background music. Based on the collected image data, a classification model was trained using a multilayer perceptron (MLP). Using the MLP, the mood of each image to be used for video generation is predicted by multi-class classification, and pre-classified music is matched to it to generate the video. 10-fold cross-validation yielded 72.4% accuracy, and experiments on actual images yielded 64% confusion-matrix accuracy. In cases of misclassification, the image was assigned a similar mood, so it was confirmed that the music in the generated video showed no great mismatch with the images.
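As a minimal sketch of the generation step described above — predicting a mood by multi-class classification and matching pre-classified music — the following assumes hypothetical mood labels and file names (the paper does not list its eight categories here):

```python
import math

MOODS = ["calm", "happy", "sad", "tense", "romantic", "dark", "bright", "grand"]
# Hypothetical mapping of pre-classified background music per mood.
MUSIC = {m: f"{m}_theme.mp3" for m in MOODS}

def softmax(scores):
    """Turn raw classifier outputs into class probabilities."""
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def pick_music(scores):
    """Predict the image mood (multi-class) and match pre-classified music."""
    probs = softmax(scores)
    mood = MOODS[probs.index(max(probs))]
    return mood, MUSIC[mood]

# Illustrative per-class scores for one image (not from the paper):
mood, track = pick_music([0.1, 2.3, -0.5, 0.0, 0.2, -1.0, 1.1, 0.4])
```

Misclassification into a neighbouring mood would still select plausible music, which matches the paper's observation about similar-mood errors.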

Application of Mexican Hat Function to Wave Profile Detection (파형 분석을 위한 멕시코 모자 함수 응용)

  • 이희성; 권순홍; 이태일
    • Journal of Ocean Engineering and Technology / v.16 no.6 / pp.32-36 / 2002
  • This paper presents the results of wave profile detection from video images using the Mexican hat function. The Mexican hat function has been used extensively in the field of signal processing to detect discontinuities in images. The analysis was performed on a numerical image and on video images of waves taken in a small wave flume. The results show that the Mexican hat function is an excellent tool for wave profile detection.
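The Mexican hat (Ricker) wavelet the paper relies on is easy to reproduce; the sketch below correlates it with a synthetic step profile, where the sign change of the response marks the discontinuity. The profile values, sigma, and window width are illustrative assumptions, not the paper's settings:

```python
import math

def mexican_hat(t, sigma=1.0):
    """Mexican hat (Ricker) wavelet: the negative, normalized second
    derivative of a Gaussian."""
    a = 2.0 / (math.sqrt(3.0 * sigma) * math.pi ** 0.25)
    x = (t / sigma) ** 2
    return a * (1.0 - x) * math.exp(-x / 2.0)

def wavelet_response(signal, sigma=2.0, width=8):
    """Correlate a 1-D intensity profile (e.g. one image column crossing
    the water surface) with the wavelet; the response changes sign at a
    discontinuity, so its zero crossing locates the wave profile."""
    kernel = [mexican_hat(k - width, sigma) for k in range(2 * width + 1)]
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for k, wgt in enumerate(kernel):
            j = min(max(i + k - width, 0), n - 1)  # replicate-pad borders
            acc += wgt * signal[j]
        out.append(acc)
    return out

# Step profile: dark water pixels then bright air pixels along a column.
profile = [0.0] * 20 + [1.0] * 20
resp = wavelet_response(profile)
```

Flat regions give a near-zero response because the wavelet has zero mean, while the step between indices 19 and 20 produces a sign change in the response.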

A Study of Video Synchronization Method for Live 3D Stereoscopic Camera (실시간 3D 영상 카메라의 영상 동기화 방법에 관한 연구)

  • Han, Byung-Wan; Lim, Sung-Jun
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.13 no.6 / pp.263-268 / 2013
  • A stereoscopic image is made by combining the two images from the left and right cameras through 3-dimensional image processing. In this process, it is very important to synchronize the input images from the two cameras. This paper proposes a synchronization method for the two camera inputs. A software system is used to support various video formats, and the method can also be used in systems for glassless stereoscopic images that employ several cameras.
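The paper's algorithm is not reproduced in the abstract; one common software approach to the synchronization problem it describes is to pair frames by capture timestamp, as in this hypothetical sketch (timestamps in milliseconds are made-up values):

```python
def synchronize(left, right, tol_ms=10):
    """Pair left/right frames whose capture timestamps differ by at most
    tol_ms; unmatched frames are dropped so the stereo stream stays
    aligned. Timestamps are assumed monotonically increasing."""
    pairs = []
    i = j = 0
    while i < len(left) and j < len(right):
        dt = left[i] - right[j]
        if abs(dt) <= tol_ms:
            pairs.append((left[i], right[j]))
            i += 1
            j += 1
        elif dt < 0:
            i += 1  # left frame has no right partner in tolerance: drop it
        else:
            j += 1  # right frame too early: drop it
    return pairs

# Left camera at ~30 fps; right camera started 3 ms later and dropped a frame:
left_ts = [0, 33, 66, 99, 132]
right_ts = [3, 36, 102, 135]
pairs = synchronize(left_ts, right_ts)
```

Dropping the unmatched left frame at 66 ms keeps the remaining stereo pairs within tolerance, which is the essential behaviour any such synchronizer must provide.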

Land Cover Classification and Accuracy Assessment Using Aerial Videography and Landsat-TM Satellite Image -A Case Study of Taean Seashore National Park- (항공비디오와 Landsat-TM 자료를 이용한 지피의 분류와 평가 - 태안 해안국립공원을 사례로 -)

  • 서동조; 박종화; 조용현
    • Journal of the Korean Institute of Landscape Architecture / v.27 no.4 / pp.131-136 / 1999
  • Aerial videography techniques have been used to inventory conditions associated with grassland, forests, and agricultural crop production. Most recently, aerial videography has been used to verify satellite image classifications as part of the natural ecosystem survey. The objectives of this study were: (1) to use aerial video images of the study area, one part of Taean Seashore National Park, for the accuracy assessment, and (2) to determine the suitability of aerial videography as an accuracy assessment tool for land cover classification with Landsat-TM data. Video images were collected twice, in the summer and winter seasons, and divided into two kinds of images: wide-angle and narrow-angle images. The accuracy assessment methods include the calculation of the error matrix, the overall accuracy, and the kappa coefficient of agreement. This study indicates that aerial videography is an effective tool for accuracy assessment of satellite image classifications whose features are relatively large and continuous, and that it could make it possible to overcome the limits of the present natural ecosystem survey method.
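The three accuracy measures named above are standard and can be sketched directly; the land-cover labels and sample counts below are made-up illustration data, not the study's results:

```python
def error_matrix(reference, classified, classes):
    """Rows: reference (video-interpreted) class; columns: mapped class."""
    idx = {c: k for k, c in enumerate(classes)}
    n = len(classes)
    m = [[0] * n for _ in range(n)]
    for r, c in zip(reference, classified):
        m[idx[r]][idx[c]] += 1
    return m

def overall_accuracy(m):
    """Fraction of samples on the matrix diagonal (correctly mapped)."""
    total = sum(sum(row) for row in m)
    return sum(m[k][k] for k in range(len(m))) / total

def kappa(m):
    """Kappa coefficient: agreement beyond what class proportions expect."""
    n = len(m)
    total = sum(sum(row) for row in m)
    po = sum(m[k][k] for k in range(n)) / total
    pe = sum(sum(m[k]) * sum(m[i][k] for i in range(n)) for k in range(n)) / total ** 2
    return (po - pe) / (1 - pe)

# Hypothetical reference labels (from video) vs. Landsat-TM classification:
ref = ["forest", "forest", "grass", "water", "water", "grass", "forest", "water"]
cls = ["forest", "grass",  "grass", "water", "water", "grass", "forest", "forest"]
m = error_matrix(ref, cls, ["forest", "grass", "water"])
acc = overall_accuracy(m)
kap = kappa(m)
```

Kappa is lower than overall accuracy because it discounts the agreement expected by chance from the class proportions alone.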


Non-Iterative Threshold based Recovery Algorithm (NITRA) for Compressively Sensed Images and Videos

  • Poovathy, J. Florence Gnana; Radha, S.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.10 / pp.4160-4176 / 2015
  • Data compression, such as image and video compression, has come a long way since the introduction of Compressive Sensing (CS), which compresses sparse signals such as images and videos to very few samples, i.e. M < N measurements. At the receiver end, a robust and efficient recovery algorithm estimates the original image or video. Many prominent algorithms solve a least squares problem (LSP) iteratively in order to reconstruct the signal, hence consuming more processing time. In this paper, a non-iterative threshold based recovery algorithm (NITRA) is proposed for the recovery of images and videos without solving the LSP, achieving reduced complexity and better reconstruction quality. The elapsed time for images and videos using NITRA is in the µs range, about 100 times less than that of other existing algorithms. The peak signal-to-noise ratio (PSNR) is above 30 dB, and the structural similarity (SSIM) and structural content (SC) are about 99%.
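As a reference for the quality figure quoted above, PSNR between an original image and its reconstruction can be computed as follows; the pixel values are illustrative, not from the paper:

```python
import math

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between an original image and its
    reconstruction (flattened 8-bit pixel lists); values above ~30 dB are
    generally considered good reconstruction quality."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

# Hypothetical pixel values before and after CS recovery:
orig = [52, 55, 61, 66, 70, 61, 64, 73]
recon = [54, 55, 60, 66, 71, 60, 64, 72]
value = psnr(orig, recon)
```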

A Method for Reconstructing Original Images for Captions Areas in Videos Using Block Matching Algorithm (블록 정합을 이용한 비디오 자막 영역의 원 영상 복원 방법)

  • 전병태; 이재연; 배영래
    • Journal of Broadcast Engineering / v.5 no.1 / pp.113-122 / 2000
  • It is sometimes necessary to remove the captions and recover the original images from video images that have already been broadcast. When the number of images requiring such recovery is small, manual processing is possible, but as the number grows it becomes very difficult to do manually. Therefore, a method for recovering the original image in caption areas is needed. Traditional research on image restoration has focused on restoring blurred images to sharp images using frequency filtering, or on video coding for transferring video images. This paper proposes a method for automatically recovering the original image using a block matching algorithm (BMA). We extract information on caption regions and scene changes, which is used as prior knowledge for the recovery. From the result of caption detection, we know the start and end frames of captions in the video and the character areas within the caption regions. The direction of recovery is decided using the information on scene changes and caption regions (the start and end frames of captions). Following that direction, we recover the original image by performing block matching for the character components in the extracted caption regions. Experimental results show that stationary images with little camera or object motion are recovered well, and that images with motion in a complex background are also recovered.
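The core of any block matching algorithm like the one used here is a search for the displacement that minimizes a block difference measure (SAD below); this is a generic sketch on tiny synthetic frames, not the authors' implementation:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def block(img, y, x, size):
    return [row[x:x + size] for row in img[y:y + size]]

def best_match(ref_img, cur_img, y, x, size=2, search=2):
    """Find the displacement in cur_img whose block best matches the
    reference block at (y, x). In BMA-based recovery, a caption-covered
    block is replaced with its best match from a neighbouring frame."""
    target = block(ref_img, y, x, size)
    best = (None, float("inf"))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy <= len(cur_img) - size and 0 <= xx <= len(cur_img[0]) - size:
                cost = sad(target, block(cur_img, yy, xx, size))
                if cost < best[1]:
                    best = ((dy, dx), cost)
    return best

# Two 6x6 frames: the same 2x2 patch appears one pixel to the right
# in the current frame (a hypothetical horizontal motion of (0, 1)).
ref = [[0] * 6 for _ in range(6)]
cur = [[0] * 6 for _ in range(6)]
patch = [[9, 8], [7, 6]]
for r in range(2):
    for c in range(2):
        ref[2 + r][2 + c] = patch[r][c]
        cur[2 + r][3 + c] = patch[r][c]
mv, cost = best_match(ref, cur, 2, 2)
```

A zero SAD at displacement (0, 1) means the block content survived intact in the neighbouring frame and can be copied back over the caption area.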


Interleaved Multiple Frame Coding using JPEG2000

  • Takagi, Ayuko; Kiya, Hitoshi
    • Proceedings of the IEEK Conference / 2002.07a / pp.706-709 / 2002
  • This paper describes an effective technique for coding video sequences based on the JPEG2000 codec. In the proposed method, multiple frames are combined into one large picture by interleaving their pixel data. The large picture enables the images to be coded more efficiently, and image quality is improved. A video sequence is coded efficiently by converting the temporal correlation of the sequence into spatial correlation. We demonstrated the effectiveness of this method by encoding video sequences using JPEG2000.
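The interleaving idea can be sketched as follows for four frames; the 2x2 pixel arrangement is an assumption for illustration, since the abstract does not specify the exact interleaving pattern:

```python
def interleave(frames):
    """Combine four same-size frames into one double-size picture by pixel
    interleaving: frame k's pixel (y, x) goes to (2y + k//2, 2x + k%2).
    Temporally adjacent (hence similar) pixels become spatial neighbours,
    so an intra-frame coder such as JPEG2000 can exploit the correlation."""
    h, w = len(frames[0]), len(frames[0][0])
    big = [[0] * (2 * w) for _ in range(2 * h)]
    for k, frame in enumerate(frames):
        oy, ox = divmod(k, 2)
        for y in range(h):
            for x in range(w):
                big[2 * y + oy][2 * x + ox] = frame[y][x]
    return big

def deinterleave(big):
    """Inverse operation at the decoder: split the large picture back
    into the four original frames."""
    h, w = len(big) // 2, len(big[0]) // 2
    frames = [[[0] * w for _ in range(h)] for _ in range(4)]
    for k in range(4):
        oy, ox = divmod(k, 2)
        for y in range(h):
            for x in range(w):
                frames[k][y][x] = big[2 * y + oy][2 * x + ox]
    return frames

# Four tiny 2x2 frames with distinguishable pixel values:
frames = [[[k * 10 + y * 2 + x for x in range(2)] for y in range(2)]
          for k in range(4)]
big = interleave(frames)
```

The round trip is lossless, so all compression gains come from how well the coder exploits the newly spatial correlation.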


Digital Watermarking Technique of Compressed Multi-view Video with Layered Depth Image (계층적 깊이 영상으로 압축된 다시점 비디오에 대한 디지털 워터마크 기술)

  • Lim, Joong-Hee; Shin, Jong-Hong; Jee, Inn-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.9 no.1 / pp.1-9 / 2009
  • In this paper, a digital image watermarking technique using the lifting wavelet transform is proposed. This watermarking technique can easily be extended to video content. We therefore apply it to the layered depth image structure, which is an efficient compression method for multi-view video with depth images. The application steps are very simple, because the watermark is inserted only into the reference image, and the watermarks of the other view images are taken from the reference image. Authentication and copyright can thus be guaranteed for each view image of the multi-view video.
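A one-level integer lifting Haar transform with LSB embedding in the approximation band gives the flavor of inserting a watermark only into the reference image; this is an illustrative stand-in, not the authors' exact scheme, and the pixel row and watermark bits are made up:

```python
def haar_lift_forward(x):
    """One level of the integer lifting Haar transform:
    predict step  d[i] = odd[i] - even[i]            (detail)
    update step   s[i] = even[i] + d[i] // 2         (approximation)"""
    even, odd = x[::2], x[1::2]
    d = [o - e for o, e in zip(odd, even)]
    s = [e + di // 2 for e, di in zip(even, d)]
    return s, d

def haar_lift_inverse(s, d):
    """Undo the lifting steps in reverse order (perfect reconstruction)."""
    even = [si - di // 2 for si, di in zip(s, d)]
    odd = [di + e for di, e in zip(d, even)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out

def embed_bits(s, bits):
    """Embed watermark bits in the LSBs of the approximation band of the
    reference image; since the other views borrow their watermark from
    the reference, one insertion covers every view."""
    return [(v & ~1) | b for v, b in zip(s, bits)] + s[len(bits):]

row = [52, 54, 61, 59, 70, 68, 64, 66]  # one hypothetical reference-image row
s, d = haar_lift_forward(row)
marked = embed_bits(s, [1, 0, 1, 1])
extracted = [v & 1 for v in marked[:4]]
```

The integer lifting steps invert exactly, which is why the wavelet domain is a convenient place to hide and later recover the watermark bits.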


Implementation of Real-Time Video Transfer System on Android Environment (안드로이드 기반의 실시간 영상전송시스템의 구현)

  • Lee, Kang-Hun; Kim, Dong-Il; Kim, Dae-Ho; Sung, Myung-Yoon; Lee, Young-Kil; Jung, Suk-Yong
    • Journal of the Korea Convergence Society / v.3 no.1 / pp.1-5 / 2012
  • In this paper, we developed a real-time video transfer system based on the Android environment. After an Android device with an embedded camera captures images, it sends the image frames to a video server system, and the video server transfers the images from the client to a peer client, which is also implemented on Android. We can send 16 image frames per second without any loss in a 3G mobile network environment.
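The capture-server-peer relay described above can be mimicked in-process; the sketch below replaces the 3G socket transport with queues, so the class, names, and frame payloads are all placeholders for the real networked components:

```python
from queue import Queue

class RelayServer:
    """Minimal sketch of the video server's relay role: frames arriving
    from the capture client are forwarded unchanged to the peer client."""

    def __init__(self):
        self.peers = {}

    def register(self, name):
        """A peer client registers and receives its inbound frame queue."""
        self.peers[name] = Queue()
        return self.peers[name]

    def send(self, to, frame):
        """The capture client pushes a frame destined for a peer."""
        self.peers[to].put(frame)

server = RelayServer()
inbox = server.register("peer")
for n in range(3):
    server.send("peer", f"frame-{n}")  # capture client pushes frames
received = [inbox.get() for _ in range(3)]
```

In the real system, `send` and `get` would be socket writes and reads over the mobile network, with the 16 fps figure setting the per-frame time budget (about 62 ms).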

Video Sequences Registration by using Interested Points Extraction (특징점 추출에 의한 비디오 영상등록)

  • Kim, Seong-Sam; Lee, Hye-Suk; Kim, Eui-Myoung; Yoo, Hwan-Hee
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2007.04a / pp.127-130 / 2007
  • The increased availability of portable, low-cost, high-resolution video devices has resulted in rapid growth in the applications of video sequences. These video devices can be mounted in handheld units, mobile units, and airborne platforms such as manned or unmanned helicopters, planes, airships, etc. A core technique in the use of video sequences is to align neighboring video frames to each other or to reference images. For video sequence registration, we extracted interest points from aerial video sequences using the Harris, Förstner, and KLT operators and performed image matching using these points. As a result, we analyzed the image matching results for each operator and evaluated the accuracy of the aerial video registration.
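Of the three operators compared, the Harris operator is the simplest to sketch; the following minimal pure-Python version (illustrative window size and k, synthetic image — not the study's implementation) computes the corner response R = det(M) - k * trace(M)^2 from the gradient structure tensor M:

```python
def harris(img, k=0.04):
    """Harris corner response for a grayscale image (list of lists).
    Gradients use central differences; the structure tensor is summed
    over a 3x3 window; R = det(M) - k * trace(M)^2."""
    h, w = len(img), len(img[0])
    ix = [[0.0] * w for _ in range(h)]
    iy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2.0
            iy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2.0
    r = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            sxx = syy = sxy = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    gx, gy = ix[y + dy][x + dx], iy[y + dy][x + dx]
                    sxx += gx * gx
                    syy += gy * gy
                    sxy += gx * gy
            r[y][x] = sxx * syy - sxy * sxy - k * (sxx + syy) ** 2
    return r

# Synthetic frame: a bright square whose top-left corner sits at (4, 4).
img = [[1.0 if y >= 4 and x >= 4 else 0.0 for x in range(8)] for y in range(8)]
resp = harris(img)
peak = max((resp[y][x], (y, x)) for y in range(8) for x in range(8))
```

The response is strongly positive only where both gradient directions are present, so the maximum lands on the square's corner; edges give small negative values and flat areas give zero, which is what makes such points stable anchors for frame-to-frame matching.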
