• Title/Abstract/Keyword: Image-to-Video

2,715 search results (processing time: 0.033 seconds)

Video Quality Assessment based on Deep Neural Network

  • Zhiming Shi
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 17, No. 8 / pp.2053-2067 / 2023
  • This paper proposes two video quality assessment methods based on deep neural networks. (i) The first method uses IQF-CNN (a convolutional neural network based on image quality features) to build an image quality assessment model. The model is tested on the LIVE image database, and the experiments show that it is effective, so the method is extended to video quality assessment: every frame of the video is scored first, and then the relationship between frames is modeled with a hysteresis function and different window functions to improve the accuracy of the video quality score. (ii) The second method combines a convolutional neural network (CNN) with a gated recurrent unit (GRU) network. The spatial features of the video frames are extracted with the CNN, the temporal features with the GRU, and the combined spatio-temporal features are fed to a fully connected layer to obtain the video quality score. All the proposed methods are verified on video databases and compared with other methods.
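
The CNN-plus-GRU pipeline in the second method can be illustrated with a minimal PyTorch sketch. This is not the authors' architecture; the layer sizes, the frame tensor shape, and the class name `CnnGruVQA` are illustrative assumptions.

```python
# Minimal sketch of a CNN + GRU video quality model (illustrative only).
import torch
import torch.nn as nn

class CnnGruVQA(nn.Module):
    def __init__(self, feat_dim=128, hidden_dim=64):
        super().__init__()
        # Per-frame spatial feature extractor (stand-in for the paper's CNN).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Temporal model over the sequence of per-frame features.
        self.gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        # Fully connected regressor producing one quality score per video.
        self.fc = nn.Linear(hidden_dim, 1)

    def forward(self, frames):               # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        _, h_n = self.gru(feats)              # h_n: (1, batch, hidden_dim)
        return self.fc(h_n[-1]).squeeze(-1)   # (batch,) quality scores

scores = CnnGruVQA()(torch.rand(2, 16, 3, 112, 112))  # two 16-frame clips
```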

A Fast Image Matching Method for Oblique Video Captured with UAV Platform

  • Byun, Young Gi;Kim, Dae Sung
    • 한국측량학회지 / Vol. 38, No. 2 / pp.165-172 / 2020
  • There is growing interest in vision-based video image matching owing to the constantly developing technology of unmanned systems. The purpose of this paper is to develop a fast and effective matching technique for UAV oblique video images. We first extract initial matching points using the NCC (Normalized Cross-Correlation) algorithm and improve the computational efficiency of NCC using integral images. Furthermore, we develop a triangulation-based outlier removal algorithm to keep only the most robust points among the initial matches. To evaluate the performance of the proposed method, it was quantitatively compared with existing image matching approaches. Experimental results demonstrate that the proposed method processes 2.57 frames per second for video image matching and is up to 4 times faster than existing methods, so it has good potential for video-based applications that require image matching as a pre-processing step.
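
The integral-image speedup of NCC mentioned above rests on the fact that the window sums needed for NCC normalization can be read from a summed-area table in constant time per position. The sketch below illustrates that idea in NumPy/SciPy; the function names and the plain (non-FFT) correlation step are assumptions, not the paper's implementation.

```python
# Sketch: NCC normalization terms computed with an integral image (summed-area table).
import numpy as np
from scipy.signal import correlate2d

def integral_image(img):
    """Summed-area table with a zero row/column prepended."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def window_sums(img, h, w):
    """Sum over every h x w window, O(1) per window from the integral image."""
    ii = integral_image(img.astype(np.float64))
    return ii[h:, w:] - ii[:-h, w:] - ii[h:, :-w] + ii[:-h, :-w]

def ncc_map(image, template):
    """Normalized cross-correlation score for every template placement."""
    h, w = template.shape
    t = template - template.mean()
    cross = correlate2d(image, t, mode="valid")   # NCC numerator
    s1 = window_sums(image, h, w)                 # window sums of I
    s2 = window_sums(image ** 2, h, w)            # window sums of I^2
    win_var = s2 - s1 ** 2 / (h * w)              # windowed sum of squared deviations
    denom = np.sqrt(np.maximum(win_var, 1e-12) * (t ** 2).sum())
    return cross / denom

scores = ncc_map(np.random.rand(120, 160), np.random.rand(16, 16))
print(scores.shape, scores.max())                 # (105, 145), values <= 1
```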

A VIDEO GEOGRAPHIC INFORMATION SYSTEM FOR SUPPORTING BI-DIRECTIONAL SEARCH FOR VIDEO DATA AND GEOGRAPHIC INFORMATION

  • Yoo, Jea-Jun;Joo, In-Hak;Park, Jong-Huyn;Lee, Jong-Hun
    • 대한원격탐사학회 학술대회논문집 / 2002 Proceedings of International Symposium on Remote Sensing / pp.151-156 / 2002
  • Recently, as geographic information systems (GIS), which search and manage geographic information, have come into wider use, there is growing demand for systems that can search and display more actual and realistic information. In response, video geographic information systems, which connect video data captured with cameras to geographic information and display that video as it is, are becoming more popular. However, because most existing video geographic information systems treat video data as an attribute of geographic information, or use simple one-way links from geographic information to video data, they only support displaying video data reached by searching geographic information. In this paper, we design and implement a video geographic information system that connects video data with geographic information and supports bi-directional search: searching geographic information through video data and searching video data through geographic information. To do this, we 1) propose an ER data model to represent the connection information relating video data and geographic information, 2) propose a process to extract and construct that connection information from video data and geographic information, and 3) present a component-based system architecture for the video geographic information system.
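
As a rough illustration of the bi-directional search idea (not the paper's ER model or component architecture), the video-to-geography connections can simply be indexed in both directions; the class and field names below are hypothetical.

```python
# Toy sketch of bi-directional links between video segments and geographic features.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class VideoSegment:
    video_id: str
    start_s: float          # segment start time in seconds
    end_s: float            # segment end time in seconds

class VideoGisIndex:
    """Keeps both directions of the video <-> geographic feature connection."""
    def __init__(self):
        self._by_feature = defaultdict(set)   # feature_id -> video segments
        self._by_video = defaultdict(set)     # video_id   -> feature ids

    def link(self, feature_id: str, segment: VideoSegment):
        self._by_feature[feature_id].add(segment)
        self._by_video[segment.video_id].add(feature_id)

    def segments_for_feature(self, feature_id: str):
        """GIS -> video: which video segments show this feature?"""
        return self._by_feature[feature_id]

    def features_for_video(self, video_id: str):
        """Video -> GIS: which features appear in this video?"""
        return self._by_video[video_id]

idx = VideoGisIndex()
idx.link("road-42", VideoSegment("survey_clip.mpg", 12.0, 18.5))
print(idx.features_for_video("survey_clip.mpg"))
```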


FEASIBILITY ON GENERATING STEREO MOSAIC IMAGE

  • Noh, Myoung-Jong;Lee, Sung-Hun;Cho, Woo-Sug
    • 대한원격탐사학회 학술대회논문집 / 2005 Proceedings of ISRS 2005 / pp.201-204 / 2005
  • Recently, the generation of panoramic images and high-quality mosaic images from video sequences has been attempted in a variety of investigations. Among these, this paper focuses on generating left and right stereo mosaic images from airborne video sequences. The stereo mosaic is created by building left and right mosaics from front and rear slits, which have different viewing angles, across consecutive video frames. The generation process proposed in this paper consists of several steps: camera parameter estimation for each video frame, rectification, slicing, motion parallax elimination, and image mosaicking. It is necessary to check the feasibility of generating a stereo mosaic image with these steps, so we performed a feasibility test using video frame images: an anaglyph of the stereo mosaic images was generated and examined for the feasibility check.
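
The slicing step can be pictured with the toy sketch below: a narrow front slit from each rectified frame feeds the left mosaic and a rear slit feeds the right mosaic, so the two mosaics see the scene under different viewing angles. The slit positions and widths are arbitrary assumptions, and camera parameter estimation, rectification, and motion parallax elimination are omitted.

```python
# Sketch: build left/right mosaics from front and rear vertical slits of rectified frames.
import numpy as np

def stereo_mosaic(frames, slit_offset=40, slit_width=4):
    """frames: iterable of rectified frames (H, W, 3) under roughly constant forward motion.
    The front slit feeds the left mosaic and the rear slit the right mosaic."""
    left_strips, right_strips = [], []
    for frame in frames:
        h, w, _ = frame.shape
        center = w // 2
        # Front slit (ahead of the image center) -> left mosaic.
        left_strips.append(frame[:, center + slit_offset: center + slit_offset + slit_width])
        # Rear slit (behind the image center) -> right mosaic.
        right_strips.append(frame[:, center - slit_offset - slit_width: center - slit_offset])
    return np.hstack(left_strips), np.hstack(right_strips)

frames = (np.random.rand(240, 320, 3) for _ in range(50))
left_mosaic, right_mosaic = stereo_mosaic(frames)
```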


객체 기반 MPEG-4 동영상의 입체 변환 (Stereoscopic Conversion of Object-based MPEG-4 Video)

  • 박상훈;김만배;손현식
    • 대한전자공학회 학술대회논문집 / 2003 Summer Annual Conference Proceedings Ⅳ / pp.2407-2410 / 2003
  • In this paper, we propose a new stereoscopic conversion methodology that converts two-dimensional (2-D) MPEG-4 video to stereoscopic video. In MPEG-4, each image is composed of a background object and a primary object. In the first step of the conversion, the camera motion type is determined for stereo image generation. In the second step, object-based stereo image generation is carried out. The background object uses a current image and a delayed image for its stereo image generation, while the primary object uses a current image and its horizontally shifted version to avoid the vertical parallax that could otherwise occur. Furthermore, URFA (Uncovered Region Filling Algorithm) is applied to the uncovered region that may be created after the stereo image generation of the primary object. In our experiments, we show an MPEG-4 test video and its stereoscopic version produced with the proposed methodology and analyze the results.
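
A minimal sketch of the object-based composition described above, assuming a binary mask of the primary object is available: the background of the right view comes from a delayed frame, while the primary object is pasted in from a horizontally shifted copy of the current frame. The function name, mask, and shift amount are illustrative; the URFA filling step is only noted as a comment.

```python
# Sketch: compose a stereo pair from a frame with a background object and a primary object.
import numpy as np

def make_stereo_pair(current, delayed, object_mask, shift_px=8):
    """current, delayed: (H, W, 3) frames; object_mask: (H, W) bool mask of the primary object."""
    left = current.copy()
    right = delayed.copy()                         # background: current + delayed frame pair
    shifted = np.roll(current, shift_px, axis=1)   # horizontal shift -> no vertical parallax
    shifted_mask = np.roll(object_mask, shift_px, axis=1)
    right[shifted_mask] = shifted[shifted_mask]    # paste the shifted primary object
    # Regions uncovered by the shift would still need filling (the paper's URFA step).
    return left, right

h, w = 144, 176
mask = np.zeros((h, w), dtype=bool)
mask[40:100, 60:110] = True                        # toy primary-object mask
L, R = make_stereo_pair(np.random.rand(h, w, 3), np.random.rand(h, w, 3), mask)
```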


<백-아베 비디오 신디사이저>의 오디오 비주얼아트적 고찰 (A Study on the <Paik-Abe Video Synthesizer> in the Context of Audiovisual Art)

  • 윤지원
    • 한국멀티미디어학회논문지 / Vol. 23, No. 4 / pp.615-624 / 2020
  • By enabling musicians to freely control the elements involved in sound production and tone generation with a variety of timbre, synthesizers have revolutionized and permanently changed music since the 1960s. Paik-Abe Video Synthesizer, a masterpiece of video art maestro Nam June Paik, is a prominent example of re-interpretation of this new musical instrument in the realm of video and audio. This article examines Paik-Abe Video Synthesizer as an innovative instrument to play videos from the perspective of audiovisual art, and establishes its aesthetic value and significance through both artistic and technical analysis. The instrument, which embodied the concept of image sampling and real-time interactive video as an image-based multi-channel music production tool, contributed to establishing a new relationship between sound and image within the realm of audiovisual art. The fact that his video synthesizer not only adds image to sound, but also presents a complete fusion of image and sound as an image instrument with musical characteristics, becomes highly meaningful in this age of synesthesia.

차선 이탈 경고 시스템의 성능 검증을 위한 가상의 오염 차선 이미지 및 비디오 생성 방법 (Virtual Contamination Lane Image and Video Generation Method for the Performance Evaluation of the Lane Departure Warning System)

  • 곽재호;김회율
    • 한국자동차공학회논문집 / Vol. 24, No. 6 / pp.627-634 / 2016
  • In this paper, an augmented video generation method for evaluating the performance of lane departure warning systems is proposed. The input to our system is a video of a road scene with ordinary clean lane markings, and the output video shows the same scene but with the lane markings synthetically contaminated. Two approaches are used to synthesize the contaminated lane image. One is example-based image synthesis, which assumes contamination deposited on the lane marking; the other is background-based image synthesis, which models a lane marking erased by aging. A new contamination pattern generation method using a Gaussian function is also proposed to produce contamination of various shapes and sizes. The contaminated lane video is generated by shifting the synthesized image according to the lane movement obtained empirically. Our experiments show that the similarity between the generated contaminated lane images and real lane images is over 90 %. Furthermore, the reliability of the generated video is verified through an analysis of the change in lane recognition rate: the recognition rate on video generated by the proposed method is very similar to that on real contaminated lane video.
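
The Gaussian contamination pattern can be sketched as a 2-D Gaussian alpha map blended over a clean lane patch. The parameter values and the dirt color below are arbitrary assumptions, not the paper's settings.

```python
# Sketch: a Gaussian contamination blob alpha-blended onto a lane-marking patch.
import numpy as np

def gaussian_blob(h, w, sigma_x, sigma_y, amplitude=1.0):
    """2-D Gaussian used as an alpha map for the contamination pattern."""
    y, x = np.mgrid[0:h, 0:w]
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    return amplitude * np.exp(-(((x - cx) ** 2) / (2 * sigma_x ** 2) +
                                ((y - cy) ** 2) / (2 * sigma_y ** 2)))

def contaminate(lane_patch, dirt_color=(60, 55, 50), sigma_x=12.0, sigma_y=6.0):
    """Alpha-blend a dirt-colored Gaussian blob over a clean lane patch."""
    h, w, _ = lane_patch.shape
    alpha = gaussian_blob(h, w, sigma_x, sigma_y)[..., None]   # (H, W, 1) alpha map
    dirt = np.ones_like(lane_patch) * np.array(dirt_color, dtype=np.float64)
    return (1.0 - alpha) * lane_patch + alpha * dirt

patch = np.full((40, 60, 3), 255.0)   # white lane-marking patch
dirty = contaminate(patch)            # contamination strongest at the blob center
```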

Efficient Image Size Selection for MPEG Video-based Point Cloud Compression

  • Jia, Qiong;Lee, M.K.;Dong, Tianyu;Kim, Kyu Tae;Jang, Euee S.
    • 한국방송∙미디어공학회 학술대회논문집 / 2022 Summer Conference / pp.825-828 / 2022
  • In this paper, we propose an efficient image size selection method for video-based point cloud compression. The current MPEG video-based point cloud compression reference encoding process applies a fixed threshold to the size of the images generated while converting point cloud data into images. Because the converted images are compressed and reconstructed by a legacy video codec, the image size is one of the main factors influencing compression efficiency: if the image can be made smaller than the size determined by the threshold, compression efficiency improves. We therefore studied how to improve compression efficiency by selecting the best-fit image size generated during video-based point cloud compression. Experimental results show that the proposed method reduces the encoding time by 6 percent without loss of coding performance compared to version 15.0 of the video-based point cloud compression test model.
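
The best-fit idea can be sketched as follows: instead of a fixed threshold, take the bounding extent of the packed patches and round it up to the block granularity. This is only a conceptual sketch under assumed patch placements; the actual V-PCC test model encoder works on its own internal data structures.

```python
# Sketch: shrink the packed-patch image to the smallest size that still fits all patches,
# rounded up to the occupancy/codec block granularity (illustrative, not the TMC2 code).
def best_fit_image_size(patches, block=64, min_width=64, min_height=64):
    """patches: list of (x, y, w, h) placements produced by patch packing."""
    used_w = max(x + w for x, y, w, h in patches)
    used_h = max(y + h for x, y, w, h in patches)
    round_up = lambda v, b: ((v + b - 1) // b) * b
    return (max(min_width, round_up(used_w, block)),
            max(min_height, round_up(used_h, block)))

print(best_fit_image_size([(0, 0, 100, 200), (128, 0, 90, 60)]))  # -> (256, 256)
```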


The Impact of Video Quality and Image Size on the Effectiveness of Online Video Advertising on YouTube

  • Moon, Jang Ho
    • International Journal of Contents / Vol. 10, No. 4 / pp.23-29 / 2014
  • Online video advertising is now an increasingly important tool for marketers to reach and connect with their consumers. The purpose of this study was to empirically investigate the impact of video format on online video advertising; more specifically, it explored whether online video quality and image size influence viewer responses to online video advertising. The results of an experimental study conducted on YouTube suggest that enhanced video quality may have an important impact on the effectiveness of online advertising, and that the concept of presence is key to understanding the effects of enhanced video quality in online advertising.

중국의 문화관광 공연작품 <장한가>에 나타난 영상이미지 효과 분석 (Analysis of the Video Image Effects in <Chang Hen Ge>, China's Performing Arts Work of Cultural Tourism)

  • 육정학
    • 한국콘텐츠학회논문지 / Vol. 13, No. 6 / pp.77-85 / 2013
  • This study analyzes the performance effects of the video imagery used in <Chang Hen Ge>, a production in Xi'an billed as China's first large-scale historical dance drama. In other words, it examines which video images are used to express the particular themes and subject matter of <Chang Hen Ge> and how they contribute to the effect of the performance. 'Video image' (yeongsang) refers to 'an image in which the appearance of an object is reflected', especially images in film, television, and the like, and its scope is very broad; the word 'image' derives from the Latin 'imitari' and denotes a visual representation that can be rendered concretely or mentally. The term can therefore be seen as a combination of the near-synonymous words 'video' and 'image', where video is understood not simply as a synthesis of traditional art genres, such as the literary, theatrical, and pictorial qualities of a scenario, but as a totality that integrates the fundamental functions of all the arts and connects them to the subtle image-making activity of human existence. The results are as follows. The video images in <Chang Hen Ge> were found to produce, first, an expressive effect of connotative meaning reflecting the period and its culture; second, an effect of imaginative identification; third, a scene-transition effect; fourth, an effect of dramatic enjoyment through immersion; and fifth, a visual effect through the three-dimensionality of the performance.