• Title/Summary/Keyword: video to images

Search Results: 1,354, Processing Time: 0.027 seconds

How Long Will Your Videos Remain Popular? Empirical Study with Deep Learning and Survival Analysis

  • Min Gyeong Choi;Jae Hong Park
    • Asia Pacific Journal of Information Systems
    • /
    • v.33 no.2
    • /
    • pp.282-297
    • /
    • 2023
  • One of the emerging trends in the marketing field is digital video marketing. Online videos offer rich content, typically containing more information than any other content type (e.g., audio or text). Accordingly, previous researchers have examined the factors that influence a video's popularity. However, few studies have examined what causes a video to remain popular: some videos achieve continuous, ongoing popularity, while others fade out quickly. For practitioners, videos placed in recommendation slots can serve as strong communication channels, because many potential consumers are exposed to them. This study therefore provides practitioners with advice on choosing videos that will survive as long-lasting favorites, allowing them to advertise cost-effectively. Using deep learning techniques, this study extracts text from videos and measures the videos' tones, including factual and emotional tones. Additionally, we measure an aesthetic score by analyzing the thumbnail images in the data. We then empirically show that the cognitive features of a video, such as the tone of its message and the aesthetic quality of its thumbnail image, play an important role in determining long-term popularity. We believe this is the first study of its kind to examine new factors that help keep a video popular using both deep learning and econometric methodologies. (A minimal modeling sketch follows below.)
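
The study above couples deep-learning feature extraction with a survival (duration) model, but the abstract does not give the model specification. Below is a minimal sketch, assuming a Cox proportional-hazards model over hypothetical per-video features (factual_tone, emotional_tone, aesthetic_score); the actual variables, data, and estimator in the paper may differ.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-video data; column names and values are illustrative only.
videos = pd.DataFrame({
    "factual_tone":    [0.62, 0.15, 0.48, 0.80, 0.33, 0.71],  # tone scores from text extracted by a deep model
    "emotional_tone":  [0.30, 0.75, 0.55, 0.10, 0.60, 0.25],
    "aesthetic_score": [6.1, 4.3, 7.8, 5.0, 5.6, 6.9],        # thumbnail aesthetic assessment
    "duration_weeks":  [12, 3, 20, 7, 5, 15],                 # how long the video stayed popular
    "dropped_out":     [1, 1, 0, 1, 1, 0],                    # 1 = popularity ended (event observed)
})

# Cox proportional-hazards model: features with negative coefficients lower the
# hazard of "dropping out", i.e. they are associated with longer-lasting popularity.
cph = CoxPHFitter(penalizer=0.1)  # small penalty keeps this toy fit stable
cph.fit(videos, duration_col="duration_weeks", event_col="dropped_out")
cph.print_summary()
```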

Sea Ice Extents and global warming in Okhotsk Sea and surrounding Ocean - sea ice concentration using airborne microwave radiometer -

  • Nishio, Fumihiko
    • Proceedings of the KSRS Conference
    • /
    • 1998.09a
    • /
    • pp.76-82
    • /
    • 1998
  • The increase of greenhouse gases such as $CO_2$ and $CH_4$ would cause global warming of the atmosphere. According to global circulation models, a large increase in atmospheric temperature might occur in the Okhotsk Sea region under global warming caused by a doubling of greenhouse gases, so it is very important to monitor the sea ice extent in the Okhotsk Sea. To retrieve sea ice extent and concentration with higher accuracy, field experiments were begun comparing an Airborne Microwave Radiometer (AMR) with video images, both installed on an aircraft (Beach-200). Sea ice concentration is generally proportional to brightness temperature, and accurate retrieval of sea ice concentration from brightness temperature is important because of the sensitivity of the multi-channel data to the amount of open water in the ice pack. During the airborne AMR field experiments, the multi-frequency data suggested that sea ice concentration depends slightly on ice type, since brightness temperature differs between thin ice broken into small floes and a large floe with different surface signatures. Based on a classification into these two ice types, thin ice and large floes are clearly distinguished in the scatter plot of the 36.5 and 89.0 GHz channels, but not in the scatter plot of the 18.7 and 36.5 GHz channels. Two algorithms that have been used for deriving sea ice concentration from airborne multi-channel data are compared: the NASA Team Algorithm and the Bootstrap Algorithm. Intercomparison of both algorithms against the airborne data and the sea ice concentration derived from the video images has shown that the Bootstrap Algorithm is more consistent with the binary maps from the video images. (A simplified retrieval sketch follows this entry.)

  • PDF
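
The entry above contrasts the NASA Team and Bootstrap approaches without giving formulas. Below is a minimal sketch of a Bootstrap-style linear retrieval, assuming illustrative tie points (brightness temperatures of open water and consolidated ice); real tie points and channel combinations depend on the sensor, season, and region and are not taken from the paper.

```python
import numpy as np

def sea_ice_concentration(tb, tb_water, tb_ice):
    """Bootstrap-style linear retrieval: concentration is assumed to scale
    linearly between an open-water tie point and a consolidated-ice tie point.
    The tie point values used below are placeholders, not the paper's calibration."""
    sic = (tb - tb_water) / (tb_ice - tb_water)
    return np.clip(sic, 0.0, 1.0)

# Hypothetical 36.5 GHz brightness temperatures (K) along a flight line.
tb_36 = np.array([160.0, 205.0, 240.0, 251.0])
print(sea_ice_concentration(tb_36, tb_water=150.0, tb_ice=255.0))
```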

Automatic Video Editing Technology based on Matching System using Genre Characteristic Patterns (장르 특성 패턴을 활용한 매칭시스템 기반의 자동영상편집 기술)

  • Mun, Hyejun;Lim, Yangmi
    • Journal of Broadcast Engineering
    • /
    • v.25 no.6
    • /
    • pp.861-869
    • /
    • 2020
  • We introduce an application that automatically turns several images stored on a user's device into a single video, using the different climax patterns that appear in each film genre. To classify genre characteristics, a climax-pattern model was created by analyzing domestic and foreign movies in the drama, action, and horror genres. The climax pattern was characterized by changes in shot size, shot length, and the frequency of insert shots in specific scenes of a movie, and the result was visualized. The model visualized for each genre was developed into a template using Firebase DB. Images stored on the user's device are selected and matched with the climax-pattern template for each genre. Although the output is a short video, a key feature of the proposed application is that it can create an emotional story video reflecting the characteristics of the genre. Recently, platform operators such as YouTube and Naver have been upgrading applications that automatically generate video from pictures or videos taken by the user with a smartphone; however, applications that carry genre characteristics like movies, or that include video-generation technology for storytelling, are still scarce. The proposed automatic video editing is expected to develop into a video editing application capable of conveying emotion. (A minimal matching sketch follows below.)
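
The application above matches user photos to a genre-specific climax-pattern template, but the template format is not spelled out in the abstract. The following is a minimal sketch under assumed structures: the Slot fields (shot size, duration, insert flag) and the ACTION_TEMPLATE values are hypothetical, and the paper's real matching rules and Firebase-backed templates are not reproduced.

```python
from dataclasses import dataclass

@dataclass
class Slot:
    shot_size: str      # e.g. "wide", "medium", "close-up"
    duration_s: float   # how long the image is shown
    is_insert: bool     # whether the slot is an insert shot

# Hypothetical "action" climax pattern: shots get tighter and shorter toward the climax.
ACTION_TEMPLATE = [
    Slot("wide", 3.0, False),
    Slot("medium", 2.0, False),
    Slot("close-up", 1.0, True),
    Slot("close-up", 0.8, False),
]

def match_images_to_template(image_paths, template):
    """Assign user images to template slots in order, producing a simple edit list."""
    edit_list = []
    for path, slot in zip(image_paths, template):
        edit_list.append({"image": path, "duration_s": slot.duration_s,
                          "shot_size": slot.shot_size, "insert": slot.is_insert})
    return edit_list

print(match_images_to_template(["a.jpg", "b.jpg", "c.jpg", "d.jpg"], ACTION_TEMPLATE))
```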

A Method for Generating Inbetween Frames in Sign Language Animation (수화 애니메이션을 위한 중간 프레임 생성 방법)

  • O, Jeong-Geun;Kim, Sang-Cheol
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.5
    • /
    • pp.1317-1329
    • /
    • 2000
  • Advances in video processing and computer graphics have made it possible to build sign language education systems that show sign language motion for an arbitrary sentence using captured video clips of sign language words. In this paper, a method is proposed that generates the frames between the last frame of one word and the first frame of the following word in order to animate hand motion. In our method, we find the hand locations and angles required for inbetween frame generation, then capture and store hand images at those locations and angles. Inbetween frame generation is then simply a task of finding a sequence of hand angles and locations. Our method is computationally simple and requires a relatively small amount of disk space. Nevertheless, our experiments show that inbetween frames for presentation at about 15 fps (frames per second) are achieved, so that smooth animation of hand motion is possible. Our method improves on previous work in which the computational cost is relatively high or unnecessary images are generated. (A minimal interpolation sketch follows this entry.)

  • PDF
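
The method above boils inbetweening down to finding a sequence of hand locations and angles between two keyframes. Below is a minimal sketch of that step, assuming simple linear interpolation of a 2D hand position and a single hand angle at 15 fps; the paper's lookup of pre-captured hand images and its exact interpolation rule are not reproduced.

```python
import numpy as np

def inbetween_hand_poses(last_pose, first_pose, duration_s, fps=15):
    """Linearly interpolate a hand pose (x, y, angle_deg) between the last frame
    of one word and the first frame of the next word. Pose format is illustrative."""
    n = max(int(round(duration_s * fps)), 1)
    t = np.linspace(0.0, 1.0, n + 1)[1:-1]           # exclude the two keyframes themselves
    last, first = np.asarray(last_pose, float), np.asarray(first_pose, float)
    return [(1 - u) * last + u * first for u in t]   # each pose -> look up nearest stored hand image

# Hand moves from (100, 200) at 30 degrees to (260, 140) at -10 degrees over 0.5 s.
for pose in inbetween_hand_poses((100, 200, 30), (260, 140, -10), duration_s=0.5):
    print(pose)
```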

Comparative Transmission of JPEG2000 and MPEG-4 Patient Images using the Error Resilient Tools over CDMA 1xEVDO Network (CDMA 1xEVDO 망에서 무선 에러에 강인한 JPEG2000과 MPEG4의 환자 영상 전송에 관한 비교연구)

  • Cho, Jin-Ho;Lee, Tong-Heon;Yoo, Sun-Kook
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.55 no.6
    • /
    • pp.296-301
    • /
    • 2006
  • Although emergency telecommunication makes it possible for a specialist to provide medical care for emergency cases in a moving vehicle, many problems remain in transmitting patient images or video over wireless networks. To alleviate the effect of channel errors on a compressed video bit-stream, this paper analyzes the error resilient features of the JPEG2000 standard and measures the transmission quality over a noisy wireless channel, the CDMA2000 1xEV-DO network, in comparison with the error resilient tools of MPEG-4. We also propose an optimal approach for transmitting images over a real 3G network using the JPEG2000 error resilient tools. (A minimal measurement sketch follows below.)
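
The study measures decoded image quality after transmission over a noisy channel. Below is a minimal sketch of that evaluation loop, assuming random bit errors injected into a compressed byte stream and PSNR computed between the original and decoded frames; the actual JPEG2000/MPEG-4 codecs, error-resilience markers, and channel model used in the paper are not reproduced.

```python
import numpy as np

def inject_bit_errors(bitstream: bytes, ber: float, rng=np.random.default_rng(0)) -> bytes:
    """Flip each bit independently with probability `ber` (a toy channel model)."""
    bits = np.unpackbits(np.frombuffer(bitstream, dtype=np.uint8))
    flips = (rng.random(bits.size) < ber).astype(np.uint8)
    return np.packbits(bits ^ flips).tobytes()

def psnr(original: np.ndarray, decoded: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two 8-bit images."""
    mse = np.mean((original.astype(float) - decoded.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Usage idea: encode a patient frame with a JPEG2000/MPEG-4 encoder of your choice,
# corrupt the bitstream with inject_bit_errors(), decode it, and compare with psnr().
frame = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)
noisy = np.clip(frame.astype(int) + np.random.default_rng(2).integers(-8, 9, frame.shape), 0, 255).astype(np.uint8)
print(f"PSNR of a mildly corrupted frame: {psnr(frame, noisy):.1f} dB")
```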

A Segmentation Method for a Moving Object on a Static Complex Background Scene (복잡한 배경에서 움직이는 물체의 영역분할에 관한 연구)

  • Park, Sang-Min;Kwon, Hui-Ung;Kim, Dong-Sung;Jeong, Kyu-Sik
    • The Transactions of the Korean Institute of Electrical Engineers A
    • /
    • v.48 no.3
    • /
    • pp.321-329
    • /
    • 1999
  • Moving object segmentation extracts a moving object of interest from consecutive image frames and has been used for factory automation, autonomous navigation, video surveillance, and VOP (Video Object Plane) detection in MPEG-4. This paper proposes a new segmentation method in which difference images are calculated from three consecutive input frames and used to compute both a coarse object area (AI) and its movement area (OI). The AI is extracted by removing the background using background area projection (BAP). Missing parts of the AI are recovered with the help of the OI: the boundary information of the OI confines the missing parts of the object and provides initial curves for active contour optimization. The optimized contours, together with the AI, form the boundaries of the moving object. Experimental results for a fast-moving object on a complex background scene are included. (A minimal frame-differencing sketch follows this entry.)

  • PDF
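
The entry above computes difference images from three consecutive frames to obtain a coarse object area and a movement area. Below is a minimal OpenCV sketch of the three-frame differencing step only; the BAP background removal and the active-contour refinement from the paper are omitted, the threshold value is an illustrative choice, and "input.avi" is a hypothetical clip.

```python
import cv2
import numpy as np

def three_frame_difference(prev_f, cur_f, next_f, thresh=25):
    """Return a rough motion mask for the middle frame from three consecutive
    grayscale frames: pixels that changed both from prev->cur and cur->next."""
    d1 = cv2.absdiff(cur_f, prev_f)
    d2 = cv2.absdiff(next_f, cur_f)
    _, m1 = cv2.threshold(d1, thresh, 255, cv2.THRESH_BINARY)
    _, m2 = cv2.threshold(d2, thresh, 255, cv2.THRESH_BINARY)
    motion = cv2.bitwise_and(m1, m2)  # movement-area-like mask
    return cv2.morphologyEx(motion, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))

cap = cv2.VideoCapture("input.avi")  # hypothetical input clip
frames = []
ok, f = cap.read()
while ok:
    frames.append(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY))
    ok, f = cap.read()
for i in range(1, len(frames) - 1):
    mask = three_frame_difference(frames[i - 1], frames[i], frames[i + 1])
```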

Human Face Identification using KL Transform and Neural Networks (KL 변환과 신경망을 이용한 개인 얼굴 식별)

  • Kim, Yong-Joo;Ji, Seung-Hwan;Yoo, Jae-Hyung;Kim, Jung-Hwan;Park, Mignon
    • The Transactions of the Korean Institute of Electrical Engineers A
    • /
    • v.48 no.1
    • /
    • pp.68-75
    • /
    • 1999
  • Machine recognition of faces from still and video images is emerging as an active research area spanning several disciplines, including image processing, pattern recognition, computer vision, and neural networks. Human face identification also has numerous applications, such as human-interface-based systems and real-time video systems for surveillance and security. In this paper, we propose an algorithm that can identify a particular individual's face. We consider a human face identification system in color space, which has not often been considered in conventional methods. To make the algorithm insensitive to luminance, we convert the conventional RGB coordinates into normalized CIE coordinates. The normalized-CIE-based facial images are KL-transformed, and the transformed data are used as the input to a multi-layered neural network trained with error backpropagation. Finally, we verify the performance of the proposed algorithm through experiments. (A minimal KL-transform-plus-network sketch follows this entry.)

  • PDF
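
The pipeline above is essentially a KL transform (PCA) of the face images followed by a multi-layer network trained with backpropagation. Below is a minimal scikit-learn sketch of that pipeline, assuming face images already cropped and flattened into vectors (the random data here is a stand-in); the paper's normalized-CIE color preprocessing and its exact network topology are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical data: 200 flattened 32x32 face images for 10 people.
rng = np.random.default_rng(0)
X = rng.random((200, 32 * 32))
y = rng.integers(0, 10, 200)

# KL transform (PCA) to a low-dimensional subspace, then an MLP trained with backpropagation.
model = make_pipeline(
    PCA(n_components=40),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```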

MPEG-DASH Services for 3D Contents Based on DMB AF (DMB AF 기반 3D 콘텐츠의 MPEG-DASH 서비스)

  • Kim, Yong Han;Park, Minkyu
    • Journal of Broadcast Engineering
    • /
    • v.18 no.1
    • /
    • pp.115-121
    • /
    • 2013
  • Recently, an extension to the DMB AF (Digital Multimedia Broadcasting Application Format) standard has been proposed that allows the extended DMB AF to include stereoscopic video and stereoscopic images in its interactive service data, i.e., MPEG-4 BIFS (Binary Format for Scenes) data, in addition to the existing 2D video and 2D images for BIFS services. In this paper, we develop a service that streams 3D content in DMB AF using the MPEG-DASH (Dynamic Adaptive Streaming over HTTP) standard, and we validate it by implementing client software.

The User Interface of Button Type for Stereo Video-See-Through (Stereo Video-See-Through를 위한 버튼형 인터페이스)

  • Choi, Young-Ju;Seo, Young-Duek
    • Journal of the Korea Computer Graphics Society
    • /
    • v.13 no.2
    • /
    • pp.47-54
    • /
    • 2007
  • This paper proposes a user interface based on a video see-through environment that shows images from stereo cameras so that the user can easily control computer systems or other processes. We use AR technology to synthesize virtual buttons: graphic images are overlaid in real time on the frames captured by the camera. We search for the hand position in the frames to judge whether the user has selected a button, and the result of this judgment is visualized by changing the button color. The user can easily interact with the system by selecting a virtual button on the screen while watching the screen and moving a finger in the air. (A minimal overlay-and-hit-test sketch follows this entry.)

  • PDF
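
The interaction above amounts to overlaying a button graphic on each camera frame and testing whether the detected fingertip position falls inside the button region. Below is a minimal OpenCV sketch of that overlay-and-hit-test step, assuming a fingertip position supplied by some external hand detector (left as a placeholder); the paper's stereo capture and hand-search method are not reproduced.

```python
import cv2

BUTTON = (50, 50, 200, 120)  # x, y, width, height of the virtual button (illustrative)

def draw_button(frame, pressed):
    x, y, w, h = BUTTON
    color = (0, 0, 255) if pressed else (0, 255, 0)  # color change visualizes selection
    cv2.rectangle(frame, (x, y), (x + w, y + h), color, -1)
    cv2.putText(frame, "OK", (x + 20, y + 80), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 3)

def is_pressed(fingertip):
    if fingertip is None:
        return False
    x, y, w, h = BUTTON
    fx, fy = fingertip
    return x <= fx <= x + w and y <= fy <= y + h

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fingertip = None  # replace with a real hand/fingertip detector (hypothetical here)
    draw_button(frame, is_pressed(fingertip))
    cv2.imshow("video see-through UI", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```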

Implementation of Real-Time Multi-Camera Video Surveillance System with Automatic Resolution Control Using Motion Detection (움직임 감지를 사용하여 영상 해상도를 자동 제어하는 실시간 다중 카메라 영상 감시 시스템의 구현)

  • Jung, Seulkee;Lee, Jong-Bae;Lee, Seongsoo
    • Journal of IKEEE
    • /
    • v.18 no.4
    • /
    • pp.612-619
    • /
    • 2014
  • This paper proposes a real-time multi-camera video surveillance system with automatic resolution control using motion detection. Normally, the system acquires four channels of QVGA images, merges them into a single VGA image, and transmits it. When motion is detected, it automatically increases the resolution of the motion-occurring channel to VGA, decreases the other three channels to QQVGA, and overlays and transmits these images. Thus the system can magnify and watch the motion-occurring channel while keeping the transmission bandwidth constant and still monitoring all other channels. When synthesized in 0.18 µm technology, the maximum operating frequency is 110 MHz, which can theoretically support four HD cameras. (A minimal resolution-control sketch follows below.)
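
The resolution-control logic above keeps the composite frame at VGA size regardless of where motion occurs. Below is a minimal software sketch of that compositing rule, assuming simple frame-difference motion detection per channel; the hardware design and exact overlay layout in the paper are not reproduced, and the threshold values are illustrative.

```python
import cv2
import numpy as np

QVGA, VGA, QQVGA = (320, 240), (640, 480), (160, 120)  # (width, height)

def has_motion(prev_gray, cur_gray, thresh=25, min_ratio=0.01):
    """Very simple motion test: fraction of pixels whose difference exceeds a threshold."""
    diff = cv2.absdiff(prev_gray, cur_gray)
    return np.count_nonzero(diff > thresh) / diff.size > min_ratio

def compose(frames, motion_channel):
    """Build one VGA output frame from 4 camera frames (BGR).
    No motion: 2x2 mosaic of QVGA tiles. Motion on channel k: that channel at VGA,
    the other three overlaid as QQVGA thumbnails along the top edge."""
    if motion_channel is None:
        tiles = [cv2.resize(f, QVGA) for f in frames]
        top = np.hstack([tiles[0], tiles[1]])
        bottom = np.hstack([tiles[2], tiles[3]])
        return np.vstack([top, bottom])
    out = cv2.resize(frames[motion_channel], VGA)
    x = 0
    for i, f in enumerate(frames):
        if i == motion_channel:
            continue
        out[0:QQVGA[1], x:x + QQVGA[0]] = cv2.resize(f, QQVGA)
        x += QQVGA[0]
    return out

# Example: four synthetic frames, motion assumed on channel 2.
frames = [np.full((480, 640, 3), c, np.uint8) for c in (50, 100, 150, 200)]
out = compose(frames, motion_channel=2)
print(out.shape)  # (480, 640, 3) -> always a single VGA frame
```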