• Title/Summary/Keyword: Video Images

Search Results: 1,447

Automatic Detection of Dissolving Scene Change in Video (Video 장면전환 중 디졸브 검출에 관한 연구)

  • 박성준;송문호;곽대호;김운경;정민교
    • Proceedings of the IEEK Conference
    • /
    • 1999.11a
    • /
    • pp.1057-1060
    • /
    • 1999
  • For efficient storage and retrieval of large video data sets, automatic video scene change detection is a necessary tool. Video scene changes fall into two categories, namely fast and gradual scene changes. Gradual scene-change effects include dissolves, wipes, fades, etc. Although currently existing algorithms are able to detect fast scene changes quite accurately, the detection of gradual scene changes continues to be a difficult problem. In this paper, among the various gradual scene changes, we focus on dissolves. The algorithm uses a subset of the entire video, namely the sequence of DC images, to improve detection speed; a minimal sketch of this idea appears after this entry.

  • PDF
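
A minimal sketch of the DC-image idea above, assuming the classic observation that frame-intensity variance traces a downward dip across a linear dissolve. The window bounds and the `dip_ratio` threshold are illustrative assumptions, not the paper's algorithm or tuned values.

```python
# Hypothetical dissolve detector over a sequence of DC images (the small
# images formed from DCT DC coefficients, roughly 1/64 the frame area).
import numpy as np

def frame_variances(dc_images):
    """dc_images: iterable of 2-D arrays. Returns per-frame intensity variance."""
    return np.array([img.astype(np.float64).var() for img in dc_images])

def detect_dissolves(dc_images, min_len=10, max_len=60, dip_ratio=0.7):
    """Flag (start, end) ranges whose interior variance dips well below
    both endpoints -- a crude test for the parabolic dip of a dissolve."""
    v = frame_variances(dc_images)
    found = []
    for s in range(len(v) - min_len):
        for e in range(s + min_len, min(s + max_len, len(v))):
            if v[s + 1:e].min() < dip_ratio * min(v[s], v[e]):
                found.append((s, e))
                break
    return found
```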

Implementation of 360 VR Tiled Video Player with Eye Tracking based Foveated Rendering (시점 추적 기반 Foveated Rendering을 지원하는 360 VR Tiled Video Player 구현)

  • Kim, Hyun Wook;Yang, Sung Hyun
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.7
    • /
    • pp.795-801
    • /
    • 2018
  • Various technologies for delivering high-quality 360 VR media content are currently being studied and developed. However, rendering high-quality media images is very difficult with the limited resources of an HMD (Head Mounted Display). In this paper, we design and implement a 360 VR player that renders high-quality 360 tiled video images to an HMD. Furthermore, we develop a multi-resolution-based foveated rendering technique. Several experiments confirm that it improves video rendering performance well beyond existing tiled-video rendering techniques.
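
As a rough illustration of the multi-resolution scheme above, the sketch below picks a quality level per tile from its distance to the tracked gaze point; the 8x4 tile grid, the two quality levels, and `fovea_radius` are illustrative assumptions, not the paper's configuration.

```python
# Hypothetical foveated tile selection: tiles near the gaze point are
# rendered at high resolution, peripheral tiles at low resolution.
import math

def select_tile_qualities(gaze_x, gaze_y, cols=8, rows=4, fovea_radius=1.5):
    """gaze_x, gaze_y: gaze position in tile coordinates.
    Returns {(col, row): 'high' | 'low'}."""
    qualities = {}
    for c in range(cols):
        for r in range(rows):
            # distance from the tile centre to the gaze point, in tiles
            d = math.hypot(c + 0.5 - gaze_x, r + 0.5 - gaze_y)
            qualities[(c, r)] = 'high' if d <= fovea_radius else 'low'
    return qualities

# e.g. gaze at the centre of an 8x4 tiling of the equirectangular frame
print(select_tile_qualities(4.0, 2.0))
```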

Standardization Trend of 3DoF+ Video for Immersive Media (이머시브 미디어를 위한 3DoF+ 비디오 부호화 표준 동향)

  • Lee, G.S.;Jeong, J.Y.;Shin, H.C.;Seo, J.I.
    • Electronics and Telecommunications Trends
    • /
    • v.34 no.6
    • /
    • pp.156-163
    • /
    • 2019
  • As a primitive immersive video technology, a three-degrees-of-freedom (3DoF) 360° video can currently render viewport images that depend on the rotational movements of the viewer. However, rendering a flat 360° video, that is, one supporting head rotations only, may cause visual discomfort, especially when objects close to the viewer are rendered. 3DoF+ adds horizontal, vertical, and depth translations to the head rotations of a seated viewer. The 3DoF+ 360° video is positioned between 3DoF and six degrees of freedom, and can realize motion parallax with relatively simple virtual reality software in head-mounted displays. This article introduces the standardization trends for 3DoF+ video in the MPEG-I Visual group.

A Study on Gender Identity Expressed in Fashion in Music Video

  • Jeong, Ha-Na;Choy, Hyon-Sook
    • International Journal of Costume and Fashion
    • /
    • v.6 no.2
    • /
    • pp.28-42
    • /
    • 2006
  • In modern society, media contributes more to the construction of personal identities than any other medium. Music video, a postmodern branch among the variety of media, offers a complex experience of sound combined with visual images. In particular, fashion in music video helps convey context effectively and functions as a medium of immediate communication through visual effect. Considering the socio-cultural effects of music video, the gender identity represented in its fashion can be of great importance. Therefore, this study reconsiders gender identity represented through costume in music video by analyzing the fashions in it. Gender identity in the socio-cultural category is classified as masculinity, femininity, and the third sex. By examining fashions based on this classification, this study will help to create new design concepts and to understand gender identity in fashion. The results of this study are as follows: First, masculinity in music video fashion was categorized into stereotyped masculinity, sexual masculinity, and metrosexual masculinity. Second, femininity in music video fashion was categorized into stereotyped femininity, sexual femininity, and contrasexual femininity. Third, the third sex in music video fashion was categorized into transvestism, masculinization of the female, and feminization of the male; this phenomenon is presented in music videos through females in male attire and males in female attire. Through this research, the gender identity represented in music video fashion was demonstrated, and the importance of the relationship between the representation of identity through fashion and the socio-cultural environment was reconfirmed.

Fast Mode Decision For Depth Video Coding Based On Depth Segmentation

  • Wang, Yequn;Peng, Zongju;Jiang, Gangyi;Yu, Mei;Shao, Feng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.6 no.4
    • /
    • pp.1128-1139
    • /
    • 2012
  • With the development of three-dimensional displays and related technologies, depth video coding has become a new topic attracting great attention from industry and research institutes. Because (1) depth video is not a sequence of images for final viewing by end users but an aid for rendering, and (2) depth video is simpler than the corresponding color video, a fast algorithm for depth video coding is both necessary and feasible for reducing the computational burden of the encoder. This paper proposes a fast mode decision algorithm for depth video coding based on depth segmentation. First, based on depth perception, the depth video is segmented into three regions: edge, foreground, and background. Then, different mode candidates are searched in each region to decide the encoding macroblock mode. Finally, the encoding time, bit rate, and virtual-view video quality of the proposed algorithm are tested. Experimental results show that the proposed algorithm saves between 82.49% and 93.21% of encoding time with negligible quality degradation of the rendered virtual-view image and a negligible bit-rate increment.
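
A hypothetical sketch of the segmentation-then-restriction idea above: label each depth pixel as background, foreground, or depth edge, then search only a reduced macroblock mode list per region. The thresholds and mode lists are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def segment_depth(depth, edge_thresh=8.0, fg_thresh=128):
    """depth: 2-D uint8 depth map. Returns a label map:
    0 = background, 1 = foreground, 2 = depth edge."""
    gy, gx = np.gradient(depth.astype(np.float64))
    labels = np.where(depth >= fg_thresh, 1, 0)
    labels[np.hypot(gx, gy) > edge_thresh] = 2   # edges override regions
    return labels

def mode_candidates(label):
    """Smooth background: cheap large partitions; depth edges: full search."""
    return {0: ['SKIP', 'INTER_16x16'],
            1: ['SKIP', 'INTER_16x16', 'INTER_8x8'],
            2: ['INTER_8x8', 'INTER_4x4', 'INTRA_4x4']}[label]
```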

Decision on Compression Ratios for Real-Time Transfer of Ultrasound Sequences

  • Lee, Jae-Hoon;Sung, Min-Mo;Kim, Hee-Joung;Yoo, Sun-Kwook;Kim, Eun-Kyung;Kim, Dong-Keun;Jung, Suk-Myung;Yoo, Hyung-Sik
    • Proceedings of the Korean Society of Medical Physics Conference
    • /
    • 2002.09a
    • /
    • pp.489-491
    • /
    • 2002
  • The need for video diagnosis in medicine has increased, and real-time transfer of digital video will be an important component of PACS and telemedicine. However, network environments have throughput limitations that can prevent the required quality of service (QoS) from being met. MPEG-4, ratified as a moving-video standard by the ISO/IEC, provides very efficient video coding covering a wide range of low bit rates. We implemented an MPEG-4 CODEC (coder/decoder) and applied various compression ratios to moving ultrasound images. These images were transferred over the network and displayed in random order on a client monitor. Radiologists assigned subjective opinion scores to evaluate clinically acceptable image quality, and the scores were statistically processed with a t-test. Moreover, the MPEG-4 decoded images were quantitatively analyzed by computing the peak signal-to-noise ratio (PSNR) to evaluate image quality objectively; a minimal PSNR sketch appears after this entry. The bit rate needed to maintain clinically acceptable image quality was up to 0.8 Mbps. We successfully implemented adaptive throughput (bit rate) relative to the image quality of ultrasound sequences using MPEG-4, which can be applied to diagnostic work in real time.

  • PDF
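
For reference, the PSNR figure used in the objective evaluation above is a standard computation; a minimal version for 8-bit frames (peak value 255) might look like this.

```python
import numpy as np

def psnr(original, decoded):
    """original, decoded: uint8 arrays of identical shape."""
    mse = np.mean((original.astype(np.float64) -
                   decoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')  # frames are identical
    return 10.0 * np.log10(255.0 ** 2 / mse)
```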

Design and Implementation of Multi-View 3D Video Player (다시점 3차원 비디오 재생 시스템 설계 및 구현)

  • Heo, Young-Su;Park, Gwang-Hoon
    • Journal of Broadcast Engineering
    • /
    • v.16 no.2
    • /
    • pp.258-273
    • /
    • 2011
  • This paper designs and implements a multi-view 3D video player that operates faster than existing video player systems. To process large volumes of multi-view image data at high speed, we propose a structure that approaches optimal speed in a multi-processor environment by parallelizing the component modules. To exploit concurrency at the bottleneck stages, we designed the image decoding, synthesis, and rendering modules as a pipeline. For load balancing, the decoder module is divided by viewpoint, and the image synthesis module is divided geometrically based on the synthesized images. In our experiments, multi-view images were synthesized correctly, and a 3D sense could be felt when watching them on a multi-view autostereoscopic display. The proposed processing structure can be used to process large volumes of multi-view image data at high speed, using the multi-processors to their maximum capacity.
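
A toy sketch of the pipelined structure described above, assuming one thread per stage and bounded queues so that decoding, synthesis, and rendering overlap in time; the stage bodies are stubs, and the per-viewpoint decoder split and geometric synthesis split are omitted.

```python
import queue
import threading

def run_stage(work, inq, outq):
    while True:
        item = inq.get()
        if item is None:                 # poison pill: shut down, pass it on
            if outq is not None:
                outq.put(None)
            return
        result = work(item)
        if outq is not None:
            outq.put(result)

decode_q = queue.Queue(maxsize=4)        # frames waiting to be decoded
synth_q = queue.Queue(maxsize=4)         # decoded views awaiting synthesis
render_q = queue.Queue(maxsize=4)        # synthesized frames to render

stages = [
    threading.Thread(target=run_stage,
                     args=(lambda f: f'decoded({f})', decode_q, synth_q)),
    threading.Thread(target=run_stage,
                     args=(lambda f: f'synthesized({f})', synth_q, render_q)),
    threading.Thread(target=run_stage,
                     args=(lambda f: print('render', f), render_q, None)),
]
for t in stages:
    t.start()
for frame in range(8):                   # feed a few dummy frames
    decode_q.put(frame)
decode_q.put(None)
for t in stages:
    t.join()
```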

A Tile-Image Merging Algorithm of Tiled-Display Recorder using Time-stamp (타임 스탬프를 이용한 타일드 디스플레이 기록기의 타일 영상 병합 알고리즘)

  • Choe, Gi-Seok;Nang, Jong-Ho
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.36 no.5
    • /
    • pp.327-334
    • /
    • 2009
  • A tiled-display system provides a high-resolution display that can be used in various co-working applications. Systems used in the co-working field usually save user logs; this log information not only makes maintenance of the tiled-display system easier but can also be used to check the progress of the co-working. The proposed tiled-display log recorder has three main steps. The first step captures screenshots of the tiles and sends them for merging. The second step merges the captured tile images into a single screenshot of the tiled display. The final step encodes the merged tile images into a compressed video stream, which can be stored as a co-working log or streamed to remote users. Because the tile images may be captured at slightly different times, the quality of the merged display image can degrade. This paper proposes a time-stamp-based metric to evaluate the quality of the video stream, and a merging algorithm that improves the quality of the video stream with respect to the proposed metric.
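
A hypothetical sketch of the time-stamp idea above: each captured tile carries a capture timestamp, the merger picks, per node, the capture closest to a common target time, and the remaining stamp skew inside a merged frame serves as the quality metric (smaller skew means a more consistent screenshot). The names and data structures are illustrative assumptions.

```python
def merge_skew(tile_stamps):
    """tile_stamps: capture timestamps (seconds) of the tiles merged into
    one screenshot. Returns the skew used as the quality metric."""
    return max(tile_stamps) - min(tile_stamps)

def pick_tiles(target_t, captures_per_node):
    """captures_per_node: {node_id: [(timestamp, image), ...]}.
    Pick, per node, the capture whose timestamp is closest to target_t."""
    return {node: min(captures, key=lambda c: abs(c[0] - target_t))
            for node, captures in captures_per_node.items()}
```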

Real-time Video Matting for Mobile Device (모바일 환경에서 실시간 영상 전경 추출 연구)

  • Yoon, Jong-Chul
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.11 no.5
    • /
    • pp.487-492
    • /
    • 2018
  • Recently, various image processing applications have been ported to the mobile environment as image capture on mobile devices has become widespread. However, extracting the image foreground, one of the most important functions for image composition, is difficult because it requires complex computation. In this paper, we propose a video composition technique that separates images captured by mobile devices into foreground and background and composites them onto target images in real time. Considering the characteristics of mobile shooting, our system automatically extracts the foreground of an input video that contains only weak camera motion. Using SIMD- and GPGPU-based acceleration, SD-quality video can be processed on a mobile device in real time.
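
The paper's own accelerated method is not reproduced here; as a stand-in, the sketch below illustrates the extract-then-composite flow for a mostly static camera using OpenCV's MOG2 background subtractor. The file names are illustrative assumptions.

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
cap = cv2.VideoCapture('input.mp4')        # assumed source clip
background = cv2.imread('target_bg.jpg')   # assumed target image

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)         # 0/255 foreground mask
    mask = cv2.medianBlur(mask, 5)         # suppress speckle noise
    bg = cv2.resize(background, (frame.shape[1], frame.shape[0]))
    composite = bg.copy()
    composite[mask > 0] = frame[mask > 0]  # paste foreground onto target
    cv2.imshow('composite', composite)
    if cv2.waitKey(1) == 27:               # Esc to quit
        break
cap.release()
```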

Generation of Stage Tour Contents with Deep Learning Style Transfer (딥러닝 스타일 전이 기반의 무대 탐방 콘텐츠 생성 기법)

  • Kim, Dong-Min;Kim, Hyeon-Sik;Bong, Dae-Hyeon;Choi, Jong-Yun;Jeong, Jin-Woo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.11
    • /
    • pp.1403-1410
    • /
    • 2020
  • Recently, as interest in non-face-to-face experiences and services has increased, demand for web video content that can be easily consumed on mobile devices such as smartphones or tablets is rising rapidly. To meet this demand, this paper proposes a technique for efficiently producing video content that offers the experience of visiting famous places appearing in animations or movies (i.e., a stage tour). To this end, an image dataset was built by collecting images of the stage areas using the Google Maps and Google Street View APIs. A deep-learning-based style transfer method was then presented to apply the distinctive style of the animation to the collected street-view images and to generate video content from the style-transferred images. Finally, various experiments showed that the proposed method can produce more engaging stage-tour video content.
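
As a small illustration of the final step above, the sketch below assembles a folder of already style-transferred street-view images into a tour video with OpenCV; the file names, frame rate, and codec are illustrative assumptions, and the style transfer itself is assumed to have been done by an off-the-shelf model.

```python
import glob
import cv2

frames = sorted(glob.glob('styled_frames/*.jpg'))  # assumed output folder
first = cv2.imread(frames[0])
h, w = first.shape[:2]
writer = cv2.VideoWriter('stage_tour.mp4',
                         cv2.VideoWriter_fourcc(*'mp4v'), 24, (w, h))
for path in frames:
    img = cv2.imread(path)
    writer.write(cv2.resize(img, (w, h)))  # enforce a uniform frame size
writer.release()
```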