• Title/Summary/Keyword: video art


Comprehensive Investigations on QUEST: a Novel QoS-Enhanced Stochastic Packet Scheduler for Intelligent LTE Routers

  • Paul, Suman; Pandit, Malay Kumar
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.2, pp.579-603, 2018
  • In this paper we propose QUEST, a QoS-enhanced intelligent stochastic optimal fair real-time packet scheduler for 4G LTE traffic in routers. The objective of this research is to maximize system QoS subject to the constraint that processor utilization is kept at nearly 100 percent. QUEST has the following unique advantages. First, it solves the challenging problem of starvation for the low-priority class (buffered streaming video and TCP-based applications); second, it removes the major bottleneck of the Earliest Deadline First scheduler, namely its failure at heavy loads. Finally, QUEST offers the benefit of arbitrarily pre-programming the process utilization ratio. Three classes of multimedia 4G LTE QCI traffic have been considered: conversational voice, live streaming video, and buffered streaming video with TCP-based applications. We analyse the two most important QoS metrics, packet loss rate (PLR) and mean waiting time. All claims are supported by discrete-event and Monte Carlo simulations. The simulation results show that the QUEST scheduler outperforms current state-of-the-art benchmark schedulers, offering a 37 percent improvement in PLR and a 23 percent improvement in mean waiting time over the best competing scheduler, Accuracy-aware EDF.
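
The abstract evaluates QUEST against Earliest Deadline First (EDF) under heavy load but does not spell out the QUEST algorithm itself. The sketch below therefore only illustrates the EDF baseline and the overload behaviour it is compared against; the class names, deadlines, traffic mix, and drop-on-expiry loss proxy are assumptions made for the example.

```python
import heapq
import random

# Deadlines per LTE QCI-style traffic class, in milliseconds.
# Illustrative assumptions, not figures taken from the paper.
CLASS_DEADLINE_MS = {"voice": 100, "live_video": 150, "buffered_video": 300}

def edf_simulate(packets, service_time_ms=1.0):
    """Serve packets in Earliest-Deadline-First order.
    packets: list of (arrival_ms, traffic_class) tuples.
    Returns (mean waiting time, deadline misses); the miss count is a rough
    proxy for packet loss rate under a drop-on-expiry policy."""
    packets = sorted(packets)
    queue, waits, misses, t, i = [], [], 0, 0.0, 0
    while i < len(packets) or queue:
        while i < len(packets) and packets[i][0] <= t:      # admit arrivals
            arrival, cls = packets[i]
            heapq.heappush(queue, (arrival + CLASS_DEADLINE_MS[cls], arrival))
            i += 1
        if not queue:                                        # idle until next arrival
            t = packets[i][0]
            continue
        deadline, arrival = heapq.heappop(queue)             # earliest deadline first
        waits.append(t - arrival)
        if t + service_time_ms > deadline:                   # finished too late
            misses += 1
        t += service_time_ms
    return sum(waits) / len(waits), misses

random.seed(1)
load = [(random.uniform(0, 2000), random.choice(list(CLASS_DEADLINE_MS))) for _ in range(3000)]
print(edf_simulate(load))   # at ~150% load, waits grow and deadline misses pile up
```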

High-frame-rate Video Denoising for Ultra-low Illumination

  • Tan, Xin; Liu, Yu; Zhang, Zheng; Zhang, Maojun
    • KSII Transactions on Internet and Information Systems (TIIS), v.8 no.11, pp.4170-4188, 2014
  • In this study, we present a denoising algorithm for high-frame-rate videos in ultra-low illumination environments, based on a Kalman filtering model and a new motion segmentation scheme. The Kalman filter removes temporal noise from signals by propagating error covariance statistics. Because motion is regarded as the process noise of imaging, it is important in Kalman filtering, and we propose a new motion estimation scheme that is suitable for severe noise. This scheme exploits the small motion vectors characteristic of high-frame-rate videos. Patches with small changes are intentionally neglected, because distinguishing fine details from large-scale noise is both difficult and unimportant. Finally, a spatial bilateral filter is used to improve denoising in the motion areas. Experiments are performed on videos with both synthetic and real noise. The results show that the proposed algorithm outperforms other state-of-the-art methods in terms of both objective peak signal-to-noise ratio and visual quality.
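
As a rough illustration of the temporal Kalman filtering the abstract builds on, the sketch below runs an independent scalar Kalman filter per pixel and inflates the process noise wherever the frame difference is large, a crude stand-in for the paper's motion segmentation and bilateral refinement. The noise variances and motion threshold are assumed values, not parameters from the paper.

```python
import numpy as np

def kalman_temporal_denoise(frames, meas_var=25.0, proc_var=0.5, motion_thresh=30.0):
    """Per-pixel temporal Kalman filtering sketch for a noisy video.
    frames: iterable of HxW arrays. Where the frame difference exceeds
    motion_thresh, the process noise is inflated so moving regions follow
    the new measurement instead of being blurred."""
    frames = iter(frames)
    x = next(frames).astype(np.float32)          # state estimate (denoised frame)
    P = np.full_like(x, meas_var)                # per-pixel estimate variance
    out = [x.copy()]
    for z in frames:
        z = z.astype(np.float32)
        moving = np.abs(z - x) > motion_thresh
        P_pred = P + np.where(moving, 100.0 * proc_var, proc_var)  # larger process noise on motion
        K = P_pred / (P_pred + meas_var)         # Kalman gain
        x = x + K * (z - x)                      # measurement update
        P = (1.0 - K) * P_pred
        out.append(x.copy())
    return out
```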

Video Representation via Fusion of Static and Motion Features Applied to Human Activity Recognition

  • Arif, Sheeraz; Wang, Jing; Fei, Zesong; Hussain, Fida
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.7, pp.3599-3619, 2019
  • In a human activity recognition system, both static and motion information play a crucial role in achieving efficient and competitive results. Most existing methods are insufficient for extracting video features and unable to investigate the level of contribution of the two (static and motion) components. Our work highlights this problem and proposes a Static-Motion fused features descriptor (SMFD), which intelligently leverages both static and motion features in the form of a descriptor. First, static features are learned by a two-stream 3D convolutional neural network. Second, trajectories are extracted by tracking key points, and only those trajectories located in the central region of the original video frame are selected, in order to reduce irrelevant background trajectories as well as computational complexity. Then, shape and motion descriptors are obtained along with key points by using SIFT flow. Next, a Cholesky transformation is introduced to fuse the static and motion feature vectors and guarantee the equal contribution of all descriptors. Finally, a Long Short-Term Memory (LSTM) network is utilized to discover long-term temporal dependencies and make the final prediction. To confirm the effectiveness of the proposed approach, extensive experiments have been conducted on three well-known datasets, i.e. UCF101, HMDB51 and YouTube. The findings show that the resulting recognition system is on par with state-of-the-art methods.
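
The abstract uses a Cholesky transformation to give static and motion descriptors equal weight before fusion. One common reading of that step, sketched below, is to whiten each descriptor block with the inverse Cholesky factor of its covariance and then concatenate; the feature dimensions and the regularization term are assumptions made for the example, not details from the paper.

```python
import numpy as np

def cholesky_whiten(X, eps=1e-6):
    """Whiten a feature matrix (n_samples x dim) with the inverse Cholesky
    factor of its covariance, so every descriptor contributes at unit scale."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False) + eps * np.eye(X.shape[1])  # regularize to keep it positive definite
    L = np.linalg.cholesky(cov)                  # cov = L @ L.T
    return np.linalg.solve(L, Xc.T).T            # equivalent to Xc @ inv(L).T

def fuse_static_motion(static_feats, motion_feats):
    """Concatenate whitened static (CNN) and motion (trajectory / SIFT-flow)
    descriptors into one fused representation."""
    return np.hstack([cholesky_whiten(static_feats), cholesky_whiten(motion_feats)])

# Hypothetical shapes: 100 clips, 512-d static and 256-d motion descriptors.
fused = fuse_static_motion(np.random.randn(100, 512), np.random.randn(100, 256))
print(fused.shape)   # (100, 768)
```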

Enhanced 3D Residual Network for Human Fall Detection in Video Surveillance

  • Li, Suyuan; Song, Xin; Cao, Jing; Xu, Siyang
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.12, pp.3991-4007, 2022
  • In public healthcare, a computational system that can automatically and efficiently detect and classify falls from a video sequence has significant potential. With its ability to extract temporal and spatial information, deep learning has become more widespread in this task. However, traditional 3D CNNs, which usually adopt shallow networks, cannot reach the recognition accuracy of deeper networks. Additionally, experience with neural networks shows that the problem of exploding gradients occurs as the number of layers increases. As a result, an enhanced three-dimensional ResNet-based method for fall detection (3D-ERes-FD) is proposed to directly extract spatio-temporal features and address these issues. In our method, a 50-layer 3D residual network is used to deepen the network and improve fall recognition accuracy. Furthermore, enhanced residual units with four convolutional layers are developed to efficiently reduce the number of parameters while increasing the depth of the network. According to the experimental results, the proposed method outperformed several state-of-the-art methods.
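
For orientation, the PyTorch sketch below shows a generic bottleneck-style 3D residual unit of the kind a 50-layer 3D ResNet stacks. The paper's "enhanced residual unit with four convolutional layers" is not specified in the abstract, so the channel sizes and layer layout here are assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class Residual3DUnit(nn.Module):
    """Bottleneck 3D residual unit: 1x1x1 -> 3x3x3 -> 1x1x1 convolutions
    with a skip connection, operating on (N, C, T, H, W) clips."""
    def __init__(self, in_ch, mid_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_ch, mid_ch, kernel_size=1, bias=False),
            nn.BatchNorm3d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv3d(mid_ch, mid_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm3d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv3d(mid_ch, out_ch, kernel_size=1, bias=False),
            nn.BatchNorm3d(out_ch),
        )
        # project the skip path when the channel count changes
        self.skip = (nn.Identity() if in_ch == out_ch
                     else nn.Conv3d(in_ch, out_ch, kernel_size=1, bias=False))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + self.skip(x))

# e.g. a batch of 2 clips, 16 frames, 112x112 crops
y = Residual3DUnit(3, 16, 64)(torch.randn(2, 3, 16, 112, 112))
print(y.shape)   # torch.Size([2, 64, 16, 112, 112])
```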

Intra Prediction Method by Quadric Surface Modeling for Depth Video (깊이 영상의 이차 곡면 모델링을 통한 화면 내 예측 방법)

  • Lee, Dong-seok; Kwon, Soon-kak
    • Journal of Korea Society of Industrial Information Systems, v.27 no.2, pp.35-44, 2022
  • In this paper, we propose an intra-picture prediction method based on quadric surface modeling for depth video coding. The pixels of the depth video are transformed into 3D coordinates using their distance information. A quadric surface with the smallest error is found by the least squares method for the reference pixels, which can be either the upper pixels or the left pixels. In the intra prediction using the quadric surface, two prediction values are computed for each pixel. Two errors are computed as the sums of squared differences between each set of prediction values and the pixel values of the reference pixels. The pixels of the block are then predicted using the reference pixels and prediction method that yield the lowest error. Compared with the state-of-the-art video coding method, simulation results show that the distortion and the bit rate are improved by up to 5.16% and 5.12%, respectively.
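
A minimal sketch of the least-squares surface fit the abstract describes: fit a quadric z = ax² + by² + cxy + dx + ey + f separately to the upper and to the left reference pixels, and predict the block with whichever fit has the smaller squared error on its own references. For simplicity the sketch fits depth against image coordinates rather than the full 3D camera coordinates used in the paper.

```python
import numpy as np

def _design(xs, ys):
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    return np.column_stack([xs**2, ys**2, xs * ys, xs, ys, np.ones_like(xs)])

def fit_quadric(xs, ys, zs):
    """Least-squares fit of z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f to
    reference pixels; returns (coefficients, squared fitting error)."""
    A = _design(xs, ys)
    coef, *_ = np.linalg.lstsq(A, np.asarray(zs, float), rcond=None)
    return coef, float(np.sum((A @ coef - zs) ** 2))

def intra_predict(upper_ref, left_ref, block_coords):
    """upper_ref / left_ref: (xs, ys, depths) of the two reference pixel sets;
    block_coords: (xs, ys) of the block pixels. Uses the reference set whose
    quadric fits its own reference pixels with the smaller error."""
    coef, _ = min((fit_quadric(*upper_ref), fit_quadric(*left_ref)), key=lambda f: f[1])
    return _design(*block_coords) @ coef
```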

Fast Inverse Transform Considering Multiplications (곱셈 연산을 고려한 고속 역변환 방법)

  • Hyeonju Song; Yung-Lyul Lee
    • Journal of Broadcast Engineering, v.28 no.1, pp.100-108, 2023
  • In hybrid block-based video coding, transform coding converts spatial-domain residual signals into frequency-domain data and concentrates the energy in a low-frequency band to achieve high compression efficiency in entropy coding. The state-of-the-art video coding standard, VVC (Versatile Video Coding), uses DCT-2 (Discrete Cosine Transform type 2), DST-7 (Discrete Sine Transform type 7), and DCT-8 (Discrete Cosine Transform type 8) for the primary transform. In this paper, considering that DCT-2, DST-7, and DCT-8 are all linear transforms, we propose an inverse transform method that reduces the number of multiplications by exploiting this linearity. The proposed method reduces encoding and decoding time by an average of 26% and 15% in the AI configuration and 4% and 10% in the RA configuration, respectively, without any increase in bitrate compared to VTM-8.2.
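
The paper's exact scheme is not described in the abstract, but one way linearity cuts multiplications, sketched below, is to rebuild the residual as a linear combination of inverse-transform basis columns and simply skip zero coefficients: with m nonzero coefficients, an N-point inverse then costs m·N multiplications instead of N². The DCT-2 is used here only as a concrete example of one of VVC's primary transforms.

```python
import numpy as np

def dct2_matrix(N):
    """Orthonormal forward DCT-2 matrix T; the inverse transform is T.T."""
    n, k = np.meshgrid(np.arange(N), np.arange(N))      # k indexes rows (frequency)
    T = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    T[0, :] *= 1.0 / np.sqrt(2.0)
    return T

def inverse_transform_sparse(coeffs, inv_basis):
    """Reconstruct a 1-D residual as a weighted sum of inverse-transform basis
    columns, skipping zero coefficients entirely."""
    x = np.zeros(inv_basis.shape[0])
    for k in np.flatnonzero(coeffs):
        x += coeffs[k] * inv_basis[:, k]                 # one basis column per nonzero coefficient
    return x

N = 8
T = dct2_matrix(N)
coeffs = np.zeros(N)
coeffs[0], coeffs[2] = 10.0, -3.0                        # sparse, low-frequency block
print(np.allclose(inverse_transform_sparse(coeffs, T.T), T.T @ coeffs))  # True
```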

A Study on the Visibility Measurement of CCTV Video for Fire Evacuation Guidance (화재피난유도를 위한 CCTV 영상 가시도 측정에 관한 연구)

  • Yu, Young-Jung; Moon, Sang-Ho; Park, Seong-Ho; Lee, Chul-Gyoo
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology, v.7 no.12, pp.947-954, 2017
  • In the case of a fire in large urban structures such as super high-rise buildings, evacuation guidance must be provided to the occupants in order to minimize deaths and injuries, so it is essential to provide emergency evacuation guidance when a major fire occurs. To support evacuation guidance effectively, it is important to identify key items such as the fire location, occupant positions, and escape routes, and to quickly identify evacuation areas where residents can safely evacuate from the fire. In this paper, we analyze CCTV video and propose a method of measuring the visibility of an evacuation zone through the smoke caused by the fire, in order to determine whether that zone is safe. To do this, we first extract the background video from the smoke video so that the visibility of a specific area under smoke can be measured. After generating an edge-extracted image for the extracted background video, the degree of visibility is measured by calculating the change in edge strength due to smoke.
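
A minimal OpenCV sketch of the edge-strength comparison the abstract describes: compute an edge map for the smoke-free background and for the current frame, and use the ratio of their mean edge strengths within an evacuation zone as the visibility score. Sobel magnitude stands in here for whatever edge operator the paper uses, and the zone selection and any thresholds are left to the application.

```python
import cv2
import numpy as np

def edge_strength(gray):
    """Mean Sobel gradient magnitude as a simple edge-strength measure."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    return float(np.mean(np.sqrt(gx * gx + gy * gy)))

def visibility_ratio(background_gray, current_gray):
    """Visibility of a zone as the ratio of the current edge strength to the
    smoke-free background's edge strength (1.0 = clear, toward 0 as smoke thickens)."""
    ref = edge_strength(background_gray)
    return edge_strength(current_gray) / ref if ref > 0 else 0.0

# zone = (slice(100, 300), slice(200, 500))   # hypothetical evacuation-zone ROI
# score = visibility_ratio(background_gray[zone], current_gray[zone])
```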

Interactive information process image with minute hand gestures

  • Lim, Chan
    • Annual Conference of KIPS, 2016.04a, pp.799-802, 2016
  • Working with V4 to create various contents that emphasize different interfaces, such as 3D graphics and multimedia including video, audio, and camera, is certainly an interesting task. Moreover, beyond other interfaces, it can be used for many aspects of sensory signals, such as visual, auditory, and tactile effects, which makes it easy to develop a more refined model. We intended the users to feel a sense of pleasure and interaction, rather than merely experiencing the work as a piece of media art.

Study on Image of Future in Modern Fashion (현대 패션의 미래적 이미지에 관한 연구)

  • 김예형; 조정미
    • Journal of the Korean Society of Costume, v.53 no.1, pp.35-48, 2003
  • The primary goal of this study is to define the future image of modern fashion. Through a review of many references, this study examines the commonly predicted future and the study of the future, investigates the futurism that appeared in art history, and identifies the trend of the future image and the properties of that image in fashion. The main results of this study are as follows: 1) In general, the future means a forthcoming time or the state of life at that time, and the future does not draw near naturally with the mere passage of time; it develops according to the independent meaning the owners of time give it and the choices they make. 2) Futurism started against the background of Darwin's and Einstein's scientific theories and Bergson's and Nietzsche's philosophical thought, and was then established by Marinetti's Futurist Manifesto and the dynamism of Umberto Boccioni, Giacomo Balla, Luigi Russolo and Gino Severini. As the purpose of futurism is to represent the dynamism of machinery and the beauty of speed, it has developed toward op art and kinetic art, including video art, laser art, and holography. 3) The fashion styles and trends of futurism from the beginning of the 20th century up to now can be summarized as follows: Firstly, futurism fashion, represented by loud colors and geometric patterns, appeared from the 1910s to the 1930s. Secondly, op art fashion and kinetic fashion appeared in the 1960s under the influence of op art and kinetic art, which developed out of futurism painting; the Space Look and Cosmo Corps Look designed by Andre Courreges, Pierre Cardin, Rudi Gernreich and Paco Rabanne also belonged to this future-image fashion trend. Thirdly, various materials and techniques developed future-image fashion in the 1980s, with the Glitter Look and the Collage Look as its representative styles. Fourthly, in the 1990s, people dreamed of freedom of mind through human-oriented thought and, out of anxiety over environmental destruction, created a new concept of ecology combined with technology, which influenced the advent of the Zen style.

Postprocessing of Inter-Frame Coded Images Based on Convex Projection and Regularization (POCS와 정규화를 기반으로 한 프레임간 압축 영상의 후처리)

  • Kim, Seong-Jin; Jeong, Si-Chang; Hwang, In-Gyeong; Baek, Jun-Gi
    • Journal of the Institute of Electronics Engineers of Korea SP, v.39 no.3, pp.58-65, 2002
  • In order to reduce blocking artifacts in inter-frame coded images, we propose a new image restoration algorithm that directly processes the differential images before reconstruction. We note that blocking artifacts in inter-frame coded images are caused by both the 8×8 DCT and 16×16 macroblock-based motion compensation, while those of intra-coded images are caused by the 8×8 DCT only. Based on this observation, we propose a new degradation model for differential images and a corresponding restoration algorithm that utilizes additional constraints and convex sets for discontinuities inside blocks. The proposed restoration algorithm is a modified version of standard regularization that incorporates spatially adaptive lowpass filtering with consideration of edge directions, utilizing a part of the DCT coefficients. Most video coding standards adopt a hybrid structure of block-based motion compensation and the block discrete cosine transform (BDCT). For this reason, blocking artifacts occur both on block boundaries and in block interiors. For more complete removal of both kinds of blocking artifacts, the restored differential image must satisfy two constraints, namely directional discontinuities on block boundaries and in block interiors. These constraints have been used to define convex sets for restoring differential images.
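
For orientation, the sketch below shows the generic POCS deblocking loop this family of methods builds on: alternately apply a smoothness constraint and project each block's DCT coefficients back into their quantization intervals. The paper itself works on differential (inter-frame) images with an edge-directional, spatially adaptive filter, which this plain box-filter version does not reproduce; the quantization step and iteration count are assumed values.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import uniform_filter

def pocs_deblock(decoded, qstep, iters=5):
    """POCS-style deblocking sketch for an 8x8 DCT-coded frame (H and W must be
    multiples of 8): alternately (1) lowpass the image (smoothness constraint,
    a crude stand-in for an edge-directional adaptive filter) and (2) project
    each block's DCT coefficients back into the quantization intervals around
    the decoded coefficients (the convex quantization-constraint set)."""
    rec = decoded.astype(np.float32)
    H, W = rec.shape
    anchors = [(y, x) for y in range(0, H, 8) for x in range(0, W, 8)]
    dec_coeffs = {(y, x): dctn(rec[y:y+8, x:x+8], norm="ortho") for (y, x) in anchors}
    for _ in range(iters):
        rec = uniform_filter(rec, size=3, mode="nearest")       # smoothness constraint
        for (y, x) in anchors:                                   # quantization constraint
            c = dctn(rec[y:y+8, x:x+8], norm="ortho")
            c = np.clip(c, dec_coeffs[(y, x)] - qstep / 2, dec_coeffs[(y, x)] + qstep / 2)
            rec[y:y+8, x:x+8] = idctn(c, norm="ortho")
    return rec
```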