• Title/Summary/Keyword: texture and motion factors


Digital Video Steganalysis Based on a Spatial Temporal Detector

  • Su, Yuting; Yu, Fan; Zhang, Chengqian
    • KSII Transactions on Internet and Information Systems (TIIS), v.11 no.1, pp.360-373, 2017
  • This paper presents a novel digital video steganalysis scheme against spatial-domain video steganography, based on a spatial-temporal detector (ST_D) that considers the spatial and temporal redundancies of video sequences simultaneously. Three descriptors are constructed on the XY, XT and YT planes, respectively, to depict the spatial and temporal relationships between the current pixel and its adjacent pixels. Considering the impact of local motion intensity and texture complexity on the histogram distributions of the three descriptors, each frame is segmented into non-overlapping 8×8 blocks for motion and texture analysis. Texture and motion factors are then introduced to assign reasonable weights to the histograms of the three descriptors in each block. After this weighted modulation, the histogram statistics of the three descriptors are concatenated into a single feature vector that forms the global description of ST_D. The experimental results demonstrate the clear advantage of the proposed features over the rich model (RM), the subtractive pixel adjacency model (SPAM) and the subtractive prediction error adjacency matrix (SPEAM), especially for compressed videos, which constitute most Internet videos.
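
The pipeline outlined in this abstract (plane-wise residual descriptors, 8×8 block analysis, texture/motion weighting, histogram concatenation) can be sketched roughly as below. This is only an illustrative stand-in, not the authors' method: the concrete descriptors, the definitions of the texture and motion factors, and the weighting and quantization used by Su et al. are not given here; simple first-order residuals, block variance, and mean frame difference are assumed substitutes, and the function name `std_features` is hypothetical.

```python
import numpy as np

def std_features(video, block=8, n_bins=9, r=4):
    """video: array of shape (T, H, W), grayscale frames.
    Returns a global feature vector of residual histograms, weighted per
    8x8 block by assumed texture and motion factors."""
    v = video.astype(np.float64)
    T, H, W = v.shape
    feats = np.zeros(3 * n_bins)
    for t in range(1, T):
        frame, prev = v[t], v[t - 1]
        # First-order residuals along x, y and t stand in for the
        # paper's XY-, XT- and YT-plane descriptors.
        d_x = frame[:, 1:] - frame[:, :-1]   # horizontal residual
        d_y = frame[1:, :] - frame[:-1, :]   # vertical residual
        d_t = frame - prev                   # temporal residual
        for i in range(H // block):
            for j in range(W // block):
                ys = slice(i * block, (i + 1) * block)
                xs = slice(j * block, (j + 1) * block)
                tex = frame[ys, xs].var() + 1.0          # texture factor (assumed)
                mot = np.abs(d_t[ys, xs]).mean() + 1.0   # motion factor (assumed)
                w = 1.0 / (tex * mot)  # down-weight busy, fast-moving blocks
                for k, d in enumerate((d_x, d_y, d_t)):
                    blk = np.clip(d[ys, xs], -r, r)
                    hist, _ = np.histogram(blk, bins=n_bins, range=(-r, r))
                    feats[k * n_bins:(k + 1) * n_bins] += w * hist
    return feats / max(feats.sum(), 1e-12)
```

In this sketch the weight simply shrinks the contribution of highly textured or fast-moving blocks, whose residual statistics are less reliable for steganalysis; the resulting vector would then be fed to a classifier.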

Researching Visual Immersion Elements in VR Game <Half-Life: Alyx>

  • Chenghao Wang; Jeanhun Chung
    • International Journal of Internet, Broadcasting and Communication, v.15 no.2, pp.181-186, 2023
  • With the development of VR technology, the visual immersion of VR games has been greatly enhanced. One issue, however, has long troubled players of earlier VR games: motion sickness. As a result, VR games have been constrained in their mechanics, duration, and scale, which greatly diminishes the visually immersive experience. <Half-Life: Alyx> differs from earlier VR games in that players can actually move through the game scene rather than being fixed in one place for 360-degree observation and interaction. At the same time, unlike traditional games, VR games no longer rely on screens, and this complete visual immersion enhances the fun and playability of the game. This research focuses on the VR game <Half-Life: Alyx> to explore the factors behind its visual immersion. An in-depth analysis of elements such as color, texture mapping, and lighting shows that the game builds a strong sense of visual immersion through these aspects. This analysis helps deepen understanding of the factors that contribute to visual immersion in VR games and offers a useful reference for game developers and related professionals.

User Perception on Character Clone of Crowds based on Perceptual Organization (군중에서의 캐릭터 복제에 관한 지각체제화 기반 사용자 인지)

  • Byun, Hae-Won; Park, Yoon-Young
    • Journal of KIISE: Computing Practices and Letters, v.15 no.11, pp.819-830, 2009
  • When simulating large crowds, it is inevitable that the models and motions of many characters will be cloned. McDonnell et al. analyzed users' perception of cloned characters and established that appearance clones are far easier to detect than motion clones. In this paper, we expand McDonnell's research [1], focusing on multiple clones and appearance variety in a real-time game environment. Introducing perceptual organization, we provide appearance variety for crowd clones by using game items and texture modulation. Other factors that influence the ability to detect clones are also examined, such as the moving direction of, and the distance between, character clones. Our results provide novel insights and useful thresholds that will assist in creating more realistic crowds in game environments.
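
The appearance-variety techniques named in the abstract (game items and texture modulation) can be illustrated with a minimal sketch. The item set, tint range, and `clone_variant` function below are hypothetical stand-ins and not the study's actual implementation.

```python
import numpy as np

ITEMS = ["none", "hat", "backpack", "scarf"]   # hypothetical accessory set

def clone_variant(base_texture, rng):
    """base_texture: float RGB array in [0, 1], shape (H, W, 3).
    Returns a tinted copy of the shared texture and a randomly assigned item,
    giving each clone a slightly different appearance."""
    tint = rng.uniform(0.7, 1.3, size=3)               # per-channel tint factor
    varied = np.clip(base_texture * tint, 0.0, 1.0)    # simple texture modulation
    item = ITEMS[rng.integers(len(ITEMS))]
    return varied, item

# Example: generate 20 visually distinct clones from one placeholder texture.
rng = np.random.default_rng(0)
base = np.full((64, 64, 3), 0.5)
crowd = [clone_variant(base, rng) for _ in range(20)]
```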

A new transform coding for contours in object-based image compression (객체지향 영상압축에 있어서 윤곽선에 대한 새로운 변환 부호화)

  • 민병석; 정제창; 최병욱
    • The Journal of Korean Institute of Communications and Information Sciences, v.23 no.4, pp.1087-1099, 1998
  • In content-based image coding, where each object in the scene is encoded independently, the shape, texture and motion information are very important factors. Although the contours representing the shape of an object account for a large proportion of the total information, they strongly affect the subjective image quality; therefore, the distortion introduced by contour coding has to be minimized as much as possible. In this paper, we propose a new method for contour coding in which the contours are approximated by polygons and the error signals resulting from the polygonal approximation are transformed with new basis functions. Considering that the contour segments produced by polygonal approximation are smooth curves and that the error signals are zero at both endpoints, we design new basis functions based on the Legendre polynomials and transform the error signals with them. When applied to synthetic images such as circles and ellipses, the proposed method provides, overall, outstanding results in terms of transform coding gain compared with the DCT and DST. When applied to natural images, the proposed method gives better image quality than the DCT and results comparable to those of the DST.
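
The core idea, transform coding of polygonal-approximation error signals that vanish at both endpoints with a Legendre-derived basis, can be sketched as follows. This is not the paper's actual basis construction: here each Legendre polynomial has a straight line through its endpoint values subtracted to force zero endpoints, and the columns are then orthonormalized; the function names and the example signal are illustrative.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_zero_end_basis(n_samples, n_basis):
    """Orthonormal basis on n_samples points, zero at both endpoints,
    derived from Legendre polynomials P_2, P_3, ... (one assumed construction)."""
    x = np.linspace(-1.0, 1.0, n_samples)
    cols = []
    for n in range(2, n_basis + 2):
        p = legval(x, [0.0] * n + [1.0])                  # evaluate P_n(x)
        line = p[0] + (p[-1] - p[0]) * (x + 1.0) / 2.0    # line through endpoints
        cols.append(p - line)                             # now zero at both ends
    q, _ = np.linalg.qr(np.stack(cols, axis=1))           # orthonormalize columns
    return q                                              # shape (n_samples, n_basis)

def transform_code(error_signal, n_coeffs=4):
    """Keep only n_coeffs transform coefficients and reconstruct the signal."""
    basis = legendre_zero_end_basis(len(error_signal), n_coeffs)
    coeffs = basis.T @ error_signal        # forward transform
    return coeffs, basis @ coeffs          # coefficients, reconstruction

# Example: a smooth error signal that is zero at both segment endpoints,
# as produced by polygonal approximation of a contour segment.
t = np.linspace(0.0, np.pi, 32)
err = 0.8 * np.sin(t) - 0.3 * np.sin(2 * t)
coeffs, approx = transform_code(err)
```

Because the error signals are smooth and zero-ended by construction, a basis that itself vanishes at the endpoints concentrates the energy in a few coefficients, which is the source of the coding gain the abstract compares against the DCT and DST.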
