• Title/Summary/Keyword: video art


A Case Study of Video See-Through HMD in Military Counseling Service

  • Lee, Yoon Soo;Lee, Joong Ho
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.14 no.3
    • /
    • pp.101-107
    • /
    • 2022
  • In Korea, the military has been conducting counseling for the preemptive detection of psychologically unstable soldiers, both to prevent unexpected accidents and to help them adapt to military life. However, some soldiers feel anxious about face-to-face counseling with military officers and have difficulty expressing themselves. The Video See-Through HMD is a state-of-the-art mixed reality device that converts the user's real view into a digital view, leading users to perceive the actual situation as virtual. To validate its usefulness as a new psychological counseling aid, we investigated 11 army soldiers enrolled in a counseling program in barracks. During the counseling conversation, participants were asked to repeatedly wear and take off the Video See-Through HMD. All conversations were recorded for behavioral observation. As a result, 80% of the soldiers showed a relatively stable state of mind when wearing the Video See-Through HMD, which encouraged them to be open and frank about their concerns. This method could improve the effectiveness of counseling in preventing unexpected accidents caused by unnoticed psychological instability in clients.

Recursive block splitting in feature-driven decoder-side depth estimation

  • Szydelko, Błazej;Dziembowski, Adrian;Mieloch, Dawid;Domanski, Marek;Lee, Gwangsoon
    • ETRI Journal
    • /
    • v.44 no.1
    • /
    • pp.38-50
    • /
    • 2022
  • This paper presents a study on the use of encoder-derived features in decoder-side depth estimation. The scheme of multiview video encoding does not require the transmission of depth maps (which carry the geometry of a three-dimensional scene): only a set of input views and their parameters are compressed and packed into the bitstream, together with a set of features that make it easier to estimate geometry in the decoder. The paper proposes novel recursive block splitting for the feature extraction process and evaluates different scenarios of feature-driven decoder-side depth estimation, assessing their influence on the bitrate of metadata, the quality of the reconstructed video, and the time of depth estimation. As efficient encoding of multiview sequences has become one of the main focuses of the video coding community, the experimental results are based on the "geometry absent" profile from the upcoming MPEG Immersive Video standard. The results show that the quality of views synthesized using the proposed recursive block splitting outperforms that of the state-of-the-art approach.
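The idea of recursively splitting blocks during feature extraction can be illustrated with a quadtree-style sketch. This is a hypothetical simplification, not the authors' actual scheme: a block is split into four quadrants whenever its pixel variance exceeds a threshold, recursing down to a minimum block size.

```python
# Hypothetical sketch of recursive block splitting: a block splits into four
# quadrants when its sample variance exceeds a threshold, recursing down to a
# minimum size. Illustration only, not the paper's feature-extraction scheme.

def variance(block):
    """Sample variance of a 2D block given as a list of rows."""
    values = [v for row in block for v in row]
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def split_blocks(block, top=0, left=0, threshold=100.0, min_size=2):
    """Return (top, left, size) leaf blocks after recursive splitting."""
    size = len(block)
    if size <= min_size or variance(block) <= threshold:
        return [(top, left, size)]
    half = size // 2
    leaves = []
    for dt, dl in ((0, 0), (0, half), (half, 0), (half, half)):
        sub = [row[dl:dl + half] for row in block[dt:dt + half]]
        leaves.extend(split_blocks(sub, top + dt, left + dl, threshold, min_size))
    return leaves

# A flat 4x4 block stays whole; a high-contrast one splits into four 2x2 leaves.
flat = [[10] * 4 for _ in range(4)]
sharp = [[0, 0, 255, 255],
         [0, 0, 255, 255],
         [255, 255, 0, 0],
         [255, 255, 0, 0]]
print(split_blocks(flat))   # [(0, 0, 4)]
print(split_blocks(sharp))  # [(0, 0, 2), (0, 2, 2), (2, 0, 2), (2, 2, 2)]
```

The recursion naturally spends more signaling on detailed regions while leaving smooth regions as large blocks.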

Storytelling in Camille Claudel: Combination of sculpture art and video content (「카미유 클로델」의 스토리텔링 : 조각예술과 영상콘텐츠의 결합)

  • Cha, young-sun
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2016.05a
    • /
    • pp.361-362
    • /
    • 2016
  • This paper examines how the film Camille Claudel, at the meeting point of art and cinema, aids the understanding of sculptural art and employs narrative storytelling to reconstruct the life of the artist.


Low-Complexity Sub-Pixel Motion Estimation Utilizing Shifting Matrix in Transform Domain

  • Ryu, Chul;Shin, Jae-Young;Park, Eun-Chan
    • Journal of Electrical Engineering and Technology
    • /
    • v.11 no.4
    • /
    • pp.1020-1026
    • /
    • 2016
  • Motion estimation (ME) algorithms supporting quarter-pixel accuracy have recently been introduced to retain detailed motion information for high video quality in the state-of-the-art video compression standard H.264/AVC. Conventional sub-pixel ME algorithms in the spatial domain face a common problem of computational complexity because of their embedded interpolation schemes. This paper proposes a low-complexity sub-pixel motion estimation algorithm in the transform domain utilizing a shifting matrix. Simulations compare the performance of spatial-domain and transform-domain ME algorithms in terms of peak signal-to-noise ratio (PSNR) and the number of bits per frame. The results confirm that the transform-domain approach not only improves video quality and compression efficiency, but also remarkably alleviates computational complexity compared to the spatial-domain approach.
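For context, the baseline that such transform-domain methods aim to accelerate is spatial-domain block matching. The sketch below shows a minimal integer-pixel full search minimizing the sum of absolute differences (SAD); it is an illustration of what motion estimation computes, not the paper's shifting-matrix algorithm.

```python
# Minimal integer-pixel block-matching motion estimation using SAD
# (sum of absolute differences). This is the baseline spatial-domain
# full search, not the transform-domain shifting-matrix method.

def sad(ref, cur, ry, rx, cy, cx, n):
    """SAD between the n x n block of cur at (cy, cx) and ref at (ry, rx)."""
    return sum(abs(ref[ry + i][rx + j] - cur[cy + i][cx + j])
               for i in range(n) for j in range(n))

def full_search(ref, cur, cy, cx, n=2, radius=1):
    """Find the motion vector (dy, dx) minimizing SAD in a search window."""
    best_mv, best_cost = None, float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ry, rx = cy + dy, cx + dx
            if 0 <= ry and ry + n <= len(ref) and 0 <= rx and rx + n <= len(ref[0]):
                cost = sad(ref, cur, ry, rx, cy, cx, n)
                if cost < best_cost:
                    best_mv, best_cost = (dy, dx), cost
    return best_mv

# The 2x2 block of 9s at (0, 1) in `cur` sits at (0, 2) in `ref`,
# so the estimated motion vector is one pixel to the right.
ref = [[0, 0, 9, 9, 0],
       [0, 0, 9, 9, 0],
       [0, 0, 0, 0, 0]]
cur = [[0, 9, 9, 0, 0],
       [0, 9, 9, 0, 0],
       [0, 0, 0, 0, 0]]
print(full_search(ref, cur, 0, 1))  # (0, 1)
```

Sub-pixel accuracy would additionally interpolate `ref` between samples, which is exactly the cost that motivates working in the transform domain instead.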

Decomposed "Spatial and Temporal" Convolution for Human Action Recognition in Videos

  • Sediqi, Khwaja Monib;Lee, Hyo Jong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2019.05a
    • /
    • pp.455-457
    • /
    • 2019
  • In this paper we study the effect of decomposed spatiotemporal convolutions for action recognition in videos. Our motivation emerges from the empirical observation that spatial convolution applied to individual frames of a video provides good performance in action recognition. In this research we empirically show the accuracy of factorized convolution on individual frames of video for action classification. We take 3D ResNet-18 as the baseline model for our experiment and factorize its 3D convolution into 2D (spatial) and 1D (temporal) convolutions. We train the model from scratch on the Kinetics video dataset, then fine-tune it on the UCF-101 dataset and evaluate its performance. Our results show accuracy comparable to that of state-of-the-art algorithms on the Kinetics and UCF-101 datasets.
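One practical consequence of factorizing a 3D convolution into a 2D spatial and a 1D temporal convolution is a reduction in parameter count (unless the intermediate channel width is deliberately widened to compensate, as some (2+1)D designs do). The arithmetic below uses illustrative channel sizes, not the paper's configuration.

```python
# Parameter-count comparison of a full 3D convolution vs. its (2+1)D
# factorization into a k x k spatial conv followed by a t x 1 x 1 temporal
# conv. Channel sizes are illustrative; biases are ignored.

def params_3d(c_in, c_out, t, k):
    """Weights of a 3D conv with a t x k x k kernel."""
    return c_in * c_out * t * k * k

def params_2plus1d(c_in, c_out, t, k, c_mid=None):
    """Weights of a k x k spatial conv followed by a t-tap temporal conv."""
    if c_mid is None:
        c_mid = c_out  # hypothetical choice; widening c_mid can match 3D capacity
    return c_in * c_mid * k * k + c_mid * c_out * t

# A typical mid-network layer: 64 -> 64 channels, 3x3x3 kernel.
print(params_3d(64, 64, 3, 3))       # 110592
print(params_2plus1d(64, 64, 3, 3))  # 49152
```

With equal channel widths the factorized layer here needs fewer than half the weights, while also inserting an extra nonlinearity between the spatial and temporal steps in typical implementations.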

Dual-Stream Fusion and Graph Convolutional Network for Skeleton-Based Action Recognition

  • Hu, Zeyuan;Feng, Yiran;Lee, Eung-Joo
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.3
    • /
    • pp.423-430
    • /
    • 2021
  • Graph convolutional networks (GCNs) have achieved outstanding performance on skeleton-based action recognition. However, several problems remain in existing GCN-based methods; in particular, the low recognition rate caused by relying on a single input modality has not been effectively addressed. In this article, we propose a dual-stream fusion method that combines video data and skeleton data. The two networks process skeleton data and video data respectively, and the probabilities of the two outputs are fused to achieve information fusion. Experiments on two large datasets, Kinetics and the NTU RGB+D Human Action Dataset, illustrate that our proposed method achieves state-of-the-art performance. Compared with traditional methods, recognition accuracy is noticeably improved.
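The late score fusion described above can be sketched as a weighted average of the two streams' class-probability distributions. The weights, logits, and equal-weight choice below are hypothetical, purely to illustrate the fusion step.

```python
# Late-fusion sketch: each stream (skeleton and video) outputs class
# probabilities, and the final prediction averages them. All numbers
# here are hypothetical illustrations.
import math

def softmax(logits):
    """Convert raw scores to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fuse(skeleton_logits, video_logits, w=0.5):
    """Weighted average of the two streams' probability distributions."""
    p_skel = softmax(skeleton_logits)
    p_vid = softmax(video_logits)
    return [w * a + (1 - w) * b for a, b in zip(p_skel, p_vid)]

# The skeleton stream slightly prefers class 0, the video stream class 1;
# fusion decides by combined confidence.
fused = fuse([2.0, 1.5, 0.1], [1.0, 1.8, 0.2])
prediction = max(range(len(fused)), key=fused.__getitem__)
print(prediction)  # 1
```

Averaging probabilities (rather than logits) keeps each stream's contribution bounded, so one overconfident stream cannot dominate the decision.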

An Energy-aware Buffer-based Video Streaming Optimization Scheme (에너지 효율적인 버퍼 기반 비디오 스트리밍 최적화 기법)

  • Kang, Young-myoung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.10
    • /
    • pp.1563-1566
    • /
    • 2022
  • Video streaming applications such as Netflix and YouTube are widely used in daily life. A DASH-based streaming client exploits an adaptive bitrate (ABR) method to choose the most appropriate video source representation that the network can support. In this paper we propose a novel energy-aware ABR scheme that adds the ability to monitor energy efficiency to the linear quadratic regulator algorithm we previously introduced. Our trace-driven simulation studies show that the proposed scheme mitigates and shortens re-buffering, resulting in energy savings on mobile devices while preserving QoE similar to state-of-the-art ABR algorithms.
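A buffer-based ABR decision of the kind the abstract builds on can be sketched as mapping buffer occupancy onto a bitrate ladder: stay low when the buffer is nearly empty, climb only when it is healthy. The thresholds and ladder below are hypothetical, not the paper's controller.

```python
# Minimal buffer-based ABR sketch: pick a bitrate from the available
# representations based on current buffer occupancy. The ladder and the
# reservoir/cushion thresholds are hypothetical illustrations.

BITRATES_KBPS = [300, 750, 1500, 3000, 6000]  # hypothetical bitrate ladder

def choose_bitrate(buffer_s, reservoir_s=5.0, cushion_s=20.0):
    """Map buffer occupancy (seconds) linearly onto the bitrate ladder."""
    if buffer_s <= reservoir_s:
        return BITRATES_KBPS[0]       # danger zone: lowest quality, avoid stalls
    if buffer_s >= cushion_s:
        return BITRATES_KBPS[-1]      # full cushion: highest quality
    frac = (buffer_s - reservoir_s) / (cushion_s - reservoir_s)
    index = int(frac * (len(BITRATES_KBPS) - 1))
    return BITRATES_KBPS[index]

print(choose_bitrate(2.0))   # 300
print(choose_bitrate(12.0))  # 750
print(choose_bitrate(25.0))  # 6000
```

An energy-aware variant, as in the paper, would add a second signal (e.g., radio or decode power) to this decision rather than relying on buffer occupancy alone.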

A Comparison of Scene Change Localization Methods over the Open Video Scene Detection Dataset

  • Panchenko, Taras;Bieda, Igor
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.6
    • /
    • pp.1-6
    • /
    • 2022
  • Scene change detection is an important topic because of the wide and growing range of its applications. Streaming services from many providers are increasing their capacity, which drives industry growth. A method for scene change detection is described here and compared with state-of-the-art methods over the Open Video Scene Detection (OVSD) dataset, an open collection of Creative Commons licensed videos freely available for download and use in evaluating video scene detection algorithms. The proposed method is based on scene analysis using threshold values and smooth scene changes. The obtained results demonstrate the high efficiency of the proposed scene cut localization method: its efficiency, measured in terms of precision, recall, accuracy, and F-measure, exceeds the best previously known results.
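A threshold-based scene cut detector of the general kind described above can be sketched by comparing consecutive frames via histogram distance. The tiny grayscale frames, bin count, and threshold below are hypothetical; this illustrates the idea, not the authors' exact method.

```python
# Threshold-based scene cut detection sketch: compare consecutive frames by
# the L1 distance between their intensity histograms and report a cut when
# the distance exceeds a threshold. Parameters are hypothetical.

def histogram(frame, bins=4, max_val=256):
    """Intensity histogram of a grayscale frame (list of rows)."""
    hist = [0] * bins
    for row in frame:
        for v in row:
            hist[v * bins // max_val] += 1
    return hist

def hist_distance(a, b):
    """L1 distance between two histograms, normalized to [0, 2]."""
    total = sum(a)
    return sum(abs(x - y) for x, y in zip(a, b)) / total

def detect_cuts(frames, threshold=1.0):
    """Indices i where a cut occurs between frames[i-1] and frames[i]."""
    cuts = []
    for i in range(1, len(frames)):
        if hist_distance(histogram(frames[i - 1]), histogram(frames[i])) > threshold:
            cuts.append(i)
    return cuts

dark = [[10, 20], [30, 40]]
dark2 = [[12, 22], [28, 44]]      # same scene, small change: no cut
bright = [[200, 210], [220, 230]]  # abrupt change: cut
print(detect_cuts([dark, dark2, bright]))  # [2]
```

Handling smooth (gradual) transitions, which the paper addresses, requires looking at the distance trend over several frames rather than a single pairwise comparison.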

Robust Online Object Tracking with a Structured Sparse Representation Model

  • Bo, Chunjuan;Wang, Dong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.5
    • /
    • pp.2346-2362
    • /
    • 2016
  • As one of the most important issues in computer vision and image processing, online object tracking plays a key role in numerous areas of research and in many real applications. In this study, we present a novel tracking method based on the proposed structured sparse representation model, in which the tracked object is assumed to be sparsely represented by a set of object and background templates. The contributions of this work are threefold. First, the structure information of all the candidate samples is utilized by a joint sparse representation model, where the representation coefficients of these candidates are promoted to share the same sparse patterns. This representation model can be effectively solved by the simultaneous orthogonal matching pursuit method. Second, we develop a tracking algorithm based on the proposed representation model, a discriminative candidate selection scheme, and a simple model updating method. Finally, we conduct numerous experiments on several challenging video clips to compare the proposed tracker with various state-of-the-art tracking algorithms. Both qualitative and quantitative evaluations show that our tracker achieves better performance than these methods.
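The sparse-coding idea underlying the model can be illustrated with plain matching pursuit, a simpler relative of the simultaneous orthogonal matching pursuit the paper actually uses: greedily pick the template "atom" most correlated with the residual and subtract its contribution. This scalar sketch is not the paper's solver.

```python
# Simplified matching pursuit sketch: represent a signal as a sparse
# combination of template "atoms" by greedily picking the atom most
# correlated with the residual. The paper uses simultaneous orthogonal
# matching pursuit over candidate sets; this only illustrates the
# sparse-coding idea.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matching_pursuit(signal, atoms, n_iter=2):
    """Greedy sparse coding over unit-norm atoms: {atom_index: coefficient}."""
    residual = list(signal)
    coeffs = {}
    for _ in range(n_iter):
        # pick the atom with the largest absolute correlation with the residual
        best = max(range(len(atoms)), key=lambda i: abs(dot(residual, atoms[i])))
        c = dot(residual, atoms[best])
        coeffs[best] = coeffs.get(best, 0.0) + c
        residual = [r - c * a for r, a in zip(residual, atoms[best])]
    return coeffs

# With orthonormal atoms the decomposition is exact in two steps.
atoms = [[1.0, 0.0, 0.0],
         [0.0, 1.0, 0.0],
         [0.0, 0.0, 1.0]]
signal = [3.0, 0.0, -2.0]
print(matching_pursuit(signal, atoms))  # {0: 3.0, 2: -2.0}
```

In the tracking setting, the atoms would be object and background templates, and candidates coded mostly by object templates would score as likely target locations.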

A Study on the Application of Computers to the Development of Humor Image Fashion Design (컴퓨터를 활용한 유머 이미지 패션디자인 개발에 관한 연구)

  • 우세희;최현숙
    • Journal of the Korean Society of Costume
    • /
    • v.53 no.5
    • /
    • pp.65-77
    • /
    • 2003
  • Due to the rapid changes occurring in many aspects of contemporary society, the need for a means to actively combat widespread feelings of emptiness and alienation among the public, while satisfying its visual pleasure, is increasing. Thus the need for humorous elements which bring freedom to the human psyche is urgently requested. Of course, the field of fashion cannot be left out in this trend, and humor image design is a good example of this. Humor image in fashion endeavors to release the tension accumulated in the modern world, while trying to find a way to recover the original pureness of mankind. Another aspect currently important is computers. The creation of images in modern visual art relies a lot upon computers. Traditional visual processes such as painting, photography and video are now merged within digital technology. and are now quite symbiotic to each other. With the development of computers came computer art. which uses all applicable functions of a computer to create art. Any artistic action which uses a computer in any stage of its creation can be called computer art. The common factor in humor and computer art in modern fashion can be classified as follows : repetition, deformation and distortion. exaggeration and abridgement. juxtaposition. and Tromp l'oeil. This study has placed its objective on the fusion of humor image fashion and computer art, by manufacturing a work with humor and computers, two important aspects of modern culture. Expanding the field of fashion design while promoting creativity In fashion by finding a verging point between art and science is also necessary. I have designed and made five costumes using the above cited techniques in computer humor images, on a theoretical basis.