• Title/Summary/Keyword: Video Projection


Extensibility of Visual Expression in Projection Mapping Installation Art; Focused on Examples and Projection Mapping Installation Artwork Domino (프로젝션맵핑 기반 영상 설치 미술의 시각적 표현 확장성 -사례 분석 및 작품 을 중심으로-)

  • Fang, Bin-Zhou;Lim, Young-Hoon;Paik, Joon-Ki
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.11
    • /
    • pp.207-220
    • /
    • 2021
  • Recent advances in new media for sensory experience keep expanding the visual expression methods of installation art, such as projection mapping and virtual reality. Artists can create and develop visual expression techniques based on such new media. Projection mapping is a new medium that continues to add possibilities to visual expression in media art. In a projection mapping environment, artists can recompose an object or space with digital content by projecting video onto three-dimensional surfaces in that space. This paper focuses on the process by which visual expression with projection mapping technology leads to viewers' sensory experience. To this end, the "reproducibility," "dissemination," "virtuality," and "interactivity" of media were analyzed to describe the meaning and definition of visual expression. Artworks are examined as examples to study visual expression techniques such as "repetition and overlap," "simulacrum and metaphor," and "displacement and conversion." Applying this analysis, I created Domino, a projection mapping artwork, which supports research on visual expression techniques that can lead viewers to a sensory experience and demonstrates the extensibility of visual expression.

Robust Dynamic Projection Mapping onto Deforming Flexible Moving Surface-like Objects (유연한 동적 변형물체에 대한 견고한 다이내믹 프로젝션맵핑)

  • Kim, Hyo-Jung;Park, Jinho
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.7 no.6
    • /
    • pp.897-906
    • /
    • 2017
  • Projection mapping, also known as spatial augmented reality (SAR), has attracted much attention recently and is used in many fields, augmenting physical objects with various projected virtual replications. However, conventional approaches to projection mapping face some limitations. The geometric deformation of target objects is not considered, and the movements of flexible, paper-like objects, such as folding and bending during natural interaction, are hard to handle. Precise registration and tracking have also been cumbersome processes. While there has been much research on projection mapping onto static objects, dynamic projection mapping that keeps tracking a moving flexible target and aligns the projection at interactive rates remains a challenge. Therefore, this paper proposes a new method using Unity3D and ARToolKit for high-speed robust tracking and dynamic projection mapping onto non-rigid deforming objects, rapidly and interactively. The method consists of four stages: forming a cubic Bézier surface, rendering the transformation values, recognizing and tracking multiple markers, and capturing real-time webcam imagery. Users can fold, curve, bend, and twist the surface to interact with it. The method achieves three high-quality results. First, the system can detect strong deformation of objects. Second, it reduces the occlusion error, which reduces misalignment between the target object and the projected video. Lastly, its accuracy and robustness allow the result to be projected exactly onto the target object in real time with high-speed, precise transformation tracking.
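The first stage above forms a cubic Bézier surface over the tracked target. As a minimal sketch (the control points, grid, and parameters below are illustrative, not taken from the paper), a bicubic patch can be evaluated from a 4×4 control grid with the Bernstein basis:

```python
import numpy as np

def bernstein3(t):
    """Cubic Bernstein basis values at parameter t in [0, 1]."""
    return np.array([(1 - t) ** 3,
                     3 * t * (1 - t) ** 2,
                     3 * t ** 2 * (1 - t),
                     t ** 3])

def bezier_surface_point(ctrl, u, v):
    """Evaluate a bicubic Bezier patch at (u, v).
    ctrl: 4x4x3 array of control points."""
    bu, bv = bernstein3(u), bernstein3(v)
    # Tensor-product sum over the 4x4 control grid
    return np.einsum("i,j,ijk->k", bu, bv, ctrl)

# Flat 4x4 grid in the z=0 plane; deforming the surface would move
# these control points (as marker tracking does in the paper).
ctrl = np.zeros((4, 4, 3))
ctrl[..., 0] = np.arange(4)[:, None] / 3.0
ctrl[..., 1] = np.arange(4)[None, :] / 3.0

p = bezier_surface_point(ctrl, 0.5, 0.5)  # centre of the flat patch
```

Folding or bending the physical object corresponds to moving the control points, after which the projected texture is re-warped through the same evaluation.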

Mosaic Detection Based on Edge Projection in Digital Video (비디오 데이터에서 에지 프로젝션 기반의 모자이크 검출)

  • Jang, Seok-Woo;Huh, Moon-Haeng
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.17 no.5
    • /
    • pp.339-345
    • /
    • 2016
  • In general, mosaic blocks are used to hide specified areas, such as human faces or disturbing objects, when images are uploaded to a website or blog. This paper proposes a new algorithm for robustly detecting grid mosaic areas in an image based on edge projection. The proposed algorithm first extracts Canny edges from the input image, then detects candidate mosaic blocks from horizontal and vertical edge projections. Subsequently, it obtains the real mosaic areas by eliminating non-mosaic candidate regions using geometric features such as size and compactness. Experimental results showed that the suggested algorithm detects mosaic areas more accurately than existing methods. The suggested mosaic detection approach is expected to be useful in a variety of multimedia-related applications.
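The edge-projection idea can be sketched as follows. This is a simplified stand-in for the paper's pipeline: it assumes a binary edge map is already available (e.g. from a Canny detector), and the thresholds and the uniform-spacing test are illustrative choices, not the authors' values:

```python
import numpy as np

def regular_peak_spacing(projection, thresh):
    """Return True if high-projection columns/rows occur at a
    near-constant interval -- the signature of a mosaic grid."""
    peaks = np.flatnonzero(projection >= thresh)
    if len(peaks) < 3:
        return False
    gaps = np.diff(peaks)
    return gaps.std() < 1.0  # near-uniform spacing

def looks_like_mosaic(edges):
    """edges: 2D binary edge map (e.g. from a Canny detector)."""
    h_proj = edges.sum(axis=0)  # column sums of edge pixels
    v_proj = edges.sum(axis=1)  # row sums of edge pixels
    t_h = 0.8 * edges.shape[0]
    t_v = 0.8 * edges.shape[1]
    return regular_peak_spacing(h_proj, t_h) and regular_peak_spacing(v_proj, t_v)

# Synthetic mosaic block: grid lines every 8 pixels
edges = np.zeros((64, 64), dtype=np.uint8)
edges[::8, :] = 1
edges[:, ::8] = 1
print(looks_like_mosaic(edges))  # True
```

The paper's geometric filtering (size, compactness) would then run only on regions that pass this projection test.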

A Segmentation Method for a Moving Object on a Static Complex Background Scene (복잡한 배경에서 움직이는 물체의 영역분할에 관한 연구)

  • Park, Sang-Min;Kwon, Hui-Ung;Kim, Dong-Sung;Jeong, Kyu-Sik
    • The Transactions of the Korean Institute of Electrical Engineers A
    • /
    • v.48 no.3
    • /
    • pp.321-329
    • /
    • 1999
  • Moving object segmentation extracts an object of interest from consecutive image frames, and has been used for factory automation, autonomous navigation, video surveillance, and VOP (Video Object Plane) detection in MPEG-4. This paper proposes a new segmentation method using difference images, which are calculated from three consecutive input frames and used to compute both a coarse object area (AI) and its movement area (OI). The AI is extracted by removing the background using background area projection (BAP). Missing parts of the AI are recovered with the help of the OI: boundary information of the OI confines the missing parts of the object and provides initial curves for active contour optimization. The optimized contours, combined with the AI, form the boundaries of the moving object. Experimental results for a fast-moving object on a complex background scene are included.
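The three-frame differencing step can be sketched as below; the synthetic frames, texture, and threshold are hypothetical, and the paper's background area projection and active-contour refinement are not reproduced:

```python
import numpy as np

def difference_masks(f1, f2, f3, thresh=20):
    """Three-frame differencing: two absolute-difference images.
    Their intersection approximates the object area at the middle
    frame; their union gives the overall movement area (the paper
    additionally refines these via BAP and active contours)."""
    d1 = np.abs(f2.astype(np.int16) - f1.astype(np.int16)) > thresh
    d2 = np.abs(f3.astype(np.int16) - f2.astype(np.int16)) > thresh
    return d1 & d2, d1 | d2

# Synthetic sequence: a textured 8x8 patch sliding right over a flat background
rng = np.random.default_rng(0)
patch = rng.integers(100, 256, (8, 8), dtype=np.uint8)  # hypothetical texture
frames = []
for x in (10, 14, 18):
    f = np.full((48, 48), 30, dtype=np.uint8)
    f[20:28, x:x + 8] = patch
    frames.append(f)

object_area, movement_area = difference_masks(*frames)
```

Pixels that change across both frame intervals (the intersection) cluster around the middle-frame object position, which is what BAP and the contour optimization then clean up.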


Implementation of SEI Parser and Decoder for Virtual Reality Video Projection Processing (가상 현실 비디오 프로젝션 처리를 위한 SEI 구문 분석기와 디코더 구현)

  • Jeong, JongBeom;Son, Jang-Woo;Jang, Dongmin;Ryu, Eun-Seok
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2018.06a
    • /
    • pp.1-4
    • /
    • 2018
  • Recent video systems supporting 360-degree virtual reality must handle a variety of projection formats. To this end, MPEG (Moving Picture Experts Group) video standardization has adopted techniques that signal the projection through supplementary video information: video metadata corresponding to the various projection formats is carried in the Supplemental Enhancement Information (SEI) messages defined in H.265/HEVC (High Efficiency Video Coding). This paper introduces the implementation of a system that processes video differently according to its projection type during encoding and decoding. The SEI message parser is implemented on top of the HEVC Test Model (HM), and the decoder is implemented using the FFmpeg library. The resulting system was integrated into our institution's real-time 360 video player and supported real-time decoding and pre/post-processing of the various projections without problems.
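A minimal sketch of locating SEI NAL units in an HEVC Annex-B bitstream is shown below. The NAL unit types 39/40 come from the HEVC specification; the synthetic stream and the omission of payload parsing are simplifications (the paper builds its full parser on HM and FFmpeg):

```python
PREFIX_SEI_NUT, SUFFIX_SEI_NUT = 39, 40  # HEVC NAL unit types for SEI

def find_sei_nal_units(bitstream: bytes):
    """Scan an HEVC Annex-B bitstream for SEI NAL units.
    Returns (offset, nal_unit_type) pairs. A real parser would go on
    to decode the SEI payload, e.g. the projection metadata."""
    results = []
    i = 0
    while True:
        i = bitstream.find(b"\x00\x00\x01", i)
        if i < 0 or i + 4 > len(bitstream):
            break
        # HEVC NAL header: type sits in bits 1..6 of the first byte
        nal_type = (bitstream[i + 3] >> 1) & 0x3F
        if nal_type in (PREFIX_SEI_NUT, SUFFIX_SEI_NUT):
            results.append((i, nal_type))
        i += 3
    return results

# Synthetic stream: a VPS-like unit (type 32) followed by a prefix SEI (type 39)
stream = (b"\x00\x00\x00\x01" + bytes([32 << 1, 1]) + b"\xAA"
          + b"\x00\x00\x00\x01" + bytes([39 << 1, 1]) + b"\xBB")
print(find_sei_nal_units(stream))  # [(8, 39)]
```

A projection-aware player would dispatch on the SEI payload type found inside each such unit before choosing its rendering path.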


MMT based V3C data packetizing method (MMT 기반 V3C 데이터 패킷화 방안)

  • Moon, Hyeongjun;Kim, Yeonwoong;Park, Seonghwan;Nam, Kwijung;Kim, Kyuhyeon
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2022.06a
    • /
    • pp.836-838
    • /
    • 2022
  • 3D point clouds are a data format for representing 3D content more realistically. Point cloud data exists in three-dimensional space and is far larger than conventional 2D video, so compression techniques such as V-PCC (Video-based Point Cloud Compression) are required for its smooth transmission and storage. V-PCC extracts point cloud data as patches and projects them onto 2D planes, converting the 3D content into a 2D format that can be compressed with existing 2D video codecs; the resulting 2D video can then be delivered over networks using existing 2D transmission methods. This paper proposes a packetization scheme that builds MPEG Media Transport (MMT) packets to transmit and consume V3C (Visual Volumetric Video Coding) data compressed with V-PCC over broadcast networks. The scheme is verified by comparing the V3C bitstreams exchanged between server and client.
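The packetize-then-compare workflow can be sketched as below. The 8-byte header is a toy layout for illustration only, not the MMTP header defined in ISO/IEC 23008-1, and the payload size is an arbitrary choice:

```python
import struct

MTU_PAYLOAD = 1400  # bytes per packet payload (illustrative)

def packetize(v3c_bitstream: bytes, packet_id: int = 1):
    """Split a V3C bitstream into simplified MMT-style packets.
    Toy 8-byte header: (packet_id, sequence number), big-endian."""
    packets = []
    for seq, off in enumerate(range(0, len(v3c_bitstream), MTU_PAYLOAD)):
        payload = v3c_bitstream[off:off + MTU_PAYLOAD]
        packets.append(struct.pack(">II", packet_id, seq) + payload)
    return packets

def depacketize(packets):
    """Reassemble payloads in sequence-number order."""
    ordered = sorted(packets, key=lambda p: struct.unpack(">II", p[:8])[1])
    return b"".join(p[8:] for p in ordered)

# Round-trip check, mirroring the paper's server/client bitstream comparison
original = bytes(range(256)) * 20  # stand-in for a V3C bitstream
packets = packetize(original)
assert depacketize(packets[::-1]) == original  # survives reordering
```

The sequence number is what allows the client-side bitstream to be reassembled bit-exactly and compared against the server's copy.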


Spatial-temporal texture features for 3D human activity recognition using laser-based RGB-D videos

  • Ming, Yue;Wang, Guangchao;Hong, Xiaopeng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.3
    • /
    • pp.1595-1613
    • /
    • 2017
  • The IR camera and laser-based IR projector provide an effective solution for real-time capture of moving targets in RGB-D videos. Unlike traditional RGB videos, the captured depth videos are not affected by illumination variation. In this paper, we propose a novel feature extraction framework, spatial-temporal texture features, to describe human activities for 3D activity recognition based on this optical video capturing method. Spatial-temporal texture features with depth information are insensitive to illumination and occlusion, and efficient for describing fine motion. The proposed framework begins with video acquisition based on laser projection and video preprocessing with visual background extraction, obtaining spatial-temporal key images. The texture features encoded from these key images are then used to generate discriminative features for human activity recognition. Experimental results on different databases and in practical scenarios demonstrate the effectiveness of the proposed algorithm on large-scale data sets.
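One common choice of texture descriptor for such key images is a local binary pattern histogram; the sketch below is an illustrative assumption, as the paper's exact encoding may differ:

```python
import numpy as np

def lbp_histogram(img):
    """8-neighbour local binary pattern histogram: each interior pixel
    gets an 8-bit code from comparing its neighbours to its centre
    value, and the normalized code histogram is the texture feature."""
    c = img[1:-1, 1:-1]
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:, 2:],     img[2:, 1:-1],
                  img[2:, 0:-2],   img[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes |= ((n >= c) << bit).astype(np.uint8)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

# A flat depth patch: every neighbour >= centre, so every code is 255
flat = np.full((10, 10), 50, dtype=np.uint8)
h = lbp_histogram(flat)
```

Because the codes depend only on local intensity ordering, the descriptor is inherently insensitive to global illumination shifts, matching the property claimed in the abstract.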

Parametric Video Compression Based on Panoramic Image Modeling (파노라믹 영상 모델에 근거한 파라메트릭 비디오 압축)

  • Sim Dong-Gyu
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.4 s.310
    • /
    • pp.96-107
    • /
    • 2006
  • In this paper, a low-bitrate video coding method based on new panoramic modeling is proposed for panning cameras. An input video frame from a panning camera is decomposed into a background image, rectangular moving-object regions, and a residual image. In coding the background, we employ a panoramic model that accounts for several image formation processes, such as perspective projection, lens distortion, vignetting, and illumination effects. Moving objects are detected, and their minimum bounding rectangular regions are coded with a JPEG-2000 coder. We evaluated the effectiveness of the proposed algorithm with several indoor and outdoor sequences and found that the PSNR is improved by 1.3~4.4 dB compared to JPEG-2000.
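Coding the minimum bounding rectangle of each moving object presumes extracting that rectangle from a foreground mask; a sketch follows (the mask itself is assumed given, e.g. from subtraction against the panoramic background):

```python
import numpy as np

def min_bounding_rect(mask):
    """Minimum bounding rectangle (top, left, bottom, right) of the
    nonzero pixels in a foreground mask; the paper codes this region
    with JPEG-2000 separately from the panoramic background."""
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    if rows.size == 0:
        return None  # no moving object in this frame
    return int(rows[0]), int(cols[0]), int(rows[-1]) + 1, int(cols[-1]) + 1

mask = np.zeros((40, 60), dtype=bool)
mask[12:20, 25:45] = True  # hypothetical moving-object pixels
print(min_bounding_rect(mask))  # (12, 25, 20, 45)
```

Sending only this rectangle plus the panoramic background is what drives the bitrate down relative to coding full frames.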

A Study on the Shooting Training of a Basketball Sports Club Using Video Files (영상 자료를 활용한 학생 농구부 슈팅 훈련 지도방안)

  • Kim, Semin;Lee, Gyujeong;Lee, Jeongwon;Jeon, Byungil;Hong, Ki-Cheon;You, Kangsoo;Lee, Choong Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2019.05a
    • /
    • pp.382-384
    • /
    • 2019
  • In this study, we analyzed shooting angles based on footage of basketball student athletes' shooting training recorded by the team leaders. In the shooting video, the player's release height and the trajectory of the ball were drawn as solid lines, and the footage was shown to the players to guide them in adjusting the projection angle toward situations with a higher success rate. This study shows that video can be used to give basketball players efficient shooting guidance.
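The projection-angle analysis can be sketched by fitting a parabola to tracked ball positions; the coordinates below are hypothetical samples with y measured upward, not data from the study:

```python
import numpy as np

def release_angle_deg(xs, ys):
    """Fit a parabola y = a*x^2 + b*x + c to tracked ball positions
    and return the launch angle at the first sample, in degrees."""
    a, b, _ = np.polyfit(xs, ys, 2)
    slope = 2 * a * xs[0] + b  # dy/dx at the release point
    return np.degrees(np.arctan(slope))

# Synthetic 45-degree launch: y = x - 0.01*x^2, so dy/dx at x=0 is 1
xs = np.linspace(0, 80, 9)
ys = xs - 0.01 * xs ** 2
print(round(release_angle_deg(xs, ys), 1))  # 45.0
```

Overlaying the fitted parabola on the footage as a solid line is exactly the kind of visual feedback the study describes giving players.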


Real-time multi-GPU-based 8KVR stitching and streaming on 5G MEC/Cloud environments

  • Lee, HeeKyung;Um, Gi-Mun;Lim, Seong Yong;Seo, Jeongil;Gwak, Moonsung
    • ETRI Journal
    • /
    • v.44 no.1
    • /
    • pp.62-72
    • /
    • 2022
  • In this study, we propose a multi-GPU-based 8KVR stitching system that operates in real time in both local and cloud machine environments. The proposed system first obtains multiple 4K video inputs, decodes them, and generates a stitched 8KVR video stream in real time. The generated 8KVR video stream can be downloaded and rendered omnidirectionally in player apps on smartphones, tablets, and head-mounted displays. To speed up processing, we adopt group-of-pictures-based distributed decoding/encoding and buffering with the NV12 format, along with multi-GPU-based parallel processing. Furthermore, we develop several algorithms, such as equirectangular-projection-based color correction, real-time CG overlay, and object-motion-based seam estimation and correction, to improve the stitching quality. From experiments in both local and cloud machine environments, we confirm the feasibility of the proposed 8KVR stitching system with stitching speeds of up to 83.7 fps for six-channel and 62.7 fps for eight-channel inputs. In addition, in an 8KVR live streaming test on the 5G MEC/cloud, the proposed system achieves stable performance at 8K@30 fps in both indoor and outdoor environments, even during motion.
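The equirectangular projection underlying the color correction and omnidirectional rendering maps a view direction to pixel coordinates; a minimal sketch, assuming a 7680×3840 (8K equirectangular) frame:

```python
import numpy as np

def dir_to_equirect(v, width, height):
    """Map a unit 3D view direction to equirectangular pixel
    coordinates: longitude -> x, latitude -> y. Convention here:
    +z is 'forward', +y is 'up' (an illustrative choice)."""
    x, y, z = v
    lon = np.arctan2(x, z)              # [-pi, pi]
    lat = np.arcsin(np.clip(y, -1, 1))  # [-pi/2, pi/2]
    px = (lon / (2 * np.pi) + 0.5) * width
    py = (0.5 - lat / np.pi) * height
    return px, py

# Looking straight ahead (+z) lands at the centre of the 8K frame
px, py = dir_to_equirect((0.0, 0.0, 1.0), 7680, 3840)
print(px, py)  # 3840.0 1920.0
```

Both the stitcher (writing camera pixels into the panorama) and the player (sampling the panorama per view ray) evaluate this mapping or its inverse for every pixel, which is why it is the natural domain for per-seam color correction.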