• Title/Summary/Keyword: Chromakey

Automatic Video Chromakeying Generation Technology Using Background Modeling (배경 모델링을 이용한 비디오 크로마키 생성기법)

  • Yoo, Gil-Sang
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.10
    • /
    • pp.1-8
    • /
    • 2021
  • In online meetings and classes that use webcams, the chromakey technique is essential for producing content. We propose a technology that enables background synthesis without a chromakey cloth. The proposed method consists of three steps: converting the image to HSI, detecting the region that has changed from the background, and replacing that region with a chromakey background. For the input video, a block-average image is computed for each frame, and the difference between the block-average image of the background and that of the input frame is used to detect the changed region. The developed chromakey technology acquires an object-free background image from a single camera and extracts only the object by distinguishing the moving object from the background. The proposed method works even when the background contains a variety of colors and also handles object boundaries seamlessly. (A sketch of the block-difference step appears below.)
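
The block-difference step described in the abstract can be illustrated with a short sketch. This is a minimal, hypothetical Python/OpenCV version assuming a fixed block size, a simple threshold, and a green key color, none of which are parameters from the paper; OpenCV offers HSV/HLS rather than HSI, so HSV stands in for the paper's HSI conversion.

```python
import cv2
import numpy as np

def block_average(img, block=16):
    """Downscale by averaging each block x block region (the block-average image)."""
    h, w = img.shape[:2]
    return cv2.resize(img, (w // block, h // block), interpolation=cv2.INTER_AREA)

def chromakey_from_background(background_bgr, frame_bgr, block=16, thresh=20):
    """Replace unchanged (background) regions of the frame with a uniform key color."""
    # Compare block-average images in an HSI-like space (HSV here).
    bg_small = block_average(cv2.cvtColor(background_bgr, cv2.COLOR_BGR2HSV), block)
    fr_small = block_average(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV), block)
    diff = np.abs(bg_small.astype(np.int16) - fr_small.astype(np.int16)).sum(axis=2)

    # Blocks whose difference exceeds the threshold are treated as the moving object.
    mask_small = (diff > thresh).astype(np.uint8) * 255
    mask = cv2.resize(mask_small, (frame_bgr.shape[1], frame_bgr.shape[0]),
                      interpolation=cv2.INTER_NEAREST)

    # Keep the object pixels, replace everything else with a green key color.
    key = np.zeros_like(frame_bgr)
    key[:] = (0, 255, 0)
    return np.where(mask[..., None] > 0, frame_bgr, key)
```

The key-colored output can then be fed to any ordinary chromakey compositor, which is what allows background synthesis without a physical chromakey cloth.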

Development of a Mobile App Combining React Native and Unity3D for Chromakey-based Image Composition (React Native와 Unity3D를 활용한 크로마키 기반 이미지 합성 모바일 앱 개발)

  • Kim, Seung-Jun;Seo, Beom-Joo;Cho, Sung-Hyun
    • Journal of Korea Game Society
    • /
    • v.20 no.4
    • /
    • pp.11-20
    • /
    • 2020
  • In the rapidly changing mobile app market, it is crucial to develop a good idea quickly and have it evaluated by the market. For a small company, however, rapidly developing and deploying products for highly fragmented mobile environments is very challenging. This article demonstrates that an integrated development environment combining React Native and Unity3D can successfully meet demanding functionality and performance requirements when developing a mobile app. Moreover, this integrated environment helps reduce development costs and shorten development time.

The Design and Implementation of a Depth Map-based Real-time Virtual Image Synthesis System (깊이 맵 기반의 실시간 가상 영상합성 시스템의 설계 및 구현)

  • Lee, Hye-Mi;Ryu, Nam-Hoon;Roh, Gwhan-Sung;Kim, Eung-Kon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.9 no.11
    • /
    • pp.1317-1322
    • /
    • 2014
  • To complete a composited image, the actual actor's motion must be captured and combined with a virtual environment. Because of high production costs or a lack of post-processing technology, however, this is mostly done by manual labor. In a virtual chromakey studio the actor performs from imagination, moving as if colliding with or reacting to objects that do not exist. When the actor's motion does not match the virtual environment during CG composition, the original footage may have to be discarded and reshot. This study proposes and implements a depth-based real-time 3D virtual image composition system to reduce the reshoot rate, shorten production time, and lower production cost. Because the virtual background, 3D models, and the actual actor are composited in real time on set, collisions and interactions can be checked immediately, and an actor's wrong position or performance can be corrected on the spot. (A compositing sketch follows below.)
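
The core of such a system is a per-pixel depth comparison between the captured actor (with an estimated depth map) and the rendered virtual scene (with its depth buffer). The sketch below is a minimal, hypothetical illustration of that z-test compositing; the array layout, depth units, and the contact threshold are assumptions for illustration, not details from the paper.

```python
import numpy as np

def depth_composite(actor_rgb, actor_depth, virtual_rgb, virtual_depth):
    """Per-pixel composition: at each pixel, keep whichever layer is closer to the camera.

    actor_rgb, virtual_rgb       : (H, W, 3) uint8 color images
    actor_depth, virtual_depth   : (H, W) float32 depth in the same units (smaller = closer)
    """
    actor_in_front = actor_depth < virtual_depth            # boolean per-pixel mask
    out = np.where(actor_in_front[..., None], actor_rgb, virtual_rgb)

    # Simple proximity check: flag pixels where the actor and the virtual geometry
    # nearly coincide, i.e. where a collision or contact would be visible on set.
    contact = np.abs(actor_depth - virtual_depth) < 0.05    # 5 cm, assuming meters
    return out, contact
```

In practice the actor's depth would come from a depth camera or stereo rig and the virtual depth from the renderer's z-buffer; aligning the two depth ranges is the main calibration task before this comparison is meaningful.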

A Real-time Virtual Imaging System (실시간 가상이미징 시스템)

  • 남승진;오주현;박성춘
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2002.11a
    • /
    • pp.269-274
    • /
    • 2002
  • Recently, in TV sports broadcasts such as soccer and baseball, a new technique has come into use that composites graphic information such as team logos, scores, and distances onto the playing field. This is a real-time image composition technique, a branch of augmented reality (AR), and is commonly called a virtual imaging system. This paper introduces the implementation and related technology of a sensor-based virtual imaging system that attaches sensors to the lens and the pan and tilt axes of a broadcast camera and interprets their readings to control, in real time, a virtual camera in a 3D graphics coordinate system. The KBS Technical Research Institute developed a real-time virtual imaging system and named it 'VIVA (Virtual Imaging & Virtual Advertising)'. The system performs camera calibration based on lens data and accounts for the change in the camera's viewpoint that occurs during zooming, so it can be used not only for long-range sports coverage but also for short-range in-studio productions that demand higher accuracy. The composited graphics are placed in a full 3D coordinate space, just like the real world, and pre-built objects can be positioned anywhere desired using the world and camera coordinate systems. For effective composition of live footage and graphics, an overlay mode as well as a mode combining an alpha key and a chromakey were used. VIVA was used for swimming and archery coverage at the 2002 Busan Asian Games. (A small projection sketch follows below.)
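
To illustrate the sensor-driven virtual camera described above, here is a minimal, hypothetical sketch: pan/tilt angles and a focal length read from sensors are turned into a pinhole projection, and a pre-placed 3D object point is projected into the camera image. The rotation convention, sensor units, and intrinsic parameters are illustrative assumptions, not the VIVA calibration model.

```python
import numpy as np

def camera_matrix(pan_deg, tilt_deg, focal_px, cx, cy, cam_pos):
    """Build a simple pinhole projection matrix from pan/tilt/lens sensor readings.

    pan_deg, tilt_deg : encoder readings in degrees (pan about the vertical axis,
                        tilt about the camera's horizontal axis)
    focal_px          : focal length in pixels, derived from the lens (zoom) sensor
    cx, cy            : principal point (image center)
    cam_pos           : camera position in world coordinates (3-vector)
    """
    p, t = np.radians(pan_deg), np.radians(tilt_deg)
    R_pan = np.array([[ np.cos(p), 0, np.sin(p)],
                      [ 0,         1, 0        ],
                      [-np.sin(p), 0, np.cos(p)]])
    R_tilt = np.array([[1, 0,          0         ],
                       [0, np.cos(t), -np.sin(t)],
                       [0, np.sin(t),  np.cos(t)]])
    R = R_tilt @ R_pan                       # world -> camera rotation
    K = np.array([[focal_px, 0, cx],
                  [0, focal_px, cy],
                  [0, 0, 1]])
    Rt = np.hstack([R, -R @ np.asarray(cam_pos, float).reshape(3, 1)])
    return K @ Rt                            # 3x4 projection matrix

def project(P, world_point):
    """Project a world-space point (x, y, z) to pixel coordinates."""
    x = P @ np.append(world_point, 1.0)
    return x[:2] / x[2]
```

Graphics rendered through such a virtual camera can then be combined with the live video by an alpha key (graphic over video) or, as in the abstract, an alpha key combined with a chromakey so the graphic appears only over keyable regions of the field.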

A Study on the Comparison of Chromakey Backgrounds for a Virtual Studio (가상 스튜디오 크로마키 배경 비교에 관한 연구)

  • Lee, Choong-Koo;Park, Gooman
    • Journal of Satellite, Information and Communications
    • /
    • v.7 no.2
    • /
    • pp.36-41
    • /
    • 2012
  • In this paper we compare three materials used as chromakey backgrounds in a virtual studio. Each material reflects the studio lighting in a different way. In our experimental comparison, 'chromatte' produced the best background image quality and, even without a predetermined lighting setup, offered the widest range of adaptability.

The Extraction of Camera Parameters using Projective Invariance for Virtual Studio (가상 스튜디오를 위한 카메라 파라메터의 추출)

  • Han, Seo-Won;Eom, Gyeong-Bae;Lee, Jun-Hwan
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.9
    • /
    • pp.2540-2547
    • /
    • 1999
  • Chromakeying is one of the key technologies for realizing a virtual studio: the blue portions of a captured image are replaced with a computer-generated or real image. The replacement image must change according to the studio camera's parameters so that it merges naturally with the non-blue portions of the captured image. This paper proposes a novel method for extracting camera parameters by recognizing pentagonal patterns painted on the blue screen. We find corresponding points between the blue screen and the captured image using the projective invariant features of a pentagon, and then calculate the camera parameters from these correspondences with a modification of Tsai's method. Experimental results indicate that the proposed method is more accurate than the conventional method and can process about twelve video frames per second on a 166 MHz Pentium-MMX processor. (A sketch of the projective-invariant computation follows below.)
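
Five coplanar points admit two independent projective invariants, formed as ratios of 3x3 determinants of their homogeneous coordinates in which every point appears equally often in numerator and denominator. The sketch below is a minimal, hypothetical illustration of that idea and of matching a detected pentagon to the known pattern by cyclic ordering; the paper's exact invariant formulation and its modified Tsai calibration are not reproduced here.

```python
import numpy as np

def _det3(p, i, j, k):
    """Determinant of the homogeneous coordinates of points i, j, k."""
    return np.linalg.det(np.stack([p[i], p[j], p[k]]))

def projective_invariants(points_xy):
    """Two projective invariants of five coplanar points, taken in the given order.

    points_xy : (5, 2) array of pattern or image coordinates, no three collinear.
    Returns (I1, I2), unchanged by any projective transformation of the plane.
    """
    p = np.hstack([np.asarray(points_xy, float), np.ones((5, 1))])   # homogeneous coords
    I1 = (_det3(p, 0, 1, 4) * _det3(p, 0, 2, 3)) / (_det3(p, 0, 1, 3) * _det3(p, 0, 2, 4))
    I2 = (_det3(p, 1, 2, 4) * _det3(p, 0, 1, 3)) / (_det3(p, 1, 2, 3) * _det3(p, 0, 1, 4))
    return I1, I2

def match_pentagon(pattern_xy, detected_xy, tol=1e-2):
    """Find the cyclic ordering of detected vertices whose invariants match the pattern."""
    detected_xy = np.asarray(detected_xy, float)
    ref = np.array(projective_invariants(pattern_xy))
    for shift in range(5):
        order = np.roll(np.arange(5), shift)
        cand = np.array(projective_invariants(detected_xy[order]))
        if np.allclose(cand, ref, atol=tol):
            return order          # detected[order[i]] corresponds to pattern[i]
    return None
```

Once the vertex correspondences are fixed this way, they can be fed to a calibration routine (such as Tsai's method, as in the paper) to recover the camera parameters used to render the replacement background.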
