• Title/Summary/Keyword: Visual Scene

Design and Implementation of Interactive Multi-view Visual Contents Authoring System (대화형 복수시점 영상콘텐츠 저작시스템 설계 및 구현)

  • Lee, In-Jae;Choi, Jin-Soo;Ki, Myung-Seok;Jeong, Se-Yoon;Moon, Kyung-Ae;Hong, Jin-Woo
    • Journal of Broadcast Engineering / v.11 no.4 s.33 / pp.458-470 / 2006
  • This paper describes issues and considerations in authoring interactive multi-view visual content based on MPEG-4. The issues include the types of multi-view visual content, scene composition for rendering, functionalities for user interaction, and the multi-view visual content file format. The MPEG-4 standard, which aims to provide an object-based audiovisual coding tool, has been developed to address emerging needs from communications and interactive broadcasting, as well as from mixed service models resulting from technological convergence. Owing to its object-based coding, MPEG-4 can resolve the format-diversity problem of multi-view visual content while providing high interactivity to users. Throughout this paper, we present which issues need to be determined and how they can be realized by means of MPEG-4 Systems.
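As a rough illustration of the object-based scene composition the abstract refers to, the sketch below models a multi-view scene as a tree of video objects with a user-switchable active view. The node and field names are invented for illustration; they are not the actual MPEG-4 BIFS node set or the paper's file format.

```python
# Hypothetical sketch of an object-based multi-view scene tree in the spirit
# of MPEG-4 Systems; names are illustrative, not the real BIFS node set.
from dataclasses import dataclass, field
from typing import List

@dataclass
class VideoObject:
    object_id: int   # would map to an elementary stream via an object descriptor
    view_label: str  # e.g. "front", "left", "right"

@dataclass
class ViewSwitch:
    views: List[VideoObject] = field(default_factory=list)
    active: int = 0  # index changed by a user-interaction event

    def on_user_select(self, index: int) -> VideoObject:
        """Switch the rendered view in response to user interaction."""
        self.active = index % len(self.views)
        return self.views[self.active]

scene = ViewSwitch([VideoObject(1, "front"), VideoObject(2, "left"),
                    VideoObject(3, "right")])
print(scene.on_user_select(2).view_label)  # -> "right"
```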

An Analysis on the Range of Singular Fusion of Augmented Reality Devices

  • Lee, Hanul;Park, Minyoung;Lee, Hyeontaek;Choi, Hee-Jin
    • Current Optics and Photonics / v.4 no.6 / pp.540-544 / 2020
  • Current two-dimensional (2D) augmented reality (AR) devices present virtual images and information on a fixed focal plane, regardless of the various locations of ambient objects of interest around the observer. This limitation can lead to visual discomfort caused by misalignment between the view of the ambient object of interest and the visual representation on the AR device, due to a failure of singular fusion. Since the misalignment becomes more severe as the depth difference grows, it can hamper visual understanding of the scene and interfere with the viewer's task performance. Thus, we analyzed the range of singular fusion (RSF) of AR images, within which viewers can perceive the shape of an object presented on two different depth planes without difficulty from a failure of singular fusion. We expect our analysis to inform the development of advanced AR systems with low visual discomfort.
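The geometry behind this effect can be sketched with a simple vergence calculation: the angular disparity between two depth planes grows as their depths diverge. The fusional limit below is a placeholder value, not the RSF measured in the study.

```python
# A minimal sketch (not from the paper) of the angular disparity between two
# depth planes, compared against an illustrative fusional limit.
import math

IPD = 0.064  # interpupillary distance in meters (typical adult value)

def vergence_angle(distance_m: float) -> float:
    """Binocular vergence angle (radians) for a target at the given distance."""
    return 2.0 * math.atan(IPD / (2.0 * distance_m))

def disparity_arcmin(d1: float, d2: float) -> float:
    """Angular disparity (arcminutes) between planes at depths d1 and d2."""
    return abs(vergence_angle(d1) - vergence_angle(d2)) * (180.0 / math.pi) * 60.0

FUSION_LIMIT = 20.0  # arcmin; illustrative stand-in for the fusional range
d_ar, d_object = 2.0, 0.5  # AR focal plane at 2 m, ambient object at 0.5 m
delta = disparity_arcmin(d_ar, d_object)
print(f"{delta:.1f} arcmin:", "fusible" if delta <= FUSION_LIMIT else "diplopic")
```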

A Study on the Correlation of the Theory of Montage in Film Arts with Animation (영상예술 몽타주이론과 애니메이션의 상관관계 연구)

  • Lee, Lee-Nam
    • Cartoon and Animation Studies / s.9 / pp.199-219 / 2005
  • This paper examines how montage theory and the mise-en-scène effect appear in screen media, which concrete projects illustrate them, and how these theories have supported the effects and development of animation. It also describes how montage theory and the mise-en-scène effect, drawn from studies of representative genres of visual media, have been imported into and expressed in the field of animation. The purpose of this thesis is to encourage creative animation scenes through an open understanding and acceptance of montage theory and the mise-en-scène effect, with a view to animation's future progress. It is hoped that these suggestions will support the creative and special effects of animation as a part of the screen arts and contribute to progress in the animation field.

Children's Education Application Design Using AR Technology (AR기술을 활용한 어린이 교육 어플리케이션 디자인)

  • Chung, HaeKyung;Ko, JangHyok
    • Journal of the Semiconductor & Display Technology / v.20 no.4 / pp.23-28 / 2021
  • Augmented reality is a technique for combining virtual images with real life by overlaying information about virtual 3D objects on the real-world environment (Azuma et al., 2001). This study presents an augmented reality-based educational content delivery system. It receives user input selecting either a preset object or a photographed object for augmented reality-based training; a three-dimensional model generation unit builds a stereoscopic model of the object in the augmented reality environment, along with a scene view and the process of unfolding the model into its development (net); and a content control unit generates educational content, including the three-dimensional model, a scene chart, the scene itself, and the disassembly and assembly processes, so that the user terminal can guide assembly from the unfolded scene back to the three-dimensional model in the augmented reality environment. Follow-up work will provide a variety of educational content so that children can use AR technology beyond geometric shapes to improve learning effectiveness. Studies are also needed to quantitatively measure which educational content is more effective when AR technology is used.
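The pipeline the abstract describes (object selection, model generation, content control) could be organized along the following lines. All names here are invented for illustration; the paper's actual units are not specified at this level of detail.

```python
# Hypothetical sketch of the content pipeline: object selection ->
# 3D model generation -> educational content (disassemble, then reassemble).
from dataclasses import dataclass
from typing import List

@dataclass
class Model3D:
    name: str
    faces: List[str]  # stand-in for real mesh data

def generate_model(selection: str) -> Model3D:
    """Model generation unit: build a model from a preset or photographed object."""
    return Model3D(selection, faces=["top", "bottom", "side"])

def build_content(model: Model3D) -> List[str]:
    """Content control unit: steps from unfolded net back to the 3D shape."""
    steps = [f"show scene chart for {model.name}"]
    steps += [f"disassemble face: {f}" for f in model.faces]          # unfolding
    steps += [f"assemble face: {f}" for f in reversed(model.faces)]   # rebuilding
    return steps

for step in build_content(generate_model("cube")):
    print(step)
```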

Implementation of Video-Forensic System for Extraction of Violent Scene in Elevator (엘리베이터 내의 폭행 추출을 위한 영상포렌식 시스템 구현)

  • Shin, Kwang-Seong;Shin, Seong-Yoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.10 / pp.2427-2432 / 2014
  • Video forensics is defined as research on methods for efficiently analyzing evidence from crime-related visual images in the field of digital forensics. The Color-$\chi^2$ histogram is used here as the method for scene-change detection: it extracts violent scenes in an elevator and can then be used for real-time surveillance of criminal acts, as well as to secure evidence discovered afterwards and to substantiate the analysis process. The color-histogram difference method computes, between two frames, the difference of their RGB color histograms. This paper uses the Color-$\chi^2$ histogram, which combines the merits of the color histogram and the $\chi^2$ histogram, to efficiently extract violent scenes in an elevator, and applies a threshold to the Color-$\chi^2$ histogram values to find key frames. To increase the probability of correctly discerning whether a scene is genuinely violent, statistical judgments over 20 sample videos are employed.
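A minimal sketch of chi-square color-histogram comparison for scene-change detection follows. It is not the paper's implementation, and the threshold value is illustrative; the paper tunes its threshold to locate key frames.

```python
# Chi-square distance between per-frame RGB color histograms; a large
# distance between consecutive frames signals a scene change.
import numpy as np

def color_histogram(frame: np.ndarray, bins: int = 16) -> np.ndarray:
    """Concatenated per-channel RGB histogram, normalized to sum to 1."""
    hists = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(np.float64)
    return h / h.sum()

def chi_square_distance(h1: np.ndarray, h2: np.ndarray) -> float:
    """Chi-square distance between two normalized histograms."""
    denom = h1 + h2
    mask = denom > 0
    return float(np.sum((h1[mask] - h2[mask]) ** 2 / denom[mask]))

THRESHOLD = 0.2  # illustrative scene-change threshold

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8)
curr = rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8)
d = chi_square_distance(color_histogram(prev), color_histogram(curr))
print(f"distance={d:.4f}:", "scene change" if d > THRESHOLD else "same scene")
```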

An Analysis on the Image and Visual Preference of the Environmental Sculpture in Urban Streetscapes (도시가로 경관에 있어 환경조형물의 이미지 및 시각적 선호도 분석)

  • Suh, Joo-Hwan;Park, Tae-Hie;Heo, Jun
    • Journal of the Korean Institute of Landscape Architecture / v.32 no.1 / pp.57-68 / 2004
  • The purpose of this paper is to investigate the image and visual preference of environmental sculptures in urban streetscapes. The analysis was performed on data obtained from questionnaires and from photographs of environmental sculpture scenes. The landscape image was analyzed by factor analysis, and the level of visual preference was measured by a slide simulation test and analyzed by multiple regression. The results can be summarized as follows. The visual preference of the environmental sculptures averaged 4.03 on a 7-point scale; landscape slides No. 11 and No. 5 ranked highest in visual preference. The factors formulating the landscape image were found to be 'beauty', 'orderliness', 'emotion', and 'formation'. With the number of factors controlled, the total variance (T.V.) explained was 63.0%. For all experimental landscape slides, orderliness was the main factor determining visual preference. Regressing visual preference on the four factor scores gave: Visual Preference = 3.996 + 0.341(FS1) + 0.595(FS2) + 0.222(FS3) + 0.011(FS4), R-square = 0.520.
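The reported regression model can be evaluated directly. In the sketch below, only the coefficients and R-square come from the abstract; the factor scores fed in are made-up example inputs.

```python
# Evaluate the abstract's reported regression for visual preference.
COEFFS = {"intercept": 3.996, "FS1": 0.341, "FS2": 0.595,
          "FS3": 0.222, "FS4": 0.011}  # from the paper; R-square = 0.520

def visual_preference(fs1: float, fs2: float, fs3: float, fs4: float) -> float:
    """Predicted visual preference on the paper's 7-point scale."""
    return (COEFFS["intercept"] + COEFFS["FS1"] * fs1 + COEFFS["FS2"] * fs2
            + COEFFS["FS3"] * fs3 + COEFFS["FS4"] * fs4)

# Hypothetical standardized factor scores for one scene:
print(round(visual_preference(0.5, 1.0, -0.2, 0.0), 3))  # -> 4.717
```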

An MPEG-4 Contents Authoring System based on Temporal Constraints Model supporting User Interaction (사용자 상호작용 지원 시간 제약 모델 기반 MPEG-4 컨텐츠 저작 시스템)

  • 김희선;김상욱
    • Journal of KIISE: Computing Practices and Letters / v.9 no.2 / pp.182-190 / 2003
  • For the temporal relations of interactive media such as MPEG-4, a temporal model is needed that can dynamically update presentation times and the temporal relations among objects according to user events occurring during playback. Moreover, even when temporal attributes are changed by user events, the validity of the scene must be maintained. In this paper, we propose a temporal model that supports user interaction, and we have developed an MPEG-4 content authoring system applying this model. We define the temporal relations of MPEG-4 that can be authored and the user interactions that can change temporal properties. The authoring system defines constraints on temporal relations and events, and can generate error-free scenes by checking these constraints. It also provides an authoring environment for visually authoring temporal relations and events in an MPEG-4 scene, and generates the MPEG-4 stream by encoding the authored scene.
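One kind of check such a system might perform is sketched below: after a user event reschedules an object, the object's interval must still lie inside its scene's interval. This is a hedged illustration of constraint checking in general, not the paper's actual constraint model.

```python
# Checking one temporal constraint: an interactively triggered object must
# start and end within its parent scene's interval.
from dataclasses import dataclass

@dataclass
class Interval:
    start: float
    end: float

def shift_on_event(obj: Interval, event_time: float) -> Interval:
    """A user event restarts the object at the moment the event occurred."""
    duration = obj.end - obj.start
    return Interval(event_time, event_time + duration)

def satisfies(scene: Interval, obj: Interval) -> bool:
    """Constraint: the object's interval must lie inside the scene's interval."""
    return scene.start <= obj.start and obj.end <= scene.end

scene = Interval(0.0, 60.0)
clip = Interval(5.0, 15.0)
moved = shift_on_event(clip, 52.0)  # event fired near the end of the scene
print(satisfies(scene, moved))      # False: the clip would outlive the scene
```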

Design of a Format Converter from MPEG-4 Over MPEG-2 TS to MP4 (MPEG-4 Over MPEG-2 TS로부터 MP4 파일로의 포맷 변환기 설계)

  • 최재영;정제창
    • Journal of Broadcast Engineering / v.5 no.2 / pp.176-187 / 2000
  • MPEG-4 is a digital bit-stream format with associated protocols for representing multimedia content consisting of natural and synthetic audio, video, and object data. This paper describes an application in which multiple audio/visual data streams are combined in MPEG-4 and transported via MPEG-2 transport streams (TS). It also describes how to convert MPEG-4 over MPEG-2 TS bit streams into an MP4 file, which is designed to contain the media information of an MPEG-4 presentation in a flexible, extensible format. MPEG-4 content is presented in the form of audio-visual objects that are arranged into an audio-visual scene by means of a scene descriptor, and the scene is composed of the audio-visual objects by means of object descriptors. These descriptor streams are not defined in MPEG-2 TS, so this paper focuses on handling these descriptors and on parsing TS streams to obtain the MPEG-4 data. The MPEG-4-over-MPEG-2-TS to MP4 format converter has been implemented in the demonstration system.
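The TS-parsing step the abstract mentions starts from the fixed MPEG-2 TS packet layout: 188-byte packets whose 4-byte header carries a sync byte, a PID, and an adaptation-field indicator. The sketch below shows that parse; it is a generic illustration, independent of the paper's converter design.

```python
# Pull elementary-stream payload bytes for one PID out of an MPEG-2 TS.
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def parse_packet(packet: bytes):
    """Return (pid, payload_unit_start, payload) for one 188-byte TS packet."""
    assert len(packet) == TS_PACKET_SIZE and packet[0] == SYNC_BYTE
    pid = ((packet[1] & 0x1F) << 8) | packet[2]
    pusi = bool(packet[1] & 0x40)       # payload unit start indicator
    adaptation = (packet[3] >> 4) & 0x3  # adaptation field control bits
    offset = 4
    if adaptation in (2, 3):             # adaptation field present
        offset += 1 + packet[4]
    payload = packet[offset:] if adaptation in (1, 3) else b""
    return pid, pusi, payload

def extract_pid(ts: bytes, wanted_pid: int) -> bytes:
    """Concatenate the payloads of all packets carrying the wanted PID."""
    chunks = []
    for i in range(0, len(ts) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        pid, _, payload = parse_packet(ts[i:i + TS_PACKET_SIZE])
        if pid == wanted_pid:
            chunks.append(payload)
    return b"".join(chunks)
```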

Neural Relighting using Specular Highlight Map (반사 하이라이트 맵을 이용한 뉴럴 재조명)

  • Lee, Yeonkyeong;Go, Hyunsung;Lee, Jinwoo;Kim, Junho
    • Journal of the Korea Computer Graphics Society / v.26 no.3 / pp.87-97 / 2020
  • In this paper, we propose a novel neural relighting method that infers a relighted rendering image based on a user-guided specular highlight map. The proposed network uses, as a backbone, a pre-trained neural renderer learned from rendered images of a 3D scene under various lighting conditions. We jointly optimize a 3D light position and its associated relighted image by back-propagation, so that the difference between the base image and the relighted image matches the user-guided specular highlight map. The proposed method has the advantage of explicitly inferring the 3D light position while providing the 2D screen-space interface that artists prefer. The performance of the proposed network was measured under conditions where ground truth could be established; the average error of the light-position estimates is 0.11 with respect to the normalized 3D scene size.
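The core optimization idea (back-propagating through a differentiable renderer to recover a light position that reproduces a target highlight map) can be sketched as below. The renderer here is a toy stand-in; the paper uses a pre-trained neural renderer, and the loss formulation is assumed.

```python
# Optimize a 3D light position so the rendered highlights match a target map.
import torch

def render(base_img: torch.Tensor, light_pos: torch.Tensor) -> torch.Tensor:
    """Toy differentiable renderer: brightens pixels near the light's
    projected position. A real backbone network would replace this."""
    h, w = base_img.shape
    ys = torch.linspace(-1, 1, h).view(-1, 1).expand(h, w)
    xs = torch.linspace(-1, 1, w).view(1, -1).expand(h, w)
    dist2 = (xs - light_pos[0]) ** 2 + (ys - light_pos[1]) ** 2 + light_pos[2] ** 2
    return base_img + torch.exp(-dist2 * 8.0)  # soft specular blob

base = torch.zeros(64, 64)
target_highlight = render(base, torch.tensor([0.3, -0.2, 0.5]))  # "user guide"

light = torch.zeros(3, requires_grad=True)  # 3D light position estimate
opt = torch.optim.Adam([light], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = torch.mean((render(base, light) - target_highlight) ** 2)
    loss.backward()
    opt.step()
print(light.detach())  # should approach [0.3, -0.2, ±0.5] (z sign is ambiguous here)
```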

Voting based Cue Integration for Visual Servoing

  • Cho, Che-Seung;Chung, Byeong-Mook
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2003.10a / pp.798-802 / 2003
  • The robustness and reliability of vision algorithms is the key issue in robotics research and industrial applications. In this paper, robust real-time visual tracking in a complex scene is considered. A common approach to increasing the robustness of a tracking system is to use different models (e.g., CAD models) known a priori. Fusing multiple features also facilitates robust detection and tracking of objects in scenes of realistic complexity. Because voting is very simple and requires little or no model for fusion, voting-based fusion of cues is applied. The algorithm is tested on a 3D Cartesian robot that tracks a toy vehicle moving along a 3D rail, and a Kalman filter is used to estimate the motion parameters, namely the system state vector of a moving object with unknown dynamics. Experimental results show that fusing cues and estimating motion give the tracking system robust performance.
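A minimal sketch of voting-based cue integration follows. The details (grid resolution, vote spread, the cues themselves) are assumptions for illustration, not the paper's exact scheme; the point is that each cue contributes a vote without any shared model, and outlier cues are outvoted.

```python
# Each visual cue votes for candidate target positions on a grid; the cell
# with the most votes wins, so a single outlier cue cannot hijack tracking.
import numpy as np

GRID = 32  # vote-accumulator resolution over the image

def cast_votes(acc: np.ndarray, estimate: tuple, spread: int = 1) -> None:
    """One cue votes for its estimated cell and its immediate neighbors."""
    x, y = estimate
    acc[max(0, y - spread):y + spread + 1, max(0, x - spread):x + spread + 1] += 1

def fuse(cue_estimates: list) -> tuple:
    """Return the grid cell receiving the most votes across all cues."""
    acc = np.zeros((GRID, GRID), dtype=int)
    for est in cue_estimates:
        cast_votes(acc, est)
    y, x = np.unravel_index(np.argmax(acc), acc.shape)
    return int(x), int(y)

# Color, edge, and motion cues roughly agree; a fourth cue is an outlier.
print(fuse([(10, 12), (11, 12), (10, 13), (25, 3)]))  # -> near (10, 12)
```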
