• Title/Summary/Keyword: 영상실감 (video realism)

Design and Implementation of a Realistic Multi-View Scalable Video Coding Scheme (실감형 다시점 스케일러블 비디오 코딩 방법의 설계 및 구현)

  • Park, Min-Woo; Park, Gwang-Hoon
    • Journal of Broadcast Engineering / v.14 no.6 / pp.703-720 / 2009
  • This paper proposes a realistic multi-view scalable video coding scheme designed for users' growing interest in 3D content services and for use in future computing environments. Future video coding schemes should support realistic services that convey 3D presence through stereoscopic or multi-view video, and should also enable so-called one-source multi-use services in order to comprehensively support diverse transmission environments and terminals. Unlike most video coding methods, which support only two-dimensional display, the coding scheme proposed in this paper can support such realistic services. The scheme is designed and implemented by integrating Multi-view Video Coding (MVC) and Scalable Video Coding (SVC), and its potential for realizing 3D services is demonstrated through simulation. The simulation results show that the proposed structure remarkably improves random-access performance at almost the same coding efficiency.
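
The headline result concerns random access, which is governed by the prediction structure. As a toy illustration only (the paper's actual combined MVC/SVC structure is not reproduced here), the following Python sketch counts the frames that must be decoded to reach a given position in a dyadic hierarchical-B GOP, the kind of dependency accounting behind random-access comparisons:

```python
def decode_set(p, lo, hi):
    """Frames that must be decoded to display frame p in a dyadic
    hierarchical-B span [lo, hi] whose anchor frames are lo and hi.
    Toy model only: real MVC/SVC structures add inter-view and
    inter-layer dependencies on top of this."""
    if p == lo:
        return {lo}
    if p == hi:
        return {lo, hi}            # assume hi is predicted from lo
    mid = (lo + hi) // 2
    needed = {lo, hi, mid}         # the middle frame needs both anchors
    if p == mid:
        return needed
    sub = (lo, mid) if p < mid else (mid, hi)
    return needed | decode_set(p, *sub)

# Random-access cost (frames decoded) per position in a GOP of 8:
print([len(decode_set(p, 0, 8)) for p in range(9)])
# -> [1, 5, 4, 5, 3, 5, 4, 5, 2]
```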

Realistic Expression Factor to Visual Presence of Virtual Avatar in Eye Reflection (가상 아바타의 각막면에 비친 반사영상의 시각적 실재감에 대한 실감표현 요소)

  • Won, Myoung Ju; Lee, Eui Chul; Whang, Min-Cheol
    • The Journal of the Korea Contents Association / v.13 no.7 / pp.9-15 / 2013
  • In the VnR (Virtual and Real Worlds) settings of recent virtual reality convergence, the modeling of realistic human faces has focused on facial appearance, such as the shape of facial features and muscle movement. However, facial changes caused by environmental factors, beyond appearance factors themselves, can also be important for effectively representing a virtual avatar. This study therefore evaluates users' visual responses to varying the opacity of the reflection on a virtual avatar's cornea, considered here as a new parameter for representing a realistic avatar. Experimental results showed that a clearer eye reflection induced a more realistic visual impression in subjects. This result can serve as a basis for designing realistic virtual avatars by establishing a new visual realism factor (eye reflection) and its degree of representation (reflectance ratio).
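
The manipulated variable above, reflection opacity, amounts to alpha-blending an environment image over the cornea texture. A minimal sketch assuming simple linear blending (the paper's actual rendering pipeline is not specified in the abstract):

```python
import numpy as np

def blend_eye_reflection(cornea, reflection, opacity):
    """Alpha-blend an environment-reflection image over a cornea
    texture. `opacity` in [0, 1] stands in for the reflectance
    ratio varied in the experiment (hypothetical parameter name).
    Both inputs are HxWx3 uint8 arrays of the same size."""
    out = (opacity * reflection.astype(np.float32)
           + (1.0 - opacity) * cornea.astype(np.float32))
    return np.clip(out, 0, 255).astype(np.uint8)

# e.g. render stimuli at several opacity levels for a rating study
levels = [0.0, 0.25, 0.5, 0.75, 1.0]
```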

Emotional adjective profiles of various odor stimuli (감성형용사를 사용한 다양한 향의 프로파일)

  • Jung, Yun-Jin; Lee, Guk-Hee; Li, Hyung-Chul O.; Kim, Shin-Woo
    • Science of Emotion and Sensibility / v.18 no.2 / pp.75-84 / 2015
  • Although various methods have been proposed and used to improve video realism, the use of olfaction remains at a rudimentary stage. Previous research reported that certain scents improve realism when a video displays specific objects whose odors match the scents provided. In addition, another study showed that providing scents corresponding to the prevailing color of a video improves the sense of immersion. These studies have clear limitations, however, because not all videos contain a specific odor or an obvious color. Assuming that sensibility-based scent provision can increase the sense of reality even in the absence of a dominant odor or color, the present study aimed to build adjective profiles of various scents that convey different sensibilities. To this end, in Experiments 1 and 2 we collected a set of adjectives appropriate for describing scents, and in Experiment 3 we built profiles of 16 scents using 30 adjectives. In addition, we grouped scents of similar sensibilities using cluster analysis. These results can be used not only for improving video realism but also for purposes such as emphasizing product concepts or store positioning.
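
The profile-and-cluster step maps naturally onto a scents-by-adjectives rating matrix. A minimal sketch with hypothetical random data standing in for the paper's ratings, using SciPy's hierarchical clustering:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical stand-in for the paper's data: mean ratings of
# 16 scents on 30 adjectives (rows = scents, cols = adjectives).
rng = np.random.default_rng(0)
profiles = rng.uniform(1, 7, size=(16, 30))

# Group scents whose adjective profiles are similar.
Z = linkage(profiles, method="ward", metric="euclidean")
groups = fcluster(Z, t=4, criterion="maxclust")  # e.g. 4 clusters
print(groups)  # cluster label for each of the 16 scents
```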

A Real and Effective Multi-Videoconferencing Service Based on IP Networks (네트워크를 통한 실질적이고 효과적인 다자간 영상회의 서비스)

  • Kim, Sang-Hyun; Song, Jae-Phil; Sohn, Jin-Soo
    • Proceedings of the Korea Institute of Information and Communication Facilities Engineering Conference / 2008.08a / pp.478-481 / 2008
  • The globalization of business and the geographic distribution of workplaces have inevitably driven the adoption of videoconferencing, but previously deployed SD (Standard Definition) videoconferencing has fallen short of expectations, with low usage rates owing to its limited sense of realism and the fatigue it causes during extended use. Existing videoconferencing systems have also been unable to overcome limitations such as the "dec" serving only as an auxiliary channel, the complexity of system operation and administration, and the difficulty of capturing facial expressions and eye contact at low resolution; as a result, companies still spend enormous amounts of money and time on collaboration, both internally and with other companies at home and abroad. To overcome these problems, KT built a Full HD, realistic-quality videoconferencing system after reviewing options for an optimal deployment, including a dedicated line between its Seoul and Daejeon research centers and a migration to VPN for cost savings and multi-site expansion; going forward, further research on improved Full HD data compression and on data transmission methods is judged necessary. This paper discusses the underlying technologies and architecture, the state of the domestic and international markets, and potential applications, focusing on the videoconferencing deployment at KT.
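
To see why Full HD compression and link planning dominate such a deployment, here is a back-of-the-envelope calculation of the uncompressed bitrate (illustrative arithmetic, not figures from the paper):

```python
# Uncompressed Full HD bitrate: why compression and link planning matter.
width, height = 1920, 1080
bits_per_pixel = 24            # 8 bits per RGB channel
fps = 30

raw_bps = width * height * bits_per_pixel * fps
print(f"raw: {raw_bps / 1e9:.2f} Gbit/s")  # ~1.49 Gbit/s per stream
# H.264-class compression brings this down to a few Mbit/s, which is
# what makes multi-site Full HD conferencing over leased lines or
# VPNs feasible at all.
```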

A Study on Fingerprinting Robustness Indicators for Immersive 360-degree Video (실감형 360도 영상 특징점 기술 강인성 지표에 관한 연구)

  • Kim, Youngmo; Park, Byeongchan; Jang, Seyoung; Yoo, Injae; Lee, Jaechung; Kim, Seok-Yoon
    • Journal of IKEEE / v.24 no.3 / pp.743-753 / 2020
  • In this paper, we propose a set of robustness indicators for immersive 360-degree video. With the full-fledged rollout of mobile carriers' 5G networks, large-capacity immersive 360-degree videos can be used at high speed anytime, anywhere. However, since such videos can be illegally distributed via web-hard services and torrents after DRM removal and various video modifications, evaluation indicators are required that can objectively assess filtering performance for copyright protection. This paper proposes robustness indicators that extend the existing 2D video robustness indicators and take into account the projection and playback methods characteristic of immersive 360-degree video. A performance evaluation experiment was carried out on a sample filtering system, verifying an excellent recognition rate of 95% or higher with an execution time of about 3 seconds.
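
An objective filtering-performance indicator of this kind reduces to a recognition rate measured over a battery of modified copies. A minimal harness sketch, with hypothetical `extract_fingerprint`, `match`, and `modify` functions standing in for the unspecified filtering system:

```python
def recognition_rate(originals, modify_fns, extract_fingerprint, match):
    """Fraction of modified copies correctly matched to their source.
    `modify_fns` models the attack battery a robustness indicator
    covers, e.g. re-encoding or cropping, plus, for 360-degree video,
    changes of projection or playback method."""
    reference = {vid: extract_fingerprint(vid) for vid in originals}
    hits = trials = 0
    for vid in originals:
        for modify in modify_fns:
            trials += 1
            if match(extract_fingerprint(modify(vid)), reference[vid]):
                hits += 1
    return hits / trials
```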

The Study of the Plan regarding DSM Generation for Production of Orthophoto (정사영상 제작을 위한 정밀 DSM 생성 방안)

  • Lee, Hyun-Jik; Ru, Ji-Ho; Yoo, Kang-Min
    • Proceedings of the Korean Spatial Information System Society Conference / 2007.06a / pp.369-374 / 2007
  • Recent advances in photogrammetry and IT have made it possible to acquire high-quality data, and the production and use of orthophotos from high-resolution imagery are increasing accordingly. Because conventional orthophotos are rectified using a DEM, they cannot completely remove the relief displacement of man-made features with height above the terrain, such as buildings and bridges. In this study, to produce true orthophotos with building relief displacement removed, DSMs were generated for four experimental cases using digital photogrammetric techniques and LiDAR data; an orthophoto was produced from each DSM and its accuracy analyzed, and a DSM generation approach suitable for true-orthophoto production is proposed.
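
The relief displacement that a DSM-based true orthophoto removes follows the textbook photogrammetric relation d = r·h/H for a vertical photo. A small worked example (the numeric values are hypothetical):

```python
def relief_displacement(r, h, H):
    """Relief displacement on a vertical photo: image point at radial
    distance r from the nadir, object of height h, flying height H
    above the datum (h and H in the same units; d comes out in r's
    units)."""
    return r * h / H

# A 30 m building imaged 80 mm from the nadir point from 1500 m is
# displaced about 1.6 mm on the image, which a DEM-based
# (terrain-only) rectification cannot correct.
print(relief_displacement(80, 30, 1500))  # -> 1.6
```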

An Input/Output Technology for 3-Dimensional Moving Image Processing (3차원 동영상 정보처리용 영상 입출력 기술)

  • Son, Jung-Young; Chun, You-Seek
    • Journal of the Korean Institute of Telematics and Electronics S / v.35S no.8 / pp.1-11 / 1998
  • One of the desired features for realizing high-quality information and telecommunication services in the future is "the sensation of reality." This will be achieved only through visual communication based on 3-dimensional (3-D) moving images. The main difficulties in realizing 3-D moving image communication are that no transmission technology has been developed for the huge amount of data involved in 3-D images, and no technologies have been established for recording and displaying 3-D images in real time. Currently known stereoscopic imaging technologies can present depth but no motion parallax, and they require viewers to wear glasses, so they are not effective in creating the sensation of reality. More effective 3-D imaging technologies for achieving the sensation of reality are based on multiview 3-D images, in which the image of an object changes as the eyes move in different directions. In this paper, a multiview 3-D imaging system composed of 8 CCD cameras in a single case, an RGB (Red, Green, Blue) beam projector, and a holographic screen is introduced. In this system, 8 view images are recorded by the 8 CCD cameras and transmitted to the beam projector in sequence by a signal converter. The converter splits each camera signal into 3 color signals (RGB), multiplexes each color signal from the 8 cameras into a serial signal train, and drives the corresponding color channel of the beam projector at a 480 Hz frame rate. The beam projector projects images onto the holographic screen through an LCD shutter consisting of 8 LCD strips. The image of each LCD strip, created by the holographic screen, forms a sub-viewing zone. Since the ON period and sequence of the LCD strips are synchronized with those of the camera image sampling and the beam projector's image projection, multiview 3-D moving images can be viewed within the viewing zone.
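
The 480 Hz projector rate is consistent with time-multiplexing 8 views at 60 Hz per view. A toy schedule illustrating the synchronization constraint (the 60 Hz per-view figure is an assumption; the paper states only the 480 Hz total):

```python
# Time-multiplexed view schedule: 8 views x 60 Hz per view = 480 Hz.
NUM_VIEWS = 8
PER_VIEW_HZ = 60                        # assumed per-view refresh
PROJECTOR_HZ = NUM_VIEWS * PER_VIEW_HZ  # 480 Hz, as in the paper

def active_view(t_seconds):
    """Which view (and hence which LCD strip) is ON at time t.
    Camera sampling, projection, and shutter must all follow this
    same sequence for the sub-viewing zones to line up."""
    slot = int(t_seconds * PROJECTOR_HZ)
    return slot % NUM_VIEWS
```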

A Feature Point Recognition Ratio Improvement Method for Immersive Contents Using Deep Learning (딥 러닝을 이용한 실감형 콘텐츠 특징점 인식률 향상 방법)

  • Park, Byeongchan; Jang, Seyoung; Yoo, Injae; Lee, Jaechung; Kim, Seok-Yoon; Kim, Youngmo
    • Journal of IKEEE / v.24 no.2 / pp.419-425 / 2020
  • The market for immersive 360-degree video content, noted as one of the main technologies of the fourth industrial revolution, grows every year. However, since much of this content is distributed through illegal channels such as torrents after the DRM is removed, the damage caused by illegal copying is also increasing. Although filtering technology is used to respond to these issues for 2D videos, most existing filtering technology must first overcome technical limitations, such as the huge feature-point data volume and the processing capacity it demands at ultra-high resolutions of 4K UHD or above, before it can be applied to immersive 360-degree videos. To solve these problems, this paper proposes a method that uses deep learning to improve the feature-point recognition ratio for immersive 360-degree videos.
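
The abstract does not specify the network used, so the following is only a generic sketch: a pretrained CNN backbone turning a downsized frame into a compact feature vector, one common way to sidestep the feature-point data volume of 4K-and-above frames:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# A pretrained CNN backbone turns a (downsized) frame into a compact
# 512-d feature vector, avoiding the huge handcrafted feature-point
# volume that 4K-and-above frames would otherwise produce.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Identity()  # keep the penultimate-layer features
model.eval()

preprocess = T.Compose([
    T.Resize(256),              # downsizing tames the 4K+ data volume
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

with torch.no_grad():
    frame = Image.open("frame_0001.png").convert("RGB")  # hypothetical path
    feat = model(preprocess(frame).unsqueeze(0)).squeeze(0)  # 512-d vector
```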

A Reference Frame Selection Method Using RGB Vector and Object Feature Information of Immersive 360° Media (실감형 360도 미디어의 RGB 벡터 및 객체 특징정보를 이용한 대표 프레임 선정 방법)

  • Park, Byeongchan; Yoo, Injae; Lee, Jaechung; Jang, Seyoung; Kim, Seok-Yoon; Kim, Youngmo
    • Journal of IKEEE / v.24 no.4 / pp.1050-1057 / 2020
  • Immersive 360-degree media suffers from slow video recognition when processed by conventional methods, since it employs a variety of rendering methods and, with its higher quality and extra-large volume, produces far larger files than existing video. Moreover, because the camera is in most cases fixed in one place and captures a single scene, it is unnecessary to extract feature information from every scene. In this paper, we propose a reference frame selection method for immersive 360-degree media and describe how it is applied to copyright protection technology. The proposed method performs three pre-processing steps: frame extraction from the immersive 360-degree media, frame downsizing, and spherical rendering. In the rendering step, the video is divided into 16 frames and captured. In the central region, where object information is concentrated, objects are extracted using a per-pixel RGB vector and deep learning, and a reference frame is selected using the object feature information.
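
A minimal sketch of the three pre-processing steps, with the spherical rendering and 16-view capture simplified to tiling each equirectangular frame into a 4x4 grid (a stand-in for the paper's actual rendering), using OpenCV:

```python
import cv2

def preprocess_360(video_path, step=30, size=(1024, 512)):
    """Sketch of the three pre-processing steps: frame extraction,
    downsizing, and a simplified stand-in for spherical rendering
    that tiles each equirectangular frame into 4 x 4 = 16 views."""
    cap = cv2.VideoCapture(video_path)
    views, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:                  # 1. frame extraction
            small = cv2.resize(frame, size)  # 2. downsizing
            h, w = small.shape[:2]
            tiles = [small[r*h//4:(r+1)*h//4, c*w//4:(c+1)*w//4]
                     for r in range(4) for c in range(4)]  # 3. 16 views
            views.append(tiles)
        idx += 1
    cap.release()
    return views

def mean_rgb_vector(tile):
    """Per-tile mean RGB vector; central tiles, where object
    information concentrates, feed reference-frame selection."""
    return tile.reshape(-1, 3).mean(axis=0)
```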

The Effect of Matching between Odor and Color on Video Reality and Sense of Immersion (향과 색의 어울림이 영상 실감과 몰입감에 미치는 효과)

  • Lee, Guk-Hee; Li, Hyung-Chul O.; Bang, Dongmin; Ahn, ChungHyun; Ki, MyungSeok; Kim, ShinWoo
    • Journal of Broadcast Engineering / v.19 no.6 / pp.877-895 / 2014
  • It is common sense that providing a specific odor can increase video realism when a scene contains an object with that odor. However, it is still unknown how to increase video realism and emotional immersion when the scene carries no specific odor information. This study therefore explores how to improve video realism and immersion when a scene offers no concrete odor information from its objects. Building on diverse previous studies of matching between odor and color, we expected that providing an odor would increase video realism if the odor matched the video's color well. To this end, we collected 48 odors and investigated which color matched each odor. As a result, we obtained 5 odors with clearly well-matched colors and defined each odor's ill-matched color as the complement of its well-matched color (Experiment 1). We then constructed 3 conditions: coloring an image and a video clip with the well-matched color (color-odor match condition), coloring them with the ill-matched color (color-odor mismatch condition), and coloring them achromatically by removing saturation (color-odor neutral condition). Under each condition, participants rated image-odor matching, the increase in realism due to the odor, the increase in immersion due to the odor, and odor preference (Experiments 2 and 3). The scores on all 4 questions were higher in the color-odor match condition than in the mismatch and neutral conditions. These results indicate that providing an odor that matches the colors in a video scene is very effective for increasing video realism and immersion. We expect future research to further improve realism and immersion through olfactory information.
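
The three color conditions can be derived from a single well-matched color: the color itself, its complement, and an achromatic version with saturation removed. A minimal sketch assuming the HSV definition of a complement (hue rotated 180 degrees) and a hypothetical example color:

```python
import colorsys

def stimulus_colors(rgb):
    """Derive the three stimulus colors from one well-matched color:
    the color itself, its complement (hue rotated 180 degrees), and
    an achromatic version with saturation removed. `rgb` in [0, 1]."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    match = rgb
    mismatch = colorsys.hsv_to_rgb((h + 0.5) % 1.0, s, v)
    neutral = colorsys.hsv_to_rgb(h, 0.0, v)
    return match, mismatch, neutral

# e.g. a hypothetical well-matched orange for a citrus scent
print(stimulus_colors((1.0, 0.5, 0.0)))
```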