• Title/Summary/Keyword: 영상실감


FMM: Fusion media middleware for actual feeling service (실감 서비스 제공을 위한 융합 미디어 미들웨어)

  • Lee, Ji-Hye;Yoon, Yong-Ik
    • Journal of Korea Multimedia Society
    • /
    • v.13 no.2
    • /
    • pp.308-315
    • /
    • 2010
  • In the Web 2.0 environment, user-generated content (UGC) is actively exchanged among internet users. As content-sharing sites have grown, the amount of content produced by non-experts has increased, but such content is usually simple media that has merely been recorded. To add an actual-feeling experience, such as effects and actions, to non-expert content, we propose the Fusion Media Middleware (FMM). The FMM increases user satisfaction by providing this actual feeling, and the content is upgraded into advanced media with emotional impact. The FMM classifies the input media into scenes based on MPEG-7, and it adds an actual feeling to simple media by inserting effects such as sound, images, and text into the classified scenes. Using the BSD code of MPEG-21, the FMM links the input media with the effects, and through this BSD mapping it controls the synchronization between the media and the effects. With the Fusion Media Middleware, non-expert content gains value as multimedia with an actual feeling, and the FMM creates a new flow of media circulation.
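
The scene classification and effect synchronization that the abstract describes can be pictured with a small, purely illustrative sketch. The `Scene`/`Effect` dataclasses, the effect table, and the timing values below are hypothetical stand-ins, not FMM's actual MPEG-7/MPEG-21 data model.

```python
# A minimal, hypothetical sketch of the scene/effect mapping idea described above.
# Scene and Effect stand in for MPEG-7 scene descriptions and the effects
# (sound, image, text) the middleware inserts; they are not FMM's real data model.
from dataclasses import dataclass
from typing import List

@dataclass
class Scene:
    label: str        # classified scene type, e.g. "goal", "rain"
    start: float      # scene start time in seconds
    end: float        # scene end time in seconds

@dataclass
class Effect:
    kind: str         # "sound" | "image" | "text"
    asset: str        # path or identifier of the effect asset
    offset: float     # delay relative to the scene start, in seconds

# Which effects to attach to which scene labels (illustrative only).
EFFECT_TABLE = {
    "goal": [Effect("sound", "cheer.wav", 0.0), Effect("text", "GOAL!", 0.2)],
    "rain": [Effect("image", "rain_overlay.png", 0.0)],
}

def build_timeline(scenes: List[Scene]) -> List[dict]:
    """Return time-ordered (time, effect) entries so effects stay in sync
    with the scenes they belong to."""
    timeline = []
    for scene in scenes:
        for effect in EFFECT_TABLE.get(scene.label, []):
            timeline.append({
                "time": scene.start + effect.offset,
                "kind": effect.kind,
                "asset": effect.asset,
                "scene": scene.label,
            })
    return sorted(timeline, key=lambda e: e["time"])

if __name__ == "__main__":
    scenes = [Scene("rain", 0.0, 12.5), Scene("goal", 12.5, 20.0)]
    for entry in build_timeline(scenes):
        print(entry)
```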

Actual Feeling Service Model for Video-Media Contents (영상미디어콘텐츠에 대한 실감 서비스 모델)

  • Lee, Ji-Hye;Yoon, Yong-Ik
    • Journal of Digital Contents Society
    • /
    • v.10 no.3
    • /
    • pp.453-459
    • /
    • 2009
  • Recently, as interest in media content has grown among internet users, a wide variety of media content circulates on the web, and video content in particular attracts users' interest. In the Web 2.0 environment, internet users openly share content they have created themselves; their attitude toward media content is not passive but active, and they create new forms of distribution. Video content distributed on the web used to be produced professionally by experts, but in the Web 2.0 era most of it is self-produced UCC (User Created Content). Such content generated by ordinary internet users, however, provides only the video information itself and has clear limitations. To satisfy internet users as consumers in the Web 2.0 era, it is necessary to provide actual-feeling content that adds various effects rather than just simple media. Therefore, this paper represents existing media content, which carries only simple information, based on the concept of an ontology and assigns meaning to the subjects of the media content, and we present how to configure a service model (AF-VS: Actual Feeling Video Service) that provides this actual feeling.
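
To make the ontology idea concrete, here is a minimal sketch assuming a toy concept hierarchy in which effects attached to a broader concept are inherited by narrower ones. The concepts, relations, and effect files are invented for illustration; the abstract does not specify the AF-VS model at this level of detail.

```python
# A minimal, hypothetical sketch of ontology-style tagging for video content,
# loosely following the idea in the abstract above. The concept hierarchy and
# effect assignments are illustrative, not the paper's AF-VS model.
PARENT = {              # is-a relations: child concept -> parent concept
    "beach": "outdoor",
    "ski_slope": "outdoor",
    "outdoor": "scene",
}
EFFECTS = {             # effects attached at some level of the hierarchy
    "outdoor": ["ambient_wind.wav"],
    "beach": ["waves.wav", "seagull.wav"],
}

def effects_for(concept: str) -> list:
    """Collect effects for a concept plus everything inherited from its ancestors."""
    collected = []
    while concept is not None:
        collected.extend(EFFECTS.get(concept, []))
        concept = PARENT.get(concept)
    return collected

print(effects_for("beach"))   # ['waves.wav', 'seagull.wav', 'ambient_wind.wav']
```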


A Research on Development of Multi-Screen Image and Application to Ultra-High Definition Contents (멀티스크린의 발전과 초고화질 콘텐츠 응용에 대한 연구)

  • Moon, Dae-Hyuk
    • Journal of Industrial Convergence
    • /
    • v.18 no.6
    • /
    • pp.33-39
    • /
    • 2020
  • A multi-screen image system lets audiences appreciate content without special devices by expanding images that would normally be played on a single screen across multiple facets arranged to suit the purpose, which gives audiences a strong sense of immersion. With the recent interest in realistic images, such systems are used to produce films through multi-projection technologies such as ScreenX or Escape, and they have developed into media that can deliver stories and information. Display sizes are also gradually growing larger as image quality improves toward high definition, so development is accelerating in the form of digital signage that plays high-definition image content on tightly assembled arrays of HD or UHD display screens. Moreover, through the convergence of digital technologies, multi-screen imaging is developing into a higher value-added industry capable of two-way communication. This study examines the history of multi-screen imaging from the 1950s to the present, analyzes the technology and production methods, and investigates how to minimize image degradation when playing content on the various platforms that use multi-screen images.
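
One concrete way to avoid resampling loss on a screen wall, related to the abstract's concern with image degradation, is to cut a single UHD frame into native-resolution HD tiles instead of scaling it. The sketch below is a generic illustration of that idea, not the paper's production method; the 2x2 split and frame sizes are assumptions.

```python
# A rough sketch, not from the paper: cutting one UHD (3840x2160) frame into a
# 2x2 grid of Full HD (1920x1080) tiles so each tile can be sent to one screen
# of a multi-screen wall without rescaling (and thus without resampling loss).
import numpy as np

def split_into_tiles(frame: np.ndarray, rows: int = 2, cols: int = 2):
    h, w = frame.shape[:2]
    th, tw = h // rows, w // cols
    return [frame[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
            for r in range(rows) for c in range(cols)]

uhd_frame = np.zeros((2160, 3840, 3), dtype=np.uint8)   # placeholder UHD frame
tiles = split_into_tiles(uhd_frame)
print([t.shape for t in tiles])   # four (1080, 1920, 3) tiles
```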

Uncompressed 3D HD Video and Multi-channel Sound Transport (비압축 3D HD 영상 및 다채널 음성 전송)

  • Chae, Jong-Kwon;Lee, Young-Han;Kim, Jong-Won;Kim, Hong-Kook
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2007.02a
    • /
    • pp.706-712
    • /
    • 2007
  • The advancement of ultra-high-speed optical network technology, established for international research purposes, calls for new application technologies. High-quality, low-latency realistic collaboration applications not only fit this research purpose but are also expected to meet the needs of future community-based applications. In this paper, we build an uncompressed HD stereoscopic video transport system required for realistic collaboration applications so that users can experience 3D HD video. We also address software-based multi-channel audio playback and, through experiments, demonstrate the feasibility of building a directional collaboration environment. For stereoscopic media playback, we construct parallel left/right transmit-receive systems, perform uncompressed stereoscopic video transport, and propose a design for an inter-media synchronization scheme between the left and right video sessions. The audio playback software is implemented using ALSA, and a buffer is added to the pre-processing stage of the playback module to prevent channel swapping caused by variable data lengths and frame losses. Since combining ultra-high-speed networks with uncompressed media transport enables realistic HDTV with multi-channel audio over IP, we also examine usage scenarios in which this can be put to practical use.
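
The inter-media synchronization between the left and right video sessions can be illustrated with a generic timestamp-pairing sketch. The 10 ms tolerance, queue layout, and drop policy below are assumptions for illustration; the paper's own synchronization design is not reproduced here.

```python
# A generic, hypothetical sketch of timestamp-based inter-media synchronization
# between two parallel streams (e.g. left/right eye sessions): frames are paired
# only when their capture timestamps agree within a tolerance.
from collections import deque

TOLERANCE = 0.010   # 10 ms pairing window (illustrative value)

def pair_frames(left_queue: deque, right_queue: deque):
    """Yield (left, right) frame pairs; drop whichever side lags too far behind."""
    while left_queue and right_queue:
        lt, lf = left_queue[0]
        rt, rf = right_queue[0]
        if abs(lt - rt) <= TOLERANCE:
            left_queue.popleft(); right_queue.popleft()
            yield lf, rf
        elif lt < rt:               # left frame is too old -> discard it
            left_queue.popleft()
        else:                       # right frame is too old -> discard it
            right_queue.popleft()

left = deque([(0.000, "L0"), (0.033, "L1"), (0.066, "L2")])
right = deque([(0.002, "R0"), (0.067, "R2")])   # R1 was lost in transit
print(list(pair_frames(left, right)))           # [('L0', 'R0'), ('L2', 'R2')]
```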


Improved Method for Depth Map Fusion in Multi View System (Multi View System 에서 Depth Map Fusion 을 위한 개선된 기법)

  • Jung, Woo-Kyung;Kim, Haekwang;Han, Jong-Ki
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2021.06a
    • /
    • pp.223-225
    • /
    • 2021
  • As the demand for immersive media grows, high-quality immersive media is becoming increasingly important. Multi View Stereo, one of the common techniques used to produce such media, estimates depth maps and then performs a fusion step that uses those depth maps to generate a 3D point cloud. In this paper, we propose a method to improve the fusion step that merges the depth maps of multi-view images. The proposed method performs fusion based on the depth map of a reference view together with depth and color information. Experiments show that the results obtained with the proposed algorithm improve on the existing method.
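
As background for the fusion step the abstract refers to, the following sketch shows the plain back-projection of one depth map into camera-space 3D points with a pinhole model. The intrinsics and the toy depth map are made up, and a real MVS fusion stage would additionally enforce depth/color consistency across views, which is the part the paper improves.

```python
# A plain back-projection sketch (not the paper's improved fusion): turning one
# depth map into 3D points with a pinhole camera model. A real MVS fusion step
# would also check depth/color consistency across views before keeping a point.
import numpy as np

def depth_to_points(depth: np.ndarray, fx: float, fy: float, cx: float, cy: float):
    """depth[v, u] in metres -> (N, 3) array of camera-space 3D points."""
    v, u = np.nonzero(depth > 0)            # keep only valid depth pixels
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

depth = np.full((480, 640), 2.0)            # toy depth map: flat wall at 2 m
points = depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(points.shape)                          # (307200, 3)
```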


Implementation of Panoramic Realistic Images with the Use of Ultra High Definition(UHD) TV (초고선명(UHD)TV를 이용한 파노라마 실감영상구현)

  • Moon, Dae-Hyuk
    • Journal of Digital Convergence
    • /
    • v.14 no.7
    • /
    • pp.411-418
    • /
    • 2016
  • The digital broadcast environment has led to the emergence of UHDTV following HDTV. Demand for realistic images created with UHDTV has spread into various fields. Among the various applications, multi-screen imaging, in which multiple HD or UHD displays are tiled vertically and horizontally to show high-definition image content, is often used for display imagery and in the advertising market. Today, firms that produce high-definition multi-screen images mostly use independently developed programs, so the work costs more than ordinary production; for that reason, multi-screen image content is usually made by enlarging HD images by a factor of two to five. In this experiment, widely available software for UHDTV-based multi-screen imaging is used to test a method of implementing UHD panoramic realistic images without quality degradation.
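
The degradation that arises from enlarging an HD master instead of using a native UHD source could be quantified with a simple PSNR comparison, as in the sketch below. This is only an illustration of the measurement idea, using scaled-down stand-in frames and nearest-neighbour upscaling; it is not the paper's experimental setup.

```python
# A hypothetical way to quantify the degradation the abstract refers to: compare
# a "native" high-resolution frame against a lower-resolution frame that was
# simply upscaled to the same size. Small stand-in frames keep the toy example light.
import numpy as np

def psnr(a: np.ndarray, b: np.ndarray) -> float:
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def upscale_nearest(img: np.ndarray, factor: int) -> np.ndarray:
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

rng = np.random.default_rng(0)
native = rng.integers(0, 256, (540, 960, 3), dtype=np.uint8)   # stand-in for a UHD frame
lowres = native[::2, ::2]                                      # stand-in for the HD master
print(round(psnr(native, upscale_nearest(lowres, 2)), 2))      # lower PSNR = more degradation
```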

Implementation of High-definition Digital Signage Reality Image Using Chroma Key Technique (크로마키 기법을 이용한 고해상도 디지털 사이니지 실감 영상 구현)

  • Moon, Dae-Hyuk
    • Journal of Industrial Convergence
    • /
    • v.19 no.6
    • /
    • pp.49-57
    • /
    • 2021
  • Digital signage and multi-view image systems are used as a "fourth medium" for delivering stories and information because of their strong sense of immersion. Content displayed on large digital signage is usually produced with computer graphics rather than live-action footage, because footage shot for content production has a very limited production range and resolution and is therefore difficult to display on a large, wide digital signage screen. In the case of ScreenX and Escape, which use the left and right walls of a theater together with the central screen, footage is shot with three digital cinema cameras and, after stitching, screened in theaters with a multi-view image system; such realistic images let viewers experience lifelike content. Using the multi-view image production technique of ScreenX together with the chroma key technique, this research presents a method for producing high-resolution digital signage content that can be displayed without quality degradation.
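
The chroma key step itself can be summarized in a bare-bones masking sketch: pixels that are strongly green are replaced by background pixels. The margin threshold and toy images below are assumptions, and production keyers add spill suppression and soft edges that this sketch omits.

```python
# A bare-bones chroma key sketch in numpy: pixels that are strongly green are
# replaced by the background. This only illustrates the basic masking idea.
import numpy as np

def chroma_key(fg: np.ndarray, bg: np.ndarray, margin: int = 40) -> np.ndarray:
    """fg, bg: HxWx3 uint8 images of the same size."""
    r, g, b = fg[..., 0].astype(int), fg[..., 1].astype(int), fg[..., 2].astype(int)
    green_mask = (g > r + margin) & (g > b + margin)       # "green screen" pixels
    out = fg.copy()
    out[green_mask] = bg[green_mask]
    return out

fg = np.zeros((4, 4, 3), dtype=np.uint8); fg[..., 1] = 255    # all-green foreground
bg = np.full((4, 4, 3), 128, dtype=np.uint8)                  # gray background
print(chroma_key(fg, bg)[0, 0])                               # -> [128 128 128]
```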

A Study on Robustness Indicators for Performance Evaluation of Immersive 360-degree Video Filtering (실감형 360도 영상 필터링 성능 평가를 위한 강인성 지표에 관한 연구)

  • Jang, Seyoung;Yoo, Injae;Lee, Jaecheng;Park, Byeongchan;Kim, Youngmo;Kim, Seok-Yoon
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2020.07a
    • /
    • pp.437-438
    • /
    • 2020
  • The domestic immersive content market is showing an average annual growth rate of 42.9% over the previous year and is expected to reach about KRW 5.7271 trillion in 2020. Since 2018 in particular, the content market has expanded faster than the hardware market. As the distribution of immersive content has recently begun in earnest, cases of copyright infringement have appeared, but they have received little attention amid the emphasis on broadening the market base. Considering that immersive works are mostly produced by small companies at high cost, filtering technology for copyright protection is absolutely required; however, robustness indicators, the criteria for evaluating the performance of such filtering technology, have not yet been established. This paper therefore proposes robustness indicators for immersive 360-degree video content that are not tied to any specific technology.


Automated Extraction of Orthorectified Building Layer from High-Resolution Satellite Images (고해상도 위성영상으로부터 건물 정위 레이어 자동추출)

  • Seunghee Kim;Taejung Kim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.3
    • /
    • pp.339-353
    • /
    • 2023
  • As the availability of high-resolution satellite imagery increases, improving the positioning accuracy of satellite images is required. The importance of orthorectified images, which remove relief displacement and establish the true location of man-made structures, is also increasing. In this paper, we automatically extracted building rooftops and total building areas within original satellite images using an existing building height database. We relocated the rooftops to their true positions and generated an orthorectified building layer. The extracted total building areas were used to blank out building regions and generate a true orthographic non-building layer, and a final orthorectified image was produced by overlapping the building layer and the non-building layer. We tested the proposed method with KOMPSAT-3 and KOMPSAT-3A satellite images and verified the results by overlaying them on a digital topographic map. The results showed that orthorectified building layers were generated with a position error of 0.4 m. The proposed method confirms the feasibility of automated true orthoimage generation in dense urban areas.
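
The relief displacement that the orthorectified building layer removes follows a textbook geometric relation: a rooftop imaged at incidence angle theta appears shifted on the ground by roughly its height times tan(theta). The sketch below only illustrates this principle with assumed heights and an assumed angle; the paper's sensor-model-based relocation of rooftops is more involved.

```python
# A textbook-level sketch of the relief displacement that orthorectification
# removes: a rooftop imaged off-nadir appears shifted by roughly h * tan(theta).
import math

def relief_displacement(building_height_m: float, incidence_deg: float) -> float:
    """Approximate horizontal offset (metres) of a rooftop in an off-nadir image."""
    return building_height_m * math.tan(math.radians(incidence_deg))

for h in (20, 50, 100):                       # assumed building heights in metres
    print(h, round(relief_displacement(h, incidence_deg=25.0), 1))
```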

User Perception of Olfactory Information for Video Reality and Video Classification (영상실감을 위한 후각정보에 대한 사용자 지각과 영상분류)

  • Lee, Guk-Hee;Li, Hyung-Chul O.;Ahn, Chung Hyun;Choi, Ji Hoon;Kim, Shin Woo
    • Journal of the HCI Society of Korea
    • /
    • v.8 no.2
    • /
    • pp.9-19
    • /
    • 2013
  • Much progress has been made in enhancing realism using audio-visual information, but there is little research on providing olfactory information because smell is difficult to implement and control. To obtain the basic data needed for providing smell for video reality, this research investigated how users perceive smell in diverse videos and then classified the videos based on the collected perception data. We chose five main questions: whether smell is present in the video (smell presence), whether one desires to experience the smell with the video (preference for smell presence with the video), whether one likes the smell itself (preference for the smell itself), the desired smell intensity if it is presented with the video (smell intensity), and the degree of smell concreteness (smell concreteness). After sampling video clips of various genres likely to receive either high or low ratings on these questions, participants watched each video and rated it on a 7-point scale for the five questions. Using the rating data for each clip, we constructed scatter plots by pairing the five questions and using each pair's rating scales as the X-Y axes of a two-dimensional space. The video clusters and their distributions in the scatter plots provide insight into the characteristics of each cluster and into how olfactory information should be presented for video reality.
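
The pairwise scatter plots described above can be reproduced in outline with a few lines of matplotlib; the clip names and mean ratings below are invented purely to show the plot construction, pairing two of the five questions as the X-Y axes.

```python
# A small illustration (with made-up data) of the scatter plots described above:
# each point is one video clip, positioned by its mean rating on two of the five
# questions, e.g. "smell presence" vs. "preference for smell presence".
import matplotlib.pyplot as plt

# hypothetical per-clip mean ratings on a 7-point scale
clips = ["cooking", "beach", "traffic", "flower_shop"]
smell_presence = [6.1, 5.2, 4.8, 6.5]
preference = [5.8, 4.9, 2.1, 6.2]

plt.scatter(smell_presence, preference)
for name, x, y in zip(clips, smell_presence, preference):
    plt.annotate(name, (x, y))
plt.xlabel("smell presence (mean rating)")
plt.ylabel("preference for smell presence (mean rating)")
plt.xlim(1, 7); plt.ylim(1, 7)
plt.savefig("smell_scatter.png")        # write the figure instead of showing it
```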
