• Title/Abstract/Keywords: interactive and background

Search results: 120 (processing time: 0.026 s)

Infrared Sensitive Camera Based Finger-Friendly Interactive Display System

  • Ghimire, Deepak; Kim, Joon-Cheol; Lee, Kwang-Jae; Lee, Joon-Whoan
    • International Journal of Contents / Vol. 6, No. 4 / pp. 49-56 / 2010
  • In this paper we present a system that enables the user to interact with a large display even without touching the screen. With two infrared-sensitive cameras mounted at the bottom left and bottom right of the display, pointing upwards, the fingertip position within the selected region of interest of each camera view is found using the vertical intensity profile of the background-subtracted image. The finger positions in the left and right camera images are mapped to display-screen coordinates using pre-determined matrices, which are calculated by interpolating samples of the user's finger position in images taken while pointing at known coordinate positions on the display. The screen is then manipulated according to the calculated position and depth of the fingertip with respect to the display. Experimental results demonstrate efficient, robust, and stable human-computer interaction.
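The two steps the abstract describes, locating the fingertip column from a background-subtracted intensity profile and mapping the two camera readings to screen coordinates, can be sketched roughly as follows. This is a minimal illustration: the linear calibration form and all names are assumptions, not the paper's actual matrices.

```python
def fingertip_column(frame, background, threshold=20):
    """Return the column with the largest summed foreground intensity.

    frame/background: 2D lists of grayscale values (rows x cols).
    Column sums of |frame - background| form the vertical intensity
    profile; the fingertip is taken as the profile's peak.
    """
    cols = len(frame[0])
    profile = [0] * cols
    for fr_row, bg_row in zip(frame, background):
        for c in range(cols):
            diff = abs(fr_row[c] - bg_row[c])
            if diff > threshold:          # suppress sensor noise
                profile[c] += diff
    return max(range(cols), key=profile.__getitem__)

def to_screen(x_left, x_right, calib):
    """Interpolate screen (x, y) from the two camera columns.

    calib holds per-axis linear coefficients obtained offline by
    pointing at known screen positions (a stand-in for the paper's
    pre-determined matrices).
    """
    sx = calib["ax"] * x_left + calib["bx"] * x_right + calib["cx"]
    sy = calib["ay"] * x_left + calib["by"] * x_right + calib["cy"]
    return sx, sy
```

A real system would run this per frame on each camera's region of interest and also estimate depth from the profile height before deciding whether a "touch" occurred.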

Educational Framework for Interactive Product Prototyping

  • Nam, Tek-Jin
    • 디자인학연구 / Vol. 19, No. 3 / pp. 93-104 / 2006
  • When the design profession started, design targets were mainly static, hardware-centered products. Due to the development of network and digital technologies, new products with dynamic, software-hardware hybrid interactive characteristics have become one of the main design targets. To accomplish such projects, designers are required to learn new methods, tools, and theories in addition to the traditional design expertise of visual language. One of the most important tools for this change is effective and rapid prototyping. There has been little research on educational frameworks for interactive product or system prototyping to date. This paper presents a new model of educational contents and methods for interactive digital product prototyping, and its application in a design curriculum. The new course contents, integrated with related topics such as physical computing and tangible user interfaces, include microprocessor programming, digital and analogue input and output, multimedia authoring and programming languages, sensors, communication with external devices, computer vision, and movement control using motors. The final project of the course was accomplished by integrating all the exercises. Our educational experience showed that design students with little engineering background could learn various interactive digital technologies and their implementation methods in a one-semester course. At the end of the course, most of the students were able to construct prototypes that illustrated interactive digital product concepts. It was found that training in logical and analytical thinking is necessary in design education. The paper highlights the emerging contents in design education to cope with the new design paradigm. It also suggests an alternative that reflects the new requirements, focused on interactive product or system design projects. The tools and methods suggested can also be beneficial to students, educators, and designers working in digital industries.


The Interactive Virtual Space with Scent Display for Song-Do Tomorrow-City Experience Complex

  • 김정도; 박성대; 이정환; 김정주; 이상국
    • 대한인간공학회지 / Vol. 29, No. 4 / pp. 585-593 / 2010
  • Recently, we designed an interactive virtual space for the multi-purpose hall in Songdo Future City, located in Incheon, Korea. The goal of the design is to make a virtual space that is flexible and, thanks to its unfixed seats, can be adjusted to accommodate audiences of different and unspecified sizes. Virtual images are interactively adjusted according to the distance, position, and size of the audience, which are detected by nine photo sensors. To increase the sense of immersion, intensity, and reality, we utilized scent-display technology that can create appropriate scents to match the images on the screen. The intensity and persistence of the scents are determined by the size, distance, and position of the audience. The virtual image contains background images and reactive images. The background images repeatedly project scenes of spring, summer, autumn, and winter. The reactive images consist of small portraits, pictures, or icons that define or characterize the season, and these are added to the background image according to the distance, position, and size of the audience.
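As a minimal sketch of how the nine photo sensors might drive the scent display, assume each sensor reports a simple occupancy flag (the abstract does not specify the actual sensor processing, so the mapping below is an illustrative assumption):

```python
def scent_intensity(sensor_hits, max_level=10):
    """Scale scent emission with the number of occupied sensor zones:
    a larger or closer audience trips more sensors -> stronger scent."""
    occupied = sum(1 for hit in sensor_hits if hit)
    return round(max_level * occupied / len(sensor_hits))

def audience_center(sensor_hits):
    """Average index of the tripped sensors: a crude audience position
    that could place the reactive images on the screen."""
    hits = [i for i, hit in enumerate(sensor_hits) if hit]
    return sum(hits) / len(hits) if hits else None
```

Persistence could be handled the same way, by holding the emission level for a duration proportional to the occupancy count.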

A Traffic Model Based on the Differentiated Service Routing Protocol

  • 인치형
    • 한국통신학회논문지 / Vol. 28, No. 10B / pp. 947-956 / 2003
  • Targeting the NGN (Next Generation Network), the IETF is standardizing packet QoS schemes such as DiffServ, RSVP, and MPLS so that users' varied QoS requests can be handled in packet networks; among these, the DiffServ network is representative. This paper therefore studies a routing-scheme traffic model and algorithms that handle the variously incoming traffic on a DiffServ packet network appropriately for the user's application: the Traffic-Balanced Routing Protocol (TBRP), which handles conventional best-effort traffic, that is, traffic not sensitive to delay; the Hierarchical Traffic-Scheduling Routing Protocol (HTSRP), which selects optimal intermediate nodes to handle wired-wireless convergence and high-priority interactive data; and HTSRP_Q (HTSRP for QoS), which, based on QoS parameters, maximizes resource utilization in the access layer and handles delay-sensitive traffic for conversational or streaming packet services. Based on these, mapping and management schemes for each traffic model were also studied. The protocols and traffic model proposed in this study offer flexible traffic handling for various access and backbone networks and proved suitable for the efficiency and stability of the NGN.
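Read against the three protocols above, the dispatch decision reduces to a classification step. The sketch below uses illustrative class names rather than the paper's actual DiffServ codepoints:

```python
def select_protocol(traffic_class, delay_sensitive=False):
    """Route each traffic class to the protocol the paper assigns it:
    best-effort -> TBRP; delay-sensitive conversational/streaming ->
    HTSRP_Q; remaining high-priority interactive data -> HTSRP."""
    if traffic_class == "best_effort":
        return "TBRP"
    if delay_sensitive:
        return "HTSRP_Q"
    return "HTSRP"
```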

Visual Dynamics Model for 3D Text Visualization

  • Lim, Sooyeon
    • International Journal of Contents / Vol. 14, No. 4 / pp. 86-91 / 2018
  • Text has evolved along with the history of art as a means of communicating human intentions and emotions. In addition, text visualization artworks have been combined with the social forms and contents of new media to produce social messages and related meanings. Recently, in text visualization artworks combined with digital media, the forms of communication with viewers are changing instantly and interactively, and viewers actively participate in creating the artwork through direct engagement. Interactive text visualization, driven by the viewer's interaction, generates external dynamics from text shapes and internal dynamics from the embedded meanings of the text. The purpose of this study is to propose a visual dynamics model to express the dynamics of text and to implement a text visualization system based on the model. It uses the deconstruction of the imaged text to create an interactive text visualization system that reacts to the gestures of the viewer in real time. Visual transformation synchronized with the intentions of the viewer prevents the text from remaining a mere interpretation of language symbols and extends the various meanings of the text. The visualized text, in its various forms, shows visual dynamics whose meaning is interpreted according to the cultural background of the viewer.
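The "external dynamics" of deconstructed, imaged text can be sketched as glyph particles displaced by a gesture point, with the push fading with distance. This is only an assumed stand-in for the paper's model; the function and parameters are illustrative:

```python
def displace_glyphs(positions, gesture, radius=50.0, strength=10.0):
    """Push each glyph (x, y) away from the gesture point, with the
    displacement fading linearly to zero at `radius`."""
    gx, gy = gesture
    out = []
    for x, y in positions:
        dx, dy = x - gx, y - gy
        dist = (dx * dx + dy * dy) ** 0.5
        if dist == 0 or dist > radius:
            out.append((x, y))            # untouched outside the field
        else:
            f = strength * (1 - dist / radius) / dist
            out.append((x + dx * f, y + dy * f))
    return out
```

Run per frame against tracked gesture coordinates, this gives the real-time reaction the abstract describes; internal dynamics (meaning-driven changes) would need a separate mapping from text semantics to visual parameters.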

A Study on "A Midsummer Night's Palace" Using VR Sound Engineering Technology

  • Seok, MooHyun; Kim, HyungGi
    • International Journal of Contents / Vol. 16, No. 4 / pp. 68-77 / 2020
  • VR (Virtual Reality) contents make the audience perceive virtual space as real through the virtual Z axis, which creates a space that could not be created in 2D because of the distance between the audience's eyes. This visual change has led to the need for technological changes to the sound and sound sources inserted into VR contents. However, studies to increase immersion in VR contents are still focused more on the scientific and visual fields. This is because composing and producing VR sound requires professional views in two areas: sound-based engineering and computer-based interactive sound engineering. Sound-based engineering directs the sound effects, script sound, and background music according to the storyboard organized by the director, so it is difficult for it to reflect changes in user interaction or in time and space; its advantage is that the sound effects, script sound, and background music are produced in one track and no coding phase is needed. Computer-based interactive sound engineering, on the other hand, produces the sound effects, script sound, and background music in separate files. It can increase immersion by reflecting user interaction or time and space, but it can also suffer from noise cancelling and sound collisions. Therefore, in this study, the following methods were devised and used to produce the sound for the VR content "A Midsummer Night's Palace" so as to take advantage of each sound-making technology. First, the storyboard is analyzed according to the user's interaction, identifying the sound effects, script sound, and background music required for each interaction. Second, the sounds are classified and analyzed as 'simultaneous sound' and 'individual sound'. Third, interaction coding is carried out for the sound effects, script sound, and background music produced in the simultaneous-sound and individual-sound categories. Then the contents are completed by applying the sound to the video. Through this process, sound-quality inhibitors such as noise cancelling can be removed while producing sound that fits the user interaction and time and space.
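The second step, splitting storyboard cues into 'simultaneous' and 'individual' sound, can be sketched as below. The cue fields are illustrative assumptions, not the production tool's actual schema:

```python
def classify_cues(cues):
    """Split storyboard sound cues into 'simultaneous' sounds (played
    together on the shared timeline) and 'individual' sounds (triggered
    by user interaction, so they need interaction coding)."""
    simultaneous, individual = [], []
    for cue in cues:
        target = individual if cue.get("interactive") else simultaneous
        target.append(cue["name"])
    return simultaneous, individual
```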

An Interactive Aerobic Training System Using Vision and Multimedia Technologies

  • Chalidabhongse, Thanarat H.; Noichaiboon, Alongkot
    • 제어로봇시스템학회 학술대회논문집 / ICCAS 2004 / pp. 1191-1194 / 2004
  • We describe the development of an interactive aerobic training system using vision-based motion capture and multimedia technology. Unlike traditional one-way aerobic training on TV, the proposed system allows the virtual trainer to observe and interact with the user in real time. The system is composed of a web camera connected to a PC that watches the user move. First, the animated character on the screen makes a move, and then instructs the user to follow its movement. The system applies a robust statistical background subtraction method to extract a silhouette of the moving user from the captured video. Subsequently, the principal body parts of the extracted silhouette are located using a model-based approach. The motion of these body parts is then analyzed and compared with the motion of the animated character. The system provides audio feedback to the user according to the result of the motion comparison. All the animation and video processing runs in real time on a PC-based system with a consumer-type camera. The proposed system is a good example of applying vision algorithms and multimedia technology to intelligent interactive home entertainment systems.
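A common form of statistical background subtraction, of the kind the abstract refers to, is a per-pixel k-sigma test against a background model. The paper's exact model and thresholds are not given, so the parameters here are assumptions:

```python
def foreground_mask(frame, mean, std, k=2.5):
    """Per-pixel statistical background subtraction: a pixel is
    foreground when it deviates from the background mean by more than
    k standard deviations. All inputs are 2D lists (rows x cols);
    std is floored at 1.0 to avoid degenerate thresholds."""
    return [
        [abs(f - m) > k * max(s, 1.0) for f, m, s in zip(fr, mr, sr)]
        for fr, mr, sr in zip(frame, mean, std)
    ]
```

The silhouette would then be the largest connected component of this mask, from which the model-based body-part search starts.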


Development of BIFS Contents Authoring System for T-DMB Interactive Data Service

  • 안상우; 차지훈; 문경애; 정원식
    • 방송공학회논문지 / Vol. 11, No. 3 / pp. 263-275 / 2006
  • This paper introduces a BIFS content authoring system that makes it easy and convenient to author BIFS content for the terrestrial DMB (T-DMB) interactive data service. T-DMB adopts the MPEG-4 BIFS standard for interactive broadcasting services, enabling various forms of interactive multimedia broadcasting. For such interactive services to take off, a smooth supply of diverse interactive content must come first. BIFS content for the T-DMB interactive broadcasting service is expressed as a combination of a large number of nodes, routes, and descriptors. Accordingly, authoring technology that lets users create interactive content easily and conveniently, even without technical knowledge of MPEG-4 BIFS, is essential for T-DMB interactive broadcasting services to flourish. The authoring system introduced in this paper was developed to comply with the T-DMB and MPEG-4 BIFS standards, and so that ordinary users with no knowledge of those standards can author interactive content easily and conveniently. It is therefore expected to contribute greatly to the vitalization of interactive broadcasting services over T-DMB.
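As a rough illustration of what such an authoring tool hides from the user, a high-level description of on-screen buttons can be expanded into a node-style textual scene. The node names below echo BIFS/VRML conventions, but the grammar is simplified and is not the tool's actual output:

```python
def to_bifs_like(buttons):
    """Expand a high-level button list into a simplified BIFS-flavored
    scene text: one Transform2D with a TouchSensor and Shape per button
    (routes and descriptors omitted for brevity)."""
    lines = ["OrderedGroup { children ["]
    for b in buttons:
        lines.append("  Transform2D { translation %d %d  # %s"
                     % (b["x"], b["y"], b["label"]))
        lines.append("    children [ TouchSensor {} Shape {} ] }")
    lines.append("] }")
    return "\n".join(lines)
```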

Framework-assisted Selective Page Protection for Improving Interactivity of Linux Based Mobile Devices

  • 김승준; 김정호; 홍성수
    • 정보과학회 논문지 / Vol. 42, No. 12 / pp. 1486-1494 / 2015
  • As mobile devices such as smartphones have become widespread, users expect fast responsiveness while using mobile applications. However, mobile applications often fail to provide the level of responsiveness users expect. One of the main causes of degraded responsiveness is delayed execution of interactive tasks due to excessive page faults: the resident pages of an interactive task are all too often selected as victim pages and evicted to storage because of page-cache competition with non-interactive tasks. To solve this problem, this paper proposes a framework-assisted selective page protection scheme. The proposed scheme identifies interactive tasks at the framework level and passes this information to the kernel, which protects the pages of interactive tasks during page replacement and thereby reduces page faults that occur while handling user input. Experimental results show that, compared with the existing system, the proposed scheme reduced the number of page faults by 37% and shortened response time by 11%.
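The kernel-side idea, choosing replacement victims while skipping pages owned by framework-identified interactive tasks, can be sketched as below. The field names and the plain-LRU stand-in are assumptions for illustration, not the paper's actual replacement policy:

```python
def pick_victim(pages, protected_tasks):
    """Choose the least-recently-used page whose owning task is NOT in
    the protected (interactive) set; fall back to plain LRU only when
    every candidate page is protected."""
    candidates = [p for p in pages if p["task"] not in protected_tasks]
    pool = candidates or pages
    return min(pool, key=lambda p: p["last_used"])
```

This shows why interactive tasks fault less: their resident pages stay cached as long as any non-interactive page remains evictable.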

Automatic Object Segmentation and Background Composition for Interactive Video Communications over Mobile Phones

  • Kim, Daehee; Oh, Jahwan; Jeon, Jieun; Lee, Junghyun
    • IEIE Transactions on Smart Processing and Computing / Vol. 1, No. 3 / pp. 125-132 / 2012
  • This paper proposes an automatic object segmentation and background composition method for video communication over consumer mobile phones. The object regions are extracted based on the motion and color variance of the first two frames. To combine the motion and variance information, the Euclidean distance between each motion-boundary pixel and the neighboring color-variance edge pixels is calculated, and the nearest edge pixel is labeled as the object boundary. The labeling results are refined using morphology for a more accurate and natural-looking boundary. The grow-cut segmentation algorithm begins on the expanded label map, where the inner and outer boundaries belong to the foreground and background, respectively. The segmented object region is then composited with a new background image stored a priori on the mobile phone. In the background composition process, the background motion is measured using optical flow, and the final result is synthesized by accurately locating the object region according to the motion information. This study can be considered an extended, improved version of an existing background composition algorithm that additionally considers motion information in video. The proposed segmentation algorithm reduces the computational complexity significantly by choosing the minimum resolution at each segmentation step. The experimental results showed that the proposed algorithm can generate a fast, accurate, and natural-looking background composition.
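The labeling step, snapping each motion-boundary pixel to its nearest color-variance edge pixel by Euclidean distance, can be sketched as follows (pixel coordinates as (x, y) tuples; a brute-force search stands in for whatever spatial indexing the implementation uses):

```python
def label_nearest_edges(motion_boundary, color_edges):
    """For each motion-boundary pixel, find the nearest color-variance
    edge pixel (Euclidean distance) and label it as object boundary.
    Returns the set of labeled edge pixels."""
    boundary = set()
    for mx, my in motion_boundary:
        nearest = min(color_edges,
                      key=lambda e: (e[0] - mx) ** 2 + (e[1] - my) ** 2)
        boundary.add(nearest)
    return boundary
```

The resulting boundary set would then be dilated into the expanded label map that seeds the grow-cut stage.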
