• Title/Summary/Keyword: 3D spatial interaction

3D Spatial Interaction Method using Visual Dynamics and Meaning Production of Character

  • Lim, Sooyeon
    • International Journal of Advanced Smart Convergence / v.7 no.3 / pp.130-139 / 2018
  • This study analyzes the relationship between characters and human meaning production through research on character-visualization artworks, and uses the results to develop a creative platform that visually expresses the formative and semantic dynamics of characters. The proposed 3D spatial interaction system generates transformations of a character in real time through interaction with the user and the deconstruction of the character's structure. Transformations that incorporate the viewer's intentions provide a dynamic visual representation and maximize the efficiency of meaning transfer by producing various related meanings. The system's method of dynamically deconstructing and reconstructing characters creates shapes that viewers could not previously have imagined, further extending the range of interpretation of a character's meaning. The proposed system therefore not only induces an active viewing attitude in viewers but also gives them the opportunity to enjoy the artwork and to exercise creativity as creators. By transforming characters in response to the viewer's gestures, the system elicits new gestures from the viewer in real time and exchanges emotions with viewers.

Using Spatial Ontology in the Semantic Integration of Multimodal Object Manipulation in Virtual Reality

  • Irawati, Sylvia;Calderon, Daniela;Ko, Hee-Dong
    • Proceedings of the HCI Society of Korea Conference / 2006.02a / pp.884-892 / 2006
  • This paper describes a framework for multimodal object manipulation in virtual environments. The gist of the proposed framework is the semantic integration of multimodal input using a spatial ontology and user context, combining the interpretation results from the individual inputs into a single interpretation. The spatial ontology, which describes the spatial relationships between objects, is used together with the current user context to resolve ambiguities in the user's commands. These commands are used to reposition objects in the virtual environment. We discuss how the spatial ontology is defined and used to help the user perform object placements in the virtual environment as they would in the real world.

Experimental Study of Spatial and Temporal Dynamics in Double Phase Conjugation

  • Kwak, Keum-Cheol;Yu, Yong-Hun;Lim, Tong-Kun;Lee, Dae-Eun;Son, Jung-Young
    • Journal of the Optical Society of Korea / v.3 no.2 / pp.41-46 / 1999
  • Spatial and temporal dynamics arising in a photorefractive crystal (BaTiO3) during double phase conjugation were studied experimentally. We studied the dynamical effects caused by the buildup of the diffraction grating and the turn-on of the phase-conjugated beams, as well as the spatial effects caused by the finite transverse coupling of the beams and their propagation directions. We observed conical emission in the double phase-conjugate mirror (DPCM). We believe that the various temporal and spatial instabilities are due to movement of the nonlinear grating. For real beam coupling and constructive interaction of the interference fringes in the crystal, we observed steady, periodic, and irregular temporal behavior. By calculating the correlation index, we found that the spatial correlation decreased as the transverse interaction region was increased.

Introducing Depth Camera for Spatial Interaction in Augmented Reality (증강현실 기반의 공간 상호작용을 위한 깊이 카메라 적용)

  • Yun, Kyung-Dahm;Woo, Woon-Tack
    • Proceedings of the HCI Society of Korea Conference / 2009.02a / pp.62-67 / 2009
  • Many interaction methods for augmented reality have attempted to reduce the difficulty of tracking interaction subjects by either allowing only a limited set of three-dimensional inputs or relying on auxiliary devices such as data gloves and paddles with fiducial markers. We propose Spatial Interaction (SPINT), a noncontact passive method that observes the occupancy state of the spaces around target virtual objects to interpret user input. A depth-sensing camera is introduced to construct the virtual space sensors, which are then used to manipulate the augmented space for interaction. The proposed method does not require any wearable device for tracking user input and allows versatile interaction types. The depth-perception anomaly caused by incorrect occlusion between real and virtual objects is also minimized for more precise interaction. Exhibits of dynamic content such as the Miniature AR System (MINARS) could benefit from this fluid 3D user interface.
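
  The occupancy-sensing idea described above can be illustrated with a minimal sketch (not the SPINT implementation itself; the function names, box representation, and noise threshold are illustrative assumptions): a box of space near a virtual object acts as a sensor, and it reports "occupied" when enough depth-camera points fall inside it.

  ```python
  def occupied(points, box, min_points=3):
      """Return True when at least `min_points` 3D points from a depth
      camera fall inside an axis-aligned box (xmin, xmax, ymin, ymax, zmin, zmax).
      Illustrative sketch only, not the SPINT method."""
      xmin, xmax, ymin, ymax, zmin, zmax = box
      hits = sum(1 for (x, y, z) in points
                 if xmin <= x <= xmax and ymin <= y <= ymax and zmin <= z <= zmax)
      return hits >= min_points

  # a hypothetical "space sensor" volume placed near a target virtual object
  sensor = (0.0, 0.1, 0.0, 0.1, 0.4, 0.6)
  hand = [(0.05, 0.05, 0.50), (0.06, 0.04, 0.52), (0.04, 0.06, 0.48)]
  assert occupied(hand, sensor)          # enough points: sensor fires
  assert not occupied(hand[:2], sensor)  # too few points: treated as noise
  ```

  The `min_points` threshold stands in for the kind of noise filtering a real depth camera would require; no gloves or markers are needed because only point occupancy is observed.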

Using Spatial Ontology in the Semantic Integration of Multimodal Object Manipulation in Virtual Reality

  • Irawati, Sylvia;Calderon, Daniela;Ko, Hee-Dong
    • Journal of the HCI Society of Korea / v.1 no.1 / pp.9-20 / 2006
  • This paper describes a framework for multimodal object manipulation in virtual environments. The gist of the proposed framework is the semantic integration of multimodal input using a spatial ontology and user context, combining the interpretation results from the individual inputs into a single interpretation. The spatial ontology, which describes the spatial relationships between objects, is used together with the current user context to resolve ambiguities in the user's commands. These commands are used to reposition objects in the virtual environment. We discuss how the spatial ontology is defined and used to help the user perform object placements in the virtual environment as they would in the real world.

Rational Design and Facile Fabrication of Tunable Nanostructures towards Biomedical Applications

  • Yu, Eun-A;Choe, Jong-Ho;Park, Gyu-Hwan
    • Proceedings of the Korean Vacuum Society Conference / 2016.02a / pp.105.2-105.2 / 2016
  • For the rational design and facile fabrication of novel nanostructures, we present a new approach to generating arrays of three-dimensionally tunable nanostructures by exploiting light-matter interaction. To create controlled three-dimensional (3D) nanostructures, we utilize the 3D spatial distribution of light, induced by the light-matter interaction, within the material to be patterned. As a systematic approach, we establish 3D modeling that integrates the physical and chemical effects of the photolithographic process. Based on a comprehensive analysis of the structural formation process and nanoscale features through this modeling, we realize three-dimensionally tunable nanostructures using a facile photolithographic process. Here we first demonstrate arrays of three-dimensionally controlled, stacked nanostructures with nanoscale, tunable layers. We expect this promising strategy to open new opportunities for producing arrays of tunable 3D nanostructures with a more accessible and facile fabrication process for various biomedical applications, ranging from biosensors to drug-delivery devices.

Visualizing Geographical Contexts in Social Networks

  • Lee, Yang-Won;Kim, Hyung-Joo
    • Spatial Information Research / v.14 no.4 s.39 / pp.391-401 / 2006
  • We propose a method for geographically enhanced representation of social networks and implement a Web-based 3D visualization of geographical contexts in social networks. A renovated social network graph is illustrated using two key components: (i) GWCMs (geographically weighted centrality measures), which reflect the differences in interaction intensity and spatial proximity among nodes, and (ii) the MSNG (map-integrated social network graph), which incorporates the GWCMs and the geographically referenced arrangement of nodes on a choropleth map. For the integrated 3D visualization of the renovated social network graph, we employ X3D (Extensible 3D), a standard 3D authoring tool for the Web. An experimental case study of regional R&D collaboration provides a visual clue to geographical contexts in social networks, including how social centralization relates to spatial centralization.
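
  The idea of a geographically weighted centrality measure can be sketched generically (this is not the paper's exact formulation; the inverse-distance weighting and all names are illustrative assumptions): a degree-like centrality in which each edge contributes its interaction intensity discounted by the geographic distance between its endpoints.

  ```python
  import math

  def gw_centrality(nodes, edges):
      """Toy geographically weighted centrality: each incident edge adds its
      interaction intensity divided by (1 + geographic distance).
      Illustrative sketch only, not the GWCM definition from the paper.

      nodes: {id: (x, y)} map coordinates
      edges: [(u, v, intensity), ...]
      """
      score = {n: 0.0 for n in nodes}
      for u, v, w in edges:
          (x1, y1), (x2, y2) = nodes[u], nodes[v]
          dist = math.hypot(x2 - x1, y2 - y1)
          contrib = w / (1.0 + dist)  # nearer ties count more
          score[u] += contrib
          score[v] += contrib
      return score

  nodes = {"A": (0, 0), "B": (0, 1), "C": (5, 0)}
  edges = [("A", "B", 2.0), ("A", "C", 2.0)]
  c = gw_centrality(nodes, edges)
  # A interacts equally with B and C, but B is geographically closer,
  # so B ends up with a higher score than C
  ```

  The point of such a measure is exactly what the abstract describes: two nodes with identical interaction intensity can still differ in centrality once spatial proximity is factored in.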

Experiencing the 3D Color Environment: Understanding User Interaction with a Virtual Reality Interface (3차원 가상 색채 환경 상에서 사용자의 감성적 인터랙션에 관한 연구)

  • Oprean, Danielle;Yoon, So-Yeon
    • Science of Emotion and Sensibility / v.13 no.4 / pp.789-796 / 2010
  • The purpose of this study was to test a large-screen, rear-projected virtual reality (VR) interface for color choice in environmental design. The study piloted a single three-dimensional model of a bedroom, including furniture, in different color combinations. Using a mouse with an 8' × 6' rear-projector screen, participants could move with 360-degree motion in each room. The study used 34 college students who viewed and interacted with virtual rooms projected on a large screen and then filled out a survey. This study aimed to understand the interaction between users and the VR interface through measurable dimensions of the interaction: interest and user perceptions of presence and emotion. Specifically, the study focused on spatial presence, topic involvement, and enjoyment. The findings should inform design researchers how empirical evidence involving environmental effects can be obtained using a VR interface and how users experience the interaction with the interface.

Visual Representation of Temporal Properties in Formal Specification and Analysis using a Spatial Process Algebra (공간 프로세스 대수를 이용한 정형 명세와 분석에서의 시간속성의 시각화)

  • On, Jin-Ho;Choi, Jung-Rhan;Lee, Moon-Kun
    • The KIPS Transactions: Part D / v.16D no.3 / pp.339-352 / 2009
  • There are a number of formal methods for analyzing and verifying the behavioral, temporal, and spatial properties of distributed real-time systems in ubiquitous computing. However, most of these methods reveal structural and fundamental limitations of complexity due to the mixture of spatial and behavioral representations, and temporal specification makes the complexity even worse. To overcome these limitations, this paper presents a new formal method, called the Timed Calculus of Abstract Real-Time Distribution, Mobility and Interaction (t-CARDMI). t-CARDMI separates spatial representation from behavioral representation to simplify the complexity, and permits temporal specification only in the behavioral representation to reduce the complexity further. The distinctive temporal features of t-CARDMI include waiting time, execution time, deadline, timeout action, and periodic action, in both movement and interaction behaviors. For analysis and verification of the spatial and temporal properties of specified systems, t-CARDMI provides the Timed Action Graph (TAG), in which spatial and temporal properties are visually represented in a two-dimensional diagram with a pictorial distribution of movements and interactions. t-CARDMI can be considered one of the most innovative formal methods for specifying, analyzing, and verifying the spatial, behavioral, and temporal properties of distributed real-time systems in ubiquitous computing efficiently and effectively. The paper presents the formal syntax and semantics of t-CARDMI, together with a tool called SAVE, for a ubiquitous healthcare application.

A Study on the Comparison Between Full-3D and Quasi-1D Supercompact Multiwavelets (Full-3D와 Quasi-1D Supercompact Multiwavelets의 비교 연구)

  • Park, June-Pyo;Lee, Do-Hyung;Kwon, Do-Hoon
    • Transactions of the Korean Society of Mechanical Engineers B / v.28 no.12 / pp.1608-1615 / 2004
  • CFD data-compression methods based on Full-3D and Quasi-1D supercompact multiwavelets are presented. The supercompact wavelet method offers the advantage of higher-order accurate representation with compact support, so it avoids unnecessary interaction with remotely located data across singularities such as shocks. Full-3D wavelets entail appropriate cross-derivative scaling functions and wavelets, and hence allow highly accurate multi-spatial data representation. The Quasi-1D method instead adopts 1D multiresolution, alternating the directions rather than solving the huge transformation matrix of the Full-3D method, so data processing is efficient and relatively handy. Several numerical tests show swift data processing as well as high compression ratios for CFD simulation data.
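
  The alternating-direction idea behind the Quasi-1D approach can be sketched with a single level of the ordinary Haar transform applied along each axis of a 3D array in turn (a generic separable transform, assumed here for illustration; the paper uses supercompact multiwavelets, not Haar).

  ```python
  import numpy as np

  def haar_1d(a, axis):
      """One level of the orthonormal Haar transform along one axis:
      pairwise averages (low-pass) followed by pairwise differences (high-pass).
      The axis length must be even."""
      a = np.moveaxis(a, axis, 0)
      even, odd = a[0::2], a[1::2]
      low = (even + odd) / np.sqrt(2.0)
      high = (even - odd) / np.sqrt(2.0)
      out = np.concatenate([low, high], axis=0)
      return np.moveaxis(out, 0, axis)

  def quasi_1d_3d(data):
      """Quasi-1D style: transform the data axis by axis instead of
      assembling one large 3D transformation matrix."""
      for axis in range(3):
          data = haar_1d(data, axis)
      return data

  cube = np.ones((4, 4, 4))   # perfectly smooth data
  coeffs = quasi_1d_3d(cube)
  # smooth data compresses well: only the low-pass corner block is nonzero,
  # and all high-pass coefficients vanish
  ```

  This is why smooth CFD regions compress strongly: away from singularities the high-pass coefficients are (near) zero and can be discarded, while the transform's orthonormality preserves the data's total energy.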