• Title/Summary/Keyword: Object Interaction

Search results: 529

Human-Object Interaction Framework Using RGB-D Camera (RGB-D 카메라를 사용한 사용자-실사물 상호작용 프레임워크)

  • Baek, Yong-Hwan;Lim, Changmin;Park, Jong-Il
    • Journal of Broadcast Engineering / v.21 no.1 / pp.11-23 / 2016
  • Today, touch is the most widely used interface for communicating with digital devices. Because of its usability, touch technology is applied almost everywhere, from watches to advertising boards, and its use keeps growing. However, the technology has a critical weakness: a touch input device normally requires a contact surface with touch sensors embedded in it, so touch interaction through ordinary objects such as books or documents is still unavailable. In this paper, a human-object interaction framework based on an RGB-D camera is proposed to overcome this limitation. The proposed framework handles occluded situations, such as a hand hovering over an object or an object being moved by hand, in which object recognition and hand gesture recognition algorithms typically fail, and it does so without performance loss. The framework determines the status of each detected region with a fast and robust object recognition algorithm, deciding whether it is an object or a human hand, and the hand gesture recognition algorithm then controls the context of each object through gestures almost simultaneously.
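
As an aside on how such a framework can tell a hovering hand apart from the object beneath it, the sketch below uses only the depth channel of an RGB-D frame and a known table depth. It is an illustrative assumption, not the paper's algorithm; the thresholds, array shapes, and function names are invented for the example.

```python
# Toy sketch: separate a hand hovering above a flat object from the object itself
# using the depth channel of an RGB-D frame. The paper's actual recognition and
# gesture algorithms are not shown; thresholds and shapes are illustrative only.
import numpy as np

def split_hand_and_object(depth_mm, table_depth_mm, object_height_mm=30, hand_gap_mm=15):
    """depth_mm: (H, W) depth image in millimeters, larger value = farther away."""
    height_above_table = table_depth_mm - depth_mm               # elevation map
    object_mask = (height_above_table > 5) & (height_above_table <= object_height_mm)
    hand_mask = height_above_table > object_height_mm + hand_gap_mm
    return object_mask, hand_mask

depth = np.full((480, 640), 900.0)          # synthetic frame: table at 900 mm
depth[200:280, 300:400] = 880.0             # a 20 mm-thick object on the table
depth[100:180, 320:380] = 820.0             # a hand hovering well above it
obj_mask, hand_mask = split_hand_and_object(depth, table_depth_mm=900.0)
```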

The Effects of Interaction with an Object and with an Adult on Young Children's Cognitive Level (도구 및 성인과의 상호작용이 유아의 인지수준에 미치는 효과)

  • Lee, Soeun;Song, Ji-Young
    • Korean Journal of Child Studies / v.23 no.1 / pp.71-85 / 2002
  • This study examined the effects of different interaction styles, that is, interaction with an object and interaction with an adult, on young children's cognitive level. Subjects were 150 5-year-old children. The task required children to predict the working of a mathematical balance beam, and seven cognitive levels were identified based on the logic of prediction. Data were analyzed by t-test, F-test, Duncan test, and Wilcoxon matched-pairs test. Results showed that both interaction styles improved children's cognitive level, but when interaction with an adult was divided into two categories, i.e., interaction with the higher group and interaction with the lower group, the latter experienced a decline in cognitive level. Regardless of sex, interactions within the Zone of Proximal Development and with the object were found to be effective methods for children's cognitive improvement.

  • PDF

Mother-Infant Interaction Styles Associated with Infant Development (어머니-영아간의 상호작용방식이 영아발달에 미치는 영향)

  • Park, Sung-Yun;Soe, So-Jung;Bornstein, M.
    • Korean Journal of Child Studies / v.26 no.5 / pp.15-30 / 2005
  • The subjects of this study were 31 mothers and their first-born infants from middle-class families living in Seoul. Mother-infant interactions were filmed at 5 and 13 months of age during naturalistic play situations at home, and questionnaire data were also collected. Results revealed that both maternal didactic and social interactions decreased over the 5-to-13-month period, whereas infants' object-oriented interaction increased over time. Infant object-oriented interaction at 13 months was predicted by the cumulative effects of the mother's social stimulation at 5 months and the infant's social interaction at 13 months. Infant social interaction at 13 months was predicted by the infant's object-oriented interaction at 13 months, and infant language development was predicted by the mother's didactic stimulation.

  • PDF

Human-Object Interaction Detection Data Augmentation Using Image Concatenation (이미지 이어붙이기를 이용한 인간-객체 상호작용 탐지 데이터 증강)

  • Sang-Baek Lee;Kyu-Chul Lee
    • KIPS Transactions on Software and Data Engineering / v.12 no.2 / pp.91-98 / 2023
  • Human-object interaction (HOI) detection requires both object detection and interaction recognition, and a large amount of data is needed to train a detection model. Currently available open datasets are insufficient in scale to train such models adequately. In this paper, we propose two easy and effective data augmentation methods for human-object interaction detection, Simple Quattro Augmentation (SQA) and Random Quattro Augmentation (RQA). We show that the proposed methods can be easily integrated into state-of-the-art HOI detection models on the HICO-DET dataset.
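
The abstract does not spell out the SQA/RQA procedures, but the name suggests tiling four training images into one. The sketch below shows that general idea under that assumption: four annotated images are placed on a 2x2 canvas and their bounding boxes are shifted into the corresponding quadrant. All names and shapes are illustrative, not the paper's implementation.

```python
# Minimal sketch of quattro-style augmentation: tile four annotated images into a
# 2x2 canvas and shift each image's bounding boxes into its quadrant.
import numpy as np

def quattro_concat(samples):
    """samples: list of 4 dicts {"image": HxWx3 uint8 array, "boxes": Nx4 [x1,y1,x2,y2]}."""
    assert len(samples) == 4
    h = max(s["image"].shape[0] for s in samples)
    w = max(s["image"].shape[1] for s in samples)
    canvas = np.zeros((2 * h, 2 * w, 3), dtype=np.uint8)
    boxes_out = []
    offsets = [(0, 0), (w, 0), (0, h), (w, h)]   # top-left corners of the 4 quadrants
    for (dx, dy), s in zip(offsets, samples):
        img = s["image"]
        boxes = np.asarray(s["boxes"], dtype=np.float32)
        ih, iw = img.shape[:2]
        canvas[dy:dy + ih, dx:dx + iw] = img
        if boxes.size:
            boxes_out.append(boxes + np.array([dx, dy, dx, dy], dtype=np.float32))
    boxes_out = np.concatenate(boxes_out, axis=0) if boxes_out else np.zeros((0, 4), np.float32)
    return canvas, boxes_out
```

A random variant in this spirit could shuffle which source image lands in which quadrant, or resample which four images are combined, before tiling.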

Using Spatial Ontology in the Semantic Integration of Multimodal Object Manipulation in Virtual Reality

  • Irawati, Sylvia;Calderon, Daniela;Ko, Hee-Dong
    • Proceedings of the HCI Society of Korea Conference / 2006.02a / pp.884-892 / 2006
  • This paper describes a framework for multimodal object manipulation in virtual environments. The gist of the proposed framework is the semantic integration of multimodal input, using a spatial ontology and the user context to merge the interpretation results of the individual inputs into a single interpretation. The spatial ontology, which describes the spatial relationships between objects, is used together with the current user context to resolve ambiguities in the user's commands. These commands are used to reposition objects in the virtual environment. We discuss how the spatial ontology is defined and used to help the user place objects in the virtual environment as they would be placed in the real world.

  • PDF
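
To make the idea of resolving a command against spatial relations concrete, here is a toy sketch in Python. It reduces a "near" relation to a distance threshold over object positions; the paper's ontology and context reasoning are far richer, and every name below is hypothetical.

```python
# Toy sketch of resolving an ambiguous command ("move the book near the lamp")
# with a simple spatial predicate, assuming each scene object has a 3D position.
import math

scene = {
    "book_1": (0.2, 0.0, 0.1),
    "book_2": (1.5, 0.0, 0.3),
    "lamp":   (0.3, 0.0, 0.2),
}

def distance(a, b):
    return math.dist(scene[a], scene[b])

def resolve_near(candidate_type, anchor, threshold=0.5):
    """Pick the candidate of the given type that is 'near' the anchor object."""
    candidates = [name for name in scene if name.startswith(candidate_type) and name != anchor]
    near = [(distance(name, anchor), name) for name in candidates if distance(name, anchor) <= threshold]
    return min(near)[1] if near else None

print(resolve_near("book", "lamp"))  # -> "book_1", the only book within the threshold
```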

Vision-Based Activity Recognition Monitoring Based on Human-Object Interaction at Construction Sites

  • Chae, Yeon;Lee, Hoonyong;Ahn, Changbum R.;Jung, Minhyuk;Park, Moonseo
    • International Conference on Construction Engineering and Project Management / 2022.06a / pp.877-885 / 2022
  • Vision-based activity recognition has been widely attempted at construction sites to estimate productivity and enhance workers' health and safety. Previous studies have focused on extracting an individual worker's postural information from sequential image frames for activity recognition. However, workers of different trades perform different tasks with similar postural patterns, which degrades the performance of activity recognition based on postural information alone. To this end, this research exploited the concept of human-object interaction, the interaction between a worker and the surrounding objects, considering that trade workers interact with specific objects (e.g., working tools or construction materials) relevant to their trades. This research developed an approach to understand the context of sequential image frames based on four features: posture, object, spatial, and temporal features. The posture and object features were used to analyze the interaction between the worker and the target object, while the other two features were used to detect movements across the entire image frame in the temporal and spatial domains. The developed approach used convolutional neural networks (CNNs) as feature extractors and activity classifiers, and long short-term memory (LSTM) was also used as an activity classifier. The approach achieved an average accuracy of 85.96% in classifying 12 target construction tasks performed by workers of two trades, higher than two benchmark models. This result indicates that integrating the concept of human-object interaction offers great benefits for activity recognition when workers of various trades coexist in a scene.

  • PDF
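
For readers who want a concrete picture of the "CNN feature extractor plus LSTM classifier" layout mentioned above, the following PyTorch sketch encodes each frame with a CNN backbone and classifies the sequence with an LSTM. The backbone choice, hidden size, and 12-class head are assumptions made for illustration, not the paper's reported architecture.

```python
# Minimal sketch of a CNN + LSTM activity classifier over image sequences:
# per-frame CNN features are fed to an LSTM whose last state drives the classifier.
import torch
import torch.nn as nn
from torchvision import models

class CnnLstmClassifier(nn.Module):
    def __init__(self, num_classes=12, hidden_size=256):
        super().__init__()
        backbone = models.resnet18(weights=None)   # per-frame feature extractor
        backbone.fc = nn.Identity()                # keep the 512-d pooled features
        self.backbone = backbone
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, clips):                      # clips: (batch, time, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1)) # (batch*time, 512)
        feats = feats.view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)             # last hidden state summarizes the clip
        return self.head(h_n[-1])                  # (batch, num_classes)

logits = CnnLstmClassifier()(torch.randn(2, 8, 3, 224, 224))  # toy forward pass
```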

A Study on the Interactive relationship of Object factors and Space (공간과 오브제 요소의 인터랙션에 관한 연구)

  • Lee Chan;Bae Yun-Joon
    • Korean Institute of Interior Design Journal / v.14 no.6 s.53 / pp.103-111 / 2005
  • Since the Industrial Revolution of the 18th century, production systems based on mechanization and mass production have popularized and standardized society as a whole. Architectural space suppressed people's basic desire for decoration, neglected the historical and regional continuity of architecture, and produced uniform designs, so that human sensibility and emotion were excluded. In line with this social change, architectural space took on arbitrary, functionalist features and became abstract space without personality. This limitation expanded the value of space conceived not as possession and residence but as a medium of communication with people, expressed through object factors. These object factors of space are expressed through either material or non-material elements. This study investigated the interaction between space and objects in terms of both expressive interaction and potential interaction, identified key words to frame the investigation, and examined cases comprehensively. The purpose of the study was to recover a horizontal, mutually communicative relationship between space and people and to present the possibility of integrating people and space.

Comparison of User Interaction Alternatives in a Tangible Augmented Reality Environment (감각형 증강현실 기반 상호작용 대안들의 비교)

  • Park, Sang-Jin;Jung, Ho-Kyun;Park, Hyungjun
    • Korean Journal of Computational Design and Engineering / v.17 no.6 / pp.417-425 / 2012
  • In recent years, great attention has been paid to using simple physical objects as tangible objects to improve user interaction in augmented reality (AR) environments. In this paper, we address AR-based user interaction using tangible objects, which has been a key component of virtual design evaluation of engineered products, including digital handheld products. We consider two types of tangible objects: product-type and pointer-type. The user creates input events by touching specified parts of the product-type object with the pointer-type object, and the virtual product reacts to the events by rendering its visual and auditory content on the output devices. The product-type object reflects the geometric shape of the product of interest and determines its position and orientation in the AR environment, while the pointer-type object is used to recognize the reference position of the pointer (or finger) in the same environment. A rapid prototype of the product serves as a good alternative to the product-type object, but various alternatives to the pointer-type object can be considered depending on the fabrication process and touching mechanism. In this paper, we present four alternatives to the pointer-type object and investigate their strengths and weaknesses through an experimental comparison covering interaction accuracy, task performance, and qualitative user experience.
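
One way to picture the pointer-on-product touch mechanism described above is a simple hit test: express the pointer tip in the product object's local frame and check it against named touch regions. The sketch below is a toy illustration; the region names, sizes, and poses are hypothetical, and a real system would obtain the poses from AR tracking.

```python
# Toy sketch of the pointer/product interaction: transform the pointer tip into the
# product's local coordinate frame and test it against named touch regions.
import numpy as np

# Axis-aligned touch regions defined in the product's local frame: (min_xyz, max_xyz)
touch_regions = {
    "power_button": (np.array([0.00, 0.00, 0.00]), np.array([0.01, 0.01, 0.005])),
    "volume_up":    (np.array([0.02, 0.00, 0.00]), np.array([0.03, 0.01, 0.005])),
}

def to_local(point_world, product_rotation, product_translation):
    """Transform a world-space point into the product's local frame."""
    return product_rotation.T @ (point_world - product_translation)

def hit_test(pointer_tip_world, product_rotation, product_translation):
    tip_local = to_local(pointer_tip_world, product_rotation, product_translation)
    for name, (lo, hi) in touch_regions.items():
        if np.all(tip_local >= lo) and np.all(tip_local <= hi):
            return name                      # this region's input event would be raised
    return None

event = hit_test(np.array([0.005, 0.005, 0.002]), np.eye(3), np.zeros(3))
print(event)  # -> "power_button" when the pointer tip lies inside that region
```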

Real-time Simulation Technique for Visual-Haptic Interaction between SPH-based Fluid Media and Soluble Solids (SPH 기반의 유체 및 용해성 강체에 대한 시각-촉각 융합 상호작용 시뮬레이션)

  • Kim, Seokyeol;Park, Jinah
    • Journal of the Korean Society of Visualization / v.15 no.1 / pp.32-40 / 2017
  • Interaction between a fluid and a rigid object is frequently observed in everyday life, but it is difficult to simulate because the medium and the object have different representations. One particularly challenging issue is handling the deformation of the object visually while also rendering haptic feedback. In this paper, we propose a real-time simulation technique for multimodal interaction between particle-based fluids and soluble solids. We develop a dissolution behavior model for solids, discretized following the idea of smoothed particle hydrodynamics (SPH), in which the changes in physical properties accompanying dissolution are immediately reflected in the object. The user can intervene in the simulation at any time by manipulating the solid object, with both visual and haptic feedback delivered on the fly. For immersive visualization, we also adopt a screen-space fluid rendering technique that balances realism and performance.
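
As background on the particle discretization the paper builds on, the sketch below computes SPH densities with the standard poly6 kernel. It does not reproduce the paper's dissolution or haptic coupling models; the particle counts and support radius are arbitrary example values.

```python
# Minimal sketch of SPH density estimation with the standard poly6 smoothing kernel.
import numpy as np

def poly6(r2, h):
    """Poly6 kernel evaluated on squared distances r2 with support radius h."""
    coeff = 315.0 / (64.0 * np.pi * h ** 9)
    return np.where(r2 < h * h, coeff * (h * h - r2) ** 3, 0.0)

def sph_density(positions, masses, h):
    """positions: (N, 3) particle positions; masses: (N,); returns (N,) densities."""
    diff = positions[:, None, :] - positions[None, :, :]   # pairwise offsets
    r2 = np.sum(diff * diff, axis=-1)                      # squared distances
    return (masses[None, :] * poly6(r2, h)).sum(axis=1)

positions = np.random.rand(100, 3) * 0.1
densities = sph_density(positions, np.full(100, 0.02), h=0.04)
```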

Using Spatial Ontology in the Semantic Integration of Multimodal Object Manipulation in Virtual Reality

  • Irawati, Sylvia;Calderon, Daniela;Ko, Hee-Dong
    • Journal of the HCI Society of Korea / v.1 no.1 / pp.9-20 / 2006
  • This paper describes a framework for multimodal object manipulation in virtual environments. The gist of the proposed framework is the semantic integration of multimodal input, using a spatial ontology and the user context to merge the interpretation results of the individual inputs into a single interpretation. The spatial ontology, which describes the spatial relationships between objects, is used together with the current user context to resolve ambiguities in the user's commands. These commands are used to reposition objects in the virtual environment. We discuss how the spatial ontology is defined and used to help the user place objects in the virtual environment as they would be placed in the real world.

  • PDF