• Title/Abstract/Keywords: object-human interaction

Search results: 127 items (processing time: 0.028 s)

이미지 이어붙이기를 이용한 인간-객체 상호작용 탐지 데이터 증강 (Human-Object Interaction Detection Data Augmentation Using Image Concatenation)

  • 이상백;이규철
    • 정보처리학회논문지:소프트웨어 및 데이터공학 / Vol. 12, No. 2 / pp. 91-98 / 2023
  • Human-object interaction detection must solve object detection and interaction recognition jointly, so training a detection model requires large amounts of data. The publicly available datasets are limited in scale, and the demand for data augmentation techniques is therefore growing; however, most studies still rely on augmentation techniques carried over from object detection and image segmentation. This study analyzes the characteristics of the datasets used in human-object interaction detection and, based on that analysis, proposes a data augmentation technique that effectively improves the performance of human-object interaction detection models. To validate the proposed technique, we built an experimental environment and applied the technique to existing training models, confirming that it can improve detection performance.
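The concatenation idea above can be sketched minimally: joining two annotated images side by side yields a new training image whose human-object pairs remain valid once the second image's bounding boxes are shifted right. The sample layout, field names, and box format below are illustrative assumptions, not the paper's actual data structures.

```python
def concat_samples(sample_a, sample_b):
    """Concatenate two annotated images side by side (illustrative sketch).

    Each sample is a dict with 'width', 'height', and 'boxes', where a box is
    (x, y, w, h, label). Boxes from the second image are shifted right by the
    first image's width, so every annotated human-object pair stays valid
    while each training image carries more pairs.
    """
    offset = sample_a["width"]
    shifted = [(x + offset, y, w, h, label)
               for (x, y, w, h, label) in sample_b["boxes"]]
    return {
        "width": sample_a["width"] + sample_b["width"],
        "height": max(sample_a["height"], sample_b["height"]),
        "boxes": sample_a["boxes"] + shifted,
    }
```

Vertical concatenation would follow the same logic with the y coordinate and the image heights.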

RGB-D 카메라를 사용한 사용자-실사물 상호작용 프레임워크 (Human-Object Interaction Framework Using RGB-D Camera)

  • 백용환;임창민;박종일
    • 방송공학회논문지 / Vol. 21, No. 1 / pp. 11-23 / 2016
  • Among recently introduced interaction interfaces, the touch interface stands out for its usability and broad applicability. With advances in technology, touch interfaces have spread rapidly across modern society, from watches to billboards. However, touch interaction is still possible only within a valid region equipped with embedded contact sensors, so interaction with ordinary objects that cannot embed sensors remains impossible. To overcome this limitation, this paper proposes a human-object interaction framework using an RGB-D camera. The proposed framework can sustain interaction even when the object and the user's hand occlude each other, and it enables real-time interaction with objects through a fast object recognition algorithm robust to scale change and rotation, together with a contour-based hand gesture recognition algorithm.
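As a rough illustration of how an RGB-D framework might decide that a fingertip touches an uninstrumented object, the depth of the tracked fingertip can be compared with the object's surface depth at the same pixel. The function name and the 15 mm tolerance below are assumptions for the sketch, not details from the paper.

```python
def is_touching(finger_depth_mm, surface_depth_mm, tolerance_mm=15.0):
    """Decide whether a fingertip is in contact with a real object's surface.

    A minimal depth test: the fingertip position (e.g., from hand-contour
    tracking) counts as a touch when its camera-measured depth lies within a
    small tolerance of the object surface depth at the same pixel.
    """
    return abs(finger_depth_mm - surface_depth_mm) <= tolerance_mm
```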

Using Spatial Ontology in the Semantic Integration of Multimodal Object Manipulation in Virtual Reality

  • Irawati, Sylvia;Calderon, Daniela;Ko, Hee-Dong
    • 한국HCI학회:학술대회논문집 / 2006 Conference, Part 1 / pp. 884-892 / 2006
  • This paper describes a framework for multimodal object manipulation in virtual environments. The gist of the proposed framework is the semantic integration of multimodal input using a spatial ontology and user context to merge the interpretation results of the inputs into a single interpretation. The spatial ontology, which describes the spatial relationships between objects, is used together with the current user context to resolve ambiguities in the user's commands. These commands are used to reposition objects in the virtual environment. We discuss how the spatial ontology is defined and used to help the user perform object placements in the virtual environment as they would be performed in the real world.

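The spatial-ontology disambiguation described in the entry above can be illustrated with a toy example: filter the candidate objects by whether the commanded spatial relation is physically plausible according to the ontology, then fall back on user context. The ontology encoding and all names below are hypothetical, not the authors' representation.

```python
# Toy spatial ontology: which (object, relation, landmark) triples are
# physically plausible. A real ontology would be far richer.
SPATIAL_ONTOLOGY = {
    ("cup", "on", "table"): True,   # a cup can rest on a table
    ("table", "on", "cup"): False,  # but not the reverse
}

def resolve_target(candidates, relation, landmark, last_selected):
    """Pick the object the user most plausibly meant by an ambiguous command.

    Keep only candidates for which the spatial relation is valid per the
    ontology, then prefer the object from the user context (the most
    recently selected one).
    """
    valid = [c for c in candidates
             if SPATIAL_ONTOLOGY.get((c, relation, landmark), False)]
    if last_selected in valid:
        return last_selected
    return valid[0] if valid else None
```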

Vision-Based Activity Recognition Monitoring Based on Human-Object Interaction at Construction Sites

  • Chae, Yeon;Lee, Hoonyong;Ahn, Changbum R.;Jung, Minhyuk;Park, Moonseo
    • 국제학술발표논문집 / The 9th International Conference on Construction Engineering and Project Management / pp. 877-885 / 2022
  • Vision-based activity recognition has been widely attempted at construction sites to estimate productivity and enhance workers' health and safety. Previous studies focused on extracting an individual worker's postural information from sequential image frames for activity recognition. However, workers of different trades perform different tasks with similar postural patterns, which degrades the performance of activity recognition based on postural information alone. To address this, this research exploited the concept of human-object interaction, the interaction between a worker and their surrounding objects, based on the fact that trade workers interact with specific objects (e.g., working tools or construction materials) relevant to their trades. The research developed an approach to understand the context of sequential image frames from four feature types: posture, object, spatial, and temporal. The posture and object features were used to analyze the interaction between the worker and the target object, while the other two were used to detect movements across the entire image frame in the temporal and spatial domains. The approach used convolutional neural networks (CNNs) as feature extractors and activity classifiers, with long short-term memory (LSTM) networks also serving as activity classifiers. It achieved an average accuracy of 85.96% in classifying 12 target construction tasks performed by workers of two trades, higher than two benchmark models. This result indicates that integrating the concept of human-object interaction offers substantial benefits for activity recognition when workers of various trades coexist in a scene.

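The four-feature design described in the entry above can be sketched as a simple fusion step: per-frame posture, object, spatial, and temporal embeddings are concatenated into one sequence that a temporal classifier such as an LSTM would consume. The dict layout and the plain-list "embeddings" below are stand-ins, not the authors' implementation.

```python
def build_sequence_features(frames):
    """Fuse four per-frame feature groups into one sequence.

    Each frame dict holds 'posture', 'object', 'spatial', and 'temporal'
    feature vectors (plain lists standing in for CNN embeddings). The fused
    per-frame vectors form the input sequence for a temporal classifier.
    """
    return [f["posture"] + f["object"] + f["spatial"] + f["temporal"]
            for f in frames]
```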

어머니-영아간의 상호작용방식이 영아발달에 미치는 영향 (Mother-Infant Interaction Styles Associated with Infant Development)

  • 박성연;서소정
    • 아동학회지 / Vol. 26, No. 5 / pp. 15-30 / 2005
  • The subjects of this study were 31 mothers and their first-born infants from middle-class families living in Seoul. Mother-infant interactions were filmed at 5 and 13 months of age during naturalistic play situations at home, and questionnaire data were also collected. Results revealed that both maternal didactic and social interactions decreased over the 5-to-13-month period, whereas infants' object-oriented interaction increased over time. Infant object-oriented interaction at 13 months was predicted by the cumulative effects of the mother's social stimulation at 5 months and the infant's social interaction at 13 months. The infant's social interaction at 13 months was predicted by the infant's object-oriented interaction at 13 months, and infant language development was predicted by the mother's didactic stimulation.


Stereo-Vision-Based Human-Computer Interaction with Tactile Stimulation

  • Yong, Ho-Joong;Back, Jong-Won;Jang, Tae-Jeong
    • ETRI Journal / Vol. 29, No. 3 / pp. 305-310 / 2007
  • If a virtual object in a virtual environment represented by a stereo vision system could be touched by a user with some tactile feeling on his/her fingertip, the sense of reality would be heightened. To create a visual impression as if the user were directly pointing to a desired point on a virtual object with his/her own finger, we need to align virtual space coordinates and physical space coordinates. Also, if there is no tactile feeling when the user touches a virtual object, the virtual object would seem to be a ghost. Therefore, a haptic interface device is required to give some tactile sensation to the user. We have constructed such a human-computer interaction system in the form of a simple virtual reality game using a stereo vision system, a vibro-tactile device module, and two position/orientation sensors.

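Aligning virtual and physical space coordinates, as the entry above requires for direct finger pointing, can be sketched in its simplest form as a per-axis scale-and-offset calibration; a full system would estimate a rigid 3-D transform from calibration points. The function name and parameters are illustrative assumptions.

```python
def sensor_to_virtual(point, scale, offset):
    """Map a physical sensor coordinate into virtual-space coordinates.

    A minimal per-axis calibration: scale each axis, then translate. 'scale'
    and 'offset' would come from a calibration step matching known physical
    positions against their desired virtual positions.
    """
    return tuple(scale[i] * point[i] + offset[i] for i in range(3))
```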

Using Spatial Ontology in the Semantic Integration of Multimodal Object Manipulation in Virtual Reality

  • Irawati, Sylvia;Calderon, Daniela;Ko, Hee-Dong
    • 한국HCI학회논문지 / Vol. 1, No. 1 / pp. 9-20 / 2006
  • This paper describes a framework for multimodal object manipulation in virtual environments. The gist of the proposed framework is the semantic integration of multimodal input using a spatial ontology and user context to merge the interpretation results of the inputs into a single interpretation. The spatial ontology, which describes the spatial relationships between objects, is used together with the current user context to resolve ambiguities in the user's commands. These commands are used to reposition objects in the virtual environment. We discuss how the spatial ontology is defined and used to help the user perform object placements in the virtual environment as they would be performed in the real world.


물체-행동 컨텍스트를 이용하는 확률 그래프 기반 물체 범주 인식 (Probabilistic Graph Based Object Category Recognition Using the Context of Object-Action Interaction)

  • 윤성백;배세호;박한재;이준호
    • 한국통신학회논문지 / Vol. 40, No. 11 / pp. 2284-2290 / 2015
  • Human action is a highly effective context cue for improving category recognition of objects with large appearance variation. In this study, human actions are used as context information for object category recognition through a simple probabilistic graph model based on a Bayesian approach. Experiments on cups, phones, scissors, and spray bottles of various appearances showed that recognizing the human action associated with an object's use improved object recognition performance by 8% to 28%.
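The Bayesian re-weighting described in the entry above can be illustrated as a single posterior update: an appearance-based prior over object categories is multiplied by the likelihood of the observed action given each category, then normalized. The probability values in the test are made up for illustration; this is a sketch of the general Bayes rule involved, not the paper's graph model.

```python
def posterior_over_objects(prior, action_likelihood, observed_action):
    """Update object-category beliefs from an observed human action.

    prior: P(object), e.g., from an appearance-based recognizer.
    action_likelihood: P(action | object) as nested dicts.
    Observing an action (e.g., a drinking motion for a cup) re-weights the
    appearance-based belief via Bayes' rule.
    """
    unnorm = {obj: prior[obj] * action_likelihood[obj].get(observed_action, 0.0)
              for obj in prior}
    z = sum(unnorm.values()) or 1.0  # avoid division by zero
    return {obj: p / z for obj, p in unnorm.items()}
```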

An Art-Robot Expressing Emotion with Color Light and Behavior by Human-Object Interaction

  • Kwon, Yanghee;Kim, Sangwook
    • Journal of Multimedia Information System / Vol. 4, No. 2 / pp. 83-88 / 2017
  • The era of the fourth industrial revolution, which will bring a great wave of change in the 21st century, is an age of hyper-connection linking humans to humans, objects to objects, and humans to objects. In the evolving smart city and smart space, emotional engineering is an interdisciplinary research field that continues to attract attention as technology develops. This paper proposes an emotional object prototype as a possibility for emotional interaction between human and object. By suggesting emotional objects that produce color changes and movements through emotional interactions between humans and objects, in response to a current social issue, the loneliness of modern people, we examine how relationships with objects influence our lives. Emotional objects approached from this fundamental view are expected to find a place in our future living spaces as viable cultural intermediaries.

Near-body Interaction Enhancement with Distance Perception Matching in Immersive Virtual Environment

  • Yang, Ungyeon;Kim, Nam-Gyu
    • Journal of Multimedia Information System / Vol. 8, No. 2 / pp. 111-120 / 2021
  • As recent virtual reality technologies provide a more natural three-dimensional interactive environment, users naturally learn to explore space and interact with synthetic objects. Virtual reality researchers develop techniques that provide realistic sensory feedback in response to users' input behavior. Although much recent virtual reality research considers human factors extensively, adapting to every new virtual environment's content is not easy. Among the many human factors, distance perception has been treated as very important for interaction accuracy in virtual environments. We study an experiential virtual environment in which virtual objects are connected with real objects. We divide three-dimensional interaction, in which distance perception and behavior have a significant influence, into two types (whole-body movement and direct manipulation) and analyze the heterogeneity between real and virtual visual distance perception. We also propose a statistical correction method that can reduce near-body movement and manipulation errors when the interaction location changes, and report experimental results demonstrating its effectiveness.
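The statistical correction proposed in the entry above can be sketched as an ordinary least-squares fit mapping perceived distance to actual distance from calibration pairs. The paper's actual method may differ; this is only an assumed linear model for illustration.

```python
def fit_linear_correction(perceived, actual):
    """Fit actual ≈ a * perceived + b by ordinary least squares.

    Measured (perceived, actual) distance pairs from a calibration session
    yield a linear map that compensates for systematic over- or
    under-estimation of distances near the body.
    """
    n = len(perceived)
    mx = sum(perceived) / n
    my = sum(actual) / n
    sxx = sum((x - mx) ** 2 for x in perceived)
    sxy = sum((x - mx) * (y - my) for x, y in zip(perceived, actual))
    a = sxy / sxx
    b = my - a * mx
    return a, b
```

Applying the fitted (a, b) to a new perceived distance then gives the corrected distance used when placing interaction targets.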