• Title/Abstract/Keyword: object-human interaction

실내디자인에서 Object의 비일상성 연구 - Nigel Coates의 초기 상업공간작품을 중심으로 - (A Study on the Non-everydayness of Interior Object - Focused on Nigel Coates' Early Commercial Interior Design -)

  • 서정연
    • 한국가구학회지 / Vol. 23 No. 2 / pp.185-194 / 2012
  • Contemporary society maintains a mass-production system that keeps up an endless cycle of making and consuming. In this vein, everyday life comes under the control of function and efficiency. In reaction, people develop a desire to escape from this everydayness, that is, a desire for non-everydayness. The British architect Nigel Coates understood the potential of the contemporary metropolis to produce new experiences through its heterogeneity. During the 1980s, the Japanese economic bubble richly nourished this desire for non-everydayness rooted in consumers' tastes. Coates seized on this phenomenon and designed commercial spaces aligned with non-everydayness, presenting a highly eloquent version of the sense of escape. The exquisite quality of non-everydayness can be traced through his design vocabulary of object form and arrangement. In terms of object form, Coates adopted classical Greek statues, that is, the antique, alongside modern gadgets such as airplane wings and seats, and the objects he designed abound in curvilinear contours. As for the objects' arrangement, he introduced repetition and curved compositions that stimulate human interaction with the interior scape.

카메라-레이저스캐너 상호보완 추적기를 이용한 이동 로봇의 사람 추종 (Person-following of a Mobile Robot using a Complementary Tracker with a Camera-laser Scanner)

  • 김형래;최학남;이재홍;이승준;김학일
    • 제어로봇시스템학회논문지 / Vol. 20 No. 1 / pp.78-86 / 2014
  • This paper proposes a method of tracking an object for a person-following mobile robot that combines a monocular camera and a laser scanner, where each sensor compensates for the weaknesses of the other. For human-robot interaction, a mobile robot needs to maintain a distance between itself and a moving person. Maintaining this distance consists of two parts: object tracking and person-following. Object tracking combines particle filtering with online learning of shape features extracted from the image. A monocular camera alone easily fails to track a person because of its narrow field of view and its sensitivity to illumination changes, so it is used together with a laser scanner. After constructing the geometric relation between the differently oriented sensors, the proposed method demonstrates robust tracking and following of a person, with a success rate of 94.7% in indoor environments under varying lighting conditions, even when a moving object passes between the robot and the person.
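
A purely illustrative sketch (not the authors' code) of the complementary idea above: let the laser estimate take over when the camera's appearance model becomes unreliable, then feed the fused position to a simple following controller. All function names, gains, and confidence values below are assumptions, and both positions are assumed to be expressed in a common robot frame after the geometric registration described in the abstract.

```python
# Hypothetical sketch of camera-laser complementary person tracking.
import numpy as np

def fuse_tracks(cam_pos, cam_conf, laser_pos, laser_conf):
    """Confidence-weighted fusion of two 2D position estimates.

    cam_conf drops when illumination changes break the appearance model;
    laser_conf drops when several leg-like clusters are close together.
    """
    cam_pos = np.asarray(cam_pos, dtype=float)
    laser_pos = np.asarray(laser_pos, dtype=float)
    total = cam_conf + laser_conf
    if total < 1e-6:                      # both sensors lost the person
        return None
    return (cam_conf * cam_pos + laser_conf * laser_pos) / total

def follow_command(person_xy, desired_dist=1.0, k_lin=0.8, k_ang=1.5):
    """Turn a fused person position into (linear, angular) velocity commands."""
    dist = np.hypot(person_xy[0], person_xy[1])
    heading = np.arctan2(person_xy[1], person_xy[0])
    return k_lin * (dist - desired_dist), k_ang * heading

# Example: the camera is unreliable (conf 0.2), the laser is confident (conf 0.9).
pos = fuse_tracks(cam_pos=(2.1, 0.3), cam_conf=0.2,
                  laser_pos=(2.0, 0.1), laser_conf=0.9)
print(pos, follow_command(pos))
```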

An Innovative Approach to Track Moving Object based on RFID and Laser Ranging Information

  • Liang, Gaoli;Liu, Ran;Fu, Yulu;Zhang, Hua;Wang, Heng;Rehman, Shafiq ur;Guo, Mingming
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14 No. 1 / pp.131-147 / 2020
  • RFID (Radio Frequency Identification) identifies a specific object by radio signals. Because each tag provides a unique ID, RFID technology effectively resolves the ambiguity and occlusion problems that challenge laser- or camera-based approaches. This paper proposes an approach to tracking a moving object that integrates RFID and laser ranging information in a particle filter. Specifically, the laser scan points are split into clusters that contain potential moving objects, and the radial velocity of each cluster is calculated. This velocity is compared with the radial velocity estimated from the RFID phase difference. To localize the moving object, the K best-matching clusters are selected to update the weights of the particle filter. To further improve positioning accuracy, RFID signal strength information is incorporated into the particle filter using a pre-trained sensor model. The proposed approach is tested on a SCITOS service robot with different types of tags and various human walking speeds. The results show that fusing signal strength and laser ranging information significantly increases positioning accuracy compared with radial-velocity-matching or signal-strength-only approaches. The proposed approach provides a solution for human-machine interaction and object tracking, with potential applications in settings such as supermarkets, libraries, shopping malls, and exhibitions.
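
To make the weight-update step described above concrete, here is a hedged sketch of how radial-velocity matching between laser clusters and the RFID phase estimate, plus a signal-strength likelihood, might enter a particle filter. The cluster segmentation, phase processing, and pre-trained sensor model from the paper are replaced by toy stand-ins, so this illustrates the idea rather than the published method.

```python
# Illustrative particle-filter weight update fusing laser clusters and RFID cues.
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def update_weights(particles, weights, clusters, v_rfid, rssi, rssi_model,
                   k_best=3, sigma_v=0.15, sigma_p=0.3):
    """particles: (N, 2) positions; clusters: list of (xy, radial_velocity)."""
    # Keep the K clusters whose radial velocity best matches the RFID estimate.
    best = sorted(clusters, key=lambda c: abs(c[1] - v_rfid))[:k_best]
    new_w = np.zeros_like(weights)
    for xy, v in best:
        d = np.linalg.norm(particles - np.asarray(xy, dtype=float), axis=1)
        new_w += gaussian(v, v_rfid, sigma_v) * gaussian(d, 0.0, sigma_p)
    new_w *= rssi_model(particles, rssi)      # signal-strength likelihood
    new_w *= weights
    return new_w / (new_w.sum() + 1e-12)

def toy_rssi_model(particles, rssi, reader_xy=(0.0, 0.0)):
    """Stand-in for the pre-trained signal-strength model (log-distance path loss)."""
    dist = np.linalg.norm(particles - np.asarray(reader_xy, dtype=float), axis=1)
    expected = -40.0 - 20.0 * np.log10(dist + 0.1)
    return gaussian(expected, rssi, 6.0)

particles = np.random.uniform(-3.0, 3.0, size=(500, 2))
weights = np.full(500, 1.0 / 500)
clusters = [((1.0, 0.5), 0.42), ((-2.0, 1.0), -0.10), ((0.8, 0.6), 0.38)]
weights = update_weights(particles, weights, clusters, v_rfid=0.40,
                         rssi=-52.0, rssi_model=toy_rssi_model)
print(particles[weights.argmax()])            # most likely tag-bearer position
```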

A Dual Modeling Method for a Real-Time Palpation Simulator

  • Kim, Sang-Youn;Park, Se-Kil;Park, Jin-Ah
    • Journal of Information Processing Systems / Vol. 8 No. 1 / pp.55-66 / 2012
  • This paper presents a dual modeling method that simulates the graphic and haptic behavior of a volumetric deformable object and conveys that behavior to a human operator. Although conventional modeling methods (a mass-spring model and the finite element method) are suitable for real-time computation of an object's deformation, it is not easy to compute the haptic behavior of a volumetric deformable object with them in real time (within 1 ms, i.e., at a 1 kHz update rate) because of the computational burden. Previously, we proposed a fast volume haptic rendering method based on the S-chain model that can compute the deformation of a volumetric non-rigid object and its haptic feedback in real time. When the S-chain model represents the object, the haptic feeling is realistic, whereas the graphical rendering of the deformed shape looks linear. To improve the graphic and haptic behavior at the same time, we propose a dual modeling framework in which a volumetric haptic model and a surface graphical model coexist. To inspect the graphic and haptic behavior of objects represented by the proposed dual model, experiments are conducted with volumetric objects consisting of about 20,000 nodes at a haptic update rate of 1000 Hz and a graphic update rate of 30 Hz. We also conduct human-factor studies to show that the haptic and graphic behavior produced by our model is realistic. The experiments verify that our model provides users with realistic haptic and graphic feedback in real time.
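
The key architectural point of the dual model is that haptics and graphics run at very different rates against two coexisting representations of the same object. The sketch below only illustrates that dual-rate structure under assumptions: the S-chain volumetric model and the surface mesh are replaced with trivial placeholders, and a real system would service the haptic loop in its own high-priority thread.

```python
# Dual-rate update loop: ~1 kHz haptic servicing, ~30 Hz graphic updates.
import time

class VolumetricHapticModel:            # placeholder for the S-chain model
    def force(self, tool_depth_mm):
        return 0.5 * max(tool_depth_mm, 0.0)        # toy linear stiffness [N]

class SurfaceGraphicModel:              # placeholder for the surface mesh
    def deform(self, tool_depth_mm):
        return f"mesh deformed by {tool_depth_mm:.1f} mm"

def run(duration_s=0.1, haptic_hz=1000, graphic_hz=30):
    haptic, graphic = VolumetricHapticModel(), SurfaceGraphicModel()
    dt_h, dt_g = 1.0 / haptic_hz, 1.0 / graphic_hz
    next_h = next_g = start = time.perf_counter()
    h_ticks = g_ticks = 0
    while (now := time.perf_counter()) - start < duration_s:
        if now >= next_h:                # haptic force at the fast rate
            haptic.force(tool_depth_mm=2.0)
            next_h += dt_h
            h_ticks += 1
        if now >= next_g:                # graphic deformation at the slow rate
            graphic.deform(tool_depth_mm=2.0)
            next_g += dt_g
            g_ticks += 1
    print(f"haptic updates: {h_ticks}, graphic updates: {g_ticks}")

run()
```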

인간-컴퓨터 상호작용을 위한 CNN 기반 객체 검출 (CNN-based Object Detection for Human-Computer Interaction)

  • 박명숙;김상훈
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2019년도 추계학술발표대회 / pp.1110-1111 / 2019
  • Vision-based gesture recognition provides natural human-computer interaction in a non-intrusive and inexpensive way. As the use of robots increases, human-robot interaction will become ever more important. Efficient deep learning techniques have recently been studied. This study presents the results of applying a CNN-based object detection technique to the recognition of faces and hand gestures for human-computer interaction.

오정보 효과와 정보의 유형: 한국인과 미국인의 비교 (Misinformation Effect and the type of information: A Comparison of Korean and American Sample)

  • 한유화
    • 한국심리학회지 : 문화 및 사회문제 / Vol. 25 No. 2 / pp.157-177 / 2019
  • In this study, the experimental materials for testing the misinformation effect, revised in Han (2017), were translated, and these materials were used to examine whether the misinformation effect could also be observed among Koreans (Study 1). In addition, the Korean data were combined with the American data used in Han (2017) to compare memory for temporal structure and object information according to whether misinformation was presented, the type of information, and participants' nationality, which was expected to reflect different cognitive styles (Study 2). The results showed that, using the translated materials from Han (2017), the misinformation effect was observed among Koreans in Study 1, and it appeared for both temporal and object information. In Study 2, a comparison of recognition-test accuracy by misinformation presentation, information type, and nationality revealed statistically significant main effects of all three independent variables, a two-way interaction between misinformation presentation and information type, and a three-way interaction among the three variables. In summary, accuracy was higher for information about which no misinformation had been presented and for object information, and accuracy was higher in the American data than in the Korean data. The misinformation effect was larger for object information than for temporal information, but the two-way interaction between misinformation presentation and information type was observed only in the Korean data. The discussion addresses the academic value and limitations of this study.

연속 영상에서 학습 효과를 이용한 제스처 인식 (Gesture Recognition using Training-effect on image sequences)

  • 이현주;이칠우
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2000년도 하계종합학술대회 논문집(4) / pp.222-225 / 2000
  • Humans frequently communicate non-linguistic information through gestures, so efficient and fast gesture recognition algorithms are needed for more natural human-computer interaction. However, recognizing gestures automatically is difficult because the human body is a three-dimensional object with a very complex structure. In this paper, we suggest a method that detects key frames and frame changes and classifies image sequences into gesture groups. Gestures can be classified according to which part of the body is moving. First, we detect frames in which the motion areas change abruptly, save them as key frames, and use them to classify the sequences. We then symbolize each image of a classified sequence using principal component analysis (PCA) and a clustering algorithm, since gestures can be represented with relatively few components. The symbols are used as input to a hidden Markov model (HMM), and the gesture is recognized through probability calculation.
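
For readers unfamiliar with the pipeline sketched in the abstract (PCA projection, symbolization by clustering, HMM scoring), the following minimal example shows the data flow with made-up frames and parameters. It is not the authors' implementation; a real recognizer would train a k-means codebook and one HMM per gesture class, then pick the class with the highest likelihood.

```python
# Toy PCA -> symbolization -> HMM forward-algorithm scoring of a frame sequence.
import numpy as np
rng = np.random.default_rng(0)

# 1) PCA on flattened frames (rows = frames).
frames = rng.random((40, 32 * 32))                 # toy 32x32 image sequence
centered = frames - frames.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coeffs = centered @ vt[:5].T                       # keep 5 principal components

# 2) Symbolize each frame by its nearest cluster centre (a few fixed centres
#    stand in for a properly trained clustering codebook).
centres = coeffs[rng.choice(len(coeffs), 4, replace=False)]
symbols = np.argmin(np.linalg.norm(coeffs[:, None] - centres, axis=2), axis=1)

# 3) Forward algorithm: likelihood of the symbol sequence under one gesture HMM.
def hmm_likelihood(obs, start, trans, emit):
    alpha = start * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return alpha.sum()

n_states, n_symbols = 3, 4
start = np.full(n_states, 1 / n_states)
trans = np.full((n_states, n_states), 1 / n_states)
emit = rng.dirichlet(np.ones(n_symbols), size=n_states)
print("sequence likelihood:", hmm_likelihood(symbols, start, trans, emit))
```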

Visual Tracking Using Improved Multiple Instance Learning with Co-training Framework for Moving Robot

  • Zhou, Zhiyu;Wang, Junjie;Wang, Yaming;Zhu, Zefei;Du, Jiayou;Liu, Xiangqi;Quan, Jiaxin
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12 No. 11 / pp.5496-5521 / 2018
  • Object detection and tracking is a basic capability mobile robots need in order to achieve natural human-robot interaction. In this paper, an object tracking system for a mobile robot is designed and validated using an improved multiple instance learning algorithm. First, the improved multiple instance learning algorithm significantly reduces model drift. Second, to strengthen the classifiers, an active sample selection strategy is proposed that optimizes a bag Fisher information function instead of the bag likelihood function, dynamically choosing the most discriminative samples for classifier training. Furthermore, the co-training criterion is integrated into the algorithm to update the appearance model accurately and avoid error accumulation. Finally, the system is evaluated on challenging sequences and in an indoor laboratory environment, and the experimental results demonstrate that the proposed methods track a moving object stably and robustly.
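
The bag Fisher information criterion is specific to the paper, but a common reading of Fisher-information-based active selection for a logistic-type appearance model is to prefer candidate samples with the largest p(1-p), i.e., those the current classifier is least certain about. The sketch below illustrates only that generic idea, with made-up features and weights, and should not be taken as the paper's exact criterion.

```python
# Generic Fisher-information-style active sample selection for a logistic model.
import numpy as np
rng = np.random.default_rng(1)

def classifier_prob(features, w):
    """Toy appearance classifier: logistic score on feature vectors."""
    return 1.0 / (1.0 + np.exp(-features @ w))

def select_informative(samples, w, k=10):
    """Pick the k candidate patches with the largest p(1-p), i.e., the highest
    per-sample Fisher information contribution for a logistic model."""
    p = classifier_prob(samples, w)
    info = p * (1.0 - p)
    return samples[np.argsort(info)[::-1][:k]]

candidates = rng.normal(size=(200, 16))   # 200 candidate patches, 16-D features
w = rng.normal(size=16)
training_batch = select_informative(candidates, w, k=10)
print(training_batch.shape)               # (10, 16) patches used to update the model
```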

다중 감각 피드백을 통한 원격 가상객체 조작 시 무게 정보 전달 (Virtual Object Weight Information with Multi-modal Sensory Feedback during Remote Manipulation)

  • 박창현;박재영
    • 인터넷정보학회논문지 / Vol. 25 No. 1 / pp.9-15 / 2024
  • As virtual reality technology becomes widespread, demand is growing for natural and efficient interaction between users and virtual environments. Mid-air manipulation, one solution to this demand, allows users to manipulate virtual objects in three-dimensional space without touching them. This paper focuses on manipulating a remote virtual object while representing the object visually and providing tactile information about its weight. We developed two types of wearable interfaces that can deliver tactile or vibrotactile feedback about the weight of a virtual object to the user's fingertips. A perception experiment was conducted to evaluate how the weight of the remote object is perceived during virtual object manipulation. The results indicate that delivering tactile information has a significant effect on weight perception during remote virtual object manipulation.

A Distributed Real-time 3D Pose Estimation Framework based on Asynchronous Multiviews

  • Hwang, Taemin;Kim, Jieun;Kim, Minjoon
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 17 No. 2 / pp.559-575 / 2023
  • 3D human pose estimation is widely applied in various fields, including action recognition, sports analysis, and human-computer interaction, and it has progressed significantly with the introduction of convolutional neural networks (CNNs). Recently, several studies have proposed multiview approaches to avoid the occlusions that affect single-view approaches. However, as the number of cameras increases, a CNN-based 3D pose estimation system may run short of computational resources. In addition, when a single host system uses multiple cameras, the data transfer speed becomes inadequate owing to bandwidth limitations. To address this problem, we propose a distributed real-time 3D pose estimation framework based on asynchronous multiple cameras. The proposed framework comprises a central server and multiple edge devices. Each edge device estimates a 2D human pose from its own view and sends it to the central server. The central server then synchronizes the received 2D poses based on their timestamps and reconstructs a 3D human pose using geometric triangulation. We demonstrate that the proposed framework increases the percentage of detected joints and successfully estimates 3D human poses in real time.
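
As a rough illustration of the server side described above, the sketch below pairs 2D-pose packets from two asynchronous cameras by nearest timestamp and triangulates each joint with a standard two-view DLT. The projection matrices, packet format, and skew threshold are all assumptions; the actual framework handles many cameras and full joint sets.

```python
# Timestamp-based synchronization of asynchronous 2D poses + DLT triangulation.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one joint seen at pixel x1 in view 1
    and x2 in view 2; P1, P2 are 3x4 projection matrices."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

def sync_by_timestamp(stream_a, stream_b, max_skew=0.02):
    """Pair each 2D-pose packet from camera A with the closest-in-time packet
    from camera B, dropping pairs whose timestamps differ by more than max_skew."""
    pairs = []
    for t_a, pose_a in stream_a:
        t_b, pose_b = min(stream_b, key=lambda p: abs(p[0] - t_a))
        if abs(t_b - t_a) <= max_skew:
            pairs.append((pose_a, pose_b))
    return pairs

# Toy example: two arbitrary projection matrices, one joint per packet.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
stream_a = [(0.000, np.array([0.10, 0.20])), (0.033, np.array([0.11, 0.21]))]
stream_b = [(0.002, np.array([0.05, 0.20])), (0.035, np.array([0.06, 0.21]))]
for pose_a, pose_b in sync_by_timestamp(stream_a, stream_b):
    print(triangulate(P1, P2, pose_a, pose_b))
```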