• Title/Summary/Keyword: human-machine interaction


The Social Implication of New Media Art in Forming a Community (공동체 형성에 있어서 뉴미디어아트의 사회적 역할에 대한 고찰)

  • Kim, Hee-Young
    • The Journal of Art Theory & Practice / no.14 / pp.87-124 / 2012
  • This paper focuses on the social implication of new media art, which has evolved with the advance of technology. To understand the notion of human-computer interactivity in media art, it examines the meaning of the "cybernetics" theory that Norbert Wiener developed just after WWII, placing "control and communication" at the center of his theory of messages. It goes on to investigate the application of cybernetics to art since the 1960s, to which Roy Ascott made a significant contribution by developing telematic art utilizing telecommunication networks. This paper underlines the significance of the relationship between human and machine, art and technology, in transforming the work of art into a site of communication and experience. The interactivity in new media art transforms the viewer into a user of the work, who is now free to decide how to act upon it. The artist is no longer a godlike figure who determines the meaning of the work, but becomes another user of his or her own work, with which to interact. This paper contends that the interaction between human and machine, art and technology, can lead to various forms of interaction between humans, thereby restoring a sense of community while liberating humans from conventional limitations on their creativity. It considers the development of new media art to be more than the mere invention of new aesthetic styles employing advanced technology. Rather, new media art marks a critical shift that subverts the modernist autonomy advocating medium specificity. New media art envisions a new art that embraces impurity, allowing the coexistence of autonomy and heteronomy and embracing a technological other, thereby expanding human relations. By enabling the birth of the user in experiencing the work, interactive new media art produces an open arena in which the user can create the work while communicating with the work and with other users.
The user now has the freedom to visit the work, to take a journey on his or her own, and to decide what to choose and what to do with the work. This paper contends that there is a significant parallel between new media artists' interest in creating new experiences of art and Jacques Rancière's concept of the aesthetic regime of art. In his argument for eliminating hierarchy in art and for embracing impurity, Rancière provides a vision for an art that is related to life and ultimately reshapes life. Rancière's critique of both formalist modernism and Jean-François Lyotard's postmodern view underlines the social implication of new media art practices, which seek to form "the common of a community."


Research on Service Extension of Restaurant Serving Robot - Taking Haidilao Hot Pot Intelligent Restaurant in Beijing as an Example (레스토랑 서빙 로봇의 서비스 확장에 관한 연구 - 중국 베이징 하이디라오 스마트 레스토랑을 사례로 연구)

  • Zhao, Yuqi;Pan, Young-Hwan
    • Journal of the Korea Convergence Society / v.11 no.4 / pp.17-25 / 2020
  • This study focuses on the analysis of the service process and interaction mode of the serving robot used in the restaurant. Through user research, shadowing, and in-depth interviews with customers and catering service personnel, this paper analyzes the contact points between catering service machines, people, and users, and constructs a user journey map to understand users' expectations. In addition to the delivery service that can be allocated to machines and people, the service blueprint for ordering, reception, and table-cleaning services can also be included in the service process. The final proposal is to improve the existing human-machine interface and to design a new service scheme.

Vision Based Sensor Fusion System of Biped Walking Robot for Environment Recognition (영상 기반 센서 융합을 이용한 이족로봇에서의 환경 인식 시스템의 개발)

  • Song, Hee-Jun;Lee, Seon-Gu;Kang, Tae-Gu;Kim, Dong-Won;Seo, Sam-Jun;Park, Gwi-Tae
    • Proceedings of the KIEE Conference / 2006.04a / pp.123-125 / 2006
  • This paper discusses a vision-based sensor fusion system for biped robot walking. Most research on biped walking robots has focused on the walking algorithm itself. However, developing vision systems for biped walking robots is an important and urgent issue, since biped walking robots are ultimately developed not only for research but to be utilized in real life. In this research, systems for environment recognition and tele-operation have been developed for task assignment and execution by a biped robot, as well as a human-robot interaction (HRI) system. For carrying out certain tasks, an object tracking system using a modified optical flow algorithm and an obstacle recognition system using enhanced template matching and a hierarchical support vector machine algorithm, fed by a wireless vision camera, are implemented together with a sensor fusion system using the other sensors installed in the biped walking robot. Systems for robot manipulation and communication with the user have also been developed.
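The abstract above names template matching as the core of the obstacle recognition step. As a rough illustration of the idea (plain normalized cross-correlation, not the paper's "enhanced" variant, and with invented toy image data), a matcher might look like:

```python
# Minimal template-matching sketch: slide a template over an image and
# score each position with normalized cross-correlation (NCC).
# Plain NCC only -- not the paper's enhanced variant; the tiny
# grayscale arrays below are invented for illustration.
from math import sqrt

def ncc(patch, template):
    """Normalized cross-correlation between two equal-size flat patches."""
    n = len(patch)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    den = sqrt(sum((p - mp) ** 2 for p in patch) *
               sum((t - mt) ** 2 for t in template))
    return num / den if den else 0.0

def match(image, template):
    """Return (row, col) of the best-scoring template position."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    flat_t = [v for row in template for v in row]
    best, best_pos = -2.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = [image[r + dr][c + dc]
                     for dr in range(th) for dc in range(tw)]
            s = ncc(patch, flat_t)
            if s > best:
                best, best_pos = s, (r, c)
    return best_pos

# Toy 5x5 "image" with a bright 2x2 pattern placed at (2, 3):
image = [[0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0],
         [0, 0, 0, 9, 8],
         [0, 0, 0, 7, 9],
         [0, 0, 0, 0, 0]]
template = [[9, 8],
            [7, 9]]
print(match(image, template))  # → (2, 3)
```

A real system would of course run this over camera frames and hand the matched regions to the hierarchical SVM for classification.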


Motion Pattern Detection for Dynamic Facial Expression Understanding

  • Mizoguchi, Hiroshi;Hiramatsu, Seiyo;Hiraoka, Kazuyuki;Tanaka, Masaru;Shigehara, Takaomi;Mishima, Taketoshi
    • Proceedings of the IEEK Conference / 2002.07c / pp.1760-1763 / 2002
  • In this paper the authors present their attempt to realize a motion pattern detector that finds a specified image sequence in an input motion image. The detector is intended to be used for understanding time-varying facial expressions. Needless to say, facial expression understanding by machine is crucial and enriches the quality of human-machine interaction. Among various facial expressions there are some, like blinking, that cannot be recognized if the input expression image is static: a still image of blinking cannot be distinguished from sleeping. In this paper, the authors discuss the implementation of their motion pattern detector and describe experiments using it. Experimental results confirm the feasibility of the idea behind the detector.
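The blink-versus-sleep example hinges on temporal order: only a sequence of frames, not any single frame, carries the pattern. A minimal sketch of that idea, under the assumption that each frame has already been reduced to a symbol ('O' eyes open, 'C' eyes closed — a hypothetical encoding, not the paper's features), could be:

```python
# Sketch of a motion-pattern detector over a per-frame symbol stream.
# 'O' = eyes open, 'C' = eyes closed (invented encoding).  A blink is
# the temporal pattern O->C->O; a static run of 'C' (sleeping) never
# matches, which is exactly why a still image is not enough.
def find_pattern(frames, pattern):
    """Return start indices where `pattern` occurs as a contiguous
    sequence of runs in `frames`."""
    # Run-length compress the stream, remembering each run's start index.
    runs = []
    for i, f in enumerate(frames):
        if runs and runs[-1][0] == f:
            continue
        runs.append((f, i))
    hits = []
    for j in range(len(runs) - len(pattern) + 1):
        if all(runs[j + k][0] == pattern[k] for k in range(len(pattern))):
            hits.append(runs[j][1])
    return hits

blink_stream = list("OOOCCOOOO")   # eyes close briefly around frame 3
sleep_stream = list("CCCCCCCCC")   # eyes closed throughout

print(find_pattern(blink_stream, "OCO"))  # → [0]: a blink is present
print(find_pattern(sleep_stream, "OCO"))  # → []: no blink in a static run
```

The detector reports where the blink pattern begins, while the static "sleeping" stream produces no match at all.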


The Influence of Educational Robot Experience on Children's Robot Image and Relationship Recognition (교육용 로봇 활용 경험이 유아의 로봇 이미지 및 관계 인식에 미치는 영향 연구)

  • Lee, KyungOk;Lee, Byungho
    • The Journal of Korea Robotics Society / v.10 no.2 / pp.70-78 / 2015
  • The purpose of this study was to investigate how young children recognize the image of robots, and how they understand the relationship between themselves and robots based on school experience. Twenty children from kindergarten A had no direct experience with educational robots, whereas twenty children from kindergarten B had experience using educational robots in their classroom. In total, 40 children from age-five classes participated in this study. We collected data using interviews and a drawing test. The findings of the study are as follows. First, participating children recognized robots as having the character of both a machine and a human, but children with previous robot experience described robots as machine-tools. Neither group was able to explain the structure of robots in detail. Second, participating children understood that they can develop a range of social relationships with robots, from simple help to family replacement. There were mixed views on robots among the children with previous experience, whereas children with no experience described robots as taking the role of peers or family members. These findings could contribute to the development of robots and related programs in the field of early childhood education.

A Study on Integration Technology for Immersive Human Interaction (몰입형 가시화를 위한 사용자 인터페이스 연동기술 연구)

  • Park, Chan-Seok;Cha, Moo-Hyun;Mun, Du-Hwan;Gu, Gibeom
    • Proceedings of the Korea Information Processing Society Conference / 2018.05a / pp.541-542 / 2018
  • For precise spatio-temporal analysis and verification of large-scale, high-fidelity engineering data such as CAE results, human interface technology that can control the visualization intuitively and efficiently is as important as high-resolution immersive visualization itself. Applied research using recently popularized HMD devices and controllers has been reported, and through it, immersive visualization environments and human interface technologies are being adopted that move beyond static, engineer-centered analysis toward dynamic collaborative analysis among design, simulation, and operation experts. However, research on intuitive interface technology usable in large-screen immersive visualization environments supporting CAE analysis remains insufficient. This study introduces ongoing research on a human interface and its integration technology dedicated to immersive visualization, which allows users to navigate virtual reality and manipulate data through natural body movements.

Development of Adaptive Ground Control System for Multi-UAV Operation and Operator Overload Analysis (복수 무인기 운용을 위한 적응형 지상체 개발 및 운용자 과부하 분석)

  • Oh, Jangjin;Choi, Seong-Hwan;Lim, Hyung-Jin;Kim, Seungkeun;Yang, Ji Hyun;Kim, Byoung Soo
    • Journal of Advanced Navigation Technology / v.21 no.6 / pp.529-536 / 2017
  • A typical ground control system provides control and information display functions for the operation of a single unmanned aerial vehicle. Recently, the functions of the single ground control system have extended to the operation of multiple UAVs. As a result, operators are exposed to more diverse tasks and are subject to task overload caused by various factors during a mission. This study proposes an adaptive ground control system that reflects the operator's condition through task overload measurement of multiple-UAV operators. For this, ground control software was developed to control multiple UAVs at the same time, and a simulator with six-degree-of-freedom aircraft dynamics was constructed for realistic human-machine interface experiments by the operators.

Primitive Body Model Encoding and Selective / Asynchronous Input-Parallel State Machine for Body Gesture Recognition (바디 제스처 인식을 위한 기초적 신체 모델 인코딩과 선택적 / 비동시적 입력을 갖는 병렬 상태 기계)

  • Kim, Juchang;Park, Jeong-Woo;Kim, Woo-Hyun;Lee, Won-Hyong;Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.8 no.1 / pp.1-7 / 2013
  • Body gesture recognition has been one of the research fields of interest for human-robot interaction (HRI). Most conventional body gesture recognition algorithms use Hidden Markov Models (HMMs) to model gestures, which have spatio-temporal variability. However, HMM-based algorithms have difficulty excluding meaningless gestures. Besides, conventional algorithms must perform gesture segmentation first and then send the extracted gesture to the HMM for recognition. This separated pipeline causes a time delay between two consecutive gestures to be recognized, making the system inappropriate for continuous gesture recognition. To overcome these two limitations, this paper suggests primitive body model encoding, which performs spatio-temporal quantization of motions from a human body model and encodes them into predefined primitive codes for each link of the body model, and a Selective/Asynchronous Input-Parallel State Machine (SAI-PSM) for multiple simultaneous gesture recognition. The experimental results showed that the proposed gesture recognition system using primitive body model encoding and SAI-PSM can exclude meaningless gestures well from continuous body model data, while performing multiple simultaneous gesture recognition without losing recognition rates compared to previous HMM-based work.
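The selective/asynchronous input-parallel idea can be sketched as a bank of small state machines, one per gesture, all fed the same stream of primitive codes: each machine reacts only to codes in its own sequence (selective) and advances independently of the others (parallel), so meaningless codes are simply ignored. The primitive codes and gesture definitions below are invented for illustration, not the paper's actual encoding:

```python
# Sketch of an input-parallel state machine bank for gesture spotting.
# Each machine consumes the shared stream of primitive codes; codes
# outside a machine's own alphabet are meaningless to it and ignored.
class GestureMachine:
    def __init__(self, name, sequence):
        self.name = name
        self.sequence = sequence
        self.state = 0  # index of the next expected code

    def feed(self, code):
        """Advance on the expected code; ignore irrelevant codes;
        reset on an out-of-order relevant code.  Returns True when
        the full sequence has been observed."""
        if code == self.sequence[self.state]:
            self.state += 1
        elif code in self.sequence:
            # out-of-order: restart (possibly with this code as step 1)
            self.state = 1 if code == self.sequence[0] else 0
        if self.state == len(self.sequence):
            self.state = 0
            return True
        return False

machines = [
    GestureMachine("wave", ["RAISE_R", "LEFT_R", "RIGHT_R"]),
    GestureMachine("bow",  ["BEND_FWD", "HOLD", "STRAIGHTEN"]),
]

# Two gestures interleaved in one code stream:
stream = ["RAISE_R", "BEND_FWD", "LEFT_R", "HOLD", "RIGHT_R", "STRAIGHTEN"]
recognized = [m.name for code in stream for m in machines if m.feed(code)]
print(recognized)  # → ['wave', 'bow']
```

Because each machine filters its own input, both interleaved gestures are spotted from a single continuous stream without any prior segmentation step, which is the property the abstract contrasts with the segment-then-classify HMM pipeline.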

Nonlinear Feature Extraction using Class-augmented Kernel PCA (클래스가 부가된 커널 주성분분석을 이용한 비선형 특징추출)

  • Park, Myoung-Soo;Oh, Sang-Rok
    • Journal of the Institute of Electronics Engineers of Korea SC / v.48 no.5 / pp.7-12 / 2011
  • In this paper, we propose a new feature extraction method, named Class-augmented Kernel Principal Component Analysis (CA-KPCA), which can extract nonlinear features for classification. Among the subspace methods widely used for feature extraction, Class-augmented Principal Component Analysis (CA-PCA) is a recent one that can extract features for accurate classification without the computational difficulties of other methods such as Linear Discriminant Analysis (LDA). However, the features extracted by CA-PCA are still restricted to a linear subspace of the original data space, which limits the use of this method for problems requiring nonlinear features. To resolve this limitation, we apply the kernel trick to develop a new version of CA-PCA that extracts nonlinear features, and evaluate its performance by experiments using data sets from the UCI Machine Learning Repository.
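The two ingredients the abstract combines — class augmentation and the kernel trick — can be sketched as follows: append an encoded class label to each sample, then build a centered kernel matrix over the augmented vectors; the eigendecomposition of that matrix (omitted here, as in standard kernel PCA) would yield the nonlinear features. The toy data, one-hot label encoding, and gamma value are assumptions for illustration, not the paper's settings:

```python
# Sketch of the class-augmentation + kernel step of CA-KPCA.
# Projecting onto the top eigenvectors of the centered kernel matrix
# (not shown) gives the nonlinear features, exactly as in kernel PCA.
from math import exp

def augment(x, label, n_classes, weight=1.0):
    """Append a weighted one-hot class code to sample x."""
    code = [weight if c == label else 0.0 for c in range(n_classes)]
    return x + code

def rbf(a, b, gamma=0.5):
    return exp(-gamma * sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def centered_kernel(samples, gamma=0.5):
    """Double centering in feature space: Kc = K - 1K - K1 + 1K1."""
    n = len(samples)
    K = [[rbf(a, b, gamma) for b in samples] for a in samples]
    row = [sum(K[i]) / n for i in range(n)]
    tot = sum(row) / n
    return [[K[i][j] - row[i] - row[j] + tot for j in range(n)]
            for i in range(n)]

# Toy two-class data set (invented):
data = [([0.0, 0.1], 0), ([0.2, 0.0], 0), ([1.0, 0.9], 1), ([0.9, 1.1], 1)]
aug = [augment(x, y, n_classes=2) for x, y in data]
Kc = centered_kernel(aug)

# Double centering makes every row (and column) of Kc sum to ~0,
# i.e. the implicit feature vectors are mean-centered.
print(all(abs(sum(row)) < 1e-9 for row in Kc))
```

The augmentation weight controls how strongly class membership shapes the kernel geometry, which is the lever that distinguishes CA-KPCA from plain kernel PCA.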

Facial Point Classifier using Convolution Neural Network and Cascade Facial Point Detector (컨볼루셔널 신경망과 케스케이드 안면 특징점 검출기를 이용한 얼굴의 특징점 분류)

  • Yu, Je-Hun;Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.22 no.3 / pp.241-246 / 2016
  • Nowadays many people are interested in facial expressions and human behavior, and human-robot interaction (HRI) researchers utilize digital image processing, pattern recognition, and machine learning in studying them. Facial feature point detection algorithms are very important for face recognition, gaze tracking, and expression and emotion recognition. In this paper, a cascade facial feature point detector is used for finding facial feature points such as the eyes, nose, and mouth. However, the detector has difficulty extracting the feature points from some images, because images differ in conditions such as size, color, and brightness. Therefore, in this paper, we propose an algorithm that refines the cascade facial feature point detector with a convolutional neural network. The structure of the convolutional neural network is based on Yann LeCun's LeNet-5. As input data for the convolutional neural network, the color and gray-image outputs from the cascade facial feature point detector were used. The images were resized to 32×32, and the gray images were converted into the YUV format. The gray and color images form the basis of the convolutional neural network's input. We then classified about 1,200 test images of subjects. This research found that the proposed method is more accurate than the cascade facial feature point detector alone, because the algorithm refines the results from the cascade facial feature point detector.
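The preprocessing step the abstract describes (32×32 inputs, gray images converted to YUV) rests on a standard per-pixel RGB-to-YUV conversion. The BT.601 coefficients below are the common analog-YUV definition; the abstract does not say which variant the authors used, so treat this as an assumption:

```python
# Sketch of the per-pixel RGB -> YUV conversion used when preparing
# network inputs.  BT.601-style coefficients (an assumption; the
# paper's exact variant is not stated in the abstract).
def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luma
    u = 0.492 * (b - y)                    # blue-difference chroma
    v = 0.877 * (r - y)                    # red-difference chroma
    return y, u, v

# A pure gray pixel has zero chroma: U == V == 0 and Y equals the
# gray level, so a gray image maps cleanly into the Y channel.
y, u, v = rgb_to_yuv(128, 128, 128)
print(y, u, v)
```

This is why converting the detector's gray images to YUV is cheap: their information lands entirely in the Y plane, keeping the input format uniform with the color images.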