• Title/Summary/Keyword: Kinect Interaction


Human Gender and Motion Analysis with Ellipsoid and Logistic Regression Method

  • Ansari, Md Israfil;Shim, Jaechang
    • Journal of Multimedia Information System
    • /
    • v.3 no.2
    • /
    • pp.9-12
    • /
    • 2016
  • This paper is concerned with the effective and efficient identification of human gender and motion. Tracking this nonverbal behavior is useful for providing clues about how different types of people interact and about their exact motion. The system can also be useful for security in various settings, for monitoring hospital patients, and for many other applications. Here we describe a novel method of determining identity using machine learning with Microsoft Kinect; the method minimizes the fitting (overlapping) error between an ellipsoid model and the tracked skeleton.
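The fitting step the abstract mentions, minimizing the error between an ellipsoid model and the skeleton, can be sketched as follows. This is a minimal illustration under assumed choices, not the authors' method: it fits an axis-aligned ellipsoid to 3D joint points and measures how far the points deviate from its surface.

```python
import numpy as np

def fit_ellipsoid(points):
    """Axis-aligned ellipsoid fit: center at the centroid, semi-axes from
    the per-axis extents of the joint cloud (an illustrative choice)."""
    center = points.mean(axis=0)
    semi_axes = np.abs(points - center).max(axis=0)
    return center, semi_axes

def fitting_error(points, center, semi_axes):
    """Mean deviation of (x/a)^2 + (y/b)^2 + (z/c)^2 from 1; zero when
    every point lies exactly on the ellipsoid surface."""
    q = (((points - center) / semi_axes) ** 2).sum(axis=1)
    return float(np.abs(q - 1.0).mean())

# Toy joint points placed at the axis endpoints of an ellipsoid
pts = np.array([[2, 0, 0], [-2, 0, 0], [0, 1, 0],
                [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
center, semi_axes = fit_ellipsoid(pts)
err = fitting_error(pts, center, semi_axes)
```

For these synthetic points the fitted semi-axes are (2, 1, 1) and the error is zero; real Kinect joints would fit per body segment.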

The Individual Discrimination Location Tracking Technology for Multimodal Interaction at the Exhibition (전시 공간에서 다중 인터랙션을 위한 개인식별 위치 측위 기술 연구)

  • Jung, Hyun-Chul;Kim, Nam-Jin;Choi, Lee-Kwon
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.19-28
    • /
    • 2012
  • After the internet era, we are moving toward a ubiquitous society. People are now interested in multimodal interaction technology, which enables an audience to interact naturally with the computing environment at exhibitions such as galleries, museums, and parks. There are also attempts to provide additional services based on the location of the audience, or to improve interaction between exhibits and audience by analyzing people's usage patterns. In order to provide multimodal interaction services to an exhibition audience, it is important to distinguish individuals and trace their locations and routes. For outdoor location tracking, GPS is widely used: it provides the real-time location of fast-moving subjects, making it one of the key technologies for services requiring location tracking. However, because GPS locates users via satellites, it cannot be used indoors, where the satellite signal cannot be received. For this reason, indoor location tracking has been studied using very-short-range communication technologies such as ZigBee, UWB, and RFID, as well as mobile communication networks and wireless LAN. These technologies have shortcomings, however: the audience must carry an additional sensor device, and the system becomes difficult and expensive to deploy as the density of the target area increases. In addition, the typical exhibition environment contains many obstacles to the network, which degrades system performance. Above all, the biggest problem is that interaction methods based on these older device-centric technologies cannot provide a natural service to users. Moreover, because the system relies on sensor recognition, every user must carry a device, which limits the number of users that can use the system simultaneously.
To make up for these shortcomings, in this study we propose a technology that obtains the exact location of users through location mapping using Wi-Fi and 3D cameras together with the users' smartphones. We applied the signal amplitude of wireless LAN access points to develop a low-cost indoor location tracking system. An AP is cheaper than the devices used in other tracking techniques, and by installing the software on the user's mobile device, the device itself can serve as the tracking client. For the 3D camera we used the Microsoft Kinect sensor. Kinect is equipped with functions that discriminate depth and human information within the shooting area, so it is well suited to extracting a user's body, vector, and acceleration information at low cost. We confirm the location of the audience using the cell ID obtained from the Wi-Fi signal. By using smartphones as the basic device for the location service, we remove the need for an additional tagging device and provide an environment in which multiple users can receive the interaction service simultaneously. The 3D cameras located in each cell area obtain the exact location and status information of the users. They are connected to a Camera Client, which calculates the mapping information aligned to each cell and obtains the users' exact positions along with the status and pattern information of the audience. The location-mapping technique of the Camera Client decreases the error rate of the indoor location service, increases the accuracy of individual discrimination in the area through discrimination based on body information, and establishes a foundation for multimodal interaction technology at exhibitions. The calculated data and information enable users to receive the appropriate interaction service through the main server.
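The cell-ID step described above (confirming a visitor's cell from the Wi-Fi signal) can be sketched roughly as follows. The AP names and cell layout are hypothetical; a real system would also smooth the RSSI readings over time before deciding.

```python
def locate_cell(rssi_by_ap, ap_to_cell):
    """Assign the visitor to the cell of the access point with the
    strongest RSSI (dBm values: closer to zero is stronger)."""
    best_ap = max(rssi_by_ap, key=rssi_by_ap.get)
    return ap_to_cell[best_ap]

# Hypothetical AP-to-cell layout and a single smartphone scan
ap_to_cell = {"ap-entrance": "cell-A", "ap-gallery": "cell-B", "ap-exit": "cell-C"}
scan = {"ap-entrance": -71, "ap-gallery": -48, "ap-exit": -83}
cell = locate_cell(scan, ap_to_cell)
```

Once the coarse cell is known, the 3D camera assigned to that cell refines the position and extracts body information.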

Presentation Training System based on 3D Virtual Reality (3D 가상현실기반의 발표훈련시스템)

  • Jung, Young-Kee
    • The Journal of the Convergence on Culture Technology
    • /
    • v.4 no.4
    • /
    • pp.309-316
    • /
    • 2018
  • In this study, we propose a 3D virtual-reality-based presentation training system that implements a virtual presentation environment resembling the real one, helping users present confidently in the real world. The proposed system provides a realistic and highly engaging presentation and interview environment by analyzing the speaker's voice and behavior in real time and reflecting them in the audience of the virtual space. Using an HMD and VR controllers with 6-DOF tracking, together with Kinect, the presenter can change the timing and interaction of the virtual space, and the virtual space can be switched among various settings chosen by the user. The presenter views the presentation files and scripts displayed in separate views within the virtual space to understand the content and master the presentation.

Interactive Typography System using Combined Corner and Contour Detection

  • Lim, Sooyeon;Kim, Sangwook
    • International Journal of Contents
    • /
    • v.13 no.1
    • /
    • pp.68-75
    • /
    • 2017
  • Interactive typography is a process in which a user communicates by interacting with text and a moving factor. This research covers interactive typography that responds in real time to a user's gestures. To make the system language-independent, entered text data is preprocessed into image data. This preprocessing is followed by recognizing the image data and setting the interaction points, using computer vision techniques such as the Harris corner detector and contour detection. User interaction is achieved using skeleton information tracked by a depth camera. By synchronizing the user's skeleton information acquired by Kinect (a depth camera) with the typography components (interaction points), all user gestures are linked with the typography in real time. An experiment was conducted, in both English and Korean, in which users reported an 81% satisfaction level with an interactive typography system whose text components moved discretely in accordance with the users' gestures. Through this experiment, it was possible to ascertain that sensibility varied depending on the size and speed of the text and on the interactive alteration. The results show that interactive typography can potentially be an accurate communication tool, and not merely a uniform text-transmission system.
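The corner-detection step used to place interaction points can be illustrated with a pure-NumPy Harris response. This is a sketch of the standard Harris formula, not the paper's implementation; in practice an optimized library routine would run on the rendered text image.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k*trace(M)^2, where M is the
    structure tensor summed over a 3x3 window (pure-NumPy sketch)."""
    Iy, Ix = np.gradient(img.astype(float))   # gradients along rows, cols
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box3(a):
        # 3x3 box filter via shifted sums over an edge-padded copy
        p = np.pad(a, 1, mode="edge")
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box3(Ixx), box3(Iyy), box3(Ixy)
    return Sxx * Syy - Sxy * Sxy - k * (Sxx + Syy) ** 2

# Toy glyph: a bright square; corners score positive, edges negative
img = np.zeros((12, 12))
img[3:9, 3:9] = 1.0
R = harris_response(img)
```

Thresholding the positive peaks of R yields candidate interaction points on the glyph corners.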

Realistic Visual Simulation of Water Effects in Response to Human Motion using a Depth Camera

  • Kim, Jong-Hyun;Lee, Jung;Kim, Chang-Hun;Kim, Sun-Jeong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.2
    • /
    • pp.1019-1031
    • /
    • 2017
  • In this study, we propose a new method for simulating water responding to human motion. Motion data obtained from motion-capture devices are represented as a jointed skeleton, which interacts with the velocity field in the water simulation. To integrate the motion data into the water simulation space, it is necessary to establish a mapping relationship between two fields with different properties. However, there can be severe numerical instability if the mapping breaks down, with the realism of the human-water interaction being adversely affected. To address this problem, our method extends the joint velocity mapped to each grid point to neighboring nodes. We refine these extended velocities to enable increased robustness in the water solver. Our experimental results demonstrate that water animation can be made to respond to human motions such as walking and jumping.
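The velocity-mapping step described above, depositing each joint's velocity onto the nearest grid node and its neighbors, might look roughly like this simplified 2D sketch. The inverse-distance weighting is an illustrative assumption, not the paper's scheme.

```python
import numpy as np

def splat_joint_velocity(grid_u, pos, vel, radius=1):
    """Deposit one joint's velocity onto the grid node nearest `pos` and
    its neighbors within `radius`, with normalized inverse-distance
    weights so the total deposited momentum equals `vel`."""
    H, W, _ = grid_u.shape
    i0, j0 = int(round(pos[0])), int(round(pos[1]))
    weights = {}
    for di in range(-radius, radius + 1):
        for dj in range(-radius, radius + 1):
            i, j = i0 + di, j0 + dj
            if 0 <= i < H and 0 <= j < W:
                weights[(i, j)] = 1.0 / (1.0 + abs(di) + abs(dj))
    total = sum(weights.values())
    for (i, j), w in weights.items():
        grid_u[i, j] += (w / total) * np.asarray(vel, dtype=float)
    return grid_u

grid = np.zeros((8, 8, 2))                      # 2D velocity field (u, v)
splat_joint_velocity(grid, pos=(4.2, 3.7), vel=(1.0, 0.0))
```

Spreading the contribution over neighboring nodes, rather than a single cell, is what keeps the solver robust when the joint-to-grid mapping is marginal.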

Heterogeneous Sensor Coordinate System Calibration Technique for AR Whole Body Interaction (AR 전신 상호작용을 위한 이종 센서 간 좌표계 보정 기법)

  • Hangkee Kim;Daehwan Kim;Dongchun Lee;Kisuk Lee;Nakhoon Baek
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.7
    • /
    • pp.315-324
    • /
    • 2023
  • A simple and accurate whole-body rehabilitation interaction technology using immersive digital content is needed for elderly patients, whose age-related diseases are steadily increasing. In this study, we introduce whole-body interaction technology using HoloLens and Kinect for this purpose. To achieve this, we propose three coordinate transformation methods: mesh feature-point-based transformation, AR marker-based transformation, and body recognition-based transformation. The mesh feature-point-based transformation aligns the coordinate systems by designating three feature points on the spatial mesh and using a transform matrix. This method requires manual work and has lower usability, but relatively high accuracy of 8.5 mm. The AR marker-based method uses AR and QR markers recognized by HoloLens and Kinect simultaneously, achieving an acceptable accuracy of 11.2 mm. The body recognition-based transformation aligns the coordinate systems using the position of the head or HMD recognized by both devices, together with the positions of both hands or the controllers. This method has lower accuracy, but requires no additional tools or manual work, making it more user-friendly. Additionally, we reduced the error by more than 10% using RANSAC as a post-processing step. The three methods can be applied selectively depending on the usability and accuracy required by the content. We validated the technology by applying it to "Thunder Punch" and rehabilitation therapy content.
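The transform-matrix step shared by the three methods, recovering a rotation and translation from corresponding points in the two devices' frames, is commonly computed with the Kabsch algorithm. The abstract does not specify the solver, so the sketch below is an assumption; the RANSAC refinement it mentions is omitted here.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t with dst ~= R @ src + t (Kabsch algorithm via
    SVD), e.g. mapping Kinect-frame points into the HoloLens frame."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Synthetic correspondences: rotate 90 degrees about z, then translate
rng = np.random.default_rng(0)
src = rng.random((4, 3))
true_R = np.array([[0.0, -1.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0]])
true_t = np.array([0.5, -0.2, 1.0])
dst = src @ true_R.T + true_t
R, t = rigid_transform(src, dst)
```

Three non-collinear correspondences suffice in principle; more points (or RANSAC over noisy ones) improve robustness.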

NUI/NUX of the Virtual Monitor Concept using the Concentration Indicator and the User's Physical Features (사용자의 신체적 특징과 뇌파 집중 지수를 이용한 가상 모니터 개념의 NUI/NUX)

  • Jeon, Chang-hyun;Ahn, So-young;Shin, Dong-il;Shin, Dong-kyoo
    • Journal of Internet Computing and Services
    • /
    • v.16 no.6
    • /
    • pp.11-21
    • /
    • 2015
  • With growing interest in Human-Computer Interaction (HCI), research on HCI has been actively conducted, and with it research on Natural User Interface/Natural User eXperience (NUI/NUX), which uses a user's gestures and voice. NUI/NUX requires recognition algorithms such as gesture or voice recognition, but these have a weakness: their implementation is complex and training takes a long time, since they must go through steps including preprocessing, normalization, and feature extraction. Recently, Microsoft's Kinect has attracted attention as an NUI/NUX development tool, and studies using Kinect have been conducted. In a previous study, the authors implemented a hand-mouse interface with outstanding intuitiveness using a user's physical features. However, it had weaknesses such as unnatural mouse movement and low accuracy of the mouse functions. In this study, we designed and implemented a hand-mouse interface that introduces a new concept called the 'virtual monitor', extracting the user's physical features through Kinect in real time. A virtual monitor is a virtual space that can be controlled by the hand mouse; coordinates on the virtual monitor are accurately mapped onto coordinates on the real monitor. The hand-mouse interface based on the virtual monitor concept maintains the outstanding intuitiveness of the previous study and enhances the accuracy of the mouse functions. Further, we increased the accuracy of the interface by recognizing the user's unnecessary actions using a concentration indicator derived from electroencephalogram (EEG) data. To evaluate intuitiveness and accuracy, we tested the interface with 50 people ranging from their teens to their fifties. In the intuitiveness experiment, 84% of subjects learned how to use it within one minute. In the accuracy experiment, the mouse functions achieved accuracies of drag 80.4%, click 80%, and double-click 76.7%. With the intuitiveness and accuracy of the proposed hand-mouse interface verified through experiment, it is expected to be a good example of a hand-controlled interface in the future.
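The virtual-to-real coordinate mapping described above can be sketched as a simple normalize-and-scale step. The parameter names and the clamping behavior are illustrative assumptions, not the paper's exact formulation.

```python
def map_to_screen(hand, vm_origin, vm_size, screen_size):
    """Map a hand position inside the virtual-monitor rectangle (a plane
    sized from the user's physical features; names here are illustrative)
    onto real-monitor pixel coordinates."""
    nx = (hand[0] - vm_origin[0]) / vm_size[0]
    ny = (hand[1] - vm_origin[1]) / vm_size[1]
    nx = min(max(nx, 0.0), 1.0)          # clamp to the virtual monitor
    ny = min(max(ny, 0.0), 1.0)
    return (round(nx * (screen_size[0] - 1)), round(ny * (screen_size[1] - 1)))

# Virtual monitor: a 0.6 m x 0.4 m plane in front of the user; 1920x1080 screen
cursor = map_to_screen(hand=(0.30, 0.20), vm_origin=(0.0, 0.0),
                       vm_size=(0.6, 0.4), screen_size=(1920, 1080))
```

A hand at the center of the virtual plane lands at the center of the real screen; sizing the plane per user is what preserves intuitiveness across body types.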

Interface of Interactive Contents using Vision-based Body Gesture Recognition (비전 기반 신체 제스처 인식을 이용한 상호작용 콘텐츠 인터페이스)

  • Park, Jae Wan;Song, Dae Hyun;Lee, Chil Woo
    • Smart Media Journal
    • /
    • v.1 no.2
    • /
    • pp.40-46
    • /
    • 2012
  • In this paper, we describe interactive content that uses the result of vision-based body gesture recognition as its input interface. Because the content takes the imp, a figure common across Asian cultures, as its subject, players can enjoy it with cultural familiarity. Since players use their own gestures to fight the imp in the game, they are naturally absorbed in it, and at the end of the scenario users can choose among multiple endings. For gesture recognition, Kinect is used to obtain the three-dimensional coordinates of each limb joint and capture the static poses of the actions. Vision-based 3D human pose recognition is used to convey human gestures in HCI (Human-Computer Interaction). 2D pose-model-based recognition can recognize only simple 2D poses in particular environments; a 3D pose model, which describes the 3D human skeletal structure, can recognize more complex poses because it can use joint angles and the shape information of body parts. Because gestures can be represented as sequences of static poses, we recognize gestures composed of such poses using an HMM. The interactive content uses the gesture recognition result as its input interface, so it can be controlled naturally using only the user's gestures, and we aimed to improve immersion and interest through real-time interaction between the user and the imp.
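The HMM step, scoring a gesture as a sequence of quantized static poses, can be sketched with the scaled forward algorithm. The two toy gesture models and their probabilities below are invented for illustration; the paper's models would be trained from captured pose data.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete
    observation (pose-code) sequence, rescaling to avoid underflow."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        log_p += np.log(s)
        alpha = alpha / s
    return log_p

# Two-state left-to-right models over four quantized pose codes (toy numbers)
pi = np.array([1.0, 0.0])
A = np.array([[0.6, 0.4],
              [0.0, 1.0]])
B_punch = np.array([[0.90, 0.05, 0.025, 0.025],
                    [0.05, 0.90, 0.025, 0.025]])
B_wave = np.array([[0.025, 0.025, 0.90, 0.05],
                   [0.025, 0.025, 0.05, 0.90]])
seq = [0, 0, 1, 1]                      # observed pose codes
score_punch = forward_log_likelihood(seq, pi, A, B_punch)
score_wave = forward_log_likelihood(seq, pi, A, B_wave)
```

Classification picks the gesture model with the highest log-likelihood for the observed pose sequence.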


Interactive media facade based on image processing (영상처리 기반의 인터랙티브 미디어파사드에 관한 연구)

  • Jun, Ha-Ree;Yun, Hyun-Jung
    • Journal of the Korea Computer Graphics Society
    • /
    • v.21 no.3
    • /
    • pp.46-54
    • /
    • 2015
  • The development of digital technology is advancing to a level that influences the formation of city landscapes and spaces. Media facades create modernized city spaces through the contemporary application of various digital mediums. A media facade functions as media art from an artistic point of view, while also providing the means for a building to become a landmark from a cityscape point of view. Going a step further, media facades using interaction are drawing much attention because they enable communication with city inhabitants instead of one-way content provision. This paper studies such interactive media facades using the transparent display glass currently entering use as a construction material, and their potential for growth.

A Study on HCI Application based on Human Body Motion in Flight Game (활강 게임의 인체동작 기반 HCI 적용 연구)

  • Lim, Dohee;Baek, Jongwoo;Choi, Jiyoung;Jung, Heokyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.05a
    • /
    • pp.320-322
    • /
    • 2021
  • With the development of wireless Internet technology and the expansion of the game market, various forms of games are being developed for a variety of platforms, including mobile. In this environment, securing immersion from the game user's perspective secures a game's competitiveness, so immersion should be increased by satisfying each of the areas presented by Human-Computer Interaction (HCI) theory. To this end, this study implemented a downhill (gliding) game and tested it on a kiosk, applying an interface that recognizes the movements of the human body as a way to secure the freedom and immersion of game users.
