• Title/Summary/Keyword: Virtual Sensor


Interactive Experience Room Using Infrared Sensors and User's Poses

  • Bang, Green;Yang, Jinsuk;Oh, Kyoungsu;Ko, Ilju
    • Journal of Information Processing Systems
    • /
    • v.13 no.4
    • /
    • pp.876-892
    • /
    • 2017
  • Virtual reality is a virtual space constructed by a computer that gives users the opportunity to indirectly experience situations they have not encountered in real life, through the realization of information for virtual environments. Various studies have been conducted to realize virtual reality, and the user interface is a major factor in maximizing immersion and usability. However, most existing methods have disadvantages, such as high cost or restriction of the user's physical activity by special devices attached to the body. This paper proposes a new type of interface that lets users apply their intentions and actions directly to the virtual space without special devices, and introduces test content using the new system. Users can interact with the virtual space by throwing an object into it; to detect this, moving-object detectors are built using infrared sensors. In addition, users can control the virtual space with their own postures. The method heightens interest and concentration, increases the sense of reality and immersion, and maximizes the user's physical experience.
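
As an illustration of the infrared-based moving-object detection described in this entry, here is a minimal Python sketch (not from the paper) that polls a row of IR break-beam sensors and maps an interrupted beam to a hit position in the virtual space; the sensor count, wall width, polling rate, and the `read_ir_sensors` stub are all assumed.

```python
# Hypothetical sketch: detect a thrown object with a row of IR break-beam sensors
# and map the interrupted beam to a horizontal hit position in the virtual space.
import time

NUM_BEAMS = 8            # assumed number of infrared sensors in the row
WALL_WIDTH_M = 2.4       # assumed physical width covered by the sensor row

def read_ir_sensors():
    """Placeholder for hardware polling; returns True where a beam is interrupted."""
    return [False] * NUM_BEAMS   # replace with real GPIO/ADC reads

def beam_to_virtual_x(beam_index):
    """Map a beam index to a normalized x coordinate (0..1) in the virtual scene."""
    return (beam_index + 0.5) / NUM_BEAMS

def watch_for_throws(on_hit, poll_hz=200):
    """Poll the sensor row and report the first interrupted beam as a hit event."""
    period = 1.0 / poll_hz
    while True:
        states = read_ir_sensors()
        for i, broken in enumerate(states):
            if broken:
                on_hit({"x_norm": beam_to_virtual_x(i),
                        "x_m": beam_to_virtual_x(i) * WALL_WIDTH_M,
                        "t": time.time()})
                break
        time.sleep(period)
```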

Performance Factor Analysis of Sensing-Data Estimation Algorithm for Walking Robots (보행 로봇을 위한 센서 추정 알고리즘의 성능인자 분석)

  • Shon, Woong-Hee;Yu, Seung-Nam;Lee, Sang-Ho;Han, Chang-Soo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.11 no.11
    • /
    • pp.4087-4094
    • /
    • 2010
  • The sensor data measured by a quadruped robot is used to recognize the physical environment and other information and to control the robot's posture and walking. Precise control of the robot requires highly accurate sensor data; however, most of these sensors are expensive and have low durability, and they are exposed to excessive loads when applied to a field robot system. This issue becomes more serious when the robot system is manufactured as a mass product. In this context, this study suggests a virtual sensor technology to replace or assist the main sensor system. The scheme is realized with the back-propagation algorithm of neural network theory, and the quality of the estimated sensor data is improved through algorithmic and hardware-based treatments. This study performs various trials to identify the parameters that affect the quality and reliability of the estimated sensor data, and shows the feasibility of the proposed methodology.
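
The virtual-sensor idea in this entry, estimating an expensive sensor channel from cheaper ones with a back-propagation-trained network, can be sketched as follows. This is a minimal illustration rather than the authors' implementation; the layer size, learning rate, and epoch count are assumptions.

```python
# Hypothetical sketch of a "virtual sensor": a small neural network trained with
# back-propagation to estimate one (expensive) sensor channel from the remaining
# (cheap) channels.
import numpy as np

rng = np.random.default_rng(0)

def train_virtual_sensor(X, y, hidden=16, lr=0.01, epochs=2000):
    """X: (n_samples, n_cheap_sensors), y: (n_samples,) expensive sensor readings."""
    n_in = X.shape[1]
    W1 = rng.normal(0, 0.1, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, 1));    b2 = np.zeros(1)
    y = y.reshape(-1, 1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)              # forward pass through one hidden layer
        pred = h @ W2 + b2
        err = pred - y                        # back-propagate the squared error
        gW2 = h.T @ err / len(X); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)      # gradient through tanh
        gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    # return a callable estimator that stands in for the physical sensor
    return lambda Xnew: (np.tanh(Xnew @ W1 + b1) @ W2 + b2).ravel()
```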

Virtual Target Overlay Technique by Matching 3D Satellite Image and Sensor Image (3차원 위성영상과 센서영상의 정합에 의한 가상표적 Overlay 기법)

  • Cha, Jeong-Hee;Jang, Hyo-Jong;Park, Yong-Woon;Kim, Gye-Young;Choi, Hyung-Il
    • The KIPS Transactions:PartD
    • /
    • v.11D no.6
    • /
    • pp.1259-1268
    • /
    • 2004
  • To organize training for actual combat within a limited training area, a realistic training simulation reflecting various battle conditions is essential. In this paper, we propose a virtual target overlay technique that does not use a virtual image but instead projects a virtual target onto a ground-based CCD image according to a designated scenario for realistic training simulation. In the proposed method, we create a realistic 3D model (for the instructor) using a high-resolution Geographic Tag Image File Format (GeoTIFF) satellite image and Digital Terrain Elevation Data (DTED), and extract the road area from a given CCD image (for both the instructor and the trainee). Satellite images and ground-based sensor images differ greatly in observation position, resolution, and scale, which makes feature-based matching difficult. Hence, we propose a moving synchronization technique that projects the target onto the sensor image according to the moving path marked on the 3D satellite image, by applying the Thin-Plate Spline (TPS) interpolation function, an image-warping function, to the two given sets of corresponding control-point pairs. For the experiments, two Pentium 4 1.8 GHz personal computers with 512 MB of RAM were employed, together with satellite and sensor images of the Daejeon area. The experimental results show the effectiveness of the proposed algorithm.
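
A minimal sketch of the TPS control-point warping this entry relies on, using SciPy's thin-plate-spline RBF interpolator; the control points and the example path below are placeholders, not data from the paper.

```python
# Hypothetical sketch of thin-plate-spline (TPS) warping from control points:
# given corresponding points marked on the satellite view and the ground sensor
# image, map a target path from one view onto the other.
import numpy as np
from scipy.interpolate import RBFInterpolator

# corresponding control points: satellite-view (x, y) -> sensor-image (u, v)
sat_pts = np.array([[10, 10], [200, 15], [20, 180], [210, 190], [110, 100]], float)
img_pts = np.array([[32, 40], [250, 55], [48, 230], [260, 245], [150, 150]], float)

# TPS interpolator mapping satellite coordinates to sensor-image coordinates
tps = RBFInterpolator(sat_pts, img_pts, kernel='thin_plate_spline')

def project_path(path_on_satellite):
    """Warp a target path (N, 2) from the satellite view onto the sensor image."""
    return tps(np.asarray(path_on_satellite, float))

# example: overlay a movement path marked by the instructor on the 3D view
print(project_path([[50, 50], [100, 100], [150, 150]]))
```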

Development of the Flexible Observation System for a Virtual Reality Excavator Using the Head Tracking System (헤드 트래킹 시스템을 이용한 가상 굴삭기의 편의 관측 시스템 개발)

  • Le, Q.H.;Jeong, Y.M.;Nguyen, C.T.;Yang, S.Y.
    • Journal of Drive and Control
    • /
    • v.12 no.2
    • /
    • pp.27-33
    • /
    • 2015
  • Excavators are versatile earthmoving equipment used in civil engineering, hydraulic engineering, grading and landscaping, pipeline construction, and mining. Effective operator training is essential to ensure safe and efficient operation of the machine. A virtual reality excavator simulated on conventional large monitors is limited in its ability to provide a realistic, real-world training experience. We propose a flexible observation method with a head tracking system to improve the user's feeling and sensation when operating a virtual reality excavator. First, an excavation simulator is designed by combining an excavator SimMechanics model with the virtual world. Second, a head mounted display (HMD) device replaces the cumbersome large screens. Moreover, an Inertial Measurement Unit (IMU) sensor is mounted on the HMD to track the movement of the operator's head; its signals change the virtual viewpoint of the virtual reality excavator accordingly. Simulation results were used to analyze the performance of the proposed system.
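
A minimal sketch of how IMU head orientation might drive the virtual viewpoint as this entry describes; the rotation order, axis conventions, and smoothing constant are assumptions rather than the paper's values.

```python
# Hypothetical sketch: turn IMU head orientation (yaw/pitch/roll, radians) into a
# camera rotation matrix so the virtual excavator viewpoint follows the head.
import numpy as np

def head_rotation(yaw, pitch, roll):
    """Compose Z (yaw), Y (pitch), X (roll) rotations into one 3x3 camera rotation."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def smooth_angles(prev_ypr, new_ypr, alpha=0.2):
    """Simple low-pass filter on (yaw, pitch, roll) to reduce IMU jitter.
    Note: naive blending ignores wrap-around near +/-pi; fine for small head motions."""
    return (1 - alpha) * np.asarray(prev_ypr) + alpha * np.asarray(new_ypr)
```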

3D Character Motion Synthesis and Control Method for Navigating Virtual Environment Using Depth Sensor (깊이맵 센서를 이용한 3D캐릭터 가상공간 내비게이션 동작 합성 및 제어 방법)

  • Sung, Man-Kyu
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.6
    • /
    • pp.827-836
    • /
    • 2012
  • After the successful advent of Microsoft's Kinect, many interactive contents that control a user's 3D avatar motions in real time have been created. However, because of the Kinect's intrinsic IR-projection problem, users must face the sensor directly and perform all motions in a standing-still position. These constraints make it almost impossible for the 3D character to navigate the virtual environment, one of the most required functionalities in games. This paper proposes a new method that enables a 3D character to navigate the virtual environment with highly realistic motions. First, to determine the user's intention to navigate, the method recognizes walking-in-place motion. Second, the algorithm applies a motion-splicing technique that automatically segments the character's upper- and lower-body motions and naturally replaces the lower-body motion with pre-processed motion-capture data. Because the proposed algorithm synthesizes realistic lower-body walking motion from motion-capture data while capturing the upper-body motion online in a puppetry manner, it allows the 3D character to navigate the virtual environment realistically.
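
The walking-in-place recognition step mentioned in this entry could look roughly like the following sketch, which counts alternating knee lifts from depth-sensor skeleton data; the thresholds and window size are assumptions, not values from the paper.

```python
# Hypothetical sketch of walking-in-place recognition: alternating left/right knee
# lifts above a threshold are counted as steps that drive forward navigation.
from collections import deque

LIFT_THRESHOLD = 0.08   # metres a knee must rise above its resting height (assumed)
WINDOW = 30             # frames of lift history kept (assumed)

class WalkInPlaceDetector:
    def __init__(self):
        self.rest_left = None
        self.rest_right = None
        self.history = deque(maxlen=WINDOW)

    def update(self, left_knee_y, right_knee_y):
        """Feed per-frame knee heights; returns True when a new step is detected."""
        if self.rest_left is None:            # calibrate resting heights on frame one
            self.rest_left, self.rest_right = left_knee_y, right_knee_y
            return False
        lifted = None
        if left_knee_y - self.rest_left > LIFT_THRESHOLD:
            lifted = "L"
        elif right_knee_y - self.rest_right > LIFT_THRESHOLD:
            lifted = "R"
        # a step is registered only when the lifted leg alternates
        step = lifted is not None and (not self.history or self.history[-1] != lifted)
        if lifted:
            self.history.append(lifted)
        return step   # caller advances the character one step when True
```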

Affective interaction to emotion expressive VR agents (가상현실 에이전트와의 감성적 상호작용 기법)

  • Choi, Ahyoung
    • Journal of the Korea Computer Graphics Society
    • /
    • v.22 no.5
    • /
    • pp.37-47
    • /
    • 2016
  • This study evaluates user feedback, such as physiological responses and facial expressions, while subjects play a social decision-making game with interactive virtual agent partners. In the game, subjects invest money or credit in one of several projects, and their partners (virtual agents) also invest in one of the projects. Subjects interact with different kinds of virtual agents that show reciprocating or non-reciprocating behavior while expressing socially affective facial expressions, and the total money or credit a subject earns is contingent on the partner's choice. The study observed that a subject's appraisal of interaction with cooperative/uncooperative (or friendly/unfriendly) virtual agents in the investment game results in increased autonomic and somatic responses, and that these responses can be observed in real time through physiological signals and facial expressions. To assess user feedback, a photoplethysmography (PPG) sensor and a galvanic skin response (GSR) sensor were used while the subject's frontal facial image was captured by a web camera. After all trials, subjects were asked to answer questions evaluating how much the interactions with the virtual agents affected their appraisals.
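
A minimal sketch of the kind of real-time feature extraction this entry mentions: heart rate from PPG peak intervals and a relative GSR change score. The sampling rate and the peak-detection rule are assumptions.

```python
# Hypothetical sketch of simple physiological features computed per trial window.
import numpy as np

def heart_rate_bpm(ppg, fs=100.0):
    """Estimate heart rate from a PPG window by counting peaks above mean + 1 std."""
    x = np.asarray(ppg, float)
    above = x > x.mean() + x.std()
    beats = np.flatnonzero(np.diff(above.astype(int)) == 1)  # rising edges = beats
    if len(beats) < 2:
        return None
    mean_interval_s = np.diff(beats).mean() / fs
    return 60.0 / mean_interval_s

def gsr_response(gsr_window, baseline):
    """Relative skin-conductance change against a pre-trial baseline."""
    return (np.mean(gsr_window) - baseline) / baseline
```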

A Study on Comparative Experiment of Hand-based Interface in Immersive Virtual Reality (몰입형 가상현실에서 손 기반 인터페이스의 비교 실험에 관한 연구)

  • Kim, Jinmo
    • Journal of the Korea Computer Graphics Society
    • /
    • v.25 no.2
    • /
    • pp.1-9
    • /
    • 2019
  • This study compares hand-based interfaces for improving a user's virtual reality (VR) presence by enhancing immersion in VR interactions. To provide an immersive experience in which users can directly control the virtual environment and its objects with their hands, while minimizing the device burden on users of immersive VR systems, we designed two experimental interfaces: hand-motion-recognition-sensor-based and controller-based interaction. Hand-motion-recognition-sensor-based interaction reflects accurate hand movements, direct gestures, and motion representations in the virtual environment, and it requires no device beyond the VR head mounted display (HMD). Controller-based interaction provides a generalized interface that maps each gesture to a key on the controller supplied with the VR HMD, giving easy access to the controller. The comparative experiments in this study confirm the convenience and intuitiveness of VR interactions using the user's hands.
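
As a rough illustration of the two interaction styles compared in this entry, the sketch below exposes one abstract interaction intent served either by a hand-tracking backend or by a controller-button mapping; all class, method, and gesture names are hypothetical.

```python
# Hypothetical sketch: one abstract "interaction intent" with two interchangeable
# backends, a hand-tracking sensor and a controller whose buttons map to gestures.
from abc import ABC, abstractmethod

class HandInterface(ABC):
    @abstractmethod
    def current_intent(self):
        """Return one of 'grab', 'release', 'point', or None."""

class HandTrackingInterface(HandInterface):
    def __init__(self, sensor):
        self.sensor = sensor                      # wrapper around a hand-tracking device

    def current_intent(self):
        gesture = self.sensor.read_gesture()      # e.g. 'fist', 'open', 'index_extended'
        return {"fist": "grab", "open": "release", "index_extended": "point"}.get(gesture)

class ControllerInterface(HandInterface):
    BUTTON_MAP = {"trigger": "grab", "grip_release": "release", "touchpad": "point"}

    def __init__(self, controller):
        self.controller = controller

    def current_intent(self):
        for button, intent in self.BUTTON_MAP.items():
            if self.controller.is_pressed(button):
                return intent
        return None
```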

Investigation of the body distribution of load pressure and virtual wear design of short pants harnesses in flying condition (플라잉 상태에서 바지형태의 하네스에 대한 하중압력 분포 측정 및 가상착의 적용)

  • Kwon, MiYeon
    • Journal of the Korea Fashion and Costume Design Association
    • /
    • v.23 no.3
    • /
    • pp.11-21
    • /
    • 2021
  • Virtual reality is currently used mainly in games, but it is starting to be applied in a variety of media fields, such as broadcasting and film. Virtual reality provides more fun than reality and can offer new experiences in areas that cannot be experienced in real life because of constraints of time, space, and environment. In particular, as non-contact social activity has increased due to COVID-19, it is being applied to the education, health, and medical industries, and its content is expanding further into the design and military fields. Therefore, the purpose of this study was to observe the distribution of load and pressure felt by the body in the flying state while wearing a short-pants harness, which is mainly used in the game and entertainment industries. In the experiment, the average pressure in the flying state was measured by attaching pressure sensors to the front and back of a human mannequin. The results confirmed that the load concentrated on the waist in the flying state was 44 N, with a pressure of 1353 kPa. The pressure distribution was concentrated in front of the center of gravity, measured at 98% by the pressure sensors with an average pressure of approximately 15 kPa, while the back was measured at 67% with a pressure of approximately 12 kPa. The measured load and pressure distribution is presented as fundamental data for improving the wearability and comfort of harnesses, and is compared with the clothing pressure in flight analyzed through virtual wear of the harness in the CLO 3D program.
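
The aggregation behind numbers like those reported in this entry can be illustrated with a small sketch: the fraction of pressure sensors that registered readings, their mean pressure, and the contact area implied by a load at a given pressure. The activity threshold is an assumed value.

```python
# Hypothetical sketch of summarizing per-sensor pressure readings from the mannequin.
def summarize_pressure(readings_kpa, active_threshold_kpa=1.0):
    """readings_kpa: per-sensor pressures; returns (active fraction, mean of active)."""
    active = [p for p in readings_kpa if p >= active_threshold_kpa]
    if not active:
        return 0.0, 0.0
    return len(active) / len(readings_kpa), sum(active) / len(active)

def contact_area_cm2(load_n, pressure_kpa):
    """Area over which a load acts at a given pressure: A = F / P (kPa -> Pa, m^2 -> cm^2)."""
    return load_n / (pressure_kpa * 1000.0) * 1e4
```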

A Study on Trainer and Cover Recognition Algorithm for Posture Recognition of Virtual Shooting Trainer (가상 사격 훈련자 자세인식을 위한 훈련자와 엄폐물 인식 알고리즘 연구)

  • Kim, Hyung-O;Hong, ChangHo;Cho, Sung Ho;Park, Youster
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.05a
    • /
    • pp.298-300
    • /
    • 2021
  • The Ministry of National Defense decided to build a realistic combat simulation training system based on virtual reality and augmented reality, in accordance with the expanded scientific training system of "Defense Reform 2.0". A realistic combat simulation training system should maximize tension and the training effect, as in actual combat, through engagement between trainers, and it should also strengthen survival training alongside combat-like shooting training by incorporating the use of cover. Previous studies offer techniques suited to improving a trainer's shooting precision, but they make it difficult to practice bilateral engagement as in actual combat and are particularly insufficient for combat shooting training that uses cover. Therefore, in this paper, we propose an S/W algorithm that generates a virtual avatar by recognizing the opponent's shooting posture on the screen of the virtual shooting trainer. This S/W algorithm recognizes the trainer and the cover from depth information acquired through a depth sensor and estimates the trainer's posture.
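
A rough sketch of the depth-based separation this entry proposes: distinguish the trainer from the cover by comparing against a reference depth frame, then classify a coarse posture from the silhouette's vertical extent. The thresholds and the reference-frame approach are assumptions, not the paper's method in detail.

```python
# Hypothetical sketch: segment trainer vs. cover in a depth frame and classify posture.
import numpy as np

def segment(depth_m, cover_depth_m, tolerance_m=0.15):
    """Split a depth frame (H, W) into trainer and cover masks.

    cover_depth_m: reference depth image captured with only the cover in place.
    Pixels close to the reference are cover; pixels nearer to the camera are trainer.
    """
    cover_mask = np.abs(depth_m - cover_depth_m) < tolerance_m
    trainer_mask = (depth_m < cover_depth_m - tolerance_m) & (depth_m > 0)
    return trainer_mask, cover_mask

def coarse_posture(trainer_mask, standing_px=300, kneeling_px=180):
    """Classify posture from the vertical extent of the trainer's silhouette (in pixels)."""
    rows = np.flatnonzero(trainer_mask.any(axis=1))
    if rows.size == 0:
        return "covered"
    height_px = rows[-1] - rows[0]
    if height_px >= standing_px:
        return "standing"
    if height_px >= kneeling_px:
        return "kneeling"
    return "crouched"
```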
