• Title/Summary/Keyword: Sensor Motion Recognition

Logical Activity Recognition Model for Smart Home Environment

  • Choi, Jung-In;Lim, Sung-Ju;Yong, Hwan-Seung
    • Journal of the Korea Society of Computer and Information / v.20 no.9 / pp.67-72 / 2015
  • Recently, with the expansion of the IoT (Internet of Things), studies on interaction between humans and things through motion recognition have been increasing. This paper proposes a system that recognizes the user's logical activity in a home environment by attaching sensors to various objects. We employ Arduino sensors and infer the logical activity using the physical-activity model developed in our previous research. The system can recognize activities such as watching TV, listening to music, talking, eating, cooking, sleeping, and using a computer. After generating experimental data from a virtual scenario, the average recognition rate was 95%, although the result may vary with the sensor configuration and with errors in physical-activity recognition. The recognized results are presented to the user through various graph visualizations.
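
A minimal Python sketch of the kind of object-sensor-to-activity mapping the abstract describes, not the authors' implementation; the sensor names and rule table below are hypothetical examples.

```python
# Hypothetical rule table: which object-mounted sensors must be active
# for each logical activity. Names are illustrative, not from the paper.
ACTIVITY_RULES = {
    frozenset({"tv_power", "sofa_pressure"}): "watching TV",
    frozenset({"speaker_power"}): "listening to music",
    frozenset({"stove_heat", "kitchen_motion"}): "cooking",
    frozenset({"bed_pressure"}): "sleeping",
    frozenset({"pc_power", "chair_pressure"}): "using computer",
}

def infer_activity(active_sensors: set) -> str:
    """Return the logical activity whose required sensors are all active."""
    for required, activity in ACTIVITY_RULES.items():
        if required <= active_sensors:
            return activity
    return "unknown"

print(infer_activity({"tv_power", "sofa_pressure", "lamp_power"}))  # watching TV
```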

A Study on Sensor-Based Upper Full-Body Motion Tracking on HoloLens

  • Park, Sung-Jun
    • Journal of the Korea Society of Computer and Information / v.26 no.4 / pp.39-46 / 2021
  • In this paper, we propose a motion recognition method for industrial applications in mixed reality. Industrial sites require movements (grasping, lifting, and carrying) across the entire upper body, from trunk motion to arm motion. Instead of vision-based systems such as Kinect or heavy motion-capture equipment, we use a combination of sensors and wearable devices: two IMU sensors for trunk and shoulder movement and a Myo armband for arm movement. Real-time data from the four devices are fused to enable motion recognition over the whole upper body. In the experiments, the sensors were attached to actual clothing and objects were manipulated through synchronization; with this synchronization approach, both large and small motions were handled without error. Finally, a performance evaluation on the HoloLens showed an average of 50 frames for one-handed operation and 60 frames for two-handed operation.
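
As an illustration of fusing orientation data from several wearable sensors into one upper-body pose, here is a minimal Python sketch; the trunk-shoulder-arm chaining and the quaternion convention are my assumptions, not the paper's code.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def fuse_upper_body(q_trunk, q_shoulder, q_arm):
    """Compose sensor orientations along an assumed trunk -> shoulder -> arm chain."""
    q_upper = quat_mul(q_trunk, q_shoulder)   # shoulder expressed in the world frame
    q_hand = quat_mul(q_upper, q_arm)         # arm/hand expressed in the world frame
    return q_upper, q_hand

identity = np.array([1.0, 0.0, 0.0, 0.0])
print(fuse_upper_body(identity, identity, identity))
```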

EF Sensor-Based Hand Motion Detection and Automatic Frame Extraction (EF 센서기반 손동작 신호 감지 및 자동 프레임 추출)

  • Lee, Hummin;Jung, Sunil;Kim, Youngchul
    • Smart Media Journal / v.9 no.4 / pp.102-108 / 2020
  • In this paper, we propose a real-time method for detecting hand motions and extracting the signal frames induced in EF (Electric Field) sensors. The signal induced by a hand motion contains not only noise from various environmental sources and from the sensor's physical placement, but also differing initial offset conditions, so detecting the motion signal and extracting the motion frame automatically in real time has been considered a challenging problem. In this study, we remove the PLN (Power Line Noise) with a low-pass filter with a 10 Hz cut-off and then apply an MA (Moving Average) filter to obtain clean, smooth input motion signals. To sense a hand motion, we use two thresholds (positive and negative) together with an offset value to detect the starting and ending moments of the motion. With this approach, we achieve a correct motion-detection rate of over 98%. Once the final motion frame is determined, the motion signals are normalized for use in a subsequent classification or recognition stage such as an LSTM deep neural network. Our experiments and analysis show that the proposed methods achieve better than 98% performance in both correct motion detection and frame matching.
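
A minimal Python sketch of the processing chain the abstract describes (10 Hz low-pass filtering, moving-average smoothing, and dual-threshold detection of a motion frame's start and end); the sampling rate, filter order, thresholds, and offset below are hypothetical, not the paper's values.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250          # assumed sampling rate [Hz]
CUTOFF = 10.0     # low-pass cut-off [Hz], removes power-line noise
MA_WIN = 9        # moving-average window length [samples]

def preprocess(signal):
    """Low-pass filter then moving-average smooth the raw EF signal."""
    b, a = butter(4, CUTOFF / (FS / 2), btype="low")
    lp = filtfilt(b, a, signal)
    kernel = np.ones(MA_WIN) / MA_WIN
    return np.convolve(lp, kernel, mode="same")

def extract_frame(signal, pos_th=0.2, neg_th=-0.2, offset=0.0):
    """Return (start, end) indices of the span exceeding either threshold."""
    s = preprocess(signal) - offset
    active = np.where((s > pos_th) | (s < neg_th))[0]
    if active.size == 0:
        return None
    return int(active[0]), int(active[-1])

t = np.arange(0, 2, 1 / FS)
demo = np.sin(2 * np.pi * 2 * t) * (t > 0.5) + 0.01 * np.random.randn(t.size)
print(extract_frame(demo))
```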

Recognition of Gap between base Plates for Automated Welding of Thick Plates (후판 자동용접을 위한 용접물의 갭 측정)

  • Yi, Hwa-Cho
    • Journal of the Korean Society for Precision Engineering / v.16 no.4 s.97 / pp.37-45 / 1999
  • Much automated welding equipment is used in industry. However, quality welds are difficult to obtain because of geometric error, thermal distortion, and incorrect joint fit-up, and in thick-plate welding these factors can create a gap between the base plates. A quality weld cannot be obtained unless this gap is taken into account. In this paper, the robot path and welding conditions are modified to produce a quality weld by detecting the position and size of the gap. A low-priced laser range sensor is used, and 3-dimensional information is obtained from the motion of the robot that holds it. The position and size of the gap are calculated by signal processing of the measured 3-dimensional joint-profile geometry: the data from the laser range sensor are segmented by an iterative end-point method, the segmented data are fitted by the least-squares method, and the existence of a gap is detected by comparing the result with the segmented shape of a template. The effects of robot measuring speed and gap size are also tested. The ability to recognize the gap is verified by comparing the real joint profile with the joint profile reconstructed by the signal processing.
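
A minimal Python sketch of the two processing steps the abstract names, iterative end-point segmentation of a range profile followed by a least-squares line fit of each segment; the tolerance value and the demo profile are hypothetical.

```python
import numpy as np

def point_line_dist(pts, p0, p1):
    """Perpendicular distance of each point in pts from the line p0-p1."""
    d = p1 - p0
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)
    return np.abs((pts - p0) @ n)

def iterative_end_point(pts, tol=0.5):
    """Recursively split the profile at the point farthest from the end-point chord."""
    dist = point_line_dist(pts, pts[0], pts[-1])
    i = int(np.argmax(dist))
    if dist[i] < tol or len(pts) < 3:
        return [pts]
    return iterative_end_point(pts[: i + 1], tol) + iterative_end_point(pts[i:], tol)

def fit_line(segment):
    """Least-squares fit y = a*x + b over one segment."""
    a, b = np.polyfit(segment[:, 0], segment[:, 1], 1)
    return a, b

# Hypothetical joint profile with a step (gap edge) half-way along the scan.
profile = np.column_stack([np.linspace(0, 10, 50),
                           np.r_[np.zeros(25), np.full(25, 2.0)]])
segments = iterative_end_point(profile, tol=0.5)
print([fit_line(s) for s in segments])
```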

A Implementation of User Exercise Motion Recognition System Using Smart-Phone (스마트폰을 이용한 사용자 운동 모션 인식 시스템 구현)

  • Kwon, Seung-Hyun;Choi, Yue-Soon;Lim, Soon-Ja;Joung, Suck-Tae
    • Journal of the Korea Academia-Industrial cooperation Society / v.17 no.10 / pp.396-402 / 2016
  • Recently, as the performance of smartphones has advanced and their distribution has increased, functions that once required separate devices have been consolidated into them, and the built-in sensors have steadily improved. With the growth of smartphone applications, services that encourage users to be physically active have drawn public attention. However, such services mostly address diet alone and lack exercise motion recognition that can tell whether a movement is performed in the correct position, so users have difficulty obtaining the full benefit of exercise. In this paper, we develop exercise motion-recognition software that senses the user's motion with the sensors built into a smartphone, and we implement a system for exercising with friends connected through a web server. The exercise motion recognition uses a Kalman filter algorithm to correct the user's motion data and a DTW algorithm to compare it against sampled reference data and determine whether the user is moving in the correct position.
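
A minimal Python sketch of the DTW comparison step described above: a (filtered) motion sequence is aligned against a reference template, and a small DTW distance is taken as a correctly performed movement. The distance threshold and the demo signals are hypothetical.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic O(n*m) dynamic time warping distance between two 1-D sequences."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

reference = np.sin(np.linspace(0, 2 * np.pi, 60))        # stored template repetition
measured = np.sin(np.linspace(0, 2 * np.pi, 75)) + 0.05  # user's (filtered) motion
print("correct form" if dtw_distance(measured, reference) < 10.0 else "incorrect form")
```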

A Study on Smart Phone Real-Time Motion Analysis System using Acceleration and Gyro Sensors (가속도센서와 자이로센서를 이용한 스마트폰 실시간 모션 분석 시스템에 관한 연구)

  • Park, Ju-Man;Park, Koo-Rack
    • Proceedings of the Korean Society of Computer Information Conference / 2013.01a / pp.63-65 / 2013
  • This paper proposes a real-time smartphone motion analysis algorithm in which values measured by an acceleration sensor and a gyro sensor are transmitted to a smartphone over wireless communication and analyzed in real time. Real-time motion analysis with a 3-axis acceleration sensor alone, or motion analysis based on gravitational acceleration, has difficulty obtaining and analyzing accurate values depending on the location, height, or surrounding magnetic fields. In this paper, we therefore perform more precise motion analysis by combining the acceleration sensor and the gyro sensor; applying this real-time motion analysis could be useful in various fields such as sports and medicine.
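
The abstract does not specify how the two sensors are combined, so the following is only an illustrative Python sketch of one common approach, a complementary filter that blends gyro integration with the accelerometer tilt estimate; the blend coefficient and sample period are hypothetical.

```python
import math

ALPHA = 0.98   # weight on the integrated gyro angle (assumed)
DT = 0.01      # sample period [s] (assumed)

def complementary_filter(pitch, gyro_rate, accel_y, accel_z):
    """Update the pitch estimate [rad] from one gyro/accelerometer sample."""
    accel_pitch = math.atan2(accel_y, accel_z)   # tilt estimate from gravity
    gyro_pitch = pitch + gyro_rate * DT          # short-term gyro integration
    return ALPHA * gyro_pitch + (1.0 - ALPHA) * accel_pitch

pitch = 0.0
for gyro_rate, ay, az in [(0.1, 0.0, 9.8), (0.1, 0.5, 9.7), (0.0, 1.0, 9.6)]:
    pitch = complementary_filter(pitch, gyro_rate, ay, az)
print(pitch)
```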

Marionette Control System using Gesture Mode Change (제스처 할당 모드를 이용한 마리오네트 조정 시스템)

  • Cheon, Kyeong-Min;Kwak, Su Hui;Rew, Keun-Ho
    • Journal of Institute of Control, Robotics and Systems / v.21 no.2 / pp.150-156 / 2015
  • In this paper, a marionette control system driven by wrist and finger gestures measured with an IMU sensor is studied. The signals from the sensor device are conditioned and recognized, and the resulting commands are sent via Bluetooth to the marionette's 8 motors (5 motors control the motion of the marionette, and 3 motors control its location). Because the degrees of freedom of the fingers are not independent of one another, some gestures are difficult to make. For finger postures that the limited finger DOF cannot produce, a gesture mode change is proposed: the mode change switches the gesture assignment as required. Experimental results show that the gesture mode change succeeds in producing the intended marionette postures.
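
A minimal Python sketch of the gesture-mode-change idea: the same recognized gesture maps to different motor commands depending on the currently selected mode. The gesture names, motor commands, and mode-switch gesture are hypothetical, not the paper's.

```python
# Hypothetical gesture-to-command table, one entry per assignment mode.
GESTURE_MAP = {
    "mode_a": {"fist": "raise_left_arm", "open_hand": "raise_right_arm"},
    "mode_b": {"fist": "bow_head",       "open_hand": "kick_leg"},
}

class MarionetteController:
    def __init__(self):
        self.mode = "mode_a"

    def on_gesture(self, gesture):
        if gesture == "mode_switch":                 # a dedicated gesture toggles the mode
            self.mode = "mode_b" if self.mode == "mode_a" else "mode_a"
            return None
        return GESTURE_MAP[self.mode].get(gesture)   # motor command to send over Bluetooth

ctrl = MarionetteController()
print(ctrl.on_gesture("fist"))        # raise_left_arm
ctrl.on_gesture("mode_switch")
print(ctrl.on_gesture("fist"))        # bow_head
```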

Controlling Position of Virtual Reality Contents with Mouth-Wind and Acceleration Sensor

  • Kim, Jong-Hyun
    • Journal of the Korea Society of Computer and Information / v.24 no.4 / pp.57-63 / 2019
  • In this paper, we propose a new framework for controlling VR (virtual reality) content in real time using the user's mouth-wind and the acceleration sensor of a mobile device. User interaction technology is important in VR, but the variety of user interface methods is still limited: most interaction techniques rely on hand touch, screen touch, or motion recognition. We propose a new interface that interacts with VR content in real time by combining the user's mouth-wind with the acceleration sensor. The direction of the mouth-wind is determined from the angle and position between the user and the mobile device, and the control position is adjusted with the device's acceleration sensor. Noise in the measured strength of the mouth-wind is smoothed with a simple averaging filter. To demonstrate the effectiveness of the proposed technique, we show results of interacting with game and simulation content in real time by applying the control position and the mouth-wind external force to the game.
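
A minimal Python sketch of turning the mouth-wind measurement into a control force as the abstract describes, smoothing the wind strength with a simple averaging filter and applying it along the estimated blow direction; the window size and direction handling are hypothetical.

```python
import math
from collections import deque

class MouthWindForce:
    def __init__(self, window=5):
        self.samples = deque(maxlen=window)   # recent wind-strength readings

    def update(self, strength, direction_rad):
        self.samples.append(strength)
        smoothed = sum(self.samples) / len(self.samples)   # simple average filter
        # External force applied to the VR object along the blow direction.
        return smoothed * math.cos(direction_rad), smoothed * math.sin(direction_rad)

wind = MouthWindForce()
for s in [0.0, 0.8, 1.2, 0.9]:
    fx, fy = wind.update(s, direction_rad=math.radians(30))
print(fx, fy)
```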

Object Recognition and Target Tracking Using Motion Synchronization between Virtual and Real Robots (가상로봇과 실제로봇 사이의 운동 동기화를 통한 물체 인식 및 목표물 추적방안)

  • Ahn, Hyeo Gyeong;Kang, Hyeon Jun;Kim, Jin Beom;Jung, Ji Won;Ok, Seo Won;Kim, Dong Hwan
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.26 no.1 / pp.20-29 / 2017
  • Motion synchronization between a real robot and a virtual robot, developed for object recognition and target tracking, is introduced. An ASUS XTION PRO Live is used as the sensor and is configured to recognize walls and obstacles and to perceive objects. Unity 3D is adopted to create the virtual environment associated with the real robot, and the virtual object is controlled with an input device. A Bluetooth serial communication module provides wireless communication between the PC and the real robot: the motion information of the virtual object controlled by the user is sent to the robot, which then moves in the same way according to that information. Through this motion synchronization, two scenarios that map the real space and current object information onto the virtual objects and space were demonstrated, showing good agreement between the two spaces.
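
A minimal Python sketch of streaming the virtual object's pose to the real robot over a Bluetooth serial link, in the spirit of the setup described above; the port name, baud rate, and message format are hypothetical, and only the standard pyserial API is assumed.

```python
import json
import serial  # pyserial

ser = serial.Serial("/dev/rfcomm0", 9600, timeout=1)  # assumed Bluetooth SPP port

def send_pose(x, y, heading_deg):
    """Send one pose update; the robot firmware replays it to stay in sync."""
    msg = json.dumps({"x": x, "y": y, "h": heading_deg}) + "\n"
    ser.write(msg.encode("ascii"))

# Example: forward the pose of the Unity-controlled virtual robot each frame.
send_pose(1.25, 0.40, 90.0)
```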

Development of Joint-Based Motion Prediction Model for Home Co-Robot Using SVM (SVM을 이용한 가정용 협력 로봇의 조인트 위치 기반 실행동작 예측 모델 개발)

  • Yoo, Sungyeob;Yoo, Dong-Yeon;Park, Ye-Seul;Lee, Jung-Won
    • KIPS Transactions on Software and Data Engineering / v.8 no.12 / pp.491-498 / 2019
  • A digital twin is a technology that virtualizes physical objects of the real world on a computer: sensor data are collected through the IoT and used to connect physical and virtual objects in both directions. It has the advantage of minimizing risk, because the operation of the virtual model can be tuned through simulation and experiments can be carried out in advance to respond to a varying environment. Recently, as artificial intelligence and machine learning have attracted attention, the tendency to virtualize the behavior of physical objects, observe the virtual models, and apply various scenarios has been increasing. In particular, recognizing each robot's motion is needed to build a digital twin of the collaborative robot (co-robot) that is at the heart of Industry 4.0 factory automation. Compared with modeling-based research on recognizing co-robot motion, there have been few attempts to predict motion from sensor data. In this paper, we therefore build an experimental environment for collecting current and inertial data from a co-robot and propose a motion prediction model based on the collected sensor data. The proposed method classifies the co-robot's motion commands into 9 types based on joint position and predicts them from current and inertial sensor values through accumulated learning. The training data are the sensor values collected while the co-robot operates with a margin in the input parameters of the motion commands, so the model can predict not only the nine movements along the same path but also movements along similar paths. Trained with an SVM, the model's accuracy, precision, and recall were evaluated at 97% on average.
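
A minimal Python sketch of the classification step described above, training an SVM to map windows of current/inertial sensor features to 9 motion-command classes with scikit-learn; the synthetic data, feature layout, and hyperparameters are hypothetical stand-ins for the collected co-robot data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
N_CLASSES, N_SAMPLES, N_FEATURES = 9, 900, 12   # e.g. joint currents + IMU statistics

# Synthetic stand-in: each motion class clusters around its own feature mean.
y = rng.integers(0, N_CLASSES, size=N_SAMPLES)
X = rng.normal(loc=y[:, None], scale=0.5, size=(N_SAMPLES, N_FEATURES))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = SVC(kernel="rbf", C=10.0, gamma="scale")
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```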