• Title/Summary/Keyword: Mobile Motion Recognition

Motion Recognition of Mobile Phone for data sharing based on Google Cloud Message Service (Google 클라우드 메시지 서비스 기반의 데이터 공유를 위한 모바일 폰의 모션 인식)

  • Seo, Jung-Hee;Park, Hung-Bog
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.14 no.1, pp.205-212, 2019
  • With the rapid spread of mobile phones, users increasingly want to tie the phone to their personal activities and to share (transmit and receive) and store data more easily and simply in the mobile environment. This paper proposes motion recognition on a mobile phone for sharing personal information with people located within a certain distance, combining a location-based service with the GCM service. The proposed application is based on Google Cloud Messaging, which enables asynchronous communication with mobile applications running on the Android operating system. Because personal information can be shared easily, simply, and in real time from any mobile device anywhere, the approach satisfies the requirements of a lightweight mechanism.
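
The abstract gives no implementation details, but the idea can be illustrated with a minimal Python sketch that assumes a simple shake threshold on the accelerometer and the legacy GCM HTTP downstream endpoint; the API key, device tokens, and payload below are placeholders, not values from the paper.

```python
import requests

GCM_ENDPOINT = "https://gcm-http.googleapis.com/gcm/send"  # legacy GCM HTTP endpoint
API_KEY = "YOUR_SERVER_API_KEY"                            # placeholder server key

def is_shake(ax, ay, az, threshold=2.5):
    """Treat a large acceleration magnitude (in g) as a deliberate shake gesture."""
    return (ax ** 2 + ay ** 2 + az ** 2) ** 0.5 > threshold

def share_with_nearby(registration_ids, payload):
    """Push the shared data to nearby devices via GCM downstream messaging."""
    body = {"registration_ids": registration_ids, "data": payload}
    headers = {"Authorization": "key=" + API_KEY,
               "Content-Type": "application/json"}
    return requests.post(GCM_ENDPOINT, json=body, headers=headers, timeout=5)

# Example: an accelerometer sample (in g) that crosses the shake threshold
if is_shake(0.1, 3.2, 0.4):
    share_with_nearby(["device_token_1", "device_token_2"],
                      {"name": "Alice", "phone": "010-0000-0000"})
```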

Recognition Performance of Vestibular-Ocular Reflex Based Vision Tracking System for Mobile Robot (이동 로봇을 위한 전정안반사 기반 비젼 추적 시스템의 인식 성능 평가)

  • Park, Jae-Hong;Bhan, Wook;Choi, Tae-Young;Kwon, Hyun-Il;Cho, Dong-Il;Kim, Kwang-Soo
    • Journal of Institute of Control, Robotics and Systems, v.15 no.5, pp.496-504, 2009
  • This paper presents the recognition performance of a VOR (Vestibular-Ocular Reflex) based vision tracking system for mobile robots. The VOR is a reflexive eye movement that, during head movements, rotates the eye in the direction opposite to the head motion, keeping the image of the object of interest centered on the retina. We applied this physiological concept to a vision tracking system to obtain high recognition performance in mobile environments. The proposed method was implemented in a vision tracking system consisting of a motion sensor module and an actuation module with a vision sensor. We tested the developed system on an x/y stage and a rate table for linear and angular motion, respectively. The experimental results show that the recognition rates of the VOR-based method are three times higher than those of a conventional non-VOR vision system, mainly because the VOR-based system keeps the line of sight fixed on the object and thus reduces image blurring in dynamic environments. This suggests that the VOR concept proposed in this paper can be applied efficiently to vision tracking systems for mobile robots.
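
As a rough illustration of the VOR idea described above (not the paper's actual controller), the camera-rate command can counter-rotate against the measured body rotation, with a small visual-feedback term for residual drift; the gains here are arbitrary.

```python
# Minimal VOR-style compensation loop: the camera actuator is commanded to rotate
# opposite to the measured body rotation so the line of sight stays on the target.
K_VOR = 1.0   # ideal VOR gain: camera rate = -body rate
K_IMG = 0.05  # small visual-feedback gain to correct residual drift

def camera_rate_command(body_angular_rate, target_offset_px):
    """Angular-rate command (rad/s) for the camera actuation module."""
    vor_term = -K_VOR * body_angular_rate    # reflexive counter-rotation
    visual_term = -K_IMG * target_offset_px  # slow correction from the image error
    return vor_term + visual_term

# Example: the robot turns at +0.8 rad/s while the target has drifted 12 px off center
print(camera_rate_command(0.8, 12))
```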

Implementation of Rule-based Smartphone Motion Detection Systems

  • Lee, Eon-Ju;Ryou, Seung-Hui;Lee, So-Yun;Jeon, Sung-Yoon;Park, Eun-Hwa;Hwang, Jung-Ha;Choi, Doo-Hyun
    • Journal of the Korea Society of Computer and Information, v.26 no.7, pp.45-55, 2021
  • Information obtained through the various sensors embedded in a smartphone can be used to identify and analyze a user's movements and situations. In this paper, we propose two rule-based motion detection systems that detect three alphabet-shaped motions, 'I', 'S', and 'Z', by analyzing data from the smartphone's accelerometer and gyroscope. First, the characteristics of the acceleration and angular velocity for each motion are analyzed. Based on this analysis, two rule-based systems are proposed, implemented as an Android application, and used to verify the detection performance for each motion. Both rule-based systems show a high recognition rate of over 90% for each motion, and the system that uses an ensemble of rules performs better than the other.
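
A toy version of such a rule-based classifier might look like the following Python sketch; the axis assignments and thresholds are illustrative guesses, not the rules from the paper.

```python
import numpy as np

def classify_motion(accel, gyro, lateral_thresh=2.0, rotation_thresh=1.0):
    """Classify a recorded motion as 'I', 'S', or 'Z' from (N, 3) sensor arrays.

    accel: accelerometer samples (m/s^2), x axis assumed lateral.
    gyro:  gyroscope samples (rad/s).
    """
    lateral = accel[:, 0]
    # signs of the strong lateral peaks, in temporal order
    swings = np.sign(lateral[np.abs(lateral) > lateral_thresh])
    rotating = np.max(np.abs(gyro)) > rotation_thresh

    if swings.size == 0 and not rotating:
        return "I"                            # straight stroke, no lateral swings
    changes = np.count_nonzero(np.diff(swings))
    if changes >= 2:                          # at least two direction reversals
        return "S" if swings[0] > 0 else "Z"  # first swing direction picks S vs Z
    return "unknown"
```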

A Study on Stable Motion Control of Mobile-Manipulators Robot System (모바일-매니퓰레이터 구조 로봇시스템의 안정한 모션제어에 관한연구)

  • Park, Moon-Youl;Hwang, Won-Jun;Park, In-Man;Kang, Un-Wook
    • Journal of the Korean Society of Industry Convergence, v.17 no.4, pp.217-226, 2014
  • As society has shifted toward 21st-century high-tech industries, people have become reluctant to work in difficult and dirty environments, and unmanned robotic technologies are increasingly in demand. Techniques such as voice control and obstacle avoidance have been proposed, and voice recognition in particular is important because it enables convenient interaction between humans and machines. In this study, to achieve stable, voice-command-based motion control of a robot system with a mobile-manipulator structure, kinematic analysis and dynamic modeling of a two-armed manipulator and a three-wheel mobile robot were carried out. Autonomous driving of the three-wheel mobile robot and a motion control system for the two-armed manipulator were then designed, and combined robot control through voice commands was performed. In the experiments, driving control and simulations of the two-armed manipulator were conducted, and stable motion control of the voice-command-based mobile-manipulator robot system was verified through driving control, manipulator motion control, and combined voice-command-based control.
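
The abstract does not specify the wheel arrangement or the control laws, but as one hypothetical illustration of the kinematic modeling step, the inverse kinematics of a three-wheel omnidirectional base can be sketched as follows; the geometry and dimensions are assumptions, not values from the paper.

```python
import numpy as np

R = 0.05  # wheel radius (m), assumed
L = 0.20  # distance from the robot center to each wheel (m), assumed
WHEEL_ANGLES = np.deg2rad([0.0, 120.0, 240.0])  # assumed omni-wheel mounting angles

def wheel_speeds(vx, vy, omega):
    """Map a body-frame velocity command (vx, vy in m/s, omega in rad/s)
    to the three wheel angular speeds (rad/s)."""
    speeds = []
    for a in WHEEL_ANGLES:
        # component of the commanded motion along each wheel's rolling direction
        v_wheel = -np.sin(a) * vx + np.cos(a) * vy + L * omega
        speeds.append(v_wheel / R)
    return np.array(speeds)

# Example: drive forward at 0.3 m/s while rotating at 0.1 rad/s
print(wheel_speeds(0.3, 0.0, 0.1))
```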

Scene Recognition based Autonomous Robot Navigation robust to Dynamic Environments (동적 환경에 강인한 장면 인식 기반의 로봇 자율 주행)

  • Kim, Jung-Ho;Kweon, In-So
    • The Journal of Korea Robotics Society, v.3 no.3, pp.245-254, 2008
  • Recently, many vision-based navigation methods have been introduced as intelligent robot applications. However, most of these methods focus mainly on finding the database image that corresponds to a query image. Thus, if the environment changes, for example when objects move, a robot is unlikely to find consistent corresponding points with one of the database images. To solve this problem, we propose a novel navigation strategy that uses fast motion estimation and a practical scene recognition scheme that prepares for the kidnapping problem, defined as the problem of re-localizing a mobile robot after it has undergone an unknown motion or visual occlusion. The algorithm is based on motion estimation by a camera to plan the robot's next movement and an efficient outlier rejection algorithm for scene recognition. Experimental results demonstrate the robustness of the vision-based autonomous navigation in dynamic environments.
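
As a stand-in for the paper's motion estimation and outlier rejection steps, a conventional feature-based sketch with RANSAC (not the authors' own rejection scheme) could look like this in Python with OpenCV; the camera intrinsics are made up.

```python
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],   # assumed intrinsics for a 640x480 image
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

def estimate_motion(img_prev, img_curr):
    """Estimate camera rotation R and unit-scale translation t between two grayscale frames."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC rejects matches caused by moving objects before the pose is recovered
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t
```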

Analysis of the Impact of Motion Recognition Sensor on Mobile Game by Compare Valuation Experiment (비교평가 실험으로 동작인식센서가 모바일게임에 미치는 영향분석)

  • Lee, Dae-Young;Sung, Jung-Hawn
    • Journal of Korea Game Society, v.9 no.5, pp.63-72, 2009
  • The iPhone introduced a new type of mobile control by adopting a motion recognition sensor, which influenced game application development and led to many games that use such sensors. In this paper, we divide game enjoyment into five factors to capture how the sensor affects enjoyment. To clarify the distinction between devices, we tested two games with the same content on devices with and without a motion recognition sensor: Cooking Mama as the test game, and the iPod touch and NDS as the devices. The experiment shows that control with a motion recognition sensor is far more enjoyable than control with a touch sensor. The motion sensor scored higher on every fun factor (stimulus, absorption, empathy, accomplishment, and variation), with especially large differences in stimulus and empathy. This suggests that extended communication between the gamer and the device can itself be a source of fun.

Generation of Adaptive Motion Using Quasi-simultaneous Recognition of Plural Targets

  • Mizushima, T.;Minami, M.;Mae, Y.;Sakamoto, Y.;Song, W.
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference, 2005.06a, pp.882-887, 2005
  • This paper describes quasi-simultaneous recognition of plural targets and robot motion control based on that recognition. The method searches for targets by model-based matching using a hybrid GA, and the robot's motion is generated from the targets' positions in the image. The method is applied to a soccer robot, with a ball, a goal, and an opponent as the targets in the experiment. The experimental results show the robustness and reliability of the proposed method.
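
A toy Python sketch of GA-based position search conveys the flavor of the approach; the fitness function below is a placeholder for an image-based model-matching score, and the GA operators are simplified.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_POS = np.array([120.0, 80.0])          # hidden target position for this demo

def fitness(pos):
    """Placeholder for an image-based model-matching score (higher is better)."""
    return -np.linalg.norm(pos - TRUE_POS)

def ga_search(bounds=(320, 240), pop_size=30, generations=50):
    """Evolve candidate (x, y) positions toward the best-matching location."""
    pop = rng.uniform([0, 0], bounds, size=(pop_size, 2))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]        # keep the best half
        children = parents + rng.normal(0.0, 5.0, parents.shape)  # Gaussian mutation
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(p) for p in pop])]

print(ga_search())  # converges near TRUE_POS
```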

Implementing Augmented Reality By Using Face Detection, Recognition And Motion Tracking (얼굴 검출과 인식 및 모션추적에 의한 증강현실 구현)

  • Lee, Hee-Man
    • Journal of the Korea Society of Computer and Information, v.17 no.1, pp.97-104, 2012
  • Natural User Interface (NUI) technologies are introducing new trends in the use of computers and other electronic devices. In this paper, an augmented reality application is implemented on a mobile device using face detection, recognition, and motion tracking. Faces are detected in images from the front camera with the Viola-Jones algorithm, and the Eigenface algorithm is employed for face recognition and face motion tracking. The augmented reality is implemented by overlaying a 3D graphic object corresponding to the recognized face on the rear camera image, together with GPS and accelerometer sensor data. The choice of algorithms and methods is constrained by the mobile device's specifications, such as processing power and main memory capacity.
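
The detection and recognition front end can be sketched with OpenCV (the Eigenface recognizer requires the opencv-contrib-python package); the crop size and training data below are placeholders, and the AR overlay step is omitted.

```python
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")  # Viola-Jones
recognizer = cv2.face.EigenFaceRecognizer_create()                  # Eigenfaces

def detect_faces(gray):
    """Return (x, y, w, h) boxes for faces in a grayscale front-camera frame."""
    return detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def train(face_images, labels, size=(100, 100)):
    """Train the Eigenface model on equally sized face crops (placeholder data)."""
    recognizer.train([cv2.resize(img, size) for img in face_images],
                     np.array(labels))

def recognize(gray, box, size=(100, 100)):
    """Predict the identity of a detected face; the label selects the 3D overlay."""
    x, y, w, h = box
    return recognizer.predict(cv2.resize(gray[y:y + h, x:x + w], size))
```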

Hand Gesture Interface Using Mobile Camera Devices (모바일 카메라 기기를 이용한 손 제스처 인터페이스)

  • Lee, Chan-Su;Chun, Sung-Yong;Sohn, Myoung-Gyu;Lee, Sang-Heon
    • Journal of KIISE: Computing Practices and Letters, v.16 no.5, pp.621-625, 2010
  • This paper presents a hand motion tracking method for a hand gesture interface that uses the camera in mobile devices such as smartphones and PDAs. When the camera moves according to the user's hand gesture, a global optical flow is generated, so robust hand movement estimation is possible by taking the dominant optical flow from a histogram analysis of the motion directions. A continuous hand gesture is segmented into unit gestures by estimating the motion state from the motion phase, which is determined by the velocity and acceleration of the estimated hand motion. Feature vectors are extracted during the movement states, and hand gestures are recognized at the end state of each gesture. A support vector machine (SVM), a k-nearest neighbor classifier, and a normal Bayes classifier are used for classification. The SVM shows an 82% recognition rate for 14 hand gestures.
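
Below is a minimal sketch of the flow-histogram feature extraction and the SVM classifier, assuming OpenCV's Farneback dense optical flow as a stand-in for the paper's flow computation; the parameters and bin count are illustrative.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def dominant_flow_direction(prev_gray, curr_gray, bins=8):
    """Return the dominant motion direction and a magnitude-weighted direction histogram."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    angles = np.arctan2(flow[..., 1], flow[..., 0])
    mags = np.linalg.norm(flow, axis=-1)
    hist, edges = np.histogram(angles, bins=bins, range=(-np.pi, np.pi), weights=mags)
    return edges[np.argmax(hist)], hist / (hist.sum() + 1e-9)

# Per-gesture feature vectors (e.g., histograms stacked over the movement phase)
# are then fed to a classifier such as an SVM:
clf = SVC(kernel="rbf")
# clf.fit(train_features, train_labels); clf.predict(test_features)
```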

GripLaunch: a Novel Sensor-Based Mobile User Interface with Touch Sensing Housing

  • Chang, Wook;Park, Joon-Ah;Lee, Hyun-Jeong;Cho, Joon-Kee;Soh, Byung-Seok;Shim, Jung-Hyun;Yang, Gyung-Hye;Cho, Sung-Jung
    • International Journal of Fuzzy Logic and Intelligent Systems, v.6 no.4, pp.304-313, 2006
  • This paper describes a novel way of applying capacitive sensing technology to a mobile user interface. The key idea is to use the grip pattern, which is naturally produced when a user picks up the mobile device, as a clue for determining which application to launch. To this end, a capacitive touch sensing system is carefully designed and installed underneath the housing of the mobile device to capture the user's grip pattern. The captured data are then recognized by dedicated recognition algorithms. The feasibility of the proposed user interface system is thoroughly evaluated with various recognition tests.
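
As a hypothetical illustration (the paper's dedicated recognition algorithms are not described in the abstract), a grip-pattern vector read from the capacitive channels could be matched to stored templates like this; the channel count, templates, and application mapping are invented for the example.

```python
import numpy as np

# Invented templates: one normalized capacitance vector per grip / application
TEMPLATES = {
    "camera": np.array([0.9, 0.8, 0.1, 0.1, 0.7, 0.9]),  # two-handed landscape grip
    "phone":  np.array([0.2, 0.9, 0.9, 0.8, 0.1, 0.2]),  # one-handed portrait grip
}

def recognize_grip(reading):
    """Return the application whose stored grip template is nearest to the reading."""
    return min(TEMPLATES, key=lambda app: np.linalg.norm(reading - TEMPLATES[app]))

print(recognize_grip(np.array([0.85, 0.75, 0.2, 0.15, 0.6, 0.95])))  # -> "camera"
```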