• Title/Summary/Keyword: Movement Recognition


EEG Signals Measurement and Analysis Method for Brain-Computer Interface (뇌와 컴퓨터의 인터페이스를 위한 뇌파 측정 및 분석 방법)

  • Sim, Kwee-Bo;Yeom, Hong-Gi;Lee, In-Yong
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.5
    • /
    • pp.605-610
    • /
    • 2008
  • There are many methods for human-computer interfacing. Recently, many researchers have been studying brain signals, both because disabled people can use a computer by thought alone, without using their limbs, and because such interfaces are also convenient for general users; however, this research is still at an early stage. This paper proposes an EEG measurement and analysis method for a brain-computer interface. The purpose of the research is to recognize a subject's intention when they imagine moving their arms. EEG signals are recorded at electrode positions Fp1, Fp2, C3, and C4 while subjects imagine the arm movements. We analyze ERS (event-related synchronization) and ERD (event-related desynchronization) in the μ and β bands, which are observed when people move their limbs. The results show that during imagined movement of the right hand, μ-band power decreases and β-band power increases over the left hemisphere; likewise, during imagined movement of the left hand, μ-band power decreases and β-band power increases over the right hemisphere.
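
The paper does not include its analysis code; the following is a minimal sketch of how ERD/ERS can be quantified as a relative band-power change, assuming single-channel epochs, a 256 Hz sampling rate, and the conventional 8-12 Hz μ and 13-30 Hz β bands (the band edges and window lengths actually used by the authors are not stated).

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, band):
    """Average power spectral density of `signal` within `band` (Hz)."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), fs * 2))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def erd_ers_percent(baseline, task, fs, band):
    """ERD/ERS as relative power change from baseline to task period.
    Negative values indicate desynchronization (ERD), positive values
    indicate synchronization (ERS)."""
    p_ref = band_power(baseline, fs, band)
    p_task = band_power(task, fs, band)
    return 100.0 * (p_task - p_ref) / p_ref

# Hypothetical usage for a single channel at C3 (left hemisphere),
# with placeholder data standing in for rest and motor-imagery epochs.
fs = 256
rest = np.random.randn(fs * 4)
imagery = np.random.randn(fs * 4)
mu_change = erd_ers_percent(rest, imagery, fs, (8, 12))    # ERD expected (negative)
beta_change = erd_ers_percent(rest, imagery, fs, (13, 30)) # ERS expected (positive)
print(f"mu-band change: {mu_change:.1f}%  beta-band change: {beta_change:.1f}%")
```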

Implementation of Facility Movement Recognition Accuracy Analysis and Utilization Service using Drone Image (드론 영상 활용 시설물 이동 인식 정확도 분석 및 활용 서비스 구현)

  • Kim, Gwang-Seok;Oh, Ah-Ra;Choi, Yun-Soo
    • Journal of the Korean Institute of Gas
    • /
    • v.25 no.5
    • /
    • pp.88-96
    • /
    • 2021
  • Advanced Internet of Things (IoT) technology is being used in various ways for safety in the energy industry. At the center of these safety measures, drones take on a variety of roles on behalf of humans, reaching large-scale facilities and confined spaces that are difficult for people to inspect. In this study, recognition of the movement of hazardous facilities was tested using drone images: the movement recognition accuracy was 100%, the average data analysis accuracy was 95.8699%, and the average completeness was 100%. Based on these experimental results, a future-oriented facility risk analysis service combining drone imagery with ICT technology was implemented and presented. Further experiments under more diverse conditions and a fuller ICT convergence analysis system remain as future work.

A Study of an MEMS-based finger wearable computer input devices (MEMS 기반 손가락 착용형 컴퓨터 입력장치에 관한 연구)

  • Kim, Chang-su;Jung, Se-hyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2016.05a
    • /
    • pp.791-793
    • /
    • 2016
  • With the development of various sensor technologies and the spread of smartphones, general users increasingly encounter motion-recognition devices such as console game machines (e.g., the Nintendo Wii), and user demand for motion-recognition-based input devices is growing. Existing motion-recognition mice either attach sensors to the outside of a conventional mouse or modify the form of the mouse buttons (left, right, and wheel), with an internal acceleration sensor (or gyro sensor) controlling the cursor; because such devices are made compact, the buttons are difficult to operate, and motion recognition ends up being applied only to cursor pointing. Therefore, in this paper, MEMS-based motion recognition sensors are worn on two points of the body (the thumb and forefinger) to recognize their movements and generate motion data. The data is compared against a predetermined matching table (cursor movement and mouse-button events) to determine and generate control signals, and the resulting control signals are transmitted wirelessly to the computer as an input device.
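
The abstract describes only the overall pipeline (two finger-worn MEMS sensors, a matching table of cursor and button events, and wireless transmission); the sensor model, table contents, and radio protocol are not given. The sketch below illustrates the matching-table idea under assumed thresholds: thumb-and-forefinger tap patterns map to button events and forefinger tilt maps to cursor movement. All names and thresholds here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class FingerSample:
    """One reading from a finger-worn MEMS sensor (accelerations in g, assumed)."""
    ax: float
    ay: float
    az: float

TAP_THRESHOLD = 1.5   # g, assumed impact level that counts as a "tap"
TILT_GAIN = 20.0      # assumed pixels of cursor travel per g of tilt

def match_event(thumb: FingerSample, index: FingerSample) -> dict:
    """Compare the two finger readings against a matching table and
    return the control message to be sent over the wireless link."""
    if abs(thumb.az) > TAP_THRESHOLD and abs(index.az) > TAP_THRESHOLD:
        return {"type": "button", "button": "left", "action": "click"}
    if abs(thumb.az) > TAP_THRESHOLD:
        return {"type": "button", "button": "right", "action": "click"}
    # Otherwise interpret forefinger tilt as relative cursor movement.
    return {"type": "cursor", "dx": int(index.ax * TILT_GAIN),
            "dy": int(index.ay * TILT_GAIN)}

# Example: a simultaneous tap on both fingers becomes a left click.
print(match_event(FingerSample(0.1, 0.0, 2.0), FingerSample(0.0, 0.1, 1.8)))
```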


A Method of Hand Recognition for Virtual Hand Control of Virtual Reality Game Environment (가상 현실 게임 환경에서의 가상 손 제어를 위한 사용자 손 인식 방법)

  • Kim, Boo-Nyon;Kim, Jong-Ho;Kim, Tae-Young
    • Journal of Korea Game Society
    • /
    • v.10 no.2
    • /
    • pp.49-56
    • /
    • 2010
  • In this paper, we propose a method for controlling a virtual hand by recognizing the user's hand in a virtual reality game environment. The virtual hand is displayed on the game screen after the movement and direction of the user's hand have been obtained from camera images, so that hand movement can serve as an input interface for selecting and moving objects. As a vision-based hand recognition method, the proposed approach converts the input image from the RGB to the HSV color space and segments the hand region using double thresholds on the H and S channels followed by connected-component analysis. The center of gravity of the hand region is then computed from the zeroth and first moments of the segmented area. Since the center of gravity lies near the center of the hand, the pixels in the segmented region farthest from it can be recognized as fingertips, and the hand axis is obtained as the vector from the center of gravity to the fingertip. A history buffer and a bounding box are also used to increase recognition stability and performance. Experiments on various input images show that the hand recognition method provides a high level of accuracy and relatively fast, stable results.
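
The paper gives the pipeline (RGB-to-HSV conversion, double H/S thresholding, connected components, centroid from the zeroth and first moments, fingertip as the pixel farthest from the centroid) but not its threshold values or implementation. The OpenCV sketch below follows the same steps with assumed skin-tone thresholds, so it is an illustration of the approach rather than the authors' code.

```python
import cv2
import numpy as np

def recognize_hand(bgr_image, h_range=(0, 30), s_range=(40, 255)):
    """Segment the hand by H/S double thresholds, find its centroid from
    image moments, and take the farthest segmented pixel as the fingertip.
    The H/S ranges are assumed skin-tone values, not the paper's."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lower = np.array([h_range[0], s_range[0], 0], dtype=np.uint8)
    upper = np.array([h_range[1], s_range[1], 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)

    # Keep the largest connected component as the hand region.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if num < 2:
        return None
    hand_label = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    hand = np.uint8(labels == hand_label) * 255

    # Centroid from the zeroth and first moments.
    m = cv2.moments(hand, binaryImage=True)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]

    # Fingertip: segmented pixel farthest from the centroid;
    # the hand axis is the vector from centroid to fingertip.
    ys, xs = np.nonzero(hand)
    d = (xs - cx) ** 2 + (ys - cy) ** 2
    tip = (int(xs[d.argmax()]), int(ys[d.argmax()]))
    return {"centroid": (cx, cy), "fingertip": tip,
            "axis": (tip[0] - cx, tip[1] - cy)}
```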

Gesture Recognition based on Mixture-of-Experts for Wearable User Interface of Immersive Virtual Reality (몰입형 가상현실의 착용식 사용자 인터페이스를 위한 Mixture-of-Experts 기반 제스처 인식)

  • Yoon, Jong-Won;Min, Jun-Ki;Cho, Sung-Bae
    • Journal of the HCI Society of Korea
    • /
    • v.6 no.1
    • /
    • pp.1-8
    • /
    • 2011
  • As virtual reality has become a focus for providing immersive services, user interfaces for immersive interaction have been actively investigated. In this paper, we propose a gesture recognition based immersive user interface that uses an IR-LED-embedded helmet and data gloves to reflect the user's movements in the virtual environment effectively. The system recognizes the user's head movements with the IR-LED helmet and an IR signal transmitter, and recognizes hand gestures from the data gathered by the data gloves. Hand gestures are difficult to recognize accurately with a single general model because human hands have many articulations and users differ in hand size and movement, so we apply Mixture-of-Experts based gesture recognition to handle the varied hand gestures of multiple users accurately. The user's head movement changes the perspective in the virtual environment to match movement in the real world, and the hand gestures serve as inputs to the virtual environment. A head-mounted display (HMD) can be used with the proposed system to immerse the user in the virtual environment. To evaluate the usefulness of the proposed interface, we developed an interface for a virtual orchestra environment. The experiment verified that users can use the system easily and intuitively while being entertained.
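
The abstract names Mixture-of-Experts but not the expert models, gating features, or training procedure. The sketch below shows only the general structure under assumptions: a softmax gating network weights several small expert classifiers over a glove feature vector, and the combined posterior selects the gesture class. Dimensions and the randomly initialized parameters are placeholders, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 10   # assumed glove feature dimension (e.g. finger flex values)
N_CLASSES = 4     # assumed number of gesture classes
N_EXPERTS = 3     # assumed number of experts

# Randomly initialized parameters stand in for a trained model.
gate_w = rng.normal(size=(N_FEATURES, N_EXPERTS))
expert_w = rng.normal(size=(N_EXPERTS, N_FEATURES, N_CLASSES))

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def predict_gesture(x):
    """Mixture-of-Experts forward pass: each expert produces class
    probabilities and the gating network weights them per input."""
    gates = softmax(x @ gate_w)                                   # (N_EXPERTS,)
    expert_probs = softmax(np.einsum("f,efc->ec", x, expert_w))   # (N_EXPERTS, N_CLASSES)
    mixture = gates @ expert_probs                                # (N_CLASSES,)
    return int(mixture.argmax()), mixture

x = rng.normal(size=N_FEATURES)   # one placeholder glove reading
label, probs = predict_gesture(x)
print(label, np.round(probs, 3))
```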


Basic Principles of the 『Spleen-stomach theory』 by Li Dong-yuan (이동원(李東垣) 『비위론(脾胃論)』에 담겨 있는 생리기반이론)

  • Choi, Hee-Yun;Kim, Kwang-Joong
    • Journal of Physiology & Pathology in Korean Medicine
    • /
    • v.24 no.6
    • /
    • pp.911-920
    • /
    • 2010
  • The basic principles in the "Spleen-stomach theory (脾胃論)" set up the phases and roles of the spleen-stomach (脾胃) by establishing Earth (地 坤 土) and revealing the reality of the spleen-stomach (脾胃) of the human body, which has its own shape and form while exhibiting Heaven's reality. The meaning of Earth is based on the constant meaning of Earth in 'Earth Original-Earth as extended and stable ground (坤元一正之土)', which gives form and shape, and on Earth's movement with circulation; it then reveals itself as 'Earth as plowing land (耕種之土)', concerning both the application of the Five Phases and the physical characteristics of Earth. The Yin-Yang recognition of Earth is revealed as Yin Earth (陰土) and Yang Earth (陽土): the Spleen (脾) is established as Yin Earth (陰土) and the Stomach (胃) as Yang Earth (陽土). The seasonal assignment of Earth is Indian Summer (長夏), which is divided from Summer and becomes Heat (熱), and the Yin-Yang recognition of Earth takes on the meaning of center and border. According to Five Phasic recognition it becomes Earth (土), and in accordance with the Six Qi (六氣) it becomes Dampness (濕). 'Extreme Yin (至陰)' indicates the status of Qi, exposing the fundamental meaning of creating, changing, and propelling the Spleen-Stomach (脾胃) as characteristic Yin Earth. Earth comprehends the meaning of the 'Four Courses (四維)', recognizes them as the four Earth's Branches (辰戌丑未) among the twelve and as the terminals of the four seasons (四季之末), and thereby presides over the change of the four seasons. The theory of principle in the "Spleen-stomach theory (脾胃論)" stands on the basis of the 'Form Qi theory (形氣論)' and the 'Upbearing, Downbearing, Floating, and Sinking theory (升降浮沈論)'. It manifests the theory of movement in the interaction between Form (形) and Qi (氣), and 'Qi Interior Form Exterior (氣裏形表)' indicates that Qi (氣) moves interiorly and Form (形) exteriorly.

User interface of Home-Automation for the physically handicapped Person in wearable computing environment (웨어러블 환경에서의 수족사용 불능자를 위한 홈오토메이션 사용자 인터페이스)

  • Kang, Sun-Kyung;Kim, Young-Un;Han, Dae-Kyung;Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.13 no.5
    • /
    • pp.187-193
    • /
    • 2008
  • Interface technologies that let a user control a home automation system in a wearable computing environment have been studied recently. This paper proposes a new interface method that allows a disabled person to control a home automation system in a wearable computing environment by using an EOG sensing circuit and marker recognition. In the proposed method, the operations of a home network device are represented by human-readable markers displayed around the device. A user wearing an HMD, a video camera, and a computer selects the desired operation by viewing the markers and choosing one of them with eye movement on the HMD display. The requested operation is executed by sending the control command associated with the selected marker to the home network control device. With the EOG sensing circuit and the marker recognition system, a user who has difficulty moving their hands and feet can operate a home automation system with eye movement alone.
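
The abstract does not detail the EOG decision thresholds or the home-network protocol. The sketch below illustrates only the selection step under assumptions: horizontal EOG amplitude moves a highlight across the markers currently in view, and a sustained gaze (dwell) dispatches the command bound to the highlighted marker. The marker IDs, commands, and thresholds are hypothetical.

```python
# Hypothetical marker-to-command table: each marker placed around a device
# is bound to one home-network operation.
MARKER_COMMANDS = {
    "light_on":  {"device": "living_room_light", "op": "ON"},
    "light_off": {"device": "living_room_light", "op": "OFF"},
    "tv_power":  {"device": "tv", "op": "TOGGLE_POWER"},
}

EOG_STEP_THRESHOLD = 150.0   # assumed amplitude (uV) for one left/right eye step
DWELL_SAMPLES = 50           # assumed number of stable samples that confirm a selection

def select_marker(eog_horizontal, visible_markers):
    """Move a highlight across the visible markers with left/right eye
    movements and confirm the highlighted marker after a dwell period."""
    index, dwell = 0, 0
    for sample in eog_horizontal:
        if sample > EOG_STEP_THRESHOLD:          # look right: next marker
            index, dwell = (index + 1) % len(visible_markers), 0
        elif sample < -EOG_STEP_THRESHOLD:       # look left: previous marker
            index, dwell = (index - 1) % len(visible_markers), 0
        else:
            dwell += 1
            if dwell >= DWELL_SAMPLES:           # steady gaze confirms selection
                return visible_markers[index]
    return None

def dispatch(marker_id):
    """Send the command bound to the selected marker (stubbed as a print)."""
    command = MARKER_COMMANDS[marker_id]
    print(f"sending {command['op']} to {command['device']}")

# Placeholder EOG trace: one step to the right, then a steady dwell.
signal = [0.0] * 10 + [200.0] + [0.0] * 60
chosen = select_marker(signal, list(MARKER_COMMANDS))
if chosen:
    dispatch(chosen)
```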


A climbing movement detection system through efficient cow behavior recognition based on YOLOX and OC-SORT (YOLOX와 OC-SORT 기반의 효율적인 소 행동 인식을 통한 승가 운동 감지시스템)

  • LI YU;NamHo Kim
    • Smart Media Journal
    • /
    • v.12 no.7
    • /
    • pp.18-26
    • /
    • 2023
  • In this study, we propose a cow behavior recognition system based on YOLOX and OC-SORT. YOLOX detects targets in real time and provides information on cow location and behavior. The OC-SORT module tracks cows in the video and assigns unique IDs, and a quantitative analysis module analyzes the behavior and location information of the cows. Experimental results show that the system achieves high accuracy and precision in target detection and tracking: the average precision (AP) of YOLOX was 82.2%, the average recall (AR) was 85.5%, the number of parameters was 54.15M, and the computation was 194.16 GFLOPs. OC-SORT maintained high-precision real-time target tracking in complex environments and under occlusion. By analyzing changes in cow movement and the frequency of mounting behavior, the system can help discern the estrus behavior of cows more accurately.
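
The paper describes a detect-track-analyze pipeline. A minimal sketch of that flow is shown below, with placeholder `detector` and `tracker_update` callables standing in for YOLOX and OC-SORT (whose real APIs differ); it simply counts how many frames each tracked cow spends in a "mounting" behavior class, which is the kind of statistic the quantitative analysis module would use.

```python
from collections import Counter

def analyze_mounting(frames, detector, tracker_update):
    """Detect cows per frame, track them across frames, and count how many
    frames each track ID spends in the 'mounting' behavior class.
    `detector` and `tracker_update` are stand-ins for YOLOX and OC-SORT."""
    mounting_frames = Counter()
    for frame in frames:
        detections = detector(frame)                 # assumed: [(box, score, behavior), ...]
        for track_id, _box, behavior in tracker_update(detections):
            if behavior == "mounting":
                mounting_frames[track_id] += 1
    return mounting_frames

# Toy usage with stub components: one cow (track 0) is seen mounting in 2 of 3 frames.
frames = ["f0", "f1", "f2"]
stub_dets = {
    "f0": [((0, 0, 50, 50), 0.9, "mounting")],
    "f1": [((2, 1, 52, 51), 0.9, "mounting")],
    "f2": [((4, 2, 54, 52), 0.9, "standing")],
}
detector = lambda frame: stub_dets[frame]
tracker_update = lambda dets: [(0, box, behavior) for box, _s, behavior in dets]
print(analyze_mounting(frames, detector, tracker_update))   # Counter({0: 2})
```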

Development of Smart Tape Attachment Robot in the Cold Rolled Coil with 3D Non-Contact Recognition (3D 비접촉 인식을 이용한 냉연코일 테이프부착 로봇 개발)

  • Shin, Chan-Bai;Kim, Jin-Dae
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.11
    • /
    • pp.1122-1129
    • /
    • 2009
  • Recently, taping robots with smart recognition functions have been studied in the coil manufacturing field. Because 3D surface processing is difficult in this complicated working environment, it is not easy to accomplish smart tape attachment motion with a non-contact sensor. To solve these problems, an applicable surface recognition algorithm and a flexible sensing device are recommended. In this research, a fusion method combining a 1D displacement sensor and a 3D laser scanner is applied for robust tape attachment on cold rolled coils. With these sensors we develop a two-step exploration and a smart algorithm for recognizing the information of a non-aligned coil. In the proposed robot system for tape attachment, the problem is reduced first to searching for the coil's radius with the laser displacement sensor, and then to detecting its position and orientation with the 3D laser scanner. To obtain the movement in the robot's base frame, hand-eye compensation between the robot's end effector and the sensing device must also be carried out. In this paper, we examine the automatic coordinate transformation method in the calibration step for use in the real environment. The experimental results show that the taping motion of the robot was robust for non-aligned cold rolled coils.
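
The abstract states that hand-eye compensation between the end effector and the sensing device is used to express sensed points in the robot's base frame, but does not give the calibration values. The sketch below shows the standard chain of homogeneous transforms that this implies, with made-up example matrices rather than the paper's calibration results.

```python
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def sensor_point_to_base(p_sensor, T_base_ee, T_ee_sensor):
    """Express a point measured in the sensor frame in the robot base frame:
    p_base = T_base_ee @ T_ee_sensor @ p_sensor.
    T_ee_sensor is the hand-eye calibration result."""
    p = np.append(p_sensor, 1.0)                 # to homogeneous coordinates
    return (T_base_ee @ T_ee_sensor @ p)[:3]

# Example values (not from the paper): end effector 0.5 m above the base,
# sensor offset 0.1 m along the tool z-axis, no rotation for simplicity.
T_base_ee = homogeneous(np.eye(3), [0.0, 0.0, 0.5])
T_ee_sensor = homogeneous(np.eye(3), [0.0, 0.0, 0.1])
p_sensor = np.array([0.02, 0.00, 0.30])          # a point on the coil seen by the scanner
print(sensor_point_to_base(p_sensor, T_base_ee, T_ee_sensor))  # -> [0.02 0.   0.9 ]
```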

Design of the Multimodal Input System using Image Processing and Speech Recognition (음성인식 및 영상처리 기반 멀티모달 입력장치의 설계)

  • Choi, Won-Suk;Lee, Dong-Woo;Kim, Moon-Sik;Na, Jong-Whoa
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.8
    • /
    • pp.743-748
    • /
    • 2007
  • Recently, various types of camera mouse have been developed using image processing. The camera mouse shows limited performance compared to a traditional optical mouse in terms of response time and usability. These problems are caused by the mismatch between the size of the monitor and the active pixel area of the CMOS image sensor. To overcome these limitations, we designed a new input device that uses face recognition and speech recognition simultaneously. In the proposed system, the monitor area is partitioned into n zones. Face recognition is performed using a web camera, so that the mouse pointer follows the movement of the user's face within a particular zone, and the user can switch zones by speaking the name of the zone. The multimodal mouse is analyzed using the Keystroke-Level Model, and initial experiments were performed to evaluate the feasibility and performance of the proposed system.
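
The abstract describes the zone scheme (the monitor split into n zones, speech selecting the zone, face movement steering the pointer within it) but not the geometry or zone count. The sketch below maps a normalized face position into the currently selected zone under an assumed 2x2 grid of named zones on a 1920x1080 monitor; the names and grid size are illustrative only.

```python
# Assumed 2x2 grid of named zones on a 1920x1080 monitor; the paper's zone
# count and names are not specified.
MONITOR_W, MONITOR_H = 1920, 1080
ZONES = {                      # name -> (column, row) in the grid
    "top left": (0, 0), "top right": (1, 0),
    "bottom left": (0, 1), "bottom right": (1, 1),
}
GRID_COLS, GRID_ROWS = 2, 2

def pointer_position(zone_name, face_x, face_y):
    """Map a normalized face position (0..1 within the camera's active area)
    into absolute pointer coordinates inside the spoken zone."""
    col, row = ZONES[zone_name]
    zone_w, zone_h = MONITOR_W / GRID_COLS, MONITOR_H / GRID_ROWS
    x = col * zone_w + face_x * zone_w
    y = row * zone_h + face_y * zone_h
    return int(x), int(y)

# Speech recognition (stubbed) selects the zone; face tracking steers inside it.
print(pointer_position("bottom right", 0.25, 0.50))   # -> (1200, 810)
```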