• Title/Summary/Keyword: Human Vision

Event recognition of entering and exiting (출입 이벤트 인식)

  • Cui, Yaohuan;Lee, Chang-Woo
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2008.06a
    • /
    • pp.199-204
    • /
    • 2008
  • Visual surveillance has recently become an active topic in computer vision, and event detection and recognition is one of its important and useful applications. In this paper, we propose a new method to recognize entering and exiting events based on a person's movement features and the state of the door. Without using additional sensors, the proposed approach relies on a simple vision method that combines edge detection, motion history images, and geometrical characteristics of the human shape. The proposed method supports several applications, such as access control, in visual surveillance and computer vision.

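The abstract above combines edge detection, motion history images (MHI), and shape cues. Below is a minimal sketch of the MHI part only; the frame source, difference threshold, and decay length are illustrative assumptions rather than values from the paper.

```python
import cv2
import numpy as np

TAU = 15          # assumed: number of frames a motion trace persists
DIFF_THRESH = 30  # assumed: per-pixel intensity-change threshold

def update_mhi(mhi, prev_gray, gray):
    """Decay the motion history image, then stamp newly moving pixels."""
    moving = cv2.absdiff(gray, prev_gray) > DIFF_THRESH
    mhi = np.maximum(mhi - 1, 0)   # older motion fades out
    mhi[moving] = TAU              # fresh motion gets full weight
    return mhi

cap = cv2.VideoCapture("corridor.avi")   # assumed input clip
ok, frame = cap.read()
if not ok:
    raise SystemExit("could not read video")
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
mhi = np.zeros(prev.shape, dtype=np.int32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mhi = update_mhi(mhi, prev, gray)
    prev = gray
    # The paper additionally uses edge detection (door state) and the shape of
    # the moving blob; here we only track the overall motion energy.
    motion_energy = int((mhi > 0).sum())
```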

Vision-Based Finger Action Recognition by Angle Detection and Contour Analysis

  • Lee, Dae-Ho;Lee, Seung-Gwan
    • ETRI Journal
    • /
    • v.33 no.3
    • /
    • pp.415-422
    • /
    • 2011
  • In this paper, we present a novel vision-based method of recognizing finger actions for use in electronic appliance interfaces. Human skin is first detected by color and consecutive motion information. Then, fingertips are detected by a novel scale-invariant angle detection based on a variable k-cosine. Fingertip tracking is implemented by detected region-based tracking. By analyzing the contour of the tracked fingertip, fingertip parameters, such as position, thickness, and direction, are calculated. Finger actions, such as moving, clicking, and pointing, are recognized by analyzing these fingertip parameters. Experimental results show that the proposed angle detection can correctly detect fingertips, and that the recognized actions can be used for the interface with electronic appliances.
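
The k-cosine measure named in this abstract can be sketched directly on a contour: the angle at a point is taken between the two chords to points k steps away on either side, and sharp convex points are fingertip candidates. The fixed k and threshold below are illustrative assumptions; the paper's contribution is a scale-invariant, variable-k version.

```python
import numpy as np

def k_cosine(contour, i, k):
    """Cosine of the angle at contour[i] spanned by the points k steps away on
    either side; values near +1 indicate a sharp convex point (fingertip-like)."""
    p, a, b = contour[i], contour[i - k], contour[(i + k) % len(contour)]
    u, v = a - p, b - p
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def fingertip_candidates(contour, k=16, cos_thresh=0.7):
    """Indices of contour points whose k-cosine exceeds a threshold.
    k and cos_thresh are assumed values, not the paper's."""
    return [i for i in range(len(contour))
            if k_cosine(contour, i, k) > cos_thresh]

# contour: an (N, 2) array of (x, y) boundary points of a segmented hand,
# e.g. obtained from cv2.findContours on a skin-color mask.
```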

Development of a Color Stereo Head-Eye System with Vergence (눈동자 운동이 가능한 컬러 스테레오 머리-눈 시스템의 개발)

  • HwangBo, Myung;You, Bum-Jae;Oh, Sang-Rok;Lee, Jong-Won
    • Proceedings of the KIEE Conference
    • /
    • 1998.07g
    • /
    • pp.2370-2372
    • /
    • 1998
  • Recently we have developed an active stereo head-eye system with vergence, named KIST HECter (Head-Eye System with Colored Stereo Vision), based on an analysis of human neck and eye motion during visual behavior. HECter is a five-degree-of-freedom system composed of pan and tilt motion in the neck part and, in the eye part, independent vergence motion of the binocular cameras together with a commonly shared elevation axis. The stereo vision system provides two color images, each processed by a powerful TMS32080 vision board. The shape and size are designed to be almost the same as a human face. The vergence capability is of significant importance and offers many benefits. For its mechanical implementation we adopt a non-parallelogram four-bar linkage mechanism, since it provides high accuracy in transferring motion and enables a compact and flexible design.

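For context on the vergence capability discussed above, the symmetric fixation geometry is simple to compute. The baseline below is an assumed value, and this is only the ideal geometry, not the paper's four-bar-linkage kinematics.

```python
import math

def symmetric_vergence_angles(baseline_m, target_dist_m):
    """Inward rotation (radians) of each camera so that both optical axes
    intersect at a target straight ahead at the given distance."""
    half_angle = math.atan2(baseline_m / 2.0, target_dist_m)
    return half_angle, half_angle   # left and right cameras, symmetric case

# Example: a 0.12 m baseline (assumed) fixating a point 1 m away
left, right = symmetric_vergence_angles(0.12, 1.0)
print(math.degrees(left))   # about 3.4 degrees per camera
```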

A Computer Vision Approach for Identifying Acupuncture Points on the Face and Hand Using the MediaPipe Framework (MediaPipe Framework를 이용한 얼굴과 손의 경혈 판별을 위한 Computer Vision 접근법)

  • Hadi S. Malekroodi;Myunggi Yi;Byeong-il Lee
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.11a
    • /
    • pp.563-565
    • /
    • 2023
  • Acupuncture and acupressure apply needles or pressure to anatomical points for therapeutic benefit. The over 350 mapped acupuncture points in the human body can each treat various conditions, but anatomical variations make precisely locating these acupoints difficult. We propose a computer vision technique using the real-time hand and face tracking capabilities of the MediaPipe framework to identify acupoint locations. Our model detects anatomical facial and hand landmarks, and then maps these to corresponding acupoint regions. In summary, our proposed model facilitates precise acupoint localization for self-treatment and enhances practitioners' abilities to deliver targeted acupuncture and acupressure therapies.
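
Below is a minimal sketch of the landmark-to-acupoint idea using MediaPipe's public Hands solution. The choice of LI4 (Hegu) and the interpolation between landmarks 2 and 5 are assumptions for illustration, not the mapping used in the paper.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def approximate_li4(rgb_image):
    """Rough LI4 (Hegu) estimate as the midpoint of the thumb MCP (landmark 2)
    and index-finger MCP (landmark 5); the landmark pair and 0.5 weight are
    illustrative assumptions, not the paper's mapping."""
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        result = hands.process(rgb_image)
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark
    h, w = rgb_image.shape[:2]
    x = (lm[2].x + lm[5].x) / 2.0 * w
    y = (lm[2].y + lm[5].y) / 2.0 * h
    return int(x), int(y)

image = cv2.cvtColor(cv2.imread("hand.jpg"), cv2.COLOR_BGR2RGB)  # assumed input
print(approximate_li4(image))
```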

Vision steered micro robot for MIROSOT (화상처리에 의한 등곡률반경 방식의 로봇 제어)

  • 차승엽;김병수;김경태
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 1997.10a
    • /
    • pp.825-827
    • /
    • 1997
  • This paper presents a robot that is steered by a vision system. The proposed robot has an AM188ES CPU (5.3 MIPS) and two DC motors with encoders; it turns accurately at any speed and moves like a car steered by a human with a steering wheel. Only a steering angle value is sent to the robot, without specifying the speed. We present how to control this robot using our real-time vision system.

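A sketch of the steering-angle-only drive scheme the abstract describes for a differential-drive robot, in which both wheels follow arcs about one common turning centre (the "equal radius of curvature" in the Korean title). The bicycle-model mapping from steering angle to curvature and all dimensions are assumptions, not the paper's values.

```python
import math

WHEEL_TRACK = 0.075   # m, assumed track width of a MiroSot-class robot
WHEELBASE   = 0.075   # m, assumed; only used to map steering angle to curvature
BASE_SPEED  = 0.6     # m/s, assumed fixed forward speed

def wheel_speeds(steering_angle_rad):
    """Left/right wheel speeds for a robot commanded only by a steering angle:
    the angle is converted to a path curvature, and both wheels are scaled so
    they trace concentric arcs about the same turning centre."""
    curvature = math.tan(steering_angle_rad) / WHEELBASE   # 1/R (assumed model)
    v_left  = BASE_SPEED * (1.0 - WHEEL_TRACK * curvature / 2.0)
    v_right = BASE_SPEED * (1.0 + WHEEL_TRACK * curvature / 2.0)
    return v_left, v_right

print(wheel_speeds(math.radians(10)))   # gentle left turn
```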

3D Vision Inspection Algorithm Using the Geometrical Pattern Matching (기하학적 패턴 매칭을 이용한 3차원 비전 검사 알고리즘)

  • 정철진;허경무
    • Proceedings of the IEEK Conference
    • /
    • 2003.07c
    • /
    • pp.2533-2536
    • /
    • 2003
  • In this paper, we propose a 3D vision inspection algorithm that is based on external shape features and is able to recognize objects. Because many man-made objects have regular shapes, if we possess a database of object patterns and recognize objects against it, we can inspect objects in many fields. Thus, this paper proposes a 3D vision inspection algorithm that applies geometrical pattern matching to a 3D database.

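A toy sketch of matching measured external-shape features against a pattern database, in the spirit of the abstract; the feature choice, database entries, and tolerance are invented for illustration and are not from the paper.

```python
import numpy as np

# Assumed toy database: named patterns with external-shape features
# (width, length, height in mm). The real system would build these from 3D data.
PATTERN_DB = {
    "part_A": np.array([5.0, 5.0, 1.2]),
    "part_B": np.array([7.0, 7.0, 1.6]),
}

def match_pattern(measured, tol=0.2):
    """Return the database pattern closest to the measured feature vector,
    or None when the worst per-dimension error exceeds the tolerance
    (i.e. the inspected object fails)."""
    best, best_err = None, float("inf")
    for name, ref in PATTERN_DB.items():
        err = float(np.max(np.abs(measured - ref)))
        if err < best_err:
            best, best_err = name, err
    return (best, best_err) if best_err <= tol else (None, best_err)

print(match_pattern(np.array([5.05, 4.95, 1.25])))   # matches part_A
```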

Monocular 3D Vision Unit for Correct Depth Perception by Accommodation

  • Hosomi, Takashi;Sakamoto, Kunio;Nomura, Shusaku;Hirotomi, Tetsuya;Shiwaku, Kuninori;Hirakawa, Masahito
    • Proceedings of the Korean Information Display Society Conference
    • /
    • 2009.10a
    • /
    • pp.1334-1337
    • /
    • 2009
  • The human vision system has visual functions for viewing 3D images with correct depth. These functions are called accommodation, vergence, and binocular stereopsis. Most 3D display systems utilize binocular stereopsis. The authors have developed a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.

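For context on the depth cues named in the abstract, accommodation and vergence demand follow directly from the viewing distance; the interpupillary distance below is an assumed typical value, not a figure from the paper.

```python
import math

IPD_M = 0.063   # assumed interpupillary distance in metres

def depth_cues(viewing_distance_m):
    """Accommodation demand (dioptres) and vergence demand (degrees) for a
    point at the given distance, using the standard optics relations."""
    accommodation_d = 1.0 / viewing_distance_m
    vergence_deg = math.degrees(2.0 * math.atan2(IPD_M / 2.0, viewing_distance_m))
    return accommodation_d, vergence_deg

print(depth_cues(0.5))   # about (2.0 D, 7.2 degrees)
```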

Attitude Estimation for the Biped Robot with Vision and Gyro Sensor Fusion (비전 센서와 자이로 센서의 융합을 통한 보행 로봇의 자세 추정)

  • Park, Jin-Seong;Park, Young-Jin;Park, Youn-Sik;Hong, Deok-Hwa
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.17 no.6
    • /
    • pp.546-551
    • /
    • 2011
  • A tilt sensor is required to control the attitude of a biped robot when it walks on uneven terrain. A vision sensor, which is normally used for recognizing humans or detecting obstacles, can also serve as a tilt sensor by comparing the current image with a reference image. However, a vision sensor alone has technological limitations for controlling a biped robot, such as a low sampling frequency and an estimation time delay. To verify these limitations, an experimental inverted pendulum setup, which represents the pitch motion of a walking or running robot, is used, and it is shown that a vision sensor alone cannot control the inverted pendulum, mainly because of the time delay. In this paper, to overcome the limitations of the vision sensor, a Kalman filter for multi-rate sensor fusion is applied together with a low-quality gyro sensor. This resolves the limitations of the vision sensor and also eliminates the drift of the gyro sensor. Through the inverted pendulum control experiment, it is found that the tilt estimation performance of the fused sensors is improved enough to control the attitude of the inverted pendulum.
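
Below is a much-simplified, single-axis sketch of the multi-rate fusion idea: the gyro rate drives the prediction at every step, and the slower vision tilt measurement corrects the estimate when it arrives. The noise values and sampling rates are assumptions, and the vision time-delay compensation that the paper addresses is omitted.

```python
class TiltFusion:
    """Single-axis, scalar Kalman-style fusion of a high-rate gyro and a
    low-rate vision tilt measurement (assumed noise values, no delay model)."""

    def __init__(self, q=1e-4, r=1e-2):
        self.theta = 0.0   # estimated tilt angle [rad]
        self.p = 1.0       # estimate variance
        self.q, self.r = q, r

    def predict(self, gyro_rate, dt):
        """High-rate step driven by integrating the gyro rate."""
        self.theta += gyro_rate * dt
        self.p += self.q

    def update(self, vision_tilt):
        """Low-rate correction from the vision-based tilt measurement."""
        k = self.p / (self.p + self.r)           # Kalman gain
        self.theta += k * (vision_tilt - self.theta)
        self.p *= 1.0 - k

# Assumed rates: 1 kHz gyro, 10 Hz vision; synthetic constant tilt rate.
fusion = TiltFusion()
for step in range(1000):
    fusion.predict(gyro_rate=0.01, dt=0.001)
    if step % 100 == 99:
        fusion.update(vision_tilt=0.01 * (step + 1) * 0.001)
print(fusion.theta)
```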

Human Detection in the Images of a Single Camera for a Corridor Navigation Robot (복도 주행 로봇을 위한 단일 카메라 영상에서의 사람 검출)

  • Kim, Jeongdae;Do, Yongtae
    • The Journal of Korea Robotics Society
    • /
    • v.8 no.4
    • /
    • pp.238-246
    • /
    • 2013
  • In this paper, a robot vision technique is presented to detect obstacles, particularly approaching humans, in images acquired by a mobile robot that autonomously navigates a narrow building corridor. A single low-cost color camera is attached to the robot, and a trapezoidal area in front of the robot is set as a region of interest (ROI) in the camera image. The lower parts of a human, such as the feet and legs, are first detected in the ROI in real time from their appearance as the distance between the robot and the human decreases. The human detection is then confirmed by detecting his or her face within a small search region specified above the part detected in the trapezoidal ROI. To increase the credibility of detection, a final decision is made only when a face is detected in two consecutive image frames. We tested the proposed method using images of various people in corridor scenes and obtained promising results. This method can be used by a vision-guided mobile robot to make a detour and avoid collision with a human during indoor navigation.
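
A sketch of the two-consecutive-frame confirmation logic described above, with an off-the-shelf Haar-cascade face detector standing in for whichever detector the paper uses; the search-window size above the leg box is an illustrative assumption.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_above(gray, leg_box, search_h=120):
    """Look for a face in a window directly above a detected leg/foot box;
    the window height is an assumed value."""
    x, y, w, h = leg_box
    y0 = max(0, y - search_h)
    roi = gray[y0:y, x:x + w]
    if roi.size == 0:
        return False
    return len(face_cascade.detectMultiScale(roi, 1.1, 4)) > 0

def confirm_human(detections):
    """detections: iterable of (gray_frame, leg_box) pairs, one per frame.
    A human is confirmed only when a face is also found in two consecutive
    frames, as the abstract describes."""
    consecutive = 0
    for gray, leg_box in detections:
        consecutive = consecutive + 1 if face_above(gray, leg_box) else 0
        if consecutive >= 2:
            return True
    return False
```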