• Title/Summary/Keyword: vision-based control

Interface of Interactive Contents using Vision-based Body Gesture Recognition (비전 기반 신체 제스처 인식을 이용한 상호작용 콘텐츠 인터페이스)

  • Park, Jae Wan; Song, Dae Hyun; Lee, Chil Woo
    • Smart Media Journal, v.1 no.2, pp.40-46, 2012
  • In this paper, we describe interactive content that uses vision-based body gesture recognition as its input interface. Because the content takes as its subject the imp, a figure common to Asian folk culture, players can enjoy it with cultural familiarity. Since players use their own gestures to fight the imp in the game, they are naturally absorbed in it, and they can choose among multiple endings at the end of the scenario. For gesture recognition, a Kinect sensor is used to obtain the three-dimensional coordinates of each limb joint and to capture the static poses that make up an action. Vision-based 3D human pose recognition is a standard method for conveying human gestures in HCI (Human-Computer Interaction). Recognition based on a 2D pose model can handle only simple 2D poses in particular environments; a 3D pose model, which describes the 3D skeletal structure of the human body, can recognize more complex poses because it can use joint angles and the shape information of body parts. Because a gesture can be represented as a sequence of static poses, we recognize gestures composed of such poses using an HMM. Using the gesture recognition result as the input interface, the content can be controlled naturally with the user's gestures alone, and we aim to improve immersion and interest through real-time interaction between the user and the imp.
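
Since a gesture here is a sequence of static poses scored by an HMM, the core computation is a forward-algorithm likelihood. Below is a minimal sketch of that step, assuming each Kinect frame has already been quantized to a discrete pose symbol (e.g. by nearest-neighbour matching on joint angles); the two-state model and all parameters are toy values, not the paper's.

```python
import numpy as np

def forward_log_likelihood(obs, start_p, trans_p, emit_p):
    """Log-likelihood of an observation sequence under a discrete HMM."""
    alpha = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for o in obs[1:]:
        # log-sum-exp over previous states, for numerical stability
        alpha = np.log(emit_p[:, o]) + np.logaddexp.reduce(
            alpha[:, None] + np.log(trans_p), axis=0)
    return np.logaddexp.reduce(alpha)

# Toy 2-state model for one gesture; states emit one of 3 pose symbols.
start = np.array([0.9, 0.1])
trans = np.array([[0.7, 0.3],
                  [0.2, 0.8]])
emit = np.array([[0.8, 0.1, 0.1],   # state 0 mostly emits pose 0
                 [0.1, 0.2, 0.7]])  # state 1 mostly emits pose 2

pose_sequence = [0, 0, 1, 2, 2]     # quantized poses from the skeleton
print(forward_log_likelihood(pose_sequence, start, trans, emit))
```

In practice one such model is trained per gesture and an observed sequence is assigned to the model with the highest likelihood.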


Localization of Unmanned Ground Vehicle using 3D Registration of DSM and Multiview Range Images: Application in Virtual Environment (DSM과 다시점 거리영상의 3차원 등록을 이용한 무인이동차량의 위치 추정: 가상환경에서의 적용)

  • Park, Soon-Yong; Choi, Sung-In; Jang, Jae-Seok; Jung, Soon-Ki; Kim, Jun; Chae, Jeong-Sook
    • Journal of Institute of Control, Robotics and Systems, v.15 no.7, pp.700-710, 2009
  • A computer vision technique for estimating the location of an unmanned ground vehicle is proposed. Identifying the location of an unmanned vehicle is a very important task for its automatic navigation. Conventional positioning sensors may fail to work properly in some real situations due to internal and external interference. Given a DSM (Digital Surface Map), the location of the vehicle can be estimated by registering the DSM with multiview range images obtained at the vehicle. Registration of the DSM and range images yields the 3D transformation from the coordinates of the range sensor to the reference coordinates of the DSM. To estimate the vehicle position, we first register a range image to the DSM coarsely and then refine the result. For coarse registration, we employ a fast random sample matching method. After the initial position is estimated and refined, all subsequent range images are registered by applying a pair-wise registration technique between range images. To reduce the accumulated error of pair-wise registration, we periodically refine the registration between the range images and the DSM. A virtual environment was established to perform several experiments using a virtual vehicle. Range images were created from the DSM by modeling a real 3D sensor. The vehicle moved along three different paths while acquiring range images. Experimental results show that the average registration error is under 1.3 m.
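
The registration pipeline described above has a simple control structure: align each new scan pair-wise to the previous one, accumulate the pose, and periodically re-anchor against the DSM to bound drift. The sketch below shows that structure with a plain Kabsch/SVD rigid fit standing in for the paper's random-sample matching and ICP refinement; `scan_stream()` is a hypothetical generator and the point sets are assumed index-matched.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def homogeneous(R, t):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

pose = np.eye(4)                      # sensor pose in DSM coordinates
REFINE_EVERY = 10                     # period of DSM re-registration

# scan_stream() is a hypothetical generator yielding index-matched point
# sets; a real system would obtain correspondences via ICP or the paper's
# random-sample matching rather than assume them.
for k, (prev_scan, scan, dsm_pts) in enumerate(scan_stream()):
    R, t = rigid_fit(scan, prev_scan)             # pair-wise registration
    pose = pose @ homogeneous(R, t)
    if k % REFINE_EVERY == 0:                     # periodic DSM refinement
        in_dsm = (pose[:3, :3] @ scan.T).T + pose[:3, 3]
        Rg, tg = rigid_fit(in_dsm, dsm_pts)
        pose = homogeneous(Rg, tg) @ pose         # drift correction
```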

Design of Safe Autonomous Navigation System for Deployable Bio-inspired Robot (전개형 생체모방로봇을 위한 안전한 자율주행시스템 설계)

  • Choi, Keun Ha; Han, Sang Kwon; Lee, Jinyi; Lee, Jin Woo; Ahn, Jung Do; Kim, Kyung-Soo; Kim, Soohyun
    • Journal of Institute of Control, Robotics and Systems, v.20 no.4, pp.456-462, 2014
  • In this paper, we present a deployable bio-inspired robot called the Pillbot-light, which uses a safe autonomous navigation system. The Pillbot-light is mounted on a station robot and can be operated in disaster relief or military operations. However, autonomous navigation is a challenge because the Pillbot-light cannot be equipped with a full sensor suite. We therefore propose a new robot system for autonomous navigation in which the station robot, equipped with a vision camera and a high-performance CPU, controls the Pillbot-light. The system detects obstacles based on edge extraction from the vision camera. It achieves not only path planning using a hazard cost function but also localization using a particle filter. The system is verified by simulation and experiment.
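
A rough sketch of the edge-based obstacle detection step, using OpenCV's Canny detector; the blur kernel, thresholds, lower-half region of interest, and area cutoff are illustrative choices, not values from the paper.

```python
import cv2

def detect_obstacles(frame_bgr):
    """Return bounding boxes of obstacle candidates in the lower image half
    (coordinates are relative to that lower-half region of interest)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)       # suppress sensor noise
    edges = cv2.Canny(gray, 50, 150)               # edge extraction
    roi = edges[edges.shape[0] // 2:, :]           # ground ahead of the robot
    contours, _ = cv2.findContours(roi, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) > 200]           # ignore tiny fragments
```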

Vision-based Motion Control for the Immersive Interaction with a Mobile Augmented Reality Object (모바일 증강현실 물체와 몰입형 상호작용을 위한 비전기반 동작제어)

  • Chun, Jun-Chul
    • Journal of Internet Computing and Services, v.12 no.3, pp.119-129, 2011
  • Vision-based human-computer interaction is an emerging field of science and industry that provides a natural way for humans and computers to communicate. In particular, the recent increasing demand for mobile augmented reality requires efficient interaction technologies between augmented virtual objects and users. This paper presents a novel approach to constructing and controlling a marker-less mobile augmented reality object. Replacing a traditional marker, the human hand serves as the interface of the marker-less mobile augmented reality system. To implement the system within the limited resources of a mobile device compared with a desktop environment, we propose a method that extracts an optimal hand region, which plays the role of the marker, and augments the object in real time using the camera attached to the mobile device. Optimal hand region detection consists of detecting the hand region with a YCbCr skin color model and extracting the optimal rectangular region with the Rotating Calipers algorithm. The extracted optimal rectangular region takes the role of a traditional marker. The proposed method resolves the problem of losing track of the fingertips when the hand is rotated or occluded in hand-marker systems. The experiments show that the proposed framework can effectively construct and control augmented virtual objects in mobile environments.
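
The optimal-hand-region step maps naturally onto OpenCV primitives: a skin mask in YCbCr (YCrCb in OpenCV's channel order) followed by `cv2.minAreaRect`, which computes the minimum-area rotated rectangle via the rotating-calipers idea the paper cites. The Cr/Cb gate below uses common illustrative thresholds, not the paper's values.

```python
import cv2
import numpy as np

def hand_marker_rect(frame_bgr):
    """Return the minimum-area rotated rectangle around the largest skin
    blob, or None if no skin region is found."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # Cr/Cb gate
    skin = cv2.morphologyEx(skin, cv2.MORPH_OPEN,
                            np.ones((5, 5), np.uint8))         # de-speckle
    contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    # ((cx, cy), (w, h), angle): enough to pose the augmented object.
    return cv2.minAreaRect(hand)
```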

On-the-go Nitrogen Sensing and Fertilizer Control for Site-specific Crop Management

  • Kim, Y.; Reid, J.F.; Han, S.
    • Agricultural and Biosystems Engineering, v.7 no.1, pp.18-26, 2006
  • In-field site-specific nitrogen (N) management increases crop yield, reduces N application to minimize the risk of nitrate contamination of ground water, and thus reduces farming cost. Real-time N sensing and fertilization are required for efficient N management. An 'on-the-go' site-specific N management system was developed and evaluated for supplemental N application to corn (Zea mays L.). This real-time N sensing and fertilization system monitored and assessed N fertilization needs using a vision-based spectral sensor and applied the appropriate variable N rate according to the N deficiency level estimated from the spectral signature of the crop canopy. Sensor inputs included ambient illumination, camera parameters, and image histograms of three spectral regions (red, green, and near-infrared). The real-time sensor-based supplemental N treatment improved crop N status and increased yield in most plots. The largest yield increase was achieved in plots with a low initial N treatment combined with supplemental variable-rate application. In plots where N was applied latest in the season, the supplemental N had a reduced impact on yield. For plots with no supplemental N application, yield increased gradually with initial N treatment, but any N application above 101 kg/ha had minimal impact on yield.
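
As a hedged illustration of the sensing-to-rate mapping, the sketch below computes a green-normalized vegetation index from mean band intensities and interpolates a supplemental rate from the estimated deficiency. The index choice, the calibration reference, and the 101 kg/ha cap (echoing the yield plateau reported above) are assumptions for illustration, not the paper's calibration.

```python
import numpy as np

def gndvi(nir_mean, green_mean):
    """Green NDVI from mean band intensities of the canopy image."""
    return (nir_mean - green_mean) / (nir_mean + green_mean + 1e-9)

def n_rate_kg_ha(index, well_fertilized_index):
    """Lower index relative to a well-fertilized reference strip means
    higher deficiency, hence a higher supplemental rate."""
    sufficiency = np.clip(index / well_fertilized_index, 0.0, 1.0)
    return (1.0 - sufficiency) * 101.0   # cap near the reported plateau

rate = n_rate_kg_ha(gndvi(0.52, 0.31), well_fertilized_index=0.35)
print(f"commanded rate: {rate:.1f} kg/ha")
```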


AdaBoost-based Real-Time Face Detection & Tracking System (AdaBoost 기반의 실시간 고속 얼굴검출 및 추적시스템의 개발)

  • Kim, Jeong-Hyun; Kim, Jin-Young; Hong, Young-Jin; Kwon, Jang-Woo; Kang, Dong-Joong; Lho, Tae-Jung
    • Journal of Institute of Control, Robotics and Systems, v.13 no.11, pp.1074-1081, 2007
  • This paper presents a method for real-time face detection and tracking that combines the AdaBoost and CAMShift algorithms. AdaBoost is a method that selects important features, called weak classifiers, from among many possible image features by tuning the weight of each feature over learning candidates. Despite its excellent detection performance, the algorithm's computing time is very high because windows at multiple scales must be searched across the image region, so direct application of the method is not easy for real-time tasks in multi-task OS, robot, and mobile environments. CAMShift, an improvement of the Mean-shift algorithm for video streaming environments, tracks the object of interest at high speed based on the hue values of the target region, but its detection performance degrades under dynamic illumination. We propose a combined AdaBoost and CAMShift method that improves computing speed while maintaining good face detection performance. The method is validated on real image sequences containing single and multiple faces.
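
The combination reduces to a detect-then-track loop: a boosted-cascade detector initializes a hue-histogram CamShift tracker so the expensive detector need not run every frame. The sketch below uses OpenCV's stock Haar cascade (a boosted cascade in the same family as the paper's AdaBoost detector); the re-detection policy and histogram settings are illustrative choices.

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
track_window, hist = None, None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    if track_window is None:                       # slow detection phase
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, 1.2, 5)
        if len(faces):
            x, y, w, h = faces[0]
            track_window = (x, y, w, h)
            roi = hsv[y:y + h, x:x + w]            # hue histogram of the face
            hist = cv2.calcHist([roi], [0], None, [16], [0, 180])
            cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    else:                                          # fast tracking phase
        backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        box, track_window = cv2.CamShift(backproj, track_window, term)
        if track_window[2] == 0 or track_window[3] == 0:
            track_window = None                    # target lost: re-detect
```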

Localization of a Monocular Camera using a Feature-based Probabilistic Map (특징점 기반 확률 맵을 이용한 단일 카메라의 위치 추정방법)

  • Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Myung, Hyun
    • Journal of Institute of Control, Robotics and Systems, v.21 no.4, pp.367-371, 2015
  • In this paper, a novel localization method for a monocular camera using a feature-based probabilistic map is proposed. Camera localization is generally estimated from 3D-to-2D correspondences between a 3D map and the image plane through the PnP algorithm. In the computer vision community, an accurate 3D map for camera pose estimation is generated by optimization over a large image dataset. In the robotics community, the camera pose is estimated by probabilistic approaches when features are scarce, and an extra system is needed because the camera alone cannot estimate the full state of the robot pose. We therefore propose an accurate localization method for a monocular camera that uses a probabilistic approach in the case of an insufficient image dataset, without any extra system. In our system, features from a probabilistic map are projected onto the image plane using a linear approximation. By minimizing the Mahalanobis distance between the features projected from the probabilistic map and the features extracted from a query image, the pose of the monocular camera is refined from an initial pose obtained by the PnP algorithm. The proposed algorithm is demonstrated through simulations in 3D space.
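
The two steps of the method, an initial PnP pose followed by Mahalanobis-distance minimization against the probabilistic map, can be sketched as follows. scipy's optimizer stands in for whatever solver the paper uses, each map feature is assumed to carry a 2x2 image-space covariance, and the inputs are assumed to be float64 arrays with at least four correspondences.

```python
import cv2
import numpy as np
from scipy.optimize import least_squares

def residuals(pose, map_pts, inv_cov_sqrt, img_pts, K):
    rvec, tvec = pose[:3], pose[3:]
    proj, _ = cv2.projectPoints(map_pts, rvec, tvec, K, None)
    err = proj.reshape(-1, 2) - img_pts            # pixel residuals (N, 2)
    # Whitening each residual makes least-squares minimize the total
    # squared Mahalanobis distance.
    return (inv_cov_sqrt @ err[..., None]).ravel()

def localize(map_pts, feat_cov, img_pts, K):
    """feat_cov: (N, 2, 2) per-feature image-space covariances."""
    ok, rvec, tvec = cv2.solvePnP(map_pts, img_pts, K, None)   # initial pose
    inv_cov_sqrt = np.linalg.inv(np.linalg.cholesky(feat_cov))
    x0 = np.hstack([rvec.ravel(), tvec.ravel()])
    sol = least_squares(residuals, x0,
                        args=(map_pts, inv_cov_sqrt, img_pts, K))
    return sol.x[:3], sol.x[3:]                    # refined rvec, tvec
```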

Improvement of Gesture Recognition using 2-stage HMM (2단계 히든마코프 모델을 이용한 제스쳐의 성능향상 연구)

  • Jung, Hwon-Jae; Park, Hyeonjun; Kim, Donghan
    • Journal of Institute of Control, Robotics and Systems, v.21 no.11, pp.1034-1037, 2015
  • In recent years, various methods have been developed in robotics to create an intimate relationship between people and robots, including speech, vision, and biometrics recognition as well as gesture-based interaction. These recognition technologies are used in various wearable devices, smartphones, and other electronic devices for convenience. Among them, gesture recognition is the most commonly used and most appropriate technology for wearable devices, and it can be classified as contact or non-contact gesture recognition. This paper proposes contact gesture recognition with IMU and EMG sensors using the hidden Markov model (HMM) twice. Several simple motions form main gestures through the first-stage HMM, which follows the standard HMM process well known in pattern recognition. The sequence of main gestures produced by the first-stage HMM then forms higher-order gestures through the second-stage HMM. In this way, more natural and intelligent gestures can be implemented from simple gestures. This layered process can play a larger role in gesture-recognition-based UX for many wearable and smart devices.
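
A structural sketch of the two-stage chaining: stage one maps raw IMU/EMG windows to main-gesture symbols, and stage two runs a discrete HMM over that symbol sequence to recognize higher-order gestures. The stage-one classifier and all model parameters here are stand-ins, not the paper's trained models.

```python
import numpy as np

MAIN_GESTURES = ["raise", "swipe", "hold"]         # stage-one vocabulary

def stage_one(window):
    """Stand-in for the first-stage HMM: classify one IMU/EMG window."""
    return int(np.argmax(window.mean(axis=0)))     # toy rule, not a real model

def stage_two_score(symbols, start, trans, emit):
    """Forward probability of a main-gesture symbol sequence under one
    higher-order gesture HMM (discrete observations)."""
    alpha = start * emit[:, symbols[0]]
    for s in symbols[1:]:
        alpha = (alpha @ trans) * emit[:, s]
    return alpha.sum()

# Toy higher-order model over the 3 main-gesture symbols.
start = np.array([1.0, 0.0])
trans = np.array([[0.6, 0.4],
                  [0.3, 0.7]])
emit = np.array([[0.7, 0.2, 0.1],
                 [0.1, 0.3, 0.6]])

windows = [np.random.rand(50, 3) for _ in range(6)]   # fake sensor stream
symbols = [stage_one(w) for w in windows]
print(symbols, stage_two_score(symbols, start, trans, emit))
# Evaluate every higher-order gesture model and pick the best-scoring one.
```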

Detection of Pulmonary Region in Medical Images through Improved Active Contour Model

  • Kwon Yong-Jun; Won Chul-Ho; Kim Dong-Hun; Kim Pil-Un; Park Il-Yong; Park Hee-Jun; Lee Jyung-Hyun; Kim Myoung-Nam; Cho Jin-Ho
    • Journal of Biomedical Engineering Research, v.26 no.6, pp.357-363, 2005
  • Active contour models have been extensively used to segment, match, and track objects of interest in computer vision and image processing applications, particularly to locate object boundaries. With conventional methods, an object boundary can be extracted by controlling the internal and external energy based on energy minimization. However, a number of problems remain, such as initialization and poor convergence in concave regions. In particular, a contour is unable to enter a concave region because of the stretching and bending characteristics of the internal energy. This study therefore proposes a method that controls the internal energy by moving each control point along the local perpendicular bisector of its neighbours on the contour, and determines the object boundary by minimizing the total energy relative to the external energy. Convergence in concave regions can then be achieved effectively for the feature of interest, and several objects can be detected from a single initial contour using a multi-detection method. The proposed method is compared with conventional methods through objective validation and subjective assessment. The results suggest that the proposed method can be efficiently applied to the detection of the pulmonary parenchyma region in medical images.
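
The proposed control-point update can be sketched as a greedy search along each point's local perpendicular direction: candidates are scored by a smoothness (internal) term against the bisector midpoint plus an image (external) term, and the lowest-energy candidate wins. The energy weights and the gradient-based external energy are generic snake choices assumed here, not the paper's exact formulation.

```python
import numpy as np

def step_contour(points, ext_energy, offsets=(-2, -1, 0, 1, 2), alpha=0.5):
    """One greedy update of a closed contour; points is an (N, 2) int array
    of (x, y) control points, ext_energy an image-sized array (e.g. -|grad I|)."""
    h, w = ext_energy.shape
    new_pts = points.copy()
    n = len(points)
    for i in range(n):
        prev_pt, next_pt = points[i - 1], points[(i + 1) % n]
        chord = next_pt - prev_pt
        normal = np.array([-chord[1], chord[0]], float)
        normal /= np.linalg.norm(normal) + 1e-9    # perpendicular direction
        mid = (prev_pt + next_pt) / 2.0            # bisector foot point
        best, best_e = points[i], np.inf
        for d in offsets:                          # search along the normal
            cand = np.rint(points[i] + d * normal).astype(int)
            cand[0] = np.clip(cand[0], 0, w - 1)
            cand[1] = np.clip(cand[1], 0, h - 1)
            e_int = alpha * np.sum((cand - mid) ** 2)   # smoothness term
            e_ext = ext_energy[cand[1], cand[0]]        # image term
            if e_int + e_ext < best_e:
                best, best_e = cand, e_int + e_ext
        new_pts[i] = best
    return new_pts
```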

Multi-sensor Fusion based Autonomous Return of SUGV (다중센서 융합기반 소형로봇 자율복귀에 대한 연구)

  • Choi, Ji-Hoon; Kang, Sin-Cheon; Kim, Jun; Shim, Sung-Dae; Jee, Tae-Yong; Song, Jae-Bok
    • Journal of the Korea Institute of Military Science and Technology, v.15 no.3, pp.250-256, 2012
  • Unmanned ground vehicles may be operated by a remote control unit through wireless communication, or autonomously. However, autonomous technology is still challenging and not yet fully developed, and wireless communication is not always available. If the wireless link is abruptly disconnected, the UGV becomes nothing but a lump of junk; worse, it can be captured by the enemy. This paper suggests an autonomous return technology with which the UGV can go back to a safer position along the reverse path on its own. The suggested technology is based on the creation and matching of a multi-correlated information DB. While the SUGV moves under remote control, the DB is created from multi-sensor information: the absolute position along the trajectory is stored in the DB if GPS is available, and a hybrid map based on the fusion of vision and LADAR is stored with the corresponding relative position if GPS is unavailable. During autonomous return, the SUGV follows the trajectory based on GPS absolute positions when GPS is available; otherwise, its current position is first estimated from the relative position using multi-sensor fusion, followed by matching between the query and the DB. A return path is then created on the map and the SUGV returns automatically along it. Experimental results on a pre-built trajectory show the possibility of successful autonomous return.
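
A structural sketch of the return logic described above: while teleoperated, the robot logs one DB entry per step (an absolute GPS fix when available, otherwise a relative pose plus a local hybrid-map patch), and on link loss it replays the trajectory in reverse. `drive_to` and `estimate_pose_by_matching` are hypothetical callbacks; the map matching itself is stubbed, and all names are illustrative rather than from the paper.

```python
from dataclasses import dataclass

@dataclass
class DBEntry:
    gps: tuple | None       # absolute (lat, lon) when GPS was valid
    rel_pose: tuple         # odometry-based relative pose (x, y, yaw)
    local_map: object       # vision/LADAR hybrid map patch for matching

trajectory_db: list[DBEntry] = []

def record_step(gps_fix, rel_pose, local_map):
    """Called while the SUGV is driven by remote control."""
    trajectory_db.append(DBEntry(gps_fix, rel_pose, local_map))

def autonomous_return(drive_to, estimate_pose_by_matching):
    """On link loss: replay the logged trajectory in reverse order."""
    for entry in reversed(trajectory_db):
        if entry.gps is not None:
            drive_to(entry.gps)                    # GPS waypoint following
        else:
            pose = estimate_pose_by_matching(entry.local_map)
            drive_to(pose)                         # map-matching fallback
```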