• Title/Summary/Keyword: Color computer vision


Implementation of Game Interface using Human Head Motion Recognition (사람의 머리 모션 인식을 이용한 게임 인터페이스 구현)

  • Lee, Samual;Lee, Chang Woo
    • Journal of Korea Society of Industrial Information Systems / v.19 no.5 / pp.9-14 / 2014
  • Recently, various kinds of content using human motion have been developed in the computer vision and game industries. When human motion is applied to application programs and content, users experience a stronger sense of immersion and therefore a higher level of satisfaction. In this research, we analyze human head motion from images captured by a webcam and apply the recognition result to a game as an interface that requires no special devices. The proposed method first segments the head region using an image that combines the MHI (Motion History Image) with the result of skin color detection, and then calculates the direction and distance of motion from the MHI sequence. In experiments, the proposed head motion recognition method was tested for controlling a game character. The experimental results show that the proposed method makes a gamer feel more immersed in the game. Furthermore, we expect the proposed method to serve as an interface for serious games for medical or rehabilitation purposes.
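The segmentation step described above can be sketched with a manually maintained motion history image combined with a generic skin mask. This is only a minimal illustration of the idea, not the authors' implementation; the decay duration, frame rate, and YCrCb skin thresholds are assumed values.

```python
import cv2
import numpy as np

MHI_DURATION = 0.5   # seconds a motion trace persists (assumed value)

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
mhi = np.zeros(prev_gray.shape, np.float32)
t = 0.0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    t += 1.0 / 30.0                                   # assume ~30 fps timestamps
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Motion silhouette from simple frame differencing
    diff = cv2.absdiff(gray, prev_gray)
    _, silhouette = cv2.threshold(diff, 30, 1, cv2.THRESH_BINARY)

    # Manually maintained motion history image: new motion stamps the current
    # time, entries older than MHI_DURATION decay to zero
    mhi[silhouette == 1] = t
    mhi[mhi < t - MHI_DURATION] = 0

    # Generic YCrCb skin mask (placeholder thresholds, not the paper's)
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    # Head candidate = moving, skin-colored pixels; the drift of its centroid
    # across frames gives the direction and distance used as the game command
    moving_skin = cv2.bitwise_and(skin, skin, mask=(mhi > 0).astype(np.uint8))
    m = cv2.moments(moving_skin, binaryImage=True)
    if m["m00"] > 0:
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]

    prev_gray = gray
```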

A Remote Control of 6 d.o.f. Robot Arm Based on 2D Vision Sensor (2D 영상센서 기반 6축 로봇 팔 원격제어)

  • Hyun, Woong-Keun
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.5 / pp.933-940 / 2022
  • In this paper, an algorithm was developed to recognize the 3D position of a hand through a 2D image sensor, and a system was implemented to remotely control a 6 d.o.f. robot arm using it. The system consists of a camera that acquires the hand position in 2D and a computer that controls the robot arm according to the recognized hand position. The image sensor recognizes the specific color of a glove worn on the operator's hand and outputs the recognized extent and position as a rectangle enclosing the colored area of the glove. The velocity vector of the end effector is derived from the position and size of the detected rectangle and used to control the robot arm. Through several experiments with the developed 6-axis robot, it was confirmed that remote control of the 6 d.o.f. robot arm was performed successfully.
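A rough sketch of this color-rectangle-to-velocity mapping follows. The HSV glove color range, the gains, and the idea of reading depth from the rectangle's size change are assumptions standing in for the paper's actual calibration, not its implementation.

```python
import cv2
import numpy as np

# Assumed HSV range for a brightly colored glove and pixel-to-velocity gains
GLOVE_LO, GLOVE_HI = (40, 80, 80), (80, 255, 255)
GAIN_XY, GAIN_Z = 0.002, 0.01

def glove_rectangle(frame_bgr):
    """Bounding rectangle (x, y, w, h) of the glove-colored region, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, GLOVE_LO, GLOVE_HI)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))

def velocity_command(prev_rect, rect):
    """In-plane velocity from the rectangle's displacement; depth velocity from
    its change in size (a larger rectangle is read as the hand moving closer)."""
    (px, py, pw, ph), (x, y, w, h) = prev_rect, rect
    vx = GAIN_XY * ((x + w / 2) - (px + pw / 2))
    vy = GAIN_XY * ((y + h / 2) - (py + ph / 2))
    vz = GAIN_Z * (np.sqrt(w * h) - np.sqrt(pw * ph))
    return vx, vy, vz    # velocity vector handed to the robot controller (not shown)
```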

A Method of Hand Recognition for Virtual Hand Control of Virtual Reality Game Environment (가상 현실 게임 환경에서의 가상 손 제어를 위한 사용자 손 인식 방법)

  • Kim, Boo-Nyon;Kim, Jong-Ho;Kim, Tae-Young
    • Journal of Korea Game Society / v.10 no.2 / pp.49-56 / 2010
  • In this paper, we propose a method for controlling a virtual hand by recognizing a user's hand in a virtual reality game environment. A virtual hand is displayed on the game screen after extracting the user's hand movement and direction from camera input images, so that the hand movement serves as an input interface for selecting and moving objects with the virtual hand. As a vision-based hand recognition method, the proposed approach transforms the input image from the RGB color space to the HSV color space, then segments the hand area using a double threshold on the H and S values followed by connected component analysis. Next, the center of gravity of the hand area is calculated from the zeroth and first moments of the segmented area. Since the center of gravity lies near the center of the hand, the pixels in the segmented image farthest from the center of gravity can be recognized as fingertips. Finally, the axis of the hand is obtained as the vector from the center of gravity to the fingertips. A history buffer and a bounding box are also used to increase recognition stability and performance. Experiments on various input images show that the proposed hand recognition method provides a high level of accuracy and relatively fast, stable results.
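The described pipeline (HSV double threshold, connected components, moment-based centroid, farthest-pixel fingertip) might look roughly like this in OpenCV; the H and S thresholds are placeholder values, not the paper's, and the history buffer and bounding box steps are omitted.

```python
import cv2
import numpy as np

def hand_axis(frame_bgr, h_range=(0, 20), s_range=(48, 255)):
    """HSV double threshold -> largest connected component -> centroid from the
    zeroth/first moments -> fingertip as the farthest hand pixel."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (h_range[0], s_range[0], 0), (h_range[1], s_range[1], 255))

    # Keep only the largest connected component as the hand area
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n < 2:
        return None
    hand_label = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    hand = (labels == hand_label).astype(np.uint8)

    # Center of gravity from the zeroth and first moments
    m = cv2.moments(hand, binaryImage=True)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]

    # Fingertip = segmented pixel farthest from the center of gravity
    ys, xs = np.nonzero(hand)
    d2 = (xs - cx) ** 2 + (ys - cy) ** 2
    tip = (int(xs[np.argmax(d2)]), int(ys[np.argmax(d2)]))

    axis = (tip[0] - cx, tip[1] - cy)   # hand axis: centroid -> fingertip vector
    return (cx, cy), tip, axis
```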

Implement of Hand Gesture Interface using Ratio and Size Variation of Gesture Clipping Region (제스쳐 클리핑 영역 비율과 크기 변화를 이용한 손-동작 인터페이스 구현)

  • Choi, Chang-Yur;Lee, Woo-Beom
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.13 no.1 / pp.121-127 / 2013
  • A vision-based hand-gesture interface method that can substitute for a pointing device is proposed in this paper, which uses the ratio and size variation of the gesture region. The proposed method uses the skin hue and saturation of the hand region in the HSI color model to extract the hand region effectively. This removes non-hand regions and reduces noise caused by the light source. Also, because the computation is reduced by detecting the ratio and size variation of hand movement in the clipped hand region in real time, rather than performing static hand-shape recognition, a faster response speed is guaranteed. To evaluate its performance, the proposed method was applied as a pointing device to a computerized self visual acuity testing system. As a result, it achieved an average gesture recognition ratio of 86% and a coordinate-movement recognition ratio of 87%.
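One possible reading of the ratio/size-variation idea is sketched below. OpenCV's HSV model is used in place of HSI, and the gesture labels and thresholds are purely illustrative assumptions rather than the paper's classification rules.

```python
import cv2

def clip_hand_region(frame_bgr, lo=(0, 40, 60), hi=(25, 255, 255)):
    """Clip the hand by hue/saturation (HSV used here in place of the paper's HSI)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lo, hi)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))

def gesture_from_variation(prev_rect, rect, ratio_eps=0.2, size_eps=0.15):
    """Classify by how the clipped region's aspect ratio and size change between
    frames, rather than by static hand-shape recognition (labels are illustrative)."""
    pw, ph = prev_rect[2], prev_rect[3]
    w, h = rect[2], rect[3]
    ratio_change = (w / h) - (pw / ph)
    size_change = (w * h - pw * ph) / float(pw * ph)
    if size_change > size_eps:
        return "approach"        # region grew: hand moved toward the camera
    if size_change < -size_eps:
        return "retreat"
    if abs(ratio_change) > ratio_eps:
        return "shape-change"    # e.g. a pointing/selection gesture
    return "hold"
```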

A New CSR-DCF Tracking Algorithm based on Faster RCNN Detection Model and CSRT Tracker for Drone Data

  • Farhodov, Xurshid;Kwon, Oh-Heum;Moon, Kwang-Seok;Kwon, Oh-Jun;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.22 no.12 / pp.1415-1429 / 2019
  • Nowadays object tracking is becoming one of the most challenging tasks in the computer vision field. The CSR-DCF (channel spatial reliability discriminative correlation filter) tracking algorithm has achieved state-of-the-art performance on recent tracking benchmarks by introducing channel and spatial reliability concepts into DCF tracking, together with a learning algorithm for their efficient and seamless integration into the filter update and tracking process, using only two simple standard features, HoG and Color Names. However, there are cases where this method cannot track properly, such as overlapping objects, occlusion, motion blur, appearance changes, and environmental variations. To overcome such complications, a modified version of the CSR-DCF approach is proposed that integrates deep-learning-based object detection with the CSRT tracker implemented in the OpenCV library. Faster RCNN (Region-based Convolutional Neural Network) was chosen as the object detection model for its high efficiency and speed compared with other detection methods, and it was combined with the CSRT tracker, demonstrating outstanding real-time detection and tracking performance. The results indicate that integrating the trained object detection model with the tracking algorithm gives better outcomes than using the tracking algorithm or filter alone.
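The detector-plus-CSRT combination can be sketched as a periodic re-initialization loop around OpenCV's CSRT tracker (available in opencv-contrib-python). Here `detect_drone` is a hypothetical stand-in for the trained Faster RCNN model, and the re-detection interval is arbitrary; this is not the paper's integration scheme, only the general pattern.

```python
import cv2

def detect_drone(frame):
    """Hypothetical wrapper around a trained Faster RCNN detector (e.g. torchvision's
    fasterrcnn_resnet50_fpn); expected to return one (x, y, w, h) box or None."""
    raise NotImplementedError

def track_with_redetection(video_path, redetect_every=30):
    cap = cv2.VideoCapture(video_path)
    tracker, frame_idx = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if tracker is None or frame_idx % redetect_every == 0:
            # Periodically (or after a tracking failure) re-run the detector and
            # re-seed the CSRT tracker with the fresh bounding box
            box = detect_drone(frame)
            if box is not None:
                tracker = cv2.TrackerCSRT_create()   # requires opencv-contrib-python
                tracker.init(frame, tuple(int(v) for v in box))
        else:
            ok, box = tracker.update(frame)
            if not ok:
                tracker = None                       # fall back to detection next frame
        frame_idx += 1
```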

Vision-based Motion Control for the Immersive Interaction with a Mobile Augmented Reality Object (모바일 증강현실 물체와 몰입형 상호작용을 위한 비전기반 동작제어)

  • Chun, Jun-Chul
    • Journal of Internet Computing and Services / v.12 no.3 / pp.119-129 / 2011
  • Vision-based human-computer interaction is an emerging field of science and industry that provides a natural way for humans and computers to communicate. In particular, the recent growth of mobile augmented reality demands efficient interaction technologies between augmented virtual objects and users. This paper presents a novel approach to constructing and controlling a marker-less mobile augmented reality object. Replacing a traditional marker, the human hand is used as the interface of the marker-less mobile augmented reality system. To implement the marker-less augmented system within the limited resources of a mobile device compared with desktop environments, we propose a method that extracts an optimal hand region, which plays the role of the marker, and augments the object in real time using the camera attached to the mobile device. Optimal hand region detection consists of detecting the hand region with a YCbCr skin color model and extracting the optimal rectangular region with the rotating calipers algorithm. The extracted rectangle takes the role of a traditional marker. The proposed method resolves the problem of losing track of fingertips when the hand is rotated or occluded in hand-marker systems. The experiments show that the proposed framework can effectively construct and control the augmented virtual object in mobile environments.
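The hand-as-marker step might be approximated as YCrCb skin segmentation followed by `cv2.minAreaRect`, whose minimum-area rotated rectangle is computed with a rotating-calipers-style algorithm. The skin thresholds below are generic values, not the paper's, and the rendering of the augmented object is not shown.

```python
import cv2
import numpy as np

def hand_marker_rect(frame_bgr):
    """Rotated rectangle of the skin region that stands in for a fiducial marker."""
    # Skin segmentation in YCrCb (OpenCV's ordering of the YCbCr channels);
    # the thresholds are generic skin values, not the paper's
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    skin = cv2.morphologyEx(skin, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)

    # Minimum-area rotated rectangle of the hand region (rotating-calipers style);
    # its center, size and angle define the pose at which the object is rendered
    rect = cv2.minAreaRect(hand)          # ((cx, cy), (w, h), angle)
    corners = cv2.boxPoints(rect)         # the four corners of the rotated rectangle
    return rect, corners
```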

Design and Implementation of the Stop line and Crosswalk Recognition Algorithm for Autonomous UGV (자율 주행 UGV를 위한 정지선과 횡단보도 인식 알고리즘 설계 및 구현)

  • Lee, Jae Hwan;Yoon, Heebyung
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.3 / pp.271-278 / 2014
  • Although stop lines and crosswalks are among the most basic objects that a transportation system must recognize, the features that can be extracted from them are very limited, and they are difficult to recognize not only with image-based recognition technology but also with laser, RF, and GPS/INS-based technologies. For this reason, only limited research has been done in this area. In this paper, an algorithm to recognize the stop line and crosswalk is designed and implemented using image-based recognition on images acquired through a vision sensor. The algorithm consists of three functions: 'Region of Interest', which selects in advance the area needed for feature extraction to speed up data processing; 'Color Pattern Inspection', which processes only those images in which white is detected above a certain proportion in order to remove unnecessary operations; and 'Feature Extraction and Recognition', which extracts edge features and compares them with a previously built model to identify the stop line and crosswalk. In particular, by using a case-based feature comparison algorithm, it can determine whether both the stop line and crosswalk exist or only one of them does. The proposed algorithm also extends existing research by comparing and analyzing the effect of in-vehicle camera installation and changes in recognition rate with respect to estimated distance and various constraints such as backlight and shadow.
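The three functions could be sketched as follows. The ROI placement, the white-pixel threshold, and the Hough-line count that stands in for the case-based feature comparison are all assumptions, not the paper's implementation.

```python
import cv2
import numpy as np

WHITE_RATIO_MIN = 0.08    # assumed minimum fraction of white pixels in the ROI

def stopline_crosswalk_step(frame_bgr):
    # 1) Region of Interest: the road area in front of the vehicle (assumed crop)
    h, w = frame_bgr.shape[:2]
    roi = cv2.cvtColor(frame_bgr[int(0.6 * h):, :], cv2.COLOR_BGR2GRAY)

    # 2) Color Pattern Inspection: skip frames without enough white pixels
    _, white = cv2.threshold(roi, 200, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(white) / white.size < WHITE_RATIO_MIN:
        return "no candidate"

    # 3) Feature Extraction and Recognition: edge features compared with a prior
    #    model; a crude count of long horizontal lines stands in for that model
    edges = cv2.Canny(roi, 80, 160)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=int(0.3 * w), maxLineGap=20)
    if lines is None:
        return "no candidate"
    horizontal = sum(1 for x1, y1, x2, y2 in lines[:, 0] if abs(y2 - y1) < 10)
    if horizontal >= 4:
        return "crosswalk (possibly with stop line)"
    if horizontal >= 1:
        return "stop line"
    return "no candidate"
```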

A Study on u-CCTV Fire Prevention System Development of System and Fire Judgement (u-CCTV 화재 감시 시스템 개발을 위한 시스템 및 화재 판별 기술 연구)

  • Kim, Young-Hyuk;Lim, Il-Kwon;Li, Qigui;Park, So-A;Kim, Myung-Jin;Lee, Jae-Kwang
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2010.05a / pp.463-466 / 2010
  • This paper aims to develop a CCTV-based fire surveillance system. The advantages and disadvantages of existing sensor-based and video-based fire surveillance systems are analyzed, and a fire surveillance system model and fire judgement technology suitable for nationally supported ubiquitous environments such as U-City, U-Home, and U-Campus are proposed. For this study, images were captured with a Microsoft LifeCam VX-1000 and analyzed using apples and tomatoes as test objects, and H.264 was used for video encoding. The client was built on an ARM9 S3C2440 board running Linux, and its role is to pass the captured images to the server for processing. The client and server basically use 1:1 video communication, so multicast support is specified so that multiple receivers can obtain the video; the fire surveillance system is thus designed for multi-party video communication. Video data is converted from the RGB format to the YUV format before transfer, and fire detection uses the Y value, which reflects movement. The red color of fire is detected, and the Y value in the fire region is calculated continuously to detect the movement of the flame.
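A minimal sketch of the described fire-judgement rule (Y-channel change for flame movement, gated by a red color test) follows. The thresholds and the BGR red test are assumptions, not the system's actual values.

```python
import cv2
import numpy as np

def fire_score(prev_bgr, frame_bgr, y_diff_thresh=25):
    """Fraction of pixels that are both reddish and flickering in luminance;
    a sustained high value over several frames would raise the fire alarm."""
    # Convert to YUV and difference the Y channels to capture flame movement
    y_prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2YUV)[:, :, 0]
    y_curr = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YUV)[:, :, 0]
    moving = cv2.absdiff(y_curr, y_prev) > y_diff_thresh

    # Red/orange color test in BGR (assumed thresholds, not the system's values)
    b, g, r = (c.astype(np.int16) for c in cv2.split(frame_bgr))
    reddish = (r > 170) & (r > g + 40) & (g > b)

    return float(np.mean(moving & reddish))
```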


Human Tracking and Body Silhouette Extraction System for Humanoid Robot (휴머노이드 로봇을 위한 사람 검출, 추적 및 실루엣 추출 시스템)

  • Kwak, Soo-Yeong;Byun, Hye-Ran
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.6C / pp.593-603 / 2009
  • In this paper, we propose a new integrated computer vision system designed to track multiple human beings and extract their silhouettes with an active stereo camera. The proposed system consists of three modules: detection, tracking, and silhouette extraction. Detection is performed by camera ego-motion compensation and disparity segmentation. For tracking, we present an efficient mean-shift-based tracking method in which the tracked objects are characterized by disparity-weighted color histograms. The silhouette is obtained by two-step segmentation: a trimap is estimated in advance and then incorporated into the graph cut framework for fine segmentation. The proposed system was evaluated against ground truth data and was shown to detect and track multiple people very well and to produce high-quality silhouettes. The proposed system can assist gesture and gait recognition in the field of Human-Robot Interaction (HRI).
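The mean-shift tracking module might be approximated with OpenCV's histogram back-projection, where the paper's disparity weighting is simplified here to a disparity-based mask over the histogram region; bin counts and tolerances are assumed, and detection and graph-cut segmentation are not shown.

```python
import cv2
import numpy as np

def init_target(frame_bgr, disparity, box):
    """Hue histogram of the target, with the paper's disparity weighting simplified
    to a mask that keeps pixels near the target's median disparity."""
    x, y, w, h = box
    hsv = cv2.cvtColor(frame_bgr[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
    disp = disparity[y:y+h, x:x+w]
    med = np.median(disp[disp > 0])
    mask = ((np.abs(disp - med) < 8) & (hsv[:, :, 1] > 30)).astype(np.uint8) * 255
    hist = cv2.calcHist([hsv], [0], mask, [32], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def track_step(frame_bgr, hist, box):
    """One mean-shift step: back-project the target histogram, shift the window."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, new_box = cv2.meanShift(backproj, box, criteria)
    return new_box
```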

Detection of Various Sized Car Number Plates using Edge-based Region Growing (에지 기반 영역확장 기법을 이용한 다양한 크기의 번호판 검출)

  • Kim, Jae-Do;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of KIISE:Software and Applications / v.36 no.2 / pp.122-130 / 2009
  • Conventional approaches to car number plate detection have dealt with input images of similar sizes and simple backgrounds acquired in well-controlled environments. Their performance therefore degrades when input images contain number plates of different sizes or are acquired under different lighting conditions. To solve these problems, this paper proposes a new scheme that uses the geometrical features of number plates and their topological relations with other features of the car. In the first step, edges forming a rectangle are detected, and several pixels neighboring those edges are selected as seed pixels for region growing. Region growing uses color and intensity as features, and the resulting regions are merged into a number plate candidate when their features lie within a certain boundary. Once the candidates are generated, their topological relations with other parts of the car, such as the lights, are tested to finally determine the number plate region. Experimental results show that the proposed method can detect even small number plates on which the characters are not legible.
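The seed-and-grow idea could be sketched as follows: rectangle-like edge contours provide seeds, `cv2.floodFill` grows regions by intensity similarity, and an assumed aspect-ratio test stands in for the paper's region merging and topological checks, which are not reproduced here.

```python
import cv2
import numpy as np

def plate_candidates(frame_bgr, tol=12):
    """Grow regions from seeds placed at rectangle-like edge structures and keep
    grown regions whose aspect ratio looks plate-like (assumed range)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

    candidates = []
    for c in contours:
        # Keep contours that approximate a quadrilateral (rectangle-forming edges)
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) != 4 or cv2.contourArea(approx) < 500:
            continue
        x, y, w, h = cv2.boundingRect(approx)

        # Seed inside the rectangle and grow by intensity similarity
        seed = (x + w // 2, y + h // 2)
        mask = np.zeros((gray.shape[0] + 2, gray.shape[1] + 2), np.uint8)
        flags = 4 | cv2.FLOODFILL_MASK_ONLY | (255 << 8)
        cv2.floodFill(gray, mask, seed, 0, loDiff=tol, upDiff=tol, flags=flags)

        ys, xs = np.nonzero(mask[1:-1, 1:-1])
        if len(xs) == 0:
            continue
        gw, gh = int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1)
        if 2.0 < gw / float(gh) < 6.0:
            candidates.append((int(xs.min()), int(ys.min()), gw, gh))
    return candidates
```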