• Title/Summary/Keyword: Motion recognition image processing

Search results: 64 (processing time: 0.027 seconds)

Algorithm and Performance Evaluation of High-speed Distinction for Condition Recognition of Defective Nut (불량 너트의 상태인식을 위한 고속 판별 알고리즘 및 성능평가)

  • Park, Tae-Jin;Lee, Un-Seon;Lee, Sang-Hee;Park, Man-Gon
    • Journal of Korea Multimedia Society / v.14 no.7 / pp.895-904 / 2011
  • In welding machines that perform conventional spot welding, system malfunctions often occur because of the mechanical motion involved in supplying parts such as the nut to be welded. In a working environment exposed to various situations, such as workers or related equipment moving into the scene, it is difficult to distinguish reliably between good and defective nuts. Furthermore, when a nut is welded defectively, each piece currently has to be inspected manually, so systematic evaluation and analysis through image processing are needed. The purpose of this paper is therefore to implement algorithms that improve the image processing system so that, even when the object is not in a properly stabilized state, recognition remains accurate and analysis time is reduced. To analyze images and judge whether a nut is in good or defective condition, the implemented algorithms are presented and organized by group, and their effectiveness is demonstrated through a series of experiments. As a result, the recognition rates of normal and erroneous decisions over the estimation time ranged from 40% to 94.6% and from 60% to 5.4%, respectively, from classification 1 of group 1 to classification 11 of group 5, and the minimum, maximum, and average estimation times ranged from 1.7 s to 0.08 s, from 3.6 s to 1.2 s, and from 2.5 s to 0.1 s.
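
As a purely illustrative aside, the kind of good/defective image decision described above can be sketched with standard OpenCV operations. This is a minimal sketch assuming a grayscale image of a single nut, Otsu thresholding, and a circularity test; the thresholds, the circularity criterion, and the function name classify_nut are assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch only: a simple pass/fail check for a nut-like part using
# OpenCV thresholding and contour geometry. Not the paper's grouped algorithm.
import cv2
import numpy as np

def classify_nut(gray: np.ndarray, min_area: float = 500.0) -> str:
    """Return 'good' or 'defective' from a grayscale image of a single nut."""
    # Binarize with Otsu's method to separate the part from the background.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return "defective"          # nothing detected at all
    largest = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(largest)
    perimeter = cv2.arcLength(largest, True)
    if area < min_area or perimeter == 0:
        return "defective"
    # Circularity = 4*pi*area / perimeter^2; close to 1 for a round outline.
    circularity = 4.0 * np.pi * area / (perimeter ** 2)
    return "good" if circularity > 0.6 else "defective"

# Example usage (file name is hypothetical):
# img = cv2.imread("nut.png", cv2.IMREAD_GRAYSCALE)
# print(classify_nut(img))
```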

Position Detection and Gathering Swimming Control of Fish Robot Using Color Detection Algorithm (색상 검출 알고리즘을 활용한 물고기로봇의 위치인식과 군집 유영제어)

  • Akbar, Muhammad;Shin, Kyoo Jae
    • Proceedings of the Korea Information Processing Society Conference / 2016.10a / pp.510-513 / 2016
  • Detecting an object in image processing is essential, but the result depends on both the object itself and the environment. An object can be detected either by its shape or by its color. Color is fundamental to pattern recognition and computer vision; it is an attractive feature because of its simplicity, its robustness to scale changes, and its usefulness for locating objects. In general, the perceived color of an object depends on the characteristics of the perceiving eye and brain; physically, objects can be said to have color because of the light leaving their surfaces. Here, we conducted experiments in an aquarium fish tank, where fish robots of different colors mimic the natural swimming of fish. In an underwater medium, however, colors are modified by attenuation, which makes it difficult to identify the color of moving objects. We treat each fish robot as a moving object, find its coordinates at every instant within the aquarium, and detect its position using OpenCV color detection. In this paper, we propose identifying the position of each fish robot by its color and using the position data to gather the robots at one point in the tank through serial communication over an RF module. The approach was verified by performance tests of fish robot position detection.
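
A minimal sketch of the OpenCV color detection step mentioned above: threshold the frame in HSV space, clean the mask, and take the centroid of the largest matching blob as the robot position. The HSV bounds, the camera index, and the function name find_robot_position are assumptions for illustration; the RF-based gathering control is not shown.

```python
# Minimal sketch of color-based position detection with OpenCV.
import cv2
import numpy as np

LOWER = np.array([0, 120, 70])      # assumed lower HSV bound for a red robot
UPPER = np.array([10, 255, 255])    # assumed upper HSV bound

def find_robot_position(frame: np.ndarray):
    """Return the (x, y) centroid of the largest color-matching region, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

cap = cv2.VideoCapture(0)           # camera over the tank (index assumed)
ok, frame = cap.read()
if ok:
    print(find_robot_position(frame))
cap.release()
```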

A Study of Hand Gesture Recognition for Human Computer Interface (컴퓨터 인터페이스를 위한 Hand Gesture 인식에 관한 연구)

  • Chang, Ho-Jung;Baek, Han-Wook;Chung, Chin-Hyun
    • Proceedings of the KIEE Conference / 2000.07d / pp.3041-3043 / 2000
  • GUI (graphical user interface) has been the dominant platform for HCI (human-computer interaction). The GUI-based style of interaction has made computers simpler and easier to use. However, GUIs do not easily support the range of interaction needed to meet users' demand for natural, intuitive, and adaptive interfaces. In this paper we study an approach that tracks a hand in an image sequence and recognizes it in each video frame, so that the hand can replace the mouse as a pointing device for virtual reality. An algorithm suitable for real-time processing is proposed that estimates the position of the hand and segments the hand region, taking the orientation of motion and the color distribution of the hand region into account (a simple color-based tracking sketch follows this entry).

  • PDF
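
As a hedged illustration of color-distribution-based hand tracking, the sketch below uses OpenCV back-projection and CamShift as a generic stand-in, not the exact algorithm of the paper; the skin-tone HSV range, the initial window, and the camera index are assumptions.

```python
# Generic color-based hand tracking baseline using back-projection + CamShift.
import cv2
import numpy as np

SKIN_LOWER = np.array([0, 40, 60])     # assumed skin-tone HSV bounds
SKIN_UPPER = np.array([25, 180, 255])

cap = cv2.VideoCapture(0)              # camera index is a placeholder
track_window = (200, 150, 120, 120)    # initial hand region (x, y, w, h), assumed
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

ok, frame = cap.read()
if ok:
    # Build a hue histogram of the assumed initial hand region.
    x, y, w, h = track_window
    roi_hsv = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
    roi_mask = cv2.inRange(roi_hsv, SKIN_LOWER, SKIN_UPPER)
    roi_hist = cv2.calcHist([roi_hsv], [0], roi_mask, [32], [0, 180])
    cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        # CamShift follows the densest skin-colored blob from frame to frame.
        rot_rect, track_window = cv2.CamShift(backproj, track_window, term)
        cx, cy = map(int, rot_rect[0])
        print("hand position:", cx, cy)   # would drive the pointer in practice
cap.release()
```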

Development of Apple Harvesting Robot(I) - Development of Robot Hand for Apple Harvesting - (사과 수확 로봇의 핸드 개발(I) - 사과 수확용 로봇의 핸드 개발 -)

  • 장익주;김태한;권기영
    • Journal of Biosystems Engineering / v.22 no.4 / pp.411-420 / 1997
  • The efficiency of mechanization in paddy-field rice farming, which uses high-capacity machines such as tractors and combines, is high. Mechanizing the harvest of fruits and vegetables is difficult, however, because they are easily damaged, so advanced techniques for handling them carefully are necessary for automation and robotization. An apple harvesting robot must have a recognition device to detect the position of the fruit, manipulators that function like human arms, and a hand to pick the fruit. This study concerns the development of a rotary hand as the first stage in developing the apple harvesting robot. The results are summarized as follows. 1. A hand with an eccentric rotary motion harvested more efficiently than a hand with a semicircular up-and-down motion. 2. The hand was developed to control changes in grasp force by using a tape-type switch sensor attached to the inside of the fingers (see the sketch after this entry). 3. The initial finger position was set up to allow accurate harvesting by using a two-step fingering position. 4. This study showed that apple harvesting with the developed robot hand is feasible.

  • PDF
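
Purely as a hypothetical illustration of result 2 above, the loop below closes the fingers in small increments until a contact switch triggers. The functions read_switch and step_motor are placeholders for hardware I/O and do not correspond to any interface described in the paper.

```python
# Hypothetical grasp loop: close the fingers until the finger-mounted switch
# sensor reports contact, then hold. Hardware calls are placeholders.
import time

def read_switch() -> bool:
    """Placeholder: return True when the finger-mounted switch sensor is pressed."""
    return False

def step_motor(steps: int) -> None:
    """Placeholder: advance the finger-closing motor by a small increment."""
    pass

def grasp(max_steps: int = 200) -> bool:
    """Close the fingers in small steps; stop as soon as contact is sensed."""
    for _ in range(max_steps):
        if read_switch():        # contact detected: the apple is grasped
            return True
        step_motor(1)            # otherwise keep closing gently
        time.sleep(0.01)
    return False                 # no contact within the travel range
```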

Antiblurry Dejitter Image Stabilization Method of Fuzzy Video for Driving Recorders

  • Xiong, Jing-Ying;Dai, Ming;Zhao, Chun-Lei;Wang, Ruo-Qiu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.6 / pp.3086-3103 / 2017
  • Video captured by vehicle cameras often contains blurry or dithering frames, caused by unintended motion from bumps in the road or by insufficient illumination in the morning or evening, which greatly reduces how well objects can be perceived and recognized in the recordings. Therefore, a real-time electronic stabilization method to correct fuzzy video from driving recorders is proposed. In the first stage, feature detection, a coarse-to-fine inspection policy and a scale nonlinear diffusion filter are proposed to provide more accurate keypoints. Second, a new antiblurry binary descriptor and a feature-point selection strategy for unintentional-motion estimation are proposed, which bring more discriminative power. In addition, a new evaluation criterion for affine region detectors is presented, based on the percentage interval of repeatability. Experiments show that the proposed method improves the detection of blurry corner points; moreover, it improves the performance of the algorithm while guaranteeing high processing speed.
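
For orientation only, a conventional digital-stabilization baseline built from standard OpenCV calls is sketched below (corner tracking plus per-frame affine compensation). It does not reproduce the paper's antiblurry binary descriptor, nonlinear diffusion filtering, or feature-point selection strategy; the file name and parameter values are placeholders.

```python
# Generic per-frame stabilization baseline: track corners, estimate the
# inter-frame affine motion, and warp the current frame to undo the jitter.
import cv2
import numpy as np

cap = cv2.VideoCapture("dashcam.mp4")   # file name is a placeholder
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY) if ok else None

while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect corners in the previous frame and track them into the current one.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=20)
    if pts is not None:
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good_prev = pts[status.flatten() == 1]
        good_next = nxt[status.flatten() == 1]
        if len(good_prev) >= 3:
            # Estimate the jitter as a partial affine transform and undo it.
            m, _ = cv2.estimateAffinePartial2D(good_next, good_prev)
            if m is not None:
                h, w = frame.shape[:2]
                frame = cv2.warpAffine(frame, m, (w, h))
    # frame is now the (roughly) stabilized image; display or write it out here.
    prev_gray = gray
cap.release()
```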

Flame and Smoke Detection for Early Fire Recognition (조기 화재인식을 위한 화염 및 연기 검출)

  • Park, Jang-Sik;Kim, Hyun-Tae;Choi, Soo-Young;Kang, Chang-Soon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2007.10a / pp.427-430 / 2007
  • Fires cause many casualties and much property damage every year. In this paper, a flame and smoke detection algorithm based on image processing techniques is proposed for early fire alarms. The first decision step of the proposed algorithm checks for candidate flame regions using the unique color distribution of flames, which distinguishes them from artificial lights. If a region is not a flame candidate, it is checked as a candidate smoke region by measuring the differences in brightness and chroma in the current frame. Checking flame and smoke with simple brightness and hue alone occasionally produces false alarms, so motion information about the candidate flame and smoke regions is also used. Finally, activity information is used to confirm flames after motion detection, and an edge detection method is adopted to confirm smoke. Simulations with real CCTV video signals show that the proposed algorithm is useful for early fire recognition (a simple color-and-motion sketch follows this entry).

  • PDF
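
A hedged sketch of the color-plus-motion idea described above: a rough flame-color mask is combined with simple frame differencing, and an alarm candidate is raised only when a sufficiently large region is both flame-colored and moving. The HSV range and thresholds are assumptions, and the paper's chroma, activity, and edge checks for smoke are omitted.

```python
# Illustrative flame-candidate test: flame-colored AND moving pixels.
import cv2
import numpy as np

FLAME_LOWER = np.array([0, 120, 180])    # assumed HSV range for flame colors
FLAME_UPPER = np.array([35, 255, 255])
MOTION_THRESH = 25                       # assumed per-pixel difference threshold
MIN_PIXELS = 400                         # assumed minimum overlapping pixels

def fire_candidate(prev_bgr: np.ndarray, curr_bgr: np.ndarray) -> bool:
    """Return True if a moving, flame-colored region is present in the frame."""
    hsv = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2HSV)
    color_mask = cv2.inRange(hsv, FLAME_LOWER, FLAME_UPPER)
    diff = cv2.absdiff(cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY))
    _, motion_mask = cv2.threshold(diff, MOTION_THRESH, 255, cv2.THRESH_BINARY)
    # A flame candidate must be both flame-colored and moving.
    overlap = cv2.bitwise_and(color_mask, motion_mask)
    return int(cv2.countNonZero(overlap)) > MIN_PIXELS
```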

Dynamic Bayesian Network based Two-Hand Gesture Recognition (동적 베이스망 기반의 양손 제스처 인식)

  • Suk, Heung-Il;Sin, Bong-Kee
    • Journal of KIISE: Software and Applications / v.35 no.4 / pp.265-279 / 2008
  • The idea of using hand gestures for human-computer interaction is not new and has been studied intensively during the last decade, with a significant amount of qualitative progress that has nonetheless fallen short of expectations. This paper describes a dynamic Bayesian network (DBN) based approach to both two-hand and one-hand gestures. Unlike wired glove-based approaches, the success of camera-based methods depends greatly on the results of image processing and feature extraction, so the proposed DBN-based inference is preceded by fail-safe steps of skin extraction and modeling and of motion tracking. A new gesture recognition model for a set of one-hand and two-hand gestures is then proposed based on the dynamic Bayesian network framework, which makes it easy to represent relationships among features and to incorporate new information into a model. In an experiment with ten isolated gestures, we obtained a recognition rate upwards of 99.59% with cross validation. The proposed model and approach are believed to have strong potential for successful application to related problems such as sign language recognition.
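
To give a flavor of the temporal probabilistic inference a DBN performs, the sketch below runs the forward algorithm of a plain hidden Markov model over a discretized observation sequence in NumPy. The paper's DBN has a richer structure (two hands and multiple feature streams), and all numbers here are toy values chosen only for illustration.

```python
# Much-simplified stand-in for DBN inference: HMM forward algorithm in NumPy.
import numpy as np

# Toy model: 3 hidden gesture states, 4 discrete observation symbols.
start = np.array([0.6, 0.3, 0.1])                 # initial state probabilities
trans = np.array([[0.7, 0.2, 0.1],                # state transition matrix
                  [0.1, 0.8, 0.1],
                  [0.2, 0.2, 0.6]])
emit = np.array([[0.5, 0.2, 0.2, 0.1],            # emission probabilities
                 [0.1, 0.4, 0.4, 0.1],
                 [0.2, 0.1, 0.2, 0.5]])

def log_likelihood(obs: list[int]) -> float:
    """Forward algorithm: log P(observation sequence | model)."""
    alpha = start * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return float(np.log(alpha.sum()))

# A gesture would be classified by evaluating this likelihood under each
# per-gesture model and picking the highest-scoring one.
print(log_likelihood([0, 1, 1, 3, 2]))
```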

Implementation of Intelligent Image Surveillance System based Context (컨텍스트 기반의 지능형 영상 감시 시스템 구현에 관한 연구)

  • Moon, Sung-Ryong;Shin, Seong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.3 / pp.11-22 / 2010
  • This paper studies the implementation of an intelligent image surveillance system that uses context information and addresses the temporal-spatial constraints that make real-time processing difficult. We propose a scene analysis algorithm that can run in real time in various environments on low-resolution video (320x240) at 30 frames per second. The proposed algorithm removes the background and meaningless frames from the continuous frame stream, and it uses a wavelet transform and an edge histogram to detect shot boundaries. Next, a representative key frame within each shot boundary is selected by a key-frame selection parameter, and the edge histogram and mathematical morphology are used to detect only the motion region. For the motion region of a detected object, we define four basic contexts, standing, lying, sitting, and walking, according to the angles of the feature points, using the vertical-to-horizontal ratio. Finally, we perform scene analysis by defining a simple context model composed of a general context and an emergency context, estimated from the connection status of each context, and we configure a system to check whether real-time processing is possible. The proposed system achieves a recognition rate of 92.5% on low-resolution video, and the average processing speed is 0.74 seconds per frame, which shows that real-time processing is possible.
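
One ingredient mentioned above, shot-boundary detection from edge histograms, can be sketched as follows; the orientation-histogram formulation, the bin count, and the decision threshold are assumptions, and the wavelet-transform step is omitted.

```python
# Sketch: flag a shot boundary when edge histograms of consecutive frames differ.
import cv2
import numpy as np

def edge_histogram(gray: np.ndarray, bins: int = 16) -> np.ndarray:
    """Histogram of gradient orientations on Canny edge pixels (L1-normalized)."""
    edges = cv2.Canny(gray, 100, 200)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    angles = np.arctan2(gy, gx)[edges > 0]          # orientations at edge pixels
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    total = hist.sum()
    return hist / total if total > 0 else hist.astype(float)

def is_shot_boundary(prev_gray: np.ndarray, curr_gray: np.ndarray,
                     threshold: float = 0.5) -> bool:
    """Flag a boundary when the histograms differ by more than the threshold."""
    d = np.abs(edge_histogram(prev_gray) - edge_histogram(curr_gray)).sum()
    return d > threshold
```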

Robust Hand Region Extraction Using a Joint-based Model (관절 기반의 모델을 활용한 강인한 손 영역 추출)

  • Jang, Seok-Woo;Kim, Sul-Ho;Kim, Gye-Young
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.9 / pp.525-531 / 2019
  • Efforts to use human gestures to implement a more natural and interactive interface between humans and computers have continued in recent years. In this paper, we propose a new algorithm that takes consecutive three-dimensional (3D) depth images, defines a hand model, and robustly extracts the human hand region based on six palm joints and 15 finger joints. The 3D depth images are adaptively binarized to exclude non-interest areas such as the background, so that only the hand of the person, the area of interest, is accurately extracted. Experimental results show that the presented algorithm detects the human hand region 2.4% more accurately than the existing method. The hand region extraction algorithm proposed in this paper is expected to be useful in various practical applications related to computer vision and image processing, such as gesture recognition, virtual reality, 3D motion games, and sign recognition.
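
A minimal sketch of the depth-binarization idea only: keep depth pixels inside an assumed near-camera band and take the largest connected component as the hand candidate. The depth band and the helper name extract_hand_mask are assumptions; the joint-based hand model of the paper is not reproduced here.

```python
# Sketch: hand-candidate mask from a 16-bit depth image via depth banding.
import cv2
import numpy as np

def extract_hand_mask(depth_mm: np.ndarray,
                      near: int = 300, far: int = 700) -> np.ndarray:
    """Return a binary mask of the hand candidate from a depth image in millimeters."""
    # Keep only pixels in the assumed hand depth band; drop the background.
    band = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255
    band = cv2.morphologyEx(band, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    num, labels, stats, _ = cv2.connectedComponentsWithStats(band)
    if num <= 1:
        return np.zeros_like(band)
    # Component 0 is the background; pick the largest remaining component.
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return np.where(labels == largest, 255, 0).astype(np.uint8)
```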

Gesture-based Table Tennis Game in AR Environment (증강현실과 제스처를 이용한 비전기반 탁구 게임)

  • Yang, Jong-Yeol;Lee, Sang-Kyung;Kyoung, Dong-Wuk;Jung, Kee-Chul
    • Journal of Korea Game Society / v.5 no.3 / pp.3-10 / 2005
  • We present a computer table tennis game driven by the player's swing motion. To hit the virtual ball, real-world coordinates must be transformed into virtual-world coordinates, but a correct 3D position of the racket cannot be obtained with one camera and simple image processing alone. We therefore use the Augmented Reality (AR) concept to develop the game. This paper presents an AR table tennis game that uses gestures, along with a method for developing a 3D interaction game using only one camera, without any motion detection device or stereo cameras. A scan-line method is used to recognize gestures quickly (a small scan-line sketch follows this entry). The game was developed using ARToolKit and DirectX, which are popular SDKs for game development.

  • PDF
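
As a rough illustration of what a scan-line pass over a binary motion mask might look like, the sketch below scans each row for runs of moving pixels and reports the topmost moving row as a crude racket height. This is only a guess at the flavor of the scan-line method, not the paper's actual procedure; the synthetic mask and the thresholds are made up.

```python
# Scan-line style pass over a binary motion mask.
import numpy as np

def scan_motion_rows(motion_mask: np.ndarray, min_run: int = 10):
    """For each row containing enough moving pixels, return (row, left, right)."""
    rows = []
    for y in range(motion_mask.shape[0]):
        xs = np.flatnonzero(motion_mask[y] > 0)
        if xs.size >= min_run:
            rows.append((y, int(xs[0]), int(xs[-1])))
    return rows

# Example with a synthetic mask (a white block standing in for a swinging arm).
mask = np.zeros((240, 320), dtype=np.uint8)
mask[80:160, 200:260] = 255
hits = scan_motion_rows(mask)
if hits:
    top_row, left, right = hits[0]
    print("topmost motion row:", top_row, "span:", left, "-", right)
```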