• Title/Summary/Keyword: Vision-based recognition

Search Results: 633

Development of a Simple Computer Vision System (컴퓨터 시각 장치의 개발)

  • 박동철;석민수
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.20 no.1
    • /
    • pp.1-6
    • /
    • 1983
  • To give a sensor-based robot system the capability of recognizing task objects by computer vision, an image digitizer and some basic software techniques were developed and are reported here. The image digitizer was built with a CROMEMCO SYSTEM III microcomputer and a CCTV camera to convert the analog-valued scene into a digitized image that could be processed by a digital computer. The basic software techniques for the computer vision system were aimed at the recognition of 3-dimensional objects. Experiments with these techniques were carried out using the image of a cube, which could be considered a typical simple 3-dimensional object.

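
The digitization step described above, converting an analog-valued scene into an image a digital computer can process, can be sketched in Python. This is only an illustrative thresholding example; the paper's hardware digitizer and its quantization levels are not specified, so the threshold value here is an assumption:

```python
def digitize(scene, threshold=128):
    """Quantize an analog-valued scene (rows of 0-255 intensities)
    into a binary image a digital computer can process."""
    return [[1 if px >= threshold else 0 for px in row] for row in scene]

# Tiny hypothetical 2x2 scene
scene = [[200, 30], [90, 180]]
print(digitize(scene))  # → [[1, 0], [0, 1]]
```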

A Study on Detection of Object Position and Displacement for Obstacle Recognition of UCT (무인 컨테이너 운반차량의 장애물 인식을 위한 물체의 위치 및 변위 검출에 관한 연구)

  • 이진우;이영진;조현철;손주한;이권순
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 1999.10a
    • /
    • pp.321-332
    • /
    • 1999
  • It is important to detect object movement for the obstacle recognition and path searching of UCTs (unmanned container transporters) with a vision sensor. This paper presents a method to extract objects and trace the trajectory of a moving object using a CCD camera, and describes a method to recognize the shape of objects with a neural network. Pixel points can be transformed into object positions in real space using the proposed viewport. The technique is applied with a single vision system based on a floor map.

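
The viewport idea in this abstract, mapping pixel points to positions in real space, can be sketched as a simple linear transform. The image resolution and floor-map dimensions below are hypothetical, not taken from the paper:

```python
def make_viewport(img_w, img_h, world_w, world_h, origin=(0.0, 0.0)):
    """Return a function mapping pixel coordinates to floor-map positions
    via a linear viewport (uniform scale plus offset)."""
    sx, sy = world_w / img_w, world_h / img_h
    ox, oy = origin
    def to_world(px, py):
        return (ox + px * sx, oy + py * sy)
    return to_world

# Hypothetical 640x480 camera viewing a 40m x 30m floor area
to_world = make_viewport(640, 480, 40.0, 30.0)
print(to_world(320, 240))  # → (20.0, 15.0)
```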

Intelligent User Pattern Recognition based on Vision, Audio and Activity for Abnormal Event Detections of Single Households

  • Jung, Ju-Ho;Ahn, Jun-Ho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.5
    • /
    • pp.59-66
    • /
    • 2019
  • According to KT telecommunication statistics, people stay inside their houses for an average of 11.9 hours a day. Likewise, according to NSC statistics in the United States, people of all ages are injured for a variety of reasons in their homes. For this research, we investigated an abnormal-event detection algorithm to classify infrequently occurring behaviors, such as accidents and health emergencies, in daily life. We propose a fusion method that combines three classification algorithms, based on vision, audio, and activity patterns, to detect unusual user events. The vision pattern algorithm identifies people and objects from video collected through home CCTV. The audio and activity pattern algorithms classify user sounds and activity behaviors using data collected from the built-in sensors of smartphones in the house. We evaluated the proposed individual pattern algorithms and the fusion method on multiple scenarios.
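
The fusion of the three pattern classifiers can be illustrated as a weighted late fusion over per-class scores. The abstract does not specify the combination rule, so the weights and class names below are hypothetical:

```python
def fuse(vision, audio, activity, weights=(0.4, 0.3, 0.3)):
    """Combine per-class scores from three classifiers by weighted sum
    and return the highest-scoring class."""
    fused = {c: weights[0] * vision[c] + weights[1] * audio[c] + weights[2] * activity[c]
             for c in vision}
    return max(fused, key=fused.get)

# Hypothetical per-class scores from the three pattern algorithms
vision = {"normal": 0.2, "fall": 0.8}
audio = {"normal": 0.6, "fall": 0.4}
activity = {"normal": 0.3, "fall": 0.7}
print(fuse(vision, audio, activity))  # → fall
```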

Analysis of 3D Motion Recognition using Meta-analysis for Interaction (기존 3차원 인터랙션 동작인식 기술 현황 파악을 위한 메타분석)

  • Kim, Yong-Woo;Whang, Min-Cheol;Kim, Jong-Hwa;Woo, Jin-Cheol;Kim, Chi-Jung;Kim, Ji-Hye
    • Journal of the Ergonomics Society of Korea
    • /
    • v.29 no.6
    • /
    • pp.925-932
    • /
    • 2010
  • Most research in the field of three-dimensional interaction has reported different accuracies depending on sensing, mode, and method, and implementations of interaction have lacked consistency across application fields. This study therefore surveys research trends in three-dimensional interaction using meta-analysis. Searching relevant keywords in databases yielded 153 domestic papers and 188 international papers covering three-dimensional interaction, of which analytical coding tables selected 18 domestic and 28 international papers for analysis. Frequency analysis was carried out on method of action, element, number, and accuracy, and accuracy was then verified by the effect size of the meta-analysis. As a result, the effect size of sensor-based interaction was higher than that of vision-based interaction, but the effect size was small, at 0.02. The effect size of vision-based interaction using hand motion was higher than that of sensor-based interaction using hand motion. Therefore, sensor-based three-dimensional interaction and vision-based interaction using hand motions are more efficient to implement. This study provides a comprehensive analysis of three-dimensional motion recognition for interaction and suggests application directions for three-dimensional interaction.
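
Effect sizes like the 0.02 reported above are commonly computed in meta-analysis as pooled-standard-deviation Cohen's d; whether the paper used exactly this estimator is an assumption. A minimal sketch:

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d effect size between two groups using the pooled
    sample standard deviation."""
    na, nb = len(group_a), len(group_b)
    ma, mb = sum(group_a) / na, sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled
```

By convention, |d| around 0.2 is considered small, so an effect size of 0.02 indicates an almost negligible difference between the sensor-based and vision-based groups.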

Gesture Recognition by Analyzing a Trajectory on Spatio-Temporal Space (시공간상의 궤적 분석에 의한 제스쳐 인식)

  • 민병우;윤호섭;소정;에지마 도시야끼
    • Journal of KIISE:Software and Applications
    • /
    • v.26 no.1
    • /
    • pp.157-157
    • /
    • 1999
  • Gesture recognition has become a very interesting research topic in the computer vision area. Gesture recognition from visual images has a number of potential applications, such as HCI (Human Computer Interaction), VR (Virtual Reality), and machine vision. To overcome the technical barriers in visual processing, conventional approaches have employed cumbersome devices such as datagloves or color-marked gloves. In this research, we capture gesture images without using external devices and generate a gesture trajectory composed of point-tokens. The trajectory is spotted using phase-based velocity constraints and recognized using a discrete left-right HMM. Input vectors to the HMM are obtained by applying the LBG clustering algorithm in a polar-coordinate space into which the point-tokens on the Cartesian space are converted. The gesture vocabulary is composed of twenty-two dynamic hand gestures for editing drawing elements. In our experiment, one hundred data per gesture were collected from twenty persons; fifty were used for training and another fifty for the recognition experiment. The results show about a 95% recognition rate, as well as the possibility that these results can be applied to several potential systems operated by gestures. The developed system runs in real time for editing basic graphic primitives in a hardware environment of a Pentium Pro (200 MHz), a Matrox Meteor graphics board, and a CCD camera, with a Windows 95 and Visual C++ software environment.
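
Two steps of this pipeline can be sketched: converting Cartesian point-tokens to polar coordinates, and assigning each polar vector to its nearest codeword (the assignment step used inside LBG clustering). The codebook below is hypothetical; the paper's actual codebook is learned from training data:

```python
import math

def to_polar(points):
    """Convert Cartesian point-tokens (x, y) of a gesture trajectory
    to polar (radius, angle) pairs."""
    return [(math.hypot(x, y), math.atan2(y, x)) for x, y in points]

def nearest_codeword(vec, codebook):
    """Index of the codeword closest (squared Euclidean) to a polar vector,
    as used when quantizing HMM input vectors."""
    return min(range(len(codebook)),
               key=lambda i: (vec[0] - codebook[i][0]) ** 2
                           + (vec[1] - codebook[i][1]) ** 2)

trajectory = to_polar([(1.0, 0.0), (0.0, 2.0)])
codebook = [(0.0, 0.0), (1.0, 0.2), (2.0, 1.5)]  # hypothetical
symbols = [nearest_codeword(v, codebook) for v in trajectory]
```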

A Study on Vision Based Gesture Recognition Interface Design for Digital TV (동작인식기반 Digital TV인터페이스를 위한 지시동작에 관한 연구)

  • Kim, Hyun-Suk;Hwang, Sung-Won;Moon, Hyun-Jung
    • Archives of design research
    • /
    • v.20 no.3 s.71
    • /
    • pp.257-268
    • /
    • 2007
  • The development of human-computer interfaces has relied on the development of technology. Mice and keyboards are the most popular HCI devices for personal computing, but such device-based interfaces are quite different from human-to-human interaction and very artificial. Developing more intuitive interfaces that mimic human-to-human interaction has been a major research topic among HCI researchers and engineers. Technology in the TV industry has also developed rapidly, the market penetration rate of big-screen TVs has increased quickly, and HDTV and digital TV broadcasting are being tested. These changes in the TV environment require changes in the human-to-TV interface. A gesture recognition-based interface with a computer vision system can replace the remote control-based interface because of its immediacy and intuitiveness. This research focuses on how people use their hands or arms for command gestures. A set of gestures for controlling a TV set was sampled through focus group interviews and surveys. The results of this paper can be used as a reference for designing a computer vision-based TV interface.


Object Recognition Using 3D RFID System (3D REID 시스템을 이용한 사물 인식)

  • Roh Se-gon;Lee Young Hoon;Choi Hyouk Ryeol
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.11 no.12
    • /
    • pp.1027-1038
    • /
    • 2005
  • Object recognition in the field of robotics has generally depended on computer vision systems. Recently, RFID (Radio Frequency IDentification) has been suggested as a technology that supports object recognition. This paper introduces advanced RFID-based recognition using a novel tag, named a 3D tag, which was designed to facilitate object recognition. The proposed RFID system not only detects the existence of an object but also estimates its orientation and position. These characteristics allow a robot to reduce its dependence on other sensors for object recognition considerably. In this paper, we analyze the characteristics of the 3D tag-based RFID system, and the methods of estimating position and orientation using the system are discussed.

Unsupervised Transfer Learning for Plant Anomaly Recognition

  • Xu, Mingle;Yoon, Sook;Lee, Jaesu;Park, Dong Sun
    • Smart Media Journal
    • /
    • v.11 no.4
    • /
    • pp.30-37
    • /
    • 2022
  • Disease threatens plant growth, and recognizing the type of disease is essential to applying a remedy. In recent years, deep learning has brought significant improvement to this task; however, a large volume of labeled images is required to get decent performance, and annotated images are difficult and expensive to obtain in the agricultural field. Designing an efficient and effective strategy with few labeled data is therefore one of the challenges in this area. Transfer learning, which takes knowledge from a source domain to a target domain, has been borrowed to address this issue and has shown comparable results. However, current transfer learning strategies can be regarded as supervised methods, since they assume many labeled images in the source domain. In contrast, unsupervised transfer learning, using only unlabeled images in a source domain, is more convenient, as collecting images is much easier than annotating them. In this paper, we leverage unsupervised transfer learning to perform plant disease recognition and achieve better performance than supervised transfer learning in many cases. Besides, a vision transformer, with a bigger model capacity than convolutional networks, is utilized to obtain a better pretrained feature space. With vision transformer-based unsupervised transfer learning, we achieve better results than current works on two datasets; in particular, we obtain 97.3% accuracy with only 30 training images per class on the Plant Village dataset. We hope that our work encourages the community to pay attention to vision transformer-based unsupervised transfer learning in the agricultural field when few labeled images are available.
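
A minimal sketch of classifying over a frozen pretrained feature space: fit per-class centroids on the few labeled feature vectors and predict by nearest centroid. The feature vectors below are synthetic stand-ins for vision-transformer embeddings, and the paper's actual classifier head may differ:

```python
def nearest_centroid_fit(features, labels):
    """Compute one mean feature vector (centroid) per class label."""
    sums, counts = {}, {}
    for f, y in zip(features, labels):
        acc = sums.setdefault(y, [0.0] * len(f))
        for i, v in enumerate(f):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def nearest_centroid_predict(centroids, f):
    """Label of the centroid with smallest squared distance to f."""
    return min(centroids,
               key=lambda y: sum((a - b) ** 2 for a, b in zip(f, centroids[y])))

# Synthetic 2-D "embeddings" for two hypothetical disease classes
feats = [[0.0, 0.0], [0.0, 2.0], [4.0, 4.0], [4.0, 6.0]]
labels = ["healthy", "rust", "rust", "rust"][:2] + ["rust", "rust"]
centroids = nearest_centroid_fit(feats, ["healthy", "healthy", "rust", "rust"])
print(nearest_centroid_predict(centroids, [1.0, 1.0]))  # → healthy
```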

A Study on the Environment Recognition System of Biped Robot for Stable Walking (안정적 보행을 위한 이족 로봇의 환경 인식 시스템 연구)

  • Song, Hee-Jun;Lee, Seon-Gu;Kang, Tae-Gu;Kim, Dong-Won;Park, Gwi-Tae
    • Proceedings of the KIEE Conference
    • /
    • 2006.07d
    • /
    • pp.1977-1978
    • /
    • 2006
  • This paper discusses a vision-based sensor fusion system for biped robot walking. Most research on biped walking robots has focused on the walking algorithm itself; however, developing vision systems for biped walking robots is an important and urgent issue, since such robots are ultimately developed not only for research but to be used in real life. In this research, systems for environment recognition and tele-operation were developed for task assignment and execution by a biped robot, as well as for a human-robot interaction (HRI) system. To carry out certain tasks, an object tracking system using a modified optical flow algorithm and an obstacle recognition system using enhanced template matching and a hierarchical support vector machine algorithm, fed by a wireless vision camera, are implemented within a sensor fusion system that uses the other sensors installed in the biped walking robot. Systems for robot manipulation and for communication with the user were also developed.

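
The template-matching component mentioned above can be illustrated with a brute-force sum-of-absolute-differences (SAD) search. The paper's "enhanced" variant is not specified, so this is only the baseline idea:

```python
def match_template(image, template):
    """Slide the template over the image and return the (row, col) offset
    with the smallest sum of absolute differences."""
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    best, best_pos = float("inf"), (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            sad = sum(abs(image[r + i][c + j] - template[i][j])
                      for i in range(h) for j in range(w))
            if sad < best:
                best, best_pos = sad, (r, c)
    return best_pos

# Hypothetical 4x4 image containing the 2x2 obstacle template at (1, 2)
image = [[0, 0, 0, 0],
         [0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 0, 0]]
print(match_template(image, [[9, 9], [9, 9]]))  # → (1, 2)
```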

Tool Condition Monitoring Technique Using Computer Vision and Pattern Recognition (컴퓨터 비젼 및 패턴인식기법을 이용한 공구상태 판정시스템 개발)

  • 권오달;양민양
    • Transactions of the Korean Society of Mechanical Engineers
    • /
    • v.17 no.1
    • /
    • pp.27-37
    • /
    • 1993
  • In unmanned machining, one of the most essential issues is the tool management system, which includes the control, identification, presetting, and monitoring of cutting tools. The monitoring of tool wear and fracture, especially, may be the heart of the system. In this study, a computer vision-based tool monitoring system was developed, along with an algorithm that can determine the tool condition using this system. To enhance practical adaptability, the vision system, through which two modes of images are taken, is located over the rake face of a tool insert, and the images are analyzed quantitatively and qualitatively with image processing techniques. In practice, the morphologies of tool fracture and wear occur so variously that it is difficult to predict them. To address this problem, pattern recognition is introduced to classify the modes of the tool, such as fracture, crater, chipping, and flank wear. Experimental results obtained on a CNC turning machine have proved the effectiveness of the proposed system.
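
The pattern-recognition step, classifying a tool image's feature vector into fracture, crater, chipping, or flank wear, can be sketched as a nearest-neighbour lookup over labeled reference features. The two-dimensional features and reference values here are hypothetical; the paper's actual features and classifier are not given in the abstract:

```python
def classify_tool(features, samples):
    """Return the condition label of the labeled reference sample
    nearest (squared Euclidean) to the measured feature vector."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(samples, key=lambda s: sqdist(features, s[0]))[1]

# Hypothetical (wear-area, edge-roughness) references per tool condition
refs = [([0.1, 0.2], "flank wear"), ([0.8, 0.9], "fracture"),
        ([0.5, 0.1], "crater"), ([0.3, 0.7], "chipping")]
print(classify_tool([0.75, 0.85], refs))  # → fracture
```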