• Title/Abstract/Keyword: Computer Vision

Search results: 2,218 items (processing time: 0.031 seconds)

A Computer Vision-Based Banknote Recognition System for the Blind with an Accuracy of 98% on Smartphone Videos

  • Sanchez, Gustavo Adrian Ruiz
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 24, No. 6
    • /
    • pp.67-72
    • /
    • 2019
  • This paper proposes a computer vision-based banknote recognition system intended to assist the blind. The system is robust and fast in recognizing banknotes in videos recorded with a smartphone in real-life scenarios. To reduce the computation time and enable robust recognition in cluttered environments, this study segments the banknote candidate area from the background using a technique called the Pixel-Based Adaptive Segmenter (PBAS). The Speeded-Up Robust Features (SURF) interest point detector is used, and SURF feature vectors are computed only when sufficient interest points are found. The proposed algorithm achieves a recognition accuracy of 98%, a 100% true recognition rate, and a 0% false recognition rate. Although Korean banknotes are used as a working example, the proposed system can be applied to recognize other countries' banknotes.
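
The pipeline described in this abstract (background segmentation of the banknote region, then local-feature matching) can be sketched with off-the-shelf OpenCV components. This is a minimal illustration only: OpenCV does not ship PBAS, so the MOG2 background subtractor stands in for it, and ORB stands in for SURF (which requires the non-free contrib build); the reference-matching logic and thresholds are assumptions, not the authors' implementation.

```python
# Minimal sketch: segment the banknote candidate area, extract local features,
# and match against precomputed reference banknote descriptors.
import cv2
import numpy as np

MIN_KEYPOINTS = 40          # assumed threshold for "sufficient interest points"
bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)
detector = cv2.ORB_create(nfeatures=500)          # stand-in for SURF
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def load_reference(path, label):
    """Precompute descriptors for one reference banknote image."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = detector.detectAndCompute(img, None)
    return label, desc

def recognize(frame, references):
    """Return the best-matching banknote label for one video frame, or None."""
    mask = bg_subtractor.apply(frame)                      # foreground candidate area
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    keypoints, desc = detector.detectAndCompute(gray, mask)
    if desc is None or len(keypoints) < MIN_KEYPOINTS:     # skip empty/cluttered frames
        return None
    scores = {label: len(matcher.match(desc, ref_desc))
              for label, ref_desc in references}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```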

2D 비전과 3D 동작인식을 결합한 하이브리드 실시간 모니터링 시스템 (Hybrid Real-time Monitoring System Using 2D Vision and 3D Action Recognition)

  • 임종헌;성만규;이준재
    • 한국멀티미디어학회논문지
    • /
    • Vol. 18, No. 5
    • /
    • pp.583-598
    • /
    • 2015
  • Many assembly lines are needed to produce industrial products, such as automobiles, that require many assembled parts. A large portion of such assembly lines is still operated by manual human work, and this manual work sometimes causes critical errors that leave defects in the product. Moreover, once assembly is completed, it is very hard to verify whether or not the product contains such an error. In this paper, to automatically monitor the behavior of manual human work on an assembly line, we propose a real-time hybrid monitoring system that combines a 2D vision sensor tracking technique with 3D motion recognition sensors.
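
As a rough illustration of the hybrid idea, the sketch below accepts an assembly step only when the 2D vision tracker and the 3D motion sensor agree. All data sources, names, and tolerances are hypothetical; the paper's actual fusion logic is not specified in the abstract.

```python
# Hypothetical fusion rule: a step counts as completed only if both the
# 2D tracker and the 3D skeleton sensor place the work at the target.
import numpy as np

def step_completed(part_bbox_2d, hand_uv, hand_xyz, target_xyz,
                   px_tol=30, mm_tol=80):
    """Fuse 2D and 3D evidence for one assembly step.

    part_bbox_2d : (x, y, w, h) of the tracked part in the image
    hand_uv      : 2D position of the worker's hand in the same image
    hand_xyz     : 3D hand joint from the motion sensor (mm)
    target_xyz   : 3D position where the part must be placed (mm)
    """
    x, y, w, h = part_bbox_2d
    center = np.array([x + w / 2, y + h / 2])
    near_2d = np.linalg.norm(center - np.asarray(hand_uv)) < px_tol
    near_3d = np.linalg.norm(np.asarray(hand_xyz) - np.asarray(target_xyz)) < mm_tol
    return near_2d and near_3d        # flag the step only if both sensors agree
```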

평면상에 있는 점위치 결정을 위한 컴퓨터장 비젼의 응용 (Application of Computer Vision System for the Point Position Determination in the Plane)

  • 장완식;장종근;유창규
    • 한국정밀공학회:학술대회논문집
    • /
    • 한국정밀공학회 1995년도 추계학술대회 논문집
    • /
    • pp.1124-1128
    • /
    • 1995
  • This paper presents an application of computer vision for the purpose of determining the position of an unknown point in the plane. The presented control method estimates the six view parameters representing the relationship between the image-plane coordinates and the real physical coordinates. Estimating these six parameters is indispensable for transforming the 2-dimensional camera coordinates into the 3-dimensional spatial coordinates. The position of the unknown point is then estimated from the parameters estimated for each camera. The suitability of this control scheme is demonstrated experimentally by determining the position of an unknown point in the plane.

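One common way to realize the six-view-parameter mapping described in this entry (and in the closely related 1998 entry below) is an affine model with six unknowns, X = c1·u + c2·v + c3 and Y = c4·u + c5·v + c6, fitted by least squares from calibration points with known plane coordinates. The sketch below follows that interpretation; it is an assumption for illustration, not necessarily the paper's exact parameterization.

```python
# Estimate six view parameters from calibration points, then map an
# unknown image point (u, v) to plane coordinates (X, Y).
import numpy as np

def estimate_view_parameters(image_pts, plane_pts):
    """image_pts, plane_pts: (N, 2) arrays of matched (u, v) and (X, Y), N >= 3."""
    u, v = image_pts[:, 0], image_pts[:, 1]
    A = np.column_stack([u, v, np.ones_like(u)])      # design matrix [u v 1]
    cx, *_ = np.linalg.lstsq(A, plane_pts[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, plane_pts[:, 1], rcond=None)
    return np.concatenate([cx, cy])                   # the six parameters

def locate_point(params, uv):
    """Apply the estimated parameters to an unknown image point."""
    u, v = uv
    c1, c2, c3, c4, c5, c6 = params
    return c1 * u + c2 * v + c3, c4 * u + c5 * v + c6
```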

평면상에 있는 물체 위치 결정을 위한 컴퓨터 비젼 시스템의 응용 (An Application of Computer Vision System for the Determination of Object Position in the Plane)

  • 장완식
    • 한국생산제조학회지
    • /
    • Vol. 7, No. 2
    • /
    • pp.62-68
    • /
    • 1998
  • This paper presents an application of computer vision for the purpose of determining the position of an unknown object in the plane. The presented control method estimates the six view parameters representing the relationship between the image-plane coordinates and the real physical coordinates. Estimating these six parameters is indispensable for transforming the 2-dimensional camera coordinates into the 3-dimensional spatial coordinates. The position of the unknown point is then estimated from the parameters estimated for each camera. The suitability of this control scheme is demonstrated experimentally by determining the position of the unknown object in the plane.


컴퓨터 시각을 이용한 버얼리종 건조 잎 담배의 등급판별 가능성 (Feasibility in Grading the Burley Type Dried Tobacco Leaf Using Computer Vision)

  • 조한근;백국현
    • Journal of Biosystems Engineering
    • /
    • Vol. 22, No. 1
    • /
    • pp.30-40
    • /
    • 1997
  • A computer vision system was built to automatically grade leaf tobacco. A color image processing algorithm was developed to extract shape, color, and texture features, and an improved back-propagation algorithm for an artificial neural network was applied to grade Burley type dried leaf tobacco. The success rate of the three-grade classification (1, 3, 5) was higher than that of the six-grade classification (1, 2, 3, 4, 5, off), averaged over both the twenty-five and sixteen local pixel-sets. Moreover, the average grading success rate using only shape and color features was higher than the rate using shape, color, and texture features; the texture feature obtained by the spatial gray-level dependence method was therefore found not to be important in grading leaf tobacco. Overall, grading based on the shape, color, and texture features obtained by the machine vision system appeared inadequate to replace manual grading of Burley type dried leaf tobacco.

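A hedged sketch of the grading pipeline described above: extract simple shape and color features from a leaf image and classify the grade with a back-propagation network. scikit-learn's MLPClassifier stands in for the paper's improved back-propagation algorithm, and the specific features chosen here are illustrative assumptions.

```python
# Feature extraction (shape + color) and a small back-propagation classifier.
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def leaf_features(bgr_image):
    """Return a small shape + color feature vector for one leaf image."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    leaf = max(contours, key=cv2.contourArea)          # largest blob = the leaf
    area = cv2.contourArea(leaf)
    x, y, w, h = cv2.boundingRect(leaf)
    mean_h, mean_s, mean_v, _ = cv2.mean(hsv, mask=mask)
    return [area, w / h, mean_h, mean_s, mean_v]

def train_grader(X, y):
    """X: feature vectors of graded training leaves, y: their grades (e.g. 1, 3, 5)."""
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    return clf.fit(X, y)
```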

지능형 로봇 비전 프레임워크: VisionNEO (An Intelligent Robot Vision Framework)

  • 장세인;박충식;우영운;김광백
    • 한국정보통신학회:학술대회논문집
    • /
    • 한국해양정보통신학회 2009년도 춘계학술대회
    • /
    • pp.429-432
    • /
    • 2009
  • Today, intelligent robots are a field receiving a great deal of attention both domestically and abroad. An intelligent robot is a robot that perceives its external environment, makes its own judgments, and acts autonomously. As research and development in this area has intensified, research on robot software platforms that effectively support robot software development has also become active. To respond quickly to a constantly changing environment, a robot must use visual sensors, and to match its behavior to the situation, it must infer and learn actions appropriate to its surroundings. In this study, we developed VisionNEO, which places an image processing system on top of the NEO system, a rule-based artificial intelligence inference engine, and adds routines for controlling an intelligent robot. We thus propose an intelligent robot vision framework that understands its surroundings, infers and learns appropriate actions, and accumulates knowledge.

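The abstract describes a perceive-infer-act architecture: an image processing layer produces observations, and a rule-based inference engine (NEO) decides the robot's action. The sketch below is a purely illustrative toy version of that loop; every name in it (perceive, RULES, robot.execute) is hypothetical and not part of VisionNEO.

```python
# Toy perceive-infer-act loop: vision produces facts, rules choose an action.
def perceive(frame_gray):
    """Turn one grayscale frame into symbolic facts (brightness used as a toy proxy)."""
    return {"obstacle_ahead": frame_gray.mean() < 40,
            "target_visible": frame_gray.max() > 200}

RULES = [
    (lambda facts: facts.get("obstacle_ahead"), "turn_left"),
    (lambda facts: facts.get("target_visible"), "move_forward"),
]

def infer(facts):
    """Return the action of the first rule whose condition matches."""
    for condition, action in RULES:
        if condition(facts):
            return action
    return "stop"

def control_loop(camera, robot):
    """camera: iterable of frames; robot: object with an execute(action) method."""
    for frame in camera:
        robot.execute(infer(perceive(frame)))
```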

객체 탐지 과업에서의 트랜스포머 기반 모델의 특장점 분석 연구 (A Survey on Vision Transformers for Object Detection Task)

  • 하정민;이현종;엄정민;이재구
    • 대한임베디드공학회논문지
    • /
    • Vol. 17, No. 6
    • /
    • pp.319-327
    • /
    • 2022
  • Transformers are among the most prominent deep learning models; they have achieved great success in natural language processing and have also shown good performance in computer vision. In this survey, we categorize transformer-based models for computer vision, particularly for the object detection task, and perform comprehensive comparative experiments to understand the characteristics of each model. We evaluate the models, subdivided into standard transformers, transformers with key-point attention, and transformers that add coordinate-based attention, by comparing their object detection accuracy and real-time performance. For the performance comparison, we use two metrics: frames per second (FPS) and mean average precision (mAP). Finally, through various experiments, we identify the trends and relationships between detection accuracy and real-time performance across several transformer models.
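
The two metrics used in the survey can be reproduced with a small, generic harness: FPS from wall-clock timing of the detector, and mAP (the mean over classes, and over IoU thresholds for COCO, of average precision) from an external evaluator such as pycocotools. The `detector` below is an assumed placeholder callable, not a specific model from the paper.

```python
# Generic FPS measurement for any detector callable (image -> detections).
import time

def measure_fps(detector, images, warmup=5):
    """Average frames per second of `detector` over a list of images."""
    for img in images[:warmup]:          # warm-up runs are excluded from timing
        detector(img)
    start = time.perf_counter()
    for img in images[warmup:]:
        detector(img)
    elapsed = time.perf_counter() - start
    return (len(images) - warmup) / elapsed
```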

홀위치 측정을 위한 레이져비젼 시스템 개발 (A Laser Vision System for the High-Speed Measurement of Hole Positions)

  • 노영식;서영수;최원태
    • 대한전기학회:학술대회논문집
    • /
    • 대한전기학회 2006년도 심포지엄 논문집 정보 및 제어부문
    • /
    • pp.333-335
    • /
    • 2006
  • In this paper, we developed an inspection system for automobile parts using a laser vision sensor. The laser vision sensor obtains two-dimensional information from its vision camera and the third dimension from the laser. A jig and a robot are used to move the sensor between inspection positions. A computer integration system was also developed to control the system components and to manage the measurement data. The sensor measurements are compared with the CAD data, and the effectiveness of the measurement results is verified by using the CAD model to obtain information about the measured object.

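As a rough sketch of the measurement step, the code below finds circular hole centres in a camera image and compares them with nominal CAD positions. Hough-circle detection stands in for the laser-vision sensing described above, and the pixel-to-millimetre scale and tolerance are assumed calibration values.

```python
# Detect hole centres in an image and report deviations from CAD nominals.
import cv2
import numpy as np

MM_PER_PIXEL = 0.1     # assumed calibration from a reference target

def measure_holes(gray_image):
    """Return detected hole centres in millimetres, shape (N, 2)."""
    blurred = cv2.medianBlur(gray_image, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                               param1=100, param2=30, minRadius=3, maxRadius=50)
    if circles is None:
        return np.empty((0, 2))
    return circles[0, :, :2] * MM_PER_PIXEL

def compare_with_cad(measured_mm, cad_mm, tol_mm=0.5):
    """For each CAD hole, report the deviation to the nearest measured centre."""
    report = []
    for nominal in np.asarray(cad_mm):
        deviations = np.linalg.norm(measured_mm - nominal, axis=1)
        error = deviations.min() if len(deviations) else np.inf
        report.append((tuple(nominal), error, error <= tol_mm))
    return report
```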

Volume Control using Gesture Recognition System

  • Shreyansh Gupta;Samyak Barnwal
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 24, No. 6
    • /
    • pp.161-170
    • /
    • 2024
  • With recent technological advances, humans have made great progress in ease of living, and sight, motion, sound, and speech are now used to control various applications and software. In this paper, we explore a project in which hand gestures play the central role. Gesture control is a heavily researched topic that continues to evolve, and this project makes use of computer vision. The main objective achieved is controlling computer settings with hand gestures: we create a module that acts as a volume-control program, in which hand gestures adjust the computer's system volume. The implementation uses OpenCV. The module uses the computer's web camera to capture images or video, processes them to extract the needed information, and then, based on the input, adjusts the volume settings of the computer; it can both increase and decrease the volume. The only setup required is a web camera to capture the user's input. The program performs gesture recognition with OpenCV, Python, and their libraries, identifies the specified hand gestures, and uses them to change the device setting. The objective is to adjust the volume of a computer without physical interaction through a mouse or keyboard. OpenCV is widely used for image processing and computer vision applications in this domain; its community numbers more than 47,000 people, and as of a 2020 survey the estimated number of downloads exceeded 18 million.
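
A hedged sketch of the gesture-to-volume idea: track the thumb and index fingertips with MediaPipe Hands and map their separation to a 0-100 volume level. Actually setting the system volume is platform specific (e.g. pycaw on Windows) and is left as a print statement here; the distance range used for the mapping is an assumption, not taken from the paper.

```python
# Map the thumb-index fingertip distance from a webcam feed to a volume level.
import cv2
import mediapipe as mp
import numpy as np

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)                       # default web camera

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        lm = result.multi_hand_landmarks[0].landmark
        thumb, index = lm[4], lm[8]             # thumb tip and index fingertip
        dist = np.hypot(thumb.x - index.x, thumb.y - index.y)
        volume = int(np.clip(np.interp(dist, [0.03, 0.30], [0, 100]), 0, 100))
        print(f"volume -> {volume}%")           # replace with a system-volume call
    cv2.imshow("gesture volume", frame)
    if cv2.waitKey(1) & 0xFF == 27:             # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```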

Automation for Oyster Hinge Breaking System

  • So, J.D.;Wheaton, F.W.
    • 한국농업기계학회:학술대회논문집
    • /
    • 한국농업기계학회 1996년도 International Conference on Agricultural Machinery Engineering Proceedings
    • /
    • pp.658-667
    • /
    • 1996
  • A computer vision system was developed to automatically detect and locate the oyster hinge line, one step in shucking an oyster. The system consisted of a personal computer, a color frame grabber, a color CCD video camera with a zoom lens, two video monitors, a specially designed fixture to hold the oyster, a lighting system to illuminate the oyster, and the system software. The software combined commercially available programs with custom programs developed in Microsoft C. Test results showed that image resolution was the most important variable influencing hinge detection efficiency. Whether the trimmed-off flat white surface area was dry or wet, the oyster size relative to the selected image size, and the image processing methods used all influenced the hinge-locating efficiency. The best combination of computer software and hardware successfully located 97% of the oyster hinge lines tested. This efficiency was achieved using a camera field of view of 1.9 by 1.5 cm, a 180 by 170 pixel image window, and a dry trimmed-off oyster hinge end surface.

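A hypothetical sketch of the hinge-locating step: within the fixed image window framing the trimmed hinge end, isolate the dark hinge line by thresholding and fit a straight line through it. The fixture, lighting, and window size come from the abstract; the Otsu thresholding and line-fitting choices are assumptions, not the paper's method.

```python
# Locate the dark hinge line inside a fixed grayscale image window.
import cv2
import numpy as np

def locate_hinge_line(window_gray):
    """Return (vx, vy, x0, y0) of the fitted hinge line, or None if not found."""
    _, dark = cv2.threshold(window_gray, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    ys, xs = np.nonzero(dark)                  # pixels belonging to the dark hinge
    if len(xs) < 20:                           # too few pixels: detection failed
        return None
    pts = np.column_stack([xs, ys]).astype(np.float32)
    vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    return vx, vy, x0, y0
```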