• Title/Abstract/Keyword: vision-based techniques


스테레오 비전센서를 이용한 차선감지 시스템 연구 (A Study on Lane Sensing System Using Stereo Vision Sensors)

  • 하건수;박재식;이광운;박재학
    • 대한기계학회논문집A, Vol. 28 No. 3, pp. 230-237, 2004
  • Lane Sensing techniques based on vision sensors are regarded promising because they require little infrastructure on the highway except clear lane markers. However, they require more intelligent processing algorithms in vehicles to generate the previewed roadway from the vision images. In this paper, a lane sensing algorithm using vision sensors is developed to improve the sensing robustness. The parallel stereo-camera is utilized to regenerate the 3-dimensional road geometry. The lane geometry models are derived such that their parameters represent the road curvature, lateral offset and heading angle, respectively. The parameters of the lane geometry models are estimated by the Kalman filter and utilized to reconstruct the lane geometry in the global coordinate. The inverse perspective mapping from the image plane to the global coordinate considers roll and pitch motions of a vehicle so that the mapping error is minimized during acceleration, braking or steering. The proposed sensing system has been built and implemented on a 1/10-scale model car.
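
The abstract above does not include code, but the Kalman-filter step it describes can be illustrated with a minimal NumPy sketch. The state vector [curvature, lateral offset, heading angle], the random-walk transition model, and the noise covariances below are assumptions made for illustration, not the paper's actual models.

```python
import numpy as np

# Assumed linear Kalman filter over the three lane parameters named in the
# abstract: [curvature, lateral offset, heading angle]. The state is modeled
# as a random walk and measured directly (with noise) once per frame.
F = np.eye(3)                    # state transition (random-walk assumption)
H = np.eye(3)                    # each parameter is measured every frame
Q = np.diag([1e-6, 1e-3, 1e-4])  # assumed process noise
R = np.diag([1e-5, 1e-2, 1e-3])  # assumed measurement noise

def kalman_step(x, P, z):
    """One predict/update cycle for a new per-frame lane measurement z."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(3), np.eye(3)    # initial [curvature, offset, heading] estimate
for z in [np.array([0.001, 0.15, 0.02]), np.array([0.0012, 0.14, 0.018])]:
    x, P = kalman_step(x, P, z)
print(x)  # smoothed [curvature, lateral offset, heading angle]
```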

DEVELOPMENT OF A MACHINE VISION SYSTEM FOR WEED CONTROL USING PRECISION CHEMICAL APPLICATION

  • Lee, Won-Suk;David C. Slaughter;D.Ken Giles
    • 한국농업기계학회:학술대회논문집, 한국농업기계학회 1996년도 International Conference on Agricultural Machinery Engineering Proceedings, pp. 802-811, 1996
  • Farmers need alternatives for weed control due to the desire to reduce the chemicals used in farming. However, conventional mechanical cultivation cannot selectively remove weeds located in the seedline between crop plants, and there are no selective herbicides for some crop/weed situations. Since hand labor is costly, an automated weed control system could be feasible. A robotic weed control system can also reduce or eliminate the need for chemicals. Currently no such system exists for removing weeds located in the seedline between crop plants. The goal of this project is to build a real-time, machine vision weed control system that can detect crop and weed locations, remove weeds, and thin crop plants. In order to accomplish this objective, a real-time robotic system was developed to identify and locate outdoor plants using machine vision technology, pattern recognition techniques, knowledge-based decision theory, and robotics. The prototype weed control system is composed of a real-time computer vision system, a uniform illumination device, and a precision chemical application system. The prototype system is mounted on the UC Davis Robotic Cultivator, which finds the center of the seedline of crop plants. Field tests showed that the robotic spraying system correctly targeted simulated weeds (metal coins of 2.54 cm diameter) with an average error of 0.78 cm and a standard deviation of 0.62 cm.


A Vision-Based Method to Find Fingertips in a Closed Hand

  • Chaudhary, Ankit;Vatwani, Kapil;Agrawal, Tushar;Raheja, J.L.
    • Journal of Information Processing Systems, Vol. 8 No. 3, pp. 399-408, 2012
  • Hand gesture recognition is an important area of research in the field of Human Computer Interaction (HCI). The geometric attributes of the hand play an important role in hand shape reconstruction and gesture recognition. In particular, fingertips are one of the important attributes for the detection of hand gestures and can provide valuable information from hand images. Many methods are available in the scientific literature for fingertip detection with an open hand, but results are very poor when the hand is closed. This paper presents a new method for the detection of fingertips in a closed hand using a corner detection method and an advanced edge detection algorithm. It is important to note that skin color segmentation did not work for fingertip detection in a closed hand. Thus the proposed method applied Gabor filter techniques for the detection of edges and then applied the corner detection algorithm to find fingertips along those edges. To check its accuracy, the method was tested on a large number of images taken with a webcam and achieved a high detection rate. The method was further implemented on video to test its validity for real-time image capture. This closed-hand fingertip detection would help in controlling an electro-mechanical robotic hand via hand gestures in a natural way.
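
A rough sketch of the pipeline the abstract outlines (Gabor filtering for edges, then corner detection for fingertip candidates) is given below using OpenCV. The kernel sizes, orientations, thresholds, and the image file name are assumed values, not the authors' tuned parameters.

```python
import cv2
import numpy as np

def fingertip_candidates(gray):
    """Gabor-filtered edges followed by corner detection (parameter values
    are placeholders, not the paper's settings)."""
    # Gabor filtering at several orientations to emphasize edge structure
    responses = []
    for theta in np.arange(0, np.pi, np.pi / 4):
        kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5, 0)
        responses.append(cv2.filter2D(gray, cv2.CV_32F, kernel))
    edges = np.max(responses, axis=0)
    edges = cv2.normalize(edges, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Strong corners on the edge map are taken as fingertip candidates
    corners = cv2.goodFeaturesToTrack(edges, maxCorners=10,
                                      qualityLevel=0.05, minDistance=15)
    return [] if corners is None else corners.reshape(-1, 2)

img = cv2.imread("closed_hand.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical image
if img is not None:
    print(fingertip_candidates(img))
```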

협력적인 상호작용을 위한 테이블-탑 디스플레이 기술 동향 (Survey: The Tabletop Display Techniques for Collaborative Interaction)

  • 김송국;이칠우
    • 한국콘텐츠학회:학술대회논문집, 한국콘텐츠학회 2006년도 추계 종합학술대회 논문집, pp. 616-621, 2006
  • Vision-based research on recognizing user intent and actions for human-computer interaction has recently become very active. Among such work, tabletop display systems have grown into a variety of applications in step with advances in touch-sensing technology and the pursuit of collaborative work. Earlier tabletop displays supported only a single user, but current systems support multiple users through multi-touch. As a result, collaborative work, the ultimate goal of tabletop displays, and interaction among four elements (human, computer, projected objects, and physical objects) have become feasible. In general, tabletop display systems are designed around the following four elements: bare-hand multi-touch interaction, collaborative work realized through simultaneous user interaction, information manipulation by touching arbitrary positions, and the use of physical objects as interaction tools. This paper classifies state-of-the-art multi-touch sensing techniques for tabletop display systems into vision-based and non-vision-based methods and analyzes them from a critical point of view. It also classifies tabletop display studies by system configuration and describes their strengths and weaknesses as well as the application areas in which they are actually used.


멀티터치를 위한 테이블-탑 디스플레이 기술 동향 (Survey: Tabletop Display Techniques for Multi-Touch Recognition)

  • 김송국;이칠우
    • 한국콘텐츠학회논문지, Vol. 7 No. 2, pp. 84-91, 2007
  • Vision-based research on recognizing user intent and actions for human-computer interaction has recently become very active. Among such work, tabletop display systems have grown into a variety of applications in step with advances in touch-sensing technology and the pursuit of collaborative work. Earlier tabletop displays supported only a single user, but current systems support multiple users through multi-touch. As a result, collaborative work, the ultimate goal of tabletop displays, and interaction among four elements (human, computer, projected objects, and physical objects) have become feasible. In general, tabletop display systems are designed around the following four aspects: bare-hand multi-touch interaction, collaborative work realized through simultaneous user interaction, information manipulation by touching arbitrary positions, and the use of physical objects as interaction tools. This paper classifies state-of-the-art multi-touch sensing techniques for tabletop display systems into vision-based and non-vision-based methods and analyzes them from a critical point of view. It also classifies tabletop display studies by system configuration and describes their strengths and weaknesses as well as the application areas in which they are actually used.

VRML과 영상오버레이를 이용한 로봇의 경로추적 (A Path tracking algorithm and a VRML image overlay method)

  • 손은호;;김영철;정길도
    • 대한전자공학회:학술대회논문집, 대한전자공학회 2006년도 하계종합학술대회, pp. 907-908, 2006
  • We describe a method for localizing a mobile robot in its working environment using a vision system and Virtual Reality Modeling Language (VRML). The robot identifies landmarks in the environment using image processing and neural network pattern matching techniques, and then performs self-positioning with a vision system based on a well-known localization algorithm. After the self-positioning procedure, the 2-D scene from the vision system is overlaid with the VRML scene. This paper describes how the self-positioning is realized and shows the overlay of the 2-D and VRML scenes. The method successfully defines a robot's path.
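
The abstract refers to a "well-known localization algorithm" without naming it; as one hedged illustration, the sketch below estimates a 2-D position by least squares from ranges to landmarks at known positions. The landmark coordinates and the range-based measurement model are assumptions for this sketch, not necessarily the paper's method.

```python
import numpy as np

def localize_from_landmarks(landmarks, ranges):
    """Least-squares position fix from ranges to known landmarks
    (a stand-in for the unspecified localization algorithm)."""
    L = np.asarray(landmarks, dtype=float)   # (n, 2) known landmark positions
    r = np.asarray(ranges, dtype=float)      # (n,)  measured distances
    # Linearize ||p - L_i||^2 = r_i^2 by subtracting the first equation
    A = 2.0 * (L[1:] - L[0])
    b = (r[0] ** 2 - r[1:] ** 2
         + np.sum(L[1:] ** 2, axis=1) - np.sum(L[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example with three assumed landmarks in the robot's workspace
landmarks = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
true_pos = np.array([1.0, 1.5])
ranges = [np.linalg.norm(true_pos - np.array(l)) for l in landmarks]
print(localize_from_landmarks(landmarks, ranges))  # ~[1.0, 1.5]
```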


RAVIP: Real-Time AI Vision Platform for Heterogeneous Multi-Channel Video Stream

  • Lee, Jeonghun;Hwang, Kwang-il
    • Journal of Information Processing Systems, Vol. 17 No. 2, pp. 227-241, 2021
  • Object detection techniques based on deep learning, such as YOLO, achieve high detection performance and precision on a single-channel video stream. To extend to real-time multi-channel object detection, however, high-performance hardware is required. In this paper, we propose a novel back-end server framework, a real-time AI vision platform (RAVIP), which extends the object detection function from a single channel to simultaneous multi-channel operation and works well even on low-end server hardware. RAVIP assembles appropriate component modules from the RODEM (real-time object detection module) Base to create per-channel instances for each channel, enabling efficient parallelization of object detection instances on limited hardware resources through continuous monitoring of resource utilization. Practical experiments show that RAVIP can optimize CPU, GPU, and memory utilization while performing object detection service in a multi-channel situation. In addition, RAVIP has been proven to provide object detection services at 25 FPS for all 16 channels simultaneously.
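
As a rough illustration of per-channel parallelization on shared hardware, the sketch below runs one detector worker per video channel and keeps a simple per-channel FPS counter. The names (ChannelDetector, dummy_detect) are hypothetical and only approximate the RODEM-based instances described in the abstract.

```python
import queue
import threading
import time

class ChannelDetector(threading.Thread):
    """Hypothetical per-channel detection worker."""
    def __init__(self, channel_id, frame_queue, detect_fn):
        super().__init__(daemon=True)
        self.channel_id = channel_id
        self.frame_queue = frame_queue
        self.detect_fn = detect_fn        # e.g., a YOLO inference call
        self.fps = 0.0

    def run(self):
        frames, start = 0, time.time()
        while True:
            frame = self.frame_queue.get()
            if frame is None:             # sentinel: channel closed
                break
            self.detect_fn(frame)         # run object detection on this frame
            frames += 1
            self.fps = frames / (time.time() - start)

def dummy_detect(frame):
    time.sleep(0.005)                     # stand-in for model inference

# Example wiring: 16 channels, each with its own queue and worker instance
queues = [queue.Queue(maxsize=30) for _ in range(16)]
workers = [ChannelDetector(i, q, dummy_detect) for i, q in enumerate(queues)]
for w in workers:
    w.start()
for q in queues:
    for _ in range(10):
        q.put(object())                   # placeholder frames
    q.put(None)
for w in workers:
    w.join()
print([round(w.fps, 1) for w in workers])
```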

머신 비전을 이용한 금형 품질 검사 시스템 개발 (Development of Stamping Die Quality Inspection System Using Machine Vision)

  • 윤협상
    • 산업경영시스템학회지, Vol. 46 No. 4, pp. 181-189, 2023
  • In this paper, we present a case study of developing an MVIS (Machine Vision Inspection System) designed for exterior quality inspection of stamping dies used in the production of automotive exterior components in a small to medium-sized factory. While the primary processes within the factory, including machining, transportation, and loading, have been automated using PLCs, CNC machines, and robots, the final quality inspection process still relies on manual labor. We implement the MVIS with general-purpose industrial cameras and Python-based open-source libraries and frameworks for rapid and low-cost development. The MVIS can play a major role in improving the throughput and lead time of stamping dies. Furthermore, the processed inspection images can be leveraged for future process monitoring and improvement by applying deep learning techniques.
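
A minimal Python/OpenCV sketch of a reference-comparison exterior check is shown below. The difference threshold, minimum defect area, and file names are placeholders; the abstract does not state the actual inspection criteria of the MVIS.

```python
import cv2

def inspect_surface(image_path, reference_path,
                    diff_threshold=40, min_defect_area=50):
    """Flag regions where the inspected die image deviates from a known-good
    reference (OpenCV 4.x; thresholds are assumed values)."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    if img is None or ref is None:
        raise FileNotFoundError("missing inspection or reference image")
    img = cv2.resize(img, (ref.shape[1], ref.shape[0]))

    # Difference against the reference, then binarize
    diff = cv2.absdiff(img, ref)
    _, mask = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)

    # Connected regions above a minimum area are reported as defects
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_defect_area]

# defects = inspect_surface("die_view.png", "die_view_ok.png")  # hypothetical files
```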

Sorting for Plastic Bottles Recycling using Machine Vision Methods

  • SanaSadat Mirahsani;Sasan Ghasemipour;AmirAbbas Motamedi
    • International Journal of Computer Science & Network Security, Vol. 24 No. 6, pp. 89-98, 2024
  • Due to population growth and the consequent increase in plastic waste, recovering this portion of the waste stream is an undeniable necessity. Moreover, if plastic recycling is organized as a systematic, controlled process, it can be effective in creating jobs and maintaining environmental health. Waste collection has become a major problem in many large cities owing to a lack of proper planning as waste grows with population concentration and changing consumption patterns. Today, waste management is no longer limited to waste collection; collection is only one of its important areas, alongside training, segregation, recycling, and processing. In this study, a systematic machine vision method is proposed for sorting plastic bottles of different colors for recycling purposes. In this method, image classification and segmentation techniques are presented to improve the performance of plastic bottle classification. Evaluation of the proposed method and comparison with previous works showed its good performance.
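
One common way to realize color-based sorting is HSV segmentation; the sketch below scores a bottle image against a few hypothetical color ranges and reports the dominant class. The ranges and the image file name are assumptions, not the paper's calibrated values.

```python
import cv2
import numpy as np

# Hypothetical HSV ranges for a few bottle colors; a real sorting line would
# calibrate these per camera and lighting setup.
COLOR_RANGES = {
    "green": ((35, 60, 60), (85, 255, 255)),
    "blue":  ((90, 60, 60), (130, 255, 255)),
    "clear": ((0, 0, 180), (180, 40, 255)),   # low saturation, bright
}

def classify_bottle(bgr_image):
    """Return the color class whose HSV mask covers the largest area."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    scores = {name: int(cv2.countNonZero(cv2.inRange(hsv, np.array(lo), np.array(hi))))
              for name, (lo, hi) in COLOR_RANGES.items()}
    return max(scores, key=scores.get)

img = cv2.imread("bottle.jpg")   # hypothetical frame from the sorting line
if img is not None:
    print(classify_bottle(img))
```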

EDF와 하프변환 기반의 차선관련 정보 검출 (Extraction of Lane-Related Information Based on an EDF and Hough Transform)

  • 이준웅;이기용
    • 한국자동차공학회논문집, Vol. 13 No. 3, pp. 48-57, 2005
  • This paper presents a novel algorithm to extract lane-related information based on machine vision techniques. The algorithm makes up for the weak points of the former method, the Edge Distribution Function (EDF)-based approach, by introducing a Lane Boundary Pixel Extractor (LBPE) and the well-known Hough Transform (HT). The LBPE, which serves as a filter to extract pixels expected to lie on lane boundaries, enhances the robustness of the machine vision and provides its results to the HT implementation and EDF construction. The HT forms the accumulator arrays and extracts the lane-related parameters composed of orientation and distance. Furthermore, as the histogram of edge magnitude with respect to edge orientation angle, the EDF has peaks at the orientations corresponding to lane slopes in the perspective image domain. Therefore, by fusing the results from the EDF and the HT, the proposed algorithm improves the confidence of the extracted lane-related information. The system shows successful results under various degrees of illumination.
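
The two cues the abstract fuses, the EDF and the Hough transform, can be sketched with OpenCV and NumPy as below. The gradient operator, bin count, and thresholds are assumed values rather than the paper's parameters, and the LBPE is approximated by a simple magnitude threshold.

```python
import cv2
import numpy as np

def edf_and_hough(gray, mag_threshold=50):
    """Build an Edge Distribution Function (magnitude-weighted histogram over
    edge orientation) and run a Hough transform over strong edge pixels."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    orientation = np.degrees(np.arctan2(gy, gx)) % 180.0

    # EDF: peaks appear near the orientations of the lane boundaries
    strong = magnitude > mag_threshold          # crude stand-in for the LBPE
    edf, _ = np.histogram(orientation[strong], bins=90, range=(0, 180),
                          weights=magnitude[strong])

    # Hough transform over the same strong-edge pixels -> (distance, angle)
    edge_map = strong.astype(np.uint8) * 255
    lines = cv2.HoughLines(edge_map, 1, np.pi / 180, 120)
    return edf, lines

img = cv2.imread("road.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frame
if img is not None:
    edf, lines = edf_and_hough(img)
    print(np.argmax(edf), 0 if lines is None else len(lines))
```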