• Title/Abstract/Keyword: vision-based method

1,456 results found (processing time: 0.029 s)

YOLOv8을 이용한 실시간 화재 검출 방법 (Real-Time Fire Detection Method Using YOLOv8)

  • 이태희;박천수
    • 반도체디스플레이기술학회지 / Vol. 22, No. 2 / pp.77-80 / 2023
  • Since fires in uncontrolled environments pose serious risks to society and individuals, many researchers have been investigating technologies for the early detection of fires that occur in everyday life. Recently, with the development of deep-learning vision technology, research on fire-detection models using neural-network backbones such as the Transformer and the Convolutional Neural Network has been actively conducted. Vision-based fire-detection systems can address many of the problems of physical-sensor-based fire-detection systems. This paper proposes a fire-detection method using the latest YOLOv8, which improves on existing fire-detection methods. The proposed method trains a YOLOv8 model on a universal fire-detection dataset to build a system that detects sparks and smoke in input images. We also demonstrate the superiority of the proposed method through experiments comparing it with existing methods.

  • PDF
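
The abstract above describes YOLOv8-style flame-and-smoke detection. As a rough illustration of the post-processing stage such single-stage detectors rely on (not the authors' code), a minimal non-maximum suppression over hypothetical fire/smoke candidate boxes might look like:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order) > 0:
        i = order[0]
        keep.append(int(i))
        order = np.array([j for j in order[1:] if iou(boxes[i], boxes[j]) < iou_thresh])
    return keep

# Two overlapping "flame" candidates and one distant "smoke" candidate.
boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [200, 200, 240, 240]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # the weaker overlapping box is suppressed
```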

Multi-robot Mapping Using Omnidirectional-Vision SLAM Based on Fisheye Images

  • Choi, Yun-Won;Kwon, Kee-Koo;Lee, Soo-In;Choi, Jeong-Won;Lee, Suk-Gyu
    • ETRI Journal / Vol. 36, No. 6 / pp.913-923 / 2014
  • This paper proposes a global mapping algorithm for multiple robots based on an omnidirectional-vision simultaneous localization and mapping (SLAM) approach, with objects extracted using Lucas-Kanade optical-flow motion detection from images obtained through fisheye lenses mounted on the robots. The multi-robot mapping algorithm draws a global map from the map data obtained by each individual robot. Global mapping takes a long time to process because map data must be exchanged among the robots while searching the whole area. An omnidirectional image sensor has many advantages for object detection and mapping because it measures all information around a robot simultaneously. The computational cost of the correction algorithm is reduced relative to existing methods by correcting only the object's feature points. The proposed algorithm has two steps: first, a local map is created by omnidirectional-vision SLAM for each individual robot; second, a global map is generated by merging the individual maps from the multiple robots. The reliability of the proposed mapping algorithm is verified through a comparison of maps based on the proposed algorithm and real maps.
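
The global map above is produced by transforming each robot's local map into a common frame and stacking the results. A minimal sketch of that merging step, assuming known 2-D robot poses (an illustration of the idea, not the paper's implementation):

```python
import numpy as np

def local_to_global(points, pose):
    """Transform Nx2 local-frame points into the global frame.

    pose = (x, y, theta): the robot's position and heading in the global map.
    """
    x, y, theta = pose
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return points @ R.T + np.array([x, y])

def merge_maps(local_maps, poses):
    """Stack every robot's transformed landmarks into one global map."""
    return np.vstack([local_to_global(m, p) for m, p in zip(local_maps, poses)])

# Robot A at the origin; robot B 5 m east and rotated 90 degrees.
map_a = np.array([[1.0, 0.0]])
map_b = np.array([[1.0, 0.0]])
global_map = merge_maps([map_a, map_b], [(0, 0, 0), (5, 0, np.pi / 2)])
print(global_map)  # ≈ [[1, 0], [5, 1]]
```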

영상 기반 항법을 위한 가우시안 혼합 모델 기반 파티클 필터 (Particle Filters using Gaussian Mixture Models for Vision-Based Navigation)

  • 홍경우;김성중;방효충;김진원;서일원;박장호
    • 한국항공우주학회지 / Vol. 47, No. 4 / pp.274-282 / 2019
  • Vision-based navigation for unmanned aerial vehicles is an important technology that can compensate for the vulnerabilities of the widely used integrated GPS/INS navigation system, and it is being actively researched. However, conventional image-matching techniques have the drawback of being unable to properly account for actual aircraft flight conditions. This paper therefore proposes a particle filter based on Gaussian mixture models for vision-based navigation. The proposed particle filter models both the camera image and the reference database as Gaussian mixtures and estimates the vehicle's position from the similarity between the two. Position-estimation performance is confirmed through Monte Carlo simulations.
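
To illustrate the core idea of weighting position hypotheses by a Gaussian-mixture likelihood, here is a hedged 1-D sketch; the mixture parameters and the measurement model are invented for illustration and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def gmm_pdf(x, means, sigmas, weights):
    """1-D Gaussian-mixture density, vectorized over x."""
    x = np.atleast_1d(x)[:, None]
    comps = weights * np.exp(-0.5 * ((x - means) / sigmas) ** 2) \
            / (sigmas * np.sqrt(2.0 * np.pi))
    return comps.sum(axis=1)

# Hypothetical setup: 500 candidate positions on a 10 m strip,
# measurement noise modeled as a two-component Gaussian mixture.
particles = rng.uniform(0.0, 10.0, 500)
means  = np.array([0.0, 0.5])       # mixture component offsets
sigmas = np.array([0.3, 1.0])
mix_w  = np.array([0.7, 0.3])

z = 3.0                              # observed position from image matching
w = gmm_pdf(z - particles, means, sigmas, mix_w)  # likelihood per particle
w /= w.sum()                         # normalize importance weights
estimate = np.sum(w * particles)     # weighted-mean position estimate
print(round(estimate, 2))            # close to the observed position
```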

Experimental Study of Spacecraft Pose Estimation Algorithm Using Vision-based Sensor

  • Hyun, Jeonghoon;Eun, Youngho;Park, Sang-Young
    • Journal of Astronomy and Space Sciences / Vol. 35, No. 4 / pp.263-277 / 2018
  • This paper presents a vision-based relative pose estimation algorithm and its validation through both numerical and hardware experiments. The algorithm and the hardware system were designed together, taking actual experimental conditions into account. Two estimation techniques were utilized: a nonlinear least-squares method for initial estimation, and an extended Kalman filter for subsequent on-line estimation. A measurement model of the vision sensor and equations of motion including nonlinear perturbations were utilized in the estimation process. Numerical simulations were performed and analyzed for both autonomous docking and formation-flying scenarios. A configuration of LED-based beacons was designed to avoid measurement singularity, and its structural information was implemented in the estimation algorithm. The proposed algorithm was then verified experimentally using the Autonomous Spacecraft Test Environment for Rendezvous In proXimity (ASTERIX) facility. Additionally, a laser distance meter was added to the estimation algorithm to improve the relative position estimation accuracy. Throughout this study, the performance required for autonomous docking was characterized by confirming how the estimation accuracy changes with the level of measurement error. In addition, the hardware experiments confirmed the effectiveness of the suggested algorithm and its applicability to actual tasks in the real world.

비젼 시스템의 에지 검출 방법을 이용한 도립 진자의 편차 각 (Deviation Angles of Inverted Pendulum by Edge Detection Method of Vision System)

  • 류상문;박종규;한일석;장성환;안태천
    • 대한전기학회 학술대회논문집 / 1999 Summer Conference Proceedings B / pp.797-799 / 1999
  • In this paper, an edge-enhancement and edge-detection algorithm, one of the fundamental image-processing operations, is considered. Edge detection is among the most useful and important methods for image processing and image analysis. A vision system based on this processing is proposed and applied to an inverted pendulum in order to automatically acquire the angle between the pendulum bar and the vertical reference line. The angles obtained from images of the computer vision system provide useful information for control of a real inverted-pendulum system, and the inverted pendulum will subsequently be controlled by the proposed method.

  • PDF
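
A common way to obtain the bar-to-vertical angle from detected edge pixels is to fit a line through them and measure its tilt; a small sketch under that assumption (not the paper's code):

```python
import numpy as np

def deviation_angle(edge_points):
    """Angle (degrees) between the pendulum bar and the vertical reference.

    edge_points: Nx2 array of (x, y) pixels lying on the detected bar edge.
    The principal direction of the pixels gives the bar orientation.
    """
    pts = np.asarray(edge_points, float)
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)   # first row = dominant direction
    dx, dy = vt[0]
    return np.degrees(np.arctan2(abs(dx), abs(dy)))

# Synthetic edge pixels of a bar tilted 10 degrees from vertical.
t = np.linspace(0, 100, 50)
pts = np.stack([t * np.sin(np.radians(10)), t * np.cos(np.radians(10))], axis=1)
print(round(deviation_angle(pts), 1))  # 10.0
```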

컴퓨터 비젼을 이용한 파이프 형상 검사시스템에 관한 연구 (A Study about Pipe Shape Inspection System for Computer Vision)

  • 김형석;이병룡;양순용;안경관;오현옥
    • 한국정밀공학회 학술대회논문집 / 2003 Spring Conference Proceedings / pp.946-950 / 2003
  • In this paper, a computer-vision-based pipe-shape inspection algorithm is developed. The algorithm uses a modified Hough transformation and a line-scanning approach to identify the edge lines and radii of the pipe image, from which the eccentricity and dimensions of the pipe end are calculated. Line and circle detection is performed by applying the Laplace operator to input images acquired from the front and side cameras. To minimize memory usage and processing time, a clustering method with the modified Hough transformation is used for line detection. The inner and outer radii of the pipe are calculated by the proposed line-scanning method, which scans several lines along the X and Y axes and computes the eccentricity of the inner and outer circles, by which pipes with a defective end shape can be classified and removed.

  • PDF
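
The line-scanning idea can be sketched as follows: scan each row for the first and last foreground pixels, average the midpoints to locate each circle's center, and take the offset between the inner and outer centers as the eccentricity. A hedged toy example on a synthetic pipe-end mask (not the authors' implementation):

```python
import numpy as np

def circle_center_by_scanning(mask):
    """Estimate a circle's center by row scanning: the midpoint of the first
    and last foreground pixel in each row, averaged over all rows."""
    mids_x, mids_y = [], []
    for y, row in enumerate(mask):
        xs = np.flatnonzero(row)
        if xs.size:
            mids_x.append((xs[0] + xs[-1]) / 2.0)
            mids_y.append(y)
    return np.mean(mids_x), np.mean(mids_y)

# Synthetic pipe end: outer disc centered at (50, 50),
# inner bore shifted to (52, 50) -> an eccentric pipe.
yy, xx = np.mgrid[0:100, 0:100]
outer = (xx - 50) ** 2 + (yy - 50) ** 2 <= 40 ** 2
inner = (xx - 52) ** 2 + (yy - 50) ** 2 <= 20 ** 2

cx_o, cy_o = circle_center_by_scanning(outer)
cx_i, cy_i = circle_center_by_scanning(inner)
eccentricity = np.hypot(cx_o - cx_i, cy_o - cy_i)
print(round(eccentricity, 1))  # 2.0 pixels of center offset
```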

기하학적 패턴 매칭을 이용한 3차원 비전 검사 알고리즘 (3D Vision Inspection Algorithm using Geometrical Pattern Matching Method)

  • 정철진;허경무;김장기
    • 제어로봇시스템학회논문지 / Vol. 10, No. 1 / pp.54-59 / 2004
  • We suggest a 3D vision inspection algorithm based on external shape features. Because many electronic parts have a regular shape, if we maintain a database of patterns and can recognize an object using its pattern in that database, we can inspect many types of electronic parts. Our proposed algorithm uses a geometrical pattern-matching method and a 3D database of the electronic parts. We applied the suggested algorithm to inspecting several objects, including a typical IC and a capacitor. Through the experiments, we found that the suggested algorithm is more effective and more robust to the inspection environment (rotation angle, light source, etc.) than conventional 2D inspection methods. We also compared the suggested algorithm with the feature-space trajectory method.
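
One classic way to realize rotation-invariant geometrical pattern matching is to compare pose-invariant signatures of a part's corner points against the database entry. This toy sketch (not the authors' algorithm) uses sorted pairwise distances, which are unchanged by rotation and translation:

```python
import numpy as np

def shape_signature(corners):
    """Rotation/translation-invariant signature: the sorted pairwise
    distances between a part's corner points."""
    pts = np.asarray(corners, float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return np.sort(d[np.triu_indices(len(pts), k=1)])

def matches(corners, template, tol=0.5):
    """A part matches the database entry when the signatures agree within tol."""
    return np.allclose(shape_signature(corners), shape_signature(template), atol=tol)

# Database template: a 10x4 rectangular IC body; test part: the same
# body rotated 30 degrees and translated.
template = np.array([[0, 0], [10, 0], [10, 4], [0, 4]], float)
theta = np.radians(30)
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
rotated = template @ R.T + np.array([3.0, 7.0])
print(matches(rotated, template))  # True
```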

컴퓨터 비젼을 이용한 파이프 검사시스템에 대한 연구 (A Study about Pipe inspection System for Computer Vision)

  • 박찬호;이병룡;양순용;안경관;오현옥
    • 한국정밀공학회 학술대회논문집 / 2002 Autumn Conference Proceedings / pp.521-525 / 2002
  • In this paper, a computer-vision-based pipe-inspection algorithm is developed. The algorithm uses a modified Hough transformation and a line-scanning approach to identify the edge lines and radii of the pipe image, from which the eccentricity and dimensions of the pipe end are calculated. Line and circle detection is performed by applying the Laplace operator to input images acquired from the front and side cameras. To minimize memory usage and processing time, a clustering method with the modified Hough transformation is used for line detection. The inner and outer radii of the pipe are calculated by the proposed line-scanning method, which scans several lines along the X and Y axes and computes the eccentricity of the inner and outer circles, by which pipes with a defective end shape can be classified and removed.

  • PDF

VRML과 영상오버레이를 이용한 로봇의 경로추적 (A Path tracking algorithm and a VRML image overlay method)

  • 손은호;;김영철;정길도
    • 대한전자공학회 학술대회논문집 / 2006 Summer Conference / pp.907-908 / 2006
  • We describe a method for localizing a mobile robot in its working environment using a vision system and the Virtual Reality Modeling Language (VRML). The robot identifies landmarks in the environment using image-processing and neural-network pattern-matching techniques, and then performs self-positioning with the vision system based on a well-known localization algorithm. After the self-positioning procedure, the 2-D scene from the vision system is overlaid with the VRML scene. This paper describes how the self-positioning is realized and shows the overlap between the 2-D and VRML scenes. The method successfully defines the robot's path.

  • PDF
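
Landmark-based self-positioning of the kind described above can be sketched as intersecting bearing rays toward two known landmarks; this is a simplified stand-in for the paper's localization algorithm, with the landmark coordinates invented for illustration:

```python
import numpy as np

def locate_from_bearings(l1, l2, b1, b2):
    """Self-positioning from two known landmarks and their measured bearings.

    l1, l2 : landmark positions in the map frame
    b1, b2 : absolute bearings (rad) from the robot to each landmark
    The robot position is the intersection of the two bearing rays.
    """
    l1 = np.asarray(l1, float)
    l2 = np.asarray(l2, float)
    d1 = np.array([np.cos(b1), np.sin(b1)])
    d2 = np.array([np.cos(b2), np.sin(b2)])
    # p + t1*d1 = l1 and p + t2*d2 = l2  =>  t1*d1 - t2*d2 = l1 - l2
    t1, _ = np.linalg.solve(np.column_stack([d1, -d2]), l1 - l2)
    return l1 - t1 * d1

# Landmarks due east and due north of a robot standing at (2, 3).
pos = locate_from_bearings((10, 3), (2, 10), 0.0, np.pi / 2)
print(np.round(pos))  # ≈ [2, 3]
```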

선삭에서 컴퓨터비젼을 이용한 플랭크 마모 측정에 관한 연구 (A study on the measurement of flank wear by computer vision in turning)

  • 김영일;유봉환
    • 한국정밀공학회지 / Vol. 10, No. 3 / pp.168-174 / 1993
  • A new digital image-processing method for measuring the flank wear of a cutting tool is presented. The method is based on computer-vision technology in which the tool is illuminated by two halogen lamps and the wear zone is imaged with a CCD camera. The image is converted into digital pixels and processed to detect the wear-land width. In conclusion, it is shown that the average wear-land area and the maximum peak value of the flank-wear width can be monitored effectively with a measuring resolution of 0.01 mm.

  • PDF
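
Measuring wear width from a segmented image reduces to counting wear pixels per column along the cutting edge and scaling by the optical resolution; a hedged sketch with a synthetic wear mask (only the 0.01 mm resolution comes from the abstract, the rest is invented for illustration):

```python
import numpy as np

MM_PER_PIXEL = 0.01  # measuring resolution stated in the abstract

def wear_metrics(mask, mm_per_pixel=MM_PER_PIXEL):
    """Average and maximum flank-wear width from a binary wear-zone mask.

    mask: 2-D boolean array, True where the bright wear land was segmented.
    The wear width is counted column-by-column along the cutting edge.
    """
    widths = mask.sum(axis=0) * mm_per_pixel   # wear width per column (mm)
    active = widths[widths > 0]                # only columns with wear
    return active.mean(), active.max()

# Synthetic wear land: 40 columns worn, 20-35 pixels deep.
mask = np.zeros((60, 100), bool)
for j in range(30, 70):
    depth = 20 + (j % 16)
    mask[:depth, j] = True

vb_avg, vb_max = wear_metrics(mask)
print(round(vb_avg, 3), round(vb_max, 2))  # mean and peak wear width in mm
```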