• Title/Summary/Keyword: Vision Detection

Search Results: 1,276

Forest Fire Detection System using Drone Streaming Images (드론 스트리밍 영상 이미지 분석을 통한 실시간 산불 탐지 시스템)

  • Yoosin Kim
    • Journal of Advanced Navigation Technology, v.27 no.5, pp.685-689, 2023
  • The proposed system aims to detect forest fires in real time from stream data received from a drone-mounted camera. The number of wildfires has been increasing recently, and large-scale wildfires are becoming more frequent. To prevent forest fire damage, many experiments using drone cameras and vision analysis have been conducted; however, challenges such as network speed, pre-processing, and model performance make it difficult to detect forest fires in real-time streaming data from a flying drone. This study therefore applied image data processing to capture five good image frames for vision analysis from the whole stream and then developed an object detection model based on YOLO_v2. As a result, the classification model reached up to 93% accuracy on forest fire images, and a field test for model verification detected forest fires with about 70% accuracy.
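
The frame-selection step the abstract describes (keeping a few good frames from the stream before running the detector) can be sketched as follows; a minimal sketch, where the sharpness scores and the choice of five frames are assumptions following the abstract, not the paper's actual implementation:

```python
def select_best_frames(frames, scores, n=5):
    """Keep the n highest-scoring frames (e.g. by a sharpness
    metric) from a buffer of streamed frames, preserving order."""
    ranked = sorted(range(len(frames)), key=lambda i: scores[i], reverse=True)
    keep = sorted(ranked[:n])          # indices of the n best frames
    return [frames[i] for i in keep]   # restore temporal order

# toy usage: frames are labels, scores are hypothetical sharpness values
frames = ["f0", "f1", "f2", "f3", "f4", "f5", "f6"]
scores = [0.2, 0.9, 0.1, 0.8, 0.7, 0.3, 0.95]
print(select_best_frames(frames, scores))  # → ['f1', 'f3', 'f4', 'f5', 'f6']
```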

Development of a Vision Sensor-based Vehicle Detection System (스테레오 비전센서를 이용한 선행차량 감지 시스템의 개발)

  • Hwang, Jun-Yeon;Hong, Dae-Gun;Huh, Kun-Soo
    • Transactions of the Korean Society of Automotive Engineers, v.16 no.6, pp.134-140, 2008
  • Preceding vehicle detection is a crucial issue for driver assistance systems as well as for autonomous vehicle guidance, and it must be performed with high reliability to avoid any potential collision. Vision-based preceding vehicle detection systems are regarded as promising for this purpose because they require little infrastructure on a highway. However, the feasibility of these systems in a passenger car requires accurate and robust sensing performance. In this paper, a preceding vehicle detection system is developed using stereo vision sensors. The system uses feature matching, the epipolar constraint, and feature aggregation to robustly detect the initial corresponding pairs. After the initial detection, it runs a tracking algorithm for the preceding vehicles, including the leading vehicle, from which the position parameters of the preceding and leading vehicles are obtained. The proposed system is implemented on a passenger car and its performance is verified experimentally.
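
With rectified stereo images like those used above, matched feature pairs satisfy the epipolar constraint (same image row), and the horizontal disparity between them yields depth directly; a minimal sketch, where the focal length and baseline values are illustrative, not from the paper:

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Depth Z = f * B / d for a rectified stereo pair, where
    d = x_left - x_right is the horizontal disparity in pixels."""
    d = x_left - x_right
    if d <= 0:
        raise ValueError("non-positive disparity: bad match or point at infinity")
    return focal_px * baseline_m / d

# a feature seen at column 400 in the left image and 380 in the right,
# with an assumed 800 px focal length and 0.5 m baseline
print(depth_from_disparity(400, 380, focal_px=800, baseline_m=0.5))  # → 20.0 (metres)
```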

Lane Detection System Based on Vision Sensors Using a Robust Filter for Inner Edge Detection (차선 인접 에지 검출에 강인한 필터를 이용한 비전 센서 기반 차선 검출 시스템)

  • Shin, Juseok;Jung, Jehan;Kim, Minkyu
    • Journal of Sensor Science and Technology, v.28 no.3, pp.164-170, 2019
  • In this paper, a lane detection and tracking algorithm based on vision sensors and employing a filter robust to inner edge detection is proposed for developing a lane departure warning system (LDWS). The lateral offset value is precisely calculated by applying the proposed inner-edge-detection filter in the region of interest. The proposed algorithm was compared with an existing algorithm in terms of the occurrence time of the lateral-offset-based warning alarm, and an average error of approximately 15 ms was observed. Tests were also conducted to verify whether a warning alarm is generated when a driver departs from a lane, and an average accuracy of approximately 94% was observed. Additionally, the proposed LDWS was implemented as an embedded system, mounted on a test vehicle, and driven for approximately 100 km to obtain experimental results. The results indicate that the average lane detection rates in the daytime and at night are approximately 97% and 96%, respectively, and that the embedded system processes approximately 12 frames per second.
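
The lateral offset driving the warning decision can be computed from the detected inner-edge positions of the two lane markings; a simplified pixel-domain sketch, where the pixels-per-metre scale is an assumed calibration value, not the paper's:

```python
def lateral_offset_m(left_edge_x, right_edge_x, image_width, px_per_m):
    """Signed distance (metres) of the camera centreline from the
    lane centre; positive means the vehicle sits right of centre."""
    lane_center = (left_edge_x + right_edge_x) / 2.0
    return (image_width / 2.0 - lane_center) / px_per_m

# inner lane edges detected at columns 200 and 500 in a 640 px wide image
print(lateral_offset_m(200, 500, image_width=640, px_per_m=100))
# → -0.3  (30 cm left of the lane centre under this sign convention)
```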

Real-Time Fire Detection Method Using YOLOv8 (YOLOv8을 이용한 실시간 화재 검출 방법)

  • Tae Hee Lee;Chun-Su Park
    • Journal of the Semiconductor & Display Technology, v.22 no.2, pp.77-80, 2023
  • Since fires in uncontrolled environments pose serious risks to society and individuals, many researchers have been investigating technologies for the early detection of fires that occur in everyday life. Recently, with the development of deep learning vision technology, research on fire detection models using neural network backbones such as the Transformer and the Convolutional Neural Network has been actively conducted. Vision-based fire detection systems can solve many of the problems of physical sensor-based fire detection systems. This paper proposes a fire detection method using the latest YOLOv8, which improves on the existing fire detection method. The proposed method detects sparks and smoke in input images by training the YOLOv8 model on a universal fire detection dataset. We also demonstrate the superiority of the proposed method through experiments comparing it with existing methods.
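
Experimental comparisons between detectors such as the one above and existing methods typically rest on box-level intersection over union (IoU); a minimal sketch, with the (x1, y1, x2, y2) box format being an assumption for illustration:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0, ix2 - ix1), max(0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

# a predicted fire box vs. a ground-truth box; IoU >= 0.5 usually counts as a hit
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # → 0.3333333333333333
```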

Retina-Motivated CMOS Vision Chip Based on Column Parallel Architecture and Switch-Selective Resistive Network

  • Kong, Jae-Sung;Hyun, Hyo-Young;Seo, Sang-Ho;Shin, Jang-Kyoo
    • ETRI Journal, v.30 no.6, pp.783-789, 2008
  • A bio-inspired vision chip for edge detection was fabricated using 0.35 μm double-poly four-metal complementary metal-oxide-semiconductor technology. It mimics the edge detection mechanism of a biological retina. This type of vision chip offers several advantages, including compact size, high speed, and dense system integration. Low resolution and relatively high power consumption are common limitations of these chips because of their complex circuit structure. We have tried to overcome these problems by rearranging and simplifying their circuits. A vision chip of 160 × 120 pixels has been fabricated on a 5 × 5 mm² silicon die. It shows less than 10 mW of power consumption.
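
The retina-style edge detection such a chip implements (a photoreceptor value minus its resistively smoothed surround) can be imitated in software; a 1-D pure-Python sketch, where the three-tap averaging kernel stands in as an assumed software analogue of the resistive network:

```python
def retina_edges(signal):
    """Centre minus smoothed surround: approximates the difference
    between the photoreceptor and horizontal-cell layers of a retina."""
    smoothed = [
        (signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, len(signal) - 1)]) / 3.0
        for i in range(len(signal))
    ]
    return [c - s for c, s in zip(signal, smoothed)]

# a step edge in brightness produces a biphasic response at the transition
print(retina_edges([0, 0, 0, 9, 9, 9]))  # → [0.0, 0.0, -3.0, 3.0, 0.0, 0.0]
```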

Digital Modelling of Visual Perception in Architectural Environment

  • Seo, Dong-Yeon;Lee, Kyung-Hoi
    • KIEAE Journal, v.3 no.2, pp.59-66, 2003
  • To be a design method that supports the aesthetic ability of humans, a CAAD system should recognize architectural form in the same way humans do. In this study, the human visual perception process was analyzed to find computational methods that perform similar steps. Through this analysis, vision was separated into low-level vision and high-level vision, and edge detection and a neural network were selected to model them, respectively. Twenty-four images of buildings, trees, and landscapes were processed by edge detection and used to train the neural network, and 24 new images were used to test the trained network. The test shows that the trained network gives correct perception results for each image with a low error rate. This study concerns the meaning of artificial intelligence in the design process rather than a design automation strategy through artificial intelligence.
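
The low-level vision stage above (an edge map that is then fed to the classifier) can be sketched with simple finite differences; a toy pure-Python version, where the threshold and image values are illustrative:

```python
def edge_map(img, thresh=1):
    """Low-level vision stage: binary edge map from horizontal and
    vertical finite differences, the input to the classifier stage."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]   # horizontal gradient
            gy = img[y + 1][x] - img[y][x]   # vertical gradient
            if abs(gx) + abs(gy) >= thresh:
                out[y][x] = 1
    return out

# a 4x4 image with a bright 2x2 patch yields edges along the patch border
img = [[0, 0, 0, 0],
       [0, 5, 5, 0],
       [0, 5, 5, 0],
       [0, 0, 0, 0]]
for row in edge_map(img, thresh=5):
    print(row)
```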

Steering Control of Autonomous Vehicle by the Vision System

  • Kim, Jung-Ha;Sugisaka, Masanori
    • Institute of Control, Robotics and Systems Conference Proceedings (제어로봇시스템학회 학술대회논문집), 2001.10a, pp.91.1-91, 2001
  • The subject of this paper is the vision system of an autonomous vehicle. An autonomous vehicle is a difficult topic because of several constraints on mobility, vehicle speed, and the lack of environment information; we therefore apply a vision system to the autonomous vehicle. The vision system of an autonomous vehicle is like the eyes of a human. This paper can be divided into two parts: first, the acceleration and brake control systems for longitudinal motion control; second, a real-time lane detection vision system for lateral motion control, covering the lane detection method and the image processing method. Finally, the paper focuses on the integration of the tele-operated vehicle and the autonomous ...
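
In its most basic form, lateral motion control from the detected lane reduces to a proportional steering law on the lateral offset; a sketch with illustrative gain and saturation values, not taken from the paper:

```python
def steering_angle(lateral_offset_m, gain=0.5, max_angle=0.6):
    """Proportional steering command (radians) that turns toward the
    lane centre; saturated to an assumed mechanical steering limit."""
    angle = -gain * lateral_offset_m
    return max(-max_angle, min(max_angle, angle))

print(steering_angle(0.4))   # 0.4 m right of centre → steer left (-0.2 rad)
print(steering_angle(-2.0))  # far left of centre → saturated at 0.6 rad
```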

Development of The 3-channel Vision Aligner for Wafer Bonding Process (웨이퍼 본딩 공정을 위한 3채널 비전 얼라이너 개발)

  • Kim, JongWon;Ko, JinSeok
    • Journal of the Semiconductor & Display Technology, v.16 no.1, pp.29-33, 2017
  • This paper presents the development of a three-channel vision aligner for a wafer and plate bonding machine used in LED manufacturing. The developed vision aligner consists of three cameras and performs wafer alignment of rotation and translation, flipped wafer detection, and UV tape detection on the target wafer and plate. The process steps of wafer bonding are not defined by standards in semiconductor manufacturing; which steps are used depends on the wafer type, so many processing steps suffer unexpected problems caused by workers and the manufacturing environment, such as those mentioned above. For mass production, machine operation is tied to production time and worker safety, so the operation should run in one pass while accounting for unexpected problems. The developed system solved four kinds of unexpected problems and will be applied in the mass-production environment.
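
Wafer alignment of rotation and translation, as performed by an aligner like the one above, amounts to recovering a rigid transform from matched fiducial marks; a two-point sketch, where the mark coordinates are illustrative:

```python
import math

def align_two_marks(src, dst):
    """Rotation (radians) and translation that map the two source
    fiducial marks onto the two destination marks (rigid transform)."""
    (x1, y1), (x2, y2) = src
    (u1, v1), (u2, v2) = dst
    theta = math.atan2(v2 - v1, u2 - u1) - math.atan2(y2 - y1, x2 - x1)
    c, s = math.cos(theta), math.sin(theta)
    tx = u1 - (c * x1 - s * y1)   # translation that lines up the first mark
    ty = v1 - (s * x1 + c * y1)
    return theta, (tx, ty)

# marks rotated 90 degrees and shifted by (10, 0)
theta, t = align_two_marks([(0, 0), (1, 0)], [(10, 0), (10, 1)])
print(round(math.degrees(theta), 1), t)  # → 90.0 (10.0, 0.0)
```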

Automated Vision-based Construction Object Detection Using Active Learning (액티브 러닝을 활용한 영상기반 건설현장 물체 자동 인식 프레임워크)

  • Kim, Jinwoo;Chi, Seokho;Seo, JoonOh
    • KSCE Journal of Civil and Environmental Engineering Research, v.39 no.5, pp.631-636, 2019
  • Over the last decade, many researchers have investigated vision-based construction object detection algorithms for construction site monitoring. However, previous methods require ground truth labeling, the process of manually marking the types and locations of target objects in training image data, which wastes a large amount of time and effort. To address this drawback, this paper proposes a vision-based construction object detection framework that employs an active learning technique to reduce manual labeling efforts. For validation, the research team performed experiments using an open construction benchmark dataset. The results showed that the method successfully detected construction objects with various visual characteristics, and indicated that a high-performance object detection model can be developed with less training data and fewer training iterations than previous approaches require. The findings of this study can be used to reduce manual labeling and to minimize the time and cost required to build a training database.
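
An active-learning loop like the one described typically uses uncertainty sampling: at each round, only the images the current model is least confident about are sent to a human annotator. A minimal selection step, where the confidence scores stand in for hypothetical model outputs:

```python
def select_for_labeling(confidences, budget):
    """Return indices of the `budget` least-confident samples,
    i.e. the ones most worth sending to a human annotator."""
    order = sorted(range(len(confidences)), key=lambda i: confidences[i])
    return sorted(order[:budget])

# per-image max class confidences from the current detector
conf = [0.99, 0.42, 0.87, 0.51, 0.95]
print(select_for_labeling(conf, budget=2))  # → [1, 3]
```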

Edge-based Method for Human Detection in an Image (영상 내 사람의 검출을 위한 에지 기반 방법)

  • Do, Yongtae;Ban, Jonghee
    • Journal of Sensor Science and Technology, v.25 no.4, pp.285-290, 2016
  • Human sensing is an important but challenging technology. Compared with other methods for sensing humans, a vision sensor has many advantages, and automatic human detection in camera images has been an active research area. The combination of the Histogram of Oriented Gradients (HOG) and a Support Vector Machine (SVM) is currently one of the most successful approaches in vision-based human detection. However, extracting HOG features from an image is computationally intensive, so it is hard to employ the HOG method in real-time processing applications. This paper describes an efficient solution to this speed problem. Our method obtains the edge information of an image and finds candidate regions where humans are likely to exist based on the distribution pattern of the detected edge points. HOG features are then extracted only from the candidate image regions. Since the complex HOG processing is done adaptively, guided by the simpler edge detection step, human detection can be performed quickly. Experimental results show that the proposed method is effective on various images.
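
The speed-up described above comes from running HOG only where edges are dense enough to suggest a person; a sketch of such a candidate-region filter over a binary edge map, where the window size and density threshold are assumptions, not the paper's values:

```python
def candidate_windows(edge_map, win=2, min_edges=3):
    """Slide a win x win window over a binary edge map and keep only
    windows whose edge count reaches min_edges; the expensive HOG
    stage then runs on these candidates instead of the whole image."""
    h, w = len(edge_map), len(edge_map[0])
    hits = []
    for y in range(h - win + 1):
        for x in range(w - win + 1):
            count = sum(edge_map[y + dy][x + dx]
                        for dy in range(win) for dx in range(win))
            if count >= min_edges:
                hits.append((x, y))
    return hits

# only the window covering the dense edge cluster survives
edges = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]
print(candidate_windows(edges))  # → [(1, 1)]
```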