• Title/Abstract/Keywords: detecting accuracy

Search results: 979 (processing time: 0.025 s)

Road Damage Detection and Classification based on Multi-level Feature Pyramids

  • Yin, Junru; Qu, Jiantao; Huang, Wei; Chen, Qiqiang
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 2 / pp. 786-799 / 2021
  • Road damage detection is important for road maintenance. With the development of deep learning, more and more road damage detection methods have been proposed, such as Fast R-CNN, Faster R-CNN, Mask R-CNN, and RetinaNet. However, because shallow and deep layers cannot be extracted at the same time, the existing methods do not perform well in detecting objects with few samples. In addition, these methods cannot obtain highly accurate detection bounding boxes. This paper presents a Multi-level Feature Pyramids method based on M2det. Because the feature pyramid has a multi-scale, multi-level architecture, feature layers that contain richer information and more distinct features can be extracted. Moreover, an attention mechanism is used to improve the accuracy of local bounding boxes. Experimental results show that the proposed method outperforms the current state-of-the-art methods.
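
A minimal sketch of the general idea described above: a shallow (detail-rich) and a deep (semantic) feature map are fused and the result is gated by a simple channel-attention block. This is an illustrative stand-in, not the authors' M2det-based network; all layer sizes and the attention design are assumptions.

```python
# Illustrative multi-scale fusion with channel attention (NOT the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate, standing in for the attention
    mechanism mentioned in the abstract."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))        # global average pool -> channel weights
        return x * w[:, :, None, None]         # reweight channels

class MultiLevelFusion(nn.Module):
    """Upsamples deep features, concatenates them with shallow features,
    and gates the fused map with channel attention."""
    def __init__(self, shallow_ch=256, deep_ch=512, out_ch=256):
        super().__init__()
        self.lateral = nn.Conv2d(deep_ch, shallow_ch, kernel_size=1)
        self.fuse = nn.Conv2d(2 * shallow_ch, out_ch, kernel_size=3, padding=1)
        self.attn = ChannelAttention(out_ch)

    def forward(self, shallow, deep):
        deep = F.interpolate(self.lateral(deep), size=shallow.shape[-2:], mode="nearest")
        fused = self.fuse(torch.cat([shallow, deep], dim=1))
        return self.attn(F.relu(fused))

# Toy usage with random feature maps standing in for backbone outputs.
shallow = torch.randn(1, 256, 64, 64)
deep = torch.randn(1, 512, 32, 32)
print(MultiLevelFusion()(shallow, deep).shape)  # torch.Size([1, 256, 64, 64])
```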

A Study on Pedestrians Tracking using Low Altitude UAV (저고도 무인항공기를 이용한 보행자 추적에 관한 연구)

  • 서창진
    • 전기학회논문지P / Vol. 67, No. 4 / pp. 227-232 / 2018
  • In this paper, we propose a faster object detection and tracking method that combines deep learning on a UAV (unmanned aerial vehicle) with the Kalman filter and the YOLO (You Only Look Once) v3 algorithm. The performance of an object tracking system is determined by the performance and accuracy of its detection and tracking algorithms. We therefore applied YOLOv3, one of the best-performing detection algorithms currently available, to the detection stage of the proposed system, and used a Kalman filter with a variable detection area for the tracking stage. The experimental results show that the proposed system outperforms a fixed-area detection system.
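
A minimal sketch of the tracking stage described above: a constant-velocity Kalman filter smoothing the center of a detected pedestrian bounding box. The state model, noise covariances, and sample detections are illustrative assumptions, not the paper's settings.

```python
# Constant-velocity Kalman filter over detected box centres (illustrative values).
import numpy as np

dt = 1.0                                   # one frame between detections
F = np.array([[1, 0, dt, 0],               # state transition for [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                # only position (x, y) is measured
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)                       # process noise (assumed)
R = 4.0 * np.eye(2)                        # measurement noise (assumed)

x = np.zeros(4)                            # initial state
P = 100.0 * np.eye(4)                      # initial uncertainty

def kalman_step(x, P, z):
    """One predict/update cycle for a newly detected centre z = (cx, cy)."""
    x = F @ x                              # predict
    P = F @ P @ F.T + Q
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y                          # update
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Feed a short sequence of noisy detections.
for z in [np.array([100.0, 50.0]), np.array([104.0, 52.0]), np.array([109.0, 53.5])]:
    x, P = kalman_step(x, P, z)
print("filtered position:", x[:2], "velocity:", x[2:])
```

The filtered prediction could then be used to place a search window around the expected position in the next frame, which is presumably what the abstract's "variable detection area" refers to.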

Labor Vulnerability Assessment through Electroencephalogram Monitoring: a Bispectrum Time-frequency Analysis Approach

  • Chen, Jiayu; Lin, Zhenghang
    • International Conference Proceedings (국제학술발표논문집) / The 6th International Conference on Construction Engineering and Project Management / pp. 179-182 / 2015
  • Detecting and assessing human-related risks is critical to improving on-site safety conditions and reducing the loss of lives, time, and budget in the construction industry. Recent research in neuroscience and psychology suggests that inattentional blindness caused by working-memory overload is the major cause of unexpected human-related accidents. Because of the limits of human mental workload, laborers are vulnerable to unexpected hazards while focusing on complicated and dangerous construction tasks. Therefore, measuring the risk-perception abilities of workers could help to identify vulnerable individuals and reduce unexpected injuries. However, there are no available measurement approaches or devices capable of monitoring construction workers' mental condition. The research proposed in this paper aims to develop such a measurement framework to evaluate hazards by monitoring the electroencephalogram (EEG) of laborers. The research team developed a wearable safety-monitoring helmet that collects the brain waves of users for analysis. A bispectrum approach is developed in this paper to enrich the data source and improve accuracy.
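
A minimal sketch of a direct (FFT-based) bispectrum estimate for a single EEG channel, averaging triple products of Fourier coefficients over short segments. The segment length, sampling rate, and synthetic signal are assumptions, not the authors' recording setup or analysis pipeline.

```python
# Direct bispectrum estimate B(f1, f2) = E[X(f1) X(f2) conj(X(f1+f2))] (illustrative).
import numpy as np

def bispectrum(signal, seg_len=256):
    """Average the triple product of windowed FFT coefficients over segments."""
    n_freq = seg_len // 2
    B = np.zeros((n_freq, n_freq), dtype=complex)
    n_seg = len(signal) // seg_len
    for k in range(n_seg):
        seg = signal[k * seg_len:(k + 1) * seg_len]
        seg = seg - seg.mean()                       # remove DC offset
        X = np.fft.fft(seg * np.hanning(seg_len))    # windowed spectrum
        for f1 in range(n_freq):
            for f2 in range(n_freq - f1):            # keep f1 + f2 in range
                B[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
    return np.abs(B) / max(n_seg, 1)

# Synthetic "EEG": two oscillations plus noise, 4 seconds at an assumed 256 Hz.
fs = 256
t = np.arange(4 * fs) / fs
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 22 * t) \
      + 0.3 * np.random.randn(t.size)
B = bispectrum(eeg)
print("bispectrum matrix shape:", B.shape)           # (128, 128)
```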

Object Detection Using Deep Learning Algorithm CNN

  • S. Sumahasan; Udaya Kumar Addanki; Navya Irlapati; Amulya Jonnala
    • International Journal of Computer Science & Network Security / Vol. 24, No. 5 / pp. 129-134 / 2024
  • Object detection is an emerging technology in the field of computer vision and image processing that deals with detecting objects of a particular class in digital images. It is considered one of the most complicated and challenging tasks in computer vision. Earlier, machine learning-based approaches such as SIFT (scale-invariant feature transform) and HOG (histogram of oriented gradients) were widely used to classify objects in an image, with a support vector machine (SVM) for classification. The biggest challenges with these approaches are that they are too computationally intensive for real-time applications and that they do not work well with massive datasets. To overcome these challenges, we implement a deep learning-based approach, a convolutional neural network (CNN), in this paper. The proposed approach provides accurate results, detecting objects in an image and highlighting each object's area with a bounding box along with its accuracy.
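
As a rough illustration of CNN-based detection with bounding boxes, the sketch below uses an off-the-shelf torchvision detector; the authors' own CNN is not specified in the abstract, so nothing here is their implementation. It assumes torchvision >= 0.13 and downloadable pretrained weights.

```python
# Off-the-shelf CNN detector producing boxes, labels, and scores (illustrative only).
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = torch.rand(3, 480, 640)            # stand-in for a real RGB image in [0, 1]
with torch.no_grad():
    out = model([image])[0]                # dict with 'boxes', 'labels', 'scores'

keep = out["scores"] > 0.5                 # keep confident detections only
for box, label, score in zip(out["boxes"][keep], out["labels"][keep], out["scores"][keep]):
    print(f"class {int(label)}: score {score.item():.2f}, box {box.tolist()}")
```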

Transformer based Collision Detection Approach by Torque Estimation using Joint Information (관절 정보를 이용한 토크 추정 방식의 트랜스포머 기반 로봇 충돌 검출 방법)

  • 박지원; 임대규; 박수민; 박현준
    • 로봇학회논문지 / Vol. 19, No. 3 / pp. 266-273 / 2024
  • With the rising interaction between robots and humans, detecting collisions has become increasingly vital for ensuring safety. In this paper, we propose a novel approach for detecting collisions without using force-torque sensors or tactile sensors, based on a Transformer neural network architecture. The proposed collision detection approach comprises a torque estimator network, which predicts the joint torque in a free-motion state using synchronous time-step encoding, and a collision discriminator network, which predicts collisions by leveraging the difference between the estimated and actual torques. The collision discriminator finally produces a binary tensor that predicts collisions frame by frame. In simulations, the proposed network exhibited better collision detection performance than other kinds of networks in terms of both prediction speed and accuracy. This underscores the benefits of using Transformer networks for collision detection tasks, where rapid decision-making is essential.
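
A much-reduced sketch of the torque-residual idea: a fixed threshold on the difference between measured and estimated joint torque stands in here for the paper's learned Transformer discriminator. The data, joint count, and threshold are placeholders.

```python
# Residual-based collision flagging per frame (simplified stand-in, not the paper's network).
import numpy as np

def detect_collisions(tau_measured, tau_estimated, threshold=2.0):
    """Return a binary flag per frame: 1 if any joint's torque residual exceeds
    the threshold (N*m), else 0. Shapes: (n_frames, n_joints)."""
    residual = np.abs(tau_measured - tau_estimated)
    return (residual > threshold).any(axis=1).astype(int)

# Toy data: 6-DoF arm, 5 frames; frame 3 carries a simulated external contact.
rng = np.random.default_rng(0)
tau_est = rng.normal(0.0, 0.2, size=(5, 6))           # free-motion prediction
tau_meas = tau_est + rng.normal(0.0, 0.1, size=(5, 6))
tau_meas[3, 2] += 5.0                                  # external torque on joint 3
print(detect_collisions(tau_meas, tau_est))            # e.g. [0 0 0 1 0]
```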

Adaptive boosting in ensembles for outlier detection: Base learner selection and fusion via local domain competence

  • Bii, Joash Kiprotich; Rimiru, Richard; Mwangi, Ronald Waweru
    • ETRI Journal / Vol. 42, No. 6 / pp. 886-898 / 2020
  • Unusual data patterns, or outliers, can be generated by human errors, incorrect measurements, or malicious activities. Detecting outliers is a difficult task that requires complex ensembles. An ideal outlier detection ensemble should consider the strengths of the individual base detectors while carefully combining their outputs to create a strong overall ensemble and achieve unbiased accuracy with minimal variance. Selecting and combining the outputs of dissimilar base learners is a challenging task. This paper proposes a model that utilizes heterogeneous base learners. In the first phase, it adaptively boosts the outcomes of preceding learners by assigning weights and identifying high-performing learners based on their local domains; in the second phase, it carefully fuses their outcomes to improve overall accuracy. Ten benchmark datasets are used to train and test the proposed model. To investigate its ability to separate outliers from inliers, the proposed model is tested and evaluated using accuracy metrics. The analyzed data are presented as crosstabs and percentages, followed by a descriptive method for synthesis and interpretation.
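
A much-simplified sketch of fusing heterogeneous base outlier detectors by weighted score averaging. The three scikit-learn detectors, the synthetic data, and the fixed competence weights are placeholders; the paper derives weights adaptively from local-domain performance and boosts across phases.

```python
# Weighted fusion of heterogeneous outlier scores (illustrative, not the paper's scheme).
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, size=(200, 2)),        # inliers
               rng.uniform(-6, 6, size=(10, 2))])       # injected outliers

detectors = [IsolationForest(random_state=0).fit(X),
             LocalOutlierFactor(novelty=True).fit(X),
             OneClassSVM(nu=0.05).fit(X)]

def normalized_outlier_scores(det, X):
    """Higher = more outlying; min-max normalize each detector's scores."""
    s = -det.score_samples(X)                 # sklearn: higher score_samples = more normal
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

scores = np.stack([normalized_outlier_scores(d, X) for d in detectors])
weights = np.array([0.4, 0.3, 0.3])           # placeholder competence weights
ensemble_score = weights @ scores             # fused outlier score per sample
print("top-5 most outlying indices:", np.argsort(ensemble_score)[-5:])
```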

Development of YOLOv5s and DeepSORT Mixed Neural Network to Improve Fire Detection Performance

  • Jong-Hyun Lee; Sang-Hyun Lee
    • International Journal of Advanced Culture Technology / Vol. 11, No. 1 / pp. 320-324 / 2023
  • As urbanization accelerates and facilities that use energy increase, damage to human life and property due to fire is increasing. Therefore, a fire monitoring system capable of quickly detecting a fire is required to reduce the economic loss and human damage caused by fire. In this study, we aim to develop an improved artificial intelligence model that can increase the accuracy of low fire alarms by combining DeepSORT, which has strengths in object tracking, with the YOLOv5s model. To develop a fire detection model that is faster and more accurate than the existing artificial intelligence model, we selected DeepSORT, a technology that complements and extends SORT, one of the most widely used object tracking frameworks, combined it with the YOLOv5s model, and compared the mixed model with YOLOv5s alone. As the final result of this study, the YOLOv5s model achieved an accuracy of 96.3% at 30 frames per second, while the YOLOv5s_DeepSORT mixed model achieved an accuracy of 97.2% at 30 frames per second, 0.9 percentage points higher than YOLOv5s.
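
A heavily simplified sketch of why coupling a per-frame detector with a tracker can raise alarm accuracy: an alarm is confirmed only when a detection persists over several consecutive frames. The stub detector and the persistence counter below are illustrative placeholders, not YOLOv5s or DeepSORT.

```python
# Temporal persistence as a stand-in for detector+tracker fusion (illustrative only).
import random

CONFIRM_FRAMES = 5          # frames a detection must persist before alarming (assumed)

def detect_fire_stub(frame_index):
    """Placeholder for per-frame detector inference: True if fire is detected."""
    return frame_index >= 10 or random.random() < 0.05   # sporadic false positives early on

consecutive = 0
for frame_index in range(30):
    if detect_fire_stub(frame_index):
        consecutive += 1                                  # detection persists
    else:
        consecutive = 0                                   # track lost, reset
    if consecutive >= CONFIRM_FRAMES:
        print(f"frame {frame_index}: fire alarm confirmed after "
              f"{consecutive} consecutive detections")
        break
```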

Indoor Environment Drone Detection through DBSCAN and Deep Learning

  • Ha Tran Thi; Hien Pham The; Yun-Seok Mun; Ic-Pyo Hong
    • 전기전자학회논문지 / Vol. 27, No. 4 / pp. 439-449 / 2023
  • In an era marked by the increasing use of drones and the growing demand for indoor surveillance, the development of a robust application for detecting and tracking both drones and humans within indoor spaces becomes imperative. This study presents an innovative application that uses FMCW radar to detect human and drone motion from the point cloud. At the outset, the DBSCAN (density-based spatial clustering of applications with noise) algorithm is utilized to categorize the point-cloud data into distinct groups, each representing an object present in the tracking area. Notably, this algorithm demonstrates remarkable efficiency, particularly in clustering drone point clouds, achieving an accuracy of up to 92.8%. Subsequently, the clusters are classified as either humans or drones by employing a deep learning model. A trio of models, including a Deep Neural Network (DNN), a Residual Network (ResNet), and Long Short-Term Memory (LSTM), is applied, and the outcomes reveal that the ResNet model achieves the highest accuracy: 98.62% for identifying drone clusters and 96.75% for human clusters.
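
A minimal sketch of the clustering stage described above, using scikit-learn's DBSCAN on a synthetic 3-D point cloud; the eps and min_samples values and the point distributions are assumptions, not the paper's radar data.

```python
# DBSCAN clustering of a synthetic radar-like point cloud (illustrative parameters).
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
drone = rng.normal(loc=[2.0, 3.0, 1.5], scale=0.1, size=(40, 3))   # tight cluster
human = rng.normal(loc=[0.0, 1.0, 0.9], scale=0.25, size=(120, 3)) # broader cluster
noise = rng.uniform(-1, 4, size=(15, 3))                           # clutter points
points = np.vstack([drone, human, noise])                          # (x, y, z) in metres

labels = DBSCAN(eps=0.4, min_samples=8).fit_predict(points)
for lab in sorted(set(labels)):
    tag = "noise" if lab == -1 else f"cluster {lab}"
    print(tag, "->", int(np.sum(labels == lab)), "points")
# Each non-noise cluster would then be passed to a classifier (DNN/ResNet/LSTM in the paper).
```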

A Comparative Study of Deep Learning Techniques for Alzheimer's disease Detection in Medical Radiography

  • Amal Alshahrani; Jenan Mustafa; Manar Almatrafi; Layan Albaqami; Raneem Aljabri; Shahad Almuntashri
    • International Journal of Computer Science & Network Security / Vol. 24, No. 5 / pp. 53-63 / 2024
  • Alzheimer's disease is a brain disorder that worsens over time and affects millions of people around the world. It leads to a gradual deterioration in memory, thinking ability, and behavioral and social skills until the person loses the ability to adapt to society. Technological progress in medical imaging and the use of artificial intelligence have made it possible to detect Alzheimer's disease from medical images such as magnetic resonance imaging (MRI). Deep learning algorithms, especially convolutional neural networks (CNNs), have shown great success in analyzing medical images for disease diagnosis and classification; CNNs can recognize patterns and objects in images, which makes them ideally suited for this study. In this paper, we compare the performance of Alzheimer's disease detection using two deep learning methods: You Only Look Once (YOLO), a CNN-based object recognition algorithm, and Visual Geometry Group (VGG16), a deep convolutional neural network primarily used for image classification. We compare our results using these modern models instead of using a plain CNN as in previous research. The results showed different levels of accuracy for the various versions of YOLO and for the VGG16 model. YOLO v5 reached 56.4% accuracy at 50 epochs and 61.5% accuracy at 100 epochs. YOLO v8, used for classification, reached 84% overall accuracy at 100 epochs. YOLO v9, used for object detection, reached an overall accuracy of 84.6%. The VGG16 model reached 99% accuracy in training after 25 epochs but only 78% accuracy in testing. Hence, the best model overall is YOLO v9, with the highest overall accuracy of 86.1%.
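
A minimal VGG16 transfer-learning sketch of the kind the abstract compares against YOLO, written with Keras. The input size, classifier head, and the assumed four MRI classes are illustrative choices, not the authors' confirmed configuration; pretrained ImageNet weights must be downloadable.

```python
# VGG16 backbone with a small classification head (illustrative configuration).
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False                      # freeze the convolutional backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(4, activation="softmax"),   # assumed 4 MRI classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would then use e.g. model.fit(train_ds, validation_data=val_ds, epochs=25)
```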

A Study on an Image Stabilization for Car Vision System (차량용 비전 시스템을 위한 영상 안정화에 관한 연구)

  • 유신; 이완주; 강현철
    • 한국정보통신학회논문지 / Vol. 15, No. 4 / pp. 957-964 / 2011
    • 2011
  • 영상 안정화(image stabilization)는 흔들림이 있는 영상을 영상처리 기법으로 안정화 시키는 과정을 말한다. PA(projection algorithm)기법을 이용한 디지털 영상 안정화는 쉽게 글로벌 모션을 얻을 수 있어 많이 연구가 되어 왔다. PA기법은 실현이 간단하고 속도가 빠른 장점이 있지만 고정된 탐색범위를 사용함으로 탐색범위를 초과한 떨림을 안정화 시킬 수 없고 또한 큰 떨림을 안정화 하기위하여 탐색범위를 크게 하면 모션 추적에 참여하는 블록이 작아져 적확한 글로벌 모션을 얻지 못하게 된다. 본 논문에서는 기존의 PA기법의 단점을 해결하기 위하여 여러 가지 흔들림의 크기에 절용할 수 있는 IPA(Iterative Projection Algorithm)기법을 제안하여, 차량에서 찍은 연속된 영상 1000프레임에 적용하였을 때 기존의 알고리즘을 사용하고 서로 다른 탐색범위를 사용한 결과보다 PSNR이 최저 6.8%, 최고 28.9% 향상 되었다.