• Title/Abstract/Keyword: Object-detection

Search results: 2,473 items (processing time: 0.028 s)

Multi-scale Diffusion-based Salient Object Detection with Background and Objectness Seeds

  • Yang, Sai;Liu, Fan;Chen, Juan;Xiao, Dibo;Zhu, Hairong
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 10 / pp.4976-4994 / 2018
  • Diffusion-based salient object detection methods have shown excellent detection results and efficient computation in recent years. However, current diffusion-based methods still have difficulty detecting objects that appear at image boundaries or at different scales. To address these issues, this paper proposes a multi-scale diffusion-based salient object detection algorithm with background and objectness seeds. Specifically, the image is first over-segmented at several scales. Second, the background and objectness saliency of each superpixel is calculated and fused at each scale. Third, a manifold ranking method propagates the Bayesian fusion of background and objectness saliency to the whole image. Finally, the pixel-level saliency map is constructed as a weighted summation of the saliency values at the different scales. We evaluate our salient object detection algorithm against 24 state-of-the-art methods on four public benchmark datasets, i.e., ASD, SED1, SED2 and SOD. The results show that the proposed method performs favorably against the 24 state-of-the-art salient object detection approaches in terms of the popular PR-curve and F-measure metrics. The visual comparison results also show that our method highlights the salient objects more effectively.
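A minimal Python sketch of the fusion and propagation steps described above is given below, assuming per-superpixel background and objectness saliency values and a superpixel affinity matrix W are already available; the Bayesian-style fusion and the manifold-ranking closed form are generic illustrations, not the authors' code.

```python
import numpy as np

def manifold_ranking(W, y, alpha=0.99):
    """Propagate seed saliency y over a superpixel affinity graph W using the
    manifold-ranking closed form f* = (I - alpha * S)^-1 y, S = D^-1/2 W D^-1/2."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.linalg.solve(np.eye(len(W)) - alpha * S, y)

def fuse_and_diffuse(bg_sal, obj_sal, W):
    """Bayesian-style fusion of background and objectness saliency per superpixel
    (independence assumed), followed by diffusion over the superpixel graph."""
    joint = bg_sal * obj_sal
    seed = joint / (joint + (1.0 - bg_sal) * (1.0 - obj_sal) + 1e-12)
    return manifold_ranking(W, seed)

def multiscale_saliency(per_scale_maps, weights=None):
    """Weighted sum of pixel-level saliency maps computed at several scales."""
    maps = np.stack(per_scale_maps)                    # (n_scales, H, W)
    if weights is None:
        weights = np.full(len(per_scale_maps), 1.0 / len(per_scale_maps))
    sal = np.tensordot(np.asarray(weights), maps, axes=1)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
```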

YOLOv5 based Anomaly Detection for Subway Safety Management Using Dilated Convolution

  • Nusrat Jahan Tahira;Ju-Ryong Park;Seung-Jin Lim;Jang-Sik Park
    • 한국산업융합학회 논문집 / Vol. 26, No. 2_1 / pp.217-223 / 2023
  • With the rapid advancement of technology, the need for research fields where this technology can be applied is also increasing. One of the most researched topics in computer vision is object detection, which has been widely implemented in various fields including healthcare, video surveillance and education. The main goal of object detection is to identify and categorize all the objects in a target environment. Object detection methods draw on a variety of significant techniques, such as image processing and pattern recognition. Anomaly detection is a part of object detection; anomalies can be found in various scenarios, for example in crowded places such as subway stations. An abnormal event can be regarded as a deviation from the conventional scene. Since abnormal events do not occur frequently, the distribution of normal and abnormal events is highly imbalanced. In terms of public safety, abnormal events should be avoided, and therefore immediate action needs to be taken. When abnormal events occur in certain places, real-time detection is required to protect people's safety. To solve the above problems, we propose a modified YOLOv5 object detection algorithm that incorporates dilated convolutional layers and achieves 97% mAP50, outperforming five other YOLOv5 variants. In addition, we created a simple mobile application that makes the abnormal-event detection available on mobile phones.
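As a hedged illustration of the dilated-convolution idea, the PyTorch block below shows a generic Conv-BN-SiLU layer with a dilation factor; it is a stand-in for a YOLOv5-style convolution block, not the authors' modified network.

```python
import torch
import torch.nn as nn

class DilatedConv(nn.Module):
    """Conv-BN-SiLU block with dilation, enlarging the receptive field while
    keeping the spatial resolution (hypothetical stand-in for a YOLOv5 Conv block)."""
    def __init__(self, c_in, c_out, k=3, stride=1, dilation=2):
        super().__init__()
        pad = dilation * (k - 1) // 2          # "same" padding for odd kernels
        self.conv = nn.Conv2d(c_in, c_out, k, stride, pad,
                              dilation=dilation, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

x = torch.randn(1, 64, 80, 80)
print(DilatedConv(64, 128, dilation=2)(x).shape)   # torch.Size([1, 128, 80, 80])
```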

어안 이미지의 배경 제거 기법을 이용한 실시간 전방향 장애물 감지 (Real time Omni-directional Object Detection Using Background Subtraction of Fisheye Image)

  • 최윤원;권기구;김종효;나경진;이석규
    • 제어로봇시스템학회논문지 / Vol. 21, No. 8 / pp.766-772 / 2015
  • This paper proposes an object detection method based on motion estimation using background subtraction in fisheye images obtained from an omni-directional camera mounted on a vehicle. Recently, most vehicles have been equipped with a rear camera as a standard option, as well as various camera systems for safety. However, unlike conventional object detection on images obtained from a camera, the embedded systems installed in vehicles make it difficult to apply complicated algorithms because of their inherently low processing performance. In general, an embedded system needs a system-dependent algorithm because it has lower processing performance than a general-purpose computer. In this paper, the location of an object is estimated from motion information obtained by applying a background subtraction method that compares previous frames with the current one. The real-time detection performance of the proposed method is verified experimentally on an embedded board by comparing the proposed algorithm with object detection based on LKOF (Lucas-Kanade optical flow).
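A rough OpenCV sketch of this kind of background-subtraction detection is shown below; the thresholds, the minimum blob area, and the use of plain frame differencing are illustrative assumptions, not the paper's implementation.

```python
import cv2
import numpy as np

def detect_moving_objects(prev_frames, curr_frame, diff_thresh=25, min_area=500):
    """Difference the current frame against several previous frames, merge the
    binary masks, and return bounding boxes of sufficiently large moving blobs."""
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    motion_mask = np.zeros_like(curr_gray)
    for prev in prev_frames:
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(curr_gray, prev_gray)
        _, binary = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
        motion_mask = cv2.bitwise_or(motion_mask, binary)
    contours, _ = cv2.findContours(motion_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]
```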

ANALYSIS OF THE FLOOR PLAN DATASET WITH YOLO V5

  • MYUNGHYUN JUNG;MINJUNG GIM;SEUNGHWAN YANG
    • Journal of the Korean Society for Industrial and Applied Mathematics / Vol. 27, No. 4 / pp.311-323 / 2023
  • This paper introduces the industrial problem, the solution, and the results of research conducted with Define Inc. The client company wanted to improve the performance of an object detection model on a floor plan dataset. To solve the problem, we analyzed the operating principles, advantages, and disadvantages of existing object detection models, identified the characteristics of the floor plan dataset, and proposed the use of YOLO v5 as an appropriate object detection model for training on the dataset. We compared the performance of the existing model and the proposed model using mAP@60, verified the object detection results with real test data, and found that mAP@60 improved by 0.08 while the inference time was 25% shorter. We also found that the training time of the proposed YOLO v5 was 71% shorter than that of the existing model because of its simpler structure. In this paper, we have shown that an object detection model for the floor plan dataset can achieve better performance while reducing training time. We expect this to be useful for solving other industrial problems related to object detection, and we believe the result can be extended to object recognition in 3D floor plan datasets.
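The mAP@60 metric rests on matching predictions to ground-truth boxes at an IoU threshold of 0.6; a minimal sketch of that matching step is shown below (the box format, the greedy confidence-ordered matching, and all names are assumptions, not the evaluation code used in the paper).

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-12)

def match_at_iou(preds, gts, iou_thr=0.6):
    """Greedily match predictions (each a dict with 'box' and 'conf') to ground
    truth at IoU >= iou_thr; return true positives, false positives, false negatives."""
    matched, tp, fp = set(), 0, 0
    for p in sorted(preds, key=lambda p: -p["conf"]):
        best, best_iou = None, iou_thr
        for i, g in enumerate(gts):
            v = iou(p["box"], g)
            if i not in matched and v >= best_iou:
                best, best_iou = i, v
        if best is None:
            fp += 1
        else:
            matched.add(best)
            tp += 1
    return tp, fp, len(gts) - len(matched)
```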

EMOS: Enhanced moving object detection and classification via sensor fusion and noise filtering

  • Dongjin Lee;Seung-Jun Han;Kyoung-Wook Min;Jungdan Choi;Cheong Hee Park
    • ETRI Journal / Vol. 45, No. 5 / pp.847-861 / 2023
  • Dynamic object detection is essential for ensuring safe and reliable autonomous driving. Recently, light detection and ranging (LiDAR)-based object detection has been introduced and has shown excellent performance on various benchmarks. Although LiDAR sensors estimate distance with excellent accuracy, they lack texture or color information and have lower resolution than conventional cameras. In addition, owing to the domain gap phenomenon, performance degradation occurs when a LiDAR-based object detection model is applied to different driving environments or when sensors from different LiDAR manufacturers are used. To address these issues, a sensor-fusion-based object detection and classification method is proposed. The proposed method operates in real time, making it suitable for integration into autonomous vehicles. It performs well on our custom dataset and on publicly available datasets, demonstrating its effectiveness in real-world road environments. In addition, we will make available a novel three-dimensional moving object detection dataset called ETRI 3D MOD.
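A common first step in LiDAR-camera fusion is projecting LiDAR points into the image plane so that detections from both sensors can be associated; the sketch below shows only that generic step under assumed calibration inputs and does not reproduce the EMOS pipeline or its noise filtering.

```python
import numpy as np

def project_lidar_to_image(points_xyz, T_cam_lidar, K):
    """Project LiDAR points into pixel coordinates.
    points_xyz: (N, 3) points in the LiDAR frame
    T_cam_lidar: 4x4 LiDAR-to-camera extrinsic transform
    K: 3x3 camera intrinsic matrix"""
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])  # homogeneous
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]                      # camera frame
    in_front = pts_cam[:, 2] > 0.0                                  # drop points behind camera
    uvw = (K @ pts_cam[in_front].T).T
    uv = uvw[:, :2] / uvw[:, 2:3]                                   # pixel coordinates
    return uv, in_front
```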

엣지 컴퓨팅 환경에서 적용 가능한 딥러닝 기반 라벨 검사 시스템 구현 (Implementation of Deep Learning-based Label Inspection System Applicable to Edge Computing Environments)

  • 배주원;한병길
    • 대한임베디드공학회논문지 / Vol. 17, No. 2 / pp.77-83 / 2022
  • In this paper, a two-stage object detection approach is proposed to implement a deep learning-based label inspection system in edge computing environments. Since the label printed on products during the production process contains important product information, it is important to check that the label information is correct. The proposed system uses a lightweight deep learning model that can be deployed on low-performance edge computing devices, and the two-stage object detection approach is applied to compensate for the relatively low accuracy. The proposed two-stage approach consists of two object detection networks, a Label Area Detection Network and a Character Detection Network. The Label Area Detection Network finds the label area in the product image, and the Character Detection Network detects the words within the label area. Using this approach, characters can be detected precisely even with lightweight deep learning models. The SF-YOLO model applied in the proposed system is a YOLO-based lightweight object detection network designed for edge computing devices. This model showed up to two times faster processing and a considerable improvement in accuracy compared with other YOLO-based lightweight models such as YOLOv3-tiny and YOLOv4-tiny. Also, since its computational cost is low, it can easily be applied in edge computing environments.
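A minimal sketch of the two-stage idea follows; the detector callables stand in for the Label Area Detection Network and the Character Detection Network and are hypothetical placeholders rather than the SF-YOLO implementation.

```python
def two_stage_label_inspection(image, label_area_detector, character_detector):
    """Stage 1 finds label regions in the full product image; stage 2 detects
    characters only inside each cropped label region. `image` is an H x W x 3
    array; the detectors return (x1, y1, x2, y2[, label]) boxes."""
    characters = []
    for (x1, y1, x2, y2) in label_area_detector(image):              # stage 1
        crop = image[y1:y2, x1:x2]
        for (cx1, cy1, cx2, cy2, char) in character_detector(crop):  # stage 2
            # Map character boxes back to full-image coordinates.
            characters.append((x1 + cx1, y1 + cy1, x1 + cx2, y1 + cy2, char))
    return characters
```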

다중 신경망을 이용한 객체 탐지 효율성 개선방안 (Improving Efficiency of Object Detection using Multiple Neural Networks)

  • 박대흠;임종훈;장시웅
    • 한국정보통신학회:학술대회논문집 / 한국정보통신학회 2022년도 춘계학술대회 / pp.154-157 / 2022
  • In the conventional Tensorflow CNN environment, object detection is performed by carrying out object labeling and detection within Tensorflow itself. However, with the advent of YOLO, the efficiency of image object detection has increased: deeper layers can be built than with existing neural networks, and the image object recognition rate can be improved. In this paper, we therefore design an object detection system based on Darknet and YOLO, and compare and analyze its detection capability and speed against a multi-layer network built and trained on the previously used convolutional neural network. Based on this, we present a neural network methodology that uses Darknet training efficiently.
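A simple timing harness of the kind implied by the speed comparison might look like the sketch below; the detector callables and the warm-up policy are assumptions, not the paper's benchmark code.

```python
import time

def benchmark_detector(detector, images, warmup=3):
    """Run a few warm-up inferences, then time the detector over all images and
    return the detections together with the achieved frames per second."""
    for img in images[:warmup]:
        detector(img)                          # excluded from timing
    start = time.perf_counter()
    detections = [detector(img) for img in images]
    fps = len(images) / (time.perf_counter() - start)
    return detections, fps
```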

딥러닝 기반 객체 인식 기술 동향 (Trends on Object Detection Techniques Based on Deep Learning)

  • 이진수;이상광;김대욱;홍승진;양성일
    • 전자통신동향분석 / Vol. 33, No. 4 / pp.23-32 / 2018
  • Object detection is a challenging problem in the visual understanding research area: detecting the objects in a visual scene together with their locations. It has recently been applied in various fields such as autonomous driving, image surveillance, and face recognition. In traditional object detection methods, handcrafted features have been designed to cope with various visual environments; however, they involve a trade-off between accuracy and computational efficiency. Deep learning is a revolutionary paradigm in the machine-learning field, and because deep-learning-based methods, particularly convolutional neural networks (CNNs), have outperformed conventional methods in object detection, they have been studied intensively in recent years. In this article, we provide a brief descriptive summary of several recent deep-learning methods for object detection and of deep learning architectures. We also compare the performance of these methods and present a research guide to the object detection field.

Extended Support Vector Machines for Object Detection and Localization

  • Feyereisl, Jan;Han, Bo-Hyung
    • 전자공학회지 / Vol. 39, No. 2 / pp.45-54 / 2012
  • Object detection is a fundamental task for many high-level computer vision applications such as image retrieval, scene understanding, activity recognition, visual surveillance and many others. Although object detection is one of the most popular problems in computer vision and various algorithms have been proposed thus far, it is also notoriously difficult, mainly due to the lack of proper models for object representation that handle large variations in object structure and appearance. In this article, we review a branch of object detection algorithms based on Support Vector Machines (SVMs), a well-known max-margin technique for minimizing classification error. We introduce a few variations of SVMs, namely Structural SVMs and Latent SVMs, and discuss their applications to object detection and localization.
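As a generic illustration of the max-margin detection setting reviewed here, the sketch below scores sliding windows with a linear SVM decision value; the feature extractor, window size, and stride are assumptions, and the structural and latent extensions are not implemented.

```python
def sliding_window_svm_detect(image, extract_features, w, b,
                              win=(64, 64), stride=16, score_thresh=0.0):
    """Score every window with the linear SVM decision value w^T x + b and keep
    windows above the threshold. `image` is an H x W (x C) array and `w` a
    weight vector matching the feature dimension."""
    H, W = image.shape[:2]
    detections = []
    for y in range(0, H - win[1] + 1, stride):
        for x in range(0, W - win[0] + 1, stride):
            patch = image[y:y + win[1], x:x + win[0]]
            score = float(w @ extract_features(patch) + b)
            if score > score_thresh:
                detections.append((x, y, x + win[0], y + win[1], score))
    return detections
```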

약속된 제스처를 이용한 객체 인식 및 추적 (Object Detection Using Predefined Gesture and Tracking)

  • 배대희;이준환
    • 한국컴퓨터정보학회논문지 / Vol. 17, No. 10 / pp.43-53 / 2012
  • This paper proposes a user interface that uses an algorithm to find and track a predefined gesture on the screen. Motion regions are detected from difference images between the current frame and multiple previous frames, and the region performing the predefined gesture is recognized as the control target. In this way, the hand-motion region can be detected regardless of whether the user wears gloves, or of the user's ethnicity or skin color. In addition, an existing color-distribution tracking algorithm is improved to increase the accuracy of the centroid position when the tracked region crosses a similar-looking background. As a result, the recognition rate of the predefined hand gesture improved compared with the existing skin-color-based method, and the tracking recognition rate improved compared with the existing color-tracking algorithm.
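The color-distribution tracking step can be illustrated with a generic CAMShift-style OpenCV sketch, assuming a hue histogram of the gesture region has already been computed; this shows the standard routine, not the paper's improved algorithm.

```python
import cv2

def track_gesture_region(frame_hsv, track_window, roi_hist):
    """One tracking step: back-project the region's hue histogram onto the
    current HSV frame and shift the window with CamShift; returns the updated
    window and the centroid of the tracked color distribution."""
    prob = cv2.calcBackProject([frame_hsv], [0], roi_hist, [0, 180], 1)
    term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    rot_rect, track_window = cv2.CamShift(prob, track_window, term_crit)
    (cx, cy), _, _ = rot_rect
    return track_window, (cx, cy)
```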