• Title/Summary/Keyword: YOLO tracking


A climbing movement detection system through efficient cow behavior recognition based on YOLOX and OC-SORT (YOLOX와 OC-SORT 기반의 효율적인 소 행동 인식을 통한 승가 운동 감지시스템)

  • LI YU;NamHo Kim
    • Smart Media Journal
    • /
    • v.12 no.7
    • /
    • pp.18-26
    • /
    • 2023
  • In this study, we propose a cow behavior recognition system based on YOLOX and OC-SORT. YOLOX detects targets in real time and provides information on cow location and behavior. The OC-SORT module tracks cows in the video and assigns unique IDs. The quantitative analysis module analyzes the behavior and location information of the cows. Experimental results show that our system achieves high accuracy and precision in target detection and tracking. The average precision (AP) of YOLOX was 82.2%, the average recall (AR) was 85.5%, the number of parameters was 54.15M, and the computation was 194.16 GFLOPs. OC-SORT maintained high-precision real-time target tracking in complex environments and occlusion situations. By analyzing changes in cow movement and the frequency of mounting behavior, our system can help discern the estrus behavior of cows more accurately.
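The core of SORT-family trackers such as OC-SORT is matching each new frame's detections to existing tracks so that IDs stay stable across frames. The paper's actual OC-SORT also uses Kalman-filter motion prediction and observation-centric re-update; the sketch below (all names hypothetical, not the authors' code) shows only the greedy IoU association step at the heart of such trackers:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, iou_thresh=0.3):
    """Greedily match existing track boxes to new detections by IoU.

    tracks: dict of track_id -> box; detections: list of boxes.
    Returns (matches: track_id -> detection index, unmatched det indices)."""
    pairs = sorted(
        ((iou(box, det), tid, j)
         for tid, box in tracks.items()
         for j, det in enumerate(detections)),
        reverse=True,
    )
    matches, used_tracks, used_dets = {}, set(), set()
    for score, tid, j in pairs:
        if score < iou_thresh:
            break  # remaining pairs overlap too little to be the same object
        if tid in used_tracks or j in used_dets:
            continue
        matches[tid] = j
        used_tracks.add(tid)
        used_dets.add(j)
    unmatched = [j for j in range(len(detections)) if j not in used_dets]
    return matches, unmatched
```

Detections left unmatched would start new tracks; tracks with no match are kept alive briefly to survive short occlusions.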

Object Detection Based on Deep Learning Model for Two Stage Tracking with Pest Behavior Patterns in Soybean (Glycine max (L.) Merr.)

  • Yu-Hyeon Park;Junyong Song;Sang-Gyu Kim ;Tae-Hwan Jun
    • Proceedings of the Korean Society of Crop Science Conference
    • /
    • 2022.10a
    • /
    • pp.89-89
    • /
    • 2022
  • Soybean (Glycine max (L.) Merr.) is a representative food resource. To preserve the integrity of soybean, it is necessary to protect soybean yield and seed quality from the threats of various pests and diseases. Riptortus pedestris is a well-known insect pest that causes the greatest loss of soybean yield in South Korea. This pest not only directly reduces yields but also causes disorders and diseases in plant growth. Unfortunately, no resistant soybean resources have been reported. Therefore, it is necessary to identify the distribution and movement of Riptortus pedestris at an early stage to reduce the damage it causes. Conventionally, the diagnosis of agronomic traits related to pest outbreaks has been performed by the human eye. However, because human vision is subjective and inconsistent, this approach is time-consuming, labor-intensive, and requires the assistance of specialists. Therefore, the responses and behavior patterns of Riptortus pedestris to the scent of mixture R were visualized as a 3D model from the perspective of artificial intelligence. The movement patterns of Riptortus pedestris were analyzed using time-series image data, and classification was performed through visual analysis based on a deep learning model. In object tracking implemented with the YOLO series models, the movement paths of the pests show a negative reaction to mixture R in the video scenes. As a result of 3D modeling using the x, y, and z axes of the tracked objects, 80% of the subjects showed behavioral patterns consistent with the treatment of mixture R. These studies are being extended to the soybean field, and it will be possible to preserve soybean yield by applying a pest control platform at the early growth stage of soybeans.
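The abstract classifies each tracked insect's reaction to mixture R from its 3D movement path. As an illustrative sketch only (the paper does not describe its analysis at this level of detail, and the function and its labels are assumptions), one could compare each subject's distance to the scent source at the start and end of its track:

```python
import math

def reaction_to_scent(path, scent_pos):
    """Classify a tracked 3D path as approach or avoidance.

    path: list of (x, y, z) positions over time for one tracked subject;
    scent_pos: (x, y, z) of the scent source. A track ending farther from
    the source than it started is labeled an avoidance (negative) reaction."""
    start = math.dist(path[0], scent_pos)
    end = math.dist(path[-1], scent_pos)
    return "avoid" if end > start else "approach"
```

Applying this over all tracked subjects would give the kind of population-level ratio (e.g. 80% consistent with the treatment) reported above.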


Research on Drivable Road Area Recognition and Real-Time Tracking Techniques Based on YOLOv8 Algorithm (YOLOv8 알고리즘 기반의 주행 가능한 도로 영역 인식과 실시간 추적 기법에 관한 연구)

  • Jung-Hee Seo
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.19 no.3
    • /
    • pp.563-570
    • /
    • 2024
  • This paper proposes a method to recognize and track drivable lane areas to assist the driver. The main topic is the design of a deep learning-based network that predicts drivable road areas using computer vision and deep learning, based on images acquired in real time through a camera installed at the center of the windshield inside the vehicle. This study aims to develop a new model trained with data obtained directly from cameras using the YOLO algorithm. The model is expected to assist the driver by visualizing the exact location of the vehicle on the actual road, consistent with the actual image, and by displaying and tracking the drivable lane area. In the experiments, the drivable road area could be tracked in most cases, but in bad weather, such as heavy rain at night, lanes were sometimes not accurately recognized, so the model's performance needs to be improved to solve this problem.

Sub-Frame Analysis-based Object Detection for Real-Time Video Surveillance

  • Jang, Bum-Suk;Lee, Sang-Hyun
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.11 no.4
    • /
    • pp.76-85
    • /
    • 2019
  • We introduce a vision-based object detection method for real-time video surveillance systems in low-end edge computing environments. Recently, the accuracy of object detection has been improved by deep learning approaches such as the Region-based Convolutional Neural Network (R-CNN), which uses two stages for inference. On the other hand, one-stage detection algorithms such as Single-Shot Detection (SSD) and You Only Look Once (YOLO) have been developed at the expense of some accuracy and can be used for real-time systems. However, high-performance hardware such as General-Purpose computing on Graphics Processing Units (GPGPU) is still required to achieve excellent object detection performance and speed. To address this hardware requirement, which is burdensome for low-end edge computing environments, we propose a sub-frame analysis method for object detection. Specifically, we divide a whole image frame into smaller ones and then run inference on them with a Convolutional Neural Network (CNN)-based image detection network, which is much faster than a conventional network designed for full-frame images. With the proposed method, we reduced the computational requirement significantly without losing throughput or object detection accuracy.
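The sub-frame idea above (splitting a full frame into smaller tiles, inferring on each, and mapping detections back to full-frame coordinates) can be sketched roughly as follows; the tile size and helper names are assumptions for illustration, not the paper's code:

```python
def split_into_subframes(width, height, tile_w, tile_h):
    """Yield (x, y, w, h) sub-frame regions covering a width x height frame.

    Edge tiles are clipped so the tiling never reads past the frame border."""
    for y in range(0, height, tile_h):
        for x in range(0, width, tile_w):
            yield (x, y, min(tile_w, width - x), min(tile_h, height - y))

def to_frame_coords(tile_origin, box_in_tile):
    """Map a detection box (x1, y1, x2, y2) from tile-local coordinates
    back to full-frame coordinates by adding the tile's origin offset."""
    ox, oy = tile_origin
    x1, y1, x2, y2 = box_in_tile
    return (x1 + ox, y1 + oy, x2 + ox, y2 + oy)
```

A real tiling scheme would typically also overlap adjacent tiles slightly and merge duplicate boxes on the seams, so objects cut by a tile boundary are not missed.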

Remaining persons estimation system using object recognition (객체인식을 활용한 잔류인원 추정 시스템)

  • Seong-woo Lee;Gyung-hyung Lee;Jin-hoon Seok;Kyeong-seop Kim;Min-seo Jeon;Seung-oh Choo;Tae-jin Yun
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2023.01a
    • /
    • pp.269-270
    • /
    • 2023
  • When a disaster occurs, rescue teams have difficulty accurately identifying the people remaining in a specific area, such as inside a building or a subway station, who have failed to evacuate. To improve this, we developed a system that recognizes passersby using YOLO and DeepSORT, determines the number of people remaining in a specific area, and makes this information available through a server. The proposed method, using the real-time object recognition algorithm YOLOv4-tiny and the real-time object tracking algorithm DeepSORT, was implemented in an Ubuntu environment and applied with pedestrian movement paths in mind for indoor situations. The developed system distinguishes entry from exit by the direction of the recognized passerby objects and stores the data on a server. Accordingly, when a disaster occurs, the number of people remaining in an area can be determined, and the locations and number of people requiring rescue can be estimated quickly and efficiently.
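The entry/exit logic described above (classifying each tracked passerby's crossing direction to maintain a remaining-person count) could look roughly like this minimal sketch; the virtual counting line and class name are hypothetical, not the authors' implementation:

```python
class OccupancyCounter:
    """Count remaining persons in a zone from directional line crossings.

    Each tracked person reports a y-coordinate per frame. Crossing the
    virtual line downward counts as entering the zone; crossing upward
    counts as leaving."""

    def __init__(self, line_y):
        self.line_y = line_y
        self.last_y = {}   # track_id -> last observed y position
        self.remaining = 0

    def update(self, track_id, y):
        prev = self.last_y.get(track_id)
        if prev is not None:
            if prev < self.line_y <= y:    # crossed the line going in
                self.remaining += 1
            elif y < self.line_y <= prev:  # crossed the line going out
                self.remaining -= 1
        self.last_y[track_id] = y
        return self.remaining
```

Feeding the tracker's per-frame (ID, position) output into `update` keeps `remaining` as the current occupancy estimate that would be reported to the server.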


Cat Behavior Pattern Analysis and Disease Prediction System of Home CCTV Images using AI (AI를 이용한 홈CCTV 영상의 반려묘 행동 패턴 분석 및 질병 예측 시스템 연구)

  • Han, Su-yeon;Park, Dea-Woo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.9
    • /
    • pp.1266-1271
    • /
    • 2022
  • Cats retain strong wild instincts, so they tend to hide diseases well; by the time the guardian discovers that a cat has a disease, it may already have worsened. It will be of great help in treating a cat's disease if the owner can recognize the cat's polydipsia, polyuria, and frequent urination more quickly. In this paper, running on an artificial intelligence device, 1) an efficient version of DeepLabCut is used for pose estimation, 2) YOLO v4 for object detection, 3) LSTM for behavior prediction, and 4) BoT-SORT for object tracking. Using artificial intelligence technology, the system predicts the cat's polydipsia, polyuria, and frequency of urination through analysis of the cat's behavior patterns from home CCTV video and the weight sensor of the water bowl. In addition, through analysis of cat behavior patterns, we propose an application that reports disease predictions and abnormal behavior to the guardian, delivering them to the guardian's mobile device and to the server system.
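The abstract combines behavior analysis with a water-bowl weight sensor to flag polydipsia. A hypothetical sketch of one ingredient (turning periodic bowl-weight readings into a drinking-volume estimate, which is an assumption on our part rather than the paper's implementation) might be:

```python
def water_intake(weights):
    """Estimate water drunk (in grams) from periodic bowl-weight readings.

    Sums the weight drops between consecutive readings; weight increases
    (bowl refills) are ignored so they do not count as negative intake."""
    return sum(max(0, a - b) for a, b in zip(weights, weights[1:]))
```

Comparing such a daily intake estimate against a per-cat baseline is one plausible way an excessive-drinking alert could be triggered.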

Automatic identification and analysis of multi-object cattle rumination based on computer vision

  • Yueming Wang;Tiantian Chen;Baoshan Li;Qi Li
    • Journal of Animal Science and Technology
    • /
    • v.65 no.3
    • /
    • pp.519-534
    • /
    • 2023
  • Rumination in cattle is closely related to their health, which makes automatic monitoring of rumination an important part of smart pasture operations. However, manual monitoring of cattle rumination is laborious, and wearable sensors are often harmful to the animals. Thus, we propose a computer vision-based method to automatically identify multi-object cattle rumination and to calculate the rumination time and number of chews for each cow. The heads of the cattle in the video were first tracked with a multi-object tracking algorithm that combined the You Only Look Once (YOLO) algorithm with the kernelized correlation filter (KCF). Images of the head of each cow were saved at a fixed size and numbered. Then, a rumination recognition algorithm was constructed with parameters obtained using the frame difference method, and the rumination time and number of chews were calculated. The rumination recognition algorithm was used to analyze the head image of each cow to automatically detect multi-object cattle rumination. To verify the feasibility of this method, the algorithm was tested on multi-object cattle rumination videos, and the results were compared with those produced by human observation. The experimental results showed that the average error in rumination time was 5.902% and the average error in the number of chews was 8.126%. The rumination identification and calculation of rumination information are performed automatically by computer with no manual intervention, providing a new contactless rumination identification method for multiple cattle and technical support for smart pastures.
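The frame difference method mentioned above yields a per-frame motion signal for each tracked head region; counting chews then amounts to counting oscillations in that signal. The sketch below is a minimal assumed version of that final step (threshold value and signal definition are illustrative, not the paper's parameters):

```python
def count_chews(motion, threshold):
    """Count chewing cycles from a per-frame motion signal.

    motion: sequence of inter-frame difference magnitudes for one cow's
    head region (e.g. mean absolute pixel difference between consecutive
    frames). One chew is counted per rising edge: each time the signal
    climbs above the threshold after having been below it."""
    chews, above = 0, False
    for m in motion:
        if m >= threshold and not above:
            chews += 1
            above = True
        elif m < threshold:
            above = False
    return chews
```

Rumination time could be estimated similarly, by summing the duration of intervals in which such chewing cycles occur at a regular rate.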