• Title/Summary/Keyword: You only look once


Real time instruction classification system

  • Sang-Hoon Lee;Dong-Jin Kwon
    • International Journal of Internet, Broadcasting and Communication / v.16 no.3 / pp.212-220 / 2024
  • With the recent advancement of society, AI technology has made significant strides, especially in the fields of computer vision and voice recognition. This study introduces a system that leverages these technologies to recognize users through a camera and relay in-vehicle commands based on voice input. The system uses the YOLO (You Only Look Once) machine learning algorithm, widely used for object and entity recognition, to identify specific users. For voice command recognition, a machine learning model based on spectrogram voice analysis is employed to identify specific commands. This design aims to enhance security and convenience by preventing access to vehicles and IoT devices by anyone other than registered users. The system converts camera input data into YOLO inputs to determine whether a person is present. Additionally, it collects voice data through a microphone embedded in the device or computer, converting it into spectrogram data used as input for the voice recognition model. The camera image data and voice data undergo inference through pre-trained models, enabling the recognition of simple commands within a limited space based on the inference results. This study demonstrates the feasibility of constructing a device management system within a confined space that enhances security and user convenience through a simple real-time system model. Finally, our work aims to provide practical solutions in various application fields, such as smart homes and autonomous vehicles.
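
A rough sketch of the two input paths described above appears below: a pretrained YOLO person detector plus a spectrogram front end. The ultralytics package, the yolov8n.pt weights, and the window settings are assumptions for illustration; the paper does not publish its code or exact YOLO version.

```python
# A rough sketch of the two input paths. Assumptions: the ultralytics package,
# yolov8n.pt COCO weights, and the spectrogram window settings are illustrative;
# the paper does not publish its code or exact YOLO version.
import cv2
import numpy as np
from scipy import signal
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pretrained COCO weights; class 0 is "person"

def person_present(frame) -> bool:
    """Run YOLO on one camera frame and report whether a person is detected."""
    results = model(frame, verbose=False)
    return any(int(box.cls) == 0 for box in results[0].boxes)

def voice_to_spectrogram(waveform, sample_rate=16000):
    """Convert raw microphone samples into log-scaled spectrogram features."""
    _, _, sxx = signal.spectrogram(waveform, fs=sample_rate, nperseg=512)
    return 10 * np.log10(sxx + 1e-10)

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok and person_present(frame):
    mic_clip = np.random.randn(16000)          # stand-in for one second of audio
    features = voice_to_spectrogram(mic_clip)  # input to the command classifier
cap.release()
```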

The Study of Barista Robots Utilizing Collaborative Robotics and AI Technology (협동로봇과 AI 기술을 활용한 바리스타 로봇 연구)

  • Do Hyeong Kwon;Tae Myeong Ha;Jae Seong Lee;Yun Sang Jeong;Yeong Geon Kim;Hyeon Gak Kim;Seung Jun Song;Dae Gil O;Geonu Lee;Jae Won Jeong;Seungwoon Park;Chul-Hee Lee
    • Journal of Drive and Control / v.21 no.3 / pp.36-45 / 2024
  • Collaborative robots, designed for direct interaction with humans, have limited adaptability to environmental changes. This study addresses this limitation by implementing a barista robot system using AI technology. To overcome the limitations of traditional collaborative robots, a model is proposed that applies a real-time object detection algorithm to a 6-degree-of-freedom robot arm to recognize and control the position of randomly placed cups. A coffee ordering application is developed, allowing users to place orders through the app, which the robot arm then prepares automatically. The system is connected to ROS via TCP/IP socket communication, performing various tasks through state transitions and gripper control. Experimental results confirmed that the barista robot could autonomously handle the processes of ordering, preparing, and serving coffee.
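
The abstract states only that the ordering app talks to ROS over a TCP/IP socket. The sketch below shows what a minimal client of that kind could look like; the host, port, and newline-delimited JSON message format are hypothetical, not the authors' protocol.

```python
# A minimal sketch of the app-to-robot link. Assumptions: the host, port, and
# newline-delimited JSON message format are hypothetical; the paper states only
# that TCP/IP socket communication connects the ordering app to ROS.
import json
import socket

ROBOT_HOST, ROBOT_PORT = "192.168.0.10", 9000  # hypothetical ROS-side endpoint

def send_order(menu_item: str, quantity: int = 1) -> str:
    """Send one coffee order and return the robot's state-transition reply."""
    order = json.dumps({"cmd": "ORDER", "item": menu_item, "qty": quantity}) + "\n"
    with socket.create_connection((ROBOT_HOST, ROBOT_PORT), timeout=5.0) as sock:
        sock.sendall(order.encode("utf-8"))
        return sock.recv(1024).decode("utf-8")

# Requires the robot-side server to be listening, e.g.:
# print(send_order("americano"))
```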

Artificial intelligence in colonoscopy: from detection to diagnosis

  • Eun Sun Kim;Kwang-Sig Lee
    • The Korean Journal of Internal Medicine / v.39 no.4 / pp.555-562 / 2024
  • This study reviews recent progress in artificial intelligence for colonoscopy, from detection to diagnosis. The source of data was 27 original studies in PubMed. The search terms were "colonoscopy" (title) and "deep learning" (abstract). The eligibility criteria were: (1) the dependent variable of gastrointestinal disease; (2) the interventions of deep learning for classification, detection and/or segmentation for colonoscopy; (3) the outcomes of accuracy, sensitivity, specificity, area under the curve (AUC), precision, F1, intersection over union (IoU), Dice and/or inference frames per second (FPS); (4) the publication year of 2021 or later; (5) the publication language of English. Based on the results of this study, different deep learning methods would be appropriate for different colonoscopy tasks, e.g., EfficientNet with neural architecture search (AUC 99.8%) for classification, You Only Look Once with an instance tracking head (F1 96.3%) for detection, and U-Net with dense-dilation-residual blocks (Dice 97.3%) for segmentation. The reported performance measures varied within 74.0-95.0% for accuracy, 60.0-93.0% for sensitivity, 60.0-100.0% for specificity, 71.0-99.8% for AUC, 70.1-93.3% for precision, 81.0-96.3% for F1, 57.2-89.5% for IoU, 75.1-97.3% for Dice, and 66-182 for FPS. In conclusion, artificial intelligence provides an effective, non-invasive decision support system for colonoscopy, from detection to diagnosis.
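
For readers comparing the reported numbers, the snippet below gives standard NumPy definitions of the IoU and Dice metrics the review relies on; it is illustrative and not taken from any of the reviewed studies.

```python
# Illustrative NumPy definitions of two metrics the review reports, computed on
# binary masks; not code from any of the reviewed studies.
import numpy as np

def iou(pred, target):
    """Intersection over union of two boolean masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

def dice(pred, target):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2 * inter / total if total else 1.0

pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True
target = np.zeros((8, 8), bool); target[3:7, 3:7] = True
print(f"IoU={iou(pred, target):.3f}, Dice={dice(pred, target):.3f}")
```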

A Comparative Study of Deep Learning Techniques for Alzheimer's disease Detection in Medical Radiography

  • Amal Alshahrani;Jenan Mustafa;Manar Almatrafi;Layan Albaqami;Raneem Aljabri;Shahad Almuntashri
    • International Journal of Computer Science & Network Security / v.24 no.5 / pp.53-63 / 2024
  • Alzheimer's disease is a brain disorder that worsens over time and affects millions of people around the world. It leads to a gradual deterioration in memory, thinking ability, and behavioral and social skills until the person loses the ability to adapt to society. Technological progress in medical imaging and the use of artificial intelligence have made it possible to detect Alzheimer's disease from medical images such as magnetic resonance imaging (MRI). Deep learning algorithms, especially convolutional neural networks (CNNs), have shown great success in analyzing medical images for disease diagnosis and classification. Because CNNs can recognize patterns and objects in images, they are ideally suited for this study. In this paper, we compare the performance of Alzheimer's disease detection using two deep learning methods: You Only Look Once (YOLO), a CNN-based object recognition algorithm, and the Visual Geometry Group network (VGG16), a deep convolutional neural network primarily used for image classification. We compare our results using these modern models instead of using a plain CNN as in previous research. The results showed different levels of accuracy across the YOLO versions and the VGG16 model. YOLO v5 reached 56.4% accuracy at 50 epochs and 61.5% accuracy at 100 epochs. YOLO v8, used for classification, reached 84% overall accuracy at 100 epochs. YOLO v9, used for object detection, reached an overall accuracy of 84.6%. The VGG16 model reached 99% training accuracy after 25 epochs but only 78% testing accuracy. Hence, the best model overall is YOLO v9, with the highest overall accuracy of 86.1%.
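
A minimal sketch of how a YOLOv8 classification model like the one above might be fine-tuned, assuming the ultralytics API and a hypothetical "alzheimer_mri" dataset folder in the usual train/val layout; the paper does not publish its training code.

```python
# A minimal sketch of YOLOv8 classification fine-tuning. Assumptions: the
# ultralytics API and a hypothetical "alzheimer_mri" folder in the usual
# train/val image-folder layout; the paper does not publish its training code.
from ultralytics import YOLO

model = YOLO("yolov8n-cls.pt")                            # pretrained classifier
model.train(data="alzheimer_mri", epochs=100, imgsz=224)  # 100 epochs, as reported
metrics = model.val()                                     # accuracy on the val split
print(metrics.top1)
```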

A Study on Vehicle License Plate Recognition System through Fake License Plate Generator in YOLOv5 (YOLOv5에서 가상 번호판 생성을 통한 차량 번호판 인식 시스템에 관한 연구)

  • Ha, Sang-Hyun;Jeong, Seok Chan;Jeon, Young-Joon;Jang, Mun-Seok
    • Journal of the Korean Society of Industry Convergence / v.24 no.6_2 / pp.699-706 / 2021
  • Existing license plate recognition systems rely on optical character recognition, but recent studies have proposed deep learning methods because OCR suffers from image-quality problems and misrecognition of Korean characters. Deep learning requires large amounts of data, yet license plate images are difficult to collect due to the Personal Information Protection Act, and the labeling work needed to annotate the location of each plate is also time-consuming. Therefore, in this paper, to solve this problem, five types of license plates were created using a virtual Korean license plate generation program that follows the notice of the Ministry of Land, Infrastructure and Transport. The generated plates are then composited onto the plate regions of collectable vehicle images to construct 10,147 training samples for deep learning. The training data assign license plates, Korean characters, and digits to individual classes, which are learned using YOLOv5. Since the proposed method recognizes characters and digits individually, plates can still be recognized even if the plate standard changes or the number of characters increases, as long as the font does not change. Experiments yielded an accuracy of 96.82%, and the method can be applied not only to the learned plates but also to new plate types such as newly issued and eco-friendly license plates.
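
The core trick is that compositing a generated plate onto a vehicle image yields the YOLO label for free, since the paste location is known. A minimal Pillow sketch of that idea follows; the file names and plate bounding box are hypothetical, and the paper's generator program is not published.

```python
# A minimal Pillow sketch of the compositing idea. Assumptions: the file names
# and the plate bounding box are hypothetical; the paper's generator program and
# exact pipeline are not published.
from PIL import Image

vehicle = Image.open("vehicle.jpg")
plate = Image.open("generated_plate.png")  # output of the virtual plate generator

x, y, w, h = 410, 520, 160, 80             # hypothetical plate region in the photo
vehicle.paste(plate.resize((w, h)), (x, y))
vehicle.save("synthetic_training_sample.jpg")

# Because the paste location is known, the YOLOv5 label ("class cx cy bw bh",
# normalized) comes for free with every synthetic sample.
img_w, img_h = vehicle.size
print(f"0 {(x + w / 2) / img_w:.6f} {(y + h / 2) / img_h:.6f} "
      f"{w / img_w:.6f} {h / img_h:.6f}")
```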

A deep learning-based approach for feeding behavior recognition of weanling pigs

  • Kim, MinJu;Choi, YoHan;Lee, Jeong-nam;Sa, SooJin;Cho, Hyun-chong
    • Journal of Animal Science and Technology / v.63 no.6 / pp.1453-1463 / 2021
  • Feeding is the most important behavior representing the health and welfare of weanling pigs. Early detection of feed refusal is crucial for controlling disease in its initial stages and for detecting empty feeders so that feed can be added in a timely manner. This paper proposes a real-time technique for the detection and recognition of small pigs using a deep-learning-based method. The proposed model focuses on detecting pigs at a feeder in a feeding position. Conventional methods detect pigs and then classify them into different behavior gestures; in contrast, the proposed method combines these two tasks into a single process that detects only feeding behavior, increasing detection speed. Considering the significant differences between pig behaviors at different sizes, adaptive adjustments are introduced into a you-only-look-once (YOLO) model, including an angle optimization strategy between the head and body for detecting a head in a feeder. According to the experimental results, the method detects feeding behavior and screens out non-feeding positions with 95.66%, 94.22%, and 96.56% average precision (AP) at an intersection-over-union (IoU) threshold of 0.5 for YOLOv3, YOLOv4, and a model with an additional layer and the proposed activation function, respectively. Drinking behavior was detected with 86.86%, 89.16%, and 86.41% AP at a 0.5 IoU threshold for YOLOv3, YOLOv4, and the proposed activation function, respectively. In terms of detection and classification, our results demonstrate that the proposed method yields higher precision and recall than conventional methods.
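
To make the single-pass idea concrete: if the detector's classes are behaviors rather than just "pig", one inference yields both location and behavior. The sketch below assumes the ultralytics API and hypothetical custom weights; the paper itself modified YOLOv3/v4 rather than using this interface.

```python
# A sketch of the single-pass idea: the detector's classes are behaviors, so one
# inference yields both location and behavior. Assumptions: the ultralytics API
# and hypothetical custom weights; the paper modified YOLOv3/v4 rather than
# using this interface.
from ultralytics import YOLO

BEHAVIORS = {0: "feeding", 1: "drinking", 2: "non-feeding"}
model = YOLO("pig_behavior.pt")  # hypothetical weights trained on behavior classes

results = model("pen_camera_frame.jpg", verbose=False)
for box in results[0].boxes:
    if BEHAVIORS[int(box.cls)] == "feeding":  # no second classification stage
        print(f"feeding pig at {box.xyxy[0].tolist()} (conf {float(box.conf):.2f})")
```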

Dynamic characteristics monitoring of wind turbine blades based on improved YOLOv5 deep learning model

  • W.H. Zhao;W.R. Li;M.H. Yang;N. Hong;Y.F. Du
    • Smart Structures and Systems / v.31 no.5 / pp.469-483 / 2023
  • The dynamic characteristics of wind turbine blades are usually monitored by contact sensors, which suffer from high cost, difficult installation, easy damage to the structure, and difficult signal transmission. In view of these problems, a non-contact dynamic characteristic monitoring method for wind turbine blades is proposed based on computer vision technology and an improved YOLOv5 (You Only Look Once v5) deep learning model. First, the original YOLOv5l model with the CSP (Cross Stage Partial) structure is improved by introducing the CSP2_2 structure, which reduces the number of residual components to improve network training speed. On this basis, combined with the DeepSORT algorithm, the accuracy of structural displacement monitoring is improved. Secondly, because a deep learning sample dataset is difficult to collect, Blender software is used to model the wind turbine structure under varied conditions, illumination, and other environments similar to practical engineering settings; combined with image augmentation techniques, this yields a modeling-based dataset augmentation method. Finally, the feasibility of the proposed algorithm is verified by experiments, followed by an analysis of the influence of YOLOv5 variants, lighting conditions, and viewing angles on the recognition results. The results show that the improved YOLOv5 deep learning model not only performs well compared with many other YOLOv5 variants, but also achieves high accuracy in vibration monitoring across different environments. The method can accurately identify the dynamic characteristics of wind turbine blades and can therefore provide a reference for evaluating blade condition.
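
Once the improved detector and tracker return a bounding box per frame, displacement follows from the box centers. A minimal sketch under that assumption (the paper's own tracking code is not published; the pixel-to-millimeter scale is hypothetical):

```python
# A minimal sketch of the displacement-monitoring step. Assumption: per-frame
# target boxes from the improved YOLOv5 + DeepSORT tracker are already
# available; the paper's own tracking code is not published.
import numpy as np

def boxes_to_displacement(boxes, mm_per_px):
    """Turn tracked boxes (x1, y1, x2, y2) per frame into a displacement signal."""
    centers = np.array([[(x1 + x2) / 2, (y1 + y2) / 2] for x1, y1, x2, y2 in boxes])
    return (centers - centers[0]) * mm_per_px  # displacement relative to frame 0

boxes = [(100, 50, 140, 90), (102, 50, 142, 90), (105, 51, 145, 91)]
disp = boxes_to_displacement(boxes, mm_per_px=0.8)
# An FFT of disp over time would then expose the blade's vibration frequencies.
print(disp)
```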

Automatic identification and analysis of multi-object cattle rumination based on computer vision

  • Yueming Wang;Tiantian Chen;Baoshan Li;Qi Li
    • Journal of Animal Science and Technology / v.65 no.3 / pp.519-534 / 2023
  • Rumination in cattle is closely related to their health, which makes the automatic monitoring of rumination an important part of smart pasture operations. However, manual monitoring of cattle rumination is laborious, and wearable sensors are often harmful to the animals. We therefore propose a computer-vision-based method to automatically identify multi-object cattle rumination and to calculate the rumination time and number of chews for each cow. The heads of the cattle in the video are first tracked with a multi-object tracking algorithm that combines the You Only Look Once (YOLO) algorithm with the kernelized correlation filter (KCF). Images of each cow's head are saved at a fixed size and numbered. A rumination recognition algorithm is then constructed with parameters obtained using the frame difference method, and rumination time and the number of chews are calculated. The rumination recognition algorithm analyzes the head image of each cow to automatically detect multi-object cattle rumination. To verify the feasibility of this method, the algorithm was tested on multi-object cattle rumination videos, and the results were compared with those produced by human observation. The experimental results showed an average error of 5.902% in rumination time and 8.126% in the number of chews. Rumination identification and the calculation of rumination information are performed automatically by computer, with no manual intervention. This provides a new contactless rumination identification method for multiple cattle and technical support for smart pastures.
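
A minimal sketch of the frame-difference step on the cropped head images, assuming OpenCV and a hypothetical motion threshold; the paper's tuned parameters are not reported in the abstract.

```python
# A minimal sketch of the frame-difference step on cropped head images.
# Assumptions: OpenCV and a hypothetical motion threshold; the paper's tuned
# parameters are not reported in the abstract.
import cv2
import numpy as np

def motion_signal(gray_frames):
    """Mean absolute inter-frame difference; the signal peaks during chewing."""
    return [float(np.mean(cv2.absdiff(a, b)))
            for a, b in zip(gray_frames, gray_frames[1:])]

def count_chews(sig, threshold=6.0):
    """Count rising edges above the threshold; roughly one edge per chew."""
    above = [s > threshold for s in sig]
    return sum(1 for prev, cur in zip(above, above[1:]) if cur and not prev)

frames = [np.random.randint(0, 255, (64, 64), np.uint8) for _ in range(10)]
print(count_chews(motion_signal(frames)))
```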

Real-time automated detection of construction noise sources based on convolutional neural networks

  • Jung, Seunghoon;Kang, Hyuna;Hong, Juwon;Hong, Taehoon;Lee, Minhyun;Kim, Jimin
    • International Conference on Construction Engineering and Project Management / 2020.12a / pp.455-462 / 2020
  • Noise, i.e., unwanted sound, is a serious pollutant that can affect human health as well as the working and living environment. However, current noise management on construction projects is generally conducted only after the noise exceeds the regulation standard, which increases conflicts with inhabitants near the construction site and threatens the safety and productivity of construction workers. To overcome the limitations of current noise management methods, the activities of construction equipment, the main source of construction noise, need to be managed in real time throughout the construction period. Therefore, this paper proposes a framework for automatically detecting noise sources on construction sites in real time based on convolutional neural networks (CNNs), in four steps: (i) Step 1: definition of the noise sources; (ii) Step 2: data preparation; (iii) Step 3: noise source classification using the audio CNN; and (iv) Step 4: noise source detection using the visual CNN. The short-time Fourier transform (STFT) and temporal image processing are used to capture the temporal features of the audio and visual data. In addition, the AlexNet and You Only Look Once v3 (YOLOv3) algorithms are adopted to classify and detect the noise sources in real time. As a result, the proposed framework is expected to immediately identify construction activities acting as noise sources in video of the construction site. The framework could help construction environmental managers efficiently identify and control noise by automatically detecting its sources among the many activities carried out by various types of construction equipment. Thereby, not only can conflicts between inhabitants and construction companies caused by construction noise be prevented, but the noise-related health risks and productivity degradation for construction workers and nearby inhabitants can also be minimized.
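
Step 3's audio front end rests on the STFT. Below is a minimal sketch of turning a clip into a log-magnitude time-frequency "image" for the audio CNN, with assumed window settings (the abstract does not report them).

```python
# A minimal sketch of the STFT front end for Step 3. Assumptions: SciPy's STFT
# and a one-second 16 kHz clip; the abstract does not report window settings.
import numpy as np
from scipy import signal

def stft_image(waveform, fs=16000, nperseg=512):
    """Log-magnitude STFT of an audio clip, as a 2-D input for the audio CNN."""
    _, _, zxx = signal.stft(waveform, fs=fs, nperseg=nperseg)
    return 20 * np.log10(np.abs(zxx) + 1e-10)

clip = np.random.randn(16000)   # stand-in for one second of construction-site audio
print(stft_image(clip).shape)   # (freq bins, time frames)
```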


Multi-Class Multi-Object Tracking in Aerial Images Using Uncertainty Estimation

  • Hyeongchan Ham;Junwon Seo;Junhee Kim;Chungsu Jang
    • Korean Journal of Remote Sensing / v.40 no.1 / pp.115-122 / 2024
  • Multi-object tracking (MOT) is a vital component in understanding the surrounding environments. Previous research has demonstrated that MOT can successfully detect and track surrounding objects. Nonetheless, inaccurate classification of the tracking objects remains a challenge that needs to be solved. When an object approaching from a distance is recognized, not only detection and tracking but also classification to determine the level of risk must be performed. However, considering the erroneous classification results obtained from the detection as the track class can lead to performance degradation problems. In this paper, we discuss the limitations of classification in tracking under the classification uncertainty of the detector. To address this problem, a class update module is proposed, which leverages the class uncertainty estimation of the detector to mitigate the classification error of the tracker. We evaluated our approach on the VisDrone-MOT2021 dataset,which includes multi-class and uncertain far-distance object tracking. We show that our method has low certainty at a distant object, and quickly classifies the class as the object approaches and the level of certainty increases.In this manner, our method outperforms previous approaches across different detectors. In particular, the You Only Look Once (YOLO)v8 detector shows a notable enhancement of 4.33 multi-object tracking accuracy (MOTA) in comparison to the previous state-of-the-art method. This intuitive insight improves MOT to track approaching objects from a distance and quickly classify them.