• Title/Summary/Keyword: AI camera

Search results: 79

A Fire Detection System based on YOLOv5 using Web Camera (웹카메라를 이용한 YOLOv5 기반 화재 감지 시스템)

  • Park, Dae-heum;Jang, Si-woong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.69-71
    • /
    • 2022
  • Today, the AI market has grown rapidly with the development of AI, and one of its most advanced areas is image-based object detection, where many detection models are built on YOLOv5. However, most AI object detection focuses on objects with fixed, stereotyped shapes. To recognize unstructured data such as fire, the object must be learned and filtered so that it can be recognized. Therefore, in this paper, a fire monitoring system using YOLOv5 was designed to detect and analyze fires as unstructured data, and ways to improve the fire object detection model are suggested. (An illustrative detection-loop sketch follows this entry.)

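The paper describes, but does not include, a pipeline in which a YOLOv5 model is run over web-camera frames. A minimal sketch of such a loop is shown below; the fire-trained weights file `fire_best.pt` and the confidence threshold are hypothetical placeholders, not details from the paper.

```python
# Minimal sketch: run a (hypothetically) fire-trained YOLOv5 model on webcam frames.
import cv2
import torch

# "fire_best.pt" is a placeholder for custom fire-detection weights.
model = torch.hub.load("ultralytics/yolov5", "custom", path="fire_best.pt")
model.conf = 0.4  # assumed confidence threshold

cap = cv2.VideoCapture(0)  # default web camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # YOLOv5 expects RGB input
    results = model(rgb)
    # Each row of results.xyxy[0] is [x1, y1, x2, y2, confidence, class].
    for *box, conf, cls in results.xyxy[0].tolist():
        x1, y1, x2, y2 = map(int, box)
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
        cv2.putText(frame, f"fire {conf:.2f}", (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
    cv2.imshow("fire monitor", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```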

Object detection and tracking using a high-performance artificial intelligence-based 3D depth camera: towards early detection of African swine fever

  • Ryu, Harry Wooseuk;Tai, Joo Ho
    • Journal of Veterinary Science
    • /
    • v.23 no.1
    • /
    • pp.17.1-17.10
    • /
    • 2022
  • Background: Inspection of livestock farms using surveillance cameras is emerging as a means of early detection of transboundary animal diseases such as African swine fever (ASF). Object tracking, a developing technology derived from object detection, aims at the consistent identification of individual objects on farms. Objectives: This study was conducted as a preliminary investigation for practical application to livestock farms. Using a high-performance artificial intelligence (AI)-based 3D depth camera, the aim was to establish a pathway for utilizing AI models to perform advanced object tracking. Methods: Multiple crossovers by two humans were simulated to investigate the potential of object tracking, with consistent identification after crossing over taken as evidence of successful tracking. Two AI models, a fast model and an accurate model, were tested and compared with regard to their 3D object tracking performance. Finally, a recording of a pig pen was also processed with the aforementioned AI models to test the possibility of 3D object detection. Results: Both AI models successfully processed the footage and provided a 3D bounding box, identification number, and distance from the camera for each individual human. The accurate detection model showed stronger evidence of 3D object tracking than the fast detection model and demonstrated potential applicability to pigs as livestock. Conclusions: Preparing a custom dataset to train the AI models on an appropriate farm is required for proper 3D object detection and for object tracking of pigs to operate at an ideal level. This will allow farms to transition smoothly from traditional methods to ASF-preventing precision livestock farming.
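
Object tracking in the sense used above means carrying the same identification number across frames. A minimal, purely illustrative way to do this on 2D bounding boxes is greedy IoU matching between consecutive frames; the study itself evaluated two AI models on a 3D depth camera, not the code below.

```python
# Illustrative only: keep object IDs consistent across frames by greedy IoU matching.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def assign_ids(tracks, detections, next_id, iou_thr=0.3):
    """Carry existing IDs onto new detections; unmatched detections start new IDs."""
    updated = {}
    for box in detections:
        candidates = [(iou(tbox, box), tid) for tid, tbox in tracks.items()
                      if tid not in updated]
        best_iou, best_id = max(candidates, default=(0.0, None))
        if best_id is not None and best_iou >= iou_thr:
            updated[best_id] = box          # same object, keep its ID
        else:
            updated[next_id] = box          # new object, new ID
            next_id += 1
    return updated, next_id
```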

Development of a Slope Condition Analysis System using IoT Sensors and AI Camera (IoT 센서와 AI 카메라를 융합한 급경사지 상태 분석 시스템 개발)

  • Seungjoo Lee;Kiyen Jeong;Taehoon Lee;YoungSeok Kim
    • Journal of the Korean Geosynthetics Society
    • /
    • v.23 no.2
    • /
    • pp.43-52
    • /
    • 2024
  • Recent abnormal climate conditions have increased the risk of slope collapses, which frequently cause significant loss of life and property because early prediction and warning dissemination are absent. In this paper, we develop a slope condition analysis system that combines IoT sensors and an AI-based camera to assess the condition of steep slopes. To build the system, we designed the hardware and firmware of the measurement sensors with the ground conditions of slopes in mind, designed AI-based image analysis algorithms, and developed the prediction and warning solution and system. We aimed to minimize sensor data errors by integrating IoT sensor data with AI camera image analysis, ultimately enhancing data reliability. We also evaluated the accuracy (reliability) by applying the system to actual slopes. As a result, sensor measurement errors were kept within 0.1°, and the data transmission rate exceeded 95%. Moreover, the AI-based image analysis system demonstrated nighttime partial-recognition rates of over 99%, indicating excellent performance even in low-light conditions. It is anticipated that this research can be applied to slope condition analysis and smart maintenance management across various Social Overhead Capital (SOC) facilities.

Maritime Search And Rescue Drone Using Artificial Intelligence (인공지능을 이용한 해양구조 드론)

  • Shin, Gi-hwan;Kim, Jin-hong;Park, Han-gyu;Kang, Sun-kyong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.688-689
    • /
    • 2022
  • This paper proposes the development of an AI drone equipped with motion detection and a thermal imaging camera to rescue drowning victims quickly. Currently, when a drowning accident occurs, a large amount of manpower must be committed to search operations to find the person in need. This process takes too long, and night searches in particular are difficult for people to conduct directly. To address this, we propose using AI drones.


Analysis of Space Use Patterns of Public Library Users through AI Cameras (AI 카메라를 활용한 공공도서관 이용자의 공간이용행태 분석 연구)

  • Gyuhwan Kim;Do-Heon Jeong
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.57 no.4
    • /
    • pp.333-351
    • /
    • 2023
  • This study investigates user behavior in library spaces through the lens of AI camera analytics. By leveraging the face recognition and tracking capabilities of AI cameras, we accurately identified the gender and age of visitors and meticulously collected video data to track their movements. Our findings revealed that female users slightly outnumbered male users and that the dominant age group was individuals in their 30s. User visits peaked from Tuesday to Friday, with the highest footfall recorded between 14:00 and 15:00, while visits decreased over the weekend. Most visitors utilized one or two specific spaces, frequently consulting the information desk for inquiries, checking out or returning items, or using the rest area for relaxation. Roughly twice as many visitors used the library stacks as did not. The most frequented subject areas were Philosophy (100), Religion (200), Social Sciences (300), Science (400), Technology (500), and Literature (800), with Literature (800) and Religion (200) showing the most intersections with other areas. By categorizing users into five clusters based on space utilization patterns, we discerned varying objectives and subject interests, providing insights for future library service enhancements. Moreover, the study underscores the need to address the associated costs and privacy concerns when considering the broader application of AI camera analytics in library settings.
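
The five-cluster grouping of visitors mentioned above can be reproduced, in outline, with a standard clustering step over per-visitor space-use counts. The sketch below uses synthetic data and hypothetical zone names purely for illustration; the paper's actual features and clustering method are not reproduced here.

```python
# Illustrative sketch: cluster visitors into five groups by their space-use patterns.
# The zones and the Poisson-generated visit counts are hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
zones = ["info desk", "stacks", "reading room", "rest area", "multimedia room"]
visits = rng.poisson(lam=[2, 3, 1, 2, 1], size=(500, len(zones))).astype(float)

X = StandardScaler().fit_transform(visits)          # normalize per-zone counts
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

for c in range(5):
    share = np.mean(labels == c)
    profile = visits[labels == c].mean(axis=0).round(2)
    print(f"cluster {c}: {share:.1%} of visitors, mean visits per zone {profile}")
```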

Vision Algorithm Content Development Using an AI Camera Block (AI Camera Block을 사용한 비전 알고리즘 콘텐츠 개발)

  • Lim, Tae Yoon;An, Jae-Yong;Oh, Junhyeok;Kim, Dong-Yeon;Won, JinSub;Hwang, Jun Ho;Do, Youngchae;Woo, Deok Ha;Lee, Seok
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2019.10a
    • /
    • pp.840-843
    • /
    • 2019
  • As the IoT industry develops, smart toys that combine conventional toys with IoT technology are gaining attention. Unlike conventional passive toys, smart toys allow interaction between toys and, using electronic sensors, can provide coding-based content to the children who play with them. Existing smart toys stimulate curiosity at first, but interest tends to fall off once children become accustomed to them. In this paper, to increase the fun factor of existing smart toys and to develop more diverse content, we developed new content using an AI camera block that integrates Artificial Intelligence (AI) functionality into a smart toy.

The Camera Tracking of Real-Time Moving Object on UAV Using the Color Information (컬러 정보를 이용한 무인항공기에서 실시간 이동 객체의 카메라 추적)

  • Hong, Seung-Beom
    • Journal of the Korean Society for Aviation and Aeronautics
    • /
    • v.18 no.2
    • /
    • pp.16-22
    • /
    • 2010
  • This paper proposes a real-time moving-object tracking system for a UAV using color information. Previous object tracking research has focused on recognizing one or more moving objects with a fixed camera, including objects in complex background environments. In contrast, this paper implements a moving-object tracking system that uses the camera's pan/tilt function after extracting the object's region. The system first detects the moving object with an RGB/HSI color model and obtains its coordinates in the acquired image using a compact bounding box. Second, the camera origin is aligned with the top-left coordinate of the compact bounding box, and the moving object is tracked using the camera's pan/tilt function. The system is implemented with LabVIEW 8.6 and NI Vision Builder AI from National Instruments and shows good camera-tracking performance in a laboratory environment.
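
The per-frame step described above (color-model segmentation, compact bounding box, then a pan/tilt correction derived from the box position) was implemented in LabVIEW 8.6 with NI Vision Builder AI. A rough OpenCV/Python equivalent is sketched below; the HSV thresholds are hypothetical, and issuing the actual pan/tilt command is left to the UAV's gimbal interface.

```python
# Rough OpenCV sketch of the color-based tracking step: threshold in HSV, take the
# largest blob's bounding box, and report its top-left offset from the image origin
# as the pan/tilt error. Not the paper's LabVIEW / NI Vision Builder AI implementation.
import cv2
import numpy as np

LOWER = np.array([100, 120, 70])   # hypothetical HSV lower bound (blue-ish target)
UPPER = np.array([130, 255, 255])  # hypothetical HSV upper bound

def track_step(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None  # nothing matching the target color in this frame
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    # The box's top-left offset from the image origin drives the pan/tilt commands.
    return {"box": (x, y, w, h), "pan_error": x, "tilt_error": y}
```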

IoT Open-Source and AI based Automatic Door Lock Access Control Solution

  • Yoon, Sung Hoon;Lee, Kil Soo;Cha, Jae Sang;Mariappan, Vinayagam;Young, Ko Eun;Woo, Deok Gun;Kim, Jeong Uk
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.12 no.2
    • /
    • pp.8-14
    • /
    • 2020
  • Recently, demand has increased for integrated access control systems capable of user recognition, door control, and facility operations control for smart building automation. Door lock access control solutions currently on the market need improvement, since security is compromised when a password or digital key is exposed to strangers. At present, access control solution providers focus on automatic access control systems using radio-frequency (RF) technologies such as Bluetooth and WiFi. Existing automatic door access control technologies require an additional hardware interface and remain vulnerable to security threats. This paper proposes a user identification and authentication solution for automatic door lock control using camera-based visible light communication (VLC) technology. The proposed approach uses cameras installed in the building facility, users' smart devices, and IoT open-source controller-based LED light sensors installed in the building infrastructure. The LED light sensors installed in the facility transmit authorized user and facility information as a color grid code; the smart device camera decodes the user information, verifies it against stored user information, indicates the authentication status to the user, and sends an authentication acknowledgement to the camera integrated with the facility door lock to control the door lock operations. The camera-based VLC receiver uses artificial intelligence (AI) methods to decode the VLC data and improve VLC performance. This paper implements a testbed model using an IoT open-source-based LED light sensor with a CCTV camera and user smartphone devices. The experimental results are verified with a custom convolutional neural network (CNN)-based AI technique for VLC decoding on smart devices and PC-based CCTV monitoring solutions. The achieved experimental results confirm that the proposed solution is effective and robust for automatic door access control.
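
The abstract mentions a custom CNN used on the receiver side to decode the VLC color grid code, but does not give its architecture. The sketch below shows one plausible shape for such a classifier; the input resolution, layer sizes, and number of code classes are assumptions, not the paper's design.

```python
# Hypothetical sketch: a small CNN that classifies a cropped color-grid patch
# into one of N transmitted codes. Layer sizes and 32x32 input are assumptions.
import torch
import torch.nn as nn

class GridCodeCNN(nn.Module):
    def __init__(self, num_codes: int = 16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_codes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Example: decode one 32x32 RGB patch (dummy tensor here) into a code index.
patch = torch.rand(1, 3, 32, 32)
code = GridCodeCNN()(patch).argmax(dim=1)
```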

An Improved Fast Camera Calibration Method for Mobile Terminals

  • Guan, Fang-li;Xu, Ai-jun;Jiang, Guang-yu
    • Journal of Information Processing Systems
    • /
    • v.15 no.5
    • /
    • pp.1082-1095
    • /
    • 2019
  • Camera calibration is an important part of machine vision and close-range photogrammetry. Since current calibration methods fail to obtain ideal internal and external camera parameters efficiently with the limited computing resources of mobile terminals, this paper proposes an improved fast camera calibration method for mobile terminals. Building on the traditional camera calibration method, the new method introduces second-order radial distortion and tangential distortion models to establish a camera model with nonlinear distortion terms. The nonlinear least-squares Levenberg-Marquardt (L-M) algorithm is then used to optimize the parameter iteration, so the new method can quickly obtain high-precision internal and external camera parameters. The experimental results show that the new method improves the efficiency and precision of camera calibration. A terminal simulation experiment on a PC indicates that the time consumed by parameter iteration was reduced from 0.220 seconds to 0.063 seconds (0.234 seconds on mobile terminals) and the average reprojection error was reduced from 0.25 pixel to 0.15 pixel. Therefore, the new method is a suitable mobile-terminal camera calibration method that can expand the application range of 3D reconstruction and close-range photogrammetry technology on mobile terminals.
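
The camera model described above (second-order radial plus tangential distortion, refined by Levenberg-Marquardt iteration) is the same family of model estimated by OpenCV's standard chessboard calibration, sketched below as a point of reference. The board size and image folder are assumptions, and this is not the paper's optimized mobile implementation.

```python
# Reference sketch: estimate intrinsics plus radial/tangential distortion from
# chessboard images with OpenCV, which refines parameters via Levenberg-Marquardt.
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners of an assumed 9x6 chessboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib_images/*.jpg"):   # assumed image folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("reprojection RMS:", rms)
print("intrinsics K:\n", K)
print("distortion [k1 k2 p1 p2 k3]:", dist.ravel())
```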

Danger Alert Surveillance Camera Service using AI Image Recognition technology (인공지능 이미지 인식 기술을 활용한 위험 알림 CCTV 서비스)

  • Lee, Ha-Rin;Kim, Yoo-Jin;Lee, Min-Ah;Moon, Jae-Hyun
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2020.11a
    • /
    • pp.814-817
    • /
    • 2020
  • The number of single-person households is increasing every year, and there are serious concerns about crime against, and the safety of, single-person households. In particular, crimes targeting women are increasing. Whereas the home surveillance camera applications mostly used by single-person households only provide intrusion detection, this service utilizes AI image recognition technologies such as face recognition and object detection to detect theft, violence, strangers, and intrusion. Users can receive security-related notifications, relieve their anxiety, and prevent crimes through this service.