• Title/Summary/Keyword: CCTV영상


A Study on the Estimation of Multi-Object Social Distancing Using Stereo Vision and AlphaPose (Stereo Vision과 AlphaPose를 이용한 다중 객체 거리 추정 방법에 관한 연구)

  • Lee, Ju-Min;Bae, Hyeon-Jae;Jang, Gyu-Jin;Kim, Jin-Pyeong
    • KIPS Transactions on Software and Data Engineering / v.10 no.7 / pp.279-286 / 2021
  • Recently, a policy of physical distancing of at least 1 m has been enforced in public places to prevent the spread of COVID-19. In this paper, we propose a method for measuring the distance between people in real time, together with an automated system that, based on the estimated distance, recognizes objects that are within 1 m of each other in stereo images acquired by drones or CCTV cameras. A limitation of existing multi-object distance-estimation methods is that they cannot obtain three-dimensional information about objects from a single CCTV camera; three-dimensional information is necessary to measure the distance between people when they stand right next to each other or overlap in a two-dimensional image. Furthermore, existing methods rely only on bounding-box information, which does not give the exact coordinates at which a person is located. Therefore, to obtain the exact two-dimensional coordinates of a person, we extract the person's keypoints to detect the location, convert them to three-dimensional coordinates using stereo vision and camera calibration, and estimate the Euclidean distance between people. In experiments evaluating the accuracy of the 3D coordinates and of the distance between objects (persons), the average error in estimating the distance between multiple people within 1 m was within 0.098 m.
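
A minimal sketch of the geometry behind the distance estimation described above, assuming a rectified stereo pair: a matched keypoint (e.g., an AlphaPose neck joint) is back-projected to 3-D from its disparity, and the Euclidean distance between two people is compared against the 1 m threshold. The focal length, baseline, principal point, and pixel coordinates below are hypothetical values; real values come from camera calibration.

```python
import numpy as np

FOCAL_PX = 1200.0      # focal length in pixels (assumed calibration value)
BASELINE_M = 0.30      # distance between the stereo cameras in meters (assumed)
CX, CY = 960.0, 540.0  # principal point (assumed)

def to_3d(u_left, v_left, u_right):
    """Back-project a matched keypoint to 3-D from its stereo disparity."""
    disparity = u_left - u_right            # horizontal pixel offset between views
    z = FOCAL_PX * BASELINE_M / disparity   # depth from the stereo geometry
    x = (u_left - CX) * z / FOCAL_PX
    y = (v_left - CY) * z / FOCAL_PX
    return np.array([x, y, z])

# Hypothetical keypoint pixel coordinates for two detected people.
person_a = to_3d(870.0, 500.0, 830.0)
person_b = to_3d(1050.0, 510.0, 1014.0)

distance = np.linalg.norm(person_a - person_b)   # Euclidean distance in meters
print(f"estimated distance: {distance:.2f} m")
if distance < 1.0:
    print("distancing violation: people closer than 1 m")
```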

A Study on Falling Detection of Workers in the Underground Utility Tunnel using Dual Deep Learning Techniques (이중 딥러닝 기법을 활용한 지하공동구 작업자의 쓰러짐 검출 연구)

  • Jeongsoo Kim;Sangmi Park;Changhee Hong
    • Journal of the Society of Disaster Information / v.19 no.3 / pp.498-509 / 2023
  • Purpose: This paper proposes a method for detecting the falling of a maintenance worker in an underground utility tunnel by applying deep learning techniques to CCTV video, and evaluates the applicability of the proposed method to worker monitoring in the utility tunnel. Method: Rules were designed to detect a worker's fall from the inference results of pre-trained YOLOv5 and OpenPose models, respectively, and the rules were then applied in an integrated manner to detect worker falls within the tunnel. Result: Although the proposed model detected both worker presence and falling, the inference results depended on the distance between the worker and the CCTV camera and on the direction of the fall. The YOLOv5-based fall detection outperformed the OpenPose-based detection because it was less sensitive to distance and fall direction; consequently, the results of the integrated dual deep learning model were dominated by the YOLOv5 detection performance. Conclusion: The proposed hybrid model can detect an abnormal worker in the utility tunnel, but it offers little improvement over the single YOLOv5-based model because of the large difference in detection performance between the two deep learning models.
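
The abstract describes a rule-based fusion of two detectors. The sketch below illustrates one plausible form of such rules; the thresholds and the specific criteria (box aspect ratio for the YOLOv5 output, torso angle for OpenPose-style keypoints) are assumptions, not the paper's actual rules.

```python
import math

def box_rule(box):
    """Flag a fall when the person bounding box is clearly wider than it is tall."""
    x1, y1, x2, y2 = box
    width, height = x2 - x1, y2 - y1
    return width / max(height, 1e-6) > 1.3      # threshold is an assumption

def pose_rule(neck, hip):
    """Flag a fall when the torso (neck-to-hip segment) is close to horizontal."""
    dx, dy = hip[0] - neck[0], hip[1] - neck[1]
    angle = abs(math.degrees(math.atan2(dy, dx)))  # ~90 deg means upright
    return angle < 35.0                            # threshold is an assumption

def fused_fall(box, neck, hip):
    # Require agreement of both detectors to suppress single-model false alarms.
    return box_rule(box) and pose_rule(neck, hip)

# Hypothetical detections for a worker lying on the tunnel floor.
print(fused_fall(box=(100, 400, 420, 520), neck=(140, 470), hip=(380, 455)))
```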

Intelligent Motion Pattern Recognition Algorithm for Abnormal Behavior Detections in Unmanned Stores (무인 점포 사용자 이상행동을 탐지하기 위한 지능형 모션 패턴 인식 알고리즘)

  • Young-june Choi;Ji-young Na;Jun-ho Ahn
    • Journal of Internet Computing and Services / v.24 no.6 / pp.73-80 / 2023
  • The recent steep increase in the minimum hourly wage has raised the burden of labor costs, and the share of unmanned stores has been growing in the aftermath of COVID-19. As a result, theft targeting unmanned stores is also increasing. To prevent such theft, "Just Walk Out"-style systems using LiDAR sensors, weight sensors, and similar equipment have been introduced, or stores are checked manually through continuous CCTV monitoring. However, the more expensive the sensors, the higher the initial and operating costs of the store, and CCTV verification is of limited use because managers cannot monitor it around the clock. In this paper, we propose an AI image-processing fusion algorithm that reduces this dependence on sensors and human monitoring, detects customers who perform abnormal behaviors such as theft at a cost low enough for unmanned stores, and provides cloud-based notifications. We verify the accuracy of each component, motion capture using MediaPipe, object detection using YOLO, and the fusion algorithm, on behavior-pattern data collected from unmanned stores, and demonstrate the performance of the fusion algorithm through various scenario designs.
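
As a rough illustration of the motion-capture component, the sketch below uses the MediaPipe Pose API mentioned in the abstract to flag fast hand movements in a store clip; the video path, the choice of the wrist landmark, and the speed threshold are assumptions, and the fusion with YOLO detections is omitted for brevity.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
cap = cv2.VideoCapture("store_cctv.mp4")   # hypothetical clip from an unmanned store
prev = None

with mp_pose.Pose(static_image_mode=False) as pose:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.pose_landmarks is None:
            continue
        wrist = result.pose_landmarks.landmark[mp_pose.PoseLandmark.RIGHT_WRIST]
        if prev is not None:
            # Per-frame wrist displacement in normalized image coordinates.
            speed = ((wrist.x - prev[0]) ** 2 + (wrist.y - prev[1]) ** 2) ** 0.5
            if speed > 0.08:               # assumed threshold
                print("fast hand movement: candidate abnormal action")
        prev = (wrist.x, wrist.y)

cap.release()
```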

Design of Video Pre-processing Algorithm for High-speed Processing of Maritime Object Detection System and Deep Learning based Integrated System (해상 객체 검출 고속 처리를 위한 영상 전처리 알고리즘 설계와 딥러닝 기반의 통합 시스템)

  • Song, Hyun-hak;Lee, Hyo-chan;Lee, Sung-ju;Jeon, Ho-seok;Im, Tae-ho
    • Journal of Internet Computing and Services / v.21 no.4 / pp.117-126 / 2020
  • A maritime object detection system is an intelligent assistance system for maritime autonomous surface ships (MASS). It automatically detects floating debris that poses a collision risk, objects in the surrounding water that a captain would traditionally check with the naked eye, at a level of accuracy similar to the human check. Objects around a ship were previously detected using information gathered from radar or sonar; with the development of artificial intelligence technology, intelligent CCTV installed on a ship can now detect various types of floating debris along the sailing route. However, if video processing slows down because of the varied requirements and complexity of MASS, neither safety nor smooth service support can be guaranteed. To address this issue, this study investigated how to minimize the computation required for video data and increase the processing speed of maritime object detection. Unlike previous studies that used the Hough transform to find the horizon and extract regions of interest for candidate objects, this study proposes a new method that optimizes a binarization algorithm and finds regions whose locations are similar to those of actual objects, thereby improving speed. A maritime object detection system based on a deep learning CNN was implemented to demonstrate the usefulness of the proposed method and to assess the performance of the algorithm. The proposed algorithm ran about four times faster than the previous method while maintaining its detection accuracy.
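
A minimal sketch of a binarization-based region-proposal step in the spirit of the abstract, using standard OpenCV calls; the Otsu thresholding, morphology, and area threshold are assumptions rather than the paper's exact optimization, and the surviving candidate regions would then be passed to the CNN detector.

```python
import cv2

frame = cv2.imread("sea_frame.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical frame
blur = cv2.GaussianBlur(frame, (5, 5), 0)

# Binarize so dark floating objects stand out against the bright sea surface.
_, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)       # remove wave speckle

candidates = []
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if w * h > 400:                      # area threshold is an assumption
        candidates.append((x, y, w, h))  # only these crops go to the CNN

print(f"{len(candidates)} candidate regions forwarded to the detector")
```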

An Analysis of Patient Motion and External-Posture Setup Errors Using a Whole-Body Stereotactic Frame (전신 정위 프레임을 이용한 환자의 움직임 및 외부자세 setup 오차 분석)

  • 정진범;정원균;서태석;최경식;지영훈;이형구;최보영
    • Proceedings of the Korean Society of Medical Physics Conference / 2003.09a / pp.59-59 / 2003
  • Purpose: Because of respiration-induced patient motion and inaccurate patient setup, accurately localizing the target lesion is very difficult in radiotherapy techniques such as 3-D whole-body stereotactic radiotherapy, 3-D conformal radiotherapy, and IMRT. The aims of this study were therefore to fabricate a whole-body stereotactic frame that immobilizes the patient as much as possible during treatment, reduces patient-setup errors, and provides a coordinate system for lesions distributed over the whole body, and to evaluate the immobilization effect and the setup error, which reflects the reproducibility of patient posture, for the fabricated frame. Materials and methods: The in-house whole-body stereotactic frame was designed with emphasis on CT-imaging compatibility, realization of target coordinates, adaptability to different body shapes, and the rigidity and stability of the frame. Radiation transmission through the fabricated frame was measured; images of changes in patient posture were acquired with a CCTV camera and a DVR (digital video recorder) and evaluated with an error-analysis program implemented in MATLAB; and a virtual-target localization experiment was performed with CT imaging. The additional immobilization effect of fixation belts was also examined. Results: The radiation transmission of the fabricated frame was 95% and 96% at beam energies of 10 and 21 MeV, and 90.3% and 94.4% when the beam was delivered at angles of 30° and 60°. Applying the MATLAB error-analysis program to CCTV recordings of chest and abdominal motion, the mean posture error was 3.63 ± 1.4 mm in the lateral direction and 2.1 ± 0.82 mm in the AP direction for the chest, and 7.0 ± 2.1 mm in the lateral direction and 6.5 ± 2.2 mm in the AP direction for the abdomen. In addition, when arbitrary virtual targets were attached to the patient's skin and CT images were acquired, the positions of the virtual targets could be determined accurately with the frame. Conclusion: Radiation-transmission measurements, external-posture error measurements, and virtual-target localization experiments were performed with the fabricated frame. For the posture-error experiments, measurements with more volunteers are needed for greater accuracy, and clinical experiments with patients carrying internal markers are required for accurate target localization. Irradiation experiments with a phantom should also be carried out to verify the accuracy of the coordinates obtained during localization. Combined with a rotating X-ray system, quantification of internal-organ motion, and determination of optimal PTV margins, the frame is expected to assist in lesion localization and in determining patient-setup errors for stereotactic radiosurgery and 3-D conformal radiotherapy.
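
For the posture-error statistics reported above, a small sketch of how a mean ± standard deviation setup error could be computed from per-frame marker displacements; the numbers are hypothetical, and the original analysis was performed in a MATLAB program.

```python
import numpy as np

# Hypothetical per-frame marker positions (mm) tracked from the CCTV video.
lateral_mm = np.array([0.0, 2.1, 3.9, 5.2, 4.4, 3.1])
ap_mm      = np.array([0.0, 1.2, 2.5, 3.0, 2.2, 1.6])

def setup_error(track):
    """Mean and standard deviation of displacement from the reference frame."""
    displacement = np.abs(track - track[0])
    return displacement.mean(), displacement.std()

for name, track in (("lateral", lateral_mm), ("AP", ap_mm)):
    mean, std = setup_error(track)
    print(f"{name}: {mean:.2f} ± {std:.2f} mm")
```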


Estimation of Incident Detection Time on Expressways Based on Market Penetration Rate of Connected Vehicles (커넥티드 차량 보급률 기반 고속도로 돌발상황 검지시간 추정)

  • Sanggi Nam;Younshik Chung;Hoekyoung Kim;Wonggil Kim
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.22 no.3 / pp.38-50 / 2023
  • Recent advances in artificial intelligence (AI) have made it possible to embed AI in image sensors such as closed-circuit television (CCTV) to detect specific traffic incidents. However, most incident detection has been carried out with fixed equipment, which limits detection coverage across the road network. Meanwhile, mobile image collection and analysis technology, such as on-vehicle image sensors with edge computing, is spreading. The purpose of this study is to estimate the reduction in incident detection time according to the deployment level of mobile image collection and analysis equipment (i.e., connected vehicles). To this end, we used incident data collected in 2021 by the Suwon branch of the Gyeongbu Expressway. The analysis showed that when the market penetration rate (MPR) of connected vehicles is 4% or higher on two-lane expressways and 3% or higher on three-lane expressways, the incident detection time is less than one minute. Furthermore, when the MPR is 0.4% or higher on two-lane expressways and 0.2% or higher on three-lane expressways, the incident detection time is shorter than the average incident detection time reported by the Korea Expressway Corporation.
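
As a rough intuition for why detection time falls as MPR rises (not the paper's data-driven estimation), the sketch below models the wait until the first connected vehicle passes the incident site as the mean waiting time of a Poisson arrival process; the traffic volume is a hypothetical value.

```python
def expected_detection_minutes(vehicles_per_hour: float, mpr: float) -> float:
    """Expected wait for the first connected vehicle, assuming Poisson arrivals."""
    connected_per_minute = vehicles_per_hour * mpr / 60.0
    return 1.0 / connected_per_minute

# MPR values discussed in the abstract, with an assumed traffic volume.
for mpr in (0.002, 0.004, 0.03, 0.04):
    t = expected_detection_minutes(vehicles_per_hour=3000.0, mpr=mpr)
    print(f"MPR {mpr:.1%}: ~{t:.2f} min until a connected vehicle passes")
```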

Vision-based Low-cost Walking Spatial Recognition Algorithm for the Safety of Blind People (시각장애인 안전을 위한 영상 기반 저비용 보행 공간 인지 알고리즘)

  • Sunghyun Kang;Sehun Lee;Junho Ahn
    • Journal of Internet Computing and Services / v.24 no.6 / pp.81-89 / 2023
  • In modern society, blind people face difficulties in navigating common environments such as sidewalks, elevators, and crosswalks. Research has been conducted to alleviate these inconveniences for the visually impaired through visual and audio aids. However, such research often runs into practical limitations because of the high cost of wearable devices, high-performance CCTV systems, and voice sensors. In this paper, we propose an artificial intelligence fusion algorithm that uses the low-cost video sensors built into smartphones to help blind people walk safely. The proposed algorithm combines motion capture and object detection to detect moving people and the various obstacles encountered while walking. We employed the MediaPipe library for motion capture to model and detect surrounding pedestrians in motion, and used object detection algorithms to model and detect the various obstacles that can appear on sidewalks. Through experiments, we validated the performance of the fusion algorithm, achieving an accuracy of 0.92, a precision of 0.91, a recall of 0.99, and an F1 score of 0.95. This research can help blind people navigate around obstacles such as bollards, shared scooters, and vehicles encountered while walking, thereby enhancing their mobility and safety.
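
A minimal sketch of one plausible proximity-warning rule on top of the detector output described above: warn when an obstacle box overlaps the lower-center strip of the frame, i.e., the user's walking path. The frame size, strip bounds, thresholds, and detections are hypothetical.

```python
FRAME_W, FRAME_H = 1280, 720
PATH_LEFT, PATH_RIGHT = FRAME_W * 0.35, FRAME_W * 0.65   # assumed walking-path strip

def in_walking_path(box):
    """True when a detected obstacle is close to the camera and in the path."""
    x1, y1, x2, y2 = box
    near = y2 > FRAME_H * 0.7                  # box bottom near the lower frame edge
    centered = x1 < PATH_RIGHT and x2 > PATH_LEFT
    return near and centered

# Hypothetical detections: (label, bounding box in pixels).
detections = [("bollard", (600, 420, 700, 640)), ("scooter", (60, 300, 180, 420))]
for label, box in detections:
    if in_walking_path(box):
        print(f"warning: {label} ahead in the walking path")
```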

A Design and Implementation of Camera Information File Creation Tool for Efficient Recording Data Search in Surveillance System (보안 관제 시스템에서 효율적인 영상 검색을 위한 카메라 연동 정보 파일 자동 생성 도구의 설계 및 구현)

  • Hwang, Gi-Jin;Park, Jae-Pyo;Yang, Seung-Min
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.16 no.5 / pp.55-61 / 2016
  • The purpose of video security equipment is to protect life and personal property against terrorism and other recent threats. This study proposes a method for improving the user's convenience when searching recorded video data. A data structure describing the movement paths and relationships between cameras is defined in advance, and a tool is designed and implemented that automatically generates a file containing this inter-camera linkage information at the control center in an environment where multiple cameras are installed. Using the generated file, search time is minimized and search efficiency is increased.
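
A minimal sketch of what an automatically generated inter-camera linkage file might look like, assuming a JSON layout (the paper's actual file format is not specified): each camera lists the cameras reachable next along physical movement paths, so a search for a person leaving one camera's view can be restricted to its neighbors' recordings.

```python
import json

# Hypothetical linkage data: for each camera, the cameras reachable next.
camera_links = {
    "CAM-01": {"location": "lobby",      "next": ["CAM-02", "CAM-05"]},
    "CAM-02": {"location": "corridor-1", "next": ["CAM-01", "CAM-03"]},
    "CAM-03": {"location": "exit-east",  "next": ["CAM-02"]},
}

with open("camera_links.json", "w", encoding="utf-8") as f:
    json.dump(camera_links, f, indent=2)

# A search starting from CAM-01 is limited to the recordings of its linked cameras.
print("search next:", camera_links["CAM-01"]["next"])
```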

An Instruction Extraction Method For Dashboard Camera Prototyping (블랙박스 하드웨어 프로토타이핑을 위한 명령어 추출 기법)

  • Lee, Sangmin;Jung, Daejin;Choi, Jaeyoon;Shim, Jaekyun;Ahn, Jung Ho
    • Annual Conference of KIPS / 2015.10a / pp.6-9 / 2015
  • As living standards rise and technology advances, areas that previously received little attention, such as security and safety, are gaining prominence, and market demand for video-based systems such as CCTV, IP cameras, and vehicle dashboard cameras (black boxes) is increasing. Accordingly, a variety of video-based systems are being developed, and prototyping is needed to reduce the time wasted during development, but existing prototyping tools are limited in terms of cost or speed. In this paper, we propose a hardware prototyping tool that models dashboard-camera hardware with a full-system emulator and predicts the characteristics of the system by extracting the instructions it executes. We also confirm that programs compiled for an ARM system run on the emulator, and verify its operation by comparing the instructions that make up a program with the instructions extracted by the tool.
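
A minimal sketch of the instruction-level characterization step, assuming the full-system emulator writes a plain-text trace with one executed instruction per line (the trace format and file name are assumptions): tallying mnemonics from such a log is enough to profile the dash-cam workload and to compare it against the instructions in the compiled program.

```python
from collections import Counter

def count_mnemonics(trace_path):
    """Count instruction mnemonics in a per-line emulator trace."""
    counts = Counter()
    with open(trace_path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if parts:                     # assume the mnemonic is the first token
                counts[parts[0]] += 1
    return counts

if __name__ == "__main__":
    counts = count_mnemonics("insn_trace.log")   # hypothetical emulator log
    for mnemonic, n in counts.most_common(10):
        print(f"{mnemonic:10s} {n}")
```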

Development of recognition and alert system for dangerous road object using deep learning algorithms (딥러닝 영상인식을 이용한 도로 위 위험 객체 알림 시스템)

  • Kim, Joong-wan;Jo, Hyun-jun;Hwang, Bo-ouk;Jeong, Jun-ho;Choi, Jong-geon;Yun, Tae-jin
    • Proceedings of the Korean Society of Computer Information Conference / 2022.07a / pp.479-480 / 2022
  • On roads where vehicles travel at high speed, stopped vehicles and fallen objects can cause serious accidents, so countermeasures are required. A suddenly stopped vehicle cannot be anticipated, and although fallen objects are collected periodically by patrol teams, an immediate response is difficult. To address this problem, this paper develops a system that applies real-time deep learning object recognition to detect stopped vehicles and objects fallen on the road and provides information about them. The system was implemented on a desktop PC by combining YOLOX, a real-time object detection algorithm, with deepSORT, a real-time object tracking algorithm. The developed system provides recognition results for stopped vehicles and fallen objects. Because it can be applied to video from already-installed CCTV cameras, recognition of dangerous road situations over wide areas can be expected at low cost.
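
A minimal sketch of a stopped-vehicle rule on top of tracker output such as deepSORT's (the window length and drift threshold are assumptions, not the paper's logic): a track is flagged when its centroid stays within a small radius for a sustained period.

```python
from collections import defaultdict, deque

FPS = 30
WINDOW = 10 * FPS          # 10 seconds of track history (assumed)
MAX_DRIFT_PX = 15.0        # allowed centroid movement in pixels (assumed)

history = defaultdict(lambda: deque(maxlen=WINDOW))

def update(track_id, cx, cy):
    """Feed one frame's centroid for a track; return True if it looks stopped."""
    h = history[track_id]
    h.append((cx, cy))
    if len(h) < WINDOW:
        return False
    xs, ys = zip(*h)
    drift = ((max(xs) - min(xs)) ** 2 + (max(ys) - min(ys)) ** 2) ** 0.5
    return drift < MAX_DRIFT_PX

# Hypothetical use: a vehicle track that barely moves for the whole window.
stopped = False
for frame in range(WINDOW):
    stopped = update(track_id=7, cx=640.0 + 0.01 * frame, cy=360.0)
print("track 7 stopped:", stopped)
```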
