• Title/Summary/Keyword: vision tracking system


Implementation of an alarm system with AI image processing to detect whether a helmet is worn or not and a fall accident (헬멧 착용 여부 및 쓰러짐 사고 감지를 위한 AI 영상처리와 알람 시스템의 구현)

  • Yong-Hwa Jo;Hyuek-Jae Lee
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.23 no.3
    • /
    • pp.150-159
    • /
    • 2022
  • This paper presents a system that detects in real time, through individual image analysis, whether a helmet is worn and whether a fall accident has occurred, by extracting image objects of multiple workers active in an industrial field. YOLO, a deep-learning-based computer vision model, was used to detect worker objects, and a model trained on 5,000 different helmet training images was applied to the extracted images to decide whether a helmet is worn. To detect falls, the head position was checked using MediaPipe's real-time Pose body-tracking algorithm, and its movement speed was calculated to determine whether the person had fallen. In addition, to make the fall decision more reliable, a method of inferring an object's posture from the size of its YOLO bounding box was proposed and implemented. Finally, a Telegram Bot API and a Firebase DB server were implemented to provide a notification service for administrators.
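The two fall cues described above, head movement speed and bounding-box shape, can be sketched as follows. This is an illustrative sketch, not the authors' code; the function names and threshold values are assumptions.

```python
# Illustrative sketch (not the paper's implementation): combine two fall cues,
# (1) the downward speed of the head landmark and (2) the aspect ratio of the
# person's bounding box. Thresholds here are hypothetical.

def head_drop_speed(prev_y, curr_y, dt):
    """Downward speed of the head landmark (image y grows downward)."""
    return (curr_y - prev_y) / dt

def is_fallen(prev_head_y, curr_head_y, dt, bbox_w, bbox_h,
              speed_thresh=0.8, ratio_thresh=1.2):
    # Cue 1: the head moved down faster than the threshold.
    fast_drop = head_drop_speed(prev_head_y, curr_head_y, dt) > speed_thresh
    # Cue 2: the box is wider than tall, suggesting a lying posture.
    lying_posture = (bbox_w / bbox_h) > ratio_thresh
    return fast_drop and lying_posture
```

Requiring both cues to agree is what the paper's bounding-box check adds: a fast head drop alone (e.g. crouching) no longer triggers an alarm.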

The Technique of Human tracking using ultrasonic sensor for Human Tracking of Cooperation robot based Mobile Platform (모바일 플랫폼 기반 협동로봇의 사용자 추종을 위한 초음파 센서 활용 기법)

  • Yum, Seung-Ho;Eom, Su-Hong;Lee, Eung-Hyuk
    • Journal of IKEEE
    • /
    • v.24 no.2
    • /
    • pp.638-648
    • /
    • 2020
  • Currently, user-following in intelligent cooperative robots is usually based on vision systems or LiDAR, and these methods perform well. However, in the closed wards of COVID-19, which spread worldwide in 2020, robots cooperating with medical staff were scarce: the staff all wear protective clothing to prevent viral infection, which makes existing techniques hard to apply. To solve this problem, this paper separates the ultrasonic sensor into transmitting and receiving parts and, on that basis, proposes a method of estimating the user's position so the robot can actively follow and cooperate with people. An improved median filter was applied to the ultrasonic readings to reduce errors caused by communication dropouts between transmitter and receiver and by spurious reflections from hard surfaces, and curvature-based trajectories were applied for smooth operation in small areas. The median filter reduced angular and distance errors by 70%, and driving stability was verified on test courses such as 'S'- and '8'-shaped paths.
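A sliding median filter of the kind applied to the ultrasonic readings can be sketched as follows. This is a minimal illustration under the assumption of a simple odd-sized window; the paper's "improved" variant is not specified here.

```python
# Minimal sketch: median-filter a sequence of ultrasonic range readings to
# suppress spike errors from hard reflections. Window size is an assumption.

def median_filter(readings, window=3):
    """Replace each reading with the median of its local neighborhood."""
    half = window // 2
    out = []
    for i in range(len(readings)):
        lo = max(0, i - half)
        hi = min(len(readings), i + half + 1)
        neighborhood = sorted(readings[lo:hi])
        out.append(neighborhood[len(neighborhood) // 2])
    return out
```

A single spurious echo (say 9.9 m inside a run of ~1.1 m readings) is replaced by a neighboring value, which is why the median is preferred over a mean for reflection spikes.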

Technical-note : Real-time Evaluation System for Quantitative Dynamic Fitting during Pedaling (단신 : 페달링 시 정량적인 동적 피팅을 위한 실시간 평가 시스템)

  • Lee, Joo-Hack;Kang, Dong-Won;Bae, Jae-Hyuk;Shin, Yoon-Ho;Choi, Jin-Seung;Tack, Gye-Rae
    • Korean Journal of Applied Biomechanics
    • /
    • v.24 no.2
    • /
    • pp.181-187
    • /
    • 2014
  • In this study, a real-time evaluation system for quantitative dynamic fitting during pedaling was developed. The system consists of LED markers, a digital camera connected to a computer, and a marker-detection program. LED markers are attached to the hip, knee, and ankle joints and the fifth metatarsal in the sagittal plane. The PlayStation 3 Eye, selected as the main camera, has many merits for motion capture, such as a high frame rate of about 180 FPS (frames per second) at 320×240 resolution, low cost, and ease of use. The marker-detection program was built with LabVIEW 2010 and Vision Builder and is made up of three parts: image acquisition and processing, marker detection and joint-angle calculation, and output. Camera images were acquired at 95 FPS, and the program measures the lower-limb joint angles in real time, displays them to the user as a graph, and saves them as a text file. The system was verified by pedaling at three saddle heights (knee angle: 25, 35, 45°) and three cadences (30, 60, 90 rpm) at each saddle height, using the Holmes method of measuring lower-limb angles to determine saddle height. The results showed a low average error of 1.18±0.44° and a strong correlation of 0.99±0.01. There was little error due to changes in saddle height, but error did occur with cadence. Considering that the average error is approximately 1°, the system is suitable for quantitative dynamic-fitting evaluation. Future work should reduce error by using two digital cameras covering the frontal and sagittal planes.
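The joint-angle step in such a marker system reduces to the included angle at a joint formed by three marker positions (e.g. hip-knee-ankle). A sketch with hypothetical 2D coordinates:

```python
# Illustrative sketch of the joint-angle calculation: the angle at vertex b
# (e.g. the knee) formed by markers a-b-c, via the dot product.
import math

def joint_angle(a, b, c):
    """Included angle at b, in degrees, for 2D points a, b, c."""
    v1 = (a[0] - b[0], a[1] - b[1])   # vector from joint to first marker
    v2 = (c[0] - b[0], c[1] - b[1])   # vector from joint to second marker
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))
```

A fully extended limb gives 180° and a right-angle bend gives 90°, so the knee-angle settings quoted in the abstract (25, 35, 45°) would be reported as deviations from full extension.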

The Mirror-based real-time dynamic projection mapping design and dynamic object detection system research (미러 방식의 실시간 동적 프로젝션 매핑 설계 및 동적 사물 검출 시스템 연구)

  • Soe-Young Ahn;Bum-Suk Seo;Sung Dae Hong
    • Journal of Internet of Things and Convergence
    • /
    • v.10 no.2
    • /
    • pp.85-91
    • /
    • 2024
  • In this paper, we studied projection mapping, which is being used as a digital canvas beyond space and time for theme parks, mega events, and exhibition performances. Existing projection technology for fixed objects has the limitation that moving objects are difficult to map, so there is an urgent need for a technology that can track and map moving objects, and for a real-time dynamic projection-mapping system built around dynamically moving objects, so that it can serve various markets such as performances, exhibitions, and theme parks. We propose a system that tracks objects in real time and eliminates latency through dedicated hardware and high-speed image processing. Specifically, we develop a control unit for real-time object-image analysis and projection focusing, an integrated operating system for the real-time object-tracking system, and an image-processing library for projection mapping. This research is expected to find wide application in technology-intensive industries that use real-time machine-vision-based detection, as well as in industries where cutting-edge science and technology converge.

A Collaborative Video Annotation and Browsing System using Linked Data (링크드 데이터를 이용한 협업적 비디오 어노테이션 및 브라우징 시스템)

  • Lee, Yeon-Ho;Oh, Kyeong-Jin;Sean, Vi-Sal;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.3
    • /
    • pp.203-219
    • /
    • 2011
  • Previously, most users simply wanted to watch video content without any specific requirement or purpose. Today, however, users watching a video try to learn more about the things that appear in it. Accordingly, the demand for finding multimedia or browsing information about objects of interest is spreading with the increasing use of video, which is available not only on internet-capable devices such as computers but also on smart TVs and smartphones. To meet these requirements, labor-intensive annotation of objects in video content is inevitable, and many researchers have actively studied methods of annotating objects that appear in video. In keyword-based annotation, related information about an object appearing in the video is added directly, and annotation data including all related information about the object must be managed individually; users have to input all of that information themselves. Consequently, when a user browses for information related to an object, only the limited resources that exist in the annotated data can be found, and placing annotations demands a huge workload from the user. To reduce this workload and minimize the work involved in annotation, existing object-based approaches attempt automatic annotation using computer vision techniques such as object detection, recognition, and tracking. With such techniques, however, the wide variety of objects appearing in video content must all be detected and recognized, and this remains a difficult problem for automated annotation. To overcome these difficulties, we propose a system consisting of two modules.
The first is an annotation module that enables many annotators to collaboratively annotate objects in the video content, accessing semantic data through Linked Data. Annotation data managed by the annotation server is represented using an ontology so that the information can easily be shared and extended. Since the annotation data does not include all of the relevant information about an object, existing resources in Linked Data and objects appearing in the video content are simply linked to each other to obtain all related information. In other words, annotation data containing only a URI and metadata such as position, time, and size is stored on the annotation server; when the user needs other related information about the object, it is retrieved from Linked Data through the relevant URI. The second module enables viewers to browse interesting information about an object while watching the video, using the annotation data collaboratively generated by many users. With this system, a query is automatically generated through simple user interaction, all related information is retrieved from Linked Data, and the additional information about the object is offered to the user. In the future Semantic Web environment, the proposed system is expected to establish a better video-content service environment by offering users relevant information about the objects that appear on the screen of any internet-capable device such as a PC, smart TV, or smartphone.
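The annotation record described above, a Linked Data URI plus spatiotemporal metadata, with richer facts fetched later through that URI, can be sketched as follows. The field names, the DBpedia resource, and the use of a SPARQL `DESCRIBE` query are illustrative assumptions, not the paper's exact design.

```python
# Hedged sketch of the URI-plus-metadata annotation scheme: the server stores
# only a Linked Data URI with position/time/size metadata; everything else is
# retrieved from Linked Data on demand. Field names are hypothetical.

def make_annotation(uri, x, y, width, height, start_s, end_s):
    """Annotation containing only the URI and spatiotemporal metadata."""
    return {
        "resource": uri,                                  # link into Linked Data
        "region": {"x": x, "y": y, "w": width, "h": height},
        "time": {"start": start_s, "end": end_s},         # seconds in the video
    }

def browse_query(annotation):
    """SPARQL query a viewer's click could trigger for related facts."""
    return f"DESCRIBE <{annotation['resource']}>"
```

The point of the design is visible in the record: no descriptive facts are duplicated into the annotation, so updates to the Linked Data source are picked up automatically at browse time.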

A Moving Path Control of an Automatic Guided Vehicle Using Relative Distance Fingerprinting (상대거리 지문 정보를 이용한 무인이송차량의 주행 경로 제어)

  • Hong, Youn Sik;Kim, Da Jung;Hong, Sang Hyun
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.2 no.10
    • /
    • pp.427-436
    • /
    • 2013
  • In this paper, a method of controlling the moving path of an automatic guided vehicle (AGV) in an indoor environment by recognizing marker images with vision sensors is presented. Existing AGV control systems using infrared sensors and landmarks face two critical problems. First, a crematorium has many windows that let a great deal of sunlight into the main hall, the AGV's operating area, and sunlight interferes with correct landmark recognition through refraction and reflection. Second, a crematorium is a narrow indoor environment compared to typical industrial sites; in particular, when an AGV turns to enter the designated furnace, the guidance sensors cannot be used to estimate its location because the turning space is too small. To handle such situations, where sensing data cannot be accessed in a WSN environment, the relative distance from a marker to the AGV is used as a fingerprint for location estimation. Compared to existing fingerprinting methods based on RSS, the proposed method yields more reliable location estimates. Our experimental results demonstrate its correctness and applicability. In addition, the proposed approach will be applied to the AGV system in the crematorium so that it can transport a body safely from the loading place to its rightful destination.
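The fingerprinting idea, matching measured marker-to-vehicle distances against a prerecorded map, can be sketched as a nearest-neighbor lookup. The map entries and distance values below are hypothetical; the paper's matching scheme may differ.

```python
# Illustrative sketch of relative-distance fingerprinting: estimate the AGV's
# position as the map entry whose recorded distances best match the measured
# ones. Map positions and distances are hypothetical.
import math

# fingerprint map: known position -> expected distances to each marker (m)
FINGERPRINT_MAP = {
    (0.0, 0.0): [2.0, 5.0, 4.0],
    (1.0, 0.0): [1.2, 4.1, 4.5],
    (1.0, 1.0): [1.5, 3.0, 3.6],
}

def estimate_position(measured):
    """Return the map position whose fingerprint is closest in Euclidean distance."""
    def dist(expected):
        return math.sqrt(sum((m - e) ** 2 for m, e in zip(measured, expected)))
    return min(FINGERPRINT_MAP, key=lambda pos: dist(FINGERPRINT_MAP[pos]))
```

Unlike RSS fingerprints, relative distances do not degrade with sunlight or radio multipath, which is the reliability argument made in the abstract.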

Improved CS-RANSAC Algorithm Using K-Means Clustering (K-Means 클러스터링을 적용한 향상된 CS-RANSAC 알고리즘)

  • Ko, Seunghyun;Yoon, Ui-Nyoung;Alikhanov, Jumabek;Jo, Geun-Sik
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.6 no.6
    • /
    • pp.315-320
    • /
    • 2017
  • Efficiently estimating the correct pose of augmented objects in the real camera view is one of the most important problems in image tracking. In computer vision, a homography is used for camera-pose estimation in markerless augmented-reality systems. To estimate the homography, features such as SURF are extracted from images, and the homography is estimated from them. The RANSAC algorithm is widely used for this purpose, and the DCS-RANSAC algorithm, which dynamically applies constraints based on the Constraint Satisfaction Problem, has been studied to improve performance. In DCS-RANSAC, however, the dataset is grouped manually by the pattern of feature distribution in the images, so the algorithm cannot classify input images whose feature-distribution pattern it does not recognize, which reduces its performance. To address this problem, we propose the KCS-RANSAC algorithm, which applies K-means clustering within CS-RANSAC to cluster images automatically by their feature-distribution pattern and then applies constraints to each image group. Experimental results show that KCS-RANSAC outperforms DCS-RANSAC in terms of speed, accuracy, and inlier rate.
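The clustering step can be sketched as follows: summarize each image by where its keypoints fall (here, a simple quadrant histogram, an assumption, the paper's descriptor may differ) and group similar distributions with a basic k-means, so that per-group constraints can then be applied before RANSAC.

```python
# Hedged sketch of clustering images by feature-distribution pattern.
# The 4-bin quadrant descriptor and the plain k-means are illustrative.
import random

def quadrant_histogram(keypoints, width, height):
    """4-bin descriptor: share of keypoints per image quadrant."""
    counts = [0, 0, 0, 0]
    for x, y in keypoints:
        idx = (1 if x >= width / 2 else 0) + (2 if y >= height / 2 else 0)
        counts[idx] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]

def kmeans(points, k, iters=20, seed=0):
    """Basic Lloyd's k-means on descriptor vectors."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # initialize from the data
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:                        # assignment step
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[i])))
            groups[j].append(p)
        centroids = [                           # update step
            [sum(vals) / len(g) for vals in zip(*g)] if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return centroids, groups
```

Each resulting cluster collects images with a similar spatial spread of features, which is what lets KCS-RANSAC pick constraints per group instead of relying on a manually labeled dataset.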