• Title/Summary/Keyword: Object recognition system


Optimization of Action Recognition based on Slowfast Deep Learning Model using RGB Video Data (RGB 비디오 데이터를 이용한 Slowfast 모델 기반 이상 행동 인식 최적화)

  • Jeong, Jae-Hyeok; Kim, Min-Suk
    • Journal of Korea Multimedia Society / v.25 no.8 / pp.1049-1058 / 2022
  • HAR (Human Action Recognition), covering tasks such as anomaly and object detection, has become a research trend focused on using Artificial Intelligence (AI) methods to analyze patterns of human action in crime-prone areas, media services, and industrial facilities. In particular, for real-time systems using video streaming data, HAR has become an increasingly important AI research area for application development, and many research fields using HAR are currently being developed and improved. In this paper, we propose and analyze a deep-learning-based HAR scheme that is more efficient because it uses an intelligent AI model; the system can be applied to media services using RGB video streaming data without feature-extraction pre-processing. As the method, we adopt SlowFast, a Deep Neural Network (DNN) model, on open datasets (HMDB-51 and UCF101) to improve prediction accuracy.
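
A minimal sketch of the dual-rate frame sampling idea behind SlowFast (the function and parameter names here are illustrative, not from the paper; the real model also gives the two pathways different channel widths and fuses them laterally):

```python
# Sketch of SlowFast's two-pathway frame sampling: the slow pathway reads
# frames at a large temporal stride tau, while the fast pathway reads
# alpha times more frames at stride tau // alpha.

def slowfast_sample(num_frames, tau=16, alpha=8):
    """Return (slow_idx, fast_idx): frame indices for each pathway."""
    slow_idx = list(range(0, num_frames, tau))
    fast_idx = list(range(0, num_frames, max(1, tau // alpha)))
    return slow_idx, fast_idx

slow, fast = slowfast_sample(64)
print(slow)              # [0, 16, 32, 48]
print(len(slow), len(fast))  # 4 32
```

The low-frame-rate pathway captures semantics cheaply, while the high-frame-rate pathway captures fast motion, which is why the model works directly on RGB frames without hand-crafted feature extraction.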

An Object Tracking Method using Stereo Images (스테레오 영상을 이용한 물체 추적 방법)

  • Lee, Hak-Chan; Park, Chang-Han; Namkung, Yun; Namkyung, Jae-Chan
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.5 / pp.522-534 / 2002
  • In this paper, we propose a new object tracking system using stereo images to improve the performance of automatic object tracking. Existing object tracking systems have optimal characteristics but require a large amount of computation. With a monocular image, it is difficult for a system to estimate and track the various transformations of an object. Since estimating translation and rotation from a binocular stereo image is also computationally demanding, this paper presents a tracking method that can track object translation in real time, using a block matching algorithm to reduce the amount of calculation. The experimental results demonstrate the usefulness of the proposed system, with recognition rates of 88% for rotation, 89% for translation, and 88% for various images, for a mean rate of 88.3%.
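
The computational saving comes from matching small blocks over a bounded search window rather than processing whole frames. A toy sketch of block matching with the sum of absolute differences (frames are plain 2D lists of grayscale values; all names are illustrative, not the paper's implementation):

```python
# Minimal block-matching sketch: find where an n x n block from the
# previous frame moved to in the current frame by minimising the sum of
# absolute differences (SAD) over a small +/- search window.

def sad(a, b, ay, ax, by, bx, n):
    """SAD between the n x n block at (ay, ax) in a and (by, bx) in b."""
    return sum(abs(a[ay + i][ax + j] - b[by + i][bx + j])
               for i in range(n) for j in range(n))

def match_block(prev, curr, y, x, n=2, search=2):
    """Best displacement (dy, dx) of the block at (y, x) within the window."""
    best, best_d = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ny, nx = y + dy, x + dx
            if 0 <= ny <= len(curr) - n and 0 <= nx <= len(curr[0]) - n:
                d = sad(prev, curr, y, x, ny, nx, n)
                if d < best_d:
                    best_d, best = d, (dy, dx)
    return best

# A bright 2x2 block at (1, 1) shifts by one pixel down-right:
prev = [[0] * 5 for _ in range(5)]
curr = [[0] * 5 for _ in range(5)]
for i in (1, 2):
    for j in (1, 2):
        prev[i][j] = 9
        curr[i + 1][j + 1] = 9
print(match_block(prev, curr, 1, 1))  # (1, 1)
```

Because the cost per block is O(n² · search²) rather than a function of the full frame, this is the kind of scheme that makes real-time translation tracking feasible.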

Implementation and Verification of Artificial Intelligence Drone Delivery System (인공지능 드론 배송 시스템의 구현 및 검증)

  • Sungnam Lee
    • IEMEK Journal of Embedded Systems and Applications / v.19 no.1 / pp.33-38 / 2024
  • In this paper, we propose the implementation of a drone delivery system using artificial intelligence, at a time when the use of drones is increasing rapidly and human errors still occur. The system requires an accurate control algorithm, assuming that last-mile delivery is made to an apartment veranda. To recognize the delivery location, a recognition system using the YOLO algorithm was implemented, and a delivery mechanism was installed on the drone to measure the distance to the object and extend the delivery reach, ensuring stable delivery even at long distances. In the experiments, the recognition system recognized the marker with a match rate of more than 60% at distances under 10 m while the drone hovered stably. In addition, a drone carrying a 500 g package was able to withstand the torque applied as the rail lengthened, extending it to 1.5 m and then stably lowering the package onto the veranda at the end of the rail.
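
As a side note on ranging a detected marker: when a marker's physical size is known, the pinhole-camera model alone gives a coarse distance from its apparent size in pixels. The numbers below are made up for illustration; the paper itself measures distance with a sensor mounted on the drone:

```python
# Illustrative pinhole-camera range estimate: distance is proportional to
# the marker's real width times the focal length (in pixels) divided by
# its width in the image. Focal length and marker size here are assumed.

def distance_to_marker(marker_width_m, focal_px, width_px):
    """Estimated camera-to-marker distance in metres."""
    return marker_width_m * focal_px / width_px

# A 0.30 m marker seen 60 px wide by a camera with an 800 px focal length:
print(distance_to_marker(0.30, 800, 60))  # 4.0 (metres)
```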

A Study on Intelligent Robot Bin-Picking System with CCD Camera and Laser Sensor (CCD카메라와 레이저 센서를 조합한 지능형 로봇 빈-피킹에 관한 연구)

  • Kim, Jin-Dae; Lee, Jeh-Won; Shin, Chan-Bai
    • Journal of the Korean Society for Precision Engineering / v.23 no.11 s.188 / pp.58-67 / 2006
  • Due to the variety of signal processing involved and the complicated mathematical analysis, it is not easy to accomplish 3D bin-picking with a non-contact sensor. To solve these difficulties, a reliable signal-processing algorithm and a good sensing device are required. In this research, a 3D laser scanner and a CCD camera are applied as sensing devices. With these sensors, we develop a two-step bin-picking method and a reliable algorithm for recognizing 3D bin objects. In the proposed bin-picking, the problem is reduced to 2D initial recognition with the CCD camera first, followed by 3D pose detection with the laser scanner. To obtain correct movement in the robot base frame, hand-eye calibration between the robot's end effector and the sensing device must also be carried out. In this paper, we examine an auto-calibration technique in the sensor calibration step. A new thinning algorithm and a constrained Hough transform are also studied for robustness in real-environment use. The experimental results show robust bin-picking operation on non-aligned 3D hole objects.
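
A toy sketch of the general idea behind a constrained Hough transform for lines: each edge point votes in (theta, rho) space, but theta is restricted to an allowed set, which both cuts computation and rejects implausible orientations. The paper's actual constraints are not given in the abstract, so everything below is illustrative:

```python
import math

# Hough line transform restricted to a given set of angles: each point
# (x, y) votes for rho = x*cos(theta) + y*sin(theta) at each allowed
# theta; the accumulator cell with the most votes wins.

def hough_lines(points, thetas, rho_res=1.0):
    """Return the best (theta, rho) cell over the allowed thetas."""
    acc = {}
    for (x, y) in points:
        for t in thetas:
            rho = round((x * math.cos(t) + y * math.sin(t)) / rho_res)
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return max(acc, key=acc.get)

# Points on the vertical line x = 3, with theta constrained to two values:
pts = [(3, y) for y in range(10)]
theta, rho = hough_lines(pts, [0.0, math.pi / 4])
print(theta, rho)  # 0.0 3
```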

Development Smart Sensor & Estimation Method to Recognize Materials (대상물 인식을 위한 지능센서 및 평가기법 개발)

  • Hwang, Seong-Youn; Hong, Dong-Pyo; Chung, Tae-Jin; Kim, Young-Moon
    • Transactions of the Korean Society of Machine Tool Engineers / v.15 no.3 / pp.73-81 / 2006
  • This paper describes our initial study of a new method for recognizing materials, which is needed for precision work systems. It is a study of the dynamic characteristics of smart sensors; the new method $(R_{SAI})$ has the sensing ability to distinguish materials. Experiments and analysis were carried out to find the proper dynamic sensing conditions. First, we developed an advanced smart sensor (HH type) for material recognition and fabricated smart sensors for the experiments. Second, we developed a new estimation method with the sensing ability to distinguish materials. The dynamic characteristics of the sensor are evaluated through a new recognition index $(R_{SAI})$, the ratio of the sensing ability index, and objects are distinguished relatively by the $R_{SAI}$ method. The $R_{SAI}$ method can be used to identify materials; applications include detecting abnormal conditions of objects (automated manufacturing), tactile assessment of objects (medical products), robotics, and safety diagnosis of structures.

Remaining persons estimation system using object recognition (객체인식을 활용한 잔류인원 추정 시스템)

  • Seong-woo Lee; Gyung-hyung Lee; Jin-hoon Seok; Kyeong-seop Kim; Min-seo Jeon; Seung-oh Choo; Tae-jin Yun
    • Proceedings of the Korean Society of Computer Information Conference / 2023.01a / pp.269-270 / 2023
  • In the event of a disaster, rescue teams have difficulty identifying the remaining people who failed to evacuate from a specific area, such as inside a building or a subway station. To improve this, we developed a system that uses YOLO and DeepSORT to recognize passers-by, determine the number of people remaining in a specific area, and make this available through a server. The proposed method was implemented in an Ubuntu environment using YOLOv4-tiny, a real-time object recognition algorithm, and DeepSORT, a real-time object tracking technique, and was applied with indoor passer-by movement paths taken into account. The developed system distinguishes entries and exits by the direction of each recognized passer-by object and stores the data on the server. Accordingly, when a disaster occurs, the number of people remaining in an area can be determined, allowing the location and number of people requiring rescue to be predicted quickly and efficiently.
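
The direction-based entry/exit counting can be sketched as follows: each tracker ID yields a sequence of centroid positions, and crossing a virtual line in one direction counts as an entry, in the other as an exit. The line position and track data below are made up for the demo; the real system gets the tracks from DeepSORT:

```python
# Sketch of remaining-person estimation from per-track centroid paths:
# remaining = entries - exits across a virtual line at x = line_x.

def count_remaining(tracks, line_x=100):
    """tracks: one list of centroid x-positions per tracked person."""
    entries = exits = 0
    for xs in tracks:
        for prev, curr in zip(xs, xs[1:]):
            if prev < line_x <= curr:      # crossed left-to-right: entry
                entries += 1
            elif prev >= line_x > curr:    # crossed right-to-left: exit
                exits += 1
    return entries - exits

tracks = [[90, 105, 120],   # walks in
          [95, 110],        # walks in
          [130, 110, 80]]   # walks out
print(count_remaining(tracks))  # 1
```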

Expressway Falling Object recognition system using Deep Learning (딥러닝을 이용한 고속도로 낙하물 객체 인식 시스템)

  • Sang-min Choi; Min-gyun Kim; Seung-yeop Lee; Seong-Kyoo Kim; Jae-wook Shin; Woo-jin Kim; Seong-oh Choo; Yang-woo Park
    • Proceedings of the Korean Society of Computer Information Conference / 2023.07a / pp.451-452 / 2023
  • When there is a fallen object on an expressway, it must be removed immediately to prevent accidents, but it is difficult to find it promptly until a patrol car discovers it or a report comes in, and most people pass by without reporting. To address these problems, we developed a system that uses a drone and YOLO to recognize fallen objects on the road and transmit information about them. YOLOv5, a real-time object recognition algorithm, was implemented on a desktop PC, and we built a drone capable of filming the road in real time by mounting a Pixhawk, modules, and a camera on an F450 frame. The developed system provides recognition results and information about fallen objects, which can be checked through the ground control system and the web. Since fallen objects can be found faster with less manpower, a quicker situational response can be expected.

Tracking and Face Recognition of Multiple People Based on GMM, LKT and PCA

  • Lee, Won-Oh; Park, Young-Ho; Lee, Eui-Chul; Lee, Hee-Kyung; Park, Kang-Ryoung
    • Journal of Korea Multimedia Society / v.15 no.4 / pp.449-471 / 2012
  • Intelligent surveillance systems are required to robustly track multiple people. Most previous studies adopted a Gaussian mixture model (GMM) for discriminating the object from the background. However, this has the weakness that its performance is affected by illumination variations, and shadow regions can be merged with the object. Moreover, when two foreground objects overlap, the GMM method cannot correctly discriminate the occluded regions. To overcome these problems, we propose a new method of tracking and identifying multiple people. The proposed research is novel in three ways compared to previous work: First, illumination variations and shadow regions are reduced by an illumination normalization based on the median and inverse filtering of the L*a*b* image. Second, multiple occluded and overlapping people are tracked by combining the GMM in the still image with the Lucas-Kanade-Tomasi (LKT) method over successive images. Third, with the proposed human tracking and existing face detection and recognition methods, the tracked people are successfully identified. The experimental results show that the proposed method can track and recognize multiple people accurately.
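
To make the GMM background-subtraction idea concrete, here is a stripped-down single-Gaussian-per-pixel stand-in (the paper's model keeps a mixture per pixel and channel; the learning rate and threshold below are assumed values for the demo):

```python
# Single-Gaussian background model for one pixel: keep a running mean and
# variance of the background intensity, and flag a new value as foreground
# when it deviates from the mean by more than k standard deviations.

def update(mean, var, x, lr=0.1):
    """Exponentially update the background mean/variance with new value x."""
    d = x - mean
    mean += lr * d
    var = (1 - lr) * var + lr * d * d
    return mean, var

def is_foreground(mean, var, x, k=2.5):
    return abs(x - mean) > k * max(var, 1e-6) ** 0.5

mean, var = 100.0, 25.0
for x in [101, 99, 100, 102]:          # background frames near 100
    mean, var = update(mean, var, x)
print(is_foreground(mean, var, 180))   # True  (an object appears)
print(is_foreground(mean, var, 101))   # False (still background)
```

This per-pixel statistical test is exactly what illumination changes and shadows break, which motivates the paper's normalization step before the GMM.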

Natural Object Recognition for Augmented Reality Applications (증강현실 응용을 위한 자연 물체 인식)

  • Anjan, Kumar Paul; Mohammad, Khairul Islam; Min, Jae-Hong; Kim, Young-Bum; Baek, Joong-Hwan
    • Journal of the Institute of Convergence Signal Processing / v.11 no.2 / pp.143-150 / 2010
  • A markerless augmented reality system must be able to recognize and match natural objects in both indoor and outdoor environments. In this paper, a novel approach is proposed for extracting features and recognizing natural objects using visual descriptors and codebooks. Since augmented reality applications are sensitive to speed of operation and real-time performance, our work mainly focuses on recognizing multi-class natural objects and reducing the computing time for classification and feature extraction. SIFT (scale-invariant feature transform) and SURF (speeded-up robust features) are used to extract features from natural objects during training and testing, and their performance is compared. We then form a visual codebook from the high-dimensional feature vectors using a clustering algorithm and recognize the objects with a naive Bayes classifier.
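
A compact sketch of the codebook-plus-naive-Bayes pipeline: quantize each local descriptor to its nearest codeword, build a bag-of-visual-words histogram, and score the histogram against per-class word probabilities. The codebook, class probabilities, and 1-D "descriptors" below are made up for the demo; in the paper they come from clustering SIFT/SURF descriptors and training on real images:

```python
import math

def quantize(desc, codebook):
    """Index of the nearest codeword (1-D descriptors for brevity)."""
    return min(range(len(codebook)), key=lambda k: abs(desc - codebook[k]))

def bow_histogram(descriptors, codebook):
    """Bag-of-visual-words histogram: codeword occurrence counts."""
    hist = [0] * len(codebook)
    for d in descriptors:
        hist[quantize(d, codebook)] += 1
    return hist

def naive_bayes(hist, class_probs):
    """Pick the class maximising sum_k count_k * log P(word_k | class)."""
    return max(class_probs, key=lambda c: sum(
        n * math.log(p) for n, p in zip(hist, class_probs[c])))

codebook = [0.0, 5.0, 10.0]
class_probs = {"tree": [0.7, 0.2, 0.1], "rock": [0.1, 0.2, 0.7]}
hist = bow_histogram([0.4, 0.9, 9.6], codebook)
print(hist, naive_bayes(hist, class_probs))  # [2, 0, 1] tree
```

Quantizing descriptors into a fixed vocabulary is what keeps classification fast enough for the real-time constraint the abstract emphasizes.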

Tactile localization Using Whisker Tactile Sensors (수염 촉각 센서를 이용한 물체 위치 판별 그리고 이에 따른 로봇의 상대적 위치 제어 방법)

  • Kim, Dae-Eun; Moeller, Ralf
    • Proceedings of the IEEK Conference / 2008.06a / pp.1061-1062 / 2008
  • Rodents demonstrate an outstanding capability for tactile perception using their whiskers. The mechanoreceptors in the whisker follicles respond to deflections or vibrations of the whisker beams. It is believed that this sensory processing can determine the location of an object in touch, that is, the angular position and direction of the object. We designed artificial whiskers modelled on real whiskers and tested tactile localization. The robotic system needs to adjust its position relative to an object to aid shape recognition, and we demonstrate such a position adjustment based on tactile localization. The behaviour uses the deflection curves of the whisker sensors for every sweep of the whiskers to estimate the location of a target object.