• Title/Abstract/Keyword: Real-time object recognition

Search Results: 280

Real-Time Place Recognition for Augmented Mobile Information Systems (이동형 정보 증강 시스템을 위한 실시간 장소 인식)

  • Oh, Su-Jin; Nam, Yang-Hee
    • Journal of KIISE: Computing Practices and Letters, v.14 no.5, pp.477-481, 2008
  • Place recognition is necessary for a mobile user to be provided with place-dependent information. This paper proposes a real-time, video-based place recognition system that identifies the user's current place while moving through a building. For scene feature extraction, existing methods based on global feature analysis have the drawback of being sensitive to partial occlusion and noise, while local feature-based methods, which usually attempt object recognition, are hard to apply in real-time systems because of their high computational cost. Statistical methods such as HMMs (hidden Markov models) or Bayesian networks have also been used to derive the place recognition result from the feature data; the former, however, is impractical because it requires a huge effort to gather training data, while the latter usually depends on object recognition only. This paper proposes a combined approach of global and local feature analysis for feature extraction to complement the drawbacks of both approaches. The proposed method is applied to a mobile information system and shows real-time performance with competitive recognition results.
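The combined global/local idea can be illustrated with a small sketch. This is not the authors' implementation; it only shows, under the assumption of an OpenCV-style pipeline, how a cheap global color histogram and a capped number of local keypoint descriptors might be fused into one place signature.

```python
import cv2
import numpy as np

def place_signature(frame, max_keypoints=50):
    """Hypothetical fused descriptor: global HSV histogram + pooled ORB features."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Global part: coarse color distribution of the whole scene (cheap, but
    # sensitive to occlusion on its own, as the abstract notes).
    global_hist = cv2.calcHist([hsv], [0, 1], None, [8, 8], [0, 180, 0, 256])
    global_hist = cv2.normalize(global_hist, None).flatten()

    # Local part: a limited number of ORB keypoints keeps the cost bounded
    # so the combined descriptor stays usable in a real-time loop.
    orb = cv2.ORB_create(nfeatures=max_keypoints)
    _, descriptors = orb.detectAndCompute(frame, None)
    if descriptors is None:
        descriptors = np.zeros((1, 32), dtype=np.uint8)
    local_part = descriptors.mean(axis=0) / 255.0  # crude pooling, for illustration only

    return np.concatenate([global_hist, local_part])
```

A nearest-neighbor lookup of such a signature against stored per-place references would then yield the current place label.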

Visual Servoing of a Mobile Manipulator Based on Stereo Vision (스테레오 영상을 이용한 이동형 머니퓰레이터의 시각제어)

  • Lee Hyun Jeong; Park Min Gyu; Lee Min Cheol
    • Journal of Institute of Control, Robotics and Systems, v.11 no.5, pp.411-417, 2005
  • In this study, a stereo vision system is applied to a mobile manipulator for effective task execution. The robot can recognize a target and compute the position of the target using the stereo vision system. While a monocular vision system needs prior properties such as the geometric shape of a target, a stereo vision system enables the robot to find the position of a target without additional information. Many algorithms have been studied and developed for object recognition; however, most of these approaches are computationally complex and inadequate for real-time visual servoing. Color information is useful for simple recognition in real-time visual servoing. This paper addresses object recognition using color, a stereo matching method that reduces calculation time, recovery of the 3D position, and the visual servoing itself.
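As a rough illustration of the color-based recognition and depth recovery described above (not the authors' code), the following sketch thresholds a target color in HSV, takes the disparity at the target's centroid, and converts it to depth with the standard Z = f·B/d relation; the color bounds, focal length, and baseline are placeholder values.

```python
import cv2
import numpy as np

def locate_target(left_bgr, disparity, focal_px=700.0, baseline_m=0.12):
    """Find a colored target in the left image and estimate its 3D distance."""
    hsv = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2HSV)
    # Simple color gating for a red-ish target; bounds are illustrative only.
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None  # target not visible
    cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

    d = float(disparity[cy, cx])
    if d <= 0:
        return None  # no valid stereo match at the centroid
    depth_m = focal_px * baseline_m / d  # Z = f * B / d
    return (cx, cy, depth_m)
```

Keeping the recognition step to a color threshold is what keeps the per-frame cost low enough for visual servoing.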

A Task Scheduling Strategy in a Multi-core Processor for Visual Object Tracking Systems (시각물체 추적 시스템을 위한 멀티코어 프로세서 기반 태스크 스케줄링 방법)

  • Lee, Minchae; Jang, Chulhoon; Sunwoo, Myoungho
    • Transactions of the Korean Society of Automotive Engineers, v.24 no.2, pp.127-136, 2016
  • Camera-based object detection systems must satisfy recognition performance as well as real-time constraints. Particularly in safety-critical systems such as Autonomous Emergency Braking (AEB), the real-time constraints significantly affect system performance. Recently, multi-core processors and system-on-chip technologies have been widely used to accelerate object detection algorithms by distributing computational loads. However, although the additional hardware improves real-time performance, it also increases the complexity of the system architecture; this increased complexity makes it difficult to migrate existing algorithms and to develop new ones. In this paper, a task scheduling strategy for visual object tracking systems is proposed to improve real-time performance and reduce design complexity. The real-time performance of the vision algorithm is increased by applying pipelining to task scheduling on a multi-core processor. Finally, the proposed task scheduling algorithm is applied to a crosswalk detection and tracking system to demonstrate the effectiveness of the proposed strategy.
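The pipelining idea can be sketched in a few lines: each processing stage runs on its own worker and hands frames to the next stage through a bounded queue, so detection of frame N overlaps with capture of frame N+1. This is a generic illustration, not the scheduling strategy from the paper.

```python
import queue
import threading

def stage(worker, inbox, outbox):
    """Run one pipeline stage: pull an item, process it, push it downstream."""
    while True:
        item = inbox.get()
        if item is None:          # poison pill terminates the stage
            outbox.put(None)
            break
        outbox.put(worker(item))

def build_pipeline(workers, depth=4):
    """Chain stages with bounded queues; returns (input queue, output queue)."""
    first = queue.Queue(maxsize=depth)
    inbox = first
    for w in workers:
        outbox = queue.Queue(maxsize=depth)
        threading.Thread(target=stage, args=(w, inbox, outbox), daemon=True).start()
        inbox = outbox
    return first, inbox

# Toy stages standing in for preprocess/detect steps of a tracking pipeline.
src, sink = build_pipeline([lambda f: f * 2, lambda f: f + 1])
for frame_id in range(3):
    src.put(frame_id)
src.put(None)
while (result := sink.get()) is not None:
    print("processed:", result)
```

On a multi-core processor each stage can run on a separate core, which is where the throughput gain over a purely sequential loop comes from.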

A Study on the Application of Task Offloading for Real-Time Object Detection in Resource-Constrained Devices (자원 제약적 기기에서 자율주행의 실시간 객체탐지를 위한 태스크 오프로딩 적용에 관한 연구)

  • Jang Shin Won; Yong-Geun Hong
    • KIPS Transactions on Computer and Communication Systems, v.12 no.12, pp.363-370, 2023
  • Object detection technology that accurately recognizes the road and surrounding conditions is a key technology in the field of autonomous driving. In this field, object detection requires real-time performance as well as accurate inference. To achieve both accuracy and real-time performance on resource-constrained devices rather than high-performance machines, task offloading technology should be utilized. In this paper, experiments comparing the performance of task offloading, performance according to input image resolution, and performance according to camera object resolution were conducted, and the results were analyzed with respect to applying task offloading for real-time object detection in autonomous driving on resource-constrained devices. In these experiments, low-resolution images achieved a performance improvement through the task offloading structure and met the real-time requirements of autonomous driving, whereas high-resolution images, despite improved performance, did not meet the real-time requirements because of the increased communication time. These experiments confirmed that object recognition in autonomous driving is affected by various conditions, such as the input images and the communication environment, as well as by the object recognition model used.
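The task-offloading structure discussed above amounts to shipping a frame to a remote inference server and waiting for detections to come back, so total latency is remote compute time plus communication time. The endpoint, payload format, and response schema below are invented for illustration; they are not from the paper.

```python
import time
import cv2
import requests  # assumed available; any HTTP client would do

def offload_detection(frame, server_url="http://edge-server:8080/detect"):
    """Send a JPEG-compressed frame to a hypothetical remote detector."""
    t0 = time.perf_counter()
    ok, jpeg = cv2.imencode(".jpg", frame)  # smaller payload -> less communication time
    if not ok:
        raise ValueError("frame encoding failed")
    resp = requests.post(server_url, data=jpeg.tobytes(),
                         headers={"Content-Type": "image/jpeg"}, timeout=1.0)
    resp.raise_for_status()
    latency_ms = (time.perf_counter() - t0) * 1000.0
    return resp.json()["detections"], latency_ms  # hypothetical response schema
```

Measuring latency per frame is how the resolution trade-off in the abstract shows up: low-resolution frames keep the round trip within the real-time budget, while high-resolution frames spend the gain on communication.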

Investigation on the Real-Time Environment Recognition System Based on Stereo Vision for Moving Object (스테레오 비전 기반의 이동객체용 실시간 환경 인식 시스템)

  • Lee, Chung-Hee; Lim, Young-Chul; Kwon, Soon; Lee, Jong-Hun
    • IEMEK Journal of Embedded Systems and Applications, v.3 no.3, pp.143-150, 2008
  • In this paper, we investigate a real-time environment recognition system based on stereo vision for moving objects. The system consists of stereo matching, obstacle detection, and distance estimation. In the stereo matching part, depth maps are obtained from real road images captured with an adjustable-baseline stereo vision system using the belief propagation (BP) algorithm. In the detection part, various obstacles are detected using only the depth map, by both the v-disparity and column detection methods, under real road environments. Finally, in the estimation part, asymmetric parabola fitting with the NCC method improves the distance estimation of detected obstacles. This stereo vision system can be applied to many applications such as unmanned vehicles and robots.
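A minimal sketch of the v-disparity representation mentioned above: each image row of the disparity map is histogrammed over disparity values, so the road surface appears as a slanted line and upright obstacles as near-vertical segments. This is the textbook construction, not the paper's exact procedure.

```python
import numpy as np

def v_disparity(disparity, max_disp=64):
    """Build a v-disparity map: one disparity histogram per image row."""
    h, _ = disparity.shape
    vdisp = np.zeros((h, max_disp), dtype=np.int32)
    for v in range(h):
        row = disparity[v]
        valid = row[(row > 0) & (row < max_disp)].astype(int)
        # Count how many pixels in this row share each disparity value.
        vdisp[v] += np.bincount(valid, minlength=max_disp)
    return vdisp
```

Fitting a line to the dominant slanted band gives the road profile; pixels whose disparity lies well above that profile are obstacle candidates.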


Object-Action and Risk-Situation Recognition Using Moment Change and Object Size's Ratio (모멘트 변화와 객체 크기 비율을 이용한 객체 행동 및 위험상황 인식)

  • Kwak, Nae-Joung; Song, Teuk-Seob
    • Journal of Korea Multimedia Society, v.17 no.5, pp.556-565, 2014
  • This paper proposes a method to track objects in real-time video transferred through a single web camera and to recognize human actions and risk situations. The proposed method recognizes basic actions that humans perform in daily life and detects risk situations such as fainting and falling down, classifying ordinary actions and risk situations. The method models the background, obtains the difference image between the input image and the modeled background, extracts the human object from the input image, tracks the object's motion, and recognizes human actions. Object tracking uses the moment information of the extracted object, and action recognition is based on the change of moments and the ratio of the object's size between frames. Four of the most common daily actions are classified - walking, walking diagonally, sitting down, and standing up - and suddenly falling down is classified as a risk situation. To test the proposed method, we applied it to web-camera videos of eight participants, classifying human actions and recognizing risk situations. The test results showed a recognition rate of more than 97 percent for each action and 100 percent for the risk situation.
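The moment- and size-ratio-based cues can be sketched roughly as follows: background subtraction isolates the person, image moments give the centroid, and the change of the bounding-box aspect ratio between frames is the signal that separates ordinary actions from a sudden fall. The thresholds here are illustrative assumptions, not values from the paper.

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2()

def frame_features(frame):
    """Extract centroid and width/height ratio of the largest moving object."""
    mask = subtractor.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    x, y, w, h = cv2.boundingRect(blob)
    return (cx, cy), w / h  # centroid plus size ratio

def looks_like_fall(prev, curr, ratio_jump=1.8, drop_px=40):
    """Heuristic: a sudden switch to a wide, low silhouette suggests falling down."""
    (px, py), prev_ratio = prev
    (cx, cy), curr_ratio = curr
    # Image y grows downward, so a falling centroid means cy > py.
    return curr_ratio > prev_ratio * ratio_jump and (cy - py) > drop_px
```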

Research on Intelligent Anomaly Detection System Based on Real-Time Unstructured Object Recognition Technique (실시간 비정형객체 인식 기법 기반 지능형 이상 탐지 시스템에 관한 연구)

  • Lee, Seok Chang; Kim, Young Hyun; Kang, Soo Kyung; Park, Myung Hye
    • Journal of Korea Multimedia Society, v.25 no.3, pp.546-557, 2022
  • Recently, the demand for interpreting image data with artificial intelligence is rapidly increasing in various fields. Object recognition and detection techniques using deep learning are mainly used, and integrated video analysis for recognizing unstructured objects is a particularly important problem. In the case of natural or social disasters, object recognition alone is limited because such objects have unstructured shapes. In this paper, we propose an intelligent integrated video analysis system that can recognize unstructured objects based on video turning points and object detection. We also introduce a method to apply and evaluate object recognition using virtual augmented images, extended from 2D to 3D through a GAN.

A Dangerous Situation Recognition System Using Human Behavior Analysis (인간 행동 분석을 이용한 위험 상황 인식 시스템 구현)

  • Park, Jun-Tae; Han, Kyu-Phil; Park, Yang-Woo
    • Journal of Korea Multimedia Society, v.24 no.3, pp.345-354, 2021
  • Recently, deep learning-based image recognition systems have been adopted in various surveillance environments, but most of them are still single-image object recognition methods, which are insufficient for long-term temporal analysis and high-level situation management. Therefore, we propose a method that recognizes specific dangerous situations caused by humans in real time, utilizing deep learning-based object analysis techniques. The proposed method uses deep learning-based object detection and tracking algorithms in order to recognize situations such as 'trespassing' and 'loitering'. In addition, human joint pose data are extracted and analyzed for emergency awareness functions such as 'falling down', enabling notification not only in security settings but also in emergency environments.
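As a loose illustration of the joint-pose analysis mentioned for the 'falling down' case (not the paper's model), the check below assumes some pose estimator has already produced 2D keypoints and simply compares the body's vertical extent with its horizontal spread; the keypoint names and threshold are hypothetical.

```python
def is_fallen(keypoints, flat_ratio=0.6):
    """keypoints: dict of 2D joints, e.g. {'head': (x, y), 'hip': (x, y), ...}.

    A standing person spans more vertically than horizontally; when the
    vertical extent collapses below `flat_ratio` of the horizontal one,
    treat the posture as lying down (a possible fall).
    """
    xs = [p[0] for p in keypoints.values()]
    ys = [p[1] for p in keypoints.values()]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    if width == 0:
        return False
    return height / width < flat_ratio

# Toy example: joints of a roughly horizontal body.
print(is_fallen({"head": (10, 100), "hip": (120, 110), "ankle": (230, 105)}))
```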

Real-Time Haptic Rendering of Slowly Deformable Bodies Based on Two Dimensional Visual Information for Telemanipulation (원격조작을 위한 2차원 영상정보에 기반한 저속 변형체의 실시간 햅틱 렌더링)

  • Kim, Jung-Sik; Kim, Young-Jin; Kim, Jung
    • Transactions of the Korean Society of Mechanical Engineers A, v.31 no.8, pp.855-861, 2007
  • Haptic rendering is the process of providing force feedback during interactions between a user and a virtual object. This paper presents a real-time haptic rendering technique for deformable objects based on visual information of the interaction between a tool and a real object in a remote place. A user can feel an artificial reaction force through a haptic device in real time while a slave system performs manipulation tasks on a deformable object. The models of the deformable object and the manipulator are created from images captured with a CCD camera, and the objects are recognized using image processing techniques. The force is provided at a rate of 1 kHz for stable haptic interaction by extrapolating forces computed at a lower update rate. The rendering algorithm developed was tested and validated on a test platform consisting of a one-dimensional indentation device and an off-the-shelf force feedback device. This software system can be used in a cellular manipulation system, providing artificial force feedback to enhance the success rate of operations.
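The extrapolation step described above (force estimated at a slow, vision-limited rate but rendered at 1 kHz) can be sketched as a simple linear predictor; the structure and numbers are illustrative assumptions, not the paper's implementation.

```python
class ForceExtrapolator:
    """Serve 1 kHz force samples from force estimates arriving at ~30 Hz."""

    def __init__(self):
        self.f_prev = 0.0          # force at the previous slow update
        self.f_curr = 0.0          # force at the latest slow update
        self.dt_slow = 1.0 / 30.0  # interval between slow (vision-rate) updates

    def slow_update(self, force, dt):
        """Called whenever the vision-based force estimate is refreshed."""
        self.f_prev, self.f_curr = self.f_curr, force
        self.dt_slow = dt

    def sample(self, t_since_update):
        """Called at 1 kHz: extrapolate along the last observed force slope."""
        slope = (self.f_curr - self.f_prev) / self.dt_slow
        return self.f_curr + slope * t_since_update
```

The haptic loop would call sample() every millisecond with the elapsed time since the last vision update, keeping the force signal smooth between slow estimates.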

Joint frame rate adaptation and object recognition model selection for stabilized unmanned aerial vehicle surveillance

  • Gyu Seon Kim; Haemin Lee; Soohyun Park; Joongheon Kim
    • ETRI Journal, v.45 no.5, pp.811-821, 2023
  • We propose an adaptive unmanned aerial vehicle (UAV)-assisted object recognition algorithm for urban surveillance scenarios. For UAV-assisted surveillance, UAVs are equipped with learning-based object recognition models and can collect surveillance image data. However, owing to the limitations of UAVs regarding power and computational resources, adaptive control must be performed accordingly. Therefore, we introduce a self-adaptive control strategy to maximize the time-averaged recognition performance subject to stability through a formulation based on Lyapunov optimization. Results from performance evaluations on real-world data demonstrate that the proposed algorithm achieves the desired performance improvements.
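A very small sketch of the drift-plus-penalty style decision implied by the Lyapunov formulation above: at each slot, the controller picks the (frame rate, model) option that trades recognition reward against the backlog of a virtual queue tracking the resource constraint. The candidate options, their costs, and the weight V are all made up for illustration.

```python
def choose_action(queue_backlog, options, v_weight=10.0):
    """Pick the option minimizing  queue * load  -  V * recognition reward."""
    best, best_score = None, float("inf")
    for opt in options:
        score = queue_backlog * opt["load"] - v_weight * opt["accuracy"]
        if score < best_score:
            best, best_score = opt, score
    return best

# Hypothetical options: heavier models recognize better but load the UAV more.
options = [
    {"name": "small-model@10fps", "load": 1.0, "accuracy": 0.70},
    {"name": "large-model@5fps",  "load": 2.5, "accuracy": 0.85},
]
backlog = 3.0                                      # virtual queue state
act = choose_action(backlog, options)
backlog = max(backlog + act["load"] - 2.0, 0.0)    # queue update with service rate 2.0
print(act["name"], backlog)
```

Larger V favors recognition performance at the cost of a longer (but still stable) virtual queue, which is the time-average trade-off the abstract refers to.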