• Title/Abstract/Keywords: Object recognition system


Comparative Study of Corner and Feature Extractors for Real-Time Object Recognition in Image Processing

  • Mohapatra, Arpita;Sarangi, Sunita;Patnaik, Srikanta;Sabut, Sukant
    • Journal of information and communication convergence engineering / v.12 no.4 / pp.263-270 / 2014
  • Corner detection and feature extraction are essential aspects of computer vision problems such as object recognition and tracking. Feature detectors such as the Scale Invariant Feature Transform (SIFT) yield high-quality features but are computationally intensive for use in real-time applications. The Features from Accelerated Segment Test (FAST) detector provides faster feature computation by extracting only corner information for recognizing an object. In this paper we analyze object detection algorithms with respect to efficiency, quality, and robustness by comparing the characteristics of corner detectors and feature extractors. The simulation results show that, compared with the conventional SIFT algorithm, an object recognition system based on the FAST corner detector achieves higher speed with little performance degradation. The average time to find keypoints with the SIFT method is about 0.116 seconds for extracting 2169 keypoints, while the average time to find corner points with the FAST method at threshold 30 was 0.651 seconds for detecting 1714 keypoints. Thus the FAST method detects corner points faster while providing image quality adequate for object recognition.
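
As a rough illustration of the comparison summarized above, the sketch below times OpenCV's SIFT keypoint extraction against FAST corner detection on a grayscale image; the image path is a placeholder, and only the FAST threshold of 30 comes from the abstract.

```python
# Hypothetical sketch: timing SIFT keypoint extraction vs. FAST corner detection
# with OpenCV, in the spirit of the comparison above. The input file name is an
# illustrative assumption, not taken from the paper.
import time
import cv2

image = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input image

# SIFT: scale- and rotation-invariant keypoints with descriptors (slower)
sift = cv2.SIFT_create()
t0 = time.time()
sift_keypoints, sift_descriptors = sift.detectAndCompute(image, None)
print(f"SIFT: {len(sift_keypoints)} keypoints in {time.time() - t0:.3f} s")

# FAST: corner-only detection via an accelerated segment test (much cheaper)
fast = cv2.FastFeatureDetector_create(threshold=30)  # threshold 30 as in the abstract
t0 = time.time()
fast_keypoints = fast.detect(image, None)
print(f"FAST: {len(fast_keypoints)} corners in {time.time() - t0:.3f} s")
```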

A Study on Building 3-D Object Recognition System Using the Orientation Information (방향정보를 이용한 3차원 물체 인식시스템의 구축에 관한 연구)

  • 박종훈;이상훈;최연성;최종수
    • Journal of the Korean Institute of Telematics and Electronics / v.27 no.5 / pp.757-766 / 1990
  • In this paper, a new knowledge-based vision system using orientation information on each surface of a 3-dimensional object is discussed. The orientation information is measured by the photometric stereo method, and the obtained orientations are then segmented using Gaussian curvature and mean curvature. A hierarchical knowledge base built on the characteristics, shape, area, and length of the surfaces is constructed, and the knowledge-based system performs inference through the condition interpreter system (CIS). As a result, an easier and more accurate 3-D object recognition system is implemented, because it uses the characteristics and shapes of surfaces as the units of the recognition process.
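
For readers unfamiliar with curvature-based segmentation, the minimal sketch below classifies surface patches by the signs of Gaussian and mean curvature computed from a depth map; the depth map stands in for the surface recovered by photometric stereo, and the synthetic data and unit pixel spacing are assumptions rather than the paper's setup.

```python
# Hypothetical sketch: Gaussian (K) and mean (H) curvature of a depth map z(x, y),
# whose signs are commonly used to classify surface patches before segmentation.
# The synthetic paraboloid and unit pixel spacing are illustrative assumptions.
import numpy as np

def curvature_maps(z):
    """Return Gaussian and mean curvature maps for a depth map z (rows = y, cols = x)."""
    zy, zx = np.gradient(z)        # first derivatives (unit pixel spacing assumed)
    zxy, zxx = np.gradient(zx)     # derivatives of zx along y and x
    zyy, _ = np.gradient(zy)       # derivative of zy along y
    denom = 1.0 + zx**2 + zy**2
    K = (zxx * zyy - zxy**2) / denom**2
    H = ((1 + zx**2) * zyy - 2 * zx * zy * zxy + (1 + zy**2) * zxx) / (2 * denom**1.5)
    return K, H

# Synthetic convex patch (a paraboloid) just to exercise the function.
u = np.linspace(-1, 1, 64)
X, Y = np.meshgrid(u, u)
Z = X**2 + Y**2
K, H = curvature_maps(Z)
labels = np.sign(K).astype(int) * 3 + np.sign(H).astype(int)  # coarse surface-type code
print(np.unique(labels))
```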

FPGA-based Object Recognition System (FPGA기반 객체인식 시스템)

  • Shin, Seong-Yoon;Cho, Gwang-Hyun;Cho, Seung-Pyo;Shin, Kwang-Seong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.407-408 / 2022
  • In this paper, we examine the components of an FPGA-based object recognition system one by one, describing the function of each: the camera, DLM, service system, video output monitor, deep trainer software, and external deep learning software.

A Dangerous Situation Recognition System Using Human Behavior Analysis (인간 행동 분석을 이용한 위험 상황 인식 시스템 구현)

  • Park, Jun-Tae;Han, Kyu-Phil;Park, Yang-Woo
    • Journal of Korea Multimedia Society / v.24 no.3 / pp.345-354 / 2021
  • Recently, deep learning-based image recognition systems have been adopted in various surveillance environments, but most of them still use single-frame object recognition methods, which are insufficient for long-term temporal analysis and higher-level situation management. Therefore, we propose a method that recognizes specific dangerous situations caused by humans in real time, utilizing deep learning-based object analysis techniques. The proposed method uses deep learning-based object detection and tracking algorithms to recognize situations such as 'trespassing' and 'loitering'. In addition, human joint pose data are extracted and analyzed for emergency-awareness functions such as 'falling down', so that alerts can be issued not only for security but also in emergency environments.
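
As one hypothetical example of the pose-based emergency awareness mentioned above, the sketch below flags a 'falling down' event when the hip keypoint drops unusually fast; the joint choice, frame rate, and threshold are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of a rule that can sit on top of pose estimation for a
# 'falling down' alert. The chosen joint, frame rate, and threshold below are
# illustrative assumptions.
import numpy as np

FPS = 30               # assumed camera frame rate
DROP_THRESHOLD = 0.8   # assumed drop speed, in body-heights per second

def is_falling(hip_y_history, body_height):
    """Flag a fall when the hip keypoint drops faster than DROP_THRESHOLD.

    hip_y_history: vertical hip positions (pixels, y grows downward) over ~1 second.
    body_height: person height in pixels, used to normalize the drop speed.
    """
    if len(hip_y_history) < 2 or body_height <= 0:
        return False
    drop = hip_y_history[-1] - hip_y_history[0]        # positive = moving down
    seconds = (len(hip_y_history) - 1) / FPS
    return (drop / body_height) / seconds > DROP_THRESHOLD

# Example: hip drops by ~60% of body height over half a second -> likely a fall.
history = np.linspace(400, 520, 16)   # 16 frames of hip y-coordinates
print(is_falling(list(history), body_height=200))
```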

Collaborative Place and Object Recognition in Video using Bidirectional Context Information (비디오에서 양방향 문맥 정보를 이용한 상호 협력적인 위치 및 물체 인식)

  • Kim, Sung-Ho;Kweon, In-So
    • The Journal of Korea Robotics Society / v.1 no.2 / pp.172-179 / 2006
  • In this paper, we present a practical place and object recognition method for guiding visitors in building environments. Recognizing places or objects in the real world can be difficult due to motion blur and camera noise. In this work, we present a modeling method based on the bidirectional interaction between places and objects, in which each reinforces the other for robust recognition. The unification of visual context, including scene context, object context, and temporal context, is also presented. The proposed system has been tested for guiding visitors in a large-scale building environment (10 topological places, 80 3D objects).
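
To make the idea of bidirectional context concrete, the toy sketch below lets a place belief reweight object detector scores while the reweighted object evidence updates the place belief in turn; the categories and probabilities are invented for illustration and are not the paper's model.

```python
# Hypothetical sketch of bidirectional reinforcement between place and object
# beliefs: a place prior reweights object likelihoods, and the reweighted object
# evidence updates the place belief. All numbers here are toy assumptions.
import numpy as np

places = ["corridor", "office"]
objects = ["extinguisher", "monitor"]

# P(object | place): assumed co-occurrence model (rows = places, cols = objects).
p_obj_given_place = np.array([[0.7, 0.3],    # corridor
                              [0.2, 0.8]])   # office

place_belief = np.array([0.5, 0.5])          # uniform prior over places
obj_likelihood = np.array([0.4, 0.6])        # raw detector scores for one image

for _ in range(3):  # a few rounds of mutual reinforcement
    # Objects: combine detector evidence with the contextual prior from places.
    obj_prior = place_belief @ p_obj_given_place
    obj_belief = obj_likelihood * obj_prior
    obj_belief /= obj_belief.sum()
    # Places: update the place belief from the contextualized object belief.
    place_belief = place_belief * (p_obj_given_place @ obj_belief)
    place_belief /= place_belief.sum()

print(dict(zip(places, place_belief.round(3))), dict(zip(objects, obj_belief.round(3))))
```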

Fundamental Research for Video-Integrated Collision Prediction and Fall Detection System to Support Navigation Safety of Vessels

  • Kim, Bae-Sung;Woo, Yun-Tae;Yu, Yung-Ho;Hwang, Hun-Gyu
    • Journal of Ocean Engineering and Technology / v.35 no.1 / pp.91-97 / 2021
  • Marine accidents caused by ships have brought about economic and social losses as well as human casualties. Most of these accidents involve small and medium-sized ships, owing to their poor conditions and insufficient equipment compared with larger vessels; measures to improve these conditions are urgently needed. This paper discusses a video-integrated collision prediction and fall detection system to support the safe navigation of small and medium-sized ships. The system predicts ship collisions and detects falls by crew members using CCTV, displays the analyzed, integrated information using automatic identification system (AIS) messages, and provides alerts for the identified risks. The design consists of an object recognition algorithm, an interface module, an integrated display module, a collision prediction and fall detection module, and an alarm management module. As basic research, we implemented a deep learning algorithm to recognize the ship and crew from images, and an interface module to manage messages from the AIS. To verify the implemented algorithm, we conducted tests using 120 images. Object recognition performance is calculated as mAP by comparing the pre-defined objects with the objects recognized by the algorithm. As a result, the object recognition performance for the ship and the crew was approximately 50.44 mAP and 46.76 mAP, respectively. The interface module showed that messages from the installed AIS were accurately converted according to the international standard. Therefore, we implemented an object recognition algorithm and interface module in the designed collision prediction and fall detection system and validated their usability through testing.
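
As a reminder of how the mAP-style evaluation mentioned above is built up, the sketch below matches detections to ground-truth boxes by IoU and computes precision and recall for one class; the boxes, scores, and IoU threshold are illustrative assumptions.

```python
# Hypothetical sketch of the evaluation idea described above: detections are
# greedily matched to ground-truth boxes by IoU, and precision/recall (the
# ingredients of AP/mAP) are computed for one class. All values are toy data.
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def precision_recall(detections, ground_truth, iou_thr=0.5):
    """Greedy matching of score-sorted detections to ground truth at one IoU threshold."""
    detections = sorted(detections, key=lambda d: d["score"], reverse=True)
    matched, tp = set(), 0
    for det in detections:
        best, best_iou = None, iou_thr
        for i, gt in enumerate(ground_truth):
            if i not in matched and iou(det["box"], gt) >= best_iou:
                best, best_iou = i, iou(det["box"], gt)
        if best is not None:
            matched.add(best)
            tp += 1
    fp = len(detections) - tp
    return tp / (tp + fp), tp / len(ground_truth)

# Toy example with one 'ship' ground-truth box and two detections.
gts = [(10, 10, 110, 60)]
dets = [{"box": (12, 12, 108, 58), "score": 0.9}, {"box": (200, 200, 240, 230), "score": 0.4}]
print(precision_recall(dets, gts))  # -> (0.5, 1.0)
```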

Vision Based Sensor Fusion System of Biped Walking Robot for Environment Recognition (영상 기반 센서 융합을 이용한 이쪽로봇에서의 환경 인식 시스템의 개발)

  • Song, Hee-Jun;Lee, Seon-Gu;Kang, Tae-Gu;Kim, Dong-Won;Seo, Sam-Jun;Park, Gwi-Tae
    • Proceedings of the KIEE Conference / 2006.04a / pp.123-125 / 2006
  • This paper discusses a vision-based sensor fusion system for biped robot walking. Most research on biped walking robots has focused on the walking algorithm itself. However, developing vision systems for biped walking robots is an important and urgent issue, since such robots are ultimately developed not only for research but to be utilized in real life. In this research, systems for environment recognition and tele-operation have been developed for task assignment and execution by the biped robot, as well as a human-robot interaction (HRI) system. To carry out certain tasks, an object tracking system using a modified optical flow algorithm and an obstacle recognition system using enhanced template matching and a hierarchical support vector machine algorithm are implemented with a wireless vision camera, together with a sensor fusion system using the other sensors installed on the biped walking robot. Systems for robot manipulation and communication with the user have also been developed.
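
For orientation, the sketch below shows two of the standard building blocks named in this abstract, Lucas-Kanade optical flow tracking and normalized template matching, using stock OpenCV calls; the file names and parameters are placeholders, and the paper's 'modified' and 'enhanced' variants are not reproduced.

```python
# Hypothetical sketch: Lucas-Kanade optical flow for tracking points between two
# frames, plus normalized template matching for locating an obstacle template.
# File names and parameter values are illustrative assumptions.
import cv2

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frames
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Track good corner features from the previous to the current frame.
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=100, qualityLevel=0.01, minDistance=7)
p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None, winSize=(21, 21), maxLevel=3)
tracked = p1[status.flatten() == 1]
print(f"tracked {len(tracked)} points")

# Locate an obstacle template in the current frame via normalized cross-correlation.
template = cv2.imread("obstacle_template.png", cv2.IMREAD_GRAYSCALE)
scores = cv2.matchTemplate(curr, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(scores)
print(f"best template match {max_val:.2f} at {max_loc}")
```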

A Study on 2-D Objects Recognition Using Polygonal Approximation and Coordinates Transition (다각근사화와 좌표이동을 이용한 겹친 2차원 물체인식)

  • 박원진;김보현;이대영
    • Proceedings of the Korean Institute of Communication Sciences Conference / 1986.10a / pp.45-52 / 1986
  • This paper presents an experimental model-based vision system which can identify and locate objects in scenes containing multiple occluded parts. The objects are assumed to be rigid, planar parts. In any recognition system, the type of object that might appear in the image dictates the type of knowledge needed to recognize it. The data are reduced to a sequential list of points or pixels that appear on the boundary of the objects. The boundary of each object is then smoothed using a polygonal approximation algorithm. Recognition consists of finding the prototype that best matches the model to the image; the best match is obtained by optimizing a similarity measure.
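
As a small illustration of the boundary-smoothing step described above, the sketch below extracts object contours and reduces each to a polygon with OpenCV's Douglas-Peucker approximation; the input image, threshold, and epsilon value are assumptions.

```python
# Hypothetical sketch of the boundary-processing step: extract object contours and
# smooth them with a polygonal approximation (Douglas-Peucker as implemented by
# OpenCV). The image, threshold, and epsilon value are illustrative assumptions.
import cv2

image = cv2.imread("parts.png", cv2.IMREAD_GRAYSCALE)       # hypothetical scene of flat parts
_, binary = cv2.threshold(image, 128, 255, cv2.THRESH_BINARY)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
for contour in contours:
    # Approximate the pixel boundary by a polygon; epsilon controls smoothing.
    epsilon = 0.01 * cv2.arcLength(contour, True)
    polygon = cv2.approxPolyDP(contour, epsilon, True)
    print(f"boundary of {len(contour)} pixels reduced to {len(polygon)} vertices")
```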

Learning Rules for Partially Occluded Object Recognition (부분적으로 가려진 물체의 인식 룰의 습득)

  • 정재영;김문현
    • Journal of the Korean Institute of Telematics and Electronics / v.27 no.6 / pp.954-962 / 1990
  • The expertise needed to recognize an object despite every possible occlusion among objects is difficult to provide directly to a system. In this paper, we propose a method for inferring the inherent shape characteristics of an object from the training views provided. The method learns rules incrementally by alternating a rule induction process over a limited number of training views with a rule verification process over the following training views. The learned rules are represented as logical expressions to enhance readability. The proposed method is tested by simulating occlusions of 2-dimensional objects to examine the learning process and to show the improvement in recognition rate. The results show that it can be applied to a practical system for 3-dimensional object recognition.
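
The toy sketch below conveys the flavor of inducing candidate rules from early views and keeping only those that keep holding as further (occluded) views arrive; the symbolic predicates and the support-counting simplification are assumptions and do not reproduce the paper's rule language.

```python
# Hypothetical, heavily simplified sketch of an induce-then-verify loop with views
# represented as sets of symbolic shape predicates. The predicates and views below
# are toy assumptions.
def learn_rules(training_views, min_support=2):
    """Incrementally count how often each shape predicate is observed and keep
    those seen in at least `min_support` views as (simplified) recognition rules."""
    support = {}
    rules = set()
    for view in training_views:            # process training views one at a time
        for predicate in view:
            support[predicate] = support.get(predicate, 0) + 1
            if support[predicate] >= min_support:
                rules.add(predicate)
    return rules

# Toy views of one object under different occlusions.
views = [
    {"has_right_angle_corner", "has_long_straight_edge", "has_hole"},
    {"has_long_straight_edge", "has_hole"},              # corner occluded
    {"has_right_angle_corner", "has_hole"},              # edge occluded
]
print(learn_rules(views))   # predicates that survived several occluded views
```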

Object Recognition using 3D Depth Measurement System. (3차원 거리 측정 장치를 이용한 물체 인식)

  • Gim, Seong-Chan;Ko, Su-Hong;Kim, Hyong-Suk
    • Proceedings of the IEEK Conference / 2006.06a / pp.941-942 / 2006
  • A depth measurement system that recognizes the 3D shape of objects using a single camera, a line laser, and a rotating mirror has been investigated. The camera and the light source are fixed, facing the rotating mirror. The laser light is reflected by the mirror and projected onto the scene objects whose locations are to be determined, and the camera detects the laser light's position on object surfaces through the same mirror. The area to be measured is scanned by rotating the mirror. The segmentation step of object recognition is performed using the restored 3D depth data, and the object recognition domain can be reduced by separating the objects of interest from the complex background.
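
As background on the geometry such a system relies on, the sketch below shows a simplified 2D triangulation that intersects the laser ray (set by the mirror angle) with the camera ray (from the pixel column) to recover depth; all calibration values are assumptions, not the paper's.

```python
# Hypothetical sketch of the triangulation behind a camera / line-laser / rotating-
# mirror range finder: the laser projection angle (set by the mirror) and the
# camera ray angle (from the pixel column) intersect at the surface point. The
# geometry is reduced to 2D and every parameter value is an assumption.
import math

def depth_from_triangulation(pixel_offset, focal_px, baseline_m, laser_angle_rad):
    """Depth (along the optical axis) of the laser spot.

    pixel_offset: spot's horizontal offset from the principal point, in pixels,
                  measured toward the laser side.
    focal_px: focal length in pixels; baseline_m: camera-to-laser distance in meters.
    laser_angle_rad: laser ray's angle from the optical axis, tilted toward the camera.
    """
    camera_angle = math.atan2(pixel_offset, focal_px)
    return baseline_m / (math.tan(camera_angle) + math.tan(laser_angle_rad))

# Example: 0.3 m baseline, 800 px focal length, mirror steering the laser at 30 degrees.
z = depth_from_triangulation(pixel_offset=120, focal_px=800, baseline_m=0.3,
                             laser_angle_rad=math.radians(30))
print(f"estimated depth: {z:.3f} m")
```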
