• Title/Summary/Keyword: Object Recognition Technology

A Study on Rotational Alignment Algorithm for Improving Character Recognition (문자 인식 향상을 위한 회전 정렬 알고리즘에 관한 연구)

  • Jin, Go-Whan
    • Journal of the Korea Convergence Society
    • /
    • v.10 no.11
    • /
    • pp.79-84
    • /
    • 2019
  • Video image-based technology is being used in various fields and continues to develop. The demand for vision system technology that analyzes and discriminates image objects acquired through cameras is rapidly increasing. Image processing is one of the core technologies of vision systems and is used for defect inspection in semiconductor manufacturing and for object recognition inspection, such as reading numbers and symbols on tire surfaces. Research into license plate recognition is also ongoing, and objects must be recognized quickly and accurately. In this paper, we propose a recognition model that, for the recognition of inclined objects such as numbers or symbols marked on a surface, checks the tilt angle of the object in the input video image and then rotationally aligns the object. The proposed model extracts the object region based on a contour algorithm, calculates the angle of the object, and then performs object recognition on the rotationally aligned image (see the sketch below). In future research, template matching through machine learning needs to be studied.
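The abstract above does not include implementation details; as a hedged illustration of the general contour-based alignment idea (extract the object region, estimate its tilt angle, rotate before recognition), a minimal OpenCV sketch follows. All function and variable names are assumptions, not the authors' code.

```python
import cv2
import numpy as np

def align_object(gray: np.ndarray) -> np.ndarray:
    """Rotate the dominant object in a grayscale image to an upright pose.

    Illustrative sketch only: threshold -> largest contour -> minAreaRect
    angle -> rotation about the rectangle center.
    """
    # Binarize and extract contours of candidate object regions.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return gray

    # Use the largest contour as the object region and estimate its tilt angle.
    largest = max(contours, key=cv2.contourArea)
    (cx, cy), (w, h), angle = cv2.minAreaRect(largest)
    if w < h:                      # heuristic to normalize the angle convention
        angle += 90.0

    # Rotationally align the image before passing it to a recognizer.
    rot = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    return cv2.warpAffine(gray, rot, (gray.shape[1], gray.shape[0]))
```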

Object Recognition-based Global Localization for Mobile Robots (이동로봇의 물체인식 기반 전역적 자기위치 추정)

  • Park, Soon-Yong;Park, Mignon;Park, Sung-Kee
    • The Journal of Korea Robotics Society
    • /
    • v.3 no.1
    • /
    • pp.33-41
    • /
    • 2008
  • Based on object recognition technology, we present a new global localization method for robot navigation. To do this, we model any indoor environment with a stereo camera using the following visual cues: view-based image features for object recognition and their 3D positions for object pose estimation. We also use the depth information along the horizontal centerline of the image, through which the optical axis passes, which is similar to the data of a 2D laser range finder. Therefore, we can build a hybrid local node for a topological map that is composed of an indoor-environment metric map and an object location map. Based on such modeling, we suggest a coarse-to-fine strategy for estimating the global localization of a mobile robot: the coarse pose is obtained by means of object recognition and SVD-based least-squares fitting (see the sketch below), and its refined pose is then estimated with a particle filtering algorithm. Real experiments show that the proposed method can be an effective vision-based global localization algorithm.
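The "SVD-based least-squares fitting" mentioned above is commonly realized as a rigid alignment between corresponding 3D points (the standard Kabsch solution). The sketch below shows that generic technique under the assumption of known point correspondences; it is not the authors' implementation.

```python
import numpy as np

def svd_rigid_fit(model_pts: np.ndarray, scene_pts: np.ndarray):
    """Least-squares rigid transform (R, t) mapping model_pts onto scene_pts.

    Standard SVD (Kabsch) solution; both inputs are (N, 3) arrays of
    corresponding 3D points, e.g. recognized object landmarks.
    """
    mu_m = model_pts.mean(axis=0)
    mu_s = scene_pts.mean(axis=0)

    # Cross-covariance of the centered point sets.
    H = (model_pts - mu_m).T @ (scene_pts - mu_s)
    U, _, Vt = np.linalg.svd(H)

    # Reflection-safe rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_s - R @ mu_m
    return R, t
```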

Improved Statistical Grey-Level Models for PCB Inspection (PCB 검사를 위한 개선된 통계적 그레이레벨 모델)

  • Bok, Jin Seop;Cho, Tai-Hoon
    • Journal of the Semiconductor & Display Technology
    • /
    • v.12 no.1
    • /
    • pp.1-7
    • /
    • 2013
  • Grey-level statistical models have been widely used in many applications for object location and identification. However, conventional models cause problems in model refinement when training images are not properly aligned, and they have difficulty with real-time recognition of arbitrarily rotated objects. This paper presents improved grey-level statistical models that align training images using image or feature matching, to overcome the model-refinement problems of conventional models, and that enable real-time recognition of arbitrarily rotated objects using efficient hierarchical search methods. Edges or features extracted from a mean training image are used for accurate alignment of the model in the search image. At the aligned position and orientation, a fitness measure based on the grey-level statistical model is computed for object recognition (see the sketch below). Various experiments in PCB inspection demonstrate that the proposed methods are superior to conventional methods in recognition accuracy and speed.
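As a rough illustration of what a grey-level statistical model and its fitness measure can look like (an assumption for exposition, not the paper's exact formulation), the sketch below builds per-pixel mean and standard-deviation images from aligned training samples and scores a candidate window by its average variance-normalized deviation.

```python
import numpy as np

def build_grey_level_model(aligned_samples: np.ndarray, eps: float = 1.0):
    """aligned_samples: (N, H, W) stack of aligned training images."""
    mean_img = aligned_samples.mean(axis=0)
    std_img = aligned_samples.std(axis=0) + eps   # eps avoids division by zero
    return mean_img, std_img

def fitness(window: np.ndarray, mean_img: np.ndarray, std_img: np.ndarray) -> float:
    """Lower is better: average squared, variance-normalized deviation
    of a candidate window from the model."""
    z = (window.astype(np.float64) - mean_img) / std_img
    return float(np.mean(z * z))
```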

A threshold decision of the object image by using the smart tag

  • Im, Chang-Jun;Kim, Jin-Young;Joung, Kwan-Young;Lee, Ho-Gil
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.2368-2372
    • /
    • 2005
  • In previous research we proposed a novel method for object recognition using a smart tag system. The object could be identified easily, but its pose could not be assured because the threshold problem was not solved. We therefore propose a new method to solve this threshold problem. The method uses a smart tag to decide the threshold by recording color information of the image when the object feature is extracted. It first records the object's original color information on the smart tag, and then continuously records the object image information, the surrounding-scene image information, and the sensor information as object features are extracted through the experiments. Finally, it estimates the current threshold from the recorded information. This method can assign a threshold to each object individually, so it solves the difficult threshold decision problem easily. To demonstrate the feasibility of our method, we implemented our approach using techniques that are as easy and simple as possible.

A Dangerous Situation Recognition System Using Human Behavior Analysis (인간 행동 분석을 이용한 위험 상황 인식 시스템 구현)

  • Park, Jun-Tae;Han, Kyu-Phil;Park, Yang-Woo
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.3
    • /
    • pp.345-354
    • /
    • 2021
  • Recently, deep learning-based image recognition systems have been adopted in various surveillance environments, but most of them are still single-image object recognition methods, which are insufficient for long-term temporal analysis and high-dimensional situation management. Therefore, we propose a method that recognizes specific dangerous situations caused by humans in real time, utilizing deep learning-based object analysis techniques. The proposed method uses deep learning-based object detection and tracking algorithms to recognize situations such as 'trespassing' and 'loitering'. In addition, human joint pose data are extracted and analyzed for emergency awareness functions such as 'falling down' (see the sketch below), enabling notification not only in security settings but also in emergency-response applications.
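The abstract does not specify how the joint pose data are analyzed; as a heavily simplified, hypothetical illustration of a pose-based 'falling down' cue, the sketch below flags a pose whose keypoint bounding box is much wider than it is tall. A real system would also use temporal information from tracking.

```python
import numpy as np

def is_fallen(keypoints: np.ndarray, ratio_threshold: float = 1.2) -> bool:
    """Very rough fall heuristic on 2D pose keypoints of shape (K, 2) in pixels.

    A person lying on the ground tends to produce a keypoint bounding box that
    is much wider than it is tall; the threshold is an illustrative assumption.
    """
    xs, ys = keypoints[:, 0], keypoints[:, 1]
    width = xs.max() - xs.min()
    height = ys.max() - ys.min()
    return bool(width > ratio_threshold * height)
```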

The Robust Derivative Code for Object Recognition

  • Wang, Hainan;Zhang, Baochang;Zheng, Hong;Cao, Yao;Guo, Zhenhua;Qian, Chengshan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.1
    • /
    • pp.272-287
    • /
    • 2017
  • This paper proposes new methods, named Derivative Code (DerivativeCode) and Derivative Code Pattern (DCP), for object recognition. The discriminative derivative code captures local relationships in the input image by concatenating the binary results of mathematical derivative values (see the sketch below). Gabor-based DerivativeCode is applied directly to the palmprint recognition problem and achieves much better performance than state-of-the-art results on the PolyU palmprint database. A new local pattern method, Derivative Code Pattern (DCP), is further introduced to calculate a local pattern feature based on DerivativeCode for object recognition. Similar to the local binary pattern (LBP), DCP can be further combined with Gabor features and modeled by a spatial histogram. To evaluate the performance of DCP and Gabor-DCP, we test them on the FERET and PolyU infrared face databases; experimental results show that the proposed method achieves better results than LBP and several state-of-the-art methods.
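The abstract only states that binary results of derivative values are concatenated; the following sketch is one possible, assumed interpretation: binarize the signs of a few directional derivatives per pixel, pack them into a short code, and pool the codes into an LBP-style spatial histogram. It is illustrative only and may differ from the paper's exact encoding.

```python
import numpy as np

def derivative_code(img: np.ndarray) -> np.ndarray:
    """Toy per-pixel code built from the signs of directional first derivatives."""
    img = img.astype(np.float64)
    dx = np.diff(img, axis=1, prepend=img[:, :1])   # horizontal derivative
    dy = np.diff(img, axis=0, prepend=img[:1, :])   # vertical derivative
    b0 = (dx > 0).astype(np.uint8)
    b1 = (dy > 0).astype(np.uint8)
    b2 = (dx + dy > 0).astype(np.uint8)             # crude diagonal term
    return b0 + 2 * b1 + 4 * b2                     # 3-bit code in [0, 7]

def spatial_histogram(code: np.ndarray, grid: int = 4, n_codes: int = 8) -> np.ndarray:
    """LBP-style descriptor: histogram the codes over a grid of cells."""
    feats = []
    for rows in np.array_split(np.arange(code.shape[0]), grid):
        for cols in np.array_split(np.arange(code.shape[1]), grid):
            cell = code[np.ix_(rows, cols)]
            feats.append(np.bincount(cell.ravel(), minlength=n_codes))
    return np.concatenate(feats).astype(np.float64)
```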

A Salient Based Bag of Visual Word Model (SBBoVW): Improvements toward Difficult Object Recognition and Object Location in Image Retrieval

  • Mansourian, Leila;Abdullah, Muhamad Taufik;Abdullah, Lilli Nurliyana;Azman, Azreen;Mustaffa, Mas Rina
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.2
    • /
    • pp.769-786
    • /
    • 2016
  • Object recognition and object location have always drawn much interest, and various computational models have recently been designed. One of the big issues in this domain is the lack of an appropriate model for extracting the important part of the picture and estimating the object location in the same environments, which has caused low accuracy. To solve this problem, a new Salient Based Bag of Visual Word (SBBoVW) model for object recognition and object location estimation is presented. The contributions of the present study are two-fold. The first is a new approach, the Salient Based Bag of Visual Word (SBBoVW) model, for recognizing difficult objects that have had low accuracy in previous methods: it integrates SIFT features of the original and salient parts of pictures and fuses them to generate better codebooks using the bag-of-visual-words method (see the sketch below). The second is a new algorithm for automatically finding the object location based on the salient map. Performance evaluation on several data sets proves that the new approach outperforms other state-of-the-art methods.
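A minimal sketch of the generic bag-of-visual-words pipeline the model builds on: SIFT descriptors from the full image and a salient crop are pooled, quantized against a k-means codebook, and turned into a histogram. The fusion strategy and all names here are assumptions for illustration, not the authors' implementation.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def sift_descriptors(gray: np.ndarray) -> np.ndarray:
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(gray, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def build_codebook(all_descriptors: np.ndarray, k: int = 200) -> KMeans:
    """Visual vocabulary from descriptors pooled over the training set."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_descriptors)

def bovw_histogram(gray: np.ndarray, salient: np.ndarray, codebook: KMeans) -> np.ndarray:
    """Fuse descriptors from the full image and its salient crop, then
    quantize them into a normalized visual-word histogram."""
    desc = np.vstack([sift_descriptors(gray), sift_descriptors(salient)])
    if len(desc) == 0:
        return np.zeros(codebook.n_clusters)
    words = codebook.predict(desc.astype(np.float64))
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(np.float64)
    return hist / hist.sum()
```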

Object Recognition Using Planar Surface Segmentation and Stereo Vision

  • Kim, Do-Wan;Kim, Sung-Il;Won, Sang-Chul
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.1920-1925
    • /
    • 2004
  • This paper describes a new method for 3D object recognition that uses surface-segment-based stereo vision (a plane-segmentation sketch follows this entry). The position and orientation of objects are identified accurately, enabling a robot to pick them up even when multiple objects are present and partially occluded. Stereo vision is used to obtain the 3D information as the 3D sensing step, and a CAD model with post-processing is used for building models. Matching is initially performed using the model and object features to calculate a rough position and orientation of the object, and a fine adjustment step then improves the accuracy of the position and orientation.
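The abstract does not detail the surface segmentation step; as a hedged illustration of how planar segments can be extracted from stereo-derived 3D points, here is a standard RANSAC plane-fitting sketch (a common technique, not the authors' method).

```python
import numpy as np

def ransac_plane(points: np.ndarray, iters: int = 500, tol: float = 0.01):
    """Segment the dominant plane from an (N, 3) stereo point cloud.

    Returns (normal, d, inlier_mask) for the plane n.x + d = 0. Simple RANSAC
    sketch; a real system would iterate to peel off several surface segments.
    """
    rng = np.random.default_rng(0)
    best_mask, best_plane = None, None
    for _ in range(iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                          # degenerate (collinear) sample
        n /= norm
        d = -n @ p1
        mask = np.abs(points @ n + d) < tol   # distance-to-plane inliers
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_plane = mask, (n, d)
    return best_plane[0], best_plane[1], best_mask
```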

Overview of Image-based Object Recognition AI technology for Autonomous Vehicles (자율주행 차량 영상 기반 객체 인식 인공지능 기술 현황)

  • Lim, Huhnkuk
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.8
    • /
    • pp.1117-1123
    • /
    • 2021
  • Object recognition identifies the location and class of a specific object by analyzing a given input image. One of the fields in which object recognition technology has been actively applied in recent years is autonomous vehicles, and this paper describes the state of image-based object recognition artificial intelligence technology for autonomous vehicles. Image-based object detection algorithms have recently converged on two approaches, single-step detection and two-step detection, and we analyze and organize the field around them. The advantages and disadvantages of the two detection approaches are analyzed and presented; the YOLO/SSD algorithms belong to the single-step approach and the R-CNN/Faster R-CNN algorithms to the two-step approach (see the sketch below). This allows the algorithms suitable for each object recognition application required for autonomous driving to be selected and developed.
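As a concrete reference point for the two-step family discussed above (an addition for illustration, not material from the paper), the sketch below runs a pretrained Faster R-CNN from torchvision on a single image; the model and weight identifiers are torchvision's, while the input file name is hypothetical.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Two-step detector: region proposals followed by classification and box regression.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("road_scene.jpg").convert("RGB")   # hypothetical input file
with torch.no_grad():
    prediction = model([to_tensor(image)])[0]

# Keep confident detections; 'boxes' are (x1, y1, x2, y2) in pixels.
keep = prediction["scores"] > 0.5
print(prediction["boxes"][keep], prediction["labels"][keep])
```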

Implementation of Moving Object Recognition based on Deep Learning (딥러닝을 통한 움직이는 객체 검출 알고리즘 구현)

  • Lee, YuKyong;Lee, Yong-Hwan
    • Journal of the Semiconductor & Display Technology
    • /
    • v.17 no.2
    • /
    • pp.67-70
    • /
    • 2018
  • Object detection and tracking is an exciting and interesting research area in the field of computer vision, and its technologies have been widely used in various application systems such as surveillance, military, and augmented reality. This paper proposes and implements a novel and more robust object recognition and tracking system that localizes and tracks multiple objects in input images, estimating the target state from the likelihoods obtained from multiple CNNs (see the sketch below). Experimental results show that the proposed algorithm effectively handles multi-modal target appearances and other exceptions.
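The abstract does not describe how the CNN likelihoods are combined; the sketch below shows one simple, assumed fusion scheme (averaging per-candidate scores across models) purely for illustration, with the models' output shape also being an assumption.

```python
import torch

def fuse_likelihoods(cnn_models, candidate_patches: torch.Tensor) -> int:
    """Pick the candidate whose averaged likelihood over several CNNs is highest.

    candidate_patches: (N, C, H, W) image patches around candidate target states.
    Each model is assumed to output one likelihood logit per patch.
    """
    with torch.no_grad():
        scores = torch.stack(
            [torch.sigmoid(m(candidate_patches)).squeeze(-1) for m in cnn_models]
        )                               # shape: (num_models, N)
    fused = scores.mean(dim=0)          # average likelihood per candidate
    return int(fused.argmax().item())
```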