• Title/Summary/Keyword: Real-time object recognition


RECOGNITION ALGORITHM OF DRIED OAK MUSHROOM GRADINGS USING GRAY LEVEL IMAGES

  • Lee, C.H.;Hwang, H.
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 1996.06c
    • /
    • pp.773-779
    • /
    • 1996
  • Dried oak mushrooms have complex and varied visual features, and their grading and sorting has traditionally been done by human experts. Although the actions involved in human grading look simple, the decision making behind them is the result of complex neural processing of the visual image. Although the details of human visual recognition have not been fully investigated, humans appear to recognize objects in one of three ways: by extracting specific features, from the image itself without extracting such features, or in a combined manner. In most cases, extracting special quantitative features from the camera image requires complex algorithms, and processing the gray-level image imposes a heavy computing load. This becomes worse when dealing with nonuniform, irregular, and fuzzily shaped agricultural products, and performance suffers because of sensitivity to the crisp criteria or specific rules set up by the algorithms. The constraint of real-time processing also often forces the use of binary segmentation, but in that case some important information about the object can be lost. In this paper, a neural-network-based real-time recognition algorithm is proposed that uses only the directly captured raw gray images, without extracting any visual features. A specially formatted, adaptable-size grid was proposed for the network input (a sketch of such a grid-based input appears after this entry), and illumination compensation was performed to accommodate the variable lighting environment. The proposed grading scheme showed very successful results.

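A minimal Python sketch of the grid-style network input described in the abstract above: the raw gray image is averaged onto a coarse grid and normalized as a crude illumination compensation. The grid size, the normalization scheme, and the function name are illustrative assumptions, not the authors' implementation.

```python
# Sketch (not the authors' code): feed a raw gray image to a small classifier
# by averaging it onto an adaptable grid instead of extracting hand-crafted features.
import numpy as np

def grid_features(gray_img: np.ndarray, rows: int = 16, cols: int = 16) -> np.ndarray:
    """Average the gray image over a rows x cols grid and normalize the cells."""
    h, w = gray_img.shape
    cells = np.zeros((rows, cols), dtype=np.float32)
    for r in range(rows):
        for c in range(cols):
            block = gray_img[r * h // rows:(r + 1) * h // rows,
                             c * w // cols:(c + 1) * w // cols]
            cells[r, c] = block.mean()
    cells -= cells.mean()                    # crude illumination compensation
    cells /= (cells.std() + 1e-6)
    return cells.reshape(-1)                 # flat vector used as the network input
```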

A Study on Image Segmentation and Tracking based on Intelligent Method (지능기법을 이용한 영상분활 및 물체추적에 관한 연구)

  • Lee, Min-Jung;Hwang, Gi-Hyun;Kim, Jeong-Yoon;Jin, Tae-Seok
    • Proceedings of the IEEK Conference
    • /
    • 2007.07a
    • /
    • pp.311-312
    • /
    • 2007
  • This paper proposes a global search and a local search method to track an object in real time. The global search recognizes the target object among candidate objects by searching the entire image, and the local search recognizes and tracks only the target object through a block search (a sketch of this loop follows this entry). The paper uses object color and feature information to achieve fast object recognition. Finally, we conducted an experiment with an object tracking system based on a pan/tilt structure.

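A rough Python sketch of the global-then-local search loop described above. The `detect` callback, the `margin` value, and the assumption that the local block is centered on the previous detection are illustrative choices, not details taken from the paper.

```python
# Sketch of a global/local search tracker. detect(frame, region) is assumed to
# return the target's bounding box (x, y, w, h) inside the given region
# (e.g., by color matching), or None if the target is not found.
def track(frames, detect, margin=40):
    box = None
    for frame in frames:
        if box is None:
            box = detect(frame, region=None)      # global search over the whole image
        else:
            x, y, w, h = box
            roi = (x - margin, y - margin, w + 2 * margin, h + 2 * margin)
            box = detect(frame, region=roi)       # local block search near the last hit
        yield box                                 # clamping roi to image bounds is omitted
```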

Monocular Camera based Real-Time Object Detection and Distance Estimation Using Deep Learning (딥러닝을 활용한 단안 카메라 기반 실시간 물체 검출 및 거리 추정)

  • Kim, Hyunwoo;Park, Sanghyun
    • The Journal of Korea Robotics Society
    • /
    • v.14 no.4
    • /
    • pp.357-362
    • /
    • 2019
  • This paper proposes a model and training method that detects objects and estimates their distances in real time from a monocular camera by applying deep learning. It uses the YOLOv2 model, which is applied to autonomous vehicles and robots because of its fast image processing speed. We modified and retrained the loss function so that the YOLOv2 model can detect objects and estimate distances at the same time. The YOLOv2 loss function was extended with a term for learning the distance value z alongside the bounding box values x, y, w, h and the classification losses (a sketch of such an extended loss follows this entry). In addition, the distance term was multiplied by a weighting parameter to balance the learning. We trained the model on object locations and classes from the camera together with distance data measured by lidar, so that the model can estimate distances and detect objects from a monocular camera even when the vehicle is going uphill or downhill. To evaluate object detection and distance estimation, mAP (mean Average Precision) and adjusted R-squared were used, and the results were compared with previous research. In addition, we compared the FPS (frames per second) of the original YOLOv2 model with that of our model to measure speed.
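
A minimal PyTorch sketch of extending a YOLO-style regression loss with a distance term, as the abstract describes. The tensor layout, the squared-error form, and the weight `lambda_z` are assumptions for illustration, not the paper's exact loss.

```python
# Sketch: box regression loss plus a weighted distance (z) term per responsible cell.
import torch

def box_and_distance_loss(pred, target, obj_mask, lambda_z=1.0):
    """pred/target: (..., 5) tensors holding x, y, w, h, z per predicted box;
    obj_mask selects the cells responsible for an object."""
    box_loss = ((pred[..., :4] - target[..., :4]) ** 2).sum(-1)   # x, y, w, h
    dist_loss = (pred[..., 4] - target[..., 4]) ** 2              # extra distance term
    return (obj_mask * (box_loss + lambda_z * dist_loss)).sum()
```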

Development of Application to guide Putting Aiming using Object Detection Technology (객체 인지 기술을 이용한 퍼팅 조준 가이드 애플리케이션 개발)

  • Jae-Moon Lee;Kitae Hwang;Inhwan Jung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.2
    • /
    • pp.21-27
    • /
    • 2023
  • This paper is a study on the development of an app that assists in putting alignment in golf. The proposed app measures the position and size of the hole cup on the green to provide the distance between the hole cup and the aiming point (a sketch of a size-based distance estimate follows this entry). To achieve this, artificial intelligence object recognition technology was applied: the app measures the position and size of the hole cup in real time from the smartphone camera image and then overlays the distance between the aiming point and the hole cup on the image to assist putting alignment. The proposed app was developed for iOS on the iPhone. Performance testing showed that it can recognize the hole cup in real time and display the distance accurately enough to provide helpful information for putting alignment.
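
One plausible way to turn a measured hole-cup size into a distance is a pinhole-camera estimate; the Python sketch below assumes the regulation 108 mm cup diameter and a known focal length in pixels. The paper does not state its actual computation, so this is only an illustration of the idea.

```python
# Sketch: distance from apparent object size under a pinhole-camera model.
CUP_DIAMETER_M = 0.108  # regulation hole-cup diameter (assumption for this sketch)

def estimate_distance(cup_width_px: float, focal_length_px: float) -> float:
    """Distance (meters) at which a cup-sized object appears cup_width_px wide."""
    return focal_length_px * CUP_DIAMETER_M / cup_width_px

# e.g. a cup 24 px wide with a 1500 px focal length is roughly 6.75 m away
print(estimate_distance(24, 1500))
```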

Development of System for Real-Time Object Recognition and Matching using Deep Learning at Simulated Lunar Surface Environment (딥러닝 기반 달 표면 모사 환경 실시간 객체 인식 및 매칭 시스템 개발)

  • Jong-Ho Na;Jun-Ho Gong;Su-Deuk Lee;Hyu-Soung Shin
    • Tunnel and Underground Space
    • /
    • v.33 no.4
    • /
    • pp.281-298
    • /
    • 2023
  • Continuous research efforts are being devoted to unmanned mobile platforms for lunar exploration. There is an ongoing demand for real-time information processing to accurately determine the positioning and mapping of areas of interest on the lunar surface. To apply deep learning processing and analysis techniques to practical rovers, research on software integration and optimization is imperative. In this study, a foundational investigation has been conducted on real-time analysis of virtual lunar base construction site images, aimed at automatically quantifying spatial information of key objects. This study involved transitioning from an existing region-based object recognition algorithm to a bounding box-based algorithm, thus enhancing object recognition accuracy and inference speed. To facilitate extensive data-based object matching training, the Batch Hard Triplet Mining technique was introduced (a sketch of this technique follows this entry), and research was conducted to optimize both training and inference processes. Furthermore, an improved software system for object recognition and identical object matching was integrated, accompanied by the development of visualization software for the automatic matching of identical objects within input images. Leveraging simulated satellite-captured video data for training objects and video data of moving objects for inference, training and inference for identical object matching were successfully executed. The outcomes of this research suggest the feasibility of implementing 3D spatial information based on continuous-capture video data of mobile platforms and utilizing it for positioning objects within regions of interest. As a result, these findings are expected to contribute to the integration of an automated on-site system for video-based construction monitoring and control of significant target objects within future lunar base construction sites.
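
A compact PyTorch sketch of the batch-hard triplet mining idea named above: for each anchor in a batch, the hardest positive (farthest same-ID embedding) and hardest negative (closest different-ID embedding) are selected. The margin, the Euclidean metric, and the tensor shapes are illustrative assumptions, not the study's exact setup.

```python
# Sketch of a batch-hard triplet loss for object matching embeddings.
import torch

def batch_hard_triplet_loss(embeddings, labels, margin=0.3):
    """embeddings: (N, D) tensor; labels: (N,) integer object IDs."""
    dist = torch.cdist(embeddings, embeddings)              # pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)       # (N, N) same-ID mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=embeddings.device)

    hardest_pos = (dist * (same & ~eye)).max(dim=1).values  # farthest positive
    hardest_neg = dist.masked_fill(same, float('inf')).min(dim=1).values  # closest negative

    return torch.clamp(hardest_pos - hardest_neg + margin, min=0).mean()
```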

Natural Object Recognition for Augmented Reality Applications (증강현실 응용을 위한 자연 물체 인식)

  • Anjan, Kumar Paul;Mohammad, Khairul Islam;Min, Jae-Hong;Kim, Young-Bum;Baek, Joong-Hwan
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.11 no.2
    • /
    • pp.143-150
    • /
    • 2010
  • Markerless augmented reality systems must have the capability to recognize and match natural objects in both indoor and outdoor environments. In this paper, a novel approach is proposed for extracting features and recognizing natural objects using visual descriptors and codebooks. Since augmented reality applications are sensitive to speed of operation and real-time performance, our work mainly focuses on recognizing multi-class natural objects and reducing the computing time for classification and feature extraction. SIFT (Scale-Invariant Feature Transform) and SURF (Speeded-Up Robust Features) are used to extract features from natural objects during training and testing, and their performance is compared. We then form a visual codebook from the high-dimensional feature vectors using a clustering algorithm and recognize the objects using a naive Bayes classifier (a sketch of this pipeline follows this entry).
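
A Python sketch of the bag-of-visual-words pipeline described above: local descriptors, a k-means codebook, per-image histograms, and a naive Bayes classifier. OpenCV's SIFT, the codebook size `k`, and scikit-learn's `GaussianNB` are illustrative choices standing in for the authors' exact configuration.

```python
# Sketch: SIFT descriptors -> k-means codebook -> word histograms -> naive Bayes.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB

def bow_histogram(descriptors, codebook):
    """Normalized histogram of visual-word assignments for one image."""
    words = codebook.predict(descriptors)
    hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
    return hist / max(hist.sum(), 1)

def train_bow_classifier(train_images, train_labels, k=200):
    """Grayscale images -> descriptors -> codebook -> classifier."""
    sift = cv2.SIFT_create()
    descs = [sift.detectAndCompute(img, None)[1] for img in train_images]
    codebook = KMeans(n_clusters=k, n_init=10).fit(np.vstack(descs))
    X = np.array([bow_histogram(d, codebook) for d in descs])
    return codebook, GaussianNB().fit(X, train_labels)
```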

Recognition Direction Improvement of Target Object for Machine Vision based Automatic Inspection (머신비전 자동검사를 위한 대상객체의 인식방향성 개선)

  • Hong, Seung-Beom;Hong, Seung-Woo;Lee, Kyou-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.23 no.11
    • /
    • pp.1384-1390
    • /
    • 2019
  • This paper proposes a technological solution for improving the recognition direction of target objects in automatic machine vision inspection. The solution enables the inspection to detect the image of the inspection object regardless of the object's position and orientation, eliminating the need for a separate inspection jig and improving the automation level of the inspection process (a sketch of rotation-tolerant matching follows this entry). This study develops the technology and method as applied to the wire harness manufacturing process as the inspection object and presents the results of a real system. The implemented system was evaluated by an accredited institution; the evaluation confirmed successful measurement of accuracy, detection recognition, reproducibility, and positioning success rate, and achievement of the goals for discriminating ten kinds of colors, an inspection time within one second, and four automatic mode settings, among others.
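
The abstract does not detail the algorithm used to handle arbitrary object orientation. As one illustrative approach only, the Python sketch below rotates a template through a set of angles and keeps the best normalized cross-correlation score; the angle step and matching method are assumptions, not the paper's technique.

```python
# Sketch: orientation-tolerant template matching by scoring rotated templates.
import cv2

def best_rotated_match(image, template, angle_step=10):
    """Return (score, angle, top_left) of the best match over all rotations."""
    h, w = template.shape[:2]
    center = (w / 2, h / 2)
    best = (-1.0, 0, (0, 0))
    for angle in range(0, 360, angle_step):
        rot = cv2.getRotationMatrix2D(center, angle, 1.0)
        rotated = cv2.warpAffine(template, rot, (w, h))
        result = cv2.matchTemplate(image, rotated, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(result)
        if score > best[0]:
            best = (score, angle, loc)
    return best
```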

Real Time Classification System for Lead Pin Images (실시간 Lead Pin 영상 분류 시스템)

  • 장용훈
    • Journal of the Korea Computer Industry Society
    • /
    • v.3 no.9
    • /
    • pp.1177-1188
    • /
    • 2002
  • To classify lead pin images in real time, an image acquisition system was composed of a CCD camera, an image frame grabber (DT3153), and a PC (Pentium III), and image processing algorithms are proposed. The algorithms comprise real-time monitoring, lead pin image acquisition, image noise removal, object area detection, point detection, and pattern classification (a sketch of such a pipeline follows this entry). Raw images were acquired from lead pins using the system, and result images were obtained from the raw images by the image processing algorithms. In the experimental results, recognition was correct for 97 of 100 acceptable products and 95 of 100 defective products, a recognition rate of 96% over the 200 lead pins.

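A Python sketch along the lines of the pipeline the abstract lists: denoise, threshold to isolate the object area, then classify from a measured property. The thresholds, the area-based decision rule, and the function name are placeholders, not the paper's actual criteria.

```python
# Sketch: noise removal -> binary segmentation -> object area -> accept/defect decision.
import cv2

def classify_lead_pin(gray_img, min_area=5000, max_area=9000):
    blurred = cv2.medianBlur(gray_img, 5)                      # noise removal
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return "defective"
    area = max(cv2.contourArea(c) for c in contours)           # object area
    return "acceptable" if min_area <= area <= max_area else "defective"
```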

Development of a Real-Time Automatic Passenger Counting System using Head Detection Based on Deep Learning

  • Kim, Hyunduk;Sohn, Myoung-Kyu;Lee, Sang-Heon
    • Journal of Information Processing Systems
    • /
    • v.18 no.3
    • /
    • pp.428-442
    • /
    • 2022
  • A reliable automatic passenger counting (APC) system is a key point in transportation, related to the efficient scheduling and management of transport routes. In this study, we introduce a lightweight head detection network using deep learning that is applicable to an embedded system. Object detection algorithms using deep learning are currently very successful, but they essentially need a graphics processing unit (GPU) to run in real time. We therefore modify a Tiny-YOLOv3 network using certain techniques to speed up the proposed network and make it more accurate in a non-GPU environment. Finally, we introduce an APC system that runs in real time on embedded systems using the proposed head detection algorithm (a sketch of the entry/exit counting step follows this entry). We implemented and tested the proposed APC system on a Samsung ARTIK 710 board. The experimental results on three public head datasets show the detection accuracy and efficiency of the proposed head detection network compared with Tiny-YOLOv3. Moreover, to test the proposed APC system, we measured the accuracy and recognition speed over 50 instances of entering and 50 instances of exiting. These experiments showed 99% accuracy and a 0.041-second recognition speed even though only the CPU was used.
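
A small Python sketch of how entries and exits could be counted from tracked head positions by checking when a track crosses a virtual door line. The line-crossing rule, the coordinate convention, and the example track are assumptions; the paper does not spell out its counting logic.

```python
# Sketch: count 'in'/'out' events when a tracked head's y-coordinate crosses a line.
def update_counts(prev_y, curr_y, line_y, counts):
    """counts is a dict with 'in' and 'out'; y is assumed to grow toward the inside."""
    if prev_y < line_y <= curr_y:
        counts["in"] += 1       # crossed the line moving inward
    elif curr_y <= line_y < prev_y:
        counts["out"] += 1      # crossed the line moving outward
    return counts

counts = {"in": 0, "out": 0}
track = [100, 140, 190, 240]            # head center y over consecutive frames
for prev, curr in zip(track, track[1:]):
    update_counts(prev, curr, line_y=200, counts=counts)
print(counts)                           # {'in': 1, 'out': 0}
```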

Real-time Identification of Traffic Light and Road Sign for the Next Generation Video-Based Navigation System (차세대 실감 내비게이션을 위한 실시간 신호등 및 표지판 객체 인식)

  • Kim, Yong-Kwon;Lee, Ki-Sung;Cho, Seong-Ik;Park, Jeong-Ho;Choi, Kyoung-Ho
    • Journal of Korea Spatial Information System Society
    • /
    • v.10 no.2
    • /
    • pp.13-24
    • /
    • 2008
  • Next-generation video-based car navigation is being researched to overcome the drawbacks of existing 2D-based navigation and to provide various services for safe driving. The components of such a navigation system include a road object database, an identification module for road lanes, and a crossroad identification module, among others. In this paper, we propose a traffic light and road sign recognition method that can be effectively exploited for crossroad recognition in video-based car navigation systems. The method uses object color information and other spatial features in the video image (a sketch of color-based candidate detection follows this entry). The results show an average recognition rate of 90% at distances of 30-60 m for traffic lights and 97% at 40-90 m for road signs. The algorithm also achieves a processing time of 46 ms/frame, which indicates its suitability for real-time processing.

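A Python sketch of a color-based candidate detection step in the spirit of the color-information cue described above, using OpenCV HSV thresholding for red traffic-light regions. The HSV ranges and morphology kernel are rough placeholders, not the paper's calibrated values.

```python
# Sketch: HSV thresholding to propose red traffic-light candidate regions.
import cv2
import numpy as np

def red_light_candidates(bgr_frame):
    """Return contours of regions whose color falls in a rough 'red' HSV range."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    lower = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255))     # red wraps around
    upper = cv2.inRange(hsv, (170, 120, 120), (180, 255, 255))  # the hue circle
    mask = cv2.bitwise_or(lower, upper)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return contours
```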