• Title/Summary/Keyword: Camera-based Recognition


Object Detection based on Mask R-CNN from Infrared Camera (적외선 카메라 영상에서의 마스크 R-CNN기반 발열객체검출)

  • Song, Hyun Chul;Kang, Min-Sik;Kim, Tae-Eun
    • Journal of Digital Contents Society / v.19 no.6 / pp.1213-1218 / 2018
  • The recently introduced Mask R-CNN is a conceptually simple, flexible, and general framework for instance segmentation of objects. In this paper, we propose an algorithm that efficiently searches for objects in an image while generating a segmentation mask of the heat-emitting part of each instance (a heating element) in a thermal image acquired from an infrared camera. Mask R-CNN extends Faster R-CNN by adding a branch that predicts an object mask in parallel with the existing branch for bounding-box recognition. The mask branch adds only small overhead to Faster R-CNN, which is simple to train and fast to run, and Mask R-CNN generalizes easily to other tasks. In this research, we propose a Mask R-CNN-based infrared image detection algorithm and detect heating elements that cannot be distinguished in RGB images. Experimental results show that heat-generating objects indistinguishable in RGB images were detected correctly by Mask R-CNN.
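Mask R-CNN itself requires a trained network, but the core idea in this abstract, isolating heat-generating regions of a thermal image as per-instance masks, can be sketched with a plain threshold-and-label pass. The function names and the threshold value are illustrative, not from the paper:

```python
from collections import deque

def heat_mask(image, threshold):
    """Binarize a thermal image: pixels at or above `threshold` count as hot."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def heat_blobs(mask):
    """Label 4-connected hot regions; returns one pixel-coordinate set per blob."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                blob, queue = set(), deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    blob.add((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                blobs.append(blob)
    return blobs

# A toy 4x4 thermal frame with two separate hot regions:
thermal = [
    [20, 21, 22, 20],
    [20, 80, 85, 20],
    [20, 82, 84, 20],
    [20, 20, 20, 55],
]
mask = heat_mask(thermal, 50)
print(len(heat_blobs(mask)))  # 2
```

In the actual pipeline the mask branch of Mask R-CNN produces per-instance masks; this sketch only conveys the notion of a per-instance "heat mask".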

Measurement of Target Objects Based on Recognition of Curvature and Plane Surfaces using a Single Slit Beam Projection (슬릿광 투영법을 이용한 곡면과 평면의 식별에 의한 대상물체의 계측)

  • Choi, Yong-Woon;Kim, Young-Bok
    • Journal of Institute of Control, Robotics and Systems / v.5 no.5 / pp.568-576 / 1999
  • Using a laser sheet-beam projector combined with a CCD camera, an efficient technique for recognizing complex curved and planar surfaces has been demonstrated for mobile robot navigation. In general, indoor obstacles intersected by the slit-ray plane appear in the camera image as segments of an elliptical arc and a line. The robot can move around an obstacle in front of it by recognizing the original shape of each segment from differential coefficients computed with the least-squares method. In this technique, the imaged pixels of each segment, particularly the elliptical arc, are converted into the corresponding circular arc in real-world coordinates, which makes the image processing for position and radius measurement more feasible than the conventional approach based on direct elliptical-arc analysis. Advantages over the direct elliptical case include 1) higher measurement accuracy and shorter processing time, because the circular-arc process reduces the number of shape-specifying parameters, and 2) no complicating factors such as the tilt of the elliptical-arc axis in the image plane, so the column position and radius can be found regardless of the camera location. These properties are essential for mobile robot applications. The technique yields an error of less than 2 cm for a 28.5 cm-radius column located 70-250 cm from the robot, which is sufficient accuracy for navigating a cleaning robot in indoor environments.
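Converting an imaged elliptical arc into a real-world circular arc ends in fitting a circle's center and radius to the recovered points. A standard least-squares circle fit (the Kåsa method, used here as a stand-in for the paper's unspecified least-squares formulation) can be written in a few lines:

```python
import math

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fit_circle(points):
    """Kasa fit: model x^2 + y^2 = A*x + B*y + C, solve the normal equations,
    then recover center (A/2, B/2) and radius sqrt(C + cx^2 + cy^2)."""
    n = float(len(points))
    sx  = sum(x for x, _ in points);   sy  = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); syy = sum(y * y for _, y in points)
    sxy = sum(x * y for x, y in points)
    sz  = sum(x * x + y * y for x, y in points)
    sxz = sum(x * (x * x + y * y) for x, y in points)
    syz = sum(y * (x * x + y * y) for x, y in points)
    M = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    b = [sxz, syz, sz]
    d = det3(M)
    sol = []
    for i in range(3):  # Cramer's rule: replace column i with b
        Mi = [row[:] for row in M]
        for r in range(3):
            Mi[r][i] = b[r]
        sol.append(det3(Mi) / d)
    A, B, C = sol
    cx, cy = A / 2.0, B / 2.0
    return (cx, cy), math.sqrt(C + cx * cx + cy * cy)

# Points sampled on a circle of radius 28.5 centered at (3.0, 4.0):
pts = [(3.0 + 28.5 * math.cos(t), 4.0 + 28.5 * math.sin(t))
       for t in (0.0, 0.8, 1.6, 2.4)]
(cx, cy), r = fit_circle(pts)
print(round(r, 1))  # 28.5
```

For noise-free points on a circle the fit is exact; with noisy slit-beam pixels it minimizes an algebraic residual, which is what makes it fast compared with direct ellipse fitting.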


Tracking of Walking Human Based on Position Uncertainty of Dynamic Vision Sensor of Quadcopter UAV (UAV기반 동적영상센서의 위치불확실성을 통한 보행자 추정)

  • Lee, Junghyun;Jin, Taeseok
    • Journal of Institute of Control, Robotics and Systems / v.22 no.1 / pp.24-30 / 2016
  • The accuracy of small, low-cost CCD cameras is insufficient to provide data for precisely tracking unmanned aerial vehicles (UAVs). This study shows how a quadrotor UAV can hover over a tracked human target using data from a CCD camera rather than imprecise GPS data. To realize this, quadcopter UAVs need to recognize their position and posture in both known and unknown environments, and their localization must occur naturally. Estimating position in the presence of uncertainty is one of the most important problems for quadcopter hovering. In this paper, we describe a method for determining the altitude of a quadcopter UAV using image information of a moving object such as a walking human. The method combines the position observed from GPS sensors with the position estimated from images captured by a fixed camera to localize the UAV. Using the a priori known path of the quadcopter in world coordinates and a perspective camera model, we derive geometric constraint equations relating the image-frame coordinates of the moving object to the estimated altitude of the quadcopter. Since the equations are based on geometric constraints, measurement error is always present; the proposed method therefore uses the error between the observed and estimated image coordinates to localize the quadcopter, applying a Kalman filter scheme. Its performance is verified by computer simulation and experiments.
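The Kalman filter scheme mentioned for fusing observed and estimated positions can be illustrated with a scalar (one-state) filter. The constant-altitude model, the noise values, and the measurements below are invented for illustration and are not the paper's system model:

```python
def kalman_step(x, P, z, Q=0.01, R=1.0):
    """One predict/update cycle of a 1-D Kalman filter.
    x, P: prior state (altitude, m) and its variance; z: new measurement.
    Q: process noise, R: measurement noise (illustrative values)."""
    # Predict: constant-altitude model, so only the variance grows.
    P = P + Q
    # Update: blend prediction and measurement by the Kalman gain.
    K = P / (P + R)
    x = x + K * (z - x)
    P = (1 - K) * P
    return x, P

# Fuse noisy altitude measurements scattered around a true altitude of 5.0 m.
measurements = [5.3, 4.8, 5.1, 4.9, 5.2, 5.0]
x, P = 0.0, 100.0  # vague initial guess with large variance
for z in measurements:
    x, P = kalman_step(x, P, z)
print(round(x, 2))  # close to 5.0
```

The real system is multi-state (position, altitude, image-coordinate error terms), but each step has this same predict/gain/update shape.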

Fall Detection Based on 2-Stacked Bi-LSTM and Human-Skeleton Keypoints of RGBD Camera (RGBD 카메라 기반의 Human-Skeleton Keypoints와 2-Stacked Bi-LSTM 모델을 이용한 낙상 탐지)

  • Shin, Byung Geun;Kim, Uung Ho;Lee, Sang Woo;Yang, Jae Young;Kim, Wongyum
    • KIPS Transactions on Software and Data Engineering / v.10 no.11 / pp.491-500 / 2021
  • In this study, we propose a method for detecting falls using Human-Skeleton Keypoints from an MS Kinect v2 RGBD camera and a 2-Stacked Bi-LSTM model. In previous studies, skeletal information was extracted from RGB images with a deep learning model such as OpenPose, and recognition was then performed with a recurrent neural network model such as an LSTM or GRU. The proposed method instead receives skeletal information directly from the camera, extracts two time-series features, acceleration and distance, and then recognizes fall behavior with the 2-Stacked Bi-LSTM model. A central joint was computed from major joints such as the shoulder, spine, and pelvis, and its movement acceleration and distance from the floor were proposed as features. The extracted features were compared across models such as Stacked LSTM and Bi-LSTM, and experiments demonstrated improved detection performance over existing GRU- and LSTM-based studies.
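The central-joint features described here, movement acceleration and distance from the floor, can be sketched as follows. The joint layout, the z-as-height coordinate convention, and the frame rate are assumptions for illustration, not taken from the paper:

```python
def central_joint(shoulder, spine, pelvis):
    """Mean of the three major joints' (x, y, z) positions, in meters."""
    return tuple(sum(c) / 3.0 for c in zip(shoulder, spine, pelvis))

def features(frames, dt):
    """Per-frame (acceleration magnitude, height above floor) for the
    central joint. The floor is assumed at z = 0; acceleration is taken
    from second finite differences over the frame interval dt."""
    centers = [central_joint(*f) for f in frames]
    feats = []
    for i in range(1, len(centers) - 1):
        acc = [(centers[i+1][k] - 2 * centers[i][k] + centers[i-1][k]) / dt ** 2
               for k in range(3)]
        mag = sum(a * a for a in acc) ** 0.5
        feats.append((mag, centers[i][2]))
    return feats

# Three frames of a stationary subject: shoulder, spine, pelvis (x, y, z).
frame = [(0.0, 0.0, 1.4), (0.0, 0.0, 1.0), (0.0, 0.0, 0.9)]
feats = features([frame, frame, frame], dt=1 / 30)
print(feats)  # zero acceleration, central joint about 1.1 m above the floor
```

A fall would show a spike in the acceleration feature followed by a near-zero floor distance, which is the pattern the Bi-LSTM learns from such sequences.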

A Study on Transport Robot for Autonomous Driving to a Destination Based on QR Code in an Indoor Environment (실내 환경에서 QR 코드 기반 목적지 자율주행을 위한 운반 로봇에 관한 연구)

  • Se-Jun Park
    • Journal of Platform Technology / v.11 no.2 / pp.26-38 / 2023
  • This paper studies a transport robot capable of driving autonomously to a destination using QR codes in an indoor environment. The robot was designed and built with a lidar sensor so that, while moving, it can maintain a constant distance by measuring the distance between the QR-code-recognition camera and the left and right walls. For the robot's location information, the QR code image was enlarged with Lanczos resampling interpolation, binarized with the Otsu algorithm, and then detected and decoded using the Zbar library. QR code recognition experiments were performed while varying the QR code size and the robot's traveling speed, with the robot's camera position and the height of the QR code fixed at 192 cm. When the QR code was 9 cm × 9 cm, the recognition rate was 99.7%, and it was nearly 100% when the traveling speed was below about 0.5 m/s. Based on this recognition rate, autonomous-driving experiments without obstacles were conducted for a destination reachable by going straight only and for one requiring both straight driving and turning. When only straight driving was needed, the robot reached the destination quickly because little position correction was required; when the route included a turn, arrival was relatively delayed by the need for position correction. The experiments showed that the robot reached its destination fairly accurately despite slight positional errors while driving, confirming the applicability of a QR-code-based self-driving delivery robot.
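Of the decoding pipeline described (Lanczos enlargement, Otsu binarization, Zbar decoding), the Otsu step is self-contained enough to sketch without external libraries. The pixel values below are invented to mimic dark QR modules on a light background:

```python
def otsu_threshold(gray):
    """Otsu's method: choose the threshold that maximizes between-class
    variance over a flat list of 0-255 gray levels."""
    hist = [0] * 256
    for px in gray:
        hist[px] += 1
    total = len(gray)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(256):
        w0 += hist[t]          # weight of the dark class
        if w0 == 0:
            continue
        w1 = total - w0        # weight of the light class
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0         # mean of the dark class
        m1 = (total_sum - sum0) / w1  # mean of the light class
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# A bimodal "image": dark QR modules (~30) on a light background (~200).
pixels = [30] * 40 + [35] * 10 + [195] * 30 + [200] * 20
t = otsu_threshold(pixels)
binary = [0 if p <= t else 255 for p in pixels]
```

The resulting binary image is what a decoder such as Zbar would consume; Otsu works well here precisely because a QR code gives a strongly bimodal histogram.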


Human Gesture Recognition Technology Based on User Experience for Multimedia Contents Control (멀티미디어 콘텐츠 제어를 위한 사용자 경험 기반 동작 인식 기술)

  • Kim, Yun-Sik;Park, Sang-Yun;Ok, Soo-Yol;Lee, Suk-Hwan;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.15 no.10 / pp.1196-1204 / 2012
  • In this paper, a series of algorithms is proposed for controlling various kinds of multimedia contents and realizing human-computer interaction with a single input device. NUI-based human gesture recognition is presented first. Since the image obtained from the camera is not suitable for direct processing, it is transformed into the YCbCr color space, and morphological processing is used to remove unwanted noise. Boundary energy and depth information are extracted for hand detection. From the detected hand image, the PCA algorithm is used to recognize the hand posture, while difference images and the moment method are used to locate the hand centroid and extract the trajectory of hand movement. Eight direction codes are defined to quantize the gesture trajectory and determine its symbol values, and an HMM algorithm then recognizes the hand gesture from those symbols. With this series of methods, multimedia contents can be controlled through human gesture recognition. In extensive experiments, the proposed algorithms performed satisfactorily: the hand detection rate reached 94.25%, the gesture recognition rate exceeded 92.6%, the hand posture recognition rate reached 85.86%, and the face detection rate reached 89.58%. With these results, many kinds of multimedia contents on a computer, such as a video player, MP3 player, and e-book, can be controlled effectively.
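The eight direction codes used to quantize the gesture trajectory before HMM recognition can be sketched as follows. The sketch assumes a y-up coordinate convention; with image coordinates (y-down), the up/down codes would be mirrored:

```python
import math

def direction_codes(trajectory):
    """Quantize consecutive centroid displacements into 8 direction codes:
    0 = right, 1 = up-right, 2 = up, ..., 7 = down-right (45-degree bins)."""
    codes = []
    for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:]):
        angle = math.atan2(y1 - y0, x1 - x0)      # radians, -pi..pi
        codes.append(int(round(angle / (math.pi / 4))) % 8)
    return codes

# A hand centroid moving right, then up, then left:
path = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(direction_codes(path))  # [0, 2, 4]
```

The resulting symbol sequence is exactly the discrete observation stream a gesture HMM is trained and evaluated on.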

A Design on Face Recognition System Based on pRBFNNs by Obtaining Real Time Image (실시간 이미지 획득을 통한 pRBFNNs 기반 얼굴인식 시스템 설계)

  • Oh, Sung-Kwun;Seok, Jin-Wook;Kim, Ki-Sang;Kim, Hyun-Ki
    • Journal of Institute of Control, Robotics and Systems / v.16 no.12 / pp.1150-1158 / 2010
  • In this study, Polynomial-based Radial Basis Function Neural Networks (pRBFNNs) are proposed as the recognition part of an overall face recognition system consisting of a preprocessing part and a recognition part. The design methodology and procedure of the proposed pRBFNNs are presented as a solution to a high-dimensional pattern recognition problem. First, in the preprocessing part, a CCD camera acquires picture frames in real time, and histogram equalization partially enhances images distorted by natural or artificial illumination. The AdaBoost algorithm proposed by Viola and Jones is exploited to separate the facial image area from non-facial areas, and PCA is used as the feature-extraction algorithm to reduce the dimensionality of the high-dimensional facial image data. Second, pRBFNNs identify each person's ID by recognizing their unique pattern. The proposed pRBFNN architecture consists of three functional modules, the condition part, the conclusion part, and the inference part, expressed as fuzzy rules in 'If-then' format. In the condition part, the input space is partitioned with Fuzzy C-Means clustering. In the conclusion part, the connection weights of the pRBFNNs are represented as three kinds of polynomials, constant, linear, and quadratic, whose coefficients are identified by back-propagation using the gradient-descent method. The output of the pRBFNN model is obtained by fuzzy inference in the inference part. The essential design parameters of the networks, including the learning rate, momentum coefficient, and fuzzification coefficient, are optimized by Particle Swarm Optimization. The proposed pRBFNNs are applied to a real-time face recognition system and evaluated in terms of output performance and recognition rate.
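Setting aside the FCM partitioning, gradient-descent coefficient learning, and PSO tuning, the inference step of a rule-based RBF network of this kind reduces to a membership-weighted average of local polynomials. The rules and values below are invented for illustration:

```python
import math

def gaussian_membership(x, center, sigma):
    """Gaussian activation of one rule's condition part."""
    d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-d2 / (2 * sigma ** 2))

def prbfnn_output(x, rules):
    """Weighted-average fuzzy inference: each rule is (center, sigma, coeffs)
    with a linear conclusion polynomial y_i = c0 + c1*x1 + ... + cn*xn."""
    num = den = 0.0
    for center, sigma, coeffs in rules:
        w = gaussian_membership(x, center, sigma)
        local = coeffs[0] + sum(c * xi for c, xi in zip(coeffs[1:], x))
        num += w * local
        den += w
    return num / den if den else 0.0

# Two illustrative rules with constant conclusions:
rules = [
    ((0.0, 0.0), 1.0, (1.0, 0.0, 0.0)),   # near the origin -> output 1
    ((5.0, 5.0), 1.0, (3.0, 0.0, 0.0)),   # near (5, 5)     -> output 3
]
out = prbfnn_output((0.0, 0.0), rules)
print(round(out, 3))  # 1.0
```

In the paper the rule centers come from FCM, the polynomial coefficients from back-propagation, and sigma-like parameters from PSO; this sketch shows only how a trained model produces an output.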

The Identifier Recognition from Shipping Container Image by Using Contour Tracking and Self-Generation Supervised Learning Algorithm Based on Enhanced ART1 (윤곽선 추적과 개선된 ART1 기반 자가 생성 지도 학습 알고리즘을 이용한 운송 컨테이너 영상의 식별자 인식)

  • 김광백
    • Journal of Intelligence and Information Systems / v.9 no.3 / pp.65-79 / 2003
  • In general, extracting and recognizing container identifiers is difficult because the scale and location of the identifiers are not fixed, and because the image captured by the camera contains noise. In this paper, we propose a method for automatically detecting edges using a Canny edge mask. After edge detection, identifier regions are extracted from the detected edge information, and each identifier within those regions is then extracted using a contour tracking algorithm. A self-generation supervised learning algorithm, which combines an enhanced ART1 with a supervised learning method, is proposed for recognizing the identifiers. The proposed method was applied to container images. The identifier extraction rate obtained with the contour tracking algorithm was better than that of the histogram method, and the recognition rate of the self-generation supervised learning method based on the enhanced ART1 was improved much more than that of the method based on conventional ART1.
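The histogram method that the contour tracking algorithm is compared against is typically a projection-profile segmentation: characters are split wherever the vertical projection of the binary image drops to zero. A minimal sketch of that baseline (the toy image and function name are illustrative):

```python
def split_columns(binary):
    """Split a binary image (rows of 0/1) into character regions using the
    vertical projection histogram: each run of non-empty columns is one
    candidate character, returned as a half-open (start, end) column range."""
    width = len(binary[0])
    col_sums = [sum(row[x] for row in binary) for x in range(width)]
    regions, start = [], None
    for x, s in enumerate(col_sums):
        if s and start is None:
            start = x                      # entering a character run
        elif not s and start is not None:
            regions.append((start, x))     # leaving a character run
            start = None
    if start is not None:
        regions.append((start, width))
    return regions

# Two one-pixel-wide "characters" separated by a two-column gap:
img = [
    [1, 0, 0, 1, 1],
    [1, 0, 0, 1, 1],
]
print(split_columns(img))  # [(0, 1), (3, 5)]
```

Contour tracking outperforms this baseline when identifiers touch, tilt, or sit on noisy backgrounds, since the projection profile then has no clean zero-valleys to split on.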


Design of a Motion Recognition System for the Realistic Biathlon Simulator System (실감형 바이애슬론 시뮬레이터를 위한 동작 인식 시스템 설계)

  • Kim, Cheol-min;Lee, Min-tae
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.10a / pp.396-399 / 2018
  • In this paper, we propose a motion recognition system for identifying and interacting with the simulator used in a realistic biathlon simulator. The proposed system aims to correct motion data corrupted by obstacles, overlapping joints, or fast movement while recognizing the various motion patterns of the biathlon. We constructed a multi-camera motion recognition system based on IoT devices and applied a skeletal-area interpolation method for normal motion identification, designing the system to increase the motion recognition rate in the biathlon. The proposed system can also be applied to the analysis of snow-sports motion and will be used to develop a realistic biathlon simulator system.
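A skeletal interpolation of the kind mentioned, filling joint samples lost to occlusion or fast motion, can be sketched as per-joint linear interpolation between the nearest observed frames. The (x, y) track format and the use of None for missing samples are assumptions for illustration:

```python
def interpolate_joint(track):
    """Fill occluded samples (None) in one joint's (x, y) track by linear
    interpolation between the nearest observed frames on each side."""
    known = [i for i, p in enumerate(track) if p is not None]
    out = list(track)
    for i, p in enumerate(track):
        if p is not None:
            continue
        prev = max((k for k in known if k < i), default=None)
        nxt = min((k for k in known if k > i), default=None)
        if prev is None or nxt is None:
            continue  # cannot interpolate past the first/last observation
        t = (i - prev) / (nxt - prev)
        out[i] = tuple(a + t * (b - a) for a, b in zip(track[prev], track[nxt]))
    return out

# A joint observed at frames 0 and 3, occluded in between:
track = [(0.0, 0.0), None, None, (3.0, 6.0)]
print(interpolate_joint(track))
```

A multi-camera setup reduces how often this fallback is needed, since a joint hidden from one camera is often visible to another; interpolation covers the frames where all views fail.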


A Study on Tangible Gesture Interface Prototype Development of the Quiz Game (퀴즈게임의 체감형 제스처 인터페이스 프로토타입 개발)

  • Ahn, Jung-Ho;Ko, Jae-Pil
    • Journal of Digital Contents Society / v.13 no.2 / pp.235-245 / 2012
  • This paper introduces quiz game contents based on a gesture interface. We analyzed off-line quiz games, extracted their constituent components, and digitalized them so that the proposed game contents can substitute for off-line quiz games. We used a Kinect camera to obtain depth images and performed preprocessing, including vertical human segmentation, head detection and tracking, and hand detection, followed by gesture recognition for the hand-up, vertical hand movement, fist shape, pass, and fist-and-attraction gestures. In particular, we designed the interface gestures as metaphors for natural real-world gestures so that users can tangibly experience the abstract concepts of movement, selection, and confirmation. Compared to our previous work, we added a card compensation process for completeness, improved the vertical hand movement and fist-shape recognition methods for answer selection, and present an organized test to measure recognition performance. The implemented quiz application was tested in real time and showed very satisfactory gesture recognition results.
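A vertical-hand-movement gesture of the kind used for answer selection can be sketched as a net-displacement test on the hand centroid's image y-coordinate. The threshold value and the y-down image convention are assumptions, not taken from the paper:

```python
def vertical_gesture(ys, threshold):
    """Classify a hand-centroid height series as 'up', 'down', or 'none'
    from its net vertical displacement. Image y grows downward, so a
    decreasing y means the hand moved up."""
    dy = ys[-1] - ys[0]
    if dy <= -threshold:
        return "up"
    if dy >= threshold:
        return "down"
    return "none"

# Hand centroid rising across four frames (y in pixels):
print(vertical_gesture([200, 180, 150, 120], threshold=50))  # up
```

In the actual system such a rule would run on the tracked hand from the depth-segmented silhouette, gated by the fist-shape recognizer so that scrolling only happens while the hand is closed.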