• Title/Summary/Keyword: Camera-based Recognition


Neural Network Based Camera Calibration and 2-D Range Finding (신경회로망을 이용한 카메라 교정과 2차원 거리 측정에 관한 연구)

  • 정우태;고국원;조형석
    • Proceedings of the Korean Society of Precision Engineering Conference / 1994.10a / pp.510-514 / 1994
  • This paper deals with the application of a neural network to camera calibration with a wide-angle lens and to 2-D range finding. A wide-angle lens offers a wide view angle for mobile environment recognition and robot eye-in-hand systems, but it suffers from severe radial distortion. A multilayer neural network is used to calibrate the camera while accounting for lens distortion, and it is trained by the error back-propagation method. The MLP maps between the camera image plane and the plane defined by the structured light. In the experiments, camera calibration was carried out with a calibration chart printed on a laser printer at 300 d.p.i. resolution. A high-distortion lens, a COSMICAR 4.2 mm, was used to verify that the neural network can effectively compensate for camera distortion. The 2-D range to several objects was measured with a laser range-finding system composed of a camera, a frame grabber, and a laser structured light. The performance of the range-finding system was evaluated through experiments and analysis of the results. (See the illustrative sketch below.)

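The following is a minimal, hypothetical sketch of the idea described above, not the authors' code: a multilayer perceptron is trained by back-propagation to map distorted pixel coordinates onto 2-D plane coordinates, absorbing the radial distortion of a wide-angle lens. The synthetic chart points, distortion coefficient, and network size are all assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic ground-truth points on the calibration plane (millimetres).
plane_xy = rng.uniform(-100, 100, size=(500, 2))

# Toy projection with strong radial distortion (focal length and k1 are made up).
f, k1 = 400.0, 1.5e-5
r2 = np.sum(plane_xy ** 2, axis=1, keepdims=True)
pixels = f * plane_xy * (1 + k1 * r2) / 1000.0 + 320.0

# The MLP learns the inverse mapping: distorted pixel coordinates -> plane coordinates.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0),
)
model.fit(pixels, plane_xy)
print("mean absolute calibration error (training set, mm):",
      np.abs(model.predict(pixels) - plane_xy).mean())
```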

A Novel Algorithm for Face Recognition From Very Low Resolution Images

  • Senthilsingh, C.;Manikandan, M.
    • Journal of Electrical Engineering and Technology / v.10 no.2 / pp.659-669 / 2015
  • Face recognition is highly significant in security-oriented applications. High-resolution images offer more detail, so recognizing a face in a reasonably high-resolution image is easier than recognizing one in a very low resolution (VLR) image. This paper addresses the problem of recognizing faces from images as small as 8×8 pixels. With the spread of CCTV (closed-circuit television) and other surveillance-camera applications for security purposes, the need to overcome the shortcomings of very low resolution images has been growing. Present-day face recognition algorithms cannot provide adequate performance when applied to VLR images. Existing methods use super-resolution (SR) and relation-based super-resolution techniques to reconstruct images from very low resolution inputs. This paper uses a learning-based super-resolution method to reconstruct images from very low resolution inputs. Experimental results show that the proposed SR algorithm based on relationship learning outperforms existing algorithms on public face databases.
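
As a loose illustration of learning-based super-resolution in general (the paper's relationship-learning method is not reproduced here), the sketch below fits a ridge regression from flattened 8×8 patches to their 32×32 originals, using crops of a stock scikit-image photograph in place of face databases. All sizes and data are assumptions.

```python
import numpy as np
from skimage import data
from skimage.transform import resize
from sklearn.linear_model import Ridge

img = data.camera().astype(float) / 255.0
rng = np.random.default_rng(1)

# Build (low-resolution, high-resolution) training pairs from random crops.
hr_patches, lr_patches = [], []
for _ in range(300):
    y = rng.integers(0, img.shape[0] - 32)
    x = rng.integers(0, img.shape[1] - 32)
    hr = img[y:y + 32, x:x + 32]
    lr = resize(hr, (8, 8), anti_aliasing=True)
    hr_patches.append(hr.ravel())
    lr_patches.append(lr.ravel())

# Learn a linear LR -> HR mapping (a stand-in for the paper's learned relation).
model = Ridge(alpha=1.0).fit(np.array(lr_patches), np.array(hr_patches))

# Reconstruct a held-out 8x8 patch back to 32x32 and report the error.
test_hr = img[100:132, 100:132]
test_lr = resize(test_hr, (8, 8), anti_aliasing=True).ravel()[None, :]
recon = model.predict(test_lr).reshape(32, 32)
print("reconstruction RMSE:", np.sqrt(np.mean((recon - test_hr) ** 2)))
```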

Vanishing point-based 3D object detection method for improving traffic object recognition accuracy

  • Park, Jeong-In
    • Journal of the Korea Society of Computer and Information / v.28 no.1 / pp.93-101 / 2023
  • In this paper, we propose a method for generating a 3D bounding box for an object using vanishing points, in order to increase the accuracy of object recognition when detecting traffic objects with a video camera. The 3D bounding box generation algorithm is applied when vehicles captured by a traffic video camera are detected using artificial intelligence. The vertical vanishing point (VP1) and the horizontal vanishing point (VP2) are derived by analyzing the camera installation angle and the direction of the captured image, and based on these, the moving objects in the video under analysis are localized. With this algorithm, object information such as the location, type, and size of each detected object is easy to obtain, and for moving objects such as cars, tracking yields each object's location, coordinates, movement speed, and direction. When applied to actual roads, tracking improved by 10%; in particular, the recognition rate and tracking in shaded areas (small vehicle parts hidden by large cars) improved by 100%, and the accuracy of traffic data analysis was improved.
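
A hedged geometric sketch of the vanishing point construction: under an assumed intrinsic matrix K and a pure downward tilt R, the vanishing point of any world direction d is the dehomogenised image of K·R·d. Which direction plays the role of the paper's VP1 or VP2 depends on the installation geometry; the directions and numbers below are illustrative only.

```python
import numpy as np

def rotation_x(pitch_deg):
    """Rotation about the camera x-axis, modelling a downward-tilted traffic camera."""
    a = np.radians(pitch_deg)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(a), -np.sin(a)],
                     [0.0, np.sin(a), np.cos(a)]])

K = np.array([[1000.0, 0.0, 960.0],     # assumed intrinsics for a 1920x1080 camera
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
R = rotation_x(-15.0)                    # assumed 15-degree downward installation tilt

def vanishing_point(direction):
    """Image point toward which world lines with this direction converge."""
    p = K @ R @ np.asarray(direction, dtype=float)
    return p[:2] / p[2]

vp_vertical = vanishing_point([0.0, 1.0, 0.0])  # vertical scene lines (VP1 role assumed)
vp_road = vanishing_point([0.0, 0.0, 1.0])      # lines along the road (VP2 role assumed)
print("vertical VP:", vp_vertical, " road-direction VP:", vp_road)
```

A detector's 2-D box could then be extruded toward these two points to obtain a 3-D bounding box, which is the construction the abstract describes.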

Active Contours Level Set Based Still Human Body Segmentation from Depth Images For Video-based Activity Recognition

  • Siddiqi, Muhammad Hameed;Khan, Adil Mehmood;Lee, Seok-Won
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.11 / pp.2839-2852 / 2013
  • Context-awareness is an essential part of ubiquitous computing, and over the past decade video-based activity recognition (VAR) has emerged as an important component for identifying a user's context for automatic service delivery in context-aware applications. The accuracy of VAR depends significantly on the performance of the employed human body segmentation algorithm. Previous human body segmentation algorithms often rely on modeling of the human body, which normally requires a large amount of training data and cannot competently handle changes over time. Recently, active contours have emerged as a successful segmentation technique for still images. In this paper, an active contour model integrating the Chan-Vese (CV) energy and the Bhattacharyya distance function is adapted for automatic human body segmentation from depth cameras for VAR. The proposed technique not only outperforms existing segmentation methods in normal scenarios but is also more robust to noise. Moreover, it is unsupervised, i.e., no prior human body model is needed. The proposed segmentation technique was compared against the conventional CV active contour (AC) model using a depth camera and achieved much better performance.
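
As a minimal sketch of region-based level-set segmentation on a depth frame, the snippet below runs scikit-image's morphological Chan-Vese implementation on a synthetic "depth image". The paper's additional Bhattacharyya-distance term is not reproduced, and every numeric value here is an assumption.

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

# Fake depth frame: a nearer (lower-depth) human-like blob on a far background.
depth = np.full((120, 160), 3000.0)                     # background at ~3 m
yy, xx = np.mgrid[0:120, 0:160]
depth[((yy - 60) ** 2 / 1600 + (xx - 80) ** 2 / 400) < 1.0] = 1500.0   # subject
depth += np.random.default_rng(2).normal(0, 40, depth.shape)           # sensor noise

# Evolve the level set for a fixed number of iterations (passed positionally
# to stay compatible across scikit-image versions).
mask = morphological_chan_vese(depth, 100, init_level_set="checkerboard",
                               smoothing=2)
print("pixels assigned to one of the two regions:", int(mask.sum()))
```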

Recognition of Car License Plate Using Geometric Information from Portable Device Image (휴대단말기 영상에서의 기하학적 정보를 이용한 차량 번호판 인식)

  • Yeom, Hee-Jung;Eun, Sung-Jong;WhangBo, Taeg-Keun
    • The Journal of the Korea Contents Association / v.10 no.10 / pp.1-8 / 2010
  • Recently, character recognition using portable device camera images has been actively researched at home and abroad, but its practical use remains limited because of accuracy and processing-time problems. In this paper, we propose a license plate recognition method based on geometric information from portable device camera images. After pre-processing that accounts for the viewing angle difference, contrast enhancement, and the low resolution of portable device camera images, the characters in the extracted license plate region are recognized using chain codes and thickness information obtained from the cumulative edge projection. The proposed algorithm achieves effective and accurate recognition under varying lighting while reducing processing time, and the character recognition success rate in our experiments was 95%. In future work, we will study license plate recognition for long-distance images and images with motion blur.
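
The sketch below illustrates only one ingredient named in the abstract, the cumulative edge projection used to separate characters in a binarised plate strip; the chain-code recognition step is not reproduced. The tiny synthetic "plate" stands in for a real camera crop.

```python
import numpy as np

# Synthetic binarised plate strip with three fake character blobs.
plate = np.zeros((20, 60), dtype=np.uint8)
plate[4:16, 5:12] = 1
plate[4:16, 20:27] = 1
plate[4:16, 40:47] = 1

# Column-wise cumulative projection: zero columns are gaps between characters.
projection = plate.sum(axis=0)
in_char, ranges, start = False, [], 0
for col, count in enumerate(projection):
    if count > 0 and not in_char:
        in_char, start = True, col
    elif count == 0 and in_char:
        in_char = False
        ranges.append((start, col - 1))
if in_char:
    ranges.append((start, len(projection) - 1))

print("character column ranges:", ranges)   # e.g. [(5, 11), (20, 26), (40, 46)]
```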

A Novel Method for Hand Posture Recognition Based on Depth Information Descriptor

  • Xu, Wenkai;Lee, Eung-Joo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.2 / pp.763-774 / 2015
  • Hand posture recognition has had a wide range of applications in human-computer interaction and computer vision for many years. The problem arises mainly from the high dexterity of the hand and the self-occlusions created by the camera's limited view or illumination variations. To remedy these problems, a hand posture recognition method using a 3-D point cloud is proposed in this paper to explicitly exploit the 3-D information in depth maps. First, the hand region is segmented by a set of depth thresholds. Next, hand image normalization is performed to ensure that the extracted feature descriptors are scale and rotation invariant. By robustly coding and pooling 3-D facets, the proposed descriptor can effectively represent various hand postures. After that, an SVM with a Gaussian kernel function is used for posture recognition. Experimental results on a posture dataset (postures 1 to 10) captured by a Kinect sensor demonstrate the effectiveness of the proposed approach; the average recognition rate of our method is over 96%.
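
A hedged sketch of the first and last stages only: segmenting the hand as a depth band and classifying posture descriptors with a Gaussian-kernel SVM. The depth band limits and the random stand-in descriptors are assumptions, not the paper's 3-D facet features.

```python
import numpy as np
from sklearn.svm import SVC

# 1) Segment the hand as the pixels inside an assumed near-range depth band (mm).
depth = np.random.default_rng(3).uniform(400, 2000, size=(240, 320))
hand_mask = (depth > 500) & (depth < 800)
print("hand pixels:", int(hand_mask.sum()))

# 2) Classify posture descriptors with an SVM using a Gaussian (RBF) kernel.
rng = np.random.default_rng(4)
X = rng.normal(size=(200, 64))          # placeholder descriptors, not 3-D facets
y = rng.integers(0, 10, size=200)       # postures "1" to "10"
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print("training accuracy on placeholder data:", clf.score(X, y))
```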

Automatic Recognition of the Front/Back Sides and Stalk States for Mushrooms(Lentinus Edodes L.) (버섯 전후면과 꼭지부 상태의 자동 인식)

  • Hwang, H.;Lee, C.H.
    • Journal of Biosystems Engineering / v.19 no.2 / pp.124-137 / 1994
  • Visual features of a mushroom (Lentinus Edodes L.) are critical in grading and sorting, as they are for most agricultural products. Because of its complex and varied visual features, grading and sorting of mushrooms have been done manually by human experts. To realize automatic handling and grading of mushrooms in real time, a computer vision system should be utilized and efficient, robust processing of the camera-captured visual information provided. Since the visual features of a mushroom are distributed over the front and back sides, recognizing the side and the state of the stalk, including the stalk orientation, from the captured image is a primary step in the automatic task processing. In this paper, an efficient and robust recognition process identifying the front and back sides and the state of the stalk was developed, and its performance was compared with other recognition trials. First, recognition was attempted with a rule base built from experimental heuristics using quantitative features such as geometry and texture extracted from the segmented mushroom image. Then, neural-network-based recognition was performed without extracting quantitative features. As network inputs, the segmented binary image obtained from combined-type automatic thresholding was tested first, and then the gray-valued raw camera image was used directly. The state of the stalk seriously affects the measured size of the mushroom cap; when this effect is serious, the stalk should be excluded when sizing the cap. A stalk removal process followed by boundary regeneration of the cap image is also presented. The neural-network-based processing of gray-valued raw images gave successful results for our recognition task. The technology developed in this research may open a new way of quality inspection and sorting, especially for agricultural products whose visual features are fuzzy and not uniquely defined. (See the illustrative sketch below.)

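Purely as an illustration of the final approach mentioned above (a neural network fed directly with grey-level images), and not the authors' code, the sketch below trains a small MLP classifier on random stand-in images with three hypothetical classes.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
images = rng.uniform(0, 1, size=(300, 32 * 32))   # flattened 32x32 grey images (stand-ins)
labels = rng.integers(0, 3, size=300)             # hypothetical classes, e.g. front/back/stalk state

# Small MLP trained by back-propagation directly on the raw grey values.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(images, labels)
print("training accuracy (placeholder data):", clf.score(images, labels))
```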

Development of Face Recognition System based on Real-time Mini Drone Camera Images (실시간 미니드론 카메라 영상을 기반으로 한 얼굴 인식 시스템 개발)

  • Kim, Sung-Ho
    • Journal of Convergence for Information Technology / v.9 no.12 / pp.17-23 / 2019
  • In this paper, I propose a system development methodology that receives images taken by the camera attached to a mini drone in real time while controlling the drone, and recognizes and confirms the face of a specific person. OpenCV, Python-related libraries, and the drone SDK are used to develop the system. To increase the face recognition rate for a specific person in real-time drone images, a deep-learning-based facial recognition algorithm is used, in particular one based on the triplet principle. To check the performance of the system, 30 face recognition experiments based on the author's face showed a recognition rate of about 95% or higher. The results of this paper could be used to quickly find a specific person with a drone at tourist sites and festival venues.
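
A hedged sketch of such a recognition loop, not the author's system: OpenCV reads frames (a local webcam stands in for the drone stream), a Haar cascade detects faces, and a placeholder embed() compares each detection to a reference embedding with a distance threshold, in the spirit of triplet-style matching. The threshold and the embedding function are assumptions.

```python
import cv2
import numpy as np

def embed(face_img):
    """Placeholder embedding; a real system would use a deep triplet-trained model."""
    resized = cv2.resize(face_img, (32, 32)).astype(np.float32)
    return resized.ravel() / (np.linalg.norm(resized) + 1e-8)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
reference = None            # embedding of the target person, set from an enrolment image
cap = cv2.VideoCapture(0)   # 0 = local webcam; the drone SDK would supply frames instead

for _ in range(300):        # bounded loop for the sketch
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.2, 5):
        emb = embed(gray[y:y + h, x:x + w])
        if reference is not None and np.linalg.norm(emb - reference) < 0.6:
            print("target person recognised")
cap.release()
```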

Fingerprint Segmentation and Ridge Orientation Estimation with a Mobile Camera for Fingerprint Recognition (모바일 카메라를 이용한 지문인식을 위한 지문영역 추출 및 융선방향 추출 알고리즘)

  • Lee Chulhan;Lee Sanghoon;Kim Jaihie;Kim Sung-Jae
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.6 / pp.89-98 / 2005
  • Fingerprint segmentation and ridge orientation estimation algorithms for images from a mobile camera are proposed. Fingerprint images from a mobile camera are quite different from those from conventional touch-based sensors such as optical, capacitive, and thermal sensors. For example, mobile camera images are in color, and the background or non-finger regions vary greatly depending on the capture time and place. Also, the contrast between ridges and valleys in a mobile camera image is lower than in a touch-based sensor image. To segment the fingerprint region, we first detect an initial region using color and texture information. A look-up table (LUT) is used to model the color distribution of fingerprint images from manually segmented images, and frequency information is extracted to discriminate between in-focus fingerprint regions and out-of-focus background regions. From the detected initial region, a region-growing algorithm is executed to segment the final fingerprint region. For fingerprint orientation estimation, the problem is that gradient-based methods are very sensitive to outliers caused by scars and camera noise. To solve this problem, we propose a robust regression method that removes outliers iteratively and effectively. In the experiments, we evaluated the proposed fingerprint segmentation algorithm using 600 manually segmented images and compared the orientation algorithms in terms of recognition accuracy.
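
For reference, a minimal version of the standard gradient-based ridge orientation estimate that the paper's robust regression builds on (the iterative outlier removal itself is not reproduced): per block, theta = 0.5*atan2(2*Gxy, Gxx - Gyy) + pi/2. The block size and test pattern are assumptions.

```python
import numpy as np

def ridge_orientation(gray, block=16):
    """Block-wise gradient-based ridge orientation in radians."""
    gy, gx = np.gradient(gray.astype(float))
    h, w = gray.shape
    theta = np.zeros((h // block, w // block))
    for i in range(theta.shape[0]):
        for j in range(theta.shape[1]):
            sx = gx[i * block:(i + 1) * block, j * block:(j + 1) * block]
            sy = gy[i * block:(i + 1) * block, j * block:(j + 1) * block]
            gxx, gyy, gxy = np.sum(sx * sx), np.sum(sy * sy), np.sum(sx * sy)
            theta[i, j] = 0.5 * np.arctan2(2 * gxy, gxx - gyy) + np.pi / 2
    return theta

# Synthetic vertical ridges: estimated orientations should be close to 90 degrees.
test = np.sin(np.linspace(0, 20 * np.pi, 128))[None, :].repeat(128, axis=0)
print(np.degrees(ridge_orientation(test))[0, :4])
```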

Navigation of a Mobile Robot Using Hand Gesture Recognition (손 동작 인식을 이용한 이동로봇의 주행)

  • Kim, Il-Myeong;Kim, Wan-Cheol;Yun, Gyeong-Sik;Lee, Jang-Myeong
    • Journal of Institute of Control, Robotics and Systems / v.8 no.7 / pp.599-606 / 2002
  • A new method for governing the navigation of a mobile robot using hand gesture recognition is proposed, based on two procedures: one is to acquire vision information using a 2-DOF camera as a communication medium between a human and the mobile robot, and the other is to analyze the recognized hand gesture commands and control the mobile robot accordingly. In previous research, mobile robots moved passively using landmarks, beacons, etc. In this paper, to cope with various changes in the situation, a new control system that manages the dynamic navigation of the mobile robot is proposed. Moreover, without the expensive equipment or complex algorithms generally used for hand gesture recognition, a reliable hand gesture recognition system is efficiently implemented to convey human commands to the mobile robot with a few constraints.
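
As a small, hypothetical illustration of the command-mapping layer only (the gesture labels and velocity values are assumptions, not taken from the paper), recognised gestures can be translated into linear/angular velocity commands for the robot:

```python
from typing import Tuple

# Assumed gesture vocabulary mapped to (linear m/s, angular rad/s) commands.
GESTURE_TO_COMMAND = {
    "forward":  (0.3,  0.0),
    "backward": (-0.2, 0.0),
    "left":     (0.0,  0.5),
    "right":    (0.0, -0.5),
    "stop":     (0.0,  0.0),
}

def command_from_gesture(gesture: str) -> Tuple[float, float]:
    """Return a (linear, angular) velocity; unknown gestures default to stop."""
    return GESTURE_TO_COMMAND.get(gesture, (0.0, 0.0))

for g in ["forward", "left", "wave"]:
    print(g, "->", command_from_gesture(g))
```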