• Title/Summary/Keyword: Camera-based Recognition

Confidence Measure of Depth Map for Outdoor RGB+D Database (야외 RGB+D 데이터베이스 구축을 위한 깊이 영상 신뢰도 측정 기법)

  • Park, Jaekwang;Kim, Sunok;Sohn, Kwanghoon;Min, Dongbo
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.9
    • /
    • pp.1647-1658
    • /
    • 2016
  • RGB+D databases have been widely used in object recognition, object tracking, and robot control, to name a few applications. While the rapid advance of active depth sensing technology has enabled the widespread construction of indoor RGB+D databases, there are only a few outdoor RGB+D databases, largely due to the inherent limitations of active depth cameras. In this paper, we propose a novel method for building outdoor RGB+D databases. Instead of using active depth cameras such as Kinect or LIDAR, we acquire a pair of stereo images with a high-resolution stereo camera and then obtain a depth map by applying a stereo matching algorithm. To deal with the estimation errors that inevitably exist in depth maps obtained from stereo matching, we develop an approach that estimates the confidence of the depth map based on unsupervised learning. Unlike existing confidence estimation approaches, we explicitly consider the spatial correlation that may exist in the confidence map. Specifically, we focus on refining the confidence feature under the assumption that the confidence feature and the resulting confidence map are smoothly varying in the spatial domain and highly correlated with each other. Experimental results show that the proposed method outperforms existing confidence-measure-based approaches on various benchmark datasets.
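
The confidence-learning pipeline above starts from a stereo-matched depth map. The sketch below only illustrates that input stage, assuming OpenCV's SGBM matcher and a hand-crafted left-right consistency cue as a stand-in for the paper's learned, spatially refined confidence; the `createRightMatcher` call assumes the opencv-contrib package is available.

```python
# Minimal sketch, not the paper's method: disparity from OpenCV SGBM plus a
# simple left-right consistency confidence cue. The paper instead learns
# confidence features without supervision and refines them spatially.
import cv2
import numpy as np

def disparity_and_confidence(left_gray, right_gray, num_disp=128, block=11):
    matcher_l = cv2.StereoSGBM_create(minDisparity=0, numDisparities=num_disp,
                                      blockSize=block)
    matcher_r = cv2.ximgproc.createRightMatcher(matcher_l)   # needs opencv-contrib
    disp_l = matcher_l.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disp_r = matcher_r.compute(right_gray, left_gray).astype(np.float32) / 16.0

    h, w = disp_l.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    # Look up the right-view disparity at the pixel each left pixel maps to.
    x_r = np.clip((xs - disp_l).astype(np.int32), 0, w - 1)
    lr_diff = np.abs(disp_l + disp_r[np.arange(h)[:, None], x_r])
    confidence = np.exp(-lr_diff)   # close to 1 where the two views agree
    return disp_l, confidence
```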

Real-time Abnormal Behavior Analysis System Based on Pedestrian Detection and Tracking (보행자의 검출 및 추적을 기반으로 한 실시간 이상행위 분석 시스템)

  • Kim, Dohun;Park, Sanghyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.05a
    • /
    • pp.25-27
    • /
    • 2021
  • With the recent development of deep learning technology, computer vision-based AI techniques have been studied for analyzing the abnormal behavior of objects in video acquired from CCTV cameras. Surveillance cameras are often installed in dangerous or security-sensitive areas for crime prevention and monitoring. For this reason, companies are conducting studies to detect major situations such as intrusion, loitering, falls, and assault in the surveillance camera environment. In this paper, we propose a real-time abnormal behavior analysis algorithm based on object detection and tracking.

  • PDF
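
The entry above describes tracking-by-detection followed by rule-based abnormal-behavior analysis, but gives no implementation details. The following minimal sketch rests on assumptions: detections come from any CNN pedestrian detector (left as a placeholder), tracks are associated greedily by IoU, and loitering is flagged purely by dwell time.

```python
# Minimal tracking-by-detection sketch (assumptions, not the paper's system).
import itertools

def iou(a, b):
    ax1, ay1, ax2, ay2 = a; bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

class Track:
    _ids = itertools.count()
    def __init__(self, box, frame_idx):
        self.id = next(self._ids)
        self.box = box
        self.first_seen = frame_idx

def update_tracks(tracks, detections, frame_idx, fps=30, loiter_sec=60, iou_thr=0.3):
    """Greedy IoU association; returns ids of tracks exceeding the dwell limit."""
    unmatched = list(detections)          # boxes from any pedestrian detector
    for t in tracks:
        if not unmatched:
            break
        best = max(unmatched, key=lambda d: iou(t.box, d))
        if iou(t.box, best) >= iou_thr:
            t.box = best
            unmatched.remove(best)
    tracks.extend(Track(d, frame_idx) for d in unmatched)
    return [t.id for t in tracks if (frame_idx - t.first_seen) / fps > loiter_sec]
```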

Object Detection and Localization on Map using Multiple Camera and Lidar Point Cloud

  • Pansipansi, Leonardo John;Jang, Minseok;Lee, Yonsik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.422-424
    • /
    • 2021
  • In this paper, we present an approach that fuses multiple RGB cameras, used for visual object recognition with a deep convolutional neural network, with 3D Light Detection and Ranging (LiDAR) to observe the environment and estimate object distance and position in a 3D point cloud map. The goal of multi-camera perception is to extract the crucial static and dynamic objects around the autonomous vehicle (AV), especially in blind spots, which helps the AV navigate toward its goal. Running object detection on numerous cameras can slow down real-time processing, so the convolutional neural network chosen to address this problem must also suit the capacity of the hardware. The localization of the classified, detected objects is derived from the 3D point cloud environment. The LiDAR point cloud data are first parsed, and the algorithm used is based on a 3D Euclidean clustering method, which localizes the objects accurately. We evaluated the method on our own dataset collected with a VLP-16 LiDAR and multiple cameras, and the results demonstrate the feasibility of the method and the multi-sensor fusion strategy.

  • PDF
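
The localization step in the abstract above relies on 3D Euclidean clustering of the parsed LiDAR point cloud. Below is a minimal sketch of that step, similar in spirit to PCL's EuclideanClusterExtraction; the tolerance and minimum cluster size are illustrative values, not the authors' settings.

```python
# Minimal 3D Euclidean clustering sketch: points closer than `tol` are grouped
# into one cluster via a KD-tree region query.
import numpy as np
from scipy.spatial import cKDTree

def euclidean_cluster(points, tol=0.5, min_size=10):
    """points: (N, 3) array of LiDAR returns. Returns a list of index arrays."""
    tree = cKDTree(points)
    unvisited = np.ones(len(points), dtype=bool)
    clusters = []
    for seed in range(len(points)):
        if not unvisited[seed]:
            continue
        queue, members = [seed], []
        unvisited[seed] = False
        while queue:
            idx = queue.pop()
            members.append(idx)
            for nb in tree.query_ball_point(points[idx], r=tol):
                if unvisited[nb]:
                    unvisited[nb] = False
                    queue.append(nb)
        if len(members) >= min_size:       # discard tiny clusters as noise
            clusters.append(np.array(members))
    return clusters
```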

Comparison of estimating vegetation index for outdoor free-range pig production using convolutional neural networks

  • Sang-Hyon OH;Hee-Mun Park;Jin-Hyun Park
    • Journal of Animal Science and Technology
    • /
    • v.65 no.6
    • /
    • pp.1254-1269
    • /
    • 2023
  • This study aims to predict the change in corn share due to the grazing of 20 gestating sows in a mature corn field, using images taken with a camera-equipped unmanned aerial vehicle (UAV). Deep learning based on convolutional neural networks (CNNs) has proven its performance in various areas, and has demonstrated high recognition accuracy and fast detection in agricultural applications such as pest and disease diagnosis and prediction. A large amount of data is required to train a CNN effectively, but since a UAV captures only a limited number of images, we propose a data augmentation method that can effectively increase the amount of data. Most occupancy prediction approaches design a CNN-based object detector and then count the number of recognized objects or calculate the number of pixels occupied by the objects in an image. These methods require complex occupancy rate calculations, and their accuracy depends on whether the object features of interest are visible in the image. In this study, however, the CNN is not treated as a corn object detection and classification problem but as a function approximation and regression problem, so that the occupancy rate of corn in an image is represented directly by the CNN output. The proposed method effectively estimates occupancy from a limited number of cornfield photos, shows excellent prediction accuracy, and confirms the potential and scalability of deep learning.
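
The regression formulation above maps a whole aerial image to a single occupancy value. A minimal PyTorch sketch of that idea follows; the architecture, layer sizes, and sigmoid output are assumptions for illustration, not the network described in the paper.

```python
# Illustrative sketch only: a small CNN used as a regressor whose single output
# approximates the corn occupancy rate of a UAV image, trained with MSE loss.
import torch
import torch.nn as nn

class OccupancyRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, x):           # x: (B, 3, H, W) image batch
        return self.head(self.features(x)).squeeze(1)   # occupancy in [0, 1]

model = OccupancyRegressor()
loss_fn = nn.MSELoss()              # regression target: ground-truth occupancy rate
```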

Hardware Design of Super Resolution on Human Faces for Improving Face Recognition Performance of Intelligent Video Surveillance Systems (지능형 영상 보안 시스템의 얼굴 인식 성능 향상을 위한 얼굴 영역 초해상도 하드웨어 설계)

  • Kim, Cho-Rong;Jeong, Yong-Jin
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.48 no.9
    • /
    • pp.22-30
    • /
    • 2011
  • Recently, the rising demand for intelligent video surveillance systems has led to a need for high-performance face recognition systems. A solution for low-resolution images acquired by long-distance cameras is required to overcome the distance limits of existing face recognition systems. For that reason, this paper proposes a hardware design of an image resolution enhancement algorithm for real-time intelligent video surveillance systems. The algorithm synthesizes a high-resolution face image from an input low-resolution image with the help of a large collection of other high-resolution face images, called the training set. When we evaluated the algorithm on a 32-bit RISC microprocessor, the entire operation took about 25 seconds, which is unsuitable for real-time target applications. Based on this result, we implemented the hardware module and verified it using a Xilinx Virtex-4 FPGA and an ARM9-based embedded processor (S3C2440A). The designed hardware completes the whole operation within 33 ms, so it can handle 30 frames per second. We expect the proposed hardware to be a solution not only for real-time processing in embedded environments, but also for easy integration with existing face recognition systems.
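
The abstract describes example-based (training-set-driven) face super-resolution without specifying the algorithm; the rough software sketch below only illustrates that class of method, replacing each low-resolution patch with the high-resolution counterpart of its nearest training patch. The patch size, scale factor, and brute-force search are assumptions, not the paper's hardware design.

```python
# Rough sketch of example-based face hallucination (assumed, not the paper's
# exact algorithm): nearest-neighbour patch lookup in a paired training set.
import numpy as np

def hallucinate(lr_img, lr_patches, hr_patches, scale=4, p=4):
    """lr_patches: (N, p*p), hr_patches: (N, (p*scale)**2) paired training patches."""
    h, w = lr_img.shape
    out = np.zeros((h * scale, w * scale), dtype=np.float32)
    for y in range(0, h - p + 1, p):
        for x in range(0, w - p + 1, p):
            patch = lr_img[y:y+p, x:x+p].reshape(1, -1).astype(np.float32)
            idx = np.argmin(((lr_patches - patch) ** 2).sum(axis=1))  # nearest patch
            hr = hr_patches[idx].reshape(p * scale, p * scale)
            out[y*scale:(y+p)*scale, x*scale:(x+p)*scale] = hr
    return out
```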

Study on vision-based object recognition to improve performance of industrial manipulator (산업용 매니퓰레이터의 작업 성능 향상을 위한 영상 기반 물체 인식에 관한 연구)

  • Park, In-Cheol;Park, Jong-Ho;Ryu, Ji-Hyoung;Kim, Hyoung-Ju;Chong, Kil-To
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.4
    • /
    • pp.358-365
    • /
    • 2017
  • In this paper, we propose an object recognition method using image information to improve the efficiency of visual servoing for industrial manipulators. It is an image-processing method that responds in real time to abnormal situations or external environmental changes affecting a work object by utilizing the camera image information of an industrial manipulator. The proposed object recognition method applies the Otsu method, a thresholding technique, after separating out the V channel and the S channel of the HSV color space, from which the background is easy to separate, in order to improve the recognition rate of the existing Harris corner algorithm. Through this study, when the work object is not placed in the correct position or is rotated due to external factors, its position is calculated and provided to the industrial manipulator.
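
A minimal OpenCV sketch of the preprocessing described above, with assumed parameter values: Otsu thresholding on the S and V channels of the HSV image produces a foreground mask, and Harris corners are then kept only inside that mask.

```python
# Sketch of HSV/Otsu masking followed by Harris corner detection
# (thresholds and Harris parameters are assumed values).
import cv2
import numpy as np

def detect_object_corners(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    _, mask_s = cv2.threshold(s, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, mask_v = cv2.threshold(v, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.bitwise_and(mask_s, mask_v)             # object/background separation
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    response[mask == 0] = 0                            # keep corners on the object only
    corners = np.argwhere(response > 0.01 * response.max())   # (row, col) points
    return mask, corners
```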

A Study on Touchless Finger Vein Recognition Robust to the Alignment and Rotation of Finger (손가락 정렬과 회전에 강인한 비 접촉식 손가락 정맥 인식 연구)

  • Park, Kang-Ryoung;Jang, Young-Kyoon;Kang, Byung-Jun
    • The KIPS Transactions: Part B
    • /
    • v.15B no.4
    • /
    • pp.275-284
    • /
    • 2008
  • With recent increases in security requirements, biometric technologies such as fingerprint, face, and iris recognition have been widely used in many applications, including door access control, personal authentication for computers, internet banking, automatic teller machines, and border-crossing controls. Finger vein recognition uses the unique patterns of finger veins to identify individuals with a high level of accuracy. This paper proposes a new device and methods for touchless finger vein recognition. The research presents the following five advantages compared to previous works. First, by using a minimal guiding structure for the fingertip, the sides, and the back of the finger, we were able to obtain touchless finger vein images without causing much inconvenience to the user. Second, by using a hot mirror slanted at an angle of 45 degrees in front of the camera, we were able to reduce the depth of the capturing device; consequently, it would be possible to use the device in many applications with size limitations, such as mobile phones. Third, we used the holistic texture information of the finger veins based on LBP (Local Binary Patterns) without needing to extract accurate finger vein regions. This method reduces the effect of non-uniform illumination, including shaded and highly saturated areas. Fourth, we enhanced recognition performance by excluding non-finger-vein regions. Fifth, when matching the extracted finger vein code with the enrolled one, we used bit shifts in both the horizontal and vertical directions, which reduces the genuine-match variations caused by translation and rotation of the finger. Experimental results showed that the EER (Equal Error Rate) was 0.07423% and the total processing time was 91.4 ms.
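
The third and fifth points above (holistic LBP texture plus bit-shifted code matching) can be illustrated with the minimal sketch below; the 8-neighbour LBP, the normalized Hamming distance, and the shift range are assumptions, and the paper's rotation handling and non-finger-region exclusion are omitted.

```python
# Minimal LBP encoding and shift-tolerant Hamming matching sketch (assumed details).
import numpy as np

def lbp_code(img):
    """8-neighbour LBP on an 8-bit grayscale image (one-pixel border ignored)."""
    c = img[1:-1, 1:-1]
    neigh = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
             img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    bits = [(n >= c).astype(np.uint8) << i for i, n in enumerate(neigh)]
    return np.bitwise_or.reduce(bits)

def shifted_hamming(code_a, code_b, max_shift=4):
    """Best (minimum) normalized Hamming distance over small x/y shifts."""
    best = 1.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(code_b, dy, axis=0), dx, axis=1)
            diff = np.unpackbits((code_a ^ shifted).ravel()).mean()
            best = min(best, diff)
    return best
```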

A Study on Iris Recognition by Iris Feature Extraction from Polar Coordinate Circular Iris Region (극 좌표계 원형 홍채영상에서의 특징 검출에 의한 홍채인식 연구)

  • Jeong, Dae-Sik;Park, Kang-Ryoung
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.3
    • /
    • pp.48-60
    • /
    • 2007
  • Previous research on iris feature extraction transforms the original iris image into a rectangular one by stretching and interpolation, which distorts the iris patterns and consequently reduces iris recognition accuracy. We therefore propose a method that extracts iris features in polar coordinates without distorting the iris patterns. Our proposed method has three strengths compared with previous research. First, we extract iris features directly from the polar coordinate circular iris image. Although this requires a little more processing time, there is no degradation of iris recognition accuracy, and we compare the recognition performance of the polar coordinate representation with the rectangular one using Hamming distance, cosine distance, and Euclidean distance. Second, the center position of the pupil generally differs from that of the iris due to the camera angle and the head position and gaze direction of the user, so we propose a method of iris feature detection based on the polar coordinate circular iris region that uses the pupil and iris positions and radii at the same time. Third, we deal with overlapped points in the iris patterns that arise with the polar coordinate circular method, where each overlapped point would be extracted from the same position of the iris region. To overcome this problem, we modify the Gabor filter's size and frequency on the first track in order to account for the low-frequency iris patterns caused by the overlapped points. Experimental results showed an EER of 0.29% and d' of 5.9 for the conventional rectangular image, and an EER of 0.16% and d' of 6.4 for the proposed method.
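
The second point above, using the pupil and iris centres and radii at the same time, can be pictured as sampling intensities along concentric tracks between the two boundaries, as in the sketch below. This is an assumed illustration only; the paper's Gabor encoding and track-wise frequency adjustment are not shown.

```python
# Assumed sketch: sample the iris directly along circular tracks defined by the
# (possibly non-concentric) pupil and iris boundaries, instead of unwrapping
# the iris into a rectangle first.
import numpy as np

def polar_iris_samples(gray, pupil_c, pupil_r, iris_c, iris_r,
                       n_tracks=8, n_angles=256):
    """Return an (n_tracks, n_angles) array of intensities along concentric tracks."""
    samples = np.zeros((n_tracks, n_angles), dtype=np.float32)
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    for t in range(n_tracks):
        w = (t + 0.5) / n_tracks                  # radial position in [0, 1]
        for a, th in enumerate(thetas):
            # Inner point on the pupil boundary, outer point on the iris boundary.
            x_in = pupil_c[0] + pupil_r * np.cos(th)
            y_in = pupil_c[1] + pupil_r * np.sin(th)
            x_out = iris_c[0] + iris_r * np.cos(th)
            y_out = iris_c[1] + iris_r * np.sin(th)
            x = (1 - w) * x_in + w * x_out
            y = (1 - w) * y_in + w * y_out
            xi = int(np.clip(round(x), 0, gray.shape[1] - 1))
            yi = int(np.clip(round(y), 0, gray.shape[0] - 1))
            samples[t, a] = gray[yi, xi]
    return samples
```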

Study of Fast Face Detection in Video frames compressed by advanced CODEC (향상된 코덱으로 압축된 프레임에서 고속 얼굴 검출 기법 연구)

  • Yoon, So-Jeong;Yoo, Sung-Geun;Eom, Yumie
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2014.06a
    • /
    • pp.254-257
    • /
    • 2014
  • Recently, various applications using real-time face detection have been developed as face recognition technology and hardware have advanced. As network services develop and video equipment becomes cheaper, smart surveillance cameras and services based on IP network cameras and face detection technology are needed. However, videos must be compressed to reduce network bandwidth and storage requirements in surveillance systems, and processing every compressed frame in a face detection program would demand a large improvement in system performance, so a fast face detection method is needed. In this paper, we suggest not only a fast algorithm using Haar-like features, AdaBoost learning, and motion information, but also its application to a broadcast system.

  • PDF
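
A minimal OpenCV sketch of the speed-up idea described above: the Haar-cascade detector (trained with AdaBoost) is run only inside the region where frame differencing indicates motion. The cascade file, threshold, and detector parameters are assumed illustrative values.

```python
# Sketch: restrict Haar-cascade face detection to regions with frame-to-frame motion.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces_in_motion(prev_gray, cur_gray, motion_thr=25):
    diff = cv2.absdiff(prev_gray, cur_gray)
    _, motion = cv2.threshold(diff, motion_thr, 255, cv2.THRESH_BINARY)
    pts = cv2.findNonZero(motion)
    if pts is None:
        return []                                   # no motion: skip detection
    x, y, w, h = cv2.boundingRect(pts)              # bounding box of moving pixels
    roi = cur_gray[y:y+h, x:x+w]
    faces = cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
    return [(x + fx, y + fy, fw, fh) for (fx, fy, fw, fh) in faces]
```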

Gesture-based Table Tennis Game in AR Environment (증강현실과 제스처를 이용한 비전기반 탁구 게임)

  • Yang, Jong-Yeol;Lee, Sang-Kyung;Kyoung, Dong-Wuk;Jung, Kee-Chul
    • Journal of Korea Game Society
    • /
    • v.5 no.3
    • /
    • pp.3-10
    • /
    • 2005
  • We present a computer table tennis game controlled by the player's swing motion. We need to transform real-world coordinates into virtual-world coordinates in order to hit the virtual ball, but we cannot obtain a correct 3-dimensional position of the racket in an environment using only one camera and simple image processing. Therefore, we use the Augmented Reality (AR) concept to develop the game. This paper presents an AR table tennis game using gestures and a method to develop a 3D interaction game using only one camera, without any motion detection device or stereo cameras. Also, we use a scan-line method to recognize gestures for fast processing. The game is developed using ARToolKit and DirectX, which are popular SDKs for game development.

  • PDF
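
The real-to-virtual coordinate transform mentioned above can be illustrated with the minimal, assumed sketch below: a 4x4 homogeneous matrix, such as the marker transform ARToolKit estimates from a single camera, maps a racket position from the marker frame into the virtual game world. The matrix values are purely hypothetical.

```python
# Assumed illustration of mapping a real (marker-frame) point into the virtual world.
import numpy as np

def to_virtual_world(point_marker, marker_to_virtual):
    """point_marker: (x, y, z) in the marker frame; marker_to_virtual: 4x4 matrix."""
    p = np.array([*point_marker, 1.0])
    return (marker_to_virtual @ p)[:3]

# Hypothetical example: the virtual table sits 2 m behind and 0.76 m below the marker.
T = np.array([[1.0, 0.0, 0.0,  0.00],
              [0.0, 1.0, 0.0, -0.76],
              [0.0, 0.0, 1.0, -2.00],
              [0.0, 0.0, 0.0,  1.00]])
racket_virtual = to_virtual_world((0.1, 0.05, 0.3), T)
```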