• Title/Summary/Keyword: Camera-based Recognition (카메라 기반 인식)


Camera Tracking Method based on Model with Multiple Planes (다수의 평면을 가지는 모델기반 카메라 추적방법)

  • Lee, In-Pyo;Nam, Bo-Dam;Hong, Hyun-Ki
    • Journal of Korea Game Society, v.11 no.4, pp.143-149, 2011
  • This paper presents a novel camera tracking method based on a model with multiple planes. The proposed algorithm detects a QR code, one of the most popular types of two-dimensional barcode. A 3D model is imported from the detected QR code for the augmented reality application. Based on the geometric properties of the model, the vertices are detected and tracked using optical flow. A clipping algorithm is applied to identify each plane among the model surfaces. The proposed method estimates a homography from coplanar feature correspondences, which is used to obtain the initial camera motion parameters. After deriving a linear equation from many feature points on the model and their 3D information, we employ the DLT (Direct Linear Transform) to compute the camera information. In the final step, the error of the camera pose in every frame is minimized with a local bundle adjustment algorithm in real time.
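As an illustration of the homography step described above, here is a minimal numpy sketch of the DLT estimate from coplanar feature correspondences (the coordinate normalization and outlier filtering a production tracker would add are omitted):

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst from N >= 4
    coplanar point correspondences via the Direct Linear Transform."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # h is the right singular vector of A with the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the scale ambiguity

def apply_homography(H, pts):
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

In the paper, such a homography seeds the initial camera motion; DLT over many model points and local bundle adjustment then refine the pose per frame.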

A Study on the Image-based Automatic Flight Control of Mini Drone (미니드론의 영상기반 자동 비행 제어에 관한 연구)

  • Sun, Eun-Hey;Luat, Tran Huu;Kim, Dongyeon;Kim, Yong-Tae
    • Journal of the Korean Institute of Intelligent Systems, v.25 no.6, pp.536-541, 2015
  • In this paper, we propose an image-based automatic flight control system for a mini drone. The automatic flight system is designed for an indoor environment, with a camera on the ceiling and markers on the floor and at the landing position. Images from the ceiling camera are used not only to recognize the markers and the landing position but also to track the drone's motion. A PC server identifies the location of the drone and sends control commands to the mini drone. The flight controller of the mini drone is designed using a state-machine algorithm, PID control, and a way-point position control method. The proposed automatic flight control system is verified through experiments with the mini drone. Known markers in the environment are recognized, and the drone can follow trajectories with the specific ㄱ, ㄷ, and ㅁ shapes. Experimental results also show that the drone can approach and correctly land on target positions set at different heights.
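The PID element of such a flight controller can be sketched as a textbook discrete PID loop; the gains, time step, and 1-D toy plant below are illustrative, not values from the paper:

```python
class PID:
    """Textbook discrete PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Way-point hold on a toy 1-D position model: velocity command = PID output.
pid = PID(kp=1.5, ki=0.2, kd=0.05, dt=0.05)
x = 0.0
for _ in range(400):
    x += pid.update(setpoint=1.0, measurement=x) * pid.dt
```

The position converges to the 1.0 m way-point; in the actual system the measurement would come from the ceiling-camera tracking rather than the model above.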

Implementation of Deep Learning-Based Vehicle Model and License Plate Recognition System (딥러닝 기반 자동차 모델 및 번호판 인식 시스템 구현)

  • Ham, Kyoung-Youn;Kang, Gil-Nam;Lee, Jang-Hyeon;Lee, Jung-Woo;Park, Dong-Hoon;Ryoo, Myung-Chun
    • Proceedings of the Korean Society of Computer Information Conference, 2022.07a, pp.465-466, 2022
  • This paper proposes a vehicle model and license plate recognition system that uses YOLOv4, a deep learning object detection model for image recognition. The proposed system uses YOLOv4, a real-time image processing technique, to recognize the vehicle model and detect the license plate region, and uses a CNN (Convolutional Neural Network) algorithm to recognize the letters and digits on the plate. With this approach, both vehicle model recognition and license plate recognition are possible with a single camera. Real data were used for vehicle model recognition and plate region detection, while both real and synthetic data were used for plate character recognition. The system achieved 92.3% accuracy for vehicle model recognition, 98.9% for plate detection, and 94.2% for plate character recognition.
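As a hedged illustration of one post-processing step common to YOLO-family detectors (generic, not code from the paper), greedy non-maximum suppression keeps the highest-scoring vehicle or plate box and discards heavily overlapping duplicates:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression; returns the indices kept."""
    order = np.argsort(scores)[::-1]      # best score first
    keep = []
    while len(order):
        i = order[0]
        keep.append(int(i))
        # drop remaining boxes that overlap the kept one too much
        order = order[1:][[iou(boxes[i], boxes[j]) < iou_thresh for j in order[1:]]]
    return keep
```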


Gaze Detection System by Wide and Narrow View Camera (광각 및 협각 카메라를 이용한 시선 위치 추적 시스템)

  • 박강령
    • The Journal of Korean Institute of Communications and Information Sciences, v.28 no.12C, pp.1239-1249, 2003
  • Gaze detection is to locate, by computer vision, the position on a monitor screen where a user is looking. Previous gaze detection systems use a wide-view camera, which can capture the user's whole face. However, the image resolution of such a camera is too low, and the fine movements of the user's eyes cannot be detected exactly. So we implement a gaze detection system with both a wide-view camera and a narrow-view camera. In order to detect the position of the user's eyes as it changes with facial movements, the narrow-view camera has auto-focusing and auto pan/tilt functionality based on the detected 3D facial feature positions. As experimental results, we can obtain the facial and eye gaze position on a monitor; the accuracy between the computed gaze positions and the real ones is about 3.1 cm RMS error when facial movements are permitted and 3.57 cm when both facial and eye movements are permitted. The processing time is short enough for a real-time system (below 30 ms on a Pentium-IV 1.8 GHz).
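The auto pan/tilt of the narrow-view camera amounts to aiming at a detected 3D facial feature; a geometry-only sketch (axis conventions assumed here, calibration and mechanics omitted):

```python
import math

def pan_tilt_to(point):
    """Pan/tilt angles (degrees) that aim a camera at the origin toward
    a 3-D point, assuming x right, y up, z forward. Geometry only; the
    paper's calibration and axis conventions may differ."""
    x, y, z = point
    pan = math.degrees(math.atan2(x, z))                  # rotate about the y-axis
    tilt = math.degrees(math.atan2(y, math.hypot(x, z)))  # then elevate
    return pan, tilt
```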

Sign Language recognition Using Sequential Ram-based Cumulative Neural Networks (순차 램 기반 누적 신경망을 이용한 수화 인식)

  • Lee, Dong-Hyung;Kang, Man-Mo;Kim, Young-Kee;Lee, Soo-Dong
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.9 no.5, pp.205-211, 2009
  • The Weightless Neural Network (WNN) has the advantage of processing speed and lower computational cost compared with a weighted neural network, which must readjust its weights. In particular, behavioral information such as sequential gestures has strong serial correlation, so high computational power and long processing times are required to recognize it. To solve this problem, many algorithms add preprocessing steps and hardware interface devices to reduce the computation and time. In this paper, we propose a RAM-based Sequential Cumulative Neural Network (SCNN) model: a sign language recognition system without preprocessing or a hardware interface. We experimented with compound words in continuous Korean sign language, input as binary edge-detected images from a camera. The sign language recognition system achieved a 93% recognition rate without preprocessing.
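The general RAM-based (weightless) idea the paper builds on can be sketched as a WiSARD-style discriminator: the binary input is split into fixed random tuples, and each tuple addresses one RAM. This is a generic WNN sketch, not the paper's sequential cumulative (SCNN) model:

```python
import random

class RamDiscriminator:
    """WiSARD-style weightless discriminator: no weights are trained,
    each RAM just remembers the addresses it has seen for one class."""
    def __init__(self, input_bits, tuple_size, seed=0):
        rng = random.Random(seed)
        order = list(range(input_bits))
        rng.shuffle(order)  # fixed random mapping of pixels to tuples
        self.tuples = [order[i:i + tuple_size]
                       for i in range(0, input_bits, tuple_size)]
        self.rams = [set() for _ in self.tuples]

    def _addresses(self, bits):
        for t, ram in zip(self.tuples, self.rams):
            yield ram, tuple(bits[i] for i in t)

    def train(self, bits):
        for ram, addr in self._addresses(bits):
            ram.add(addr)

    def score(self, bits):
        # number of RAMs that have seen this address during training
        return sum(addr in ram for ram, addr in self._addresses(bits))
```

One discriminator is trained per sign; classification picks the discriminator with the highest score, which is why no weight readjustment (and hence little computation) is needed.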


An User-aware System using Visible Light Communication (가시광 통신을 이용한 사용자 인식 시스템)

  • Kim, Jong-Su;Lee, Won-Young
    • The Journal of the Korea institute of electronic communication sciences, v.14 no.4, pp.715-722, 2019
  • This paper introduces the implementation and operation of a user-aware system using visible light communication. The system consists of a transmitter based on the Android system and a receiver based on an open-source controller. In the transmitter, the user's personal information data is encoded and converted to visible light signals through the Android camera interface. In the receiver, a photodiode module receives the incoming visible light signals and converts them to electrical signals, and the open-source controller, an Arduino, processes the received data. The processing module finds the start bits 0111 to locate the user information data in the packet for burst-mode communication. According to the experimental results, the proposed system successfully transmits and receives visible light data with Manchester encoding.
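A minimal sketch of Manchester coding and the 0111 start-pattern search described above (the bit-to-symbol polarity below is one common convention and may differ from the implemented system):

```python
def manchester_encode(bits):
    """Manchester code: 0 -> 01, 1 -> 10 (one common polarity)."""
    out = []
    for b in bits:
        out += [0, 1] if b == 0 else [1, 0]
    return out

def manchester_decode(symbols):
    bits = []
    for i in range(0, len(symbols), 2):
        pair = (symbols[i], symbols[i + 1])
        if pair == (0, 1):
            bits.append(0)
        elif pair == (1, 0):
            bits.append(1)
        else:
            raise ValueError("invalid Manchester pair %r" % (pair,))
    return bits

def find_payload(bits, start=(0, 1, 1, 1)):
    """Locate the 0111 start pattern and return everything after it."""
    for i in range(len(bits) - len(start) + 1):
        if tuple(bits[i:i + len(start)]) == start:
            return bits[i + len(start):]
    return None
```

Manchester coding guarantees a transition in every bit period, which is what lets the photodiode receiver recover timing from the bursty light signal.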

Design of Image Recognition Module for Face and Iris Area based on Pixel with Eye Blinking (눈 깜박임 화소 값 기반의 안면과 홍채영역 영상인식용 모듈설계)

  • Kang, Mingoo
    • Journal of Internet Computing and Services, v.18 no.1, pp.21-26, 2017
  • In this paper, a USB-OTG (Universal Serial Bus On-The-Go) interface module was designed to use iris information for personal identification. The proposed image recognition algorithm searches the face and iris areas using pixel differences caused by eye blinking: several facial images are captured and the eyes are detected without any user action, such as pressing a button on the smartphone. The pupil and iris region can be segmented quickly by computing the pixel values of the frame difference between two adjacent open-eye and closed-eye images. The proposed iris recognition is processed quickly by choosing a proper grid size for the eye region and by restricting the search area for the face and iris locations in the frames delivered through the USB-OTG interface of this camera module. As a result, the iris localization time can be reduced, and the module is expected to eliminate the standby time spent waiting for the eyes to open.
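The frame-difference-with-grid idea can be sketched as follows: threshold the absolute difference between an open-eye and a closed-eye frame and pick the grid cell with the most changed pixels (grid size and threshold here are illustrative, not the paper's values):

```python
import numpy as np

def blink_region(open_frame, closed_frame, grid=8, thresh=30):
    """Return the (row, col) grid cell with the largest pixel change
    between an open-eye and a closed-eye grayscale frame."""
    diff = np.abs(open_frame.astype(int) - closed_frame.astype(int))
    h, w = diff.shape
    gh, gw = h // grid, w // grid
    best, best_cell = -1, None
    for r in range(grid):
        for c in range(grid):
            cell = diff[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw]
            changed = int((cell > thresh).sum())
            if changed > best:
                best, best_cell = changed, (r, c)
    return best_cell
```

Restricting the subsequent iris search to this cell is what cuts the detection time, since the rest of the face never needs to be scanned.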

The Long Distance Face Recognition using Multiple Distance Face Images Acquired from a Zoom Camera (줌 카메라를 통해 획득된 거리별 얼굴 영상을 이용한 원거리 얼굴 인식 기술)

  • Moon, Hae-Min;Pan, Sung Bum
    • Journal of the Korea Institute of Information Security & Cryptology, v.24 no.6, pp.1139-1145, 2014
  • User recognition technology, which identifies or verifies a certain individual, is absolutely essential for intelligent services in robotic environments. The conventional face recognition algorithm that uses single-distance face images for training has the problem that the recognition rate decreases as distance increases. An algorithm that uses face images captured at each actual distance as training images performs well, but it requires user cooperation. This paper proposes an LDA-based long-distance face recognition method that uses multiple-distance face images from a zoom camera as training images. The proposed technique improved performance by 7.8% on average over the technique trained on single-distance images. Compared with the technique trained on face images at each distance, performance fell by 8.0% on average; however, the proposed method has the strength that it requires less time and less user cooperation when capturing face images.
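As a generic illustration of the LDA component (a two-class Fisher discriminant on toy features, not the paper's multi-distance training scheme):

```python
import numpy as np

def fisher_lda(X0, X1):
    """Two-class Fisher discriminant: w = Sw^-1 (m1 - m0), with a
    midpoint threshold for classification."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # pooled within-class scatter matrix
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw, m1 - m0)
    threshold = w @ (m0 + m1) / 2
    return w, threshold

def classify(x, w, threshold):
    return int(w @ x > threshold)
```

In the paper the classes are users, the features are face images taken at several zoom distances, and training on all distances at once is what removes the need for distance-specific cooperation.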

An OSD Menu Verification Technique using a FMM Neural Network (FMM 신경망을 이용한 OSD 메뉴 검증기법)

  • Lee Jin-Seok;Park Jung-Min;Kim Ho-Joon
    • Annual Conference of KIPS, 2006.05a, pp.315-318, 2006
  • This paper examines a methodology for real-time recognition of character patterns in an automatic TV OSD (On Screen Display) menu verification system. Unlike general character recognition problems, several assumptions and constraints on the system environment must be considered. For example, requirements such as job scheduling linked to the operation of the camera and TV control units, together with real-time analysis, complicate system development; on the other hand, verifying against the given OSD menu data simplifies the recognition of unknown patterns into a kind of decision problem. In this study, an FMM neural network with a modified structure is applied as the recognition method. It is a hyperbox-based pattern classifier that provides concise yet powerful learning. To improve on a weakness of the conventional FMM model, which does not consider the feature distribution and frequency of the training patterns, we define an activation characteristic that takes weight factors between features and hyperboxes into account. The usefulness of the proposed theory is examined through experimental results on real data.
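The hyperbox membership at the core of an FMM classifier can be sketched as follows (a simplified form of Simpson's membership function; the paper's weighted activation modifies this):

```python
def hyperbox_membership(x, v, w, gamma=4.0):
    """Fuzzy min-max membership of point x in the hyperbox with min
    corner v and max corner w: 1 inside the box, decaying linearly
    (slope gamma) with distance outside it."""
    def ramp(r):
        return max(0.0, min(1.0, gamma * r))
    return min(
        min(1.0 - ramp(xi - wi), 1.0 - ramp(vi - xi))
        for xi, vi, wi in zip(x, v, w)
    )
```

Training grows hyperboxes to cover class examples (with overlap tests and contraction), so a character pattern is accepted when its features fall inside, or close enough to, a hyperbox of the expected class.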


Vision-based human motion analysis for event recognition (휴먼 모션 분석을 통한 이벤트 검출 및 인식)

  • Cui, Yao-Huan;Lee, Chang-Woo
    • Proceedings of the Korean Society of Computer Information Conference, 2009.01a, pp.219-222, 2009
  • Event detection and recognition has recently been studied actively in computer vision and remains one of its challenging topics. Event detection techniques are useful and efficient in many surveillance-system applications. This paper proposes a method for detecting and recognizing events that can occur in an office environment. The events consist of entering, exiting, sitting down, and standing up. The proposed method detects events through human motion analysis using MHI (Motion History Image) sequences, without any hardware sensors, and is invariant to a person's body shape, the type and color of clothing, and the position relative to the camera. Edge detection is combined with the MHI sequence information to extract geometric features of human motion, which are then used as the basic features for event recognition. Because the proposed method uses a simple event detection framework, it can be extended simply by adding a description of each event to be detected. It is also applicable to many surveillance systems based on computer vision technology.
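A Motion History Image update can be sketched in a few lines: pixels that are moving in the current frame are stamped with the maximum timestamp, and all others fade toward zero (the tau and decay values below are illustrative):

```python
import numpy as np

def update_mhi(mhi, moving, tau=255, decay=16):
    """One MHI step: `moving` is a boolean motion mask for the current
    frame; recent motion stays bright, older motion fades."""
    mhi = np.where(moving, tau, np.maximum(mhi.astype(int) - decay, 0))
    return mhi.astype(np.uint8)
```

The resulting brightness gradient encodes where motion happened and how recently, which is what lets shape features extracted from the MHI distinguish sitting down from standing up.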
