• Title/Summary/Keyword: Camera-Based Recognition


Thermal Display-Based Emotional Communication System for Blindness (시각장애인을 위한 온각 기반 감정 전달 시스템)

  • Noh, Hyoju;Kim, Kangtae;Lee, Sungkil
    • Annual Conference of KIPS / 2013.11a / pp.1659-1660 / 2013
  • In interpersonal communication, nonverbal visual cues such as facial expressions and gestures are important carriers of emotion, but people with visual impairments have limited access to this emotional information. This paper proposes a method for recognizing emotional information from such nonverbal visual cues and conveying it to visually impaired users through thermal sensation. The other person's facial expression is captured by a glasses-mounted camera and classified into an emotion. When the recognized expression is favorable, such as a smile, this state is converted into warmth and conveyed to the user by a heat-delivery device mounted on the glasses. Such a thermal emotion-delivery device can be applied to the development of devices that improve communication for the visually impaired.
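The pipeline this abstract describes (camera → emotion classification → thermal output) can be sketched as a simple mapping. The emotion labels and temperatures below are illustrative assumptions, not values from the paper:

```python
# Hypothetical mapping from a classified facial expression to a target
# temperature for a glasses-mounted heating element; the labels and
# degree values here are illustrative only.
SKIN_NEUTRAL_C = 33.0

def emotion_to_target_temp(emotion):
    """Warm the thermal pad for favorable expressions (e.g. a smile);
    hold skin-neutral temperature otherwise."""
    favorable = {"smile", "laugh"}
    return SKIN_NEUTRAL_C + 4.0 if emotion in favorable else SKIN_NEUTRAL_C

print(emotion_to_target_temp("smile"))    # warmer than neutral
print(emotion_to_target_temp("neutral"))  # stays skin-neutral
```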

Alarm Device Using Eye-Tracking Web-camera (웹카메라를 이용한 시선 추적식 졸음 방지 디바이스)

  • Kim, Seong-Joo;Kim, Yoo-Hyun;Shin, Eun-Jung;Lee, Kang-Hee
    • Proceedings of the Korean Society of Computer Information Conference / 2013.01a / pp.321-322 / 2013
  • This paper proposes an eye-tracking drowsiness-prevention device using a web camera. The device is designed in two parts, hardware and software: the user's eyes are detected through the web camera, and the system is built on Arduino and Max/MSP. Eye-tracking is applied to determine the user's state and trigger the appropriate anti-drowsiness function. The device also performs additional functions such as serving as a desk lamp. By using this eye-tracking alarm device through a web camera, users are offered a new experience. The authors claim it as a world-first device that applies eye-tracking technology so that anyone, regardless of age or gender, can use it during work.
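The paper's system runs on Arduino and Max/MSP; as a hedged illustration of the eye-state cue such pipelines commonly rely on, the eye aspect ratio (EAR) can be computed from six eye landmarks. This is a standard technique, not necessarily the one used by these authors, and the landmark coordinates below are toy values:

```python
import math

def ear(pts):
    """Eye aspect ratio from six 2-D eye landmarks (p1..p6).

    EAR drops toward zero as the eye closes, so a sustained low value
    is a common drowsiness cue in eye-tracking pipelines.
    """
    d = math.dist
    p1, p2, p3, p4, p5, p6 = pts
    return (d(p2, p6) + d(p3, p5)) / (2.0 * d(p1, p4))

open_eye   = [(0, 0), (1, 2.0), (2, 2.0), (3, 0), (2, -2.0), (1, -2.0)]
closed_eye = [(0, 0), (1, 0.2), (2, 0.2), (3, 0), (2, -0.2), (1, -0.2)]
print(ear(open_eye), ear(closed_eye))  # open eye scores much higher
```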


Design of Camera Model for Implementation of Spherical PTAM (구면 PTAM의 구현을 위한 카메라 모델 설계)

  • Kim, Ki-Sik;Park, Jong-Seung
    • Annual Conference of KIPS / 2020.05a / pp.607-610 / 2020
  • PTAM has been actively studied for visual environment recognition, and recent work extends it to spherical video, which provides an omnidirectional field of view. Existing spherical SLAM methods use the Unified Sphere Model and are limited to the frontal field of view. This paper presents a camera model for implementing PTAM on spherical video. The proposed camera model uses dual image planes based on pinhole projection. The proposed method is not restricted to the frontal field of view and supports the full field of view. It also has the advantage that planar formulas can be applied directly when applying PTAM to spherical video.
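The core idea of a dual-image-plane pinhole model can be sketched as follows: a viewing direction is projected onto a front plane when it points forward and onto a back plane otherwise, so the two planes together cover the full sphere. This is a minimal sketch of the general idea, not the paper's exact formulation:

```python
def dual_plane_project(x, y, z, f=1.0):
    """Project a 3-D viewing ray onto one of two pinhole image planes.

    The front plane covers directions with z > 0 and the back plane
    those with z < 0, so together they span the full field of view --
    the limitation a single front-facing pinhole model cannot escape.
    Returns (plane, u, v).
    """
    if abs(z) < 1e-9:
        raise ValueError("ray parallel to both image planes")
    plane = "front" if z > 0 else "back"
    u, v = f * x / abs(z), f * y / abs(z)
    return plane, u, v

print(dual_plane_project(0.1, 0.2, 1.0))   # lands on the front plane
print(dual_plane_project(0.1, 0.2, -1.0))  # lands on the back plane
```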

Deep Learning Model Selection Platform for Object Detection (사물인식을 위한 딥러닝 모델 선정 플랫폼)

  • Lee, Hansol;Kim, Younggwan;Hong, Jiman
    • Smart Media Journal / v.8 no.2 / pp.66-73 / 2019
  • Recently, object recognition using computer vision has attracted attention as a replacement for sensor-based object recognition. Sensor-based approaches are often difficult to commercialize because they require expensive sensors, whereas computer-vision approaches can replace those sensors with inexpensive cameras. Moreover, real-time recognition has become viable thanks to the growth of CNNs, which are being actively adopted in fields such as IoT and autonomous vehicles. However, applying an object recognition model requires expert knowledge of deep learning to select and train the model, which makes such methods challenging for non-experts. In this paper, we therefore analyze the structure of deep-learning-based object recognition models and propose a platform that automatically selects an object recognition model according to the user's desired conditions. We also show, through experiments on different models, why statistics-based model selection is needed.
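A statistics-based selection rule of the kind the abstract motivates can be sketched as below: given per-model statistics, pick the most accurate model that still satisfies the user's constraint. The model names and numbers are hypothetical placeholders, not figures from the paper:

```python
# Hypothetical per-model statistics (mAP and frames-per-second);
# illustrative values only, not measurements from the paper.
MODELS = {
    "tiny-yolo":   {"mAP": 0.33, "fps": 120},
    "yolov3":      {"mAP": 0.55, "fps": 35},
    "faster-rcnn": {"mAP": 0.59, "fps": 7},
}

def select_model(min_fps):
    """Statistics-based selection: among models meeting the user's
    frame-rate requirement, return the most accurate; None if none fit."""
    ok = {name: s for name, s in MODELS.items() if s["fps"] >= min_fps}
    return max(ok, key=lambda name: ok[name]["mAP"]) if ok else None

print(select_model(30))  # fastest accurate model meeting 30 fps
print(select_model(5))   # accuracy wins when the constraint is loose
```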

A Study on Lambertian Color Segmentation and Canny Edge Detection Algorithms for Automatic Display Detection in CamCom (저속 카메라 통신용 자동 디스플레이 검출을 위한 Lambertian 색상 분할 및 Canny Edge Detection 알고리즘 연구)

  • Han, Jungdo;Said, Ngumanov;Vadim, Li;Cha, Jaesang
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.11 no.5 / pp.615-622 / 2018
  • Recent advances in camera communication (CamCom) using visible light allow a display to serve as the luminance source that modulates data for visible-light communication. Existing display CamCom techniques detect and decode the 2D color-coded data on the screen from a manually selected region of interest in the camera image, which is not an effective way to communicate when the user is mobile. This paper proposes automatic display detection using Lambertian color segmentation combined with the Canny edge detection algorithm, so that a communication link between display and camera can be established without manual region-of-interest selection. Automatic display detection fails with conventional edge detection algorithms when the on-screen content changes dynamically; Lambertian color segmentation combined with Canny edge detection is proposed to solve this problem. This research analyzed different edge detection algorithms for display detection and measured performance while dynamically changing color-coded content was rendered on the display. The proposed solution achieves a display detection rate of about 96%.
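The detection step can be illustrated with a much-reduced sketch: because the display is the dominant luminance source in the scene, segmenting bright pixels and bounding them already localizes the screen. The paper's actual method combines Lambertian color segmentation with Canny edge detection; this toy version substitutes a plain brightness threshold:

```python
def display_bbox(gray, bright=200):
    """Localize the display in a grayscale frame: segment pixels at
    least as bright as `bright` (the display is the dominant luminance
    source) and return their bounding box (x1, y1, x2, y2), or None."""
    pts = [(x, y) for y, row in enumerate(gray)
                  for x, v in enumerate(row) if v >= bright]
    if not pts:
        return None
    xs, ys = zip(*pts)
    return min(xs), min(ys), max(xs), max(ys)

frame = [
    [10,  12,  11,  10,  9],
    [11, 240, 250, 245, 10],
    [12, 235, 255, 240, 11],
    [10,  11,  12,  10,  9],
]
print(display_bbox(frame))  # (1, 1, 3, 2)
```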

HMM-based Upper-body Gesture Recognition for Virtual Playing Ground Interface (가상 놀이 공간 인터페이스를 위한 HMM 기반 상반신 제스처 인식)

  • Park, Jae-Wan;Oh, Chi-Min;Lee, Chil-Woo
    • The Journal of the Korea Contents Association / v.10 no.8 / pp.11-17 / 2010
  • In this paper, we propose HMM-based upper-body gesture recognition. To recognize gestures in space, the poses that compose a gesture must first be classified. To classify the poses used by the interface, we used two IR cameras installed at the front and the side, so that each IR camera captures the front view and side view of a single pose. The acquired IR pose images are classified using an SVM with a non-linear RBF kernel, which reduces misclassification between poses that are not linearly separable. The sequence of classified poses is then recognized as a gesture using the HMM's state transition matrix, and the recognized gesture can be applied to existing applications by mapping it to OS commands.
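The final step above, scoring a sequence of classified poses against a gesture HMM, can be sketched with the standard forward algorithm. The two-state model and its probabilities below are toy values for illustration, not parameters from the paper:

```python
def forward_prob(obs, start, trans, emit):
    """Forward-algorithm likelihood of an observed pose-label sequence
    under an HMM (states = gesture phases, trans = transition matrix)."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
                 for j in range(n)]
    return sum(alpha)

# Toy "raise arm" gesture HMM: state 0 = arm-down phase, 1 = arm-up
# phase; observations are pose labels 0 (down) and 1 (up).
start = [1.0, 0.0]
trans = [[0.2, 0.8], [0.1, 0.9]]
emit  = [[0.9, 0.1], [0.1, 0.9]]

p_raise = forward_prob([0, 1, 1], start, trans, emit)
p_idle  = forward_prob([0, 0, 0], start, trans, emit)
print(p_raise, p_idle)  # the raising sequence scores far higher
```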

Real-Time Place Recognition for Augmented Mobile Information Systems (이동형 정보 증강 시스템을 위한 실시간 장소 인식)

  • Oh, Su-Jin;Nam, Yang-Hee
    • Journal of KIISE: Computing Practices and Letters / v.14 no.5 / pp.477-481 / 2008
  • Place recognition is necessary to provide a mobile user with place-dependent information. This paper proposes a real-time, video-based place recognition system that identifies the user's current place while moving through a building. For scene feature extraction, existing methods based on global feature analysis are sensitive to partial occlusion and noise, while local-feature methods usually attempt object recognition, whose high computational cost makes them hard to apply in a real-time system. Statistical methods such as HMMs (hidden Markov models) or Bayesian networks have also been used to derive place recognition results from feature data; the former, however, is impractical because it requires a huge effort to gather training data, while the latter usually depends on object recognition alone. This paper proposes a combined approach of global and local feature analysis for feature extraction that complements the drawbacks of both. The proposed method is applied to a mobile information system and shows real-time performance with competitive recognition results.
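The complementary-combination idea can be sketched as a weighted blend of a global scene-descriptor similarity with a normalized local-feature match count, so that occlusion hurting one cue is compensated by the other. The weighting and normalization below are illustrative assumptions, not the paper's formulation:

```python
def combined_score(global_sim, local_matches, max_matches=50, w=0.5):
    """Blend a global scene similarity (0..1) with a local-feature
    match count normalized to 0..1; `w` weights the global cue.
    Weights and cap are illustrative, not from the paper."""
    local_sim = min(local_matches, max_matches) / max_matches
    return w * global_sim + (1 - w) * local_sim

# A partially occluded scene: the global similarity drops, but strong
# local matches keep the combined score usable.
print(combined_score(0.3, 40))  # 0.55
```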

Object Detection of AGV in Manufacturing Plants using Deep Learning (딥러닝 기반 제조 공장 내 AGV 객체 인식에 대한 연구)

  • Lee, Gil-Won;Lee, Hwally;Cheong, Hee-Woon
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.1 / pp.36-43 / 2021
  • In this research, the accuracy of the YOLO v3 algorithm for object detection during AGV (Automated Guided Vehicle) operation was investigated. First, an AGV equipped with a 2D LiDAR and a stereo camera was prepared. The AGV was driven along a route scanned with SLAM (Simultaneous Localization and Mapping) using the 2D LiDAR while objects ahead were detected through the stereo camera. To evaluate the accuracy of YOLO v3, the recall, AP (Average Precision), and mAP (mean Average Precision) of the algorithm were measured according to the degree of training. Experimental results show that mAP, precision, and recall improve by 10%, 6.8%, and 16.4%, respectively, when YOLO v3, fitted with 4000 training and 500 testing images collected through online search, is additionally trained with 1200 images collected from the stereo camera on the AGV.
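The precision/recall/AP metrics cited above all rest on deciding whether a detection matches a ground-truth box, conventionally via intersection-over-union (IoU). A minimal sketch of that standard computation (not code from the paper):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    A detection conventionally counts as a true positive when its IoU
    with a ground-truth box reaches a threshold such as 0.5."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # ~0.143: overlap 1, union 7
print(iou((0, 0, 2, 2), (0, 0, 2, 2)))  # 1.0: identical boxes
```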

Implementation of Camera-Based Autonomous Driving Vehicle for Indoor Delivery using SLAM (SLAM을 이용한 카메라 기반의 실내 배송용 자율주행 차량 구현)

  • Kim, Yu-Jung;Kang, Jun-Woo;Yoon, Jung-Bin;Lee, Yu-Bin;Baek, Soo-Whang
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.4 / pp.687-694 / 2022
  • In this paper, we proposed an autonomous vehicle platform that delivers goods to a designated destination based on an indoor SLAM (Simultaneous Localization and Mapping) map generated with Visual SLAM technology. To generate the indoor SLAM map, a depth camera was installed on top of a small autonomous vehicle platform, and a tracking camera was installed for accurate localization within the map. In addition, a convolutional neural network (CNN) was used to recognize the destination label, and a driving algorithm was applied to arrive accurately at the destination. A prototype indoor delivery vehicle was manufactured, the accuracy of the SLAM map was verified, and a destination label recognition experiment was performed with the CNN. The results, including an increased label recognition success rate, verify the suitability of the implemented vehicle for indoor delivery.
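The control loop this abstract implies, following map waypoints until the label classifier confirms the destination, can be sketched as below. The waypoints, labels, and the stand-in classifier are hypothetical; the paper uses a CNN for the label recognition step:

```python
def drive_to_label(waypoints, classify, goal):
    """Follow SLAM-map waypoints in order and stop at the first one
    whose classifier output matches the goal label; None if the goal
    label is never recognized along the route."""
    for wp in waypoints:
        if classify(wp) == goal:
            return wp
    return None

# Stand-in classifier: each waypoint carries its room label, so the
# "CNN" here simply reads it back. Coordinates are toy values.
rooms = [((0, 0), "A"), ((3, 0), "B"), ((3, 4), "C")]
stop = drive_to_label(rooms, lambda wp: wp[1], "B")
print(stop)  # ((3, 0), 'B')
```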

Home device control using hand motion recognition for the disabled (장애인을 위한 손 동작 인식을 이용한 홈 디바이스 제어)

  • Lee, Se-Hoon;Im, So-Jung;Kim, Hyun-A
    • Proceedings of the Korean Society of Computer Information Conference / 2019.01a / pp.259-260 / 2019
  • People with disabilities can be exposed to extreme situations more easily than others and therefore require particular care. This paper proposes a hand gesture recognition system based on the OpenCV library. The system was implemented so that people with limited mobility, including the disabled, can control modules in the home with simple gestures. Based on the OpenCV library, we designed a system that recognizes hand gestures captured by a camera and controls objects accordingly.
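The recognition-and-control idea can be sketched without OpenCV: count raised fingers from a binarized hand silhouette, then map the count to a device command. The row-transition counting is a crude stand-in for OpenCV's contour/convexity-defect analysis, and the command names are hypothetical:

```python
def count_fingers(silhouette_row):
    """Count 0-to-1 transitions across one row of a binary hand
    silhouette -- a crude stand-in for convexity-defect finger
    counting on a segmented hand contour."""
    return sum(1 for a, b in zip([0] + silhouette_row, silhouette_row)
               if a == 0 and b == 1)

# Hypothetical finger-count to home-device command mapping.
COMMANDS = {1: "light_on", 2: "light_off", 5: "fan_on"}

row = [0, 1, 1, 0, 0, 1, 1, 0]  # two raised regions in this row
print(COMMANDS.get(count_fingers(row)))  # light_off
```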
