• Title/Abstract/Keyword: vision-based control

Search results: 688 items (processing time: 0.028 s)

제스처와 EEG 신호를 이용한 감정인식 방법 (Emotion Recognition Method using Gestures and EEG Signals)

  • 김호덕;정태민;양현창;심귀보
    • 제어로봇시스템학회논문지
    • /
    • Vol. 13, No. 9
    • /
    • pp.832-837
    • /
    • 2007
  • Electroencephalography (EEG) has been used for many years in psychology to record the activity of the human brain. As technology has developed, the neural basis of the functional areas involved in emotion processing has gradually been revealed, so we use EEG to measure the fundamental brain areas that control human emotion. Hand gestures such as shaking and head gestures such as nodding are common body language for human communication, and their recognition is important as a useful communication medium between humans and computers. Gesture recognition is typically studied using computer vision. Most existing research uses either EEG signals or gestures alone for emotion recognition. In this paper, we use EEG signals and gestures together to recognize human emotion, and we select driver emotion as the specific target. The experimental results show that using both EEG signals and gestures yields higher recognition rates than using either EEG signals or gestures alone. For both EEG signals and gestures, features are selected with Interactive Feature Selection (IFS), a method based on reinforcement learning.
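The IFS step mentioned above can be sketched as a wrapper-style feature selection. Note that the actual IFS method is driven by reinforcement learning; this simplified greedy loop and its `score` callback are illustrative stand-ins, not the paper's algorithm:

```python
def greedy_feature_selection(features, score, k):
    """Greedily pick k features maximizing score(subset).

    A simplified stand-in for Interactive Feature Selection (IFS):
    the paper's method updates its selection via reinforcement
    learning, whereas this sketch just takes the locally best
    feature at each step.
    """
    selected = []
    remaining = list(features)
    for _ in range(k):
        # evaluate each remaining feature added to the current subset
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

With a toy additive score (each feature worth a fixed weight), the loop simply returns the highest-weight features in order of selection.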

Exploring the Potential of Modifying Visual Stimuli in Virtual Reality to Reduce Hand Tremor in Micromanipulation Tasks

  • Prada, John;Park, Taiwoo;Jang, Sunjin;Im, Jintaek;Song, Cheol
    • Current Optics and Photonics
    • /
    • Vol. 1, No. 6
    • /
    • pp.642-648
    • /
    • 2017
  • Involuntary hand tremor has been a serious challenge in micromanipulation tasks and thus draws a significant amount of attention from related fields. To minimize the effect of hand tremor, a variety of mechanically assistive solutions have been proposed; however, approaches that increase humans' awareness of their own hand tremor have not been extensively studied. In this paper, a head-mounted-display-based virtual reality (VR) system that increases self-awareness of hand tremor is proposed. It shows the user a virtual image of a handheld device with emphasized hand tremor information; we hypothesize that, provided with this emphasized tremor information, subjects will control their hand tremor more effectively. Two methods of emphasizing hand tremor information are demonstrated: (1) direct amplification of the tremor and (2) magnification of the virtual object, in comparison with a control condition without emphasized tremor information. A human-subject study with twelve trials was conducted with four healthy participants, who performed a task of holding a handheld gripper device in a specific direction. The results show that the proposed methods achieved a reduced level of hand tremor compared with the control condition.
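Method (1), direct amplification of the tremor, can be illustrated with a minimal sketch. The slow voluntary component is approximated here by the mean of the recent samples (an assumed filter choice; the abstract does not specify one), and deviations from it are scaled before being rendered on the virtual device:

```python
def amplify_tremor(samples, gain):
    """Emphasize hand tremor for VR display.

    Approximates the voluntary hand position by the mean of the
    sample window and scales the deviations (the tremor) by `gain`.
    The choice of the mean as the low-frequency reference is an
    illustrative assumption, not the paper's stated filter.
    """
    mean = sum(samples) / len(samples)
    return [mean + gain * (s - mean) for s in samples]
```

A gain of 1.0 reproduces the input; larger gains exaggerate only the deviations around the mean, leaving the average position unchanged.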

컨볼루션 신경망 기반의 차량 전면부 검출 시스템 (Convolutional Neural Network-based System for Vehicle Front-Side Detection)

  • 박용규;박제강;온한익;강동중
    • 제어로봇시스템학회논문지
    • /
    • Vol. 21, No. 11
    • /
    • pp.1008-1016
    • /
    • 2015
  • This paper proposes a method for detecting the front side of vehicles that can find the car side containing the license plate even against complicated, cluttered backgrounds. A convolutional neural network (CNN) is used to solve the detection problem as a unified framework combining feature detection, classification, search, and localization estimation, improving the reliability of the system while keeping it simple to use. The proposed CNN structure avoids a sliding-window search for vehicle locations and reduces the computing time to achieve real-time processing. Multiple responses of the network for a vehicle position are further processed by a weighted clustering and probabilistic threshold decision method. Experiments using real images of parking lots show the reliability of the method.
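The weighted clustering of multiple network responses can be sketched as follows. The `radius` and `min_conf` parameters are hypothetical, standing in for the paper's unpublished clustering rule and probabilistic threshold:

```python
def cluster_detections(responses, radius, min_conf):
    """Merge multiple CNN responses into vehicle positions.

    `responses` is a list of (x, y, confidence) tuples. A response
    within `radius` of a cluster's confidence-weighted centroid is
    merged into it; clusters whose summed confidence falls below
    `min_conf` are rejected as spurious.
    """
    clusters = []  # each cluster: [sum(w*x), sum(w*y), sum(w)]
    for x, y, w in responses:
        for c in clusters:
            cx, cy = c[0] / c[2], c[1] / c[2]
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                c[0] += w * x
                c[1] += w * y
                c[2] += w
                break
        else:
            clusters.append([w * x, w * y, w])
    return [(c[0] / c[2], c[1] / c[2]) for c in clusters if c[2] >= min_conf]
```

Two strong nearby responses collapse to one weighted position, while an isolated low-confidence response is discarded by the threshold.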

자동차 안전운전 보조 시스템에 응용할 수 있는 카메라 캘리브레이션 방법 (Camera Calibration Method for an Automotive Safety Driving System)

  • 박종섭;김기석;노수장;조재수
    • 제어로봇시스템학회논문지
    • /
    • Vol. 21, No. 7
    • /
    • pp.621-626
    • /
    • 2015
  • This paper presents a camera calibration method for the lane detection and inter-vehicle distance estimation system of an automotive safety driving system. To implement lane detection and vision-based inter-vehicle distance estimation on embedded navigation or black-box systems, the computation time and algorithm complexity must be considered. The camera calibration process estimates the horizon, the position of the car's hood, and the lane width in order to extract a region of interest (ROI) from the input image sequences. The precision of the calibration method is critical to the lane detection and inter-vehicle distance estimation. The proposed calibration method consists of three main steps: 1) determining the horizon area; 2) estimating the car's hood area; and 3) estimating the initial lane width. Various experimental results show the effectiveness of the proposed method.
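For context on why the horizon estimate matters, the standard flat-road pinhole model relates an image row below the horizon to the ground distance. This is textbook geometry, not necessarily the paper's exact distance formula:

```python
def ground_distance(row, horizon_row, focal_px, cam_height_m):
    """Distance to a road point from its image row.

    Assuming a flat road and a pinhole camera with a calibrated
    horizon line, a ground point projected `row - horizon_row`
    pixels below the horizon lies at distance f * H / dy, where
    f is the focal length in pixels and H the camera height in
    meters. Illustrative of the flat-road model only.
    """
    dy = row - horizon_row
    if dy <= 0:
        raise ValueError("row must lie below the horizon")
    return focal_px * cam_height_m / dy
```

The sensitivity of this formula to the horizon row is exactly why the calibration precision is emphasized in the abstract: a few pixels of horizon error shift every distance estimate.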

GCP Placement Methods for Improving the Accuracy of Shoreline Extraction in Coastal Video Monitoring

  • Changyul Lee;Kideok Do;Inho Kim;Sungyeol Chang
    • 한국해양공학회지
    • /
    • Vol. 38, No. 4
    • /
    • pp.174-186
    • /
    • 2024
  • In coastal video monitoring, the direct linear transform (DLT) method with ground control points (GCPs) is commonly used for geo-rectification. However, current practices often overlook the impact of GCP quantity, arrangement, and the geographical characteristics of beaches. To address this, we designed scenarios at Chuam Beach to evaluate how factors such as the distance from the camera to GCPs, the number of GCPs, and the height of each point affect the DLT method. Accuracy was assessed by calculating the root mean square error of the distance errors between the actual GCP coordinates and the image coordinates for each setting. This analysis aims to propose an optimal GCP placement method. Our results show that placing GCPs within 200 m of the camera ensures high accuracy with few points, whereas positioning them at strategic heights enhances shoreline extraction. However, since only fixed cameras were used in this study, factors like varying heights, orientations, and resolutions could not be considered. Based on data from a single location, we propose an optimal method for GCP placement that takes into account distance, number, and height using the DLT method.
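The DLT geo-rectification and RMSE accuracy metric described above can be sketched for the planar (8-parameter) case. This is the standard textbook formulation, not the authors' code, which may use the full 11-parameter 3D DLT since GCP heights are considered:

```python
import numpy as np

def dlt_fit(world, image):
    """Fit the planar DLT (homography) from GCP pairs.

    `world` and `image` are sequences of (X, Y) ground and (u, v)
    pixel coordinates, at least 4 pairs with no 3 collinear. The
    overdetermined linear system is solved by least squares.
    """
    A, b = [], []
    for (X, Y), (u, v) in zip(world, image):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y]); b.append(u)
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y]); b.append(v)
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def gcp_rmse(H, world, image):
    """Root mean square pixel error, the accuracy metric in the paper."""
    world = np.asarray(world, float)
    image = np.asarray(image, float)
    pts = np.column_stack([world, np.ones(len(world))]) @ H.T
    proj = pts[:, :2] / pts[:, 2:]
    return float(np.sqrt(np.mean(np.sum((proj - image) ** 2, axis=1))))
```

Fitting on exact correspondences yields an RMSE near zero; with real GCP surveys, the RMSE quantifies how the distance, number, and height of the points affect the rectification.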

헥사로터형 헬리콥터의 동역학 모델기반 비행제어 (Dynamic Modeling based Flight Control of Hexa-Rotor Helicopter System)

  • 한재균;진태석
    • 한국지능시스템학회논문지
    • /
    • Vol. 25, No. 4
    • /
    • pp.398-404
    • /
    • 2015
  • This paper presents the design and performance of a multi-rotor helicopter for autonomous flight in a Bluetooth environment, based on an inertial measurement unit (IMU). Although a variety of multi-rotor research has been conducted, gimbal-equipped hexa-rotor helicopters have recently attracted attention for various service applications. This paper therefore introduces the hardware and software of a compact hexa-rotor helicopter for autonomous flight that can carry out research, rescue, and monitoring activities without external support systems such as a ground-based remote-control PC, a high-performance remote controller, or a vision system. The structure of the hexa-rotor helicopter, the hardware configuration of the IMU, and the mathematical modeling and simulation results are presented. To implement the IMU, an MCU (ARM Cortex) board was mounted to control the rotation of each rotor according to the IMU input signals. The performance of the system was verified through simulations and experiments.
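The mathematical modeling mentioned above typically starts from a rotor-to-wrench mixing matrix. The sketch below assumes a flat hexarotor with arms every 60 degrees and alternating rotor spin directions; the geometry and coefficient names are illustrative assumptions, not taken from the paper:

```python
import math

def hexa_mixing_matrix(arm_len, k_thrust, k_drag):
    """Map six squared rotor speeds to (thrust, roll, pitch, yaw).

    Rotors sit every 60 degrees around the body; alternating spin
    directions provide the yaw moment via rotor drag. Row order:
    total thrust, roll torque, pitch torque, yaw torque.
    """
    rows_T, rows_R, rows_P, rows_Y = [], [], [], []
    for i in range(6):
        ang = math.radians(60 * i)
        spin = 1 if i % 2 == 0 else -1  # alternating CW/CCW rotors
        rows_T.append(k_thrust)
        rows_R.append(k_thrust * arm_len * math.sin(ang))
        rows_P.append(-k_thrust * arm_len * math.cos(ang))
        rows_Y.append(spin * k_drag)
    return [rows_T, rows_R, rows_P, rows_Y]
```

With equal rotor commands the roll, pitch, and yaw rows sum to zero and only collective thrust remains, which is the hover condition the flight controller regulates around.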

손 모양 인식을 이용한 모바일 로봇제어 (Mobile Robot Control using Hand Shape Recognition)

  • 김영래;김은이;장재식;박세현
    • 전자공학회논문지CI
    • /
    • Vol. 45, No. 4
    • /
    • pp.34-40
    • /
    • 2008
  • This paper proposes a vision-based mobile robot control system using hand shape recognition. Recognizing hand shapes requires accurately extracting and tracking the hand contour from a moving camera. To this end, we developed a tracking method based on an active contour model (ACM) combined with mean shift, which is robust to the initial contour position and boundary conditions and can accurately track fast-moving objects. The proposed system consists of four modules: a hand detector, a hand tracker, a hand shape recognizer, and a robot controller. The hand detector extracts the hand shape from skin-color regions in the image, after which the ACM with mean shift accurately tracks the hand region. Finally, hand shapes are recognized using Hu moments. To evaluate the proposed system, experiments were conducted on the biped walking robot RCB-1. The experimental results demonstrate the effectiveness of the proposed system.
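The Hu-moment recognition step can be illustrated by computing the first Hu invariant of a binary hand silhouette. This pure-Python sketch covers only the first of the seven invariants:

```python
def hu_first_invariant(mask):
    """First Hu moment (eta20 + eta02) of a binary silhouette.

    `mask` is a 2D list of 0/1 values. The first Hu invariant is
    translation- and scale-invariant, so silhouettes of the same
    hand shape at different positions and sizes score similarly;
    a classifier would compare such invariants across shapes.
    """
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    m00 = len(pts)  # zeroth moment: silhouette area
    cx = sum(x for x, _ in pts) / m00
    cy = sum(y for _, y in pts) / m00
    mu20 = sum((x - cx) ** 2 for x, _ in pts)
    mu02 = sum((y - cy) ** 2 for _, y in pts)
    # normalized central moments: eta_pq = mu_pq / m00**2 for p + q == 2
    return (mu20 + mu02) / m00 ** 2
```

In practice a library routine (e.g. OpenCV's `cv2.HuMoments`) would supply all seven invariants; this sketch only shows the underlying moment arithmetic.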

Performance of AMI-CORBA for Field Robot Application

  • Syahroni Nanang;Choi Jae-Weon
    • 한국정밀공학회:학술대회논문집
    • /
    • 한국정밀공학회 2005년도 추계학술대회 논문집
    • /
    • pp.384-389
    • /
    • 2005
  • The objective of this project is to develop a cooperative Field Robot (FR), using a customized Open Control Platform (OCP) for the design and development process. An OCP is a CORBA-based solution for networked control systems that facilitates transitioning control designs to embedded targets. To achieve a cooperative surveillance system, two FRs exchange navigation messages (GPS and sensor data) over CORBA event-channel communication, while graphical information from an IR night-vision camera is distributed using CORBA Asynchronous Method Invocation (AMI). The QoS features of AMI, which provide an additional delivery method for distributing IR camera images over the network, are evaluated in this experiment. This paper also presents an empirical performance evaluation in which variable chunk sizes were compared against the number of clients and the message latency; some of the measurement data are summarized as follows. In the AMI buffer-size measurement, changing the chunk size significantly changes the message latency according to the frame size. Smaller frame sizes of 256 to 512 bytes are more efficient for message sizes below 2 MB, but for large messages a bigger frame size performs better on average. The same experiment was also run with 512-byte to 2-MB frames and 2 to 5 destinations. For message sizes larger than 2 MB, AMI is still able to meet the requirement for more than 5 clients simultaneously.

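The chunked ("frame"-based) delivery that the experiment varies from 256 bytes up to 2 MB can be sketched as simple payload framing; the CORBA AMI transport itself is omitted:

```python
def chunk_message(payload, chunk_size):
    """Split an image payload into fixed-size chunks.

    Each chunk would be handed to an asynchronous (AMI) call; the
    experiment in the paper measures how this chunk size trades off
    against message latency and the number of client destinations.
    """
    return [payload[i:i + chunk_size] for i in range(0, len(payload), chunk_size)]
```

The last chunk carries the remainder, so the receiver reassembles the message by simple concatenation in arrival order (ordering must be preserved or tagged by the transport).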

자율주행 제어를 위한 향상된 주변환경 인식 알고리즘 (Improved Environment Recognition Algorithms for Autonomous Vehicle Control)

  • 배인환;김영후;김태경;오민호;주현수;김슬기;신관준;윤선재;이채진;임용섭;최경호
    • 자동차안전학회지
    • /
    • Vol. 11, No. 2
    • /
    • pp.35-43
    • /
    • 2019
  • This paper describes improved environment recognition algorithms using sensors such as LiDAR and cameras, together with an integrated control algorithm for an autonomous vehicle. The integrated algorithm is based on a C++ environment and supports the stability of the overall driving control algorithms. For the improved vision algorithms, lane tracing and traffic sign recognition are operated mainly with three cameras. Two algorithms were developed for lane tracing, Improved Lane Tracing (ILT) and Histogram Extension (HIX); the two independent algorithms were combined into one, Enhanced Lane Tracing with Histogram Extension (ELIX). For enhanced traffic sign recognition, an integrated Mutual Validation Procedure (MVP) using three algorithms (Cascade, reinforced DSIFT-SVM, and YOLO) was developed. Comparison of their results shows convincingly that the precision of traffic sign recognition is substantially increased. With the LiDAR sensor, the focus was on static and dynamic obstacle detection and obstacle avoidance algorithms. The proposed environment recognition algorithms thus achieve higher accuracy and faster processing speed than the previous ones. Moreover, by optimizing the integrated control algorithm, the memory issue that caused irregular system shutdowns was prevented, and the maneuvering stability of the autonomous vehicle in severe environments was enhanced.
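The Mutual Validation Procedure fusing the three detectors can be approximated by a majority vote. The paper's actual fusion rule is not given in the abstract, so this stand-in simply treats full disagreement as rejection:

```python
from collections import Counter

def mutual_validation(predictions):
    """Fuse traffic-sign labels from several detectors.

    A majority-vote stand-in for the paper's Mutual Validation
    Procedure (MVP) over Cascade, reinforced DSIFT-SVM, and YOLO
    outputs: the most common label wins, and a tie (no consensus)
    returns None so the detection can be rejected.
    """
    counts = Counter(predictions).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # detectors disagree with no majority
    return counts[0][0]
```

Cross-validating independent detectors this way trades a little recall (rejected ties) for the precision gain the abstract reports.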

Development of Pose-Invariant Face Recognition System for Mobile Robot Applications

  • Lee, Tai-Gun;Park, Sung-Kee;Kim, Mun-Sang;Park, Mig-Non
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 2003년도 ICCAS
    • /
    • pp.783-788
    • /
    • 2003
  • In this paper, we present a new approach to detecting and recognizing human faces in images from a vision camera mounted on a mobile robot platform. Because the camera platform is mobile, the obtained facial images are small and vary in pose, so the algorithm must cope with these constraints while detecting and recognizing faces in nearly real time. The detection step uses a coarse-to-fine strategy. First, a region boundary including the face is roughly located by dual ellipse templates of facial color, and within this region the locations of the three main facial features, the two eyes and the mouth, are estimated. To this end, simplified facial feature maps using characteristic chrominance are computed, and candidate pixels are segmented into eye or mouth pixel groups. These candidate facial features are then verified by checking whether the length and orientation of the feature pairs are suitable for face geometry. In the recognition step, a pseudo-convex-hull area of the gray face image is defined that includes the feature triangle connecting the two eyes and the mouth. A random lattice line set is composed and laid over this convex-hull area, and the 2D appearance of the area is thereby represented. Through these procedures, facial information for the detected face is obtained, and the face DB images are processed in the same way for each person class. Based on the facial information of these areas, a distance measure of the match between lattice lines is calculated, and the face image is recognized using this measure as a classifier. The proposed detection and recognition algorithms overcome the constraints of a previous approach [15], make real-time face detection and recognition possible, and guarantee correct recognition regardless of some pose variation of the face. Their usefulness in mobile robot applications is demonstrated.

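The face-geometry verification step in the detection stage described above can be sketched as follows; the mouth-to-eye distance ratio range and the tilt limit are hypothetical thresholds, not the paper's values:

```python
import math

def plausible_face_geometry(left_eye, right_eye, mouth,
                            ratio_range=(0.6, 1.6), max_tilt_deg=30.0):
    """Check that candidate eye/mouth points form a plausible face.

    Mirrors the verification idea in the paper: feature pairs must
    have suitable length and orientation. The mouth must lie at a
    reasonable distance from the eye midpoint relative to the eye
    separation, and the eye line must not be overly tilted.
    """
    ex, ey = right_eye[0] - left_eye[0], right_eye[1] - left_eye[1]
    eye_dist = math.hypot(ex, ey)
    if eye_dist == 0:
        return False
    tilt = abs(math.degrees(math.atan2(ey, ex)))
    mid = ((left_eye[0] + right_eye[0]) / 2, (left_eye[1] + right_eye[1]) / 2)
    mouth_dist = math.hypot(mouth[0] - mid[0], mouth[1] - mid[1])
    ratio = mouth_dist / eye_dist
    return tilt <= max_tilt_deg and ratio_range[0] <= ratio <= ratio_range[1]
```

Candidate eye/mouth triples that fail this cheap geometric test are discarded before the more expensive recognition step, which is the point of the coarse-to-fine strategy.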