• Title/Abstract/Keywords: Camera Electronics Unit


로봇 Endeffector 인식을 위한 모듈라 신경회로망 (A MNN(Modular Neural Network) for Robot Endeffector Recognition)

  • 김영부;박동선
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 1999년도 하계종합학술대회 논문집
    • /
    • pp.496-499
    • /
    • 1999
  • This paper describes a modular neural network (MNN) for a vision system that tracks a given object using a sequence of images from a camera unit. The MNN is used to precisely recognize the given robot endeffector and to minimize the processing time. Since the robot endeffector can be viewed in many different shapes in 3-D space, an MNN structure, which contains a set of feedforward neural networks, can be more attractive in recognizing the given object. Each single neural network learns the endeffector with a cluster of training patterns. The training patterns for a neural network share similar characteristics, so they can be easily trained. The trained MNN is less sensitive to noise and shows better performance in recognizing the endeffector. The recognition rate of the MNN is enhanced by 14% over a single neural network. A vision system with the MNN can precisely recognize the endeffector and place it at the center of a display for a remote operator.
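The modular structure described in the abstract can be sketched as an ensemble of small feedforward networks, one per viewpoint cluster, with the most confident module deciding the recognition. The network sizes, random weights, and threshold below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

class Module:
    """One feedforward network, trained on a single cluster of similar views."""
    def __init__(self, n_in=64, n_hidden=16):
        self.W1 = rng.normal(size=(n_in, n_hidden)) * 0.1
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(size=(n_hidden, 1)) * 0.1
        self.b2 = np.zeros(1)

    def score(self, x):
        # confidence that x shows the endeffector from this module's cluster
        h = np.tanh(x @ self.W1 + self.b1)
        z = float((h @ self.W2 + self.b2)[0])
        return 1.0 / (1.0 + np.exp(-z))      # sigmoid output in [0, 1]

def mnn_recognize(x, modules, threshold=0.5):
    # the most confident module decides; accept if it clears the threshold
    scores = [m.score(x) for m in modules]
    best = int(np.argmax(scores))
    return best, scores[best] >= threshold

modules = [Module() for _ in range(4)]   # e.g. four viewpoint clusters
x = rng.normal(size=64)                  # feature vector from one camera frame
idx, accepted = mnn_recognize(x, modules)
```

Each module only ever sees training patterns from its own cluster, which is the property the abstract credits for easier training and noise robustness.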


Development of Visual Odometry Estimation for an Underwater Robot Navigation System

  • Wongsuwan, Kandith;Sukvichai, Kanjanapan
    • IEIE Transactions on Smart Processing and Computing
    • /
    • Vol. 4, No. 4
    • /
    • pp.216-223
    • /
    • 2015
  • The autonomous underwater vehicle (AUV) is being widely researched in order to achieve superior performance when working in hazardous environments. This research focuses on using image processing techniques to estimate the AUV's egomotion and changes in orientation, based on image frames captured at different times from a single high-definition web camera attached to the bottom of the AUV. A visual odometry application is integrated with other sensors. An inertial measurement unit (IMU) sensor is used to determine the correct set of answers corresponding to a homography motion equation. A pressure sensor is used to resolve image scale ambiguity. Uncertainty estimation is computed to correct drift that occurs in the system by using a Jacobian method, singular value decomposition, and backward and forward error propagation.
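The pressure sensor's role in resolving scale ambiguity can be illustrated as follows. The homography decomposition itself is omitted; `t_unit` and the depth readings are hypothetical values, not the paper's data:

```python
import numpy as np

def resolve_scale(t_unit, depth_before, depth_after):
    """Recover a metric translation from a unit-norm homography translation.

    t_unit: direction of camera translation (known only up to scale) from
            a homography decomposition, with z pointing toward the seabed.
    The pressure sensor gives absolute depth, so the z component of the
    metric translation must equal the measured depth change.
    """
    dz = depth_after - depth_before          # metric depth change (m)
    if abs(t_unit[2]) < 1e-9:
        raise ValueError("no depth component; scale cannot be resolved here")
    s = dz / t_unit[2]                       # scale factor fixed by pressure
    return s * t_unit                        # metric translation vector

t = np.array([0.6, 0.0, 0.8])                # hypothetical unit direction
t_metric = resolve_scale(t, depth_before=10.0, depth_after=10.4)
```

A purely horizontal motion leaves the scale unresolved by pressure alone, which is one reason the system fuses the IMU and error-propagation steps as well.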

바닥 특징점을 사용하는 실내용 정밀 고속 자율 주행 로봇을 위한 싱글보드 컴퓨터 솔루션 (An Embedded Solution for Fast Navigation and Precise Positioning of Indoor Mobile Robots by Floor Features)

  • 김용년;서일홍
    • 로봇학회논문지
    • /
    • Vol. 14, No. 4
    • /
    • pp.293-300
    • /
    • 2019
  • In this paper, an embedded solution for fast navigation and precise positioning of mobile robots by floor features is introduced. Most navigation systems require a high-performance computing unit and high-quality sensor data. They can achieve high accuracy but have limited application due to their high cost. The introduced navigation system is designed to be a low-cost solution for a wide range of applications such as toys, mobile service robots, and education. The key design ideas of the system are a simple localization approach using line features of the floor and a delayed localization strategy using a topological map. It differs from typical navigation approaches, which usually use the Simultaneous Localization and Mapping (SLAM) technique with high-latency localization. This navigation system is implemented on a single-board Raspberry Pi B+ computer, which has a 1.4 GHz processor, and the Redone mobile robot, which has a maximum speed of 1.1 m/s.

원거리 검출범위를 제공하는 소형 RGB 센서 개발 (Development Small Size RGB Sensor for Providing Long Detecting Range)

  • 서재용;이시현
    • 전자공학회논문지
    • /
    • Vol. 52, No. 12
    • /
    • pp.174-182
    • /
    • 2015
  • In this study, a small RGB sensor capable of long-range recognition was developed using a low-cost color sensor. A camera lens is used in the sensor's receiver for long-range recognition, and a high-power white LED with a reflector-equipped lens is used in the illuminator to increase the illumination intensity. The RGB color recognition algorithm consists of a training process and a real-time recognition process. In the training process, normalized reference data for the RGB colors are acquired using specimens painted with the reference colors; in the recognition process, the three colors are classified using the Mahalanobis distance. The developed RGB color recognition sensor was applied to a parts-sorting prototype to verify its performance.
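The two-stage algorithm (normalized reference training, then minimum-Mahalanobis-distance classification) might be sketched as below. The class names, synthetic training swatches, and covariance regularizer are illustrative assumptions, not the sensor's actual firmware:

```python
import numpy as np

def normalize_rgb(rgb):
    # chromaticity normalization suppresses illumination-intensity changes
    rgb = np.asarray(rgb, dtype=float)
    return rgb / rgb.sum()

class RGBClassifier:
    """Learn per-class mean/covariance from reference swatches, then
    classify a reading by minimum Mahalanobis distance."""
    def fit(self, samples_by_class):
        self.names, self.means, self.inv_covs = list(samples_by_class), [], []
        for name in self.names:
            X = np.array([normalize_rgb(s) for s in samples_by_class[name]])
            self.means.append(X.mean(axis=0))
            # small regularizer keeps the (rank-deficient) covariance invertible
            cov = np.cov(X.T) + 1e-6 * np.eye(3)
            self.inv_covs.append(np.linalg.inv(cov))
        return self

    def predict(self, rgb):
        x = normalize_rgb(rgb)
        d = [float((x - m) @ ic @ (x - m))
             for m, ic in zip(self.means, self.inv_covs)]
        return self.names[int(np.argmin(d))]

rng = np.random.default_rng(1)
train = {  # synthetic stand-ins for readings of the painted specimens
    "red":   [(200, 30, 30) + rng.normal(0, 5, 3) for _ in range(20)],
    "green": [(30, 200, 30) + rng.normal(0, 5, 3) for _ in range(20)],
    "blue":  [(30, 30, 200) + rng.normal(0, 5, 3) for _ in range(20)],
}
clf = RGBClassifier().fit(train)
```

Unlike a plain Euclidean threshold, the Mahalanobis distance weights each color axis by the scatter actually observed during training, which helps when the LED illumination makes some channels noisier than others.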

유도형 전력선 통신과 연동된 SSD 기반 화재인식 및 알림 시스템 (SSD-based Fire Recognition and Notification System Linked with Power Line Communication)

  • 양승호;손경락;정재환;김현식
    • 전기전자학회논문지
    • /
    • Vol. 23, No. 3
    • /
    • pp.777-784
    • /
    • 2019
  • When a fire breaks out in a secluded or mountainous area, damage can be minimized if the situation is grasped accurately and an appropriate initial response is made, so an early fire-recognition system and an automatic notification system are required. In this study, a fire recognition system using Faster R-CNN and SSD (single shot multibox detector), deep learning algorithms for object recognition, was linked with power line communication to demonstrate an automatic notification system, and its applicability to forest-fire monitoring over high-voltage transmission networks was suggested. A Pi camera was installed on a Raspberry Pi equipped with the trained model to perform fire image recognition, and detected fire images were transmitted to a monitoring PC through an inductive power line communication network. The frame rate on the Raspberry Pi was 0.05 fps for Faster R-CNN and 1.4 fps for SSD, making SSD about 28 times faster than Faster R-CNN.
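The 28x figure follows directly from the two reported frame rates; a quick check using only the numbers from the abstract:

```python
# frame rates measured on the Raspberry Pi (values from the abstract)
fps_faster_rcnn = 0.05
fps_ssd = 1.4

speedup = fps_ssd / fps_faster_rcnn    # 1.4 / 0.05 = 28x faster
sec_per_frame_ssd = 1.0 / fps_ssd      # roughly 0.71 s of latency per frame
```

Even the faster SSD model leaves well under 2 fps on this hardware, which is adequate for fire monitoring but far from real-time video rates.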

간단한 기구부와 결합한 공간증강현실 시스템의 샘플 기반 제어 방법 (Sampling-based Control of SAR System Mounted on A Simple Manipulator)

  • 이아현;이주호;이주행
    • 한국CDE학회논문집
    • /
    • Vol. 19, No. 4
    • /
    • pp.356-367
    • /
    • 2014
  • A robotic spatial augmented reality (RSAR) system, which combines robotic components with a projector-based AR technique, is unique in its ability to expand the user interaction area by dynamically changing the position and orientation of a projector-camera unit (PCU). For a moving PCU mounted on a conventional robotic device, we can compute its extrinsic parameters using a robot kinematics method, assuming the link and joint geometry is available. In an RSAR system based on a user-created robot (UCR), however, it is difficult to calibrate or measure the geometric configuration, which limits the application of a conventional kinematics method. In this paper, we propose a data-driven kinematics control method for a UCR-based RSAR system. The proposed method utilizes a pre-sampled set of camera calibrations acquired at sufficiently many kinematic configurations in fixed joint domains. The sampled set is then compactly represented as a set of B-spline surfaces. The proposed method has two merits. First, it does not require any kinematic model such as link lengths or joint orientations. Second, the computation is simple since it just evaluates several polynomials rather than relying on Jacobian computation. We describe the proposed method and demonstrate the results for an experimental RSAR system with a PCU on a simple pan-tilt arm.
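The core idea, replacing kinematic equations with surfaces fitted to pre-sampled calibration data, can be sketched with SciPy's `RectBivariateSpline`. The smooth `true_param` function below stands in for the real off-line calibration samples and is purely hypothetical:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# hypothetical ground truth for one extrinsic parameter (e.g. the camera's
# x-position) as a smooth function of the pan-tilt arm's joint angles
def true_param(pan, tilt):
    return 0.3 * np.cos(pan) * np.cos(tilt)

# off-line: sample the parameter on a grid of joint configurations
# (in the real system these values come from camera calibration)
pan = np.linspace(-1.0, 1.0, 15)
tilt = np.linspace(-0.5, 0.5, 15)
Z = true_param(pan[:, None], tilt[None, :])
surf = RectBivariateSpline(pan, tilt, Z)   # one surface per extrinsic parameter

# on-line: evaluate the surface at the current joint readings --
# plain polynomial evaluation, no link model or Jacobian required
est = surf(0.37, 0.12)[0, 0]
```

One such surface is built per extrinsic parameter; querying all of them at the motors' current angles replaces the forward-kinematics chain entirely.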

PCA 복원과 HOG 특징 기술자 기반의 효율적인 보행자 인식 방법 (An Efficient Pedestrian Recognition Method based on PCA Reconstruction and HOG Feature Descriptor)

  • 김철문;백열민;김회율
    • 전자공학회논문지
    • /
    • Vol. 50, No. 10
    • /
    • pp.162-170
    • /
    • 2013
  • Recently, interest in and demand for pedestrian protection systems (PPS) mounted on vehicles have been increasing in order to improve pedestrian traffic safety. This study proposes a method for extracting candidate windows for pedestrian detection and a cell-based histogram method for computing HOG features. Candidate windows are extracted using a surrounding-brightness ratio check, vertical edge projection, an edge factor, and PCA (Principal Component Analysis) reconstruction images. Whereas Dalal's HOG requires histogram computation with Gaussian weighting and trilinear interpolation for every pixel in each overlapping block, the proposed method computes the Gaussian weighting and histogram once per cell and combines the results with adjacent cells, so it is faster. The proposed candidate-window extraction based on PCA reconstruction error efficiently filters out the background by its difference from the pedestrian's head and shoulder region. The proposed method greatly improves computation speed over conventional HOG using images alone, without camera calibration or distance information from a stereo camera.
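The per-cell speed-up can be sketched as below: orientation histograms are weighted and binned once per cell, and each overlapping block is then just a concatenation of four precomputed cell histograms. The per-cell Gaussian weighting and the candidate-window stage are omitted for brevity, and the cell/bin sizes are the conventional HOG defaults, assumed rather than taken from the paper:

```python
import numpy as np

def cell_hog(image, cell=8, bins=9):
    """Per-cell unsigned-orientation histograms, computed once per cell."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180           # unsigned gradient
    bin_idx = (ang / (180 / bins)).astype(int) % bins
    ch, cw = image.shape[0] // cell, image.shape[1] // cell
    hist = np.zeros((ch, cw, bins))
    for i in range(ch):
        for j in range(cw):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            b = bin_idx[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            hist[i, j] = np.bincount(b.ravel(), weights=m.ravel(),
                                     minlength=bins)
    return hist

def block_features(hist):
    # a 2x2-cell block is the concatenation of four precomputed cell
    # histograms, L2-normalized -- no per-pixel work is repeated per block
    ch, cw, _ = hist.shape
    feats = []
    for i in range(ch - 1):
        for j in range(cw - 1):
            v = hist[i:i+2, j:j+2].ravel()
            feats.append(v / (np.linalg.norm(v) + 1e-6))
    return np.array(feats)

img = np.random.default_rng(2).random((64, 64))
F = block_features(cell_hog(img))          # 7x7 blocks of 4x9 values each
```

In Dalal's formulation each pixel is re-weighted and re-interpolated for every block that covers it; here the pixel work happens exactly once per cell, which is the source of the speed-up the abstract claims.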

Preliminary Design of Electronic System for the Optical Payload

  • Kong Jong-Pil;Heo Haeng-Pal;Kim YoungSun;Park Jong-Euk;Chang Young-Jun
    • 대한원격탐사학회:학술대회논문집
    • /
    • 대한원격탐사학회 2005년도 Proceedings of ISRS 2005
    • /
    • pp.637-640
    • /
    • 2005
  • In the development of an electronic system for an optical payload comprising mainly an EOS (Electro-Optical Sub-system) and a PDTS (Payload Data Transmission Sub-system), many aspects should be investigated and discussed for easy implementation, for higher reliability of operation, and for effectiveness in cost, size, and weight, as well as for a secure interface with the components of a satellite bus. As important aspects, the interfaces between a satellite bus and a payload, and some design features of the CEU (Camera Electronics Unit) inside the payload, are described in this paper. The interfaces between a satellite bus and a payload depend considerably on whether the payload carries a PMU (Payload Management Unit), which functions as the main controller of the payload. With a PMU inside the payload, EOS and PDTS control is performed through the PMU, keeping the interfaces to a minimum of control signals and primary power lines, whereas EOS and PDTS control is performed directly by the satellite bus components using relatively many control signals when no PMU exists inside the payload. For the CEU design, the output channel configurations of the panchromatic and multi-spectral bands, including the video image data interface between the EOS and PDTS, are described conceptually. The timing information control, which is also important and necessary to interpret the received image data, is described.


Data-Driven Kinematic Control for Robotic Spatial Augmented Reality System with Loose Kinematic Specifications

  • Lee, Ahyun;Lee, Joo-Haeng;Kim, Jaehong
    • ETRI Journal
    • /
    • Vol. 38, No. 2
    • /
    • pp.337-346
    • /
    • 2016
  • We propose a data-driven kinematic control method for a robotic spatial augmented reality (RSAR) system. We assume a scenario where a robotic device and a projector-camera unit (PCU) are assembled in an ad hoc manner with loose kinematic specifications, which hinders the application of a conventional kinematic control method based on exact link and joint specifications. In the proposed method, the kinematic relation between a PCU and the joints is represented as a set of B-spline surfaces built from sample data rather than analytic or differential equations. The sampling process, which automatically records the values of the joint angles and the corresponding external parameters of the PCU, is performed as an off-line process when an RSAR system is installed. In the on-line process, an external parameter of the PCU at a certain joint configuration, which is directly readable from the motors, can be computed by evaluating the pre-built B-spline surfaces. We provide details of the proposed method and validate the model through a comparison with an analytic RSAR model with synthetic noise added to simulate assembly errors.

색상정보를 이용한 원자로 육안검사용 수중로봇의 위치 추적 (Position Tracking of Underwater Robot for Nuclear Reactor Inspection using Color Information)

  • 조재완;김창회;서용칠;최영수;김승호
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2003년도 하계종합학술대회 논문집 IV
    • /
    • pp.2259-2262
    • /
    • 2003
  • This paper describes the visual tracking procedure of an underwater mobile robot for nuclear reactor vessel inspection, which is required to find foreign objects such as loose parts. The yellowish body of the underwater robot presents a strong contrast to the boron-solute cold water of the reactor vessel, tinged with indigo by the Cerenkov effect. In this paper, we found and tracked the position of the underwater mobile robot using these two colors, yellow and indigo. The center-coordinate extraction procedure is as follows. The first step is to segment the underwater robot body from the cold water with its indigo background. From the RGB color components of the entire monitoring image taken with a color CCD camera, we selected the red component. In the selected red image, we extracted the position of the underwater mobile robot using the following sequence: binarization, labelling, and centroid extraction. In an experiment carried out at the Youngkwang unit 5 reactor vessel, we tracked the center positions of the underwater robot submerged near the cold-leg and hot-leg ways, at about 10 m depth.
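The red-channel segmentation and centroid extraction can be sketched as below. The threshold and the synthetic frame are illustrative assumptions, and the paper's connected-component labelling step is simplified here to a single dominant blob:

```python
import numpy as np

def track_robot(rgb_frame, threshold=150):
    """Locate the yellowish robot against the indigo water: take the red
    channel (high for yellow, near zero for indigo), binarize it, and
    return the centroid of the foreground pixels."""
    red = rgb_frame[:, :, 0].astype(float)
    mask = red > threshold                  # binarization
    ys, xs = np.nonzero(mask)               # (labelling simplified: one blob)
    if len(xs) == 0:
        return None                         # robot not in view
    return xs.mean(), ys.mean()             # centroid (x, y) in pixels

# synthetic frame: indigo-ish background, yellow square at rows 50-60, cols 30-40
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[:, :, 2] = 120                        # blue-dominant background water
frame[50:61, 30:41] = (230, 220, 40)        # yellow robot body
cx, cy = track_robot(frame)
```

Selecting the red component works here precisely because the two scene colors sit at opposite ends of that channel; the centroid then gives the tracking coordinate fed back to the operator's display.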
