• Title/Summary/Keyword: Camera Electronics Unit

A MNN(Modular Neural Network) for Robot Endeffector Recognition (로봇 Endeffector 인식을 위한 모듈라 신경회로망)

  • 김영부;박동선
    • Proceedings of the IEEK Conference / 1999.06a / pp.496-499 / 1999
  • This paper describes a modular neural network (MNN) for a vision system that tracks a given object using a sequence of images from a camera unit. The MNN is used to precisely recognize the given robot endeffector and to minimize the processing time. Since the robot endeffector can be viewed in many different shapes in 3-D space, an MNN structure, which contains a set of feedforward neural networks, can be more attractive in recognizing the given object. Each single neural network learns the endeffector with a cluster of training patterns. The training patterns for a neural network share similar characteristics so that they can be easily trained. The trained MNN is less sensitive to noise and shows better performance in recognizing the endeffector. The recognition rate of the MNN is enhanced by 14% over a single neural network. A vision system with the MNN can precisely recognize the endeffector and place it at the center of a display for a remote operator.
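
The cluster-per-module idea can be sketched as follows. The toy data, the module form (a single logistic unit per cluster), and the nearest-centroid routing are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two visually distinct "views" (clusters) of the object versus
# background. All names, shapes, and labels here are illustrative assumptions.
X = rng.normal(size=(200, 4))
centroids = np.array([[2.0, 2.0, 0.0, 0.0], [-2.0, -2.0, 0.0, 0.0]])
view = rng.integers(0, 2, size=200)
X += centroids[view]
y = (X[:, 0] * np.sign(centroids[view, 0]) > 1.5).astype(float)  # toy label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_module(Xc, yc, epochs=200, lr=0.1):
    """One simple feedforward module (here a single logistic unit) per cluster."""
    w = np.zeros(Xc.shape[1]); b = 0.0
    for _ in range(epochs):
        p = sigmoid(Xc @ w + b)
        g = p - yc
        w -= lr * Xc.T @ g / len(yc)
        b -= lr * g.mean()
    return w, b

# Each module learns only the training patterns of its own cluster,
# which share similar characteristics and are therefore easy to train.
modules = [train_module(X[view == k], y[view == k]) for k in range(2)]

def predict(x):
    # Route the input to the module of its nearest cluster centroid
    k = np.argmin(np.linalg.norm(centroids - x, axis=1))
    w, b = modules[k]
    return sigmoid(x @ w + b) > 0.5
```

Each module only ever sees one homogeneous cluster, which is the property the abstract credits for easier training and better noise robustness.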

Development of Visual Odometry Estimation for an Underwater Robot Navigation System

  • Wongsuwan, Kandith;Sukvichai, Kanjanapan
    • IEIE Transactions on Smart Processing and Computing / v.4 no.4 / pp.216-223 / 2015
  • The autonomous underwater vehicle (AUV) is being widely researched in order to achieve superior performance when working in hazardous environments. This research focuses on using image processing techniques to estimate the AUV's ego-motion and changes in orientation, based on image frames captured at different times from a single high-definition web camera attached to the bottom of the AUV. The visual odometry application is integrated with other sensors. An inertial measurement unit (IMU) sensor is used to select the correct solution among those satisfying the homography motion equation. A pressure sensor is used to resolve image scale ambiguity. Uncertainty estimation is computed to correct drift that occurs in the system, using a Jacobian method, singular value decomposition, and backward and forward error propagation.
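
The homography between two frames of the (roughly planar) sea floor is the core quantity such a pipeline estimates. A minimal sketch of the standard direct linear transform (DLT), solved by SVD as the abstract mentions, is below; the point correspondences and the planar motion are synthetic, and Hartley normalization is omitted for brevity:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src by the direct
    linear transform: stack two linear constraints per correspondence
    and take the SVD null vector. src/dst are (N, 2) matched pixels."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the projective scale

# Synthetic check: points moved by a known planar motion (an assumption
# standing in for real tracked floor features)
H_true = np.array([[0.9, 0.1, 5.0], [-0.1, 0.9, -3.0], [0.0, 0.0, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 100], [0, 100], [50, 25]], float)
src_h = np.hstack([src, np.ones((5, 1))])
dst_h = src_h @ H_true.T
dst = dst_h[:, :2] / dst_h[:, 2:]
H_est = homography_dlt(src, dst)
```

Decomposing such a homography into rotation and translation yields multiple candidate solutions, which is where the IMU reading disambiguates, and the pressure-derived depth fixes the metric scale.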

An Embedded Solution for Fast Navigation and Precise Positioning of Indoor Mobile Robots by Floor Features (바닥 특징점을 사용하는 실내용 정밀 고속 자율 주행 로봇을 위한 싱글보드 컴퓨터 솔루션)

  • Kim, Yong Nyeon;Suh, Il Hong
    • The Journal of Korea Robotics Society / v.14 no.4 / pp.293-300 / 2019
  • In this paper, an embedded solution for fast navigation and precise positioning of mobile robots by floor features is introduced. Most navigation systems tend to require a high-performance computing unit and high-quality sensor data. They can produce highly accurate navigation, but have limited application due to their high cost. The introduced navigation system is designed to be a low-cost solution for a wide range of applications such as toys, mobile service robots, and education. The key design ideas of the system are a simple localization approach using line features of the floor and a delayed localization strategy using a topological map. It differs from typical navigation approaches, which usually use the Simultaneous Localization and Mapping (SLAM) technique with high-latency localization. This navigation system is implemented on a Raspberry Pi B+ single-board computer, which has a 1.4 GHz processor, and the Redone mobile robot, which has a maximum speed of 1.1 m/s.

Development Small Size RGB Sensor for Providing Long Detecting Range (원거리 검출범위를 제공하는 소형 RGB 센서 개발)

  • Seo, Jae Yong;Lee, Si Hyun
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.12 / pp.174-182 / 2015
  • In this paper, we developed a small RGB sensor that recognizes colors at long range using a low-cost color sensor. A camera lens was used as the light-receiving portion of the sensor for long-distance recognition, and the illuminating unit increased the strength of the light by using a high-power white LED and a lens mounted on a reflector. The RGB color recognition algorithm consists of a learning process and a real-time recognition process. We obtain normalized RGB color reference data in the learning process using specimens painted with the target colors, and classify the three colors using the Mahalanobis distance in the recognition process. We applied the developed RGB color recognition sensor to a prototype of a part classification system and evaluated its performance.
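
The learn-then-classify scheme can be sketched as follows. The class names, reference samples, and threshold-free nearest-class rule are illustrative assumptions, not the paper's calibration data:

```python
import numpy as np

# Hypothetical normalized-RGB reference samples, standing in for readings
# taken from painted specimens during the learning process.
rng = np.random.default_rng(1)
classes = {
    "red":   rng.normal([0.7, 0.2, 0.1], 0.02, size=(50, 3)),
    "green": rng.normal([0.2, 0.6, 0.2], 0.02, size=(50, 3)),
    "blue":  rng.normal([0.1, 0.2, 0.7], 0.02, size=(50, 3)),
}

# Learning process: per-class mean and inverse covariance
refs = {name: (s.mean(axis=0), np.linalg.inv(np.cov(s, rowvar=False)))
        for name, s in classes.items()}

def classify(rgb):
    """Recognition process: pick the class with the smallest
    Mahalanobis distance d^2 = (x - mu)^T S^-1 (x - mu)."""
    s = rgb / rgb.sum()                 # normalize RGB to remove intensity
    def mdist(item):
        mu, icov = item[1]
        d = s - mu
        return d @ icov @ d
    return min(refs.items(), key=mdist)[0]
```

Normalizing the RGB triple before computing the distance makes the decision depend on chromaticity rather than on how strongly the LED illuminates the part.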

SSD-based Fire Recognition and Notification System Linked with Power Line Communication (유도형 전력선 통신과 연동된 SSD 기반 화재인식 및 알림 시스템)

  • Yang, Seung-Ho;Sohn, Kyung-Rak;Jeong, Jae-Hwan;Kim, Hyun-Sik
    • Journal of IKEEE / v.23 no.3 / pp.777-784 / 2019
  • An early fire-recognition and automatic notification system is required because damage can be minimized if a fire is precisely detected as soon as it occurs in places where people are rarely present, or in mountainous areas. In this study, we developed a Raspberry Pi-based fire recognition system using the Faster region-based convolutional neural network (Faster R-CNN) and the single shot multibox detector (SSD), and demonstrated a fire alarm system that works with power line communication. Image recognition was performed with the Pi camera of the Raspberry Pi, and the detected fire image was transmitted to a monitoring PC through an inductive power line communication network. The frame rate for each learning model was 0.05 fps for Faster R-CNN and 1.4 fps for SSD; SSD was 28 times faster than Faster R-CNN.

Sampling-based Control of SAR System Mounted on A Simple Manipulator (간단한 기구부와 결합한 공간증강현실 시스템의 샘플 기반 제어 방법)

  • Lee, Ahyun;Lee, Joo-Ho;Lee, Joo-Haeng
    • Korean Journal of Computational Design and Engineering / v.19 no.4 / pp.356-367 / 2014
  • A robotic spatial augmented reality (RSAR) system, which combines robotic components with a projector-based AR technique, is unique in its ability to expand the user interaction area by dynamically changing the position and orientation of a projector-camera unit (PCU). For a moving PCU mounted on a conventional robotic device, we can compute its extrinsic parameters using a robot kinematics method, assuming the link and joint geometry is available. In an RSAR system based on a user-created robot (UCR), however, it is difficult to calibrate or measure the geometric configuration, which limits the application of a conventional kinematics method. In this paper, we propose a data-driven kinematics control method for a UCR-based RSAR system. The proposed method utilizes a pre-sampled data set of camera calibrations acquired at a sufficient number of kinematic configurations in fixed joint domains. The sampled set is then compactly represented as a set of B-spline surfaces. The proposed method has two merits. First, it does not require any kinematics model such as link lengths or joint orientations. Second, the computation is simple, since it just evaluates a few polynomials rather than relying on Jacobian computation. We describe the proposed method and demonstrate the results for an experimental RSAR system with a PCU on a simple pan-tilt arm.

An Efficient Pedestrian Recognition Method based on PCA Reconstruction and HOG Feature Descriptor (PCA 복원과 HOG 특징 기술자 기반의 효율적인 보행자 인식 방법)

  • Kim, Cheol-Mun;Baek, Yeul-Min;Kim, Whoi-Yul
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.10 / pp.162-170 / 2013
  • In recent years, interest in and the need for the Pedestrian Protection System (PPS), which is mounted on a vehicle to improve traffic safety, have been increasing. In this paper, we propose a pedestrian candidate window extraction method and a unit-cell-histogram-based HOG descriptor calculation method. At the pedestrian candidate window extraction stage, the brightness ratio of the pedestrian and the surrounding region, vertical edge projection, an edge factor, and a PCA reconstruction image are used. Dalal's HOG requires pixel-based histogram calculation with Gaussian weights and trilinear interpolation on overlapping blocks, but our method applies Gaussian down-weighting and computes histograms on a per-cell basis; each histogram is then combined with those of adjacent cells, so our method can be calculated faster than Dalal's. Our PCA-reconstruction-error-based candidate window extraction method efficiently rejects the background based on the difference between the pedestrian's head and shoulder area. The proposed method improves detection speed compared to conventional HOG using only the image, without any prior information from camera calibration or a depth map obtained from stereo cameras.
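
The per-cell histogram idea (as opposed to Dalal's per-pixel interpolation over overlapping blocks) can be sketched as follows; the cell size, bin count, and synthetic test image are illustrative assumptions:

```python
import numpy as np

def cell_hog(image, cell=8, bins=9):
    """Unsigned-gradient orientation histograms accumulated once per cell,
    with no per-pixel trilinear interpolation over overlapping blocks."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0           # unsigned, [0, 180)
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    h, w = image.shape
    hist = np.zeros((h // cell, w // cell, bins))
    for i in range(h // cell):
        for j in range(w // cell):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            b = bin_idx[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            for k in range(bins):
                hist[i, j, k] = m[b == k].sum()            # magnitude-weighted
    return hist

# A vertical edge produces a horizontal gradient, i.e. orientation near 0 deg
img = np.zeros((16, 16))
img[:, 8:] = 255.0
H = cell_hog(img)
```

Because each pixel votes into exactly one cell histogram, neighboring cells can later be concatenated into block descriptors without recomputing anything, which is the source of the speedup the abstract claims.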

Preliminary Design of Electronic System for the Optical Payload

  • Kong Jong-Pil;Heo Haeng-Pal;Kim YoungSun;Park Jong-Euk;Chang Young-Jun
    • Proceedings of the KSRS Conference / 2005.10a / pp.637-640 / 2005
  • In the development of an electronic system for an optical payload comprising mainly an EOS (Electro-Optical Sub-system) and a PDTS (Payload Data Transmission Sub-system), many aspects should be investigated and discussed for easy implementation, for higher reliability of operation, and for effectiveness in cost, size, and weight, as well as for a secure interface with components of the satellite bus. As important aspects, the interfaces between the satellite bus and the payload, and some design features of the CEU (Camera Electronics Unit) inside the payload, are described in this paper. The interfaces between the satellite bus and the payload depend considerably on whether the payload carries the PMU (Payload Management Unit), which functions as the main controller of the payload. With the PMU inside the payload, EOS and PDTS control is performed through the PMU, keeping the interfaces to the fewest control signals and primary power lines, while EOS and PDTS control is performed directly by the satellite bus components, using relatively many control signals, when no PMU exists inside the payload. For the CEU design, the output channel configurations of the panchromatic and multi-spectral bands, including the video image data interface between the EOS and PDTS, are described conceptually. The timing information control, which is also important and necessary to interpret the received image data, is described.

Data-Driven Kinematic Control for Robotic Spatial Augmented Reality System with Loose Kinematic Specifications

  • Lee, Ahyun;Lee, Joo-Haeng;Kim, Jaehong
    • ETRI Journal / v.38 no.2 / pp.337-346 / 2016
  • We propose a data-driven kinematic control method for a robotic spatial augmented reality (RSAR) system. We assume a scenario where a robotic device and a projector-camera unit (PCU) are assembled in an ad hoc manner with loose kinematic specifications, which hinders the application of a conventional kinematic control method based on exact link and joint specifications. In the proposed method, the kinematic relation between the PCU and the joints is represented as a set of B-spline surfaces based on sample data, rather than analytic or differential equations. The sampling process, which automatically records the values of the joint angles and the corresponding external parameters of the PCU, is performed off-line when the RSAR system is installed. In the on-line process, an external parameter of the PCU at a certain joint configuration, which is directly readable from the motors, can be computed by evaluating the pre-built B-spline surfaces. We provide details of the proposed method and validate the model through a comparison with an analytic RSAR model with synthetic noise to simulate assembly errors.
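
The off-line sampling and on-line evaluation can be sketched with a tabulated B-spline surface. The joint ranges, grid resolution, and the stand-in parameter function below are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Off-line process: record one external PCU parameter on a grid of joint
# angles. Here a synthetic pan/tilt -> parameter function stands in for
# the camera-calibration values recorded at each sampled configuration.
pan = np.linspace(-30.0, 30.0, 13)    # degrees, fixed joint domain
tilt = np.linspace(-15.0, 15.0, 7)
P, T = np.meshgrid(pan, tilt, indexing="ij")
f = lambda p, t: 0.5 * p + 0.1 * t + 0.002 * p * t   # stand-in parameter
samples = f(P, T)

# Compact representation of the sampled set as a B-spline surface
surf = RectBivariateSpline(pan, tilt, samples)

# On-line process: evaluate at a joint configuration read from the motors;
# no link lengths or joint axes are needed, just polynomial evaluation.
x_at = surf.ev(12.3, -4.5)
```

One such surface is built per external parameter, so the on-line cost is a handful of polynomial evaluations instead of a Jacobian-based kinematic solve.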

Position Tracking of Underwater Robot for Nuclear Reactor Inspection using Color Information (색상정보를 이용한 원자로 육안검사용 수중로봇의 위치 추적)

  • 조재완;김창회;서용칠;최영수;김승호
    • Proceedings of the IEEK Conference / 2003.07e / pp.2259-2262 / 2003
  • This paper describes the visual tracking procedure of an underwater mobile robot for nuclear reactor vessel inspection, which is required to find foreign objects such as loose parts. The yellowish underwater robot body tends to present a strong contrast to the cold, borated water of the nuclear reactor vessel, which is tinged with indigo by the Cerenkov effect. In this paper, we found and tracked the position of the underwater mobile robot using these two colors, yellow and indigo. The center coordinate extraction procedure is as follows. The first step is to segment the underwater robot body from the indigo background of the cold water. From the RGB color components of the entire monitoring image taken with a color CCD camera, we selected the red color component. In the selected red image, we extracted the position of the underwater mobile robot using the following sequence: binarization, labelling, and centroid extraction. In an experiment carried out at the Youngkwang unit 5 nuclear reactor vessel, we tracked the center position of the underwater robot submerged near the cold leg and hot leg, at a depth of about 10 m.
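
The red-channel selection, binarization, and centroid steps can be sketched as follows; the threshold value and the single-blob simplification (skipping the labelling of multiple components) are assumptions for illustration:

```python
import numpy as np

def track_center(rgb_frame, red_thresh=180):
    """Select the red channel, binarize it, and return the centroid of the
    foreground pixels. A single-blob simplification of the paper's
    binarization / labelling / centroid-extraction pipeline."""
    red = rgb_frame[:, :, 0].astype(float)
    mask = red > red_thresh                    # binarization
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                            # robot not visible
    return xs.mean(), ys.mean()                # centroid (x, y)

# Synthetic frame: an indigo-ish background with a yellow robot blob.
# Yellow is high in both red and green, so it survives the red threshold
# while the bluish water does not.
frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[..., 2] = 120                            # bluish background
frame[40:60, 70:100, 0] = 255                  # robot blob: high red
frame[40:60, 70:100, 1] = 220                  # ... and high green
cx, cy = track_center(frame)
```

The red channel works as the discriminant precisely because the indigo water contributes almost nothing to it, while the yellow hull saturates it.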
