• Title/Summary/Keyword: Camera Electronics Unit

Implementation of Communication Unit for KOMPSAT-II (다목적실용위성 2호기의 통신 부호화기 구현)

  • 이상택;이종태;이상규
    • Proceedings of the IEEK Conference
    • /
    • 2003.11c
    • /
    • pp.378-381
    • /
    • 2003
  • The Channel Coding Unit (CCU) is an integral component of the Payload Data Transmission System (PDTS) for Multi-Spectral Camera (MSC) data. The main functions of the CCU are channel coding and encryption. The CCU has two channels (I & Q) for data processing. Its input is the output of the Data Compression & Storage Unit (DCSU), and its output feeds the QTX, which modulates the data for RF communication. This paper presents an overview of the CCU, a short description of its hardware, and its operation concept.
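
The channel-coding role described above can be illustrated with a small sketch. The abstract does not specify the code actually flown on KOMPSAT-II, so the example assumes a rate-1/2, constraint-length-7 convolutional encoder (a common CCSDS choice) and a simple alternating split of the coded symbols onto the I and Q channels; the polynomials and the I/Q split are illustrative assumptions, not the flight design, and the encryption stage is not shown.

```python
# Illustrative rate-1/2, K=7 convolutional encoder (CCSDS polynomials 171/133 octal)
# followed by a simple alternating split of the coded symbols onto I and Q channels.
# This sketches the kind of processing a channel-coding unit performs; it is NOT the
# actual KOMPSAT-II CCU design (which also includes encryption, not shown here).

G1, G2 = 0o171, 0o133   # generator polynomials, constraint length K = 7
K = 7

def conv_encode(bits):
    """Encode a list of 0/1 bits; returns the rate-1/2 coded symbol stream."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)
        out.append(bin(state & G1).count("1") & 1)  # parity of the G1 taps
        out.append(bin(state & G2).count("1") & 1)  # parity of the G2 taps
    return out

def split_iq(symbols):
    """Alternate coded symbols onto the I and Q channels (illustrative only)."""
    return symbols[0::2], symbols[1::2]

if __name__ == "__main__":
    data = [1, 0, 1, 1, 0, 0, 1, 0]     # stand-in for DCSU output bits
    coded = conv_encode(data)
    i_ch, q_ch = split_iq(coded)        # streams handed to the QTX modulator
    print("coded:", coded)
    print("I:", i_ch, "Q:", q_ch)
```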

Proposal of Camera Gesture Recognition System Using Motion Recognition Algorithm

  • Moon, Yu-Sung;Kim, Jung-Won
    • Journal of IKEEE
    • /
    • v.26 no.1
    • /
    • pp.133-136
    • /
    • 2022
  • This paper concerns motion gesture recognition and proposes the following improvement over the flaws of the current system: a recognition system and algorithm that uses video images of the entire hand and reads its motion to increase recognition accuracy. The proposed system consists of an image capturing unit that acquires images of the area relevant for gesture reading, a motion extraction unit that extracts the motion area from the image, and a hand gesture recognition unit that reads the motion gestures in the extracted area. The proposed motion gesture algorithm achieves a 20% improvement over the current system.
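
A minimal sketch of the three-stage pipeline named in the abstract is shown below, using OpenCV. Frame differencing stands in for the paper's motion extractor, and the recognizer is only a placeholder that reports the largest moving blob; both are assumptions for illustration, not the authors' algorithm.

```python
# Minimal sketch of the pipeline: capture -> motion extraction -> gesture recognition.
# Frame differencing and the largest-blob "recognizer" are illustrative stand-ins.
import cv2

def extract_motion(prev_gray, gray, thresh=25):
    """Return a binary mask of moving pixels between two consecutive frames."""
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return cv2.dilate(mask, None, iterations=2)

def recognize_gesture(mask):
    """Placeholder recognizer: report the bounding box of the largest moving blob."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))

cap = cv2.VideoCapture(0)                      # image capturing unit
ok, frame = cap.read()
if not ok:
    raise SystemExit("no camera available")
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mask = extract_motion(prev_gray, gray)     # motion extraction unit
    box = recognize_gesture(mask)              # hand gesture recognition unit
    if box is not None:
        x, y, w, h = box
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("gesture", frame)
    prev_gray = gray
    if cv2.waitKey(1) & 0xFF == 27:            # Esc to quit
        break
cap.release()
```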

Pose Tracking of Moving Sensor using Monocular Camera and IMU Sensor

  • Jung, Sukwoo;Park, Seho;Lee, KyungTaek
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.8
    • /
    • pp.3011-3024
    • /
    • 2021
  • Pose estimation of a sensor is an important issue in many applications such as robotics, navigation, tracking, and Augmented Reality. This paper proposes a visual-inertial integration system suited to dynamically moving sensors. The orientation estimated from an Inertial Measurement Unit (IMU) is used to calculate the essential matrix based on the intrinsic parameters of the camera. Using epipolar geometry, outliers in the feature point matching are eliminated across the image sequence. The pose of the sensor can then be obtained from the feature point matches. The IMU helps to eliminate erroneous point matches in images of dynamic scenes at an early stage. After the outliers are removed, the remaining feature point matches are used to calculate a precise fundamental matrix, from which the pose of the sensor is finally estimated. The proposed procedure was implemented, tested, and compared with existing methods, and the experimental results show the effectiveness of the proposed technique.
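
The following is a generic sketch of the pipeline the abstract describes: match features between two frames, reject outliers with the epipolar constraint, and recover the relative pose. The paper seeds the essential matrix with the IMU orientation; here RANSAC is used as a stand-in, and the intrinsic matrix K is an assumed placeholder.

```python
# Generic epipolar outlier rejection + relative pose recovery with OpenCV.
# The paper derives the essential matrix from the IMU orientation; RANSAC is a
# stand-in here, and K below is an assumed placeholder intrinsic matrix.
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])            # assumed camera intrinsics

def relative_pose(img1, img2):
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Epipolar-geometry-based outlier rejection (RANSAC on the essential matrix).
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    inliers = mask.ravel() == 1

    # Relative rotation R and (unit-norm) translation t from the surviving matches.
    _, R, t, _ = cv2.recoverPose(E, pts1[inliers], pts2[inliers], K)
    return R, t

if __name__ == "__main__":
    img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input frames
    img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)
    R, t = relative_pose(img1, img2)
    print("R =\n", R, "\nt =\n", t)
```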

Measurement of 3D Spreader Position Information using the CCD Cameras and a Laser Distance Measuring Unit

  • Lee, Jung-Jae;Nam, Gi-Gun;Lee, Bong-Ki;Lee, Jang-Myung
    • Journal of Navigation and Port Research
    • /
    • v.28 no.4
    • /
    • pp.323-331
    • /
    • 2004
  • This paper introduces a novel approach that provides three-dimensional information about the movement of a spreader by using two CCD cameras and a laser distance measuring unit, in order to realize an ALS (Automatic Landing System) for cranes used at a harbor. So far, 2D laser scanner sensors or laser distance measuring units have been used as corner detectors for the geometrical matching between the spreader and a container. Such systems provide only two-dimensional information, which is not enough for an accurate and fast ALS. In addition to this deficiency in performance, the price of such a system is too high for adoption in the ALS. Therefore, to overcome these defects, we propose a novel method to acquire three-dimensional spreader information using two CCD cameras and a laser distance measuring unit. To show the efficiency of the proposed method, real experiments were performed, demonstrating the improvement in distance-measurement accuracy obtained by fusing the sensory information of the CCD cameras and the laser distance measuring unit.
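
A small sketch of the camera/laser fusion idea follows: triangulate a spreader corner from the two CCD cameras, then blend the camera-derived range with the laser distance reading. The projection matrices, the weights, and the simple weighted-average fusion are illustrative assumptions; the paper's actual fusion scheme is not given in the abstract.

```python
# Sketch: stereo triangulation of one spreader corner plus a weighted blend of the
# camera range and the laser range. Projection matrices and weights are assumptions.
import cv2
import numpy as np

# Assumed 3x4 projection matrices of the two calibrated CCD cameras (0.5 m baseline).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

def triangulate_corner(pt1, pt2):
    """Triangulate one corner seen at normalized coords pt1 (camera 1) and pt2 (camera 2)."""
    X = cv2.triangulatePoints(P1, P2,
                              np.float32(pt1).reshape(2, 1),
                              np.float32(pt2).reshape(2, 1))
    return (X[:3] / X[3]).ravel()              # homogeneous -> Euclidean (metres)

def fuse_range(camera_point, laser_range, w_laser=0.8):
    """Rescale the camera point so its range is a weighted blend of camera and laser ranges."""
    cam_range = np.linalg.norm(camera_point)
    fused = w_laser * laser_range + (1.0 - w_laser) * cam_range
    return camera_point * (fused / cam_range)

corner_3d = triangulate_corner((0.20, 0.03), (0.10, 0.03))   # illustrative observations
print(fuse_range(corner_3d, laser_range=5.05))
```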

Real-time Full-view 3D Human Reconstruction using Multiple RGB-D Cameras

  • Yoon, Bumsik;Choi, Kunwoo;Ra, Moonsu;Kim, Whoi-Yul
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.4 no.4
    • /
    • pp.224-230
    • /
    • 2015
  • This manuscript presents a real-time solution for 3D human body reconstruction with multiple RGB-D cameras. The proposed system uses four consumer RGB/Depth (RGB-D) cameras, each located at approximately $90^{\circ}$ from the next camera around a freely moving human body. A single mesh is constructed from the captured point clouds by iteratively removing the estimated overlapping regions from the boundary. A cell-based mesh construction algorithm is developed that recovers the 3D shape under various conditions, considering the direction of the camera and the mesh boundary. The proposed algorithm also allows problematic holes and/or occluded regions to be recovered from another view. Finally, the calibrated RGB data are merged with the constructed mesh so that it can be viewed from an arbitrary direction. The proposed algorithm is implemented with general-purpose computation on the graphics processing unit (GPGPU) for real-time processing, owing to its suitability for parallel processing.
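
A minimal sketch of the first step of such a pipeline is shown below: bringing the point clouds of four calibrated RGB-D cameras into a common world frame before mesh construction. The 4x4 extrinsics are placeholders, and the paper's cell-based meshing, overlap removal, and GPGPU implementation are not reproduced.

```python
# Sketch: merge four per-camera point clouds into one world frame using assumed
# extrinsics (cameras at 0/90/180/270 degrees around the subject). Placeholder only.
import numpy as np

def rigid_transform(points, T):
    """Apply a 4x4 rigid transform T to an (N, 3) point cloud."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homo @ T.T)[:, :3]

def rotation_y(deg):
    """Rotation about the vertical (y) axis, as a 4x4 homogeneous matrix."""
    a = np.radians(deg)
    T = np.eye(4)
    T[0, 0], T[0, 2] = np.cos(a), np.sin(a)
    T[2, 0], T[2, 2] = -np.sin(a), np.cos(a)
    return T

extrinsics = [rotation_y(angle) for angle in (0, 90, 180, 270)]   # assumed calibration

def merge_clouds(clouds):
    """Merge per-camera clouds (list of (N_i, 3) arrays) into a single world-frame cloud."""
    return np.vstack([rigid_transform(c, T) for c, T in zip(clouds, extrinsics)])

# Usage with random stand-in data for the four depth cameras.
clouds = [np.random.rand(1000, 3) for _ in range(4)]
world_cloud = merge_clouds(clouds)
print(world_cloud.shape)   # (4000, 3)
```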

3D Spreader Position Information by the CCD Cameras and the Laser Distance Measuring Unit for ATC

  • Bae, Dong-Suk;Lee, Jung-Jae;Lee, Bong-Ki;Lee, Jang-Myung
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2004.08a
    • /
    • pp.1679-1684
    • /
    • 2004
  • This paper introduces a novel approach that provides three-dimensional information on the movement of a spreader by using two CCD cameras and a laser distance sensor, enabling an ALS (Automatic Landing System) for yard cranes at a harbor. So far, 2D laser scanner sensors or laser distance measuring units have been used as corner detectors for the geometrical matching between the spreader and a container; they provide only 2D information, which is not enough for the accurate and fast ALS required at present. In addition to this deficiency in performance, the price of such a system is too high for it to be adopted widely for the ALS. Therefore, to overcome these defects, a novel method is proposed to acquire three-dimensional information on the movement of a spreader, including its skew and sway angles, using two CCD cameras and a laser distance sensor. To show the efficiency of the proposed algorithm, real experiments were performed, demonstrating the improvement in distance-measurement accuracy obtained by fusing the sensory information of the CCD cameras and the laser distance sensor.
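
This conference version additionally reports skew and sway angles. The sketch below shows one plausible way such angles could be derived once two spreader corners have been located in 3D (for example, by the camera/laser fusion sketched earlier); the axis conventions and the corner coordinates are illustrative assumptions, not the authors' formulation.

```python
# Sketch: derive skew (yaw of the spreader's long edge in the horizontal plane) and
# sway (tilt of the corner midpoint away from the plumb line) from two 3D corners.
# Axis conventions and numbers are illustrative assumptions.
import numpy as np

def skew_and_sway(corner_a, corner_b):
    edge = corner_b - corner_a
    skew = np.degrees(np.arctan2(edge[1], edge[0]))    # rotation in the horizontal plane

    midpoint = 0.5 * (corner_a + corner_b)
    horiz_offset = np.linalg.norm(midpoint[:2])        # displacement from the plumb line
    sway = np.degrees(np.arctan2(horiz_offset, midpoint[2]))
    return skew, sway

corner_a = np.array([-3.0, 0.10, 12.0])    # metres, in an assumed crane frame
corner_b = np.array([ 3.0, 0.35, 12.0])
print(skew_and_sway(corner_a, corner_b))
```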

Performance Evaluation of a Compressed-State Constraint Kalman Filter for a Visual/Inertial/GNSS Navigation System

  • Yu Dam Lee;Taek Geun Lee;Hyung Keun Lee
    • Journal of Positioning, Navigation, and Timing
    • /
    • v.12 no.2
    • /
    • pp.129-140
    • /
    • 2023
  • Autonomous driving systems are likely to be operated in various complex environments. However, the well-known integrated Global Navigation Satellite System (GNSS)/Inertial Navigation System (INS), which is currently the major source of absolute position information, still has difficulty achieving accurate positioning in harsh signal environments such as urban canyons. To overcome these difficulties, integrated Visual/Inertial/GNSS (VIG) navigation systems have been extensively studied in various areas. Recently, a Compressed-State Constraint Kalman Filter (CSCKF)-based VIG navigation system (CSCKF-VIG) using a monocular camera, an Inertial Measurement Unit (IMU), and GNSS receivers has been studied with the aim of providing robust and accurate position information in urban areas. On the basis of time-propagation measurement fusion theory, this filter-based navigation system does not require camera states in the system state. This paper presents a performance evaluation of the CSCKF-VIG system against other conventional navigation systems. First, the CSCKF-VIG is introduced in detail and contrasted with the well-known Multi-State Constraint Kalman Filter (MSCKF). The CSCKF-VIG system is then evaluated in a field experiment under different GNSS availability conditions. The results show that accuracy is improved in the GNSS-degraded environment compared to the conventional systems.
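
For readers unfamiliar with the filtering structure being compared here, the sketch below is a generic Kalman filter propagate/update skeleton in a toy 1D constant-velocity setting. It only illustrates the time-propagation and measurement-update steps; the CSCKF's compressed-state handling of visual measurements is not reproduced, and all matrices are illustrative placeholders.

```python
# Generic Kalman filter propagate/update skeleton (toy 1D constant-velocity example).
# This is NOT the paper's CSCKF; it only shows the basic filtering structure.
import numpy as np

def propagate(x, P, F, Q):
    """Time propagation with the state-transition model x_k = F x_{k-1} + w."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, H, R):
    """Measurement update with z = H x + v (e.g., a GNSS position fix)."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state = [position, velocity]
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])              # position-only measurement
R = np.array([[0.5]])

x, P = np.zeros(2), np.eye(2)
for z in (0.11, 0.19, 0.32):            # simulated position measurements
    x, P = propagate(x, P, F, Q)
    x, P = update(x, P, np.array([z]), H, R)
print(x)
```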

Development of Defect Inspection System for Polygonal Containers (다각형 용기의 결함 검사 시스템 개발)

  • Yoon, Suk-Moon;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.25 no.3
    • /
    • pp.485-492
    • /
    • 2021
  • In this paper, we propose the development of a defect inspection system for polygonal containers. The embedded board consists of a main unit, a communication unit, and an input/output unit. The main unit is the main arithmetic unit; the operating system that drives the embedded board is ported onto it to control the input/output for external communication, the sensors, and the control functions. The input/output unit converts the electrical signals of the sensors installed in the field into digital form, transmits them to the main module, and controls the external stepper motor. The communication unit sets the trigger of the image-capturing camera and the drive settings of the control device. The input/output unit also converts the electrical signals of the control switches and sensors into digital form and transmits them to the main module. In the input circuit that receives pulse inputs related to the operation mode, a photocoupler is placed on each input port to minimize interference from external noise. To objectively evaluate the accuracy of the proposed defect inspection system for polygonal containers, a comparison with other machine vision inspection systems would be required, but this is impossible because no machine vision inspection system for polygonal containers currently exists. Therefore, the operation timing was measured with an oscilloscope, and it was confirmed that waveforms such as the Test Time, One Angle Pulse Value, One Pulse Time, Camera Trigger Pulse, and BLU brightness control were output accurately.
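
The camera trigger pulse whose timing is verified above can be illustrated with a short sketch. An RPi.GPIO-style interface is assumed purely for illustration; the paper's embedded board is custom hardware with its own I/O layer, and the pin number, pulse width, and stepper-wait delay below are all assumptions.

```python
# Illustrative sketch of issuing a fixed-width camera trigger pulse on a GPIO line,
# of the kind whose timing the paper verifies with an oscilloscope. The RPi.GPIO
# interface, pin number, and timings are assumptions, not the paper's hardware.
import time
import RPi.GPIO as GPIO

TRIGGER_PIN = 18          # assumed output pin wired to the camera trigger input
PULSE_WIDTH_S = 0.001     # assumed 1 ms trigger pulse width

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIGGER_PIN, GPIO.OUT, initial=GPIO.LOW)

def fire_trigger():
    """Drive the trigger line high for PULSE_WIDTH_S seconds, then release it."""
    GPIO.output(TRIGGER_PIN, GPIO.HIGH)
    time.sleep(PULSE_WIDTH_S)
    GPIO.output(TRIGGER_PIN, GPIO.LOW)

try:
    for _ in range(5):        # capture five frames, one trigger per rotation step
        fire_trigger()
        time.sleep(0.1)       # assumed wait for the stepper motor to advance one angle
finally:
    GPIO.cleanup()
```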

Multiple Camera Based Imaging System with Wide-view and High Resolution and Real-time Image Registration Algorithm (다중 카메라 기반 대영역 고해상도 영상획득 시스템과 실시간 영상 정합 알고리즘)

  • Lee, Seung-Hyun;Kim, Min-Young
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.49 no.4
    • /
    • pp.10-16
    • /
    • 2012
  • For high-speed visual inspection in the semiconductor industry, it is essential to acquire two-dimensional images of regions of interest with a large field of view (FOV) and a high resolution simultaneously. In this paper, an imaging system composed of a single lens, a beam splitter, two camera sensors, and a stereo image grabbing board is newly proposed to achieve high image quality in terms of both precision and FOV. For the object images acquired simultaneously from the two camera sensors, Zhang's camera calibration method is first applied to calibrate each camera. Secondly, to find a mathematical mapping function between the two images acquired from the different views, the matching matrix from multi-view camera geometry is calculated based on their image homography. Through this homography, the two images are finally registered to secure a large inspection FOV. An inspection system that uses multiple images from multiple cameras needs a very fast processing unit for real-time image matching; for this purpose, parallel processing hardware and software such as the Compute Unified Device Architecture (CUDA) are utilized. As a result, a matched image can be obtained from the two separate images in real time. Finally, the acquired homography is evaluated in terms of accuracy through a series of experiments, and the results show the effectiveness of the proposed system and method.
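
The registration step can be illustrated with the OpenCV sketch below, which estimates a homography from feature matches and warps one sensor image into the other's frame. The paper derives its matching matrix from Zhang calibration and multi-view geometry and accelerates the matching with CUDA; neither is reproduced here, and the input file names are hypothetical.

```python
# Sketch of homography-based registration of the two sensor images. Feature-based
# estimation stands in for the paper's calibration-derived matching matrix, and
# the CUDA acceleration is not shown.
import cv2
import numpy as np

def register_pair(img_a, img_b):
    """Warp img_b into img_a's frame via an estimated homography."""
    orb = cv2.ORB_create(3000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(pts_b, pts_a, cv2.RANSAC, 3.0)

    h, w = img_a.shape[:2]
    return cv2.warpPerspective(img_b, H, (w, h)), H

if __name__ == "__main__":
    img_a = cv2.imread("sensor_a.png", cv2.IMREAD_GRAYSCALE)   # hypothetical inputs
    img_b = cv2.imread("sensor_b.png", cv2.IMREAD_GRAYSCALE)
    registered, H = register_pair(img_a, img_b)
    cv2.imwrite("registered.png", registered)
```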

A Study on the Implementation of the Web-Camera System for Realtime Monitoring (실시간 영상 감시를 위한 웹 카메라 시스템의 구현에 관한 연구)

  • Ahn, Young-Min;Jin, Hyun-Joon;Park, Nho-Kyung
    • Journal of IKEEE
    • /
    • v.5 no.2 s.9
    • /
    • pp.174-181
    • /
    • 2001
  • In this study, the architecture of a Web camera system for real-time monitoring over the Internet is proposed and implemented in two different structures. In the first architecture, the Web server and the camera server are implemented on the same system, and the system transfers motion pictures compressed into JPEG files to users on the WWW (World Wide Web). In the second architecture, the Web server and the camera server are implemented on separate systems, and the motion pictures are transferred from the camera server to the Web server and finally to the users. For JPEG image transfer in the Web camera system, a Java applet and JavaScript are used to maximize the system's independence from the operating system and Web browser. To compare the performance of the two architectures, data traffic is measured and simulated in units of bytes per second.
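
As a modern stand-in for the camera-server side of the first architecture, the sketch below compresses captured frames to JPEG and pushes them to Web clients over HTTP as a multipart/x-mixed-replace stream. The paper's actual implementation is based on Java applets and JavaScript; this Python sketch only illustrates the server-side JPEG streaming idea, and the port number is an assumption.

```python
# Minimal stand-in for a camera server: capture frames, JPEG-compress them, and
# stream them to HTTP clients via multipart/x-mixed-replace. Illustrative only.
import cv2
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

cap = cv2.VideoCapture(0)            # the camera attached to the camera server

class StreamHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type",
                         "multipart/x-mixed-replace; boundary=frame")
        self.end_headers()
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            ok, jpeg = cv2.imencode(".jpg", frame)
            if not ok:
                continue
            self.wfile.write(b"--frame\r\n")
            self.wfile.write(b"Content-Type: image/jpeg\r\n\r\n")
            self.wfile.write(jpeg.tobytes())
            self.wfile.write(b"\r\n")

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 8080), StreamHandler).serve_forever()
```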
