• Title/Summary/Keyword: Camera Configuration


Development of Vision based Passenger Monitoring System for Passenger's Safety in Railway Station (철도 승강장 승객 안전을 위한 영상처리식 모니터링시스템 개발)

  • Oh, Seh-Chan;Park, Sung-Hyuk;Lee, Han-Min;Kim, Gil-Dong;Lee, Chang-Mu
    • Proceedings of the KSR Conference / 2008.11b / pp.1354-1359 / 2008
  • In this paper, we propose a vision-based passenger monitoring system for passenger safety in railway stations. Since 2005, the Korea Railroad Research Institute (KRRI) has been developing a vision-based monitoring system, funded by the Korean government, for passenger safety in railway stations. The proposed system uses various types of sensors, such as a stereo camera, a thermal camera, and infrared sensors, to detect dangerous situations in the platform area. In particular, the detection process exploits a stereo vision algorithm to improve detection accuracy. The paper describes the overall system configuration and the proposed detection algorithm, and then verifies system performance with extensive experimental results in a real station environment.
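
The abstract does not give implementation details, but the core idea of stereo-vision-based detection can be illustrated with a minimal disparity-map sketch. The block-matcher parameters and the disparity threshold below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of stereo-disparity-based obstacle detection near a platform edge.
# Assumes rectified left/right images; parameter values are illustrative only.
import cv2
import numpy as np

def detect_near_objects(left_gray, right_gray, disparity_threshold=48):
    """Return a binary mask of pixels whose disparity exceeds a threshold,
    i.e. objects closer to the cameras than a chosen distance."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    near_mask = (disparity > disparity_threshold).astype(np.uint8) * 255
    # Remove small speckles so that only sizeable objects would trigger an alarm.
    near_mask = cv2.medianBlur(near_mask, 5)
    return near_mask, disparity

if __name__ == "__main__":
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # assumed file names
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
    mask, disp = detect_near_objects(left, right)
    print("pixels flagged as 'near':", int(np.count_nonzero(mask)))
```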


Development of Low Cost Autonomous-Driving Delivery Robot System Using SLAM Technology (SLAM 기술을 활용한 저가형 자율주행 배달 로봇 시스템 개발)

  • Donghoon Lee;Jehyun Park;Kyunghoon Jung
    • IEMEK Journal of Embedded Systems and Applications / v.18 no.5 / pp.249-257 / 2023
  • This paper discusses the increasing need for autonomous delivery robots, driven by the current growth of the delivery market, rising delivery fees, the high cost of hiring delivery personnel, and the demand for contactless services. In addition, the hardware and the complex software systems required to build and operate autonomous delivery robots are expensive. To provide a low-cost alternative, this paper proposes an autonomous delivery robot platform that replaces an expensive 3D LIDAR with a low-cost sensor combination of a 2D LIDAR, a depth camera, and a tracking camera. The proposed robot was developed using the open-source RTAB-Map SLAM package for 2D mapping and overcomes the limitations of low-cost sensors by using the convex hull algorithm. The paper details the hardware and software configuration of the robot and presents the results of driving experiments. The proposed platform has significant potential for the delivery industry and other fields.
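
The paper names the convex hull algorithm as the way the robot copes with sparse, noisy low-cost sensor data. A minimal sketch of that idea, compressing a ground-plane slice of depth-camera points into a compact obstacle footprint, is shown below; the point data and the exact way the hull is used are assumptions for illustration.

```python
# Sketch: compress a noisy 2D slice of depth points into a convex-hull footprint,
# the kind of simplification the paper attributes to the convex hull algorithm.
import numpy as np
from scipy.spatial import ConvexHull

def obstacle_footprint(points_xy):
    """points_xy: (N, 2) array of ground-plane points from a depth-camera slice.
    Returns the hull vertices as an (M, 2) array ordered counter-clockwise."""
    hull = ConvexHull(points_xy)
    return points_xy[hull.vertices]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.normal(loc=[1.5, 0.0], scale=0.2, size=(500, 2))  # fake obstacle
    footprint = obstacle_footprint(cloud)
    print("raw points:", len(cloud), "-> hull vertices:", len(footprint))
```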

Development of a software based calibration system for automobile assembly system oriented AR (자동차 조립시스템 지향 AR을 위한 소프트웨어 기반의 캘리브레이션 시스템 개발)

  • Park, Jin-Woo;Park, Hong-Seok
    • Korean Journal of Computational Design and Engineering / v.17 no.1 / pp.35-44 / 2012
  • Many automobile manufacturers are experimenting with augmented reality (AR) technology in their manufacturing environments. In practice, however, system layout and process simulation have so far been performed far more actively with virtual reality than with augmented reality. Existing AR-based automobile assembly requires precise calibration after the robot is set up, because the existing AR system for configuring the automobile assembly system does not account for the deflection of the end tip and of the robot joints caused by the heavy weight of the product and the gripper. Since robots perform most automobile assembly tasks, this deflection of the robot joints and the product needs to be handled in the existing AR system. Moreover, camera lens calibration has to be performed precisely in order to use augmented reality. To address these problems, this paper introduces a software-based calibration method that applies augmented reality effectively to the automobile assembly system. A camera lens calibration module and a module for direct compensation of the virtual object displacement were designed and implemented for the AR system. Finally, the developed AR system oriented to automobile assembly was verified in a practical test.
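
Camera lens calibration is a standard prerequisite mentioned in the abstract. A common way to obtain the intrinsic parameters and distortion coefficients is chessboard calibration, sketched below with OpenCV; the board size and image paths are assumptions, not details from the paper.

```python
# Sketch of camera lens calibration from chessboard images with OpenCV -- the kind
# of intrinsic/distortion estimation an AR overlay needs. Board size and paths assumed.
import glob
import cv2
import numpy as np

BOARD = (9, 6)  # inner corners per row/column (assumed board layout)
obj_template = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
obj_template[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib_images/*.png"):      # assumed image location
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_points.append(obj_template)
        img_points.append(corners)
        image_size = gray.shape[::-1]             # (width, height)

assert img_points, "no chessboard corners were detected"
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points,
                                                 image_size, None, None)
print("reprojection RMS error:", rms)
print("camera matrix:\n", K)
print("distortion coefficients:", dist.ravel())
```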

Focal Reducer for CQUEAN

  • Lim, Ju-Hee;Chang, Seung-Hyuk;Kim, Young-Ju;Kim, Jin-Young;Park, Won-Kee;Im, Myung-Shin;Pak, Soo-Jong
    • The Bulletin of The Korean Astronomical Society / v.35 no.2 / pp.62.2-62.2 / 2010
  • The CQUEAN (Camera for QUasars in EArly uNiverse) is an optical CCD camera optimized for observations of high-redshift QSOs in order to understand the nature of the early universe. The focal reducer, which is composed of four spherical lenses, secures a wider field of view for CQUEAN by reducing the focal length of the system by one third. We designed the lens configuration, the lens barrel, and the adapters needed to attach the focal reducer to the CCD camera system, and we performed a tolerance analysis using ZEMAX. The manufacturing of the focal reducer system and the laboratory test of its optical performance have already been finished. It turned out that the performance meets the original requirement when aberration and alignment errors are taken into account. We successfully attached the focal reducer and CQUEAN to the Cassegrain focus of the 2.1 m telescope at McDonald Observatory, USA, and carried out several tests of the CQUEAN system. In this presentation, we show the process of focal reducer fabrication and the results of the performance test.
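
The effect of a focal reducer on the field of view follows from simple optics: shortening the effective focal length widens the angular field for the same detector. The sketch below works through that arithmetic for an assumed telescope focal length, detector width, and reduction factor; the numbers are illustrative, not CQUEAN's actual specifications.

```python
# Worked arithmetic: how a focal reducer changes the field of view.
# Values below are assumed for illustration, not the actual CQUEAN / 2.1 m figures.
import math

def field_of_view_deg(focal_length_mm, detector_width_mm):
    """Angular field of view along one axis of the detector (pinhole approximation)."""
    return math.degrees(2.0 * math.atan(detector_width_mm / (2.0 * focal_length_mm)))

telescope_focal_length = 18000.0   # mm, assumed native focal length
detector_width = 13.3              # mm, assumed CCD width
reduction_factor = 1.0 / 3.0       # illustrative factor applied to the focal length

native_fov = field_of_view_deg(telescope_focal_length, detector_width)
reduced_fov = field_of_view_deg(telescope_focal_length * reduction_factor, detector_width)
print(f"native FOV: {native_fov * 60:.2f} arcmin, with reducer: {reduced_fov * 60:.2f} arcmin")
```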


Optical Principles of Beam Splitters

  • Lee, Chang-Kyung
    • Korean Journal of Geomatics / v.1 no.1 / pp.69-74 / 2001
  • In conventional photogrammetry, three-dimensional coordinates are obtained from two consecutive images of a stationary object photographed from two exposure stations separated by a certain distance. However, it is impossible to photograph a moving object from two stations with one camera at the same time. Various methods to overcome this obstacle have been devised, e.g., taking the left and right scenes simultaneously with one camera using a beam splitter attached to the front, thus creating a stereo scene in one image. A beam splitter consists of two outer mirrors and two inner mirrors. This paper deals with research in which the optical principles of the beam splitter were evaluated based on the light paths between the outer mirrors and the inner mirrors. A mathematical model of the geometric configuration was derived for the beam splitter. This allows us to design and control a beam splitter to obtain maximum scale and maximum base-height ratio by stepwise application of the mathematical model. The results show that the beam splitter is a very useful tool for stereophotography with one camera. The optimum geometric configurations ensuring maximum scale and base-height ratio are closely related to the inner and outer reflector sizes, their inclination angles, and the offsets between the outer mirrors.
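
Since the abstract is concerned with maximizing scale and the base-height ratio for single-camera stereo, a small worked example of the generic photogrammetric parallax relation may help. The formula and numbers below are the standard normal-case relations with assumed values, not the paper's beam-splitter geometry model.

```python
# Generic photogrammetric relations used when reasoning about the base-height ratio:
# depth Z = B * f / p (stereo base B, focal length f, x-parallax p).
# Numbers are illustrative; this is not the paper's beam-splitter model.
def depth_from_parallax(base_m, focal_mm, parallax_mm):
    """Object distance for normal-case stereo geometry."""
    return base_m * focal_mm / parallax_mm

def base_height_ratio(base_m, height_m):
    return base_m / height_m

base = 0.4        # m, effective stereo base created by the beam splitter (assumed)
focal = 50.0      # mm, camera focal length (assumed)
parallax = 2.5    # mm, measured image parallax (assumed)

distance = depth_from_parallax(base, focal, parallax)
print("object distance:", distance, "m")                       # 8.0 m
print("base-height ratio:", base_height_ratio(base, distance))  # 0.05
```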


Mixing Collaborative and Hybrid Vision Devices for Robotic Applications (로봇 응용을 위한 협력 및 결합 비전 시스템)

  • Bazin, Jean-Charles;Kim, Sung-Heum;Choi, Dong-Geol;Lee, Joon-Young;Kweon, In-So
    • The Journal of Korea Robotics Society / v.6 no.3 / pp.210-219 / 2011
  • This paper studies how to combine devices such as monocular/stereo cameras, pan/tilt motors, fisheye lenses, and convex mirrors in order to solve vision-based robotic problems. To overcome the well-known trade-offs between optical properties, we present two mixed versions of new systems. The first system is a robot photographer with a conventional pan/tilt perspective camera and a fisheye lens. The second system is an omnidirectional detector for a complete 360-degree field-of-view surveillance system. We build an original device that combines a stereo-catadioptric camera with a pan/tilt stereo-perspective camera and apply it in a real environment. Compared to previous systems, the two proposed systems maintain both high speed and high resolution with collaborative moving cameras, and they cover an enormous search space with the hybrid configuration. Experimental results are provided to show the effectiveness of the collaborative and hybrid systems.
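
One collaborative behaviour described here, pointing the pan/tilt perspective camera at a target found by the wide-field device, reduces to converting a pixel offset into pan and tilt angles. A minimal sketch of that conversion is given below; the pinhole model and the intrinsic values are assumptions for illustration.

```python
# Sketch: convert a detection's pixel offset in a wide-field view into pan/tilt
# angles for a perspective camera, assuming a simple pinhole model (illustrative).
import math

def pixel_to_pan_tilt(u, v, cx, cy, fx, fy):
    """u, v: detected pixel; (cx, cy): principal point; fx, fy: focal lengths in pixels.
    Returns (pan, tilt) in degrees needed to center the target."""
    pan = math.degrees(math.atan2(u - cx, fx))
    tilt = math.degrees(math.atan2(v - cy, fy))
    return pan, tilt

if __name__ == "__main__":
    # Assumed intrinsics for a 1280x960 sensor.
    pan, tilt = pixel_to_pan_tilt(u=900, v=300, cx=640, cy=480, fx=800, fy=800)
    print(f"command: pan {pan:.1f} deg, tilt {tilt:.1f} deg")
```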

Real-time Monocular Camera Pose Estimation using a Particle Filter Integrated with UKF (UKF와 연동된 입자필터를 이용한 실시간 단안시 카메라 추적 기법)

  • Seok-Han Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.5 / pp.315-324 / 2023
  • In this paper, we propose a real-time pose estimation method for a monocular camera using a particle filter integrated with UKF (unscented Kalman filter). While conventional camera tracking techniques combine camera images with data from additional devices such as gyroscopes and accelerometers, the proposed method aims to use only two-dimensional visual information from the camera without additional sensors. This leads to a significant simplification in the hardware configuration. The proposed approach is based on a particle filter integrated with UKF. The pose of the camera is estimated using UKF, which is defined individually for each particle. Statistics regarding the camera state are derived from all particles of the particle filter, from which the real-time camera pose information is computed. The proposed method demonstrates robust tracking, even in the case of rapid camera shakes and severe scene occlusions. The experiments show that our method remains robust even when most of the feature points in the image are obscured. In addition, we verify that when the number of particles is 35, the processing time per frame is approximately 25ms, which confirms that there are no issues with real-time processing.
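
The structure described, a particle filter in which each particle maintains its own filtered pose estimate and the camera pose is computed from statistics over all particles, can be sketched as below. The motion noise, the dummy likelihood, and the 6-DoF pose vector are assumptions, and the per-particle UKF refinement is abbreviated to a placeholder comment; this is a structural sketch, not the authors' implementation.

```python
# Structural sketch of the described tracker: a particle filter whose particles each
# carry a camera-pose estimate, with weights from an observation likelihood and the
# final pose taken as the weighted mean over all particles.
import numpy as np

rng = np.random.default_rng(1)
N_PARTICLES = 35                       # the paper reports ~25 ms/frame at 35 particles
poses = np.zeros((N_PARTICLES, 6))     # [x, y, z, roll, pitch, yaw] per particle (assumed)
weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)

def predict(poses, motion_noise=0.01):
    """Diffuse particles with random-walk motion noise (assumed model)."""
    return poses + rng.normal(scale=motion_noise, size=poses.shape)

def likelihood(pose, observed_features):
    """Placeholder: a real implementation would reproject map points with `pose`
    and score them against the observed image features."""
    return np.exp(-0.5 * np.sum(pose[:3] ** 2))   # dummy score for the sketch

def resample(poses, weights):
    idx = rng.choice(len(poses), size=len(poses), p=weights)
    return poses[idx], np.full(len(poses), 1.0 / len(poses))

def step(poses, weights, observed_features=None):
    poses = predict(poses)
    # (Here each particle would additionally run its own UKF update.)
    weights = np.array([likelihood(p, observed_features) for p in poses])
    weights /= weights.sum()
    estimate = np.average(poses, axis=0, weights=weights)   # pose statistics over particles
    poses, weights = resample(poses, weights)
    return poses, weights, estimate

poses, weights, pose_estimate = step(poses, weights)
print("estimated camera pose:", np.round(pose_estimate, 3))
```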

The Running Control for the Mobile Vehicle

  • Sugisaka, Masanori;Adachi, Takuya
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2000.10a / pp.491-491 / 2000
  • In this paper, we report results on the rotational control of the DC motor that drives the mobile vehicle, as a first step of the research toward realizing a mobile vehicle with an artificial brain. First, we introduce the configuration of the mobile vehicle; it has one CCD camera and is driven by a rear wheel. Secondly, we present the control methods; several control schemes are adopted in this research. Finally, we report the experimental methods and results and describe the conclusions of this research.
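
The abstract only says that various controls were adopted for the DC motor; a common baseline for rotational speed control is a discrete PID loop, sketched below purely as an illustration. The toy plant model and the gains are assumptions, not values from the paper.

```python
# Illustrative discrete PID loop for DC motor speed control, with a crude
# first-order motor model standing in for the real vehicle (all values assumed).
def pid_speed_control(target_rpm, steps=200, dt=0.01, kp=0.8, ki=2.0, kd=0.01):
    speed, integral, prev_error = 0.0, 0.0, 0.0
    for _ in range(steps):
        error = target_rpm - speed
        integral += error * dt
        derivative = (error - prev_error) / dt
        voltage = kp * error + ki * integral + kd * derivative
        # Toy first-order motor response: speed follows the applied voltage.
        speed += dt * (5.0 * voltage - speed)
        prev_error = error
    return speed

print("speed after 2 s:", round(pid_speed_control(120.0), 1), "rpm (target 120)")
```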


THE ANALYSIS OF PSM (POWER SUPPLY MODULE) FOR MULTI-SPECTRAL CAMERA IN KOMPSAT

  • Park Jong-Euk;Kong Jong-Pil;Heo Haeng-Pal;Kim Young Sun;Chang Young Jun
    • Proceedings of the KSRS Conference / 2005.10a / pp.493-496 / 2005
  • The PMU (Payload Management Unit) in the MSC (Multi-Spectral Camera) is the main subsystem for the management, control, and power supply of the MSC payload operation. The PMU handles the communication with the BUS (spacecraft) OBC (On-Board Computer) for commands and telemetry, as well as the communications with the various MSC units. The PMU distributes power to the various MSC units, collects the telemetry reports from the MSC units, performs thermal control of the EOS (Electro-Optical Subsystem), performs the NUC (Non-Uniformity Correction) of the raw imagery data, and rearranges the pixel data and outputs it to the DCSU (Data Compression and Storage Unit). The BUS provides high voltage to the MSC. The PMU is connected to the primary and redundant BUS power and distributes the high unregulated primary voltages to all MSC sub-units. The PSM (Power Supply Module) is an assembly in the PMU that implements the interface between several channels on the input. Bus switches are used to prevent a single-point system failure. The PSS (Power Supply System) requirement combines the two PSM boards' bus outputs in a wired-OR configuration; in such a configuration, if one board's output gets shorted to ground, the entire bus could fail, thereby causing the entire MSC to fail. To prevent such a short from pulling down the system, the switch can be opened to disconnect the short from the bus. This switch operation is controlled by the BUS.
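
The failure scenario in the abstract, two PSM outputs tied together in a wired-OR configuration with bus switches to isolate a board whose output shorts to ground, can be illustrated with a tiny model. The classes and truth values below are purely illustrative and are not the actual KOMPSAT MSC logic.

```python
# Toy model of the wired-OR bus described in the abstract: two PSM board outputs
# feed one bus; a shorted board would pull the bus down unless its switch is opened.
from dataclasses import dataclass

@dataclass
class BoardOutput:
    powered: bool = True          # board is delivering power
    shorted_to_ground: bool = False
    switch_closed: bool = True    # bus switch connecting this board to the bus

def bus_alive(boards):
    connected = [b for b in boards if b.switch_closed]
    if any(b.shorted_to_ground for b in connected):
        return False              # a connected short pulls the whole bus down
    return any(b.powered for b in connected)

primary, redundant = BoardOutput(), BoardOutput()
primary.shorted_to_ground = True
print("short connected, bus alive?", bus_alive([primary, redundant]))   # False
primary.switch_closed = False      # the BUS commands the switch open to isolate the short
print("short isolated, bus alive?", bus_alive([primary, redundant]))    # True
```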


A Camera Information Detection Method for Dynamic Scenes (Dynamic scene에 대한 카메라 정보 추출 기법)

  • Ko, Jung-Hwan
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.5 / pp.275-280 / 2013
  • In this paper, a new stereo object extraction algorithm using a block-based MSE (mean square error) algorithm and the configuration parameters of a stereo camera is proposed. That is, by applying the SSD algorithm between the initial reference image and the next stereo input image, the location coordinates of the target object in the right and left images are acquired, and with these values the pan/tilt system is controlled. Then, using the moving angle of this pan/tilt system and the configuration parameters of the stereo camera system, the mask window size of the target object is adaptively determined. The newly segmented target image is used as the reference image in the next stage and is automatically updated in the course of target tracking based on the same procedure. Meanwhile, the target object is tracked by continuously controlling the convergence and the FOV using the sequentially extracted location coordinates of the moving target.
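
The block-based SSD matching between the reference image and the next input image, which the abstract uses to localize the target, can be sketched as an exhaustive template search. OpenCV's squared-difference template matching is used here as a stand-in for that search; the block location and file names are illustrative assumptions.

```python
# Sketch of block-based SSD target localization: slide the reference block over
# the new frame and take the position with the smallest sum of squared differences.
import cv2

def locate_target_ssd(frame_gray, reference_block):
    scores = cv2.matchTemplate(frame_gray, reference_block, cv2.TM_SQDIFF)
    min_val, _, min_loc, _ = cv2.minMaxLoc(scores)
    x, y = min_loc                             # top-left corner of the best match
    h, w = reference_block.shape
    return (x + w // 2, y + h // 2), min_val   # target center and SSD score

if __name__ == "__main__":
    frame = cv2.imread("left_frame.png", cv2.IMREAD_GRAYSCALE)  # assumed file name
    reference = frame[100:164, 200:264]        # assumed initial target block
    center, score = locate_target_ssd(frame, reference)
    print("target center:", center, "SSD:", score)
```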