• Title/Summary/Keyword: one camera

Search Results: 1,569

Research on Thermal Refocusing System of High-resolution Space Camera

  • Li, Weiyan; Lv, Qunbo; Wang, Jianwei; Zhao, Na; Tan, Zheng; Pei, Linlin
    • Current Optics and Photonics / v.6 no.1 / pp.69-78 / 2022
  • A high-resolution camera is a precise optical system. Vibrations during transportation and launch, together with changes in temperature and the gravity field in orbit, lead to different degrees of defocus of the camera. Thermal refocusing is one of the solutions to in-orbit defocusing, but few thermal refocusing mathematical models exist for systematic analysis and research. Therefore, taking the development of the super-resolution camera on the high-resolution micro-nano satellite CX6-02 as an example, we established a thermal refocusing mathematical model based on thermal elasticity theory and the position of the secondary mirror. The detailed design of the thermal refocusing system was carried out under the guidance of this mathematical model. Through optical-mechanical-thermal integration analysis and Zernike polynomial calculation, we found that the data error was about 1%, and the deformation of the secondary mirror surface conformed to the optical index, indicating the accuracy and reliability of the thermal refocusing mathematical model. In the final ground test, thermal vacuum verification data and in-orbit imaging results showed that the thermal refocusing system behaves consistently with the experimental data and that its performance is stable, which provides theoretical and technical support for the future development of thermal refocusing space cameras.
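
The abstract refers to Zernike polynomial calculation for checking the secondary-mirror surface deformation. Purely as an illustration (not the authors' model or code), the following Python sketch fits a few low-order Zernike terms (piston, tilt, defocus) to a synthetic deformation map on the unit disk by least squares; the term selection, grid, and data values are all assumptions.

```python
import numpy as np

# Sample a synthetic mirror deformation map on the unit disk (assumed data).
n = 101
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
r, theta = np.hypot(x, y), np.arctan2(y, x)
mask = r <= 1.0

# Low-order Zernike terms: piston, x-tilt, y-tilt, defocus.
basis = np.stack([
    np.ones_like(r),          # piston
    r * np.cos(theta),        # tilt x
    r * np.sin(theta),        # tilt y
    2.0 * r**2 - 1.0,         # defocus
], axis=-1)[mask]

# Synthetic surface: mostly defocus plus a little tilt and noise (in waves).
surface = (0.05 * (2.0 * r**2 - 1.0) + 0.01 * r * np.cos(theta))[mask]
surface += np.random.default_rng(0).normal(0, 1e-3, surface.shape)

# Least-squares fit of the Zernike coefficients and the residual surface error.
coeffs, *_ = np.linalg.lstsq(basis, surface, rcond=None)
rms_residual = np.std(surface - basis @ coeffs)
print("piston, tilt-x, tilt-y, defocus:", np.round(coeffs, 4))
print("RMS residual:", rms_residual)
```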

Virtual portraits from rotating selfies

  • Yongsik Lee; Jinhyuk Jang; Seungjoon Yang
    • ETRI Journal / v.45 no.2 / pp.291-303 / 2023
  • Selfies are a popular form of photography. However, due to physical constraints, the compositions of selfies are limited. We present algorithms for creating virtual portraits with interesting compositions from a set of selfies taken at the same location while the user spins around. The scene is analyzed using multiple selfies to determine the locations of the camera, subject, and background. Then, a view from a virtual camera is synthesized. We present two use cases. First, after rearranging the distances between the camera, subject, and background, we render a virtual view from a camera with a longer focal length; the changes in perspective and lens characteristics caused by the new composition and focal length are simulated. Second, a virtual panoramic view with a larger field of view is rendered, with the user's image placed in a preferred location. In our experiments, virtual portraits with a wide range of focal lengths were obtained using a device equipped with a lens that has only one focal length. The rendered portraits included compositions that would otherwise require actual lenses of those focal lengths. The proposed algorithms enable new use cases in which selfie compositions are not limited by the camera's focal length or the subject's distance from the camera.
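
As a loose illustration of the perspective effect the paper simulates (not the authors' rendering pipeline), the sketch below uses the pinhole relation image size ≈ f · object size / distance to show that pulling the camera back while lengthening the focal length keeps the subject's image size fixed while magnifying the background; all focal lengths and distances are made-up example values.

```python
# Pinhole-camera illustration of "lens compression" when the virtual camera
# moves back and the focal length increases (assumed example values).

def image_height(focal_mm, object_height_m, distance_m):
    """Projected height on the sensor, in millimetres, for a pinhole camera."""
    return focal_mm * object_height_m / distance_m

subject_h, background_h = 0.3, 2.0       # metres (e.g. face, background object)
setups = [
    (28.0, 0.5, 5.0),    # wide selfie lens: subject at 0.5 m, background at 5 m
    (84.0, 1.5, 6.0),    # virtual telephoto: camera pulled back by 1 m
]

for focal, d_subject, d_background in setups:
    s = image_height(focal, subject_h, d_subject)
    b = image_height(focal, background_h, d_background)
    print(f"f={focal:5.1f} mm  subject={s:5.2f} mm  background={b:5.2f} mm")
# The subject stays the same size on the sensor while the background grows,
# which is the change in composition the virtual telephoto view reproduces.
```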

Human Tracking using Multiple-Camera-Based Global Color Model in Intelligent Space

  • Jin Tae-Seok; Hashimoto Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / v.6 no.1 / pp.39-46 / 2006
  • We propose a global color model-based method for tracking the motions of multiple humans using a networked multiple-camera system in an intelligent space, a human-robot coexistent system. An intelligent space is a space in which many intelligent devices, such as computers and sensors (for example, color CCD cameras), are distributed. Human beings can be a part of the intelligent space as well. One of the main goals of an intelligent space is to assist humans and to provide various services for them. To be capable of doing that, the intelligent space must be able to perform various human-related tasks, one of which is to identify and track multiple objects seamlessly. In an environment where many camera modules are distributed over a network, it is important to identify an object in order to track it, because different cameras may be needed as the object moves through the space, and the intelligent space should determine the appropriate one. This paper describes appearance-based unknown-object tracking with the distributed vision system in the intelligent space. First, we discuss how object color information is obtained and how the color appearance-based model is constructed from these data. Then, we discuss the global color model based on the local color information. The process of learning within the global model and the experimental results are also presented.
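
The appearance model is described only at a high level in the abstract. As a hedged sketch (my own simplification, not the authors' algorithm), the Python below builds per-camera hue/saturation histograms for a tracked person, averages them into a "global" model, and matches a new observation by histogram intersection; the bin counts and the synthetic pixel data are assumptions.

```python
import numpy as np

H_BINS, S_BINS = 16, 16  # assumed histogram resolution

def color_histogram(hs_pixels):
    """Normalized 2-D hue/saturation histogram for one camera's observation."""
    hist, _ = np.histogramdd(hs_pixels, bins=(H_BINS, S_BINS),
                             range=((0.0, 1.0), (0.0, 1.0)))
    return hist / max(hist.sum(), 1.0)

def histogram_intersection(a, b):
    """Similarity in [0, 1]; 1 means identical color distributions."""
    return np.minimum(a, b).sum()

rng = np.random.default_rng(0)
# Synthetic hue/saturation pixels of the same person seen by three cameras.
local_views = [np.clip(rng.normal([0.6, 0.5], 0.05, (500, 2)), 0, 1)
               for _ in range(3)]

# Global model: average of the local (per-camera) color models.
local_models = [color_histogram(v) for v in local_views]
global_model = np.mean(local_models, axis=0)

# Match a new observation against the global model.
new_obs = color_histogram(np.clip(rng.normal([0.6, 0.5], 0.05, (500, 2)), 0, 1))
print("similarity to global model:",
      round(histogram_intersection(global_model, new_obs), 3))
```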

Multiple Human Recognition for Networked Camera based Interactive Control in IoT Space

  • Jin, Taeseok
    • Journal of the Korean Society of Industry Convergence / v.22 no.1 / pp.39-45 / 2019
  • We propose an active color model-based method for tracking the motions of multiple humans using a networked multiple-camera system in an IoT space, a human-robot coexistent system. An IoT space is a space in which many intelligent devices, such as computers and sensors (for example, color CCD cameras), are distributed. Human beings can be a part of the IoT space as well. One of the main goals of an IoT space is to assist humans and to provide various services for them. To be capable of doing that, the IoT space must be able to perform various human-related tasks, one of which is to identify and track multiple objects seamlessly. In an environment where many camera modules are distributed over a network, it is important to identify an object in order to track it, because different cameras may be needed as the object moves through the space, and the IoT space should determine the appropriate one. This paper describes appearance-based unknown-object tracking with the distributed vision system in the IoT space. First, we discuss how object color information is obtained and how the color appearance-based model is constructed from these data. Then, we discuss the global color model based on the local color information. The process of learning within the global model and the experimental results are also presented.
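
The abstract notes that the IoT space must hand tracking over to the appropriate camera as the person moves. The paper does not give the selection rule, so the sketch below uses an assumed one (the nearest camera that has the target inside its field of view) with a made-up camera layout, just to illustrate the handoff idea.

```python
import math

# Assumed camera layout for a networked IoT space: position (x, y),
# viewing direction in radians, and half field-of-view in radians.
CAMERAS = {
    "cam_entrance": ((0.0, 0.0), math.radians(45),  math.radians(35)),
    "cam_corridor": ((8.0, 0.0), math.radians(135), math.radians(35)),
    "cam_lounge":   ((4.0, 6.0), math.radians(-90), math.radians(35)),
}

def select_camera(target_xy):
    """Pick the closest camera that has the target inside its field of view."""
    best, best_dist = None, float("inf")
    for name, ((cx, cy), heading, half_fov) in CAMERAS.items():
        dx, dy = target_xy[0] - cx, target_xy[1] - cy
        dist = math.hypot(dx, dy)
        # Angle between the camera's heading and the direction to the target.
        off_axis = abs((math.atan2(dy, dx) - heading + math.pi) % (2 * math.pi) - math.pi)
        if off_axis <= half_fov and dist < best_dist:
            best, best_dist = name, dist
    return best

# Hand the track over as the person walks across the space.
for position in [(1.0, 1.0), (5.0, 2.0), (4.5, 4.0)]:
    print(position, "->", select_camera(position))
```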

Forward Error Correction based Adaptive data frame format for Optical camera communication

  • Nguyen, Quoc Huy; Kim, Hyung-O; Lee, Minwoo; Cho, Juphil; Lee, Seonhee
    • International journal of advanced smart convergence / v.4 no.2 / pp.94-102 / 2015
  • Optical camera communication (OCC) is an extension of visible light communication. Unlike traditional visible light communication, optical camera communication is an almost no-additional-cost technology, since it takes advantage of the built-in cameras in devices, and it has become a candidate communication protocol for the IoT. A camera module can easily be attached to an IoT device because it is small and flexible. Furthermore, almost every smartphone is equipped with one or two high-quality, high-resolution cameras on the front and back, which can be used for receiving data from LEDs or for positioning. OCC effectively combines illumination and communication, and it can provide communication in areas or environments where radio frequency is not allowed, such as hospitals and airplanes. Many concepts and experiments have been proposed. In this paper, we propose using an Android smartphone camera as the receiver and introduce a new approach to the modulation scheme for the LED transmitter. We also show how Manchester coding can be used to encode bits that are then successfully decoded by the Android smartphone camera. We introduce a new data frame format that is easy to decode and can achieve a high bit rate, and that can easily be adapted to the performance limits of Android or embedded systems.
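
The abstract states that Manchester coding is used to encode the bits sent by the LED and decoded by the smartphone camera. The paper's frame format and timing are not reproduced here; the sketch below only illustrates Manchester encoding and decoding of LED on/off states, assuming the IEEE 802.3 convention and one camera sample per half-bit.

```python
# Minimal Manchester coding sketch for an LED transmitter (IEEE 802.3
# convention assumed: bit 0 -> high-to-low, bit 1 -> low-to-high).

def manchester_encode(bits):
    """Map each data bit to a pair of LED states (1 = on, 0 = off)."""
    states = []
    for b in bits:
        states += [0, 1] if b else [1, 0]
    return states

def manchester_decode(states):
    """Recover data bits from pairs of LED states; None marks an invalid pair."""
    bits = []
    for first, second in zip(states[0::2], states[1::2]):
        if (first, second) == (0, 1):
            bits.append(1)
        elif (first, second) == (1, 0):
            bits.append(0)
        else:
            bits.append(None)  # illegal pair, e.g. a sampling error
    return bits

payload = [1, 0, 1, 1, 0, 0, 1, 0]
led_states = manchester_encode(payload)
print("LED states:", led_states)
print("decoded   :", manchester_decode(led_states))
assert manchester_decode(led_states) == payload
```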

Development of the Strip Off-center Meter Using Line Scan Camera in FM Line (라인 스캔 카메라를 이용한 압연라인의 판쏠림 측정장치 개발)

  • Yoo Ki-Sung; Lee Min-Choel; Choi Yong-Joon
    • Journal of Institute of Control, Robotics and Systems / v.11 no.6 / pp.518-523 / 2005
  • Strip off-center is one of the major problems in a hot strip mill line. The key to good centering is having good equipment, modern control systems, excellent maintenance, and an understanding of the milling process. Therefore, this study aims to develop a system that is useful for quantitative analysis of strip off-center. In this study, the measuring method for strip off-center was thoroughly studied, and a dedicated control board for the line scan camera was designed. A manipulated-type housing for the line scan camera was also developed to adjust the initial parameters for strip width and center line. To check the accuracy and usefulness of the developed system, the FM stand in the #2 Hot Strip Factory at Pohang Steel Works was targeted.
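
As an illustration of the measurement principle only (the actual algorithm and control board are not described in the abstract), the sketch below estimates strip off-center from a single line-scan intensity profile by thresholding the bright strip region and comparing its centre with the sensor centre; the threshold rule, pixel pitch, and profile are assumptions.

```python
import numpy as np

def strip_off_center(profile, pixel_pitch_mm, threshold=None):
    """Estimate strip off-center from one line-scan intensity profile.

    The bright strip is segmented by a simple threshold (assumed: midpoint of
    the min/max intensity); the offset is the strip centre minus the sensor
    centre, converted to millimetres. Returns (offset_mm, width_mm).
    """
    profile = np.asarray(profile, dtype=float)
    if threshold is None:
        threshold = 0.5 * (profile.min() + profile.max())
    strip = np.flatnonzero(profile > threshold)
    if strip.size == 0:
        raise ValueError("no strip detected in this scan line")
    left, right = strip[0], strip[-1]
    centre_px = 0.5 * (left + right)
    offset_mm = (centre_px - (profile.size - 1) / 2.0) * pixel_pitch_mm
    width_mm = (right - left + 1) * pixel_pitch_mm
    return offset_mm, width_mm

# Synthetic scan line: 2048 pixels, strip shifted toward one side (assumed).
line = np.full(2048, 20.0)
line[900:1500] = 220.0
print(strip_off_center(line, pixel_pitch_mm=0.8))
```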

The estimation of camera's position and orientation using Hough Transform and Vanishing Point in the road Image (도로영상에서 허프변환과 무한원점을 이용한 카메라 위치 및 자세 추정 알고리즘)

  • Chae, Jung-Soo; Choi, Seong-Gu; Rho, Do-Whan
    • Proceedings of the KIEE Conference / 2004.11c / pp.511-513 / 2004
  • Camera calibration must certainly be performed to take accurate measurements with an imaging system. Calibration establishes the relation between a measured object and the camera and estimates twelve internal and external parameters. In this paper, we suggest an algorithm that estimates the external parameters from a road image by using the vanishing-point property of parallel straight lines in space. We also use the Hough Transform to estimate an accurate vanishing point; one of its advantages is that it can be applied to each road environment. We assume a variety of environments to prove the usability of the suggested algorithm and show computer simulation results.
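
As a rough sketch of the geometry (not the paper's twelve-parameter calibration), the Python below intersects two lane lines given in Hough (rho, theta) form to obtain the vanishing point and converts it into approximate camera yaw and pitch using assumed pinhole intrinsics; the line parameters are invented detections.

```python
import numpy as np

def hough_line_to_abc(rho, theta):
    """Convert a Hough-space line (rho, theta) to a*x + b*y = c in pixels."""
    return np.cos(theta), np.sin(theta), rho

def vanishing_point(line1, line2):
    """Intersect two image lines given as (rho, theta) pairs."""
    a1, b1, c1 = hough_line_to_abc(*line1)
    a2, b2, c2 = hough_line_to_abc(*line2)
    A = np.array([[a1, b1], [a2, b2]])
    c = np.array([c1, c2])
    return np.linalg.solve(A, c)          # (u, v) pixel coordinates

# Assumed pinhole intrinsics of the road camera (pixels).
fx, fy, cx, cy = 800.0, 800.0, 640.0, 360.0

# Two lane markings detected by the Hough Transform (assumed values).
left_lane  = (642.0, np.radians(57.0))
right_lane = (-55.0, np.radians(123.0))

u, v = vanishing_point(left_lane, right_lane)
yaw   = np.degrees(np.arctan((u - cx) / fx))   # rotation about the vertical axis
pitch = np.degrees(np.arctan((cy - v) / fy))   # positive = camera tilted up
print(f"vanishing point: ({u:.1f}, {v:.1f})  yaw: {yaw:.2f} deg  pitch: {pitch:.2f} deg")
```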


Optical Camera Communication Based Lateral Vehicle Position Estimation Scheme Using Angle of LED Street Lights (LED 가로등의 각도를 이용한 광카메라통신기반 횡방향 차량 위치추정 기법)

  • Jeon, Hui-Jin; Yun, Soo-Keun; Kim, Byung Wook; Jung, Sung-Yoon
    • The Transactions of The Korean Institute of Electrical Engineers / v.66 no.9 / pp.1416-1423 / 2017
  • Lane detection technology is one of the most important issues for car safety and the self-driving capability of autonomous vehicles. This paper introduces an accurate lane detection scheme based on OCC (Optical Camera Communication) for moving vehicles. For lane detection of moving vehicles, the streetlights and the front camera of the vehicle are used as the transmitter and the receiver, respectively. Based on the angle information of multiple streetlights in a captured image, the distance from the sidewalk can be calculated using non-linear regression analysis. Simulation results show that the proposed scheme achieves robust and accurate lane detection.
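
The abstract says the lateral distance from the sidewalk is obtained from the streetlights' angles by non-linear regression, but it does not give the regression model. The sketch below assumes a plausible form, d(theta) = a / tan(theta) + b, and fits it to synthetic calibration data with SciPy; both the model and the data are assumptions, not the paper's.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed regression model: lateral distance from the sidewalk as a function
# of the streetlight's angle in the image.
def lateral_distance(theta_rad, a, b):
    return a / np.tan(theta_rad) + b

# Synthetic calibration data: angles of detected LED streetlights and the
# corresponding measured lateral distances (made-up values, radians/metres).
rng = np.random.default_rng(1)
theta = np.radians(np.linspace(20, 70, 15))
d_true = lateral_distance(theta, a=3.2, b=0.4)
d_meas = d_true + rng.normal(0, 0.05, theta.size)

# Non-linear least-squares fit of the model parameters.
popt, _ = curve_fit(lateral_distance, theta, d_meas, p0=(1.0, 0.0))
print("fitted a, b:", np.round(popt, 3))

# Estimate the lateral position for a newly observed streetlight angle.
print("estimated distance at 35 deg:",
      round(lateral_distance(np.radians(35), *popt), 2), "m")
```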

Inter-vehicular Instruction Transmission Scheme Based on Optical Camera Communication (카메라 통신 기반 리더 차량 추종 기술 연구)

  • Kim, Deok-Kyu; Kim, Min-Jeong; Jung, Sung-Yoon
    • The Transactions of The Korean Institute of Electrical Engineers / v.67 no.7 / pp.878-883 / 2018
  • This paper proposes a method for transmitting instructions between vehicles in motion using RC cars equipped with cameras. Information from the preceding RC car is transmitted by LED using Optical Camera Communication (OCC). The rear RC car follows the preceding one by analyzing the transmitted OCC data through image processing. Through this procedure, the information reception ratio as the distance between the two RC cars changes is confirmed. Experiments show that the proposed scheme enables vehicle platooning.
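
As a hedged sketch of the follower side (the paper does not specify its command table or controller), the Python below maps decoded OCC symbols from the lead RC car's LED to steering and speed commands, with an assumed two-bit instruction set and a simple proportional gap controller.

```python
# Decoded OCC symbols from the leader's LED are mapped to drive commands.
# The instruction set and controller gains below are assumptions.

INSTRUCTIONS = {
    0b00: ("straight", 0.0),
    0b01: ("turn_left", -15.0),
    0b10: ("turn_right", 15.0),
    0b11: ("stop", 0.0),
}

def follow_step(decoded_symbol, measured_gap_m, target_gap_m=1.0, k_speed=0.8):
    """Return (steering_deg, speed) for the rear RC car for one control step."""
    action, steering = INSTRUCTIONS.get(decoded_symbol, ("stop", 0.0))
    if action == "stop":
        return steering, 0.0
    # Simple proportional speed control to keep the desired gap to the leader.
    speed = max(0.0, k_speed * (measured_gap_m - target_gap_m) + 0.5)
    return steering, round(speed, 2)

# Example: symbols decoded from successive camera frames and the gap estimate.
for symbol, gap in [(0b00, 1.4), (0b01, 1.1), (0b10, 0.9), (0b11, 0.8)]:
    print(symbol, "->", follow_step(symbol, gap))
```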

Ensemble of Convolution Neural Networks for Driver Smartphone Usage Detection Using Multiple Cameras

  • Zhang, Ziyi; Kang, Bo-Yeong
    • Journal of information and communication convergence engineering / v.18 no.2 / pp.75-81 / 2020
  • Approximately 1.3 million people die in traffic accidents each year, and smartphone usage while driving is one of the main causes of such accidents. Therefore, detection of smartphone usage by drivers has become an important part of distracted-driving detection. Previous studies have used single-camera methods to collect driver images. However, smartphone usage detection with a single camera can fail if the driver occludes the phone. In this paper, we present a driver smartphone usage detection system that uses multiple cameras to collect driver images from different perspectives and then processes these images with ensemble convolutional neural networks. The ensemble comprises three individual convolutional neural networks with a simple voting system. Each network handles a distinct image perspective, and the voting mechanism selects the final classification. Experimental results verified that the proposed method avoids the limitations of single-camera methods and achieved 98.96% accuracy on our dataset.
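
A minimal sketch of the described voting step, assuming softmax outputs from the three per-camera networks and a tie-break by mean probability (the tie-break rule is my assumption; the abstract only says "simple voting system"):

```python
import numpy as np

CLASSES = ("no_phone", "using_phone")  # assumed binary labels

def ensemble_vote(per_camera_probs):
    """Combine per-camera classifier outputs with a simple majority vote.

    per_camera_probs: array of shape (n_cameras, n_classes) holding softmax
    probabilities from each camera's CNN. Ties fall back to the class with
    the highest mean probability (an assumed tie-break rule).
    """
    probs = np.asarray(per_camera_probs)
    votes = probs.argmax(axis=1)
    counts = np.bincount(votes, minlength=probs.shape[1])
    winners = np.flatnonzero(counts == counts.max())
    if winners.size == 1:
        return CLASSES[winners[0]]
    return CLASSES[int(probs.mean(axis=0).argmax())]

# Example: one view is occluded, so the cameras disagree for this frame.
frame_probs = [
    [0.30, 0.70],   # front camera: phone visible
    [0.80, 0.20],   # side camera: phone occluded by the driver's arm
    [0.25, 0.75],   # overhead camera: phone visible
]
print(ensemble_vote(frame_probs))   # -> "using_phone"
```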