• Title/Summary/Keyword: Multi-camera


Implementation of the Verification and Analysis System for the High-Resolution Stereo Camera (고해상도 다기능 스테레오 카메라 지상 검증 및 분석 시스템 구현)

  • Shin, Sang-Youn; Ko, Hyoungho
    • Korean Journal of Remote Sensing, v.35 no.3, pp.471-482, 2019
  • The mission of the high-resolution camera for lunar exploration is to provide 3D topographic information, which makes it possible to select a suitable landing site or to support accurate landing control using short-range stereo images in real time. In this paper, a ground verification and analysis system using the multi-application stereo camera is proposed to support development of the high-resolution camera for lunar exploration. Mission test items and test plans covering the mission requirements are defined, and the test results are analyzed with the ground verification and analysis system. For a realistic simulation of the lunar orbiter, a target area with characteristics similar to the real lunar surface was chosen, and an aircraft flight was planned to image the area. A DEM is extracted from the stereo images and composed into three-dimensional results (see the stereo-depth sketch below). The high-resolution camera mission requirements for lunar exploration are verified, and the ground data analysis system is developed.
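
The DEM extraction above is described only at a high level; the paper's actual photogrammetric pipeline is not reproduced here. As a minimal sketch of the underlying stereo-to-depth step, the following OpenCV example computes a disparity map and converts it to depth with Z = f·B/d, where the synthetic image pair, focal length, and baseline are assumptions rather than values from the paper.

```python
# Illustrative only: depth from a rectified stereo pair, the basic step behind
# building a DEM-like depth grid. The synthetic image pair, focal length, and
# baseline below are assumptions, not data from the paper.
import cv2
import numpy as np

rng = np.random.default_rng(0)
left = (rng.random((240, 320)) * 255).astype(np.uint8)   # synthetic textured image
right = np.roll(left, -16, axis=1)                        # crude stereo pair, ~16 px disparity

# Semi-global block matching for the disparity map.
matcher = cv2.StereoSGBM_create(minDisparity=0,
                                numDisparities=64,        # must be divisible by 16
                                blockSize=7)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point

# Triangulation: Z = f * B / d (focal length in pixels, baseline in metres).
f_px, baseline_m = 4800.0, 0.6                            # assumed values
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f_px * baseline_m / disparity[valid]
print("median depth over valid pixels [m]:", float(np.median(depth[valid])))
```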

Autonomous Driving Platform using Hybrid Camera System (복합형 카메라 시스템을 이용한 자율주행 차량 플랫폼)

  • Eun-Kyung Lee
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.18 no.6, pp.1307-1312, 2023
  • In this paper, we propose a hybrid camera system that combines cameras with different focal lengths and a LiDAR (Light Detection and Ranging) sensor to address the core components of autonomous driving perception, namely object recognition and distance measurement. Using the proposed hybrid camera system, we extract objects within the scene and generate precise location and distance information for them. We first employ the YOLOv7 algorithm, widely used in autonomous driving for its fast computation, high accuracy, and real-time processing, to recognize objects in the scene. We then use the multi-focal cameras to create depth maps from which object positions and distance information are generated. To enhance distance accuracy, we integrate the 3D distance information obtained from the LiDAR sensor with the generated depth maps (a fusion sketch follows below). We introduce an autonomous vehicle platform that, based on the proposed hybrid camera system, perceives its surroundings more accurately during operation and provides precise 3D spatial location and distance information. We anticipate that this will improve the safety and efficiency of autonomous vehicles.
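
The depth-map/LiDAR fusion above is only outlined in the abstract. The sketch below illustrates one simple way such a fusion could look: LiDAR points are projected into the image with assumed intrinsics and extrinsics, and a camera depth map is rescaled to agree with the sparse LiDAR depths. All values and the global-scale correction are assumptions, not the paper's method.

```python
# Illustrative sketch: projecting LiDAR points into an image and using them
# to rescale a (relative) camera depth map. The intrinsics, extrinsics, and
# array shapes are assumptions, not values from the paper.
import numpy as np

K = np.array([[1000.0, 0.0, 640.0],      # assumed pinhole intrinsics
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
T_cam_lidar = np.eye(4)                   # assumed LiDAR-to-camera extrinsics

def rescale_depth_with_lidar(points_lidar, depth_map):
    """Scale a relative depth map so it agrees with sparse LiDAR depths."""
    h, w = depth_map.shape
    pts = np.c_[points_lidar, np.ones(len(points_lidar))] @ T_cam_lidar.T
    z = pts[:, 2]
    uv = pts[:, :3] @ K.T
    u, v = (uv[:, 0] / z).astype(int), (uv[:, 1] / z).astype(int)
    ok = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # Single global scale between camera depth and metric LiDAR depth.
    scale = np.median(z[ok] / depth_map[v[ok], u[ok]])
    return depth_map * scale

# Example with synthetic data.
fused = rescale_depth_with_lidar(np.random.rand(500, 3) * [10, 10, 30] + [0, 0, 5],
                                 np.random.rand(720, 1280) + 0.5)
print("mean fused depth [m]:", float(fused.mean()))
```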

The Walkers Tracking Algorithm using Color Informations on Multi-Video Camera (다중 비디오카메라에서 색 정보를 이용한 보행자 추적)

  • 신창훈; 이주신
    • Journal of the Korea Institute of Information and Communication Engineering, v.8 no.5, pp.1080-1088, 2004
  • In this paper, a tracking algorithm for moving objects of interest is proposed that uses color information from multiple video cameras and is robust to variations in intensity, shape, and background. After the RGB color space of the images from each camera is converted to the HSI color space, moving objects are detected in the hue channel against the background image using the difference-image and integral-projection methods. The hue values of the detected moving regions are quantized into 24 levels covering 0° to 360°. The three most populated hue levels and the differences among them are used as the feature parameters of the moving objects. To verify the proposed method, human images with variations in intensity and shape, and human images with variations in intensity, shape, and background, were used as the moving objects. In the surveillance experiments, the variation of the detected person's hue distribution level at each camera was within 2 levels, confirming that the person of interest can be tracked and monitored automatically across cameras using the feature parameters (a hue-histogram sketch follows below).
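
The 24-level hue feature is concrete enough to sketch directly. The example below quantizes hue into 24 bins over 0° to 360° and keeps the three most populated bins and their differences, using OpenCV's HSV hue channel as a stand-in for the paper's HSI conversion; the region of interest is synthetic.

```python
# Minimal sketch of the 24-level hue feature described above: quantize hue
# into 24 bins over 0 to 360 degrees and keep the three most populated bins.
# OpenCV's HSV hue range (0-179) stands in for the paper's HSI conversion.
import cv2
import numpy as np

def hue_feature(bgr_roi, bins=24):
    hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)
    hue_deg = hsv[:, :, 0].astype(np.float32) * 2.0           # 0..179 -> 0..358 degrees
    hist, _ = np.histogram(hue_deg, bins=bins, range=(0.0, 360.0))
    top3 = np.argsort(hist)[-3:][::-1]                         # three dominant hue levels
    return top3, np.diff(np.sort(top3))                        # levels + differences between them

# Example on a synthetic region of interest.
roi = np.random.randint(0, 256, (64, 32, 3), dtype=np.uint8)
levels, level_diffs = hue_feature(roi)
print("dominant hue levels:", levels, "differences:", level_diffs)
```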

Platform Calibration of an Aerial Multi-View Camera System (항공용 다각사진 카메라 시스템의 플랫폼 캘리브레이션)

  • Lee, Chang-No; Kim, Chang-Jae; Seo, Sang-Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.28 no.3, pp.369-375, 2010
  • Since multi-view images can be used for 3D visualization as well as surveying, system calibration is an essential procedure. The cameras in the system are mounted on a holder, so their relative locations and attitudes are fixed. Therefore, the locations and attitudes of the perspective centers of the four oblique-looking cameras can be calculated from the location and attitude of the nadir-looking camera together with the boresight values between the cameras. Accordingly, this research focuses on analyzing the relative location and attitude between the nadir- and oblique-looking cameras based on the exterior orientation parameters obtained from aerial triangulation of real multi-view images (a boresight sketch follows below). We obtained high standard deviations for the relative locations between the nadir and oblique cameras. The standard deviations of the relative attitudes were low when only the exterior orientations of the oblique-looking cameras were allowed to be adjusted, and also when only the attitudes of the cameras, rather than all of their exterior orientation parameters, were considered.
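
The boresight relation the abstract relies on can be written out directly: given the exterior orientations of the nadir camera and of an oblique camera, the relative rotation and lever arm follow from a rotation composition. A minimal sketch is shown below; the numerical values and the world-to-camera rotation convention are assumptions for demonstration, not the paper's calibration data.

```python
# Illustrative sketch of deriving a boresight (relative rotation and lever
# arm) between a nadir and an oblique camera from their exterior
# orientations. The numerical values are assumptions for demonstration.
import numpy as np
from scipy.spatial.transform import Rotation as R

# Exterior orientation: world-to-camera rotation and camera centre in the world frame.
R_nadir = R.from_euler("xyz", [0.0, 0.0, 10.0], degrees=True).as_matrix()
C_nadir = np.array([1000.0, 2000.0, 1500.0])

R_oblique = R.from_euler("xyz", [45.0, 0.0, 10.0], degrees=True).as_matrix()
C_oblique = C_nadir + np.array([0.15, 0.00, -0.05])   # assumed mounting offset

# Relative attitude (boresight) and lever arm expressed in the nadir camera frame.
R_bore = R_oblique @ R_nadir.T
lever_arm = R_nadir @ (C_oblique - C_nadir)

print("boresight angles [deg]:", R.from_matrix(R_bore).as_euler("xyz", degrees=True))
print("lever arm [m]:", lever_arm)
```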

Low-noise reconstruction method for coded-aperture gamma camera based on multi-layer perceptron

  • Zhang, Rui; Tang, Xiaobin; Gong, Pin; Wang, Peng; Zhou, Cheng; Zhu, Xiaoxiang; Liang, Dajian; Wang, Zeyu
    • Nuclear Engineering and Technology, v.52 no.10, pp.2250-2261, 2020
  • Accurate localization of radioactive materials is crucial in homeland security and radiological emergencies. The coded-aperture gamma camera is an attractive solution for such applications and can be developed into a portable real-time imaging device. However, traditional reconstruction methods cannot effectively deal with signal-independent noise, which hinders low-noise real-time imaging. In this study, a novel reconstruction method with excellent noise-suppression capability based on a multi-layer perceptron (MLP) is proposed (a minimal sketch of such a reconstruction network follows below). A coded-aperture gamma camera based on a pixelated detector and a coded-aperture mask was constructed, and the process of radioactive-source imaging was simulated. Results showed that the MLP method performs better in noise suppression than the traditional correlation analysis method. When a Co-57 source with an activity of 1 MBq was placed at 289 different positions within the field of view, corresponding to 289 different pixels in the reconstructed image, the average contrast-to-noise ratio (CNR) obtained by the MLP method was 21.82, whereas that obtained by the correlation analysis method was 5.85. The variance in CNR of the MLP method is larger than that of correlation analysis, which indicates that the MLP method can be unstable under certain conditions.
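
The abstract does not specify the network architecture, so the following is only a minimal sketch of the general idea: a fully connected network learns a mapping from a flattened detector image to a flattened source image. The geometry, layer sizes, and synthetic training data are assumptions, not the paper's configuration.

```python
# Minimal sketch of an MLP-based reconstruction in the spirit described
# above: a fully connected network maps a flattened detector image to a
# flattened source image. Shapes, layer sizes, and the synthetic training
# data are assumptions, not the paper's configuration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, det_pixels, img_pixels = 2000, 16 * 16, 17 * 17   # assumed geometry (289 image pixels)

# Synthetic stand-in data: random mask responses plus Poisson-like noise.
A = rng.random((det_pixels, img_pixels))          # stand-in for the coded-aperture response
X_true = rng.random((n_samples, img_pixels))
Y_det = X_true @ A.T + rng.poisson(1.0, (n_samples, det_pixels))

mlp = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=300, random_state=0)
mlp.fit(Y_det, X_true)                            # learn detector -> source mapping

recon = mlp.predict(Y_det[:1]).reshape(17, 17)    # reconstructed 17x17 source image
print("reconstructed image shape:", recon.shape)
```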

Tracking and Recognition of vehicle and pedestrian for intelligent multi-visual surveillance systems (지능형 다중 화상감시시스템을 위한 움직이는 물체 추적 및 보행자/차량 인식 방법)

  • Lee, Saac; Cho, Jae-Soo
    • Journal of the Korea Institute of Information and Communication Engineering, v.19 no.2, pp.435-442, 2015
  • In this paper, we propose a moving-object tracking and pedestrian/vehicle recognition method for an intelligent multi-visual surveillance system. The system consists of several fixed cameras and one calibrated PTZ camera, and automatically tracks and recognizes the detected moving objects. The fixed wide-angle cameras monitor large open areas, but the moving objects in their images are too small to view in detail; the PTZ camera, by tracking and zooming in on a target, extends the monitored area and enhances image quality. The proposed system determines whether each detected moving object is a pedestrian or a vehicle using an SVM (a minimal classification sketch follows below). To reduce the tracking error, an improved camera calibration algorithm between the fixed cameras and the PTZ camera is proposed. Various experimental results show the effectiveness of the proposed system.
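
The SVM decision step can be sketched with a placeholder feature. The example below classifies detected objects as pedestrian or vehicle from a flattened, resized grayscale patch; the feature choice and the synthetic training data are assumptions, since the abstract does not specify the features used.

```python
# Minimal sketch of the SVM decision step described above: an SVM classifies
# each detected object as pedestrian or vehicle from a feature vector. The
# feature extraction (a flattened, resized grayscale patch) and the training
# data are placeholder assumptions.
import numpy as np
import cv2
from sklearn.svm import SVC

def patch_feature(gray_patch, size=(32, 64)):
    # Resize to a fixed size and flatten to a unit-range feature vector.
    return cv2.resize(gray_patch, size).flatten().astype(np.float32) / 255.0

rng = np.random.default_rng(1)
# Synthetic stand-in training patches: label 0 = pedestrian, 1 = vehicle.
X = np.array([patch_feature(rng.integers(0, 256, (80, 40), dtype=np.uint8)) for _ in range(200)])
y = rng.integers(0, 2, 200)

clf = SVC(kernel="rbf", C=1.0).fit(X, y)
label = clf.predict(patch_feature(rng.integers(0, 256, (80, 40), dtype=np.uint8))[None])
print("pedestrian" if label[0] == 0 else "vehicle")
```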

Object Tracking Framework of Video Surveillance System based on Non-overlapping Multi-camera (비겹침 다중 IP 카메라 기반 영상감시시스템의 객체추적 프레임워크)

  • Han, Min-Ho; Park, Su-Wan; Han, Jong-Wook
    • Journal of the Korea Institute of Information Security & Cryptology, v.21 no.6, pp.141-152, 2011
  • With growing effort and interest in security techniques for diverse surveillance environments, intelligent surveillance systems that automatically detect and track target objects across multiple cameras are being actively developed in the security community. In this paper, we propose an effective visual surveillance system that can track objects continuously across multiple non-overlapping cameras. The proposed object tracking scheme consists of an object tracking module and a tracking management module, which are based on a hand-off scheme and protocol. The object tracking module, which runs on each IP camera, provides object tracking information generation, object tracking information distribution, and similarity comparison functions. The tracking management module, which runs on the video control server, provides real-time reception of object tracking information, retrieval of object tracking information, and IP camera control (a hand-off record sketch follows below). Because the proposed scheme does not rely on a particular surveillance system or object tracking technique, it provides a comprehensive framework that can be used in a diverse range of applications.
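
The hand-off protocol itself is not detailed in the abstract, so the sketch below only illustrates the general shape of such a mechanism: a tracking record generated on one camera is compared with a record from another camera using an appearance descriptor. The record fields and the cosine-similarity rule are assumptions, not the paper's protocol.

```python
# Illustrative sketch of a camera-to-camera hand-off record and a similarity
# check, in the spirit of the tracking/management modules described above.
# The fields and the cosine-similarity matching rule are assumptions.
from dataclasses import dataclass, field
import time
import numpy as np

@dataclass
class TrackHandoff:
    camera_id: str
    track_id: int
    timestamp: float = field(default_factory=time.time)
    appearance: np.ndarray = field(default_factory=lambda: np.zeros(128))  # appearance descriptor

def same_object(a: TrackHandoff, b: TrackHandoff, threshold: float = 0.8) -> bool:
    """Decide whether two hand-off records describe the same object."""
    cos = float(a.appearance @ b.appearance /
                (np.linalg.norm(a.appearance) * np.linalg.norm(b.appearance) + 1e-9))
    return cos >= threshold

# Example: a record generated on camera A is compared against one from camera B.
desc = np.random.rand(128)
rec_a = TrackHandoff("cam-A", 7, appearance=desc)
rec_b = TrackHandoff("cam-B", 3, appearance=desc + 0.01 * np.random.rand(128))
print(same_object(rec_a, rec_b))
```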

Tackling range uncertainty in proton therapy: Development and evaluation of a new multi-slit prompt-gamma camera (MSPGC) system

  • Youngmo Ku; Sehoon Choi; Jaeho Cho; Sehyun Jang; Jong Hwi Jeong; Sung Hun Kim; Sungkoo Cho; Chan Hyeong Kim
    • Nuclear Engineering and Technology, v.55 no.9, pp.3140-3149, 2023
  • In theory, the sharp dose falloff at the distal end of a proton beam allows a highly conformal dose to the target. In practice, however, this conformity has not been fully achieved, primarily because of beam range uncertainty, which is approximately 4% and varies slightly across institutions. To address this issue, we developed a new range verification system prototype: a multi-slit prompt-gamma camera (MSPGC). The system features high prompt-gamma detection sensitivity, an advanced range estimation algorithm, and a precise camera positioning system. We evaluated the range measurement precision of the prototype for single spot beams with varying energies, proton quantities, and positions, as well as for spot-scanning proton beams in a simulated SSPT treatment using a phantom. Our results demonstrated high accuracy (<0.4 mm) in range measurement for the tested beam energies and positions. Measurement precision increased significantly with the number of protons, reaching 1% precision with 5 × 10⁸ protons. For spot-scanning proton beams, aggregating spots over 7 mm or more ensured more than 5 × 10⁸ protons per aggregate, achieving 1% range measurement precision (a spot-aggregation sketch follows below). Based on these findings, we anticipate that clinical application of the new prototype will reduce range uncertainty from approximately 4% to 1% or less.
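
The spot-aggregation idea can be sketched with a simple greedy grouping: neighboring spots within a radius are combined until the aggregate reaches the proton threshold quoted above. The grouping rule and the synthetic spot map below are assumptions, not the paper's algorithm.

```python
# Illustrative sketch of spot aggregation: nearby spots are grouped until the
# aggregated proton count reaches a threshold (here 5e8), so range estimation
# can be done per aggregate rather than per spot. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(2)
spots_xy = rng.uniform(0, 50, (300, 2))                      # spot positions [mm], synthetic
protons = rng.integers(10_000_000, 80_000_000, 300)          # protons per spot, synthetic

def aggregate(spots_xy, protons, radius_mm=7.0, min_protons=5e8):
    """Greedy aggregation of spots within radius_mm around each seed spot."""
    unused = np.ones(len(spots_xy), dtype=bool)
    groups = []
    for i in np.argsort(-protons):                           # start from the strongest spots
        if not unused[i]:
            continue
        d = np.linalg.norm(spots_xy - spots_xy[i], axis=1)
        members = np.where(unused & (d <= radius_mm))[0]
        if protons[members].sum() >= min_protons:
            groups.append(members)
            unused[members] = False
    return groups

groups = aggregate(spots_xy, protons)
print(len(groups), "aggregates with >= 5e8 protons each")
```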

A Study on Estimating Skill of Smartphone Camera Position using Essential Matrix (필수 행렬을 이용한 카메라 이동 위치 추정 기술 연구)

  • Oh, Jongtaek; Kim, Hogyeom
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.22 no.6, pp.143-148, 2022
  • Estimating the camera's location by analyzing images taken continuously with the monocular camera of a mobile smartphone or robot is very important for metaverse, mobile robot, and user location services. So far, PnP-related techniques have been applied to calculate the position. In this paper, a new estimation method is proposed: the camera's direction of motion is obtained from the essential matrix of the epipolar geometry between successive images, and the camera's successive positions are then calculated through geometric equations (an essential-matrix sketch follows below). Its accuracy was verified through simulation. This method is completely different from existing methods and can be applied even when only one or a few matching feature points are available across two or more images.
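
The essential-matrix step maps directly onto standard tooling. The sketch below estimates E from matched points in two views with OpenCV and recovers the relative rotation and the unit-scale translation direction; the intrinsics and the synthetic scene are assumptions, and the paper's equations for recovering the positions with scale are not shown.

```python
# Minimal sketch of the essential-matrix step described above: estimate E
# from matched points in two views and recover the relative rotation and the
# (unit-scale) translation direction. Intrinsics and matches are synthetic.
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                 # assumed smartphone intrinsics

# Synthetic scene and a known motion, used only to generate matched points.
rng = np.random.default_rng(3)
pts3d = np.c_[rng.uniform(-1, 1, (100, 2)), rng.uniform(4, 8, 100)]
R_true, _ = cv2.Rodrigues(np.array([0.0, 0.05, 0.0]))
t_true = np.array([[0.2], [0.0], [0.02]])

def project(P, R, t):
    cam = P @ R.T + t.ravel()                   # points expressed in the camera frame
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

pts1 = project(pts3d, np.eye(3), np.zeros(3))
pts2 = project(pts3d, R_true, t_true)

E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R_est, t_est, _ = cv2.recoverPose(E, pts1, pts2, K)
print("estimated translation direction:", t_est.ravel())   # direction only, scale is unknown
```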