• Title/Abstract/Keyword: Single camera

Search results: 766

Sensors Comparison for Observation of floating structure's movement

  • Trieu, Hang Thi; Han, Dong Yeob
    • 한국항해항만학회 Conference Proceedings / 한국항해항만학회 2014 Fall Conference / pp.219-221 / 2014
  • The objective of this paper is to simulate the dynamic behavior of a floating structure model using image processing and close-range photogrammetry instead of contact sensors. Previously, the movement of the structure was obtained through the exterior orientation of a single camera estimated by space resection. The inverse resection yields the six orientation parameters of the floating structure with respect to the camera coordinate system. The single-camera solution is of interest in applications characterized by cost restrictions, unfavorable observation conditions, or the synchronization demands that arise when using multiple cameras. This article discusses the theoretical determination of camera exterior orientation based on the Direct Linear Transformation (DLT) and photogrammetric resection with least-squares adjustment. The proposed method was used to monitor the motion of a floating model. The six-degrees-of-freedom (6-DOF) results from inverse resection show that appropriate initial values from the DLT can be applied effectively in the least-squares adjustment to obtain precise exterior orientation parameters. Additionally, the close-range photogrammetry results were verified against total station measurements. The proposed method can therefore be considered an efficient solution for simulating the movement of a floating structure.

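As an illustration of the resection step described in this abstract, the following minimal sketch estimates a camera's exterior orientation from known 3D targets and their image measurements with OpenCV, using a closed-form initialization (EPnP here, standing in for the DLT) followed by iterative least-squares refinement. The target layout, image coordinates, and intrinsics are hypothetical; this is not the authors' implementation.

```python
# Sketch: single-camera exterior orientation (space resection) with a closed-form
# initial pose refined by iterative least squares. All numbers are hypothetical.
import numpy as np
import cv2

# 3D coordinates of targets fixed on the floating model (object frame, metres)
object_points = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [0.5, 0.4, 0.0],
                          [0.0, 0.4, 0.0], [0.25, 0.2, 0.1], [0.1, 0.3, 0.05]])

# Measured image coordinates of the same targets (pixels)
image_points = np.array([[320.1, 240.5], [410.3, 242.0], [412.7, 310.9],
                         [318.9, 308.4], [366.0, 270.2], [340.5, 292.7]])

K = np.array([[800.0, 0.0, 320.0],     # camera matrix (assumed pre-calibrated)
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                     # lens distortion assumed negligible

# Closed-form initial estimate, then Levenberg-Marquardt least-squares refinement
ok, rvec0, tvec0 = cv2.solvePnP(object_points, image_points, K, dist,
                                flags=cv2.SOLVEPNP_EPNP)
rvec, tvec = cv2.solvePnPRefineLM(object_points, image_points, K, dist, rvec0, tvec0)

# Invert the camera pose to obtain the 6-DOF of the structure w.r.t. the camera
R, _ = cv2.Rodrigues(rvec)
t_struct = -R.T @ tvec                 # position of the structure frame
R_struct = R.T
roll = np.degrees(np.arctan2(R_struct[2, 1], R_struct[2, 2]))
pitch = np.degrees(np.arcsin(-R_struct[2, 0]))
yaw = np.degrees(np.arctan2(R_struct[1, 0], R_struct[0, 0]))
print("translation (m):", t_struct.ravel())
print("roll/pitch/yaw (deg): %.2f %.2f %.2f" % (roll, pitch, yaw))
```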

Remote Distance Measurement from a Single Image by Automatic Detection and Perspective Correction

  • Layek, Md Abu; Chung, TaeChoong; Huh, Eui-Nam
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 8 / pp.3981-4004 / 2019
  • This paper proposes a novel method for locating objects in real space from a single remote image and measuring the actual distances between them through automatic detection and perspective transformation. The dimensions of the real space are known in advance. First, the corner points of the region of interest are detected in the image using deep learning. Then, based on the corner points, the region of interest (ROI) is extracted and made proportional to the real space by applying a warp-perspective transformation. Finally, the objects are detected and mapped to their real-world locations. Removing distortion from the image using camera calibration improves the accuracy in most cases. The deep learning framework Darknet is used for detection, with the necessary modifications to integrate perspective transformation, camera calibration, and un-distortion. Experiments are performed with two types of cameras, one with barrel distortion and the other with pincushion distortion. The results show that the differences between the calculated distances and those measured in real space with measuring tapes are very small, approximately 1 cm on average. Furthermore, automatic corner detection allows the system to be used with any camera that has a fixed pose or is in motion, and using more points significantly enhances the accuracy of the real-world mapping even without camera calibration. The perspective transformation also increases the object detection efficiency by bringing all objects to a uniform scale.
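
The mapping step described above can be illustrated with a short sketch: four detected corner points of a region with known real-world dimensions define a perspective transform, and detected object positions are mapped through it to measure distances. The corner coordinates, region size, and object positions below are hypothetical, and cv2.getPerspectiveTransform stands in for the paper's Darknet-based pipeline.

```python
# Sketch: map image positions to real-world coordinates via a perspective transform
# built from four corners of a region whose real size is known. Numbers are made up.
import numpy as np
import cv2

# Corners of the region of interest in the image (pixels), e.g. from a detector
corners_img = np.float32([[102, 458], [921, 433], [1180, 868], [35, 905]])

# The same corners in real space; the region is known to be 4.0 m x 3.0 m
W, H = 4.0, 3.0
corners_real = np.float32([[0, 0], [W, 0], [W, H], [0, H]])

H_mat = cv2.getPerspectiveTransform(corners_img, corners_real)

# Ground-contact points of two detected objects in the image (pixels)
objects_img = np.float32([[[400, 700]], [[850, 620]]])
objects_real = cv2.perspectiveTransform(objects_img, H_mat).reshape(-1, 2)

dist = np.linalg.norm(objects_real[0] - objects_real[1])
print("object positions (m):", objects_real)
print("distance between objects: %.2f m" % dist)
```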

EpiLoc: Deep Camera Localization Under Epipolar Constraint

  • Xu, Luoyuan; Guan, Tao; Luo, Yawei; Wang, Yuesong; Chen, Zhuo; Liu, WenKai
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 6 / pp.2044-2059 / 2022
  • Recent works have shown that geometric constraints can be harnessed to boost the performance of CNN-based camera localization. However, existing strategies are limited to imposing image-level constraints between pose pairs, which is weak and coarse-grained. In this paper, we introduce a pixel-level epipolar geometry constraint into a vanilla localization framework without requiring ground-truth 3D information. Dubbed EpiLoc, our method establishes the geometric relationship between pixels in different images by utilizing epipolar geometry, thus forcing the network to regress more accurate poses. We also propose a variant called EpiSingle to cope with non-sequential training images, which constructs the epipolar geometry constraint from a single image in a self-supervised manner. Extensive experiments on the public indoor 7Scenes and outdoor RobotCar datasets show that the proposed pixel-level constraint is valuable and helps EpiLoc achieve state-of-the-art results in end-to-end camera localization.
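
A rough sketch of the kind of pixel-level epipolar term the abstract describes: given two (regressed) camera poses, matched pixels should satisfy x2^T F x1 = 0, so the algebraic residual can be penalized as a loss. The sketch below uses NumPy rather than a deep-learning framework, and the intrinsics, poses, and matches are hypothetical; it is not the EpiLoc implementation.

```python
import numpy as np

def skew(t):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_residuals(R1, t1, R2, t2, K, pts1, pts2):
    """Algebraic epipolar residuals |x2^T F x1| for matched pixel pairs,
    with F built from the relative pose of the two (regressed) cameras."""
    R_rel = R2 @ R1.T                      # pose of camera 2 w.r.t. camera 1
    t_rel = t2 - R_rel @ t1
    E = skew(t_rel) @ R_rel                # essential matrix
    F = np.linalg.inv(K).T @ E @ np.linalg.inv(K)   # fundamental matrix
    x1 = np.c_[pts1, np.ones(len(pts1))]   # homogeneous pixel coordinates
    x2 = np.c_[pts2, np.ones(len(pts2))]
    return np.abs(np.sum(x2 * (F @ x1.T).T, axis=1))

# Hypothetical intrinsics, poses, and pixel matches
K = np.array([[525.0, 0.0, 320.0], [0.0, 525.0, 240.0], [0.0, 0.0, 1.0]])
R1, t1 = np.eye(3), np.zeros(3)
th = np.radians(5.0)
R2 = np.array([[np.cos(th), 0.0, np.sin(th)],
               [0.0, 1.0, 0.0],
               [-np.sin(th), 0.0, np.cos(th)]])
t2 = np.array([0.2, 0.0, 0.0])
pts1 = np.array([[300.0, 220.0], [340.0, 260.0], [400.0, 200.0]])
pts2 = np.array([[310.5, 221.0], [351.2, 262.3], [412.8, 201.5]])

# These residuals would be summed (or robustified) into a training loss term
print(epipolar_residuals(R1, t1, R2, t2, K, pts1, pts2))
```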

영상처리기법을 이용한 그린시트 측정알고리즘 개발 (Development of Green-Sheet Measurement Algorithm by Image Processing Technique)

  • 표창률; 양상모; 강성훈; 윤성만
    • 소성∙가공 / Vol. 16, No. 4 / pp.313-316 / 2007
  • The purpose of this paper is to develop a measurement algorithm for green sheets based on digital image processing. Low Temperature Co-fired Ceramic (LTCC) technology produces multilayer circuits from single tapes onto which conductive, dielectric and/or resistive pastes are applied; these single green sheets must be laminated together and fired at the same time. The main function of the green-sheet film measurement algorithm is to measure the position and size of the punched holes in each single layer. A line-scan camera coupled with a motorized X-Y stage is used, and an overlapping method is applied so that the entire film area can be measured over several scanning steps.
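
A minimal sketch of the hole-measurement step: threshold a scanned green-sheet image, extract hole contours, and report centre position and diameter in physical units. The synthetic image and the assumed line-scan resolution below are illustrative only, not the authors' algorithm.

```python
# Sketch: locate punched holes in a (synthetic) green-sheet scan and report their
# centre position and diameter in millimetres. Scale and image are hypothetical.
import numpy as np
import cv2

UM_PER_PIXEL = 20.0                      # assumed line-scan resolution (20 um/pixel)

# Synthetic "scan": bright sheet with dark punched holes
img = np.full((600, 800), 200, np.uint8)
cv2.circle(img, (200, 150), 12, 0, -1)
cv2.circle(img, (600, 420), 15, 0, -1)

# Holes appear dark: invert-threshold and extract their contours
_, holes = cv2.threshold(img, 100, 255, cv2.THRESH_BINARY_INV)
contours, _ = cv2.findContours(holes, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    (cx, cy), r = cv2.minEnclosingCircle(c)
    print("hole at (%.2f, %.2f) mm, diameter %.3f mm"
          % (cx * UM_PER_PIXEL / 1000.0,
             cy * UM_PER_PIXEL / 1000.0,
             2 * r * UM_PER_PIXEL / 1000.0))
```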

인공지능 이미지 인식 기술을 활용한 위험 알림 CCTV 서비스 (Danger Alert Surveillance Camera Service using AI Image Recognition technology)

  • 이하린; 김유진; 이민아; 문재현
    • 한국정보처리학회 Conference Proceedings / 한국정보처리학회 2020 Fall Conference / pp.814-817 / 2020
  • The number of single-person households is increasing every year, and concerns about their safety and exposure to crime are also high; in particular, crimes targeting women are increasing. While the home surveillance camera applications mostly used by single-person households provide only intrusion detection, this service utilizes AI image recognition technologies such as face recognition and object detection to detect theft, violence, strangers and intrusion. Through this service, users can receive security-related notifications, relieve their anxiety, and prevent crimes.
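
The alerting logic of such a service might look roughly like the sketch below, which maps per-frame recognition results to the notification categories mentioned in the abstract; detect() and the event rules are hypothetical stand-ins for the actual face-recognition / object-detection models and service logic.

```python
# Sketch: turn per-frame recognition results into alert categories (hypothetical rules).
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "person", "knife"
    identity: str       # "resident", "unknown", or "" for non-person objects

def detect(frame):
    """Placeholder for the AI recognition step (hypothetical)."""
    return [Detection("person", "unknown"), Detection("knife", "")]

def classify_event(detections, home_is_empty):
    labels = {d.label for d in detections}
    stranger = any(d.label == "person" and d.identity == "unknown" for d in detections)
    if stranger and home_is_empty:
        return "intrusion"
    if stranger and ("knife" in labels or "bat" in labels):
        return "violence"
    if stranger:
        return "stranger"
    return None

event = classify_event(detect(frame=None), home_is_empty=True)
if event:
    print(f"ALERT: {event} detected -> push notification to user")
```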

3D reconstruction using a method of the planar homography from uncalibrated camera

  • Yoon Yong In; Choi Jong Soo; Kwon Jun Sik; Kwon Oh Keun
    • 대한전자공학회 Conference Proceedings / 대한전자공학회 2004 Conference / pp.804-809 / 2004
  • Camera calibration is essential for recovering a 3D reconstruction from uncalibrated images. This paper proposes a new camera calibration technique that uses the homographies of planar patterns in a single image containing three planar patterns. Since the proposed method is computed from the homographies among the three planar patterns in a single image, it recovers 3D objects more easily and simply than conventional approaches. Experimental results show that the proposed method performs better than conventional methods, and examples of 3D reconstruction from an image sequence using the proposed algorithm are demonstrated.

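A sketch of the general idea of homography-based calibration (Zhang-style constraints on the image of the absolute conic), which is closely related to what this abstract describes but is not the authors' exact algorithm. The three homographies are synthesized from a known camera so the recovered intrinsics can be checked against the ground truth.

```python
# Sketch: recover camera intrinsics K from three plane-to-image homographies,
# using the constraints h1^T B h2 = 0 and h1^T B h1 = h2^T B h2 on B = s*K^-T K^-1.
import numpy as np

def v_ij(H, i, j):
    hi, hj = H[:, i], H[:, j]
    return np.array([hi[0]*hj[0],
                     hi[0]*hj[1] + hi[1]*hj[0],
                     hi[1]*hj[1],
                     hi[2]*hj[0] + hi[0]*hj[2],
                     hi[2]*hj[1] + hi[1]*hj[2],
                     hi[2]*hj[2]])

def intrinsics_from_homographies(Hs):
    V = []
    for H in Hs:
        V.append(v_ij(H, 0, 1))                    # h1^T B h2 = 0
        V.append(v_ij(H, 0, 0) - v_ij(H, 1, 1))    # h1^T B h1 = h2^T B h2
    b = np.linalg.svd(np.asarray(V))[2][-1]        # null vector of the stacked system
    B = np.array([[b[0], b[1], b[3]],
                  [b[1], b[2], b[4]],
                  [b[3], b[4], b[5]]])
    if B[0, 0] < 0:                                # fix the arbitrary sign from the SVD
        B = -B
    L = np.linalg.cholesky(B)                      # B = L L^T with L proportional to K^-T
    K = np.linalg.inv(L).T
    return K / K[2, 2]

# Synthesize three homographies H = K [r1 r2 t] from a known camera (ground truth)
K_true = np.array([[900.0, 0.0, 320.0], [0.0, 880.0, 260.0], [0.0, 0.0, 1.0]])
rng = np.random.default_rng(0)
Hs = []
for _ in range(3):
    ax, ay = rng.uniform(-0.4, 0.4, 2)
    Rx = np.array([[1, 0, 0], [0, np.cos(ax), -np.sin(ax)], [0, np.sin(ax), np.cos(ax)]])
    Ry = np.array([[np.cos(ay), 0, np.sin(ay)], [0, 1, 0], [-np.sin(ay), 0, np.cos(ay)]])
    R = Rx @ Ry
    t = rng.uniform(-1, 1, 3) + np.array([0.0, 0.0, 4.0])
    Hs.append(K_true @ np.c_[R[:, 0], R[:, 1], t])

print("recovered K:\n", np.round(intrinsics_from_homographies(Hs), 2))
print("ground-truth K:\n", K_true)
```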

거울 및 단일 카메라를 이용한 3차원 발 스캐너 (A 3D Foot Scanner Using Mirrors and Single Camera)

  • 정성엽; 박상근
    • 한국CDE학회논문집 / Vol. 16, No. 1 / pp.11-20 / 2011
  • A structured laser beam is often used to scan an object and build a 3D model. Multiple cameras are usually required to see occluded areas, which is the main reason for the high price of such scanners. In this paper, a low-cost 3D foot scanner is developed using one camera and two mirrors. The camera and the two mirrors are located below and above the foot, respectively. The occluded area, the top of the foot, is reflected by the mirrors, so the camera measures 3D point data of the bottom and the top of the foot at the same time. The whole foot model is then reconstructed after a symmetry transformation of the data reflected by the mirrors. The reliability of the scan data depends on the accuracy of the parameters between the camera and the laser, so a calibration method is also proposed and verified by experiments. The experimental results show that the worst errors of the system are 2 mm along the x, y, and z directions.
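
The "symmetry transformation" of the mirrored data amounts to reflecting the measured points about the mirror plane. A minimal sketch follows, with hypothetical plane parameters and points; the actual scanner would use the calibrated mirror geometry.

```python
# Sketch: reflect points measured via a planar mirror back to their true positions.
import numpy as np

def reflect_about_plane(points, n, p0):
    """Reflect 3D points about the plane with normal n passing through point p0."""
    n = n / np.linalg.norm(n)
    d = (points - p0) @ n                 # signed distance of each point to the plane
    return points - 2.0 * d[:, None] * n

# Mirror plane above the foot (hypothetical calibration result)
n = np.array([0.0, -0.7071, 0.7071])      # unit normal of the mirror plane
p0 = np.array([0.0, 0.0, 0.25])           # a point on the mirror plane (metres)

# Points measured through the mirror (virtual points behind the mirror)
virtual_pts = np.array([[0.02, 0.10, 0.33],
                        [0.05, 0.12, 0.35]])
real_pts = reflect_about_plane(virtual_pts, n, p0)
print(real_pts)                            # recovered top-of-foot points
```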

로봇의 운동특성을 고려한 새로운 시각구동 방법 (A novel visual servoing techniques considering robot dynamics)

  • 이준수; 서일홍; 김태원
    • 제어로봇시스템학회 Conference Proceedings / 1996 Korean Automatic Control Conference Proceedings (domestic session); POSTECH, Pohang; 24-26 Oct. 1996 / pp.410-414 / 1996
  • A visual servoing algorithm is proposed for a robot with a camera in hand. Specifically, novel image features are suggested by employing a perspective projection viewing model to estimate the relative pitching and yawing angles between the object and the camera. To compensate for the dynamic characteristics of the robot, desired feature trajectories for learning visually guided line-of-sight robot motion are obtained by measuring features with the camera in hand, not over the entire workspace but along a single linear path along which the robot moves under a commercially provided linear-motion function. Control actions of the camera are then approximated by fuzzy neural networks so as to follow the desired feature trajectories. To show the validity of the proposed algorithm, experimental results are presented using a four-axis SCARA robot with a B/W CCD camera.

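As a point of reference for the feature-trajectory-following idea, the sketch below applies a generic image-based visual servoing law (interaction-matrix pseudo-inverse) rather than the paper's fuzzy-neural controller; the features, depths, and gain are hypothetical.

```python
# Sketch: one step of a generic image-based visual servoing law that drives the
# current image features toward the next point on a desired feature trajectory.
import numpy as np

def interaction_matrix(x, y, Z):
    """Image Jacobian of a point feature (x, y) in normalized coordinates at depth Z."""
    return np.array([
        [-1.0/Z, 0.0, x/Z, x*y, -(1.0 + x*x), y],
        [0.0, -1.0/Z, y/Z, 1.0 + y*y, -x*y, -x],
    ])

def servo_step(features, desired, depths, gain=0.5):
    """One control update: camera twist (vx, vy, vz, wx, wy, wz)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (features - desired).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Current and desired normalized image coordinates of two features, with rough depths
features = np.array([[0.10, 0.05], [-0.08, 0.12]])
desired  = np.array([[0.00, 0.00], [-0.15, 0.10]])   # next point on the desired trajectory
depths   = [1.2, 1.4]                                # metres (assumed)
print("camera velocity command:", servo_step(features, desired, depths))
```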

볼록 거울 및 단일 카메라를 이용한 실내에서의 전 방향 위치 검출 방법 (The Indoor Position Detection Method using a Single Camera and a Parabolic Mirror)

  • 김지홍; 김희선; 이창구
    • 제어로봇시스템학회논문지 / Vol. 14, No. 2 / pp.161-167 / 2008
  • This article describes a method for determining the location that a user designates with an optical device such as a laser pointer, and for driving the robot to that location. Using a conic mirror and a CCD camera sensor, the robot observes the laser spot at the point the user indicates, computes its location and azimuth, and moves to that position. The sensor system supplies concise data to the processor using simple devices, which reduces the image-processing time needed to find the user-designated target and guide the robot. The user points the laser at the position to be reached, and the sensor system on the robot detects the laser spot through the conic mirror mounted on the robot and viewed by the camera. The camera is attached to the robot's upper body and fixed parallel to the ground and to the conic mirror.
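
A minimal sketch of the spot-localization step: find the laser spot in the omnidirectional image and convert its pixel offset from the mirror axis into an azimuth and a ground distance. The synthetic image, the assumed image centre, and the linear radius-to-distance scale are illustrative stand-ins for the actual conic-mirror calibration.

```python
# Sketch: laser-spot detection in a conic-mirror image and conversion to azimuth/range.
import numpy as np
import cv2

# Synthetic omnidirectional image with a bright laser spot
img = np.zeros((480, 480), np.uint8)
cv2.circle(img, (330, 180), 3, 255, -1)

cx, cy = 240.0, 240.0                        # image centre = mirror axis (assumed)
M_PER_PIXEL_RADIUS = 0.02                    # assumed radius-to-distance mapping

# Laser spot = brightest pixel (a real system would also filter by colour/modulation)
_, _, _, (px, py) = cv2.minMaxLoc(cv2.GaussianBlur(img, (5, 5), 0))

dx, dy = px - cx, py - cy
azimuth = np.degrees(np.arctan2(dy, dx))     # direction of the spot around the robot
distance = np.hypot(dx, dy) * M_PER_PIXEL_RADIUS
print("azimuth: %.1f deg, distance: %.2f m" % (azimuth, distance))
```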

카메라를 이용한 3차원 공간상의 이동 목표물의 거리정보기반 모션추정 (Motion Estimation of a Moving Object in Three-Dimensional Space using a Camera)

  • 좌동경
    • 전기학회논문지 / Vol. 65, No. 12 / pp.2057-2060 / 2016
  • Range-based motion estimation of a moving object using a camera is proposed. Whereas existing results constrain the motion of the object in order to estimate it, the proposed method relaxes these constraints so that more general object motions can be handled. To this end, a nonlinear observer is designed based on the relative dynamics between the object and the camera, so that both the object velocity and the unknown camera velocity can be estimated. Stability analysis and simulation results for the moving object are provided to show the effectiveness of the proposed method.
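
To illustrate the observer idea in the simplest possible setting, the sketch below uses a constant-gain linear observer that estimates an unmeasured relative velocity from the measured relative position; it is a simplified stand-in for the paper's nonlinear observer, and the dynamics, gains, and simulated trajectory are hypothetical.

```python
# Sketch: a constant-gain observer recovering relative velocity from position measurements.
import numpy as np

dt = 0.01
A = np.block([[np.eye(3), dt*np.eye(3)], [np.zeros((3, 3)), np.eye(3)]])  # state = [pos; vel]
C = np.hstack([np.eye(3), np.zeros((3, 3))])                              # position is measured
L = np.vstack([0.5*np.eye(3), 2.0*np.eye(3)])                             # observer gain

x = np.hstack([[2.0, 1.0, 5.0], [0.3, -0.1, 0.0]])     # true relative state
x_hat = np.hstack([[2.0, 1.0, 5.0], [0.0, 0.0, 0.0]])  # velocity initially unknown

for _ in range(500):
    x = A @ x                                # true relative motion (constant velocity here)
    y = C @ x                                # measured relative position (e.g. from range data)
    x_hat = A @ x_hat + L @ (y - C @ x_hat)  # observer: predict, then correct with the output error

print("estimated relative velocity:", np.round(x_hat[3:], 3))
print("true relative velocity:     ", x[3:])
```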