• Title/Abstract/Keywords: Perspective Image


깊이 정보를 이용한 원근 왜곡 영상의 보정 (Correction of Perspective Distortion Image Using Depth Information)

  • 권순각;이동석
    • 한국멀티미디어학회논문지
    • /
    • Vol. 18, No. 2
    • /
    • pp.106-112
    • /
    • 2015
  • In this paper, we propose a method for correcting perspective distortion in a captured image. An image taken by a camera suffers perspective distortion, depending on the camera's orientation, when objects are projected onto the image plane. The proposed method obtains the normal vector of the plane from the depth information provided by a depth camera and calculates the camera's orientation from this normal vector. The method then corrects the perspective distortion to a frontal view by applying a rotation transformation to the image according to the camera's orientation. The proposed method achieves a higher processing speed than conventional approaches such as color-information-based perspective distortion correction.
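
The geometric core of this pipeline (fit a plane to the depth points, take its normal, rotate the plane to face the camera) can be sketched as below. The SVD plane fit and Rodrigues rotation are standard techniques; function names and the sample data are illustrative, not the authors' code.

```python
import numpy as np

def plane_normal(points):
    """Least-squares plane normal via SVD of centered 3D points (N x 3)."""
    centered = points - points.mean(axis=0)
    # The right singular vector with the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(centered)
    n = vt[-1]
    return n / np.linalg.norm(n)

def rotation_to_front(normal, axis=np.array([0.0, 0.0, 1.0])):
    """Rodrigues rotation aligning `normal` with the camera's optical axis."""
    v = np.cross(normal, axis)
    s, c = np.linalg.norm(v), np.dot(normal, axis)
    if s < 1e-12:
        # Already aligned: identity; opposite: 180-degree flip about x.
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx * ((1 - c) / s**2)

# A tilted plane z = 0.5*x sampled on a grid stands in for depth-camera data:
xs, ys = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
pts = np.stack([xs.ravel(), ys.ravel(), 0.5 * xs.ravel()], axis=1)
n = plane_normal(pts)
R = rotation_to_front(n if n[2] > 0 else -n)
rotated = pts @ R.T
# After the rotation the plane is fronto-parallel: its z-coordinate is constant.
print(round(float(rotated[:, 2].std()), 6))
```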

An Interactive Perspective Scene Completion Framework Guided by Complanate Mesh

  • Hao, Chuanyan;Jin, Zilong;Yang, Zhixin;Chen, Yadang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 14, No. 1
    • /
    • pp.183-200
    • /
    • 2020
  • This paper presents an efficient interactive framework for perspective scene completion and editing, tasks that arise widely in the real world but are rarely studied in the field of image completion. Since it is quite hard to extract perspective information from a single image, this work starts from a friendly and portable interactive platform for obtaining the basic perspective data. Then, to make this interface less sensitive to user input, easier, and more flexible, a perspective-rectification-based correction mechanism is proposed that iteratively updates the locations of the initial points selected by users. Finally, a complanate mesh is generated by geometric calculations from these corrected initial positions. This mesh approximates the perspective direction and the structure topology as closely as possible so that the filling process can be conducted under the constraint of the perspective effects of the original image. Our experiments show results of good quality and performance, and demonstrate the validity of our approach on various perspective scenes and images.

원근 왜곡 보정의 실시간 구현 방법 (Realtime Implementation Method for Perspective Distortion Correction)

  • 이동석;김남규;권순각
    • 한국멀티미디어학회논문지
    • /
    • Vol. 20, No. 4
    • /
    • pp.606-613
    • /
    • 2017
  • When a planar area is captured by a depth camera, the shape of the plane in the captured image exhibits perspective projection distortion according to the position of the camera. The distorted image can be corrected using the depth information of the plane in the captured area. Previous depth-information-based perspective distortion correction methods fail to satisfy real-time requirements because of their large computational load. In this paper, we propose a method that applies a conversion table selectively, based on the measured motion of the plane, and performs the correction by parallel processing. With the proposed method, the system corrects a distorted image of 640x480 resolution in 22.52 ms per frame, thereby satisfying the real-time requirement.
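
The table-based speed-up idea can be sketched as below: the projective mapping is precomputed once into a coordinate lookup table, so per-frame correction reduces to a fast gather (and parallelizes trivially row-wise). The homography values and image size are made-up examples, not values from the paper.

```python
import numpy as np

H, W = 48, 64
Hmat = np.array([[1.0, 0.2, 3.0],
                 [0.0, 1.1, 1.0],
                 [0.0004, 0.0002, 1.0]])  # example homography, not the paper's

# Build the conversion table once (inverse mapping: output pixel -> source pixel).
ys, xs = np.mgrid[0:H, 0:W]
ones = np.ones_like(xs)
dst = np.stack([xs, ys, ones]).reshape(3, -1).astype(float)
src = np.linalg.inv(Hmat) @ dst
src /= src[2]                              # dehomogenize
map_x = np.clip(np.round(src[0]).astype(int), 0, W - 1).reshape(H, W)
map_y = np.clip(np.round(src[1]).astype(int), 0, H - 1).reshape(H, W)

def correct(frame):
    """Per-frame correction: a single gather through the precomputed table."""
    return frame[map_y, map_x]

frame = np.random.default_rng(0).integers(0, 256, (H, W), dtype=np.uint8)
out = correct(frame)
print(out.shape)  # (48, 64)
```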

비선형 스케일링 함수를 이용한 어안 영상의 원근 변환 (Conversion of Fisheye Image to Perspective Image Using Nonlinear Scaling Function)

  • 김태우;조태경
    • 한국산학기술학회논문지
    • /
    • Vol. 10, No. 1
    • /
    • pp.117-121
    • /
    • 2009
  • A fisheye image captured with a fisheye-lens camera has a wider angle of view than an ordinary camera image. However, subjects in the image are heavily distorted and hard to recognize, so conversion to a perspective image is required. The existing Ishii method [1] uses equidistant projection, so subjects suffer size and geometric distortion in the converted image. This paper proposes a method for converting fisheye images to perspective images using a scaling function. In experiments, the proposed method reduced size and geometric distortion by applying the scaling function.
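
The radius remapping that equidistant-projection methods such as Ishii's build on can be sketched as below: a fisheye pixel at radius r_f = f·θ moves to the perspective radius r_p = f·tan(θ). The `scale` function is only a hypothetical stand-in for the paper's nonlinear scaling function.

```python
import math

def fisheye_to_perspective_radius(r_f, f):
    theta = r_f / f                 # equidistant model: r_f = f * theta
    return f * math.tan(theta)      # pinhole model:     r_p = f * tan(theta)

def scale(r_p, k=0.1):
    """Hypothetical nonlinear scaling to damp magnification near the edge."""
    return r_p / (1.0 + k * r_p)

f = 100.0
for r_f in (10.0, 50.0, 90.0):
    r_p = fisheye_to_perspective_radius(r_f, f)
    # Magnification grows rapidly toward the image edge; the scaling damps it.
    print(round(r_p, 2), round(scale(r_p), 2))
```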

Lane Detection Based on Inverse Perspective Transformation and Kalman Filter

  • Huang, Yingping;Li, Yangwei;Hu, Xing;Ci, Wenyan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 12, No. 2
    • /
    • pp.643-661
    • /
    • 2018
  • This paper proposes a novel algorithm for lane detection based on inverse perspective transformation and Kalman filtering. A simple inverse perspective transformation method is presented to remove perspective effects and generate a top-view image; this method does not require the internal and external parameters of the camera. A Gaussian kernel function is used to convolve the image to highlight the lane lines, and an iterative threshold method is then used to segment the image. A searching method is applied to the top-view image obtained from the inverse perspective transformation to determine the lane points and their positions. Combined with a feature-voting mechanism, the detected lane points are fitted to a straight line. A Kalman filter is then applied to optimize and track the lane lines and improve detection robustness. The experimental results show that the proposed method works well under various road conditions and meets real-time requirements.
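
The tracking stage can be sketched as below: noisy per-frame lane-line fits (slope, intercept) are smoothed with a constant-state Kalman filter. The noise covariances and synthetic measurements are illustrative assumptions, not tuned values from the paper.

```python
import numpy as np

class KalmanLane:
    def __init__(self, q=1e-4, r=1e-2):
        self.x = None                      # state: [slope, intercept]
        self.P = np.eye(2)                 # state covariance
        self.Q = q * np.eye(2)             # process noise
        self.R = r * np.eye(2)             # measurement noise

    def update(self, z):
        z = np.asarray(z, dtype=float)
        if self.x is None:                 # initialize from first measurement
            self.x = z
            return self.x
        # Predict (identity dynamics): state stays, uncertainty grows by Q.
        self.P = self.P + self.Q
        # Correct with measurement z (observation matrix H = I).
        K = self.P @ np.linalg.inv(self.P + self.R)
        self.x = self.x + K @ (z - self.x)
        self.P = (np.eye(2) - K) @ self.P
        return self.x

rng = np.random.default_rng(1)
kf = KalmanLane()
true = np.array([0.7, 120.0])              # ground-truth slope / intercept
for _ in range(50):
    est = kf.update(true + rng.normal(0, [0.05, 2.0]))
print(np.round(est, 2))  # settles near the true slope/intercept
```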

경량화된 임베디드 시스템에서 역 원근 변환 및 머신 러닝 기반 차선 검출 (Lane Detection Based on Inverse Perspective Transformation and Machine Learning in Lightweight Embedded System)

  • 홍성훈;박대진
    • 대한임베디드공학회논문지
    • /
    • Vol. 17, No. 1
    • /
    • pp.41-49
    • /
    • 2022
  • This paper proposes a novel lane detection algorithm based on inverse perspective transformation and machine learning in a lightweight embedded system. The inverse perspective transformation method obtains a bird's-eye view of the scene from a perspective image to remove perspective effects; it requires only the internal and external parameters of the camera, without an 8-degree-of-freedom (DoF) homography matrix that maps points in one image to the corresponding points in another. To improve the accuracy and speed of lane detection in complex road environments, the machine learning algorithm is applied only to regions that have passed a first classifier. The first classifier operates in the bird's-eye view image to determine candidate lane regions, which improves detection speed; a lane region that passes the first classifier is then detected more accurately by machine learning. The system has been tested on vehicle driving video in an embedded system. The experimental results show that the proposed method works well in various road environments and meets real-time requirements: its lane detection is about 3.85 times faster than edge-based lane detection, and its detection accuracy is also better.
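
Parameter-based inverse perspective mapping can be sketched as below: with the camera's intrinsics (focal lengths, principal point) and extrinsics (height above the road, pitch), each pixel's back-projected ray is intersected with the ground plane, so no 8-DoF homography is needed. All numeric parameters and the axis conventions here are made-up example values, not the paper's calibration.

```python
import numpy as np

def pixel_to_ground(u, v, fx, fy, cx, cy, h, pitch):
    """Intersect the back-projected ray of pixel (u, v) with the ground plane."""
    # Ray in camera coordinates (camera looks along +z, y points down).
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    # Tilt by the downward pitch angle (rotation about the x-axis).
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cp, sp], [0, -sp, cp]])
    d = Rx @ d
    if d[1] <= 0:
        return None                     # ray never reaches the ground
    t = h / d[1]                        # camera sits h metres above the plane
    p = t * d
    return p[0], p[2]                   # lateral offset, forward distance (m)

# Example: camera 1.5 m high, pitched 10 degrees down, VGA-ish intrinsics.
ground = pixel_to_ground(320, 400, 500, 500, 320, 240, 1.5, np.radians(10))
print(tuple(round(c, 2) for c in ground))  # → (0.0, 2.85)
```

A full bird's-eye view is obtained by evaluating this mapping over the lower image region and resampling onto a metric grid.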

Single-Image Dehazing based on Scene Brightness for Perspective Preservation

  • Young-Su Chung;Nam-Ho Kim
    • Journal of information and communication convergence engineering
    • /
    • Vol. 22, No. 1
    • /
    • pp.70-79
    • /
    • 2024
  • Bad weather conditions such as haze significantly reduce visibility in images, which can affect the functioning and reliability of image processing systems. Accordingly, various single-image dehazing (SID) methods have recently been proposed. Existing SID methods offer effective visibility-improvement algorithms, but they do not reflect the image's perspective and thus tend to distort the sky area and nearby objects. This study proposes a new SID method that preserves the sense of space by defining the correlation between image brightness and haze. The proposed method defines the haze intensity by calculating the airlight brightness deviation and sets the weight factor of the depth map by classifying images, based on the defined haze intensity, into images with a strong sense of space, images with high intensity, and general images. Consequently, it emphasizes the contrast of nearby regions where haze is present and naturally smooths the sky region to preserve the image's perspective.
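
Methods in this family build on the standard atmospheric scattering model I = J·t + A·(1 − t): recovering the scene radiance J requires the airlight A and a transmission map t. The sketch below inverts that model; the brightness-deviation weighting is only a crude stand-in for the paper's classification scheme, not its actual algorithm.

```python
import numpy as np

def dehaze(img, t_min=0.1, omega=0.95):
    """Invert I = J*t + A*(1-t) with rough airlight/transmission estimates."""
    A = np.percentile(img, 99)                  # airlight from brightest pixels
    # Stand-in haze strength from the brightness deviation around the airlight.
    haze_strength = 1.0 - np.clip(np.abs(img - A).mean() / A, 0.0, 1.0)
    # Transmission estimate, weighted by the estimated haze strength.
    t = np.clip(1.0 - omega * haze_strength * img / A, t_min, 1.0)
    J = (img - A) / t + A
    return np.clip(J, 0.0, 1.0)

rng = np.random.default_rng(2)
scene = rng.uniform(0.1, 0.6, (32, 32))
hazy = scene * 0.6 + 0.9 * 0.4                  # synthetic uniform haze layer
restored = dehaze(hazy)
# Dehazing should stretch back the contrast that the haze washed out.
print(hazy.std() < restored.std())
```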

Remote Distance Measurement from a Single Image by Automatic Detection and Perspective Correction

  • Layek, Md Abu;Chung, TaeChoong;Huh, Eui-Nam
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 13, No. 8
    • /
    • pp.3981-4004
    • /
    • 2019
  • This paper proposes a novel method for locating objects in real space from a single remote image and measuring actual distances between them by automatic detection and perspective transformation. The dimensions of the real space are known in advance. First, the corner points of the region of interest are detected from an image using deep learning. Then, based on the corner points, the region of interest (ROI) is extracted and made proportional to real space by applying a warp-perspective transformation. Finally, the objects are detected and mapped to their real-world locations. Removing distortion from the image using camera calibration improves the accuracy in most cases. The deep learning framework Darknet is used for detection, and the necessary modifications are made to integrate perspective transformation, camera calibration, undistortion, etc. Experiments are performed with two types of cameras, one with barrel and the other with pincushion distortion. The results show that the differences between the calculated distances and those measured in real space with measuring tapes are very small, approximately 1 cm on average. Furthermore, automatic corner detection allows the system to be used with any type of camera that has a fixed pose or is in motion; using more points significantly enhances the accuracy of real-world mapping even without camera calibration. Perspective transformation also increases object detection efficiency by unifying the sizes of all objects.
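
The measurement step can be sketched as below: once the four corners of a region with known real dimensions are found, a homography (solved here by the standard direct linear transform) maps any pixel in that region to real-world coordinates, so distances can be read off directly. The corner coordinates are invented example values.

```python
import numpy as np

def homography(src, dst):
    """Solve the 3x3 homography H with H @ src ~ dst from 4 point pairs (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The null vector of the system (last right singular vector) is H.
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def to_world(H, px):
    p = H @ np.array([px[0], px[1], 1.0])
    return p[:2] / p[2]                         # dehomogenize

# A 4 m x 3 m floor region seen under perspective (pixel corners invented):
pixel_corners = [(100, 400), (540, 380), (460, 120), (160, 140)]
world_corners = [(0, 0), (4, 0), (4, 3), (0, 3)]        # metres
H = homography(pixel_corners, world_corners)

a, b = to_world(H, (300, 300)), to_world(H, (400, 200))
dist = float(np.linalg.norm(a - b))
print(round(dist, 2))   # real-world distance between the two pixels, in metres
```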

라이트필드 영상의 Perspective 및 재초점 화질측정방법 비교 (Comparison of Quality Metrics of Perspective and Refocused Images in Light Field Images)

  • ;;전병우
    • 한국방송∙미디어공학회:학술대회논문집
    • /
    • Proceedings of the Korean Institute of Broadcast and Media Engineers 2019 Fall Conference
    • /
    • pp.228-229
    • /
    • 2019
  • Digital refocusing and perspective change are the most anticipated applications of light field (LF) images. As LF images carry a large amount of data, compression is essential. The fidelity of an LF image after compression needs to be evaluated differently depending on the specific application, such as perspective change or refocusing. In this paper, we investigate the fidelity of images after perspective change and refocusing. Several state-of-the-art objective quality metrics are compared. Our experiments show that IWPSNR is the most reliable metric for both perspective and focus changes, but it does not outperform popular metrics such as PSNR and SSIM.
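
PSNR, one of the baseline metrics compared here, is simple to compute for 8-bit images: PSNR = 10·log10(MAX² / MSE) with MAX = 255. A minimal sketch with synthetic data:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")            # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(3)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
noisy = np.clip(ref.astype(int) + rng.integers(-5, 6, ref.shape), 0, 255)
print(round(psnr(ref, noisy), 1))      # mild noise -> roughly 38 dB
```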


경사진 도로 환경에서도 강인한 실시간 차선 검출방법 (A Robust Real-Time Lane Detection for Sloping Roads)

  • 허환;한기태
    • 정보처리학회논문지:소프트웨어 및 데이터공학
    • /
    • Vol. 2, No. 6
    • /
    • pp.413-422
    • /
    • 2013
  • This paper proposes a robust real-time lane detection method for sloping road environments, using an inverse perspective transformation that requires no camera parameters together with a proposed lane filter. After the vanishing point is found in the first frame of the video, a fixed region around it is stored as a template (TA: Template Area). Lanes are predicted moving downward from the vanishing point, inverse perspective transformation coefficients are extracted from the predicted lanes, an image with the perspective removed is obtained using those coefficients, and the proposed lane filter is applied to that image to detect the lanes. For robust detection even on sloping roads, a region similar to the TA (SA: Similar Area) is tracked in the input image by template matching, and the vanishing point is recalculated before detecting the lanes. The proposed method detects lanes robustly even in sloping road environments, and by reducing the processing region and simplifying the processing steps it achieved good lane detection results at about 40 frames per second.
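
The tracking idea can be sketched as below: the patch around the vanishing point is stored as a template (TA) and re-located in each new frame by searching for the most similar patch (SA), here with a simple sum-of-squared-differences search. The sizes and the synthetic frame are illustrative.

```python
import numpy as np

def match_template(frame, template):
    """Return the top-left corner of the best SSD match of `template` in `frame`."""
    th, tw = template.shape
    best, best_pos = None, (0, 0)
    for y in range(frame.shape[0] - th + 1):
        for x in range(frame.shape[1] - tw + 1):
            ssd = np.sum((frame[y:y+th, x:x+tw] - template) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

rng = np.random.default_rng(4)
frame = rng.uniform(size=(40, 60))
template = frame[12:20, 25:35].copy()      # patch around the vanishing point
print(match_template(frame, template))     # → (12, 25)
```

A production implementation would restrict the search to a window around the previous vanishing point, which is what keeps the method real-time.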