• Title/Summary/Keyword: Bird-view

Clinical Usefulness of a Newly Standardized Bird's Eye View Clinical Photography in Nasal Bone Fracture (코뼈 골절 수술결과 평가에 있어서 Bird's Eye View의 유용성)

  • Park, Dong Kwon; Choi, Jae Hoon; Lee, Jin Hyo
    • Archives of Craniofacial Surgery / v.12 no.2 / pp.97-101 / 2011
  • Purpose: Nasal bone fracture is the most common type of facial bone fracture. Standard 6-view photography is not adequate for evaluating nasal deformity or the results of closed reduction, so the authors standardized a bird's eye view photograph to evaluate nasal deformity more effectively. Methods: We reviewed the medical records and radiologic studies of 63 patients with nasal bone fracture. For all 63 patients we took clinical photographs, including a bird's eye view standardized so that the nasal tip was aligned with the Cupid's bow of the upper lip and the light was focused on the nasion. Results: Nasal deviations and reductions were more noticeable on the newly standardized bird's eye view, and this view was very useful for explaining the results of reduction. Conclusion: This photographic view provides a more reliable evaluation of the severity of nasal deformity and the result of closed reduction.

Bird's-Eye View Service under Ubiquitous Transportation Sensor Network Environments (Ubiquitous Transportation Sensor Network에서 Bird's-Eye View 서비스)

  • Kim, Joohwan; Nam, Doohee; Baek, Sungjoon
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.13 no.2 / pp.225-231 / 2013
  • A bird's-eye view is an elevated view of an object from above, with a perspective as though the observer were a bird; it is often used in making blueprints, floor plans, and maps. It can also be used under severe weather conditions when visibility is poor. In low-visibility environments, drivers can communicate with each other using V2V communication to obtain each vehicle's status and prevent collisions and other accidents. Ubiquitous transportation sensor networks (u-TSN) and their applications are emerging rapidly as an exciting new paradigm for providing reliable and comfortable transportation services. The ever-growing u-TSN and its applications will provide intelligent and ubiquitous communication and networking technology for the traffic safety domain.

Design and Implementation of 4-sided Monitoring System providing Bird's Eye View in Car PC Environment (Car PC 환경에서 Bird's Eye View를 제공하는 4SM (4-Sided Monitoring) 시스템 설계 및 구현)

  • Yu, Young-Ho; Jang, Si-Woong
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.1 / pp.153-159 / 2012
  • The driver's view has blind spots around the vehicle due to the physical components of the automobile's architecture, and obstacles in these blind spots are a cause of vehicle damage and accidents. Recently produced cars are equipped with obstacle-detection sensors and rear-view cameras that provide information about obstacles in the blind spots, as well as AVM (Around View Monitoring), which presents the vehicle's surroundings to support safe driving. During low-speed travel, such as parking or driving along a narrow street, these devices help the driver drive safely by supplying information about the vehicle's surroundings. In this paper, we present the design and implementation of a 4-sided monitoring (4SM) system, which helps a driver see an integrated view of the vehicle's perimeter at a glance, using a car PC connected to four cameras installed on the front, rear, left, and right sides.
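
The integration step described above, warping four calibrated camera views onto a common ground plane and compositing them into one top-down image, can be illustrated with a minimal OpenCV sketch. It assumes the per-camera ground-plane homographies are already available from an offline calibration; the function name and the simple max-blend rule are illustrative and not the authors' implementation.

```python
import cv2
import numpy as np

def compose_top_view(frames, homographies, canvas_size=(600, 600)):
    """Warp each camera frame onto a common ground-plane canvas and composite.

    frames       -- list of BGR images from the front/rear/left/right cameras
    homographies -- list of 3x3 matrices mapping each image onto the canvas,
                    assumed to come from an offline ground-plane calibration
    """
    canvas = np.zeros((canvas_size[1], canvas_size[0], 3), dtype=np.uint8)
    for frame, H in zip(frames, homographies):
        warped = cv2.warpPerspective(frame, H, canvas_size)
        # Keep the brighter pixel where views overlap (a simple blending rule).
        canvas = np.maximum(canvas, warped)
    return canvas
```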

Forward Vehicle Detection Algorithm Using Column Detection and Bird's-Eye View Mapping Based on Stereo Vision (스테레오 비전기반의 컬럼 검출과 조감도 맵핑을 이용한 전방 차량 검출 알고리즘)

  • Lee, Chung-Hee; Lim, Young-Chul; Kwon, Soon; Kim, Jong-Hwan
    • The KIPS Transactions: Part B / v.18B no.5 / pp.255-264 / 2011
  • In this paper, we propose a forward vehicle detection algorithm using column detection and bird's-eye view mapping based on stereo vision, which can detect forward vehicles robustly in real, complex traffic situations. The algorithm consists of three steps: road-feature-based column detection, bird's-eye-view-mapping-based obstacle segmentation, and obstacle-area remerging with vehicle verification. First, we extract a road feature using the most frequent values in the v-disparity map and perform column detection with this road feature as a new criterion. The road feature is a more appropriate criterion than the median value because it is not affected by the traffic situation, for example by changes in obstacle size or in the number of obstacles. Because the detected obstacle areas may still contain multiple obstacles, we then perform bird's-eye-view-mapping-based obstacle segmentation to divide the obstacles accurately; segmentation is straightforward here because the bird's-eye view mapping places each obstacle on a planar ground plane using the depth map and camera information. Additionally, we remerge obstacle areas, since separately segmented areas may belong to the same obstacle. Finally, we verify whether the obstacles are vehicles using the depth map and the gray image. We conduct experiments on real, complex traffic situations to demonstrate the vehicle detection performance of the algorithm.
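
A minimal sketch of the v-disparity road feature mentioned above: a v-disparity map accumulates, for each image row, a histogram of disparity values, and the most frequent disparity per row serves as the road profile. This is a generic illustration assuming a dense disparity map is available; it is not the authors' code.

```python
import numpy as np

def v_disparity(disparity, max_disp=128):
    """Build a v-disparity map: one disparity histogram per image row."""
    h, _ = disparity.shape
    vmap = np.zeros((h, max_disp), dtype=np.int32)
    for v in range(h):
        d = disparity[v]
        d = d[(d > 0) & (d < max_disp)].astype(np.int32)
        vmap[v] += np.bincount(d, minlength=max_disp)
    return vmap

def road_profile(vmap):
    """Road feature per row: the most frequent (modal) disparity in that row."""
    return vmap.argmax(axis=1)
```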

Autonomous Traveling of Unmanned Golf-Car using GPS and Vision system (GPS와 비전시스템을 이용한 무인 골프카의 자율주행)

  • Jung, Byeong Mook; Yeo, In-Joo; Cho, Che-Seung
    • Journal of the Korean Society for Precision Engineering / v.26 no.6 / pp.74-80 / 2009
  • Path tracking is the basis of autonomous driving and navigation for an unmanned vehicle, and finding the exact position of the vehicle is essential for it. GPS is used to obtain the vehicle's position, and a direction sensor and a velocity sensor are used to compensate for the GPS position error. To detect path lines in a road image, a bird's eye view transform is employed, which makes the lateral control algorithm simpler to design than working from the perspective view of the image. Because the vehicle's speed should be reduced in curved lanes and at crossroads, we also propose a speed control algorithm that uses GPS and image data. The control algorithm was simulated and tested on the basis of an expert driver's knowledge data. The experimental results show that the bird's eye view transform works well for steering control and that the speed control algorithm is also stable in real driving.
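
The bird's eye view transform used for lane detection here is typically implemented as an inverse perspective mapping of the flat road region. The sketch below uses OpenCV with hypothetical source and destination points; the actual calibration points and image sizes used by the authors are not given in the abstract.

```python
import cv2
import numpy as np

# Hypothetical image points of a flat road region (a trapezoid in the camera
# view) and the rectangle they should map to in the top-down image.
src = np.float32([[230, 460], [410, 460], [600, 700], [40, 700]])
dst = np.float32([[100, 0], [300, 0], [300, 400], [100, 400]])

M = cv2.getPerspectiveTransform(src, dst)

def to_birds_eye(frame, size=(400, 400)):
    """Inverse perspective mapping: warp the road region to a top-down view,
    where lane markings appear as near-parallel lines and the lateral offset
    needed for steering control is easy to measure."""
    return cv2.warpPerspective(frame, M, size)
```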

Simulation to Create Panorama Image in Blind Spot (사각지대 파노라마 영상생성을 위한 시뮬레이션)

  • Park, Min-Woo; Lee, Seok-Jun; Jang, Hyoung-Ho; Jung, Soon-Ki; Yoon, Pal-Joo
    • Proceedings of the Korean Information Science Society Conference / 2006.06b / pp.292-294 / 2006
  • Conventional vision systems such as the side mirrors and rear-view mirror installed on most current cars have blind spots. To compensate for this shortcoming, luxury cars mount a wide-angle camera at the rear. Existing wide-angle camera systems use a single camera to capture the rear view and display it as-is, which reduces the blind spot to some degree, but uncovered blind spots still remain. Using multiple cameras, however, secures a wider rear field of view, which not only removes the blind spots more completely but also makes it possible to provide the driver with richer hazard information while driving. In this paper, we propose a method for generating a single integrated panoramic image from the images obtained by cameras mounted on the left, right, and rear of the vehicle in order to eliminate the blind spots, and we test it in several cases. We generate a bird's eye view of each image through 3D warping and merge the generated bird's eye views into one integrated bird's eye view using only 2D translation. By 3D-warping this integrated image again with respect to the rear camera, we generate the complete panoramic image.
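
The merging step described above, registering the per-camera bird's eye views with 2D translation only, might look like the following sketch. The bird's eye views, the pixel offsets, and the overlay rule are assumptions for illustration; the paper's 3D warping and the final re-warp to the rear camera are not reproduced here.

```python
import cv2
import numpy as np

def merge_birds_eye_views(bev_images, offsets, canvas_size=(800, 600)):
    """Merge per-camera bird's eye views into one mosaic using 2D translation.

    bev_images -- bird's eye views already produced by warping each camera
    offsets    -- hypothetical (dx, dy) pixel shifts that register each view
                  on a common ground-plane canvas
    """
    canvas = np.zeros((canvas_size[1], canvas_size[0], 3), dtype=np.uint8)
    for img, (dx, dy) in zip(bev_images, offsets):
        T = np.float32([[1, 0, dx], [0, 1, dy]])         # pure translation
        shifted = cv2.warpAffine(img, T, canvas_size)
        canvas = np.where(shifted > 0, shifted, canvas)  # overlay non-empty pixels
    return canvas
```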

Simulation Panorama Image Reconstruction for Eliminating Blind Spot of a Running Vehicle (주행 중인 차량의 사각지대 제거를 위한 파노라마 시뮬레이션)

  • Park, Min-Woo; Lee, Seok-Jun; Jang, Kyoung-Ho; Jung, Soon-Ki; Yoon, Pal-Joo
    • Proceedings of the HCI Society of Korea Conference / 2007.02a / pp.767-773 / 2007
  • Conventional vision systems such as the side mirrors and rear-view mirror installed on most cars currently on the market all have blind spots, and these blind spots can be the cause of accidents both large and small. To compensate for this shortcoming, automakers equip the rear of their luxury cars with wide-angle cameras. A wide-angle camera system uses a single camera to capture the rear view and display it as-is, which reduces the blind spot to some degree, but it cannot remove all of the rear blind spots. Using multiple cameras therefore secures a wider rear field of view, which not only removes the blind spots more completely but also makes it possible to provide the driver with richer hazard information while driving. In this paper, we mount three cameras on the left, right, and rear of the vehicle to eliminate the blind spots, and we present a method for generating a panoramic image that integrates the images obtained from these cameras, together with experimental results in various environments. The proposed method generates a bird's eye view of each image through 3D warping and merges the generated bird's eye views into one integrated bird's eye view using only 2D translation. By 3D-warping this integrated image again with respect to the rear camera, we generate the complete panoramic image. We run experiments with the proposed method in various situations and use them to identify its limitations.

Stereo Vision-Based Obstacle Detection and Vehicle Verification Methods Using U-Disparity Map and Bird's-Eye View Mapping (U-시차맵과 조감도를 이용한 스테레오 비전 기반의 장애물체 검출 및 차량 검증 방법)

  • Lee, Chung-Hee; Lim, Young-Chul; Kwon, Soon; Lee, Jong-Hun
    • Journal of the Institute of Electronics Engineers of Korea SC / v.47 no.6 / pp.86-96 / 2010
  • In this paper, we propose stereo vision-based obstacle detection and vehicle verification methods using a U-disparity map and bird's-eye view mapping. First, we extract a road feature using the most frequent values in each row and column, and we extract obstacle areas on the road using this road feature. To extract the obstacle areas precisely, we utilize a U-disparity map, on which obstacle areas can be extracted exactly using a threshold derived from the disparity value and the camera parameters. Because the extracted obstacle areas may still contain multiple obstacles, we perform an additional segmentation step: the extracted obstacle areas are converted into a bird's-eye view using the camera model and parameters, where they can be segmented robustly because obstacles are laid out on the bird's-eye view according to their range. Finally, we verify whether the obstacles are vehicles using several vehicle features, namely road contact, constant horizontal width, aspect ratio, and texture information. We conduct experiments in real traffic situations to demonstrate the performance of the proposed algorithms.
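
A minimal sketch of the U-disparity map used above, the column-wise counterpart of the v-disparity map: each image column accumulates a histogram of disparities, and vertical obstacles show up as high counts at their disparity value. The fixed count threshold below is purely illustrative; the paper derives its threshold from the disparity value and camera parameters.

```python
import numpy as np

def u_disparity(disparity, max_disp=128):
    """Build a U-disparity map: one disparity histogram per image column."""
    _, w = disparity.shape
    umap = np.zeros((max_disp, w), dtype=np.int32)
    for u in range(w):
        d = disparity[:, u]
        d = d[(d > 0) & (d < max_disp)].astype(np.int32)
        umap[:, u] += np.bincount(d, minlength=max_disp)
    return umap

def obstacle_mask(umap, min_count=20):
    """Flag (disparity, column) cells whose count exceeds a threshold,
    marking columns that likely contain a vertical obstacle."""
    return umap > min_count
```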

Comparative Analysis of Image Fusion Methods According to Spectral Responses of High-Resolution Optical Sensors (고해상 광학센서의 스펙트럼 응답에 따른 영상융합 기법 비교분석)

  • Lee, Ha-Seong; Oh, Kwan-Young; Jung, Hyung-Sup
    • Korean Journal of Remote Sensing / v.30 no.2 / pp.227-239 / 2014
  • This study evaluates the performance of various image fusion methods with respect to the spectral responses of high-resolution optical satellite sensors such as KOMPSAT-2, QuickBird, and WorldView-2. The image fusion methods used in this study are GIHS, GIHSA, GS1, and AIHS. The quality of each fusion method was evaluated with both quantitative and visual analysis; the quantitative analysis used the spectral angle mapper (SAM), the relative dimensionless global error in synthesis (spectral ERGAS), and the image quality index (Q4). The results indicate that the GIHSA method is slightly better than the other methods for KOMPSAT-2 images, while the GS1 method is more suitable for QuickBird and WorldView-2 images.
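
Of the quality indices listed above, the spectral angle mapper (SAM) is the simplest to state: it is the mean angle between corresponding pixel vectors of the fused and reference multispectral images, with 0 degrees meaning identical spectral direction. A small NumPy sketch, assuming both images are given as (H, W, bands) arrays:

```python
import numpy as np

def spectral_angle_mapper(fused, reference, eps=1e-12):
    """Mean spectral angle (in degrees) between fused and reference images."""
    f = fused.reshape(-1, fused.shape[-1]).astype(np.float64)
    r = reference.reshape(-1, reference.shape[-1]).astype(np.float64)
    dot = np.sum(f * r, axis=1)
    norms = np.linalg.norm(f, axis=1) * np.linalg.norm(r, axis=1) + eps
    # Clip to [-1, 1] to guard against floating-point overshoot before arccos.
    angles = np.arccos(np.clip(dot / norms, -1.0, 1.0))
    return np.degrees(angles).mean()
```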