• Title/Summary/Keyword: Building Object Detection


Determination of Physical Footprints of Buildings Considering Terrain Surface LiDAR Data (지표면 라이다 데이터를 고려한 건물 외곽선 결정)

  • Yoo, Eun Jin;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.34 no.5 / pp.503-514 / 2016
  • Delineation of accurate object boundaries is crucial to providing reliable spatial information products such as digital topographic maps, building models, and spatial databases. In LiDAR (Light Detection and Ranging) data, the real boundary of a building lies somewhere between the outer-most points on its roof and the closest points to the building among the points on the ground. In most cases, the building footprint represented by LiDAR points is smaller than the actual building because the points fall inside the physical boundary, so boundaries determined from roof points alone do not coincide with the actual footprint. This paper aims to estimate boundaries close to the physical ones using airborne LiDAR data: accurate boundaries are determined from the non-gridded original LiDAR data, starting from initial boundaries extracted from the gridded data. A similar method is found in the demarcation of maritime boundaries between two territories. The proposed method consists of determining initial boundaries from segmented LiDAR data, estimating accurate boundaries, and evaluating accuracy. Extremely low-density data were also used to verify the robustness of the method, and both simulated and real LiDAR data were used to demonstrate its feasibility. The results show that the proposed method is effective, although further refinement and improvement may be required.
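
As a rough illustration of the "somewhere between" idea above, the sketch below pairs each outer-most roof point with its nearest ground point and places the footprint vertex at their midpoint, echoing the median-line rule used in maritime boundary demarcation. The function names, the plan-view (XY) search, and the plain midpoint rule are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_footprint(roof_boundary_pts, ground_pts):
    """Pair each outer-most roof point with its nearest ground point
    (searching in plan view) and place the estimated footprint vertex
    midway between them, echoing median-line boundary demarcation."""
    tree = cKDTree(ground_pts[:, :2])              # search in XY only
    _, idx = tree.query(roof_boundary_pts[:, :2])  # nearest ground return
    nearest_ground = ground_pts[idx]
    # The physical wall lies somewhere on each roof-to-ground segment;
    # the midpoint is one simple, symmetric estimate.
    return 0.5 * (roof_boundary_pts[:, :2] + nearest_ground[:, :2])

# Toy example: roof returns end at x = 9.7, nearest ground returns start
# at x = 10.4, so the estimated building edge falls at x = 10.05.
roof = np.array([[9.7, 0.0, 8.0], [9.7, 1.0, 8.0]])
ground = np.array([[10.4, 0.0, 0.1], [10.4, 1.0, 0.1]])
print(estimate_footprint(roof, ground))
```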

Building Large-scale CityGML Feature for Digital 3D Infrastructure (디지털 3D 인프라 구축을 위한 대규모 CityGML 객체 생성 방법)

  • Jang, Hanme;Kim, HyunJun;Kang, HyeYoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.3 / pp.187-201 / 2021
  • Recently, demand has been increasing for a 3D urban spatial information infrastructure that can store, operate on, and analyze the large volumes of digital data produced in cities. CityGML is a 3D spatial information standard of the OGC (Open Geospatial Consortium) with strengths in the exchange and attribute expression of city data. 3D urban spatial data in CityGML format have been constructed in several cities, such as Singapore and New York. However, the current ecosystem for creating and editing CityGML data is of limited use for large-scale construction because it lacks the completeness of commercial 3D modeling programs such as SketchUp or 3ds Max. Therefore, this study proposes a method of constructing CityGML data from commercial 3D mesh data and 2D polygons that are produced rapidly and automatically through aerial LiDAR (Light Detection and Ranging) or RGB (Red Green Blue) cameras. During data construction, the original 3D mesh data were geometrically transformed so that each object could be expressed at the various CityGML LoDs (Levels of Detail), and attribute information extracted from the 2D spatial data was added to increase its utility as spatial information. The 3D city features produced in this study are the CityGML building, bridge, cityFurniture, road, and tunnel. Data conversion and attribute construction methods were presented for each feature, and visualization and validation were conducted.
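
For readers unfamiliar with the target format, the sketch below emits a minimal CityGML 2.0 building shell carrying attributes taken from a 2D polygon layer. The template, element choice, and helper function are illustrative assumptions; real output would also need LoD geometry (e.g., bldg:lod1Solid), which the paper derives from the 3D mesh data.

```python
# Hypothetical minimal writer: wraps one building footprint's attributes
# (e.g., from a 2D polygon layer) in CityGML 2.0 markup. Real output
# would also carry LoD geometry such as bldg:lod1Solid.
CITYGML_TEMPLATE = """<core:CityModel
    xmlns:core="http://www.opengis.net/citygml/2.0"
    xmlns:bldg="http://www.opengis.net/citygml/building/2.0"
    xmlns:gml="http://www.opengis.net/gml">
  <core:cityObjectMember>
    <bldg:Building gml:id="{building_id}">
      <gml:name>{name}</gml:name>
      <bldg:measuredHeight uom="m">{height}</bldg:measuredHeight>
    </bldg:Building>
  </core:cityObjectMember>
</core:CityModel>"""

def to_citygml(building_id: str, name: str, height: float) -> str:
    """Render one bldg:Building feature as a CityGML document string."""
    return CITYGML_TEMPLATE.format(building_id=building_id,
                                   name=name, height=height)

print(to_citygml("BLDG_0001", "City Hall", 23.5))
```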

Implementation of AI-based Object Recognition Model for Improving Driving Safety of Electric Mobility Aids (전동 이동 보조기기 주행 안전성 향상을 위한 AI기반 객체 인식 모델의 구현)

  • Je-Seung Woo;Sun-Gi Hong;Jun-Mo Park
    • Journal of the Institute of Convergence Signal Processing / v.23 no.3 / pp.166-172 / 2022
  • In this study, we photographed objects that obstruct or inconvenience the movement of mobility-impaired users of electric mobility aids, such as crosswalks, side spheres, manholes, braille blocks, partial ramps, temporary safety barriers, stairs, and inclined curbs. We developed an optimal AI model that classifies and automatically recognizes the photographed objects, and implemented an algorithm that can efficiently identify obstacles in front of an electric mobility aid. So that object detection could be learned by the AI with high probability, objects were labeled as polygons when building the dataset, and the model was developed with Mask R-CNN in the Detectron2 framework, which can detect polygon-labeled objects. Images were acquired from two groups, the general public and the mobility impaired, and image data were collected in two areas of the test bed. Regarding the training parameters, the Mask R-CNN model trained with IMAGES_PER_BATCH: 2, BASE_LEARNING_RATE: 0.001, and MAX_ITERATION: 10,000 showed the highest performance, at 68.532, enabling users to recognize driving risks and obstacles quickly and accurately.
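
The reported training setup maps directly onto Detectron2's config system; a minimal sketch follows. The dataset name, the R50-FPN backbone choice, and the eight-class head are assumptions not stated in the abstract.

```python
# Sketch of the reported training configuration in Detectron2. The
# dataset name, backbone choice, and class count are assumptions; the
# polygon labels would first be registered in COCO format.
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("mobility_obstacles_train",)  # hypothetical dataset
cfg.DATASETS.TEST = ()
cfg.SOLVER.IMS_PER_BATCH = 2         # IMAGES_PER_BATCH: 2
cfg.SOLVER.BASE_LR = 0.001           # BASE_LEARNING_RATE: 0.001
cfg.SOLVER.MAX_ITER = 10000          # MAX_ITERATION: 10,000
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 8  # the eight obstacle classes listed

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```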

Implementation of a Position Correction Algorithm for Multiple Objects in Images Using an Object Recognition Model and Ground Projection (객체 인식 모델과 지면 투영기법을 활용한 영상 내 다중 객체의 위치 보정 알고리즘 구현)

  • Dong-Seok Park;Sun-Gi Hong;Jun-Mo Park
    • Journal of the Institute of Convergence Signal Processing / v.24 no.2 / pp.119-125 / 2023

Fire Detection using Deep Convolutional Neural Networks for Assisting People with Visual Impairments in an Emergency Situation (시각 장애인을 위한 영상 기반 심층 합성곱 신경망을 이용한 화재 감지기)

  • Kong, Borasy;Won, Insu;Kwon, Jangwoo
    • 재활복지 / v.21 no.3 / pp.129-146 / 2017
  • In an emergency such as a building fire, visually impaired and blind people are exposed to greater danger than sighted people because they cannot become aware of the situation quickly. Current fire detection methods such as smoke detectors are slow and unreliable because they typically rely on chemical sensing of fire particles; using a vision sensor instead, fire can be detected much faster, as our experiments show. Previous studies applied various image processing and machine learning techniques to detect fire, but these usually work poorly because they require hand-crafted features that do not generalize well across scenarios. Helped by recent advances in deep learning, this research addresses the problem with a deep learning-based object detector that detects fire in images from a security camera; such detectors learn features automatically and therefore tend to generalize well to varied scenes. We applied the YOLO detector, a recent computer vision technology, to this task. Considering the trade-off between recall and model complexity, we introduce two convolutional neural networks of slightly different complexity that detect fire at different recall rates. Both models detect fire at 99% average precision, but one has 76% recall at 30 FPS while the other has 61% recall at 50 FPS. We also compare the models' memory consumption and show their robustness by testing on various real-world scenarios.
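
A minimal sketch of the deployment side, running a YOLO-family detector over security-camera frames with OpenCV's DNN module. The cfg/weights file names are hypothetical placeholders, and the paper's two trained networks and their exact post-processing (NMS, class scores) are not reproduced here.

```python
# Frame-by-frame fire detection with a YOLO-family detector through
# OpenCV's DNN module. The cfg/weights file names are hypothetical
# placeholders, and NMS/class-score post-processing is omitted.
import cv2

net = cv2.dnn.readNetFromDarknet("fire-yolo.cfg", "fire-yolo.weights")
out_layers = net.getUnconnectedOutLayersNames()

cap = cv2.VideoCapture(0)                    # security-camera stream
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    for out in net.forward(out_layers):
        for det in out:                      # det = [cx, cy, w, h, obj, ...]
            if float(det[4]) > 0.5:          # objectness above threshold
                print("possible fire region (normalized):", det[:4])
cap.release()
```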

Network Time Protocol Extension for Wireless Sensor Networks (무선 센서 네트워크를 위한 인터넷 시각 동기 프로토콜 확장)

  • Hwang, So-Young
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.12 / pp.2563-2567 / 2011
  • Advances in smart sensors, embedded systems, low-power design, ad-hoc networks, and MEMS have enabled low-cost, small sensor nodes with computation and wireless communication capabilities that can form distributed wireless sensor networks. Time information and time synchronization are fundamental building blocks in such networks, since many applications need time information for object tracking, consistent state updates, duplicate detection, and temporal-order delivery. Various time synchronization protocols have been proposed for sensor networks because of their limited computing power and resources, but none of these protocols were designed with a time representation scheme in mind. A global time format such as UTC TOD (Coordinated Universal Time, Time Of Day) is very useful in sensor network applications. In this paper we propose a Network Time Protocol extension for global time representation in wireless sensor networks.
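
To make the target time format concrete, the sketch below unpacks a 64-bit NTP timestamp (seconds since 1900 plus a 32-bit binary fraction) into a UTC time-of-day string, the kind of global representation the proposed extension carries into the sensor network. The helper name and example value are illustrative.

```python
# Unpack a 64-bit NTP timestamp (seconds since 1900-01-01 plus a 32-bit
# binary fraction) into a UTC time-of-day string, the kind of global
# format the proposed extension carries into the sensor network.
from datetime import datetime, timezone, timedelta

NTP_EPOCH = datetime(1900, 1, 1, tzinfo=timezone.utc)

def ntp_to_utc_tod(ntp_timestamp: int) -> str:
    seconds = ntp_timestamp >> 32                    # integer seconds
    fraction = (ntp_timestamp & 0xFFFFFFFF) / 2**32  # sub-second part
    t = NTP_EPOCH + timedelta(seconds=seconds + fraction)
    return t.strftime("%H:%M:%S.%f UTC")

# 3871670400 s after 1900 is 2022-09-09 00:00:00 UTC; the half-second
# fraction shifts the time of day to 00:00:00.500000.
print(ntp_to_utc_tod((3871670400 << 32) | (1 << 31)))
```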

A Study for Introducing a Method of Detecting and Recovering the Shadow Edge from Aerial Photos (항공영상에서 그림자 경계 탐색 및 복원 기법 연구)

  • Jung, Yong-Ju;Jang, Young-Woon;Choi, Yun-Woong;Cho, Gi-Sung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.24 no.4 / pp.327-334 / 2006
  • Aerial photos are needed both for straightforward purposes such as cartography and land-cover classification and for broader social purposes such as city planning, environment, disaster response, and transportation. However, the shadows captured when aerial photos are taken make it difficult to interpret ground information and restrict users who need the photos for field tasks. Shadows are generally cast by buildings and surface topography, the underlying cause being changes of illumination within an area. To remove shadows, this study uses a single image and processes it without knowledge of the image source or the capture conditions. Applying an entropy minimization method, it generates a 1-D gray-scale invariant image, creates a shadow edge mask from it using Canny edge detection, and finally, by filtering in the Fourier frequency domain, produces an intrinsic image that recovers the 3-D color information with the shadow removed.
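
A minimal sketch of the entropy-minimization step, in the style of Finlayson-type illumination-invariant imaging: 2-D log-chromaticity values are projected onto candidate directions, and the angle whose projection has the lowest histogram entropy is kept; projecting onto that angle yields the 1-D invariant image from which the shadow edge mask is then derived. The bin count and 1-degree angle steps are assumptions.

```python
# Entropy-minimization step in the style of illumination-invariant
# imaging: project 2-D log-chromaticity onto candidate directions and
# keep the angle with minimal histogram entropy. 64 bins and 1-degree
# steps are assumptions.
import numpy as np

def invariant_angle(rgb):
    """rgb: HxWx3 float array with values > 0."""
    gm = np.cbrt(rgb[..., 0] * rgb[..., 1] * rgb[..., 2])  # geometric mean
    chrom = np.log(rgb / gm[..., None])
    xy = chrom.reshape(-1, 3)[:, [0, 2]]      # 2-D log-chromaticity plane
    best_angle, best_entropy = 0, np.inf
    for deg in range(180):                    # search invariant direction
        theta = np.deg2rad(deg)
        proj = xy @ np.array([np.cos(theta), np.sin(theta)])
        hist, _ = np.histogram(proj, bins=64)
        p = hist[hist > 0] / hist.sum()
        ent = -np.sum(p * np.log(p))
        if ent < best_entropy:
            best_angle, best_entropy = deg, ent
    return best_angle  # project onto this angle for the 1-D invariant image
```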

Real-time Detection Technique of the Target in a Berth for Automatic Ship Berthing (선박 자동접안을 위한 정박지 목표물의 실시간 검출법)

  • Choi, Yong-Woon;Kim, Young-Bok;Lee, Kwon-Soon
    • Journal of Institute of Control, Robotics and Systems / v.12 no.5 / pp.431-437 / 2006
  • In this paper, a vector code correlation (VCC) method and an algorithm to improve image-processing performance are described for building an effective camera-based measurement system for automatically berthing and controlling a ship equipped with side-thrusters. To realize automatic ship berthing, the berthing assistant system on the ship must continuously track a target in the berth and measure the distance to the target and the ship's attitude, so that the ship can be moved to the specified location. The system comprises four units: a CCD camera, a camera direction controller, an ordinary PC with a built-in image processing board, and a signal conversion unit connected to the PC's parallel port. The goal is to reduce the image-processing time so that the berthing system can keep a safe schedule against risks while approaching the berth. This is achieved by composing a vector code image that uses the gradient of a plane approximating the brightness of the pixels in a region of the image, and by verifying its effectiveness on a commonly used PC. Experimental results show that the proposed method is applicable to a measurement system for automatic ship berthing and improves image-processing time roughly fourfold compared with typical template matching.
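
One plausible reading of the vector code image, sketched below: fit a plane to the brightness of each small block by least squares, then quantize the plane's gradient direction into a few codes that can be correlated against a target template far more cheaply than raw pixels. The block size, code count, and least-squares fit are assumptions about details the abstract does not specify.

```python
# Fit a plane to each block's brightness by least squares, then encode
# the plane's gradient direction as one of n_codes values. Correlating
# these coarse codes against a target template is much cheaper than
# correlating raw pixels.
import numpy as np

def vector_code_image(gray, block=8, n_codes=8):
    h, w = gray.shape
    ys, xs = np.mgrid[0:block, 0:block]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(block * block)])
    codes = np.zeros((h // block, w // block), dtype=np.uint8)
    for by in range(h // block):
        for bx in range(w // block):
            z = gray[by*block:(by+1)*block, bx*block:(bx+1)*block]
            gx, gy, _ = np.linalg.lstsq(A, z.ravel().astype(float),
                                        rcond=None)[0]
            angle = np.arctan2(gy, gx) % (2 * np.pi)  # gradient direction
            codes[by, bx] = int(angle / (2 * np.pi) * n_codes) % n_codes
    return codes
```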

Implementation of a Model for Predicting the Distance between Hazardous Objects and Workers in the Workplace using YOLO-v4 (YOLO-v4를 활용한 작업장의 위험 객체와 작업자 간 거리 예측 모델의 구현)

  • Lee, Taejun;Cho, Minwoo;Kim, Hangil;Kim, Taekcheon;Jung, Heokyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.332-334 / 2021
  • As deaths from industrial and civil accidents have been highlighted as a social problem, the Act on the Punishment of Serious Accidents was enacted to ensure the safety of citizens and to prevent serious accidents in advance, and continued effort is required. In this paper, we propose a distance prediction model for the case where a worker is struck by heavy equipment such as a forklift. For the data, actual forklifts and workers moving around the environment were captured directly by CCTV, and the work was based on the Euclidean distance. We expect that YOLO-v4 can be trained on a dataset built directly at the industrial site, and that a model which predicts the distance and judges whether the situation is dangerous can then be implemented, providing basic data for a comprehensive risk-situation judgment model.
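
A minimal sketch of the distance step: take the YOLO-v4 boxes for a forklift and a worker, and compute the Euclidean distance between their centers. The pixel-to-meter factor and the 3 m danger threshold are illustrative assumptions; a real deployment would calibrate them for the fixed CCTV view.

```python
# Euclidean distance between YOLO-v4 detections of a forklift and a
# worker, using box centers in pixel space. The pixel-to-meter factor
# and the 3 m danger threshold are illustrative assumptions.
import math

def box_center(box):                   # box = (x1, y1, x2, y2)
    return (box[0] + box[2]) / 2, (box[1] + box[3]) / 2

def distance_m(box_a, box_b, meters_per_pixel=0.02):
    (ax, ay), (bx, by) = box_center(box_a), box_center(box_b)
    return math.hypot(ax - bx, ay - by) * meters_per_pixel

forklift, worker = (100, 200, 260, 380), (420, 220, 470, 360)
d = distance_m(forklift, worker)
print(f"distance: {d:.2f} m -> {'DANGER' if d < 3.0 else 'ok'}")
```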


Detection Algorithm of Road Damage and Obstacle Based on Joint Deep Learning for Driving Safety (주행 안전을 위한 joint deep learning 기반의 도로 노면 파손 및 장애물 탐지 알고리즘)

  • Shim, Seungbo;Jeong, Jae-Jin
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.20 no.2 / pp.95-111 / 2021
  • As the population decreases in an aging society, the average age of drivers increases, and elderly drivers at high risk of accidents need autonomous-driving vehicles. To secure driving safety on the road, such vehicles require several technologies for responding to various obstacles: they must recognize static obstacles, such as poor road surfaces, as well as dynamic obstacles, such as vehicles, bicycles, and people, that may be encountered while driving. In this study, we propose a deep neural network algorithm capable of detecting these two types of obstacles simultaneously. For this algorithm, we used 1,418 road images and produced annotation data that marks seven categories of dynamic obstacles and labels road damage. After training, dynamic obstacles were detected with an average accuracy of 46.22%, and road surface damage was detected with a mean intersection over union of 74.71%. The average time required to process a single image is 89 ms, which makes the algorithm suitable for personal mobility vehicles that travel more slowly than ordinary vehicles. In the future, driving safety for personal mobility vehicles is expected to improve through technology that detects road obstacles.
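
For reference, the road-damage figure above is a mean intersection over union; a minimal sketch of that metric over class-labeled masks follows (the toy two-class example is illustrative).

```python
# Mean intersection over union between predicted and ground-truth
# class-labeled masks, averaged over classes present in either mask.
# The toy two-class (road / damage) example is illustrative.
import numpy as np

def mean_iou(pred, truth, n_classes):
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, truth == c).sum()
        union = np.logical_or(pred == c, truth == c).sum()
        if union > 0:                  # skip classes absent from both
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([[0, 0, 1], [0, 1, 1]])
truth = np.array([[0, 0, 1], [0, 0, 1]])
print(f"mIoU: {mean_iou(pred, truth, 2):.4f}")   # 0.7083
```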