• Title/Summary/Keyword: Building Object Detection

Fusion of LIDAR Data and Aerial Images for Building Reconstruction

  • Chen, Liang-Chien;Lai, Yen-Chung;Rau, Jiann-Yeou
    • Proceedings of the KSRS Conference / 2003.11a / pp.773-775 / 2003
  • From the viewpoint of data fusion, we integrate LIDAR data and digital aerial images to perform 3D building modeling in this study. The proposed scheme comprises two major parts: (1) building block extraction and (2) building model reconstruction. In the first step, height differences are analyzed to detect above-ground areas, and color analysis is then performed to exclude tree areas; potential building blocks are selected first, followed by refinement of the building areas. In the second step, edge detection combined with height information extracted from the LIDAR data yields accurate 3D edges in object space. These 3D edges are combined with the previously developed SMS method for building modeling. LIDAR data acquired by a Leica ALS 40 over the Hsin-Chu Science-based Industrial Park in northern Taiwan are used in the test.

Development of Low-Cost Vision-based Eye Tracking Algorithm for Information Augmented Interactive System

  • Park, Seo-Jeon;Kim, Byung-Gyu
    • Journal of Multimedia Information System / v.7 no.1 / pp.11-16 / 2020
  • Deep learning has become the most important technology in the field of artificial intelligence and machine learning, with performance that overwhelms existing methods in various applications. In this paper, an interactive window service based on object recognition technology is proposed. The main goal is to replace existing eye tracking technology, which requires users to wear eye tracking devices, with a deep learning-based object recognition approach that tracks the user's eyes using only ordinary cameras. We design an interactive system based on an efficient eye detection and pupil tracking method that can verify the user's eye movement. To estimate the view direction of the user's eyes, we first initialize a reference (origin) coordinate; the view direction is then estimated from the extracted eye pupils relative to this origin. We also propose a blink detection technique based on the eye aspect ratio (EAR). With the extracted view direction and eye action, we provide augmented information of interest, across various service topics and situations, without the complex and expensive eye-tracking systems used previously. For verification, a user guiding service is implemented as a prototype that uses the school map to present the location of a desired place or building.
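The blink check described above relies on the eye aspect ratio. A minimal sketch of how EAR could be computed from six eye landmarks, assuming the common p1..p6 ordering of the 68-point facial landmark scheme (the landmark ordering and the 0.2 blink threshold are illustrative assumptions, not taken from the paper):

```python
import math

def eye_aspect_ratio(landmarks):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|), where p1/p4 are the
    horizontal eye corners and p2, p3, p5, p6 the upper/lower lid points."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = landmarks
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

# EAR stays roughly constant while the eye is open and collapses toward 0
# as the lids close, so a blink is flagged when EAR dips below a threshold
# (e.g. 0.2) for a few consecutive frames.
open_eye = [(0, 2), (2, 4), (4, 4), (6, 2), (4, 0), (2, 0)]
closed_eye = [(0, 2), (2, 2.4), (4, 2.4), (6, 2), (4, 1.6), (2, 1.6)]
```

Because EAR is a ratio of distances, it is largely invariant to face scale, which is what makes a fixed threshold workable with an ordinary camera.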

Deep learning platform architecture for monitoring image-based real-time construction site equipment and worker (이미지 기반 실시간 건설 현장 장비 및 작업자 모니터링을 위한 딥러닝 플랫폼 아키텍처 도출)

  • Kang, Tae-Wook;Kim, Byung-Kon;Jung, Yoo-Seok
    • Journal of KIBIM / v.11 no.2 / pp.24-32 / 2021
  • Recently, starting with smart construction research, interest in technology that automates construction site management using artificial intelligence is increasing. To automate construction site management, it is necessary to recognize objects such as construction equipment and workers and to automatically analyze the relationships between them. For example, if the relationship between workers and construction equipment at a site is known, various site management use cases such as work productivity analysis, equipment operation monitoring, and safety management can be implemented. This study derives a real-time object detection platform architecture required for construction site management using deep learning technology, which has recently seen increasing use. To this end, deep learning models that support real-time object detection are investigated and analyzed. Based on this, a deep learning model development process required for real-time construction site object detection is defined. Following the defined process, a prototype that learns and detects construction site objects is developed, and platform development considerations and an architecture are derived from the results.

A Study on the Accuracy Comparison of Object Detection Algorithms for 360° Camera Images for BIM Model Utilization (BIM 모델 활용을 위한 360° 카메라 이미지의 객체 탐지 알고리즘 정확성 비교 연구)

  • Hyun-Chul Joo;Ju-Hyeong Lee;Jong-Won Lim;Jae-Hee Lee;Leen-Seok Kang
    • Land and Housing Review / v.14 no.3 / pp.145-155 / 2023
  • Recently, with the widespread adoption of Building Information Modeling (BIM) technology in the construction industry, various object detection algorithms have been used to verify errors between 3D models and actual construction elements. Since the characteristics of objects vary with the type of construction facility, such as buildings, bridges, and tunnels, appropriate object detection methods need to be employed. Object detection also requires initial object images, which can be acquired by various means such as drones and smartphones. This study uses a 360° camera optimized for imaging tunnel interiors to capture initial images of the tunnel structures of railway and road facilities. Object detection methodologies including the YOLO, SSD, and R-CNN algorithms are applied to detect actual objects in the captured images. Among these, the Faster R-CNN algorithm achieved a higher recognition rate and mAP value than the SSD and YOLO v5 algorithms, and the small gap between its minimum and maximum recognition rates indicates consistent detection ability. Considering the increasing adoption of BIM in current railway and road construction projects, this research highlights the potential of 360° cameras and object detection methodologies for tunnel facility sections, aiming to expand their application in maintenance.

A New Feature-Based Visual SLAM Using Multi-Channel Dynamic Object Estimation (다중 채널 동적 객체 정보 추정을 통한 특징점 기반 Visual SLAM)

  • Geunhyeong Park;HyungGi Jo
    • IEMEK Journal of Embedded Systems and Applications / v.19 no.1 / pp.65-71 / 2024
  • An indirect visual SLAM takes raw image data and exploits geometric information such as key points and line edges. SLAM performance may degrade under various environmental changes; the main problem is caused by dynamic objects, especially in highly crowded environments. In this paper, we propose a robust feature-based visual SLAM, built on ORB-SLAM, that uses multi-channel dynamic object estimation. An optical flow algorithm and a deep learning-based object detection algorithm each estimate a different type of dynamic object information. The proposed method incorporates the two sources of dynamic object information to create multi-channel dynamic masks, capturing both objects that are actually moving and potentially dynamic objects. Finally, dynamic objects included in the masks are removed in the feature extraction stage. As a result, the proposed method obtains more precise camera poses. The superiority of our ORB-SLAM variant over the conventional ORB-SLAM was verified in experiments using the KITTI odometry dataset.
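Combining two channels of dynamic-object evidence into a single exclusion mask before feature extraction could be sketched as follows; the array shapes, flow threshold, and function names are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def build_dynamic_mask(flow_magnitude, detection_mask, flow_thresh=2.0):
    """Fuse two mask channels: per-pixel optical-flow magnitude (objects
    actually moving) and a detector mask covering potentially dynamic
    classes (e.g. people, cars), even when they are currently still."""
    moving = flow_magnitude > flow_thresh          # channel 1: observed motion
    return np.logical_or(moving, detection_mask)   # channel 2: potential movers

def filter_keypoints(keypoints, mask):
    """Drop (x, y) keypoints that fall on masked (dynamic) pixels,
    so only static scene points feed the pose estimation."""
    return [(x, y) for x, y in keypoints if not mask[y, x]]
```

The union of the two channels is what lets the system reject both a car that is driving past (seen by optical flow) and a parked car that might start moving (seen only by the detector).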

A Study of Sensor Fusion using Radar Sensor and Vision Sensor in Moving Object Detection (레이더 센서와 비전 센서를 활용한 다중 센서 융합 기반 움직임 검지에 관한 연구)

  • Kim, Se Jin;Byun, Ki Hun;Won, In Su;Kwon, Jang Woo
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.16 no.2 / pp.140-152 / 2017
  • This paper presents a study of sensor fusion using a radar sensor and a vision sensor for moving object detection. A radar sensor has some problems in detecting objects: when the sensor is moved by wind or similar disturbances, it can wrongly detect objects such as buildings or trees. A vision sensor is useful in almost any area and is widely used, but it has weaknesses of its own: it is easily influenced by lighting conditions, shaking of the sensor device, weather, and so on. In this paper we therefore propose fusing these sensors for object detection. Each sensor can compensate for the other's weaknesses, so this kind of sensor fusion makes object detection much more robust.

Detecting Jaywalking Using the YOLOv5 Model

  • Kim, Hyun-Tae;Lee, Sang-Hyun
    • International Journal of Advanced Culture Technology / v.10 no.2 / pp.300-306 / 2022
  • Currently, Korea is building traffic infrastructure using Intelligent Transport Systems (ITS), but the pedestrian traffic accident rate remains very high. The purpose of this paper is to reduce the risk of traffic accidents caused by jaywalking pedestrians. The study aims to detect pedestrians who trespass, using the public data set provided by the Artificial Intelligence Hub (AIHub). The data set comprises 673,150 training images and 131,385 validation images captured under conditions including snow, rain, and fog, with a total of seven object classes: passenger cars, small buses, large buses, trucks, large trailers, motorcycles, and pedestrians. Learning is carried out using YOLOv5 as the implementation model, and as an object detection and edge detection method for the input image, a Canny edge model is applied to classify and visualize human objects within the detected road boundary range. In this study, a pedestrian detector based on the deep learning YOLOv5 model was designed and implemented. As the final result, the model achieved an mAP@0.5 of 61% with a real-time detection rate of 114.9 fps at 338 epochs.

A Research of Obstacle Detection and Path Planning for Lane Change of Autonomous Vehicle in Urban Environment (자율주행 자동차의 실 도로 차선 변경을 위한 장애물 검출 및 경로 계획에 관한 연구)

  • Oh, Jae-Saek;Lim, Kyung-Il;Kim, Jung-Ha
    • Journal of Institute of Control, Robotics and Systems / v.21 no.2 / pp.115-120 / 2015
  • Recently, in the automotive technology area, intelligent safety systems for drivers, passengers, and pedestrians have been actively developed, and much research is focused on the development of autonomous vehicles. This paper proposes the application of LiDAR sensors, which play a major role in perceiving the environment, to terrain classification, obstacle data clustering, and local map building for autonomous driving. Finally, based on these results, lane change paths that the vehicle can track were planned and generated, and the reliability of the path generation was verified experimentally.

The On-Line Fault Detection and Diagnostic Testing of Systems using Neural Network (신경회로망을 이용한 시스템의 실시간 고장감지 및 진단 방법)

  • 정진구
    • Journal of the Korea Society of Computer and Information / v.3 no.2 / pp.147-154 / 1998
  • As technical systems in buildings are developed, the processes and systems become more difficult for the average operator to understand. When operating a complex facility, it is beneficial for equipment management to provide the operator with tools that can help in decision making for recovery from a system failure. The main objective of the study is to develop a real-time automatic fault detection and diagnosis system for optimal operation of an IBS building.

Quantitative Evaluations of Deep Learning Models for Rapid Building Damage Detection in Disaster Areas (재난지역에서의 신속한 건물 피해 정도 감지를 위한 딥러닝 모델의 정량 평가)

  • Ser, Junho;Yang, Byungyun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.5 / pp.381-391 / 2022
  • This paper is intended to identify, among prevailing deep learning models (a type of AI, artificial intelligence), one that helps rapidly detect damaged buildings where disasters occur. The models selected are SSD-512, RetinaNet, and YOLOv3, which have been widely used for object detection in recent years. These models are based on one-stage detector networks suitable for rapid object detection; they are often chosen for their structural advantages and high speed, but have rarely been used for damaged building detection in disaster management. In this study, we first trained each of the algorithms on the xBD dataset, which provides post-disaster imagery with damage classification labels. Next, the three models were quantitatively evaluated with mAP (mean Average Precision) and FPS (Frames Per Second). The mAP of YOLOv3 was 34.39% and its FPS reached 46. The mAP of RetinaNet was 36.06%, 1.67 percentage points higher than YOLOv3, but its FPS was one third that of YOLOv3. SSD-512 scored significantly lower than YOLOv3 on both quantitative indicators. In a disaster situation, a rapid and precise investigation of damaged buildings is essential for effective disaster response; accordingly, the results of this study are expected to be useful for rapid response in disaster management.
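The mAP figures reported above are means of per-class average precision, where each AP is the area under a precision-recall curve. A minimal sketch of the all-point interpolated AP computation (a simplification: real evaluation first ranks detections by confidence and matches them to ground truth by IoU):

```python
def average_precision(recalls, precisions):
    """Area under the precision-recall curve with all-point interpolation.
    `recalls` must be sorted ascending, paired with `precisions`."""
    r = [0.0] + list(recalls) + [1.0]
    p = [0.0] + list(precisions) + [0.0]
    # Make precision monotonically non-increasing from right to left,
    # the standard interpolation used in PASCAL VOC-style evaluation.
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum rectangular areas between successive recall points.
    return sum((r[i + 1] - r[i]) * p[i + 1] for i in range(len(r) - 1))

def mean_average_precision(per_class_ap):
    """mAP is simply the mean of the per-class AP values."""
    return sum(per_class_ap) / len(per_class_ap)
```

The mAP/FPS trade-off the paper measures (RetinaNet slightly more accurate, YOLOv3 three times faster) is exactly the quantity this metric pair is designed to expose.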