• Title/Summary/Keyword: Road images


CycleGAN-based Object Detection under Night Environments (CycleGAN을 이용한 야간 상황 물체 검출 알고리즘)

  • Cho, Sangheum; Lee, Ryong; Na, Jaemin; Kim, Youngbin; Park, Minwoo; Lee, Sanghwan; Hwang, Wonjun
    • Journal of Korea Multimedia Society / v.22 no.1 / pp.44-54 / 2019
  • Recently, image-based object detection has made great progress with the introduction of the Convolutional Neural Network (CNN). Many approaches, such as Region-based CNN, Fast R-CNN, and Faster R-CNN, have been proposed to achieve better detection performance, and YOLO has shown the best performance when both accuracy and computational complexity are considered. However, these data-driven detection methods, including YOLO, share a fundamental problem: they cannot guarantee good performance without a large training database. In this paper, we propose a data sampling method using CycleGAN to address this problem; CycleGAN can convert the style of a given input image while retaining its characteristics. We generate the missing training samples needed for more robust object detection without the effort of collecting additional data. Extensive experiments on day-time and night-time road images validate that the proposed method improves night-time object detection accuracy without training on a night-time object database: the day-time training images are converted into synthesized night-time images, and the detection model is trained with the real day-time images and the synthesized night-time images.
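The augmentation step described above can be sketched as follows; the generator class, checkpoint file, and directory layout are assumptions for illustration, not artifacts of the paper.

```python
# Hedged sketch: synthesizing night-style copies of day-time training images
# with a pre-trained CycleGAN day->night generator, so the existing day-time
# bounding-box labels can be reused when training the detector.
from pathlib import Path

import torch
from PIL import Image
from torchvision import transforms

from cyclegan_networks import ResnetGenerator  # assumed local CycleGAN generator definition

device = "cuda" if torch.cuda.is_available() else "cpu"
G_day2night = ResnetGenerator(input_nc=3, output_nc=3).to(device).eval()
G_day2night.load_state_dict(torch.load("day2night_G.pth", map_location=device))  # assumed checkpoint

to_tensor = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize((0.5,) * 3, (0.5,) * 3),  # CycleGAN generators expect inputs in [-1, 1]
])
to_image = transforms.ToPILImage()

src_dir, dst_dir = Path("train/day"), Path("train/night_synth")
dst_dir.mkdir(parents=True, exist_ok=True)

with torch.no_grad():
    for img_path in src_dir.glob("*.jpg"):
        day = to_tensor(Image.open(img_path).convert("RGB")).unsqueeze(0).to(device)
        fake_night = G_day2night(day).squeeze(0).cpu()
        fake_night = (fake_night * 0.5 + 0.5).clamp(0, 1)  # back to [0, 1] for saving
        to_image(fake_night).save(dst_dir / img_path.name)

# The detector (e.g. YOLO) would then be trained on the union of train/day and
# train/night_synth, with the original day-time labels applied to both.
```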

A license plate area segmentation algorithm using statistical processing on color and edge information (색상과 에지에 대한 통계 처리를 이용한 번호판 영역 분할 알고리즘)

  • Seok Jung-Chul; Kim Ku-Jin; Baek Nak-Hoon
    • The KIPS Transactions:PartB / v.13B no.4 s.107 / pp.353-360 / 2006
  • This paper presents a robust algorithm for segmenting the vehicle license plate area from a road image. We consider the features of license plates in three aspects: 1) edges due to the characters in the plate, 2) colors in the plate, and 3) geometric properties of the plate. In the preprocessing step, we compute thresholds based on each feature to decide whether a pixel is inside a plate or not; a statistical approach over sample images is used to compute the thresholds. For a given road image, our algorithm binarizes it using the thresholds. Then, we select three candidate plate regions by searching the binary image with a moving window, and the plate area is chosen among the candidates with simple heuristics. The algorithm robustly detects the plate despite geometric transformations or differences in the color intensity of the plate in the input image. Moreover, the preprocessing step requires only a small number of sample images for the statistical processing. Experimental results show that the algorithm successfully segments the plate in 97.8% of 228 input images. Our prototype implementation shows an average processing time of 0.676 seconds per image for a set of 1280×960 images, executed on a 3 GHz Pentium 4 PC with 512 MB of memory.
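A rough sketch of this threshold-and-scan idea is shown below; the thresholds, HSV color range, and window size are illustrative placeholders rather than the statistically derived values from the paper.

```python
# Sketch: binarize by edge and color features, then scan with a moving window
# and keep the highest-scoring window as a plate candidate (OpenCV).
import cv2
import numpy as np

img = cv2.imread("road.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Edge feature: plate characters produce dense edges.
edges = (cv2.Canny(gray, 100, 200) > 0).astype(np.float32)

# Color feature: bright, low-saturation plate background (placeholder HSV range).
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
color = (cv2.inRange(hsv, (0, 0, 120), (180, 60, 255)) > 0).astype(np.float32)

# Moving-window search: score each window by edge density plus color density.
win_w, win_h, step = 160, 50, 10
best = (-1.0, 0, 0)
for y in range(0, img.shape[0] - win_h, step):
    for x in range(0, img.shape[1] - win_w, step):
        score = (edges[y:y + win_h, x:x + win_w].mean()
                 + color[y:y + win_h, x:x + win_w].mean())
        if score > best[0]:
            best = (score, x, y)

_, x, y = best
cv2.rectangle(img, (x, y), (x + win_w, y + win_h), (0, 0, 255), 2)
cv2.imwrite("plate_candidate.jpg", img)
```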

A Study on the Accuracy Comparison of Object Detection Algorithms for 360° Camera Images for BIM Model Utilization (BIM 모델 활용을 위한 360° 카메라 이미지의 객체 탐지 알고리즘 정확성 비교 연구)

  • Hyun-Chul Joo; Ju-Hyeong Lee; Jong-Won Lim; Jae-Hee Lee; Leen-Seok Kang
    • Land and Housing Review / v.14 no.3 / pp.145-155 / 2023
  • Recently, with the widespread adoption of Building Information Modeling (BIM) technology in the construction industry, various object detection algorithms have been used to verify errors between 3D models and actual construction elements. Since the characteristics of objects vary with the type of construction facility, such as buildings, bridges, and tunnels, appropriate object detection methods need to be employed. Object detection also requires initial object images, which can be acquired with various devices such as drones and smartphones. This study uses a 360° camera optimized for imaging tunnel interiors to capture initial images of the tunnel structures of railway and road facilities. Several object detection methodologies, including the YOLO, SSD, and R-CNN algorithms, are applied to detect actual objects in the captured images. Among them, the Faster R-CNN algorithm showed a higher recognition rate and mAP value than the SSD and YOLO v5 algorithms, and the difference between its minimum and maximum recognition rates was small, indicating consistent detection ability. Considering the increasing adoption of BIM in current railway and road construction projects, this research highlights the potential of 360° cameras and object detection methodologies for tunnel facility sections, aiming to expand their application in maintenance.
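The per-model comparison rests on computing mAP against the same labelled tunnel images for every detector; a minimal sketch of such a comparison, using torchmetrics' MeanAveragePrecision with placeholder boxes, is given below.

```python
# Sketch: comparing two detectors' mAP on the same ground truth (illustrative data).
import torch
from torchmetrics.detection.mean_ap import MeanAveragePrecision

def evaluate(preds, targets):
    """preds/targets: lists of dicts with 'boxes', 'labels' (and 'scores') tensors."""
    metric = MeanAveragePrecision(iou_type="bbox")
    metric.update(preds, targets)
    return metric.compute()["map"].item()

# Placeholder ground truth and predictions for a single tunnel image.
targets = [{"boxes": torch.tensor([[50.0, 40.0, 200.0, 160.0]]),
            "labels": torch.tensor([1])}]
faster_rcnn_preds = [{"boxes": torch.tensor([[52.0, 42.0, 198.0, 158.0]]),
                      "scores": torch.tensor([0.95]),
                      "labels": torch.tensor([1])}]
yolo_preds = [{"boxes": torch.tensor([[60.0, 55.0, 190.0, 150.0]]),
               "scores": torch.tensor([0.80]),
               "labels": torch.tensor([1])}]

print("Faster R-CNN mAP:", evaluate(faster_rcnn_preds, targets))
print("YOLO v5 mAP     :", evaluate(yolo_preds, targets))
```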

A Study on the Priorities of Urban Street Environment Components - Focusing on An Analysis of AOI (Area of Interest) Setup through An Eye-tracking Experiment - (도시가로환경 구성요소의 우선순위에 관한 연구 - 아이트래킹 실험을 통한 관심영역설정 분석을 중심으로 -)

  • Lee, Sun Hwa; Lee, Chang No
    • Korean Institute of Interior Design Journal / v.25 no.1 / pp.73-80 / 2016
  • The street is the most fundamental component of a city and a place that promotes diverse human activities. Pedestrians gaze at various elements of the street environment; visual attention indicates elements of interest, and such elements should be improved first in street environment improvement projects. Therefore, this study aims to prioritize street environment components by analyzing eye movements from a pedestrian's perspective. Street environment components were classified into road, street facility, building (facade), and sky, and three 'Streets of Youth' in Busan, located in Gwangbok-ro, Seomyeon, and near Busan University, were selected as street environment images. The experiment targeted 30 males and females in their twenties to forties. After setting the angle of sight through a calibration test, an eye-tracking experiment on the three images was conducted, and the subjects then filled in questionnaires. Three conclusions were obtained from the eye-tracking experiment and the survey. First, the building was the top priority among street environment components, followed by street facility, road, and sky. Second, components regarded as important showed an early 'Sequence', many 'Fixation Counts' and 'Visit Counts', a short 'Time to First Fixation', and long 'Fixation Duration' and 'Visit Duration'. Third, after voluntary eye movements, the subjects recognized the objects with the highest and lowest gaze frequencies.
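The AOI metrics named above (fixation count, visit count, time to first fixation, fixation and visit duration) can be derived directly from an exported fixation list; a minimal sketch with a hypothetical data layout follows.

```python
# Sketch: per-AOI gaze metrics from a fixation table (columns are assumptions).
import pandas as pd

# Each row is one fixation: start time (ms), duration (ms), AOI it landed in.
fixations = pd.DataFrame({
    "start_ms": [0, 220, 480, 900, 1250, 1600, 2050],
    "dur_ms":   [180, 200, 350, 300, 250, 400, 220],
    "aoi":      ["building", "building", "street_facility", "road",
                 "building", "sky", "street_facility"],
})

metrics = fixations.groupby("aoi").agg(
    fixation_count=("dur_ms", "size"),
    fixation_duration_ms=("dur_ms", "sum"),
    time_to_first_fixation_ms=("start_ms", "min"),
)

# A "visit" is a run of consecutive fixations inside the same AOI.
runs = (fixations["aoi"] != fixations["aoi"].shift()).cumsum()
metrics["visit_count"] = fixations.assign(run=runs).groupby("aoi")["run"].nunique()

print(metrics.sort_values("fixation_count", ascending=False))
```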

Edge-Based Tracking of an LED Traffic Light for a Road-to-Vehicle Visible Light Communication System

  • Premachandra, H. Chinthaka N.; Yendo, Tomohiro; Tehrani, Mehrdad Panahpour; Yamazato, Takaya; Fujii, Toshiaki; Tanimoto, Masayuki; Kimura, Yoshikatsu
    • Journal of Broadcast Engineering / v.14 no.4 / pp.475-487 / 2009
  • We propose a visible-light road-to-vehicle communication system at intersections as an ITS technique. In this system, communication between a vehicle and an LED traffic light uses the LED traffic light as the transmitter and an on-vehicle high-speed camera as the receiver. The LEDs in the transmitter are modulated at 500 Hz, and the emitting LEDs are captured by the high-speed camera to establish communication. Here, the luminance value of each LED in the transmitter must be found in consecutive frames to achieve effective communication. For this purpose, the transmitter must first be identified and then tracked across consecutive frames while the vehicle is moving, by processing the images from the high-speed camera. In our previous work, the transmitter was identified by taking the difference of two consecutive frames. In this paper, we mainly introduce an algorithm to track the identified transmitter across consecutive frames. Experimental results on suitable images show the effectiveness of the proposal.
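The frame-subtraction identification step mentioned above can be sketched as follows; file names and thresholds are illustrative, and the paper's actual tracking algorithm is more involved than this bounding-box heuristic.

```python
# Sketch: because the LEDs are modulated at 500 Hz, consecutive high-speed frames
# differ strongly only around the transmitter, so frame differencing localizes it.
import cv2

cap = cv2.VideoCapture("highspeed_intersection.avi")
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Pixels that blink between frames stand out in the absolute difference.
    diff = cv2.absdiff(gray, prev)
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

    # Take the bounding box of the largest blinking blob as the transmitter region.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    prev = gray
```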

Crack Detection on the Road in Aerial Image using Mask R-CNN (Mask R-CNN을 이용한 항공 영상에서의 도로 균열 검출)

  • Lee, Min Hye; Nam, Kwang Woo; Lee, Chang Woo
    • Journal of Korea Society of Industrial Information Systems / v.24 no.3 / pp.23-29 / 2019
  • Conventional crack detection methods consume a great deal of labor, time, and cost. To solve these problems, an automatic detection system is needed for cracks in images obtained with vehicles or UAVs (unmanned aerial vehicles). In this paper, we study road crack detection in unmanned aerial photographs. The aerial images are preprocessed and labeled to build a data set containing the morphological information of cracks. The generated data set was used to train a Mask R-CNN model so that it learned various crack characteristics. Experimental results show that cracks in the aerial images were detected with an accuracy of 73.5%, and some were predicted as belonging to a specific type of crack region.
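For orientation, a minimal inference sketch with torchvision's off-the-shelf Mask R-CNN is shown below; the study trained its own model on a labelled aerial crack data set, so the pretrained COCO weights here are only a stand-in.

```python
# Sketch: instance-segmentation inference with torchvision's Mask R-CNN.
import torch
from PIL import Image
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()  # COCO weights as a placeholder

img = to_tensor(Image.open("aerial_road.jpg").convert("RGB"))
with torch.no_grad():
    out = model([img])[0]

# Keep confident instances; out["masks"] holds one soft mask per instance.
keep = out["scores"] > 0.5
crack_masks = out["masks"][keep, 0] > 0.5  # boolean H x W masks
print(f"{int(keep.sum())} instances above the score threshold")
```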

An Overloaded Vehicle Identifying System based on Object Detection Model (객체 인식 모델을 활용한 적재 불량 화물차 탐지 시스템)

  • Jung, Woojin; Park, Jinuk; Park, Yongju
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.12 / pp.1794-1799 / 2022
  • Recently, the increasing number of overloaded vehicles on the road poses a risk to traffic safety, causing falling objects, road damage, and chain collisions due to abnormal weight distribution, and great damage can result once an accident occurs. Therefore, we propose an object detection-based AI model to identify overloaded vehicles that cause such social problems. In addition, we present a simple yet effective method to construct an object detection model from large-scale vehicle images. In particular, we utilize the large-scale vehicle image sets provided by the open AI-Hub, which include overloaded vehicles. We inspected vehicle sizes and image source types and pre-processed the images to train a deep learning-based object detection model. We also propose an integrated system for tracking the detected vehicles. Finally, we demonstrate that detection performance for overloaded vehicles improved by about 23% compared with a model trained on the raw data.
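The kind of annotation pre-processing described above might look like the sketch below; the COCO-style layout of the AI-Hub annotations and the minimum box size are assumptions for illustration.

```python
# Sketch: drop degenerate or tiny vehicle boxes from an (assumed) COCO-style
# annotation file before training the detector.
import json

MIN_SIDE = 32  # assumption: discard boxes smaller than 32 px on either side

with open("aihub_vehicle_annotations.json", encoding="utf-8") as f:
    coco = json.load(f)

kept = [ann for ann in coco["annotations"]
        if ann["bbox"][2] >= MIN_SIDE and ann["bbox"][3] >= MIN_SIDE]

print(f"kept {len(kept)} / {len(coco['annotations'])} boxes")
coco["annotations"] = kept

with open("aihub_vehicle_annotations_clean.json", "w", encoding="utf-8") as f:
    json.dump(coco, f, ensure_ascii=False)
```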

A Road Luminance Measurement Application based on Android (안드로이드 기반의 도로 밝기 측정 어플리케이션 구현)

  • Choi, Young-Hwan; Kim, Hongrae; Hong, Min
    • Journal of Internet Computing and Services / v.16 no.2 / pp.49-55 / 2015
  • According to traffic accident statistics for the most recent five years, more traffic accidents occurred at night than during the day. Among the various causes of traffic accidents, one of the major ones is inappropriate or missing street lights, which confuse a driver's sight and lead to accidents. In this paper, we designed and implemented a smartphone application that measures lane luminance and stores the driver's location, driving information, and lane luminance in a database in real time, in order to identify inadequate street light facilities and areas without street lights. The application is implemented in a native C/C++ environment using the Android NDK, which improves execution speed over code written in Java or other languages. To measure road luminance, the input RGB image is converted to the YCbCr color space, and the Y value gives the luminance of the road. The application detects the road lanes and stores the calculated lane luminance in the database server. It also captures road video with the smartphone camera and reduces computational cost by restricting processing to an ROI (region of interest) of the input images. The ROI is converted to a grayscale image, and the Canny edge detector is applied to extract the outlines of the lanes. After that, the Hough line transform is applied to obtain a group of candidate lanes, and both lane boundaries are selected by a lane detection algorithm that uses the gradients of the candidate lanes. Once both lanes are detected, a triangular area is set up starting 20 pixels below the intersection of the lanes, and the road luminance is estimated from this triangle: the Y value is calculated from the R, G, and B values of each pixel in the triangle. The average Y value is scaled to a range of 0 to 100 to indicate the road luminance, and each pixel value is represented with a color between black and green. Every 10 minutes, after analyzing the lane video image and the road luminance about 60 meters ahead, the car location obtained from the smartphone's GPS sensor is sent to the database server by wireless communication. We expect that the collected road luminance information can warn drivers for safe driving or effectively improve renovation plans for road lighting management.
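A condensed Python/OpenCV sketch of this pipeline (ROI, Canny, Hough lines, lane pair, triangular patch, mean Y) is given below; the original app is written in C/C++ with the Android NDK, and all parameters here are illustrative.

```python
# Sketch: estimate road luminance from a triangular patch below the detected lane pair.
import cv2
import numpy as np

frame = cv2.imread("dashcam_frame.jpg")
h, w = frame.shape[:2]
roi = frame[h // 2:, :]                         # lower half of the frame as the ROI

gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50, minLineLength=60, maxLineGap=20)

# Pick one left-leaning and one right-leaning line as the lane candidates.
left = right = None
for x1, y1, x2, y2 in (lines[:, 0] if lines is not None else []):
    slope = (y2 - y1) / (x2 - x1 + 1e-6)
    if slope < -0.3 and left is None:
        left = (x1, y1, x2, y2)
    elif slope > 0.3 and right is None:
        right = (x1, y1, x2, y2)

if left and right:
    # Approximate the lane intersection and take a small triangle 20 px below it.
    ix = int((left[0] + right[0]) // 2)
    iy = int(min(left[1], right[1]))
    tri = np.array([[ix, iy + 20], [ix - 40, iy + 80], [ix + 40, iy + 80]], np.int32)
    mask = np.zeros(roi.shape[:2], np.uint8)
    cv2.fillPoly(mask, [tri], 255)

    ycrcb = cv2.cvtColor(roi, cv2.COLOR_BGR2YCrCb)
    y_mean = cv2.mean(ycrcb[:, :, 0], mask=mask)[0]     # mean luma over the triangle
    print(f"estimated road luminance (Y, rescaled to 0-100): {y_mean / 255 * 100:.1f}")
```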

Estimation of PM concentrations at night time using CCTV images in the area around the road (도로 주변 지역의 CCTV영상을 이용한 야간시간대 미세먼지 농도 추정)

  • Won, Taeyeon; Eo, Yang Dam; Jo, Su Min; Song, Junyoung; Youn, Junhee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.6 / pp.393-399 / 2021
  • In this study, experiments were conducted to estimate PM concentrations by learning from nighttime CCTV images captured under various PM concentration conditions. For daytime images there have been many related studies: the texture and brightness information of the images is well expressed, so the information that affects learning is clear. Nighttime images, however, contain less information than daytime images, and studies using only nighttime images are rare. Therefore, we conducted an experiment that combined the nighttime images, whose characteristics are non-uniform because of light sources such as vehicles and streetlights, with ROIs (regions of interest) having relatively constant light sources, such as building roofs, building walls, and streetlights. The correlation was then analyzed and compared with the daytime experiment to see whether deep learning-based PM concentration estimation is possible with nighttime images. In the experiment, learning on the roof ROI gave the best result, and the model that combined the ROI with the entire image showed further improvement. Overall, R² exceeded 0.9, indicating that PM estimation from nighttime CCTV images is possible, and additionally combining weather data in the learning did not significantly affect the results.
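A toy sketch of this regression set-up is shown below: a small CNN maps a fixed ROI crop (for example a roof patch from a night CCTV frame) to a scalar PM value, and the fit is judged by R². The architecture, input size, and data are assumptions, not the study's model.

```python
# Sketch: CNN regression from an ROI crop to a PM concentration, with an R^2 check.
import torch
import torch.nn as nn

class RoiPMRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # scalar PM concentration

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

model = RoiPMRegressor()
roi_batch = torch.rand(8, 3, 64, 64)   # 8 cropped ROI patches (placeholder data)
pm_true = torch.rand(8) * 100          # placeholder PM10 values

pm_pred = model(roi_batch)
loss = nn.functional.mse_loss(pm_pred, pm_true)   # training objective

# R^2 of the predictions, the statistic the abstract uses to judge the fit.
with torch.no_grad():
    ss_res = ((pm_true - pm_pred) ** 2).sum()
    ss_tot = ((pm_true - pm_true.mean()) ** 2).sum()
    r2 = 1 - ss_res / ss_tot
```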

Development of Road Surface Management System using Digital Imagery (수치영상을 이용한 도로 노면관리시스템 개발)

  • Seo, Dong-Ju
    • Journal of the Korean Association of Geographic Information Studies / v.10 no.1 / pp.35-46 / 2007
  • In this study, digital imagery was used to examine asphalt concrete pavements. Using digital image information filmed with a video camera mounted on a car travelling along the road at a constant speed, a road surface management system that obtains road surface information (crack, rutting, IRI) was developed in the object-oriented language Delphi. The system was designed to improve visualization through animations and graphs. An accuracy analysis of the 3-D road surface coordinates determined by multiple-image orientation and the bundle adjustment method gave average standard errors of 0.0427 m in the X direction, 0.0527 m in the Y direction, and 0.1539 m in the Z direction. This was found to be accurate enough for practical use with maps at scales of 1/1000 or smaller, which are currently produced and used in Korea, and with GIS data. An analysis of crack-width accuracy at 12 spots using a digital video camera gave a standard error of ±0.256 mm, which is considered high precision. To obtain rutting information, physically measured cross sections at 4 spots were compared with cross sections generated from the digital images; although the maximum error was 10.88 mm, the method's practicality lies in its work efficiency.
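The accuracy figures above come down to per-axis standard errors between surveyed check points and their image-derived coordinates; a small sketch with hypothetical check-point data follows.

```python
# Sketch: per-axis standard error between surveyed and image-derived 3-D coordinates.
import numpy as np

surveyed = np.array([[10.00, 20.00, 5.00],
                     [12.50, 21.10, 5.10],
                     [15.20, 22.40, 5.30]])    # hypothetical check points (m)
from_images = np.array([[10.03, 20.05, 5.12],
                        [12.46, 21.16, 5.02],
                        [15.25, 22.35, 5.45]])  # coordinates from bundle adjustment

diff = from_images - surveyed
std_err = diff.std(axis=0, ddof=1)              # sample standard deviation per axis
print(dict(zip("XYZ", np.round(std_err, 4))))
```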
