
Sidewalk Gaseous Pollutants Estimation Through UAV Video-based Model

  • Omar, Wael (Department of Geoinformatics, University of Seoul) ;
  • Lee, Impyeong (Department of Geoinformatics, University of Seoul)
  • Received : 2021.11.26
  • Accepted : 2022.01.05
  • Published : 2022.02.28

Abstract

As unmanned aerial vehicle (UAV) technology has grown in popularity over the years, it has been introduced for air quality monitoring. UAVs can be used to estimate sidewalk pollutant concentrations by calculating road traffic emission factors for different vehicle types. These calculations require simulating the spread of pollutants from one or more sources. For this purpose, a Gaussian plume dispersion model was developed based on the US EPA Motor Vehicle Emission Simulator (MOVES), which provides an accurate estimate of fuel consumption and pollutant emissions from vehicles under a wide range of user-defined conditions. This paper describes a methodology for estimating the pollutant concentration on the sidewalk emitted by different types of vehicles. The line source model considers vehicle parameters, wind speed and direction, and pollutant concentration, all derived from a UAV equipped with a monocular camera and sampled over hourly intervals. In this article, a YOLOv5 deep learning detection model is developed; vehicles are tracked with Deep SORT (Simple Online and Realtime Tracking); each vehicle is localized using a homography transformation matrix to calculate its speed and acceleration; and ultimately a Gaussian plume dispersion model estimates the CO and NOx concentrations at a sidewalk point. The results demonstrate that the estimated pollutant values provide a fast and reasonable indication for any near-road receptor point using an inexpensive UAV, without installing air monitoring stations along the road.

1. Introduction

Various problems such as traffic congestion and vehicle exhaust emissions have become progressively serious challenges; their influence on road capacity and the atmosphere is especially prominent. Vehicle emission parameters are therefore of great significance to traffic management and public services. While older vehicle emission models considered vehicle parameters, roads, and environmental factors, they did not consider vehicle power output, so their total emission estimates were of limited quality. The vehicle-specific power (VSP) concept was originally applied to exhaust remote sensing (Jiménez-Palacios, 1999). VSP reflects the instantaneous power demand on the vehicle's engine and is used in periodic transportation emission testing. In recent years, it has been shown to correlate closely with many emitted pollutants. The US EPA MOVES model (Environmental Protection Agency, 2010) and the IVE emission model for developing countries (Mage et al., 1996) both use vehicle-specific power as an emission calculation tool (Wang et al., 2008).

Air quality monitoring is relevant mainly to people living in urban areas (McFrederick et al., 2008). Therefore, different solutions and applications must be sought to measure or estimate air quality in these environments. For these reasons, environmental protection organizations and government agencies have made monitoring and evaluating the impact of environmental pollutants a main objective (Mage et al., 1996). Most methods used to measure air pollution in dense cities are based on fixed monitoring stations and give only average readings (Kanaroglou et al., 2005), which is insufficient in areas with high population density (Alvear et al., 2016).

Since air monitoring stations cannot be established in many areas with poor accessibility, or because of limited budgets, Unmanned Aerial Vehicles (UAVs) equipped with a visual sensor could be a solution (Dunbabin and Marques, 2012). This camera is essential for obtaining vehicle velocity, acceleration, and other vehicle parameters needed to calculate vehicle-specific power. Nowadays, video-based detection and tracking, as a major part of Intelligent Transportation Systems (ITS), has well-known applications in traffic problem analysis (Zhang et al., 2011). Therefore, this study aims to develop a simple system to detect vehicles and estimate sidewalk pollutant concentrations.

2. Research purpose

Sidewalk pedestrians are exposed to different concentrations of pollutants emitted by on-road vehicles, and exposure to elevated gaseous pollutants is a serious concern to both the public and environmental agencies. Establishing a roadside air monitoring station on every main or poorly accessible road is impossible for financial and logistic reasons. Therefore, this study provides a solution for estimating sidewalk CO and NOx pollutants by developing a UAV video-based estimation model: the pollutant level on the sidewalk is estimated by developing a deep learning vehicle detection, classification, tracking, and localization model, estimating the vehicle emission rates, and predicting the concentrations at one receptor on the sidewalk. This technique can replace roadside monitoring stations, which cannot be installed on narrow or unreachable roads, with a UAV video-based system that covers a vast area with low-cost equipment.

Previous research involved a trade-off between coverage area, accuracy, and cost. This research aims to integrate aerial detection using deep learning to quantify vehicles over a vast area, develop an emission dispersion model, and evaluate it against real measurements in near real-time, saving the establishment and running costs of roadside monitoring stations. Also, with the new 5G technology, several drones can be operated simultaneously to cover a larger area and produce a single colored concentration map.

3. Limitations of the previous studies

Previous studies attempted to measure pollutant concentrations by installing several pieces of equipment along a short segment of road. Some instruments use a laser beam to calculate vehicle parameters (velocity, acceleration, etc.), while others determine the relative content of pollutants in the vehicle exhaust by measuring how much light of a specific wavelength is absorbed as the beam passes through the exhaust plume. However, the intensity and duration of weather conditions directly affect the propagation distance and transmission quality, and the effectiveness also depends on the wavelength. For submicron and micron wavelengths, the three most important conditions affecting laser transmission are absorption, scattering, and radiation; all three can degrade receiver performance and affect bit reliability and error rate (Kalashnikova et al., 2002). These methods are effective but cover only a short section of road and are not efficient in fog or rain. Experimental studies have shown that pollutants are removed from the atmosphere by precipitation. In particular, (CPCB, 2014) measured airborne dust concentrations, found significant decreases within and after monsoons, and investigated the dynamics of cleansing by rain or fog. Pioneers in this field calculated the purifying effect of atmospheric precipitation on the removal of pollutants from the atmosphere (Hales, 1972). It should be noted that the removal of pollutants by rain is a non-linear process, because it involves non-linear interactions between the different phases of the ambient air.

Similar research replaced the ground camera with a drone as a visual sensor but kept the ground sensors, synchronized with the drone (Khalid et al., 2018). This covers a vast area but is computationally expensive and suffers from the same weather-related errors. A method closer to our attempt to estimate vehicle emissions in real-time is a roadside camera system (Xia et al., 2017), which also proposes using deep learning for vehicle detection and parameter extraction to estimate emissions.

1) Remote sensing for vehicle emission monitoring

The remote sensing system uses absorption spectra for unobtrusive measurement of the pollutant concentration in the exhaust plume of a passing vehicle. The light source and detecting devices are located near or above the road, and the instrument is aligned so that the beam interacts with the exhaust plume. It is then possible to determine the relative content of pollutants in the vehicle exhaust by measuring how much light of a specific wavelength is absorbed when the beam passes through the plume.

Fig. 1 illustrates the remote detection configuration: the light source and light detector with reflector; the speed and acceleration detectors; and the license plate recorder; below it, the configuration for the top-down remote sensing system. The measurement time is less than one second and, if successful, provides an estimate of the pollutant concentration relative to the CO2 concentration in the exhaust plume. The remote sensing system measures nitric oxide (NO), carbon dioxide (CO2), nitrogen dioxide (NO2), and carbon monoxide (CO). It includes additional equipment to capture an image of the vehicle, from whose license plate the vehicle specification (brand, model, fuel type, engine size, emission standard) can be retrieved from a vehicle database. Another device measures vehicle speed and acceleration, providing information on the vehicle's engine load at the moment its emissions are measured. Finally, sensors measure environmental conditions such as temperature, pressure, and relative humidity (Zhang et al., 2011). This is the most widely used technology for measuring pollutant levels in vehicle exhaust gases while the vehicle is running, as Remote Sensing Devices (RSDs) do not need to be physically attached to the vehicle (Alvear et al., 2016).


Fig. 1. Graphic diagram of the three components of the remote sensing device.

2) Monitoring aerial emissions from highways

The proposed system model monitors pollution in real-time using a network of fixed and mobile sensor units. The output of the MOVES simulation model is first used to design a sensor network, providing initial data for sensor positions and detection. Fixed Sensor Units (FSUs) monitor air pollutants such as CO, NO2, HC, ground-level ozone (O3), and greenhouse gases such as carbon dioxide (CO2), as shown in Fig. 2. Each unit also includes a combined pressure and humidity sensor. The same emission sensors, mounted on an unmanned aerial vehicle (UAV) platform together with other necessary devices, form a Mobile Sensor Unit (MSU) (Khalid et al., 2018).


Fig. 2. System Model with FSUs and MSUs.

The UAV platform was developed by the Sheridan College research team. The collected data are analyzed, processed, and then compared with the MOVES-based emission model estimates. Real-time data are used to calibrate the MOVES output to aid in accurate predictions of greenhouse gas and air pollutant emissions. They model a section of highway with a relatively stable average speed as a single link. Hourly traffic data for seven days and average hourly speeds were obtained from the Ontario Ministry of Transportation (MTO) for two four-lane segments.

3) Vehicle emission estimation based on video tracking of vehicle

Other researchers attempted to estimate vehicle emissions in real-time with a system based on a roadside surveillance camera, as shown in Fig. 3. The system detects and tracks vehicles and calculates VSP after extracting the vehicle parameters (speed and acceleration). They proposed a video-based vehicle emission estimation method that uses roadside video to track the vehicle's trajectory in real-time and uses VSP to estimate vehicle emissions. The track of a vehicle on a particular road section is determined by a video-based algorithm; next, the vehicle speed and acceleration are calculated from the acquired track data; finally, vehicle emissions for the road section are estimated using the relationship between VSP and the basic emission rate (Zhang et al., 2011).


Fig. 3. Roadside camera for detection and tracking.

4. Methodology

There are three major calculation components in sidewalk vehicle emission estimation, as shown in Fig. 4. First, vehicle detection, classification, tracking, and localization. Second, the emission factor calculation, which depends on the vehicle class, fuel type, speed, and acceleration. Third, a line-source dispersion model is deployed to estimate the dispersion of the pollutants towards the sidewalk, considering wind speed and direction.


Fig. 4. Full methodology architecture.

1) Input

In the first detailed methodology architecture (Fig. 5), the input is video recorded by a UAV following the standard operating procedure, with the camera at a 90-degree angle to the road. For all experiments conducted in this research, videos were recorded at 25 FPS. Ground control points were acquired with GPS equipment for the localization process. Furthermore, four datasets were integrated and optimized for the pipeline's detection and classification model.


Fig. 5. Detailed input methodology architecture.

In this study, a computer with the following configuration was used: AMD Ryzen 7 3700X 8-core processor at 3.60 GHz, 16.0 GB of RAM, a 64-bit operating system on an x64-based processor, and an NVIDIA GeForce GTX 1060 6 GB GPU. The data needed for this study was video footage from the UAV. We experimented with a DJI Mavic Pro drone with a 1/2.3" CMOS camera (effective pixels: 12.35 M), used to observe long-distance traffic, and a Leica Geosystems GS07 GNSS NetRover, a midrange GNSS smart antenna.

2) Processing step

The processing step consists of three main algorithms, as shown in Fig. 6. The object detection algorithm uses YOLOv5 (You Only Look Once), which offers better detection speed than R-CNN and its variants; regarding accuracy, retraining on the four customized integrated datasets was able to improve it. After the detection algorithm extracts the required data frame by frame (class name and pixel coordinates), each vehicle is tracked using Deep SORT (Deep Simple Online and Realtime Tracking). Once each vehicle in the scene has a unique ID, a homography localization algorithm transforms the pixel coordinates in the image plane into real-world longitude and latitude coordinates.
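Below is a minimal sketch of this per-frame loop, assuming the public ultralytics/yolov5 hub interface and the deep_sort_realtime package as a stand-in Deep SORT implementation; the weight and video paths are hypothetical.

```python
import cv2
import torch
from deep_sort_realtime.deepsort_tracker import DeepSort

# Hypothetical paths; the detector is a YOLOv5 model retrained on the four
# integrated datasets (classes: car, truck, heavy_truck, bus).
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")
tracker = DeepSort(max_age=30)

cap = cv2.VideoCapture("uav_road.mp4")
frame_idx = 0
rows = []  # one (frame, track_id, class_name, cx, cy) row per vehicle per frame

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # YOLOv5 inference (BGR -> RGB); one row per detection: x1, y1, x2, y2, conf, cls
    det = model(frame[..., ::-1]).xyxy[0].cpu().numpy()
    # Deep SORT expects ([left, top, width, height], confidence, class) tuples
    ds_in = [([x1, y1, x2 - x1, y2 - y1], conf, int(c))
             for x1, y1, x2, y2, conf, c in det]
    for trk in tracker.update_tracks(ds_in, frame=frame):
        if not trk.is_confirmed():
            continue
        l, t, r, b = trk.to_ltrb()
        rows.append((frame_idx, trk.track_id, trk.get_det_class(),
                     (l + r) / 2, (t + b) / 2))  # bounding box centroid (pixels)
    frame_idx += 1
cap.release()
```

The centroid rows collected here are what the homography step later converts to longitude and latitude.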


Fig. 6. Detailed processing used algorithms (YOLOv5, Deep SORT, Homography).

3) Output

The first output of the processing step is a CSV file containing the class name with a unique ID, frame number, longitude, and latitude for each vehicle per frame. These initial data feed the calculation process that determines the average travel speed and acceleration of each vehicle individually. After aggregating the frame-by-frame data into one-second intervals and calculating the basic traffic parameters, the vehicle-specific power (VSP) can be calculated using the values of the coefficients A, B, C, M, and f for each vehicle type, as shown in Table 5. Fuel consumption and pollutant emission rates are then obtained from the operating modes for running energy consumption in the MOVES 2010 user guide, shown in Table 7. Once the emission rate is calculated for each vehicle per second, a dispersion model estimates the CO and NOx concentrations at the receptor point where the air monitoring station is located, which serves as our evaluation and reference point, as described in Fig. 7.


Fig. 7. Traffic parameters extraction steps.

4) Evaluation

Ultimately, we compare our estimated pollutant values, which represent one hour of measurements, with the readings retrieved from a roadside monitoring station. This station operates 24/7, giving one-hour averages of the same pollutants, and is located at 996-9, Doksan-dong, Geumcheon-gu, Seoul, Republic of Korea (Fig. 8).


Fig. 8. Roadside monitoring location and evaluation process.

5. Experiment

1) YOLOv5 vehicle detection using four integrated datasets

Object detection methods that propose candidate regions in a first stage and classify objects in a second stage are defined as two-stage methods. A two-stage method built on a convolutional neural network provides high accuracy in target identification, but processes images slowly. Methods that do not require a separate region proposal step, such as YOLO (You Only Look Once) and SSD (Liu et al., 2016), represent the so-called one-stage methods, which work faster than two-stage methods but have lower accuracy, especially when detecting small-scale objects. This trade-off limits their use for vehicle detection in aerial images. On the other hand, a large dataset is a key factor in improving accuracy on small targets in aerial imagery.

Joseph Redmon and others proposed YOLO, a CNN (Convolutional Neural Network) based real-time object detection system, in 2015 (Redmon and Farhadi, 2018). Joseph Redmon and Ali Farhadi presented YOLOv2 at the Conference on Computer Vision and Pattern Recognition (CVPR) (Redmon and Farhadi, 2017) to improve the accuracy and speed of the algorithm, and proposed YOLOv3 in April 2018, further improving object detection performance (Redmon and Farhadi, 2018). Proposed in 2020, YOLOv4 allows anyone to train ultra-fast and accurate object detectors using 1080Ti or 2080Ti GPUs, describing the latest methods that are more efficient and better suited for training on a single GPU (Bochkovskiy et al., 2020).

In this research, we integrated four aerial image datasets: the UAVDT benchmark (Unmanned Aerial Vehicle Benchmark: Object Detection and Tracking); VAID (Vehicle Aerial Imaging from Drone), a new vehicle detection dataset; SDD (Stanford Drone Dataset), which contains a total of 16,185 images of cars; and LSM (Lab of Sensors and Modelling, University of Seoul), a dataset prepared for our detection, classification, and tracking experiment. We modified the parameters of the YOLOv5 network configuration to train the model. Moreover, we unified the class names into 4 classes representing different vehicle categories ("car", "truck", "heavy_truck", "bus"). We therefore propose a 4-class vehicle detection method for aerial images, with the following specific steps.

Table 1. Number of images per dataset and class distribution


Table 2. Test results of the training model



Fig. 9. Detection and tracking results from the experiment scene including (ID, bounding box, confidence rate, class name).

Because the data were imported from video files, the total number of frames is very large but most images are duplicated. For this reason, the UAVDT imagery was sampled every 50 frames, and the Stanford drone imagery, restricted to images containing car objects, was sampled every 100 frames. For bounding box preprocessing, every dataset provides annotations in a different format: XML files for the VAID dataset, and x1, y1, x2, y2 text files for the UAVDT and Stanford datasets. Since we are training YOLOv5, the standard format requires, for each image, a text file with the bounding box coordinates as xc, yc, w, h, normalized by the image dimensions. This preprocessing was applied to all the datasets before training.
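The corner-to-YOLO conversion is a small coordinate transform; the sketch below shows the normalization described above (function name and example values are illustrative).

```python
def to_yolo(x1, y1, x2, y2, img_w, img_h):
    """Convert a corner-format box (x1, y1, x2, y2) to the normalized
    YOLO training format (xc, yc, w, h)."""
    xc = (x1 + x2) / 2.0 / img_w   # box center x, normalized to [0, 1]
    yc = (y1 + y2) / 2.0 / img_h   # box center y, normalized to [0, 1]
    w = (x2 - x1) / img_w          # box width, normalized
    h = (y2 - y1) / img_h          # box height, normalized
    return xc, yc, w, h

# Example: a 100 x 40 px box on a 1920 x 1080 frame, class 0 ("car")
print(0, *to_yolo(900, 500, 1000, 540, 1920, 1080))
```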

The confusion matrix, also known as the error matrix, is used to visualize system performance: rows represent the predicted classes of detected and tracked vehicles, and columns represent the ground truth values, i.e., whether the vehicle actually exists. The score threshold was chosen manually in our code after model inference, based on inspection of the detection output.


Fig. 10. Confusion matrix for the 4 classes, computed at an IoU threshold of 0.45 and a confidence threshold of 0.25.

2) Vehicle tracking by Deep SORT (Simple Online and Realtime Tracking)

Once the YOLOv5 detection algorithm has given a bounding box to each vehicle, Deep SORT (Simple Online and Realtime Tracking) uses the Kalman filter and the Hungarian algorithm to track the objects (Bewley et al., 2016). However, the effectiveness of the original SORT algorithm is degraded by occlusion and changing camera perspectives, so the authors of Deep SORT introduced an additional distance metric based on the "appearance" of the object to improve the SORT algorithm. The tracking performance is evaluated using two metrics, defined below.

\(MOTP=\frac{\sum_{i, t} d_{i, t}}{\sum_{t} c_{t}}\)       (1)

Where \(d_{i, t}\) is the distance between the localization of object i in the ground truth and in the matched detection output, and \(c_{t}\) is the total number of matches made between the ground truth and the detection output at time t.

\(MOTA=1-\frac{\sum_{t}\left(FN_{t}+FP_{t}+IDS_{t}\right)}{\sum_{t} GT_{t}}\)       (2)

Where \(FN_{t}\) is the number of false negatives (misses), \(IDS_{t}\) the identity switches (mismatch errors), \(FP_{t}\) the false positives, and \(GT_{t}\) the ground truth object count at time t.

Table 3. Multi-vehicle tracking evaluation


3) Vehicle localization using homography matrix

To calculate the homography between the two planes, we use 4 GCP correspondences for the matrix calculation and 3 GCPs for evaluation. These points were acquired by a GPS device during the experiment. If more than 4 corresponding points are available, the estimate improves. Here, we use OpenCV's homography function: in our case, the source points are pixel coordinates and the destination points are latitude/longitude decimal degree values, as sketched after Equation (3).

Where H is a homogeneous, non-singular 3 × 3 matrix. The projective transformation has eight degrees of freedom and maps points on a line to points on a line, preserving collinearity. Suppose that a pair of corresponding points x and x′ has coordinates (x, y) and (x′, y′), respectively; each point correspondence between the image plane and the world plane provides two constraints.

\(\begin{gathered} \left(\begin{array}{l} x_{1}^{\prime} \\ x_{2}^{\prime} \\ x_{3}^{\prime} \end{array}\right)=\left(\begin{array}{lll} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{array}\right)\left(\begin{array}{l} x_{1} \\ x_{2} \\ x_{3} \end{array}\right) \\ x^{\prime}=\frac{x_{1}^{\prime}}{x_{3}^{\prime}}=\frac{h_{11} x+h_{12} y+h_{13}}{h_{31} x+h_{32} y+h_{33}} \\ y^{\prime}=\frac{x_{2}^{\prime}}{x_{3}^{\prime}}=\frac{h_{21} x+h_{22} y+h_{23}}{h_{31} x+h_{32} y+h_{33}} \end{gathered}\)       (3)
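A minimal sketch of this localization step with OpenCV follows; the four pixel and longitude/latitude GCP values are placeholders for the surveyed points.

```python
import cv2
import numpy as np

# Placeholder GCPs (the real values were surveyed with the GNSS antenna).
src = np.array([[102, 220], [1815, 260], [1790, 960], [130, 930]],
               dtype=np.float32)        # pixel coordinates in the frame
dst = np.array([[126.8951, 37.4702], [126.8960, 37.4703],
                [126.8959, 37.4697], [126.8952, 37.4696]],
               dtype=np.float32)        # longitude/latitude, decimal degrees

# Estimate the 3x3 homography H from >= 4 correspondences.
H, _ = cv2.findHomography(src, dst)

# Map a vehicle centroid from pixels to lon/lat (Equation 3).
centroid = np.array([[[960.0, 540.0]]], dtype=np.float32)  # shape (1, 1, 2)
lon, lat = cv2.perspectiveTransform(centroid, H)[0, 0]
print(lon, lat)
```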

Table 4. Homography evaluation using 3 GCPs


4) Speed and acceleration calculation

To obtain speed, we use the location and frame index of the last two frames in which the object appears: the distance comes from the centroid transformed from pixel coordinates to lat/long coordinates, and the time is the difference in frame indices divided by the FPS of the video, using Equation (4).

\(\text { Speed }=\frac{\text { Distance }}{\text { Time }}\)       (4)

To obtain acceleration, we use the speed and frame index of the last two frames in which the object appears. If the speed list holds at least two entries we calculate the acceleration; otherwise we append 0 to the acceleration list. When the object disappears, we take the average of all its accelerations, as shown in Equation (5). A sketch of these calculations follows Equation (5).

\(\text { Acceleration }=\frac{\text { Difference in Speed }}{\text { Time }}\)       (5)
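The sketch below applies Equations (4) and (5) to a tracked vehicle's frame-indexed lon/lat trajectory; the haversine helper is our assumption for converting decimal-degree displacement to meters.

```python
import math

FPS = 25.0  # video frame rate used in the experiments

def haversine_m(lon1, lat1, lon2, lat2):
    """Great-circle distance in meters between two lon/lat points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speed_mps(track):
    """track: list of (frame_idx, lon, lat) for one vehicle.
    Speed over its last two observed frames (Equation 4)."""
    (f1, lon1, lat1), (f2, lon2, lat2) = track[-2], track[-1]
    return haversine_m(lon1, lat1, lon2, lat2) / ((f2 - f1) / FPS)

def accelerations(speeds, frame_gaps):
    """Per-step acceleration (Equation 5); frame_gaps[i] is the number of
    frames between speeds[i - 1] and speeds[i]. Zero until two speeds exist."""
    acc = [0.0]
    for i in range(1, len(speeds)):
        acc.append((speeds[i] - speeds[i - 1]) / (frame_gaps[i] / FPS))
    return acc
```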


Fig. 11. Both right and left vertical lines used to calculate the traffic flow rate on both sides.

5) Vehicle specific power calculation

VSP is defined as the engine power per ton of vehicle mass (including its own weight), with units of kW/t (or W/kg). The VSP variable is closely related to the vehicle's instantaneous driving conditions and emissions. The video retrieval method used in this study is limited to a specific location; each vehicle has only instantaneous detection data, similar to remote sensing detection, where other important parameters such as weight cannot be recorded. Thus, the VSP variable, which does not depend on the absolute weight of the vehicle, provides a suitable method for studying the vehicle emission problem.

VSP accounts for the power needed to overcome friction and air resistance, along with the changes in the vehicle's kinetic and potential energy, expressed as instantaneous power per unit mass. Its value is related to the driving environment, vehicle factors, and vehicle operating conditions.

In this study, the flat operating environment allows the effects of altitude and slope on VSP to be ignored, simplifying the computation. In general, after determining the parameters (Environmental Protection Agency, 2010), the VSP formula for a light-duty vehicle is:

\(V S P=\frac{A \cdot v+B \cdot v^{2}+C \cdot v^{3}+M \cdot(a+g \cdot \sin \theta) \cdot v}{f}\)       (6)

Where A is the rolling resistance term; B is the rotating resistance term; C is the aerodynamic drag term; M is the source mass (metric tons); f is the fixed mass factor (metric tons); g is the acceleration of gravity (9.8 m/s²); v is the vehicle speed (m/s); a is the vehicle acceleration (m/s²); and θ is the road grade.
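A minimal sketch of Equation (6) follows; the coefficient values are placeholders standing in for Table 5 (the table remains authoritative), and a flat road (θ = 0) is assumed.

```python
import math

G = 9.8  # gravitational acceleration, m/s^2

# Placeholder coefficients standing in for the Table 5 values:
# rolling A, rotating B, drag C, source mass M, fixed mass factor f.
COEF = {"car": dict(A=0.156, B=0.0020, C=0.00049, M=1.48, f=1.48)}

def vsp(v, a, source_type="car", grade_rad=0.0):
    """Vehicle-specific power (Equation 6) in kW/t, from speed v (m/s),
    acceleration a (m/s^2), and road grade in radians (0 on flat roads)."""
    c = COEF[source_type]
    return (c["A"] * v + c["B"] * v ** 2 + c["C"] * v ** 3
            + c["M"] * (a + G * math.sin(grade_rad)) * v) / c["f"]

print(vsp(v=12.0, a=0.5))  # roughly 8 kW/t for this illustrative car
```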

The original MOVES table classifies vehicles into more source types than our datasets can distinguish. Hence, we unified source types with similar appearance, took the mean coefficient values, and assigned one class name matching our trained model. Vehicles are classified into source types based on body type and other characteristics, such as whether they are registered to an individual ("car"), a commercial business ("Passenger Truck", "Light Commercial Truck"), or a transit agency; whether they have specific travel routines, such as a refuse truck; and whether they typically travel short- or long-haul routes.

Table 5. Coefficients for each vehicle source type


6) Fuel consumption and pollutant emission calculation

The primary methodology of the VSP-based approach is to bin second-by-second VSP data and determine the average emission rate in each bin. Each VSP bin's weight is the percentage of its matching VSP values in the entire VSP distribution, the indicator used to describe the statistical distribution of all VSP bins. After the VSP distribution is obtained, the emissions in each bin can be captured for the relevant emission process.

The accuracy of the VSP-based approach is determined by how the VSP bins are specified; however, VSP bins have historically lacked a clear definition. The EPA therefore created a new variable called the operating mode, which combines VSP and instantaneous speed and is included in the Motor Vehicle Emission Simulator (MOVES). Table 6 shows the definition of the MOVES operating modes, comprising 23 operating mode bins that are well specified in terms of VSP and instantaneous speed. Each operating mode bin's weight is the percentage of its associated VSP, speed, or acceleration values in the entire operating mode distribution, an indicator that captures the statistical distribution of all operating mode bins and can thus reflect dynamic vehicle activity conditions.

Operating modes are incorporated into the emission modeling process using the operating mode-based approach, whose main methodology entails binning second-by-second VSP and speed to compute the operating mode distribution and estimate the average emission rate/factor in each operating mode bin. As a result, the operating mode distributions are determined by two key variables: second-by-second speed and VSP data. The operating mode-based technique was used for emission estimation in this study, since MOVES is now the most widely accepted tool that the EPA provides for emission estimation.
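The following is a deliberately simplified sketch of operating-mode binning with illustrative speed and VSP cut-points; the authoritative definitions are the 23 bins of Table 6.

```python
MPH = 2.23694  # m/s to mph

def op_mode(v_mps, a_mps2, vsp_kw_t):
    """Toy operating-mode binning in the spirit of MOVES: braking and idling
    first, then speed regimes subdivided by VSP (cut-points illustrative)."""
    v = v_mps * MPH
    if a_mps2 * MPH <= -2.0:     # strong deceleration (mph/s)
        return 0                  # braking bin
    if v < 1.0:
        return 1                  # idling bin
    base = 11 if v < 25 else 21 if v < 50 else 33   # speed regime
    for i, hi in enumerate((0, 3, 6, 9, 12)):        # VSP sub-bins, kW/t
        if vsp_kw_t < hi:
            return base + i
    return base + 5

print(op_mode(v_mps=12.0, a_mps2=0.5, vsp_kw_t=8.0))  # a mid-speed bin
```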

Table 6. Operating modes for running energy consumption (MOVES 2010 user guide)


Once the operating mode calculation is complete, the MOVES model can proceed to the final step. The emission rate for each operating mode is defined in US EPA MOVES, and we extract only the "rate Basic" value type defined for the specific vehicle. Therefore, based on the per-second operating mode of each vehicle, our MOVES model can generate second-by-second fuel consumption and emission output. Emission rates for each operating mode bin were calculated using the MOVES binning standard, as shown in Table 7.

Table 7. Operating modes for running energy consumption (MOVES 2010 user guide)


7) Gaussian plume dispersion model

The Gaussian plume line source model was developed to predict the concentration of gaseous pollutants (CO, SO2, NO2) and particulate matter for different types of roads, vehicle speeds, and driving modes at traffic intersections. The common Gaussian line source model is based on the principle of superposition: the concentration at the receptor is the sum of the contributions from all the infinitesimal point sources that make up the line source. The diffusion from each point source is assumed to be independent of the other point sources; when the line source is accompanied by self-generated turbulence, the validity of this assumption becomes questionable (McFrederick et al., 2008).


Fig. 12. Line source orientation coordinate system of the wind direction.

This is the case on busy roads. Also, as the wind angle relative to the road becomes smaller, the superposition approximation gets worse. To alleviate these problems, it helps to avoid the point source assumption, so the model takes the following form.

\(C(x, y, z)=\frac{Q}{2 \sqrt{2 \pi} \sigma_{z}\left(u \sin \theta+u_{0}\right)}\left\{e^{-\frac{1}{2}\left(\frac{z-H}{\sigma_{z}}\right)^{2}}+e^{-\frac{1}{2}\left(\frac{z+H}{\sigma_{z}}\right)^{2}}\right\} \times\left[\operatorname{erf}\left(\frac{\sin \theta(L / 2-y)-x \cos \theta}{\sqrt{2} \sigma_{y}}\right)+\operatorname{erf}\left(\frac{\sin \theta(L / 2+y)+x \cos \theta}{\sqrt{2} \sigma_{y}}\right)\right]\)       (7)

where C is the pollutant concentration at the receptor point (g/m³), Q is the emission rate of the source per unit length (g/sec), x, y, and z are the receptor coordinates relative to the center of the line source (x the downwind distance, y the crosswind distance along the road, and z the receptor height), θ is the angle between the wind direction and the road, varying between 0 and 180 degrees, H is the effective source height, L is the length of the line source, u is the average wind speed, and u₀ is a wind speed correction due to the traffic wake, with different values for different stability classes (McFrederick et al., 2008); if the ambient wind speed u is lower than the value u₀ given by Carpenter and Clemena's model, i.e. u₀ ≥ u, then u = u₀ is used (McMullen, 1975). erf is the error function, and σy and σz are the horizontal and vertical dispersion coefficients, respectively, functions of the distance x and the atmospheric stability class (Crawford, 1976).

These dispersion coefficients σy and σz (sometimes called the standard deviations) are expressed in meters and correspond to a pollutant sampling time in the atmosphere of 10 min. They are functions of the atmospheric stability class and the downwind distance x from the source of the air pollutant emission. The magnitudes of σy and σz can be estimated using Equation (8), reported by D. O. Martin (Crawford, 1976); a sketch combining Equations (7) and (8) follows the stability tables below.

\(\sigma_{z}=c x^{d}+f, \quad \sigma_{y}=a x^{b}\)       (8)

Table 8. Pasquill atmospheric stability classes


Table 9. Meteorological conditions that define the Pasquill stability classes


Table 10. Stability dependent constant value

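A minimal sketch combining Equations (7) and (8) closes the methodology. The Martin constants used here are commonly quoted class-D values and are given only as an illustration, with Tables 8-10 remaining the authoritative source; the source and receptor parameters are hypothetical.

```python
import math

def sigmas(x_km, a=68.0, b=0.894, c=33.2, d=0.725, f=-1.7):
    """Dispersion coefficients (Equation 8); defaults are illustrative
    stability-class-D constants for x < 1 km."""
    return a * x_km ** b, c * x_km ** d + f   # (sigma_y, sigma_z) in meters

def line_source_conc(Q, x, y, z, u, u0, theta_deg, H=0.5, L=500.0):
    """Finite line source concentration (Equation 7), g/m^3."""
    th = math.radians(theta_deg)
    sy, sz = sigmas(x / 1000.0)
    u_eff = max(u, u0)   # per McMullen (1975): use u0 when ambient wind is weaker
    vert = (math.exp(-0.5 * ((z - H) / sz) ** 2)
            + math.exp(-0.5 * ((z + H) / sz) ** 2))
    horiz = (math.erf((math.sin(th) * (L / 2 - y) - x * math.cos(th))
                      / (math.sqrt(2) * sy))
             + math.erf((math.sin(th) * (L / 2 + y) + x * math.cos(th))
                        / (math.sqrt(2) * sy)))
    return Q / (2 * math.sqrt(2 * math.pi) * sz * (u_eff * math.sin(th) + u0)) \
        * vert * horiz

# Example: receptor 100 m downwind at breathing height, wind normal to the road
print(line_source_conc(Q=1e-3, x=100.0, y=0.0, z=1.5, u=1.5, u0=0.2,
                       theta_deg=90.0))
```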

6. Results and analysis

The first experiment hour was conducted from 12 pm to 1 pm, the second from 2 pm to 3 pm, and the third from 5 pm to 6 pm. The meteorological data and the actual number of vehicles detected during each hour are given in Tables 12 and 11, respectively.

After calculating the emission rate for the three one-hour spans, the dispersion model yields an estimated value at the receptor point where the monitoring station is installed. This monitoring station serves as the reference point against which our estimated values are evaluated, shown visually in Fig. 13, 15, and 17 and numerically in Table 13. Moreover, as the experiment was conducted on the first Monday of January, we retrieved the data of the first Monday of each consecutive month. To compare data collected over two different periods, it is best to subtract baseline data from both measurements, as shown in Fig. 14, 16, and 18.


Fig. 13. CO, NOx estimated range value by our dispersion model during the first hour.


Fig. 14. One-hour average (12 pm to 1 pm) of the first Monday for each month.


Fig. 15. CO, NOx estimated range value by our dispersion model during the second hour.


Fig. 16. One-hour average (1 pm to 2 pm) of the first Monday for each month.


Fig. 17. CO, NOx estimated range value by our dispersion model during the third hour.


Fig. 18. One-hour average (5 pm to 6 pm) of the first Monday for each month.

Table 13. Gaussian plume estimated values vs. air monitoring station readings


Regarding our model results, there is no significant change in the vehicle count between rush hour and break time, as shown in Table 11. This stable vehicle count produces similar pollutant concentrations throughout the daytime, except for the NOx concentrations during rush hour (5-6 pm). Furthermore, by investigating the experiment location one kilometer before and after the air monitoring station, we found five crosswalk signs along the approach on one side and none for the following kilometer. This means that traffic near the air monitoring station flows steadily at low speed, keeping pollutant concentrations high, whereas they improve drastically past the station as vehicle speeds increase. Moreover, the model shows pollutant aggregation in the crosswalk area, as shown in Fig. 13, 15, 17, and 19. This aggregation results from two factors. First, roughly every two minutes all vehicles stop near the crosswalk area for about two minutes; this braking and idling behavior yields zero speed and acceleration in the vehicle parameter extraction process, yet the model keeps accounting for the first 3 consecutive seconds of braking and idling time, as shown in Tables 6 and 7, amounting to about 120 seconds over the one-hour experiment span. Second, the wind speed during the experiment was less than 2 m/s, as shown in Table 12, so the pollutants scatter only slightly and agglomerate close to the pedestrian walking area.

Table 11. Total number of detected vehicles and their classification according to our YOLOv5 model


Table 12. Meteorological data during the experiment hours



Fig. 19. Experiment location.

7. Conclusion

In the result figures, the black point is the receptor, and the red points, i.e., high concentration values, aggregate on the right side of the path. When a vehicle moves from left to right, it is first detected at the left of the frame, where its initial speed is taken as 0 before its real speed is calculated; according to MOVES, the emission rate is directly proportional to velocity, which is why the concentration aggregates on the right side as the vehicle moves from left to right, and vice versa. In this paper, a YOLOv5 deep learning model was developed, vehicle tracking by Deep SORT was deployed, vehicle localization using a homography transformation matrix was used to localize each vehicle and to calculate the speed and acceleration parameters, and ultimately a Gaussian plume dispersion model was developed to estimate the CO and NOx concentrations at a sidewalk point. Experiments conducted on an urban road section gave good results for NOx, proving the feasibility and efficiency of the method, while the CO estimates were less accurate, especially during the first hour of the experiment. The results give a fast and reasonable indication for any near-road receptor point using a cheap UAV, without installing air monitoring stations along the road. Using licensed dispersion model software and comparing its results with our estimated values would be even more informative, as it would produce a colored map with a prediction range like our model, whereas the roadside monitoring station gives only single-point readings.

References

  1. Alvear, O., W. Zamora, C. Calafate, J.C. Cano, and P. Manzoni, 2016. An architecture offering mobile pollution sensing with high spatial resolution, Journal of Sensors, 2016: 1-13.
  2. Bewley, A., Z. Ge, L. Ott, F. Ramos, and B. Upcroft, 2016. Simple online and realtime tracking, Proc. of International Conference on Image Processing, Phoenix, AZ, USA, Aug. 25-28, pp. 3464-3468.
  3. Xu, X., H. Hao, Z. Liu, and I. Kim, 2017. Proc. of the 17th COTA International Conference of Transportation Professionals 2017, Shanghai, CHN, Jul. 7-9, pp. 4399-4409.
  4. Shekhar, S., 2014. National Air Quality Index, Central Pollution Control Board, New Delhi, IND.
  5. Crawford, M., 1976. Air Pollution Control Theory, McGraw-Hill, New York, NY, USA.
  6. Dunbabin, M. and L. Marques, 2012. Robots for Environmental monitoring: Significant advancements and applications, IEEE Robotics and Automation Magazine, 19(1): 24-39. https://doi.org/10.1109/MRA.2011.2181683
  7. Environmental Protection Agency, 2010. Motor Vehicle Emission Simulator (MOVES) 2010: User Guide, United States Environmental Protection Agency, Washington, D.C., USA.
  8. Hales, J.M., 1972. Fundamentals of the theory of gas scavenging by rain, Atmospheric Environment, 6(9): 635-659. https://doi.org/10.1016/0004-6981(72)90023-6
  9. Jimenez-Palacios, J.L., 1999. Understanding and Quantifying Motor Vehicle Emissions with Vehicle Specific Power and TILDAS Remote Sensing, Massachusetts Institute of Technology, Cambridge, MA, USA.
  10. Kalashnikova, O.V., H.A. Willebrand, and L.M. Mayhew, 2002. Wavelength and altitude dependence of laser beam propagation in dense fog, Free-Space Laser Communication Technologies, 4635: 278-287. https://doi.org/10.1117/12.464103
  11. Kanaroglou, P.S., M. Jerrett, J. Morrison, B. Beckerman, M.A. Arain, N.L. Gilbert, and J.R. Brook, 2005. Establishing an air pollution monitoring network for intra-urban population exposure assessment: A location-allocation approach, Atmospheric Environment, 39(13): 2399-2409. https://doi.org/10.1016/j.atmosenv.2004.06.049
  12. Khalid, L., S. Ali, V. Mashatan, and B. Komisar, 2018. Methodology for Monitoring Aerial Emissions from Highways, Proc. of the 2nd International Conference of Recent Trends in Environmental Science and Engineering, Niagara Falls, CAN, Jun. 10, vol. 142, pp. 1-7.
  13. Liu, W., D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.Y. Fu, and A.C. Berg, 2016. SSD: Single shot multibox detector, Lecture Notes in Computer Science, 9905: 21-37.
  14. Mage, D., G. Ozolins, P. Peterson, A. Webster, R. Orthofer, V. Vandeweerd, and M. Gwynne, 1996. Urban air pollution in megacities of the world, Atmospheric Environment, 30(5): 681-686. https://doi.org/10.1016/1352-2310(95)00219-7
  15. McFrederick, Q.S., J.C. Kathilankal, and J.D. Fuentes, 2008. Air pollution modifies floral scent trails, Atmospheric Environment, 42(10): 2336-2348. https://doi.org/10.1016/j.atmosenv.2007.12.033
  16. McMullen, R.W., 1975. The Change of Concentration Standard Deviations with Distance, Journal of the Air Pollution Control Association, 25(10): 1057-1058. https://doi.org/10.1080/00022470.1975.10470179
  17. Redmon, J. and A. Farhadi, 2018. YOLOv3: An Incremental Improvement, arXiv preprint, arXiv: 1804.02767.
  18. Wang, Q., H. Huo, K. He, Z. Yao, and Q. Zhang, 2008. Characterization of vehicle driving patterns and development of driving cycles in Chinese cities, Transportation Research Part D: Transport and Environment, 13(5): 289-297. https://doi.org/10.1016/j.trd.2008.03.003
  19. Xia, Q., Y. Chen, and L. Cheng, 2018. Vehicle Emission Estimation Based on Video Tracking of Vehicle Trajectories, Proc. of 2017 COTA International Conference of Transportation Professionals, Shanghai, CHN, Jul. 7-9, pp. 2942-2951.
  20. Zhang, J., F.Y. Wang, K. Wang, W.H. Lin, X. Xu, and C. Chen, 2011. Data-driven intelligent transportation systems: A survey, IEEE Transactions on Intelligent Transportation Systems, 12(4): 1624-1639. https://doi.org/10.1109/TITS.2011.2158001