• Title/Summary/Keyword: Image processing

Search results: 9,956 (processing time 0.049 seconds)

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.2
    • /
    • pp.127-142
    • /
    • 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures such as convolutional neural networks, deep belief networks, and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among those architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised learning models have gained more popularity than unsupervised learning models such as deep belief networks, because they have produced successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture which is particularly well adapted to classifying images. Using this architecture makes convolutional networks fast to train. This, in turn, helps us train deep, multi-layer networks, which are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks use three basic ideas: local receptive fields, shared weights, and pooling.
By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that we use the same weights and bias for each of the local receptive fields. This means that all the neurons in the hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers. Pooling layers are usually used immediately after convolutional layers. What the pooling layers do is simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks or RNNs. A recurrent neural network is a class of artificial neural network where connections between units form a directed cycle. This creates an internal state of the network which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, such as vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through layers. This makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients aren't just propagated backward through layers, they're propagated backward through time. If the network runs for a long time, the gradient can become extremely unstable and hard to learn from.
It has been possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
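The three building blocks named in the abstract above (local receptive fields, shared weights, and pooling) can be sketched in a few lines of NumPy. This is a minimal illustration assuming a single 3x3 kernel; the function names and sample values are hypothetical, not from the paper:

```python
import numpy as np

def conv2d(image, kernel):
    # Shared weights: the same kernel slides over every local receptive field.
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # One local receptive field feeds one output neuron.
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    # Pooling: condense the feature map by keeping the max of each block.
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size
    return feature_map[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[1., 0., -1.]] * 3)   # a simple vertical-edge detector
features = max_pool(conv2d(image, kernel))
print(features.shape)  # (2, 2)
```

Because the same `kernel` is reused at every position, the layer detects one feature everywhere in the image, and `max_pool` then simplifies the resulting 4x4 feature map down to 2x2.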

Assessment of Fire-Damaged Mortar using Color image Analysis (색도 이미지 분석을 이용한 화재 피해 모르타르의 손상 평가)

  • Park, Kwang-Min;Lee, Byung-Do;Yoo, Sung-Hun;Ham, Nam-Hyuk;Roh, Young-Sook
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.23 no.3
    • /
    • pp.83-91
    • /
    • 2019
  • The purpose of this study is to assess a fire-damaged concrete structure using a digital camera and image processing software. To simulate the damage, mortar and paste samples with W/C = 0.5 (normal strength) and 0.3 (high strength) were put into an electric furnace and heated from $100^{\circ}C$ to $1000^{\circ}C$. The paste was ground into a powder to measure CIELAB chromaticity, and the samples were photographed with a digital camera. The RGB chromaticity was measured by color intensity analyzer software. As a result, the residual compressive strength of W/C = 0.5 and 0.3 was 87.2% and 86.7% at a heating temperature of $400^{\circ}C$. However, there was a sudden decrease in strength above $500^{\circ}C$, where W/C = 0.5 and 0.3 retained only 55.2% and 51.9% of their strength. At $700^{\circ}C$ or higher, W/C = 0.5 and W/C = 0.3 showed 26.3% and 27.8% residual strength, so the durability of the structure could not be secured. The results of $L^*a^*b^*$ color analysis show that $b^*$ increases rapidly after $700^{\circ}C$, indicating that the yellow intensity strengthens after $700^{\circ}C$. Further, the RGB analysis found that the histogram kurtosis and frequency of red and green increase after $700^{\circ}C$, indicating that the number of red and green pixels increases. Therefore, it is deemed possible to estimate the degree of damage by checking the change in yellow ($b^*$ or R+G) when analyzing the chromaticity of fire-damaged concrete structures.
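The chromaticity analysis described above depends on converting camera RGB values to CIELAB so that $b^*$ (the yellow-blue axis) can be tracked. A minimal sketch of that conversion using the standard sRGB/D65 formulas follows; the sample RGB triple is a hypothetical yellowish value, not measured data from the study:

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB triple (0-255) to CIELAB (D65 white point)."""
    c = np.array(rgb, dtype=float) / 255.0
    # Inverse sRGB gamma (linearize)
    c = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ using the sRGB/D65 matrix
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = m @ c
    # Normalize by the D65 reference white
    xyz /= np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > (6 / 29) ** 3, xyz ** (1 / 3), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return L, a, b

# A yellowish sample yields a large positive b*, the cue the study links to heating above 700 °C.
L, a, b = srgb_to_lab((200, 180, 60))
print(round(b, 1))
```

A neutral gray input gives $a^* \approx b^* \approx 0$, so a strongly positive $b^*$ isolates exactly the yellow shift the abstract associates with severe fire damage.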

Analysis of Skin Color Pigments from Camera RGB Signal Using Skin Pigment Absorption Spectrum (피부색소 흡수 스펙트럼을 이용한 카메라 RGB 신호의 피부색 성분 분석)

  • Kim, Jeong Yeop
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.1
    • /
    • pp.41-50
    • /
    • 2022
  • In this paper, a method to directly calculate the major elements of skin color, such as melanin and hemoglobin, from the RGB signal of a camera is proposed. The main elements of skin color are typically obtained by measuring spectral reflectance with dedicated equipment and recombining the values at selected wavelengths of the measured light. The values calculated in this way include the melanin index and erythema index, and they require special equipment such as a spectral reflectance measuring device or a multi-spectral camera. It is difficult to find a direct calculation method for such components from a general digital camera, and a method of indirectly calculating the concentrations of melanin and hemoglobin using independent component analysis has been proposed. This method targets a region of an RGB image, extracts characteristic vectors of melanin and hemoglobin, and calculates the concentrations in a manner similar to Principal Component Analysis. The disadvantage of this method is that per-pixel values are difficult to calculate directly, because a group of pixels in a certain area is used as the input, and since the extracted feature vector is obtained by an optimization method, it tends to yield a different value each time it is executed. The final output is determined as an image representing the melanin and hemoglobin components by converting back to the RGB coordinate system without using the feature vector itself. To improve on these disadvantages, the proposed method calculates the component values of melanin and hemoglobin in a feature space rather than in the RGB coordinate system, using a feature vector, and calculates the spectral reflectance corresponding to the skin color using a general digital camera.
A method of calculating the detailed components constituting skin pigments, such as melanin, oxidized hemoglobin, deoxidized hemoglobin, and carotenoid, from the calculated spectral reflectance is also presented. The proposed method does not require special equipment such as a spectral reflectance measuring device or a multi-spectral camera, and unlike the existing method, direct per-pixel calculation is possible and the same results are obtained in repeated executions. The standard deviation of the melanin and hemoglobin densities for the proposed method was 15% of that of the conventional method, making it about six times more stable.
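The per-pixel, repeatable decomposition claimed above can be illustrated with a simple Beer-Lambert least-squares sketch. The pigment direction vectors below are hypothetical placeholders; the actual method derives its components from measured skin pigment absorption spectra:

```python
import numpy as np

# Hypothetical absorbance directions for melanin and hemoglobin in -log(RGB) space;
# real vectors would come from measured absorption spectra, as in the paper.
MELANIN = np.array([0.74, 0.56, 0.37])
HEMOGLOBIN = np.array([0.40, 0.84, 0.36])

def pigment_densities(rgb):
    """Per-pixel least-squares decomposition of optical density into two pigments."""
    rgb = np.clip(np.asarray(rgb, dtype=float) / 255.0, 1e-6, 1.0)
    density = -np.log(rgb)                           # Beer-Lambert optical density
    basis = np.stack([MELANIN, HEMOGLOBIN], axis=1)  # 3x2 mixing matrix
    coeffs, *_ = np.linalg.lstsq(basis, density, rcond=None)
    return coeffs  # [melanin, hemoglobin] concentrations

print(pigment_densities((180, 120, 100)))
```

Because this is a closed-form least-squares solve rather than a randomized optimization, it operates pixel by pixel and returns the same result on every run, which is the repeatability property the abstract emphasizes over the ICA-based approach.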

Analysis of Waterbody Changes in Small and Medium-Sized Reservoirs Using Optical Satellite Imagery Based on Google Earth Engine (Google Earth Engine 기반 광학 위성영상을 이용한 중소규모 저수지 수체 변화 분석)

  • Younghyun Cho;Joonwoo Noh
    • Korean Journal of Remote Sensing
    • /
    • v.40 no.4
    • /
    • pp.363-375
    • /
    • 2024
  • Waterbody change detection using satellite images has recently been carried out in various regions of South Korea, utilizing multiple types of sensors. This study utilizes optical satellite images from Landsat and Sentinel-2 based on Google Earth Engine (GEE) to analyze long-term surface water area changes in four monitored small and medium-sized water supply dams and agricultural reservoirs in South Korea. The analysis covers 19 years for the water supply dams and 27 years for the agricultural reservoirs. By employing image analysis methods such as the normalized difference water index, Canny edge detection, and Otsu's thresholding for waterbody detection, the study reliably extracted water surface areas, allowing clear annual changes in waterbodies to be observed. When comparing the time series of surface water areas derived from satellite images to actual measured water levels, a high correlation coefficient above 0.8 was found for the water supply dams. However, the agricultural reservoirs showed a lower correlation, between 0.5 and 0.7, attributed to the characteristics of agricultural reservoir management and the inadequacy of the comparative data rather than to the satellite image analysis itself. The analysis also revealed several inconsistencies in the results for smaller reservoirs, indicating the need for further studies on these reservoirs. The changes in surface water area, calculated using GEE, provide valuable spatial information on waterbody changes across the entire watershed, which cannot be identified solely by measuring water levels. This highlights the usefulness of efficiently processing extensive long-term satellite imagery data. Based on these findings, it is expected that future research could apply this method to a larger number of dam reservoirs with varying sizes, shapes, and monitoring statuses, potentially yielding additional insights into different reservoir groups.
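The waterbody extraction pipeline named in the abstract above (NDWI followed by Otsu's thresholding) can be sketched as follows. The synthetic green/NIR reflectances are illustrative only, not data from the study:

```python
import numpy as np

def ndwi(green, nir):
    # Normalized Difference Water Index: water pixels push NDWI toward +1.
    green, nir = np.asarray(green, float), np.asarray(nir, float)
    return (green - nir) / (green + nir + 1e-9)

def otsu_threshold(values, bins=256):
    # Otsu's method: pick the cut that maximizes between-class variance.
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)              # weight of the lower class at each cut
    mu = np.cumsum(p * centers)    # cumulative mean
    mu_t = mu[-1]                  # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return centers[np.nanargmax(var_between)]

# Synthetic scene: bright-NIR land on the right, dark-NIR water on the left strip.
green = np.full((10, 10), 0.3)
nir = np.full((10, 10), 0.5)
nir[:, :4] = 0.05                  # left strip behaves like water
index = ndwi(green, nir)
water_mask = index > otsu_threshold(index.ravel())
print(water_mask.sum())  # 40
```

The surface water area per image is then just the count of masked pixels times the pixel footprint, which is how a time series of areas like the one in the study can be assembled scene by scene.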

Current Status and Perspectives in Varietal Improvement of Rice Cultivars for High-Quality and Value-Added Products (쌀 품질 고급화 및 고부가가치화를 위한 육종현황과 전망)

  • 최해춘
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.47
    • /
    • pp.15-32
    • /
    • 2002
  • The endeavors to enhance the grain quality of high-yielding japonica rice continued steadily through the 1980s and 1990s, along with the self-sufficiency of rice production and the increasing demand for high-quality rices. During this time, considerable progress and success were achieved in the development of high-quality japonica cultivars and quality evaluation techniques, including the elucidation of the interrelationship between the physicochemical properties of rice grain and the physical or palatability components of cooked rice. In the 1990s, some high-quality japonica rice cultivars and special rices adaptable for food processing, such as large-kernel, chalky-endosperm, aromatic, and colored rices, were developed, and their objective preference and utility were also examined by a palatability meter, rapid visco analyzer, and texture analyzer. Recently, new special rices such as extremely low-amylose dull or opaque non-glutinous endosperm mutants were developed. Also, a high-lysine rice variety was developed for higher nutritional utility. The water uptake rate and the maximum water absorption ratio showed significantly negative correlations with the K/Mg ratio and alkali digestion value (ADV) of milled rice. The rice materials showing the higher amount of hot water absorption exhibited the larger volume expansion of cooked rice. The harder rices with lower moisture content revealed a higher rate of water uptake at twenty minutes after soaking and a higher ratio of maximum water uptake under room temperature conditions. These water uptake characteristics were not associated with the protein and amylose contents of milled rice or the palatability of cooked rice. The water/rice ratio (on a w/w basis) for optimum cooking averaged 1.52 in dry milled rices (12% wet basis), with a varietal range from 1.45 to 1.61, and the expansion ratio of milled rice after proper boiling averaged 2.63 (on a v/v basis).
The major physicochemical components of rice grain associated with the palatability of cooked rice were examined using japonica rice materials showing narrow varietal variation in grain size and shape, alkali digestibility, gel consistency, and amylose and protein contents, but considerable difference in appearance and texture of cooked rice. The glossiness or gross palatability score of cooked rice was closely associated with the peak, hot-paste, and consistency viscosities, with some year-to-year difference. The high-quality rice variety "Ilpumbyeo" showed a smaller portion of amylose on the outer layer of the milled rice grain and a smaller and slower change in the iodine blue value of the extracted paste during twenty minutes of boiling. This highly palatable rice also exhibited a very fine net structure in the outer layer and fine, spongy, well-swollen gelatinized starch granules in the inner layer and core of the cooked rice kernel compared with a poorly palatable rice, as seen in scanning electron microscope images. The gross sensory score of cooked rice could be estimated by a multiple linear regression formula, deduced from the relationship between the rice quality components mentioned above and the eating quality of cooked rice, with a high coefficient of determination. The $\alpha$-amylase-iodine method was adopted for checking the varietal difference in retrogradation of cooked rice. The rice cultivars revealing relatively slow retrogradation in aged cooked rice were Ilpumbyeo, Chucheongbyeo, Sasanishiki, Jinbubyeo, and Koshihikari. A Tongil-type rice, Taebaegbyeo, and a japonica cultivar, Seomjinbyeo, showed relatively fast deterioration of cooked rice. Generally, the rice cultivars with better eating quality showed less retrogradation and more sponginess in cooled cooked rice. Also, the rice varieties exhibiting less retrogradation in cooled cooked rice revealed higher hot viscosity and lower cool viscosity of rice flour in the amylogram.
The sponginess of cooled cooked rice was closely associated with the magnesium content and volume expansion of cooked rice. The hardness change ratio of cooked rice upon cooling was negatively correlated with the amount of solids extracted during boiling and the volume expansion of cooked rice. The major physicochemical properties of rice grain closely related to the palatability of cooked rice may be directly or indirectly associated with the retrogradation characteristics of cooked rice. Softer gel consistency and lower amylose content in milled rice led to a higher ratio of popped rice and larger bulk density of popping. Rice grains with stronger hardness showed a relatively higher ratio of popping, and more chalky or less translucent rices exhibited a lower ratio of intact popped brown rice. The potassium and magnesium contents of milled rice were negatively associated with the gross score for noodle making when mixed half-and-half with wheat flour, and the rices better suited for noodle making revealed relatively smaller amounts of solid extraction during boiling. Greater volume expansion of the batter for making brown rice bread resulted in better loaf formation and more springiness in the rice bread. Higher-protein rices produced relatively moister white rice bread. The springiness of rice bread was also significantly correlated with high amylose content and hard gel consistency. Completely chalky and large-grain rices showed better suitability for fermentation and brewing. The glutinous rices were classified into nine different varietal groups based on various physicochemical and structural characteristics of the endosperm. There were some close associations among these grain properties, and there was large varietal difference in suitability for various traditional food processing.
Our future breeding efforts on improving rice quality for high palatability and processing utility or value-added products should focus not only on continuous enhancement of marketing and eating qualities but also on diversification of the morphological, physicochemical, and nutritional characteristics of rice grain suitable for processing various value-added rice foods.

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.1-19
    • /
    • 2019
  • Artificial intelligence (AI) is one of the main driving forces leading the Fourth Industrial Revolution. The technologies associated with AI have already shown abilities equal to or better than people's in many fields, including image and speech recognition. In particular, many efforts have been actively made to identify current technology trends and analyze their development directions, because AI technologies can be utilized in a wide range of fields including the medical, financial, manufacturing, service, and education fields. Major platforms that can develop complex AI algorithms for learning, reasoning, and recognition have been opened to the public as open source projects. As a result, technologies and services that utilize them have increased rapidly. This has been confirmed as one of the major reasons for the fast development of AI technologies. Additionally, the spread of the technology is greatly indebted to open source software, developed by major global companies, supporting natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing OSS projects associated with AI, which have been developed through the online collaboration of many parties. This study searched and collected a list of major projects related to AI generated from 2000 to July 2018 on Github. It confirmed the development trends of major technologies in detail by applying text mining techniques to the topic information that indicates the characteristics and technical fields of the collected projects. The results of the analysis showed that the number of software development projects per year was less than 100 until 2013. However, it increased to 229 projects in 2014 and 597 projects in 2015. In particular, the number of open source projects related to AI increased rapidly in 2016 (2,559 OSS projects).
It was confirmed that the number of projects initiated in 2017 was 14,213, almost four times the total number of projects generated from 2009 to 2016 (3,555 projects). The number of projects initiated from January to July 2018 was 8,737. The development trend of AI-related technologies was evaluated by dividing the study period into three phases. The appearance frequency of topics indicates the technology trends of AI-related OSS projects. The results showed that natural language processing technology has continued to be at the top in all years, implying that OSS for it has been developed continuously. Until 2015, the programming languages Python, C++, and Java were listed among the top ten most frequently appearing topics. However, after 2016, programming languages other than Python disappeared from the top ten topics. In their place, platforms supporting the development of AI algorithms, such as TensorFlow and Keras, show high appearance frequency. Additionally, reinforcement learning algorithms and convolutional neural networks, which have been used in various fields, appeared frequently as topics. The results of topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency. The main difference was that visualization and medical imaging topics were found at the top of the list, although they had not been at the top from 2009 to 2012. These results indicate that OSS was developed in the medical field in order to utilize AI technology. Moreover, although computer vision was in the top 10 of the appearance frequency list from 2013 to 2015, it was not in the top 10 of degree centrality. The topics at the top of the degree centrality list were similar to those at the top of the appearance frequency list, with the ranks of convolutional neural networks and reinforcement learning changed slightly.
The trend of technology development was examined using the appearance frequency of topics and degree centrality. The results showed that machine learning had the highest frequency and the highest degree centrality in all years. Moreover, it is noteworthy that, although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its rank abruptly increased between 2013 and 2015. It was confirmed that in recent years both technologies had high appearance frequency and degree centrality. TensorFlow first appeared during the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018 to reach the top of the lists after deep learning and Python. Computer vision and reinforcement learning did not show an abrupt increase or decrease, and they had relatively low appearance frequency and degree centrality compared with the above-mentioned topics. Based on these analysis results, it is possible to identify the fields in which AI technologies are actively developed. The results of this study can be used as a baseline dataset for more empirical analyses of future technology trends and their convergence.
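The two measures used throughout this analysis, topic appearance frequency and degree centrality on the topic co-occurrence network, can be sketched as follows. The project topic lists are hypothetical stand-ins for Github topic data, not from the study's corpus:

```python
from collections import Counter
from itertools import combinations

# Hypothetical topic lists for a handful of projects (stand-ins for Github topic data).
projects = [
    ["machine-learning", "deep-learning", "tensorflow"],
    ["machine-learning", "computer-vision"],
    ["machine-learning", "deep-learning", "python"],
    ["reinforcement-learning", "deep-learning"],
]

# Appearance frequency: how often each topic occurs across all projects.
freq = Counter(topic for topics in projects for topic in topics)

# Co-occurrence network: an edge links two topics tagged on the same project.
edges = {frozenset(pair) for topics in projects for pair in combinations(topics, 2)}
nodes = set(freq)
degree = Counter()
for edge in edges:
    for topic in edge:
        degree[topic] += 1

# Degree centrality: a topic's degree divided by the number of other nodes.
centrality = {t: degree[t] / (len(nodes) - 1) for t in nodes}
top = sorted(centrality, key=lambda t: (-centrality[t], t))[0]

print(freq.most_common(2))
print(top)
```

A topic can rank high on one measure and low on the other, which is exactly the divergence the study observed for computer vision (frequent, but not central).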

Strength Evaluation of Pinus rigida Miller Wooden Retaining Wall Using Steel Bar (Steel Bar를 이용한 리기다소나무 목재옹벽의 내력 평가)

  • Song, Yo-Jin;Kim, Keon-Ho;Lee, Dong-Heub;Hwang, Won-Joung;Hong, Soon-Il
    • Journal of the Korean Wood Science and Technology
    • /
    • v.39 no.4
    • /
    • pp.318-325
    • /
    • 2011
  • Pitch pine (Pinus rigida Miller) retaining walls using steel bars, which offer good constructability and strength performance at construction sites, were manufactured and their strength properties were evaluated. The wooden retaining wall using steel bars was piled into four stories of stretchers and three stories of headers, 770 mm high, 2,890 mm long, and 782 mm wide. The retaining wall was assembled by inserting stretchers onto the steel bars after drilling holes 18 mm in diameter in the top and bottom stretchers, and then stacking the other stretchers and headers, which have a slit 66 mm deep and 18 mm wide. The strength properties of the retaining walls were investigated by a horizontal loading test, and the deformation of the structure by image processing (AICON 3D DPA-PRO system). Two joint types were used in the retaining wall: Type-A, made with a single long stretcher and two headers, and Type-B, made with two short stretchers connected by a half-lap joint and two headers. The compressive shear strength of each joint was tested, with three replicates per test. In the horizontal loading test, the wooden retaining wall using steel bars was 1.6 times stronger than the wooden retaining wall using square timber. The timber and joints did not fracture in the test. In the compressive shear tests, the maximum loads of Type-A and Type-B were 130.13 kN and 130.6 kN, respectively. Both constructability and strength were better in the wooden retaining wall using steel bars than in the one using square timber.

A Study of the Effect of a Mixture of Hyaluronic Acid and Sodium Carboxymethyl Cellulose ($Guardix-sol^{(R)}$) on the Prevention of Pericardial Adhesion (Hyaluronic Acid와 Sodium Carboxymethyl Cellulose 혼합용액($Guardix-sol^{(R)}$)의 섬유막유착 방지 효과에 관한 연구)

  • Lee, Song-Am;Kim, Jin-Sik;Kim, Jun-Seok;Hwang, Jae-Joon;Lee, Woo-Surng;Kim, Yo-Han;Cho, Yang-Kyu;Chee, Hyun-Keun
    • Journal of Chest Surgery
    • /
    • v.43 no.6
    • /
    • pp.596-601
    • /
    • 2010
  • Background: This study was designed to evaluate the efficacy of a mixture of hyaluronic acid and sodium carboxymethyl cellulose ($Guardix-sol^{(R)}$) on experimental pericardial adhesion. Material and Method: Thirty rats were divided into 2 groups of 15 rats each and pericardial mesothelial injury was induced during surgery by abrasion. In the control group, blood and normal saline were administered into pericardium; in the test group, blood and HA-CMC solution were administered. Pericardial adhesions were evaluated at 2 weeks (n=5), 4 weeks (n=5), and 6 weeks (n=5) after surgery. The severity of adhesions was graded by macroscopic examination, and the adhesion tissue thickness was analyzed microscopically with Masson trichrome stain and an image processing program. Result: The test group had significantly lower macroscopic adhesion scores ($2.9{\pm}0.6$ : $3.9{\pm}0.4$, p<0.000) compared with the control group. For microscopic adhesion tissue thickness, the test group had lower scores compared with the control group, but this difference was not statistically significant ($91.73{\pm}49.91$ : $117.67{\pm}46.4$, p=0.106). Conclusion: We conclude that an HA-CMC solution ($Guardix-sol^{(R)}$) reduces the formation of pericardial adhesions in this animal model.

Strength Properties of Wooden Model Erosion Control Dams Using Domestic Pinus rigida Miller I (국내산 리기다소나무를 이용한 목재 모형 사방댐의 강도 성능 평가 I)

  • Kim, Sang-Woo;Park, Jun-Chul;Lee, Dong-Heub;Son, Dong-Won;Hong, Soon-Il
    • Journal of the Korean Wood Science and Technology
    • /
    • v.36 no.6
    • /
    • pp.77-87
    • /
    • 2008
  • A wooden model erosion control dam was made of pitch pine, and its strength properties were evaluated. The dam was made with pitch pine round posts 90 mm in diameter treated with CUAZ-2 (copper azole), varying the joint in three different types. For each type, the dam was built nine tiers high (five tiers of cross-bars and four tiers of vertical bars), with a height of 790 mm. Strength properties were then investigated through a horizontal loading test and an impact strength test, and the deformation of the structure through image processing (AICON 3D DPA-PRO system). In the horizontal loading test of the dam using round posts of 90 mm diameter, filling with stone did not affect the strength much when self-drilling screws were used, although the strength decreased by 23%. In the monolithic type of dam using screw bars, strength increased 1.5 times and deformation decreased when filled with stone. When reinforced with screw bars whose rings were connected to self-drilling screws, strength increased 4.8 times. In the impact strength test of the model dam made with round posts of 90 mm diameter, the dam connected with self-drilling screws and not filled with stone was totally destroyed by the first impact, while the dam using screw bars ruptured at a cross-bar under a first-impact load of 779 kgf. In the second impact, the base parts ruptured and the reaction force decreased to 545 kgf. In the third impact, the whole base was destroyed and the reaction force decreased to 263 kgf.

Comparative Study on the Methodology of Motor Vehicle Emission Calculation by Using Real-Time Traffic Volume in the Kangnam-Gu (자동차 대기오염물질 산정 방법론 설정에 관한 비교 연구 (강남구의 실시간 교통량 자료를 이용하여))

  • 박성규;김신도;이영인
    • Journal of Korean Society of Transportation
    • /
    • v.19 no.4
    • /
    • pp.35-47
    • /
    • 2001
  • Traffic represents one of the largest sources of primary air pollutants in urban areas. As a consequence, numerous abatement strategies are being pursued to decrease the ambient concentration of pollutants. A characteristic of most of these strategies is a requirement for accurate data on both the quantity and spatial distribution of emissions to air, in the form of an atmospheric emission inventory database. In the case of traffic pollution, such an inventory must be compiled using activity statistics and emission factors for each vehicle type. The majority of inventories are compiled using passive data from either surveys or transportation models, and by their very nature they tend to be out of date by the time they are compiled. Current trends are towards integrating urban traffic control systems with assessments of the environmental effects of motor vehicles. In this study, a methodology for calculating motor vehicle emissions using real-time traffic data was examined, and a methodology for estimating CO emissions was applied to a test area in Seoul. Traffic data, which are required on a street-by-street basis, were obtained from the induction loops of the traffic control system. The speed-related mass of CO emitted from vehicle tailpipes was calculated from the traffic-system data, considering parameters such as volume, composition, average velocity, and link length. The result was then compared with that of an emission calculation method based on the VKT (Vehicle Kilometers Travelled) of each vehicle category.
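The link-based calculation described above (emission mass = traffic volume x speed-dependent emission factor x link length) can be sketched as follows. The emission-factor curve is a hypothetical placeholder, not the regulatory factors used in the study:

```python
# Hypothetical speed-dependent CO emission factor in g/km; real factors come
# from regulatory tables per vehicle category, not from the paper itself.
def co_emission_factor(speed_kmh):
    # Per-km emissions typically fall as average speed rises toward free flow.
    return 60.0 / max(speed_kmh, 5.0) + 1.2

def link_co_emission(volume_veh_per_h, speed_kmh, link_length_km):
    """CO mass (g/h) emitted on one link: volume x factor(speed) x length."""
    return volume_veh_per_h * co_emission_factor(speed_kmh) * link_length_km

# One link observed by an induction loop: 1,200 veh/h at 30 km/h over 0.5 km.
print(round(link_co_emission(1200, 30.0, 0.5), 1))  # 1920.0 g/h
```

Summing this quantity over every loop-monitored link yields the real-time, spatially resolved inventory that the study compares against the aggregate VKT-based calculation.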
