• Title/Summary/Keyword: Large-Scale Volume Data


DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. 
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. 
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the lowest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. More importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. Link volume to ground count (LV/GC) ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. 
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. 
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of those factors around 3.0. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. 
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This estimate is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class and travel speed, provide useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. 
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
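The SELINK adjustment described in the abstract, a link's ground count divided by its total assigned volume, applied to the zones whose trips use that link, can be sketched as follows. This is a minimal illustration with made-up zone names and volumes, not the study's code; the study iterates this over 16 or 32 selected links and re-runs the gravity model.

```python
# Minimal sketch of one SELINK adjustment pass (zone names, volumes, and the
# single-link update are illustrative; the study applies the factor to both
# origin and destination zones over many selected links).
def selink_adjust(productions, zones_using_link, assigned_volume, ground_count):
    """Scale the productions of every zone whose assigned trips traverse
    the selected link by factor = ground count / assigned volume."""
    factor = ground_count / assigned_volume
    adjusted = dict(productions)
    for zone in zones_using_link:
        adjusted[zone] = productions[zone] * factor
    return adjusted

# The link carries 500 assigned trips against a ground count of 450,
# so zones using it are scaled by the factor 450/500 = 0.9.
adjusted = selink_adjust({"A": 100.0, "B": 200.0}, ["A"],
                         assigned_volume=500.0, ground_count=450.0)
```

In the full procedure the adjusted productions and attractions feed back into a recalibration of the GM, which is why the abstract reports stability after repeated adjustments.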


Dependence of Barredness of Late-Type Galaxies on Galaxy Properties and Environment

  • Lee, Gwang-Ho;Park, Chang-Bom;Lee, Myung-Gyoon;Choi, Yun-Young
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.35 no.1
    • /
    • pp.75.2-75.2
    • /
    • 2010
  • We investigate the dependence of bar occurrence in galaxies on galaxy properties and environment. The environmental conditions considered include the large-scale background density and the distance to the nearest neighbor galaxy. We use a volume-limited sample of 33,296 galaxies brighter than $M_r$=-19.5+5logh at $0.02{\leqq}z{\leqq}0.05489$, drawn from the Sloan Digital Sky Survey Data Release 7. We classify the galaxies into early and late types, and identify bars by visual inspection. We find that the fraction of barred galaxies ($f_{bar}$) is 18.2% on average for late-type galaxies, and depends on both u-r color and central velocity dispersion ($\sigma$): $f_{bar}$ is a monotonically increasing function of u-r color, and has a maximum value at intermediate velocity dispersion (${\sigma}{\simeq}170km\;s^{-1}$). This trend suggests that bars are dominantly hosted by intermediate-mass systems with no recent interaction or merger history. We also find that $f_{bar}$ does not directly depend on the large-scale background density, as its dependence disappears when other physical parameters are fixed. The bar fraction decreases as the separation to the nearest neighbor galaxy becomes smaller than 0.1 times the virial radius of the neighbor, regardless of the neighbor's morphology. These results imply that it is difficult for bars to be maintained during strong tidal interactions, and that the source of this phenomenon is gravitational and not hydrodynamical.
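The quantity the abstract tracks, the barred fraction as a function of a galaxy property, is simply a binned ratio of visually classified bars to all late-type galaxies in each bin. A small sketch with invented sample values (bin width and data are illustrative, not from the paper):

```python
from collections import defaultdict

# Illustrative sketch (not the authors' pipeline): compute the barred
# fraction f_bar in bins of u-r color from visual bar classifications.
def bar_fraction_by_color(colors, is_barred, bin_width=0.5):
    bins = defaultdict(lambda: [0, 0])          # bin index -> [n_barred, n_total]
    for c, barred in zip(colors, is_barred):
        k = int(c // bin_width)
        bins[k][0] += int(barred)
        bins[k][1] += 1
    # map each bin's left edge to its barred fraction
    return {k * bin_width: n_bar / n_tot for k, (n_bar, n_tot) in bins.items()}

f = bar_fraction_by_color([1.9, 2.1, 2.2, 2.6, 2.7],
                          [False, True, False, True, True])
# f maps color-bin edges 1.5, 2.0, 2.5 to fractions 0.0, 0.5, 1.0
```

The same binned-fraction computation applies when the independent variable is central velocity dispersion or neighbor separation instead of color.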


The Effects of Merchandise Display on Distributor's Merchandise Selection -Focused on Multi-Level Marketing Company- (상품진열이 중간의 상품선택에 미치는 영향 -다단계 판매회사의 생필품 매장들 중심으로-)

  • Ahn, Gill-Sang;Yoon, Tae-Joong
    • Journal of Distribution Research
    • /
    • v.10 no.1
    • /
    • pp.33-57
    • /
    • 2005
  • Since the 1990s, many multi-level marketing (MLM) companies have been introduced in Korea. These MLM companies operate their stores in the same way as general retailing stores. The major characteristic of these MLM companies' stores is that their main customers are distributors who sell the purchased merchandise to other customers. Many studies on merchandise display in general retailing stores have been reported, but there has been little research on merchandise display in special store types such as MLM companies' stores. This paper investigates the effects of merchandise display on distributors' merchandise selection in the channel flow of a multi-level marketing company. For this purpose, we formulated four hypotheses about display variation in quantity, height, location, and related merchandise to analyze the effect of merchandise display methods in MLM companies' stores. The experiment was conducted in three stores of an MLM company for 6 weeks, and the sales data were collected by POS. ANOVA and t-tests were used to analyze the data. The findings of this study are as follows. First, the interaction effect between merchandise display method and scale of store had no effect on store sales. Second, according to the main effect analysis, scale of store considerably affected the sales volume of each store. Third, display variation in quantity, height, and location did not affect store sales; with related merchandise display, however, sales increased in all stores. Fourth, in an additional analysis considering merchandise display only, display variation in both height and location affected sales in the large-scale store. Based on the above results, we may predict that merchandise display can affect sales in MLM companies' stores as well as in general retailing stores. Therefore, if an MLM company has a large-scale store, it should consider merchandise display methods in its stores.


A Novel Broadband Channel Estimation Technique Based on Dual-Module QGAN

  • Li Ting;Zhang Jinbiao
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.5
    • /
    • pp.1369-1389
    • /
    • 2024
  • In the era of 6G, the rapid increase in communication data volume places higher demands on both traditional channel estimation techniques and those based on deep learning; especially when processing large-scale data, their computational load and real-time performance often fail to meet practical requirements. To overcome this bottleneck, this paper introduces quantum computing techniques, exploring for the first time the application of Quantum Generative Adversarial Networks (QGAN) to broadband channel estimation challenges. Although generative adversarial technology has been applied to channel estimation, obtaining instantaneous channel information remains a significant challenge. To address the issue of instantaneous channel estimation, this paper proposes an innovative QGAN with a dual-module design in the generator. The adversarial loss function and the Mean Squared Error (MSE) loss function are applied separately for the parameter updates of these two modules, facilitating the learning of statistical channel information and the generation of instantaneous channel details. Experimental results demonstrate the efficiency and accuracy of the proposed dual-module QGAN technique in channel estimation on the PennyLane quantum computing simulation platform. This research opens a new direction for physical layer techniques in wireless communication and offers expanded possibilities for the future development of wireless communication technologies.

Yet Another BGP Archive Forensic Analysis Tool Using Hadoop and Hive (하둡과 하이브를 이용한 BGP 아카이브 데이터의 포렌직 분석 툴)

  • Lee, Yeonhee;Lee, YoungSeok
    • Journal of KIISE
    • /
    • v.42 no.4
    • /
    • pp.541-549
    • /
    • 2015
  • A large volume of continuously growing BGP data files raises two technical challenges regarding scalability and manageability. Due to the recent development of the open-source distributed computing infrastructure Hadoop, it has become feasible to handle a large amount of data in a scalable manner. In this paper, we present a new Hadoop-based BGP tool (BGPdoop) that provides scale-out performance as well as extensible and agile analysis capability. In particular, BGPdoop realizes a query-based BGP record exploration function using Hive on a partitioned BGP data structure, which enables flexible and versatile analytics of BGP archive files. From scalability experiments with a Hadoop cluster of 20 nodes, we demonstrate that BGPdoop achieves 5 times higher performance as well as user-defined analysis capability by expressing diverse BGP routing analytics in Hive queries.
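The benefit of the partitioned layout the abstract mentions is that a Hive-style query only reads the partitions its predicate selects instead of scanning every archive file. A hypothetical sketch of that partition-pruning idea (the `/bgp/archive/dt=` path convention is illustrative, not taken from the paper):

```python
# Hypothetical sketch of Hive-style partition pruning over a date-partitioned
# BGP archive: only partitions inside the query's date range are touched.
# ISO date strings compare correctly as plain strings.
def prune_partitions(partition_dates, start, end):
    """Return the partition paths whose date falls inside [start, end]."""
    return [f"/bgp/archive/dt={d}"
            for d in sorted(partition_dates)
            if start <= d <= end]

paths = prune_partitions(["2015-02-01", "2015-01-01", "2015-01-02"],
                         "2015-01-01", "2015-01-31")
# only the two January partitions are selected
```

In Hive itself this pruning happens automatically when the partition column appears in the WHERE clause; the sketch just makes the mechanism explicit.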

A Study on Distributed Parallel SWRL Inference in an In-Memory-Based Cluster Environment (인메모리 기반의 클러스터 환경에서 분산 병렬 SWRL 추론에 대한 연구)

  • Lee, Wan-Gon;Bae, Seok-Hyun;Park, Young-Tack
    • Journal of KIISE
    • /
    • v.45 no.3
    • /
    • pp.224-233
    • /
    • 2018
  • Recently, there have been many studies on SWRL reasoning engines based on user-defined rules in a distributed environment using large-scale ontologies. Unlike schema-based axiom rules, efficient inference orders cannot be defined for SWRL rules. There is also a large volume of network-shuffled data produced by unnecessary iterative processes. To solve these problems, in this study, we propose a method that uses the MapReduce algorithm and a distributed in-memory framework to deduce multiple rules simultaneously and to minimize the volume of data shuffled between distributed machines in the cluster. For the experiment, we use the WiseKB ontology composed of 200 million triples and 36 user-defined rules. We found that the proposed reasoner completes inference in 16 minutes and is 2.7 times faster than previous reasoning systems that used the LUBM benchmark dataset.
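At its core, applying one user-defined rule of the kind SWRL expresses is a join over the triple set. A toy single-process sketch (the rule, predicate names, and facts are invented for illustration; the paper's contribution is doing this join as a distributed in-memory map/shuffle/reduce):

```python
# Toy sketch of one SWRL-style rule applied over a triple set:
#   hasParent(?x, ?y) ^ hasParent(?y, ?z) -> hasGrandparent(?x, ?z)
# A distributed reasoner performs the same self-join with the join key (?y)
# as the shuffle key; here everything runs in one process.
def infer_grandparents(triples):
    parents = {}
    for s, p, o in triples:
        if p == "hasParent":
            parents.setdefault(s, []).append(o)
    inferred = set()
    for x, ys in parents.items():
        for y in ys:                      # join on the shared variable ?y
            for z in parents.get(y, []):
                inferred.add((x, "hasGrandparent", z))
    return inferred

facts = [("ann", "hasParent", "bob"), ("bob", "hasParent", "cid")]
derived = infer_grandparents(facts)
```

The shuffling cost the abstract targets comes from repeating such joins over many rules and iterations, which is why evaluating multiple rules per pass reduces it.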

A study on the enhancement and performance optimization of parallel data processing model for Big Data on Emissions of Air Pollutants Emitted from Vehicles (차량에서 배출되는 대기 오염 물질의 빅 데이터에 대한 병렬 데이터 처리 모델의 강화 및 성능 최적화에 관한 연구)

  • Kang, Seong-In;Cho, Sung-youn;Kim, Ji-Whan;Kim, Hyeon-Joung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.6
    • /
    • pp.1-6
    • /
    • 2020
  • Big data on air pollutants emitted by road traffic links real-time traffic data, such as vehicle type, speed, and load, collected with permanent traffic survey equipment (AVC, VDS, WIM, and DTG), with road geometry data (uphill, downhill, and turning sections) from GIS, and consists of traffic flow data. Unlike general data, a large amount of data is generated per unit time, in various formats. In particular, since more than about 7.4 million records per hour of large-scale real-time data are collected, stored, and processed as detailed traffic flow information, a system that can process the data efficiently is required. Therefore, in this study, an open-source-based study on optimizing parallel data processing performance is conducted for the visualization of big data on air pollution from road transport.

Criteria for calculation of CSO volume and frequency using rainfall-runoff model (우수유출 모형을 이용한 합류식하수관로시스템의 월류량, 월류빈도 산정 기준 결정 연구)

  • Lee, Gunyoung;Na, Yongun;Ryu, Jaena;Oh, Jeill
    • Journal of Korean Society of Water and Wastewater
    • /
    • v.27 no.3
    • /
    • pp.313-324
    • /
    • 2013
  • It is widely known that untreated Combined Sewer Overflows (CSOs) discharged directly into receiving waters have a negative impact. Recent concerns about the CSO problem have led to several large-scale constructions of treatment facilities, but the facilities are normally designed under empirical design criteria. In this study, several criteria for defining CSOs (e.g. determination of effective rainfall, sampling time, minimum duration of data used for rainfall-runoff simulation, and so on) were investigated. This study then suggested a standard methodology for CSO calculation to support a formalized standard for the design criteria of CSO facilities. The criteria decided for an effective rainfall event were a total rainfall depth over 0.5 mm, with at least 4 hours between two different events. The antecedent dry weather period prior to a storm event required to satisfy the effective rainfall criteria was over 3 days. The sampling time for the rainfall-runoff model simulation was suggested as 1 hour. The duration of the long-term simulation for CSO overflow volume and frequency calculation should cover at least the most recent 10 years of data. A management plan for CSOs should be established as a phased plan that reflects the site-specific conditions of different catchments, and the formalized criteria for defining CSOs should be used to examine the management plans.
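The event-definition criteria in the abstract (events separated by at least 4 dry hours, and a minimum total depth of 0.5 mm) can be sketched as a simple pass over an hourly rainfall series. A minimal sketch assuming an hourly time step; the series below is invented for illustration:

```python
# Sketch of the effective-rainfall criteria from the abstract: split an
# hourly rainfall series (mm) into events separated by >= 4 dry hours,
# then keep only events with total depth >= 0.5 mm.
def effective_events(hourly_mm, min_gap_h=4, min_depth_mm=0.5):
    events, current, dry = [], [], 0
    for v in hourly_mm:
        if v > 0:
            current.append(v)
            dry = 0
        elif current:
            dry += 1
            if dry >= min_gap_h:      # gap long enough: close the event
                events.append(current)
                current, dry = [], 0
    if current:                        # close a trailing event
        events.append(current)
    return [e for e in events if sum(e) >= min_depth_mm]

# 0.7 mm event, a 0.2 mm event that is discarded, then a 1.0 mm event
events = effective_events([0.3, 0.4, 0, 0, 0, 0, 0.2, 0, 0, 0, 0, 1.0])
```

The antecedent-dry-period criterion (over 3 days) would be an additional filter comparing each event's start against the end of the previous one.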

Numerical Model for Cross-Shore Sediment Transport (해안선 횡방향의 표사이동 예측모형)

  • 이철응;김무현
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.7 no.1
    • /
    • pp.57-69
    • /
    • 1995
  • The development of a finite difference model for cross-shore sediment transport prediction in the surf tone due to the storm surge event is presented in this paper. Using the inhomogeneous diffusion equation with moving boundaries. the present numerical model is found to be robust and efficient and does not possess a number of restrictions imposed in Kriebel and Dean's(1985) numerical model. Our numerical model is validated through comparison with the analytical solution. the data of a large-scale experiment and the field data of Hurricane Eloise. The Present model if able to predict the averaged volumetric erosion rate of a beach due to the time-varying real storm surge hydrographs and satisfies the conservation of sediment between eroded volume in the onshore region and deposited volume in the offshore region. In addition. the present model is able to reasonably predict the recession of a beach with wide berm and dune. and can describe the change of a breaking point by the offshore deposition. From the sensitivity analysis or the present numerical model with various input parameters, it is concluded that the present numerical model is able to analyze the beach change in a reliable manner including the effects of different sizes of sediments.


Reconfiguration of Physical Structure of Vegetation by Voxelization Based on 3D Point Clouds (3차원 포인트 클라우드 기반 복셀화에 의한 식생의 물리적 구조 재구현)

  • Ahn, Myeonghui;Jang, Eun-kyung;Bae, Inhyeok;Ji, Un
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.40 no.6
    • /
    • pp.571-581
    • /
    • 2020
  • Vegetation affects water level change and flow resistance in rivers and impacts waterway ecosystems as a whole. Therefore, it is important to have accurate information about the species, shape, and size of any river vegetation. However, it is not easy to collect full vegetation data on-site, so recent studies have attempted to obtain large amounts of vegetation data using terrestrial laser scanning (TLS). Also, due to the complex shape of vegetation, it is not easy to obtain accurate information about the canopy area, and there are limitations due to a complex range of variables. Therefore, the physical structure of vegetation was analyzed in this study by reconfiguring high-resolution point cloud data collected through 3-dimensional terrestrial laser scanning (3D TLS) in voxels. Each physical structure was analyzed under three different conditions: a simple vegetation formation without leaves, a complete formation with leaves, and a patch-scale vegetation formation. In the raw data, outliers and unnecessary data were filtered and removed by Statistical Outlier Removal (SOR), resulting in 17%, 26%, and 25% of the data being removed, respectively. Also, vegetation volume by voxel size was reconstructed from the post-processed point clouds and compared with the measured vegetation volume; the analysis showed that the margin of error was 8%, 25%, and 63% for the three conditions, respectively. The larger the target sample, the larger the error. The vegetation surface looked visually similar when resizing the voxels; however, the volume of the entire vegetation was susceptible to error.
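The volume-by-voxel reconstruction the abstract compares against measurements amounts to counting the distinct voxels that contain at least one point and multiplying by the voxel volume. A minimal sketch (points and voxel size are illustrative; a real pipeline would first apply SOR filtering to the cloud):

```python
# Sketch of voxelizing a 3D point cloud: snap each point to an integer
# voxel index, then estimate volume as (occupied voxels) * voxel_size**3.
def voxel_volume(points, voxel_size):
    occupied = {tuple(int(c // voxel_size) for c in p) for p in points}
    return len(occupied) * voxel_size ** 3

# Two points share the voxel at the origin; the third lands in a
# neighboring voxel, so two unit voxels are occupied.
pts = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (1.1, 0.1, 0.1)]
vol = voxel_volume(pts, voxel_size=1.0)
```

This also shows why the estimate is sensitive to voxel size, the effect the abstract reports: a coarser grid inflates the volume of thin structures, while a finer grid leaves gaps where the scan is sparse.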