• Title/Abstract/Keywords: In-network computation


Region-wide Road Transport CO2 Emission Inventory

  • 신용은;고광희
    • 대한토목학회논문집
    • /
    • Vol. 33, No. 1
    • /
    • pp.297-304
    • /
    • 2013
  • In response to climate change, reducing carbon emissions in the transport sector has become a worldwide issue. In particular, a precise carbon emission inventory is essential for establishing and efficiently executing regional emission-reduction strategies for the transport sector; yet, because of data-collection limits arising from the characteristics of road traffic, the diversity of modes, and the complexity of travel patterns, reliable region-level inventories have so far amounted to rough estimates. To overcome these problems, this study presents a methodology for building a region-wide road transport carbon emission inventory using data from the Korea Transport Database (KTDB). To this end, the $CO_2$-related attributes contained in the KTDB were identified, a suitable $CO_2$ estimation formula was derived, and an inventory model was constructed. To examine the applicability of the model, road-sector carbon emissions for the year 2008 were estimated for Busan as a case study and compared with the results of previous studies. Based on the findings, directions for future research toward a more precise inventory are also suggested.
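
The abstract does not reproduce its $CO_2$ estimation formula, but inventories of this kind are typically built bottom-up: vehicle-kilometers traveled (VKT) per network link are multiplied by speed-dependent emission factors and summed over the network. A minimal sketch under that assumption; the factor curves and all numbers are placeholders, not values from the paper:

```python
# Hypothetical bottom-up inventory: CO2 = sum over links of VKT * EF(speed).
# The factor curves below are placeholders, not values from the paper.

EMISSION_FACTOR_G_PER_KM = {
    "car":   lambda v: 1100.0 / v + 110.0 + 0.95 * v,   # illustrative curve
    "truck": lambda v: 3400.0 / v + 280.0 + 1.80 * v,   # illustrative curve
}

def link_co2_kg(length_km, volume, speed_kmh, vehicle_type):
    """CO2 for one network link: volume * length * emission factor(speed)."""
    vkt = volume * length_km                      # vehicle-kilometers traveled
    g_per_km = EMISSION_FACTOR_G_PER_KM[vehicle_type](speed_kmh)
    return vkt * g_per_km / 1000.0                # grams -> kilograms

# A regional inventory is the sum over every link and vehicle class in the
# travel-demand network (here: two toy links).
links = [
    {"length_km": 2.4, "volume": 18000, "speed_kmh": 45, "vehicle_type": "car"},
    {"length_km": 2.4, "volume": 1500,  "speed_kmh": 40, "vehicle_type": "truck"},
]
total_kg = sum(link_co2_kg(**link) for link in links)
print(f"daily road CO2: {total_kg / 1000.0:.1f} t")
```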

Tile, Slice, and Deblocking Filter Parallelization Method in HEVC

  • 손소희;백아람;최해철
    • 방송공학회논문지
    • /
    • Vol. 22, No. 4
    • /
    • pp.484-495
    • /
    • 2017
  • With recent advances in display devices and the expansion of transmission bandwidth through gigabit networks and the like, demand for ultra-high-resolution video beyond 2K, such as large panoramic video, 4K Ultra High-Definition broadcasting, and Ultra-Wide Viewing video, has grown explosively. Because such ultra-high-resolution video involves a very large amount of data, the trend is to use the High Efficiency Video Coding (HEVC) standard, which offers the highest coding efficiency. HEVC, the most recent video coding standard, provides high coding efficiency through a variety of coding tools, but its complexity is also much higher than that of previous standards. In particular, decoding ultra-high-resolution video in real time on an HEVC decoder is extremely demanding. To improve the decoding speed of an HEVC decoder for high- and ultra-high-resolution video, this paper therefore introduces a data-level parallel processing method that uses the slice and tile coding tools supported by HEVC to process slices or tiles concurrently, and that also processes the deblocking filter concurrently in units of a given block size. By distributing identical operations over multiple threads for independently decodable tiles, slices, or the deblocking filter, the decoding speed can be improved. Experiments show that the proposed method achieves up to a 2.0x decoding speed-up over the HEVC reference software for 4K video.
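
As a rough illustration of the data-level parallelism described above, the sketch below distributes independently decodable tiles over a thread pool and then runs the deblocking filter concurrently on fixed-size groups of blocks. The decode and filter functions are stand-ins, not a real HEVC decoder:

```python
# Sketch: tiles decode in parallel; deblocking is then distributed in
# fixed-size block groups, as the abstract describes.
from concurrent.futures import ThreadPoolExecutor

def decode_tile(tile):
    # placeholder: entropy decoding + intra/inter reconstruction for one tile
    return f"recon({tile})"

def deblock_blocks(block_group):
    # placeholder: in-loop deblocking filter over one group of blocks
    return f"deblocked({block_group})"

def decode_frame(tiles, num_blocks=256, group=64, threads=4):
    with ThreadPoolExecutor(max_workers=threads) as pool:
        # 1) Tiles are independently decodable, so they reconstruct in parallel.
        recon = list(pool.map(decode_tile, tiles))
        # 2) The frame is split into fixed-size block groups and the deblocking
        #    filter is distributed over the same worker threads.
        groups = [range(i, i + group) for i in range(0, num_blocks, group)]
        filtered = list(pool.map(deblock_blocks, groups))
    return recon, filtered

recon, filtered = decode_frame([f"tile{i}" for i in range(4)])
print(recon, len(filtered))
```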

Fast Motion Estimation for Variable Motion Block Size in H.264 Standard

  • 최웅일;전병우
    • 대한전자공학회논문지SP
    • /
    • Vol. 41, No. 6
    • /
    • pp.209-220
    • /
    • 2004
  • Compared with previous video standards, the H.264 video standard has two important features: high coding efficiency and network friendliness. Despite these strengths, however, H.264 is difficult to apply to real-time applications because of the high memory bandwidth and computational complexity its implementation requires. Among H.264 coding tools, variable block-size motion estimation over multiple reference pictures is a key contributor to high coding efficiency, but it demands considerable computation because the SAD (Sum of Absolute Differences) must be evaluated for every combination of block partitions to find the optimal motion vector. To reduce the computation of motion estimation, this paper therefore proposes fast algorithms for integer-pel and sub-pel motion search. For integer-pel motion estimation, existing fast search methods are not effective when applied directly to the variable block-size search structure of H.264, so we propose an adaptive motion search that adapts the conventional diamond-search-based approach to the hierarchical block structure. For sub-pel motion estimation, we propose a fast diamond-search-based algorithm centered on the predicted motion vector, exploiting the statistical properties of motion vectors.
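
Two ingredients recur in the abstract: the SAD cost and a diamond search centered on a predicted motion vector. The toy integer-pel sketch below shows both; the frame contents, block position, and start predictor are synthetic, and this is not the paper's exact algorithm:

```python
# Toy integer-pel motion estimation: SAD cost + small diamond search
# started from a predicted motion vector.
import numpy as np

def sad(cur, ref, bx, by, mvx, mvy, bsize=16):
    """Sum of Absolute Differences between the current block and the
    reference block displaced by (mvx, mvy)."""
    c = cur[by:by + bsize, bx:bx + bsize]
    r = ref[by + mvy:by + mvy + bsize, bx + mvx:bx + mvx + bsize]
    return int(np.abs(c.astype(np.int32) - r.astype(np.int32)).sum())

def diamond_search(cur, ref, bx, by, start=(0, 0), bsize=16, max_range=16):
    """Small-diamond descent on SAD, starting from the predicted MV."""
    best = start
    best_cost = sad(cur, ref, bx, by, *best, bsize)
    while True:
        improved = False
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # diamond pattern
            mv = (best[0] + dx, best[1] + dy)
            if max(abs(mv[0]), abs(mv[1])) > max_range:
                continue
            cost = sad(cur, ref, bx, by, *mv, bsize)
            if cost < best_cost:
                best, best_cost, improved = mv, cost, True
        if not improved:                                   # local minimum
            return best, best_cost

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
# cur is ref shifted down 2 px and left 3 px, so the true MV is (mvx=3, mvy=-2)
cur = np.roll(ref, shift=(2, -3), axis=(0, 1))
print(diamond_search(cur, ref, bx=24, by=24, start=(2, -2)))  # -> ((3, -2), 0)
```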

A Performance Analysis of Distributed Storage Codes for RGG/WSN

  • 정호영
    • 한국정보전자통신기술학회논문지
    • /
    • Vol. 10, No. 5
    • /
    • pp.462-468
    • /
    • 2017
  • In this paper, an IoT/WSN is modeled as a random geometric graph, and the performance of the storage codes used to store the data generated in the WSN efficiently is examined. Wireless sensor networks with n = 100 and n = 200 nodes are modeled as random geometric graphs, and the decoding performance of decentralized storage codes is analyzed through simulation. For both n = 100 and n = 200 total nodes, the decoding success rate as a function of the decoding ratio $\eta$ turns out to depend on the number of source nodes k rather than on the number of nodes n. In particular, regardless of the value of n, the decoding success probability exceeds 70% when $\eta \leq 2.0$. Examining the decoding computation as a function of the decoding ratio $\eta$, the computational load of BP decoding grows exponentially as the number of source nodes k increases. This is attributed to the fact that, as the number of source nodes grows, the LT code becomes longer and the decoding computation increases sharply.
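
The BP (peeling) decoding whose cost the paper measures can be shown in miniature: coded symbols of degree 1 release their source packet, which is then XOR-subtracted from every remaining symbol covering it, possibly creating new degree-1 symbols. The toy degree distribution and the parameters k and eta below are illustrative, not the paper's:

```python
# Miniature LT encode + BP (peeling) decode for a distributed storage sketch.
import random

def lt_encode(source, n_coded, rng):
    """Each storage node keeps the XOR of a random subset of the k packets."""
    k = len(source)
    coded = []
    for _ in range(n_coded):
        degree = rng.choice([1, 2, 3, 4])          # toy degree distribution
        idx = set(rng.sample(range(k), degree))
        val = 0
        for i in idx:
            val ^= source[i]
        coded.append([idx, val])                   # [covered indices, XOR value]
    return coded

def bp_decode(coded, k):
    """Peeling decoder: release degree-1 symbols, subtract, repeat."""
    recovered = {}
    progress = True
    while progress:
        progress = False
        for idx, val in coded:
            if len(idx) == 1:
                (i,) = idx
                if i not in recovered:
                    recovered[i] = val             # degree-1 releases a packet
                    progress = True
        for sym in coded:                          # XOR out known packets
            for i in list(sym[0]):
                if i in recovered and len(sym[0]) > 1:
                    sym[0].remove(i)
                    sym[1] ^= recovered[i]
    return recovered

rng = random.Random(1)
k = 20                                             # number of source nodes
source = [rng.randrange(256) for _ in range(k)]
eta = 2.0                                          # decoding ratio: query eta*k nodes
coded = lt_encode(source, int(eta * k), rng)
recovered = bp_decode(coded, k)
print(f"recovered {len(recovered)}/{k} source packets at eta={eta}")
```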

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • 한국지능시스템학회: Conference Proceedings
    • /
    • 한국퍼지및지능시스템학회, Fifth International Fuzzy Systems Association World Congress '93 (1993)
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy-logic-based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to a pre-defined shape. These kinds of functions are able to cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the medium points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the values of the membership functions on those points [3,10,14,15]. Such a solution provides a satisfying computational speed, a very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can be registered as well: it is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of the membership functions, strongly reduces the computation time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have good resolution), 8 fuzzy sets describing the term set, and 32 discretization levels for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by Length = nfm * (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values on any element of the universe of discourse, dm(m) is the dimension (in bits) of a membership value, and dm(fm) is the dimension of the word representing the index of the corresponding membership function. In our case, then, Length = 24. The memory dimension is therefore 128*24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of each fuzzy set: the fuzzy-sets word dimension is 8*5 bits, and the dimension of the memory would therefore have been 128*40 bits. Coherently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets. Focusing on the elements 32, 64, 96 of the universe of discourse, they will be memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derives from the rule and is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to the performance of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value on each element of the universe of discourse. From our studies in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization process of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of such parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; weight computations are done by a combinatorial network, and therefore the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values on any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
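
Plugging the quoted term-set figures back into the word-length formula makes the memory comparison concrete. A quick arithmetic check, using only the numbers stated in the abstract:

```python
# Word length per memory row: Length = nfm * (dm(m) + dm(fm)).
import math

U      = 128   # elements in the universe of discourse (memory depth, rows)
levels = 32    # discretization levels for membership values
n_sets = 8     # fuzzy sets in the term set
nfm    = 3     # max number of non-null memberships per element

dm_m  = math.ceil(math.log2(levels))   # 5 bits per membership value
dm_fm = math.ceil(math.log2(n_sets))   # 3 bits for the membership-function index

length_proposed = nfm * (dm_m + dm_fm)   # 3 * (5 + 3) = 24-bit words
length_vector   = n_sets * dm_m          # 8 * 5      = 40-bit words
print(U * length_proposed, "bits vs", U * length_vector, "bits")  # 3072 vs 5120
```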


Development of Statewide Truck Traffic Forecasting Method by Using Limited O-D Survey Data

  • 박만배
    • 대한교통학회: Conference Proceedings
    • /
    • 대한교통학회, 27th Annual Meeting (1995)
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) calibration for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs.
The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. But more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of the %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.
Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data by industry type (office employees and manufacturing employees), zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments appear overall to be reasonable. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.
Only 14.1% of total freeway truck traffic consists of I-I trips, while 80% of total collector truck traffic consists of I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners seeking to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground-count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results obtained using the ground-count-based segment adjustment factors are satisfactory for long-range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
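
The gravity model and the SELINK adjustment factor lend themselves to a compact sketch: a doubly-constrained gravity distribution balanced by iterative proportional fitting, followed by scaling the productions of the zones that load a selected link by the ratio of ground count to assigned volume. All zone numbers, friction factors, and counts below are invented for illustration; this is not the study's calibrated model:

```python
import numpy as np

def gravity(P, A, F, iters=50):
    """Doubly-constrained gravity model via iterative proportional fitting:
    T[i, j] ~ P[i] * A[j] * F[i, j], balanced to the row/column totals."""
    T = np.outer(P, A) * F
    for _ in range(iters):
        T *= (P / T.sum(axis=1))[:, None]   # match zonal productions
        T *= (A / T.sum(axis=0))[None, :]   # match zonal attractions
    return T

P = np.array([100.0, 200.0, 300.0])          # toy zonal truck productions
A = np.array([150.0, 250.0, 200.0])          # toy zonal attractions
F = 1.0 / (1.0 + np.array([[1.0, 4.0, 6.0],  # toy friction factors from an
                           [4.0, 1.0, 3.0],  # impedance (travel-time) curve
                           [6.0, 3.0, 1.0]]))
T = gravity(P, A, F)

# SELINK-style adjustment: scale the producing zone(s) of all trips assigned
# to a "selected link" by ground count / assigned volume for that link.
assigned = T[0, 1] + T[0, 2]                # pretend these trips use the link
ground_count = 90.0
P[0] *= ground_count / assigned             # link adjustment factor
T = gravity(P, A, F)                        # totals now differ slightly; the
print(np.round(T, 1))                       # study recalibrates the GM here
```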
