Purpose: We investigated the quantification of the dopaminergic transporter (DAT) and the serotonergic transporter (SERT) on
The inhibitory effects of Boswellia serrata (BW) extracts on degenerative osteoarthritis were investigated in primary-cultured rat cartilage cells and in a monosodium iodoacetate (MIA)-induced osteoarthritis rat model. To identify the protective effects of BW extract against
This paper estimates a translog cost function for Korea's banking industry and, based on the estimated efficiency indicators, derives implications for the future structure of the Korean banking sector. Although formally subject to a strict, specialized banking system, the Korean banking industry is permitted to operate the trust business to the full extent and the securities business to a limited extent. Securities underwriting and investment are allowed only to a very limited extent, only for stocks and for bonds with maturities longer than three years, and only up to 100 percent of a bank's paid-in capital; until the end of 1991, the ceiling was only 25 percent of the total balance of demand deposits. Banks are, however, prohibited from the securities brokerage business. While the in-house integration of securities businesses with the traditional businesses of deposit taking and commercial lending is thus restrictively regulated, Korean banks can enter the securities business by establishing subsidiaries in that industry. This paper therefore estimates the cost functions and the associated efficiency indicators, treating the in-house integrated trust business and the securities investment business as important banking activities. Both the production and the intermediation approaches to modelling financial intermediaries are applied separately, and the banking businesses of deposit, lending and securities investment as one group and the trust businesses as another group are analyzed both separately and jointly. The estimated efficiency indicators for the various cases are summarized in Table 1 and Table 2. First, the securities businesses exhibit economies of scale as well as economies of scope with traditional banking activities, which implies that in-house integration of the banking and securities businesses need not be a suboptimal banking structure. This further implies that transforming Korea's banking system from the current specialized system to a universal banking system will not impede improvements in the industry's efficiency. Second, the lending businesses turn out to be subject to diseconomies of scale, while showing unclear evidence of economies of scope; in sum, this implies a potential efficiency gain from the continued in-house integration of the lending activity. Third, the continued integration of the trust businesses seems to contribute to improving the efficiency of the banking businesses, since the trust businesses exhibit economies of scope. Fourth, deposit services and fee-based activities, such as the foreign exchange and credit card businesses, exhibit economies of scale but constant returns to scope, which implies the possibility of separating those businesses from other banking and trust activities. The recent trend of the credit card business being operated as an independent entity, separate from other banking activities, in Korea as well as in the global banking market seems consistent with this finding. How, then, should the possibility of separating deposit services from the remaining activities be interpreted? If one insists on a strict definition of commercial banking confined to deposit taking and commercial lending, separating the deposit service would suggest the dissolution, or disappearance, of banking itself.
Recently, however, it has been suggested that separating banks' deposit and lending activities, by letting a depository institution that specializes in deposit taking and invests deposit funds only in the safest securities, such as government securities, administer the deposit activity, would alleviate the risk of a bank run. This, in turn, would help improve the safety of the payment system (Robert E. Litan, What Should Banks Do?, Washington, D.C.: The Brookings Institution, 1987). In this context, the possibility of separating the deposit activity implies that a new type of depository institution could arise naturally, without contradicting the efficiency of the banking businesses, as the banking market grows in the future. Moreover, it is also interesting to see additional evidence supporting this view: deposit taking and the securities business are cost complements, whereas deposit taking and lending are cost substitutes (see Table 2 for the cost complementarity relationships in Korea's banking industry). Finally, Korea's banking industry is found to lack the characteristics of a natural monopoly. It may therefore not be optimal to encourage mergers and acquisitions in the banking industry solely for the purpose of improving efficiency.
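For readers unfamiliar with the framework, the translog specification and the scale and scope measures usually derived from it are sketched below. These are standard textbook definitions, shown here only for orientation; the paper's exact specification (output definitions, symmetry restrictions, input price terms) may differ.

```latex
% Translog cost function in outputs y_i and input prices w_k (standard form)
\ln C = \alpha_0 + \sum_i \alpha_i \ln y_i
      + \tfrac{1}{2}\sum_i\sum_j \gamma_{ij}\,\ln y_i \ln y_j
      + \sum_k \beta_k \ln w_k
      + \tfrac{1}{2}\sum_k\sum_l \delta_{kl}\,\ln w_k \ln w_l
      + \sum_i\sum_k \rho_{ik}\,\ln y_i \ln w_k

% Overall economies of scale: SE > 1 indicates increasing returns to scale
SE = \Bigl(\sum_i \frac{\partial \ln C}{\partial \ln y_i}\Bigr)^{-1}

% Pairwise cost complementarity (a local indicator of scope economies)
\frac{\partial^2 C}{\partial y_i\,\partial y_j} < 0

% Global economies of scope between output groups A and B
SC = \frac{C(y_A,0) + C(0,y_B) - C(y_A,y_B)}{C(y_A,y_B)} > 0
```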
Since weak zones or geological lineaments are likely to be eroded, weak zones may develop beneath rivers, and a careful evaluation of ground conditions is important when constructing structures that pass beneath a river. DC resistivity surveys, however, have seldom been applied to the investigation of water-covered areas, possibly because of difficulties in data acquisition and interpretation. Acquiring high-quality data may be the most important factor, and it is more difficult than in a land survey because of the water layer overlying the underground structure to be imaged. Through numerical modeling and the analysis of case histories, we studied the method of resistivity surveying in water-covered areas, starting from the characteristics of the measured data, through the data acquisition method, to the interpretation method. We organized the discussion according to the location of the electrodes, i.e., floating them on the water surface or installing them on the water bottom, since the methods of data acquisition and interpretation vary depending on the electrode location. Through this study, we confirmed that the DC resistivity method can provide fairly reasonable subsurface images. It was also shown that installing electrodes on the water bottom can give a subsurface image with much higher resolution than floating them on the water surface. Since data acquired in a water-covered area have much lower sensitivity to the underground structure than data acquired on land, and can be contaminated by higher noise, such as streaming potential, it is very important to select an acquisition method and electrode array that provide data with a high signal-to-noise ratio as well as high resolving power. The method of installing electrodes on the water bottom is suited to detailed surveys because of its much higher resolving power, whereas the floating-electrode method, especially the streamer DC resistivity survey, is suited to reconnaissance surveys owing to its very high speed of field work.
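As background not spelled out in the abstract, measured voltages in a DC resistivity survey are usually reduced to apparent resistivities through the standard four-electrode relation below; for electrodes placed on the bottom of a water layer the half-space geometric factor must be modified, which is one reason interpretation differs between the two electrode locations.

```latex
% Apparent resistivity for a four-electrode array (current electrodes A, B;
% potential electrodes M, N) on the surface of a half-space
\rho_a = K\,\frac{\Delta V}{I},
\qquad
K = \frac{2\pi}{\dfrac{1}{AM} - \dfrac{1}{AN} - \dfrac{1}{BM} + \dfrac{1}{BN}}
```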
Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. Statistical techniques, including multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis, have traditionally been used in bond rating. One major drawback, however, is that they rely on strict assumptions, including linearity, normality, independence among predictor variables, and a pre-existing functional form relating the criterion variables to the predictor variables. These strict assumptions have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. SVM is simple enough to be analyzed mathematically and achieves high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the solution of SVM is a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only representative samples near the class boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for binary classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not perform as well on multi-class problems as SVM does on binary problems. Second, approximation algorithms (e.g., decomposition methods and the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class problems, but they can deteriorate classification performance. Third, a major difficulty in multi-class prediction is the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another. Such data sets often produce a default classifier with a skewed boundary and thus reduced classification accuracy. SVM ensemble learning is one machine learning approach to coping with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weights on misclassified observations through iterations; observations that are incorrectly predicted by previous classifiers are chosen more often than those that are correctly predicted (a minimal sketch of this re-weighting step is given below).
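To make the re-weighting step concrete, the following sketch shows one round of a multi-class AdaBoost (SAMME-style) weight update. It is a generic illustration under our own assumptions, not the paper's exact algorithm.

```python
import numpy as np

def adaboost_weight_update(sample_weights, y_true, y_pred, n_classes):
    """One boosting round: increase the weight of misclassified samples.
    Generic multi-class AdaBoost (SAMME-style) sketch; the paper's variant may differ."""
    miss = (np.asarray(y_pred) != np.asarray(y_true)).astype(float)
    err = np.sum(sample_weights * miss) / np.sum(sample_weights)
    err = np.clip(err, 1e-10, 1.0 - 1e-10)               # guard against log(0) and division by zero
    # Classifier weight; the log(K - 1) term extends binary AdaBoost to K classes (SAMME).
    alpha = np.log((1.0 - err) / err) + np.log(n_classes - 1)
    new_weights = sample_weights * np.exp(alpha * miss)   # up-weight misclassified observations
    return new_weights / new_weights.sum(), alpha
```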
Thus Boosting attempts to produce new classifiers that are better able to predict the examples on which the current ensemble performs poorly; in this way, it can reinforce the training of misclassified observations from minority classes. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to address multiclass prediction problems. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process takes account of geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross-validation is performed three times with different random seeds to ensure that the comparison among the three classifiers is not due to chance. In each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier is trained on the other nine sets; that is, the cross-validated folds are tested independently for each algorithm. Through these steps, results are obtained for each classifier on each of the 30 experiments. In terms of arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) shows higher accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of the classifiers over the 30 folds differs significantly; the results indicate that the performance of MGM-Boost differs significantly from that of the AdaBoost and SVM classifiers at the 1% level. These results suggest that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
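The evaluation described above can be reproduced with the sketch below, which computes fold-level arithmetic accuracy and the geometric mean of per-class recalls, and then applies a paired t-test across the 30 folds. The variable names and the exact definition of geometric mean-based accuracy are our assumptions, since the paper does not spell them out here.

```python
import numpy as np
from scipy.stats import ttest_rel
from sklearn.metrics import recall_score

def fold_accuracies(y_true, y_pred):
    """Arithmetic accuracy and geometric mean of per-class recalls for one test fold.
    The geometric mean drops toward zero when any class is largely ignored."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    arithmetic = float(np.mean(y_true == y_pred))
    per_class_recall = recall_score(y_true, y_pred, average=None)
    geometric = float(np.prod(per_class_recall) ** (1.0 / len(per_class_recall)))
    return arithmetic, geometric

# Paired t-test over the 30 fold-level scores (3 repetitions x 10 folds);
# mgm_scores and ada_scores are hypothetical arrays of per-fold accuracies.
# t_stat, p_value = ttest_rel(mgm_scores, ada_scores)
```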
It is difficult to ascertain exactly when the Hongjuupseong (town wall) was first constructed, because it has undergone several rounds of repair and maintenance since it was newly rebuilt in 1451, the first year of the reign of King Munjong (the fifth king of the Joseon Dynasty). Parts of its walls were demolished during the Japanese occupation, leaving the wall as it is today. The Hongseong region is also geologically susceptible to earthquakes. Recorded earthquakes, such as those in 1978 and 1979 with magnitudes of 5.0 and 4.0, respectively, left part of the walls collapsed, and in 2010 heavy rainfall destroyed another part of the wall. The fortress walls of the Hongjuupseong comprise various rock types, facing types, building methods, and filling materials, depending on the section. Moreover, remaining wall sections were reused in repair works, so the characteristics of each period are reflected vertically in the wall. Therefore, based on the vertical distribution of the walls, the Hongjuupseong was divided into type I, type II, and type III according to building type. The walls consist mainly of coarse-grained granite, but clearly different rock types were used for the different wall types. The bottom of the wall shows a mixed variety of rocks and of natural and split stones, whereas the center is made up mostly of coarse-grained granite. For repairs, pink feldspar granite was used, but it differs from the rock used for Suguji and Joyangmun Gate. Deterioration of the wall can be categorized into bulging, protrusion of stones, missing stones at the base, separation of the framework, fissuring and fragmentation, base instability, and structural deformation. Manual and light-wave measurements were used to check the amount and direction of movement of the fortress walls. The manual measurement revealed the sections undergoing structural deformation, and comparison with the light-wave measurement showed that the two monitoring methods are correlated; the two measuring methods can therefore be used complementarily for the long-term conservation and management of the wall. In addition, the measurement system must be maintained, managed, and improved for the stability of the Hongjuupseong. The measurements at Nammunji indicated continuing changes in behavior due to collapse and rainfall. It can be presumed that changes accumulated over a long period reached a threshold under concentrated rainfall and subsequent behavioral irregularities, leading to the collapse of the walls. Based on these findings, six management grades, from 0 to 5, are suggested to manage the Hongjuupseong more effectively. Under the suggested grade system, 501.9 m (61.10%) of the wall was assessed as grade 1, 29.5 m (3.77%) as grade 2, 10.4 m (1.33%) as grade 3, and 241.2 m (30.80%) as grade 4. The grade 4 sections are concentrated to the west of Honghwamun Gate and the east of the battlement, and must be monitored regularly in preparation for a potential emergency. The six-stage management grade system is cyclical: after repair and maintenance works carried out following a comprehensive stability review, a section returns to grade 0. Thorough monitoring and regular grade evaluation are therefore necessary.
Brand switching data, frequently used in market structure analysis, are adequate for analyzing non-durable goods because they can capture competition between two specific brands. However, brand switching data sometimes cannot be used to analyze durable goods such as automobiles, because a key assumption, that consumer preferences toward brand attributes do not change over time, can be violated. A new type of data that can precisely capture competition among durable goods is therefore needed. Another problem with brand switching data collected from actual purchase behavior is that they offer little explanation of why consumers consider different sets of brands. Considering these problems, the main purpose of this study is to analyze market structure for durable goods using consideration sets. The author uses an exploratory approach and latent class clustering to identify market structure based on the heterogeneous consideration sets of consumers. The relationship between several factors and consideration set formation is then analyzed; some benefits and two demographic variables, sex and income, are selected as factors based on consumer behavior theory. The author analyzed the US automotive market with the top 11 brands using the exploratory approach and latent class clustering. A total of 2,500 respondents were randomly selected from the full sample and used for the analysis. Six models of market structure are established for testing: model 1 represents a non-structured market and model 6 a market structure composed of six sub-markets. The approach is exploratory because no hypothetical market structure is defined in advance. The results show that model 1 fits the data poorly, implying that the US automotive market is a structured market. Model 3, with three sub-markets, is significant and is identified as the optimal market structure for the US automotive market. The three sub-markets are named US brands, Asian brands, and European brands, which implies that a country-of-origin effect may exist in the US automotive market. A comparison between the modal classification implied by the derived market structure and the probabilistic classification of the research model was conducted to test how correctly model 3 classifies respondents; the model classifies 97% of respondents correctly. The result of this study differs from those of previous research, which used a confirmatory approach: car type and price were chosen as criteria for market structuring, and a car type-price structure was identified as optimal for the US automotive market. This research instead used an exploratory approach without hypothetical market structures. Which approach is superior has not yet been established. For the confirmatory approach, hypothetical market structures must be established exhaustively, because the optimal market structure is selected from among them; the exploratory approach, on the other hand, has the potential problem that the validity of the derived optimal market structure is somewhat difficult to verify. There is also a market-boundary difference between this research and the previous research: the previous research analyzed seven car brands, while this research analyzed eleven. Both studies appear to represent the entire car market, because the cumulative market share of the analyzed brands exceeds 50%, but the difference in market boundary might have contributed to the different results. Although the two studies show different results, it is clear that the country-of-origin effect among brands should be considered an important criterion when analyzing the US automotive market structure.
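The latent class clustering step can be illustrated with a simple Bernoulli mixture fitted by EM and compared across one to six classes by BIC, mirroring the six candidate models. This is a generic sketch under our own assumptions; the study's actual latent class specification and model selection criterion may differ.

```python
import numpy as np

def fit_latent_classes(X, n_classes, n_iter=200, seed=0, eps=1e-9):
    """EM for a latent class (Bernoulli mixture) model of 0/1 consideration-set data.
    X: (n_respondents, n_brands) binary matrix; 1 = brand is in the consideration set.
    Illustrative sketch only; the paper's latent class specification may differ."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)              # class sizes
    theta = rng.uniform(0.25, 0.75, size=(n_classes, d))  # P(consider brand j | class k)
    for _ in range(n_iter):
        # E-step: posterior class membership probabilities (responsibilities)
        log_post = (X @ np.log(theta + eps).T
                    + (1 - X) @ np.log(1 - theta + eps).T
                    + np.log(pi + eps))
        log_post -= log_post.max(axis=1, keepdims=True)
        resp = np.exp(log_post)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update class sizes and brand-consideration probabilities
        nk = resp.sum(axis=0)
        pi = nk / n
        theta = (resp.T @ X) / (nk[:, None] + eps)
    # Log-likelihood and BIC for model comparison; free parameters are
    # (n_classes - 1) mixing weights plus n_classes * n_brands Bernoulli probabilities.
    lik = np.exp(X @ np.log(theta + eps).T + (1 - X) @ np.log(1 - theta + eps).T) * pi
    ll = np.sum(np.log(lik.sum(axis=1) + eps))
    n_params = (n_classes - 1) + n_classes * d
    bic = -2 * ll + n_params * np.log(n)
    return pi, theta, resp, bic

# Model selection across 1..6 classes (mirroring models 1-6 in the study):
# X = ...  # hypothetical 2,500 x 11 binary consideration-set matrix
# bics = {k: fit_latent_classes(X, k)[3] for k in range(1, 7)}
```

Each row of `theta` can be read as the probability that a member of that sub-market considers each of the 11 brands.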
This research also tries to explain the heterogeneity of consideration sets among consumers using benefits and two demographic factors, sex and income. Benefits act as a key variable in the consumer decision process and also serve as an important criterion in market segmentation. Three factors, trust/safety, image/fun to drive, and economy, are identified from nine benefit-related measures. The relationship between the market structures and the independent variables is then analyzed using multinomial regression, with the three benefit factors and the two demographic factors as independent variables. The results show that all independent variables help explain why different market structures exist in the US automotive market. For example, a male consumer who perceives all benefits as important and has a lower income tends to consider domestic brands more than European brands. The results also show that benefits, sex, and income affect consideration set formation. Although it is generally assumed that a consumer with a higher income is likely to purchase a higher-priced car, it is notable that American consumers perceive the benefits of domestic brands quite positively regardless of income; male consumers in particular show higher loyalty to domestic brands. The managerial implications of this research are as follows. First, although the implication may be confined to the US automotive market, the effect of sex on automotive buying behavior should be analyzed. The automotive market has traditionally been conceived of as a market oriented toward male consumers, but the proportion of female consumers has grown over the years, and it is a natural outcome that Volvo and Hyundai Motors have recently developed new cars targeted at the women's market. Second, the model used in this research can be applied more easily than those of previous studies. The exploratory approach has many advantages but has been difficult to apply in practice, because it tends to involve complicated models and to require various types of data. The data needed for the model in this research are only a few items, such as purchased brand, consideration set, some benefits, and some demographic factors, and are easy to collect from consumers.
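The membership analysis can be sketched as a multinomial logit of the latent class label on the benefit factors and demographics. The variable names are hypothetical, and sklearn's LogisticRegression, which fits a multinomial (softmax) model for multi-class targets with the lbfgs solver, stands in for whatever estimation routine the study actually used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_membership_model(Z, y):
    """Multinomial logit of sub-market membership (e.g. 0 = US, 1 = Asian, 2 = European)
    on explanatory variables. Z columns: trust/safety, image/fun-to-drive, economy,
    sex (0/1), income. Illustrative sketch, not the study's exact specification."""
    model = LogisticRegression(solver="lbfgs", max_iter=1000)
    model.fit(Z, y)
    return model

# Hypothetical usage with the responsibilities from the latent class step:
# y = resp.argmax(axis=1)   # modal class assignment per respondent
# Z = np.column_stack([trust_safety, image_fun, economy, sex, income])
# model = fit_membership_model(Z, y)
# model.coef_               # per-class effects of each explanatory variable
```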
Global challenges such as the corona pandemic, climate change and the technology war ensure that the question of who develops and controls the technologies of the future will be prominently on the agenda. Development of, and applications in, agrifood, biotech, high-tech, medtech, quantum, AI and photonics are the basis of the future earning capacity of the Netherlands and contribute to solving societal challenges, close to home and worldwide. For the Netherlands and Europe to obtain a strategic position in the knowledge and innovation chain, and thereby secure their autonomy in relation to China and the United States, clear choices are needed. Brainport Eindhoven: Building on Philips' knowledge base, an innovative ecosystem has been created in which more than 7,000 companies in High-tech Systems & Materials (HTSM) collaborate on new technologies, future earning potential and international value chains. Nearly 20,000 private R&D employees work at five regional high-end campuses and at companies such as ASML, NXP, DAF, Prodrive Technologies, Lightyear and many others. Brainport Eindhoven holds an internationally leading position in the fields of systems engineering, semicon, micro- and nanoelectronics, AI, integrated photonics and additive manufacturing. What is developed in Brainport leads to growth of the manufacturing industry far beyond the region, thanks to chain cooperation between large companies and SMEs. South Holland: The South Holland ecosystem includes companies such as KPN, Shell, DSM and Janssen Pharmaceutical, large and innovative SMEs, and leading educational and knowledge institutions, which together invest more than €3.3 billion in R&D. Its load-bearing cores are the top campuses of Leiden and Delft, accounting for more than 40,000 innovative jobs, the port-industrial complex (logistics & energy), the manufacturing cluster around maritime and aerospace, and the horticultural cluster in the Westland. South Holland focuses thematically on key technologies such as biotech, quantum technology and AI. Twente: The green, technological top region of Twente has a long tradition of triple-helix collaboration. Technological innovations from Twente offer worldwide solutions to major societal issues. Work is in progress on key technologies such as AI, photonics, robotics and nanotechnology, and new technology is applied in sectors such as medtech, the manufacturing industry, agriculture and circular value chains, such as textiles and construction. For Twente, start-ups and SMEs are of great importance to the jobs of tomorrow; these companies connect technology from Twente with knowledge regions and OEMs at home and abroad. Wageningen in FoodValley: Wageningen Campus is a global agri-food magnet for startups and corporates, supported by the national accelerator StartLife and the student incubator StartHub. With an ambitious 2030 programme, FoodvalleyNL also connects this versatile ecosystem regionally, nationally and internationally, including through the WEF European food innovation hub. The campus offers guests and its 3,000 private R&D employees an engaging programme of science, innovation and social dialogue around the challenges in agro-production, food processing, biobased/circular production, climate and biodiversity. The Netherlands industrialized successfully as a logistics country, but it is now striving for sustainable growth by creating an innovative ecosystem through a regional industry-academia research model.
In particular, the Brainport Cluster, centered on the high-tech industry, pursues regional innovation and is opening a new horizon for existing industry-academia models. Brainport is a state-of-the-art forward base that leads the innovation ecosystem of Dutch manufacturing. The Netherlands' port history is being transformed from the logistics-oriented port symbolized by Rotterdam into a "port of digital knowledge" centered on Brainport. On this basis, it can be seen that the industry-academia cluster model, which links the central government's vision of creating an innovative ecosystem with each region's specialized industries, serves as the biggest stepping stone. Through regional development centered on the innovation cluster ecosystem and through investment in job creation and new industries, the Netherlands' innovation policy is expected to be even more faithful to its role as Europe's "digital gateway".
The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination (OD) surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network, providing the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement; consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Support System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS cannot project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" and "micro-scale" calibrations are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results, because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration is whether multiple friction factor curves or a single friction factor curve should be used for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available; for this research, however, the information available for development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs.
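For reference, the trip-distribution form of the gravity model being calibrated is typically written as below; the friction factors F are adjusted during calibration until the modeled trip length frequency distribution matches the observed OD TLF. This is the standard textbook form, shown here under the assumption that the study uses the conventional production-constrained formulation.

```latex
% Gravity model trip distribution: trips from zone i to zone j
T_{ij} = P_i \,\frac{A_j\, F(d_{ij})}{\sum_{k} A_k\, F(d_{ik})}
% P_i = trip productions of zone i, A_j = trip attractions of zone j,
% F(d_{ij}) = friction factor, a decreasing function of travel distance or time d_{ij}
```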
The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Selected Link based (SELINK) analyses are then used to adjust the productions and attractions and, where necessary, to recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (the ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that selected link. Selected link based analyses are conducted using both 16 and 32 selected links. The SELINK analysis using 32 selected links provides the lowest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved when 32 selected links are used with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable, because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by the 32 selected links is 107% of total trip productions, and, more importantly, SELINK adjustment factors can be computed for all of the zones. The travel demand model resulting from the SELINK adjustments is evaluated using screenline volume analysis, functional class and route-specific volume analysis, area-specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model: the total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31%, respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, 19% and 33% respectively, which implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route-specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH), with 37 check points, the US highways (USH), with 50 check points, and the State highways (STH), with 67 check points, is compared to the actual ground count totals. The overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value, while that for the STH shows the largest. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.
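A minimal sketch of the SELINK adjustment and the %RMSE measure used throughout the evaluation is given below. The data structures (a zone-to-zone matrix of the trips assigned through a selected link) are our assumptions for illustration, not WisDOT's actual implementation.

```python
import numpy as np

def selink_adjustment_factor(ground_count, assigned_volume):
    """Link adjustment factor for one selected link: observed over assigned volume."""
    return ground_count / assigned_volume

def adjust_productions_attractions(productions, attractions, trips_using_link, factor):
    """Scale the productions and attractions of every zone that sends or receives
    trips over the selected link. trips_using_link is a hypothetical zone-to-zone
    matrix containing only the trips assigned to that link."""
    orig_zones = trips_using_link.sum(axis=1) > 0   # zones that originate such trips
    dest_zones = trips_using_link.sum(axis=0) > 0   # zones that receive such trips
    productions = np.where(orig_zones, productions * factor, productions)
    attractions = np.where(dest_zones, attractions * factor, attractions)
    return productions, attractions

def percent_rmse(assigned, ground_counts):
    """%RMSE between assigned link volumes and ground counts, as used in the
    screenline, route and area evaluations."""
    assigned, ground_counts = np.asarray(assigned), np.asarray(ground_counts)
    rmse = np.sqrt(np.mean((assigned - ground_counts) ** 2))
    return 100.0 * rmse / np.mean(ground_counts)
```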
Area-specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area, with 26 check points, the West area, with 36 check points, the East area, with 29 check points, and the South area, with 64 check points, is compared to the actual ground count totals. The four areas show similar results, and no specific pattern in the LV/GC ratio by area is found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35%, respectively, whereas the average ground counts are 481, 1383, 1532, and 3154, respectively. As in the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions / total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. The revised production adjustment factors are analyzed by plotting them on the state map. The eastern area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables, beyond just population, are needed for the development of the heavy truck trip generation model. Additional independent variables, including zonal employment data by industry type (office employees and manufacturing employees), zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of those factors near 3.0. No obvious explanation for the frequency distribution could be found, but the revised SELINK adjustments appear reasonable overall. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecast by the GM truck forecasting model, 2.975 billion, with the WisDOT computed figure of 3.642 billion VMT; the model estimate is 18.3% lower. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT in total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.
Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class and travel speed, provide useful information for highway planners seeking to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecast using the GM truck forecasting model under four scenarios. For better forecasting, ground count-based segment adjustment factors are developed and applied, with ISH 90 & 94 and USH 41 used as example routes. The forecasting results obtained using the ground count-based segment adjustment factors are satisfactory for long-range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
The wall shear stress in the vicinity of end-to-end anastomoses under steady flow conditions was measured using a flush-mounted hot-film anemometer (FMHFA) probe. The experimental measurements were in good agreement with numerical results except in flows with low Reynolds numbers. The wall shear stress increased proximal to the anastomosis in flow from the Penrose tubing (simulating an artery) to the PTFE graft. In flow from the PTFE graft to the Penrose tubing, low wall shear stress was observed distal to the anastomosis. Abnormal distributions of wall shear stress in the vicinity of the anastomosis, resulting from the compliance mismatch between the graft and the host artery, might be an important factor in the formation of anastomotic neointimal fibrous hyperplasia (ANFH) and in graft failure. The present study suggests a correlation between regions of low wall shear stress and the development of ANFH in end-to-end anastomoses.

Air pressure decay (APD) rate and ultrafiltration rate (UFR) tests were performed on new and saline-rinsed dialyzers as well as on dialyzers reused in patients several times. C-DAK 4000 (Cordis Dow) and CF 15-11 (Baxter Travenol) reused dialyzers obtained from the dialysis clinic were used in the present study. The new dialyzers exhibited a relatively flat APD, whereas saline-rinsed and reused dialyzers showed a considerable amount of decay. C-DAK dialyzers had a larger APD (11.70