• Title/Summary/Keyword: Limited Speed

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. 
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. 
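The gravity-model calibration described above distributes zonal productions to attractions through friction factors. A minimal sketch of a singly constrained gravity model follows; the zone names, productions, attractions, and friction values are hypothetical, not taken from the study:

```python
def gravity_trips(productions, attractions, friction):
    """Singly constrained gravity model:
    T_ij = P_i * A_j * F_ij / sum_k(A_k * F_ik)."""
    trips = {}
    for i, p in productions.items():
        denom = sum(attractions[k] * friction[(i, k)] for k in attractions)
        for j, a in attractions.items():
            trips[(i, j)] = p * a * friction[(i, j)] / denom
    return trips

# Hypothetical two-zone example; friction falls with zone-to-zone separation.
P = {"z1": 100.0, "z2": 50.0}
A = {"z1": 80.0, "z2": 70.0}
F = {("z1", "z1"): 1.0, ("z1", "z2"): 0.5,
     ("z2", "z1"): 0.5, ("z2", "z2"): 1.0}
T = gravity_trips(P, A, F)
```

By construction, each row of the trip table sums to that zone's production; calibration adjusts the friction factors until the model's trip length frequencies match the observed OD TLFs.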
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. More importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC (link volume to ground count) ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. 
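The SELINK step above computes, for each selected link, the ratio of the ground count to the total assigned volume and applies it to the zones whose trips use that link. A minimal sketch, with hypothetical volumes and zone names:

```python
def selink_factor(ground_count, assigned_volume):
    # Link adjustment factor: actual (counted) volume over assigned volume.
    return ground_count / assigned_volume

def adjust_productions(productions, zone_factors):
    # Scale each zone's productions by the factor of the selected link(s)
    # its trips were assigned to; zones not covered keep a factor of 1.0.
    return {z: p * zone_factors.get(z, 1.0) for z, p in productions.items()}

f = selink_factor(900.0, 1000.0)          # assignment overshoots the count
adjusted = adjust_productions({"z1": 200.0, "z2": 150.0}, {"z1": f})
```

After adjusting productions and attractions this way, the assignment is rerun; the study repeats this loop up to three times before stability limits are reached.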
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible by using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the least value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. 
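%RMSE is used throughout the evaluation above. One common definition, the RMSE of assigned versus counted volumes expressed as a percentage of the mean ground count, is sketched below; the study's exact normalization may differ, and the volumes shown are illustrative:

```python
import math

def pct_rmse(assigned, counts):
    # Root-mean-square error of assigned vs. observed link volumes,
    # expressed as a percentage of the average ground count.
    n = len(counts)
    rmse = math.sqrt(sum((a - c) ** 2 for a, c in zip(assigned, counts)) / n)
    return 100.0 * rmse / (sum(counts) / n)

err = pct_rmse([110.0, 95.0, 100.0], [100.0, 100.0, 100.0])
```

Because the denominator is the average count, link groups with small volumes inflate %RMSE, consistent with the inverse relationship to average ground count reported above.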
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8 while the productions for relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. 
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class and travel speed, provide useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. 
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based Control Theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership function inside hardware circuits. The proposed hardware structure optimizes the memoried size by using particular form of the vectorial representation. The process of memorizing fuzzy sets, i.e. their membership function, has always been one of the more problematic issues for the hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is commonly [1,2,8,9,10,11] used to limit the membership functions either to those having triangular or trapezoidal shape, or pre-definite shape. These kinds of functions are able to cover a large spectrum of applications with a limited usage of memory, since they can be memorized by specifying very few parameters ( ight, base, critical points, etc.). This however results in a loss of computational power due to computation on the medium points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions on such points [3,10,14,15]. Such a solution provides a satisfying computational speed, a very high precision of definitions and gives the users the opportunity to choose membership functions of any shape. However, a significant memory waste can as well be registered. It is indeed possible that for each of the given fuzzy sets many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that almost in all cases common points among fuzzy sets, i.e. points with non null membership values are very few. 
More specifically, in many applications, for each element u of U, there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on such hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 represents a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. The above term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and it will be represented by the memory rows. The length of a word of memory is defined by: Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null values in every element of the universe of discourse, dm(m) is the dimension of the values of the membership function m, and dm(fm) is the dimension of the word representing the index of the highest membership function. In our case, then, Length = 24. The memory dimension is therefore 128×24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of each element. The fuzzy sets' word dimension is 8×5 bits. Therefore, the dimension of the memory would have been 128×40 bits. Coherently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets. 
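The word-length arithmetic above can be checked mechanically. A small sketch using the figures quoted in the abstract (128 elements, 32 truth levels, 8 membership functions, at most 3 non-null values per element):

```python
import math

def word_length(nfm, truth_levels, n_membership):
    # Length = nfm * (dm(m) + dm(fm)): per non-null entry, bits for the
    # membership value plus bits for the membership-function index.
    dm_m = math.ceil(math.log2(truth_levels))    # 32 levels  -> 5 bits
    dm_fm = math.ceil(math.log2(n_membership))   # 8 functions -> 3 bits
    return nfm * (dm_m + dm_fm)

length = word_length(nfm=3, truth_levels=32, n_membership=8)  # 3*(5+3) = 24
sparse_memory = 128 * length      # proposed scheme: 128 x 24 bits
vectorial_memory = 128 * 8 * 5    # memorizing every value: 128 x 40 bits
```

This reproduces the 128×24-bit versus 128×40-bit comparison made in the text.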
Focusing on elements 32, 64, and 96 of the universe of discourse, they will be memorized as follows: The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (Combinatory Net). If the index is equal to the bus value, then one of the non-null weights derives from the rule and it is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to the performance of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value in each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization process of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of such parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: Users are not restricted to membership functions with specific shapes. The number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space. Weight computations are done by a combinatorial network, and therefore the time performance of the system is equivalent to that of the vectorial method. 
The number of non-null membership values on any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].

Possibility of Establishing an International Court of Air and Space Law (국제항공우주재판소의 설립 가능성)

  • Kim, Doo-Hwan
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.24 no.2
    • /
    • pp.139-161
    • /
    • 2009
  • The idea of establishing an International Court of Air and Space Law (hereinafter referred to as the ICASL) is my own academic and practical opinion, offered as a first proposal to the global community. The establishment of the International Court of Air and Space Law could promote the speed and fairness of trials in air and space law cases. The creation of an ICASL would strengthen the international cooperation deemed essential by the global community towards the joint settlement of transnational air and space cases and claims, and would act as a catalyst for efforts and solutions concerning aircraft, satellite and space shuttle accidents and cases, allowing all manpower, information, trials and lawsuits to be centrally managed in an independent fashion to the benefit of the global community. Aircraft, satellite and spacecraft accidents have particular features that differ from those of road, railway and maritime accidents. These accidents have incurred many disputes between victims and the air and space carriers in deciding on limited or unlimited liability for compensation and in appraising the damages caused by aircraft accidents, terror attacks, satellite and space shuttle accidents, and space debris. This International Court of Air and Space Law could hear any claim growing out of both international air and space crash accidents and transnational accidents in which plaintiffs and defendants are from different nations. This alternative would eliminate the lack of uniformity of decisions under the air and space conventions, protocols and agreements. In addition, national courts would no longer have to apply their own choice-of-law analysis in choosing the applicable liability limits, or the absence of limits, for cases that do not fall under the air and space system. 
Thus, the creation of an International Court of Air and Space Law would eliminate any disparity of damage awards among similarly situated passengers and shippers in cases involving non-members of the air and space conventions, protocols and agreements. Furthermore, I would like to explain the main items of the abovementioned Draft for the Convention or Statute of the International Court of Air and Space Law, framed in comparison with the Statute of the International Court of Justice, the Statute of the International Tribunal for the Law of the Sea and the Statute of the International Criminal Court. First of all, in order to create the International Court of Air and Space Law, it is necessary for us to legislate a Draft for the Convention on the Establishment of the International Court of Air and Space Law. This Draft for the Convention must include the method of electing judges, the term, duty and competence of judges, and the chambers, jurisdiction, hearings and judgments of the ICASL. The members of the Court shall be elected by the General Assembly and Council of the ICAO and by the General Assembly and Legal Committee of the UNCOPUOS from a list of persons nominated by the national groups in the six continents (the North American, South American, African, Oceanian and Asian continents) and two international organizations, ICAO and UNCOPUOS. The members of the Court shall be elected for nine years and may be re-elected once. However, I would also like to propose the creation of an International Court of Air and Space Law by extending the jurisdiction of the International Court of Justice at The Hague in order to decide cases under the air and space conventions. My personal opinion is that if an International Court of Air and Space Law is created in the future, the difficult and complicated disputes, cases and lawsuits between wrongdoers and the victims and injured persons affected by aircraft, satellite and spacecraft accidents, hijackers, terrorists, etc. will be settled quickly and reasonably, on account of the standards of judgment established by the judges of that court. It is indeed necessary and desirable for us to draw up a new Draft for the Convention on the creation of the International Court of Air and Space Law to handle international air and space crash litigation. I shall propose a new brief Draft for the Convention on the Creation of an International Court of Air and Space Law in the near future.

Real-time Color Recognition Based on Graphic Hardware Acceleration (그래픽 하드웨어 가속을 이용한 실시간 색상 인식)

  • Kim, Ku-Jin;Yoon, Ji-Young;Choi, Yoo-Joo
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.14 no.1
    • /
    • pp.1-12
    • /
    • 2008
  • In this paper, we present a real-time algorithm for recognizing vehicle color from indoor and outdoor vehicle images based on GPU (Graphics Processing Unit) acceleration. In the preprocessing step, we construct feature vectors from sample vehicle images with different colors. Then, we combine the feature vectors for each color and store them as a reference texture to be used in the GPU. Given an input vehicle image, the CPU constructs its feature vector, and then the GPU compares it with the sample feature vectors in the reference texture. The similarities between the input feature vector and the sample feature vectors for each color are measured, and then the result is transferred to the CPU to recognize the vehicle color. The output colors are categorized into seven colors: three achromatic colors (black, silver, and white) and four chromatic colors (red, yellow, blue, and green). We construct feature vectors by using histograms which consist of hue-saturation pairs and hue-intensity pairs. A weight factor is given to the saturation values. Our algorithm shows a 94.67% successful color recognition rate, by using a large number of sample images captured in various environments, by generating feature vectors that distinguish different colors, and by utilizing an appropriate likelihood function. We also accelerate the speed of color recognition by utilizing the parallel computation functionality of the GPU. In the experiments, we constructed a reference texture from 7,168 sample images, where 1,024 images were used for each color. The average time for generating a feature vector is 0.509 ms for a 150×113 resolution image. After the feature vector is constructed, the execution time for GPU-based color recognition is 2.316 ms on average, which is 5.47 times faster than when the algorithm is executed on the CPU. 
Our experiments were limited to vehicle images only, but our algorithm can be extended to input images of general objects.
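The hue-saturation histogram with saturation weighting described above can be sketched as follows; the bin counts and the weight value are illustrative choices, not the paper's parameters:

```python
def hs_histogram(pixels, hue_bins=8, sat_bins=4, sat_weight=2.0):
    # Histogram over (hue, saturation) pairs; each pixel's vote is boosted
    # in proportion to its saturation, loosely following the paper's idea
    # of weighting saturation. Bin counts and weight are illustrative.
    hist = [0.0] * (hue_bins * sat_bins)
    for h, s in pixels:                       # h in [0, 360), s in [0, 1]
        hi = min(int(h / 360.0 * hue_bins), hue_bins - 1)
        si = min(int(s * sat_bins), sat_bins - 1)
        hist[hi * sat_bins + si] += 1.0 + sat_weight * s
    total = sum(hist) or 1.0
    return [v / total for v in hist]          # normalized feature vector

# Three hypothetical pixels: two saturated reds, one desaturated green.
vec = hs_histogram([(10.0, 0.9), (350.0, 0.8), (120.0, 0.2)])
```

In the paper, vectors like this (plus hue-intensity histograms) are packed into a GPU reference texture, and the per-color similarity comparison runs in parallel on the GPU.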

Study on the Midwater Trawl Available in the Korean Waters ( V ) - Opening Efficiency of the Otter Board with a Large Float on the Top - (한국 근해에 있어서의 중층 트로올의 연구 ( V ) - 전개판에 대형 뜸을 달았을 때의 전개성능 -)

  • Lee, Byong-Gee;Kim, Min-Suk
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.24 no.2
    • /
    • pp.78-82
    • /
    • 1988
  • Near sea trawlers of Korea sometimes catch pelagic fishes like file fish by using midwater trawl gear, even though they usually catch bottom fish. It is reasonable to use a specific otter board as well as a specific net for bottom trawling and for midwater trawling respectively. But the trawlers are so small, ranging from 100 to 120 GT and 700 to 1,000 ps, that it is very complicated to use different otter boards for bottom trawling and for midwater trawling. The otter board for bottom trawling is also used for midwater trawling without any change, even though the net is changed to the specific one. Although the otter board for midwater trawling should be lighter than that for bottom trawling, using the bottom-trawl otter board directly for midwater trawling without any change makes the net easily touch the sea bed and also limits the horizontal opening of the otter boards owing to the length of warp in the southern sea of Korea, the main fishing ground of midwater trawling, which is 100 m or so in depth. That is why the otter board for midwater trawling should be made lighter than that for bottom trawling, even if temporarily. The authors carried out an experiment to achieve this purpose by attaching a large styropol float to the top of the otter board. In this experiment, the underwater weight of the otter board was 630 kg and the buoyancy of the float was 510 kg. To determine the depth and horizontal opening of the otter board, two fish finders were used. A transmitter of a 50 kHz fish finder was set downward through the shoe plate of the otter board to determine the elevation of the otter board from the sea bed, and a transmitter of a 200 kHz fish finder was set sideways on the starboard otter board to detect the distance between the otter boards. The obtained results can be summarized as follows: 1. The actual towing speed in the experiment varied from 1.1 to 1.8 m/sec. 2. The depth of the otter board was within 41 to 25 m with the float on top and 45 to 26 m without the float for a warp length of 100 m, whereas the depth was 68-44 m with the float and 74-46 m without it for a warp length of 150 m. This means the depth with the float was 9-4% shallower than without it. 3. The horizontal opening between otter boards was within 34-41 m with the float and 30-38 m without it for a warp length of 100 m, whereas the opening was 44-50 m with the float and 37-46 m without it for a warp length of 150 m. This means the opening with the float was 10% greater than without it for a warp length of 100 m, and 15% greater for a warp length of 150 m. 4. The horizontal opening between wing tips using the otter board with the float was 1 m greater than without it for a warp length of 100 m, whereas the opening with the float was 2 m greater than without it for a warp length of 150 m. From this, it can be estimated that the effective opening area of the net mouth using the otter board with the float could be made 10% greater than without it for a warp length of 100 m, and 20% greater for a warp length of 150 m.

Evaluation of Reliability about Short TAT (Turn-Around Time) of Domestic Automation Equipment (Gamma Pro) (국산 자동화 장비(Gamma Pro)의 결과보고시간 단축에 대한 유용성 평가)

  • Oh, Yun-Jeong;Kim, Ji-Young;Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.2
    • /
    • pp.197-202
    • /
    • 2010
  • Purpose: Recently, many hospitals have tried to increase the satisfaction of outpatients by completing blood collection, examination, result reporting, and processing in a single day. Each laboratory has used automated equipment to meet requests for rapid result reporting and to increase reliability and efficiency. Current automated equipment has been limited in shortening TAT (Turn-Around Time) because of restricted batch lists and the 1 tip-5 detector configuration. The Gamma Pro, made in Korea to improve on the shortcomings of existing automated equipment, complements it with the capacity to perform a wide range of tests. In this study, we evaluated the usefulness and reliability of the short TAT by comparing the Gamma Pro with the current automated equipment. Materials and Methods: We studied the correlation between the Gamma Pro and the RIA-mat 280 using 100 specimens each of low and high concentration from patients for whom thyroid hormone tests (Total T3, TSH and Free T4) were requested at Samsung Medical Center in September 2009. To evaluate the Gamma Pro, we first measured accuracy and carry-over on the tips. Second, the optimal incubation condition was determined by the RPM (Revolutions Per Minute) and the revolution axis diameter of the incubator. To analyze specimen-processing speed, TAT was investigated from the results within a given time. Results: The correlation coefficients (R2) between the Gamma Pro and the RIA-mat 280 showed a good correlation: T3 (0.98), TSH (0.99), FT4 (0.92). The coefficient of variation (C.V.) and accuracy were 0.38% and 98.3% at tip 1 and 0.39% and 98.6% at tip 2. Carry-over was 0.80% and 1.04% at tip 1 and tip 2, respectively. These results indicate that the tips had no effect on carry-over contamination. For the incubator, we found that the optimal conditions were a revolution axis diameter of 1.0 mm and 1.5 mm at 500 RPM, or 1.0 mm and 1.5 mm at 600 RPM. The Gamma Pro increased the number of runs to a maximum of 20 runs/day, compared to 6 runs/day with the current automated equipment. These results also shortened the TAT of the whole process from 4.20 hours to 2.19 hours. Conclusion: The correlation between the Gamma Pro and the RIA-mat 280 was good, with no carry-over contamination in the tips. The domestic automated equipment (Gamma Pro) decreases the TAT of the whole test compared to the RIA-mat 280. These results demonstrate that the Gamma Pro has good efficiency, reliability and practical usefulness, and may contribute to processing large-scale specimen volumes.

Design and Implementation of Game Server using the Efficient Load Balancing Technology based on CPU Utilization (게임서버의 CPU 사용율 기반 효율적인 부하균등화 기술의 설계 및 구현)

  • Myung, Won-Shig;Han, Jun-Tak
    • Journal of Korea Game Society
    • /
    • v.4 no.4
    • /
    • pp.11-18
    • /
    • 2004
  • On-line games in the past were played by only two people exchanging data over one-to-one connections, whereas recent ones (e.g., MMORPGs: Massively Multi-player Online Role-playing Games) enable tens of thousands of people to be connected simultaneously. Korea, in particular, has established an excellent network infrastructure that can hardly be found anywhere else in the world: almost every household has high-speed Internet access. This was made possible, in part, by a high population density that accelerated the build-out of the Internet infrastructure. However, the rapid increase in on-line gaming can produce surging traffic that exceeds the limited Internet communication capacity, making connections to the games unstable or causing server failures. Expanding the servers could solve this problem, but that measure is very costly. To deal with this problem, the present study proposes a load-distribution technique that clusters locally the game servers, divided by the contents used in each on-line game, reduces the load on specific servers using a load balancer, and enhances server performance for efficient operation. In this paper, a cluster system is proposed in which each game server hosts a different contents service and loads are distributed efficiently using game-server resource information such as CPU utilization. Game servers having different contents are mutually connected and managed with a network file system to maintain the information consistency required to support resource-information updates, deletions, and additions. Simulation studies show that our method performs better than other traditional methods: in terms of response time, it shows about 12% and 10% lower latency than RR (Round Robin) and LC (Least Connection), respectively.
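The CPU-utilization policy the paper evaluates can be contrasted with Round Robin in a few lines. The sketch below is illustrative only: the server names and utilization numbers are invented, and the paper's actual system additionally shares resource state through a network file system.

```python
from itertools import count

def pick_round_robin(servers, counter):
    """RR: rotate through servers regardless of their load."""
    return servers[next(counter) % len(servers)]

def pick_least_cpu(cpu_util):
    """CPU-based policy: send the new connection to the server whose
    reported CPU utilization is currently lowest."""
    return min(cpu_util, key=cpu_util.get)

# Hypothetical per-content game servers and their reported CPU utilization:
cpu_util = {"quest": 0.82, "combat": 0.35, "chat": 0.55}
rr = count()
first_rr = pick_round_robin(list(cpu_util), rr)   # "quest", chosen regardless of load
balanced = pick_least_cpu(cpu_util)               # "combat", the least-loaded server
```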

  • PDF

Interaction Between TCP and MAC-layer to Improve TCP Flow Performance over WLANs (유무선랜 환경에서 TCP Flow의 성능향상을 위한 MAC 계층과 TCP 계층의 연동기법)

  • Kim, Jae-Hoon;Chung, Kwang-Sue
    • Journal of KIISE:Information Networking
    • /
    • v.35 no.2
    • /
    • pp.99-111
    • /
    • 2008
  • In recent years, the need for WLAN (Wireless Local Area Network) technology, which provides Internet access anywhere, has increased dramatically, particularly in SOHO (Small Office Home Office) and hot-spot settings. Unlike wired networks, however, wireless networks have some unique characteristics, including burst packet losses due to the unreliable wireless channel. Burst packet losses, which occur when the distance between the wireless station and the AP (Access Point) increases or when obstacles move temporarily between the station and the AP, are very frequent in 802.11 networks. Consequently, because of burst packet losses, the performance of 802.11 networks is not always as good as current applications require, particularly when TCP is used at the transport layer. The high packet loss rate over wireless links can trigger unnecessary execution of the TCP congestion control algorithm, degrading performance. To overcome these limitations of the WLAN environment, a MAC-layer LDA (Loss Differentiation Algorithm) has been proposed. MAC-layer LDA prevents TCP timeouts by increasing the CRD (Consecutive Retry Duration) beyond the burst packet loss duration. However, on a wireless channel with a high packet loss rate, MAC-layer LDA does not work well for two reasons: (a) if the CRD remains shorter than the burst packet loss duration because the retry limit can only be increased so far, end-to-end performance is degraded; (b) mobile-device energy and wireless-link bandwidth are wasted unnecessarily because the increased CRD slows the drainage of the network buffer. In this paper, we propose a new retransmission module based on a cross-layer approach, called the BLD (Burst Loss Detection) module, to overcome the limitations of previous link-layer retransmission schemes.
The BLD module is a retransmission mechanism for IEEE 802.11 networks that performs retransmission based on the interaction between the retransmission mechanisms of the MAC layer and TCP. Simulations using ns-2 (Network Simulator) show improved TCP throughput and energy efficiency with the proposed scheme compared with previous mechanisms.
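The cross-layer idea, as described, is that a MAC-reported burst (link) loss should not trigger TCP's congestion response. A toy sketch of that decision logic follows; the threshold is invented, since the abstract does not specify the BLD module's actual detection criteria.

```python
def classify_loss(consecutive_mac_failures, burst_threshold=4):
    """Heuristic sketch: many consecutive failed MAC retransmissions suggest a
    burst link loss (e.g. a temporary obstacle) rather than congestion."""
    return "link" if consecutive_mac_failures >= burst_threshold else "congestion"

def tcp_react(loss_kind, cwnd):
    """Cross-layer hook: preserve cwnd on link losses; halve it on congestion."""
    if loss_kind == "link":
        return cwnd                 # retransmit without shrinking the window
    return max(cwnd // 2, 1)        # standard multiplicative decrease

new_cwnd = tcp_react(classify_loss(6), cwnd=32)   # burst link loss: cwnd stays 32
```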

EFFECT OF CHLORHEXIDINE ON MICROTENSILE BOND STRENGTH OF DENTIN BONDING SYSTEMS (Chlorhexidine 처리가 상아질 접착제의 미세인장결합강도에 미치는 영향)

  • Oh, Eun-Hwa;Choi, Kyoung-Kyu;Kim, Jong-Ryul;Park, Sang-Jin
    • Restorative Dentistry and Endodontics
    • /
    • v.33 no.2
    • /
    • pp.148-161
    • /
    • 2008
  • The purpose of this study was to evaluate the effect of chlorhexidine (CHX) on the microtensile bond strength (μTBS) of dentin bonding systems. Dentin collagenolytic and gelatinolytic activities can be suppressed by protease inhibitors, indicating that inhibition of MMPs (matrix metalloproteinases) could be beneficial in preserving hybrid layers. CHX is known as an inhibitor of MMP activity in vitro. The experiment proceeded as follows. First, flat occlusal surfaces were prepared on the mid-coronal dentin of extracted third molars. The GI (Glass Ionomer) group was treated with dentin conditioner and then with 2% CHX. The SM (Scotchbond Multipurpose) and SB (Single Bond) groups were treated with CHX after acid-etching with 37% phosphoric acid. The TS (Clearfil Tri-S) group was treated with CHX and then with the adhesive. Hybrid composite Z-250 and resin-modified glass ionomer Fuji-II LC were built up on the experimental dentin surfaces. Half of the specimens were subjected to 10,000 thermocycles, while the others were tested immediately. With the resulting data, a two-way ANOVA was performed to assess the μTBS before and after thermocycling and the effect of CHX; all statistical tests were carried out at the 95% confidence level. The failure mode of the test samples was observed under a scanning electron microscope (SEM). Within the limits of this study, the results were as follows. 1. In all experimental groups treated with 2% CHX, the microtensile bond strength increased, and thermocycling decreased the microtensile bond strength (P > 0.05). 2. Compared with the thermocycled groups without CHX, those with both thermocycling and CHX showed higher microtensile bond strength, with a significant difference especially in the GI and TS groups. 3. SEM analysis of the failure-mode distribution revealed adhesive failure at the hybrid layer in most specimens, and a shift of the failure site from the bottom to the top of the hybrid layer in the CHX groups. Applying 2% CHX after acid-etching thus proved to preserve the durability of the hybrid layer and the microtensile bond strength of dentin bonding systems.
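For reference, the two-way ANOVA used in the study (factors: CHX application and thermocycling; response: μTBS) can be computed by hand for a balanced design. The sketch below uses invented bond-strength values, not the study's measurements, and reports F statistics only (p-values would require an F-distribution table or a statistics library).

```python
def two_way_anova(cells):
    """Balanced two-way ANOVA from {(i, j): [replicates]}: factor A at
    levels i, factor B at levels j. Returns the three F statistics."""
    levels_a = sorted({i for i, _ in cells})
    levels_b = sorted({j for _, j in cells})
    n = len(next(iter(cells.values())))            # replicates per cell
    a, b = len(levels_a), len(levels_b)
    all_vals = [x for v in cells.values() for x in v]
    grand = sum(all_vals) / len(all_vals)
    mean_cell = {k: sum(v) / n for k, v in cells.items()}
    mean_a = {i: sum(mean_cell[(i, j)] for j in levels_b) / b for i in levels_a}
    mean_b = {j: sum(mean_cell[(i, j)] for i in levels_a) / a for j in levels_b}
    ss_a = b * n * sum((mean_a[i] - grand) ** 2 for i in levels_a)
    ss_b = a * n * sum((mean_b[j] - grand) ** 2 for j in levels_b)
    ss_ab = n * sum((mean_cell[(i, j)] - mean_a[i] - mean_b[j] + grand) ** 2
                    for i in levels_a for j in levels_b)
    ss_err = sum((x - mean_cell[k]) ** 2 for k, v in cells.items() for x in v)
    mse = ss_err / (a * b * (n - 1))               # error mean square
    return {"F_A": (ss_a / (a - 1)) / mse,
            "F_B": (ss_b / (b - 1)) / mse,
            "F_AB": (ss_ab / ((a - 1) * (b - 1))) / mse}

# Invented MPa values: A = CHX (0: without, 1: with), B = thermocycling (0: no, 1: yes)
cells = {(0, 0): [30, 32], (0, 1): [24, 26],
         (1, 0): [36, 38], (1, 1): [34, 36]}
f = two_way_anova(cells)
```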

Validation of Satellite SMAP Sea Surface Salinity using Ieodo Ocean Research Station Data (이어도 해양과학기지 자료를 활용한 SMAP 인공위성 염분 검증)

  • Park, Jae-Jin;Park, Kyung-Ae;Kim, Hee-Young;Lee, Eunil;Byun, Do-Seong;Jeong, Kwang-Yeong
    • Journal of the Korean earth science society
    • /
    • v.41 no.5
    • /
    • pp.469-477
    • /
    • 2020
  • Salinity is not only an important variable that determines the density of the ocean but also one of the main parameters representing the global water cycle. Ocean salinity observations have mainly been conducted using ships, Argo floats, and buoys. Since the first salinity-observing satellite was launched in 2009, it has also been possible to observe sea surface salinity across the global ocean using satellite data. However, because satellite salinity data contain various errors, it is necessary to validate their accuracy before applying them as research data. In this study, the agreement between Soil Moisture Active Passive (SMAP) satellite salinity data and the in-situ salinity data provided by the Ieodo ocean research station was evaluated, and the error characteristics were analyzed, for April 2015 to August 2020. A total of 314 match-up points were produced, and the root mean square error (RMSE) and mean bias of salinity were 1.79 and 0.91 psu, respectively. Overall, the satellite salinity was overestimated compared with the in-situ salinity. Satellite salinity depends on various marine environmental factors such as season, sea surface temperature (SST), and wind speed. In summer, the difference between the satellite and in-situ salinity was less than 0.18 psu, meaning that the accuracy of satellite salinity is higher at high SST than at low SST; this accuracy is governed by the sensitivity of the sensor. Likewise, the error was reduced at wind speeds greater than 5 m s-1. This study suggests that satellite-derived salinity data should be checked for suitability for the specific research purpose before limited use in coastal areas.
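The match-up statistics quoted above (mean bias and RMSE) are straightforward to compute. A minimal sketch follows, with invented salinity values in psu rather than the paper's 314 match-up points:

```python
from math import sqrt

def matchup_stats(satellite, in_situ):
    """Mean bias (satellite minus in-situ) and RMSE over match-up pairs, in psu."""
    diffs = [s - o for s, o in zip(satellite, in_situ)]
    bias = sum(diffs) / len(diffs)
    rmse = sqrt(sum(d * d for d in diffs) / len(diffs))
    return bias, rmse

# Hypothetical match-up pairs (psu):
sat_sss = [33.1, 32.5, 34.0, 31.8]
station = [32.0, 31.9, 33.2, 31.5]
bias, rmse = matchup_stats(sat_sss, station)   # positive bias: overestimation
```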