• Title/Summary/Keyword: System-level Simulation


Evaluation of applicability of linkage modeling using PHABSIM and SWAT (PHABSIM과 SWAT을 이용한 연계모델링 적용성 평가)

  • Kim, Yongwon;Byeon, Sangdon;Park, Jinseok;Woo, Soyoung;Kim, Seongjoon
    • Journal of Korea Water Resources Association / v.54 no.10 / pp.819-833 / 2021
  • This study evaluates the applicability of linkage modeling using PHABSIM (Physical Habitat Simulation System) and SWAT (Soil and Water Assessment Tool) and estimates the ecological flow for target fish species downstream of Andong Dam (4,565.7 km2). The SWAT model was established considering two multi-purpose dams (ADD, IHD) and one streamflow gauging station (GD), and was calibrated and validated with nine years (2012~2020) of data from the stream gauge (GD) and the two dams (ADD, IHD). For streamflow and dam inflows (GD, ADD, and IHD), R2, NSE, and RMSE were 0.52~0.74, 0.48~0.71, and 0.92~2.51 mm/day, respectively. Flow duration analysis of the nine years (2012~2020) of calibrated streamflow gave average Q185 and Q275 of 36.5 m3/sec (-1.4%) and 23.8 m3/sec (0%) relative to the observed flow duration, and these were applied as the flow boundary condition of PHABSIM. The target reach was the 410 m section where GD is located, and the stream cross-section and hydraulic factors were constructed based on the Nakdong River Basic Plan Report and HEC-RAS. The dominant species of the target stream was Zacco platypus and the sub-dominant species was Puntungia herzi Herzenstein, and the HSI (Habitat Suitability Index) of the target species was collected from the literature. In the PHABSIM water level and velocity simulation, the errors at Q185 and Q275 were -0.12 m and +0.00 m in water level and +0.06 m/s and +0.09 m/s in velocity, respectively. The average WUA (Weighted Usable Area) and ecological flow were evaluated as 76,817.0 m2/1000m and 20.0 m3/sec for Zacco platypus and 46,628.6 m2/1000m and 9.0 m3/sec for Puntungia herzi Herzenstein. These results indicate that Zacco platypus is better adapted to the target stream than Puntungia herzi Herzenstein.
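
The calibration statistics quoted above (R2, NSE, RMSE) follow standard definitions and can be reproduced as in the sketch below; the daily flows are invented for illustration, not the study's data.

```python
import math

def r_squared(obs, sim):
    # Coefficient of determination: squared Pearson correlation
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    vo = sum((o - mo) ** 2 for o in obs)
    vs = sum((s - ms) ** 2 for s in sim)
    return cov ** 2 / (vo * vs)

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 minus SSE over observed variance
    mo = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mo) ** 2 for o in obs)
    return 1.0 - sse / var

def rmse(obs, sim):
    # Root-mean-square error, in the units of the data (e.g. mm/day)
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

# Illustrative daily flows (mm/day)
obs = [1.2, 2.5, 3.1, 0.8, 1.9, 2.2]
sim = [1.0, 2.9, 2.8, 1.1, 1.7, 2.5]
print(round(r_squared(obs, sim), 3), round(nse(obs, sim), 3), round(rmse(obs, sim), 3))
```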

Development of an Adaptive Capacity Indicator to Climate Change in the Agricultural Water Sector (농업용수의 기후변화 적응능력 지표 개발 - 가뭄에 대한 적응을 중심으로 -)

  • Yoo, Ga-Young;Kim, Jin-Teak;Kim, Jung-Eun
    • Journal of Environmental Policy / v.7 no.4 / pp.35-55 / 2008
  • Assessing vulnerability to climate change is the first step in setting up appropriate adaptation strategies, and adaptive capacity is an important component of vulnerability. An adaptive capacity index for the agricultural water management system was developed considering agricultural water supply and demand for rice production in Jeolla-do, Korea. The agricultural water supply was assumed to equal the amount of water stored in the major agricultural reservoirs, while data on agricultural water demand were obtained from dynamic simulation results by the Korea Agriculture Corporation (KAC). The analysis was conducted at the county (Si, Gun, Gu) level on a monthly basis from 1991 to 2003. The Adaptive Capacity for Drought Stress (ACDS) index was calculated as the percentage of data points in which the irrigated water supply was greater than the crop water demand. The ACDS index was compared with the SWSCI (Standard Water Storage Capacity Index), and the relationship showed a high degree of fit ($R^2$=0.84) using an exponential function, indicating that the ACDS index is useful for evaluating the balance between agricultural water supply and demand, especially for small agricultural reservoirs. This study provides a methodological basis for developing a climate change vulnerability index for agricultural water systems, which are projected to be exposed to drought more frequently in the future due to climate change. Further research should extend to the water demand of crops other than rice and to projecting the change in the ACDS index under future climate.
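
The ACDS index as described above — the share of data points in which irrigated supply exceeds crop demand — can be sketched as follows; the monthly values are invented for illustration, not KAC simulation output.

```python
def acds_index(supply, demand):
    # Percentage of data points where agricultural water supply
    # exceeds crop water demand, per the index definition above
    pairs = list(zip(supply, demand))
    ok = sum(1 for s, d in pairs if s > d)
    return 100.0 * ok / len(pairs)

# Monthly supply vs demand for one county (illustrative units)
supply = [120, 95, 80, 60, 150, 110]
demand = [100, 100, 90, 70, 90, 100]
print(acds_index(supply, demand))  # 50.0
```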


Numerical Study on Thermochemical Conversion of Non-Condensable Pyrolysis Gas of PP and PE Using 0D Reaction Model (0D 반응 모델을 활용한 PP와 PE의 비응축성 열분해 기체의 열화학적 전환에 대한 수치해석 연구)

  • Eunji Lee;Won Yang;Uendo Lee;Youngjae Lee
    • Clean Technology / v.30 no.1 / pp.37-46 / 2024
  • Environmental problems caused by plastic waste have been growing continuously around the world, and plastic waste has increased even faster since COVID-19. In particular, PP and PE account for more than half of all plastic production, and the amount of waste from these two materials is at a serious level. As a result, researchers are searching for alternatives to conventional plastic recycling, and pyrolysis is one such alternative. In this paper, a numerical study was conducted on the pyrolysis behavior of non-condensable gas to predict the chemical reactions of the pyrolysis gas. Based on gas products estimated from the preceding literature, the behavior of the non-condensable gas was analyzed as a function of temperature and residence time. The numerical analysis showed that as temperature and residence time increased, the production of H2 and heavy hydrocarbons increased through conversion of the non-condensable gas, while CH4 and C6H6 decreased by participating in the reactions. Analysis of the production rates showed that the decomposition of C2H4 was the dominant reaction for H2 generation, and more H2 was produced from PE, which has a higher C2H4 content. As future work, experiments are needed to confirm how the conversion to H2 and carbon can be increased under the various operating conditions derived from this study's numerical results.
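
The qualitative trend reported above — more conversion and more H2 at higher temperature and longer residence time — can be illustrated with a toy 0D batch model of first-order C2H4 cracking. The Arrhenius parameters and one-to-one stoichiometry below are placeholders for illustration only, not values from the study or any kinetics database.

```python
import math

A = 1.0e8   # pre-exponential factor, 1/s (assumed)
EA = 2.0e5  # activation energy, J/mol (assumed)
R = 8.314   # gas constant, J/(mol K)

def evolve(c2h4_0, temp_k, residence_s, dt=1e-3):
    # Forward-Euler integration of dC/dt = -k*C over the residence time,
    # crediting one mole of H2 per mole of C2H4 cracked (assumed)
    k = A * math.exp(-EA / (R * temp_k))
    c, h2, t = c2h4_0, 0.0, 0.0
    while t < residence_s:
        dc = -k * c * dt
        c += dc
        h2 -= dc
        t += dt
    return c, h2

# Higher temperature or longer residence time -> more conversion, more H2
for temp in (900.0, 1100.0):
    for tau in (0.5, 2.0):
        c, h2 = evolve(1.0, temp, tau)
        print(f"T={temp:.0f} K  tau={tau:.1f} s  C2H4={c:.3f}  H2={h2:.3f}")
```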

Simulation of Drying Grain with Solar-Heated Air (태양에너지를 이용한 곡물건조시스템의 시뮬레이션에 관한 연구)

    • Keum, Dong-Hyuk; Kim, Yong-Woon
    • Journal of Biosystems Engineering / v.4 no.2 / pp.65-83 / 1979
  • Low-temperature drying systems have been used extensively for drying cereal grains such as shelled corn and wheat. Since the 1973 energy crisis, much research has been conducted on applying solar energy as supplemental heat to natural-air drying systems. However, little research on rough rice drying has been done in this area, and very little in Korea. In designing a solar drying system, quality loss, airflow requirements, temperature rise of the drying air, fan power, and energy requirements should be thoroughly studied. The factors affecting solar drying systems are airflow rate, initial moisture content, the amount of heat added to the drying air, the fan operation method, and the weather conditions. The major objectives of this study were to analyze the effects of these performance factors and to determine design parameters such as airflow requirements, optimum bed depth, optimum temperature rise of the drying air, fan operation method, and collector size. Three-hourly observations from four years of weather data for the Chuncheon area were used to simulate rough rice drying. The results can be summarized as follows: 1. Statistical analysis indicated that the experimental and predicted values of the temperature rise of the air passing through the collector agreed well. 2. Equilibrium moisture content was affected little by airflow rate but mainly by the amount of heat added to the drying air; it ranged from 12.2 to 13.2 percent wet basis for continuous fan operation and from 10.4 to 11.7 percent wet basis for intermittent fan operation, over a 1.6 to 5.9 degree Celsius average temperature rise of the drying air. 3. 
Average moisture content when the top layer was dried to 15 percent wet basis ranged from 13.1 to 13.9 percent wet basis for continuous fan operation and from 11.9 to 13.4 percent for intermittent fan operation, over a 1.6 to 5.9 degree Celsius average temperature rise and 18 to 24 percent wet basis initial moisture content. The results indicated that grain was overdried with intermittent fan operation in every range of temperature rise; therefore, continuous fan operation is usually more effective when overdrying is considered. 4. For continuous fan operation, the average temperature rise of the drying air should be limited to 2.2 to 3.3 degrees Celsius considering the safe storage moisture level of 13.5 to 14 percent wet basis. 5. Required drying time decreased by 40 to 50 percent each time the airflow rate was doubled, but only by approximately 3.9 to 4.3 percent for each degree Celsius of average temperature rise, regardless of the fan operation method; thus the average temperature rise had little effect on required drying time. 6. Required drying time increased by approximately 18 to 30 percent for each 2 percent increase in initial moisture content, regardless of the fan operation method, in the range of 18 to 24 percent moisture. 7. Intermittent fan operation showed about a 36 to 42 percent decrease in required drying time compared with continuous fan operation. 8. Dry-matter loss decreased by 34 to 46 percent each time the airflow rate was doubled, but only by approximately 2 to 3 percent for each degree Celsius of average temperature rise, regardless of the fan operation method; thus the average temperature rise had little effect on dry-matter loss. 9. 
Dry-matter loss increased by approximately 50 to 78 percent for each 2 percent increase in initial moisture content, in the range of 18 to 24 percent moisture. 10. Intermittent fan operation showed about a 40 to 50 percent increase in dry-matter loss compared with continuous fan operation, and the increase was larger at high initial moisture and high average temperature rise. 11. Year-to-year weather conditions had little effect on required drying time and dry-matter loss. 12. Equations for estimating the time required to dry the top layer to 16 and 15 percent wet basis and for dry-matter loss were derived as functions of the performance factors by the least-squares method. 13. Minimum airflow rates based on 0.5 percent dry-matter loss were estimated; those for intermittent fan operation were approximately 1.5 to 1.8 times those for continuous fan operation, with little difference from year to year. 14. Required fan horsepower and energy for intermittent fan operation were 3.7 and 1.5 times those for continuous fan operation, respectively. 15. Continuous fan operation may therefore be more effective than intermittent fan operation considering overdrying, fan horsepower requirements, and energy use. 16. A method for estimating the required collector area of a flat-plate solar collector from the average temperature rise and airflow rate was presented.
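
The scaling trends in findings 5 and 6 above can be combined into a rough relative-drying-time estimate. The coefficients below are mid-range values of the reported bands (about 45% reduction per doubling of airflow, about 4% per degree Celsius of temperature rise, about 24% increase per 2% of initial moisture); the study's fitted least-squares equations are not reproduced here.

```python
import math

def relative_drying_time(airflow_ratio, temp_rise_c, extra_moisture_pct):
    # All factors are relative to a baseline condition (value 1.0);
    # coefficients are illustrative mid-range values from the abstract
    airflow_factor = 0.55 ** math.log2(airflow_ratio)     # ~45% less per doubling
    temp_factor = 0.96 ** temp_rise_c                     # ~4% less per deg C
    moisture_factor = 1.24 ** (extra_moisture_pct / 2.0)  # ~24% more per 2% MC
    return airflow_factor * temp_factor * moisture_factor

# Doubling airflow with a 3 deg C average temperature rise, same moisture:
print(round(relative_drying_time(2.0, 3.0, 0.0), 3))
```

Note how airflow dominates: doubling it roughly halves drying time, while each degree of solar heating shaves only a few percent, consistent with finding 5.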



Perspective of breaking stagnation of soybean yield under monsoon climate

  • Shiraiwa, Tatsuhiko
    • Proceedings of the Korean Society of Crop Science Conference / 2017.06a / pp.8-9 / 2017
  • Soybean yield has been low and unstable in Japan and other areas of East Asia despite a long history of cultivation, in contrast with the consistent yield increases in North and South America. This presentation describes prospects for breaking the stagnation of soybean yield in East Asia by considering the factors behind the regional differences. Heavy summer rainfall with occasional dry spells is characteristic of the monsoon climate, and, as frequently stated, excess water is a factor in low and unstable soybean yield. For example, there is a great deal of field-to-field variation in the yield of 'Tanbaguro' soybean, which is reputed for its high market value and is therefore cultivated intensively, and this variation results in a low average yield. According to our field survey, a major portion of the yield variation arises in the early growth period. Soybean production on drained paddy fields is also vulnerable to drought stress after flowering. An analysis at the same study site demonstrated substantial field-to-field variation in canopy transpiration activity in midsummer, but the variation in pod set was not as large as that in early growth. As frequently mentioned by contest winners of good farming practice, avoiding excess-water problems in the early growth period is of the greatest importance. A series of crop-management technologies for stable crop establishment and growth has been developed in Japan, including seed-bed preparation with ridging and/or chisel ploughing, adjustment of seed moisture content, seed treatment with mancozeb + metalaxyl, and the FOEAS water table control system. A unique success is seen in the tidal swamp area of South Sumatra with Saturated Soil Culture (SSC), which manages the acidity problem of pyrite soils. In 2016, an average yield of 2.4 t/ha was recorded over a 450 ha area under SSC (Ghulamahdi 2017, personal communication). 
SSC is a form of raised-bed culture, so the moisture condition is kept markedly stable during the growth period. On the genetic side, too, many attempts are ongoing to improve emergence and subsequent growth under excess water. There appear to be two aspects of excess-water resistance: one related to phytophthora resistance and the other to better growth under excess water. Improving the latter is particularly challenging, and genomic approaches are expected to be utilized effectively. Crop model simulation could estimate and evaluate the impact of environmental and genetic factors, but comprehensive soybean crop models are designed mainly for cultivation on upland fields and do not fully account for crop response to excess water. A soybean model for production on drained paddy fields under a monsoon climate is needed to coordinate technological development under a changing climate. We recently recognized that the yield potential of recent US cultivars is greater than that of Japanese cultivars, and this may also be responsible for the differing yield trends. Cultivar comparisons showed that higher yields are associated with greater biomass production, specifically during early seed filling, in which high and well-sustained leaf gas-exchange activity is involved. Indeed, leaf stomatal conductance is considered to have been improved in the USA over the last couple of decades through selection for high yield in several crop species. It is suspected that the priority given to product quality of soybean as a food crop in Japan, especially large seed size, has not allowed efficient improvement of productivity. We also recently found substantial variation in yield performance under an Indonesian environment among divergent cultivars from tropical and temperate regions, in part through biomass productivity; gas-exchange activity again seems to be involved. 
Unlike in North America, where transpiration adjustment is considered necessary to avoid terminal drought, under the monsoon climate with its wet summer, plants with higher gas-exchange activity than the current level might be advantageous. To explore higher or better-adjusted canopy function, methods for canopy-level evaluation of transpiration activity need to be developed. The stagnation of soybean yield could be broken by controlling the variable water environment and by breeding efforts to improve quality-oriented cultivars for stable and high yield.


A Stochastic Study for the Emergency Treatment of Carbon Monoxide Poisoning in Korea (일산화탄소중독(一酸化炭素中毒)의 진료대책(診療對策) 수립(樹立)을 위한 추계학적(推計學的) 연구(硏究))

  • Kim, Yong-Ik;Yun, Dork-Ro;Shin, Young-Soo
    • Journal of Preventive Medicine and Public Health / v.16 no.1 / pp.135-152 / 1983
  • Emergency medical service is an important part of the health care delivery system, and the optimal allocation of resources and their efficient utilization are essential, since these conditions are the prerequisites to prompt treatment, which in turn is crucial for saving lives and reducing the undesirable sequelae of the event. This study, taking the hyperbaric chamber for carbon monoxide poisoning as an example, develops a stochastic approach to the optimal allocation of such emergency medical facilities in Korea. In Korea, the hyperbaric chamber is used almost exclusively for the treatment of acute carbon monoxide poisoning, most of which occurs at home, since coal briquettes are used as domestic fuel by 69.6 per cent of the Korean population. The annual incidence of comatose and fatal carbon monoxide poisoning is estimated at 45.5 per 10,000 of the coal-briquette-using population. It poses a serious public health problem and accounts for a large portion of emergency outpatients, especially in the winter season. The required number of hyperbaric chambers can be calculated by setting the annual queueing rate, defined here as the proportion of queued patients among the total patients in a year. The rate is determined by the size of the coal-briquette-using population, which generates a certain number of poisoning patients according to the annual incidence rate, and by the number of hyperbaric chambers per hospital to which the patients are sent, assuming no referral of patients among hospitals. Queueing arises from the conflicting events of the 'arrival' of patients and the 'service' of the hyperbaric chambers. We assume that the service time of a hyperbaric chamber is fixed at sixty minutes and that the service discipline is 'first come, first served'. 
The arrival pattern of carbon monoxide poisoning is relatively unique because it usually occurs while people are in bed. The diurnal variation can hardly be formulated mathematically, so the empirical cumulative distribution of the hourly arrival probability of patients was used in a Monte Carlo simulation to calculate the probability of queueing by the number of patients per day, for the cases of one, two, or three hyperbaric chambers per hospital. The incidence of carbon monoxide poisoning also shows strong seasonal variation because of Korea's four distinct seasons, so the number of patients per day could not be assumed to follow a Poisson distribution. Testing the fit of various distributions for rare events showed that the daily distribution of carbon monoxide poisoning fits the Polya-Eggenberger distribution well. With this model, the number of poisonings per day can be forecast from the size of the coal-briquette-using population. By combining the probability of queueing given the number of patients per day with the probability distribution of the number of patients per day over a year, the number of queued patients in a year can be estimated by the number of hyperbaric chambers per hospital and the size of the coal-briquette-using population. Setting the annual queueing rate at 5 per cent, the required number of hyperbaric chambers was calculated for each province and for the whole country for treatment rates of 25, 50, 75, and 100 per cent, where the treatment rate is the proportion of patients treated by hyperbaric chamber among those who should be treated. The findings were as follows. 1. The probability of the number of patients per day follows the Polya-Eggenberger distribution (with m the average number of patients per day): 
$$P(X=r)=\frac{\prod_{k=1}^{r}\left[m+(k-1)\times 10.86\right]}{r!}\times 11.86^{-\left(\frac{m}{10.86}+r\right)}, \qquad r=1,2,\ldots,n$$ $$P(X=0)=11.86^{-m/10.86}$$ The hourly arrival pattern of patients turned out to be bimodal, with a large peak at 7:00~8:00 a.m. and a small peak at 11:00~12:00 p.m. 2. With only one or two hyperbaric chambers per hospital, the annual queueing rate exceeds 5 per cent. With three chambers, the rate reaches 5 per cent when the average number of patients per day is 0.481. 3. Accordingly, a hospital equipped with three hyperbaric chambers can serve populations of 166,485, 83,242, 55,495, and 41,620 at treatment rates of 25, 50, 75, and 100 per cent, respectively. 4. The required numbers of hyperbaric chambers are estimated at 483, 963, 1,441, and 1,923 at treatment rates of 25, 50, 75, and 100 per cent, so the shortages are 312, 791, 1,270, and 1,752, respectively. The author believes that the methodology developed in this study will also be applicable to resource allocation problems for other kinds of emergency medical facilities.
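
The fitted daily-count distribution in finding 1 can be evaluated numerically. The sketch below reads m as the average number of patients per day (consistent with the distribution's mean) and evaluates the closed form through the term-to-term ratio so that large counts do not overflow the separate product and factorial.

```python
def polya_eggenberger_pmf(r, m, beta=10.86):
    # P(X=0) = (1+beta)^(-m/beta); successive terms via the ratio
    # P(r)/P(r-1) = (m + (r-1)*beta) / (r * (1+beta)),
    # equivalent to the closed form in the abstract
    p = (1.0 + beta) ** (-(m / beta))
    for k in range(1, r + 1):
        p *= (m + (k - 1) * beta) / (k * (1.0 + beta))
    return p

# m = 0.481 is the mean at which three chambers hold the annual
# queueing rate to 5 per cent (finding 2 above)
m = 0.481
p0 = polya_eggenberger_pmf(0, m)
total = sum(polya_eggenberger_pmf(r, m) for r in range(200))
print(round(p0, 4), round(total, 4))
```

The probabilities over r = 0..199 should sum to nearly 1, a quick check that the reconstructed formula is a proper distribution.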


Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.191-207 / 2021
  • Mobile communications have evolved rapidly over the decades, mainly focusing on higher speeds to meet growing data demands from 2G to 5G. With the start of the 5G era, efforts are being made to provide services such as IoT, V2X, robotics, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our living environment and industries as a whole. To deliver those services, reduced latency and high reliability are critical for real-time applications, on top of high data rates. Accordingly, 5G targets a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10^6 devices/km2. In particular, in intelligent traffic control systems and services using vehicle-based Vehicle-to-X (V2X) communication, such as traffic control, reduced delay and high reliability for real-time services are very important in addition to high data rates. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting indoor use. It is therefore difficult to overcome these constraints with existing networks. The conventional centralized SDN also has limited capability for delay-sensitive services, because communication with many nodes overloads its processing. SDN, an architecture that separates control-plane signaling from data-plane packets, requires control of the delay-related tree structure available in an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major determinant of delay. 
Since centralized SDN structures have difficulty meeting the desired delay level, the optimal size of an SDN for information processing should be studied. SDNs therefore need to be partitioned at a certain scale into a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely tied to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a split SDN rather than the existing centralized structure, even under worst-case conditions. In such networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the data processing time of the SDN are strongly correlated with the overall delay. Of these, RTD is not a significant factor because the link speed is sufficient and its delay is below 1 ms, but the information change cycle and the SDN's data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and analyze, through simulation, the correlation with the cell layer from which the vehicle should request relevant information according to the information flow. For the simulation, since the data rate of 5G is high enough, we assume that information supporting neighboring vehicles reaches the car without errors. We further assumed 5G small cells of 50 to 250 m in radius and vehicle speeds of 30 to 200 km/h in order to examine the network architecture that minimizes delay.
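
As a quick sanity check on the simulation ranges above (cells of 50~250 m radius, speeds of 30~200 km/h), a vehicle crossing a small cell stays at most about 2R/v seconds, which bounds how often the cell-level SDN state must be refreshed. This back-of-envelope sketch is not the paper's simulator.

```python
def dwell_time_s(cell_radius_m, speed_kmh):
    # Upper bound on time inside a cell: straight pass along a diameter
    speed_ms = speed_kmh / 3.6
    return 2.0 * cell_radius_m / speed_ms

for radius in (50, 250):
    for speed in (30, 200):
        print(f"R={radius} m, v={speed} km/h -> dwell <= {dwell_time_s(radius, speed):.2f} s")
```

At the extreme (50 m cell, 200 km/h), the dwell time falls under two seconds, which is why the information change cycle and SDN processing time, rather than the sub-millisecond RTD, dominate the delay budget.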

The Application of Operations Research to Librarianship : Some Research Directions (운영연구(OR)의 도서관응용 -그 몇가지 잠재적응용분야에 대하여-)

  • Choi Sung Jin
    • Journal of the Korean Society for Library and Information Science / v.4 / pp.43-71 / 1975
  • Operations research has developed rapidly since its origins in World War II. Practitioners of O.R. have contributed to almost every aspect of government and business. More recently, a number of operations researchers have turned their attention to library and information systems, and the author believes that significant research has resulted. It is the purpose of this essay to introduce the library audience to some of these accomplishments, to present some of the author's hypotheses on aspects of library management to which he believes O.R. has great potential application, and to suggest some future research directions. Some problem areas in librarianship where O.R. may play a part are discussed and summarized below. (1) Library location. In location problems it is usually necessary to strike a balance between accessibility and cost. Many mathematical methods are available for identifying optimal locations once the balance between these two criteria has been decided. The major difficulties lie in relating cost to size and in taking future change into account when discriminating among possible solutions. (2) Planning new facilities. Standard approaches to using mathematical models for simple investment decisions are well established. If the problem is one of choosing the most economical way of achieving a certain objective, one may compare the alternatives using one of the discounted cash flow techniques. In other situations it may be necessary to use a cost-benefit approach. (3) Allocating library resources. In order to allocate resources to best advantage, the librarian needs to know how the effectiveness of the services he offers depends on the way he deploys his resources. The O.R. approach to such problems is to construct a model representing effectiveness as a mathematical function of the levels of different inputs (e.g., numbers of people in different jobs, acquisitions of different types, physical resources). (4) Long term planning.
Resource allocation problems are generally concerned with up to one and a half years ahead. The longer term offers both greater freedom of action and greater uncertainty, so it is difficult to generalize about long term planning problems. In other fields, however, O.R. has made a significant contribution to long range planning, and it is likely to have one to make in librarianship as well. (5) Public relations. It is generally accepted that actual and potential users are too ignorant both of the range of library services provided and of how to make use of them. How should services be brought to the attention of potential users? The answer seems to lie in obtaining empirical evidence through controlled experiments in which a group of libraries participates. (6) Acquisition policy. In comparing alternative policies for the acquisition of materials one needs to know two things: first, the implications of each policy for each service that depends on the stock; second, the relative importance to be ascribed to each service for each class of user. By reducing the uncertainty in the first, formal models allow the librarian to concentrate his attention on the value judgements that are necessary for the second. (7) Loan policy. The approach to choosing between loan policies is much the same as for acquisition policy. (8) Manpower planning. For large library systems one should consider constructing models that compare the skills necessary in the future with predictions of the skills that will be available, so as to allow informed staffing decisions. (9) Management information systems for libraries. A great deal of data is available in libraries as a by-product of routine recording activities. It is particularly tempting, when procedures are computerized, to make summary statistics available as a management information system. The value of information to particular decisions that may have to be taken in the future is best assessed in terms of a model of the relevant problem. (10) Management gaming.
One of the most common uses of a management game is as a means of developing staff's ability to take decisions. The value of such exercises depends upon the validity of the computerized model. If the model were simple enough to take the form of a mathematical equation, decision-makers would probably be able to learn adequately from a graph; more complex situations require simulation models. (11) Diagnostic tools. Libraries are sufficiently complex systems that it would be useful to have simple means of telling whether performance can be regarded as satisfactory and which, if it cannot, also provide pointers to what is wrong. (12) Data banks. It would appear worth considering establishing a bank for certain types of data. If certain items on questionnaires were to take a standard form, a greater pool of data would be available for various analyses. (13) Effectiveness measures. The meaning of a library performance measure is not readily interpreted in isolation. Each measure must be assessed in relation to the corresponding measures for earlier periods of time and to a standard that may be the corresponding measure in another library, the 'norm', the 'best practice', or user expectations.
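The discounted cash flow comparison mentioned under (2) can be sketched in a few lines. This is a hypothetical illustration, not from the essay: the cash flows, discount rate, and option names are invented, and net present value (NPV) is used as the representative discounted cash flow technique.

```python
# Hypothetical sketch: comparing two library facility options by net
# present value (NPV), one of the discounted cash flow techniques.
# All figures are invented for illustration; outlays are negative.

def npv(rate, cash_flows):
    """Net present value of yearly cash flows.

    cash_flows[0] occurs now (year 0), cash_flows[1] in one year, etc.
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Option A: large up-front cost, low running cost for ten years.
option_a = [-500_000] + [-20_000] * 10
# Option B: smaller up-front cost, higher running cost.
option_b = [-300_000] + [-45_000] * 10

rate = 0.08  # assumed discount rate
print(f"NPV A: {npv(rate, option_a):,.0f}")
print(f"NPV B: {npv(rate, option_b):,.0f}")
# The option with the less negative NPV achieves the objective more cheaply.
```

If both options achieve the same objective, the comparison reduces to picking the alternative whose discounted total cost is smallest, which is exactly the "most economical way" framing in the abstract.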


A Theoretical Model for the Analysis of Residual Motion Artifacts in 4D CT Scans (이론적 모델을 이용한 4DCT에서의 Motion Artifact 분석)

  • Kim, Tae-Ho;Yoon, Jai-Woong;Kang, Seong-Hee;Suh, Tae-Suk
    • Progress in Medical Physics
    • /
    • v.23 no.3
    • /
    • pp.145-153
    • /
    • 2012
  • In this study, we quantify residual motion artifacts in 4D-CT scans using a dynamic lung phantom that can simulate respiratory target motion, and we suggest a simple one-dimensional theoretical model to explain and characterize the source of motion artifacts in 4D-CT scanning. We set up regular 1D sinusoidal motion and adjusted three levels of amplitude (10, 20, 30 mm) with a fixed period (4 s). The 4D-CT scans were acquired in helical mode, with phase information provided by a belt-type respiratory monitoring system. The images were sorted into ten phase bins ranging from 0% to 90%. The reconstructed images were subsequently imported into a treatment planning system (CorePLAN, SC&J) for target delineation using a fixed contour window, and the dimensions of the three targets were measured along the direction of motion. The target dimensions of each phase image showed the same trend. The error was minimal at the 50% phase in all cases (10, 20, 30 mm): ${\Delta}S$ (the change in target dimension) for the 10, 20, and 30 mm amplitudes was 0 (0%), 0.1 (5%), and 0.1 (5%) cm respectively, compared with the static target diameter (2 cm). The error was maximal at the 30% and 80% phases, where ${\Delta}S$ for the 10, 20, and 30 mm amplitudes was 0.2 (10%), 0.7 (35%), and 0.9 (45%) cm respectively. Based on these results, we analyzed the residual motion artifact in 4D-CT scans using a simple one-dimensional theoretical model, and we also developed a simulation program. Our results explain the effect of residual motion on the target displacement at each phase and show that the residual motion artifact is governed by the target velocity at each phase. This study focuses on providing a more intuitive understanding of the residual motion artifact and on explaining the relationships among the motion parameters of the scanner, the treatment couch, and the tumor. In conclusion, our results could help in choosing the reconstruction phase and CT parameters that reduce the residual motion artifact in 4D-CT.
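The velocity dependence described in the abstract can be sketched with a toy 1D model. This is not the authors' simulation program: it assumes cosine motion x(t) = A·cos(2πt/T) and approximates the apparent target size as the static size plus |velocity at the phase| times the phase-bin acquisition window. In this idealized model the smearing vanishes where the velocity is zero (0% and 50% phase) and grows with amplitude, qualitatively matching the reported minimum error at 50% and the larger ${\Delta}S$ at higher amplitudes; the exact phases of maximum error depend on the assumed motion waveform.

```python
import math

# Toy 1D model (an assumption, not the study's program): a target moving
# as x(t) = A*cos(2*pi*t/T) smears by roughly |v(phase)| * dt during the
# acquisition window of each phase bin, enlarging its apparent dimension
# along the direction of motion.

def apparent_dimension(static_dim_cm, amplitude_cm, period_s, phase_pct, bin_width_s):
    """Apparent target size (cm) at a respiratory phase given in percent."""
    t = (phase_pct / 100.0) * period_s
    # instantaneous speed of x(t) = A*cos(2*pi*t/T)
    speed = abs(amplitude_cm * (2 * math.pi / period_s)
                * math.sin(2 * math.pi * t / period_s))
    return static_dim_cm + speed * bin_width_s

# Numbers echoing the study's setup: 2 cm static target, 4 s period,
# ten phase bins of 0.4 s each, amplitudes of 1, 2, and 3 cm.
for amp_cm in (1.0, 2.0, 3.0):
    sizes = [apparent_dimension(2.0, amp_cm, 4.0, p, 0.4) for p in range(0, 100, 10)]
    print(f"A = {amp_cm} cm:", [f"{s:.2f}" for s in sizes])
```

Sweeping the phase from 0% to 90% reproduces the key qualitative finding: the apparent dimension equals the static 2 cm where the target is momentarily at rest and is largest near the phases of peak velocity.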