• Title/Summary/Keyword: System-Level Simulator

276 search results

Analysis of Distributed Computational Loads in Large-scale AC/DC Power System using Real-Time EMT Simulation (대규모 AC/DC 전력 시스템 실시간 EMT 시뮬레이션의 부하 분산 연구)

  • In Kwon, Park;Yi, Zhong Hu;Yi, Zhang;Hyun Keun, Ku;Yong Han, Kwon
    • KEPCO Journal on Electric Power and Energy / v.8 no.2 / pp.159-179 / 2022
  • As a network grows complex, multiple entities often share responsibility for managing parts of it. A utility grid is an example: while the entire grid falls under a single utility company's responsibility, the network is often split into multiple subsections, and each subsection is assigned as the responsibility area of a corresponding sub-organization within the company. The question of how to form subsystems of adequate size with a minimum number of interconnections between them becomes especially critical in real-time simulation. Any single computation unit, whether a high-speed conventional CPU core or an FPGA computational engine, has a maximum amount of computation it can complete within a given execution time. The issue worsens in real-time simulation, in which the computation must stay in precise synchronization with the real-world clock. When the subject of the computation allows a longer execution time, i.e., a larger time step size, a larger portion of the network can be placed on one computation unit. This translates into a larger tolerable margin between the worst and the best case: even if the worst (largest) computational burden is orders of magnitude larger than the best (smallest), all the necessary computation can still be completed within the allotted time. The real-time requirement makes this margin much smaller; the difference between the worst and best cases should be as small as possible to ensure an even distribution of the computational load. Furthermore, the data exchange essential to parallel computation takes time and affects overall performance.
Careful consideration must therefore be given to distributing the computational load among multiple calculation units. Done well, such distribution raises the possibility of completing the necessary computation within the given amount of time, which may come down to the order of microseconds. This paper presents an effective way to split a given electrical network, according to multiple criteria, so that the entire computational load is divided into evenly (or nearly evenly) sized parts. Based on the proposed system splitting method, the heavy computational burden of a large-scale electrical network can be distributed across multiple calculation units, such as an RTDS real-time simulator, achieving more efficient usage of the calculation units, a reduction in the required simulation time step, or both.
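The splitting problem the abstract describes can be illustrated with a toy partitioner. The sketch below is not the paper's method; it simply shows the two competing objectives, even per-unit load and few interconnections, using a greedy heaviest-first assignment of weighted network nodes to calculation units. All names and weights are made up.

```python
# Toy illustration (not the paper's method) of the splitting objectives:
# assign weighted nodes to k calculation units so per-unit load is nearly
# even, then count the interconnections (cut edges) between units.

def split_network(node_loads, edges, k):
    """Greedy heaviest-first assignment to the least-loaded part."""
    part_load = [0.0] * k
    assignment = {}
    for node in sorted(node_loads, key=node_loads.get, reverse=True):
        target = min(range(k), key=lambda p: part_load[p])
        assignment[node] = target
        part_load[target] += node_loads[node]
    # interconnections between parts = data to exchange each time step
    cut = sum(1 for a, b in edges if assignment[a] != assignment[b])
    return assignment, part_load, cut
```

Real partitioners, like the multi-criteria method the paper proposes, must trade load balance against cut size, since every cut edge is a data exchange per simulation time step; the greedy pass above balances load only and merely reports the resulting cut.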

Development of an Algorithm for Dynamic Traffic Operations of Freeway Climbing Lane Toward Traffic Safety (교통안전성을 고려한 고속도로 오르막차로 동적운영 알고리즘 개발)

  • PARK, Hyunjin;YOUN, Seokmin;OH, Cheol
    • Journal of Korean Society of Transportation / v.34 no.1 / pp.68-80 / 2016
  • Interest in freeway truck traffic has increased largely due to greater safety concerns regarding truck-related crashes. The negative interactions between slow-moving trucks and other vehicles are a primary cause of hazardous conditions with large speed variations, which lead to crashes. To improve operational efficiency and safety, providing a climbing lane that separates slow-moving trucks from higher-performance vehicles is frequently considered when upgrading geometrics. This study developed a real-time operations strategy for freeway climbing lanes based on prevailing traffic conditions. To consider traffic safety when designing a dynamic strategy for deciding whether a climbing lane is closed or open, various factors, including the level of service (LOS) and the percentage of trucks, were investigated through microscopic simulation. A microscopic traffic simulator, VISSIM, was used to simulate freeway traffic streams and collect vehicle-maneuvering data. Additionally, an external application program interface, VISSIM's COM interface, was used to implement the proposed climbing lane operations strategies. Surrogate safety measures (SSM), including the frequency of rear-end conflicts, were used to quantitatively evaluate traffic safety through an analysis of individual vehicle trajectories obtained from VISSIM simulations under various operations scenarios. It is expected that the proposed algorithm can be the backbone for operating climbing lanes in real time for safer traffic management.
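Surrogate safety measures of the kind used in the study can be computed directly from simulated trajectories. Below is a minimal, generic time-to-collision (TTC) based conflict counter for one leader/follower pair; it illustrates the rear-end conflict idea only and is not the authors' exact VISSIM post-processing (the 1.5 s threshold and the data layout are assumptions).

```python
# Generic rear-end conflict counter via time-to-collision (TTC) for one
# leader/follower pair, sampled once per simulation step.
# Assumed layout: ((lead_pos, lead_speed), (follower_pos, follower_speed)),
# positions in m, speeds in m/s.

def rear_end_conflicts(samples, ttc_threshold=1.5):
    """Count entries into a conflict state (TTC below the threshold)."""
    conflicts = 0
    in_conflict = False
    for (lead_pos, lead_v), (fol_pos, fol_v) in samples:
        gap = lead_pos - fol_pos      # spacing (ignores vehicle length)
        closing = fol_v - lead_v      # positive when the follower closes in
        if closing > 0 and gap / closing < ttc_threshold:
            if not in_conflict:       # count each conflict event once
                conflicts += 1
            in_conflict = True
        else:
            in_conflict = False
    return conflicts
```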

A Study on Signal Control Algorithms using Internal Metering for an Oversaturated Network (내부 미터링을 이용한 과포화 네트워크 신호제어 알고리즘 연구)

  • Song, Myeong-Gyun;Lee, Yeong-In
    • Journal of Korean Society of Transportation / v.25 no.6 / pp.185-196 / 2007
  • The aim of this research is to develop a signal control algorithm using internal metering to minimize the total delay experienced by vehicles when a network is oversaturated. To calculate the total delay on the network, the authors first detect vehicles' arrivals at and departures from the network through the detection system and trace the vehicles' movement along the links with a platoon dispersion model. From this trace, the authors calculate the queue length on all the links of the network, deduce the stopped-time delay, and finally convert the stopped-time delay to approach delay with a time-space diagram. Based on this calculated delay, an algorithm that determines the level of internal metering necessary to minimize the deduced approach delay is suggested. To verify the effectiveness of the suggested algorithm, the authors also conduct simulations with the micro-simulator VISSIM. The simulation results show an average delay of 82.3 sec/veh, lower than COSMOS (89.9 sec/veh) and TOD (99.1 sec/veh). It is concluded that the new signal control algorithm suggested in this paper is more effective in controlling an oversaturated network.
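The platoon dispersion step mentioned in the abstract is commonly done with Robertson's model, in which downstream arrival flows are a geometrically smoothed version of the upstream departure profile. The sketch below is a generic illustration of that model; the alpha value and the 0.8 offset factor are conventional defaults, not the authors' calibrated values.

```python
# Robertson's platoon dispersion model: a common way to trace a platoon
# along a link. Conventional defaults used here, not the authors' values.

def disperse_platoon(upstream_flows, travel_time, alpha=0.35):
    """upstream_flows: vehicles departing the upstream stop line per step;
    returns predicted arrival flows at the downstream end of the link."""
    offset = int(round(0.8 * travel_time))    # leading edge of the platoon
    smoothing = 1.0 / (1.0 + alpha * offset)  # Robertson smoothing factor
    downstream = [0.0] * (len(upstream_flows) + offset)
    prev = 0.0
    for t, q in enumerate(upstream_flows):
        # each arrival mixes the shifted departure with the previous
        # predicted arrival, spreading the platoon out over time
        prev = smoothing * q + (1.0 - smoothing) * prev
        downstream[t + offset] = prev
    return downstream
```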

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1993.06a / pp.975-976 / 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, pursued through two distinct approaches. The first approach uses application-specific integrated circuit (ASIC) technology: the fuzzy inference method is implemented directly in silicon. The second approach, still in its preliminary stage, uses a more conventional microprocessor architecture; here, we apply a quantitative technique used by designers of reduced instruction set computers (RISC) to modify a microprocessor architecture. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested, both in a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both digital fuzzy inference chips had multiple datapaths for rule evaluation and executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 fuzzy logical inferences per second (FLIPS); it stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock; the chip has a 3-stage pipeline and initiates the computation of a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by a table lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN Do E and Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the following simpler format using the same datapath: IF A and B THEN Do E. With this format the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The Fuzzy Logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast but limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach, the approach developed by RISC designers. In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a rule-based fuzzy inference program from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union.
We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5x increase in inference speed if the R3000 had min and max instructions, and these instructions are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single dedicated program, so tailoring a microprocessor into an embedded processor for fuzzy control is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip; the software run on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference. The last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even with a specialized fuzzy microprocessor. As for design time and cost, the two approaches represent two extremes, the ASIC approach being extremely expensive. It is therefore an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

Table I. Inference time by 51 rules:

                     MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
  6000 inferences    125 s                  49 s                        0.0038 s
  1 inference        20.8 ms                8.2 ms                      6.4 µs
  FLIPS              48                     122                         156,250
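The max-min inference and centroid defuzzification that the chips hard-wire can be stated compactly in software. This is a generic Mamdani-style sketch for reference, not the chip's microarchitecture; the membership functions and the discretized universe are illustrative.

```python
# Generic Mamdani-style max-min inference with centroid defuzzification:
# a software reference for the hardware mechanism described above.
# Membership functions and values are illustrative, not the chip's design.

def tri(a, b, c):
    """Triangular membership function on [a, c] peaking at b (a < b < c)."""
    return lambda x: max(0.0, min(1.0,
        (x - a) / (b - a) if x < b else (c - x) / (c - b)))

def infer(rules, inputs, universe):
    """rules: list of (antecedent_mfs, consequent_mf).
    Firing strength = min over antecedents (fuzzy AND, as in the chips);
    output set = max over rules of the clipped consequents (fuzzy OR);
    crisp output = centroid of the aggregated output set."""
    agg = [0.0] * len(universe)
    for antecedents, consequent in rules:
        strength = min(mf(x) for mf, x in zip(antecedents, inputs))
        for i, u in enumerate(universe):
            agg[i] = max(agg[i], min(strength, consequent(u)))
    den = sum(agg)
    return sum(u * m for u, m in zip(universe, agg)) / den if den else 0.0
```

The min over antecedents and the max across rules are exactly the operations whose dedicated instructions yielded the 2.5x software speed-up reported in the talk.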


The Structure of Korean Radiation Oncology in 1997 (국내 병원 별 방사선치료의 진료 구조 현황(1997년 현황을 중심으로 한 선진국과의 비교 연구))

  • Kim Mi Sook;Yoo Seoung Yul;Cho Chul Koo;Yoo Hyung Jun;Yang Kwang Mo;Je Young Hoon;Lee Dong Hun;Lee Dong Han;Kim Do Jun
    • Radiation Oncology Journal / v.17 no.2 / pp.172-178 / 1999
  • Purpose: To assess the basic structural characteristics of radiation oncology facilities in Korea during 1997 and to compare personnel, equipment, and patient loads between Korea and developed countries. Methods and Materials: Mail surveys were conducted in 1998, and data on treatment machines, personnel, and new patients treated were collected. Responses were obtained from 100 percent of facilities. The consensus data for the whole country were summarized using the Microsoft Excel program. Results: In Korea during 1997, 42 facilities delivered megavoltage radiation therapy with 71 treatment machines, 100 radiation oncologists, 26 medical physicists, 205 technologists, and 19,773 new patients. Eighty-nine percent of facilities in Korea had linear accelerators of at least 6 MeV maximum photon energy. Ninety-five percent of facilities had simulators, while five percent had none. Ninety-one percent of facilities had computer planning systems, and eighty-three percent reported having a written quality assurance program. Thirty-six percent of facilities had only one radiation oncologist, and thirty-eight percent had no medical physicist. The medians of the distributions of annual patient load per facility, patient load per machine, patient load per radiation oncologist, patient load per therapist, and therapists per machine in Korea were 348 patients per year, 263 patients per machine, 171 patients per radiation oncologist, 81 patients per therapist, and 3 therapists per machine, respectively. Conclusions: The overall scale of radiation oncology in Korea was smaller than in Japan and the USA relative to population. In terms of hardware such as linear accelerators, simulators, and computer planning systems, there was no big difference between Korea and the USA. Patient loads per radiation oncologist and per therapist showed no significant differences compared with the USA. However, since many hospitals did not employ medical physicists, it would be desirable to consider the part-time system used in the USA.


A Performance Comparison of the Mobile Agent Model with the Client-Server Model under Security Conditions (보안 서비스를 고려한 이동 에이전트 모델과 클라이언트-서버 모델의 성능 비교)

  • Han, Seung-Wan;Jeong, Ki-Moon;Park, Seung-Bae;Lim, Hyeong-Seok
    • Journal of KIISE:Information Networking / v.29 no.3 / pp.286-298 / 2002
  • The Remote Procedure Call (RPC) has traditionally been used for inter-process communication (IPC) among processes in distributed computing environments. As distributed applications have grown more and more complicated, the Mobile Agent paradigm for IPC has emerged. Because there are several paradigms for IPC, research evaluating and comparing the performance of each paradigm has appeared recently. But the performance models used in previous research did not reflect real distributed computing environments correctly, because they did not consider the elements required for providing security services. Since a real distributed environment is open, it is very vulnerable to a variety of attacks. In order to execute applications securely in a distributed computing environment, security services that protect applications and information against these attacks must be considered. In this paper, we evaluate and compare the performance of the Remote Procedure Call and the Mobile Agent IPC paradigms. We examine the security services needed to execute applications securely and propose new performance models that take those services into account. We design performance models, describing an information retrieval system over N database services, using Petri nets. We compare the performance of the two paradigms by assigning numerical values to the parameters and measuring the execution time of each. The comparison of the two performance models with security services for secure communication shows that the execution time of the Remote Procedure Call model increases sharply because of the many communications between hosts under a strong cryptography mechanism, while the execution time of the Mobile Agent model increases only gradually because the Mobile Agent paradigm reduces the quantity of communication between hosts.
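The qualitative result, RPC cost growing with every encrypted round trip while an agent pays its migration cost once, can be captured in a toy closed-form cost model. This is our illustration, not the paper's Petri-net model; all parameter names and values are made up.

```python
# Toy closed-form cost model (our illustration, not the paper's Petri-net
# model) for an information retrieval task over n database queries.
# Made-up time units: net = one-way network latency, crypto = encrypt or
# decrypt cost per message, server = per-query processing time.

def rpc_time(n_queries, net, crypto, server):
    # each query is an encrypted request plus an encrypted reply over the net
    return n_queries * (2 * (net + crypto) + server)

def agent_time(n_queries, net, crypto, server, agent_overhead):
    # the agent migrates out and back once (encrypted, plus serialization
    # overhead) and then runs all queries locally at the server
    return 2 * (net + crypto + agent_overhead) + n_queries * server
```

With these toy numbers the model reproduces the paper's qualitative finding: RPC cost climbs linearly with the number of encrypted round trips, while the agent amortizes its fixed migration cost and wins once the number of queries is large enough.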