• Title/Summary/Keyword: core network

Table-Based Fault Tolerant Routing Method for Voltage-Frequency-Island NoC

  • Yoon, Sung Jae; Li, Chang-Lin; Kim, Yong Seok; Han, Tae Hee
    • Journal of the Institute of Electronics and Information Engineers, v.53 no.8, pp.66-75, 2016
  • Due to aggressive scaling of device sizes and reduced noise margins, physical defects caused by aging and process variation are continuously increasing. In addition, given the scaling limitations of metal wires and growing communication volume, fault-tolerant methods for many-core network-on-chip (NoC) designs have been actively researched. However, little work has investigated reliability in NoCs operating under a voltage-frequency-island (VFI) regime. In this paper, we propose a table-based routing technique that maintains communication even when link failures occur in a VFI NoC. The output port is selected between the best routing path and a detour path, improving reliability at minimal hardware cost. Experimental results show that the proposed method achieves full fault coverage when up to 1% of links are faulty. Compared to $d^2$-LBDR, a routing method that also searches for detour paths in real time, the proposed method on average saves 0.8% in execution time and 15.9% in energy consumption.
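
The abstract does not give the paper's exact table format; the following is a minimal sketch, assuming one precomputed detour port per destination, of how a table-based router might fall back from its best output port to a detour when a link is marked faulty (all names are hypothetical):

```python
# Hypothetical sketch of table-based fault-tolerant routing: each router
# keeps, per destination, a preferred output port and a precomputed
# detour port; the detour is used when the preferred link is faulty.
from dataclasses import dataclass

@dataclass
class TableEntry:
    best_port: int    # minimal-path output port
    detour_port: int  # precomputed alternative around a known fault

class Router:
    def __init__(self, table: dict[int, TableEntry]):
        self.table = table                    # destination id -> entry
        self.faulty_ports: set[int] = set()   # ports whose links failed

    def mark_link_fault(self, port: int) -> None:
        self.faulty_ports.add(port)

    def route(self, dest: int) -> int:
        """Return the output port for a packet headed to `dest`."""
        entry = self.table[dest]
        if entry.best_port not in self.faulty_ports:
            return entry.best_port
        return entry.detour_port  # fall back to the detour path

# Usage: destination 7 normally leaves on port 1, detours via port 3.
r = Router({7: TableEntry(best_port=1, detour_port=3)})
assert r.route(7) == 1
r.mark_link_fault(1)
assert r.route(7) == 3
```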

Development of a Pipe Network Fluid-Flow Modelling Technique for Porous Media based on Statistical Percolation Theory

  • Shin, Hyu-Soung
    • The Journal of Engineering Geology, v.23 no.4, pp.447-455, 2013
  • A micro-mechanical pipe network model with the shape of a cube was developed to simulate the behavior of fluid flow through a porous medium. The fluid-flow mechanism through the cubic pipe network channels was defined mainly by introducing well-known percolation theory (Stauffer and Aharony, 1994). Non-uniform flow generally appeared because each pipe diameter was allocated individually in a stochastic manner based on a given pore-size distribution curve and porosity. Fluid was supplied to one surface of the pipe network under a given driving pressure head and allowed to percolate through the network. A percolation condition, defined by the capillary pressure associated with each pipe diameter, was first applied to all of the network pipes; that is, depending on its diameter, the fluid may or may not penetrate a specific pipe. Once the pore pressures had reached equilibrium and steady-state flow had been attained throughout the network, Darcy's law was used to compute the resultant permeability. This study investigated the sensitivity of the permeability calculations to network size in order to determine the optimum network size to be used for all of the network modelling in the study. The mean pore size and pore-size distribution curve obtained in the field were used to define the pipe sizes so that they are representative of actual oil sites. The calculated and measured permeabilities are in good agreement.
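
The per-pipe percolation check and the Darcy back-calculation in the abstract can be illustrated with a short sketch. This assumes a Young-Laplace capillary entry pressure of $4\sigma\cos\theta/d$ for a cylindrical pipe of diameter $d$ and a direct inversion of Darcy's law; the paper's actual formulation and parameter values are not given in the abstract, so all numbers below are placeholders:

```python
# Illustrative percolation condition and Darcy permeability inversion
# (placeholder parameters; not the paper's exact formulation).
import math

def capillary_entry_pressure(d, sigma=0.03, theta=0.0):
    """Young-Laplace entry pressure [Pa] for a cylindrical pipe of
    diameter d [m]; sigma = interfacial tension [N/m], theta = contact
    angle [rad]."""
    return 4.0 * sigma * math.cos(theta) / d

def pipe_percolates(d, driving_pressure):
    """Fluid enters a pipe only if the driving pressure exceeds its
    capillary entry pressure (smaller pipes need higher pressure)."""
    return driving_pressure >= capillary_entry_pressure(d)

def darcy_permeability(Q, mu, L, A, dP):
    """Invert Darcy's law Q = k*A*dP/(mu*L) for permeability k [m^2]."""
    return Q * mu * L / (A * dP)

# Example: under a 20 kPa head, a 10-micron pipe percolates but a
# 1-micron pipe does not; then back out permeability from a flow rate.
print(pipe_percolates(10e-6, 2.0e4))  # True  (entry pressure ~12 kPa)
print(pipe_percolates(1e-6, 2.0e4))   # False (entry pressure ~120 kPa)
print(darcy_permeability(Q=1e-9, mu=1e-3, L=0.1, A=1e-4, dP=2.0e4))  # 5e-14 m^2
```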

Delayed offloading scheme for IoT tasks considering opportunistic fog computing environment

  • Kyung, Yeunwoong
    • Journal of Internet of Things and Convergence, v.6 no.4, pp.89-92, 2020
  • With the spread of various IoT (Internet of Things) services, there has been much research on task offloading for IoT devices. Because conventional cloud-computing-based offloading suffers from service response delay and core network load, fog-computing-based offloading, in which computing resources are located close to the IoT devices, has attracted attention. However, even in a fog computing architecture, load can concentrate on the fog computing node as the number of requests increases. To solve this problem, the concept of opportunistic fog computing, which offloads tasks to available computing resources such as cars and drones, was introduced. In previous work on fog and opportunistic fog nodes, offloading is performed immediately whenever a service request occurs, so requests can be offloaded to opportunistic fog nodes only while those nodes happen to be available. However, as long as the service response delay requirement is satisfied, there is no need to offload a request immediately; by waiting, the load can be distributed and the opportunistic fog nodes put to the best possible use. Therefore, this paper proposes a delayed offloading scheme that satisfies the response delay requirements while offloading requests to opportunistic fog nodes as efficiently as possible.
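
The waiting policy described in the abstract can be sketched as a small event-driven queue: a task is held while its delay budget allows, dispatched to an opportunistic node as soon as one appears, and otherwise sent to a fixed fog node at the deadline. This is a hedged reading of the scheme, not the paper's algorithm; the fallback target and all names are assumptions:

```python
# Hypothetical sketch of delayed offloading (names and fallback policy
# are assumptions, not the paper's exact algorithm).
import heapq

class DelayedOffloader:
    """Hold tasks until either an opportunistic fog node appears or the
    task's response-delay budget is about to expire."""

    def __init__(self):
        self.pending = []  # min-heap of (deadline, task_id)

    def submit(self, task_id, now, delay_budget):
        heapq.heappush(self.pending, (now + delay_budget, task_id))

    def tick(self, now, opportunistic_nodes):
        """Return (task_id, target) dispatch decisions at time `now`."""
        dispatched = []
        while self.pending:
            deadline, task = self.pending[0]
            if opportunistic_nodes:
                # A nearby car/drone is available: offload to it and
                # spare the (possibly congested) fixed fog node.
                heapq.heappop(self.pending)
                dispatched.append((task, opportunistic_nodes.pop()))
            elif now >= deadline:
                # Budget exhausted: fall back to the fixed fog node so
                # the response-delay requirement is still met.
                heapq.heappop(self.pending)
                dispatched.append((task, "fixed-fog"))
            else:
                break  # safe to keep waiting for an opportunistic node
        return dispatched

# Usage: task A (tight budget) falls back; task B rides a passing drone.
off = DelayedOffloader()
off.submit("A", now=0.0, delay_budget=1.0)
off.submit("B", now=0.0, delay_budget=5.0)
print(off.tick(now=1.0, opportunistic_nodes=[]))          # [('A', 'fixed-fog')]
print(off.tick(now=2.0, opportunistic_nodes=["drone1"]))  # [('B', 'drone1')]
```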

Performance Optimization of Numerical Ocean Modeling on Cloud Systems

  • Jung, Kwangwoog; Cho, Yang-Ki; Tak, Yong-Jin
    • The Sea: Journal of the Korean Society of Oceanography, v.27 no.3, pp.127-143, 2022
  • Recently, there have been many active attempts to run numerical ocean models in cloud computing environments. A cloud computing environment can be an effective means of running numerical ocean models that require large-scale resources, or of quickly preparing a modeling environment for global or large-scale grids. Many commercial and private cloud computing systems provide technologies such as virtualization, high-performance CPUs and instances, Ethernet-based high-performance networking, and remote direct memory access for High Performance Computing (HPC). These features facilitate ocean modeling experiments on commercial cloud systems, and many scientists and engineers expect cloud computing to become mainstream in the near future. Analyzing the performance and features of commercial cloud services for numerical modeling is essential for selecting an appropriate system, as this helps to minimize execution time and the amount of resources used. The effect of cache memory is large in the processing structure of an ocean numerical model, which reads and writes data in multidimensional array structures, and network speed is important because of the communication patterns that move large amounts of data. In this study, the performance of the Regional Ocean Modeling System (ROMS), the High Performance Linpack (HPL) benchmarking package, and the STREAM memory benchmark were evaluated and compared on commercial cloud systems to provide information for migrating other ocean models to cloud computing. Through analysis of actual performance data and configuration settings obtained from virtualization-based commercial clouds, we evaluated the efficiency of the computing resources for various model grid sizes. We found that cache hierarchy and capacity are crucial to the performance of ROMS, which uses a large amount of memory, and that memory latency is also an important factor. Increasing the number of cores to reduce the running time of the numerical model is more effective with large grid sizes than with small ones. Our analysis results should serve as a reference for constructing the best cloud computing system to minimize the time and cost of numerical ocean modeling.
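
As an illustration of the kind of scaling analysis the abstract describes (not taken from the paper), the following sketch computes strong-scaling speedup and parallel efficiency from measured runtimes; the timing numbers are hypothetical and merely mimic the reported behavior that extra cores pay off more on large grids:

```python
# Illustrative strong-scaling analysis: speedup and parallel efficiency
# from measured wall times, used to decide how many cores pay off for a
# given grid size. All timings below are hypothetical.
def scaling_table(runtimes):
    """runtimes: {core_count: wall_time_seconds} for one grid size."""
    base_cores = min(runtimes)
    base_time = runtimes[base_cores]
    rows = []
    for cores in sorted(runtimes):
        speedup = base_time / runtimes[cores]
        efficiency = speedup / (cores / base_cores)  # vs. ideal scaling
        rows.append((cores, runtimes[cores], speedup, efficiency))
    return rows

# Hypothetical timings: a large grid keeps scaling, a small grid saturates.
large_grid = {16: 3600.0, 32: 1900.0, 64: 1050.0}
small_grid = {16: 240.0, 32: 150.0, 64: 130.0}
for name, data in [("large", large_grid), ("small", small_grid)]:
    for cores, t, s, e in scaling_table(data):
        print(f"{name:5s} grid  {cores:3d} cores  {t:7.1f}s  "
              f"speedup {s:4.2f}  efficiency {e:4.2f}")
```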