• Title/Abstract/Keywords: resource disaggregation

Search results: 4 (processing time 0.019 s)

A Method for Estimating Input-output Tables with Disaggregated Sector

  • 정기호
• Environmental and Resource Economics Review
    • /
• Vol. 31, No. 4
    • /
    • pp.849-864
    • /
    • 2022
  • This study presents a procedure for estimating a new input-output table when a particular sector is split into subsectors; such tables are widely used as basic data in energy and environmental economics. The RAS method commonly used for input-output table estimation requires each sector's output, intermediate input total, and intermediate demand total for the new table, but in many cases the sectoral intermediate demand totals are difficult to obtain. This study presents a procedure that estimates the new table without the sectoral intermediate demand totals when a sector is disaggregated into subsectors. The key idea is that many elements of the post-disaggregation table equal the corresponding elements of the pre-disaggregation table, and that the elements of the disaggregated sectors sum to the element value of the original sector. Using this information in place of the intermediate demand totals, the study derives a procedure for estimating the intermediate transactions matrix or the input coefficient matrix of the disaggregated table. In a small-scale simulation, the proposed procedure showed an average estimation error of about 11.23% for the input coefficient matrix, smaller than the 11.30% average error of RAS using intermediate demand totals. However, because several previous studies have found that using additional information does not always improve estimation performance, further simulation studies are needed before applying the proposed method in practice.
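
The RAS method the abstract contrasts against can be sketched as a standard biproportional-fitting loop: rows and columns of a prior transactions matrix are rescaled in turn until its margins match the target row and column sums. This is a generic sketch, not the paper's modified procedure; the function name and example margins are illustrative.

```python
import numpy as np

def ras(prior, row_targets, col_targets, tol=1e-9, max_iter=1000):
    """Biproportional (RAS) balancing: rescale the rows and columns of a
    prior transactions matrix until its margins match the targets.
    Assumes a positive prior and consistent margins (equal grand totals)."""
    X = prior.astype(float).copy()
    for _ in range(max_iter):
        r = row_targets / X.sum(axis=1)   # row scaling factors
        X = X * r[:, None]
        s = col_targets / X.sum(axis=0)   # column scaling factors
        X = X * s[None, :]
        if (np.allclose(X.sum(axis=1), row_targets, rtol=tol)
                and np.allclose(X.sum(axis=0), col_targets, rtol=tol)):
            break
    return X

# Illustrative 2x2 example with made-up numbers
prior = np.array([[10.0, 5.0], [4.0, 6.0]])
X = ras(prior, row_targets=np.array([18.0, 12.0]),
        col_targets=np.array([16.0, 14.0]))
```

The hard requirement this loop inherits, and which the paper works around, is that the target margins (here `row_targets` and `col_targets`) must all be known in advance.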

Technology Trends in CXL Memory and Utilization Software

  • 안후영;김선영;박유미;한우종
• Electronics and Telecommunications Trends
    • /
• Vol. 39, No. 1
    • /
    • pp.62-73
    • /
    • 2024
  • Artificial intelligence relies on data-driven analysis, and the data processing performance strongly depends on factors such as memory capacity, bandwidth, and latency. Fast and large-capacity memory can be achieved by composing numerous high-performance memory units connected via high-performance interconnects, such as Compute Express Link (CXL). CXL is designed to enable efficient communication between central processing units, memory, accelerators, storage, and other computing resources. By adopting CXL, a composable computing architecture can be implemented, enabling flexible server resource configuration using a pool of computing resources. Thus, manufacturers are actively developing hardware and software solutions to support CXL. We present a survey of the latest software for CXL memory utilization and the most recent CXL memory emulation software. The former supports efficient use of CXL memory, and the latter offers a development environment that allows developers to optimize their software for the hardware architecture before commercial release of CXL memory devices. Furthermore, we review key technologies for improving the performance of both the CXL memory pool and CXL-based composable computing architecture along with various use cases.

Implementation of Light-weight I/O Stack for NVMe-over-Fabrics

  • Ahn, Sungyong
    • International journal of advanced smart convergence
    • /
• Vol. 9, No. 3
    • /
    • pp.253-259
    • /
    • 2020
  • Most of today's large-scale cloud systems and enterprise data centers disaggregate resources to improve scalability and resource utilization. The NVMe-over-Fabrics protocol allows NVMe commands to be submitted to a remote NVMe SSD over an RDMA (Remote Direct Memory Access) network. It has recently attracted attention because it enables low-latency disaggregated storage systems. However, the current NVMe-over-Fabrics I/O stack has an inefficient structure that stems from maintaining compatibility with the traditional I/O stack. In this paper, we therefore propose a new mechanism that reduces I/O latency and CPU overhead by modifying the NVMe-over-Fabrics I/O path to bypass the legacy block layer. According to the performance evaluation results, the proposed mechanism reduces I/O latency and CPU overhead by up to 22% and 24%, respectively, compared with the existing NVMe-over-Fabrics protocol.

Distributed memory access architecture and control for fully disaggregated datacenter network

  • Kyeong-Eun Han;Ji Wook Youn;Jongtae Song;Dae-Ub Kim;Joon Ki Lee
    • ETRI Journal
    • /
• Vol. 44, No. 6
    • /
    • pp.1020-1033
    • /
    • 2022
  • In this paper, we propose a novel disaggregated memory module (dMM) architecture and memory access control schemes that solve the collision and contention problems of memory disaggregation, reducing the average memory access time to less than 1 µs. In these schemes, the distributed scheduler in each dMM determines the order of memory read/write access based on delay-sensitive priority requests in the disaggregated memory access frame (dMAF). We used the memory-intensive first (MIF) algorithm and the priority-based MIF (p-MIF) algorithm, which prioritize delay-sensitive and/or memory-intensive (MI) traffic over CPU-intensive (CI) traffic. We evaluated the performance of the proposed schemes through OPNET simulation and hardware implementation. Our results showed that when the offered load was below 0.7 and the dMAF payload was 256 bytes, the average round-trip time (RTT) was the lowest, approximately 0.676 µs. The dMM scheduling algorithms, MIF and p-MIF, achieved delays of less than 1 µs for all MI traffic with less than 10% transmission overhead.
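
The MIF/p-MIF ordering idea described above can be illustrated with a toy priority queue: memory-intensive (MI) requests are served before CPU-intensive (CI) ones, and within the MI class, delay-sensitive requests go first. This is a sketch of the ordering policy only; the class encoding, function name, and tie-breaking by arrival order are assumptions, not the paper's hardware scheduler.

```python
import heapq

MI, CI = 0, 1  # traffic classes; a lower value means higher priority

def schedule(requests):
    """requests: list of (traffic_class, delay_sensitive, arrival_id).
    Returns the service order of arrival_ids under a p-MIF-style policy:
    MI before CI, delay-sensitive before insensitive, then FIFO."""
    heap = []
    for cls, sensitive, rid in requests:
        # priority key: class first, then delay sensitivity, then arrival order
        heapq.heappush(heap, (cls, 0 if sensitive else 1, rid))
    order = []
    while heap:
        order.append(heapq.heappop(heap)[2])
    return order

# A CI request arrives first, but both MI requests overtake it,
# and the delay-sensitive MI request is served before the other.
order = schedule([(CI, False, 1), (MI, False, 2), (MI, True, 3)])
# → [3, 2, 1]
```

The tuple-as-priority-key trick keeps the policy declarative: changing the relative weight of delay sensitivity versus traffic class is just a matter of reordering the key fields.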