• Title/Summary/Keyword: 40 Gigabit Ethernet (40 기가비트 이더넷)

Next-Generation Gigabit Ethernet Switch Technology (차세대 기가비트 이더넷 스위치 기술)

  • 백정훈;주범순
    • The Magazine of the IEIE, v.31 no.8, pp.83-95, 2004
  • With Ethernet's characteristic versatility and the advent of high-performance network processors that provide line-rate forwarding, the Ethernet switch, introduced as the key equipment of metro Ethernet, is riding its success in the metro domain and extending its range of application into the core network. In response to this market shift, leading Ethernet switch vendors worldwide are accelerating the development of next-generation, carrier-grade Ethernet switches with switching capacities of several Tbps to several tens of Tbps and with line interfaces and packet-processing capabilities that go beyond 10 Gigabit Ethernet to its successors, 40 Gigabit Ethernet or 100 Gigabit Ethernet. (abridged)

High-Speed Interface Technologies and Standardization Trends (고속 인터페이스 기술과 표준화 동향)

  • 정태식;주범순;정해원
    • The Magazine of the IEIE, v.31 no.8, pp.73-82, 2004
  • Data rates in SONET/SDH transport networks have advanced from OC-192 at 10 Gb/s to OC-768 at 40 Gb/s. In Ethernet, 1 Gigabit Ethernet was standardized in 1998 and the standardization of 10 Gigabit Ethernet was completed in 2002; discussion of a successor technology, 40 Gb/s or 100 Gb/s Ethernet, is expected to emerge soon. (abridged)

Dynamic Core Affinity for High-Performance I/O Devices Supporting Multiple Queues (다중 큐를 지원하는 고속 I/O 장치를 위한 동적 코어 친화도)

  • Cho, Joong-Yeon;Uhm, Junyong;Jin, Hyun-Wook;Jung, Sungin
    • Journal of KIISE, v.43 no.7, pp.736-743, 2016
  • Several studies have reported the impact of core affinity on the network I/O performance of multi-core systems. As network bandwidth increases significantly, determining an effective core affinity becomes more important. Although a framework for dynamic core affinity that considers both network and disk I/O has been suggested, it does not properly support the multiple queues provided by high-speed I/O devices. In this paper, we extend the existing dynamic core affinity framework to efficiently support the multiple queues of high-speed I/O devices such as 40 Gigabit Ethernet and NVM Express. Our experimental results show that the extended framework improves HDFS file upload throughput by up to 32% and provides better scalability with respect to the number of cores. In addition, we analyze the impact of the policy for assigning multiple I/O queues across cores (a minimal affinity sketch follows this entry).
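
The paper's dynamic framework is not reproduced in the abstract, but the underlying mechanism, pinning the thread that services a given NIC queue to a specific core, can be sketched with standard Linux primitives. Below is a minimal sketch assuming a hypothetical NIC with four queue pairs and a simple one-to-one queue-to-core mapping; the authors' framework chooses such assignments dynamically rather than statically.

```c
/* Minimal sketch of per-queue core affinity (not the paper's framework):
 * one worker thread per NIC queue, each pinned to its own core with
 * pthread_setaffinity_np(). NUM_QUEUES and the 1:1 queue-to-core
 * mapping are illustrative assumptions. Build with: gcc -pthread */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define NUM_QUEUES 4  /* assumed number of NIC RX/TX queue pairs */

static void *queue_worker(void *arg)
{
    long q = (long)arg;
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET((int)q, &set);  /* pin the worker for queue q to core q */
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

    printf("worker for queue %ld pinned to core %ld\n", q, q);
    /* ... poll and process packets arriving on queue q here ... */
    return NULL;
}

int main(void)
{
    pthread_t workers[NUM_QUEUES];

    for (long q = 0; q < NUM_QUEUES; q++)
        pthread_create(&workers[q], NULL, queue_worker, (void *)q);
    for (int q = 0; q < NUM_QUEUES; q++)
        pthread_join(workers[q], NULL);
    return 0;
}
```

Keeping a queue's interrupt handling and its consuming thread on the same core preserves cache locality; the paper's contribution is deciding these mappings dynamically at run time rather than statically as above.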

The Technology Trend of Interconnection Network for High Performance Computing (고성능 컴퓨팅을 위한 인터커넥션 네트워크 기술 동향)

  • Cho, Hyeyoung;Jun, Tae Joon;Han, Jiyong
    • Journal of the Korea Convergence Society, v.8 no.8, pp.9-15, 2017
  • As semiconductor integration technology has advanced, central processing units and storage devices have been miniaturized and their performance has improved rapidly, so interconnection network technology is becoming an increasingly important factor in the performance of high-performance computing systems. In this paper, we analyze the trends in interconnection network technology used in high-performance computing. The most widely used interconnect technology in the Supercomputer Top500 (June 2017) is InfiniBand. Ethernet recently holds the second-largest share after InfiniBand, owing to the emergence of 40/100 Gbps Ethernet technology. Gigabit Ethernet, whose latency is higher than InfiniBand's, is preferred in cost-effective medium-sized data centers. In addition, top-end HPC systems that demand the highest performance are moving away from Ethernet and InfiniBand and attempting to maximize system performance by introducing proprietary interconnect networks. In the future, high-performance interconnects are expected to utilize silicon-based optical communication technology to exchange data with light.

Development and Performance Study of a Zero-Copy File Transfer Mechanism for VIA-based PC Cluster Systems (VIA 기반 PC 클러스터 시스템을 위한 무복사 파일 전송 메커니즘의 개발 및 성능분석)

  • Park Sejin;Chung Sang-Hwa;Choi Bong-Sik;Kim Sang-Moon
    • Journal of KIISE: Computer Systems and Theory, v.32 no.11_12, pp.557-565, 2005
  • This paper presents the development and implementation of a zero-copy file transfer mechanism that improves the efficiency of file transfers for PC cluster systems using hardware-based VIA (Virtual Interface Architecture) network adapters. VIA is one of the representative user-level communication interfaces, but because it has no library for file transfer, one copy occurs between the kernel buffer and user buffers. Our mechanism provides a file transfer primitive that does not require the file system to be modified and allows the NIC to transfer data from the kernel buffer to the remote node directly, without copying. To this end, we developed a hardware-based VIA network adapter that supports the PCI 64-bit/66 MHz bus and Gigabit Ethernet as the NIC, and implemented the zero-copy file transfer mechanism on it. The experimental results show that the overhead of data copying and context switching on the sender is greatly reduced, with the sender's CPU utilization falling to 30~40% of that of the VIA send/receive mechanism. We demonstrate the performance of the zero-copy file transfer mechanism experimentally and compare the results with those of existing file transfer mechanisms (an illustrative zero-copy sketch follows this entry).
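
The paper's zero-copy path runs over a custom hardware VIA NIC and cannot be reproduced here; as a hedged illustration of the same idea, the sketch below uses the standard Linux sendfile(2) call, which hands file data from the kernel page cache to a socket without the intermediate user-buffer copy the abstract describes. The helper name send_file_zero_copy is hypothetical.

```c
/* Illustration of the zero-copy idea with standard Linux sendfile(2),
 * not the paper's VIA mechanism: file pages go from the kernel page
 * cache to the socket without being copied into a user buffer. */
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Send the whole file at 'path' over the connected socket 'sock_fd'
 * without a user-space copy. Returns 0 on success, -1 on error. */
int send_file_zero_copy(int sock_fd, const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return -1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); close(fd); return -1; }

    off_t offset = 0;
    while (offset < st.st_size) {
        ssize_t n = sendfile(sock_fd, fd, &offset, st.st_size - offset);
        if (n <= 0) { perror("sendfile"); close(fd); return -1; }
    }
    close(fd);
    return 0;
}
```

Like the paper's primitive, this avoids the copy between the kernel buffer and user buffers; the VIA version goes further by letting the NIC transfer the kernel buffer directly to the remote node.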

System-Call-Level Core Affinity for Improving Network Performance (네트워크 성능향상을 위한 시스템 호출 수준 코어 친화도)

  • Uhm, Junyong;Cho, Joong-Yeon;Jin, Hyun-Wook
    • KIISE Transactions on Computing Practices, v.23 no.1, pp.80-84, 2017
  • Existing operating systems experience scalability issues as the number of cores increases. Network I/O performance on manycore systems is limited mainly by cache coherence costs and locking overheads. Existing approaches to this issue include new microkernel-like operating systems or modifications of existing kernels; however, these solutions are not fully transparent to applications. In this study, we propose a library that improves network performance by separating the system-call context from the user context and applying core affinity, without any kernel or application modifications (a sketch of this approach follows this entry). Experimental results show that our implementation can improve the network throughput of Apache by up to 30%.
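
The abstract does not reproduce the library, so the following is a sketch under assumptions: one application-transparent way to separate system-call context from user context is an LD_PRELOAD shim that intercepts a libc call, migrates the calling thread to a core reserved for system-call work, and restores the original affinity afterwards. The SYSCALL_CORE constant and the per-call migration policy are illustrative choices, not the authors'.

```c
/* Hedged sketch (not the authors' library): an LD_PRELOAD shim that
 * runs send() on a dedicated core, applying core affinity without
 * kernel or application changes.
 * Build: gcc -shared -fPIC -o libsyscore.so syscore.c -ldl
 * Run:   LD_PRELOAD=./libsyscore.so ./app */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <sched.h>
#include <sys/socket.h>
#include <sys/types.h>

#define SYSCALL_CORE 0  /* assumed core reserved for system-call work */

typedef ssize_t (*send_fn)(int, const void *, size_t, int);

ssize_t send(int fd, const void *buf, size_t len, int flags)
{
    static send_fn real_send;
    cpu_set_t user_set, sys_set;
    ssize_t ret;

    if (!real_send)
        real_send = (send_fn)dlsym(RTLD_NEXT, "send");

    /* remember the caller's affinity, then migrate to the syscall core */
    sched_getaffinity(0, sizeof(user_set), &user_set);
    CPU_ZERO(&sys_set);
    CPU_SET(SYSCALL_CORE, &sys_set);
    sched_setaffinity(0, sizeof(sys_set), &sys_set);

    ret = real_send(fd, buf, len, flags);  /* runs on SYSCALL_CORE */

    /* restore the user-context affinity before returning */
    sched_setaffinity(0, sizeof(user_set), &user_set);
    return ret;
}
```

Migrating on every call is costly; a production library would pin long-lived I/O threads once or batch calls, which is presumably closer to what the paper evaluates.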