• Title/Summary/Keyword: 코어 네트워크 (core network)

Search Results: 214, Processing Time: 0.022 seconds

Design of Software and Hardware Modules for a TCP/IP Offload Engine with Separated Transmission and Reception Paths (송수신 분리형 TCP/IP Offload Engine을 위한 소프트웨어 및 하드웨어 모듈의 설계)

  • Jang Hank-Kok;Chung Sang-Hwa;Choi Young-In
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.9
    • /
    • pp.691-698
    • /
    • 2006
  • TCP/IP Offload Engine (TOE) is a technology that processes TCP/IP on a network adapter instead of the host CPU to reduce protocol processing overhead on the host. There have been several approaches to implementing TOE: software TOE based on an embedded processor, hardware TOE based on an ASIC implementation, and hybrid TOE in which software and hardware functions are combined. In this paper, we designed software modules and hardware modules for a hybrid TOE on an FPGA that has two processor cores. The software modules are based on embedded Linux. The hardware modules handle data transmission (TX) and reception (RX). One core controls the TX path and the other controls the RX path of Linux. This TX/RX path separation reduces task-switching overhead between processes and overcomes the poor performance of a single embedded processor. The hardware modules create headers for outgoing packets, process headers of incoming packets, and fetch or store data from or to host memory by DMA, which improves the performance of data transmission and reception. We verified the performance of the TOE with separated transmission and reception paths through experiments with a TOE network adapter equipped with the FPGA containing the processor cores.
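
To make the TX/RX separation concrete, below is a minimal user-space sketch in which one dedicated thread stands in for each processor core, so transmit and receive processing never share a core; the queue depth, descriptor format, and thread bodies are illustrative assumptions, not the paper's firmware design.

```c
/* Sketch of TX/RX path separation: one dedicated thread per direction. */
#include <pthread.h>
#include <stdio.h>

#define QUEUE_DEPTH 64

struct pkt_queue {
    int             buf[QUEUE_DEPTH];   /* descriptor ids, stand-ins for real packets */
    int             head, tail;
    pthread_mutex_t lock;
};

static struct pkt_queue tx_q = { .lock = PTHREAD_MUTEX_INITIALIZER };
static struct pkt_queue rx_q = { .lock = PTHREAD_MUTEX_INITIALIZER };

static void enqueue(struct pkt_queue *q, int desc)
{
    pthread_mutex_lock(&q->lock);
    q->buf[q->tail] = desc;
    q->tail = (q->tail + 1) % QUEUE_DEPTH;
    pthread_mutex_unlock(&q->lock);
}

static int dequeue(struct pkt_queue *q)
{
    int desc = -1;
    pthread_mutex_lock(&q->lock);
    if (q->head != q->tail) {
        desc = q->buf[q->head];
        q->head = (q->head + 1) % QUEUE_DEPTH;
    }
    pthread_mutex_unlock(&q->lock);
    return desc;
}

/* TX core: would build headers and hand descriptors to the DMA engine. */
static void *tx_core(void *arg)
{
    (void)arg;
    int desc;
    while ((desc = dequeue(&tx_q)) >= 0)
        printf("TX core: send descriptor %d\n", desc);
    return NULL;
}

/* RX core: would parse headers and DMA payloads into host memory. */
static void *rx_core(void *arg)
{
    (void)arg;
    int desc;
    while ((desc = dequeue(&rx_q)) >= 0)
        printf("RX core: deliver descriptor %d\n", desc);
    return NULL;
}

int main(void)
{
    for (int i = 0; i < 4; i++) {        /* pretend 4 packets wait in each direction */
        enqueue(&tx_q, i);
        enqueue(&rx_q, i);
    }
    pthread_t tx, rx;
    pthread_create(&tx, NULL, tx_core, NULL);
    pthread_create(&rx, NULL, rx_core, NULL);
    pthread_join(tx, NULL);
    pthread_join(rx, NULL);
    return 0;
}
```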

Voltage-Frequency-Island Aware Energy Optimization Methodology for Network-on-Chip Design (전압-주파수-구역을 고려한 에너지 최적화 네트워크-온-칩 설계 방법론)

  • Kim, Woo-Joong;Kwon, Soon-Tae;Shin, Dong-Kun;Han, Tae-Hee
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.46 no.8
    • /
    • pp.22-30
    • /
    • 2009
  • Due to high levels of integration and complexity, the Network-on-Chip (NoC) approach has emerged as a new design paradigm to overcome on-chip communication issues and data bandwidth limits in conventional SoC (System-on-Chip) design. In particular, the exponential growth of energy consumption caused by high frequencies, synchronization, and distributing a single global clock signal throughout the chip has become a major design bottleneck. To deal with these issues, a globally asynchronous, locally synchronous (GALS) design combined with low-power techniques is considered. Such a design style fits nicely with the concept of voltage-frequency islands (VFI), which has recently been introduced for achieving fine-grained system-level power management. In this paper, we propose an efficient design methodology that minimizes energy consumption by VFI partitioning on an NoC architecture as well as by assigning supply and threshold voltage levels to each VFI. The proposed algorithm, which finds the VFIs and appropriate core (or processing element) supply voltages, consists of traffic-aware core graph partitioning, communication-contention-delay-aware tile mapping, power-variation-aware core dynamic voltage scaling (DVS), power-efficient VFI merging, and voltage updates on the VFIs. Simulation results show an average 10.3% improvement in energy consumption compared to other existing works.
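
As a rough illustration of the voltage-assignment step, the sketch below groups cores into islands, lets each island adopt the supply voltage required by its fastest core, and sums dynamic energy as E = C_eff·V_dd²·f; the Vdd-to-frequency relation, capacitance values, and island assignment are invented for illustration and are not the paper's algorithm.

```c
/* Toy VFI voltage assignment: every core in an island shares one Vdd,
 * so the island runs at the minimum Vdd supporting its fastest core. */
#include <stdio.h>

#define N_CORES 6

struct core { double freq_ghz; double c_eff_nf; int island; };

/* Example workload: target frequency, switched capacitance, island id. */
static struct core cores[N_CORES] = {
    {1.0, 1.2, 0}, {0.8, 1.0, 0}, {0.5, 0.9, 1},
    {0.4, 1.1, 1}, {1.2, 1.3, 2}, {0.3, 0.8, 2},
};

/* Assumed linear Vdd-frequency relation: Vdd = 0.6 V + 0.5 * f (GHz). */
static double vdd_for(double f) { return 0.6 + 0.5 * f; }

int main(void)
{
    double vdd[3] = {0};
    /* Each island adopts the Vdd required by its fastest core. */
    for (int i = 0; i < N_CORES; i++) {
        double v = vdd_for(cores[i].freq_ghz);
        if (v > vdd[cores[i].island])
            vdd[cores[i].island] = v;
    }
    /* Dynamic energy term per core: E = C_eff * Vdd^2 * f. */
    double total = 0;
    for (int i = 0; i < N_CORES; i++)
        total += cores[i].c_eff_nf * vdd[cores[i].island]
               * vdd[cores[i].island] * cores[i].freq_ghz;
    printf("total dynamic energy (arbitrary units): %.2f\n", total);
    return 0;
}
```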

A Scalable Dynamic QoS Support Protocol (확장성 있는 동적 QoS 지원 프로토콜)

  • 문새롬;이미정
    • Journal of KIISE:Information Networking
    • /
    • v.29 no.6
    • /
    • pp.722-737
    • /
    • 2002
  • As the number of multimedia applications increases, various protocols and architectures have been proposed to provide QoS (Quality of Service) guarantees in the Internet. Most of these techniques, though, bear an inherent contradiction between scalability and the capability of providing QoS guarantees. In this paper, we propose a protocol, named DQSP (Dynamic QoS Support Protocol), which provides dynamic resource allocation and admission control for QoS guarantees in a scalable way. In DQSP, the core routers maintain only per source-edge-router resource allocation state. Each source-edge router maintains usage information for the resources allocated to it on each of the network links. Based on this information, the source-edge routers perform admission control for incoming flows. For resource reservation and admission control, DQSP does not incur per-flow signaling in the core network, and the amount of state information at the core routers depends on the scale of the topology rather than on the number of user flows. Simulation results show that DQSP achieves efficient resource utilization without incurring the user-flow-related scalability problems of IntServ, one of the representative architectures providing end-to-end QoS guarantees.
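
The admission test implied by the abstract can be sketched as follows: a source-edge router holds an allocation on every core link and admits a new flow locally only if its rate fits into the remaining allocation along the flow's path; the link identifiers and numbers below are hypothetical.

```c
/* Edge-router-local admission control over pre-allocated link shares. */
#include <stdio.h>

#define N_LINKS 4

static double allocated[N_LINKS] = {100.0, 100.0, 50.0, 80.0}; /* Mbps granted to this edge */
static double in_use[N_LINKS]    = { 60.0,  30.0, 35.0, 10.0}; /* Mbps already used */

static int admit_flow(const int *path, int hops, double rate)
{
    for (int i = 0; i < hops; i++)
        if (in_use[path[i]] + rate > allocated[path[i]])
            return 0;                    /* reject: some link lacks headroom */
    for (int i = 0; i < hops; i++)
        in_use[path[i]] += rate;         /* commit the reservation locally */
    return 1;
}

int main(void)
{
    int path[] = {0, 2, 3};              /* links traversed by the flow */
    printf("first 10 Mbps flow admitted?  %s\n", admit_flow(path, 3, 10.0) ? "yes" : "no");
    printf("second 10 Mbps flow admitted? %s\n", admit_flow(path, 3, 10.0) ? "yes" : "no");
    return 0;
}
```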

A Packet Processing Method for Handling Large-capacity Traffic over 20 Gbps Using Multi-core and Huge Page Memory Approaches

  • Kwon, Young-Sun;Park, Byeong-Chan;Chang, Hoon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.6
    • /
    • pp.73-80
    • /
    • 2021
  • In this paper, we propose a packet processing method capable of handling large-capacity traffic over 20 Gbps using multi-core and huge page memory approaches. As ICT technology advances, global average monthly traffic is expected to reach 396 exabytes by 2022. With the increase in network traffic, cyber threats are also increasing, raising the importance of traffic analysis. Existing high-cost foreign traffic-analysis products simply store statistical data and display it visually. Network administrators introduce many traffic analysis systems to analyze traffic in various sections, but they cannot check the aggregated traffic of the entire network. In addition, since most existing equipment is of the 10 Gbps class, it cannot keep up with traffic that increases every year. As a method for processing large-capacity traffic over 20 Gbps, we present a pipeline that processes raw packets without copying, moving from a single-core, basic memory approach to high-performance packet reception, packet detection, and statistics based on multi-core and NUMA memory approaches. When using the proposed method, it was confirmed that more than 50% more traffic was processed compared to the existing equipment.
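
A minimal sketch of two ingredients named in the abstract, a packet-buffer pool backed by 2 MB huge pages (fewer TLB misses) and one worker thread pinned per core, is shown below; the pool size, core count, and the assumption that huge pages have already been reserved (e.g., via vm.nr_hugepages) are illustrative, not the paper's configuration.

```c
/* Huge-page buffer pool plus per-core pinned workers (Linux). */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <sys/mman.h>

#ifndef MAP_HUGETLB
#define MAP_HUGETLB 0x40000               /* Linux flag value if headers lack it */
#endif

#define POOL_BYTES (64UL * 1024 * 1024)   /* 64 MB of huge-page backed memory */
#define N_WORKERS  4

static void *worker(void *arg)
{
    long core = (long)arg;
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);  /* pin to one core */
    /* Real code would poll an RX queue here; this stub only reports the pinning. */
    printf("worker pinned to core %ld\n", core);
    return NULL;
}

int main(void)
{
    /* Huge-page backed, copy-avoiding buffer pool. */
    void *pool = mmap(NULL, POOL_BYTES, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (pool == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB) failed; are huge pages reserved?");
        return 1;
    }
    pthread_t t[N_WORKERS];
    for (long i = 0; i < N_WORKERS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < N_WORKERS; i++)
        pthread_join(t[i], NULL);
    munmap(pool, POOL_BYTES);
    return 0;
}
```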

Social Network Analysis of Shared Bicycle Usage Pattern Based on Urban Characteristics: A Case Study of Seoul Data (도시특성에 기반한 공유 자전거 이용 패턴의 소셜 네트워크 분석 연구: 서울시 데이터 사례 분석)

  • Byung Hyun Lee;Il Young Choi;Jae Kyeong Kim
    • Information Systems Review
    • /
    • v.22 no.1
    • /
    • pp.147-165
    • /
    • 2020
  • The sharing economy is now spreading in various fields such as accommodation, cars, and bicycles. In particular, bicycle-sharing services have become very popular around the world, and since September 2015 Seoul has provided a bicycle-sharing service called 'Ttareungi'. However, the number of bicycles at rental stations becomes continuously unbalanced as users ride between stations. To address this problem, we applied social network analysis to Ttareungi data from Seoul, Korea, analyzing degree centrality, closeness centrality, betweenness centrality, and k-cores. The results show that stations with high degree centrality are closely linked with bus or subway transfer centers. High closeness centrality appears at stations with unbalanced departure and arrival frequencies or poor proximity to public transport. High betweenness centrality indicates stations where departures and arrivals occur frequently. Finally, the k-core analysis showed that Mapo-gu contained the most important group in each time zone. The results of this study may therefore contribute to planning the relocation and additional installation of bike rental stations in Seoul.
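
Of the four measures used in the study, degree centrality is the simplest; the toy sketch below counts departures and arrivals per station over an invented trip list to show what the measure captures, and it is not the study's actual analysis pipeline.

```c
/* Toy degree centrality over a station-to-station trip graph. */
#include <stdio.h>

#define N_STATIONS 5

int main(void)
{
    /* Each pair is one rental trip: (departure station, arrival station). */
    int trips[][2] = { {0,1}, {0,2}, {1,2}, {3,2}, {4,2}, {2,0} };
    int n_trips = sizeof(trips) / sizeof(trips[0]);
    int degree[N_STATIONS] = {0};

    for (int i = 0; i < n_trips; i++) {
        degree[trips[i][0]]++;          /* out-degree: departures */
        degree[trips[i][1]]++;          /* in-degree: arrivals */
    }
    for (int s = 0; s < N_STATIONS; s++)
        printf("station %d degree centrality: %d\n", s, degree[s]);
    return 0;
}
```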

QoS Guaranteeing Scheme Based on Deflection Routing in Optical Burst Switching Networks (광 버스트 교환망에서 우회 라우팅을 이용한 QoS 보장 방법)

  • Kim, Jong-Won;Kim, Jung-Youp;Choi, Young-Bok
    • The KIPS Transactions:PartC
    • /
    • v.10C no.4
    • /
    • pp.447-454
    • /
    • 2003
  • Optical burst switching (OBS) has been proposed to reduce the use of fiber delay lines (FDLs) and to realize the optical switching paradigm of next-generation all-optical networks. OBS can improve on wavelength routing in terms of bandwidth efficiency and core network scalability via statistical multiplexing of bursts. Another challenging issue is how to support quality of service (QoS) in optical burst switching networks. In this paper, we propose a deflection routing scheme that guarantees QoS in OBS networks by detouring lower-priority bursts onto a deflection routing path when congestion occurs. A big advantage of the proposed scheme is the simplicity of QoS provisioning, which comes from its simple provisioning algorithm. The scheme also makes the network more efficient by distributing traffic fairly and reducing the use of FDLs at core routers. Computer simulations using OPNET verified that the scheme reliably guarantees the QoS of priority 0, 1, and 2 bursts and efficiently utilizes network resources. As a result, the end-to-end delay of high-priority bursts is improved, and network efficiency is also improved.
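
The forwarding decision described in the abstract can be sketched as follows: when the primary output port is congested, only a lower-priority burst is detoured onto the deflection path, while a higher-priority burst keeps the primary path; the priority values and port state below are illustrative assumptions.

```c
/* Priority-aware deflection decision at one OBS core node. */
#include <stdio.h>

enum { PRIORITY_HIGH = 0, PRIORITY_LOW = 2 };

static int primary_port_busy = 1;       /* pretend the primary link is congested */

static const char *route_burst(int priority)
{
    if (!primary_port_busy)
        return "primary path";
    /* Congestion: only lower-priority bursts are detoured. */
    return (priority > PRIORITY_HIGH) ? "deflection path"
                                      : "primary path (buffered in FDL)";
}

int main(void)
{
    printf("high-priority burst -> %s\n", route_burst(PRIORITY_HIGH));
    printf("low-priority burst  -> %s\n", route_burst(PRIORITY_LOW));
    return 0;
}
```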

Design and Verification of a Flow Mobility Scheme in the AIMS System (AIMS 시스템에서 플로우 이동성 기법의 설계와 검증)

  • Lee, Sung-Kuen;Lee, Kyoung-Hee;Min, Sung-Gi;Lee, Hyo-Beom;Lee, Hyun-Woo;Han, Youn-Hee
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.36 no.7B
    • /
    • pp.760-770
    • /
    • 2011
  • Existing mobility management schemes do not fully support the next-generation network, which is composed of an IP-based core network and various access networks. ETRI has been developing the AIMS (Access Independent Mobility Service) system, which satisfies the ITU-T requirements for mobility management in the next-generation network. The AIMS system is designed to provide a mobile host with a fast and reliable mobility service among heterogeneous access networks. Many user devices now have multiple communication interfaces, e.g., 3G and WLAN, and thus can make two or more network connections at the same time. In this paper, we design a flow mobility scheme, i.e., the movement of selected data flows from one access technology to another, to be applied in the AIMS system, and verify the proposed scheme through an NS-3 simulation study. The simulation results show that the proposed flow mobility scheme can utilize network resources efficiently in heterogeneous mobile networks.
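
Flow mobility can be sketched with a per-flow binding table that maps each data flow to one of the host's access interfaces, so moving a flow only rewrites its entry; the flow identifiers and interface names below are examples and are not the AIMS message formats.

```c
/* Per-flow binding table: moving a flow rebinds it to another interface. */
#include <stdio.h>
#include <string.h>

#define N_FLOWS 3

struct flow_binding { int flow_id; char iface[8]; };

static struct flow_binding table[N_FLOWS] = {
    {1, "3g"}, {2, "3g"}, {3, "wlan"},
};

static void move_flow(int flow_id, const char *new_iface)
{
    for (int i = 0; i < N_FLOWS; i++)
        if (table[i].flow_id == flow_id) {
            strncpy(table[i].iface, new_iface, sizeof(table[i].iface) - 1);
            return;
        }
}

int main(void)
{
    move_flow(2, "wlan");               /* hand over flow 2 from 3G to WLAN */
    for (int i = 0; i < N_FLOWS; i++)
        printf("flow %d -> %s\n", table[i].flow_id, table[i].iface);
    return 0;
}
```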

A Study of an Open-Source Distributed Computing System for Real-time CFD Modeling (SU2 with OpenCL and MPI) (실시간 CFD 모델링을 위한 오픈소스 분산 컴퓨팅 기술 연구)

  • Lee, Jun-Yeob;Oh, Jong-woo;Lee, DongHoon
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 2017.04a
    • /
    • pp.171-171
    • /
    • 2017
  • Research on precise control of the interior environment of smart farms using computational fluid dynamics (CFD) is under way. To overcome the difficulty of dynamically interpreting time-series data, we considered using artificial neural networks, a kind of nonlinear modeling technique. A previous study confirmed that a TensorFlow-based approach to nonlinear modeling of environmental data shows superior performance thanks to hardware acceleration. Nevertheless, alternatives were judged necessary for the neural network modeling technique, which is limited to offline batch processing, and for high-performance computing hardware that cannot be deployed in the field. SU2 (http://su2.stanford.edu) was used as the solver for the CFD analysis. The operating systems and compilers selected were 1) Mac OS X Sierra 10.12.2: Apple LLVM version 8.0.0 (clang-800.0.38), 2) Windows 10 x64: Intel C++ Compiler version 16.0, update 2, 3) Linux (Ubuntu 16.04 x64): g++ 5.4.0, and 4) clustered Linux (Ubuntu 16.04 x32): MPICC 3.3.a2. For the fourth development environment, the parallel system, hardware acceleration used the OpenCL (https://www.khronos.org/opencl/) engine, and 32 ODROID-XU4 (Hardkernel, AnYang, Korea) single-board computers (SBCs), each equipped with an octa-core Samsung Exynos5422 chip, a low-power ARM processor, were configured in parallel. The distributed computing environment was built with NFS (Network File System) over a gigabit local network and MPICH (http://www.mpich.org/). A three-dimensional Kriging spatial interpolation method was experimentally applied to define the unknown boundary information that arises when the spatial resolution is made finer than the measurement interval. For environments 1, 2, and 3, where a parallel cluster could not be configured, the OpenMP (http://www.openmp.org/) library was used to exploit the multiple cores already present. Environments 1, 2, and 3, running 8 cores in parallel in 64-bit mode, computed roughly twice as fast as the environment running 128 cores in parallel in 32-bit mode. This suggests that distributed computing for real-time CFD is determined by processor speed and the operating system's ability to distribute information. To verify this, a fifth environment was configured by upgrading the fourth environment's operating system to 64-bit. In contrast to the earlier result, the distributed computing environment running 72 cores in 64-bit mode was about 2.5 times faster than the single-processor multi-core environments (1, 2, and 3). Since 64-bit operating systems for ARM processors are not yet mature, continued review is needed for successful real-time CFD modeling in the future.

  • PDF
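
As a rough sketch of the MPI-style decomposition that underlies the clustered SU2 runs, the code below gives each rank a slice of a 1-D field and exchanges halo cells with its neighbours before a smoothing sweep; the grid size and the Jacobi-like kernel are stand-ins for the real CFD solver.

```c
/* 1-D domain decomposition with halo exchange (compile with mpicc). */
#include <mpi.h>
#include <stdio.h>

#define LOCAL_N 8                      /* cells owned by each rank */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double u[LOCAL_N + 2];             /* +2 halo cells */
    for (int i = 0; i < LOCAL_N + 2; i++)
        u[i] = rank;                   /* trivially initialised field */

    int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
    int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

    /* Halo exchange: send boundary cells, receive neighbours' boundaries. */
    MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                 &u[LOCAL_N + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&u[LOCAL_N], 1, MPI_DOUBLE, right, 1,
                 &u[0], 1, MPI_DOUBLE, left, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* One smoothing sweep as a stand-in for a solver step. */
    for (int i = 1; i <= LOCAL_N; i++)
        u[i] = 0.5 * (u[i - 1] + u[i + 1]);

    printf("rank %d/%d finished one halo-exchange step\n", rank, size);
    MPI_Finalize();
    return 0;
}
```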

Implementation of File System for Embedded System (임베디드 시스템을 위한 파일 시스템 구현)

  • 강석민;송재영;조정철;권택근
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2002.04a
    • /
    • pp.61-63
    • /
    • 2002
  • The remarkable growth of computer and network technology has brought about rapid growth of embedded systems such as PDAs, MP3 players, and digital cameras. Such embedded systems carry a real-time operating system specialized for the purpose of the system, and they also need a file system that can control each of their storage devices accordingly. In this paper, we implemented an embedded file system for a real-time operating system to be installed on an embedded system that uses the CalmRISC16 microprocessor core developed by Samsung Electronics. The implemented embedded file system operates as a virtual file system and supports an in-memory file system and SmartMedia cards using FAT.

  • PDF
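
The virtual-file-system layering mentioned in the abstract can be sketched with one table of operations per backend (an in-memory file system and a FAT-on-SmartMedia driver), selected at mount time and dispatched through the same call sites; the function names and the two stub backends are illustrative, not the paper's implementation.

```c
/* VFS-style dispatch: one ops table per backing file system. */
#include <stdio.h>

struct fs_ops {
    const char *name;
    int (*open)(const char *path);
    int (*read)(int fd, void *buf, int len);
};

/* Stub backends standing in for the real drivers. */
static int ram_open(const char *p)            { printf("ramfs open %s\n", p); return 3; }
static int ram_read(int fd, void *b, int len) { (void)fd; (void)b; return len; }
static int fat_open(const char *p)            { printf("fatfs open %s\n", p); return 4; }
static int fat_read(int fd, void *b, int len) { (void)fd; (void)b; return len; }

static const struct fs_ops ramfs = { "ramfs", ram_open, ram_read };
static const struct fs_ops fatfs = { "fatfs", fat_open, fat_read };

int main(void)
{
    char buf[16];
    const struct fs_ops *mounted = &fatfs;     /* SmartMedia mounted at boot */
    int fd = mounted->open("/music/track01.mp3");
    printf("read %d bytes via %s\n", mounted->read(fd, buf, sizeof(buf)), mounted->name);

    mounted = &ramfs;                          /* switch backend: same call sites */
    fd = mounted->open("/tmp/log.txt");
    printf("read %d bytes via %s\n", mounted->read(fd, buf, sizeof(buf)), mounted->name);
    return 0;
}
```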

Next-Generation Gigabit Ethernet Switch Technology (차세대기가비트 이더넷 스위치 기술)

  • 백정훈;주범순
    • The Magazine of the IEIE
    • /
    • v.31 no.8
    • /
    • pp.83-95
    • /
    • 2004
  • With the general applicability peculiar to Ethernet and the emergence of high-performance network processors providing line-rate forwarding, the Ethernet switch, introduced as the key equipment of metro Ethernet, is riding its success in the metro domain and extending its reach into the core domain. In response to this market shift, the world's leading Ethernet switch vendors are accelerating the development of next-generation Ethernet switches that provide carrier-grade reliability, with switching capacities of several Tbps to tens of Tbps and with line interfaces and packet processing capabilities that go beyond 10 Gigabit Ethernet to its successors, 40 Gigabit Ethernet and 100 Gigabit Ethernet. (Abridged)

  • PDF