• Title/Summary/Keyword: 컴퓨터 CPU (computer CPU)


Effective Scheduling Algorithm using Queue Separation and Packet Segmentation for Jumbo Packets (큐 분리 및 패킷 분할을 이용한 효율적인 점보패킷 스케쥴링 방법)

  • 윤빈영;고남석;김환우
    • The Journal of Korean Institute of Communications and Information Sciences, v.28 no.9A, pp.663-668, 2003
  • With the advent of high-speed networking, computers connected to high-speed networks consume an increasing share of their CPU cycles processing data, so one way to improve their performance is to reduce the cycles spent on data processing. Because CPU consumption grows in proportion to the number of packets processed per second, reducing the packet rate by increasing the packet length is one solution. To meet this requirement, two types of jumbo packets, jumbograms and jumbo frames, have already been standardized or are under discussion. When jumbograms and ordinary packets are interleaved and scheduled together in a router, the jumbograms may degrade the QoS of the ordinary packets through added transfer delay; they also tend to exhaust memory by buffering very long packets, which easily pushes the router into congestion and results in packet loss. In this paper, we analyze the problems of processing jumbo packets and suggest a novel solution to overcome them.
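
The title names two mechanisms, queue separation and packet segmentation. The following is a minimal Python sketch of that general idea, not the paper's scheduler; the segment size, the strict priority given to the general queue, and all class and variable names are illustrative assumptions.

```python
from collections import deque

MTU = 1500  # segment size in bytes (illustrative, not from the paper)

class JumboScheduler:
    """Keeps jumbo traffic in its own queue and segments it so that
    ordinary packets are never stuck behind one huge transmission."""

    def __init__(self):
        self.general_q = deque()   # ordinary packets
        self.jumbo_q = deque()     # segments of jumbo packets

    def enqueue(self, packet: bytes):
        if len(packet) <= MTU:
            self.general_q.append(packet)
        else:
            # Packet segmentation: split a jumbo packet into MTU-sized pieces.
            for i in range(0, len(packet), MTU):
                self.jumbo_q.append(packet[i:i + MTU])

    def dispatch(self):
        # Queue separation: ordinary packets are served first so jumbo
        # traffic cannot add transfer delay to them.
        if self.general_q:
            return self.general_q.popleft()
        if self.jumbo_q:
            return self.jumbo_q.popleft()
        return None

sched = JumboScheduler()
sched.enqueue(b"x" * 9000)    # jumbo packet -> six segments
sched.enqueue(b"y" * 200)     # ordinary packet
print(len(sched.dispatch()))  # 200: the ordinary packet goes out first
```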

Problem Analysis and Recommendations of CPU Contents in Korean Middle School Informatics Textbooks (중학교 정보 교과서에 제시된 중앙처리장치 내용 문제점 분석 및 개선 방안)

  • Lee, Sangwook;Suh, Taeweon
    • KIPS Transactions on Computer and Communication Systems, v.2 no.4, pp.143-150, 2013
  • The school curriculum amended in 2007 mandates content from which students can learn the principles and concepts of computer science. Computer science is one of the most rapidly changing subjects, and an informatics textbook should explain the basic principles and concepts accurately and in line with current technology. However, we found that the middle school textbooks in circulation lack accuracy and consistency in describing the CPU. This paper attempts to discover the root cause of the errors and to suggest timely and appropriate explanations based on historical and technical analysis. According to our study, it is appropriate to state that the CPU is composed of a datapath and a control unit. The datapath performs operations on data and holds data temporarily, and it consists of hardware components such as memory, registers, the ALU, and adders. The control unit decides the operation types of the datapath elements, main memory, and I/O devices. Nevertheless, considering the technological literacy of middle school students, we suggest the terms 'arithmetic part' and 'control part' instead of datapath and control unit.
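
As a rough illustration of the datapath/control-unit division the abstract recommends, here is a minimal Python sketch; the register count, instruction format, and ALU operations are made up for the example and are not taken from the textbooks or the paper.

```python
class Datapath:
    """Holds data temporarily (registers) and performs operations on it (ALU)."""
    def __init__(self):
        self.registers = [0] * 8

    def alu(self, op, a, b):
        if op == "ADD":
            return a + b
        if op == "SUB":
            return a - b
        raise ValueError(f"unknown ALU op {op}")

class ControlUnit:
    """Decides which operation the datapath performs for each instruction."""
    def execute(self, datapath, instr):
        op, dst, src1, src2 = instr
        result = datapath.alu(op, datapath.registers[src1], datapath.registers[src2])
        datapath.registers[dst] = result

dp, cu = Datapath(), ControlUnit()
dp.registers[1], dp.registers[2] = 3, 4
cu.execute(dp, ("ADD", 0, 1, 2))
print(dp.registers[0])  # 7
```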

A CPU and GPU Heterogeneous Computing Techniques for Fast Representation of Thin Features in Liquid Simulations (액체 시뮬레이션의 얇은 특징을 빠르게 표현하기 위한 CPU와 GPU 이기종 컴퓨팅 기술)

  • Kim, Jong-Hyun
    • Journal of the Korea Computer Graphics Society, v.24 no.2, pp.11-20, 2018
  • We propose a new particle-based method that explicitly preserves thin liquid sheets when animating liquids on a CPU-GPU heterogeneous computing framework. Our primary contribution is a particle-based framework that splits particles at thin regions and collapses them at dense regions to prevent the breakup of the liquid on the GPU. In contrast to existing surface-tracking methods, our method does not suffer from numerical diffusion or tangles, and it robustly handles topology changes on the CPU-GPU framework. Thin features are detected by examining stretches of the distributions of neighboring particles with PCA (principal component analysis), which is also used to reconstruct thin surfaces with anisotropic kernels. The efficiency of extracting candidate positions for the fluid particles was greatly improved by the CPU-GPU heterogeneous computing techniques. The proposed algorithm is intuitive to implement, easy to parallelize, and capable of quickly producing detailed thin liquid animations.
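
As a sketch of the PCA-based thin-feature test described in the abstract (checking how stretched the neighboring-particle distribution is), the snippet below is a plain NumPy illustration rather than the paper's GPU implementation; the eigenvalue-ratio threshold is an assumed value.

```python
import numpy as np

def is_thin(neighbor_positions, ratio_threshold=4.0):
    """Detect a locally thin (sheet-like) particle distribution with PCA.

    neighbor_positions: (N, 3) array of a particle's neighbor positions.
    Returns True when the smallest principal axis is much shorter than the
    largest one, i.e. the neighborhood is stretched into a thin sheet.
    """
    centered = neighbor_positions - neighbor_positions.mean(axis=0)
    cov = centered.T @ centered / len(neighbor_positions)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending
    return eigvals[0] / max(eigvals[-1], 1e-12) > ratio_threshold

# A flat, sheet-like neighborhood: large spread in x/y, almost none in z.
sheet = np.random.randn(200, 3) * np.array([1.0, 1.0, 0.01])
print(is_thin(sheet))  # True
```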

Analysis on the Performance Impact of Partitioned LLC for Heterogeneous Multicore Processors (이종 멀티코어 프로세서에서 분할된 공유 LLC가 성능에 미치는 영향 분석)

  • Moon, Min Goo;Kim, Cheol Hong
    • The Journal of Korean Institute of Next Generation Computing, v.15 no.2, pp.39-49, 2019
  • Recently, heterogeneous multicore processors that integrate a CPU and a GPU have been widely used to improve the performance of computing systems. Such processors place CPUs and GPUs on a single chip where they share the LLC (last-level cache). This causes a serious cache-contention problem inside the processor, resulting in significant performance degradation. In this paper, we propose a partitioned LLC architecture to solve the cache-contention problem in heterogeneous multicore processors. We analyze the performance impact while varying the LLC shares of the CPU and the GPU. According to our simulation results, CPU performance improves by up to 21% as the CPU's LLC share grows, whereas the GPU shows a negligible performance difference as its assigned LLC size increases; in other words, the GPU loses little performance when its LLC share shrinks. Because the performance loss from reducing the GPU's LLC share is much smaller than the performance gain from enlarging the CPU's share, the overall performance of heterogeneous multicore processors is expected to improve when a partitioned LLC is applied to the CPU and the GPU. In addition, if a memory-management technique that maximizes the performance of each core is developed in the future, the performance of heterogeneous multicore processors can be improved even further.
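
A toy illustration of one common way to realize a partitioned LLC, static way partitioning between the CPU and the GPU, is sketched below; the paper's simulator and exact partitioning scheme are not reproduced here, and the set/way counts are arbitrary.

```python
from collections import OrderedDict

class PartitionedLLC:
    """Toy set-associative LLC whose ways are statically split between a
    CPU partition and a GPU partition, so the two cannot evict each
    other's lines. Sizes and the LRU policy are illustrative only."""

    def __init__(self, num_sets=1024, cpu_ways=12, gpu_ways=4):
        self.ways = {"cpu": cpu_ways, "gpu": gpu_ways}
        # One LRU-ordered tag store per (partition, set).
        self.sets = {p: [OrderedDict() for _ in range(num_sets)]
                     for p in self.ways}
        self.num_sets = num_sets

    def access(self, partition, address, block=64):
        tag = address // block
        lines = self.sets[partition][tag % self.num_sets]
        if tag in lines:                 # hit: refresh LRU position
            lines.move_to_end(tag)
            return True
        if len(lines) >= self.ways[partition]:
            lines.popitem(last=False)    # evict the LRU line of *this* partition
        lines[tag] = True
        return False

llc = PartitionedLLC(cpu_ways=12, gpu_ways=4)
llc.access("cpu", 0x1000)          # miss, fill
print(llc.access("cpu", 0x1000))   # True: hit, GPU traffic cannot evict it
```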

A Study on Heat Transfer and Fluid Flow Characteristics of Radiator for Computer CPU Cooling (컴퓨터 CPU 냉각용 방열기의 열유동특성에 관한 연구)

  • Cha, Dong-An;Kwon, Oh-Kyung;Yun, Jae-Ho
    • Korean Journal of Air-Conditioning and Refrigeration Engineering, v.23 no.1, pp.1-7, 2011
  • The performance of louver-finned flat-tube and fin-and-tube radiators for liquid cooling of computer CPUs was investigated experimentally. In this study, seven radiator samples with different shapes and pass numbers (1, 2, 10) were tested in a wind tunnel. The experiments were conducted at air velocities ranging from 1 to 4 m/s. The water flow rate through a pass was 1.2 LPM, and the inlet temperatures of the air and water were 20°C and 30°C, respectively. Considering both pressure drop and heat transfer coefficient, the best performance was observed in the louver-finned flat-tube sample.
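
For readers who want to relate the stated test conditions to a heat load, here is a small worked calculation of water-side heat rejection, Q = ṁ·cp·ΔT; the outlet temperature used below is an assumed example value, not a measurement from the paper.

```python
# Water-side heat rejection of a radiator pass, Q = m_dot * cp * (T_in - T_out).
# The flow rate (1.2 LPM) and water inlet temperature (30 °C) come from the
# abstract; the outlet temperature is a made-up value for illustration.

rho_water = 998.0          # kg/m^3
cp_water = 4186.0          # J/(kg*K)
flow_lpm = 1.2             # liters per minute (from the abstract)
t_in, t_out = 30.0, 27.5   # °C; outlet is an assumed example value

m_dot = rho_water * flow_lpm / 1000.0 / 60.0   # kg/s
q_watts = m_dot * cp_water * (t_in - t_out)
print(f"heat rejected ≈ {q_watts:.0f} W")      # ≈ 209 W
```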

Parallel Processing of Multi-Core Processor and GPUs in Projection Step for Efficient Fluid Simulation (효율적인 유체 시뮬레이션을 위한 투영 단계에서의 멀티 코어 프로세서와 그래픽 프로세서의 병렬처리)

  • Kim, Sun-Tae;Jung, Hwi-Ryong;Hong, Jeong-Mo
    • The Journal of the Korea Contents Association, v.13 no.6, pp.48-54, 2013
  • These days, state-of-the-art fluid simulations in computer graphics employ heterogeneous parallelization across the CPU and GPU. In this paper, we present a novel CPU-GPU parallel algorithm that solves the projection step of fluid simulation more efficiently than the existing sequential CPU-GPU processing. Fluid simulations that demand large computational resources can thus be carried out efficiently with the proposed method.
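
The projection step referred to in the abstract amounts to solving a pressure Poisson equation. The snippet below sketches that step with a plain Jacobi iteration on a single device to show why it parallelizes well; it does not reproduce the paper's CPU-GPU split, and the iteration count and grid size are illustrative.

```python
import numpy as np

def project(divergence, iterations=60, h=1.0):
    """Minimal Jacobi solve of the pressure Poisson equation
    laplacian(p) = div(u) on a 2-D grid. Each Jacobi sweep updates every
    cell independently of the others, which is why this step maps well
    onto CPU cores and GPU threads."""
    p = np.zeros_like(divergence)
    for _ in range(iterations):
        p_new = np.zeros_like(p)
        p_new[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1] +
                                    p[1:-1, 2:] + p[1:-1, :-2] -
                                    h * h * divergence[1:-1, 1:-1])
        p = p_new
    return p

div = np.zeros((64, 64))
div[32, 32] = 1.0                  # a single source of divergence
pressure = project(div)
print(round(pressure[32, 32], 4))  # pressure responds around the source
```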

A Review on the CPU Scheduling Algorithms: Comparative Study

  • Ali, Shahad M.;Alshahrani, Razan F.;Hadadi, Amjad H.;Alghamdi, Tahany A.;Almuhsin, Fatimah H.;El-Sharawy, Enas E.
    • International Journal of Computer Science & Network Security, v.21 no.1, pp.19-26, 2021
  • The CPU is considered the main and most important resource in a computer system. CPU scheduling is the procedure that determines which process enters the CPU to be executed while the other processes wait their turn. CPU scheduling algorithms are a major operating-system service aimed at maximizing CPU utilization. This article reviews studies on CPU scheduling algorithms to compare which algorithm performs best. After reviewing the Round Robin, Shortest Job First, First Come First Served, and Priority algorithms, we found that several researchers have suggested various ways to improve optimization criteria such as waiting time, response time, and turnaround time through different algorithms, but no single algorithm is better in all criteria.
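
A small worked example of how the compared criteria differ between algorithms: the same three CPU bursts give very different average waiting times under FCFS order versus SJF order (the burst values are illustrative).

```python
def avg_waiting_time(burst_times):
    """Average waiting time when jobs run back to back in the given order."""
    waiting, elapsed = 0, 0
    for burst in burst_times:
        waiting += elapsed
        elapsed += burst
    return waiting / len(burst_times)

bursts = [24, 3, 3]                       # illustrative CPU bursts (ms)
print(avg_waiting_time(bursts))           # FCFS order: 17.0 ms
print(avg_waiting_time(sorted(bursts)))   # SJF order:   3.0 ms
```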

Towards Characterization of Modern FPGAs: A Case Study with Adders and MIPS CPU (가산기와 MIPS CPU 사례를 이용한 현대 FPGA의 특성연구)

  • Lee, Boseon;Suh, Taewon
    • The Journal of Korean Association of Computer Education, v.16 no.3, pp.99-105, 2013
  • FPGA-based emulation is an essential step in validating an ASIC design. To emulate at the maximal frequency, it is crucial to understand FPGA characteristics. This paper analyzes the performance characteristics of modern FPGAs from the renowned vendors Xilinx and Altera through a case study using various adders and a MIPS CPU. Contrary to common wisdom, a ripple-carry adder (RCA) does not utilize the FPGA's inherent carry chain when it is structurally designed from 1-bit adders, so the RCA shows inferior performance to the other types of adders in FPGAs. Our study also reveals that Xilinx FPGAs exhibit different characteristics from Altera's: the prefix adder, which is optimized for speed in ASIC design, performs poorly on Xilinx devices, whereas it provides a speed comparable to the vendor IP core on Altera devices. This suggests that error-prone manual changes to the original design can be avoided on Altera devices if area permits. Experiments with the MIPS CPU confirm these observations.
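
To make the "structurally designed from 1-bit adders" point concrete, here is a bit-level Python model of such a ripple-carry adder; it mirrors the HDL structure in spirit only and is not the paper's Verilog/VHDL code.

```python
def full_adder(a, b, cin):
    """One-bit full adder: the building block of a structural ripple-carry adder."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_carry_add(x, y, width=32):
    """Structural RCA: chains `width` 1-bit full adders, with the carry
    rippling from bit 0 upward. This is the structure the abstract says
    fails to map onto the FPGA's dedicated carry chain."""
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result, carry

print(ripple_carry_add(0xFFFF_FFF0, 0x10))  # (0, 1): wraps with carry-out 1
```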

Speed-optimized Implementation of HIGHT Block Cipher Algorithm (HIGHT 블록 암호 알고리즘의 고속화 구현)

  • Baek, Eun-Tae;Lee, Mun-Kyu
    • Journal of the Korea Institute of Information Security & Cryptology, v.22 no.3, pp.495-504, 2012
  • This paper presents various speed-optimization techniques for software implementations of the HIGHT block cipher on CPUs and GPUs. We considered 32-bit and 64-bit operating systems for the CPU implementations. After applying bit-slicing and byte-slicing techniques to HIGHT, the encryption speed reached 1.48 Gbps on an Intel Core i7 920 CPU under a 64-bit operating system, up to 2.4 times faster than the previous implementation. We also implemented HIGHT on an NVIDIA GPU with CUDA and applied various optimization techniques, such as storing the most frequently used data, like subkeys and the F lookup table, in shared memory, and using coalesced accesses when reading data from global memory. To our knowledge, this is the first result that implements and optimizes HIGHT on a GPU. We verified that the byte-slicing technique guarantees a speed-up of more than 20%, resulting in a speed 31 times faster than that on a CPU.
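
As a hedged illustration of the byte-slicing idea only (not the real HIGHT round function and not the paper's code), the snippet below applies a stand-in round to many independent blocks laid out side by side, so each vector operation advances all blocks at once.

```python
import numpy as np

def toy_round(state, subkey):
    """NOT the real HIGHT round: a stand-in round (XOR with a subkey,
    then an 8-bit rotation) used only to show byte-slicing. Every
    operation acts on a whole vector of lanes, so one pass processes
    many independent blocks in parallel."""
    state = state ^ subkey
    rotated = ((state.astype(np.uint16) << 1) | (state >> 7)) & 0xFF
    return rotated.astype(np.uint8)

lanes = 1024                                    # independent blocks side by side
state = np.random.randint(0, 256, size=(8, lanes), dtype=np.uint8)   # 8 bytes/block
subkeys = np.random.randint(0, 256, size=(32, 8, 1), dtype=np.uint8)

for sk in subkeys:           # 32 illustrative rounds over all lanes at once
    state = toy_round(state, sk)
print(state.shape)           # (8, 1024): all 1024 blocks advanced together
```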

One-Chip Computer Design for Hard-Ware Implementation of Genetic Algorithm (유전자 알고리즘 하드웨어 구현을 위한 전용 원칩 컴퓨터의 설계)

  • 박세현;이언학;박상필
    • Proceedings of the Korea Multimedia Society Conference, 2000.11a, pp.575-579, 2000
  • A dedicated one-chip computer was designed for a hardware implementation of the genetic algorithm. The one-chip computer consists of a 16-bit CPU core and dedicated genetic-algorithm hardware. Unlike existing hardware GAPs, the implemented one-chip computer operates independently of a host computer, and using it to generate the bit-synchronization hardware used in multimedia communication showed that the approach is effective.
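
For context, the loop that such dedicated hardware accelerates is the standard evaluate-select-crossover-mutate cycle; a minimal software sketch follows, with all parameters and the toy fitness function chosen for illustration rather than taken from the paper.

```python
import random

def genetic_algorithm(fitness, bits=16, pop_size=32, generations=100, p_mut=0.02):
    """Minimal software sketch of the GA loop: evaluate, select,
    crossover, mutate. All parameters are illustrative."""
    pop = [random.getrandbits(bits) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]               # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, bits)
            mask = (1 << cut) - 1
            child = (a & mask) | (b & ~mask)        # single-point crossover
            for i in range(bits):                   # bit-flip mutation
                if random.random() < p_mut:
                    child ^= 1 << i
            children.append(child & ((1 << bits) - 1))
        pop = parents + children
    return max(pop, key=fitness)

# Toy objective: maximize the number of set bits in a 16-bit word.
best = genetic_algorithm(lambda x: bin(x).count("1"))
print(bin(best))
```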
