• Title/Summary/Keyword: Memory allocation


A Resource-Aware Mapping Algorithm for Coarse-Grained Reconfigurable Architecture Using List Scheduling (리스트 스케줄링을 통한 Coarse-Grained 재구성 구조의 맵핑 알고리즘 개발)

  • Kim, Hyun-Jin;Hong, Hye-Jeong;Kim, Hong-Sik;Kang, Sung-Ho
    • Journal of the Institute of Electronics Engineers of Korea SD / v.46 no.6 / pp.58-64 / 2009
  • For the success of reconfigurable computing, the algorithm that maps operations onto a coarse-grained reconfigurable architecture is very important. This paper proposes a resource-aware mapping system for coarse-grained reconfigurable architectures and its underlying heuristic algorithm. Operation assignment and routing-path allocation are performed simultaneously with a cycle-accurate, time-exclusive resource model. The proposed algorithm minimizes communication resource usage and global memory accesses with a list-scheduling heuristic, and the operations to be mapped are prioritized using general properties of the data flow. Evaluations of the proposed algorithm show that performance is significantly enhanced on several benchmark applications.
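
The list-scheduling idea above can be made concrete with a small sketch: operations of a toy dataflow graph are prioritized by their remaining depth and greedily placed on processing elements, preferring a PE that already holds a predecessor so that routing and global memory traffic stay low. This is an illustration under assumed names and a made-up graph, not the paper's actual algorithm or resource model.

```python
# Minimal list-scheduling sketch for mapping a toy dataflow graph onto PEs.
from collections import defaultdict
from functools import lru_cache

dag = {  # op -> successor ops in a toy dataflow graph (assumed for illustration)
    "a": ["c"], "b": ["c", "d"], "c": ["e"], "d": ["e"], "e": [],
}
preds = defaultdict(list)
for op, succs in dag.items():
    for s in succs:
        preds[s].append(op)

@lru_cache(maxsize=None)
def height(op):
    """Longest path from op to a sink, used as the list-scheduling priority."""
    return 0 if not dag[op] else 1 + max(height(s) for s in dag[op])

NUM_PES = 2  # processing elements available per cycle
placement, finished, cycle = {}, set(), 0
while len(finished) < len(dag):
    ready = [op for op in dag
             if op not in finished and all(p in finished for p in preds[op])]
    ready.sort(key=height, reverse=True)  # deeper remaining path = higher priority
    for pe in range(NUM_PES):
        if not ready:
            break
        # Prefer an op whose predecessor already sits on this PE (less routing).
        local = [op for op in ready if any(placement.get(p) == pe for p in preds[op])]
        op = (local or ready)[0]
        ready.remove(op)
        placement[op] = pe
        finished.add(op)
        print(f"cycle {cycle}: {op} -> PE{pe}")
    cycle += 1
```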

Design and Implementation of a High Performance Web Crawler (고성능 웹크롤러의 설계 및 구현)

  • 권성호;이영탁;김영준;이용두
    • Journal of Korea Society of Industrial Information Systems / v.8 no.4 / pp.64-72 / 2003
  • A Web crawler is an important Internet software technology used in a variety of Internet applications, including search engines. As the Internet continues to grow, implementations of high-performance web crawlers are in urgent demand. In this paper, we study how to support dynamic scheduling in a multiprocess-based web crawler. For high performance, web crawlers are usually implemented as multiple processes; in these systems, crawl scheduling, which manages the allocation of web pages to each process for downloading, is one of the important issues. We identify the issues that are important and challenging in crawl scheduling, and to address them we propose a dynamic crawl scheduling framework and, subsequently, a system architecture for a web crawler with dynamic crawl scheduling support. We also analyze the behavior of the Web crawler and, based on the analysis results, suggest directions for the design of high-performance Web crawlers.
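
As a rough illustration of dynamic crawl scheduling of the kind described above, the sketch below pins each host to one worker process (for politeness) but chooses that worker dynamically based on current queue lengths. The worker count, URLs, and data structures are assumptions for the example, not the paper's framework.

```python
# Hypothetical dynamic crawl scheduling: same-host URLs stay on one worker,
# new hosts go to the currently least-loaded worker.
from urllib.parse import urlparse

NUM_WORKERS = 3
queues = [[] for _ in range(NUM_WORKERS)]   # per-worker URL queues
host_to_worker = {}                         # host -> pinned worker index

def schedule(url):
    host = urlparse(url).netloc
    if host not in host_to_worker:
        # Dynamic decision: pick the worker with the shortest queue right now.
        host_to_worker[host] = min(range(NUM_WORKERS), key=lambda w: len(queues[w]))
    queues[host_to_worker[host]].append(url)

for u in ["http://a.com/1", "http://b.com/1", "http://a.com/2",
          "http://c.com/1", "http://b.com/2", "http://d.com/1"]:
    schedule(u)

for w, q in enumerate(queues):
    print(f"worker {w}: {q}")
```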


Low-Power Data Cache Architecture and Microarchitecture-level Management Policy for Multimedia Application (멀티미디어 응용을 위한 저전력 데이터 캐쉬 구조 및 마이크로 아키텍쳐 수준 관리기법)

  • Yang Hoon-Mo;Kim Cheong-Gil;Park Gi-Ho;Kim Shin-Dug
    • The KIPS Transactions:PartA / v.13A no.3 s.100 / pp.191-198 / 2006
  • Today's portable, battery-operated consumer electronic devices tend to integrate more and more multimedia processing capabilities. In these devices, multimedia system-on-chips handle algorithms that require intensive processing and consume significant power, so the power efficiency of multimedia processing devices is becoming increasingly important. In this paper, we propose a reconfigurable data cache architecture, in which data allocation is constrained with software support, and evaluate its performance and power efficiency. Compared with conventional cache architectures, power consumption is reduced significantly while the miss rate of the proposed architecture remains very similar to that of conventional caches: the reconfigurable data cache reduces power consumption by 33.2%, 53.3%, and 70.4% compared with direct-mapped, 2-way, and 4-way caches, respectively.
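
A hypothetical back-of-the-envelope model can show why software-constrained data placement saves cache power: if streaming multimedia data is steered to a small direct-mapped region, only one way is probed per access instead of four in a conventional 4-way cache. The energy unit and access mix below are invented for illustration and are unrelated to the paper's measured 33.2-70.4% figures.

```python
# Illustrative dynamic-energy comparison; all numbers are assumptions.
E_WAY = 1.0              # assumed energy per way probed (arbitrary units)
accesses = 1_000_000
stream_fraction = 0.6    # assumed share of streaming multimedia accesses

conventional = accesses * 4 * E_WAY                        # 4 ways probed per access
reconfigurable = (accesses * stream_fraction * 1 * E_WAY   # streaming -> 1 way probed
                  + accesses * (1 - stream_fraction) * 4 * E_WAY)

saving = 1 - reconfigurable / conventional
print(f"estimated dynamic-energy saving: {saving:.1%}")    # ~45% under these assumptions
```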

Improved Task Scheduling Algorithm Considering the Successive Communication Features of Heterogeneous Message-passing System (메시지 패싱 시스템의 통신 특성을 고려한 개선된 태스크 스케줄링 기법)

  • 노두호;김성천
    • Journal of KIISE:Computer Systems and Theory / v.31 no.5_6 / pp.347-352 / 2004
  • This thesis deals with task scheduling on message-passing systems. Scheduling and allocation are very important issues, since inappropriate scheduling of tasks cannot exploit the true potential of the system and can offset the gain from parallelization. Previous schemes are difficult to apply to message-passing systems because they assume a shared-memory system. This thesis proposes a modified priority function and a processor-selection technique that address the problems caused by the differences between previous models and message-passing environments. The priority function includes the cumulative communication cost, which causes task execution to be delayed. The processor-selection technique avoids the situation in which a child task is assigned to the processor allocated to its parent task while that parent still has other unscheduled child tasks. Simulations show that the modified features of the task scheduling algorithm produce better scheduling results than previous algorithms.
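
The two ideas in the abstract, a priority function that adds cumulative communication cost and a processor-selection rule that avoids the parent's processor while it still has unscheduled children, can be sketched as below. Function names, the cost model, and the tiny example are assumptions, not the thesis's exact formulation.

```python
# Hedged sketch of a communication-aware priority and processor-selection rule.
from dataclasses import dataclass

@dataclass
class Processor:
    pid: int
    available_time: float = 0.0

def priority(comp, comm_in, max_succ_priority):
    """Modified priority: computation cost plus the *cumulative* incoming
    communication cost plus the best successor priority (0 for exit tasks)."""
    return comp + comm_in + max_succ_priority

def select_processor(parent_proc, other_unscheduled_children, processors):
    """Avoid the parent's processor while the parent still has other
    unscheduled children; otherwise pick the earliest-available processor."""
    candidates = [p for p in processors
                  if not (p is parent_proc and other_unscheduled_children > 0)]
    return min(candidates or processors, key=lambda p: p.available_time)

procs = [Processor(0, 2.0), Processor(1, 3.5)]
print(priority(comp=4.0, comm_in=1.5, max_succ_priority=6.0))                          # 11.5
print(select_processor(procs[0], other_unscheduled_children=1, processors=procs).pid)  # 1
```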

Forecasting of the COVID-19 pandemic situation of Korea

  • Goo, Taewan;Apio, Catherine;Heo, Gyujin;Lee, Doeun;Lee, Jong Hyeok;Lim, Jisun;Han, Kyulhee;Park, Taesung
    • Genomics & Informatics / v.19 no.1 / pp.11.1-11.8 / 2021
  • For the novel coronavirus disease 2019 (COVID-19), predictive modeling in the literature broadly uses susceptible-exposed-infected-recovered (SEIR)/SIR, agent-based, and curve-fitting models. Governments and legislative bodies rely on insights from prediction models to suggest new policies and to assess the effectiveness of enforced policies, so access to accurate outbreak prediction models is essential for understanding the likely spread and consequences of infectious diseases. The objective of this study is to predict the future COVID-19 situation of Korea. We employed the following models for this analysis: SEIR, local linear regression (LLR), negative binomial (NB) regression, segmented Poisson, a deep-learning-based long short-term memory model (LSTM), and a tree-based gradient boosting machine (GBM). After prediction, model performance was compared using relative mean squared errors (RMSE) for two sets of training data (January 20, 2020-December 31, 2020 and January 20, 2020-January 31, 2021) and testing data (January 1, 2021-February 28, 2021 and February 1, 2021-February 28, 2021). Except for the segmented Poisson model, the models predicted a decline in daily confirmed cases in the country for the near future. Comparison of RMSE values showed that LLR, GBM, SEIR, NB, and LSTM, in that order, performed well in forecasting the pandemic situation of the country. A good understanding of the epidemic dynamics would greatly enhance the control and prevention of COVID-19 and other infectious diseases. Therefore, with daily confirmed cases increasing since this year, these results could help in the pandemic response by informing decisions about planning, resource allocation, and social distancing policies.
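
A minimal sketch of the evaluation setup described above: fit a simple baseline on one training window of daily confirmed cases, forecast the held-out window, and score the forecast with a squared-error metric. The data are synthetic and the baseline is ordinary linear regression; the paper's SEIR, LLR, NB, segmented Poisson, LSTM, and GBM models are not reproduced here.

```python
# Synthetic train/test forecasting comparison in the spirit of the abstract.
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(400)
cases = 300 + 2.0 * days + rng.normal(0, 40, size=days.size)  # synthetic daily cases

train, test = days < 347, days >= 347        # e.g. train through Dec 31, test Jan-Feb

# Baseline: ordinary least-squares linear trend fitted on the training window.
coef = np.polyfit(days[train], cases[train], deg=1)
pred = np.polyval(coef, days[test])

rmse = np.sqrt(np.mean((cases[test] - pred) ** 2))
print(f"test RMSE of the linear baseline: {rmse:.1f}")
```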

Distributed Processing of Big Data Analysis based on R using SparkR (SparkR을 이용한 R 기반 빅데이터 분석의 분산 처리)

  • Ryu, Woo-Seok
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.1 / pp.161-166 / 2022
  • In this paper, we analyze the problems that occur when performing big data analysis using R as a data analysis tool, and present the usefulness of analyzing data with SparkR, which connects R and Spark to support distributed processing of big data effectively. First, we study the memory allocation problem of R, which occurs when loading large amounts of data and performing operations on them, and the characteristics and programming environment of SparkR. We then compare the execution performance of linear regression analysis performed in each environment. The analysis shows that R can be used for data analysis through SparkR without learning an additional language, and that code written in R can be processed effectively in a distributed manner as the number of nodes in the cluster increases.
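
The paper works in SparkR; as a rough Python analogue of the same pattern, the PySpark sketch below keeps the data in a distributed DataFrame and lets the cluster fit the linear regression, sidestepping the single-process memory limits of a plain R session. Column names and values are made up.

```python
# PySpark stand-in for the SparkR linear-regression workflow described above.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("lm-demo").getOrCreate()

df = spark.createDataFrame(
    [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)], ["x", "y"])

features = VectorAssembler(inputCols=["x"], outputCol="features").transform(df)
model = LinearRegression(featuresCol="features", labelCol="y").fit(features)
print(model.coefficients, model.intercept)

spark.stop()
```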

Thread Block Scheduling for GPGPU based on Fine-Grained Resource Utilization (상세 자원 이용률에 기반한 병렬 가속기용 스레드 블록 스케줄링)

  • Bahn, Hyokyung;Cho, Kyungwoon
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.5 / pp.49-54 / 2022
  • With the recent widespread adoption of general-purpose GPUs (GPGPUs) in cloud systems, maximizing resource utilization through multitasking on GPGPUs has become an important issue. In this article, we show that resource allocation based on classifying workloads as compute-bound or memory-bound is not sufficient with respect to resource utilization, and we present a new thread block scheduling policy for GPGPUs that makes use of the fine-grained resource utilization of each workload. Unlike previous approaches, the proposed policy reduces scheduling overhead by separating profiling from scheduling, and maximizes resource utilization by co-locating workloads with different bottleneck resources. Through simulations under various virtual machine scenarios, we show that the proposed policy improves GPGPU throughput by 130.6% on average and by up to 161.4%.
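
The co-location idea can be illustrated with a toy sketch: each workload carries a fine-grained utilization profile per resource, and workloads are paired so that no single resource is demanded heavily by both members of a pair. The profiles and the pairing rule below are illustrative assumptions, not the proposed policy itself.

```python
# Toy pairing of workloads so that bottleneck resources do not overlap.
profiles = {                       # assumed per-resource utilization, profiled offline
    "A": {"compute": 0.9, "membw": 0.2},
    "B": {"compute": 0.3, "membw": 0.8},
    "C": {"compute": 0.7, "membw": 0.6},
    "D": {"compute": 0.2, "membw": 0.3},
}

def contention(w1, w2):
    """Worst combined demand on any single resource if w1 and w2 are co-located."""
    return max(profiles[w1][r] + profiles[w2][r] for r in profiles[w1])

workloads = list(profiles)
# Enumerate the three ways to split four workloads into two co-located pairs.
pairings = [((workloads[0], w), tuple(x for x in workloads[1:] if x != w))
            for w in workloads[1:]]
best = min(pairings, key=lambda p: max(contention(*p[0]), contention(*p[1])))
print("co-locate:", best)   # pairs whose bottleneck resources differ the most
```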

Block Allocation Method for Efficiently Managing Temporary Files of Hash Joins on SSDs (SSD상에서 해시조인 임시 파일의 효과적인 관리를 위한 블록 할당 방법)

  • Kim, Joontae;Lee, Sangwon
    • KIPS Transactions on Computer and Communication Systems / v.11 no.12 / pp.429-436 / 2022
  • Temporary files are generated when a hash join is performed on tables larger than memory. During the join process, each temporary file is deleted sequentially after its I/O operations are completed. This paper reveals that the fallocate system call and the file-deletion-related trim options significantly impact hash join performance when temporary files are managed on SSDs rather than hard disks. Experiments were conducted on various commercial and research SSDs using PostgreSQL, a representative open-source database. We find that join performance can be improved by up to three to five times compared with the default combination, depending on whether the fallocate and trim options are used for temporary files. In addition, we investigate the write amplification and trim command overhead in the SSD according to the combination of the two options for temporary files.
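
From user space, the two knobs discussed above look roughly like the hypothetical sketch below: a temporary spill file is preallocated with posix_fallocate before writing and deleted afterwards. Whether the deletion triggers TRIM/discard on the SSD is decided by filesystem mount options (for example ext4's discard) or a periodic fstrim, not by the program; the path prefix and size are made up.

```python
# Preallocate-then-delete lifecycle of a temporary spill file (Linux only).
import os
import tempfile

size = 64 * 1024 * 1024                      # 64 MiB temporary spill file (assumed)

fd, path = tempfile.mkstemp(prefix="hashjoin_tmp_")
try:
    os.posix_fallocate(fd, 0, size)          # reserve blocks up front
    os.pwrite(fd, b"x" * 4096, 0)            # ... hash-join partitions would be written here
finally:
    os.close(fd)
    os.unlink(path)                          # deletion; TRIM depends on mount options
```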

Futures Price Prediction based on News Articles using LDA and LSTM (LDA와 LSTM를 응용한 뉴스 기사 기반 선물가격 예측)

  • Jin-Hyeon Joo;Keun-Deok Park
    • Journal of Industrial Convergence / v.21 no.1 / pp.167-173 / 2023
  • Research has been published on predicting future data using regression analysis or artificial intelligence as methods of analyzing economic indicators. In this study, we design a system that predicts prospective futures prices using artificial intelligence on topic-probability data obtained from past news articles through topic modeling. Topic probability distributions for each news article were obtained using Latent Dirichlet Allocation (LDA), which can extract the topics of documents from past news articles via unsupervised learning. The topic probability distributions were then used as the input to a Long Short-Term Memory (LSTM) network, a derivative of Recurrent Neural Networks (RNNs), in order to predict prospective futures prices. The method proposed in this study was able to predict the trend of futures prices, and it should later also be able to predict the trend of prices for derivative products such as options. However, because statistical errors occurred for certain data, further research is required to improve accuracy.
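
A rough sketch of the described pipeline, using scikit-learn and PyTorch as stand-ins: LDA turns each article into a topic-probability vector, and those vectors form the input sequence of a small LSTM that regresses the next futures price. The corpus, prices, and hyperparameters are toy values.

```python
# Toy LDA -> LSTM regression pipeline in the spirit of the abstract.
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

articles = ["oil supply falls", "rates rise sharply", "oil demand grows",
            "central bank holds rates", "supply chain eases"]
prices = np.array([100.0, 98.5, 101.2, 99.8, 100.6])    # aligned futures prices (made up)

# 1) Topic probability distribution per article (unsupervised LDA).
counts = CountVectorizer().fit_transform(articles)
topics = LatentDirichletAllocation(n_components=3, random_state=0).fit_transform(counts)

# 2) LSTM over the topic sequence, predicting the next price.
x = torch.tensor(topics[:-1], dtype=torch.float32).unsqueeze(0)   # shape (1, T, 3)
y = torch.tensor(prices[1:], dtype=torch.float32).view(1, -1, 1)  # next-step targets

lstm = torch.nn.LSTM(input_size=3, hidden_size=8, batch_first=True)
head = torch.nn.Linear(8, 1)
opt = torch.optim.Adam(list(lstm.parameters()) + list(head.parameters()), lr=0.01)

for _ in range(200):                                    # tiny training loop
    out, _ = lstm(x)
    loss = F.mse_loss(head(out), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final training loss:", loss.item())
```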

Low Power Security Architecture for the Internet of Things (사물인터넷을 위한 저전력 보안 아키텍쳐)

  • Yun, Sun-woo;Park, Na-eun;Lee, Il-gu
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.199-201 / 2021
  • The Internet of Things (IoT) is a technology that organically connects people and things without constraints of time and space, using communication network technology and sensors to transmit and receive data in real time. IoT devices, which are used in all industrial fields, have limitations in storage allocation, such as device size, memory capacity, and data transmission performance, so managing power consumption is important for effectively utilizing the limited battery capacity. Prior research improves power efficiency by lightening the security algorithm of the encryption module, at the cost of weakened security. In this study, we propose a low-power security architecture that can utilize high-performance security algorithms in the IoT environment. By executing the security module only when threat detection is required based on inspection results, relatively complex security modules can be used in low-power environments, providing both high security and power efficiency.
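
A hypothetical sketch of the gating idea: a cheap, always-on inspection decides whether the expensive security module runs at all, so the heavy computation (and its power draw) is paid only for suspicious traffic. The thresholds, key, and checks below are illustrative assumptions, not the proposed architecture.

```python
# Cheap inspection gating an expensive security check (all values are assumptions).
import hashlib
import hmac

KEY = b"device-shared-secret"                 # placeholder key for the example

def lightweight_inspection(packet: dict) -> bool:
    """Cheap screening: flag unusually large payloads or unknown senders."""
    return packet["length"] > 512 or packet["src"] not in {"sensor-1", "sensor-2"}

def heavy_security_module(packet: dict) -> bool:
    """Expensive check, executed only on demand: verify the payload MAC."""
    expected = hmac.new(KEY, packet["payload"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, packet["mac"])

def handle(packet: dict) -> str:
    if not lightweight_inspection(packet):
        return "accepted (fast path, security module skipped)"
    return "accepted" if heavy_security_module(packet) else "dropped"

pkt = {"src": "sensor-1", "length": 48, "payload": b"23.5C", "mac": ""}
print(handle(pkt))
```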
