• Title/Summary/Keyword: High-Throughput Computing


Design and implementation of a Shared-Concurrent File System in distributed UNIX environment (분산 UNIX 환경에서 Shared-Concurrent File System의 설계 및 구현)

  • Jang, Si-Ung;Jeong, Gi-Dong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.3 no.3
    • /
    • pp.617-630
    • /
    • 1996
  • In this paper, a shared-concurrent file system (S-CFS) is designed and implemented using conventional disks as a disk array on a workstation cluster that can serve as a small-scale server. Since it is implemented on the UNIX operating system, S-CFS is not only portable and flexible but also efficient in resource usage, because it requires no additional I/O nodes. The results show that on small-scale systems with enough disks, the performance of the concurrent file system on transaction-processing applications is bounded by CPU computing power, while its performance on massive data I/O is bounded by the time required to copy data between buffers. The concurrent file system, implemented on a workstation cluster with 8 disks, achieves a throughput of 388 tps for transaction-processing applications and a bandwidth of 15.8 Mbytes/sec for massive data-processing applications. Moreover, the concurrent file system is designed to enhance the throughput of applications requiring high-performance I/O by letting users control the parallelism of the file system.

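The disk-array layout behind a concurrent file system like the one above can be illustrated with round-robin striping: consecutive stripe units of a file are placed on successive disks, so a large request fans out across the array. This is a minimal sketch; the stripe-unit size and the mapping function are illustrative assumptions, not taken from the paper.

```python
STRIPE_UNIT = 4096  # bytes per stripe unit (assumed)

def stripe_layout(offset, length, n_disks):
    """Map a byte range of a file to (disk, disk_block, size) chunks
    under round-robin striping across n_disks."""
    chunks = []
    end = offset + length
    while offset < end:
        unit = offset // STRIPE_UNIT           # global stripe-unit index
        disk = unit % n_disks                  # round-robin disk choice
        block = unit // n_disks                # block index on that disk
        in_unit = offset % STRIPE_UNIT
        size = min(STRIPE_UNIT - in_unit, end - offset)
        chunks.append((disk, block, size))
        offset += size
    return chunks

# A 10 KB read starting at byte 0 of an 8-disk array touches 3 disks:
print(stripe_layout(0, 10240, 8))  # [(0, 0, 4096), (1, 0, 4096), (2, 0, 2048)]
```

Because adjacent units land on different disks, the three chunks above can be serviced in parallel, which is what lets aggregate bandwidth scale with the number of disks.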

Semantic Computing-based Dynamic Job Scheduling Model and Simulation (시멘틱 컴퓨팅 기반의 동적 작업 스케줄링 모델 및 시뮬레이션)

  • Noh, Chang-Hyeon;Jang, Sung-Ho;Kim, Tae-Young;Lee, Jong-Sik
    • Journal of the Korea Society for Simulation
    • /
    • v.18 no.2
    • /
    • pp.29-38
    • /
    • 2009
  • In a computing environment with heterogeneous resources, a job scheduling model is necessary for effective resource utilization and high-speed data processing, and it must cope with dynamic changes in resource conditions. There has been much research on resource estimation methods and heuristic algorithms for distributing and allocating jobs to heterogeneous resources. However, existing approaches are weak in system compatibility and scalability because they do not support a standard language, and they cannot process jobs effectively across the variety of computing situations in which resource conditions change dynamically in real time. To solve these problems, this paper proposes a semantic computing-based dynamic job scheduling model that defines knowledge-based rules for scheduling methods adaptable to changes in resource conditions and allocates each job to the best-suited resource through inference. This paper also constructs a resource ontology to manage information about heterogeneous resources easily, using OWL, the standard ontology language established by the W3C. Experimental results show that the proposed scheduling model outperforms existing scheduling models in terms of throughput, job loss, and turnaround time.
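The flavor of knowledge-based matching the abstract describes can be sketched as a rule chain: filter resources by hard constraints, then rank survivors by a preference rule. The rule set and resource fields below are illustrative assumptions, not the paper's OWL ontology.

```python
def schedule(job, resources):
    """Return the best-suited resource for a job, or None if no
    resource satisfies the hard constraints."""
    # Rule 1 (hard): resource must be up and have enough free memory.
    candidates = [r for r in resources
                  if r["up"] and r["free_mem"] >= job["mem"]]
    if not candidates:
        return None
    # Rule 2 (preference): among candidates, pick the least-loaded CPU.
    return min(candidates, key=lambda r: r["load"])

resources = [
    {"name": "r1", "up": True,  "free_mem": 4,  "load": 0.9},
    {"name": "r2", "up": True,  "free_mem": 8,  "load": 0.2},
    {"name": "r3", "up": False, "free_mem": 16, "load": 0.0},
]
print(schedule({"mem": 6}, resources)["name"])  # r2
```

An ontology-driven scheduler expresses these rules declaratively and lets a reasoner do the matching, so new resource types or rules can be added without touching the scheduling code.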

Bottleneck-Free Architecture of A High Speed Printer (고속프린터의 병목 없는 구조)

  • Lee, Kang-Woo
    • The KIPS Transactions:PartA
    • /
    • v.11A no.3
    • /
    • pp.115-128
    • /
    • 2004
  • The proliferation of information technology and business automation has become the mainstay of the growth of the world printer market. In particular, demand for high-speed printers has increased drastically as the need for printing service stations in networked computing environments grows. However, outstanding research activities are seldom reported; current efforts focus only on print quality and on application programs for low- or medium-speed printers. This motivates research on high-speed printers. In this paper, we first define the architectural components of high-speed printers so that the performance requirements can be satisfied. The individual impact of each module on overall throughput is then analyzed in detail. Based on these results, we built a well-tuned system architecture that delivers up to 163.52 PPM. Another contribution of this paper is that it is one of the first articles to deal with the architecture and performance of high-speed printers.

Combining Support Vector Machine Recursive Feature Elimination and Intensity-dependent Normalization for Gene Selection in RNAseq (RNAseq 빅데이터에서 유전자 선택을 위한 밀집도-의존 정규화 기반의 서포트-벡터 머신 병합법)

  • Kim, Chayoung
    • Journal of Internet Computing and Services
    • /
    • v.18 no.5
    • /
    • pp.47-53
    • /
    • 2017
  • In the past few years, high-throughput sequencing, big-data generation, cloud computing, and computational biology have been revolutionary. RNA sequencing is emerging as an attractive alternative to DNA microarrays, yet methods for constructing a Gene Regulatory Network (GRN) from RNA-Seq are extremely scarce and urgently required. Because the GRN draws substantial observations from genomics and bioinformatics, an elementary requirement of the GRN has been to maximize the number of distinguishable genes. Although RNA sequencing techniques generate huge amounts of data, few computational methods exploit it. We therefore suggest a novel gene selection algorithm combining Support Vector Machines with intensity-dependent normalization, which uses the log differential expression ratio in RNA-Seq. It is an extended variation of the support vector machine recursive feature elimination (SVM-RFE) algorithm, applied to subsets of big data such as NCBI-GEO. The proposed algorithm was compared with an existing one that uses gene expression profiling on DNA microarrays. The proposed algorithm proves more convenient and faster than the previous one, because it uses only functions available in an R package, and it improves classification accuracy based on gene ontology as well as time consumption on big data. The comparison was performed based on the number of genes selected in RNA-Seq big data.
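The core SVM-RFE loop the abstract extends can be sketched in a few lines: fit a linear classifier, rank features by the magnitude of their weights, drop the weakest, and repeat. To keep the sketch dependency-free, a plain perceptron stands in for the linear SVM; the elimination loop itself is the SVM-RFE idea, and the data below is a hypothetical toy example.

```python
import numpy as np

def fit_linear(X, y, epochs=10):
    """Perceptron weights as a stand-in for linear-SVM weights."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):       # labels y are in {-1, +1}
            if yi * (xi @ w) <= 0:     # misclassified: nudge weights
                w += yi * xi
    return w

def rfe(X, y, n_keep):
    """Recursive feature elimination: return surviving feature indices."""
    alive = list(range(X.shape[1]))
    while len(alive) > n_keep:
        w = fit_linear(X[:, alive], y)
        alive.pop(int(np.argmin(np.abs(w))))  # drop least informative
    return alive

# Feature 0 equals the label; features 1 and 2 are uninformative.
X = np.array([[1., 1, 1], [-1, 1, -1], [1, 1, -1], [-1, 1, 1]])
y = np.array([1., -1, 1, -1])
print(rfe(X, y, 1))  # [0]
```

In the genomics setting each column is a gene, so the surviving indices are the selected genes; the paper's contribution is to feed intensity-dependent-normalized RNA-Seq ratios into this kind of loop rather than raw microarray intensities.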

A High Speed 2D-DWT Parallel Hardware Architecture Using the Lifting Scheme (Lifting scheme을 이용한 고속 병렬 2D-DWT 하드웨어 구조)

  • 김종욱;정정화
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.40 no.7
    • /
    • pp.518-525
    • /
    • 2003
  • In this paper, we present a fast hardware architecture for a parallel 2-dimensional discrete wavelet transform (DWT) based on the lifting-scheme DWT framework. The conventional 2-D DWT has long initial and total latencies before the final 2-D transformed coefficients are obtained, because it uses the entire input data set and transforms it sequentially. The proposed architecture increases parallel performance in the row-directional transform using a new data-splitting method, and a hardware resource-sharing architecture improves the total throughput of the 2-D DWT. Finally, we propose a hardware-resource schedule optimized for the proposed architecture and splitting method. With the proposed architecture, parallel computing efficiency increases, and the initial and total latencies are improved by 50% and 66%, respectively.
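The lifting scheme the paper builds on factors a wavelet filter into in-place predict and update steps. As a minimal sketch (using the Haar wavelet rather than whatever filter bank the hardware implements), a 1-D lifting step and its exact inverse look like this; a 2-D transform applies the same step first along rows, then along columns.

```python
def haar_lift(x):
    """Split x (even length) into approximation s and detail d
    via lifting: predict the odd samples, then update the evens."""
    even, odd = x[0::2], x[1::2]
    d = [o - e for e, o in zip(even, odd)]        # predict step
    s = [e + di / 2 for e, di in zip(even, d)]    # update step
    return s, d

def haar_unlift(s, d):
    """Invert the lifting steps to recover the original signal."""
    even = [si - di / 2 for si, di in zip(s, d)]  # undo update
    odd = [e + di for e, di in zip(even, d)]      # undo predict
    return [v for pair in zip(even, odd) for v in pair]

x = [4, 6, 10, 12, 8, 6, 5, 5]
s, d = haar_lift(x)
print(s, d)  # [5.0, 11.0, 7.0, 5.0] [2, 2, -2, 0]
assert haar_unlift(s, d) == x
```

Because each lifting step only combines neighboring samples, independent blocks of the signal can be transformed concurrently, which is exactly the property the data-splitting method exploits for the row-directional pass.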

UDP with Flow Control for Myrinet (Myrinet을 위한 흐름 제어 기능을 갖는 UDP)

  • Kim, Jin-Ug;Jin, Hyun-Wook;Yoo, Hyuck
    • Journal of KIISE:Information Networking
    • /
    • v.30 no.5
    • /
    • pp.649-655
    • /
    • 2003
  • Network-based computing such as cluster computing requires a reliable high-speed transport protocol. TCP is the representative reliable transport protocol on the Internet, implementing many mechanisms, such as flow control, congestion control, and retransmission, for reliable packet delivery. This paper, however, shows that Myrinet does not incur any packet losses caused by network congestion; moreover, we ascertain that Myrinet supports reliable, in-order packet delivery. Consequently, most of the reliability routines implemented in TCP add unnecessary overhead on Myrinet. In this paper, we show that reliability can be attained on Myrinet with flow control alone, and we propose a new reliable UDP-based protocol named RUM (Reliable UDP on Myrinet) that performs flow control. As a result, RUM achieves 45% higher throughput than TCP and shows one-way latency similar to UDP.
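Flow control of the kind RUM adds to UDP can be sketched as a sliding window: the sender keeps at most `window` unacknowledged packets in flight, and each acknowledgment releases credit for the next send. The simulation below is an illustrative sketch, not RUM's actual protocol; since Myrinet delivers reliably and in order, the oldest in-flight packet can simply be acknowledged with no retransmission path.

```python
from collections import deque

def send_with_window(packets, window, receiver):
    """Deliver packets with at most `window` unacknowledged at once."""
    in_flight, acked, next_seq = deque(), [], 0
    while next_seq < len(packets) or in_flight:
        # Fill the window: send while credit remains.
        while next_seq < len(packets) and len(in_flight) < window:
            receiver.append(packets[next_seq])  # "transmit" the packet
            in_flight.append(next_seq)
            next_seq += 1
        # In-order, lossless link: ACK the oldest in-flight packet.
        acked.append(in_flight.popleft())
    return acked

rx = []
order = send_with_window(list("abcde"), window=2, receiver=rx)
print(rx, order)  # ['a', 'b', 'c', 'd', 'e'] [0, 1, 2, 3, 4]
```

The window's only job here is to keep the sender from overrunning the receiver's buffers, which is the one TCP mechanism the paper argues Myrinet still needs.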

An Advanced Coding for Video Streaming System: Hardware and Software Video Coding

  • Le, Tuan Thanh;Ryu, Eun-Seok
    • Journal of Internet Computing and Services
    • /
    • v.21 no.4
    • /
    • pp.51-57
    • /
    • 2020
  • Currently, High Efficiency Video Coding (HEVC) has become the most promising video coding technology. However, implementing HEVC in video streaming systems is restricted by factors such as cost, design complexity, and compatibility with existing systems. While HEVC deployment across various systems is still being worked out, H.264/AVC remains one of the best choices for current video streaming systems. This paper presents an adaptive method for manipulating video streams using video coding on an integrated circuit (IC) designed with a private network processor. The proposed system transfers multimedia data from cameras or other video sources to clients. A series of video and audio packages from the video source is forwarded to the designed IC, called the Tx transmitter, via an HDMI cable. The Tx processes the input data into a real-time stream using its own protocol, modeled on the Real-time Transport Protocol for both video and audio, and transmits the output packages to the video client through the Internet. The client includes hardware or software video/audio decoders to decode the received packages. The Tx encodes video data with H.264/AVC or HEVC, and its audio coding is PCM format. By handling the message exchanges between the Tx and the client, a transmission session can be set up quickly. Results show that a throughput of about 50 Mbps can be achieved with approximately 80 ms of latency.
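The RTP-style packetization the Tx performs can be sketched with a 12-byte header (version, payload type, sequence number, timestamp, SSRC) followed by the encoded payload. The field layout below follows the standard RTP header from RFC 3550; the values and the assumption that the Tx's private protocol resembles it are illustrative.

```python
import struct

def pack_rtp(seq, timestamp, ssrc, payload, pt=96):
    """Build a minimal RTP packet: fixed 12-byte header + payload."""
    b0 = 2 << 6  # version 2, no padding, no extension, no CSRCs
    header = struct.pack("!BBHII", b0, pt, seq, timestamp, ssrc)
    return header + payload

def unpack_rtp(packet):
    """Parse the fixed header fields back out of a packet."""
    b0, pt, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {"version": b0 >> 6, "pt": pt, "seq": seq,
            "ts": ts, "ssrc": ssrc, "payload": packet[12:]}

pkt = pack_rtp(seq=1, timestamp=90000, ssrc=0xCAFE, payload=b"frame")
print(unpack_rtp(pkt)["seq"], unpack_rtp(pkt)["payload"])  # 1 b'frame'
```

The sequence number lets the receiver detect loss and reordering, and the timestamp (in the media clock, e.g. 90 kHz for video) lets the decoder schedule playout, which is what keeps end-to-end latency bounded.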

Application of Adaptive Neuro-Fuzzy Inference System for Interference Management in Heterogeneous Network

  • Palanisamy, Padmaloshani;Sivaraj, Nirmala
    • ETRI Journal
    • /
    • v.40 no.3
    • /
    • pp.318-329
    • /
    • 2018
  • Femtocell (FC) technology is envisaged as a cost-effective approach to attaining better indoor coverage for mobile voice and data services. Deploying FCs over a macrocell forms a heterogeneous network. In urban areas, the key factor limiting successful FC deployment is inter-cell interference (ICI), which severely affects the performance of victim users. Autonomous FC transmission power setting is one straightforward way of coordinating ICI in the downlink. Intelligent control using soft computing techniques has not yet been well explored for wireless networks. In this work, an autonomous FC transmission power setting strategy using an Adaptive Neuro-Fuzzy Inference System is proposed. The main advantages of the proposed method are zero signaling overhead, reduced computational complexity, and minimal delay in setting the power of the FC base station, because only the periodic channel measurement reports fed back by the user equipment are needed. System-level simulation results validate the effectiveness of the proposed method: it provides much better throughput, even under high-interference scenarios, and cell-edge users can be prevented from going into outage.
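The fuzzy half of an ANFIS controller like the one above can be sketched as a zero-order Sugeno system: fuzzify a reported channel quality through triangular membership functions, fire a small rule base, and defuzzify by weighted average into a power adjustment. The membership shapes, rule outputs, and the normalized channel-quality input are illustrative assumptions, not the paper's trained model.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def power_adjust_db(cqi):
    """Map channel quality (0 = worst, 1 = best) to a transmit-power
    adjustment in dB via a weighted average of rule consequents."""
    rules = [
        (tri(cqi, -0.5, 0.0, 0.5), +3.0),  # poor channel -> raise power
        (tri(cqi,  0.0, 0.5, 1.0),  0.0),  # fair channel -> hold
        (tri(cqi,  0.5, 1.0, 1.5), -3.0),  # good channel -> lower power
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den

print(power_adjust_db(0.1))  # mostly "poor" -> positive adjustment
```

The "neuro" part of ANFIS would tune the membership parameters and consequents from the channel measurement reports; the inference pass itself stays this cheap, which is where the low-complexity, zero-signaling claim comes from.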

iHaplor: A Hybrid Method for Haplotype Reconstruction

  • Jung, Ho-Youl;Heo, Jee-Yeon;Cho, Hye-Yeung;Ryu, Gil-Mi;Lee, Ju-Young;Koh, In-Song;Kimm, Ku-Chan;Oh, Berm-Seok
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2003.10a
    • /
    • pp.221-228
    • /
    • 2003
  • This paper presents a novel method that can identify an individual's haplotypes from given genotypes. Because of the limitations of conventional single-locus analysis, haplotypes have gained increasing attention in the mapping of complex-disease genes. Conventionally, there are two approaches to resolving an individual's haplotypes. One is molecular haplotyping, which has many potential limitations in cost and convenience. The other is in-silico haplotyping, which phases haplotypes from diploid genotyped populations and is a cost-effective, high-throughput method. In-silico haplotyping is divided into two sub-categories: statistical and computational methods. The former computes the frequencies of the common haplotypes and then resolves the individual's haplotypes; the latter resolves the individual's haplotypes directly, using the perfect phylogeny model first proposed by Dan Gusfield [7]. Our method combines the two approaches in order to improve both accuracy and running time: individuals' haplotypes are resolved by maximum likelihood estimation (MLE) in the process of computing the frequencies of the common haplotypes.

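The statistical half of this kind of phasing can be sketched with a tiny EM loop: enumerate the haplotype pairs compatible with each unphased genotype, weight them by current haplotype frequencies (E-step), re-estimate frequencies from the weights (M-step), and finally phase each individual with the maximum-likelihood pair. This two-SNP toy is an illustrative sketch of the general approach, not iHaplor itself.

```python
from itertools import product

def pairs(genotype):
    """All (h1, h2) haplotype pairs consistent with a genotype given
    as per-site alt-allele counts (0, 1, or 2)."""
    out = []
    for h1 in product((0, 1), repeat=len(genotype)):
        h2 = tuple(g - a for g, a in zip(genotype, h1))
        if all(b in (0, 1) for b in h2) and h1 <= h2:
            out.append((h1, h2))
    return out

def em_phase(genotypes, iters=30):
    """EM estimate of haplotype frequencies, then ML phasing."""
    haps = sorted({h for g in genotypes for p in pairs(g) for h in p})
    freq = {h: 1.0 / len(haps) for h in haps}
    for _ in range(iters):
        count = {h: 0.0 for h in haps}
        for g in genotypes:                         # E-step
            ps = pairs(g)
            w = [freq[a] * freq[b] * (1 if a == b else 2) for a, b in ps]
            tot = sum(w)
            for (a, b), wi in zip(ps, w):
                count[a] += wi / tot
                count[b] += wi / tot
        total = sum(count.values())                 # M-step
        freq = {h: c / total for h, c in count.items()}
    return {g: max(pairs(g), key=lambda p: freq[p[0]] * freq[p[1]])
            for g in set(genotypes)}

# The double heterozygote (1, 1) is ambiguous on its own; the
# unambiguous individuals pull the frequencies toward resolving it.
phased = em_phase([(2, 2), (2, 2), (0, 0), (1, 1)])
print(phased[(1, 1)])  # ((0, 0), (1, 1))
```

Enumerating all compatible pairs grows exponentially with the number of heterozygous sites, which is why practical methods combine frequency estimation with combinatorial constraints such as the perfect phylogeny model.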

Fast Retransmission Scheme for Overcoming Hidden Node Problem in IEEE 802.11 Networks

  • Jeon, Jung-Hwi;Kim, Chul-Min;Lee, Ki-Seok;Kim, Chee-Ha
    • Journal of Computing Science and Engineering
    • /
    • v.5 no.4
    • /
    • pp.324-330
    • /
    • 2011
  • To avoid collisions, IEEE 802.11 medium access control (MAC) uses predetermined inter-frame spaces and a random back-off process. However, the retransmission strategy of IEEE 802.11 MAC results in considerable wasted time. The hidden-node problem is well known in wireless networks, and it aggravates the time wasted on retransmission. Many collision prevention and recovery approaches have been proposed to solve the hidden-node problem, but all of them carry complex control overhead. In this paper, we propose a fast retransmission scheme as a recovery approach. The proposed scheme identifies collisions caused by hidden nodes and then allows retransmission without collision. Analysis and simulations show that the proposed scheme achieves greater throughput than request-to-send/clear-to-send (RTS/CTS) and a shorter average waiting time.