• Title/Summary/Keyword: data center servers

Search Results: 59

Bioinformatics Resources of the Korean Bioinformation Center (KOBIC)

  • Lee, Byung-Wook;Chu, In-Sun;Kim, Nam-Shin;Lee, Jin-Hyuk;Kim, Seon-Yong;Kim, Wan-Kyu;Lee, Sang-Hyuk
    • Genomics & Informatics
    • /
    • v.8 no.4
    • /
    • pp.165-169
    • /
    • 2010
  • The Korean Bioinformation Center (KOBIC) is a national bioinformatics research center in Korea. We have developed many bioinformatics algorithms and applications to facilitate the biological interpretation of OMICS data. Here we present an introduction to the major bioinformatics resources, databases and tools, developed at KOBIC. These resources are classified into three main fields: genome, proteome, and literature. Among the genomic resources, we constructed several pipelines for next-generation sequencing (NGS) data processing and developed analysis algorithms and web-based database servers including miRGator, ESTpass, and CleanEST. We also built integrated databases and servers for microarray expression data, such as MDCDP. For proteome data, the VnD database and the WDAC, Localizome, and CHARMM_HM web servers are available for various purposes. In the literature field, we constructed the IntoPub server and the Patome database. We continue to construct and maintain this bioinformatics infrastructure and to develop algorithms.

A new model and testing verification for evaluating the carbon efficiency of server

  • Liang Guo;Yue Wang;Yixing Zhang;Caihong Zhou;Kexin Xu;Shaopeng Wang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.10
    • /
    • pp.2682-2700
    • /
    • 2023
  • To cope with the risks of climate change and promote the realization of carbon peaking and carbon neutrality, this paper first comprehensively considers the policy background, technical trends, and carbon reduction paths of energy conservation and emission reduction in the data center server industry. Second, we propose a computing-power carbon-efficiency metric for data center servers and construct the carbon emission per performance of server (CEPS) model. Using this model, the paper selects mainstream data center servers for testing. The results show that as server performance improves, total carbon emissions rise; however, performance improves faster than carbon emissions grow, so the relative carbon emission per unit of computing power shows a continuously decreasing trend. There are some differences between products, and the carbon emission per unit of performance is calculated to be 20-60 kg when the service life of the server is five years.
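As a rough illustration of the quantity the abstract describes, the sketch below computes a carbon-emission-per-performance ratio. The function name, units, and example figures are assumptions for illustration, not the paper's CEPS definition or measured values.

```python
def emission_per_performance(total_emissions_kg, performance_score):
    """Carbon emitted per unit of computing performance over the server's
    service life (units and benchmark choice are assumptions)."""
    return total_emissions_kg / performance_score

# Illustrative trend only: if performance grows faster than emissions,
# the per-unit figure falls even though total emissions rise.
older = emission_per_performance(total_emissions_kg=4000, performance_score=100)
newer = emission_per_performance(total_emissions_kg=6000, performance_score=300)
assert newer < older
```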

Design of Data Center Environmental Monitoring System Based On Lower Hardware Cost

  • Nkenyereye, Lionel;Jang, Jongwook
    • Journal of Multimedia Information System
    • /
    • v.3 no.3
    • /
    • pp.63-68
    • /
    • 2016
  • Environmental downtime produces a significant cost to organizations and makes them unable to do business, because what happens in the data center affects everyone. In addition, the amount of electrical energy consumed by data centers increases with the amount of computing power installed. Installing physical IT and facility monitoring for environmental concerns, such as temperature, humidity, power, flood, smoke, air flow, and room entry, is the most proactive way to avoid the unnecessary costs of expensive hardware replacement or unplanned downtime and to decrease the energy consumed by servers. In this paper, we present a remote data center monitoring system implemented using open-source hardware platforms: Arduino, Raspberry Pi, and Gobetwino. Sensed data displayed through the Arduino, such as temperature, humidity, and distance, are transferred via Gobetwino to the nearest host server every time an object strikes another object or a person comes through the entrance. The Raspberry Pi records the sensed data at the remote location. Collecting temperature and humidity data allows monitoring of server health and generates alerts when things start to go wrong. When the temperature hits 50°C, the supervisor at remote headquarters receives an SMS and can then take appropriate action to reduce electrical costs and preserve the functionality of servers in the data center.
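The alerting behaviour described above (an SMS when the temperature hits 50°C) can be sketched roughly as follows. This is a minimal illustration of the threshold check on the Raspberry Pi side, not the authors' code; the `send_sms` hook and the logging format are placeholder assumptions.

```python
TEMP_ALERT_C = 50  # threshold cited in the abstract

def record_and_alert(temperature_c, humidity_pct, send_sms):
    """Log a sensor reading and notify the supervisor by SMS when the
    temperature threshold is crossed (SMS transport is an assumption)."""
    print(f"temp={temperature_c}C humidity={humidity_pct}%")
    if temperature_c >= TEMP_ALERT_C:
        send_sms(f"Data center alert: temperature reached {temperature_c}C")
```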

Energy-aware Virtual Resource Mapping Algorithm in Wireless Data Center

  • Luo, Juan;Fu, Shan;Wu, Di
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.3
    • /
    • pp.819-837
    • /
    • 2014
  • Data centers, which implement cloud services, face rapid growth in energy consumption and low energy efficiency. 60 GHz wireless communication technology, as a new option for data centers, provides a feasible approach to alleviating these problems. Aiming at energy optimization in 60 GHz wireless data centers (WDCs), we investigate virtualization technology to assign virtual resources to a minimum number of servers and turn off the remaining servers or adjust them to a low-power state. Based on a comprehensive analysis of wireless data centers, we first model the virtual network and the physical network in WDCs, and then propose the Virtual Resource Mapping Packing Algorithm (VRMPA) to solve the energy management problem. In VRMPA, we adopt a packing algorithm and sort the physical resources only once, which improves the efficiency of virtual resource allocation. Simulation results show that, under the condition of guaranteeing network load, VRMPA achieves a better virtual request acceptance rate and higher energy utilization.
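To illustrate the consolidation idea, packing virtual resource demands onto as few servers as possible so the rest can be powered down, here is a minimal first-fit-decreasing sketch. It is a generic packing heuristic that sorts requests once, not the paper's actual VRMPA, and the single-dimensional capacity is an assumption.

```python
def pack_onto_servers(demands, server_capacity):
    """Place virtual-resource demands onto the fewest servers possible
    (first-fit decreasing); returns remaining capacity per powered-on server."""
    servers = []  # free capacity of each powered-on server
    for demand in sorted(demands, reverse=True):
        for i, free in enumerate(servers):
            if demand <= free:
                servers[i] = free - demand
                break
        else:
            servers.append(server_capacity - demand)  # power on a new server
    return servers

# Example: six requests consolidated onto two servers; the rest stay off.
print(len(pack_onto_servers([0.5, 0.4, 0.3, 0.3, 0.2, 0.2], server_capacity=1.0)))
```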

Reliability Modeling of Direct Current Power Feeding Systems for Green Data Center

  • Choi, Jung Yul
    • Journal of Electrical Engineering and Technology
    • /
    • v.8 no.4
    • /
    • pp.704-711
    • /
    • 2013
  • A data center is an information hub and resource for the information-centric society. Since a data center houses hundreds to tens of thousands of servers, networking and communication equipment, and supporting systems, energy saving is one of the hottest issues for the green data center. Among several solutions for the green data center, this paper introduces the higher-voltage direct current (DC) power feeding system. In contrast to the legacy alternating current (AC) power feeding system equipped with an Uninterruptible Power Supply (UPS), the higher-voltage DC power feeding system is reported to be a more energy-efficient and reliable solution for the green data center thanks to fewer AC/DC and DC/AC conversions. The main focus of this paper is the reliability required for continuous operation of the higher-voltage DC power feeding system. We present different configurations of the power feeding system according to the required level of reliability, analyze the reliability of the power feeding systems based on an M/M/1/N+1/N+1 queueing model, and describe the operation of the power feeding system in case of failure.
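For intuition about how such a reliability model works, the sketch below solves the steady-state probabilities of a finite-population birth-death (machine-repair) system with a single repair facility, the general family to which an M/M/1/N+1/N+1 model belongs. The rates and the availability criterion are illustrative assumptions, not the paper's parameters or exact formulation.

```python
def repairable_units_steady_state(n_units, fail_rate, repair_rate):
    """Steady-state distribution of the number of failed units in a
    finite-population birth-death model with one repair facility.
    State k = number of failed units (0..n_units)."""
    terms = [1.0]
    for k in range(1, n_units + 1):
        # from k-1 failed to k failed: (n_units - (k-1)) units remain able to fail
        terms.append(terms[-1] * (n_units - (k - 1)) * fail_rate / repair_rate)
    total = sum(terms)
    return [t / total for t in terms]

# Example: four redundant power modules; the system is "up" unless all fail.
probs = repairable_units_steady_state(n_units=4, fail_rate=1e-5, repair_rate=1e-2)
availability = 1.0 - probs[-1]
```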

Towards Carbon-Neutralization: Deep Learning-Based Server Management Method for Efficient Energy Operation in Data Centers (탄소중립을 향하여: 데이터 센터에서의 효율적인 에너지 운영을 위한 딥러닝 기반 서버 관리 방안)

  • Sang-Gyun Ma;Jaehyun Park;Yeong-Seok Seo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.4
    • /
    • pp.149-158
    • /
    • 2023
  • As data utilization becomes more important, the importance of data centers is also increasing. However, the data center poses environmental and economic problems because it is a massive power-consuming facility that runs 24 hours a day. Recently, studies using deep learning techniques to reduce the power used in data centers or servers, or to predict their traffic, have been conducted from various perspectives. However, the traffic volume processed by servers fluctuates irregularly, which makes servers difficult to manage, and many studies on dynamic server management techniques are still required. Therefore, in this paper, we propose a dynamic server management technique based on Long Short-Term Memory (LSTM), which is robust for time series prediction. The proposed model allows servers to be managed more reliably and efficiently in the field than before and reduces the power used by servers more effectively. To verify the proposed model, we collect transmission and reception traffic data from six of Wikipedia's data centers and then analyze the relationships among the traffic data using statistics-based analysis. Experimental results show that the proposed model is helpful for running servers reliably and efficiently.
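As a minimal sketch of the kind of time-series predictor the abstract describes, the PyTorch model below predicts the next traffic value from a window of past values. The framework choice, layer sizes, and single-feature input are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class TrafficLSTM(nn.Module):
    """Predict the next traffic value from a window of past observations."""
    def __init__(self, n_features=1, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                 # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict from the last hidden state

# Example: predict the next step from 24 past traffic readings.
model = TrafficLSTM()
window = torch.randn(8, 24, 1)            # batch of 8 dummy windows
next_value = model(window)                # shape: (8, 1)
```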

A Study on Air-distribution method for the Thermal Environmental Control in the Data Center (데이터센터의 합리적인 환경제어를 위한 공기분배 시스템에 대한 연구)

  • Cho, Jin-Kyun;Cha, Ji-Hyoung;Hong, Min-Ho;Yeon, Chang-Kun
    • Proceedings of the SAREK Conference
    • /
    • 2008.11a
    • /
    • pp.487-492
    • /
    • 2008
  • The cooling of data centers has emerged as a significant challenge as the density of IT servers increases. Server installations, along with the shrinking physical size of servers and storage systems, have resulted in high power density and high heat density. The introduction of high-density enclosures into a data center creates the potential for "hot spots" within the room that the cooling system may not be able to address, since traditional designs assume relatively uniform cooling patterns within a data center. The cooling system for a data center consists of a CRAC or CRAH unit and the associated air distribution system. It is the configuration of the distribution system that primarily distinguishes the different types of data center cooling systems, and this is the main subject of this paper.


Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • The data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of these data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some elements of the facility, it may affect not only the relevant equipment but also other connected equipment and cause enormous damage. In particular, IT facilities fail irregularly because of their interdependence, and the cause is difficult to identify. Previous studies on failure prediction in the data center predicted failures by treating a single server as a single state, without assuming that devices interact. Therefore, in this study, data center failures are classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), with a focus on analyzing complex failures occurring within the server. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center facility construction, various solutions are already being developed. On the other hand, the cause of a failure occurring inside a server is difficult to determine, and adequate prevention has not yet been achieved, in part because server failures do not occur singly: a failure may cause other server failures or be triggered by failures on other servers. In other words, while existing studies analyzed failures on the assumption of a single server that does not affect other servers, this study assumes that failures influence one another across servers. In order to define the complex failure situation in the data center, failure history data for each piece of equipment in the data center was used. Four major failures are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device are sorted in chronological order, and when a failure occurs on a specific device, any failure on another device within 5 minutes of that occurrence is defined as occurring simultaneously. After configuring sequences for the devices that failed at the same time, 5 devices that frequently fail simultaneously within the configured sequences were selected, and the cases where the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is time series data with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from the previous state. In addition, unlike the single-server case, the Hierarchical Attention Network deep learning model structure was used in consideration of the fact that the level of multiple failures differs for each server; this method increases prediction accuracy by giving more weight to servers whose impact on the failure is greater. The study began with defining the types of failure and selecting the analysis targets. In the first experiment, the same collected data was treated as both a single-server state and a multiple-server state, and the two were compared and analyzed. The second experiment improved prediction accuracy for the complex-server case by optimizing each server's threshold. In the first experiment, under the single-server assumption, three of the five servers were predicted not to have failed even though failures actually occurred; under the multiple-server assumption, all five servers were correctly predicted to have failed. The experiment thus supports the hypothesis that servers affect one another. Overall, the results confirm that prediction performance is superior when multiple servers are assumed rather than a single server. In particular, applying the Hierarchical Attention Network, which assumes that each server's influence differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study showed that failures whose causes are difficult to determine can be predicted from historical data, and it presents a model that can predict failures occurring on servers in data centers. It is expected that failures can be prevented in advance using the results of this study.
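The 5-minute co-occurrence rule described in the abstract can be sketched as a simple grouping pass over chronologically sorted failure events. The data layout of (timestamp, device) tuples and the anchoring of the window at the first event in a group are assumptions for illustration, not the authors' preprocessing code.

```python
from datetime import datetime, timedelta

def group_simultaneous_failures(events, window=timedelta(minutes=5)):
    """Group failure events whose timestamps fall within `window` of the
    first event in the current group. `events` must be sorted by time."""
    groups, current = [], []
    for timestamp, device in events:
        if current and timestamp - current[0][0] > window:
            groups.append(current)
            current = []
        current.append((timestamp, device))
    if current:
        groups.append(current)
    return groups

# Example: two failures 3 minutes apart count as simultaneous.
events = [(datetime(2020, 1, 1, 9, 0), "server-A"),
          (datetime(2020, 1, 1, 9, 3), "db-B"),
          (datetime(2020, 1, 1, 10, 0), "network-C")]
print(len(group_simultaneous_failures(events)))  # 2 groups
```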

OCP Cold Storage Test-bed (OCP Cold Storage 테스트베드)

  • Lee, Jaemyoun;Kang, Kyungtae
    • KIISE Transactions on Computing Practices
    • /
    • v.22 no.3
    • /
    • pp.151-156
    • /
    • 2016
  • Cloud computing systems require a huge number of storage servers, with growing implications for power bills, carbon emissions, and the logistics of data centers. These considerations have motivated researchers to improve the energy efficiency of storage servers. Most servers use a lot of power irrespective of the amount of computing they are doing, and one important goal is to redesign servers to be power-proportional. However, research on large-scale storage systems is hampered by their cost. It is therefore desirable to develop a scalable test-bed for evaluating the power consumption of large-scale storage systems. We are building on open-source projects to construct a test-bed which will contribute to the assessment of power consumption in tiered storage systems. Integrating a cloud application platform can easily extend the proposed test-bed, laying a foundation for the design and evaluation of low-power storage servers.
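Power-proportionality, the goal mentioned above, is often summarized by how much of its peak power a server draws while idle, a figure a test-bed like this could report. The metric and numbers below are one common way to express it and are illustrative assumptions, not the paper's measurement method.

```python
def idle_to_peak_ratio(idle_watts, peak_watts):
    """Fraction of peak power drawn while idle; a perfectly
    power-proportional server would score 0.0."""
    return idle_watts / peak_watts

# Example (illustrative numbers): a server idling at 120 W with a 200 W peak
# still draws 60% of its maximum power while doing no useful work.
print(idle_to_peak_ratio(idle_watts=120, peak_watts=200))  # 0.6
```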

Design and Implementation of Network Management System Based on Mobile Sink Networks (모바일 싱크 네트워크를 적용한 망 관리 시스템의 설계 및 구현)

  • Kim, Dong-Ok
    • Journal of The Institute of Information and Telecommunication Facilities Engineering
    • /
    • v.8 no.4
    • /
    • pp.216-222
    • /
    • 2009
  • This paper proposes an integrated mobile sink network management system that can monitor and control various kinds of wireless LAN access points from multiple vendors, located in many different areas divided into managing groups, together with their operations in the network. The proposed system has a center-local interoperability structure in which local-center servers, which can perform the same operations as the central servers, cooperate to provide multi-vendor wireless LAN access point management and wireless LAN-centric management features. For this purpose, we propose a new data design, messaging policy, and hierarchical system structure that achieve stable and consistent management of the various wireless access points on distributed networks.
