• Title/Summary/Keyword: Computing Power

Dynamic Scheduling Method for Cooperative Resource Sharing in Mobile Cloud Computing Environments

  • Kwon, Kyunglag;Park, Hansaem;Jung, Sungwoo;Lee, Jeungmin;Chung, In-Jeong
    • KSII Transactions on Internet and Information Systems (TIIS), v.10 no.2, pp.484-503, 2016
  • Mobile cloud computing has recently become a new paradigm for the utilization of a variety of shared mobile resources over wireless networks. However, due to the inherent characteristics of mobile devices, such as limited battery life and the need for network access, mobile servers must manage mobile resources dynamically and efficiently in mobile cloud computing environments. Since on-demand job requests occur frequently and the number of mobile devices has increased drastically, a different mobile resource management method is required to maximize the computational power. In this paper, we therefore propose a cooperative mobile resource sharing method that considers both the inherent properties and the number of mobile devices in mobile cloud environments. The proposed method is composed of four main components: mobile resource monitor, job handler, resource handler, and results consolidator. In contrast with conventional mobile cloud computing, each mobile device under the proposed method can be either a service consumer or a service provider in the cloud. Even though each device is resource-poor when it processes a job independently, the computational power increases dramatically under the proposed method because the devices cooperate on a job simultaneously. Therefore, the mobile computing throughput is dynamically increased while the computation time for a given job is reduced. We conduct case-based experiments to validate the proposed method and show its feasibility for cooperative computation.
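The following is a minimal, purely illustrative sketch of the cooperative idea summarized above: a job is split across whichever devices are currently usable and the partial results are merged. The component names (resource handler, job handler, results consolidator) follow the abstract, but the data structures, the battery threshold, and the splitting policy are assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    battery: float   # remaining battery fraction, 0.0-1.0 (assumed metric)
    connected: bool  # reachable over the wireless network

def resource_handler(devices, min_battery=0.3):
    """Select devices that can currently act as service providers."""
    return [d for d in devices if d.connected and d.battery >= min_battery]

def job_handler(job, providers):
    """Split one job into roughly equal chunks, one per cooperating provider."""
    n = max(len(providers), 1)
    size = -(-len(job) // n)  # ceiling division
    return [job[i:i + size] for i in range(0, len(job), size)]

def results_consolidator(partial_results):
    """Merge the partial results returned by the providers."""
    return [r for part in partial_results for r in part]

# Usage: three devices share one job instead of a single device doing all the work.
devices = [Device("A", 0.9, True), Device("B", 0.2, True), Device("C", 0.7, True)]
providers = resource_handler(devices)            # device B is excluded (low battery)
chunks = job_handler(list(range(10)), providers)
results = results_consolidator([[x * x for x in c] for c in chunks])
```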

Energy-Efficient Traffic Grooming in Bandwidth Constrained IP over WDM Networks

  • Chen, Bin;Yang, Zijian;Lin, Rongping;Dai, Mingjun;Lin, Xiaohui;Su, Gongchao;Wang, Hui
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.6, pp.2711-2733, 2018
  • Minimizing power consumption in bandwidth-limited optical traffic grooming networks is presented as a two-objective optimization problem. Since the main objective is to route a connection, the network throughput is maximized first, and then the minimum power consumption solution is found for this maximized throughput. Both transparent IP over WDM (Tp-IPoWDM) and translucent IP over WDM (Tl-IPoWDM) networks are considered to examine such bi-objective algorithms. Simulations show that the bi-objective algorithms are more energy-efficient than single-objective algorithms that optimize only the throughput. For a Tp-IPoWDM network, both link-based ILP (LB-ILP) and path-based ILP (PB-ILP) methods are formulated and solved. Simulation results show that PB-ILP can save more power than LB-ILP because PB-ILP has more path selections when lightpath lengths are limited. For a Tl-IPoWDM network, only PB-ILP is formulated, and we show that the Tl-IPoWDM network consumes less energy than the Tp-IPoWDM network, especially under a sparse network topology. For both kinds of networks, it is shown that network energy efficiency can be improved by over-provisioning wavelengths, which gives the network more path choices.
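The lexicographic structure of the bi-objective formulation described in the abstract can be sketched as below. The symbols are illustrative placeholders (demand set D with bandwidths b_d, acceptance variables x_d, powered-on element counts y_e with per-element power P_e), not the variables of the paper's LB-ILP or PB-ILP models.

```latex
\begin{align}
  \text{Stage 1:}\quad & T^{*} = \max_{x}\; \sum_{d \in D} b_d\, x_d
      && \text{maximize the carried traffic} \\
  \text{Stage 2:}\quad & \min_{x,\,y}\; \sum_{e \in E} P_e\, y_e
      \quad \text{s.t.}\quad \sum_{d \in D} b_d\, x_d = T^{*}
      && \text{minimize power at that throughput}
\end{align}
```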

Design of In-Memory Computing Adder Using Low-Power 8+T SRAM (저 전력 8+T SRAM을 이용한 인 메모리 컴퓨팅 가산기 설계)

  • Chang-Ki Hong;Jeong-Beom Kim
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.18 no.2, pp.291-298, 2023
  • SRAM-based in-memory computing is one of the technologies proposed to overcome the bottleneck of the von Neumann architecture. To realize SRAM-based in-memory computing, it is essential to design an efficient SRAM bit-cell. In this paper, we propose a low-power differential-sensing 8+T SRAM bit-cell that reduces power consumption and improves circuit performance. The proposed 8+T SRAM bit-cell is applied to a ripple carry adder that performs SRAM reads and bitwise operations simultaneously and executes the logic operations in parallel. Compared to the previous work, the designed 8+T SRAM-based ripple carry adder reduces power consumption by 11.53% but increases propagation delay by 6.36%. It also reduces the power-delay product (PDP) by 5.90% while increasing the energy-delay product (EDP) by 0.08%. The proposed circuit was designed in a TSMC 65nm CMOS process, and its feasibility was verified through SPECTRE simulation.
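As a rough behavioural illustration (not the 8+T SRAM circuit itself), the in-memory read of two stored words can be modelled as simultaneously producing their bitwise AND and XOR, from which a ripple carry adder is composed bit by bit. The word width and the generate/propagate composition below are assumptions made for this sketch.

```python
def inmemory_bitwise(a: int, b: int, width: int = 8):
    """Model an in-memory read that returns the bitwise AND and XOR of two rows."""
    mask = (1 << width) - 1
    return (a & b) & mask, (a ^ b) & mask

def ripple_carry_add(a: int, b: int, width: int = 8) -> int:
    """Compose a ripple carry adder from per-bit AND (generate) and XOR (propagate)."""
    and_w, xor_w = inmemory_bitwise(a, b, width)
    carry, result = 0, 0
    for i in range(width):
        g = (and_w >> i) & 1        # generate bit
        p = (xor_w >> i) & 1        # propagate bit
        result |= (p ^ carry) << i  # sum bit for position i
        carry = g | (p & carry)     # carry into position i + 1
    return result & ((1 << width) - 1)

assert ripple_carry_add(0x5A, 0x37) == (0x5A + 0x37) & 0xFF
```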

Exploring Support Vector Machine Learning for Cloud Computing Workload Prediction

  • ALOUFI, OMAR
    • International Journal of Computer Science & Network Security, v.22 no.10, pp.374-388, 2022
  • Cloud computing has been one of the most critical technologies of the last few decades, invented to meet user requirements in simple ways. Elasticity, the key characteristic of cloud computing, is the feature that seeks to meet users' needs without interruption at run time. Traditional approaches to elasticity have been studied for years using different mathematical models, and although these models have been a step forward in meeting user needs, elasticity is still not fully optimised. To optimise elasticity in the cloud, machine learning algorithms can be used to predict upcoming workloads and pass the predictions to the scheduling algorithm, which achieves excellent provisioning of cloud services, improves the Quality of Service (QoS), and saves power. This paper therefore investigates the use of machine learning techniques to predict the workload of physical hosts (PH) on the cloud and their energy consumption. The experiments are hosted on the School of Computing cloud testbed (SoC) and run real applications with different behaviours, changing the workloads over time. The results demonstrate that the machine learning techniques used in the scheduling algorithm can predict the workload (CPU utilisation) of the physical hosts, which contributes to reducing power consumption by scheduling upcoming virtual machines to the physical host with the lowest CPU utilisation. A number of tools are also used and explored in this paper, such as WEKA to train the real data and explore machine learning algorithms, and Zabbix to monitor the power consumption before and after scheduling the virtual machines to physical hosts. The work follows an agile methodology, which helped us achieve our solution and manage the project effectively.
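The paper trains its models with WEKA; the sketch below is a hedged Python analogue using scikit-learn's SVR to illustrate the same workflow: learn to predict each physical host's next CPU utilisation from recent samples, then place the next virtual machine on the host predicted to be least loaded. The feature layout (three recent utilisation samples per host) and the synthetic training data are assumptions, not the paper's dataset.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic training data: rows are the last three CPU-utilisation samples (%)
# of a host; the target is the utilisation observed in the next interval.
rng = np.random.default_rng(0)
history = rng.uniform(10, 90, size=(200, 3))
next_util = history.mean(axis=1) + rng.normal(0, 2, size=200)

model = SVR(kernel="rbf", C=10.0, epsilon=0.5)
model.fit(history, next_util)

# Predict the upcoming utilisation of each physical host; the scheduler would
# place the next VM on the host with the lowest predicted load.
hosts = {"ph1": [70, 75, 72], "ph2": [30, 28, 35], "ph3": [55, 60, 58]}
predicted = {h: float(model.predict(np.array([s]))[0]) for h, s in hosts.items()}
target_host = min(predicted, key=predicted.get)
print(target_host)  # expected: "ph2"
```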

ETRI AI Strategy #2: Strengthening Competencies in AI Semiconductor & Computing Technologies (ETRI AI 실행전략 2: AI 반도체 및 컴퓨팅시스템 기술경쟁력 강화)

  • Choi, S.S.;Yeon, S.J.
    • Electronics and Telecommunications Trends, v.35 no.7, pp.13-22, 2020
  • There is no denying that computing power has been a crucial driving force behind the development of artificial intelligence today. In addition, artificial intelligence (AI) semiconductors and computing systems are perceived to have promising industrial value in the market along with rapid technological advances. Therefore, success in this field is also meaningful to the nation's growth and competitiveness. In this context, ETRI's AI strategy proposes implementation directions and tasks with the aim of strengthening the technological competitiveness of AI semiconductors and computing systems. The paper contains a brief background of ETRI's AI Strategy #2, research and development trends, and key tasks in four major areas: 1) AI processors, 2) AI computing systems, 3) neuromorphic computing, and 4) quantum computing.

A study of the method of computing Transmission cost in competitive market (경쟁적 전력시장에서 적용 가능한 송전선이용료 산정기법에 관한 연구)

  • Kim, Kyung-Min;Kim, Kang-Wan;Kim, Jong-Man;Han, Seok-Man;Kim, Bal-Ho H.
    • Proceedings of the KIEE Conference, 2004.11b, pp.284-286, 2004
  • When the power market is restructured, the choice of transmission loss factors becomes a point of contention. First, we determine how to calculate the transmission loss factors, using either the optimal power flow (OPF) or the power flow (PF) equations. Then we decide whether to apply a single factor value for the entire year or floating transmission loss factors that change every hour. In this study, we develop a method of computing transmission cost applicable in a competitive market.
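For context, a common way to obtain a transmission loss factor from a PF or OPF solution is the incremental sensitivity of total losses to the injection at each bus, as sketched below. The notation is a generic textbook form and an assumption here, not the paper's exact scheme; the abstract's design choice is whether the factor is fixed for the whole year or recomputed every hour.

```latex
\begin{align}
  P_{\mathrm{loss}} &= \sum_{(i,j)\in\mathcal{L}} g_{ij}
      \left( V_i^{2} + V_j^{2} - 2 V_i V_j \cos\theta_{ij} \right), \\
  \mathrm{LF}_i &= \frac{\partial P_{\mathrm{loss}}}{\partial P_i},
\end{align}
```

where g_{ij} is the conductance of line (i, j), the voltages V and angles θ come from the PF/OPF solution, and P_i is the net injection at bus i.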

Comparison of the Process-level Power Consumption Profilers (프로세스 레벨 전력 소비 프로파일러의 비교)

  • Kang, Min-jae;Noh, Dong-kun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2012.10a, pp.749-752, 2012
  • As energy consumption has become a pressing social issue, green computing has attracted attention. Profiling the power consumption of a computer is one of several approaches to green computing; representative tools include PowerAPI, PowerTop, JouleMeter, pTop, and EnergyChecker. Because these profilers are purely software-based, they can estimate the power consumption of each computer device without extra hardware, and on top of this device-level profiling the power consumed by each process can be analyzed. This makes it possible to identify the processes that consume a lot of power and to reduce the total power consumption by controlling them; the profiling data can also guide the design of low-power programs, ultimately supporting green computing. In this paper, we compare and analyze the representative studies and derive the characteristics of an ideal process-level power consumption profiler.
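As a hedged illustration of the attribution model that such software profilers broadly rely on, the sketch below splits an assumed CPU power budget across processes in proportion to their share of CPU activity, sampled with psutil. The power coefficient and the CPU-only model are illustrative assumptions, not the model of any specific tool listed above.

```python
import psutil

CPU_POWER_W = 15.0  # assumed average active CPU power in watts (needs calibration)

def process_power_estimates(interval=1.0):
    """Attribute CPU power to processes by their share of CPU usage over an interval."""
    procs = list(psutil.process_iter(["pid", "name"]))
    for p in procs:               # prime the per-process CPU counters
        try:
            p.cpu_percent(None)
        except psutil.Error:
            pass
    psutil.cpu_percent(interval=interval)  # wait one sampling interval
    estimates = {}
    ncpu = psutil.cpu_count() or 1
    for p in procs:
        try:
            share = p.cpu_percent(None) / 100.0 / ncpu
            estimates[(p.info["pid"], p.info["name"])] = share * CPU_POWER_W
        except psutil.Error:
            continue
    return estimates

if __name__ == "__main__":
    top = sorted(process_power_estimates().items(), key=lambda kv: -kv[1])[:5]
    for (pid, name), watts in top:
        print(f"{pid:>7} {name:<25} ~{watts:.2f} W")
```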

Local Scheduling method based on the User Pattern for Korea@Home Agent (Korea@Home 에이전트를 위한 사용자패턴기반의 로컬 스케줄링기법)

  • Choi, JiHyun;Kim, Mikyoung;Choi, JangWon
    • Proceedings of the Korea Contents Association Conference, 2007.11a, pp.226-230, 2007
  • This paper proposes a local scheduling method based on user patterns for the volunteer computing project Korea@Home. It enables Korea@Home participants to run the agent without disturbance: it prevents the user's own applications from being delayed while the agent is running and decreases the frequency of switching resources between the user and the agent. We analyze the patterns of users donating computing resources to Korea@Home, a representative volunteer computing project in Korea that has contributed computing power to several applications, including climate prediction and virtual screening. The proposed non-disturbance scheduling based on user usage patterns encourages volunteers to keep participating and increases the potential computing power of the project.
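A minimal sketch of the scheduling idea described above, assuming the agent keeps a per-hour histogram of whether the user was active and only claims the CPU in hours where the user has historically been idle. The hourly granularity and the idle-probability threshold are assumptions for illustration, not the Korea@Home agent's actual policy.

```python
from datetime import datetime

class UserPatternScheduler:
    def __init__(self, idle_threshold=0.8):
        self.idle_threshold = idle_threshold
        self.samples = [0] * 24  # observations seen for each hour of the day
        self.idle = [0] * 24     # how many of those observations were idle

    def record(self, hour: int, user_idle: bool) -> None:
        """Update the usage pattern with one observation."""
        self.samples[hour] += 1
        self.idle[hour] += int(user_idle)

    def should_run_agent(self, now: datetime) -> bool:
        """Run volunteer work only in hours where the user is usually idle."""
        h = now.hour
        if self.samples[h] == 0:
            return False  # no history yet: stay out of the user's way
        return self.idle[h] / self.samples[h] >= self.idle_threshold

# Usage: after a week of observations, the agent only computes at night.
sched = UserPatternScheduler()
for _day in range(7):
    for hour in range(24):
        sched.record(hour, user_idle=(hour < 7 or hour >= 22))
print(sched.should_run_agent(datetime(2007, 11, 1, 23)))  # True
print(sched.should_run_agent(datetime(2007, 11, 1, 14)))  # False
```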

A CMOS Analog Front End for a WPAN Zero-IF Receiver

  • Moon, Yeon-Kug;Seo, Hae-Moon;Park, Yong-Kuk;Won, Kwang-Ho;Lim, Seung-Ok;Kang, Jeong-Hoon;Park, Young-Choong;Yoon, Myung-Hyun;Yoo, June-Jae;Kim, Seong-Dong
    • Proceedings of the IEEK Conference, 2005.11a, pp.769-772, 2005
  • This paper describes a low-voltage and low-power channel selection analog front end with continuous-time low-pass filters and a highly linear programmable-gain amplifier (PGA). The filters were realized as balanced Gm-C biquadratic filters to achieve a low current consumption. High linearity and a constant wide bandwidth are achieved by using a new transconductance (Gm) cell. The PGA has a voltage gain varying from 0 to 65 dB, while maintaining a constant bandwidth. A filter tuning circuit that requires an accurate time base but no external components is presented. With a 1-Vrms differential input and output, the filter achieves -85 dB THD and a 78 dB signal-to-noise ratio. Both the filter and PGA were implemented in a 0.18 um 1P6M n-well CMOS process. They consume 3.2 mW from a 1.8 V power supply and occupy an area of 0.19 mm².

Applying Workload Shaping Toward Green Cloud Computing

  • Kim, Woongsup
    • International Journal of Advanced Smart Convergence, v.1 no.2, pp.12-15, 2012
  • Energy costs for operating and cooling computing resources in cloud infrastructure have increased significantly, to the point where they can surpass the hardware purchasing costs, so reducing energy consumption can save a significant amount of management cost. One major approach is removing hardware over-provisioning. In this paper, we propose a technique that facilitates power saving by reducing resource over-provisioning based on virtualization technology. To this end, we use dynamic workload shaping to reschedule and redistribute job requests with overall power consumption in mind: workloads are shaped dynamically and distributed across virtual machines and physical machines through virtualization. We generated synthetic workload data and evaluated the approach both in simulation and in a real implementation. Our simulation results demonstrate that the approach outperforms the case where no workload shaping is used.
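The sketch below illustrates the over-provisioning reduction described above: virtual machine requests are shaped (sorted) and first-fit packed onto already-active physical machines, and a new machine is brought up only when none of the active ones has room. The capacity unit and the first-fit-decreasing policy are assumptions for illustration, not the paper's exact shaping algorithm.

```python
PM_CAPACITY = 100  # assumed CPU capacity units per physical machine

def shape_and_place(vm_demands):
    """Place VM demands on as few physical machines as possible (first-fit decreasing)."""
    pms = []        # remaining capacity of each powered-on physical machine
    placement = []  # (demand, pm_index) pairs
    for demand in sorted(vm_demands, reverse=True):  # shaping step: largest first
        for i, free in enumerate(pms):
            if free >= demand:
                pms[i] -= demand
                placement.append((demand, i))
                break
        else:
            pms.append(PM_CAPACITY - demand)          # power on a new machine
            placement.append((demand, len(pms) - 1))
    return placement, len(pms)

# Usage: eight requests are consolidated onto two machines instead of eight.
placement, active_pms = shape_and_place([40, 35, 30, 25, 20, 20, 15, 10])
print(active_pms)  # 2
```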