• Title/Summary/Keyword: optimization algorithms

Active VM Consolidation for Cloud Data Centers under Energy Saving Approach

  • Saxena, Shailesh;Khan, Mohammad Zubair;Singh, Ravendra;Noorwali, Abdulfattah
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.11
    • /
    • pp.345-353
    • /
    • 2021
  • Cloud computing represents a new era of computing formed through the combination of service-oriented architecture (SOA), the Internet, and grid computing with virtualization technology. Virtualization is the concept through which every cloud can provide on-demand services to its users. Most IT service providers adopt cloud-based services to meet their users' high demand for computation, as the cloud is a highly flexible, reliable, and scalable technology. The energy-performance tradeoff has become the main challenge in cloud computing as its acceptance and popularity grow day by day. Cloud data centers require a huge power supply for server virtualization in order to sustain on-demand high-performance computing. High power demand increases service providers' energy costs and also harms the environment through CO2 emissions. An optimization of cloud computing based on the energy-performance tradeoff is required to balance energy saving against the cloud's QoS (quality of service) policies. A study of the power usage of resources in cloud data centers as a function of their assigned workload reports that an idle server consumes nearly 50% of its peak-utilization power [1]. A large number of underutilized servers in a cloud data center therefore worsens the energy-performance tradeoff. To handle this issue, many energy-efficient algorithms have been proposed to minimize energy consumption while keeping the SLA (service level agreement) at a satisfactory level. VM (virtual machine) consolidation is one such technique for balancing energy use against the SLA. In this paper, we explore reinforcement learning with fuzzy logic (RFL) for VM consolidation to achieve an energy-aware SLA. In the proposed RFL-based active VM consolidation, the primary objective is to manage physical server (PS) nodes so as to avoid both over-utilization and under-utilization, and to optimize the placement of VMs. A dynamic threshold (based on RFL) is proposed for detecting over-utilized PSs. For an over-utilized PS, a fuzzy-logic-based VM selection policy is proposed, which selects VMs for migration so as to maintain the SLA. Additionally, a VM placement policy is incorporated by categorizing non-over-utilized servers as balanced, under-utilized, or critical. The CloudSim toolkit is used to simulate the proposed work on real-world workload traces from the CoMon project of PlanetLab. Simulation results show that the proposed policies are the most energy efficient compared with others in terms of reducing both electricity usage and SLA violations.
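
The abstract does not give the fuzzy rules themselves, so the following is only a minimal sketch of the general idea behind a fuzzy VM selection policy: each VM on an over-utilized host is scored by fuzzy memberships over two hypothetical inputs (CPU demand and memory footprint), and the VM with the highest score is migrated. The function names and membership shapes are illustrative, not taken from the paper.

```python
# Minimal sketch of a fuzzy-logic VM selection policy (illustrative only; the
# paper's actual membership functions and rule base are not in the abstract).

def tri(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def migration_score(cpu_util, mem_gb):
    """Score a VM for migration: prefer high CPU demand (relieves the host)
    and a small memory footprint (cheap to migrate). Assumed rule."""
    high_cpu = tri(cpu_util, 0.5, 1.0, 1.5)  # membership of "high CPU load"
    low_mem = tri(mem_gb, -4.0, 0.0, 8.0)    # membership of "small memory"
    return min(high_cpu, low_mem)            # fuzzy AND (minimum t-norm)

# Pick the VM with the highest score from an over-utilized physical server.
vms = [{"id": "vm1", "cpu": 0.9, "mem": 2.0},
       {"id": "vm2", "cpu": 0.6, "mem": 6.0}]
print(max(vms, key=lambda v: migration_score(v["cpu"], v["mem"]))["id"])
```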

A Deep Learning-based Real-time Deblurring Algorithm on HD Resolution (HD 해상도에서 실시간 구동이 가능한 딥러닝 기반 블러 제거 알고리즘)

  • Shim, Kyujin;Ko, Kangwook;Yoon, Sungjoon;Ha, Namkoo;Lee, Minseok;Jang, Hyunsung;Kwon, Kuyong;Kim, Eunjoon;Kim, Changick
    • Journal of Broadcast Engineering
    • /
    • v.27 no.1
    • /
    • pp.3-12
    • /
    • 2022
  • Image deblurring aims to remove image blur, which can be generated while shooting pictures by the movement of objects, camera shake, defocus, and so forth. With the rising popularity of smartphones, it is common to carry a portable digital camera daily, so image deblurring techniques have become more significant recently. Originally, image deblurring was studied using traditional optimization techniques. With the recent attention on deep learning, deblurring methods based on convolutional neural networks have been actively proposed. However, most of them have been developed with a focus on restoration quality, so they are not easy to use in real situations due to the speed of their algorithms. To tackle this problem, we propose a novel deep learning-based deblurring algorithm that can operate in real time at HD resolution. In addition, we improve the training and inference process so as to increase the quality of our model without any significant effect on its speed, and its speed without any significant effect on quality. As a result, our algorithm achieves real-time performance by processing 33.74 frames per second at 1280×720 resolution. Furthermore, it shows excellent quality relative to its speed, with a PSNR of 29.78 and an SSIM of 0.9287 on the GoPro dataset.
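
A quick note on the reported metrics: PSNR is the standard peak signal-to-noise ratio between the restored frame and its ground truth. The sketch below (not the authors' code) shows its usual definition; SSIM is typically taken from a library such as scikit-image.

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy usage with a fabricated 1280x720 frame pair (stand-ins for GoPro data).
ref = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
out = ref.copy()
out[0, 0, 0] ^= 1  # perturb one value so the PSNR is finite
print(f"PSNR: {psnr(ref, out):.2f} dB")
```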

Parameter search methodology of support vector machines for improving performance (속도 향상을 위한 서포트 벡터 머신의 파라미터 탐색 방법론)

  • Lee, Sung-Bo;Kim, Jae-young;Kim, Cheol-Hong;Kim, Jong-Myon
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.7 no.3
    • /
    • pp.329-337
    • /
    • 2017
  • This paper proposes a search method that explores the parameters C and σ of support vector machines (SVM) to improve search speed while maintaining accuracy. The traditional grid search method requires tremendous computational time because it evaluates all available combinations of C and σ values to find the optimal combination giving the best SVM performance. To address this issue, this paper proposes a deep search method that reduces computational time. In the first stage, it divides the C-σ accuracy surface into four regions, evaluates the median point of each region, and selects the point with the highest accuracy as the start point. In the second stage, the region around the selected start point is re-divided into four regions, and the point with the highest accuracy is assigned as the new search point. In the third stage, eight points near the search point are explored, the point with the highest accuracy is assigned as the new search point, and the corresponding region is divided into four parts for which accuracy values are calculated. In the last stage, this process is continued until the accuracy value is the highest compared with those of the neighboring points; if this condition is not satisfied, the process is repeated from the second stage at the given level of refinement. Experimental results using normal and faulty bearings show that the proposed deep search algorithm outperforms conventional algorithms in terms of both performance and search time.
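
The staged refinement described above can be pictured with a short coarse-to-fine sketch. This is a generic illustration in scikit-learn, not the authors' implementation: the data, the 3×3 grid per level, and the box-halving rule are assumptions standing in for the paper's four-region subdivision, and sklearn's `gamma` plays the role of σ.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

def accuracy(C, gamma):
    """Cross-validated accuracy of one (C, gamma) candidate."""
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

# Coarse-to-fine search: evaluate a small grid over a wide log-space box,
# keep the best point, then shrink the box around it and repeat.
lo_c, hi_c, lo_g, hi_g = -2.0, 3.0, -4.0, 1.0  # log10 bounds for C and gamma
for level in range(4):
    grid = [(c, g) for c in np.logspace(lo_c, hi_c, 3)
                   for g in np.logspace(lo_g, hi_g, 3)]
    best = max(grid, key=lambda p: accuracy(*p))
    c0, g0 = np.log10(best[0]), np.log10(best[1])
    span_c, span_g = (hi_c - lo_c) / 4, (hi_g - lo_g) / 4
    lo_c, hi_c, lo_g, hi_g = c0 - span_c, c0 + span_c, g0 - span_g, g0 + span_g

print(f"best C={best[0]:.3g}, gamma={best[1]:.3g}, acc={accuracy(*best):.3f}")
```

Compared with an exhaustive grid, this evaluates only a handful of points per level, which is the source of the speedup the paper targets.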

Driving Behavior Optimization Using a Genetic Algorithm and Analysis of Traffic Safety for Non-Autonomous Vehicles by Autonomous Vehicle Penetration Rate (유전알고리즘을 이용한 주행행태 최적화 및 자율주행차 도입률별 일반자동차 교통류 안전성 분석)

  • Somyoung Shin;Shinhyoung Park;Jiho Kim
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.22 no.5
    • /
    • pp.30-42
    • /
    • 2023
  • Various studies have used microscopic traffic simulation (VISSIM) to analyze the safety of traffic flow when autonomous vehicles are introduced. However, no studies have analyzed traffic safety in mixed traffic while treating the driving behavior of general vehicles as a parameter in VISSIM. Therefore, the aim of this study was to optimize the VISSIM input variables for non-autonomous vehicles through a genetic algorithm to obtain realistic behavior. A traffic safety analysis was then performed according to the penetration rate of autonomous vehicles. In a 640-meter section of US highway I-101, the number of conflicts was analyzed when the trailing vehicle was a non-autonomous vehicle. The total number of conflicts increased until the proportion of autonomous vehicles exceeded 20% and decreased continuously beyond that point. The number of conflicts between non-autonomous and autonomous vehicles increased with autonomous-vehicle proportions of up to 60%. However, this study is limited in that the driving behavior of autonomous vehicles was based on results from the literature and did not represent actual driving behavior. Therefore, for a more accurate analysis, future studies should reflect the actual driving behavior of autonomous vehicles.
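
As a rough illustration of this kind of calibration (not the authors' setup), the sketch below runs a tiny genetic algorithm over two hypothetical driving-behavior parameters; `simulate` is a placeholder for an actual VISSIM run, and the observed headway is fabricated.

```python
import random

random.seed(0)
OBSERVED_HEADWAY = 1.6  # hypothetical field-measured mean headway (s)

def simulate(params):
    """Placeholder for a VISSIM run: maps driving-behavior parameters
    (e.g., desired headway, a following gain) to a simulated mean headway."""
    headway, gain = params
    return headway * (1.0 + 0.1 * gain)

def fitness(params):
    # Closer agreement with the field data is better.
    return -abs(simulate(params) - OBSERVED_HEADWAY)

def crossover(a, b):
    return tuple(random.choice(pair) for pair in zip(a, b))

def mutate(p):
    return tuple(max(0.1, x + random.gauss(0, 0.05)) for x in p)

pop = [(random.uniform(0.5, 3.0), random.uniform(0.0, 1.0)) for _ in range(20)]
for gen in range(30):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]  # elitist selection
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents))) for _ in range(10)]

best = max(pop, key=fitness)
print(f"calibrated params: {best}, simulated headway: {simulate(best):.2f} s")
```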

Optimal Sensor Placement for Improved Prediction Accuracy of Structural Responses in Model Test of Multi-Linked Floating Offshore Systems Using Genetic Algorithms (다중연결 해양부유체의 모형시험 구조응답 예측정확도 향상을 위한 유전알고리즘을 이용한 센서배치 최적화)

  • Kichan Sim;Kangsu Lee
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.37 no.3
    • /
    • pp.163-171
    • /
    • 2024
  • Structural health monitoring of ships and offshore structures is important in various respects. Ships and offshore structures are continuously exposed to various environmental conditions, such as waves, wind, and currents. In the event of an accident, immense economic losses, environmental pollution, and safety problems can occur, so it is necessary to detect structural damage or defects early. In this study, structural response data of multi-linked floating offshore structures under various wave load conditions were calculated by performing fluid-structure coupled analysis. Furthermore, a model-order reduction method with distortion base modes was applied to the structures to predict the structural response using the results of the numerical analysis. The distortion-base-mode order reduction method can predict the structural response of a desired area with high accuracy, but its prediction performance is affected by the sensor arrangement. Optimization based on a genetic algorithm was performed to search for the optimal sensor arrangement and improve the prediction performance of the distortion-base-mode reduced-order model. Consequently, a sensor arrangement was derived that predicts the structural response with about 84.0% less error than the initial sensor arrangement, measured by the root mean squared error, the prediction performance index used in this study. Moreover, the genetic algorithm reduced the computational cost by about a factor of 8 compared with evaluating the prediction performance of reduced-order models for all 43,758 sensor arrangement combinations.
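
As an aside on the search space, 43,758 equals C(18, 8), so the sketch below assumes 8 sensors chosen from 18 candidate locations; that reading, the surrogate RMSE function, and all GA settings are assumptions rather than details from the paper.

```python
import math
import random

random.seed(1)
N_CANDIDATES, N_SENSORS = 18, 8  # assumed: C(18, 8) = 43758 combinations
assert math.comb(N_CANDIDATES, N_SENSORS) == 43758

def rmse_of(subset):
    """Placeholder for the real evaluation: build a reduced-order model from
    the chosen sensor locations and return its prediction RMSE."""
    return sum((i - N_CANDIDATES / 2) ** 2 for i in subset) ** 0.5  # dummy

def mutate(subset):
    """Swap one chosen location for an unchosen one."""
    s = set(subset)
    s.remove(random.choice(subset))
    s.add(random.choice([i for i in range(N_CANDIDATES) if i not in s]))
    return tuple(sorted(s))

pop = [tuple(sorted(random.sample(range(N_CANDIDATES), N_SENSORS)))
       for _ in range(30)]
for gen in range(50):
    pop.sort(key=rmse_of)  # lower RMSE is fitter
    pop = pop[:15] + [mutate(random.choice(pop[:15])) for _ in range(15)]

best = min(pop, key=rmse_of)
print("best sensor locations:", best, "| surrogate RMSE:", round(rmse_of(best), 2))
```

The point of the GA is visible in the evaluation count: a few thousand fitness calls at most, instead of building all 43,758 reduced-order models.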

A Study on the Optimization Period of Light Buoy Location Patterns Using the Convex Hull Algorithm (볼록 껍질 알고리즘을 이용한 등부표 위치패턴 최적화 기간 연구)

  • Wonjin Choi;Beom-Sik Moon;Chae-Uk Song;Young-Jin Kim
    • Journal of Navigation and Port Research
    • /
    • v.48 no.3
    • /
    • pp.164-170
    • /
    • 2024
  • A light buoy is a floating structure at sea and is prone to drifting due to external factors such as ocean weather, which makes it imperative to monitor buoys for loss or displacement. To address this issue, the Ministry of Oceans and Fisheries aims to issue alerts for buoy displacement by analyzing historical buoy position data to detect patterns. However, the periodic lifting inspections conducted every two years disrupt a buoy's location pattern, so a new pattern must be analyzed after each inspection for location monitoring. In this study, buoy position data from various periods were analyzed using convex hull and distance-based clustering algorithms, and the optimal data collection period for accurately recognizing buoy location patterns was identified. The findings suggest that a nine-week collection period establishes stable location patterns, explaining approximately 89.8% of the variance in the location data. These results can improve the management of light buoys based on location patterns and aid in effective monitoring and early detection of buoy displacement.
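
To make the pattern-extraction step concrete, here is a generic sketch (not the study's code) that accumulates buoy positions week by week and computes their convex hull with SciPy; the coordinates are fabricated stand-ins for real GPS fixes.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

# Fabricated daily GPS fixes for one buoy swinging around its mooring.
positions = rng.normal(loc=[129.0, 35.0], scale=[0.001, 0.001], size=(63, 2))

# Grow the observation window one week at a time and watch the hull area
# stabilize, mirroring the search for a sufficient collection period.
for week in range(1, 10):
    pts = positions[: week * 7]
    hull = ConvexHull(pts)
    print(f"week {week}: hull area = {hull.volume:.3e} deg^2")  # in 2-D, .volume is the area
```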

5G Network Resource Allocation and Traffic Prediction based on DDPG and Federated Learning (DDPG 및 연합학습 기반 5G 네트워크 자원 할당과 트래픽 예측)

  • Seok-Woo Park;Oh-Sung Lee;In-Ho Ra
    • Smart Media Journal
    • /
    • v.13 no.4
    • /
    • pp.33-48
    • /
    • 2024
  • With the advent of 5G, characterized by Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low Latency Communications (URLLC), and Massive Machine Type Communications (mMTC), efficient network management and service provision are becoming increasingly critical. This paper proposes a novel approach to the key challenges of 5G networks, namely ultra-high speed, ultra-low latency, and ultra-reliability, by dynamically optimizing network slicing and resource allocation using machine learning (ML) and deep learning (DL) techniques. The proposed methodology utilizes prediction models for network traffic and resource allocation, and employs Federated Learning (FL) techniques to optimize network bandwidth and latency while enhancing privacy and security. Specifically, this paper covers the implementation of various algorithms and models, such as Random Forest and LSTM, thereby presenting methodologies for the automation and intelligence of 5G network operations. Finally, the performance gains achievable by applying ML and DL to 5G networks are validated through performance evaluation and analysis, and solutions for network slicing and resource management optimization are proposed for various industrial applications.
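
The abstract names Random Forest and LSTM as its prediction models without further detail, so the following is only a minimal, generic LSTM traffic forecaster in PyTorch; the synthetic series, window length, and hyperparameters are all assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class TrafficLSTM(nn.Module):
    """One-step-ahead traffic forecaster (illustrative architecture)."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, seq_len, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # predict the next value from the last state

# Synthetic load curve as a stand-in for measured 5G traffic.
t = torch.arange(0, 200, dtype=torch.float32)
series = torch.sin(t / 10) + 0.1 * torch.randn_like(t)
X = torch.stack([series[i:i + 24] for i in range(150)]).unsqueeze(-1)
y = series[24:174].unsqueeze(-1)

model = TrafficLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print(f"final training MSE: {loss.item():.4f}")
```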

Bankruptcy prediction using an improved bagging ensemble (개선된 배깅 앙상블을 활용한 기업부도예측)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.121-139
    • /
    • 2014
  • Predicting corporate failure has been an important topic in accounting and finance. The costs associated with bankruptcy are high, so the accuracy of bankruptcy prediction is greatly important for financial institutions. Many researchers have dealt with bankruptcy prediction over the past three decades. The current research attempts to use ensemble models to improve the performance of bankruptcy prediction. Ensemble classification combines individually trained classifiers to obtain more accurate predictions than individual models, and ensemble techniques have been shown to be very useful for improving the generalization ability of a classifier. Bagging is the most commonly used method for constructing ensemble classifiers: different training data subsets are randomly drawn with replacement from the original training dataset, and base classifiers are trained on the different bootstrap samples. Instance selection selects critical instances while removing irrelevant and harmful instances from the original set. Instance selection and bagging are both well known in data mining; however, few studies have dealt with their integration. This study proposes an improved bagging ensemble based on instance selection using a genetic algorithm (GA) to improve the performance of SVM. GA is an efficient optimization procedure based on the theory of natural selection and evolution. GA uses the idea of survival of the fittest by progressively accepting better solutions to the problem, and it searches by maintaining a population of solutions from which better solutions are created, rather than by making incremental changes to a single solution. The initial solution population is generated randomly and evolves into the next generation through genetic operators such as selection, crossover, and mutation; the solutions, coded as strings, are evaluated by the fitness function. The proposed model consists of two phases: GA-based instance selection and instance-based bagging. In the first phase, GA is used to select the optimal instance subset, which is used as the input data of the bagging model. In this study, the chromosome is encoded as a binary string representing the instance subset. In this phase, the population size was set to 100 and the maximum number of generations to 150, with crossover and mutation rates of 0.7 and 0.1, respectively. We used the prediction accuracy of the model as the fitness function of the GA: an SVM model is trained on the training set using the selected instance subset, and its prediction accuracy over the test set is used as the fitness value in order to avoid overfitting. In the second phase, we used the optimal instance subset selected in the first phase as the input data of the bagging model, with SVM as the base classifier and majority voting as the combining method. This study applies the proposed model to the bankruptcy prediction problem using a real dataset from Korean companies. The research data contain 1,832 externally non-audited firms that filed for bankruptcy (916 cases) or non-bankruptcy (916 cases). Financial ratios categorized under stability, profitability, growth, activity, and cash flow were investigated through a literature review and basic statistical methods, and we selected 8 financial ratios as the final input variables.
We separated the whole dataset into three subsets: training, test, and validation sets. We compared the proposed model with several comparative models, including a simple individual SVM model, a simple bagging model, and an instance-selection-based SVM model. McNemar tests were used to examine whether the proposed model significantly outperforms the other models. The experimental results show that the proposed model outperforms the other models.
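
Following the two-phase design described above, here is a compact sketch of GA-based instance selection with an SVM fitness function followed by an SVM bagging ensemble. It is an illustration under assumptions, not the authors' code: the data are synthetic, and the GA settings are scaled down from the paper's (population 100, 150 generations, crossover 0.7, mutation 0.1) so the example runs quickly.

```python
import random
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC

random.seed(0)
X, y = make_classification(n_samples=200, n_features=8, random_state=0)
X_tr, y_tr = X[:120], y[:120]        # GA training split
X_te, y_te = X[120:160], y[120:160]  # fitness split (guards against overfitting)

def fitness(mask):
    """Accuracy of an SVM trained on the selected instances, scored on the
    held-out split, as in the paper's first phase."""
    idx = [i for i, bit in enumerate(mask) if bit]
    if len(set(y_tr[idx])) < 2:
        return 0.0  # degenerate subset: only one class selected
    return SVC().fit(X_tr[idx], y_tr[idx]).score(X_te, y_te)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.1):
    return [bit ^ (random.random() < rate) for bit in mask]

# Phase 1: GA over binary chromosomes (one bit per training instance).
pop = [[random.randint(0, 1) for _ in range(len(X_tr))] for _ in range(20)]
for gen in range(15):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents))) for _ in range(10)]
sel = [i for i, bit in enumerate(max(pop, key=fitness)) if bit]

# Phase 2: bagging of SVM base classifiers (majority voting) on the subset.
bag = BaggingClassifier(estimator=SVC(), n_estimators=10, random_state=0)
bag.fit(X_tr[sel], y_tr[sel])
print("validation accuracy:", bag.score(X[160:], y[160:]))
```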

A study on the optimization of tunnel support patterns using ANN and SVR algorithms (ANN 및 SVR 알고리즘을 활용한 최적 터널지보패턴 선정에 관한 연구)

  • Lee, Je-Kyum;Kim, YangKyun;Lee, Sean Seungwon
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.24 no.6
    • /
    • pp.617-628
    • /
    • 2022
  • When constructing a tunnel, the ground support pattern should be designed by properly integrating various support materials in accordance with the rock mass grade, and the technical decisions in this process must be made by professionals with extensive construction experience. However, designing supports at the early stages of tunnel design, such as the feasibility study or basic design, can be very challenging due to short timelines, insufficient budget, and a lack of field data. Meanwhile, with the rapid increase in tunnel construction in South Korea, the design of the support pattern can be performed more quickly and reliably by combining machine learning techniques with the accumulated design data. Therefore, in this study, the design data and ground exploration data of 48 road tunnels in South Korea were inspected, and data on 19 items were collected to automatically determine the rock mass class and the support pattern: eight input items (rock type, resistivity, depth, tunnel length, safety index by tunnel length, safety index by rock index, tunnel type, tunnel area) and 11 output items (rock mass grade, two items for shotcrete, three items for rock bolts, three items for steel supports, two items for concrete lining). Three machine learning models (S1, A1, A2) were developed using two machine learning algorithms (SVR, ANN) and the organized data. As a result, the A2 model, which applied different loss functions according to the output data format, showed the best performance. This study confirms the potential of support pattern design using machine learning, and it is expected that the design model can be improved by continuously using it in actual design work, compensating for its shortcomings, and improving its usability.
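
The abstract highlights that the best model (A2) applied different loss functions according to the output data format. Below is a generic sketch of that idea in PyTorch, with assumed layer sizes and output counts: cross-entropy for the categorical rock mass grade and mean squared error for continuous support quantities, combined into one objective.

```python
import torch
import torch.nn as nn

class SupportPatternNet(nn.Module):
    """Shared trunk with two heads: classification for the rock mass grade,
    regression for continuous support quantities (sizes are assumptions)."""
    def __init__(self, n_inputs=8, n_grades=5, n_quantities=10):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_inputs, 64), nn.ReLU(),
                                   nn.Linear(64, 64), nn.ReLU())
        self.grade_head = nn.Linear(64, n_grades)    # categorical output
        self.qty_head = nn.Linear(64, n_quantities)  # continuous outputs

    def forward(self, x):
        h = self.trunk(x)
        return self.grade_head(h), self.qty_head(h)

model = SupportPatternNet()
x = torch.randn(16, 8)                   # fabricated batch of the 8 input items
grade_true = torch.randint(0, 5, (16,))  # fabricated rock mass grades
qty_true = torch.randn(16, 10)           # fabricated support quantities

grade_logits, qty_pred = model(x)
# A different loss per output format, summed into a single training objective.
loss = (nn.functional.cross_entropy(grade_logits, grade_true)
        + nn.functional.mse_loss(qty_pred, qty_true))
loss.backward()
print(f"combined loss: {loss.item():.3f}")
```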

Steel Plate Faults Diagnosis with S-MTS (S-MTS를 이용한 강판의 표면 결함 진단)

  • Kim, Joon-Young;Cha, Jae-Min;Shin, Junguk;Yeom, Choongsub
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.47-67
    • /
    • 2017
  • Steel plate faults are one of the important factors affecting the quality and price of steel plates. So far, many steelmakers have generally used a visual inspection method based on an inspector's intuition or experience: the inspector checks for steel plate faults by looking at the surface of the plates. However, the accuracy of this method is critically low, with judgment errors above 30%. Therefore, an accurate steel plate faults diagnosis system has been continuously demanded by the industry. To meet this need, this study proposes a new steel plate faults diagnosis system using Simultaneous MTS (S-MTS), an advanced Mahalanobis Taguchi System (MTS) algorithm, to classify various surface defects of steel plates. MTS has generally been used to solve binary classification problems in various fields, but it has not been used for multi-class classification due to its low accuracy, because only one Mahalanobis space is established in MTS. In contrast, S-MTS is suitable for multi-class classification: it establishes an individual Mahalanobis space for each class, and 'simultaneous' implies comparing the Mahalanobis distances at the same time. The proposed steel plate faults diagnosis system was developed in four main stages. In the first stage, after the various reference groups and related variables are defined, steel plate fault data are collected and used to establish an individual Mahalanobis space for each reference group and construct the full measurement scale. In the second stage, the Mahalanobis distances of the test groups are calculated based on the established Mahalanobis spaces of the reference groups, and the appropriateness of the spaces is verified by examining the separability of the Mahalanobis distances. In the third stage, orthogonal arrays and the dynamic-type signal-to-noise (SN) ratio are applied for variable optimization, and the overall SN ratio gain is derived from the SN ratio and SN ratio gain: a variable with a negative overall SN ratio gain should be removed, while a variable with a positive gain may be worth keeping. Finally, in the fourth stage, the measurement scale composed of the selected useful variables is reconstructed, and an experimental test is implemented to verify the multi-class classification ability and obtain the classification accuracy; if the accuracy is acceptable, the diagnosis system can be used for future applications. This study also compared the accuracy of the proposed steel plate faults diagnosis system with that of other popular classification algorithms, including Decision Tree, Multilayer Perceptron Neural Network (MLPNN), Logistic Regression (LR), Support Vector Machine (SVM), Tree Bagger Random Forest, Grid Search (GS), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO). The steel plate faults dataset used in the study is taken from the University of California at Irvine (UCI) machine learning repository. As a result, the proposed S-MTS-based steel plate faults diagnosis system shows a classification accuracy of 90.79%, which is 6-27% higher than MLPNN, LR, GS, GA, and PSO. Given that the accuracy of commercial systems is only about 75-80%, the proposed system has sufficient classification performance to be applied in industry.
In addition, the proposed system can reduce the number of measurement sensors installed in the field thanks to the variable optimization process. These results show that the proposed system not only performs well on steel plate faults diagnosis but can also reduce operation and maintenance costs. In future work, it will be applied in the field to validate its actual effectiveness, and we plan to improve its accuracy based on the results.
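
To illustrate the core S-MTS idea of one Mahalanobis space per class, the sketch below fits a mean and inverse covariance per class and assigns each sample to the class with the smallest Mahalanobis distance. It is a simplified illustration, not the authors' implementation: the orthogonal-array variable optimization is omitted and the data are fabricated stand-ins for the UCI steel plates faults set.

```python
import numpy as np

def fit_spaces(X, y):
    """Build one 'Mahalanobis space' (mean, inverse covariance) per class."""
    spaces = {}
    for c in np.unique(y):
        Xc = X[y == c]
        spaces[c] = (Xc.mean(axis=0), np.linalg.pinv(np.cov(Xc, rowvar=False)))
    return spaces

def classify(x, spaces):
    """Compare the Mahalanobis distances to all class spaces simultaneously
    and return the nearest class."""
    def sq_dist(c):
        mu, cov_inv = spaces[c]
        d = x - mu
        return float(d @ cov_inv @ d)
    return min(spaces, key=sq_dist)

# Fabricated 3-class, 5-feature data standing in for the UCI steel plates set.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=i, size=(50, 5)) for i in range(3)])
y = np.repeat([0, 1, 2], 50)

spaces = fit_spaces(X, y)
preds = np.array([classify(x, spaces) for x in X])
print(f"training accuracy: {(preds == y).mean():.2%}")
```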