• Title/Summary/Keyword: Computer optimization


Improvement of evolution speed of individuals through hybrid reproduction of monogenesis and gamogenesis in genetic algorithms (유전자알고리즘에서 단성생식과 양성생식을 혼용한 번식을 통한 개체진화 속도향상)

  • Jung, Sung-Hoon
    • Journal of the Korea Society of Computer and Information / v.16 no.3 / pp.45-51 / 2011
  • This paper proposes a method to accelerate the evolution of individuals in genetic algorithms through a hybrid reproduction scheme that mixes monogenesis and gamogenesis. Monogenesis, the reproduction method by which bacteria or monads without sexual distinction divide into two individuals, is advantageous for local search; gamogenesis, in which individuals with sexual distinction mate and breed offspring, is advantageous for maintaining population diversity. These properties can be exploited to improve the evolution speed of individuals in genetic algorithms. In the proposed method, relatively good individuals among the selected parents reproduce by monogenesis for local search, while relatively bad individuals reproduce by gamogenesis for global search, which increases the diversity of chromosomes. The mutation probability for monogenesis is set lower than in the original genetic algorithm to favor local search, and the mutation probability for gamogenesis is set higher to favor global search. Experiments on four function optimization problems showed very good performance on three functions, but poor performance on the fourth function, whose global optima are widely distributed; the distributed global optima prevented the individuals from evolving steadily.
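The split between low-mutation asexual copies for good parents and high-mutation crossover for poor parents can be sketched roughly as below. The binary encoding, fitness-sorted split, and mutation rates are assumptions for illustration, not the authors' exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def reproduce(parents, fitness, p_mut_low=0.005, p_mut_high=0.05):
    """Hybrid reproduction: good parents clone with low mutation (monogenesis),
    bad parents crossover with high mutation (gamogenesis)."""
    order = np.argsort(fitness)[::-1]                       # best first (maximization assumed)
    good, bad = parents[order[:len(order) // 2]], parents[order[len(order) // 2:]]

    children = []
    # Monogenesis: copy each good parent and mutate a little (local search).
    for p in good:
        child = p.copy()
        child[rng.random(child.size) < p_mut_low] ^= 1
        children.append(child)
    # Gamogenesis: pair bad parents, one-point crossover, mutate a lot (global search).
    for i in range(0, len(bad) - 1, 2):
        cut = rng.integers(1, bad.shape[1])
        child = np.concatenate([bad[i, :cut], bad[i + 1, cut:]])
        child[rng.random(child.size) < p_mut_high] ^= 1
        children.append(child)
    return np.array(children)

# Tiny usage example with random bitstrings and a dummy fitness (number of ones).
pop = rng.integers(0, 2, size=(8, 16))
print(reproduce(pop, fitness=pop.sum(axis=1)).shape)
```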

An Efficient Clustering Algorithm based on Heuristic Evolution (휴리스틱 진화에 기반한 효율적 클러스터링 알고리즘)

  • Ryu, Joung-Woo;Kang, Myung-Ku;Kim, Myung-Won
    • Journal of KIISE: Software and Applications / v.29 no.1_2 / pp.80-90 / 2002
  • Clustering is a useful technique for grouping data points so that points within a single group/cluster have similar characteristics. Many clustering algorithms have been developed and used in engineering applications, including pattern recognition and image processing, and clustering has recently drawn increasing attention as an important technique in data mining. However, algorithms such as K-means and Fuzzy C-means suffer from two difficulties: the number of clusters must be determined a priori, and the result depends on the initial set of clusters, which can lead to undesirable solutions. In this paper, we propose a new clustering algorithm that addresses these problems. We use an evolutionary algorithm to overcome the local-optima problem, in which clustering converges to an undesirable state when started from an inappropriate set of clusters. We also adopt a new measure of how well the data are clustered, defined in terms of both intra-cluster dispersion and inter-cluster separability. Using this measure, the number of clusters is determined automatically as a result of the optimization process. In addition, we combine problem-specific heuristic knowledge with the evolutionary algorithm to speed up its search. Experiments on several sets of multi-dimensional data show that our algorithm outperforms existing algorithms.
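A cluster-validity measure of the kind described, trading intra-cluster dispersion against inter-cluster separability, might look like the following sketch. The paper's exact formula is not reproduced here; this ratio-style score is an illustrative assumption.

```python
import numpy as np

def cluster_quality(X, labels):
    """Higher is better: inter-cluster separability divided by intra-cluster dispersion.
    X: (n_samples, n_features); labels: integer cluster ids."""
    clusters = np.unique(labels)
    centroids = np.array([X[labels == c].mean(axis=0) for c in clusters])

    # Intra-cluster dispersion: mean distance of points to their own centroid.
    dispersion = np.mean([
        np.linalg.norm(X[labels == c] - centroids[i], axis=1).mean()
        for i, c in enumerate(clusters)
    ])
    # Inter-cluster separability: mean pairwise distance between centroids.
    diffs = centroids[:, None, :] - centroids[None, :, :]
    pairwise = np.linalg.norm(diffs, axis=-1)
    separability = pairwise[np.triu_indices(len(clusters), k=1)].mean()

    return separability / (dispersion + 1e-12)

# Example: two well-separated blobs score high under this measure.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
labels = np.r_[np.zeros(50, int), np.ones(50, int)]
print(round(cluster_quality(X, labels), 2))
```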

Efficient 3D Object Simplification Algorithm Using 2D Planar Sampling and Wavelet Transform (2D 평면 표본화와 웨이브릿 변환을 이용한 효율적인 3차원 객체 간소화 알고리즘)

  • 장명호;이행석;한규필;박양우
    • Journal of KIISE: Computer Systems and Theory / v.31 no.5_6 / pp.297-304 / 2004
  • In this paper, a mesh simplification algorithm based on the wavelet transform and 2D planar sampling is proposed for efficient handling of 3D objects in computer applications. Because conventional mesh compression and simplification algorithms transform 3D vertices directly with wavelets, they must solve difficult tiling optimization problems that reconnect vertices into faces in the synthesis stage, which demands full vertex connectivity. In the proposed algorithm, however, the 3D mesh is sampled onto 2D planes and the 2D polygons on each plane are simplified independently. Accordingly, the transform of the 2D polygons is very tractable, and their connectivity information is replaced with a simple sequence of vertices. The vertex sequence of the 2D polygons on each plane is analyzed with wavelets, and the transformed data are simplified by removing small wavelet coefficients that contribute little to the subjective quality of the shape. Therefore, the proposed algorithm can change the mesh level-of-detail simply by controlling the spacing of the 2D sampling planes and the selective removal of wavelet coefficients. Experimental results show that the proposed algorithm is a simple and efficient simplification technique with little external distortion.
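The core step, wavelet-analyzing a 2D vertex sequence and dropping small coefficients, can be sketched with PyWavelets. Treating the sampled polygon as two 1D signals (x and y), using a Haar basis, and applying a hard quantile threshold are assumptions for illustration; the paper's wavelet and threshold rule may differ.

```python
import numpy as np
import pywt

def simplify_polygon(xy, wavelet="haar", level=3, keep_ratio=0.25):
    """Wavelet-transform a 2D polygon (N x 2 vertex sequence) and zero out
    the smallest detail coefficients before reconstructing."""
    out = []
    for signal in xy.T:                                  # x and y treated separately
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        detail = np.concatenate(coeffs[1:])
        if detail.size:
            cut = np.quantile(np.abs(detail), 1.0 - keep_ratio)
            coeffs = [coeffs[0]] + [pywt.threshold(c, cut, mode="hard") for c in coeffs[1:]]
        out.append(pywt.waverec(coeffs, wavelet)[: len(signal)])
    return np.stack(out, axis=1)

# Example: a noisy circle sampled on one cutting plane.
t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
poly = np.c_[np.cos(t), np.sin(t)] + np.random.default_rng(2).normal(0, 0.02, (128, 2))
print(simplify_polygon(poly).shape)      # (128, 2), smoother outline
```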

ICARP: Interference-based Charging Aware Routing Protocol for Opportunistic Energy Harvesting Wireless Networks (ICARP: 기회적 에너지 하베스팅 무선 네트워크를 위한 간섭 기반 충전 인지 라우팅 프로토콜)

  • Kim, Hyun-Tae;Ra, In-Ho
    • Journal of the Korean Institute of Intelligent Systems / v.27 no.1 / pp.1-6 / 2017
  • Recent research on radio frequency energy harvesting networks (RF-EHNs) with limited energy resources, such as batteries, has focused on schemes that can effectively extend the lifetime of a network toward semi-permanent operation. To considerably increase both the amount of energy obtained from RF energy harvesting and the charging effectiveness, it is important to design a network that supports energy harvesting and data transfer simultaneously, with full consideration of the various characteristics affecting RF-EHN performance. In this paper, we propose an interference-based charging aware routing protocol (ICARP) that utilizes interference information and charging time to maximize the amount of harvested energy and to minimize the end-to-end delay from a source to a given destination node. To accomplish these objectives, ICARP adopts new network metrics, namely interference information and charging time, to minimize end-to-end delay in energy harvesting wireless networks. The proposed method enables an RF-EHN to significantly reduce the number of packet losses and retransmissions, leading to better energy consumption. Finally, simulation results show that the network performance, in terms of packet transmission rate and end-to-end delay, is enhanced in comparison with existing routing protocols.
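A simplified picture of interference- and charging-aware path selection is given below: each link is scored by a weighted combination of its interference level and the next hop's charging time, and a shortest path is taken under that cost. The weighting, graph, and numbers are hypothetical; the actual ICARP metric is defined in the paper.

```python
import heapq

def icarp_like_path(graph, src, dst, alpha=0.5):
    """graph[u][v] = (interference, charging_time); link cost =
    alpha*interference + (1-alpha)*charging_time. Dijkstra over that cost."""
    pq, best, prev = [(0.0, src)], {src: 0.0}, {}
    while pq:
        cost, u = heapq.heappop(pq)
        if u == dst:
            break
        if cost > best.get(u, float("inf")):
            continue
        for v, (interf, charge) in graph.get(u, {}).items():
            c = cost + alpha * interf + (1 - alpha) * charge
            if c < best.get(v, float("inf")):
                best[v], prev[v] = c, u
                heapq.heappush(pq, (c, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return best[dst], path[::-1]

# Toy 4-node network: tuples are (interference level, charging time).
g = {"S": {"A": (0.2, 5), "B": (0.6, 1)},
     "A": {"D": (0.1, 4)},
     "B": {"D": (0.9, 2)}}
print(icarp_like_path(g, "S", "D"))
```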

Tomato Crop Diseases Classification Models Using Deep CNN-based Architectures (심층 CNN 기반 구조를 이용한 토마토 작물 병해충 분류 모델)

  • Kim, Sam-Keun;Ahn, Jae-Geun
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.5 / pp.7-14 / 2021
  • Tomato crops are highly susceptible to tomato diseases, and if not prevented, a disease can cause severe losses for the agricultural economy. Therefore, there is a need for a system that quickly and accurately diagnoses various tomato diseases. In this paper, we propose a system that classifies nine diseases as well as healthy tomato plants by applying various deep learning-based CNN models pretrained on the ImageNet dataset. The tomato leaf image dataset obtained from PlantVillage is provided as input to ResNet, Xception, and DenseNet, which have deep learning-based CNN architectures. The proposed models were constructed by adding a top-level classifier to the base CNN model and were trained with a 5-fold cross-validation strategy. All three proposed models were trained in two stages: transfer learning, which freezes the layers of the base CNN model and trains only the top-level classifier, and fine-tuning, which unfreezes the base CNN layers and continues training with a very small learning rate. SGD, RMSprop, and Adam were applied as optimization algorithms. The experimental results show that the DenseNet model trained with RMSprop produced the best results, with 98.63% accuracy.
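The two-stage training described (frozen base for transfer learning, then fine-tuning with a small learning rate) follows a standard Keras pattern; a minimal sketch is shown below. The image size, the class count of 10 (nine diseases plus healthy), and the learning rates are assumptions consistent with the abstract, not the authors' exact configuration.

```python
import tensorflow as tf

NUM_CLASSES = 10                      # nine diseases + healthy plants

base = tf.keras.applications.DenseNet121(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Top-level classifier added on the base CNN.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Stage 1: transfer learning -- freeze the base, train only the classifier.
base.trainable = False
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# Stage 2: fine-tuning -- unfreeze the base and train with a very small rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```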

Active VM Consolidation for Cloud Data Centers under Energy Saving Approach

  • Saxena, Shailesh;Khan, Mohammad Zubair;Singh, Ravendra;Noorwali, Abdulfattah
    • International Journal of Computer Science & Network Security / v.21 no.11 / pp.345-353 / 2021
  • Cloud computing represents a new era of computing formed through the combination of service-oriented architecture (SOA), the Internet, and grid computing with virtualization technology. Virtualization is the concept through which every cloud is able to provide on-demand services to its users. Most IT service providers adopt cloud-based services to meet high computational demand, as the technology is flexible, reliable, and scalable. The energy-performance trade-off has become the main challenge in cloud computing as its acceptance and popularity increase day by day. Cloud data centers require a huge power supply to run the virtualized servers that sustain on-demand high-performance computing. High power demand increases the energy cost of service providers and also harms the environment through CO2 emissions. An optimization of cloud computing based on the energy-performance trade-off is required to balance energy saving against the QoS (quality of service) policies of the cloud. A study of the power usage of resources in cloud data centers, based on the workload assigned to them, reports that an idle server consumes about 50% of its peak-utilization power [1]. Therefore, a large number of underutilized servers in a cloud data center degrades the energy-performance trade-off. To handle this issue, many energy-efficient algorithms have been proposed to minimize energy consumption while maintaining the SLA (service level agreement) at a satisfactory level. VM (virtual machine) consolidation is one such technique for balancing energy use against the SLA. In this paper, we explore reinforcement with fuzzy logic (RFL) for VM consolidation to achieve energy-aware SLA compliance. In the proposed RFL-based active VM consolidation, the primary objective is to manage physical server (PS) nodes so as to avoid over-utilization and under-utilization and to optimize the placement of VMs. A dynamic threshold (based on RFL) is proposed for over-utilized PS detection. For over-utilized PSs, a fuzzy-logic-based VM selection policy is proposed, which selects VMs for migration so as to maintain the SLA. Additionally, a VM placement policy is incorporated that categorizes non-over-utilized servers as balanced, under-utilized, or critical. The CloudSim toolkit is used to simulate the proposed work on real-world workload traces from the CoMon project defined by PlanetLab. Simulation results show that the proposed policies are the most energy efficient compared to others in terms of reduction in both electricity usage and SLA violations.
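A greatly simplified sketch of the consolidation loop, over-utilization detection against a (here static) CPU threshold, VM selection for migration, and categorization of hosts, is given below. The RFL-driven dynamic threshold and fuzzy membership functions from the paper are replaced by plain heuristics, so the thresholds and the selection rule are placeholders.

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    vm_id: str
    cpu: float                     # CPU demand as a fraction of one host

@dataclass
class Host:
    host_id: str
    vms: list = field(default_factory=list)

    @property
    def util(self):
        return sum(vm.cpu for vm in self.vms)

def classify(host, over=0.8, under=0.3):
    """Categorize a physical server by its current utilization."""
    if host.util > over:
        return "over-utilized"
    if host.util < under:
        return "under-utilized"
    return "balanced"

def select_vm_for_migration(host, over=0.8):
    """Placeholder rule: migrate the smallest VM that brings the host back
    under the threshold (the paper uses a fuzzy-logic score instead)."""
    for vm in sorted(host.vms, key=lambda v: v.cpu):
        if host.util - vm.cpu <= over:
            return vm
    return max(host.vms, key=lambda v: v.cpu)

h = Host("ps-1", [VM("a", 0.5), VM("b", 0.25), VM("c", 0.2)])
print(classify(h), select_vm_for_migration(h).vm_id)
```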

EEG Feature Engineering for Machine Learning-Based CPAP Titration Optimization in Obstructive Sleep Apnea

  • Juhyeong Kang;Yeojin Kim;Jiseon Yang;Seungwon Chung;Sungeun Hwang;Uran Oh;Hyang Woon Lee
    • International Journal of Advanced Smart Convergence / v.12 no.3 / pp.89-103 / 2023
  • Obstructive sleep apnea (OSA) is one of the most prevalent sleep disorders and can lead to serious consequences, including hypertension and/or cardiovascular disease, if not treated promptly. Continuous positive airway pressure (CPAP) is widely recognized as the most effective treatment for OSA, and it requires proper titration of the airway pressure to achieve the best treatment results. However, the process of CPAP titration can be time-consuming and cumbersome, so predicting personalized CPAP pressure before treatment is of growing importance. The primary objective of this study was to optimize the CPAP titration process for OSA patients through EEG feature engineering with machine learning techniques. We aimed to identify and utilize the most critical EEG features for forecasting key OSA predictive indicators, ultimately facilitating more precise and personalized CPAP treatment strategies. We analyzed the PSG datasets of 126 OSA patients before and after CPAP treatment. We extracted 29 EEG features and applied the SHapley Additive exPlanations (SHAP) method to identify the features with high importance for the OSA prediction indices, namely AHI and SpO2. From the extracted EEG features, we confirmed six features with high importance for predicting AHI and SpO2 using XGBoost, Support Vector Machine regression, and Random Forest regression. By utilizing the predictive capabilities of EEG-derived features for AHI and SpO2, we can better understand and evaluate the condition of patients undergoing CPAP treatment. The ability to predict these key indicators accurately provides more immediate insight into a patient's sleep quality and potential disturbances. This not only improves the efficiency of the diagnostic process but also enables a more tailored and effective treatment approach. Consequently, integrating EEG analysis into the sleep study protocol has the potential to revolutionize sleep diagnostics, offering a time-saving and ultimately more effective evaluation for patients with sleep-related disorders.
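Ranking EEG features by SHAP importance for a regression target such as AHI follows a standard pattern with the xgboost and shap libraries; a minimal sketch on synthetic data is below. The feature names, data, and model settings are placeholders, not the study's dataset or pipeline.

```python
import numpy as np
import shap
import xgboost

rng = np.random.default_rng(0)
feature_names = [f"eeg_feat_{i}" for i in range(29)]      # 29 EEG features, names hypothetical
X = rng.normal(size=(126, 29))                            # stand-in for 126 patients
y = X[:, 0] * 2.0 + X[:, 3] - X[:, 7] + rng.normal(0, 0.1, 126)   # synthetic AHI target

model = xgboost.XGBRegressor(n_estimators=200, max_depth=3).fit(X, y)

# Tree SHAP values: one attribution per feature per patient.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Rank features by mean absolute SHAP value and keep the top six.
importance = np.abs(shap_values).mean(axis=0)
for idx in np.argsort(importance)[::-1][:6]:
    print(feature_names[idx], round(float(importance[idx]), 3))
```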

Improved Resource Allocation Model for Reducing Interference among Secondary Users in TV White Space for Broadband Services

  • Marco P. Mwaimu;Mike Majham;Ronoh Kennedy;Kisangiri Michael;Ramadhani Sinde
    • International Journal of Computer Science & Network Security / v.23 no.4 / pp.55-68 / 2023
  • In recent years, Television White Space (TVWS) has attracted the interest of many researchers due to the favorable propagation characteristics of the 470-790 MHz spectrum band. The abundance of unused channels in the TV spectrum allows secondary users (SUs) to use those channels for broadband services, especially in rural areas. However, as the number of SUs in a TVWS wireless network increases, the aggregate interference also increases. Aggregate interference is the combined harmful interference, which can include both co-channel and adjacent-channel components. Aggregate interference toward primary users (PUs) has been extensively scrutinized; therefore, resource allocation (power and spectrum) is crucial when designing a TVWS network to avoid interference from SUs to PUs and among the SUs themselves. This paper proposes a model to improve resource allocation and reduce the aggregate interference among SUs for broadband services in rural areas. The proposed model uses a joint power and spectrum hybrid of the Firefly algorithm (FA), Genetic algorithm (GA), and Particle Swarm Optimization (PSO) that considers both co-channel interference (CCI) and adjacent-channel interference (ACI). The algorithm is integrated with an admission control algorithm so that some SUs can be removed from the TVWS network whenever the SINR thresholds for the SUs and the PU are not met. We considered an infeasible system in which all SUs and the PU may not be supported simultaneously; therefore, we propose a joint spectrum and power allocation with an admission control algorithm that offers better complexity and performance than existing algorithms in the literature. The performance of the proposed algorithm is compared using metrics such as sum throughput, PU SINR, algorithm running time, and SU SINR below the threshold, and the results show that PSOFAGA with the ELGR admission control algorithm performs best compared to the GA, PSO, FA, and FAGAPSO algorithms.
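A toy version of the power-allocation subproblem is sketched below: a plain PSO searches SU transmit powers to maximize sum throughput with a penalty when an SU's SINR falls below the threshold, followed by a simple admission-control pass that drops SUs that still cannot be supported. The channel-gain model, penalty weight, and thresholds are invented for illustration; the paper's hybrid PSO-FA-GA and ELGR admission control are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
N, P_MAX, NOISE, SINR_MIN = 4, 1.0, 1e-3, 1.0       # SUs, max power (W), noise, SINR threshold
G = rng.uniform(0.2, 1.0, (N, N))                    # toy channel gains G[i, j]: tx j -> rx i

def sinr(p):
    desired = np.diag(G) * p
    interference = G @ p - desired
    return desired / (interference + NOISE)

def fitness(p):
    s = sinr(p)
    throughput = np.log2(1.0 + s).sum()
    penalty = 10.0 * np.clip(SINR_MIN - s, 0, None).sum()    # punish SINR violations
    return throughput - penalty

# Plain PSO over the power vector.
swarm = rng.uniform(0, P_MAX, (30, N))
vel = np.zeros_like(swarm)
pbest, pbest_f = swarm.copy(), np.array([fitness(p) for p in swarm])
for _ in range(200):
    gbest = pbest[pbest_f.argmax()]
    r1, r2 = rng.random(swarm.shape), rng.random(swarm.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - swarm) + 1.5 * r2 * (gbest - swarm)
    swarm = np.clip(swarm + vel, 0, P_MAX)
    f = np.array([fitness(p) for p in swarm])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = swarm[improved], f[improved]

best = pbest[pbest_f.argmax()]
admitted = sinr(best) >= SINR_MIN                    # admission control: drop unsupported SUs
print("powers:", best.round(3), "admitted SUs:", np.flatnonzero(admitted))
```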

Deep Neural Network Analysis System by Visualizing Accumulated Weight Changes (누적 가중치 변화의 시각화를 통한 심층 신경망 분석시스템)

  • Taelin Yang;Jinho Park
    • Journal of the Korea Computer Graphics Society / v.29 no.3 / pp.85-92 / 2023
  • Recently, interest in artificial intelligence has increased due to developments in fields such as ChatGPT and self-driving cars. However, there are still many unknown elements in the training process of artificial intelligence, so optimizing a model requires more time and effort than it should. Therefore, there is a need for a tool or methodology that can analyze the weight changes during the training process and help people understand those changes. In this research, I propose a visualization system that helps users understand accumulated weight changes. The system records the weights at each training period, accumulates the weight changes between periods, and stores the accumulated changes so that they can be plotted in 3D space. This research allows us to explore different aspects of the learning process, such as understanding how a model is trained, and provides an indicator of which hyperparameters should be changed for better performance. These attempts are expected to shed light on the artificial intelligence learning process, which is still largely opaque, and to contribute to the development and application of artificial intelligence models.
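Accumulating per-epoch weight changes and plotting them in 3D can be done along the lines below, here with PyTorch and matplotlib for a single linear layer. The tiny model, random data, and the absolute-change accumulation rule are assumptions for illustration rather than the paper's system.

```python
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

torch.manual_seed(0)
model = nn.Linear(8, 4)                              # tiny stand-in network
opt = torch.optim.SGD(model.parameters(), lr=0.1)
X, y = torch.randn(64, 8), torch.randn(64, 4)

prev = model.weight.detach().clone()
accum = torch.zeros_like(prev)                       # accumulated |weight change| per connection
for epoch in range(20):
    opt.zero_grad()
    nn.functional.mse_loss(model(X), y).backward()
    opt.step()
    with torch.no_grad():
        accum += (model.weight - prev).abs()
        prev = model.weight.detach().clone()

# 3D scatter: x = input unit, y = output unit, z = accumulated change.
rows, cols = torch.meshgrid(torch.arange(4), torch.arange(8), indexing="ij")
ax = plt.figure().add_subplot(projection="3d")
ax.scatter(cols.flatten().numpy(), rows.flatten().numpy(),
           accum.flatten().numpy(), c=accum.flatten().numpy())
ax.set_xlabel("input unit"); ax.set_ylabel("output unit"); ax.set_zlabel("accumulated |dw|")
plt.show()
```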

Drape Simulation Estimation for Non-Linear Stiffness Model (비선형 강성 모델을 위한 드레이프 시뮬레이션 결과 추정)

  • Eungjune Shim;Eunjung Ju;Myung Geol Choi
    • Journal of the Korea Computer Graphics Society / v.29 no.3 / pp.117-125 / 2023
  • In the development of clothing designs through virtual simulation, it is essential to minimize the differences between the virtual and the real world as much as possible. The most critical task in enhancing the similarity between virtual and real garments is to find simulation parameters that closely emulate the physical properties of the actual fabric in use. The simulation parameter optimization process requires manual tuning by experts, demanding high expertise and a significant amount of time. In particular, considerable time is consumed in repeatedly running simulations to check the results of the tuned parameters. Recently, to tackle this issue, artificial neural network models have been proposed that swiftly estimate the results of the drape test simulations predominantly used for parameter tuning. These earlier studies used relatively simple linear stiffness models and, instead of estimating the entire drape mesh, estimated only a portion of the mesh and interpolated the rest. However, there is still little research on non-linear stiffness models, which are commonly used in actual garment design. In this paper, we propose a learning model that estimates the results of drape simulations for non-linear stiffness models. Our model estimates the full high-resolution mesh of the drape. To validate the performance of the proposed method, experiments were conducted using three different drape test methods, demonstrating high estimation accuracy.
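A network of the general kind described, mapping non-linear stiffness parameters directly to the full vertex set of a drape mesh, might be set up as below. The parameter dimensionality, vertex count, and plain MLP architecture are hypothetical; the paper's actual model and training data are not reproduced.

```python
import torch
import torch.nn as nn

N_PARAMS = 12           # assumed size of the non-linear stiffness parameter vector
N_VERTS = 4096          # assumed vertex count of the high-resolution drape mesh

class DrapeEstimator(nn.Module):
    """Maps simulation parameters to a full drape mesh (N_VERTS x 3 positions)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_PARAMS, 256), nn.ReLU(),
            nn.Linear(256, 1024), nn.ReLU(),
            nn.Linear(1024, N_VERTS * 3),
        )

    def forward(self, params):
        return self.net(params).view(-1, N_VERTS, 3)

# One regression step against precomputed simulation results (random stand-ins here).
model = DrapeEstimator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
params = torch.rand(16, N_PARAMS)                 # batch of stiffness parameter sets
target = torch.randn(16, N_VERTS, 3)              # corresponding simulated drape meshes
loss = nn.functional.mse_loss(model(params), target)
loss.backward(); opt.step()
print(loss.item())
```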