• Title/Summary/Keyword: Intelligence Optimization


A Novel Grasshopper Optimization-based Particle Swarm Algorithm for Effective Spectrum Sensing in Cognitive Radio Networks

  • Ashok, J;Sowmia, KR;Jayashree, K;Priya, Vijay
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.2
    • /
    • pp.520-541
    • /
    • 2023
  • In cognitive radio networks (CRNs), spectrum sensing (SS) is of utmost significance. During the training phase, every CR user generates a sensing report under various conditions and, depending on a collective decision, either transmits or remains silent. In this stage, the fusion centre combines the local judgments of the CR users by majority vote and returns a final decision to every CR user. Sufficient data about the environment, including the activity of the primary user (PU) and every CR's response to that activity, is acquired, and sensing classes are created. During the classification stage, every CR user compares its most recent sensing report with the previously learned sensing classes, and distance vectors are generated. The posterior probability of every sensing class is derived from these quantitative data, and the sensing report is then classified as indicating either the presence or the absence of the PU. The ISVM technique is used to compute the quantities needed for the posterior probability. Here, the iterations of the SVM are tuned by a novel GO-PSA that combines the grasshopper optimization algorithm (GOA) and particle swarm optimization (PSO). GO-PSA is developed because it reduces computational complexity, yields lower error, and saves time compared with various state-of-the-art algorithms. The dependability of every CR user is taken into account when the local decisions are integrated at the fusion centre using an innovative decision-combination technique (a reliability-weighted vote along these lines is sketched below). Depending on the collective decision, the CR users then transmit or remain silent.
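
The abstract does not give the exact decision-combination rule, but a minimal reliability-weighted majority vote at the fusion centre could look like the following sketch; the threshold and weights are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fuse_decisions(local_decisions, reliabilities, threshold=0.5):
    """Combine binary local sensing decisions (1 = primary user present,
    0 = absent) at the fusion centre with a reliability-weighted vote."""
    local_decisions = np.asarray(local_decisions, dtype=float)
    reliabilities = np.asarray(reliabilities, dtype=float)
    # Weighted fraction of CR users reporting "PU present".
    score = np.dot(reliabilities, local_decisions) / reliabilities.sum()
    return int(score >= threshold)  # 1 -> PU present, 0 -> spectrum free

# Example: five CR users with unequal reliability weights.
votes = [1, 0, 1, 1, 0]
weights = [0.9, 0.4, 0.8, 0.7, 0.5]
print(fuse_decisions(votes, weights))  # -> 1 (global decision: PU present)
```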

Predicting concrete's compressive strength through three hybrid swarm intelligent methods

  • Zhang Chengquan;Hamidreza Aghajanirefah;Kseniya I. Zykova;Hossein Moayedi;Binh Nguyen Le
    • Computers and Concrete
    • /
    • v.32 no.2
    • /
    • pp.149-163
    • /
    • 2023
  • The uniaxial compressive strength is one of the main design parameters traditionally used in geotechnical engineering projects. The present paper employed three artificial intelligence methods, i.e., stochastic fractal search (SFS), multi-verse optimization (MVO), and the vortex search algorithm (VSA), to determine the compressive strength of concrete (CSC). To this end, 1030 concrete specimens were subjected to compressive strength tests. Based on the laboratory results, fly ash, cement, water, slag, coarse aggregate, fine aggregate, and SP were tested as input parameters of the model in order to decide the optimum input configuration for estimating the compressive strength. Performance was evaluated with three criteria, i.e., the root mean square error (RMSE), the mean absolute error (MAE), and the coefficient of determination (R2). The error criteria and determination coefficients obtained from the three techniques indicate that the SFS-MLP technique outperformed the MVO-MLP and VSA-MLP methods. The artificial neural network models on their own exhibit larger errors and lower correlation coefficients than the other models. Nonetheless, the stochastic fractal search algorithm considerably enhanced the precision and accuracy of the artificial neural network's estimates. According to the results, the SFS-MLP technique performed best in estimating the compressive strength of concrete (R2 = 0.99932 and 0.99942, and RMSE = 0.32611 and 0.24922). The novelty of our study lies in the use of a large dataset of 1030 entries and in optimizing the learning scheme of the neural prediction model via a 20:80 testing-to-training data split (a minimal version of this evaluation loop is sketched below).
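
The SFS/MVO/VSA weight tuning itself is specific to the paper, but the evaluation loop the abstract describes (an MLP regressor, a 20:80 testing-to-training split, and RMSE/MAE/R2 scoring) can be sketched as below; the data here are random placeholders standing in for the 1030-specimen dataset, not the real measurements.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

# Placeholder data: seven mix ingredients as inputs, strength as target.
rng = np.random.default_rng(0)
X = rng.random((1030, 7))
y = X @ rng.random(7) + 0.1 * rng.standard_normal(1030)

# 20:80 testing-to-training split, as described in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)

rmse = mean_squared_error(y_test, pred) ** 0.5
mae = mean_absolute_error(y_test, pred)
r2 = r2_score(y_test, pred)
print(f"RMSE={rmse:.4f}  MAE={mae:.4f}  R2={r2:.4f}")
```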

Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.99-112
    • /
    • 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms. It finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention in machine learning and artificial intelligence because of its remarkable performance improvement and flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVM. In those studies, DT ensembles have consistently demonstrated impressive improvements in the generalization behavior of DT, whereas NN and SVM ensembles have not shown comparable gains. Recently, several works have reported that ensemble performance can be degraded when the classifiers of an ensemble are highly correlated, which results in a multicollinearity problem that degrades the ensemble's performance, and they have proposed differentiated learning strategies to cope with this degradation. Hansen and Salamon (1990) argued that diversity among classifiers is necessary and sufficient for the performance enhancement of an ensemble. Breiman (1996) showed that ensemble learning can improve the performance of unstable learning algorithms but yields little improvement for stable ones. Unstable learning algorithms, such as decision tree learners, are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers; ensembles of unstable learners therefore guarantee some diversity among the classifiers. In contrast, stable learning algorithms such as NN and SVM generate similar classifiers despite small changes in the training data, so the correlation among the resulting classifiers is very high. This high correlation causes the multicollinearity problem, which degrades ensemble performance. Kim's work (2009) compared the performance of traditional prediction algorithms such as NN, DT, and SVM for bankruptcy prediction of Korean firms. It reports that stable learning algorithms such as NN and SVM have higher predictability than the unstable DT, whereas in ensemble learning the DT ensemble shows greater improvement than the NN and SVM ensembles. Further analysis with the variance inflation factor (VIF) empirically shows that the ensemble's performance degradation is due to multicollinearity, and suggests that ensemble optimization is needed to cope with it. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) to improve the performance of NN ensembles. Coverage optimization is a technique for choosing a sub-ensemble from the original ensemble so as to guarantee the diversity of its classifiers. CO-NN uses a genetic algorithm (GA), widely applied to various optimization problems, to solve the coverage optimization problem. The GA chromosomes are encoded as binary strings in which each bit indicates an individual classifier.
The fitness function maximizes error reduction, and a constraint on the variance inflation factor (VIF), a commonly used measure of multicollinearity, is added to ensure classifier diversity by removing highly correlated classifiers (a minimal illustration of such a fitness function follows below). We use Microsoft Excel and the GA software package Evolver. Experiments on company failure prediction show that CO-NN stably enhances the performance of NN ensembles by choosing classifiers with the ensemble's correlations in mind. Classifiers with potential multicollinearity problems are removed by the coverage optimization process, and CO-NN outperforms a single NN classifier and the NN ensemble at the 1% significance level, and the DT ensemble at the 5% significance level. However, further research issues remain. First, a decision optimization process to find the optimal combination function should be considered. Second, various learning strategies for dealing with data noise should be introduced in future work.
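
The abstract couples error reduction with a VIF constraint in the GA fitness; a minimal illustration of that idea might look like the sketch below. The VIF cutoff of 10, the majority-vote combiner, and the zero fitness for infeasible chromosomes are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def max_vif(pred_matrix):
    """Largest variance inflation factor among the columns of pred_matrix,
    where each column holds one selected classifier's predictions."""
    n_cols = pred_matrix.shape[1]
    if n_cols < 2:
        return 1.0
    vifs = []
    for j in range(n_cols):
        others = np.delete(pred_matrix, j, axis=1)
        r2 = LinearRegression().fit(others, pred_matrix[:, j]).score(
            others, pred_matrix[:, j])
        vifs.append(1.0 / max(1e-6, 1.0 - r2))
    return max(vifs)

def fitness(chromosome, pred_matrix, y_true, vif_cutoff=10.0):
    """Chromosome is a binary vector: bit i selects classifier i.
    Fitness = accuracy of the majority vote of the selected classifiers,
    forced to zero when the selected members are too collinear."""
    selected = np.flatnonzero(chromosome)
    if selected.size == 0:
        return 0.0
    preds = pred_matrix[:, selected]
    if max_vif(preds) > vif_cutoff:        # multicollinearity constraint
        return 0.0
    vote = (preds.mean(axis=1) >= 0.5).astype(int)
    return float((vote == y_true).mean())
```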

Artificial Intelligence and College Mathematics Education (인공지능(Artificial Intelligence)과 대학수학교육)

  • Lee, Sang-Gu;Lee, Jae Hwa;Ham, Yoonmee
    • Communications of Mathematical Education
    • /
    • v.34 no.1
    • /
    • pp.1-15
    • /
    • 2020
  • Today's healthcare, intelligent robots, smart home systems, and car sharing are already being transformed by cutting-edge information and communication technologies such as Artificial Intelligence (AI), the Internet of Things, the Internet of Intelligent Things, and Big Data, and these technologies deeply affect our lives. Robots have worked for humans in factories for several decades (FA, OA), AI doctors work in hospitals (Dr. Watson), and AI speakers (Giga Genie) and AI assistants (Siri, Bixby, Google Assistant) rely on natural language processing. To understand AI, knowledge of mathematics has become essential, not optional, and mathematicians are therefore called on to explain the mathematics that makes these technologies possible. Accordingly, the authors wrote the textbook 'Basic Mathematics for Artificial Intelligence', arranging the mathematical concepts and tools needed to understand AI and machine learning into one or two semesters, and organized lectures for undergraduate and graduate students of various majors who wish to explore careers in artificial intelligence. In this paper, we share our experience of conducting this class; the full contents are available at http://matrix.skku.ac.kr/math4ai/.

Prediction of Storm Surge Height Using Synthesized Typhoons and Artificial Intelligence (합성태풍과 인공지능을 활용한 폭풍해일고 예측)

  • Eum, Ho-Sik;Park, Jong-Jib;Jeong, Kwang-Young;Park, Young-Min
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.26 no.7
    • /
    • pp.892-903
    • /
    • 2020
  • Rapid and accurate prediction of storm-surge height during typhoons is essential for responding to coastal disasters. Most storm-surge predictions are based on numerical modeling, but numerical modeling demands significant computing resources and time. Recently, various studies have examined the rapid production of predictive data based on artificial intelligence, and in this study an artificial-intelligence-based storm-surge height prediction was performed. A large amount of training data was needed; because the number of past typhoons is limited, many synthesized typhoons were created with a tropical cyclone risk model, and the corresponding storm-surge heights were generated with a storm surge model. Comparing the storm-surge heights predicted by artificial intelligence against actual typhoons showed a root-mean-square error of 0.09 ~ 0.30 m, a correlation coefficient of 0.65 ~ 0.94, and an absolute relative error of the maximum height of 1.0 ~ 52.5% (a few lines reproducing these scores are sketched below). Although the errors were somewhat large for certain typhoons and points, future studies are expected to improve accuracy through training-data optimization.
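
The three scores quoted in the abstract can be reproduced with a few lines of NumPy, as in the sketch below; how the surge series themselves are produced (the tropical cyclone risk model and the storm surge model) is specific to the study and not reproduced here.

```python
import numpy as np

def surge_scores(observed, predicted):
    """Scores used in the abstract: RMSE, Pearson correlation, and the
    absolute relative error of the maximum surge height (in percent)."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    corr = np.corrcoef(observed, predicted)[0, 1]
    peak_err = abs(observed.max() - predicted.max()) / observed.max() * 100
    return rmse, corr, peak_err
```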

Federated learning-based client training acceleration method for personalized digital twins (개인화 디지털 트윈을 위한 연합학습 기반 클라이언트 훈련 가속 방식)

  • YoungHwan Jeong;Won-gi Choi;Hyoseon Kye;JeeHyeong Kim;Min-hwan Song;Sang-shin Lee
    • Journal of Internet Computing and Services
    • /
    • v.25 no.4
    • /
    • pp.23-37
    • /
    • 2024
  • A digital twin is an M&S (Modeling and Simulation) technology designed to solve or optimize problems in the real world by replicating physical objects as virtual objects in the digital world and predicting, through simulation, phenomena that may occur in the future. Digital twins have been elaborately designed and utilized with collected data to achieve specific purposes in large-scale environments such as cities and industrial facilities. To apply digital twin technology to daily life and expand it into user-customized services, practical but sensitive issues such as personal information protection and personalization of simulations must be resolved. To address this, this paper proposes a federated learning-based accelerated client training method (FACTS) for personalized digital twins. The basic approach is a cluster-driven federated learning procedure that protects personal information while selecting a training model similar to the user and training it adaptively (a rough sketch of the cluster-based selection idea is given below). Experiments under various statistically heterogeneous conditions show that FACTS is superior to existing FL methods in terms of training speed and resource efficiency.
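
FACTS itself is not specified in detail in the abstract; the sketch below only illustrates the general idea of cluster-driven client selection (cluster the clients' model updates and keep the cluster closest to the target user). The k-means clustering, the number of clusters, and the random update vectors are all assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_similar_clients(client_updates, target_update, n_clusters=4):
    """Cluster flattened client model updates and return the indices of the
    clients whose cluster is closest to the target user's update -- a rough
    stand-in for 'selecting a training model similar to the user'."""
    updates = np.asarray(client_updates)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(updates)
    target_cluster = km.predict(np.asarray(target_update).reshape(1, -1))[0]
    return np.flatnonzero(km.labels_ == target_cluster)

# Example with random 32-dimensional updates from 20 clients.
rng = np.random.default_rng(0)
clients = rng.standard_normal((20, 32))
target = rng.standard_normal(32)
print(select_similar_clients(clients, target))
```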

RRSEB: A Reliable Routing Scheme For Energy-Balancing Using A Self-Adaptive Method In Wireless Sensor Networks

  • Shamsan Saleh, Ahmed M.;Ali, Borhanuddin Mohd.;Mohamad, Hafizal;Rasid, Mohd Fadlee A.;Ismail, Alyani
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.7
    • /
    • pp.1585-1609
    • /
    • 2013
  • Over recent years, a great deal of research on wireless sensor networks (WSNs) has been conducted, owing to their many applications such as environmental monitoring, object tracking, disaster management, manufacturing, and monitoring and control. Some WSN applications demand both energy efficiency and link reliability, so this paper presents a routing protocol that considers these two criteria. We propose a new mechanism, the Reliable Routing Scheme for Energy-Balancing (RRSEB), to reduce the number of packets dropped during data communication. It is based on Swarm Intelligence (SI), specifically the Ant Colony Optimization (ACO) method (the classic ACO transition rule it builds on is sketched below). RRSEB is a self-adaptive method that maintains high routing reliability in WSNs when failures occur due to node movement or energy depletion. It does so by creating alternative paths alongside the data route obtained during the path discovery stage; this updates and supplies new routing information so that multiple paths can be constructed, increasing the reliability of the sensor network. Simulations show that the proposed method gives better results in terms of packet delivery ratio and energy efficiency.
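
The abstract does not spell out its particular ACO variant. The standard ACO transition rule it builds on is shown below; treating the heuristic term as residual energy or link reliability is an assumption made here for illustration.

```python
import numpy as np

def next_hop_probabilities(pheromone, heuristic, alpha=1.0, beta=2.0):
    """Classic ACO transition rule: the probability of picking neighbour i
    as the next hop is proportional to tau_i**alpha * eta_i**beta, where
    tau is the pheromone level and eta a link-quality heuristic."""
    tau = np.asarray(pheromone, dtype=float) ** alpha
    eta = np.asarray(heuristic, dtype=float) ** beta
    weights = tau * eta
    return weights / weights.sum()

# Three candidate next hops with different pheromone and heuristic values.
print(next_hop_probabilities([0.8, 0.5, 0.2], [0.9, 0.6, 0.95]))
```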

A NOVEL APPROACH TO FIND OPTIMIZED NEUTRON ENERGY GROUP STRUCTURE IN MOX THERMAL LATTICES USING SWARM INTELLIGENCE

  • Akbari, M.;Khoshahval, F.;Minuchehr, A.;Zolfaghari, A.
    • Nuclear Engineering and Technology
    • /
    • v.45 no.7
    • /
    • pp.951-960
    • /
    • 2013
  • The energy group structure has a significant effect on the results of multigroup transport calculations. $UO_2-PuO_2$ (MOX) is a recently developed fuel that consumes recycled plutonium, and for such a fuel, which contains various resonant nuclides, the selection of the energy group structure is more critical than for $UO_2$ fuels. In this paper, in order to improve the accuracy of the integral results in MOX thermal lattices calculated with the WIMSD-5B code, a swarm intelligence method is employed to optimize the energy group structure of the WIMS library (a generic particle-swarm search of the kind involved is sketched below). In this process, the NJOY code system is used to generate the 69-group cross sections of the WIMS code for the specified energy structure. The multiplication factor and spectral indices are compared against the results of the continuous-energy MCNP-4C code to evaluate each energy group structure. Calculations for four different types of $H_2O$-moderated $UO_2-PuO_2$ (MOX) lattices show that the optimized energy structure yields more accurate results than the original WIMS structure.
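
The abstract names only "a swarm intelligence method"; a plain particle swarm optimizer is used below purely as a representative sketch. In practice each particle would encode a candidate set of group boundaries and the objective would score it against reference results (e.g. WIMSD-5B vs. MCNP-4C); the toy objective here is only a placeholder.

```python
import numpy as np

def pso_minimize(objective, dim, n_particles=20, iters=100,
                 lower=0.0, upper=1.0, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic particle swarm minimizer over a box-constrained vector."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lower, upper, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lower, upper)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy objective standing in for the real lattice-physics score.
best, score = pso_minimize(lambda b: np.sum((np.sort(b) - b) ** 2), dim=8)
```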

A Study on 2D Character Response of Speed Method Using Unity

  • HAN, Dong-Hun;CHOI, Jeong-Hyun;LIM, Myung-Jae
    • Korean Journal of Artificial Intelligence
    • /
    • v.9 no.2
    • /
    • pp.35-40
    • /
    • 2021
  • Many game companies seek better optimization and easy-to-apply logic to prolong a game's lifespan and provide a better game environment for users. This paper therefore presents a key-input response method called RoS (Response of Speed). The purpose of the method is to let the character perform several motions simultaneously, showing natural motion without errors even when control keys are pressed in combination, so that developers can reduce bugs and development time in future game development. To support quickly built game environments, the new method is compared with the popular motion method to determine which is faster and adapts better to diverse games. The results suggest that the Response of Speed method is better for optimizing frames and reducing reaction time, showing faster response and speed. With the popularity of side-scrollers, many 2D scroll games follow the formula of Dash, Shoot, Walk, Stay, and Crouch, and with the development of game engines it is becoming easier to implement them. Therefore, although the presented method differs from the popular method, applying it to a game should not be difficult because porting is easy. In the future, we plan to study how to minimize the delay between character-motion transitions so that the game can be optimized further.

A Reinforcement Learning Framework for Autonomous Cell Activation and Customized Energy-Efficient Resource Allocation in C-RANs

  • Sun, Guolin;Boateng, Gordon Owusu;Huang, Hu;Jiang, Wei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.8
    • /
    • pp.3821-3841
    • /
    • 2019
  • Cloud radio access networks (C-RANs) have recently been regarded as a promising concept for future 5G technologies, in which all DSP processors are moved into a central baseband unit (BBU) pool in the cloud, and distributed remote radio heads (RRHs) compress and forward received radio signals from mobile users to the BBUs over radio links. In such a dynamic environment, automatic decision-making approaches, such as artificial-intelligence-based deep reinforcement learning (DRL), become essential in designing new solutions. In this paper, we propose a generic framework of autonomous cell activation and customized physical resource allocation for energy consumption and QoS optimization in wireless networks. We formulate the problem as two models, fractional power control with bandwidth adaptation and full power control with bandwidth allocation, and set up a Q-learning model (its basic update rule is sketched below) to satisfy users' QoS requirements and achieve low energy consumption with the minimum number of active RRHs under varying traffic demand and network densities. Extensive simulations show the effectiveness of the proposed solution compared to existing schemes.
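
The state, action, and reward design is specific to the paper, but the tabular Q-learning update at the heart of such a scheme is standard; the sketch below uses placeholder state/action/reward semantics (traffic demand, RRH switch-on/off, energy-plus-QoS cost) that are assumptions, not the authors' exact formulation.

```python
import numpy as np

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    In the cell-activation setting the state could encode traffic demand and
    active-RRH count, the action a switch-on/off decision, and the reward a
    mix of energy cost and QoS violation -- all placeholders here."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])
    return Q

# Tiny example: 4 states x 3 actions, one update after observing a reward.
Q = np.zeros((4, 3))
Q = q_update(Q, state=0, action=2, reward=-1.0, next_state=1)
```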