• Title/Summary/Keyword: Hybrid metrics

Energy Efficient Cluster Head Selection and Routing Algorithm using Hybrid Firefly Glow-Worm Swarm Optimization in WSN

  • Bharathiraja S;Selvamuthukumaran S;Balaji V
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.8
    • /
    • pp.2140-2156
    • /
    • 2023
  • A Wireless Sensor Network (WSN) is built from tiny, low-cost sensor nodes that consume little power and transmit data to a base station. The primary challenges in WSNs are the distance between nodes, the amount of energy consumed, and time delay. Each sensor node is powered by a battery that cannot be recharged, and energy consumption rises in direct proportion to the distance between nodes. Here, we present a Hybrid Firefly Glow-Worm Swarm Optimization (HF-GSO) guided routing strategy for preserving WSNs' low power footprint. An efficient firefly-optimization-based fitness function is used to select the Cluster Head (CH), which helps minimise power consumption and the occurrence of dead sensor nodes. After a CH has been chosen, the Glow-Worm Swarm Optimization (GSO) algorithm determines the best path for sending data to the sink node. Power consumption, throughput, packet delivery ratio, and network lifetime are measured and compared between the proposed method and conceptually similar existing methods. Simulation results showed that the proposed method significantly reduced energy consumption compared to state-of-the-art methods while increasing the number of functioning sensor nodes by 2.4%, producing superior outcomes compared to alternative optimization-based methods.
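
A minimal Python sketch, not the authors' implementation, of the kind of energy- and distance-based fitness scoring that a firefly-style cluster-head selection relies on; the weights, field layout, and node counts are illustrative assumptions, and the full firefly/GSO swarm updates are omitted.

```python
import numpy as np

def select_cluster_heads(positions, energies, base_station, k=5, w_energy=0.6, w_dist=0.4):
    """Score each node by residual energy and proximity to the base station,
    then pick the k highest-scoring nodes as cluster heads (CHs)."""
    dist_to_bs = np.linalg.norm(positions - base_station, axis=1)
    energy_term = energies / energies.max()           # favour nodes with more residual energy
    dist_term = 1.0 - dist_to_bs / dist_to_bs.max()   # favour nodes closer to the base station
    fitness = w_energy * energy_term + w_dist * dist_term
    return np.argsort(fitness)[::-1][:k]              # indices of the selected CHs

# Toy network: 50 nodes scattered over a 100 m x 100 m field (illustrative values).
rng = np.random.default_rng(0)
positions = rng.uniform(0, 100, size=(50, 2))
energies = rng.uniform(0.2, 1.0, size=50)             # residual energy in joules (illustrative)
ch_indices = select_cluster_heads(positions, energies, base_station=np.array([50.0, 150.0]))
print("Cluster heads:", ch_indices)
```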

A Hybrid Semantic-Geometric Approach for Clutter-Resistant Floorplan Generation from Building Point Clouds

  • Kim, Seongyong;Yajima, Yosuke;Park, Jisoo;Chen, Jingdao;Cho, Yong K.
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.792-799
    • /
    • 2022
  • Building Information Modeling (BIM) technology is a key component of modern construction engineering and project management workflows. As-is BIM models that represent the spatial reality of a project site offer crucial information to stakeholders for construction progress monitoring, error checking, and building maintenance. Geometric methods for automatically converting raw scan data into BIM models (Scan-to-BIM) often fail to make use of higher-level semantic information in the data, whereas semantic segmentation methods only output labels at the point level without creating the object-level models that BIM requires. To address these issues, this research proposes a hybrid semantic-geometric approach for clutter-resistant floorplan generation from laser-scanned building point clouds. The input point clouds are first pre-processed by normalizing the coordinate system and removing outliers. Then, a semantic segmentation network based on PointNet++ labels each point as ceiling, floor, wall, door, stair, or clutter. The clutter points are removed, whereas the wall, door, and stair points are used for 2D floorplan generation. A region-growing segmentation algorithm paired with geometric reasoning rules groups the points into individual building elements. Finally, a 2-fold Random Sample Consensus (RANSAC) algorithm parameterizes the building elements into 2D lines, which form the output floorplan. The proposed method is evaluated using the metrics of precision, recall, Intersection-over-Union (IOU), Betti error, and warping error.
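
As an illustration of the final step only, here is a minimal Python sketch of RANSAC line fitting on a 2D point slice; the inlier tolerance, iteration count, and synthetic wall/clutter points are assumptions, not the paper's settings, and the PointNet++ segmentation and region-growing stages are omitted.

```python
import numpy as np

def ransac_line(points, n_iter=200, inlier_tol=0.05, seed=1):
    """Fit a 2D line to noisy points by repeatedly sampling two points and
    keeping the candidate line with the largest inlier set."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = np.array([], dtype=int), None
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        direction = q - p
        norm = np.linalg.norm(direction)
        if norm < 1e-9:
            continue
        # Perpendicular distance of every point to the candidate line through p and q.
        normal = np.array([-direction[1], direction[0]]) / norm
        dist = np.abs((points - p) @ normal)
        inliers = np.where(dist < inlier_tol)[0]
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (p, q)
    return best_model, best_inliers

# Toy wall segment: points along y = 0.5x plus noise, mixed with clutter points.
rng = np.random.default_rng(1)
x = rng.uniform(0, 5, 80)
wall = np.column_stack([x, 0.5 * x + rng.normal(0, 0.02, 80)])
clutter = rng.uniform(0, 5, size=(20, 2))
model, inliers = ransac_line(np.vstack([wall, clutter]))
print("Inlier count:", len(inliers))
```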

Classification-Based Approach for Hybridizing Statistical and Rule-Based Machine Translation

  • Park, Eun-Jin;Kwon, Oh-Woog;Kim, Kangil;Kim, Young-Kil
    • ETRI Journal
    • /
    • v.37 no.3
    • /
    • pp.541-550
    • /
    • 2015
  • In this paper, we propose a classification-based approach for hybridizing statistical machine translation and rule-based machine translation. Both the training dataset used to learn our proposed classifier and our feature extraction method affect the hybridization quality. To create such a training dataset, a previous approach used auto-evaluation metrics to determine, by comparison, which of a set of component machine translation (MT) systems gave the more accurate translation; the most accurate translation was then labelled with the MT system from which it came. In this previous approach, when the metric evaluation scores were low, there was a high level of uncertainty as to which component MT system actually produced the better translation. To relax such uncertainty, or error in classification, we propose an alternative labeling approach: a cut-off method. In our experiments, using this cut-off method in our proposed classifier, we achieved a translation accuracy of 81.5% - a 5.0% improvement over existing methods.
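
A minimal sketch, under the assumption that each source sentence already has an auto-evaluation score for both component systems, of how a cut-off style labeling rule could build the classifier's training labels; the score values and cut-off are illustrative, not the paper's.

```python
def label_training_pairs(smt_scores, rbmt_scores, cutoff=0.1):
    """Label each source sentence with the better-scoring MT system, but drop
    (label as None) pairs whose score gap is below the cut-off, where the
    'winner' is too uncertain to be a reliable training example."""
    labels = []
    for s, r in zip(smt_scores, rbmt_scores):
        gap = abs(s - r)
        if gap < cutoff:
            labels.append(None)            # ambiguous: excluded from classifier training
        else:
            labels.append("SMT" if s > r else "RBMT")
    return labels

print(label_training_pairs([0.42, 0.31, 0.55], [0.40, 0.45, 0.20]))
# -> [None, 'RBMT', 'SMT']
```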

Metric based Performance Measurement of Software Development Methodologies from Traditional to DevOps Automation Culture

  • Poonam Narang;Pooja Mittal
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.6
    • /
    • pp.107-114
    • /
    • 2023
  • Successful implementation of DevOps practices significantly improves software efficiency, collaboration, and security. Most organizations are adopting DevOps for faster, higher-quality software delivery. DevOps brings development and operations teams together to overcome the kinds of communication gaps responsible for software failures. It relies on different sets of alternative tools to automate the tasks of continuous integration, testing, delivery, deployment, and monitoring. Although DevOps is regarded as a very reliable and responsible environment for quality software delivery, it still lacks quantifiable evidence placing it above other traditional and agile development methods. This research evaluates the quantitative performance of DevOps and traditional/agile development methods based on software metrics. It includes three sample projects or code repositories to quantify the results; for the DevOps-integrated tool chain, the study uses our earlier proposed and implemented DevOps hybrid model of integrated automation tools. For result discussion and validation, tabular and graphical comparisons are included to identify the best-performing model. This comparative and evaluative research should help young researchers and students become well versed with the automated environment of DevOps, the latest emerging buzzword of the development industry.
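
As a generic illustration (not tied to the paper's repositories, metrics, or tool chain), the sketch below computes two common delivery metrics, mean change lead time and deployment frequency, from commit and deployment timestamps; the record layout and values are assumptions.

```python
from datetime import datetime, timedelta

# Illustrative records: each change has a commit time and the time it reached production.
changes = [
    {"commit": datetime(2023, 5, 1, 9, 0),  "deployed": datetime(2023, 5, 1, 15, 0)},
    {"commit": datetime(2023, 5, 2, 11, 0), "deployed": datetime(2023, 5, 3, 10, 0)},
    {"commit": datetime(2023, 5, 4, 8, 0),  "deployed": datetime(2023, 5, 4, 12, 30)},
]

lead_times = [c["deployed"] - c["commit"] for c in changes]
mean_lead_time = sum(lead_times, timedelta()) / len(lead_times)

observation_days = (max(c["deployed"] for c in changes)
                    - min(c["deployed"] for c in changes)).days or 1
deploys_per_day = len(changes) / observation_days

print(f"Mean change lead time: {mean_lead_time}")
print(f"Deployment frequency: {deploys_per_day:.2f} deploys/day")
```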

Comparison of Ensemble Perturbations using Lorenz-95 Model: Bred vectors, Orthogonal Bred vectors and Ensemble Transform Kalman Filter(ETKF) (로렌쯔-95 모델을 이용한 앙상블 섭동 비교: 브레드벡터, 직교 브레드벡터와 앙상블 칼만 필터)

  • Chung, Kwan-Young;Barker, Dale;Moon, Sun-Ok;Jeon, Eun-Hee;Lee, Hee-Sang
    • Atmosphere
    • /
    • v.17 no.3
    • /
    • pp.217-230
    • /
    • 2007
  • Using the Lorenz-95 simple model, which can reproduce many atmospheric characteristics, we compare the performance of ensemble strategies such as bred vectors, bred vectors rotated to be orthogonal to each bred member, and the Ensemble Transform Kalman Filter (ETKF). The performance metrics used are the RMSE of the ensemble mean, the ratio of the RMS error of the ensemble mean to the ensemble spread, rank histograms showing whether the ensemble members represent the true probability density function (pdf) well, and the distribution of eigenvalues of the forecast ensemble, which provides useful information on the independence of each member. The orthogonal bred vectors achieve considerable improvement over the plain bred vectors in all aspects of RMSE, spread, and member independence. When the bred vectors are rotated for orthogonalization, the improvement rate for the ensemble spread is almost double that for the RMS error of the ensemble mean compared to the non-rotated bred vectors on this simple model, which is consistent with a tentative test on the operational model at KMA. In conclusion, ETKF is superior to the other two methods in all of the assessment measures we used for ensemble prediction. However, we cannot decide which perturbation strategy is better with respect to the structure of the background error covariance, and further studies are needed on the best perturbation method for hybrid variational data assimilation that accounts for the error of the day (EOTD).
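
A minimal Python sketch of the Lorenz-95 (Lorenz-96) tendency, an RK4 integration step, and a QR-based orthogonalization of perturbation vectors in the spirit of the rotated bred vectors; the 40-variable, F=8 configuration, step size, and perturbation amplitudes are illustrative assumptions, not the authors' experimental setup.

```python
import numpy as np

def lorenz95_tendency(x, forcing=8.0):
    """dX_i/dt = (X_{i+1} - X_{i-2}) * X_{i-1} - X_i + F, with cyclic indices."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x, dt=0.05, forcing=8.0):
    """One fourth-order Runge-Kutta step of the Lorenz-95 model."""
    k1 = lorenz95_tendency(x, forcing)
    k2 = lorenz95_tendency(x + 0.5 * dt * k1, forcing)
    k3 = lorenz95_tendency(x + 0.5 * dt * k2, forcing)
    k4 = lorenz95_tendency(x + dt * k3, forcing)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def orthogonalize(perturbations):
    """Rotate perturbation columns to be mutually orthogonal via QR factorization,
    then rescale each column back to its original amplitude."""
    amplitudes = np.linalg.norm(perturbations, axis=0)
    q, _ = np.linalg.qr(perturbations)
    return q * amplitudes

rng = np.random.default_rng(0)
state = rng.normal(size=40)                 # 40-variable Lorenz-95 state
for _ in range(100):                        # spin up onto the attractor
    state = rk4_step(state)
perts = 1e-3 * rng.normal(size=(40, 4))     # four bred-vector-like perturbations
orth = orthogonalize(perts)
print(np.round(orth.T @ orth, 10))          # ~diagonal: rotated members are mutually orthogonal
```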

Novel nomogram-based integrated gonadotropin therapy individualization in in vitro fertilization/intracytoplasmic sperm injection: A modeling approach

  • Ebid, Abdel Hameed IM;Motaleb, Sara M Abdel;Mostafa, Mahmoud I;Soliman, Mahmoud MA
    • Clinical and Experimental Reproductive Medicine
    • /
    • v.48 no.2
    • /
    • pp.163-173
    • /
    • 2021
  • Objective: This study aimed to characterize a validated model for predicting oocyte retrieval in controlled ovarian stimulation (COS) and to construct model-based nomograms to assist clinical decision-making regarding the gonadotropin protocol and dose. Methods: This observational, retrospective, cohort study included 636 women with primary unexplained infertility and a normal menstrual cycle who were attempting assisted reproductive therapy for the first time. The enrolled women were split into an index group (n=497) for model building and a validation group (n=139). The primary outcome was absolute oocyte count. The dose-response relationship was tested using modified Poisson, negative binomial, hybrid Poisson-Emax, and linear models. The validation group was similarly analyzed, and its results were compared to those of the index group. Results: The Poisson model with the log-link function demonstrated superior predictive performance and precision (Akaike information criterion, 2,704; λ=8.27; relative standard error (λ)=2.02%). The covariate analysis included women's age (p<0.001), antral follicle count (p<0.001), basal follicle-stimulating hormone level (p<0.001), gonadotropin dose (p=0.042), and protocol type (p=0.002 and p<0.001 for the short and antagonist protocols, respectively). The estimates from 500 bootstrap samples were close to those of the original model. The validation group showed model assessment metrics comparable to the index model. Based on the fitted model, a static nomogram was built to improve visualization, and a dynamic electronic tool was created for convenience of use. Conclusion: Based on our validated model, nomograms were constructed to help clinicians individualize the stimulation protocol and gonadotropin doses in COS cycles.
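
A minimal sketch of fitting a Poisson regression with a log link to a count outcome using statsmodels, in the spirit of the study's oocyte-count model; the synthetic covariates (age, AFC, basal FSH) and coefficients are illustrative, not the study's data or estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic cohort: the coefficients below are illustrative, not the study's estimates.
rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "age": rng.uniform(20, 40, n),
    "afc": rng.integers(3, 30, n),          # antral follicle count
    "fsh": rng.uniform(3, 12, n),           # basal FSH (IU/L)
})
log_mu = 1.0 - 0.03 * df["age"] + 0.06 * df["afc"] - 0.02 * df["fsh"]
df["oocytes"] = rng.poisson(np.exp(log_mu))

# Poisson GLM; the log link is the default for the Poisson family.
X = sm.add_constant(df[["age", "afc", "fsh"]])
model = sm.GLM(df["oocytes"], X, family=sm.families.Poisson()).fit()
print(model.summary())
print("Predicted count for age 30, AFC 15, FSH 6:",
      model.predict(pd.DataFrame({"const": [1.0], "age": [30], "afc": [15], "fsh": [6]})))
```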

Hybrid machine learning with moth-flame optimization methods for strength prediction of CFDST columns under compression

  • Quang-Viet Vu;Dai-Nhan Le;Thai-Hoan Pham;Wei Gao;Sawekchai Tangaramvong
    • Steel and Composite Structures
    • /
    • v.51 no.6
    • /
    • pp.679-695
    • /
    • 2024
  • This paper presents a novel technique that combines machine learning (ML) with moth-flame optimization (MFO) methods to predict the axial compressive strength (ACS) of concrete-filled double-skin steel tube (CFDST) columns. The proposed model is trained and tested with a dataset containing 125 tests of CFDST columns subjected to compressive loading. Five ML models, including extreme gradient boosting (XGBoost), gradient tree boosting (GBT), categorical gradient boosting (CAT), support vector machines (SVM), and decision tree (DT) algorithms, are utilized in this work. The MFO algorithm is applied to find the optimal hyperparameters of these ML models and to determine the most effective model for predicting the ACS of CFDST columns. Predictive results across several performance metrics reveal that the MFO-CAT model provides superior accuracy compared to the other considered models. The accuracy of the MFO-CAT model is validated by comparing its predictions with existing design codes and formulae. Moreover, the significance and contribution of each feature in the dataset are examined by employing the SHapley Additive exPlanations (SHAP) method. A comprehensive uncertainty quantification of the probabilistic characteristics of the ACS of CFDST columns is conducted for the first time to examine the models' responses to variations of the input variables in stochastic environments. Finally, a web-based application is developed to predict the ACS of CFDST columns, enabling rapid practical use without requiring any programming or machine learning expertise.
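
A heavily simplified sketch of a moth-flame-style hyperparameter search wrapped around a boosted-tree regressor; scikit-learn's GradientBoostingRegressor stands in for CAT (CatBoost), the synthetic data stands in for the 125 CFDST tests, and the spiral update, bounds, and population size are illustrative simplifications of MFO rather than the paper's algorithm.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 125-test CFDST dataset.
X, y = make_regression(n_samples=125, n_features=6, noise=10.0, random_state=0)

def objective(params):
    """Negative mean CV R^2 (lower is better) for the given learning rate and tree depth."""
    lr, depth = params
    model = GradientBoostingRegressor(learning_rate=lr, max_depth=int(round(depth)), random_state=0)
    return -cross_val_score(model, X, y, cv=3, scoring="r2").mean()

rng = np.random.default_rng(0)
lower, upper = np.array([0.01, 1.0]), np.array([0.3, 5.0])   # bounds: learning_rate, max_depth
moths = rng.uniform(lower, upper, size=(6, 2))
best_params, best_score = None, np.inf

for _ in range(10):
    fitness = np.array([objective(m) for m in moths])
    order = np.argsort(fitness)
    flames = moths[order].copy()                              # best positions this iteration
    if fitness[order[0]] < best_score:
        best_score, best_params = fitness[order[0]], flames[0].copy()
    for i in range(len(moths)):                               # each moth spirals toward a flame
        flame = flames[min(i, len(flames) - 1)]
        distance = np.abs(flame - moths[i])
        t = rng.uniform(-1, 1, size=2)
        moths[i] = np.clip(distance * np.exp(t) * np.cos(2 * np.pi * t) + flame, lower, upper)

print(f"Best CV R^2: {-best_score:.3f} at learning_rate={best_params[0]:.3f}, "
      f"max_depth={int(round(best_params[1]))}")
```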

A Hybrid Recommender System based on Collaborative Filtering with Selective Use of Overall and Multicriteria Ratings (종합 평점과 다기준 평점을 선택적으로 활용하는 협업필터링 기반 하이브리드 추천 시스템)

  • Ku, Min Jung;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.85-109
    • /
    • 2018
  • A recommender system recommends the items a customer is expected to purchase in the future based on his or her previous purchase behavior, and serves as a tool for realizing one-to-one personalization for e-commerce service companies. Traditional recommender systems, especially those based on collaborative filtering (CF), the most popular recommendation algorithm in both academia and industry, are designed to generate the recommendation list using a single criterion, the 'overall rating'. However, this has critical limitations for understanding customers' preferences in detail. Recently, to mitigate these limitations, some leading e-commerce companies have begun to collect feedback from their customers in the form of 'multicriteria ratings'. Multicriteria ratings enable companies to understand their customers' preferences from multidimensional viewpoints, and they are easy to handle and analyze because they are quantitative. However, recommendation using multicriteria ratings also has the limitation that it may omit detailed information on a user's preference, because in most cases it only considers three to five predetermined criteria. Against this background, this study proposes a novel hybrid recommendation system that selectively uses the results from 'traditional CF' and 'CF using multicriteria ratings'. Our proposed system is based on the premise that some people have a holistic preference scheme, whereas others have a composite preference scheme. Thus, our system is designed to use traditional CF with the overall rating for users with holistic preferences, and CF with multicriteria ratings for users with composite preferences. To validate the usefulness of the proposed system, we applied it to a real-world dataset for POI (point-of-interest) recommendation. Providing personalized POI recommendations is attracting more attention as the popularity of location-based services such as Yelp and Foursquare increases. The dataset was collected from university students via a Web-based online survey system, through which we gathered the overall rating as well as the rating for each criterion for 48 POIs located near K university in Seoul, South Korea. The criteria include 'food or taste', 'price', and 'service or mood'. As a result, we obtained 2,878 valid ratings from 112 users. Among the 48 items, 38 items (80%) are used as the training dataset, and the remaining 10 items (20%) are used as the validation dataset. To examine the effectiveness of the proposed system (i.e., the hybrid selective model), we compared its performance to that of two comparison models - traditional CF and CF with multicriteria ratings. The performances of the recommender systems were evaluated using two metrics - average MAE (mean absolute error) and precision-in-top-N, where precision-in-top-N represents the percentage of truly high overall ratings among the N items that the model predicted would be most relevant for each user. The experimental system was developed using Microsoft Visual Basic for Applications (VBA). The experimental results showed that our proposed system (avg. MAE = 0.584) outperformed traditional CF (avg. MAE = 0.591) as well as multicriteria CF (avg. MAE = 0.608). We also found that multicriteria CF performed worse than traditional CF on our dataset, which contradicts the results of most previous studies. This result supports the premise of our study that people have two different types of preference schemes - holistic and composite. Besides MAE, the proposed system outperformed all the comparison models in precision-in-top-3, precision-in-top-5, and precision-in-top-7. Paired-samples t-tests showed that, in terms of average MAE, our proposed system outperformed traditional CF at the 10% statistical significance level and multicriteria CF at the 1% level. The proposed system sheds light on how to understand and utilize users' preference schemes in the recommender systems domain.
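
A toy Python sketch of the selective-dispatch idea: per user, keep whichever predictor (overall-rating CF or multicriteria CF) shows the lower MAE on a few held-out items. The rating matrices, the similarity measure, and the per-criterion averaging are assumptions; a real system would mask held-out ratings from the predictors.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_criteria = 8, 12, 3

# Synthetic 1-5 ratings: one overall matrix and one matrix per criterion (taste, price, mood).
overall = rng.integers(1, 6, size=(n_users, n_items)).astype(float)
criteria = rng.integers(1, 6, size=(n_criteria, n_users, n_items)).astype(float)

def cf_predict(matrix, user, item):
    """Toy user-based CF: similarity-weighted average of other users' ratings for the item."""
    others = np.delete(np.arange(matrix.shape[0]), user)
    sims = np.array([np.corrcoef(matrix[user], matrix[o])[0, 1] for o in others])
    sims = np.nan_to_num(sims).clip(min=0) + 1e-9
    return np.average(matrix[others, item], weights=sims)

def predict_overall(user, item):
    return cf_predict(overall, user, item)

def predict_multicriteria(user, item):
    # Average the per-criterion CF predictions as a stand-in for a multicriteria aggregator.
    return np.mean([cf_predict(criteria[c], user, item) for c in range(n_criteria)])

# Selective hybrid: per user, keep whichever predictor has the lower MAE on held-out items.
held_out = [0, 1, 2]          # simplified: these ratings also leak into the toy predictors
for user in range(n_users):
    mae_o = np.mean([abs(predict_overall(user, i) - overall[user, i]) for i in held_out])
    mae_m = np.mean([abs(predict_multicriteria(user, i) - overall[user, i]) for i in held_out])
    scheme = "holistic (overall CF)" if mae_o <= mae_m else "composite (multicriteria CF)"
    print(f"user {user}: {scheme}")
```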

Improved Resource Allocation Model for Reducing Interference among Secondary Users in TV White Space for Broadband Services

  • Marco P. Mwaimu;Mike Majham;Ronoh Kennedy;Kisangiri Michael;Ramadhani Sinde
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.4
    • /
    • pp.55-68
    • /
    • 2023
  • In recent years, Television White Space (TVWS) has attracted the interest of many researchers due to the propagation characteristics obtainable in the 470 MHz to 790 MHz spectrum bands. The abundance of unused channels in the TV spectrum allows secondary users (SUs) to use them for broadband services, especially in rural areas. However, when the number of SUs in a TVWS wireless network increases, the aggregate interference also increases. Aggregate interference is the combined harmful interference, which can include both co-channel and adjacent-channel components. Aggregate interference on the side of the Primary Users (PUs) has been extensively scrutinized; therefore, resource allocation (power and spectrum) is crucial when designing a TVWS network to avoid interference from SUs to PUs and among the SUs themselves. This paper proposes a model to improve resource allocation for reducing the aggregate interference among SUs for broadband services in rural areas. The proposed model uses a joint power and spectrum hybrid of the Firefly Algorithm (FA), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO), which considers both Co-Channel Interference (CCI) and Adjacent Channel Interference (ACI). The algorithm is integrated with an admission control algorithm so that some SUs can be removed from the TVWS network whenever the SINR thresholds for the SUs and the PU are not met. We considered the infeasible case in which all SUs and the PU may not be supported simultaneously, and therefore proposed a joint spectrum and power allocation with an admission control algorithm that has better complexity and performance than existing algorithms in the literature. The performance of the proposed algorithm is compared using metrics such as sum throughput, PU SINR, algorithm running time, and SU SINR below the threshold, and the results show that the PSOFAGA with ELGR admission control algorithm has the best performance compared to the GA, PSO, FA, and FAGAPSO algorithms.
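
A minimal sketch of SINR computation and a generic gradual-removal admission control loop (not the paper's ELGR algorithm or its hybrid PSO/FA/GA allocation): the worst-SINR secondary user is dropped until every remaining link meets its threshold. The gains, powers, noise level, and threshold are illustrative.

```python
import numpy as np

def compute_sinr(powers, gains, noise, active):
    """SINR per link, counting interference only from the other active transmitters.
    gains[i, j] = channel gain from transmitter j to receiver i."""
    p = powers * active                    # switched-off SUs transmit nothing
    received = gains * p                   # power at receiver i from transmitter j
    signal = np.diag(received)
    interference = received.sum(axis=1) - signal
    return signal / (interference + noise)

def greedy_admission_control(powers, gains, noise, threshold_db):
    """Drop the worst-SINR secondary user one at a time until every remaining
    link meets its SINR threshold (a generic gradual-removal heuristic)."""
    threshold = 10 ** (threshold_db / 10)
    active = np.ones(len(powers), dtype=bool)
    while active.any():
        sinr = compute_sinr(powers, gains, noise, active)
        failing = active & (sinr < threshold)
        if not failing.any():
            break
        worst = np.where(active)[0][np.argmin(sinr[active])]
        active[worst] = False              # remove the weakest link and re-check
    return active

rng = np.random.default_rng(3)
n = 6                                                     # six secondary-user links
gains = rng.uniform(0.01, 0.2, size=(n, n))
np.fill_diagonal(gains, rng.uniform(0.5, 1.0, size=n))    # direct links are much stronger
powers = np.full(n, 0.1)                                  # 100 mW per SU (illustrative)
admitted = greedy_admission_control(powers, gains, noise=1e-4, threshold_db=5.0)
print("Admitted SUs:", np.where(admitted)[0])
```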

Optimizing Clustering and Predictive Modelling for 3-D Road Network Analysis Using Explainable AI

  • Rotsnarani Sethy;Soumya Ranjan Mahanta;Mrutyunjaya Panda
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.9
    • /
    • pp.30-40
    • /
    • 2024
  • Building an accurate 3-D spatial road network model has become an active area of research and a new paradigm in developing smart roads and intelligent transportation systems (ITS), helping public and private road operators provide better road mobility and eco-routing so that smoother traffic, lower carbon emission, and road safety can be ensured. Dealing with such large-scale 3-D road network data poses challenges in obtaining accurate elevation information for a road network in order to better estimate CO2 emission and provide accurate routing for vehicles in an Internet of Vehicles (IoV) scenario. Clustering and regression techniques are suitable for recovering the missing elevation information at some points of a 3-D spatial road network dataset, which is envisaged to give the public a better eco-routing experience. Further, Explainable Artificial Intelligence (xAI) has recently drawn the attention of researchers because it makes models more interpretable, transparent, and comprehensible, enabling the design of efficient models according to user requirements. The 3-D road network dataset, comprising the spatial attributes (longitude, latitude, altitude) of North Jutland, Denmark, and collected from the publicly available UCI repository, is preprocessed through feature engineering and scaling to ensure optimal accuracy for clustering and regression tasks. K-Means clustering and regression using a Support Vector Machine (SVM) with a radial basis function (RBF) kernel are employed for the 3-D road network analysis. Silhouette scores and the number of clusters are used to measure cluster quality, whereas error metrics such as MAE (Mean Absolute Error) and RMSE (Root Mean Square Error) are used to evaluate the regression method. To improve the interpretability of the clustering and regression models, SHAP (SHapley Additive exPlanations), a powerful xAI technique, is employed in this research. Extensive experiments show that SHAP analysis validated the importance of latitude and altitude in predicting longitude, particularly in the four-cluster setup, providing critical insights into model behavior and feature contributions, with an accuracy of 97.22% and strong performance metrics across all classes, including an MAE of 0.0346 and an MSE of 0.0018. The ten-cluster setup, while faster in SHAP analysis, presented challenges in interpretability due to the increased clustering complexity. Hence, K-Means clustering with K=4 combined with the SVM hybrid model demonstrated superior performance and interpretability, highlighting the importance of careful cluster selection to balance model complexity and predictive accuracy.
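
A minimal sketch of the clustering-plus-regression pipeline on synthetic stand-in data: K-Means with K=4 scored by silhouette, and an RBF-kernel SVR predicting longitude from latitude and altitude, evaluated with MAE and RMSE; the SHAP step is omitted and all values are illustrative, not the paper's results.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for the 3-D road network points (longitude, latitude, altitude).
rng = np.random.default_rng(0)
lat = rng.uniform(56.5, 57.5, 2000)
lon = 9.0 + 0.8 * (lat - 56.5) + rng.normal(0, 0.05, 2000)
alt = rng.uniform(0, 120, 2000)
X = np.column_stack([lat, alt])                    # predictors: latitude and altitude
y = lon                                            # target: longitude

X_scaled = StandardScaler().fit_transform(X)

# Cluster quality for K=4, as in the paper's preferred setup.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_scaled)
print("Silhouette score (K=4):", silhouette_score(X_scaled, kmeans.labels_))

# RBF-kernel SVR to predict longitude from latitude and altitude.
X_tr, X_te, y_tr, y_te = train_test_split(X_scaled, y, test_size=0.2, random_state=0)
svr = SVR(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
pred = svr.predict(X_te)
print("MAE :", mean_absolute_error(y_te, pred))
print("RMSE:", np.sqrt(mean_squared_error(y_te, pred)))
```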