• Title/Summary/Keyword: K-Nearest Neighbor


Sentiment Analysis for COVID-19 Vaccine Popularity

  • Muhammad Saeed;Naeem Ahmed;Abid Mehmood;Muhammad Aftab;Rashid Amin;Shahid Kamal
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.5
    • /
    • pp.1377-1393
    • /
    • 2023
  • Social media is used for many purposes, including entertainment, communication, information search, and voicing thoughts and concerns about a service, product, or issue. This social media data can be mined for information and insights. The World Health Organization has listed COVID-19 as a global epidemic since 2020. People from every walk of life, as well as entire health systems, have been severely impacted by the pandemic. Even now, almost three years after the pandemic declaration, the fear caused by the COVID-19 virus, with the higher levels of depression, stress, and anxiety it brings, has not been fully overcome. The pandemic has also triggered many kinds of discussion on social media platforms covering its various aspects. Among these aspects are the vaccines developed by different countries, their features, and the advantages and disadvantages associated with each vaccine. Social media users often share their thoughts about vaccination and vaccines. This data can be used to determine the popularity of each vaccine, giving producers insight for future decisions about their products. In this article, we used Twitter data for vaccine popularity detection. We gathered data by scraping tweets about various vaccines from different countries. Various machine learning and deep learning models, i.e., Naive Bayes, decision tree, support vector machine, k-nearest neighbor, and a deep neural network, were then used for sentiment analysis to determine the popularity of each vaccine. The experimental results show that the proposed deep neural network outperforms the other models, achieving 97.87% accuracy.
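As a minimal illustration of the k-nearest-neighbor step, the sketch below labels a tweet by majority vote among its most similar labeled tweets, using cosine similarity over bag-of-words counts. The tiny training set and k=3 are illustrative assumptions, not the paper's data or configuration.

```python
from collections import Counter
import math

def vectorize(text, vocab):
    """Bag-of-words count vector over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def knn_sentiment(query, labeled_tweets, k=3):
    """Label a tweet by majority vote among its k most similar neighbours."""
    vocab = sorted({w for t, _ in labeled_tweets for w in t.lower().split()})
    q = vectorize(query, vocab)
    scored = sorted(labeled_tweets,
                    key=lambda tl: cosine(q, vectorize(tl[0], vocab)),
                    reverse=True)
    votes = Counter(label for _, label in scored[:k])
    return votes.most_common(1)[0][0]

train = [
    ("this vaccine works great", "positive"),
    ("great protection very safe", "positive"),
    ("terrible side effects", "negative"),
    ("side effects are awful", "negative"),
    ("safe and effective vaccine", "positive"),
]
print(knn_sentiment("vaccine is safe and effective", train, k=3))  # → positive
```

A production system would use TF-IDF weighting and a much larger labeled corpus, but the voting logic is the same.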

A Study on the Failure Diagnosis of Transfer Robot for Semiconductor Automation Based on Machine Learning Algorithm (머신러닝 알고리즘 기반 반도체 자동화를 위한 이송로봇 고장진단에 대한 연구)

  • Kim, Mi Jin;Ko, Kwang In;Ku, Kyo Mun;Shim, Jae Hong;Kim, Kihyun
    • Journal of the Semiconductor & Display Technology
    • /
    • v.21 no.4
    • /
    • pp.65-70
    • /
    • 2022
  • In the manufacturing and semiconductor industries, transfer robots increase productivity through accurate and continuous work. Due to the nature of the semiconductor process, there are clean-room environments where humans cannot intervene because internal temperature and humidity must be maintained, so transfer robots take over the work from humans. In such an environment, where process manpower is being cut, a lack of machine maintenance and management technology may adversely affect production, which is why a machine failure diagnosis system needs to be developed. Therefore, this paper seeks to identify various causes of failure in the transfer robots widely used in semiconductor automation, and the Prognostics and Health Management (PHM) method is considered for determining and predicting failure processes. The robot mainly fails in the drive unit due to long-term repetitive motion, and the core components of the drive unit are the motor and the gear reducer. A simulated drive unit was manufactured and tested around these components, and the method was then applied to 6-axis vertical multi-joint robots used at actual industrial sites. Vibration data were collected for each cause of robot failure, and the collected data were then processed through signal processing and frequency analysis. From the processed data, robot faults can be determined using machine learning algorithms such as SVM (Support Vector Machine) and KNN (K-Nearest Neighbor). As a result, a PHM environment was built on machine learning algorithms using SVM and KNN, confirming that failure prediction is partially possible.
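The fault-determination step pairs frequency analysis with classifiers such as KNN. Below is a minimal sketch under illustrative assumptions: synthetic single-tone vibration signals stand in for healthy and faulty drive units, DFT magnitudes serve as features, and a 1-nearest-neighbor rule assigns the label. The paper's actual sensors, features, and classes are not reproduced here.

```python
import math

def dft_magnitudes(signal, n_bins=4):
    """Magnitude of the first n_bins DFT coefficients (excluding DC)."""
    N = len(signal)
    feats = []
    for k in range(1, n_bins + 1):
        re = sum(x * math.cos(2 * math.pi * k * n / N) for n, x in enumerate(signal))
        im = -sum(x * math.sin(2 * math.pi * k * n / N) for n, x in enumerate(signal))
        feats.append(math.hypot(re, im) / N)
    return feats

def classify_1nn(query_feats, labeled_feats):
    """Assign the label of the closest training feature vector (Euclidean)."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(labeled_feats, key=lambda fl: dist(query_feats, fl[0]))[1]

N = 64
healthy = [math.sin(2 * math.pi * 1 * n / N) for n in range(N)]    # low-frequency tone
worn_gear = [math.sin(2 * math.pi * 3 * n / N) for n in range(N)]  # higher-frequency tone
train = [(dft_magnitudes(healthy), "healthy"),
         (dft_magnitudes(worn_gear), "worn gear")]
query = [0.9 * math.sin(2 * math.pi * 3 * n / N) for n in range(N)]
print(classify_1nn(dft_magnitudes(query), train))  # → worn gear
```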

EPAR V2.0: AUTOMATED MONITORING AND VISUALIZATION OF POTENTIAL AREAS FOR BUILDING RETROFIT USING THERMAL CAMERAS AND COMPUTATIONAL FLUID DYNAMICS (CFD) MODELS

  • Youngjib Ham;Mani Golparvar-Fard
    • International conference on construction engineering and project management
    • /
    • 2013.01a
    • /
    • pp.279-286
    • /
    • 2013
  • This paper introduces a new method for identifying building energy performance problems. The method is based on automated analysis and visualization of deviations between the actual and expected energy performance of a building using EPAR (Energy Performance Augmented Reality) models. To generate EPAR models, energy auditors collect large numbers of digital and thermal images during building inspections using a single consumer-level thermal camera with a built-in digital lens. Based on a pipeline of image-based 3D reconstruction algorithms built on GPU and multi-core CPU architecture, 3D geometrical and thermal point cloud models of the building under inspection are automatically generated and integrated. The resulting actual 3D spatio-thermal model and the expected energy performance model simulated using computational fluid dynamics (CFD) analysis are then superimposed within an augmented reality environment. Based on the resulting EPAR models, which jointly visualize the actual and expected energy performance of the building under inspection, two new algorithms are introduced for quick and reliable identification of potential performance problems: 1) 3D thermal mesh modeling using k-d trees and nearest neighbor searching to automate the calculation of temperature deviations; and 2) automated visualization of performance deviations using a metaphor based on traffic light colors. The proposed EPAR v2.0 modeling method is validated on several interior locations of a residential building and an instructional facility. Our empirical observations show that automated energy performance analysis using EPAR models enables performance deviations to be identified rapidly and accurately. Visualizing performance deviations in 3D enables auditors to easily identify potential building performance problems. Rather than manually analyzing thermal imagery, auditors can focus on other important tasks such as evaluating possible remedial alternatives.
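The first algorithm (k-d trees plus nearest-neighbor search for temperature deviations) can be sketched as follows. The grid of simulated CFD temperatures and the 3-axis splitting cycle are illustrative assumptions, not the authors' implementation: each measured point is matched to its nearest simulated point, and the deviation is the temperature difference.

```python
import math

def build_kdtree(points, depth=0):
    """points: list of ((x, y, z), temperature). Split cycle: x, y, z."""
    if not points:
        return None
    axis = depth % 3
    points = sorted(points, key=lambda p: p[0][axis])
    mid = len(points) // 2
    return {"point": points[mid],
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, query, depth=0, best=None):
    """Recursive nearest-neighbour search with branch pruning."""
    if node is None:
        return best
    xyz, _ = node["point"]
    d = math.dist(xyz, query)
    if best is None or d < best[0]:
        best = (d, node["point"])
    axis = depth % 3
    diff = query[axis] - xyz[axis]
    near, far = ("left", "right") if diff < 0 else ("right", "left")
    best = nearest(node[near], query, depth + 1, best)
    if abs(diff) < best[0]:  # the far branch may still hold a closer point
        best = nearest(node[far], query, depth + 1, best)
    return best

def temperature_deviation(measured, simulated_tree):
    """Deviation between a measured point and its closest simulated point."""
    xyz, temp = measured
    _, (_, sim_temp) = nearest(simulated_tree, xyz)
    return temp - sim_temp

# Simulated (CFD) temperatures on a coarse wall grid, all at 20 degrees C.
simulated = [((x, y, 0.0), 20.0) for x in range(3) for y in range(3)]
tree = build_kdtree(simulated)
print(temperature_deviation(((1.1, 0.9, 0.0), 26.5), tree))  # → 6.5
```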


Classifying Social Media Users' Stance: Exploring Diverse Feature Sets Using Machine Learning Algorithms

  • Kashif Ayyub;Muhammad Wasif Nisar;Ehsan Ullah Munir;Muhammad Ramzan
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.2
    • /
    • pp.79-88
    • /
    • 2024
  • The use of social media has become part of our daily life. Social web channels provide content generation facilities to their users, who can share their views, opinions, and experiences on certain topics, and researchers use this social media content in various research areas. Sentiment analysis, one of the most active research areas of the last decade, is the process of extracting the reviews, opinions, and sentiments of people. It is applied in diverse sub-areas such as subjectivity analysis, polarity detection, and emotion detection. Stance classification has emerged as a new and interesting research area, as it aims to determine whether the content writer is in favor of, against, or neutral towards the target topic or issue. Stance classification is significant as it has many research applications, such as rumor stance classification, stance classification in public forums, claim stance classification, neural attention stance classification, online debate stance classification, and dialogic properties stance classification. This research study explores different feature sets, such as lexical, sentiment-specific, and dialog-based features, extracted from the standard datasets in the relevant area. Supervised learning approaches have been applied, both generative algorithms such as Naïve Bayes and discriminative machine learning algorithms such as Support Vector Machine, Decision Tree, and k-Nearest Neighbor, followed by ensemble-based algorithms such as Random Forest and AdaBoost. The empirical results have been evaluated using the standard performance measures of Accuracy, Precision, Recall, and F-measure.
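A minimal sketch of the ensemble and evaluation steps, assuming hypothetical base-classifier outputs rather than the study's trained models: hard majority voting over stance predictions, plus the per-class precision/recall/F-measure computation used to evaluate them.

```python
from collections import Counter

def majority_vote(predictions):
    """predictions: dict classifier_name -> predicted stance for one post."""
    return Counter(predictions.values()).most_common(1)[0][0]

def precision_recall_f1(gold, pred, positive):
    """Per-class precision, recall and F-measure for one stance label."""
    tp = sum(1 for g, p in zip(gold, pred) if g == positive and p == positive)
    fp = sum(1 for g, p in zip(gold, pred) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, pred) if g == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical per-post predictions from four base classifiers.
base_preds = {"naive_bayes": "favor", "svm": "against",
              "decision_tree": "favor", "knn": "favor"}
print(majority_vote(base_preds))  # → favor

gold = ["favor", "against", "favor", "neutral"]
pred = ["favor", "favor", "favor", "neutral"]
print(precision_recall_f1(gold, pred, "favor"))
```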

Optimize KNN Algorithm for Cerebrospinal Fluid Cell Diseases

  • Soobia Saeed;Afnizanfaizal Abdullah;NZ Jhanjhi
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.2
    • /
    • pp.43-52
    • /
    • 2024
  • Medical imaging plays an important part in the analysis of tumors and cerebrospinal fluid (CSF) leaks. Magnetic resonance imaging (MRI) is an imaging technology that shows cross-sectional views of the body, making it convenient for medical specialists to examine patients. The images generated by MRI are detailed, enabling medical specialists to identify affected areas and diagnose disease; MRI is usually a basic part of diagnosis and treatment. In this research, we propose new techniques using a 4D MRI image segmentation process to detect brain tumors in the skull, and we address issues related to the quality of images of cerebral disease and CSF leakage (fluid discovered inside the brain). The aim of this research is to construct a framework that can isolate cancer-damaged areas from non-tumor tissue. We use 4D light field segmentation, followed by MATLAB modeling techniques, and measure the size of brain-damaged cells deep inside the CSF. Data are collected with a support vector machine (SVM) tool using the K-Nearest Neighbor (KNN) algorithm included in MATLAB. We propose a 4D light field tool (LFT) modulation method that can be used for light field editing applications. Depending on the user's input, an objective evaluation of each ray is computed using KNN to maintain the 4D frequency (redundancy). These light field approaches can help increase the efficiency of segmentation and of the light field composite editing pipeline, as they minimize boundary artefacts.

Inhalation Configuration Detection for COVID-19 Patient Secluded Observing using Wearable IoTs Platform

  • Sulaiman Sulmi Almutairi;Rehmat Ullah;Qazi Zia Ullah;Habib Shah
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.6
    • /
    • pp.1478-1499
    • /
    • 2024
  • Coronavirus disease (COVID-19) is an infectious disease caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) virus. COVID-19 became an active epidemic disease due to its spread around the globe. The main causes of the spread are interaction and the transmission of droplets through coughing and sneezing. The spread can be minimized by isolating susceptible patients. However, this necessitates remote monitoring of patients' breathing issues, so that interactions are minimized. Thus, in this article, we offer a wearable-IoT-centered framework for remote monitoring and recognition of breathing patterns and abnormal breath detection, for timely provision of the proper oxygen level required. We propose accelerometer- and gyroscope-based breathing time-series data acquisition from wearable sensors, temporal feature extraction, and machine learning algorithms for pattern detection and abnormality identification. The sensors transmit the data via Bluetooth, and it is received at the server for further processing and recognition. We collected six breathing patterns from twenty subjects, with each pattern recorded for about five minutes. We compared the prediction accuracies of all machine learning models under study (i.e., Random Forest, Gradient Boosting Tree, Decision Tree, and K-Nearest Neighbor). Our results show that normal breathing and Bradypnea are the most correctly recognized breathing patterns. However, in some cases, the algorithms also recognize Kussmaul breathing well. Collectively, the classification outcomes of Random Forest and Gradient Boosting Trees are better than those of the other two algorithms.
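A minimal sketch of one temporal feature and one pattern rule, under illustrative assumptions (a synthetic sinusoidal chest signal and textbook rate thresholds, not the authors' sensors or trained models): breathing rate is estimated from rising zero-crossings, then mapped to a coarse Bradypnea/normal/Tachypnea label.

```python
import math

def breaths_per_minute(signal, sample_rate_hz):
    """Estimate breathing rate from rising zero-crossings of a chest signal."""
    mean = sum(signal) / len(signal)
    centred = [x - mean for x in signal]
    rising = sum(1 for a, b in zip(centred, centred[1:]) if a < 0 <= b)
    duration_min = len(signal) / sample_rate_hz / 60.0
    return rising / duration_min

def label_pattern(bpm):
    """Coarse rule of thumb: <12 bpm Bradypnea, 12-20 normal, >20 Tachypnea."""
    if bpm < 12:
        return "Bradypnea"
    return "normal" if bpm <= 20 else "Tachypnea"

fs = 25                                   # assumed sample rate, Hz
t = [n / fs for n in range(fs * 60)]      # one minute of samples
slow = [math.sin(2 * math.pi * (8 / 60) * x) for x in t]  # ~8 breaths/min
print(label_pattern(breaths_per_minute(slow, fs)))  # → Bradypnea
```

The study's models learn such distinctions from labeled data rather than fixed thresholds; this only illustrates the kind of temporal feature involved.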

Protecting Accounting Information Systems using Machine Learning Based Intrusion Detection

  • Biswajit Panja
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.5
    • /
    • pp.111-118
    • /
    • 2024
  • In general, a network-based intrusion detection system is designed to detect malicious behavior directed at a network or its resources. The key goal of this paper is to look at network data and identify whether it is normal traffic or anomalous traffic, specifically for accounting information systems. In today's world, there are a variety of principles for detecting various forms of network-based intrusion. In this paper, we use supervised machine learning techniques. Classification models are used to train and validate the data: we train the system on a training dataset and then use the trained system to detect intrusions in a testing dataset. In our proposed method, we detect whether the network data are normal or anomalous. Using this method, we can avoid unauthorized activity on the network and the systems under that network. Decision Tree and K-Nearest Neighbor are applied in the proposed model to classify network traffic behavior as normal or abnormal. In addition, Logistic Regression and Support Vector Classification algorithms are used in our model to support the proposed concepts. Furthermore, a feature selection method is used to collect valuable information from the dataset to enhance the efficiency of the proposed approach. The Random Forest machine learning algorithm is used, which helps the system identify crucial features and focus on them rather than on all the features. The experimental findings revealed that the suggested method for network intrusion detection has a negligible false alarm rate, with the accuracy of the results expected to be between 95% and 100%. As a result of the high precision rate, this concept can be used to detect network data intrusion and prevent vulnerabilities on the network.
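The feature selection idea can be illustrated with information gain, one common criterion tree-based models use to rank features. The toy connection records, feature names, and thresholds below are assumptions for illustration, not the paper's dataset: a feature that separates normal from anomalous traffic scores high, an uninformative one scores near zero.

```python
import math

def entropy(labels):
    total = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(rows, labels, feature, threshold):
    """Entropy reduction from splitting numeric `feature` at `threshold`."""
    left = [y for r, y in zip(rows, labels) if r[feature] <= threshold]
    right = [y for r, y in zip(rows, labels) if r[feature] > threshold]
    if not left or not right:
        return 0.0
    n = len(labels)
    remainder = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - remainder

# Toy connection records: bytes_sent separates the classes, duration does not.
rows = [{"bytes_sent": 90000, "duration": 2}, {"bytes_sent": 120, "duration": 3},
        {"bytes_sent": 85000, "duration": 3}, {"bytes_sent": 200, "duration": 2},
        {"bytes_sent": 99000, "duration": 2}, {"bytes_sent": 150, "duration": 3}]
labels = ["anomaly", "normal", "anomaly", "normal", "anomaly", "normal"]
for feat, thr in [("bytes_sent", 1000), ("duration", 2.5)]:
    print(feat, round(information_gain(rows, labels, feat, thr), 3))
```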

Resume Classification System using Natural Language Processing & Machine Learning Techniques

  • Irfan Ali;Nimra;Ghulam Mujtaba;Zahid Hussain Khand;Zafar Ali;Sajid Khan
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.7
    • /
    • pp.108-117
    • /
    • 2024
  • The selection and recommendation of a suitable job applicant from a pool of thousands of applications is often a daunting job for an employer, and the process significantly increases the workload of the concerned department. A Resume Classification System using Natural Language Processing (NLP) and Machine Learning (ML) techniques could automate this tedious process and ease the employer's job. Moreover, automation can significantly expedite the applicant selection process and make it more transparent, with minimal human involvement. Various Machine Learning approaches have been proposed to develop Resume Classification Systems. This study presents an automated NLP- and ML-based system that classifies resumes according to job categories with performance guarantees. It employs various ML algorithms and NLP techniques to measure the accuracy of Resume Classification Systems and proposes a solution with better accuracy and reliability in different settings. To demonstrate the significance of NLP and ML techniques for processing and classifying resumes, the extracted features were tested on nine machine learning models: Support Vector Machine - SVM (Linear, SGD, SVC & NuSVC), Naïve Bayes (Bernoulli, Multinomial & Gaussian), K-Nearest Neighbor (KNN) and Logistic Regression (LR). The Term Frequency-Inverse Document Frequency (TF-IDF) feature representation scheme proved suitable for the resume classification task. The developed models were evaluated using F-ScoreM, RecallM, PrecisionM, and overall Accuracy. The experimental results indicate that, using the One-Vs-Rest classification strategy for this multi-class resume classification task, the SVM class of machine learning algorithms performed best on the study dataset with over 96% overall accuracy. The promising results suggest that the NLP and ML techniques employed in this study could be used for the resume classification task.
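A minimal sketch of the TF-IDF representation scheme, assuming toy tokenized resumes rather than the study's corpus: term frequency within a document is weighted by the log-inverse of how many documents contain the term, so terms common to every resume contribute little.

```python
import math

def tf_idf(docs):
    """docs: list of token lists. Returns one {term: weight} dict per document."""
    n = len(docs)
    df = {}                                   # document frequency per term
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weighted = []
    for doc in docs:
        weights = {}
        for term in set(doc):
            tf = doc.count(term) / len(doc)   # normalized term frequency
            idf = math.log(n / df[term])      # inverse document frequency
            weights[term] = tf * idf
        weighted.append(weights)
    return weighted

resumes = [
    "java spring developer java".split(),
    "python machine learning engineer".split(),
    "java backend engineer".split(),
]
vectors = tf_idf(resumes)
print(round(vectors[0]["java"], 3))
```

Libraries such as scikit-learn add smoothing and normalization on top of this basic scheme before the vectors are fed to classifiers like SVM or KNN.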

Calculation of future rainfall scenarios to consider the impact of climate change in Seoul City's hydraulic facility design standards (서울시 수리시설 설계기준의 기후변화 영향 고려를 위한 미래강우시나리오 산정)

  • Yoon, Sun-Kwon;Lee, Taesam;Seong, Kiyoung;Ahn, Yujin
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.6
    • /
    • pp.419-431
    • /
    • 2021
  • In Seoul, it has been confirmed that with a changing climate the duration of rainfall is shortening while the frequency and intensity of heavy rains are increasing. In addition, due to high population density and urbanization in most areas, floods frequently occur in flood-prone districts because of the increase in impermeable area. Seoul is pursuing various structural and non-structural measures to resolve these flood-prone areas, and a disaster prevention performance target has been set in consideration of the climate change impact on future precipitation; this study was conducted to help reduce overall flood damage in Seoul over the long term. In this study, 29 GCMs under the RCP4.5 and RCP8.5 scenarios were used for spatial and temporal disaggregation, considering three research periods: short-term (2006-2040, P1), mid-term (2041-2070, P2), and long-term (2071-2100, P3). For spatial downscaling, daily GCM data were processed through Quantile Mapping based on rainfall at the Seoul station managed by the Korea Meteorological Administration; for temporal downscaling, daily data were disaggregated to hourly data through k-nearest neighbor resampling and a nonparametric temporal disaggregation technique using genetic algorithms. Through temporal downscaling, 100 detailed scenarios were calculated for each GCM scenario, the IDF curves were computed from the total of 2,900 detailed scenarios, and by averaging these, the change in future extreme rainfall was calculated. As a result, it was confirmed that the 100-year return period rainfall for a 1-hour duration increases by 8 to 16% under the RCP4.5 scenario and by 7 to 26% under the RCP8.5 scenario. Based on the results of this study, the design rainfall needed to prepare for future climate change in Seoul was estimated, and it can be used to establish purpose-specific water-related disaster prevention policies.
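The spatial-downscaling step relies on quantile mapping. A minimal empirical-quantile-mapping sketch follows, with made-up model and observed rainfall climatologies standing in for the GCM output and the Seoul station record: a model value is replaced by the observed value at the same empirical quantile, correcting the model's distributional bias.

```python
def empirical_quantile(sorted_vals, q):
    """Linear-interpolated empirical quantile, q in [0, 1]."""
    pos = q * (len(sorted_vals) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(sorted_vals) - 1)
    frac = pos - lo
    return sorted_vals[lo] * (1 - frac) + sorted_vals[hi] * frac

def quantile_map(value, model_clim, obs_clim):
    """Replace a model value with the observed value at the same quantile."""
    model_sorted = sorted(model_clim)
    obs_sorted = sorted(obs_clim)
    # rank of `value` within the model climatology
    q = min(sum(1 for v in model_sorted if v <= value) / len(model_sorted), 1.0)
    return empirical_quantile(obs_sorted, q)

model_rain = [0, 1, 2, 4, 8, 16, 32, 64]   # model tends to underestimate extremes
obs_rain   = [0, 0, 1, 3, 9, 20, 45, 90]   # observed station rainfall (mm/day)
print(quantile_map(16, model_rain, obs_rain))  # → 26.25
```

The k-nearest neighbor resampling used for the temporal step then distributes each corrected daily total across hours by borrowing the sub-daily pattern of similar observed days.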

Machine Learning Based MMS Point Cloud Semantic Segmentation (머신러닝 기반 MMS Point Cloud 의미론적 분할)

  • Bae, Jaegu;Seo, Dongju;Kim, Jinsoo
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.5_3
    • /
    • pp.939-951
    • /
    • 2022
  • The most important factor in designing autonomous driving systems is recognizing the exact location of the vehicle within the surrounding environment. To date, various sensors and navigation systems have been used for autonomous driving systems; however, all have limitations. Therefore, the need for high-definition (HD) maps that provide high-precision infrastructure information for safe and convenient autonomous driving is increasing. HD maps are drawn using three-dimensional point cloud data acquired through a mobile mapping system (MMS). However, this process requires manual work due to the large numbers of points and drawing layers, increasing the cost and effort associated with HD mapping. The objective of this study was to improve the efficiency of HD mapping by segmenting semantic information in an MMS point cloud into six classes: roads, curbs, sidewalks, medians, lanes, and other elements. Segmentation was performed using various machine learning techniques, including random forest (RF), support vector machine (SVM), k-nearest neighbor (KNN), and gradient-boosting machine (GBM), and 11 variables including geometry, color, intensity, and other road design features. MMS point cloud data for a 130-m section of a five-lane road near Minam Station in Busan were used to evaluate the segmentation models; the average F1 scores of the models were 95.43% for RF, 92.1% for SVM, 91.05% for GBM, and 82.63% for KNN. The RF model showed the best segmentation performance, with F1 scores of 99.3%, 95.5%, 94.5%, 93.5%, and 90.1% for roads, sidewalks, curbs, medians, and lanes, respectively. The variable importance results of the RF model showed high mean decrease in accuracy and mean decrease in Gini for the XY dist. and Z dist. variables related to road design, respectively. Thus, variables related to road design contributed significantly to the segmentation of semantic information. The results of this study demonstrate the applicability of machine-learning-based segmentation of MMS point cloud data and will help to reduce the cost and effort associated with HD mapping.
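The reported variable importances (mean decrease in accuracy) can be illustrated with permutation importance: shuffle one feature column and measure how much accuracy drops. The toy points, the threshold classifier, and the feature names below are assumptions for illustration, not the study's RF model or data.

```python
import random

def accuracy(classify, rows, labels):
    return sum(classify(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(classify, rows, labels, feature, seed=0):
    """Mean decrease in accuracy when one feature column is shuffled."""
    base = accuracy(classify, rows, labels)
    shuffled_vals = [r[feature] for r in rows]
    random.Random(seed).shuffle(shuffled_vals)
    permuted = [dict(r, **{feature: v}) for r, v in zip(rows, shuffled_vals)]
    return base - accuracy(classify, permuted, labels)

# Toy stand-in classifier: a point above the road surface is a curb,
# so "z" matters and "intensity" does not.
def classify(point):
    return "curb" if point["z"] > 0.1 else "road"

rows = [{"z": 0.00, "intensity": 5}, {"z": 0.15, "intensity": 9},
        {"z": 0.02, "intensity": 7}, {"z": 0.20, "intensity": 4},
        {"z": 0.01, "intensity": 8}, {"z": 0.18, "intensity": 6}]
labels = ["road", "curb", "road", "curb", "road", "curb"]
print(permutation_importance(classify, rows, labels, "z"),
      permutation_importance(classify, rows, labels, "intensity"))
```

Shuffling the uninformative feature leaves accuracy unchanged, so its importance is zero, mirroring how the study's geometric variables dominated the RF importance ranking.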