• Title/Summary/Keyword: k-Nearest neighbor

Sentiment Analysis for COVID-19 Vaccine Popularity

  • Muhammad Saeed;Naeem Ahmed;Abid Mehmood;Muhammad Aftab;Rashid Amin;Shahid Kamal
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.5
    • /
    • pp.1377-1393
    • /
    • 2023
  • Social media is used for various purposes, including entertainment, communication, information search, and voicing thoughts and concerns about a service, product, or issue. Social media data can therefore be mined for information and insights. The World Health Organization declared COVID-19 a global pandemic in 2020. People from every walk of life, as well as the entire health system, have been severely impacted by this pandemic. Even now, almost three years after the pandemic declaration, the fear caused by the COVID-19 virus, leading to higher levels of depression, stress, and anxiety, has not been fully overcome. The pandemic has also triggered numerous discussions on social media platforms covering its various aspects, among them the vaccines developed by different countries, their features, and the advantages and disadvantages associated with each vaccine. Social media users often share their thoughts about vaccinations and vaccines. These data can be used to determine the popularity of each vaccine, giving producers insight for future decision-making about their products. In this article, we used Twitter data for vaccine popularity detection. We gathered data by scraping tweets about various vaccines from different countries. We then applied several machine learning and deep learning models, i.e., Naïve Bayes, decision tree, support vector machine, k-nearest neighbor, and a deep neural network, for sentiment analysis to determine the popularity of each vaccine. Experimental results show that the proposed deep neural network model outperforms the other models, achieving 97.87% accuracy. A minimal sketch of such a sentiment-classification pipeline follows.
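As a rough illustration of the kind of pipeline this abstract describes, the Python sketch below builds a TF-IDF plus k-nearest-neighbor sentiment classifier with scikit-learn. The tweets, labels, and parameter choices are invented placeholders, not the paper's data or settings.

```python
# Illustrative sketch: TF-IDF features + k-nearest-neighbor sentiment
# classification, roughly following the pipeline the abstract describes.
# The tweets and labels below are made-up placeholders, not the paper's data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

tweets = [
    "This vaccine worked great, only a sore arm",   # positive
    "Terrible side effects, never again",           # negative
    "Got my second dose today, feeling fine",       # positive
    "Still feverish two days after the shot",       # negative
]
labels = ["positive", "negative", "positive", "negative"]

# Vectorize tweets and classify each new tweet by its 3 nearest neighbors.
model = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=3))
model.fit(tweets, labels)

print(model.predict(["mild headache but otherwise fine after the jab"]))
```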

A Study on the Failure Diagnosis of Transfer Robot for Semiconductor Automation Based on Machine Learning Algorithm (머신러닝 알고리즘 기반 반도체 자동화를 위한 이송로봇 고장진단에 대한 연구)

  • Kim, Mi Jin;Ko, Kwang In;Ku, Kyo Mun;Shim, Jae Hong;Kim, Kihyun
    • Journal of the Semiconductor & Display Technology
    • /
    • v.21 no.4
    • /
    • pp.65-70
    • /
    • 2022
  • In the manufacturing and semiconductor industries, transfer robots increase productivity through accurate and continuous work. Due to the nature of the semiconductor process, clean rooms must maintain internal temperature and humidity in environments where humans cannot intervene, so transfer robots take over the work from humans. In such an environment, where process manpower is being cut, a lack of machine maintenance and management technology may adversely affect production, which is why a machine failure diagnosis system needs to be developed. This paper therefore identifies various causes of failure in the transfer robots widely used in semiconductor automation, and the Prognostics and Health Management (PHM) approach is considered for determining and predicting failure processes. The robot fails mainly in the drive unit, due to long-term repetitive motion; the core components of the drive unit are the motor and the gear reducer. A simulated drive unit built around these components was manufactured and tested, and the approach was then applied to the 6-axis vertical multi-joint robots used at actual industrial sites. Vibration data were collected for each cause of robot failure and then processed through signal processing and frequency analysis. From the processed data, robot faults can be determined using machine learning algorithms such as SVM (Support Vector Machine) and KNN (K-Nearest Neighbor). As a result, a PHM environment was built on machine learning algorithms using SVM and KNN, confirming that failure prediction was partially possible. A minimal sketch of this vibration-feature classification step follows.
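A minimal sketch, assuming synthetic vibration signals and scikit-learn, of the frequency-analysis-plus-classification step the abstract outlines; the fault classes, sampling rate, and dominant frequencies are made up for illustration.

```python
# Illustrative sketch: frequency-domain features from vibration signals,
# classified with SVM and kNN as in the abstract. Signals are synthetic
# stand-ins; real data would come from the robot's drive-unit sensors.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
fs = 1000  # sampling rate [Hz], an assumption for this sketch

def vibration(freq, n=1024):
    t = np.arange(n) / fs
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(n)

# Simulate two fault classes with different dominant frequencies.
signals = [vibration(50) for _ in range(20)] + [vibration(120) for _ in range(20)]
y = np.array([0] * 20 + [1] * 20)  # 0 = motor fault, 1 = reducer fault (hypothetical)

# Feature vector per signal: magnitude spectrum from the FFT.
X = np.abs(np.fft.rfft(np.array(signals), axis=1))

for clf in (SVC(kernel="rbf"), KNeighborsClassifier(n_neighbors=5)):
    clf.fit(X[::2], y[::2])              # even indices: train
    acc = clf.score(X[1::2], y[1::2])    # odd indices: test
    print(type(clf).__name__, "accuracy:", acc)
```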

EPAR V2.0: AUTOMATED MONITORING AND VISUALIZATION OF POTENTIAL AREAS FOR BUILDING RETROFIT USING THERMAL CAMERAS AND COMPUTATIONAL FLUID DYNAMICS (CFD) MODELS

  • Youngjib Ham;Mani Golparvar-Fard
    • International conference on construction engineering and project management
    • /
    • 2013.01a
    • /
    • pp.279-286
    • /
    • 2013
  • This paper introduces a new method for identifying building energy performance problems. The presented method is based on automated analysis and visualization of deviations between the actual and expected energy performance of a building using EPAR (Energy Performance Augmented Reality) models. To generate EPAR models, energy auditors collect a large number of digital and thermal images during building inspections using a single consumer-level thermal camera with a built-in digital lens. Based on a pipeline of image-based 3D reconstruction algorithms built on GPU and multi-core CPU architectures, 3D geometrical and thermal point cloud models of the building under inspection are automatically generated and integrated. The resulting actual 3D spatio-thermal model and the expected energy performance model simulated using computational fluid dynamics (CFD) analysis are then superimposed within an augmented reality environment. Based on the resulting EPAR models, which jointly visualize the actual and expected energy performance of the building under inspection, two new algorithms are introduced for quick and reliable identification of potential performance problems: 1) 3D thermal mesh modeling using k-d trees and nearest neighbor searching to automate the calculation of temperature deviations (sketched below); and 2) automated visualization of performance deviations using a metaphor based on traffic light colors. The proposed EPAR v2.0 modeling method is validated on several interior locations of a residential building and an instructional facility. Our empirical observations show that automated energy performance analysis using EPAR models enables performance deviations to be identified rapidly and accurately. The visualization of performance deviations in 3D enables auditors to easily identify potential building performance problems. Rather than manually analyzing thermal imagery, auditors can focus on other important tasks, such as evaluating possible remedial alternatives.
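The paper's first algorithm (k-d tree nearest-neighbor search for temperature deviations, plus traffic-light coloring) can be sketched as follows; the point clouds, temperatures, and deviation thresholds here are synthetic assumptions, not the EPAR implementation.

```python
# Illustrative sketch of the deviation step: for each vertex of the actual
# thermal point cloud, find the nearest point in the simulated CFD model
# with a k-d tree and color the deviation with a traffic-light scheme.
# Point coordinates and temperatures here are synthetic placeholders.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
actual_xyz = rng.uniform(0, 5, size=(1000, 3))   # measured point cloud [m]
actual_T = 20 + 2 * rng.standard_normal(1000)    # measured temps [deg C]
cfd_xyz = rng.uniform(0, 5, size=(2000, 3))      # simulated CFD points [m]
cfd_T = 20 + 0.5 * rng.standard_normal(2000)     # expected temps [deg C]

tree = cKDTree(cfd_xyz)
_, idx = tree.query(actual_xyz, k=1)             # nearest simulated point
deviation = np.abs(actual_T - cfd_T[idx])

# Traffic-light metaphor: green = ok, yellow = borderline, red = problem.
# The 1 and 3 deg C thresholds are assumptions for this sketch.
colors = np.where(deviation < 1, "green", np.where(deviation < 3, "yellow", "red"))
print({c: int((colors == c).sum()) for c in ("green", "yellow", "red")})
```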

Classifying Social Media Users' Stance: Exploring Diverse Feature Sets Using Machine Learning Algorithms

  • Kashif Ayyub;Muhammad Wasif Nisar;Ehsan Ullah Munir;Muhammad Ramzan
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.2
    • /
    • pp.79-88
    • /
    • 2024
  • The use of social media has become part of our daily lives. Social web channels provide a content generation facility through which users can share their views, opinions, and experiences on certain topics, and researchers draw on this social media content for various research areas. Sentiment analysis, one of the most active research areas of the last decade, is the process of extracting the reviews, opinions, and sentiments of people; it is applied in diverse sub-areas such as subjectivity analysis, polarity detection, and emotion detection. Stance classification has emerged as a new and interesting research area, as it aims to determine whether a content writer is in favor of, against, or neutral towards a target topic or issue. Stance classification is significant because it has many research applications, such as rumor stance classification, stance classification in public forums, claim stance classification, neural attention stance classification, online debate stance classification, and dialogic properties stance classification. This study explores different feature sets, such as lexical, sentiment-specific, and dialog-based features, extracted from the standard datasets in the area. Supervised learning approaches, including the generative Naïve Bayes algorithm and discriminative algorithms such as Support Vector Machine, Decision Tree, and k-Nearest Neighbor, were applied, followed by the ensemble-based algorithms Random Forest and AdaBoost. The empirical results were evaluated using the standard performance measures of accuracy, precision, recall, and F-measure. A minimal sketch comparing these classifier families follows.
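A toy comparison, under invented texts and labels, of the classifier families the abstract names, using scikit-learn defaults rather than the paper's lexical, sentiment-specific, and dialog-based feature sets.

```python
# Illustrative sketch: the generative, discriminative, and ensemble
# classifiers named in the abstract on a three-way stance task.
# Texts and labels are toy placeholders, fit and scored on the same data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.metrics import accuracy_score

texts = ["we must ban it now", "this policy is great", "no strong view here",
         "completely against the proposal", "fully support this idea", "not sure yet"]
stance = ["against", "favor", "neutral", "against", "favor", "neutral"]

X = TfidfVectorizer().fit_transform(texts)
classifiers = {
    "NaiveBayes": MultinomialNB(),
    "SVM": LinearSVC(),
    "DecisionTree": DecisionTreeClassifier(),
    "kNN": KNeighborsClassifier(n_neighbors=3),
    "RandomForest": RandomForestClassifier(n_estimators=100),
    "AdaBoost": AdaBoostClassifier(n_estimators=50),
}
for name, clf in classifiers.items():
    clf.fit(X, stance)  # toy setup: training accuracy only
    print(name, accuracy_score(stance, clf.predict(X)))
```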

Optimize KNN Algorithm for Cerebrospinal Fluid Cell Diseases

  • Soobia Saeed;Afnizanfaizal Abdullah;NZ Jhanjhi
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.2
    • /
    • pp.43-52
    • /
    • 2024
  • Medical imaging plays an important part in the analysis of tumors and cerebrospinal fluid (CSF) leaks. Magnetic resonance imaging (MRI) is an imaging technology that shows cross-sectional views of the body, making it convenient for medical specialists to examine patients. The images generated by MRI are detailed, enabling medical specialists to identify affected areas and diagnose disease; MRI is usually a basic part of diagnosis and treatment. In this research, we propose new techniques using a 4D MRI image segmentation process to detect brain tumors in the skull, and we address issues related to the quality of brain disease images and CSF leakage (detecting fluid inside the brain). The aim of this research is to construct a framework that can isolate cancer-damaged areas from non-tumor regions. We use 4D light field image segmentation, followed by MATLAB modeling techniques, and measure the size of brain-damaged cells deep inside the CSF. Data are collected using a support vector machine (SVM) tool together with MATLAB's K-Nearest Neighbor (KNN) algorithm. We propose a 4D light field tool (LFT) modulation method that can be used for light field editing applications. Depending on user input, an objective evaluation of each ray is performed using KNN to maintain the 4D frequency (redundancy). These light field approaches can help increase the efficiency of segmentation and of light field composite pipeline editing, as they minimize boundary artefacts. A minimal voxel-classification sketch of the KNN step follows.
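As a loose stand-in for the abstract's KNN step, the sketch below classifies voxels of a synthetic volume into tumor vs. non-tumor; the features, labels, and data are assumptions, and the paper itself works in MATLAB with 4D light field data.

```python
# Illustrative sketch: kNN classification of image voxels into tumor vs.
# non-tumor, a simplified stand-in for the KNN step the abstract describes.
# The "MRI" volume and its labels are synthetic placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
volume = rng.normal(100, 10, size=(16, 16, 16))  # fake MRI intensities
volume[6:10, 6:10, 6:10] += 60                   # bright "lesion" region

# Feature per voxel: intensity plus distance from the volume center
# (both assumed features for this sketch).
coords = np.argwhere(np.ones_like(volume, dtype=bool))
center_dist = np.linalg.norm(coords - np.array(volume.shape) / 2, axis=1)
X = np.column_stack([volume.ravel(), center_dist])
y = (volume.ravel() > 140).astype(int)           # crude stand-in ground truth

knn = KNeighborsClassifier(n_neighbors=5).fit(X[::2], y[::2])
print("held-out voxel accuracy:", knn.score(X[1::2], y[1::2]))
```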

Protecting Accounting Information Systems using Machine Learning Based Intrusion Detection

  • Biswajit Panja
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.5
    • /
    • pp.111-118
    • /
    • 2024
  • In general, a network-based intrusion detection system is designed to detect malicious behavior directed at a network or its resources. The key goal of this paper is to examine network data and identify whether it is normal traffic or anomalous traffic, specifically for accounting information systems. A variety of principles exist today for detecting various forms of network-based intrusion. In this paper, we use supervised machine learning techniques: classification models are trained on a training dataset, and the trained system is then used to detect intrusions in a testing dataset. The proposed method determines whether network data are normal or anomalous, so unauthorized activity on the network and on the systems within it can be prevented. Decision Tree and K-Nearest Neighbor classifiers are applied in the proposed model to classify network traffic behavior as normal or abnormal, and Logistic Regression and Support Vector Classification algorithms are used to support the proposed concepts. Furthermore, a feature selection method is used to extract valuable information from the dataset and enhance the efficiency of the proposed approach: a Random Forest algorithm helps the system identify the crucial features and focus on them rather than on all the features. The experimental findings revealed that the suggested network intrusion detection method has a negligible false alarm rate, with accuracy expected to be between 95% and 100%. As a result of this high precision, the concept can be used to detect intrusions in network data and prevent vulnerabilities on the network. A minimal sketch of the feature-selection-plus-classification pipeline follows.
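A minimal sketch of the described pipeline (Random-Forest-driven feature selection, then Decision Tree and kNN classification) on synthetic traffic features; the dataset, feature count, and thresholds are placeholders, not a real intrusion dataset.

```python
# Illustrative sketch: Random-Forest-based feature selection followed by
# Decision Tree and kNN classification of traffic as normal vs. anomalous,
# mirroring the abstract's pipeline. The features are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 20))             # 20 fake traffic features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # anomaly depends on 2 features

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Keep only the features the forest finds important.
selector = SelectFromModel(RandomForestClassifier(n_estimators=200, random_state=0))
X_tr_sel = selector.fit_transform(X_tr, y_tr)
X_te_sel = selector.transform(X_te)

for clf in (DecisionTreeClassifier(), KNeighborsClassifier(n_neighbors=5)):
    clf.fit(X_tr_sel, y_tr)
    print(type(clf).__name__, "accuracy:", clf.score(X_te_sel, y_te))
```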

Calculation of future rainfall scenarios to consider the impact of climate change in Seoul City's hydraulic facility design standards (서울시 수리시설 설계기준의 기후변화 영향 고려를 위한 미래강우시나리오 산정)

  • Yoon, Sun-Kwon;Lee, Taesam;Seong, Kiyoung;Ahn, Yujin
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.6
    • /
    • pp.419-431
    • /
    • 2021
  • In Seoul, it has been confirmed that with the changing climate the duration of rainfall events is shortening while the frequency and intensity of heavy rains are increasing. In addition, due to high population density and urbanization in most areas, floods frequently occur in flood-prone districts because of the growth in impervious area, and the city of Seoul is pursuing various structural and non-structural projects to resolve these flood-prone areas. This study was conducted to support a disaster prevention performance target that accounts for the climate change impact on future precipitation and to reduce overall flood damage in Seoul over the long term. In this study, 29 GCMs under the RCP4.5 and RCP8.5 scenarios were used for spatial and temporal disaggregation over three periods: short-term (2006-2040, P1), mid-term (2041-2070, P2), and long-term (2071-2100, P3). For spatial downscaling, daily GCM data were processed through quantile mapping against the rainfall record of the Seoul station managed by the Korea Meteorological Administration; for temporal downscaling, daily data were disaggregated to hourly data through k-nearest neighbor resampling and a nonparametric temporal disaggregation technique using genetic algorithms. Through temporal downscaling, 100 detailed scenarios were generated for each GCM scenario, IDF curves were derived from the resulting total of 2,900 detailed scenarios, and their average was used to quantify the change in future extreme rainfall. As a result, the probable rainfall for a 100-year return period and 1-hour duration was found to increase by 8 to 16% under the RCP4.5 scenario and by 7 to 26% under the RCP8.5 scenario. Based on these results, the design rainfall needed to prepare for future climate change in Seoul was estimated, and it can be used to establish purpose-specific water-related disaster prevention policies. A simplified sketch of the k-nearest-neighbor resampling step appears below.
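A simplified sketch of k-nearest-neighbor temporal disaggregation in the spirit of the abstract: a daily total borrows the hourly pattern of one of its k most similar observed days. The observed record is synthetic, and the study's quantile mapping and genetic-algorithm components are omitted.

```python
# Illustrative sketch of kNN temporal disaggregation: a simulated daily
# rainfall total is split into 24 hourly values using the scaled hourly
# pattern of a randomly chosen near neighbor among observed days.
import numpy as np

rng = np.random.default_rng(4)

# Synthetic observed record: 200 days of hourly rainfall [mm].
obs_hourly = rng.gamma(0.3, 2.0, size=(200, 24))
obs_daily = obs_hourly.sum(axis=1)

def knn_disaggregate(daily_total, k=5):
    """Split a daily total into 24 hourly values via a kNN-sampled pattern."""
    nearest = np.argsort(np.abs(obs_daily - daily_total))[:k]  # k most similar days
    chosen = rng.choice(nearest)                               # sample one neighbor
    pattern = obs_hourly[chosen] / obs_hourly[chosen].sum()    # hourly fractions
    return daily_total * pattern

hourly = knn_disaggregate(35.0)
print(hourly.round(2), "sum:", hourly.sum().round(2))  # sum recovers the daily total
```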

Machine Learning Based MMS Point Cloud Semantic Segmentation (머신러닝 기반 MMS Point Cloud 의미론적 분할)

  • Bae, Jaegu;Seo, Dongju;Kim, Jinsoo
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.5_3
    • /
    • pp.939-951
    • /
    • 2022
  • The most important factor in designing autonomous driving systems is recognizing the exact location of the vehicle within its surrounding environment. To date, various sensors and navigation systems have been used for autonomous driving systems; however, all have limitations. Therefore, the need for high-definition (HD) maps that provide high-precision infrastructure information for safe and convenient autonomous driving is increasing. HD maps are drawn using three-dimensional point cloud data acquired through a mobile mapping system (MMS). However, this process requires manual work due to the large number of points and drawing layers, increasing the cost and effort associated with HD mapping. The objective of this study was to improve the efficiency of HD mapping by segmenting semantic information in an MMS point cloud into six classes: roads, curbs, sidewalks, medians, lanes, and other elements. Segmentation was performed using various machine learning techniques, including random forest (RF), support vector machine (SVM), k-nearest neighbor (KNN), and gradient-boosting machine (GBM), with 11 variables covering geometry, color, intensity, and other road design features. MMS point cloud data for a 130-m section of a five-lane road near Minam Station in Busan were used to evaluate the segmentation models; the average F1 scores of the models were 95.43% for RF, 92.1% for SVM, 91.05% for GBM, and 82.63% for KNN. The RF model showed the best segmentation performance, with F1 scores of 99.3%, 95.5%, 94.5%, 93.5%, and 90.1% for roads, sidewalks, curbs, medians, and lanes, respectively. The variable importance results of the RF model showed high mean decrease in accuracy and mean decrease in Gini for the XY dist. and Z dist. variables, respectively, both related to road design; thus, road design variables contributed significantly to the segmentation of semantic information. The results of this study demonstrate the applicability of machine-learning-based segmentation of MMS point cloud data and will help to reduce the cost and effort associated with HD mapping. A compact illustrative sketch of such point-wise classification follows.
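A compact stand-in for the point-wise classification the abstract evaluates: a Random Forest segmenting synthetic per-point features into three of the six classes. The features, label rules, and scores are invented for illustration, not the paper's 11 variables.

```python
# Illustrative sketch: Random Forest semantic segmentation of point-cloud
# points from per-point features, echoing the abstract's best model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(5)
n = 3000
# Fake per-point features: height, offset from road centerline, intensity.
z = rng.uniform(0, 0.5, n)
xy_dist = rng.uniform(0, 10, n)
intensity = rng.uniform(0, 1, n)
X = np.column_stack([z, xy_dist, intensity])

# Crude label rule: road / curb / sidewalk by height and offset (toy logic).
y = np.where(z > 0.25, "sidewalk", np.where(xy_dist > 7, "curb", "road"))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("macro F1: %.3f" % f1_score(y_te, rf.predict(X_te), average="macro"))
print("importances (z, xy_dist, intensity):", rf.feature_importances_.round(3))
```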

Classification Algorithm-based Prediction Performance of Order Imbalance Information on Short-Term Stock Price (분류 알고리즘 기반 주문 불균형 정보의 단기 주가 예측 성과)

  • Kim, S.W.
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.4
    • /
    • pp.157-177
    • /
    • 2022
  • Investors trade stocks while keeping a close watch, in real time, on the order information submitted by domestic and foreign investors through the Limit Order Book, the so-called "price current" provided by securities firms. Is the order information released in the Limit Order Book useful for stock price prediction? This study analyzes whether order imbalances, which appear when investors' buy and sell orders concentrate on one side during intra-day trading, are significant predictors of future stock price direction. Using classification algorithms, this study improved the accuracy with which order imbalance information predicts the short-term price trend, that is, the up or down movement of the day's closing price. Day trading strategies are proposed using the price trends predicted by the classification algorithms, and their trading performance is analyzed empirically. Five-minute KOSPI200 Index Futures data were analyzed for 4,564 days from January 19, 2004 to June 30, 2022. The results of the empirical analysis are as follows. First, order imbalance information has a significant impact on current stock prices. Second, order imbalance information observed in the early morning has significant forecasting power for price trends from the early morning to the market close. Third, the Support Vector Machines algorithm showed the highest prediction accuracy for the day's closing price trend using order imbalance information, at 54.1%. Fourth, order imbalance information measured early in the day had higher prediction accuracy than that measured later in the day. Fifth, the day trading strategies based on the classification algorithms' predicted price trends outperformed the benchmark trading strategy. Sixth, except for the K-Nearest Neighbor algorithm, all strategies using the classification algorithms showed higher average total profits than the benchmark strategy. Seventh, the strategies using the predictions of the Logistic Regression, Random Forest, Support Vector Machines, and XGBoost algorithms beat the benchmark strategy on the Sharpe ratio, which evaluates both profitability and risk. This study differs academically from existing studies in that it documents the economic value of the total buy and sell order volume information within the Limit Order Book, and its empirical results are also valuable to market participants from a trading perspective. Future studies should improve the trading strategy's performance with more accurate price predictions by extending the work to deep learning models, which are being actively studied for stock price prediction. A toy sketch of the order-imbalance classification idea follows.
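A toy sketch of the core idea: an order-imbalance feature classified with SVM to predict the day's closing direction. The order-book series is simulated, not the KOSPI200 futures data, and the imbalance definition is one common convention assumed for illustration.

```python
# Illustrative sketch: an order-imbalance feature classified with SVM to
# predict the day's closing direction, in the spirit of the abstract.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n_days = 800
buy_vol = rng.uniform(100, 200, n_days)   # morning total buy order volume
sell_vol = rng.uniform(100, 200, n_days)  # morning total sell order volume

# Order imbalance in [-1, 1]: positive means buy pressure dominates.
imbalance = (buy_vol - sell_vol) / (buy_vol + sell_vol)

# Simulated closing direction, weakly linked to imbalance plus noise.
up = (imbalance + 0.5 * rng.standard_normal(n_days) > 0).astype(int)

X = imbalance.reshape(-1, 1)
X_tr, X_te, y_tr, y_te = train_test_split(X, up, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("directional accuracy:", round(clf.score(X_te, y_te), 3))
```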

kNN Query Processing Algorithm based on the Encrypted Index for Hiding Data Access Patterns (데이터 접근 패턴 은닉을 지원하는 암호화 인덱스 기반 kNN 질의처리 알고리즘)

  • Kim, Hyeong-Il;Kim, Hyeong-Jin;Shin, Youngsung;Chang, Jae-woo
    • Journal of KIISE
    • /
    • v.43 no.12
    • /
    • pp.1437-1457
    • /
    • 2016
  • In outsourced databases, the cloud provides authorized users with querying services over the outsourced database. However, sensitive data, such as financial or medical records, should be encrypted before being outsourced to the cloud. Meanwhile, the k-Nearest Neighbor (kNN) query is a typical query type widely used in many fields, and its result is closely related to the interests and preferences of the user. Therefore, secure kNN query processing algorithms that preserve both data privacy and query privacy have been proposed. However, existing algorithms either suffer from high computation cost or leak data access patterns because the retrieved index nodes and query results are disclosed. To solve these problems, in this paper we propose a new kNN query processing algorithm for encrypted databases. Our algorithm preserves both data privacy and query privacy, and it hides data access patterns while supporting efficient query processing. To achieve this, we devise an encrypted index search scheme that can perform data filtering without revealing data access patterns. Through performance analysis, we verify that our proposed algorithm outperforms the existing algorithms in terms of query processing time. A plaintext sketch of index-guided kNN filtering follows.
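Since the paper's encrypted protocols cannot be reproduced in a few lines, the sketch below shows only the plaintext analogue of index-guided kNN filtering with a k-d tree; every privacy-preserving element (encryption, access-pattern hiding), which is the paper's actual contribution, is deliberately omitted.

```python
# Illustrative plaintext analogue of index-filtered kNN search: a k-d tree
# prunes most of the data before exact distances are computed. In the
# paper, both the index and the records would be encrypted and the
# traversal hidden; none of that is modeled here.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(7)
points = rng.uniform(0, 100, size=(10_000, 2))  # outsourced records (plaintext here)
query = np.array([42.0, 58.0])
k = 5

tree = cKDTree(points)               # the (here unencrypted) index
dist, idx = tree.query(query, k=k)   # index-guided kNN search
print("kNN ids:", idx, "distances:", dist.round(3))

# Verify against a brute-force scan (the cost one avoids via the index).
brute = np.argsort(np.linalg.norm(points - query, axis=1))[:k]
print("matches brute force:", set(idx) == set(brute))
```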