• Title/Summary/Keyword: analysis of algorithms

Search Results: 3,535

Classifying Social Media Users' Stance: Exploring Diverse Feature Sets Using Machine Learning Algorithms

  • Kashif Ayyub;Muhammad Wasif Nisar;Ehsan Ullah Munir;Muhammad Ramzan
    • International Journal of Computer Science & Network Security / v.24 no.2 / pp.79-88 / 2024
  • The use of social media has become part of our daily lives. Social web channels allow users to generate content and share their views, opinions, and experiences on various topics, and researchers use this content in many research areas. Sentiment analysis, one of the most active research areas of the last decade, is the process of extracting people's reviews, opinions, and sentiments. It is applied in diverse sub-areas such as subjectivity analysis, polarity detection, and emotion detection. Stance classification has emerged as a new and interesting research area, as it aims to determine whether the content writer is in favor of, against, or neutral towards a target topic or issue. Stance classification is significant because it has many research applications, such as rumor stance classification, stance classification in public forums, claim stance classification, neural attention stance classification, online debate stance classification, and dialogic-properties stance classification. This study explores different feature sets, such as lexical, sentiment-specific, and dialog-based features, extracted from standard datasets in the area. Supervised learning approaches, including the generative Naïve Bayes algorithm and discriminative algorithms such as Support Vector Machine, Decision Tree, and k-Nearest Neighbor, have been applied, followed by the ensemble-based algorithms Random Forest and AdaBoost. The empirical results have been evaluated using the standard performance measures of accuracy, precision, recall, and F-measure.
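
A minimal, hypothetical sketch of the kind of pipeline the abstract describes, assuming scikit-learn, a simple lexical (TF-IDF) feature set, and placeholder texts and labels rather than the paper's standard stance datasets:

```python
# Hypothetical sketch: stance classification with lexical (TF-IDF) features
# and several of the classifiers named in the abstract (scikit-learn assumed).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier

# Placeholder data: a real experiment would use a standard stance dataset.
texts = [
    "I fully support this policy",
    "This proposal deserves our backing",
    "This idea is terrible",
    "I strongly oppose the plan",
    "No opinion either way",
    "I have not decided yet",
]
labels = ["favor", "favor", "against", "against", "neutral", "neutral"]

X = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(texts)
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.33, random_state=0)

classifiers = {
    "NaiveBayes": MultinomialNB(),
    "SVM": LinearSVC(),
    "DecisionTree": DecisionTreeClassifier(),
    "kNN": KNeighborsClassifier(n_neighbors=1),
    "RandomForest": RandomForestClassifier(),
    "AdaBoost": AdaBoostClassifier(),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, clf.predict(X_test), zero_division=0))
```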

Comparison of independent component analysis algorithms for low-frequency interference of passive line array sonars (수동 선배열 소나의 저주파 간섭 신호에 대한 독립성분분석 알고리즘 비교)

  • Kim, Juho;Ashraf, Hina;Lee, Chong-Hyun;Cheong, Myoung Jun
    • The Journal of the Acoustical Society of Korea / v.38 no.2 / pp.177-183 / 2019
  • In this paper, we propose a method for applying ICA (Independent Component Analysis) to passive line array sonar in order to separate interference from target signals in the low-frequency band, and we compare the performance of three conventional ICA algorithms. Since low-frequency signals are received over wider bearing angles than other frequency bands, neighboring beam signals can be used as the measurement signals for ICA. Three algorithms are compared: FastICA, NNMF (Non-negative Matrix Factorization), and JADE (Joint Approximate Diagonalization of Eigen-matrices). Experiments on real data obtained from a passive line array sonar verify that the interference can be separated from the target signals by the proposed method, and that the JADE algorithm shows the best separation performance among the three.
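
As an illustration of using neighboring beams as ICA measurements, here is a minimal sketch with FastICA from scikit-learn on synthetic placeholder beam signals; the paper's NNMF and JADE comparisons and its real sonar data are not reproduced here:

```python
# Hypothetical sketch: separating a low-frequency interferer from neighboring
# beam signals with FastICA (scikit-learn assumed). Real inputs would be
# beamformed time series from a passive line array sonar.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 4000)
target = np.sin(2 * np.pi * 3.0 * t)                   # placeholder target tone
interference = np.sign(np.sin(2 * np.pi * 0.7 * t))    # placeholder interferer

# Neighboring beams observe different mixtures of the same sources.
mixing = np.array([[1.0, 0.6],
                   [0.8, 1.0],
                   [0.4, 0.9]])
beams = mixing @ np.vstack([target, interference]) + 0.05 * rng.standard_normal((3, t.size))

ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(beams.T)   # shape (n_samples, n_components)
print("estimated sources:", sources.shape)
```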

Statistical Analysis of Projection-Based Face Recognition Algorithms (투사에 기초한 얼굴 인식 알고리즘들의 통계적 분석)

  • 문현준;백순화;전병민
    • The Journal of Korean Institute of Communications and Information Sciences / v.25 no.5A / pp.717-725 / 2000
  • Within the last several years, a large number of algorithms have been developed for face recognition. The majority of these are view- and projection-based algorithms. Our definition of projection is not restricted to projecting the image onto an orthogonal basis; the definition is expansive and includes a general class of linear transformations of the image pixel values. The class includes correlation, principal component analysis, clustering, gray-scale projection, and matching pursuit filters. In this paper, we perform a detailed analysis of this class of algorithms by evaluating them on the FERET database of facial images. In our experiments, a projection-based algorithm consists of three steps. The first step is done off-line and determines the new basis for the images; the basis is either set by the algorithm designer or learned from a training set. The last two steps are on-line and perform the recognition. The second step projects an image onto the new basis, and the third step recognizes a face in an image with a nearest-neighbor classifier. The classification is performed in the projection space. Most evaluation methods report algorithm performance on a single gallery, which does not fully capture algorithm performance. In our study, we construct a set of independent galleries. This allows us to see how individual algorithm performance varies over different galleries. In addition, we report on the relative performance of the algorithms over the different galleries.
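
A minimal sketch of one member of this class of algorithms, eigenface-style PCA followed by nearest-neighbor matching in the projection space, assuming scikit-learn and random placeholder images instead of the FERET galleries:

```python
# Hypothetical sketch of a projection-based recognizer: an off-line PCA basis
# plus an on-line nearest-neighbor match in the projection space.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
gallery = rng.random((50, 64 * 64))   # 50 flattened placeholder gallery images
probe = rng.random((1, 64 * 64))      # one placeholder probe image

# Step 1 (off-line): learn the projection basis from a training set.
pca = PCA(n_components=20).fit(gallery)

# Step 2 (on-line): project gallery and probe onto the new basis.
gallery_proj = pca.transform(gallery)
probe_proj = pca.transform(probe)

# Step 3 (on-line): nearest-neighbor classification in the projection space.
nn = NearestNeighbors(n_neighbors=1).fit(gallery_proj)
dist, idx = nn.kneighbors(probe_proj)
print("closest gallery image:", int(idx[0, 0]), "distance:", float(dist[0, 0]))
```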


An Algorithm for Tournament-based Big Data Analysis (토너먼트 기반의 빅데이터 분석 알고리즘)

  • Lee, Hyunjin
    • Journal of Digital Contents Society / v.16 no.4 / pp.545-553 / 2015
  • While all data has value in itself, most data collected in the real world is random and unstructured. In order to extract useful information from it, the data must be transformed and analyzed; data mining is used for this purpose. Today, there is a need not only for a variety of data mining techniques but also for the computational capacity and rapid analysis times demanded by huge volumes of data. Hadoop is commonly used to store such volumes of data, and data stored in Hadoop is typically analyzed with the MapReduce framework. In this paper, we develop a tournament-based MapReduce method that makes it efficient to move an algorithm developed on a single machine to the MapReduce framework. The proposed method can accommodate many analysis algorithms, and we show its usefulness by applying the frequently used data mining algorithms k-means and k-nearest neighbor classification.
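
The paper's exact MapReduce design is not reproduced here; the following is a hedged sketch of the tournament idea, combining per-partition k-nearest-neighbor candidates pairwise, round by round, with plain Python standing in for the MapReduce framework:

```python
# Hypothetical sketch of a tournament-style combination of per-partition
# results, illustrated with k-nearest-neighbor search. Each "map" task finds
# local candidates; pairs of partial results are then merged round by round.
import numpy as np

def local_knn(partition, query, k):
    """Map step: k nearest candidates (distance, point) within one partition."""
    dists = np.linalg.norm(partition - query, axis=1)
    order = np.argsort(dists)[:k]
    return [(float(dists[i]), partition[i]) for i in order]

def merge(a, b, k):
    """Tournament match: keep the overall k best of two candidate lists."""
    return sorted(a + b, key=lambda pair: pair[0])[:k]

def tournament_knn(partitions, query, k):
    candidates = [local_knn(p, query, k) for p in partitions]   # map phase
    while len(candidates) > 1:                                  # tournament rounds
        nxt = [merge(candidates[i], candidates[i + 1], k)
               for i in range(0, len(candidates) - 1, 2)]
        if len(candidates) % 2:
            nxt.append(candidates[-1])
        candidates = nxt
    return candidates[0]

rng = np.random.default_rng(0)
parts = [rng.random((100, 3)) for _ in range(8)]   # placeholder data partitions
print(tournament_knn(parts, query=np.array([0.5, 0.5, 0.5]), k=3)[0][0])
```

Because the pairwise merge is associative, the same local_knn/merge pair could map onto a MapReduce combiner and reducer without changing the single-machine logic.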

Implementation of Lighting Technique and Music Therapy for Improving Degree of Students Concentration During Lectures

  • Han, ChangPyoung;Hong, YouSik
    • International Journal of Internet, Broadcasting and Communication / v.12 no.3 / pp.116-124 / 2020
  • The advantage of distance learning universities built on 4th Industrial Revolution technologies is that anyone can conveniently take lectures anytime, anywhere on the web. Research has also been actively conducted on how light color and temperature control affect student performance during online classes. However, research on how the conditions of subjects, lighting colors, and music selection improve a student's concentration during online lectures has not been completed. To address these problems, this paper proposes an automatic analysis system that identifies a learner's weak subjects by applying intelligent analysis algorithms, and presents and simulates music therapy and art therapy algorithms, based on blended learning, to increase students' concentration during lectures.

A Study on Prediction of Linear Relations Between Variables According to Working Characteristics Using Correlation Analysis

  • Kim, Seung Jae
    • International Journal of Internet, Broadcasting and Communication / v.14 no.4 / pp.228-239 / 2022
  • Many countries around the world are using ICT to keep pace with the 4th Industrial Revolution, and various algorithms and systems have been developed accordingly. Among them, many industries and researchers are investing in unmanned, AI-based automation systems. As new technologies and algorithms are developed, decision-making based on big data analysis in AI systems must become more sophisticated. In this study, Pearson's correlation analysis is applied to six independent variables to examine the job satisfaction that office workers feel according to their job characteristics. First, correlation coefficients are obtained to measure the degree of correlation for each variable. Second, the presence or absence of correlation in each data set is verified through hypothesis testing. Third, after visualization based on the size of the correlation coefficients, the degree of correlation between the data is investigated. Fourth, the degree of correlation between the variables is verified based on the correlation coefficients obtained through the experiment and the results of the hypothesis tests.
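
A minimal sketch of the four analysis steps, assuming pandas, SciPy, and matplotlib, with random placeholder survey data rather than the study's six job-characteristic variables:

```python
# Hypothetical sketch of the steps listed in the abstract: Pearson correlation
# coefficients, a hypothesis test for each pair, and a simple visualization.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.random((120, 3)),
                  columns=["autonomy", "workload", "satisfaction"])  # placeholder survey data

# Steps 1-2: correlation coefficient and p-value (H0: no correlation) per pair.
for a in df.columns:
    for b in df.columns:
        if a < b:
            r, p = pearsonr(df[a], df[b])
            print(f"{a} vs {b}: r={r:.3f}, p={p:.3f}")

# Steps 3-4: visualize the correlation matrix to inspect the strength of each relation.
plt.imshow(df.corr(), vmin=-1, vmax=1, cmap="coolwarm")
plt.xticks(range(len(df.columns)), df.columns, rotation=45)
plt.yticks(range(len(df.columns)), df.columns)
plt.colorbar(label="Pearson r")
plt.tight_layout()
plt.show()
```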

Study on the Surface Defect Classification of Al 6061 Extruded Material By Using CNN-Based Algorithms (CNN을 이용한 Al 6061 압출재의 표면 결함 분류 연구)

  • Kim, S.B.;Lee, K.A.
    • Transactions of Materials Processing / v.31 no.4 / pp.229-239 / 2022
  • Convolutional Neural Networks (CNNs) are a class of deep learning algorithms that can be used for image analysis. In particular, they perform very well at finding patterns in images and are therefore commonly applied to recognizing, learning, and classifying images. In this study, the surface defect classification performance of Al 6061 extruded material using CNN-based algorithms was compared and evaluated. First, data collection criteria were suggested and a total of 2,024 data samples were prepared, which were randomly split into 1,417 training samples and 607 evaluation samples. The size and quality of the training data set were then improved using data augmentation techniques to increase deep learning performance. The CNN-based algorithms used in this study were VGGNet-16, VGGNet-19, ResNet-50, and DenseNet-121. Defect classification performance was evaluated by comparing accuracy, loss, and learning speed on the verification data. The DenseNet-121 algorithm showed better performance than the other algorithms, with an accuracy of 99.13% and a loss value of 0.037. This is attributed to the structural characteristics of the DenseNet model, in which each layer acquires information from all preceding layers, reducing information loss during image identification. Based on these results, the possibility of applying CNN-based models in machine vision for the surface defect classification of Al extruded materials is also discussed.
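
A hedged sketch of fine-tuning DenseNet-121 for defect classification, assuming PyTorch/torchvision; the number of defect classes, the augmentation choices, and the random batch standing in for the extruded-material images are placeholders, not the paper's setup:

```python
# Hypothetical sketch: fine-tuning DenseNet-121 for surface-defect image
# classification with a simple augmentation pipeline (PyTorch/torchvision assumed).
import torch
from torch import nn
from torchvision import models, transforms

num_classes = 4                                  # placeholder number of defect classes

# Augmentation that would be applied to the training images in a real run
# (e.g., through torchvision.datasets.ImageFolder).
augment = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
])

# weights=None keeps the sketch offline; pretrained weights ("DEFAULT") would
# normally be used for transfer learning.
model = models.densenet121(weights=None)
model.classifier = nn.Linear(model.classifier.in_features, num_classes)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

images = torch.rand(8, 3, 224, 224)              # placeholder batch of defect images
labels = torch.randint(0, num_classes, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)          # one illustrative training step
loss.backward()
optimizer.step()
print("training loss:", float(loss))
```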

Implementation of Adaptive Hierarchical Fair Competition-based Genetic Algorithms and Its Application to Nonlinear System Modeling (적응형 계층적 공정 경쟁 기반 병렬유전자 알고리즘의 구현 및 비선형 시스템 모델링으로의 적용)

  • Choi, Jeoung-Nae;Oh, Sung-Kwun;Kim, Hyun-Ki
    • Proceedings of the KIEE Conference / 2006.10c / pp.120-122 / 2006
  • The paper concerns the hybrid optimization of fuzzy inference systems based on Hierarchical Fair Competition-based Parallel Genetic Algorithms (HFCGA) and information data granulation. The granulation is realized with the aid of Hard C-Means (HCM) clustering, while HFCGA, a multi-population variant of Parallel Genetic Algorithms (PGA), is used for structure optimization and parameter identification of the fuzzy model. The optimization concerns fuzzy-model parameters such as the number of input variables to be used, the specific subset of input variables, the number of membership functions, the order of the polynomial, and the apexes of the membership functions. In the hybrid optimization process, two general optimization mechanisms are explored: structural optimization is realized via HFCGA and the HCM method, whereas parametric optimization proceeds with a standard least squares method together with HFCGA. A comparative analysis demonstrates that the proposed algorithm is superior to conventional methods.
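
A minimal sketch of the hierarchical fair competition idea behind HFCGA, in which subpopulations are stratified by fitness and individuals migrate upward when they exceed a level's admission threshold; the objective function, thresholds, and genetic operators here are placeholders, not the paper's fuzzy-model encoding:

```python
# Hypothetical sketch of a hierarchical fair competition parallel GA:
# several subpopulations evolve independently, and individuals are promoted
# to higher fitness levels when they pass that level's admission threshold.
import random

def fitness(x):                      # placeholder objective (maximize)
    return -sum(v * v for v in x)

def mutate(x, sigma=0.1):
    return [v + random.gauss(0, sigma) for v in x]

levels = 3
thresholds = [float("-inf"), -5.0, -1.0]          # admission threshold per level
pops = [[[random.uniform(-3, 3) for _ in range(4)] for _ in range(20)]
        for _ in range(levels)]

for generation in range(50):
    # Evolve each subpopulation independently (truncation selection + mutation).
    for lvl in range(levels):
        parents = sorted(pops[lvl], key=fitness, reverse=True)[:10]
        pops[lvl] = parents + [mutate(random.choice(parents)) for _ in range(10)]
    # Fair-competition migration: promote individuals that qualify for a higher level.
    for lvl in range(levels - 1):
        stay, promote = [], []
        for ind in pops[lvl]:
            (promote if fitness(ind) >= thresholds[lvl + 1] else stay).append(ind)
        pops[lvl + 1].extend(promote)
        pops[lvl] = stay + [[random.uniform(-3, 3) for _ in range(4)]
                            for _ in range(len(promote))]   # refill with random immigrants

best = max((ind for pop in pops for ind in pop), key=fitness)
print("best fitness:", fitness(best))
```

Refilling the lower levels with random immigrants is what keeps exploration alive once good individuals have migrated upward, which is the "fair competition" aspect of the scheme.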


A comparative study on applicability and efficiency of machine learning algorithms for modeling gamma-ray shielding behaviors

  • Bilmez, Bayram;Toker, Ozan;Alp, Selcuk;Oz, Ersoy;Icelli, Orhan
    • Nuclear Engineering and Technology / v.54 no.1 / pp.310-317 / 2022
  • The mass attenuation coefficient is the primary physical parameter for modeling narrow-beam gamma-ray attenuation. A new machine learning-based approach is proposed to model the gamma-ray shielding behavior of composites as an alternative to theoretical calculations. Two fuzzy logic algorithms and a neural network algorithm were trained and tested with different mixture ratios of vanadium slag/epoxy resin/antimony in the 0.05 MeV-2 MeV energy range. Two of the algorithms showed excellent agreement with the testing data after optimizing the adjustable parameters, with root mean squared error (RMSE) values down to 0.0001. These results are remarkable because mass attenuation coefficients are often reported to four significant figures. Different training data sizes were tried to determine the least number of data points required to train adequate models. A data set of more than 1000 points is seen to be required for modeling above 0.05 MeV; below this energy, more data points with finer energy resolution might be required. Neuro-fuzzy models were three times faster to train than neural network models, while the neural network models yielded low RMSE values. Fuzzy logic algorithms are often overlooked for complex function approximation, yet the grid-partitioned fuzzy algorithms showed excellent computational efficiency and good convergence in predicting the mass attenuation coefficient.
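
A minimal sketch of the regression task, assuming scikit-learn and a small neural network, with random placeholder mixture/energy features and targets rather than real attenuation data:

```python
# Hypothetical sketch: regressing the mass attenuation coefficient from
# composite mixture ratios and photon energy, with RMSE as the test metric.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Features: [slag fraction, epoxy fraction, antimony fraction, energy in MeV]
X = rng.random((1500, 4))
y = 0.2 / (X[:, 3] + 0.05) + 0.1 * X[:, 0]     # placeholder target, not real physics

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
print(f"RMSE: {rmse:.4f}")
```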

Performance Assessments of Three Line Simplification Algorithms with Tolerance Changes (임계값 설정에 따른 선형 단순화 알고리듬의 반응 특성 연구)

  • Lee, Jae Eun;Park, Woo Jin;Yu, Ki Yun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.30 no.4 / pp.363-368 / 2012
  • The result of line simplification varies with the choice of algorithm, the tolerance value, and the selection of target objects. The three algorithms used in this study are Sleeve-Fitting, Visvalingam-Whyatt, and Bend-Simplify. They were applied to three kinds of objects (buildings, rivers, and roads) with five levels of tolerance. In these experiments, the vector displacement, areal displacement, and angular displacement were measured, and a qualitative analysis was performed using the trend lines of the errors. The experimental results show that the errors differ with the tolerance values and reveal how each line simplification algorithm responds to changes in tolerance.
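
A minimal sketch of one of the compared algorithms, Visvalingam-Whyatt, in plain Python: interior vertices are removed while their effective triangle area stays below the tolerance (the polyline is a placeholder, not the study's building, river, or road data):

```python
# Hypothetical sketch of Visvalingam-Whyatt line simplification: repeatedly
# drop the interior vertex with the smallest effective triangle area until
# every remaining vertex's area reaches the tolerance.
def triangle_area(a, b, c):
    """Area of the triangle spanned by three 2-D points."""
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def visvalingam_whyatt(points, tolerance):
    pts = list(points)
    while len(pts) > 2:
        # Effective area of every interior vertex.
        areas = [triangle_area(pts[i - 1], pts[i], pts[i + 1])
                 for i in range(1, len(pts) - 1)]
        smallest = min(range(len(areas)), key=areas.__getitem__)
        if areas[smallest] >= tolerance:
            break                      # all remaining vertices are significant
        del pts[smallest + 1]          # drop the least significant vertex
    return pts

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)]   # placeholder polyline
print(visvalingam_whyatt(line, tolerance=0.5))
```

Raising the tolerance removes more vertices, which is exactly the knob whose effect on vector, areal, and angular displacement the study measures.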