• Title/Summary/Keyword: learning algorithms


Sentiment Analysis of Product Reviews to Identify Deceptive Rating Information in Social Media: A SentiDeceptive Approach

  • Marwat, M. Irfan;Khan, Javed Ali;Alshehri, Dr. Mohammad Dahman;Ali, Muhammad Asghar;Hizbullah;Ali, Haider;Assam, Muhammad
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.3 / pp.830-860 / 2022
  • [Introduction] Many companies are moving their businesses online because customers increasingly prefer to shop and purchase products online. [Problem] Users share a vast amount of information about products, making it difficult and challenging for end-users to reach purchasing decisions. [Motivation] We therefore need a mechanism that automatically analyzes end-user opinions, thoughts, and feelings about products on social media platforms, which can help customers make or revise buying decisions. [Proposed Solution] For this purpose, we propose the automated SentiDeceptive approach, which classifies end-user reviews into negative, positive, and neutral sentiments and identifies deceptive crowd-user rating information on social media platforms to support decision-making. [Methodology] We first collected 11,781 end-user comments from the Amazon store and the Flipkart web application covering diverse products such as watches, mobile phones, shoes, clothes, and perfumes. Next, we developed a coding guideline used as the basis for the comment annotation process. We then applied a content analysis approach together with the existing VADER library to annotate the end-user comments with the identified codes, producing a labelled dataset used as input to the machine learning classifiers. Finally, we applied the sentiment analysis approach to identify end-user opinions and counter deceptive rating information: we preprocessed the input data to remove irrelevant content (stop words, special characters, etc.), employed two standard resampling approaches (oversampling and undersampling) to balance the dataset, extracted features (TF-IDF and BOW) from the textual data, and then trained and tested the machine learning algorithms with standard cross-validation approaches (KFold and ShuffleSplit). [Results/Outcomes] To support the study, we also developed an automated tool that analyzes each customer's feedback and displays the collective sentiment about a specific product as a graph, helping customers make decisions. In a nutshell, the proposed approach produces good results when identifying customer sentiments from online feedback, obtaining on average 94.01% precision, 93.69% recall, and a 93.81% F-measure for classifying positive sentiments.
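
The pipeline described in this abstract (TF-IDF/BOW features, over/undersampling, KFold and ShuffleSplit cross-validation) is standard enough to sketch. Below is a minimal scikit-learn version; the file name, column names, and the choice of logistic regression as the classifier are illustrative assumptions, and the paper's coding guideline and VADER-based annotation step are not reproduced.

```python
# Minimal sketch of the described pipeline: TF-IDF features, oversampling to
# balance sentiment classes, and cross-validated training of a classifier.
# The CSV file and column names ("review", "sentiment") are assumptions.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, ShuffleSplit, cross_val_score
from imblearn.over_sampling import RandomOverSampler

df = pd.read_csv("reviews.csv")             # hypothetical labelled dataset
X_text, y = df["review"], df["sentiment"]   # positive / negative / neutral

# TF-IDF features; stop words are removed during vectorization.
X = TfidfVectorizer(stop_words="english", max_features=5000).fit_transform(X_text)

# Oversample minority classes so each sentiment is equally represented.
X_bal, y_bal = RandomOverSampler(random_state=42).fit_resample(X, y)

clf = LogisticRegression(max_iter=1000)
for cv in (KFold(n_splits=5, shuffle=True, random_state=42),
           ShuffleSplit(n_splits=5, test_size=0.2, random_state=42)):
    scores = cross_val_score(clf, X_bal, y_bal, cv=cv, scoring="f1_macro")
    print(type(cv).__name__, scores.mean())
```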

An Accurate Cryptocurrency Price Forecasting using Reverse Walk-Forward Validation (역순 워크 포워드 검증을 이용한 암호화폐 가격 예측)

  • Ahn, Hyun;Jang, Baekcheol
    • Journal of Internet Computing and Services / v.23 no.4 / pp.45-55 / 2022
  • The cryptocurrency market is growing; for example, the market capitalization of Bitcoin has exceeded 500 trillion won. Accordingly, many studies have attempted to predict cryptocurrency prices, and most adopt methodologies similar to those used for stock price prediction. However, cryptocurrency differs from stocks in several respects: machine learning models have become the best-performing models for cryptocurrency price prediction; conceptually, cryptocurrency yields no passive income from ownership; and statistically, cryptocurrency has at least three times the liquidity of stocks. We therefore argue that cryptocurrency price prediction studies should apply a methodology different from stock price prediction. We propose Reverse Walk-forward Validation (RWFV), a modification of Walk-forward Validation (WFV). Unlike WFV, RWFV pins the validation dataset directly in front of the test dataset in the time series and measures validation accuracy while gradually enlarging the training dataset in front of it. The training data are then cut to the training-set size with the highest validation accuracy, combined with the validation data, and used to measure accuracy on the test data. Logistic regression and Support Vector Machine (SVM) models were used for the analysis, and various algorithms and parameters such as L1, L2, rbf, and poly were applied to test the reliability of the proposed RWFV. As a result, all models showed improved accuracy compared with existing studies, with accuracy increasing by 1.23 percentage points on average. This is a significant improvement, given that the accuracy of cryptocurrency price prediction in previous studies mostly remains between 50% and 60%.
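
The abstract's description of RWFV is concrete enough for a sketch: pin the validation block directly before the test block, grow the training window backward in time, pick the window size with the best validation accuracy, then retrain on that window plus the validation block. The window sizes and logistic-regression model below are illustrative assumptions, not the authors' settings.

```python
# Hedged sketch of Reverse Walk-forward Validation (RWFV).
import numpy as np
from sklearn.linear_model import LogisticRegression

def rwfv(X, y, val_size=50, test_size=50, step=50):
    n = len(X)
    test_s = n - test_size                 # test block: most recent data
    val_s = test_s - val_size              # validation pinned just before it
    best_acc, best_train_s = -1.0, 0
    # Grow the training window backward in time from the validation block.
    for train_s in range(val_s - step, -1, -step):
        model = LogisticRegression(max_iter=1000)
        model.fit(X[train_s:val_s], y[train_s:val_s])
        acc = model.score(X[val_s:test_s], y[val_s:test_s])
        if acc > best_acc:
            best_acc, best_train_s = acc, train_s
    # Retrain on the best window plus the validation block, score on test.
    model = LogisticRegression(max_iter=1000)
    model.fit(X[best_train_s:test_s], y[best_train_s:test_s])
    return model.score(X[test_s:], y[test_s:])
```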

Development of Open Set Recognition-based Multiple Damage Recognition Model for Bridge Structure Damage Detection (교량 구조물 손상탐지를 위한 Open Set Recognition 기반 다중손상 인식 모델 개발)

  • Kim, Young-Nam;Cho, Jun-Sang;Kim, Jun-Kyeong;Kim, Moon-Hyun;Kim, Jin-Pyung
    • KSCE Journal of Civil and Environmental Engineering Research / v.42 no.1 / pp.117-126 / 2022
  • The number of bridge structures in Korea is continuously increasing, and the number of old bridges that have been in service for more than 30 years is also growing steadily. Bridge aging is treated as a serious social problem not only in Korea but worldwide, and existing manpower-centered inspection methods are revealing their limitations. Recently, various bridge damage detection studies using deep learning-based image processing algorithms have been conducted, but because of the limitations of available bridge damage datasets, most are limited to a single damage type, cracks, and rely on closed set classification models. When such a model is applied to real bridge images, serious misrecognition can occur for input images of unknown classes, such as background or other objects. In this study, five types of bridge damage including cracks were defined, a dataset was built and used to train a deep learning model, and an open set recognition-based multiple damage recognition model applying the OpenMax algorithm was constructed. Classification and recognition performance were then evaluated on an open set including untrained images, and the results were analyzed.
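
For readers unfamiliar with open set recognition, the sketch below shows only the core rejection idea. Full OpenMax fits a Weibull model to the tail distances between correctly classified activation vectors and their class means and recalibrates the logits into an explicit "unknown" probability; this simplified stand-in keeps just the mean-activation-vector distances and a rejection threshold, both of which are assumptions rather than the authors' implementation.

```python
# Simplified stand-in for OpenMax-style open set recognition: assign a
# sample to its nearest known-class mean activation vector (MAV), but
# reject it as "unknown" if it is far from every known damage class.
import numpy as np

def fit_class_means(features, labels, num_classes):
    # Mean activation vector per known damage class (e.g., crack, spalling).
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def predict_open_set(feature, mavs, threshold):
    dists = np.linalg.norm(mavs - feature, axis=1)
    c = int(dists.argmin())
    # -1 marks an unknown class such as background or other objects.
    return c if dists[c] < threshold else -1
```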

Evaluation of Artificial Intelligence Accuracy by Increasing the CNN Hidden Layers: Using Cerebral Hemorrhage CT Data (CNN 은닉층 증가에 따른 인공지능 정확도 평가: 뇌출혈 CT 데이터)

  • Kim, Han-Jun;Kang, Min-Ji;Kim, Eun-Ji;Na, Yong-Hyeon;Park, Jae-Hee;Baek, Su-Eun;Sim, Su-Man;Hong, Joo-Wan
    • Journal of the Korean Society of Radiology / v.16 no.1 / pp.1-6 / 2022
  • Deep learning is a collection of algorithms that learn by summarizing the key content of large amounts of data, and it is being developed to diagnose lesions in medical imaging. To evaluate diagnostic accuracy for cerebral hemorrhage, we used a convolutional neural network (CNN) to classify cerebral parenchyma computed tomography (CT) images, including images of areas where hemorrhages were suspected to have occurred. We compared CNNs with different numbers of hidden layers and found that networks with more hidden layers achieved higher accuracy. The analysis results of the CT images used in this study to determine the presence of cerebral hemorrhage are expected to serve as foundational data for studies on applying artificial intelligence in the medical imaging industry.
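
The experimental design, comparing otherwise identical CNNs that differ only in their number of hidden layers, can be sketched in Keras as below. Input size, filter counts, and all training settings are assumptions, not the authors' configuration.

```python
# Illustrative sketch: build CNNs that differ only in the number of hidden
# convolutional layers and compare accuracy on binary hemorrhage / normal
# CT classification.
import tensorflow as tf
from tensorflow.keras import layers

def build_cnn(num_hidden_conv_layers, input_shape=(128, 128, 1)):
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    for _ in range(num_hidden_conv_layers):
        x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
        x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # hemorrhage vs. normal
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Compare depths, e.g.: for d in (1, 2, 3, 4): build_cnn(d).fit(...)
```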

Implementation of Specific Target Detection and Tracking Technique using Re-identification Technology based on public Multi-CCTV (공공 다중CCTV 기반에서 재식별 기술을 활용한 특정대상 탐지 및 추적기법 구현)

  • Hwang, Joo-Sung;Nguyen, Thanh Hai;Kang, Soo-Kyung;Kim, Young-Kyu;Kim, Joo-Yong;Chung, Myoung-Sug;Lee, Jooyeoun
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.4 / pp.49-57 / 2022
  • The government is making great efforts to prevent crimes, such as finding missing children, by using public CCTVs. However, operating manpower is short, operators' concentration weakens over long monitoring sessions, and tracking targets across cameras is difficult. In addition, applying real-time object search, re-identification, and tracking through deep learning algorithms increases parameter counts and exhausts memory, slowing inference because of complex network analysis. In this paper, we designed the network to improve speed and save memory by applying YOLO v4, which can recognize objects in real time, together with batching and TensorRT technology. Building on these algorithms, OSNet re-ranking and k-reciprocal nearest neighbors for re-identification, the Jaccard distance dissimilarity measure for correlation, and related components were developed and used in a national-safety CCTV identification and tracking solution. As a result, we propose a solution that can recognize, re-identify, and track objects in real time in a Korean public multi-CCTV environment through this combination of algorithms.
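
The re-ranking step named in this abstract, k-reciprocal nearest neighbors scored with a Jaccard distance, can be sketched as follows. Detection with YOLO v4, OSNet feature extraction, and TensorRT deployment are assumed to happen upstream and are not shown; the value of k and the Euclidean base metric are illustrative choices.

```python
# Hedged sketch of k-reciprocal nearest neighbors and Jaccard distance,
# the core of standard re-identification re-ranking.
import numpy as np

def k_reciprocal_sets(features, k=20):
    # Pairwise Euclidean distances between appearance feature vectors.
    d = np.linalg.norm(features[:, None] - features[None, :], axis=2)
    knn = np.argsort(d, axis=1)[:, :k + 1]          # includes self
    sets = []
    for i in range(len(features)):
        # j is k-reciprocal to i if each appears in the other's k-NN list.
        sets.append({int(j) for j in knn[i] if i in knn[j]})
    return sets

def jaccard_distance(set_a, set_b):
    inter = len(set_a & set_b)
    union = len(set_a | set_b)
    return 1.0 - inter / union if union else 1.0
```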

A Comparison of Pan-sharpening Algorithms for GK-2A Satellite Imagery (천리안위성 2A호 위성영상을 위한 영상융합기법의 비교평가)

  • Lee, Soobong;Choi, Jaewan
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.4 / pp.275-292 / 2022
  • To detect climate changes using satellite imagery, the GCOS (Global Climate Observing System) defines requirements such as spatio-temporal resolution, stability over time, and uncertainty. Owing to the limitations of the GK-2A sensor, the level-2 products cannot satisfy these requirements, especially for spatial resolution. In this paper, we sought the optimal pan-sharpening algorithm for GK-2A products. Six pan-sharpening methods spanning CS (Component Substitution), MRA (Multi-Resolution Analysis), VO (Variational Optimization), and DL (Deep Learning) were used. For DL, the synthesis-property-based method was used to generate the training dataset: the pan-sharpening model is applied to PAN (Panchromatic) and MS (Multispectral) images with reduced spatial resolution, and the fused image is compared with the original MS image. A synthesis-property-based method can produce fused images of the quality users require only when the geometric characteristics of the reduced-resolution PAN and MS images are similar; since some dissimilarity exists, RD (Random Down-sampling) was additionally used to minimize it, and among the pan-sharpening methods, PSGAN was applied with RD (PSGAN_RD). The fused images were qualitatively and quantitatively validated using the consistency property and the synthesis property. The GSA algorithm performed well on the evaluation indices representing spatial characteristics, while PSGAN_RD showed the best spectral accuracy against the original MS image. Therefore, considering both the spatial and spectral characteristics of the fused images, we found PSGAN_RD suitable for GK-2A products.
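
As background for the CS family this paper evaluates, here is a generic component-substitution sketch: replace the intensity component of the upsampled MS image with a histogram-matched PAN band and inject the resulting spatial detail into every band. This is the generic CS idea only, not the GSA or PSGAN_RD algorithms the paper actually compares.

```python
# Generic component-substitution (CS) pan-sharpening sketch.
import numpy as np

def cs_pansharpen(ms_up, pan):
    """ms_up: (H, W, B) multispectral image upsampled to PAN resolution;
    pan: (H, W) panchromatic band."""
    intensity = ms_up.mean(axis=2)
    # Histogram-match PAN to the intensity component (mean/std matching).
    pan_m = ((pan - pan.mean()) / (pan.std() + 1e-6)
             * intensity.std() + intensity.mean())
    detail = pan_m - intensity
    # Inject the spatial detail into every spectral band.
    return ms_up + detail[:, :, None]
```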

A Study on Tire Surface Defect Detection Method Using Depth Image (깊이 이미지를 이용한 타이어 표면 결함 검출 방법에 관한 연구)

  • Kim, Hyun Suk;Ko, Dong Beom;Lee, Won Gok;Bae, You Suk
    • KIPS Transactions on Software and Data Engineering / v.11 no.5 / pp.211-220 / 2022
  • Recently, research on smart factories triggered by the 4th industrial revolution has been actively conducted, and the manufacturing industry is studying ways to improve productivity and quality based on deep learning techniques with robust performance. This paper studies methods for detecting tire surface defects in the visual inspection stage of the tire manufacturing process and introduces a defect detection method using depth images acquired with a 3D camera. The tire surface depth images addressed in this study suffer from low contrast caused by the shallow depth of the tire surface and from differences in the reference depth value due to the acquisition environment. Moreover, the manufacturing setting requires algorithms that combine detection performance with real-time processing. Therefore, we studied relatively simple ways to normalize the depth image so that the defect detection pipeline does not require a complex chain of algorithms, and we compared a general normalization method with the normalization method proposed in this paper using YOLO V3, which can satisfy both detection performance and speed requirements. The experiments confirm that the proposed normalization method improves performance by about 7% in mAP@0.5, demonstrating its effectiveness.
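
A depth normalization of the simple kind this abstract motivates might look like the sketch below: subtract a per-image reference depth so acquisitions at different camera distances align, then stretch the remaining shallow range to 8-bit contrast before detection. The median reference and percentile limits are assumptions, not the paper's method.

```python
# Hedged sketch of simple depth-image normalization before a detector
# such as YOLO V3.
import numpy as np

def normalize_depth(depth):
    # Subtract the dominant background depth so images acquired at
    # different distances share a common reference value.
    ref = np.median(depth)
    d = depth.astype(np.float32) - ref
    # Stretch the narrow remaining range (shallow surface relief) to 0..255.
    lo, hi = np.percentile(d, (1, 99))
    d = np.clip((d - lo) / (hi - lo + 1e-6), 0.0, 1.0)
    return (d * 255).astype(np.uint8)
```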

Estimation of Spatial Distribution Using the Gaussian Mixture Model with Multivariate Geoscience Data (다변량 지구과학 데이터와 가우시안 혼합 모델을 이용한 공간 분포 추정)

  • Kim, Ho-Rim;Yu, Soonyoung;Yun, Seong-Taek;Kim, Kyoung-Ho;Lee, Goon-Taek;Lee, Jeong-Ho;Heo, Chul-Ho;Ryu, Dong-Woo
    • Economic and Environmental Geology / v.55 no.4 / pp.353-366 / 2022
  • Spatial estimation of geoscience data (geo-data) is challenging due to spatial heterogeneity, data scarcity, and high dimensionality, so a spatial estimation method that reflects the characteristics of geo-data is needed. In this study, we propose applying the Gaussian Mixture Model (GMM), a machine learning algorithm, with multivariate data for robust spatial prediction. The performance of the proposed approach was tested on soil chemical concentration data from a former smelting area. The concentrations of As and Pb determined by ex-situ ICP-AES were the primary variables to be interpolated, while the other ICP-AES metal concentrations and all data determined by in-situ portable X-ray fluorescence (PXRF) were used as auxiliary variables in GMM and ordinary cokriging (OCK). Among the multidimensional auxiliary variables, important variables were selected using a random forest-based variable selection method. GMM with the important multivariate auxiliary data reduced the root mean squared error (RMSE) to 0.11 for As and 0.33 for Pb and increased the correlation (r) to 0.31 for As and 0.46 for Pb, compared with ordinary kriging and OCK using univariate or bivariate data. The use of GMM thus improved the spatial interpolation of anthropogenic metals in soil, and this multivariate spatial approach can help in understanding complex and heterogeneous geological and geochemical features.
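
One way to use a GMM for this kind of multivariate prediction is to fit it on the joint (auxiliary, target) vector and take the mixture-weighted conditional mean of the target given the auxiliary measurements at each location. The sketch below follows that construction; the component count and variable layout are illustrative, and the paper's random forest variable selection and cokriging comparison are not reproduced.

```python
# Hedged sketch of GMM-based conditional prediction of a target variable
# (e.g., ICP-AES As) from auxiliary variables (e.g., PXRF data).
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def gmm_conditional_mean(gmm, x_aux):
    """Predict the last variable given the leading auxiliary variables."""
    d = x_aux.size                       # auxiliary dims; target is dim d
    num, den = 0.0, 0.0
    for w, mu, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_):
        mu_a, mu_t = mu[:d], mu[d]
        c_aa, c_ta = cov[:d, :d], cov[d, :d]
        # Conditional mean of the target under this Gaussian component.
        m = mu_t + c_ta @ np.linalg.solve(c_aa, x_aux - mu_a)
        p = w * multivariate_normal.pdf(x_aux, mean=mu_a, cov=c_aa)
        num, den = num + p * m, den + p
    return num / den

# Fit on the joint vector, e.g.:
# gmm = GaussianMixture(n_components=5, covariance_type="full").fit(
#     np.column_stack([aux_vars, target]))
```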

Comparison of Seismic Data Interpolation Performance using U-Net and cWGAN (U-Net과 cWGAN을 이용한 탄성파 탐사 자료 보간 성능 평가)

  • Yu, Jiyun;Yoon, Daeung
    • Geophysics and Geophysical Exploration / v.25 no.3 / pp.140-161 / 2022
  • Seismic data are often acquired with regularly or irregularly missing traces because of environmental and economic constraints, so seismic data interpolation is an essential step in seismic data processing. Recently, research on machine learning-based seismic data interpolation has been flourishing; in particular, the convolutional neural network (CNN) and the generative adversarial network (GAN), widely used for super-resolution problems in image processing, are also applied to seismic data interpolation. In this study, a CNN-based algorithm (U-Net) and a GAN-based algorithm (conditional Wasserstein GAN, cWGAN) were used as interpolation methods, and their results and performance were evaluated thoroughly to find the method that reconstructs missing seismic data with the highest accuracy. The work process for model training and performance evaluation was divided into two cases. In Case I, we trained the model using only regularly sampled data with 50% missing traces and evaluated it on six different test datasets combining regular and irregular sampling at various sampling ratios. In Case II, six models were trained on datasets sampled in the same way as the six test datasets and applied to the same test datasets used in Case I for comparison. We found that cWGAN showed better prediction performance than U-Net, with higher PSNR and SSIM; however, cWGAN added noise to its predictions, so an ensemble technique was applied to remove the noise and improve accuracy. The cWGAN ensemble model successfully removed the noise and showed improved PSNR and SSIM compared with the individual models.
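
The data preparation and evaluation in the two cases can be sketched independently of the networks: mask 50% of traces, regularly or irregularly, to form inputs, and score reconstructions with PSNR. The U-Net and cWGAN models themselves are not reproduced; the array layout and masking details are assumptions.

```python
# Hedged sketch of trace masking for training pairs and PSNR evaluation.
import numpy as np

def mask_traces(section, ratio=0.5, regular=True, rng=None):
    """section: (time_samples, traces) seismic gather; returns input, mask."""
    n = section.shape[1]
    mask = np.ones(n, dtype=bool)
    if regular:
        mask[::int(round(1 / ratio))] = False   # e.g., drop every other trace
    else:
        if rng is None:
            rng = np.random.default_rng(0)
        mask[rng.choice(n, int(n * ratio), replace=False)] = False
    return section * mask[None, :], mask

def psnr(reference, estimate):
    mse = np.mean((reference - estimate) ** 2)
    return 10 * np.log10(reference.max() ** 2 / mse)
```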

TeGCN: Transformer-embedded Graph Neural Network for Thin-filer Default Prediction (TeGCN: 씬파일러 신용평가를 위한 트랜스포머 임베딩 기반 그래프 신경망 구조 개발)

  • Seongsu Kim;Junho Bae;Juhyeon Lee;Heejoo Jung;Hee-Woong Kim
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.419-437 / 2023
  • As the number of thin filers in Korea surpasses 12 million, there is growing interest in assessing their credit default risk more accurately to generate additional revenue. In particular, researchers are actively developing default prediction models using machine learning and deep learning algorithms, in contrast to traditional statistical methods, which struggle to capture nonlinearity. Among these efforts, the Graph Neural Network (GNN) architecture is noteworthy for predicting default when data on thin filers are limited, because it can incorporate network information between borrowers alongside conventional credit-related data. However, prior work employing graph neural networks has had difficulty handling the diverse categorical variables present in credit information. In this study, we introduce the Transformer-embedded Graph Convolutional Network (TeGCN), which addresses these limitations to enable effective default prediction for thin filers. TeGCN combines the TabTransformer, which extracts contextual information from categorical variables, with a Graph Convolutional Network, which captures network information between borrowers. TeGCN surpasses the baseline model's performance on both the general borrower dataset and the thin filer dataset, and it performs especially well on thin filer default prediction. The study achieves high default prediction accuracy with a model structure tailored to credit information containing numerous categorical variables, especially for thin filers with limited data. It can contribute to resolving the financial exclusion faced by thin filers and facilitate additional revenue within the financial industry.
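
A structural sketch of the TeGCN idea, under stated assumptions: embed each categorical credit variable, contextualize the embeddings with a Transformer encoder (TabTransformer-style), and propagate the resulting node features over the borrower graph with a graph convolution. Dimensions, layer counts, and the prediction head below are assumptions, not the authors' exact architecture.

```python
# Hedged PyTorch sketch of a TabTransformer-plus-GCN structure.
import torch
import torch.nn as nn

class TeGCNSketch(nn.Module):
    def __init__(self, cardinalities, dim=32):
        super().__init__()
        # One embedding table per categorical credit variable.
        self.embeds = nn.ModuleList(nn.Embedding(c, dim) for c in cardinalities)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.gcn_weight = nn.Linear(len(cardinalities) * dim, dim)
        self.head = nn.Linear(dim, 1)            # default-probability logit

    def forward(self, cat_feats, adj_norm):
        # cat_feats: (nodes, n_categorical) integer codes per borrower
        # adj_norm: (nodes, nodes) normalized adjacency of the borrower graph
        tokens = torch.stack([e(cat_feats[:, i])
                              for i, e in enumerate(self.embeds)], dim=1)
        ctx = self.encoder(tokens).flatten(1)     # contextual embeddings
        h = torch.relu(adj_norm @ self.gcn_weight(ctx))   # one GCN step
        return self.head(h).squeeze(-1)
```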