• Title/Summary/Keyword: Yelp

Applications of Machine Learning Models on Yelp Data

  • Ruchi Singh; Jongwook Woo
    • Asia Pacific Journal of Information Systems / v.29 no.1 / pp.35-49 / 2019
  • The paper documents the application of relevant machine learning (ML) models to the Yelp dataset (Yelp is a crowd-sourced local business review and social networking site) to analyze businesses, predict their popularity, and recommend business categories. Two cloud platforms were used strategically to minimize the effort and time required for the project: seven machine learning algorithms were implemented in Azure ML, four of which were also implemented in Databricks Spark ML. The analyzed Yelp business dataset contained 70 business attributes for more than 350,000 registered businesses. Additionally, review tips and likes from 500,000 users were processed for the project. A recommendation model is built to provide Yelp users with recommendations for business categories based on their previous business ratings, as well as the business ratings of other users. A classification model is implemented to predict the popularity of a business, defining popular businesses as those with star ratings greater than 3 and unpopular businesses as those with star ratings less than 3. A text analysis model is developed by comparing two algorithms, uni-gram feature extraction and n-gram feature extraction, in Azure ML Studio with a logistic regression model in Spark. Comparative conclusions are drawn regarding the efficiency of Spark ML and Azure ML for these models.
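
A minimal sketch of the popularity-classification idea in Spark ML, not the authors' code: a business is labeled "popular" when its star rating exceeds 3 and a logistic regression is fit on numeric business attributes. The column names ("stars", "review_count") follow the public Yelp dataset; the file path and feature choice are placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml import Pipeline

spark = SparkSession.builder.appName("yelp-popularity").getOrCreate()

# placeholder path to the Yelp business file
business = spark.read.json("yelp_academic_dataset_business.json")
business = business.withColumn("label", (F.col("stars") > 3).cast("double"))

# assemble a (deliberately tiny) feature vector and fit logistic regression
assembler = VectorAssembler(inputCols=["review_count"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")
model = Pipeline(stages=[assembler, lr]).fit(business)
model.transform(business).select("stars", "label", "prediction").show(5)
```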

Factors Affecting the Usefulness of Online Reviews: The Moderating Role of Price (온라인 리뷰 유용성에 영향을 미치는 요인: 가격의 조절 효과)

  • Yun, Jiyun; Ro, Yuna; Kwon, Boram; Jahng, Jungjoo
    • The Journal of Society for e-Business Studies / v.27 no.2 / pp.153-173 / 2022
  • This study analyzes Yelp's online restaurant reviews written in 2019 and explores the factors that determine the usefulness of online reviews in the restaurant consumption decision process. Specifically, factors expected to affect review usefulness are classified according to the Elaboration Likelihood Model, and the price range of the restaurant is assumed to play a moderating role. For the analysis, datasets provided by Yelp.com in February 2020 are used; among them, online reviews of businesses located in Nevada in the US and belonging to the Food and Restaurant categories are targeted. As a result of the negative binomial regression analysis, it is confirmed that the central cues, including review depth and readability, and the peripheral cues, including review consistency, reviewer popularity, and reviewer exposure, positively affect review usefulness. It is also confirmed that restaurant prices moderate the effect of these central and peripheral cues on review usefulness. The study also points to the need for price-differentiated review management strategies by review platforms and restaurant businesses.
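
An illustrative sketch, not the authors' model, of a negative binomial regression with price-moderation terms in Python. The variable names (useful_votes, review_depth, readability, reviewer_popularity, price_range) and the input file are hypothetical stand-ins for the cues described in the abstract.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# placeholder file with one row per review
df = pd.read_csv("yelp_reviews_nevada.csv")

# negative binomial GLM; '*' adds both main effects and the price interaction
model = smf.glm(
    "useful_votes ~ review_depth * price_range + readability * price_range"
    " + reviewer_popularity * price_range",
    data=df,
    family=sm.families.NegativeBinomial(),
).fit()
print(model.summary())
```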

A Study on Classifications of Useful Customer Reviews by Applying Text Mining Approach (텍스트 마이닝을 활용한 고객 리뷰의 유용성 지수 개선에 관한 연구)

  • Lee, Hong Joo
    • Journal of Information Technology Services / v.14 no.4 / pp.159-169 / 2015
  • Customer reviews are one of the important sources for purchase decision making in online stores, and online stores have tried to present useful reviews on product pages to customers. To assess the usefulness of customer reviews before other users have voted on them enough, previous studies utilized diverse aspects of reviews; style and semantic information were used in many studies. This study aims to test diverse algorithms and datasets to identify a proper classification method and threshold for classifying useful reviews. In particular, most research has utilized a ratio-type helpfulness index, as Amazon.com does. However, there is another type of usefulness index, the count-type helpfulness index used by TripAdvisor.com and Yelp.com, and no proper threshold had yet been established for classifying useful reviews with a count-type helpfulness index. This study used restaurant reviews and their usefulness votes from Yelp.com to devise diverse datasets and applied text mining approaches to classify useful reviews. Random Forest, SVM, and GLMNET showed greater accuracy than other approaches.
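
A minimal sketch of this classification setup, assuming reviews with at least a threshold number of useful votes are labeled "useful". Scikit-learn with TF-IDF features stands in for the paper's exact toolchain, and the file and column names are hypothetical.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# placeholder file with columns 'text' and 'useful' (count of useful votes)
df = pd.read_csv("yelp_restaurant_reviews.csv")
threshold = 5  # assumed cut-off for a "useful" review; the paper tunes this
y = (df["useful"] >= threshold).astype(int)

X_train, X_test, y_train, y_test = train_test_split(df["text"], y, random_state=42)
vec = TfidfVectorizer(max_features=5000, stop_words="english")
clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(vec.fit_transform(X_train), y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(vec.transform(X_test))))
```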

The Effects of Sentiment and Readability on Useful Votes for Customer Reviews with Count Type Review Usefulness Index (온라인 리뷰의 감성과 독해 용이성이 리뷰 유용성에 미치는 영향: 가산형 리뷰 유용성 정보 활용)

  • Cruz, Ruth Angelie; Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.43-61 / 2016
  • Customer reviews help potential customers make purchasing decisions. However, the prevalence of reviews on websites pushes customers to sift through them and shifts the focus from a mere search to identifying which of the available reviews are valuable and useful for the purchasing decision at hand. To identify useful reviews, websites have developed mechanisms that give customers options when evaluating existing reviews; they allow users to rate a customer review as helpful or not. Amazon.com uses a ratio-type helpfulness index, while Yelp.com uses a count-type usefulness index, which surfaces helpful reviews for future potential purchasers. This study investigated the effects of sentiment and readability on useful votes for customer reviews. Similar studies on the relationship between sentiment and readability have focused on the ratio-type usefulness index utilized by websites such as Amazon.com; in this study, Yelp.com's count-type usefulness index for restaurant reviews was used to investigate the relationship between sentiment/readability and usefulness votes. Yelp.com's online customer reviews for stores in the beverage and food categories were used for the analysis. In total, 170,294 reviews containing information on a store's reputation and popularity were used. The control variables were the review length, store reputation, and popularity; the independent variables were the sentiment and readability; and the dependent variable was the number of helpful votes. The review rating is the moderating variable for the review sentiment and readability. The length is the number of characters in a review. The popularity is the number of reviews for a store, and the reputation is the overall average rating of all reviews for a store. The readability of a review was calculated with the Coleman-Liau index, and the sentiment is a positivity score for the review as calculated by SentiWordNet. The review rating is a preference score from 1 to 5 (stars) selected by the review author. The dependent variable (i.e., usefulness votes) is a count variable, so the Poisson regression model, which is commonly used to account for the discrete and nonnegative nature of count data, was the starting point for the analyses: the increase in helpful votes was assumed to follow a Poisson distribution. Because the Poisson model assumes an equal mean and variance and the data were over-dispersed, a negative binomial distribution model that allows for over-dispersion of the count variable was used for the estimation. Zero-inflated negative binomial regression was then used to model the count variable with excessive zeros; with this model, the excess zeros are assumed to be generated by a separate process from the count values and are therefore modeled independently. The results showed that positive sentiment had a negative effect on gaining useful votes for positive reviews but no significant effect on negative reviews. Poor readability had a negative effect on gaining useful votes and was not moderated by the review star ratings. These findings yield considerable managerial implications. First, the results help online websites analyze their review guidelines and identify useful reviews for their business; positive reviews are not necessarily helpful, so restaurants should consider which type of positive review is helpful for their business. Second, the study helps businesses and website designers create review mechanisms, showing which types of reviews to highlight on their websites and which types of reviews can benefit the business. Moreover, this study highlights the review systems employed by websites that allow their customers to rate reviews.
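
For reference, a small Python sketch of the Coleman-Liau readability index used in the study above: CLI = 0.0588*L - 0.296*S - 15.8, where L is the number of letters per 100 words and S the number of sentences per 100 words. The sentence splitting here is a naive approximation, not the study's implementation.

```python
import re

def coleman_liau(text: str) -> float:
    """Approximate Coleman-Liau readability index for a piece of text."""
    words = re.findall(r"[A-Za-z']+", text)
    letters = sum(ch.isalpha() for ch in text)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))  # naive sentence count
    if not words:
        return 0.0
    L = letters / len(words) * 100    # letters per 100 words
    S = sentences / len(words) * 100  # sentences per 100 words
    return 0.0588 * L - 0.296 * S - 15.8

print(coleman_liau("The food was great. Service was slow, but the staff were friendly."))
```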

Too Much Information - Trying to Help or Deceive? An Analysis of Yelp Reviews

  • Hyuk Shin; Hong Joo Lee; Ruth Angelie Cruz
    • Asia Pacific Journal of Information Systems / v.33 no.2 / pp.261-281 / 2023
  • The proliferation of online customer reviews has completely changed how consumers purchase: consumers now depend heavily on authentic experiences shared by previous customers. However, deceptive reviews that aim to manipulate customer decision making in order to promote or defame a product or service pose a risk to businesses and buyers. Studies investigating consumer perception of deceptive reviews have found that one of the important cues is review content. This study investigates the impact of a review's information amount on its truthfulness. It adopts Information Manipulation Theory (IMT) as the overarching theory, which asserts that violations of one or more of the Gricean maxims are deceptive behaviors: delivering less than the required amount of information, or more, is a quantity violation and thus an attempt at deception. A topic modeling algorithm is implemented to reveal the distribution of topics embedded in a text. The study measures information amount as topic diversity based on the topic modeling results, where topic diversity shows how heterogeneous a text review is. Two datasets of restaurant reviews on Yelp.com, containing Filtered (deceptive) and Unfiltered (genuine) reviews, were used to test the hypotheses. Reviews that contain more diverse topics tend to be truthful; however, excessive topic diversity produces an inverted U-shaped relationship with truthfulness. Moreover, there is an interaction effect between topic diversity and review ratings, suggesting that the impact of topic diversity is strengthened when deceptive reviews have lower ratings. This study contributes to the existing literature on IMT by building the connection between topic diversity in a review and its truthfulness. In addition, the empirical results show that topic diversity is a reliable measure for gauging the information amount of reviews.
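
An illustrative sketch of one way topic diversity can be computed. The abstract does not specify the exact measure, so the Shannon entropy of each review's LDA topic distribution is assumed here purely for illustration, with toy reviews in place of the Yelp data.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "Great pasta and friendly staff, will come back.",
    "Terrible service, long wait, cold food, rude host, dirty tables.",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=5, random_state=0).fit(X)

theta = lda.transform(X)  # per-review topic distributions
# higher entropy = the review spreads over more topics = more "diverse"
diversity = -np.sum(theta * np.log(theta + 1e-12), axis=1)
print(diversity)
```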

Context-Based Prompt Selection Methodology to Enhance Performance in Prompt-Based Learning

  • Lib Kim; Namgyu Kim
    • Journal of the Korea Society of Computer and Information / v.29 no.4 / pp.9-21 / 2024
  • Deep learning has been developing rapidly in recent years, and many researchers are working to utilize large language models in various domains. However, developing and utilizing such language models requires massive data and high-performance computing resources, which poses practical difficulties. In-context learning, which uses prompts to learn efficiently, has therefore been introduced, but there are no clear criteria for what makes a prompt effective for learning. In this study, we propose a methodology for enhancing prompt-based learning performance by improving the PET technique, one of the prompt-based learning methods, so that it selects PVPs (pattern-verbalizer pairs) that are similar to the context of the existing data. To evaluate the performance of the proposed methodology, we conducted experiments with 30,100 restaurant reviews collected from Yelp, an online business review platform. We found that the proposed methodology outperforms traditional PET in accuracy, stability, and learning efficiency.
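
A heavily simplified sketch of the general idea of context-based prompt selection: score candidate prompt patterns (PVP-style templates) by their similarity to the task data and keep the closest ones. TF-IDF cosine similarity stands in for whatever context encoder the paper actually uses, and the templates and reviews are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reviews = ["The tacos were amazing and the staff was friendly.",
           "Waited an hour and the soup arrived cold."]
patterns = ["All in all, the restaurant was [MASK].",
            "The weather today is [MASK].",
            "In summary, the food and service were [MASK]."]

# embed patterns and reviews in the same TF-IDF space and rank patterns
vec = TfidfVectorizer().fit(reviews + patterns)
sims = cosine_similarity(vec.transform(patterns), vec.transform(reviews)).mean(axis=1)
ranked = sorted(zip(patterns, sims), key=lambda p: p[1], reverse=True)
print(ranked[0])  # the pattern whose wording best matches the review context
```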

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok; Yang, Seok Woo; Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.105-122 / 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. For dimensionality reduction, we should consider the density of the data, which has a significant influence on the performance of sentence classification. Higher-dimensional data requires many computations, which can cause high computational cost and overfitting in the model; thus, a dimension reduction process is necessary to improve model performance. Diverse methods have been proposed, from merely lessening noise in the data, such as misspellings or informal text, to incorporating semantic and syntactic information. On top of that, the representation and selection of text features affect the performance of the classifier for sentence classification, which is one of the fields of natural language processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in the observation space. Existing methods utilize various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition to these algorithms, word embeddings, which learn low-dimensional vector space representations of words that capture semantic and syntactic information, are also utilized. For improving performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once a feature selection algorithm identifies unimportant words, words that are similar to those words are assumed to have no impact on sentence classification either. This study proposes two ways to achieve more accurate classification, which conduct selective word elimination under specific rules and construct word embeddings based on Word2Vec. To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words with comparatively low information gain values from the raw text and form word embeddings. Second, we additionally remove words that are similar to the low-information-gain words and build word embeddings from the result (a small illustrative sketch follows below). Finally, the filtered text and word embeddings are fed to the deep learning models: a Convolutional Neural Network and an attention-based bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets and classifies each dataset using the deep learning models. Reviews that received more than five helpful votes and whose ratio of helpful votes exceeded 70% were classified as helpful reviews; because Yelp only shows the number of helpful votes, we extracted 100,000 reviews with more than five helpful votes from among 750,000 reviews using random sampling. Minimal preprocessing, such as removing numbers and special characters from the text data, was applied to each dataset. To evaluate the proposed methods, we compared their performance with Word2Vec and GloVe embeddings that used all the words, and showed that one of the proposed methods outperforms the embeddings built from all words. By removing unimportant words, we can obtain better performance; however, removing too many words lowered the performance. For future research, diverse preprocessing approaches and an in-depth analysis of word co-occurrence for measuring similarity between words should be considered. Also, we only applied the proposed method with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be applied with the proposed methods, and the possible combinations between word embedding methods and elimination methods can be explored.
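
A small sketch of the elimination idea under stated assumptions: word importance is approximated with scikit-learn's mutual information (standing in for information gain), low-scoring words and their Word2Vec neighbors are dropped, and embeddings are retrained on the filtered corpus. The toy documents, labels, and threshold are purely illustrative.

```python
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

docs = ["great food and great service", "cold food and rude waiter terrible experience"]
labels = [1, 0]

# score each word against the class labels (mutual information as an IG stand-in)
vec = CountVectorizer()
X = vec.fit_transform(docs)
ig = mutual_info_classif(X, labels, discrete_features=True)
vocab = vec.get_feature_names_out()
low_ig = {w for w, s in zip(vocab, ig) if s < 0.1}  # assumed importance threshold

# train Word2Vec on the raw corpus and collect neighbors of the unimportant words
tokens = [d.split() for d in docs]
w2v = Word2Vec(tokens, vector_size=50, min_count=1, window=3, seed=0)
similar = {s for w in low_ig if w in w2v.wv
           for s, _ in w2v.wv.most_similar(w, topn=2)}
drop = low_ig | similar

# remove both sets of words and retrain the embedding on the filtered text
filtered = [[t for t in sent if t not in drop] for sent in tokens]
w2v_filtered = Word2Vec(filtered, vector_size=50, min_count=1, window=3, seed=0)
print(drop)
```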

Common Calls of Poodle (Poodle의 발성음)

  • 연성찬; 서강문; 권오경; 남치주
    • Journal of Veterinary Clinics / v.13 no.2 / pp.163-170 / 1996
  • This study was performed to analyze the common calls of the poodle spectrographically: bark, growl, howl, snore, yelp, and whine. The sonograms of the six common calls showed their own specific features. There were significant differences among the call types in the parameters of minimum frequency of call (MIFC), maximum frequency of call (MAFC), duration of call (DC), interval between calls (IBC), dominant frequency (DF), F1 formant, F2 formant, and F3 formant (P<0.01). It was concluded that the main common calls of dogs can be recorded by sonograms and could serve as objective baseline data for understanding the psychological states of dogs, the social relationships among them, and their relationships with human beings.

Performance Evaluation Between PC and RaspberryPI Cluster in Apache Spark for Processing Big Data (빅데이터 처리를 위한 PC와 라즈베리파이 클러스터에서의 Apache Spark 성능 비교 평가)

  • Seo, Ji-Hye; Park, Mi-Rim; Yang, Hye-Kyung; Yong, Hwan-Seung
    • Proceedings of the Korea Information Processing Society Conference / 2015.10a / pp.1265-1267 / 2015
  • With the recent emergence of IoT technology, Raspberry Pi clusters, which are low-power small computers, are being used for IoT data processing. As IoT technology advances, diverse data are being generated, and big data processing is required even in IoT environments. Hadoop is generally used as the big data processing framework, and Apache Spark has emerged as an alternative solution. In this paper, we compared the performance of a PC cluster and a Raspberry Pi cluster using Apache Spark. For the experiment, Yelp data were used, and performance was compared in terms of data load time and data processing time with Spark SQL.
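
A minimal sketch of the kind of measurement described, with hypothetical paths and query: load the Yelp review data into Spark, register it as a SQL table, and time a Spark SQL aggregation on each cluster (PC vs. Raspberry Pi).

```python
import time
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("yelp-benchmark").getOrCreate()

t0 = time.time()
reviews = spark.read.json("yelp_academic_dataset_review.json")  # placeholder path
reviews.createOrReplaceTempView("reviews")
load_time = time.time() - t0

t1 = time.time()
spark.sql("SELECT stars, COUNT(*) AS n FROM reviews GROUP BY stars").collect()
query_time = time.time() - t1

print(f"load: {load_time:.1f}s  query: {query_time:.1f}s")
```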

Development of a Deep Learning Model for Detecting Fake Reviews Using Author Linguistic Features (작성자 언어적 특성 기반 가짜 리뷰 탐지 딥러닝 모델 개발)

  • Shin, Dong Hoon; Shin, Woo Sik; Kim, Hee Woong
    • The Journal of Information Systems / v.31 no.4 / pp.1-23 / 2022
  • Purpose: This study aims to propose a deep learning-based fake review detection model that combines authors' linguistic features with the semantic information of reviews. Design/methodology/approach: The study used 358,071 Yelp reviews to develop the fake review detection model. Linguistic Inquiry and Word Count (LIWC) was employed to extract 24 linguistic features of the authors. Deep learning architectures such as the multilayer perceptron (MLP), long short-term memory (LSTM), and transformer were then used to learn linguistic and semantic features for fake review detection. Findings: The results show that detection models using both linguistic and semantic features outperformed models using a single type of feature. In addition, the study confirmed that differences in linguistic features between fake and authentic reviewers are significant; that is, linguistic features complement the semantic information of reviews and further enhance the predictive power of the fake review detection model.
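
A hedged sketch of the general idea of combining author linguistic features with review semantics, not the paper's configuration: one branch embeds the review text (an LSTM here), another takes the 24 LIWC-derived feature values (assumed to be precomputed, since LIWC is separate software), and the two are concatenated before the fake/genuine prediction. Shapes and layer sizes are illustrative.

```python
import numpy as np
from tensorflow.keras import layers, Model

vocab_size, seq_len, n_liwc = 10000, 200, 24

text_in = layers.Input(shape=(seq_len,), name="token_ids")
liwc_in = layers.Input(shape=(n_liwc,), name="liwc_features")

x = layers.Embedding(vocab_size, 64)(text_in)
x = layers.LSTM(64)(x)                            # semantic representation of the review
h = layers.Dense(32, activation="relu")(liwc_in)  # author linguistic features
out = layers.Dense(1, activation="sigmoid")(layers.Concatenate()([x, h]))

model = Model([text_in, liwc_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# toy forward pass with random data just to show the expected input shapes
model.predict([np.random.randint(0, vocab_size, (2, seq_len)),
               np.random.rand(2, n_liwc)])
```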