A Methodology for Customer Core Requirement Analysis by Using Text Mining : Focused on Chinese Online Cosmetics Market (텍스트 마이닝을 활용한 사용자 핵심 요구사항 분석 방법론 : 중국 온라인 화장품 시장을 중심으로)

  • Shin, Yoon Sig;Baek, Dong Hyun
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.44 no.2
    • /
    • pp.66-77
    • /
    • 2021
  • Companies widely use surveys to identify customer requirements, but surveys have some problems. First, responses are passive because the questionnaire is pre-designed by the company conducting the survey. Second, the surveyor needs good preliminary knowledge to produce a high-quality survey. Text mining, on the other hand, is an excellent way to compensate for these limitations. The importance of online reviews has grown steadily, and the amount of available text data has increased with Internet usage; at the same time, techniques for extracting high-quality information from text data, collectively called text mining, keep improving. However, previous studies tend to focus on improving the accuracy of individual analysis techniques. This study proposes a methodology that combines several text mining techniques and makes three main contributions. First, it can extract information from text data without a pre-designed questionnaire. Second, no prior knowledge is needed to extract the information. Finally, the method provides a quantitative sentiment score that can be used in decision-making.
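The quantitative sentiment score the abstract mentions can be illustrated with a minimal lexicon-based sketch. The word lists and reviews below are made-up placeholders; a real system for this domain would use a published sentiment dictionary and tokenizer for Chinese reviews.

```python
# Hypothetical mini-lexicon; stand-in for a real sentiment dictionary.
POSITIVE = {"good", "great", "smooth", "love"}
NEGATIVE = {"bad", "sticky", "irritating", "hate"}

def sentiment_score(review: str) -> float:
    """Return a score in [-1, 1]: (pos - neg) / matched terms."""
    tokens = [t.strip(".,!?") for t in review.lower().split()]
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    matched = pos + neg
    return 0.0 if matched == 0 else (pos - neg) / matched

reviews = [
    "love the smooth texture, great product",
    "sticky, irritating, and a bad purchase",
]
scores = [sentiment_score(r) for r in reviews]
print(scores)  # first review scores positive, second negative
```

A score like this can then feed directly into ranking or decision rules, which is the point the abstract makes about decision-making.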

Sign2Gloss2Text-based Sign Language Translation with Enhanced Spatial-temporal Information Centered on Sign Language Movement Keypoints (수어 동작 키포인트 중심의 시공간적 정보를 강화한 Sign2Gloss2Text 기반의 수어 번역)

  • Kim, Minchae;Kim, Jungeun;Kim, Ha Young
    • Journal of Korea Multimedia Society
    • /
    • v.25 no.10
    • /
    • pp.1535-1545
    • /
    • 2022
  • Sign language can have a completely different meaning depending on the direction of the hands or a change in facial expression, even for the same gesture. It is therefore crucial to capture the spatial-temporal structure of each movement. However, sign language translation studies based on Sign2Gloss2Text convey only comprehensive spatial-temporal information about the entire movement, so the detailed information (facial expressions, gestures, etc.) of each movement that is important for translation is not emphasized. Accordingly, this paper proposes Spatial-temporal Keypoints Centered Sign2Gloss2Text Translation (STKC-Sign2Gloss2Text) to supplement the sequential and semantic information of keypoints, which are the core of recognizing and translating sign language. STKC-Sign2Gloss2Text consists of two steps: Spatial Keypoints Embedding, which extracts 121 major keypoints from each image, and Temporal Keypoints Embedding, which emphasizes sequential information by running a Bi-GRU over the extracted keypoints. The proposed model outperformed the Sign2Gloss2Text baseline on all Bilingual Evaluation Understudy (BLEU) scores on both the Development (DEV) and Testing (TEST) sets; in particular, it achieved a TEST BLEU-4 of 23.19, an improvement of 1.87, demonstrating the effectiveness of the proposed methodology.
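The Temporal Keypoints Embedding step relies on a Bi-GRU. As a toy illustration of what a GRU does to a keypoint trajectory, here is a single-unit GRU cell in pure Python run forward and backward over a short 1-D sequence; the weights are illustrative constants, not the paper's trained parameters, and a real model would use 121 multi-dimensional keypoints per frame.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative scalar weights for a single-unit GRU cell.
W_Z, U_Z, W_R, U_R, W_H, U_H = 0.5, 0.5, 0.5, 0.5, 1.0, 1.0

def gru_step(x: float, h: float) -> float:
    z = sigmoid(W_Z * x + U_Z * h)               # update gate
    r = sigmoid(W_R * x + U_R * h)               # reset gate
    h_tilde = math.tanh(W_H * x + U_H * r * h)   # candidate state
    return (1 - z) * h + z * h_tilde

# Forward pass over a toy keypoint trajectory, then a backward pass
# over the reversed sequence -- the "Bi" in Bi-GRU.
seq = [0.1, 0.4, -0.2, 0.3]
h_fwd = h_bwd = 0.0
for x in seq:
    h_fwd = gru_step(x, h_fwd)
for x in reversed(seq):
    h_bwd = gru_step(x, h_bwd)
temporal_embedding = (h_fwd, h_bwd)  # concatenated in a real model
print(temporal_embedding)
```

The two final hidden states summarize the sequence from both directions, which is what lets the model emphasize order-dependent movement information.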

Text Summarization on Large-scale Vietnamese Datasets

  • Ti-Hon, Nguyen;Thanh-Nghi, Do
    • Journal of information and communication convergence engineering
    • /
    • v.20 no.4
    • /
    • pp.309-316
    • /
    • 2022
  • This investigation addresses automatic text summarization on large-scale Vietnamese datasets. Vietnamese articles were collected from newspaper websites and plain text was extracted to build a dataset of 1,101,101 documents. A new single-document extractive text summarization model was then proposed and evaluated on this dataset. In this model, the k-means algorithm clusters the sentences of the input document using different text representations, such as BoW (bag-of-words), TF-IDF (term frequency-inverse document frequency), Word2Vec, GloVe, and FastText. The summarization algorithm then uses the trained k-means model to rank the candidate sentences and builds the summary from the highest-ranked ones. The empirical F1-scores were 51.91% ROUGE-1, 18.77% ROUGE-2, and 29.72% ROUGE-L, compared to 52.33% ROUGE-1, 16.17% ROUGE-2, and 33.09% ROUGE-L for a competitive abstractive model. The advantage of the proposed model is that it performs well with time complexity O(n,k,p) = O(n(k+2/p)) + O(n log₂ n) + O(np) + O(nk²) + O(k).
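The ranking idea can be sketched with a simplified, single-centroid variant of the paper's pipeline: build TF-IDF vectors per sentence and rank sentences by cosine similarity to the document centroid (a one-cluster stand-in for the full k-means step). The sentences and smoothing choices below are illustrative, not the paper's.

```python
import math
from collections import Counter

def tf_idf_vectors(sentences):
    """TF-IDF vectors, treating each sentence as a 'document'."""
    docs = [s.lower().split() for s in sentences]
    n = len(docs)
    df = Counter(w for d in docs for w in set(d))
    vocab = sorted(df)
    idf = {w: math.log(n / df[w]) + 1.0 for w in vocab}  # smoothed IDF
    return [[Counter(d)[w] / len(d) * idf[w] for w in vocab] for d in docs]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def summarize(sentences, k=1):
    """Keep the k sentences closest to the document centroid."""
    vecs = tf_idf_vectors(sentences)
    dim = len(vecs[0])
    centroid = [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
    ranked = sorted(range(len(sentences)),
                    key=lambda i: cosine(vecs[i], centroid), reverse=True)
    return [sentences[i] for i in ranked[:k]]

sentences = ["the cat sat on the mat",
             "dogs chase the cat",
             "stocks fell sharply today"]
summary = summarize(sentences, k=1)
print(summary)
```

The paper's model replaces the single centroid with k-means clusters and ranks candidates against the trained cluster model, but the select-by-similarity mechanism is the same.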

An Enhanced Text Mining Approach using Ensemble Algorithm for Detecting Cyber Bullying

  • Z.Sunitha Bai;Sreelatha Malempati
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.5
    • /
    • pp.1-6
    • /
    • 2023
  • Text mining (TM), also called text classification, is widely used to process unstructured text documents and the data present in various domains. It is popular in areas such as movie reviews, product reviews on e-commerce websites, sentiment analysis, topic modeling, and cyber-bullying detection in social media messages. Cyber-bullying is the abuse of someone with insulting language; personal abuse, sexual harassment, and other forms of abuse fall under it. Several existing systems detect bullying words based on their context in social networking sites (SNS), which have become a platform for bullying. In this paper, an Enhanced Text Mining approach using an Ensemble Algorithm (ETMA) is developed to solve several problems of traditional algorithms and to improve the accuracy, processing time, and quality of the results. ETMA analyzes bullying text within SNS such as Facebook and Twitter. It is applied to a synthetic dataset collected from various sources, consisting of 5k messages labeled as bullying or non-bullying. Performance is analyzed in terms of precision, recall, F1-score, and accuracy.
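The ensemble idea can be sketched as a majority vote over simple detectors. The three toy "classifiers" below (an insult-keyword rule, an all-caps rule, and a second-person-plus-insult rule) are illustrative stand-ins for the learners the paper's ETMA combines, and the word lists are made up.

```python
# Illustrative insult lexicon (hypothetical).
INSULTS = {"stupid", "loser", "ugly"}

def keyword_clf(msg: str) -> bool:
    return any(w in INSULTS for w in msg.lower().split())

def shouting_clf(msg: str) -> bool:
    letters = [c for c in msg if c.isalpha()]
    return bool(letters) and sum(c.isupper() for c in letters) / len(letters) > 0.6

def second_person_clf(msg: str) -> bool:
    toks = msg.lower().split()
    return "you" in toks and any(w in INSULTS for w in toks)

def ensemble_predict(msg: str) -> bool:
    """Flag the message if at least two of three detectors agree."""
    votes = [keyword_clf(msg), shouting_clf(msg), second_person_clf(msg)]
    return sum(votes) >= 2

print(ensemble_predict("YOU ARE SO STUPID"))  # flagged by all three
print(ensemble_predict("see you at lunch"))   # flagged by none
```

Voting lets weak individual rules cover each other's blind spots, which is the accuracy argument the abstract makes for the ensemble.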

YOLO, EAST : Comparison of Scene Text Detection Performance, Using a Neural Network Model (YOLO, EAST: 신경망 모델을 이용한 문자열 위치 검출 성능 비교)

  • Park, Chan Yong;Lim, Young Min;Jeong, Seung Dae;Cho, Young Heuk;Lee, Byeong Chul;Lee, Gyu Hyun;Kim, Jin Wook
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.3
    • /
    • pp.115-124
    • /
    • 2022
  • In this paper, YOLO and EAST models are tested to analyze their performance in detecting text areas in real-world and ordinary text images. Earlier YOLO models, including YOLOv3, were known to underperform at detecting text areas in images, but the recently released YOLOv4 and YOLOv5 achieved promising performance on various images. The experimental results suggest that both YOLOv4 and YOLOv5 can be widely used for text detection in the field of scene text recognition in the future.

Applying Academic Theory with Text Mining to Offer Business Insight: Illustration of Evaluating Hotel Service Quality

  • Choong C. Lee;Kun Kim;Haejung Yun
    • Asia pacific journal of information systems
    • /
    • v.29 no.4
    • /
    • pp.615-643
    • /
    • 2019
  • Now is the time for IS scholars to demonstrate the added value of academic theory through its integration with text mining, to clearly outline how text mining experts outside academia can implement it, and to move towards establishing this integration as a standard practice. In this study, we therefore develop a systematic theory-based text-mining framework (TTMF) and illustrate its use and benefits through a text-mining project in an actual business case: evaluating and improving hotel service quality using a large volume of real user-generated reviews. A total of 61,304 sentences extracted from customer reviews were successfully allocated to SERVQUAL dimensions, and the pragmatic validity of the model was tested by OLS regression between the sentiment scores of each SERVQUAL dimension and customer satisfaction (star ratings), which showed significant relationships. As a post-hoc analysis, a co-occurrence analysis was conducted to identify the root causes of positive and negative service quality perceptions and to provide action plans for implementing improvements.
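The validity test is an ordinary least squares fit of star ratings on dimension-level sentiment scores. A minimal sketch with made-up numbers (the real study regresses 61,304-sentence-derived scores per SERVQUAL dimension):

```python
# Hypothetical per-review sentiment scores for one SERVQUAL dimension
# and the corresponding star ratings.
sentiment = [0.9, 0.4, -0.2, 0.7, -0.6]
stars = [5, 4, 2, 5, 1]

n = len(sentiment)
mx = sum(sentiment) / n
my = sum(stars) / n
# Closed-form simple OLS: slope = cov(x, y) / var(x).
beta = (sum((x - mx) * (y - my) for x, y in zip(sentiment, stars))
        / sum((x - mx) ** 2 for x in sentiment))
alpha = my - beta * mx
print(f"stars ~ {alpha:.2f} + {beta:.2f} * sentiment")
```

A significantly positive slope is what the study reports: reviews with more positive dimension sentiment come with higher star ratings.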

A Study on the Government Full-text Information Disclosure System through the Survey on the Government Officials' Perceptions (원문정보 공개제도에 대한 공무원들의 인식조사 연구)

  • Jang, Bo-Seong
    • Journal of Korean Library and Information Science Society
    • /
    • v.47 no.1
    • /
    • pp.339-360
    • /
    • 2016
  • This study analyzes the actual operation of the government's full-text information disclosure system and public officials' perceptions of it. According to the results, public servants' awareness of the full-text disclosure system was high. Regarding its positive and negative functions, expectations for the positive function were high in terms of ensuring administrative transparency, while public servants worried about an increased administrative workload. Among the factors hindering the development of full-text disclosure, clients' abuse and misuse of full-text information ranked highest. To activate the system, measures to prevent such abuse and misuse must be prepared.

Overlay Text Graphic Region Extraction for Video Quality Enhancement Application (비디오 품질 향상 응용을 위한 오버레이 텍스트 그래픽 영역 검출)

  • Lee, Sanghee;Park, Hansung;Ahn, Jungil;On, Youngsang;Jo, Kanghyun
    • Journal of Broadcast Engineering
    • /
    • v.18 no.4
    • /
    • pp.559-571
    • /
    • 2013
  • This paper presents some problems that arise when 2D video with superimposed overlay text is converted to 3D stereoscopic video. To resolve them, it proposes a scenario in which the original video is divided into two parts, one containing only the overlay text graphic region and the other the video with holes, which are then processed separately. This paper focuses only on detecting and extracting the overlay text graphic region, the first step of the proposed scenario. To decide whether overlay text is present in a frame, a corner density map based on the Harris corner detector is used. The overlay text region is then extracted using a hybrid of the color and motion information of that region. The experiments show the results of the detection and extraction process on video sequences from several genres.
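The corner density map can be sketched as a block-wise histogram of corner locations. The corner coordinates below are hard-coded toy points standing in for Harris detector output; overlay text tends to produce dense corner clusters, which is what the threshold picks out.

```python
# Frame and block sizes in pixels (illustrative values).
FRAME_W, FRAME_H, BLOCK = 120, 80, 40
# Toy corner points: a dense cluster (text-like) plus one stray corner.
corners = [(5, 5), (12, 8), (18, 6), (25, 11), (31, 7),
           (90, 70)]

cols, rows = FRAME_W // BLOCK, FRAME_H // BLOCK
density = [[0] * cols for _ in range(rows)]
for x, y in corners:
    density[y // BLOCK][x // BLOCK] += 1

# Blocks whose corner count exceeds a threshold are candidate
# overlay-text regions.
THRESHOLD = 3
text_blocks = [(r, c) for r in range(rows) for c in range(cols)
               if density[r][c] > THRESHOLD]
print(text_blocks)  # only the block holding the dense cluster
```

In the paper these candidate blocks are then refined with the color and motion cues of the overlay text region.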

Arabic Words Extraction and Character Recognition from Picturesque Image Macros with Enhanced VGG-16 based Model Functionality Using Neural Networks

  • Ayed Ahmad Hamdan Al-Radaideh;Mohd Shafry bin Mohd Rahim;Wad Ghaban;Majdi Bsoul;Shahid Kamal;Naveed Abbas
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.7
    • /
    • pp.1807-1822
    • /
    • 2023
  • Innovation and the rapidly increasing functionality of user-friendly smartphones have encouraged shutterbugs to capture picturesque image macros at work or while traveling. Formal signboards are placed for marketing purposes and are enriched with text designed to attract people. Extracting and recognizing text from natural images is an emerging research issue that needs consideration. Compared to conventional optical character recognition (OCR), the complex backgrounds, implicit noise, lighting, and orientation of these scene-text photos make the problem more difficult, and Arabic text adds a number of further complications. The method described in this paper uses a two-phase approach to extract Arabic text, with word-boundary awareness, from scene images with varying text orientations. The first phase uses a convolutional autoencoder, and the second uses Arabic Character Segmentation (ACS) followed by a traditional two-layer neural network for recognition. The study also presents how Arabic training and synthetic datasets can be created to exemplify superimposed text in different scene images: a dataset of 10k cropped images containing Arabic text was created for the detection phase, and a 127k Arabic character dataset for the recognition phase. The phase-1 labels were generated from an Arabic corpus of 15k quotes and sentences. The proposed Arabic Word Awareness Region Detection (AWARD) approach is highly flexible in identifying complex Arabic scene text, such as text that is arbitrarily oriented, curved, or deformed. Our experiments show that the system achieves 91.8% word segmentation accuracy and 94.2% character recognition accuracy. We believe that future researchers will excel at improving or reducing noise when processing scene-text images in any language by enhancing the functionality of the VGG-16-based model using neural networks.

An Ensemble Approach for Cyber Bullying Text messages and Images

  • Zarapala Sunitha Bai;Sreelatha Malempati
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.11
    • /
    • pp.59-66
    • /
    • 2023
  • Text mining (TM) is widely used to find patterns in text documents. Cyber-bullying is the use of an online or offline platform to abuse a person, and nowadays it has become more dangerous for people who use social networking sites (SNS). It takes many forms, such as text messages, morphed images, and morphed videos, and preventing this type of abuse on online SNS is very difficult. Finding accurate text mining patterns gives better results in detecting cyber-bullying on any platform. Cyber-bullying develops within online SNS when users send defamatory statements, verbally bully other persons, or use the platform to abuse someone in front of other SNS users. Deep learning (DL) is a significant domain used to dynamically extract and learn quality features from low-level text. In this scenario, convolutional neural networks (CNNs) are used for training on text data, images, and videos; CNN is a very powerful approach for these types of data and achieves better text classification. In this paper, an ensemble model is introduced that integrates term frequency-inverse document frequency (TF-IDF) and a deep neural network (DNN) with advanced feature-extraction techniques to classify bullying text, images, and videos. The proposed approach also focuses on reducing training time and memory usage, which helps improve classification.
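Combining a TF-IDF text branch with a DNN branch for images and videos amounts to late fusion of per-branch scores. A minimal sketch, assuming each branch outputs a bullying probability in [0, 1]; the scores, weights, and threshold below are made-up placeholders, not the paper's trained values.

```python
def fuse(text_score: float, image_score: float,
         w_text: float = 0.6, w_image: float = 0.4) -> float:
    """Weighted average of per-branch bullying probabilities."""
    return w_text * text_score + w_image * image_score

def classify(text_score: float, image_score: float,
             threshold: float = 0.5) -> bool:
    """Flag the post when the fused score clears the threshold."""
    return fuse(text_score, image_score) > threshold

print(classify(0.9, 0.2))  # confident text branch -> flagged
print(classify(0.1, 0.3))  # both branches low -> not flagged
```

The branch weights are a design choice: weighting the text branch higher reflects that most bullying signal in SNS posts is textual, while the image branch still contributes for morphed-media cases.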