• Title/Summary/Keyword: Social Media Text


The Method for Real-time Complex Event Detection of Unstructured Big data (비정형 빅데이터의 실시간 복합 이벤트 탐지를 위한 기법)

  • Lee, Jun Heui;Baek, Sung Ha;Lee, Soon Jo;Bae, Hae Young
    • Spatial Information Research
    • /
    • v.20 no.5
    • /
    • pp.99-109
    • /
    • 2012
  • Recently, the growth of social media and the spread of smartphones have sharply increased the volume of data generated through SNS (Social Network Services). This gave rise to the concept of Big Data, and many researchers are seeking ways to make the best use of it. To maximize the creative value of the big data held by companies, it must be combined with existing data; because the physical and logical storage structures of these data sources differ, a system that can integrate and manage them is needed. MapReduce was developed to process big data quickly through distributed processing, but it is impractical to build and store structures for every keyword, and the store-then-search cycle makes real-time processing difficult. Processing complex events over heterogeneous data without a unifying structure also incurs extra cost. To solve this problem, an existing Complex Event Processing (CEP) system can be used: a CEP system receives data from different sources and combines them, enabling complex event processing that is especially useful for real-time stream data. Nevertheless, unstructured data such as SNS text and Internet articles is managed as plain strings, so every query requires string comparison, which results in poor performance. We therefore extend a CEP system to manage unstructured data and process queries quickly: the data-combination function is extended to give string data a logical schema by filtering against a keyword set and converting string keywords into integer values. In addition, by processing stream data in memory in real time within the CEP system, we reduce the time spent reading data for query processing after it has been stored on disk.
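The keyword-to-integer conversion described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the keyword set, its integer codes, and the sample events are invented, and tokenization is simplified to whitespace splitting.

```python
# Hypothetical sketch: map string keywords to integer codes so a CEP engine
# can compare integers instead of strings on every event.
KEYWORD_SET = {"storm": 0, "flood": 1, "outage": 2}  # assumed keyword set

def encode_event(text):
    """Return integer codes for registered keywords found in the text,
    filtering out all other tokens."""
    return [KEYWORD_SET[t] for t in text.lower().split() if t in KEYWORD_SET]

events = ["Storm warning issued", "concert tickets on sale", "flood near river"]
encoded = [encode_event(e) for e in events]
# events containing no registered keyword produce an empty list and are dropped
filtered = [codes for codes in encoded if codes]
print(filtered)  # [[0], [1]]
```

Downstream pattern matching can then operate on the integer streams, avoiding repeated string comparison during query processing.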

A Study on the Visual Image and Verbal Texts in Television Public Service Advertising (TV공익광고에 나타난 영상이미지와 언어에 관한 연구)

  • Shin, In-Sik
    • Archives of design research
    • /
    • v.18 no.2 s.60
    • /
    • pp.111-122
    • /
    • 2005
  • Public Service Advertising (PSA) is an integrated marketing concept, encompassing strategy and technology in all their aspects, that intentionally and purposefully pursues social change and seeks agreement among members of the community. Free of commercial intent, PSA reflects current social trends and subjects because it is focused on social issues. PSA plays an important role in creating future cultural value, and it also carries present cultural values into its advertising messages. In this respect, this study is valuable for planning more effective advertising management and for analyzing the communication strategy of PSA. The study clarifies the nature of PSA by analyzing the visual images and linguistic elements of actually produced and broadcast TV advertisements on the theme 'Protection of the Environment'. The results show that environmental PSAs of the 1980s used language-centered persuasive messages, with visuals in a supporting role, intended to educate and instruct consumers; accordingly, they were non-narrative, without a story line or a hero (heroine). In contrast, from the 1990s onward, PSAs became image-centered and maximized the effectiveness of public campaigns by activating consumers' judgment and involvement. Delivering the message visually through a story format evidently helps engage and persuade consumers. The relationship between visual image and language examined here is a common trend across all of today's media, including advertising, and the findings may help set a proper direction for PSA campaigns.


A Study on AI Evolution Trend based on Topic Frame Modeling (인공지능발달 토픽 프레임 연구 -계열화(seriation)와 통합화(skeumorph)의 사회구성주의 중심으로-)

  • Kweon, Sang-Hee;Cha, Hyeon-Ju
    • The Journal of the Korea Contents Association
    • /
    • v.20 no.7
    • /
    • pp.66-85
    • /
    • 2020
  • The purpose of this study is to explain and predict trends in the AI development process, based on AI technology patents and on AI reporting frames in major newspapers. To that end, the abstracts of South Korean and U.S. AI technology patents filed over the past nine years and the AI news texts of major Korean newspapers were analyzed. The study used big-data topic modeling and time-series regression analysis, supplemented by network agenda correlation and regression techniques. First, the patent abstracts were dominated, in order, by artificial intelligence, algorithms, and 5G (prominent AI technologies), while news reports emphasized, in order, AI's industrial applications and its application to data-analysis markets, indicating how reporting frames AI's social and cultural role. Second, the time-series regression showed that topics on the social and cultural use of AI and the onset of industrial application were rising, while topics centered on system and hardware technology were declining. Third, QAP analysis of the correlation and regression relationships showed a strong correlation between AI technology patents and news reporting frames. This suggests that, in AI development, technology patents and news reporting frames have tended to be socially constructed by the determinants of media discourse.
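The rising/falling-topic classification described above can be illustrated with a minimal trend fit. This is a sketch under invented assumptions: the topic names and yearly proportions are made up, and a least-squares slope stands in for the paper's full time-series regression.

```python
# Classify topics as rising or falling by fitting a least-squares slope
# to their yearly proportions (illustrative data, not the paper's results).
def slope(ys):
    """Least-squares slope of ys against 0..n-1."""
    n = len(ys)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )

topic_shares = {
    "industrial application": [0.10, 0.14, 0.19, 0.25],  # assumed proportions
    "hardware technology":    [0.30, 0.26, 0.21, 0.18],
}
trend = {t: ("rising" if slope(v) > 0 else "falling")
         for t, v in topic_shares.items()}
print(trend)
```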

A Study on the Changes in Perspectives on Unwed Mothers in S.Korea and the Direction of Government Polices: 1995~2020 Social Media Big Data Analysis (한국미혼모에 대한 관점 변화와 정부정책의 방향: 1995년~2020년 소셜미디어 빅데이터 분석)

  • Seo, Donghee;Jun, Boksun
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.12
    • /
    • pp.305-313
    • /
    • 2021
  • This study collected and analyzed big data from 1995 to 2020 for the keywords "unwed mother," "single mother," and "single mom" in order to propose government support policies appropriate to changing perspectives on unwed mothers. The big-data collection platform Textom was used to collect and refine data from the portal search sites Naver and Daum. The refined data were subjected to the word frequency analysis, TF-IDF analysis, and N-gram analysis provided by Textom; network analysis and CONCOR analysis were then conducted with the UCINET6 program. Word frequency and TF-IDF analyses yielded similar words, but these differed by year. The N-gram analysis showed similarities in which words appeared, but many differences in the frequency and form of consecutive word sequences. The CONCOR analysis revealed that different clusters formed in different years. By confirming through big-data analysis how perspectives on unwed mothers have changed, this study argues for policies that give unwed mothers a range of options for independent living, and for policies that embrace pregnancy, childbirth, and parenting without discrimination within new family forms.
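The N-gram step mentioned above amounts to counting sequences of adjacent words. A minimal bigram sketch follows; the sample posts are invented for illustration, and Textom's actual tokenization of Korean text is of course more involved.

```python
# Count adjacent word pairs (bigrams) in tokenized posts, the essence of
# an N-gram analysis. Posts are invented English stand-ins.
from collections import Counter

posts = [
    "unwed mother support policy",
    "unwed mother childcare support",
]
bigrams = Counter()
for post in posts:
    tokens = post.split()
    bigrams.update(zip(tokens, tokens[1:]))  # consecutive pairs
print(bigrams.most_common(2))
```

Comparing such counts year by year is what reveals the shifts in frequency and form of the word sequences the study reports.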

Analyzing the Factors of Gentrification After Gradual Everyday Recovery

  • Yoon-Ah Song;Jeongeun Song;ZoonKy Lee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.8
    • /
    • pp.175-186
    • /
    • 2023
  • In this paper, we build a gentrification analysis model and examine its characteristics, focusing on the point at which rents rose sharply alongside the recovery of commercial districts after the gradual resumption of daily life. Recently in Korea, the social distancing measures of the pandemic years have favored the formation of small commercial districts, known as 'hot places', rather than large ones. These hot places attract customers effectively by leveraging various media and social networking services. As the floating population grows, a district becomes active and rents surge. For small business owners, however, a sudden rent increase that outpaces even rising sales can force them out of the area, producing gentrification. In this study, we therefore identify the points at which rents rise sharply as districts are revitalized, and analyze the periods before and after. First, we collect text data and explore topics related to gentrification using LDA topic modeling. Based on this, we gather data at the commercial-district level and build a gentrification analysis model to examine its characteristics. We hope that analyzing gentrification with this model, at a time when commercial districts are recovering from the challenges of the pandemic, can contribute to policies supporting small businesses.
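The change-point idea above, flagging the period where rent jumps most sharply, can be sketched directly. This is a hypothetical illustration: the quarterly rent index is invented, and the paper's model presumably uses richer criteria than a single largest increase.

```python
# Flag the period with the sharpest period-over-period rent increase as a
# candidate gentrification onset (invented quarterly rent-index figures).
rents = [100, 102, 103, 118, 121]  # assumed rent index per quarter

increases = [(i, b - a)
             for i, (a, b) in enumerate(zip(rents, rents[1:]), start=1)]
onset, jump = max(increases, key=lambda p: p[1])
print(f"sharpest rise at period {onset} (+{jump})")
```

The periods before and after `onset` would then be compared to characterize the district, as the study describes.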

Urban Landscape Image Study by Text Mining and Factor Analysis - Focused on Lotte World Tower - (텍스트 마이닝과 인자분석에 의한 도시경관이미지 연구 - 롯데월드타워를 대상으로 -)

  • Woo, Kyung-Sook;Suh, Joo-Hwan
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.45 no.4
    • /
    • pp.104-117
    • /
    • 2017
  • This study compares the results of landscape image analysis using text mining and factor analysis for Lotte World Tower, the first atypical skyscraper in Korea, and identifies the site's landscape images to assess the potential of these methods. Text mining extracted the Tower's landscape image chiefly through adjectives such as 'new', 'transformational', 'unusual', 'novel', 'impressive', and 'unique', and through terms reflecting the ongoing process of change, people's activities (outings, projects, night views), media (newspapers, blogs), and climate (weather, seasons). Factor analysis showed that the factors shaping the Tower's landscape image were symbolism, aesthetics, and form. Identity, a morphological feature, has the characteristics of scale and visibility but was not statistically significant for preference. Rather, psychological factors influenced the landscape image: symbolism, with characteristics such as distinctiveness and specialty; harmony with the surrounding environment; and beautiful aesthetic qualities. The common finding of the two methods is that psychological characteristics, such as factors that can represent the city, affect the landscape image more strongly than morphological and physical characteristics such as the building's size and location. Moreover, text mining can identify the nouns and adjectives corresponding to the images people see and feel, and can reveal the relationships among the derived keywords, so it can trace the process by which a landscape image, and ultimately the image of a city, is formed. It therefore appears to be a suitable method for complementing the limitations of landscape research.
This study is meaningful in that it confirms that big data can be utilized in landscape analysis, one research field of landscape architecture, and it contributes to understanding big-data-based information and to enlarging the scope of landscape research.
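The adjective-extraction step above reduces, at its core, to counting target words across texts. The following sketch uses invented review snippets and a hand-picked adjective list; the study's actual pipeline works on Korean text with proper morphological analysis.

```python
# Count occurrences of target landscape adjectives across texts,
# a minimal stand-in for the text-mining extraction step.
from collections import Counter

ADJECTIVES = {"new", "unusual", "impressive", "unique"}  # assumed targets
texts = ["a new and impressive tower", "unique skyline, new landmark"]
counts = Counter(
    w for t in texts
    for w in t.replace(",", " ").split()
    if w in ADJECTIVES
)
print(counts)
```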

Animation Education as VCAE in the Digital Age (시각문화교육과 디지털 미디어 시대의 애니메이션 교육의 방향)

  • Park, Yoo Shin
    • Cartoon and Animation Studies
    • /
    • s.35
    • /
    • pp.29-65
    • /
    • 2014
  • Visual culture art education (VCAE) has emerged as the new paradigm for art education after postmodernism. Moving beyond traditional art education, VCAE has expanded its scope of interest to the visual environment that surrounds our lives, pushing the boundary of art education beyond the traditional fine arts to cover popular culture and visual art. VCAE shares many issues and elements with culture and art education, and in fact serves as a major theoretical background for it, in that it attends to the sociocultural context of images and emphasizes visual literacy and constructionist learning. In this paper, I review the theoretical background and related issues of VCAE in order to propose a direction for animation education, which is gaining importance in the age of digital media. VCAE was born of the progressive cultural atmosphere of the 1970s and after, and its gist lies in understanding visual artifacts and how they act, in order to improve individual and social life. VCAE continues to develop along with the changing visual culture, and it is currently expanding its scope to aesthetic, experiential education in visual culture and to the construction of meaning through digital storytelling. In the visual environment of the digital age, animation is establishing itself at the center of visual culture, as a form that goes beyond a single art genre or technology and realizes images across the whole of visual culture. VCAE, which has so far emphasized visual communication and critical reading of culture, therefore needs to reflect the new aspects of visual culture found in digital animation, across the entire range of art education from experiencing to understanding and appreciating.
In this paper, I emphasize cross-curricular teaching, social reconstruction, the expansion of animation education, attention to animation as a digital medium, and animation literacy. Studying animation education from the perspective of VCAE will not only provide a theoretical basis for establishing animation education, but also enrich the content of VCAE, traditionally focused on critical text reading, and strengthen its contemporary and future orientation.

Sentiment Analysis of Movie Reviews Using an Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.141-154
    • /
    • 2019
  • Internet technology and social media are growing rapidly, and data-mining technology has evolved to enable the representation of unstructured documents in a variety of applications. Sentiment analysis, which can distinguish low- from high-quality content through the text data about products, is an important technology that has proliferated within text mining. It analyzes people's opinions in text data, typically by assigning predefined categories such as positive and negative, and it has been studied from many angles in terms of accuracy, from simple rule-based methods to dictionary-based approaches using predefined labels. Sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Online reviews are easy to collect openly, and they directly affect business: in marketing, real-world customer information is gathered from websites rather than surveys, and whether posts are positive or negative is reflected in sales. However, many website reviews are noisy and difficult to classify. Earlier studies in this area used review data from the Amazon.com shopping mall, while more recent studies have used data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, and so on. Accuracy remains a recognized problem, because sentiment scores change with the subject, the paragraph, the direction of the sentiment lexicon, and sentence strength. This study aims to classify sentiment polarity into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the IMDB review data set.
First, the text classification adopts popular machine learning algorithms as comparative models: NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting. Second, deep learning can extract complex, discriminative features from data; representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). A CNN can process a sentence in vector form much like a bag-of-words model, but it does not consider the sequential nature of the data. An RNN handles order well because it takes the temporal information of the data into account, but it suffers from the long-term dependency problem; LSTM was designed to solve this. For comparison, CNN and LSTM were chosen as the simple deep-learning models, and in addition to the classical machine-learning algorithms, CNN, LSTM, and their integrated model were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and we investigated how well and why these models work for sentiment analysis. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features of text. The reasons for combining the two algorithms are as follows. A CNN can extract classification features automatically through its convolution layers and massively parallel processing, whereas an LSTM is not highly parallelizable. Like a faucet, an LSTM has input, output, and forget gates that can be opened and closed at the desired time, with the advantage of placing memory blocks on hidden nodes. An LSTM memory block cannot store all the data, but it can compensate for the CNN's inability to model long-term dependencies.
Furthermore, when an LSTM is attached after the CNN's pooling layer, the network has an end-to-end structure in which spatial and temporal features can be learned simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than the CNN alone but faster than the LSTM alone, and it was more accurate than the other models. In addition, the word-embedding layer can be improved as the kernels are trained step by step. CNN-LSTM compensates for the weaknesses of each model and benefits from layer-by-layer learning within the end-to-end structure. For these reasons, this study seeks to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
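The division of labor described above, convolution extracting local n-gram features whose ordered sequence a recurrent layer then reads, can be shown with a toy 1-D convolution. All numbers here are invented, the embeddings are scalar for brevity, and this is not the paper's actual network.

```python
# Toy sketch: a 1-D convolution slides a small filter over an embedded
# sentence, producing local (bigram-like) features that keep their order,
# so a recurrent layer could consume them step by step afterwards.
def conv1d(seq, kernel):
    """Valid-mode 1-D convolution (cross-correlation) of seq with kernel."""
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]

embedded = [0.1, 0.4, -0.2, 0.3, 0.5]       # assumed 1-D word embeddings
features = conv1d(embedded, kernel=[0.5, 0.5])  # width-2 filter ~ bigram mean
print([round(f, 2) for f in features])      # [0.25, 0.1, 0.05, 0.4]
```

In the real model each position would carry a full embedding vector and many filters, and the feature sequence would feed an LSTM layer rather than being printed.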

Stock Price Prediction by Utilizing Category Neutral Terms: Text Mining Approach (카테고리 중립 단어 활용을 통한 주가 예측 방안: 텍스트 마이닝 활용)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.123-138
    • /
    • 2017
  • Since the stock market is driven by traders' expectations, studies have attempted to predict stock price movements by analyzing various sources of text data: not only the relationship between text data and price fluctuations, but also stock trading based on news articles and social-media responses. Such studies typically construct a term-document matrix and apply classification algorithms, as in other text-mining approaches. Because documents contain many words, it is better to select the words that contribute most when building the matrix: words with very low frequency or importance are removed, and words are chosen according to how much they contribute to correctly classifying a document. The conventional approach collects all the documents to be analyzed and selects the words that influence classification. In this study, by contrast, we analyze the documents for each individual stock and select words that are irrelevant to all categories as neutral words. We then extract the words surrounding each neutral word and use them to build the term-document matrix. The underlying idea is that the neutral word itself is weakly related to stock movements, while the words around it are more likely to affect them. The resulting matrix is fed into an algorithm that classifies stock price fluctuations. Concretely, we first removed stop words and selected neutral words for each stock, and then excluded from the selection any words that also appeared in news articles about other stocks. Through an online news portal, we collected four months of news articles on the top 10 stocks by market capitalization. The market was open for a total of 80 days over the four months (2016/02/01 - 2016/05/31); we used the first three months (60 trading days) of articles as training data and applied the remaining month (20 trading days) to the model to predict the next day's price movements. SVM, boosting, and random forests were used to build the models and predict the movements. The word-selection algorithm proposed in this study showed better classification performance than word selection based on sparsity. The proposed method differs from conventional word extraction in that it uses not only the news articles for the target stock but also news about other stocks to decide which words to remove: it removed both the words that appeared in every rise and fall and the words common to news about other stocks. When prediction accuracy was compared, the proposed method was more accurate. The limitations of this study are that price prediction was framed as classifying rises and falls, and that the experiment covered only the top ten stocks, which do not represent the entire market. It is also difficult to demonstrate investment performance, because price fluctuations and rates of return may differ. Future work should therefore use more stocks and predict returns through trading simulation.
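The neutral-word step above, collecting the words around each occurrence of a neutral word as features, can be sketched as follows. The window size, the neutral word, and the sentence are illustrative assumptions, not values from the paper.

```python
# Collect the words within a small window around each occurrence of a
# neutral word; these context words become term-document-matrix features.
def context_words(tokens, neutral, window=2):
    """Return tokens within `window` positions of each `neutral` occurrence."""
    out = []
    for i, t in enumerate(tokens):
        if t == neutral:
            out.extend(tokens[max(0, i - window):i]
                       + tokens[i + 1:i + 1 + window])
    return out

tokens = "analysts said the company beat earnings expectations again".split()
print(context_words(tokens, "company"))
# ['said', 'the', 'beat', 'earnings']
```

Running this over every article for a stock, with "company" standing in for a selected neutral word, yields the vocabulary from which the study's term-document matrix is built.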

A Methodology for Automatic Multi-Categorization of Single-Categorized Documents (단일 카테고리 문서의 다중 카테고리 자동확장 방법론)

  • Hong, Jin-Sung;Kim, Namgyu;Lee, Sangwon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.77-92
    • /
    • 2014
  • Recently, numerous documents containing unstructured data and text have been created due to the rapid growth of social media and Internet usage. Each document is usually assigned a category for the users' convenience. In the past, this categorization was performed manually, which not only fails to guarantee accuracy but also demands a great deal of time and cost. Many studies have therefore addressed automatic categorization, but most of these methods cannot handle complex documents with multiple topics, because they assume that each document belongs to exactly one category. Some studies have attempted to categorize each document into multiple categories, but their learning process requires training on a multi-categorized document set, so they cannot be applied unless such a training set is available. To remove this requirement of traditional multi-categorization algorithms, we propose a methodology that extends the category of a single-categorized document to multiple categories by analyzing the relationships among categories, topics, and documents. First, we find the relationship between documents and topics from a topic analysis of the single-categorized documents. Second, we construct a correspondence table between topics and categories by investigating their relationship. Finally, we calculate a matching score from each document to each category. A document is classified into a category if and only if its matching score exceeds a predefined threshold; for example, a document may be assigned to the three categories whose matching scores exceed the threshold. The main contribution of this study is that the methodology improves the applicability of traditional multi-category classifiers by generating multi-categorized documents from single-categorized ones. We also propose a module for verifying the accuracy of the methodology. For performance evaluation, we performed extensive experiments with news articles, which are clearly categorized by theme and contain less vulgar language and slang than other everyday text. We collected news articles from July 2012 to June 2013. The number of articles varies widely across categories, because readers' levels of interest and the frequency of events differ by category. To minimize distortion from these differences, we extracted 3,000 articles from each of eight categories, for a total of 24,000 articles. The eight categories were "IT Science," "Economy," "Society," "Life and Culture," "World," "Sports," "Entertainment," and "Politics." Using these articles, we computed document/category correspondence scores from the topic/category and document/topic correspondence scores; the document/category score indicates how strongly a document corresponds to a category. As a result, we could suggest two additional categories for each of 23,089 documents. Precision, recall, and F-score were 0.605, 0.629, and 0.617, respectively, when only the top predicted category was evaluated, and 0.838, 0.290, and 0.431 when the top three predicted categories were considered. Interestingly, the eight categories showed large variation in precision, recall, and F-score.
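The scoring step above, combining document/topic weights with a topic/category correspondence table and thresholding the result, can be sketched with made-up numbers. The topic weights, correspondence values, and threshold are all invented for illustration.

```python
# Combine document/topic weights with a topic/category correspondence table
# into document/category matching scores, then assign every category whose
# score clears a threshold (all numbers invented).
doc_topics = {"t1": 0.6, "t2": 0.3, "t3": 0.1}          # assumed topic weights
topic_cats = {"t1": {"Economy": 0.8, "Politics": 0.2},   # assumed table
              "t2": {"Economy": 0.1, "Politics": 0.9},
              "t3": {"Sports": 1.0}}

scores = {}
for topic, w in doc_topics.items():
    for cat, c in topic_cats[topic].items():
        scores[cat] = scores.get(cat, 0.0) + w * c

THRESHOLD = 0.3  # assumed predefined threshold
assigned = sorted(c for c, s in scores.items() if s >= THRESHOLD)
print(scores, assigned)
```

Here the document, originally single-categorized, ends up assigned to every category whose aggregated score clears the threshold, which is exactly how the methodology extends one category to several.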