• Title/Summary/Keyword: Web Systems


An Efficient Estimation of Place Brand Image Power Based on Text Mining Technology (텍스트마이닝 기반의 효율적인 장소 브랜드 이미지 강도 측정 방법)

  • Choi, Sukjae;Jeon, Jongshik;Subrata, Biswas;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.113-129 / 2015
  • Place branding is an important income-generating activity: it gives special meaning to a specific location and produces identity and communal value grounded in an understanding of the place. Many fields, such as marketing, architecture, and urban planning, contribute to creating an impressive brand image, and a place brand that is well recognized by both Koreans and foreigners creates significant economic effects. Research on building a strategic, detailed place brand image has been carried out, most notably by Anholt, who surveyed two million people from 50 countries. However, such survey-based investigations demand a great deal of labor and expense, so more affordable, objective, and effective research methods are needed. The purpose of this paper is to find a way to measure the intensity of a place brand image objectively and at low cost through text mining. The proposed method extracts keywords and the factors constituting the place brand image from related web documents, and from these measures the brand image intensity of a specific location. The performance of the proposed methodology was verified through comparison with Anholt's city brand index ranking of 50 cities around the world. Four methods were compared. First, the RANDOM method ranks the cities in the experiment artificially. The HUMAN method uses a questionnaire: nine volunteers well acquainted both with brand management and with the cities under evaluation are asked to rank the cities, and their rankings are compared with Anholt's results. The TM method applies the proposed approach to evaluate the cities against all evaluation criteria. TM-LEARN, an extension of TM, selects significant evaluation items from every criterion and evaluates the cities with only the selected items. RMSE is used as the metric to compare the evaluation results. The experimental results are as follows. First, compared with evaluation by ordinary people, the proposed method is more accurate. Second, compared with the traditional survey method, it requires far less time and cost because the process is automated. Third, it is timely, since evaluations can be repeated at any moment. Fourth, whereas Anholt's method evaluates only a pre-specified set of cities, the proposed methodology is applicable to any location. Finally, it offers relatively high objectivity because it is based on open data. As a result, our text mining approach to city image evaluation shows validity in terms of accuracy, cost-effectiveness, timeliness, scalability, and reliability. The proposed method gives managers in the public and private sectors clear guidelines for brand management. In the public sector, local officials could use it to formulate strategies and enhance the image of their places efficiently: rather than conducting heavy questionnaires, they could quickly monitor the current place image and proceed to a formal place image survey only when the results are out of the ordinary, whether they indicate an opportunity or a threat. Moreover, by combining the proposed method with morphological analysis, extraction of meaningful facets of a place brand from text, sentiment analysis, and other techniques, marketing strategists and civil engineering professionals may obtain deeper and richer insights for better place brand images. In the future, a prototype system will be implemented to show the feasibility of the idea proposed in this paper.
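The comparison step described above, scoring each method's city ranking against Anholt's reference using RMSE, can be sketched as follows. This is an illustrative sketch only; the rank vectors and city count are hypothetical, not the paper's data.

```python
import math

def rmse(predicted, reference):
    """Root-mean-square error between two equal-length rank lists."""
    assert len(predicted) == len(reference)
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(predicted, reference))
                     / len(predicted))

# Hypothetical rank vectors for five cities (1 = strongest brand image).
anholt  = [1, 2, 3, 4, 5]   # reference ranking (Anholt-style index)
tm      = [1, 3, 2, 4, 5]   # ranking produced by a TM-like method
random_ = [5, 1, 4, 2, 3]   # artificial RANDOM baseline

print(rmse(tm, anholt))       # low error: close to the reference
print(rmse(random_, anholt))  # high error: far from the reference
```

A lower RMSE means the method's ranking agrees more closely with the reference, which is how the four methods in the paper are compared.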

Building a Korean Sentiment Lexicon Using Collective Intelligence (집단지성을 이용한 한글 감성어 사전 구축)

  • An, Jungkook;Kim, Hee-Woong
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.49-67 / 2015
  • Recently, the emergence of big data and social media has brought about a data big bang. Social networking services are widely used around the world and have become major communication tools for all ages. Over the last decade, as online social networking sites have become increasingly popular, companies have focused on advanced social media analysis for their marketing strategies. Beyond analysis, companies are also concerned about the propagation of negative opinions on social networking sites such as Facebook and Twitter, as well as on e-commerce sites. Online word of mouth (WOM), such as product ratings, reviews, and recommendations, is very influential, and negative opinions have a significant impact on product sales. This trend has drawn researchers' attention to natural language processing, particularly sentiment analysis. Sentiment analysis, also referred to as opinion mining, is the process of identifying the polarity of subjective information, and it has been applied in many research and practical fields. However, obstacles arise when Korean (Hangul) is processed, because it is an agglutinative language with rich morphology. Consequently, Korean natural language processing resources such as sentiment lexicons are scarce, which severely limits researchers and practitioners considering sentiment analysis. Our study builds a Korean sentiment lexicon using collective intelligence and provides an API (Application Programming Interface) service to open and share the lexicon with the public (www.openhangul.com). For pre-processing, we created a Korean lexicon database of 517,178 words and classified them into sentiment and non-sentiment words. To classify them, we first identified stop words, which often play a negative role in sentiment analysis, and excluded them from sentiment scoring. In general, sentiment words are nouns, adjectives, verbs, and adverbs, since these carry positive, neutral, or negative expressions; non-sentiment words are interjections, determiners, numerals, postpositions, and the like, which generally carry no sentiment. To build a reliable sentiment lexicon, we adopted collective intelligence as a crowdsourcing model, and implemented folksonomy in the taxonomy process to support it. To compensate for an inherent weakness of folksonomy, we adopted majority rule by building a voting system. Participants were offered three voting options (positive, negative, or neutral), and the voting was conducted on one of the largest social networking sites for college students in Korea. More than 35,000 votes have been cast by Korean college students, and we keep the voting system open by maintaining the project as a perpetual study. Any change in a word's sentiment score is itself an important observation, because it lets us track temporal changes in Korean as a natural language. Lastly, our study offers a RESTful, JSON-based API service through a web platform to support users such as researchers, companies, and developers. Our study makes contributions to both research and practice. For research, the Korean sentiment lexicon serves as a resource for Korean natural language processing; for practice, managers and marketers can implement sentiment analysis effectively using it. Moreover, our study sheds new light on the value of folksonomy combined with collective intelligence, and we expect it to give a new direction and a new start to the development of Korean natural language processing.
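The majority-rule voting step described above can be sketched as follows. The words, vote counts, and tie-handling policy here are hypothetical illustrations, not the project's actual data or rules.

```python
from collections import Counter

def sentiment_by_majority(votes):
    """Assign a polarity to each word by majority rule over crowd votes.

    votes: mapping word -> list of 'positive' / 'negative' / 'neutral' labels.
    Ties are marked 'undecided' so they can be put back up for voting.
    """
    lexicon = {}
    for word, labels in votes.items():
        counts = Counter(labels).most_common()
        if len(counts) > 1 and counts[0][1] == counts[1][1]:
            lexicon[word] = 'undecided'   # no clear majority
        else:
            lexicon[word] = counts[0][0]  # majority label wins
    return lexicon

# Hypothetical crowd votes for three Korean words.
votes = {
    '사랑':   ['positive', 'positive', 'neutral'],
    '지루하다': ['negative', 'negative', 'positive'],
    '그리고':  ['neutral', 'positive'],   # tied: left undecided
}
print(sentiment_by_majority(votes))
```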

A Comparative Study of Information Delivery Method in Networks According to Off-line Communication (오프라인 커뮤니케이션 유무에 따른 네트워크 별 정보전달 방법 비교 분석)

  • Park, Won-Kuk;Choi, Chan;Moon, Hyun-Sil;Choi, Il-Young;Kim, Jae-Kyeong
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.131-142 / 2011
  • A Social Network Service is defined as a web-based service that allows an individual to construct a public or semi-public profile within a bounded system, articulate a list of other users with whom they share a connection, and traverse their list of connections. Facebook and Twitter are the representative Social Network Service sites, and they have become a worldwide phenomenon. Many people use Social Network Services to build and maintain social relationships, and the number of users has increased dramatically in recent years. Accordingly, many organizations have become interested in Social Network Services as a means of marketing, media, and communication with their customers, because such services offer a variety of benefits to organizations such as companies and associations. Social Network Services let users communicate more easily and quickly, so organizations can respond rapidly to user behavior, and the marketing cost is lower than that of existing channels such as broadcasting, newspapers, and direct mail. In addition, Social Network Services are a growing market, so organizations can acquire potential future customers through them. However, organizations communicate with users uniformly through Social Network Services, without considering the characteristics of the underlying networks, even though different networks affect information delivery differently. For example, members' cohesion in a community based on offline communication is higher than in a purely online one, because its members are close to one another; that is, the offline-communication-based network has strong ties, so information spreads quickly through it. In this study, we compose two Twitter networks with different communication characteristics. The first network is constructed from data based on offline relationships such as friends, family, and seniors and juniors from school. The second network is constructed from randomly selected users who associate with friends only online. Each network comprises 250 people divided into three groups: the ego, the person at the center of the network; the ego's followers; and the ego's followers' followers. We compare the networks through social network analysis and an analysis of follower reactions. We investigate density and centrality to characterize each network, and we analyze follower reactions such as replies and retweets to find differences in information delivery. Our experimental results indicate that the density and centrality of the offline-communication-based network are higher than those of the online-based network. Also, replies outnumber retweets in the offline-communication-based network, whereas retweets outnumber replies in the online-based network. We thus identified experimentally that information delivery in the offline-communication-based network differs from that in the online-based network. Therefore, an organization that wants to use a social network as an effective marketing tool should configure the appropriate network type in light of these characteristics.
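The density and centrality measures used in the comparison above can be sketched in a few lines. The two four-node ego networks below are hypothetical toy graphs, far smaller than the 250-person networks in the study.

```python
def density(edges, nodes):
    """Graph density: observed edges over the maximum possible edges."""
    n = len(nodes)
    return 2 * len(edges) / (n * (n - 1))

def degree_centrality(edges, nodes):
    """Normalized degree: each node's link count divided by (n - 1)."""
    deg = {v: 0 for v in nodes}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    n = len(nodes)
    return {v: d / (n - 1) for v, d in deg.items()}

# Hypothetical ego networks: ego plus three followers.
nodes = ['ego', 'a', 'b', 'c']
offline_edges = [('ego', 'a'), ('ego', 'b'), ('ego', 'c'),
                 ('a', 'b'), ('b', 'c'), ('a', 'c')]   # tightly knit ties
online_edges = [('ego', 'a'), ('ego', 'b'), ('ego', 'c')]  # looser ties

print(density(offline_edges, nodes))  # 1.0
print(density(online_edges, nodes))   # 0.5
print(degree_centrality(offline_edges, nodes)['ego'])
```

The offline-style graph, where followers also know each other, has the higher density, mirroring the study's finding about strong offline ties.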

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.21-44 / 2018
  • In recent years, the rapid development of Internet technology and the popularization of smart devices have produced massive amounts of text data, distributed through media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. This enormous amount of easily obtained information, however, lacks organization. The problem has drawn the interest of many researchers seeking to manage the information, and it requires professionals capable of classifying what is relevant; hence text classification is introduced. Text classification is a challenging task in modern data analysis: a text document must be assigned to one or more predefined categories or classes. Techniques available in the field include K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machines, Decision Trees, and Artificial Neural Networks. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge: a model's performance varies with the type of words in the corpus and the type of features created for classification. Most previous attempts propose a new algorithm or modify an existing one, a line of research that has arguably reached its limits. In this study, instead of proposing or modifying an algorithm, we focus on modifying the use of the data. It is widely known that classifier performance is influenced by the quality of the training data on which the classifier is built; real-world datasets usually contain noise, which can affect the decisions made by classifiers built from them. In this study, we consider that data from different domains, that is, heterogeneous data, may carry noise-like characteristics that can be exploited in the classification process. Machine learning algorithms build classifiers under the assumption that the characteristics of the training data and the target data are the same or very similar. In the case of unstructured data such as text, however, the features are determined by the vocabulary of the documents: if the viewpoints of the training data and the target data differ, their features may differ as well. We therefore attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into its construction. Data coming from various sources are likely to be formatted differently, which poses difficulties for traditional machine learning algorithms, since they are not designed to recognize different data representations at once and combine them in a single generalization. To utilize heterogeneous data in the learning process, we apply semi-supervised learning. Because unlabeled data may degrade classifier performance, we further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA), which selects only the documents that contribute to the classifier's accuracy. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data; the most confident classification rules are selected and applied for the final decision. In this paper, three types of real-world data sources were used: news, Twitter, and blogs.
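The noise-injection idea above, mixing heterogeneous out-of-domain documents into the training set to harden the classifier, can be sketched as follows. The helper name, noise rate, and toy documents are hypothetical; the paper's RSESLA pipeline is considerably more involved.

```python
import random

def inject_noise(labeled_docs, noise_rate, noise_pool, seed=0):
    """Mix a fraction of out-of-domain ('noisy') documents into training data.

    labeled_docs: (text, label) pairs from the target domain.
    noise_pool:   (text, label) pairs from other, heterogeneous domains.
    The idea is that controlled heterogeneity can make the resulting
    classifier more robust to vocabulary mismatch between domains.
    """
    rng = random.Random(seed)           # fixed seed: reproducible sample
    k = int(len(labeled_docs) * noise_rate)
    return labeled_docs + rng.sample(noise_pool, k)

# Hypothetical target-domain corpus plus a pool of other-domain documents.
train = [('economy stocks rise today', 'news')] * 8
pool = [('great phone battery life', 'blog'),
        ('big match tonight', 'twitter'),
        ('new tax policy announced', 'news'),
        ('quiet cafe downtown review', 'blog')]

augmented = inject_noise(train, 0.25, pool)
print(len(augmented))  # 8 target docs + 2 injected = 10
```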

The Current Status of Utilization of Palliative Care Units in Korea: 6 Month Results of 2009 Korean Terminal Cancer Patient Information System (말기암환자 정보시스템을 이용한 우리나라 암환자 완화의료기관의 이용현황)

  • Shin, Dong-Wook;Choi, Jin-Young;Nam, Byung-Ho;Seo, Won-Seok;Kim, Hyo-Young;Hwang, Eun-Joo;Kang, Jina;Kim, So-Hee;Kim, Yang-Hyuck;Park, Eun-Cheol
    • Journal of Hospice and Palliative Care / v.13 no.3 / pp.181-189 / 2010
  • Purpose: Health policy making is increasingly evidence-based, and the Korean Terminal Cancer Patient Information System (KTCPIS) was developed to meet this need. We report its development process and statistics from 6 months of data. Methods: Items for the KTCPIS were developed in consultation with practitioners. The e-Velos web-based clinical trial management system was used as the technical platform. Data were collected for patients registered at 34 inpatient palliative care services designated by the Ministry of Health, Welfare, and Family Affairs, from January 1 to June 30, 2009. Descriptive statistics were used for the analysis. Results: From a nationally representative set of 2,940 patients, we obtained the following results. The mean age was 64.8±12.9 years, and 56.6% of patients were male. Lung cancer (18.0%) was the most common diagnosis. Only 50.3% of patients had their terminal diagnosis confirmed by two or more physicians, and 69.7% were aware of the terminal diagnosis at the time of admission. About half of the patients were admitted to the units on their own, without any formal referral. Average and worst pain scores were significantly reduced after one week compared with those at admission. 73.4% of patients died in the units, and only 13.3% were discharged home. The mean length of stay per admission was 20.2±21.2 days, with a median of 13. Conclusion: Nationally representative data on the characteristics of patients and their caregivers, and on current service delivery in palliative care units, were obtained through the operation of the KTCPIS.

Experimental Study on Structural Behavior of Precast PSC Curved Girder Bridge (프리캐스트 PSC 곡선 거더교의 구조거동에 대한 실험적 연구)

  • Kim, Sung Jae;Kim, Sung Bae;Uhm, Ki Ha;Kim, Jang Ho Jay
    • KSCE Journal of Civil and Environmental Engineering Research / v.34 no.6 / pp.1731-1741 / 2014
  • Recently, many overpasses, highways, and advanced transit systems have been constructed to relieve traffic congestion, so small curved bridges with small curvature, such as ramp structures, have been increasing in number. Many early curved bridges were built with straight beams and curved slabs, but curved steel beams replaced them owing to cost, aesthetics, and the ease of forming the section and shaping the beam's curvature; as a result, a large share of curved bridges use steel box girders. However, steel box girder bridges require comparatively high initial costs and continuous maintenance such as repainting, which is one of the main drivers of their cost. Moreover, the I-type steel plate girder, recently studied by many researchers, appears to have stability problems due to low torsional stiffness, a consequence of its thin web plates and open section form. Therefore, recent studies have proposed curved precast PSC girders as a low-cost alternative with assured safety that could replace curved steel girder bridges. This study developed a Smart Mold system for efficiently manufacturing curved precast PSC girders. Using this mold system, a 40 m two-girder bridge was constructed for a static flexural test to evaluate its safety and performance under ultimate load. At the manufacturing stage, each single girder showed stability problems due to torsional moment, but once the girders were connected by cross beams and decks, the bridge distributed the stresses successfully, confirming its stability. The static loading test showed an initial crack at 1,400 kN against a design load of 450 kN, and a load of 1,800 kN at the allowable deflection specified by the code, demonstrating that the safety and serviceability of a curved precast PSC bridge manufactured with the Smart Mold system are secured.

A User Profile-based Filtering Method for Information Search in Smart TV Environment (스마트 TV 환경에서 정보 검색을 위한 사용자 프로파일 기반 필터링 방법)

  • Sean, Visal;Oh, Kyeong-Jin;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.97-117 / 2012
  • Nowadays, Internet users tend to do several things at once, such as web browsing, social networking, and multimedia consumption. While watching a video, once a user becomes interested in a product, the user has to search for information to learn more about it. With the conventional approach, the user searches separately with engines like Bing or Google, which can be inconvenient and time-consuming. For this reason, a video annotation platform has been developed to give users more convenient and more interactive ways to engage with video content. In the future smart TV environment, users will be able to follow annotated information, for example a link to a vendor selling the product of interest. Better still, users could search for information by discussing directly with friends: they can get useful, relevant information about the product from friends who share common interests or have experienced it before, which is more reliable than search engine results. Social networking services provide an appropriate environment for sharing products, letting people show new things to their friends and share personal experiences with specific products, while also absorbing the most relevant information through comments and discussion. However, within a very large friend graph, determining the most appropriate people to ask about a specific product remains a limitation of the existing approach: when users want to share or discuss a product, they simply share it with all friends as a new feed. A newly posted article is thus blindly spread to all friends, regardless of their background interests or knowledge, so the number of responses can be huge, and users cannot easily absorb the relevant and useful ones, since their friends come from various fields of interest and knowledge. To overcome this limitation, we propose a method for filtering a user's friends for information search that leverages semantic video annotation and social networking services. Our method identifies who can give the user useful information about a specific product. By examining users' existing Facebook information and their social graph, we construct a user profile of product interest. With the user's permission and authentication, the user's activities are enriched with domain-specific ontologies such as GoodRelations and the BestBuy data source. We further assume that the object in the video has already been annotated using Linked Data, so the details of the product the user wants to know more about are retrieved via its product URI. Our system calculates the similarities between profiles to identify the friends most suitable for seeking information about the mentioned product, and ranks the user's friends by a score indicating how likely each is to provide useful information about it. We conducted an experiment with a group of respondents to verify and evaluate our system. First, a user profile accuracy evaluation demonstrated how well the constructed profile of product interest represents the user's actual interests. Then the filtering method was evaluated by inspecting the ranked results against human judgment. The results show that our method filters effectively and efficiently. Our system fulfills user needs by helping users select appropriate friends for seeking useful information about a product they are curious about, and so helps inform and support purchase decisions.
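The similarity-based friend ranking described above can be sketched as follows. The interest profiles, category weights, and friend names are hypothetical stand-ins for the ontology-enriched Facebook profiles the paper builds; cosine similarity is one plausible choice of measure, not necessarily the paper's exact formula.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse interest vectors (dicts)."""
    dot = sum(u[k] * v.get(k, 0.0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical product-interest profiles keyed by category weights.
product = {'camera': 1.0, 'electronics': 0.5}   # the annotated product
friends = {
    'alice': {'camera': 0.9, 'electronics': 0.4},
    'bob':   {'cooking': 1.0},
    'carol': {'electronics': 0.8, 'travel': 0.3},
}

# Rank friends by how closely their interests match the product.
ranked = sorted(friends, key=lambda f: cosine(product, friends[f]),
                reverse=True)
print(ranked)  # friends most likely to know about the product come first
```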

Story-based Information Retrieval (스토리 기반의 정보 검색 연구)

  • You, Eun-Soon;Park, Seung-Bo
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.81-96 / 2013
  • Video information retrieval has become a very important issue because of the explosive growth of video data on the Web. Content-based video analysis using visual features has been the main approach to video retrieval and browsing: content can be represented by features extracted from audio-visual data, such as frames, shots, colors, texture, or shape, and similarity between videos can be measured on that basis. However, a movie, one of the typical types of video data, is organized by story as well as audio-visual data. When content-based analysis of low-level audio-visual data alone is applied to movie retrieval, a semantic gap opens between the information people actually find significant and the information the analysis yields. The reason is that a movie's story line is high-level information, with relationships in the content that change as the movie progresses. Retrieval of story-line information cannot be performed with content-based techniques alone; a formal model is needed that can determine relationships among movie contents and track changes in meaning, so that story information can be retrieved accurately. Recently, story-based video analysis techniques using a social network concept have emerged. These approaches represent a story through the relationships between characters in a movie, but they have problems. First, they do not express the dynamic changes in character relationships as the story develops. Second, they miss deeper information, such as the emotions indicating the identities and psychological states of the characters; emotion is essential to understanding a character's motivation, conflict, and resolution. Third, they do not take account of the events and background that contribute to the story. This paper therefore reviews the importance and weaknesses of previous video analysis methods, ranging from content-based approaches to social-network-based story analysis, and proposes the necessary elements, namely character, background, and events, based on narrative structures introduced in the literature. We extract characters' emotional words from the script of the movie Pretty Woman by using the hierarchical structure of WordNet, an extensive English thesaurus that encodes relationships between words (e.g., synonyms, hypernyms, hyponyms, antonyms), and we present a method to visualize a character's emotional pattern over time. Second, a character's inner nature must be modeled in order to depict a character arc of growth and development. To this end, we analyze the amount of each character's dialogue in the script and track the character's inner nature using social network concepts such as in-degree (incoming links) and out-degree (outgoing links); we also propose a method that traces indices such as degree, in-degree, and out-degree of the character network as the movie progresses. Finally, the spatial background where characters meet and events take place is an important story element. We take advantage of the movie script to extract significant spatial backgrounds and suggest a scene map describing spatial arrangements and distances in the movie; important places where main characters first meet or stay for long periods can be extracted from this map. Using these three elements (character, event, background), we extract a variety of story-related information and evaluate the performance of the proposed method. We can track the extracted story information over time and detect changes in a character's emotion or inner nature, spatial movement, and the story's conflicts and resolutions.
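The in-degree/out-degree tracking of the character network described above can be sketched as follows. The dialogue turns are hypothetical examples, not extracted from the actual Pretty Woman script.

```python
from collections import defaultdict

def degree_profile(dialogues):
    """Compute in- and out-degree per character from scripted dialogue turns.

    dialogues: list of (speaker, listener) pairs in story order.
    Returns per-character (out_degree, in_degree) counts, a simple proxy
    for how actively a character drives versus receives the story.
    """
    out_deg = defaultdict(int)
    in_deg = defaultdict(int)
    for speaker, listener in dialogues:
        out_deg[speaker] += 1    # outgoing link: character speaks
        in_deg[listener] += 1    # incoming link: character is addressed
    chars = set(out_deg) | set(in_deg)
    return {c: (out_deg[c], in_deg[c]) for c in chars}

# Hypothetical turns from an early segment of a script.
turns = [('Vivian', 'Edward'), ('Edward', 'Vivian'),
         ('Edward', 'Vivian'), ('Kit', 'Vivian')]
print(degree_profile(turns))
```

Computing this profile over successive segments of the script would show how a character's position in the network shifts as the story progresses.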

Construction of Gene Network System Associated with Economic Traits in Cattle (소의 경제형질 관련 유전자 네트워크 분석 시스템 구축)

  • Lim, Dajeong;Kim, Hyung-Yong;Cho, Yong-Min;Chai, Han-Ha;Park, Jong-Eun;Lim, Kyu-Sang;Lee, Seung-Su
    • Journal of Life Science / v.26 no.8 / pp.904-910 / 2016
  • Complex traits are determined by the combined effects of many loci and are affected by gene networks or biological pathways. Systems biology approaches play an important role in identifying candidate genes related to complex diseases or traits at the system level. Gene network analysis has been performed with diverse methods, including gene co-expression, gene regulatory relationships, protein-protein interaction (PPI), and genetic networks, and network-based methods for predicting gene function have been described, such as graph-theoretic methods, neighborhood-counting methods, and weighted functions. In livestock, however, such studies remain limited. The present study systematically analyzed genes associated with 102 economic traits based on the Animal Trait Ontology (ATO) and identified their relationships through gene co-expression and PPI networks in cattle. We then constructed databases for the two network types and a network visualization system (http://www.nabc.go.kr/cg). Gene co-expression networks were generated from bovine gene expression values, and the PPI network was constructed from the Human Protein Reference Database based on orthologous relationships between human and cattle. Finally, candidate genes and their network relationships were identified for each trait; the candidates were topologically central, with large degree and betweenness centrality (BC) values in the gene network. The Ontle program was applied to generate the database and to visualize the gene network results. This information will serve as a valuable resource for exploiting genomic functions that influence economically and agriculturally important traits in cattle.
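The betweenness centrality used above to flag topologically central candidate genes can be sketched with Brandes' algorithm. The five-gene graph is a hypothetical toy example, not data from the cattle networks.

```python
from collections import deque, defaultdict

def betweenness(adj):
    """Brandes' betweenness centrality for an unweighted, undirected graph.

    adj: dict node -> list of neighbours. Nodes with high BC lie on many
    shortest paths, like the bridging candidate genes described above.
    """
    bc = dict.fromkeys(adj, 0.0)
    for s in adj:
        stack, preds = [], defaultdict(list)
        sigma = dict.fromkeys(adj, 0); sigma[s] = 1   # shortest-path counts
        dist = dict.fromkeys(adj, -1); dist[s] = 0
        q = deque([s])
        while q:                                      # BFS from source s
            v = q.popleft(); stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        delta = dict.fromkeys(adj, 0.0)
        while stack:                                  # back-propagate deps
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: c / 2 for v, c in bc.items()}  # undirected: halve pair counts

# Hypothetical gene neighbourhood: 'g3' bridges two co-expression modules.
adj = {'g1': ['g2', 'g3'], 'g2': ['g1', 'g3'],
       'g3': ['g1', 'g2', 'g4', 'g5'],
       'g4': ['g3', 'g5'], 'g5': ['g3', 'g4']}
bc = betweenness(adj)
print(max(bc, key=bc.get))  # 'g3', the bridging hub
```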

Financial Fraud Detection using Text Mining Analysis against Municipal Cybercriminality (지자체 사이버 공간 안전을 위한 금융사기 탐지 텍스트 마이닝 방법)

  • Choi, Sukjae;Lee, Jungwon;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.119-138 / 2017
  • Recently, SNS has become an important channel for marketing as well as personal communication. However, cybercrime has evolved along with information and communication technology, and illegal advertising is distributed on SNS in large quantities, so losses of personal information and even monetary damage occur more and more frequently. In this study, we propose a method for analyzing which sentences and documents posted to SNS are related to financial fraud. As a conceptual framework, we first developed a matrix of the characteristics of cybercriminality on SNS and emergency management. We also suggested an emergency management process consisting of pre-cybercriminality steps (e.g., risk identification) and post-cybercriminality steps; this paper focuses on risk identification. The main process consists of data collection, preprocessing, and analysis. First, we selected two seed words, 'daechul' (loan) and 'sachae' (private loan), and collected data containing these words from SNS such as Twitter. The collected data were given to two researchers to judge whether they were related to cybercriminality, particularly financial fraud. We then selected as keywords the vocabulary items related to nominals and symbols. With the selected keywords, we searched and collected more than 820,000 articles from web sources such as Twitter, news sites, and blogs. The collected articles were refined through preprocessing into learning data. Preprocessing consists of three steps: morphological analysis, stop word removal, and selection of valid parts of speech. In the morphological analysis step, complex sentences are decomposed into morpheme units to enable mechanical analysis. In the stop word removal step, non-lexical elements such as numbers, punctuation marks, and double spaces are removed from the text. In the part-of-speech selection step, only nouns and symbols are retained: nouns refer to things and so convey the intent of a message better than other parts of speech, and the more illegal a text is, the more frequently symbols are used. Each selected item is labeled 'legal' or 'illegal', since turning the selected data into learning data requires classifying whether each item is legitimate. The processed data are then converted into a corpus and a document-term matrix. Finally, the 'legal' and 'illegal' files were mixed and randomly divided into a learning set and a test set; we set the learning set at 70% and the test set at 30%. SVM was used as the discrimination algorithm. Since SVM requires gamma and cost as its main parameters, we set gamma to 0.5 and cost to 10 based on the optimal value function; the cost is set higher than in typical cases. To show the feasibility of the proposed idea, we compared the method with MLE (Maximum Likelihood Estimation), Term Frequency, and a Collective Intelligence method, using overall accuracy as the metric. The overall accuracy of the proposed method was 92.41% for illegal loan advertisements and 77.75% for illegal door-to-door sales, clearly superior to Term Frequency, MLE, and the other baselines; this suggests that the proposed method is valid and practically usable. In this paper, we propose a framework for managing crises caused by anomalies in unstructured data sources such as SNS. We hope this study contributes to academia by identifying what to consider when applying SVM-like discrimination algorithms to text analysis, and to practitioners in the fields of brand management and opinion mining.
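The core classification step above, a document-term matrix fed to an SVM with gamma 0.5 and cost 10, can be sketched with scikit-learn as follows. The six-document mini-corpus is a hypothetical stand-in for the paper's preprocessed SNS posts, though the SVM parameters match those reported.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

# Hypothetical mini-corpus; labels follow the paper's 'legal'/'illegal' scheme.
train_docs = ['low interest loan call now', 'instant private loan no check',
              'city library opening hours', 'weather is nice today',
              'easy loan approval call', 'community festival this weekend']
train_y = ['illegal', 'illegal', 'legal', 'legal', 'illegal', 'legal']

vec = CountVectorizer()                    # builds the document-term matrix
X = vec.fit_transform(train_docs)
clf = SVC(kernel='rbf', gamma=0.5, C=10)   # gamma and cost from the paper
clf.fit(X, train_y)

test_docs = ['private loan no check call', 'library festival today']
print(list(clf.predict(vec.transform(test_docs))))
```

In the paper itself the corpus is far larger and a 70/30 train/test split with overall accuracy is used to compare against the MLE, Term Frequency, and Collective Intelligence baselines.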