• Title/Summary/Keyword: Context log


An Application-embedded method to trace OTT viewing patterns on smartphone (스마트폰에서의 OTT(Over The Top)서비스 시청패턴 추적 어플리케이션 설계 : 티빙(tving)을 중심으로)

  • Choi, Sun-Young;Kim, Min-Soo;Kim, Myoung-Jun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.4
    • /
    • pp.1000-1006
    • /
    • 2014
  • This study focuses on the fact that OTT services are heavily used on smartphones, and proposes a method for tracing users' experience of watching television content. For this purpose, we developed logging functions and embedded them into an existing OTT service application to record the flow and pattern of the watching context. The paper proposes a log file format that records users' watching actions accurately and precisely at per-second rather than the previous per-minute granularity. Moreover, the study shows that the application can trace watching behavior across the events generated by the characteristics and playback modes of real-time broadcasting, VOD, and advertisement content. Finally, based on the results, the paper discusses the educational and operational implications of the method, such as methodological application in mobile ethnography or in surveys of total screening rate.
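A per-second watching log of the kind proposed could look as follows; the field layout and event codes are illustrative assumptions, not the paper's actual format.

```python
import time

# Hypothetical per-second log record: the field names and event codes
# below are illustrative, not the format defined in the paper.
def make_log_entry(user_id, content_type, content_id, position_sec, event):
    # content_type: "LIVE", "VOD", or "AD"; event: "PLAY", "PAUSE", "SEEK", ...
    return "{ts}\t{uid}\t{ctype}\t{cid}\t{pos}\t{ev}".format(
        ts=int(time.time()), uid=user_id, ctype=content_type,
        cid=content_id, pos=position_sec, ev=event)

entry = make_log_entry("u001", "VOD", "drama_ep3", 125, "PLAY")
print(entry)
```

Emitting one such tab-separated record per second of playback is what allows the watching flow to be reconstructed at the granularity the paper argues for.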

A Design of the Ontology-based Situation Recognition System to Detect Risk Factors in a Semiconductor Manufacturing Process (반도체 공정의 위험요소 판단을 위한 온톨로지 기반의 상황인지 시스템 설계)

  • Baek, Seung-Min;Jeon, Min-Ho;Oh, Chang-Heon
    • Journal of Advanced Navigation Technology
    • /
    • v.17 no.6
    • /
    • pp.804-809
    • /
    • 2013
  • The current state-monitoring system in a semiconductor manufacturing process is based on manually collected sensor data, which limits complex malfunction detection and real-time monitoring. This study designs a situation recognition algorithm that forms a network over time by creating a domain ontology, and proposes a system that serves users by generating events when risk factors are found in the semiconductor process. To this end, a multiple-sensor node for situational inference was designed and tested. In the experiment, events to which the time-inference rule was applied fired on content accumulated over time from the collected data, whereas events triggered by malfunctions and external factors produced log data only.
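The paper's ontology and rule machinery are not reproduced here, but the general shape of a time-based inference rule, an event that fires only when a condition persists over a window of readings, can be sketched as follows; the sensor values, threshold, and window length are hypothetical.

```python
# Minimal sketch of a time-based inference rule: a risk event fires only
# when `window` consecutive readings exceed the threshold. The threshold
# and readings are hypothetical, not from the paper's experiments.
def detect_risk(readings, threshold, window):
    """Return indices at which `window` consecutive readings exceed threshold."""
    events = []
    run = 0
    for i, value in enumerate(readings):
        run = run + 1 if value > threshold else 0
        if run >= window:
            events.append(i)
    return events

temps = [60, 61, 72, 74, 75, 62, 73]
print(detect_risk(temps, threshold=70, window=3))  # -> [4]
```

A single spike (the final reading of 73) does not trigger an event, which is the point of inference over time rather than per-sample alarms.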

A Study on Pseudo N-gram Language Models for Speech Recognition (음성인식을 위한 의사(疑似) N-gram 언어모델에 관한 연구)

  • 오세진;황철준;김범국;정호열;정현열
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.2 no.3
    • /
    • pp.16-23
    • /
    • 2001
  • In this paper, we propose pseudo n-gram language models for medium-vocabulary speech recognition, as opposed to large-vocabulary recognition with statistical n-gram language models. The proposed method is very simple: it keeps the standard ARPA structure but sets the word probabilities arbitrarily. First, the 1-gram sets every word occurrence probability to 1 (log-likelihood 0.0). Second, the 2-gram likewise sets the probability to 1, but only allows connections between the word start symbol <s> and a WORD, and between a WORD and the word end symbol </s>. Finally, the 3-gram also sets the probability to 1, and only allows the sequence of the word start symbol <s>, a WORD, and the word end symbol </s>. To verify the effectiveness of the proposed method, word recognition experiments were carried out. Preliminary (off-line) results show an average word accuracy of 97.7% for 452 words uttered by 3 male speakers. On-line results show an average word accuracy of 92.5% for 20 words uttered by 20 male speakers, drawn from 1,500 stock names. Through these experiments, we verified the effectiveness of pseudo n-gram language models for speech recognition.
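Concretely, the pseudo n-gram for a single vocabulary entry WORD keeps the standard ARPA layout with every log-probability fixed at 0.0 (probability 1); the snippet below is a hand-written illustration of that layout for one word, not output from the authors' tools:

```
\data\
ngram 1=3
ngram 2=2
ngram 3=1

\1-grams:
0.0 <s>
0.0 WORD
0.0 </s>

\2-grams:
0.0 <s> WORD
0.0 WORD </s>

\3-grams:
0.0 <s> WORD </s>

\end\
```

Because only the <s> WORD </s> paths exist, the model constrains the decoder to isolated-word hypotheses without requiring any training corpus.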


A Study on the Development of Services Supporting Personal Relationship Management - focusing on relationship management using mobile phones (인간 관계관리 지원 서비스 개발을 위한 연구 - 휴대전화를 이용한 관계 관리를 중심으로)

  • Kim, Ju-Yong;Lee, Chang-Hee;Lee, Se-Young;Lee, Jun-Ho
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2008.02b
    • /
    • pp.239-244
    • /
    • 2008
  • We live in communities, beginning new relationships with strangers and maintaining or ending existing ones. Relationships with others are sustained through social activities based on communication. Generally speaking, relationships are formed and continued by exchanging feelings and information, and strengthened through active communication. As a result of the development of information technology over the last ten years, the mobile phone has become a communication channel so universal, intimate, and important that almost all Koreans use it more often than wired phones. Today, people keep a communication channel open to others by using a mobile phone anytime and anywhere. Mobile phones have thus played a key role in helping people maintain, repair, and strengthen their personal relationships, but from the perspective of relationship management they remain a mere aid to communication, offering no active help with actual relationship management. This study was designed to provide services supporting users' personal relationship management, focusing on the mobile phone as a major communication tool: on the basis of call data, it informs users of the current state of their relationships and of who needs communication, prompting relationship-management efforts, that is, communication behaviors. To this end, the study established a method to extract the intimacy between users and their callers and developed a prototype of services supporting personal relationship management using relationship characteristics observable in mobile communication.
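The intimacy-extraction step could be sketched as below; the features used (call frequency, total duration, recency) and their weights are illustrative assumptions, not the formula the authors established.

```python
from datetime import datetime

# Hypothetical intimacy score from call-log features. The weighting of
# frequency, duration, and recency is an illustrative assumption, not
# the paper's actual model.
def intimacy(calls, now, w_freq=0.5, w_dur=0.3, w_recency=0.2):
    """calls: list of (timestamp, duration_sec) tuples for one contact."""
    if not calls:
        return 0.0
    freq = len(calls)                                   # how often
    total_min = sum(d for _, d in calls) / 60.0         # how long
    days_since = (now - max(t for t, _ in calls)).days  # how recently
    recency = 1.0 / (1 + days_since)
    return w_freq * freq + w_dur * total_min + w_recency * recency

now = datetime(2008, 2, 1)
calls = [(datetime(2008, 1, 30), 120), (datetime(2008, 1, 31), 300)]
print(round(intimacy(calls, now), 2))  # -> 3.2
```

Ranking contacts by such a score, and flagging those whose score is decaying, is one way the "who needs communication" prompt described above could be driven.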


Survival analysis on the business types of small business using Cox's proportional hazard regression model (콕스 비례위험 모형을 이용한 중소기업의 업종별 생존율 및 생존요인 분석)

  • Park, Jin-Kyung;Oh, Kwang-Ho;Kim, Min-Soo
    • Journal of the Korean Data and Information Science Society
    • /
    • v.23 no.2
    • /
    • pp.257-269
    • /
    • 2012
  • The global crisis is accelerating change in the industrial environment and putting small enterprises in danger of mass bankruptcy; domestic small enterprises are therefore in urgent need of restructuring. Based on small-business data registered with the Credit Guarantee Fund, we estimated survival probabilities in the framework of survival analysis and analyzed how survival times differ by type of business. Cox regression analysis of financial variables was also conducted for each business type. By type of business, the wholesale and retail trade and service industries had relatively higher survival probabilities than the light, heavy, and construction industries, with the construction industry showing the lowest survival probability. In addition, we found that in the construction industry, the higher the BIS ratio (Bank for International Settlements capital ratio) and the current ratio, the lower the default rate, while the larger the borrowings, the higher the default rate. In the light industry, higher BIS and ROA (return on assets) correspond to a lower default rate; in the wholesale and retail trade industry, higher BIS and current ratios to a lower default rate; in the heavy industry, higher BIS, ROA, and current ratios to a lower default rate; and in the service industry, a higher current ratio to a lower default rate.
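In the Cox model the covariates act multiplicatively on a baseline hazard, h(t|x) = h0(t) exp(b'x), so each fitted coefficient translates directly into a hazard ratio. A minimal sketch with made-up coefficients (the paper's estimates are not reproduced here):

```python
import math

# Hypothetical Cox coefficients, invented for illustration and not the
# paper's estimates. Negative signs mean the factor lowers the default
# hazard, matching the direction reported for BIS and current ratios.
beta = {"bis_ratio": -0.04, "current_ratio": -0.01, "borrowings": 0.03}

def relative_hazard(x):
    """exp(beta . x): hazard relative to the baseline h0(t)."""
    return math.exp(sum(beta[k] * v for k, v in x.items()))

firm_a = {"bis_ratio": 12.0, "current_ratio": 150.0, "borrowings": 10.0}
firm_b = {"bis_ratio": 6.0, "current_ratio": 100.0, "borrowings": 30.0}

# Hazard ratio of B vs A: > 1 means firm B faces the higher default risk.
print(relative_hazard(firm_b) / relative_hazard(firm_a))
```

Because the baseline hazard h0(t) cancels in the ratio, comparisons between firms need only the coefficients, which is what makes the proportional-hazards model convenient for this kind of by-industry analysis.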

A Study on the Meaning and Strategy of Keyword Advertising Marketing

  • Park, Nam Goo
    • Journal of Distribution Science
    • /
    • v.8 no.3
    • /
    • pp.49-56
    • /
    • 2010
  • At the initial stage of Internet advertising, banner advertising came into fashion. As the Internet developed into a central part of daily life and competition in the on-line advertising market grew fierce, there was not enough space for banner advertising, which rushed to portal sites only; all these factors were responsible for an upsurge in advertising prices. Consequently, the high-cost, low-efficiency problems of banner advertising were raised, which led to the emergence of keyword advertising as a new type of Internet advertising to replace its predecessor. In the early 2000s, when Internet advertising took off, display advertising including banners dominated the Net. However, display advertising went into gradual decline and registered negative growth in 2009, whereas keyword advertising grew rapidly and started to outdo display advertising from 2005. Keyword advertising refers to the technique that exposes relevant advertisements at the top of search sites when one searches for a keyword. Instead of exposing advertisements to unspecified individuals as banner advertising does, keyword advertising, a targeted advertising technique, shows advertisements only when customers search for a desired keyword, so that only highly prospective customers see them; in this context it is also referred to as search advertising. It is regarded as more aggressive advertising with a higher hit rate than earlier forms in that, instead of the seller discovering customers and running advertisements for them as on TV, radio, or banners, it exposes advertisements to customers who come looking. Keyword advertising makes it possible for a company to seek publicity on line with a single word and to achieve maximum efficiency at minimum cost.
The strong point of keyword advertising is that customers come into direct contact with the products in question, making it more efficient than advertisements in mass media such as TV and radio. Its weak point is that a company must register its advertisement on each and every portal site and finds it hard to exercise substantial supervision over the advertisement, with the possibility of advertising expenses exceeding profits. Keyword advertising serves as the most appropriate method of advertising for the sales and publicity of small and medium enterprises, which need maximum advertising effect at low cost. At present, keyword advertising is divided into CPC advertising and CPM advertising. The former, known as the most efficient technique, is billed on a metered-rate basis: a company pays for the number of clicks on the keyword that users have searched. It is representatively adopted by Overture, Google's AdWords, Naver's Clickchoice, and Daum's Clicks. CPM advertising follows a flat-rate payment system, charging a company on the basis of the number of exposures rather than clicks; the price is fixed per 1,000 exposures, and the model is mainly adopted by Naver's Timechoice, Daum's Speciallink, and Nate's Speedup. At present, the CPC method is the most frequently adopted. Its weak point is that advertising costs can rise through repeated clicks from the same IP. If a company makes good use of strategies for maximizing the strong points of keyword advertising and complementing its weak points, it is highly likely to turn its visitors into prospective customers.
Accordingly, an advertiser should analyze customers' behavior and approach them in a variety of ways, trying hard to find out what they want; with this in mind, he or she has to put multiple keywords to use when running ads. When first running an ad, priority should go to keyword selection: the advertiser should consider how many search-engine users will click the keyword in question and how much the advertisement will cost. As the popular keywords frequently used on search engines are expensive in unit cost per click, advertisers without much money at the initial phase should pay attention to detailed keywords suited to their budget. Detailed keywords, also referred to as peripheral or extension keywords, are combinations of major keywords. Most keyword advertisements are in the form of text. The biggest strong point of text-based advertising is that it looks like search results, provoking little antipathy; but precisely because most keyword advertising is text, it fails to attract much attention. Image-embedded advertising is easy to notice thanks to its images, but it is exposed on the lower part of a web page and recognized as an advertisement, which leads to a low click-through rate; its strong point is that its prices are lower than those of text-based advertising. If a company owns a logo or a product that people readily recognize, it is well advised to use image-embedded advertising to attract Internet users' attention. Advertisers should analyze their logs and examine customers' responses to the site's events and product composition as a vehicle for monitoring their behavior in detail.
Besides, keyword advertising allows advertisers to analyze the advertising effects of exposed keywords through log analysis. Log analysis refers to a close analysis of the current situation of a site based on information about visitors, drawing on visitor counts, page views, and cookie values. A user's IP, the pages used, the time of use, and cookie values are stored in the log files generated by each web server. These log files contain a huge amount of data; as direct analysis is all but impossible, one analyzes them with log-analysis solutions. The generic information extractable from log-analysis tools includes total page views, average page views per day, basic page views, page views per visit, total hits, average hits per day, hits per visit, the number of visits, average visits per day, the net number of visitors, average visitors per day, one-time visitors, repeat visitors, and average usage hours. Such data are useful for analyzing the situation and current status of rival companies as well as for benchmarking. As keyword advertising exposes advertisements exclusively on search-result pages, competition among advertisers attempting to preoccupy popular keywords is very fierce. Some portal sites keep giving priority to existing advertisers, whereas others give all advertisers a chance to purchase the keywords in question once the advertising contract is over.
On sites that give priority to established advertisers, an advertiser relying on keywords sensitive to season and timeliness may as well purchase a vacant advertising slot lest the right timing be missed. Naver, however, gives no priority to existing advertisers for any keyword advertisement; there, one can secure a keyword by entering into a contract after confirming the contract period. This study is designed to examine marketing for keyword advertising and to present effective strategies for keyword advertising marketing. At present, the Korean CPC advertising market is virtually monopolized by Overture, whose strong points are its CPC charging model and the registration of advertisements at the top of Korea's most representative portal sites, advantages that make it the most appropriate medium for small and medium enterprises. However, Overture's CPC method has weak points too: it is not the one perfect advertising model among on-line search advertisements. It is therefore absolutely necessary that small and medium enterprises, including independent shopping malls, complement the weaknesses of the CPC method and make good use of strategies for maximizing its strengths so as to increase their sales and create points of contact with customers.
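The difference between the two charging models is only in what is metered, which a few lines of arithmetic make concrete (the prices are invented for illustration):

```python
# CPC bills per click; CPM bills per 1,000 exposures regardless of clicks.
# Prices below are invented for illustration.
def cpc_cost(clicks, price_per_click):
    return clicks * price_per_click

def cpm_cost(impressions, price_per_1000):
    return impressions / 1000 * price_per_1000

impressions = 50_000
clicks = 1_000  # i.e. a 2% click-through rate

print(cpc_cost(clicks, 500))         # -> 500000 (1,000 clicks x 500)
print(cpm_cost(impressions, 5_000))  # -> 250000.0 (50 blocks of 1,000 x 5,000)
```

The comparison also shows why same-IP click abuse matters only under CPC: inflating `clicks` raises the CPC bill directly, while the CPM bill depends on exposures alone.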


Performance of Drip Irrigation System in Banana Cultivation - Data Envelopment Analysis Approach

  • Kumar, K. Nirmal Ravi;Kumar, M. Suresh
    • Agribusiness and Information Management
    • /
    • v.8 no.1
    • /
    • pp.17-26
    • /
    • 2016
  • India is the largest producer of banana in the world, producing 29.72 million tonnes from an area of 0.803 million ha with a productivity of 35.7 MT/ha, accounting for 15.48 and 27.01 per cent of the world's area and production respectively (www.nhb.gov.in). Within India, Tamil Nadu leads the other states in both area and production, followed by Maharashtra, Gujarat, and Andhra Pradesh. In the Rayalaseema region of Andhra Pradesh, Kurnool district has a special reputation for banana cultivation, with an area of 5,765 hectares and an annual production of 2.01 lakh tonnes in 2012-13, and hence it was purposively chosen for the study. On 23rd November 2003, the Government of Andhra Pradesh commenced a comprehensive project called the Andhra Pradesh Micro Irrigation Project (APMIP), the first of its kind in the world, to promote water-use efficiency. APMIP offers a 100 per cent subsidy to SC and ST farmers and 90 per cent to other categories for up to 5.0 acres of land; for acreage between 5 and 10 acres the subsidy is 70 per cent, and above 10 acres it is 50 per cent. The sampling frame consists of Kurnool district, two mandals, four villages, and 180 sample farmers, 60 each from the Marginal (<1 ha), Small (1-2 ha), and Other (>2 ha) categories. A well-structured, pre-tested schedule was employed to collect the requisite information on the performance of drip irrigation among the sample farmers, and a Data Envelopment Analysis (DEA) model was employed to analyze the performance of drip irrigation on banana farms. Performance was assessed on parameters such as Land Development Works (LDW), Fertigation Costs (FC), Volume of Water Supplied (VWS), Annual Maintenance Costs of drip irrigation (AMC), Economic Status of the farmer (ES), and Crop Productivity (CP).
The first four parameters are treated as inputs and the last two as outputs for DEA modelling. The findings revealed that farms operating at CRS are most numerous among the other farms (46.66%), followed by marginal (45%) and small farms (28.33%). Similarly, among farms operating at VRS, the other farms again lead with 61.66 per cent, followed by marginal (53.33%) and small farms (35%). With reference to scale efficiency, marginal farms dominate with 57 per cent, followed by other (55%) and small farms (50%). At the pooled level, 26.11 per cent of the farms (47 out of 180) operate at CRS with an average technical efficiency score of 0.6138. Nearly 40 per cent of the farmers at the pooled level operate at VRS with an average technical efficiency score of 0.7241. As regards scale efficiency, nearly 52 per cent of the farmers at the pooled level (94 out of 180) performed at or close to the optimum scale (scale efficiency values of 0.90 or more). A majority of the farms (39.44%) operate at IRS and only 29 per cent at DRS, which signifies that more resources should be provided to the farms operating at IRS and resources should be reduced for those operating at DRS. Nearly 32 per cent of the farms operate at CRS, indicating efficient utilization of resources. A log-linear regression model was used to analyze the major determinants of input-use efficiency on banana farms; the input variables of the DEA model were again considered as influential factors for the CRS scores of the three categories of farmers. Volume of water supplied (X1) and fertigation cost (X2) are the major determinants of banana farm efficiency across all farmer categories and at the pooled level.
  • In view of their positive influence on CRS, it is essential to strengthen modern irrigation infrastructure such as drip irrigation and to offer larger fertilizer subsidies so as to enhance crop production cost-effectively in Kurnool district of Andhra Pradesh, India. The study further suggests that the present era of information technology can help irrigation management through new techniques, extension, adoption, and information, and can guide farmers in irrigation scheduling and in quantifying irrigation water requirements in accordance with the water available in a particular season. It is therefore high time for the Government of India to pay adequate attention to Information and Communication Technology (ICT) and its applications in irrigation water management, so as to facilitate the deployment of Decision Support Systems (DSSs) at various levels of planning and management of water resources in the country.
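Scale efficiency in DEA is the ratio of the CRS technical-efficiency score to the VRS score; as a worked check using the pooled averages reported in the abstract:

```python
# Scale efficiency = TE(CRS) / TE(VRS). A farm with SE >= 0.90 is treated
# as operating at or near the optimal scale, following the study's cutoff.
def scale_efficiency(te_crs, te_vrs):
    return te_crs / te_vrs

# Pooled average technical-efficiency scores reported in the abstract:
se = scale_efficiency(0.6138, 0.7241)
print(round(se, 4))   # -> 0.8477
print(se >= 0.90)     # -> False
```

So the average pooled farm falls below the 0.90 near-optimal cutoff, consistent with the finding that only about half the sample farms operate at or near the optimum scale.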

X-tree Diff: An Efficient Change Detection Algorithm for Tree-structured Data (X-tree Diff: 트리 기반 데이터를 위한 효율적인 변화 탐지 알고리즘)

  • Lee, Suk-Kyoon;Kim, Dong-Ah
    • The KIPS Transactions:PartC
    • /
    • v.10C no.6
    • /
    • pp.683-694
    • /
    • 2003
  • We present X-tree Diff, a change detection algorithm for tree-structured data. Our work is motivated by the need to monitor massive volumes of web documents and detect suspicious changes, called defacement attacks, on web sites. In this context, the algorithm must be very efficient in speed and memory use. X-tree Diff uses a special ordered labeled tree, the X-tree, to represent XML/HTML documents. X-tree nodes have a special field, tMD, which stores a 128-bit hash value representing the structure and data of the subtree, so identical subtrees from the old and new versions can be matched directly. During this process, X-tree Diff applies the Rule of Delaying Ambiguous Matchings: it performs exact matching only where a node in the old version has a one-to-one correspondence with a node in the new version, delaying all the others, which drastically reduces the possibility of wrong matchings. X-tree Diff propagates such exact matchings upwards in Step 2 and obtains more matchings downwards from the roots in Step 3; in Step 4, the nodes to be inserted or deleted are decided. We also show that X-tree Diff runs in O(n), where n is the number of nodes in the X-trees, in the worst case as well as on average. This result is even better than that of the BULD Diff algorithm, which is O(n log(n)) in the worst case. We evaluated X-tree Diff on real data, about 11,000 home pages from about 20 web sites, rather than on synthetic documents manipulated for experimentation. The X-tree Diff algorithm is currently used in a commercial hacking detection system, WIDS (Web-Document Intrusion Detection System), which finds changes in registered web sites and reports suspicious changes to users.
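The tMD idea, a 128-bit digest over a node's label, data, and its children's digests so that identical subtrees match in a single comparison, can be sketched as follows; MD5 stands in here for whatever 128-bit hash the authors used, and the tuple-based tree is a simplification of the X-tree structure.

```python
import hashlib

# Each node is (label, text, children). tMD is a 128-bit digest over the
# node's label, its text, and its children's digests, so equal digests
# identify structurally identical subtrees in one comparison.
def tmd(node):
    label, text, children = node
    h = hashlib.md5()  # any 128-bit hash would do; MD5 is a stand-in here
    h.update(label.encode())
    h.update(text.encode())
    for child in children:
        h.update(tmd(child))
    return h.digest()

old = ("html", "", [("body", "", [("p", "hello", [])])])
same = ("html", "", [("body", "", [("p", "hello", [])])])
new = ("html", "", [("body", "", [("p", "hacked", [])])])  # defaced text

print(tmd(old) == tmd(same))  # True: identical subtrees match directly
print(tmd(old) == tmd(new))   # False: the change propagates to the root digest
```

Because a change anywhere in a subtree alters every ancestor's digest, unchanged subtrees can be matched wholesale and only the differing regions need finer comparison, which is what keeps the matching phase linear in the number of nodes.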

Recommender system using BERT sentiment analysis (BERT 기반 감성분석을 이용한 추천시스템)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.2
    • /
    • pp.1-15
    • /
    • 2021
  • When a decision is hard to make, we ask friends or people around us for advice; when we decide to buy products online, we read anonymous reviews before purchasing. With the advent of the data-driven era, the development of IT is producing vast amounts of data about individuals and objects. Companies and individuals have accumulated, processed, and analyzed so much data that decisions once dependent on experts can now be made or executed directly from data. Today, the recommender system plays a vital role in determining users' preferences for purchasing goods, and web services (Facebook, Amazon, Netflix, YouTube) use recommender systems to induce clicks. For example, YouTube's recommender system, used by a billion people worldwide every month, surfaces videos similar to those a user has liked or watched. Recommender system research is deeply linked to practical business, so many researchers are interested in building better solutions. Recommender systems generate recommendations from information obtained from their users, because building one requires information about which items a user is likely to prefer. Through recommender systems, we have begun to trust patterns and rules derived from data rather than empirical intuition, and the growing capacity of data has carried machine learning into deep learning. Such systems are not a complete solution, however: they require sufficient data without scarcity, as well as detailed information about the individual, and they work correctly only when these conditions hold. Recommendation becomes a complex problem for both consumers and sellers when the interaction log is insufficient.
This is because the seller must make recommendations at a personal level to the consumer, while the consumer must receive appropriate recommendations backed by reliable data. In this paper, to improve the accuracy of "appropriate recommendations" to consumers, a recommender system combined with context-based deep learning is proposed. This research combines user-based data to create a hybrid recommender system; the hybrid approach developed is not a purely collaborative recommender system but a collaborative extension that integrates user data with deep learning. Customer review data were used as the data set. Consumers buy products in online shopping malls and then write product reviews; ratings and reviews from buyers who have already purchased give later users confidence before purchase. However, recommendation systems have mainly used scores or ratings rather than reviews to suggest items purchased by many users, even though consumer reviews carry product opinions and user sentiment worth evaluating. By incorporating these parts into the study, this paper aims to improve the recommendation system. The proposed approach targets cases where individuals have difficulty selecting an item; consumer reviews and record patterns make it possible to rely on its recommendations. The algorithm implements recommendation through collaborative filtering, and predictive accuracy is measured by Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE), the metrics Netflix famously used in its annual RMSE-reduction competitions. Research on hybrid recommender systems combining NLP approaches, deep learning bases, and so on for personalized recommendation has been increasing; among NLP studies, sentiment analysis began to take shape in the mid-2000s as user review data grew.
Sentiment analysis is a text classification task based on machine learning. Machine-learning-based sentiment analysis has the disadvantage that it is difficult to capture the information expressed in a review, because the characteristics of the text are hard to take into account. In this study, we propose a deep learning recommender system that utilizes BERT's sentiment analysis to minimize this disadvantage. The comparison models were recommender systems based on Naive-CF (collaborative filtering), SVD (singular value decomposition)-CF, MF (matrix factorization)-CF, BPR-MF (Bayesian personalized ranking matrix factorization)-CF, LSTM, CNN-LSTM, and GRU (Gated Recurrent Units). In the experiment, the BERT-based recommender system performed best.
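The two accuracy metrics used in the comparison are simple to state directly; a minimal sketch on invented toy ratings (not the study's data):

```python
import math

# RMSE penalizes large errors quadratically; MAE weights all errors equally.
def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Toy ratings on a 1-5 scale, invented for illustration:
actual = [5, 3, 4, 1]
predicted = [4.5, 3.5, 4, 2]
print(mae(actual, predicted))   # -> 0.5
print(rmse(actual, predicted))  # sqrt(0.375), about 0.612
```

Because RMSE squares each error, the single 1-point miss dominates it, which is why the two metrics can rank competing recommenders differently.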

Predicting the Direction of the Stock Index by Using a Domain-Specific Sentiment Dictionary (주가지수 방향성 예측을 위한 주제지향 감성사전 구축 방안)

  • Yu, Eunji;Kim, Yoosin;Kim, Namgyu;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.95-110
    • /
    • 2013
  • Recently, the amount of unstructured data being generated through a variety of social media has been increasing rapidly, resulting in the increasing need to collect, store, search for, analyze, and visualize this data. This kind of data cannot be handled appropriately by the traditional methodologies usually used for analyzing structured data, because of its vast volume and unstructured nature. In this situation, many attempts are being made to analyze unstructured data such as text files and log files through various commercial or noncommercial analytical tools. Among the various contemporary issues dealt with in the literature of unstructured text data analysis, the concepts and techniques of opinion mining have been attracting much attention from pioneer researchers and business practitioners. Opinion mining, or sentiment analysis, refers to a series of processes that analyze participants' opinions, sentiments, evaluations, attitudes, and emotions about selected products, services, organizations, social issues, and so on. In other words, many attempts based on various opinion mining techniques are being made to resolve complicated issues that could not otherwise have been solved by existing traditional approaches. One of the most representative attempts using the opinion mining technique may be the recent research that proposed an intelligent model for predicting the direction of the stock index. This model works mainly on the basis of opinions extracted from an overwhelming number of economic news reports. News content published on various media is obviously a traditional example of unstructured text data. Every day, a large volume of new content is created, digitalized, and subsequently distributed to us via online or offline channels. Many studies have revealed that we make better decisions on political, economic, and social issues by analyzing news and other related information.
In this sense, we expect to predict the fluctuation of stock markets partly by analyzing the relationship between economic news reports and the pattern of stock prices. So far, in the literature on opinion mining, most studies including ours have utilized a sentiment dictionary to elicit sentiment polarity or sentiment value from a large number of documents. A sentiment dictionary consists of pairs of selected words and their sentiment values. Sentiment classifiers refer to the dictionary to formulate the sentiment polarity of words, sentences in a document, and the whole document. However, most traditional approaches have common limitations in that they do not consider the flexibility of sentiment polarity, that is, the sentiment polarity or sentiment value of a word is fixed and cannot be changed in a traditional sentiment dictionary. In the real world, however, the sentiment polarity of a word can vary depending on the time, situation, and purpose of the analysis. It can also be contradictory in nature. The flexibility of sentiment polarity motivated us to conduct this study. In this paper, we have stated that sentiment polarity should be assigned, not merely on the basis of the inherent meaning of a word but on the basis of its ad hoc meaning within a particular context. To implement our idea, we presented an intelligent investment decision-support model based on opinion mining that performs the scrapping and parsing of massive volumes of economic news on the web, tags sentiment words, classifies sentiment polarity of the news, and finally predicts the direction of the next day's stock index. In addition, we applied a domain-specific sentiment dictionary instead of a general purpose one to classify each piece of news as either positive or negative. For the purpose of performance evaluation, we performed intensive experiments and investigated the prediction accuracy of our model. 
For the experiments to predict the direction of the stock index, we gathered and analyzed 1,072 articles about stock markets published by "M" and "E" media between July 2011 and September 2011.
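A dictionary-based sentiment classifier of the kind described reduces to summing per-word sentiment values; the tiny domain-specific dictionary below is invented for illustration, not drawn from the paper's dictionary.

```python
# Invented domain-specific sentiment values for stock-market news. A
# general-purpose dictionary might score some of these words differently
# (or not at all), which is the polarity-flexibility problem the study
# addresses with a domain-specific dictionary.
stock_dict = {"surge": 2, "rise": 1, "rally": 1, "plunge": -2, "loss": -1, "fall": -1}

def polarity(text):
    score = sum(stock_dict.get(w, 0) for w in text.lower().split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(polarity("Chip stocks surge as exports rise"))   # -> positive
print(polarity("Banks report loss as shares plunge"))  # -> negative
```

Aggregating such per-article polarities over a day's news is the kind of signal the model feeds into its next-day stock-index direction prediction.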