• Title/Summary/Keyword: Mobile Transactions


A Real-Time Stock Market Prediction Using Knowledge Accumulation (지식 누적을 이용한 실시간 주식시장 예측)

  • Kim, Jin-Hwa;Hong, Kwang-Hun;Min, Jin-Young
    • Journal of Intelligence and Information Systems, v.17 no.4, pp.109-130, 2011
  • One of the major problems in data mining is the sheer size of the data, as most data sets today have huge volumes. Streams of data are normally accumulated into data storage or databases. Transactions on the Internet, on mobile devices, and in ubiquitous environments produce streams of data continuously. Some data sets are simply buried, unused, inside huge data stores because of their size; others are lost as soon as they are created because, for many reasons, they are never saved. How to use such large data sets, and how to use data on a stream efficiently, are challenging questions in data mining. Stream data is a data set that is accumulated into storage continuously from a data source, and in many cases its size grows ever larger over time. Mining information from this massive data consumes too many resources, such as storage, money, and time. These characteristics of stream data make it difficult and expensive to store all the stream data accumulated over time. On the other hand, if one uses only recent or partial data to mine information or patterns, valuable and useful information can be lost. To avoid these problems, this study suggests a method that efficiently accumulates information or patterns, in the form of rule sets, over time. A rule set is mined from each data set in the stream and accumulated into a master rule set store, which also serves as a model for real-time decision making. One main advantage of this method is that it takes much less storage space than the traditional method of saving the whole data set. Another advantage is that the accumulated rule set is used as a prediction model: a prompt response to user requests is possible at any time, because the rule set is always ready to be used for decisions. This makes real-time decision making possible, which is the greatest advantage of the method. Based on the theory of ensemble approaches, a combination of many different models can produce a better-performing prediction model, and the consolidated rule set covers the entire data set, while the traditional sampling approach covers only part of it. This study uses stock market data, which is heterogeneous in that the characteristics of the data vary over time. The indexes in stock market data can fluctuate whenever an event influences the market, so the variance of each variable is large compared to that of a homogeneous data set. Prediction with a heterogeneous data set is naturally much more difficult than with a homogeneous one, as it is harder to predict in unpredictable situations. This study tests two general mining approaches and compares their prediction performance with that of the suggested method. The first approach induces a rule set from the most recent data set to predict the new data set. The second induces a rule set from all the data accumulated from the beginning every time a new data set has to be predicted. We found that neither performs as well as the accumulated rule set method. Furthermore, the study experiments with different prediction models: the first builds a prediction model with only the more important rule sets, and the second uses all the rule sets, assigning weights to the rules based on their performance. The second approach shows better performance than the first. The experiments also show that the suggested method can be an efficient approach for mining information and patterns from stream data. The method has the limitation that its application here is bound to stock market data; a more dynamic real-time stream data set is desirable for further application. Another open problem is that, as the number of rules grows over time, special rules such as redundant or conflicting rules must be managed efficiently.
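
As a rough illustration of the accumulated-rule-set idea described in this abstract, the following Python sketch mines toy threshold rules from each incoming chunk, merges them into a master rule set instead of storing the raw stream, and weights each rule by its performance on newly arriving data. The rule representation, the induction step, and the synthetic data are simplifying assumptions; the paper's actual rule-induction algorithm is not reproduced here.

```python
# Minimal sketch of accumulating mined rules from a data stream with
# performance-based weights (toy threshold rules on synthetic data).
from dataclasses import dataclass
import random

@dataclass
class Rule:
    feature: int        # index of the feature the rule tests
    threshold: float    # split point
    label: int          # predicted class when the rule fires
    weight: float = 1.0 # performance-based weight, updated over time

def induce_rules(chunk, n_rules=5):
    """Toy rule induction: random feature/threshold splits, labeled with the
    majority class of the samples each split covers."""
    rules = []
    for _ in range(n_rules):
        f = random.randrange(len(chunk[0][0]))
        t = random.choice([x[f] for x, _ in chunk])
        covered = [y for x, y in chunk if x[f] >= t]
        if covered:
            rules.append(Rule(f, t, max(set(covered), key=covered.count)))
    return rules

def predict(master, x):
    """Weighted vote of all rules in the master rule set that fire on x."""
    votes = {}
    for r in master:
        if x[r.feature] >= r.threshold:
            votes[r.label] = votes.get(r.label, 0.0) + r.weight
    return max(votes, key=votes.get) if votes else None

def update_weights(master, chunk, lr=0.1):
    """Reward rules that predict the new chunk correctly, penalize the rest."""
    for r in master:
        for x, y in chunk:
            if x[r.feature] >= r.threshold:
                r.weight = max(0.0, r.weight + (lr if r.label == y else -lr))

# Stream loop: each incoming chunk is mined into rules, the rules are merged
# into the master set (instead of storing the raw data), and weights are tuned.
master = []
for chunk_id in range(10):
    chunk = [([random.random() for _ in range(4)], random.randint(0, 1))
             for _ in range(50)]
    update_weights(master, chunk)       # evaluate existing rules on new data
    master.extend(induce_rules(chunk))  # accumulate newly mined rules
print(predict(master, [0.5, 0.5, 0.5, 0.5]))
```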

A Methodology for Extracting Shopping-Related Keywords by Analyzing Internet Navigation Patterns (인터넷 검색기록 분석을 통한 쇼핑의도 포함 키워드 자동 추출 기법)

  • Kim, Mingyu;Kim, Namgyu;Jung, Inhwan
    • Journal of Intelligence and Information Systems, v.20 no.2, pp.123-136, 2014
  • Recently, online shopping has developed further as the use of the Internet and of a variety of smart mobile devices has become more prevalent. The increase in the scale of such shopping has led to the creation of many Internet shopping malls. Consequently, competition among online retailers is increasingly fierce, and many Internet shopping malls are making significant efforts to attract online users to their sites. One such effort is keyword marketing, whereby a retail site pays a fee to expose its link to potential customers when they enter a specific keyword on an Internet portal site. The price of each keyword is generally estimated by the keyword's frequency of appearance. However, it is widely accepted that keyword prices cannot be based solely on frequency, because many keywords appear frequently yet have little relationship to shopping. This implies that it is unreasonable for an online shopping mall to spend a great deal on certain keywords simply because people use them frequently. Therefore, from the perspective of shopping malls, a specialized process is required to extract meaningful keywords, and demand for automating this extraction is increasing because of the drive to improve online sales performance. In this study, we propose a methodology that automatically extracts only shopping-related keywords from the entire set of search keywords used on portal sites. We define a shopping-related keyword as a keyword used directly before shopping behavior; in other words, only search keywords whose search results page leads to shopping-related pages are extracted from the entire set of search keywords. The rankings of the extracted keywords are then compared with the rankings of the entire set of search keywords. Two types of data are used in the experiment: web browsing history from July 1, 2012 to June 30, 2013, and site information. The experimental data set was obtained from a web site ranking service and the biggest portal site in Korea, and the original sample contains 150 million transaction logs. First, portal sites are selected and the search keywords used on them are extracted by simple parsing; the extracted keywords are then ranked by frequency. The experiment uses approximately 3.9 million search results from Korea's largest search portal site, from which a total of 344,822 search keywords were extracted. Next, using the web browsing history and site information, the shopping-related keywords were taken from the entire set of search keywords, yielding 4,709 shopping-related keywords. For performance evaluation, we compared the hit ratios of all the search keywords with those of the shopping-related keywords. To achieve this, we extracted 80,298 search keywords from several Internet shopping malls and chose the top 1,000 as the set of true shopping keywords. We measured precision, recall, and F-score for the entire keyword set and for the shopping-related keywords, where the F-score is the harmonic mean of precision and recall. The precision, recall, and F-score of the shopping-related keywords derived by the proposed methodology were higher than those of the entire keyword set. This study thus proposes a scheme that obtains shopping-related keywords in a relatively simple manner: we extract them by examining transactions whose next visit is a shopping mall, as sketched below. The resulting shopping-related keyword set is expected to be a useful asset for the many shopping malls that participate in keyword marketing. Moreover, the proposed methodology can easily be applied to the construction of keyword sets for other special areas as well as for shopping.
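
The core extraction step, examining transactions whose next page view is a shopping mall, and the precision/recall/F-score evaluation can be sketched in Python as below. The log format, domain lists, and sample records are hypothetical placeholders, not the study's actual data.

```python
# Minimal sketch: count search keywords whose next visit is a shopping mall,
# then evaluate the extracted set against a reference keyword set.
from collections import Counter

PORTAL_DOMAINS = {"search.portal.example"}              # hypothetical portal hosts
SHOPPING_DOMAINS = {"mall-a.example", "mall-b.example"}  # hypothetical mall hosts

def shopping_keywords(logs):
    """Count search keywords whose *next* page view is a shopping-mall site."""
    counts = Counter()
    logs = sorted(logs, key=lambda r: (r["user"], r["time"]))
    for prev, nxt in zip(logs, logs[1:]):
        if (prev["user"] == nxt["user"]
                and prev["host"] in PORTAL_DOMAINS
                and prev.get("keyword")
                and nxt["host"] in SHOPPING_DOMAINS):
            counts[prev["keyword"]] += 1
    return counts

def precision_recall_f1(extracted, true_set):
    """Evaluate an extracted keyword set against a reference shopping-keyword set."""
    extracted, true_set = set(extracted), set(true_set)
    tp = len(extracted & true_set)
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(true_set) if true_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

logs = [
    {"user": 1, "time": 1, "host": "search.portal.example", "keyword": "running shoes"},
    {"user": 1, "time": 2, "host": "mall-a.example"},
    {"user": 1, "time": 3, "host": "search.portal.example", "keyword": "weather"},
    {"user": 1, "time": 4, "host": "news.example"},
]
kw = shopping_keywords(logs)
print(kw)                                           # Counter({'running shoes': 1})
print(precision_recall_f1(kw, {"running shoes"}))   # (1.0, 1.0, 1.0)
```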

A Study on Trust Transfer in Traditional Fintech of Smart Banking (핀테크 서비스에서 오프라인에서 온라인으로의 신뢰전이에 관한 연구 - 스마트뱅킹을 중심으로 -)

  • Ai, Di;Kwon, Sun-Dong;Lee, Su-Chul;Ko, Mi-Hyun;Lee, Bo-Hyung
    • Management & Information Systems Review, v.36 no.3, pp.167-184, 2017
  • In this study, we investigated the effect of offline banking trust on smart banking trust. As influencing factors of smart banking trust, this study compared offline banking trust, smart banking's system quality, and its information quality. For the empirical study, 186 questionnaires were collected from smart banking users, and the data were analyzed using Smart-PLS 2.0. The results verify that trust transfer occurs in FinTech services, shown by the significant effect of offline banking trust on smart banking trust; they also show that this effect is smaller than that of smart banking's own qualities. The contribution of this study is both academic and industrial. First, the academic contribution: previous studies on banking focused on either offline banking or smart banking, whereas this study focuses on the relationship between them and shows that offline banking trust affects smart banking trust. Next, the industrial contribution: this study shows that the offline banking characteristics of traditional commercial banks affect trust in emerging smart banking services, which means that emerging FinTech companies are at a disadvantage against traditional commercial banks in the competition to build trust. Unlike traditional commercial banks, emerging FinTech firms are innovating in customer convenience by arming themselves with new technologies such as the mobile Internet, social networks, cloud technology, and big data. However, these FinTech strengths alone cannot guarantee the trust needed for financial transactions, because banking customers do not easily change the habits and inertia built up while using traditional banks. Therefore, emerging FinTech companies should strive to create disruptive value that reflects connections with various Internet services and the strength of online interaction, such as social services, where they have an advantage in customer contact. They should also strive to build service trust, focusing on young people with low resistance to new services.
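
The comparison of influencing factors can be illustrated, very loosely, with the Python sketch below. It uses ordinary least squares on standardized synthetic data as a stand-in for the study's Smart-PLS 2.0 analysis; the generated scores and coefficients are invented for illustration, and only the sample size matches the study.

```python
# Rough stand-in for comparing standardized effects of offline banking trust,
# system quality, and information quality on smart banking trust (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
n = 186  # same sample size as the study, but the data here are synthetic

# Synthetic construct scores; the generating coefficients are made up.
offline_trust = rng.normal(size=n)
system_quality = rng.normal(size=n)
info_quality = rng.normal(size=n)
smart_trust = (0.25 * offline_trust + 0.40 * system_quality
               + 0.35 * info_quality + rng.normal(scale=0.5, size=n))

def standardize(v):
    return (v - v.mean()) / v.std()

X = np.column_stack([standardize(offline_trust),
                     standardize(system_quality),
                     standardize(info_quality)])
y = standardize(smart_trust)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # standardized coefficients
for name, b in zip(["offline trust", "system quality", "information quality"], beta):
    print(f"{name:>20}: {b:+.3f}")
```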

Design and Implementation of Quality Broker Architecture to Web Service Selection based on Autonomic Feedback (자율적 피드백 기반 웹 서비스 선정을 위한 품질 브로커 아키텍처의 설계 및 구현)

  • Seo, Young-Jun;Song, Young-Jae
    • The KIPS Transactions: Part D, v.15D no.2, pp.223-234, 2008
  • Recently, web services have come to provide an efficient environment for integrating the internal and external systems of a corporation, and the number of enterprises that want to introduce them is increasing. As web services develop and new business models appear, the domestic enterprise and e-business environments are changing because of web services. As web services providing similar functions multiply, methods for finding the service that best fits a user's demand are taken more seriously. When one service must be chosen among similar web services, the service consumer generally needs quality (QoS) information about them. The problem, however, is that the advertised QoS information of a web service is not always trustworthy: a service provider may publish inaccurate QoS information to attract more customers, or the published information may be out of date. Allowing current customers to rate the QoS they receive from a web service, and making these ratings public, can give new customers valuable information on how to rank services. This paper suggests an agent-based quality broker architecture that helps to find, from the consumer's point of view, the service providing the optimal quality the consumer needs. It can cope with changes in the consumer's quality requirements, because the architecture selects a web service for the consumer dynamically. That is, the consumer can search for the service providing the optimal quality criteria through a UDDI browser connected to the quality broker server. In deciding the quality criteria values of each service, user intervention is excluded as much as possible. In existing selection architectures, objective evaluation was difficult because the consumer's service selection relied on subjective classes of service. The proposed architecture, by contrast, secures objectivity by having an agent monitor binding information at the consumer's location to decide the quality criteria values. In other words, QoS information that the provider does not supply is resolved through QoS information sharing based on feedback from consumer-side agents.
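
The feedback-driven selection idea, aggregating QoS observed by consumer-side agents into a score per service and ranking candidates, can be sketched in Python as below. The record format, service names, weights, and scoring formula are illustrative assumptions; the agent-based broker, UDDI integration, and monitoring components of the proposed architecture are not reproduced here.

```python
# Minimal sketch: rank candidate web services by QoS feedback reported from
# consumer-side monitoring, weighted by the consumer's quality preferences.
from statistics import mean

# Hypothetical monitored feedback collected by consumer-side agents.
feedback = {
    "svc-A": [{"resp_ms": 120, "availability": 0.99, "rating": 4.5},
              {"resp_ms": 140, "availability": 0.98, "rating": 4.0}],
    "svc-B": [{"resp_ms": 90,  "availability": 0.95, "rating": 3.5}],
}

# Consumer-supplied weights over the quality criteria (illustrative values).
weights = {"resp_ms": 0.3, "availability": 0.4, "rating": 0.3}

def score(records, weights):
    """Aggregate monitored QoS into one weighted score per service.
    Response time is inverted so that lower latency scores higher."""
    resp = mean(r["resp_ms"] for r in records)
    avail = mean(r["availability"] for r in records)
    rating = mean(r["rating"] for r in records)
    return (weights["resp_ms"] * (1.0 / resp)
            + weights["availability"] * avail
            + weights["rating"] * (rating / 5.0))

ranked = sorted(feedback, key=lambda s: score(feedback[s], weights), reverse=True)
print(ranked)  # services ordered by feedback-based QoS, best first
```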