• Title/Summary/Keyword: Web Search Traffic


Empirical Analysis on Bitcoin Price Change by Consumer, Industry and Macro-Economy Variables (비트코인 가격 변화에 관한 실증분석: 소비자, 산업, 그리고 거시변수를 중심으로)

  • Lee, Junsik;Kim, Keon-Woo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.195-220 / 2018
  • In this study, we conducted an empirical analysis of the factors that affect changes in the Bitcoin closing price. Previous studies have focused on the security of the blockchain system, the economic ripple effects of cryptocurrency, its legal implications, and consumer acceptance of cryptocurrency. Cryptocurrency has been studied in various areas, and many researchers, practitioners, and governments, regardless of country, have tried to utilize cryptocurrency and apply its underlying technology. Despite the rapid and dramatic changes in cryptocurrency prices and the growth of their effects, empirical studies of the factors affecting cryptocurrency price changes have been lacking; there were only a few limited studies, business reports, and short working papers. Therefore, it is necessary to determine which factors affect changes in the Bitcoin closing price. For the analysis, hypotheses were constructed along three dimensions (consumer, industry, and macroeconomy), and time series data were collected for the variables in each dimension. Consumer variables consist of search traffic for Bitcoin, search traffic for a Bitcoin ban, search traffic for ransomware, and search traffic for war. Industry variables were composed of GPU vendors' stock prices and memory vendors' stock prices. Macroeconomic variables included U.S. dollar index futures, FOMC policy interest rates, and the WTI crude oil price. Using these variables, we performed a time series regression analysis to find the relationship between them and changes in the Bitcoin closing price. Before the regression analysis, we performed a unit-root test to verify the stationarity of the time series data and avoid spurious regression; the regression was then run on the stationary data. As a result, we found that the change in the Bitcoin closing price is negatively related to search traffic for 'Bitcoin ban' and to U.S. dollar index futures, while changes in GPU vendors' stock prices and in the WTI crude oil price showed positive effects. In the case of 'Bitcoin ban', the issue directly determines whether Bitcoin trading is maintained or abolished, which is why consumers reacted sensitively and the variable affected the Bitcoin closing price. GPUs are a raw material of Bitcoin mining. Generally, an increase in a company's stock price reflects growth in the sales of its products and services, so increases in GPU demand are indirectly reflected in GPU vendors' stock prices; interpreting this, a rise in GPU prices puts a crimp on Bitcoin mining, and consequently GPU vendors' stock prices affect the change in the Bitcoin closing price. We also confirmed that U.S. dollar index futures moved in the opposite direction to the Bitcoin closing price, as gold does. Gold is considered a safe asset by consumers, which suggests that consumers regard Bitcoin as a safe asset as well. On the other hand, the WTI oil price moved in the same direction as the Bitcoin closing price, implying that Bitcoin is also regarded as an investment asset like products in the raw materials market. The variables that were not significant in the analysis were search traffic for Bitcoin, search traffic for ransomware, search traffic for war, memory vendors' stock prices, and FOMC policy interest rates. For search traffic for Bitcoin, we judged that interest in Bitcoin did not lead directly to purchases of Bitcoin; search traffic therefore did not reflect all of Bitcoin's demand, implying that other factors regulate and mediate Bitcoin purchases. For search traffic for ransomware, it is hard to say that concern about ransomware determined overall Bitcoin demand, because only a few people were harmed by ransomware and the percentage of attackers demanding Bitcoin was low; moreover, such information security problems are discrete events rather than continuous issues. Search traffic for war was not significant: as in the stock market, war generally has a negative relation, but in exceptional cases such as the Gulf War the effect depends on stakeholders' profits and circumstances, and we think this is a similar case. Memory vendors' stock prices were not significant because those vendors' flagship products were not the VRAM that is essential for Bitcoin mining. For FOMC policy interest rates, when interest rates are low, surplus capital is usually invested in securities such as stocks, but Bitcoin's price fluctuations were so large that it was not recognized as an attractive instrument by consumers; in addition, unlike the stock market, Bitcoin has no safety mechanisms such as circuit breakers and sidecars. Through this study, we verified which factors affect the change in the Bitcoin closing price and interpreted why such changes happened. In addition, by establishing the characteristics of Bitcoin as both a safe asset and an investment asset, we provide a guide for how consumers, financial institutions, and government organizations can approach cryptocurrency. Moreover, by corroborating the factors affecting the change in the Bitcoin closing price, researchers will gain clues as to which factors should be considered in future cryptocurrency studies.
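For readers who want a concrete picture of the workflow described in this abstract (unit-root testing followed by regression on stationary data), the sketch below illustrates it with statsmodels. It is not the authors' code; the file name, column names, and the choice of the ADF test and simple differencing are assumptions for illustration only.

```python
# Minimal sketch (not the authors' code): ADF unit-root test per series,
# difference non-stationary series, then regress the change in the Bitcoin
# closing price on the remaining variables. `bitcoin_daily.csv` and the
# column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

def make_stationary(series: pd.Series, alpha: float = 0.05) -> pd.Series:
    """Difference the series until the ADF test rejects a unit root."""
    s = series.dropna()
    while adfuller(s)[1] > alpha:   # adfuller returns (stat, p-value, ...)
        s = s.diff().dropna()
    return s

df = pd.read_csv("bitcoin_daily.csv", index_col="date", parse_dates=True)
cols = ["btc_close", "search_btc_ban", "gpu_vendor_stock",
        "usd_index_futures", "wti_oil"]
stationary = pd.concat({c: make_stationary(df[c]) for c in cols}, axis=1).dropna()

# Time series regression on the stationary (differenced) data.
y = stationary["btc_close"]
X = sm.add_constant(stationary.drop(columns="btc_close"))
print(sm.OLS(y, X).fit().summary())
```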

Intelligent Web Crawler for Supporting Big Data Analysis Services (빅데이터 분석 서비스 지원을 위한 지능형 웹 크롤러)

  • Seo, Dongmin;Jung, Hanmin
    • The Journal of the Korea Contents Association / v.13 no.12 / pp.575-584 / 2013
  • The data types used for big-data analysis are very wide, including news, blogs, SNS, papers, patents, sensed data, and so on. In particular, the utilization of web documents, which offer reliable data in real time, is gradually increasing, and web crawlers that collect web documents automatically have grown in importance because big data is used in many different fields and web data grow exponentially every year. However, existing web crawlers cannot collect all the web documents in a web site, because they only follow URLs contained in documents already collected from a few sites. In addition, existing web crawlers may re-collect documents already gathered by other crawlers, because information about which documents each crawler has collected is not efficiently shared between crawlers. Therefore, this paper proposes a distributed web crawler. To resolve the problems of existing crawlers, the proposed crawler collects web documents through the RSS feeds of each web site and the Google search API, and it provides fast crawling performance through a client-server model based on RMI and NIO that minimizes network traffic. Furthermore, the crawler extracts the core content of a web document through a keyword similarity comparison on the tags included in the document. Finally, to verify the superiority of our web crawler, we compare it with existing web crawlers in various experiments.
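The sketch below illustrates only the RSS-based collection idea with duplicate avoidance. The paper's actual system is a Java client-server crawler built on RMI and NIO; the feed URL, the `feedparser`/`requests` choice, and the in-memory `seen_urls` registry here are assumptions for illustration.

```python
# Illustrative sketch only: collect documents from a site's RSS feed while
# skipping URLs that any crawler has already fetched. In a real distributed
# crawler, `seen_urls` would be a shared store rather than a local set.
import feedparser          # third-party: pip install feedparser
import requests

seen_urls: set[str] = set()

def crawl_rss(feed_url: str) -> list[tuple[str, str]]:
    """Fetch every document linked from an RSS feed, skipping duplicates."""
    documents = []
    feed = feedparser.parse(feed_url)
    for entry in feed.entries:
        url = entry.link
        if url in seen_urls:         # avoid re-collecting documents
            continue
        seen_urls.add(url)
        html = requests.get(url, timeout=10).text
        documents.append((url, html))
    return documents

if __name__ == "__main__":
    docs = crawl_rss("https://example.com/rss.xml")   # hypothetical feed
    print(f"collected {len(docs)} new documents")
```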

A study to Predictive modeling of crime using Web traffic information (웹 검색 트래픽 정보를 이용한 범죄 예측 모델링에 관한 연구)

  • Park, Jung-Min;Chung, Young-Suk;Park, Koo-Rack
    • Journal of the Korea Society of Computer and Information / v.20 no.1 / pp.93-101 / 2015
  • In modern society, various crimes occur, and predicting crime is necessary in order to prevent it; various studies on crime prediction are in progress. Crime-related data are released by the Public Prosecutor's Office as statistics once a year. However, relative to the current point in time, those statistics describe data from about two years earlier and do not reflect the crimes currently being committed. In this paper, Naver Trend data were applied to crime prediction. By using Naver Trend web search traffic, it is possible to obtain data on the current level of interest in crime. We constructed a model that predicts crime from Naver web search traffic data, applying Markov chain prediction theory. Among the various crime types, the predictive modeling targeted murder, arson, and rape, and the results of the modeling were analyzed. The predicted values were within 20% of the number of crimes that actually occurred. In the future, we plan to advance this research toward predictive modeling of crime that takes seasonal characteristics into account.
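As a rough illustration of the first-order Markov chain prediction used here, the sketch below discretizes search-traffic levels into states, estimates a transition matrix from observed transitions, and predicts the most likely next state. The state definitions and the example sequence are assumptions, not the authors' actual model or data.

```python
# Minimal Markov chain sketch (hypothetical states and data): estimate the
# transition matrix from a sequence of traffic-level states, then predict the
# most probable next state.
import numpy as np

STATES = ["low", "medium", "high"]           # hypothetical traffic levels

def transition_matrix(sequence: list[str]) -> np.ndarray:
    """Row-normalized counts of state-to-state transitions."""
    idx = {s: i for i, s in enumerate(STATES)}
    counts = np.zeros((len(STATES), len(STATES)))
    for cur, nxt in zip(sequence, sequence[1:]):
        counts[idx[cur], idx[nxt]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Guard against rows with no observed transitions.
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

observed = ["low", "low", "medium", "high", "medium", "medium", "high", "high"]
P = transition_matrix(observed)
current = STATES.index(observed[-1])
print("predicted next state:", STATES[int(P[current].argmax())])
```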

Development of Optimal Routes Guidance System based on GIS (GIS기반 최적 경로안내 시스템 개발)

  • Yoo, Hwan-Hee;Woo, Hae-In;Lee, Tae-Soo
    • Journal of Korean Society for Geospatial Information Science / v.10 no.1 s.19 / pp.59-66 / 2002
  • The rapid change of the industrial structure increases distribution costs and makes a physical distribution system urgently necessary. Traffic conditions are getting much worse, and traffic jams have increased the expense of physical distribution delivery, which accounts for 20% of distribution cost. In this situation, a shortest and most suitable path search system is required both by companies and by people who waste a lot of time moving by car or on foot. For these reasons, we developed a shortest-path search system that applies the Dijkstra algorithm, one of the most effective shortest-path algorithms, to GIS; it was constructed by considering realistic urban traffic and the physical layout of the streets. The system was also designed to update edge weight data automatically, reflecting dynamic changes in traffic conditions such as a traffic information service to be provided in real time. Finally, we designed the system to be served on the web using MapObjects IMS.
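For reference, the sketch below shows the Dijkstra core that such a routing system builds on. The actual system runs on GIS data through MapObjects IMS; the small road-network dictionary and travel-time weights here are hypothetical.

```python
# Compact Dijkstra sketch over a hypothetical weighted road network.
import heapq

def dijkstra(graph: dict[str, dict[str, float]], source: str) -> dict[str, float]:
    """Return the minimum travel cost from `source` to every reachable node."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

roads = {"A": {"B": 4, "C": 2}, "B": {"D": 5}, "C": {"B": 1, "D": 8}, "D": {}}
print(dijkstra(roads, "A"))   # {'A': 0.0, 'B': 3.0, 'C': 2.0, 'D': 8.0}
```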


Information Sharing System Based on Ontology in Wireless Internet (무선 인터넷 환경에서의 온톨로지 기반 정보 공유 시스템)

  • 노경신;유영훈;조근식
    • Proceedings of the IEEK Conference / 2003.11b / pp.133-136 / 2003
  • Due to the recent explosion of information available online, question-answering (Q&A) systems are becoming a compelling framework for finding relevant information in a variety of domains. A question-answering system is one of the best ways to introduce a novice customer to a new domain without requiring prior knowledge of its overall structure, by refining a search request into a specific answer. However, the current web poses serious problems for finding a specific answer: the same question can carry many overlapping meanings or be duplicated, and answer retrieval is slow because of increased network traffic, which wastes resources. To avoid wrong answers arising from these problems, we propose a system that uses an ontology expressed in RDF and RDFS together with a Java-based mobile agent. We choose a wireless-internet-based embedded device as the test bed and apply the system to an e-commerce information domain. The mobile agent provides agent routing with reduced network traffic, which helps minimize the elapsed time to obtain answers, and the structured ontology, based on our proposed algorithms, determines the similarity between the current question and past questions by comparing the properties of their classes.
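The abstract does not spell out the similarity computation, so the sketch below only illustrates one plausible reading of "comparing the properties of classes": matching a new question against past questions by the overlap of their ontology properties. The property names and the Jaccard measure are assumptions; the paper's actual system is a Java mobile-agent system over an RDF/RDFS ontology.

```python
# Illustrative sketch: rank past questions by how many ontology properties
# they share with the new question (Jaccard similarity). All names below are
# hypothetical.
def property_similarity(props_a: set[str], props_b: set[str]) -> float:
    """Jaccard similarity between the ontology properties of two questions."""
    if not props_a and not props_b:
        return 0.0
    return len(props_a & props_b) / len(props_a | props_b)

past_questions = {
    "q101": {"hasProduct", "hasPrice", "hasShippingTime"},
    "q102": {"hasProduct", "hasWarranty"},
}
new_question = {"hasProduct", "hasPrice"}

best = max(past_questions,
           key=lambda q: property_similarity(new_question, past_questions[q]))
print("most similar past question:", best)   # q101
```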


An Analysis on Shortest Path Search Process of Gifted Student and Normal Student in Information (정보영재학생과 일반학생의 최단경로 탐색 과정 분석)

  • Kang, Sungwoong;Kim, Kapsu
    • Journal of The Korean Association of Information Education / v.20 no.3 / pp.243-254 / 2016
  • This study developed a web-based computer assessment of the shortest-path search problem, consisting of 19 questions based on the 'TRAFFIC' items of PISA 2012. The computer has become an indispensable instrument for solving everyday problems and a medium underlying assessment, so gifted students in informatics should be able to solve problems with the computer and give commands clear enough for the computer to carry out the procedure. In addition, since computational thinking now affects every sector, students should be given new educational stimuli. The relationship between the rate of correct answers and the time taken to solve the shortest-path search problems showed a significant correlation, and as the difficulty of a question rose with the number of nodes and edges, the factor that affected problem solving turned out to be the number of nodes rather than the number of edges. The study revealed that gifted students in informatics went through algorithmic thinking while solving the shortest-path search problems, and it confirmed cognitive characteristics of these students such as 'ability streamlining' and 'information structure memory'.

Classification of Client-side Application-level HTTP Traffic (HTTP 트래픽의 클라이언트측 어플리케이션별 분류)

  • Choi, Mi-Jung;Jin, Chang-Gyu;Kim, Myung-Sup
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.11B / pp.1277-1284 / 2011
  • Today, many applications use port 80, the default port of the HTTP protocol, to avoid being blocked by firewalls. HTTP is used not only for web browsing but also by many applications, such as the search function of P2P programs, software updates, and advertisement delivery in the NateOn messenger. As HTTP traffic increases and various applications transfer data over HTTP, it is essential to identify which applications use HTTP and how they use it. To block a specific application at the firewall, traffic must be classified at the application level rather than the protocol level. This paper presents a method to classify HTTP traffic by client-side application and to group applications by the services they provide. We developed an application-level HTTP traffic classification system and verified the method by applying it to a small part of a campus network.
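A minimal sketch of application-level HTTP classification follows, assuming the classifier keys off request header fields such as User-Agent and Host; the abstract does not state which features the authors actually use, and the signature table below is hypothetical.

```python
# Illustrative sketch: label one HTTP request with a client application based
# on substrings of its headers. Signatures and header values are hypothetical.
from typing import Optional

APP_SIGNATURES = [
    # (application label, substring expected in User-Agent or Host)
    ("nateon_messenger", "nateon"),
    ("software_update",  "update"),
    ("web_browser",      "mozilla"),
]

def classify_request(headers: dict[str, str]) -> Optional[str]:
    """Return the application label for one HTTP request, or None if unknown."""
    haystack = (headers.get("User-Agent", "") + " " + headers.get("Host", "")).lower()
    for app, token in APP_SIGNATURES:
        if token in haystack:
            return app
    return None

req = {"Host": "update.example.com", "User-Agent": "ExampleUpdater/2.1"}
print(classify_request(req))    # software_update
```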

A Study on the characteristics of model house designs applying a virtual reality technique (가상현실 모델하우스 활용 특성에 관한 연구)

  • 윤재은;이준규
    • Korean Institute of Interior Design Journal / no.33 / pp.106-114 / 2002
  • Through computer simulation based on virtual reality techniques, prospective clients can experience surroundings such as the view, traffic, and convenience facilities in advance. Consumers can also reflect their own tastes in the actual apartment interior construction through their own choices of furniture arrangement, colors, and so on. The virtual model house not only replaces the function of the existing model house but can also provide services that could not easily be offered under the restrictions of real conditions. Moreover, it can present a precise view of the outside appearance and surrounding environment, as well as the inside of the house, as virtual structures on the computer. Consumers can search for this information on the web in advance, and apartments that satisfy consumers can be built. For this purpose, smooth communication among designers, server managers, and consumers must be achieved as a prerequisite for delivering the necessary information.

RDBMS Based Efficient Method for Shortest Path Searching Over Large Graphs Using K-degree Index Table (대용량 그래프에서 k-차수 인덱스 테이블을 이용한 RDBMS 기반의 효율적인 최단 경로 탐색 기법)

  • Hong, Jihye;Han, Yongkoo;Lee, Young-Koo
    • KIPS Transactions on Software and Data Engineering / v.3 no.5 / pp.179-186 / 2014
  • Current networks such as social networks, web page links, and traffic networks are big data with large numbers of nodes and edges, and many applications such as social network services and navigation systems use them. Since big networks do not fit into memory, existing in-memory analysis techniques cannot provide high performance. The Frontier-Expansion-Merge (FEM) framework performs graph search operations using three corresponding operators in the relational database (RDB) context. FEM exploits an index table that stores pre-computed partial paths for efficient shortest path discovery. However, the index table of FEM has a low hit ratio, because the indexed nodes are chosen by distance rather than by the likelihood that they lie on a shortest path. In this paper, we propose a method that constructs the index table from high-degree nodes, which have a high hit ratio, for efficient shortest path discovery. We experimentally verify that our index technique supports shortest path discovery efficiently on real-world datasets.
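To make the indexing idea concrete, the sketch below selects the k highest-degree nodes and pre-computes shortest-path distances from them, so a search that reaches an indexed node can be completed from stored entries. This is a simplified stand-in, not the FEM implementation, which runs inside an RDBMS over relational tables; the toy graph is hypothetical.

```python
# Simplified sketch of a k-degree index: pick the k highest-degree nodes and
# pre-compute unweighted shortest-path distances from each of them.
from collections import deque

def bfs_distances(graph: dict[int, list[int]], source: int) -> dict[int, int]:
    """Unweighted shortest-path distances from `source` to all reachable nodes."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def build_degree_index(graph: dict[int, list[int]], k: int) -> dict[int, dict[int, int]]:
    """Index table keyed by the k highest-degree nodes."""
    hubs = sorted(graph, key=lambda n: len(graph[n]), reverse=True)[:k]
    return {h: bfs_distances(graph, h) for h in hubs}

graph = {1: [2, 3, 4], 2: [1, 3], 3: [1, 2, 4, 5], 4: [1, 3], 5: [3]}
index = build_degree_index(graph, k=2)       # indexes nodes 3 and 1
print(index[3][5])                           # pre-computed distance from hub 3 to node 5: 1
```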

An Unified Spatial Index and Visualization Method for the Trajectory and Grid Queries in Internet of Things

  • Han, Jinju;Na, Chul-Won;Lee, Dahee;Lee, Do-Hoon;On, Byung-Won;Lee, Ryong;Park, Min-Woo;Lee, Sang-Hwan
    • Journal of the Korea Society of Computer and Information / v.24 no.9 / pp.83-95 / 2019
  • Recently, a variety of IoT data has been collected by attaching geosensors to many vehicles on the road. IoT data basically carries time and space information and comprises various measurements such as temperature, humidity, fine dust, and CO2. Although particular sensor readings can be retrieved using time, latitude, and longitude, which are the keys of IoT data, advanced search engines for IoT data that handle high-level user queries are still limited. There is also the problem of searching large amounts of IoT data without indexes, which wastes a great deal of time on sequential scans. In this paper, we propose a unified spatial index model that handles both grid and trajectory queries using a cell-based space-filling-curve method, and we present a visualization method that helps users grasp the results intuitively. A trajectory query aggregates the traffic over the trajectory cells passed by taxis on a road specified by the user; a grid query finds the cells on a road specified by the user and aggregates the fine dust measurements. Based on the generated spatial index, the user interface quickly summarizes trajectory and grid queries for a specific road or for all roads, and we propose a web-based prototype system that supports intuitive analysis through road and heat map visualizations.
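The sketch below illustrates the cell-based space-filling-curve idea: map each coordinate to a grid cell and interleave the cell coordinates into a single key, so trajectory points and grid measurements can both be grouped and aggregated by cell ID. The choice of a Z-order (Morton) curve, the 16-bit resolution, and the sample points are assumptions; the paper does not state which curve or resolution it uses.

```python
# Minimal sketch: Z-order (Morton) cell IDs from latitude/longitude, then
# per-cell aggregation (e.g., average fine dust for a grid query, or point
# counts for a trajectory query). Resolution and data are hypothetical.
def to_cell(lat: float, lon: float, bits: int = 16) -> int:
    """Return the Morton (Z-order) cell ID for a WGS84 coordinate."""
    # Scale lat/lon into [0, 2^bits) grid coordinates.
    y = int((lat + 90.0) / 180.0 * ((1 << bits) - 1))
    x = int((lon + 180.0) / 360.0 * ((1 << bits) - 1))
    key = 0
    for i in range(bits):                      # interleave x and y bits
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

points = [(35.94, 126.95, 41.0), (35.94, 126.96, 38.5), (36.01, 127.10, 55.0)]
dust_by_cell: dict[int, list[float]] = {}
for lat, lon, pm10 in points:
    dust_by_cell.setdefault(to_cell(lat, lon), []).append(pm10)
print({cell: sum(v) / len(v) for cell, v in dust_by_cell.items()})
```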