• Title/Summary/Keyword: Web of Data


IFC-based Data Structure Design for Web Visualization (IFC 기반 웹 가시화를 위한 데이터 구조 설계)

  • Lee, Daejin;Choi, Wonik
    • Journal of KIISE / v.44 no.3 / pp.332-337 / 2017
  • When IFC data are expressed as a STEP schema based on the EXPRESS language, it is not easy for collaborating project stakeholders to share BIM modeling shape information. An IFC viewer application must be installed on a desktop PC to review the BIM shape information defined in the IFC file, because the viewer must not only parse the STEP structure information model but also construct the 3D features for visualization. We therefore propose a lightweight data structure design for web visualization that parses the IFC data and builds the 3D modeling data in advance. Our experimental results show that the weight reduction of the IFC data is about 40% of the original file size, and that the web visualization renders with the same quality in all WebGL-capable web browsers on PCs and smartphones. If further research is conducted on web visualization based on the IFC data of the final construction phase, the approach could be utilized in various fields ranging from facility maintenance to indoor location-based services.
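
As a rough illustration of what such a lightweight web structure might look like, the sketch below defines a hypothetical JSON-serializable mesh payload; the interface names and fields are assumptions for illustration, not the paper's actual format. The idea is that IFC geometry is pre-tessellated on the server, so the browser receives only flat vertex/index buffers it can upload directly to WebGL.

```typescript
// Minimal sketch of a lightweight, WebGL-friendly payload built from parsed IFC
// geometry. All names here (LightweightMesh, toWebGLBuffers) are illustrative
// assumptions, not the data structure defined in the paper.

interface LightweightMesh {
  id: string;            // IFC GlobalId of the product the mesh came from
  ifcType: string;       // e.g. "IfcWall", "IfcSlab"
  positions: number[];   // flat [x0, y0, z0, x1, ...] vertex coordinates
  indices: number[];     // triangle indices into `positions`
  color: [number, number, number, number]; // RGBA in 0..1
}

interface LightweightModel {
  unit: "m" | "mm";
  meshes: LightweightMesh[];
}

// Convert one mesh into typed arrays ready for gl.bufferData().
function toWebGLBuffers(mesh: LightweightMesh): {
  positions: Float32Array;
  indices: Uint32Array;
} {
  return {
    positions: new Float32Array(mesh.positions),
    indices: new Uint32Array(mesh.indices),
  };
}
```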

Sparse Data Cleaning using Multiple Imputations

  • Jun, Sung-Hae;Lee, Seung-Joo;Oh, Kyung-Whan
    • International Journal of Fuzzy Logic and Intelligent Systems / v.4 no.1 / pp.119-124 / 2004
  • Real data such as web log files tend to be incomplete, yet we have to extract useful knowledge from them to make optimal decisions. Web log data contain many useful things, such as hyperlink information and the web usage of connected users. However, the size of web data is too huge for effective knowledge discovery, and to make matters worse, the data are very sparse. We overcome this sparseness problem using the Markov Chain Monte Carlo (MCMC) method for multiple imputation. This missing-value imputation turns sparse web data into complete data. Our study can therefore serve as a useful tool for discovering knowledge from sparse data sets; the sparser the data, the better the MCMC imputation performs. We verified our work through experiments on data from the UCI machine learning repository.
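
A simplified sketch of the multiple-imputation idea is shown below. It fills each missing entry by drawing from a normal distribution fitted to the observed values of its column, and repeats this m times to produce several completed data sets. This is only a stand-in for the paper's MCMC-based imputation, intended to show the shape of the procedure, not to reproduce it.

```typescript
// Simplified multiple imputation: draw each missing value from a normal
// distribution fitted to the observed values of its column, repeated m times.
// (A stand-in for MCMC-based imputation, used here only to show the idea.)

type Cell = number | null; // null marks a missing value

function sampleNormal(mean: number, sd: number): number {
  // Box-Muller transform
  const u1 = 1 - Math.random();
  const u2 = Math.random();
  return mean + sd * Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
}

function imputeOnce(data: Cell[][]): number[][] {
  const cols = data[0].length;
  const completed = data.map((row) => row.slice());
  for (let c = 0; c < cols; c++) {
    const observed = data.map((r) => r[c]).filter((v): v is number => v !== null);
    const mean = observed.reduce((a, b) => a + b, 0) / observed.length;
    const sd = Math.sqrt(
      observed.reduce((a, b) => a + (b - mean) ** 2, 0) / observed.length
    );
    for (const row of completed) {
      if (row[c] === null) row[c] = sampleNormal(mean, sd);
    }
  }
  return completed as number[][];
}

// Produce m completed copies; downstream analyses are run on each and pooled.
function multipleImputation(data: Cell[][], m = 5): number[][][] {
  return Array.from({ length: m }, () => imputeOnce(data));
}
```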

Implementation of Search Engine to Minimize Traffic Using Blockchain-Based Web Usage History Management System

  • Yu, Sunghyun;Yeom, Cheolmin;Won, Yoojae
    • Journal of Information Processing Systems / v.17 no.5 / pp.989-1003 / 2021
  • With the recent increase in the types of services provided by Internet companies, collection of various types of data has become a necessity. Data collectors corresponding to web services profit by collecting users' data indiscriminately and providing it to the associated services. However, the data provider remains unaware of the manner in which the data are collected and used. Furthermore, the data collector of a web service consumes web resources by generating a large amount of web traffic. This traffic can damage servers by causing service outages. In this study, we propose a website search engine that employs a system that controls user information using blockchains and builds its database based on the recorded information. The system is divided into three parts: a collection section that uses proxy, a management section that uses blockchains, and a search engine that uses a built-in database. This structure allows data sovereigns to manage their data more transparently. Search engines that use blockchains do not use internet bots, and instead use the data generated by user behavior. This avoids generation of traffic from internet bots and can, thereby, contribute to creating a better web ecosystem.
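
To illustrate the tamper-evidence idea behind recording usage history on a chain, the sketch below appends web-usage events to a simple hash chain using Node's crypto module. The record fields and the chain design are assumptions made for this example; the paper's actual blockchain structure is not reproduced here.

```typescript
// Minimal sketch of recording web-usage events as a hash chain, to illustrate
// tamper evidence. The record fields and chain design are assumptions; the
// paper's actual blockchain structure is not reproduced here.
import { createHash } from "crypto";

interface UsageRecord {
  index: number;
  timestamp: number;
  url: string;          // page the user visited (collected via the proxy)
  userId: string;       // pseudonymous identifier of the data sovereign
  prevHash: string;     // hash of the previous record
  hash: string;         // hash of this record's contents
}

function hashRecord(r: Omit<UsageRecord, "hash">): string {
  return createHash("sha256")
    .update(`${r.index}|${r.timestamp}|${r.url}|${r.userId}|${r.prevHash}`)
    .digest("hex");
}

function appendRecord(chain: UsageRecord[], url: string, userId: string): UsageRecord {
  const prev = chain[chain.length - 1];
  const base = {
    index: chain.length,
    timestamp: Date.now(),
    url,
    userId,
    prevHash: prev ? prev.hash : "0".repeat(64),
  };
  const record = { ...base, hash: hashRecord(base) };
  chain.push(record);
  return record;
}
```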

Web Content Loading Speed Enhancement Method using Service Worker-based Caching System (서비스워커 기반의 캐싱 시스템을 이용한 웹 콘텐츠 로딩 속도 향상 기법)

  • Kim, Hyun-gook;Park, Jin-tae;Choi, Moon-Hyuk;Moon, Il-young
    • Journal of Advanced Navigation Technology / v.23 no.1 / pp.55-60 / 2019
  • The web is one of the technologies most closely woven into people's daily lives, and much of that time is spent sharing data on the web. Messaging, news, and video, as well as many other kinds of data, now spread through the web. In addition, with the emergence of WebAssembly, programs that used to run in native environments are entering the domain of the web, and the data shared over the web is growing wider and larger with VR/AR content and big data. In this paper, we therefore study how to deliver web content effectively to users of a web service by using a service worker, which runs in the background independently of the page, together with the Cache API, which can store data efficiently in the web browser.
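
A minimal cache-first service worker along these lines might look like the sketch below; the cache name and pre-cached URLs are placeholders, not the paper's configuration. On install it pre-caches core assets, and on fetch it serves from the Cache API before falling back to the network.

```typescript
/// <reference lib="webworker" />
// Minimal cache-first service worker sketch. The cache name and asset list are
// placeholders; the paper's actual caching policy is not reproduced here.
declare const self: ServiceWorkerGlobalScope;

const CACHE_NAME = "content-cache-v1";
const PRECACHE_URLS = ["/", "/index.html", "/app.js", "/styles.css"];

self.addEventListener("install", (event: ExtendableEvent) => {
  // Pre-cache core assets so repeat visits load without network round trips.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(PRECACHE_URLS))
  );
});

self.addEventListener("fetch", (event: FetchEvent) => {
  // Cache-first: answer from the Cache API, fall back to the network,
  // and store successful network responses for next time.
  event.respondWith(
    caches.match(event.request).then((cached) => {
      if (cached) return cached;
      return fetch(event.request).then((response) => {
        const copy = response.clone();
        caches.open(CACHE_NAME).then((cache) => cache.put(event.request, copy));
        return response;
      });
    })
  );
});
```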

Design and Implementation of Web Crawler utilizing Unstructured data

  • Tanvir, Ahmed Md.;Chung, Mokdong
    • Journal of Korea Multimedia Society / v.22 no.3 / pp.374-385 / 2019
  • A web crawler is a program commonly used by search engines to discover new content on the internet, and the use of crawlers has made the web easier for users. In this paper, we structure unstructured data in order to collect data from web pages. Our system can choose words near a given keyword across multiple documents in an unstructured way; neighbor data for the keyword are collected through word2vec. Filtering is applied at the data acquisition level and supports a large taxonomy. The main problem in text taxonomy is how to improve classification accuracy. To improve it, we propose a new TF-IDF weighting method, modifying the TF algorithm to calculate accuracy on unstructured data. Finally, we propose an efficient web page crawling algorithm, derived from TF-IDF and an RL web search algorithm, to enhance the efficiency of retrieving relevant information. This paper thus examines how crawlers and crawling algorithms work in search engines for efficient information retrieval.
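
For reference, the sketch below computes standard TF-IDF weights over tokenized documents. The paper's modified TF weighting is not specified in the abstract, so this shows only the conventional baseline it starts from.

```typescript
// Baseline TF-IDF over tokenized documents. The paper modifies the TF term;
// that modification is not described in the abstract, so only the standard
// formulation is shown here.

function termFrequencies(doc: string[]): Map<string, number> {
  const tf = new Map<string, number>();
  for (const term of doc) tf.set(term, (tf.get(term) ?? 0) + 1);
  for (const [term, count] of tf) tf.set(term, count / doc.length);
  return tf;
}

function inverseDocumentFrequency(docs: string[][]): Map<string, number> {
  const df = new Map<string, number>();
  for (const doc of docs) {
    for (const term of new Set(doc)) df.set(term, (df.get(term) ?? 0) + 1);
  }
  const idf = new Map<string, number>();
  for (const [term, count] of df) idf.set(term, Math.log(docs.length / count));
  return idf;
}

// TF-IDF weight of every term in every document: weight = tf(t, d) * idf(t).
function tfIdf(docs: string[][]): Map<string, number>[] {
  const idf = inverseDocumentFrequency(docs);
  return docs.map((doc) => {
    const weights = new Map<string, number>();
    for (const [term, tf] of termFrequencies(doc)) {
      weights.set(term, tf * (idf.get(term) ?? 0));
    }
    return weights;
  });
}
```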

An SNS and Web based BDAS design for On-Line Marketing Strategy (온라인 마케팅 전략을 위한 SNS와 Web기반 BDAS(Big data Data Analysis Scheme) 설계)

  • Jeong, Yi-Na;Lee, Byung-Kwan;Park, Seok-Gyu
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.1 / pp.141-148 / 2015
  • This paper proposes the BDAS (Big Data Analysis Scheme), which extracts information shared in real time on SNS and the Web, analyzes the extracted data rapidly for customers, and supports efficient online marketing strategy. First, the BDAS collects the data shared on SNS and the Web. Second, it classifies the semantics of the collected data as positive or negative and visualizes the result. Because the BDAS achieves an average accuracy of 90% in judging the semantics of the shared SNS and Web data, it can judge customers' propensities accurately and be used efficiently for online marketing strategy.
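
A toy illustration of classifying collected posts as positive or negative is sketched below; the word lists and the thresholding rule are placeholder assumptions, not the semantic analysis used by BDAS.

```typescript
// Toy lexicon-based polarity scoring for collected SNS/Web posts. The word
// lists and the thresholding rule are placeholder assumptions, not BDAS's
// actual semantic analysis.

const POSITIVE = new Set(["good", "great", "love", "excellent", "recommend"]);
const NEGATIVE = new Set(["bad", "terrible", "hate", "poor", "disappointed"]);

type Polarity = "positive" | "negative" | "neutral";

function classifyPost(text: string): Polarity {
  const tokens = text.toLowerCase().split(/\W+/).filter(Boolean);
  let score = 0;
  for (const t of tokens) {
    if (POSITIVE.has(t)) score += 1;
    if (NEGATIVE.has(t)) score -= 1;
  }
  if (score > 0) return "positive";
  if (score < 0) return "negative";
  return "neutral";
}

// Aggregate polarity counts over a batch of collected posts for visualization.
function summarize(posts: string[]): Record<Polarity, number> {
  const summary: Record<Polarity, number> = { positive: 0, negative: 0, neutral: 0 };
  for (const p of posts) summary[classifyPost(p)] += 1;
  return summary;
}
```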

Operating Simulation of RPS using DEVS W/S in Web Service Environment

  • Cho, Kyu-Cheol
    • Journal of the Korea Society of Computer and Information / v.21 no.12 / pp.107-114 / 2016
  • Web systems support high-performance processing for big-data analysis and make it practical to produce various kinds of information using IT resources. The government started the RPS (Renewable Portfolio Standard) system in 2012 to promote electricity production using renewable energy equipment. The government operates the system on big data gathered from various related information systems, and the system users are geographically distributed. Companies obligated under the system can purchase RECs (Renewable Energy Certificates) from other electricity-generating sellers to meet their mandated volumes. The REC market operates as a single-auction market in which users trade at competitive prices, but prices vary widely with the users' trading strategies and the sellers' situations. This paper proposes modeling and simulation of the RPS system in a web environment, modeled as a geographically distributed computing environment for web users with DEVS W/S. The web-service-based simulation system helps analyze the correlations and variables that affect trading price and volume in the RPS big data, and this analysis can be used to forecast REC prices.
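
As a rough illustration of a single-auction market of the kind being simulated, the sketch below clears one round by matching the highest bids against the lowest asks. The agent behavior, order fields, and midpoint price rule are invented for illustration, not taken from the paper's DEVS models.

```typescript
// Toy single-round auction clearing for a REC-like market: sort buy bids high
// to low and sell offers low to high, then match until prices no longer cross.
// The price rule (midpoint) and order fields are illustrative assumptions.

interface Order { trader: string; price: number; quantity: number; }
interface Trade { buyer: string; seller: string; price: number; quantity: number; }

function clearAuction(bids: Order[], asks: Order[]): Trade[] {
  const buy = bids.map((o) => ({ ...o })).sort((a, b) => b.price - a.price);
  const sell = asks.map((o) => ({ ...o })).sort((a, b) => a.price - b.price);
  const trades: Trade[] = [];
  let i = 0;
  let j = 0;
  while (i < buy.length && j < sell.length && buy[i].price >= sell[j].price) {
    const quantity = Math.min(buy[i].quantity, sell[j].quantity);
    trades.push({
      buyer: buy[i].trader,
      seller: sell[j].trader,
      price: (buy[i].price + sell[j].price) / 2, // simple midpoint pricing
      quantity,
    });
    buy[i].quantity -= quantity;
    sell[j].quantity -= quantity;
    if (buy[i].quantity === 0) i++;
    if (sell[j].quantity === 0) j++;
  }
  return trades;
}
```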

A New Approach to Web Data Mining Based on Cloud Computing

  • Zhu, Wenzheng;Lee, Changhoon
    • Journal of Computing Science and Engineering / v.8 no.4 / pp.181-186 / 2014
  • Web data mining aims at discovering useful knowledge from various Web resources. There is a growing trend among companies, organizations, and individuals alike to gather information through Web data mining and to use that information in their best interest. Broadly speaking, cloud computing is a synonym for distributed computing over a network: it relies on the sharing of resources to achieve coherence and economies of scale, similar to a utility delivered over a network, and it allows a program or application to run on many connected computers at the same time. In this paper, we propose a new system framework based on the Hadoop platform to collect useful information from Web resources. The framework is based on the Map/Reduce programming model of cloud computing, and we propose a new data mining algorithm to be used within it. Finally, we demonstrate the feasibility of this approach through a simulation experiment.
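
The general shape of the Map/Reduce model the framework relies on is sketched below as a term count over fetched page texts, written in plain TypeScript. This is not Hadoop code and not the paper's mining algorithm; in the real system the map and reduce steps would be expressed as Hadoop jobs.

```typescript
// Shape of the Map/Reduce model, sketched in plain TypeScript as a term count
// over fetched page texts. Not Hadoop code and not the paper's algorithm.

type KeyValue = [string, number];

// map: emit (term, 1) for every term in one document.
function map(docId: string, text: string): KeyValue[] {
  return text
    .toLowerCase()
    .split(/\W+/)
    .filter(Boolean)
    .map((term): KeyValue => [term, 1]);
}

// shuffle: group intermediate values by key.
function shuffle(pairs: KeyValue[]): Map<string, number[]> {
  const groups = new Map<string, number[]>();
  for (const [key, value] of pairs) {
    const list = groups.get(key) ?? [];
    list.push(value);
    groups.set(key, list);
  }
  return groups;
}

// reduce: sum the counts for each term.
function reduce(key: string, values: number[]): KeyValue {
  return [key, values.reduce((a, b) => a + b, 0)];
}

function mapReduce(docs: Record<string, string>): KeyValue[] {
  const intermediate = Object.entries(docs).flatMap(([id, text]) => map(id, text));
  const grouped = shuffle(intermediate);
  return [...grouped.entries()].map(([k, v]) => reduce(k, v));
}
```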

Blockchain for the Trustworthy Decentralized Web Architecture

  • Kim, Geun-Hyung
    • International Journal of Internet, Broadcasting and Communication / v.13 no.1 / pp.26-36 / 2021
  • The Internet was created as a decentralized and autonomous system of interconnected computer networks used for data exchange among mutually trusting participants. Its element technologies, such as inter-domain and intra-domain routing and DNS, operate in a distributed manner. As the Web has developed, it has become indispensable in daily life: existing web applications allow us to form online communities, generate private information, access big data, shop online, pay bills, post photos or videos, and even order groceries. This is what has led to the centralization of the Web. That centralization is now controlled by the giant social media platforms that provide it as a service, but the original Internet was not like this. These companies realized that the decentralized network's huge value lies in gathering, organizing, and monetizing information through centralized web applications. Centralized web applications have brought major issues that will likely worsen in the near future. This study focuses on these problems and investigates blockchain's potential for a decentralized web architecture capable of improving critical features of conventional web services, including autonomous, robust, and secure decentralized processing and traceable trustworthiness of tamper-proof transactions. Finally, we review decentralized web architectures that circumvent the main Internet gatekeepers and take control of our data back from the giant social media companies.

Development of a STEP-compliant Web RPD Environment (STEP표준과 Web을 이용한 RPD환경 구축)

  • 강석호;김민수;김영호
    • Korean Journal of Computational Design and Engineering / v.5 no.1 / pp.23-32 / 2000
  • In this paper, we present a Web-enabled product data sharing system that supports the RPD (Rapid Product Development) process by combining STEP (STandard for the Exchange of Product model data) with Web technologies such as VRML (Virtual Reality Modeling Language), SGML (Standard Generalized Markup Language), and Java. Intense competition shortens product life cycles by constantly replacing current products with brand-new ones, and thus urges enterprises to develop new products faster than ever. In this environment, an RPD process backed by an effective product data sharing system is essential to outpace competitors by speeding up the development process. However, the diversity of product data schemas and heterogeneous systems makes it difficult to exchange product data. We chose STEP as a neutral product data schema and the Web as a platform-independent exchange environment to overcome these problems. In implementing our system, we focused on supporting STEP AP 203 UoF (Units of Functionality) views in order to employ STEP data models efficiently, since they are maximally normalized and therefore very cumbersome to handle. Our functionality-oriented UoF view approach facilitates the modular use of STEP data models, which improves users' appreciation of the data and can also enhance the accuracy of product data. We demonstrate that our view approach is applicable to the configuration control of mechanical assemblies.
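
To make the idea of a UoF view concrete, the sketch below filters a flat collection of STEP entity instances down to a small functional view. The entity names and the view definition are simplified illustrations; AP 203's actual UoF definitions are considerably richer.

```typescript
// Illustrative UoF-style view: select only the entity instances relevant to
// one unit of functionality (here, assembly structure) from a flat STEP model.
// Entity names and the view definition are simplified, not the AP 203 schema.

interface StepInstance {
  id: string;       // e.g. "#120"
  type: string;     // e.g. "PRODUCT", "NEXT_ASSEMBLY_USAGE_OCCURRENCE"
  attrs: Record<string, string>;
}

// Entities that (loosely) make up an assembly-structure view.
const ASSEMBLY_VIEW_TYPES = new Set([
  "PRODUCT",
  "PRODUCT_DEFINITION",
  "NEXT_ASSEMBLY_USAGE_OCCURRENCE",
]);

function assemblyStructureView(model: StepInstance[]): StepInstance[] {
  return model.filter((inst) => ASSEMBLY_VIEW_TYPES.has(inst.type));
}
```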
