• Title/Summary/Keyword: Web Crawler System


A Design of Web History Archive System Using RCS in Large Scale Web (대용량 웹에서 RCS를 이용한 웹 히스토리 저장 시스템 설계)

  • 이무훈;이민희;조성훈;장창복;김동혁;최의인
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2004.04b
    • /
    • pp.211-213
    • /
    • 2004
  • With the rapid growth of the Web, web information is widely used free of temporal and spatial constraints. However, if previously useful information is deleted at some point, it can no longer be used. To solve this problem, research on web archive systems and techniques for storing deleted web information more efficiently has been proposed. Existing techniques, however, focus only on storing web information and do not consider the efficiency and limits of storage space. This paper therefore designs a web history archive system, based on WebBase, that can efficiently store and retrieve web information as it is updated in the repository. The proposed technique reduces the overhead of collecting web information by using WebBase instead of a separate crawler, and it stores deleted web information systematically and efficiently through RCS so that important web information can be shared.

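
The entry above describes keeping superseded web pages as RCS revisions rather than as full copies. A minimal sketch of that idea, built on Python's `difflib` deltas rather than actual RCS (and ignoring the paper's WebBase integration), keeps the latest version in full and older versions as deltas. Note that `Differ` deltas retain both sides, so this illustrates the version-store interface rather than RCS's real space savings:

```python
import difflib

class HistoryStore:
    """Keep the latest version of a page in full; archive superseded versions as deltas."""
    def __init__(self):
        self.latest = {}   # url -> list of lines (current version)
        self.deltas = {}   # url -> list of Differ deltas, oldest change first

    def put(self, url, text):
        new = text.splitlines(keepends=True)
        if url in self.latest:
            # record a delta from which the superseded version can be rebuilt
            delta = list(difflib.Differ().compare(self.latest[url], new))
            self.deltas.setdefault(url, []).append(delta)
        self.latest[url] = new

    def get(self, url, version=-1):
        """version=-1 is the current page; 0 is the oldest archived one."""
        if version == -1:
            return "".join(self.latest[url])
        # restore side 1 of the delta, i.e. the lines of the older version
        return "".join(difflib.restore(self.deltas[url][version], 1))
```

`get(url)` serves the live page while `get(url, 0)` recovers the first archived revision, mirroring how an archive system would answer history queries.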

Design of a Movie Recommendation System Using Social Network Keyword Analysis (소셜 네트워크 키워드 분석을 통한 영화 추천 시스템 설계)

  • Yang, Xi-tong;Lee, Jong-Won;Chu, Xun;Pyoun, Do-Kil;Jung, Hoe-Kyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2014.10a
    • /
    • pp.609-611
    • /
    • 2014
  • Web services have developed with the spread of IT technologies and smart devices. In particular, social network services, unlike existing web services, let users communicate freely without distinguishing between the production and consumption of information, and they strengthen information-sharing relationships between existing and new acquaintances. In this paper, keywords are collected and analyzed from the communication and information sharing of social network service users, and a movie recommendation system based on the relevant keywords is designed.


Improving the quality of Search engine by using the Intelligent agent technology

  • Nguyen, Ha-Nam;Choi, Gyoo-Seok;Park, Jong-Jin;Chi, Sung-Do
    • Journal of the Korea Computer Industry Society
    • /
    • v.4 no.12
    • /
    • pp.1093-1102
    • /
    • 2003
  • The dynamic nature of the World Wide Web challenges search engines to find relevant and recent pages. Obtaining important pages quickly can be very useful when a crawler cannot visit the entire Web in a reasonable amount of time. In this paper we study the order in which a spider should visit URLs so as to obtain more "important" pages first. We define and apply several metrics and a ranking formula for improving crawling results. A comparison between our results and the breadth-first search (BFS) method shows the efficiency of our experimental system.

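
The paper above compares importance-ordered crawling against BFS. A hedged sketch of the general idea follows; the concrete metrics and ranking formula are the paper's own, so a simple in-link count stands in as the importance score here, and priorities are fixed at push time rather than re-ranked:

```python
import heapq

def crawl_by_importance(links, seed, limit):
    """Visit pages with the highest estimated importance first.

    links: dict url -> list of out-link urls (a stub for real page fetching).
    Importance is approximated by the in-link count observed so far."""
    inlinks = {}                 # url -> in-links seen during the crawl
    frontier = [(0, seed)]       # min-heap keyed on (-importance, url)
    seen, order = {seed}, []
    while frontier and len(order) < limit:
        _, url = heapq.heappop(frontier)
        order.append(url)
        for out in links.get(url, []):
            inlinks[out] = inlinks.get(out, 0) + 1
            if out not in seen:
                seen.add(out)
                heapq.heappush(frontier, (-inlinks[out], out))
    return order
```

With a real fetcher in place of the `links` dict, the same loop lets a crawler reach high-value pages before its time budget runs out.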

System Design for Collecting Real-Time Product Information Using RSS (RSS를 이용한 실시간 상품정보 수집시스템의 설계)

  • Chuluun, Munkhzaya;Ko, Sun-Woo
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.35 no.1
    • /
    • pp.1-9
    • /
    • 2012
  • It is well known that internet shoppers are very sensitive to sale prices. They visit various shopping malls and collect product information, including purchase conditions, for purchase decision-making. Recently, the need for information support has grown because of the increasing amount of necessary information and the complexity of the purchase decision-making process. Comparison shopping agent systems provide price-comparison information collected from various shopping malls to satisfy internet shoppers' craving for information, but frequent price changes caused by keen price competition have become the primary cause of declining information quality among price-comparison sites. RSS, a family of web feed formats used to publish frequently updated content, is now applied even in online shopping malls. This paper develops an RSS product information collection system to obtain real-time product information. The proposed system consists of (1) a web crawler module that automatically searches for RSS-feed shopping malls, (2) an RSS reader module that parses product information from RSS feed files, (3) a product DB, and (4) a product searching module. The performance of the proposed system, measured as the volume of product information collected per unit time, is higher than that of comparison shopping agent systems.
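
The RSS reader module described above parses product data out of feed items. A minimal sketch with Python's standard `xml.etree` is shown below; the `price` element is a hypothetical per-mall extension, since real shopping-mall feeds define their own fields:

```python
import xml.etree.ElementTree as ET

def parse_product_feed(rss_xml):
    """Extract (title, link, price) tuples from an RSS 2.0 feed string."""
    root = ET.fromstring(rss_xml)
    products = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        price = item.findtext("price", default="")  # hypothetical field
        products.append((title, link, price))
    return products
```

A polling loop over known feed URLs would call this parser and upsert the tuples into the product DB, giving near-real-time price data without re-crawling full pages.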

KONG-DB: Korean Novel Geo-name DB & Search and Visualization System Using Dictionary from the Web (KONG-DB: 웹 상의 어휘 사전을 활용한 한국 소설 지명 DB, 검색 및 시각화 시스템)

  • Park, Sung Hee
    • Journal of the Korean Society for information Management
    • /
    • v.33 no.3
    • /
    • pp.321-343
    • /
    • 2016
  • This study aimed to design a semi-automatic web-based pilot system that 1) builds a Korean novel geo-name database, 2) updates the database using automatic geo-name extraction to keep it scalable, and 3) retrieves and visualizes the usage of old geo-names on a map. In particular, extracting novel geo-names that are now obsolete is a difficult problem because obtaining a corpus for the training dataset is a burden. To build such a corpus, an admin tool, an HTML crawler and parser written in Python, crawled geo-names and their usages from a vocabulary dictionary for the Korean New Novel, enough to train a named entity tagger that can extract even novel geo-names not present in the training corpus. By means of the training corpus and the automatic extraction tool, the geo-name database was made scalable. In addition, the system can visualize geo-names on the map. The study also designed and implemented the prototype and empirically verified the validity of the pilot system. Lastly, items to be improved are addressed.
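
The admin tool above crawls geo-name entries from a web dictionary with an HTML parser written in Python. A sketch of the parsing half using the standard `html.parser` module follows; the `span class="geoname"` markup is an assumption for illustration, as the actual dictionary pages are laid out differently:

```python
from html.parser import HTMLParser

class GeoNameParser(HTMLParser):
    """Collect the text of elements marked class="geoname" (assumed markup)."""
    def __init__(self):
        super().__init__()
        self.in_name = False
        self.names = []

    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("class", "geoname") in attrs:
            self.in_name = True

    def handle_data(self, data):
        if self.in_name:
            self.names.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "span":
            self.in_name = False

def extract_geonames(html):
    parser = GeoNameParser()
    parser.feed(html)
    return parser.names
```

The extracted name/usage pairs would then feed the named-entity-tagger training corpus the abstract describes.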

Deep Learning Frameworks for Cervical Mobilization Based on Website Images

  • Choi, Wansuk;Heo, Seoyoon
    • Journal of International Academy of Physical Therapy Research
    • /
    • v.12 no.1
    • /
    • pp.2261-2266
    • /
    • 2021
  • Background: Deep learning research on website medical images has been actively conducted in health care; however, few articles address the musculoskeletal system, and deep learning-based studies classifying orthopedic manual therapy images are only beginning to appear. Objectives: To create a deep learning model that categorizes cervical mobilization images and to establish a web application to assess its clinical utility. Design: Research and development. Methods: Three types of cervical mobilization images (central posteroanterior (CPA) mobilization, unilateral posteroanterior (UPA) mobilization, and anteroposterior (AP) mobilization) were obtained using the 'Download All Images' function and a web crawler. Unnecessary images were filtered with 'Auslogics Duplicate File Finder' to obtain the final 144 data (CPA=62, UPA=46, AP=36). Training on the 3 classes was conducted in Teachable Machine. Next, the trained model source was uploaded to a web application cloud integrated development environment (https://ide.goorm.io/) and the frame was built. The trained model was tested in three environments: Teachable Machine File Upload (TMFU), Teachable Machine Webcam (TMW), and Web Service Webcam (WSW). Results: Across the three environments (TMFU, TMW, WSW), the accuracy on CPA mobilization images was 81-96%. The accuracy on UPA mobilization images was 43-94%, with greater deviation than CPA. The accuracy on AP mobilization images was 65-75%, with little deviation compared to the other groups. Across the three environments, the average accuracy for CPA was 92%, while the accuracies for UPA and AP were similar at up to 70%. Conclusion: This study suggests that training images of orthopedic manual therapy using open machine learning software is possible, and that web applications made with this trained model can be used clinically.
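
The dataset above was cleaned with a duplicate-file finder before training. That step can be sketched generically with content hashing; this is a common technique, not the cited tool's actual algorithm:

```python
import hashlib

def dedupe(files):
    """files: iterable of (name, bytes) pairs from a crawl.
    Keep only the first file of each distinct content."""
    seen, kept = set(), []
    for name, data in files:
        digest = hashlib.sha256(data).hexdigest()  # identical bytes -> identical hash
        if digest not in seen:
            seen.add(digest)
            kept.append(name)
    return kept
```

Hashing catches byte-identical downloads (the common case when a crawler hits the same image under two URLs); near-duplicate detection would need perceptual hashing instead.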

Developing and Pre-Processing a Dataset using a Rhetorical Relation to Build a Question-Answering System based on an Unsupervised Learning Approach

  • Dutta, Ashit Kumar;Wahab sait, Abdul Rahaman;Keshta, Ismail Mohamed;Elhalles, Abheer
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.11
    • /
    • pp.199-206
    • /
    • 2021
  • Rhetorical relations between two text fragments are essential information that supports natural language processing applications such as question-answering (QA) systems and automatic text summarization. A QA system enables users to retrieve a meaningful response, and rhetorical-relation-based datasets are in demand for developing systems that interpret and respond to user requests. There are a limited number of such datasets for Arabic, so an effective Arabic QA system is lacking. Recent research reveals that unsupervised learning can support a QA system in replying to users' queries. In this study, the researchers develop a rhetorical-relation-based dataset for unsupervised learning applications. A web crawler is developed to crawl Arabic content from the web, and a discourse-annotated corpus is generated using rhetorical structure theory. A Naïve Bayes-based QA system is developed to evaluate the dataset's performance. The outcome shows that the QA system's performance improves with the proposed dataset and that it can answer user queries with an appropriate response. In addition, results on fine-grained and coarse-grained relations reveal that the dataset is highly reliable.
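
The evaluation above uses a Naïve Bayes-based QA system. A toy multinomial Naïve Bayes classifier over labeled question texts can be sketched as follows; English stand-in data and whitespace tokenization are assumptions, since the paper's corpus is Arabic and its features are rhetorical relations:

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Multinomial Naive Bayes with add-one smoothing over whitespace tokens."""
    def fit(self, docs, labels):
        self.classes = set(labels)
        self.word_counts = defaultdict(Counter)   # class -> token counts
        self.class_counts = Counter(labels)
        self.vocab = set()
        for doc, label in zip(docs, labels):
            tokens = doc.lower().split()
            self.word_counts[label].update(tokens)
            self.vocab.update(tokens)
        return self

    def predict(self, doc):
        def log_score(c):
            denom = sum(self.word_counts[c].values()) + len(self.vocab)
            s = math.log(self.class_counts[c] / sum(self.class_counts.values()))
            for t in doc.lower().split():
                s += math.log((self.word_counts[c][t] + 1) / denom)
            return s
        return max(self.classes, key=log_score)
```

In the paper's setting the classes would be answer categories derived from the discourse-annotated corpus rather than the toy labels used here.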

Design and Analysis of Technical Management System of Personal Information Security using Web Crawler (웹 크롤러를 이용한 개인정보보호의 기술적 관리 체계 설계와 해석)

  • Park, In-pyo;Jeon, Sang-june;Kim, Jeong-ho
    • Journal of Platform Technology
    • /
    • v.6 no.4
    • /
    • pp.69-77
    • /
    • 2018
  • Awareness of personal information protection is insufficient in end-point areas such as personal computers, smart terminals, and personal storage devices, where files containing personal information reside. In this study, we use the Diffie-Hellman method to securely retrieve personal information files found by a web crawler, and we design SEED and ARIA encryption using hybrid slicing to protect those files against attack. The encryption performance on personal information files collected by web crawling is compared in terms of the encryption/decryption rate according to key generation and the encryption/decryption sharing according to user key level. The simulation covers personal information files delivered in the external-agency transmission process. Compared with existing methods, the detection rate improved by a factor of 4.64 and the information protection rate improved by 18.3%.
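
The key-agreement step named above can be illustrated with textbook Diffie-Hellman. The parameters below are toy values chosen only for the sketch; real deployments use standardized large-prime groups, and the agreed secret would then key the SEED/ARIA ciphers the paper pairs it with:

```python
import secrets

# Toy group parameters -- for illustration only, not for real security.
P = 2**127 - 1   # a Mersenne prime
G = 3

def keypair():
    """Return (private exponent, public value G^priv mod P)."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_secret(my_priv, their_pub):
    """Both sides compute the same value: G^(a*b) mod P."""
    return pow(their_pub, my_priv, P)
```

Each party publishes only its public value, yet both derive the same shared secret, which is what lets the crawler and the collection server agree on a file-encryption key over an untrusted channel.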

Building an SNS Crawling System Using Python (Python을 이용한 SNS 크롤링 시스템 구축)

  • Lee, Jong-Hwa
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.23 no.5
    • /
    • pp.61-76
    • /
    • 2018
  • Modern life is embedded in networks. The Internet of Things attaches sensors to objects for real-time data transfer to and from the network, and mobile devices, essential to modern humans, record traces of everyday life in real time. Through social network services, information acquisition and communication activities are left on a huge network in real time, and from a business point of view, customer needs analysis begins with SNS data. In this research, we build a system that automatically collects SNS content from the web in real time using Python, targeting Instagram, Twitter, and YouTube, which have large numbers of users worldwide, to support customer needs analysis. Data are collected with a virtual (headless) web browser in a Python web server environment, passed through extraction and NLP processes, and stored in a database. Through a service site, the desired data are collected automatically via a search function, and netizens' responses can be confirmed in real time through time-series data analysis. Since searches completed within 5 seconds of execution, the advantage of the proposed algorithm is confirmed.
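
The pipeline above ends in a keyword-searchable database. The storage-and-search stage can be sketched with the standard `sqlite3` module, using stubbed posts in place of live Instagram/Twitter/YouTube scraping (the schema is an assumption for illustration):

```python
import sqlite3

def build_store(posts):
    """posts: iterable of (platform, text) pairs collected by the crawler.
    Returns an in-memory keyword-searchable database."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE posts (platform TEXT, body TEXT)")
    db.executemany("INSERT INTO posts VALUES (?, ?)", posts)
    return db

def search(db, keyword):
    """Return all stored posts whose body contains the keyword."""
    cur = db.execute("SELECT platform, body FROM posts WHERE body LIKE ?",
                     (f"%{keyword}%",))
    return cur.fetchall()
```

A timestamp column added to the same table would support the time-series response analysis the abstract mentions.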

Analyzing Box-Office Hit Factors Using Big Data: Focusing on Korean Films for the Last 5 Years

  • Hwang, Youngmee;Kim, Kwangsun;Kwon, Ohyoung;Moon, Ilyoung;Shin, Gangho;Ham, Jongho;Park, Jintae
    • Journal of information and communication convergence engineering
    • /
    • v.15 no.4
    • /
    • pp.217-226
    • /
    • 2017
  • Korea has the tenth-largest film industry in the world; however, detailed analyses of the factors contributing to successful film commercialization have rarely been attempted. Using big data, this paper analyzes both internal and external factors (including genre, release date, rating, and number of screenings) that contributed to the commercial success of Korea's top-10-ranking films in 2011-2015. The authors developed a web crawler to collect text data about each movie, implemented a Hadoop system for data storage, and classified the data using the MapReduce method. The results showed that "release date," followed closely by "rating" and "genre," was the most influential success factor in the Korean film industry. This analysis is groundwork for the development of software that can predict box-office performance.
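
The MapReduce classification step above can be illustrated in miniature: map emits (key, 1) pairs from movie-related text, shuffle groups pairs by key, and reduce sums each group. This is a generic word-count skeleton run in-process, not the authors' Hadoop job:

```python
from collections import defaultdict

def map_phase(records):
    """Emit (word, 1) for every whitespace token in each record."""
    for text in records:
        for word in text.lower().split():
            yield word, 1

def shuffle(pairs):
    """Group emitted values by key, as Hadoop does between map and reduce."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Sum the grouped counts for each key."""
    return {key: sum(values) for key, values in groups.items()}

def word_count(records):
    return reduce_phase(shuffle(map_phase(records)))
```

On a cluster, the three phases run on different machines over partitioned data; the dataflow, however, is exactly this.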