• Title/Summary/Keyword: web

Search results: 18,382

A Study on Design and Development of Web Information Collection System Based Compare and Merge Method (웹 페이지 비교통합 기반의 정보 수집 시스템 설계 및 개발에 대한 연구)

  • Jang, Jin-Wook
    • Journal of Information Technology Services
    • /
    • v.13 no.1
    • /
    • pp.147-159
    • /
    • 2014
  • Recently, the quantity of information accessible from the Internet has increased dramatically, and searching the Web for useful information has therefore become increasingly difficult. Much research has thus been done on web robots that filter Internet information based on user interest. When a web site that a user wants to visit is found, its content is searched by following the search list or the site's links in order. This search process takes longer as the number of pages or sites increases, so its performance needs to be improved. In order to minimize unnecessary searching by web robots, this paper proposes an efficient information collection system based on a compare and merge method. In the proposed system, a web robot initially collects information from the web sites that users register. On each subsequent visit, the web robot compares what it collected with what the web sites currently contain; if they differ, it updates what it collected. Only the updated web page information is classified by subject and provided to users, so that users can access the updated information quickly.
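
A minimal sketch of the compare-and-merge idea described above, assuming an in-memory hash store and a pluggable fetch function; the URLs and storage layout are illustrative, not the authors' implementation.

```python
# Revisit registered pages and keep only those whose content changed since the
# last crawl (compare), replacing the stored version when it differs (merge).
import hashlib

last_seen = {}  # url -> content hash recorded on the previous visit

def crawl_updates(urls, fetch):
    """Return only the pages whose content differs from the previous crawl."""
    updated = {}
    for url in urls:
        body = fetch(url)
        digest = hashlib.sha256(body).hexdigest()
        if last_seen.get(url) != digest:   # compare with the stored digest ...
            last_seen[url] = digest        # ... and merge in the new version
            updated[url] = body
    return updated

if __name__ == "__main__":
    # Stand-in fetcher; a real web robot would issue HTTP requests here.
    pages = {"https://example.com/news": b"v1", "https://example.com/notice": b"v1"}
    print(len(crawl_updates(pages, pages.get)))   # first visit: both pages are new -> 2
    pages["https://example.com/news"] = b"v2"
    print(len(crawl_updates(pages, pages.get)))   # later visit: only the changed page -> 1
```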

Web Accessibility Comparison between Handicapped Person Related Web Sites and School Web Sites (장애인 관련 웹 사이트와 학교 웹 사이트의 웹 접근성 비교)

  • Kim, Hwangyong
    • Journal of Digital Convergence
    • /
    • v.12 no.1
    • /
    • pp.365-370
    • /
    • 2014
  • The importance of Web accessibility for handicapped persons is ever increasing as the role of the Web in the information society grows. The Web accessibility of handicapped person related Web sites was compared with that of school Web sites in order to assess the actual state of accessibility of handicapped person related sites. According to the experiment, the handicapped person related Web sites received lower Web accessibility scores than general school Web sites. The result shows that more effort to improve the Web accessibility of handicapped person related Web sites is required.

Transverse load carrying capacity of sinusoidally corrugated steel web beams with web openings

  • Kiymaz, G.;Coskun, E.;Cosgun, C.;Seckin, E.
    • Steel and Composite Structures
    • /
    • v.10 no.1
    • /
    • pp.69-85
    • /
    • 2010
  • This paper presents a study on the behavior and design of corrugated web steel beams with and without web openings. In the literature, the web opening problem in steel beams has been dealt with mostly for beams with plane web plates, and research on the effect of an opening in a corrugated web was found to be very limited. The present study deals mainly with the effect of web openings on the transverse load carrying capacity of steel beams with sinusoidally corrugated webs. A general purpose finite element program (ABAQUS) was used. Simply supported corrugated web beams of 2 m length, with circular web openings at the quarter span points, were considered; these points are generally regarded as the optimum locations of web openings for steel beams. Various cases were analyzed, including the size of the openings and the corrugation density, which is a function of the magnitude and length of the sine wave. Models without web holes were also analyzed and compared with the other cases, all of which were examined in terms of load-deformation characteristics and ultimate web shear resistance.

RwO-Caching: A Study on Web Caching with Related Web Object (RwO-캐싱: 연관 웹 객체 기반의 웹 캐싱 기법 연구)

  • Na, Hui-Seong;Ko, Franz I.S
    • The Journal of Society for e-Business Studies
    • /
    • v.13 no.4
    • /
    • pp.161-171
    • /
    • 2008
  • The most important cause of increasing web traffic and overloaded web servers is the dramatic growth in the number of web users, which leads to great dissatisfaction for individual users. To overcome this situation, studies on accelerating web content delivery have been conducted actively, and web caching technology is used to reduce the load on the web system and the traffic in the network. In this paper, we propose a new web caching technique based on related web objects, built on the unit of web processing and the characteristics of web objects. We also verified the usefulness of the proposed system through comparison and experiments.
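
A toy sketch of caching keyed by related web objects, assuming a hand-written relation map and an LRU store; the grouping rule and API are illustrative, not the RwO scheme defined in the paper.

```python
# When one object in a group is requested, the objects related to it (e.g. the
# images and stylesheets embedded by the same page) are pulled into the cache
# together, so later requests for them are cache hits.
from collections import OrderedDict

RELATED = {  # hypothetical relation map: page -> objects it embeds
    "/index.html": ["/style.css", "/logo.png"],
    "/article.html": ["/style.css", "/photo.jpg"],
}

class RwOCache:
    def __init__(self, capacity, fetch):
        self.capacity = capacity
        self.fetch = fetch          # origin-server fetch function
        self.store = OrderedDict()  # keys kept in LRU order

    def get(self, key):
        if key in self.store:
            self.store.move_to_end(key)
            return self.store[key]
        value = self.fetch(key)
        self._put(key, value)
        for rel in RELATED.get(key, []):      # prefetch related objects
            if rel not in self.store:
                self._put(rel, self.fetch(rel))
        return value

    def _put(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        while len(self.store) > self.capacity:
            self.store.popitem(last=False)    # evict least recently used

cache = RwOCache(capacity=10, fetch=lambda k: f"<contents of {k}>")
cache.get("/index.html")   # also pulls /style.css and /logo.png into the cache
```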

Numerical Formula and Verification of Web Robot for Collection Speedup of Web Documents

  • Kim Weon;Kim Young-Ki;Chin Yong-Ok
    • Journal of Internet Computing and Services
    • /
    • v.5 no.6
    • /
    • pp.1-10
    • /
    • 2004
  • A web robot is software that tracks and collects web documents on the Internet [1]. The performance scalability of existing web robots has reached its limit as the number of web documents has increased sharply with the rapid growth of the Internet. Accordingly, studies on performance scalability in searching and collecting documents on the web are strongly demanded. This thesis presents the design of a Multi-Agent based web robot that speeds up document collection, in contrast to a sequentially executing web robot based on the existing Fork-Join method, together with an analysis of its performance scalability. For collection speedup, the Multi-Agent based web robot processes inactive ('dead-link') URLs, which are caused by overloaded web documents or by temporary network or web-server disturbances, independently after dividing them among the agents. Each agent consists of four components: Loader, Extractor, Active URL Scanner, and Inactive URL Scanner. The thesis models the Multi-Agent based web robot on Amdahl's Law to speed up document collection, introduces a numerical formula for collection speedup, and verifies the performance improvement by comparing data from the formula with experimental data. Moreover, a 'Dynamic URL Partition algorithm' is introduced and implemented to minimize the workload of each target web server by maximizing the interval between requests sent to it.
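
A back-of-the-envelope illustration of the Amdahl's-Law argument mentioned in the abstract, where only the parallelisable fraction p of the crawl scales with the number of agents n; the value p = 0.9 is an assumed example, not a figure from the thesis.

```python
# Amdahl's Law: speedup = 1 / ((1 - p) + p / n), where p is the fraction of the
# crawl (fetching and extracting pages) that can be split across n agents.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for n in (1, 2, 4, 8):
    print(n, "agents ->", round(amdahl_speedup(p=0.9, n=n), 2), "x")
# With p = 0.9 the speedup approaches at most 1 / (1 - 0.9) = 10 as n grows.
```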

A Web Recommendation System using Grid based Support Vector Machines

  • Jun, Sung-Hae
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.7 no.2
    • /
    • pp.91-95
    • /
    • 2007
  • The main goal of a web recommendation system is to predict user behavior on a website by analyzing web log data that contain the visited web pages. Many studies on web recommendation systems have been conducted. To construct a web recommendation system, web mining is needed; in particular, the web usage analysis branch of web mining provides a tool for building the recommendation model. In this paper, we propose a web recommendation system using grid based support vector machines to improve web recommendation. To verify the performance of our system, we perform experiments using a data set from our web server.
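
The abstract does not spell out the grid construction, so the sketch below shows just one plausible reading: a grid search over SVM hyperparameters, trained on features derived from web log data. The feature layout and labels are synthetic assumptions and may differ from the paper's formulation.

```python
# An SVM recommendation model tuned by exhaustive grid search over (C, gamma),
# using per-session page-visit counts as toy web-log features.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(200, 10))      # visit counts for 10 pages per session
y = (X[:, 0] + X[:, 3] > 4).astype(int)     # 1 = recommend the target page (toy rule)

param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```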

Design and Implementation of Web Ontology Inference System Using Axiomatisation (어휘의 공리화를 이용한 Web Ontology 추론 시스템의 설계 및 구현)

  • 하영국;손주찬;함호상
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2003.10c
    • /
    • pp.559-561
    • /
    • 2003
  • Recently, the Semantic Web has attracted attention as a next-generation Web technology. In the Semantic Web, documents on the Web are given semantic annotations based on ontologies of Web resources, and an ontology inference agent makes it possible to search the Web by meaning. The core element of this Semantic Web technology is the Web ontology, and the W3C is establishing the RDF-based OWL (Web Ontology Language) specification as the standard language for expressing it. An inference system for OWL, the standard Web ontology language, is therefore an essential base technology for implementing Semantic Web search agents, yet its development is still immature. One way to implement an OWL inference system is to use an engine that can reason over DL (Description Logic), which provides the theoretical foundation of OWL; however, this approach has difficulty when OWL is extended with vocabulary that goes beyond DL, such as rules. Another approach is axiomatisation, in which the semantics of the OWL language is described through logic programming and the ontology is inferred through theorem proving; its advantage is that the vocabulary can easily be extended for new languages within the scope of the underlying logic. This paper describes the design and implementation of a system that can infer over ontologies described in OWL using the axiomatisation approach.
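
A minimal sketch of the axiomatisation idea: encode two OWL/RDFS-style axioms as rules over triples and apply them by forward chaining to a fixpoint. The rules and the sample ontology are illustrative and far simpler than the logic-programming axiomatisation described in the paper.

```python
# Two axioms as rules: subClassOf is transitive, and instances inherit the
# types of all superclasses of their asserted class.
SUBCLASS, TYPE = "rdfs:subClassOf", "rdf:type"

triples = {
    ("Dog", SUBCLASS, "Mammal"),
    ("Mammal", SUBCLASS, "Animal"),
    ("rex", TYPE, "Dog"),
}

def infer(triples):
    """Apply the two rules repeatedly until no new facts are derived."""
    facts = set(triples)
    while True:
        new = set()
        for (a, p1, b) in facts:
            for (c, p2, d) in facts:
                if p1 == SUBCLASS and p2 == SUBCLASS and b == c:
                    new.add((a, SUBCLASS, d))   # transitivity of subClassOf
                if p1 == TYPE and p2 == SUBCLASS and b == c:
                    new.add((a, TYPE, d))       # type propagation to superclasses
        if new <= facts:
            return facts
        facts |= new

print(("rex", TYPE, "Animal") in infer(triples))  # True
```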

Optimization Model on the World Wide Web Organization with respect to Content Centric Measures (월드와이드웹의 내용기반 구조최적화)

  • Lee Wookey;Kim Seung;Kim Hando;Kang Sukho
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.30 no.1
    • /
    • pp.187-198
    • /
    • 2005
  • The structure of a Web site can keep search robots or crawling agents from getting lost in the huge forest of Web pages. We formalize a view of the World Wide Web and generalize it as a hierarchy of Web objects: the Web as a set of Web sites, and a Web site as a directed graph with Web nodes and Web edges. Our approach yields the optimal hierarchical structure that maximizes the tf-idf (term frequency and inverse document frequency) weight, one of the most widely accepted content centric measures in the information retrieval community, so that the measure can be used to embody the semantics of a search query. The experimental results show that the optimization model is an effective alternative to conventional heuristic approaches in the dynamically changing Web environment.
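
For reference, a short sketch of the tf-idf weight that the optimization model maximizes, computed per (term, page) pair; the toy document set and query term are assumptions.

```python
# tf-idf(term, page) = (term frequency in the page) * log(N / document frequency)
import math

pages = {
    "/home": "web site structure search robots",
    "/docs": "web crawling search index structure structure",
    "/blog": "travel photos and notes",
}

def tf_idf(term, page):
    words = pages[page].split()
    tf = words.count(term) / len(words)
    df = sum(1 for text in pages.values() if term in text.split())
    idf = math.log(len(pages) / df) if df else 0.0
    return tf * idf

print(round(tf_idf("structure", "/docs"), 3))   # 0.135
```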

sPAC(Web Services Performance Analysis Center): A performance-aware web service composition tool (sPAC(Web Service Performance Analysis Center): 성능 중심의 웹 서비스 조합 도구)

  • Chang, Hee-Jung;Song, Hyung-Ki;Lee, Kang-Sun
    • Journal of the Korea Society for Simulation
    • /
    • v.14 no.3
    • /
    • pp.119-127
    • /
    • 2005
  • Web services and their composition (web processes) are promising technologies for efficiently integrating disparate software components across various types of systems. As many web services are now available on the Internet, quality of service (QoS) and performance/cost become increasingly important in differentiating between similar service providers. In this work, we introduce sPAC (Web Services Performance Analysis Center) and show how customers can benefit from sPAC by considering performance when composing and commercializing web services. sPAC 1) helps users graphically describe the workflow of web services, 2) invokes the web services to test their performance under light load conditions, 3) automatically converts the web services and the flow between them into a simulation model, 4) conducts extensive simulations for heavy load conditions and various usage patterns, and 5) reports analysis results and estimation data for the web services.
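
A rough sketch of the light-load-to-heavy-load estimation step, projecting the composite response time of a sequential workflow with a simple M/M/1 approximation; the workflow, the numbers, and the queueing model are illustrative assumptions, not the simulation model sPAC generates.

```python
# M/M/1 mean response time: 1 / (mu - lambda), where mu = 1 / service_time.
def mm1_response_time(service_time_s, arrival_rate):
    mu = 1.0 / service_time_s              # service rate of one web service
    if arrival_rate >= mu:
        return float("inf")                # saturated: no steady state
    return 1.0 / (mu - arrival_rate)

workflow = [0.05, 0.12, 0.08]              # light-load times of 3 chained services (s)
for rate in (1.0, 5.0, 8.0):               # requests per second
    total = sum(mm1_response_time(t, rate) for t in workflow)
    print(f"{rate:>4} req/s -> {total:.2f} s end-to-end")
```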

Automatic Extraction of Dependencies between Web Components and Database Resources in Java Web Applications

  • Oh, Jaewon;Ahn, Woo Hyun;Kim, Taegong
    • Journal of information and communication convergence engineering
    • /
    • v.17 no.2
    • /
    • pp.149-160
    • /
    • 2019
  • Web applications typically interact with databases, so it is crucial to understand which web components access which database resources when maintaining web apps. Existing research identifies interactions between Java web components, such as JavaServer Pages and servlets, but does not extract dependencies between the web components and database resources, such as tables and attributes. This paper proposes a dynamic analysis of Java web apps that extracts such dependencies from a Java web app and represents them as a graph. The key responsibility of our analysis method is to identify when web components access database resources. To fulfill this responsibility, our method dynamically observes the database-related objects provided by the Java standard library using the proxy pattern, which can be applied to control access to a desired object. This study also experiments with open source web apps to verify the feasibility of the proposed method.
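
A minimal sketch of the proxy idea behind the analysis: wrap a database connection object so that every call passing through it is recorded, revealing which component touched which table. The Python stand-in, the SQL parsing, and the sqlite3 connection are simplified assumptions, not the JDBC-level mechanism used in the paper.

```python
import re
import sqlite3

class LoggingProxy:
    """Forward attribute access to the target, recording which table each SQL call touches."""
    def __init__(self, target, component, log):
        self._target, self._component, self._log = target, component, log

    def __getattr__(self, name):
        attr = getattr(self._target, name)
        if not callable(attr):
            return attr
        def wrapper(*args, **kwargs):
            if name == "execute" and args:   # record (component, table) for SQL calls
                match = re.search(r"\b(?:from|into|update)\s+(\w+)", args[0], re.I)
                if match:
                    self._log.append((self._component, match.group(1)))
            return attr(*args, **kwargs)
        return wrapper

# Usage with sqlite3 as a stand-in database:
access_log = []
conn = sqlite3.connect(":memory:")
conn.execute("create table users (id integer)")
proxied = LoggingProxy(conn, component="UserServlet", log=access_log)
proxied.execute("select * from users")
print(access_log)   # [('UserServlet', 'users')]
```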