• Title/Summary/Keyword: Web Content

The Development of Instrument for Evaluating Web Site about Elementary Science Education (초등학교 과학 교육을 위한 웹사이트의 평가 도구 개발)

  • Song, Myung-Seub;Choi, Gwang-Ho
    • Journal of Korean Elementary Science Education / v.27 no.2 / pp.201-209 / 2008
  • The purpose of this study was to develop an instrument for evaluating the quality of web sites about elementary science education. Eight experienced specialists in elementary science education (three elementary science education specialists and five elementary school teachers) verified the content validity twice. The evaluation criteria were developed considering both the characteristics and the quality of web sites, and they comprise five areas with fifty-seven items. The five areas are purpose, credibility, efficiency, data type, and interactivity. Four web sites were analyzed as the subjects of this study, and the evaluators were nine elementary school teachers who were trained in the use of the instrument. The developed evaluation instrument is considered valid and reliable: the content validity is 91.1%, and the reliability has a Cronbach's α of .82. The instrument can verify the quality of a web site and provide teachers and students with useful information about elementary science education. In addition, it suggests guidelines for designing and building such web sites to instructional developers and subject-matter experts.

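The reliability reported above (Cronbach's α of .82) can be reproduced from a matrix of evaluators' item scores. Below is a minimal sketch of that computation in Python; the function name and the randomly generated example scores are illustrative only and are not data from the study.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (raters x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of each rater's total score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Illustrative data only: 9 evaluators rating 5 items on a 5-point scale.
rng = np.random.default_rng(0)
example_scores = rng.integers(3, 6, size=(9, 5))
print(f"Cronbach's alpha = {cronbach_alpha(example_scores):.2f}")
```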

An Enhancing Caching Technique by the SOP(Shared Object Page) for Content Adaptation Systems (콘텐츠 적응화 시스템에 SOP(Shared Object Page)를 도입한 개선된 캐싱 기법)

  • Jang, Seo-Young;Jeong, Ho-Yeong;Kang, Su-Yong;Cha, Jae-Hyeok
    • Journal of Digital Contents Society / v.8 no.1 / pp.41-50 / 2007
  • People access web content via PCs and many other devices. In other words, they obtain information not only through an Internet-connected PC but also through mobile phones, PDAs, and even digital TV, and serving such heterogeneous devices requires that the content be adapted for each of them. In this article, to address this problem, we propose a new web caching mechanism called SOP (Shared Object Page), which applies metadata describing web page information and stores the adapted objects.

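The abstract gives little detail of SOP itself, but the underlying idea of caching adapted objects keyed by object metadata and target device can be sketched as follows. The class and function names here are illustrative assumptions, not the paper's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class SharedObjectPageCache:
    """Illustrative cache of adapted objects, keyed by (object id, device profile)."""
    store: dict = field(default_factory=dict)

    def get_or_adapt(self, object_id: str, device: str, adapt):
        key = (object_id, device)
        if key not in self.store:            # adapt only on a cache miss
            self.store[key] = adapt(object_id, device)
        return self.store[key]

# Hypothetical adaptation function, e.g. resizing an image for a phone screen.
def adapt(object_id: str, device: str) -> str:
    return f"{object_id} adapted for {device}"

cache = SharedObjectPageCache()
print(cache.get_or_adapt("banner.jpg", "mobile", adapt))   # adapted and stored
print(cache.get_or_adapt("banner.jpg", "mobile", adapt))   # served from the cache
```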

Manufacturing and Characterization of PVDF/TiO2 Composite Nano Web with Improved β-phase (β-phase가 향상된 PVDF/TiO2 Nano Web 제조 및 특성 분석)

  • Bae, Sung Jun;Kim, Il Jin;Lee, Jae Yeon;Sur, Suk-Hun;Choi, Pil Jun;Sim, Jae Hak;Lee, Seung Geol;Ko, Jae Wang
    • Textile Coloration and Finishing / v.32 no.3 / pp.167-175 / 2020
  • In this study, the optimum conditions for manufacturing PVDF nano webs were determined for various electrospinning conditions, such as solution concentration and applied voltage. The optimum spinning conditions were studied by analyzing, under the established conditions, how the spinnability of the PVDF/TiO2 nano web and the content of β-phase, which is closely related to the piezoelectric properties, change with TiO2 content. As a result, the optimum spinning-solution concentration, applied voltage, and TiO2 content were confirmed to be 20 wt%, 25 kV, and 5 phr, respectively. The morphologies of the PVDF and PVDF/TiO2 nano webs were observed by scanning electron microscopy (SEM). Formation of the β-phase by electrospinning was confirmed by Fourier transform infrared spectroscopy (FT-IR) and X-ray diffraction (XRD), and the effect of the TiO2 trapped in the nano web structure on the piezoelectric properties was investigated.

A Study on Enhancing Web Accessibility for Visually Impaired People in Public Libraries (시각장애인을 위한 공공도서관의 웹 접근성 제고 방안)

  • Cho, Yoon-Hee
    • Journal of the Korean Society for Library and Information Science / v.43 no.3 / pp.335-354 / 2009
  • In the knowledge and information society, information is of increasing importance, and being unable to access it becomes a serious disadvantage. Blind and visually impaired users who rely on assistive technology such as screen readers are precisely the group that has trouble accessing the web content that public libraries provide. In this research, 10% of the libraries in each region were selected as the research sample. Most of the web content provided by the public libraries did not meet the Korean standard Internet web content accessibility guidelines, and evaluations of web accessibility by visually impaired users were also low. This research suggests solutions by which libraries can reduce the disadvantages imposed on visually impaired people, from the viewpoints of perceivable, operable, understandable, and robust content.
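
As a concrete example of the 'perceivable' principle referred to above, a library page can be checked for images that lack the text alternatives screen readers depend on. The sketch below uses Python's standard html.parser and is only a minimal illustration; a real audit would cover the full accessibility guidelines.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects <img> tags that have no non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):
                self.missing.append(attr_map.get("src", "<unknown>"))

html = '<img src="logo.png"><img src="map.png" alt="Library floor map">'
checker = MissingAltChecker()
checker.feed(html)
print("Images missing alt text:", checker.missing)   # ['logo.png']
```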

Intelligent Web Crawler for Supporting Big Data Analysis Services (빅데이터 분석 서비스 지원을 위한 지능형 웹 크롤러)

  • Seo, Dongmin;Jung, Hanmin
    • The Journal of the Korea Contents Association / v.13 no.12 / pp.575-584 / 2013
  • The data types used for big-data analysis vary widely: news, blogs, SNS, papers, patents, sensor data, and so on. In particular, the use of web documents offering reliable data in real time is steadily increasing, and web crawlers that collect web documents automatically have grown in importance because big data is used in many different fields and web data grow exponentially every year. However, existing web crawlers cannot collect all the documents of a web site, because they follow only the URLs contained in the documents they have already collected. They may also re-collect documents that other crawlers have already gathered, because information about collected documents is not efficiently shared between crawlers. Therefore, this paper proposes a distributed web crawler. To resolve these problems, the proposed crawler collects web documents through each site's RSS feed and the Google search API, and it provides fast crawling performance through a client-server model based on RMI and NIO that minimizes network traffic. Furthermore, the crawler extracts the core content of a web document by comparing keyword similarity across the tags contained in the document. Finally, to verify the superiority of our web crawler, we compare it with existing web crawlers in various experiments.
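
A minimal sketch of the RSS-driven collection step described above, using a shared set of already-seen URLs so that documents are not re-collected. This is an illustration in plain Python with the standard library; the paper's actual system is a distributed client-server crawler based on RMI and NIO.

```python
import xml.etree.ElementTree as ET

seen_urls: set[str] = set()   # shared index of already-collected documents

def new_links_from_rss(rss_xml: str) -> list[str]:
    """Return links from an RSS 2.0 feed that have not been collected yet."""
    root = ET.fromstring(rss_xml)
    links = []
    for link in root.iterfind(".//item/link"):    # RSS 2.0 <item><link> elements
        url = (link.text or "").strip()
        if url and url not in seen_urls:
            seen_urls.add(url)                    # mark as collected so other crawls skip it
            links.append(url)
    return links

# Tiny inline feed for illustration; a real crawler would fetch each site's feed URL.
sample = """<rss><channel>
  <item><link>https://example.com/post/1</link></item>
  <item><link>https://example.com/post/2</link></item>
</channel></rss>"""
print(new_links_from_rss(sample))   # both links on the first crawl
print(new_links_from_rss(sample))   # [] on a repeat crawl
```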

Web-based Knowledge Management Model for Mid-Term and Long-Term Nuclear R&D Using Web Knowledge DataBase (웹 지식 데이터베이스를 활용한 원자력 중장기 연구개발 웹 기반 지식관리 모델)

  • 정관성;한도희
    • The Journal of Society for e-Business Studies / v.5 no.2 / pp.143-150 / 2000
  • This paper presents a methodology for managing research scheduling plans, progress, and results using a Web Knowledge Database System that integrates a research knowledge management model into the research and development environment. The paper describes the utilization of the Web Knowledge Database System, the sharing of research knowledge through design data reviews and communication, and the management of the research knowledge flow during the research and development period.

A Comprehensive Model for Evaluating Internet Web Sites (인터넷 웹사이트의 포괄적 평가모형에 관한 연구)

  • 홍일유;정부현
    • Korean Management Science Review / v.17 no.3 / pp.161-180 / 2000
  • The purpose of this paper is to propose an analytical model for evaluating Internet Web sites that is comprehensive and flexible enough to accommodate different categories of Web sites. This paper identifies critical success factors of Internet Web sites, determines criteria for evaluating them, and uses these criteria to develop a framework for comprehensive evaluation. The framework consists of eight categories: design, business functions, trustworthiness, interface, technology, community, contents, and others. An empirical study designed to validate the framework was conducted for each of three Web site categories: (1) information provision, (2) product sale, and (3) customer service. The results show that 'content' is the most important category for information provision Web sites, 'trustworthiness' for product sale Web sites, and 'design' for customer service Web sites. The framework may be used not only as a tool to evaluate Internet Web sites, but also as a checklist for improving the quality of a Web site that is under development.

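One simple way to operationalize a multi-category framework such as the one above is a weighted sum of per-category ratings. The weights and ratings below are made-up illustrations; the paper derives category importance empirically and finds that it differs by site type.

```python
# Illustrative weights only; the paper finds the most important category differs
# across information provision, product sale, and customer service sites.
weights = {
    "design": 0.10, "business functions": 0.10, "trustworthiness": 0.15,
    "interface": 0.10, "technology": 0.10, "community": 0.10,
    "contents": 0.25, "others": 0.10,
}

def site_score(ratings: dict[str, float]) -> float:
    """Weighted sum of per-category ratings (each rating on a 0-10 scale)."""
    return sum(weights[c] * ratings.get(c, 0.0) for c in weights)

example = {c: 7.0 for c in weights} | {"contents": 9.0}
print(f"overall score: {site_score(example):.2f}")
```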

Design and Implementation of Web Crawler with Real-Time Keyword Extraction based on the RAKE Algorithm

  • Zhang, Fei;Jang, Sunggyun;Joe, Inwhee
    • Proceedings of the Korea Information Processing Society Conference / 2017.11a / pp.395-398 / 2017
  • We propose a web crawler system with a keyword extraction function in this paper. Research on keyword extraction in existing text mining is mostly based on databases of documents or corpora that have already been collected, whereas the purpose of this paper is to build a real-time keyword extraction system that extracts the keywords of a text and stores them in the database while the text of the web page is being crawled. We design and implement a crawler that incorporates the RAKE keyword extraction algorithm, so that keywords are extracted from the content of each web page as it is collected. In addition, the performance of the RAKE algorithm is improved by increasing the weight of important features, such as nouns appearing in the title. The experimental results show that this method is superior to the existing method and can extract keywords satisfactorily.
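
A compact sketch of the RAKE idea (candidate phrases split at stopwords, words scored by degree over frequency), including the paper's reported tweak of boosting words that also appear in the title. The stopword list and boost factor are illustrative assumptions.

```python
import re
from collections import defaultdict

STOPWORDS = {"a", "an", "the", "of", "and", "or", "in", "on", "to", "is", "for", "with", "from"}
TITLE_BOOST = 1.5   # assumed weight increase for words that also appear in the title

def rake_keywords(text: str, title: str = "", top_n: int = 5):
    """Very small RAKE-style extractor: phrases between stopwords, scored by deg/freq."""
    words = re.findall(r"[a-zA-Z]+", text.lower())
    title_words = set(re.findall(r"[a-zA-Z]+", title.lower()))

    # Split the word stream into candidate phrases at stopword boundaries.
    phrases, current = [], []
    for w in words:
        if w in STOPWORDS:
            if current:
                phrases.append(current)
            current = []
        else:
            current.append(w)
    if current:
        phrases.append(current)

    # Word scores: degree / frequency, boosted for words found in the title.
    freq, deg = defaultdict(int), defaultdict(int)
    for phrase in phrases:
        for w in phrase:
            freq[w] += 1
            deg[w] += len(phrase)

    def word_score(w):
        boost = TITLE_BOOST if w in title_words else 1.0
        return boost * deg[w] / freq[w]

    scored = [(" ".join(p), sum(word_score(w) for w in p)) for p in phrases]
    return sorted(set(scored), key=lambda x: -x[1])[:top_n]

print(rake_keywords(
    "The crawler extracts keywords from the content of a web page in real time",
    title="Web Crawler with Real-Time Keyword Extraction"))
```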

A Study on Web Document's Efficient Browsing

  • Kim, Dong-Hyun;Song, Seung-Heon;Kim, Eung-Kon
    • Journal of information and communication convergence engineering / v.1 no.2 / pp.88-92 / 2003
  • Most documents consist of primary content and supporting material, such as footnotes, detailed explanations, and illustrations, and on web documents the related supporting material is linked as hypertext. However, in current web browsers the content of a hypertext link appears in a new window. The user then leaves the primary material, may lose the overall context, and has difficulty returning to the primary context once the interest in the linked material has passed. Using the fluid links technique, these problems can be solved easily: when the mouse is placed on a link, the related material is presented between the lines or in the margin while the context of the primary material is maintained. In this paper, we introduce various browsing techniques that use fluid links, analyze their forms and features, and then propose a way to implement them in Java.