• Title/Summary/Keyword: Web Databases


An Implementation of a Web Transaction Processing System (웹 트랜잭션 처리 시스템의 구현)

  • Lee, Gang-U;Kim, Hyeong-Ju
    • Journal of KIISE:Computing Practices and Letters / v.5 no.5 / pp.533-542 / 1999
  • Enabling users to access multimedia data such as images and motion pictures, in addition to traditional text-based data, the Web has become a popular platform for various applications. Recently, many researchers have devoted a large amount of work to integrating the Web and databases in order to improve the quality of Web services, and have produced many noticeable results in this area. However, not enough research results have been presented on processing Web transactions, which are essential in integrating the Web and databases. In this paper, we design and implement WebTP, a Web transaction processing system for Web gateways to databases. By introducing works and work-global variables, WebTP provides more robust transaction state management and supports application-level savepoints and partial rollbacks, which improve the availability and reliability of the system. It also enhances productivity by providing a modular way to develop Web applications.
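The application-level savepoints and partial rollbacks that WebTP provides can be illustrated with standard SQL savepoints. The sketch below is only an analogy using SQLite, not WebTP's actual API; the function name `run_work` and the schema are hypothetical, standing in for WebTP's "work" units.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (item TEXT)")

def run_work(name, items, fail=False):
    # Each "work" unit runs inside its own named savepoint, so a failing
    # work can be rolled back alone without aborting the whole transaction.
    conn.execute("SAVEPOINT " + name)
    try:
        for it in items:
            conn.execute("INSERT INTO orders VALUES (?)", (it,))
        if fail:
            raise RuntimeError("work failed")
        conn.execute("RELEASE " + name)       # keep this work's changes
    except RuntimeError:
        conn.execute("ROLLBACK TO " + name)   # partial rollback of this work only
        conn.execute("RELEASE " + name)

run_work("w1", ["book"])
run_work("w2", ["laptop"], fail=True)   # rolled back alone
conn.commit()
rows = [r[0] for r in conn.execute("SELECT item FROM orders")]
print(rows)  # only w1's insert survives
```

Structuring a Web application as such works keeps a partial failure from discarding all of the user's progress, which is the availability benefit the abstract describes.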

Development of Internet Expert System Tool using ASP (ASP를 이용한 인터넷 전문가 시스템 도구 개발)

  • 조성인;양희성;배영민;정재연
    • Journal of Biosystems Engineering / v.26 no.2 / pp.141-146 / 2001
  • Much agricultural information comes from human experience and is in non-numerical form, so it is difficult to process in a conventional data processing way. To solve this problem, an Internet expert system for agricultural applications was developed using ASP (Active Server Pages); it consists of databases, an inference engine, and a user interface. The databases comprise a rule base, a question base, and link data. The inference engine was developed with ASP to connect the databases with the web. The user interface was developed with CGI (Common Gateway Interface) so that questions could be answered on a web browser, and the session technique was used to provide the proper result to each of multiple users. A prototype Internet expert system was developed for diagnosing diseases and nutritional disorders of paddy rice. The expert system could be used interactively through the WWW (World Wide Web) by multiple users at remote sites, even at the same time. The rule base could be easily updated and modified on the web server by a knowledge engineer.
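The core of such an inference engine is forward chaining over a rule base: any rule whose conditions are all satisfied fires and adds its conclusion as a new fact. A minimal sketch is shown below; the rule format and the symptom/disease names are invented for illustration, since the paper's actual rule schema is not given.

```python
# Hypothetical rules: (set of required facts, conclusion to add).
rules = [
    ({"leaf_spots", "yellowing"}, "nitrogen_deficiency"),
    ({"lesions_on_leaf", "high_humidity"}, "rice_blast"),
]

def infer(facts, rules):
    """Forward-chain: fire every rule whose conditions are all satisfied,
    repeating until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conds, concl in rules:
            if conds <= facts and concl not in facts:
                facts.add(concl)
                changed = True
    return facts

result = infer({"lesions_on_leaf", "high_humidity"}, rules)
print("rice_blast" in result)  # True
```

In the paper's system the facts would come from the question base via the web interface, with a per-user session keeping each user's fact set separate.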


An Effective Metric for Measuring the Degree of Web Page Changes (효과적인 웹 문서 변경도 측정 방법)

  • Kwon, Shin-Young;Kim, Sung-Jin;Lee, Sang-Ho
    • Journal of KIISE:Databases / v.34 no.5 / pp.437-447 / 2007
  • A variety of similarity metrics have been used to measure the degree of web page changes. In this paper, we first define criteria for web page changes in order to evaluate the effectiveness of similarity metrics in terms of six important types of web page changes. Second, we propose a new similarity metric appropriate for measuring the degree of web page changes. Using real web pages and synthesized pages, we analyze five existing metrics (the byte-wise comparison, the TF-IDF cosine distance, the word distance, the edit distance, and the shingling) and ours under the proposed criteria. The analysis shows that our metric represents changes more effectively than the other metrics. We expect that our study can help users select an appropriate metric for particular web applications.
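One of the existing metrics analyzed, shingling, measures change as Jaccard dissimilarity over overlapping word windows. The sketch below shows that idea only; it is not the authors' proposed metric, and the window size `w=3` and sample sentences are arbitrary.

```python
def shingles(text, w=3):
    """All contiguous w-word windows ("shingles") of the text."""
    words = text.split()
    return {tuple(words[i:i + w]) for i in range(max(1, len(words) - w + 1))}

def shingle_similarity(a, b, w=3):
    """Jaccard similarity of the two pages' shingle sets."""
    sa, sb = shingles(a, w), shingles(b, w)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

old = "the quick brown fox jumps over the lazy dog"
new = "the quick brown fox leaps over the lazy dog"
sim = shingle_similarity(old, new)
print(round(1 - sim, 3))  # degree of change: one changed word breaks 3 shingles per side
```

A single word substitution invalidates every shingle that covers it, so the metric is sensitive to localized edits, which is one reason different metrics rank the same change differently.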

A Study on Integrated Operation of Electronic Journals and Commercial Web Databases (전자저널과 상용 웹 DB의 상호 통합운영에 대한 연구)

  • 김혜선;황혜경;최선희
    • Proceedings of the Korea Contents Association Conference / 2003.11a / pp.333-337 / 2003
  • Electronic journals and commercial web databases are major categories of scholarly content. In this study we investigated the current situation of both and the ways in which they can be operated in an integrated manner. We also studied how to integrate them with paper resources and usage data when establishing a future action plan for developing information contents and resources.


An Ontology-based Knowledge Management System - Integrated System of Web Information Extraction and Structuring Knowledge -

  • Mima, Hideki;Matsushima, Katsumori
    • Proceedings of the CALSEC Conference / 2005.03a / pp.55-61 / 2005
  • We introduce a new web-based knowledge management system in progress, in which XML-based web information extraction and our structuring-knowledge technologies are combined using ontology-based natural language processing. Our aim is to provide efficient access to heterogeneous information on the web, enabling users to draw effortlessly on a wide range of textual and non-textual resources, such as newspapers and databases, to accelerate knowledge acquisition from such sources. To achieve efficient knowledge management, we first propose an XML-based web information extraction method that includes a sophisticated control language for extracting data from web pages. By using standard XML technologies, our approach makes information extraction easy because of a) detaching rules from processing, b) restricting the target for processing, and c) interactive operations for developing extraction rules. We then propose a structuring-knowledge system that includes 1) automatic term recognition, 2) domain-oriented automatic term clustering, 3) similarity-based document retrieval, 4) real-time document clustering, and 5) visualization. The system supports integrating different types of databases (textual and non-textual) and retrieving different types of information simultaneously. Through further explanation of the specification and the implementation techniques of the system, we demonstrate how the system can accelerate knowledge acquisition on the web, even for novice users in the field.
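The benefit of "detaching rules from processing" can be sketched with extraction rules kept as data and applied by a generic engine. The rule syntax below is plain ElementTree path expressions, not the paper's control language; the field names and sample document are invented.

```python
import xml.etree.ElementTree as ET

# Hypothetical extraction rules, held as data and editable without
# touching the processing code.
rules = {"title": ".//headline", "date": ".//meta[@name='date']"}

page = """<article>
  <meta name='date' value='2005-03-01'/>
  <body><headline>Knowledge Management</headline></body>
</article>"""

def extract(xml_text, rules):
    """Apply each rule's path expression and collect the matched values."""
    root = ET.fromstring(xml_text)
    out = {}
    for field, path in rules.items():
        node = root.find(path)
        if node is not None:
            out[field] = node.text if node.text else node.get("value")
    return out

record = extract(page, rules)
print(record)
```

Because the engine never changes, a rule developer can refine `rules` interactively against sample pages, which is the workflow the abstract's point c) describes.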


Design and Implementation of an Efficient Web Services Data Processing Using Hadoop-Based Big Data Processing Technique (하둡 기반 빅 데이터 기법을 이용한 웹 서비스 데이터 처리 설계 및 구현)

  • Kim, Hyun-Joo
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.1 / pp.726-734 / 2015
  • Relational databases, which manage data by structuring it, are currently the most widely used systems for data management. However, in relational databases, service becomes slower as the amount of data increases because of constraints on the read and write operations used to save or query data. Furthermore, when a new task is added, the database grows and consequently requires additional infrastructure, such as parallel configuration of hardware, CPU, memory, and network, to support smooth operation. In this paper, in order to improve web information services that slow down as the data in relational databases grows, we implemented a model that extracts a large amount of data quickly and safely for users by sending the data to the Hadoop Distributed File System (HDFS), then unifying, reconstructing, and processing the HDFS files. We applied our model to a web-based civil affairs system that stores image files, an irregular data processing workload. Our proposed system's data processing was found to be 0.4 s faster than that of a relational database system. Thus, a Hadoop-based big data processing technique can support web information services that must handle amounts of data as large as those in conventional relational databases. Furthermore, since Hadoop is open source, our model has the advantage of reducing software costs. The proposed system is expected to serve as a model for web services that provide fast information processing for organizations that require efficient processing of big data because of the growth of conventional relational databases.
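Hadoop processes HDFS files with the MapReduce pattern: a mapper emits key-value pairs per record, a shuffle groups them by key, and a reducer aggregates each group. The toy records and the civil-affairs-style schema below are invented stand-ins for lines of an HDFS file, simulated in plain Python rather than on a real cluster.

```python
from collections import defaultdict

# Toy records standing in for lines of an HDFS file (hypothetical schema).
records = ["civil:approved", "civil:pending", "tax:approved", "civil:approved"]

def mapper(line):
    # Emit one (key, 1) pair per record.
    dept, status = line.split(":")
    yield dept, 1

def reducer(key, values):
    # Aggregate all values emitted for one key.
    return key, sum(values)

# Shuffle phase: group mapper output by key, as Hadoop does between phases.
groups = defaultdict(list)
for line in records:
    for k, v in mapper(line):
        groups[k].append(v)

counts = dict(reducer(k, vs) for k, vs in groups.items())
print(counts)  # {'civil': 3, 'tax': 1}
```

Because mappers and reducers work on independent records and keys, Hadoop can run them in parallel across nodes, which is where the speedup over a single relational database comes from.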

Profile based Web Application Attack Detection and Filtering Method (프로파일기반 웹 어플리케이션 공격탐지 및 필터링 기법)

  • Yun Young-Tae;Ryou Jae-Cheol;Park Sang-Seo;Park Jong-Wook
    • The KIPS Transactions:PartC / v.13C no.1 s.104 / pp.19-26 / 2006
  • Recently, web server hacking has been trending toward web application hacking, which exploits comparatively vulnerable web applications based on open source. Because web servers are usually connected to databases, it is also possible to attack the databases through web interfaces. Web application attacks exploit vulnerabilities not in the web server itself but in the structure, logical errors, and coding errors of web applications. It is difficult to defend web applications from various attacks using only pattern-matching detection and code modification. In this paper, we propose a profile-based method to secure web applications that can detect and filter out abnormal web application requests.
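A profile-based filter rejects requests that deviate from a model of normal traffic rather than matching known attack patterns. The sketch below assumes a very simple profile, per-parameter character class and maximum length, which is our illustration, not necessarily the profile features the paper uses.

```python
# Hypothetical profile learned from normal traffic: each parameter's
# expected character class and maximum length.
profile = {"id": {"charset": "digits", "max_len": 8},
           "name": {"charset": "alpha", "max_len": 32}}

CHECKS = {"digits": str.isdigit, "alpha": str.isalpha}

def is_normal(params, profile):
    """Accept a request only if every parameter matches its profile."""
    for key, value in params.items():
        spec = profile.get(key)
        if spec is None:
            return False                       # unknown parameter
        if len(value) > spec["max_len"]:
            return False                       # abnormal length
        if not CHECKS[spec["charset"]](value):
            return False                       # abnormal character class
    return True

print(is_normal({"id": "1234"}, profile))      # normal request
print(is_normal({"id": "1 OR 1=1"}, profile))  # injection-like, filtered
```

Unlike signature matching, this catches novel payloads too, since anything outside the learned profile is rejected, at the cost of tuning the profile to avoid false positives.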

Bioinformatics Resources of the Korean Bioinformation Center (KOBIC)

  • Lee, Byung-Wook;Chu, In-Sun;Kim, Nam-Shin;Lee, Jin-Hyuk;Kim, Seon-Yong;Kim, Wan-Kyu;Lee, Sang-Hyuk
    • Genomics & Informatics / v.8 no.4 / pp.165-169 / 2010
  • The Korean Bioinformation Center (KOBIC) is a national bioinformatics research center in Korea. We have developed many bioinformatics algorithms and applications to facilitate the biological interpretation of OMICS data. Here we introduce the major bioinformatics resources, databases and tools, developed at KOBIC. These resources are classified into three main fields: genome, proteome, and literature. Among the genomic resources, we constructed several pipelines for next-generation sequencing (NGS) data processing and developed analysis algorithms and web-based database servers including miRGator, ESTpass, and CleanEST. We also built integrated databases and servers for microarray expression data, such as MDCDP. For proteome data, the VnD database and the WDAC, Localizome, and CHARMM_HM web servers are available for various purposes. In the literature field, we constructed the IntoPub server and the Patome database. We continue to build and maintain this bioinformatics infrastructure and to develop algorithms.

Publication Indicators under Web of Science, SCOPUS Databases at Northern Border University: 2008-2020

  • Al Sawy, Yaser Mohammad Mohammad
    • International Journal of Computer Science & Network Security / v.21 no.5 / pp.90-97 / 2021
  • The study analyzes the scientific publishing of the faculty members of Northern Border University in both the Web of Science and SCOPUS databases, examining publishing indicators and trends for the period 2008-2020. A bibliometric research methodology was applied to obtain a full and detailed account of publication indicators under the two databases, including analysis by subject, time, quantity, authors, languages, open access journals, and information forms, as well as the most productive authors, the most published scientific journals, and the scientific bodies most involved with the university. The most important findings are a sharp increase in scientific publishing starting in 2015, greater growth in the sciences than in other disciplines, and the fact that the vast majority of publications are articles, with English predominating over the other languages.
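The quantitative indicators such a bibliometric analysis reports reduce to grouped counts over publication records. The miniature record set below is invented for illustration; the real data cover 2008-2020 and many more fields.

```python
from collections import Counter

# Hypothetical miniature record set standing in for the exported
# Web of Science / SCOPUS records.
pubs = [{"year": 2014, "type": "article"},
        {"year": 2016, "type": "article"},
        {"year": 2016, "type": "review"},
        {"year": 2017, "type": "article"}]

by_year = Counter(p["year"] for p in pubs)          # temporal indicator
by_type = Counter(p["type"] for p in pubs)          # information-form indicator
since_2015 = sum(c for y, c in by_year.items() if y >= 2015)
print(by_year, by_type, since_2015)
```

The same grouping applied to author, language, and journal fields yields the remaining indicators the study reports.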

The Migration of Data Between Heterogeneous RDBs Using Web Service in Intranet (인트라넷에서의 웹 서비스를 이용한 이기종 RDB간의 데이터 이주)

  • Park, Yoo-Shin;Jung, Kye-Dong;Choi, Young-Keun
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.11B / pp.758-771 / 2005
  • The information systems of today's corporations store and manage large volumes of data, generated in the course of business, in various individual databases. To migrate the data stored in these individual databases, corporations adopt technologies such as EAI, MDR, and DW. However, these technologies not only incur introduction and maintenance costs but also suffer from the vendor-specific, heterogeneous environments each requires. In this paper, to solve the problems of these existing technologies, we design a data migration system that migrates source data and semantic constraints between heterogeneous relational databases based on web services. By using web services, corporations can reduce introduction and maintenance costs because the existing web environment is reused. Each system can migrate XML-based data independently of platform, system environment, and implementation language.
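The platform-neutral step in such a migration is serializing rows from the source RDB into XML (the form a web service would exchange) and loading them into the target. The sketch below uses SQLite on both ends and an invented `emp` schema purely for illustration; a real deployment would differ in the databases, the web service transport, and the constraint handling.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Source database with some rows to migrate (hypothetical schema).
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE emp (id INTEGER, name TEXT)")
src.executemany("INSERT INTO emp VALUES (?, ?)", [(1, "Kim"), (2, "Park")])

def export_xml(conn, table):
    """Serialize a table's rows to XML, as a web service response would."""
    root = ET.Element("rows")
    for rid, name in conn.execute(f"SELECT id, name FROM {table}"):
        ET.SubElement(root, "row", id=str(rid), name=name)
    return ET.tostring(root, encoding="unicode")

def import_xml(conn, table, xml_text):
    """Load XML rows into the target table."""
    for row in ET.fromstring(xml_text):
        conn.execute(f"INSERT INTO {table} VALUES (?, ?)",
                     (int(row.get("id")), row.get("name")))

dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE emp (id INTEGER, name TEXT)")
import_xml(dst, "emp", export_xml(src, "emp"))
migrated = list(dst.execute("SELECT id, name FROM emp ORDER BY id"))
print(migrated)  # [(1, 'Kim'), (2, 'Park')]
```

Because the interchange format is plain XML, neither side needs to know the other's vendor, platform, or implementation language, which is the independence the paper argues for.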