• Title/Summary/Keyword: Three-tier Database


Development of the Educational Simulator for the Electricity Spot Market in Korea (교육용 현물전력시장 모의 시뮬레이터)

  • Yang, Kwang-Min; Lee, Ki-Song; Park, Jong-Bae; Shin, Joong-Rhin
    • Proceedings of the KIEE Conference / 2004.11b / pp.94-96 / 2004
  • This paper discusses the development of an educational simulator for the electricity spot market in Korea. Interaction between lecturers and users can be greatly enhanced through web-based programs, which improves students' learning effectiveness on the electricity spot market. A difficulty in developing web-based application programs, however, is that numerous unspecified users may access them concurrently. To overcome this multi-user problem and to develop the educational simulator, we have revised the system architecture, the modeling of the application programs, and the database that efficiently and effectively manages the complex data sets related to an electricity spot market. The developed application program is composed of three physical tiers, where the middle tier is logically divided into two kinds of application programs. The divided application programs are interconnected using web services based on XML (Extensible Markup Language) and HTTP (Hypertext Transfer Protocol), which enable distributed computing.
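The XML-over-HTTP interconnection between the two middle-tier programs can be sketched as a simple message exchange. This is a hypothetical illustration, not the paper's actual interface: the element names (`ClearingPriceRequest`, `Hour`, `Price`) and the placeholder price lookup are assumptions.

```python
import xml.etree.ElementTree as ET

# Hypothetical web-service message: one middle-tier program asks the other
# for the market-clearing price of a given trading hour.
def build_request(hour: int) -> bytes:
    root = ET.Element("ClearingPriceRequest")
    ET.SubElement(root, "Hour").text = str(hour)
    return ET.tostring(root, encoding="utf-8")

def handle_request(payload: bytes) -> bytes:
    req = ET.fromstring(payload)
    hour = int(req.findtext("Hour"))
    # Placeholder lookup; a real simulator would query the market database.
    price = {13: 61.5}.get(hour, 0.0)
    resp = ET.Element("ClearingPriceResponse")
    ET.SubElement(resp, "Hour").text = str(hour)
    ET.SubElement(resp, "Price").text = str(price)
    return ET.tostring(resp, encoding="utf-8")

response = handle_request(build_request(13))
print(ET.fromstring(response).findtext("Price"))  # 61.5
```

In a deployed three-tier system the request bytes would travel over HTTP between the two middle-tier programs; here the call is made in-process to keep the sketch self-contained.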


Chemical Risk Assessment Screening Tool of a Global Chemical Company

  • Tjoe-Nij, Evelyn; Rochin, Christophe; Berne, Nathalie; Sassi, Alessandro; Leplay, Antoine
    • Safety and Health at Work / v.9 no.1 / pp.84-94 / 2018
  • Background: This paper describes a simple-to-use and reliable screening tool called Critical Task Exposure Screening (CTES), developed by a chemical company. The tool assesses whether the exposure to a chemical for a task is likely to be within acceptable levels. Methods: CTES is a Microsoft Excel tool in which the inhalation risk score is calculated by relating the exposure estimate to the corresponding occupational exposure limit (OEL) or occupational exposure band (OEB). The inhalation exposure is estimated for tasks by preassigned ART1.5 activity classes and modifying factors. Results: CTES requires few inputs. The toxicological data, including OELs, OEBs, and vapor pressure, are read from a database. Once the substance is selected, the user specifies its concentration and then chooses the task description and its duration. CTES has three outputs that may trigger follow-up: (1) the inhalation risk score; (2) identification of the skin hazard, with skin warnings for local and systemic adverse effects; and (3) the status for carcinogenic, mutagenic, or reprotoxic effects. Conclusion: The tool provides an effective way to rapidly screen low-concern tasks and quickly identifies tasks involving substances that will need further review, while retaining the appropriate conservatism. This tool shows that the higher-tier ART1.5 inhalation exposure assessment model can be included effectively in a screening tool. After two years of worldwide extensive use within the company, CTES is well perceived by its users, including shop-floor management, and fulfills its purpose as a screening tool.
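The core calculation described above, a risk score relating an exposure estimate to the OEL, can be sketched as follows. This is a minimal illustration of the general approach, not the company's actual CTES model: the base emission value, the concentration/duration scaling, and all numbers are assumptions.

```python
# Hypothetical sketch of an inhalation risk score in the CTES spirit:
# the task exposure estimate divided by the occupational exposure limit (OEL).
def inhalation_risk_score(exposure_estimate_mg_m3: float, oel_mg_m3: float) -> float:
    """A score above 1 means the estimate exceeds the OEL and triggers follow-up."""
    return exposure_estimate_mg_m3 / oel_mg_m3

def exposure_estimate(base_mg_m3: float, concentration_frac: float,
                      duration_min: float) -> float:
    # Toy modifying factors: scale a base task emission by the substance
    # concentration and by task duration relative to an 8-hour (480 min) shift.
    return base_mg_m3 * concentration_frac * (duration_min / 480.0)

est = exposure_estimate(base_mg_m3=10.0, concentration_frac=0.5, duration_min=96.0)
print(inhalation_risk_score(est, oel_mg_m3=2.0))  # 0.5
```

A score of 0.5 here would fall below the follow-up threshold; the real tool additionally reports skin-hazard warnings and carcinogenic/mutagenic/reprotoxic status from its database.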

Implementation of a Parallel Web Crawler for the Odysseus Large-Scale Search Engine (오디세우스 대용량 검색 엔진을 위한 병렬 웹 크롤러의 구현)

  • Shin, Eun-Jeong; Kim, Yi-Reun; Heo, Jun-Seok; Whang, Kyu-Young
    • Journal of KIISE: Computing Practices and Letters / v.14 no.6 / pp.567-581 / 2008
  • As the size of the web grows explosively, search engines are becoming increasingly important as the primary means of retrieving information from the Internet. A search engine periodically downloads web pages and stores them in a database to provide users with up-to-date search results. The web crawler is a program that downloads and stores web pages for this purpose. A large-scale search engine uses a parallel web crawler to retrieve the collection of web pages, maximizing the download rate. However, neither the architecture nor the experimental analysis of parallel web crawlers has been fully discussed in the literature. In this paper, we propose an architecture for the parallel web crawler and discuss implementation issues in detail. The proposed parallel web crawler is based on the coordinator/agent model, using multiple machines to download web pages in parallel. The coordinator/agent model consists of multiple agent machines that collect web pages and a single coordinator machine that manages them. The parallel web crawler consists of three components: a crawling module for collecting web pages, a converting module for transforming the web pages into a database-friendly format, and a ranking module for rating web pages based on their relative importance. We explain each component of the parallel web crawler and its implementation in detail. Finally, we conduct extensive experiments to analyze the effectiveness of the parallel web crawler. The experimental results confirm the merit of our architecture: the proposed parallel web crawler is scalable in the number of web pages to crawl and the number of machines used.
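The coordinator/agent division of work described above can be sketched as a coordinator that partitions URLs across agent machines. This is an illustrative sketch, not the paper's implementation: the per-host byte-sum hash and the in-memory queues are assumptions standing in for real machines and network communication.

```python
from urllib.parse import urlparse

# Minimal sketch of the coordinator/agent model: the coordinator assigns each
# URL to one of N agents, hashing the host name so that all pages of one site
# go to the same agent. Agent "machines" are modeled as plain in-memory queues.
class Coordinator:
    def __init__(self, num_agents: int):
        self.queues = [[] for _ in range(num_agents)]

    def assign(self, url: str) -> int:
        host = urlparse(url).netloc
        # Deterministic toy hash (sum of host bytes) instead of a real
        # partitioning function; returns the index of the chosen agent.
        agent = sum(host.encode()) % len(self.queues)
        self.queues[agent].append(url)
        return agent

coord = Coordinator(num_agents=3)
for u in ["http://a.com/1", "http://a.com/2", "http://b.org/x"]:
    coord.assign(u)
# Pages from the same host always land in the same agent's queue.
assert coord.assign("http://a.com/3") == coord.assign("http://a.com/4")
```

Keeping each host on one agent is a common crawler design choice because it lets that agent enforce per-site politeness delays and reuse connections; the paper's crawling, converting, and ranking modules would then run on each agent over its own queue.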