• Title/Summary/Keyword: graph engine

Search Result 48

A Web Services-based Client OLAP API and Its Application to Cube Browsing (웹 서비스 기반의 클라이언트 OLAP API와 큐브 브라우징에의 응용 사례)

  • Bae, Eun-Ju; Kim, Myung
    • The KIPS Transactions:PartD / v.10D no.1 / pp.143-152 / 2003
  • XML and Web Services draw a lot of attention as standard technologies for data exchange and integration among heterogeneous platforms. XML/A, which builds on these technologies, is a SOAP-based XML API that facilitates data exchange between a client application and a data analysis engine over the Internet. Because the XML format is used for data exchange, XML/A is platform-independent. However, client application developers have to go through the tedious job of handling the same type of XML documents to download data from the server, and an XML query language is needed to extract data from the XML documents sent by the server. In this paper, we present a high-level client OLAP API, called XMLMD, that lets client application developers in the Windows environment easily use the OLAP services of XML/A. XMLMD consists of the properties and methods needed for OLAP application development. XMLMD is to XML/A what ADOMD is to OLE DB for OLAP. We also present a web OLAP cube browser developed using XMLMD. The browser displays data in various formats such as XML, HTML, Excel, and graphs.
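The abstract above describes a client exchanging SOAP messages with an OLAP engine via XML/A. As a hedged sketch of what an API such as XMLMD must do underneath (the catalog name, cube, and MDX statement below are invented for illustration, not taken from the paper), an XML/A Execute request is an MDX statement wrapped in a SOAP envelope:

```python
# Sketch: building an XML/A Execute SOAP request. The catalog/cube/MDX
# values are illustrative assumptions, not from the paper.
from xml.sax.saxutils import escape

XMLA_NS = "urn:schemas-microsoft-com:xml-analysis"
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_execute_request(mdx: str, catalog: str) -> str:
    """Wrap an MDX statement in an XML/A Execute SOAP envelope."""
    return (
        f'<SOAP-ENV:Envelope xmlns:SOAP-ENV="{SOAP_NS}">'
        "<SOAP-ENV:Body>"
        f'<Execute xmlns="{XMLA_NS}">'
        f"<Command><Statement>{escape(mdx)}</Statement></Command>"
        "<Properties><PropertyList>"
        f"<Catalog>{escape(catalog)}</Catalog>"
        "<Format>Multidimensional</Format>"
        "</PropertyList></Properties>"
        "</Execute>"
        "</SOAP-ENV:Body></SOAP-ENV:Envelope>"
    )

request = build_execute_request(
    "SELECT Measures.MEMBERS ON COLUMNS FROM [Sales]", "FoodMart"
)
```

A high-level client API spares the developer from assembling and parsing such envelopes by hand, which is the tedium the paper targets.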

Design and Implementation of the Postal Route Optimization System Model (우편 경로 최적화 시스템 모델 설계 및 구현)

  • Nam, Sang-U
    • The Transactions of the Korea Information Processing Society / v.3 no.6 / pp.1483-1492 / 1996
  • In this paper, concerning the postal business combined with a GIS (Geographic Information System), we discuss the design and implementation of the PROS (Postal Route Optimization System) model and its main module, the shortest-path generation algorithm, for supporting postal route management. We explain example requirements of a postal route system and suggest an efficient PROS model using our shortest-path generation algorithm. Because the algorithm considers not only the Dijkstra algorithm of graph theory but also a method exploiting the direction property, PROS achieves fast and efficient route search. PROS mainly consists of the Shortest Path Generator, the Isochronal Area Generator, and the Path Rearrangement Generator. It also exploits a GIS engine and a spatial DBMS (Database Management System) for processing map coordinates and geographical features. PROS can be used for managing the postal delivery business, delivery areas and routes, and for route rearrangement. In the near future, it can also be applied to commercial delivery businesses, route guidance and traffic information services, and auto navigation systems with GPS (Global Positioning System).
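The Dijkstra core the abstract refers to can be sketched briefly; the road graph, node names, and edge weights below are illustrative assumptions (PROS additionally prunes candidates by the direction property, which this minimal sketch omits):

```python
# Minimal Dijkstra shortest-path sketch over a weighted adjacency dict.
# Graph contents are invented for illustration, not taken from PROS.
import heapq

def dijkstra(graph, source):
    """Return shortest distances from source to every reachable node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

roads = {
    "post_office": [("A", 4), ("B", 1)],
    "B": [("A", 2), ("C", 5)],
    "A": [("C", 1)],
}
distances = dijkstra(roads, "post_office")
```

Here the best route to C goes post_office → B → A → C (cost 4), not the direct-looking edges; a direction-aware variant would additionally discard edges that head away from the destination.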


Design of Spark SQL Based Framework for Advanced Analytics (Spark SQL 기반 고도 분석 지원 프레임워크 설계)

  • Chung, Jaehwa
    • KIPS Transactions on Software and Data Engineering / v.5 no.10 / pp.477-482 / 2016
  • As advanced analytics on big data becomes indispensable for agile decision-making and tactical planning in enterprises, distributed processing platforms such as Hadoop and Spark, which distribute and handle large volumes of data on multiple nodes, receive great attention in the field. In the Spark platform stack, Spark SQL was recently unveiled to let Spark support a distributed processing framework based on SQL. However, Spark SQL cannot effectively handle advanced analytics involving machine learning and graph processing, in terms of iterative tasks and task allocation. Motivated by these issues, this paper proposes the design of an SQL-based big data optimal processing engine and a processing framework to support advanced analytics in Spark environments. The big data optimal processing engine copes with complex SQL queries that involve multiple parameters and join, aggregation, and sorting operations in a distributed/parallel manner, and the proposed framework optimizes the machine learning process in terms of relational operations.

Assessment technology for spatial interaction of Artificial Monitoring System through 3-dimensional Simulation (3차원 시뮬레이션을 이용한 인위감시체계의 공간대응성능 평가기술)

  • Kim, Suk-Tae
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.2 / pp.1426-1433 / 2015
  • CCTV-based monitoring is an effective measure for suppressing potential crimes and objectively recording events; however, there has been no methodology that can quantitatively compare and assess these effects. This study therefore constructed a methodology and an analysis application that can measure changes in the spatial coverage performance of CCTVs depending on installation alternatives, using 3-dimensional virtual simulation technology. For the analysis, the raster-based Isovist theory was extended to 3 dimensions, and the amount of sight line incident on each point was accumulated. At the same time, the amount of overlapping surveillance from the CCTV cameras connected to each measurement node was accumulated for cross-analysis. By applying example cases and analyzing the results, it was possible to build an analysis application using a collision detection model and to quantify changes in monitoring performance depending on camera positioning alternatives. Moreover, it enabled intuitive review and supplementation by reproducing visible and shadowed areas in a graph.

A Study on the Analysis of Bus Machine Learning in Changwon City Using VIMS and DTG Data (VIMS와 DTG 데이터를 이용한 창원시 시내버스 머신러닝 분석 연구)

  • Park, Jiyang; Jeong, Jaehwan; Yoon, Jinsu; Kim, Sungchul; Kim, Jiyeon; Lee, Hosang; Ryu, Ikhui; Gwon, Yeongmun
    • Journal of Auto-vehicle Safety Association / v.14 no.1 / pp.26-31 / 2022
  • Changwon City has the second highest city bus accident rate, at 79.6. In fact, 250,000 people use the city bus a day in Changwon, and the number of accidents is increasing gradually. In addition, a recent fire in the engine room of a CNG city bus in Changwon has gradually heightened public anxiety. For commercial vehicles, the government conducts periodic safety inspections on a short inspection cycle, but this does not amount to continuous monitoring. For city buses, operation records are monitored using the Digital Tachograph (DTG). Thus driving records, driving behavior, and so on are continuously monitored, but the inspections that ascertain the safety and performance of the vehicles are conducted only every six months, so it is difficult to obtain real-time information on vehicle safety. Therefore, in this study, individual vehicle management solutions are presented through machine learning on inspection results based on driving records and habits, by linking DTG data and Vehicle Inspection Management System (VIMS) data for city buses in Changwon from 2019 to 2020.

Methods for Integration of Documents using Hierarchical Structure based on the Formal Concept Analysis (FCA 기반 계층적 구조를 이용한 문서 통합 기법)

  • Kim, Tae-Hwan; Jeon, Ho-Cheol; Choi, Joong-Min
    • Journal of Intelligence and Information Systems / v.17 no.3 / pp.63-77 / 2011
  • The World Wide Web is a very large distributed digital information space. From its origins in 1991, the web has grown to encompass diverse information resources such as personal home pages, online digital libraries, and virtual museums. Some estimates suggest that the web currently includes over 500 billion pages in the deep web. The ability to search and retrieve information from the web efficiently and effectively is an enabling technology for realizing its full potential. With powerful workstations and parallel processing technology, efficiency is not a bottleneck; in fact, some existing search tools sift through gigabyte-size precompiled web indexes in a fraction of a second. But retrieval effectiveness is a different matter. Current search tools retrieve too many documents, of which only a small fraction are relevant to the user query. Furthermore, the most relevant documents do not necessarily appear at the top of the query output order, and current search tools cannot retrieve the documents related to a retrieved document from a gigantic collection. The most important problem for many current search systems is to increase the quality of search: to provide related documents and to keep the number of unrelated documents in the results as low as possible. To address this problem, CiteSeer proposed ACI (Autonomous Citation Indexing) of articles on the World Wide Web. A "citation index" indexes the links between articles that researchers make when they cite other articles. Citation indexes are very useful for a number of purposes, including literature search and analysis of the academic literature. In this setting, references contained in academic articles are used to give credit to previous work in the literature and provide a link between the "citing" and "cited" articles; a citation index indexes the citations that an article makes, linking the article with the cited works.
Citation indexes were originally designed mainly for information retrieval. The citation links allow navigating the literature in unique ways: papers can be located independently of the language and of the words in the title, keywords, or document, and a citation index allows navigation backward in time (the list of cited articles) and forward in time (which subsequent articles cite the current article?). But CiteSeer cannot index links between articles that researchers do not make explicitly when citing others, and for the same reason it is not easy to scale. All these problems motivate the design of a more effective search system. This paper shows a method that extracts the subject and predicate of each sentence in a document. A document is converted into a tabular form in which each extracted predicate is checked against possible subjects and objects. We build a hierarchical graph of a document using this table and then integrate the graphs of the documents. From the graph of the entire collection, we calculate the area of each document relative to the integrated documents and mark the relations among documents by comparing these areas. We also propose a method for structural integration of documents that retrieves documents from the graph, making it easier for users to find information. We compared the performance of the proposed approaches with the Lucene search engine using standard ranking formulas. As a result, the F-measure is about 60%, which is better by about 15%.
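The F-measure reported at the end of the abstract is the standard harmonic mean of precision and recall over retrieved and relevant document sets; a minimal sketch (the document IDs are invented for illustration):

```python
# F-measure over retrieved vs. relevant sets, as used to compare the
# proposed retrieval with Lucene. Document IDs are illustrative.
def f_measure(retrieved: set, relevant: set) -> float:
    """Harmonic mean of precision and recall."""
    tp = len(retrieved & relevant)   # true positives
    if tp == 0:
        return 0.0
    precision = tp / len(retrieved)  # fraction of retrieved that is relevant
    recall = tp / len(relevant)      # fraction of relevant that was retrieved
    return 2 * precision * recall / (precision + recall)

score = f_measure({"d1", "d2", "d3", "d5"}, {"d1", "d2", "d4"})
# precision 2/4, recall 2/3 -> F = 4/7 ≈ 0.571
```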

A Study on the Implement of AI-based Integrated Smart Fire Safety (ISFS) System in Public Facility

  • Myung Sik Lee; Pill Sun Seo
    • International Journal of High-Rise Buildings / v.12 no.3 / pp.225-234 / 2023
  • Even in this era of digital transformation, we still face many problems in the safety sector that cannot prevent the occurrence or spread of human casualties. In an unexpected emergency, it is often difficult to respond with human physical ability alone. Human casualties continue to occur at construction sites, manufacturing plants, and multi-use facilities used by many people in everyday life. When normal judgment is impossible in an emergency at a site with many remaining safety blind spots, the existing manual guidance methods are hard to apply. New variable guidance technology, which combines artificial intelligence and digital twins, can prevent casualties by processing in real time the large amounts of data needed to derive appropriate countermeasures, going beyond merely identifying which safety accident occurred in an unexpected crisis. When a simple control method that divides and monitors several CCTVs is digitally converted and combined with artificial intelligence and 3D digital twin control technology, an intelligence augmentation (IA) effect can be achieved that strengthens the safety decision-making ability required in real time. With the enforcement of the Serious Disaster Enterprise Punishment Act, the importance of deploying a smart location guidance system that resolves the decision-making delays occurring in safety accidents at various industrial sites and strengthens the real-time decision-making ability of field workers and managers is highlighted. The smart location guidance system that combines artificial intelligence and digital twins consists of AIoT hardware equipment, wireless communication network equipment, and an intelligent software platform.
The intelligent software platform consists of Builder, which supports digital twin modeling; Watch, which provides real-time control based on synchronization between real objects and digital twin models; and Simulator, which supports the development and verification of various safety management scenarios using intelligent agents. The smart location guidance system provides on-site monitoring using IoT equipment, CCTV-linked intelligent image analysis, intelligent operating procedures that support workflow modeling to immediately reflect the needs of the site, situational location guidance, and digital twin virtual fencing access control technology. This paper examines the limitations of traditional fixed, passive guidance methods, analyzes global technology development trends to overcome them, identifies the digital transformation properties required to switch to intelligent variable smart location guidance, and explains the characteristics and components of the AI-based Integrated Smart Fire Safety (ISFS) system for public facilities.

Semantic Process Retrieval with Similarity Algorithms (유사도 알고리즘을 활용한 시맨틱 프로세스 검색방안)

  • Lee, Hong-Joo; Klein, Mark
    • Asia pacific journal of information systems / v.18 no.1 / pp.79-96 / 2008
  • One of the roles of Semantic Web services is to execute dynamic intra-organizational services, including the integration and interoperation of business processes. Since different organizations design their processes differently, the retrieval of similar semantic business processes is necessary in order to support inter-organizational collaborations. Most approaches for finding services that have certain features and support certain business processes have relied on some type of logical reasoning and exact matching. This paper presents our approach of using imprecise matching to expand the results of an exact matching engine when querying the OWL (Web Ontology Language) version of the MIT Process Handbook. The MIT Process Handbook is an electronic repository of best-practice business processes, intended to help people (1) redesign organizational processes, (2) invent new processes, and (3) share ideas about organizational practices. In order to use the MIT Process Handbook for process retrieval experiments, we had to export it into an OWL-based format: we model the Process Handbook meta-model in OWL and export the processes in the Handbook as instances of the meta-model. Next, we need a sizable number of queries and their corresponding correct answers in the Process Handbook. Many previous studies devised artificial data sets composed of randomly generated numbers without real meaning and used subjective ratings for correct answers and similarity values between processes. To generate a semantics-preserving test data set, we create 20 variants of each target process that are syntactically different but semantically equivalent, using mutation operators; these variants represent the correct answers for the target process. We devise diverse similarity algorithms based on the values of process attributes and the structures of business processes.
We use simple similarity algorithms for text retrieval, such as TF-IDF and Levenshtein edit distance, to devise our approaches, and we utilize a tree edit distance measure because semantic processes appear to have a graph structure. We also design similarity algorithms that consider the similarity of process structure, such as part processes, goals, and exceptions. Since we can identify the relationships between a semantic process and its subcomponents, this information can be utilized for calculating similarities between processes. Dice's coefficient and the Jaccard similarity measure are utilized to calculate the portion of overlap between processes in diverse ways. We perform retrieval experiments to compare the performance of the devised similarity algorithms. We measure the retrieval performance in terms of precision, recall, and the F-measure, the harmonic mean of precision and recall. The tree edit distance shows the poorest performance on all measures. TF-IDF and the method incorporating the TF-IDF measure and Levenshtein edit distance show better performance than the other devised methods; these two measures focus on the similarity between the names and descriptions of processes. In addition, we calculate a rank correlation coefficient, Kendall's tau-b, between the number of process mutations and the ranking of similarity values within the mutation sets. In this experiment, similarity measures based on process structure, such as Dice's, Jaccard, and derivatives of these measures, show greater coefficients than measures based on the values of process attributes. However, the Lev-TFIDF-JaccardAll measure, which considers process structure and attribute values together, shows reasonably better performance in these two experiments. For retrieving semantic processes, it is therefore better to consider diverse aspects of process similarity, such as process structure and the values of process attributes.
We generate semantic process data and a dataset for retrieval experiments from the MIT Process Handbook repository. We suggest imprecise query algorithms that expand the retrieval results of an exact matching engine such as SPARQL, and we compare the retrieval performance of the similarity algorithms. As for limitations and future work, we need to perform experiments with other data sets from other domains, and since there are many similarity values from diverse measures, we may find better ways to identify relevant processes by applying these values simultaneously.
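Minimal sketches of three of the similarity measures the abstract names (Levenshtein edit distance over process names, Jaccard and Dice coefficients over sets of process subcomponents); the process names and subcomponent sets below are invented for illustration, not drawn from the Process Handbook:

```python
# Sketches of similarity measures named in the abstract. The example
# process names and subcomponent sets are illustrative assumptions.
def levenshtein(a: str, b: str) -> int:
    """Edit distance via classic row-by-row dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def jaccard(x: set, y: set) -> float:
    """Overlap relative to the union of subcomponents."""
    return len(x & y) / len(x | y)

def dice(x: set, y: set) -> float:
    """Dice's coefficient: overlap relative to the summed set sizes."""
    return 2 * len(x & y) / (len(x) + len(y))

p1 = {"receive order", "check stock", "ship goods"}
p2 = {"receive order", "check stock", "bill customer"}
d = levenshtein("approve order", "approve offer")  # 2 substitutions
```

Combined measures like the Lev-TFIDF-JaccardAll variant mix such string-level and set-level scores, which is why the abstract finds that considering both attribute values and process structure retrieves better.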