• Title/Summary/Keyword: multiple query

Search Results: 254

Application of OGC WPS 2.0 to Geo-Spatial Web Services (공간정보 웹 서비스에서 OGC WPS 2.0 적용)

  • YOON, Goo-Seon;LEE, Ki-Won
    • Journal of the Korean Association of Geographic Information Studies / v.19 no.3 / pp.16-28 / 2016
  • Advancing geo-spatial web technologies and their applications requires compatibility and interoperability across heterogeneous browsers and platforms. Reducing the common or supporting components needed for web-based system development is also necessary. If properly understood and applied, OGC-based standards can serve as effective solutions to these problems. Thus, OGC standards are central to the design and development of web-based geo-spatial systems, and are particularly applicable to web services that contain data processing modules. However, the application of OGC WPS 2.0 is at an early stage compared with other OGC standards; this study therefore describes a test implementation of a web-based geo-spatial processing system built on OGC WPS 2.0, focused on its asynchronous processing functionality. Although only a binary thresholding algorithm was tested in this system, further experiments with other processing modules can handle requests for many types of processing from multiple users. The client system of the implemented product was based on open-source libraries such as jQuery and OpenLayers, and the server side, running on the Spring framework, also used various open-source components such as the ZOO project and GeoServer. The results of geo-spatial image processing with this system imply the further applicability and extensibility of OGC WPS 2.0 in user interfaces for practical applications.
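As a concrete illustration of the asynchronous flow that WPS 2.0 standardizes, the following Python sketch submits an Execute request in async mode and then polls GetStatus. The endpoint URL and process identifier are hypothetical placeholders, not the system described above.

```python
# A minimal sketch of WPS 2.0 asynchronous execution, assuming a hypothetical
# endpoint and process identifier; XML handling is reduced to a regex.
import re
import time
import requests

WPS_ENDPOINT = "http://example.org/wps"  # hypothetical WPS 2.0 endpoint

execute_request = """<?xml version="1.0" encoding="UTF-8"?>
<wps:Execute service="WPS" version="2.0.0" response="document" mode="async"
             xmlns:wps="http://www.opengis.net/wps/2.0"
             xmlns:ows="http://www.opengis.net/ows/2.0">
  <ows:Identifier>binary-thresholding</ows:Identifier>
  <wps:Input id="threshold"><wps:Data>128</wps:Data></wps:Input>
</wps:Execute>"""

# In async mode the server answers immediately with a StatusInfo document
# carrying a JobID instead of the processing result.
resp = requests.post(WPS_ENDPOINT, data=execute_request,
                     headers={"Content-Type": "text/xml"})
job_id = re.search(r"<wps:JobID>([^<]+)</wps:JobID>", resp.text).group(1)

# Poll GetStatus until the job reports a terminal state.
while True:
    status = requests.get(WPS_ENDPOINT, params={
        "service": "WPS", "version": "2.0.0",
        "request": "GetStatus", "jobid": job_id})
    if "Succeeded" in status.text or "Failed" in status.text:
        break
    time.sleep(1)
```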

TEST DB: The intelligent data management system for Toxicogenomics (독성유전체학 연구를 위한 지능적 데이터 관리 시스템)

  • Lee, Wan-Seon;Jeon, Ki-Seon;Um, Chan-Hwi;Hwang, Seung-Young;Jung, Jin-Wook;Kim, Seung-Jun;Kang, Kyung-Sun;Park, Joon-Suk;Hwang, Jae-Woong;Kang, Jong-Soo;Lee, Gyoung-Jae;Chon, Kum-Jin;Kim, Yang-Suk
    • Proceedings of the Korean Society for Bioinformatics Conference / 2003.10a / pp.66-72 / 2003
  • Toxicogenomics is now emerging as one of the most important genomics applications, because toxicity tests based on gene expression profiles are expected to be more precise and efficient than the current histopathological approach in the pre-clinical phase. One of the challenging points in Toxicogenomics is the construction of an intelligent database management system that can deal with very heterogeneous and complex data from many different experimental and information sources. Here we present a new Toxicogenomics database developed as part of the 'Toxicogenomics for Efficient Safety Test (TEST) project'. The TEST database is especially focused on the connectivity of heterogeneous data and on an intelligent query system that enables users to draw inspiration from complex data sets. The database deals with four kinds of information: compound information, histopathological information, gene expression information, and annotation information. Currently, the TEST database holds Toxicogenomics information for 12 molecules in 4 efficacy classes: anticancer, antibiotic, hypotension, and gastric ulcer. Users can easily access all kinds of detailed information about these compounds, and can simultaneously check the confidence of retrieved information by browsing the quality of the experimental data and the toxicity grades of genes generated from our toxicology annotation system. The intelligent query system is designed for multiple comparisons of experimental data, because comparing experimental data by histopathological toxicity, compound, efficacy, and individual variation is crucial to finding common genetic characteristics. The presented system can be a good information source for studying toxicology mechanisms at the genome-wide level and can also be utilized for the design of toxicity test chips.
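To make the idea of such multi-way comparisons concrete, here is a minimal sketch over a hypothetical relational layout of the four information types; the table and column names are illustrative assumptions, not the actual TEST DB schema.

```python
# A toy schema linking compound, histopathological, and expression data,
# queried for cross-compound comparison within one efficacy class.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE compound   (id INTEGER PRIMARY KEY, name TEXT, efficacy TEXT);
CREATE TABLE histopath  (compound_id INTEGER, toxicity_grade INTEGER);
CREATE TABLE expression (compound_id INTEGER, gene TEXT, log_ratio REAL);
INSERT INTO compound   VALUES (1, 'cmpd-A', 'anticancer'), (2, 'cmpd-B', 'anticancer');
INSERT INTO histopath  VALUES (1, 2), (2, 0);
INSERT INTO expression VALUES (1, 'TP53', 1.8), (2, 'TP53', 0.1);
""")

# Compare expression of the same gene across compounds that share an
# efficacy class but differ in histopathological toxicity grade.
query = """
SELECT e.gene, c.name, h.toxicity_grade, e.log_ratio
FROM expression e
JOIN compound  c ON c.id = e.compound_id
JOIN histopath h ON h.compound_id = c.id
WHERE c.efficacy = ?
ORDER BY e.gene, h.toxicity_grade
"""
for row in conn.execute(query, ("anticancer",)):
    print(row)
```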

An Improved Algorithm for Building Multi-dimensional Histograms with Overlapped Buckets (중첩된 버킷을 사용하는 다차원 히스토그램에 대한 개선된 알고리즘)

  • 문진영;심규석
    • Journal of KIISE:Databases / v.30 no.3 / pp.336-349 / 2003
  • Histograms have been receiving a lot of attention recently. They are commonly utilized in commercial database systems to capture attribute value distributions for query optimization. Recently, with the advent of research on approximate query answering and stream data, interest in histograms has been spreading widely. The simplest approach assumes that the attributes in relational tables are independent, under the AVI (Attribute Value Independence) assumption. However, this assumption is generally not valid for real-life datasets. To alleviate the problem of approximating multi-dimensional data with multiple one-dimensional histograms, several techniques such as wavelets, random sampling, and multi-dimensional histograms have been proposed. Among them, GENHIST is a multi-dimensional histogram designed to approximate the distribution of data with real-valued attributes. It uses overlapping buckets, which allow more efficient approximation of the data distribution. In this paper, we propose a scheme, OPT, that determines the optimal frequencies of overlapped buckets so as to minimize the SSE (Sum of Squared Errors). A histogram with overlapping buckets is first generated by GENHIST, and OPT then improves the histogram by calculating the optimal frequency for each bucket. Our experimental results confirm that our technique can significantly improve the accuracy of histograms generated by GENHIST.
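The core of OPT can be stated as a least-squares problem: with the overlapping bucket regions fixed by GENHIST, the bucket frequencies that minimize the SSE solve a linear system. The sketch below illustrates this in one dimension with synthetic data; it is an assumption-laden toy, not the paper's algorithm.

```python
# Given fixed (overlapping) bucket regions, choose bucket frequencies that
# minimize the SSE between the histogram estimate and the true counts on a
# grid of cells. The 1-D setting and the data below are illustrative only.
import numpy as np

cells = np.linspace(0.0, 1.0, 20)                 # cell centers of the domain
true_freq = np.exp(-((cells - 0.4) ** 2) / 0.02)  # synthetic distribution

# Overlapping buckets as (lo, hi) intervals; a cell's estimate is the sum of
# the frequencies of all buckets that cover it.
buckets = [(0.0, 0.6), (0.2, 0.8), (0.3, 0.5)]
A = np.array([[1.0 if lo <= c <= hi else 0.0 for (lo, hi) in buckets]
              for c in cells])

# Least-squares solve: frequencies minimizing ||A f - true_freq||^2, the SSE.
f, *_ = np.linalg.lstsq(A, true_freq, rcond=None)
print("optimal bucket frequencies:", f)
print("SSE:", float(np.sum((A @ f - true_freq) ** 2)))
```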

A Single Index Approach for Subsequence Matching that Supports Normalization Transform in Time-Series Databases (시계열 데이터베이스에서 단일 색인을 사용한 정규화 변환 지원 서브시퀀스 매칭)

  • Moon Yang-Sae;Kim Jin-Ho;Loh Woong-Kee
    • The KIPS Transactions:PartD / v.13D no.4 s.107 / pp.513-524 / 2006
  • Normalization transform is very useful for finding the overall trend of time-series data, since it enables finding sequences with similar fluctuation patterns. The previous subsequence matching method with normalization transform, however, incurs index overhead both in storage space and in update maintenance, since it must build multiple indexes to support query sequences of arbitrary length. To solve this problem, we propose a single-index approach to normalization-transformed subsequence matching that supports query sequences of arbitrary length. For the single-index approach, we first introduce the notion of the inclusion-normalization transform by generalizing the original definition of normalization transform. The inclusion-normalization transform normalizes a window by using the mean and the standard deviation of a subsequence that includes the window. Next, we formally prove the correctness of the proposed method, which uses the inclusion-normalization transform for normalization-transformed subsequence matching. We then propose subsequence matching and index building algorithms to implement the proposed method. Experimental results on real stock data show that our method improves performance by up to 2.5~2.8 times over the previous method. Our approach has the additional advantage of generalizing to many other sorts of transforms besides normalization transform. Therefore, we believe our work will be widely used in many sorts of transform-based subsequence matching methods.
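A minimal sketch of the inclusion-normalization transform described above: the window is normalized with the mean and standard deviation of a longer subsequence that includes it, rather than with its own statistics. The data here is a made-up example.

```python
import numpy as np

def inclusion_normalize(window, including_subsequence):
    """Normalize `window` with the mean/std of a subsequence containing it."""
    mu = np.mean(including_subsequence)
    sigma = np.std(including_subsequence)
    return (np.asarray(window, dtype=float) - mu) / sigma

# Example: normalize the window seq[2:6] with the statistics of seq[0:8],
# which includes it.
seq = np.array([3.0, 4.0, 6.0, 5.0, 7.0, 8.0, 6.0, 5.0])
print(inclusion_normalize(seq[2:6], seq[0:8]))
```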

An Enhancing Technique for Scan Performance of a Skip List with MVCC (MVCC 지원 스킵 리스트의 범위 탐색 향상 기법)

  • Kim, Leeju;Lee, Eunji
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.5 / pp.107-112 / 2020
  • Recently, unstructured data has been rapidly produced by web-based services. NoSQL systems and key-value stores that process unstructured data as key-value pairs are widely used in various applications. In this paper, we study the skip list used for in-memory data management in an LSM-tree based key-value store. The skip list used in the key-value store is an insertion-based skip list that does not allow overwriting and processes all changes only by insertion. This behavior can support Multi-Version Concurrency Control (MVCC), which can process multiple read/write requests simultaneously through snapshot isolation. However, since duplicate keys exist in the skip list, performance degrades significantly due to unnecessary node visits during a list traversal. In particular, serious overhead occurs for a range query or scan operation that searches a specific range of data at once. This paper proposes a newly designed Stride SkipList to reduce this overhead. The stride skip list additionally maintains an indexing pointer to the last node of the same key so as to avoid unnecessary node visits. The proposed scheme is implemented using RocksDB's in-memory component, and the performance evaluation shows that the scan operation improves by up to 350 times over the existing skip list for various workloads.
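The stride-pointer idea can be pictured on a single-level, insertion-only list with duplicate keys; the sketch below is an illustrative simplification under that assumption, not RocksDB's actual skip list code.

```python
# In an insertion-only list, each key may have several versions. Each node
# keeps a "stride" pointer to the last node of the same key, so a scan can
# jump over the duplicate run in one hop.
class Node:
    def __init__(self, key, value, seq):
        self.key, self.value, self.seq = key, value, seq
        self.next = None
        self.last_same_key = self  # stride pointer to the run's last node

def scan(head):
    """Visit each distinct key once: report the first (newest) version,
    then jump over the remaining duplicates via the stride pointer."""
    node = head
    while node is not None:
        yield node.key, node.value          # newest version comes first
        node = node.last_same_key.next      # skip older versions in one hop

# Build a toy list: key "a" with three versions, then key "b".
a3, a2, a1 = Node("a", "v3", 3), Node("a", "v2", 2), Node("a", "v1", 1)
b1 = Node("b", "v1", 1)
a3.next, a2.next, a1.next = a2, a1, b1
for n in (a3, a2, a1):
    n.last_same_key = a1                    # all "a" nodes point to the run's end
print(list(scan(a3)))                       # [('a', 'v3'), ('b', 'v1')]
```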

Design and Algorithm Implementation of a Distributed Information Retrieval System using Sequential Transferring Method(STM) (순차적 전달방식(STM)을 이용한 분산정보검색시스템의 설계 및 알고리즘 구현)

  • Yoon, Hee-Byung;Kim, Yong-Han;Kim, Hwa-Soo
    • The KIPS Transactions:PartB / v.11B no.5 / pp.603-610 / 2004
  • Distributed information retrieval systems centrally controlled by a mediator or meta search engine suffer from heavy traffic congestion and increased cost, owing to the complicated algorithms required for central control and the installation of hardware. To solve this problem, an approach is needed in which retrieval systems have independent retrieval functionality and can cooperate with each other without dependency. In this paper, we review a few works on distributed information retrieval systems, and then implement the algorithm and design the framework of a distributed information retrieval system using the sequential transferring method (STM), comprising multiple information retrieval systems separated from central control. To this end, we first present a web partition policy that divides and manages the web logically, and we present a sequential query processing scheme, illustrated by passing queries through numbered information retrieval systems. We then present a 3-layered framework structure, with the functions and modules of each layer suited to an information retrieval system. Finally, for an effective implementation of the STM algorithm, we analyze the module structure, present its pseudocode description, and show that the proposed STM algorithm works smoothly by demonstrating the sequential query transfer process between servers.
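The sequential transferring idea itself is simple to sketch: the query visits the numbered retrieval servers one after another, each appending its local hits, with no central mediator. The server objects below are in-process stand-ins for networked retrieval systems, so the networking layer is assumed away.

```python
# Each server holds an independent local index and answers queries on its own.
class RetrievalServer:
    def __init__(self, server_id, local_index):
        self.server_id = server_id
        self.local_index = local_index      # term -> list of documents

    def search(self, term):
        return self.local_index.get(term, [])

def sequential_transfer(servers, term):
    """Pass the query through servers in numbered order, accumulating results."""
    results = []
    for server in servers:                  # sequential transfer, no mediator
        results.extend(server.search(term))
    return results

servers = [
    RetrievalServer(1, {"query": ["doc-1a"]}),
    RetrievalServer(2, {"query": ["doc-2a", "doc-2b"]}),
    RetrievalServer(3, {}),
]
print(sequential_transfer(servers, "query"))
```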

Application of Google Search Queries for Predicting the Unemployment Rate for Koreans in Their 30s and 40s (한국 30~40대 실업률 예측을 위한 구글 검색 정보의 활용)

  • Jung, Jae Un;Hwang, Jinho
    • Journal of Digital Convergence / v.17 no.9 / pp.135-145 / 2019
  • A prolonged recession has caused the youth unemployment rate in Korea to remain at a high level of approximately 10% for years. Recently, the number of unemployed Koreans in their 30s and 40s has shown an upward trend. To expand the government's employment promotion and unemployment benefits from youth-centered policies to diverse age groups, including people in their 30s and 40s, prediction models for different age groups are required. Thus, we aimed to develop unemployment prediction models for specific age groups (30s and 40s) using unemployment rates provided by Statistics Korea and Google search queries related to them. We first estimated multiple linear regressions (Model 1) using a seasonal autoregressive integrated moving average approach with the relevant unemployment rates. Then, we introduced Google search queries to obtain improved models (Model 2). For both groups, Model 2, which additionally uses web queries, outperformed Model 1 during both the training and predictive periods. This result indicates that web search queries are still significant for improving unemployment prediction models for Koreans. This study needs to be extended for practical application, but it will contribute to obtaining age-wise unemployment predictions.
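A minimal sketch of the two-stage setup, assuming statsmodels' SARIMAX as the seasonal ARIMA estimator and a synthetic query series; the data and model orders are placeholders, not the study's.

```python
# Model 1: seasonal ARIMA on the unemployment rate alone.
# Model 2: the same specification plus a Google-query series as an
# exogenous regressor.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
n = 120                                     # ten years of monthly data
unemployment = (3.0 + 0.5 * np.sin(2 * np.pi * np.arange(n) / 12)
                + rng.normal(0, 0.1, n))
google_queries = unemployment + rng.normal(0, 0.2, n)   # correlated web queries

m1 = SARIMAX(unemployment, order=(1, 0, 1),
             seasonal_order=(1, 0, 0, 12)).fit(disp=False)
m2 = SARIMAX(unemployment, exog=google_queries, order=(1, 0, 1),
             seasonal_order=(1, 0, 0, 12)).fit(disp=False)
print("AIC without queries:", m1.aic, "with queries:", m2.aic)
```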

A Study on Effective Real Estate Big Data Management Method Using Graph Database Model (그래프 데이터베이스 모델을 이용한 효율적인 부동산 빅데이터 관리 방안에 관한 연구)

  • Ju-Young, KIM;Hyun-Jung, KIM;Ki-Yun, YU
    • Journal of the Korean Association of Geographic Information Studies / v.25 no.4 / pp.163-180 / 2022
  • Real estate data can be regarded as big data: the amount of real estate data is growing rapidly, and real estate interacts with various fields such as the economy, law, and crowd psychology, while being structured in complex data layers. Existing Relational Databases tend to have difficulty handling the various relationships needed to manage real estate big data, because they have a fixed schema and are only vertically extendable. To overcome these limitations, this study stores real estate data in a Graph Database and verifies its usefulness. As the research method, we modeled various real estate data both on MySQL, one of the most widely used Relational Databases, and on Neo4j, one of the most widely used Graph Databases. We then collected real estate questions used in real life and selected 9 different questions to compare the query times on each database. As a result, Neo4j showed constant performance even on queries involving multiple JOIN statements and inferences over various relationships, whereas the query time of MySQL increased rapidly. From this result, we find that a Graph Database such as Neo4j is more efficient for real estate big data with various relationships. We expect the real estate Graph Database to be used in predicting real estate price factors and in querying AI speakers about real estate.
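To illustrate the difference in query styles behind this comparison, the sketch below contrasts a multi-JOIN SQL query with an equivalent Cypher pattern traversal; the table, label, and property names are illustrative assumptions, not the study's schema.

```python
# The relational form expresses relationships through JOINs on foreign keys;
# the graph form expresses them directly as a path pattern.
sql_query = """
SELECT b.name, p.price
FROM building b
JOIN contract c ON c.building_id = b.id
JOIN price    p ON p.contract_id = c.id
JOIN district d ON d.id = b.district_id
WHERE d.name = 'Gangnam-gu';
"""

cypher_query = """
MATCH (d:District {name: 'Gangnam-gu'})<-[:LOCATED_IN]-(b:Building)
      -[:HAS_CONTRACT]->(c:Contract)-[:HAS_PRICE]->(p:Price)
RETURN b.name, p.price;
"""

# With the official Neo4j Python driver, the Cypher version would run as:
# from neo4j import GraphDatabase
# driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "pw"))
# with driver.session() as session:
#     rows = session.run(cypher_query).data()
```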

Error-Tolerant Music Information Retrieval Method Using Query-by-Humming (허밍 질의를 이용한 오류에 강한 악곡 정보 검색 기법)

  • 정현열;허성필
    • The Journal of the Acoustical Society of Korea / v.23 no.6 / pp.488-496 / 2004
  • This paper describes a music information retrieval system that uses humming as the key for retrieval. Humming is an easy way for the user to input a melody. However, several problems with humming degrade retrieval performance. One problem is the human factor: people sometimes do not sing accurately, especially if they are inexperienced or unaccompanied. Another problem arises from signal processing. Therefore, a music information retrieval method should be sufficiently robust to surmount various humming errors and signal processing problems. A retrieval system has to extract pitch from the user's humming, but pitch extraction is not perfect: it often captures half or double pitches, even if the extraction algorithm takes the continuity of the pitch into account. Considering these problems, we propose a system that takes multiple pitch candidates into account. In addition to the frequencies of the pitch candidates, the confidence measures obtained from their powers are taken into consideration as well. We also propose a three-dimensional extension of the conventional DP algorithm, so that multiple pitch candidates can be treated. Moreover, in the proposed algorithm, DP paths are changed dynamically to take the deltaPitches and IOIratios of input and reference notes into account, in order to handle notes being split or unified. We carried out an evaluation experiment to compare the proposed system with a conventional one. In the experiment, the proposed method gave better retrieval performance than the conventional system.
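A simplified sketch of alignment with multiple pitch candidates: a standard DP over input and reference notes, where each input note contributes the best of its (pitch, confidence) candidates. The full method extends this to three dimensions with dynamic DP paths for split and unified notes, which this toy omits, and the cost weighting here is an illustrative assumption.

```python
# DP alignment where each input note carries several (pitch, confidence)
# candidates, e.g. a spurious half-pitch from imperfect pitch extraction.
import numpy as np

def align(candidates, reference):
    """candidates: per input note, a list of (pitch, confidence) pairs;
    reference: list of reference pitches. Returns the minimum alignment cost."""
    n, m = len(candidates), len(reference)
    dp = np.full((n + 1, m + 1), float("inf"))
    dp[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Best candidate for input note i against reference note j,
            # weighting pitch distance more when confidence is low.
            local = min(abs(p - reference[j - 1]) * (2.0 - c)
                        for p, c in candidates[i - 1])
            dp[i, j] = local + min(dp[i-1, j-1], dp[i-1, j], dp[i, j-1])
    return dp[n, m]

hum = [[(60.2, 0.9), (48.1, 0.4)],   # half-pitch error as low-confidence candidate
       [(62.0, 0.8)],
       [(64.1, 0.7), (52.0, 0.3)]]
print(align(hum, [60, 62, 64]))
```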

Efficient Multiple Joins using the Synchronization of Page Execution Time in Limited Processors Environments (한정된 프로세서 환경에서 페이지 실행시간 동기화를 이용한 효율적인 다중 결합)

  • Lee, Kyu-Ock;Weon, Young-Sun;Hong, Man-Pyo
    • Journal of KIISE:Databases / v.28 no.4 / pp.732-741 / 2001
  • In relational database systems, the join operation is one of the most time-consuming query operations. Many parallel join algorithms have been developed to reduce the execution time. The multiple hash join algorithm using an allocation tree is one of the most efficient. However, processing each node of the allocation tree may incur a delay in the tuple-probing phase, caused by the difference between the time to read one page of the outer relation and the time to process the page already read. This delay problem was solved by the concept of synchronization of page execution time, which we had proposed previously. In this paper, the performance improvements at each node of the allocation tree are extended to the whole allocation tree, and the corresponding performance evaluation is carried out. In addition, we propose an efficient algorithm for multiple hash joins in environments with a limited number of processors, based on the relationship between the number of input relations in the allocation tree and the number of processors allocated to the tree. Finally, we analyze the performance by building an analytical cost model and verify its validity through various performance comparisons with the previous method.
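The synchronization idea can be sketched as simple read/probe overlap: while the current page of the outer relation is being probed, the next page is read in the background, so the probing phase does not stall on I/O. The timings and data below are simulated placeholders, not the paper's cost model.

```python
# Overlap reading the next outer-relation page with probing the hash table
# on the current page; the read is simulated with a sleep.
import threading
import time

def read_page(page_no):
    time.sleep(0.01)                        # simulated one-page read time
    return [("outer", page_no, i) for i in range(100)]

def probe(page, hash_table):
    return [t for t in page if t[2] in hash_table]

hash_table = set(range(0, 100, 2))          # toy inner-relation hash table
page = read_page(0)
for page_no in range(1, 4):
    result = {}
    reader = threading.Thread(
        target=lambda: result.update(next_page=read_page(page_no)))
    reader.start()                          # start reading the next page
    matches = probe(page, hash_table)       # processing overlaps the read
    reader.join()                           # synchronize on page boundary
    page = result["next_page"]
    print(f"page {page_no - 1}: {len(matches)} matches")
```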
