Title/Summary/Keyword: Queries

Search Results: 1,273

Real-Time Monitoring and Buffering Strategy of Moving Object Databases on Cluster-based Distributed Computing Architecture (클러스터 기반 분산 컴퓨팅 구조에서의 이동 객체 데이타베이스의 실시간 모니터링과 버퍼링 기법)

  • Kim, Sang-Woo;Jeon, Se-Gil;Park, Seung-Yong;Lee, Chung-Woo;Hwang, Jae-Il;Nah, Yun-Mook
    • Journal of Korea Spatial Information System Society / v.8 no.2 s.17 / pp.75-89 / 2006
  • LBS (Location-Based Service) systems have become a serious subject of research and development with the recent rapid advances in wireless communication and position measurement technologies such as the Global Positioning System. The GALIS (Gracefully Aging Location Information System) architecture has been suggested: a cluster-based distributed computing architecture designed to overcome performance losses and to efficiently handle large volumes of data, on the order of millions of moving objects. The GALIS consists of the SLDS, which manages the current location information of moving objects, and the LLDS, which manages their past location information. In this paper, we implement a monitoring technique for the GALIS prototype that allows dynamic load balancing among multiple computing nodes by tracking the load of each node in real time during location data management and spatio-temporal query processing. We also propose a buffering technique that efficiently manages query results with overlapping query regions to improve the query processing performance of the GALIS. The proposed scheme reduces query processing time by eliminating unnecessary query execution on regions that overlap with previous queries.

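The abstract above describes caching range-query results and re-executing a new query only on its uncovered remainder. Below is a minimal, hypothetical Python sketch of that idea for rectangular regions; the point-set results and the `execute` callback are assumptions for illustration, not the GALIS API.

```python
def intersect(r1, r2):
    """Intersection of two rectangles (x1, y1, x2, y2), or None if disjoint."""
    x1, y1 = max(r1[0], r2[0]), max(r1[1], r2[1])
    x2, y2 = min(r1[2], r2[2]), min(r1[3], r2[3])
    return (x1, y1, x2, y2) if x1 < x2 and y1 < y2 else None

def inside(p, r):
    """True if point p = (x, y) lies inside rectangle r."""
    return r[0] <= p[0] < r[2] and r[1] <= p[1] < r[3]

def subtract(r, hole):
    """Rectangles covering r minus hole (hole must lie within r)."""
    x1, y1, x2, y2 = r
    hx1, hy1, hx2, hy2 = hole
    parts = [(x1, y1, x2, hy1), (x1, hy2, x2, y2),    # below / above the hole
             (x1, hy1, hx1, hy2), (hx2, hy1, x2, hy2)]  # left / right of it
    return [p for p in parts if p[0] < p[2] and p[1] < p[3]]

class QueryBuffer:
    """Caches (region, result) pairs and reuses them for overlapping queries."""

    def __init__(self):
        self.cache = []

    def query(self, region, execute):
        """`execute(rect)` is the (hypothetical) node-level query call."""
        reused, remainder = [], [region]
        for cached_region, cached_result in self.cache:
            for rem in list(remainder):
                overlap = intersect(rem, cached_region)
                if overlap:
                    # Answer the overlapped part from the cache...
                    reused += [p for p in cached_result if inside(p, overlap)]
                    # ...and keep only the still-uncovered pieces.
                    remainder.remove(rem)
                    remainder += subtract(rem, overlap)
        fresh = [p for rem in remainder for p in execute(rem)]
        self.cache.append((region, reused + fresh))
        return reused + fresh
```

The savings the abstract claims come from the last step: only the leftover rectangles in `remainder` are sent to `execute`, so work on previously queried regions is skipped.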

A Non-Uniform Network Split Method for Energy Efficiency in a Data Centric Sensor Network (데이타 중심 센서 네트워크에서 에너지 효율성을 고려한 비균등 네트워크 분할 기법)

  • Kang, Hong-Koo;Kim, Joung-Joon;Han, Ki-Joon
    • Journal of Korea Spatial Information System Society / v.9 no.3 / pp.35-50 / 2007
  • In a data-centric sensor network, the sensor node that stores a datum is determined by the measured data value. Therefore, if the same data value occurs frequently, the energy of the sensor node storing that data is exhausted quickly due to the concentration of load. Moreover, as the sensor network is extended, the communication cost of storing data and processing queries increases, since routing paths grow longer. Existing research, which generally focuses on efficient management of data storage, cannot solve these problems effectively. In this paper, we propose NUNS (Non-Uniform Network Split), a method that can distribute the load among sensor nodes and decrease the communication cost caused by sensor network extension. By dividing the sensor network into non-uniform partitions that minimize the difference in the number of sensor nodes and in split area size, and by storing the data generated in a partition at sensor nodes within that partition, NUNS distributes the load among sensor nodes and decreases the communication cost efficiently. In addition, by dividing each partition into as many non-uniform zones as there are sensor nodes in the partition, again minimizing the difference in split area size, and by allocating each zone as the processing area of one sensor node, NUNS protects any single sensor node from load concentration and avoids unnecessary routing cost.

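The core of the abstract is splitting the network so that partitions equalize node counts rather than area. The sketch below illustrates that with a KD-tree-style median split; the alternating axes, leaf threshold, and coordinate model are simplifying assumptions, not the actual NUNS split criterion (which also balances split area size).

```python
def split(nodes, region, depth=0, max_nodes=4):
    """Recursively partition `region` (x1, y1, x2, y2) so each leaf holds
    roughly the same number of sensor nodes. nodes: list of (x, y)."""
    if len(nodes) <= max_nodes:
        return [(region, nodes)]  # leaf partition: these nodes store its data
    axis = depth % 2                       # alternate between x- and y-splits
    nodes = sorted(nodes, key=lambda p: p[axis])
    mid = len(nodes) // 2
    cut = (nodes[mid - 1][axis] + nodes[mid][axis]) / 2  # non-uniform cut line
    x1, y1, x2, y2 = region
    left = (x1, y1, cut, y2) if axis == 0 else (x1, y1, x2, cut)
    right = (cut, y1, x2, y2) if axis == 0 else (x1, cut, x2, y2)
    return (split(nodes[:mid], left, depth + 1, max_nodes) +
            split(nodes[mid:], right, depth + 1, max_nodes))

nodes = [(1, 1), (2, 8), (3, 2), (7, 3), (8, 8), (9, 1), (2, 2), (6, 7)]
for region, members in split(nodes, (0, 0, 10, 10)):
    print(region, members)   # two partitions of four nodes each
```

Because the cut follows the node median instead of the region midpoint, densely populated areas get smaller partitions, which is what spreads the storage load.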

A Technique of Replacing XML Semantic Cache (XML 시맨틱 캐쉬의 교체 기법)

  • Hong, Jung-Woo;Kang, Hyun-Chul
    • The Journal of Society for e-Business Studies / v.12 no.3 / pp.211-234 / 2007
  • In e-business, XML is a major data format, and it is essential to process queries against XML data efficiently. XML query caching has received much attention for improving query performance, and employing it requires an efficient cache replacement technique. Previous techniques took as the replacement unit either the whole query result or a path in the query result. The former is simple to employ but not efficient, whereas the latter is more efficient and yet the size difference among potential victims is large, so caching efficiency remains limited. In this paper, we propose a new technique in which the element in the query result is the replacement unit, overcoming the limitations of the previous techniques. The proposed technique enhances cache efficiency to a great extent because it avoids evicting a victim far larger than the new item to be cached, the variance in victim size is small, and the unused space in cache storage remains small. We present a technique of XML semantic cache replacement based on a replacement function that takes into account the cache hit ratio, last access time, fetch time, size of the XML semantic region, size of the elements in the XML semantic region, and so on. We implemented a prototype XML semantic cache system that employs the proposed technique and conducted a detailed set of experiments over a LAN environment. The experimental results showed that our proposed technique outperformed the previous ones.

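As a rough illustration of element-level replacement, the sketch below scores cached elements with a benefit function combining the factors the abstract lists (hit ratio, last access time, fetch time, size); the weighting and functional form are hypothetical, not the paper's replacement function.

```python
import time
from dataclasses import dataclass, field

@dataclass
class CachedElement:
    element_id: str
    size: int            # bytes this element occupies in the cache
    fetch_cost: float    # seconds it took to compute/fetch this element
    hits: int = 0
    lookups: int = 0
    last_access: float = field(default_factory=time.monotonic)

    def benefit(self):
        """Higher benefit = keep; lowest benefit = best eviction victim."""
        hit_ratio = self.hits / self.lookups if self.lookups else 0.0
        recency = 1.0 / (1.0 + time.monotonic() - self.last_access)
        # Keep elements that are hit often, were expensive to fetch, were
        # used recently, and are cheap to hold (small size).
        return (hit_ratio * self.fetch_cost * recency) / self.size

def choose_victims(cache, needed_bytes):
    """Evict lowest-benefit elements until `needed_bytes` are freed."""
    victims, freed = [], 0
    for e in sorted(cache, key=CachedElement.benefit):
        if freed >= needed_bytes:
            break
        victims.append(e)
        freed += e.size
    return victims
```

Because elements are small and similar in size, freeing space for a new item evicts only a few low-benefit entries, which is the efficiency argument the abstract makes.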

Index for Efficient Ontology Retrieval and Inference (효율적인 온톨로지 검색과 추론을 위한 인덱스)

  • Song, Seungjae;Kim, Insung;Chun, Jonghoon
    • The Journal of Society for e-Business Studies / v.18 no.2 / pp.153-173 / 2013
  • Ontologies have been gaining increasing interest with the recent rise of the semantic web and related technologies. The focus is mostly on inference query processing, which requires high-level techniques for storing and searching ontologies efficiently, and it has been actively studied in the area of semantic-based searching. The W3C recommendation is to use RDFS and OWL for representing ontologies. However, memory-based editors, inference engines, and triple stores all store an ontology as a simple set of triples, so performance is naturally limited, especially when a large-scale ontology needs to be processed. A variety of studies have proposed algorithms for efficient inference query processing, many of them based on proven relational database technology. However, none of them has succeeded in obtaining the complete set of inference results reflecting the five characteristics of ontology properties. In this paper, we propose a new index structure called the hyper cube index to process inference queries efficiently. Our approach is based on the intuition that an index can speed up query processing when extensive inferencing is required.
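
The hyper cube index itself is not specified in the abstract, but its stated intuition, trading precomputation for fast inference queries, can be illustrated with a much simpler structure: an index that materializes the transitive closure of rdfs:subClassOf so an inference query becomes a lookup. This sketch assumes an acyclic class hierarchy and is not the paper's structure.

```python
from collections import defaultdict

def build_subclass_index(subclass_of):
    """subclass_of: iterable of (child, parent) rdfs:subClassOf pairs.
    Returns index: class -> set of all transitively inferred ancestors.
    Assumes the hierarchy is acyclic."""
    parents = defaultdict(set)
    for child, parent in subclass_of:
        parents[child].add(parent)
    index = {}
    def ancestors(c, seen=()):
        if c not in index:
            acc = set()
            for p in parents[c]:
                if p not in seen:                 # guard against bad input
                    acc |= {p} | ancestors(p, seen + (c,))
            index[c] = acc                        # memoize the closed set
        return index[c]
    for c in list(parents):
        ancestors(c)
    return index

triples = [("Student", "Person"), ("Person", "Agent"), ("Agent", "Thing")]
idx = build_subclass_index(triples)
assert "Thing" in idx["Student"]   # answered by lookup, not runtime reasoning
```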

Design and Implementation of High-dimensional Index Structure for the support of Concurrency Control (필터링에 기반한 고차원 색인구조의 동시성 제어기법의 설계 및 구현)

  • Lee, Yong-Ju;Chang, Jae-Woo;Kim, Hang-Young;Kim, Myung-Joon
    • The KIPS Transactions:PartD / v.10D no.1 / pp.1-12 / 2003
  • Recently, many indexing schemes have been proposed for multimedia data such as image and video data. However, recent database applications, for example data mining and multimedia databases, are required to support multi-user environments, and for indexing schemes to be useful there, a concurrency control algorithm is required. We therefore propose a concurrency control algorithm that can be applied to CBF (the cell-based filtering method), which uses cell signatures to alleviate the curse of dimensionality. In addition, we extend the SHORE storage system of the University of Wisconsin to handle high-dimensional data. The extended SHORE storage system provides conventional storage manager functions, guarantees the integrity of high-dimensional data, and scales to large numbers of feature vectors without requiring large main memory. We then implement a web-based image retrieval system using the extended SHORE storage system; its key features are platform-independent access to high-dimensional data and efficient content-based queries. Finally, we evaluate the average response time of point, range, and k-nearest neighbor queries with respect to the number of threads.
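
A minimal sketch of the kind of concurrency control involved, assuming a toy cell-based filtering index: each cell, identified by its signature, gets its own lock, so threads touching different cells proceed in parallel. The signature function and lock granularity here are illustrative assumptions, not the paper's algorithm.

```python
import threading

class CellIndex:
    """Toy cell-based filtering index with per-cell locking."""

    def __init__(self, bits=2):
        self.bits = bits
        self.cells = {}                 # signature -> list of vectors
        self.locks = {}                 # signature -> per-cell lock
        self._meta = threading.Lock()   # protects the two tables above

    def _signature(self, vector):
        # Hypothetical signature: quantize each dimension (assumed in [0, 1))
        # into 2**bits levels.
        levels = 1 << self.bits
        return tuple(min(int(v * levels), levels - 1) for v in vector)

    def _lock_for(self, sig):
        with self._meta:                # create cell and lock atomically
            self.cells.setdefault(sig, [])
            return self.locks.setdefault(sig, threading.Lock())

    def insert(self, vector):
        sig = self._signature(vector)
        with self._lock_for(sig):       # writers serialize per cell only
            self.cells[sig].append(vector)

    def search_cell(self, vector):
        sig = self._signature(vector)
        with self._lock_for(sig):       # readers of other cells are unaffected
            return list(self.cells[sig])
```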

Character-based Subtitle Generation by Learning of Multimodal Concept Hierarchy from Cartoon Videos (멀티모달 개념계층모델을 이용한 만화비디오 컨텐츠 학습을 통한 등장인물 기반 비디오 자막 생성)

  • Kim, Kyung-Min;Ha, Jung-Woo;Lee, Beom-Jin;Zhang, Byoung-Tak
    • Journal of KIISE / v.42 no.4 / pp.451-458 / 2015
  • Previous multimodal learning methods focus on problem-solving aspects, such as image and video search and tagging, rather than on knowledge acquisition via content modeling. In this paper, we propose the Multimodal Concept Hierarchy (MuCH), a content modeling method trained on a cartoon video dataset, together with a character-based subtitle generation method based on the learned model. The MuCH model has a multimodal hypernetwork layer, in which the patterns of words and image patches are represented, and a concept layer, in which each concept variable is represented by a probability distribution over words and image patches. The model can learn the characteristics of the characters as concepts from video subtitles and scene images using a Bayesian learning method, and it can generate character-based subtitles from the learned model when text queries are provided. As an experiment, the MuCH model learned concepts from 'Pororo' cartoon videos totaling 268 minutes in length and generated character-based subtitles. Finally, we compare the results with those of other multimodal learning models. The experimental results indicate that, given the same text query, our model generates more accurate and more character-specific subtitles than other models.
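
The sketch below illustrates only the concept-layer idea in a toy, text-only form: each character concept is a probability distribution over words estimated from that character's subtitle lines. The real MuCH couples words with image patches through a multimodal hypernetwork, which is omitted here; the example data is invented.

```python
from collections import Counter

def learn_concepts(lines):
    """lines: list of (character, subtitle_text) pairs."""
    concepts = {}
    for character, text in lines:
        concepts.setdefault(character, Counter()).update(text.lower().split())
    return concepts

def characteristic_words(concepts, character, k=3):
    """Most probable words under P(word | character)."""
    counts = concepts[character]
    total = sum(counts.values())
    dist = {w: c / total for w, c in counts.items()}
    return sorted(dist, key=dist.get, reverse=True)[:k]

lines = [("Pororo", "let's play outside"), ("Pororo", "let's fly today"),
         ("Crong", "crong crong hungry")]
concepts = learn_concepts(lines)
print(characteristic_words(concepts, "Pororo"))   # e.g. ["let's", 'play', ...]
```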

A Query Index for Processing Continuous Queries over RFID Tag Data (RFID 태그 데이타의 연속질의 처리를 위한 질의 색인)

  • Seok, Su-Wook;Park, Jae-Kwan;Hong, Bong-Hee
    • Journal of KIISE:Databases / v.34 no.2 / pp.166-178 / 2007
  • The ALE specification of EPCglobal, which is leading the development of RFID standards, includes the Event Cycle Specification (ECSpec), which describes how long a cycle is, how to filter RFID tag data, and which readers are of interest. The ECSpec is a specification for filtering and collecting RFID tag data. It is registered with the middleware for a long time and is evaluated to return results satisfying its requirements; it is thus quite similar to a continuous query, and it can be transformed into one whose predicate in the WHERE clause is characterized by a long interval. Long intervals deteriorate the insertion and search performance of existing query indices. In this paper, we propose the TLC-index, a new query index structure for long-interval data. The TLC-index has a hybrid structure that combines the cell construct of the CQI-index with the virtual construct of the VCR-index for partitioning long intervals. The TLC-index reduces storage cost and improves insertion performance by decomposing long intervals into one or more coarse cell constructs, and it improves search performance by decomposing short intervals into one or more virtual constructs small enough to fit those intervals.
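
The hybrid decomposition can be pictured as covering the long middle of an interval with coarse, aligned "cell" constructs and its short ends with fine "virtual" constructs. The sketch below is a simplified illustration of that idea; the construct sizes and alignment rules are hypothetical, not the TLC-index's actual parameters.

```python
CELL = 100      # coarse construct size: covers long spans cheaply
VIRTUAL = 10    # fine construct size: fits short leftovers tightly

def decompose(lo, hi):
    """Cover [lo, hi) with coarse aligned cells plus fine virtual units.
    Aligned units may slightly overhang the interval ends."""
    pieces = []
    first = -(-lo // CELL) * CELL          # lo rounded up to a CELL boundary
    last = (hi // CELL) * CELL             # hi rounded down to a CELL boundary
    if first < last:                       # at least one full coarse cell fits
        for start in range(first, last, CELL):
            pieces.append(("cell", start, start + CELL))
        ends = [(lo, first), (last, hi)]   # short leftovers at both ends
    else:                                  # interval shorter than one cell
        ends = [(lo, hi)]
    for a, b in ends:
        start = (a // VIRTUAL) * VIRTUAL
        while start < b:
            pieces.append(("virtual", start, start + VIRTUAL))
            start += VIRTUAL
    return pieces

print(decompose(35, 260))
# [('cell', 100, 200), ('virtual', 30, 40), ..., ('virtual', 250, 260)]
```

A long interval thus costs one coarse entry per 100 units instead of ten fine ones, while a short interval is still matched at fine granularity.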

Active Documents: Programs by Form Designers (능동문서: 서식설계자의 프로그램)

  • Nam, Chul-Ki;Bae, Jae-Hak;Yoo, Hae-Young
    • The KIPS Transactions:PartB / v.10B no.6 / pp.599-610 / 2003
  • The Web plays an important role as an information source, and most Web applications are document-centric. A document embodies the intention of its designer, which can be actively exploited to automate business processes. By understanding the intrinsic nature of what a document does, we can, in special cases, regard a document as an executable computer program. Following this approach, we propose an active document model composed of a form, a knowledge base, rules, and queries. For reusability and interoperability, each component of the proposed model is uniformly represented in XML. The proposed active document does not merely play the passive role of providing a user interface; it is a document over which a machine can infer and which it can process by reading the document-processing procedures and business rules intended by the document designer. In this way, documents can interact with machines and cooperate with other applications. To show the applicability of our active document, we present a case study on the processing of purchase orders in a B2B e-Commerce system. This paper is expected to provide a framework for accelerating the development of intelligent applications through our approach, which regards a form document as a computer program. In short, the proposed active document contains knowledge representation and a processing method, so our documents can play an important role in realizing the concept of documents pursued by the Semantic Web.
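
A minimal sketch of an active document: the XML below carries both form data and an executable business rule, and the processor evaluates the rule against the form fields. The XML vocabulary (`<form>`, `<rules>`, `<rule>`) is hypothetical, not the paper's schema, and `eval`/`exec` stand in for a proper rule engine purely for illustration.

```python
import xml.etree.ElementTree as ET

DOC = """
<activeDocument>
  <form>
    <field name="item">bolt</field>
    <field name="quantity">1200</field>
  </form>
  <rules>
    <rule name="bulk-discount" condition="quantity >= 1000"
          action="discount = 0.1"/>
  </rules>
</activeDocument>
"""

def process(xml_text):
    doc = ET.fromstring(xml_text)
    # Bind form fields into an evaluation context.
    ctx = {f.get("name"): f.text for f in doc.iter("field")}
    ctx["quantity"] = int(ctx["quantity"])
    # Execute each rule the designer embedded in the document.
    # (eval/exec on document text is for illustration only; a real system
    # would use a sandboxed rule interpreter.)
    for rule in doc.iter("rule"):
        if eval(rule.get("condition"), {}, ctx):    # e.g. quantity >= 1000
            exec(rule.get("action"), {}, ctx)       # fires: discount = 0.1
    return ctx

print(process(DOC))   # {'item': 'bolt', 'quantity': 1200, 'discount': 0.1}
```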

A Study on Automatic Classification of Newspaper Articles Based on Unsupervised Learning by Departments (비지도학습 기반의 행정부서별 신문기사 자동분류 연구)

  • Kim, Hyun-Jong;Ryu, Seung-Eui;Lee, Chul-Ho;Nam, Kwang Woo
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.9 / pp.345-351 / 2020
  • Administrative agencies today are paying keen attention to big data analysis to improve their policy responsiveness. Among big data sources, news articles can be used to understand public opinion regarding policies and policy issues. The volume of news output has increased rapidly with the emergence of new online media outlets, which calls for the use of automated bots or automatic document classification tools. There are, however, limits to automatically collecting news articles related to specific agencies or departments based on existing news article categories and keyword search queries. This paper therefore proposes a method of classifying articles using classification glossaries that reflect each department's distinct work features. To this end, classification glossaries were built by extracting the work features of different departments from agency-related news articles using Word2Vec and topic modeling techniques. The resulting automatic classification of newspaper articles by department achieved approximately 71% accuracy. This study makes academic and practical contributions by presenting a method of extracting the work features of each department and an unsupervised-learning-based method for automatically classifying news articles relevant to each agency.
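
A rough sketch of the glossary pipeline described above, assuming gensim's Word2Vec (4.x API) and pre-tokenized article texts: seed terms per department are expanded with their most similar words to form glossaries, and an article is assigned to the department whose glossary it overlaps most. The seeds, model parameters, and overlap classifier are simplifying assumptions, not the paper's exact pipeline.

```python
from gensim.models import Word2Vec

def build_glossaries(tokenized_articles, seeds, topn=20):
    """tokenized_articles: list of token lists; seeds: {department: [terms]}."""
    model = Word2Vec(sentences=tokenized_articles,
                     vector_size=100, window=5, min_count=2, epochs=10)
    glossaries = {}
    for dept, terms in seeds.items():
        glossary = set(terms)
        for term in terms:
            if term in model.wv:   # expand each seed with its nearest words
                glossary |= {w for w, _ in model.wv.most_similar(term, topn=topn)}
        glossaries[dept] = glossary
    return glossaries

def classify(tokens, glossaries):
    """Assign the article to the department whose glossary it overlaps most."""
    scores = {dept: len(set(tokens) & g) for dept, g in glossaries.items()}
    return max(scores, key=scores.get)
```

Because both glossary construction and classification need no labeled articles, the whole pipeline stays unsupervised, as the abstract emphasizes.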

Semantic Search System using Ontology-based Inference (온톨로지기반 추론을 이용한 시맨틱 검색 시스템)

  • Ha, Sang-Bum;Park, Yong-Tack
    • Journal of KIISE:Software and Applications / v.32 no.3 / pp.202-214 / 2005
  • The semantic web is a web paradigm that represents not merely links between documents but the semantics of and relations among documents, enabling software agents to understand their meaning. We propose a semantic search based on inference with ontologies, which has the following characteristics. First, our search engine uses explicit ontologies to reason, so retrieval succeeds even when a search keyword differs from the terms in the documents. Second, even when concepts in two ontologies do not match exactly, similar results can be found through a rule-based translator and ontological reasoning. Third, our approach increases accuracy and precision by using explicit ontologies to reason about the meaning of documents rather than guessing that meaning from keywords alone. Fourth, a domain ontology enables users to pose more detailed queries through an ontology-based automated query generator whose search coverage and accuracy are comparable to natural language processing. Fifth, it enables agents to automatically search not only for documents by keyword but also for user-preferred information and knowledge from ontologies. The system can search more accurately than current retrieval systems that rely on database queries or keyword matching. We demonstrate that our system, which uses explicit ontologies and inference over them, performs better than the keyword matching approach.
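
A hand-rolled illustration (not the paper's system) of the first characteristic above: an explicit ontology lets retrieval succeed even when the query term never appears in a document, by expanding the query with ontological subclasses before matching. The ontology content and documents here are invented.

```python
ONTOLOGY = {   # hypothetical subClassOf edges: parent -> children
    "vehicle": ["car", "truck", "motorcycle"],
    "car": ["sedan", "suv"],
}

def expand(term):
    """Return the term plus everything the ontology says is a kind of it."""
    terms, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in terms:
            terms.add(t)
            stack.extend(ONTOLOGY.get(t, []))
    return terms

def search(query, documents):
    keywords = expand(query)               # inference step: query expansion
    return [d for d in documents if keywords & set(d.lower().split())]

docs = ["The new suv has a hybrid engine", "A recipe for apple pie"]
print(search("vehicle", docs))   # matches the suv document via inference
```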