• Title/Summary/Keyword: path information

Search results: 5,188

PrimeFilter: An Efficient XML Data Filtering based on Prime Number Indexing (PrimeFilter: 소수 인덱싱 기법에 기반한 효율적 XML 데이타 필터링)

  • Kim, Jae-Hoon; Kim, Sang-Wook; Park, Seog
    • Journal of KIISE:Databases / v.35 no.5 / pp.421-431 / 2008
  • Recently, XML has become a de facto standard for online data exchange between heterogeneous systems, and research on streaming XML data filtering has come into the spotlight. Since streaming XML data filtering requires rapid matching of queries against XML data, query processing must be performed efficiently. Until now, most research has focused only on partial sharing of path expressions or on efficient predicate processing, aiming at time and space efficiency. However, if containment relationships between queries are computed in advance and the lowest-level query is matched against the XML data, we can immediately conclude that the higher-level queries also match, without any further processing. That is, exploiting containment can be another optimization for streaming XML data filtering. In this paper, we propose an efficient XML data filtering method based on prime number indexing and containment relationships between queries. Experimental results show that the proposed method outperforms the existing method; in every experiment, each with its own distinct purpose, our method performed more than twice as well.
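
A minimal sketch of the containment idea in this abstract, under the illustrative assumption (not taken from the paper) that each element name maps to a distinct prime and a path query's key is the product of its step primes, so that key divisibility approximates query containment:

    import java.math.BigInteger;
    import java.util.HashMap;
    import java.util.Map;

    // Illustrative sketch only: element names are mapped to distinct primes and a
    // path query's key is the product of its step primes, so divisibility between
    // keys approximates containment between queries. The actual encoding in the
    // paper may differ; all identifiers here are invented.
    public class PrimeContainmentIndex {
        private final Map<String, BigInteger> primeOf = new HashMap<>();
        private BigInteger lastPrime = BigInteger.ONE;

        private BigInteger nextPrime() {
            lastPrime = lastPrime.nextProbablePrime();
            return lastPrime;
        }

        /** Key of a simple path query such as "/book/author/name". */
        public BigInteger keyOf(String pathQuery) {
            BigInteger key = BigInteger.ONE;
            for (String step : pathQuery.split("/")) {
                if (step.isEmpty()) continue;
                key = key.multiply(primeOf.computeIfAbsent(step, s -> nextPrime()));
            }
            return key;
        }

        /** True if every step of the broader query also occurs in the narrower one. */
        public boolean subsumes(BigInteger broaderKey, BigInteger narrowerKey) {
            return narrowerKey.mod(broaderKey).signum() == 0;
        }

        public static void main(String[] args) {
            PrimeContainmentIndex idx = new PrimeContainmentIndex();
            BigInteger q1 = idx.keyOf("/book/author");        // broader query
            BigInteger q2 = idx.keyOf("/book/author/name");   // narrower query
            // If the narrower query matches a streamed document, the broader one
            // matches as well, so only the narrower query needs to be evaluated.
            System.out.println("q1 subsumes q2? " + idx.subsumes(q1, q2));
        }
    }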

An Improved Technique of Fitness Evaluation for Automated Test Data Generation (테스트 데이터 자동 생성을 위한 적합도 평가 방법의 효율성 향상 기법)

  • Lee, Sun-Yul; Choi, Hyun-Jae; Jeong, Yeon-Ji; Bae, Jung-Ho; Kim, Tae-Ho; Chae, Heung-Suk
    • Journal of KIISE:Software and Applications / v.37 no.12 / pp.882-891 / 2010
  • Many automated dynamic test data generation techniques have been proposed. These techniques evaluate the fitness of test data by executing an instrumented Software Under Test (SUT) and then generate new test data based on the evaluated fitness values and optimization algorithms. Previous research and experiments have shown that these techniques generate effective test data. However, the optimization algorithms they use take considerable time to generate test data, which results in a high test case generation cost. In this paper, we propose a technique for reducing the time spent evaluating the fitness of test data, one of the steps of dynamic test data generation methods. We introduce the concept of a Fitness Evaluation Program (FEP), derived from a path constraint of the SUT. We propose an FEP-based test data generation method and implement a test generation tool named ConGA. We also apply ConGA to generate test cases for C programs and evaluate the efficiency of the FEP-based test case generation technique. The experiments show that the proposed technique reduces test data generation time by 20% on average.
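
The following Java sketch illustrates what a fitness evaluation program derived from a path constraint could look like in principle; the constraint, the branch-distance formulas, and all names are illustrative assumptions rather than the paper's actual FEP construction (the paper targets C programs via the ConGA tool):

    // Sketch of a fitness-evaluation-program (FEP) style fitness function.
    // Assumption: the target path constraint is (a > b) && (b == c), and the
    // fitness is the usual sum of branch distances (0 means the path is covered).
    // The constraint and constants here are illustrative, not from the paper.
    public class FitnessEvaluationSketch {

        /** Branch distance for "x > y": 0 when satisfied, (y - x) + 1 otherwise. */
        static double greater(double x, double y) { return x > y ? 0 : (y - x) + 1; }

        /** Branch distance for "x == y": 0 when satisfied, |x - y| otherwise. */
        static double equal(double x, double y) { return x == y ? 0 : Math.abs(x - y); }

        /** FEP: evaluates the whole path constraint without running the SUT. */
        static double fitness(double a, double b, double c) {
            return greater(a, b) + equal(b, c);
        }

        public static void main(String[] args) {
            // A search algorithm would minimize this value; 0 means test data found.
            System.out.println(fitness(5, 7, 7));   // constraint not yet satisfied
            System.out.println(fitness(9, 7, 7));   // fitness 0: path covered
        }
    }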

Effect of Short-term and Long-term Preservation on Motion Characteristics of Garole Ram Spermatozoa: A Prolific Microsheep Breed of India

  • Joshi, Anil; Bag, Sadhan; Naqvi, S.M.K.; Sharma, R.C.; Rawat, P.S.; Mittal, J.P.
    • Asian-Australasian Journal of Animal Sciences / v.14 no.11 / pp.1527-1533 / 2001
  • Garole is a prolific, rare, little-known, small-sized Indian sheep breed found in the low-lying, humid Sunderban region of West Bengal. Although information on Garole ram semen stored in the liquid state for up to 24 h is available, the short-term and long-term preservability of Garole ram semen needs further investigation for extensive utilization of this valuable germplasm through artificial insemination. The aim of the present study was to apply computer-assisted sperm analysis to assess the motion characteristics of Garole ram semen stored (i) in the liquid state at refrigeration temperature for short-term preservation up to 48 h and (ii) in the frozen state at -196°C for long-term preservation after packaging in mini straws. Short-term preservation had a significant effect on motility (p<0.01), which progressively decreased from 90.1% at 0 h to 85.5% and 73.2% after 24 and 48 h of storage, respectively. The decline in rapidly moving sperm was also significant (p<0.01) during storage, and the decrease was more pronounced at 48 h than at 24 h. Storage of chilled semen also had a significant effect on % linearity (p<0.05), % straightness (p<0.01), sperm velocities (p<0.01), amplitude of lateral head displacement (p<0.01), and beat frequency (p<0.01) of spermatozoa. Replication had a significant effect on all variables except average path velocity and straight-line velocity. However, the interactions of short-term storage and replication were non-significant for most variables except the percentage of medium-moving sperm, sperm velocities, and beat frequency. On long-term preservation of Garole ram spermatozoa under controlled conditions, mean post-thaw recoveries of 70.4% and 71.4% motile spermatozoa were achieved in the two replicates, with 48.8% and 48.9% rapidly motile spermatozoa, respectively. The effect of replication on cryopreservation was significant (p<0.05) for amplitude of lateral head displacement and beat frequency, but there was no significant effect on motility, rapidly motile spermatozoa, linearity, straightness, or sperm velocities of frozen-thawed spermatozoa. It can be concluded from these results that an average of 70% motility can be achieved when Garole ram semen is stored in the chilled liquid state for up to 48 h or in liquid nitrogen after freezing under controlled conditions in straws. However, further studies are required to evaluate the fertility of short-term and long-term preserved Garole ram semen for extensive use of this prolific sheep breed.

Skeleton Code Generation for Transforming an XML Document with DTD using Metadata Interface (메타데이터 인터페이스를 이용한 DTD 기반 XML 문서 변환기의 골격 원시 코드 생성)

  • Choe Gui-Ja; Nam Young-Kwang
    • The KIPS Transactions:PartD / v.13D no.4 s.107 / pp.549-556 / 2006
  • In this paper, we propose a system for generating skeleton programs that directly transform an XML document into another document whose structure is defined by a target DTD, in a GUI environment. With the generated code, users can easily update the program or insert their own code so that the document is converted exactly as they want, and the code can be connected with other classes or library files. Since most currently available code generation systems and methods for transforming XML documents use XSLT or XQuery, it is very difficult or impossible for users to manipulate the source code for further updates or refinements. Because the code generated by our system is organized along the XPaths of the target DTD, the resulting code is quite readable. The code generation procedure is simple: once the user maps the related elements, represented as trees in the GUI, the source document is transformed into the target document and the corresponding Java source program is generated; the DTD is either given or extracted automatically from the XML documents by parsing them. Mappings are classified into 1:1, 1:N, and N:1 according to the structure and semantics of the DTD elements. Functions for changing the structure of elements designated by the user are incorporated into the metadata interface. A real-world example of transforming articles written as XML files into a bibliographic XML document is shown, along with the transformation result and the generated code.
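
As a rough idea of the kind of skeleton such a generator might emit for a simple 1:1 mapping, the following hand-written Java sketch copies one source element path into a target structure and leaves a hook method for user refinement; the element names (article/title to bib/entry/heading) and the hook are invented for illustration and are not the paper's generated output:

    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    // Sketch of the kind of skeleton a 1:1 element mapping might produce.
    // A generated skeleton would follow the user's DTD mapping and leave hook
    // methods where hand-written refinements can be inserted.
    public class ArticleToBibSkeleton {

        public Document transform(Document source) throws Exception {
            Document target = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().newDocument();
            Element bib = target.createElement("bib");
            target.appendChild(bib);

            NodeList titles = source.getElementsByTagName("title");
            for (int i = 0; i < titles.getLength(); i++) {
                Element entry = target.createElement("entry");
                Element heading = target.createElement("heading");
                // Hook point: a user could post-process the mapped text here.
                heading.setTextContent(mapTitle(titles.item(i).getTextContent()));
                entry.appendChild(heading);
                bib.appendChild(entry);
            }
            return target;
        }

        /** User-editable hook left by the generator for further refinement. */
        protected String mapTitle(String title) { return title.trim(); }
    }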

A Model-based Methodology for Application Specific Energy Efficient Data path Design Using FPGAs (FPGA에서 에너지 효율이 높은 데이터 경로 구성을 위한 계층적 설계 방법)

  • Jang Ju-Wook; Lee Mi-Sook; Mohanty Sumit; Choi Seonil; Prasanna Viktor K.
    • The KIPS Transactions:PartA / v.12A no.5 s.95 / pp.451-460 / 2005
  • We present a methodology for designing energy-efficient data paths using FPGAs. Our methodology integrates domain-specific modeling, coarse-grained performance evaluation, design space exploration, and low-level simulation to understand the trade-offs between energy, latency, and area. The domain-specific modeling technique defines a high-level model by identifying the components and parameters specific to a domain that affect system-wide energy dissipation. A domain is a family of architectures and corresponding algorithms for a given application kernel. The high-level model also provides functions for estimating energy, latency, and area to facilitate trade-off analysis. Design space exploration (DSE) analyzes the design space defined by the domain and selects a set of designs. Low-level simulations are then used for accurate performance estimation of the designs selected by DSE and for final design selection. We illustrate our methodology using a family of architectures and algorithms for matrix multiplication. The designs identified by our methodology exhibit trade-offs among energy, latency, and area. We compare our designs with a vendor-specified matrix multiplication kernel to demonstrate the effectiveness of our methodology, using average power density E/(A·T), i.e., energy/(area × latency), as the comparison metric. For various problem sizes, designs obtained using our methodology are on average 25% better with respect to the E/(A·T) metric than the state-of-the-art designs by Xilinx. We also discuss the implementation of our methodology using the MILAN framework.
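
The Java sketch below illustrates only the coarse-grained selection step: candidate designs carry estimated energy, latency, and area, and the one with the best E/(A·T) value is chosen. The candidate names and numbers are placeholders, not results from the paper:

    import java.util.List;

    // Sketch of coarse-grained design-space exploration over a high-level model.
    // The per-design energy/latency/area numbers and the candidate list are
    // illustrative; the paper derives them from domain-specific component models.
    public class DesignSpaceExploration {

        record Design(String name, double energyJ, double latencyS, double areaSlices) {
            double energyPerAreaLatency() { return energyJ / (areaSlices * latencyS); }
        }

        public static void main(String[] args) {
            List<Design> candidates = List.of(
                new Design("linear-array-8PE",  0.012, 0.0040, 1200),
                new Design("linear-array-16PE", 0.015, 0.0022, 2300),
                new Design("systolic-4x4",      0.018, 0.0018, 2600));

            // Select the design with the lowest E/(A*T), the comparison metric
            // used in the paper for matrix-multiplication kernels.
            Design best = candidates.stream()
                    .min((a, b) -> Double.compare(a.energyPerAreaLatency(),
                                                  b.energyPerAreaLatency()))
                    .orElseThrow();
            System.out.printf("best by E/(A*T): %s (%.3e)%n",
                    best.name(), best.energyPerAreaLatency());
        }
    }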

Dynamic Polling Algorithm Based on Line Utilization Prediction (선로 이용률 예측 기반의 동적 폴링 기법)

  • Jo, Gang-Hong; An, Seong-Jin; Jeong, Jin-Uk
    • The KIPS Transactions:PartC / v.9C no.4 / pp.489-496 / 2002
  • This study proposes a new polling algorithm that dynamically changes the polling period based on line utilization prediction. Polling is the most important function in network monitoring, but excessive polling traffic aggravates congestion when the network is already congested. Existing adaptive polling algorithms therefore estimated network congestion or agent load from the Round Trip Time or line utilization of previous polls, changed the polling period accordingly, and thereby controlled polling traffic. However, such algorithms change the polling period based only on past polls and do not reflect the network conditions at the time of the next poll. The algorithm proposed in this study predicts, from past data, whether polling traffic will exceed a line utilization threshold on the polling path and changes the polling period according to that prediction. In this study, the utilization of each line in the network was predicted with an AR model, and the probability of violating the threshold was computed. In addition, the suitability of the proposed dynamic polling algorithm based on line utilization prediction was evaluated by applying it to a real network, and a reasonable threshold for line utilization and the threshold violation probability were determined by experiment. Through these steps, the performance of the algorithm was maximized.
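
A minimal sketch of the prediction-driven period adjustment, assuming a simple AR(1) fit over recent utilization samples and a doubling back-off when the predicted utilization crosses a threshold; the model order, constants, and back-off policy are illustrative, not the paper's exact probabilistic formulation:

    // Sketch of utilization-prediction-driven polling-interval adjustment.
    // An AR(1) model is fitted to recent line-utilization samples; the polling
    // period is lengthened when the predicted utilization crosses a threshold.
    public class DynamicPollingSketch {

        /** Least-squares AR(1) coefficient for u[t] ~ phi * u[t-1]. */
        static double fitAr1(double[] u) {
            double num = 0, den = 0;
            for (int t = 1; t < u.length; t++) {
                num += u[t] * u[t - 1];
                den += u[t - 1] * u[t - 1];
            }
            return den == 0 ? 0 : num / den;
        }

        public static void main(String[] args) {
            double[] utilization = {0.42, 0.47, 0.55, 0.61, 0.68}; // recent samples
            double threshold = 0.70;
            int basePeriodSec = 30;

            double phi = fitAr1(utilization);
            double predicted = phi * utilization[utilization.length - 1];

            // Back off polling when the line is predicted to be near saturation.
            int nextPeriod = predicted > threshold ? basePeriodSec * 2 : basePeriodSec;
            System.out.printf("phi=%.3f predicted=%.3f next period=%ds%n",
                    phi, predicted, nextPeriod);
        }
    }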

Recommendation of Best Empirical Route Based on Classification of Large Trajectory Data (대용량 경로데이터 분류에 기반한 경험적 최선 경로 추천)

  • Lee, Kye Hyung; Jo, Yung Hoon; Lee, Tea Ho; Park, Heemin
    • KIISE Transactions on Computing Practices / v.21 no.2 / pp.101-108 / 2015
  • This paper presents the implementation of a system that recommends the empirical best routes based on classification of large trajectory data. As location-based services become widely used, we expect the amount of location and trajectory data to become big data, and we believe the best empirical routes can be extracted from such large trajectory repositories. Large trajectory data is clustered into groups of similar routes using the Hadoop MapReduce framework. The clustered route groups are stored and managed by a DBMS, which supports rapid responses to end users' requests. We aim to find the best routes based on collected real data rather than the ideal shortest path on a map. We have implemented 1) an Android application that collects trajectories from users, 2) an Apache Hadoop MapReduce program that clusters large trajectory data, and 3) a service application that queries a start and destination from a web server and displays the recommended routes on mobile phones. We validated our approach using real data collected over five days and compared the results with commercial navigation systems. Experimental results show that the empirical best route is better than the routes recommended by commercial navigation systems.
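
A small, single-machine sketch of the grouping idea (not actual Hadoop MapReduce code): each trajectory is mapped to a coarse route key built from grid cells, trajectories sharing a key are grouped, and the fastest trip per group is kept as the empirical recommendation. The grid size and the travel-time criterion are assumptions for illustration:

    import java.util.*;
    import java.util.stream.Collectors;

    // Map step: discretize each trajectory into a route key; shuffle/reduce step:
    // group by key and keep the fastest trip per group.
    public class RouteGroupingSketch {

        record Point(double lat, double lon) {}
        record Trajectory(String id, List<Point> points, double minutes) {}

        /** Map step: coarse route key from the sequence of visited grid cells. */
        static String routeKey(Trajectory t, double cell) {
            return t.points().stream()
                    .map(p -> (long) (p.lat() / cell) + ":" + (long) (p.lon() / cell))
                    .distinct()
                    .collect(Collectors.joining(">"));
        }

        public static void main(String[] args) {
            List<Trajectory> logs = List.of(
                new Trajectory("a", List.of(new Point(37.501, 127.001),
                                            new Point(37.511, 127.021)), 18),
                new Trajectory("b", List.of(new Point(37.502, 127.002),
                                            new Point(37.512, 127.022)), 15),
                new Trajectory("c", List.of(new Point(37.541, 127.061),
                                            new Point(37.512, 127.022)), 22));

            // "Reduce": group by route key, keep the fastest trip in each group.
            Map<String, Optional<Trajectory>> best = logs.stream().collect(
                Collectors.groupingBy(t -> routeKey(t, 0.01),
                    Collectors.minBy(Comparator.comparingDouble(Trajectory::minutes))));

            best.forEach((k, t) -> System.out.println(k + " -> " + t.map(Trajectory::id)));
        }
    }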

A Practical Approximate Sub-Sequence Search Method for DNA Sequence Databases (DNA 시퀀스 데이타베이스를 위한 실용적인 유사 서브 시퀀스 검색 기법)

  • Won, Jung-Im; Hong, Sang-Kyoon; Yoon, Jee-Hee; Park, Sang-Hyun; Kim, Sang-Wook
    • Journal of KIISE:Databases / v.34 no.2 / pp.119-132 / 2007
  • In molecular biology, approximate subsequence search is one of the most important operations. In this paper, we propose an accurate and efficient method for approximate subsequence search in large DNA databases. The proposed method adopts a binary trie as its primary structure and stores all the window subsequences extracted from a DNA sequence. For approximate subsequence search, it traverses the binary trie in a breadth-first fashion and retrieves all the matched subsequences along the traversed paths within the trie using a dynamic programming technique. However, because the proposed method stores only window subsequences of a pre-determined length, it suffers from long post-processing times for long query sequences. To overcome this problem, we divide a query sequence into shorter pieces, perform the search for each piece, and then merge the results. To verify the superiority of the proposed method, we conducted a series of performance experiments. The results show that the proposed method, which requires less storage space, achieves a 4- to 17-fold performance improvement over the suffix-tree-based method. Even when the query sequence is long, our method is more than an order of magnitude faster than the suffix-tree-based method and the Smith-Waterman algorithm.
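
The per-candidate check at the heart of such a search can be illustrated with a standard edit-distance dynamic program that decides whether a window subsequence matches the query within k errors; the trie traversal and window extraction are omitted, and the sequences and k below are illustrative:

    // Edit-distance dynamic program with two rolling rows: true when the
    // window subsequence matches the query within k errors.
    public class ApproximateMatchSketch {

        static boolean withinK(String query, String window, int k) {
            int n = query.length(), m = window.length();
            int[] prev = new int[m + 1], cur = new int[m + 1];
            for (int j = 0; j <= m; j++) prev[j] = j;
            for (int i = 1; i <= n; i++) {
                cur[0] = i;
                for (int j = 1; j <= m; j++) {
                    int cost = query.charAt(i - 1) == window.charAt(j - 1) ? 0 : 1;
                    cur[j] = Math.min(Math.min(cur[j - 1] + 1, prev[j] + 1),
                                      prev[j - 1] + cost);
                }
                int[] tmp = prev; prev = cur; cur = tmp;
            }
            return prev[m] <= k;
        }

        public static void main(String[] args) {
            System.out.println(withinK("ACGTAC", "ACGAAC", 1)); // one substitution
            System.out.println(withinK("ACGTAC", "TTTTTT", 1)); // too many edits
        }
    }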

Analysis of Performance, Energy-efficiency and Temperature for 3D Multi-core Processors according to Floorplan Methods (플로어플랜 기법에 따른 3차원 멀티코어 프로세서의 성능, 전력효율성, 온도 분석)

  • Choi, Hong-Jun; Son, Dong-Oh; Kim, Jong-Myon; Kim, Cheol-Hong
    • The KIPS Transactions:PartA / v.17A no.6 / pp.265-274 / 2010
  • As process technology scales down and integration densities continue to increase, interconnection has become one of the most important factors in the performance of recent multi-core processors. Recently, to reduce interconnection delay, 3D architectures have been adopted in designing multi-core processors. In a 3D multi-core processor, multiple cores are stacked vertically, and cores on different layers are connected by direct vertical TSVs (through-silicon vias). Compared to 2D multi-core architecture, 3D multi-core architecture reduces wire length significantly, leading to decreased interconnection delay and lower power consumption. Despite these benefits, 3D design cannot be practical without proper solutions for the hotspots caused by high temperature. In this paper, we propose three floorplan schemes for reducing the peak temperature in 3D multi-core processors. According to our simulation results, the proposed floorplan schemes are expected to mitigate the thermal problems of 3D multi-core processors efficiently, resulting in improved reliability. Moreover, processor performance improves because the performance degradation caused by DTM techniques is reduced, and power consumption also decreases thanks to the lower temperature and shorter execution time.

Design and Implementation of a Large-Scale Spatial Reasoner Using MapReduce Framework (맵리듀스 프레임워크를 이용한 대용량 공간 추론기의 설계 및 구현)

  • Nam, Sang Ha; Kim, In Cheol
    • KIPS Transactions on Software and Data Engineering / v.3 no.10 / pp.397-406 / 2014
  • In order to answer questions successfully on behalf of a human in DeepQA environments such as the American quiz show Jeopardy!, a computer must be capable of fast temporal and spatial reasoning over a large-scale commonsense knowledge base. In this paper, we present a scalable spatial reasoning algorithm that efficiently derives new directional and topological relations using the MapReduce framework, one of the well-known parallel distributed computing environments. The proposed reasoning algorithm takes as input a large-scale spatial knowledge base containing CSD-9 directional relations and RCC-8 topological relations. To infer new directional and topological relations from the given spatial knowledge base, it performs cross-consistency checks as well as path-consistency checks on the knowledge base. To maximize the parallelism of the reasoning computations, following the principles of the MapReduce framework, the algorithm partitions the large knowledge base into smaller ones and distributes them over multiple computing nodes in the map phase; in the reduce phase, it infers new knowledge from the distributed spatial knowledge bases. Through experiments on a sample knowledge base with a MapReduce-based implementation of our algorithm, we demonstrated the high performance of our large-scale spatial reasoner.
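
A single-machine sketch of one such inference step (not actual MapReduce code): triples are indexed by their shared middle entity, composable pairs are joined, and new relations are emitted. Only trivially transitive relations are encoded here for illustration; the paper handles the full CSD-9 and RCC-8 composition tables:

    import java.util.*;

    // Group triples by the shared entity (the "map"/shuffle idea), then join
    // composable pairs and emit new triples (the "reduce" idea).
    public class SpatialInferenceSketch {

        record Triple(String subject, String relation, String object) {}

        static Optional<String> compose(String r1, String r2) {
            if (r1.equals(r2) && (r1.equals("northOf") || r1.equals("insideOf"))) {
                return Optional.of(r1); // transitive relations only, for illustration
            }
            return Optional.empty();
        }

        public static void main(String[] args) {
            List<Triple> kb = List.of(
                new Triple("cityA", "northOf", "cityB"),
                new Triple("cityB", "northOf", "cityC"),
                new Triple("parkX", "insideOf", "cityC"));

            // Index triples by object so joins on the shared entity are cheap.
            Map<String, List<Triple>> byObject = new HashMap<>();
            for (Triple t : kb) {
                byObject.computeIfAbsent(t.object(), k -> new ArrayList<>()).add(t);
            }

            // Join A-r1->B with B-r2->C and emit composed triples A-r->C.
            List<Triple> inferred = new ArrayList<>();
            for (Triple right : kb) {
                for (Triple left : byObject.getOrDefault(right.subject(), List.of())) {
                    compose(left.relation(), right.relation()).ifPresent(
                        r -> inferred.add(new Triple(left.subject(), r, right.object())));
                }
            }
            inferred.forEach(t ->
                System.out.println(t.subject() + " " + t.relation() + " " + t.object()));
        }
    }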