• Title/Summary/Keyword: $A^*$ search algorithm

An Analysis on Range Block Coherences for Fractal Compression (프랙탈 압축을 위한 레인지 블록간의 유사성 분석)

  • 김영봉
    • Journal of Korea Multimedia Society
    • /
    • v.2 no.4
    • /
    • pp.409-418
    • /
    • 1999
  • Fractal image compression is based on self-similarity: some areas of an image exhibit shapes very similar to other areas. The technique offers a high compression ratio and fast decompression, but very long encoding times. To reduce encoding time, most research has restricted the search of domain blocks for a range block; this work has focused mainly on the coherence between a domain block and a range block, while the coherence among range blocks has been left largely unexploited. We therefore analyze the coherence among range blocks in order to develop an efficient fractal image compression algorithm. The range blocks are analyzed with respect to both the measures used to define range-block coherence and the threshold of each measure. Incorporated as a preprocessing step in other fractal compression algorithms, these results can greatly reduce encoding time.

  • PDF
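
The range-block coherence idea can be sketched as follows: if two range blocks are mutually similar under some distance measure, the domain match found for one can be reused for the others, so only one full domain search per cluster is needed. This is an illustrative sketch; the MSE measure and the threshold value are assumptions, not the paper's exact measures.

```python
def mse(a, b):
    """Mean squared error between two equally sized pixel blocks."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def group_coherent_ranges(range_blocks, threshold):
    """Greedily cluster range blocks whose MSE to a cluster
    representative is below `threshold`; each cluster then needs
    only one full domain-block search."""
    clusters = []
    for block in range_blocks:
        for cluster in clusters:
            if mse(block, cluster[0]) < threshold:
                cluster.append(block)
                break
        else:
            clusters.append([block])
    return clusters

# Two nearly identical dark blocks and one bright block -> 2 clusters.
blocks = [[10, 10, 12, 11], [11, 10, 12, 12], [200, 190, 210, 205]]
print(len(group_coherent_ranges(blocks, threshold=25.0)))  # 2
```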

High-Performance FFT Using Data Reorganization (데이터 재구성 기법을 이용한 고성능 FFT)

  • Park Neungsoo;Choi Yungho
    • The KIPS Transactions:PartA
    • /
    • v.12A no.3 s.93
    • /
    • pp.215-222
    • /
    • 2005
  • The efficient utilization of cache memories is a key factor in achieving high performance when computing large signal transforms. Non-unit stride access in the computation of large DFTs causes cache conflict misses, resulting in poor cache performance and a severe degradation of overall performance. In this paper, we propose a dynamic data layout approach that takes the memory hierarchy into account: data are reorganized between computation stages to reduce the number of cache misses. We also develop an efficient search algorithm that determines the factorization tree with the minimum execution time among all possible trees, considering the DFT sizes and the data access strides. The approach is applied to the fast Fourier transform (FFT). Experiments were performed on the Pentium 4, $Athlon^{TM}$ 64, Alpha 21264, and UltraSPARC III. The results show that our FFT achieves up to 3.37 times better performance than previous FFT packages.
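
The factorization-tree search can be illustrated with a toy dynamic program. The cost model below (a direct size-n DFT costs n*n; a split n = f*g costs the sub-plans plus n twiddle multiplies) is a stand-in for the measured execution times and cache effects the paper's search actually optimizes over.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_plan(n):
    """Return (min_cost, tree) over all binary factorization trees
    of a size-n DFT under a toy cost model.  A leaf is the integer n
    (direct DFT); an internal node is a pair of sub-trees."""
    best_cost, best_tree = n * n, n          # direct DFT as baseline
    for f in range(2, int(n ** 0.5) + 1):
        if n % f == 0:
            g = n // f
            cf, tf = best_plan(f)
            cg, tg = best_plan(g)
            cost = g * cf + f * cg + n       # sub-DFTs + twiddles
            if cost < best_cost:
                best_cost, best_tree = cost, (tf, tg)
    return best_cost, best_tree

print(best_plan(16))  # (144, (4, 4))
```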

Development of Bolt Tap Shape Inspection System Using Computer Vision Technology (컴퓨터 비전 기술을 이용한 볼트 탭 형상 검사 시스템 개발)

  • Park, Yang-Jae
    • Journal of Digital Convergence
    • /
    • v.16 no.3
    • /
    • pp.303-309
    • /
    • 2018
  • Computer vision, a field of artificial intelligence, performs the function of the human eye by acquiring images from a camera and analyzing them algorithmically, and is widely applied to determining whether produced parts are good or defective. The shape inspection method identifies the start and end points of the search range and measures height with a line-scan approach, and determines the presence or absence of bolt taps from the average brightness value of the inspection area with a circular-scan approach, from which a similarity score is calculated. In performance tests on two types of bolt taps, the total inspection time enabled testing 300 units per minute, and the system showed complete inspection accuracy, demonstrating the accuracy and efficiency of inspection on the production line.
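
The circular-scan decision described above can be sketched as follows: pixels are sampled on a circle around the expected tap center, and a low average brightness indicates the darker tapped bore. The image layout, center, radius, and brightness threshold are all hypothetical tuning values for illustration.

```python
import math

def circular_scan_mean(image, cx, cy, radius, samples=64):
    """Average brightness of pixels sampled on a circle of the given
    radius around (cx, cy); a tapped hole reads darker than the
    surrounding metal surface."""
    total = 0.0
    for k in range(samples):
        a = 2 * math.pi * k / samples
        x = int(round(cx + radius * math.cos(a)))
        y = int(round(cy + radius * math.sin(a)))
        total += image[y][x]
    return total / samples

def has_tap(image, cx, cy, radius, threshold=100):
    """Decide tap presence: mean brightness below `threshold`
    (an assumed tuning value) indicates the darker tapped bore."""
    return circular_scan_mean(image, cx, cy, radius) < threshold

# Synthetic 21x21 image: dark disc (the tap) on a bright surface.
img = [[50 if (x - 10) ** 2 + (y - 10) ** 2 <= 25 else 200
        for x in range(21)] for y in range(21)]
print(has_tap(img, 10, 10, 3), has_tap(img, 10, 10, 8))  # True False
```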

A Multibit Tree Bitmap based Packet Classification (멀티 비트 트리 비트맵 기반 패킷 분류)

  • 최병철;이정태
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.3B
    • /
    • pp.339-348
    • /
    • 2004
  • Packet classification is an important factor in supporting various services such as QoS guarantees and VPNs for Internet users. It is the process of searching a rule table for the best-matching rule over multiple fields of the IP header, such as the source address, protocol, and port number as well as the destination address. In this paper, we propose a hardware-based packet classification algorithm employing the tree bitmap of a multi-bit trie. We divide the prefixes of the search fields and rules into multi-bit strides and perform rule search in fixed-size multi-bit units. The proposed scheme reduces the number of memory accesses required for rule search by using an indexing key formed from a fixed number of upper bits of the rule prefixes, and employs marker prefixes to eliminate backtracking during rule search. We generate two-dimensional random rule sets of source and destination addresses from routing tables provided by the IPMA Project, and compare memory usage and performance.
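
The fixed-stride, multi-bit search idea can be sketched with a simple multibit trie built by prefix expansion. This illustrates only the stride-wise lookup; the paper's scheme additionally compresses each node with bitmaps and adds the indexing key and marker prefixes, which are omitted here.

```python
STRIDE = 2  # bits consumed per trie level

def insert(trie, bits, rule):
    """Insert `rule` under a bit-string prefix (e.g. "101"), expanding
    it to the next stride boundary so every node consumes exactly
    STRIDE bits (prefix expansion).  Insert shorter prefixes first so
    longer ones overwrite them where they overlap."""
    pad = (-len(bits)) % STRIDE
    for tail in range(2 ** pad):
        expanded = bits + format(tail, "b").zfill(pad) if pad else bits
        node = trie
        for i in range(0, len(expanded), STRIDE):
            node = node.setdefault(expanded[i:i + STRIDE], {})
        node["rule"] = rule

def lookup(trie, bits):
    """Walk STRIDE bits at a time, remembering the most recent
    (i.e. longest) matching rule seen on the path."""
    best, node = None, trie
    for i in range(0, len(bits), STRIDE):
        node = node.get(bits[i:i + STRIDE])
        if node is None:
            break
        best = node.get("rule", best)
    return best

trie = {}
rules = [("1", "R1"), ("0", "R3"), ("101", "R2")]
for bits, rule in sorted(rules, key=lambda r: len(r[0])):
    insert(trie, bits, rule)
print(lookup(trie, "1011"), lookup(trie, "1100"))  # R2 R1
```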

A Design of Two-stage Cascaded Polyphase FIR Filters for the Sample Rate Converter (표본화 속도 변환기용 2단 직렬형 다상 FIR 필터의 설계)

  • Baek Je-In;Kim Jin-Up
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.31 no.8C
    • /
    • pp.806-815
    • /
    • 2006
  • This paper studies the design of the low-pass filter of a sample rate converter (SRC), which changes the sampling rate of digital signals, for example in digital modulation and demodulation systems. The larger the conversion ratio, the more signal processing the filter requires, and hence the more complex the circuit realization; reducing the amount of signal processing is therefore important at high conversion ratios. We present a design method for a two-stage cascaded FIR filter, which proves to require less signal processing than a conventional single-stage filter. The reduction is more pronounced at larger conversion ratios, reaching a complexity of 72% of the single-stage design at a conversion ratio of 32. Since the reduction depends on the specific combination of conversion ratios of the cascaded filters, an exhaustive search is performed to obtain the optimal combination for various total conversion ratios. Every filter is implemented as a polyphase FIR filter whose coefficients are determined with the Parks-McClellan algorithm.
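
The exhaustive search over stage combinations can be sketched with a toy filter-order model. The order formula and transition-width values below are illustrative stand-ins for the Parks-McClellan designs the paper actually compares; the point they capture is that the first stage may use a transition band widened by the second stage's ratio, so some factor splits need far fewer total taps.

```python
import math

def taps(ratio, transition):
    """Toy FIR order estimate: taps grow with the conversion ratio
    and shrink with the allowed relative transition width
    (illustrative, not the paper's exact design formula)."""
    return math.ceil(4 * ratio / transition)

def best_split(total, transition=0.1):
    """Exhaustively try every factor pair (m1, m2) with m1 * m2 ==
    total and return the split minimizing total taps.  The first
    stage's transition band is widened by m2, since its images are
    removed by the second stage."""
    best = (taps(total, transition), (total, 1))  # single-stage baseline
    for m1 in range(2, total):
        if total % m1 == 0:
            m2 = total // m1
            cost = taps(m1, transition * m2) + taps(m2, transition)
            if cost < best[0]:
                best = (cost, (m1, m2))
    return best

print(best_split(32))  # (240, (8, 4)) -- far below the 1280-tap single stage
```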

Anomaly Detection Analysis using Repository based on Inverted Index (역방향 인덱스 기반의 저장소를 이용한 이상 탐지 분석)

  • Park, Jumi;Cho, Weduke;Kim, Kangseok
    • Journal of KIISE
    • /
    • v.45 no.3
    • /
    • pp.294-302
    • /
    • 2018
  • With the emergence of new service industries driven by the development of information and communication technology, cyberspace risks such as personal information infringement and industrial confidentiality leakage have diversified, and security has become a critical issue. In this paper, we propose a behavior-based anomaly detection method suited to real-time, large-volume data analysis. We show that the proposed method outperforms existing signature-based security countermeasures on large-capacity user log data concerning in-company personal information abuse and internal information leakage. Because behavior-based anomaly detection requires processing large amounts of data, we use Elasticsearch, a real-time search engine based on an inverted index. For data analysis, statistical frequency analysis and preprocessing were performed, and the DBSCAN algorithm, a density-based clustering method, was applied to classify abnormal data, with visualization provided for easy analysis. Unlike existing anomaly detection systems, the proposed technique requires no separately configured threshold value and is grounded in a statistical perspective, making it a promising approach to anomaly detection analysis.
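
The density-based classification step can be illustrated with a minimal pure-Python DBSCAN that returns the points left labeled as noise, i.e. the behavioral anomalies. A production pipeline would run a library implementation over features aggregated from Elasticsearch; the `eps` and `min_pts` values here are illustrative.

```python
def region(points, i, eps):
    """Indices of all points within `eps` of points[i] (inclusive)."""
    return [j for j, q in enumerate(points)
            if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps * eps]

def dbscan_noise(points, eps, min_pts):
    """Minimal DBSCAN; returns the indices that remain labeled as
    noise after cluster expansion."""
    labels, cluster = {}, 0
    for i in range(len(points)):
        if i in labels:
            continue
        neighbors = region(points, i, eps)
        if len(neighbors) < min_pts:
            labels[i] = -1                  # provisionally noise
            continue
        cluster += 1
        labels[i] = cluster
        queue = list(neighbors)
        while queue:
            j = queue.pop()
            if labels.get(j) == -1:
                labels[j] = cluster         # noise becomes a border point
            if j in labels:
                continue
            labels[j] = cluster
            more = region(points, j, eps)
            if len(more) >= min_pts:        # j is a core point: expand
                queue.extend(more)
    return [i for i, label in labels.items() if label == -1]

# Four normal behavior vectors and one outlier -> index 4 is anomalous.
logins = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (5.0, 5.0)]
print(dbscan_noise(logins, eps=0.5, min_pts=3))  # [4]
```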

Cooperative spectrum leasing using parallel communication of secondary users

  • Xie, Ping;Li, Lihua;Zhu, Junlong;Jin, Jin;Liu, Yijing
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.8
    • /
    • pp.1770-1785
    • /
    • 2013
  • In this paper, a multi-hop transmission protocol based on parallel communication of secondary users (SUs) is proposed. The primary multi-hop network coexists with a set of SUs by cooperative spectrum sharing. The main optimization target of our protocol is the overall performance of the secondary system with the guarantee of the primary outage performance. The energy consumption of the primary system is reduced by the cooperation of SUs. The aim of the primary source is to communicate with the primary destination via a number of primary relays. SUs may serve as extra decode-and-forward relays for the primary network. When an SU acts as a relay for a primary user (PU), some other SUs that satisfy the condition for parallel communication are selected to simultaneously access the primary spectrum for secondary transmissions. For the proposed protocol, two opportunistic routing strategies are proposed, and a search algorithm to select the SUs for parallel communication is described. The throughput of the SUs and the PU is illustrated. Numerical results demonstrate that the average throughput of the SUs is greatly improved, and the end-to-end throughput of the PU is slightly increased in the proposed protocol when there are more than seven SUs.

Text Extraction Algorithm using the HTML Logical Structure Analysis (HTML 논리적 구조분석을 통한 본문추출 알고리즘)

  • Jeon, Hyun-Gee;KOH, Chan
    • Journal of Digital Contents Society
    • /
    • v.16 no.3
    • /
    • pp.445-455
    • /
    • 2015
  • As internet and computer technology develop, the amount of information has increased exponentially; a variety of web authoring tools and new web standards have appeared, and a wide variety of web content is produced very quickly and conveniently. However, web documents are divided into blocks that deal with mutually unrelated topics, and they contain non-content elements such as navigation menus, simple decorations, advertisements, and copyright notices. To solve this problem and meet user requirements, this study extracts only the exact body area of a web document and investigates effective information. Based on this reconstruction method, we propose a web search system that can manage documents systematically and in an optimized way.
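
Block-level body extraction can be sketched with a simple link-density heuristic: navigation, advertisement, and copyright blocks are dominated by anchor text, while body blocks are long and link-poor. The thresholds and the flat regex-based parsing are illustrative assumptions, not the paper's logical-structure analysis.

```python
import re

def link_density(block):
    """Fraction of a block's visible text that sits inside <a> tags."""
    text = re.sub(r"<[^>]+>", "", block)
    anchor_text = "".join(re.findall(r"<a\b[^>]*>(.*?)</a>", block,
                                     re.S | re.I))
    return len(anchor_text) / max(len(text), 1)

def extract_body(blocks, max_link_density=0.3, min_length=40):
    """Keep blocks that look like body text: long enough and with low
    link density.  Threshold values are illustrative tuning choices."""
    body = []
    for b in blocks:
        text = re.sub(r"<[^>]+>", "", b).strip()
        if len(text) >= min_length and link_density(b) <= max_link_density:
            body.append(text)
    return body

blocks = [
    "<a href='/'>Home</a> <a href='/news'>News</a> <a href='/login'>Login</a>",
    "<p>The proposed method analyzes the logical structure of an HTML "
    "document and keeps only the blocks that belong to the main body.</p>",
]
print(extract_body(blocks))  # only the second block survives
```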

Implementation of Temporal Relationship Macros for History Management in SDE (SDE에서 이력 관리를 위한 시간관계 매크로의 구현)

  • Lee, Jong-Yeon;Ryu, Geun-Ho
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.5 no.5
    • /
    • pp.553-563
    • /
    • 1999
  • The Spatial Database Engine ($SDE^{TM}$) developed by Environmental Systems Research Institute, Inc. is a spatial database that employs a client-server architecture together with a set of software services to perform efficient spatial operations and to manage large, shared geographic data sets. It currently supports a wide variety of spatial search methods and dynamically determined spatial relationships. Spatial objects can be changed by both spatial and non-spatial operations. Conventional geographic information systems (GISs), however, handle only snapshot images of spatial objects and therefore do not manage their historical information. In this paper, we propose a spatio-temporal data model and an algorithm for temporal relationship macros that can manage and retrieve the historical information of spatial objects. The proposed model and its operations can be used as a software tool for the history management of time-varying objects without any change to the underlying database.
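
A temporal relationship macro essentially evaluates interval predicates over the valid-time periods of object versions. A minimal sketch, using half-open periods (start, end) and a small subset of Allen's interval relations; the relation names and period representation are assumptions for illustration, not SDE's API.

```python
def temporal_relation(a, b):
    """Classify how period a = (start, end) relates to period b,
    treating periods as half-open intervals [start, end)."""
    (s1, e1), (s2, e2) = a, b
    if e1 <= s2:
        return "before"
    if e2 <= s1:
        return "after"
    if (s1, e1) == (s2, e2):
        return "equal"
    if s2 <= s1 and e1 <= e2:
        return "during"
    if s1 <= s2 and e2 <= e1:
        return "contains"
    return "overlaps"

# History of one parcel: geometry v1 valid [1995, 1999), v2 [1999, 2004).
print(temporal_relation((1995, 1999), (1999, 2004)))  # before
print(temporal_relation((1996, 1998), (1995, 1999)))  # during
```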

A Tree-structured XPath Query Reduction Scheme for Enhancing XML Query Processing Performance (XML 질의의 수행성능 향상을 위한 트리 구조 XPath 질의의 축약 기법에 관한 연구)

  • Lee, Min-Soo;Kim, Yun-Mi;Song, Soo-Kyung
    • The KIPS Transactions:PartD
    • /
    • v.14D no.6
    • /
    • pp.585-596
    • /
    • 2007
  • XML data generally has a hierarchical tree structure, which is reflected in the mechanisms used to store and retrieve it. When storing XML data in a database, the hierarchical relationships among XML elements are therefore taken into consideration while the data is restructured and stored. To support user queries, a mechanism is also needed to compute the hierarchical relationships between the element structures specified by a query. The structural join is one solution: an efficient method for computing hierarchical relationships in a database based on a node numbering scheme. However, processing a tree-structured XML query that contains complex nested hierarchical relationships still requires multiple structural joins, which leads to high query execution cost. In this paper, we provide a preprocessing mechanism that effectively reduces the cost of multiple nested structural joins by applying the concept of equivalence classes, and we suggest a query path reduction algorithm that shortens path queries consisting of regular expressions. The mechanism is especially devised to reduce path queries containing branch nodes. Experimental results show that the proposed algorithm reduces the time required to process path queries to 1/3 of the original execution time.
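
The flavor of path reduction can be illustrated on linear paths: a self step ('.') is an identity, and consecutive descendant-or-self gaps collapse, so 'a//.//b' needs one structural join instead of three. This toy function handles only relative, branch-free paths and stands in for (rather than reproduces) the paper's equivalence-class algorithm.

```python
def reduce_path(path):
    """Reduce a relative XPath of the form name(/name | //name | /.)*:
    drop self steps and collapse consecutive '//' gaps, shrinking the
    number of structural joins needed to evaluate the query."""
    steps = path.split("/")      # '' marks the gap inside '//'
    out = []
    for s in steps:
        if s == ".":
            continue             # self::node() is an identity step
        if s == "" and out and out[-1] == "":
            continue             # adjacent descendant-or-self gaps collapse
        out.append(s)
    return "/".join(out)

print(reduce_path("a//.//b"))       # a//b
print(reduce_path("book/./title"))  # book/title
```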