• Title/Summary/Keyword: search algorithm


Video Matching Algorithm of Content-Based Video Copy Detection for Copyright Protection (저작권보호를 위한 내용기반 비디오 복사검출의 비디오 정합 알고리즘)

  • Hyun, Ki-Ho
    • Journal of Korea Multimedia Society / v.11 no.3 / pp.315-322 / 2008
  • To search for the location of a copied video in a video database, signatures should be robust to video re-editing, channel noise, and variations in frame rate. Several kinds of signatures have been proposed. The ordinal signature, one of them, has difficulty describing the spatial characteristics of a frame, because the average gray value is computed over a fixed $N{\times}N$ window. In this paper, I study a sequence-matching algorithm for video copy detection for copyright protection, employing the R-tree index method for retrieval and suggesting robust ordinal signatures for the original video clips and the same signatures for the pirated video. The robust ordinal signature has a two-dimensional vector structure that is resilient to noise and to variations in frame rate, and it is expressed in MBR form in the R-tree search space. Moreover, I focus on building a video copy detection method with which content publishers register their valuable digital content; the detection algorithm compares web content to the registered content and notifies the content owners of illegal copies. Experimental results show that the proposed method improves the video matching rate and that the signature is well suited to large video databases.

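The MBR-based matching sketched in the abstract can be illustrated with a minimal, self-contained example (all names and data are hypothetical; the paper's actual R-tree and signature extraction are not shown). Per-clip ordinal signatures are treated as 2-D points, each clip is summarized by its minimum bounding rectangle, and a query signature matches any clip whose MBR contains it:

```python
def mbr(points):
    """Minimum bounding rectangle of 2-D signature points: (xmin, ymin, xmax, ymax)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

def contains(box, point):
    """True if the MBR contains the query point."""
    xmin, ymin, xmax, ymax = box
    x, y = point
    return xmin <= x <= xmax and ymin <= y <= ymax

# Hypothetical ordinal signatures of two registered clips, one 2-D vector per window.
clips = {
    "clip_a": [(0.1, 0.2), (0.15, 0.25), (0.12, 0.3)],
    "clip_b": [(0.7, 0.8), (0.75, 0.85)],
}
boxes = {name: mbr(sig) for name, sig in clips.items()}

query = (0.13, 0.26)  # signature extracted from a suspect video frame
matches = [name for name, box in boxes.items() if contains(box, query)]
```

A real R-tree would organize the MBRs hierarchically so that only overlapping subtrees are visited; the linear scan here only shows the containment test itself.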

A Study on Development of Patent Information Retrieval Using Textmining (텍스트 마이닝을 이용한 특허정보검색 개발에 관한 연구)

  • Go, Gwang-Su;Jung, Won-Kyo;Shin, Young-Geun;Park, Sang-Sung;Jang, Dong-Sik
    • Journal of the Korea Academia-Industrial cooperation Society / v.12 no.8 / pp.3677-3688 / 2011
  • A patent information retrieval system can serve a variety of purposes. In general, patent information is retrieved using a limited set of keywords, so identifying earlier technology and priority rights requires repeated effort. This study proposes a method of content-based retrieval using text mining. Using the proposed algorithm, each document is assigned a characteristic value, and these characteristic values are used to compute similarities between query documents and database documents. The text analysis is composed of three steps: stop-word removal, keyword analysis, and weighted-value calculation. In the experiments, general retrieval and the proposed algorithm were compared using accuracy measurements. Because the method ranks the result documents by their similarity to the query document, a searcher can improve efficiency by reviewing the most similar documents first. Also, because the full text of a patent document can be used as input, users unfamiliar with search can use the system easily and quickly. By using content-based retrieval instead of keyword-based retrieval to extend the scope of the search, it can also reduce the amount of missing data in the displayed results.
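The three-step text analysis followed by similarity ranking can be sketched roughly as follows, assuming a plain term-frequency weighting and cosine similarity (the paper's exact weighting scheme is not given here; the stop-word list and documents are illustrative):

```python
import math
from collections import Counter

STOP_WORDS = {"a", "an", "the", "of", "and", "to", "in"}  # illustrative list

def vectorize(text):
    """Tokenize, drop stop-words, and build a term-frequency vector."""
    tokens = [w for w in text.lower().split() if w not in STOP_WORDS]
    return Counter(tokens)

def cosine(u, v):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(u[t] * v[t] for t in u)
    norm = math.sqrt(sum(c * c for c in u.values())) * \
           math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

query = vectorize("content based retrieval of patent documents")
database = {
    "doc1": vectorize("a method for content based patent retrieval"),
    "doc2": vectorize("an apparatus for brewing coffee"),
}
# Rank database documents by similarity to the query document.
ranked = sorted(database, key=lambda d: cosine(query, database[d]), reverse=True)
```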

A Scalable OWL Horst Lite Ontology Reasoning Approach based on Distributed Cluster Memories (분산 클러스터 메모리 기반 대용량 OWL Horst Lite 온톨로지 추론 기법)

  • Kim, Je-Min;Park, Young-Tack
    • Journal of KIISE / v.42 no.3 / pp.307-319 / 2015
  • Current ontology studies use the Hadoop distributed storage framework to perform map-reduce-based reasoning over scalable ontologies. In this paper, however, we propose a novel approach to scalable Web Ontology Language (OWL) Horst Lite ontology reasoning based on distributed cluster memories. Rule-based reasoning, which is frequently used for scalable ontologies, iteratively executes triple-format ontology rules until no new data can be inferred. Therefore, when scalable ontology reasoning is performed on computer hard drives, the ontology reasoner suffers from performance limitations. In order to overcome this drawback, we propose an approach that loads the ontologies into distributed cluster memories using Spark (a memory-based distributed computing framework), which then executes the ontology reasoning. In order to implement an appropriate OWL Horst Lite ontology reasoning system on Spark, our method divides the scalable ontologies into blocks, loads each block into the cluster nodes, and subsequently handles the data in the distributed memories. We used the Lehigh University Benchmark, which is used to evaluate ontology inference and search speed, to experimentally evaluate the methods suggested in this paper, which we applied to LUBM8000 (1.1 billion triples, 155 gigabytes). When compared with WebPIE, a representative map-reduce-based scalable ontology reasoner, the proposed approach showed a throughput improvement of 320% (62k/s) over WebPIE (19k/s).
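The iterate-until-fixpoint behavior of rule-based reasoning can be sketched on a single machine (this is not Spark code; the two rules shown are standard RDFS entailment rules, used here only to illustrate the loop that repeats until no new triple is produced):

```python
def infer(triples):
    """Apply two RDFS entailment rules repeatedly until no new triple appears."""
    triples = set(triples)
    while True:
        new = set()
        sub = [(s, o) for s, p, o in triples if p == "subClassOf"]
        typ = [(s, o) for s, p, o in triples if p == "type"]
        # rdfs11: subClassOf is transitive.
        for a, b in sub:
            for c, d in sub:
                if b == c:
                    new.add((a, "subClassOf", d))
        # rdfs9: instances of a subclass are instances of the superclass.
        for x, cls in typ:
            for a, b in sub:
                if cls == a:
                    new.add((x, "type", b))
        if new <= triples:  # fixpoint: nothing inferred that we did not know
            return triples
        triples |= new

ontology = {
    ("Student", "subClassOf", "Person"),
    ("Person", "subClassOf", "Agent"),
    ("alice", "type", "Student"),
}
closed = infer(ontology)
```

The paper's contribution is to run this kind of loop over memory-resident distributed datasets instead of re-reading triples from disk on every iteration.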

Development of a back analysis program for reasonable derivation of tunnel design parameters (합리적인 터널설계정수 산정을 위한 역해석 프로그램 개발)

  • Kim, Young-Joon;Lee, Yong-Joo
    • Journal of Korean Tunnelling and Underground Space Association / v.15 no.3 / pp.357-373 / 2013
  • In this paper, a back analysis program for analyzing the behavior of the tunnel-ground system and evaluating material properties and tunnel design parameters was developed. The program implements back analysis of underground structures by combining FLAC with an optimization algorithm as a direct method. In particular, the Rosenbrock method, which performs a direct search without requiring derivatives, was adopted as the back analysis algorithm among the optimization methods. The program was applied in the field to evaluate design parameters, and back analysis was carried out using field measurement results from five sites. In the course of the back analysis, nonlinear regression analysis was carried out to identify the optimum function for the measured ground displacement. An exponential function and a fractional function were used for the regression analysis, and the total displacement calculated by the optimum function was used as the back analysis input data. As a result, the displacement recalculated through back analysis from the measured displacement of the structure showed an error of 4.5% compared with the measured data. Hence, the program developed in this study proved to be effectively applicable to tunnel analysis.
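The overall back analysis loop, a derivative-free direct search that repeatedly runs the forward model and keeps parameter values that reduce the error against measurement, can be sketched as follows (a toy one-parameter spring model stands in for FLAC, and this simple coordinate search only loosely mirrors the Rosenbrock method):

```python
def forward_model(stiffness, load=100.0):
    """Toy stand-in for the numerical analysis (FLAC in the paper):
    displacement of a linear spring under a fixed load."""
    return load / stiffness

def direct_search(measured, x0, step=10.0, tol=1e-6):
    """Derivative-free search over one design parameter: try steps in both
    directions, keep improvements, shrink the step when neither helps."""
    x = x0
    err = (forward_model(x) - measured) ** 2
    while step > tol:
        improved = False
        for cand in (x + step, x - step):
            if cand <= 0:  # stiffness must stay positive
                continue
            e = (forward_model(cand) - measured) ** 2
            if e < err:
                x, err, improved = cand, e, True
        if not improved:
            step *= 0.5
    return x

# Back-calculate the stiffness that reproduces a measured displacement of 2.0.
k = direct_search(measured=2.0, x0=10.0)
```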

A User Authentication System Using Face Analysis and Similarity Comparison (얼굴 분석과 유사도 비교를 이용한 사용자 인증 시스템)

  • Ryu Dong-Yeop;Yim Young-Whan;Yoon Sunnhee;Seo Jeong Min;Lee Chang Hoon;Lee Keunsoo;Lee Sang Moon
    • Journal of Korea Multimedia Society / v.8 no.11 / pp.1439-1448 / 2005
  • In this paper, we describe a method of user authentication that detects the face region in an input image by comparing skin-color information and analyzing the geometric positions of important facial features, and then verifies the user using ratio information among those features. A face extraction algorithm that uses color information has an advantage over one that uses shape information, in that it is not affected by the angle or position of the face. However, because it is based on color, it is difficult to maintain accurate performance under changes in lighting or against backgrounds whose color is similar to skin tone. A more robust approach is therefore to detect important facial features, such as the eyes and lips, in addition to the color information, and to compare the similarity of each feature. This paper proposes a system that divides the face into individual components, computes ratio-based characteristic values for each component, applies weights to the similarities of the eyes and mouth, and authenticates the user by confirming the overall similarity through a search. Experiments with the proposed method showed that the recognition rate improved.

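The ratio-and-weight similarity comparison might be sketched as follows (the feature names, weights, and acceptance threshold are all hypothetical; the paper's actual feature set and weighting are not given here):

```python
def feature_ratios(face):
    """Hypothetical ratio features: eye spacing and mouth width, each
    normalized by face width so the ratios are scale-invariant."""
    return (face["eye_gap"] / face["width"], face["mouth"] / face["width"])

def weighted_similarity(a, b, weights=(0.6, 0.4)):
    """Similarity in [0, 1]: 1 minus the weighted absolute ratio difference."""
    return 1.0 - sum(w * abs(x - y) for w, x, y in zip(weights, a, b))

enrolled = feature_ratios({"width": 100.0, "eye_gap": 40.0, "mouth": 50.0})
probe = feature_ratios({"width": 200.0, "eye_gap": 82.0, "mouth": 98.0})

score = weighted_similarity(enrolled, probe)
authenticated = score >= 0.95  # acceptance threshold (illustrative)
```

Because the features are ratios, the probe face at twice the pixel size still compares cleanly against the enrolled template.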

A Tree-structured XPath Query Reduction Scheme for Enhancing XML Query Processing Performance (XML 질의의 수행성능 향상을 위한 트리 구조 XPath 질의의 축약 기법에 관한 연구)

  • Lee, Min-Soo;Kim, Yun-Mi;Song, Soo-Kyung
    • The KIPS Transactions:PartD / v.14D no.6 / pp.585-596 / 2007
  • XML data generally has a hierarchical tree structure, which is reflected in the mechanisms used to store and retrieve XML data. Therefore, when storing XML data in a database, the hierarchical relationships among the XML elements are taken into consideration during the restructuring and storing of the data. Also, in order to support search queries from the user, a mechanism is needed to compute the hierarchical relationships between the element structures specified by the query. The structural join operation is one solution to this problem and is an efficient method for computing hierarchical relationships in a database based on the node numbering scheme. However, processing a tree-structured XML query that contains complex nested hierarchical relationships still requires multiple structural joins, which leads to a high query execution cost. Therefore, in this paper we provide a preprocessing mechanism that effectively reduces the cost of multiple nested structural joins by applying the concept of equivalence classes, and we suggest a query path reduction algorithm that shortens path queries expressed as regular expressions. The mechanism is especially devised to reduce path queries containing branch nodes. The experimental results show that the proposed algorithm can reduce the time required to process path queries to one third of the original execution time.
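A structural join under a node numbering scheme can be sketched as follows, assuming the common (start, end) interval numbering in which an ancestor's interval encloses its descendants' intervals (the numbering and data are illustrative):

```python
def is_ancestor(a, d):
    """Ancestor test under interval numbering: the ancestor's (start, end)
    interval must strictly enclose the descendant's."""
    return a[0] < d[0] and d[1] < a[1]

def structural_join(ancestors, descendants):
    """Pair every candidate ancestor node with each descendant it encloses."""
    return [(a, d) for a in ancestors for d in descendants if is_ancestor(a, d)]

# Interval numbers for a small XML tree: book encloses chapter encloses title.
book = (1, 10)
chapter = (2, 9)
title = (3, 4)

pairs = structural_join([book, chapter], [title])
```

Each nested step of a tree-structured XPath query costs one such join, which is why the paper's reduction of the number of joins pays off directly.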

Determining the number of Clusters in On-Line Document Clustering Algorithm (온라인 문서 군집화에서 군집 수 결정 방법)

  • Jee, Tae-Chang;Lee, Hyun-Jin;Lee, Yill-Byung
    • The KIPS Transactions:PartB / v.14B no.7 / pp.513-522 / 2007
  • Clustering divides given data and automatically discovers the hidden meanings in the data. It analyzes data that are difficult for people to check in detail and then builds several clusters of data with similar characteristics. An on-line document clustering system, which groups similar documents using the results of a search engine, is aimed at increasing the convenience of information retrieval. Document clustering is done automatically, without human interference, and the number of clusters, which affects the result of the clustering, should be decided automatically as well. Also, one of the requirements of an on-line system is a guarantee of fast response time. This paper proposes a method of determining the number of clusters automatically from geometrical information. The proposed method is composed of two stages: in the first stage, the cluster centers are projected onto a low-dimensional plane, and in the second stage, clusters are combined based on the distances between cluster centers in that plane. Experiments with real data show that the clustering performance improves and that the response time is suitable for an on-line environment.
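The two-stage idea, project the cluster centers onto a low-dimensional plane and then merge clusters whose projected centers are close, can be sketched as follows (the projection here is a trivial stand-in; the paper's actual projection and merging criteria are not specified):

```python
import math

def project(center):
    """Toy projection onto a low-dimensional plane: keep the first two
    coordinates (a real system might use PCA or a similar projection)."""
    return center[:2]

def merge_close_clusters(centers, threshold):
    """Union-find style merging: clusters whose projected centers lie within
    the threshold distance are combined, which fixes the cluster count."""
    parent = list(range(len(centers)))

    def find(i):
        while parent[i] != i:
            i = parent[i]
        return i

    pts = [project(c) for c in centers]
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            if math.dist(pts[i], pts[j]) < threshold:
                parent[find(j)] = find(i)
    return len({find(i) for i in range(len(centers))})

# Three initial centers; the first two are close on the projected plane.
centers = [(0.0, 0.0, 1.0), (0.1, 0.0, 9.0), (5.0, 5.0, 2.0)]
k = merge_close_clusters(centers, threshold=1.0)
```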

Numerical Study on the Development of the Seismic Response Prediction Method for the Low-rise Building Structures using the Limited Information (제한된 정보를 이용한 저층 건물 구조물의 지진 응답 예측 기법 개발을 위한 해석적 연구)

  • Choi, Se-Woon
    • Journal of the Computational Structural Engineering Institute of Korea / v.33 no.4 / pp.271-277 / 2020
  • There are increasing cases of monitoring the structural response of structures using multiple sensors. However, owing to cost and management problems, only a limited number of sensors can be installed in a structure. Thus, few structural responses are collected, which hinders analysis of the behavior of the structure. Therefore, a technique is needed that uses the limited sensors to predict, to a reliable level, the responses at locations where sensors are not installed. In this study, a numerical study is conducted to predict the seismic response of low-rise buildings using limited information. It is assumed that the only available response information is the acceleration responses of the first and top floors. Using both, the first natural frequency of the structure can be obtained, and the acceleration of the first floor is used as the ground motion information. To minimize the error in the acceleration history response of the top floor and the error in the first natural frequency of the target structure, a method for predicting the mass and stiffness of the structure using a genetic algorithm is presented; constraints, however, are not considered. To determine the ranges of the design variables, which define the search space, a parameter prediction method based on artificial neural networks is proposed. To verify the proposed method, a five-story structure is used as an example.
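A minimal sketch of the genetic-algorithm step, searching for a (stiffness, mass) pair whose natural frequency matches a target within given design-variable ranges, is shown below (a single-degree-of-freedom model stands in for the five-story structure, and all GA settings are illustrative):

```python
import math
import random

def natural_frequency(k, m):
    """First natural frequency (Hz) of a single-degree-of-freedom stand-in."""
    return math.sqrt(k / m) / (2.0 * math.pi)

def fitness(ind, target_freq):
    """Error to minimize: frequency mismatch of a (stiffness, mass) pair."""
    return abs(natural_frequency(*ind) - target_freq)

def genetic_search(target_freq, bounds, generations=60, pop_size=30, seed=1):
    """Minimal GA: tournament selection, blend crossover, clamped mutation.
    `bounds` gives the design-variable ranges that define the search space
    (the ranges the paper proposes to set with a neural network)."""
    rng = random.Random(seed)
    pop = [tuple(rng.uniform(lo, hi) for lo, hi in bounds) for _ in range(pop_size)]

    def tournament():
        return min(rng.sample(pop, 2), key=lambda ind: fitness(ind, target_freq))

    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            a, b = tournament(), tournament()
            child = tuple((x + y) / 2.0 for x, y in zip(a, b))  # blend crossover
            if rng.random() < 0.3:  # mutate, clamped to the search space
                child = tuple(min(max(c + rng.gauss(0.0, 0.05) * (hi - lo), lo), hi)
                              for c, (lo, hi) in zip(child, bounds))
            nxt.append(child)
        pop = nxt
    return min(pop, key=lambda ind: fitness(ind, target_freq))

# Pretend the target frequency was identified from the measured accelerations.
target = natural_frequency(400.0, 10.0)
best = genetic_search(target, bounds=((100.0, 1000.0), (5.0, 20.0)))
```

Note that many (k, m) pairs share the same frequency, which is why the paper also matches the top-floor acceleration history rather than the frequency alone.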

Extracting Silhouettes of a Polyhedral Model from a Curved Viewpoint Trajectory (곡선 궤적의 이동 관측점에 대한 다면체 모델의 윤곽선 추출)

  • Kim, Gu-Jin;Baek, Nak-Hun
    • Journal of the Korea Computer Graphics Society / v.8 no.2 / pp.1-7 / 2002
  • The fast extraction of the silhouettes of a model is very useful for many applications in computer graphics and animation. In this paper, we present an efficient algorithm to compute a sequence of perspective silhouettes of a polyhedral model from a moving viewpoint. The viewpoint is assumed to move along a trajectory q(t), which is a space curve in a time parameter t. We can then compute the time intervals during which each edge of the model is contained in the silhouette by two major computations: (i) intersecting q(t) with two planes and (ii) a number of dot products. If q(t) is a curve of degree n, then there are at most n + 1 time intervals during which an edge is in a silhouette. For each time point $t_i$, we can extract the silhouette edges by searching the computed intervals for those containing $t_i$. For an efficient search, we propose two kinds of data structures for storing the intervals: an interval tree and an array. Our algorithm can be easily extended to compute parallel silhouettes with minor modifications.

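The dot-product part of the computation can be illustrated with the standard silhouette-edge test: an edge lies on the silhouette when exactly one of its two adjacent faces is front-facing from the viewpoint (the cube data below is illustrative):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sub(u, v):
    return tuple(a - b for a, b in zip(u, v))

def is_silhouette_edge(n1, n2, p, eye):
    """An edge is on the silhouette when its two adjacent faces, with normals
    n1 and n2 sharing the point p, face opposite ways from the viewpoint."""
    view = sub(eye, p)  # vector from the edge toward the eye
    return (dot(n1, view) > 0) != (dot(n2, view) > 0)

# A cube edge shared by the +x face and the +y face, seen from in front of +x.
eye = (5.0, 0.0, 0.0)
shared_point = (1.0, 1.0, 0.0)
silhouette = is_silhouette_edge((1, 0, 0), (0, 1, 0), shared_point, eye)
```

The paper precomputes, for each edge, the time intervals of q(t) on which this test holds, so that per-frame extraction reduces to an interval search.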

An Improved Search Space for QRM-MLD Signal Detection for Spatially Multiplexed MIMO Systems (공간다중화 MIMO 시스템의 QRM-MLD 신호검출을 위한 개선된 탐색공간)

  • Hur, Hoon;Woo, Hyun-Myung;Yang, Won-Young;Bahng, Seung-Jae;Park, Youn-Ok;Kim, Jae-Kwon
    • The Journal of Korean Institute of Communications and Information Sciences / v.33 no.4A / pp.403-410 / 2008
  • In this paper, we propose a variant of the QRM-MLD signal detection method used for spatially multiplexed multiple-antenna systems. The original QRM-MLD method combines QR decomposition with the M-algorithm, thereby significantly reducing the prohibitive hardware complexity of ML signal detection while still achieving near-ML performance. When the number of transmit antennas and/or the constellation size is increased to achieve a higher bit rate, however, the increased complexity makes hardware implementation challenging. In an effort to overcome this drawback of the original QRM-MLD, a number of variants have been proposed. The strongest among them, in our opinion, is the ranking method, in which the constellation points are ranked and computation is performed only for highly ranked constellation points, thereby reducing the required complexity. However, the ranking-method variant suffers a significant performance degradation compared with the original QRM-MLD. In this paper, we identify the reasons for the performance degradation and propose a novel variant that overcomes these drawbacks. A set of computer simulations shows that the proposed method achieves performance near that of the original QRM-MLD, while its computational complexity is close to that of the QRM-MLD with the ranking method.
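The M-algorithm tree search at the heart of QRM-MLD can be sketched as follows, assuming the channel has already been QR-decomposed so that detection proceeds on an upper-triangular system, expanding each surviving candidate by every constellation point and keeping the M best (the toy values are illustrative, and noise is omitted):

```python
def qrm_mld(R, z, symbols, M=2):
    """M-algorithm search on the upper-triangular system z = R s:
    detect symbols from the last row upward, keeping the M best partial
    candidates (by accumulated squared error) at each level."""
    n = len(R)
    partial = [([], 0.0)]  # (symbols decided so far, accumulated metric)
    for level in range(n - 1, -1, -1):
        expanded = []
        for syms, metric in partial:
            for s in symbols:
                cand = [s] + syms  # cand holds s[level..n-1]
                est = sum(R[level][level + k] * cand[k] for k in range(len(cand)))
                expanded.append((cand, metric + (z[level] - est) ** 2))
        partial = sorted(expanded, key=lambda c: c[1])[:M]  # survivor selection
    return partial[0][0]

# Toy 2x2 system after QR decomposition, BPSK alphabet {-1, +1}.
R = [[2.0, 0.5],
     [0.0, 1.5]]
true_s = [1.0, -1.0]
z = [R[0][0] * true_s[0] + R[0][1] * true_s[1],  # noise-free received vector
     R[1][1] * true_s[1]]
detected = qrm_mld(R, z, symbols=(-1.0, 1.0), M=2)
```

The ranking variant the paper discusses would expand only the top-ranked entries of `symbols` at each level instead of all of them, trading accuracy for fewer metric evaluations.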