• Title/Summary/Keyword: 인덱스 테이블 (index table)


Image retrieval algorithm based on feature vector using color histogram refinement (칼라 히스토그램 정제를 이용한 특징벡터 기반 영상 검색 알고리즘)

  • Kang, Ji-Young;Park, Jong-An;Beak, Jung-Uk
    • 한국HCI학회:학술대회논문집 / 2008.02a / pp.376-379 / 2008
  • This paper presents an image retrieval algorithm based on a feature vector obtained through color histogram refinement, aiming at faster and more efficient search in content-based image retrieval. First, we separate the R, G, and B channels of an RGB color image and extract a histogram for each channel. Second, each of the R, G, and B histograms is divided into sixteen bins. Finally, we extract the maximum pixel values in each bin; these values are calculated, compared, and analyzed, and image retrieval is then performed using them. The proposed algorithm effectively extracts features by comparing input and database images and by arranging the R, G, and B features into a feature vector table, and it shows better search performance than the existing algorithm that relies only on histogram matching and ranks. A short illustrative sketch of this feature extraction follows below.

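The feature-extraction step described in the abstract above can be illustrated with a short, hedged sketch. The bin count (16) follows the abstract, but the normalization and the L1 distance below are simplifications of my own, and all function and variable names are hypothetical rather than taken from the paper.

```python
import numpy as np

def rgb_feature_vector(image, bins=16):
    """Build a 3*bins feature vector from an RGB image (H x W x 3, uint8).

    For each of the R, G, B channels a 16-bin histogram is computed; as a rough
    stand-in for the paper's per-bin maximum idea, the bin counts are normalized
    so that images of different sizes remain comparable.
    """
    features = []
    for channel in range(3):
        hist, _ = np.histogram(image[:, :, channel], bins=bins, range=(0, 256))
        features.append(hist / hist.sum())          # normalized 16-bin histogram
    return np.concatenate(features)                 # length 3 * bins

def retrieve(query_image, database_images, top_k=5):
    """Rank database images by L1 distance between feature vectors."""
    q = rgb_feature_vector(query_image)
    scored = []
    for name, img in database_images.items():
        d = np.abs(q - rgb_feature_vector(img)).sum()
        scored.append((d, name))
    return sorted(scored)[:top_k]
```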

A Text Processing Method for Devanagari Scripts in Android (안드로이드에서 힌디어 텍스트 처리 방법)

  • Kim, Jae-Hyeok;Maeng, Seung-Ryol
    • The Journal of the Korea Contents Association / v.11 no.12 / pp.560-569 / 2011
  • In this paper, we propose a text processing method for Hindi characters, written in the Devanagari script, on Android. The key points of the text processing are to devise automata that define the rules for combining alphabet characters into syllables, and to implement a font rendering engine that retrieves and displays the glyph images corresponding to specific characters. In general, an automaton depends on the type and number of characters. For the soft keyboard, we designed the automata with 14 consonants and 34 vowels based on Unicode. Finally, a combined syllable is converted into a glyph index using the mapping table, which is then used as a handle to load its glyph image. Following the multilingual framework of the FreeType font engine, Devanagari scripts can be supported at the system level by adding the implementation of our method to the font engine as a Hindi module. The proposed method is verified through a simple message system. A toy sketch of the combining automaton and glyph lookup follows below.
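As a rough illustration of the two components mentioned above (a combining automaton plus a glyph mapping table), the sketch below models a tiny consonant + vowel-sign automaton over Devanagari code points (U+0900-U+097F) and resolves the combined syllable to a glyph index through a dictionary. The states, the mapping table contents, and all names are hypothetical; the paper's actual automaton and FreeType integration are not reproduced here.

```python
# A tiny, hypothetical sketch of syllable combination and glyph lookup
# for Devanagari text (Unicode block U+0900-U+097F).

CONSONANTS = {chr(cp) for cp in range(0x0915, 0x093A)}   # क..ह
VOWEL_SIGNS = {chr(cp) for cp in range(0x093E, 0x094D)}  # dependent vowel signs (matras)

# Hypothetical mapping table: combined syllable string -> glyph index in the font.
GLYPH_INDEX = {
    "\u0915": 101,          # क
    "\u0915\u093F": 205,    # कि (consonant + vowel sign I)
}

def combine(chars):
    """Very small automaton: a consonant may be followed by one vowel sign."""
    state, syllable = "START", ""
    for ch in chars:
        if state == "START" and ch in CONSONANTS:
            syllable, state = ch, "CONSONANT"
        elif state == "CONSONANT" and ch in VOWEL_SIGNS:
            syllable, state = syllable + ch, "SYLLABLE"
        else:
            raise ValueError("sequence not accepted by this toy automaton")
    return syllable

def glyph_for(chars):
    """Map a combined syllable to a glyph index (the rendering engine's handle)."""
    return GLYPH_INDEX.get(combine(chars), 0)  # 0 = .notdef in many fonts

print(glyph_for(["\u0915", "\u093F"]))  # -> 205 in this toy table
```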

Extended R-Tree with Grid Filter for Efficient Filtering (효율적인 여과를 위한 그리드 필터를 갖는 R-Tree 의 확장)

  • 김재흥
    • Spatial Information Research / v.8 no.1 / pp.155-170 / 2000
  • When an R-Tree, a spatial index, is used to find objects matching some predicate, performing the filtering step only with MBRs often produces false candidates. Each candidate must then be inspected to confirm whether it really satisfies the given query, the so-called refinement step. The refinement step requires disk I/O and expensive spatial operations, which are the main cause of increasing retrieval cost. Therefore, to minimize the number of candidates after the filtering step, two-phase filtering methods have been studied, but they suffer from problems such as inefficient filtering, maintenance of additional information, and reconstruction of data caused by the loss of the original information. In this paper, I propose an Extended R-Tree that can retrieve spatial objects with only a few simple logical operations in a second filtering step, using a Grid Table, a truth table storing information about the existence of spatial objects. Consequently, the Extended R-Tree with the Grid Filter has a low filtering cost thanks to the efficient second filtering step, and better filtering effectiveness owing to its higher-quality approximation. A brief sketch of the grid-filter idea follows below.

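To make the two-step filtering idea concrete, here is a small, hedged sketch of how a grid table (a boolean occupancy bitmap per index entry) could prune candidates after the usual MBR test. The grid resolution, the class layout, and all names are my own assumptions; the paper's exact Extended R-Tree structure is not reproduced.

```python
from dataclasses import dataclass, field

GRID = 8  # hypothetical grid resolution per MBR (8 x 8 cells)

@dataclass
class GridEntry:
    mbr: tuple                     # (xmin, ymin, xmax, ymax)
    cells: list = field(default_factory=lambda: [[False] * GRID for _ in range(GRID)])

    def mark(self, x, y):
        """Mark the grid cell that actually contains part of the object."""
        xmin, ymin, xmax, ymax = self.mbr
        i = min(GRID - 1, int((x - xmin) / (xmax - xmin) * GRID))
        j = min(GRID - 1, int((y - ymin) / (ymax - ymin) * GRID))
        self.cells[j][i] = True

    def may_contain(self, x, y):
        """Second filtering step: a cheap logical test on the grid bitmap."""
        xmin, ymin, xmax, ymax = self.mbr
        if not (xmin <= x <= xmax and ymin <= y <= ymax):   # first step: MBR test
            return False
        i = min(GRID - 1, int((x - xmin) / (xmax - xmin) * GRID))
        j = min(GRID - 1, int((y - ymin) / (ymax - ymin) * GRID))
        return self.cells[j][i]                             # prune dead space inside the MBR

# Only candidates surviving may_contain() proceed to the expensive refinement step.
```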

Accuracy Simulation of Precision Rotary Motion Systems (회전운동 시스템의 정밀도 시뮬레이션 기술)

  • Hwang, Joo-Ho;Shim, Jong-Youp;Hong, Seong-Wook;Lee, Deug-Woo
    • Journal of the Korean Society for Precision Engineering / v.28 no.3 / pp.285-291 / 2011
  • The error motion of a machine tool spindle directly affects the surface errors of machined parts. Spindle error motions are the undesired errors in the three linear motions and the two rotational motions; they usually arise from bearing imperfections, spindle stiffness, assembly errors, external forces, or rotor unbalance. These error motions must be reduced to meet the spindle's performance goal, so their level needs to be estimated during the design and assembly of the spindle. In this paper, an estimation method for the five-degree-of-freedom (5 DOF) error motions of the spindle is suggested. To estimate the error motions, the waviness of the shaft and bearings and an external force model are used as input data, and the estimation models consider the geometric relationships and force equilibrium of the five degrees of freedom. In calculating the error motions, not only imperfections of the shaft and bearings (rolling element, hydrostatic, and aerostatic bearings) but also driving elements such as worm, pulley, and direct-drive motor systems are considered. A simplified sketch of this kind of error-motion estimation follows below.
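The abstract does not give the underlying equations, so the sketch below only illustrates the general flavor of such an estimation: radial error motion composed from a few bearing and shaft waviness harmonics. The harmonic model, the amplitudes, and all names are assumptions for illustration and are not the authors' model.

```python
import math

def radial_error_motion(theta, waviness):
    """Sum a few waviness harmonics into a radial error at spindle angle theta.

    waviness: list of (harmonic_order k, amplitude_um a_k, phase_rad p_k);
    the error is modeled (purely for illustration) as sum_k a_k*cos(k*theta + p_k).
    """
    return sum(a * math.cos(k * theta + p) for k, a, p in waviness)

# Hypothetical input: 2nd, 3rd and 15th harmonics from shaft and bearing waviness.
waviness = [(2, 0.12, 0.0), (3, 0.05, 1.0), (15, 0.02, 0.3)]
samples = [radial_error_motion(2 * math.pi * i / 360, waviness) for i in range(360)]
print(f"peak-to-valley radial error: {max(samples) - min(samples):.3f} um")
```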

PSR: Pre-Computing Solutions in RDBMS for Efficient Web Services Composition Search (PSR : 효율적인 웹 서비스 컴포지션 검색을 위한 RDBMS 기반의 선 계산 기법)

  • Kwon, Joon-Ho;Park, Kyu-Ho;Lee, Dae-Wook;Lee, Suk-Ho
    • Journal of KIISE:Databases / v.35 no.4 / pp.333-344 / 2008
  • In recent years, web services composition has received much attention. By web services composition, we mean providing a new service that does not exist in the repository. In this paper, we propose a new system called PSR for web services composition search using a relational database. We also propose algorithms for pre-computing web services compositions using joins and indices. We store ontologies from web services in an RDBMS, so that the PSR system returns web services compositions in order of similarity to the user query based on the degree of ontology matching. We demonstrate that our approach of pre-computing web services compositions in an RDBMS yields lower execution time and good scalability when handling a large number of web services and user queries. A small pre-computation sketch follows below.
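The core idea of pre-computing compositions with joins and indices in an RDBMS can be sketched with SQLite from Python. The schema (services with one input and one output concept), the two-step composition, and the table and column names are simplifications of my own, not the PSR schema.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE service(name TEXT, input TEXT, output TEXT);
    CREATE INDEX idx_service_input ON service(input);

    -- Pre-computed two-step compositions (filled once, queried many times).
    CREATE TABLE composition(first TEXT, second TEXT, input TEXT, output TEXT);
    CREATE INDEX idx_comp_io ON composition(input, output);
""")
con.executemany("INSERT INTO service VALUES (?, ?, ?)", [
    ("GeoCode", "Address", "Coordinates"),
    ("Weather", "Coordinates", "Forecast"),
    ("Traffic", "Coordinates", "Congestion"),
])

# Pre-compute compositions with a self-join: the output of s1 feeds the input of s2.
con.execute("""
    INSERT INTO composition
    SELECT s1.name, s2.name, s1.input, s2.output
    FROM service s1 JOIN service s2 ON s1.output = s2.input
""")

# A user query "Address -> Forecast" is now a single indexed lookup.
for row in con.execute(
        "SELECT first, second FROM composition WHERE input=? AND output=?",
        ("Address", "Forecast")):
    print(row)   # ('GeoCode', 'Weather')
```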

XML Document Filtering based on Segments (세그먼트 기반의 XML 문서 필터링)

  • Kwon, Joon-Ho;Rao, Praveen;Moon, Bong-Ki;Lee, Suk-Ho
    • Journal of KIISE:Databases / v.35 no.4 / pp.368-378 / 2008
  • In recent years, publish-subscribe (pub-sub) systems based on XML document filtering have received much attention. In a typical pub-sub system, subscribed users specify their interests in profiles expressed in the XPath language, and each new content item is matched against the user profiles so that the content is delivered only to the interested subscribers. As the number of subscribed users and their profiles can grow very large, the scalability of the system is critical to the success of pub-sub services. In this paper, we propose a fast and scalable XML filtering system called SFiST, which is an extension of the FiST system. Sharable segments are extracted from twig patterns and stored in the hash-based Segment Table of the SFiST system. Segments are used to represent user profiles as Terse Sequences and are stored in the Compact Segment Index during filtering. Our experimental study shows that the SFiST system outperforms the FiST system in terms of filtering time and memory usage. A small segment-extraction sketch follows below.
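A hedged sketch of the segment idea: a twig (branching) XPath pattern is split into linear root-to-leaf segments, shared segments are stored once in a hash-based table, and each profile becomes a short sequence of segment ids. The splitting rule and all names below are my own simplification, not the SFiST encoding.

```python
# Toy segment table: maps a linear path segment to a small integer id.
segment_table = {}

def segment_id(segment):
    """Hash-based segment table: assign (or reuse) an id for a linear segment."""
    return segment_table.setdefault(segment, len(segment_table))

def to_segments(twig):
    """Split a simple twig pattern like '/a/b[c]/d' into linear segments.

    Purely illustrative: each predicate '[x]' becomes its own branch segment,
    and only a single predicate hanging off the trunk is handled.
    """
    segments, buf = [], ""
    for ch in twig:
        if ch == "[":
            segments.append(buf)        # trunk so far
            buf += "/"                  # branch hangs off the current node
        elif ch == "]":
            segments.append(buf)
            buf = segments[0]           # continue from the trunk
        else:
            buf += ch
    segments.append(buf)
    return [segment_id(s) for s in dict.fromkeys(segments)]  # terse, deduplicated

profile1 = to_segments("/book/author[name]/email")
profile2 = to_segments("/book/author[name]/phone")
print(profile1, profile2, segment_table)  # shared segments receive the same ids
```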

A Distributed Spatial Indexing Technique based on Hilbert Curve and MBR for k-NN Query Processing in a Single Broadcast Channel Environment (단일방송채널환경에서 k-최근접질의 처리를 위한 힐버트 곡선과 최소영역 사각형 기반의 분산 공간 인덱싱 기법)

  • Yi, Jung-Hyung;Jung, Sung-Won
    • Journal of KIISE:Databases / v.37 no.4 / pp.203-208 / 2010
  • This paper presents an efficient index scheduling technique based on the Hilbert curve and MBRs for k-NN queries in a single wireless broadcast channel environment. Previous works have two major problems: they take a long time to process queries because of back-tracking, and they download too much spatial data because they cannot reduce the search space quickly. Our proposed method broadcasts spatial data in Hilbert curve order, and a distributed index table is broadcast together with each spatial data item. Each entry of the index table represents the MBR that groups spatial data. By predicting the unknown locations of spatial data, the proposed index scheme allows mobile clients to discard unnecessary data and to reduce the search space quickly. As a result, our method decreases both tuning time and access latency. A small sketch of Hilbert ordering and MBR-based pruning follows below.
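The two ingredients mentioned above can be sketched briefly: the standard Hilbert-curve mapping used to order broadcast data, and a minimum-distance test against an index entry's MBR that lets a client skip groups that cannot contain a k-NN answer. The data layout and names are illustrative assumptions, not the paper's exact index format.

```python
import math

def hilbert_d(order, x, y):
    """Map grid cell (x, y) to its distance along a Hilbert curve with 2^order cells per side."""
    d = 0
    s = 1 << (order - 1)
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                      # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s >>= 1
    return d

def min_dist(point, mbr):
    """Lower bound on the distance from a query point to any object inside an MBR."""
    px, py = point
    xmin, ymin, xmax, ymax = mbr
    dx = max(xmin - px, 0, px - xmax)
    dy = max(ymin - py, 0, py - ymax)
    return math.hypot(dx, dy)

# Broadcast order: objects sorted by Hilbert value; clients prune whole MBR groups
# whose min_dist already exceeds the distance of the k-th neighbour found so far.
objects = [(3, 5), (10, 2), (6, 6), (1, 1)]
print(sorted(objects, key=lambda p: hilbert_d(4, *p)))
print(min_dist((0, 0), (2, 2, 5, 5)))    # 2.828..., prune if worse than current k-th NN
```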

Comparison of Search Performance of SQLite3 Database by Linux File Systems (Linux File Systems에 따른 SQLite3 데이터베이스의 검색 성능 비교)

  • Choi, Jin-Oh
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.1 / pp.1-6 / 2022
  • Recently, IoT sensors often produce stream data locally, which is provided to edge computing applications. The large volume of produced data is stored in the mobile device's database for real-time processing and then synchronized with the server when needed. Many mobile databases have been developed to support such applications, for example CloudScape, DB2 Everyplace, ASA, and PointBase Mobile, and the most widely used database on Linux is SQLite3. In this paper, we focus on the performance required for synchronization with the server. The search performance of SQLite3 was compared and analyzed according to the type of Linux file system on which the database is stored. Performance differences were checked for each file system under various search query types, and criteria were prepared and presented for choosing a more appropriate Linux file system for index-based and table-scan environments. A small benchmark sketch follows below.
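A minimal sketch of the kind of measurement behind this comparison: the same SQLite3 database file is placed on a directory mounted with the file system under test, and an indexed lookup is timed against a full table scan. The path, table name, and query shapes are placeholders of my own, not the paper's benchmark.

```python
import sqlite3, time

DB_PATH = "/mnt/fs_under_test/bench.db"   # placeholder: mount point of the file system being tested

def setup(rows=100_000):
    con = sqlite3.connect(DB_PATH)
    con.execute("CREATE TABLE IF NOT EXISTS sensor(id INTEGER PRIMARY KEY, ts REAL, value REAL)")
    con.execute("CREATE INDEX IF NOT EXISTS idx_sensor_ts ON sensor(ts)")
    con.executemany("INSERT INTO sensor(ts, value) VALUES (?, ?)",
                    ((i * 0.1, i % 97) for i in range(rows)))
    con.commit()
    con.close()

def run(query, args=()):
    """Time one query on a fresh connection so caching effects stay comparable."""
    con = sqlite3.connect(DB_PATH)
    t0 = time.perf_counter()
    con.execute(query, args).fetchall()
    elapsed = time.perf_counter() - t0
    con.close()
    return elapsed

setup()
print("indexed lookup :", run("SELECT * FROM sensor WHERE ts BETWEEN ? AND ?", (10.0, 11.0)))
print("full table scan:", run("SELECT * FROM sensor WHERE value = ?", (42,)))
```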

Development of Quality Assurance Software for $PRESAGE^{REU}$ Gel Dosimetry ($PRESAGE^{REU}$ 겔 선량계의 분석 및 정도 관리 도구 개발)

  • Cho, Woong;Lee, Jaegi;Kim, Hyun Suk;Wu, Hong-Gyun
    • Progress in Medical Physics / v.25 no.4 / pp.233-241 / 2014
  • The aim of this study is to develop a new software tool for 3D dose verification using the $PRESAGE^{REU}$ gel dosimeter. The tool includes the following functions: importing 3D doses from treatment planning systems (TPS), importing 3D optical density (OD) data, converting ODs to doses, 3D registration between two volumetric data sets by translational and rotational transformations, and evaluation with the 3D gamma index. To obtain the correlation between ODs and doses, CT images of a cylindrical $PRESAGE^{REU}$ gel were acquired, and a volumetric modulated arc therapy (VMAT) plan was designed to deliver doses from 1 Gy to 6 Gy to six disk-shaped virtual targets along the z-axis. After the VMAT plan was delivered, 3D OD data were reconstructed from 512 projections acquired with a $Vista^{TM}$ optical CT scanner (Modus Medical Devices Inc., Canada) every 2 hours after irradiation. A curve for converting ODs to doses was derived by comparing the TPS dose profile with the OD profile along the z-axis, and the 3D OD data were converted to absorbed doses using this curve. Supra-linearity was observed between doses and ODs, and the ODs decayed by about 60% per 24 hours depending on their magnitude. Doses measured from the $PRESAGE^{REU}$ gel agreed well with the TPS doses in the central region, but large under-doses were observed in the peripheral region of the cylindrical geometry. The gamma passing rate for the 3D doses was 70.36% under gamma criteria of 3% dose difference and 3 mm distance to agreement. The low passing rate resulted from the refractive index mismatch between the PRESAGE gel and the oil bath in the optical CT scanner. In conclusion, the developed software was useful for 3D dose verification with PRESAGE gel dosimetry, but further improvement of the gel dosimetry system is required. A minimal gamma-index sketch follows below.
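For readers unfamiliar with the 3%/3 mm gamma criterion mentioned above, the sketch below computes a brute-force 1D gamma index between a reference and a measured dose profile; points with gamma <= 1 count as passing. The profiles and grid spacing are invented for illustration, and the tool described in the paper is of course fully 3D.

```python
import math

def gamma_index(ref, meas, spacing_mm, dose_crit=0.03, dta_mm=3.0):
    """Brute-force 1D gamma: for each measured point, search all reference points
    for the minimum combined dose-difference / distance-to-agreement metric."""
    d_max = max(ref)                              # global normalization for the 3% criterion
    gammas = []
    for i, dm in enumerate(meas):
        best = math.inf
        for j, dr in enumerate(ref):
            dose_term = (dm - dr) / (dose_crit * d_max)
            dist_term = (i - j) * spacing_mm / dta_mm
            best = min(best, math.hypot(dose_term, dist_term))
        gammas.append(best)
    return gammas

ref  = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]             # Gy, e.g. TPS dose along z
meas = [1.0, 2.1, 2.9, 4.3, 4.8, 6.2]             # Gy, e.g. gel-measured dose
g = gamma_index(ref, meas, spacing_mm=1.0)
print(f"passing rate: {100 * sum(x <= 1 for x in g) / len(g):.1f}%")
```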

Branching Path Query Processing for XML Documents using the Prefix Match Join (프리픽스 매취 조인을 이용한 XML 문서에 대한 분기 경로 질의 처리)

  • Park Young-Ho;Han Wook-Shin;Whang Kyu-Young
    • Journal of KIISE:Databases / v.32 no.4 / pp.452-472 / 2005
  • We propose XIR-Branching, a novel method for processing partial match queries on heterogeneous XML documents using information retrieval (IR) techniques and novel instance join techniques. A partial match query is defined as one having the descendant-or-self axis '//' in its path expression; in its general form, it has branch predicates forming branching paths. The objective of XIR-Branching is to efficiently support this type of query for large-scale documents with heterogeneous schemas. XIR-Branching builds on the conventional schema-level methods that use relational tables (e.g., XRel, XParent, XIR-Linear[21]) and significantly improves their efficiency and scalability using two techniques: an inverted index and a novel prefix match join. The former supports linear path expressions, as in XIR-Linear[21]. The latter supports branching path expressions and finds the result nodes more efficiently than the containment joins used in conventional methods. XIR-Linear is efficient for linear path expressions but does not handle branching path expressions, which are needed for more detailed and more general queries; this paper presents a method for handling them. XIR-Branching first reduces the candidate set for a query as a schema-level method and then efficiently finds the final result set using the prefix match join as an instance-level method. We compare the efficiency and scalability of XIR-Branching with those of XRel and XParent using XML documents crawled from the Internet. The results show that XIR-Branching is more efficient than both XRel and XParent by several orders of magnitude for linear path expressions and by several factors for branching path expressions. A small sketch of a prefix-style join on Dewey labels follows below.
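The idea of checking structural relationships via label prefixes (rather than containment-range joins) can be illustrated with Dewey-style labels: a node is a descendant of another exactly when its label extends the other's label. The labels, the join, and all names below are my own illustration and not the paper's prefix match join algorithm.

```python
def is_prefix(ancestor, node):
    """Dewey labels: (1, 2) is an ancestor of (1, 2, 5) because it is a proper prefix."""
    return len(ancestor) < len(node) and node[:len(ancestor)] == ancestor

def prefix_match_join(branch_nodes, result_nodes):
    """Join two lists of Dewey labels, keeping result nodes that have a matching
    branch node on their root path (a descendant-of test by prefix)."""
    return [r for r in result_nodes if any(is_prefix(b, r) for b in branch_nodes)]

# //author[name]//email : 'author' nodes that have a 'name' child, then their 'email's.
author_with_name = [(1, 2), (1, 4)]          # labels of qualifying author elements
email_nodes      = [(1, 2, 5), (1, 3, 1), (1, 4, 2)]
print(prefix_match_join(author_with_name, email_nodes))   # [(1, 2, 5), (1, 4, 2)]
```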