• Title/Summary/Keyword: Indexing Technique

Search Results: 203

An Efficient Future Indexing Technique for the Moving Object Location Prediction System (이동 객체 위치 예측 시스템을 위한 효율적인 미래 인덱싱 기법)

  • Lee, Kang-Joon;Kim, Joung-Joon;Han, Ki-Joon
    • 한국공간정보시스템학회:학술대회논문집
    • /
    • 2007.06a
    • /
    • pp.3-8
    • /
    • 2007
  • The need for moving object location prediction systems, which manage the location information of moving objects in road network environments and predict their future locations, is steadily increasing. Such systems are used to quickly predict the future locations of moving objects for traffic control and in various emergency situations, and they enable more convenient location-based services. Most future indexing techniques for these systems use past movement trajectories to predict the future locations of moving objects. However, managing the past trajectories of numerous moving objects is difficult, and the massive volume of update requests needed to reflect future trajectories that change in real time raises index maintenance costs and degrades the responsiveness of future location queries. This paper therefore proposes the PFCT-Tree (Probability Future Cell Trajectory-Tree), a cell-based future indexing method that efficiently predicts future locations from the massive past trajectories of moving objects. The PFCT-Tree reduces index size by reorganizing past trajectories on a per-cell basis and can quickly predict future locations for long-term queries based on the empirical statistics accumulated in each cell. In addition, by indexing future time separately from future trajectories to speed up future trajectory updates, it minimizes the index update cost caused by location prediction errors. Finally, experiments demonstrate that the PFCT-Tree outperforms existing indexing techniques in both update and search performance in road network environments.

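The abstract above does not spell out the internal structure of the PFCT-Tree, but the general idea it describes, reducing past trajectories to grid cells and predicting future locations from per-cell statistics, can be illustrated with a minimal sketch. The class, method names, and cell-transition counting below are illustrative assumptions, not the paper's actual index.

```python
from collections import defaultdict

class CellTransitionModel:
    """Minimal sketch of cell-based future-location prediction (not the paper's
    PFCT-Tree): past trajectories are reduced to grid cells and per-cell
    transition counts, and the most frequent next cell is predicted."""

    def __init__(self, cell_size):
        self.cell_size = cell_size
        # transitions[current_cell][next_cell] = observed count
        self.transitions = defaultdict(lambda: defaultdict(int))

    def to_cell(self, x, y):
        # Map a raw coordinate to its grid cell id.
        return (int(x // self.cell_size), int(y // self.cell_size))

    def add_trajectory(self, points):
        # points: sequence of (x, y) samples of one past trajectory.
        cells = [self.to_cell(x, y) for x, y in points]
        for cur, nxt in zip(cells, cells[1:]):
            if cur != nxt:
                self.transitions[cur][nxt] += 1

    def predict_next_cell(self, x, y):
        # Return the most frequently observed next cell, or None if unseen.
        cur = self.to_cell(x, y)
        nexts = self.transitions.get(cur)
        if not nexts:
            return None
        return max(nexts, key=nexts.get)

model = CellTransitionModel(cell_size=100.0)
model.add_trajectory([(10, 10), (120, 15), (230, 20)])
model.add_trajectory([(20, 30), (130, 40), (140, 160)])
print(model.predict_next_cell(105, 25))  # (2, 0) — an observed next cell from cell (1, 0)
```

A real future index would additionally organize these cells in a tree and, as the abstract notes, index future time separately from the future trajectories to keep update costs low.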

A Study on the Information Searching Behavior of MEDLINE Retrieval in Medical Librarian (의학전문사서의 정보이용행위에 관한 연구)

  • Lee Jin-Young;Jeong Sang-Kyung
    • Journal of Korean Library and Information Science Society
    • /
    • v.30 no.2
    • /
    • pp.123-153
    • /
    • 1999
  • This article aims at finding ways, based on studies of search behavior with existing CD-ROM databases, for searchers who retrieve the online MEDLINE used in medical libraries to use the data more efficiently. We surveyed librarians in 60 medical libraries with questionnaires and reviewed the literature and current practices of data use to examine MEDLINE search behavior in medical libraries. The results are as follows: 1) The rate of medical data systems for single users was 53% and that for multiple users 43%. As for weekly search time, under 2 hours accounted for 75%, 3 to 8 hours for 18.3%, and 9 hours or more for 6.7%. 2) The factors that improve search results are (1) sufficient discussion and interviews between librarians and users, and (2) the use of correct indexing terms, thesaurus terms, and keywords. In principle, users should search directly; however, librarians searched on their behalf when weekly search time was under two hours (75%). 3) As for search fees, 91% were free and 9% were charged. Search effectiveness was also enhanced by means of Inter-Library Loan Service and information networks. 4) The medical librarians answered that they need training in the application of professional knowledge, medical terminology (thesaurus), and electronic media, as well as computer education, interview techniques, and re-education to provide satisfactory service. 5) As for satisfaction with MEDLINE use, 44.6% cited economy, 38.2% the time saved, and 58.9% user search satisfaction. 6) The adoption of the MEDLINE system enhanced the medical libraries' image and positively affected users' satisfaction with data use and searching, data activities, and research achievement. 7) In the past MeSH was used, but over time CD-ROM MEDLINE searching came to be preferred over online searching.


Soccer Video Highlight Building Algorithm using Structural Characteristics of Broadcasted Sports Video (스포츠 중계 방송의 구조적 특성을 이용한 축구동영상 하이라이트 생성 알고리즘)

  • 김재홍;낭종호;하명환;정병희;김경수
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.7_8
    • /
    • pp.727-743
    • /
    • 2003
  • This paper proposes an automatic highlight building algorithm for soccer video that exploits a structural characteristic of broadcasted sports video: an interesting (or important) event (such as a goal or foul) is followed by a replay shot surrounded by gradual shot change effects like wipes. This shot editing rule is used to analyze the structure of broadcasted soccer video and to extract the shots involving important events for building a highlight. The algorithm first uses the spatio-temporal image of the video to detect wipe transition effects and zoom out/in shot changes, which are then used to detect replay shots. However, using the spatio-temporal image alone to detect wipe transitions requires too much computation, and the algorithm must be changed whenever the wipe pattern changes. To solve these problems, a two-pass detection algorithm and a pixel sub-sampling technique are proposed. Furthermore, to detect zoom out/in shot changes and replay shots more precisely, the green-area ratio and the motion energy are also computed in the proposed scheme. Finally, highlight shots composed of event and player shots are extracted by using the pre-detected replay shots and zoom out/in shot change points. The proposed algorithm will be useful for web services or broadcasting services requiring abstracted soccer video.
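
As a point of reference for the green-area ratio mentioned above, the sketch below computes the fraction of "grass-colored" pixels in an HSV frame; the thresholds are illustrative assumptions rather than values from the paper. Frames dominated by the field (long shots) score high, while close-ups and replays tend to score lower.

```python
import numpy as np

def green_area_ratio(frame_hsv, h_range=(35, 85), s_min=60, v_min=40):
    """Fraction of pixels falling in a 'grass green' HSV range.
    The thresholds are illustrative assumptions, not values from the paper."""
    h, s, v = frame_hsv[..., 0], frame_hsv[..., 1], frame_hsv[..., 2]
    mask = (h >= h_range[0]) & (h <= h_range[1]) & (s >= s_min) & (v >= v_min)
    return float(mask.mean())

# Example: a synthetic 4x4 HSV frame that is mostly "green".
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[..., 0] = 60   # hue inside the green band
frame[..., 1] = 120  # saturation
frame[..., 2] = 150  # value
frame[0, 0] = (0, 0, 0)  # one non-green pixel
print(green_area_ratio(frame))  # 0.9375
```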

Possibility of Drought stress Indexing by Chlorophyll Fluorescence Imaging Technique in Red Pepper (Capsicum annuum L.) (고추의 엽록소 형광 이미지 분석법에 의한 한발스트레스 지표화 가능성)

  • Yoo, Sung-Yung;Eom, Ki-Cheol;Park, So-Hyun;Kim, Tae-Wan
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.45 no.5
    • /
    • pp.676-682
    • /
    • 2012
  • The objectives of this study focused on measuring chlorophyll fluorescence related to drought stress and comparing several parameters. Most parameters declined, although not significantly, on the basis of mean fluorescence values over the total leaf area. While the ratio of variable chlorophyll fluorescence ($F_V$) to maximal chlorophyll fluorescence ($F_M$) did not change, the effective quantum yield of photochemical energy conversion in photosystem II (${\Phi}PSII$) and the chlorophyll fluorescence decrease ratio ($R_{fd}$) were slightly reduced, indicating inhibition of electron transport from quinone-binding protein A ($Q_A$) to quinone-binding protein B ($Q_B$). Some parameters, such as the non-photochemical quenching rate ($NPQ_{LSS}$) and the coefficient of non-photochemical quenching of variable fluorescence (qN) in the mid-zone of the leaf and the zone near the petiole, were significantly enhanced within 4 days after drought stress and can therefore be used as physiological stress parameters. A decrease in ${\Phi}PSII$ was significantly measured in all leaf zones. In conclusion, three parametric lines of evidence for chlorophyll fluorescence responses, namely ${\Phi}PSII$, NPQ, and qN, suggest the possibility of photophysiological indices under drought stress.
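
For context, the fluorescence parameters named in the abstract have standard definitions in the chlorophyll fluorescence literature (they are given here for orientation and are not quoted from the paper itself):

$$\frac{F_V}{F_M}=\frac{F_M-F_0}{F_M},\qquad {\Phi}PSII=\frac{F_M'-F_S}{F_M'},\qquad NPQ=\frac{F_M-F_M'}{F_M'},\qquad R_{fd}=\frac{F_M-F_S}{F_S}$$

where $F_0$ and $F_M$ are the minimal and maximal fluorescence of a dark-adapted leaf, $F_M'$ the maximal fluorescence under illumination, and $F_S$ the steady-state fluorescence.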

The Recognition of Occluded 2-D Objects Using the String Matching and Hash Retrieval Algorithm (스트링 매칭과 해시 검색을 이용한 겹쳐진 이차원 물체의 인식)

  • Kim, Kwan-Dong;Lee, Ji-Yong;Lee, Byeong-Gon;Ahn, Jae-Hyeong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.5 no.7
    • /
    • pp.1923-1932
    • /
    • 1998
  • This paper deals with a 2-D object recognition algorithm. We present an algorithm that reduces the computation time of model retrieval by means of a hashing technique instead of the binary-tree method. We treat an object boundary as a string of structural units and use an attributed string matching algorithm to compute the similarity measure between two strings. From the privileged strings we select the privileged string with minimal eccentricity, which is treated as the reference string. We then construct a hash table using the distance between each privileged string and the reference string as a key value. Once the database of all model strings is built, recognition proceeds by segmenting the scene into a polygonal approximation. The distance between the privileged string extracted from the scene and the reference string is used to retrieve model hypotheses from the table. As a result of computer simulation, the proposed method can recognize objects by computing the distance only 2-3 times, whereas the previous method must compute the distance 8-10 times for model retrieval.

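The hashing idea in the abstract, keying models by their distance to a common reference string so that recognition probes only a few buckets, can be sketched as follows. Plain Levenshtein distance stands in for the paper's attributed string matching, and the class and tolerance parameter are hypothetical.

```python
from collections import defaultdict

def edit_distance(a, b):
    # Plain Levenshtein distance, used here as a stand-in for the paper's
    # attributed string matching measure.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

class DistanceHashIndex:
    """Models are bucketed by their distance to a shared reference string,
    so a scene string only probes a few buckets instead of every model."""

    def __init__(self, reference, tolerance=2):
        self.reference = reference
        self.tolerance = tolerance
        self.buckets = defaultdict(list)

    def add_model(self, name, model_string):
        key = edit_distance(model_string, self.reference)
        self.buckets[key].append((name, model_string))

    def candidates(self, scene_string):
        # By the triangle inequality, a model within the tolerance of the scene
        # string differs from the scene's reference distance by at most the
        # tolerance, so only nearby buckets need to be probed.
        d = edit_distance(scene_string, self.reference)
        out = []
        for k in range(d - self.tolerance, d + self.tolerance + 1):
            out.extend(self.buckets.get(k, []))
        return out

index = DistanceHashIndex(reference="abcde")
index.add_model("wrench", "abccde")
index.add_model("hammer", "xyz")
print(index.candidates("abcdde"))  # [('wrench', 'abccde')]
```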

A Practical Approximate Sub-Sequence Search Method for DNA Sequence Databases (DNA 시퀀스 데이타베이스를 위한 실용적인 유사 서브 시퀀스 검색 기법)

  • Won, Jung-Im;Hong, Sang-Kyoon;Yoon, Jee-Hee;Park, Sang-Hyun;Kim, Sang-Wook
    • Journal of KIISE:Databases
    • /
    • v.34 no.2
    • /
    • pp.119-132
    • /
    • 2007
  • In molecular biology, approximate subsequence search is one of the most important operations. In this paper, we propose an accurate and efficient method for approximate subsequence search in large DNA databases. The proposed method basically adopts a binary trie as its primary structure and stores all the window subsequences extracted from a DNA sequence. For approximate subsequence search, it traverses the binary trie in a breadth-first fashion and retrieves all the matched subsequences from the traversed path within the trie by a dynamic programming technique. However, the proposed method stores only window subsequences of the pre-determined length, and thus suffers from large post-processing time in case of long query sequences. To overcome this problem, we divide a query sequence into shorter pieces, perform searching for those subsequences, and then merge their results. To verify the superiority of the proposed method, we conducted performance evaluation via a series of experiments. The results reveal that the proposed method, which requires smaller storage space, achieves 4 to 17 times improvement in performance over the suffix tree based method. Even when the length of a query sequence is large, our method is more than an order of magnitude faster than the suffix tree based method and the Smith-Waterman algorithm.
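
A minimal sketch of the query-splitting strategy described at the end of the abstract is given below: fixed-length windows of the database sequence are indexed (a plain dictionary stands in for the paper's binary trie), a long query is split into pieces of the window length, and candidate positions are verified against the whole query. Exact piece lookup and mismatch counting are simplifications of the paper's approximate, dynamic-programming-based matching.

```python
from collections import defaultdict

def build_window_index(sequence, w):
    """Index every length-w window of the database sequence by its text.
    A plain dict stands in for the paper's binary trie."""
    index = defaultdict(list)
    for i in range(len(sequence) - w + 1):
        index[sequence[i:i + w]].append(i)
    return index

def search_long_query(sequence, index, w, query, max_mismatch=1):
    """Split a long query into length-w pieces, look each piece up exactly,
    then verify the full query around every candidate position by counting
    mismatches (a simplification of the paper's dynamic-programming match)."""
    hits = set()
    for p in range(0, len(query) - w + 1, w):
        piece = query[p:p + w]
        for pos in index.get(piece, []):
            start = pos - p
            if start < 0 or start + len(query) > len(sequence):
                continue
            window = sequence[start:start + len(query)]
            mismatches = sum(a != b for a, b in zip(window, query))
            if mismatches <= max_mismatch:
                hits.add(start)
    return sorted(hits)

db = "ACGTACGTTACGGACGTACGA"
idx = build_window_index(db, w=4)
print(search_long_query(db, idx, w=4, query="ACGTTACG"))  # [4]
```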

k-Interest Places Search Algorithm for Location Search Map Service (위치 검색 지도 서비스를 위한 k관심지역 검색 기법)

  • Cho, Sunghwan;Lee, Gyoungju;Yu, Kiyun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.31 no.4
    • /
    • pp.259-267
    • /
    • 2013
  • GIS-based web map services are increasingly accessible to the public. Among them, location query services are the most frequently utilized, but they are currently restricted to single-keyword searches. Although demand is growing for a service that queries multiple keywords corresponding to sequential activities (banking, having lunch, watching a movie, and so on) across various POI locations, such a service is yet to be provided. The objective of this paper is to develop the k-IPS algorithm for quickly and accurately querying the multiple POIs that internet users enter and locating the search results on a web map. The algorithm utilizes the hierarchical tree structure of the $R^*$-tree indexing technique to produce overlapping geometric regions. By using a recursive $R^*$-tree index-based spatial join process, the performance of the conventional spatial join operation was improved. The performance of the algorithm was tested with spatial queries of 2, 3, and 4 POIs selected from a set of 159 keywords. About 90% of the test results were produced within 0.1 second. The algorithm proposed in this paper is expected to be utilized for a variety of location-based query services, for which demand is increasing to conveniently support citizens' daily activities.
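
The abstract does not detail the k-IPS spatial join itself, but the outcome it targets, groups of POIs (one per keyword) that lie close together, can be sketched with a brute-force stand-in; the POI data, distance threshold, and function name below are hypothetical.

```python
from itertools import product
from math import hypot

# Hypothetical candidate POIs per keyword (name, x, y); coordinates are made up.
pois = {
    "bank":   [("BankA", 1.0, 1.0), ("BankB", 9.0, 9.0)],
    "lunch":  [("DinerA", 1.2, 1.1), ("DinerB", 5.0, 5.0)],
    "cinema": [("CinemaA", 1.1, 1.4), ("CinemaB", 8.0, 1.0)],
}

def k_interest_groups(pois_by_keyword, max_span=1.0):
    """Brute-force stand-in for the paper's R*-tree spatial join: return every
    combination of one POI per keyword whose members all lie within max_span
    of each other (checked via the group's bounding-box diagonal)."""
    groups = []
    for combo in product(*pois_by_keyword.values()):
        xs = [p[1] for p in combo]
        ys = [p[2] for p in combo]
        if hypot(max(xs) - min(xs), max(ys) - min(ys)) <= max_span:
            groups.append([p[0] for p in combo])
    return groups

print(k_interest_groups(pois))  # [['BankA', 'DinerA', 'CinemaA']]
```

A real implementation would prune combinations using the $R^*$-tree's bounding rectangles instead of enumerating the full Cartesian product.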

An Enhancing Technique for Scan Performance of a Skip List with MVCC (MVCC 지원 스킵 리스트의 범위 탐색 향상 기법)

  • Kim, Leeju;Lee, Eunji
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.5
    • /
    • pp.107-112
    • /
    • 2020
  • Recently, unstructured data is being produced rapidly by web-based services. NoSQL systems and key-value stores that process unstructured data as key and value pairs are widely used in various applications. This paper studies the skip list used for in-memory data management in an LSM-tree based key-value store. The skip list used in the key-value store is an insertion-based skip list that does not allow overwriting and processes all changes only by insertion. This behavior can support Multi-Version Concurrency Control (MVCC), which can simultaneously process multiple read/write requests through snapshot isolation. However, since duplicate keys exist in the skip list, performance significantly degrades due to unnecessary node visits during list traversal. In particular, serious overhead occurs for range queries or scan operations that search a specific range of data at once. This paper proposes a newly designed Stride SkipList to reduce this overhead. The Stride SkipList additionally maintains an indexing pointer to the last node of the same key to avoid unnecessary node visits. The proposed scheme is implemented using RocksDB's in-memory component, and the performance evaluation shows that the performance of the SCAN operation improves by up to 350 times compared to the existing skip list for various workloads.
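
The duplicate-skipping idea behind the Stride SkipList can be sketched without a full skip list: keep all versions of a key adjacent in sorted order and maintain a per-key pointer to the newest version so a scan jumps over older versions. The sketch below is not RocksDB's memtable or the paper's exact structure, and it omits snapshot visibility (it always returns the newest version).

```python
import bisect

class VersionedList:
    """Sketch of the duplicate-skipping idea on a plain sorted list: every write
    appends a new version (key, seq, value), and a per-key pointer to the newest
    version lets a scan jump over older versions instead of visiting them."""

    def __init__(self):
        self.entries = []  # sorted by (key, seq): all versions of a key are adjacent
        self.newest = {}   # key -> index of its newest version in self.entries

    def put(self, key, seq, value):
        item = (key, seq, value)
        pos = bisect.bisect_left(self.entries, item)
        self.entries.insert(pos, item)
        # Repair indices shifted by the insert, then record the newest version.
        for k, i in self.newest.items():
            if i >= pos:
                self.newest[k] = i + 1
        if key not in self.newest or self.entries[self.newest[key]][1] < seq:
            self.newest[key] = pos

    def scan(self, lo, hi):
        """Return the newest value of every key in [lo, hi), skipping older versions."""
        i = bisect.bisect_left(self.entries, (lo, float("-inf"), None))
        out = []
        while i < len(self.entries) and self.entries[i][0] < hi:
            key = self.entries[i][0]
            j = self.newest[key]  # jump straight to the newest version
            out.append((key, self.entries[j][2]))
            # Skip past every remaining version of this key.
            while i < len(self.entries) and self.entries[i][0] == key:
                i += 1
        return out

vl = VersionedList()
vl.put("a", 1, "a1"); vl.put("a", 2, "a2"); vl.put("b", 1, "b1")
print(vl.scan("a", "c"))  # [('a', 'a2'), ('b', 'b1')]
```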

An Efficient Face Region Detection for Content-based Video Summarization (내용기반 비디오 요약을 위한 효율적인 얼굴 객체 검출)

  • Kim Jong-Sung;Lee Sun-Ta;Baek Joong-Hwan
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.7C
    • /
    • pp.675-686
    • /
    • 2005
  • In this paper, we propose an efficient face region detection technique for content-based video summarization. To segment the video, shot changes are detected from the video sequence and key frames are selected from the shots; we select the one frame in each shot that has the least difference from its neighboring frames. The proposed face detection algorithm detects face regions in the selected key frames, and we then provide the user with summarized frames that include face regions, which carry important meaning in dramas or movies. Using the Bayes classification rule and the statistical characteristics of skin pixels, face regions are detected in the frames. After skin detection, we adopt the projection method to segment an image (frame) into face and non-face regions. The segmented regions are candidates for the face object and include many falsely detected regions, so we design a classifier using CART to minimize false detections. From SGLD matrices, we extract textural feature values such as Inertia, Inverse Difference, and Correlation. As a result of our experiments, the proposed face detection algorithm shows good performance for key frames with complex and varying backgrounds, and our system provides the user with key frames that include face regions as video summary information.
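
The projection step mentioned above can be sketched as follows: given a binary skin mask, row and column projections are thresholded to crop a candidate face region. The skin mask and threshold here are synthetic and illustrative; the paper's Bayes skin classifier and CART verification are not reproduced.

```python
import numpy as np

def projection_regions(skin_mask, min_ratio=0.2):
    """Crop a candidate region from a binary skin mask using row/column projections:
    keep the span of rows and columns whose skin-pixel ratio exceeds min_ratio.
    The threshold is an illustrative assumption, not the paper's."""
    row_ratio = skin_mask.mean(axis=1)  # fraction of skin pixels per row
    col_ratio = skin_mask.mean(axis=0)  # fraction of skin pixels per column
    rows = np.where(row_ratio > min_ratio)[0]
    cols = np.where(col_ratio > min_ratio)[0]
    if rows.size == 0 or cols.size == 0:
        return None
    return (rows[0], rows[-1], cols[0], cols[-1])  # top, bottom, left, right

# Synthetic 8x8 mask with a 4x4 "skin" block in the middle.
mask = np.zeros((8, 8), dtype=float)
mask[2:6, 3:7] = 1.0
print(projection_regions(mask))  # (2, 5, 3, 6)
```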

Dynamic Management of Equi-Join Results for Multi-Keyword Searches (다중 키워드 검색에 적합한 동등조인 연산 결과의 동적 관리 기법)

  • Lim, Sung-Chae
    • The KIPS Transactions:PartA
    • /
    • v.17A no.5
    • /
    • pp.229-236
    • /
    • 2010
  • With an increasing number of documents on the Internet and in enterprises, it becomes crucial to support users' queries on those documents efficiently. In that situation, the full-text search technique is generally adopted because it can answer uncontrolled ad-hoc queries by automatically indexing all the keywords found in the documents. The size of the index files made for full-text searches grows with the number of indexed documents, and thus the disk cost may become too large to process multi-keyword queries against those enlarged index files. To solve the problem, we propose both an index file structure and a management scheme suitable for processing multi-keyword queries against a large volume of index files. For this, we adopt the structure of inverted files, which are widely used in multi-keyword searches, as the basic index structure and modify it into a hierarchical structure for the join operations and ranking operations performed during query processing. To save disk costs based on that index structure, we dynamically store in main memory the results of join operations between two keywords when they are highly likely to appear together in users' queries. We also perform performance comparisons using a disk cost model to show the performance advantage of the proposed scheme.
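
A minimal sketch of the caching idea described above: an inverted index keeps a posting set per keyword, and the intersection (equi-join) of two posting sets is kept in memory once the keyword pair has been queried often enough. The class name, threshold, and eviction-free cache are illustrative assumptions; the paper's hierarchical index structure and disk cost model are not modeled, and cache invalidation on document updates is omitted.

```python
from collections import defaultdict

class InvertedIndexWithJoinCache:
    """Sketch of caching pairwise join (intersection) results for frequently
    queried keyword pairs so they can be answered from memory."""

    def __init__(self, cache_threshold=2):
        self.postings = defaultdict(set)   # keyword -> set of document ids
        self.pair_hits = defaultdict(int)  # how often a keyword pair was queried
        self.join_cache = {}               # (kw1, kw2) -> cached intersection
        self.cache_threshold = cache_threshold

    def add_document(self, doc_id, keywords):
        for kw in keywords:
            self.postings[kw].add(doc_id)

    def query(self, kw1, kw2):
        pair = tuple(sorted((kw1, kw2)))
        self.pair_hits[pair] += 1
        if pair in self.join_cache:
            return self.join_cache[pair]    # served from memory, no index access
        result = self.postings[kw1] & self.postings[kw2]
        if self.pair_hits[pair] >= self.cache_threshold:
            self.join_cache[pair] = result  # keep hot pairs resident in memory
        return result

idx = InvertedIndexWithJoinCache()
idx.add_document(1, ["database", "index"])
idx.add_document(2, ["database", "search"])
idx.add_document(3, ["index", "search", "database"])
print(idx.query("database", "index"))  # {1, 3}
print(idx.query("database", "index"))  # second hit: now cached for future queries
```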