• Title/Summary/Keyword: Search Log

Applications of Transaction Log Analysis for the Web Searching Field (웹 검색 분야에서의 로그 분석 방법론의 활용도)

  • Park, So-Yeon;Lee, Joon-Ho
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.41 no.1
    • /
    • pp.231-242
    • /
    • 2007
  • Transaction logs capture the interactions between online information retrieval systems and their users. Given the nature of the Web and of Web users, transaction logs appear to be a reasonable and relevant method for collecting and investigating the information searching behaviors of a large number of Web users. Based on a series of research studies that analyzed Naver transaction logs, this study examines how transaction log analysis can be applied to, and can contribute to, the field of Web searching, and suggests future implications for the field. It is expected that this study could contribute to the development and implementation of more effective Web search systems and services.

POLYNOMIAL COMPLEXITY OF PRIMAL-DUAL INTERIOR-POINT METHODS FOR CONVEX QUADRATIC PROGRAMMING

  • Liu, Zhongyi;Sun, Wenyu;De Sampaio, Raimundo J.B.
    • Journal of applied mathematics & informatics
    • /
    • v.27 no.3_4
    • /
    • pp.567-579
    • /
    • 2009
  • Recently, Peng et al. proposed a primal-dual interior-point method with a new search direction and self-regular proximity for LP. This new large-update method currently has the best theoretical performance, with polynomial complexity of $O(n^{\frac{q+1}{2q}}\log\frac{n}{\varepsilon})$. In this paper we use this search direction to propose a primal-dual interior-point method for convex quadratic programming (QP). We overcome the difficulty in analyzing the complexity of primal-dual interior-point methods for convex quadratic programming, and obtain the same polynomial complexity of $O(n^{\frac{q+1}{2q}}\log\frac{n}{\varepsilon})$ for convex quadratic programming.
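
As a rough illustration of the framework the abstract builds on, the following is a minimal sketch of a textbook primal-dual interior-point iteration for a convex QP in standard form (minimize ½xᵀQx + cᵀx subject to Ax = b, x ≥ 0). It uses the plain Newton/central-path direction rather than the self-regular search direction analyzed in the paper, and the toy problem and all names are illustrative assumptions.

```python
import numpy as np

def qp_ipm(Q, c, A, b, tol=1e-8, sigma=0.2, max_iter=100):
    """Minimal primal-dual interior-point sketch for
    min 0.5 x'Qx + c'x  s.t.  Ax = b, x >= 0  (Q positive semidefinite).
    Plain Newton/central-path direction, not the paper's self-regular direction."""
    m, n = A.shape
    x, s, y = np.ones(n), np.ones(n), np.zeros(m)   # interior (possibly infeasible) start
    for _ in range(max_iter):
        r_dual = Q @ x + c - A.T @ y - s            # stationarity residual
        r_prim = A @ x - b                          # primal feasibility residual
        mu = x @ s / n                              # duality measure
        if max(np.linalg.norm(r_dual), np.linalg.norm(r_prim), mu) < tol:
            break
        # Assemble and solve the Newton (KKT) system for (dx, dy, ds).
        KKT = np.block([
            [Q,          -A.T,              -np.eye(n)],
            [A,           np.zeros((m, m)),  np.zeros((m, n))],
            [np.diag(s),  np.zeros((n, m)),  np.diag(x)],
        ])
        rhs = np.concatenate([-r_dual, -r_prim, sigma * mu - x * s])
        d = np.linalg.solve(KKT, rhs)
        dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
        # Damped step keeping x and s strictly positive.
        alpha = 1.0
        for v, dv in ((x, dx), (s, ds)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, 0.995 * np.min(-v[neg] / dv[neg]))
        x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
    return x, y, s

# Toy example: min x1^2 + x2^2 + x1  subject to  x1 + x2 = 1, x >= 0.
Q = np.diag([2.0, 2.0]); c = np.array([1.0, 0.0])
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
x, y, s = qp_ipm(Q, c, A, b)
print(x)   # approximately [0.25, 0.75]
```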

NEW PRIMAL-DUAL INTERIOR POINT METHODS FOR P*(κ) LINEAR COMPLEMENTARITY PROBLEMS

  • Cho, Gyeong-Mi;Kim, Min-Kyung
    • Communications of the Korean Mathematical Society
    • /
    • v.25 no.4
    • /
    • pp.655-669
    • /
    • 2010
  • In this paper we propose new primal-dual interior point methods (IPMs) for $P_*(\kappa)$ linear complementarity problems (LCPs) and analyze the iteration complexity of the algorithm. New search directions and proximity measures are defined based on a class of kernel functions, $\psi(t)=\frac{t^2-1}{2}-\int_1^t e^{q(\frac{1}{\xi}-1)}\,d\xi$, $q \geq 1$. If a strictly feasible starting point is available and the parameter $q = \log\left(1+a\sqrt{\frac{2\tau+2\sqrt{2n\tau}+\theta n}{1-\theta}}\right)$, where $a = 1+\frac{1}{\sqrt{1+2\kappa}}$, then the new large-update primal-dual interior point algorithms have $O((1+2\kappa)\sqrt{n}\,\log n\,\log\frac{n}{\varepsilon})$ iteration complexity, which is the best known result for this method. For small-update methods, we have $O((1+2\kappa)q\sqrt{qn}\,\log\frac{n}{\varepsilon})$ iteration complexity.
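
As a small numerical reading aid, the sketch below evaluates the kernel function ψ(t) quoted in the abstract by quadrature and sums it into the induced proximity measure Ψ(v) = Σᵢ ψ(vᵢ). It is not the authors' algorithm; the parameter value q = 2 and the helper names are assumptions made for illustration.

```python
import numpy as np
from scipy.integrate import quad

def psi(t, q=2.0):
    """Kernel function psi(t) = (t^2 - 1)/2 - int_1^t e^{q(1/xi - 1)} dxi, with q >= 1."""
    barrier, _ = quad(lambda xi: np.exp(q * (1.0 / xi - 1.0)), 1.0, t)
    return (t * t - 1.0) / 2.0 - barrier

def proximity(v, q=2.0):
    """Proximity measure Psi(v) = sum_i psi(v_i) for a scaled iterate v > 0."""
    return sum(psi(vi, q) for vi in v)

# psi vanishes at t = 1 (the central path) and grows as t -> 0+ or t -> infinity.
print(psi(1.0), psi(0.5), psi(2.0))
print(proximity(np.array([0.9, 1.0, 1.3])))
```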

An Analysis of Query Types and Topics Submitted to Naver (클릭 로그에 근거한 네이버 검색 질의의 형태 및 주제 분석)

  • Park, So-Yeon;Lee, Joon-Ho;Kim, Ji-Seoung
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.39 no.1
    • /
    • pp.265-278
    • /
    • 2005
  • This study examines the Web query types and topics submitted to Naver during a one-year period by analyzing query logs and click logs. Query logs capture the queries users submitted to the system, and click logs consist of the documents users clicked and viewed. This study presents a methodology for classifying query types and topics, and a method for click log analysis is also suggested. When classified by query type, there are more site search queries than content search queries. Queries about computer/internet, entertainment, shopping, games, and education rank highest. The implications for system designers and Web content providers are discussed.
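
The site-search versus content-search distinction in the abstract can be illustrated with a toy heuristic over click logs: if most clicks for a query land on a site's front page, treat it as a site search query, otherwise as a content search query. The record layout, URL test, and threshold below are invented for illustration and are not the paper's (or Naver's) actual classification methodology.

```python
from collections import defaultdict
from urllib.parse import urlparse

def classify_query_types(click_log, site_threshold=0.5):
    """Toy classification of queries into 'site' vs 'content' searches.
    click_log: iterable of (query, clicked_url) pairs -- a hypothetical layout,
    not the actual Naver click-log schema."""
    clicks = defaultdict(list)
    for query, url in click_log:
        clicks[query].append(url)
    labels = {}
    for query, urls in clicks.items():
        # Count clicks that land on a site's front page (empty path or '/').
        front_page = sum(1 for u in urls if urlparse(u).path in ("", "/"))
        labels[query] = "site" if front_page / len(urls) >= site_threshold else "content"
    return labels

log = [
    ("naver", "https://www.naver.com/"),
    ("naver", "https://www.naver.com/"),
    ("suffix array search", "https://example.com/papers/sa-search.html"),
]
print(classify_query_types(log))   # {'naver': 'site', 'suffix array search': 'content'}
```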

Improving Lookup Time Complexity of Compressed Suffix Arrays using Multi-ary Wavelet Tree

  • Wu, Zheng;Na, Joong-Chae;Kim, Min-Hwan;Kim, Dong-Kyue
    • Journal of Computing Science and Engineering
    • /
    • v.3 no.1
    • /
    • pp.1-4
    • /
    • 2009
  • Given a text T of size n, we need to be able to search it for the information we are interested in. In order to support fast searching, an index must be constructed by preprocessing the text. The suffix array is one such index data structure. The compressed suffix array (CSA) is one of the compressed indices based on the regularity of the suffix array, and can be compressed to the $k$th-order empirical entropy. In this paper we improve the lookup time complexity of the compressed suffix array by using a multi-ary wavelet tree, at the cost of more space. In our implementation, the lookup time complexity of the compressed suffix array is $O(\log_{\sigma}^{\varepsilon/(1-\varepsilon)} n \, \log_r \sigma)$, and the space of the compressed suffix array is $\varepsilon^{-1} n H_k(T) + O(n \log\log n / \log_{\sigma}^{\varepsilon} n)$ bits, where $\sigma$ is the size of the alphabet, $H_k$ is the $k$th-order empirical entropy, $r$ is the branching factor of the multi-ary wavelet tree such that $2 \leq r \leq \sqrt{n}$ and $r \leq O(\log_{\sigma}^{1-\varepsilon} n)$, and $0 < \varepsilon < 1/2$ is a constant.
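
To make the role of the multi-ary wavelet tree concrete, here is a toy rank structure with branching factor r; a rank query visits about log_r σ nodes, which is where the log_r σ factor in the lookup bound above comes from. The sketch stores digit sequences as plain lists, so it does not achieve the compressed space bound, and the class design is an assumption for illustration only.

```python
class MultiaryWaveletTree:
    """Toy multi-ary wavelet tree answering rank(c, i) with O(log_r sigma) node visits.
    Digit sequences are stored as plain Python lists, so the space bound quoted in
    the abstract is not achieved here; only the query structure is illustrated."""

    def __init__(self, text, r=4):
        self.r = r
        self.root = self._build(list(text), sorted(set(text)))

    def _build(self, seq, alphabet):
        if len(alphabet) <= 1:
            return None                              # leaf: one distinct symbol left
        r = min(self.r, len(alphabet))
        size = -(-len(alphabet) // r)                # ceil(len(alphabet) / r)
        groups = [alphabet[i:i + size] for i in range(0, len(alphabet), size)]
        digit_of = {c: d for d, grp in enumerate(groups) for c in grp}
        return {
            "groups": groups,
            "digits": [digit_of[c] for c in seq],    # one base-r digit per position
            "children": [self._build([c for c in seq if digit_of[c] == d], grp)
                         for d, grp in enumerate(groups)],
        }

    def rank(self, c, i):
        """Number of occurrences of symbol c in text[0:i]."""
        node = self.root
        while node is not None:
            d = next(k for k, grp in enumerate(node["groups"]) if c in grp)
            i = sum(1 for x in node["digits"][:i] if x == d)   # rank of digit d up to i
            node = node["children"][d]
        return i

wt = MultiaryWaveletTree("abracadabra", r=3)
print(wt.rank("a", 11))   # 5 occurrences of 'a' in the whole text
```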

An Analytic Study on the Categorization of Query through Automatic Term Classification (용어 자동분류를 사용한 검색어 범주화의 분석적 고찰)

  • Lee, Tae-Seok;Jeong, Do-Heon;Moon, Young-Su;Park, Min-Soo;Hyun, Mi-Hwan
    • The KIPS Transactions:PartD
    • /
    • v.19D no.2
    • /
    • pp.133-138
    • /
    • 2012
  • Queries entered in a search box are the result of users' activities to actively seek information; search logs are therefore important data that represent users' information needs. The purpose of this study is to examine whether there is a relationship between the automatically classified categories of queries and the categories of the documents accessed. Search sessions were identified in the 2009 NDSL (National Discovery for Science Leaders) log dataset of KISTI (Korea Institute of Science and Technology Information), and the queries and the items used were extracted for each session. The queries were processed with an automatic classifier, and the identified query categories were then compared with the subject categories of the items used. As a result, the average similarity was 58.8% for the automatic classification of the top 100 queries. Interestingly, this value is lower than 76.8%, the result obtained when the classification was evaluated by experts. This difference suggests that the terms used as queries are newly emerging as topics of interest in other fields of research.
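
A toy version of the comparison described above: for each search session, check whether the categories assigned to the query by an automatic classifier overlap with the subject categories of the items used, and report the share of sessions that match. The session layout and category labels are placeholders, not the NDSL/KISTI data or the paper's similarity measure.

```python
def average_category_match(sessions):
    """sessions: list of dicts with 'query_categories' (from an automatic classifier)
    and 'item_categories' (subjects of the items used in that session).
    Returns the share of sessions whose category sets overlap -- a toy stand-in
    for the similarity figure reported in the paper."""
    if not sessions:
        return 0.0
    hits = sum(1 for s in sessions
               if set(s["query_categories"]) & set(s["item_categories"]))
    return hits / len(sessions)

sessions = [
    {"query_categories": {"computer science"}, "item_categories": {"computer science", "statistics"}},
    {"query_categories": {"chemistry"},        "item_categories": {"materials science"}},
]
print(average_category_match(sessions))   # 0.5
```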

Linear-Time Search in Suffix Arrays (접미사 배열을 이용한 선형시간 탐색)

  • Sim Jeong-Seop;Kim Dong-Kyue;Park Heejin;Park Kunsoo
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.5
    • /
    • pp.255-259
    • /
    • 2005
  • To search for a pattern P in a text, index data structures such as suffix trees and suffix arrays are widely used in diverse applications of string processing and computational biology. It is well known that searching in suffix trees is faster than in suffix arrays in terms of time complexity, i.e., it takes $O(|P|)$ time to search for P over a constant-size alphabet in a suffix tree, while it takes $O(|P| + \log n)$ time in a suffix array, where n is the length of the text. In this paper we present a linear-time search algorithm in suffix arrays for constant-size alphabets. For a general alphabet $\Sigma$, it takes $O(|P| \log |\Sigma|)$ time.
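
For contrast with the O(|P|) result of the paper, the sketch below is the classical O(|P| log n) binary search over a suffix array that the abstract compares against, written from the standard textbook description rather than from the paper.

```python
def build_suffix_array(text):
    """Naive construction by sorting suffixes; fine for illustration only."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def search(text, sa, pattern):
    """Classical O(|P| log n) binary search over the suffix array
    (the baseline the abstract compares against, not the paper's O(|P|) algorithm)."""
    n, m = len(sa), len(pattern)
    lo, hi = 0, n
    while lo < hi:                                   # leftmost suffix with prefix >= pattern
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] < pattern:
            lo = mid + 1
        else:
            hi = mid
    left = lo
    lo, hi = left, n
    while lo < hi:                                   # leftmost suffix with prefix > pattern
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] <= pattern:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[k] for k in range(left, lo))    # all occurrence positions

text = "banana"
sa = build_suffix_array(text)    # [5, 3, 1, 0, 4, 2]
print(search(text, sa, "ana"))   # [1, 3]
```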

A Fast Fractal Image Compression Using The Normalized Variance (정규화된 분산을 이용한 프랙탈 압축방법)

  • Kim, Jong-Koo;Hamn, Do-Yong;Wee, Young-Cheul;Kimn, Ha-Jine
    • The KIPS Transactions:PartA
    • /
    • v.8A no.4
    • /
    • pp.499-502
    • /
    • 2001
  • Fractal image coding suffers from a long domain-pool search time, although it provides many desirable properties, including a high compression ratio. We find that the normalized variance of a block is independent of contrast and brightness. Using this observation, we introduce a self-similar block searching method employing d-dimensional nearest neighbor searching. This method takes $O(\log N)$ time to find the self-similar domain block for each range block, where N is the number of domain blocks. The PSNR (peak signal-to-noise ratio) of this method is similar to that of the full search method, which requires $O(N)$ time for each range block. Moreover, the image quality of this method is independent of the number of edges in the image.
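
The invariance the abstract relies on can be checked directly: a zero-mean, unit-norm block feature is unchanged by brightness shifts and positive contrast scaling, and nearest-neighbor search over such features (here via scipy's KD-tree) replaces the full domain-pool scan. This normalization is one standard choice with the stated invariance and may differ in detail from the paper's normalized variance; the block size and feature layout are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def normalize_block(block):
    """Map a block to a zero-mean, unit-norm feature vector: such a feature is
    unchanged by brightness shifts and positive contrast scaling."""
    v = block.astype(float).ravel()
    v = v - v.mean()
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

rng = np.random.default_rng(0)
domain_blocks = rng.integers(0, 256, size=(1000, 4, 4))       # toy 4x4 domain pool
features = np.array([normalize_block(b) for b in domain_blocks])
tree = cKDTree(features)                                       # fast nearest-neighbor lookup

range_block = 0.5 * domain_blocks[42] + 30                     # contrast/brightness change
dist, idx = tree.query(normalize_block(range_block))
print(idx, dist)                                               # expected: 42, ~0.0
```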

Design of Intrusion Responsible System For Enterprise Security Management (통합보안 관리를 위한 침입대응 시스템 설계)

  • Lee, Chang-Woo;Sohn, Woo-Yong;Song, Jung-Gil
    • Convergence Security Journal
    • /
    • v.5 no.2
    • /
    • pp.51-56
    • /
    • 2005
  • As the number of users increases, the network environment of the Internet gradually becomes more complex, and the requirements of the offered services and of users become more diverse, keeping service operation and management stable and effective becomes increasingly constrained. To solve this problem, we propose an intrusion response system based on log analysis. Because log files are stored in XML form, searching the log files is easy and fast, and the log files of each system can be analyzed and managed in a unified way thanks to the structuring of the data. In addition, the created log files can be sorted into various forms, such as logs sorted by IP address, by port number, by intrusion type, and by detection time, which makes comparative analysis with the log files created by other intrusion detection systems possible.
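
Reading the abstract as describing XML-structured logs that can be searched and re-sorted by IP address, port number, intrusion type, or detection time, a minimal sketch of that idea is given below. The element and attribute names are invented, since the paper's actual schema is not specified.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML log layout -- the paper's actual schema is not given.
SAMPLE = """<log>
  <event time="2005-03-01T10:15:00" src_ip="10.0.0.7" port="22"  type="brute-force"/>
  <event time="2005-03-01T10:16:30" src_ip="10.0.0.9" port="80"  type="sql-injection"/>
  <event time="2005-03-01T10:14:10" src_ip="10.0.0.7" port="443" type="scan"/>
</log>"""

def load_events(xml_text):
    """Parse the XML log into a list of attribute dictionaries."""
    root = ET.fromstring(xml_text)
    return [e.attrib for e in root.findall("event")]

events = load_events(SAMPLE)
by_ip   = sorted(events, key=lambda e: e["src_ip"])     # group events per source IP
by_port = sorted(events, key=lambda e: int(e["port"]))  # sort by destination port
by_type = sorted(events, key=lambda e: e["type"])       # sort by intrusion type
by_time = sorted(events, key=lambda e: e["time"])       # sort by detection time
print(by_time[0]["type"])   # earliest event in this sample: 'scan'
```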

QSAR Modeling of Toxicant Concentrations (EC50) on the Use of Bioluminescence Intensity of CMC Immobilized Photobacterium phosphoreum (CMC 고정화 Photobacterium phosphoreum 의 생체발광량을 이용한 독성농도(EC50)의 QSAR 모델)

  • 이용제;허문석;이우창;전억한
    • KSBB Journal
    • /
    • v.15 no.3
    • /
    • pp.299-306
    • /
    • 2000
  • Concern for the effects of toxic chemicals on the environment leads to the search for better bioassay test organisms and test procedures. Photobacterium phosphoreum was used successfully as a test organism, and the luminometer detection technique was an effective and simple method for determining the concentration of toxic chemicals. A total of 14 chlorine-substituted phenols, benzenes, and ethanes were used for the EC50 experiments. The test results showed that the toxicity to P. phosphoreum increased in the order phenol > benzene > ethane, and the toxicity also increased with the number of chlorine substitutions. A quantitative structure-activity relationship (QSAR) model can be used to predict EC50 and thus save time and effort. Correlation was well established with QSAR parameters such as log P, log S, and the solvatochromic parameters ($V_i/100$, $\pi^*$, $\beta_m$ and $\alpha_m$). The QSAR modeling used both multi-regression and mono-regression analysis. These analyses resulted in the following QSAR equations:
    • $\log EC_{50} = 2.48 + 0.914 \log S$ ($n=9$, $R^2=85.5\%$, $RE=0.378$)
    • $\log EC_{50} = 0.35 - 4.48\,V_i/100 + 2.84\,\pi^* + 9.46\,\beta_m - 4.48\,\alpha_m$ ($n=14$, $R^2=98.2\%$, $RE=0.012$)
    • $\log EC_{50} = 2.64 - 1.66 \log P$ ($n=5$, $R^2=98.8\%$, $RE=0.16$)
    • $\log EC_{50} = 3.44 - 1.09 \log P$ ($n=9$, $R^2=80.8\%$, $RE=0.207$)
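
As a reading aid, the mono-regression equations quoted in the abstract can be applied directly; the helper below just evaluates them for a given log S or log P value (the example compound is hypothetical) and is not a re-fit of the data.

```python
def log_ec50_from_logS(logS):
    """log EC50 = 2.48 + 0.914 log S  (n = 9, R^2 = 85.5%), as quoted in the abstract."""
    return 2.48 + 0.914 * logS

def log_ec50_from_logP(logP, n5_model=True):
    """Mono-regression on log P, as quoted in the abstract:
    n = 5 model:  log EC50 = 2.64 - 1.66 log P   (R^2 = 98.8%)
    n = 9 model:  log EC50 = 3.44 - 1.09 log P   (R^2 = 80.8%)"""
    return 2.64 - 1.66 * logP if n5_model else 3.44 - 1.09 * logP

# Example: a hypothetical compound with log P = 2.4 under the n = 5 model.
print(log_ec50_from_logP(2.4))         # -1.344
print(10 ** log_ec50_from_logP(2.4))   # EC50 on the original concentration scale
```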
