• Title/Summary/Keyword: Number Matching


A Study on Releasing Cryptographic Key by Using Face and Iris Information on mobile phones (휴대폰 환경에서 얼굴 및 홍채 정보를 이용한 암호화키 생성에 관한 연구)

  • Han, Song-Yi;Park, Kang-Ryoung;Park, So-Young
    • Journal of the Institute of Electronics Engineers of Korea CI / v.44 no.6 / pp.1-9 / 2007
  • Recently, as various media converge onto the phone, the demand for security in services provided on mobile phones is increasing. Conventional cryptographic keys based on passwords and security cards are used on mobile phones, but they are vulnerable and easily stolen. To overcome this problem, research on generating keys from biometric data has been carried out. However, biometric information is susceptible to environmental variation, whereas a conventional cryptographic system must reproduce an invariant key every time. We therefore propose a new method of producing a cryptographic key based on "biometric matching-based key release" instead of "biometric-based key generation", using both face and iris information to overcome the instability of uni-modal biometrics. Also, by using the mega-pixel camera embedded in the mobile phone, face and iris recognition can be performed at the same time, which is convenient for users. Experimental results showed an EER (Equal Error Rate) of 0.5% when producing the cryptographic key, and an FAR of about 0.002% at an FRR of 25%. In addition, our system can control the FAR and FRR through a threshold.
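
The mechanism the abstract calls "biometric matching-based key release" amounts to keeping a pre-stored key and handing it out only when the live face and iris samples match the enrolled templates closely enough, with a threshold trading FAR against FRR. The sketch below illustrates only that idea; the score-fusion rule, threshold value, and function names are illustrative assumptions, not the authors' implementation.

```python
from typing import Optional

def release_key(face_score: float, iris_score: float,
                stored_key: bytes, threshold: float = 0.5) -> Optional[bytes]:
    """Release the pre-stored cryptographic key only if the fused matching
    score clears the threshold; otherwise release nothing."""
    fused = 0.5 * face_score + 0.5 * iris_score   # score-level fusion (assumed)
    return stored_key if fused >= threshold else None

# Raising the threshold lowers the FAR at the cost of a higher FRR, which is
# the trade-off the abstract quantifies (e.g., FAR ~0.002% at FRR = 25%).
key = release_key(face_score=0.81, iris_score=0.77, stored_key=b"\x13" * 16)
print("key released" if key else "rejected")
```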

Signatures Verification by Using Nonlinear Quantization Histogram Based on Polar Coordinate of Multidimensional Adjacent Pixel Intensity Difference (다차원 인접화소 간 명암차의 극좌표 기반 비선형 양자화 히스토그램에 의한 서명인식)

  • Cho, Yong-Hyun
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.5 / pp.375-382 / 2016
  • In this paper, we present a signature verification method that uses a nonlinear quantization histogram in polar coordinates based on multi-dimensional adjacent pixel intensity differences. The multi-dimensional adjacent pixel intensity difference is calculated as the intensity difference between pairs of pixels in the horizontal, vertical, diagonal, and anti-diagonal directions centered on the reference pixel. The rectangular coordinates are converted to polar coordinates by pairing the horizontal and vertical differences, and the diagonal and anti-diagonal differences, respectively. The nonlinear quantization histogram is then obtained by nonuniformly quantizing the polar coordinate values with the Lloyd algorithm, an iterative method. The polar-coordinate histogram of the 4-directional intensity differences is applied not only to better capture the correlation between pixels but also to reduce the computational load by decreasing the number of histograms. The nonlinear quantization is applied both to reflect the intensity variations between pixels more faithfully and to obtain a low-level histogram. The proposed method was applied to verify 90 images (3 persons × 30 signatures per person) of 256×256 pixels using the city-block, Euclidean, ordinal-value, and normalized cross-correlation matching measures. The experimental results show that the proposed method is superior to the linear quantization histogram and that the Euclidean distance is the optimal matching measure.
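
As a rough illustration of the feature pipeline described above (4-directional intensity differences, pairing into polar coordinates, nonuniform Lloyd quantization, histogram matching), here is a minimal sketch. It quantizes only the radius component and fixes the number of levels arbitrarily, so it is a simplified reading of the method, not the paper's implementation.

```python
import numpy as np

def polar_diff_features(img: np.ndarray):
    """4-directional adjacent-pixel differences around each interior pixel,
    paired as (horizontal, vertical) and (diagonal, anti-diagonal) and
    converted to polar coordinates (radius, angle)."""
    c  = img[1:-1, 1:-1].astype(float)
    h  = img[1:-1, 2:].astype(float) - c    # horizontal neighbour
    v  = img[2:, 1:-1].astype(float) - c    # vertical neighbour
    d  = img[2:, 2:].astype(float) - c      # diagonal neighbour
    ad = img[2:, :-2].astype(float) - c     # anti-diagonal neighbour
    r1, t1 = np.hypot(h, v),  np.arctan2(v, h)    # polar form of (h, v)
    r2, t2 = np.hypot(d, ad), np.arctan2(ad, d)   # polar form of (d, ad)
    return (np.concatenate([r1.ravel(), r2.ravel()]),
            np.concatenate([t1.ravel(), t2.ravel()]))

def lloyd_levels(x: np.ndarray, k: int = 8, iters: int = 20) -> np.ndarray:
    """1-D Lloyd algorithm: iteratively refine k nonuniform quantization levels."""
    levels = np.quantile(x, np.linspace(0.05, 0.95, k))
    for _ in range(iters):
        idx = np.argmin(np.abs(x[:, None] - levels[None, :]), axis=1)
        for j in range(k):
            if np.any(idx == j):
                levels[j] = x[idx == j].mean()
    return levels

def signature_histogram(img: np.ndarray, k: int = 8) -> np.ndarray:
    """Normalized histogram of nonuniformly quantized radii."""
    r, _ = polar_diff_features(img)
    levels = lloyd_levels(r, k)
    idx = np.argmin(np.abs(r[:, None] - levels[None, :]), axis=1)
    hist = np.bincount(idx, minlength=k).astype(float)
    return hist / hist.sum()

# Verification: compare the histograms of two signature images, e.g. by
# Euclidean distance: np.linalg.norm(signature_histogram(a) - signature_histogram(b))
```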

Generalization of Window Construction for Subsequence Matching in Time-Series Databases (시계열 데이터베이스에서의 서브시퀀스 매칭을 위한 윈도우 구성의 일반화)

  • Moon, Yang-Sae;Han, Wook-Shin;Whang, Kyu-Young
    • Journal of KIISE:Databases / v.28 no.3 / pp.357-372 / 2001
  • In this paper, we present the concept of generalization in constructing windows for subsequence matching and propose a new subsequence matching method, GeneralMatch, based on this generalization. The earlier work of Faloutsos et al. (FRM in short) causes a lot of false alarms due to the lack of a point-filtering effect. DualMatch, which was proposed by the authors, improves performance significantly over FRM by exploiting the point-filtering effect, but it has the problem of a smaller maximum window size (half that of FRM) for a given minimum query length. GeneralMatch, an improvement of DualMatch, offers the advantages of both methods: it can use large windows like FRM and, at the same time, can exploit the point-filtering effect like DualMatch. GeneralMatch divides data sequences into J-sliding windows (generalized sliding windows) and the query sequence into J-disjoint windows (generalized disjoint windows). We formally prove that GeneralMatch is correct, i.e., it incurs no false dismissal. We also prove that, given the minimum query length, there is a maximum bound on the window size that guarantees correctness of GeneralMatch. We then propose a method of determining the value of J that minimizes the number of page accesses. Experimental results for real stock data show that, for low selectivities ($10^{-6}~10^{-4}$), GeneralMatch improves performance by 114% over DualMatch and by 998% over FRM on average; for high selectivities ($10^{-3}~10^{-1}$), by 46% over DualMatch and by 65% over FRM on average.
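
For intuition about the window constructions named above, the sketch below shows one plausible reading of J-sliding windows (length-w windows whose start positions advance by J, so J = 1 gives ordinary sliding windows and J = w gives disjoint windows) next to plain disjoint windows. The exact generalized disjoint-window construction for the query, and the choice of J that guarantees no false dismissal, are defined in the paper and not reproduced here.

```python
def j_sliding_windows(seq, w, J):
    """Length-w windows whose start positions advance by J.
    J = 1 -> ordinary sliding windows (FRM-style data windows);
    J = w -> disjoint windows (DualMatch-style data windows)."""
    return [seq[s:s + w] for s in range(0, len(seq) - w + 1, J)]

def disjoint_windows(seq, w):
    """Plain back-to-back length-w windows (the J = w special case)."""
    return j_sliding_windows(seq, w, w)

data = list(range(12))
print(j_sliding_windows(data, w=4, J=2))  # windows starting at 0, 2, 4, 6, 8
print(disjoint_windows(data, w=4))        # windows starting at 0, 4, 8
```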


Parallel Computation For The Edit Distance Based On The Four-Russians' Algorithm (4-러시안 알고리즘 기반의 편집거리 병렬계산)

  • Kim, Young Ho;Jeong, Ju-Hui;Kang, Dae Woong;Sim, Jeong Seop
    • KIPS Transactions on Computer and Communication Systems / v.2 no.2 / pp.67-74 / 2013
  • Approximate string matching problems have been studied in diverse fields. Recently, fast approximate string matching algorithms have been used to reduce the time and cost of next-generation sequencing. To measure the amount of error between two strings, we use a distance function such as the edit distance. Given two strings X (|X| = m) and Y (|Y| = n) over an alphabet ${\Sigma}$, the edit distance between X and Y is the minimum number of edit operations needed to convert X into Y. The edit distance can be computed using the well-known dynamic programming technique in O(mn) time and space. It can also be computed using the Four-Russians' algorithm, whose preprocessing step runs in $O((3{\mid}{\Sigma}{\mid})^{2t}t^2)$ time and $O((3{\mid}{\Sigma}{\mid})^{2t}t)$ space and whose computation step runs in O(mn/t) time and O(mn) space, where t is the block size. In this paper, we present a parallelized version of the computation step of the Four-Russians' algorithm. Our algorithm computes the edit distance between X and Y in O(m+n) time using m/t threads. We implemented both the sequential version and our parallelized version of the Four-Russians' algorithm using CUDA to compare the execution times. When t = 1 and t = 2, our algorithm runs about 10 times and 3 times faster than the sequential algorithm, respectively.
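
For reference, the O(mn) dynamic-programming recurrence that both the Four-Russians algorithm and the parallel version above build on can be written in a few lines. The sketch below is the textbook recurrence only; it does not include the block (size-t) lookup tables of the Four-Russians method or the CUDA parallelization described in the paper.

```python
def edit_distance(x: str, y: str) -> int:
    """Classic O(mn) dynamic-programming edit distance (insert/delete/substitute)."""
    m, n = len(x), len(y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                              # delete all of x[:i]
    for j in range(n + 1):
        dp[0][j] = j                              # insert all of y[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if x[i - 1] == y[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,      # deletion
                           dp[i][j - 1] + 1,      # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[m][n]

assert edit_distance("kitten", "sitting") == 3
```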

Rule Discovery and Matching for Forecasting Stock Prices (주가 예측을 위한 규칙 탐사 및 매칭)

  • Ha, You-Min;Kim, Sang-Wook;Won, Jung-Im;Park, Sang-Hyun;Yoon, Jee-Hee
    • Journal of KIISE:Databases / v.34 no.3 / pp.179-192 / 2007
  • This paper addresses an approach that recommends investment types to stock investors by discovering useful rules from past price-change patterns stored in databases. First, we define a new rule model for recommending stock investment types. For a frequent pattern of stock prices, if its subsequent stock prices match a condition specified by an investor, the model recommends a corresponding investment type for this stock. The frequent pattern is regarded as the rule head, and the subsequent part as the rule body. We observed that the conditions on rule bodies differ considerably depending on the dispositions of investors, while rule heads are in most cases independent of investor characteristics. Based on this observation, we propose a new method that discovers and stores only the rule heads rather than the whole rules in the rule discovery process. This allows investors to define various conditions on rule bodies flexibly, and also improves the performance of the rule discovery process by reducing the number of rules. For efficient discovery and matching of rules, we propose methods for discovering frequent patterns, constructing a frequent pattern base, and indexing them. We also suggest a method that finds the rules matching a query issued by an investor from the frequent pattern base, and a method that recommends an investment type using those rules. Finally, we verify the superiority of our approach through various experiments using real-life stock data.
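
To make the head/body separation concrete, here is a toy sketch in which rule heads are frequent patterns of price-change symbols discovered offline and the rule body is a predicate the investor supplies at query time. The symbol encoding, the pattern base contents, and the recommendation labels are illustrative assumptions, not the paper's actual model or index structures.

```python
from typing import Callable, Dict, Sequence

# Frequent pattern base discovered offline (assumed toy data):
# rule head -> subsequent price-change symbols observed after that pattern.
PATTERN_BASE: Dict[str, str] = {"UUD": "UU", "DDU": "DU", "UDU": "DD"}

def to_symbols(prices: Sequence[float]) -> str:
    """Encode consecutive price changes as U / D / F (up / down / flat)."""
    return "".join("U" if b > a else "D" if b < a else "F"
                   for a, b in zip(prices, prices[1:]))

def recommend(recent_prices: Sequence[float],
              body_condition: Callable[[str], bool]) -> str:
    """Recommend 'buy' when the recent pattern ends with a stored rule head
    whose recorded subsequent behaviour satisfies the investor's condition."""
    pattern = to_symbols(recent_prices)
    for head, subsequent in PATTERN_BASE.items():
        if pattern.endswith(head) and body_condition(subsequent):
            return "buy"
    return "hold"

# Investor condition (rule body): the prices after the pattern mostly rise.
rises = lambda s: s.count("U") > len(s) / 2
print(recommend([10, 11, 12, 11.5], rises))   # pattern "UUD" -> body "UU" -> buy
```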

A Study on Building Object Change Detection using Spatial Information - Building DB based on Road Name Address - (기구축 공간정보를 활용한 건물객체 변화 탐지 연구 - 도로명주소건물DB 중심으로 -)

  • Lee, Insu;Yeon, Sunghyun;Jeong, Hohyun
    • Journal of Cadastre & Land InformatiX / v.52 no.1 / pp.105-118 / 2022
  • The demand for 3D spatial object models in the metaverse, smart cities, digital twins, autonomous vehicles, and urban air mobility is expected to increase. 3D models of spatial objects can be constructed with various equipment, such as satellite, aerial, and ground platforms, and with technologies such as modeling, artificial intelligence, and image matching. However, it is not easy to quickly detect and convert the spatial objects that need updating. In this study, a converged building DB and a detected building DB are constructed from existing spatial information (features) and attributes, using matching elements such as address code, number of floors, building name, and area. A system prototype was developed both to support this process and to verify the suitability of selecting the objects that need to be updated. When constructing the converged building DB, merging spatial information and attributes was impossible or failed for some buildings, and the matching rate was low, at about 80%. This is believed to be due to missing attributes for many building objects, especially in the pilot test area. The system prototype will support the establishment of an efficient drone imaging plan for the rapid update of 3D spatial objects, preventing duplicate and unnecessary construction of spatial objects and thereby contributing greatly to object improvement and cost reduction.
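
A minimal sketch of the attribute-matching step described above, joining a detected building DB against a reference DB on address code, number of floors, building name, and area and reporting a matching rate, is shown below. The record fields, the area tolerance, and the one-record-per-address-code assumption are illustrative choices, not the study's actual schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Building:
    address_code: str
    floors: int
    name: str
    area: float   # floor area in square metres

def attributes_match(a: Building, b: Building, area_tol: float = 0.05) -> bool:
    """Match on address code, floors, name, and area (within a relative tolerance)."""
    return (a.address_code == b.address_code
            and a.floors == b.floors
            and a.name == b.name
            and abs(a.area - b.area) <= area_tol * max(a.area, b.area))

def matching_rate(detected: List[Building], reference: List[Building]) -> float:
    """Fraction of detected buildings that find a counterpart in the reference DB."""
    by_code = {r.address_code: r for r in reference}
    matched = sum(1 for d in detected
                  if d.address_code in by_code
                  and attributes_match(d, by_code[d.address_code]))
    return matched / len(detected) if detected else 0.0
```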

A Generation and Matching Method of Normal-Transient Dictionary for Realtime Topic Detection (실시간 이슈 탐지를 위한 일반-급상승 단어사전 생성 및 매칭 기법)

  • Choi, Bongjun;Lee, Hanjoo;Yong, Wooseok;Lee, Wonsuk
    • The Journal of Korean Institute of Next Generation Computing / v.13 no.5 / pp.7-18 / 2017
  • Recently, the number of SNS users has rapidly increased with the development of the smart device industry, and the amount of generated data is growing exponentially. On Twitter, the text data generated by users is a key research subject because it reflects events, accidents, product reputations, and brand images. Twitter has become a channel for users to receive and exchange information, and an important characteristic of Twitter is its real-time nature. Among the various events, earthquakes, floods, and suicides must be analyzed rapidly so that responses can be applied immediately. Analyzing an event requires collecting the tweets related to it, but it is difficult to find all such tweets using ordinary keywords alone. To address this problem, this paper proposes a generation and matching method for normal-transient dictionaries for real-time topic detection. Normal dictionaries consist of general keywords related to events (e.g., for a suicide event: death, die, hang oneself, etc.), whereas transient dictionaries consist of transient keywords related to events (e.g., for a suicide event: the names and information of celebrities, or information on current social issues). Experimental results show that the matching method using the two dictionaries finds more tweets related to an event than a simple keyword search.
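
The dictionary-matching idea above can be sketched in a few lines: a stable normal dictionary and a frequently refreshed transient dictionary are both matched against incoming tweets. The keyword sets, placeholder entries, and function names below are assumptions for illustration, not the paper's dictionaries.

```python
# Illustrative dictionaries for a single event type ("suicide"); the keyword
# sets and the update call are assumptions, not the paper's actual lists.
NORMAL_DICT    = {"suicide", "death", "die", "hang oneself"}        # stable keywords
TRANSIENT_DICT = {"<celebrity name>", "<trending issue keyword>"}   # refreshed in real time

def matches_event(tweet: str) -> bool:
    """Collect a tweet if it contains any normal or transient keyword."""
    text = tweet.lower()
    return any(kw in text for kw in NORMAL_DICT | TRANSIENT_DICT)

def update_transient(new_keywords: set) -> None:
    """Transient keywords change as issues rise and fall; the normal dictionary does not."""
    TRANSIENT_DICT.clear()
    TRANSIENT_DICT.update(k.lower() for k in new_keywords)
```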

Protecting the iTrust Information Retrieval Network against Malicious Attacks

  • Chuang, Yung-Ting;Melliar-Smith, P. Michael;Moser, Louise E.;Lombera, Isai Michel
    • Journal of Computing Science and Engineering / v.6 no.3 / pp.179-192 / 2012
  • This paper presents novel statistical algorithms for protecting the iTrust information retrieval network against malicious attacks. In iTrust, metadata describing documents, and requests containing keywords, are randomly distributed to multiple participating nodes. The nodes that receive the requests try to match the keywords in the requests with the metadata they hold. If a node finds a match, the matching node returns the URL of the associated information to the requesting node. The requesting node then uses the URL to retrieve the information from the source node. The novel detection algorithm determines empirically the probabilities of the specific number of matches based on the number of responses that the requesting node receives. It also calculates the analytical probabilities of the specific numbers of matches. It compares the observed and the analytical probabilities to estimate the proportion of subverted or non-operational nodes in the iTrust network using a window-based method and the chi-squared statistic. If the detection algorithm determines that some of the nodes in the iTrust network are subverted or non-operational, then the novel defensive adaptation algorithm increases the number of nodes to which the requests are distributed to maintain the same probability of a match when some of the nodes are subverted or non-operational as compared to when all of the nodes are operational. Experimental results substantiate the effectiveness of the detection and defensive adaptation algorithms for protecting the iTrust information retrieval network against malicious attacks.
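
The detection step described above compares the observed distribution of match counts, collected over a window of recent requests, with the analytically expected distribution using the chi-squared statistic. The sketch below shows only that comparison; the window size, the bin layout, and the expected probabilities are made-up illustrative values, not the paper's analytical model.

```python
def chi_squared(observed_counts, expected_probs, total):
    """Pearson chi-squared statistic between observed and expected counts."""
    stat = 0.0
    for obs, p in zip(observed_counts, expected_probs):
        exp = p * total
        if exp > 0:
            stat += (obs - exp) ** 2 / exp
    return stat

# Example: numbers of matches 0..3 observed over a window of 100 requests,
# versus the probabilities predicted analytically for a healthy network.
observed = [40, 35, 20, 5]             # counts of requests with 0, 1, 2, 3 matches
expected = [0.20, 0.40, 0.30, 0.10]    # analytical probabilities (assumed)
stat = chi_squared(observed, expected, total=100)
print(f"chi-squared = {stat:.2f}")     # a large value suggests subverted or failed nodes
```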

Object Location Sensing using Signal Pattern Matching Methods (신호 패턴 매칭 방법을 이용한 이동체 위치 인식)

  • Byun, Yung-Cheol;Park, Sang-Yeol
    • Journal of Korea Multimedia Society / v.10 no.4 / pp.548-558 / 2007
  • This paper presents a method for sensing the location of mobile objects using RF devices. By analyzing the signal strengths between a number of fixed RF devices and a moving RF device, we can recognize the location of the moving object in real time. First, signal strength values between the RF devices are gathered, then normalized and assembled into a model feature vector for a specific location. A number of such model patterns are acquired and registered for all of the locations we want to recognize. For location sensing, the signal strength readings of an arbitrary moving RF device are acquired and compared with the previously registered model feature vectors. A distance value is calculated, and the moving RF device is classified as one of the known model patterns. Experimental results show that our method performs location sensing successfully, with a 100% recognition rate when the number of fixed RF devices is 10 or more. In terms of cost and applicability, the experimental results are very encouraging.
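
The matching step described above is essentially nearest-neighbour classification of a normalized signal-strength vector against registered model feature vectors. A minimal sketch follows; the unit-length normalization, the Euclidean distance, and the example values are assumptions for illustration rather than the exact formulation in the paper.

```python
import math

def normalize(values):
    """Normalize a signal-strength vector to unit length."""
    norm = math.sqrt(sum(v * v for v in values)) or 1.0
    return [v / norm for v in values]

def classify_location(measurement, model_vectors):
    """Return the registered location whose model feature vector is closest
    (by Euclidean distance) to the normalized measurement."""
    x = normalize(measurement)
    best_loc, best_dist = None, float("inf")
    for location, model in model_vectors.items():
        dist = math.dist(x, normalize(model))
        if dist < best_dist:
            best_loc, best_dist = location, dist
    return best_loc

# Model patterns: signal strengths from each fixed RF device at known locations.
models = {"room A": [55, 60, 20, 15], "room B": [20, 25, 58, 62]}
print(classify_location([52, 63, 22, 14], models))   # -> "room A"
```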


Weighted Binary Prefix Tree for IP Address Lookup (IP 주소 검색을 위한 가중 이진 프리픽스 트리)

  • Yim Changhoon;Lim Hyesook;Lee Bomi
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.11B / pp.911-919 / 2004
  • IP address lookup is one of the essential functions of Internet routers, and it determines overall router performance. The most important evaluation factor for software-based IP address lookup is the number of worst-case memory accesses. The binary prefix tree (BPT) scheme requires a small number of worst-case memory accesses compared with previous software-based schemes. However, the tree structure of the BPT is normally unbalanced. In this paper, we propose the weighted binary prefix tree (WBPT) scheme, which generates a nearly balanced tree by incorporating the concept of weight into the BPT generation process. The proposed WBPT yields a very small number of worst-case memory accesses compared with previous software-based schemes. Moreover, the WBPT requires a comparably small amount of memory, which fits within the L2 cache for about 30,000 prefixes, and prefix addition and deletion are rather simple. Hence the proposed WBPT can be used for software-based IP address lookup in practical routers.
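
For background, longest-prefix matching over address bits with a plain (unweighted) binary trie looks like the sketch below. This is not the BPT or WBPT structure itself; the weighting that balances the tree and bounds worst-case memory accesses is the paper's contribution and is not reproduced here. The sketch only shows the lookup operation such schemes accelerate.

```python
class Node:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]   # child for bit 0 / bit 1
        self.next_hop = None           # set if a prefix ends at this node

class BinaryPrefixTrie:
    def __init__(self):
        self.root = Node()

    def insert(self, prefix_bits: str, next_hop: str) -> None:
        """Insert a prefix given as a bit string, e.g. '110010'."""
        node = self.root
        for b in prefix_bits:
            i = int(b)
            if node.children[i] is None:
                node.children[i] = Node()
            node = node.children[i]
        node.next_hop = next_hop

    def lookup(self, addr_bits: str):
        """Longest-prefix match: remember the last next-hop seen on the path."""
        node, best = self.root, None
        for b in addr_bits:
            node = node.children[int(b)]
            if node is None:
                break
            if node.next_hop is not None:
                best = node.next_hop
        return best

trie = BinaryPrefixTrie()
trie.insert("1100", "A")
trie.insert("110010", "B")
print(trie.lookup("11001011" + "0" * 24))   # -> "B" (longest matching prefix)
```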