• Title/Summary/Keyword: 부분문자열 (substring)

Search results: 67, processing time: 0.022 seconds

Video character recognition improvement by support vector machines and regularized discriminant analysis (서포트벡터머신과 정칙화판별함수를 이용한 비디오 문자인식의 분류 성능 개선)

  • Lim, Su-Yeol;Baek, Jang-Sun;Kim, Min-Soo
    • Journal of the Korean Data and Information Science Society / v.21 no.4 / pp.689-697 / 2010
  • In this study, we propose a new procedure for improving the recognition of characters in text areas extracted from video images. Recognizing strings extracted from video, which mix Hangul, English letters, numbers, and special characters, is more difficult than general character recognition because of varied fonts and sizes, graphical letter forms, tilted images, disconnected strokes, cluttered backgrounds, touching characters, and low resolution. We improved the recognition rate by restricting the target set to commonly used letters and leaving out rarely used ones instead of recognizing all of them, and then applying SVM and RDA character recognition methods. Our numerical results indicate that combining SVM and RDA performs better than the other methods.
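
The RDA component of such a classifier can be sketched in NumPy as a regularized quadratic discriminant. This is a minimal sketch, not the paper's method: the blending parameters `lam` and `gamma`, the function names, and the toy usage are illustrative, and the feature extraction and SVM combination steps are omitted.

```python
import numpy as np

def rda_fit(X, y, lam=0.5, gamma=0.1):
    """Fit a regularized discriminant model (illustrative simplification).

    lam blends each class covariance with the pooled covariance;
    gamma shrinks the result toward a scaled identity matrix.
    """
    classes = np.unique(y)
    means, covs, priors = {}, {}, {}
    pooled = np.cov(X.T, bias=True)
    for c in classes:
        Xc = X[y == c]
        means[c] = Xc.mean(axis=0)
        Sc = np.cov(Xc.T, bias=True)
        S = (1 - lam) * Sc + lam * pooled        # blend with pooled covariance
        S = (1 - gamma) * S + gamma * np.trace(S) / X.shape[1] * np.eye(X.shape[1])
        covs[c] = S
        priors[c] = len(Xc) / len(X)
    return classes, means, covs, priors

def rda_predict(model, X):
    classes, means, covs, priors = model
    scores = []
    for c in classes:
        inv = np.linalg.inv(covs[c])
        d = X - means[c]
        # quadratic discriminant score:
        # -0.5 (x-mu)^T S^-1 (x-mu) - 0.5 log|S| + log prior
        s = -0.5 * np.einsum('ij,jk,ik->i', d, inv, d)
        s += -0.5 * np.linalg.slogdet(covs[c])[1] + np.log(priors[c])
        scores.append(s)
    return classes[np.argmax(np.vstack(scores), axis=0)]
```

On well-separated toy clusters this classifier recovers the class labels; in the paper's setting the inputs would be features of segmented character images.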

URL Normalization for Web Applications (웹 어플리케이션을 위한 URL 정규화)

  • Hong, Seok-Hoo;Kim, Sung-Jin;Lee, Sang-Ho
    • Journal of KIISE:Information Networking / v.32 no.6 / pp.716-722 / 2005
  • On the web, syntactically different URLs can represent the same resource. URL normalization is a process that transforms syntactically different URLs representing the same resource into a canonical form. There are ongoing efforts to define a standard URL normalization; the standard is designed to minimize false negatives while strictly avoiding false positives. This paper considers four URL normalization issues beyond those specified in the standard. The idea behind our work is to reduce false negatives further in the normalization while allowing a limited level of false positives. Two metrics are defined to analyze the effect of each step in the URL normalization. We ran an experiment over 170 million URLs collected from real web pages, and report interesting statistical results in this paper.
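
The standard normalization steps the paper builds on (case normalization, default-port removal, dot-segment resolution, percent-encoding cleanup) can be sketched with the Python standard library. This is a simplified illustration of RFC 3986-style normalization, not the paper's four extended steps; userinfo and percent-encoding edge cases (such as encoded slashes) are glossed over.

```python
from urllib.parse import urlsplit, urlunsplit, unquote, quote
import posixpath

def normalize_url(url):
    """Apply basic RFC 3986-style normalization steps (a sketch)."""
    parts = urlsplit(url)
    scheme = parts.scheme.lower()                     # case-normalize the scheme
    host = parts.hostname.lower() if parts.hostname else ''
    port = parts.port
    if (scheme, port) in (('http', 80), ('https', 443)):
        port = None                                   # drop default ports
    netloc = host + (f':{port}' if port else '')
    # resolve dot-segments and re-encode the path consistently
    # (naive: unquote-then-quote can change meaning for encoded slashes)
    path = posixpath.normpath(unquote(parts.path)) if parts.path else '/'
    if parts.path.endswith('/') and not path.endswith('/'):
        path += '/'
    path = quote(path)
    return urlunsplit((scheme, netloc, path, parts.query, parts.fragment))
```

For example, `HTTP://Example.COM:80/a/./b/../c` and `http://example.com/a/c` normalize to the same string, so a crawler would treat them as one resource.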

Tangible Interaction : Application for A New Interface Method for Mobile Device -Focused on development of virtual keyboard using camera input - (체감형 인터랙션 : 모바일 기기의 새로운 인터페이스 방법으로서의 활용 -카메라 인식에 의한 가상 키보드입력 방식의 개발을 중심으로 -)

  • 변재형;김명석
    • Archives of design research / v.17 no.3 / pp.441-448 / 2004
  • Mobile devices such as mobile phones and PDAs are considered main interface tools in the ubiquitous computing environment. To search for information on a mobile device, the user should be able to input text as well as control a cursor for navigation, so we need an efficient interface method for text input within the limited dimensions of mobile devices. This study suggests a new approach to mobile interaction using a camera-based virtual keyboard for text input. We developed a camera-based virtual keyboard prototype using a PC camera and a small LCD display. The user can move the prototype in the air to control the cursor over a keyboard layout on the screen and input text by pressing a button. The new interaction method is evaluated as competitive with the mobile phone keypad in text input efficiency. It can be operated with one hand, and it makes smaller devices possible by eliminating the keyboard part. The method can be applied to text input for mobile devices that require especially small dimensions, and it can be modified into a selection and navigation method for wireless internet contents on small-screen devices.


The Method of Deriving Japanese Keyword Using Dependence (의존관계에 기초한 일본어 키워드 추출방법)

  • Lee, Tae-Hun;Jung, Kyu-Cheol;Park, Ki-Hong
    • The KIPS Transactions:PartB / v.10B no.1 / pp.41-46 / 2003
  • This thesis proposes a method for extracting indexing keywords from Japanese text by composing segmented words into compound-noun keywords, using word and sentence information together with rules found in the sentences. Unlike previous approaches, it constructs generative rules for compound nouns based on dependency relations, derived from an analysis of the characteristics of keywords in text. To capture additional keywords and the content of sentences, we also suggest how to decide keyword importance, considering restrictions on and repetition of words in the generative rules. To verify the validity of the keyword extraction, we used the titles and abstracts of 65 Japanese theses on natural language and speech processing, and obtained 63% accuracy for the top-ranked output.
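
A minimal illustration of composing segmented words into compound-noun keyword candidates and ranking them by repetition. The paper's dependency-based generative rules are more elaborate than this adjacency heuristic; the tag set, the English tokens, and the scoring below are all illustrative.

```python
from collections import Counter

NOUN_TAGS = {"N"}  # hypothetical tag set: treat only nouns as keyword components

def compound_candidates(tagged_sentences):
    """Merge maximal runs of adjacent nouns into compound-noun candidates."""
    counts = Counter()
    for sent in tagged_sentences:
        run = []
        for word, tag in sent + [("", "EOS")]:  # sentinel flushes the last run
            if tag in NOUN_TAGS:
                run.append(word)
            else:
                if run:
                    counts[" ".join(run)] += 1
                run = []
    return counts

def top_keywords(counts, n=3):
    """Rank candidates by frequency, preferring longer compounds on ties."""
    return [k for k, _ in sorted(counts.items(),
                                 key=lambda kv: (-kv[1], -len(kv[0])))][:n]
```

Given segmented, tagged sentences, repeated compounds such as "speech recognition" float to the top of the ranking.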

Development of a Content-based Web Service Search Engine (내용기반 웹 서비스 검색 엔진의 개발)

  • Son, Seung-Beom;Lee, Gyu-Cheol
    • Proceedings of the Korea Society of Information Technology Applications Conference / 2006.06a / pp.656-699 / 2006
  • Web services provide a more effective and unified way for users to develop services with diverse interface definitions and message exchange formats. In web services, interface definitions and message exchange formats are defined through WSDL; by reading a service's WSDL document, a requester can grasp its interface and message formats and quickly start using it. Registration and discovery of web services use a registry model: description information about a developed service is written by the service provider and registered in the registry, and a service requester searches the registry for the services it needs. UDDI is a distributed registry standard for web services that provides registration and discovery mechanisms. The search mechanisms UDDI supports are broadly divided into keyword search and category-based search over businesses and services. Keyword-based search is performed by checking, via an SQL LIKE operation, whether a substring matches the name of a business or service. This keyword-based search cannot look at any description content other than the registered service name, so it does not support effective discovery; UDDI also cannot search the contents of WSDL documents. As a result, current service discovery supports only search over service names. To solve these problems, a search engine is required that supports content-based search over both the description information registered in UDDI and the WSDL documents, and that can rank the search results. This paper proposes such a content-based search engine for web services. The proposed engine uses a similarity comparison method based on the vector space model to perform content-based search over UDDI registration information. Beyond the UDDI registration information, it also computes the similarity of WSDL documents in order to compare the actual service interfaces and message exchange formats, and it supports a way to reflect hierarchical document structures, such as those of UDDI registration information and WSDL documents, in the search results. Two search methods are supported: keyword search and template search. Template search compares the similarity of WSDL documents to measure how closely interface definitions match, beyond the registered service information. Through these search capabilities, the proposed search engine provides more accurate results than existing registry-based search methods.
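
The vector-space-model similarity at the core of such an engine can be sketched in a few lines: TF-IDF weights per term, then cosine similarity between sparse vectors. This is a generic sketch, not the paper's implementation; the hierarchical weighting of UDDI and WSDL structure and the template matching are not shown, and the toy documents are illustrative.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF vectors for a list of tokenized documents."""
    df = Counter()
    for doc in docs:
        df.update(set(doc))              # document frequency per term
    n = len(docs)
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

A query vector built the same way can then be compared against every registered service description, and the scores used to rank the results.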


An Analysis System for Whole Genomic Sequence Using String B-Tree (스트링 B-트리를 이용한 게놈 서열 분석 시스템)

  • Choe, Jeong-Hyeon;Jo, Hwan-Gyu
    • The KIPS Transactions:PartA / v.8A no.4 / pp.509-516 / 2001
  • As a result of many genome projects, the genomic sequences of many organisms have been revealed. Various methods such as global alignment and local alignment are used to analyze these sequences, and k-mer analysis is one method for analyzing genomic sequences. k-mer analysis explores the frequencies of all k-mers, or their symmetry, where a k-mer is a subsequence of length k. However, existing in-memory algorithms are not applicable to k-mer analysis because a whole genomic sequence is usually a very large text, so efficient data structures and algorithms are needed. The string B-tree is a good data structure that supports external memory and suits pattern matching. In this paper, we improve the string B-tree to apply it efficiently to k-mer analysis, and we show the results of k-mer analysis for C. elegans and 30 other genomic sequences. We present a visualization system that enables users to investigate the distribution and symmetry of the frequencies of all k-mers using CGR (Chaos Game Representation). We also describe a method to find the signature, the part of the sequence that is most similar to the whole genomic sequence.
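
The quantities the analysis computes, k-mer frequencies and their strand symmetry, can be illustrated with a small in-memory sketch. The paper's contribution is doing this at genome scale with an external-memory string B-tree; the symmetry measure below is a simple illustrative proxy, not the paper's definition.

```python
from collections import Counter

def kmer_counts(seq, k):
    """Count all k-mers (length-k subsequences) in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def reverse_complement(kmer):
    comp = str.maketrans("ACGT", "TGCA")
    return kmer.translate(comp)[::-1]

def symmetry_ratio(counts):
    """Compare each k-mer's frequency with its reverse complement's:
    1.0 means perfect strand symmetry, smaller values mean less symmetry."""
    pairs = [(counts[k], counts[reverse_complement(k)]) for k in counts]
    return sum(min(a, b) for a, b in pairs) / sum(max(a, b) for a, b in pairs)
```

For a whole genome the Counter would not fit comfortably in memory for larger k, which is exactly the gap the external-memory structure addresses.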


Predicting stock movements based on financial news with systematic group identification (시스템적인 군집 확인과 뉴스를 이용한 주가 예측)

  • Seong, NohYoon;Nam, Kihwan
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.1-17 / 2019
  • Because stock price forecasting is an important issue both academically and practically, research on stock price prediction has been conducted actively. Stock price forecasting research is classified into studies using structured data and studies using unstructured data. With structured data such as historical stock prices and financial statements, past studies usually used technical and fundamental analysis. In the big data era, the amount of information has rapidly increased, and artificial intelligence methodologies that find meaning by quantifying string information, an unstructured data type that accounts for a large share of information, have developed rapidly. With these developments, many attempts are being made to predict stock prices from online news by applying text mining. The methodology adopted in many papers is to forecast a stock price using news about the target company. However, according to previous research, not only the news of a target company affects its stock price; news of related companies can also affect it. Finding highly relevant companies is not easy, though, because of market-wide effects and random signals. Thus, existing studies have found related companies based primarily on predetermined international industry classification standards. However, recent research shows that the Global Industry Classification Standard has varying homogeneity within its sectors, so forecasting stock prices with whole sectors, without selecting only relevant companies, can hurt predictive performance. To overcome this limitation, we are the first to use random matrix theory with text mining for stock prediction. Whenever the dimension of data is large, the classical limit theorems are no longer suitable, because statistical efficiency is reduced; a simple correlation analysis in the financial market therefore does not reveal the true correlation. To address this, we adopt random matrix theory, mainly used in econophysics, to remove market-wide effects and random signals and to find the true correlations between companies. With the true correlations, we perform cluster analysis to find relevant companies. Based on the clustering, we use a multiple kernel learning algorithm, an ensemble of support vector machines, to incorporate the effects of the target firm and its relevant firms simultaneously; each kernel predicts stock prices from features of the financial news of the target firm or its relevant firms. The results of this study are as follows. (1) Following the existing research flow, we confirmed that using news from relevant companies is an effective way to forecast stock prices. (2) Choosing relevant companies in the wrong way can lower AI prediction performance. (3) The proposed approach with random matrix theory shows better performance than previous studies when cluster analysis is performed on the true correlations obtained by removing market-wide effects and random signals. The contributions of this study are as follows. First, it shows that random matrix theory, used mainly in econophysics, can be combined with artificial intelligence to produce a good methodology; this suggests that it is important not only to develop AI algorithms but also to adopt physical theory, extending existing research that integrated artificial intelligence with complex-system theory through transfer entropy. Second, this study stresses that finding the right companies in the stock market is an important issue; it is important not only to study artificial intelligence algorithms but also to adjust the input values theoretically. Third, we confirmed that firms grouped under the Global Industry Classification Standard (GICS) may have low relevance, and we suggest that relevance should be defined theoretically rather than simply taken from the GICS.
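
The random-matrix filtering step can be sketched as follows: eigenvalues of the return correlation matrix inside the Marchenko-Pastur noise band are treated as random, and the largest eigenvalue as the market mode, so only the intermediate eigenmodes are kept for clustering. This is an illustrative sketch under those standard assumptions, not the paper's exact procedure; the synthetic two-group data are made up for the example.

```python
import numpy as np

def filtered_correlation(returns):
    """Remove market-wide and random components from a correlation matrix
    using the Marchenko-Pastur upper edge (an illustrative sketch)."""
    T, N = returns.shape                       # T observations, N firms
    corr = np.corrcoef(returns.T)
    lam_max = (1 + np.sqrt(N / T)) ** 2        # Marchenko-Pastur upper edge
    w, V = np.linalg.eigh(corr)                # eigenvalues in ascending order
    filtered = np.zeros_like(corr)
    # keep eigenmodes above the noise band, excluding the largest (market mode)
    for i in range(N):
        if w[i] > lam_max and i < N - 1:
            filtered += w[i] * np.outer(V[:, i], V[:, i])
    return filtered
```

On data with a market factor plus two group factors, the filtered matrix keeps positive entries within groups and negative entries across them, which is what a subsequent cluster analysis would pick up.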