• Title/Summary/Keyword: Voting method

An Improved Face Recognition Method Using SIFT-Grid (SIFT-Grid를 사용한 향상된 얼굴 인식 방법)

  • Kim, Sung Hoon;Kim, Hyung Ho;Lee, Hyon Soo
    • Journal of Digital Convergence
    • /
    • v.11 no.2
    • /
    • pp.299-307
    • /
    • 2013
  • The aim of this paper is to improve identification performance and reduce the computational cost of a face recognition system based on SIFT-Grid. First, we propose a method for composing an integrated template by removing similar SIFT keypoints and blending distinct keypoints across the various training images of one face class. The integrated template is built by computing a similarity matrix and a threshold-based histogram from the keypoints falling in the same sub-region, obtained by applying SIFT-Grid to the training images. Second, we propose an efficient way to compute the similarity between a test image and the composed integrated templates: the test image is compared one-on-one with the integrated template of each face class, and a similarity score and a threshold-voting score are calculated for each sub-region. In face recognition experiments, the proposed method is found to be more accurate than two other SIFT-Grid-based methods while also reducing the computational cost.
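The following is a minimal sketch of how per-sub-region scoring with threshold voting could look, assuming SIFT descriptors have already been grouped by grid cell; the data layout, function names, and the 0.8 threshold are illustrative and not the authors' implementation.

```python
# Minimal sketch of per-cell similarity scoring with threshold-based voting.
# `templates` maps each face class to an "integrated template": a list of
# per-cell descriptor arrays aligned with the test image's grid cells.
import numpy as np

def cell_similarity(test_desc, tmpl_desc):
    """Cosine-similarity matrix between two sets of 128-D SIFT descriptors."""
    a = test_desc / np.linalg.norm(test_desc, axis=1, keepdims=True)
    b = tmpl_desc / np.linalg.norm(tmpl_desc, axis=1, keepdims=True)
    return a @ b.T

def score_class(test_cells, tmpl_cells, sim_thresh=0.8):
    """Accumulate a threshold-voting score and a similarity score over all cells."""
    votes, sim_score = 0, 0.0
    for test_desc, tmpl_desc in zip(test_cells, tmpl_cells):
        if len(test_desc) == 0 or len(tmpl_desc) == 0:
            continue                               # empty sub-region, nothing to match
        best = cell_similarity(np.asarray(test_desc, float),
                               np.asarray(tmpl_desc, float)).max(axis=1)
        votes += int((best >= sim_thresh).sum())   # keypoints voting for this class
        sim_score += float(best.sum())
    return votes, sim_score

def identify(test_cells, templates):
    """Return the face class whose integrated template collects the most votes."""
    return max(templates, key=lambda c: score_class(test_cells, templates[c]))
```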

Fingerprint Liveness Detection Using Patch-Based Convolutional Neural Networks (패치기반 컨볼루션 뉴럴 네트워크 특징을 이용한 위조지문 검출)

  • Park, Eunsoo;Kim, Weonjin;Li, Qiongxiu;Kim, Jungmin;Kim, Hakil
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.27 no.1
    • /
    • pp.39-47
    • /
    • 2017
  • Recently, there has been an increasing number of fraud cases in which people falsify their working hours using fake fingerprints, so fingerprint liveness detection techniques have been actively studied and are in demand in various applications. This paper proposes a new method to detect fake fingerprints using a CNN (Convolutional Neural Network) applied to patches of fingerprint images. A fingerprint image is divided into small square patches, and each patch is classified as live, fake, or background by the CNN. Finally, the fingerprint image is classified as either live or fake by voting between the numbers of fake and live patches. The proposed method needs no preprocessing step such as segmentation because it includes the background class in the patch classification. The method achieves a promising average classification error of 3.06% on the LivDet2011, LivDet2013, and LivDet2015 datasets.
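A minimal sketch of the patch-voting step, assuming a hypothetical trained patch classifier `patch_model` with a Keras-style `predict`; patch size and interface are illustrative, not the paper's code.

```python
# Tile the fingerprint image into patches, classify each patch, and decide
# live vs. fake by majority vote over the non-background patches.
import numpy as np

LIVE, FAKE, BACKGROUND = 0, 1, 2

def extract_patches(img, size=32, stride=32):
    """Split a grayscale image into non-overlapping square patches."""
    h, w = img.shape[:2]
    return [img[y:y + size, x:x + size]
            for y in range(0, h - size + 1, stride)
            for x in range(0, w - size + 1, stride)]

def classify_fingerprint(img, patch_model, size=32):
    patches = np.stack(extract_patches(img, size))[..., np.newaxis]  # add channel axis
    labels = patch_model.predict(patches).argmax(axis=1)             # per-patch class
    live = int((labels == LIVE).sum())
    fake = int((labels == FAKE).sum())
    # Background patches are simply ignored, so no separate segmentation is needed.
    return "live" if live >= fake else "fake"
```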

Web Mining Using Fuzzy Integration of Multiple Structure Adaptive Self-Organizing Maps (다중 구조적응 자기구성지도의 퍼지결합을 이용한 웹 마이닝)

  • Kim, Kyung-Joong;Cho, Sung-Bae
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.1
    • /
    • pp.61-70
    • /
    • 2004
  • It is difficult to find an appropriate web site because the exponentially growing web contains millions of documents. Web search can be personalized by recommending suitable sites from a user profile, but a more effective method is needed for estimating preference, since a user's evaluation of web content reflects many aspects of his or her characteristics. Because user profiles are non-linear, classifier-based estimation is needed, and a combination of classifiers is necessary to capture their diverse properties. The structure-adaptive self-organizing map (SASOM), an enhanced SOM suitable for pattern classification and visualization, may therefore be useful for web mining. The fuzzy integral is a combination method that uses the relevance of each classifier, defined subjectively. In this paper, the user profile is estimated with an ensemble of independently trained SASOMs combined by the fuzzy integral, and is evaluated on the Syskill & Webert UCI benchmark data. Experimental results show that the proposed method outperforms both a naive Bayes classifier and majority voting of SASOMs.
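The combination step could, for instance, use a Sugeno fuzzy integral over a lambda-fuzzy measure; the sketch below assumes that specific form (the abstract does not spell out the exact variant), with `supports[i][c]` as classifier i's support for class c and `densities[i]` as its subjectively defined relevance, e.g. validation accuracy.

```python
# Sketch of classifier combination by a Sugeno fuzzy integral (assumed variant).
import numpy as np

def solve_lambda(g, iters=200):
    """Find the nonzero root of prod(1 + lam*g_i) = 1 + lam (lambda-measure)."""
    f = lambda lam: np.prod(1.0 + lam * g) - 1.0 - lam
    s = g.sum()
    if abs(s - 1.0) < 1e-9:
        return 0.0
    lo, hi = (-1.0 + 1e-9, -1e-9) if s > 1.0 else (1e-9, 1e6)
    for _ in range(iters):                  # simple bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def sugeno_integral(h, g, lam):
    """Sugeno integral of supports h w.r.t. the lambda-measure with densities g."""
    order = np.argsort(h)[::-1]             # sort supports in decreasing order
    h, g = h[order], g[order]
    G, value = g[0], min(h[0], g[0])
    for i in range(1, len(h)):
        G = g[i] + G + lam * g[i] * G       # measure of the top-i classifiers
        value = max(value, min(h[i], G))
    return value

def combine(supports, densities):
    """Pick the class whose fuzzy integral over the ensemble is largest."""
    supports, g = np.asarray(supports, float), np.asarray(densities, float)
    lam = solve_lambda(g)
    scores = [sugeno_integral(supports[:, c], g, lam)
              for c in range(supports.shape[1])]
    return int(np.argmax(scores))
```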

Vote Decision-based Deinterlacing Scheme For Directional Error Correction (방향성 오류 교정을 위한 투표 결정 기반의 디인터레이싱 방법)

  • Oh, Sye-Hoon;Lee, Yeo-Song;Ahn, Chang-Beom;Oh, Seoung-Jun
    • Journal of Broadcast Engineering
    • /
    • v.14 no.3
    • /
    • pp.342-356
    • /
    • 2009
  • This paper presents a vote decision-based deinterlacing scheme for false directional error correction (VDD) that converts an interlaced signal into a non-interlaced signal using only one field. VDD proceeds in four steps. The first step extracts regions suspected of containing false edges using the MM-ELA method. In the second step, the edge direction in these regions is decided by a majority vote over the information of the upper adjacent pixels. Directions that remain undecided are resolved in the third step by a majority vote combined with a directional-average decision; this step preserves edge directions and minimizes visual degradation. Finally, the last step interpolates the remaining undecided pixels with the DOI method, which can account for fine edge directions. Although the hierarchical structure of VDD incurs high complexity, it can extract delicate edges better than other pixel-by-pixel or window-by-window deinterlacing algorithms. Simulation results show that it significantly improves both the subjective and objective quality of the reconstructed images.
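As an illustration of the majority-vote step only (not the MM-ELA or DOI stages), the sketch below assumes each upper adjacent pixel already carries an edge-direction label expressed as a horizontal offset; the names and offsets are hypothetical, not the VDD implementation.

```python
# Majority vote over neighbouring edge directions, then directional interpolation
# between the field line above and the line below.
from collections import Counter
import numpy as np

def vote_direction(upper_dirs):
    """Majority vote over the edge directions of the upper adjacent pixels."""
    votes = Counter(d for d in upper_dirs if d is not None)
    if not votes:
        return None                              # stays undecided for a later stage
    direction, count = votes.most_common(1)[0]
    return direction if count > len(upper_dirs) // 2 else None

def directional_interpolate(field, y, x, d):
    """Average along horizontal offset d between the lines above and below."""
    return (int(field[y - 1, x + d]) + int(field[y + 1, x - d])) // 2

# Example: the three upper neighbours mostly agree on a 45-degree edge (offset 1).
frame = np.arange(25, dtype=np.uint8).reshape(5, 5)
d = vote_direction([1, 1, 0])                     # -> 1
pixel = directional_interpolate(frame, 2, 2, d)   # averages frame[1, 3] and frame[3, 1]
```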

A Spam Mail Classification Using Link Structure Analysis (링크구조분석을 이용한 스팸메일 분류)

  • Rhee, Shin-Young;Khil, A-Ra;Kim, Myung-Won
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.1
    • /
    • pp.30-39
    • /
    • 2007
  • Existing content-based spam mail filtering algorithms have difficulty filtering spam mails that contain images but little text. In this thesis we propose an efficient spam mail classification algorithm that utilizes the link structure of e-mails. We compute the number of hyperlinks in an e-mail and the in-link frequencies of the web pages hyperlinked in the e-mail. Using these two features, we classify spam mails and legitimate mails with a decision tree trained for spam mail classification. We also suggest a hybrid system that combines three different algorithms by majority voting: the link structure analysis algorithm; a modified link structure analysis algorithm, in which only the host part of the hyperlinked pages is used; and a content-based method using SVM (support vector machines). The experimental results show that the link structure analysis algorithm slightly outperforms the existing content-based method with an accuracy of 94.8%. Moreover, the hybrid system achieves an accuracy of 97.0%, a significant improvement over the existing method.
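A hedged sketch of the hybrid voting rule, with the three classifier outputs represented as 0/1 labels; the function and variable names are illustrative.

```python
# Hybrid decision: three classifiers -- link-structure analysis, host-only
# link-structure analysis, and a content-based SVM -- each output spam (1) or
# legitimate (0), and the final label is their majority vote.
def majority_vote(link_pred: int, host_link_pred: int, svm_pred: int) -> int:
    """Return 1 (spam) if at least two of the three classifiers say spam."""
    return int(link_pred + host_link_pred + svm_pred >= 2)

# Example: two of three classifiers flag the mail, so it is labelled spam.
assert majority_vote(1, 1, 0) == 1
```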

Improvement of Classification Rate of Handwritten Digits by Combining Multiple Dynamic Topology-Preserving Self-Organizing Maps (다중 동적 위상보존 자기구성 지도의 결합을 통한 필기숫자 데이타의 분류율 향상)

  • Kim, Hyun-Don;Cho, Sung-Bae
    • Journal of KIISE:Software and Applications
    • /
    • v.28 no.12
    • /
    • pp.875-884
    • /
    • 2001
  • Although the self-organizing map (SOM) is widely used in fields such as data visualization and topology-preserving mapping, its topology must be fixed before training, which makes it difficult to apply to practical problems, and its classification capability is quite low despite good clustering performance. To overcome these shortcomings, this paper proposes the dynamic topology-preserving self-organizing map (DTSOM), which dynamically splits and trains the output nodes of the map, and improves classification capability by combining multiple DTSOMs. The K-Winner method, in which each DTSOM produces K outputs through winner node selection, is applied to combine the maps. This yields even better performance than conventional combining methods such as majority voting, weighting, BKS, Bayesian, Borda, Condorcet, and reliability sum. DTSOM remedies the shortcoming of having to determine the topology in advance, and the classification rate increases significantly by combining multiple maps trained with different features. Experimental results on handwritten digit recognition indicate that the proposed method effectively resolves the problems of the conventional SOM and improves the classification rate to 98.1%.
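The sketch below is one possible reading of the K-Winner combination, not the paper's exact rule: each trained map contributes the class labels of its K best-matching nodes, and the digit class with the most accumulated votes wins.

```python
# Combine several trained maps by voting over each map's K nearest (winner) nodes.
from collections import Counter
import numpy as np

def top_k_labels(weights, labels, x, k=3):
    """Labels of the k nodes whose weight vectors are closest to input x."""
    dists = np.linalg.norm(weights - x, axis=1)
    return [labels[i] for i in np.argsort(dists)[:k]]

def combine_maps(maps, x, k=3):
    """maps: list of (node_weights, node_labels) pairs, one per trained map."""
    votes = Counter()
    for weights, labels in maps:
        votes.update(top_k_labels(weights, labels, x, k))
    return votes.most_common(1)[0][0]   # digit class with the most votes
```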

Model selection method for categorical data with non-response (무응답을 가지고 있는 범주형 자료에 대한 모형 선택 방법)

  • Yoon, Yong-Hwa;Choi, Bo-Seung
    • Journal of the Korean Data and Information Science Society
    • /
    • v.23 no.4
    • /
    • pp.627-641
    • /
    • 2012
  • We consider model estimation and model selection methods for multi-way contingency table data with non-response or missing values. We adopt a hierarchical Bayesian model to handle the boundary solution problem that can occur in maximum likelihood estimation under a non-ignorable non-response model, and we address a model selection method to find the best model for the data. Bayes factors are used for model selection under the Bayesian approach. We applied the proposed method to a pre-election survey for the 2004 Korean National Assembly race. As a result, the non-ignorable non-response model was favored, and the voting-intention variable was found to be the most suitable.
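A minimal sketch of the Bayes-factor comparison, assuming (approximate) log marginal likelihoods are available for the two competing non-response models; the numeric values below are placeholders, not results from the paper.

```python
# Select between two models by comparing their marginal likelihoods via a Bayes factor.
import math

def bayes_factor(log_ml_m1: float, log_ml_m2: float) -> float:
    """BF_12 = p(data | M1) / p(data | M2), computed on the log scale for stability."""
    return math.exp(log_ml_m1 - log_ml_m2)

# Placeholder log marginal likelihoods for illustration only.
bf = bayes_factor(log_ml_m1=-1520.4, log_ml_m2=-1533.9)
chosen = "M1 (non-ignorable non-response)" if bf > 1 else "M2 (ignorable non-response)"
print(f"BF = {bf:.2e}, selected: {chosen}")
```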

Development of a Model for Winner Prediction in TV Audition Program Using Machine Learning Method: Focusing on the Produce X 101 Program (머신러닝을 활용한 TV 오디션 프로그램의 우승자 예측 모형 개발: 프로듀스X 101 프로그램을 중심으로)

  • Gwak, Juyoung;Yoon, Hyun Shik
    • Knowledge Management Research
    • /
    • v.20 no.3
    • /
    • pp.155-171
    • /
    • 2019
  • In the entertainment industry, which involves great uncertainty, it is essential to predict public preference in advance. Thanks to mass media channels such as cable TV and internet-based streaming services, reality audition programs attract great attention and serve as a new gateway for entertainers' debuts. This phenomenon marks a shift from a closed selection process to an open one that delegates selection rights to the public, so that public popularity is reflected directly in the selection. This study therefore aims to implement a machine learning model that predicts the winners of Produce X 101, which has recently been popular in South Korea, thereby extending research methods in the cultural industry and suggesting practical implications. We collected data on the winners of the 1st, 2nd, and 3rd seasons of Produce 101 and built a predictive model from the accumulated data. We tried to develop the best model for predicting the winners of Produce X 101 using four machine learning methods: Random Forest, Decision Tree, Support Vector Machine (SVM), and Neural Network. The study found that audience voting and the number of internet news articles about each participant were the main variables for predicting the winners, and it extends the discussion by analyzing the precision of the predictions.
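A hedged sketch of the model comparison using scikit-learn; the two features (audience votes and news-article counts) follow the abstract, but the data here is randomly generated toy data and the model settings are defaults, not the study's configuration.

```python
# Compare the four classifier families on toy contestant data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((100, 2))                            # columns: audience votes, news articles (toy)
y = (X[:, 0] + 0.3 * X[:, 1] > 0.8).astype(int)     # 1 = winner group (toy label)

models = {
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "NeuralNet": MLPClassifier(max_iter=1000, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()  # 5-fold cross-validated accuracy
    print(f"{name}: {acc:.3f}")
```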

Performance comparison on vocal cords disordered voice discrimination via machine learning methods (기계학습에 의한 후두 장애음성 식별기의 성능 비교)

  • Cheolwoo Jo;Soo-Geun Wang;Ickhwan Kwon
    • Phonetics and Speech Sciences
    • /
    • v.14 no.4
    • /
    • pp.35-43
    • /
    • 2022
  • This paper studies how to improve the identification rate on disordered laryngeal speech data using convolutional neural network (CNN) and machine learning ensemble methods. In general, the amount of disordered laryngeal speech data is small, so even when classifiers are built with statistical methods, overfitting caused by the training procedure can lower the identification rate on external data. In this work, we combine the results of CNN models and machine learning models of varying accuracy in a multi-voting manner to achieve better classification than the individual trained models. The Pusan National University Hospital (PNUH) dataset, which contains normal voices and voices of patients with benign and malignant tumors, was used to train and validate the algorithms. The experiment attempted to distinguish among normal voices, benign tumors, and malignant tumors. As a result, random forest was found to be the best ensemble method, with an identification rate of 85%.
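A minimal sketch of a hard multi-voting step, assuming each already-trained model outputs an integer class label per sample; it is not the paper's code, and the example labels are illustrative.

```python
# Hard voting: the majority label across models is returned for each sample.
import numpy as np

def multi_vote(predictions):
    """predictions: (n_models, n_samples) array of integer class labels."""
    predictions = np.asarray(predictions)
    n_classes = predictions.max() + 1
    # Count votes per class for each sample (column), then take the majority.
    votes = np.apply_along_axis(np.bincount, 0, predictions, minlength=n_classes)
    return votes.argmax(axis=0)

# Example: three models vote on four samples (0 = normal, 1 = benign, 2 = malignant).
print(multi_vote([[0, 1, 2, 1],
                  [0, 1, 1, 1],
                  [1, 2, 2, 1]]))   # -> [0 1 2 1]
```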

Context Prediction Using Right and Wrong Patterns to Improve Sequential Matching Performance for More Accurate Dynamic Context-Aware Recommendation (보다 정확한 동적 상황인식 추천을 위해 정확 및 오류 패턴을 활용하여 순차적 매칭 성능이 개선된 상황 예측 방법)

  • Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.19 no.3
    • /
    • pp.51-67
    • /
    • 2009
  • Developing an agile recommender system for nomadic users has been regarded as a promising application in mobile and ubiquitous settings. To increase the quality of personalized recommendation in terms of accuracy and elapsed time, correctly estimating the user's future context is crucial. Traditionally, time series analysis and Markovian processes have been adopted for such forecasting, but these methods are not adequate for predicting context data, mainly because most context data are represented on a nominal scale. To resolve these limitations, the alignment-prediction algorithm has been suggested for context prediction, especially for predicting future context from low-level context. Recently, an ontological approach has been proposed for guided context prediction without a context history. However, because of the variety of context information, acquiring sufficient context prediction knowledge a priori is not easy in most service domains. Hence, the purpose of this paper is to propose a novel context prediction methodology that does not require a priori knowledge while increasing accuracy and decreasing the elapsed time of the service response. To do so, we have developed a new pattern-based context prediction approach. First of all, a set of individual rules is derived from each context attribute using the context history. Then a pattern, composed of the results of reasoning with the individual rules, is built for pattern learning. If at least one context property matches, say R, the pattern is regarded as right: if the pattern is new, it is added as a right pattern with the mismatched properties set to 0, frequency 1, and weight w(R, 1); otherwise the frequency of the matched right pattern is increased by 1 and the weight set to w(R, freq). After training, right patterns whose frequency exceeds a threshold are saved in the knowledge base. Conversely, if at least one context property matches, say W, the pattern is regarded as wrong: if the pattern is new, the result is modified into the wrong answer and the pattern is added with frequency 1 and weight w(W, 1); otherwise the frequency of the matched wrong pattern is increased by 1 and the weight set to w(W, freq). After training, wrong patterns whose frequency exceeds a threshold are likewise saved in the knowledge base. Context prediction then proceeds with combinatorial rules: first, identify the current context; second, look for matching patterns among the right patterns; if none is found, look among the wrong patterns; and if still no matching pattern is found, choose the one context property whose predictability is higher than that of any other property. To show the feasibility of the proposed methodology, we collected actual context histories from travelers who visited the largest amusement park in Korea, yielding 400 context records in 2009. We randomly selected 70% of the records as training data and used the rest as test data. Prediction accuracy and elapsed time were chosen as performance measures, and the performance was compared with case-based reasoning (CBR) and voting methods. Through a simulation test, we conclude that our methodology is clearly better than the CBR and voting methods in terms of accuracy and elapsed time, which shows that it is relatively valid and scalable. In a second round of experiments, we compared a full model, in which both right and wrong patterns are used to reason about the future context, with a partial model, in which reasoning is performed only with right patterns, as is generally done in the legacy alignment-prediction method. The full model turned out to be better in terms of accuracy, while the partial model was better in terms of elapsed time. In a final experiment, we took into consideration potential privacy concerns among users; to mitigate them, we excluded context properties such as the date of the tour and user-profile attributes such as gender and age, and the outcome shows that the cost of preserving privacy is tolerable. The contributions of this paper are as follows. First, academically, we have improved sequential matching methods in terms of prediction accuracy and service time by considering individual rules for each context property and by learning from wrong patterns. Second, the proposed method is found to be quite effective for privacy-preserving applications, which are frequently required by B2C context-aware services; a privacy-preserving system applying the proposed method can also decrease elapsed time, making the method very practical for establishing privacy-preserving context-aware services. Future research issues, given the limitations of this paper, are as follows. First, user acceptance and usability will be tested with actual users to prove the value of the prototype system. Second, we will apply the proposed method to more general application domains, as this paper focused on tourism in an amusement park.
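The sketch below is a loose interpretation of the right/wrong pattern store described above, not the author's implementation: patterns confirmed by the observed outcome are kept as right patterns, refuted ones as wrong patterns together with the observed outcome, each with a frequency, and prediction consults the right patterns first. Class and parameter names are hypothetical.

```python
# Simplified pattern-based context predictor with frequency-thresholded
# right/wrong pattern stores.
from collections import defaultdict

class PatternPredictor:
    def __init__(self, freq_threshold=2):
        self.right = defaultdict(int)   # (pattern, outcome) -> frequency
        self.wrong = defaultdict(int)   # (pattern, observed outcome) -> frequency
        self.threshold = freq_threshold

    def train(self, pattern, predicted, actual):
        """pattern: tuple of per-attribute rule results for one training case."""
        if predicted == actual:
            self.right[(pattern, actual)] += 1
        else:
            self.wrong[(pattern, actual)] += 1   # keep the observed outcome for reuse

    def predict(self, pattern, fallback):
        """Match right patterns first, then wrong patterns, else use the fallback
        (e.g. the rule of the most predictable context property)."""
        for store in (self.right, self.wrong):
            matches = [(freq, outcome) for (p, outcome), freq in store.items()
                       if p == pattern and freq >= self.threshold]
            if matches:
                return max(matches)[1]           # most frequent stored outcome
        return fallback
```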