• Title/Summary/Keyword: Software Clustering

Design of an Arm Gesture Recognition System Using Feature Transformation and Hidden Markov Models (특징 변환과 은닉 마코프 모델을 이용한 팔 제스처 인식 시스템의 설계)

  • Heo, Se-Kyeong;Shin, Ye-Seul;Kim, Hye-Suk;Kim, In-Cheol
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.10
    • /
    • pp.723-730
    • /
    • 2013
  • This paper presents the design of an arm gesture recognition system using the Kinect sensor. A variety of methods have been proposed for gesture recognition, ranging from Dynamic Time Warping (DTW) to Hidden Markov Models (HMM). Our system learns a unique HMM for each arm gesture from a set of sequential skeleton data. Even when the same gesture is performed, the trajectory of each joint captured by the Kinect sensor may differ considerably from previous ones, depending on the length and/or orientation of the subject's arm. To obtain robust performance independent of these conditions, the proposed system performs a feature transformation in which the feature vectors of joint positions are transformed into vectors of angles between joints. To improve the computational efficiency of learning and using HMMs, our system also applies k-means clustering to convert the high-dimensional real-valued observation vectors into one-dimensional integer sequences that serve as inputs to discrete HMMs. This dimension reduction and discretization help our system use HMMs efficiently to recognize gestures in real-time environments. Finally, we demonstrate the recognition performance of our system through experiments on two different datasets.
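As a rough illustration of the discretization step described above, the sketch below quantizes per-frame feature vectors into an integer sequence with k-means; the array shapes, symbol count, and function names are illustrative assumptions, not taken from the paper.

```python
# Sketch: quantize high-dimensional joint-angle vectors into a 1-D integer
# sequence usable as input to a discrete HMM (assumed preprocessing, not the
# paper's exact pipeline).
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(training_frames, n_symbols=16):
    """Fit k-means over all per-frame feature vectors to obtain a symbol codebook."""
    return KMeans(n_clusters=n_symbols, n_init=10, random_state=0).fit(training_frames)

def to_symbol_sequence(codebook, gesture_frames):
    """Map each frame's feature vector to the index of its nearest cluster."""
    return codebook.predict(gesture_frames)

# Synthetic example: 500 frames, each a 12-dimensional joint-angle vector.
frames = np.random.rand(500, 12)
codebook = build_codebook(frames, n_symbols=16)
symbols = to_symbol_sequence(codebook, frames[:60])   # frames of one gesture
print(symbols[:10])   # 1-D integer sequence suitable for a discrete HMM
```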

XML Document Analysis based on Similarity (유사성 기반 XML 문서 분석 기법)

  • Lee, Jung-Won;Lee, Ki-Ho
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.6
    • /
    • pp.367-376
    • /
    • 2002
  • XML allows users to define elements using arbitrary words and organize them in a nested structure. These features of XML offer both challenges and opportunities in information retrieval and document management. In this paper, we propose a new methodology for computing similarity that considers XML semantics, i.e., the meanings of the elements and the nested structures of XML documents. We generate extended-element vectors, using a thesaurus, to normalize synonyms, compound words, and abbreviations, and build a similarity matrix from them; we then compute the similarity between XML elements. We also discover and minimize XML structures using automata (NFA: Nondeterministic Finite Automata, and DFA: Deterministic Finite Automata). The similarity between XML structures is computed using the element similarity matrix and the minimized XML structures. Our methodology, which considers XML semantics, shows 100% accuracy in identifying the category of real documents from an on-line bookstore.
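The element-level similarity idea can be sketched as follows; the tiny thesaurus, the string-similarity fallback, and the sample element names are assumptions for illustration and do not reproduce the paper's extended-element vectors or structural minimization.

```python
# Sketch: normalize XML element names with a small thesaurus and build a
# pairwise similarity matrix between the elements of two documents.
from difflib import SequenceMatcher

THESAURUS = {            # hypothetical synonym/abbreviation table
    "writer": "author",
    "isbn_no": "isbn",
    "booktitle": "title",
}

def normalize(name):
    return THESAURUS.get(name.lower(), name.lower())

def element_similarity(a, b):
    """1.0 for names that coincide after normalization, else a string-similarity score."""
    a, b = normalize(a), normalize(b)
    return 1.0 if a == b else SequenceMatcher(None, a, b).ratio()

doc1 = ["title", "writer", "isbn_no"]
doc2 = ["booktitle", "author", "isbn"]
matrix = [[element_similarity(x, y) for y in doc2] for x in doc1]
for row in matrix:
    print([round(v, 2) for v in row])
```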

A Statistical Approach for Improving the Embedding Capacity of Block Matching based Image Steganography (블록 매칭 기반 영상 스테가노그래피의 삽입 용량 개선을 위한 통계적 접근 방법)

  • Kim, Jaeyoung;Park, Hanhoon;Park, Jong-Il
    • Journal of Broadcast Engineering
    • /
    • v.22 no.5
    • /
    • pp.643-651
    • /
    • 2017
  • Steganography is an information hiding technology, distinguished from cryptography in that it focuses on keeping the very existence of the hidden information from being detected by third parties, rather than on protecting the information from being decoded. In this paper, as an image steganography method that uses images as media, we propose a new block matching method that embeds information into the discrete wavelet transform (DWT) domain. The proposed method, based on a statistical analysis, reduces the loss of embedding capacity caused by the uneven use of candidate blocks. It computes the variance of each candidate block, preserves candidate blocks with high-frequency components, and reduces the number of candidate blocks with low-frequency components by compressing them with the k-means clustering algorithm. Compared with the previous block matching method, the proposed method can reconstruct secret images with similar PSNRs while embedding higher-capacity information.
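A minimal sketch of the candidate-block reduction idea, assuming the DWT sub-band blocks are already extracted and flattened; the variance threshold, cluster count, and synthetic data are illustrative, not the paper's embedding scheme.

```python
# Sketch: variance-based pruning of DWT candidate blocks, with low-variance
# blocks compressed to k-means centroids.
import numpy as np
from sklearn.cluster import KMeans

def reduce_candidates(blocks, var_threshold=0.01, n_clusters=64):
    """blocks: (N, h*w) array of flattened DWT sub-band blocks."""
    variances = blocks.var(axis=1)
    high = blocks[variances >= var_threshold]           # keep high-frequency blocks as-is
    low = blocks[variances < var_threshold]
    if len(low) > n_clusters:                           # compress low-frequency blocks
        low = KMeans(n_clusters=n_clusters, n_init=10,
                     random_state=0).fit(low).cluster_centers_
    return np.vstack([high, low])                       # final candidate pool

blocks = np.random.rand(1000, 16) * 0.05                # synthetic 4x4 blocks
print(reduce_candidates(blocks).shape)
```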

A New Approach Combining Content-based Filtering and Collaborative Filtering for Recommender Systems (추천시스템을 위한 내용기반 필터링과 협력필터링의 새로운 결합 기법)

  • Kim, Byeong-Man;Li, Qing;Kim, Si-Gwan;Lim, En-Ki;Kim, Ju-Yeon
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.3
    • /
    • pp.332-342
    • /
    • 2004
  • With the explosive growth of information in our real life, information filtering is quickly becoming a popular technique for reducing information overload. Information filtering techniques are divided into two categories: content-based filtering and collaborative filtering (or social filtering). Content-based filtering selects information based on its content, while collaborative filtering combines the opinions of other persons to make a prediction for the target user. In this paper, we describe a new filtering approach that seamlessly combines content-based filtering and collaborative filtering to take advantage of both, in which a technique that uses user profiles efficiently within the collaborative filtering framework is introduced to predict a user's preference. The proposed approach is experimentally evaluated and compared to conventional filtering. Our experiments showed that the proposed approach not only achieved a significant improvement in prediction quality but also dealt well with new users.
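A minimal sketch of one way to blend the two filtering signals; the blending rule, the rating rescaling, and all variable names are assumptions for illustration and are simpler than the profile-based formulation described above.

```python
# Sketch: a weighted combination of a content-based score and a collaborative score.
import numpy as np

def content_score(user_profile, item_features):
    """Cosine similarity between a user's term profile and an item's features."""
    denom = np.linalg.norm(user_profile) * np.linalg.norm(item_features) + 1e-9
    return float(user_profile @ item_features) / denom

def collaborative_score(neighbor_ratings, neighbor_similarities):
    """Similarity-weighted mean of the neighbors' ratings, rescaled to [0, 1]."""
    weighted = np.dot(neighbor_ratings, neighbor_similarities)
    weighted /= np.abs(neighbor_similarities).sum() + 1e-9
    return (weighted - 1.0) / 4.0            # assuming ratings on a 1-5 scale

def hybrid_prediction(user_profile, item_features,
                      neighbor_ratings, neighbor_similarities, alpha=0.5):
    """alpha balances the content-based and collaborative components."""
    return (alpha * content_score(user_profile, item_features)
            + (1.0 - alpha) * collaborative_score(neighbor_ratings,
                                                  neighbor_similarities))

print(hybrid_prediction(np.array([1.0, 0.2, 0.0]), np.array([0.8, 0.1, 0.3]),
                        np.array([4.0, 3.0]), np.array([0.9, 0.4])))
```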

Rule Models for the Integrated Design of Knowledge Acquisition, Reasoning, and Knowledge Refinement (지식획득, 추론, 지식정제의 통합적 설계를 위한 규칙모델의 구축)

  • Lee, Gye-Sung
    • The Transactions of the Korea Information Processing Society
    • /
    • v.3 no.7
    • /
    • pp.1781-1791
    • /
    • 1996
  • A number of research issues, such as knowledge acquisition, inferencing techniques, and knowledge refinement methodologies, are involved in the development of expert systems. Since each issue is considered very complicated, there has been little effort to take all of them into account collectively. However, knowledge acquisition and inferencing are closely related, because knowledge is extracted by human experts from the inferencing process used to solve a specific task or problem. Knowledge refinement is likewise accomplished by handling problems that arise during the system's inferencing process due to incompleteness and inconsistency of the knowledge base. From this perspective, we present a method for establishing a software platform in which these issues are integrated in the development of expert systems, especially in domains where domain models and concepts are hard to construct because of the inherent fuzziness of the domain. We apply a machine learning technique, conceptual clustering, to build a knowledge base and rule models that enable efficient inferencing, incremental knowledge acquisition, and refinement.
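As a loose illustration only: the sketch below groups attribute-value examples and summarizes each group by the attribute values its members share, turning those shared values into rule conditions. It is a simplified stand-in, not the conceptual clustering algorithm used in the paper, and the example data are hypothetical.

```python
# Sketch: derive rule prototypes by grouping examples and keeping the
# attribute values common to every member of a group as rule conditions.
from collections import defaultdict

def build_rule_models(examples, decisive_attribute):
    groups = defaultdict(list)
    for ex in examples:
        groups[ex[decisive_attribute]].append(ex)
    rules = {}
    for label, members in groups.items():
        common = dict(members[0])
        for ex in members[1:]:
            common = {k: v for k, v in common.items() if ex.get(k) == v}
        common.pop(decisive_attribute, None)
        rules[label] = common            # IF shared conditions THEN label
    return rules

examples = [
    {"texture": "smooth", "ph": "high", "grade": "A"},
    {"texture": "smooth", "ph": "mid",  "grade": "A"},
    {"texture": "rough",  "ph": "low",  "grade": "B"},
]
print(build_rule_models(examples, "grade"))
```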

Identification of Fuzzy Inference Systems Using a Multi-objective Space Search Algorithm and Information Granulation

  • Huang, Wei;Oh, Sung-Kwun;Ding, Lixin;Kim, Hyun-Ki;Joo, Su-Chong
    • Journal of Electrical Engineering and Technology
    • /
    • v.6 no.6
    • /
    • pp.853-866
    • /
    • 2011
  • We propose a multi-objective space search algorithm (MSSA) and introduce the identification of fuzzy inference systems based on the MSSA and information granulation (IG). The MSSA is a multi-objective optimization algorithm whose search method is associated with the analysis of the solution space. The multi-objective mechanism of MSSA is realized using a non-dominated sorting-based multi-objective strategy. In the identification of the fuzzy inference system, the MSSA is exploited to carry out parametric optimization of the fuzzy model and to achieve its structural optimization. The granulation of information is attained using the C-Means clustering algorithm. The overall optimization of fuzzy inference systems comes in the form of two identification mechanisms: structure identification (such as the number of input variables to be used, a specific subset of input variables, the number of membership functions, and the polynomial type) and parameter identification (viz. the apexes of membership functions). The structure identification is developed by the MSSA and C-Means, whereas the parameter identification is realized via the MSSA and the least squares method. The performance of the proposed model was evaluated using three representative numerical examples: the gas furnace data, NOx emission process data, and the Mackey-Glass time series. The proposed model was also compared with several "conventional" fuzzy models encountered in the literature.
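A generic fuzzy C-means sketch showing how information granules (and hence membership-function apexes) could be located from data; it does not include the MSSA or the structural/parametric identification described above, and the synthetic data are illustrative.

```python
# Sketch: generic fuzzy C-means for information granulation.
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, eps=1e-5, seed=0):
    """Return cluster centers and the fuzzy partition matrix."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                  # random fuzzy partition
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        ratio = dist[:, :, None] / dist[:, None, :]    # d_ik / d_jk
        U_new = 1.0 / (ratio ** (2.0 / (m - 1.0))).sum(axis=2)
        if np.abs(U_new - U).max() < eps:
            U = U_new
            break
        U = U_new
    return centers, U

# Two synthetic granules in a 1-D input space.
X = np.concatenate([np.random.normal(0.0, 0.3, 50),
                    np.random.normal(5.0, 0.3, 50)]).reshape(-1, 1)
centers, U = fuzzy_c_means(X, c=2)
print(np.sort(centers.ravel()))      # membership-function apexes near 0 and 5
```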

A Term Weight Mensuration based on Popularity for Search Query Expansion (검색 질의 확장을 위한 인기도 기반 단어 가중치 측정)

  • Lee, Jung-Hun;Cheon, Suh-Hyun
    • Journal of KIISE:Software and Applications
    • /
    • v.37 no.8
    • /
    • pp.620-628
    • /
    • 2010
  • With the use of the Internet pervasive in everyday life, people are now able to retrieve a great deal of information through the web. However, the exponential growth in the quantity of information on the web has limited the search performance of online search engines, which return piles of unwanted information. With so much unwanted information, web users now need more time and effort than in the past to find the information they need. This paper suggests a method that uses query expansion to bring wanted information to web users quickly. In experiments without a change of search subject, the popularity-based term weight mensuration showed better performance than TF-IDF and the simple popularity term weight mensuration. When the subject changed during a search, the performance of the popularity-based term weight mensuration degraded less than that of the others.
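A hedged sketch of how a popularity signal might be blended with TF-IDF when scoring expansion terms; the blending formula, the click counts, and the toy corpus are assumptions, not the paper's popularity-based mensuration.

```python
# Sketch: score candidate expansion terms by blending TF-IDF with a
# popularity signal (here, how often users selected documents containing the term).
import math
from collections import Counter

def tf_idf(term, doc_terms, corpus):
    tf = Counter(doc_terms)[term] / len(doc_terms)
    df = sum(1 for d in corpus if term in d)
    idf = math.log((1 + len(corpus)) / (1 + df)) + 1
    return tf * idf

def popularity_weight(term, doc_terms, corpus, click_counts, alpha=0.5):
    pop = click_counts.get(term, 0) / (1 + max(click_counts.values(), default=0))
    return alpha * tf_idf(term, doc_terms, corpus) + (1 - alpha) * pop

corpus = [["web", "search", "engine"], ["query", "expansion", "search"]]
clicks = {"search": 40, "query": 25, "web": 10}
for t in ("search", "web"):
    print(t, round(popularity_weight(t, corpus[0], corpus, clicks), 3))
```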

Improvement of Component Design using Component Metrics (컴포넌트 메트릭스를 이용한 컴포넌트 설계 재정비)

  • 고병선;박재년
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.8
    • /
    • pp.980-990
    • /
    • 2004
  • The component-based development methodology aims at a high level of abstraction and at reusability with components larger than classes. Measuring components is indispensable for improving the quality of the component-based system and of the individual components, and the measurement results should be fed back into the development process. It is therefore necessary to study component metrics that can be applied at the component analysis and design stage. Hence, in this paper, we propose component cohesion, coupling, and independence metrics that reflect the information extracted during component analysis and design. The proposed metrics are based on similarity information about the behavior patterns of the operations that provide the component's services. We also propose a redesign process, based on clustering techniques, for improving the component design so that each component becomes an independent functional unit with low complexity and easy maintenance. Finally, we show that the component design model can be improved by the proposed component metrics and the component redesign process.
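A minimal sketch of grouping operations by the similarity of their behavior patterns, here approximated by the sets of data entities each operation touches; the Jaccard measure, the single-linkage grouping, and the sample operations are illustrative assumptions rather than the paper's metrics.

```python
# Sketch: cluster component operations by behavior-pattern similarity to
# suggest candidate component boundaries.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def group_operations(behavior, threshold=0.5):
    """Single-linkage grouping: an operation joins a group containing a similar operation."""
    groups = []
    for op, touched in behavior.items():
        for g in groups:
            if any(jaccard(touched, behavior[other]) >= threshold for other in g):
                g.append(op)
                break
        else:
            groups.append([op])
    return groups

# Hypothetical operations and the data entities they access.
behavior = {
    "createOrder":   {"Order", "Customer"},
    "cancelOrder":   {"Order"},
    "updateProfile": {"Customer", "Address"},
    "getInvoice":    {"Order", "Invoice"},
}
print(group_operations(behavior, threshold=0.4))
```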

A Motion Correspondence Algorithm based on Point Series Similarity (점 계열 유사도에 기반한 모션 대응 알고리즘)

  • Eom, Ki-Yeol;Jung, Jae-Young;Kim, Moon-Hyun
    • Journal of KIISE:Software and Applications
    • /
    • v.37 no.4
    • /
    • pp.305-310
    • /
    • 2010
  • In this paper, we propose a heuristic algorithm for motion correspondence based on point series similarity. A point series is a sequence of points sorted in ascending order of their x-coordinate values. The proposed algorithm clusters the points of the previous frame based on their local adjacency. For each group, we construct several potential point series by permuting the points in it; each series is compared to the point series of the following frame to match the sets of points through their similarity under a proximity constraint. The longest common subsequence between two point series is used as global information to resolve local ambiguity. Experimental results show an accuracy of more than 90% on two image sequences from the PETS 2009 and CAVIAR data sets.
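The longest-common-subsequence step mentioned above can be sketched as follows, with two points treated as matching when they fall within a proximity threshold; the threshold and sample coordinates are illustrative.

```python
# Sketch: LCS length between two point series under a proximity constraint.
import math

def lcs_length(series_a, series_b, max_dist=10.0):
    def close(p, q):
        return math.dist(p, q) <= max_dist
    n, m = len(series_a), len(series_b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if close(series_a[i - 1], series_b[j - 1]):
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

prev_frame = [(10, 12), (35, 40), (60, 18)]     # points sorted by x
next_frame = [(12, 13), (38, 42), (90, 70)]
print(lcs_length(prev_frame, next_frame))        # -> 2
```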

Automatic Email Multi-category Classification Using Dynamic Category Hierarchy and Non-negative Matrix Factorization (비음수 행렬 분해와 동적 분류 체계를 사용한 자동 이메일 다원 분류)

  • Park, Sun;An, Dong-Un
    • Journal of KIISE:Software and Applications
    • /
    • v.37 no.5
    • /
    • pp.378-385
    • /
    • 2010
  • The explosive increase in the use of email has created the need to classify email efficiently and accurately. Previous work on email classification has mainly focused on binary classification that filters out spam mails. Such methods are based on Support Vector Machines, Bayesian classifiers, or rule-based classifiers, and they are supervised in the sense that the user is required to manually describe the rules and keyword lists used to recognize the relevant email. Other, unsupervised methods use clustering techniques for multi-category classification and create category labels from a set of incoming messages. In this paper, we propose a new automatic email multi-category classification method that uses NMF for automatic category label construction and a dynamic category hierarchy method for reorganizing email messages within those category labels. With the proposed method, a large number of emails are managed efficiently by classifying them into multiple categories automatically, and the email messages in each category are reorganized to enhance accuracy whenever users want to classify all their email messages.
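A minimal sketch of deriving category labels from email text with NMF on TF-IDF vectors; the sample messages, the number of components, and the top-term labeling are illustrative, and the dynamic category hierarchy reorganization is omitted.

```python
# Sketch: NMF-based category label construction for a small set of emails.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

emails = [
    "meeting agenda for the project review on monday",
    "your invoice and payment receipt are attached",
    "project schedule updated, review the new milestones",
    "payment overdue, please check the attached invoice",
]
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(emails)

nmf = NMF(n_components=2, init="nndsvd", random_state=0)
W = nmf.fit_transform(X)              # email-to-category weights
H = nmf.components_                   # category-to-term weights
terms = vectorizer.get_feature_names_out()

for k, row in enumerate(H):           # top terms act as the category label
    top = [terms[i] for i in row.argsort()[::-1][:3]]
    print(f"category {k}: {top}")
print(W.argmax(axis=1))               # assigned category per email
```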