• Title/Abstract/Keyword: point dataset

Search results: 195 items (processing time: 0.024 seconds)

A Novel Method for Hand Posture Recognition Based on Depth Information Descriptor

  • Xu, Wenkai;Lee, Eung-Joo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 9 No. 2
    • /
    • pp.763-774
    • /
    • 2015
  • Hand posture recognition has found a wide range of applications in Human Computer Interaction and Computer Vision for many years. The problem arises mainly from the high dexterity of the hand, the self-occlusions created by the limited view of the camera, and illumination variations. To remedy these problems, a hand posture recognition method using 3-D point clouds is proposed in this paper to explicitly utilize the 3-D information in depth maps. First, the hand region is segmented by a set of depth thresholds. Next, hand image normalization is performed to ensure that the extracted feature descriptors are scale and rotation invariant. By robustly coding and pooling 3-D facets, the proposed descriptor can effectively represent the various hand postures. An SVM with a Gaussian kernel function is then used to address the posture recognition task. Experimental results on a posture dataset captured by a Kinect sensor (postures 1 to 10) demonstrate the effectiveness of the proposed approach; the average recognition rate of our method is over 96%.
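The classification step described above, an SVM with a Gaussian (RBF) kernel over the pooled descriptors, can be sketched as follows. The 4-dimensional "descriptor" vectors and labels are invented stand-ins for the paper's 3-D facet features:

```python
# Sketch: SVM with a Gaussian (RBF) kernel for posture classification.
# The 4-D "descriptors" below are hypothetical stand-ins for the paper's
# pooled 3-D facet features; real descriptors would be much longer.
from sklearn.svm import SVC

X_train = [
    [0.9, 0.1, 0.2, 0.1],  # examples of posture "1"
    [0.8, 0.2, 0.1, 0.0],
    [0.1, 0.9, 0.8, 0.7],  # examples of posture "2"
    [0.2, 0.8, 0.9, 0.8],
]
y_train = [1, 1, 2, 2]

# gamma controls the width of the Gaussian kernel exp(-gamma * ||x - x'||^2)
clf = SVC(kernel="rbf", gamma=1.0)
clf.fit(X_train, y_train)

pred = clf.predict([[0.85, 0.15, 0.15, 0.05]])[0]
print(pred)
```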

기술혁신 횟수의 분포함수 추정 -혼합모형을 적용하여- (Approximation of the Distribution Function for the Number of Innovation Activities Using a Mixture Model)

  • 유승훈;박두호
    • 기술혁신학회지
    • /
    • Vol. 8 No. 3
    • /
    • pp.887-910
    • /
    • 2005
  • This paper attempts to approximate the distribution function for the number of innovation activities (NIA). To this end, the dataset of the 2002 Korean Innovation Survey (KIS 2002), published by the Science and Technology Policy Institute, is used. To deal with the zero NIA values reported by a considerable number of firms in the KIS 2002 survey, a mixture model of distributions for the NIA is applied. The NIA is specified as a mixture of two distributions, one with a point mass at zero and the other with full support on the positive half of the real line. The model was empirically verified on the KIS 2002 data. The mixture model can easily capture the common bimodality feature of the NIA distribution. In addition, when covariates were added to the mixture model, it was found that the probability that a firm has zero NIA varies significantly with some variables.
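The mixture specification above, a point mass at zero plus a distribution supported on the positive values, can be written down directly. The sketch below uses a Poisson for the positive component; that choice is an assumption for illustration, since the abstract does not name the component distribution:

```python
# Sketch of a zero-inflated mixture: with probability pi the count is a
# structural zero; otherwise it is drawn from a Poisson(lam) distribution.
import math

def zero_inflated_pmf(k, pi, lam):
    """P(NIA = k) under the two-component mixture."""
    poisson = math.exp(-lam) * lam**k / math.factorial(k)
    if k == 0:
        return pi + (1 - pi) * poisson   # point mass plus the Poisson's own zero
    return (1 - pi) * poisson

# With many structural zeros (pi = 0.4) the distribution is bimodal:
# a spike at zero and a second mode near lam.
probs = [zero_inflated_pmf(k, pi=0.4, lam=5.0) for k in range(30)]
print(round(probs[0], 4))    # spike at zero
print(round(sum(probs), 4))  # pmf sums to ~1
```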


A comparative study in Bayesian semiparametric approach to small area estimation

  • Heo, Simyoung;Kim, Dal Ho
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 27 No. 5
    • /
    • pp.1433-1441
    • /
    • 2016
  • Small area models provide reliable and accurate estimates when the sample size is not sufficient. Our dataset has an inherent nonlinear pattern which significantly affects our inference. In this case, we can consider semiparametric models based on, for example, truncated polynomial basis functions or radial basis functions. In this paper, we study four Bayesian semiparametric models for small areas to handle this nonlinearity. The four small area models are based on two kinds of basis function and different knot positions. To evaluate the different estimates, four comparison measurements have been employed as criteria. Under these comparison measurements, the truncated polynomial basis function with equal quantile knots shows the best result. In the Bayesian computation, we use the Gibbs sampler to solve the numerical problems.
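A truncated polynomial basis with equal quantile knots, as used by the best-performing model above, is straightforward to construct. This is a generic degree-1 (truncated linear) sketch, not the authors' exact design matrix:

```python
# Sketch: design matrix for a truncated (linear) polynomial basis with
# knots placed at equal quantiles of the covariate.
import numpy as np

def truncated_linear_basis(x, n_knots):
    x = np.asarray(x, dtype=float)
    # knots at equal quantiles, excluding the endpoints
    qs = np.linspace(0.0, 1.0, n_knots + 2)[1:-1]
    knots = np.quantile(x, qs)
    cols = [np.ones_like(x), x]                      # polynomial part: 1, x
    cols += [np.maximum(x - k, 0.0) for k in knots]  # truncated part: (x - k)_+
    return np.column_stack(cols), knots

x = np.linspace(0.0, 10.0, 101)
B, knots = truncated_linear_basis(x, n_knots=3)
print(B.shape)   # (101, 5): intercept, x, and one column per knot
print(knots)     # equal-quantile knots; [2.5, 5.0, 7.5] for this evenly spaced x
```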

A Mixed Co-clustering Algorithm Based on Information Bottleneck

  • Liu, Yongli;Duan, Tianyi;Wan, Xing;Chao, Hao
    • Journal of Information Processing Systems
    • /
    • Vol. 13 No. 6
    • /
    • pp.1467-1486
    • /
    • 2017
  • Fuzzy co-clustering is sensitive to noisy data. To overcome this noise sensitivity, possibilistic clustering relaxes the constraints in FCM-type fuzzy (co-)clustering. In this paper, we introduce a new possibilistic fuzzy co-clustering algorithm based on the information bottleneck (ibPFCC). The algorithm combines fuzzy co-clustering and possibilistic clustering, and formulates an objective function whose distance term employs information bottleneck theory to measure the distance between a feature data point and a feature cluster centroid. Experiments were conducted on three real datasets and one artificial dataset. The results show that ibPFCC outperforms such prominent fuzzy (co-)clustering algorithms as FCM, FCCM, RFCC and FCCI in terms of accuracy and robustness.

Exploring an Optimal Feature Selection Method for Effective Opinion Mining Tasks

  • Eo, Kyun Sun;Lee, Kun Chang
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 24 No. 2
    • /
    • pp.171-177
    • /
    • 2019
  • This paper aims to find the most effective feature selection method for opinion mining tasks. Opinion mining belongs to sentiment analysis, which, from a text mining point of view, categorizes the opinions expressed in online texts into positive and negative. Using a dataset covering five product groups (apparel, books, DVDs, electronics, and kitchen), TF-IDF and Bag-of-Words (BOW) features are calculated to form the product review feature sets. Next, we apply several feature selection methods to see which yields the most robust results. The results show that a stacking classifier trained on the features selected by the Information Gain method yields the best result.
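Information Gain feature selection, which came out on top above, scores each feature by how much knowing it reduces the entropy of the class label. Below is a minimal sketch for binary presence/absence features; the tiny dataset is invented:

```python
# Sketch: Information Gain for binary (word present / absent) text features.
import math

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(0), labels.count(1)) if c)

def info_gain(feature, labels):
    """H(class) - H(class | feature) for one binary feature column."""
    base = entropy(labels)
    cond = 0.0
    for v in (0, 1):
        subset = [y for f, y in zip(feature, labels) if f == v]
        if subset:
            cond += len(subset) / len(labels) * entropy(subset)
    return base - cond

# Toy reviews: feature 0 ("great" present) tracks the label perfectly,
# feature 1 ("the" present) is uninformative.
features = [[1, 1], [1, 0], [0, 1], [0, 0]]   # rows = documents
labels = [1, 1, 0, 0]                          # 1 = positive review
gains = [info_gain([row[j] for row in features], labels) for j in range(2)]
print(gains)   # the informative feature scores 1.0, the other 0.0
```

Ranking features by this score and keeping the top k is the selection step; the stacking classifier is then trained on the kept columns.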

A Study on Representative Skyline Using Connected Component Clustering

  • Choi, Jong-Hyeok;Nasridinov, Aziz
    • Journal of Multimedia Information System
    • /
    • Vol. 6 No. 1
    • /
    • pp.37-42
    • /
    • 2019
  • Skyline queries are used in a variety of fields to support optimal decision making. However, as the volume and the dimensionality of the data increase, the number of skyline points grows, along with the time it takes to discover them. Because keeping the number of returned skyline points manageable is essential in many real-life applications, various studies have been proposed. However, previous research has used k-parameter methods such as top-k and k-means to discover representative skyline points (RSPs) from the entire skyline point set, resulting in high query response time and reduced representativeness due to the dependency on k. To solve this problem, we propose a new Connected Component Clustering based Representative Skyline Query (3CRS) that can discover RSPs quickly even in high-dimensional data through connected component clustering. 3CRS performs fast discovery and clustering of skyline points through hash indexes and connected components, and selects RSPs from each cluster. This paper demonstrates the superiority of the proposed method by comparing it with representative skyline queries using k-means and DBSCAN on a real-world dataset.
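The two stages of 3CRS, finding the skyline and then grouping skyline points into connected components with one representative per component, can be sketched in a few lines. The distance threshold used to link points and the lexicographic choice of representative are stand-ins for the paper's hash-index adjacency and RSP selection rule:

```python
# Sketch: skyline extraction, then connected-component clustering of the
# skyline points, with one representative chosen per component.
import math

def dominates(p, q):
    """p dominates q when p <= q everywhere and p < q somewhere (minimization)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

def components(points, eps):
    """Connected components: points within eps of each other are linked."""
    comps, seen = [], set()
    for i in range(len(points)):
        if i in seen:
            continue
        stack, comp = [i], []
        while stack:
            j = stack.pop()
            if j in seen:
                continue
            seen.add(j)
            comp.append(points[j])
            stack += [k for k in range(len(points)) if k not in seen
                      and math.dist(points[j], points[k]) <= eps]
        comps.append(comp)
    return comps

pts = [(1, 9), (2, 8), (8, 2), (9, 1), (5, 5), (6, 6)]
sky = skyline(pts)                                  # (6, 6) is dominated by (5, 5)
reps = [min(c) for c in components(sky, eps=2.0)]   # one representative per cluster
print(sky)
print(reps)
```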

Variational autoencoder for prosody-based speaker recognition

  • Starlet Ben Alex;Leena Mary
    • ETRI Journal
    • /
    • Vol. 45 No. 4
    • /
    • pp.678-689
    • /
    • 2023
  • This paper describes a novel end-to-end deep generative model-based speaker recognition system using prosodic features. The usefulness of variational autoencoders (VAE) in learning the speaker-specific prosody representations for the speaker recognition task is examined herein for the first time. The speech signal is first automatically segmented into syllable-like units using vowel onset points (VOP) and energy valleys. Prosodic features, such as the dynamics of duration, energy, and fundamental frequency (F0), are then extracted at the syllable level and used to train/adapt a speaker-dependent VAE from a universal VAE. The initial comparative studies on VAEs and traditional autoencoders (AE) suggest that the former can efficiently learn speaker representations. Investigations on the impact of gender information in speaker recognition also point out that gender-dependent impostor banks lead to higher accuracies. Finally, the evaluation on the NIST SRE 2010 dataset demonstrates the usefulness of the proposed approach for speaker recognition.
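Two ingredients of the VAE training mentioned above can be shown compactly: the reparameterization trick used to sample latent vectors, and the closed-form KL divergence between the encoder's Gaussian and the standard-normal prior. This is a generic VAE sketch on invented encoder outputs, not the authors' network:

```python
# Sketch: VAE reparameterization and the Gaussian KL term of the ELBO.
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """z = mu + sigma * eps keeps sampling differentiable w.r.t. mu and sigma."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

mu = np.array([0.5, -0.3])    # hypothetical encoder outputs for one syllable
log_var = np.array([0.0, -1.0])
z = reparameterize(mu, log_var)
print(z.shape)                                           # latent sample, shape (2,)
print(kl_to_standard_normal(np.zeros(2), np.zeros(2)))   # 0 when posterior = prior
```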

Predicting depth value of the future depth-based multivariate record

  • Samaneh Tata;Mohammad Reza Faridrohani
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 30 No. 5
    • /
    • pp.453-465
    • /
    • 2023
  • The problem of predicting univariate records on the basis of record values has been discussed by many authors, but it has not been addressed for multivariate records. There are various definitions of multivariate records, among which depth-based records are adopted for the purposes of this paper. By means of maximum likelihood and conditional median methods, point and interval predictions of the depth values associated with future depth-based multivariate records are made on the basis of the observed ones. Observations drawn from elliptical distributions form the main setting in which this problem is studied. Finally, the satisfactory performance of the prediction methods is illustrated via simulation studies and a real dataset on drought in the city of Kermanshah.
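For the univariate case the paper builds on, an upper record occurs whenever a new observation exceeds every previous one; depth-based records generalize this notion via data depth. A minimal sketch of classical record extraction, on an invented series:

```python
# Sketch: extract upper record values and record times from a sequence.
def upper_records(xs):
    """Return (times, values): the indices at which a new maximum appears."""
    times, values = [], []
    best = float("-inf")
    for i, x in enumerate(xs):
        if x > best:
            best = x
            times.append(i)
            values.append(x)
    return times, values

# Hypothetical annual drought-index series.
series = [3.1, 2.7, 3.5, 3.4, 4.2, 4.0, 4.9]
times, values = upper_records(series)
print(times)    # [0, 2, 4, 6]
print(values)   # [3.1, 3.5, 4.2, 4.9]
```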

Term Frequency-Inverse Document Frequency (TF-IDF) Technique Using Principal Component Analysis (PCA) with Naive Bayes Classification

  • J.Uma;K.Prabha
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 24 No. 4
    • /
    • pp.113-118
    • /
    • 2024
  • Performing sentiment analysis on Twitter is difficult, yet it is valuable for large-scale review mining. The main reason is that tweets are extremely short and mostly contain slang, emoticons, and hashtags alongside ordinary words. Feature extraction is the technique of building a feature vector from the attributes of individual tweets; each element of the feature vector is a value that contributes to assigning a sentiment class to a tweet. The purpose of feature extraction is to discard irrelevant attributes and thereby improve the accuracy of the classification models. In this manuscript we propose a Term Frequency-Inverse Document Frequency (TF-IDF) method combined with Principal Component Analysis (PCA) and a Naïve Bayes classifier. In the classification process, the proposed approach can derive discriminative components from the highly valued features of a Twitter dataset.
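The proposed pipeline, TF-IDF features reduced with PCA and classified with Naïve Bayes, can be sketched with standard library calls. The four tiny "tweets" are invented, and a Gaussian Naïve Bayes is an assumption here, chosen because PCA outputs are continuous:

```python
# Sketch: TF-IDF -> PCA -> Gaussian Naive Bayes on a toy tweet set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB

tweets = ["love this phone", "great battery love it",
          "hate the screen", "terrible battery hate it"]
labels = [1, 1, 0, 0]            # 1 = positive, 0 = negative

tfidf = TfidfVectorizer().fit_transform(tweets).toarray()
components = PCA(n_components=2).fit_transform(tfidf)  # dense, continuous features
clf = GaussianNB().fit(components, labels)

print(clf.score(components, labels))   # training accuracy on the toy data
```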

GPM 위성 강우자료의 검증과 지상관측 자료를 통한 강우 보정 기법 (Assessment and merging technique for GPM satellite precipitation product using ground based measurement)

  • 백종진;박종민;김기영;최민하
    • 한국수자원학회논문집
    • /
    • Vol. 51 No. 2
    • /
    • pp.131-140
    • /
    • 2018
  • Precipitation not only deepens our understanding of the water cycle system but is also the most essential factor in securing and managing water resources efficiently. This study evaluated the applicability of precipitation data from the recently launched GPM satellite by comparing it with data from 92 ASOS stations on the Korean Peninsula for the year 2015. In addition, three merging methods (Geographical Differential Analysis, Geographical Ratio Analysis, and Conditional Merging) were applied to produce improved precipitation estimates by combining the strengths of ground station data with those of satellite data. The findings are as follows. 1) Validation against the ASOS data confirmed that the GPM precipitation data carry a slight overestimation bias, with errors particularly large during the summer season. 2) When each merging method was validated using the jackknife method, the error decreased as the spatial resolution increased, and among the merging methods, conditional merging showed the best performance.
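The Geographical Differential Analysis step, computing gauge-minus-satellite differences at the stations, interpolating them, and adding the interpolated field back to the satellite estimate, can be sketched with inverse-distance weighting. The station coordinates and rainfall values are invented; the real study interpolates over a grid covering the Korean Peninsula:

```python
# Sketch: Geographical Differential Analysis (GDA)-style merging.
# The difference field gauge - satellite is interpolated from the stations
# and added back to the satellite estimate at any query location.
import math

stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # hypothetical ASOS sites
gauge    = [12.0, 8.0, 15.0]                         # mm, gauge observations
sat_at_s = [14.0, 9.0, 14.0]                         # mm, GPM at the same sites
diffs    = [g - s for g, s in zip(gauge, sat_at_s)]  # [-2.0, -1.0, 1.0]

def idw_diff(x, y, power=2.0):
    """Inverse-distance-weighted interpolation of the station differences."""
    num, den = 0.0, 0.0
    for (sx, sy), d in zip(stations, diffs):
        dist = math.hypot(x - sx, y - sy)
        if dist == 0.0:
            return d                   # exactly at a station: use its difference
        w = dist ** -power
        num += w * d
        den += w
    return num / den

def merged(x, y, satellite_value):
    return satellite_value + idw_diff(x, y)

# At a station, the merged field reproduces the gauge value exactly.
print(merged(0.0, 0.0, sat_at_s[0]))   # 12.0
```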