• Title/Summary/Keyword: K-nearest neighbor (K-최근이웃)


Comparison of Neighborhood Information Systems for Lattice Data Analysis (격자자료분석을 위한 이웃정보시스템의 비교)

  • Lee, Kang-Seok;Shin, Key-Il
    • The Korean Journal of Applied Statistics
    • /
    • v.21 no.3
    • /
    • pp.387-397
    • /
    • 2008
  • Recently, many studies on data analysis using spatial statistics have been conducted in various fields, and research on small area estimation using spatial statistics is actively in progress. In the analysis of lattice data, defining the neighborhood information system is the most crucial step because it determines the result of the analysis. However, the commonly used neighborhood information system is generally defined by whether small areas share common border lines. In this paper, other neighborhood information systems are introduced and compared using Moran's I statistic; the Economically Active Population Survey (2001) is used for the comparisons.
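As a rough illustration of the kind of comparison described above, the following sketch computes Moran's I for two different (invented) neighborhood weight matrices over a toy lattice of four small areas; the data, the weight matrices, and the specific neighborhood definitions are assumptions, not the paper's.

```python
# Hypothetical sketch: comparing Moran's I under two neighborhood definitions.
import numpy as np

def morans_i(x, W):
    """Moran's I for values x under a binary spatial weight matrix W."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()                      # deviations from the mean
    num = (W * np.outer(z, z)).sum()      # weighted cross-products
    den = (z ** 2).sum()
    return (len(x) / W.sum()) * num / den

# Toy lattice of 4 small areas with made-up values.
values = [2.0, 3.5, 1.0, 4.2]

# Neighborhood system A: areas are neighbors if they share a border (assumed).
W_border = np.array([[0, 1, 0, 0],
                     [1, 0, 1, 0],
                     [0, 1, 0, 1],
                     [0, 0, 1, 0]])

# Neighborhood system B: an alternative definition, e.g. nearest centroids (assumed).
W_knn = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [1, 1, 0, 1],
                  [0, 1, 1, 0]])

print("Moran's I (border-sharing):", round(morans_i(values, W_border), 3))
print("Moran's I (alternative):   ", round(morans_i(values, W_knn), 3))
```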

Missing values imputation for time course gene expression data using the pattern consistency index adaptive nearest neighbors (시간경로 유전자 발현자료에서 패턴일치지수와 적응 최근접 이웃을 활용한 결측값 대치법)

  • Shin, Heyseo;Kim, Dongjae
    • The Korean Journal of Applied Statistics
    • /
    • v.33 no.3
    • /
    • pp.269-280
    • /
    • 2020
  • Time course gene expression data are large-scale data observed over time in microarray experiments, allowing the expression levels of many genes to be measured simultaneously. However, the experimental process is complex, so missing values frequently occur for various reasons. In this paper, we propose the pattern consistency index adaptive nearest neighbors method for missing value imputation. This method combines the adaptive nearest neighbors (ANN) method, which reflects local characteristics, with a pattern consistency index that measures how consistently gene expression varies between observations across time points. We conducted a Monte Carlo simulation study on two yeast time course data sets to evaluate the usefulness of the proposed pattern consistency index adaptive nearest neighbors (PANN) method.
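The sketch below is not the authors' PANN algorithm; it only illustrates the general idea of nearest-neighbor imputation in which each neighbor gene is weighted by a pattern-consistency proxy (here, Pearson correlation over the observed time points). The toy expression matrix and the weighting scheme are assumptions.

```python
# Rough sketch (not the paper's exact method): impute a missing value in one
# gene's time-course profile from nearest-neighbor genes, weighting each
# neighbor by a pattern-consistency proxy (Pearson correlation).
import numpy as np

def impute_missing(expr, gene, t, k=3):
    """expr: genes x timepoints array with np.nan for missing values."""
    target = expr[gene]
    observed = ~np.isnan(target)
    observed[t] = False                       # exclude the missing column itself
    scores = []
    for g in range(expr.shape[0]):
        if g == gene or np.isnan(expr[g, t]):
            continue
        mask = observed & ~np.isnan(expr[g])
        if mask.sum() < 2:
            continue
        dist = np.sqrt(np.mean((expr[g, mask] - target[mask]) ** 2))
        corr = np.corrcoef(expr[g, mask], target[mask])[0, 1]  # consistency proxy
        scores.append((dist, max(corr, 0.0), expr[g, t]))
    scores.sort(key=lambda s: s[0])           # nearest neighbors by distance
    neighbors = scores[:k]
    weights = np.array([c for _, c, _ in neighbors])
    values = np.array([v for _, _, v in neighbors])
    if weights.sum() == 0:
        return float(values.mean())
    return float(np.average(values, weights=weights))

# Toy expression matrix (5 genes x 4 time points) with one missing entry.
X = np.array([[1.0, 2.0, 3.0, np.nan],
              [1.1, 2.1, 2.9, 4.0],
              [0.9, 1.8, 3.2, 4.1],
              [5.0, 4.0, 3.0, 2.0],
              [1.2, 2.2, 3.1, 3.9]])
print("imputed value:", impute_missing(X, gene=0, t=3, k=2))
```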

Efficient Path Finding Based on the A* Algorithm for Processing k-Nearest Neighbor Queries in Road Network Databases (도로 네트워크에서 A* 알고리즘을 이용한 k-최근접 이웃 객체에 대한 효과적인 경로 탐색 방법)

  • Shin, Sung-Hyun;Lee, Sang-Chul;Kim, Sang-Wook;Lee, Jung-Hoon;Im, Eul-Kyu
    • Journal of KIISE:Databases
    • /
    • v.36 no.5
    • /
    • pp.405-410
    • /
    • 2009
  • This paper proposes an efficient path finding scheme that searches the paths from a given query point to k static objects, aiming both to improve legacy k-nearest neighbor search and to make it easily applicable to the road network environment. To speed up one-to-many path finding, the modified A* avoids the duplicated node scans incurred by multiple executions of a one-to-one path finding algorithm. Additionally, the cost to each object found in this step makes it possible to finalize the k objects from the candidate set according to network distance and to order them by path cost. Experiment results show that the proposed scheme has an accuracy of around 100% and improves the search speed of k-nearest neighbor searches by 1.3 to 3.0 times compared with INE, post-Dijkstra, and the naive method.
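For orientation only, the following sketch runs a standard single-pair A* search with a Euclidean straight-line heuristic on a toy road graph; the paper's contribution, a modified A* that shares node scans across the k one-to-many searches, is not reproduced here. Node coordinates and edge weights are invented.

```python
# Standard A* on a toy road graph: the single-pair building block that the
# paper's modified, shared-scan variant starts from.
import heapq, math

coords = {'A': (0, 0), 'B': (1, 0), 'C': (1, 1), 'D': (2, 1)}   # node positions (assumed)
graph = {'A': [('B', 1.0)], 'B': [('A', 1.0), ('C', 1.0), ('D', 1.5)],
         'C': [('B', 1.0), ('D', 1.0)], 'D': [('B', 1.5), ('C', 1.0)]}

def heuristic(u, v):
    (x1, y1), (x2, y2) = coords[u], coords[v]
    return math.hypot(x1 - x2, y1 - y2)        # straight-line distance, admissible here

def astar(start, goal):
    frontier = [(heuristic(start, goal), 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, w in graph[node]:
            ng = g + w
            if ng < best.get(nxt, float('inf')):
                best[nxt] = ng
                heapq.heappush(frontier, (ng + heuristic(nxt, goal), ng, nxt, path + [nxt]))
    return float('inf'), []

print(astar('A', 'D'))   # network distance and path from a query point to one object
```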

K Nearest Neighbor Joins for Big Data Processing based on Spark (Spark 기반 빅데이터 처리를 위한 K-최근접 이웃 연결)

  • JIAQI, JI;Chung, Yeongjee
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.21 no.9
    • /
    • pp.1731-1737
    • /
    • 2017
  • K Nearest Neighbor Join (KNN Join) is a simple yet effective method in machine learning. In the past it was widely used on small datasets, but as data volumes grow it becomes infeasible to run this model in real applications on a single machine because of memory and time constraints. Nowadays MapReduce, a popular batch-processing model that runs on clusters with a large number of computers, is widely used for large-scale data processing. Hadoop is a framework that implements MapReduce, but its performance can be further improved by a newer framework named Spark. In the present study, we provide a KNN Join implementation based on Spark. Thanks to Spark's in-memory computation capability, it is faster and more effective than Hadoop. In our experiments, we study the influence of different factors on running time and demonstrate the robustness and efficiency of our approach.
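A minimal sketch of one common way to express a KNN join on Spark, assuming the smaller relation fits in executor memory and can be broadcast; this illustrates the idea only and is not the paper's implementation, and the data are toy 2-D points.

```python
# Broadcast-based KNN join sketch in PySpark: ship the small side R to every
# executor, then compute each S point's k nearest R points locally.
import heapq
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("knn-join-sketch").getOrCreate()
sc = spark.sparkContext

# R: small reference set, S: large probe set (toy 2-D points with ids).
R = [(1, (0.0, 0.0)), (2, (5.0, 5.0)), (3, (9.0, 1.0))]
S = sc.parallelize([(10, (1.0, 1.0)), (11, (8.0, 2.0)), (12, (4.0, 6.0))])

R_b = sc.broadcast(R)          # the small side is replicated to every executor
k = 2

def knn_of(point):
    sid, (sx, sy) = point
    dists = [((sx - rx) ** 2 + (sy - ry) ** 2, rid) for rid, (rx, ry) in R_b.value]
    return sid, heapq.nsmallest(k, dists)      # k nearest R ids with squared distances

for sid, neighbors in S.map(knn_of).collect():
    print(sid, neighbors)
spark.stop()
```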

k-Nearest Neighbor Learning with Varying Norms (놈(Norm)에 따른 k-최근접 이웃 학습의 성능 변화)

  • Kim, Doo-Hyeok;Kim, Chan-Ju;Hwang, Kyu-Baek
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2008.06c
    • /
    • pp.371-375
    • /
    • 2008
  • k-nearest neighbor (k-NN) learning, an instance-based learning method, is simple and achieves relatively high prediction accuracy, so it is widely applied as a base methodology for classification and regression problems. Algorithms for k-NN learning typically compute distances between training examples using the Euclidean distance, i.e., the 2-norm. In this paper, we study how using the p-norm, a generalization of the Euclidean distance, affects the performance of k-NN learning. Specifically, we empirically investigated the generalization performance of various p-norms on synthetic data, a number of machine learning benchmark problems, and real data. Experimental results show that when the data contain much noise or the problem is hard, a smaller value of p can improve performance.
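The experiment described above can be approximated with a few lines of scikit-learn, which exposes the Minkowski p-norm as a parameter of its k-NN classifier. The dataset, k, and the p values below are arbitrary stand-ins for the paper's benchmarks.

```python
# Comparing k-NN test accuracy under different Minkowski p-norms.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for p in (1, 2, 3):                      # 1-norm, Euclidean 2-norm, 3-norm
    clf = KNeighborsClassifier(n_neighbors=5, metric="minkowski", p=p)
    clf.fit(X_tr, y_tr)
    print(f"p={p}: test accuracy = {clf.score(X_te, y_te):.3f}")
```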


Prototype based Classification by Generating Multidimensional Spheres per Class Area (클래스 영역의 다차원 구 생성에 의한 프로토타입 기반 분류)

  • Shim, Seyong;Hwang, Doosung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.20 no.2
    • /
    • pp.21-28
    • /
    • 2015
  • In this paper, we propose prototype-based classification learning using the nearest-neighbor rule. The nearest-neighbor rule is applied to segment the class region of all the training data into spheres, each containing data of a single class. Prototypes are the centers of the spheres, and each radius is computed as the midpoint between the distance to the farthest point of the same class and the distance to the nearest point of another class. We then transform the prototype selection problem into a set covering problem in order to determine the smallest set of prototypes that covers all the training data. The proposed prototype selection method is a greedy algorithm applied to the training data of each class; its complexity is low and it is well suited to parallel implementation. The prototype-based classifier keeps only the set of prototypes and predicts the class of test data by the nearest-neighbor rule. In experiments, the generalization performance of our prototype classifier is superior to that of the nearest neighbor classifier, the Bayes classifier, and another prototype classifier.
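The following is a hedged sketch of the idea described above: one candidate sphere per training point, a radius taken as the midpoint between the farthest same-class point it can absorb and the nearest other-class point, and a greedy set cover over the spheres. Details such as tie-breaking and per-class processing differ from the paper's algorithm, and the toy data are invented.

```python
# Sphere-per-point prototypes plus greedy set cover, then nearest-prototype prediction.
import numpy as np

def build_prototypes(X, y):
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)   # pairwise distances
    spheres = []
    for i in range(n):
        r_enemy = D[i][y != y[i]].min()                          # nearest other-class point
        same = D[i][y == y[i]]
        r_friend = same[same < r_enemy].max()                    # farthest same-class point inside
        radius = (r_friend + r_enemy) / 2.0                      # midpoint rule
        covered = frozenset(np.where((y == y[i]) & (D[i] <= radius))[0])
        spheres.append((i, radius, covered))
    # Greedy set cover: repeatedly take the sphere covering the most uncovered points.
    uncovered, chosen = set(range(n)), []
    while uncovered:
        i, radius, covered = max(spheres, key=lambda s: len(s[2] & uncovered))
        chosen.append((X[i], radius, y[i]))
        uncovered -= covered
    return chosen

def predict(prototypes, x):
    # nearest-prototype rule on the sphere centers
    centers = np.array([c for c, _, _ in prototypes])
    labels = np.array([l for _, _, l in prototypes])
    return labels[np.argmin(np.linalg.norm(centers - x, axis=1))]

X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [1.1, 0.9]])
y = np.array([0, 0, 1, 1])
protos = build_prototypes(X, y)
print(len(protos), "prototypes; predicts", predict(protos, np.array([0.1, 0.0])))
```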

An Efficient Method for Finding the Neighbor MBRs on Voronoi Diagram (보르노이 다이어그램 상의 효율적인 이웃 MBR 연산 기법)

  • Park, Yonghun;Lee, Jinju;Lim, Jongtae;Choi, Kilseong;Yoo, Jaesoo
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2010.05a
    • /
    • pp.13-15
    • /
    • 2010
  • The R-tree structure, which provides excellent search performance, is widely used to index the spatial data of moving objects. Recently, the ISR-tree and ISG-index were proposed, which process queries by linking adjacent leaf nodes of an R-tree in the manner of a B+-tree. These techniques use a Voronoi diagram to determine adjacent neighbor nodes among MBRs (Minimum Bounding Rectangles). However, computing a Voronoi diagram over MBRs requires a very complex computation process. In this paper, we propose a technique for computing adjacent neighbor MBRs using a Voronoi diagram computed over points. We found that when a Voronoi diagram is built from the corner points of each MBR, the Voronoi cells of the corner points of adjacent MBRs are always adjacent, and the proposed technique exploits this property. To demonstrate its superiority, we evaluated the performance of the proposed technique against the existing technique.
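As a rough illustration of the corner-point idea above, the sketch below builds a Voronoi diagram over the corner points of a few toy MBRs with SciPy and reports two MBRs as neighbors whenever corner points owned by different MBRs have adjacent Voronoi cells; the rectangles are invented and the procedure is simplified relative to the paper.

```python
# Point-based Voronoi diagram over MBR corner points to derive MBR adjacency.
import numpy as np
from scipy.spatial import Voronoi

# Each MBR as (xmin, ymin, xmax, ymax); values are invented.
mbrs = [(0, 0, 2, 2), (3, 0, 5, 2), (0, 3, 2, 5)]

points, owner = [], []
for i, (x1, y1, x2, y2) in enumerate(mbrs):
    for p in [(x1, y1), (x1, y2), (x2, y1), (x2, y2)]:   # four corner points
        points.append(p)
        owner.append(i)

vor = Voronoi(np.array(points, dtype=float))
neighbors = set()
for a, b in vor.ridge_points:                 # cells of points a and b share a Voronoi edge
    if owner[a] != owner[b]:
        neighbors.add(tuple(sorted((owner[a], owner[b]))))

print("neighboring MBR pairs:", sorted(neighbors))
```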


k-Nearest Neighbor Query Processing using Approximate Indexing in Road Network Databases (도로 네트워크 데이타베이스에서 근사 색인을 이용한 k-최근접 질의 처리)

  • Lee, Sang-Chul;Kim, Sang-Wook
    • Journal of KIISE:Databases
    • /
    • v.35 no.5
    • /
    • pp.447-458
    • /
    • 2008
  • In this paper, we address an efficient processing scheme for k-nearest neighbor queries that retrieve k static objects in road network databases. Existing methods cannot expect a query processing speed-up from index structures in road network databases, because it is impossible to build an index on the network distance, which cannot satisfy the triangle inequality requirement essential for index creation. Thus, these previous methods suffer from serious performance degradation in query processing. Another method using pre-computed network distances suffers from a serious storage overhead for maintaining a huge number of pre-computed network distances. To solve these performance and storage problems at the same time, this paper proposes a novel approach that creates an index for moving objects by approximating their network distances and efficiently processes k-nearest neighbor queries by means of this approximate index. For this approach, we propose a systematic way of mapping each moving object on a road network into a corresponding absolute position in an m-dimensional space. To meet the triangle inequality, this paper proposes a new notion of average network distance and uses FastMap to map moving objects to their corresponding points in the m-dimensional space. We then present an approximate indexing algorithm that builds an R*-tree, a multidimensional index, on the m-dimensional points of the moving objects. The proposed scheme provides a query processing algorithm that efficiently evaluates k-nearest neighbor queries by finding the k nearest points (i.e., the k nearest moving objects) in the m-dimensional index. Finally, extensive experiments, especially on real-life road network databases, verify the performance enhancement of the proposed approach.
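A heavily simplified illustration of the pipeline above: a single FastMap coordinate is computed from a toy network-distance matrix, and a KD-tree stands in for the R*-tree when answering an approximate k-NN query. The m-dimensional embedding, the average network distance, and the actual index structure of the paper are not reproduced here.

```python
# One FastMap coordinate from a toy network-distance matrix, then an
# approximate k-NN query over the embedded points.
import numpy as np
from scipy.spatial import cKDTree

# Symmetric toy "network distance" matrix among 5 objects (invented).
D = np.array([[0, 2, 5, 9, 4],
              [2, 0, 4, 8, 3],
              [5, 4, 0, 5, 6],
              [9, 8, 5, 0, 7],
              [4, 3, 6, 7, 0]], dtype=float)

a, b = 0, 3                                   # pivot objects (a far-apart pair)
d_ab = D[a, b]
coords = (D[a] ** 2 + d_ab ** 2 - D[b] ** 2) / (2 * d_ab)   # FastMap projection

tree = cKDTree(coords.reshape(-1, 1))         # stand-in for the R*-tree index
query_obj = 1
_, idx = tree.query([coords[query_obj]], k=3) # the object itself plus 2 candidates
print("approximate 3-NN candidates of object", query_obj, ":", idx.tolist())
```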

Performance Comparison of Machine Learning Algorithms for Malware Detection (악성코드 탐지를 위한 기계학습 알고리즘의 성능 비교)

  • Lee, Hyun-Jong;Heo, Jae Hyeok;Hwang, Doosung
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2018.01a
    • /
    • pp.143-146
    • /
    • 2018
  • Signature-based malware detection relies on the unique hash value of a malicious file or on patterned attack rules, so it is weak at detecting modified malware. Malware detection using machine learning is recognized as a way to overcome this weakness. This paper extracts n-gram and API features through static analysis, builds feature vectors from them, and compares the generalization performance of XGBoost, the k-nearest neighbor algorithm, support vector machines, a neural network algorithm, and a deep learning algorithm. In the experiments, XGBoost showed the best generalization performance at 99%, and the k-nearest neighbor algorithm required the least training time. In terms of both generalization performance and time complexity, XGBoost outperformed the compared algorithms.
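The comparison framework can be sketched as below, with synthetic feature vectors standing in for the n-gram/API features and scikit-learn's gradient boosting standing in for XGBoost; numbers produced by this toy setup say nothing about the paper's reported results.

```python
# Toy classifier comparison on synthetic stand-in features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=600, n_features=40, n_informative=10,
                           random_state=0)          # stand-in for malware feature vectors
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "gradient boosting (XGBoost stand-in)": GradientBoostingClassifier(random_state=0),
    "k-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
    "support vector machine": SVC(),
    "neural network": MLPClassifier(max_iter=1000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {model.score(X_te, y_te):.3f}")
```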


A Hashing Method Using PCA-based Clustering (PCA 기반 군집화를 이용한 해슁 기법)

  • Park, Cheong Hee
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.3 no.6
    • /
    • pp.215-218
    • /
    • 2014
  • In hashing-based methods for approximate nearest neighbor (ANN) search, data points are mapped to k-bit binary codes and nearest neighbors are searched in the binary embedding space. In this paper, we present a hashing method that uses a PCA-based clustering method, Principal Direction Divisive Partitioning (PDDP). PDDP is a clustering method that repeatedly partitions the cluster with the largest variance into two clusters using the first principal direction. The proposed hashing method utilizes the first principal direction as the projection direction for binary coding. Experimental results demonstrate that the proposed method is competitive with other hashing methods.
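A minimal sketch of the binary-coding step described above: one PDDP-style split that yields one hash bit per point by thresholding the projection onto the first principal direction. The recursion over the largest-variance cluster that produces full k-bit codes is omitted, and the data are synthetic.

```python
# One PDDP-style split: first principal direction via SVD, bit = side of the split.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(4, 1, (50, 8))])  # toy data

centered = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
direction = Vt[0]                          # first principal direction
projection = centered @ direction
bits = (projection > 0).astype(int)        # 1-bit code: which side of the split

# Points with the same bit share a bucket; nearest neighbors are then searched
# only inside the query's bucket (or nearby buckets in Hamming space).
print("bucket sizes:", np.bincount(bits))
```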