• Title/Summary/Keyword: Data Weight (데이타 가중치)


Temporal Ranked Query Processing (시간 순위 질의의 처리)

  • 권준호;송병호;이석호
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2002.04b
    • /
    • pp.214-216
    • /
    • 2002
  • A temporal database, which records events that change over time, stores a time attribute together with each event. Recently, there has been active research on extending existing operators, such as aggregate functions, so that they can be processed efficiently in temporal databases, taking the properties of temporal data into account. Users often run ranked queries, which place weights on several attributes and return results ordered by the weighted scores. The existing notion of a ranked query cannot be applied as-is to a temporal database. This paper therefore defines the temporal ranked query, which extends the conventional ranked query with the concept of time, and presents a method for processing it.

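As a rough illustration of the idea in the abstract above (a sketch under assumed definitions, not the authors' algorithm), the Python fragment below ranks the tuples of a temporal relation by a weighted attribute score, restricted to tuples whose valid-time interval overlaps the query window. The `Record` type, the half-open intervals, and the weighted-sum scoring are all assumptions.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Record:
    attrs: Dict[str, float]   # attribute name -> value
    valid: Tuple[int, int]    # [start, end) valid-time interval

def temporal_ranked_query(records: List[Record],
                          weights: Dict[str, float],
                          window: Tuple[int, int],
                          k: int) -> List[Record]:
    """Return the top-k records whose valid time overlaps `window`,
    ranked by the weighted sum of their attribute values."""
    lo, hi = window
    # Keep only records whose valid-time interval intersects the query window.
    live = [r for r in records if r.valid[0] < hi and r.valid[1] > lo]
    # Score each surviving record by the user-supplied attribute weights.
    def score(r: Record) -> float:
        return sum(w * r.attrs.get(a, 0.0) for a, w in weights.items())
    return sorted(live, key=score, reverse=True)[:k]
```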

A Feature Re-weighting Approach for the Non-Metric Feature Space (가변적인 길이의 특성 정보를 지원하는 특성 가중치 조정 기법)

  • Lee Robert-Samuel;Kim Sang-Hee;Park Ho-Hyun;Lee Seok-Lyong;Chung Chin-Wan
    • Journal of KIISE:Databases
    • /
    • v.33 no.4
    • /
    • pp.372-383
    • /
    • 2006
  • Among the approaches to image database management, content-based image retrieval (CBIR) is viewed as having the best support for effective searching and browsing of large digital image libraries. Typical CBIR systems allow a user to provide a query image, from which low-level features are extracted and used to find 'similar' images in a database. However, there exists a semantic gap between human visual perception and low-level representations. An effective methodology for overcoming this semantic gap involves relevance feedback to perform feature re-weighting. Current approaches to feature re-weighting require the number of components for a feature representation to be the same for every image in consideration. Following this assumption, they map each component to an axis in the n-dimensional space, which we call the metric space; likewise the feature representation is stored in a fixed-length vector. However, with the emergence of features that do not have a fixed number of components in their representation, existing feature re-weighting approaches are invalidated. In this paper we propose a feature re-weighting technique that supports features regardless of whether or not they can be mapped into a metric space. Our approach analyses the feature distances calculated between the query image and the images in the database. Two-sided confidence intervals are used with the distances to obtain the information for feature re-weighting. There is no restriction on how the distances are calculated for each feature. This provides freedom in how feature representations are structured, i.e. there is no requirement for features to be represented in fixed-length vectors or a metric space. Our experimental results show the effectiveness of our approach, and a comparison with other work shows that it outperforms previous approaches.
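
One plausible reading of the confidence-interval step, sketched in Python: for each feature, the distances between the query and the user-marked relevant images are summarized by a two-sided confidence interval, and a feature earns a large weight when that interval is tight and low (the relevant images agree). The inverse-upper-bound weighting and all names below are hypothetical, not the paper's exact formulation; note that each `distance_fns[i]` may compute its distance however it likes, so no fixed-length vector or metric space is required.

```python
import math
from typing import Callable, List, Sequence

def reweight_features(query, relevant: Sequence,
                      distance_fns: List[Callable],
                      z: float = 1.96) -> List[float]:
    """CI-based re-weighting: features whose distances to the relevant
    images are small and consistent get larger weights."""
    weights = []
    for dist in distance_fns:
        d = [dist(query, img) for img in relevant]
        n = len(d)
        mean = sum(d) / n
        sd = math.sqrt(sum((x - mean) ** 2 for x in d) / max(n - 1, 1))
        upper = mean + z * sd / math.sqrt(n)  # upper end of the two-sided CI
        weights.append(1.0 / (upper + 1e-9))  # tight, small CI -> big weight
    total = sum(weights)
    return [w / total for w in weights]       # normalize to sum to 1
```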

Feature Weighting in Projected Clustering for High Dimensional Data (고차원 데이타에 대한 투영 클러스터링에서 특성 가중치 부여)

  • Park, Jong-Soo
    • Journal of KIISE:Databases
    • /
    • v.32 no.3
    • /
    • pp.228-242
    • /
    • 2005
  • Projected clustering seeks to find clusters in different subspaces within a high-dimensional dataset. We propose an algorithm to discover near-optimal projected clusters without user-specified parameters such as the number of output clusters and the average cardinality of the subspaces of projected clusters. The objective function of the algorithm computes the projected energy, the quality, and the number of outliers in each step of clustering. In order to minimize the projected energy and maximize the quality of the clustering, we begin by finding the best subspace of each cluster, based on the density of the input points, by comparing per-dimension standard deviations against those of the full dimensionality. A weighting factor for each dimension of the subspace is used to get rid of probable error in measuring projected distances. Our extensive experiments show that the algorithm discovers projected clusters accurately and scales to large volumes of data.
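
A minimal sketch of per-dimension weighting for one projected cluster, under the assumption that a subspace dimension is one where the cluster's spread is much smaller than the full dataset's; the `ratio` cutoff and the inverse-spread weights are illustrative, not the paper's objective function.

```python
import numpy as np

def subspace_weights(cluster: np.ndarray, full: np.ndarray,
                     ratio: float = 0.5) -> np.ndarray:
    """Hypothetical per-dimension weights for one projected cluster.
    A dimension joins the cluster's subspace when the cluster is much
    more concentrated there than the dataset as a whole; its weight
    then grows with that concentration."""
    sd_cluster = cluster.std(axis=0) + 1e-12
    sd_full = full.std(axis=0) + 1e-12
    rel = sd_cluster / sd_full           # < 1 means a concentrated dimension
    mask = rel < ratio                   # dimensions kept in the subspace
    w = np.where(mask, 1.0 / rel, 0.0)   # tighter dimension -> larger weight
    return w / w.sum() if w.sum() > 0 else w

def projected_distance(x: np.ndarray, centroid: np.ndarray,
                       w: np.ndarray) -> float:
    """Distance measured only along the weighted subspace dimensions."""
    return float(np.sqrt(np.sum(w * (x - centroid) ** 2)))
```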

Data Weight based Scheduling Scheme for Fair data collection in Sensor Networks with Mobile Sink (모바일 싱크 기반 무선 센서 네트워크에서 균등한 데이타 수집을 위한 데이타 가중치 기반 스케줄링 기법)

  • Jo, Young-Tae;Park, Chong-Myung;Lee, Joa-Hyoung;Jung, In-Bum
    • Journal of KIISE:Information Networking
    • /
    • v.35 no.1
    • /
    • pp.21-33
    • /
    • 2008
  • The wireless sensor nodes near a fixed sink node suffer from quickly exhausted battery energy. To address this problem, a mobile sink node has been applied to distribute the energy consumption across all wireless sensor nodes. However, since the mobile sink node moves, a data collection scheduling scheme is necessary for the sink node to receive data from all sensor nodes as fairly as possible. Many application fields of wireless sensor networks require real-time processing; if uneven data collection occurs in the network, real-time handling of urgent events cannot be guaranteed. In this paper, a new method is proposed to support fair data collection across all sensor nodes. The proposed method performs scheduling based on the time the sink node remains within communication range of each node and the amount of data already transferred. The proposed method and existing data collection scheduling schemes are evaluated in a wireless sensor network with a mobile sink node, and the results show that the proposed method provides the best fairness among all data collection schemes.
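
A hedged sketch of such a scheduler: among the nodes currently in the mobile sink's range, serve the node with the largest "deficit per remaining second", i.e. the one that has delivered the least data relative to a fairness target and whose residence time is about to expire. The priority formula and names are assumptions; the paper's data-weight definition may differ.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Node:
    node_id: int
    residence_left: float  # seconds the sink stays within this node's range
    collected: float       # bytes already received from this node

def next_node(in_range: List[Node], target: float) -> Node:
    """Pick the node to serve next. `target` is the per-node amount we
    would like to collect during this pass; nodes far below the target
    and about to leave the sink's range get the highest priority."""
    def priority(n: Node) -> float:
        deficit = max(target - n.collected, 0.0)    # how unfairly served so far
        return deficit / (n.residence_left + 1e-9)  # urgency per remaining second
    return max(in_range, key=priority)
```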

Co-registration of PET-CT Brain Images using a Gaussian Weighted Distance Map (가우시안 가중치 거리지도를 이용한 PET-CT 뇌 영상정합)

  • Lee, Ho;Hong, Helen;Shin, Yeong-Gil
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.7
    • /
    • pp.612-624
    • /
    • 2005
  • In this paper, we propose a surface-based registration using a Gaussian weighted distance map for PET-CT brain image fusion. Our method is composed of three main steps: extraction of feature points, generation of a Gaussian weighted distance map, and measurement of weight-based similarity. First, we segment the head using inverse region growing and remove noise attached to the head using region growing-based labeling, in the PET and CT images respectively; we then extract the feature points of the head using a sharpening filter. Second, a Gaussian weighted distance map is generated from the feature points in the CT images, which lets the feature points converge robustly to the optimal location even under a large geometrical displacement. Third, weight-based cross-correlation searches for the optimal location using the Gaussian weighted distance map of the CT images together with the feature points extracted from the PET images. In our experiments, we generate a software phantom dataset for evaluating the accuracy and robustness of our method, and use a clinical dataset for computation time and visual inspection. The accuracy test evaluates the root-mean-square error against arbitrarily transformed software phantom datasets. The robustness test checks whether the weight-based cross-correlation reaches its maximum at the optimal location in software phantom datasets with large geometrical displacement and noise. Experimental results show that our method achieves higher accuracy and more robust convergence than conventional surface-based registration.
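
The distance-map step can be sketched as follows, assuming the CT feature points are given as a binary voxel mask: a Euclidean distance transform (here `scipy.ndimage.distance_transform_edt`) followed by Gaussian reshaping, and a similarity score that averages the map at the transformed PET feature points. The `sigma` value and the index handling are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def gaussian_weighted_distance_map(ct_feature_mask: np.ndarray,
                                   sigma: float = 10.0) -> np.ndarray:
    """Distance map from the CT feature points, reshaped by a Gaussian so
    the similarity surface stays smooth and wide enough to pull a badly
    misaligned PET surface toward the optimum."""
    # Euclidean distance of every voxel to the nearest CT feature voxel
    # (features themselves get distance 0).
    d = distance_transform_edt(~ct_feature_mask.astype(bool))
    return np.exp(-(d ** 2) / (2.0 * sigma ** 2))  # 1 at features, decays with d

def similarity(weight_map: np.ndarray, pet_points: np.ndarray) -> float:
    """Weight-based score: mean map value at the transformed PET feature
    points (assumed to lie inside the volume); maximal when PET features
    sit exactly on CT features."""
    idx = np.round(pet_points).astype(int).T  # (3, N) voxel indices
    return float(weight_map[tuple(idx)].mean())
```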

Selective Rendering of Specific Volume using a Distance Transform and Data Intermixing Method for Multiple Volumes (거리변환을 통한 특정 볼륨의 선택적 렌더링과 다중 볼륨을 위한 데이타 혼합방법)

  • Hong, Helen;Kim, Myoung-Hee
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.27 no.7
    • /
    • pp.629-638
    • /
    • 2000
  • The main difference between mono-volume rendering and multi-volume rendering is data intermixing. In this paper, we first propose a selective rendering method for fast visualization of a specific volume according to its surface level, and then present a data intermixing method for multiple volumes. The selective rendering method generates a distance-transformed volume, using a distance transform to determine the minimum distance to the nearest interesting part, and then renders it. The data intermixing method combines several volumes using three methods: intensity-weighted intermixing, opacity-weighted intermixing, and opacity-weighted intermixing with depth information, and then renders the result. We show results of selective rendering of the left and right ventricles generated from EBCT cardiac images, and of data intermixing combining the original volume with the left or right ventricular volume. Our method offers a visualization technique for a specific volume according to its surface level, an acceleration technique using the distance-transformed volume, and effective visual output relating multiple images in three-dimensional space through the three intermixing methods.

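Plausible per-sample formulas for the three intermixing methods named above, written as a sketch; the paper's exact definitions may differ.

```python
def intensity_weighted_mix(v1: float, v2: float,
                           w1: float = 0.5, w2: float = 0.5) -> float:
    """Intensity-weighted intermixing: blend sample values before shading."""
    return w1 * v1 + w2 * v2

def opacity_weighted_mix(c1: float, a1: float,
                         c2: float, a2: float) -> float:
    """Opacity-weighted intermixing: the more opaque volume dominates.
    c* are colors (or intensities), a* are opacities at the same sample."""
    a = a1 + a2 + 1e-9
    return (a1 * c1 + a2 * c2) / a

def opacity_depth_mix(c1: float, a1: float, z1: float,
                      c2: float, a2: float, z2: float) -> float:
    """Opacity weighting with depth information: of two samples along a
    ray, the nearer one is composited in front (simple over-operator)."""
    front, back = ((c1, a1), (c2, a2)) if z1 <= z2 else ((c2, a2), (c1, a1))
    (cf, af), (cb, ab) = front, back
    return af * cf + (1.0 - af) * ab * cb
```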

Face Detection Based on Incremental Learning from Very Large Size Training Data (대용량 훈련 데이타의 점진적 학습에 기반한 얼굴 검출 방법)

  • 박지영;이준호
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.7
    • /
    • pp.949-958
    • /
    • 2004
  • Face detection using a boosting-based algorithm requires a very large set of face and non-face data. In addition, the fact that additional training data are constantly needed for better detection rates demands an efficient incremental learning algorithm. In the design of incrementally learned classifiers, the final classifier should represent the characteristics of the entire training dataset. Conventional methods have a critical problem in combining intermediate classifiers: weight updates depend solely on the performance on individual datasets. In this paper, for the purpose of application to face detection, we present a new method to combine an intermediate classifier with previously acquired ones in an optimal manner. Our algorithm creates a validation set by incrementally adding sampled instances from each dataset to represent the entire training data. The weight of each classifier is determined based on its performance on the validation set. This approach guarantees that the resulting final classifier is learned from the entire training dataset. Experimental results show that the classifier trained by the proposed algorithm performs better than those trained by AdaBoost, which operates in batch mode, as well as by Learn++.
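
A sketch of the combination step under stated assumptions: each intermediate classifier is scored on a validation set drawn from all datasets seen so far, and contributes a log-odds weight to the final vote. The log-odds form is borrowed from boosting and is an assumption, not necessarily the paper's formula.

```python
import math
from typing import Callable, List, Sequence, Tuple

def combine_classifiers(classifiers: List[Callable],
                        validation: Sequence[Tuple[object, int]]) -> List[float]:
    """Weight each intermediate classifier by its accuracy on a validation
    set sampled incrementally from *all* datasets, so the final ensemble
    reflects the entire training data rather than only the last batch."""
    weights = []
    for clf in classifiers:
        acc = sum(clf(x) == y for x, y in validation) / len(validation)
        acc = min(max(acc, 1e-6), 1 - 1e-6)        # keep log() finite
        weights.append(math.log(acc / (1 - acc)))  # better classifier -> bigger say
    return weights

def predict(classifiers: List[Callable], weights: List[float], x) -> int:
    """Weighted vote over {0: non-face, 1: face}."""
    votes = {0: 0.0, 1: 0.0}
    for clf, w in zip(classifiers, weights):
        votes[clf(x)] += w
    return max(votes, key=votes.get)
```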

Calculating Attribute Weights in K-Nearest Neighbor Algorithms using Information Theory (정보이론을 이용한 K-최근접 이웃 알고리즘에서의 속성 가중치 계산)

  • Lee Chang-Hwan
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.9
    • /
    • pp.920-926
    • /
    • 2005
  • Nearest neighbor algorithms classify an unseen input instance by selecting similar cases and using the discovered membership to make predictions about the unknown features of the input instance. The usefulness of nearest neighbor algorithms has been demonstrated sufficiently in many real-world domains. In nearest neighbor algorithms, assigning proper weights to the attributes is an important issue. Therefore, in this paper, we propose a new method which automatically assigns to each attribute a weight reflecting its importance with respect to the target attribute. The method has been implemented as a computer program, and its effectiveness has been tested on a number of publicly available machine learning databases.
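
A minimal sketch of one information-theoretic weighting: each attribute's weight is its mutual information with the class label, and the weights then scale a per-attribute mismatch distance in k-NN. The paper's exact measure may differ; the code assumes discrete attribute values.

```python
import math
from collections import Counter
from typing import List, Sequence

def mutual_information(xs: Sequence, ys: Sequence) -> float:
    """I(X;Y) for discrete attribute values xs against class labels ys."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def attribute_weights(data: List[Sequence], labels: Sequence) -> List[float]:
    """One information-theoretic choice of weights: each attribute's
    mutual information with the target class."""
    cols = list(zip(*data))
    return [mutual_information(col, labels) for col in cols]

def weighted_distance(a: Sequence, b: Sequence, w: Sequence) -> float:
    """Distance for the weighted k-NN: 0/1 mismatch per attribute,
    scaled by that attribute's weight."""
    return sum(wi * (ai != bi) for ai, bi, wi in zip(a, b, w))
```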

An Information-theoretic Approach for Value-Based Weighting in Naive Bayesian Learning (나이브 베이시안 학습에서 정보이론 기반의 속성값 가중치 계산방법)

  • Lee, Chang-Hwan
    • Journal of KIISE:Databases
    • /
    • v.37 no.6
    • /
    • pp.285-291
    • /
    • 2010
  • In this paper, we propose a new paradigm of weighting methods for naive Bayesian learning. We propose a more fine-grained weighting method, called the value weighting method, in the context of naive Bayesian learning. While current weighting methods assign a weight to each attribute, we assign a weight to each attribute value. We develop new methods, using the Kullback-Leibler measure, for both value weighting and feature weighting in the context of naive Bayesian learning. The performance of the proposed methods has been compared with the attribute weighting method and with standard naive Bayesian learning. The proposed method shows better performance in most cases.
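
The value-weighting idea admits a direct sketch: weight the attribute value a=v by the Kullback-Leibler divergence between the class distribution conditioned on observing v and the class prior, so values that shift the class distribution strongly count more. The formula below is an assumption consistent with the abstract, not a transcription of the paper.

```python
import math
from collections import Counter, defaultdict
from typing import Dict, List, Sequence

def value_weights(data: List[Sequence], labels: Sequence) -> Dict:
    """w(a=v) = KL( P(C | a=v) || P(C) ): an attribute *value* matters
    when observing it shifts the class distribution away from the prior."""
    n = len(labels)
    prior = {c: k / n for c, k in Counter(labels).items()}
    w: Dict[int, Dict] = defaultdict(dict)
    for j in range(len(data[0])):                  # each attribute
        for v in set(row[j] for row in data):      # each of its values
            cond = Counter(y for row, y in zip(data, labels) if row[j] == v)
            m = sum(cond.values())
            w[j][v] = sum((k / m) * math.log((k / m) / prior[c])
                          for c, k in cond.items())
    return w

# One way to use the weights in scoring (also an assumption):
#   score(c | x) = log P(c) + sum_j w[j][x_j] * log P(x_j | c)
```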

Improving Correctness in the Satellite Remote Sensing Data Analysis -Laying Stress on the Application of Bayesian MLC in the Classification Stage- (인공위성 원격탐사 데이타의 분석 정확도 향상에 관한 연구 -분류과정에서의 Bayesian MLC 적용을 중심으로-)

  • 안철호;김용일
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.9 no.2
    • /
    • pp.81-91
    • /
    • 1991
  • This thesis aims to improve the analysis accuracy of remotely sensed digital imagery; the improvement is achieved by considering the weight factors (a priori probabilities) of the Bayesian MLC in the classification stage. Concretely, Bayesian decision theory is studied from a remote sensing point of view, and the equations in n-dimensional form are derived from normal probability density functions. The number of misclassified pixels is extracted from the probability function data using thresholding, and this is the basis for evaluating classification accuracy. The results indicate that an accuracy improvement of 5.21% was achieved. The data used in this study are LANDSAT TM (1985.10.21; 116-34), and the study area lies within the administrative boundary of Seoul.

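The MLC discriminant with class priors (the "weight factors") follows from the multivariate normal density; a generic numpy sketch, not the author's implementation.

```python
import numpy as np

def mlc_classify(x: np.ndarray, means, covs, priors) -> int:
    """Bayesian maximum-likelihood classifier with class priors.
    Discriminant for class i with mean m_i, covariance S_i, prior P(w_i):
        g_i(x) = ln P(w_i) - 0.5*ln|S_i| - 0.5*(x - m_i)^T S_i^{-1} (x - m_i)
    The pixel x is assigned to the class with the largest g_i."""
    scores = []
    for m, S, p in zip(means, covs, priors):
        d = x - m
        g = (np.log(p)
             - 0.5 * np.log(np.linalg.det(S))
             - 0.5 * d @ np.linalg.inv(S) @ d)
        scores.append(g)
    return int(np.argmax(scores))
```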