• Title/Summary/Keyword: 최대 부분 집합 (maximum subset)

Search Results: 30

A New Similarity Measure based on RMF and Its Application to Linguistic Approximation (상대적 소수 함수에 기반을 둔 새로운 유사성 측도와 언어 근사에의 응용)

  • Choe, Dae-Yeong
    • The KIPS Transactions:PartB
    • /
    • v.8B no.5
    • /
    • pp.463-468
    • /
    • 2001
  • We propose a new similarity measure based on the relative membership function (RMF). In this paper, the RMF is introduced to represent the relativity between fuzzy subsets easily. Since the shape of the RMF is determined by the values of its parameters, the relativity between fuzzy subsets can be represented simply by adjusting those parameter values. Hence, we can easily reflect differences among individuals or cultures when representing subjectivity with fuzzy subsets. In this case, the parameters may be regarded as feature points that determine the structure of a fuzzy subset. Consequently, the degree of similarity between fuzzy subsets can be computed quickly from the parameters of the RMF; we use the Euclidean distance between parameter vectors for this purpose. Finally, we present a new linguistic approximation method as an application of the proposed similarity measure and give a numerical example.
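A minimal sketch of the idea as described in the abstract: each fuzzy subset is reduced to the parameter vector of its RMF, and similarity is derived from the Euclidean distance between those vectors. The four-parameter trapezoid-style representation, the distance-to-similarity mapping, and the label set are illustrative assumptions, not the paper's exact formulation.

```python
import math

def rmf_similarity(params_a, params_b):
    """Similarity between two fuzzy subsets represented by RMF parameter
    vectors (e.g. the feature points of a trapezoid-shaped membership function).
    The Euclidean distance is mapped into a similarity score in (0, 1]."""
    if len(params_a) != len(params_b):
        raise ValueError("parameter vectors must have equal length")
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(params_a, params_b)))
    return 1.0 / (1.0 + dist)          # smaller distance -> higher similarity

# Linguistic approximation: pick the label whose RMF is closest to a given
# fuzzy subset (labels and parameter values are made up for illustration).
labels = {"low": (0.0, 0.0, 0.2, 0.4),
          "medium": (0.3, 0.45, 0.55, 0.7),
          "high": (0.6, 0.8, 1.0, 1.0)}
observed = (0.35, 0.5, 0.6, 0.75)
best = max(labels, key=lambda name: rmf_similarity(labels[name], observed))
print(best)  # prints "medium" for these toy parameters
```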


Algorithm Based on Cardinality Number of Exact Cover Problem (완전 피복 문제의 원소 수 기반 알고리즘)

  • Sang-Un Lee
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.2
    • /
    • pp.185-191
    • /
    • 2023
  • For the exact cover problem, which remains NP-complete with no known polynomial-time algorithm, this paper proposes a linear-time algorithm that yields an optimal solution. The proposed algorithm exploits the defining constraint of exact cover, namely that no element may be included in more than one covering set. To satisfy this constraint, the algorithm first selects a subset with the minimum cardinality and deletes the remaining subsets that share an element with the selected subset. This process is repeated on the remaining subsets until a solution is obtained. If no solution is reached this way, the algorithm instead selects subsets with the maximum cardinality and repeats the same process. The proposed algorithm not only obtains the optimal solution with ease but also demonstrates wide applicability on N-queens problems, thereby challenging the NP-completeness of the exact cover problem.
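A minimal sketch of the minimum-cardinality selection rule described above, assuming the input is a family of sets over a finite universe. It is a greedy heuristic only; the maximum-cardinality fallback and the other details of the paper's algorithm are omitted.

```python
def exact_cover_min_cardinality(universe, subsets):
    """Greedy rule from the abstract: repeatedly pick a smallest remaining
    subset, then drop every other subset that overlaps it.
    Returns a list of chosen subsets, or None if the universe is not covered."""
    remaining = [set(s) for s in subsets]
    uncovered = set(universe)
    chosen = []
    while uncovered and remaining:
        pick = min(remaining, key=len)                        # minimum cardinality
        chosen.append(pick)
        uncovered -= pick
        remaining = [s for s in remaining if not (s & pick)]  # keep disjoint sets only
    return chosen if not uncovered else None

# Toy instance over {1..7}; an exact cover here is {1,4}, {2,7}, {3,5,6}.
subsets = [{1, 4, 7}, {1, 4}, {4, 5, 7}, {3, 5, 6}, {2, 3, 6, 7}, {2, 7}]
print(exact_cover_min_cardinality({1, 2, 3, 4, 5, 6, 7}, subsets))
```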

A Study on the Efficiency of Join Operation On Stream Data Using Sliding Windows (스트림 데이터에서 슬라이딩 윈도우를 사용한 조인 연산의 효율에 관한 연구)

  • Yang, Young-Hyoo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.2
    • /
    • pp.149-157
    • /
    • 2012
  • This paper addresses the problem of computing approximate answers to continuous sliding-window joins over data streams when the available memory is insufficient to keep the entire join state. One approximation scenario is to provide a maximum subset of the result, with the objective of losing as few result tuples as possible. An alternative scenario is to provide a random sample of the join result, e.g., when the output of the join is being aggregated. It is shown formally that neither approximation can be addressed effectively for a sliding-window join of arbitrary input streams. Previous work has addressed only the maximum-subset problem and has implicitly used a frequency-based model of stream arrival; the sampling problem under this model is also considered here. More importantly, a broad class of applications is identified for which an age-based model of stream arrival is more appropriate, and both approximation scenarios are addressed under this new model. Finally, for the case of multiple joins executed under an overall memory constraint, an algorithm is provided that allocates memory across the joins so as to optimize a combined measure of approximation over all the scenarios considered.
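A minimal sketch of a sliding-window join with a hard per-stream memory cap, where the eviction rule is what decides which approximation is produced. The drop-oldest eviction shown here is only a placeholder for the max-subset or sampling strategies discussed in the abstract, and the timestamped-tuple representation is an assumption for illustration.

```python
from collections import deque

def sliding_window_join(stream_a, stream_b, window, max_state):
    """Join two (timestamp, key) streams on key, keeping per-stream windows.
    `window` bounds tuple age; `max_state` caps how many tuples each side may
    buffer, so result tuples can be lost when memory is tight."""
    buf_a, buf_b, results = deque(), deque(), []
    merged = sorted([(t, k, 'A') for t, k in stream_a] +
                    [(t, k, 'B') for t, k in stream_b])
    for t, key, side in merged:
        own, other = (buf_a, buf_b) if side == 'A' else (buf_b, buf_a)
        # expire tuples that fell out of the time window
        for buf in (buf_a, buf_b):
            while buf and buf[0][0] < t - window:
                buf.popleft()
        # probe the opposite window for matching keys
        results.extend((key, t, t2) for t2, k2 in other if k2 == key)
        own.append((t, key))
        if len(own) > max_state:       # memory cap: placeholder eviction policy
            own.popleft()
    return results

# Toy usage
a = [(1, 'x'), (2, 'y'), (5, 'x')]
b = [(2, 'x'), (3, 'y'), (6, 'x')]
print(sliding_window_join(a, b, window=3, max_state=2))
```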

A Generalized Subtractive Algorithm for Subset Sum Problem (부분집합 합 문제의 일반화된 감산 알고리즘)

  • Lee, Sang-Un
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.22 no.2
    • /
    • pp.9-14
    • /
    • 2022
  • This paper presents a subset sum problem (SSP) algorithm with a time complexity of O(n log n). SSP instances can be classified as either super-increasing sequences or random sequences, depending on the elements of the set S. An additive algorithm that runs in O(n log n) has already been proposed for and applied to the super-increasing-sequence SSP, but the exhaustive brute-force method with time complexity O(n·2^n) remains the only generally applicable algorithm for the random-sequence SSP, which is therefore considered NP-complete. The proposed subtractive algorithm selects a subset S comprised of values lower than the target value t, sets the subset sum minus the target value as the residual r, and then removes from S the maximum value among those lower than t. When tested on various super-increasing and random-sequence SSPs, the algorithm obtained optimal solutions in fewer iterations than the cardinality of S. It can therefore be used as a general algorithm for the SSP.
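A minimal sketch of one reading of the subtractive idea: start from the values below the target, compute the residual of their sum over the target, and repeatedly discard the largest value that still fits inside the residual. This is an interpretation for illustration, not the paper's exact procedure, and it is a heuristic rather than an exact solver.

```python
def subtractive_subset_sum(values, target):
    """Heuristic sketch: keep values not exceeding the target, then shrink the
    subset by removing elements until its sum matches the target, if possible."""
    subset = sorted(v for v in values if v <= target)   # candidate subset S
    residual = sum(subset) - target                     # how much must be removed
    if residual < 0:
        return None                                     # total is already too small
    while residual > 0:
        removable = [v for v in subset if v <= residual]
        if not removable:
            return None                                 # heuristic fails on this instance
        v = max(removable)                              # largest value that fits the residual
        subset.remove(v)
        residual -= v
    return subset

print(subtractive_subset_sum([3, 34, 4, 12, 5, 2], 9))  # prints [2, 3, 4]
```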

Binary Locally Repairable Codes from Complete Multipartite Graphs (완전다분할그래프 기반 이진 부분접속복구 부호)

  • Kim, Jung-Hyun;Nam, Mi-Young;Song, Hong-Yeop
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.40 no.9
    • /
    • pp.1734-1740
    • /
    • 2015
  • This paper introduces joint locality, a generalized notion of the usual locality in distributed storage systems, and proposes a construction of binary locally repairable codes with joint locality ($r_1$=2, $r_2$=3 or 4). Joint locality is the set of the numbers of nodes required to repair various node-failure patterns. The proposed scheme simplifies the code design problem by utilizing complete multipartite graphs. Moreover, the construction can generate binary locally repairable codes achieving (2,t)-availability for any positive integer t, meaning that each node can be repaired by t disjoint repair sets of cardinality 2. This property is useful for distributed storage systems since it permits parallel access to hot data.
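As a toy illustration of (2,t)-availability only, not the paper's multipartite-graph construction: with three data bits and three pairwise XOR parities, a data bit has two disjoint repair sets of cardinality 2.

```python
# Toy illustration of (2,2)-availability, not the construction in the paper:
# data bits x1, x2, x3 plus pairwise parities p12, p13, p23.
def encode(x1, x2, x3):
    return {"x1": x1, "x2": x2, "x3": x3,
            "p12": x1 ^ x2, "p13": x1 ^ x3, "p23": x2 ^ x3}

# Two disjoint repair sets of cardinality 2 for node x1.
REPAIR_X1 = [("x2", "p12"), ("x3", "p13")]

def repair_x1(nodes, repair_set):
    a, b = repair_set
    return nodes[a] ^ nodes[b]        # XOR of the repair set recovers x1

nodes = encode(1, 0, 1)
assert all(repair_x1(nodes, rs) == 1 for rs in REPAIR_X1)
print("x1 recovered from either disjoint pair")
```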

An Implementation of an Edge-based Algorithm for Separating and Intersecting Spherical Polygons (구 볼록 다각형 들의 분리 및 교차를 위한 간선 기반 알고리즘의 구현)

  • Ha, Jong-Seong;Cheon, Eun-Hong
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.28 no.9
    • /
    • pp.479-490
    • /
    • 2001
  • In this paper, we consider the method of partitioning a sphere into faces with a set of spherical convex polygons $\Gamma = \{P_1, ..., P_n\}$ for determining the maximum or minimum intersection. This problem is closely related to five geometric problems: finding the densest hemisphere containing the maximum subset of $\Gamma$, a great circle separating $\Gamma$, a great circle bisecting $\Gamma$, and a great circle intersecting the minimum or maximum subset of $\Gamma$. In order to efficiently compute the minimum or maximum intersection of spherical polygons, we take the approach of edge-based partition, in which the ownerships of edges rather than faces are manipulated as the sphere is incrementally partitioned by each of the polygons. Finally, by gathering the unordered split edges with the maximum number of ownerships, we approximately obtain the centroids of the solution faces without constructing their boundaries. Our algorithm for finding the maximum intersection is analyzed to have an efficient time complexity O(nv), where n and v, respectively, are the numbers of polygons and all vertices. Furthermore, it is practical from the viewpoint of implementation, since it computes numerical values robustly and deals with all the degenerate cases. Using a similar approach, the boundary of a general intersection can be constructed in O(nv + L log L) time, where L is the output-sensitive number of solution edges.
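Not the edge-based algorithm itself, but a brute-force sketch of the underlying query: a spherical convex polygon is modeled as the intersection of hemispheres given by unit normals, and a candidate point on the sphere is scored by how many polygons contain it. The polygon representation and the candidate set are illustrative assumptions.

```python
import numpy as np

def contains(polygon_normals, p, eps=1e-9):
    """A spherical convex polygon is modeled as the intersection of hemispheres
    {x : n.x >= 0}; point p lies inside iff it satisfies every constraint."""
    return all(np.dot(n, p) >= -eps for n in polygon_normals)

def max_intersection_point(polygons, candidates):
    """Brute force: return the candidate point covered by the most polygons."""
    def coverage(p):
        return sum(contains(poly, p) for poly in polygons)
    return max(candidates, key=coverage), max(coverage(p) for p in candidates)

# Toy data: two caps around the north pole, candidate points on the unit sphere.
north_cap = [np.array([0.0, 0.0, 1.0])]            # hemisphere z >= 0
tilted_cap = [np.array([0.0, 0.5, 0.8660254])]     # hemisphere tilted toward +y
candidates = [np.array(v, dtype=float) / np.linalg.norm(v)
              for v in ([0, 0, 1], [0, 1, 0], [0, -1, 0], [1, 0, 0])]
best, count = max_intersection_point([north_cap, tilted_cap], candidates)
print(best, count)   # the north pole lies in both caps
```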


The Credit Information Feature Selection Method in Default Rate Prediction Model for Individual Businesses (개인사업자 부도율 예측 모델에서 신용정보 특성 선택 방법)

  • Hong, Dongsuk;Baek, Hanjong;Shin, Hyunjoon
    • Journal of the Korea Society for Simulation
    • /
    • v.30 no.1
    • /
    • pp.75-85
    • /
    • 2021
  • In this paper, we present a deep neural network-based prediction model that processes and analyzes both the corporate credit and the personal credit information of individual business owners, as a new method to predict the default rate of individual businesses more accurately. In modeling research across various fields, feature selection techniques have been actively studied as a way to improve performance, especially for predictive models with many features. In this paper, after statistical verification of the macroeconomic indicators (macro variables) and credit information (micro variables) used as input variables of the default rate prediction model, the final feature set that improves prediction performance is identified through the proposed credit information feature selection method. The proposed method is an iterative, hybrid method that combines filter-based and wrapper-based approaches: it builds submodels, constructs feature subsets by extracting the important variables of the best-performing submodels, and determines the final feature set through prediction performance analysis of each subset and of the combined subsets.
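A minimal sketch of an iterative filter-plus-wrapper loop of the kind described above, using scikit-learn for brevity. The univariate filter, the importance criterion, the random subsets, and the stopping rule are placeholder choices, and a random forest stands in for the paper's deep neural network; none of this is the paper's exact method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=30, n_informative=8,
                           random_state=0)

# Filter step: keep features that pass a simple univariate statistical test.
filt = SelectKBest(f_classif, k=20).fit(X, y)
candidates = list(np.flatnonzero(filt.get_support()))

# Wrapper step: build submodels on random feature subsets, keep the important
# variables of the best submodels, then evaluate the combined feature set.
rng = np.random.default_rng(0)
scored_subsets = []
for _ in range(10):
    subset = rng.choice(candidates, size=8, replace=False)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    score = cross_val_score(model, X[:, subset], y, cv=3).mean()
    model.fit(X[:, subset], y)
    top = subset[np.argsort(model.feature_importances_)[-4:]]   # important vars
    scored_subsets.append((score, set(top)))

scored_subsets.sort(reverse=True, key=lambda t: t[0])
final_features = sorted(set().union(*(s for _, s in scored_subsets[:3])))
final_score = cross_val_score(RandomForestClassifier(random_state=0),
                              X[:, final_features], y, cv=3).mean()
print(final_features, round(final_score, 3))
```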

Advanced shape from focus (SFF) method by using curved window (곡면 윈도우를 이용한 shape from focus(SFF) 방법의 개선)

  • 윤정일;최태선
    • Proceedings of the IEEK Conference
    • /
    • 1998.06a
    • /
    • pp.777-780
    • /
    • 1998
  • Recovering the three-dimensional information of an object is an important problem for any subsequent use of that information. Various methods have been studied for this purpose; among them, the shape from focus (SFF) method finds the lens position at which the image is in focus and obtains the distance to the focused region via the lens formula. The conventional method computes focus measure values, which quantify how well focused the image is, over a simple plane perpendicular to the optical axis of the camera and finds the position where their sum is maximized. To improve on this, the focused image surface (FIS) concept was introduced and yielded better results. The FIS of an object is the surface in space formed by the set of object points brought into focus by the camera lens. By geometric optics there is a one-to-one correspondence between the shape of the object and its FIS, so recovering the FIS amounts to recovering the shape of the object. When the FIS concept was first applied, the object shape was locally assumed to be a plane like the image detector, the focus measure was computed for planes in all possible orientations in 3D space, and the lens position maximizing that value was selected. However, because the sum of focus measures is computed over a square window, this method incurs errors for real objects composed of curved surfaces. In this paper, by contrast, we find the lens position that maximizes the sum of focus measures over a curved surface rather than a plane, and show that more accurate reconstruction than with previous methods is possible.
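A minimal numpy sketch of the idea under several simplifying assumptions: the focus measure is a modified-Laplacian-style response per pixel per lens position, an initial per-pixel depth comes from the argmax over the focal stack, and the "curved window" then sums the focus measure along the locally estimated surface instead of along a single frontal plane. This is an interpretation for illustration, not the paper's exact formulation.

```python
import numpy as np

def focus_measure(img):
    """Modified-Laplacian-style focus measure per pixel (absolute second differences)."""
    lap_x = np.abs(np.roll(img, 1, 1) + np.roll(img, -1, 1) - 2 * img)
    lap_y = np.abs(np.roll(img, 1, 0) + np.roll(img, -1, 0) - 2 * img)
    return lap_x + lap_y

def sff_depth(stack, half=1):
    """Step 1 (classic SFF): per pixel, take the lens index with maximum focus.
    Step 2 (curved window): re-score each candidate lens index by summing the
    focus measure over a small neighbourhood shifted along the neighbours'
    initial depth estimates, instead of over a single frontal plane."""
    vol = np.stack([focus_measure(img) for img in stack])   # (n, H, W)
    n, H, W = vol.shape
    d0 = np.argmax(vol, axis=0)                             # planar estimate
    refined = d0.copy()
    for y in range(half, H - half):
        for x in range(half, W - half):
            best_d, best_s = d0[y, x], -1.0
            for d in range(n):
                s = 0.0
                for dy in range(-half, half + 1):
                    for dx in range(-half, half + 1):
                        # follow the local surface: shift each neighbour's depth
                        # by the same offset as the centre pixel
                        dz = int(np.clip(d0[y + dy, x + dx] + d - d0[y, x], 0, n - 1))
                        s += vol[dz, y + dy, x + dx]
                if s > best_s:
                    best_d, best_s = d, s
            refined[y, x] = best_d
    return refined

stack = [np.random.rand(16, 16) for _ in range(5)]   # toy focal stack
print(sff_depth(stack).shape)                        # (16, 16) depth map
```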


CCR : Tree-pattern based Code-clone Detector (CCR : 트리패턴 기반의 코드클론 탐지기)

  • Lee, Hyo-Sub;Do, Kyung-Goo
    • Journal of Software Assessment and Valuation
    • /
    • v.8 no.2
    • /
    • pp.13-27
    • /
    • 2012
  • This paper presents CCR (Code Clone Ransacker), a tree-pattern based code-clone detector that finds all clustered duplicate patterns by comparing every pair of subtrees in the programs. A pattern that is entirely included in another pattern is ignored, since only the largest duplicate patterns are of interest. Evaluation shows that CCR achieves high precision and recall. Previous tree-pattern based code-clone detectors are known to have good precision and recall because they compare program structure; CCR retains high precision while achieving up to 5 times higher recall than Asta and about 1.9 times higher recall than CloneDigger. The tool also detects the majority of Bellon's reference corpus.
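A minimal sketch of the general tree-pattern approach (not CCR itself): parse source with Python's ast module, hash a structural rendering of every subtree, and report structures that occur more than once. The filtering that keeps only the largest enclosing patterns is omitted for brevity, and the minimum subtree size is an assumption.

```python
import ast
from collections import defaultdict

def subtree_signature(node):
    """Structural rendering of a subtree: node types only, identifiers and
    constants stripped, so renamed copies still map to the same pattern."""
    children = ", ".join(subtree_signature(c) for c in ast.iter_child_nodes(node))
    return f"{type(node).__name__}({children})"

def find_clones(source, min_size=5):
    """Group subtrees with identical structure; return groups with >= 2 members."""
    tree = ast.parse(source)
    groups = defaultdict(list)
    for node in ast.walk(tree):
        size = sum(1 for _ in ast.walk(node))
        if size >= min_size:
            groups[subtree_signature(node)].append(
                (type(node).__name__, getattr(node, "lineno", None)))
    return {sig: occ for sig, occ in groups.items() if len(occ) > 1}

code = """
def f(a, b):
    total = 0
    for x in a:
        total += x * b
    return total

def g(u, v):
    s = 0
    for y in u:
        s += y * v
    return s
"""
for sig, occurrences in find_clones(code).items():
    print(occurrences)   # f and g (and their inner loops) show up as clone pairs
```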

A Study on the Ordered Subsets Expectation Maximization Reconstruction Method Using Gibbs Priors for Emission Computed Tomography (Gibbs 선행치를 사용한 배열된부분집합 기대값최대화 방출단층영상 재구성방법에 관한 연구)

  • Im, K. C.;Choi, Y.;Kim, J. H.;Lee, S. J.;Woo, S. K.;Seo, H. K.;Lee, K. H.;Kim, S. E.;Choe, Y. S.;Park, C. C.;Kim, B. T.
    • Journal of Biomedical Engineering Research
    • /
    • v.21 no.5
    • /
    • pp.441-448
    • /
    • 2000
  • The maximum likelihood expectation maximization (MLEM) method for emission tomography reconstructs images by statistically modeling the image acquisition process. MLEM has many advantages over the commonly used filtered backprojection method, but it suffers from divergence as the number of iterations increases and from long reconstruction times. To overcome these drawbacks, this paper implements OSEM-MAP (maximum a posteriori), in which a Gibbs prior, either the membrane (MM) or the thin plate (TP), is added to the ordered subsets expectation maximization (OSEM) algorithm, which substantially shortens the computation time, in order to improve the stability of the algorithm and the quality of the reconstructed images. In the experiments, the projection data were divided into 16 subsets to accelerate the convergence of the iterations, and to compare the performance of the algorithms, reconstruction results for software phantoms (a monkey brain autoradiograph and a mathematical cardiac-torso phantom) were compared in terms of squared error. In addition, to evaluate the practical applicability of the algorithm, real projection data acquired from a PET scanner with a physical phantom were used.
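A minimal numpy sketch of an OSEM iteration with a one-step-late quadratic (membrane-style) penalty on a toy system matrix. The subset layout, the prior strength, the 1-D neighbourhood, and the one-step-late approximation are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_pix = 64, 16
A = rng.random((n_bins, n_pix))              # toy system matrix (detector x image)
x_true = rng.random(n_pix) + 0.5
y = rng.poisson(A @ x_true).astype(float)    # noisy projection data

def membrane_gradient(x):
    """Gradient of a simple quadratic smoothness penalty on a 1-D neighbourhood
    (a stand-in for the membrane prior on an image grid)."""
    g = np.zeros_like(x)
    g[1:] += x[1:] - x[:-1]
    g[:-1] += x[:-1] - x[1:]
    return g

def osem_map(A, y, n_iter=10, n_subsets=4, beta=0.01):
    """OSEM with a one-step-late MAP correction: each sub-iteration uses only
    one subset of projection bins, and the usual sensitivity term is augmented
    with beta times the prior gradient evaluated at the current estimate."""
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for s in subsets:
            As, ys = A[s], y[s]
            ratio = ys / np.maximum(As @ x, 1e-12)          # measured / estimated
            denom = As.sum(axis=0) + beta * membrane_gradient(x)
            x = x * (As.T @ ratio) / np.maximum(denom, 1e-12)
    return x

x_rec = osem_map(A, y)
print(np.round(x_rec, 2))
```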
