• Title/Summary/Keyword: maximum subset (최대 부분 집합)


A New Formulation of the Reconstruction Problem in Neutronics Nodal Methods Based on Maximum Entropy Principle (노달방법의 중성자속 분포 재생 문제에의 최대 엔트로피 원리에 의한 새로운 접근)

  • Na, Won-Joon;Cho, Nam-Zin
    • Nuclear Engineering and Technology
    • /
    • v.21 no.3
    • /
    • pp.193-204
    • /
    • 1989
  • This paper develops a new method for reconstructing the neutron flux distribution, based on the maximum entropy principle of information theory. The probability distribution that maximizes the entropy is the most unbiased, objective probability distribution consistent with the known partial information. Here the partial information consists of the assembly volume-averaged neutron flux, the surface-averaged neutron fluxes, and the surface-averaged neutron currents, which are the results of the nodal calculation. The flux distribution on the boundary of a fuel assembly, which serves as the boundary condition for the neutron diffusion equation, is transformed into a probability distribution in the entropy expression. The most objective boundary flux distribution is then deduced from the nodal results by the maximum entropy method, and it is used as the boundary condition in an embedded heterogeneous assembly calculation that provides the detailed flux distribution. Applied to several PWR benchmark problem assemblies, the new method gives reconstruction errors comparable with those of form function methods in the inner region of the assembly, while the errors are relatively large near the assembly boundary. Incorporating the surface-averaged neutron currents into the constraint information (which is not done in the present study) should give better results. (A sketch of the general maximum entropy formulation follows this entry.)

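A minimal sketch of the general maximum entropy formulation behind this kind of reconstruction, in our own notation; the abstract does not give the authors' exact constraint functions, so the c_i and d_i below are placeholders for the nodal quantities (volume- and surface-averaged fluxes).

```latex
% Generic maximum entropy setup (our notation, not the paper's):
\max_{p}\; S[p] = -\int_{\Gamma} p(x)\,\ln p(x)\,dx
\quad \text{s.t.} \quad
\int_{\Gamma} p(x)\,dx = 1, \qquad
\int_{\Gamma} c_i(x)\,p(x)\,dx = d_i, \quad i = 1,\dots,m.

% Introducing Lagrange multipliers \lambda_i gives the exponential-family
% maximizer, with \lambda fixed by requiring the constraints to hold:
p(x) = \frac{1}{Z(\lambda)} \exp\!\Big(-\sum_{i=1}^{m} \lambda_i\, c_i(x)\Big),
\qquad
Z(\lambda) = \int_{\Gamma} \exp\!\Big(-\sum_{i=1}^{m} \lambda_i\, c_i(x)\Big)\, dx.
```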

Dynamic Subspace Clustering for Online Data Streams (온라인 데이터 스트림에서의 동적 부분 공간 클러스터링 기법)

  • Park, Nam Hun
    • Journal of Digital Convergence
    • /
    • v.20 no.2
    • /
    • pp.217-223
    • /
    • 2022
  • Subspace clustering over online data streams requires a large amount of memory because all subsets of the data dimensions must be examined. To track the continuous change of clusters in a data stream within a finite memory space, this paper proposes a grid-based subspace clustering algorithm that uses memory resources effectively. Given an n-dimensional data stream, the distribution of data items in the data space is monitored by a grid-cell list. When the frequency of data items in a first-level grid-cell becomes high enough for it to become a unit grid-cell, the grid-cell list of the next level is created as a child node in order to find clusters in all possible subspaces of that grid-cell. In this way a grid-cell subspace tree of at most n levels is constructed, and a k-dimensional subspace cluster can be found at the kth level of the tree. Experiments confirm that the proposed method uses computing resources more efficiently by expanding only the dense regions of the space while maintaining the same accuracy as the existing method.
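
A minimal structural sketch of how such a grid-cell subspace tree could be organized. The class name, density threshold, and expansion rule are our own illustrative assumptions, not the paper's code; in a real stream processor each item would also be binned by its coordinate values and routed to the matching child cells, which is omitted here.

```python
class GridCell:
    """Node of an illustrative grid-cell subspace tree (sketch only).

    `dims` is the tuple of dimension indices this cell fixes, so a node at
    depth k corresponds to a k-dimensional subspace of the n-dimensional
    stream.
    """

    def __init__(self, dims, n_dims, density_threshold=100):
        self.dims = dims
        self.n_dims = n_dims
        self.density_threshold = density_threshold
        self.count = 0
        self.children = {}   # added dimension index -> child GridCell

    def insert(self):
        """Count one data item falling into this cell; expand when dense."""
        self.count += 1
        if self.count == self.density_threshold:
            # The cell became a unit (dense) cell: create next-level
            # grid-cell lists that refine it by one extra dimension each,
            # so only dense regions of the space are expanded.
            for d in range(self.n_dims):
                if d not in self.dims:
                    self.children[d] = GridCell(
                        self.dims + (d,), self.n_dims, self.density_threshold)
```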

A Caching Mechanism for Knowledge Maps (지식 맵을 위한 캐슁 기법)

  • 정준원;민경섭;김형주
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.10 no.3
    • /
    • pp.282-291
    • /
    • 2004
  • There have been many studies of Topic Maps and RDF, which are approaches to handling data efficiently through metadata. However, little research has addressed how to actually serve and implement them beyond presentation and description. In this paper, we propose a caching mechanism that supports efficient access to a knowledge map, together with a practical knowledge-map service implemented on a Topic Map system. First, we propose a navigation method for knowledge maps that combines the advantages of earlier methods. Then, to transmit Topic Maps efficiently, we propose a caching mechanism for knowledge maps. The goal is to let users navigate the knowledge map efficiently from a human point of view rather than an application point of view: instead of caching topics by logical or physical locality, the mechanism clusters them using the information and characteristic values of the Topic Map. Finally, we propose a replacement policy that exploits the graph structure of the Topic Map to improve transmission efficiency.
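
A hedged sketch of a cluster-based cache in the spirit of the abstract. The clustering key, the whole-cluster LRU eviction, and all names are our own illustrative assumptions, not the authors' mechanism.

```python
class ClusterCache:
    """Illustrative topic cache that stores and evicts whole clusters.

    Topics are grouped by a characteristic value (for example, topic type)
    instead of by access locality; eviction removes the least recently used
    cluster. Sketch of the general idea only, not the paper's design.
    """

    def __init__(self, max_clusters=64):
        self.max_clusters = max_clusters
        self.clusters = {}            # cluster key -> {topic_id: topic}
        self.use_order = []           # cluster keys, least recent first

    def _touch(self, key):
        if key in self.use_order:
            self.use_order.remove(key)
        self.use_order.append(key)

    def put(self, topic_id, topic, characteristic):
        cluster = self.clusters.setdefault(characteristic, {})
        cluster[topic_id] = topic
        self._touch(characteristic)
        if len(self.clusters) > self.max_clusters:
            victim = self.use_order.pop(0)      # evict a whole cluster
            del self.clusters[victim]

    def get(self, topic_id, characteristic):
        cluster = self.clusters.get(characteristic)
        if cluster and topic_id in cluster:
            self._touch(characteristic)
            return cluster[topic_id]
        return None                             # cache miss
```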

An Algorithm for Detecting Geometric Symmetry in a Planar Graph (평면 그래프의 기하학적 대칭성 탐지 알고리즘)

  • Hong, Seok-Hui;Lee, Sang-Ho
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.26 no.1
    • /
    • pp.107-116
    • /
    • 1999
  • Symmetry is one of the important aesthetic criteria when visually representing the structure and properties of a graph. A drawing that displays symmetry also makes the whole graph easier to understand, because it shows that the graph is built up repeatedly from smaller subgraphs. However, detecting geometric symmetry in a general graph has been proven NP-complete, so previous research has focused on very restricted subclasses of planar graphs such as trees, outerplanar graphs, and embedded planar graphs. This paper studies the geometric symmetry problem for planar graphs. By decomposing a planar graph into its biconnected components, decomposing each of those into triconnected components to build a tree, and introducing the notion of reduction, we present an O(n²)-time algorithm for detecting geometric symmetry, where n is the number of vertices of the graph. The algorithm can be used to develop algorithms that draw a planar graph as symmetrically as possible.
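
A hedged sketch of the first stage of the decomposition the abstract describes, using NetworkX; the split into triconnected components (for example via an SPQR-tree construction) and the reduction step are not shown, and the graph used is only an example.

```python
import networkx as nx

def block_cut_tree(G: nx.Graph) -> nx.Graph:
    """Decompose G into biconnected components and build the block-cut tree,
    the first stage of the decomposition described above (sketch only)."""
    cut_vertices = set(nx.articulation_points(G))
    T = nx.Graph()
    for i, block in enumerate(nx.biconnected_components(G)):
        block_node = ("block", i, frozenset(block))
        T.add_node(block_node)
        for v in block & cut_vertices:          # link block to its cut vertices
            T.add_edge(block_node, ("cut", v))
    return T

if __name__ == "__main__":
    G = nx.lollipop_graph(4, 3)                 # small example graph
    print(block_cut_tree(G).nodes())
```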

Building an Ensemble Machine by Constructive Selective Learning Neural Networks (건설적 선택학습 신경망을 이용한 앙상블 머신의 구축)

  • Kim, Seok-Jun;Jang, Byeong-Tak
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.12
    • /
    • pp.1202-1210
    • /
    • 2000
  • This paper presents a new approach to building an effective ensemble machine. Several studies have shown that, for an ensemble to be effective, the correlation among ensemble members must be very low, and each member must learn the whole problem reasonably accurately while still disagreeing with the others on some parts. To generate many candidate ensemble networks that learn different aspects of a given problem, we use a neural network learning algorithm that combines constructive learning with active learning. The network is trained by gradually increasing the number of hidden nodes from a minimum to a maximum while, at the same time, incrementally selecting the training data to use from a pool of candidate data. At each point where a hidden node is added, a candidate ensemble network is produced; one such pass of training is defined as a chain. Through multiple chains, candidate networks of various sizes trained on various data distributions are generated. These candidate networks are then selected by probabilistic proportional selection and combined by the generalized ensemble method (GEM) to give the final ensemble. The proposed algorithm was applied to one artificial data set and one real-world data set, and the experiments show that the best generalization performance of the ensemble built by the proposed algorithm is superior to that of other algorithms. (A sketch of the GEM combination step follows this entry.)

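A minimal numpy sketch of a GEM-style combination step of the kind mentioned above: member weights are derived from the error correlation matrix estimated on held-out data. The variable names and the ridge regularization are our own assumptions, not the paper's implementation.

```python
import numpy as np

def gem_weights(errors: np.ndarray, ridge: float = 1e-6) -> np.ndarray:
    """Weights for a generalized ensemble method (GEM) combination.

    `errors` has shape (n_members, n_validation_samples), one row of
    prediction errors per candidate network. The weights minimize the
    combined error under the usual GEM assumptions; the small ridge term
    is added for numerical stability (our choice).
    """
    C = errors @ errors.T / errors.shape[1]      # error correlation matrix
    C += ridge * np.eye(C.shape[0])
    c_inv_ones = np.linalg.solve(C, np.ones(C.shape[0]))   # C^{-1} 1
    return c_inv_ones / c_inv_ones.sum()         # normalize so weights sum to 1

def gem_predict(member_preds: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Combine member predictions of shape (n_members, n_samples)."""
    return weights @ member_preds
```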

Test Data Compression for SoC Testing (SoC 테스트를 위한 테스트 데이터 압축)

  • Kim Yun-Hong
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.5 no.6
    • /
    • pp.515-520
    • /
    • 2004
  • Core-based system-on-a-chip (SoC) designs present a number of test challenges. Two increasingly important problems are the long application time during manufacturing test and the high volume of test data. Highly efficient compression techniques have been proposed that reduce storage and application time for high-volume test data by exploiting the repetitive nature of test vectors. This paper proposes a new test data compression technique for SoC testing. In the proposed technique, compression is achieved by partitioning the test vector set and removing repeating segments. The compression process has $O(n^2)$ time complexity and requires only simple hardware decoding circuitry. The efficiency of the proposed technique is shown to be comparable with sophisticated software compression techniques, with the advantage of easy and fast decoding. (A sketch of segment-level deduplication follows this entry.)

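A minimal sketch of segment-level deduplication in the spirit of the abstract. The segment length, the literal/reference encoding, and all names are our own illustrative choices, not the authors' scheme.

```python
def compress_segments(test_vector: str, seg_len: int = 8):
    """Split a test vector (a bit string) into fixed-length segments and
    replace repeated segments by references to their first occurrence."""
    segments = [test_vector[i:i + seg_len]
                for i in range(0, len(test_vector), seg_len)]
    dictionary, encoded = [], []
    for seg in segments:
        if seg in dictionary:                    # repeating segment: store index
            encoded.append(("ref", dictionary.index(seg)))
        else:                                    # new segment: store literally
            dictionary.append(seg)
            encoded.append(("lit", seg))
    return encoded

def decompress_segments(encoded):
    """Simple decoder that inverts compress_segments."""
    dictionary, out = [], []
    for kind, value in encoded:
        if kind == "lit":
            dictionary.append(value)
            out.append(value)
        else:                                    # "ref": look up earlier segment
            out.append(dictionary[value])
    return "".join(out)
```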

The Optimization of Reconstruction Method Reducing Partial Volume Effect in PET/CT 3D Image Acquisition (PET/CT 3차원 영상 획득에서 부분용적효과 감소를 위한 재구성법의 최적화)

  • Hong, Gun-Chul;Park, Sun-Myung;Kwak, In-Suk;Lee, Hyuk;Choi, Choon-Ki;Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.1
    • /
    • pp.13-17
    • /
    • 2010
  • Purpose: The partial volume effect (PVE) is a phenomenon that lowers image accuracy through underestimation of activity in PET/CT 3D image acquisition. The poorer the resolution and the smaller the lesion, the larger the error, so the PVE can influence the test result. We studied the optimal image reconstruction method by varying the parameters that can influence the PVE. Materials and Methods: Images were acquired for 10 minutes for spheres of each size, with ¹⁸F-FDG injected into the hot spheres and the background at a ratio of 4:1, using the NEMA 2001 IEC phantom on a GE Discovery STE 16. Iterative reconstruction was used, varying the number of iterations from 2 to 50 and the subset number from 1 to 56. For analysis, regions of interest were fixed over the detailed parts of the image, and the % difference and signal-to-noise ratio (SNR) were computed using SUVmax. Results: For the 10 mm sphere, with the number of iterations fixed and the subset number changed to 2, 5, 8, 20, and 56, the measured SNR was 0.19, 0.30, 0.40, 0.48, and 0.45, respectively, and the total SNR over all spheres was 2.73, 3.38, 3.64, 3.63, and 3.38. Conclusion: From 6 to 20 iterations, the % difference and SNR show similar values (3.47 ± 0.09); beyond 20 iterations, SUVmax is increasingly underestimated owing to the influence of noise. For the same number of iterations, the SNR is high for subset numbers of 8 to 20. Therefore, considering reconstruction time, the partial volume effect of small lesions can be reduced with 6 iterations and a subset number of 8 to 20.


GAGPC : An Algorithm to Optimize Multiple Continuous Queries on Data Streams (GAGPC : 데이타 스트림에 대한 다중 연속 질의의 최적화 알고리즘)

  • Suh Young-Kyoon;Son Jin-Hyun;Kim Myoung-Ho
    • Journal of KIISE:Databases
    • /
    • v.33 no.4
    • /
    • pp.409-422
    • /
    • 2006
  • In general, there can be many reusable intermediate results among multiple continuous queries (MCQ) on data streams because of their overlapping windows and periodic execution intervals. In this regard, we propose an efficient greedy algorithm for global query plan construction, called GAGPC. GAGPC first decides an execution cycle and finds the maximal set(s) of related execution points (SRP). Next, GAGPC constructs a global execution plan that lets MCQ share the common join fragments with the highest benefit in each SRP. The algorithm reflects that the best plan for the same set of continuous queries may differ according not only to the existence of common expressions but also to the size of the overlapping windows related to them, and, unlike previous work, it reuses partial as well as whole intermediate results. Finally, we show experimental results validating GAGPC.
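
A small sketch of one ingredient of the idea: finding the execution cycle and the time points at which periodically executed queries coincide, since queries that fire at the same point are candidates for sharing join fragments. The period model and names are our own assumptions, not GAGPC itself.

```python
from math import gcd
from functools import reduce
from collections import defaultdict

def related_execution_points(periods):
    """Return the common execution cycle (LCM of periods) and, for every
    time point in that cycle, the set of queries scheduled there, keeping
    only points shared by more than one query (illustrative sketch)."""
    def lcm(a, b):
        return a * b // gcd(a, b)
    cycle = reduce(lcm, periods)
    points = defaultdict(set)
    for q, p in enumerate(periods):
        for t in range(0, cycle + 1, p):
            points[t].add(q)
    return cycle, {t: qs for t, qs in points.items() if len(qs) > 1}

# Example: queries with periods 2, 3 and 4 share points 0, 4, 6, 8 and 12.
# cycle, shared = related_execution_points([2, 3, 4])
```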

Simple Algorithm for Baseball Elimination Problem (야구 배제 문제의 단순 알고리즘)

  • Lee, Sang-Un
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.3
    • /
    • pp.147-152
    • /
    • 2020
  • The baseball elimination problem (BEP) asks which teams can be eliminated before the season ends because they cannot finish with the most wins even if they win all of their remaining games. The problem can be solved with the max-flow/min-cut theorem, but that approach has the shortcoming of repeatedly constructing a network and deciding a min-cut for every team. This paper instead sorts the teams in ascending order of wins plus remaining games, takes the lower half of that ranking as the candidate set K of teams to be eliminated, and then decides easily, simply, and quickly whether a subset R exists that certifies a team's elimination. On various experimental data, this algorithm found all eliminated teams quickly and correctly.
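
A hedged sketch of the certificate-based elimination check that such an algorithm decides: a team x is eliminated if some subset R of the other teams must end up averaging more wins than x can ever reach. This sketch brute-forces all subsets for clarity, whereas the paper restricts attention to candidates from the lower half of the wins-plus-remaining ranking; the data layout is our own assumption.

```python
from itertools import combinations

def eliminated_teams(wins, remaining, games_between):
    """Return the indices of teams that cannot finish with the most wins.

    wins[i], remaining[i]: current wins and remaining games of team i.
    games_between[i][j]: remaining games to be played between teams i and j.
    Team x is eliminated if a subset R of the other teams satisfies
        (sum of wins in R + games still to be played within R) > cap * |R|,
    where cap = wins[x] + remaining[x].  (Illustrative sketch only.)
    """
    n = len(wins)
    out = set()
    for x in range(n):
        cap = wins[x] + remaining[x]
        others = [t for t in range(n) if t != x]
        for size in range(1, len(others) + 1):
            if x in out:
                break
            for R in combinations(others, size):
                w = sum(wins[t] for t in R)
                g = sum(games_between[i][j] for i, j in combinations(R, 2))
                if w + g > cap * len(R):     # R certifies x's elimination
                    out.add(x)
                    break
    return out
```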

A Heuristic-Based Algorithm for Maximum k-Club Problem (MkCP (Maximum k-Club Problem)를 위한 휴리스틱 기반 알고리즘)

  • Kim, SoJeong;Kim, ChanSoo;Han, KeunHee
    • Journal of Digital Convergence
    • /
    • v.19 no.10
    • /
    • pp.403-410
    • /
    • 2021
  • Given an undirected simple graph, a k-club is one of the structures proposed to model the various kinds of social groups that arise in social network analysis (SNA). The Maximum k-Club Problem (MkCP) is to find a k-club of maximum cardinality in a graph. This paper introduces a genetic algorithm called HGA+DROP that can be used to approximate a maximum k-club. Our algorithm modifies the existing k-CLIQUE & DROP algorithm and utilizes a heuristic genetic algorithm (HGA) to obtain multiple k-clubs. We experiment on DIMACS graphs for k = 2, 3, 4, and 5 to compare the performance of the proposed algorithm with existing algorithms.
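
A hedged sketch of two basic ingredients behind such methods: verifying that a vertex set is a k-club (its induced subgraph has diameter at most k) and a greedy DROP step. It is not the authors' HGA+DROP; the function names and the tie-breaking rule are illustrative.

```python
import networkx as nx

def is_k_club(G: nx.Graph, nodes, k: int) -> bool:
    """A node set is a k-club if the subgraph it induces has diameter <= k."""
    H = G.subgraph(nodes)
    if len(H) <= 1:
        return True
    if not nx.is_connected(H):
        return False
    return nx.diameter(H) <= k

def drop_to_k_club(G: nx.Graph, nodes, k: int):
    """Greedy DROP step: repeatedly remove the node involved in the most
    too-distant pairs until the remaining set is a k-club (sketch only)."""
    nodes = set(nodes)
    while not is_k_club(G, nodes, k):
        H = G.subgraph(nodes)
        dist = dict(nx.all_pairs_shortest_path_length(H))
        violations = {
            v: sum(1 for u in nodes if u != v
                   and dist.get(v, {}).get(u, float("inf")) > k)
            for v in nodes
        }
        nodes.remove(max(violations, key=violations.get))
    return nodes
```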