• Title/Summary/Keyword: Merged set

Disruption time scale of merged halos in a dense cluster environment

  • Shin, Jihye;Taylor, James E.;Peng, Eric
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.41 no.2
    • /
    • pp.60.1-60.1
    • /
    • 2016
  • To obtain a reliable estimate of the cold dark matter (CDM) substructure mass function in a dense cluster environment, one needs to understand how long a merged halo can survive within the host halo. By measuring the disruption time scale of merged halos in a dense cluster environment, we attempt to construct a realistic CDM mass function that can be compared with stellar mass functions to obtain a stellar-to-halo mass ratio. For this, we performed a set of high-resolution simulations of cold dark matter halos with properties similar to the Virgo cluster. Field halos outside the main halo are detected using a Friends-of-Friends algorithm with a linking length of 0.02. To trace the sub-halo structures even after they merge with the main halo, we use their core structures, defined as the 10% most bound particles.
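
As a rough illustration of the core-tracing idea above, the sketch below selects the 10% most bound particles of a halo from per-particle potential and kinetic energies. The function name, inputs, and energy bookkeeping are assumptions for illustration; the paper itself does not provide code.

```python
import numpy as np

def subhalo_core(velocities, masses, potential, core_fraction=0.10):
    """Return indices of a halo's most bound particles (its traceable 'core').

    potential : per-particle potential energy; the kinetic energy is taken in
    the halo's centre-of-mass frame. The lowest-energy core_fraction of the
    particles is kept so the sub-halo can still be followed after merging.
    """
    v_com = np.average(velocities, axis=0, weights=masses)
    kinetic = 0.5 * masses * np.sum((velocities - v_com) ** 2, axis=1)
    energy = potential + kinetic              # most negative = most bound
    n_core = max(1, int(core_fraction * len(energy)))
    return np.argsort(energy)[:n_core]
```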

Feature-Based Multi-Resolution Modeling of Solids Using History-Based Boolean Operations - Part II : Implementation Using a Non-Manifold Modeling System -

  • Lee Sang Hun;Lee Kyu-Yeul;Woo Yoonwhan;Lee Kang-Soo
    • Journal of Mechanical Science and Technology
    • /
    • v.19 no.2
    • /
    • pp.558-566
    • /
    • 2005
  • We propose a feature-based multi-resolution representation of B-rep solid models using history-based Boolean operations based on the merge-and-select algorithm. Because union and subtraction are commutative in the history-based Boolean operations, the integrity of the models at various levels of detail (LOD) is guaranteed for the reordered features regardless of whether the features are subtractive or additive. The multi-resolution solid representation proposed in this paper includes a non-manifold topological merged-set model of all feature primitives as well as a feature-modeling tree reordered consistently with a given LOD criterion. As a result, a B-rep solid model for a given LOD can be provided quickly, because the boundary of the model is evaluated without any geometric calculation and extracted from the merged set by selecting the entities contributing to the LOD model shape.
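
As a loose sketch of the merge-and-select idea, the toy code below keeps a "merged set" of all feature primitives and extracts an LOD model purely by selecting the entities of features that pass an LOD threshold, without re-evaluating geometry. The data layout and the importance criterion here are simplifications, not the paper's actual data structures.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Feature:
    name: str
    additive: bool        # union feature if True, subtraction feature if False
    importance: float     # LOD criterion (e.g. feature volume)
    entities: frozenset   # boundary entities contributed to the merged set

def extract_lod(merged_set, threshold):
    """Select from the merged set the entities of every feature whose
    importance passes the LOD threshold. Because union and subtraction
    commute in history-based Boolean operations, features can be dropped
    in any order without geometric re-evaluation."""
    selected = [f for f in merged_set if f.importance >= threshold]
    entities = set().union(*(f.entities for f in selected))
    return selected, entities
```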

An Empirical Analysis On The Effects Of M&A Between The Merging Firms And The Merged Firms

  • Kim, Dong-Hwan
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.4 no.4
    • /
    • pp.428-433
    • /
    • 2003
  • In this study, we empirically compared and investigated the impacts and effects of M&A on the merging firms and the merged firms during the period from 1990 to 1997, in which the market principles of developed countries were adopted and a more autonomous and competitive M&A market was activated. For this purpose, this paper sets up hypotheses and tests them by analyzing AARs and CARs, employing both the market model and the market-adjusted model. The empirical results of this research show that the CAR is more positive for merged firms than for merging firms, which contrasts with the results of previous studies from the 1980s.
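
For reference, the event-study quantities mentioned above (abnormal returns under the market-adjusted model, then AAR and CAR) can be computed roughly as in the sketch below; the array shapes and function names are assumptions, not taken from the paper.

```python
import numpy as np

def abnormal_returns_market_adjusted(stock_returns, market_returns):
    """Market-adjusted model: AR_t = R_t - R_m,t."""
    return np.asarray(stock_returns) - np.asarray(market_returns)

def aar_and_car(ar_matrix):
    """ar_matrix: abnormal returns, shape (n_firms, n_event_days).
    AAR_t is the cross-sectional mean per event day; CAR is its cumulative
    sum over the event window."""
    aar = ar_matrix.mean(axis=0)
    return aar, aar.cumsum()
```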

Sentence design for speech recognition database

  • Zu Yiqing
    • Proceedings of the KSPS conference
    • /
    • 1996.10a
    • /
    • pp.472-472
    • /
    • 1996
  • The material of a database for speech recognition should include as many phonetic phenomena as possible. At the same time, such material should be phonetically compact, with low redundancy [1, 2]. The phonetic phenomena in continuous speech are the key problem in speech recognition. This paper describes the processing of a set of sentences collected from the 1993 and 1994 "People's Daily" (a Chinese newspaper) database, which consists of news, politics, economics, arts, sports, etc. In those sentences, both phonetic phenomena and sentence patterns are included. In continuous speech, phonemes always appear in the form of allophones, which gives rise to co-articulatory effects. The task of designing a speech database should therefore be concerned with both intra-syllabic and inter-syllabic allophone structures. In our experiments, there are 404 syllables, 415 inter-syllabic diphones, 3050 merged inter-syllabic triphones and 2161 merged final-initial structures in read speech. Statistics on the "People's Daily" database give an evaluation of all the possible phonetic structures. In this sentence set, we first consider the phonetic balances among syllables, inter-syllabic diphones, inter-syllabic triphones and semi-syllables with their junctures. The syllabic balances ensure the intra-syllabic phenomena such as phonemes, initials/finals and consonants/vowels; the rest describe the inter-syllabic junctures. The 1560 sentences cover 96% of the syllables without tones (the absent syllables are only used in spoken language), 100% of the inter-syllabic diphones, and 67% of the inter-syllabic triphones (87% of those that appear in "People's Daily"). There are roughly 17 kinds of sentence patterns in our sentence set. By taking the transitions between syllables into account, Chinese speech recognition systems have obtained significantly higher recognition rates [3, 4]. The sentence-collection process is: [People's Daily database] -> [segmentation of sentences] -> [segmentation of word groups] -> [transcription of the text into Pinyin] -> [statistics of phonetic phenomena & selection of useful paragraphs] -> [manual modification of the selected sentences] -> [phonetically compact sentence set].
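
The coverage-driven selection described above can be approximated by a greedy set-cover pass over candidate sentences, as in the sketch below; the representation of sentences as sets of phonetic units (syllables, diphones, triphones) is an assumption for illustration.

```python
def greedy_sentence_set(candidates, target_units):
    """Greedily pick sentences that add the most uncovered phonetic units
    (syllables, inter-syllabic diphones/triphones), keeping the set compact.

    candidates   : list of (sentence, set_of_phonetic_units)
    target_units : set of units the final sentence set should cover
    """
    selected, covered, pool = [], set(), list(candidates)
    while covered < target_units and pool:
        best = max(pool, key=lambda c: len(c[1] - covered))
        if not best[1] - covered:
            break                          # nothing left improves coverage
        selected.append(best[0])
        covered |= best[1]
        pool.remove(best)
    return selected, covered
```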

A Thrombus Growth Model Based on Level Set Methods

  • Ma, Chaoqing;Gwun, Oubong
    • Smart Media Journal
    • /
    • v.5 no.1
    • /
    • pp.137-142
    • /
    • 2016
  • In this paper, a multi-scale model is applied to the simulation of thrombus growth. This model includes a macroscale model and a microscale model. The former is used to model the plasma flow with the Navier-Stokes equations, and the latter is used to model platelet adhesion and aggregation, thrombus motion, and the surface expansion of the thrombus. The force acting on platelets and the thrombus from the plasma is modeled by the drag force, and the forces from biochemical reactions are modeled by the adhesion force and the aggregation force. As more platelets are merged into the thrombus, the thrombus surface expands. We propose a thrombus growth model that simulates the expansion of the thrombus surface and tracks the surface by level set methods. We implemented the computational model. The model performs well, and the experimental results show that the shape of the thrombus in the level set expansion form is similar to the thrombus observed in clinical tests.
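
A minimal illustration of tracking an expanding surface with a level set function is sketched below: the thrombus boundary is the zero level set of phi, advected outward with a given normal speed. This is a bare forward-Euler scheme without the upwinding or reinitialization a production solver would need, and the coupling to the paper's macro/micro models is not shown.

```python
import numpy as np

def evolve_level_set(phi, speed, dt=0.1, steps=10, h=1.0):
    """Advance phi under d(phi)/dt + speed * |grad(phi)| = 0.

    phi   : 2-D array, signed-distance-like; {phi = 0} is the thrombus surface
    speed : outward normal speed (scalar or array), e.g. larger where platelets
            are being merged into the thrombus
    """
    for _ in range(steps):
        gx, gy = np.gradient(phi, h)
        phi = phi - dt * speed * np.sqrt(gx**2 + gy**2)
    return phi
```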

Data Based Lower-Order Controller Design: Moment Matching Approach (데이터 기반 저차제어기 설계: 모멘트 정합 기법)

  • Kim, Young Chol;Jin, Lihua
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.61 no.12
    • /
    • pp.1903-1910
    • /
    • 2012
  • This paper presents a data-based low-order controller design algorithm for a linear time-invariant process with a time delay. The algorithm combines an identification step based on an open-loop pulse test with a low-order controller design step that obtains the entire set of controllers achieving multiple performance specifications. The initial information necessary for this algorithm is merely the width and amplitude of a rectangular pulse, the chosen controller type (PI, PD, PID, or first-order), and the design objectives. Various parametric approaches that have been developed are merged into the controller design algorithm. The resulting set of controllers satisfying the design objectives is displayed in 2D and 3D graphics, so it is easy to pick a controller inside the admissible set because the corresponding closed-loop performances can be checked visually.
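
One plausible reading of the identification step is the extraction of impulse-response moments directly from the pulse-test data, using the convolution identity between the moments of the input, output, and impulse response; the sketch below follows that reading and is not the paper's exact procedure.

```python
import numpy as np
from math import comb

def impulse_response_moments(t, u, y, n=3):
    """Estimate m_k = integral(t^k * g(t) dt) of the process impulse response
    from an open-loop pulse test with input u(t) and measured output y(t),
    using the identity M_k(y) = sum_j C(k, j) * m_j * M_{k-j}(u)."""
    Mu = [np.trapz(t**k * u, t) for k in range(n)]
    My = [np.trapz(t**k * y, t) for k in range(n)]
    m = []
    for k in range(n):
        known = sum(comb(k, j) * m[j] * Mu[k - j] for j in range(k))
        m.append((My[k] - known) / Mu[0])
    return m      # m[0] is the steady-state gain of the process
```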

Distributed Algorithm for Maximal Weighted Independent Set Problem in Wireless Network (무선통신망의 최대 가중치 독립집합 문제에 관한 분산형 알고리즘)

  • Lee, Sang-Un
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.19 no.5
    • /
    • pp.73-78
    • /
    • 2019
  • This paper proposes a polynomial-time rule for the maximum weighted independent set (MWIS) problem, which is well known to be NP-hard. The well-known distributed algorithm locally selects the maximum-weight node as an element of the independent set. However, it frequently happens that a merged set of independent, lower-weight nodes has more total weight than the single maximum-weight node; in this case, the existing algorithm fails to obtain the optimal solution. To deal with this problem, this paper constructs the maximum weighted independent set within each local area. Applying the proposed algorithm to various networks, it obtains the optimal solutions that the existing algorithm fails to find.
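
A centralized sketch of the local rule described above is given below: instead of always keeping the single heaviest node, the heaviest node is compared against an independent subset of its neighbours, and the side with the larger merged weight is kept. The networkx representation and the "weight" attribute name are assumptions for illustration, and the message passing of the actual distributed algorithm is not modeled.

```python
import networkx as nx

def mwis_local_rule(graph):
    """Greedy MWIS heuristic that compares the heaviest remaining node with
    an independent set of its neighbours and keeps the heavier side."""
    g = graph.copy()
    independent = set()
    while g.number_of_nodes():
        v = max(g.nodes, key=lambda n: g.nodes[n]["weight"])
        nbrs = sorted(g.neighbors(v),
                      key=lambda n: g.nodes[n]["weight"], reverse=True)
        chosen = []
        for n in nbrs:                     # greedy independent subset of N(v)
            if all(not g.has_edge(n, c) for c in chosen):
                chosen.append(n)
        merged_weight = sum(g.nodes[n]["weight"] for n in chosen)
        winners = chosen if merged_weight > g.nodes[v]["weight"] else [v]
        independent.update(winners)
        remove = set(winners) | {n for w in winners for n in g.neighbors(w)}
        g.remove_nodes_from(remove)
    return independent
```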

A Study on Feature-Based Multi-Resolution Modelling - Part II: System Implementation and Criteria for Level of Detail (특징형상기반 다중해상도 모델링에 관한 연구 - Part II: 시스템 구현 및 상세수준 판단기준)

  • Lee K.Y.;Lee S.H.
    • Korean Journal of Computational Design and Engineering
    • /
    • v.10 no.6
    • /
    • pp.444-454
    • /
    • 2005
  • Recently, the requirements for multi-resolution models of a solid model, which represent an object at multiple levels of feature detail, are increasing for engineering tasks such as analysis, network-based collaborative design, and virtual prototyping and manufacturing. Research in this area has focused on several topics: topological frameworks for representing multi-resolution solid models, criteria for the level of detail (LOD), and generation of valid models after rearrangement of features. As a solution to the feature rearrangement problem, the new concept of the effective zone of a feature is introduced in the former part of this paper. In this paper, we propose a feature-based non-manifold modeling system to provide multi-resolution models of a feature-based solid or non-manifold model on the basis of the effective feature zones. To facilitate the implementation, we introduce the class of the multi-resolution feature, whose attributes contain all the information necessary to build a multi-resolution solid model and extract LOD models from it. In addition, two methods are introduced to accelerate the extraction of LOD models from the multi-resolution modeling database: one is to use a non-manifold topology (NMT) model, known as a merged set, to represent multi-resolution models, and the other is to store the differences between adjacent LOD models to accelerate the transition from one LOD to another. We also suggest the volume of a feature, regardless of feature type, as a criterion for the LOD. This criterion can be used in a wide range of applications, since, unlike the previous method, it makes no distinction between additive and subtractive features.
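
To make the volume-based LOD criterion and the stored inter-LOD differences concrete, the toy sketch below orders features by volume, regardless of whether they are additive or subtractive, stores only the entities each level adds, and rebuilds any LOD by accumulating those deltas. The attribute names are illustrative, not the classes defined in the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Feature:
    name: str
    volume: float         # LOD criterion: feature volume, regardless of type
    entities: frozenset   # entities this feature contributes to the merged set

def build_lod_deltas(features):
    """Order features by decreasing volume and store, per level, only the
    entities added relative to the previous level."""
    ordered = sorted(features, key=lambda f: f.volume, reverse=True)
    seen, deltas = set(), []
    for f in ordered:
        delta = set(f.entities) - seen
        deltas.append(delta)
        seen |= delta
    return deltas

def model_at(deltas, lod):
    """Reconstruct the entity set of a given LOD by accumulating the stored
    deltas, which speeds up transitions between adjacent LOD models."""
    out = set()
    for delta in deltas[:lod]:
        out |= delta
    return out
```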

A Real-Time Data Mining for Stream Data Sets (연속발생 데이터를 위한 실시간 데이터 마이닝 기법)

  • Kim Jinhwa;Min Jin Young
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.29 no.4
    • /
    • pp.41-60
    • /
    • 2004
  • A stream data set is a data set that is accumulated in data storage from a data source continuously over time. The size of this data set, in many cases, becomes increasingly large over time. Mining information from this massive data takes many resources, such as storage, memory and time. These unique characteristics of stream data make it difficult and expensive to use this large data set accumulated over time. On the other hand, if we use only recent data, or part of the whole data, to mine information or patterns, there can be a loss of information that may be useful. To avoid this problem, we suggest a method that efficiently accumulates information over time in the form of rule sets. It takes much smaller storage compared to traditional mining methods. These accumulated rule sets are used as prediction models in the future. Based on theories of ensemble approaches, a combination of many prediction models, in the form of systematically merged rule sets in this study, performs better than a single prediction model. This study uses a customer data set to predict the buying power of customers based on their information, tests the performance of the suggested method on this data set together with general prediction methods, and compares their performances.
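
The rule-set accumulation can be pictured as in the sketch below: each data chunk from the stream yields a rule set, and the accumulated rule sets are merged into one ensemble model by accuracy-weighted voting. The dictionary layout of a rule set here is an assumption for illustration only.

```python
from collections import defaultdict

def merge_rule_sets(rule_sets):
    """Merge rule sets mined from successive chunks of a data stream.

    rule_sets : iterable of dicts mapping rule -> (prediction, accuracy).
    Identical rules are merged by accuracy-weighted voting on their
    predictions, so only compact rule sets, not raw data, are retained.
    """
    votes = defaultdict(lambda: defaultdict(float))
    for rules in rule_sets:
        for rule, (prediction, accuracy) in rules.items():
            votes[rule][prediction] += accuracy
    return {rule: max(preds, key=preds.get) for rule, preds in votes.items()}
```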

Effects of Hyper-parameters and Dataset on CNN Training

  • Nguyen, Huu Nhan;Lee, Chanho
    • Journal of IKEEE
    • /
    • v.22 no.1
    • /
    • pp.14-20
    • /
    • 2018
  • The purpose of training a convolutional neural network (CNN) is to obtain weight factors that give high classification accuracy. The initial values of the hyper-parameters affect the training results, and it is important to train a CNN with a suitable hyper-parameter set consisting of a learning rate, a batch size, the initialization of the weight factors, and an optimizer. We investigate the effects of a single hyper-parameter while the others are fixed, in order to obtain a hyper-parameter set that gives higher classification accuracy and requires a shorter training time; a proposed VGG-like CNN is used for training since VGG is widely used. The CNN is trained on four datasets: CIFAR10, CIFAR100, GTSRB and DSDL-DB. The effects of normalization and data transformation on the datasets are also investigated, and a training scheme using merged datasets is proposed.
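
A one-hyper-parameter-at-a-time sweep of the kind described above can be organized as in the sketch below; the baseline values and the train_vgg_like() call are placeholders, not the settings or code used in the paper.

```python
# Baseline hyper-parameters (illustrative values, not the paper's).
baseline = {"lr": 0.01, "batch_size": 128, "init": "he", "optimizer": "sgd"}

# Candidate values; each hyper-parameter is varied while the others stay fixed.
sweeps = {
    "lr":         [0.1, 0.01, 0.001],
    "batch_size": [32, 128, 512],
    "init":       ["he", "xavier", "normal"],
    "optimizer":  ["sgd", "adam", "rmsprop"],
}

def one_at_a_time(baseline, sweeps):
    """Yield (varied_name, config) pairs for a one-factor-at-a-time study."""
    for name, values in sweeps.items():
        for value in values:
            yield name, {**baseline, name: value}

# for name, cfg in one_at_a_time(baseline, sweeps):
#     accuracy = train_vgg_like(cfg, dataset="CIFAR10")   # hypothetical trainer
```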