• Title/Summary/Keyword: Data Partition Algorithm


Multiple Symbol Detection of Trellis coded Differential space-time modulation for OFDM (OFDM에서 트렐리스 부호화된 차동 시공간 변조의 다중 심벌 검파)

  • 유항열;한상필;김진용;김성열;김종일
    • Journal of the Institute of Convergence Signal Processing / v.5 no.3 / pp.223-229 / 2004
  • Recently, OFDM and STC techniques have been considered candidates for supporting multimedia services in next-generation mobile radio communications, and many communication systems have been developed to achieve high data rates. In this paper, we propose a trellis-coded differential space-time modulation OFDM system with multiple-symbol detection. The trellis code performs set partitioning with unitary group codes. A Viterbi decoder containing new branch metrics is introduced to improve the bit error rate (BER) of differential detection of the unitary differential space-time modulation. We also describe the Viterbi algorithm needed to use these branch metrics. Our study shows that such a Viterbi decoder improves BER performance without sacrificing bandwidth or power efficiency.

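As a rough illustration of the decoding machinery described above, here is a minimal, generic Viterbi decoder in Python with a pluggable branch metric. The two-state trellis, BPSK symbol alphabet, and squared-Euclidean metric in the usage example are hypothetical stand-ins, not the paper's TC-DSTM-OFDM construction or its proposed multiple-symbol metric.

```python
def viterbi(received, states, transitions, branch_metric):
    """Generic Viterbi decoder (decoding is assumed to start from states[0]).

    received      : sequence of received observations
    states        : list of trellis states
    transitions   : dict state -> list of (input_bit, next_state, expected_symbol)
    branch_metric : callable(observation, expected_symbol) -> cost (lower is better)
    """
    INF = float("inf")
    path_metric = {s: (0.0 if s == states[0] else INF) for s in states}
    paths = {s: [] for s in states}

    for obs in received:
        new_metric = {s: INF for s in states}
        new_paths = {}
        for s in states:
            if path_metric[s] == INF:
                continue
            for bit, nxt, symbol in transitions[s]:
                cost = path_metric[s] + branch_metric(obs, symbol)
                if cost < new_metric[nxt]:
                    new_metric[nxt] = cost
                    new_paths[nxt] = paths[s] + [bit]
        path_metric, paths = new_metric, new_paths

    best = min(path_metric, key=path_metric.get)
    return paths[best]

# Toy usage: a two-state trellis with BPSK symbols (+1/-1) and a
# squared-Euclidean branch metric standing in for the paper's metric.
trellis = {0: [(0, 0, +1.0), (1, 1, -1.0)],
           1: [(0, 0, +1.0), (1, 1, -1.0)]}
rx = [0.9, -1.1, 1.2, -0.8]
print(viterbi(rx, [0, 1], trellis, lambda r, s: (r - s) ** 2))  # [0, 1, 0, 1]
```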

Domain Decomposition Strategy for Pin-wise Full-Core Monte Carlo Depletion Calculation with the Reactor Monte Carlo Code

  • Liang, Jingang;Wang, Kan;Qiu, Yishu;Chai, Xiaoming;Qiang, Shenglong
    • Nuclear Engineering and Technology / v.48 no.3 / pp.635-641 / 2016
  • Because of prohibitive data storage requirements in large-scale simulations, the memory problem is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and quantitative total memory requirements are analyzed based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are the three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing the spatial geometry into domains that are simulated separately by parallel processors. For the validity of particle tracking during transport simulations, particles need to be communicated between domains. In consideration of efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with the MC burnup process, under a strategy of utilizing a consistent domain partition in both the transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.
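
The domain decomposition idea above can be sketched in a few lines: split the geometry into domains, let each domain track only its own particle bank, and buffer particles that cross a boundary for delivery to their new owner. This toy 1-D random-walk version (synchronous, single-process) only illustrates the bookkeeping; the RMC implementation uses asynchronous MPI communication over real 3-D geometry.

```python
import random

NUM_DOMAINS = 4
X_MAX = 4.0                      # 1-D slab geometry, one slab per domain
WIDTH = X_MAX / NUM_DOMAINS

def domain_of(x):
    """Index of the slab domain that owns position x."""
    return min(int(x // WIDTH), NUM_DOMAINS - 1)

# Each domain keeps its own particle bank (here: just 1-D positions).
banks = {d: [random.uniform(d * WIDTH, (d + 1) * WIDTH) for _ in range(5)]
         for d in range(NUM_DOMAINS)}

for step in range(3):
    out_buffers = {d: [] for d in range(NUM_DOMAINS)}   # particles to hand off
    for d in range(NUM_DOMAINS):
        kept = []
        for x in banks[d]:
            x += random.gauss(0.0, 0.5)                  # toy "flight" of the particle
            if 0.0 <= x <= X_MAX:                        # otherwise it leaked out
                owner = domain_of(x)
                (kept if owner == d else out_buffers[owner]).append(x)
        banks[d] = kept
    # communication phase: deliver buffered particles to their new owning domain
    for d in range(NUM_DOMAINS):
        banks[d].extend(out_buffers[d])

print({d: len(b) for d, b in banks.items()})
```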

Dynamic Block Reassignment for Load Balancing of Block Centric Graph Processing Systems (블록 중심 그래프 처리 시스템의 부하 분산을 위한 동적 블록 재배치 기법)

  • Kim, Yewon;Bae, Minho;Oh, Sangyoon
    • KIPS Transactions on Software and Data Engineering / v.7 no.5 / pp.177-188 / 2018
  • The scale of graph data has increased rapidly because of the growth of mobile Internet applications and the proliferation of social network services. This creates an urgent need for efficient distributed and parallel graph processing, since such large-scale graphs easily exceed the capacity of a single machine. Currently, there are two popular parallel graph processing approaches: vertex-centric and block-centric graph processing. While the vertex-centric approach can easily be applied to parallel processing systems, the block-centric approach was proposed to compensate for the drawbacks of the vertex-centric approach. In these systems, the initial quality of the graph partition significantly affects overall performance. However, dividing the graph optimally in the initial phase is a very difficult problem, so several dynamic load balancing techniques that perform progressive partitioning during graph processing have been studied. In this paper, we present a load balancing algorithm for the block-centric approach, whereas most dynamic load balancing techniques focus on vertex-centric systems. Our proposed algorithm improves graph partition quality by dynamically reassigning blocks at runtime, and suggests a block split strategy for escaping local optima.
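
A minimal sketch of dynamic block reassignment in the spirit of the abstract above: blocks are greedily moved from the most loaded worker to the least loaded one until the imbalance falls below a tolerance. The data structures, the load measure, and the stopping rule are assumptions for illustration, not the paper's algorithm or its block split strategy.

```python
def rebalance(assignment, block_load, imbalance_tol=1.10, max_moves=100):
    """Greedy dynamic block reassignment (illustrative only).

    assignment : dict worker -> set of block ids
    block_load : dict block id -> measured load (e.g. runtime in the last superstep)
    Moves blocks from the most loaded worker to the least loaded one until the
    max/average load ratio drops below imbalance_tol or no helpful move remains.
    """
    def load(w):
        return sum(block_load[b] for b in assignment[w])

    for _ in range(max_moves):
        loads = {w: load(w) for w in assignment}
        avg = sum(loads.values()) / len(loads)
        heavy = max(loads, key=loads.get)
        light = min(loads, key=loads.get)
        if avg == 0 or loads[heavy] / avg <= imbalance_tol:
            break
        gap = loads[heavy] - loads[light]
        movable = [b for b in assignment[heavy] if block_load[b] < gap]
        if not movable:
            break
        # move the block whose load best fills half of the gap
        b = min(movable, key=lambda blk: abs(block_load[blk] - gap / 2))
        assignment[heavy].remove(b)
        assignment[light].add(b)
    return assignment

# Example: three workers, one clearly overloaded
blocks = {f"b{i}": w for i, w in enumerate([9, 7, 2, 2, 1, 1])}
workers = {"w0": {"b0", "b1"}, "w1": {"b2", "b3"}, "w2": {"b4", "b5"}}
print(rebalance(workers, blocks))
```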

User Bandwidth Demand Centric Soft-Association Control in Wi-Fi Networks

  • Sun, Guolin;Adolphe, Sebakara Samuel Rene;Zhang, Hangming;Liu, Guisong;Jiang, Wei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.2 / pp.709-730 / 2017
  • To address the challenge of unprecedented growth in mobile data traffic, ultra-dense network deployment is a cost-efficient solution for offloading traffic onto small cells. The overlapping coverage areas of small cells create more than one candidate access point for a mobile user. Signal-strength-based user association in IEEE 802.11 results in a significantly unbalanced load distribution among access points. Moreover, the effective bandwidth demand of each user varies greatly with their preferences for mobile applications. In this paper, we formulate a set of non-linear integer programming models for the joint user association control and user demand guarantee problem. In this model, we aim to maximize system capacity and guarantee the effective bandwidth demand of each user through soft-association control with a software-defined network controller. Given the NP-hard complexity of solving non-linear integer programs, we propose a Kernighan-Lin-algorithm-based graph-partitioning method for large-scale networks. Finally, we evaluate the performance of the proposed algorithm for edge users with heterogeneous bandwidth demands and mobility scenarios. Simulation results show that the proposed adaptive soft-association control achieves better performance than the other two schemes compared and improves the individual quality of user experience at a small cost in system throughput.
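
The Kernighan-Lin graph-partitioning step can be sketched with networkx, which ships a KL bisection routine. The association graph, node names, and edge weights below are hypothetical; the paper's actual graph construction and soft-association controller are not reproduced.

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

# Hypothetical association graph: nodes are users and access points,
# edge weights reflect how strongly a user and an AP interact.
G = nx.Graph()
edges = [("u1", "ap1", 5.0), ("u2", "ap1", 3.0), ("u3", "ap2", 4.0),
         ("u4", "ap2", 2.0), ("u2", "ap2", 0.5), ("u3", "ap1", 0.4)]
G.add_weighted_edges_from(edges)

# Kernighan-Lin bisection: swap node pairs between the two halves to minimise
# the total weight of cut edges, so tightly coupled user/AP groups stay together
# and each half can be solved as a smaller association subproblem.
part_a, part_b = kernighan_lin_bisection(G, weight="weight", seed=42)
print(part_a, part_b)
```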

Mode Selection Technique Between Antenna Grouping and Beamforming for MIMO Communication Systems (다중 입출력 시스템에서 안테나 그룹화와 빔 형성 사이의 모드 선택 기법)

  • Kim, Kyung-Chul;Lee, Jung-Woo
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.2A / pp.147-154 / 2009
  • The antenna grouping algorithm is a hybrid of beamforming and spatial multiplexing. In an antenna grouping system, we partition $N_t$ transmit antennas into $N_r$ groups and use beamforming within a group and spatial multiplexing between groups, so $N_r$ data streams can be transmitted in the $N_t{\times}N_r$ antenna grouping system. With antenna grouping, we achieve diversity gain through beamforming and high spectral efficiency through spatial multiplexing. However, if the channel is ill-conditioned or there is correlation between antennas, the performance of antenna grouping degrades seriously, and in that case beamforming is the best transmit strategy. By selecting the antenna grouping mode when the channel is well-conditioned and the beamforming mode when it is ill-conditioned, we can prevent serious fluctuation of BER performance caused by varying channel conditions and achieve the best BER performance. In this paper, we investigate a mode selection algorithm that chooses between antenna grouping and beamforming, and we also propose a simple mode selection criterion.
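
A minimal numpy sketch of the mode-switching idea: estimate how well-conditioned the channel matrix is and fall back to beamforming when it is ill-conditioned. Using the condition number with a fixed threshold is an illustrative assumption, not the paper's proposed selection criterion.

```python
import numpy as np

def select_mode(H, cond_threshold=10.0):
    """Pick a transmit mode from the channel matrix H (Nr x Nt).

    A well-conditioned H supports spatial multiplexing across antenna groups;
    an ill-conditioned (or strongly correlated) H favours pure beamforming.
    The condition-number test is an illustrative stand-in for the paper's criterion.
    """
    condition = np.linalg.cond(H)   # ratio of largest to smallest singular value
    return "antenna_grouping" if condition < cond_threshold else "beamforming"

# Example: a 2x4 Rayleigh-fading channel realisation
rng = np.random.default_rng(0)
H = (rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))) / np.sqrt(2)
print(select_mode(H))
```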

Characteristics of Gas Furnace Process by Means of Partition of Input Spaces in Trapezoid-type Function (사다리꼴형 함수의 입력 공간분할에 의한 가스로공정의 특성분석)

  • Lee, Dong-Yoon
    • Journal of Digital Convergence / v.12 no.4 / pp.277-283 / 2014
  • Fuzzy modeling generally uses the given data, and the fuzzy rules are established by selecting the input variables and dividing the input space for each of them. The premise part of a fuzzy rule is defined by the selection of input variables, the number of space divisions, and the membership functions; in this paper, the consequent part of the fuzzy rule is identified by polynomial functions in the form of linear and modified quadratic inference. For parameter identification in the premise part, the input space is divided either by the Min-Max method, which uses the minimum and maximum values of the input data set, or by the C-Means clustering algorithm, which forms the input data into hard clusters. The consequent parameters of each rule, namely the polynomial coefficients, are identified by the standard least squares method. In this paper, the premise membership functions divide the input space using trapezoid-type membership functions, and we evaluate the performance using the gas furnace process, which is widely used as a nonlinear process benchmark.
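
A compact numpy sketch of the ingredients the abstract names: a trapezoid-type membership function, Min-Max division of an input variable's range, and standard least-squares fitting of linear consequents. It is reduced to one input and two rules on synthetic data, not the paper's gas furnace model.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoid-type membership: 0 outside [a, d], 1 on [b, c], linear shoulders."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

# Toy single-input data (stand-in for the gas furnace series)
rng = np.random.default_rng(1)
x = rng.uniform(-2.0, 3.0, 200)
y = 0.5 * x**2 + rng.normal(0.0, 0.1, x.size)

# Min-Max partition: split [min, max] into two overlapping trapezoids
lo, hi, mid = x.min(), x.max(), (x.min() + x.max()) / 2
mf = np.stack([trapezoid(x, lo - 1, lo, mid - 0.5, mid + 0.5),
               trapezoid(x, mid - 0.5, mid + 0.5, hi, hi + 1)])
w = mf / mf.sum(axis=0)                       # normalised firing strengths

# Linear consequents y_i = a_i + b_i * x, fitted jointly by standard least squares
A = np.column_stack([w[0], w[0] * x, w[1], w[1] * x])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef
print("RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```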

Data-centric XAI-driven Data Imputation of Molecular Structure and QSAR Model for Toxicity Prediction of 3D Printing Chemicals (3D 프린팅 소재 화학물질의 독성 예측을 위한 Data-centric XAI 기반 분자 구조 Data Imputation과 QSAR 모델 개발)

  • ChanHyeok Jeong;SangYoun Kim;SungKu Heo;Shahzeb Tariq;MinHyeok Shin;ChangKyoo Yoo
    • Korean Chemical Engineering Research / v.61 no.4 / pp.523-541 / 2023
  • As accessibility to 3D printers increases, exposure to chemicals associated with 3D printing is becoming more frequent. However, research on the toxicity and harmfulness of chemicals generated by 3D printing is insufficient, and the performance of toxicity prediction using in silico techniques is limited by missing molecular structure data. In this study, a quantitative structure-activity relationship (QSAR) model based on a data-centric AI approach was developed to predict the toxicity of new 3D printing materials by imputing missing values in molecular descriptors. First, the MissForest algorithm was used to impute missing values in the molecular descriptors of hazardous 3D printing materials. Then, based on four machine learning models (decision tree, random forest, XGBoost, SVM), a machine learning (ML)-based QSAR model was developed to predict the bioconcentration factor (Log BCF), the octanol-air partition coefficient (Log Koa), and the partition coefficient (Log P). Furthermore, the reliability of the data-centric QSAR model was validated through Tree-SHAP (SHapley Additive exPlanations), one of the explainable artificial intelligence (XAI) techniques. The proposed MissForest-based imputation enlarged the molecular structure dataset approximately 2.5-fold compared to the existing data. Based on the imputed molecular descriptor dataset, the developed data-centric QSAR model achieved prediction performance of approximately 73%, 76%, and 92% for Log BCF, Log Koa, and Log P, respectively. Lastly, Tree-SHAP analysis showed that the data-centric QSAR model achieved high prediction performance for toxicity information by identifying key molecular descriptors highly correlated with the toxicity indices. Therefore, the proposed QSAR model based on the data-centric XAI approach can be extended to predict the toxicity of potential pollutants in emerging printing chemicals and in chemical, semiconductor, or display processes.
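
A hedged sketch of the imputation-plus-QSAR pipeline: scikit-learn's IterativeImputer with a random-forest estimator is used here as a stand-in for MissForest, followed by a random-forest regressor on the imputed descriptors. The descriptors and target are synthetic placeholders, and the Tree-SHAP validation step is omitted.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))                         # synthetic molecular descriptors
y = X[:, 0] * 2 - X[:, 3] + rng.normal(0, 0.2, 300)   # synthetic Log P-like target
X_missing = X.copy()
X_missing[rng.random(X.shape) < 0.2] = np.nan         # 20% missing descriptor values

# MissForest-style imputation: iterative imputation with a random-forest estimator
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=50, random_state=0),
    max_iter=5, random_state=0)
X_imp = imputer.fit_transform(X_missing)

# QSAR regression model trained on the imputed descriptors
X_tr, X_te, y_tr, y_te = train_test_split(X_imp, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R2:", r2_score(y_te, model.predict(X_te)))
```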

Design and Implementation of a Large-Scale Spatial Reasoner Using MapReduce Framework (맵리듀스 프레임워크를 이용한 대용량 공간 추론기의 설계 및 구현)

  • Nam, Sang Ha;Kim, In Cheol
    • KIPS Transactions on Software and Data Engineering / v.3 no.10 / pp.397-406 / 2014
  • In order to answer questions successfully on behalf of a human in DeepQA environments such as the American quiz show Jeopardy!, a computer must be capable of fast temporal and spatial reasoning over a large-scale commonsense knowledge base. In this paper, we present a scalable spatial reasoning algorithm for efficiently deriving new directional and topological relations using the MapReduce framework, one of the well-known parallel distributed computing environments. The proposed reasoning algorithm takes as input a large-scale spatial knowledge base including CSD-9 directional relations and RCC-8 topological relations. To infer new directional and topological relations from the given spatial knowledge base, it performs cross-consistency checks as well as path-consistency checks on the knowledge base. To maximize the parallelism of the reasoning computations according to the principles of the MapReduce framework, the algorithm partitions the large knowledge base into smaller ones and distributes them over multiple computing nodes in the map phase; in the reduce phase, it infers new knowledge from the distributed spatial knowledge bases. Through experiments on a sample knowledge base with a MapReduce-based implementation of our algorithm, we demonstrated the high performance of our large-scale spatial reasoner.
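
A toy, single-process imitation of the map/shuffle/reduce flow for spatial composition: the map phase keys each relation triple by the region it can be joined on, and the reduce phase composes pairs of triples that share that region. The one-entry composition table is a hypothetical fragment, not the full RCC-8/CSD-9 tables or the paper's cross-consistency checks.

```python
from collections import defaultdict
from itertools import product

# Toy spatial knowledge base: (region, relation, region) triples.
KB = [("a", "NTPP", "b"), ("b", "NTPP", "c"), ("c", "EC", "d")]
COMPOSE = {("NTPP", "NTPP"): "NTPP"}   # "inside of inside is inside"

def map_phase(triple):
    """Emit each triple twice, keyed by the region it can be joined on."""
    s, r, o = triple
    return [(o, ("left", s, r)), (s, ("right", r, o))]

def reduce_phase(key, values):
    """Join left/right triples sharing the middle region `key` and compose them."""
    lefts = [(s, r) for tag, s, r in values if tag == "left"]
    rights = [(r, o) for tag, r, o in values if tag == "right"]
    return [(s, COMPOSE[(r1, r2)], o)
            for (s, r1), (r2, o) in product(lefts, rights)
            if (r1, r2) in COMPOSE and s != o]

# Driver: group by key, then reduce -- what Hadoop would do across nodes.
groups = defaultdict(list)
for triple in KB:
    for key, value in map_phase(triple):
        groups[key].append(value)
inferred = [t for key, vals in groups.items() for t in reduce_phase(key, vals)]
print(inferred)   # [('a', 'NTPP', 'c')]
```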

Characteristics of Input-Output Spaces of Fuzzy Inference Systems by Means of Membership Functions and Performance Analyses (소속 함수에 의한 퍼지 추론 시스템의 입출력 공간 특성 및 성능 분석)

  • Park, Keon-Jun;Lee, Dong-Yoon
    • The Journal of the Korea Contents Association / v.11 no.4 / pp.74-82 / 2011
  • Fuzzy modeling of a nonlinear process requires analyzing the input-output characteristics of fuzzy inference systems according to how the entire input space is divided and which fuzzy reasoning method is used. To this end, a fuzzy model is expressed by identifying the structure and parameters of the system by means of input variables, fuzzy partition of the input spaces, and consequent polynomial functions. In the premise part of the fuzzy rules, the Min-Max method, which uses the minimum and maximum values of the input data set, and the C-Means clustering algorithm, which forms the input data into clusters, are used to identify the fuzzy model, with triangular, Gaussian-like, and trapezoid-type membership functions. In the consequent part of the fuzzy rules, fuzzy reasoning is conducted with two types of inference, simplified and linear. The consequent parameters of each rule, namely the polynomial coefficients, are identified by the standard least squares method. Finally, we evaluate the performance and system characteristics using the gas furnace process, which is widely used as a nonlinear process benchmark.
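
A small numpy/scikit-learn sketch contrasting the two premise-identification options named above, Min-Max partitioning versus C-Means clustering, for placing triangular membership functions over one input. KMeans stands in for hard C-Means, and the clumpy synthetic data is chosen only to make the difference visible.

```python
import numpy as np
from sklearn.cluster import KMeans

def triangular(x, a, b, c):
    """Triangular membership centred at b with feet a and c."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.3, 100), rng.normal(2, 0.3, 100)])  # clumpy input

n_mf = 3
# Min-Max partition: centres spread evenly between the data minimum and maximum
centres_minmax = np.linspace(x.min(), x.max(), n_mf)
# C-Means-style partition: centres follow where the data actually concentrates
centres_cmeans = np.sort(KMeans(n_clusters=n_mf, n_init=10, random_state=0)
                         .fit(x.reshape(-1, 1)).cluster_centers_.ravel())

for name, centres in [("Min-Max", centres_minmax), ("C-Means", centres_cmeans)]:
    half = centres[1] - centres[0]              # feet one partition-width away
    mf = np.stack([triangular(x, c - half, c, c + half) for c in centres])
    print(name, "centres:", np.round(centres, 2),
          "mean max-membership:", round(float(mf.max(axis=0).mean()), 3))
```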

A Study on Fuzzy Set-based Polynomial Neural Networks Based on Evolutionary Data Granulation (Evolutionary Data Granulation 기반으로한 퍼지 집합 다항식 뉴럴 네트워크에 관한 연구)

  • 노석범;안태천;오성권
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2004.10a / pp.433-436 / 2004
  • In this paper, we introduce a new Fuzzy Polynomial Neural Networks (FPNN)-like structure whose neurons are based on the fuzzy-set-based fuzzy inference system (FS-FIS), in contrast to FPNNs based on the fuzzy-relation-based fuzzy inference system (FR-FIS), and discuss the ability of this new structure, named Fuzzy Set-based Polynomial Neural Networks (FSPNN). The premise parts of their fuzzy rules are not identical, while the consequent parts of both networks (FPNN and FSPNN) are identical. This difference stems from how the input space of the system is partitioned: from the point of view of the FS-FIS, the input variables are mutually independent over the input space, while from the viewpoint of the FR-FIS they are related to each other. The proposed design procedure for the network architecture involves the selection of appropriate nodes with specific local characteristics, such as the number of input variables, the order of the polynomial (constant, linear, quadratic, or modified quadratic) used as the consequent part of the fuzzy rules, and a collection of specific subsets of input variables. In the parameter optimization phase, we adopt Information Granulation (IG) based on the HCM clustering algorithm and standard least-squares-based learning. Through this consecutive structural and parametric optimization, an optimized and flexible fuzzy neural network is generated in a dynamic fashion. To evaluate the performance of the genetically optimized FSPNN (gFSPNN), the model is tested on the time series dataset of the gas furnace process.

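A toy sketch of the granulation-plus-consequent idea described above: HCM (hard c-means, i.e. k-means) clustering granulates a two-input space, and a linear polynomial consequent is fitted per granule by least squares. The synthetic data, crisp (non-fuzzy) granule assignment, and absence of any genetic structure optimization are simplifications relative to the gFSPNN.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 2))                       # two input variables
y = np.sin(np.pi * X[:, 0]) * X[:, 1] + rng.normal(0, 0.05, 300)

# Information granulation via HCM (hard c-means == k-means): each cluster of the
# input space becomes one granule, i.e. one rule's premise region.
n_granules = 4
labels = KMeans(n_clusters=n_granules, n_init=10, random_state=0).fit_predict(X)

# Consequent part: a linear polynomial per granule, fitted by standard least squares
coefs = []
for g in range(n_granules):
    idx = labels == g
    A = np.column_stack([np.ones(idx.sum()), X[idx]])       # [1, x1, x2]
    c, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    coefs.append(c)

# Crisp (hard-granule) prediction: each sample uses its own granule's polynomial
A_all = np.column_stack([np.ones(len(X)), X])
y_hat = np.array([A_all[i] @ coefs[labels[i]] for i in range(len(X))])
print("RMSE:", round(float(np.sqrt(np.mean((y - y_hat) ** 2))), 4))
```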