• Title/Summary/Keyword: Size-based selection


Energy-Efficiency of Distributed Antenna Systems Relying on Resource Allocation

  • Huang, Xiaoge; Zhang, Dongyu; Dai, Weipeng; Tang, She
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.3, pp.1325-1344, 2019
  • Recently, to satisfy mobile users' increasing data transmission requirements, energy-efficiency (EE) resource allocation in distributed antenna systems (DASs) has become a hot topic. In this paper, we aim to maximize EE in DASs subject to constraints on the minimum data rate requirement and the maximum transmission power of distributed antenna units (DAUs) with different density distributions. A virtual cell is defined as the set of DAUs selected by the same user equipment (UE), and the size of a virtual cell depends on the number of subcarriers and the transmission power. The DAU selection rule depends on the scenario; we develop two scenarios based on the density of DAUs, namely the sparse scenario and the dense scenario. In the sparse scenario, each DAU can be selected by only one UE to avoid co-channel interference. To make the original non-convex optimization problem tractable, we transform it into an equivalent fractional program and solve it via two sub-problems: optimal subcarrier allocation to find suitable DAUs, and optimal power allocation for each subcarrier. In the dense scenario, UEs may access the same channel and generate co-channel interference; here the optimization problem is transformed into a convex form based on an interference upper bound and fractional programming. In addition, an energy-efficient DAU selection scheme based on large-scale fading is developed to maximize EE. Finally, simulation results demonstrate the effectiveness of the proposed algorithm in both sparse and dense scenarios.
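
The fractional-programming transform the abstract mentions is typically solved with Dinkelbach's iteration, which turns the ratio maximization rate/power into a sequence of subtractive problems. The sketch below is a generic, single-variable illustration of that idea under assumed toy rate and power functions, not the paper's joint subcarrier/power allocation.

```python
import math

def dinkelbach(rate, power, p_lo, p_hi, tol=1e-9):
    """Generic Dinkelbach iteration for max_p rate(p) / power(p).

    Assumes rate is concave increasing and power is affine increasing in p,
    so each inner problem max_p rate(p) - q * power(p) is concave and can be
    solved by golden-section search. A toy sketch, not the paper's model."""
    def inner_max(q):
        gr = (math.sqrt(5) - 1) / 2
        a, b = p_lo, p_hi
        while b - a > 1e-10:
            c, d = b - gr * (b - a), a + gr * (b - a)
            if rate(c) - q * power(c) >= rate(d) - q * power(d):
                b = d                      # maximum lies in [a, d]
            else:
                a = c
        p = (a + b) / 2
        return p, rate(p) - q * power(p)

    q = 0.0
    for _ in range(100):
        p, f = inner_max(q)
        if abs(f) < tol:                   # F(q) = 0 at the optimal ratio
            break
        q = rate(p) / power(p)             # Dinkelbach update
    return p, q                            # optimal power and energy efficiency

# toy example: rate(p) = log(1 + p), power(p) = p + 0.1 (fixed circuit power)
p_opt, ee = dinkelbach(lambda p: math.log(1 + p), lambda p: p + 0.1, 1e-6, 10.0)
```

At the optimum the stationarity condition rate'(p) = q · power'(p) holds, which is a quick sanity check on the returned pair.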

Estimation of co-variance components, genetic parameters, and genetic trends of reproductive traits in community-based breeding program of Bonga sheep in Ethiopia

  • Areb, Ebadu; Getachew, Tesfaye; Kirmani, MA; G.silase, Tegbaru; Haile, Aynalem
    • Animal Bioscience, v.34 no.9, pp.1451-1459, 2021
  • Objective: The objectives of the study were to evaluate reproductive performance and selection response, through genetic trends, of community-based breeding programs (CBBPs) of Bonga sheep. Methods: Reproduction trait data were collected between 2012 and 2018 from Bonga sheep CBBPs. Phenotypic performance was analyzed using the general linear model procedures of the Statistical Analysis System. Genetic parameters were estimated with a univariate animal model for age at first lambing (AFL) and with repeatability models for lambing interval (LI), litter size (LS), and annual reproductive rate (ARR), using the restricted maximum likelihood method of WOMBAT. For correlations, a bivariate animal model was used. The best model was chosen based on a likelihood ratio test. Genetic trends were estimated by the weighted regression of the average breeding value of the animals on the year of birth/lambing. Results: The overall least squares means±standard errors of AFL, LI, LS, and ARR were 375±12.5, 284±9.9, 1.45±0.010, and 2.31±0.050, respectively. Direct heritability estimates for AFL, LI, LS, and ARR were 0.07±0.190, 0.06±0.120, 0.18±0.070, and 0.25±0.203, respectively. The low heritability of both AFL and LI indicates that these traits respond little to selection and depend largely on animal management. The annual genetic gains were -0.0281 days, -0.016 days, -0.0002 lambs, and 0.0003 lambs for AFL, LI, LS, and ARR, respectively. Conclusion: The results imply that future improvement programs should focus on improving animal management, conserving prolific flocks, and scaling out the CBBP to obtain better results.
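
The genetic-trend step above is a weighted regression of yearly mean breeding values on year. A minimal sketch of that calculation follows; the yearly EBV means and record counts are invented for illustration, not Bonga sheep data.

```python
def genetic_trend(years, mean_ebv, n_records):
    """Weighted least-squares regression of yearly mean breeding values on
    year, weighting each year by its record count. The slope is the
    estimated annual genetic gain."""
    w = n_records
    sw = sum(w)
    xbar = sum(wi * x for wi, x in zip(w, years)) / sw      # weighted mean year
    ybar = sum(wi * y for wi, y in zip(w, mean_ebv)) / sw   # weighted mean EBV
    sxy = sum(wi * (x - xbar) * (y - ybar)
              for wi, x, y in zip(w, years, mean_ebv))
    sxx = sum(wi * (x - xbar) ** 2 for wi, x in zip(w, years))
    slope = sxy / sxx                                       # annual genetic gain
    intercept = ybar - slope * xbar
    return slope, intercept

# made-up yearly means of estimated breeding values and record counts
years = [2012, 2013, 2014, 2015, 2016, 2017, 2018]
ebv   = [0.00, -0.01, -0.03, -0.02, -0.05, -0.06, -0.08]
n     = [120, 150, 180, 200, 210, 190, 170]
gain, _ = genetic_trend(years, ebv, n)
```

A negative slope, as in the toy data, corresponds to the declining trends the abstract reports for AFL and LI.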

Automatic selection method of ROI(region of interest) using land cover spatial data (토지피복 공간정보를 활용한 자동 훈련지역 선택 기법)

  • Cho, Ki-Hwan; Jeong, Jong-Chul
    • Journal of Cadastre & Land InformatiX, v.48 no.2, pp.171-183, 2018
  • Despite the rapid expansion of satellite image supply, the application of imagery is often restricted by unautomated image processing. This paper presents an automated process for selecting the training areas essential to supervised image classification. The training areas were selected based on prior land cover information. After the selection, the training data were used to classify land cover in an urban area with the latest image, and the classification accuracy was evaluated. The automatic selection of training areas proceeded in the following steps: 1) redraw the inner areas of prior land cover polygons with a negative buffer (-15 m); 2) select the polygons within a proper area range (2,000~200,000 m²); 3) calculate the mean and standard deviation of reflectance and NDVI of the polygons; 4) select, for each land cover type, the polygons having a characteristic mean value with minimum standard deviation. The supervised image classification was conducted using the automatically selected training data with Sentinel-2 images from 2017. The accuracy of land cover classification was 86.9% (K̂ = 0.81). The results show that the automatic selection process is effective and can help remove the bottleneck in the application of imagery.
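
Steps 2-4 of the selection procedure reduce to a filter over polygon statistics. The sketch below assumes step 1 (the -15 m negative buffer) has already been applied by a GIS library, and that each polygon arrives as a dict of precomputed stats; the field names (area, cover, mean_ndvi, std_ndvi) and the toy values are illustrative, not the paper's data.

```python
def select_training_polygons(polygons, min_area=2_000, max_area=200_000):
    """Sketch of the training-area filter (steps 2-4 of the abstract):
    keep polygons in the prescribed area range, then per land-cover class
    retain the polygon with minimal NDVI standard deviation, i.e. the
    most spectrally homogeneous sample."""
    # step 2: area filter (2,000 to 200,000 m^2)
    sized = [p for p in polygons if min_area <= p["area"] <= max_area]
    # steps 3-4: per class, keep the candidate with minimum std deviation
    best = {}
    for p in sized:
        cls = p["cover"]
        if cls not in best or p["std_ndvi"] < best[cls]["std_ndvi"]:
            best[cls] = p
    return best

polygons = [
    {"id": 1, "cover": "forest", "area": 50_000, "mean_ndvi": 0.8, "std_ndvi": 0.05},
    {"id": 2, "cover": "forest", "area": 30_000, "mean_ndvi": 0.7, "std_ndvi": 0.02},
    {"id": 3, "cover": "urban",  "area": 1_000,  "mean_ndvi": 0.1, "std_ndvi": 0.01},
    {"id": 4, "cover": "urban",  "area": 10_000, "mean_ndvi": 0.2, "std_ndvi": 0.03},
]
roi = select_training_polygons(polygons)
```

Polygon 3 is rejected by the area filter despite its low standard deviation, so the urban class falls back to polygon 4.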

Hyper-Rectangle Based Prototype Selection Algorithm Preserving Class Regions (클래스 영역을 보존하는 초월 사각형에 의한 프로토타입 선택 알고리즘)

  • Baek, Byunghyun; Euh, Seongyul; Hwang, Doosung
    • KIPS Transactions on Software and Data Engineering, v.9 no.3, pp.83-90, 2020
  • Prototype selection ensures low learning time and storage space by selecting, from the training data, a minimal set of data representative of in-class partitions. This paper designs a new training data generation method using hyper-rectangles that can be applied to general classification algorithms. A hyper-rectangular region contains no data of other classes and divides the space of its own class. The median value of the data within a hyper-rectangle is selected as a prototype to form new training data, and the size of the hyper-rectangle is adjusted to reflect the data distribution in the class region. A set cover optimization algorithm is proposed to select the minimum prototype set that represents the whole training data. The proposed method reduces the polynomial time complexity of set cover optimization by using a greedy algorithm and a multiplication-free distance computation. In experimental comparison with hyper-sphere prototype selection, the proposed method is superior in terms of prototype rate and generalization performance.
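
The greedy step the abstract describes is the standard greedy approximation to set cover: repeatedly pick the candidate covering the most still-uncovered training points. A minimal sketch follows; here candidates are plain index sets standing in for the points each hyper-rectangle covers, and the toy instance is invented.

```python
def greedy_set_cover(universe, candidates):
    """Greedy set-cover approximation: at each step choose the candidate
    set with maximum marginal coverage of the uncovered points. In the
    paper each candidate corresponds to a hyper-rectangle prototype."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda s: len(uncovered & candidates[s]))
        if not uncovered & candidates[best]:
            break                          # remaining points are uncoverable
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen

# toy instance: 6 training points, 4 hyper-rectangle candidates
cover = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}, "D": {1, 6}}
picked = greedy_set_cover(range(1, 7), cover)
```

The greedy choice achieves the well-known ln(n) approximation guarantee while avoiding the polynomial cost of exact set cover.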

Convergence Characteristics of Ant Colony Optimization with Selective Evaluation in Feature Selection (특징 선택에서 선택적 평가를 사용하는 개미 군집 최적화의 수렴 특성)

  • Lee, Jin-Seon; Oh, Il-Seok
    • The Journal of the Korea Contents Association, v.11 no.10, pp.41-48, 2011
  • In feature selection, a selective evaluation scheme for Ant Colony Optimization (ACO) has recently been proposed, which reduces computational load by excluding unnecessary or less promising candidate solutions from actual evaluation. Its superiority was supported by experimental results, but the experiment was not statistically sufficient, as it used only one dataset. The aim of this paper is to analyze the convergence characteristics of the selective evaluation scheme and make its conclusions more convincing. We chose three datasets, from the handwriting, medical, and speech domains, from the UCI repository, with feature set sizes ranging from 256 to 617. For each of them, we executed 12 independent runs in order to obtain statistically stable data. Each run was given 72 hours to observe long-term convergence. Based on an analysis of the experimental data, we explain the reason for the scheme's superiority and describe where it can be applied.
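
One way to realize "selective evaluation" is to skip the costly fitness call for any candidate whose pheromone-based desirability falls below a fraction of the best evaluated candidate so far. The loop below is a heavily simplified, hedged sketch of that idea; the sampling rule, threshold fraction, and pheromone update are assumptions for illustration, not the authors' exact algorithm.

```python
import random

def selective_aco(n_feats, fitness, n_ants=10, n_iters=20, frac=0.8, seed=0):
    """Minimal ACO feature-selection loop with selective evaluation: a
    candidate subset is actually evaluated only if its mean pheromone
    score reaches `frac` of the best evaluated score so far; otherwise
    the expensive fitness call is skipped."""
    rng = random.Random(seed)
    tau = [1.0] * n_feats                         # pheromone per feature
    best_subset, best_fit, n_evals = None, float("-inf"), 0
    best_pher = 0.0
    for _ in range(n_iters):
        for _ant in range(n_ants):
            subset = [i for i in range(n_feats)
                      if rng.random() < tau[i] / (1.0 + tau[i])]
            if not subset:
                continue
            pher = sum(tau[i] for i in subset) / len(subset)
            if pher < frac * best_pher:
                continue                          # selective: skip evaluation
            n_evals += 1
            fit = fitness(subset)
            if fit > best_fit:
                best_fit, best_subset, best_pher = fit, subset, pher
        if best_subset:                           # reinforce the best subset
            for i in best_subset:
                tau[i] += 0.1
    return best_subset, best_fit, n_evals

# toy fitness: features 0 and 1 are informative, larger subsets are penalized
fit = lambda s: sum(1 for i in s if i < 2) - 0.1 * len(s)
best, best_f, n_evals = selective_aco(5, fit)
```

The count of actual evaluations (`n_evals`) versus the total number of sampled ants is exactly the saving the scheme is designed to deliver.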

Bandwidth selections based on cross-validation for estimation of a discontinuity point in density (교차타당성을 이용한 확률밀도함수의 불연속점 추정의 띠폭 선택)

  • Huh, Jib
    • Journal of the Korean Data and Information Science Society, v.23 no.4, pp.765-775, 2012
  • Cross-validation is a popular method for selecting the bandwidth in all types of kernel estimation. Maximum likelihood cross-validation, least squares cross-validation, and biased cross-validation have been proposed for bandwidth selection in kernel density estimation. For the case where the probability density function has a discontinuity point, Huh (2012) proposed a bandwidth selection method using maximum likelihood cross-validation. In this paper, two forms of cross-validation with one-sided kernel functions are proposed for selecting the bandwidths used to estimate the location and jump size of the discontinuity point of a density. These methods are motivated by least squares cross-validation and biased cross-validation. Through simulated examples, the finite-sample performance of the two proposed methods is compared with that of Huh (2012).
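
For orientation, the plain (two-sided Gaussian kernel) least squares cross-validation criterion is sketched below: minimize an unbiased estimate of the integrated squared error, using the closed-form integral of the squared kernel density estimate and the leave-one-out density at each observation. The one-sided variants in the paper follow the same recipe with one-sided kernels; this sketch covers only the standard case.

```python
import math, random

def lscv_score(data, h):
    """Least-squares CV score for a Gaussian-kernel KDE: the integral of
    the squared estimate (closed form: the kernel convolved with itself
    is a N(0, 2) density) minus twice the mean leave-one-out density."""
    n = len(data)
    phi = lambda u: math.exp(-u * u / 2) / math.sqrt(2 * math.pi)
    conv = lambda u: math.exp(-u * u / 4) / (2 * math.sqrt(math.pi))
    int_f2 = sum(conv((x - y) / h) for x in data for y in data) / (n * n * h)
    loo = sum(phi((data[i] - data[j]) / h)
              for i in range(n) for j in range(n) if i != j) / (n * (n - 1) * h)
    return int_f2 - 2 * loo

def lscv_bandwidth(data, grid):
    """Pick the grid bandwidth minimizing the LSCV score."""
    return min(grid, key=lambda h: lscv_score(data, h))

rng = random.Random(1)
data = [rng.gauss(0, 1) for _ in range(100)]
grid = [round(0.1 * k, 1) for k in range(1, 13)]
h_opt = lscv_bandwidth(data, grid)
```

In practice the grid search would be replaced by a numerical minimizer, but the criterion itself is unchanged.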

Classification Prediction Error Estimation System of Microarray for a Comparison of Resampling Methods Based on Multi-Layer Perceptron (다층퍼셉트론 기반 리 샘플링 방법 비교를 위한 마이크로어레이 분류 예측 에러 추정 시스템)

  • Park, Su-Young; Jeong, Chai-Yeoung
    • Journal of the Korea Institute of Information and Communication Engineering, v.14 no.2, pp.534-539, 2010
  • In genomic studies, thousands of features are collected on relatively few samples. One goal of these studies is to build classifiers to predict the outcome of future observations. There are three inherent steps in building classifiers: significant gene selection, model selection, and prediction assessment. In this paper, focusing on prediction assessment, we normalize microarray data with quantile normalization, which adjusts the quantiles of all slides equally, and then design a system comparing several methods for estimating the 'true' prediction error of a prediction model in the presence of feature selection, and compare and analyze their prediction errors. LOOCV generally performs very well, with small MSE and bias; the split-sample method and 2-fold CV perform very poorly with small sample sizes. For computationally burdensome analyses, 10-fold CV may be preferable to LOOCV.
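
The resampling estimators being compared differ only in how the data are partitioned: LOOCV is k-fold CV with k equal to the sample size. A dependency-free sketch follows, using a nearest-centroid classifier as a stand-in for the paper's multi-layer perceptron and synthetic one-dimensional data in place of microarray features.

```python
import random

def nearest_centroid_error(train, test):
    """Error rate of a 1-D nearest-centroid classifier (a stand-in for
    the paper's MLP, chosen to keep the sketch dependency-free)."""
    groups = {}
    for x, y in train:
        groups.setdefault(y, []).append(x)
    cents = {y: sum(v) / len(v) for y, v in groups.items()}
    wrong = sum(1 for x, y in test
                if min(cents, key=lambda c: abs(x - cents[c])) != y)
    return wrong / len(test)

def cv_error(data, k):
    """k-fold cross-validated error; k == len(data) gives LOOCV."""
    folds = [data[i::k] for i in range(k)]
    errs = []
    for i, test in enumerate(folds):
        train = [d for j, f in enumerate(folds) if j != i for d in f]
        errs.append(nearest_centroid_error(train, test))
    return sum(errs) / len(errs)

rng = random.Random(0)
data = ([(rng.gauss(0, 1), 0) for _ in range(30)]
        + [(rng.gauss(2, 1), 1) for _ in range(30)])
rng.shuffle(data)
loocv = cv_error(data, len(data))   # leave-one-out
cv10 = cv_error(data, 10)           # 10-fold
```

With 60 samples, 10-fold CV needs 10 model fits versus 60 for LOOCV, which is the computational trade-off the abstract's last sentence points to.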

Feature selection for text data via sparse principal component analysis (희소주성분분석을 이용한 텍스트데이터의 단어선택)

  • Won Son
    • The Korean Journal of Applied Statistics, v.36 no.6, pp.501-514, 2023
  • When analyzing high-dimensional data such as text data, if we input all the variables as explanatory variables, statistical learning procedures may suffer from over-fitting. Furthermore, computational efficiency can deteriorate with a large number of variables. Dimensionality reduction techniques such as feature selection or feature extraction are useful for dealing with these problems. Sparse principal component analysis (SPCA) is a regularized least squares method that employs an elastic net-type objective function. SPCA can be used to remove insignificant principal components and identify important variables from noisy observations. In this study, we propose a dimension reduction procedure for text data based on SPCA. Applying the proposed procedure to real data, we find that the reduced feature set maintains sufficient information in the text data while its size is reduced by removing redundant variables. As a result, the proposed procedure can improve classification accuracy and computational efficiency, especially for classifiers such as the k-nearest neighbors algorithm.
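
The word-selection effect of SPCA comes from loadings being driven exactly to zero. The sketch below illustrates this with power iteration plus soft-thresholding, a simpler sparse-PCA scheme standing in for the elastic-net SPCA the abstract uses; the tiny term-count matrix, vocabulary, and threshold are invented for illustration.

```python
def sparse_pc_words(X, vocab, lam, iters=200):
    """Leading sparse principal component via power iteration with
    soft-thresholding -- a simple stand-in for elastic-net SPCA; both
    drive the loadings of unimportant words to zero. Returns the words
    with nonzero loadings."""
    n, p = len(X), len(X[0])
    mu = [sum(row[j] for row in X) / n for j in range(p)]      # column means
    Xc = [[row[j] - mu[j] for j in range(p)] for row in X]     # centered matrix
    soft = lambda z: max(abs(z) - lam, 0.0) * (1.0 if z > 0 else -1.0)
    v = [1.0 / p ** 0.5] * p
    for _ in range(iters):
        s = [sum(Xc[i][j] * v[j] for j in range(p)) for i in range(n)]        # X v
        w = [soft(sum(Xc[i][j] * s[i] for i in range(n))) for j in range(p)]  # soft-thresholded X^T X v
        norm = sum(x * x for x in w) ** 0.5 or 1.0
        v = [x / norm for x in w]
    return [word for word, load in zip(vocab, v) if abs(load) > 1e-8]

vocab = ["ball", "goal", "the", "a"]
X = [[5, 4, 3, 3],   # two sports documents: high "ball"/"goal" counts
     [6, 5, 3, 2],
     [0, 1, 3, 3],   # two other documents
     [1, 0, 2, 3]]
selected = sparse_pc_words(X, vocab, lam=4.5)
```

The function words "the" and "a", which barely co-vary with the dominant direction, receive zero loadings and drop out of the feature set.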

Effects on the Use of Two Textbooks for Four Types of Classes in a South Korean University

  • Ramos, Ian Done D.
    • International Journal of Advanced Culture Technology, v.1 no.2, pp.24-32, 2013
  • This paper determined students' rankings of difficulty in the use of materials in terms of 1) understanding the layout of the learning materials, 2) reading comprehension of the learning materials, and 3) realizing the materials' relevance to their needs. It also determined students' 4) rank and frequency of attitudes toward the materials. From data gathered through 128 survey questionnaires, 7 focus group discussions, and 10 interviews, the results showed that this particular university had set an inappropriate assessment procedure. The researcher concludes that: 1) the design of the four types of classes using just the two textbooks with their respective workbooks is grammar-based with limited conversation activities; 2) students were placed in one big class without considering their common interests, motivation, and language levels; and 3) the qualifications of the teachers teaching these EFL students did not support students' real needs or the language program itself. Content professors assigned to teach may be able to deliver learning, but their teaching styles may differ from those of trained English teachers. This paper therefore recommends that teachers and the school administration give an appropriate placement exam before students attend class, especially for large classes. Fewer problems arise in a large class when students' levels of competence are evenly matched. Topics and conversation activities can then be made more flexible through the art of questioning, various dimensions of thinking, strategic competence, and attention to learning attitudes and behavior, so as to sustain a communicative mode and maintain interest and motivation in the classroom. Grammar-based instruction can be provided only when a need arises. Thus, the course description of each class will be able to serve the objective of developing students' communicative competence. Moreover, proper measurement can be used to validly assess the amount of student learning and the progress of the language curriculum design in terms of materials selection and teaching approach.


Maximizing Information Transmission for Energy Harvesting Sensor Networks by an Uneven Clustering Protocol and Energy Management

  • Ge, Yujia; Nan, Yurong; Chen, Yi
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.4, pp.1419-1436, 2020
  • For an energy harvesting sensor network, when network lifetime is not the only primary goal, maximizing network performance under environmental energy harvesting becomes the more critical issue. However, clustering protocols that aim at providing maximum information throughput have not been thoroughly explored in Energy Harvesting Wireless Sensor Networks (EH-WSNs). In this paper, clustering protocols are studied for maximizing data transmission across the whole network. Based on a long short-term memory (LSTM) energy predictor and node energy consumption and replenishment models, an uneven clustering protocol is proposed in which cluster head selection and cluster size control are designed for this purpose. Simulation results verify that the proposed scheme can outperform some classic schemes, with more data packets received by the cluster heads (CHs) and the base station (BS) under the energy constraints. The outcomes of this paper also provide insights for choosing clustering routing protocols in EH-WSNs, by exploiting factors such as uneven cluster size, number of clusters, multiple CHs, multihop routing strategy, and energy replenishment period.
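
The "uneven cluster size" idea is commonly realized by giving cluster heads near the base station smaller competition radii, so their clusters stay small and they retain energy for relaying traffic from farther clusters. The one-liner below is the widely used EEUC-style radius rule, assumed here for illustration; the paper's exact expression may differ.

```python
def competition_radius(d_bs, d_max, d_min, r_max, c=0.5):
    """Uneven clustering heuristic: a candidate cluster head at distance
    d_bs from the base station gets a competition radius that shrinks
    linearly as it gets closer to the BS. d_max/d_min are the farthest and
    nearest node-to-BS distances, r_max the maximum radius, and c in (0, 1)
    controls how uneven the cluster sizes become."""
    return (1 - c * (d_max - d_bs) / (d_max - d_min)) * r_max

# radii for CHs at the far edge, the near edge, and mid-field of a toy deployment
r_far = competition_radius(100.0, 100.0, 10.0, 50.0)
r_near = competition_radius(10.0, 100.0, 10.0, 50.0)
r_mid = competition_radius(55.0, 100.0, 10.0, 50.0)
```

With c = 0.5, the nearest cluster head's radius is half the farthest one's, which biases cluster sizes exactly as the protocol intends.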