• Title/Summary/Keyword: Accuracy of Selection

Relay Selection Scheme Based on Quantum Differential Evolution Algorithm in Relay Networks

  • Gao, Hongyuan;Zhang, Shibo;Du, Yanan;Wang, Yu;Diao, Ming
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.7 / pp.3501-3523 / 2017
  • It is a classical integer optimization problem to design an optimal selection scheme in cooperative relay networks under co-channel interference (CCI). In this paper, we solve both single-objective and multi-objective relay selection problems. For the single-objective problem, in order to attain optimal system performance of the cooperative relay network, a novel quantum differential evolutionary algorithm (QDEA) is proposed to solve the optimal relay selection problem; the resulting scheme is called optimal relay selection based on QDEA. The proposed QDEA combines the advantages of quantum computing theory and the differential evolutionary algorithm (DEA) to improve the exploration and exploitation capability of DEA, so QDEA is able to find the optimal relay selection scheme in cooperative relay networks. For the multi-objective relay selection problem, we propose a novel non-dominated sorting quantum differential evolutionary algorithm (NSQDEA) to solve a relay selection problem with two objectives. Simulation results indicate that the proposed QDEA-based relay selection scheme is superior to other intelligent relay selection schemes based on the differential evolutionary algorithm, artificial bee colony optimization and quantum bee colony optimization in terms of convergence speed and accuracy for the single-objective relay selection problem. The simulation results also show that the proposed NSQDEA-based relay selection scheme performs well on multi-objective relay selection.
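The abstract does not include algorithmic details, but the underlying baseline is standard differential evolution. Below is a minimal, hypothetical DE/rand/1/bin sketch applied to a toy relay-assignment problem; it is not the authors' QDEA (the quantum encoding and the CCI-aware capacity objective are replaced by placeholders).

```python
# Minimal DE/rand/1/bin sketch for a toy discrete relay-assignment problem.
# The fitness function below is a placeholder, not the paper's network objective.
import numpy as np

rng = np.random.default_rng(0)
n_pairs, n_relays = 8, 16                    # hypothetical problem size
gain = rng.random((n_pairs, n_relays))       # hypothetical channel-gain table

def decode(x):
    """Map a continuous genome in [0, 1) to a relay index per source-destination pair."""
    return np.clip(np.floor(x * n_relays).astype(int), 0, n_relays - 1)

def fitness(assign):
    """Toy objective: summed gain, penalising relays shared by several pairs (CCI proxy)."""
    counts = np.bincount(assign, minlength=n_relays)
    penalty = np.sum(np.maximum(counts - 1, 0))
    return gain[np.arange(n_pairs), assign].sum() - penalty

pop_size, F, CR, gens = 30, 0.5, 0.9, 200
pop = rng.random((pop_size, n_pairs))
fit = np.array([fitness(decode(x)) for x in pop])

for _ in range(gens):
    for i in range(pop_size):
        a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
        mutant = np.clip(a + F * (b - c), 0.0, 1.0 - 1e-9)
        cross = rng.random(n_pairs) < CR
        cross[rng.integers(n_pairs)] = True   # guarantee at least one crossed gene
        trial = np.where(cross, mutant, pop[i])
        f_trial = fitness(decode(trial))
        if f_trial > fit[i]:                  # greedy one-to-one selection
            pop[i], fit[i] = trial, f_trial

print("best relay assignment:", decode(pop[np.argmax(fit)]), "fitness:", fit.max())
```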

Improving Field Crop Classification Accuracy Using GLCM and SVM with UAV-Acquired Images

  • Seung-Hwan Go;Jong-Hwa Park
    • Korean Journal of Remote Sensing / v.40 no.1 / pp.93-101 / 2024
  • Accurate field crop classification is essential for various agricultural applications, yet existing methods face challenges due to diverse crop types and complex field conditions. This study aimed to address these issues by combining support vector machine (SVM) models with multi-seasonal unmanned aerial vehicle (UAV) images, texture information extracted from the gray-level co-occurrence matrix (GLCM), and RGB spectral data. Twelve high-resolution UAV image acquisitions spanned March to October 2021, and field surveys on three dates provided ground-truth data. We focused on the August (-A), September (-S), and October (-O) images and trained four support vector classifier (SVC) models (SVC-A, SVC-S, SVC-O, SVC-AS) using the visual bands and eight GLCM features. Farm maps provided by the Ministry of Agriculture, Food and Rural Affairs proved efficient for open-field crop identification and served as a reference for accuracy comparison. Our analysis showed the significant impact of hyperparameter tuning (C and gamma) on SVM model performance, requiring careful optimization for each scenario. Importantly, we identified models exhibiting distinct high-accuracy zones, with SVC-O, trained on October data, achieving the highest overall and individual crop classification accuracy. This success likely stems from its ability to capture distinct texture information from mature crops. Incorporating GLCM features proved highly effective for all models, significantly boosting classification accuracy. Among these features, homogeneity, entropy, and correlation consistently made the most impactful contribution. However, balancing accuracy with computational efficiency and feature selection remains crucial for practical application. Performance analysis revealed that SVC-O achieved exceptional results in overall and individual crop classification, while soybeans and rice were consistently classified well by all models. Challenges were encountered with cabbage due to its early growth stage and low field cover density. The study demonstrates the potential of using farm maps and GLCM features in conjunction with SVM models for accurate field crop classification. Careful parameter tuning and model selection based on the specific scenario are key to optimizing performance in real-world applications.
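As an illustration of the kind of pipeline this abstract describes, the sketch below extracts a few GLCM texture features with scikit-image and tunes an SVC over C and gamma with scikit-learn. Patch extraction, band selection, and labels are assumed inputs; this is not the authors' exact SVC-A/S/O workflow.

```python
# Illustrative GLCM-feature + SVC sketch; inputs (patches, labels) are assumed available.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def glcm_features(gray_patch, levels=32):
    """Homogeneity, correlation, energy plus GLCM entropy for one 8-bit grayscale patch."""
    q = (gray_patch / 256 * levels).astype(np.uint8)          # quantize to shrink the matrix
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    feats = [graycoprops(glcm, prop).mean() for prop in ("homogeneity", "correlation", "energy")]
    pmat = glcm.mean(axis=(2, 3))                              # average over distance/angle
    entropy = -np.sum(pmat * np.log2(pmat + 1e-12))
    return np.array(feats + [entropy])

def train_svc(patches, labels):
    """patches: list of (H, W) uint8 arrays; labels: crop class per patch."""
    X = np.vstack([glcm_features(p) for p in patches])
    model = make_pipeline(StandardScaler(), SVC())
    grid = {"svc__C": [1, 10, 100], "svc__gamma": ["scale", 0.1, 0.01]}
    search = GridSearchCV(model, grid, cv=5)
    search.fit(X, labels)
    return search.best_estimator_, search.best_params_
```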

Implementation of genomic selection in Hanwoo breeding program (유전체정보활용 한우개량효율 증진)

  • Lee, Seung Hwan;Cho, Yong Min;Lee, Jun Heon;Oh, Seong Jong
    • Korean Journal of Agricultural Science / v.42 no.4 / pp.397-406 / 2015
  • Quantitative traits are mostly controlled by a large number of genes. Some of these genes have a large effect on quantitative traits in cattle and are known as major genes, primarily located at quantitative trait loci (QTL). The genetic merit of animals can be estimated by genomic selection, which uses genome-wide SNP panels and statistical methods that capture the effects of large numbers of SNPs simultaneously. In practice, the accuracy of genomic predictions depends on the size and structure of the reference and training populations, the effective population size, the marker density, and the genetic architecture of the traits, such as the number of loci affecting the traits and the distribution of their effects. In this review, we focus on the structure of the Hanwoo reference and training populations in terms of the accuracy of genomic prediction, and we then discuss the genetic architecture of intramuscular fat (IMF) and marbling score (MS) for estimating genomic breeding values from a realistically small reference population.
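For intuition on how reference-population size and trait architecture interact, the snippet below evaluates a commonly cited approximation for genomic-prediction accuracy, r = sqrt(N*h2 / (N*h2 + Me)); the heritability and Me values are placeholders, not Hanwoo estimates from this review.

```python
# Hedged illustration of the Daetwyler-type accuracy approximation; numbers are placeholders.
import math

def expected_accuracy(n_reference, h2, m_e):
    """Approximate accuracy of genomic breeding values for a reference population of size N."""
    return math.sqrt(n_reference * h2 / (n_reference * h2 + m_e))

h2, m_e = 0.35, 5000          # assumed heritability and effective number of chromosome segments
for n in (1000, 3000, 10000, 30000):
    print(f"N = {n:>6}: expected accuracy ~ {expected_accuracy(n, h2, m_e):.2f}")
```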

Self-adaptive and Bidirectional Dynamic Subset Selection Algorithm for Digital Image Correlation

  • Zhang, Wenzhuo;Zhou, Rong;Zou, Yuanwen
    • Journal of Information Processing Systems / v.13 no.2 / pp.305-320 / 2017
  • The selection of subset size is of great importance to the accuracy of digital image correlation (DIC). In traditional DIC, a constant subset size is used for computing the entire image, which overlooks the differences among local speckle patterns of the image. Besides, it is very laborious to find the optimal global subset size for a speckle image. In this paper, a self-adaptive and bidirectional dynamic subset selection (SBDSS) algorithm is proposed to make the subset sizes vary according to their local speckle patterns, which ensures that every subset size is suitable and optimal. The sum of subset intensity variation (η) is defined as the assessment criterion to quantify the subset information. Both the threshold and the initial guess of subset size in the SBDSS algorithm are self-adaptive to different images. To analyze the performance of the proposed algorithm, both numerical and laboratory experiments were performed. In the numerical experiments, images with different speckle distributions, deformations and noise levels were processed by both traditional DIC and the proposed algorithm. The results demonstrate that the proposed algorithm achieves higher accuracy than traditional DIC. Laboratory experiments performed on a substrate also demonstrate that the proposed algorithm is effective in selecting an appropriate subset size for each point.
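The sketch below illustrates the general idea of adaptive subset sizing: grow the subset around a point until an intensity-variation measure reaches a threshold. The measure (sum of absolute deviations from the subset mean) and the fixed threshold are stand-ins; the paper's exact definition of η and its self-adaptive threshold are not reproduced.

```python
# Schematic adaptive subset sizing; the variation measure and threshold are illustrative only.
import numpy as np

def intensity_variation(image, cx, cy, half):
    """Sum of absolute deviations from the subset mean around pixel (cx, cy)."""
    subset = image[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(float)
    return np.abs(subset - subset.mean()).sum()

def pick_subset_size(image, cx, cy, threshold, half_min=5, half_max=30, step=2):
    """Return the smallest subset size whose variation reaches the threshold."""
    for half in range(half_min, half_max + 1, step):
        if intensity_variation(image, cx, cy, half) >= threshold:
            return 2 * half + 1
    return 2 * half_max + 1       # fall back to the largest allowed subset

rng = np.random.default_rng(1)
speckle = rng.integers(0, 256, size=(200, 200), dtype=np.uint8)   # synthetic speckle image
print("chosen subset size:", pick_subset_size(speckle, 100, 100, threshold=5e4))
```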

Fast and Accurate Visual Place Recognition Using Street-View Images

  • Lee, Keundong;Lee, Seungjae;Jung, Won Jo;Kim, Kee Tae
    • ETRI Journal / v.39 no.1 / pp.97-107 / 2017
  • A fast and accurate building-level visual place recognition method built on an image-retrieval scheme using street-view images is proposed. Reference images generated from street-view images usually depict multiple buildings and confusing regions, such as roads, sky, and vehicles, which degrades retrieval accuracy and causes matching ambiguity. The proposed practical database refinement method uses informative reference-image and keypoint selection. For database refinement, the method uses the spatial layout of the buildings in the reference image, specifically a building-identification mask image, which is obtained from a prebuilt three-dimensional model of the site. A global-positioning-system-aware retrieval structure is also incorporated. To evaluate the method, we constructed a dataset over an area of 0.26 km², comprising 38,700 reference images and corresponding building-identification mask images. The proposed method removed 25% of the database images using informative reference-image selection. It achieved 85.6% recall of the top five candidates in 1.25 s of full processing. The method thus achieves high accuracy at low computational complexity.
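One component described above, keypoint filtering with a building-identification mask, could look roughly like the sketch below. ORB from OpenCV is used purely for illustration; the paper's actual descriptor and retrieval structure are not specified here.

```python
# Illustrative keypoint filtering against a building-identification mask (mask > 0 = building).
import cv2
import numpy as np

def building_keypoints(image_bgr, building_mask):
    """Detect ORB keypoints and keep only those falling on building pixels of the mask."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints = orb.detect(gray, None)
    h, w = building_mask.shape[:2]
    kept = [kp for kp in keypoints
            if building_mask[min(int(round(kp.pt[1])), h - 1),
                             min(int(round(kp.pt[0])), w - 1)] > 0]
    kept, descriptors = orb.compute(gray, kept)   # describe only the retained keypoints
    return kept, descriptors
```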

Comparison of Feature Selection Methods Applied on Risk Prediction for Hypertension (고혈압 위험 예측에 적용된 특징 선택 방법의 비교)

  • Khongorzul, Dashdondov;Kim, Mi-Hye
    • KIPS Transactions on Software and Data Engineering / v.11 no.3 / pp.107-114 / 2022
  • In this paper, we enhance the risk prediction of hypertension using feature selection methods on the Korea National Health and Nutrition Examination Survey (KNHANES) database of the Korea Centers for Disease Control and Prevention. The study identified various risk factors correlated with chronic hypertension. The work consists of three parts. First, the data preprocessing step removes missing values and performs a z-transformation. Next, the feature selection (FS) step applies a factor analysis (FA)-based feature selection method to the dataset, and feature importance (FI) and multicollinearity analysis (MC) are compared as alternative FS approaches. Finally, in the predictive analysis stage, the selected features are used to detect and predict the risk of hypertension. We compare the accuracy, F-score, area under the ROC curve (AUC), and mean standard error (MSE) of each classification model. In the test, the proposed MC-FA-RF model achieved the highest accuracy of 80.12%, with an MSE of 0.106, an F-score of 83.49%, and an AUC of 85.96%. These results demonstrate that the proposed MC-FA-RF method for hypertension risk prediction outperforms the other methods.
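A schematic version of the MC-FA-RF idea is sketched below: screen collinear columns, reduce the remainder with factor analysis, then classify with a random forest. The KNHANES loading step, the exact variable list, and the paper's thresholds are assumptions left as placeholders.

```python
# Schematic MC-FA-RF sketch; X is assumed numeric with >= n_factors columns, y binary (0/1).
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

def drop_collinear(X: pd.DataFrame, threshold=0.9):
    """Simple multicollinearity screen: drop one of each feature pair with |r| > threshold."""
    corr = X.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return X.drop(columns=to_drop)

def mc_fa_rf(X: pd.DataFrame, y, n_factors=10):
    X_screened = drop_collinear(X)
    X_tr, X_te, y_tr, y_te = train_test_split(X_screened, y, test_size=0.3,
                                              stratify=y, random_state=42)
    fa = FactorAnalysis(n_components=n_factors).fit(X_tr)
    rf = RandomForestClassifier(n_estimators=300, random_state=42).fit(fa.transform(X_tr), y_tr)
    proba = rf.predict_proba(fa.transform(X_te))[:, 1]
    pred = (proba >= 0.5).astype(int)
    return {"accuracy": accuracy_score(y_te, pred),
            "f1": f1_score(y_te, pred),
            "auc": roc_auc_score(y_te, proba)}
```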

A Method for Selecting Software Reliability Growth Models Using Trend and Failure Prediction Ability (트렌드와 고장 예측 능력을 반영한 소프트웨어 신뢰도 성장 모델 선택 방법)

  • Park, YongJun;Min, Bup-Ki;Kim, Hyeon Soo
    • Journal of KIISE / v.42 no.12 / pp.1551-1560 / 2015
  • Software Reliability Growth Models (SRGMs) are used to quantitatively evaluate software reliability and to determine the software release date or additional testing efforts using software failure data. Because a single SRGM is not universally applicable to all kinds of software, the selection of an optimal SRGM suitable to a specific case has been an important issue. The existing methods for SRGM selection assess the goodness-of-fit of the SRGM in terms of the collected failure data but do not consider the accuracy of future failure predictions. In this paper, we propose a method for selecting SRGMs using the trend of failure data and failure prediction ability. To justify our approach, we identify problems associated with the existing SRGM selection methods through experiments and show that our method for selecting SRGMs is superior to the existing methods with respect to the accuracy of future failure prediction.
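One concrete way to score future failure prediction ability in this spirit is to fit an SRGM on an early portion of the cumulative failure data and measure error on the held-out tail. The sketch below does this with the Goel-Okumoto model, toy data, and an illustrative 70/30 split; it is not the paper's selection procedure.

```python
# Illustrative prediction-ability scoring for one SRGM (Goel-Okumoto); data and split are toys.
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Expected cumulative failures m(t) = a * (1 - exp(-b t))."""
    return a * (1.0 - np.exp(-b * t))

def prediction_error(t, cum_failures, fit_fraction=0.7):
    """Fit on the first fit_fraction of the data, return RMSE on the remaining (future) part."""
    split = int(len(t) * fit_fraction)
    params, _ = curve_fit(goel_okumoto, t[:split], cum_failures[:split],
                          p0=(cum_failures[-1], 0.01), maxfev=10000)
    predicted = goel_okumoto(t[split:], *params)
    return np.sqrt(np.mean((predicted - cum_failures[split:]) ** 2))

t = np.arange(1, 21, dtype=float)    # weeks (hypothetical)
cum = np.array([5, 9, 14, 17, 21, 24, 26, 29, 30, 32,
                34, 35, 36, 37, 38, 38, 39, 40, 40, 41], dtype=float)
print("future-prediction RMSE:", round(prediction_error(t, cum), 2))
```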

Association-based Unsupervised Feature Selection for High-dimensional Categorical Data (고차원 범주형 자료를 위한 비지도 연관성 기반 범주형 변수 선택 방법)

  • Lee, Changki;Jung, Uk
    • Journal of Korean Society for Quality Management / v.47 no.3 / pp.537-552 / 2019
  • Purpose: The development of information technology makes it easy to utilize high-dimensional categorical data. In this regard, the purpose of this study is to propose a novel method to select the proper categorical variables in high-dimensional categorical data. Methods: The proposed feature selection method consists of three steps: (1) The first step defines the goodness-to-pick measure. In this paper, a categorical variable is relevant if it has relationships with other variables; according to this definition, the goodness-to-pick measure calculates the normalized conditional entropy with respect to the other variables. (2) The second step finds the relevant feature subset from the original variable set, deciding whether each variable is relevant or not. (3) The third step eliminates redundant variables from the relevant feature subset. Results: Our experimental results showed that the proposed feature selection method generally yielded better classification performance than no feature selection in high-dimensional categorical data, especially as the number of irrelevant categorical variables increases. Moreover, as the number of irrelevant categorical variables with imbalanced categorical values increases, the difference in accuracy between the proposed method and the existing methods also increases. Conclusion: The experimental results confirm that the proposed method consistently produces high classification accuracy in high-dimensional categorical data. Therefore, the proposed method is promising for effective use in high-dimensional situations.
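The entropy machinery behind such a goodness-to-pick measure can be sketched as below: a normalized conditional entropy between two categorical columns, averaged over the other variables. The paper's exact normalization, thresholding, and redundancy-elimination steps are not reproduced.

```python
# Sketch of a normalized-conditional-entropy relevance score for categorical columns.
import numpy as np
import pandas as pd

def entropy(series: pd.Series) -> float:
    p = series.value_counts(normalize=True).to_numpy()
    return float(-(p * np.log2(p)).sum())

def conditional_entropy(x: pd.Series, y: pd.Series) -> float:
    """H(X | Y) computed from the joint distribution of two categorical columns."""
    joint = pd.crosstab(x, y, normalize=True)
    p_y = joint.sum(axis=0)
    h = 0.0
    for y_val in joint.columns:
        p_x_given_y = (joint[y_val] / p_y[y_val]).to_numpy()
        p_x_given_y = p_x_given_y[p_x_given_y > 0]
        h -= p_y[y_val] * (p_x_given_y * np.log2(p_x_given_y)).sum()
    return float(h)

def relevance(df: pd.DataFrame, col: str) -> float:
    """Average of 1 - H(col|other)/H(col) over the other columns; higher means more related."""
    h_x = entropy(df[col])
    if h_x == 0:
        return 0.0
    others = [c for c in df.columns if c != col]
    return float(np.mean([1 - conditional_entropy(df[col], df[c]) / h_x for c in others]))
```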

Utility of Structural Information to Predict Drug Clearance from in Vitro Data

  • Lee, So-Young;Kim, Dong-Sup
    • Interdisciplinary Bio Central / v.2 no.2 / pp.3.1-3.4 / 2010
  • In the present research, we assessed the utility of the structural information of drugs for predicting human in vivo intrinsic clearance from in vitro intrinsic clearance data obtained in human hepatic microsome experiments. To compare with the observed intrinsic clearance, human intrinsic clearance values for 51 drugs were estimated both by the classical in vitro-in vivo scale-up methods and by new methods that combine the in vitro experimental data with molecular descriptors of the drugs selected by a forward selection technique. The results showed that taking molecular descriptors into account when predicting from in vitro experimental data could improve the prediction accuracy. In vitro experiments are very useful when their data can accurately estimate in vivo values, since they can reduce the cost of drug development, and improving the prediction accuracy with the present approach can enhance the utility of in vitro data.
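The forward selection step mentioned above can be illustrated with a simple greedy procedure over descriptor columns, scored by cross-validated R²; the descriptor set, the 51-drug data, and the scale-up step are assumptions not included here.

```python
# Hedged forward-selection sketch; X is assumed to hold descriptor columns plus in vitro CLint.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def forward_select(X, y, max_features=5, cv=5):
    """Greedy forward selection: add the column that most improves mean cross-validated R^2."""
    selected, remaining = [], list(range(X.shape[1]))
    best_score = -np.inf
    while remaining and len(selected) < max_features:
        scores = {j: cross_val_score(LinearRegression(), X[:, selected + [j]], y,
                                     cv=cv, scoring="r2").mean()
                  for j in remaining}
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best_score:      # stop when no candidate improves the fit
            break
        best_score = scores[j_best]
        selected.append(j_best)
        remaining.remove(j_best)
    return selected, best_score
```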

Analysis of the Combined Positioning Accuracy using GPS and GLONASS Navigation Satellites

  • Choi, Byung-Kyu;Roh, Kyoung-Min;Lee, Sang Jeong
    • Journal of Positioning, Navigation, and Timing / v.2 no.2 / pp.131-137 / 2013
  • In this study, positioning results that combined the code observation information of GPS and GLONASS navigation satellites were analyzed. Especially, the distribution of GLONASS satellites observed in Korea and the combined GPS/GLONASS positioning results were presented. The GNSS data received at two reference stations (GRAS in Europe and KOHG in Goheung, Korea) during a day were processed, and the mean value and root mean square (RMS) value of the position error were calculated. The analysis results indicated that the combined GPS/GLONASS positioning did not show significantly improved performance compared to the GPS-only positioning. This could be due to the inter-system hardware bias for GPS/GLONASS receivers, the selection of transformation parameters between reference coordinate systems, the selection of a confidence level for error analysis, or the number of visible satellites at a specific time.