• Title/Summary/Keyword: metric learning


Pixel-based crack image segmentation in steel structures using atrous separable convolution neural network

  • Ta, Quoc-Bao;Pham, Quang-Quang;Kim, Yoon-Chul;Kam, Hyeon-Dong;Kim, Jeong-Tae
    • Structural Monitoring and Maintenance / v.9 no.3 / pp.289-303 / 2022
  • In this study, the impact of assigned pixel labels on the accuracy of crack image identification in steel structures is examined using an atrous separable convolution neural network (ASCNN). Firstly, images containing fatigue cracks collected from steel structures are classified into four datasets by assigning different pixel labels based on image features. Secondly, the DeepLab v3+ algorithm is used to determine optimal parameters of the ASCNN model by maximizing the average mean-intersection-over-union (mIoU) metric across the datasets. Thirdly, the ASCNN model is trained for various image sizes and hyper-parameters, such as the learning rule, learning rate, and number of epochs. The optimal parameters of the ASCNN model are determined based on the average mIoU metric. Finally, the trained ASCNN model is evaluated using the 10% of images withheld from training. The results show that the ASCNN model can segment cracks and other objects in the captured images with an average mIoU of 0.716.
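
The mIoU criterion used to select the ASCNN configuration can be reproduced in a few lines; the sketch below is a minimal version, assuming integer-coded class masks and a four-class label set (both are illustrative choices, not the paper's dataset configuration).

```python
# Minimal sketch: mean intersection-over-union (mIoU) for semantic segmentation.
# Class IDs and mask shapes are assumed for illustration.
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Average IoU over the classes that appear in prediction or ground truth."""
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:                      # class absent from both masks; skip it
            continue
        inter = np.logical_and(pred_c, target_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy usage with 4 classes (e.g., crack, background, and two other object labels).
pred = np.random.randint(0, 4, size=(256, 256))
gt = np.random.randint(0, 4, size=(256, 256))
print(f"mIoU = {mean_iou(pred, gt, num_classes=4):.3f}")
```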

DSL: Dynamic and Self-Learning Schedule Method of Multiple Controllers in SDN

  • Li, Junfei;Wu, Jiangxing;Hu, Yuxiang;Li, Kan
    • ETRI Journal / v.39 no.3 / pp.364-372 / 2017
  • To improve the reliability of controllers in a software-defined network (SDN), a dynamic and self-learning schedule method (DSL) is proposed. The method is original, easy to deploy, and optimizes the combination of multiple controllers. First, we summarize the multi-controller combination and scheduling problems in an SDN and analyze their reliability. Then, we introduce the architecture of the schedule method and present the multi-controller reliability evaluation, the DSL method, and its optimized solution. By continually and statistically learning information about controller reliability, the method uses that reliability as the metric for scheduling controllers. Finally, we compare and test the method in a given testing scenario based on an SDN network simulator. The experimental results show that the DSL method can significantly improve the total reliability of an SDN compared with a random schedule, and that the proposed optimization algorithm is more efficient than an exhaustive search.
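
The core idea the abstract describes, keeping a running, statistically learned reliability estimate per controller and using it as the scheduling metric, can be sketched as follows. The exponential-smoothing update and the weighted random selection rule are illustrative assumptions, not the paper's exact DSL algorithm.

```python
# Illustrative sketch of reliability-driven controller scheduling.
# The smoothing factor and the selection rule are assumptions, not the DSL method itself.
import random

class ControllerScheduler:
    def __init__(self, controller_ids, alpha=0.1):
        self.alpha = alpha                                # learning rate for reliability updates
        self.reliability = {c: 1.0 for c in controller_ids}

    def record(self, controller_id, success: bool):
        """Statistically learn reliability from observed successes and failures."""
        r = self.reliability[controller_id]
        self.reliability[controller_id] = (1 - self.alpha) * r + self.alpha * (1.0 if success else 0.0)

    def schedule(self):
        """Pick a controller with probability proportional to its learned reliability."""
        ids = list(self.reliability)
        weights = [self.reliability[c] for c in ids]
        return random.choices(ids, weights=weights, k=1)[0]

sched = ControllerScheduler(["c1", "c2", "c3"])
sched.record("c2", success=False)
print(sched.schedule(), sched.reliability)
```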

An MILP Approach to a Nonlinear Pattern Classification of Data (혼합정수 선형계획법 기반의 비선형 패턴 분류 기법)

  • Kim, Kwangsoo;Ryoo, Hong Seo
    • Journal of Korean Institute of Industrial Engineers / v.32 no.2 / pp.74-81 / 2006
  • In this paper, we deal with the separation of data by concurrently determined, piecewise nonlinear discriminant functions. Toward this end, we develop a new $l_1$-distance norm error metric and cast the problem as a mixed 0-1 integer and linear programming (MILP) model. Given a finite number of discriminant functions as input, the proposed model considers the synergy as well as the individual role of the functions involved and implements the simplest nonlinear decision surface that best separates the data at hand. Hence, exploiting powerful MILP solvers, the model efficiently analyzes any given data set for its piecewise nonlinear separability. The classification of four sets of artificial data demonstrates the aforementioned strength of the proposed model. Classification results on five machine learning benchmark databases show that data separation via the proposed MILP model is an effective supervised learning methodology that compares quite favorably with well-established learning methodologies.
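
As a rough illustration of casting a separation problem as an MILP, the sketch below separates two toy point sets with a single linear discriminant and binary misclassification indicators (a big-M formulation) using PuLP. This is a deliberately simplified stand-in: the paper's piecewise nonlinear discriminants and $l_1$-distance norm error metric are not reproduced here.

```python
# Toy MILP: find a linear discriminant w.x + b that misclassifies as few points as possible.
# A simplified stand-in for the paper's model, with made-up 2-D data and a big-M constant.
import pulp

A = [(1.0, 2.0), (2.0, 3.0), (3.0, 3.0)]   # class +1 points (toy data)
B = [(4.0, 1.0), (5.0, 0.5), (6.0, 1.5)]   # class -1 points (toy data)
M = 100.0                                  # big-M constant

prob = pulp.LpProblem("linear_separation", pulp.LpMinimize)
w = [pulp.LpVariable(f"w{j}", lowBound=-10, upBound=10) for j in range(2)]
b = pulp.LpVariable("b", lowBound=-10, upBound=10)
z = [pulp.LpVariable(f"z{i}", cat="Binary") for i in range(len(A) + len(B))]

prob += pulp.lpSum(z)                      # minimize the number of misclassified points
for i, x in enumerate(A):
    prob += w[0] * x[0] + w[1] * x[1] + b >= 1 - M * z[i]
for k, x in enumerate(B):
    prob += w[0] * x[0] + w[1] * x[1] + b <= -1 + M * z[len(A) + k]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("misclassified points:", sum(int(v.value()) for v in z))
```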

Instance Based Learning Revisited: Feature Weighting and its Applications

  • Song Doo-Heon;Lee Chang-Hun
    • Journal of Korea Multimedia Society / v.9 no.6 / pp.762-772 / 2006
  • The instance-based learning algorithm is the best-known lazy learner and has been used successfully in many areas such as pattern analysis, medical analysis, bioinformatics, and internet applications. However, its feature weighting scheme is so naive that many extensions have been proposed. Our version of IB3, named eXtended IBL (XIBL), improves the feature weighting scheme with backward stepwise regression and the distance function with the VDM family, which avoids overestimating discrete-valued attributes. XIBL also adopts leave-one-out as its noise filtering scheme. Experiments on common artificial domains show that XIBL outperforms the original IBL in terms of accuracy and noise tolerance. XIBL is applied to two important applications, intrusion detection and spam mail filtering, and the results are promising.
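
Two ingredients the abstract names, per-feature weights and a value-difference metric (VDM) for discrete attributes, can be illustrated with a small distance function. The weights, the simplified VDM, and the conditional-probability table below are assumptions for illustration, not XIBL's learned values.

```python
# Sketch of a feature-weighted distance: a simplified value-difference metric for
# discrete attributes, absolute difference for numeric ones. All numbers are made up.
def vdm(a, b, cond_prob):
    """Simplified VDM: sum over classes of |P(class|a) - P(class|b)|."""
    classes = cond_prob[a].keys()
    return sum(abs(cond_prob[a][c] - cond_prob[b][c]) for c in classes)

def weighted_distance(x, y, weights, discrete, cond_probs):
    d = 0.0
    for i, (xi, yi) in enumerate(zip(x, y)):
        if i in discrete:
            d += weights[i] * vdm(xi, yi, cond_probs[i])   # discrete attribute
        else:
            d += weights[i] * abs(xi - yi)                 # numeric attribute
    return d

# Toy usage: one numeric feature and one discrete feature with two classes.
cond_probs = {1: {"red": {"pos": 0.8, "neg": 0.2}, "blue": {"pos": 0.3, "neg": 0.7}}}
print(weighted_distance((0.5, "red"), (0.9, "blue"),
                        weights=[1.0, 2.0], discrete={1}, cond_probs=cond_probs))
```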


Injection of Cultural-based Subjects into Stable Diffusion Image Generative Model

  • Amirah Alharbi;Reem Alluhibi;Maryam Saif;Nada Altalhi;Yara Alharthi
    • International Journal of Computer Science & Network Security / v.24 no.2 / pp.1-14 / 2024
  • While text-to-image models have made remarkable progress in image synthesis, certain models, particularly generative diffusion models, have exhibited a noticeable bias when generating images related to the cultures of some developing countries. This paper presents an empirical investigation aimed at mitigating this bias in an image generative model. We achieve this by incorporating symbols representing Saudi culture into a Stable Diffusion model using the DreamBooth technique. The CLIP score metric is used to assess the outcomes of this study. The paper also explores the impact of varying parameters, for instance the quantity of training images and the learning rate. The findings reveal a substantial reduction in bias-related concerns and propose an innovative metric for evaluating cultural relevance.
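
A minimal sketch of computing a CLIP-based image/text similarity, the kind of score used here to judge how well a generated image matches a cultural prompt, is shown below. The checkpoint name, the image path, and the evaluation prompt are assumptions, not the paper's exact setup.

```python
# Sketch: cosine similarity between CLIP image and text embeddings.
# "generated_sample.png" and the prompt are hypothetical placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("generated_sample.png")          # an image from the diffusion model
prompt = "a traditional Saudi coffee pot (dallah)"  # hypothetical evaluation prompt

inputs = processor(text=[prompt], images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
    # Normalize embeddings, then take the cosine similarity.
    img = outputs.image_embeds / outputs.image_embeds.norm(dim=-1, keepdim=True)
    txt = outputs.text_embeds / outputs.text_embeds.norm(dim=-1, keepdim=True)
    score = (img * txt).sum(dim=-1)

print(f"CLIP similarity: {score.item():.3f}")
```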

SuperDepthTransfer: Depth Extraction from Image Using Instance-Based Learning with Superpixels

  • Zhu, Yuesheng;Jiang, Yifeng;Huang, Zhuandi;Luo, Guibo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.10 / pp.4968-4986 / 2017
  • In this paper, we primarily address the difficulty of automatically generating a plausible depth map from a single image of an unstructured environment. The aim is to extrapolate a depth map with a more correct, rich, and distinct depth order that is both quantitatively accurate and visually pleasing. Our technique, which fundamentally builds on the existing DepthTransfer algorithm, transfers depth information at the level of superpixels, within a framework that replaces the pixel basis with instance-based learning. A key superpixel feature that enhances matching precision is the posterior incorporation of predicted semantic labels into the depth extraction procedure. Finally, a modified cross bilateral filter is leveraged to refine the final depth field. For training and evaluation, experiments were conducted on the Make3D Range Image Dataset and demonstrate that this depth estimation method outperforms state-of-the-art methods on the correlation coefficient, mean log10 error, and root mean squared error metrics, and achieves comparable performance on the average relative error metric, in both efficacy and computational efficiency. The approach can be used to automatically convert 2D images into stereo for 3D visualization, producing anaglyph images that are more realistic and immersive.
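
The error metrics reported above (average relative error, mean log10 error, and RMSE) can be computed directly from a predicted and a ground-truth depth map; a minimal sketch follows, with the array shapes and clipping floor as illustrative assumptions.

```python
# Sketch of common single-image depth-estimation error metrics.
import numpy as np

def depth_errors(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-6):
    pred = np.clip(pred, eps, None)                          # avoid log/divide-by-zero
    gt = np.clip(gt, eps, None)
    rel = np.mean(np.abs(pred - gt) / gt)                    # average relative error
    log10 = np.mean(np.abs(np.log10(pred) - np.log10(gt)))   # mean log10 error
    rmse = np.sqrt(np.mean((pred - gt) ** 2))                # root mean squared error
    return rel, log10, rmse

# Toy usage with random depth maps (meters); shapes are illustrative.
pred = np.random.uniform(1.0, 80.0, size=(120, 160))
gt = np.random.uniform(1.0, 80.0, size=(120, 160))
print("rel=%.3f log10=%.3f rmse=%.3f" % depth_errors(pred, gt))
```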

Learning Probabilistic Kernel from Latent Dirichlet Allocation

  • Lv, Qi;Pang, Lin;Li, Xiong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.6 / pp.2527-2545 / 2016
  • Measuring the similarity of given samples is a key problem in recognition, clustering, retrieval, and related applications. A number of works, e.g., kernel methods and metric learning, have contributed to this problem. The challenge of similarity learning is to find a similarity measure that is robust to intra-class variance and simultaneously selective for inter-class characteristics. We observe that the similarity measure can be improved if the data distribution and hidden semantic information are exploited in a more sophisticated way. In this paper, we propose a similarity learning approach for retrieval and recognition. The approach, termed LDA-FEK, derives a free energy kernel (FEK) from Latent Dirichlet Allocation (LDA). First, it trains LDA and constructs the kernel using the parameters and variables of the trained model. Then, the unknown kernel parameters are learned by a discriminative learning approach. The main contributions of the proposed method are twofold: (1) the method is computationally efficient and scalable, since the kernel parameters are determined in a staged way; (2) the method exploits the data distribution and semantic-level hidden information by means of LDA. To evaluate the performance of LDA-FEK, we apply it to image retrieval on two data sets and to text categorization on four popular data sets. The results show the competitive performance of our method.
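
A rough sketch of the overall pipeline, training LDA and then building a kernel in topic space, is shown below. It uses a plain inner product of topic proportions as a stand-in kernel; the paper's free energy kernel (FEK) and its discriminatively learned parameters are not reproduced here.

```python
# Sketch: LDA topic proportions as features, with a simple topic-space Gram matrix
# standing in for the paper's free energy kernel. Documents are toy examples.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["metric learning for image retrieval",
        "latent dirichlet allocation topic model",
        "kernel methods for text categorization"]

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
theta = lda.transform(counts)          # per-document topic proportions

K = theta @ theta.T                    # simple topic-space kernel (Gram matrix)
print(np.round(K, 3))
```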

A comparative study of machine learning methods for automated identification of radioisotopes using NaI gamma-ray spectra

  • Galib, S.M.;Bhowmik, P.K.;Avachat, A.V.;Lee, H.K.
    • Nuclear Engineering and Technology / v.53 no.12 / pp.4072-4079 / 2021
  • This article presents a study of state-of-the-art methods for automated radioactive material detection and identification using gamma-ray spectra and modern machine learning methods. The work was inspired by recent developments in deep learning algorithms, and the proposed method provides better performance than the current state-of-the-art models. Machine learning models such as fully connected, recurrent, and convolutional neural networks and gradient-boosted decision trees are applied under a wide variety of testing conditions, and their advantages and disadvantages are discussed. Furthermore, a hybrid model is developed by combining the fully connected and convolutional neural networks, which shows the best performance among the different machine learning models. These improvements are reflected in the model's test performance metric (i.e., F1 score) of 93.33%, an improvement of 2%-12% over the state-of-the-art model under various conditions. The experimental results show that the fusion of classical neural networks and modern deep learning architectures is a suitable choice for interpreting gamma spectra data where real-time and remote detection is necessary.
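
A minimal sketch of what a hybrid fully connected + convolutional model for 1-D gamma-ray spectra might look like in PyTorch is shown below. The channel counts, spectrum length, and number of isotope classes are assumptions, not the architecture reported in the paper.

```python
# Illustrative hybrid model: a 1-D CNN branch and a fully connected branch whose
# features are concatenated before the classifier head. All sizes are assumed.
import torch
import torch.nn as nn

class HybridSpectrumNet(nn.Module):
    def __init__(self, n_channels=1024, n_isotopes=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten(),          # -> 32 * 8 = 256 features
        )
        self.dense = nn.Sequential(nn.Linear(n_channels, 128), nn.ReLU())
        self.head = nn.Linear(256 + 128, n_isotopes)

    def forward(self, spectrum):                            # spectrum: (batch, n_channels)
        conv_feat = self.conv(spectrum.unsqueeze(1))        # add a channel dimension
        dense_feat = self.dense(spectrum)
        return self.head(torch.cat([conv_feat, dense_feat], dim=1))

logits = HybridSpectrumNet()(torch.randn(4, 1024))
print(logits.shape)   # torch.Size([4, 8])
```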

FedGCD: Federated Learning Algorithm with GNN based Community Detection for Heterogeneous Data

  • Wooseok Shin;Jitae Shin
    • Journal of Internet Computing and Services / v.24 no.6 / pp.1-11 / 2023
  • Federated learning (FL) is a groundbreaking machine learning paradigm that allows multiple participants to collaboratively train models in a cloud environment while maintaining the privacy of their raw data. This approach is invaluable in applications involving sensitive or geographically distributed data. However, one of the challenges in FL is dealing with heterogeneous and non-independent and identically distributed (non-IID) data across participants, which can result in suboptimal model performance compared to traditional machine learning methods. To tackle this, we introduce FedGCD, a novel FL algorithm that employs Graph Neural Network (GNN)-based community detection to enhance model convergence in federated settings. In our experiments, FedGCD consistently outperformed existing FL algorithms in various scenarios: for instance, in a non-IID environment, it achieved an accuracy of 0.9113, a precision of 0.8798, and an F1-score of 0.8972. In a semi-IID setting, it demonstrated the highest accuracy at 0.9315 and an impressive F1-score of 0.9312. We also introduce a new metric, non-IIDness, to quantitatively measure the degree of data heterogeneity. Our results indicate that FedGCD not only addresses the challenges of data heterogeneity and non-IIDness but also sets new benchmarks for FL algorithms. The community detection approach adopted in FedGCD has broader implications, suggesting that it could be adapted for other distributed machine learning scenarios, thereby improving model performance and convergence across a range of applications.
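
A minimal sketch of the federated-averaging step that FL algorithms such as FedGCD build on is shown below, with clients grouped before aggregation to hint at the community-detection idea. The grouping shown is a hypothetical placeholder, not the GNN-based community detection used by FedGCD.

```python
# Sketch: FedAvg-style weighted aggregation of client parameters, applied per group.
# The community assignment is a made-up placeholder for a community-detection output.
import numpy as np

def federated_average(client_params, client_sizes):
    """Weighted average of per-client parameter vectors (FedAvg-style)."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, client_params))

# Toy example: three clients with different amounts of (non-IID) data.
params = [np.array([1.0, 2.0]), np.array([2.0, 0.0]), np.array([0.0, 4.0])]
sizes = [100, 300, 600]
communities = {0: [0, 1], 1: [2]}   # hypothetical community-detection result

per_community = {cid: federated_average([params[i] for i in members],
                                        [sizes[i] for i in members])
                 for cid, members in communities.items()}
print(per_community)
```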

Implementation of Image Enhancement Algorithm using Learning User Preferences (선호도 학습을 통한 이미지 개선 알고리즘 구현)

  • Lee, YuKyong;Lee, Yong-Hwan
    • Journal of the Semiconductor & Display Technology / v.17 no.1 / pp.71-75 / 2018
  • Image enhancement is a necessary and essential step after taking a picture with a digital camera. Many photo software packages attempt to automate this process with various auto-enhancement techniques. This paper presents and implements a system that can learn a user's preferences and apply them to the image enhancement process. The implemented system consists of five major components: computing a distance metric, finding a training set, finding an optimal parameter set, training, and finally enhancing the input image. To evaluate the validity of the method, we carried out user studies and found that the implemented system was preferred over the method without learning user preferences.
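
The first two components listed above, a distance metric between images and retrieval of the most similar training examples, can be sketched as follows; the color-histogram feature and the value of k are illustrative assumptions, not the implemented system's actual metric.

```python
# Sketch: L2 distance between per-channel color histograms, used to find the
# training images closest to a new input. Feature choice and k are assumed.
import numpy as np

def histogram_feature(image: np.ndarray, bins: int = 16) -> np.ndarray:
    """Per-channel intensity histogram, normalized to sum to 1."""
    feats = [np.histogram(image[..., c], bins=bins, range=(0, 255))[0] for c in range(3)]
    feat = np.concatenate(feats).astype(float)
    return feat / feat.sum()

def nearest_training_images(query, training_set, k=3):
    q = histogram_feature(query)
    dists = [np.linalg.norm(q - histogram_feature(t)) for t in training_set]
    return np.argsort(dists)[:k]          # indices of the k closest training images

# Toy usage with random RGB images.
train = [np.random.randint(0, 256, (64, 64, 3)) for _ in range(10)]
query = np.random.randint(0, 256, (64, 64, 3))
print(nearest_training_images(query, train))
```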