Transfer Learning for Caladium bicolor Classification: Proof of Concept to Application Development

  • Porawat Visutsak (Department of Computer and Information Science, Faculty of Applied Science, KMUTNB) ;
  • Xiabi Liu (School of Computer Science and Technology, Beijing Institute of Technology) ;
  • Keun Ho Ryu (Database/Bioinformatics Laboratory, Chungbuk National University) ;
  • Naphat Bussabong (Department of Computer and Information Science, Faculty of Applied Science, KMUTNB) ;
  • Nicha Sirikong (Department of Computer and Information Science, Faculty of Applied Science, KMUTNB) ;
  • Preeyaphorn Intamong (Department of Computer and Information Science, Faculty of Applied Science, KMUTNB) ;
  • Warakorn Sonnui (Department of Computer and Information Science, Faculty of Applied Science, KMUTNB) ;
  • Siriwan Boonkerd (Department of Computer and Information Science, Faculty of Applied Science, KMUTNB) ;
  • Jirawat Thongpiem (Department of Computer and Information Science, Faculty of Applied Science, KMUTNB) ;
  • Maythar Poonpanit (Department of Computer and Information Science, Faculty of Applied Science, KMUTNB) ;
  • Akarasate Homwiseswongsa (Department of Computer and Information Science, Faculty of Applied Science, KMUTNB) ;
  • Kittipot Hirunwannapong (Department of Computer and Information Science, Faculty of Applied Science, KMUTNB) ;
  • Chaimongkol Suksomsong (Department of Computer and Information Science, Faculty of Applied Science, KMUTNB) ;
  • Rittikait Budrit (Department of Computer and Information Science, Faculty of Applied Science, KMUTNB)
  • Received : 2023.04.13
  • Accepted : 2023.12.25
  • Published : 2024.01.31

Abstract

Caladium bicolor is one of the most popular plants in Thailand. The original species of Caladium bicolor was found a hundred years ago; since then, more than 500 species have been produced through multiplication. Caladium bicolor can be classified by its color and shape. This study aims to develop a model to classify Caladium bicolor using a transfer learning technique. This work also presents a proof of concept, GUI design, and web application deployment using the user-centered design method. We evaluated the performance of the following pre-trained models, with these results: 87.29% for AlexNet, 90.68% for GoogleNet, 93.59% for XceptionNet, 93.22% for MobileNetV2, 89.83% for ResNet18, 88.98% for ResNet50, 97.46% for ResNet101, and 94.92% for InceptionResNetV2. This work was implemented using MATLAB R2023a.

1. Introduction

During the COVID-19 pandemic, people's lives were disrupted by working from home and living with day-to-day restrictions. This disruption affected many populations across the world, including Thailand. Restrictions on social activity contributed to a national mental health crisis. To reduce stress and mental health problems, families spent more time together gardening and planting. Caladium bicolor is one of the most popular indoor plants in Thailand. The original species of Caladium bicolor was found a hundred years ago; today, there are more than 500 species produced through multiplication, with various leaf shapes and colors [1], [2], [3], and [4]. Buying and tending Caladium bicolor is no longer just a pastime; it has become a big business in Thailand. The prices of some Caladium bicolor have tripled because of high demand, and some species are rarely found.

In this study, we propose a transfer learning technique to classify Caladium bicolor using its shape features. We gathered images of Caladium bicolor from Google Images and used an augmentation technique to enlarge our image dataset. Eight efficient, well-known pre-trained models were chosen for evaluation in our work (AlexNet, GoogleNet, XceptionNet, MobileNetV2, ResNet18, ResNet50, ResNet101, and InceptionResNetV2). This work was implemented using MATLAB R2023a. After obtaining the experimental results (proof of concept), we selected the best candidate among these pre-trained models to be embedded in the Caladium bicolor classification application. We designed a GUI for this application using the UX/UI concept. To evaluate which design is the most appropriate for classifying Caladium bicolor in practice, we designed two variants of the GUI to test during the user-testing process. This paper is organized as follows: Section 2 describes the problem statement and literature review. The implementation is shown in Section 3. Section 4 describes the experimental results and discussion. The conclusions are presented in Section 5.

2. Problem Statement and Preliminaries

There are more than 500 species of Caladium bicolor found in Thailand. Most of them are new species produced through multiplication from the original one; therefore, each has different characteristics, especially in color pattern and leaf shape [3] and [4]. Classifying Caladium bicolor by color pattern is very difficult because there is considerable color variation within a leaf, even among plants of the same breed. This paper presents the classification of Caladium bicolor using shape features with a transfer learning technique. This work also presents an A/B testing method for assessing the GUI designs used to deploy the Caladium bicolor classification application.

Our previous work used the transfer learning technique with Haar-like features extracted from images to classify Thai Buddha amulets. We gathered Buddha amulet images from Google and used data augmentation to increase the number of images for training the model. The Haar-like feature made the training model robust, since it captured the boundary features of the Buddha amulets with low dimensionality. Our previous work obtained 0.9367, 0.9379, and 0.9373 for precision, recall, and F1 score, respectively [5]. To understand the evolution of plants, one study used ML techniques to learn from large, complex plant genotyping and phenotyping datasets, extracting gene-structure features to learn, analyze, and predict mutations in plant breeding [6]. A later study claimed to present the first plant image dataset in a natural scene collected using a mobile phone; the dataset contains 10,000 images of 100 plant breeds captured on the Beijing Forestry University campus, and the model yielded a recognition rate of 91.78% [7]. The extraction of plant leaves and the use of leaf features as training data were introduced in [8]. That study showed how to segment the plant leaf and extract shape features, apart from plant texture, to train a deep model; the model was evaluated against a KNN classifier, a Kohonen network based on a self-organizing feature mapping algorithm, and an SVM classifier. A simple KNN-based classifier for plant surface features was also introduced in [9]. Unlike other works, that work focuses on the surface features of plants for use in a robotized vision framework for harvesting and farming. Dried plant images have also been used to study whether plant organs can still be observed after the water component is removed [10]. The objective of that work was to study plant development under restricted water conditions, especially during the dry season; the methods used were SVM, ANN, and some image-processing techniques to highlight the dry texture of a plant. In [11], eight effective approaches (Random Forest, SVM, ResNet50, CNN, VGG16, VGG19, PNN, and KNN) were evaluated on the Flavia plant-leaf image dataset. Aiming for results that can help conserve plants worldwide, that work also used plant leaves as an important feature to improve plant classification and recognition. Recent works on ML-based plant disease classification were reviewed in [12] and [13]. As with plant classification applications, a major effort in plant disease classification is the preparation of plant disease datasets. Gathering plant disease images is an important step because researchers must decide which plants are important to the market, and the images can be taken only during the harvesting seasons; otherwise, they must wait until the next year. Popular models used to classify plant diseases include GoogleNet, ResNet, VGG, and KNN.

More works on plant disease detection and flower classification using image processing are introduced in [14], [15], [16], [17], and [18]. In [14], deep learning was applied to improve plant health and food crop productivity in an ecology-based farming system. Using CNN-based computer vision and remote sensing techniques, farmers can access the system and observe the farming environment via the cloud; disease experts can give advice, and the system can also give basic recommendations on plant diseases. The image dataset covers the major diseases of rice, potato, and tomato, with more than 7,500 images, and the classification accuracy was 0.98. In [15] and [16], feature extraction techniques for flower images were proposed; the features were fed to well-known pre-trained models such as DenseNet121, ResNet50, ResNet101, InceptionResNetV2, InceptionV2, NAS, and MobileNetV2, and the classification results were compared. In [17], a segmentation technique was applied to leaf images to separate a leaf region from an overlapping background. The dataset contains 2,500 leaf images of 15 species with complicated backgrounds; 2,000 images were used to train a leaf segmentation model based on a mask region-based convolutional neural network (Mask R-CNN). By fine-tuning the segmentation hyperparameters, the method achieved 91.5% accuracy with a 1.15% misclassification error. In [18], a modified VGG16 model was proposed to classify five categories of common flowers, namely daisy, dandelion, sunflower, rose, and tulip. The model was trained using 3,520 flower images with 3×3 filters; this simple model achieved an accuracy of 0.95 on the Kaggle dataset.

3. Implementation

We collected Caladium bicolor images from Google Images by using the Google custom image search and the JavaScript console in Chrome, retrieving Caladium bicolor images under five Thai local-name classes: 1) Bon Bai Klom, 129 images; 2) Bon Bai Kab, 107 images; 3) Bon Bai Thai, 125 images; 4) Bon Bai Pai, 104 images; and 5) Bon Bai Yao, 120 images (585 images in total). Some examples of Caladium bicolor images are shown in Table 1. We implemented our work using MATLAB R2023a (9.14.0.2286388). We created an image datastore from the folders labeled by class name with the MATLAB command imageDatastore. We split the datastore into a training set (80%) and a validation set (20%) using the MATLAB command splitEachLabel. The major problem in our work was the small dataset; therefore, we had to increase the number of images. We used the MATLAB command augmentedImageDatastore for both the training and validation datasets. MATLAB data augmentation provides useful image processing operations, e.g., rotating, flipping, resizing, and translating, to increase the number of Caladium bicolor images. Fig. 1 shows the MATLAB augmentation results.
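A minimal MATLAB sketch of this dataset pipeline is given below; the root folder name and the augmentation ranges are illustrative assumptions, since the exact settings are not reported in the text.

    % Image datastore labeled by folder name (five Thai class names);
    % the root folder "caladium_dataset" is an assumed path.
    imds = imageDatastore("caladium_dataset", ...
        "IncludeSubfolders", true, "LabelSource", "foldernames");

    % 80/20 split into training and validation sets.
    [imdsTrain, imdsVal] = splitEachLabel(imds, 0.8, "randomized");

    % On-the-fly augmentation: rotation, flipping, and translation
    % (resizing is handled by augmentedImageDatastore itself).
    augmenter = imageDataAugmenter( ...
        "RandRotation", [-20 20], ...
        "RandXReflection", true, ...
        "RandXTranslation", [-10 10], ...
        "RandYTranslation", [-10 10]);

    inputSize = [224 224 3];   % matches the input layer in this section
    augTrain = augmentedImageDatastore(inputSize(1:2), imdsTrain, ...
        "DataAugmentation", augmenter);
    augVal = augmentedImageDatastore(inputSize(1:2), imdsVal);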

Table 1. Caladium bicolor images from Google custom image search



Fig. 1. Data augmentation for Caladium bicolor images.

To train our model, we used efficient pre-trained models to reduce the computational cost of training a brand-new deep learning model. In our experiment, we chose AlexNet, GoogleNet, XceptionNet, MobileNetV2, ResNet18, ResNet50, ResNet101, and InceptionResNetV2 as the pre-trained models for our new classification problem. We used the MATLAB Deep Network Designer and adapted some parameters for our experiment. Table 2 lists the important parameters used in our training process.

Table 2. Parameter configuration for the MATLAB Deep Network Designer

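To make the configuration step concrete, the sketch below sets up the training options in MATLAB; the hyperparameter values shown are assumptions for illustration only, and the values actually used in our experiments are those reported in Table 2.

    % Illustrative training configuration (not the reported Table 2 values).
    options = trainingOptions("sgdm", ...
        "InitialLearnRate", 1e-4, ...
        "MiniBatchSize", 32, ...
        "MaxEpochs", 30, ...
        "ValidationData", augVal, ...
        "ValidationFrequency", 10, ...
        "Shuffle", "every-epoch", ...
        "Plots", "training-progress", ...
        "Verbose", false);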

Fig. 2 shows the system block diagram and the proposed model. Briefly, the contributions of this work include: 1) creating the image datastore (Caladium image dataset), 2) augmenting the image dataset using MATLAB image processing, 3) training eight pre-trained models, 4) model evaluation and selection, 5) designing a MATLAB GUI based on the user-centered concept, 6) conducting A/B testing and user testing, and 7) deploying the model.


Fig. 2. System model.

We set the output size to five, according to the five classes of Caladium bicolor images. We set the network architecture with the following input and output layers: 1) the input layer (Caladium bicolor input image): input = 224×224×3, output = 7×7×64; 2) bypass layer 1: input = 1×1×64, output = 1×1×256; 3) bypass layer 2: input = 1×1×128, output = 1×1×512; 4) bypass layer 3: input = 1×1×256, output = 1×1×1024; 5) bypass layer 4: input = 1×1×512, output = 1×1×2048; 6) the output layer: average pooling followed by a fully connected layer with five outputs. The training results and parameter comparisons are shown in Section 4.
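The following sketch shows one way to adapt a pre-trained network to the five-class output described above, using ResNet101 as the example; the layer name "fc1000" follows MATLAB's resnet101 model, and the learning-rate factors are illustrative assumptions.

    % Load the pre-trained network and get its layer graph.
    net = resnet101;          % requires the ResNet-101 support package
    lgraph = layerGraph(net);

    % Replace the 1000-class head with a 5-class head.
    newFc = fullyConnectedLayer(5, "Name", "fc_caladium", ...
        "WeightLearnRateFactor", 10, "BiasLearnRateFactor", 10);
    lgraph = replaceLayer(lgraph, "fc1000", newFc);

    % The final layer of the graph is the classification output layer.
    oldOut = lgraph.Layers(end).Name;
    lgraph = replaceLayer(lgraph, oldOut, ...
        classificationLayer("Name", "caladium_output"));

    % Train with the augmented datastore and options sketched earlier.
    trainedNet = trainNetwork(augTrain, lgraph, options);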

4. Results and Discussion

In this section, we compare the training results obtained from the MATLAB Deep Network Designer and choose the best candidate among the pre-trained models to deploy in the Caladium bicolor classification application. We also describe a user-testing process based on the UX/UI concept to ensure that our application meets user requirements. Table 3 shows the experimental results.

Table 3. Training process and parameters


The experimental results are as follows: AlexNet yielded 87.29% accuracy at 50 epochs, GoogleNet 90.68% at 30 epochs, XceptionNet 93.59% at 8 epochs, MobileNetV2 93.22% at 50 epochs, ResNet18 89.83% at 30 epochs, ResNet50 88.98% at 8 epochs, ResNet101 97.46% at 30 epochs, and InceptionResNetV2 94.92% at 50 epochs. The accuracy and loss of all pre-trained models, together with the overall training times, are shown in Fig. 3 and Table 4, respectively. Based on these results, we focused on the two best candidates among the eight pre-trained models: ResNet101 and InceptionResNetV2. ResNet101 achieved a validation accuracy of 97.46% and a validation loss of 0.01 with an overall running time of 34 minutes and 23 seconds, whereas InceptionResNetV2 achieved a validation accuracy of 94.92% and a validation loss of 0.28 with an overall running time of 344 minutes and 5 seconds. Therefore, we selected ResNet101 as the classification model for application deployment in the next step.


Fig. 3. Accuracy and loss of pre-trained models.

Table 4. Comparisons of training parameters


We also evaluated our pre-trained models in terms of precision, recall, accuracy, and F1 score using equations (1)–(4):

\(\begin{align}\text{Precision}=\frac{TP}{TP+FP}\end{align}\)       (1)

\(\begin{align}\text{Recall}=\frac{TP}{TP+FN}\end{align}\)       (2)

\(\begin{align}\text{Accuracy}=\frac{\text{Correct classifications}}{\text{Total number of instances}}\end{align}\)       (3)

\(\begin{align}\text{F1 score}=\frac{2\times\text{Precision}\times\text{Recall}}{\text{Precision}+\text{Recall}}\end{align}\)       (4)

where TP (true positive) is the number of correctly classified instances of a class, TN is true negative, FP is false positive, and FN is false negative. Table 5 shows the classification results of the proposed method.
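The sketch below shows how Eqs. (1)-(4) can be computed in MATLAB from the validation predictions; the variable names continue the earlier sketches and are assumptions rather than our exact scripts.

    % Predict on the validation set and build the confusion matrix.
    predLabels = classify(trainedNet, augVal);
    trueLabels = imdsVal.Labels;
    C = confusionmat(trueLabels, predLabels);  % rows: true, cols: predicted

    TP = diag(C);
    FP = sum(C, 1)' - TP;      % predicted-as-class counts minus diagonal
    FN = sum(C, 2) - TP;       % true-class counts minus diagonal

    precision = TP ./ (TP + FP);                    % Eq. (1), per class
    recall    = TP ./ (TP + FN);                    % Eq. (2), per class
    accuracy  = sum(TP) / sum(C(:));                % Eq. (3)
    f1 = 2 * (precision .* recall) ./ (precision + recall);  % Eq. (4)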

Table 5. Classification results


The confusion charts for ResNet101 and InceptionResNetV2 are shown in Fig. 4 and Fig. 5, respectively. These charts reveal the classes that the networks could not classify accurately; therefore, we can recheck our data and retrain the networks with improved data to see whether the errors can be corrected. We observe that in InceptionResNetV2, the class Bon Bai Klom is the most misclassified class (11.5% misclassification). In this case, we can improve the mini-batch accuracy of InceptionResNetV2 using the MATLAB command knnsearch, which finds the nearest neighbors of Bon Bai Klom images using the K-D tree search algorithm. The K-D tree is a simple but efficient search structure that browses split nodes in two directions (like a binary search tree); using the median as the split criterion, the search result can be obtained quickly thanks to the balanced structure [19] and [20]. Fig. 6 shows the modified layer and the corrected classification of the class Bon Bai Klom.
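A sketch of this knnsearch-based recheck is given below; it assumes deep features are taken from the network's average pooling layer ("pool5" is an assumed layer name) and reduced with PCA so that the K-D tree method applies, since the text does not detail these intermediate steps.

    % Extract deep features for training and validation images.
    featTrain = activations(trainedNet, augTrain, "pool5", "OutputAs", "rows");
    featVal   = activations(trainedNet, augVal,   "pool5", "OutputAs", "rows");

    % MATLAB's Kd-tree search needs at most 10 columns, so reduce with PCA.
    [coeff, scoreTrain] = pca(featTrain, "NumComponents", 10);
    scoreVal = (featVal - mean(featTrain, 1)) * coeff;

    % Five nearest training neighbors of each validation image.
    idx = knnsearch(scoreTrain, scoreVal, "K", 5, "NSMethod", "kdtree");

    % Relabel each validation image by majority vote of its neighbors.
    nnLabels = imdsTrain.Labels(idx);
    votedLabels = mode(nnLabels, 2);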


Fig. 4. Confusion chart of ResNet101.


Fig. 5. Confusion chart of InceptionResNetV2.


Fig. 6. Result of the MATLAB command knnsearch to modify the selected layer.

As seen in Fig. 4 and Fig. 5, ResNet101 gave better classification results than InceptionResNetV2. Therefore, we chose to deploy ResNet101 in our application. In the next step, we deployed ResNet101 to the Caladium bicolor application. To gather the user requirements, we listed some MATLAB GUI components (e.g., panel, button, axes, edit text) that were required for designing the application as choices for selection. After the layout design, we made two variants of the application GUI and conducted A/B testing to determine which GUI was the most appropriate and met the users' needs [21]. Fig. 7 shows the two variants used for A/B testing. In variant no. 1, the GUI consists of one axes object for loading the image to be classified by pressing a push button; the classification results appear in three edit text objects. In variant no. 2, the GUI consists of two axes objects: the first is used to load an image for classification, and the second shows the result in a graph-like format. Both GUIs also display the class label and the confidence level (%).
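As a rough illustration of the behavior of variant no. 2, the callback sketch below loads an image, classifies it, and shows the label, confidence, and a score chart; all component and property names (app.InputAxes, app.ScoreAxes, app.ResultLabel, app.Net) are hypothetical App Designer names, not the ones used in our application.

    % Push-button callback: load, display, classify, and report scores.
    function ClassifyButtonPushed(app, event)
        [file, path] = uigetfile({'*.jpg;*.png'}, 'Select a Caladium image');
        if isequal(file, 0), return; end
        img = imread(fullfile(path, file));
        imshow(img, 'Parent', app.InputAxes);     % first axes: input image

        % Resize to the network input size and classify with scores.
        imgIn = imresize(img, [224 224]);
        [label, scores] = classify(app.Net, imgIn);

        % Predicted label with % confidence, plus a graph-like score view.
        app.ResultLabel.Text = sprintf('%s (%.2f%%)', ...
            string(label), 100 * max(scores));
        bar(app.ScoreAxes, scores);               % second axes: score chart
    end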


Fig. 7. Two GUI variants used for A/B testing.

A/B testing was conducted in an HCI course (040613349) with 24 students, 98% of whom reported that they were familiar with the UX/UI concept and the user-testing process. To run the test without bias, we conducted it as part of the user-testing class. After a class lecture, we gave all the information necessary for performing A/B testing, together with instructions for the user testing and the expectations of the experiment. To evaluate the A/B testing, all students were asked to complete a "System Usability & UX Questionnaire." We designed this questionnaire with five questions adopted from the usability criteria defined by ISO 9241-11 and the UX criteria [22]. The questions were designed to cover the necessary topics of the UX/UI concept (utility, functionality, ease of use, consistency, and satisfaction). Each question had five response options ranging from strongly disagree to strongly agree (1–5). The questionnaire also included an open question for more specific feedback.

The evaluation results of the two variants are shown in Table 6 (average and standard deviation). The responses were statistically significant for one variable: ease of use (p < 0.05). The p-values for the five variables are also shown in Table 6. Therefore, we chose the GUI design of variant B to deploy the ResNet101 model. Fig. 8 shows the final GUI of the Caladium bicolor classification application, where an input image can be loaded on the left side of the panel and the classification result is shown on the right side. The predicted label is shown together with the confidence scores of the ResNet101 prediction.

Table 6. Average rating of two variants and p-value (*sig)

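For the per-criterion p-values reported in Table 6, a two-sample t-test on the 1–5 ratings of the two variants is one plausible computation; the sketch below uses made-up illustrative ratings, since the exact test and the raw responses are not given in the text.

    % Illustrative "ease of use" ratings for variants A and B (not real data).
    ratingsA = [4 3 4 2 3 4 3 3 4 2 3 3];
    ratingsB = [5 4 4 5 4 5 4 4 5 4 4 5];

    % Two-sample t-test; h = 1 indicates significance at the 5% level.
    [h, p] = ttest2(ratingsA, ratingsB);
    fprintf('ease of use: p = %.4f (significant: %d)\n', p, h);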


Fig. 8. Screen capture of the application.

5. Conclusion

The contribution of this work is the classification of Caladium bicolor, of which there are more than 500 breeds in Thailand. Caladium bicolor was one of the most popular plants for gardening and planting during the COVID-19 pandemic. In this study, we proposed a transfer learning technique to classify Caladium bicolor using its shape features. We gathered Caladium bicolor images from Google Images using the Google custom image search and the JavaScript console in Chrome, and we increased the number of images using the MATLAB augmentation process to generate more training images. Eight efficient and well-known pre-trained models were evaluated in our experiments: AlexNet, GoogleNet, XceptionNet, MobileNetV2, ResNet18, ResNet50, ResNet101, and InceptionResNetV2. The experimental results showed that ResNet101 and InceptionResNetV2 yielded the best results among the eight pre-trained models. ResNet101 achieved a validation accuracy of 97.46% with an overall running time of 34 minutes and 23 seconds, whereas InceptionResNetV2 achieved a validation accuracy of 94.92% with an overall running time of 344 minutes and 5 seconds. Therefore, we selected ResNet101 as the model to deploy in the Caladium bicolor classification application. The GUI of the application was designed based on the UX/UI concept; we conducted an A/B testing process, using a questionnaire, to find the most appropriate GUI design for the application. The questionnaire was designed based on ISO 9241-11 and UX criteria (utility, functionality, ease of use, consistency, and satisfaction). The statistical responses showed a significant difference in one UX criterion: ease of use (p < 0.05). A screen capture of the classification application is shown in Fig. 8, where the classification of the Caladium bicolor image gives the correct label (Bon Bai Kab) with a 99.99% confidence score.

Acknowledgment

This research was funded by the Faculty of Applied Science, King Mongkut's University of Technology North Bangkok, Thailand (Contract No. 662145).

This research also used computational resources provided by the School of Computer Science and Technology, Beijing Institute of Technology (BIT), under the China Scholarship Council (CSC) Senior Visiting Scholarship Program.

References

  1. A. C. Maia and C. Schlindwein, "Caladium bicolor (Araceae) and Cyclocephala celata (Coleoptera, Dynastinae): A Well-established Pollination System in the Northern Atlantic Rainforest of Pernambuco, Brazil," Plant Biology, 8(4), pp. 529-534, 2006. https://doi.org/10.1055/s-2006-924045
  2. E. U. Ahmed, T. Hayashi, Y. Zhu, M. Hosokawa and S. Yazawa, "Lower Incidence of Variants in Caladium bicolor Ait. Plants Propagated by Culture of Explants from Younger Tissue," Scientia Horticulturae, 96(1-4), pp. 187-194, 2002. https://doi.org/10.1016/S0304-4238(02)00092-4
  3. W. Hetterscheid, J. Bogner and J. Boos, "Two New Caladium Species. Aroideana," Journal of the International Aroid Society, 32, pp. 126-131, 2009.
  4. C. Ekeke and I. O. Agbagwa, "Anatomical Characteristics of Nigerian Variants of Caladium bicolor (Aiton) Vent. (Araceae)," African Journal of Plant Science, 10(7), pp. 121-129, 2016. https://doi.org/10.5897/AJPS2016.1416
  5. P. Visutsak, T. Kuarkamphun and N. Samleepant, "Thai Buddha Amulet Classification Using Discrete Wavelet Transform and Transfer Learning," ICIC Express Letters, 16(11), pp. 1205-1214, 2022.
  6. A. D. J. V. Dijk, G. Kootstra, W. Kruijer and D. D. Ridder, "Machine Learning in Plant Science and Plant Breeding," iScience, 24(1), 2021.
  7. Y. Sun, Y. Liu, G. Wang and H. Zhang, "Deep Learning for Plant Identification in Natural Environment," Computational Intelligence and Neuroscience, vol. 2017, 6 pages, 2017.
  8. J. Huixian, "The Analysis of Plants Image Recognition Based on Deep Learning and Artificial Neural Network," IEEE Access, 8, pp. 68828-68841, 2020. https://doi.org/10.1109/ACCESS.2020.2986946
  9. G. Valarmathi, S.U. Suganthi, V. Subashini, R. Janaki, R. Sivasankari and S. Dhanasekar, "CNN Algorithm for Plant Classification in Deep Learning," Materials Today: Proceedings, 46(9), pp. 3684-3689, 2021. https://doi.org/10.1016/j.matpr.2021.01.847
  10. K. Shobana and P. Perumal, "Plants Classification Using Machine Learning Algorithm," in Proc. of 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS), pp. 96-100, 2020.
  11. S. Ghosh and A. Singh, "The Analysis of Plants Image Classification Based on Machine Learning Approaches," in Emergent Converging Technologies and Biomedical Systems, pp. 133-148, 2022.
  12. D. Gosai, B. Kaka, D. Garg, R. Patel and A. Ganatra, "Plant Disease Detection and Classification Using Machine Learning Algorithm," in Proc. of 2022 International Conference for Advancement in Technology (ICONAT), Goa, India, pp. 1-6, 2022.
  13. T. S. Xian and R. Ngadiran, "Plant Diseases Classification Using Machine Learning," Journal of Physics: Conference Series, 1962(1), p. 012024, 2021.
  14. R. Sharma, A. Singh, N. Z. Jhanjhi, M. Masud, E. S. Jaha and S. Verma, "Plant Disease Diagnosis and Image Classification Using Deep Learning," Computers, Materials & Continua, 71(2), 2022.
  15. N. Alipour, O. Tarkhaneh, M. Awrangjeb and H. Tian, "Flower Image Classification Using Deep Convolutional Neural Network," in Proc. of 2021 7th International Conference on Web Research (ICWR), pp. 1-4, 2021.
  16. I. Patel and S. Patel, "An Optimized Deep Learning Model for Flower Classification Using NASFPN and Faster R-CNN," International Journal of Scientific & Technology Research, 9(03), pp. 5308-5318, 2020.
  17. K. Yang, Z. Weizhen and L. Fengguo, "Leaf Segmentation and Classification with a Complicated Background Using Deep Learning," Agronomy, 10(11), p. 1721, 2020.
  18. S. Giraddi, S. Seeri, P. S. Hiremath and J. G. N, "Flower Classification Using Deep Learning models," in Proc. of 2020 International Conference on Smart Technologies in Computing, Electrical and Electronics (ICSTCEE), Bengaluru, India, pp. 130-133, 2020.
  19. M. A. Jabbar, B. L. Deekshatulu and P. Chandra, "Classification of Heart Disease Using K-Nearest Neighbor and Genetic Algorithm," Procedia Technology, 10, pp. 85-94, 2013. https://doi.org/10.1016/j.protcy.2013.12.340
  20. P. Thanh Noi and M. Kappas, "Comparison of Random Forest, k-Nearest Neighbor, and Support Vector Machine Classifiers for Land Cover Classification Using Sentinel-2 Imagery," Sensors (Basel), 18(1), p. 18, 2018.
  21. C. Kamolsin, F. Pensiri, K. H. Ryu and P. Visutsak, "The Evaluation of GUI Design Using Questionnaire and Multivariate Testing," in Proc. of 2022 Research, Invention, and Innovation Congress: Innovative Electricals and Electronics (RI2C), pp. 191-195, 2022.
  22. M. Speicher, "What is Usability? A Characterization Based on ISO 9241-11 and ISO/IEC 25010," arXiv preprint arXiv:1502.06792, 2015.