• Title/Summary/Keyword: variable complexity model

Mobile shopping intentions: Do trustworthiness and culture Matter?

  • GARROUCH, Karim;TIMOULALI, ElHabib
    • Journal of Distribution Science / v.18 no.11 / pp.69-77 / 2020
  • Purpose: This research aims to verify the role of mobile shopping attributes, trustworthiness, and cultural dimensions on mobile shopping intentions in Saudi Arabia. The originality of the model stems from the verification of the moderating impact of cultural variables, namely collectivism and masculinity, and from the integration of trustworthiness as a variable depending on mobile shopping attributes. Research design, data and methodology: A survey was distributed to 233 consumers of different nationalities living in the Kingdom of Saudi Arabia. Structural equation modeling and multi-group analysis were carried out to verify the conceptual model and the moderating variables. Results: The findings support the influence of several innovation attributes, namely complexity and trialability, on behavioral intentions, while relative advantage has a direct impact on trustworthiness. A few paths are moderated by masculinity and collectivism. Conclusions: Culture and mobile commerce attributes need to be considered by managers as factors influencing mobile commerce segmentation for expatriates and locals. Trustworthiness is also a key factor in mobile shopping adoption. Limitations and future research ideas are presented to enrich the proposed model and improve its predictive validity.
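
As a rough illustration of the structural equation modeling step described in this abstract, the sketch below fits two structural paths (trustworthiness regressed on relative advantage, and intention on complexity, trialability, and trustworthiness) on synthetic data, assuming the third-party semopy package. The variable names, path structure, and data are illustrative assumptions; the paper's full model and its multi-group (collectivism/masculinity) analysis are not reproduced here.

    import numpy as np
    import pandas as pd
    from semopy import Model  # assumes the semopy SEM package is installed

    rng = np.random.default_rng(0)
    n = 233  # same sample size as the survey in the abstract
    relative_advantage = rng.normal(size=n)
    complexity = rng.normal(size=n)
    trialability = rng.normal(size=n)
    trustworthiness = 0.5 * relative_advantage + rng.normal(scale=0.8, size=n)
    intention = (0.4 * trialability - 0.3 * complexity
                 + 0.5 * trustworthiness + rng.normal(scale=0.8, size=n))
    data = pd.DataFrame({"relative_advantage": relative_advantage,
                         "complexity": complexity,
                         "trialability": trialability,
                         "trustworthiness": trustworthiness,
                         "intention": intention})

    # Hypothetical path model: trustworthiness depends on relative advantage,
    # and shopping intention depends on the innovation attributes and trust.
    desc = ("trustworthiness ~ relative_advantage\n"
            "intention ~ complexity + trialability + trustworthiness")
    model = Model(desc)
    model.fit(data)
    print(model.inspect())  # path estimates, standard errors, p-values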

Use of partial least squares analysis in concrete technology

  • Tutmez, Bulent
    • Computers and Concrete / v.13 no.2 / pp.173-185 / 2014
  • Multivariate analysis is a statistical technique that investigates the relationships between multiple predictor variables and a response variable, and it is a very commonly used approach in the cement and concrete industry. During the model-building stage, however, many predictor variables are included in the model and possible collinearity problems among these predictors are generally ignored. In this study, the use of partial least squares (PLS) analysis for evaluating the relationships among cement and concrete properties is investigated. This regression method is known to decrease model complexity by reducing the number of predictor variables and to yield accurate and reliable predictions. The experimental studies showed that the method can be applied effectively to multivariate problems in the cement and concrete industry.
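
The abstract above describes using PLS regression to handle collinear predictors; a minimal scikit-learn sketch of that idea on synthetic data follows. The number of latent components and the synthetic mix-design predictors are assumptions, not values from the paper.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 8))                      # synthetic mix-design predictors
    X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=100)    # deliberately induce collinearity
    y = X @ rng.normal(size=8) + rng.normal(scale=0.5, size=100)

    # A few latent components absorb the collinearity and reduce model complexity.
    pls = PLSRegression(n_components=3)
    print("cross-validated R^2:", cross_val_score(pls, X, y, cv=5, scoring="r2").mean())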

THE ACCURACY OF MEASUREMENTS DURING MODEL SURGERY FOR ORTHOGNATHIC PLANNING (악교정 수술을 위한 석고모형 수술시의 계측오차)

  • Lee, Sang-Hwy;Lee, Seung-Hoon;Ju, Hyeon-Ho;Won, Dong-Hwan
    • Journal of the Korean Association of Oral and Maxillofacial Surgeons / v.27 no.1 / pp.37-45 / 2001
  • Errors in orthognathic surgery can occur during the preoperative preparations, including model surgery, but until now there has been little research on them. We therefore aimed to verify the accuracy of the measurements used in model surgery. We compared the accuracy of measurements made with vernier calipers, the main measurement tool for conventional model surgery, and with a height gauge, which has recently been claimed to be more accurate, against a three-dimensional coordinate analyzer. We obtained the following results and plan to use them in devising new model surgery techniques. 1. The measurement errors in Group 1, defined as the differences between the measurements of the 3-D analyzer and those of the height gauge, were small enough, within a range of 0.1~0.2 mm in all planes. 2. The mean error in Group 2, defined as the differences between the measurements of the 3-D analyzer and those of the vernier calipers, was 1.1 mm. 3. The measurement errors in Group 2 varied with factors including the individuality and expertise of each measurer; in Group 1, by contrast, they were small and did not vary with expertise. 4. In both groups, the measurements were more accurate at points on the anterior teeth than on the molar teeth. 5. In Group 2, the errors after model surgery increased remarkably compared with those before surgery, whereas in Group 1 the errors decreased after surgery. According to these results, measurements made with a height gauge during model surgery for orthognathic surgery are considered accurate enough and can be maintained regardless of model complexity, measurer individuality, or expertise.

A maximum likelihood approach to infer demographic models

  • Chung, Yujin
    • Communications for Statistical Applications and Methods / v.27 no.3 / pp.385-395 / 2020
  • We present a new maximum likelihood approach to estimate demographic history using genomic data sampled from two populations. A demographic model such as an isolation-with-migration (IM) model explains the genetic divergence of two populations that split away from their common ancestral population. The standard probability model for an IM model contains a latent variable called a genealogy that represents gene-specific evolutionary paths and links the genetic data to the IM model. Under an IM model, a genealogy consists of two kinds of evolutionary paths of genetic data: vertical inheritance paths (coalescent events) through generations and horizontal paths (migration events) between populations. The computational complexity of IM model inference is one of the major limitations in analyzing genomic data. We propose a fast maximum likelihood approach to estimate IM models from genomic data. The first step analyzes genomic data and maximizes the likelihood of a coalescent tree that contains the vertical paths of the genealogy. The second step analyzes the estimated coalescent trees and finds the parameter values of an IM model that maximize the probability of the coalescent trees after taking account of possible migration events. We evaluate the performance of the new method by analyses of simulated data and genomic data from two subspecies of common chimpanzees in Africa.
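
The two-step structure described above (first reduce each locus to an estimated coalescent quantity, then maximize a parametric likelihood over those estimates) can be illustrated with a deliberately simplified toy: the sketch below treats per-locus estimated coalescent times as exponentially distributed and maximizes the likelihood of a single population-size parameter. It is only a schematic of the two-step idea, not the paper's IM-model likelihood, and all quantities are synthetic.

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(42)

    # Step 1 (stand-in): suppose per-locus analysis yielded these estimated
    # coalescent times, whose expectation equals the scaled population size theta.
    estimated_times = rng.exponential(scale=2.0, size=200)

    # Step 2: maximize the log-likelihood of an exponential model for the
    # coalescent times from step 1 over the single parameter theta.
    def neg_log_likelihood(theta):
        return -np.sum(-np.log(theta) - estimated_times / theta)

    res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 50.0), method="bounded")
    print("estimated theta:", res.x)  # close to the sample mean (about 2.0)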

Multivariate quantile regression tree (다변량 분위수 회귀나무 모형에 대한 연구)

  • Kim, Jaeoh;Cho, HyungJun;Bang, Sungwan
    • Journal of the Korean Data and Information Science Society / v.28 no.3 / pp.533-545 / 2017
  • Quantile regression models provide a variety of useful statistical information by estimating the conditional quantile function of the response variable. However, the traditional linear quantile regression model can lead to distorted and incorrect results when analyzing real data with a nonlinear relationship between the explanatory variables and the response variables. Furthermore, as the complexity of the data increases, it becomes necessary to analyze multiple response variables simultaneously with more sophisticated interpretations. For these reasons, we propose a multivariate quantile regression tree model. In this paper, a new split variable selection algorithm is suggested for the multivariate regression tree model. This algorithm can select the split variable more accurately than the previous method, without significant selection bias. We investigate the performance of our proposed method with both simulation and real data studies.
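
A minimal sketch of the core computation behind a quantile regression tree split follows: choose the split variable and cut point that most reduce the pinball (check) loss of the target quantile, summed over the response variables. This is an illustrative single-split search on synthetic data, not the selection-bias-corrected algorithm proposed in the paper.

    import numpy as np

    def pinball_loss(y, q, tau):
        # check loss of predicting the constant quantile q for responses y
        r = y - q
        return np.sum(np.where(r >= 0, tau * r, (tau - 1) * r))

    def node_loss(Y, tau):
        # fit each response by its empirical tau-quantile and sum the losses
        return sum(pinball_loss(Y[:, j], np.quantile(Y[:, j], tau), tau)
                   for j in range(Y.shape[1]))

    def best_split(X, Y, tau=0.5):
        best = (None, None, np.inf)
        for j in range(X.shape[1]):                 # candidate split variables
            for cut in np.unique(X[:, j])[:-1]:     # candidate cut points
                left = X[:, j] <= cut
                loss = node_loss(Y[left], tau) + node_loss(Y[~left], tau)
                if loss < best[2]:
                    best = (j, cut, loss)
        return best

    rng = np.random.default_rng(1)
    X = rng.uniform(size=(200, 3))
    Y = np.column_stack([np.where(X[:, 0] > 0.5, 2.0, 0.0) + rng.normal(size=200),
                         np.where(X[:, 0] > 0.5, -1.0, 1.0) + rng.normal(size=200)])
    print(best_split(X, Y, tau=0.5))  # should pick variable 0 with a cut near 0.5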

Enhancing prediction accuracy of concrete compressive strength using stacking ensemble machine learning

  • Yunpeng Zhao;Dimitrios Goulias;Setare Saremi
    • Computers and Concrete / v.32 no.3 / pp.233-246 / 2023
  • Accurate prediction of concrete compressive strength can minimize the need for extensive, time-consuming, and costly mixture optimization testing and analysis. This study attempts to enhance the prediction accuracy of compressive strength using stacking ensemble machine learning (ML) with feature engineering techniques. Seven alternative ML models of increasing complexity were implemented and compared, including linear regression, SVM, decision tree, multilayer perceptron, random forest, XGBoost and AdaBoost. To further improve the prediction accuracy, an ML pipeline was proposed in which the feature engineering technique was implemented, and a two-layer stacked model was developed. The k-fold cross-validation approach was employed to optimize model parameters and train the stacked model. The stacked model showed superior performance in predicting concrete compressive strength, with a coefficient of determination (R²) of 0.985. Feature (i.e., variable) importance was determined to demonstrate how useful the synthetic features are in prediction and to provide better interpretability of the data and the model. The methodology in this study promotes a more thorough assessment of alternative ML algorithms rather than a focus on any single ML model type for concrete compressive strength prediction.
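
The abstract above outlines a two-layer stacked ensemble trained with k-fold cross-validation; a minimal scikit-learn sketch of that setup on synthetic data follows. The base-learner mix, hyperparameters, and data are assumptions and do not reproduce the paper's pipeline or feature engineering.

    from sklearn.datasets import make_regression
    from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                                  StackingRegressor)
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVR

    X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)

    # Layer 1: diverse base learners; layer 2: a linear meta-learner fitted on
    # out-of-fold base predictions (cv=5 inside the stacker).
    stack = StackingRegressor(
        estimators=[("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
                    ("gbr", GradientBoostingRegressor(random_state=0)),
                    ("svr", SVR(C=10.0))],
        final_estimator=LinearRegression(),
        cv=5,
    )
    print("cross-validated R^2:", cross_val_score(stack, X, y, cv=5, scoring="r2").mean())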

Distributed Target Localization with Inaccurate Collaborative Sensors in Multipath Environments

  • Feng, Yuan;Yan, Qinsiwei;Tseng, Po-Hsuan;Hao, Ganlin;Wu, Nan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.5 / pp.2299-2318 / 2019
  • Location-aware networks are of great importance for both civilian life and military applications. Methods based on line-of-sight (LOS) measurements suffer severe performance loss in harsh environments such as indoor scenarios, where sensors can receive both LOS and non-line-of-sight (NLOS) measurements. In this paper, we propose a data association (DA) process based on the expectation maximization (EM) algorithm, which enables us to exploit multipath components (MPCs). By setting the mapping relationship between the measurements and scatterers as a latent variable, the coefficients of the Gaussian mixture model are estimated. Moreover, considering the misalignment of sensor positions, we propose a space-alternating generalized expectation maximization (SAGE)-based algorithm to jointly update the target localization and sensor position information. A two-dimensional (2-D) circularly symmetric Gaussian distribution is employed to approximate the probability density function of the sensor's position uncertainty via minimization of the Kullback-Leibler divergence (KLD), which enables us to calculate the expectation step with low computational complexity. Furthermore, a distributed implementation is derived based on the average consensus method to improve the scalability of the proposed algorithm. Simulation results demonstrate that the proposed centralized and distributed algorithms can perform close to the Monte Carlo-based method with much lower communication overhead and computational complexity.
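
The EM-based data association described above can be illustrated with a deliberately reduced toy: the sketch below softly assigns one-dimensional range residuals to either a LOS component or a positively biased NLOS component of a two-component Gaussian mixture. It shows only the E-step/M-step pattern, not the paper's SAGE updates, sensor-position refinement, or consensus-based distributed implementation.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(7)
    # residuals = measured range minus predicted range: LOS ~ N(0, 1), NLOS ~ N(5, 4)
    residuals = np.concatenate([rng.normal(0, 1, 150), rng.normal(5, 2, 50)])

    w, mu, sigma = np.array([0.5, 0.5]), np.array([0.0, 3.0]), np.array([1.0, 1.0])
    for _ in range(50):
        # E-step: responsibility of each component for each measurement
        dens = np.stack([w[k] * norm.pdf(residuals, mu[k], sigma[k]) for k in range(2)])
        resp = dens / dens.sum(axis=0)
        # M-step: update mixture weights, means, and standard deviations
        nk = resp.sum(axis=1)
        w = nk / len(residuals)
        mu = (resp * residuals).sum(axis=1) / nk
        sigma = np.sqrt((resp * (residuals - mu[:, None]) ** 2).sum(axis=1) / nk)

    print("LOS/NLOS means:", mu, "mixture weights:", w)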

A Study on the Dimension of Quality Metrics for Information Systems Development and Success : An Application of Information Processing Theory

  • An, Joon M.
    • The Journal of Information Technology and Database / v.3 no.2 / pp.97-118 / 1996
  • Information systems quality engineering is one of the most problematic areas in practice and research, and it needs cooperative efforts between practice and theory [Glass, 1996]. A model for evaluating the quality of the system development process and ensuing success is proposed based on the information processing theory of project unit design. A nomological net among a set of quality variables is identified from prior research in the areas of organization science, software engineering, and management information systems. More specifically, system development success is modelled as a function of project complexity, system development modelling environment, user participation, project unit structure, resource availability, and the degree to which the development methodology is iterative. Based on the model developed from the information processing theory of project unit design in organization science, appropriate quality metrics are matched to each variable in the proposed model. In this way, a framework of relevant systems development and success quality metrics for controlling systems development processes and ensuing success is proposed. The causal relationships among the constructs in the proposed model are offered as future empirical research for academicians and as managerial tools for quality managers. The framework and propositions help quality managers select more parsimonious quality metrics for controlling information systems development processes and project success in an integrated way. This model can also be utilized for evaluating software quality assurance programmes, which are developed and marketed by many vendors.

Prediction Model for Unpaid Customers Using Big Data (빅 데이터 기반의 체납 수용가 예측 모델)

  • Jeong, Jaean;Lee, Kyouhwan;Jung, Hoekyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.7 / pp.827-833 / 2020
  • In this paper, to reduce the unpaid rate of local governments, the internal data elements in Water-INFOS that affect arrears are identified through interviews with meter readers in certain local governments, and candidate data affecting arrears are derived from national statistical data. The influence of each independent variable on the dependent variable is measured by examining the disorder (entropy) of the dependent variable in the data set, known as information gain. We also compared the prediction rates of a decision tree and logistic regression using n-fold cross-validation. The results confirmed that the decision tree can find more accurate customer payment patterns than logistic regression. In the process of developing the analysis algorithm model using machine learning, the optimal values of two environmental variables that directly affect the complexity and accuracy of the decision tree, the minimum number of data points and the maximum purity, are derived to improve the accuracy of the algorithm.
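
A minimal sketch of the comparison described above follows: a decision tree versus logistic regression under k-fold cross-validation, with a small grid search over two tree hyperparameters that govern complexity (stand-ins for the abstract's minimum number of data and maximum purity settings). The data are synthetic; the Water-INFOS variables are not modeled here.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV, cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    # imbalanced synthetic data: the minority class stands in for unpaid customers
    X, y = make_classification(n_samples=1000, n_features=10, weights=[0.8, 0.2],
                               random_state=0)

    logit = LogisticRegression(max_iter=1000)
    print("logistic regression accuracy:", cross_val_score(logit, X, y, cv=5).mean())

    grid = GridSearchCV(
        DecisionTreeClassifier(random_state=0),
        param_grid={"min_samples_leaf": [5, 20, 50],              # minimum data per leaf
                    "min_impurity_decrease": [0.0, 1e-3, 1e-2]},  # purity-based stopping
        cv=5,
    )
    grid.fit(X, y)
    print("decision tree accuracy:", grid.best_score_, "best params:", grid.best_params_)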

Evaluation of Evapotranspiration and Soil Moisture of SWAT Simulation for Mixed Forest in the Seolmacheon Catchment (설마천유역 혼효림에서 실측된 증발산과 토양수분을 이용한 SWAT모형의 적용성 평가)

  • Joh, Hyung-Kyung;Lee, Ji-Wan;Shin, Hyung-Jin;Park, Geun-Ae;Kim, Seong-Joon
    • Korean Journal of Agricultural and Forest Meteorology / v.12 no.4 / pp.289-297 / 2010
  • Common practice in Soil and Water Assessment Tool (SWAT) model validation is to use a single variable (i.e., streamflow) to calibrate the SWAT model, owing to the paucity of actual hydrological measurement data in Korea. This approach, however, often causes errors in the simulated results because of the numerous sources of uncertainty and the complexity of the SWAT model. We employed multiple variables (i.e., streamflow, evapotranspiration, and soil moisture), which were measured in a mixed forest in the Seolmacheon catchment (8.54 km²), in order to assess the performance and reduce the uncertainties of SWAT model output. Meteorological and surface topographical data of the catchment were obtained as basic input variables, and the SWAT model was calibrated using daily data of streamflow (Jan.-Dec.), evapotranspiration (Sep.-Dec.), and soil moisture (Jun.-Dec.) collected in 2007. The model performance was assessed by comparing its results with the observations (i.e., streamflow of 2003 to 2008 and evapotranspiration and soil moisture of 2008). When the multi-variable measurements were used to calibrate the SWAT model, the model results showed better agreement with the measurements than those calibrated with a single-variable measurement, with increases in the coefficient of determination (R²) from 0.72 to 0.76 for streamflow, from 0.49 to 0.59 for soil moisture, and from 0.52 to 0.59 for evapotranspiration. The findings highlight the importance of reliable and accurate collective observation data for improving the performance of the SWAT model and facilitate its use for estimating more realistic hydrological cycles at the catchment scale.
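
The multi-variable calibration idea above can be sketched as a scoring step: combine the coefficient of determination (R²) for streamflow, evapotranspiration, and soil moisture into one objective for a candidate parameter set, instead of scoring streamflow alone. The series below are synthetic stand-ins; a real workflow would take the simulated series from SWAT runs.

    import numpy as np
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(3)
    obs = {"streamflow": rng.gamma(2.0, 2.0, 365),
           "evapotranspiration": rng.uniform(0.0, 5.0, 365),
           "soil_moisture": rng.uniform(0.1, 0.4, 365)}
    # placeholder for a SWAT run: simulated series = observation plus noise
    sim = {k: v + rng.normal(scale=0.2 * v.std(), size=v.size) for k, v in obs.items()}

    r2 = {k: r2_score(obs[k], sim[k]) for k in obs}
    objective = np.mean(list(r2.values()))  # equal-weight aggregate to maximize
    print(r2, "aggregate:", objective)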