• Title/Summary/Keyword: explanation model

A Study on the TAM(Technology Acceptance Model) in Different IT Environments (이질적인 정보기술 사용 환경 하에서의 기술수용모델(TAM)에 대한 연구)

  • Kim, Jun-Woo; Moon, Hyoung-Do
    • Journal of Information Technology Applications and Management, v.14 no.4, pp.175-198, 2007
  • The Technology Acceptance Model (TAM) has been a base model for studying technology use. Subsequent TAM research has updated the model by adding new independent variables to increase its explanatory power. One problem, however, is that different independent variables must be introduced to maintain that explanatory power whenever the model is applied to a particular technology, which reduces the generality of the research model. To increase generality, this study reviewed previous research, collected the independent variables used, and regrouped them into three basic independent constructs. A new research model was designed from these three basic constructs together with four constructs selected for the mandatory and voluntary IT environments, and structural equation analysis (AMOS) was applied to find significant causal relationships between constructs as well as to assess the explanatory power of the model. The study concludes that the new TAM can explain users' adoption of new technology without adding any technology-specific independent variables.

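Structural equation modeling of this kind can also be run outside AMOS. The sketch below is a minimal TAM-style path model in Python using the semopy package; the construct names, indicator columns, and data file are hypothetical illustrations, not taken from the paper.

```python
# A minimal TAM-style structural equation model in semopy.
# All construct names, indicator columns, and the CSV file are hypothetical.
import pandas as pd
from semopy import Model

# lavaan-style description: "=~" defines measurement models, "~" structural paths.
DESC = """
usefulness =~ pu1 + pu2 + pu3
ease       =~ peou1 + peou2 + peou3
intention  =~ bi1 + bi2

usefulness ~ ease
intention  ~ usefulness + ease
"""

data = pd.read_csv("tam_survey.csv")  # hypothetical survey, one row per respondent

model = Model(DESC)
model.fit(data)          # maximum-likelihood estimation by default
print(model.inspect())   # path coefficients, standard errors, p-values
```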

The Development and Validation of Eating Behavior Test Form for Infants and Young Children (영유아 식행동 검사도구 개발 및 타당도 검정)

  • Han, Youngshin; Kim, Su An; Lee, Yoonna; Kim, Jeongmee
    • Korean Journal of Community Nutrition, v.20 no.1, pp.1-10, 2015
  • Objectives: This study was conducted to develop and validate an Eating Behavior Test (EBT) form for infants and young children, covering children's eating behaviors, the eating behaviors of their parents, and parental feeding practices. Methods: A draft EBT form was developed after a pretest on 83 mothers. It consisted of 42 questions across three components: children's eating behavior, parents' eating behavior, and parental feeding practices. Using this questionnaire, a first survey of 320 infants and children aged 1 to 6 years was conducted for exploratory factor analysis, and a second survey of 731 infants and children for confirmatory factor analysis. Results: Exploratory factor analysis of the 42 questions yielded a three-factor model for children's eating behavior, a three-factor model for parents' eating behavior, and a one-factor model for parental feeding practices. The three factors for children's eating behavior were: factor 1, pickiness (reliability α = 0.89; variance explained = 27.79); factor 2, overactivity (α = 0.80; variance explained = 16.51); and factor 3, irregularity (α = 0.59; variance explained = 10.01). The three factors for mothers' eating behavior were: factor 1, irregularity (α = 0.73; variance explained = 21.73); factor 2, pickiness (α = 0.65; variance explained = 20.16); and factor 3, permissiveness (α = 0.60; variance explained = 19.13). Confirmatory factor analysis confirmed an acceptable fit for these models. Internal consistencies for these factors were above 0.6. Conclusions: Our results indicate that the EBT form is a valid tool for comprehensively measuring eating and feeding behaviors of infants and young children.
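The validation pipeline the abstract describes (exploratory factor analysis followed by reliability checks) can be sketched with the factor_analyzer package in Python. The three-factor setting mirrors the abstract, but the item file and column choices are placeholders, not the actual EBT items.

```python
# Exploratory factor analysis plus Cronbach's alpha, mirroring the EBT validation steps.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("ebt_child_items.csv")  # hypothetical: one column per questionnaire item

# Extract three factors, as reported for children's eating behavior.
fa = FactorAnalyzer(n_factors=3, rotation="varimax")
fa.fit(items)
print(pd.DataFrame(fa.loadings_, index=items.columns).round(2))

# Percentage of variance explained by each factor.
_, prop_var, _ = fa.get_factor_variance()
print("variance explained (%):", np.round(prop_var * 100, 2))

def cronbach_alpha(df: pd.DataFrame) -> float:
    """Internal-consistency reliability of the items in df."""
    k = df.shape[1]
    item_var_sum = df.var(axis=0, ddof=1).sum()
    total_var = df.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Reliability of the items loading on one factor (column subset is illustrative).
print("alpha:", round(cronbach_alpha(items.iloc[:, :5]), 2))
```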

A New Explanation of Some Leiden Ranking Graphs Using Exponential Functions

  • Egghe, Leo
    • Journal of Information Science Theory and Practice, v.1 no.3, pp.6-11, 2013
  • A new explanation, using exponential functions, is given for the S-shaped functional relation between the mean citation score and the proportion of top 10% (and other percentages) publications for the 500 Leiden Ranking universities. With this new model we also obtain an explanation for the concave or convex relation between the proportions of top $100\theta\%$ publications for different fractions $\theta$.
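The paper's argument is analytic, but the shape it describes is easy to reproduce numerically. The following toy sketch is my own simplification, not Egghe's derivation: if a university's citation counts are assumed exponentially distributed with mean mu, the fraction of its papers above a global top-10% threshold t is exp(-t/mu), and tracing that fraction against mu yields a convex-then-concave (S-like) curve of the kind discussed.

```python
# Toy numerical sketch: proportion of top-10% publications vs. mean citation score
# under an assumed exponential citation distribution. For intuition only,
# not Egghe's actual model.
import numpy as np

mus = np.linspace(0.5, 30.0, 60)  # hypothetical university mean citation scores
t = 15.0                          # hypothetical global top-10% citation threshold

top_share = np.exp(-t / mus)      # P(X > t) when X ~ Exponential(mean=mu)

for mu, share in zip(mus[::10], top_share[::10]):
    print(f"mean citation score = {mu:5.1f}  ->  top-10% share = {share:.3f}")
```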

Understanding Enzyme Structure and Function in Terms of the Shifting Specificity Model

  • Britt, Billy Mark
    • BMB Reports, v.37 no.4, pp.394-401, 2004
  • The purpose of this paper is to suggest that the prominence of Haldane's explanation of enzyme catalysis significantly hinders investigations into enzyme structure and function. This occurs despite the existence of much evidence that the Haldane model cannot accommodate; some of that evidence, in fact, disproves the model. A brief history of explanations of enzyme catalysis is presented. The currently accepted view of enzyme catalysis, the Haldane model, is examined in terms of its strengths and weaknesses. An alternative model of general enzyme catalysis, the Shifting Specificity model, is reintroduced, and an assessment of why it may be superior to the Haldane model is presented. Finally, it is proposed that re-examining many current aspects of enzyme structure and function (specifically protein folding, X-ray and NMR structure analyses, enzyme stability curves, enzyme mimics, catalytic antibodies, and the loose packing of folded enzyme forms) in terms of the new model may offer crucial insights.

Comparison of the Bass Model and the Logistic Model from the Point of the Diffusion Theory (확산이론 관점에서 로지스틱 모형과 Bass 모형의 비교)

  • Hong, Jung-Sik; Koo, Hoon-Young
    • Journal of the Korean Operations Research and Management Science Society, v.37 no.2, pp.113-125, 2012
  • The logistic model and the Bass model appear under diverse names and formulae in diffusion theory. This diversity confuses users and readers, although it also contributes to modeling flexibility. How one handles the integration constant that arises while deriving the closed-form solution of a diffusion model's differential equation results in two different 'actual' models. We rename the resulting four actual models and propose guidelines for choosing among them according to the purpose of the application: explaining a historical diffusion pattern or forecasting future demand. Empirical validation on 86 historical diffusion datasets shows that misusing the models can lead to improper conclusions when explaining historical diffusion patterns.
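For readers unfamiliar with the two model families, the standard closed-form cumulative-adoption curves can be written and fitted side by side. The sketch below fits both to made-up adoption data with scipy.optimize.curve_fit; it shows the common textbook forms, not the four renamed variants proposed in the paper.

```python
# Fitting the standard closed-form Bass and logistic diffusion curves to toy data.
import numpy as np
from scipy.optimize import curve_fit

def bass_cdf(t, m, p, q):
    """Cumulative adopters: m * (1 - e^{-(p+q)t}) / (1 + (q/p) * e^{-(p+q)t})."""
    e = np.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

def logistic_cdf(t, m, r, t0):
    """Cumulative adopters: m / (1 + e^{-r(t - t0)})."""
    return m / (1 + np.exp(-r * (t - t0)))

t = np.arange(1, 16, dtype=float)  # 15 periods of made-up adoption history
y = bass_cdf(t, 1000, 0.03, 0.4) + np.random.default_rng(0).normal(0, 10, t.size)

(bm, bp, bq), _ = curve_fit(bass_cdf, t, y, p0=[900, 0.01, 0.3])
(lm, lr, lt0), _ = curve_fit(logistic_cdf, t, y, p0=[900, 0.5, 8.0])

print(f"Bass fit:     m={bm:.0f}, p={bp:.3f}, q={bq:.3f}")
print(f"Logistic fit: m={lm:.0f}, r={lr:.3f}, t0={lt0:.1f}")
```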

The Concept Understanding of Infinity and Infinite Process and Reflective Abstraction (무한 개념이해 수준의 발달과 반성적 추상)

  • 전명남
    • The Mathematical Education, v.42 no.3, pp.303-325, 2003
  • This study sought to explain university students' understanding of the concepts of infinity and infinite processes, using a psychological constructivist perspective to examine the transitions students make from a static concept of limit to the stage of actualized infinity in problem contexts. Open-ended questions were used to gather data for developing an explanation of student understanding. Forty-seven university students individually answered 16 tasks developed by Petty (1996). A microgenetic method with two cases from the expert-novice perspective was used to develop and substantiate an explanation of students' transitions from a static concept of limit to actualized infinity. The protocols were analyzed to document student conceptions. Cifarelli (1988)'s levels of reflective abstraction and the three-stage concept development model of infinity and infinite processes of Robert (1982) and Sierpinska (1985) provided a framework for this explanation. Students who completed a transition to actualized infinity operated at higher levels of reflective abstraction than students who were unable to complete such a transition. Developing this ability was found to be critical to understanding the concepts of infinity and infinite processes.

A Gradient-Based Explanation Method for Node Classification Using Graph Convolutional Networks

  • Chaehyeon Kim; Hyewon Ryu; Ki Yong Lee
    • Journal of Information Processing Systems, v.19 no.6, pp.803-816, 2023
  • Explainable artificial intelligence explains how a complex model (e.g., a deep neural network) yields its output for a given input. Recently, graph-type data have been widely used in various fields, and diverse graph neural networks (GNNs) have been developed for them. However, methods for explaining the behavior of GNNs have received little study, and only a limited understanding of GNNs is currently available. Therefore, in this paper, we propose an explanation method for node classification using graph convolutional networks (GCNs), a representative type of GNN. The proposed method identifies which features of each node have the greatest influence on that node's classification by backtracking through the layers of the GCN from the output layer to the input layer using gradients. Experimental results on both synthetic and real datasets demonstrate that the proposed method accurately identifies the features of each node that most influence its classification.
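A minimal sketch of the gradient-based idea, assuming a plain two-layer GCN built directly in PyTorch; the tiny graph, layer sizes, and the use of the absolute input gradient as the importance score are illustrative choices, and the paper's exact backtracking procedure may differ.

```python
# Gradient-based feature importance for node classification with a tiny two-layer GCN.
import torch
import torch.nn.functional as F

# Toy graph: 4 nodes, 3 features each (all values illustrative).
A = torch.tensor([[0., 1., 1., 0.],
                  [1., 0., 1., 0.],
                  [1., 1., 0., 1.],
                  [0., 0., 1., 0.]])
X = torch.rand(4, 3, requires_grad=True)

# Symmetrically normalized adjacency with self-loops: D^{-1/2} (A + I) D^{-1/2}.
A_hat = A + torch.eye(4)
d = A_hat.sum(dim=1).pow(-0.5)
A_norm = d.unsqueeze(1) * A_hat * d.unsqueeze(0)

W1 = torch.rand(3, 8)  # layer weights; untrained, for illustration only
W2 = torch.rand(8, 2)  # two output classes

H = F.relu(A_norm @ X @ W1)  # GCN layer 1
logits = A_norm @ H @ W2     # GCN layer 2

node = 2
cls = logits[node].argmax()
logits[node, cls].backward()  # backtrack the class score down to the input features

importance = X.grad[node].abs()  # |d(score)/d(feature)| for the target node
print(f"feature importance for node {node}:", importance)
```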

Domain Knowledge Incorporated Local Rule-based Explanation for ML-based Bankruptcy Prediction Model (머신러닝 기반 부도예측모형에서 로컬영역의 도메인 지식 통합 규칙 기반 설명 방법)

  • Soo Hyun Cho; Kyung-shik Shin
    • Information Systems Review, v.24 no.1, pp.105-123, 2022
  • Thanks to the remarkable success of artificial intelligence (AI) techniques, new possibilities for applying them to real-world problems have emerged. One prominent application is the bankruptcy prediction model, which often serves as a knowledge base for credit scoring models in the financial industry. As a result, there has been extensive research on improving the prediction accuracy of such models. Despite their impressive performance, however, machine learning (ML)-based models are difficult to deploy because of their intrinsic opacity, especially in fields that require or value an explanation of the results the model produces. The financial domain is one area where explanations matter to stakeholders such as domain experts and customers. In this paper, we propose a novel approach that incorporates financial domain knowledge into local rule generation to provide instance-level explanations for a bankruptcy prediction model. The results show that the proposed method successfully selects and classifies the extracted rules based on their feasibility and the information they convey to users.
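As a rough stand-in for the paper's local rule generation (the authors' own algorithm is not reproduced here), the sketch below extracts a local rule for one instance by fitting a shallow decision tree to a perturbed neighborhood of a black-box classifier, LIME-style. The financial feature names and the toy data are hypothetical.

```python
# LIME-style local rule for one instance of a black-box bankruptcy classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
features = ["debt_ratio", "current_ratio", "roa"]  # hypothetical financial ratios

# Toy training data and a black-box model standing in for the paper's ML model.
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 2] + rng.normal(0, 0.3, 500) > 0).astype(int)  # 1 = bankrupt
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Perturb a neighborhood around the instance and label it with the black box.
x0 = X[0]
neighborhood = x0 + rng.normal(0, 0.5, size=(300, 3))
labels = black_box.predict(neighborhood)

# A shallow surrogate tree; its root-to-leaf path for x0 is the local rule.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0).fit(neighborhood, labels)
tree, node, rule = surrogate.tree_, 0, []
while tree.children_left[node] != -1:  # -1 marks a leaf
    f, thr = tree.feature[node], tree.threshold[node]
    if x0[f] <= thr:
        rule.append(f"{features[f]} <= {thr:.2f}")
        node = tree.children_left[node]
    else:
        rule.append(f"{features[f]} > {thr:.2f}")
        node = tree.children_right[node]

# A domain-knowledge step (as in the paper) would now filter or re-rank rules,
# e.g. discarding rules that contradict known monotonic relations.
print("local rule:", " AND ".join(rule))
```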

Development of Individualization Wrong Answer Note Model Using Collective Intelligence (집단지성을 이용한 개별화 오답노트 모형 개발)

  • Ha, Jin-Seok; Kim, Chang-Suk
    • Journal of the Korean Institute of Intelligent Systems, v.19 no.2, pp.218-223, 2009
  • This paper develops an individualized wrong-answer note model. The proposed method uses collective intelligence: it analyzes a learner's wrong answers and then refers to the wrong-answer explanation notes of other learners whose wrong-answer patterns are similar. As a result, a learner can organize not only an explanation of the correct answer but also the process by which the wrong answer arose. The proposed method thus offers a way to improve existing wrong-answer note systems.
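The matching mechanism the abstract describes resembles nearest-neighbor collaborative filtering over wrong-answer patterns. A minimal sketch with made-up answer data (the paper's actual similarity measure is not specified):

```python
# Nearest-neighbor matching of wrong-answer patterns (collaborative-filtering sketch).
import numpy as np

# Rows = learners, columns = questions; 1 marks a wrong answer (toy data).
wrong = np.array([[1, 0, 1, 1, 0],
                  [1, 0, 1, 0, 0],
                  [0, 1, 0, 0, 1],
                  [1, 1, 1, 1, 0]])

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

target = 0  # the learner whose wrong-answer note we are building
sims = [cosine(wrong[target], wrong[i]) if i != target else -1.0
        for i in range(len(wrong))]
peer = int(np.argmax(sims))

# Borrow the peer's explanation notes for the questions both answered wrong.
shared = np.where((wrong[target] == 1) & (wrong[peer] == 1))[0]
print(f"most similar learner: {peer}; borrow notes for questions {shared.tolist()}")
```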

Visual Explanation of Black-box Models Using Layer-wise Class Activation Maps from Approximating Neural Networks (신경망 근사에 의한 다중 레이어의 클래스 활성화 맵을 이용한 블랙박스 모델의 시각적 설명 기법)

  • Kang, JuneGyu; Jeon, MinGyeong; Lee, HyeonSeok; Kim, Sungchan
    • IEMEK Journal of Embedded Systems and Applications, v.16 no.4, pp.145-151, 2021
  • In this paper, we propose a novel visualization technique for explaining the predictions of deep neural networks. We use knowledge distillation (KD) to approximate the interior of a black-box model for which only inputs and outputs are known: the black-box model's information is transferred through KD to a white-box model that learns the black-box model's representation. The white-box model then generates attention maps for each of its layers using Grad-CAM, and we combine the attention maps of the different layers by pixel-wise summation to produce a final saliency map containing information from all layers of the model. Experiments show that the proposed technique identifies important layers and explains which parts of the input are important. Saliency maps generated by the proposed technique outperform those of Grad-CAM in the deletion game.
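A condensed sketch of the two stages in PyTorch; the tiny CNN, the distillation temperature, the random stand-in for the black box, and the upsampling choices are all illustrative, since the paper's architectures and training details are not reproduced here.

```python
# Sketch: (1) distill a black box into a white-box CNN, (2) sum Grad-CAM maps per layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WhiteBox(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 8, 3, padding=1)
        self.conv2 = nn.Conv2d(8, 16, 3, padding=1)
        self.head = nn.Linear(16, 10)

    def forward(self, x):
        a1 = F.relu(self.conv1(x))                    # layer-1 activations
        a2 = F.relu(self.conv2(F.max_pool2d(a1, 2)))  # layer-2 activations
        return self.head(a2.mean(dim=(2, 3))), [a1, a2]

black_box = lambda x: torch.randn(x.size(0), 10)  # stand-in for the unknown model
white = WhiteBox()
opt = torch.optim.Adam(white.parameters(), lr=1e-3)

# (1) Knowledge distillation: match the white box's soft outputs to the black box's.
T = 4.0  # temperature
for _ in range(10):  # toy loop; real training iterates over a dataset
    x = torch.rand(8, 3, 32, 32)
    with torch.no_grad():
        teacher = F.softmax(black_box(x) / T, dim=1)
    student_logits, _ = white(x)
    loss = F.kl_div(F.log_softmax(student_logits / T, dim=1), teacher,
                    reduction="batchmean") * T * T
    opt.zero_grad()
    loss.backward()
    opt.step()

# (2) Grad-CAM for every layer, upsampled and summed pixel-wise into one saliency map.
x = torch.rand(1, 3, 32, 32)
logits, acts = white(x)
for a in acts:
    a.retain_grad()  # keep gradients of intermediate activations
logits[0, logits[0].argmax()].backward()

saliency = torch.zeros(32, 32)
for a in acts:
    w = a.grad.mean(dim=(2, 3), keepdim=True)       # channel weights: GAP of gradients
    cam = F.relu((w * a).sum(dim=1, keepdim=True))  # weighted, rectified activation map
    cam = F.interpolate(cam, size=(32, 32), mode="bilinear", align_corners=False)
    saliency += cam[0, 0].detach()
print("combined saliency map:", saliency.shape)
```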