• Title/Summary/Keyword: mathematical models of cognitive system

Search Results: 6

The Mathematical Foundations of Cognitive Science (인지과학의 수학적 기틀)

  • Hyun, Woo-Sik
    • Journal for History of Mathematics / v.22 no.3 / pp.31-44 / 2009
  • Anyone wishing to understand cognitive science, a converging science, needs to become familiar with three major mathematical landmarks: Turing machines, neural networks, and Gödel's incompleteness theorems. The present paper aims to explore the mathematical foundations of cognitive science, focusing especially on these historical landmarks. We begin by considering cognitive science as a metamathematics. The following parts address two mathematical models of cognitive systems: Turing machines as the computer system and neural networks as the brain system. The last part investigates Gödel's achievements in cognitive science and their implications for the future of cognitive science.

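The "computer system" model this abstract refers to can be sketched as a minimal deterministic Turing machine. The machine below (its states, alphabet, and unary-increment task) is a hypothetical illustration, not one taken from the paper:

```python
# Minimal single-tape deterministic Turing machine, illustrating the
# "Turing machine as computer system" model. States, symbols, and the
# unary-increment task are invented for this sketch.

def run_turing_machine(tape, transitions, start, accept, blank="_"):
    """Run the machine until it reaches the accept state; return the tape."""
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    state, head = start, 0
    while state != accept:
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    # Read the tape back left to right, trimming surrounding blanks.
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Append one '1' to a unary string: scan right over 1s, write 1 at the blank.
delta = {
    ("scan", "1"): ("scan", "1", "R"),
    ("scan", "_"): ("done", "1", "R"),
}
print(run_turing_machine("111", delta, start="scan", accept="done"))  # 1111
```

The transition table is the whole "program"; everything else is the fixed machine model, which is what makes the formalism a useful mathematical reference point for computation.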

Optimal Cognitive System Modeling Using the Stimulus-Response Matrix (자극-반응 행렬을 이용한 인지 시스템 최적화 모델)

  • Choe, Gyeong-Hyeon; Park, Min-Yong; Im, Eun-Yeong
    • Journal of the Ergonomics Society of Korea / v.19 no.1 / pp.11-22 / 2000
  • In this research report, we present several optimization models for cognitive systems using the stimulus-response matrix (S-R matrix). Stimulus-response matrices are widely used for tabulating results from experiments and for designing cognition systems in which the recognition and confusability of stimuli are of interest. This paper analyzes the corresponding optimization/mathematical-programming models. The weaknesses and restrictions of existing models are resolved by a generalization that considers the average confusion of each subset of stimuli. Clustering strategies are also used in the extended model to obtain cluster centers with minimal confusion, as well as the character of each cluster.

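The average-confusion idea in this abstract can be sketched directly from an S-R matrix. The 4-stimulus matrix below is made up for illustration; the paper's actual models and data are not reproduced here:

```python
# Hypothetical 4-stimulus confusion matrix: entry [i][j] is the
# probability that stimulus i evokes response j (rows sum to 1).
sr = [
    [0.70, 0.20, 0.05, 0.05],
    [0.25, 0.65, 0.05, 0.05],
    [0.05, 0.05, 0.60, 0.30],
    [0.05, 0.05, 0.35, 0.55],
]

def average_confusion(matrix, subset):
    """Mean off-diagonal (confusion) mass among a subset of stimuli."""
    pairs = [(i, j) for i in subset for j in subset if i != j]
    return sum(matrix[i][j] for i, j in pairs) / len(pairs)

# Stimuli {0, 1} confuse each other far more than {0, 2} do, so a
# clustering that minimizes within-cluster confusion would separate them.
print(average_confusion(sr, {0, 1}))   # 0.225
print(average_confusion(sr, {0, 2}))   # 0.05
```

A clustering objective in the spirit of the extended model would then search over partitions of the stimulus set for the one minimizing such subset averages.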

The Effects of Tasks Setting for Mathematical Modelling in the Complex Real Situation (실세계 상황에서 수학적 모델링 과제설정 효과)

  • Shin, Hyun-Sung; Lee, Myeong-Hwa
    • Journal of the Korean School Mathematics Society / v.14 no.4 / pp.423-442 / 2011
  • The purpose of this study was to examine the effects of task settings for mathematical modelling in complex real-world situations. The task settings (MMa, MeA) in mathematical modelling are so important for developing meaning and integrating mathematical ideas that their effects cannot be ignored. The experimental settings were two groups ($N_1=103$, $N_2=103$) at a public high school, and the non-experimental setting was one group ($N_3=103$). In mathematical achievement, we found meaningful improvement for the MeA group on modelling tasks, but no meaningful effect on information-processing tasks; the statistical method used was ANCOVA. Besides achievement, we were much concerned with the modelling approach that TSG21 had suggested in the category "Educational & cognitive modelling". Subjects involved in the experimental work showed a very interesting approach: exploration and analysis of the situation $\Rightarrow$ mathematical questions $\Rightarrow$ setting models $\Rightarrow$ problem solution $\Rightarrow$ extension and generalization; however, the MeA group spent a lot of time on the exploration-and-analysis step, and the MMa group on the model-setting step. Both groups actively integrated many of the heuristics that Schoenfeld defined; in particular, Drawing and Modified Simple Strategy were the most powerful in approach steps 1-3. It was very encouraging that the experimental settings improved mathematical belief and interest more than the non-experimental setting. In our school system, teaching mathematical modelling could be an answer to what kind of educational action or environment we should provide for students; that is, mathematical learning.


Connectivity Analysis of Cognitive Radio Ad-hoc Networks with Shadow Fading

  • Dung, Le The; An, Beongku
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.9 / pp.3335-3356 / 2015
  • In this paper, we analyze the connectivity of cognitive radio ad-hoc networks in a log-normal shadow fading environment. Assuming that secondary users' and primary users' locations and primary users' active states are randomly distributed according to a homogeneous Poisson process, and taking into account the spectrum sensing efficiency of secondary users, we derive mathematical models to investigate the connectivity of cognitive radio ad-hoc networks in three aspects and compare it with the connectivity of ad-hoc networks. First, from the viewpoint of a secondary user, we study that user's communication probability. Second, we examine the possibility that two secondary users can establish a direct communication link between them. Finally, we extend to the case of finding the probability that two arbitrary secondary users can communicate via a multi-hop path. We verify the correctness of our analytical approach by comparing with simulations. The numerical results show that in cognitive radio ad-hoc networks, high fading variance helps to remarkably improve connectivity behavior under the same conditions of secondary-user density and primary users' average active rate. Furthermore, the impact of shadowing on wireless connection probability dominates that of the primary users' average active rate. Finally, the spectrum sensing efficiency of secondary users significantly impacts the connectivity features. The analysis in this paper provides an efficient way for system designers to characterize and optimize the connectivity of cognitive radio ad-hoc networks in practical wireless environments.
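The reported effect of shadowing variance on connectivity can be illustrated with a Monte Carlo sketch of a single secondary-user link under log-normal shadowing. The path-loss exponent, shadowing deviations, transmit power, and threshold below are illustrative values, not the paper's parameters:

```python
# Monte Carlo estimate of the direct-link probability between two
# secondary users under log-normal shadowing. All numeric parameters
# are invented for illustration.
import math
import random

def link_probability(d, trials=200_000, alpha=3.0, sigma_db=6.0,
                     pt_db=0.0, thresh_db=-30.0, seed=1):
    """Fraction of trials where shadowed received power exceeds threshold."""
    rng = random.Random(seed)
    path_loss_db = 10 * alpha * math.log10(d)      # deterministic path loss
    ok = 0
    for _ in range(trials):
        shadow = rng.gauss(0.0, sigma_db)          # log-normal shadowing, in dB
        if pt_db - path_loss_db + shadow > thresh_db:
            ok += 1
    return ok / trials

# For a marginal link (mean received power below threshold), a larger
# shadowing variance lifts the upper tail and raises the connection
# probability, consistent with the abstract's observation.
for sigma in (2.0, 6.0, 10.0):
    print(sigma, round(link_probability(d=15.0, sigma_db=sigma), 3))
```

The same tail-lifting mechanism is what makes high fading variance help marginal links, even though it also occasionally breaks links that would otherwise be solid.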

Analysis of SNE Learner's Performance Using NASA Scaling

  • Naveen, A.; Babu, Sangita
    • Journal of the Korea Convergence Society / v.5 no.3 / pp.45-51 / 2014
  • Computer science and computing technologies are applied to mathematical, scientific, medical, engineering, and educational applications, and models are used to solve issues in all of these domains. Educational systems use top-down, bottom-up, and gap-analysis models in the educational learning system. The educational learning process integrates the learner, the content, and the methodology. The learners and content are the same across an educational system or similar courses, but the teaching methodologies differ from one another. The determination of a teaching methodology is based on factors related to that particular model or subject. Decisions that shape the learning model are made through surveys, analysis, and observation of data so as to maximize the learning outcome. This paper attempts to evaluate SNE learners' cognitive workload using NASA scaling.
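Assuming "NASA scaling" refers to the NASA-TLX workload instrument, the weighted-workload computation can be sketched as follows; the subscale ratings and pairwise-comparison tallies below are invented for illustration, not the paper's data:

```python
# NASA-TLX weighted workload for one (hypothetical) learner.
RATINGS = {   # 0-100 rating per subscale
    "mental": 70, "physical": 20, "temporal": 55,
    "performance": 40, "effort": 65, "frustration": 35,
}
WEIGHTS = {   # times each subscale was chosen in the 15 pairwise comparisons
    "mental": 5, "physical": 0, "temporal": 3,
    "performance": 2, "effort": 4, "frustration": 1,
}

def tlx_score(ratings, weights):
    """Weighted workload: comparison counts across subscales sum to 15."""
    assert sum(weights.values()) == 15
    return sum(ratings[k] * weights[k] for k in ratings) / 15

print(round(tlx_score(RATINGS, WEIGHTS), 2))  # 59.33
```

Comparing such scores across teaching methodologies is one concrete way the survey-and-observation analysis described above could be quantified.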

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin; Ahn, SungMahn; Yang, Jiheon; Lee, Jaejoon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.1-17 / 2017
  • The deep learning framework is software designed to help develop deep learning models. Some of its important functions include "automatic differentiation" and "utilization of GPUs". The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, Microsoft Cognitive Toolkit (CNTK), was released under an open-source license, following Google's Tensorflow a year earlier. Early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in some sense a predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Basically, all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained. With these partial derivatives, the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First of all, the convenience of coding is in the order of CNTK, Tensorflow, and Theano. This criterion is based simply on the lengths of the codes; the learning curve and the ease of coding are not the main concern.
According to these criteria, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason that CNTK and Tensorflow are easier to implement with is that these frameworks provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives us flexibility. With low-level coding such as in Theano, we can implement and test any new deep learning models or search methods that we can think of. Our assessment of the execution speed of each framework is that there is no meaningful difference. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept the same: the CNTK code had to be run in a PC environment without a GPU, where code executes as much as 50 times slower than with a GPU. We concluded, however, that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNNs, RNNs, and DBNs. If a user implements a large-scale deep learning model, support for multiple GPUs or multiple servers will also be important. And for someone learning deep learning models, it matters whether there are enough examples and references.
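The computational-graph mechanism this abstract describes, where each edge carries a partial derivative and the chain rule propagates gradients backward through the graph, can be sketched with a toy reverse-mode autodiff class. This `Value` class is our own illustration, not the API of any of the three frameworks:

```python
# Toy reverse-mode automatic differentiation over a computational graph.
class Value:
    def __init__(self, data, parents=()):
        self.data, self.grad, self._parents = data, 0.0, parents
        self._backward = lambda: None

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def _backward():               # d(out)/d(self) = d(out)/d(other) = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def _backward():               # product rule: each edge carries the
            self.grad += other.data * out.grad   # other factor's value
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically order the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

# f(x, y) = x*y + x  =>  df/dx = y + 1, df/dy = x
x, y = Value(2.0), Value(3.0)
f = x * y + x
f.backward()
print(f.data, x.grad, y.grad)  # 8.0 4.0 2.0
```

Each framework compared in the paper implements this same idea at scale: the user writes the forward expression, and gradients for every parameter come from traversing the recorded graph, so only the level of abstraction around graph construction differs.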