• Title/Summary/Keyword: faculty of language

Modularity and Modality in ‘Second’ Language Learning: The Case of a Polyglot Savant

  • Smith, Neil
    • Korean Journal of English Language and Linguistics / v.3 no.3 / pp.411-426 / 2003
  • I report on the case of a polyglot ‘savant’ (C), who is mildly autistic, severely apraxic, and of limited intellectual ability; yet who can read, write, speak and understand about twenty languages. I outline his abilities, both verbal and non-verbal, noting the asymmetry between his linguistic ability and his general intellectual inability and, within the former, between his unlimited morphological and lexical prowess as opposed to his limited syntax. I then spell out the implications of these findings for modularity. C's unique profile suggested a further project in which we taught him British Sign Language. I report on this work, paying particular attention to the learning and use of classifiers, and discuss its relevance to the issue of modality: whether the human language faculty is preferentially tied to the oral domain, or is ‘modality-neutral’ as between the spoken and the visual modes.

Contextual Modeling in Context-Aware Conversation Systems

  • Quoc-Dai Luong Tran;Dinh-Hong Vu;Anh-Cuong Le;Ashwin Ittoo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.5 / pp.1396-1412 / 2023
  • Conversation modeling is an important and challenging task in the field of natural language processing because it is a key component promoting the development of automated human-machine conversation. Most recent research on conversation modeling focuses only on the current utterance (considered as the current question) to generate a response, and thus fails to capture the conversation's logic from its beginning. Some studies concatenate the current question with previous conversation sentences and use the result as input for response generation. Another approach is to use an encoder to store all previous utterances; each time a new question is encountered, the encoder is updated and used to generate the response. Our approach in this paper differs from previous studies in that we explicitly separate the encoding of the question from the encoding of its context. This results in different encoding models for the question and the context, each capturing the specificity of its input, so that the entire context is available when generating the response. To this end, we propose a deep neural network-based model, called the Context Model, to encode previous utterances' information and combine it with the current question. This satisfies the need for context information while keeping the roles of the current question and its context distinct during response generation. We investigate two approaches for representing the context: long short-term memory (LSTM) and convolutional neural networks (CNN). Experiments show that our Context Model outperforms a baseline model on both the ConvAI2 dataset and a collected dataset of conversational English.
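
The separation of question and context encoders described in this abstract can be pictured with a short PyTorch sketch. This is an illustrative assumption, not the authors' code: the class name ContextModel follows the abstract, but the LSTM encoders, the embedding and hidden sizes, and the fusion layer are hypothetical choices for one plausible realisation.

```python
# Minimal sketch (not the authors' implementation): a question encoder and a
# separate context encoder whose summaries are fused before response generation.
import torch
import torch.nn as nn

class ContextModel(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Encoder for the current question
        self.question_enc = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        # Separate encoder for the previous utterances (the context)
        self.context_enc = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        # Fuse the two summaries into a single vector for a response decoder
        self.fuse = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, question_ids, context_ids):
        _, (q_h, _) = self.question_enc(self.embed(question_ids))
        _, (c_h, _) = self.context_enc(self.embed(context_ids))
        combined = torch.cat([q_h[-1], c_h[-1]], dim=-1)
        return torch.tanh(self.fuse(combined))  # would feed a response decoder

# Usage with dummy token ids (batch of 2, question length 10, context length 40)
model = ContextModel(vocab_size=5000)
q = torch.randint(0, 5000, (2, 10))
ctx = torch.randint(0, 5000, (2, 40))
print(model(q, ctx).shape)  # torch.Size([2, 256])
```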

Efficient Sign Language Recognition and Classification Using African Buffalo Optimization Using Support Vector Machine System

  • Karthikeyan M. P.;Vu Cao Lam;Dac-Nhuong Le
    • International Journal of Computer Science & Network Security / v.24 no.6 / pp.8-16 / 2024
  • Communication with the deaf has always been crucial. Deaf and hard-of-hearing persons can now express their thoughts and opinions to teachers through sign language, which has become a universal language and a very effective tool; this helps to improve their education and facilitates and simplifies the referral procedure between them and the teachers. Sign language uses various bodily movements, including those of the arms, legs, and face. Pure expressiveness, proximity, and shared interests are examples of nonverbal physical communication distinct from gestures that convey a particular message, and the meanings of gestures vary considerably with social or cultural background. Sign language recognition is a highly active research area in which the SVM has shown value; difficulties encountered by SVMs in a number of settings have encouraged the development of numerous variants, such as SVMs for very large data sets, multi-class SVMs, and SVMs for unbalanced data sets. Without precise recognition of the signs, the right measures cannot be applied when they are needed. Image processing is one of the methods frequently utilized for the identification and categorization of sign languages. In this work, an African Buffalo Optimization with Support Vector Machine (ABO+SVM) classification technique is used to help identify and categorize people's sign languages. Segmentation by K-means clustering first identifies the sign region, after which color and texture features are extracted. The accuracy, sensitivity, precision, specificity, and F1-score of the proposed ABO+SVM system are validated against the existing classifiers SVM, CNN, and PSO+ANN.
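
A minimal Python sketch of the processing chain named in the abstract (K-means segmentation, colour/texture features, SVM classification) is given below. It is a hedged illustration under assumptions: the feature definitions and the dummy images are invented, and the African Buffalo Optimization step is stood in for by an ordinary grid search over the SVM hyperparameters.

```python
# Illustrative sketch, not the authors' code: K-means isolates the sign region,
# simple colour-histogram and texture statistics serve as features, and an SVM
# classifies the sign.  A plain grid search stands in for ABO-based tuning.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def segment_sign_region(image):
    """Cluster pixels into two groups and keep the smaller (assumed hand) one."""
    pixels = image.reshape(-1, 3).astype(float)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
    hand_label = np.argmin(np.bincount(labels))
    return (labels == hand_label).reshape(image.shape[:2])

def extract_features(image, mask):
    """Colour histogram of the masked region plus crude texture statistics."""
    region = image[mask] if mask.any() else image.reshape(-1, 3)
    hist = np.concatenate([np.histogram(region[:, c], bins=8, range=(0, 255))[0]
                           for c in range(3)]).astype(float)
    hist /= hist.sum()
    gray = image.mean(axis=2)
    texture = [gray.std(),
               np.abs(np.diff(gray, axis=0)).mean(),
               np.abs(np.diff(gray, axis=1)).mean()]
    return np.concatenate([hist, texture])

# Dummy data: 40 random 32x32 RGB "sign" images, 4 classes of 10 images each.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(40, 32, 32, 3))
y = np.repeat(np.arange(4), 10)
X = np.array([extract_features(img, segment_sign_region(img)) for img in images])

# Grid search over (C, gamma) stands in for ABO-based hyperparameter tuning.
clf = GridSearchCV(SVC(), {"C": [1, 10], "gamma": ["scale", 0.01]}, cv=3)
clf.fit(X, y)
print(clf.best_params_, clf.best_score_)
```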

Stock Reaction to the Implementation of Extensible Business Reporting Language

  • JUNUS, Onong;IRWANTO, Andry
    • The Journal of Asian Finance, Economics and Business / v.8 no.1 / pp.675-685 / 2021
  • The purpose of this study is to examine the reaction of stock prices to the implementation of Extensible Business Reporting Language (XBRL) in companies listed on the Indonesia Stock Exchange (IDX). Using the event study method and calculating abnormal returns around the 2015 financial statements of 462 companies listed on the IDX, the findings show that 49 companies had not applied the XBRL format in their financial statements. Based on the Average Abnormal Return (AAR) and Cumulative Average Abnormal Return (CAAR) values, one-sample tests show that investors react to the shares of both companies that have implemented XBRL and those that have not; however, an independent t-test on the average values shows differences between the two groups. This research looks only at one year of XBRL implementation in financial reporting (2015), and it does not separate companies by whether they delivered their financial statements to the public on time through the IDX website. Our research contributes to the understanding of the use of XBRL in corporate financial reporting because, before the XBRL reporting format was introduced, companies published financial statements in a format based on the legal provisions for financial statements in Indonesia.
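
The AAR and CAAR figures referred to in the abstract can be reproduced in outline with a short Python example. This is a sketch under assumptions (synthetic returns, an 11-day event window, a market-adjusted model), not the authors' dataset or exact specification.

```python
# Illustrative sketch of the event-study quantities named above (abnormal
# returns, AAR, CAAR) under a simple market-adjusted model.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
n_firms, window = 50, 11                                   # event window: day -5 .. +5
stock_ret = rng.normal(0.001, 0.02, (n_firms, window))     # synthetic firm returns
market_ret = rng.normal(0.001, 0.015, window)              # synthetic market returns

ar = stock_ret - market_ret        # abnormal return per firm and event day
aar = ar.mean(axis=0)              # Average Abnormal Return (AAR) per day
caar = aar.cumsum()                # Cumulative Average Abnormal Return (CAAR)

# One-sample t-test: is the AAR on each event day different from zero?
t_stat, p_val = stats.ttest_1samp(ar, 0.0, axis=0)

summary = pd.DataFrame({"day": range(-5, 6), "AAR": aar, "CAAR": caar,
                        "t": t_stat, "p": p_val})
print(summary.round(4))

# An independent t-test between adopters and non-adopters would compare the
# firm-level cumulative abnormal returns of the two groups, e.g.
# stats.ttest_ind(car_adopters, car_non_adopters).
```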

Web-Based Question Bank System using Artificial Intelligence and Natural Language Processing

  • Ahd, Aljarf;Eman Noor, Al-Islam;Kawther, Al-shamrani;Nada, Al-Sufyini;Shatha Tariq, Bugis;Aisha, Sharif
    • International Journal of Computer Science & Network Security / v.22 no.12 / pp.132-138 / 2022
  • Due to the impacts of the COVID-19 pandemic and the continuation of online study, there is an urgent need for an effective and efficient education platform to support the continuity of studying online. Therefore, the question bank (QB) system is introduced. The QB system is designed as a website that provides a single platform for university faculty members to generate questions and store them in a bank of questions; it also allows them to add two types of questions, helps lecturers create exams, and presents the students' results to them. For the implementation, two languages, PHP and Python, were combined to generate questions using Artificial Intelligence (AI). These questions are stored in a single database and can then be viewed and included in exams smoothly and without complexity. This paper aims to help faculty members reduce time and effort through the Question Bank System, which uses AI and Natural Language Processing (NLP) to extract and generate questions from a given text, and it describes the tools used to build this function, such as NLTK and TextBlob.
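
For the question-generation step mentioned in the abstract, a minimal Python sketch with TextBlob is shown below. It assumes a gap-fill heuristic that blanks out noun phrases; the function name and the heuristic are hypothetical and are not taken from the paper.

```python
# Illustrative sketch only: one simple way to generate fill-in-the-blank
# questions from text with TextBlob, in the spirit of the NLP component
# described above.  Requires `pip install textblob` and
# `python -m textblob.download_corpora`.
import re
from textblob import TextBlob

def generate_gap_questions(text):
    """Blank out noun phrases to turn sentences into gap-fill questions."""
    questions = []
    for sentence in TextBlob(text).sentences:
        for phrase in sentence.noun_phrases:
            # TextBlob returns noun phrases lowercased, so match case-insensitively.
            pattern = re.compile(re.escape(str(phrase)), re.IGNORECASE)
            blanked, n = pattern.subn("_____", str(sentence), count=1)
            if n:
                questions.append({"question": blanked, "answer": str(phrase)})
    return questions

sample = ("Natural language processing studies the interaction between "
          "computers and human language. The question bank stores the "
          "generated questions in a single database.")
for item in generate_gap_questions(sample):
    print(item["question"], "->", item["answer"])
```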

Protein Ontology: Semantic Data Integration in Proteomics

  • Sidhu, Amandeep S.;Dillon, Tharam S.;Chang, Elizabeth;Sidhu, Baldev S.
    • Proceedings of the Korean Society for Bioinformatics Conference / 2005.09a / pp.388-391 / 2005
  • Protein structural and functional conservation needs a common language for data definition. With the common language provided by the Protein Ontology, the high level of sequence and functional conservation can be extended to all organisms, with the likelihood that proteins carrying out core biological processes will again be probable orthologues. The structural and functional conservation in these proteins presents both opportunities and challenges. The main opportunity lies in the possibility of automated transfer of protein data annotations from experimentally tractable model organisms to less tractable organisms based on protein sequence similarity. Such information can be used to improve human health or agriculture. The challenge lies in using a common language to transfer protein data annotations among different species of organisms. The first step in meeting this challenge is producing a structured, precisely defined common vocabulary using the Protein Ontology. The Protein Ontology described in this paper covers the sequence, structure, and biological roles of protein complexes in any organism.

University Faculty's Perspectives on Implementing ChatGPT in their Teaching

  • Pyong Ho Kim;Ji Won Yoon;Hye Yoon Kim
    • International Journal of Advanced Culture Technology / v.11 no.4 / pp.56-61 / 2023
  • The present study conducted a comprehensive investigation of university professors' perspectives on the implementation of ChatGPT, an artificial intelligence-powered language model, in their teaching practices. A diverse group of 30 university professors responded to a questionnaire about their level of interest in implementing the tool, their willingness to apply it, and their concerns regarding the introduction of ChatGPT in higher education settings. The results showed that the participants are highly interested in employing the tool in their teaching practice and find that students are likely to benefit from using ChatGPT in classroom settings. On the other hand, they expressed concerns regarding high dependency on data, privacy-related issues, lack of required support, and technical constraints. In today's fast-paced society, educators are urged to apply this inevitable generative AI tool mindfully, with thoughtfulness and ethical consideration for their learners. Relevant topics are discussed to support the successful integration of AI tools into teaching practices in higher education.

Is ChatGPT a "Fire of Prometheus" for Non-Native English-Speaking Researchers in Academic Writing?

  • Sung Il Hwang;Joon Seo Lim;Ro Woon Lee;Yusuke Matsui;Toshihiro Iguchi;Takao Hiraki;Hyungwoo Ahn
    • Korean Journal of Radiology / v.24 no.10 / pp.952-959 / 2023
  • Large language models (LLMs) such as ChatGPT have garnered considerable interest for their potential to aid non-native English-speaking researchers. These models can function as personal, round-the-clock English tutors, akin to how Prometheus in Greek mythology bestowed fire upon humans for their advancement. LLMs can be particularly helpful for non-native researchers in writing the Introduction and Discussion sections of manuscripts, where they often encounter challenges. However, using LLMs to generate text for research manuscripts entails concerns such as hallucination, plagiarism, and privacy issues; to mitigate these risks, authors should verify the accuracy of generated content, employ text similarity detectors, and avoid inputting sensitive information into their prompts. Consequently, it may be more prudent to utilize LLMs for editing and refining text rather than generating large portions of text. Journal policies concerning the use of LLMs vary, but transparency in disclosing artificial intelligence tool usage is emphasized. This paper aims to summarize how LLMs can lower the barrier to academic writing in English, enabling researchers to concentrate on domain-specific research, provided they are used responsibly and cautiously.