• Title/Summary/Keyword: Language Network

Search Results: 1,226

Innovative Technology of Teaching Moodle in Higher Pedagogical Education: from Theory to Practice

  • Iryna, Rodionova;Serhii, Petrenko;Nataliia, Hoha;Kushevska, Natalia;Tetiana, Siroshtan
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.8
    • /
    • pp.153-162
    • /
    • 2022
  • Relevance. Innovative activities in education should be aimed at ensuring the comprehensive development of the individual and the professional development of students. The main idea of modular technology is that the student should learn by himself, while the teacher manages his learning activities. The advantage of modular technology is the teacher's ability to design the study of the material in the forms that are most interesting and accessible for a given part of the study group and at the same time achieve the best learning results. The innovative Moodle technology is gaining popularity every day, significantly expanding the space of teaching and learning and allowing students to study inter-faculty university programs in depth. The purpose of this study is to assess the quality of implementation of the e-learning system Moodle. The study was conducted at the South Ukrainian National Pedagogical University named after K. D. Ushinsky in order to identify barriers to the effective implementation of the innovative distance learning technology Moodle and to introduce a new model that will have a positive impact on the development of e-learning. Methodology. The paper used a combination of theoretical and empirical research methods, including scientific analysis of sources on this issue, which allowed us to formulate the initial provisions of the study; analysis of the results of students' educational activities; a pedagogical experiment; questionnaires; and monitoring of students' activities in practical classes. Results. This article evaluates the implementation of the principles of distance learning in the process of teaching and learning at the university in terms of quality. The experiment involved 1,250 students studying at the South Ukrainian National Pedagogical University named after K. D. Ushinsky. The survey helped to identify the main barriers to the effective implementation of modern distance learning technologies in the educational process of the university: the lack of readiness of teachers and parents, the lack of necessary skills in applying computer systems of online learning, the inability to interact with the teaching staff, and the lack of a sufficient number of academic consultants online. In addition, internal problems are investigated: limited resources, unevenly distributed marketing advantages, an inappropriate administrative structure, and a lack of innovative physical capabilities. The article shows how these problems can be solved by gradually implementing a distance learning model that is suitable for any university, regardless of its specialization. The Moodle-based e-learning system proposed in this paper was designed to eliminate the identified barriers. Models for implementing distance learning in the learning process were built according to the CAPDM methodology, which helps universities and other educational service providers develop and manage world-class online distance learning programs. Prospects for further research focus on evaluating students' knowledge and abilities over the six months following the introduction of the proposed Moodle-based program.

The Role of Tolerance to Promote the Improving the Quality of Training the Specialists in the Information Society

  • Oleksandr, Makarenko;Inna, Levenok;Valentyna, Shakhrai;Liudmyla, Koval;Tetiana, Tyulpa;Andrii, Shevchuk;Olena, Bida
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.12
    • /
    • pp.63-70
    • /
    • 2022
  • The essence of the definition of "tolerance" is analyzed. Motivational, knowledge, and behavioral criteria for the tolerance of future teachers are highlighted. Indicators of the motivational criterion are the formation of value orientations, motivational orientation, and the development of empathy. Originality and productivity of thoughts and judgments, tact in dialogue, and pedagogical ethics and tact are confirmed as indicators of the knowledge criterion. The behavioral criterion includes social activity as a life position, emotional and volitional endurance, and self-control of one's own position. The formation of tolerance is influenced by a number of factors: the social environment, the information society, existing stereotypes and ideas in society, the system of education and relationships between people, and the system of values. The main factors that contribute to the education of tolerance in future teachers are highlighted. Analyzing the structure of tolerance, it is necessary to distinguish the following functions of tolerance: motivational (determines the composition and strength of motivation for social activity and behavior and promotes the development of life experience, because it allows the individual to accept other points of view and visions of the solution); informational (understanding the situation and the personality of another person); regulatory (tolerance has a close connection with the strong-willed qualities of a person: endurance, self-control, and self-regulation, which were formed in the process of education); and adaptive (allows the individual to develop, in the process of joint activity, a positive and emotionally stable attitude to the activity itself, which the individual carries out, and to the object and subject of joint relations). The implementation of pedagogical functions in the information society (educational, organizational, predictive, informational, communicative, controlling, etc.) provides grounds to consider pedagogical tolerance as an integrative personal quality of a representative of any profession in the field of "person-person". The positions that should become conditions for the formation of tolerance of the future teacher in the information society are listed.

A Study on Deep Learning Model for Discrimination of Illegal Financial Advertisements on the Internet

  • Kil-Sang Yoo;Jin-Hee Jang;Seong-Ju Kim;Kwang-Yong Gim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.8
    • /
    • pp.21-30
    • /
    • 2023
  • The study proposes a model that utilizes Python-based deep learning text classification techniques to detect illegal financial advertising posts on the internet. These posts aim to promote unlawful financial activities, including the trading of bank accounts, credit card fraud, cashing out through mobile payments, and the sale of personal credit information. Despite the efforts of financial regulatory authorities, illegal financial activities remain prevalent. The proposed model is intended to aid in identifying and detecting illicit content in internet-based illegal financial advertising, thus contributing to the ongoing efforts to combat such activities. The study utilizes convolutional neural networks (CNN) and recurrent neural networks (RNN, LSTM, GRU), which are commonly used text classification techniques. The raw data for the model are based on manually confirmed regulatory judgments. By adjusting the hyperparameters of the Korean natural language processing and deep learning models, the study arrives at an optimized model with the best performance. This research is significant in that it presents a deep learning model for discerning illegal financial advertising on the internet, which has not been explored previously. Additionally, with accuracies ranging from 91.3% to 93.4%, the model shows promise for practical application to the task of detecting illicit financial advertisements, ultimately contributing to the eradication of such unlawful advertisements.
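To make the approach concrete, the following is a minimal sketch of a CNN text classifier of the kind described in the abstract, written with TensorFlow/Keras. The vocabulary size, sequence length, and layer sizes are illustrative assumptions rather than the values used in the study, and the tokenized posts and labels are assumed to be prepared elsewhere.

```python
# Hypothetical sketch: a 1D-CNN classifier labeling ad posts as legal (0)
# or illegal (1). Hyperparameters below are illustrative, not from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000  # assumed vocabulary size after Korean tokenization
MAX_LEN = 100       # assumed maximum number of tokens per post

def build_cnn_classifier():
    model = models.Sequential([
        layers.Input(shape=(MAX_LEN,)),
        layers.Embedding(VOCAB_SIZE, 128),
        layers.Conv1D(128, kernel_size=5, activation="relu"),
        layers.GlobalMaxPooling1D(),
        layers.Dropout(0.5),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # probability the post is illegal
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_cnn_classifier()
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=5)
```

An RNN variant of the kind compared in the study would replace the Conv1D/GlobalMaxPooling1D pair with, for example, layers.LSTM(128) or layers.GRU(128).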

Comparing the 2015 with the 2022 Revised Primary Science Curriculum Based on Network Analysis (2015 및 2022 개정 초등학교 과학과 교육과정에 대한 비교 - 네트워크 분석을 중심으로 -)

  • Jho, Hunkoog
    • Journal of Korean Elementary Science Education
    • /
    • v.42 no.1
    • /
    • pp.178-193
    • /
    • 2023
  • The aim of this study was to investigate differences in the achievement standards from the 2015 to the 2022 revised national science curriculum and to present the implications for science teaching under the revised curriculum. Achievement standards relevant to primary science education were therefore extracted from the national curriculum documents; conceptual domains in the two curricula were analyzed for differences; various kinds of centrality were computed; and the Louvain algorithm was used to identify clusters. These methods revealed that, in the revised compared with the preceding curriculum, the total number of nodes and links had increased, while the number of achievement standards had decreased by 10 percent. In the revised curriculum, keywords relevant to procedural skills and behavior received more emphasis and were connected to collaborative learning and digital literacy. Observation, survey, and explanation remained important, but varied in application across the fields of science. Clustering revealed that the number of categories in each field of science remained mostly unchanged in the revised compared with the previous curriculum, but that each category highlighted different skills or behaviors. Based on those findings, some implications for science instruction in the classroom are discussed.
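As an illustration of the network-analysis workflow described above (centrality measures plus Louvain clustering), here is a minimal sketch using NetworkX. The keyword co-occurrence edges are hypothetical placeholders; in the study the nodes are keywords extracted from the achievement standards of the two curricula.

```python
# Hypothetical sketch: centrality and Louvain clustering on a small
# keyword co-occurrence network (edge weights are made-up examples).
import networkx as nx
from networkx.algorithms.community import louvain_communities

edges = [
    ("observation", "explanation", 3),
    ("observation", "survey", 2),
    ("survey", "collaborative learning", 1),
    ("collaborative learning", "digital literacy", 2),
]

G = nx.Graph()
G.add_weighted_edges_from(edges)

# Various kinds of centrality, as computed in the study
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G, weight="weight")

# Louvain algorithm for identifying clusters (NetworkX >= 2.8)
clusters = louvain_communities(G, weight="weight", seed=42)

print(degree)
print(clusters)
```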

Ensuring the Quality of Higher Education in Ukraine

  • Olha Oseredchuk;Mykola Mykhailichenko;Nataliia Rokosovyk;Olha Komar;Valentyna Bielikova;Oleh Plakhotnik;Oleksandr Kuchai
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.11
    • /
    • pp.142-148
    • /
    • 2023
  • The National Agency for Quality Assurance in Higher Education plays a crucial role in education in Ukraine: as an independent entity, it creates and ensures quality standards of higher education, which make it possible to properly implement the educational policy of the state and to develop the economy and society as a whole. The purpose of the article is to reveal the crucial role of the National Agency for Quality Assurance in Higher Education in creating quality management of higher education institutions and to show its mechanism as an independent entity that creates and ensures quality standards of higher education. The mission of the National Agency for Quality Assurance in Higher Education is to become a catalyst for positive changes in higher education and for the formation of a culture of its quality. The strategic goals of the National Agency are implemented in three main areas: the quality of educational services, recognition of the quality of scientific results, and ensuring the systemic impact of the National Agency. The National Agency for Quality Assurance in Higher Education exercises various powers, which can be divided into regulatory, analytical, accreditation, control, and communication powers. The effectiveness of the work of the National Agency for Quality Assurance in Higher Education in 2020 is demonstrated. The results of a survey of 183 higher education institutions of Ukraine conducted by the National Agency for Quality Assurance in Higher Education are shown. Emphasis is placed on the development of the "Recommendations of the National Agency for Quality Assurance in Higher Education regarding the introduction of an internal quality assurance system." The international activity and international recognition of the National Agency for Quality Assurance in Higher Education are shown.

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, the Go-playing artificial intelligence program by Google DeepMind, won a decisive victory against Lee Sedol. Many people thought that machines would not be able to beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, for which it was difficult to obtain good performance with existing machine learning techniques. In contrast, it is hard to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether the deep learning techniques studied so far can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account. To evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared the performance of various models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with that of MLP models, a traditional artificial neural network model. However, since not all network design alternatives can be tested, given the nature of artificial neural networks, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons in each hidden layer, the number of output filters, and the application of the dropout technique. The F1 score was used to evaluate the models, to show how well they classify the class of interest rather than overall accuracy. The details of applying each deep learning technique in the experiment are as follows. The CNN algorithm reads adjacent values around a specific value and recognizes features, but the distance between business data fields does not matter because each field is usually independent. In this experiment, we therefore set the filter size of the CNN to the number of fields, so that the overall characteristics of the data are learned at once, and added a hidden layer to make decisions based on the additional features. For the model with two LSTM layers, the input direction of the second layer is reversed with respect to the first layer in order to reduce the influence of the position of each field.
For the dropout technique, neurons in each hidden layer were dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model using dropout, and the next best was the MLP model with two hidden layers using dropout. Several findings emerged from the experiment. First, models using dropout make slightly more conservative predictions than those without it and generally show better classification performance. Second, CNN models show better classification performance than MLP models. This is interesting because CNNs performed well not only in the fields where their effectiveness has already been proven but also in binary classification problems, to which they have rarely been applied. Third, the LSTM algorithm appears unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
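As a concrete illustration of one experimental setting described above, the sketch below shows an MLP with two hidden layers and dropout (p = 0.5), evaluated with the F1 score. It uses TensorFlow/Keras and scikit-learn; the layer sizes are illustrative assumptions, and the feature matrix and binary telemarketing-response target are assumed to be prepared elsewhere.

```python
# Hypothetical sketch: MLP with two hidden layers and dropout (p = 0.5),
# evaluated with the F1 score as in the study. Layer sizes are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.metrics import f1_score

def build_mlp(n_features):
    model = models.Sequential([
        layers.Input(shape=(n_features,)),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),   # each hidden neuron dropped with probability 0.5
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

# model = build_mlp(X_train.shape[1])
# model.fit(X_train, y_train, epochs=20, batch_size=64)
# y_pred = (model.predict(X_test) > 0.5).astype(int)
# print(f1_score(y_test, y_pred))
```

The CNN variant described in the abstract would instead reshape each record to (number of fields, 1) and use a Conv1D layer whose kernel size equals the number of fields, so that one filter spans all fields at once.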

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.59-83
    • /
    • 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In deep learning based sentiment analysis of English texts, the natural language sentences in the training and test datasets are usually converted into sequences of word vectors before being fed into the deep learning models. In this case, word vectors generally refer to vector representations of words obtained by splitting a sentence on space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data. These vectors have been widely used in studies of sentiment analysis of reviews from various fields such as restaurants, movies, laptops, and cameras. Unlike in English, morphemes play an essential role in sentiment analysis and sentence structure analysis in Korean, a typical agglutinative language with developed postpositions and endings. A morpheme can be defined as the smallest meaningful unit of a language, and a word consists of one or more morphemes. For example, for the word '예쁘고', the morphemes are '예쁘' (adjective stem) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit of Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as the input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation mechanism to sentences divided into their constituent morphemes. Several questions arise at this point. What is the desirable range of POS (part-of-speech) tags when deriving morpheme vectors to improve the classification accuracy of a deep learning model? Is it appropriate to apply a typical word vector model, which relies primarily on the form of words, to Korean, which has a high ratio of homonyms? Will text preprocessing such as correcting spelling or spacing errors affect the classification accuracy, especially when drawing morpheme vectors from Korean product reviews with many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which are likely to be encountered first when applying deep learning models to Korean texts. As a starting point, we summarize these issues in three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors derived from grammatically correct texts of a domain other than the analysis target, or morpheme vectors derived from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean with regard to the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can we obtain a satisfactory level of classification accuracy when applying deep learning to Korean sentiment analysis? To address these research questions, we generate various types of morpheme vectors reflecting them and then compare the classification accuracy using a non-static CNN (convolutional neural network) model that takes the morpheme vectors as input. As training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used.
To derive morpheme vectors, we use data both from the same domain as the target and from another domain: about 2 million Naver Shopping cosmetics product reviews and 520,000 Naver News articles, the latter arguably corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ in terms of three criteria. First, they come from two types of data source: Naver News, with high grammatical correctness, and Naver Shopping cosmetics product reviews, with low grammatical correctness. Second, they differ in the degree of data preprocessing: either sentence splitting only, or additional spelling and spacing corrections after sentence separation. Third, they vary in the form of input fed into the word vector model: either the morphemes themselves or the morphemes with their POS tags attached. The morpheme vectors further vary depending on the range of POS tags considered, the minimum frequency of morphemes included, and the random initialization range. All morpheme vectors are derived with the CBOW (continuous bag-of-words) model, with a context window of 5 and a vector dimension of 300. It appears that using text from the same domain even with a lower degree of grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of all POS tags, including the incomprehensible category, lead to better classification accuracy. The POS tag attachment, devised for the high proportion of homonyms in Korean, and the minimum frequency standard for a morpheme to be included seem to have no definite influence on the classification accuracy.
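To make the vector-derivation step concrete, the following is a minimal sketch of deriving CBOW morpheme vectors with the settings stated above (context window 5, dimension 300), using Gensim and a KoNLPy morphological analyzer. The choice of analyzer (Okt), the minimum-frequency value, and the sample review are illustrative assumptions.

```python
# Hypothetical sketch: CBOW morpheme vectors (window 5, 300 dimensions).
# The analyzer, min_count value, and sample text are illustrative assumptions.
from gensim.models import Word2Vec
from konlpy.tag import Okt

okt = Okt()
reviews = ["배송이 빠르고 제품이 예쁘고 좋아요"]  # hypothetical cosmetics review

# Split each review into constituent morphemes (POS tags could be attached here)
morpheme_sentences = [okt.morphs(review) for review in reviews]

model = Word2Vec(
    sentences=morpheme_sentences,
    vector_size=300,  # vector dimension used in the study
    window=5,         # context window used in the study
    sg=0,             # CBOW, as in the study
    min_count=1,      # illustrative minimum-frequency threshold
)

some_morpheme = morpheme_sentences[0][0]
vector = model.wv[some_morpheme]  # a 300-dimensional morpheme vector
```

The resulting morpheme vectors would then initialize the embedding layer of the non-static CNN classifier, which fine-tunes them during training.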

Improving Bidirectional LSTM-CRF model Of Sequence Tagging by using Ontology knowledge based feature (온톨로지 지식 기반 특성치를 활용한 Bidirectional LSTM-CRF 모델의 시퀀스 태깅 성능 향상에 관한 연구)

  • Jin, Seunghee;Jang, Heewon;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.253-266
    • /
    • 2018
  • This paper proposes a sequence tagging methodology to improve the performance of NER (named entity recognition) in a QA system. In order to retrieve the correct answers stored in a database, the user's query must be converted into a database language such as SQL (Structured Query Language) so that the computer can interpret it. This is the process of identifying the class or data name contained in the database. The existing method of looking up the query words in the database and recognizing the object cannot identify homophones and multi-word phrases because it does not consider the context of the user's query. If there are multiple search results, all of them are returned, so the query can have many interpretations and the time complexity of the calculation becomes large. To overcome this, the study reflects the contextual meaning of the query using a Bidirectional LSTM-CRF. We also tried to overcome the disadvantage of neural network models, which cannot identify untrained words, by using an ontology-knowledge-based feature. Experiments were conducted on an ontology knowledge base of the music domain, and the performance was evaluated. In order to accurately evaluate the performance of the L-Bidirectional LSTM-CRF proposed in this study, we experimented with converting words included in the training queries into untrained words, to test whether the model correctly identified untrained words that were nevertheless included in the database. As a result, it was possible to recognize objects considering the context and to recognize untrained words without re-training the L-Bidirectional LSTM-CRF model, and it is confirmed that the overall performance of object recognition is improved.
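As an illustration of the tagging backbone described above, here is a minimal PyTorch sketch of a bidirectional LSTM tagger whose token embeddings are concatenated with an extra ontology-based feature vector. The CRF output layer and the ontology lookup itself are omitted for brevity, and all sizes are illustrative assumptions rather than the paper's settings.

```python
# Hypothetical sketch: BiLSTM tagger with ontology features concatenated to
# token embeddings. A CRF layer would normally decode the emitted scores.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, tagset_size, emb_dim=100,
                 onto_dim=10, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim + onto_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, tagset_size)

    def forward(self, token_ids, onto_features):
        # token_ids: (batch, seq_len); onto_features: (batch, seq_len, onto_dim)
        x = torch.cat([self.embedding(token_ids), onto_features], dim=-1)
        out, _ = self.lstm(x)
        return self.fc(out)  # per-token tag scores (CRF emissions)

# tagger = BiLSTMTagger(vocab_size=5000, tagset_size=9)
# emissions = tagger(torch.randint(0, 5000, (2, 12)), torch.zeros(2, 12, 10))
```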

Analysis of User′s Satisfaction to the Small Urban Spaces by Environmental Design Pattern Language (환경디자인 패턴언어를 통해 본 도심소공간의 이용만족도 분석에 관한 연구)

  • 김광래;노재현;장동주
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.16 no.3
    • /
    • pp.21-37
    • /
    • 1989
  • Environmental design patterns of nine small urban spaces in the C.B.D. of the City of Seoul are surveyed and analyzed for user satisfaction and behavior, as an environmental design evaluation using Christopher Alexander's pattern language. Small urban spaces, as a part of the streetscape, are formed by physical factors as well as the visual environment and interacting user behavior. Therefore, user satisfaction and behavior at the nine small urban spaces were investigated, together with a further search for possibilities of applying those pattern languages. A pattern language has the structure of a network. It is used in sequence, going through the patterns, moving always from larger patterns to smaller ones. Its power comes simply from the observation that most of the wonderful places of the city were made not by architects but by the people. It defines the limited number of arrangements of space that make sense in any given culture, and it actually gives us the power to generate these coherent arrangements of space. As a result, design patterns related to 'Plaza', 'Seats', and 'Accessibility' are highly evaluated by pattern frequency, pattern interaction, and their composition ranks, thus reconfirming Whyte's praise of small urban spaces in our inner-city design environments. According to the multiple regression analysis of user evaluations, the environmental functions related to satisfaction were 'Plaza', 'Accessibility', and 'Paving'. According to the free responses, users prefer visually pleasing environmental design objects such as 'Waterscape' and 'Setting'. In addition, the basic needs in small urban spaces are amenity facilities such as benches, drinking water, and shade for rest.

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets the interests and needs of users from the overflowing mass of content is becoming ever more important. In this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance, in particular, is one of the fields expected to benefit from text data analysis, because it constantly generates new information, and the earlier the information is, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the flow of information is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm and to extract good-quality triples. Second, it becomes harder to produce labeled text data manually as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and improve the semantic performance of searching stock-related information, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike other studies, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the effectiveness of the model. From these processes, this study has the following three significances. First, it presents a practical and simple automatic knowledge extraction method that can be readily applied. Second, it demonstrates the possibility of performance evaluation through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports on 30 individual stocks, the top 30 items by frequency of publication from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% of reports are designated as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using a named entity recognition tool, the KKMA. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. After that, using a neural tensor network, the same number of score functions as stocks is trained.
Thus, if a new entity from the testing set appears, we can calculate its score with every score function, and the stock of the function with the highest score is predicted as the item related to the entity. To evaluate the presented models, we confirm the prediction power and determine whether the score functions are well constructed by calculating the hit ratio for all reports in the testing set. As a result of the empirical study, the presented model shows 69.3% hit accuracy for the testing set, which consists of 2,526 reports. This hit ratio is meaningfully high despite some constraints on conducting the research. Looking at the prediction performance of the model for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show much lower performance than average. This result may be due to interference effects with other similar items and the generation of new knowledge. In this paper, we propose a methodology for finding the key entities, or their combinations, that are necessary to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or word vectors for the field. From the empirical test, we confirm the effectiveness of the presented model as described above. However, some limits and points to complement remain. Most notably, the phenomenon that model performance is especially poor for only some stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented in this study can be used to match new text information semantically with the related stocks.
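For illustration, the sketch below shows a bilinear tensor scoring layer in the spirit of a neural tensor network, scoring how strongly an entity representation relates to a stock representation; the stock whose score function yields the highest value would be predicted as the related item. The vector dimensions, the number of tensor slices, and the pairwise formulation are illustrative assumptions, not the exact architecture of the paper.

```python
# Hypothetical sketch: neural-tensor-network-style score function relating an
# entity vector to a stock vector. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class NTNScorer(nn.Module):
    def __init__(self, dim, k=4):
        super().__init__()
        self.W = nn.Parameter(torch.randn(k, dim, dim) * 0.01)  # k tensor slices
        self.V = nn.Linear(2 * dim, k)
        self.u = nn.Linear(k, 1, bias=False)

    def forward(self, entity, stock):
        # entity, stock: (batch, dim) representations
        bilinear = torch.einsum("bi,kij,bj->bk", entity, self.W, stock)
        hidden = torch.tanh(bilinear + self.V(torch.cat([entity, stock], dim=-1)))
        return self.u(hidden).squeeze(-1)  # higher score = more strongly related

# scorer = NTNScorer(dim=100)
# scores = scorer(entity_vecs, stock_vecs)
```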