• Title/Summary/Keyword: E-Learning software


Deep Learning-Based Personalized Recommendation Using Customer Behavior and Purchase History in E-Commerce (전자상거래에서 고객 행동 정보와 구매 기록을 활용한 딥러닝 기반 개인화 추천 시스템)

  • Hong, Da Young;Kim, Ga Yeong;Kim, Hyon Hee
    • KIPS Transactions on Software and Data Engineering / v.11 no.6 / pp.237-244 / 2022
  • In this paper, we present a VAE-based recommendation method that uses online behavior logs and purchase history to overcome data sparsity and the cold-start problem. To generate a variable representing each customer's purchase history, embedding and dimensionality reduction are applied to the purchase records. Variational autoencoders are then applied to the online behavior and purchase-history data. A total of 12 variables are used, and nDCG is chosen as the evaluation metric. Our experimental results show that the proposed VAE-based recommendation outperforms SVD-based recommendation, and that the generated purchase-history variable improves recommendation performance.
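As context for the evaluation metric mentioned above, a minimal sketch (our illustration, not code from the paper) of nDCG@k, which discounts each recommended item's relevance by its rank and normalizes by the ideal ordering:

```python
import math

def ndcg_at_k(relevances, k):
    """nDCG@k for a ranked list of per-item relevance scores.

    relevances: relevance of each recommended item, in ranked order.
    """
    def dcg(rels):
        # rank i (0-based) is discounted by log2(i + 2)
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels))

    dcg_k = dcg(relevances[:k])
    ideal = dcg(sorted(relevances, reverse=True)[:k])
    return dcg_k / ideal if ideal > 0 else 0.0
```

A perfectly ordered list scores 1.0; any relevant item pushed down the ranking lowers the score.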

A Comparative Study on Reservoir Level Prediction Performance Using a Deep Neural Network with ASOS, AWS, and Thiessen Network Data

  • Hye-Seung Park;Hyun-Ho Yang;Ho-Jun Lee;Jongwook Yoon
    • Journal of the Korea Society of Computer and Information / v.29 no.3 / pp.67-74 / 2024
  • In this paper, we present a study aimed at analyzing how different rainfall measurement methods affect the performance of reservoir water level predictions. This work is particularly timely given the increasing emphasis on climate change and the sustainable management of water resources. To this end, we have employed rainfall data from ASOS, AWS, and Thiessen Network-based measures provided by the KMA Weather Data Service to train our neural network models for reservoir yield predictions. Our analysis, which encompasses 34 reservoirs in Jeollabuk-do Province, examines how each method contributes to enhancing prediction accuracy. The results reveal that models using rainfall data based on the Thiessen Network's area rainfall ratio yield the highest accuracy. This can be attributed to the method's accounting for precise distances between observation stations, offering a more accurate reflection of the actual rainfall across different regions. These findings underscore the importance of precise regional rainfall data in predicting reservoir yields. Additionally, the paper underscores the significance of meticulous rainfall measurement and data analysis, and discusses the prediction model's potential applications in agriculture, urban planning, and flood management.
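The Thiessen-network areal rainfall referred to above is, in essence, an area-weighted average of station rainfall, where each station's weight is the area of its Thiessen polygon. A minimal sketch (our illustration, not the authors' code):

```python
def thiessen_areal_rainfall(station_rain, station_area):
    """Area-weighted mean rainfall over a basin.

    station_rain: rainfall (mm) observed at each station.
    station_area: Thiessen polygon area for each station (same order).
    """
    total = sum(station_area)
    # each station contributes in proportion to its polygon's share of the basin
    return sum(r * a / total for r, a in zip(station_rain, station_area))
```

Because the weights reflect the distances between observation stations, the result tracks the actual spatial distribution of rainfall more closely than a simple station mean.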

An analysis of changing interests in mathematics and strategic thinking reflected in small group drawing activities using graphs and inequations - With Grafeq software - (그래프와 부등식 영역의 소집단 그림그리기 활동에서 나타나는 수학에 대한 흥미변화 및 전략적 사고분석 -Grafeq 활용을 중심으로-)

  • Shin, In-Sun;Park, Kyung-Min
    • Communications of Mathematical Education / v.26 no.2 / pp.177-203 / 2012
  • The purpose of this research was to examine whether small group drawing activities can serve as learning content that combines mathematics and art, by analyzing the changes in 10th-grade students' interest in mathematics and the features of their strategic thinking reflected in small group drawing activities using graphs and inequations. The results of the study are as follows: 1. The small group drawing activity using graphs and inequations demonstrated that students' interest in mathematics could change positively. 2. The activity was effective in stimulating the students' strategic thinking skills, the higher-level thinking activities necessary for creative problem solving. As the students went through the whole process of accomplishing a complete goal, they engaged in integrated thinking activities that brought together understandings of basic graphs and inequations, and were also found to use the higher-level thinking functions needed for creative problem solving, such as critical thinking, flexible thinking, development-oriented thinking, and inferential thinking. 3. The activity can be expected to constitute learning content that integrates mathematics and art, and is an effective means of boosting students' strengths in mathematics through activities that consider students' unique cognitive and qualitative characteristics and through integration with art.

Text Filtering using Iterative Boosting Algorithms (반복적 부스팅 학습을 이용한 문서 여과)

  • Hahn, Sang-Youn;Zang, Byoung-Tak
    • Journal of KIISE:Software and Applications / v.29 no.4 / pp.270-277 / 2002
  • Text filtering is the task of deciding whether a document is relevant to a specified topic. As the Internet and the Web become widespread and the number of documents delivered by e-mail grows explosively, the importance of text filtering increases as well. The aim of this paper is to improve the accuracy of text filtering systems by using machine learning techniques. We apply AdaBoost algorithms to the filtering task. An AdaBoost algorithm generates and combines a series of simple hypotheses, each of which decides the relevance of a document to a topic on the basis of whether the document includes a certain word. We begin with an existing AdaBoost algorithm that uses weak hypotheses with outputs of 1 or -1. We then extend the algorithm to use weak hypotheses with real-valued outputs, which were proposed recently to improve error reduction rates and final filtering performance. Next, we attempt to improve AdaBoost's performance further by setting weights randomly according to the continuous Poisson distribution, executing AdaBoost, repeating these steps several times, and then combining all the hypotheses learned. This mitigates the overfitting problem that may occur when learning from a small amount of data. Experiments were performed on the real document collections used in TREC-8, a well-established text retrieval contest; this dataset includes Financial Times articles from 1992 to 1994. The experimental results show that AdaBoost with real-valued hypotheses outperforms AdaBoost with binary-valued hypotheses, and that AdaBoost iterated with random weights further improves filtering accuracy. Comparison results for all participants in the TREC-8 filtering task are also provided.
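A minimal sketch of the binary-valued variant described above, using word-presence stumps as weak hypotheses (hypothetical toy data; the actual system was trained on TREC-8 documents):

```python
import math

def train_adaboost(docs, labels, vocab, rounds):
    """AdaBoost sketch with word-presence stumps whose outputs are +1/-1.

    docs: list of word sets; labels: +1 (relevant) or -1 (irrelevant).
    Returns a list of (word, polarity, alpha) weak hypotheses.
    """
    n = len(docs)
    w = [1.0 / n] * n  # per-example weights
    ensemble = []
    for _ in range(rounds):
        best = None
        for word in vocab:
            for pol in (1, -1):
                # weighted error of stump h(d) = pol if word in d else -pol
                err = sum(wi for wi, d, y in zip(w, docs, labels)
                          if (pol if word in d else -pol) != y)
                if best is None or err < best[0]:
                    best = (err, word, pol)
        err, word, pol = best
        err = min(max(err, 1e-10), 1 - 1e-10)  # avoid log of 0
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((word, pol, alpha))
        # boost the weight of misclassified examples, then renormalize
        for i, (d, y) in enumerate(zip(docs, labels)):
            h = pol if word in d else -pol
            w[i] *= math.exp(-alpha * y * h)
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def classify(ensemble, doc):
    """Weighted vote of the weak hypotheses: +1 = relevant."""
    score = sum(alpha * (pol if word in doc else -pol)
                for word, pol, alpha in ensemble)
    return 1 if score >= 0 else -1
```

The real-valued extension replaces the fixed +1/-1 stump outputs with confidence-rated values, and the paper's iteration scheme reruns this whole procedure from Poisson-randomized initial weights and merges the resulting ensembles.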

Extracting Minimized Feature Input And Fuzzy Rules Using A Fuzzy Neural Network And Non-Overlap Area Distribution Measurement Method (퍼지신경망과 비중복면적 분산 측정법을 이용한 최소의 특징입력 및 퍼지규칙의 추출)

  • Lim Joon-Shik
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.5 / pp.599-604 / 2005
  • This paper presents fuzzy rules for predicting the diagnosis of Wisconsin breast cancer with a minimized number of feature inputs, using the neural network with weighted fuzzy membership functions (NEWFM) and the non-overlap area distribution measurement method. NEWFM is capable of self-adapting weighted membership functions from the given Wisconsin breast cancer clinical training data. For each of the n feature inputs, a set of small, medium, and large weighted triangular membership functions in a hyperbox is used to represent that input. The membership functions are initially distributed and weighted randomly, and their positions and weights are adjusted during learning. After learning, prediction rules are extracted directly from the n enhanced bounded sums of the small, medium, and large weighted fuzzy membership functions. The non-overlap area distribution measurement method is then applied to select important features by deleting less important ones. Two sets of prediction rules extracted from NEWFM using the 4 input features selected from the original 9 outperform the currently published results in the number of rules, the number of input features, and accuracy, achieving 99.71%.
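The weighted triangular membership functions described above can be illustrated with a small sketch (parameters are our assumptions, not values from NEWFM):

```python
def tri_membership(x, left, center, right, weight):
    """Weighted triangular membership: peaks at `center` with height `weight`,
    falling linearly to zero at `left` and `right`."""
    if x <= left or x >= right:
        return 0.0
    if x <= center:
        return weight * (x - left) / (center - left)
    return weight * (right - x) / (right - center)

def bounded_sum(x, mfs):
    """Bounded sum of weighted membership degrees, capped at 1.

    mfs: list of (left, center, right, weight) tuples, e.g. the
    small/medium/large functions covering one feature input.
    """
    return min(1.0, sum(tri_membership(x, *mf) for mf in mfs))
```

During learning, NEWFM adjusts the `center` positions and `weight` heights; the bounded sums of the adapted functions are what the prediction rules are read off from.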

Fully Automatic Coronary Calcium Score Software Empowered by Artificial Intelligence Technology: Validation Study Using Three CT Cohorts

  • June-Goo Lee;HeeSoo Kim;Heejun Kang;Hyun Jung Koo;Joon-Won Kang;Young-Hak Kim;Dong Hyun Yang
    • Korean Journal of Radiology / v.22 no.11 / pp.1764-1776 / 2021
  • Objective: This study aimed to validate a deep learning-based fully automatic calcium scoring (coronary artery calcium [CAC]_auto) system using previously published cardiac computed tomography (CT) cohort data with the manually segmented coronary calcium scoring (CAC_hand) system as the reference standard. Materials and Methods: We developed the CAC_auto system using 100 co-registered, non-enhanced and contrast-enhanced CT scans. For the validation of the CAC_auto system, three previously published CT cohorts (n = 2985) were chosen to represent different clinical scenarios (i.e., 2647 asymptomatic, 220 symptomatic, 118 valve disease) and four CT models. The performance of the CAC_auto system in detecting coronary calcium was determined. The reliability of the system in measuring the Agatston score as compared with CAC_hand was also evaluated per vessel and per patient using intraclass correlation coefficients (ICCs) and Bland-Altman analysis. The agreement between CAC_auto and CAC_hand based on the cardiovascular risk stratification categories (Agatston score: 0, 1-10, 11-100, 101-400, > 400) was evaluated. Results: In 2985 patients, 6218 coronary calcium lesions were identified using CAC_hand. The per-lesion sensitivity and false-positive rate of the CAC_auto system in detecting coronary calcium were 93.3% (5800 of 6218) and 0.11 false-positive lesions per patient, respectively. The CAC_auto system, in measuring the Agatston score, yielded ICCs of 0.99 for all the vessels (left main 0.91, left anterior descending 0.99, left circumflex 0.96, right coronary 0.99). The limits of agreement between CAC_auto and CAC_hand were 1.6 ± 52.2. The linearly weighted kappa value for the Agatston score categorization was 0.94. The main causes of false-positive results were image noise (29.1%, 97/333 lesions), aortic wall calcification (25.5%, 85/333 lesions), and pericardial calcification (24.3%, 81/333 lesions). 
Conclusion: The atlas-based CAC_auto system empowered by deep learning provided accurate calcium score measurement and risk category classification as compared with the manual method, and could potentially streamline CAC imaging workflows.
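The five cardiovascular risk categories used for the agreement analysis can be sketched as a simple mapping over the Agatston thresholds stated above (our illustration):

```python
def agatston_category(score):
    """Map an Agatston score to the risk categories used in the study:
    0, 1-10, 11-100, 101-400, > 400."""
    if score == 0:
        return "0"
    if score <= 10:
        return "1-10"
    if score <= 100:
        return "11-100"
    if score <= 400:
        return "101-400"
    return "> 400"
```

The linearly weighted kappa of 0.94 reported above measures how often CAC_auto and CAC_hand place a patient in the same one of these bins, penalizing distant disagreements more heavily.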

Tracing the Development and Spread Patterns of OSS using the Method of Netnography - The Case of JavaScript Frameworks - (네트노그라피를 이용한 공개 소프트웨어의 개발 및 확산 패턴 분석에 관한 연구 - 자바스크립트 프레임워크 사례를 중심으로 -)

  • Kang, Heesuk;Yoon, Inhwan;Lee, Heesan
    • Management & Information Systems Review / v.36 no.3 / pp.131-150 / 2017
  • The purpose of this study is to observe the spread pattern of open source software (OSS) and its relations with surrounding actors over its operation period. To investigate the changing patterns of participants in OSS, we use netnography on the basis of online data, which can trace such changes over time. The cases of three OSS JavaScript frameworks (jQuery, MooTools, and YUI) were compared, with data collected from the open application programming interface (API) of GitHub as well as blog and web searches. This research utilizes the translation process of actor-network theory to categorize the stages of the OSS translation process. In the project commencement stage, we identified three different types of OSS-related actors and defined the relationships among them. The period in which a master first commences a project and refines it through the maintenance of source code with the persons concerned constitutes the project growth stage. Thereafter, the period in which users, exposed to promotion activities and code usage, go through observation and learning and become active participants is regarded as the 'leap of participants' stage. Our results emphasize the importance of promotion processes in participants' selection of an OSS for participation and confirm a crowding-out effect, in which the rapid speed of OSS development retarded the emergence of new participants.


User Access Patterns Discovery based on Apriori Algorithm under Web Logs (웹 로그에서의 Apriori 알고리즘 기반 사용자 액세스 패턴 발견)

  • Ran, Cong-Lin;Joung, Suck-Tae
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.12 no.6 / pp.681-689 / 2019
  • Web usage pattern discovery is an advanced means of using Web log data, and a specific application of data mining technology to Web logs. Educational data mining is the application of data mining techniques to educational data (such as university Web logs, e-learning, adaptive hypermedia, and intelligent tutoring systems), and its objective is to analyze these types of data in order to resolve educational research issues. In this paper, the Web log data of a university are used as the research object. Using database OLAP technology, the Web log data are preprocessed into a format suitable for data mining, and the processing results are stored in MSSQL. Basic statistics and analysis are then computed from the processed Web log records. In addition, we introduce the Apriori algorithm for Web usage pattern mining and its implementation process, develop an Apriori algorithm program in a Python development environment, report its performance, and realize the mining of Web user access patterns. The results have theoretical significance for applying the discovered patterns in the development of teaching systems. Future research will explore improving the Apriori algorithm in a distributed computing environment.
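A minimal Apriori sketch over session-like transactions (our illustration; the paper's implementation details are not given in the abstract). Each transaction is the set of pages visited in one session, and an itemset is frequent if it appears in at least `min_support` sessions:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Find all itemsets with support >= min_support.

    transactions: list of sets (e.g. pages visited in one session).
    Returns {frozenset(itemset): support_count}.
    """
    all_items = {i for t in transactions for i in t}

    def count(cands):
        return {c: sum(1 for t in transactions if c <= t) for c in cands}

    # L1: frequent single items
    freq = {c: s for c, s in count({frozenset([i]) for i in all_items}).items()
            if s >= min_support}
    result = dict(freq)
    k = 2
    while freq:
        items = sorted({i for c in freq for i in c})
        # candidate k-itemsets: every (k-1)-subset must already be frequent
        cands = {frozenset(c) for c in combinations(items, k)
                 if all(frozenset(s) in freq for s in combinations(c, k - 1))}
        freq = {c: s for c, s in count(cands).items() if s >= min_support}
        result.update(freq)
        k += 1
    return result
```

The subset-pruning step is what makes Apriori practical on large logs: a k-itemset is only counted if all of its (k-1)-subsets survived the previous pass.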

Target Speaker Speech Restoration via Spectral bases Learning (주파수 특성 기저벡터 학습을 통한 특정화자 음성 복원)

  • Park, Sun-Ho;Yoo, Ji-Ho;Choi, Seung-Jin
    • Journal of KIISE:Software and Applications / v.36 no.3 / pp.179-186 / 2009
  • This paper proposes a target speech extraction method that restores the speech signal of a target speaker from a noisy convolutive mixture of speech and an interference source. We assume that the target speaker is known and that his or her utterances are available at training time. Incorporating additional information extracted from the training utterances into the separation, we combine convolutive blind source separation (CBSS) with non-negative decomposition techniques, e.g., a probabilistic latent variable model. The non-negative decomposition is used to learn a set of bases from the spectrogram of the training utterances, where the bases represent the spectral information corresponding to the target speaker. Based on the learned spectral bases, our method provides two post-processing steps for CBSS. The channel selection step finds the output channel of CBSS that dominantly contains the target speech. The reconstruction step then recovers the original spectrogram of the target speech from the selected output channel, suppressing the remaining interference and background noise. Experimental results show that our method substantially improves the separation results of CBSS and, as a result, successfully recovers the target speech.
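The basis-learning step can be sketched with standard multiplicative-update NMF, a generic non-negative decomposition (our illustration, not the probabilistic latent variable model used in the paper):

```python
import numpy as np

def learn_spectral_bases(V, r, iters=200, seed=0):
    """Learn r non-negative spectral bases from a magnitude spectrogram.

    V: (freq x time) non-negative matrix, factorized as V ~ W @ H, where
    the columns of W are the speaker's spectral bases and H their
    time-varying activations.  Uses Lee-Seung multiplicative updates.
    """
    rng = np.random.default_rng(seed)
    f, t = V.shape
    W = rng.random((f, r)) + 1e-3
    H = rng.random((r, t)) + 1e-3
    for _ in range(iters):
        # multiplicative updates keep W and H non-negative throughout
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H
```

At separation time, projecting a CBSS output channel onto the learned bases keeps only the energy that matches the target speaker's spectral patterns, which is the intuition behind the reconstruction step.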

A Study on Social Media Sentiment Analysis for Exploring Public Opinions Related to Education Policies (교육정책관련 여론탐색을 위한 소셜미디어 감정분석 연구)

  • Chung, Jin-Myeong;Yoo, Ki-Young;Koo, Chan-Dong
    • Informatization Policy / v.24 no.4 / pp.3-16 / 2017
  • With the development of social media services in the Web 2.0 era, the site of public opinion formation has partially shifted from traditional mass media to social media. This phenomenon continues to expand, and public opinions on government policies created and shared on social media are attracting more attention. Grasping public opinion is particularly important in education policy formulation, which involves a variety of stakeholders and conflicts. The purpose of this study is to explore public opinion on education-related policies through an empirical analysis of social media documents using opinion mining techniques. For this purpose, we collected education policy-related documents produced by users on social media services by keyword, tokenized them and extracted their sentiment features, and scored those features using sentiment dictionaries to determine public preferences for specific education policies. As a result, many negative opinions were found regarding the smart education policies associated with the keywords 'digital textbooks' and 'e-learning', while the software education policies associated with 'coding education' and 'computational thinking' drew more positive opinions. The general policies associated with 'free school terms' and 'creative personality education' also showed more negative public opinion. Sentiment could not be extracted from as much as 20% of the documents, signifying that a certain share of blog posts and tweets do not reflect their writers' opinions.
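The dictionary-based scoring step described above can be sketched as follows (the dictionary entries are hypothetical; the study used Korean sentiment dictionaries):

```python
def score_document(tokens, sentiment_dict):
    """Dictionary-based sentiment score for one tokenized document.

    tokens: list of tokens from the document.
    sentiment_dict: token -> polarity value (positive or negative).
    Returns (score, n_matched); n_matched == 0 means no sentiment could
    be extracted, as happened for about 20% of documents in the study.
    """
    matched = [sentiment_dict[t] for t in tokens if t in sentiment_dict]
    return sum(matched), len(matched)
```

Aggregating these per-document scores over all documents that match a policy keyword gives the positive or negative public-opinion profile reported for each policy.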