• Title/Summary/Keyword: 컴퓨터 활용학습 (computer-assisted learning)


Analysis of Competency-Based In-service Training Programs for Informatics Teachers (정보교사의 역량에 기반한 소프트웨어교육 교원 직무 연수과정 분석)

  • Ock, Jihyun;Ahn, Seongjin
    • The Journal of Korean Association of Computer Education
    • /
    • v.21 no.1
    • /
    • pp.43-50
    • /
    • 2018
  • The 2015 Revised National Curriculum emphasizes software education to develop creative and convergent talent in preparation for the Fourth Industrial Revolution. Accordingly, competency-based training programs for informatics teachers need to be developed for a rapidly changing educational environment. Against this background, this study selected a framework for analyzing the content of in-service training for informatics teachers through a review of previous studies. By analyzing current training programs that strengthen the competencies required of secondary-school informatics teachers, the study aims to draw implications for future in-service training programs. To this end, the study surveyed and consulted experts who participated in developing the in-service training textbooks, then analyzed the elements of competency-based training content and the relative importance of each competency element using the analytic hierarchy process (AHP). The analysis showed that the content was concentrated on "Understanding and Reconstructing the National Curriculum," a competency required of general teachers as well as informatics teachers, which accounted for 47% of the content, or 7 hours of the total 15 hours. In contrast, the content gave little coverage to "Establishing and Using Teaching-Learning Strategies for Informatics," the competency element with the highest relative importance (27%). These findings can serve as basic data for identifying and addressing the gaps in the development of in-service training programs for informatics teachers.
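To illustrate the AHP step the abstract mentions, here is a minimal sketch of how relative-importance weights can be derived from a pairwise comparison matrix using the geometric-mean approximation. The matrix values and three competency elements are hypothetical, not the study's actual survey data.

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priority weights via the geometric-mean method:
    take each row's geometric mean, then normalize to sum to 1."""
    n = len(matrix)
    gms = [math.prod(row) ** (1.0 / n) for row in matrix]  # row geometric means
    total = sum(gms)
    return [g / total for g in gms]

# Toy pairwise comparisons among three competency elements:
# matrix[i][j] = how much more important element i is than element j.
pairwise = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]

weights = ahp_weights(pairwise)
print([round(w, 3) for w in weights])  # normalized weights; first element dominates
```

The geometric-mean method is a standard closed-form approximation to the principal-eigenvector weights Saaty's AHP prescribes; for small, consistent matrices the two agree closely.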

Direct Retargeting Method from Facial Capture Data to Facial Rig (페이셜 리그에 대한 페이셜 캡처 데이터의 다이렉트 리타겟팅 방법)

  • Cho, Hyunjoo;Lee, Jeeho
    • Journal of the Korea Computer Graphics Society
    • /
    • v.22 no.2
    • /
    • pp.11-19
    • /
    • 2016
  • This paper proposes a method to directly retarget facial motion capture data to a facial rig. The facial rig is an essential tool in the production pipeline that helps artists create facial animation. Directly mapping motion capture data onto the facial rig is highly convenient, because artists are already familiar with facial rigs and the direct mapping produces results that are ready for the artist's follow-up editing. However, mapping motion data onto a facial rig is not trivial: facial rigs vary widely in structure, so it is hard to devise a generalized mapping method for arbitrary rigs. In this paper, we propose a data-driven approach for robust mapping from motion capture data to an arbitrary facial rig. The results show that our method is intuitive and increases productivity in the creation of facial animation. We also show that our method can successfully retarget expressions to non-human characters whose facial shapes differ greatly from a human's.
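To make the data-driven mapping idea concrete, here is a minimal sketch that maps a capture feature vector to rig controls by inverse-distance blending of the nearest example pairs. This is an illustration of the general example-based approach, not the paper's actual algorithm, and the feature/control dimensions are hypothetical.

```python
import math

def retarget(capture, examples, k=2):
    """Map a capture feature vector to rig control values by
    inverse-distance blending of the k nearest example pairs.
    `examples` is a list of (capture_features, rig_controls) pairs."""
    dists = []
    for feat, controls in examples:
        d = math.dist(capture, feat)
        if d < 1e-9:                      # exact match: reuse its controls
            return list(controls)
        dists.append((d, controls))
    dists.sort(key=lambda t: t[0])
    nearest = dists[:k]
    wsum = sum(1.0 / d for d, _ in nearest)
    dim = len(nearest[0][1])
    return [sum((1.0 / d) * c[i] for d, c in nearest) / wsum for i in range(dim)]

# Hypothetical example pairs: 2-D capture features -> two rig control sliders
examples = [([0.0, 0.0], [0.0, 0.0]),   # neutral
            ([1.0, 0.0], [1.0, 0.2]),   # smile
            ([0.0, 1.0], [0.1, 1.0])]   # jaw open

print(retarget([0.9, 0.1], examples))   # blends toward the 'smile' controls
```

Because the output lives directly in rig-control space, an artist could edit the blended result with the same sliders they already use, which is the convenience the paper emphasizes.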

A Study on Production Pipeline for Third Person Virtual Reality Contents Based on Hand Interface (손 인터페이스 기반 3인칭 가상현실 콘텐츠 제작 공정에 관한 연구)

  • Jeon, Changyu;Kim, Mingyu;Lee, Jiwon;Kim, Jinmo
    • Journal of the Korea Computer Graphics Society
    • /
    • v.23 no.3
    • /
    • pp.9-17
    • /
    • 2017
  • This study proposes a production pipeline for third-person virtual reality content, to provide users with a new experience and sense of presence in a virtual reality environment. To this end, we first create third-person virtual reality content that includes a story, fun factors, and game characteristics. It consists of a tutorial scene, in which users can learn the proposed third-person interface that differs from existing virtual reality content, and a content scene, in which game elements based on the background story lead users toward the content's goal. Next, we design an interface suited to third-person virtual reality content: one in which users interact with the virtual environment or objects using their hands. The proposed interface consists of three steps: character movement, virtual object selection (including multiple selection), and 3D menu control in virtual space. Finally, a survey experiment confirmed that third-person virtual reality content produced with the proposed interface is easy to control and yields high satisfaction.

Analysis of Borrows Demand for Books in Public Libraries Considering Cultural Characteristics (문화적 특성을 고려한 공공도서관 도서 대출수요 분석 : 대구광역시 시립도서관을 사례로)

  • Oh, Min-Ki;Kim, Kyung-Rae;Jeong, Won-Oong;Kim, Keun-Wook
    • Journal of Digital Convergence
    • /
    • v.19 no.3
    • /
    • pp.55-64
    • /
    • 2021
  • Public libraries are spaces where residents learn a wide range of knowledge and ideas, and because they are closely tied to daily life, many related studies have been conducted. Most previous studies found variables such as population, traffic accessibility, and environment to be highly relevant to library use. This study differs from previous work in that it analyzed book borrowing demand and its correlates by including cultural-characteristic variables, based on borrowing histories (1,820,407 cases) and member information (297,222 persons). The analysis showed that borrowing demand increased as loans of social science and literature books rose relative to technical science books. In addition, various descriptive statistical analyses were used to characterize library borrowing demand, and policy implications and limitations of the study were presented based on the results. Considering that cultural characteristics change with location and time of day, related research should continue in the future.
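The reported association between genre mix and borrowing demand can be sketched with a simple correlation check. The per-branch figures below are hypothetical stand-ins, not the study's Daegu data; they only illustrate the kind of positive relationship the abstract describes.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-branch figures: share of social-science/literature loans
# (relative to technical science) vs. total annual borrow demand.
lit_share = [0.35, 0.42, 0.50, 0.55, 0.61]
demand = [12000, 15500, 17000, 21000, 24000]

r = pearson(lit_share, demand)
print(round(r, 3))  # strongly positive on this toy data
```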

Analysis of news bigdata on 'Gather Town' using the Bigkinds system

  • Choi, Sui
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.3
    • /
    • pp.53-61
    • /
    • 2022
  • In recent years, Generation MZ and the Metaverse have drawn great attention, owing to the Fourth Industrial Revolution and the development of a digital environment that blurs the boundary between reality and virtual reality. Generation MZ approaches information very differently from previous generations and uses distinct communication methods. In terms of learning, they have different motivations, types, and skills, and they build relationships differently. Meanwhile, the Metaverse is drawing attention as a teaching method that fits the traits of Generation MZ. Thus, this research investigated how to increase the use of the Metaverse in educational technology. Specifically, it examined the antecedents of the popularity of Gather Town, a Metaverse platform. Big data from news articles were collected and analyzed using the Bigkinds system provided by the Korea Press Foundation. The analysis revealed, first, a rapidly increasing trend in media exposure of Gather Town since July 2021, suggesting greater use of Gather Town in education after the COVID-19 pandemic began. Second, word association analysis and word cloud analysis showed high weights on education-related words such as 'remote', 'university', and 'freshman', while words like 'Metaverse', 'Metaverse platform', 'COVID-19', and 'Avatar' were also prominent. Third, network analysis extracted 'COVID-19', 'Avatar', 'University student', 'career', and 'YouTube' as keywords. The findings suggest the potential value of Gather Town as an educational tool under the COVID-19 pandemic. This research should therefore contribute to the application and utilization of Gather Town in the field of education.
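The word association step the abstract describes boils down to counting which terms co-occur with a target keyword across articles. Here is a minimal sketch on toy headlines (hypothetical text, not the actual Bigkinds corpus):

```python
from collections import Counter

# Toy headlines standing in for Bigkinds news articles (hypothetical text).
articles = [
    "gather town remote university freshman orientation",
    "metaverse platform gather town avatar university",
    "covid19 remote class on gather town avatar",
]

target = "gather"
cooc = Counter()
for art in articles:
    words = set(art.split())          # one count per article, not per token
    if target in words:
        for w in words - {target}:
            cooc[w] += 1              # count words appearing with the target

print(cooc.most_common(3))            # strongest associations with 'gather'
```

The resulting counts are exactly what a word cloud visualizes (word size proportional to weight) and what a network analysis turns into edge weights between keyword nodes.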

Analysis of Satisfaction of Pre-service and In-service Elementary Teachers with Artificial Intelligence Education using App Inventor

  • Junghee, Jo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.3
    • /
    • pp.189-196
    • /
    • 2023
  • This paper analyzes the satisfaction of two groups of teachers who were educated about artificial intelligence using App Inventor. The participants were 13 pre-service and 9 in-service elementary school teachers, and the data was collected using a questionnaire. As a result of the study, in-service teachers were more satisfied than pre-service teachers in terms of interest, difficulty, and participation in the education. In addition, responses on whether the education helped motivate learning of artificial intelligence, and on willingness to apply it in future elementary classes, were also more positive among in-service teachers. In general, pre-service teachers held somewhat more negative views than in-service teachers, but they were more positive on whether the education improved their understanding of artificial intelligence and whether they were willing to participate in additional education. A Mann-Whitney test showed no significant difference in satisfaction between the two groups. This may be because most participants in both groups already had block-based or text-based programming experience, so they could take part in the education without particular resistance to or difficulty with App Inventor, resulting in high satisfaction in both groups. The results of this study can provide basic data for the future development and operation of artificial intelligence education programs for both pre-service and in-service elementary school teachers.
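The Mann-Whitney test used in the study compares two independent samples via ranks rather than means. A minimal sketch of computing the U statistic (with midranks for ties) on hypothetical 5-point satisfaction scores, not the paper's actual data:

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistics for two independent samples
    (average ranks assigned to ties)."""
    combined = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    n = len(combined)
    rank_of = [0.0] * n
    pos = 0
    while pos < n:                        # assign midranks to tied runs
        j = pos
        while j + 1 < n and combined[j + 1][0] == combined[pos][0]:
            j += 1
        mid = (pos + j) / 2 + 1           # 1-based average rank of the run
        for k in range(pos, j + 1):
            rank_of[k] = mid
        pos = j + 1
    rank_sum_a = sum(r for (v, g), r in zip(combined, rank_of) if g == 0)
    n1, n2 = len(a), len(b)
    u1 = rank_sum_a - n1 * (n1 + 1) / 2
    return u1, n1 * n2 - u1               # U for group a, U for group b

# Hypothetical 5-point satisfaction scores for the two teacher groups
pre = [3, 4, 3, 5, 4, 3, 4]   # pre-service
ins = [4, 5, 4, 5, 3, 4]      # in-service
u1, u2 = mann_whitney_u(pre, ins)
print(u1, u2)
```

In practice, the U statistic is then compared against critical values (or a normal approximation with tie correction) to decide significance, which is the step where the study found no significant difference.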

Corneal Ulcer Region Detection With Semantic Segmentation Using Deep Learning

  • Im, Jinhyuk;Kim, Daewon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.9
    • /
    • pp.1-12
    • /
    • 2022
  • Traditional methods of assessing corneal ulcers rely on the medical staff's subjective judgment of photographs taken with special equipment, which makes it difficult to present an objective basis for diagnosis. In this paper, we propose a method to detect the ulcer area pixel by pixel in corneal ulcer images using a semantic segmentation model. We performed experiments based on the DeepLab model, which shows the highest performance among semantic segmentation models. Training and test data were selected, and DeepLab models with Xception and ResNet backbone networks were evaluated and their performances compared, using the Dice similarity coefficient and IoU as evaluation metrics. Experimental results show that when 'crop & resized' images are added to the dataset, the DeepLab model with a ResNet101 backbone segments the ulcer area with an average Dice similarity coefficient of about 93%. This study shows that semantic segmentation models used for object detection can also produce meaningful results when classifying objects with irregular shapes such as corneal ulcers. In future studies, we will extend the datasets and experiment with adaptive learning methods so that the approach can be applied in real medical diagnosis environments.
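The two evaluation metrics the abstract names are straightforward to compute from binary masks. A minimal sketch on toy 4x4 masks (illustrative data only):

```python
def dice_and_iou(pred, truth):
    """Dice similarity coefficient and IoU for two binary masks,
    given as flat lists of 0/1 pixel labels."""
    inter = sum(p & t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    dice = 2 * inter / (p_sum + t_sum)
    iou = inter / (p_sum + t_sum - inter)   # union = |P| + |T| - |P ∩ T|
    return dice, iou

# Toy 4x4 masks flattened to 1-D: predicted vs. ground-truth ulcer pixels
pred  = [0, 1, 1, 0,  0, 1, 1, 0,  0, 0, 1, 0,  0, 0, 0, 0]
truth = [0, 1, 1, 0,  0, 1, 1, 1,  0, 0, 0, 0,  0, 0, 0, 0]
d, i = dice_and_iou(pred, truth)
print(round(d, 3), round(i, 3))
```

Dice and IoU are monotonically related (Dice = 2·IoU / (1 + IoU)), with Dice always at least as large, which is worth remembering when comparing the two numbers a segmentation paper reports.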

Implementation of Git's Commit Message Classification Model Using GPT-Linked Source Change Data

  • Ji-Hoon Choi;Jae-Woong Kim;Seong-Hyun Park
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.10
    • /
    • pp.123-132
    • /
    • 2023
  • Git commit messages record the history of source changes during a project's development and operation. By utilizing this historical data, project risks and project status can be identified, thereby reducing costs and improving time efficiency. Much related research is in progress; one strand classifies commit messages by software maintenance type. Among published studies, the highest reported classification accuracy is 95%. In this paper, we set out to build solutions on top of a commit classification model and to remove the limitation that the most accurate existing model applies only to programs written in Java. To this end, we designed and implemented an additional step that standardizes source change data into natural language using GPT. This paper describes the process of extracting commit messages and source change data from Git, standardizing the source change data with GPT, and training a DistilBERT model on the result. Verification measured an accuracy of 91%. The proposed model was implemented and verified to maintain accuracy while classifying commits without depending on a specific programming language. In the future, we plan to study a classification model using Bard and a project management tool built on the proposed classification model.
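As a rough sense of what commit-type classification means, here is a toy lexicon-based classifier over maintenance categories. This is not the paper's GPT + DistilBERT pipeline; the lexicons and category names are illustrative assumptions (roughly Swanson's corrective/adaptive/perfective split).

```python
# Hypothetical keyword lexicons per maintenance type (illustrative only)
LEXICON = {
    "corrective": {"fix", "bug", "crash", "error", "patch"},
    "adaptive":   {"upgrade", "migrate", "port", "compat", "api"},
    "perfective": {"refactor", "cleanup", "optimize", "rename", "docs"},
}

def classify_commit(message):
    """Classify a commit message by counting lexicon hits per category;
    fall back to 'other' when no keyword matches."""
    tokens = set(message.lower().split())
    scores = {cat: len(tokens & words) for cat, words in LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "other"

print(classify_commit("fix crash when parsing empty diff"))   # corrective
print(classify_commit("refactor parser and cleanup tests"))   # perfective
```

The paper's contribution is replacing exactly this kind of surface-level, language-dependent signal: GPT rewrites the raw diff into natural language first, so the downstream DistilBERT classifier sees standardized text regardless of the source language.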

VKOSPI Forecasting and Option Trading Application Using SVM (SVM을 이용한 VKOSPI 일 중 변화 예측과 실제 옵션 매매에의 적용)

  • Ra, Yun Seon;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.177-192
    • /
    • 2016
  • Machine learning is a field of artificial intelligence: an area of computer science concerned with giving machines the ability to perform their own data analysis, decision making, and forecasting. One representative machine learning model is the artificial neural network, a statistical learning algorithm inspired by biological neural networks. Other machine learning models include the decision tree, naive Bayes, and SVM (support vector machine) models. Among these, we use the SVM model in this study because it is designed for the classification and regression tasks that fit our problem. The core principle of the SVM is to find a hyperplane that best separates the groups in the data space. Given labeled data from two groups, the SVM judges which group new data belongs to based on the hyperplane learned from the training set; the more meaningful data available, the better the learned model. In recent years, many financial experts have focused on machine learning, seeing the possibility of combining it with the financial field, where vast amounts of data exist. Machine learning techniques have proved powerful in describing non-stationary and chaotic stock price dynamics, and many studies have successfully forecast stock prices with machine learning algorithms. Financial companies have also begun to provide Robo-Advisor services (a compound of "robot" and "advisor") that perform various financial tasks through advanced algorithms over rapidly changing, huge amounts of data; a Robo-Advisor's main tasks are to advise investors according to their personal investment propensity and to manage portfolios automatically.
In this study, we propose a method of forecasting the Korean volatility index, VKOSPI, using the SVM model, and we apply the forecast to real option trading to improve trading performance. VKOSPI is a measure of the future volatility of the KOSPI 200 index derived from KOSPI 200 option prices, analogous to the VIX index based on S&P 500 option prices in the United States. The Korea Exchange (KRX) calculates and announces the VKOSPI index in real time. VKOSPI behaves like ordinary volatility and affects option prices: VKOSPI and option prices move in the same direction regardless of option type (call and put options at various strike prices), because rising volatility raises the probability of exercise and therefore raises both call and put premiums. Investors can track an option price's sensitivity to volatility in real time through Vega, the Black-Scholes measure of an option's sensitivity to changes in volatility. Accurate forecasting of VKOSPI movements is therefore an important factor for generating profit in option trading. In this study, we verified with real option data that accurate VKOSPI forecasts can yield substantial profits in real option trading. To the best of our knowledge, no prior study has predicted the direction of VKOSPI with machine learning and applied the prediction to actual option trading. We predicted daily VKOSPI changes with the SVM model and entered an intraday short strangle position, which profits as option prices fall, only on days when VKOSPI was expected to decline.
The results showed an average VKOSPI prediction accuracy of 57.83%, with 43.2 position entries on average, less than half the benchmark's 100; a smaller number of trades indicates greater trading efficiency. The experiment also showed that trading performance was significantly higher than the benchmark.
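The strangle position's payoff logic can be sketched as follows. The strikes and premiums are hypothetical, and the simplification that the position's value at close equals premiums minus intrinsic value ignores remaining time value; it only illustrates why the short strangle profits when volatility (and thus option prices) falls.

```python
def short_strangle_pnl(entry_call, entry_put, k_call, k_put, close_price):
    """P&L per unit of a short strangle closed at `close_price`:
    premiums received minus the options' intrinsic values at close."""
    call_intrinsic = max(close_price - k_call, 0.0)
    put_intrinsic = max(k_put - close_price, 0.0)
    return (entry_call + entry_put) - (call_intrinsic + put_intrinsic)

# Hypothetical intraday trade: sell an OTM call (K=260) and put (K=250)
# when the model predicts VKOSPI will fall; lower volatility cheapens both.
pnl_calm = short_strangle_pnl(1.2, 1.1, 260.0, 250.0, close_price=255.0)
pnl_spike = short_strangle_pnl(1.2, 1.1, 260.0, 250.0, close_price=266.0)
print(pnl_calm, pnl_spike)   # quiet day keeps the premium; a spike loses
```

This shows the asymmetry the strategy accepts: the maximum gain is capped at the collected premium, while a large adverse move can lose more, which is why entering only on predicted VKOSPI declines matters.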

The study of Defense Artificial Intelligence and Block-chain Convergence (국방분야 인공지능과 블록체인 융합방안 연구)

  • Kim, Seyong;Kwon, Hyukjin;Choi, Minwoo
    • Journal of Internet Computing and Services
    • /
    • v.21 no.2
    • /
    • pp.81-90
    • /
    • 2020
  • The purpose of this study is to examine how block-chain technology can be applied to prevent data forgery and alteration in defense applications of AI (artificial intelligence). AI predicts from big data by clustering or classifying it with various machine learning methodologies, and military powers including the U.S. have brought the technology near completion. However, even if the data processing pipeline is perfect, forgery or alteration of the underlying data could become the greatest adversarial risk factor for data-based AI, and such tampering can be all too easy through hacking. Unexpected attacks could occur if the data used by weaponized AI were hacked and manipulated by North Korea. Therefore, technology that prevents data from being forged or altered is essential for the use of AI. Block-chain is expected to solve the data forgery problem: because encrypted data is stored in a distributed manner and linked by hash functions, the data cannot be altered unless more than half of the connected computers agree, even if a single computer is hacked.
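The tamper-evidence property described above comes from chaining each record to the hash of its predecessor. A minimal sketch of a hash chain (the consensus/majority-vote layer of a real block-chain is omitted; the record contents are hypothetical):

```python
import hashlib
import json

def block_hash(block):
    """SHA-256 over the block's contents, including the previous hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def build_chain(records):
    """Link records into a hash chain; each block embeds its predecessor's hash,
    so altering any record invalidates every later link."""
    chain, prev = [], "0" * 64
    for data in records:
        block = {"data": data, "prev": prev}
        prev = block_hash(block)
        chain.append((block, prev))
    return chain

def verify(chain):
    prev = "0" * 64
    for block, stored in chain:
        if block["prev"] != prev or block_hash(block) != stored:
            return False
        prev = stored
    return True

chain = build_chain(["sensor log A", "sensor log B", "sensor log C"])
print(verify(chain))                  # intact chain verifies
chain[1][0]["data"] = "forged log"    # simulate tampering with one record
print(verify(chain))                  # alteration is detected
```

In a distributed deployment, each node holds a copy of this chain; a single hacked node's altered copy fails verification against the majority, which is the "more than half must agree" property the abstract invokes.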