• Title/Summary/Keyword: e-Learning development cost

Search Results: 25 (processing time: 0.026 seconds)

Development and Evaluation of e-EBPP(Evidence-Based Practice Protocol) System for Evidence-Based Dementia Nursing Practice (근거중심 치매 간호실무를 위한 e-EBPP 시스템 개발 및 평가)

  • Park, Myonghwa
    • Korean Journal of Adult Nursing
    • /
    • v.17 no.3
    • /
    • pp.411-424
    • /
    • 2005
  • Purpose: The purpose of this study was to develop and evaluate an e-EBPP (Evidence-Based Practice Protocol) system for the nursing care of patients with dementia, to facilitate the best evidence-based decisions in dementia care settings. Method: The system was developed based on the system development life cycle and software prototyping, using the following five processes: analysis, planning, development, program operation, and final evaluation. Result: The system consisted of modules for evidence-based nursing and protocols, a guide for developing protocols, tools for saving, revising, and deleting protocols, an interface tool among users, and a tool for evaluating users' satisfaction with the system. The main page had seven menu bars: Introduction of Site, EBN Info, Dementia Info, Evidence-Based Practice Protocol, Protocol Bank, Community, and Site Link. The system was implemented with HTML, JavaScript, and Flash, and its content consisted of text content, interactive content, animation, and quizzes. Conclusion: This system can support nurses' best and most cost-effective clinical decisions using sharable, standardized protocols built on the best evidence in dementia care. In addition, it can be utilized as an e-learning program for nurses and nursing students to learn the use of evidence-based information.


Recent Trends of Hyperspectral Imaging Technology (초분광 이미징 기술동향)

  • Lee, M.S.;Kim, K.S.;Min, G.;Son, D.H.;Kim, J.E.;Kim, S.C.
    • Electronics and Telecommunications Trends
    • /
    • v.34 no.1
    • /
    • pp.86-97
    • /
    • 2019
  • Over the past 30 years, significant developments have been made in hyperspectral imaging (HSI) technologies that can provide end users with rich spectral, spatial, and temporal information. Owing to advances in miniaturization, cost reduction, real-time processing, and analytical methods, HSI technologies have a wide range of applications, from remote sensing to healthcare, the military, and the environment. In this study, we focus on the latest trends in HSI technologies, analytical methods, and their applications. In particular, improved machine learning techniques, such as deep learning, allow the full use of HSI technologies in classification, clustering, and spectral mixture algorithms. Finally, we describe the status of HSI technology development for skin diagnostics.
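The abstract above mentions classification and spectral mixture algorithms; one classic HSI baseline (not necessarily among the methods this survey covers) is the spectral angle mapper, which assigns each pixel to the reference spectrum with the smallest angular distance. A minimal sketch in plain Python, with hypothetical reference spectra:

```python
import math

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra; smaller = more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def classify_pixel(pixel, references):
    """Assign the pixel to the reference class with the smallest angle."""
    return min(references, key=lambda name: spectral_angle(pixel, references[name]))

# Illustrative 3-band spectra; real HSI data has hundreds of bands.
refs = {"vegetation": [0.1, 0.4, 0.8], "soil": [0.5, 0.5, 0.5]}
```

The angle is scale-invariant, which makes the method robust to illumination differences between pixels.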

C-language Learning Contents Supporting Web-based Compiling and Running (웹기반 컴파일과 실행을 지원하는 C언어 교육콘텐츠 개발)

  • Kim, Seong-Hyun;Kim, Young-Kuk
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2006.11a
    • /
    • pp.796-800
    • /
    • 2006
  • In this paper, we developed e-learning content for the C programming language using Linux and open-source software, rather than a commercial integrated development tool like Microsoft's Visual Studio. In most programming language courses, students study or practice the language by editing source code, then compiling and running the executable with commercial software like Visual Studio installed on each PC. This way of learning has some difficulties: the total cost of purchasing software, and the inability to use other PCs that do not have the proper software installed. To overcome this situation and enable learning anywhere, with any device, at any time, we propose a way of utilizing Linux and open-source software in a Web-based learning environment. In this environment, students can input their source code in a browser form and get the result instantly from the server.

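The server-side compile-and-run workflow described in this entry can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation; the function names, gcc flags, and timeout are assumptions:

```python
import os
import subprocess
import tempfile

def gcc_command(src_path, exe_path):
    # Hypothetical compiler invocation; the flags are illustrative.
    return ["gcc", "-O2", "-o", exe_path, src_path]

def compile_and_run(source_code, stdin_text=""):
    """Compile submitted C source with gcc and run it, returning its output."""
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "main.c")
        exe = os.path.join(tmp, "main")
        with open(src, "w") as f:
            f.write(source_code)
        build = subprocess.run(gcc_command(src, exe),
                               capture_output=True, text=True)
        if build.returncode != 0:
            return "compile error:\n" + build.stderr
        # Cap runtime so a student's infinite loop cannot hang the server.
        run = subprocess.run([exe], input=stdin_text,
                             capture_output=True, text=True, timeout=5)
        return run.stdout
```

A production service would additionally sandbox the executable (resource limits, restricted user, no network), since it runs untrusted student code.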

Development and testing of a composite system for bridge health monitoring utilising computer vision and deep learning

  • Lydon, Darragh;Taylor, S.E.;Lydon, Myra;Martinez del Rincon, Jesus;Hester, David
    • Smart Structures and Systems
    • /
    • v.24 no.6
    • /
    • pp.723-732
    • /
    • 2019
  • Globally, road transport networks are subjected to continuous stress from increasing loading and environmental effects. As the most popular means of transport in the UK, the condition of this civil infrastructure is a key indicator of economic growth and productivity. Structural Health Monitoring (SHM) systems can provide valuable insight into the true condition of our aging infrastructure. In particular, monitoring the displacement of a bridge structure under live loading can provide an accurate descriptor of bridge condition. In the past, B-WIM systems have been used to collect traffic data and hence provide an indicator of bridge condition; however, the use of such systems can be restricted by bridge type, access issues, and cost limitations. This research provides a non-contact, low-cost, AI-based solution for vehicle classification and associated bridge displacement using computer vision methods. Convolutional neural networks (CNNs) have been adapted to develop the QUBYOLO vehicle classification method from recorded traffic images. This vehicle classification was then accurately related to the corresponding bridge response obtained under live loading using non-contact methods. The successful identification of multiple vehicle types during field testing has shown that QUBYOLO is suitable for the fine-grained vehicle classification required to identify the load applied to a bridge structure. The process of displacement analysis and vehicle classification for load identification used in this research adds to the body of knowledge on the monitoring of existing bridge structures, particularly long-span bridges, and establishes the significant potential of computer vision and deep learning to provide dependable results on the real response of our infrastructure to existing and potentially increased loading.
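Non-contact displacement monitoring of the kind described above typically converts tracked pixel motion into physical units via a reference target of known size visible in the frame. A minimal sketch of that conversion, with hypothetical names and a deliberately simplified model (no lens distortion or perspective correction):

```python
def pixels_to_mm(pixel_disp, target_size_mm, target_size_px):
    """Convert a tracked pixel displacement to millimetres using a
    reference target of known physical size visible in the frame."""
    scale = target_size_mm / target_size_px  # mm per pixel
    return pixel_disp * scale

def peak_displacement(samples_px, target_size_mm, target_size_px):
    """Peak absolute displacement (mm) over a sequence of tracked offsets."""
    mm = [pixels_to_mm(p, target_size_mm, target_size_px) for p in samples_px]
    return max(abs(v) for v in mm)
```

In practice the scale factor would be calibrated per camera placement, and the peak value under a known vehicle load is what feeds into the bridge-condition assessment.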

XML Data Model and Interpreter Development for Authoring Interactive Convergence Contents based on HTML5 iframe (HTML5 iframe 기반 상호작용형 융합 콘텐츠 저작을 위한 XML 데이터 모형 및 해석기 개발)

  • Lee, Jun Jeong;Hong, June Seok;Kim, Wooju
    • The Journal of the Korea Contents Association
    • /
    • v.20 no.12
    • /
    • pp.250-265
    • /
    • 2020
  • In the N-Screen environment, HTML5 standard-based content development is inevitable. However, the development of HTML5 manipulation-type contents is still sluggish due to high development cost and insufficient infrastructure. Therefore, we propose an efficient contents development model that converges multimedia contents (such as video and audio) with HTML5 documents that can implement dynamic manipulation for user interaction. The proposed model is designed to divide the multimedia and iframe areas in the HTML5 layout page, which includes the player for integrated contents control. Interactive HTML5 documents are divided into screen units and provided through iframes. The integrated control player composed based on the HTML5
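The screen-unit-per-iframe idea in this entry can be illustrated with a small sketch: a hypothetical XML content model (the paper's actual schema is not shown here) parsed with Python's standard library and rendered as iframe tags:

```python
import xml.etree.ElementTree as ET

# Hypothetical schema: each <screen> is one interactive HTML5 document,
# shown in its own iframe while the media player covers [start, end) seconds.
SCREEN_XML = """
<content title="demo">
  <screen id="s1" src="pages/intro.html" start="0" end="30"/>
  <screen id="s2" src="pages/quiz.html" start="30" end="60"/>
</content>
"""

def screens(xml_text):
    """Parse the content model into a list of screen-attribute dicts."""
    root = ET.fromstring(xml_text)
    return [dict(s.attrib) for s in root.findall("screen")]

def iframe_tag(screen):
    """Render one screen unit as the iframe the layout page would embed."""
    return '<iframe id="{id}" src="{src}"></iframe>'.format(**screen)
```

An interpreter in the player would watch the video's current time and swap the visible iframe whenever playback crosses a screen boundary.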

Case Study of Online Education Using Virtual Training Content (가상훈련 콘텐츠를 사용한 온라인 교육의 사례 연구)

  • Huh, Jun-young;Roh, Hyelan
    • Journal of Practical Engineering Education
    • /
    • v.11 no.1
    • /
    • pp.1-8
    • /
    • 2019
  • Virtual training is an educational exercise in which the environment or situation is virtually implemented for specific training and proceeds like a real situation. In recent years, virtual reality technology has developed rapidly, and the demand for experiencing, in virtual reality, situations that cannot be directly experienced in the real world is increasing more and more. In particular, there is an increasing demand for hands-on training contents and for virtual equipment-training contents that replace high-risk, high-cost industry training. Virtual training contents have been developed and utilized for the purpose of technical training. However, virtual training is known to be more effective when used as supplementary training material or combined with e-learning contents, rather than when one training course is replaced entirely by virtual training contents, because its purpose and effects differ from a general technical training course. In this study, we explored a development method for effective utilization of the electrohydraulic servo control process, the virtual reality content developed in 2017, in combination with e-learning contents. In addition, to establish a teaching and learning strategy, we developed and operated a case study using virtual training contents. Surveys and case studies were conducted to investigate the effects of the teaching and learning strategies applied in the classroom on students and their educational usefulness.

A Study on Image Annotation Automation Process using SHAP for Defect Detection (SHAP를 이용한 이미지 어노테이션 자동화 프로세스 연구)

  • Jin Hyeong Jung;Hyun Su Sim;Yong Soo Kim
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.46 no.1
    • /
    • pp.76-83
    • /
    • 2023
  • Recently, the development of computer vision with deep learning has made object detection using images applicable to diverse fields, such as medical care, manufacturing, and transportation. The manufacturing industry is saving time and money by applying computer vision technology to detect defects or issues that may occur during the manufacturing and inspection process. Annotations of collected images and their location information are required for computer vision technology. However, manually labeling large amounts of images is time-consuming, expensive, and can vary among workers, which may affect annotation quality and cause inaccurate performance. This paper proposes a process that can automatically collect annotations and location information for images using eXplainable AI, without manual annotation. If applied to the manufacturing industry, this process is thought to save the time and cost required for image annotation collection and collect relatively high-quality annotation information.
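Turning an explanation heat map into an annotation box, as proposed above, can be sketched in a few lines. The grid values below stand in for per-pixel SHAP importance scores, and the single-threshold rule is an illustrative simplification of the paper's process:

```python
def bbox_from_importance(grid, threshold):
    """Smallest (row_min, col_min, row_max, col_max) box covering all
    cells whose importance exceeds the threshold; None if no cell does."""
    hits = [(r, c) for r, row in enumerate(grid)
                   for c, v in enumerate(row) if v > threshold]
    if not hits:
        return None
    rows = [r for r, _ in hits]
    cols = [c for _, c in hits]
    return (min(rows), min(cols), max(rows), max(cols))
```

The resulting box, together with the model's predicted class, yields the label-plus-location pair an annotation tool would otherwise collect by hand.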

How librarians really use the network for advanced service (정보봉사의 증진을 위한 사서들의 네트워크 이용연구)

  • 한복희
    • Journal of Korean Library and Information Science Society
    • /
    • v.23
    • /
    • pp.1-27
    • /
    • 1995
  • The purpose of this study is twofold: to investigate the general characteristics of networks in Korea as a new information technology, and to discuss general directions for developing use of the Internet. The study achieves this purpose by gathering and analysing data on the Internet use of librarians who work in public libraries, research and development libraries, and university libraries. The major conclusions of this study are summarized as follows. (1) The survey received detailed responses from 69 librarians, the majority (42) from research and development libraries. The majority (56) were from the Library and Information Science subject area, and half of them (37) hold advanced degrees. (2) The majority (40) have accessed the Internet for one year or less, 9 (17%) respondents for two years, and 17 (32%) spend time every day on Internet-related activity. (3) 44.9% of the respondents taught themselves, and 28.9% learned informally from a colleague. Formal training, from a single one-hour class to more structured learning, was available to 30.4%. (4) The most common reasons respondents use the Internet are to access remote database searching (73.9%), to communicate with colleagues and friends by electronic mail (52.2%), to transfer files and exchange data (36.2%), and to keep up with the current research front (23.2%). They search OPACs for a variety of traditional task-related reasons (59.4%) and to see what other libraries are doing with their automated systems (31.9%). (5) Respondents for the most part use the functions WWW (68.1%), E-Mail (59.4%), FTP (52.2%), Gopher (34.8%), and WAIS (7.2%). (6) Respondents mentioned the following advantages: access to remote log-in databases, an excellent and swift communications vehicle, reduced telecommunication cost, and saved time. (7) Respondents mentioned the following disadvantages: low speed of communication, difficulty of access to relevant information and library materials, and a shortage of databases distributed within Korea.


Financial Fraud Detection using Text Mining Analysis against Municipal Cybercriminality (지자체 사이버 공간 안전을 위한 금융사기 탐지 텍스트 마이닝 방법)

  • Choi, Sukjae;Lee, Jungwon;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.119-138
    • /
    • 2017
  • Recently, SNS has become an important channel for marketing as well as personal communication. However, cybercrime has also evolved with the development of information and communication technology, and illegal advertising is distributed on SNS in large quantities. As a result, personal information is lost and even monetary damages occur more frequently. In this study, we propose a method to analyze which sentences and documents sent to SNS are related to financial fraud. First of all, as a conceptual framework, we developed a matrix of the conceptual characteristics of cybercriminality on SNS and emergency management. We also suggested an emergency management process consisting of pre-cybercriminality steps (e.g., risk identification) and post-cybercriminality steps; among these, we focused on risk identification in this paper. The main process consists of data collection, preprocessing, and analysis. First, we selected two words, 'daechul (loan)' and 'sachae (private loan)', as seed words and collected data containing these words from SNS such as Twitter. The collected data were given to two researchers to decide whether they are related to cybercriminality, particularly financial fraud, or not. We then selected some of them as keywords if the vocabularies are related to nominals and symbols. With the selected keywords, we searched and collected data from web materials such as Twitter, news, and blogs, and more than 820,000 articles were collected. The collected articles were refined through preprocessing and made into learning data. The preprocessing process is divided into a morphological analysis step, a stop-word removal step, and a valid part-of-speech selection step. In the morphological analysis step, a complex sentence is transformed into morpheme units to enable mechanical analysis. In the stop-word removal step, non-lexical elements such as numbers, punctuation marks, and double spaces are removed from the text. In the valid part-of-speech selection step, only nouns and symbols are considered: since nouns refer to things, they express the intent of a message better than other parts of speech, and the more illegal a text is, the more frequently symbols are used. To turn the selected data into learning data, each item must be classified as legitimate or not, so each selected item was labeled 'legal' or 'illegal'. The processed data were then converted into a corpus and a document-term matrix. Finally, the 'legal' and 'illegal' files were mixed and randomly divided into a learning data set and a test data set; we set the learning data at 70% and the test data at 30%. SVM was used as the discrimination algorithm. Since SVM requires gamma and cost values as its main parameters, we set gamma to 0.5 and cost to 10 based on the optimal value function; the cost is set higher than in general cases. To show the feasibility of the proposed idea, we compared the proposed method with MLE (Maximum Likelihood Estimation), term frequency, and collective intelligence methods, using overall accuracy as the metric. As a result, the overall accuracy of the proposed method was 92.41% for illegal loan advertisements and 77.75% for illegal door-to-door sales, apparently superior to that of term frequency, MLE, etc. Hence, the results suggest that the proposed method is valid and practically usable. In this paper, we propose a framework for managing crises caused by abnormalities in unstructured data sources such as SNS. We hope this study will contribute to academia by identifying what to consider when applying an SVM-like discrimination algorithm to text analysis, and to practitioners in the fields of brand management and opinion mining.
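The preprocessing and document-term-matrix steps described in this entry can be sketched in plain Python. The tokenizer, stop-word list, and vocabulary handling below are illustrative simplifications of the pipeline (the study itself works on Korean morphemes, not whitespace-split English words):

```python
import re
from collections import Counter

STOP = {"the", "a", "is"}  # illustrative stop-word list

def tokenize(text):
    """Lowercase word tokens with stop words removed."""
    return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOP]

def document_term_matrix(docs):
    """Rows = documents, columns = sorted vocabulary, values = term counts."""
    counts = [Counter(tokenize(d)) for d in docs]
    vocab = sorted(set().union(*counts))
    return vocab, [[c[t] for t in vocab] for c in counts]
```

Each row of the matrix, paired with its 'legal'/'illegal' label, is the kind of feature vector an SVM classifier would then be trained on.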

Big Data Meets Telcos: A Proactive Caching Perspective

  • Bastug, Ejder;Bennis, Mehdi;Zeydan, Engin;Kader, Manhal Abdel;Karatepe, Ilyas Alper;Er, Ahmet Salih;Debbah, Merouane
    • Journal of Communications and Networks
    • /
    • v.17 no.6
    • /
    • pp.549-557
    • /
    • 2015
  • Mobile cellular networks are becoming increasingly complex to manage, while classical deployment/optimization techniques and current solutions (i.e., cell densification, acquiring more spectrum, etc.) are cost-ineffective and thus seen as stopgaps. This calls for the development of novel approaches that leverage recent advances in storage/memory, context-awareness, and edge/cloud computing, and falls into the framework of big data. However, big data is itself yet another complex phenomenon to handle and comes with its notorious 4Vs: velocity, veracity, volume, and variety. In this work, we address these issues in the optimization of 5G wireless networks via the notion of proactive caching at the base stations. In particular, we investigate the gains of proactive caching in terms of backhaul offloading and request satisfaction, while tackling the large amount of data available for content popularity estimation. In order to estimate content popularity, we first collect users' mobile traffic data from several base stations of a Turkish telecom operator over hourly time intervals. Then, an analysis is carried out locally on a big data platform, and the gains of proactive caching at the base stations are investigated via numerical simulations. It turns out that several gains are possible depending on the level of available information and storage size. For instance, with 10% of content ratings and 15.4 Gbyte of storage size (87% of the total catalog size), proactive caching achieves 100% request satisfaction and offloads 98% of the backhaul when considering 16 base stations.
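The proactive-caching mechanism described above can be illustrated with a toy model: pre-load the storage with the contents most requested in past traffic, then measure the hit ratio on new requests. The names are hypothetical, and the single hit-ratio metric stands in for both request satisfaction and backhaul offload, which the paper measures separately via its popularity estimator:

```python
from collections import Counter

def proactive_cache(history, capacity):
    """Cache the `capacity` most requested contents seen in past traffic."""
    return {c for c, _ in Counter(history).most_common(capacity)}

def hit_ratio(requests, cache):
    """Fraction of new requests served from the cache; in this simplified
    model, every cache hit is a request satisfied without using backhaul."""
    hits = sum(1 for r in requests if r in cache)
    return hits / len(requests)
```

With skewed (e.g., Zipf-like) popularity, even a small cache captures a large share of requests, which is the effect behind the offload figures quoted in the abstract.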