• Title/Summary/Keyword: Web-Based Training

iOS-based Fitness Management System utilizing OpenWrt Server (OpenWrt 서버를 활용한 iOS기반 피트니스 케어 시스템)

  • Kye, Min-seok;Min, Joon-Ki;Yang, Seung-Eui;Park, Sang-No;Jung, Hoe-kung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2014.10a
    • /
    • pp.687-689
    • /
    • 2014
  • With the development of smartphone-based health promotion and health care technology, the paradigm is shifting from treatment to management, and the amount of time users devote to fitness is increasing. However, at a fitness club, users who want proper exercise guidance must pay the high cost of a personal trainer, and operating such a management service places a heavy cost burden on the facility. In this paper, a server is configured on a Fonera device running the scalable OpenWrt platform, and an iOS client collects the user's movement through mobile sensors together with the user's exercise input history. This data is sent to the web, where a trainer reviews the received weight-training records and gives the user feedback.
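
The abstract describes the data flow only at a high level: an iOS client collects sensor readings and user input and posts them to a lightweight OpenWrt-hosted server, where a trainer reviews the records and returns feedback. The minimal Python sketch below illustrates what such a record-collecting server endpoint could look like; the endpoint behaviour and record fields are illustrative assumptions, not the authors' actual design.

```python
# Hypothetical sketch of a minimal workout-record server of the kind the
# abstract describes running on an OpenWrt-based device. The API shape and
# record fields are illustrative assumptions, not the paper's actual design.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

WORKOUTS = []  # in-memory store; a real deployment would persist to a database


class WorkoutHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The mobile client posts one workout record (sensor data + user input) as JSON.
        length = int(self.headers.get("Content-Length", 0))
        WORKOUTS.append(json.loads(self.rfile.read(length)))
        self.send_response(201)
        self.end_headers()

    def do_GET(self):
        # A trainer fetches the accumulated records to review and give feedback.
        body = json.dumps(WORKOUTS).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), WorkoutHandler).serve_forever()
```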

Multi-type object detection-based de-identification technique for personal information protection (개인정보보호를 위한 다중 유형 객체 탐지 기반 비식별화 기법)

  • Ye-Seul Kil;Hyo-Jin Lee;Jung-Hwa Ryu;Il-Gu Lee
    • Convergence Security Journal
    • /
    • v.22 no.5
    • /
    • pp.11-20
    • /
    • 2022
  • As the Internet and web technology develop around mobile devices, image data contains various types of sensitive information such as people, text, and space. In addition, as the use of SNS increases, the amount of damage caused by the exposure and abuse of personal information online is increasing. However, research on de-identification technology based on multi-type object detection for personal information protection is insufficient. Therefore, this paper proposes an artificial intelligence model that detects and de-identifies multiple types of objects by running existing single-type object detection models in parallel. Through CutMix, images in which person and text objects exist together were created and used as training data, and detection and de-identification were performed on person and text objects, which have different characteristics. The proposed model achieves a precision of 0.724 and an mAP@.5 of 0.745 when the two object types are present at the same time. After de-identification, mAP@.5 was 0.224 for all objects, a decrease of 0.4 or more.
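
The abstract states that CutMix was used to compose training images containing both person and text objects, but gives no implementation detail. As a minimal sketch under that assumption, a CutMix-style composition could look like the following; the patch size, image shapes, and label handling are hypothetical.

```python
# Hypothetical sketch of CutMix-style composition as described in the abstract:
# a patch cut from a text-containing image is pasted into a person image so that
# both object types appear in one training sample. Sizes and box handling are
# illustrative assumptions.
import numpy as np

def cutmix(person_img: np.ndarray, text_img: np.ndarray, rng=np.random) -> np.ndarray:
    """Paste a random rectangular region of text_img onto person_img."""
    h, w, _ = person_img.shape
    cut_h, cut_w = h // 2, w // 2                      # size of the pasted patch
    y = rng.randint(0, h - cut_h)
    x = rng.randint(0, w - cut_w)
    mixed = person_img.copy()
    mixed[y:y + cut_h, x:x + cut_w] = text_img[y:y + cut_h, x:x + cut_w]
    return mixed

# Example with dummy images; real use would also merge the bounding-box labels
# of both source images so each detector sees its own object type.
person = np.zeros((256, 256, 3), dtype=np.uint8)
text = np.full((256, 256, 3), 255, dtype=np.uint8)
sample = cutmix(person, text)
```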

Design and construction method of an employment support management system for college students - A case study (대학생들을 위한 취업지원관리시스템의 설계 및 구축안-사례 연구)

  • Kim, Jae-Saeng;Kim, Kyung-Hun;Kyung, Tae-Won
    • Journal of Digital Convergence
    • /
    • v.12 no.11
    • /
    • pp.329-338
    • /
    • 2014
  • Today, one of the important factors that determine a university's rating is its employment rate. Jobseekers use online or offline recruiting services to obtain the job information they want. There are many employment support systems, such as web-based employment agencies and university job centers, but they focus more on providing job information than on managing employment support. In addition, they are insufficient for supporting the business processes associated with MOU companies, industry field training, mentoring, etc., and for continuously managing and updating the information (resume, personal statement, etc.) about students that companies want and the information about companies that jobseekers want. Therefore, an employment support system is required that not only stores the initial data (student and corporate information) but also assists with career placement. In this paper, after reviewing the employment support management features of existing systems, we present the design and construction of a system that can deliver real-time job information to smartphones and is linked to the university's Bachelor Information System.

Expanded Workflow Development for OSINT(Open Source Intelligence)-based Profiling with Timeline (공개정보 기반 타임라인 프로파일링을 위한 확장된 워크플로우 개발)

  • Kwon, Heewon;Jin, Seoyoung;Sim, Minsun;Kwon, Hyemin;Lee, Insoo;Lee, Seunghoon;Kim, Myuhngjoo
    • Journal of Digital Convergence
    • /
    • v.19 no.3
    • /
    • pp.187-194
    • /
    • 2021
  • OSINT (Open Source Intelligence), which is rapidly increasing on the surface web in various forms, can also be used for criminal investigations through profiling. This technique has become quite common in foreign investigative agencies such as those of the United States. In Korea, on the other hand, it is not widely used, and there is a large variation in the quantity and quality of the information acquired depending on the experience and knowledge of the investigator. Unlike Bazzell's well-known model, we designed a Korean-style OSINT-based profiling technique that considers the Korean web environment and provides timeline information, focusing on an improved workflow. A database schema to improve the efficiency of profiling is also presented. Using this, we can obtain search results that guarantee a certain level of quantity and quality, and it can also be used as a standard training course. To increase the effectiveness and efficiency of criminal investigations using this technique, it is necessary to strengthen the legal basis and to introduce automation technologies.
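
The abstract mentions that a database schema is presented to make timeline profiling more efficient, but the schema itself is not reproduced here. Purely as a hypothetical illustration of what a timeline-oriented profiling schema could look like (table and column names are assumptions, not the paper's design), consider the following sketch.

```python
# Purely hypothetical sketch of a timeline-oriented profiling schema; the paper
# presents its own schema, which is not reproduced in the abstract.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE subject (
    subject_id   INTEGER PRIMARY KEY,
    name         TEXT
);
CREATE TABLE source (
    source_id    INTEGER PRIMARY KEY,
    url          TEXT,
    collected_at TEXT            -- when the open-source item was collected
);
CREATE TABLE timeline_event (
    event_id     INTEGER PRIMARY KEY,
    subject_id   INTEGER REFERENCES subject(subject_id),
    source_id    INTEGER REFERENCES source(source_id),
    event_time   TEXT,           -- when the event itself occurred
    description  TEXT
);
""")

# Events for a subject can then be listed in chronological order:
rows = conn.execute(
    "SELECT event_time, description FROM timeline_event "
    "WHERE subject_id = ? ORDER BY event_time", (1,)
).fetchall()
print(rows)  # [] until events are inserted
```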

Implement of Web-based Remote Monitoring System of Smart Greenhouse (스마트 온실 통합 모니터링 시스템 구축)

  • Dong Eok, Kim;Nou Bog, Park;Sun Jung, Hong;Dong Hyeon, Kang;Young Hoe, Woo;Jong Won, Lee;Yul Kyun, Ahn;Shin Hee, Han
    • Journal of Practical Agriculture & Fisheries Research
    • /
    • v.24 no.4
    • /
    • pp.53-61
    • /
    • 2022
  • Growing agricultural products in greenhouses, where suitable climatic and root-zone conditions can be created and controlled, has been an important research and application subject. Appropriate environmental conditions in the greenhouse are necessary for optimum plant growth and improved crop yields. This study aimed to establish a web-based remote monitoring system that monitors the crop growth environment and crop status in real time by applying IT technology to greenhouses, connecting greenhouse equipment such as temperature sensors, soil sensors, crop sensors, and cameras. The measured items were air temperature, relative humidity, solar radiation, CO2 concentration, EC and pH of the nutrient solution, medium temperature, EC of the medium, water content of the medium, leaf temperature, sap flow, stem diameter, fruit diameter, etc. The developed greenhouse monitoring system was composed of the network system, the data-collecting device with sensors, and cameras. The remote monitoring system was implemented in a server/client environment. Information on the greenhouse environment and crops is stored in a database, from which items on growth and environment can be extracted, compared, and analyzed. An integrated monitoring system for smart greenhouses could therefore be used in practice to understand the environment and crop growth for smart greenhouse management.
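
The abstract describes the monitoring architecture only at a high level (sensors feeding a data collector, a database, and a server/client web view). The Python sketch below illustrates one minimal way the sensor-to-database step could look; the table layout and field names are assumptions for illustration, not the system's actual design.

```python
# Hypothetical sketch of the data-collection step described in the abstract:
# periodic sensor readings stored in a database so a web client can query them.
# The schema and reading fields are illustrative assumptions.
import sqlite3
import time

conn = sqlite3.connect("greenhouse.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS reading (
    measured_at   TEXT,     -- timestamp of the measurement
    air_temp_c    REAL,     -- air temperature (deg C)
    humidity_pct  REAL,     -- relative humidity (%)
    co2_ppm       REAL,     -- CO2 concentration (ppm)
    medium_ec     REAL      -- EC of the growing medium (dS/m)
)""")

def store_reading(air_temp_c, humidity_pct, co2_ppm, medium_ec):
    """Insert one set of sensor values; a collector would call this on a timer."""
    conn.execute(
        "INSERT INTO reading VALUES (?, ?, ?, ?, ?)",
        (time.strftime("%Y-%m-%d %H:%M:%S"), air_temp_c, humidity_pct, co2_ppm, medium_ec),
    )
    conn.commit()

def latest_readings(limit=10):
    """The web-server side would read back recent rows for display and analysis."""
    return conn.execute(
        "SELECT * FROM reading ORDER BY measured_at DESC LIMIT ?", (limit,)
    ).fetchall()

store_reading(24.3, 65.0, 410.0, 2.1)   # example values
print(latest_readings())
```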

A Proposal of Educational 3D Modelling Software Development Type Via User Experience Analysis of Open Source 3D Modelling Software (무료공개 3D모델링 소프트웨어 사용자 경험 분석을 통한 교육용 3D모델링 소프트웨어 개발유형 제안)

  • Lee, Guk-Hee;Cho, Jaekyung
    • Science of Emotion and Sensibility
    • /
    • v.20 no.2
    • /
    • pp.87-102
    • /
    • 2017
  • With increasing interest in 3D printing, interest in the 3D modelling training that must precede it is also increasing. However, most existing 3D modelling software is developed overseas, so its interfaces are in English, which constrains 3D modelling training for Korean novices who are unfamiliar with these terms. This study explores what should be considered when developing a Korean educational 3D modelling software for 3D printing under these circumstances. To this end, we had novices with no experience in 3D modelling perform a house-building task using either 123D Design or Tinker CAD, and then conducted a survey. The results showed that more users favored Tinker CAD over 123D Design, that fewer errors occurred while working with Tinker CAD than with 123D Design, and that the ratio of people who completed the task was higher with Tinker CAD than with 123D Design. In the general discussion, we propose the development of introductory-level educational 3D modelling software that takes advantage of the characteristics of Tinker CAD (easy modelling with three-dimensional primitives) and a web-based approach, as well as beginner/intermediate-level educational 3D modelling software that takes advantage of the characteristics of 123D Design (finer measurement manipulation and figure alignment) and a Windows-based approach.

Evaluation of Continuing Education Program to Enhance Competency for Hospice Volunteers: An Exploratory Mixed-Methods Design (호스피스 자원봉사자 역량강화를 위한 지속교육의 효과: 혼합연구방법의 적용)

  • Seo, Minjeong;Cho, Han-A;Han, Sang Mi;Ko, Youngshim;Gil, Cho-Rong
    • Journal of Hospice and Palliative Care
    • /
    • v.22 no.4
    • /
    • pp.185-197
    • /
    • 2019
  • Purpose: Hospice volunteers serve an invisible yet pivotal role in the hospice and palliative care team. This study investigated how effectively a continuing education program could enhance hospice volunteers' competency. Methods: A total of 20 hours (four hours per week) of training was provided to 30 hospice volunteers who participated in the continuing education for hospice volunteers. The effectiveness of the education was analyzed with an exploratory mixed-methods design. For the quantitative analysis, the volunteers were asked, before and after the training, about their attitudes towards hospice care, what makes a meaningful life, self-efficacy, and satisfaction with their volunteer service. Descriptive statistics, paired t-tests, and the Wilcoxon signed-rank test were performed using SPSS for Windows 20.0. For the qualitative research, participants were placed in three groups for focus group interviews, and the data were analyzed by content analysis. Results: The quantitative results show that this training can significantly affect hospice volunteers' attitudes and improve their self-efficacy. The qualitative results show that participants wanted to receive continuous education on the physical/psychosocial/spiritual aspects to better serve end-of-life patients and their family members, even though they have to set aside significant time for the volunteer service. They wanted to know how to take good care of patients without getting injured themselves and how to provide spiritual care. Conclusion: A continuing education program reflecting volunteers' requests is strongly needed to improve their competency. Effective continuing education requires ongoing training and support in the areas in which hospice volunteers are interested. A good alternative is to combine web-based and hands-on training, thereby allowing hospice volunteers to freely take the training that suits their interests.
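
The abstract names the statistical procedures (paired t-tests and the Wilcoxon signed-rank test on pre/post scores) but not the code. As a minimal sketch, assuming hypothetical pre/post score arrays rather than the study's actual data, the same comparison could be run in Python as follows.

```python
# Minimal sketch of the pre/post comparison described in the abstract, using
# made-up example scores; the study's actual data are not reproduced here.
import numpy as np
from scipy import stats

pre = np.array([3.1, 3.4, 2.9, 3.8, 3.2, 3.6])   # hypothetical pre-training scores
post = np.array([3.6, 4.0, 3.1, 4.2, 3.9, 3.9])  # hypothetical post-training scores

t_stat, t_p = stats.ttest_rel(pre, post)          # paired t-test
w_stat, w_p = stats.wilcoxon(pre, post)           # Wilcoxon signed-rank test

print(f"paired t-test: t={t_stat:.3f}, p={t_p:.3f}")
print(f"Wilcoxon signed-rank: W={w_stat:.3f}, p={w_p:.3f}")
```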

Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung;Kim, Mintae;Kim, Wooju;Shin, Dongwook;Lee, Yong Hun
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.111-136
    • /
    • 2018
  • In this paper, we propose a methodology to extract answer information for queries from various types of unstructured documents collected from multiple sources on the web, in order to expand a knowledge base. The proposed methodology is divided into the following steps. 1) Collect relevant documents from Wikipedia, Naver encyclopedia, and Naver news sources for "subject-predicate" separated queries and classify the proper documents. 2) Determine whether each sentence is suitable for extracting information and derive its confidence. 3) Based on the predicate feature, extract the information from the suitable sentences and derive the overall confidence of the information extraction result. To evaluate the performance of the information extraction system, we selected 400 queries from SK Telecom's artificial intelligence speaker. Compared with the baseline model, the proposed system shows a higher performance index. The contribution of this study is that we develop a sequence tagging model based on a bi-directional LSTM-CRF using the predicate feature of the query; with this, we obtain a robust model that can maintain high recall performance even on the various types of unstructured documents collected from multiple sources. The problem of information extraction for knowledge base expansion should take into account the heterogeneous characteristics of source-specific document types, and the proposed methodology proved to extract information effectively from various types of unstructured documents compared to the baseline model. Previous research has the limitation that performance is poor when extracting information from document types different from the training data. In addition, this study can prevent unnecessary information extraction attempts on documents that do not include the answer information, through the process of predicting the suitability of documents and sentences for information extraction before the extraction step. It is meaningful that we provide a method by which precision performance can be maintained even in an actual web environment. The information extraction problem for knowledge base expansion has the characteristic that it cannot be guaranteed whether a document includes the correct answer, because it targets unstructured documents on the real web. When question answering is performed on the real web, previous machine reading comprehension studies have the limitation of low precision, because they frequently attempt to extract an answer even from documents that contain no correct answer. The policy of predicting the suitability of documents and sentences for information extraction is meaningful in that it contributes to maintaining extraction performance even in a real web environment. The limitations of this study and future research directions are as follows. First, there is a problem related to data preprocessing. In this study, the unit of knowledge extraction is determined through morphological analysis based on the open-source KoNLPy Python package, and the information extraction result can be degraded when morphological analysis is not performed properly. To enhance the performance of information extraction, it is necessary to develop a more advanced morpheme analyzer. Second, there is the problem of entity ambiguity. The information extraction system of this study cannot distinguish between different entities with the same name; if several people with the same name appear in the news, the system may not extract information about the intended query. Future research needs to take measures to identify the person with the same name. Third, there is the problem of the evaluation query data. In this study, we selected 400 user queries collected from SK Telecom's interactive artificial intelligence speaker to evaluate the performance of the information extraction system, and we developed an evaluation data set using 800 documents (400 questions x 7 articles per question: 1 Wikipedia, 3 Naver encyclopedia, 3 Naver news) by judging whether a correct answer is included or not. To ensure the external validity of the study, it is desirable to use more queries to determine the performance of the system; this is a costly activity that must be done manually. Future research needs to evaluate the system for more queries. It is also necessary to develop a Korean benchmark data set for information extraction systems answering queries from multi-source web documents, to build an environment in which results can be evaluated more objectively.
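
The abstract describes a bi-directional LSTM-CRF sequence tagger that uses the query predicate as a feature, but gives no implementation. The sketch below shows only the bi-directional LSTM backbone of such a tagger in PyTorch, with the CRF decoding layer left as a comment; all dimensions and the tag set are hypothetical assumptions, not the authors' configuration.

```python
# Hypothetical sketch of the BiLSTM backbone of a sequence tagger like the one
# the abstract describes; the paper's model also adds a CRF layer on top of the
# per-token scores, which is omitted here. All sizes and the tag set are assumed.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256, num_tags=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.to_tags = nn.Linear(2 * hidden_dim, num_tags)  # e.g. B/I/O answer tags

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer ids; in the paper the predicate of
        # the query is also encoded and used as an additional feature.
        h, _ = self.lstm(self.embed(token_ids))
        return self.to_tags(h)   # (batch, seq_len, num_tags) emission scores;
                                 # a CRF layer would decode these into a tag path

# Toy forward pass with dummy token ids:
model = BiLSTMTagger()
scores = model(torch.randint(0, 10000, (2, 20)))
print(scores.shape)  # torch.Size([2, 20, 3])
```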

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.1-25
    • /
    • 2020
  • In this paper, we suggest an application system architecture that provides an accurate, fast, and efficient automatic gasometer reading function. The system captures a gasometer image using a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount by selective optical character recognition based on deep learning technology. In general, an image contains many types of characters, and optical character recognition technology extracts all character information in the image, but some applications need to ignore character types that are not of interest and focus only on specific ones. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images to send bills to users; character strings that are not of interest, such as device type, manufacturer, manufacturing date, specification, etc., are not valuable information to the application. Thus, the application has to analyze only the regions of interest and the specific character types to extract valuable information. We adopted CNN (Convolutional Neural Network)-based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the regions of interest for selective character information extraction. We built three neural networks for the application system. The first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID character strings, the second is another convolutional neural network that transforms the spatial information of a region of interest into sequential feature vectors, and the third is a bi-directional long short-term memory network that converts the sequential features into character strings through time-series analysis. In this research, the character strings of interest are the device ID and the gas usage amount: the device ID consists of 12 Arabic numerals and the gas usage amount consists of 4~5 Arabic numerals. All system components are implemented in the Amazon Web Services cloud with an Intel Xeon E5-2686 v4 CPU and an NVIDIA Tesla V100 GPU. The system architecture adopts a master-slave processing structure for efficient and fast parallel processing, coping with about 700,000 requests per day. The mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes the reading request from the mobile device to an input queue with a FIFO (First In First Out) structure. The slave process consists of the three deep neural networks that perform the character recognition and runs on the NVIDIA GPU module. The slave process continuously polls the input queue for recognition requests. When a request from the master process is in the input queue, the slave process converts the image into the device ID string, the gas usage amount string, and the position information of the strings, returns this information to the output queue, and switches back to polling the input queue. The master process gets the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three deep neural networks: 22,985 images were used for training and validation, and 4,135 images were used for testing. We randomly split the 22,985 images with an 8:2 ratio into training and validation sets for each training epoch. The 4,135 test images were categorized into 5 types (normal, noise, reflex, scale, and slant): normal data are clean images, noise means images with a noise signal, reflex means images with light reflection in the gasometer region, scale means images with a small object size due to long-distance capturing, and slant means images that are not horizontally flat. The final character string recognition accuracies for the device ID and gas usage amount on normal data are 0.960 and 0.864, respectively.
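
The master-slave queue flow described above can be illustrated with a small Python sketch; the queue contents and the recognition step here are hypothetical stand-ins, since the actual system runs the three neural networks on a GPU slave.

```python
# Hypothetical sketch of the master-slave FIFO queue flow described above.
# The recognition step is a stand-in for the CNN/CRNN pipeline on the GPU slave.
import queue
import threading

input_q = queue.Queue()    # FIFO: master pushes reading requests here
output_q = queue.Queue()   # slave returns recognized strings here

def slave_worker():
    # The slave polls the input queue, "recognizes" each image, and returns results.
    while True:
        image = input_q.get()            # blocks until a request arrives
        if image is None:                # shutdown signal
            break
        result = {"device_id": "000000000000", "usage": "0000"}  # placeholder output
        output_q.put((image, result))
        input_q.task_done()

threading.Thread(target=slave_worker, daemon=True).start()

# Master side: push a request from a mobile device, then collect the result.
input_q.put("gasometer_photo_001.jpg")
print(output_q.get())
input_q.put(None)  # stop the worker
```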

Pre-Evaluation for Prediction Accuracy by Using the Customer's Ratings in Collaborative Filtering (협업필터링에서 고객의 평가치를 이용한 선호도 예측의 사전평가에 관한 연구)

  • Lee, Seok-Jun;Kim, Sun-Ok
    • Asia Pacific Journal of Information Systems
    • /
    • v.17 no.4
    • /
    • pp.187-206
    • /
    • 2007
  • The development of computer and information technology, combined with the internet infrastructure, has spread information widely, not only in specialized fields but also in people's daily lives. This ubiquity of information influences the traditional way of conducting transactions and leads to a new form of E-commerce distinguished from the existing one. Not only physical goods but also non-physical services enter E-commerce, and its scale keeps growing, which makes it hard for people to find the information they want. Recommender systems are now becoming the main tools for E-commerce to mitigate this information overload. Recommender systems can be defined as systems that suggest items (goods or services) by considering customers' interests or tastes. They are used by E-commerce web sites to suggest products to customers who are looking for something and to provide information that helps them decide what to purchase. There are several approaches to recommending goods to customers in recommender systems, but this study focuses on the collaborative filtering technique. This study presents the possibility of pre-evaluating the prediction performance for customers' preferences in collaborative filtering before the preference prediction process. Before the prediction process, customers who are expected to show low prediction performance are classified by using the statistical features of the ratings given by each customer. In this study, the MovieLens 100K dataset is used to analyze the accuracy of this classification. The classification criteria are set using the training set, which comprises 80% of the 100K dataset. In the classification process, customers are divided into two groups, a classified group and a non-classified group. To compare the prediction performance of the classified and non-classified groups, the prediction process runs the 20% test set through the Neighborhood-Based Collaborative Filtering algorithm and the Correspondence Mean algorithm. The prediction errors from these algorithms are allocated to each customer and compared. Research hypotheses: two research hypotheses are formulated in this study to test the accuracy of the classification criteria, as follows. Hypothesis 1: The estimation accuracy of groups classified according to the standard deviation of each user's ratings differs significantly. To test Hypothesis 1, the standard deviation is calculated for each user in the training set, which is the 80% portion of the MovieLens 100K dataset. Four groups are formed according to the quartiles of the users' standard deviations, and the estimation errors of each group obtained from the test set are compared to test whether they differ significantly. Hypothesis 2: The estimation accuracy of groups classified according to the distribution of each user's ratings differs significantly. To test Hypothesis 2, the distribution of each user's ratings is compared with the distribution of the ratings of all customers in the training set. It is assumed that customers whose rating distributions differ from that of all customers would show low performance, so six types of distributions are set up for comparison. The test groups are classified into a fit group or a non-fit group according to each assumed distribution type. The agreement between each assumed distribution type and each customer's rating distribution is tested with the ${\chi}^2$ goodness-of-fit test, and customers are classified into two groups in order to test the difference in mean errors. The goodness-of-fit between each user's rating distribution and the average rating distribution in the training set is also closely related to the prediction errors from the prediction algorithms. Through this study, the customers whose prediction performance is lower than that of the rest are classified, before the prediction process, by these two criteria, which are set from the statistical features of customers' ratings in the training set.
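
As a rough Python sketch of the two classification criteria described above (quartile groups of each user's rating standard deviation, and a chi-square goodness-of-fit test of each user's rating distribution against the overall distribution), using a tiny made-up ratings table; the exact grouping rules in the paper may differ.

```python
# Rough sketch of the two pre-evaluation criteria described in the abstract,
# using a tiny made-up ratings table; the paper's exact grouping rules may differ.
import numpy as np
from scipy import stats

# (user_id, item_id, rating) triples, ratings on a 1-5 scale
ratings = np.array([
    (1, 10, 4), (1, 11, 5), (1, 12, 4), (1, 13, 3),
    (2, 10, 1), (2, 11, 2), (2, 12, 5), (2, 13, 3),
    (3, 10, 3), (3, 11, 4), (3, 12, 4), (3, 13, 2),
])

users = np.unique(ratings[:, 0])

# Criterion 1: quartile group of each user's rating standard deviation
user_std = {u: ratings[ratings[:, 0] == u, 2].std() for u in users}
quartiles = np.percentile(list(user_std.values()), [25, 50, 75])
user_group = {u: int(np.searchsorted(quartiles, s)) for u, s in user_std.items()}

# Criterion 2: chi-square goodness-of-fit of each user's rating histogram
# against the overall rating distribution in the training set
overall = np.bincount(ratings[:, 2], minlength=6)[1:]   # counts for ratings 1..5
for u in users:
    counts = np.bincount(ratings[ratings[:, 0] == u, 2], minlength=6)[1:]
    expected = overall / overall.sum() * counts.sum()
    chi2, p = stats.chisquare(counts, f_exp=expected)
    print(f"user {u}: std-quartile group {user_group[u]}, chi2 p-value {p:.3f}")
```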