• Title/Summary/Keyword: computer-based training

Search results: 1,303

A Comparison of Image Classification System for Building Waste Data based on Deep Learning (딥러닝기반 건축폐기물 이미지 분류 시스템 비교)

  • Jae-Kyung Sung;Mincheol Yang;Kyungnam Moon;Yong-Guk Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.3
    • /
    • pp.199-206
    • /
    • 2023
  • This study applies deep learning algorithms to automatically classify construction waste into three categories: wood waste, plastic waste, and concrete waste. Two models were compared: VGG-16, a convolutional neural network for image classification, and ViT (Vision Transformer), an NLP-derived model that processes an image as a sequence of patches. Construction waste images were collected by crawling search engines worldwide, and 3,000 images (1,000 per category) were retained after excluding duplicates and images that were difficult to distinguish with the naked eye and would interfere with the experiment. In addition, to improve model accuracy, data augmentation expanded the training set to a total of 30,000 images. Despite the unstructured nature of the collected data, VGG-16 achieved an accuracy of 91.5% and ViT an accuracy of 92.7%, suggesting practical applicability to real construction waste data management. If object detection or semantic segmentation techniques are built on this study, more precise classification will be possible even within a single image, resulting in more accurate waste sorting.
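The abstract does not give the training details; a minimal sketch of one plausible setup is shown below, assuming torchvision's pretrained VGG-16 and ViT-B/16, an ImageFolder dataset with the three waste classes, and augmentation and hyperparameter choices that are illustrative rather than the authors' exact configuration.

```python
# Sketch: fine-tune VGG-16 and ViT-B/16 on a 3-class construction-waste dataset.
# Assumes an ImageFolder layout (wood/, plastic/, concrete/); all settings are illustrative.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

train_tf = transforms.Compose([               # augmentation roughly in the spirit of the paper
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.2, 0.2, 0.2),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("waste_images/train", transform=train_tf)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True, num_workers=4)

def build(name: str, num_classes: int = 3) -> nn.Module:
    """Swap the ImageNet classification head for a 3-class head."""
    if name == "vgg16":
        m = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        m.classifier[6] = nn.Linear(m.classifier[6].in_features, num_classes)
    else:  # "vit"
        m = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
        m.heads.head = nn.Linear(m.heads.head.in_features, num_classes)
    return m

device = "cuda" if torch.cuda.is_available() else "cpu"
for name in ("vgg16", "vit"):
    model = build(name).to(device)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for epoch in range(5):                    # epoch count is arbitrary here
        for x, y in train_dl:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
```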

Improving the Performance of Deep-Learning-Based Ground-Penetrating Radar Cavity Detection Model using Data Augmentation and Ensemble Techniques (데이터 증강 및 앙상블 기법을 이용한 딥러닝 기반 GPR 공동 탐지 모델 성능 향상 연구)

  • Yonguk Choi;Sangjin Seo;Hangilro Jang;Daeung Yoon
    • Geophysics and Geophysical Exploration
    • /
    • v.26 no.4
    • /
    • pp.211-228
    • /
    • 2023
  • Ground-penetrating radar (GPR) surveys, a nondestructive geophysical method, are commonly used to monitor embankments. GPR survey results can be complex depending on site conditions, and data processing and interpretation rely on expert experience, which can lead to false detections; the process is also time-intensive. Consequently, various studies have attempted to detect cavities in GPR survey data using deep learning methods. Deep-learning-based approaches require abundant training data, but GPR field survey data are often scarce because of cost and other factors constraining field studies. Therefore, in this study, a deep-learning-based model for cavity detection in embankment GPR surveys was developed using data augmentation strategies. A dataset was constructed from survey data collected over several years on the same embankment. A You Only Look Once (YOLO) model, commonly used for object detection in computer vision, was employed. By comparing and analyzing various strategies, the optimal data augmentation approach was determined. After initial model development, a stepwise process of box clustering, transfer learning, self-ensemble, and model ensemble was applied to enhance the final model performance. Evaluation results demonstrate the model's effectiveness in detecting cavities in embankment GPR survey data.
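The abstract does not detail its augmentation or training configuration; the sketch below illustrates the general idea under stated assumptions, pairing simple B-scan-style augmentations with the Ultralytics YOLO training API. Paths, augmentation operations, and hyperparameters are placeholders, not the authors' setup.

```python
# Sketch: simple augmentations for GPR B-scan images, plus YOLO detector training.
# Paths, augmentation choices, and hyperparameters are illustrative assumptions.
import numpy as np

def augment_bscan(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Horizontal flip, amplitude scaling, and additive noise on a single B-scan."""
    out = img.astype(np.float32)
    if rng.random() < 0.5:
        out = out[:, ::-1]                                # flip along the trace axis
        # (any cavity bounding-box labels must be flipped to match)
    out *= rng.uniform(0.8, 1.2)                          # global amplitude scaling
    out += rng.normal(0.0, 0.02 * out.std(), out.shape)   # mild additive noise
    return out

# Detector training with the Ultralytics API; the dataset YAML points at the augmented images.
from ultralytics import YOLO
model = YOLO("yolov8n.pt")                                # pretrained weights, i.e. transfer learning
model.train(data="gpr_cavities.yaml", epochs=100, imgsz=640)
```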

Analysis and Study for Appropriate Deep Neural Network Structures and Self-Supervised Learning-based Brain Signal Data Representation Methods (딥 뉴럴 네트워크의 적절한 구조 및 자가-지도 학습 방법에 따른 뇌신호 데이터 표현 기술 분석 및 고찰)

  • Won-Jun Ko
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.19 no.1
    • /
    • pp.137-142
    • /
    • 2024
  • Recently, deep learning has become the de facto standard for medical data representation. However, deep learning inherently requires a large amount of training data, which poses a challenge for its direct application in the medical field, where acquiring large-scale data is not straightforward. Brain signal modalities suffer from this problem in particular owing to their high variability. Research has therefore focused on designing deep neural network structures capable of effectively extracting the spectro-spatio-temporal characteristics of brain signals, or on employing self-supervised learning methods to pre-learn their neurophysiological features. This paper analyzes methodologies used to handle small-scale data in emerging fields such as brain-computer interfaces and brain-signal-based state prediction, and presents future directions for these technologies. First, it examines deep neural network structures for representing brain signals; it then analyzes self-supervised learning methodologies aimed at efficiently learning brain signal characteristics. Finally, it discusses key insights and future directions for deep-learning-based brain signal analysis.
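The abstract does not name a specific self-supervised objective; as one concrete illustration, the sketch below shows masked-segment reconstruction on multichannel EEG, a common pretext task in this area. The architecture, shapes, and masking scheme are illustrative assumptions.

```python
# Sketch: masked-segment reconstruction as a self-supervised pretext task for EEG.
# Shapes and architecture are illustrative; real pipelines differ per paper.
import torch
import torch.nn as nn

class EEGMaskedAutoencoder(nn.Module):
    def __init__(self, n_channels: int = 32, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(                     # temporal convolutional encoder
            nn.Conv1d(n_channels, hidden, kernel_size=7, padding=3), nn.GELU(),
            nn.Conv1d(hidden, hidden, kernel_size=7, padding=3), nn.GELU(),
        )
        self.decoder = nn.Conv1d(hidden, n_channels, kernel_size=7, padding=3)

    def forward(self, x, mask):
        # x: (batch, channels, time); mask: boolean (batch, 1, time), True = hidden from encoder
        z = self.encoder(x * (~mask))
        return self.decoder(z)

x = torch.randn(8, 32, 512)                               # 8 unlabeled EEG segments
mask = torch.rand(8, 1, 512) < 0.3                        # mask ~30% of time points
model = EEGMaskedAutoencoder()
recon = model(x, mask)
loss = ((recon - x) ** 2 * mask).sum() / mask.sum()       # reconstruct only the masked region
loss.backward()
```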

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.1-25
    • /
    • 2020
  • In this paper, we propose an application system architecture that provides accurate, fast, and efficient automatic gasometer reading. The system captures a gasometer image with a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount using selective optical character recognition based on deep learning. In general, an image contains many types of characters, and conventional optical character recognition extracts all of them, but some applications need to ignore character types that are not of interest and focus only on specific ones. Automatic gasometer reading, for example, only needs to extract the device ID and gas usage amount from gasometer images in order to bill users; strings such as the device type, manufacturer, manufacturing date, and specifications are not valuable to the application. The application therefore has to analyze only the region of interest and the specific character types needed to extract valuable information. We adopted CNN (Convolutional Neural Network)-based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the region of interest for selective character extraction. We built three neural networks for the application system: the first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID character strings; the second is another convolutional neural network that transforms the spatial information of a region of interest into sequential feature vectors; and the third is a bidirectional long short-term memory (LSTM) network that converts the sequential features into character strings through time-series mapping from feature vectors to characters. In this research, the character strings of interest are the device ID, consisting of 12 Arabic numerals, and the gas usage amount, consisting of 4-5 Arabic numerals. All system components are implemented in the Amazon Web Services (AWS) cloud with Intel Xeon E5-2686 v4 CPUs and an NVIDIA Tesla V100 GPU. The system architecture adopts a master-slave processing structure for efficient, fast parallel processing, coping with about 700,000 requests per day. The mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes each reading request into a FIFO (First In, First Out) input queue. The slave process consists of the three deep neural networks that perform character recognition and runs on the NVIDIA GPU module. The slave process continuously polls the input queue for recognition requests; when a request arrives, it converts the queued image into the device ID string, the gas usage amount string, and the position information of the strings, returns this information to the output queue, and switches back to idle mode to poll the input queue. The master process takes the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three deep neural networks: 22,985 images for training and validation and 4,135 images for testing. The 22,985 images were randomly split 8:2 into training and validation sets for each training epoch. The 4,135 test images were categorized into five types: normal (clean images), noise (images with noise), reflex (images with light reflection on the gasometer), scale (images with small objects due to long-distance capture), and slant (images that are not horizontally level). The final character string recognition accuracies for the device ID and gas usage amount on normal data are 0.960 and 0.864, respectively.
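A minimal sketch of the CRNN stage described above, a convolutional feature extractor feeding a bidirectional LSTM over column features of a cropped region of interest; layer sizes, the digit-only alphabet, and CTC training are illustrative assumptions rather than the authors' exact design.

```python
# Sketch: CRNN for reading a digit string from a cropped region of interest.
# Architecture sizes and CTC decoding are illustrative assumptions.
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, n_classes: int = 11):              # 10 digits + 1 CTC blank
        super().__init__()
        self.cnn = nn.Sequential(                          # collapse height, keep width as time axis
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),               # -> (B, 256, 1, W')
        )
        self.rnn = nn.LSTM(256, 128, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(256, n_classes)

    def forward(self, x):                                  # x: (B, 1, H, W) grayscale crop
        f = self.cnn(x).squeeze(2).permute(0, 2, 1)        # (B, W', 256) column-feature sequence
        out, _ = self.rnn(f)                               # (B, W', 256)
        return self.fc(out).log_softmax(-1)                # per-timestep class log-probabilities

model = CRNN()
crop = torch.randn(4, 1, 32, 160)                          # 4 detected ROI crops
log_probs = model(crop).permute(1, 0, 2)                   # (T, B, C) as expected by CTCLoss
targets = torch.randint(1, 11, (4, 12))                    # e.g. 12-digit device IDs (labels 1..10)
input_lengths = torch.full((4,), log_probs.size(0), dtype=torch.long)
target_lengths = torch.full((4,), 12, dtype=torch.long)
loss = nn.CTCLoss(blank=0)(log_probs, targets, input_lengths, target_lengths)
loss.backward()
```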

The Influence of Online Social Networking on Individual Virtual Competence and Task Performance in Organizations (온라인 네트워킹 활동이 가상협업 역량 및 업무성과에 미치는 영향)

  • Suh, A-Young;Shin, Kyung-Shik
    • Asia pacific journal of information systems
    • /
    • v.22 no.2
    • /
    • pp.39-69
    • /
    • 2012
  • With the advent of communication technologies, including electronic collaborative tools and conferencing systems provided over the Internet, virtual collaboration is becoming increasingly common in organizations. Virtual collaboration refers to an environment in which the people working together are interdependent in their tasks, share responsibility for outcomes, are geographically dispersed, and rely on mediated rather than face-to-face communication to produce an outcome. Research suggests that new sets of individual skills, knowledge, and abilities (SKAs) are required to perform effectively in today's virtualized workplace, labeled individual virtual competence. It is also argued that use of online social networking sites may influence not only individuals' daily lives but also their capability to manage work-related relationships in organizations, which in turn leads to better performance. The existing research regarding (1) the relationship between virtual competence and task performance and (2) the relationship between online networking and task performance has been conducted from different theoretical perspectives, so little is known about how online social networking and virtual competence interplay to predict individuals' task performance. To fill this gap, this study raises the following research questions: (1) What individual virtual competence is required for better adjustment to the virtual collaboration environment? (2) How does online networking via diverse social network service sites influence individuals' task performance in organizations? (3) How do the joint effects of individual virtual competence and online networking influence task performance? To address these questions, we first draw on the prior literature and derive four dimensions of individual virtual competence related to an individual's self-concept, knowledge, and ability. Computer self-efficacy is defined as the extent to which an individual believes in his or her ability to use computer technology broadly. Remote-work self-efficacy is defined as the extent to which an individual believes in his or her ability to work and perform joint tasks with others in virtual settings. Virtual media skill is defined as the degree of confidence of individuals to function in their work role without face-to-face interactions. Virtual social skill is an individual's skill level in using technologies to communicate in virtual settings to their full potential. It should be noted that the concept of virtual social skill differs from self-efficacy: it captures an individual's cognition-based ability to build social relationships with others in virtual settings. Next, we discuss how online networking influences both individual virtual competence and task performance based on social network theory and social learning theory. We argue that online networking may enhance individuals' capability to expand their social networks at low cost. We also argue that online networking may enable individuals to learn the necessary skills regarding how to use technological functions, communicate with others, and share information and build social relationships using the functions provided by electronic media, consequently increasing individual virtual competence. To examine the relationships among online networking, virtual competence, and task performance, we developed research models (the mediation, interaction, and additive models, respectively) by integrating social network theory and social learning theory. Using data from 112 employees of a virtualized company, we tested the proposed models. The results of the analysis partly support the mediation model in that online social networking positively influences individuals' computer self-efficacy, virtual social skill, and virtual media skill, which are key predictors of individuals' task performance. Furthermore, the results partly support the interaction model in that the level of remote-work self-efficacy moderates the relationship between online social networking and task performance. The results paint a picture of people adjusting to virtual collaboration in ways that both constrain and enable their task performance. This study contributes to research and practice. First, we suggest a shift of research focus to the individual level when examining virtual phenomena and theorize that online social networking can enhance individual virtual competence in some respects. Second, we replicate and advance the prior competence literature by linking each component of virtual competence to objective task performance. The results provide useful insights into how those with human resource responsibilities can assess employees' weaknesses and strengths when organizing virtual groups or projects. Furthermore, the study provides managers with insights into the kinds of development or training programs they can pursue with their employees to advance their ability to undertake virtual work.


True Orthoimage Generation from LiDAR Intensity Using Deep Learning (딥러닝에 의한 라이다 반사강도로부터 엄밀정사영상 생성)

  • Shin, Young Ha;Hyung, Sung Woong;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.38 no.4
    • /
    • pp.363-373
    • /
    • 2020
  • Over the last decades, numerous studies on orthoimage generation have been carried out. Traditional methods require exterior orientation parameters of aerial images, precise 3D object modeling data, and a DTM (Digital Terrain Model) to detect and recover occluded areas; furthermore, automating this complicated process is a challenging task. In this paper, we propose a new concept for true orthoimage generation using DL (Deep Learning). DL is rapidly being adopted across a wide range of fields. In particular, the GAN (Generative Adversarial Network) is a DL model used for various tasks in image processing and computer vision: the generator tries to produce results similar to real images, while the discriminator judges whether images are fake or real, and this mutually adversarial mechanism improves the quality of the results. Experiments were performed with the GAN-based Pix2Pix model using IR (infrared) orthoimages and intensity from LiDAR data provided by the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) through the ISPRS (International Society for Photogrammetry and Remote Sensing). Two approaches were implemented: (1) one-step training with intensity data and high-resolution orthoimages, and (2) recursive training with intensity data and color-coded low-resolution intensity images for progressive enhancement of the results. The two methods provided similar quality based on FID (Fréchet Inception Distance) measures; however, if the quality of the input data is close to the target image, better results could be obtained by increasing the number of epochs. This paper is an early experimental study on the feasibility of DL-based true orthoimage generation, and further improvement will be necessary.
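A minimal sketch of the Pix2Pix-style objective the abstract relies on: a conditional adversarial loss plus an L1 reconstruction term between generated and target orthoimages. The tiny networks below stand in for the real U-Net generator and PatchGAN discriminator; the L1 weight follows the original Pix2Pix paper, and the other details are illustrative assumptions.

```python
# Sketch: one Pix2Pix-style training step mapping LiDAR intensity images to orthoimages.
# Tiny stand-in networks; shapes, data, and most hyperparameters are illustrative.
import torch
import torch.nn as nn

G = nn.Sequential(                                   # stand-in generator: intensity -> RGB ortho
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
)
D = nn.Sequential(                                   # stand-in PatchGAN-style discriminator
    nn.Conv2d(1 + 3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1),        # patch-wise real/fake scores
)
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

intensity = torch.rand(4, 1, 256, 256)               # LiDAR intensity input (placeholder batch)
ortho = torch.rand(4, 3, 256, 256) * 2 - 1           # target orthoimage, scaled to [-1, 1] like Tanh output

# Discriminator step: real (intensity, ortho) pairs vs. fake (intensity, G(intensity)) pairs.
fake = G(intensity)
d_real = D(torch.cat([intensity, ortho], dim=1))
d_fake = D(torch.cat([intensity, fake.detach()], dim=1))
loss_D = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_D.zero_grad(); loss_D.backward(); opt_D.step()

# Generator step: fool the discriminator and stay close to the target in L1 (weight 100 as in Pix2Pix).
d_fake = D(torch.cat([intensity, fake], dim=1))
loss_G = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, ortho)
opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```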

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • The Convolutional Neural Network (ConvNet) is one class of powerful deep neural network that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s, but at that time neural networks were not widely used in industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, which revived interest in neural networks. The success of the Convolutional Neural Network rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, gathering a large-scale dataset to train a ConvNet is difficult and requires much effort; moreover, even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be addressed by transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (for instance, trained on ImageNet) computes feed-forward activations of the image and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus only on using multiple ConvNet layers as a fixed feature extractor. However, applying high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation can be obtained by finding the optimal combination of multiple layers. Based on this observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single-layer representation. Our primary pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and activation features are extracted from its three fully connected layers. Second, the activation features of the three layers are concatenated, since this captures more information about the image; the resulting representation has 9,192 (4,096 + 4,096 + 1,000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy, so in the third step we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately and the performance of transfer learning is improved.
To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against single-layer representations, using PCA for feature selection and dimensionality reduction. Our experiments demonstrate the importance of feature selection for the multiple ConvNet layer representation. Moreover, the proposed approach achieved 75.6% accuracy compared to 73.9% for the FC7 layer on Caltech-256, 73.1% compared to 69.2% for the FC8 layer on VOC07, and 52.2% compared to 48.7% for the FC7 layer on SUN397. We also show that our approach achieves superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively, compared to existing work.
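A minimal sketch of the pipeline described above, assuming torchvision's pretrained AlexNet with forward hooks on its three fully connected layers, scikit-learn PCA, and a linear SVM as the classifier; the PCA component count, classifier choice, and placeholder data are illustrative assumptions, not the authors' configuration.

```python
# Sketch: concatenate AlexNet FC6/FC7/FC8 activations, reduce with PCA, train a linear classifier.
# Hook placement, PCA component count, and the SVM classifier are illustrative assumptions.
import numpy as np
import torch
from torchvision import models
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()
activations = {}

def hook(name):
    def fn(module, inputs, output):
        activations[name] = output.detach()
    return fn

# classifier indices in torchvision's AlexNet: 1 = FC6, 4 = FC7, 6 = FC8
alexnet.classifier[1].register_forward_hook(hook("fc6"))
alexnet.classifier[4].register_forward_hook(hook("fc7"))
alexnet.classifier[6].register_forward_hook(hook("fc8"))

def extract(batch: torch.Tensor) -> np.ndarray:
    """batch: (N, 3, 224, 224) preprocessed images -> (N, 9192) concatenated features."""
    with torch.no_grad():
        alexnet(batch)
    feats = torch.cat([activations["fc6"], activations["fc7"], activations["fc8"]], dim=1)
    return feats.numpy()

# Placeholder tensors standing in for a real target dataset (e.g. Caltech-256 crops).
X_train = extract(torch.randn(64, 3, 224, 224))
y_train = np.random.randint(0, 10, 64)

pca = PCA(n_components=50)                       # keep only the most salient directions
clf = LinearSVC().fit(pca.fit_transform(X_train), y_train)
X_test = extract(torch.randn(8, 3, 224, 224))
pred = clf.predict(pca.transform(X_test))
```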

Education Needs for Home Care Nurse (가정간호 교육요구도 조사 연구)

  • Kim Cho-Ja;Kang Kyu-Sook;Baek Hee-Chon
    • Journal of Korean Academy of Fundamentals of Nursing
    • /
    • v.6 no.2
    • /
    • pp.228-239
    • /
    • 1999
  • In 1990, Home Care Education Programs started when legislation established certification for home care nurses. The Ministry of Health and Welfare proposed a home care education curriculum of 352 class hours, including 248 hours of 'family nursing and practice'. Though Home Care Education Programs have been offered at 11 home care educational institutes, there has been no formal revision of the programs. First and second home care demonstration projects have also been carried out, but there has been no research on the outcomes of home care education as applied in home care practice. The purposes of this study were to identify the content areas of home care nursing perceived as important by home care nurses, to identify their clinical competence in each of these areas, and from these to identify education needs. The sample was 107 home care nurses working in home care demonstration hospitals and community-based institutions that have been offering home care services. Responses were received from 88 nurses, an 82.2% return rate, and 86 were included in the final analysis. The instrument used was a modification of the instrument developed by Caie-Lawrence et al. (1995) and Moon's (1991) instrument on home care knowledge; its Cronbach's alpha was 0.982. Among the respondents, 64% were working at home care demonstration hospitals and 36% at community-based institutions. Their home care experience ranged from one month to six years, with a mean of 20.6 months. The importance rating for home care education content was 3.42 ± 0.325, a relatively high rating. Technical aspects of home care were identified as the most important; five items ('education skill', 'counseling skill', 'interview skill', 'wound care skill', and 'bed sore care skill') received 100% importance ratings. The competency rating was 2.87 ± 0.367, with 'technical aspects of home care' the highest and 'application to home care skill' the lowest. Home care nurses' education needs were identified by comparing the importance ratings and competency ratings. Eleven items were identified as highest in importance and eleven items as lowest in competency. Items with high importance ratings matched with low competency ratings would determine training needs, but there were no such matching items in this study. Four items in the lowest-competency areas were excluded because they are not applicable in current home care practice; therefore, a total of eighteen items were identified as home care education needs. These items are 'bed sore care skill', 'malpractice', 'wound care skill', 'general infection control', 'change and management of tracheostomy tubes', 'CVA patient care', 'hospice care', 'pain management', 'urinary catheterization and management', 'L-tube insertion and management', 'respirator use and management skill', 'infant care', 'prevention of burnout', 'child assessment', 'CAPD', 'infant assessment', 'computer literacy', and 'psychiatry patient care'.


A Study on Model Development for SW Human Resources Development using Supply Chain Management Model (SCM 모델을 이용한 SW인력양성 모형개발 연구)

  • Lee, Jung-Mann;Om, Ki-Yong;Song, Chan-Hoo;Kim, Kwan-Young
    • Journal of Korea Technology Innovation Society
    • /
    • v.10 no.1
    • /
    • pp.22-46
    • /
    • 2007
  • This article introduces a recent innovation in Korea's human resources development policy in the SW sector. Facing serious problems in cultivating SW engineers, such as a mismatch between the supply and demand of SW workers, a shortage of globally competitive SW professionals, and insufficient education and training of university graduates, the Korean government decided to adopt a new paradigm for national SW engineering education based on supply chain management (SCM) in manufacturing. SCM has been a major component of corporate competitive strategy, enhancing organizational productivity and responsiveness in a highly competitive environment. It emphasizes improving the competitiveness of the supply chain as a whole via long-term commitment to supply chain relationships and a cooperative, integrated approach to business processes. These characteristics of SCM are believed to provide insight into more effective IT education and university-industry collaboration. On the basis of the SCM literature, a framework for industry-oriented SW human resources development is designed and then applied to the case of nurturing computer-software engineers in Korea. This approach is expected to furnish valuable implications not only to Korean policy makers but also to other countries making similar efforts to enhance the effectiveness and flexibility of human resources development. The SCM-based SW HRD model is the first attempt to apply SCM to the SW HRD field. The model is divided into three kinds of primary activities and two kinds of supportive activities along the value chain, such as the SW HRD Council, SW demand and supply planning, and the integration of SW engineering capabilities, which contribute to reducing the skill and job mismatch through SW HR demand and supply collaboration.


Frequency Recognition in SSVEP-based BCI systems With a Combination of CCA and PSDA (CCA와 PSDA를 결합한 SSVEP 기반 BCI 시스템의 주파수 인식 기법)

  • Lee, Ju-Yeong;Lee, Yu-Ri;Kim, Hyoung-Nam
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.10
    • /
    • pp.139-147
    • /
    • 2015
  • The steady-state visual evoked potential (SSVEP) has been actively studied because of its short training time, relatively high signal-to-noise ratio, and high information transfer rate. There are two popular analysis methods for SSVEP signals: power spectral density analysis (PSDA) and canonical correlation analysis (CCA). However, PSDA is known to be vulnerable to noise because it uses a single channel. Although conventional CCA is more accurate than PSDA, it may not be appropriate for a real-time SSVEP-based BCI system with a short time window (TW) because it uses pure sinusoidal signals as references. Therefore, neither method is efficient for a real-time BCI system that requires both a short TW and high recognition accuracy. To overcome this limitation of the conventional methods, this paper proposes a frequency recognition method that combines CCA and PSDA, using the difference between the powers of the canonical variables obtained from CCA. Experimental results show that the combination of CCA and PSDA outperforms CCA alone when the TW is short.
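A minimal sketch of standard CCA-based SSVEP frequency recognition (canonical correlation between multichannel EEG and sinusoidal reference signals at each candidate frequency). The paper's combination with PSDA on the canonical variables is not reproduced here; channel count, harmonics, sampling rate, and candidate frequencies are illustrative assumptions.

```python
# Sketch: CCA-based frequency recognition for a single SSVEP trial.
# Channel count, harmonics, sampling rate, and candidate frequencies are illustrative.
import numpy as np
from sklearn.cross_decomposition import CCA

def reference_signals(freq: float, fs: float, n_samples: int, n_harmonics: int = 2) -> np.ndarray:
    """Sine/cosine references at the stimulus frequency and its harmonics: (n_samples, 2*n_harmonics)."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.stack(refs, axis=1)

def recognize(eeg: np.ndarray, fs: float, candidate_freqs) -> float:
    """eeg: (n_samples, n_channels). Returns the candidate frequency with the largest canonical correlation."""
    scores = []
    for f in candidate_freqs:
        refs = reference_signals(f, fs, eeg.shape[0])
        cca = CCA(n_components=1)
        u, v = cca.fit_transform(eeg, refs)
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return candidate_freqs[int(np.argmax(scores))]

# Toy example: synthetic 8-channel EEG with a 10 Hz component plus noise, 1-second window at 250 Hz.
fs, n = 250.0, 250
t = np.arange(n) / fs
eeg = 0.5 * np.sin(2 * np.pi * 10.0 * t)[:, None] + np.random.randn(n, 8)
print(recognize(eeg, fs, [8.0, 10.0, 12.0, 15.0]))    # expected to report 10.0 most of the time
```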