• Title/Summary/Keyword: Adaptive learning


The Effects of Perceived Agile Culture of Chinese Enterprises on Job Performance: Focused on Moderating Effect of Individual Capability (중국기업의 애자일 문화인식이 직무성과에 미치는 영향: 개인역량 조절효과를 중심으로)

  • AN, Na;Choi, Su-Heyong;Kang, Hee-Kyung
    • Journal of Digital Convergence
    • /
    • v.17 no.3
    • /
    • pp.169-180
    • /
    • 2019
  • The purpose of this study is to verify the effect of perceived agile culture (empowerment, continuous learning, intensification of personal communication) on job performance (task, contextual, and adaptive) and to explore the moderating effect of individual capability (knowledge, skill). For the empirical analysis, data were collected from a convenience sample of 219 employees working at enterprises in China. Validity and reliability analyses of the variables and regression analysis were performed using SPSS 21. The results are as follows: first, a positive relationship between perceived agile culture and job performance was statistically supported. Second, individual capability partially moderated the relationship between perceived agile culture and job performance. The factors that constitute perceived agile culture can suggest research directions for the transformation into an agile organization.
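
The moderation analysis described above can be sketched as an ordinary-least-squares regression with an interaction term: a nonzero coefficient on the culture-by-capability product indicates a moderating effect. This is a minimal pure-Python illustration with synthetic data and hypothetical variable names, not the study's actual SPSS analysis.

```python
# Moderated regression sketch: y = b0 + b1*x + b2*m + b3*(x*m).
# A significant interaction coefficient b3 indicates moderation.

def solve(A, b):
    """Solve A beta = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    beta = [0.0] * n
    for r in range(n - 1, -1, -1):
        beta[r] = (M[r][n] - sum(M[r][c] * beta[c]
                                 for c in range(r + 1, n))) / M[r][r]
    return beta

def ols(X, y):
    """Ordinary least squares via the normal equations X'X beta = X'y."""
    n = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(n)] for i in range(n)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(n)]
    return solve(XtX, Xty)

# x: perceived agile culture score, m: individual capability (0/1) -- synthetic.
data = [(float(x), float(x % 2)) for x in range(10)]
X = [[1.0, x, m, x * m] for x, m in data]
y = [1.0 + 0.5 * x + 0.3 * m + 0.2 * x * m for x, m in data]
b0, b1, b2, b3 = ols(X, y)
print(round(b3, 4))  # the interaction term carries the moderation effect
```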

Image Restoration Network with Adaptive Channel Attention Modules for Combined Distortions (적응형 채널 어텐션 모듈을 활용한 복합 열화 복원 네트워크)

  • Lee, Haeyun;Cho, Sunghyun
    • Journal of the Korea Computer Graphics Society
    • /
    • v.25 no.3
    • /
    • pp.1-9
    • /
    • 2019
  • Images obtained from systems such as autonomous cars or fire-fighting robots often suffer from several types of degradation, such as noise, motion blur, and compression artifacts, due to multiple factors. It is difficult to apply image recognition to such degraded images, so image restoration is essential. However, these systems cannot identify what kind of degradation has occurred, which makes restoration difficult. In this paper, we propose a deep neural network that restores natural images from images degraded in several ways, such as noise, blur, and JPEG compression, in situations where the type of distortion is unknown. We adopt channel attention modules and skip connections, which make the network focus on information valuable for image restoration. The proposed method is simpler to train than other methods, and experimental results show that it outperforms existing state-of-the-art methods.
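
The channel attention idea mentioned above can be sketched framework-free: global average pooling gives one descriptor per channel, a gating function squashes it to (0, 1), and each channel is rescaled by its gate. This is a generic sketch with assumed sizes and a plain sigmoid gate, not the paper's exact adaptive module.

```python
import math

def channel_attention(feature_maps):
    """Rescale each channel by a sigmoid gate on its global average.

    feature_maps: list of channels, each a 2D list (H x W) of floats.
    Returns the attention-weighted feature maps.
    """
    weights = []
    for ch in feature_maps:
        avg = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))  # global avg pooling
        weights.append(1.0 / (1.0 + math.exp(-avg)))                # sigmoid gate
    return [[[w * v for v in row] for row in ch]
            for ch, w in zip(feature_maps, weights)]

# Two 2x2 channels: a strongly activated channel keeps most of its signal,
# a silent channel stays at zero.
x = [[[4.0, 4.0], [4.0, 4.0]],
     [[0.0, 0.0], [0.0, 0.0]]]
out = channel_attention(x)
```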

A Study on the Establishment of Edutech-based Vocational Education and Training Model (에듀테크 기반 평생직업능력개발 선도사업 모델 수립방안 연구)

  • Rim, Kyung-hwa;Shin, Jung-min;Kim, Ju-ri
    • Journal of Practical Engineering Education
    • /
    • v.14 no.2
    • /
    • pp.425-437
    • /
    • 2022
  • In this study, the role and function of Edutech, as well as its applications and expectations in the field of future vocational competency development, were gathered to formulate a comprehensive working definition of Edutech. Based on this redefinition, the study analyzes Edutech technology trends, examines the level of technology actually applied to education and vocational training based on written interviews with experts, and draws significant implications from the perspective of vocational training. Finally, we propose an Edutech-based vocational education and training model.

A Comparative Study on Game-Score Prediction Models Using Computational Thinking Education Game Data (컴퓨팅 사고 교육 게임 데이터를 사용한 게임 점수 예측 모델 성능 비교 연구)

  • Yang, Yeongwook
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.11
    • /
    • pp.529-534
    • /
    • 2021
  • Computational thinking is regarded as one of the most important skills required in the 21st century, and many countries have introduced and implemented computational thinking training courses. Among computational thinking education methods, educational game-based approaches increase student participation and motivation and broaden access to computational thinking. Autothinking is an educational game developed to provide computational thinking education to learners. It is an adaptive system that dynamically provides feedback to learners and automatically adjusts the difficulty according to the learner's computational thinking ability. However, because the game was designed based on rules, it cannot intelligently assess the learner's computational thinking or give feedback. In this study, game data collected through Autothinking is introduced, and game-score prediction reflecting computational thinking is performed with these data in order to increase the adaptability of the game. To this end, a comparative study was conducted on linear regression, decision tree, random forest, and support vector machine algorithms, which are the most commonly used for regression problems. The results show that linear regression achieved the best performance in predicting game scores.
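
The evaluation loop behind such a comparison can be sketched in a few lines: fit each candidate model, score all of them with the same error metric, and keep the best. To stay self-contained this sketch compares only a one-feature least-squares line against a mean baseline on made-up play-log data; the study itself used four model families on real Autothinking data.

```python
import math

def fit_linear(xs, ys):
    """Least-squares line y = a + b*x for a single feature."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def rmse(preds, ys):
    """Root mean square error between predictions and targets."""
    return math.sqrt(sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(ys))

# Hypothetical play-log feature (e.g., correct moves) vs. final game score.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [12.0, 18.0, 33.0, 41.0, 48.0, 62.0]
a, b = fit_linear(xs, ys)
models = {
    "mean baseline": [sum(ys) / len(ys)] * len(ys),
    "linear regression": [a + b * x for x in xs],
}
scores = {name: rmse(preds, ys) for name, preds in models.items()}
best = min(scores, key=scores.get)  # lowest RMSE wins the comparison
```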

Image Enhancement based on Piece-wise Linear Enhancement Curves for Improved Visibility under Sunlight (햇빛 아래에서 향상된 시인성을 위한 Piece-wise Linear Enhancement Curves 기반 영상 개선)

  • Lee, Junmin;Song, Byung Cheol
    • Journal of Broadcast Engineering
    • /
    • v.27 no.5
    • /
    • pp.812-815
    • /
    • 2022
  • Images displayed on digital devices under sunlight are generally perceived as darker than the originals, which decreases visibility. For better visibility, global luminance compensation or tone mapping adaptive to ambient lighting is required. However, existing methods have limitations in chrominance compensation and are difficult to use in the real world due to their heavy computational cost. To solve these problems, this paper proposes a piece-wise linear enhancement curves (PLECs)-based image enhancement method that improves both luminance and chrominance. The PLECs are regressed through deep learning and implemented in the form of a lookup table for real-time operation. Experimental results show that the proposed method provides better visibility than the original image at low computational cost.
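
The lookup-table trick above is what makes real-time operation cheap: a piece-wise linear curve defined by a few control points is baked into a 256-entry table once, so applying it per pixel is a single array lookup. The control points below are invented for illustration; in the paper they are regressed by a network.

```python
def build_lut(points):
    """Bake a piece-wise linear curve into a 256-entry lookup table.

    points: (input, output) control points sorted by input, covering 0..255.
    """
    lut = []
    seg = 0
    for v in range(256):
        while v > points[seg + 1][0]:   # advance to the segment containing v
            seg += 1
        (x0, y0), (x1, y1) = points[seg], points[seg + 1]
        t = (v - x0) / (x1 - x0)
        lut.append(round(y0 + t * (y1 - y0)))  # linear interpolation
    return lut

# Hypothetical brightness-boosting curve for viewing under sunlight.
curve = [(0, 0), (64, 110), (160, 210), (255, 255)]
lut = build_lut(curve)
enhanced = [lut[p] for p in [0, 32, 64, 200, 255]]  # per-pixel cost: one lookup
```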

Efficient Memory Update Module for Video Object Segmentation (동영상 물체 분할을 위한 효율적인 메모리 업데이트 모듈)

  • Jo, Junho;Cho, Nam Ik
    • Journal of Broadcast Engineering
    • /
    • v.27 no.4
    • /
    • pp.561-568
    • /
    • 2022
  • Most deep learning-based video object segmentation methods perform segmentation with past prediction information stored in an external memory. In general, the more past information is stored in the memory, the better the results, as evidence for various changes in the objects of interest accumulates. However, not all information can be stored in the memory due to hardware limitations, resulting in performance degradation. In this paper, we propose a method for storing new information in the external memory without additional memory allocation. Specifically, after calculating the attention score between the existing memory and the information to be newly stored, the new information is added to the corresponding memory entries according to each score. The method works robustly because the attention mechanism reflects object changes well without using additional memory. In addition, the update rate is adaptively determined according to the accumulated number of matches in the memory, so frequently updated samples store more information and maintain reliable information.
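
The update rule described above can be sketched as follows: softmax attention scores between the new feature and the existing memory slots decide how much of the new information each slot absorbs, and a per-slot match counter shrinks the update rate for slots that have already matched often. Slot layout and the blending rule are assumptions for illustration, not the paper's exact formulation.

```python
import math

def update_memory(memory, counts, new_feat):
    """Blend new_feat into memory slots in place, weighted by attention.

    memory: list of slot vectors; counts: accumulated match counts per slot;
    new_feat: feature vector to store without allocating a new slot.
    """
    scores = [sum(m * n for m, n in zip(slot, new_feat)) for slot in memory]
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    attn = [e / sum(exps) for e in exps]            # softmax attention scores
    for i, slot in enumerate(memory):
        rate = attn[i] / (1 + counts[i])            # adaptive update rate
        memory[i] = [(1 - rate) * s + rate * n for s, n in zip(slot, new_feat)]
        counts[i] += attn[i]                        # accumulate matches
    return memory, counts

memory = [[1.0, 0.0], [0.0, 1.0]]
counts = [0.0, 0.0]
update_memory(memory, counts, [0.9, 0.1])  # similar to slot 0, so slot 0 moves more
```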

Performance Evaluation of ResNet-based Pneumonia Detection Model with the Small Number of Layers Using Chest X-ray Images (흉부 X선 영상을 이용한 작은 층수 ResNet 기반 폐렴 진단 모델의 성능 평가)

  • Youngeun Choi;Seungwan Lee
    • Journal of radiological science and technology
    • /
    • v.46 no.4
    • /
    • pp.277-285
    • /
    • 2023
  • In this study, pneumonia identification networks with a small number of layers were constructed using chest X-ray images. The networks had similar numbers of trainable parameters, and the performance of the trained models was quantitatively evaluated across modifications of the network architecture. A total of six networks were constructed: a convolutional neural network (CNN), VGGNet, GoogleNet, a residual network (ResNet) with identity blocks, a ResNet with bottleneck blocks, and a ResNet with both identity and bottleneck blocks. Trainable parameters for the six networks were kept in a range of 273,921-294,817 by adjusting the output channels of the convolution layers. Training used the binary cross entropy (BCE) loss function, sigmoid activation, the adaptive moment estimation (Adam) optimizer, and 100 epochs. The trained models were evaluated in terms of training time, accuracy, precision, recall, specificity, and F1-score. The results showed that the trained models with a small number of layers precisely detect pneumonia from chest X-ray images. In particular, the overall quantitative performance of the ResNet-based models was above 0.9, similar or superior to that of the CNN-, VGGNet-, and GoogleNet-based models, and the choice of residual blocks affected the performance of the ResNet-based models. Therefore, this study demonstrates that detection networks with a small number of layers are suitable for detecting pneumonia from chest X-ray images, and that the ResNet-based models can be optimized by applying appropriate residual blocks.
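
The identity and bottleneck residual blocks compared above share one core idea: the block's learned transform F(x) is added to a shortcut of the input, y = ReLU(F(x) + x), so gradients flow through the skip even in deeper stacks (the bottleneck variant additionally squeezes channels before the main convolution). A framework-free sketch of the skip connection on a plain vector, with a toy transform standing in for the convolution layers:

```python
def relu(v):
    """Element-wise rectified linear unit."""
    return [max(0.0, x) for x in v]

def residual_block(x, transform):
    """Identity residual block: output = ReLU(F(x) + x)."""
    fx = transform(x)                       # the learned transform F(x)
    return relu([a + b for a, b in zip(fx, x)])  # add the shortcut, then activate

# Toy transform with hypothetical weights, standing in for conv layers.
toy_f = lambda v: [0.5 * x - 1.0 for x in v]
out = residual_block([2.0, 4.0], toy_f)  # F(x) = [0.0, 1.0], so out = [2.0, 5.0]
```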

An optimized ANFIS model for predicting pile pullout resistance

  • Yuwei Zhao;Mesut Gor;Daria K. Voronkova;Hamed Gholizadeh Touchaei;Hossein Moayedi;Binh Nguyen Le
    • Steel and Composite Structures
    • /
    • v.48 no.2
    • /
    • pp.179-190
    • /
    • 2023
  • Many recent attempts have sought accurate prediction of pile pullout resistance (Pul) using classical machine learning models. This study offers an improved methodology for this objective. An adaptive neuro-fuzzy inference system (ANFIS), a popular predictor, is trained by a capable metaheuristic strategy, namely the equilibrium optimizer (EO), to predict Pul. The data used are collected from laboratory investigations reported in previous literature. First, two optimal configurations of EO-ANFIS are selected after a sensitivity analysis. They are then evaluated and compared with classical ANFIS and two neural-based models using well-accepted accuracy indicators. The results of all five models were in good agreement with the laboratory Puls (all correlations > 0.99). However, both EO-ANFIS models not only outperformed the neural benchmarks but also achieved higher accuracy than the classical version. Therefore, utilizing the EO is recommended for optimizing this predictive tool. Furthermore, a comparison between the selected EO-ANFIS models, one of which employs a larger population, revealed that the model with a population size of 75 is more efficient than the one with 300: the root mean square error and optimization time for EO-ANFIS (75) were 19.6272 and 1,715.8 seconds, respectively, versus 23.4038 and 9,298.7 seconds for EO-ANFIS (300).

Dosimetric Evaluation of Synthetic Computed Tomography Technique on Position Variation of Air Cavity in Magnetic Resonance-Guided Radiotherapy

  • Hyeongmin Jin;Hyun Joon An;Eui Kyu Chie;Jong Min Park;Jung-in Kim
    • Progress in Medical Physics
    • /
    • v.33 no.4
    • /
    • pp.142-149
    • /
    • 2022
  • Purpose: This study compares the dosimetric parameters of the bulk electron density (ED) approach and synthetic computed tomography (CT) images with respect to position variation of the air cavity in magnetic resonance-guided radiotherapy (MRgRT) for patients with pancreatic cancer. Methods: This study included nine patients who had previously received MRgRT; their simulation CT and magnetic resonance (MR) images were collected. Air cavities were manually delineated on the simulation CT and MR images in the treatment planning system for each patient. The synthetic CT images were generated using the deep learning model trained in a prior study. Two additional plans with identical beam parameters were recalculated with ED maps that were either manually overridden by the cavities or derived from the synthetic CT. Dose calculation accuracy was explored in terms of dose-volume histogram parameters and gamma analysis. Results: The D95% averages were 48.80 Gy, 48.50 Gy, and 48.23 Gy for the original, manually assigned, and synthetic CT-based dose distributions, respectively. The greatest deviation was observed for one patient, whose D95% on the synthetic CT was 1.84 Gy higher than in the original plan. Conclusions: Variation of the air cavity position in the gastrointestinal area affects the treatment dose calculation. Synthetic CT-based ED modification would be a viable option for shortening the time-consuming process and improving MRgRT treatment accuracy.

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.163-177
    • /
    • 2019
  • As smartphones have come into wide use, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognizing an individual user's simple body movements to recognizing low-level and high-level behavior. However, HAR tasks that recognize interaction with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors such as the accelerometer, magnetic field sensor, and gyroscope are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a deep learning-based method for detecting accompanying status using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) is proposed. Accompanying status is defined as a subset of user interaction behavior, covering whether the user is accompanying an acquaintance at close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation is proposed. First, a data preprocessing method is introduced, consisting of time synchronization of the multimodal data from the different physical sensors, data normalization, and sequence data generation. Nearest interpolation is applied to synchronize the timestamps of data collected from different sensors. Normalization is performed for each x, y, and z axis value of the sensor data, and sequence data are generated with the sliding-window method. The sequence data then become the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consists of three convolutional layers and has no pooling layer, in order to preserve the temporal information of the sequence data. Next, the LSTM recurrent networks receive the feature maps and learn long-term dependencies from them; they consist of two layers, each with 128 cells. Finally, the extracted features are classified by a softmax classifier. The loss function is cross entropy, and the model weights are randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model is trained with the adaptive moment estimation (Adam) optimization algorithm with a mini-batch size of 128, and dropout is applied to the inputs of the LSTM networks to prevent overfitting. The initial learning rate is 0.001 and decays exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. With these data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network. Future research will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences, and on transfer learning methods that allow models trained on the training data to transfer to evaluation data following a different distribution. A model that exhibits robust recognition performance against data changes not considered at training time is expected.
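
The preprocessing steps described above (nearest-neighbor time synchronization across sensors, then sliding-window sequence generation) can be sketched as follows; the timestamps, values, and window sizes are illustrative, not the study's actual settings.

```python
def nearest_sync(target_ts, source_ts, source_vals):
    """Resample source samples onto target timestamps via nearest interpolation."""
    out = []
    for t in target_ts:
        i = min(range(len(source_ts)), key=lambda k: abs(source_ts[k] - t))
        out.append(source_vals[i])  # take the temporally closest sample
    return out

def sliding_windows(seq, size, step):
    """Cut a synchronized sequence into fixed-length training windows."""
    return [seq[i:i + size] for i in range(0, len(seq) - size + 1, step)]

# Hypothetical timestamps (ms): gyroscope samples are resampled onto the
# accelerometer's clock, then windowed for the CNN-LSTM input.
accel_ts = [0, 10, 20, 30, 40, 50]
gyro_ts = [2, 13, 19, 33, 41, 48]
gyro_vals = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
gyro_on_accel = nearest_sync(accel_ts, gyro_ts, gyro_vals)
windows = sliding_windows(gyro_on_accel, size=4, step=2)
```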