• Title/Summary/Keyword: V-Learning


Sex Education, Sex-related Knowledge, Sex-related Attitude of 6th-Grade Elementary School Students (초등학교 6학년 학생들의 성교육과 성지식, 성태도)

  • Oh, Seung-Mi;Kim, Hyun-Li
    • Journal of the Korean Society of School Health
    • /
    • v.23 no.2
    • /
    • pp.228-236
    • /
    • 2010
  • Purpose: This research was conducted to compare the sex-related knowledge and attitudes of 6th-grade elementary school students taught with a field-based learning method and those taught with a cooperative learning method. Methods: The data were collected from June to July 2009. The subjects were recruited from conveniently assigned 6th-grade classes at D elementary school in Daejeon metropolitan city. A total of 60 students were assigned to the field-based learning group and another 60 students to the cooperative learning group. The field-based learning group received sex education at the Daejeon Youth Sexuality Culture Center for 3 hours, while the cooperative learning group received sex education by the cooperative learning method in the classroom for 40 minutes per session, once a week, for 3 weeks. The sex-related knowledge and attitude scales developed by Lee (2004) were used. The data were analyzed by $\chi^2$-test, Fisher's exact test, and t-test using the SPSS/WIN V. 12.0 program. Results: 1. Sex-related knowledge was not significantly different between the cooperative learning and field-based learning groups. 2. Sex-related attitude was not significantly different between the cooperative learning and field-based learning groups. Conclusion: The sex-related knowledge and attitudes of the cooperative learning and field-based learning groups differed from those of the lecture-method groups in the earlier study. It is worth noting that the cooperative learning and field-based learning groups took relatively less time to improve their knowledge and attitudes than the earlier lecture-based group did.
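The comparison above relies on a χ²-test, Fisher's exact test, and an independent t-test run in SPSS. Purely as an illustration of those tests (not the authors' analysis), a minimal Python/SciPy sketch follows; the scores and the contingency table are hypothetical.

```python
# Illustrative sketch (not the authors' code): comparing two independent groups
# on a knowledge/attitude score with a t-test and a categorical item with a
# chi-square test, as described in the Methods. All values are hypothetical.
import numpy as np
from scipy import stats

# Hypothetical scale scores for the two groups (n = 60 each in the study)
field_based = np.random.default_rng(0).normal(18.2, 3.1, 60)
cooperative = np.random.default_rng(1).normal(17.8, 3.4, 60)

# Independent two-sample t-test on knowledge scores
t_stat, p_value = stats.ttest_ind(field_based, cooperative)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# Chi-square test on a hypothetical 2x2 contingency table (e.g., gender by group)
table = np.array([[32, 28],
                  [30, 30]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")
```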

The Study on the Relationship of Learning Style, Professor Image, and Academic Achievement in Cosmetology Majoring College Students (미용계열 대학생들의 학습양식, 교수 이미지, 학업성취도의 관계에 관한 연구)

  • An, Hyeonkyeong
    • Journal of Fashion Business
    • /
    • v.16 no.5
    • /
    • pp.178-191
    • /
    • 2012
  • This paper examines the relationship among learning style, professor image, and academic achievement in college students majoring in cosmetology, in order to find effective education methods for them. The research methods were a survey of 400 persons and statistical analyses such as frequency, factor, and regression analysis using SPSS V.14. The results are as follows: 1. Learning styles were divided into (1) shirker, (2) participative, (3) stand-alone, (4) dependent, (5) cooperative, and (6) competitive, and professor images were divided into (1) professor ability and (2) professor relationship. 2. There is a relationship between learning styles and professor images: the cooperative, participative, and dependent styles valued professor ability while the shirker style devalued it, and the cooperative, stand-alone, dependent, and competitive styles valued professor relationship while the shirker style devalued it. 3. There is a relationship between learning styles and academic achievement: the participative, stand-alone, and dependent styles achieved high grades, and the shirker and cooperative styles achieved low ones. 4. There is no significant relationship between professor images and the students' academic achievement. 5. In conclusion, learning style, professor image, and academic achievement are related in cosmetology majoring college students, so the shirker style needs a continuous motivation program, the participative style a personal record management system and attractive class factors, the dependent style a creativity-motivating program, and the stand-alone style a learner-centered program.
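The abstract above describes frequency, factor, and regression analysis in SPSS V.14. The sketch below illustrates only the regression step in Python; the variable names and survey values are hypothetical and are not the authors' data.

```python
# Illustrative sketch only: an OLS regression of academic achievement on
# learning-style scores, roughly mirroring the regression analysis described
# in the abstract. Column names and data are hypothetical.
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey data: one row per respondent
df = pd.DataFrame({
    "participative": [3.2, 4.1, 2.8, 3.9, 3.5, 2.6],
    "stand_alone":   [2.9, 3.5, 3.1, 4.0, 3.3, 2.7],
    "shirker":       [4.0, 1.9, 3.6, 2.2, 2.8, 4.2],
    "achievement":   [2.8, 3.9, 2.5, 3.7, 3.4, 2.3],
})

X = sm.add_constant(df[["participative", "stand_alone", "shirker"]])
model = sm.OLS(df["achievement"], X).fit()
print(model.summary())
```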

Predicting Brain Tumor Using Transfer Learning

  • Mustafa Abdul Salam;Sanaa Taha;Sameh Alahmady;Alwan Mohamed
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.5
    • /
    • pp.73-88
    • /
    • 2023
  • Brain tumors are abnormal collections or accumulations of cells in the brain that can be life-threatening due to their ability to invade and metastasize to nearby tissues. Accurate diagnosis is critical to the success of treatment planning, and magnetic resonance imaging is the primary imaging method used to diagnose brain tumors and their extent. Deep learning methods for computer vision applications have shown significant improvements in recent years, primarily because large amounts of data are available to train models, so improvements in model architecture yield better approximations in the supervised setting. Tumor classification using these deep learning techniques has made great strides thanks to reliable, annotated open data sets, which reduce computational effort and allow models to learn specific spatial and temporal relationships. This paper describes transfer learning models such as the MobileNet, VGG19, InceptionResNetV2, Inception, and DenseNet201 models, each trained with three different optimizers: Adam, SGD, and RMSprop. Finally, the pre-trained MobileNet with the RMSprop optimizer is the best model in this paper, with 0.995 accuracy, 0.99 sensitivity, and 1.00 specificity, while at the same time having the lowest computational cost.
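As an illustration of the transfer learning setup named above (a pre-trained MobileNet fine-tuned with the RMSprop optimizer), a minimal Keras sketch follows. The class count, image size, and learning rate are assumptions, not values from the paper.

```python
# A minimal sketch (not the paper's code) of transfer learning with a
# pre-trained MobileNet and the RMSprop optimizer, as described above.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 2          # e.g., tumor vs. no tumor (assumption)
IMG_SIZE = (224, 224)

# Load MobileNet pre-trained on ImageNet, without its classification head
base = tf.keras.applications.MobileNet(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False   # freeze the pre-trained backbone

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds would be tf.data.Dataset objects built from the MRI images
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```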

Android Malware Detection using Machine Learning Techniques KNN-SVM, DBN and GRU

  • Sk Heena Kauser;V.Maria Anu
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.7
    • /
    • pp.202-209
    • /
    • 2023
  • Android malware is on the rise because of the rising interest in the Android operating system. Machine learning models may be used to classify unknown Android malware using characteristics gathered from the dynamic and static analysis of Android applications. Anti-virus software simply searches a specific program for the signatures of known virus instances while scanning; competing anti-virus products keep these signatures in large databases and examine each file for all existing virus and malware signatures. The proposed model aims to provide a machine learning method that addresses the inability of signature-based Android malware detection to detect new malware apps, and to improve phone users' security and privacy. This system tracks numerous permission-based characteristics and events collected from Android apps and analyzes them using a classifier model to determine whether the program is goodware or malware. This method used the machine learning techniques KNN-SVM, DBN, and GRU to measure accuracy, which gives different values: KNN gives 87.20 percent accuracy, SVM gives 91.40 percent, Naive Bayes gives 85.10 percent, and DBN-GRU gives 97.90 percent. Furthermore, in this paper we simply employ standard machine learning techniques; in future work, we will attempt to improve those machine learning algorithms in order to develop a better detection algorithm.
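The abstract above describes training classifiers on permission-based features. A minimal scikit-learn sketch of that idea, using KNN and SVM, is shown below; the feature matrix and labels are synthetic and the reported accuracies are not reproduced here.

```python
# Illustrative sketch only (not the authors' implementation): training KNN and
# SVM classifiers on permission-based feature vectors extracted from Android
# apps, as outlined above. The feature matrix and labels are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Synthetic data: each row is one app, each column a requested permission (0/1),
# label 1 = malware, 0 = goodware.
rng = np.random.default_rng(42)
X = rng.integers(0, 2, size=(500, 50))
y = rng.integers(0, 2, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf"))]:
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name} accuracy: {acc:.3f}")
```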

Grading of Harvested 'Mihwang' Peach Maturity with Convolutional Neural Network (합성곱 신경망을 이용한 '미황' 복숭아 과실의 성숙도 분류)

  • Shin, Mi Hee;Jang, Kyeong Eun;Lee, Seul Ki;Cho, Jung Gun;Song, Sang Jun;Kim, Jin Gook
    • Journal of Bio-Environment Control
    • /
    • v.31 no.4
    • /
    • pp.270-278
    • /
    • 2022
  • This study used deep learning technology to classify the maturity of 'Mihwang' peaches with RGB images and fruit quality attributes during the fruit development and maturation periods. A total of 730 peach images were used for the training and validation data sets at a ratio of 8:2, and the remaining 170 images were used to test the deep learning models. Among the fruit quality attributes, firmness, Hue value, and a* value were adopted as indices for maturity classification into immature, mature, and over-mature fruit. This study used CNN (convolutional neural network) models for image classification: VGG16 and InceptionV3 of the GoogLeNet family. The performance results show 87.1% and 83.6% accuracy with the Hue value in VGG16 and InceptionV3, respectively. In contrast, the performance results show 72.2% and 76.9% accuracy with firmness in VGG16 and InceptionV3, respectively, and the loss rates are 54.3% and 62.1% with firmness in VGG16 and InceptionV3, respectively. Further work is considered necessary to adapt the firmness index for field utilization in peach.
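As a rough illustration of the CNN classification described above (VGG16 backbone, three maturity classes, 8:2 train/validation split), a Keras sketch follows. The directory name, image size, and hyperparameters are assumptions, not the authors' settings.

```python
# A rough sketch (not the authors' code) of maturity classification with a
# pre-trained VGG16 backbone and three classes (immature / mature / over-mature).
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)

# Assumed folder structure: peach_images/<class_name>/*.jpg
train_ds, val_ds = tf.keras.utils.image_dataset_from_directory(
    "peach_images", validation_split=0.2, subset="both",
    seed=0, image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=IMG_SIZE + (3,))
base.trainable = False

model = models.Sequential([
    layers.Rescaling(1.0 / 255),
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(3, activation="softmax"),   # immature, mature, over-mature
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=30)
```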

Evaluation of the usefulness of Images according to Reconstruction Techniques in Pediatric Chest CT (소아 흉부 CT 검사에서 재구성 기법에 따른 영상의 유용성 평가)

  • Gu Kim;Jong Hyeok Kwak;Seung-Jae Lee
    • Journal of the Korean Society of Radiology
    • /
    • v.17 no.3
    • /
    • pp.285-295
    • /
    • 2023
  • With the development of technology, efforts to reduce the radiation dose received by patients in CT scans continue through the development of new reconstruction techniques. Recently, deep learning reconstruction techniques have been developed to overcome the limitations of iterative reconstruction techniques. This study evaluates the usefulness of images according to reconstruction technique in pediatric chest CT. The patient study included 85 pediatric patients who underwent chest CT at P-Hospital in Gyeongsangnam-do from January 1, 2021 to December 31, 2022, and the phantom study used the Pediatrics Whole Body Phantom PBU-70. After scanning, the images were reconstructed with FBP, ASIR-V (50%), and DLIR (TF-Medium, TF-High), and the images were evaluated by setting ROIs of the same size and obtaining SNR and CNR values. As a result, TF-H of the deep learning reconstruction techniques had the lowest noise value compared with ASIR-V (50%) and TF-M in all experiments, and the highest SNR and CNR. In pediatric chest CT scans, TF images reconstructed with the deep learning technique were less noisy than ASIR-V images reconstructed with the adaptive statistical iterative reconstruction technique, had higher CNR and SNR, and showed improved image quality compared to conventional reconstruction techniques.
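The image evaluation above rests on SNR and CNR values measured in equally sized ROIs. A minimal sketch of that calculation is given below, assuming a hypothetical image array and ROI positions rather than the study's DICOM data.

```python
# Minimal sketch of the SNR/CNR evaluation described above (not the authors'
# code). ROI coordinates and the image array are hypothetical; a real workflow
# would read the reconstructed DICOM series instead.
import numpy as np

def roi_stats(image, y, x, size):
    """Mean and standard deviation inside a square ROI of the given size."""
    roi = image[y:y + size, x:x + size]
    return roi.mean(), roi.std()

# Hypothetical CT slice in Hounsfield units
image = np.random.default_rng(0).normal(40, 10, (512, 512))

signal_mean, signal_sd = roi_stats(image, 200, 200, 20)   # tissue ROI
bg_mean, bg_sd = roi_stats(image, 50, 50, 20)             # background ROI

snr = signal_mean / signal_sd
cnr = abs(signal_mean - bg_mean) / bg_sd
print(f"SNR = {snr:.2f}, CNR = {cnr:.2f}")
```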

A Black Ice Recognition in Infrared Road Images Using Improved Lightweight Model Based on MobileNetV2 (MobileNetV2 기반의 개선된 Lightweight 모델을 이용한 열화도로 영상에서의 블랙 아이스 인식)

  • Li, Yu-Jie;Kang, Sun-Kyoung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.12
    • /
    • pp.1835-1845
    • /
    • 2021
  • To accurately identify black ice and warn drivers in advance so they can control their speed and take preventive measures, this paper proposes a lightweight black ice detection network based on infrared road images. A black ice recognition network model based on CNN transfer learning was developed, and to further improve the accuracy of black ice recognition, an enhanced lightweight network based on MobileNetV2 was developed. To reduce the amount of computation, linear bottlenecks and inverted residuals were used, with four bottleneck groups. At the same time, to improve the recognition rate of the model, each bottleneck group was connected to a 3×3 convolutional layer to enhance regional feature extraction and increase the number of feature maps. Finally, a black ice recognition experiment was performed on the constructed infrared road black ice dataset. The network model proposed in this paper achieved a recognition accuracy of 99.07% for black ice.
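As an illustration of the building blocks named above (linear bottlenecks, inverted residuals, and a 3×3 convolution after each bottleneck group), a Keras sketch of one such block follows. It is a simplified stand-in, with assumed channel counts and input size, not the paper's exact network.

```python
# A sketch (under stated assumptions, not the paper's exact architecture) of a
# MobileNetV2-style inverted residual block with a linear bottleneck, followed
# by the 3x3 convolution the abstract attaches to each bottleneck group.
import tensorflow as tf
from tensorflow.keras import layers

def inverted_residual(x, out_channels, expansion=6, stride=1):
    in_channels = x.shape[-1]
    h = layers.Conv2D(in_channels * expansion, 1, padding="same")(x)  # expand
    h = layers.BatchNormalization()(h)
    h = layers.ReLU(6.0)(h)
    h = layers.DepthwiseConv2D(3, strides=stride, padding="same")(h)  # depthwise
    h = layers.BatchNormalization()(h)
    h = layers.ReLU(6.0)(h)
    h = layers.Conv2D(out_channels, 1, padding="same")(h)             # linear bottleneck
    h = layers.BatchNormalization()(h)
    if stride == 1 and in_channels == out_channels:
        h = layers.Add()([x, h])                                      # residual connection
    return h

inputs = tf.keras.Input(shape=(224, 224, 1))         # single-channel infrared image
x = layers.Conv2D(32, 3, strides=2, padding="same")(inputs)
x = inverted_residual(x, 32)                          # one bottleneck group (of four)
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)  # 3x3 conv after the group
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(2, activation="softmax")(x)    # black ice vs. normal road
model = tf.keras.Model(inputs, outputs)
```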

Dog-Species Classification through CycleGAN and Standard Data Augmentation

  • Chan, Park;Nammee, Moon
    • Journal of Information Processing Systems
    • /
    • v.19 no.1
    • /
    • pp.67-79
    • /
    • 2023
  • In the image field, data augmentation refers to increasing the amount of data through an editing method such as rotating or cropping a photo. In this study, a generative adversarial network (GAN) image was created using CycleGAN, and various colors of dogs were reflected through data augmentation. In particular, dog data from the Stanford Dogs Dataset and Oxford-IIIT Pet Dataset were used, and 10 breeds of dog, corresponding to 300 images each, were selected. Subsequently, a GAN image was generated using CycleGAN, and four learning groups were established: 2,000 original photos (group I); 2,000 original photos + 1,000 GAN images (group II); 3,000 original photos (group III); and 3,000 original photos + 1,000 GAN images (group IV). The amount of data in each learning group was augmented using existing data augmentation methods such as rotating, cropping, erasing, and distorting. The augmented photo data were used to train the MobileNet_v3_Large, ResNet-152, InceptionResNet_v2, and NASNet_Large frameworks to evaluate the classification accuracy and loss. The top-3 accuracy for each deep neural network model was as follows: MobileNet_v3_Large of 86.4% (group I), 85.4% (group II), 90.4% (group III), and 89.2% (group IV); ResNet-152 of 82.4% (group I), 83.7% (group II), 84.7% (group III), and 84.9% (group IV); InceptionResNet_v2 of 90.7% (group I), 88.4% (group II), 93.3% (group III), and 93.1% (group IV); and NASNet_Large of 85% (group I), 88.1% (group II), 91.8% (group III), and 92% (group IV). The InceptionResNet_v2 model exhibited the highest image classification accuracy, and the NASNet_Large model exhibited the highest increase in the accuracy owing to data augmentation.
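A minimal sketch of the standard augmentation and training step described above is given below (rotation, cropping, and distortion; the erasing step and the CycleGAN generation are not reproduced). The folder name, image size, and parameters are assumptions, not the authors' settings.

```python
# Illustrative sketch only: standard data augmentation applied to dog photos
# before training a classifier such as MobileNetV3-Large, with top-3 accuracy
# tracked as in the abstract. CycleGAN-generated images would simply be added
# to the training folder before this step.
import tensorflow as tf
from tensorflow.keras import layers

augment = tf.keras.Sequential([
    layers.RandomRotation(0.1),            # rotate
    layers.RandomCrop(200, 200),           # crop
    layers.RandomZoom(0.2),                # distort (zoom)
    layers.RandomTranslation(0.1, 0.1),
])

train_ds = tf.keras.utils.image_dataset_from_directory(
    "dog_breeds", image_size=(224, 224), batch_size=32)
train_ds = train_ds.map(lambda x, y: (augment(x, training=True), y))

base = tf.keras.applications.MobileNetV3Large(include_top=False, weights="imagenet",
                                              input_shape=(200, 200, 3), pooling="avg")
model = tf.keras.Sequential([base, layers.Dense(10, activation="softmax")])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=[tf.keras.metrics.SparseTopKCategoricalAccuracy(k=3, name="top3")])
# model.fit(train_ds, epochs=10)
```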

A Study on Optimal Convolutional Neural Networks Backbone for Reinforced Concrete Damage Feature Extraction (철근콘크리트 손상 특성 추출을 위한 최적 컨볼루션 신경망 백본 연구)

  • Park, Younghoon
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.43 no.4
    • /
    • pp.511-523
    • /
    • 2023
  • Research on the integration of unmanned aerial vehicles and deep learning for reinforced concrete damage detection is actively underway. As backbones, convolutional neural networks have a strong impact on the performance of image classification, detection, and segmentation. MobileNet, a pre-trained convolutional neural network, is efficient as a backbone for an unmanned aerial vehicle-based damage detection model because it can achieve sufficient accuracy with low computational complexity. When vanilla convolutional neural networks and MobileNet were analyzed under various conditions, MobileNet showed 6.0~9.0% higher validation accuracy than the vanilla networks with 15.9~22.9% lower computational complexity. MobileNetV2, MobileNetV3Large, and MobileNetV3Small showed almost identical maximum validation accuracy, and the optimal conditions for MobileNet's reinforced concrete damage image feature extraction were the RMSprop optimizer, no dropout, and average pooling. The maximum validation accuracy of 75.49% for detection of 7 damage types based on MobileNetV2 derived in this study can be improved by image accumulation and continuous learning.
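As an illustration of the configuration the abstract identifies as optimal (MobileNet backbone, RMSprop optimizer, no dropout, average pooling), a minimal Keras sketch follows; the image size and learning rate are assumptions.

```python
# A minimal sketch (not the paper's code) of the backbone configuration the
# abstract identifies as optimal: a MobileNetV2 feature extractor with average
# pooling, no dropout, and the RMSprop optimizer, classifying 7 damage types.
import tensorflow as tf
from tensorflow.keras import layers

backbone = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")   # average pooling, no dropout layer

model = tf.keras.Sequential([
    backbone,
    layers.Dense(7, activation="softmax"),      # 7 reinforced-concrete damage types
])
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```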

Implementation of Image Semantic Segmentation on Android Device using Deep Learning (딥-러닝을 활용한 안드로이드 플랫폼에서의 이미지 시맨틱 분할 구현)

  • Lee, Yong-Hwan;Kim, Youngseop
    • Journal of the Semiconductor & Display Technology
    • /
    • v.19 no.2
    • /
    • pp.88-91
    • /
    • 2020
  • Image segmentation is the task of partitioning an image into multiple sets of pixels based on some characteristics; the objective is to simplify the image into a representation that is more meaningful and easier to analyze. In this paper, we apply deep learning to pre-train the learning model and implement an algorithm that performs image segmentation in real time by extracting frames from the stream input of the Android device. Based on the open-source DeepLab-v3+ implemented in TensorFlow, some convolution filters are modified to improve real-time operation on the Android platform.
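Since the paper targets real-time segmentation on Android, a common deployment path is converting the TensorFlow model to TensorFlow Lite. The sketch below illustrates that path only in outline; the SavedModel path is hypothetical and the conversion details are not taken from the paper.

```python
# A rough sketch (assumptions only, not the paper's pipeline) of exporting a
# TensorFlow segmentation model to TensorFlow Lite so it can run frame-by-frame
# on an Android device. "saved_deeplab_model" is a hypothetical SavedModel path.
import numpy as np
import tensorflow as tf

# Convert a trained DeepLab-v3+-style SavedModel to a .tflite file
converter = tf.lite.TFLiteConverter.from_saved_model("saved_deeplab_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # quantize to shrink/speed up
tflite_model = converter.convert()
with open("deeplab.tflite", "wb") as f:
    f.write(tflite_model)

# Run one frame through the converted model with the TFLite interpreter
interpreter = tf.lite.Interpreter(model_path="deeplab.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

frame = np.zeros(inp["shape"], dtype=np.float32)        # placeholder camera frame
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
segmentation_map = interpreter.get_tensor(out["index"])  # per-pixel class scores
```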