• Title/Summary/Keyword: V-Learning


Development of Color Recognition Algorithm for Traffic Lights using Deep Learning Data (딥러닝 데이터 활용한 신호등 색 인식 알고리즘 개발)

  • Baek, Seoha;Kim, Jongho;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association / v.14 no.2 / pp.45-50 / 2022
  • The motion of a vehicle in an urban environment is determined by the surrounding traffic flow, so understanding that flow is a dominant factor in the vehicle's motion planning. Traffic flow in urban environments is assessed using various kinds of urban infrastructure information. This paper presents a color recognition algorithm for traffic lights, a key element of that infrastructure information, to perceive traffic conditions. A deep learning-based open-source vision model localizes the traffic lights around the host vehicle. The detections are converted into input data according to whether each light lies on the ego vehicle's route, and the color of each traffic light is estimated from the pixel values of the camera image. The proposed algorithm is validated at intersections with traffic lights on a test track. The results show that it provides precise recognition of the traffic lights associated with the ego vehicle's path in urban intersection scenarios.
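The abstract estimates a light's color from camera pixel values but does not publish the rule itself; the sketch below is a minimal, hypothetical channel-dominance classifier over the brightest pixels of a cropped traffic-light region (the function name, thresholds, and the 90th-percentile brightness cut are all illustrative assumptions, not the paper's method).

```python
import numpy as np

def classify_light_color(crop: np.ndarray) -> str:
    """Classify a traffic-light crop (H x W x 3, RGB, uint8) as
    'red', 'yellow', or 'green' from its brightest pixels."""
    # Keep only the brightest pixels: the lit lamp dominates these.
    brightness = crop.sum(axis=2)
    thresh = np.percentile(brightness, 90)
    lit = crop[brightness >= thresh].astype(float)  # shape (N, 3)
    r, g, b = lit.mean(axis=0)
    # Simple channel-dominance rules (illustrative margins).
    if r > 1.5 * g and r > 1.5 * b:
        return "red"
    if g > 1.5 * r and g > 1.5 * b:
        return "green"
    if r > 1.5 * b and g > 1.5 * b:  # red and green both high -> yellow
        return "yellow"
    return "unknown"
```

A real pipeline would work in HSV space and gate on the route check described in the abstract; this sketch only shows the pixel-statistics idea.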

Empirical Investigations to Plant Leaf Disease Detection Based on Convolutional Neural Network

  • K. Anitha;M.Srinivasa Rao
    • International Journal of Computer Science & Network Security / v.23 no.6 / pp.115-120 / 2023
  • Plant leaf diseases and destructive insects are major challenges that reduce a country's agricultural production. Accurate and fast prediction of leaf diseases in crops can help build a suitable treatment strategy while considerably reducing economic and crop losses. In this paper, a Convolutional Neural Network (CNN)-based model is proposed to detect plant leaf diseases efficiently. CNNs are a key deep learning technique used mainly for object identification. The model includes an image classifier built on machine learning concepts, with TensorFlow as the backend and Python as the implementation language. Previous methods rely on various image processing techniques implemented in MATLAB and lack the flexibility to provide a good level of accuracy. The proposed system can identify different types of diseases and handle complex scenes of a plant's area. A predictor model pinpoints the disease and presents the specific problem, helping farmers act effectively. Experimental results indicate that an accuracy of around 93% can be achieved with this model on the prepared data set.
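The CNN pipeline the abstract describes rests on three building blocks: convolution, a nonlinearity, and pooling. The NumPy sketch below illustrates those primitives in isolation; it is not the paper's TensorFlow model, and the single-channel "valid" convolution is a deliberate simplification.

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D cross-correlation: the basic CNN feature-extraction op."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def relu(x):
    """Element-wise rectified linear unit."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling, discarding any ragged border."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))
```

A leaf-disease classifier stacks many such layers with learned kernels and ends in a softmax over disease classes; frameworks like TensorFlow supply trained, batched versions of exactly these operations.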

Convolutional Neural Network Based Plant Leaf Disease Detection

  • K. Anitha;M.Srinivasa Rao
    • International Journal of Computer Science & Network Security / v.24 no.4 / pp.107-112 / 2024
  • Plant leaf diseases and destructive insects are major challenges that reduce a country's agricultural production. Accurate and fast prediction of leaf diseases in crops can help build a suitable treatment strategy while considerably reducing economic and crop losses. In this paper, a Convolutional Neural Network (CNN)-based model is proposed to detect plant leaf diseases efficiently. CNNs are a key deep learning technique used mainly for object identification. The model includes an image classifier built on machine learning concepts, with TensorFlow as the backend and Python as the implementation language. Previous methods rely on various image processing techniques implemented in MATLAB and lack the flexibility to provide a good level of accuracy. The proposed system can identify different types of diseases and handle complex scenes of a plant's area. A predictor model pinpoints the disease and presents the specific problem, helping farmers act effectively. Experimental results indicate that an accuracy of around 93% can be achieved with this model on the prepared data set.

Data-driven Approach to Explore the Contribution of Process Parameters for Laser Powder Bed Fusion of a Ti-6Al-4V Alloy

  • Jeong Min Park;Jaimyun Jung;Seungyeon Lee;Haeum Park;Yeon Woo Kim;Ji-Hun Yu
    • Journal of Powder Materials / v.31 no.2 / pp.137-145 / 2024
  • To predict the process window of laser powder bed fusion (LPBF) for printing metallic components, the volumetric energy density (VED) has been widely used to control process parameters. However, because VED assumes that all process parameters contribute equally to heat input, it has limited ability to predict the process window of LPBF-processed materials. In this study, an explainable machine learning (xML) approach was adopted to predict and understand the contribution of each process parameter to defect evolution in Ti alloys during the LPBF process. Various ML models were trained, and the Shapley additive explanations (SHAP) method was adopted to quantify the importance of each process parameter. This study can offer effective guidelines for fine-tuning process parameters to fabricate high-quality products using LPBF.
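The conventional VED that the abstract critiques is the lumped heat-input measure VED = P / (v·h·t), which by construction weights power, scan speed, hatch spacing, and layer thickness identically. A one-line sketch with illustrative (not paper-specific) parameter values:

```python
def volumetric_energy_density(power_w, scan_speed_mm_s, hatch_mm, layer_mm):
    """VED in J/mm^3: laser power divided by the volume swept per second.
    Every parameter enters with equal (unit) weight, which is exactly the
    assumption the xML/SHAP analysis in the study relaxes."""
    return power_w / (scan_speed_mm_s * hatch_mm * layer_mm)

# Illustrative Ti-6Al-4V-like settings (assumed, not from the paper):
# 280 W, 1200 mm/s scan speed, 0.14 mm hatch, 0.03 mm layer.
ved = volumetric_energy_density(280, 1200, 0.14, 0.03)
```

A SHAP analysis instead assigns each parameter its own, data-driven contribution to defect formation, so two parameter sets with identical VED can be distinguished.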

Deep Learning-Based Box Office Prediction Using the Image Characteristics of Advertising Posters in Performing Arts (공연예술에서 광고포스터의 이미지 특성을 활용한 딥러닝 기반 관객예측)

  • Cho, Yujung;Kang, Kyungpyo;Kwon, Ohbyung
    • The Journal of Society for e-Business Studies / v.26 no.2 / pp.19-43 / 2021
  • Predicting box office performance is an important issue for the performing arts industry and its institutions. Traditional prediction methodologies and data mining approaches using standardized data such as cast members, performance venues, and ticket prices have been proposed. However, although audiences evidently form their intention to attend partly from the performance poster, few attempts have been made to predict box office performance by analyzing poster images. The purpose of this study is therefore to propose a deep learning method that can predict box office success from performance poster images. Prediction was performed using deep learning models such as a pure CNN, VGG-16, Inception-v3, and ResNet50, with poster images published on KOPIS as the training data set; an ensemble with a traditional regression methodology was also attempted. The resulting models showed high discriminative performance, exceeding 85% box office prediction accuracy. This study is the first attempt to predict box office success using image data in the performing arts field, and the proposed method can also be applied to poster-based advertising such as institutional promotions and corporate product advertisements.

Cloud Detection from Sentinel-2 Images Using DeepLabV3+ and Swin Transformer Models (DeepLabV3+와 Swin Transformer 모델을 이용한 Sentinel-2 영상의 구름탐지)

  • Kang, Jonggu;Park, Ganghyun;Kim, Geunah;Youn, Youjeong;Choi, Soyeon;Lee, Yangwon
    • Korean Journal of Remote Sensing / v.38 no.6_2 / pp.1743-1747 / 2022
  • Sentinel-2 can be used as proxy data for the Korean Compact Advanced Satellite 500-4 (CAS500-4), also known as Agriculture and Forestry Satellite, in terms of spectral wavelengths and spatial resolution. This letter examined cloud detection for later use in the CAS500-4 based on deep learning technologies. DeepLabV3+, a traditional Convolutional Neural Network (CNN) model, and Shifted Windows (Swin) Transformer, a state-of-the-art (SOTA) Transformer model, were compared using 22,728 images provided by Radiant Earth Foundation (REF). Swin Transformer showed a better performance with a precision of 0.886 and a recall of 0.875, which is a balanced result, unbiased between over- and under-estimation. Deep learning-based cloud detection is expected to be a future operational module for CAS500-4 through optimization for the Korean Peninsula.
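The "balanced" claim in the abstract refers to precision and recall being nearly equal (0.886 vs. 0.875): high precision with low recall would mean missed clouds (under-estimation), the reverse would mean false cloud flags (over-estimation). A minimal helper for the two metrics, using assumed confusion counts for the example:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP): of the pixels flagged as cloud, how many
    really are. Recall = TP/(TP+FN): of the true cloud pixels, how many
    were found. Near-equal values indicate a bias-free detector."""
    return tp / (tp + fp), tp / (tp + fn)
```

Per-image counts would be accumulated over the whole 22,728-image test set before computing the pair, so one cloudless scene cannot dominate the score.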

Pedestrian Classification using CNN's Deep Features and Transfer Learning (CNN의 깊은 특징과 전이학습을 사용한 보행자 분류)

  • Chung, Soyoung;Chung, Min Gyo
    • Journal of Internet Computing and Services / v.20 no.4 / pp.91-102 / 2019
  • In autonomous driving systems, the ability to classify pedestrians in camera images is very important for pedestrian safety. Traditionally, pedestrian features were extracted with HOG (Histogram of Oriented Gradients) or SIFT (Scale-Invariant Feature Transform) and classified with an SVM (Support Vector Machine), but extracting pedestrian characteristics in such a handcrafted manner has many limitations. This paper therefore proposes a method to classify pedestrians reliably and effectively using a CNN's (Convolutional Neural Network) deep features and transfer learning. We experimented with the two representative transfer learning techniques, the fixed feature extractor and fine-tuning. In the fine-tuning method, we added a new scheme, called M-Fine (Modified Fine-tuning), which divides the layers into transferred and non-transferred parts at three different split sizes and adjusts the weights only for layers in the non-transferred part. Experiments on the INRIA Person data set with five CNN models (VGGNet, DenseNet, Inception V3, Xception, and MobileNet) showed that a CNN's deep features outperform handcrafted features such as HOG and SIFT, and that the accuracy of Xception (threshold = 0.5) is the highest at 99.61%. MobileNet, which achieved performance similar to Xception with 80% fewer parameters, was the best in terms of efficiency. Among the three transfer learning schemes tested, fine-tuning performed best; the M-Fine method was comparable to or slightly below fine-tuning, but above the fixed feature extractor method.
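The three schemes compared above differ only in which layers get weight updates. The sketch below abstracts a network as an ordered list of layers and returns a per-layer trainable flag; the function name, the `scheme` strings, and the single `boundary` index (standing in for M-Fine's three split sizes) are hypothetical illustrations, not the paper's implementation.

```python
def set_trainable(layers, scheme, boundary=0):
    """Per-layer trainable flags for three transfer-learning schemes:
    'fixed'     - freeze every transferred layer (fixed feature extractor,
                  only a new classifier head would be trained),
    'fine_tune' - update all layers,
    'm_fine'    - freeze layers before index `boundary`, train the rest
                  (the M-Fine idea: only the non-transferred part moves)."""
    flags = []
    for i, _layer in enumerate(layers):
        if scheme == "fixed":
            flags.append(False)
        elif scheme == "fine_tune":
            flags.append(True)
        else:  # "m_fine"
            flags.append(i >= boundary)
    return flags
```

In a real framework the flags would be applied to each layer (e.g. a Keras `layer.trainable` attribute) before compiling the model; sweeping `boundary` over several depths reproduces the paper's three M-Fine split sizes.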

A Study on the Use of Contrast Agent and the Improvement of Body Part Classification Performance through Deep Learning-Based CT Scan Reconstruction (딥러닝 기반 CT 스캔 재구성을 통한 조영제 사용 및 신체 부위 분류 성능 향상 연구)

  • Seongwon Na;Yousun Ko;Kyung Won Kim
    • Journal of Broadcast Engineering / v.28 no.3 / pp.293-301 / 2023
  • Unstandardized medical data are still collected and managed manually, and studies have applied deep learning to classify CT data to solve this problem. However, most develop models based only on the axial plane, the basic CT slice orientation. Because CT images, unlike general images, depict only human structures, reconstructing the CT scan itself can provide richer physical features. This study explores ways to achieve higher performance by converting CT scans into 2D views beyond the axial plane. Training used 1,042 CT scans from five body parts, and 179 internal and 448 external test scans were collected for model evaluation. For the deep learning model, we used InceptionResNetV2 pre-trained on ImageNet as a backbone and re-trained all layers of the model. In the experiments, the reconstruction-data model achieved 99.33% in body part classification, 1.12% higher than the axial model, while the axial model was higher only for the brain and neck in contrast classification. In conclusion, more accurate performance was achieved when training with data that exposes better anatomical features than when training with axial slices alone.
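Because a CT scan is a 3D voxel volume, "reconstructing to 2D" beyond the axial plane amounts to slicing the volume along its other axes. A minimal NumPy sketch of the three orthogonal views (the function and index names are illustrative; the study's actual reconstructions may differ):

```python
import numpy as np

def orthogonal_slices(volume, z, y, x):
    """Return the axial, coronal, and sagittal planes through voxel
    (z, y, x) of a CT volume stored as (depth, height, width) - the
    kind of alternative 2D input the study compares against axial-only."""
    axial    = volume[z, :, :]   # top-down slice (the conventional input)
    coronal  = volume[:, y, :]   # front-back slice
    sagittal = volume[:, :, x]   # left-right slice
    return axial, coronal, sagittal
```

Feeding all three views exposes cross-plane anatomy (e.g. spine curvature, organ extent) that a single axial slice cannot show, which is the intuition behind the reported accuracy gain.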

Verification of the Effects of Student-led Simulation with Team and Problem-Based Learning Class Training during COVID-19 (COVID-19시기의 예비간호사 training을 위한 학생주도 팀기반 문제중심학습 시뮬레이션 수업 효과검증)

  • Hana Kim;Mi-Ock Shim;Jisan Lee
    • Journal of the Korea Society for Simulation / v.32 no.4 / pp.27-39 / 2023
  • This study aimed to develop SSTPBL (Student-led Simulation with Team and Problem-Based Learning), which combines TBL and PBL with a student-led method to strengthen knowledge application, nursing diagnosis ability, and collaboration ability among the core competencies of nurses. SSTPBL was then applied to nursing students and the results were assessed. Data were collected from September 15 to December 21, 2022, through structured questionnaires and focus group interviews with 51 fourth-year nursing students at a university in A City, and were analyzed using SPSS version 25.0 and topic analysis. The intervention was effective for simulation experience satisfaction (t = 3.51, p < .01), vSim experience satisfaction (t = 3.50, p < .01), preparation as a prospective nurse (t = 3.73, p < .01), learning self-efficacy (t = 3.87, p < .01), collaborative self-efficacy (t = 4.30, p < .01), problem-solving ability (t = 5.26, p < .01), educational satisfaction (t = 3.54, p < .01), and digital health equity (t = 2.18, p < .05). Topic analysis of the qualitative data yielded six main topics: 'similar to clinical practice', 'difficulty in immersion', 'learning through others', 'learning through self-reflection', 'improving confidence through new experiences', and 'new teaching methods'. Based on these results, SSTPBL is expected to serve in various ways as a new training method for prospective nurses amid the growing clinical practice restrictions after the pandemic.

Development of a deep learning-based cabbage core region detection and depth classification model (딥러닝 기반 배추 심 중심 영역 및 깊이 분류 모델 개발)

  • Ki Hyun Kwon;Jong Hyeok Roh;Ah-Na Kim;Tae Hyong Kim
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.6 / pp.392-399 / 2023
  • This paper proposes a deep learning model that determines the region and depth of cabbage cores for robotic automation of the core removal step in the kimchi manufacturing process. Rather than regressing the measured depth of each cabbage, the model simultaneously detects the core region and classifies its depth as a discrete class. For model training and validation, RGB images of 522 harvested cabbages were obtained; core region and depth labels were created and data augmentation techniques were applied to the acquired images. mAP, IoU, accuracy, sensitivity, specificity, and F1-score were selected to evaluate the proposed YOLO-v4-based cabbage core region detection and classification model. The resulting mAP and IoU values were 0.97 and 0.91, respectively, and the accuracy and F1-score for depth classification were 96.2% and 95.5%, respectively. These results confirm that the depth information of a cabbage can be classified and that the model can be used in the future development of a robot-automation system for the cabbage core removal process.
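The IoU score reported for the detected core regions is the standard box-overlap metric: intersection area divided by union area of the predicted and ground-truth boxes. A self-contained sketch (corner-format `(x1, y1, x2, y2)` boxes assumed):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned (x1, y1, x2, y2) boxes.
    1.0 means perfect overlap; an average around 0.91, as reported for the
    cabbage-core detections, means tight localization."""
    # Corners of the intersection rectangle (empty if boxes are disjoint).
    ix1 = max(box_a[0], box_b[0]); iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2]); iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

YOLO-style evaluation also uses IoU as the match threshold when computing mAP, so the two reported detection metrics share this primitive.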