• Title/Summary/Keyword: Real-Time Learning

Search results: 1,722

Detecting Numeric and Character Areas of Low-quality License Plate Images using YOLOv4 Algorithm (YOLOv4 알고리즘을 이용한 저품질 자동차 번호판 영상의 숫자 및 문자영역 검출)

  • Lee, Jeonghwan
    • Journal of Korea Society of Digital Industry and Information Management / v.18 no.4 / pp.1-11 / 2022
  • Recently, research on license plate recognition, a core technology of intelligent transportation systems (ITS), has been actively conducted. In this paper, we propose a method to extract numbers and characters from low-quality license plate images by applying the YOLOv4 algorithm. YOLOv4 is a one-stage object detection method using a convolutional neural network comprising backbone, neck, and head parts; unlike earlier two-stage detectors such as Faster R-CNN, it detects objects in real time. We study a method to extract number and character regions directly from low-quality license plate images without additional edge detection or image segmentation steps. To evaluate the performance of the proposed method, we experimented with 500 license plate images: 350 images were used for training and the remaining 150 for testing. Computer simulations show that the mean average precision of detecting number and character regions on vehicle license plates was about 93.8%.
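
The reported mAP rests on matching predicted boxes to ground-truth regions by intersection-over-union. A minimal sketch of that matching step (the box format, greedy confidence-ordered matching, and the 0.5 IoU threshold are common conventions assumed here, not details from the paper):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def match_detections(preds, gts, thr=0.5):
    """preds: list of (confidence, box); gts: list of boxes.
    Greedily matches detections to unused ground truths in
    confidence order; returns (TP, FP, FN) counts."""
    used, tp, fp = set(), 0, 0
    for conf, box in sorted(preds, key=lambda p: -p[0]):
        best, best_iou = None, thr
        for i, g in enumerate(gts):
            if i in used:
                continue
            v = iou(box, g)
            if v >= best_iou:
                best, best_iou = i, v
        if best is None:
            fp += 1
        else:
            used.add(best)
            tp += 1
    return tp, fp, len(gts) - len(used)
```

Per-class precision/recall curves built from these counts at varying confidence thresholds are what average into the mAP figure.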

Behavior Pattern Prediction Algorithm Based on 2D Pose Estimation and LSTM from Videos (비디오 영상에서 2차원 자세 추정과 LSTM 기반의 행동 패턴 예측 알고리즘)

  • Choi, Jiho;Hwang, Gyutae;Lee, Sang Jun
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.4 / pp.191-197 / 2022
  • This study proposes an image-based Pose Intention Network (PIN) algorithm for rehabilitation guided by patients' intentions. The purpose of the PIN algorithm is to enable active rehabilitation exercise, implemented by estimating the patient's motion and classifying the intention. Existing rehabilitation involves the inconvenience of attaching sensors directly to the patient's skin, and rehabilitation devices that move the patient constitute a passive method. Our algorithm consists of two steps. First, we estimate the user's joint positions with the OpenPose algorithm, which is efficient at estimating 2D human pose in an image. Second, an intention classifier is constructed to classify motions into three categories, taking a sequence of images including joint information as input. The intention network also learns correlations between joints and changes in joints over a short period of time, which can readily be used to determine the intention of a motion. To implement the proposed algorithm and conduct real-world experiments, we collected our own dataset, composed of videos of three classes, and trained the network on short segment clips of the videos. Experimental results demonstrate that the proposed algorithm is effective at classifying intentions from a short video clip.
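
The intention network consumes short clips of 2D joint coordinates together with their short-term changes. A minimal sketch of how such clips might be shaped into feature vectors before classification (the array shapes, 18-joint layout, and window length are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def clip_features(joints, window=8):
    """joints: array of shape (frames, num_joints, 2) holding 2D pose
    keypoints per frame. Splits the sequence into non-overlapping windows
    and stacks, per window, the raw positions and the frame-to-frame
    deltas that encode short-term joint changes."""
    frames = joints.shape[0]
    feats = []
    for start in range(0, frames - window + 1, window):
        clip = joints[start:start + window]        # (window, J, 2)
        deltas = np.diff(clip, axis=0)             # (window - 1, J, 2)
        feats.append(np.concatenate([clip.ravel(), deltas.ravel()]))
    return np.stack(feats)
```

Each row of the result is one training sample for a sequence classifier such as the LSTM-based intention network described above.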

Study on Image Processing Techniques Applying Artificial Intelligence-based Gray Scale and RGB scale

  • Lee, Sang-Hyun;Kim, Hyun-Tae
    • International Journal of Advanced Culture Technology / v.10 no.2 / pp.252-259 / 2022
  • Artificial intelligence is used in combination with camera-based image processing techniques. Image processing technology processes objects in images received from a camera in real time and is used in various fields such as security monitoring and medical image analysis. If such image processing reduces recognition accuracy, incorrect information supplied to medical image analysis or security monitoring may cause serious problems. Therefore, this paper combines the YOLOv4-tiny model with image processing algorithms and uses the COCO dataset for training. For grayscale images, five image processing methods are performed: normalization, Gaussian distribution, the Otsu algorithm, equalization, and gradient operation. For RGB images, three methods are performed: equalization, Gaussian blur, and gamma correction. Among the nine algorithms applied in this paper, the equalization and Gaussian blur model showed the highest object detection accuracy at 96%, and the gamma correction (RGB environment) model showed the highest object detection rate outdoors in daytime at 89%. The image binarization model showed the highest object detection rate outdoors at night, 89%.
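
Two of the per-channel operations named above, histogram equalization and gamma correction, can be sketched in NumPy as follows (8-bit single-channel input is assumed; this is an illustrative sketch, not the paper's implementation):

```python
import numpy as np

def gamma_correct(img, gamma=0.8):
    """Gamma correction of an 8-bit image via a precomputed lookup table."""
    lut = (255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)
    return lut[img]

def equalize(img):
    """Histogram equalization of a single-channel 8-bit image:
    remaps intensities through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.round(255.0 * (cdf - cdf_min) / (cdf[-1] - cdf_min)).astype(np.uint8)
    return lut[img]
```

Both functions are pure lookups, so they run per frame at video rates, which matters when they sit in front of a real-time detector such as YOLOv4-tiny.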

Development and application of soil moisture prediction using real-time in-situ observation and machine learning (실시간 현장관측과 기계학습을 이용한 토양수분 예측기술의 개발 및 적용)

  • Hyuna Woo;Yaewon Lee;Minyoung Kim;Seong Jin Noh
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.286-286 / 2023
  • Although soil moisture accounts for a relatively small quantitative share of the overall water cycle, it is one of the dominant factors affecting the nonlinearity of rainfall-runoff processes and is linked to a wide range of topics including soil erosion, landslides, agricultural production, and climate change response, so continued improvement of our understanding of soil moisture physics and of prediction techniques is needed. In this study, soil moisture and meteorological variables are observed in real time within the Kumoh National Institute of Technology campus watershed, and a technique for short-term soil moisture prediction using machine learning is developed and evaluated. Specifically, we use data collected in real time via the cloud for about one year: soil moisture, soil water tension, and soil temperature at 10 cm (surface) and 40 cm (deep) depths from TEROS soil sensors, and various meteorological variables such as solar radiation, precipitation, air temperature, wind speed, and atmospheric pressure from an ATMOS weather station. In addition, a soil moisture prediction model is built using the LSTM (Long Short-Term Memory) method based on past and real-time data, and simulation accuracy is evaluated according to lead time. We discuss the effect of data processing methods, such as accumulation of meteorological variables, on surface and deep soil moisture prediction, as well as directions for improving the prediction model. Real-time in-situ observations and AI-based short-term soil moisture prediction are expected to be useful in various areas, such as hydrologic cycle analysis of small watersheds and improvement of physically based models.
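
Short-term LSTM forecasting of this kind is typically trained on sliding windows of past observations paired with a target value at the chosen lead time. A minimal sketch of that windowing step (the window length and lead time are illustrative assumptions, not the study's settings):

```python
import numpy as np

def make_windows(features, target, window=24, lead=6):
    """features: (T, F) array of past observations (e.g., soil moisture
    and weather variables); target: (T,) soil moisture series.
    Pairs each length-`window` history with the target value `lead`
    steps after the window ends, for training a sequence model."""
    X, y = [], []
    for end in range(window, len(target) - lead + 1):
        X.append(features[end - window:end])
        y.append(target[end + lead - 1])
    return np.stack(X), np.array(y)
```

Varying `lead` over several values and re-fitting is one way to produce the accuracy-versus-lead-time evaluation the abstract describes.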


Factors Affecting the Usability of Virtual Reality based Anatomy Education Programs of Nursing Students (간호대학생의 가상현실 기반 해부학교육 프로그램의 사용성에 영향을 미치는 요인)

  • Lee, Jonglan;Hwang, Inju
    • Journal of Korean Academy of Rural Health Nursing / v.18 no.1 / pp.49-57 / 2023
  • Purpose: This study was conducted to confirm the usability of virtual reality-based anatomy education programs for nursing college students and to identify factors that affect that usability. Methods: Data were collected from 143 nursing college students in Gyeonggi-do using a structured questionnaire from May to June 2022. The data were analyzed using real numbers, percentages, means and standard deviations, ANOVA, Scheffé's test, Pearson's correlation, and multiple regression with the SPSS/WIN 23.0 program. Results: The subjects' usability was 4.26 points (out of 5). The variable with the greatest influence on the usability of virtual reality-based anatomy education programs was perceived innovation (β=.370, p<.001), followed by perceived pleasure (β=.295, p=.001), perceived ease (β=.253, p<.001), and smartphone usage time per day (β=.102, p=.031). These variables explained 70.6% of the usability of virtual reality-based anatomy education programs. Conclusion: The results of this study can be utilized as basic data for virtual reality-based anatomy education programs to be developed and applied to nursing students in the future.
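
The analysis pairs multiple-regression coefficients with explained variance (the 70.6% figure is an R² value). An ordinary-least-squares sketch of that computation (the data here are synthetic stand-ins, not the questionnaire scores):

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares with an intercept column.
    Returns the coefficient vector (intercept first) and R^2,
    the fraction of variance in y explained by the predictors."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return beta, 1.0 - ss_res / ss_tot
```

Note that the β values reported in the abstract are standardized coefficients; standardizing each column of `X` and `y` to zero mean and unit variance before fitting would yield those instead of raw coefficients.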

Real-Time Foreground and Facility Extraction with Deep Learning-based Object Detection Results under Static Camera-based Video Monitoring (고정 카메라 기반 비디오 모니터링 환경에서 딥러닝 객체 탐지기 결과를 활용한 실시간 전경 및 시설물 추출)

  • Lee, Nayeon;Son, Seungwook;Yu, Seunghyun;Chung, Yongwha;Park, Daihee
    • Annual Conference of KIPS / 2021.11a / pp.711-714 / 2021
  • In a fixed-camera environment, extracting the foreground from pixel-value differences between foreground and background requires an accurate background image, and the background image must be continuously updated to match the actual background, which changes from frame to frame. In this paper, we propose a method that generates and continuously updates the background by feeding the results of a deep learning-based object detector capable of real-time processing into an image processing stage, and then extracts the foreground using the acquired background information. First, box-level object detection results, obtained by applying the deep learning-based detector to video from the fixed camera, are continuously fed in to update the pixel-level background image and derive an improved background. The acquired background image is then used to obtain a more accurate foreground image. In addition, to detect objects occluded by facilities more accurately, we propose a method that extracts a facility image using the foreground image. Experiments with 12 hours of video acquired from a camera installed in an actual pig pen confirmed that foreground and facility extraction using the proposed method is effective.
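
The core update, advancing a pixel-level background only where the detector reports no object boxes and then differencing against it, can be sketched as follows (the running-average update rate, box format, and difference threshold are illustrative assumptions):

```python
import numpy as np

def update_background(background, frame, boxes, alpha=0.05):
    """Running-average background update that freezes pixels inside
    detector boxes (x1, y1, x2, y2), so foreground objects do not
    bleed into the background model."""
    mask = np.ones(frame.shape[:2], dtype=bool)
    for x1, y1, x2, y2 in boxes:
        mask[y1:y2, x1:x2] = False          # skip detected-object pixels
    bg = background.astype(np.float32)
    bg[mask] = (1 - alpha) * bg[mask] + alpha * frame[mask]
    return bg.astype(frame.dtype)

def extract_foreground(background, frame, thresh=25):
    """Boolean foreground mask from the per-pixel absolute difference
    between the current frame and the maintained background."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > thresh
```

Because occluding facilities never enter any detection box, they persist in the background model, which is what makes the facility-extraction step described above possible.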

Object Detection and Localization on Map using Multiple Camera and Lidar Point Cloud

  • Pansipansi, Leonardo John;Jang, Minseok;Lee, Yonsik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.422-424 / 2021
  • This paper presents an approach that fuses multiple RGB cameras for deep learning-based visual object recognition using a convolutional neural network with 3D Light Detection and Ranging (LiDAR), observing the environment and matching detections into a 3D world to estimate distance and position in the form of a point cloud map. The goal of perception with multiple cameras is to extract the crucial static and dynamic objects around the autonomous vehicle, especially in blind spots, which assists the AV in navigating toward its goal. Running object detection on numerous cameras tends to slow real-time processing, so the convolutional neural network algorithm chosen to address this problem must also suit the capacity of the hardware. The localization of classified detected objects is derived from a 3D point cloud environment: the LiDAR point cloud data first undergo parsing, and the algorithm used is based on the 3D Euclidean clustering method, which localizes objects accurately. We evaluated the method using our own dataset, collected from a VLP-16 and multiple cameras, and the results demonstrate the effectiveness of the method and the multi-sensor fusion strategy.
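
The localization stage relies on 3D Euclidean clustering of the parsed LiDAR points. A minimal brute-force sketch of that clustering (the distance tolerance and minimum cluster size are assumptions; production pipelines such as PCL's implementation use a KD-tree for the neighbour search):

```python
import numpy as np

def euclidean_cluster(points, tol=0.5, min_size=2):
    """Greedy 3D Euclidean clustering: grow a cluster from each
    unvisited point by repeatedly absorbing neighbours within `tol`.
    Returns clusters as sorted lists of point indices; clusters smaller
    than `min_size` are discarded as noise."""
    points = np.asarray(points, dtype=float)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        frontier, cluster = [seed], [seed]
        while frontier:
            idx = frontier.pop()
            near = [j for j in list(unvisited)
                    if np.linalg.norm(points[j] - points[idx]) <= tol]
            for j in near:
                unvisited.discard(j)
            frontier.extend(near)
            cluster.extend(near)
        if len(cluster) >= min_size:
            clusters.append(sorted(cluster))
    return clusters
```

Each resulting cluster can then be summarized by its centroid to place a detected object on the map.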


YOLO Based Automatic Sorting System for Plastic Recycling (플라스틱 재활용을 위한 YOLO기반의 자동 분류시스템)

  • Kim, Yong jun;Cho, Taeuk;Park, Hyung-kun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.382-384 / 2021
  • In this study, we implement a system that automatically classifies types of plastics using YOLO (You Only Look Once), a real-time object recognition algorithm. The system consists of an NVIDIA Jetson Nano, a small computer for deep learning and computer vision, with a model trained using YOLO to recognize plastic recycling marks. Using a webcam, the recycling marks on plastic waste are recognized as PET, HDPE, or PP, and motors are actuated to sort items according to type. This automatic classifier reduces the manual labor of sorting plastics by their recycling marks and increases recycling efficiency through accurate separation.
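
The sorting step reduces to mapping the recognized recycling-mark class to a motor position. A hypothetical sketch of that dispatch (the class names come from the abstract, but the angles, confidence threshold, and function names are illustrative only; the paper does not specify its actuator interface):

```python
# Hypothetical bin angles for the sorting motor; illustrative values only.
BIN_ANGLE = {"PET": 0, "HDPE": 90, "PP": 180}

def sort_action(detections, conf_thresh=0.6):
    """detections: list of (label, confidence) pairs from the detector.
    Returns the motor angle for the most confident known mark above
    the threshold, or None to let the item pass unsorted."""
    valid = [(conf, label) for label, conf in detections
             if label in BIN_ANGLE and conf >= conf_thresh]
    if not valid:
        return None
    _, label = max(valid)
    return BIN_ANGLE[label]
```

Returning `None` for low-confidence frames is a simple way to avoid mis-sorting when the mark is partially occluded on the conveyor.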


A Study on a Method for Detecting Leak Holes in Respirators Using IoT Sensors

  • Woochang Shin
    • International Journal of Advanced Smart Convergence / v.12 no.4 / pp.378-385 / 2023
  • The importance of wearing respiratory protective equipment has been highlighted even more during the COVID-19 pandemic. Even if the suitability of respiratory protection has been confirmed through testing in a laboratory environment, leakage points may still arise in respirators due to improper application by the wearer, damage to the equipment, or sudden movements in real working conditions. In this paper, we propose a method to detect the occurrence of leak holes by attaching an IoT sensor to a full-face respirator and measuring the pressure changes inside the mask caused by the wearer's breathing. We designed nine experimental scenarios by adjusting the size of the leak holes in the respirator and the breathing cycle time, and acquired corresponding respiratory data from the wearer. We then analyzed the respiratory data to identify the duration and pressure-change range of each breath, and used these features to train a neural network model for detecting leak holes in the respirator. The developed neural network model achieved a sensitivity of 100%, specificity of 94.29%, and accuracy of 97.53%. We conclude that effective detection of leak holes can be achieved by incorporating affordable, small IoT sensors into respiratory protective equipment.
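
The per-breath duration and pressure-change range used to train the detector can be extracted from the pressure trace with a simple threshold-crossing segmentation. A minimal sketch (the baseline-crossing rule is an assumption about the signal, not the paper's stated method):

```python
def breath_features(pressure, baseline=0.0):
    """Splits a pressure trace into breaths at upward crossings of the
    baseline and returns one (duration_in_samples, pressure_range)
    pair per complete breath between consecutive crossings."""
    starts = [i for i in range(1, len(pressure))
              if pressure[i - 1] < baseline <= pressure[i]]
    feats = []
    for a, b in zip(starts, starts[1:]):
        segment = pressure[a:b]
        feats.append((b - a, max(segment) - min(segment)))
    return feats
```

A leak hole would tend to flatten the pressure range per breath, which is presumably what gives the trained model its discriminative signal.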

Preemptive Failure Detection using Contamination-Based Stacking Ensemble in Missiles

  • Seong-Mok Kim;Ye-Eun Jeong;Yong Soo Kim;Youn-Ho Lee;Seung Young Lee
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.5 / pp.1301-1316 / 2024
  • In modern warfare, missiles play a pivotal role but typically spend the majority of their lifecycle in long-term storage or standby mode, making failures difficult to detect. Preemptive detection of missiles that will fail is crucial to preventing severe consequences, including safety hazards and mission failures. This study proposes a contamination-based stacking ensemble model, employing the local outlier factor (LOF), to detect such missiles. The proposed model creates multiple base LOF models with different contamination values and combines their anomaly scores to achieve robust anomaly detection. A comparative performance analysis was conducted between the proposed model and the traditional single LOF model, using production-related inspection data from missiles deployed in the military. The experimental results showed that, with the contamination parameter set to 0.1, the proposed model exhibited an increase of approximately 22 percentage points in accuracy and 71 percentage points in F1-score compared to the single LOF model. This approach enables the preemptive identification of potential failures that are undetectable through traditional statistical quality control methods. Consequently, it contributes to lower missile failure rates in real battlefield scenarios, leading to significant time and cost savings in the military industry.
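
The ensemble idea, combining anomaly scores from several base models built with different parameters and flagging the top fraction, can be sketched in simplified form. This sketch substitutes a mean k-nearest-neighbour distance for the full LOF score and averages ranks across base models; the k values, rank combination, and flagging rule are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def knn_score(X, k):
    """Mean distance to the k nearest neighbours: a simplified stand-in
    for an LOF-style anomaly score (larger means more anomalous)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, 1:k + 1].mean(axis=1)   # column 0 is the zero self-distance

def stacked_anomaly(X, ks=(3, 5, 7), contamination=0.1):
    """Builds one base score per k, combines them by averaging their
    per-sample ranks, and flags the top `contamination` fraction of
    samples as anomalies (label 1)."""
    scores = np.stack([knn_score(X, k) for k in ks])
    ranks = scores.argsort(axis=1).argsort(axis=1).mean(axis=0)
    n_flag = max(1, int(round(contamination * len(X))))
    labels = np.zeros(len(X), dtype=int)
    labels[np.argsort(ranks)[-n_flag:]] = 1
    return labels
```

Rank averaging makes the combined decision insensitive to the differing scales of the base scores, which is one reason score-combining ensembles of this kind tend to be more robust than any single detector.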