• Title/Summary/Keyword: vision artificial intelligence

Intelligent Abnormal Situation Event Detections for Smart Home Users Using Lidar, Vision, and Audio Sensors (스마트 홈 사용자를 위한 라이다, 영상, 오디오 센서를 이용한 인공지능 이상징후 탐지 알고리즘)

  • Kim, Da-hyeon;Ahn, Jun-ho
    • Journal of Internet Computing and Services
    • /
    • v.22 no.3
    • /
    • pp.17-26
    • /
    • 2021
  • Recently, COVID-19 has spread and time spent at home has increased in accordance with government quarantine guidelines, such as recommendations to refrain from going out. As a result, the number of single-person households staying at home is also increasing, and single-person households are less likely than multi-person households to have emergencies reported to the outside world. This study collects various situations occurring in the home with lidar, image, and audio sensors and analyzes the data from each sensor through its own algorithm. Using this method, we analyzed abnormal patterns such as emergency situations and conducted research to detect abnormal signs in humans. Artificial intelligence algorithms that detect abnormalities in people were studied for each sensor, and anomaly detection accuracy was measured per sensor. Furthermore, this work proposes a fusion method that complements the strengths and weaknesses of the individual sensors, based on experiments on each sensor's detectability across various situations.
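A minimal late-fusion sketch of the idea above: per-sensor anomaly scores are combined into a single decision. The per-sensor detectors, weights, and threshold below are illustrative assumptions, not the algorithm described in the paper.

```python
# Late fusion of per-sensor anomaly scores (lidar, vision, audio).
# Scores, weights, and threshold are assumed values for illustration only.

def fuse_anomaly_scores(scores: dict, weights: dict, threshold: float = 0.5) -> bool:
    """Weighted average of per-sensor anomaly scores in [0, 1]; True raises an alert."""
    total_weight = sum(weights[s] for s in scores)
    fused = sum(scores[s] * weights[s] for s in scores) / total_weight
    return fused >= threshold

# Example: lidar sees an unusual posture, audio hears a loud impact, vision is unsure.
scores = {"lidar": 0.8, "vision": 0.4, "audio": 0.9}
weights = {"lidar": 0.4, "vision": 0.3, "audio": 0.3}   # assumed sensor reliabilities
print(fuse_anomaly_scores(scores, weights))             # True -> raise an alert
```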

Research on Deep Learning Performance Improvement for Similar Image Classification (유사 이미지 분류를 위한 딥 러닝 성능 향상 기법 연구)

  • Lim, Dong-Jin;Kim, Taehong
    • The Journal of the Korea Contents Association
    • /
    • v.21 no.8
    • /
    • pp.1-9
    • /
    • 2021
  • Deep learning in computer vision has improved rapidly over a short period, but large-scale training data and computing power are still essential, and time-consuming trial-and-error work is required to derive an optimal network model. In this study, we propose a similar-image classification performance improvement method based on CR (Confusion Rate) that considers only the characteristics of the data itself, independent of network optimization or data augmentation. The proposed method improves the performance of the deep learning model by calculating the CR for images in a dataset with similar characteristics and reflecting it in the weights of the loss function. In addition, the CR-based recognition method is advantageous for identifying highly similar images because it enables recognition that takes inter-class similarity into account. When the proposed method was applied to the ResNet18 model, it showed a performance improvement of 0.22% on HanDB and 3.38% on Animal-10N. The proposed method is expected to serve as a basis for artificial intelligence research using the noisy labeled data that accompanies large-scale training data.
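The abstract does not give the exact weighting formula, so the sketch below only illustrates the general idea: estimate a per-class confusion rate from a preliminary pass and feed it into the class weights of a standard cross-entropy loss. The formula and data are assumptions for illustration.

```python
# Illustrative CR-style class weighting for a cross-entropy loss (assumed scheme).
import numpy as np
import torch
import torch.nn as nn
from sklearn.metrics import confusion_matrix

def cr_class_weights(y_true, y_pred, num_classes, alpha=1.0):
    """Classes that are confused more often get a larger loss weight."""
    cm = confusion_matrix(y_true, y_pred, labels=list(range(num_classes))).astype(float)
    per_class_total = cm.sum(axis=1).clip(min=1.0)
    cr = 1.0 - np.diag(cm) / per_class_total          # misclassification rate per class
    return torch.tensor(1.0 + alpha * cr, dtype=torch.float32)

# Example with predictions from a preliminary (baseline) training pass.
y_true = [0, 0, 1, 1, 2, 2, 2, 2]
y_pred = [0, 1, 1, 0, 2, 2, 2, 1]
weights = cr_class_weights(y_true, y_pred, num_classes=3)
criterion = nn.CrossEntropyLoss(weight=weights)       # plug into a ResNet18 training loop
```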

Characterization and Classification of Pores in Metal 3D Printing Materials with X-ray Tomography and Machine Learning (X-ray tomography 분석과 기계 학습을 활용한 금속 3D 프린팅 소재 내의 기공 형태 분류)

  • Kim, Eun-Ah;Kwon, Se-Hun;Yang, Dong-Yeol;Yu, Ji-Hun;Kim, Kwon-Ill;Lee, Hak-Sung
    • Journal of Powder Materials
    • /
    • v.28 no.3
    • /
    • pp.208-215
    • /
    • 2021
  • Metal three-dimensional (3D) printing is an important emerging processing method in powder metallurgy, and there are many successful applications of additive manufacturing. However, processing parameters such as laser power and scan speed must still be optimized manually despite the development of artificial intelligence, so automatic calibration using information in an additive manufacturing database is desirable. In this study, 15 commercially pure titanium samples are processed under different conditions, and the 3D pore structures are characterized by X-ray tomography. These samples are readily classified into three categories, unmelted, well melted, or overmelted, depending on the laser energy density. Using more than 10,000 projected images for each category, convolutional neural networks are applied, and almost perfect classification of these samples is obtained. This result demonstrates that machine learning methods based on X-ray tomography can help to automatically identify more suitable processing parameters.
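The specific network is not given in the abstract, so the following is only a small illustrative CNN for sorting grayscale projection images into the three laser-energy categories; the layer sizes are assumptions.

```python
# Illustrative 3-class CNN for X-ray projection images (unmelted / well melted / overmelted).
import torch
import torch.nn as nn

class PoreClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):                  # x: (batch, 1, H, W) grayscale projections
        return self.classifier(self.features(x).flatten(1))

model = PoreClassifier()
logits = model(torch.randn(4, 1, 128, 128))   # 4 dummy projection images
print(logits.argmax(dim=1))                   # predicted laser-energy category per image
```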

A System for Determining the Growth Stage of Fruit Tree Using a Deep Learning-Based Object Detection Model (딥러닝 기반의 객체 탐지 모델을 활용한 과수 생육 단계 판별 시스템)

  • Bang, Ji-Hyeon;Park, Jun;Park, Sung-Wook;Kim, Jun-Yung;Jung, Se-Hoon;Sim, Chun-Bo
    • Smart Media Journal
    • /
    • v.11 no.4
    • /
    • pp.9-18
    • /
    • 2022
  • Recently, research and systems using AI have been increasing rapidly in various fields. Smart farms that combine artificial intelligence with information and communication technology are also being studied in agriculture, and data-based precision agriculture is being commercialized by converging advanced technologies such as autonomous driving, satellites, and big data. In Korea, the number of commercialized smart-agriculture cases in facility agriculture is increasing; however, research and investment remain biased toward facility agriculture, and the gap between facility agriculture and open-field agriculture continues to widen. Fruit trees and plant factories receive comparatively little research and investment, and systems for collecting and utilizing big data in these areas are insufficient. In this paper, we propose a system for determining the growth stage of fruit trees using a deep learning-based object detection model. The system is implemented as a hybrid app for use on agricultural sites, together with an object detection function for determining the fruit tree growth stage.
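The abstract only states that a deep learning-based object detector is used, so this sketch substitutes a generic torchvision detector to show the inference flow; the growth-stage label map, score threshold, and model choice are assumptions for illustration, not the authors' system.

```python
# Illustrative detection-based growth-stage inference (stand-in model and labels).
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

GROWTH_STAGES = {1: "flowering", 2: "fruit set", 3: "ripening"}  # hypothetical label map

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()        # stand-in detector

def detect_growth_stage(image: torch.Tensor, score_threshold: float = 0.7):
    """image: (3, H, W) float tensor in [0, 1]; returns (stage label, box) pairs."""
    with torch.no_grad():
        output = model([image])[0]
    keep = output["scores"] >= score_threshold
    return [(GROWTH_STAGES.get(int(label), "unknown"), box.tolist())
            for label, box in zip(output["labels"][keep], output["boxes"][keep])]

print(detect_growth_stage(torch.rand(3, 480, 640)))
```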

Research on Human Posture Recognition System Based on The Object Detection Dataset (객체 감지 데이터 셋 기반 인체 자세 인식시스템 연구)

  • Liu, Yan;Li, Lai-Cun;Lu, Jing-Xuan;Xu, Meng;Jeong, Yang-Kwon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.17 no.1
    • /
    • pp.111-118
    • /
    • 2022
  • In computer vision research, two-dimensional human pose estimation is a broad research direction, especially in pose tracking and behavior recognition, where it has important research significance. Acquiring human pose targets, which is essentially the study of how to accurately identify human subjects in images, has been a hot research topic of great interest in recent years. Human pose recognition is applied both in artificial intelligence systems and in daily life, and its usefulness is determined mainly by the success rate and accuracy of the recognition process, which reflects the importance of the recognition rate. In the proposed human pose recognition, the human body is annotated with 17 key points, and the key points are further segmented to ensure the accuracy of the labeling information. For the recognition design, the comprehensive MS COCO dataset is used to train a deep neural network model on a large number of samples, progressing from simple step-by-step training to efficient training so that a good accuracy rate can be obtained.
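A minimal sketch of 17-keypoint human pose detection in the MS COCO convention, using an off-the-shelf torchvision model as a stand-in for the network designed in the paper (whose architecture is not given in the abstract); the score threshold is an assumed value.

```python
# 17-keypoint (COCO convention) pose detection with a pretrained stand-in model.
import torch
from torchvision.models.detection import keypointrcnn_resnet50_fpn

COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

model = keypointrcnn_resnet50_fpn(weights="DEFAULT").eval()

with torch.no_grad():
    output = model([torch.rand(3, 480, 640)])[0]      # one dummy RGB image

# For each confidently detected person, pair the 17 keypoint names with (x, y) coordinates.
for person_kpts, score in zip(output["keypoints"], output["scores"]):
    if score >= 0.8:
        pose = {name: kpt[:2].tolist() for name, kpt in zip(COCO_KEYPOINTS, person_kpts)}
        print(pose)
```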

Age and Gender Classification with Small Scale CNN (소규모 합성곱 신경망을 사용한 연령 및 성별 분류)

  • Jamoliddin, Uraimov;Yoo, Jae Hung
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.17 no.1
    • /
    • pp.99-104
    • /
    • 2022
  • Artificial intelligence is becoming a crucial part of our lives, offering remarkable benefits. Machines now outperform humans in recognizing objects in images, particularly in classifying people into the correct age and gender groups. Accordingly, age and gender classification has been one of the hot topics among computer vision researchers in recent decades. Deep Convolutional Neural Network (CNN) models have achieved state-of-the-art performance; however, most CNN-based architectures are very complex, with a large number of trainable parameters, and therefore require substantial computation time and resources. For this reason, we propose a new CNN-based classification algorithm with significantly fewer trainable parameters and shorter training time than existing methods. Despite its lower complexity, our model shows better age and gender classification accuracy on the UTKFace dataset.
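The abstract does not specify the proposed small-scale architecture, so the following is only an assumed, compact multi-task CNN with a shared trunk and separate gender and age-group heads, to illustrate how such a lightweight classifier can be organized.

```python
# Illustrative small multi-task CNN for age-group and gender classification (assumed sizes).
import torch
import torch.nn as nn

class SmallAgeGenderNet(nn.Module):
    def __init__(self, num_age_groups: int = 8):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.gender_head = nn.Linear(32, 2)
        self.age_head = nn.Linear(32, num_age_groups)

    def forward(self, x):                      # x: (batch, 3, H, W) face crops
        feats = self.trunk(x)
        return self.gender_head(feats), self.age_head(feats)

model = SmallAgeGenderNet()
gender_logits, age_logits = model(torch.randn(2, 3, 64, 64))
print(gender_logits.shape, age_logits.shape)   # torch.Size([2, 2]) torch.Size([2, 8])
```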

Ensemble-based deep learning for autonomous bridge component and damage segmentation leveraging Nested Reg-UNet

  • Abhishek Subedi;Wen Tang;Tarutal Ghosh Mondal;Rih-Teng Wu;Mohammad R. Jahanshahi
    • Smart Structures and Systems
    • /
    • v.31 no.4
    • /
    • pp.335-349
    • /
    • 2023
  • Bridges constantly undergo deterioration and damage, the most common ones being concrete damage and exposed rebar. Periodic inspection of bridges to identify damages can aid in their quick remediation. Likewise, identifying components can provide context for damage assessment and help gauge a bridge's state of interaction with its surroundings. Current inspection techniques rely on manual site visits, which can be time-consuming and costly. More recently, robotic inspection assisted by autonomous data analytics based on Computer Vision (CV) and Artificial Intelligence (AI) has been viewed as a suitable alternative to manual inspection because of its efficiency and accuracy. To aid research in this avenue, this study performs a comparative assessment of different architectures, loss functions, and ensembling strategies for the autonomous segmentation of bridge components and damages. The experiments lead to several interesting discoveries. Nested Reg-UNet architecture is found to outperform five other state-of-the-art architectures in both damage and component segmentation tasks. The architecture is built by combining a Nested UNet style dense configuration with a pretrained RegNet encoder. In terms of the mean Intersection over Union (mIoU) metric, the Nested Reg-UNet architecture provides an improvement of 2.86% on the damage segmentation task and 1.66% on the component segmentation task compared to the state-of-the-art UNet architecture. Furthermore, it is demonstrated that incorporating the Lovasz-Softmax loss function to counter class imbalance can boost performance by 3.44% in the component segmentation task over the most employed alternative, weighted Cross Entropy (wCE). Finally, weighted softmax ensembling is found to be quite effective when used synchronously with the Nested Reg-UNet architecture by providing mIoU improvement of 0.74% in the component segmentation task and 1.14% in the damage segmentation task over a single-architecture baseline. Overall, the best mIoU of 92.50% for the component segmentation task and 84.19% for the damage segmentation task validate the feasibility of these techniques for autonomous bridge component and damage segmentation using RGB images.
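A short sketch of the weighted softmax ensembling idea mentioned above: softmax each model's segmentation logits, take a weighted average of the class probabilities, then pick the per-pixel argmax. The weights and tensor sizes below are placeholders, not the values used in the paper.

```python
# Weighted softmax ensembling of segmentation logits from multiple models (illustrative).
import torch
import torch.nn.functional as F

def weighted_softmax_ensemble(logits_list, weights):
    """logits_list: list of (B, C, H, W) segmentation logits from different models."""
    weights = torch.tensor(weights, dtype=torch.float32)
    weights = weights / weights.sum()
    probs = sum(w * F.softmax(logits, dim=1)
                for w, logits in zip(weights, logits_list))
    return probs.argmax(dim=1)                 # (B, H, W) predicted class map

# Example with two dummy models over 4 component classes on a 64x64 image.
pred = weighted_softmax_ensemble(
    [torch.randn(1, 4, 64, 64), torch.randn(1, 4, 64, 64)],
    weights=[0.6, 0.4],
)
print(pred.shape)   # torch.Size([1, 64, 64])
```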

Cases of health care services for the elderly using IT technology and future development directions (IT 기술을 활용한 노인돌봄서비스 사례 및 개발 동향)

  • Kim, Han-byeol;Kim, Ji-hong;Lee, Sung-mo;Choi, Hun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.496-498
    • /
    • 2022
  • With the prolonged worldwide spread of the novel coronavirus infection and the transition to a super-aged society, smart health care, which combines IT technology with senior health care and the health care industry, is emerging as a solution to the aging problem. The development of non-face-to-face care services using AI is a global trend rather than one limited to a few countries, and the form of elderly care services that use AI technology is changing rapidly. The convenience of AI-based care services for the elderly is expected to be highlighted, and the related technology and market are expected to grow significantly. As the number of single-person households increases, the shortage of welfare workers for the elderly is emerging as a social issue. IT-based care services are presented as a vision for solving long-term social problems such as this labor shortage, in addition to offering convenient care. Therefore, we propose a development direction for elderly care services through case studies of such services, as a countermeasure to the super-aged society.

Object Detection Based on Deep Learning Model for Two Stage Tracking with Pest Behavior Patterns in Soybean (Glycine max (L.) Merr.)

  • Yu-Hyeon Park;Junyong Song;Sang-Gyu Kim ;Tae-Hwan Jun
    • Proceedings of the Korean Society of Crop Science Conference
    • /
    • 2022.10a
    • /
    • pp.89-89
    • /
    • 2022
  • Soybean (Glycine max (L.) Merr.) is a representative food resource. To preserve soybean integrity, it is necessary to protect yield and seed quality from the threats of various pests and diseases. Riptortus pedestris is a well-known insect pest that causes the greatest loss of soybean yield in South Korea; it not only directly reduces yields but also causes disorders and diseases in plant growth. Unfortunately, no resistant soybean resources have been reported. It is therefore necessary to identify the distribution and movement of Riptortus pedestris at an early stage to reduce the damage caused by this pest. Conventionally, the diagnosis of agronomic traits related to pest outbreaks has been performed by the human eye; however, because human vision is subjective and inconsistent, this approach is time-consuming, labor-intensive, and requires the assistance of specialists. Therefore, the responses and behavior patterns of Riptortus pedestris to the scent of mixture R were visualized with a 3D model from an artificial intelligence perspective. The movement patterns of Riptortus pedestris were analyzed using time-series image data, and classification was performed through visual analysis based on a deep learning model. In object tracking implemented with a YOLO-series model, the movement paths of the pests show a negative reaction to mixture R in the video scenes. As a result of 3D modeling using the x, y, and z coordinates of the tracked objects, 80% of the subjects showed behavioral patterns consistent with the mixture R treatment. These studies are also being conducted in soybean fields, and it is expected that soybean yield can be preserved by applying a pest control platform at the early growth stage of soybeans.
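The abstract does not describe the 3D modeling or behavior criteria in detail, so the following is only one assumed way to score a tracked 3D path as moving toward or away from the scent source; the trajectory values are made up for illustration.

```python
# Assumed avoidance scoring for a tracked 3D trajectory relative to a scent source.
import numpy as np

def avoidance_score(track_xyz: np.ndarray, source_xyz: np.ndarray) -> float:
    """track_xyz: (T, 3) tracked positions over time. Positive -> moving away."""
    distances = np.linalg.norm(track_xyz - source_xyz, axis=1)
    return float(distances[-1] - distances[0])

# Example: a pest tracked over 5 frames, with the mixture R source at the origin.
track = np.array([[1.0, 1.0, 0.0], [1.5, 1.2, 0.0], [2.0, 1.5, 0.1],
                  [2.6, 1.9, 0.1], [3.2, 2.3, 0.2]])
print(avoidance_score(track, np.zeros(3)) > 0)   # True -> consistent with avoidance
```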

Deep-learning performance in identifying and classifying dental implant systems from dental imaging: a systematic review and meta-analysis

  • Akhilanand Chaurasia;Arunkumar Namachivayam;Revan Birke Koca-Unsal;Jae-Hong Lee
    • Journal of Periodontal and Implant Science
    • /
    • v.54 no.1
    • /
    • pp.3-12
    • /
    • 2024
  • Deep learning (DL) offers promising performance in computer vision tasks and is highly suitable for dental image recognition and analysis. We evaluated the accuracy of DL algorithms in identifying and classifying dental implant systems (DISs) using dental imaging. In this systematic review and meta-analysis, we explored the MEDLINE/PubMed, Scopus, Embase, and Google Scholar databases and identified studies published between January 2011 and March 2022. Studies conducted on DL approaches for DIS identification or classification were included, and the accuracy of the DL models was evaluated using panoramic and periapical radiographic images. The quality of the selected studies was assessed using QUADAS-2. This review was registered with PROSPERO (CRD42022309624). From 1,293 identified records, 9 studies were included in this systematic review and meta-analysis. The DL-based implant classification accuracy was no lower than 70.75% (95% confidence interval [CI], 65.6%-75.9%) and no higher than 98.19% (95% CI, 97.8%-98.5%). The weighted accuracy was calculated, and the pooled sample size was 46,645, with an overall accuracy of 92.16% (95% CI, 90.8%-93.5%). The risk of bias and applicability concerns were judged as high for most studies, mainly regarding data selection and reference standards. DL models showed high accuracy in identifying and classifying DISs using panoramic and periapical radiographic images. Therefore, DL models are promising prospects for use as decision aids and decision-making tools; however, there are limitations with respect to their application in actual clinical practice.
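A brief sketch of the sample-size-weighted pooling used to summarize accuracy across studies in a meta-analysis of this kind; the per-study numbers below are made up for illustration and are not the data reported in the review.

```python
# Sample-size-weighted pooled accuracy across studies (illustrative numbers only).
def pooled_accuracy(accuracies, sample_sizes):
    """Weighted mean accuracy, weighting each study by its sample size."""
    total = sum(sample_sizes)
    return sum(a * n for a, n in zip(accuracies, sample_sizes)) / total

accuracies   = [0.85, 0.93, 0.97]     # hypothetical per-study accuracies
sample_sizes = [1200, 5400, 3000]     # hypothetical per-study image counts
print(f"{pooled_accuracy(accuracies, sample_sizes):.4f}")
```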