• Title/Summary/Keyword: Image machine learning

595 search results

A pilot study of an automated personal identification process: Applying machine learning to panoramic radiographs

  • Ortiz, Adrielly Garcia;Soares, Gustavo Hermes;da Rosa, Gabriela Cauduro;Biazevic, Maria Gabriela Haye;Michel-Crosato, Edgard
    • Imaging Science in Dentistry / v.51 no.2 / pp.187-193 / 2021
  • Purpose: This study aimed to assess the usefulness of machine learning and automation techniques to match pairs of panoramic radiographs for personal identification. Materials and Methods: Two hundred panoramic radiographs from 100 patients (50 males and 50 females) were randomly selected from a private radiological service database. Initially, 14 linear and angular measurements of the radiographs were made by an expert. Eight ratio indices derived from the original measurements were applied to a statistical algorithm to match radiographs from the same patients, simulating a semi-automated personal identification process. Subsequently, measurements were automatically generated using a deep neural network for image recognition, simulating a fully automated personal identification process. Results: Approximately 85% of the radiographs were correctly matched by the automated personal identification process. In a limited number of cases, the image recognition algorithm identified 2 potential matches for the same individual. No statistically significant differences were found between measurements performed by the expert on panoramic radiographs from the same patients. Conclusion: Personal identification might be performed with the aid of image recognition algorithms and machine learning techniques. This approach will likely facilitate the complex task of personal identification by performing an initial screening of radiographs and matching ante-mortem and post-mortem images from the same individuals.
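
The matching step described above can be illustrated with a simple nearest-neighbour comparison of ratio-index vectors. The sketch below is only a toy illustration under assumed data (random placeholder features, 8 indices per patient), not the authors' statistical algorithm:

```python
# A minimal sketch (not the authors' code) of matching radiograph pairs by
# comparing ratio-index feature vectors; the feature values are hypothetical.
import numpy as np
from scipy.spatial.distance import cdist

def match_radiographs(features_a, features_b):
    """Match each radiograph in set A to its closest counterpart in set B.

    features_a, features_b: arrays of shape (n_patients, n_indices), where each
    row holds the ratio indices derived from one panoramic radiograph.
    """
    # Pairwise Euclidean distances between all A/B feature vectors.
    distances = cdist(features_a, features_b, metric="euclidean")
    # For every radiograph in A, pick the nearest radiograph in B.
    return distances.argmin(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_features = rng.normal(size=(100, 8))            # 8 ratio indices per patient
    repeat_exam = true_features + rng.normal(scale=0.05, size=(100, 8))  # second exposure
    matches = match_radiographs(true_features, repeat_exam)
    accuracy = (matches == np.arange(100)).mean()
    print(f"correctly matched: {accuracy:.0%}")
```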

Post-processing Algorithm Based on Edge Information to Improve the Accuracy of Semantic Image Segmentation (의미론적 영상 분할의 정확도 향상을 위한 에지 정보 기반 후처리 방법)

  • Kim, Jung-Hwan;Kim, Seon-Hyeok;Kim, Joo-heui;Choi, Hyung-Il
    • The Journal of the Korea Contents Association / v.21 no.3 / pp.23-32 / 2021
  • Semantic image segmentation is a computer vision technique that classifies an image at the pixel level. Its performance is improving rapidly thanks to machine learning methods, and the potential of using pixel-level information is drawing attention. However, from its early days until recently, the technique has suffered from a 'lack of detailed segmentation' problem. Since this problem is caused by enlarging (upsampling) the label map, we expected that the label map could be improved by using the edge map of the original image, which contains detailed edge information. Therefore, in this paper, we propose a post-processing algorithm that keeps the learning-based semantic segmentation result but modifies the resulting label map based on the edge map of the original image. When the algorithm was applied to an existing method and the results before and after were compared, improvements of approximately 1.74% in pixel accuracy and 1.35% in IoU (Intersection over Union) were obtained, and analysis of the results showed that fine segmentation of target regions was improved.
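
One simple way to realize the edge-guided refinement idea described above (an illustration only, not the authors' algorithm) is to seed a watershed with the eroded interior of each predicted region, so that region boundaries move to the edges of the original image:

```python
# A minimal sketch of edge-aware refinement of a coarse semantic label map:
# eroded regions are used as markers and watershed snaps the region boundaries
# to edges/gradients of the original image.
import cv2
import numpy as np

def refine_label_map(image_bgr, label_map):
    """image_bgr: HxWx3 uint8 original image; label_map: HxW integer labels."""
    markers = np.zeros(label_map.shape, dtype=np.int32)
    kernel = np.ones((7, 7), np.uint8)
    for lab in np.unique(label_map):
        mask = (label_map == lab).astype(np.uint8)
        # Keep only the confident interior of each region as a watershed seed.
        interior = cv2.erode(mask, kernel)
        markers[interior > 0] = int(lab) + 1          # watershed labels start at 1
    cv2.watershed(image_bgr, markers)                 # boundaries move to image edges
    # Where watershed assigned a label, use it; on boundary pixels (-1), keep the original.
    return np.where(markers >= 1, markers - 1, label_map)
```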

Study on Detection for Cochlodinium polykrikoides Red Tide using the GOCI image and Machine Learning Technique (GOCI 영상과 기계학습 기법을 이용한 Cochlodinium polykrikoides 적조 탐지 기법 연구)

  • Unuzaya, Enkhjargal;Bak, Su-Ho;Hwang, Do-Hyun;Jeong, Min-Ji;Kim, Na-Kyeong;Yoon, Hong-Joo
    • The Journal of the Korea institute of electronic communication sciences / v.15 no.6 / pp.1089-1098 / 2020
  • In this study, we propose a method to detect Cochlodinium polykrikoides red tide using machine learning and geostationary marine satellite images. To train the machine learning models, GOCI Level 2 data were used together with the red tide location data of the National Fisheries Research and Development Institute. Logistic regression, decision tree, and random forest models were used. The performance evaluation showed that, compared to a traditional GOCI image-based red tide detection algorithm without machine learning (Son et al., 2012) (75%), the accuracy was improved by about 13~22%p (88~98%). In addition, a comparison of detection performance between the machine learning models showed that the random forest model (98%) had the highest detection accuracy. This machine learning-based red tide detection algorithm is expected to be usable for early detection of red tide and for tracking and monitoring its movement and spread.
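
A pixel-wise random forest detector of the kind described above can be sketched as follows; the spectral features and labels here are placeholders, not GOCI data:

```python
# A minimal sketch (not the authors' pipeline) of pixel-wise red tide detection:
# a random forest trained on per-pixel spectral features with binary labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.random((5000, 8))                  # 8 spectral features per pixel (placeholder)
y = (X[:, 3] - X[:, 5] > 0.2).astype(int)  # placeholder red-tide label rule

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("detection accuracy:", accuracy_score(y_test, model.predict(X_test)))
```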

Human Face Recognition using Multi-Class Projection Extreme Learning Machine

  • Xu, Xuebin;Wang, Zhixiao;Zhang, Xinman;Yan, Wenyao;Deng, Wanyu;Lu, Longbin
    • IEIE Transactions on Smart Processing and Computing / v.2 no.6 / pp.323-331 / 2013
  • An extreme learning machine (ELM) is an efficient learning algorithm based on generalized single-hidden-layer feed-forward networks (SLFNs), which perform well in classification applications. Many studies have demonstrated its superiority over existing classical algorithms such as the support vector machine (SVM) and the BP neural network. This paper presents a novel face recognition approach based on a multi-class projection extreme learning machine (MPELM) classifier and the 2D Gabor transform. First, face image features were extracted using 2D Gabor filters, and the MPELM classifier was then used to determine the final face classification. Two well-known face databases (CMU-PIE and ORL) were used to evaluate the performance. The experimental results showed that the MPELM-based method outperformed the ELM-based method as well as other methods.
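
For reference, a basic ELM classifier (random hidden weights plus a closed-form, pseudo-inverse solution for the output weights) can be written in a few lines. This is the plain ELM, not the MPELM variant of the paper, and Gabor feature extraction is omitted:

```python
# A minimal sketch of a basic extreme learning machine (ELM) classifier.
import numpy as np

class ELMClassifier:
    def __init__(self, n_hidden=500, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid activations

    def fit(self, X, y):
        n_features = X.shape[1]
        n_classes = int(y.max()) + 1
        self.W = self.rng.normal(size=(n_features, self.n_hidden))  # random, never trained
        self.b = self.rng.normal(size=self.n_hidden)
        T = np.eye(n_classes)[y]                     # one-hot targets
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ T            # closed-form output weights
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)
```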

A Review on Advanced Methodologies to Identify the Breast Cancer Classification using the Deep Learning Techniques

  • Bandaru, Satish Babu;Babu, G. Rama Mohan
    • International Journal of Computer Science & Network Security / v.22 no.4 / pp.420-426 / 2022
  • Breast cancer is among the cancers that can be cured when the disease is diagnosed early, before it has spread to other areas of the body. The Automatic Analysis of Diagnostic Tests (AAT) is an automated assistance for physicians that can deliver reliable findings for the analysis of critical diseases. Deep learning, a family of machine learning methods, has grown at an astonishing pace in recent years and is used to search for and render diagnoses in fields ranging from banking to medicine. We attempt to create a deep learning algorithm that can reliably diagnose breast cancer in mammograms. We want the algorithm to label an image as cancer or not cancer, allowing the use of a full testing dataset with either strong clinical annotations in the training data or only the cancer status, in which only a few cancer or non-cancer images are annotated. Even with this technique, the images would be annotated with the condition; an optional portion of the annotated images then acts as the label. The final stage of the suggested system does not need any additional labels to be available during model training. Furthermore, the results of the review suggest that deep learning approaches have surpassed the state of the art in tumor identification, feature extraction, and classification. The paper explains three ways in which learning algorithms were applied: training a network from scratch, transferring certain deep learning concepts and constraints into a network, and reducing the number of parameters in the trained networks, which help expand the scope of the networks. Researchers in economically developing countries have applied deep learning imaging devices to cancer detection, while cancer incidence in Africa has risen sharply. A convolutional neural network (CNN) is a type of deep learning model that supports a variety of tasks, such as speech recognition, image recognition, and classification. To accomplish this goal, this article uses a CNN to classify and identify breast cancer images from databases available from the US Centers for Disease Control and Prevention.
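
A binary-classification CNN of the kind discussed in the review might look like the following sketch; the architecture, input size, and data pipeline are assumptions rather than a model from any cited study:

```python
# A minimal sketch of a binary-classification CNN for mammogram images.
import tensorflow as tf

def build_mammogram_cnn(input_shape=(224, 224, 1)):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # cancer vs. non-cancer
    ])

model = build_mammogram_cnn()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_split=0.2, epochs=10)  # with real data
```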

Design and Implementation of Machine Learning System for Fine Dust Anomaly Detection based on Big Data (빅데이터 기반 미세먼지 이상 탐지 머신러닝 시스템 설계 및 구현)

  • Jae-Won Lee;Chi-Ho Lin
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.1 / pp.55-58 / 2024
  • In this paper, we propose the design and implementation of a big data-based machine learning system for fine dust anomaly detection. The proposed system classifies the fine dust air quality index using meteorological information composed of fine dust measurements and big data. It classifies fine dust by means of a machine learning-based anomaly detection algorithm that identifies outliers within each air quality index category. Images are collected from the camera according to the fine dust level, and the depth data of these images is used to create a fine dust visibility mask. Then, using a learning-based fingerprinting technique with a monocular depth estimation algorithm, the fine dust level is derived by inferring the visibility distance from the images collected by the monocular camera. For the experiments and analysis of this method, training data were created by matching fine dust level data with CCTV image data by region and time, after which a model was built and tested in a real environment.
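
The per-category outlier detection mentioned above could, for example, be approximated with a simple interquartile-range rule; the column names and measurements below are assumptions, not the authors' data schema:

```python
# A minimal sketch (assumed data layout, not the authors' system) of flagging
# outliers in fine dust measurements within each air quality index category.
import pandas as pd

def flag_outliers(df, value_col="pm10", category_col="aqi_category", k=1.5):
    """Mark rows whose value falls outside the IQR fence of its AQI category."""
    def _flag(group):
        q1, q3 = group[value_col].quantile([0.25, 0.75])
        iqr = q3 - q1
        low, high = q1 - k * iqr, q3 + k * iqr
        return (group[value_col] < low) | (group[value_col] > high)
    return df.groupby(category_col, group_keys=False).apply(_flag)

# Example with made-up measurements:
data = pd.DataFrame({
    "aqi_category": ["good"] * 5 + ["bad"] * 5,
    "pm10": [10, 12, 11, 9, 80, 90, 95, 92, 88, 300],
})
data["anomaly"] = flag_outliers(data)
print(data)
```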

Image Classification of Damaged Bolts using Convolution Neural Networks (합성곱 신경망을 이용한 손상된 볼트의 이미지 분류)

  • Lee, Soo-Byoung;Lee, Seok-Soon
    • Journal of Aerospace System Engineering / v.16 no.4 / pp.109-115 / 2022
  • The CNN (convolutional neural network) algorithm, which combines deep learning techniques and computer vision technology, makes image classification feasible on high-performance computing systems. In this paper, the CNN algorithm is applied to the classification problem using TensorFlow, a typical deep learning framework, and machine learning techniques. The dataset required for supervised learning is generated from bolts of the same type, some of which have undamaged threads while others have damaged threads. The model, trained with a relatively small amount of data, showed good classification performance in detecting damage in bolt images. Additionally, model performance is examined by varying the number of convolution layers and by selectively applying algorithms to alleviate overfitting and underfitting.
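
A TensorFlow model along these lines might be sketched as below, parameterized so that the number of convolution layers and a dropout-based overfitting remedy can be varied; the exact architecture and directory layout are assumptions, not the paper's model:

```python
# A minimal sketch of a TensorFlow CNN for damaged/undamaged bolt images.
import tensorflow as tf

def build_bolt_classifier(n_conv_layers=2, dropout_rate=0.3, input_shape=(128, 128, 3)):
    layers = [tf.keras.layers.Input(shape=input_shape),
              tf.keras.layers.Rescaling(1.0 / 255)]
    filters = 32
    for _ in range(n_conv_layers):
        layers += [tf.keras.layers.Conv2D(filters, 3, activation="relu"),
                   tf.keras.layers.MaxPooling2D()]
        filters *= 2
    layers += [tf.keras.layers.Flatten(),
               tf.keras.layers.Dropout(dropout_rate),          # overfitting alleviation
               tf.keras.layers.Dense(1, activation="sigmoid")]  # damaged vs. undamaged
    model = tf.keras.Sequential(layers)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Hypothetical directory layout: bolt_images/damaged, bolt_images/undamaged
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "bolt_images", image_size=(128, 128), batch_size=16, label_mode="binary")
# build_bolt_classifier(n_conv_layers=3).fit(train_ds, epochs=20)
```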

A Novel Image Classification Method for Content-based Image Retrieval via a Hybrid Genetic Algorithm and Support Vector Machine Approach

  • Seo, Kwang-Kyu
    • Journal of the Semiconductor & Display Technology / v.10 no.3 / pp.75-81 / 2011
  • This paper presents a novel method for image classification based on a hybrid genetic algorithm (GA) and support vector machine (SVM) approach, which can significantly improve classification performance for content-based image retrieval (CBIR). Although SVM has been widely applied to CBIR, it has some problems, such as kernel parameter setting and feature subset selection, which affect classification accuracy in the learning process. This study aims at simultaneously optimizing the SVM parameters and the feature subset for CBIR using a GA, without degrading the classification accuracy of the SVM. Using the hybrid GA and SVM model, we can classify more images in the database effectively. Experiments were carried out on a large database of images, and the results show that the classification accuracy of a conventional SVM can be improved significantly by using the proposed model. We also found that the proposed model outperformed all the other models tested, such as neural network and typical SVM models.
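
The joint optimization described above can be sketched with a small genetic algorithm whose chromosome holds the SVM hyperparameters and a feature mask, and whose fitness is cross-validated accuracy. The dataset, encoding, and GA settings below are stand-ins, not the paper's implementation:

```python
# A minimal sketch of jointly tuning SVM hyperparameters and a feature-subset
# mask with a simple genetic algorithm; fitness is cross-validated accuracy.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
X, y = X[:500], y[:500]                    # small subset to keep the demo fast
n_features = X.shape[1]

def random_individual():
    # Chromosome = [log10(C), log10(gamma), one bit per feature]
    return np.concatenate([rng.uniform(-2, 3, 1), rng.uniform(-5, 0, 1),
                           rng.integers(0, 2, n_features).astype(float)])

def fitness(ind):
    mask = ind[2:].astype(bool)
    if not mask.any():
        return 0.0
    svm = SVC(C=10 ** ind[0], gamma=10 ** ind[1])
    return cross_val_score(svm, X[:, mask], y, cv=3).mean()

def crossover_and_mutate(a, b):
    cut = rng.integers(1, a.size)                      # one-point crossover
    child = np.concatenate([a[:cut], b[cut:]])
    child[:2] += rng.normal(scale=0.2, size=2)         # perturb the hyperparameters
    flip = rng.random(n_features) < 0.02               # flip a few feature bits
    child[2:][flip] = 1 - child[2:][flip]
    return child

population = [random_individual() for _ in range(12)]
for generation in range(5):
    scores = np.array([fitness(ind) for ind in population])
    parents = [population[i] for i in scores.argsort()[-6:]]   # keep the best half
    children = []
    while len(children) < 6:
        i, j = rng.choice(len(parents), size=2, replace=False)
        children.append(crossover_and_mutate(parents[i], parents[j]))
    population = parents + children

best = max(population, key=fitness)
print(f"best CV accuracy: {fitness(best):.3f}, features kept: {int(best[2:].sum())}")
```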

Implementation of Image Enhancement Algorithm using Learning User Preferences (선호도 학습을 통한 이미지 개선 알고리즘 구현)

  • Lee, YuKyong;Lee, Yong-Hwan
    • Journal of the Semiconductor & Display Technology / v.17 no.1 / pp.71-75 / 2018
  • Image enhancement is a necessary and essential step after taking a picture with a digital camera. Many photo software packages attempt to automate this process with various auto-enhancement techniques. This paper presents and implements a system that can learn a user's preferences and apply them to the image enhancement process. The implemented system consists of five major components: computing a distance metric, finding a training set, finding an optimal parameter set, training, and finally enhancing the input image. To assess the validity of the method, we carried out user studies, which showed that the implemented system was preferred over the method without learning user preferences.
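
The preference-reuse idea can be illustrated roughly as follows: find the most similar training image under a simple distance metric and apply its stored user-preferred parameters. The feature choice and enhancement parameters are assumptions, not the paper's components:

```python
# A minimal sketch of preference-based enhancement: reuse the parameters the
# user chose for the most similar training image.
import numpy as np
from PIL import Image, ImageEnhance

def features(img):
    """Simple distance-metric features: a normalized grayscale histogram."""
    hist = np.asarray(img.convert("L").histogram(), dtype=float)
    return hist / hist.sum()

def enhance_with_preferences(input_img, training_images, preferred_params):
    """training_images: list of PIL images; preferred_params: list of
    (brightness, contrast) factors the user chose for each training image."""
    f = features(input_img)
    dists = [np.linalg.norm(f - features(t)) for t in training_images]
    brightness, contrast = preferred_params[int(np.argmin(dists))]
    out = ImageEnhance.Brightness(input_img).enhance(brightness)
    return ImageEnhance.Contrast(out).enhance(contrast)
```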

Assembly performance evaluation method for prefabricated steel structures using deep learning and k-nearest neighbors

  • Hyuntae Bang;Byeongjun Yu;Haemin Jeon
    • Smart Structures and Systems / v.32 no.2 / pp.111-121 / 2023
  • This study proposes an automated assembly performance evaluation method for prefabricated steel structures (PSSs) using machine learning methods. Assembly component images were segmented using a modified version of the receptive field pyramid. By factorizing channel modulation and the receptive field exploration layers of the convolution pyramid, highly accurate segmentation results were obtained. After completing segmentation, the positions of the bolt holes were calculated using various image processing techniques, such as fuzzy-based edge detection, Hough line detection, and image perspective transformation. By calculating the distance ratio between bolt holes, the assembly performance of the PSS was estimated using the k-nearest neighbors (kNN) algorithm. The effectiveness of the proposed framework was validated using a 3D-printed PSS model and a field test. The results indicated that this approach could recognize assembly components with an intersection over union (IoU) of 95% and evaluate assembly performance with an error of less than 5%.
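
The final kNN step described above can be sketched as follows, with made-up distance-ratio data standing in for the measurements extracted from segmented images:

```python
# A minimal sketch of classifying assembly quality from bolt-hole distance
# ratios with kNN; the data are fabricated for illustration.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
# Each row: distance ratios between detected bolt holes for one assembly.
good = rng.normal(loc=1.0, scale=0.01, size=(40, 4))     # ratios near nominal design
poor = rng.normal(loc=1.0, scale=0.08, size=(40, 4))     # larger deviations
X = np.vstack([good, poor])
y = np.array([1] * 40 + [0] * 40)                        # 1 = acceptable, 0 = not

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
new_ratios = np.array([[1.01, 0.99, 1.00, 1.02]])
print("assembly acceptable?", bool(knn.predict(new_ratios)[0]))
```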