• Title/Summary/Keyword: cnn

Search Results: 2,143

Image-based Soft Drink Type Classification and Dietary Assessment System Using Deep Convolutional Neural Network with Transfer Learning

  • Rubaiya Hafiz;Mohammad Reduanul Haque;Aniruddha Rakshit;Amina khatun;Mohammad Shorif Uddin
    • International Journal of Computer Science & Network Security / v.24 no.2 / pp.158-168 / 2024
  • Hardly anyone in modern times drinks only water and never takes soft drinks. Because the rate of soft drink consumption is surprisingly high, researchers around the world have repeatedly cautioned that these drinks lead to weight gain, raise the risk of non-communicable diseases, and so on. Therefore, in this work an image-based tool is developed to monitor the nutritional information of soft drinks using a deep convolutional neural network with transfer learning. First, visual saliency, mean shift segmentation, thresholding, and noise reduction, collectively known as 'pre-processing', are applied to locate the drink region. After removing the background and segmenting out only the desired area of the image, a Discrete Wavelet Transform (DWT) based resolution enhancement technique is applied to improve image quality. A transfer learning model is then employed for the classification of drinks. Finally, the nutritional value of each drink is estimated using Bag-of-Features (BoF) based classification and a Euclidean distance-based ratio calculation technique. To achieve this, a dataset was built from the ten most consumed soft drinks in Bangladesh, with images collected from the ImageNet dataset as well as the internet. The proposed method detects and recognizes the different types of drinks with an accuracy of 98.51%.
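
A minimal sketch of the transfer-learning classification step described in this abstract is shown below; the ResNet-50 backbone, frozen-layer strategy, and hyperparameters are illustrative assumptions, not the authors' reported configuration.

```python
# Hypothetical sketch: transfer-learning classifier for 10 drink classes (PyTorch).
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 10  # ten most consumed soft drinks (assumed label set)

# Load an ImageNet-pretrained backbone and replace the final layer.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                                   # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)       # new trainable head

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # only the head is trained
criterion = nn.CrossEntropyLoss()
```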

A Unicode based Deep Handwritten Character Recognition model for Telugu to English Language Translation

  • BV Subba Rao;J. Nageswara Rao;Bandi Vamsi;Venkata Nagaraju Thatha;Katta Subba Rao
    • International Journal of Computer Science & Network Security / v.24 no.2 / pp.101-112 / 2024
  • Telugu is considered the fourth most used language in India, especially in the regions of Andhra Pradesh, Telangana, and Karnataka, and it is also a widely spoken and growing language internationally. The language comprises different dependent and independent vowels, consonants, and digits. Despite this, Telugu Handwritten Character Recognition (HCR) has not advanced accordingly. HCR is a neural network technique for converting a document image into editable text, which can then be used in many other applications; it saves time and effort by avoiding the need to start over from the beginning every time. In this work, a Unicode based Handwritten Character Recognition (U-HCR) model is developed for translating handwritten Telugu characters into the English language. Using the Centre of Gravity (CG) in our model, a compound character can easily be divided into individual characters with the help of Unicode values. Both online and offline Telugu character datasets were used to train the model. To extract features from the scanned image, a convolutional neural network is used along with machine learning classifiers such as Random Forest and Support Vector Machine. Stochastic Gradient Descent (SGD), Root Mean Square Propagation (RMS-P), and Adaptive Moment Estimation (ADAM) optimizers are used to enhance the performance of U-HCR and to reduce the loss value of the CNN. On both the online and offline datasets, the proposed model showed promising results, with accuracies of 90.28% for SGD, 96.97% for RMS-P, and 93.57% for ADAM.
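
The combination of CNN feature extraction with Random Forest and SVM classifiers can be sketched as follows; the ResNet-18 feature extractor, input size, and placeholder data are assumptions, not the architecture or datasets used in the paper.

```python
# Hypothetical sketch: CNN feature extraction followed by classical ML classifiers.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# Pretrained CNN used purely as a feature extractor (backbone choice is illustrative).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
feature_extractor = nn.Sequential(*list(backbone.children())[:-1])  # drop the FC head
feature_extractor.eval()

def extract_features(images: torch.Tensor) -> np.ndarray:
    """images: (N, 3, 224, 224) tensor of scanned character crops."""
    with torch.no_grad():
        feats = feature_extractor(images)            # (N, 512, 1, 1)
    return feats.flatten(start_dim=1).numpy()        # (N, 512)

# Placeholder data: replace with the online/offline Telugu character datasets.
X_train = extract_features(torch.randn(32, 3, 224, 224))
y_train = np.random.randint(0, 10, size=32)

rf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
svm = SVC(kernel="rbf").fit(X_train, y_train)
```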

Object Pose Estimation and Motion Planning for Service Automation System (서비스 자동화 시스템을 위한 물체 자세 인식 및 동작 계획)

  • Youngwoo Kwon;Dongyoung Lee;Hosun Kang;Jiwook Choi;Inho Lee
    • The Journal of Korea Robotics Society / v.19 no.2 / pp.176-187 / 2024
  • Recently, automated solutions using collaborative robots have been emerging in various industries. Their primary functions include Pick & Place, Peg in the Hole, fastening and assembly, welding, and more, and they are being utilized and researched in various fields. How these robots are applied depends on the characteristics of the gripper attached to the end of the collaborative robot, and grasping a variety of objects requires a gripper with a high degree of freedom. In this paper, we propose a service automation system using a multi-degree-of-freedom gripper, collaborative robots, and vision sensors. Assuming various products are placed at a checkout counter, we use three cameras to recognize the objects, estimate their poses, and generate grasping points. The multi-degree-of-freedom gripper grasps the objects at these points, and experiments are conducted on barcode recognition, a key task in service automation. To recognize objects, we use a CNN (Convolutional Neural Network) based algorithm, and a point cloud is used to estimate each object's 6D pose. Using the recognized object's 6D pose information, we create grasping points for the multi-degree-of-freedom gripper and perform re-grasping in a direction that facilitates barcode scanning. The experiment was conducted with four selected objects, progressing through identification, 6D pose estimation, and grasping, and recording the successes and failures of barcode recognition to demonstrate the effectiveness of the proposed system.
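
The geometric part of grasp-point generation from a segmented point cloud can be illustrated with a small NumPy sketch; this is only a simplified stand-in for the paper's CNN- and point-cloud-based 6D pose pipeline, and the synthetic cloud is a placeholder.

```python
# Hypothetical sketch: deriving a grasp point and approach axis from a segmented
# object point cloud via PCA (NumPy). The paper's actual 6D pose estimation is
# CNN-based and more involved; this only shows the geometric idea.
import numpy as np

def grasp_from_point_cloud(points: np.ndarray):
    """points: (N, 3) array of 3D points belonging to one detected object."""
    centroid = points.mean(axis=0)                    # candidate grasp position
    centered = points - centroid
    # Principal axes of the object from the SVD of its centered points.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    major_axis = vt[0]       # longest extent of the object
    approach_axis = vt[2]    # shortest extent, a reasonable approach direction
    return centroid, major_axis, approach_axis

# Example with synthetic points standing in for a camera-derived object cloud.
cloud = np.random.randn(500, 3) * np.array([0.10, 0.03, 0.02])
pos, major, approach = grasp_from_point_cloud(cloud)
print(pos, major, approach)
```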

Novel Algorithms for Early Cancer Diagnosis Using Transfer Learning with MobileNetV2 in Thermal Images

  • Swapna Davies;Jaison Jacob
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.3 / pp.570-590 / 2024
  • Breast cancer ranks among the most prevalent forms of malignancy and is a foremost cause of cancer death worldwide. It is not preventable, so early and precise detection is the only remedy for lowering mortality and improving the probability of survival. In contrast to present procedures, thermography aids in the early diagnosis of cancer and thereby saves lives, but its accuracy is adversely affected by low sensitivity for small and deep tumours and by the subjectivity of physicians in interpreting the images. Employing deep learning approaches for cancer detection can enhance efficacy. This study explored the use of thermography for early identification of breast cancer on a publicly released dataset known as the DMR-IR dataset. For this purpose, we employed a novel approach that takes a pre-trained MobileNetV2 model and fine-tunes it through transfer learning. We created three models using MobileNetV2: a baseline transfer learning model with weights trained on the ImageNet dataset, a fine-tuned model with an adaptive learning rate, and a model that uses early stopping with callbacks during fine-tuning. The proposed methods achieved average accuracy rates of 85.15%, 95.19%, and 98.69%, respectively, and other performance indicators such as precision, sensitivity, and specificity were also investigated.
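
A minimal tf.keras sketch of the third variant (fine-tuning with callbacks) is given below; the input size, learning rates, patience values, and binary output head are illustrative assumptions rather than the authors' exact setup.

```python
# Hypothetical sketch: MobileNetV2 transfer learning with fine-tuning, adaptive
# learning rate, and early stopping via callbacks (tf.keras).
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # baseline transfer-learning stage: freeze the backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # healthy vs. sick thermogram
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])

# Fine-tuning stage: unfreeze the backbone and recompile with a smaller learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy", metrics=["accuracy"])

callbacks = [
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.2, patience=3),
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                     restore_best_weights=True),
]
# model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=callbacks)
```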

Lightweight Speaker Recognition for Pet Robots using Residuals Neural Network (잔차 신경망을 활용한 펫 로봇용 화자인식 경량화)

  • Seong-Hyun Kang;Tae-Hee Lee;Myung-Ryul Choi
    • Journal of IKEEE / v.28 no.2 / pp.168-173 / 2024
  • Speaker recognition is a technology that analyzes the voice frequencies that differ between individuals and compares them with pre-stored voices to determine a person's identity. Deep learning-based speaker recognition is being applied in many fields, and pet robots are one of them. However, the hardware of pet robots is very limited with respect to the large memory footprint and computation required by deep learning, which is an important problem pet robots must solve for real-time interaction with users. Making deep learning models lightweight has become an important way to address this problem, and much research has been done on it recently. In this paper, we describe research on lightweight speaker recognition for pet robots by constructing a voice dataset of pet-robot-specific commands and comparing models that use residual connections. The conclusion presents the results of the proposed method and describes future research plans.
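
A small residual network of the kind the abstract refers to might look like the PyTorch sketch below; the channel counts, depth, and mel-spectrogram input shape are assumptions, not the paper's final lightweight model.

```python
# Hypothetical sketch: a lightweight residual network over mel-spectrogram inputs.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(self.body(x) + x)   # skip connection keeps the model shallow

class TinySpeakerNet(nn.Module):
    def __init__(self, num_speakers: int = 4):
        super().__init__()
        self.stem = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(ResidualBlock(16), ResidualBlock(16))
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(16, num_speakers)

    def forward(self, spec):                   # spec: (B, 1, n_mels, frames)
        x = self.pool(self.blocks(self.stem(spec))).flatten(1)
        return self.head(x)

logits = TinySpeakerNet()(torch.randn(2, 1, 64, 100))
```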

Development of Deep Learning AI Model and RGB Imagery Analysis Using Pre-sieved Soil (입경 분류된 토양의 RGB 영상 분석 및 딥러닝 기법을 활용한 AI 모델 개발)

  • Kim, Dongseok;Song, Jisu;Jeong, Eunji;Hwang, Hyunjung;Park, Jaesung
    • Journal of The Korean Society of Agricultural Engineers / v.66 no.4 / pp.27-39 / 2024
  • Soil texture is determined by the proportions of sand, silt, and clay within the soil, which influence characteristics such as porosity, water retention capacity, electrical conductivity (EC), and pH. Traditional classification of soil texture requires significant sample preparation including oven drying to remove organic matter and moisture, a process that is both time-consuming and costly. This study aims to explore an alternative method by developing an AI model capable of predicting soil texture from images of pre-sorted soil samples using computer vision and deep learning technologies. Soil samples collected from agricultural fields were pre-processed using sieve analysis and the images of each sample were acquired in a controlled studio environment using a smartphone camera. Color distribution ratios based on RGB values of the images were analyzed using the OpenCV library in Python. A convolutional neural network (CNN) model, built on PyTorch, was enhanced using Digital Image Processing (DIP) techniques and then trained across nine distinct conditions to evaluate its robustness and accuracy. The model has achieved an accuracy of over 80% in classifying the images of pre-sorted soil samples, as validated by the components of the confusion matrix and measurements of the F1 score, demonstrating its potential to replace traditional experimental methods for soil texture classification. By utilizing an easily accessible tool, significant time and cost savings can be expected compared to traditional methods.
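
As an illustration of the RGB colour-distribution analysis mentioned above, the following is a minimal OpenCV/Python sketch; the per-channel ratio definition and the file name are assumptions, since the paper's exact analysis is not given in the abstract.

```python
# Hypothetical sketch: RGB colour-distribution ratios of a soil image with OpenCV.
import cv2
import numpy as np

def rgb_channel_ratios(image_path: str) -> dict:
    """Return the share of total intensity contributed by each colour channel."""
    bgr = cv2.imread(image_path)                 # OpenCV loads images as BGR
    if bgr is None:
        raise FileNotFoundError(image_path)
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).astype(np.float64)
    channel_sums = rgb.reshape(-1, 3).sum(axis=0)
    ratios = channel_sums / channel_sums.sum()
    return {"R": float(ratios[0]), "G": float(ratios[1]), "B": float(ratios[2])}

# Example usage with a hypothetical pre-sieved soil image:
# print(rgb_channel_ratios("soil_sample_2mm.jpg"))
```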

Systems for Pill Recognition and Medication Management using Deep Learning (딥러닝을 활용한 알약인식 및 복용관리 시스템)

  • Kang-Hee Kim;So-Hyeon Kim;Da-Ham Jung;Bo-Kyung Lee
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.1 / pp.9-16 / 2024
  • It is difficult to know the efficacy of a pill if the pill bag or wrapper is lost after purchase. Many people do not organize commercial pills by use when storing them after purchasing and taking them, so the inaccessibility of information on side effects leads to misuse. Even with existing applications that search for and provide information about pills, users have to select the details of the pills themselves. In this paper, we develop a pill recognition application by building a model that learns the formulation and colour of 22,000 photos of pills provided by a Pharmaceutical Information Institution to solve the above situation. We also develop a pill medication management function.
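
The two-attribute learning described above (formulation and colour) could be organised as a shared CNN backbone with two classification heads; the sketch below is a hypothetical PyTorch illustration, and the backbone, class counts, and head design are assumptions rather than the authors' model.

```python
# Hypothetical sketch: a CNN with separate heads for pill formulation and colour.
import torch
import torch.nn as nn
from torchvision import models

class PillNet(nn.Module):
    def __init__(self, num_shapes: int = 11, num_colours: int = 16):  # assumed counts
        super().__init__()
        backbone = models.mobilenet_v3_small(
            weights=models.MobileNet_V3_Small_Weights.DEFAULT)
        self.features = backbone.features
        self.pool = nn.AdaptiveAvgPool2d(1)
        feat_dim = 576                       # output channels of mobilenet_v3_small
        self.shape_head = nn.Linear(feat_dim, num_shapes)
        self.colour_head = nn.Linear(feat_dim, num_colours)

    def forward(self, x):
        f = self.pool(self.features(x)).flatten(1)
        return self.shape_head(f), self.colour_head(f)

shape_logits, colour_logits = PillNet()(torch.randn(1, 3, 224, 224))
```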

Deep Learning Algorithm Training and Performance Analysis for Corridor Monitoring (회랑 감시를 위한 딥러닝 알고리즘 학습 및 성능분석)

  • Woo-Jin Jung;Seok-Min Hong;Won-Hyuck Choi
    • Journal of Advanced Navigation Technology / v.27 no.6 / pp.776-781 / 2023
  • K-UAM will be commercialized as it matures after 2035. Since the Urban Air Mobility (UAM) corridor will be vertically separated from the existing helicopter corridor, corridor usage is expected to increase, so a system for monitoring corridors is also needed. In recent years, object detection algorithms have developed significantly. They are largely divided into one-stage and two-stage models. Two-stage models are too slow to be suitable for real-time detection. One-stage models also had problems with accuracy, but their performance has improved through version upgrades. Among them, YOLO-V5 improved small-object detection performance through Mosaic augmentation, making it the most suitable algorithm for systems that require real-time monitoring of wide corridors. Therefore, this paper trains YOLO-V5 and analyzes whether it is ultimately suitable for corridor monitoring.
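
For readers unfamiliar with YOLO-V5, the snippet below shows how an off-the-shelf model can be loaded and run on a single corridor frame via torch.hub; the 'yolov5s' checkpoint and the image path are placeholders, not the weights or data trained in this paper.

```python
# Hypothetical sketch: one-stage detection on a corridor camera frame with YOLOv5.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.4                       # confidence threshold for detections

results = model("corridor_frame.jpg")  # placeholder path to a monitoring-camera frame
detections = results.pandas().xyxy[0]  # bounding boxes, confidences, class names
print(detections[["name", "confidence", "xmin", "ymin", "xmax", "ymax"]])
```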

Deep Learning-based Interior Design Recognition (딥러닝 기반 실내 디자인 인식)

  • Wongyu Lee;Jihun Park;Jonghyuk Lee;Heechul Jung
    • IEMEK Journal of Embedded Systems and Applications / v.19 no.1 / pp.47-55 / 2024
  • We spend a lot of time in indoor spaces, and these spaces have a huge impact on our lives. Interior design plays a significant role in making an indoor space attractive and functional, but it must consider many complex elements such as color, pattern, and material. With the increasing demand for interior design, there is a growing need for technologies that analyze these design elements accurately and efficiently. To address this need, this study proposes a deep learning-based design analysis system. The proposed system consists of a semantic segmentation model that classifies spatial components and an image classification model that classifies attributes such as color, pattern, and material from the segmented components. The semantic segmentation model was trained on a dataset of 30,000 indoor interior images collected for this research, and at inference time it separates the input image pixels into 34 categories. Experiments were conducted with various backbones to obtain the best performance of the deep learning model on the collected interior dataset, and the model finally achieved good performance of 89.05% accuracy and 0.5768 mean intersection over union (mIoU). For the classification part, a convolutional neural network (CNN) model that has recorded high performance in other image recognition tasks was used. To improve the performance of the classification model, we propose an approach for handling data that is class-imbalanced and vulnerable to light intensity, and with these methods we achieve satisfactory results in classifying interior design component attributes. In this paper, we propose an indoor space design analysis system that automatically analyzes and classifies the attributes of indoor images using deep learning-based models. This analysis system, used as a core module in an AI interior recommendation service, can help users pursuing self-interior design to complete their designs more easily and efficiently.
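
Two of the ideas mentioned in the abstract, compensating for class imbalance and reducing sensitivity to light intensity, can be sketched in PyTorch as follows; the class counts and augmentation parameters are placeholder assumptions, not the paper's settings.

```python
# Hypothetical sketch: class-weighted loss for imbalanced attribute labels and
# brightness jitter for robustness to varying light intensity.
import torch
import torch.nn as nn
from torchvision import transforms

# Inverse-frequency class weights for an imbalanced attribute (e.g. material).
class_counts = torch.tensor([5200.0, 830.0, 310.0, 120.0])   # placeholder counts
weights = class_counts.sum() / (len(class_counts) * class_counts)
criterion = nn.CrossEntropyLoss(weight=weights)

# Augmentation that randomizes lighting conditions during training.
train_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ColorJitter(brightness=0.4, contrast=0.3),
    transforms.ToTensor(),
])
```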

Edge Detection and ROI-Based Concrete Crack Detection (Edge 분석과 ROI 기법을 활용한 콘크리트 균열 분석 - Edge와 ROI를 적용한 콘크리트 균열 분석 및 검사 -)

  • Park, Heewon;Lee, Dong-Eun
    • Korean Journal of Construction Engineering and Management / v.25 no.2 / pp.36-44 / 2024
  • This paper presents the application of Convolutional Neural Networks (CNNs) and Region of Interest (ROI) techniques for concrete crack analysis. Surfaces of concrete structures, such as beams, are exposed to fatigue stress and cyclic loads, which typically initiate cracks at a microscopic level on the structure's surface. Early detection enables preventative measures that mitigate potential damage and failures. Conventional manual inspections often yield subpar results, especially for large-scale infrastructure where access is challenging and cracks are difficult to detect. This paper presents data collection, the application of edge segmentation and ROI techniques, and the analysis of concrete cracks using Convolutional Neural Networks, with two objectives: first, to achieve improved accuracy in crack detection using image-based technology compared to traditional manual inspection methods; second, to develop an algorithm that utilizes enhanced Sobel edge segmentation and ROI techniques and provides automated crack detection capabilities for non-destructive testing.
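
The Sobel edge segmentation and ROI extraction described above can be illustrated with a small OpenCV sketch; the kernel size, ROI size, and the way the ROI is centred are assumptions for illustration only.

```python
# Hypothetical sketch: Sobel edge response and a simple ROI crop around the
# strongest edge region, as a stand-in for the paper's enhanced pre-processing.
import cv2
import numpy as np

def crack_roi(image_path: str, roi_size: int = 128) -> np.ndarray:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    # Sobel gradients in x and y, combined into an edge-magnitude map.
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = cv2.magnitude(gx, gy)
    # Centre the ROI on the pixel with the strongest edge response.
    y, x = np.unravel_index(np.argmax(magnitude), magnitude.shape)
    half = roi_size // 2
    y0, x0 = max(0, y - half), max(0, x - half)
    return gray[y0:y0 + roi_size, x0:x0 + roi_size]   # patch fed to the CNN

# roi_patch = crack_roi("concrete_surface.jpg")   # placeholder image path
```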