• Title/Summary/Keyword: Animal Image Classification


Comparison of estimating vegetation index for outdoor free-range pig production using convolutional neural networks

  • Sang-Hyon OH;Hee-Mun Park;Jin-Hyun Park
    • Journal of Animal Science and Technology / v.65 no.6 / pp.1254-1269 / 2023
  • This study aims to predict the change in corn share in a mature corn field as 20 gestating sows graze, using images taken with a camera-equipped unmanned aerial vehicle (UAV). Deep learning based on convolutional neural networks (CNNs) has proven its performance in many areas and has demonstrated high recognition accuracy and fast detection in agricultural applications such as pest and disease diagnosis and prediction. Training a CNN effectively requires a large amount of data, but a UAV captures only a limited number of images, so we propose a data augmentation method that effectively increases the data. Most occupancy-prediction approaches design a CNN-based object detector for an image and then estimate occupancy by counting the recognized objects or by calculating the number of pixels an object occupies. These methods require complex occupancy-rate calculations, and their accuracy depends on whether the object features of interest are visible in the image. In this study, by contrast, the CNN is treated not as a corn detection and classification problem but as a function-approximation and regression problem, so that the occupancy rate of corn objects in an image is represented directly as the CNN output. The proposed method effectively estimates occupancy from a limited number of cornfield photos, shows excellent prediction accuracy, and confirms the potential and scalability of deep learning.
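The augmentation idea described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' code: flips and 90-degree rotations multiply a small UAV image set eightfold, and, conveniently for the regression framing, the occupancy label is invariant to all of these transforms.

```python
# Hypothetical augmentation sketch: expand a small set of UAV images with
# flips and 90-degree rotations so a CNN regressor has enough training data.
# All function names here are illustrative, not from the paper.

def flip_h(img):
    """Flip a 2D image (list of rows) left-right."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate a 2D image 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def augment(img):
    """Return the original plus 7 transformed copies (8x the data)."""
    out, cur = [], img
    for _ in range(4):              # four rotations...
        out.append(cur)
        out.append(flip_h(cur))     # ...each with its mirror image
        cur = rot90(cur)
    return out

# The occupancy label (fraction of "corn" pixels) is unchanged by these
# transforms, so every augmented copy reuses the original regression target.
sample = [[1, 0], [0, 0]]           # 1 = corn pixel, 0 = ground
augmented = augment(sample)
occupancy = sum(sum(r) for r in sample) / 4
```

Because the regression target survives every transform, no relabeling is needed after augmentation.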

Multi-scale Attention and Deep Ensemble-Based Animal Skin Lesions Classification (다중 스케일 어텐션과 심층 앙상블 기반 동물 피부 병변 분류 기법)

  • Kwak, Min Ho;Kim, Kyeong Tae;Choi, Jae Young
    • Journal of Korea Multimedia Society / v.25 no.8 / pp.1212-1223 / 2022
  • Skin lesions are common diseases that range from rashes to skin cancer, which can lead to death. Early diagnosis of skin diseases is important because it can considerably shorten the course of treatment and reduce the harmful effects of the disease. Recently, computer-aided diagnosis (CAD) systems based on artificial intelligence have been actively developed for the early diagnosis of skin diseases. In a typical CAD system, accurate classification of skin lesion types is of great importance for improving diagnostic performance. Motivated by this, we propose a novel deep ensemble classifier with multi-scale attention networks. The proposed deep ensemble networks are jointly trained with a single loss function in an end-to-end manner. In addition, the network is equipped with a multi-scale attention mechanism and segmentation information from the original skin image, which improves classification performance. The method was evaluated on the publicly available human skin disease dataset (HAM10000) and a private animal skin lesion dataset. Experimental results showed that the proposed method achieves 97.8% and 81% accuracy on the HAM10000 and animal skin lesion datasets, respectively. This work should be useful for developing more reliable CAD systems that help doctors diagnose skin diseases early.
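The joint-training idea in this abstract can be illustrated with a small sketch, assuming the common deep-ensemble readout: each branch produces class logits, the branch probabilities are averaged, and one cross-entropy loss on the averaged output sends gradients into every branch at once. The logits below are invented; this is the mechanism, not the paper's network.

```python
import math

# Illustrative deep-ensemble readout (not the paper's code): branch
# probabilities are averaged and a single loss trains all branches jointly.

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def ensemble_probs(branch_logits):
    """Average the softmax outputs of every branch."""
    probs = [softmax(l) for l in branch_logits]
    n = len(probs)
    return [sum(p[c] for p in probs) / n for c in range(len(probs[0]))]

def joint_loss(branch_logits, true_class):
    """One cross-entropy loss on the ensemble output; its gradient
    flows back into every branch simultaneously."""
    return -math.log(ensemble_probs(branch_logits)[true_class])

# Two hypothetical branches voting over 3 lesion classes
logits_a = [2.0, 0.5, -1.0]
logits_b = [1.5, 1.0, -0.5]
p = ensemble_probs([logits_a, logits_b])
```

Averaging probabilities before the loss (rather than training each branch separately) is what makes the ensemble end-to-end in the sense the abstract describes.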

Classification of Raccoon dog and Raccoon with Transfer Learning and Data Augmentation (전이 학습과 데이터 증강을 이용한 너구리와 라쿤 분류)

  • Dong-Min Park;Yeong-Seok Jo;Seokwon Yeom
    • Journal of the Institute of Convergence Signal Processing / v.24 no.1 / pp.34-41 / 2023
  • In recent years, as the range of human activities has expanded, introductions of alien species have become frequent. Among them, raccoons have been designated as harmful animals since 2020. Raccoons are similar in size and shape to raccoon dogs, so the two generally need to be distinguished when capturing them. To solve this problem, we use VGG19, ResNet152V2, InceptionV3, InceptionResNet, and NASNet, CNN deep learning models specialized for image classification, with parameters pre-trained on the large-scale ImageNet dataset. To classify the raccoon and raccoon dog datasets by the animals' outward features, the images were converted to grayscale and their brightness was normalized. Augmentation methods (horizontal flipping, rotation, scaling, and shifting) were applied to create sufficient data for transfer learning. The fully connected layers (FCL) consist of 1 layer for the non-augmented dataset and 4 layers for the augmented dataset. Comparing accuracy across the augmented datasets, performance increased as more augmentation methods were applied.
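The preprocessing step this abstract describes (grayscale conversion plus brightness normalization) can be sketched as below. The luma weights are the standard BT.601 coefficients; whether the authors used exactly these is an assumption, so treat the details as illustrative.

```python
# Minimal preprocessing sketch (assumed details, not the authors' code):
# convert RGB pixels to grayscale and rescale brightness to [0, 1] so the
# classifier sees shape and texture rather than color or exposure.

def to_gray(pixel):
    """ITU-R BT.601 luma from an (R, G, B) triple in 0..255."""
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def normalize(img_rgb):
    """Grayscale an RGB image and stretch its brightness to [0, 1]."""
    gray = [[to_gray(p) for p in row] for row in img_rgb]
    flat = [v for row in gray for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0          # avoid divide-by-zero on flat images
    return [[(v - lo) / span for v in row] for row in gray]

img = [[(255, 255, 255), (0, 0, 0)],
       [(128, 128, 128), (255, 0, 0)]]
norm = normalize(img)
```

Normalizing per image removes brightness differences between photos, which matters when camera-trap exposures vary.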

Deep Learning-Based Companion Animal Abnormal Behavior Detection Service Using Image and Sensor Data

  • Lee, JI-Hoon;Shin, Min-Chan;Park, Jun-Hee;Moon, Nam-Mee
    • Journal of the Korea Society of Computer and Information / v.27 no.10 / pp.1-9 / 2022
  • In this paper, we propose a deep learning-based companion animal abnormal behavior detection service that uses video and sensor data. With the recent increase in households with companion animals, the pet-tech industry built on artificial intelligence is growing within the existing food- and medical-oriented companion animal market. In this study, companion animal behavior was classified and abnormal behavior was detected with a deep learning model that uses multiple kinds of data for AI-driven health management of companion animals. Video data and sensor data are collected using CCTV and a custom pet wearable device and used as input to the model. To detect companion animal objects and extract joint coordinates for behavior classification, the video data was processed by combining the YOLO (You Only Look Once) model with DeepLabCut. To process the sensor data, a GAT (Graph Attention Network), which can identify the correlation and characteristics of each sensor, was used.
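The GAT step in this abstract can be illustrated with a small sketch of graph attention over sensor nodes. A real GAT scores pairs with a learned linear layer and LeakyReLU; here a plain dot product stands in for the learned score, so every name and value is illustrative only.

```python
import math

# Rough sketch of graph attention over sensor nodes (not the paper's code):
# each node aggregates its neighbors with softmax-normalized attention
# weights, so correlated sensors contribute more to the node's update.

def attention_weights(query, neighbors):
    """Softmax-normalized attention of one node over its neighbors.
    (A learned score function replaces this dot product in a real GAT.)"""
    scores = [sum(q * n for q, n in zip(query, feat)) for feat in neighbors]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def aggregate(query, neighbors):
    """Attention-weighted sum of neighbor features."""
    w = attention_weights(query, neighbors)
    return [sum(w[i] * neighbors[i][d] for i in range(len(neighbors)))
            for d in range(len(query))]

# Accelerometer node attending over three hypothetical sensor neighbors
acc = [1.0, 0.0]
others = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
w = attention_weights(acc, others)
```

The point the abstract makes, that a GAT "identifies the correlation of each sensor", is visible here: the most similar neighbor receives the largest weight.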

Differentiation of Beef and Fish Meals in Animal Feeds Using Chemometric Analytic Models

  • Yang, Chun-Chieh;Garrido-Novell, Cristobal;Perez-Marin, Dolores;Guerrero-Ginel, Jose E.;Garrido-Varo, Ana;Cho, Hyunjeong;Kim, Moon S.
    • Journal of Biosystems Engineering / v.40 no.2 / pp.153-158 / 2015
  • Purpose: The research presented in this paper applied chemometric analysis to near-infrared spectral data from line-scanned hyperspectral images of beef and fish meals in animal feeds. Chemometric statistical models were developed to distinguish beef meals from fish meals. Methods: Samples of 40 fish meals and 15 beef meals were line-scanned to obtain hyperspectral images. Spectral data were retrieved from each of the 3600 pixels in the region of interest (ROI) of every sample image. Wavebands spanning 969 nm to 1551 nm (176 spectral bands) were selected for chemometric analysis. The partial least squares regression (PLSR) and principal component analysis (PCA) methods were applied to model development. The goal of the models was to correctly classify as many beef pixels as possible while keeping misclassified fish pixels to an acceptable amount. Results: The successful classification rates were 97.9% for beef samples and 99.4% for fish samples with the PLSR model, and 85.1% for beef samples and 88.2% for fish samples with the PCA model. Conclusion: The chemometric PLSR and PCA models for hyperspectral image analysis could differentiate beef meals from fish meals in animal feeds.
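The PCA side of this pipeline can be sketched in two dimensions: project each pixel's spectrum onto the first principal component and separate the two meal types along that axis. The two "bands" and the cluster values below are synthetic; the real models use 176 bands per pixel.

```python
import math

# Illustrative 2-band PCA sketch (synthetic data, not the paper's spectra):
# find the first principal component from the 2x2 covariance matrix and
# project each sample onto it.

def first_pc(data):
    """First principal component and mean of a set of 2-D points."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    cxx = sum((p[0] - mx) ** 2 for p in data) / n
    cyy = sum((p[1] - my) ** 2 for p in data) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in data) / n
    # Largest eigenvalue of [[cxx, cxy], [cxy, cyy]] by the trace/det formula
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    lam = tr / 2 + math.sqrt(tr * tr / 4 - det)
    v = (cxy, lam - cxx) if cxy else (1.0, 0.0)
    norm = math.hypot(*v)
    return (v[0] / norm, v[1] / norm), (mx, my)

def score(point, pc, mean):
    """Projection of a centered point onto the first principal component."""
    return (point[0] - mean[0]) * pc[0] + (point[1] - mean[1]) * pc[1]

# Two synthetic reflectance bands; a fish-like and a beef-like cluster
samples = [(1.0, 1.1), (1.2, 1.3), (3.0, 3.1), (3.2, 3.2)]
pc, mean = first_pc(samples)
scores = [score(p, pc, mean) for p in samples]
```

With real spectra the separation is rarely this clean, which is consistent with the PCA model scoring below the PLSR model in the reported results.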

Target/non-target classification using active sonar spectrogram image and CNN (능동소나 스펙트로그램 이미지와 CNN을 사용한 표적/비표적 식별)

  • Kim, Dong-Wook;Seok, Jong-Won;Bae, Keun-Sung
    • Journal of IKEEE / v.22 no.4 / pp.1044-1049 / 2018
  • A convolutional neural network (CNN) is a neural network that models animal visual information processing, and it shows good performance in various fields. In this paper, we use a CNN to classify target and non-target data by analyzing the spectrograms of active sonar signals. The data were divided into 8 classes according to the proportion of the target they contain and used to train the CNN. The spectrogram of each signal is divided into frames that serve as the inputs. As a result, targets and non-targets could be classified using the characteristic that the classification results for the seven classes corresponding to the target signal appear sequentially only at the position of the target signal.
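The frame-slicing step this abstract relies on can be sketched simply: cut the spectrogram along the time axis into fixed-width, possibly overlapping windows, each becoming one CNN input. The sizes below are arbitrary placeholders, not the paper's parameters.

```python
# Hedged sketch of the input preparation: slice an active sonar spectrogram
# (frequency x time) into fixed-width time frames for the CNN. Frame width
# and hop size here are invented.

def frames(spectrogram, width, hop):
    """Cut a [freq][time] spectrogram into overlapping time frames."""
    n_time = len(spectrogram[0])
    out = []
    for start in range(0, n_time - width + 1, hop):
        out.append([row[start:start + width] for row in spectrogram])
    return out

# 4 frequency bins x 10 time steps of fake magnitudes
spec = [[t + 10 * f for t in range(10)] for f in range(4)]
fr = frames(spec, width=4, hop=2)
```

Classifying each frame independently is what lets the target-class labels "appear sequentially" at the target's time position, as the abstract notes.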

Classification of behavior at the signs of parturition of sows by image information analysis (영상정보에 의한 모돈의 분만징후 행동특성 분류)

  • Yang, Ka-Young;Jeon, Jung-Hwan;Kwon, Kyeong-Seok;Choi, Hee-Chul;Ha, Jae-Jung;Kim, Jong-Bok;Lee, Jun-Yeob
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.12 / pp.607-613 / 2018
  • The aim of this study was to predict the exact time of parturition by analyzing and classifying preliminary behaviors that signal parturition in sows. The study was conducted with 12 crossbred sows (average parity 3.5). Behavioral characteristics were analyzed as the duration and frequency of behaviors on a checklist, which includes the duration of basic behaviors (feeding, standing, lying, and sitting). The frequency of specific behaviors (investigatory behavior, sham-chewing, scratching, and bar-biting) was also recorded. Image information was collected every two minutes during the 24 hours before the first piglet was born. Among the basic behaviors, the sows' standing time (22.6% at 24 h before parturition, 24.9% at 12 h before) and lying time (55.9% at 24 h, 66.3% at 12 h) increased over the 12 h before parturition compared with the 24 h before parturition (p < 0.01). Feeding (13.42% at 24 h, 4.38% at 12 h) and sitting (8.2% at 24 h, 4.5% at 12 h) tended to decrease during the 12 h before parturition (p > 0.05). The sows' investigatory behavior (11.44 ± 1.80 at 24 h, 55.97 ± 6.13 at 12 h), scratching (3.75 ± 1.92 at 24 h, 20.99 ± 5.81 at 12 h), and bar-biting (0.69 ± 0.15 at 24 h, 3.71 ± 1.53 at 12 h) increased in the 12-hour period before parturition compared with the 24-hour period (p < 0.01). On the other hand, sham-chewing (2.20 ± 1.67 at 24 h, 0.07 ± 0.01 at 12 h) decreased (p > 0.05). Thus, standing, investigatory behavior, scratching, and bar-biting could serve as indicator behaviors of parturition in sows.
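The duration percentages in this abstract come from point sampling: one image every two minutes, each carrying a behavior label, with percentages derived from the counts. The sketch below uses an invented one-hour log, not the study's data.

```python
# Minimal sketch (invented log, not the study's data) of tallying
# duration-type behaviors from images sampled every 2 minutes.

SAMPLES_PER_HOUR = 30                       # one image every 2 minutes

def percent_time(log, behavior):
    """Share of sampled images showing the given behavior, as a percent."""
    return 100.0 * log.count(behavior) / len(log)

# One fake hour of observations before parturition
log = (["lying"] * 15) + (["standing"] * 9) + (["feeding"] * 3) + (["sitting"] * 3)
lying_pct = percent_time(log, "lying")
```

Frequency-type behaviors (scratching, bar-biting) would instead be counted as discrete events, which is why the abstract reports them as counts rather than percentages.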

A method using artificial neural networks to morphologically assess mouse blastocyst quality

  • Matos, Felipe Delestro;Rocha, Jose Celso;Nogueira, Marcelo Fabio Gouveia
    • Journal of Animal Science and Technology / v.56 no.4 / pp.15.1-15.10 / 2014
  • Background: Morphological classification of embryos is important for numerous laboratory techniques, from basic methods to assisted reproduction. However, the standard classification method is subjective and depends on the embryologist's prior training. Our work therefore aimed to develop software that classifies the morphological quality of blastocysts from digital images. Methods: The methodology is designed to assist the embryologist in analyzing blastocysts. The software uses artificial neural networks as its machine learning technique; these networks analyze both visual variables extracted from an image and biological features of the embryo. Results: After training, the final accuracy of the system was 95%. To aid end users, we developed a graphical user interface that produces a quality assessment from a previously trained artificial neural network. Conclusions: The approach has high potential for applicability because it can be adapted to other species of greater economic interest (humans and cattle). Based on an objective assessment (without personal bias from the embryologist) and with high reproducibility between samples and across clinics and laboratories, this method could facilitate embryo morphology classification as an alternative assessment practice.
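The kind of network this abstract describes can be sketched as a small feed-forward pass: image variables plus biological features go in, and a quality score comes out. The weights and feature names below are arbitrary placeholders, not the paper's trained model.

```python
import math

# Sketch of a feed-forward quality scorer (placeholder weights, not the
# trained network from the paper): one hidden layer, sigmoid activations.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(features, w_hidden, w_out):
    """Forward pass: features -> hidden layer -> output scores."""
    hidden = [sigmoid(sum(w * f for w, f in zip(ws, features)))
              for ws in w_hidden]
    return [sigmoid(sum(w * h for w, h in zip(ws, hidden)))
            for ws in w_out]

feats = [0.8, 0.3, 0.5]                  # e.g. texture, area, expansion stage
w_h = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
w_o = [[1.0, -1.0]]                      # single "good quality" output unit
score = forward(feats, w_h, w_o)[0]
```

The sigmoid output keeps the score in (0, 1), which maps naturally onto a graded quality scale.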

Classification and Recognition of Movement Behavior of Animal based on Decision Tree (의사결정나무를 이용한 생물의 행동 패턴 구분과 인식)

  • Lee, Seng-Tai;Kim, Sung-Shin
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.6 / pp.682-687 / 2005
  • Behavioral sequences of medaka (Oryzias latipes) were investigated through an image system, comparing medaka treated with the insecticide diazinon (0.1 mg/L) with untreated medaka. After extensive observation, the behaviors could be divided into 4 patterns: active smooth, active shaking, inactive smooth, and inactive shaking. These patterns were analyzed with 5 features: speed ratio, x- and y-axis projections, FFT of angle transitions, fractal dimension, and center of mass. Each pattern was classified using a decision tree, which provides a natural way to incorporate prior knowledge from human experts in fish behavior. The main focus of this study was to determine whether a decision tree could be useful in interpreting and classifying the behavior patterns of the animal.
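The appeal of a decision tree here is that expert knowledge reads directly off the branches, as in this hand-built sketch: activity splits on speed ratio, then a shaking measure separates smooth from shaking. The thresholds are invented, not fitted to the study's data.

```python
# Illustrative hand-built tree (invented thresholds) matching the four
# reported patterns. A fitted tree would learn these splits from the
# five extracted features.

def classify(speed_ratio, shaking):
    """Return one of the four medaka behavior patterns."""
    if speed_ratio >= 0.5:               # active branch
        return "active shaking" if shaking >= 0.3 else "active smooth"
    return "inactive shaking" if shaking >= 0.3 else "inactive smooth"

labels = [classify(0.9, 0.1), classify(0.9, 0.6),
          classify(0.1, 0.1), classify(0.1, 0.6)]
```

Each path through the tree corresponds to a rule an expert could state in words, which is the interpretability argument the abstract makes.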

Quality grading of Hanwoo (Korean native cattle breed) sub-images using convolutional neural network

  • Kwon, Kyung-Do;Lee, Ahyeong;Lim, Jongkuk;Cho, Soohyun;Lee, Wanghee;Cho, Byoung-Kwan;Seo, Youngwook
    • Korean Journal of Agricultural Science / v.47 no.4 / pp.1109-1122 / 2020
  • The aim of this study was to develop a marbling classification and prediction model using small parts of sirloin images based on a deep learning algorithm, namely a convolutional neural network (CNN). Samples were purchased from a commercial slaughterhouse in Korea, images for each grade were acquired, and the full set of images (n = 500) was assigned according to grade: 1++, 1+, 1, and 2 & 3 combined. The image acquisition system consists of a DSLR camera with a polarization filter to remove diffuse reflectance and two light sources (55 W). A radial correction algorithm was implemented to correct the distorted original images. Color images of Hanwoo sirloins (mixed feeder cattle, steers, and calves) were divided into sub-images of 161 × 161 pixels to train the marbling prediction model. The CNN in this study has four convolution layers and yields predictions for the marbling grades (1++, 1+, 1, and 2&3). Every layer uses a rectified linear unit (ReLU) activation function, and max-pooling is used to extract the edge between fat and muscle and to reduce the variance of the data. Prediction accuracy was measured with the accuracy and kappa coefficient from a confusion matrix. We summed the predictions of the sub-images to determine the total average prediction accuracy. Training accuracy was 100% and test accuracy was 86%, indicating comparably good performance for the CNN. This study demonstrates the potential for predicting marbling grade from color images with a convolutional neural network.
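The evaluation step in this abstract, accuracy and the kappa coefficient from a confusion matrix, can be sketched as follows. The matrix values are invented; only the formulas (observed agreement, and Cohen's kappa correcting it for chance) are the point.

```python
# Sketch of the evaluation metrics: accuracy and Cohen's kappa over a
# 4-class marbling confusion matrix. Matrix entries are made up.

def accuracy(cm):
    """Fraction of samples on the diagonal (correctly graded)."""
    n = sum(sum(row) for row in cm)
    return sum(cm[i][i] for i in range(len(cm))) / n

def kappa(cm):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(len(cm))) / n
    pe = sum(sum(cm[i]) * sum(r[i] for r in cm)
             for i in range(len(cm))) / n ** 2
    return (po - pe) / (1 - pe)

# Rows = true grade, columns = predicted grade (1++, 1+, 1, 2&3)
cm = [[40, 5, 0, 0],
      [4, 38, 3, 0],
      [0, 6, 42, 2],
      [0, 0, 3, 47]]
acc = accuracy(cm)
k = kappa(cm)
```

Kappa is the more demanding metric here: with four classes of similar size, chance agreement is roughly 25%, so kappa discounts exactly the agreement a random grader would achieve.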