• Title/Summary/Keyword: recognition task


Incorporating Recognition in Catfish Counting Algorithm Using Artificial Neural Network and Geometry

  • Aliyu, Ibrahim;Gana, Kolo Jonathan;Musa, Aibinu Abiodun;Adegboye, Mutiu Adesina;Lim, Chang Gyoon
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.12, pp.4866-4888, 2020
  • One major and time-consuming task in fish production is obtaining an accurate estimate of the number of fish produced. In most Nigerian farms, fish counting is performed manually. Digital image processing (DIP) is an inexpensive solution, but its accuracy is affected by noise, overlapping fish, and interfering objects. This study developed a catfish recognition and counting algorithm that introduces detection before counting and consists of six steps: image acquisition, pre-processing, segmentation, feature extraction, recognition, and counting. Images were acquired and pre-processed. Segmentation was performed using three methods: image binarization with Otsu thresholding; morphological operations (hole filling, dilation, and opening); and boundary segmentation using edge detection. Boundary features were extracted using a chain code algorithm and Fourier descriptors (CH-FD), which were used to train an artificial neural network (ANN) to perform the recognition. The new counting approach, based on the geometry of the fish, was applied to determine the number of fish and was found to be suitable for counting fish of any size and for handling overlap. The accuracies of the segmentation algorithm, the boundary pixel and Fourier descriptors (BD-FD) method, and the proposed CH-FD method were 90.34%, 96.6%, and 100%, respectively. The proposed counting algorithm demonstrated 100% accuracy.
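As a rough illustration of the binarization step described above, here is a minimal NumPy sketch of Otsu thresholding on a synthetic image; the function name and the toy image are illustrative, not from the paper:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold that maximizes between-class variance (Otsu's method).

    `gray` is a 2-D array of integer intensities in [0, 255].
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()  # class weights below/above t
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # mean of class 0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1   # mean of class 1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Synthetic image: dark background (~30) with one bright 20x20 blob (~200).
img = np.full((64, 64), 30, dtype=np.uint8)
img[20:40, 20:40] = 200
t = otsu_threshold(img)
binary = img > t  # foreground mask, analogous to the paper's binarization step
```

In the paper's pipeline this mask would then feed the morphological and boundary-segmentation stages.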

MSFM: Multi-view Semantic Feature Fusion Model for Chinese Named Entity Recognition

  • Liu, Jingxin;Cheng, Jieren;Peng, Xin;Zhao, Zeli;Tang, Xiangyan;Sheng, Victor S.
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.6, pp.1833-1848, 2022
  • Named entity recognition (NER) is an important basic task in the field of Natural Language Processing (NLP). Recently, deep learning approaches that extract word-segmentation or character features have proved effective for Chinese Named Entity Recognition (CNER). However, because these approaches focus on only some of the available features, they lack textual information mining from multiple perspectives and dimensions, so the resulting models cannot fully capture semantic features. To tackle this problem, we propose a novel Multi-view Semantic Feature Fusion Model (MSFM). The proposed model consists of two core components: a Multi-view Semantic Feature Fusion Embedding Module (MFEM) and a Multi-head Self-Attention Mechanism Module (MSAM). Specifically, the MFEM extracts character features, word boundary features, radical features, and pinyin features of Chinese characters. The acquired shape, sound, and meaning features are fused to enrich the semantic information of Chinese characters at different granularities. The MSAM is then used to capture the dependencies between characters in a multi-dimensional subspace to better understand the semantic features of the context. Extensive experimental results on four benchmark datasets show that our method improves the overall performance of the CNER model.
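The multi-head self-attention mechanism behind the MSAM can be sketched in plain NumPy. This is the generic scaled dot-product formulation under assumed shapes, not the paper's implementation:

```python
import numpy as np

def multi_head_self_attention(x, Wq, Wk, Wv, Wo, n_heads):
    """Scaled dot-product self-attention over `n_heads` subspaces.

    x: (seq_len, d_model); the W* projection matrices are (d_model, d_model).
    Each head attends over the full sequence in its own subspace; head
    outputs are concatenated and projected by Wo.
    """
    seq_len, d_model = x.shape
    d_head = d_model // n_heads
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    # Split into heads: (n_heads, seq_len, d_head)
    split = lambda m: m.reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    q, k, v = split(q), split(k), split(v)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    out = weights @ v                                # (n_heads, seq_len, d_head)
    out = out.transpose(1, 0, 2).reshape(seq_len, d_model)  # concat heads
    return out @ Wo, weights

rng = np.random.default_rng(0)
d_model, seq_len, n_heads = 8, 5, 2
mats = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(4)]
x = rng.standard_normal((seq_len, d_model))  # stand-in for fused embeddings
y, attn = multi_head_self_attention(x, *mats, n_heads)
```

In MSFM, `x` would be the fused character/boundary/radical/pinyin embedding rather than random data.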

Handwriting Thai Digit Recognition Using Convolution Neural Networks (다양한 컨볼루션 신경망을 이용한 태국어 숫자 인식)

  • Onuean, Athita;Jung, Hanmin;Kim, Taehong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2021.05a, pp.15-17, 2021
  • Handwriting recognition research has focused mainly on deep learning techniques and has achieved strong performance in the last few years. In particular, handwritten Thai digit recognition is an important research area because Thai digits appear in everyday numerical records such as official government documents and receipts, yet it has long remained a challenging task. To address the lack of a large Thai digit dataset, this paper constructs its own dataset and trains several classifiers on it: decision tree, k-nearest neighbors, AlexNet, LeNet-5, and VGG (11, 13, 16, 19). The experimental results, measured by accuracy, show a maximum accuracy of 98.29% when using VGG 13 with batch normalization.
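Since the best result came from VGG 13 with batch normalization, a minimal NumPy sketch of the batch-normalization forward pass (training mode, no running statistics or backprop) may help illustrate what that layer contributes; the names and data are illustrative:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch, then scale by gamma and shift by beta."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)  # zero mean, unit variance per feature
    return gamma * x_hat + beta

rng = np.random.default_rng(1)
x = rng.standard_normal((32, 4)) * 5 + 3  # batch of 32 samples, 4 features
out = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
```

With `gamma=1` and `beta=0`, the output is simply the standardized batch, which stabilizes the activation distribution between convolutional layers.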


Improving the Recognition of Known and Unknown Plant Disease Classes Using Deep Learning

  • Yao Meng;Jaehwan Lee;Alvaro Fuentes;Mun Haeng Lee;Taehyun Kim;Sook Yoon;Dong Sun Park
    • Smart Media Journal, v.13 no.8, pp.16-25, 2024
  • Recently, there has been a growing emphasis on identifying both known and unknown diseases in plant disease recognition. In this task, a model trained only on images of known classes must classify an input image into either one of the known classes or an unknown class; the capability to recognize unknown diseases is therefore critical for model deployment. To enhance this capability, we consider three factors. First, we propose a new logits-based scoring function for unknown scores. Second, initial experiments indicate that a compact feature space is crucial for the effectiveness of logits-based methods, leading us to employ the AM-Softmax loss instead of cross-entropy loss during training. Third, drawing inspiration from the efficacy of transfer learning, we use a large plant-relevant dataset, PlantCLEF2022, to pre-train the model. The experimental results suggest that our method outperforms current algorithms. Specifically, it achieves 97.90 CSA, 91.77 AUROC, and 90.63 OSCR with a ResNet50 backbone and 98.28 CSA, 92.05 AUROC, and 91.12 OSCR with a ConvNeXt base model. We believe that our study will contribute to the community.
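The abstract does not give the exact form of the proposed logits-based scoring function. A common baseline of the same family is the max-logit score, sketched here with a hypothetical threshold purely for illustration:

```python
import numpy as np

def max_logit_score(logits):
    """Unknown score per sample: a higher max logit suggests a known class."""
    return logits.max(axis=-1)

def open_set_predict(logits, threshold):
    """Predict a known class index, or -1 ("unknown") when the score is low."""
    scores = max_logit_score(logits)
    preds = logits.argmax(axis=-1)
    return np.where(scores >= threshold, preds, -1)

logits = np.array([[8.2, 1.1, 0.3],   # confident sample -> known class 0
                   [1.0, 1.2, 0.9]])  # flat logits -> likely an unknown disease
preds = open_set_predict(logits, threshold=3.0)
```

A compact feature space (as encouraged by AM-Softmax) tends to make such logit gaps between known and unknown samples more pronounced, which is why the paper pairs the two ideas.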

A Bio-Inspired Modeling of Visual Information Processing for Action Recognition (생체 기반 시각정보처리 동작인식 모델링)

  • Kim, JinOk
    • KIPS Transactions on Software and Data Engineering, v.3 no.8, pp.299-308, 2014
  • Recent literature on information processing has reported research inspired by the remarkable human ability to recognize and categorize very complex visual patterns such as body motions and facial expressions. Drawing on this perceptual ability, classifying visual sequences without context information is a crucial task for computer vision in understanding both the coding and the retrieval of spatio-temporal patterns. This paper presents a biologically based action recognition model for computer vision, inspired by the visual information processing the human brain performs when recognizing actions in visual sequences. The proposed model employs the structure of neural fields from bio-inspired visual perception to detect motion sequences and discriminate visual patterns as the human brain does. Experimental results show that the proposed recognition model not only accounts for several biological properties of visual information processing but is also tolerant of time-warping. Furthermore, the model allows more robust temporal evolution of classification than previous action recognition research. The presented model contributes to the implementation of bio-inspired visual processing systems such as intelligent robot agents.

Pose and Expression Invariant Alignment based Multi-View 3D Face Recognition

  • Ratyal, Naeem;Taj, Imtiaz;Bajwa, Usama;Sajid, Muhammad
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.10, pp.4903-4929, 2018
  • In this study, a fully automatic pose- and expression-invariant 3D face alignment algorithm is proposed to handle frontal and profile face images, based on a two-pass coarse-to-fine alignment strategy. The first pass coarsely aligns the face images to an intrinsic coordinate system (ICS) through a single 3D rotation, and the second pass aligns them at a fine level using a minimum nose tip-scanner distance (MNSD) approach. For facial recognition, multi-view faces are synthesized to exploit real 3D information and test the efficacy of the proposed system. Owing to its optimal separating hyperplane (OSH), a Support Vector Machine (SVM) is employed for the multi-view face verification (FV) task. In addition, a multi-stage unified-classifier face identification (FI) algorithm is employed, which hierarchically combines results from seven base classifiers, two parallel face recognition algorithms, and an exponential rank combiner. The performance figures of the proposed methodology are corroborated by extensive experiments on four benchmark datasets: GavabDB, Bosphorus, UMB-DB, and FRGC v2.0. Results show a marked improvement in alignment accuracy and recognition rates. Moreover, a computational complexity analysis of the proposed algorithm reveals its superiority in terms of computational efficiency as well.
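The abstract does not detail how the single 3D rotation to the intrinsic coordinate system is computed. One generic way to coarsely align a 3D scan with a single rotation is to rotate it onto its principal axes; the sketch below is purely an illustration of that idea, not the paper's ICS construction:

```python
import numpy as np

def coarse_align(points):
    """Rotate a 3-D point cloud so its principal axes match the coordinate axes.

    A rough stand-in for coarse alignment via a single 3-D rotation:
    center the cloud, then rotate it onto its PCA axes (largest variance first).
    """
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    R = eigvecs[:, ::-1]                    # largest-variance axis first
    if np.linalg.det(R) < 0:                # keep a proper rotation (det = +1)
        R[:, -1] *= -1
    return centered @ R

# Elongated synthetic "scan": very different spreads along each axis.
rng = np.random.default_rng(2)
cloud = rng.standard_normal((500, 3)) * np.array([5.0, 2.0, 0.5])
aligned = coarse_align(cloud)
cov = np.cov(aligned.T)  # should be (near-)diagonal after alignment
```

A real system would anchor the rotation to facial landmarks (e.g. the nose tip, as in the paper's fine MNSD pass) rather than to raw variance directions.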

Discriminative Effects of Social Skills Training on Facial Emotion Recognition among Children with Attention-Deficit/Hyperactivity Disorder and Autism Spectrum Disorder

  • Lee, Ji-Seon;Kang, Na-Ri;Kim, Hui-Jeong;Kwak, Young-Sook
    • Journal of the Korean Academy of Child and Adolescent Psychiatry, v.29 no.4, pp.150-160, 2018
  • Objectives: This study investigated the effect of social skills training (SST) on facial emotion recognition and discrimination in children with attention-deficit/hyperactivity disorder (ADHD) and autism spectrum disorder (ASD). Methods: Twenty-three children aged 7 to 10 years participated in our SST. They included 15 children diagnosed with ADHD and 8 with ASD. The participants' parents completed the Korean version of the Child Behavior Checklist (K-CBCL), the ADHD Rating Scale, and the Conners' Scale at baseline and post-treatment. The participants completed the Korean Wechsler Intelligence Scale for Children-IV (K-WISC-IV) and the Advanced Test of Attention at baseline and the Penn Emotion Recognition and Discrimination Task at baseline and post-treatment. Results: No significant changes in facial emotion recognition and discrimination occurred in either group before and after SST. However, when controlling for the processing speed of K-WISC and the social subscale of K-CBCL, the ADHD group showed more improvement in total (p=0.049), female (p=0.039), sad (p=0.002), mild (p=0.015), female extreme (p=0.005), male mild (p=0.038), and Caucasian (p=0.004) facial expressions than did the ASD group. Conclusion: SST improved facial expression recognition for children with ADHD more effectively than it did for children with ASD, in whom additional training to help emotion recognition and discrimination is needed.

Effective Pose-based Approach with Pose Estimation for Emotional Action Recognition (자세 예측을 이용한 효과적인 자세 기반 감정 동작 인식)

  • Kim, Jin Ok
    • KIPS Transactions on Software and Data Engineering, v.2 no.3, pp.209-218, 2013
  • Early research on human action recognition focused on tracking and classifying articulated body motions. Such methods required accurate segmentation of body parts, which is a difficult task, particularly under realistic imaging conditions. Recent work has trended toward the use of low-level appearance features such as spatio-temporal interest points. Given the great progress in pose estimation over the past few years, the pose-based approach deserves a fresh look. This paper addresses the question of whether it is sufficient to train a classifier only on low-level appearance features, and proposes an effective pose-based approach with pose estimation for emotional action recognition. To answer this question, we compare the performance of pose-based, appearance-based, and combined features across various emotional action recognition scenarios. The experimental results show that pose-based features outperform low-level appearance-based features, even when heavily corrupted by noise, suggesting that a pose-based approach with pose estimation is beneficial for emotional action recognition.

FRS-OCC: Face Recognition System for Surveillance Based on Occlusion Invariant Technique

  • Abbas, Qaisar
    • International Journal of Computer Science & Network Security, v.21 no.8, pp.288-296, 2021
  • Automated face recognition in a runtime environment is becoming more and more important in the fields of surveillance and urban security. It is a difficult task given the constantly changing image landscape with varying features and attributes. For a system to be useful in industrial settings, its efficiency must not be compromised when running on roads, intersections, and busy streets; however, recognition in such uncontrolled circumstances is a major problem in real-life applications. This paper addresses the main problem of face recognition when the full face is not visible (occlusion). Occlusion is a common occurrence, as any person can change their appearance by wearing a scarf or sunglasses, or merely by growing a mustache or beard. Such discrepancies in facial appearance are frequently encountered in uncontrolled circumstances and can defeat security systems based on face recognition. Although these variations are very common in real-life environments, they have received comparatively little study in the literature, and only recently have researchers focused on them. Existing state-of-the-art techniques suffer from several limitations, most significantly low usability and poor response time in the event of a calamity. In this paper, an improved face recognition system, FRS-OCC, is developed to solve the occlusion problem. To build the FRS-OCC system, color and texture features are extracted, an incremental learning algorithm (Learn++) selects the more informative features, and a trained stacked autoencoder (SAE) deep learning algorithm then recognizes the face. Overall, the FRS-OCC system introduces algorithms that improve response time so as to guarantee a benchmark quality of service in any situation. To evaluate the performance of the proposed system, the AR face dataset is used. On average, the FRS-OCC system outperformed other state-of-the-art methods, achieving an SE of 98.82%, SP of 98.49%, AC of 98.76%, and AUC of 0.9995. The obtained results indicate that the FRS-OCC system can be used in any surveillance application.
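The reported SE, SP, and AC figures are standard confusion-matrix metrics. A minimal sketch of how they are computed from binary predictions (the data here is illustrative, not the paper's):

```python
import numpy as np

def sensitivity_specificity_accuracy(y_true, y_pred):
    """SE, SP, AC from binary labels (1 = target identity, 0 = other)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    se = tp / (tp + fn)            # sensitivity (recall on positives)
    sp = tn / (tn + fp)            # specificity (recall on negatives)
    ac = (tp + tn) / len(y_true)   # overall accuracy
    return se, sp, ac

y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]  # one miss, one false alarm
se, sp, ac = sensitivity_specificity_accuracy(y_true, y_pred)
```

AUC would additionally require the classifier's scores rather than hard predictions, which is why it is reported separately in the abstract.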

High-Frequency Interchange Network for Multispectral Object Detection (다중 스펙트럼 객체 감지를 위한 고주파 교환 네트워크)

  • Park, Seon-Hoo;Yun, Jun-Seok;Yoo, Seok Bong;Han, Seunghwoi
    • Journal of the Korea Institute of Information and Communication Engineering, v.26 no.8, pp.1121-1129, 2022
  • Object recognition studies commonly use RGB images. However, RGB images captured in dark illumination environments, or in environments where target objects are occluded by other objects, yield poor object recognition performance. IR images, on the other hand, provide strong object recognition performance in these environments because they sense infrared radiation rather than visible light. In this paper, we propose an RGB-IR fusion model, the high-frequency interchange network (HINet), which improves object recognition performance by combining only the strengths of RGB-IR image pairs. HINet connects two object detection models with a mutual high-frequency transfer (MHT) module to interchange advantages between RGB and IR images. MHT converts each RGB-IR image pair into the discrete cosine transform (DCT) spectral domain to extract high-frequency information, which is transmitted to the other network and used to improve object recognition performance. Experimental results show the superiority of the proposed network and demonstrate improved performance on the multispectral object recognition task.
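The MHT step, converting an image to the DCT domain and keeping only high-frequency content, can be sketched in NumPy. The orthonormal DCT construction and the low-frequency cutoff `keep_low` are assumptions for illustration; the paper's exact spectral split is not given in the abstract:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)  # rescale the DC row so the matrix is orthonormal
    return m

def high_frequency(img, keep_low=4):
    """Zero the lowest keep_low x keep_low DCT coefficients and invert,
    leaving only the high-frequency content (edges, fine texture)."""
    n = img.shape[0]
    D = dct_matrix(n)
    coeffs = D @ img @ D.T               # 2-D DCT
    coeffs[:keep_low, :keep_low] = 0.0   # drop low frequencies incl. DC
    return D.T @ coeffs @ D              # inverse transform (D is orthonormal)

rng = np.random.default_rng(3)
img = rng.standard_normal((16, 16)) + 10.0  # fine texture on a bright DC offset
hf = high_frequency(img)
```

In HINet, the high-frequency component of each modality would then be injected into the other modality's detection branch.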