• Title/Summary/Keyword: Neural Network-based Image Recognition (신경망 기반 이미지 인식)

Semantic Object Detection based on LiDAR Distance-based Clustering Techniques for Lightweight Embedded Processors (경량형 임베디드 프로세서를 위한 라이다 거리 기반 클러스터링 기법을 활용한 의미론적 물체 인식)

  • Jung, Dongkyu;Park, Daejin
    • Journal of the Korea Institute of Information and Communication Engineering, v.26 no.10, pp.1453-1461, 2022
  • Many studies have improved the accuracy of peripheral object recognition algorithms that use 3D sensors such as LiDAR in autonomous vehicles, but such algorithms require high-performance hardware and complex structures. They therefore place a heavy load on the main processor of an autonomous vehicle, which must run and manage many processes while driving. To reduce this load while still exploiting the advantages of 3D sensor data, we propose 2D data-based recognition that uses an ROI generated by extracting physical properties from the 3D sensor data. When the brightness of the base image was reduced by 50%, the proposed method showed 5.3% higher accuracy and a 28.57% shorter processing time than the existing 2D-based model. On the base image, it traded 2.46% lower accuracy than the 3D-based model for a 6.25% reduction in processing time.
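
The abstract does not detail how the ROI is generated from the 3D data; below is a minimal sketch of one plausible realization, assuming DBSCAN-style distance clustering and a standard pinhole camera projection. The function name and the calibration inputs `K` and `T_cam_from_lidar` are illustrative, not taken from the paper.

```python
# Hypothetical sketch: distance-based clustering of LiDAR points and projection
# of each cluster into the camera image as an ROI for a 2D detector.
import numpy as np
from sklearn.cluster import DBSCAN

def lidar_clusters_to_rois(points_xyz, K, T_cam_from_lidar, eps=0.5, min_points=10):
    """Cluster LiDAR points by Euclidean distance and return 2D image ROIs.

    points_xyz       : (N, 3) LiDAR points in the sensor frame
    K                : (3, 3) camera intrinsic matrix
    T_cam_from_lidar : (4, 4) extrinsic transform from LiDAR to camera frame
    """
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(points_xyz)
    rois = []
    for label in set(labels) - {-1}:                 # -1 marks noise points
        cluster = points_xyz[labels == label]
        # Transform the cluster into the camera frame and project with the pinhole model.
        homo = np.hstack([cluster, np.ones((len(cluster), 1))])
        cam = (T_cam_from_lidar @ homo.T).T[:, :3]
        cam = cam[cam[:, 2] > 0]                     # keep points in front of the camera
        if len(cam) == 0:
            continue
        uv = (K @ cam.T).T
        uv = uv[:, :2] / uv[:, 2:3]
        # The axis-aligned bounding box of the projected points is the candidate ROI.
        u_min, v_min = uv.min(axis=0)
        u_max, v_max = uv.max(axis=0)
        rois.append((int(u_min), int(v_min), int(u_max), int(v_max)))
    return rois
```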

Face Frontalization Model with A.I. Based on U-Net using Convolutional Neural Network (합성곱 신경망(CNN)을 이용한 U-Net 기반의 인공지능 안면 정면화 모델)

  • Lee, Sangmin;Son, Wonho;Jin, ChangGyun;Kim, Ji-Hyun;Kim, JiYun;Park, Naeun;Kim, Gaeun;Kwon, Jin young;Lee, Hye Yi;Kim, Jongwan;Oh, Dukshin
    • Proceedings of the Korea Information Processing Society Conference, 2020.11a, pp.685-688, 2020
  • Face recognition has been adopted in areas such as Face ID, searching for missing children, and tracking criminals. Deep learning has recently improved recognition rates, but the recognition rate for profile views is relatively low because feature extraction is more difficult than for frontal views. This becomes a drawback when only a profile image of a person exists and no frontal image is available, making identification through face recognition difficult. In this paper, we implement an AI-based face frontalization model that generates a frontal face from a profile image, expanding the situations in which face recognition can be applied. The model uses VGG-Face for facial feature extraction and a U-Net structure to prevent the information loss that can occur during feature extraction.
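
As a rough illustration of the U-Net structure with skip connections mentioned in the abstract, here is a minimal PyTorch sketch; the depth, channel widths, and output activation are illustrative assumptions, and the VGG-Face feature extraction the paper pairs with it is omitted.

```python
# Hypothetical sketch of a U-Net-style generator for face frontalization (PyTorch).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class UNetFrontalizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2, self.enc3 = conv_block(3, 64), conv_block(64, 128), conv_block(128, 256)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = conv_block(256, 128)            # 128 (upsampled) + 128 (skip)
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = conv_block(128, 64)             # 64 (upsampled) + 64 (skip)
        self.out = nn.Conv2d(64, 3, 1)

    def forward(self, profile):                     # profile: (B, 3, H, W) side-view image
        e1 = self.enc1(profile)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))   # skip connection keeps detail
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.out(d1))          # generated frontal face
```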

Extracting Rules from Neural Networks with Continuous Attributes (연속형 속성을 갖는 인공 신경망의 규칙 추출)

  • Jagvaral, Batselem;Lee, Wan-Gon;Jeon, Myung-joong;Park, Hyun-Kyu;Park, Young-Tack
    • Journal of KIISE, v.45 no.1, pp.22-29, 2018
  • Over the decades, neural networks have been successfully used in numerous applications from speech recognition to image classification. However, these neural networks cannot explain their results and one needs to know how and why a specific conclusion was drawn. Most studies focus on extracting binary rules from neural networks, which is often impractical to do, since data sets used for machine learning applications contain continuous values. To fill the gap, this paper presents an algorithm to extract logic rules from a trained neural network for data with continuous attributes. It uses hyperplane-based linear classifiers to extract rules with numeric values from trained weights between input and hidden layers and then combines these classifiers with binary rules learned from hidden and output layers to form non-linear classification rules. Experiments with different datasets show that the proposed approach can accurately extract logical rules for data with nonlinear continuous attributes.
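
A toy sketch of the hyperplane idea follows: the input-to-hidden weights of a trained network define linear conditions over the continuous attributes, which can then be combined with binary rules over the hidden units. The thresholding at a pre-activation of zero and all names below are illustrative simplifications, not the paper's exact algorithm.

```python
# Hypothetical sketch: turning input-to-hidden weights of a trained network into
# hyperplane conditions, then pairing them with binary rules over hidden units.
import numpy as np

def hidden_unit_conditions(W, b, feature_names):
    """Each hidden unit j defines a half-space: sum_i W[j, i] * x_i + b[j] > 0."""
    conditions = []
    for j in range(W.shape[0]):
        terms = " + ".join(f"{W[j, i]:+.2f}*{name}" for i, name in enumerate(feature_names))
        conditions.append(f"({terms} {b[j]:+.2f}) > 0")
    return conditions

def compose_rules(conditions, hidden_rules):
    """hidden_rules maps a class label to the hidden units that must fire for it,
    e.g. {"class_A": [0, 2]} -> IF cond_0 AND cond_2 THEN class_A."""
    rules = []
    for label, active_units in hidden_rules.items():
        body = " AND ".join(conditions[j] for j in active_units)
        rules.append(f"IF {body} THEN class = {label}")
    return rules

# Toy usage with made-up weights for a 2-feature, 3-hidden-unit network.
W = np.array([[0.8, -1.2], [1.5, 0.3], [-0.4, 0.9]])
b = np.array([0.1, -0.7, 0.2])
conds = hidden_unit_conditions(W, b, ["petal_length", "petal_width"])
for rule in compose_rules(conds, {"class_A": [0, 1], "class_B": [2]}):
    print(rule)
```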

Recognition of Multi Label Fashion Styles based on Transfer Learning and Graph Convolution Network (전이학습과 그래프 합성곱 신경망 기반의 다중 패션 스타일 인식)

  • Kim, Sunghoon;Choi, Yerim;Park, Jonghyuk
    • The Journal of Society for e-Business Studies, v.26 no.1, pp.29-41, 2021
  • Recently, there have been increasing attempts to apply deep learning methodology in the fashion industry. Accordingly, studies dealing with various fashion-related problems have been proposed and have achieved strong performance. However, studies on fashion style classification have not reflected the characteristic that one outfit can include multiple styles simultaneously. Therefore, we aim to solve the multi-label classification problem by utilizing the dependencies between styles. A multi-label recognition model based on a graph convolution network is applied to detect and exploit the dependencies between fashion styles. Furthermore, we accelerate model training and improve the model's performance through transfer learning. The proposed model was verified on a dataset collected from social network services and outperformed the baselines.
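
A minimal sketch, in the spirit of ML-GCN-style models, of how a graph convolution over style labels can be combined with a transfer-learned CNN backbone for multi-label recognition; the layer sizes, label-embedding dimension, and the choice of a ResNet-50 backbone are assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a GCN-based multi-label style recognizer (PyTorch).
import torch
import torch.nn as nn
import torchvision

class StyleGCNClassifier(nn.Module):
    def __init__(self, num_styles, label_adj, label_emb_dim=300, feat_dim=2048):
        super().__init__()
        backbone = torchvision.models.resnet50(weights="DEFAULT")   # transfer learning
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])   # drop the final FC layer
        self.register_buffer("adj", label_adj)                      # (S, S) normalized style co-occurrence
        self.gc1 = nn.Linear(label_emb_dim, 1024)
        self.gc2 = nn.Linear(1024, feat_dim)
        self.label_emb = nn.Parameter(torch.randn(num_styles, label_emb_dim))

    def forward(self, images):
        feats = self.cnn(images).flatten(1)                   # (B, feat_dim) image features
        h = torch.relu(self.adj @ self.gc1(self.label_emb))   # graph convolution layer 1
        classifiers = self.adj @ self.gc2(h)                   # (S, feat_dim): one classifier per style
        return feats @ classifiers.t()                         # (B, S) logits; train with BCEWithLogitsLoss
```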

Automatic Facial Expression Recognition using Tree Structures for Human Computer Interaction (HCI를 위한 트리 구조 기반의 자동 얼굴 표정 인식)

  • Shin, Yun-Hee;Ju, Jin-Sun;Kim, Eun-Yi;Kurata, Takeshi;Jain, Anil K.;Park, Se-Hyun;Jung, Kee-Chul
    • Journal of Korea Society of Industrial Information Systems, v.12 no.3, pp.60-68, 2007
  • In this paper, we propose an automatic facial expression recognition system that analyzes facial expressions (happiness, disgust, surprise, and neutral) using tree structures based on heuristic rules. The facial region is first obtained using a skin-color model and connected-component analysis (CCA). Thereafter, the positions of the user's eyes are localized using a neural network (NN)-based texture classifier, and the facial features are then localized using heuristics. After detection of the facial features, facial expression recognition is performed using a decision tree. To assess the validity of the proposed system, we tested it on 180 facial images from the MMI, JAFFE, and VAK databases. The results show that our system achieves an accuracy of 93%.
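
A toy sketch of the final classification step, assuming a handful of geometric facial features and a scikit-learn decision tree; the feature names and values below are invented for illustration and are not the paper's heuristics.

```python
# Hypothetical sketch: classifying expressions from simple geometric facial
# features with a decision tree, in the spirit of the pipeline above.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

FEATURES = ["eye_openness", "mouth_width_ratio", "mouth_height_ratio", "brow_eye_distance"]
CLASSES = ["happiness", "disgust", "surprise", "neutral"]

# Toy training data: one row of normalized geometric measurements per face image.
X_train = np.array([
    [0.30, 0.55, 0.40, 0.20],   # happiness: wide mouth
    [0.20, 0.35, 0.15, 0.12],   # disgust: narrowed eyes, lowered brows
    [0.60, 0.40, 0.55, 0.35],   # surprise: open eyes and mouth, raised brows
    [0.35, 0.40, 0.20, 0.22],   # neutral
])
y_train = np.array([0, 1, 2, 3])

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
sample = np.array([[0.58, 0.42, 0.50, 0.33]])       # measurements from a new face
print(CLASSES[tree.predict(sample)[0]])             # -> likely "surprise"
```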

A Bio-Inspired Modeling of Visual Information Processing for Action Recognition (생체 기반 시각정보처리 동작인식 모델링)

  • Kim, JinOk
    • KIPS Transactions on Software and Data Engineering, v.3 no.8, pp.299-308, 2014
  • Recent literature on information processing has presented research inspired by the remarkable human ability to recognize and categorize very complex visual patterns such as body motions and facial expressions. Modeled on this perceptual ability, classifying visual sequences without context information is a particularly crucial task for computer vision in understanding both the coding and the retrieval of spatio-temporal patterns. This paper presents a biologically based action recognition model for computer vision, inspired by the visual information processing the human brain performs when recognizing actions in visual sequences. The proposed model employs the structure of neural fields, drawn from bio-inspired visual perception, to detect motion sequences and discriminate visual patterns as the human brain does. Experimental results show that the proposed recognition model not only accounts for several biological properties of visual information processing but is also tolerant of time warping. Furthermore, the model allows robust temporal evolution of the classification compared to previous action recognition research. The presented model contributes to implementing bio-inspired visual processing systems such as intelligent robot agents.

Recognition Performance Enhancement by License Plate Normalization (번호판 정규화에 의한 인식 성능 향상 기법)

  • Kim, Do-Hyeon;Kang, Min-Kyung;Cha, Eui-Young
    • Journal of the Korea Institute of Information and Communication Engineering, v.12 no.7, pp.1278-1290, 2008
  • This paper proposes a preprocessing method and a neural network based character recognizer to enhance the overall performance of a license plate recognition system. First, the plate outlines are extracted by virtual line matching, and the four vertices are then obtained by calculating the intersection points of the extracted lines. Using these vertices, the plate image is reconstructed as a rectangular image by a bilinear transform. Finally, the license plate is recognized by a neural network based classifier trained with the delta-bar-delta algorithm. Various license plate images were used in the experiments, and the proposed plate normalization enhanced the recognition performance by up to 16 percent.
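
A minimal sketch of the normalization step, assuming the four plate vertices are already known: each output pixel of a fixed-size rectangle is mapped back into the quadrilateral by bilinearly interpolating the four corners. The output size and nearest-neighbor sampling are illustrative choices, not the paper's exact procedure.

```python
# Hypothetical sketch: bilinear warp of a quadrilateral plate region to a rectangle.
import numpy as np

def normalize_plate(image, corners, out_w=240, out_h=80):
    """corners: four (x, y) points ordered top-left, top-right, bottom-right, bottom-left."""
    tl, tr, br, bl = [np.asarray(c, dtype=float) for c in corners]
    out = np.zeros((out_h, out_w) + image.shape[2:], dtype=image.dtype)
    for v in range(out_h):
        for u in range(out_w):
            s, t = u / (out_w - 1), v / (out_h - 1)
            # Bilinear interpolation of the four corners gives the source location.
            src = (1 - s) * (1 - t) * tl + s * (1 - t) * tr + s * t * br + (1 - s) * t * bl
            x, y = int(round(src[0])), int(round(src[1]))
            if 0 <= y < image.shape[0] and 0 <= x < image.shape[1]:
                out[v, u] = image[y, x]
    return out   # rectangle-shaped plate image fed to the character recognizer
```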

Age and Gender Classification with Small Scale CNN (소규모 합성곱 신경망을 사용한 연령 및 성별 분류)

  • Jamoliddin, Uraimov;Yoo, Jae Hung
    • The Journal of the Korea institute of electronic communication sciences, v.17 no.1, pp.99-104, 2022
  • Artificial intelligence is becoming a crucial part of our lives, with incredible benefits. Machines outperform humans in recognizing objects in images, particularly in classifying people into the correct age and gender groups. Accordingly, age and gender classification has been one of the hot topics among computer vision researchers in recent decades. Deep convolutional neural network (CNN) models have achieved state-of-the-art performance. However, most CNN-based architectures are very complex, with large numbers of training parameters, so they require much computation time and many resources. For this reason, we propose a new CNN-based classification algorithm with significantly fewer training parameters and less training time than existing methods. Despite its lower complexity, our model shows better age and gender classification accuracy on the UTKFace dataset.
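
A minimal sketch of a small-scale CNN with separate age-group and gender heads; the layer sizes, 64x64 input resolution, and number of age groups are illustrative assumptions rather than the paper's configuration.

```python
# Hypothetical sketch of a compact age/gender classifier (PyTorch).
import torch
import torch.nn as nn

class SmallAgeGenderCNN(nn.Module):
    def __init__(self, num_age_groups=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.age_head = nn.Linear(64 * 8 * 8, num_age_groups)
        self.gender_head = nn.Linear(64 * 8 * 8, 2)

    def forward(self, x):                     # x: (B, 3, 64, 64) face crops
        h = self.features(x).flatten(1)
        return self.age_head(h), self.gender_head(h)

model = SmallAgeGenderCNN()
print(sum(p.numel() for p in model.parameters()))   # well under a million parameters
```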

Deep Learning-based Action Recognition using Skeleton Joints Mapping (스켈레톤 조인트 매핑을 이용한 딥 러닝 기반 행동 인식)

  • Tasnim, Nusrat;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology, v.24 no.2, pp.155-162, 2020
  • Recently, with the development of computer vision and deep learning technology, research on human action recognition has been actively conducted for video analysis, video surveillance, interactive multimedia, and human-machine interaction applications. Many researchers have introduced diverse techniques for human action understanding and classification using RGB images, depth images, skeletons, and inertial data. However, skeleton-based action discrimination is still a challenging research topic for human-machine interaction. In this paper, we propose an end-to-end mapping of skeleton joints that generates a spatio-temporal image, a so-called dynamic image, for each action. An efficient deep convolutional neural network is then devised to perform classification among the action classes. We use the publicly accessible UTD-MHAD skeleton dataset to evaluate the performance of the proposed method. The experimental results show that the proposed system outperforms existing methods with a high accuracy of 97.45%.
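
A minimal sketch of one common way to turn a skeleton sequence into a "dynamic image" that a 2D CNN can classify: normalize the joint coordinates and arrange them as a frames-by-joints color image. The exact encoding used in the paper is not described in the abstract, so this is an illustrative assumption.

```python
# Hypothetical sketch: mapping a skeleton sequence to a pseudo-color action image.
import numpy as np

def skeleton_to_dynamic_image(sequence):
    """sequence: (T, J, 3) array of x, y, z joint positions over T frames and J joints."""
    img = np.empty_like(sequence, dtype=np.uint8)
    for c in range(3):                                   # treat x, y, z as R, G, B channels
        channel = sequence[..., c]
        lo, hi = channel.min(), channel.max()
        img[..., c] = np.round(255 * (channel - lo) / (hi - lo + 1e-8))
    return img                                           # (T, J, 3) image fed to a 2D CNN

# Toy usage: a 60-frame sequence of 20 joints becomes a 60x20 RGB image.
dynamic_img = skeleton_to_dynamic_image(np.random.rand(60, 20, 3))
print(dynamic_img.shape, dynamic_img.dtype)              # (60, 20, 3) uint8
```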

Feature Extraction and Recognition of Myanmar Characters Based on Deep Learning (딥러닝 기반 미얀마 문자의 특징 추출 및 인식)

  • Ohnmar, Khin;Lee, Sung-Keun
    • The Journal of the Korea institute of electronic communication sciences, v.17 no.5, pp.977-984, 2022
  • Recently, with the economic development of Southeast Asia, the use of information devices is spreading widely, and the demand for application services using intelligent character recognition is increasing. This paper discusses deep learning-based feature extraction and recognition of the script of Myanmar, one of the Southeast Asian countries. The Myanmar alphabet (33 letters) and Myanmar numerals (10 digits) are used for feature extraction. Nine features are extracted, and more than three new features are proposed. The features extracted for each letter and numeral are presented, with successful results. In the recognition part, a convolutional neural network is used to assess performance on character discrimination. The algorithm is implemented on captured image datasets and its implementation is evaluated. The precision of the model on the input dataset is 96%, and it works with real-time input images.
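
A minimal sketch of a CNN recognizer for the 43 Myanmar classes (33 letters plus 10 numerals), with one illustrative training step; the architecture and 32x32 grayscale input are assumptions, not the paper's configuration.

```python
# Hypothetical sketch: a small CNN classifier over Myanmar letters and numerals (PyTorch).
import torch
import torch.nn as nn

NUM_CLASSES = 33 + 10     # Myanmar letters + numerals

model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),    # 32 -> 16
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 16 -> 8
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
    nn.Linear(128, NUM_CLASSES),
)

# One illustrative training step on a dummy batch of 32x32 grayscale glyph images.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.rand(16, 1, 32, 32)
labels = torch.randint(0, NUM_CLASSES, (16,))
optimizer.zero_grad()
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```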