• Title/Abstract/Keyword: CNN model

Search results: 1,012 items

딥러닝 기반의 다범주 감성분석 모델 개발 (Development of Deep Learning Models for Multi-class Sentiment Analysis)

  • 알렉스 샤이코니;서상현;권영식
    • 한국IT서비스학회지 / Vol. 16, No. 4 / pp.149-160 / 2017
  • Sentiment analysis is the process of determining whether a piece of a document, text, or conversation expresses a positive, negative, neutral, or other emotion. Sentiment analysis has been applied to several real-world applications, such as chatbots, whose practical use has become prevalent in many fields of industry over the last five years. In chatbot applications, sentiment analysis must be performed in advance to recognize the user's emotion and understand the speaker's intent, and a specific emotion conveys more than a merely positive or negative sentence. In light of this, we propose deep learning models for multi-class sentiment analysis that identify the speaker's emotion, categorized as joy, fear, guilt, sadness, shame, disgust, or anger. We develop convolutional neural network (CNN), long short-term memory (LSTM), and multi-layer neural network models, as deep neural network models, for detecting the emotion in a sentence; a word embedding process was also applied in our research. In our experiments, we found that the LSTM model performs best compared to the convolutional neural network and multi-layer neural network models. Moreover, we also show the practical applicability of the deep learning models to sentiment analysis for chatbots.
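As a rough illustration of the LSTM classifier the abstract identifies as the best performer, the sketch below runs a single numpy LSTM cell over a sequence of word embeddings and maps the final hidden state to the seven emotion classes. All dimensions, weights, and the toy "sentence" are assumptions for illustration, not the authors' trained model.

```python
import numpy as np

EMOTIONS = ["joy", "fear", "guilt", "sadness", "shame", "disgust", "anger"]

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: input/forget/output gates and candidate cell state."""
    z = W @ x + U @ h + b                # stacked pre-activations, shape (4H,)
    H = h.size
    i = 1 / (1 + np.exp(-z[0:H]))        # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))      # forget gate
    o = 1 / (1 + np.exp(-z[2*H:3*H]))    # output gate
    g = np.tanh(z[3*H:4*H])              # candidate cell state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def classify(seq_embeddings, W, U, b, W_out, b_out):
    """Run the LSTM over a sentence, then softmax the last hidden state."""
    H = b.size // 4
    h, c = np.zeros(H), np.zeros(H)
    for x in seq_embeddings:
        h, c = lstm_step(x, h, c, W, U, b)
    logits = W_out @ h + b_out
    p = np.exp(logits - logits.max())
    return p / p.sum()

rng = np.random.default_rng(0)
D, H = 8, 16                              # embedding and hidden sizes (assumed)
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
W_out, b_out = rng.normal(size=(7, H)), np.zeros(7)
sentence = rng.normal(size=(5, D))        # five word-embedding vectors
probs = classify(sentence, W, U, b, W_out, b_out)
print(EMOTIONS[int(np.argmax(probs))])
```

In practice the weights would be learned end-to-end together with the word embeddings rather than drawn at random.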

Classroom Roll-Call System Based on ResNet Networks

  • Zhu, Jinlong;Yu, Fanhua;Liu, Guangjie;Sun, Mingyu;Zhao, Dong;Geng, Qingtian;Su, Jinbo
    • Journal of Information Processing Systems / Vol. 16, No. 5 / pp.1145-1157 / 2020
  • Convolutional neural networks (CNNs) have demonstrated outstanding performance compared to other algorithms in the field of face recognition. To address the over-fitting problem of CNNs, researchers have proposed residual networks that ease training and improve recognition accuracy. In this study, a novel face recognition model based on game theory for classroom roll call was proposed. In the proposed scheme, an image containing multiple faces is used as input, and the residual network identifies each face with a confidence score to form a list of student identities. Tracking faces of the same identity or of low confidence is set as the optimization objective, with the set of game participants formed from the student identity list. Game theory then optimizes the authentication strategy according to the confidence values and the identity set to improve recognition accuracy. We observed that an optimal mapping between faces and identities exists that avoids associating multiple faces with one identity in the proposed scheme, and that the proposed game-based scheme reduces the error rate compared to existing schemes with deeper neural networks.
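The optimal face-to-identity mapping the abstract describes can be illustrated with a brute-force one-to-one assignment that maximizes total confidence, so that no two faces share a student identity. This is a toy stand-in for the paper's game-theoretic optimization, with made-up confidence scores:

```python
from itertools import permutations

def best_assignment(conf):
    """conf[i][j]: classifier confidence that detected face i is student j.
    Exhaustively search the one-to-one mapping that maximizes total
    confidence (fine for small classroom lists; exponential in general)."""
    n = len(conf)
    best_score, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        score = sum(conf[i][perm[i]] for i in range(n))
        if score > best_score:
            best_score, best_perm = score, perm
    return list(best_perm), best_score

conf = [
    [0.90, 0.40, 0.10],   # face 0
    [0.85, 0.80, 0.05],   # face 1: its top choice collides with face 0's
    [0.20, 0.30, 0.95],   # face 2
]
mapping, score = best_assignment(conf)
print(mapping, round(score, 2))
```

A greedy per-face choice would try to give both of the first two faces to student 0; the exhaustive assignment resolves the conflict, which is the "one face per identity" constraint the paper enforces.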

A Study on the Facial Expression Recognition using Deep Learning Technique

  • Jeong, Bong Jae;Kang, Min Soo;Jung, Yong Gyu
    • International Journal of Advanced Culture Technology / Vol. 6, No. 1 / pp.60-67 / 2018
  • In this paper, a method for extracting matching expressions is proposed, using an Android intelligent device to identify facial expressions. Understanding and expressing emotion are very important to human-computer interaction, and technology that identifies human expressions is very popular. Instead of making users search for the symbols they often use, facial expressions can be identified with a camera, which is a useful technique available now. This thesis uses a third-party data set available on the web to improve the accuracy of facial expression recognition, and improves the neural network algorithm to build a facial expression recognition model; matching the user's facial expression to similar expressions reached 66% accuracy. There is no need to search for symbols: if the camera recognizes the expression, the corresponding symbol appears immediately. This service supplies the symbols people use when sending messages to others, which offers considerable convenience, since there is no need to hunt through countless symbols. Deep learning in this area is an increasing trend, so a more suitable algorithm for expression recognition is needed to further improve accuracy.

데이터 오·결측 저감 정제 알고리즘 (Data Cleansing Algorithm for reducing Outlier)

  • 이종원;김호성;황철현;강인식;정회경
    • 한국정보통신학회:학술대회논문집 / 한국정보통신학회 2018 Fall Conference / pp.342-344 / 2018
  • In this paper, to cleanse the various anomalous data measured in a water purification process, we conducted a variety of analyses investigating whether such data can be replaced using existing statistical methods for erroneous and missing data, including mean imputation, correlation-coefficient analysis, graph-based correlation analysis, and expert statistical review. In addition, for the reliability and verification of the water-data error/missing-value reduction and cleansing algorithm, we modeled a system that operates on quantile patterns and a deep learning-based LSTM algorithm, and studied a framework that can be implemented with open-source libraries such as Keras, Theano, and TensorFlow.
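As a minimal sketch of the quantile-pattern idea, the snippet below flags out-of-band sensor readings with an IQR rule and fills the flagged or missing points by mean imputation (one of the statistical baselines mentioned). The thresholds and the toy pH-like trace are assumptions; the paper's LSTM-based verification step is not reproduced here.

```python
import numpy as np

def cleanse(series, q_lo=0.25, q_hi=0.75, k=1.5):
    """Flag readings outside a quantile (IQR) band or missing (NaN),
    then impute the flagged points with the mean of the good values."""
    x = np.asarray(series, dtype=float)
    q1, q3 = np.nanquantile(x, [q_lo, q_hi])
    iqr = q3 - q1
    with np.errstate(invalid="ignore"):       # NaN comparisons are expected
        bad = (x < q1 - k * iqr) | (x > q3 + k * iqr) | np.isnan(x)
    clean = x.copy()
    clean[bad] = np.nanmean(x[~bad])          # mean imputation baseline
    return clean, bad

readings = [7.1, 7.0, 7.2, 99.0, np.nan, 7.3, 7.1]   # pH-like sensor trace
clean, flagged = cleanse(readings)
print(flagged.tolist())
```

The spike (99.0) and the gap (NaN) are both flagged and replaced by the mean of the remaining readings; a production system would instead predict the replacement value, e.g. with the LSTM model the paper describes.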


Development of ResNet-based WBC Classification Algorithm Using Super-pixel Image Segmentation

  • Lee, Kyu-Man;Kang, Soon-Ah
    • 한국컴퓨터정보학회논문지 / Vol. 23, No. 4 / pp.147-153 / 2018
  • In this paper, we propose an efficient WBC 14-Diff classification performed using WBC-ResNet-152, a type of CNN model. The main idea is to use super-pixels for segmenting WBC images and ResNet for classifying WBCs. A total of 136,164 blood image samples (224x224) were grouped for image segmentation, training, training verification, and final test performance analysis. Since the super-pixel image segmentation yielded different numbers of images for each class, a weighted average was applied, and the image segmentation error was low, at 7.23%. After 50 rounds of training on the training data set with a soft-max classifier, an average TPR of 80.3% was achieved on the training set of 8,827 images. Based on this, on a verification data set of 21,437 images, the 14-Diff classification achieved an average TPR of 93.4% for normal WBCs and 83.3% for abnormal WBCs. The results and methodology of this research demonstrate the usefulness of artificial intelligence technology in the field of blood cell image classification. The WBC-ResNet-152-based morphology approach is shown to be a meaningful and worthwhile method, and in-depth diagnosis and early detection of curable diseases based on stored medical data are expected to improve the quality of treatment.
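The class-size weighted averaging mentioned in the abstract can be sketched as follows; the per-class counts and TPR values below are invented for illustration, since WBC classes (e.g. neutrophil-heavy mixes) are highly imbalanced:

```python
def weighted_tpr(per_class):
    """per_class: list of (n_images, tpr) pairs. Because classes have
    unequal sample counts, average the TPR weighted by class size."""
    total = sum(n for n, _ in per_class)
    return sum(n * t for n, t in per_class) / total

# (count, per-class TPR) for three hypothetical WBC classes
classes = [(5000, 0.95), (300, 0.70), (120, 0.60)]
print(round(weighted_tpr(classes), 4))
```

An unweighted mean of the three TPRs would be 0.75, while the weighted average is dominated by the large class, which is why the weighting matters for imbalanced data sets.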

게임 어플리케이션을 위한 컨볼루션 신경망 기반의 실시간 제스처 인식 연구 (Study on Real-time Gesture Recognition based on Convolutional Neural Network for Game Applications)

  • 채지훈;임종헌;김해성;이준재
    • 한국멀티미디어학회논문지 / Vol. 20, No. 5 / pp.835-843 / 2017
  • Humans have often used gestures to communicate with each other, and communication between computers and people is no different. To interact with a computer, we issue commands with gestures, a keyboard, a mouse, and other devices. Gestures are especially useful in environments such as gaming and VR (Virtual Reality), which require high specifications and long rendering times. In this paper, we propose a gesture recognition method based on a CNN model for gaming and other real-time applications. Deep learning for gesture recognition is processed on a separate server, and the preprocessing for data acquisition is done on a client PC. The experimental results show that the proposed method achieves higher accuracy than the conventional method in a game environment.
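The client/server split described above can be sketched as two functions: the client reduces each camera frame to a small normalized tensor before sending it over the network, and the server runs the classifier. The sizes are assumptions, and the server's CNN is stubbed with a fixed random linear layer so the round trip is demonstrable without the paper's model:

```python
import numpy as np

def client_preprocess(frame, size=32):
    """Client-side step: crop to square, mean-pool down to size x size,
    and normalize to [0, 1] so only a small tensor crosses the network."""
    h, w = frame.shape[:2]
    s = min(h, w)
    k = s // size
    crop = frame[:k * size, :k * size]
    pooled = crop.reshape(size, k, size, k).mean(axis=(1, 3))
    return (pooled / 255.0).astype(np.float32)

def server_classify(tensor, n_gestures=5, seed=0):
    """Server-side step: run the recognizer; stubbed here with a random
    linear layer standing in for the trained CNN."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(n_gestures, tensor.size))
    logits = w @ tensor.ravel()
    return int(np.argmax(logits))

frame = np.random.default_rng(1).integers(0, 256, size=(240, 320))
gesture = server_classify(client_preprocess(frame))
print(gesture)
```

Shipping a 32x32 float tensor instead of a full camera frame keeps the client's network cost low, which is the point of doing acquisition and preprocessing on the client PC.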

합성곱 신경망을 이용한 프로펠러 캐비테이션 침식 위험도 연구 (A Study on the Risk of Propeller Cavitation Erosion Using Convolutional Neural Network)

  • 김지혜;이형석;허재욱
    • 대한조선학회논문집 / Vol. 58, No. 3 / pp.129-136 / 2021
  • Cavitation erosion is one of the major factors causing damage to marine propellers by lowering their structural strength, and its risk has been qualitatively evaluated by each institution using its own experience-based criteria. In this study, to quantitatively evaluate the risk of cavitation erosion on the propeller, we implement a deep learning algorithm based on a convolutional neural network and train and verify it using model test results, including the cavitation characteristics of various ship types. Here, we adopt validated, well-known networks such as VGG, GoogLeNet, and ResNet, and the results are compared with experts' qualitative predictions to confirm the feasibility of the prediction algorithm using a convolutional neural network.
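Since the networks' outputs are compared against experts' qualitative ratings, a simple agreement measure such as the one below could be used for that comparison; the risk labels here are toy values, not the paper's model-test data:

```python
def agreement(pred, expert):
    """Fraction of propellers where the predicted risk class matches the
    expert's qualitative rating."""
    assert len(pred) == len(expert)
    return sum(p == e for p, e in zip(pred, expert)) / len(pred)

RISK = ["low", "medium", "high"]          # assumed qualitative scale
cnn    = ["low", "high", "medium", "high", "low"]
expert = ["low", "high", "high",   "high", "low"]
print(agreement(cnn, expert))
```

With an ordered risk scale, a distance-aware metric (penalizing "low" vs "high" more than "medium" vs "high") would be a natural refinement.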

Energy-Efficient DNN Processor on Embedded Systems for Spontaneous Human-Robot Interaction

  • Kim, Changhyeon;Yoo, Hoi-Jun
    • Journal of Semiconductor Engineering / Vol. 2, No. 2 / pp.130-135 / 2021
  • Recently, deep neural networks (DNNs) have been actively used for action control so that an autonomous system, such as a robot, can perform human-like behaviors and operations. Unlike recognition tasks, real-time operation is essential in action control, and remote learning on a server communicating through a network is too slow. New learning techniques, such as reinforcement learning (RL), are needed to determine and select the correct robot behavior locally. In this paper, we propose an energy-efficient DNN processor with a LUT-based processing engine and a near-zero skipper. A CNN-based facial emotion recognition model and an RNN-based emotional dialogue generation model are integrated for a natural HRI system and tested with the proposed processor. It supports variable weight bit precision from 1b to 16b, with 57.6% and 28.5% lower energy consumption than conventional MAC arithmetic units at 1b and 16b weight precision, respectively. Also, the near-zero skipper reduces MAC operations by 36% and lowers energy consumption by 28% for facial emotion recognition tasks. Implemented in a 65nm CMOS process, the proposed processor occupies a 1784×1784 um2 area and dissipates 0.28 mW and 34.4 mW for facial emotion recognition at 1 fps and 30 fps, respectively.
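The near-zero skipper can be illustrated in software as a multiply-accumulate that drops products whose activation magnitude falls below a threshold. The threshold and vectors below are assumptions, and the real savings come from skipping the hardware MAC operation, not from the Python arithmetic:

```python
import numpy as np

def mac_with_near_zero_skip(activations, weights, eps=1e-2):
    """Dot product that skips terms whose activation magnitude is below
    eps. Returns the accumulated value and the fraction of MACs skipped."""
    a = np.asarray(activations, dtype=float)
    w = np.asarray(weights, dtype=float)
    keep = np.abs(a) >= eps                # near-zero activations are skipped
    result = float(a[keep] @ w[keep])
    skipped = 1.0 - float(keep.mean())
    return result, skipped

a = [0.0, 0.003, 0.8, -0.5, 0.0, 0.2]      # ReLU outputs are often near zero
w = [0.1, 0.9, 0.2, 0.4, -0.3, 0.5]
val, frac = mac_with_near_zero_skip(a, w)
print(round(val, 3), frac)
```

Because ReLU-style activations are frequently zero or near zero, skipping those terms changes the result only slightly while eliminating a large share of the multiply-accumulates, which is the energy-saving mechanism the abstract quantifies.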

funcGNN과 Siamese Network의 코드 유사성 분석 성능비교 (Comparison of Code Similarity Analysis Performance of funcGNN and Siamese Network)

  • 최동빈;조인수;박용범
    • 반도체디스플레이기술학회지 / Vol. 20, No. 3 / pp.113-116 / 2021
  • As artificial intelligence technologies, including deep learning, develop, they are being introduced to code similarity analysis. In the traditional analysis method, the source code is converted into a control flow graph (CFG) and the graph edit distance (GED) is then calculated; there are studies that compute the GED from the converted CFG through a trained graph neural network (GNN), and methods that analyze code similarity through a CNN by imaging the CFG are also being studied. In this paper, to determine which approach will be effective and efficient for future research on code similarity analysis using artificial intelligence, code similarity is measured with funcGNN, which measures code similarity using a GNN, and with a Siamese network, an image similarity analysis model, and their accuracy is compared and analyzed. As a result of the analysis, the error rate of the Siamese network (0.0458) was larger than that of funcGNN (0.0362).
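For intuition about the GED quantity both models approximate, the sketch below computes a toy graph edit distance between two tiny CFGs by brute-force node alignment. It is exponential in graph size and only meant to illustrate the measure, not the trained funcGNN:

```python
from itertools import permutations

def simple_ged(edges_a, nodes_a, edges_b, nodes_b):
    """Toy GED between two small directed graphs: try every alignment of
    the smaller graph's nodes onto the larger one's, counting inserted
    nodes plus mismatched (inserted + deleted) edges; unit costs assumed."""
    if nodes_a > nodes_b:                      # make A the smaller graph
        edges_a, nodes_a, edges_b, nodes_b = edges_b, nodes_b, edges_a, nodes_a
    ea, eb, best = set(edges_a), set(edges_b), None
    for perm in permutations(range(nodes_b), nodes_a):
        mapped = {(perm[u], perm[v]) for (u, v) in ea}
        cost = len(mapped ^ eb) + (nodes_b - nodes_a)
        best = cost if best is None else min(best, cost)
    return best

# two tiny CFGs: a straight-line function vs. one with an extra branch
cfg1 = [(0, 1), (1, 2)]
cfg2 = [(0, 1), (1, 2), (1, 3)]
print(simple_ged(cfg1, 3, cfg2, 4))
```

The branch adds one node and one edge, so the distance is 2; funcGNN's appeal is predicting such distances without this exhaustive search, which becomes infeasible for real CFGs.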

Keypoint-based Deep Learning Approach for Building Footprint Extraction Using Aerial Images

  • Jeong, Doyoung;Kim, Yongil
    • 대한원격탐사학회지 / Vol. 37, No. 1 / pp.111-122 / 2021
  • Building footprint extraction is an active topic in the domain of remote sensing, since buildings are a fundamental unit of urban areas. Deep convolutional neural networks successfully perform footprint extraction from optical satellite images. However, semantic segmentation produces coarse results in the output, such as blurred and rounded boundaries, which are caused by the use of convolutional layers with large receptive fields and pooling layers. The objective of this study is to generate visually enhanced building objects by directly extracting the vertices of individual buildings by combining instance segmentation and keypoint detection. The target keypoints in building extraction are defined as points of interest based on the local image gradient direction, that is, the vertices of a building polygon. The proposed framework follows a two-stage, top-down approach that is divided into object detection and keypoint estimation. Keypoints between instances are distinguished by merging the rough segmentation masks and the local features of regions of interest. A building polygon is created by grouping the predicted keypoints through a simple geometric method. Our model achieved an F1-score of 0.650 with an mIoU of 62.6 for building footprint extraction using the OpenCitesAI dataset. The results demonstrated that the proposed framework using keypoint estimation exhibited better segmentation performance when compared with Mask R-CNN in terms of both qualitative and quantitative results.
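The "simple geometric method" for grouping predicted keypoints into a building polygon can be approximated by ordering corner detections by angle around their centroid. This works for roughly convex footprints; the exact grouping rule in the paper may differ, and the coordinates below are made up:

```python
import math

def keypoints_to_polygon(points):
    """Order building-corner keypoints into a simple polygon by sorting
    them by angle around their centroid (adequate for convex shapes)."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return sorted(points, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))

# unordered corner detections of a rectangular footprint
corners = [(10, 0), (0, 0), (10, 5), (0, 5)]
print(keypoints_to_polygon(corners))
```

The sorted vertex list traces the rectangle's boundary without crossings, so it can be closed directly into a polygon; concave footprints (L-shaped buildings) would need the instance mask to disambiguate the vertex order.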