• Title/Summary/Keyword: cnn


Pedestrian Classification using CNN's Deep Features and Transfer Learning (CNN의 깊은 특징과 전이학습을 사용한 보행자 분류)

  • Chung, Soyoung;Chung, Min Gyo
    • Journal of Internet Computing and Services
    • /
    • v.20 no.4
    • /
    • pp.91-102
    • /
    • 2019
  • In autonomous driving systems, the ability to classify pedestrians in images captured by cameras is very important for pedestrian safety. In the past, features of pedestrians were extracted with HOG(Histogram of Oriented Gradients) or SIFT(Scale-Invariant Feature Transform) and then classified with an SVM(Support Vector Machine). However, extracting pedestrian characteristics in such a handcrafted manner has many limitations. Therefore, this paper proposes a method to classify pedestrians reliably and effectively using CNN's(Convolutional Neural Network) deep features and transfer learning. We experimented with both the fixed feature extractor and the fine-tuning methods, which are two representative transfer learning techniques. In particular, for the fine-tuning method we added a new scheme, called M-Fine(Modified Fine-tuning), which divides the layers into transferred and non-transferred parts in three different sizes and adjusts weights only for layers belonging to the non-transferred part. Experiments on the INRIA Person data set with five CNN models(VGGNet, DenseNet, Inception V3, Xception, and MobileNet) showed that CNN's deep features perform better than handcrafted features such as HOG and SIFT, and that the accuracy of Xception (threshold = 0.5) is the highest at 99.61%. MobileNet, which achieved similar performance to Xception with 80% fewer parameters to learn, was the best in terms of efficiency. Among the three transfer learning schemes tested, the fine-tuning method performed best. The performance of the M-Fine method was comparable to or slightly lower than that of the fine-tuning method, but higher than that of the fixed feature extractor method.
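
A minimal Keras sketch (not the authors' code) of the transfer-learning setups compared in this abstract. The freeze boundary n_frozen is a hypothetical knob standing in for the paper's transferred/non-transferred split: freezing every layer gives the fixed feature extractor, freezing none gives full fine-tuning, and intermediate values mimic the M-Fine idea of updating weights only in the non-transferred part.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_pedestrian_classifier(n_frozen):
    # ImageNet-pretrained MobileNet backbone (one of the five models tested in the paper).
    base = tf.keras.applications.MobileNet(weights="imagenet", include_top=False,
                                           input_shape=(224, 224, 3), pooling="avg")
    for i, layer in enumerate(base.layers):
        layer.trainable = i >= n_frozen                        # update weights only past the split
    out = layers.Dense(1, activation="sigmoid")(base.output)   # pedestrian vs. non-pedestrian
    model = models.Model(base.input, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

fixed_extractor = build_pedestrian_classifier(n_frozen=10**6)  # freeze everything
fine_tuned = build_pedestrian_classifier(n_frozen=0)           # train everything
```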

Object Tracking Method using Deep Learning and Kalman Filter (딥 러닝 및 칼만 필터를 이용한 객체 추적 방법)

  • Kim, Gicheol;Son, Sohee;Kim, Minseop;Jeon, Jinwoo;Lee, Injae;Cha, Jihun;Choi, Haechul
    • Journal of Broadcast Engineering
    • /
    • v.24 no.3
    • /
    • pp.495-505
    • /
    • 2019
  • Typical deep learning algorithms include CNN(Convolutional Neural Networks), which are mainly used for image recognition, and RNN(Recurrent Neural Networks), which are mainly used for speech recognition and natural language processing. Among them, CNN automatically learns filters that generate feature maps from data, and its excellent performance has made it the mainstream approach in image recognition. Building on CNN, various algorithms such as R-CNN have appeared to improve object detection performance, and algorithms such as YOLO(You Only Look Once) and SSD(Single Shot Multi-box Detector) have been proposed recently. However, because these deep learning-based detection algorithms operate on individual still images, stable object tracking and detection in video requires a separate tracking capability. Therefore, this paper proposes a method that combines a Kalman filter with a deep learning-based detection network to improve object tracking and detection performance in video. The detection network uses YOLO v2, which is capable of real-time processing, and the proposed method achieved a 7.7% IoU improvement over the baseline YOLO v2 network and a processing speed of 20 fps on FHD images.
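
A minimal sketch (not the paper's implementation) of smoothing per-frame detections with a constant-velocity Kalman filter, the basic idea behind combining a tracker with a detector. Here detect_center is a hypothetical stand-in for a YOLO v2 wrapper that returns the tracked object's box center per frame.

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],   # state: [cx, cy, vx, vy] (box center and velocity)
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only the center position is observed
Q = np.eye(4) * 1e-2                        # process noise
R = np.eye(2) * 1.0                         # measurement (detection) noise

def kalman_step(x, P, z):
    # Predict with the constant-velocity model, then correct with the detected center z.
    x = F @ x
    P = F @ P @ F.T + Q
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4) * 10.0
# for frame in video:
#     z = detect_center(frame)              # hypothetical YOLO v2 wrapper -> np.array([cx, cy])
#     x, P = kalman_step(x, P, z)
```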

Sketch-based 3D object retrieval using Wasserstein Center Loss (Wasserstein Center 손실을 이용한 스케치 기반 3차원 물체 검색)

  • Ji, Myunggeun;Chun, Junchul;Kim, Namgi
    • Journal of Internet Computing and Services
    • /
    • v.19 no.6
    • /
    • pp.91-99
    • /
    • 2018
  • Sketch-based 3D object retrieval is a convenient way to search for various 3D data using human-drawn sketches as queries. In this paper, we propose a new method that uses a Sketch CNN, a Wasserstein CNN, and a Wasserstein center loss for sketch-based 3D object retrieval. Specifically, the Wasserstein center loss learns the center of each object category and reduces the Wasserstein distance between the center and the features of the same category. The proposed 3D object retrieval proceeds as follows. Firstly, the Wasserstein CNN extracts CNN features from 2D images taken from various directions of a 3D object and represents the 3D data by computing the Wasserstein barycenter of the per-image features. Secondly, the features of the sketch are extracted using a separate Sketch CNN. Finally, the extracted 3D object features and sketch features are learned with the proposed Wasserstein center loss. To demonstrate the superiority of the proposed method, we evaluated it on two benchmark data sets, SHREC 13 and SHREC 14; the proposed method shows better performance on all conventional metrics than the state-of-the-art methods.
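
A minimal NumPy sketch of the center-loss mechanism referred to in this abstract. The paper's Wasserstein center loss measures a Wasserstein distance between features and their per-category center; to stay self-contained this sketch keeps the distance squared Euclidean, so it only illustrates the pulling of features toward learned centers, not the proposed loss itself.

```python
import numpy as np

def center_loss(features, labels, centers, alpha=0.5):
    """features: (N, D) batch features, labels: (N,) category ids, centers: (C, D)."""
    loss = 0.0
    new_centers = centers.copy()
    for c in np.unique(labels):
        fc = features[labels == c]                    # features belonging to category c
        loss += np.sum((fc - centers[c]) ** 2)        # pull features toward their center
        new_centers[c] += alpha * (fc.mean(axis=0) - centers[c])  # move center toward batch mean
    return loss / len(features), new_centers

# Example with 2 categories and 4-dimensional features
feats = np.random.randn(8, 4)
labels = np.array([0, 0, 1, 1, 0, 1, 0, 1])
loss, centers = center_loss(feats, labels, centers=np.zeros((2, 4)))
```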

A Study on Model for Drivable Area Segmentation based on Deep Learning (딥러닝 기반의 주행가능 영역 추출 모델에 관한 연구)

  • Jeon, Hyo-jin;Cho, Soo-sun
    • Journal of Internet Computing and Services
    • /
    • v.20 no.5
    • /
    • pp.105-111
    • /
    • 2019
  • Core technologies that lead the Fourth Industrial Revolution era, such as artificial intelligence, big data, and autonomous driving, are implemented and serviced through the rapid development of computing power and hyper-connected networks based on the Internet of Things. In this paper, we implement two different models for drivable area segmentation in various environments and propose the better model by comparing their results. The models are based on DeepLab V3+ and Mask R-CNN, which perform well in image segmentation and are used in many studies on autonomous driving technology. For driving information in various environments, we use the BDD dataset, which provides driving videos and images under various weather conditions and at both day and night. The results show that Mask R-CNN, with 68.33% IoU, outperforms DeepLab V3+, with 48.97% IoU. In addition, in a visual inspection of drivable area segmentation on driving images, the accuracy of Mask R-CNN is 83% and that of DeepLab V3+ is 69%. This indicates that Mask R-CNN is more effective than DeepLab V3+ for drivable area segmentation.
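
A minimal sketch of the IoU metric used above to compare the two segmentation models, assuming binary drivable-area masks (1 = drivable); the masks here are random placeholders.

```python
import numpy as np

def iou(pred_mask, gt_mask):
    # Intersection over union of two binary masks.
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

pred = np.random.rand(720, 1280) > 0.5   # placeholder predicted drivable area
gt = np.random.rand(720, 1280) > 0.5     # placeholder ground-truth drivable area
print(f"IoU = {iou(pred, gt):.4f}")
```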

Implementation of CNN-based Classification Training Model for Unstructured Fashion Image Retrieval using Preprocessing with MASK R-CNN (비정형 패션 이미지 검색을 위한 MASK R-CNN 선형처리 기반 CNN 분류 학습모델 구현)

  • Seunga, Cho;Hayoung, Lee;Hyelim, Jang;Kyuri, Kim;Hyeon-Ji, Lee;Bong-Ki, Son;Jaeho, Lee
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.27 no.6
    • /
    • pp.13-23
    • /
    • 2022
  • In this paper, we propose a detailed component image classification algorithm for each fashion item, aimed at unstructured data retrieval in the fashion field. Due to the COVID-19 environment, AI-based online shopping malls have been increasing recently. However, existing keyword search and personalized style recommendations based on user surfing behavior have limits for accurate unstructured data search. In this study, images crawled from online shopping sites were preprocessed with Mask R-CNN, and the components of each fashion item were then classified with a CNN. The classification accuracy was 93.28% for the collar of a shirt, 98.10% for the pattern of a shirt, 91.73% for the 3-class fit of jeans, 81.59% for the 4-class fit of jeans, and 93.91% for the color of jeans. For the decorative attributes, the accuracy was 91.20% for the washing of jeans and 92.96% for the damage of jeans.
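
A high-level sketch (assumptions only, not the authors' pipeline) of the two-stage idea in this abstract: a Mask R-CNN style detector isolates each fashion item, and a per-item CNN then classifies its attributes. Both segment_items and attribute_classifiers are hypothetical stand-ins.

```python
import numpy as np

def preprocess_and_classify(image, segment_items, attribute_classifiers):
    """image: HxWx3 array; segment_items(image) -> [(mask, (x1, y1, x2, y2), label), ...];
    attribute_classifiers: dict mapping item label -> callable(crop) -> attribute dict."""
    results = []
    for mask, (x1, y1, x2, y2), label in segment_items(image):
        # Keep only the pixels of the detected item before attribute classification.
        crop = image[y1:y2, x1:x2] * mask[y1:y2, x1:x2, None]
        classify = attribute_classifiers.get(label)   # e.g. "shirt" -> collar/pattern classifier
        if classify is not None:
            results.append({"item": label, "attributes": classify(crop)})
    return results
```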

The Prediction of Cryptocurrency Prices Using eXplainable Artificial Intelligence based on Deep Learning (설명 가능한 인공지능과 CNN을 활용한 암호화폐 가격 등락 예측모형)

  • Taeho Hong;Jonggwan Won;Eunmi Kim;Minsu Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.2
    • /
    • pp.129-148
    • /
    • 2023
  • Bitcoin is a blockchain technology-based digital currency that has been recognized as a representative cryptocurrency and a financial investment asset. Due to its highly volatile nature, Bitcoin has gained a lot of attention from investors and the public. Based on this popularity, numerous studies have been conducted on price and trend prediction using machine learning and deep learning. This study employed LSTM (Long Short Term Memory) and CNN (Convolutional Neural Networks), which have shown potential for predictive performance in the finance domain, to enhance the classification accuracy in Bitcoin price trend prediction. XAI(eXplainable Artificial Intelligence) techniques were applied to the predictive model to enhance its explainability and interpretability by providing a comprehensive explanation of the model. In the empirical experiment, CNN was applied to technical indicators and Google trend data to build a Bitcoin price trend prediction model, and the CNN model using both technical indicators and Google trend data clearly outperformed the other models using neural networks, SVM, and LSTM. Then SHAP(Shapley Additive exPlanations) was applied to the predictive model to obtain explanations about the output values. Important prediction drivers in input variables were extracted through global interpretation, and the interpretation of the predictive model's decision process for each instance was suggested through local interpretation. The results show that our proposed research framework demonstrates both improved classification accuracy and explainability by using CNN, Google trend data, and SHAP.
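
A minimal sketch (assumptions, not the paper's code) of applying SHAP's gradient-based explainer to a trained Keras CNN over windows of technical indicators and Google Trends features; model, X_background, and X_test are hypothetical objects prepared elsewhere.

```python
import numpy as np
import shap  # pip install shap

# model: trained tf.keras CNN; X_background: (B, T, F) sample of training windows;
# X_test: (N, T, F) windows to explain (T time steps, F indicator/trend features).
explainer = shap.GradientExplainer(model, X_background)
shap_values = explainer.shap_values(X_test)

# Global interpretation: average |SHAP| per input feature
# (the exact output shape depends on the shap version and model outputs).
vals = shap_values[0] if isinstance(shap_values, list) else shap_values
global_importance = np.abs(vals).mean(axis=(0, 1))

# Local interpretation: attribution for a single prediction, e.g. the first test window.
local_explanation = vals[0]
```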

Assessment of Visual Landscape Image Analysis Method Using CNN Deep Learning - Focused on Healing Place - (CNN 딥러닝을 활용한 경관 이미지 분석 방법 평가 - 힐링장소를 대상으로 -)

  • Sung, Jung-Han;Lee, Kyung-Jin
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.51 no.3
    • /
    • pp.166-178
    • /
    • 2023
  • This study aims to introduce and assess CNN deep learning methods for analyzing visual landscape images on social media that embed user perceptions and experiences, focusing on healing places. For the study, seven adjectives related to healing were selected through text mining and a review of previous studies. Subsequently, 50 evaluators were recruited to build a deep learning image dataset. The evaluators were asked to collect, from portal sites, the three images most suitable for 'healing', 'healing landscape', and 'healing place'. The collected images were refined and a data augmentation process was applied to build a CNN model. After that, 15,097 images of 'healing' and 'healing landscape' were collected from portal sites and classified to analyze the visual landscape of healing places. As a result, 'quiet' was the most frequent category apart from 'other' and 'indoor', with 2,093 images (22%), followed by 'open', 'joyful', 'comfortable', 'clean', 'natural', and 'beautiful'. The study found that CNN deep learning is an analysis method that can derive results from visual landscape images. It also suggests that this approach is one way to supplement existing visual landscape analysis methods and that, by establishing a landscape image training dataset, it can support in-depth and diverse visual landscape analysis in the future.
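
A minimal Keras sketch (not the study's model) of the augmentation-plus-CNN classification step: an ImageNet-pretrained backbone with simple augmentation layers, classifying images into the seven healing adjectives plus the 'other' and 'indoor' categories. The backbone choice and layer sizes are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

augment = tf.keras.Sequential([          # data augmentation applied during training
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                   # use the pretrained backbone as a feature extractor

inputs = layers.Input(shape=(224, 224, 3))
x = augment(inputs)
x = tf.keras.applications.resnet50.preprocess_input(x)
x = base(x)
outputs = layers.Dense(9, activation="softmax")(x)   # 7 adjectives + 'other' + 'indoor'
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```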

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.141-154
    • /
    • 2019
  • Internet technology and social media are growing rapidly. Data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish low-quality from high-quality content through the text data of products, and it has proliferated within text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. This has been studied in various directions in terms of accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels. In fact, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Because real online reviews are openly available, they are not only easy to collect but also affect business. In marketing, real-world information from customers is gathered from websites rather than surveys. Whether a website's posts are positive or negative is reflected in customer response and sales, so companies try to identify this information. However, many reviews on a website are not well written and are difficult to interpret. Earlier studies in this research area used review data from the Amazon.com shopping mall, whereas recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, etc. However, accuracy remains limited because sentiment calculations change according to the subject, paragraph, sentiment-lexicon polarity, and sentence strength. This study aims to classify sentiment polarity into positive and negative categories and to increase the prediction accuracy of polarity analysis using the IMDB review data set. First, for text classification related to sentiment analysis, popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting are adopted as comparative models. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data. Representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). CNN can be used similarly to BoW when processing a sentence in vector format, but it does not consider sequential data attributes. RNN handles word order well because it takes the time information of the data into account, but it suffers from the long-term dependency problem; LSTM is used to solve this problem. For comparison, CNN and LSTM were chosen as simple deep learning models. In addition to classical machine learning algorithms, CNN, LSTM, and the integrated model were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and accuracy to find the optimal combination, and we tried to figure out how and why these models work well for sentiment analysis. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text analysis. The reasons for combining these two algorithms are as follows. CNN can automatically extract features for classification by applying convolution layers and massively parallel processing, whereas LSTM is not capable of highly parallel processing. Like faucets, LSTM has input, output, and forget gates that can be opened and closed at the desired time; these gates place memory blocks on the hidden nodes. The memory block of the LSTM may not store all the data, but it can capture the long-term sequential dependencies that CNN cannot. Furthermore, when LSTM is applied on top of CNN's pooling layer, the model has an end-to-end structure in which spatial and temporal features can be designed simultaneously. The integrated CNN-LSTM model achieved 90.33% accuracy; it is slower than CNN but faster than LSTM, and it was more accurate than the other models. In addition, the word embedding layer can be improved as the kernels are trained step by step. CNN-LSTM compensates for the weaknesses of each model and has the advantage of improving layer-by-layer learning through the end-to-end structure of LSTM. For these reasons, this study tries to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
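
A minimal Keras sketch (not the paper's exact architecture) of the integrated CNN-LSTM idea described above: a convolution layer extracts local n-gram features, pooling downsamples them, and an LSTM models their order before the positive/negative output. Vocabulary size and layer widths are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Embedding(input_dim=20000, output_dim=128),   # word embeddings
    layers.Conv1D(64, 5, activation="relu"),             # local phrase (n-gram) features
    layers.MaxPooling1D(pool_size=2),                    # downsample the feature maps
    layers.LSTM(64),                                     # sequential dependencies over features
    layers.Dense(1, activation="sigmoid"),               # positive / negative review
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```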

Implementing a Depth Map Generation Algorithm by Convolutional Neural Network (깊이맵 생성 알고리즘의 합성곱 신경망 구현)

  • Lee, Seungsoo;Kim, Hong Jin;Kim, Manbae
    • Journal of Broadcast Engineering
    • /
    • v.23 no.1
    • /
    • pp.3-10
    • /
    • 2018
  • Depth maps have been utilized in a variety of fields. Recently, research on generating depth maps with artificial neural networks (ANN) has gained much interest. This paper validates the feasibility of implementing a ready-made depth map generation method with a convolutional neural network (CNN). First, for a given image, a depth map is generated as the weighted average of a saliency map and a motion history image. The CNN is then trained with the images and depth maps. Objective and subjective experiments on the CNN showed that it can replace the ready-made depth generation method.
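
A minimal sketch (assumptions, not the authors' code) of the target depth construction described above: a weighted average of a normalized saliency map and a normalized motion history image, which the CNN is then trained to regress.

```python
import numpy as np

def make_depth_target(saliency_map, motion_history, w_saliency=0.5):
    """saliency_map, motion_history: HxW float arrays; returns the blended depth map in [0, 1]."""
    s = (saliency_map - saliency_map.min()) / (np.ptp(saliency_map) + 1e-8)
    m = (motion_history - motion_history.min()) / (np.ptp(motion_history) + 1e-8)
    return w_saliency * s + (1.0 - w_saliency) * m   # weighted average used as the training target
```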

A Method for accelerating training of Convolutional Neural Network (합성곱 신경망의 학습 가속화를 위한 방법)

  • Choi, Se Jin;Jung, Jun Mo
    • The Journal of the Convergence on Culture Technology
    • /
    • v.3 no.4
    • /
    • pp.171-175
    • /
    • 2017
  • Training a convolutional neural network (CNN) entails many iterative computations. Therefore, methods of accelerating training through parallel processing that exploits GPGPU hardware are being actively researched. In this paper, the operations of the feature extraction unit and the classification unit are divided into GPGPU blocks and threads and processed in parallel. The convolution and pooling operations of the feature extraction unit are processed in parallel at once rather than sequentially. As a result, the proposed method improved the training speed by about 314%.
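
A minimal sketch (assumptions, not the paper's implementation) of mapping a convolution onto GPGPU blocks and threads, here written with Numba CUDA in Python: each thread computes one output pixel, so the whole feature map is processed in parallel rather than sequentially. The array sizes and the 16x16 block shape are illustrative.

```python
import numpy as np
from numba import cuda  # requires a CUDA-capable GPU

@cuda.jit
def conv2d_valid(image, kernel, out):
    i, j = cuda.grid(2)                   # one GPU thread per output element
    kh, kw = kernel.shape
    if i < out.shape[0] and j < out.shape[1]:
        acc = 0.0
        for u in range(kh):
            for v in range(kw):
                acc += image[i + u, j + v] * kernel[u, v]
        out[i, j] = acc

image = np.random.rand(256, 256).astype(np.float32)
kern = np.random.rand(5, 5).astype(np.float32)
out = np.zeros((252, 252), dtype=np.float32)   # "valid" output size

threads = (16, 16)                        # thread-block shape
blocks = ((out.shape[0] + 15) // 16, (out.shape[1] + 15) // 16)
conv2d_valid[blocks, threads](image, kern, out)   # Numba transfers the arrays to the device
```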