• Title/Summary/Keyword: Convolutional Neural Network (컨볼루셔널 뉴럴 네트워크)

Search Results: 11

Performance Improvement of Object Recognition System in Broadcast Media Using Hierarchical CNN (계층적 CNN을 이용한 방송 매체 내의 객체 인식 시스템 성능향상 방안)

  • Kwon, Myung-Kyu; Yang, Hyo-Sik
    • Journal of Digital Convergence / v.15 no.3 / pp.201-209 / 2017
  • This paper presents a smartphone object recognition system based on a hierarchical convolutional neural network. In the overall configuration, the smartphone is connected to a server: the collected data are sent to the server, where the convolutional neural network recognizes the object, and the matched object information is returned to the smartphone. The hierarchical convolutional neural network is also compared with a fractional convolutional neural network. The hierarchical network achieves 88% accuracy versus 73% for the fractional network, a 15-percentage-point improvement. Based on these results, the approach shows the potential to expand the T-Commerce market connecting smartphones and broadcast media.
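
The entry above describes the hierarchical CNN only at a high level. As an editorial illustration, the following minimal PyTorch sketch shows the coarse-then-fine routing idea behind a hierarchical classifier; the class names, channel sizes, and category counts are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class HierarchicalClassifier(nn.Module):
    """Toy two-stage classifier: a coarse head picks a super-category,
    then a category-specific fine head picks the final label."""
    def __init__(self, backbone_dim=128, n_coarse=4, n_fine_per_coarse=10):
        super().__init__()
        self.backbone = nn.Sequential(          # shared feature extractor
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, backbone_dim), nn.ReLU(),
        )
        self.coarse_head = nn.Linear(backbone_dim, n_coarse)
        self.fine_heads = nn.ModuleList(
            [nn.Linear(backbone_dim, n_fine_per_coarse) for _ in range(n_coarse)]
        )

    def forward(self, x):
        feat = self.backbone(x)
        coarse_logits = self.coarse_head(feat)
        coarse_idx = coarse_logits.argmax(dim=1)
        # Route each sample to the fine head of its predicted super-category.
        fine_logits = torch.stack(
            [self.fine_heads[c](f) for c, f in zip(coarse_idx.tolist(), feat)]
        )
        return coarse_logits, fine_logits

model = HierarchicalClassifier()
coarse, fine = model(torch.randn(2, 3, 64, 64))
print(coarse.shape, fine.shape)   # torch.Size([2, 4]) torch.Size([2, 10])
```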

Techniques for Performance Improvement of Convolutional Neural Networks using XOR-based Data Reconstruction Operation (XOR연산 기반의 데이터 재구성 기법을 활용한 컨볼루셔널 뉴럴 네트워크 성능 향상 기법)

  • Kim, Young-Ung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.1 / pp.193-198 / 2020
  • The wide adoption of convolutional neural network technology is accelerating the evolution of computing, but at the same time it is exposing serious shortfalls in hardware performance. Neural network accelerators, next-generation memory device technologies, and high-bandwidth memory architectures have been proposed as countermeasures, but they are difficult to adopt widely because of limited versatility, immature technology, and high cost, respectively. This study proposes a DRAM-based main memory technique in which pre-stored XOR bit values allow read operations to complete without waiting for an ongoing refresh operation in main memory to finish. The results show that the proposed technique improves performance by 5.8%, saves 1.2% of energy, and improves EDP by 10.6%.
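
The abstract describes the mechanism only in words. The sketch below illustrates the underlying XOR-parity idea in plain Python, not the paper's DRAM implementation: a pre-stored XOR block lets a read be reconstructed while one data block is unavailable (e.g., under refresh).

```python
from functools import reduce

def xor_blocks(*blocks: bytes) -> bytes:
    """Byte-wise XOR of equally sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, vals) for vals in zip(*blocks))

# Pre-store parity over a group of data blocks.
blocks = [b"\x10\x20\x30\x40", b"\x01\x02\x03\x04", b"\xAA\xBB\xCC\xDD"]
parity = xor_blocks(*blocks)

# Block 1 is temporarily unavailable (being refreshed); reconstruct it
# from the remaining blocks plus the pre-stored parity.
reconstructed = xor_blocks(parity, blocks[0], blocks[2])
assert reconstructed == blocks[1]
print(reconstructed.hex())
```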

A scene search method based on principal character identification using convolutional neural network (컨볼루셔널 뉴럴 네트워크를 이용한 주인공 식별 기반의 영상장면 탐색 기법)

  • Kwon, Myung-Kyu; Yang, Hyeong-Sik
    • Journal of Convergence for Information Technology / v.7 no.2 / pp.31-36 / 2017
  • In this paper, we search for and play back the video segments in which a specific cast member appears, out of a large volume of video. Conventional methods require an offset value to be set manually when searching for a scene or viewing a segment. The proposed method instead learns the main character's face, finds the main character through image recognition, and jumps to the scenes where the main character appears to play them back. Data for specific performers are extracted and collected using crawling techniques. A convolutional neural network is trained on the collected data and then evaluated. The evaluation measures how accurately the learned performer is detected in key frames extracted while the drama is playing, and how quickly and accurately the learned scenes are found; the method achieves about 93% accuracy. Based on this performance, the method can be applied to video services such as viewing, person search, and per-segment detailed information retrieval.
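
As an editorial sketch of the scene-search flow described above, the following Python snippet scans time-stamped key frames with a face classifier and collects scene start times; `face_model`, `find_actor_scenes`, and the thresholds are hypothetical stand-ins, not the paper's implementation.

```python
def find_actor_scenes(keyframes, face_model, threshold=0.9, min_gap=15.0):
    """keyframes: iterable of (timestamp_sec, frame). Returns scene start times.
    Hits closer together than min_gap seconds are treated as the same scene."""
    scene_starts, last_hit = [], None
    for ts, frame in keyframes:
        prob = face_model(frame)          # probability the lead actor is present
        if prob >= threshold:
            if last_hit is None or ts - last_hit > min_gap:
                scene_starts.append(ts)   # a new scene rather than the same one
            last_hit = ts
    return scene_starts

# Example with a dummy classifier: frames are tagged with whether the actor appears.
frames = [(t, ("actor" if 60 <= t <= 90 else "other")) for t in range(0, 180, 10)]
dummy_model = lambda frame: 1.0 if frame == "actor" else 0.0
print(find_actor_scenes(frames, dummy_model))   # [60]
```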

Development of Korean Audio Caption System (한국어 오디오 캡션 시스템 개발)

  • Kang, Taeho; Kim, Juhee; Lee, Joonha
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.11a / pp.364-367 / 2020
  • Audio captioning is an intermodal translation task in which a system takes an audio signal as input and outputs a textual description of that signal. This paper presents a model that uses deep learning algorithms, a convolutional neural network (CNN) and a Transformer, to automatically caption ambient environmental sounds and produce the output in Korean. In this study, the model achieved a SPIDEr score, the performance evaluation metric, of 0.1977.
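
The abstract names a CNN encoder and a Transformer; the following is a minimal PyTorch sketch of that encoder-decoder shape for audio captioning. All sizes (spectrogram shape, vocabulary, layer counts) are illustrative assumptions rather than the authors' settings, and the causal target mask is omitted for brevity.

```python
import torch
import torch.nn as nn

class AudioCaptioner(nn.Module):
    """Toy CNN encoder + Transformer decoder captioner (illustrative sizes)."""
    def __init__(self, vocab_size=5000, d_model=128):
        super().__init__()
        # CNN encoder over a log-mel spectrogram: (B, 1, T, mel) -> (B, T', d_model)
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, d_model, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((None, 1)),   # pool away the mel axis
        )
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, mel, tokens):
        memory = self.encoder(mel).squeeze(-1).transpose(1, 2)  # (B, T', d_model)
        tgt = self.embed(tokens)                                # (B, L, d_model)
        return self.out(self.decoder(tgt, memory))              # (B, L, vocab)

model = AudioCaptioner()
logits = model(torch.randn(2, 1, 128, 64), torch.randint(0, 5000, (2, 12)))
print(logits.shape)   # torch.Size([2, 12, 5000])
```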

Printer Identification Methods Using Global and Local Feature-Based Deep Learning (전역 및 지역 특징 기반 딥러닝을 이용한 프린터 장치 판별 기술)

  • Lee, Soo-Hyeon; Lee, Hae-Yeoun
    • KIPS Transactions on Software and Data Engineering / v.8 no.1 / pp.37-44 / 2019
  • With advances in digital IT technology, printing and scanning devices have become better and cheaper, so the public can easily access them for crimes such as forgery of official and private documents. Identifying which printing device produced a document would therefore help narrow an investigation and identify suspects. In this paper, we propose deep learning models for printer identification. First, a convolutional neural network model based on local features, which has been widely used for identification in recent years, is presented. Then, another model is presented that adds a step to calculate global features, which improves convergence speed and accuracy. Using 8 printer models, the presented models were compared with previous feature-based identification methods. Experimental results show that the models using local features and global features achieved 97.23% and 99.98% accuracy, respectively, considerably better than the previous methods.
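
To illustrate the local-plus-global fusion idea in the abstract, the sketch below concatenates CNN local features with simple whole-image statistics before classification; the particular global features (mean and standard deviation) and layer sizes are assumptions, not the features used in the paper.

```python
import torch
import torch.nn as nn

class PrinterIdNet(nn.Module):
    """Illustrative fusion of CNN local features with global image statistics."""
    def __init__(self, n_printers=8):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32 + 2, n_printers)  # +2 global stats

    def forward(self, x):
        local_feat = self.local(x)
        # Global features: mean and std of the whole grayscale scan patch.
        global_feat = torch.stack(
            [x.mean(dim=(1, 2, 3)), x.std(dim=(1, 2, 3))], dim=1)
        return self.classifier(torch.cat([local_feat, global_feat], dim=1))

logits = PrinterIdNet()(torch.randn(4, 1, 64, 64))
print(logits.shape)   # torch.Size([4, 8])
```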

Increasing Accuracy of Stock Price Pattern Prediction through Data Augmentation for Deep Learning (데이터 증강을 통한 딥러닝 기반 주가 패턴 예측 정확도 향상 방안)

  • Kim, Youngjun; Kim, Yeojeong; Lee, Insun; Lee, Hong Joo
    • The Journal of Bigdata / v.4 no.2 / pp.1-12 / 2019
  • As Artificial Intelligence (AI) technology develops, it is being applied to fields such as image, voice, and text, and has shown good results in certain areas. Researchers have also tried to predict the stock market using artificial intelligence. Predicting the stock market is known to be difficult because it is affected by many factors such as the economy and politics. In the AI field, there have been attempts to predict the ups and downs of stock prices by learning stock price patterns with various machine learning techniques. This study suggests a way of predicting stock price patterns based on the Convolutional Neural Network (CNN). A CNN classifies images by extracting features through convolutional layers, so this study classifies candlestick images generated from stock data in order to predict patterns. The study has two objectives. The first, referred to as Case 1, is to predict patterns from images made from the same day's stock price data. The second, referred to as Case 2, is to predict next-day stock price patterns from images produced from daily stock price data. In Case 1, data augmentation methods, random modification and Gaussian noise, are applied to generate more training data, and the generated images are used to fit the model. Given that deep learning requires a large amount of data, this study suggests a data augmentation method for candlestick images and compares the accuracies of images with Gaussian noise across different classification problems. All data in this study were collected through the OpenAPI provided by DaiShin Securities. Case 1 has five labels depending on the pattern: up with up closing, up with down closing, down with up closing, down with down closing, and staying. The images in Case 1 are created by removing the last candle (-1 candle), the last two candles (-2 candles), and the last three candles (-3 candles) from 60-minute, 30-minute, 10-minute, and 5-minute candle charts, where a 60-minute candle chart means each candle in the image carries 60 minutes of information (open, high, low, and close prices). Case 2 has two labels, up and down, and its images are generated from 60-minute, 30-minute, 10-minute, and 5-minute candle charts without removing any candle. Considering the nature of stock data, moving the candles in the images is suggested instead of existing data augmentation techniques; how much the candles are moved is defined as the modified value. Because the average difference of closing prices between candles was 0.0029, modified values of 0.003, 0.002, 0.001, and 0.00025 are used, and the number of images is doubled after augmentation. For Gaussian noise, the mean was 0 and the variance was 0.01. For both Case 1 and Case 2, the model is based on VGG-Net16, which has 16 layers. As a result, the 10-minute -1 candle setting showed the best accuracy among the 60-minute, 30-minute, 10-minute, and 5-minute candle charts, so 10-minute images were used for the rest of the Case 1 experiments. The images with three candles removed were selected for data augmentation and Gaussian noise: 10-minute -3 candle reached 79.72% accuracy, images with a 0.00025 modified value and 100% of candles changed reached 79.92%, and applying Gaussian noise raised the accuracy to 80.98%. According to the outcomes of Case 2, 60-minute candle charts could predict the next day's patterns with 82.60% accuracy. In summary, this study is expected to contribute to further research on predicting stock price patterns from images, and it provides a practical method for data augmentation of stock data.
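
The two augmentations described in the abstract (shifting candles by a modified value and adding Gaussian noise) can be sketched on raw OHLC data before it is rendered to a candlestick image; the NumPy snippet below is such a sketch, with scaling and rendering details left as assumptions.

```python
import numpy as np

def augment_candles(ohlc, modified_value=0.00025, change_ratio=1.0,
                    noise_std=0.1, seed=0):
    """Sketch of the two augmentations for an (N, 4) open/high/low/close array,
    applied before rendering the chart image."""
    rng = np.random.default_rng(seed)
    aug = ohlc.astype(float).copy()
    n = len(aug)
    # 1) Candle shift: move a chosen fraction of candles up or down by the
    #    "modified value" (the paper uses 0.003, 0.002, 0.001, 0.00025).
    idx = rng.choice(n, size=int(n * change_ratio), replace=False)
    shifts = rng.choice([-modified_value, modified_value], size=len(idx))
    aug[idx] += shifts[:, None]           # shift the whole candle (O, H, L, C)
    # 2) Gaussian noise (the paper reports mean 0, variance 0.01, i.e. std 0.1).
    aug += rng.normal(0.0, noise_std, size=aug.shape)
    return aug

chart = 100.0 + np.cumsum(np.full((60, 4), 0.001), axis=0)  # dummy 60-candle chart
print(augment_candles(chart).shape)                          # (60, 4)
```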

Design of Deep Learning-based Location information technology for Place image collecting

  • Jang, Jin-wook
    • Journal of the Korea Society of Computer and Information / v.25 no.9 / pp.31-36 / 2020
  • This study designed a place-image collection technology that provides the user with exact location information for an image even when that information is not embedded in the photo. Deep learning technology analyzes and collects the images. The purpose of the service system is to return the exact place name, its location, and additional information about the place, such as nearby recommended attractions, when the user uploads a photo to the system. The suggested system uses a deep learning model with a size of 25.3 MB; trained over 50 passes on a total of 15,266 images, it achieves a final accuracy of 93.75%. The system can also potentially be linked with various other services for further development.
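
As a rough editorial sketch of the described service flow (upload a photo, classify the place, return its metadata), the snippet below uses a hypothetical classifier and lookup table; the place entries and confidence threshold are purely illustrative.

```python
# Hypothetical place database keyed by the classifier's label.
PLACE_DB = {
    "gyeongbokgung": {"name": "Gyeongbokgung Palace",
                      "latlon": (37.5796, 126.9770),
                      "nearby": ["Bukchon Hanok Village", "National Folk Museum"]},
}

def locate(image, classifier, place_db=PLACE_DB, min_conf=0.8):
    """Classify the uploaded image and return the place metadata shown to the user."""
    label, conf = classifier(image)        # e.g., ("gyeongbokgung", 0.94)
    if conf < min_conf or label not in place_db:
        return {"status": "unknown place"}
    return {"status": "ok", "confidence": conf, **place_db[label]}

dummy_classifier = lambda img: ("gyeongbokgung", 0.94)
print(locate(None, dummy_classifier)["name"])   # Gyeongbokgung Palace
```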

LiDAR Image Segmentation using Convolutional Neural Network Model with Refinement Modules (정제 모듈을 포함한 컨볼루셔널 뉴럴 네트워크 모델을 이용한 라이다 영상의 분할)

  • Park, Byungjae; Seo, Beom-Su; Lee, Sejin
    • The Journal of Korea Robotics Society / v.13 no.1 / pp.8-15 / 2018
  • This paper proposes a convolutional neural network model for distinguishing areas occupied by obstacles in a LiDAR image converted from a 3D point cloud. The channels of the input LiDAR image consist of the distances to 3D points, the reflectivities of 3D points, and the heights of 3D points above the ground. The proposed model takes a LiDAR image as input and outputs a segmented LiDAR image. It adopts refinement modules with skip connections to segment the image; these modules make it possible to build a complex structure with fewer parameters than a convolutional neural network model with a linear structure. Using the proposed model, it is possible to distinguish areas in a LiDAR image occupied by obstacles such as vehicles, pedestrians, and bicyclists, so the model can be applied to recognizing surrounding obstacles and searching for safe paths.
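
The following PyTorch sketch shows a refinement module with a skip connection of the kind the abstract describes: coarse decoder features are upsampled and fused with same-resolution encoder features. Channel sizes and layer choices are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class RefinementModule(nn.Module):
    """Toy refinement block: upsample coarse decoder features and fuse them
    with same-resolution encoder features via a skip connection."""
    def __init__(self, coarse_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.fuse = nn.Sequential(
            nn.Conv2d(coarse_ch + skip_ch, out_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
        )

    def forward(self, coarse, skip):
        return self.fuse(torch.cat([self.up(coarse), skip], dim=1))

# A LiDAR-style feature map (distance, reflectivity, height channels upstream).
skip = torch.randn(1, 16, 64, 256)      # encoder feature, full resolution
coarse = torch.randn(1, 32, 32, 128)    # decoder feature, half resolution
refined = RefinementModule(32, 16, 16)(coarse, skip)
print(refined.shape)                    # torch.Size([1, 16, 64, 256])
```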

Research on Deep Learning Performance Improvement for Similar Image Classification (유사 이미지 분류를 위한 딥 러닝 성능 향상 기법 연구)

  • Lim, Dong-Jin; Kim, Taehong
    • The Journal of the Korea Contents Association / v.21 no.8 / pp.1-9 / 2021
  • Deep learning in computer vision has improved rapidly over a short period, but large-scale training data and computing power are still essential, and time-consuming trial-and-error is required to derive an optimal network model. In this study, we propose a method for improving similar-image classification performance based on CR (Confusion Rate) that considers only the characteristics of the data itself, independent of network optimization or data augmentation. The proposed method improves the performance of a deep learning model by calculating the CRs for images in a dataset with similar characteristics and reflecting them in the weights of the loss function. Because it enables image recognition that takes inter-class similarity into account, the CR-based method is advantageous for identifying images with high similarity. Applied to a ResNet18 model, it showed a performance improvement of 0.22% on HanDB and 3.38% on Animal-10N. The proposed method is expected to serve as a basis for artificial intelligence research using the noisy labeled data that accompanies large-scale training datasets.
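
A CR-based loss weighting of the kind described can be sketched as follows: confusion rates are computed from validation predictions and turned into per-class weights for the cross-entropy loss. The exact weighting formula here is an assumption, not the paper's definition.

```python
import torch
import torch.nn as nn

def confusion_rate_weights(preds, labels, n_classes, base=1.0):
    """Per-class weights from confusion rates: classes that are confused more
    often get a larger weight."""
    conf = torch.zeros(n_classes, n_classes)
    for p, t in zip(preds.tolist(), labels.tolist()):
        conf[t, p] += 1                             # row = true class, col = prediction
    per_class_total = conf.sum(dim=1).clamp(min=1)
    cr = 1.0 - conf.diag() / per_class_total        # misclassification rate per class
    return base + cr                                # weight >= base

# Dummy validation predictions for 3 classes (class 0 and 1 confuse each other).
labels = torch.tensor([0, 0, 1, 1, 2, 2])
preds  = torch.tensor([0, 1, 1, 0, 2, 2])
weights = confusion_rate_weights(preds, labels, n_classes=3)
criterion = nn.CrossEntropyLoss(weight=weights)     # CR-aware loss for retraining
print(weights)                                      # tensor([1.5000, 1.5000, 1.0000])
```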

Streamlined GoogLeNet Algorithm Based on CNN for Korean Character Recognition (한글 인식을 위한 CNN 기반의 간소화된 GoogLeNet 알고리즘 연구)

  • Kim, Yeon-gyu; Cha, Eui-young
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.9 / pp.1657-1665 / 2016
  • Deep learning with CNNs (Convolutional Neural Networks) is being applied to research in various fields, and these studies show excellent performance in image recognition. In this paper, we present a streamlined GoogLeNet CNN architecture capable of learning a large-scale Korean character database. The experimental data is PHD08, a large-scale Korean character database with 2,187 samples for each of 2,350 Korean characters, for a total of 5,139,450 samples. After training, the streamlined GoogLeNet showed over 99% test accuracy on PHD08. To ensure objectivity, we also created additional Korean character data in fonts not included in PHD08 and compared the classification performance of the streamlined GoogLeNet against other OCR programs. While the other OCR programs achieved classification success rates of 66.95% to 83.16%, the streamlined GoogLeNet achieved 89.14%, higher than any of them.
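
For reference, the sketch below shows a basic GoogLeNet-style Inception block (1x1, 3x3, 5x5, and pooling branches concatenated); the paper's streamlined variant reduces the network's depth and width for PHD08, and the channel counts here are illustrative only.

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """Basic GoogLeNet-style Inception block with four parallel branches."""
    def __init__(self, in_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 16, 1)                               # 1x1 branch
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, 8, 1), nn.ReLU(),      # 1x1 -> 3x3
                                nn.Conv2d(8, 16, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, 8, 1), nn.ReLU(),      # 1x1 -> 5x5
                                nn.Conv2d(8, 16, 5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),   # pool -> 1x1
                                nn.Conv2d(in_ch, 16, 1))

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

out = InceptionBlock(32)(torch.randn(1, 32, 28, 28))
print(out.shape)   # torch.Size([1, 64, 28, 28])
```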