• Title/Summary/Keyword: fully convolutional networks


The Impact of Various Degrees of Composite Minimax Approximate Polynomials on Convolutional Neural Networks over Fully Homomorphic Encryption

  • 이정현;노종선
    • 정보보호학회논문지, Vol. 33, No. 6, pp. 861-868, 2023
  • Fully homomorphic encryption (FHE) is one of the core technologies for services that provide deep-learning-based data analysis while preserving confidentiality. Because operations between fully homomorphically encrypted data are restricted, the non-arithmetic functions used in deep learning must be approximated by polynomials. Until now, when composite minimax polynomials approximating non-arithmetic functions were applied to convolutional neural networks, only polynomials of the same degree were applied to every layer, which makes it difficult to design efficient networks for FHE. This study proves theoretically that setting the degree of the composite-minimax approximate polynomial differently for each layer does not compromise data analysis in a convolutional neural network.
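
To make the composition idea concrete, the sketch below approximates sign(x) with a chain of low-degree polynomials whose degrees differ from stage to stage, and builds an approximate ReLU from it. It is only a rough illustration: the polynomials here are least-squares Chebyshev fits from NumPy, standing in for the true minimax polynomials of the paper, and the degrees, gap, and fitting interval are arbitrary assumptions.

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Stand-in for a minimax approximation of sign(x): a least-squares
# Chebyshev fit on [-1, -gap] U [gap, 1] (the gap around 0 is where the
# composition progressively sharpens the transition).
def fit_sign_poly(deg, gap=0.05, n=2000):
    xs = np.concatenate([np.linspace(-1.0, -gap, n), np.linspace(gap, 1.0, n)])
    return Chebyshev.fit(xs, np.sign(xs), deg)

def composite_sign(x, degrees=(7, 7, 3)):
    # Different degrees per stage: the paper's point is that the degrees
    # of the composed approximate polynomials need not all be equal.
    y = np.asarray(x, dtype=float)
    for d in degrees:
        y = fit_sign_poly(d)(y)
    return np.clip(y, -1.0, 1.0)

def approx_relu(x):
    # ReLU(x) = x * (sign(x) + 1) / 2, with sign replaced by its
    # polynomial (hence FHE-friendly) approximation.
    return x * (composite_sign(x) + 1.0) / 2.0

print(approx_relu(np.array([-0.8, -0.1, 0.1, 0.8])))
```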

Development and Evaluation of Automatic Pothole Detection Using Fully Convolutional Neural Networks

  • 전찬준;심승보;강성모;류승기
    • 한국ITS학회 논문지, Vol. 17, No. 5, pp. 55-64, 2018
  • This paper proposes a method that automatically detects potholes, which are a direct cause of traffic safety accidents and cause property damage by damaging vehicles, using a fully convolutional neural network. First, a training dataset was collected with a vehicle-mounted camera while driving on real domestic roads, and a fully convolutional network architecture was trained for semantic segmentation. To achieve robust performance in dark environments, the training dataset was augmented by brightness, yielding a total of 30,000 training images. In addition, to verify the performance of the proposed automatic pothole detection technique, an evaluation database of 450 images was built and four experts evaluated each image. The evaluation showed that the proposed pothole detection technique achieves high sensitivity, which can be interpreted as robust performance in terms of true positives.
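
As a rough illustration of the pipeline described above, the sketch below shows a minimal fully convolutional segmentation network and a simple brightness augmentation in PyTorch. The layer sizes, the two-class output, and the augmentation factor are assumptions for illustration, not the paper's actual architecture or dataset pipeline.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFCN(nn.Module):
    """Minimal fully convolutional net for two-class (pothole / background)
    semantic segmentation; layer sizes are illustrative only."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.head = nn.Conv2d(64, n_classes, 1)  # 1x1 conv -> per-pixel class scores

    def forward(self, x):
        logits = self.head(self.enc(x))
        # Upsample back to the input resolution for dense prediction.
        return F.interpolate(logits, size=x.shape[-2:], mode="bilinear",
                             align_corners=False)

def brightness_augment(img, factor):
    """Assumed brightness augmentation: scale pixel intensities (img in [0, 1])
    to mimic darker or brighter driving scenes."""
    return torch.clamp(img * factor, 0.0, 1.0)
```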

Traffic Light Recognition Using a Deep Convolutional Neural Network

  • 김민기
    • 한국멀티미디어학회논문지, Vol. 21, No. 11, pp. 1244-1253, 2018
  • The color of a traffic light is sensitive to illumination conditions; in particular, the hue information is lost when the lighting area becomes oversaturated. This paper proposes a traffic light recognition method that is robust to such illumination variations. The method consists of two steps: traffic light detection and recognition. The first step, traffic light detection, uses only intensity and saturation; the use of hue information is deferred to the second step, which recognizes the traffic light signal. A deep learning technique is applied in the second step: a deep convolutional neural network (DCNN) composed of three convolutional layers and two fully connected layers. Twelve video clips were used to evaluate the performance of the proposed method. Experimental results show a traffic light detection precision of 93.9%, a recall of 91.6%, and a recognition accuracy of 89.4%. Considering that the maximum distance between the camera and the traffic lights is 70 m, the results indicate that the proposed method is effective.
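
A minimal PyTorch sketch of the described classifier structure (three convolutional layers followed by two fully connected layers) is shown below; the channel widths, input crop size, and number of signal classes are assumptions.

```python
import torch.nn as nn

class TrafficLightDCNN(nn.Module):
    """Three convolutional blocks followed by two fully connected layers,
    matching the structure described in the abstract; channel widths,
    input size, and class count are assumptions."""
    def __init__(self, n_classes=4, in_size=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        flat = 64 * (in_size // 8) ** 2
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(flat, 128), nn.ReLU(),   # fully connected layer 1
            nn.Linear(128, n_classes),         # fully connected layer 2
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```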

Modeling of Convolutional Neural Network-based Recommendation System

  • Kim, Tae-Yeun
    • 통합자연과학논문집, Vol. 14, No. 4, pp. 183-188, 2021
  • Collaborative filtering is one of the most commonly used methods in web recommendation systems, and numerous studies on collaborative filtering have proposed measures for enhancing its accuracy. This study proposes a movie recommendation system that applies Word2Vec and ensemble convolutional neural networks. First, user sentences and movie sentences are constructed from the user, movie, and rating information. The user sentences and movie sentences are then input to Word2Vec to obtain a user vector and a movie vector. The user vector is fed to a user convolutional model and the movie vector to a movie convolutional model, and the two convolutional models are connected to a fully connected neural network. Finally, the output layer of the fully connected network produces the predicted rating for a user-movie pair. Test results showed that the proposed system achieved higher accuracy than a conventional collaborative filtering system and than the Word2Vec and deep-neural-network-based systems suggested in similar studies. The proposed Word2Vec and deep-neural-network-based recommendation system is expected to improve user satisfaction by taking the characteristics of users into account.
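
The sketch below illustrates the described two-branch arrangement in PyTorch: a user vector and a movie vector each pass through their own small convolutional branch, and the concatenated outputs feed a fully connected network that predicts the rating. All dimensions and layer choices are assumptions; the paper's actual ensemble configuration is not reproduced.

```python
import torch
import torch.nn as nn

class DualBranchRecommender(nn.Module):
    """User and movie vectors (e.g. from Word2Vec) go through separate
    convolutional branches; the concatenated features feed a fully
    connected network that predicts the rating. Dimensions are assumed."""
    def __init__(self):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv1d(1, 8, 3, padding=1), nn.ReLU(),
                nn.AdaptiveMaxPool1d(16), nn.Flatten(),
            )
        self.user_branch, self.movie_branch = branch(), branch()
        self.mlp = nn.Sequential(
            nn.Linear(8 * 16 * 2, 64), nn.ReLU(),
            nn.Linear(64, 1),  # predicted rating
        )

    def forward(self, user_vec, movie_vec):
        # user_vec, movie_vec: (batch, embedding_dim); add a channel dimension.
        u = self.user_branch(user_vec.unsqueeze(1))
        m = self.movie_branch(movie_vec.unsqueeze(1))
        return self.mlp(torch.cat([u, m], dim=1)).squeeze(-1)
```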

Synthetic data augmentation for pixel-wise steel fatigue crack identification using fully convolutional networks

  • Zhai, Guanghao;Narazaki, Yasutaka;Wang, Shuo;Shajihan, Shaik Althaf V.;Spencer, Billie F. Jr.
    • Smart Structures and Systems, Vol. 29, No. 1, pp. 237-250, 2022
  • Structural health monitoring (SHM) plays an important role in ensuring the safety and functionality of critical civil infrastructure. In recent years, numerous researchers have developed computer vision and machine learning techniques for SHM, offering the potential to reduce the labor and improve the effectiveness of field inspections. However, high-quality vision data from various types of damaged structures are relatively difficult to obtain because damaged structures occur rarely; the shortage is particularly acute for fatigue cracks in steel bridge girders. As a result, the lack of training data is one of the main issues hindering wider application of these powerful techniques in SHM. To address this problem, this article proposes using synthetic data to augment the real-world datasets used to train neural networks that identify fatigue cracks in steel structures. First, random textures representing the surface of steel structures with fatigue cracks are created and mapped onto a 3D graphics model. This model is then used to generate synthetic images under various lighting conditions and camera angles. A fully convolutional network is trained for two cases: (1) using only real-world data, and (2) using both synthetic and real-world data. With synthetic data augmentation in the training process, the crack identification performance of the network on the test dataset improves from 35% to 40% intersection over union (IoU) and from 49% to 62% precision, demonstrating the efficacy of the proposed approach.
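
For reference, the two reported metrics can be computed per image as follows; the arrays are assumed to be binary prediction and ground-truth crack masks.

```python
import numpy as np

def iou_and_precision(pred, label):
    """Per-image IoU and precision for binary crack masks (boolean arrays)."""
    pred, label = pred.astype(bool), label.astype(bool)
    inter = np.logical_and(pred, label).sum()
    union = np.logical_or(pred, label).sum()
    iou = inter / union if union else 1.0
    precision = inter / pred.sum() if pred.sum() else 1.0
    return float(iou), float(precision)
```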

Smartphone-based structural crack detection using pruned fully convolutional networks and edge computing

  • Ye, X.W.;Li, Z.X.;Jin, T.
    • Smart Structures and Systems, Vol. 29, No. 1, pp. 141-151, 2022
  • In recent years, industry and research communities have focused on developing autonomous crack inspection approaches, which mainly include image acquisition and crack detection. In these approaches, mobile devices such as cameras, drones, or smartphones are used as sensing platforms to acquire structural images, and deep learning (DL)-based methods are being developed as important crack detection approaches. However, the process of image acquisition and collection is time-consuming, which delays inspection. Moreover, present-day mobile devices such as smartphones can serve not only as sensing platforms but also as computing platforms in which deep neural networks (DNNs) can be embedded to conduct on-site crack detection. Because of the limited computing resources of mobile devices, the size of the DNNs should be reduced to improve computational efficiency. In this study, an architecture called the pruned crack recognition network (PCR-Net) was developed for the detection of structural cracks. A dataset containing 11,000 images was established from raw bridge-inspection images. A pruning method was introduced to reduce the size of the base architecture and optimize the model size. Comparative studies with image processing techniques (IPTs) and other DNNs were conducted to evaluate the performance of the proposed PCR-Net. Furthermore, a modularly designed framework integrating PCR-Net was developed to realize a DL-based crack detection application for smartphones. Finally, on-site crack detection experiments were carried out to validate the performance of the developed smartphone-based system for detecting structural cracks.
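
The sketch below shows one generic way to shrink a convolutional layer by L1-norm filter pruning, the general family of techniques the abstract refers to; it is not the specific pruning procedure used for PCR-Net, and the keep ratio is an arbitrary assumption.

```python
import torch
import torch.nn as nn

def prune_conv_filters(conv: nn.Conv2d, keep_ratio=0.5):
    """Keep the strongest fraction of filters, ranked by per-filter L1 norm."""
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))   # per-filter L1 norm
    keep = torch.argsort(scores, descending=True)[:n_keep]
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    # 'keep' tells the following layer which input channels survive.
    return pruned, keep

smaller_conv, kept = prune_conv_filters(nn.Conv2d(3, 64, 3, padding=1), keep_ratio=0.25)
```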

A Study on the Accuracy Improvement of Movie Recommender System Using Word2Vec and Ensemble Convolutional Neural Networks

  • 강부식
    • 디지털융복합연구, Vol. 17, No. 1, pp. 123-130, 2019
  • Collaborative filtering is one of the most widely used techniques in web recommendation, and many studies on collaborative filtering have proposed ways to improve its accuracy. This study proposes a movie recommendation method using Word2Vec and ensemble convolutional neural networks. First, user sentences and movie sentences are constructed from user, movie, and rating information. The user sentences and movie sentences are fed into Word2Vec to obtain user vectors and movie vectors. The user vector is input to a user convolutional model, and the movie vector to a movie convolutional model; the two convolutional models are connected by a fully connected neural network. Finally, the output layer of the fully connected network produces the predicted user rating for a movie. Experimental results show that the proposed method improves accuracy over the traditional collaborative filtering technique and over the Word2Vec and deep-neural-network-based technique proposed in a similar study.
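
As a toy illustration of the "user sentence"/"movie sentence" construction, the snippet below builds token sequences from (user, movie, rating) triples and embeds them with gensim's Word2Vec. The token format and all hyperparameters are hypothetical; the abstract does not specify them.

```python
from gensim.models import Word2Vec

# (user, movie, rating) triples -- toy data, purely hypothetical.
ratings = [("u1", "m10", 5), ("u1", "m22", 3), ("u2", "m10", 4)]

user_sents, movie_sents = {}, {}
for user, movie, score in ratings:
    user_sents.setdefault(user, []).append(f"{movie}_{score}")   # "user sentence" tokens
    movie_sents.setdefault(movie, []).append(f"{user}_{score}")  # "movie sentence" tokens

model = Word2Vec(
    sentences=list(user_sents.values()) + list(movie_sents.values()),
    vector_size=100, window=5, min_count=1, epochs=50,
)
movie_vec = model.wv["m10_5"]  # embedding of one token; would feed the movie branch
```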

Surface Water Mapping of Remote Sensing Data Using Pre-Trained Fully Convolutional Network

  • Song, Ah Ram;Jung, Min Young;Kim, Yong Il
    • 한국측량학회지, Vol. 36, No. 5, pp. 423-432, 2018
  • Surface water mapping has been widely used in various remote sensing applications. Water indices are commonly used to distinguish water bodies from land; however, determining the optimal threshold and discriminating water bodies from similar objects such as shadows and snow is difficult. Deep learning algorithms have greatly advanced image segmentation and classification. In particular, FCNs (Fully Convolutional Networks) are state-of-the-art for per-pixel image segmentation and are used in most benchmarks such as PASCAL VOC2012 and Microsoft COCO (Common Objects in Context). However, these data sets consist of everyday scenes, and few studies have been conducted on applying FCNs to large-scale remotely sensed data sets. This paper aims to fine-tune a pre-trained FCN using the CRMS (Coastwide Reference Monitoring System) data set for surface water mapping. The CRMS provides color-infrared aerial photos and ground truth maps for the monitoring and restoration of wetlands in Louisiana, USA. To effectively learn the characteristics of surface water, we used the pre-trained DeepWaterMap network, which classifies water, land, snow, ice, clouds, and shadows using Landsat satellite images. The DeepWaterMap network was then fine-tuned on the CRMS data set using two classes: water and land. The fine-tuned network classifies surface water without any additional learning process. The experimental results show that the proposed method enables high-quality surface water mapping from the CRMS data set and demonstrate the suitability of pre-trained FCNs for surface water mapping with remote sensing data.
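
A generic fine-tuning sketch of the kind described is shown below: load a pre-trained FCN, replace the final 1x1 classifier with a two-class (water/land) head, and optionally freeze the backbone. torchvision's fcn_resnet50 is used purely as a stand-in; the DeepWaterMap weights themselves are not assumed to be available in this form.

```python
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50

# Pre-trained FCN as a stand-in for DeepWaterMap (assumption: a recent
# torchvision with the "weights" API is installed).
model = fcn_resnet50(weights="DEFAULT")

# Replace the final 1x1 classifier so the head predicts two classes only.
model.classifier[4] = nn.Conv2d(512, 2, kernel_size=1)  # water vs. land

# Optionally freeze the backbone and fine-tune only the new head at first.
for p in model.backbone.parameters():
    p.requires_grad = False
```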

Deep-Learning Approach for Text Detection Using Fully Convolutional Networks

  • Tung, Trieu Son;Lee, Gueesang
    • International Journal of Contents, Vol. 14, No. 1, pp. 1-6, 2018
  • Text, as one of the most influential inventions of humanity, has played an important role in human life since ancient times. The rich and precise information embodied in text is useful in a wide range of vision-based applications; for example, text extracted from images can support automatic annotation, indexing, language translation, and assistance systems for impaired persons. Natural-scene text detection is therefore an important and active research topic in computer vision and document analysis. Previous methods perform poorly because of numerous false-positive and false-negative regions. In this paper, a fully convolutional network (FCN)-based method with a supervised architecture is used to localize textual regions. The model was trained directly on images, with pixel values as inputs and binary ground-truth maps as labels. The method was evaluated on the ICDAR-2013 dataset and proved comparable to other feature-based methods. This work could expedite future research on deep-learning-based text detection.
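
The per-pixel training setup described (raw pixels in, a binary text/non-text ground-truth map as the label) reduces to a standard dense binary classification loop. A minimal PyTorch sketch follows, where `model` is assumed to be any fully convolutional network that returns a single-channel logit map at the input resolution.

```python
import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()  # dense binary (text / non-text) loss

def train_step(model, optimizer, images, masks):
    """images: (B, 3, H, W) pixel values in [0, 1];
    masks: (B, 1, H, W) binary ground-truth text map."""
    optimizer.zero_grad()
    logits = model(images)          # assumed: single-channel logit map, same H x W
    loss = criterion(logits, masks.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```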

Image Retrieval Based on the Weighted and Regional Integration of CNN Features

  • Liao, Kaiyang;Fan, Bing;Zheng, Yuanlin;Lin, Guangfeng;Cao, Congjun
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 16, No. 3, pp. 894-907, 2022
  • The features extracted by convolutional neural networks describe images better than traditional features, and the convolutional layers are more suitable for image retrieval than the fully connected layers. However, convolutional layer features consume considerable time and memory if used directly to match images. Therefore, this paper proposes a feature weighting and region integration method for convolutional layer features that forms global feature vectors, which are subsequently used for image matching. First, the 3D feature map of the last convolutional layer is extracted, and the convolutional features are weighted to highlight the edge and position information of the image. Next, several regional feature vectors obtained with sliding windows are integrated into a global feature vector. Finally, the initial retrieval ranking is obtained by measuring the cosine similarity between the query image and the test images, and the final mean average precision (mAP) is obtained after re-ranking with query expansion. We conduct experiments on the Oxford5k and Paris6k datasets and their extended versions, Oxford105k and Paris106k. The experimental results indicate that the global feature extracted by the new method describes images better.
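
The sketch below illustrates the regional-integration step in PyTorch: sliding-window regions of the last convolutional feature map are max-pooled, summed into one global descriptor, L2-normalized, and ranked by cosine similarity. The paper's specific feature weighting and query-expansion re-ranking are omitted, and the window size and stride are assumptions.

```python
import torch
import torch.nn.functional as F

def regional_descriptor(feat, win=4, stride=2):
    """feat: (C, H, W) last-conv-layer feature map.
    Pool sliding-window regions, sum them into one global vector, L2-normalize."""
    regions = feat.unfold(1, win, stride).unfold(2, win, stride)  # (C, nh, nw, win, win)
    region_vecs = regions.amax(dim=(-1, -2))  # max-pool each region -> (C, nh, nw)
    global_vec = region_vecs.sum(dim=(1, 2))  # integrate regions     -> (C,)
    return F.normalize(global_vec, dim=0)

def cosine_rank(query_vec, db_vecs):
    """db_vecs: (N, C) L2-normalized descriptors; returns indices, best match first."""
    return torch.argsort(db_vecs @ query_vec, descending=True)
```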