• Title/Summary/Keyword: Convolutional Network (CNN)


A Vision Transformer Based Recommender System Using Side Information (부가 정보를 활용한 비전 트랜스포머 기반의 추천시스템)

  • Kwon, Yujin;Choi, Minseok;Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.28 no.3 / pp.119-137 / 2022
  • Recent recommendation system studies apply various deep learning models to better represent user-item interactions. One noteworthy study is ONCF (Outer product-based Neural Collaborative Filtering), which builds a two-dimensional interaction map via an outer product and employs a CNN (Convolutional Neural Network) to learn high-order correlations from the map. However, ONCF's recommendation performance is limited by problems with the CNN and by the absence of side information. The CNN in ONCF has an inductive-bias problem that causes poor performance on data whose distribution does not appear in the training data. This paper proposes employing a Vision Transformer (ViT) instead of the vanilla CNN used in ONCF, since ViT has outperformed state-of-the-art CNNs in many image classification tasks. In addition, we propose a new architecture to reflect side information, which ONCF did not consider. Unlike previous studies that feed side information into a neural network through simple input-combination methods, this study uses an independent auxiliary classifier to reflect side information more effectively in the recommender system. ONCF used a single latent vector for each user and item, but this study constructs a channel from multiple vectors, enabling the model to learn more diverse representations and obtain an ensemble effect. Experiments showed that our deep learning model improves recommendation performance compared to ONCF.
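The core idea described above can be illustrated with a minimal sketch: build a 2-D interaction map from user and item embeddings via an outer product, patchify it, and encode it with a small Transformer encoder in place of ONCF's CNN. This is not the authors' implementation; the embedding size, patch size, and encoder depth are illustrative assumptions.

```python
# Illustrative sketch of an outer-product interaction map fed to a ViT-style
# encoder (sizes are placeholders, not the paper's configuration).
import torch
import torch.nn as nn

class OuterProductViT(nn.Module):
    def __init__(self, n_users, n_items, dim=64, patch=8, d_model=128):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        # Patchify the dim x dim interaction map with a strided convolution.
        self.patchify = nn.Conv2d(1, d_model, kernel_size=patch, stride=patch)
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                                   batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)

    def forward(self, user_ids, item_ids):
        u = self.user_emb(user_ids)                                   # (B, dim)
        v = self.item_emb(item_ids)                                   # (B, dim)
        interaction = torch.einsum('bi,bj->bij', u, v).unsqueeze(1)   # (B, 1, dim, dim)
        tokens = self.patchify(interaction).flatten(2).transpose(1, 2)  # (B, N, d_model)
        encoded = self.encoder(tokens)
        return self.head(encoded.mean(dim=1)).squeeze(-1)             # predicted preference

scores = OuterProductViT(1000, 2000)(torch.tensor([1, 2]), torch.tensor([3, 4]))
```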

Chest CT Image Patch-Based CNN Classification and Visualization for Predicting Recurrence of Non-Small Cell Lung Cancer Patients (비소세포폐암 환자의 재발 예측을 위한 흉부 CT 영상 패치 기반 CNN 분류 및 시각화)

  • Ma, Serie;Ahn, Gahee;Hong, Helen
    • Journal of the Korea Computer Graphics Society / v.28 no.1 / pp.1-9 / 2022
  • Non-small cell lung cancer (NSCLC) accounts for a high proportion, 85%, of all lung cancers and has a significantly higher mortality rate (22.7%) than other cancers. Therefore, predicting the prognosis after surgery in NSCLC patients is very important. In this study, preoperative chest CT image patches centered on the tumor region of interest are diversified into five types according to tumor-related information. The performance of a single classifier model, an ensemble classifier model with soft voting, and an ensemble classifier model that places three different patches in three input channels, all built on pre-trained ResNet and EfficientNet CNNs, is analyzed through misclassification cases and Grad-CAM visualization. In the experiments, the ResNet152 single model and the EfficientNet-b7 single model trained on the peritumoral patch achieved accuracies of 87.93% and 81.03%, respectively. In addition, the ResNet152 ensemble model that places the image, peritumoral, and shape-focused intratumoral patches in separate input channels showed stable performance with an accuracy of 87.93%, and the EfficientNet-b7 soft-voting ensemble using the image and peritumoral patches achieved an accuracy of 84.48%.
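A minimal sketch of the soft-voting ensemble idea mentioned above, assuming a two-class (recurrence vs. no recurrence) head on ResNet152 and EfficientNet-b7 backbones from torchvision; the weights, patch preprocessing, and fine-tuning procedure are not the paper's and are only placeholders.

```python
# Soft voting: average the softmax probabilities of several CNN classifiers.
import torch
import torchvision.models as models

def soft_vote(classifiers, batch):
    """Average the softmax outputs of all classifiers in the list."""
    probs = [torch.softmax(m(batch), dim=1) for m in classifiers]
    return torch.stack(probs).mean(dim=0)                    # (B, num_classes)

resnet = models.resnet152(weights=None)
resnet.fc = torch.nn.Linear(resnet.fc.in_features, 2)        # recurrence vs. none
effnet = models.efficientnet_b7(weights=None)
effnet.classifier[1] = torch.nn.Linear(effnet.classifier[1].in_features, 2)

patches = torch.randn(4, 3, 224, 224)                        # e.g. peritumoral patches
with torch.no_grad():
    prediction = soft_vote([resnet.eval(), effnet.eval()], patches).argmax(dim=1)
```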

Development of Image Classification Model for Urban Park User Activity Using Deep Learning of Social Media Photo Posts (소셜미디어 사진 게시물의 딥러닝을 활용한 도시공원 이용자 활동 이미지 분류모델 개발)

  • Lee, Ju-Kyung;Son, Yong-Hoon
    • Journal of the Korean Institute of Landscape Architecture / v.50 no.6 / pp.42-57 / 2022
  • This study aims to create a basic model that classifies the activity photos urban park users share on social media, using deep learning. For the social media data, photos related to urban parks were collected through a Naver search and used for the classification model. Based on the indicators of Naturalness, Potential Attraction, and Activity, which can be used to evaluate the characteristics of urban parks, 21 classification categories were created. Urban park photos shared on Naver were collected by category and annotated to build the datasets. A custom CNN model and a transfer-learning model built on a pre-trained CNN were designed, trained on the collected photo datasets, and analyzed. The Xception transfer-learning model showed the best performance, so it was selected as the urban park user activity image classification model and evaluated with several evaluation indicators. This study is meaningful in that it builds an AI-based index that can evaluate the characteristics of urban parks from user-shared photos on social media. The deep-learning classification model mitigates the limitations of manual classification and can efficiently classify large numbers of urban park photos, so it can be a useful method for the monitoring and management of city parks in the future.
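A minimal transfer-learning sketch of the kind of model described above, assuming 21 activity classes and 299x299 RGB inputs with a Keras Xception backbone pre-trained on ImageNet; it is not the authors' exact architecture or training setup.

```python
# Freeze a pre-trained Xception backbone and train a small classification head.
import tensorflow as tf

base = tf.keras.applications.Xception(include_top=False, weights="imagenet",
                                      input_shape=(299, 299, 3), pooling="avg")
base.trainable = False                                   # freeze pre-trained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(21, activation="softmax"),     # 21 activity categories
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # train_ds/val_ds assumed
```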

SIFT Image Feature Extraction based on Deep Learning (딥 러닝 기반의 SIFT 이미지 특징 추출)

  • Lee, Jae-Eun;Moon, Won-Jun;Seo, Young-Ho;Kim, Dong-Wook
    • Journal of Broadcast Engineering / v.24 no.2 / pp.234-242 / 2019
  • In this paper, we propose a deep neural network that extracts SIFT feature points by determining whether the center pixel of a cropped image patch is a SIFT feature point. The dataset for this network consists of the DIV2K dataset cut into 33×33 patches and uses RGB images, unlike SIFT, which uses grayscale images. The ground truth consists of the RobHess SIFT features extracted with the octave (scale) set to 0, sigma to 1.6, and the number of intervals to 3. Based on VGG-16, we construct increasingly deep networks of 13, 23, and 33 convolution layers and experiment with different ways of increasing the image scale. The results of using the sigmoid function as the activation function of the output layer are compared with those of using the softmax function. Experimental results show that the proposed network not only achieves an extraction accuracy of more than 99% but also has high extraction repeatability for distorted images.
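The patch-classification idea can be sketched as follows: a small VGG-style network takes a 33×33 RGB patch and outputs the probability that its center pixel is a SIFT keypoint. The depth and channel widths below are placeholders, not the paper's 13/23/33-layer configurations.

```python
# Illustrative VGG-style classifier: is the centre pixel of a 33x33 patch a keypoint?
import torch
import torch.nn as nn

def vgg_block(cin, cout, n_convs):
    layers = []
    for _ in range(n_convs):
        layers += [nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True)]
        cin = cout
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

class KeypointNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            vgg_block(3, 32, 2),     # 33 -> 16
            vgg_block(32, 64, 2),    # 16 -> 8
            vgg_block(64, 128, 3),   # 8 -> 4
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 4 * 4, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 1), nn.Sigmoid(),   # probability the centre pixel is a keypoint
        )

    def forward(self, patch):
        return self.classifier(self.features(patch))

prob = KeypointNet()(torch.randn(8, 3, 33, 33))   # (8, 1) keypoint probabilities
```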

Comparative analysis of Machine-Learning Based Models for Metal Surface Defect Detection (머신러닝 기반 금속외관 결함 검출 비교 분석)

  • Lee, Se-Hun;Kang, Seong-Hwan;Shin, Yo-Seob;Choi, Oh-Kyu;Kim, Sijong;Kang, Jae-Mo
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.6 / pp.834-841 / 2022
  • Recently, applying artificial intelligence technologies to various fields of production has drawn an upsurge of research interest, driven by the growth of smart factories and advances in artificial intelligence. A great deal of effort is being made to introduce artificial intelligence algorithms into the defect detection task. In particular, detecting defects on metal surfaces attracts more research interest than on other materials (wood, plastics, fibers, etc.). In this paper, we compare and analyze the speed and performance of defect classification by combining machine learning techniques (Support Vector Machine, Softmax Regression, Decision Tree) with dimensionality-reduction algorithms (Principal Component Analysis, autoencoders), alongside two convolutional neural networks (the proposed method and ResNet). To validate and compare the performance and speed of the algorithms, we adopt two datasets ((i) a public dataset, (ii) an actual dataset), and on the basis of the results, the most efficient algorithm is determined.
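A small comparison harness along the lines described above, assuming flattened grayscale surface patches as input; the data here is random placeholder data and the feature extraction, hyperparameters, and the paper's own CNN are not reproduced.

```python
# Compare dimensionality reduction + classical classifiers on placeholder data.
import time
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression   # multinomial = softmax regression
from sklearn.model_selection import train_test_split

X = np.random.rand(500, 64 * 64)      # placeholder flattened 64x64 surface patches
y = np.random.randint(0, 2, 500)      # placeholder defect / no-defect labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

candidates = {
    "PCA+SVM": make_pipeline(PCA(n_components=50), SVC()),
    "PCA+Softmax": make_pipeline(PCA(n_components=50), LogisticRegression(max_iter=1000)),
    "PCA+DecisionTree": make_pipeline(PCA(n_components=50), DecisionTreeClassifier()),
}
for name, clf in candidates.items():
    clf.fit(X_tr, y_tr)
    start = time.perf_counter()
    acc = clf.score(X_te, y_te)                           # accuracy on the test split
    print(f"{name}: accuracy={acc:.3f}, inference={time.perf_counter() - start:.4f}s")
```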

A Study on Lightweight CNN-based Interpolation Method for Satellite Images (위성 영상을 위한 경량화된 CNN 기반의 보간 기술 연구)

  • Kim, Hyun-ho;Seo, Doochun;Jung, JaeHeon;Kim, Yongwoo
    • Korean Journal of Remote Sensing / v.38 no.2 / pp.167-177 / 2022
  • Obtaining satellite image products from the images transmitted to the ground station after capture involves many pre/post-processing steps. During pre/post-processing, geometric correction is essential when converting level 1R images to level 1G images. An interpolation method is inevitably used for geometric correction, and the quality of the level 1G images depends on the accuracy of that interpolation method. It is also crucial that the interpolation algorithm runs fast in the level processor. In this paper, we propose a lightweight CNN-based interpolation method for the geometric correction performed when converting level 1R images to level 1G. The proposed method doubles the resolution of satellite images and uses a lightweight deep convolutional neural network for fast processing speed. In addition, a feature-map fusion method is proposed that improves the image quality of the multispectral (MS) bands using panchromatic (PAN) band information. Compared to existing deep-learning-based interpolation methods, the images obtained with the proposed method improved by about 0.4 dB for the PAN image and about 4.9 dB for the MS image in terms of the quantitative peak signal-to-noise ratio (PSNR) index. It was also confirmed that the time required to produce an image at twice the resolution of a 36,500×36,500 input (based on the PAN image size) improved by about 1.6 times compared to the existing deep-learning-based interpolation method.
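A lightweight ×2 interpolation sketch in the spirit of the description above: PAN-branch features are fused into the MS branch before sub-pixel upsampling. The layer counts, channel widths, and the assumption that PAN and MS share the same grid are illustrative simplifications, not the paper's network.

```python
# Lightweight x2 upsampler with PAN/MS feature-map fusion (illustrative only).
import torch
import torch.nn as nn

class LightweightUpsampler(nn.Module):
    def __init__(self, ms_bands=4, feats=32):
        super().__init__()
        self.ms_feat = nn.Sequential(nn.Conv2d(ms_bands, feats, 3, padding=1), nn.ReLU())
        self.pan_feat = nn.Sequential(nn.Conv2d(1, feats, 3, padding=1), nn.ReLU())
        self.fuse = nn.Conv2d(feats * 2, feats, 1)           # feature-map fusion
        self.upsample = nn.Sequential(
            nn.Conv2d(feats, ms_bands * 4, 3, padding=1),
            nn.PixelShuffle(2),                              # sub-pixel x2 resolution
        )

    def forward(self, ms, pan):
        fused = self.fuse(torch.cat([self.ms_feat(ms), self.pan_feat(pan)], dim=1))
        return self.upsample(fused)

ms = torch.randn(1, 4, 128, 128)     # multispectral bands
pan = torch.randn(1, 1, 128, 128)    # panchromatic band (same grid for this sketch)
print(LightweightUpsampler()(ms, pan).shape)   # torch.Size([1, 4, 256, 256])
```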

Road Surface Damage Detection Based on Semi-supervised Learning Using Pseudo Labels (수도 레이블을 활용한 준지도 학습 기반의 도로노면 파손 탐지)

  • Chun, Chanjun;Ryu, Seung-Ki
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.18 no.4 / pp.71-79 / 2019
  • Road surface damage detection using convolutional neural networks (CNNs) based on semantic segmentation has been studied. To train such a CNN model, it is essential to collect input images and the corresponding labeled images; unfortunately, collecting such paired datasets requires a great deal of time and cost. In this paper, we propose a road surface damage detection technique based on semi-supervised learning with pseudo labels to mitigate this problem. The model is updated by appropriately mixing labeled and unlabeled datasets, and its performance is compared against an existing model trained only on the labeled dataset. The results confirmed that recall was slightly degraded but precision improved considerably, and the F1-score was also evaluated as high.
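A minimal pseudo-labelling sketch of the semi-supervised idea above. The confidence threshold and mixing strategy are assumptions, and for brevity the sketch is written for image-level classification, whereas the paper applies the idea to pixel-wise semantic segmentation.

```python
# Generate pseudo labels for unlabelled data from a model trained on labelled data.
import torch

def make_pseudo_labels(model, unlabeled_loader, threshold=0.9):
    """Keep only predictions whose maximum softmax probability exceeds the threshold."""
    model.eval()
    images, labels = [], []
    with torch.no_grad():
        for x in unlabeled_loader:
            probs = torch.softmax(model(x), dim=1)
            conf, pred = probs.max(dim=1)
            keep = conf > threshold
            images.append(x[keep])
            labels.append(pred[keep])
    return torch.cat(images), torch.cat(labels)

# Training then alternates: fit on the labelled set, generate pseudo labels for the
# unlabelled set, and re-fit on the union of labelled and pseudo-labelled samples.
```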

Luma Mapping Function Generation Method Using Attention Map of Convolutional Neural Network in Versatile Video Coding Encoder (VVC 인코더에서 합성 곱 신경망의 어텐션 맵을 이용한 휘도 매핑 함수 생성 방법)

  • Kwon, Naseong;Lee, Jongseok;Byeon, Joohyung;Sim, Donggyu
    • Journal of Broadcast Engineering / v.26 no.4 / pp.441-452 / 2021
  • In this paper, we propose a method for generating the luma signal mapping function to improve the coding efficiency of luma mapping in LMCS. The proposed method reflects cognitive and perceptual features by multiplying the local spatial variance, which the existing LMCS uses to capture local features, by the attention map of a convolutional neural network. To evaluate the proposed method, BD-rate is compared with VTM-12.0 on classes A1, A2, B, C, and D of the MPEG standard test sequences under the All Intra (AI) condition. The experiments show that the proposed method achieves an average BD-rate gain of -0.07% for the luma component compared to VTM-12.0, while encoding/decoding time remains almost the same.
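The central operation described above can be sketched as weighting a block-wise luma variance by a CNN attention map before it feeds the mapping-function derivation. This is not VTM code; block size, attention-map resolution, and the variance formulation are illustrative assumptions.

```python
# Weight block-wise local luma variance by an upsampled CNN attention map.
import torch
import torch.nn.functional as F

def weighted_local_variance(luma, attention, block=4):
    """Block-wise variance of the luma plane, scaled by an attention map."""
    mean = F.avg_pool2d(luma, block)
    mean_sq = F.avg_pool2d(luma ** 2, block)
    variance = mean_sq - mean ** 2                           # local spatial variance
    att = F.interpolate(attention, size=variance.shape[-2:], mode="bilinear",
                        align_corners=False)
    return variance * att                                    # perceptually weighted variance

luma = torch.rand(1, 1, 64, 64)          # placeholder luma block
attention = torch.rand(1, 1, 8, 8)       # placeholder CNN attention map
weights = weighted_local_variance(luma, attention)
```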

Vector and Thickness Based Learning Augmentation Method for Efficiently Collecting Concrete Crack Images

  • Jong-Hyun Kim
    • Journal of the Korea Society of Computer and Information / v.28 no.4 / pp.65-73 / 2023
  • In this paper, we propose a data augmentation method based on CNN (Convolutional Neural Network) learning for efficiently obtaining concrete crack image datasets. Real concrete crack images are difficult to obtain because of their unstructured shapes and complex patterns, and collecting them can expose workers to dangerous situations. In this paper, we use vector- and thickness-based data augmentation to make dataset collection efficient in terms of cost and time while avoiding such situations. To demonstrate the effectiveness of the proposed method, experiments were conducted on various scenes using U-Net-based crack detection, and performance measured by IoU accuracy improved in all scenes. Without augmentation of the concrete crack data, the percentage of incorrect predictions was about 25%; when the data was augmented with our method, it was reduced to 3%.
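One way to picture vector/thickness-style augmentation is to synthesise crack-like masks by drawing jittered polylines whose direction vector and line thickness vary randomly, then pair them with background patches as training labels. The sketch below is only an interpretation of that idea, not the paper's augmentation pipeline.

```python
# Synthesise a crack-like binary mask from a random, jittered polyline.
import numpy as np
import cv2

def synth_crack_mask(h=256, w=256, segments=12, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    mask = np.zeros((h, w), dtype=np.uint8)
    point = np.array([rng.integers(0, w), rng.integers(0, h)], dtype=float)
    direction = rng.uniform(0, 2 * np.pi)
    for _ in range(segments):
        direction += rng.uniform(-0.5, 0.5)                      # jitter the crack vector
        step = rng.uniform(10, 30)
        nxt = point + step * np.array([np.cos(direction), np.sin(direction)])
        thickness = int(rng.integers(1, 4))                      # varying crack thickness
        cv2.line(mask, (int(point[0]), int(point[1])), (int(nxt[0]), int(nxt[1])),
                 color=255, thickness=thickness)
        point = nxt
    return mask

augmented_mask = synth_crack_mask()   # pair with a background patch as a training label
```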

Breaking character and natural image based CAPTCHA using feature classification (특징 분리를 통한 자연 배경을 지닌 글자 기반 CAPTCHA 공격)

  • Kim, Jaehwan;Kim, Suah;Kim, Hyoung Joong
    • Journal of the Korea Institute of Information Security & Cryptology / v.25 no.5 / pp.1011-1019 / 2015
  • CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a test used in computing to distinguish whether the user is a computer or a human. Most web sites use character-based CAPTCHAs consisting of digits and letters. Recently, with the development of OCR technology, simple character-based CAPTCHAs are broken quite easily. As an alternative, many web sites add noise to make recognition harder. In this paper, we analyze a recent CAPTCHA that overlays characters on natural images to obfuscate them. We propose an efficient method that uses a support vector machine to separate the characters from the background image and a convolutional neural network to recognize each character. As a result, 368 out of 1,000 CAPTCHAs were correctly identified, demonstrating that the current CAPTCHA is not safe.
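A two-stage sketch of the approach described above: an SVM separates character pixels from the natural background, and a small CNN classifies each segmented character crop. The per-pixel features, labels, and network layout are placeholder assumptions, not the authors' pipeline.

```python
# Stage 1: pixel-wise SVM separation; Stage 2: CNN character recognition (illustrative).
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

# Stage 1: classify each pixel as character vs. background from simple colour features.
pixel_features = np.random.rand(5000, 3)          # placeholder per-pixel RGB features
pixel_is_char = np.random.randint(0, 2, 5000)     # placeholder labels
separator = SVC(kernel="rbf").fit(pixel_features, pixel_is_char)
char_mask = separator.predict(np.random.rand(32 * 32, 3)).reshape(32, 32)

# Stage 2: small CNN recognising a segmented 32x32 character crop.
recognizer = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 8 * 8, 36),      # 26 letters + 10 digits
)
crop = torch.from_numpy(char_mask).float().view(1, 1, 32, 32)
predicted_char_class = recognizer(crop).argmax(dim=1)
```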