• Title/Summary/Keyword: Fully Convolutional Network


GAN-based Color Palette Extraction System by Chroma Fine-tuning with Reinforcement Learning

  • Kim, Sanghyuk; Kang, Suk-Ju
    • Journal of Semiconductor Engineering / v.2 no.1 / pp.125-129 / 2021
  • As interest in deep learning grows, techniques for controlling the color of images in the image processing field are evolving with it. However, there is no clear standard for color, and it is not easy to find a way to represent color itself in the form of a color palette. In this paper, we propose a novel color palette extraction system based on chroma fine-tuning with reinforcement learning, which helps recognize the color combination that represents an input image. First, we use RGBY images to create feature maps by transferring a backbone network with well-trained model weights verified on super-resolution convolutional neural networks. Second, the feature maps are fed into three fully connected layers to generate the color palette with a generative adversarial network (GAN). Third, we apply a reinforcement learning method that changes only the chroma information of the GAN output by slightly moving the Y component of each pixel's YCbCr value up and down. The proposed method outperforms existing color palette extraction methods, achieving an accuracy of 0.9140.
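
A minimal sketch of the palette-generation step described in the abstract above, assuming the backbone feature maps are pooled and passed through three fully connected layers to produce a K-color palette. Layer widths, the number of colors, and all names are illustrative assumptions rather than the authors' code; the GAN training and the reinforcement-learning chroma fine-tuning are omitted.

```python
# Illustrative sketch only: a palette-generator head that maps backbone feature
# maps to K RGB colors through three fully connected layers.
import torch
import torch.nn as nn

class PaletteGenerator(nn.Module):
    def __init__(self, feat_channels=64, num_colors=5):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                # collapse spatial dims of the feature maps
        self.fc = nn.Sequential(                           # three fully connected layers -> palette
            nn.Linear(feat_channels, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, num_colors * 3), nn.Sigmoid(),  # RGB values in [0, 1]
        )
        self.num_colors = num_colors

    def forward(self, feats):                              # feats: (B, C, H, W) from the backbone
        x = self.pool(feats).flatten(1)
        return self.fc(x).view(-1, self.num_colors, 3)

feats = torch.randn(1, 64, 32, 32)                         # stand-in for backbone feature maps
palette = PaletteGenerator()(feats)                        # (1, 5, 3) candidate color palette
```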

Object Feature Tracking Algorithm based on Siame-FPN (Siame-FPN기반 객체 특징 추적 알고리즘)

  • Kim, Jong-Chan; Lim, Su-Chang
    • Journal of Korea Multimedia Society / v.25 no.2 / pp.247-256 / 2022
  • Visual tracking of a selected target object is a fundamental and challenging problem in computer vision. Object tracking localizes the region of the target object with a bounding box in each video frame. We propose a Siame-FPN-based custom fully convolutional network that solves the visual tracking problem by regressing the target area in an end-to-end manner. A feature-map connection structure is applied to preserve the flow of feature information, so that information is preserved and emphasized across the network. To regress the object region and classify the object, a region proposal network is connected to the Siamese network. The performance of the tracking algorithm was evaluated on the OTB-100 dataset, using the Success Plot and Precision Plot as evaluation metrics. In the experiments, the proposed tracker achieved 0.621 on the Success Plot and 0.838 on the Precision Plot.
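
The following is not the paper's Siame-FPN but a compact sketch of the Siamese matching step such trackers build on: a shared convolutional backbone embeds the template and the search region, and a depth-wise cross-correlation produces a response map whose peak indicates the target. The FPN-style feature-map connections and the region proposal head are omitted, and all layer sizes are assumptions.

```python
# Sketch of Siamese matching via depth-wise cross-correlation (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(                      # shared weights for both branches
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
)

def xcorr_depthwise(search, template):
    """Depth-wise cross-correlation of search features with template features."""
    b, c = template.shape[:2]
    search = search.view(1, b * c, *search.shape[2:])
    kernel = template.view(b * c, 1, *template.shape[2:])
    out = F.conv2d(search, kernel, groups=b * c)
    return out.view(b, c, *out.shape[2:])

template = backbone(torch.randn(1, 3, 128, 128))   # target exemplar features
search = backbone(torch.randn(1, 3, 256, 256))     # search-region features
response = xcorr_depthwise(search, template)       # response map; its peak marks the target
```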

Real-time 3D Pose Estimation of Both Human Hands via RGB-Depth Camera and Deep Convolutional Neural Networks (RGB-Depth 카메라와 Deep Convolution Neural Networks 기반의 실시간 사람 양손 3D 포즈 추정)

  • Park, Na Hyeon; Ji, Yong Bin; Gi, Geon; Kim, Tae Yeon; Park, Hye Min; Kim, Tae-Seong
    • Proceedings of the Korea Information Processing Society Conference / 2018.10a / pp.686-689 / 2018
  • 3D hand pose estimation (HPE) is an important technology for smart human-computer interfaces. This study presents a deep-learning-based hand pose estimation system that recognizes the 3D poses of both hands in real time from a single RGB-Depth camera. The system consists of four stages. First, both hands are detected and extracted from the RGB and depth images using skin detection and depth-cutting algorithms. Second, a convolutional neural network (CNN) classifier is used to distinguish the right hand from the left; it consists of three convolutional layers and two fully connected layers and takes the extracted depth images as input. Third, a trained CNN regressor, composed of multiple convolutional, pooling, and fully connected layers, estimates the hand joints from the extracted depth images of the left and right hands. The CNN classifier and regressor are trained on a dataset of 22,000 depth images. Finally, the 3D pose of each hand is reconstructed from the estimated joint information. In tests, the CNN classifier distinguished the right and left hands with 96.9% accuracy, and the CNN regressor estimated the 3D hand joint positions with an average error of 8.48 mm. The proposed hand pose estimation system can be used in a variety of application areas, including virtual reality (VR), augmented reality (AR), and mixed reality (MR).
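
A minimal sketch of the left/right-hand CNN classifier described above: three convolutional layers followed by two fully connected layers applied to an extracted depth crop. The input resolution and layer widths are illustrative assumptions.

```python
# Sketch of a 3-conv / 2-FC left-vs-right hand classifier on depth crops.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # conv 1: 96 -> 48
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # conv 2: 48 -> 24
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # conv 3: 24 -> 12
    nn.Flatten(),
    nn.Linear(64 * 12 * 12, 128), nn.ReLU(),                      # fully connected layer 1
    nn.Linear(128, 2),                                            # fully connected layer 2: left vs. right
)

depth_crop = torch.randn(1, 1, 96, 96)      # stand-in for an extracted hand depth image
logits = classifier(depth_crop)             # (1, 2) scores for left/right hand
```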

Adaptive Importance Channel Selection for Perceptual Image Compression

  • He, Yifan; Li, Feng; Bai, Huihui; Zhao, Yao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.9 / pp.3823-3840 / 2020
  • Recently, the auto-encoder has emerged as the most popular approach to convolutional neural network (CNN) based image compression and has achieved impressive performance. In a traditional auto-encoder based image compression model, the encoder simply sends the features of the last layer to the decoder, which cannot allocate bits over different spatial regions in an efficient way. Moreover, these methods do not fully exploit the contextual information under different receptive fields for better reconstruction performance. In this paper, to solve these issues, a novel auto-encoder model is designed for image compression, which can effectively transmit the hierarchical features of the encoder to the decoder. Specifically, we first propose an adaptive bit-allocation strategy that adaptively selects an importance channel. Then, we multiply the generated importance mask with the features of the last layer of our proposed encoder to achieve efficient bit allocation. Moreover, we present an additional novel perceptual loss function for more accurate image details. Extensive experiments demonstrate that the proposed model achieves significant superiority over JPEG and JPEG2000 in both subjective and objective quality. In addition, our model outperforms state-of-the-art CNN-based image compression methods in terms of PSNR.
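
An illustrative sketch, not the paper's model, of the bit-allocation idea: the last-layer features of a toy encoder are multiplied element-wise by a learned importance mask so that spatially important regions retain more information before coding. Channel counts and all names are assumptions.

```python
# Toy encoder with an importance-mask gate on its last-layer features.
import torch
import torch.nn as nn

class MaskedEncoder(nn.Module):
    def __init__(self, in_ch=3, feat_ch=128):
        super().__init__()
        self.features = nn.Sequential(                       # toy analysis transform
            nn.Conv2d(in_ch, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, feat_ch, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.importance = nn.Sequential(                     # predicts a spatial importance map
            nn.Conv2d(feat_ch, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        f = self.features(x)                                 # last-layer features
        m = self.importance(f)                               # (B, 1, H', W') importance mask
        return f * m                                         # element-wise bit-allocation gating

latent = MaskedEncoder()(torch.randn(1, 3, 256, 256))        # masked latent to be coded
```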

A Multi-Scale Parallel Convolutional Neural Network Based Intelligent Human Identification Using Face Information

  • Li, Chen; Liang, Mengti; Song, Wei; Xiao, Ke
    • Journal of Information Processing Systems / v.14 no.6 / pp.1494-1507 / 2018
  • Intelligent human identification using face information has been a research hotspot, with applications ranging from the Internet of Things (IoT), intelligent self-service banking, and intelligent surveillance to public safety and intelligent access control. Since 2D face images are usually captured from a long distance in an unconstrained environment, the key difficulties in fully exploiting this advantage and making human recognition suitable for wider intelligent applications with higher security and convenience include gray-scale changes caused by illumination variance; occlusion caused by glasses, hair, or scarves; and self-occlusion and deformation caused by pose or expression variation. Many solutions have been proposed to overcome these difficulties, but most of them improve recognition performance under only one influencing factor, which still cannot meet the demands of real face recognition scenarios. In this paper we propose a multi-scale parallel convolutional neural network architecture to extract deep, robust facial features with high discriminative ability. Extensive experiments are conducted on the CMU-PIE, extended FERET, and AR databases, and the results show that the proposed algorithm exhibits excellent discriminative ability compared with other existing algorithms.
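
A hedged sketch of a multi-scale parallel block of the kind the abstract suggests; the exact architecture is not given here, so the branch count, kernel sizes, and channel widths are assumptions. The same input is processed by parallel convolution branches with different receptive fields and the branch outputs are concatenated.

```python
# Sketch of a multi-scale parallel convolution block with concatenated branches.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, in_ch=3, branch_ch=32):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, branch_ch, k, padding=k // 2), nn.ReLU())
            for k in (3, 5, 7)                        # three receptive-field scales (assumed)
        ])

    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)   # channel-wise fusion

face = torch.randn(1, 3, 128, 128)                    # toy face crop
fused = MultiScaleBlock()(face)                       # (1, 96, 128, 128) multi-scale features
```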

Reconstruction of wind speed fields in mountainous areas using a full convolutional neural network

  • Ruifang Shen; Bo Li; Ke Li; Bowen Yan; Yuanzhao Zhang
    • Wind and Structures / v.38 no.4 / pp.231-244 / 2024
  • As wind farms expand into low-wind-speed areas, an increasing number are being established in mountainous regions. To fully utilize wind energy resources, it is essential to understand the details of mountain flow fields. Reconstructing the wind speed field in complex terrain is crucial for the planning, design, and operation of wind farms, and it affects a wind farm's profits throughout its life cycle. Currently, wind speed reconstruction is achieved primarily through physical and machine learning methods; however, physical methods often incur significant computational costs. We therefore propose a Full Convolutional Neural Network (FCNN) based reconstruction method for mountain wind velocity fields to evaluate wind resources more accurately and efficiently. The method establishes a mapping relation between terrain, wind angle, height, and the corresponding fields of the three velocity components within a specific terrain range. Guided by this mapping relation, velocity fields of the three components can be generated for different terrains, wind angles, and heights. The effectiveness of the method is demonstrated by reconstructing the wind speed field of complex terrain in Beijing.
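
A minimal sketch of the mapping the abstract describes, under the assumption that a terrain elevation map plus constant channels encoding wind angle and height are stacked as input to a small fully convolutional encoder-decoder that regresses the three velocity components. The real FCNN's depth and input encoding are not specified here.

```python
# Toy fully convolutional mapping from (terrain, wind angle, height) to (u, v, w).
import torch
import torch.nn as nn

fcnn = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),           # input: terrain, wind angle, height
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),                      # output: three velocity components
)

terrain = torch.rand(1, 1, 128, 128)                     # normalized elevation grid
angle = torch.full((1, 1, 128, 128), 0.25)               # encoded wind direction (assumed encoding)
height = torch.full((1, 1, 128, 128), 0.5)               # encoded measurement height (assumed encoding)
wind = fcnn(torch.cat([terrain, angle, height], dim=1))  # (1, 3, 128, 128) velocity fields
```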

Segmentation of Mammography Breast Images using Automatic Segmen Adversarial Network with Unet Neural Networks

  • Suriya Priyadharsini.M; J.G.R Sathiaseelan
    • International Journal of Computer Science & Network Security / v.23 no.12 / pp.151-160 / 2023
  • Breast cancer is one of the most dangerous and deadly forms of cancer, and it is the second most common cancer among Indian women in rural areas. Early detection of breast cancer can significantly improve treatment effectiveness: recognizing symptoms and signs early increases the odds of receiving earlier, more specialized care, and thus can significantly improve survival odds by delaying or entirely eliminating the disease. Mammography is a high-resolution radiography technique that is an important factor in preventing and diagnosing cancer at an early stage. Automatic segmentation of the breast region in mammography images can reduce the area that must be searched for cancer while saving time and effort compared to manual segmentation. Previous studies used autoencoder-like convolutional and deconvolutional neural networks (CN-DCNN) to automatically segment the breast area in mammography images. In this paper, we present Automatic SegmenAN, a unique end-to-end adversarial neural network for medical image segmentation. Because image segmentation requires extensive pixel-level labelling, the single scalar real/fake output of a standard GAN discriminator may be inefficient in providing steady and appropriate gradient feedback to the networks. Rather than relying on a fully convolutional neural network alone as the segmentor, we propose a new adversarial critic network with a multi-scale L1 loss function that forces the critic and segmentor to learn both global and local attributes capturing long- and short-range spatial relations among pixels. We demonstrate that Automatic SegmenAN is more effective and reliable for segmentation tasks than the state-of-the-art U-Net segmentation technique.
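
A sketch of the multi-scale L1 critic loss idea, as an illustration rather than the paper's code: the critic extracts features at several depths from the image gated by the predicted mask and by the ground-truth mask, and the L1 distances between the two feature sets are summed across scales. The critic's layer widths are assumptions.

```python
# Multi-scale L1 loss between critic features of predicted-mask and true-mask inputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Critic(nn.Module):
    def __init__(self, in_ch=1):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU()),
        ])

    def forward(self, x):
        feats = []
        for stage in self.stages:          # collect features at every scale
            x = stage(x)
            feats.append(x)
        return feats

def multiscale_l1(critic, image, pred_mask, gt_mask):
    # Compare critic features of the image gated by the predicted vs. the true mask.
    f_pred = critic(image * pred_mask)
    f_gt = critic(image * gt_mask)
    return sum(F.l1_loss(p, g) for p, g in zip(f_pred, f_gt))

critic = Critic()
img = torch.rand(1, 1, 128, 128)           # toy mammography patch
loss = multiscale_l1(critic, img, torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128))
```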

Implementation of handwritten digit recognition CNN structure using GPGPU and Combined Layer (GPGPU와 Combined Layer를 이용한 필기체 숫자인식 CNN구조 구현)

  • Lee, Sangil; Nam, Kihun; Jung, Jun Mo
    • The Journal of the Convergence on Culture Technology / v.3 no.4 / pp.165-169 / 2017
  • The convolutional neural network (CNN) is one of the machine learning algorithms that shows superior performance in image recognition and classification. A CNN is structurally simple, but it involves a large amount of computation and takes a long time to run. In this paper we therefore parallelize the convolution layer, the pooling layer, and the fully connected layer, which consume most of the processing time in a CNN, using the SIMT (Single Instruction Multiple Thread) structure of a GPGPU (General-Purpose computing on Graphics Processing Units). We also improve performance by reducing the number of memory accesses: the output of the convolution layer is used directly rather than being stored before the pooling layer. We verify the design on the MNIST dataset and confirm that the proposed CNN structure is 12.38% better than the existing structure.
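
For reference only, a sketch of the kind of MNIST CNN structure (convolution, pooling, and fully connected layers) that the paper maps onto GPGPU threads. The layer sizes are assumptions; the SIMT implementation and the fused convolution/pooling memory optimization are the paper's contribution and are not shown here.

```python
# Reference MNIST CNN structure: conv -> pool -> conv -> pool -> fully connected.
import torch
import torch.nn as nn

mnist_cnn = nn.Sequential(
    nn.Conv2d(1, 8, 5), nn.ReLU(),      # convolution layer (28 -> 24)
    nn.MaxPool2d(2),                    # pooling layer (24 -> 12); fused with conv output on GPGPU
    nn.Conv2d(8, 16, 5), nn.ReLU(),     # convolution layer (12 -> 8)
    nn.MaxPool2d(2),                    # pooling layer (8 -> 4)
    nn.Flatten(),
    nn.Linear(16 * 4 * 4, 10),          # fully connected layer over 10 digit classes
)

digit = torch.randn(1, 1, 28, 28)       # one MNIST-sized input
scores = mnist_cnn(digit)               # (1, 10) class scores
```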

A Study on Deep Learning Optimization by Land Cover Classification Item Using Satellite Imagery (위성영상을 활용한 토지피복 분류 항목별 딥러닝 최적화 연구)

  • Lee, Seong-Hyeok; Lee, Moung-jin
    • Korean Journal of Remote Sensing / v.36 no.6_2 / pp.1591-1604 / 2020
  • This study classifies land cover by applying high-resolution satellite images to deep learning algorithms and verifies the performance of the algorithms for each spatial object. Fully Convolutional Network based algorithms were selected, and a dataset was constructed using KOMPSAT-3 satellite images, land cover maps, and forest maps. By applying the constructed dataset to each algorithm, the optimal hyperparameters were determined. Final classification was performed after hyperparameter optimization, and DeeplabV3+ achieved the highest overall accuracy at 81.7%. However, looking at the accuracy of each category, SegNet showed the best performance on roads and buildings, and U-Net showed the highest accuracy on hardwood trees and paddy fields. DeeplabV3+ performed better than the other two models on fields, facility cultivation, and grassland. These results confirm the limitations of applying a single algorithm to land cover classification; if an appropriate algorithm is applied for each spatial object in the future, high-quality land cover classification results are expected.
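
A hedged sketch of how such an experiment might be set up using the off-the-shelf DeepLabV3 from torchvision rather than the authors' training code; the class count, patch size, and hyperparameter grid are assumptions.

```python
# Selecting a segmentation model and sweeping a toy hyperparameter grid.
import itertools
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 8                                  # assumed number of land cover categories
model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES)

patch = torch.randn(2, 3, 256, 256)              # stand-in for KOMPSAT-3 image patches
logits = model(patch)["out"]                     # (2, NUM_CLASSES, 256, 256) per-pixel scores

# Toy hyperparameter grid of the kind swept per algorithm in the study.
for lr, batch_size in itertools.product([1e-3, 1e-4], [4, 8]):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    # ... train with this (lr, batch_size) setting and record per-class accuracy ...
```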

Urinary Stones Segmentation Model and AI Web Application Development in Abdominal CT Images Through Machine Learning (기계학습을 통한 복부 CT영상에서 요로결석 분할 모델 및 AI 웹 애플리케이션 개발)

  • Lee, Chung-Sub; Lim, Dong-Wook; Noh, Si-Hyeong; Kim, Tae-Hoon; Park, Sung-Bin; Yoon, Kwon-Ha; Jeong, Chang-Won
    • KIPS Transactions on Computer and Communication Systems / v.10 no.11 / pp.305-310 / 2021
  • Artificial intelligence technology in the medical field initially focused on analysis and algorithm development, but it is gradually shifting toward web application development that delivers the service as a product. This paper describes a urinary stone segmentation model for abdominal CT images and an artificial intelligence web application built on it. The model was developed with U-Net, an end-to-end fully convolutional network proposed for image segmentation in the medical imaging field. The web service was developed on the AWS cloud using Flask, a Python-based micro web framework. Finally, the predictions of the urolithiasis segmentation model are delivered through model serving as the output of the AI web application service. We expect the proposed AI web application service to be utilized for screening tests.
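
A minimal serving sketch of the Flask pattern the abstract describes, assuming a trained U-Net exported as "unet_stone.pt" and a /predict route; the file name, route, and response format are illustrative and not taken from the paper.

```python
# Flask endpoint that runs a (hypothetical) scripted U-Net on an uploaded CT slice.
import io
import torch
from PIL import Image
from flask import Flask, jsonify, request
from torchvision.transforms.functional import to_tensor

app = Flask(__name__)
model = torch.jit.load("unet_stone.pt").eval()         # hypothetical scripted U-Net (outputs logits)

@app.route("/predict", methods=["POST"])
def predict():
    slice_img = Image.open(io.BytesIO(request.files["ct_slice"].read())).convert("L")
    x = to_tensor(slice_img).unsqueeze(0)              # (1, 1, H, W) CT slice
    with torch.no_grad():
        mask = torch.sigmoid(model(x)) > 0.5           # binary urinary-stone mask
    return jsonify({
        "stone_pixels": int(mask.sum()),               # simple summary of the prediction
        "mask_shape": list(mask.shape[-2:]),
    })

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```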