• Title/Summary/Keyword: Convolution Mask

Search Results: 19

New Approach to Optimize the Size of Convolution Mask in Convolutional Neural Networks

  • Kwak, Young-Tae
    • Journal of the Korea Society of Computer and Information, v.21 no.1, pp.1-8, 2016
  • A convolutional neural network (CNN) consists of several pairs of a convolution layer and a subsampling layer, so it has more hidden layers than a multi-layer perceptron. Because the convolution mask is shared across the input image, its size ultimately determines the total number of weights in the CNN, and it is also a key learning factor that makes or breaks training. This paper therefore proposes a method for choosing the convolution mask size and the number of layers so that a CNN learns successfully. In face recognition experiments with a large set of training examples, the best mask sizes were found to be 5 by 5 and 7 by 7, regardless of the number of layers. In addition, a CNN with two pairs of convolution and subsampling layers gave the best performance, just as a multi-layer perceptron with two hidden layers does.
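
A minimal sketch (not the authors' code) of the configuration the abstract reports as best: two convolution + subsampling pairs with 5x5 masks. The input size (1x32x32) and channel counts are illustrative assumptions; the point is that the convolution weight count depends only on the mask size and channels, because the mask is shared over the whole image.

```python
import torch
import torch.nn as nn

class TwoPairCNN(nn.Module):
    def __init__(self, num_classes: int = 10, mask_size: int = 5):
        super().__init__()
        pad = mask_size // 2
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=mask_size, padding=pad),   # shared 5x5 mask
            nn.ReLU(),
            nn.MaxPool2d(2),                                       # subsampling layer
            nn.Conv2d(6, 16, kernel_size=mask_size, padding=pad),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 8 * 8, num_classes)       # for 32x32 inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# The convolution weight count is independent of the image size:
model = TwoPairCNN(mask_size=5)
conv_weights = sum(p.numel() for m in model.features
                   if isinstance(m, nn.Conv2d) for p in m.parameters())
print(conv_weights)  # 6*1*5*5 + 6  +  16*6*5*5 + 16 = 2572
```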

Accelerated Convolution Image Processing by Using Look-Up Table and Overlap Region Buffering Method (Loop-Up Table과 필터 중첩영역 버퍼링 기법을 이용한 컨벌루션 영상처리 고속화)

  • Kim, Hyun-Woo;Kim, Min-Young
    • Journal of the Institute of Electronics Engineers of Korea SC, v.49 no.4, pp.17-22, 2012
  • Convolution filtering is widely used in digital signal processing for image blurring, sharpening, edge detection, noise reduction, and so on. Depending on the application, the filter mask size, shape, and coefficient values are chosen in advance, and the designed filter is then convolved with the input image. In this paper, we propose an acceleration method for convolution processing that uses a two-dimensional look-up table (LUT) and an overlap-region buffering technique. First, with the mask coefficients fixed, the multiplications between the 8- or 10-bit pixel values of the input image and the mask coefficients are performed a priori, and the results stored in the LUT are simply referenced during convolution. Second, based on the symmetric structure of the convolution filter, the inherently duplicated operation regions are analyzed; results saved in a predefined memory buffer at the previous step are recalled and reused at the current step. This buffering minimizes unnecessary repeated filtering of the same regions during sequential processing. Because the proposed algorithms minimize the amount of computation required for convolution, they work well both on embedded systems with limited computational resources and on general personal computers. A series of experiments under various conditions verifies the effectiveness and usefulness of the proposed methods.
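
A minimal sketch, under assumed names, of the look-up-table idea described above: for a fixed integer mask, every product (pixel value x mask coefficient) is precomputed once, so the inner loop of the convolution only does table references and additions. The overlap-region buffering step is omitted here.

```python
import numpy as np

def build_lut(mask: np.ndarray, bits: int = 8) -> np.ndarray:
    # lut[k, v] = v * mask_flat[k] for every possible pixel value v
    values = np.arange(2 ** bits, dtype=np.int64)
    return mask.reshape(-1, 1).astype(np.int64) * values   # shape (K*K, 2**bits)

def convolve_lut(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    k = mask.shape[0]
    lut = build_lut(mask)
    h, w = image.shape
    out = np.zeros((h - k + 1, w - k + 1), dtype=np.int64)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + k, j:j + k].reshape(-1)
            # multiplications replaced by LUT references
            out[i, j] = lut[np.arange(k * k), patch].sum()
    return out

img = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
mask = np.ones((3, 3), dtype=np.int64)                      # fixed mask coefficients
print(convolve_lut(img, mask)[0, 0])
```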

Digital Filter based on Expended Convolution Mask to Reconstruct Impulse Noise Image (임펄스 잡음 영상을 복원하기 위한 확장된 컨벌루션 마스크 기반의 디지털 필터)

  • Cheon, Bong-Won;Kim, Nam-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2022.05a, pp.431-433, 2022
  • With the development of IoT technology, technologies such as artificial intelligence and automation are being introduced into industrial sites, and the importance of data processing is increasing accordingly. Image denoising is one of the basic steps of image processing and is used as a preprocessing stage in many applications. Many denoising studies have been conducted, but problems remain, such as preserving image detail, restoring texture, and removing particular kinds of noise. In this paper, we propose a digital filter based on an extended convolution mask that preserves image detail while removing impulse noise. The proposed algorithm uses the extended convolution mask as its filtering mask and obtains the final output by switching the extension level according to the noise level. Simulations were conducted to evaluate the proposed algorithm, and its performance was analyzed in comparison with existing methods.
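
A hedged sketch of the general idea of switching the mask extension level with the local noise level. This is an illustrative adaptive filter under a salt-and-pepper assumption, not the authors' algorithm; the extension rule and thresholds are invented for illustration.

```python
import numpy as np

def denoise_impulse(image: np.ndarray, max_radius: int = 3) -> np.ndarray:
    noisy = (image == 0) | (image == 255)                   # impulse (salt-and-pepper) pixels
    padded = np.pad(image.astype(float), max_radius, mode="reflect")
    pad_noisy = np.pad(noisy, max_radius, mode="reflect")
    out = image.astype(float).copy()
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            if not noisy[i, j]:
                continue
            # extend the mask until enough noise-free neighbours are available
            for r in range(1, max_radius + 1):
                win = padded[i + max_radius - r:i + max_radius + r + 1,
                             j + max_radius - r:j + max_radius + r + 1]
                ok = ~pad_noisy[i + max_radius - r:i + max_radius + r + 1,
                                j + max_radius - r:j + max_radius + r + 1]
                if ok.sum() >= 3 or r == max_radius:
                    out[i, j] = win[ok].mean() if ok.any() else win.mean()
                    break
    return out.astype(image.dtype)
```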


Helmet and Mask Classification for Personnel Safety Using a Deep Learning (딥러닝 기반 직원 안전용 헬멧과 마스크 분류)

  • Shokhrukh, Bibalaev;Kim, Kang-Chul
    • The Journal of the Korea institute of electronic communication sciences, v.17 no.3, pp.473-482, 2022
  • Wearing a mask is necessary to limit the risk of infection in the COVID-19 era, and wearing a helmet is essential for the safety of personnel who work in dangerous environments such as construction sites. This paper proposes an effective deep learning model, HelmetMask-Net, to classify both helmets and masks. The proposed HelmetMask-Net is a CNN consisting of data preprocessing, convolution layers, max-pooling layers, and fully connected layers, and it distinguishes four output classes: Helmet, Mask, Helmet & Mask, and no Helmet & no Mask. The configuration with 2 convolutional layers and the AdaGrad optimizer was chosen through simulations over accuracy, optimizer choice, and the number of hyperparameters. Simulation results show an accuracy of 99% and the best performance compared with other models. These results can help enhance personnel safety in the COVID-19 era.
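
A minimal PyTorch sketch of a 2-convolution-layer, 4-class classifier trained with the AdaGrad optimizer, mirroring the configuration reported above. The layer widths, 64x64 input size, and learning rate are assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

classes = ["helmet", "mask", "helmet_and_mask", "neither"]

net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
    nn.Linear(128, len(classes)),
)

optimizer = torch.optim.Adagrad(net.parameters(), lr=1e-2)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 64, 64)                  # dummy batch standing in for camera frames
y = torch.randint(0, len(classes), (8,))
loss = criterion(net(x), y)
loss.backward()
optimizer.step()
print(loss.item())
```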

A method based on Multi-Convolution layers Joint and Generative Adversarial Networks for Vehicle Detection

  • Han, Guang;Su, Jinpeng;Zhang, Chengwei
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.4, pp.1795-1811, 2019
  • To achieve rapid and accurate detection of vehicles in complex traffic conditions, we propose a novel vehicle detection method. First, our Joint Feature Network (JFN) captures more contextual information and more information about small vehicles. Second, our Evolved Region Proposal Network (EPRN) generates initial anchor boxes by adding an improved version of the region proposal network, while filtering out a large number of false vehicle boxes with soft non-maximum suppression (soft-NMS). Then our Mask Network (MaskN) generates examples that include vehicle occlusion, so that the generator and discriminator can learn from each other and further improve vehicle detection. Finally, the candidate boxes are refined by the Fine-Tuning Network (FTN) to obtain the final vehicle detections. In evaluation experiments on the DETRAC benchmark dataset, our method exceeds Faster R-CNN by 11.15%, YOLO by 11.88%, and EB by 1.64% in mAP. Our algorithm also ranks in the top two in the vehicle category of the KITTI dataset when compared with MS-CNN, YOLO-v3, RefineNet, RetinaNet, Faster R-CNN, DSSD, and YOLO-v2.
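
The proposal-filtering step above relies on soft-NMS, which decays the scores of overlapping boxes instead of discarding them outright. A short sketch of that standard technique follows (Gaussian decay assumed); it is not the paper's full pipeline.

```python
import numpy as np

def iou(box, boxes):
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thr=0.05):
    boxes, scores = boxes.astype(float).copy(), scores.astype(float).copy()
    keep, idx = [], list(range(len(scores)))
    while idx:
        best = max(idx, key=lambda i: scores[i])
        keep.append(best)
        idx.remove(best)
        if idx:
            overlaps = iou(boxes[best], boxes[idx])
            scores[idx] *= np.exp(-(overlaps ** 2) / sigma)   # Gaussian score decay
            idx = [i for i in idx if scores[i] > score_thr]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(soft_nms(boxes, scores))                 # the near-duplicate box is down-weighted
```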

Crack Detection on the Road in Aerial Image using Mask R-CNN (Mask R-CNN을 이용한 항공 영상에서의 도로 균열 검출)

  • Lee, Min Hye;Nam, Kwang Woo;Lee, Chang Woo
    • Journal of Korea Society of Industrial Information Systems, v.24 no.3, pp.23-29, 2019
  • Conventional crack detection methods consume a great deal of labor, time, and cost. To solve these problems, an automatic detection system is needed that can find cracks in images obtained from vehicles or UAVs (unmanned aerial vehicles). In this paper, we study road crack detection in unmanned aerial photographs. The aerial images are preprocessed and labeled to build a data set of the morphological information of cracks. The data set was then used to train a Mask R-CNN model, yielding a new model that has learned various kinds of crack information. Experimental results show that cracks in the aerial images were detected with an accuracy of 73.5%, and some were also correctly predicted within specific crack-type regions.
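
For context, a minimal Mask R-CNN inference sketch using torchvision (version >= 0.13 assumed). The pretrained COCO weights shown here are only a stand-in; the paper's crack detector would require re-training the predictor heads on the labelled crack data set.

```python
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 512, 512)                # stand-in for an aerial image tile
with torch.no_grad():
    pred = model([image])[0]                   # dict with boxes, labels, scores, masks

keep = pred["scores"] > 0.5                    # keep confident instances only
print(pred["boxes"][keep].shape, pred["masks"][keep].shape)
```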

Low Pass Filtering for the Extraction of Island Detection in Coastal Zone from SPOT Imagery (SPOT 위성영상을 이용한 LPF 기법으로 해안지역의 섬 경계 추출)

  • Choi Hyun;Yoon Hong-Joo
    • Journal of the Korea Institute of Information and Communication Engineering, v.9 no.8, pp.1787-1792, 2005
  • Combining remote sensing and GIS (Geographic Information System) can be useful in various fields of marine and land information as well as ITS (Intelligent Transport Systems). This paper applies LPF (Low Pass Filtering) to extract island boundaries in the coastal zone from SPOT imagery with 10 m resolution. The study area is the southern sea of Korea. After the LPF step, a Sobel operator was used to extract the island boundaries, and GIS was used to convert the result from raster to vector data. As a result, a 5×5 convolution mask proved to be the best choice for the LPF stage of island boundary detection in the coastal zone. The results are expected to provide scientific and reasonable data for maritime disputes that may arise over the EEZ (Exclusive Economic Zone).
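
A brief sketch of the processing chain described above: a 5x5 low-pass (mean) convolution followed by the Sobel operator to bring out coastline edges. SciPy is assumed, and the random array merely stands in for a SPOT band.

```python
import numpy as np
from scipy import ndimage

def island_edges(band: np.ndarray, size: int = 5) -> np.ndarray:
    smoothed = ndimage.uniform_filter(band.astype(float), size=size)  # 5x5 low-pass mask
    gx = ndimage.sobel(smoothed, axis=1)
    gy = ndimage.sobel(smoothed, axis=0)
    return np.hypot(gx, gy)                    # edge magnitude (island boundaries)

band = np.random.rand(256, 256)                # stand-in for a 10 m SPOT band
edges = island_edges(band)
print(edges.shape)
```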

Microcode based Controller for Compact CNN Accelerators Aimed at Mobile Devices (모바일 디바이스를 위한 소형 CNN 가속기의 마이크로코드 기반 컨트롤러)

  • Na, Yong-Seok;Son, Hyun-Wook;Kim, Hyung-Won
    • Journal of the Korea Institute of Information and Communication Engineering, v.26 no.3, pp.355-366, 2022
  • This paper proposes a microcode-based controller for neural network accelerators that can be reconfigured through a programmable architecture while offering the advantages of low power and very small chip size. So that the target accelerator can support various neural network models, a model is converted into microcode by a microcode compiler and loaded onto the accelerator, where it controls the accelerator's functional units such as the datapath and memory accesses. Although the proposed controller and accelerator can run various CNN models, in this paper we tested them with the YOLOv2-Tiny CNN model. With a 200 MHz system clock, the controller and accelerator achieved an inference time of 137.9 ms/image on the VOC 2012 object detection dataset and 99.5 ms/image on a mask detection dataset. When the accelerator with the proposed controller is implemented as a silicon chip, the gate count is 618,388, corresponding to a 65.5% reduction in chip area compared with an accelerator employing a CPU-based (RISC-V) controller.
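
A purely hypothetical sketch of the microcode idea: each layer of a compiled model is lowered to fixed-width control words that drive the accelerator's datapath and memory accesses. The opcode set and field layout below are invented for illustration only and are not the paper's instruction format.

```python
OPCODES = {"LOAD": 0x1, "CONV": 0x2, "POOL": 0x3, "STORE": 0x4}

def encode(op: str, src_addr: int, dst_addr: int) -> int:
    # invented layout: [31:28] opcode | [27:14] source address | [13:0] destination address
    return (OPCODES[op] << 28) | ((src_addr & 0x3FFF) << 14) | (dst_addr & 0x3FFF)

# "compiling" a toy two-layer model into a microcode program
program = [
    encode("LOAD", 0x0000, 0x0100),
    encode("CONV", 0x0100, 0x0200),
    encode("POOL", 0x0200, 0x0300),
    encode("STORE", 0x0300, 0x0000),
]
print([hex(word) for word in program])
```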

Improved Sliding Shapes for Instance Segmentation of Amodal 3D Object

  • Lin, Jinhua;Yao, Yu;Wang, Yanjie
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.11, pp.5555-5567, 2018
  • State-of-the-art instance segmentation networks succeed at generating 2D segmentation masks for the region proposals with the highest classification scores, yet 3D object segmentation has been limited to geocentric embeddings or the Sliding Shapes detector. To this end, we propose an amodal 3D instance segmentation network called A3IS-CNN, which extends the Deep Sliding Shapes detector to amodal 3D instance segmentation by adding a new 3D ConvNet branch called the A3IS-branch. The A3IS-branch, which takes a 3D amodal ROI as input and outputs 3D semantic instances, is a fully convolutional network (FCN) that shares convolutional layers with the existing 3D RPN, which takes a 3D scene as input and outputs 3D amodal proposals. Because the two branches share computation, our 3D instance segmentation network adds only a small overhead of 0.25 fps to Deep Sliding Shapes, trading off accurate detection against point-to-point instance segmentation. Experiments show that our network improves running time by at least 10% to 50% over the state-of-the-art network and outperforms state-of-the-art 3D detectors by at least 16.1 AP.
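
An illustrative (not the authors') fully convolutional 3D branch that maps a fixed-size amodal ROI volume to per-voxel instance mask logits, showing the kind of head the A3IS-branch adds on top of a 3D RPN. Channel counts and the 32^3 ROI size are assumptions.

```python
import torch
import torch.nn as nn

mask_branch = nn.Sequential(
    nn.Conv3d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(32, 1, kernel_size=1),           # per-voxel mask logits
)

roi = torch.randn(4, 1, 32, 32, 32)            # 4 pooled 3D amodal ROIs (assumed size)
logits = mask_branch(roi)
print(logits.shape)                            # torch.Size([4, 1, 32, 32, 32])
```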

Virtual core point detection and ROI extraction for finger vein recognition (지정맥 인식을 위한 가상 코어점 검출 및 ROI 추출)

  • Lee, Ju-Won;Lee, Byeong-Ro
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.10 no.3, pp.249-255, 2017
  • Finger vein recognition acquires a finger vein image by illuminating the finger with infrared light and authenticates a person through feature extraction and matching. To detect the finger edges, a 2D mask-based two-dimensional convolution can be used, but it takes too much computation time on a low-cost microprocessor or microcontroller. To solve this problem and improve the recognition rate, this study proposes a method that extracts the region of interest using virtual core points and moving-average filtering based on a threshold on the absolute difference between pixels, without using 2D convolution or 2D masks. To evaluate the proposed method, 600 finger vein images were used to compare the edge extraction speed and ROI extraction accuracy against existing methods. The comparison showed that the proposed method was at least twice as fast as the existing methods and that its ROI extraction accuracy was 6% higher. These results suggest that the proposed method offers high processing speed and a high recognition rate when applied to inexpensive microprocessors.
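
A hedged 1-D sketch of the idea described above: smooth each scan line with a moving average and mark candidate finger edges where the absolute difference between neighbouring pixels exceeds a threshold, avoiding any 2D convolution or 2D mask. The window size and threshold are assumptions, not the paper's values.

```python
import numpy as np

def edge_positions(line: np.ndarray, win: int = 5, thr: float = 12.0) -> np.ndarray:
    kernel = np.ones(win) / win
    smoothed = np.convolve(line.astype(float), kernel, mode="same")   # moving average
    diffs = np.abs(np.diff(smoothed))                                 # |p[i+1] - p[i]|
    return np.flatnonzero(diffs > thr)                                # candidate edge indices

# synthetic scan line: dark background, bright finger region, dark background
scan_line = np.concatenate([np.full(40, 30.0), np.full(60, 160.0), np.full(40, 25.0)])
print(edge_positions(scan_line))               # indices near the two finger edges
```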