• Title/Summary/Keyword: Deep Feature Reconstruction

Enhanced Deep Feature Reconstruction : Texture Defect Detection and Segmentation through Preservation of Multi-scale Features (개선된 Deep Feature Reconstruction : 다중 스케일 특징의 보존을 통한 텍스쳐 결함 감지 및 분할)

  • Jongwook Si;Sungyoung Kim
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.16 no.6
    • /
    • pp.369-377
    • /
    • 2023
  • In the industrial manufacturing sector, quality control is pivotal for minimizing defect rates; inadequate management can result in additional costs and production delays. This study underscores the significance of detecting texture defects in manufactured goods and proposes a more precise defect detection technique. While the DFR (Deep Feature Reconstruction) model adopted an approach based on feature map amalgamation and reconstruction, it had inherent limitations. To overcome these constraints, we incorporated a new loss function based on statistical measures, integrated a skip-connection structure, and tuned the model parameters. When the enhanced model was applied to the texture categories of the MVTec-AD dataset, it recorded a 2.3% higher Defect Segmentation AUC than previous methods, and overall defect detection performance also improved. These findings attest to the contribution of the proposed method to defect detection through the reconstruction of combined feature maps.
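
For orientation, the sketch below shows a minimal PyTorch version of the kind of feature-reconstruction pipeline with skip connections and a statistics-based loss that the abstract describes; the module names, channel sizes, and the exact statistical term are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (PyTorch) of a feature-reconstruction defect detector with a
# skip connection and a statistics-matching loss term; all sizes are illustrative.
import torch
import torch.nn as nn

class FeatureReconstructor(nn.Module):
    def __init__(self, in_ch=448, mid_ch=128):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, mid_ch, 1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(mid_ch, mid_ch // 2, 1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.Conv2d(mid_ch // 2, mid_ch, 1), nn.ReLU())
        self.dec2 = nn.Conv2d(mid_ch, in_ch, 1)

    def forward(self, feats):
        e1 = self.enc1(feats)
        e2 = self.enc2(e1)
        d1 = self.dec1(e2) + e1          # skip connection from the encoder
        return self.dec2(d1)

def reconstruction_loss(recon, feats):
    # MSE plus a simple statistical term matching channel-wise mean/variance;
    # the exact statistical loss in the paper may differ.
    mse = ((recon - feats) ** 2).mean()
    stat = ((recon.mean(dim=(2, 3)) - feats.mean(dim=(2, 3))) ** 2).mean() \
         + ((recon.var(dim=(2, 3)) - feats.var(dim=(2, 3))) ** 2).mean()
    return mse + 0.1 * stat

feats = torch.randn(2, 448, 64, 64)      # stacked multi-scale CNN features
recon = FeatureReconstructor()(feats)
loss = reconstruction_loss(recon, feats)
# At test time, the per-pixel reconstruction error serves as the defect map.
```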

Rank-weighted reconstruction feature for a robust deep neural network-based acoustic model

  • Chung, Hoon;Park, Jeon Gue;Jung, Ho-Young
    • ETRI Journal
    • /
    • v.41 no.2
    • /
    • pp.235-241
    • /
    • 2019
  • In this paper, we propose a rank-weighted reconstruction feature to improve the robustness of a feed-forward deep neural network (FFDNN)-based acoustic model. In the FFDNN-based acoustic model, an input feature is constructed by vectorizing a submatrix created by slicing the feature vectors of the frames within a context window. In this type of feature construction, the context window size is important because it determines the amount of trivial or discriminative information, such as redundancy or temporal context, in the input features. However, it is questionable whether this single parameter alone can control the quantity of information. We therefore investigated input feature construction from the perspective of rank and nullity, and propose a rank-weighted reconstruction feature that retains the speech information components while reducing the trivial components. The proposed method was evaluated on TIMIT phone recognition and the Wall Street Journal (WSJ) domain. It reduced the phone error rate on TIMIT from 18.4% to 18.0%, and the word error rate on WSJ from 4.70% to 4.43%.
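
The idea of weighting rank components of the context-window submatrix can be illustrated with a small NumPy sketch; the exponential weighting of the singular values used here is an assumed example, not the paper's exact weighting function.

```python
# Sketch (NumPy) of a rank-weighted reconstruction of the context-window
# submatrix; the weighting scheme is illustrative, not the paper's exact one.
import numpy as np

def rank_weighted_feature(frames, context=11, alpha=0.9):
    """frames: (T, D) acoustic features; returns (T, context*D) DNN inputs."""
    T, D = frames.shape
    pad = context // 2
    padded = np.pad(frames, ((pad, pad), (0, 0)), mode="edge")
    out = np.empty((T, context * D), dtype=frames.dtype)
    for t in range(T):
        X = padded[t:t + context]                # (context, D) submatrix
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        w = alpha ** np.arange(len(s))           # emphasize leading rank components
        X_rw = (U * (s * w)) @ Vt                # rank-weighted reconstruction
        out[t] = X_rw.reshape(-1)                # vectorize as the DNN input
    return out

feats = np.random.randn(100, 40).astype(np.float32)   # 100 frames, 40-dim features
inputs = rank_weighted_feature(feats)
print(inputs.shape)  # (100, 440)
```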

Pyramid Feature Compression with Inter-Level Feature Restoration-Prediction Network (계층 간 특징 복원-예측 네트워크를 통한 피라미드 특징 압축)

  • Kim, Minsub;Sim, Donggyu
    • Journal of Broadcast Engineering
    • /
    • v.27 no.3
    • /
    • pp.283-294
    • /
    • 2022
  • Feature maps used in deep learning networks generally contain more data than the input image, so a higher compression rate than that of image compression is required to transmit them. This paper proposes a method for transmitting, at a high compression rate, the pyramid feature maps used in FPN-structured networks, which are robust to object size in deep learning-based image processing. To compress the pyramid feature maps efficiently, the paper proposes a structure that transmits only some pyramid levels, predicts the non-transmitted levels from the transmitted ones through the proposed prediction network, and restores compression damage through the proposed reconstruction network. On the COCO 2017 Train images, the object detection performance (mAP) of the proposed method showed a 31.25% BD-rate improvement on the rate-precision curve over compressing the feature maps with VTM12.0, and a 57.79% BD-rate improvement over compression using PCA and DeepCABAC.
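
A minimal PyTorch sketch of inter-level prediction is shown below, assuming that P2 and P4 are transmitted and P3 is predicted from them; the layer shapes and the choice of transmitted levels are assumptions, not the paper's configuration.

```python
# Sketch (PyTorch) of predicting a non-transmitted FPN level from its
# transmitted neighbours; which levels are transmitted is an assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LevelPredictor(nn.Module):
    """Predict P3 (not transmitted) from decoded P2 and P4."""
    def __init__(self, ch=256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(ch * 2, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, p2, p4):
        h, w = p2.shape[-2] // 2, p2.shape[-1] // 2     # P3 resolution
        p2_d = F.interpolate(p2, size=(h, w), mode="bilinear", align_corners=False)
        p4_u = F.interpolate(p4, size=(h, w), mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([p2_d, p4_u], dim=1))

p2 = torch.randn(1, 256, 200, 304)
p4 = torch.randn(1, 256, 50, 76)
p3_pred = LevelPredictor()(p2, p4)
print(p3_pred.shape)  # torch.Size([1, 256, 100, 152])
```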

3D Point Cloud Reconstruction Technique from 2D Image Using Efficient Feature Map Extraction Network (효율적인 feature map 추출 네트워크를 이용한 2D 이미지에서의 3D 포인트 클라우드 재구축 기법)

  • Kim, Jeong-Yoon;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.26 no.3
    • /
    • pp.408-415
    • /
    • 2022
  • In this paper, we propose a technique for reconstructing a 3D point cloud from 2D images using an efficient feature map extraction network. The originality of the proposed method is as follows. First, we use a new feature map extraction network that is about 27% more memory-efficient than existing techniques. The proposed network does not reduce the feature resolution in the middle of the network, so important information required for 3D point cloud reconstruction is not lost; the memory increase caused by the non-reduced resolution is offset by reducing the number of channels and keeping the network efficiently shallow. Second, by preserving the high-resolution features of the 2D image, accuracy can be improved over the conventional technique: the feature maps extracted from the non-reduced image contain more detailed information than in existing methods, which further improves the reconstruction accuracy of the 3D point cloud. Third, we use a divergence loss that does not require shooting information. Requiring the shooting angle in addition to the 2D image for training means the dataset must contain this extra information, which makes dataset construction difficult; in this paper, reconstruction accuracy is instead increased by enlarging the diversity of information through randomness, without additional shooting information. To evaluate the proposed method objectively, it was tested on the ShapeNet dataset under the same protocol as the comparison papers, yielding a CD of 5.87, an EMD of 5.81, and 2.9G FLOPs. Lower CD and EMD values mean the reconstructed 3D point cloud is closer to the original, and fewer FLOPs mean the network requires less memory. The CD, EMD, and FLOPs results therefore show about a 27% improvement in memory and a 6.3% improvement in accuracy over the methods in other papers, demonstrating the objective performance of the proposed approach.
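
Since the abstract evaluates reconstruction with CD, a generic Chamfer Distance in PyTorch is sketched below; this is the standard definition, not necessarily the exact evaluation code used in the paper.

```python
# Sketch (PyTorch) of the Chamfer Distance (CD) used to score reconstructed
# point clouds; lower is better.
import torch

def chamfer_distance(p1, p2):
    """p1: (B, N, 3), p2: (B, M, 3); symmetric nearest-neighbour distance."""
    diff = p1.unsqueeze(2) - p2.unsqueeze(1)        # (B, N, M, 3)
    dist = (diff ** 2).sum(-1)                      # squared pairwise distances
    return dist.min(dim=2).values.mean(dim=1) + dist.min(dim=1).values.mean(dim=1)

pred = torch.rand(2, 1024, 3)   # reconstructed point clouds
gt = torch.rand(2, 1024, 3)     # ground-truth point clouds
print(chamfer_distance(pred, gt))   # one CD value per batch element
```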

Anomaly-based Alzheimer's disease detection using entropy-based probability Positron Emission Tomography images

  • Husnu Baris Baydargil;Jangsik Park;Ibrahim Furkan Ince
    • ETRI Journal
    • /
    • v.46 no.3
    • /
    • pp.513-525
    • /
    • 2024
  • Deep neural networks trained on labeled medical data face major challenges owing to the economic costs of data acquisition through expensive medical imaging devices, expert labor for data annotation, and large datasets to achieve optimal model performance. The heterogeneity of diseases, such as Alzheimer's disease, further complicates deep learning because the test cases may substantially differ from the training data, possibly increasing the rate of false positives. We propose a reconstruction-based self-supervised anomaly detection model to overcome these challenges. It has a dual-subnetwork encoder that enhances feature encoding augmented by skip connections to the decoder for improving the gradient flow. The novel encoder captures local and global features to improve image reconstruction. In addition, we introduce an entropy-based image conversion method. Extensive evaluations show that the proposed model outperforms benchmark models in anomaly detection and classification using an encoder. The supervised and unsupervised models show improved performances when trained with data preprocessed using the proposed image conversion method.
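
As an illustration of an entropy-based image conversion, the NumPy sketch below maps a 2-D slice to a local-entropy image; the window size, bin count, and the use of Shannon entropy are assumptions rather than the authors' exact preprocessing.

```python
# Sketch (NumPy) of an entropy-based conversion that maps an image slice to a
# per-pixel local-entropy map; parameters are assumptions, not the paper's.
import numpy as np

def local_entropy(image, win=9, bins=32):
    """image: 2-D array scaled to [0, 1]; returns a per-pixel entropy map."""
    pad = win // 2
    padded = np.pad(image, pad, mode="reflect")
    quant = np.clip((padded * bins).astype(int), 0, bins - 1)   # quantize intensities
    out = np.zeros_like(image, dtype=np.float64)
    H, W = image.shape
    for i in range(H):
        for j in range(W):
            patch = quant[i:i + win, j:j + win]
            p = np.bincount(patch.ravel(), minlength=bins) / patch.size
            p = p[p > 0]
            out[i, j] = -(p * np.log2(p)).sum()     # Shannon entropy of the window
    return out

slice_2d = np.random.rand(64, 64)
print(local_entropy(slice_2d).shape)  # (64, 64)
```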

Single Image Super Resolution using Multi Grouped Block with Adaptive Weighted Residual Blocks (적응형 가중치 잔차 블록을 적용한 다중 블록 구조 기반의 단일 영상 초해상도 기법)

  • Hyun Ho Han
    • Journal of Digital Policy
    • /
    • v.3 no.3
    • /
    • pp.9-14
    • /
    • 2024
  • This paper proposes a method that uses a multi-block structure composed of residual blocks with adaptive weights to improve the quality of single image super resolution results. In generating super resolution images with deep learning, the most critical factor for enhancing quality is feature extraction and its application. Extracting various features is essential for restoring fine details lost at low resolution, but the resulting increase in network depth and complexity poses challenges for practical implementation. Therefore, the feature extraction process was structured efficiently and the feature application process was improved to enhance quality. To achieve this, a multi-block structure was designed after the initial feature extraction, with nested residual blocks inside each block to which adaptive weights are applied. For the final high resolution reconstruction, a multi-kernel image reconstruction process was employed, further improving the quality of the results. The performance of the proposed method was evaluated by computing PSNR and SSIM values against the original image, and its superiority was demonstrated through comparisons with existing algorithms.
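
A minimal PyTorch sketch of a residual block with a learnable (adaptive) weight, nested inside a grouped block, is given below; the block layout, channel counts, and fusion layer are illustrative assumptions, not the paper's architecture.

```python
# Sketch (PyTorch) of a residual block whose residual branch is scaled by a
# learnable weight, grouped into a larger block; all sizes are illustrative.
import torch
import torch.nn as nn

class AdaptiveWeightedResidualBlock(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
        self.weight = nn.Parameter(torch.tensor(1.0))   # learned per block

    def forward(self, x):
        return x + self.weight * self.body(x)

class MultiGroupedBlock(nn.Module):
    """A group of nested adaptive residual blocks followed by a fusion conv."""
    def __init__(self, ch=64, n_blocks=4):
        super().__init__()
        self.blocks = nn.Sequential(*[AdaptiveWeightedResidualBlock(ch) for _ in range(n_blocks)])
        self.fuse = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x):
        return x + self.fuse(self.blocks(x))

x = torch.randn(1, 64, 48, 48)
print(MultiGroupedBlock()(x).shape)   # torch.Size([1, 64, 48, 48])
```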

ASPPMVSNet: A high-receptive-field multiview stereo network for dense three-dimensional reconstruction

  • Saleh Saeed;Sungjun Lee;Yongju Cho;Unsang Park
    • ETRI Journal
    • /
    • v.44 no.6
    • /
    • pp.1034-1046
    • /
    • 2022
  • The learning-based multiview stereo (MVS) methods for three-dimensional (3D) reconstruction generally use 3D volumes for depth inference. The quality of the reconstructed depth maps and the corresponding point clouds is directly influenced by the spatial resolution of the 3D volume. Consequently, these methods produce point clouds with sparse local regions because of the lack of the memory required to encode a high volume of information. Here, we apply the atrous spatial pyramid pooling (ASPP) module in MVS methods to obtain dense feature maps with multiscale, long-range, contextual information using high receptive fields. For a given 3D volume with the same spatial resolution as that in the MVS methods, the dense feature maps from the ASPP module encoded with superior information can produce dense point clouds without a high memory footprint. Furthermore, we propose a 3D loss for training the MVS networks, which improves the predicted depth values by 24.44%. The ASPP module provides state-of-the-art qualitative results by constructing relatively dense point clouds, which improves the DTU MVS dataset benchmarks by 2.25% compared with those achieved in the previous MVS methods.
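
For reference, a compact PyTorch sketch of an ASPP block with parallel dilated convolutions follows; the dilation rates and channel counts are generic choices and not necessarily those used in ASPPMVSNet.

```python
# Sketch (PyTorch) of an ASPP block: parallel dilated convolutions give a large
# receptive field at a fixed spatial resolution; rates/channels are generic.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch=32, out_ch=32, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

feat = torch.randn(1, 32, 128, 160)
print(ASPP()(feat).shape)   # torch.Size([1, 32, 128, 160])
```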

Multimode-fiber Speckle Image Reconstruction Based on Multiscale Convolution and a Multidimensional Attention Mechanism

  • Kai Liu;Leihong Zhang;Runchu Xu;Dawei Zhang;Haima Yang;Quan Sun
    • Current Optics and Photonics
    • /
    • v.8 no.5
    • /
    • pp.463-471
    • /
    • 2024
  • Multimode fibers (MMFs) possess high information throughput and small core diameter, making them highly promising for applications such as endoscopy and communication. However, modal dispersion hinders the direct use of MMFs for image transmission. By training neural networks on time-series waveforms collected from MMFs it is possible to reconstruct images, transforming blurred speckle patterns into recognizable images. This paper proposes a fully convolutional neural-network model, MSMDFNet, for image restoration in MMFs. The network employs an encoder-decoder architecture, integrating multiscale convolutional modules in the decoding layers to enhance the receptive field for feature extraction. Additionally, attention mechanisms are incorporated from both spatial and channel dimensions, to improve the network's feature-perception capabilities. The algorithm demonstrates excellent performance on MNIST and Fashion-MNIST datasets collected through MMFs, showing significant improvements in various metrics such as SSIM.
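
The sketch below shows, in PyTorch, a multiscale convolution block combined with squeeze-and-excitation-style channel attention; the kernel sizes and reduction ratio are assumptions, and the spatial-attention branch mentioned in the abstract is omitted for brevity.

```python
# Sketch (PyTorch) of a multiscale convolution block with channel attention;
# kernel sizes and the reduction ratio are assumptions.
import torch
import torch.nn as nn

class MultiScaleAttentionBlock(nn.Module):
    def __init__(self, ch=64, reduction=8):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(ch, ch, k, padding=k // 2) for k in (1, 3, 5, 7)
        ])
        self.fuse = nn.Conv2d(ch * 4, ch, 1)
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
        return y * self.channel_att(y)     # reweight channels

x = torch.randn(1, 64, 28, 28)
print(MultiScaleAttentionBlock()(x).shape)   # torch.Size([1, 64, 28, 28])
```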

Color-Image Guided Depth Map Super-Resolution Based on Iterative Depth Feature Enhancement

  • Lijun Zhao;Ke Wang;Jinjing Zhang;Jialong Zhang;Anhong Wang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.8
    • /
    • pp.2068-2082
    • /
    • 2023
  • With the rapid development of deep learning, Depth Map Super-Resolution (DMSR) methods have achieved more advanced performance. However, when the upsampling rate is very large, it is difficult for these DMSR methods to capture the structural consistency between color features and depth features. Therefore, we propose a color-image guided DMSR method based on iterative depth feature enhancement. Considering the difference between high-quality color features and low-quality depth features, we propose to decompose the depth features into High-Frequency (HF) and Low-Frequency (LF) components. Owing to the structural homogeneity of the depth HF components and the HF color features, only the HF color features are used to enhance the depth HF features, without using the LF color features. Before each HF/LF depth feature decomposition, the LF component of the previous decomposition and the updated HF component are combined. After decomposing and reorganizing the recursively updated features, all the depth LF features are combined with the final updated depth HF features to obtain the enhanced depth features. Next, the enhanced depth features are fed into the multistage depth map fusion reconstruction block, where a cross enhancement module is introduced to fully mine the spatial correlation of the depth map by interleaving various features between different convolution groups. Experimental results show that the proposed method is superior to many recent DMSR methods on two objective measures, root mean square error and mean absolute deviation.
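
A small PyTorch sketch of the HF/LF split and HF-only color guidance is given below; the blur-based decomposition and the single fusion convolution are illustrative stand-ins for the paper's modules.

```python
# Sketch (PyTorch) of a high/low-frequency split of depth features and
# enhancement of the HF part with HF color features only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def split_hf_lf(feat, k=5):
    """LF = local average (blur), HF = residual detail."""
    lf = F.avg_pool2d(feat, k, stride=1, padding=k // 2)
    return feat - lf, lf                     # (HF, LF)

class HFEnhance(nn.Module):
    """Enhance depth HF features with color HF features only."""
    def __init__(self, ch=64):
        super().__init__()
        self.fuse = nn.Conv2d(ch * 2, ch, 3, padding=1)

    def forward(self, depth_feat, color_feat):
        d_hf, d_lf = split_hf_lf(depth_feat)
        c_hf, _ = split_hf_lf(color_feat)    # LF color features are discarded
        d_hf = d_hf + self.fuse(torch.cat([d_hf, c_hf], dim=1))
        return d_hf + d_lf                   # recombine for the next iteration

depth = torch.randn(1, 64, 64, 64)
color = torch.randn(1, 64, 64, 64)
print(HFEnhance()(depth, color).shape)   # torch.Size([1, 64, 64, 64])
```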

A Study on Pre-processing for the Classification of Rare Classes (희소 클래스 분류 문제 해결을 위한 전처리 연구)

  • Ryu, Kyungjoon;Shin, Dongkyoo;Shin, Dongil
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2020.05a
    • /
    • pp.472-475
    • /
    • 2020
  • Datasets from many domains, built from real-world cases, are being applied to machine learning problems. In the information security field as well, many studies have analyzed cyberspace attack traffic data with machine learning. This paper studies how to mitigate the drop in classification performance caused by the data imbalance problem, which occurs commonly in real-world data, when classifying attack data accurately by type. We reconstructed the data from the perspective of the rare classes, removed features that adversely affect machine learning, and evaluated classification performance with a DNN (Deep Neural Network) model.
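
Two of the preprocessing steps described above, rebalancing rare classes and removing uninformative features, can be sketched in NumPy as follows; the oversampling strategy and the variance threshold are assumptions, not the authors' exact procedure.

```python
# Sketch (NumPy) of two preprocessing steps for rare-class classification:
# oversampling rare classes and dropping near-constant features.
import numpy as np

def oversample_rare(X, y, min_count=None):
    """Duplicate samples of rare classes until every class has min_count rows."""
    classes, counts = np.unique(y, return_counts=True)
    min_count = min_count or counts.max()
    Xs, ys = [X], [y]
    rng = np.random.default_rng(0)
    for c, n in zip(classes, counts):
        if n < min_count:
            idx = rng.choice(np.where(y == c)[0], size=min_count - n, replace=True)
            Xs.append(X[idx]); ys.append(y[idx])
    return np.concatenate(Xs), np.concatenate(ys)

def drop_low_variance(X, eps=1e-6):
    """Remove features that carry (almost) no information."""
    return X[:, X.var(axis=0) > eps]

X = np.random.randn(1000, 20); X[:, 3] = 0.0        # one uninformative feature
y = np.array([0] * 950 + [1] * 50)                  # imbalanced labels
Xb, yb = oversample_rare(drop_low_variance(X), y)
print(Xb.shape, np.bincount(yb))   # (1900, 19) [950 950]
```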