• Title/Summary/Keyword: deep-approach


Mapping Submarine Bathymetry and Geological Structure Using the Lineament Analysis Method

  • Kwon, O-Il;Baek, Yong;Kim, Jinhwan
    • The Journal of Engineering Geology
    • /
    • v.24 no.4
    • /
    • pp.455-461
    • /
    • 2014
  • The Honam-Jeju, Korea-Japan, and Korea-China subsea tunnel construction projects have drawn significant attention since the early 2000s. These proposed tunnels would run much deeper than most existing subsea tunnels, which cross shallow seas linking coastal areas. Thus, the need for developing new technologies for the site selection and construction of deep subsea tunnels has recently emerged, with the launch of a research project titled "Development of Key Subsea Tunnelling Technology" in 2013. A component of this research, an analysis of deep subsea geological structure, is currently underway. A ground investigation, such as a borehole or geophysical investigation, is generally carried out for tunnel design. However, when investigating a potential site for a deep subsea tunnel, borehole drilling requires equipment at the scale of offshore oil drilling. The huge cost of such an undertaking has raised the urgent need for methods that indirectly assess the local geological structure as far as possible, to limit the number of borehole investigations. This study introduces an indirect approach for assessing the geological structure of the seafloor through a submarine bathymetry analysis. The ultimate goal is to develop an automated approach to the analysis of submarine geological structures, which may prove useful in the selection of future deep subsea tunnel sites.
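
The abstract does not give algorithmic details, so the snippet below is only a rough illustration of how candidate lineaments might be flagged automatically in a bathymetry grid: it computes the seafloor slope of a synthetic depth raster with NumPy and thresholds it. The grid, cell size, and slope threshold are assumptions made purely for illustration, not parameters from the paper.

```python
# Hypothetical sketch: flag candidate lineaments in a bathymetry grid by
# thresholding the local slope. Grid values, cell size, and the threshold
# are illustrative assumptions, not the paper's parameters.
import numpy as np

def candidate_lineaments(depth, cell_size_m=100.0, slope_thresh_deg=15.0):
    """Return a boolean mask of cells whose seafloor slope exceeds a threshold."""
    dz_dy, dz_dx = np.gradient(depth, cell_size_m)        # metres per metre
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    return slope_deg > slope_thresh_deg

# Synthetic 200 x 200 bathymetry (depth in metres) with a fault-like step.
depth = -1000.0 + 5.0 * np.random.randn(200, 200)
depth[:, 100:] -= 80.0                                    # abrupt step mimicking a lineament
mask = candidate_lineaments(depth)
print(f"{mask.mean():.1%} of cells flagged as candidate lineaments")
```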

A Deep Convolutional Neural Network with Batch Normalization Approach for Plant Disease Detection

  • Albogamy, Fahad R.
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.9
    • /
    • pp.51-62
    • /
    • 2021
  • Plant disease is one of the issues that can cause losses in the production and economy of the agricultural sector. Early detection of disease, to find solutions and treatments, is still a challenge in the sustainable agriculture field. Currently, image processing techniques and machine learning methods have been applied to detect plant diseases successfully. However, the effectiveness of these methods still needs to be improved, especially in multiclass plant disease classification. In this paper, a convolutional neural network with a batch normalization-based deep learning approach for classifying plant diseases is used to develop an automatic diagnostic assistance system for leaf diseases. The significance of using deep learning technology is to make the system end-to-end, automatic, accurate, less expensive, and more convenient for detecting plant diseases from their leaves. To evaluate the proposed model, an experiment is conducted on a public dataset containing 20,654 images with 15 plant diseases. The experimental validation results on 20% of the dataset showed that the model is able to classify the 15 plant disease labels with 96.4% testing accuracy and 0.168 testing loss. These results confirm the applicability and effectiveness of the proposed model for the plant disease detection task.
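
The paper's exact architecture is not reproduced in the abstract; the sketch below shows one plausible PyTorch layout of a convolutional network with batch normalization classifying 15 disease labels. Layer widths, input resolution, and channel counts are assumptions.

```python
# Illustrative CNN-with-BatchNorm classifier for 15 leaf-disease labels.
# Layer widths and input resolution are assumptions; the paper's exact
# architecture is not given in the abstract.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, kernel_size=3, padding=1),
        nn.BatchNorm2d(cout),           # batch normalization after each conv
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

class LeafDiseaseCNN(nn.Module):
    def __init__(self, num_classes=15):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(3, 32), conv_block(32, 64), conv_block(64, 128)
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, num_classes)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = LeafDiseaseCNN()
logits = model(torch.randn(4, 3, 224, 224))    # batch of 4 RGB leaf images
print(logits.shape)                            # torch.Size([4, 15])
```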

Crack segmentation in high-resolution images using cascaded deep convolutional neural networks and Bayesian data fusion

  • Tang, Wen;Wu, Rih-Teng;Jahanshahi, Mohammad R.
    • Smart Structures and Systems
    • /
    • v.29 no.1
    • /
    • pp.221-235
    • /
    • 2022
  • Manual inspection of steel box girders on long-span bridges is time-consuming and labor-intensive, and the quality of inspection relies on the subjective judgement of the inspectors. This study proposes an automated approach to detect and segment cracks in high-resolution images. An end-to-end cascaded framework is proposed that first detects the existence of cracks using a deep convolutional neural network (CNN) and then segments the cracks using a modified U-Net encoder-decoder architecture. A Naïve Bayes data fusion scheme is proposed to effectively reduce false positives and false negatives. To generate the binary crack mask, the original images are first divided into 448 × 448 overlapping image patches, which are classified as crack versus non-crack by a deep CNN. Next, a modified U-Net is trained from scratch using only the crack patches for segmentation. A customized loss function consisting of the binary cross-entropy loss and the Dice loss is introduced to enhance segmentation performance. Additionally, a Naïve Bayes fusion strategy is employed to integrate the crack score maps from different overlapping patches and to decide whether each pixel is a crack. Comprehensive experiments have demonstrated that the proposed approach achieves an 81.71% mean intersection over union (mIoU) score across five different training/test splits, which is 7.29% higher than the baseline reference implemented with the original U-Net.
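
The abstract names a customized loss combining binary cross-entropy and the Dice loss; a minimal PyTorch version of such a loss is sketched below. The 0.5/0.5 weighting and the smoothing constant are assumptions, not values from the paper.

```python
# Minimal sketch of a combined BCE + Dice segmentation loss, as named in the
# abstract. The 0.5/0.5 weighting and the smoothing term are assumptions.
import torch
import torch.nn.functional as F

def bce_dice_loss(logits, target, bce_weight=0.5, eps=1e-6):
    """logits, target: tensors of shape (N, 1, H, W); target in {0, 1}."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()
    return bce_weight * bce + (1.0 - bce_weight) * dice

logits = torch.randn(2, 1, 448, 448)                     # two 448 x 448 patches
target = (torch.rand(2, 1, 448, 448) > 0.9).float()      # synthetic crack mask
print(bce_dice_loss(logits, target).item())
```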

DEMO: Deep MR Parametric Mapping with Unsupervised Multi-Tasking Framework

  • Cheng, Jing;Liu, Yuanyuan;Zhu, Yanjie;Liang, Dong
    • Investigative Magnetic Resonance Imaging
    • /
    • v.25 no.4
    • /
    • pp.300-312
    • /
    • 2021
  • Compressed sensing (CS) has been investigated in magnetic resonance (MR) parametric mapping to reduce scan time. However, the relatively long reconstruction time restricts its widespread application in the clinic. Recently, deep learning-based methods have shown great potential for accelerating reconstruction and improving imaging quality in fast MR imaging, although their adaptation to parametric mapping is still at an early stage. In this paper, we propose a novel deep learning-based framework, DEMO, for fast and robust MR parametric mapping. Unlike current deep learning-based methods, DEMO trains the network in an unsupervised way, which is more practical given that it is difficult to acquire large fully sampled training sets of parametric-weighted images. Specifically, a CS-based loss function is used in DEMO to avoid the need for fully sampled k-space data as the label, making it an unsupervised learning approach. DEMO reconstructs parametric-weighted images and generates a parametric map simultaneously by unrolling an iterative approach used in conventional fast MR parametric mapping, which enables multi-task learning. Experimental results showed promising performance of the proposed DEMO framework in quantitative MR T1ρ mapping.
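
The CS-based loss itself is not spelled out in the abstract, so the snippet below is a hedged sketch of the general idea: a data-consistency term against the acquired k-space samples plus a sparsity penalty, which can be minimized without fully sampled labels. The sparsifying transform (a simple finite-difference surrogate) and the weight lambda are illustrative assumptions, not DEMO's exact formulation.

```python
# Hedged sketch of a CS-style unsupervised loss for undersampled MR data:
# data consistency against the acquired k-space samples plus an l1-type
# penalty. The finite-difference regularizer and lambda are assumptions.
import torch

def cs_loss(image, kspace, mask, lam=1e-3):
    """image: complex (H, W); kspace: acquired k-space (H, W); mask: 0/1 sampling mask."""
    pred_k = torch.fft.fft2(image, norm="ortho")
    data_consistency = ((mask * (pred_k - kspace)).abs() ** 2).mean()
    tv = image.diff(dim=-1).abs().mean() + image.diff(dim=-2).abs().mean()
    return data_consistency + lam * tv

H = W = 128
image = torch.randn(H, W, dtype=torch.complex64, requires_grad=True)
mask = (torch.rand(H, W) < 0.3).float()                  # ~30% sampling
kspace = mask * torch.fft.fft2(torch.randn(H, W, dtype=torch.complex64), norm="ortho")
loss = cs_loss(image, kspace, mask)
loss.backward()                                          # usable as a training objective
print(loss.item())
```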

Predicting Session Conversion on E-commerce: A Deep Learning-based Multimodal Fusion Approach

  • Minsu Kim;Woosik Shin;SeongBeom Kim;Hee-Woong Kim
    • Asia pacific journal of information systems
    • /
    • v.33 no.3
    • /
    • pp.737-767
    • /
    • 2023
  • With the availability of big customer data and advances in machine learning techniques, the prediction of customer behavior at the session level has attracted considerable attention from marketing practitioners and scholars. This study aims to predict customer purchase conversion at the session level by employing customer profile, transaction, and clickstream data. For this purpose, we develop a multimodal deep learning fusion model with dynamic and static features (i.e., DS-fusion). Specifically, we use page views within the focal visit as dynamic features and recency, frequency, monetary value, and clumpiness (RFMC) as static features, to comprehensively capture customer characteristics related to buying behavior. Our model combines these features with deep learning architectures for conversion prediction. We validate the proposed model using real-world e-commerce data. The experimental results reveal that our model outperforms unimodal classifiers built on each feature set and classical machine learning models with dynamic and static features, including random forest and logistic regression. In this regard, this study sheds light on the promise of machine learning approaches that combine complementary modalities for predicting customer behavior.
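
As a rough illustration of the multimodal fusion idea, the sketch below encodes a within-session page-view sequence with a GRU (dynamic features) and RFMC-style static features with a small MLP, then concatenates the two embeddings for conversion prediction. All dimensions and the fusion scheme are assumptions rather than the paper's exact DS-fusion model.

```python
# Illustrative multimodal fusion model: a GRU encodes the within-session
# page-view sequence (dynamic) and an MLP encodes RFMC-style static features;
# the two embeddings are concatenated for conversion prediction.
import torch
import torch.nn as nn

class DSFusionSketch(nn.Module):
    def __init__(self, n_page_types=50, emb_dim=16, static_dim=4, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(n_page_types, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True)
        self.static_mlp = nn.Sequential(nn.Linear(static_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, page_seq, static_feats):
        _, h = self.gru(self.embed(page_seq))          # h: (1, N, hidden)
        fused = torch.cat([h.squeeze(0), self.static_mlp(static_feats)], dim=1)
        return torch.sigmoid(self.head(fused))         # conversion probability

model = DSFusionSketch()
page_seq = torch.randint(0, 50, (8, 20))               # 8 sessions, 20 page views each
static_feats = torch.randn(8, 4)                       # RFMC values per customer
print(model(page_seq, static_feats).shape)             # torch.Size([8, 1])
```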

Tomato Crop Disease Classification Using an Ensemble Approach Based on a Deep Neural Network (심층 신경망 기반의 앙상블 방식을 이용한 토마토 작물의 질병 식별)

  • Kim, Min-Ki
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.10
    • /
    • pp.1250-1257
    • /
    • 2020
  • The early detection of diseases is important in agriculture because diseases are a major threat to farmers' crop yields. The shape and color of plant leaves change differently according to the disease, so a disease can be detected and estimated by inspecting the visual features of the leaves. This study presents a vision-based leaf classification method for detecting diseases of the tomato crop. A ResNet-50 model was used to extract the visual features of leaves and classify the diseases of the tomato crop, since this model showed higher accuracy than ResNet models of other depths. We propose a new ensemble approach using several DCNN classifiers that have the same structure but have been trained over different ranges of the DCNN layers. Experiments achieved an accuracy of 97.19% on the PlantVillage dataset, validating that the proposed method effectively classifies diseases of the tomato crop.
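
The sketch below illustrates the ensemble idea described above: several ResNet-50 classifiers with identical structure, each with a different range of layers left trainable, whose softmax outputs are averaged. The freezing points, the class count, and the use of randomly initialized weights (pretrained weights would normally be loaded) are assumptions for brevity.

```python
# Sketch of the ensemble idea: identical ResNet-50 members fine-tuned over
# different layer ranges, with softmax outputs averaged at inference time.
import torch
import torch.nn as nn
from torchvision import models

def make_member(num_classes=10, trainable_prefixes=("layer4", "fc")):
    net = models.resnet50(weights=None)        # pretrained weights would be loaded in practice
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    for name, param in net.named_parameters():
        # Only parameters in the chosen layer range stay trainable.
        param.requires_grad = name.startswith(trainable_prefixes)
    return net

members = [
    make_member(trainable_prefixes=("layer3", "layer4", "fc")),
    make_member(trainable_prefixes=("layer4", "fc")),
    make_member(trainable_prefixes=("fc",)),
]

def ensemble_predict(x):
    probs = [torch.softmax(m(x), dim=1) for m in members]
    return torch.stack(probs).mean(dim=0)      # average the members' softmax outputs

x = torch.randn(2, 3, 224, 224)
print(ensemble_predict(x).argmax(dim=1))
```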

TVM-based Performance Optimization for Image Classification in Embedded Systems (임베디드 시스템에서의 객체 분류를 위한 TVM기반의 성능 최적화 연구)

  • Cheonghwan Hur;Minhae Ye;Ikhee Shin;Daewoo Lee
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.18 no.3
    • /
    • pp.101-108
    • /
    • 2023
  • Optimizing the performance of deep neural networks on embedded systems is a challenging task that requires efficient compilers and runtime systems. We propose a TVM-based approach that consists of three steps: quantization, auto-scheduling, and ahead-of-time compilation. Our approach reduces the computational complexity of models without significant loss of accuracy and generates optimized code for various hardware platforms. We evaluate our approach on three representative CNNs using the ImageNet dataset on the NVIDIA Jetson AGX Xavier board and show that it outperforms baseline methods in terms of processing speed.
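
A hedged sketch of the three-step TVM flow named in the abstract (quantize, auto-schedule, compile) follows. Exact APIs differ across TVM releases, and the model source (an ONNX file named model.onnx), the CUDA target, and the trial count are assumptions; the ahead-of-time executor selection is omitted because it varies by version.

```python
# Hedged sketch of a quantize -> auto-schedule -> compile TVM flow.
# File names, target, and trial counts are illustrative assumptions.
import onnx
import tvm
from tvm import relay, auto_scheduler

onnx_model = onnx.load("model.onnx")                        # hypothetical model file
mod, params = relay.frontend.from_onnx(onnx_model, {"input": (1, 3, 224, 224)})

# 1) Quantization (global-scale calibration shown only as an example).
with relay.quantize.qconfig(calibrate_mode="global_scale", global_scale=8.0):
    mod = relay.quantize.quantize(mod, params)

# 2) Auto-scheduling: extract tuning tasks and search for fast schedules.
target = tvm.target.Target("cuda")                          # e.g. a Jetson-class GPU
tasks, task_weights = auto_scheduler.extract_tasks(mod["main"], params, target)
tuner = auto_scheduler.TaskScheduler(tasks, task_weights)
tuner.tune(auto_scheduler.TuningOptions(
    num_measure_trials=200,
    measure_callbacks=[auto_scheduler.RecordToFile("tuning.json")],
))

# 3) Compilation with the tuned schedules applied.
with auto_scheduler.ApplyHistoryBest("tuning.json"):
    with tvm.transform.PassContext(opt_level=3,
                                   config={"relay.backend.use_auto_scheduler": True}):
        lib = relay.build(mod, target=target, params=params)
lib.export_library("model_opt.so")                          # deployable artifact
```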

Strut-and-tie model of deep beams with web openings - An optimization approach

  • Guan, Hong
    • Structural Engineering and Mechanics
    • /
    • v.19 no.4
    • /
    • pp.361-379
    • /
    • 2005
  • Reinforced concrete deep beams have useful applications in tall buildings and foundations. Over the past two decades, numerous design models for deep beams have been suggested. However, even the latest design manuals still offer little insight into the design of deep beams, particularly when complexities such as web openings are present. A method commonly suggested for the design of deep beams with openings is the strut-and-tie model, which is primarily used to represent the actual load transfer mechanism in a structural concrete member under ultimate load. In the present study, the development of the strut-and-tie model is transformed into a topology optimization problem for continuum structures. During the optimization process, both the stress and displacement constraints are satisfied and the performance of progressive topologies is evaluated. The influence on the strut-and-tie model of different sizes, locations, and numbers of openings, as well as different loading and support conditions in deep beams, is examined in some detail. In all, eleven deep beams with web openings are optimized and compared in nine groups. The optimal strut-and-tie models achieved are also compared with published experimental crack patterns. The numerical results confirm the experimental observations and efficiently represent the load transfer mechanism in concrete deep beams with openings under ultimate load.
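
As a very rough, hedged illustration of evolving a strut-and-tie topology by progressive element removal, the sketch below removes the lowest-performing elements of a rectangular deep-beam grid with one assumed web opening. The stress evaluation is only a placeholder; a real implementation would run a finite-element analysis with stress and displacement checks at every iteration, as the paper describes.

```python
# Rough sketch of an element-removal loop for evolving a strut-and-tie
# topology. The "stress" function is a placeholder for a real FE analysis;
# grid size, opening location, and removal ratio are assumptions.
import numpy as np

nx, ny = 60, 30                       # element grid for a rectangular deep beam
alive = np.ones((ny, nx), dtype=bool)
alive[10:20, 25:35] = False           # an assumed web opening

def element_stress(alive):
    """Placeholder for an FE stress analysis; returns a pseudo-stress field."""
    rng = np.random.default_rng(0)
    return rng.random(alive.shape) * alive

removal_ratio = 0.02                  # remove 2% of remaining elements per step
for step in range(20):
    stress = element_stress(alive)
    n_remove = int(removal_ratio * alive.sum())
    idx = np.argsort(stress[alive])[:n_remove]          # lowest-stressed elements
    flat = np.flatnonzero(alive)
    alive.flat[flat[idx]] = False
print(f"Remaining material fraction: {alive.mean():.2f}")
```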

Generalized Steganalysis using Deep Learning (딥러닝을 이용한 범용적 스테그아날리시스)

  • Kim, Hyunjae;Lee, Jaekoo;Kim, Gyuwan;Yoon, Sungroh
    • KIISE Transactions on Computing Practices
    • /
    • v.23 no.4
    • /
    • pp.244-249
    • /
    • 2017
  • Steganalysis is the task of detecting information hidden by steganography inside ordinary data such as images. Some steganalysis techniques use machine learning (ML); existing ML approaches are based on extracting features from stego images and modeling them. Recently, deep learning-based methodologies have shown significant improvements in detection accuracy. However, all the existing methods, including deep learning-based ones, have a critical limitation in that they can only detect stego images created by a specific steganography method. In this paper, we propose a generalized steganalysis method that can model multiple types of stego images using deep learning. Through various experiments, we confirm the effectiveness of our approach and envision directions for future research. In particular, we show that our method can detect each type of steganography with the same level of accuracy as a steganalysis method dedicated to that type, thereby demonstrating the general applicability of our approach to multiple types of stego images.
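
The paper's network is not given in the abstract; the sketch below shows a common steganalysis-style design as an illustration: a fixed high-pass residual filter (the well-known KV kernel) followed by a small convolutional classifier, which would be trained on cover and stego images pooled from several embedding algorithms so that a single model covers multiple steganography types. The architecture details are assumptions.

```python
# Illustrative steganalysis-style CNN: a fixed KV high-pass filter extracts
# noise residuals, then a small classifier decides cover vs. stego.
import torch
import torch.nn as nn

KV = torch.tensor([[-1,  2, -2,  2, -1],
                   [ 2, -6,  8, -6,  2],
                   [-2,  8, -12, 8, -2],
                   [ 2, -6,  8, -6,  2],
                   [-1,  2, -2,  2, -1]], dtype=torch.float32) / 12.0

class StegoNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.hpf = nn.Conv2d(1, 1, 5, padding=2, bias=False)
        self.hpf.weight.data = KV.view(1, 1, 5, 5)
        self.hpf.weight.requires_grad = False              # fixed residual filter
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AvgPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, 2),                # cover vs. stego
        )

    def forward(self, x):
        return self.body(self.hpf(x))

model = StegoNet()
print(model(torch.randn(4, 1, 256, 256)).shape)            # torch.Size([4, 2])
```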

A Research on Low-power Buffer Management Algorithm based on Deep Q-Learning approach for IoT Networks (IoT 네트워크에서의 심층 강화학습 기반 저전력 버퍼 관리 기법에 관한 연구)

  • Song, Taewon
    • Journal of Internet of Things and Convergence
    • /
    • v.8 no.4
    • /
    • pp.1-7
    • /
    • 2022
  • As the number of IoT devices increases, power management of the cluster head, which acts as a gateway between the cluster and sink nodes in an IoT network, becomes crucial. Particularly when the cluster head is a mobile wireless terminal, the power consumption of the IoT network must be minimized over its lifetime. In addition, transmission delay is one of the primary metrics for rapid information collection in an IoT network. In this paper, we propose a low-power buffer management algorithm that takes the information transmission delay into account. By forwarding or skipping received packets using deep Q-learning, a deep reinforcement learning method, the proposed algorithm reduces power consumption while keeping transmission delay low. The proposed approach is demonstrated to reduce power consumption and improve delay relative to an existing buffer management technique used as a comparison under the slotted ALOHA protocol.
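
As an illustration of the deep Q-learning idea described above, the minimal sketch below lets a small Q-network observe a simplified buffer state (occupancy and head-of-line delay) and choose between forwarding and skipping, with a reward that trades transmit-energy cost against queueing delay. The state, reward weights, toy traffic model, and the omission of a replay buffer and target network are all simplifying assumptions.

```python
# Minimal deep-Q sketch for forward/skip buffer decisions. The toy environment,
# reward weights, and network size are assumptions; no replay buffer or target
# network is used, to keep the sketch short.
import random
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))  # actions: 0=skip, 1=forward
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, eps = 0.95, 0.1
energy_cost, delay_cost = 1.0, 0.2                     # assumed reward weights

buffer_len, hol_delay = 0, 0
for step in range(500):
    state = torch.tensor([[buffer_len / 10.0, hol_delay / 10.0]])
    q = q_net(state)
    action = random.randrange(2) if random.random() < eps else int(q.argmax())

    # Toy environment transition: forwarding spends energy but clears a packet.
    if action == 1 and buffer_len > 0:
        buffer_len -= 1
        hol_delay = 0
    hol_delay = min(hol_delay + 1, 10) if buffer_len > 0 else 0
    buffer_len = min(buffer_len + int(random.random() < 0.5), 10)   # random arrival
    reward = -(energy_cost * action + delay_cost * buffer_len)

    next_state = torch.tensor([[buffer_len / 10.0, hol_delay / 10.0]])
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max()
    loss = (q[0, action] - target) ** 2                 # one-step temporal-difference update
    opt.zero_grad()
    loss.backward()
    opt.step()
```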