• Title/Summary/Keyword: Deep neural networks


Detection of Proximal Caries Lesions with Deep Learning Algorithm (심층학습 알고리즘을 활용한 인접면 우식 탐지)

  • Kim, Hyuntae;Song, Ji-Soo;Shin, Teo Jeon;Hyun, Hong-Keun;Kim, Jung-Wook;Jang, Ki-Taeg;Kim, Young-Jae
    • Journal of the Korean Academy of Pediatric Dentistry / v.49 no.2 / pp.131-139 / 2022
  • This study aimed to evaluate the effectiveness of deep convolutional neural networks (CNNs) for the diagnosis of interproximal caries in pediatric intraoral radiographs. A total of 500 intraoral radiographic images of first and second primary molars were used for the study. A CNN model (ResNet-50) was applied for the detection of proximal caries. The diagnostic accuracy, sensitivity, specificity, receiver operating characteristic (ROC) curve, and area under the ROC curve (AUC) were calculated on the test dataset. The diagnostic accuracy was 0.84, sensitivity was 0.74, and specificity was 0.94. The trained CNN algorithm achieved an AUC of 0.86. The diagnostic CNN model for pediatric intraoral radiographs showed good performance with high accuracy. Deep learning can assist dentists in the diagnosis of proximal caries lesions in pediatric intraoral radiographs.
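
  • A minimal sketch of the evaluation described above, assuming a ResNet-50 fine-tuned for binary caries classification and a held-out test DataLoader; the data loading and threshold are illustrative assumptions, not the authors' code:

      # Fine-tuned ResNet-50 scored with accuracy, sensitivity, specificity, and AUC.
      import torch
      import torch.nn as nn
      from torchvision import models
      from sklearn.metrics import roc_auc_score, confusion_matrix

      model = models.resnet50(weights="IMAGENET1K_V1")   # assumed pre-trained backbone
      model.fc = nn.Linear(model.fc.in_features, 2)      # caries vs. no caries
      model.eval()

      def evaluate(model, test_loader):
          probs, labels = [], []
          with torch.no_grad():
              for x, y in test_loader:                   # hypothetical test DataLoader
                  p = torch.softmax(model(x), dim=1)[:, 1]
                  probs.extend(p.tolist())
                  labels.extend(y.tolist())
          preds = [int(p >= 0.5) for p in probs]
          tn, fp, fn, tp = confusion_matrix(labels, preds).ravel()
          return {
              "accuracy": (tp + tn) / (tp + tn + fp + fn),
              "sensitivity": tp / (tp + fn),             # true-positive rate
              "specificity": tn / (tn + fp),             # true-negative rate
              "auc": roc_auc_score(labels, probs),
          }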

Deep learning algorithms for identifying 79 dental implant types (79종의 임플란트 식별을 위한 딥러닝 알고리즘)

  • Kong, Hyun-Jun;Yoo, Jin-Yong;Eom, Sang-Ho;Lee, Jun-Hyeok
    • Journal of Dental Rehabilitation and Applied Science / v.38 no.4 / pp.196-203 / 2022
  • Purpose: This study aimed to evaluate the accuracy and clinical usability of a deep learning identification model for 79 dental implant types. Materials and Methods: A total of 45,396 implant fixture images were collected from panoramic radiographs of patients who received implant treatment from 2001 to 2020 at 30 dental clinics. The collected implant images covered 79 types from 18 manufacturers. EfficientNet and Meta Pseudo Labels algorithms were used. For EfficientNet, EfficientNet-B0 and EfficientNet-B4 were used as submodels. For Meta Pseudo Labels, two models were applied according to the widen factor. Top-1 accuracy was measured for EfficientNet, and top-1 and top-5 accuracy were measured for Meta Pseudo Labels. Results: EfficientNet-B0 and EfficientNet-B4 showed a top-1 accuracy of 89.4%. Meta Pseudo Labels 1 showed a top-1 accuracy of 87.96%, and Meta Pseudo Labels 2, with an increased widen factor, showed 88.35%. In top-5 accuracy, Meta Pseudo Labels 1 scored 97.90%, which was 0.11% higher than the 97.79% of Meta Pseudo Labels 2. Conclusion: All four deep learning algorithms used for implant identification in this study showed close to 90% accuracy. To increase the clinical applicability of deep learning for implant identification, it will be necessary to collect a larger amount of data and to develop an algorithm fine-tuned for implant identification.
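
  • For reference, the top-1 and top-5 accuracy scores reported above are conventionally computed as in the generic sketch below; this is not the authors' evaluation code, and the dummy tensors are placeholders:

      # Generic top-k accuracy over model outputs (logits).
      import torch

      def top_k_accuracy(logits: torch.Tensor, targets: torch.Tensor, k: int = 1) -> float:
          """logits: (N, num_classes); targets: (N,) integer class labels."""
          topk = logits.topk(k, dim=1).indices           # (N, k) highest-scoring classes
          hits = (topk == targets.unsqueeze(1)).any(dim=1)
          return hits.float().mean().item()

      # Example with 79 implant classes (random placeholder data):
      logits = torch.randn(8, 79)
      targets = torch.randint(0, 79, (8,))
      print(top_k_accuracy(logits, targets, k=1), top_k_accuracy(logits, targets, k=5))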

User Identification Method using Palm Creases and Veins based on Deep Learning (손금과 손바닥 정맥을 함께 이용한 심층 신경망 기반 사용자 인식)

  • Kim, Seulbeen;Kim, Wonjun
    • Journal of Broadcast Engineering / v.23 no.3 / pp.395-402 / 2018
  • Human palms contain discriminative features for proving the identity of each person. In this paper, we present a novel method for user verification based on palmprints and palm veins. Specifically, the region of interest (ROI) is first determined so that it contains the maximum amount of information about the underlying structures of a given palm image. The extracted ROI is subsequently enhanced using directional patterns and statistical characteristics of the intensities. For the multispectral palm images, each convolutional neural network (CNN) is trained independently. In the spirit of an ensemble, we finally combine the network outputs to compute the probability of a given ROI image for determining the identity. Based on various experiments, we confirm that the proposed ensemble method is effective for user verification with palmprints and palm veins.
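
  • The ensemble step, combining the per-spectrum CNN outputs into a single identity probability, might look like the sketch below; averaging the softmax scores is an assumed fusion rule, not necessarily the paper's exact one:

      # Score-level fusion of independently trained CNNs (e.g., one for the
      # palmprint image and one for the palm-vein image).
      import torch

      def ensemble_identity(models, roi_images):
          """models: list of trained CNNs; roi_images: matching list of ROI tensors."""
          probs = []
          with torch.no_grad():
              for net, x in zip(models, roi_images):
                  probs.append(torch.softmax(net(x), dim=1))
          fused = torch.stack(probs).mean(dim=0)         # simple average fusion (assumption)
          return fused, fused.argmax(dim=1)              # per-class probability, predicted user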

A Verification about the Formation Process of Filter Bubble with Personalization Algorithm (개인화 알고리즘으로 필터 버블이 형성되는 과정에 대한 검증)

  • Jun, Junyong;Hwang, Soyoun;Yoon, Youngmi
    • Journal of Korea Multimedia Society / v.21 no.3 / pp.369-381 / 2018
  • Personalization algorithms are gaining huge attention nowadays. Based on users' past behavior on the internet, they select information that is likely to be helpful and interesting from a deluge of information. However, there is also a serious side effect: users receive only restricted information on restricted topics chosen by the algorithm. In essence, a personalization algorithm gives users a narrower perspective and an even stronger bias, because they have fewer chances to encounter opposing views. Eli Pariser called this problem the 'filter bubble' in his book. To solve the problem, it is important to understand exactly what a filter bubble is. Therefore, this paper shows how much Google's personalized search algorithm influences search results through an experiment in which deep neural networks act like users. At the beginning of the experiment, two Google accounts are newly created so that they are not yet influenced by Google's personalized search algorithm. The two fresh accounts are then politically biased by two methods. A numerical score, based on the character of the retrieved links, is calculated periodically to show how biased each account has become. In conclusion, this paper demonstrates through the experiment how a filter bubble is formed by a personalization algorithm.
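
  • The abstract does not specify the scoring scheme; purely as a hypothetical illustration, a periodic bias score over the retrieved links could be computed along these lines (the labels and the formula are assumptions, not the paper's):

      # Hypothetical bias score in [-1, 1]: +1 if all labeled links lean one way,
      # -1 if they all lean the other, 0 if balanced or unlabeled.
      def bias_score(links, label):
          """links: list of URLs; label(url) -> 'left', 'right', or 'neutral'."""
          left = sum(1 for url in links if label(url) == "left")
          right = sum(1 for url in links if label(url) == "right")
          total = left + right
          return 0.0 if total == 0 else (right - left) / total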

Recent Trends of Object and Scene Recognition Technologies for Mobile/Embedded Devices (모바일/임베디드 객체 및 장면 인식 기술 동향)

  • Lee, S.W.;Lee, G.D.;Ko, J.G.;Lee, S.J.;Yoo, W.Y.
    • Electronics and Telecommunications Trends / v.34 no.6 / pp.133-144 / 2019
  • Although deep learning-based visual image recognition technology has evolved rapidly, most of the commonly used methods focus solely on recognition accuracy. However, the demand for low-latency, low-power image recognition with acceptable accuracy is rising for practical applications on edge devices. For example, most Internet of Things (IoT) devices have low computing power, requiring more pragmatic use of these technologies; in addition, drones and smartphones have limited battery capacity, again requiring applications that take this into consideration. Furthermore, some people prefer that central servers not process their private images, as is required by high-performance server-based recognition technologies. To address these demands, object and scene recognition technologies for mobile/embedded devices, which enable optimized neural networks to operate in mobile and embedded environments, are gaining attention. In this report, we briefly summarize the recent trends and issues of object and scene recognition technologies for mobile and embedded devices.
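
  • As one concrete example of the model optimization such mobile/embedded recognition relies on, post-training dynamic quantization shrinks a network's linear-layer weights to int8; this generic PyTorch illustration is not taken from the report:

      # Post-training dynamic quantization of a small image-recognition backbone.
      import torch
      from torchvision import models

      model = models.mobilenet_v2(weights="IMAGENET1K_V1").eval()
      quantized = torch.ao.quantization.quantize_dynamic(
          model, {torch.nn.Linear}, dtype=torch.qint8    # quantize Linear layers to int8
      )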

Waste Classification by Fine-Tuning Pre-trained CNN and GAN

  • Alsabei, Amani;Alsayed, Ashwaq;Alzahrani, Manar;Al-Shareef, Sarah
    • International Journal of Computer Science & Network Security / v.21 no.8 / pp.65-70 / 2021
  • Waste accumulation is becoming a significant challenge in most urban areas and, if it continues unchecked, is poised to have severe repercussions on our environment and health. The massive industrialisation of our cities has been accompanied by commensurate waste creation that has become a bottleneck even for waste management systems. While recycling is a viable solution for waste management, accurately classifying waste material for recycling can be daunting. In this study, transfer learning models were proposed to automatically classify waste into six material categories (cardboard, glass, metal, paper, plastic, and trash). The tested pre-trained models were ResNet50, VGG16, InceptionV3, and Xception. Data augmentation was performed using a Generative Adversarial Network (GAN) with various image generation percentages. It was found that models based on Xception and VGG16 were more robust. In contrast, models based on ResNet50 and InceptionV3 were sensitive to the added machine-generated images, as their accuracy degraded significantly compared to training with no artificial data.
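
  • A transfer-learning setup of the kind described, with a frozen pre-trained backbone and a new six-class head, might be sketched as follows; the input size, optimizer, and other hyperparameters are illustrative assumptions:

      # Frozen Xception backbone with a six-class softmax head for waste images.
      import tensorflow as tf
      from tensorflow.keras import layers, models

      base = tf.keras.applications.Xception(
          weights="imagenet", include_top=False, input_shape=(299, 299, 3))
      base.trainable = False                             # keep pre-trained weights fixed

      model = models.Sequential([
          base,
          layers.GlobalAveragePooling2D(),
          layers.Dense(6, activation="softmax"),         # cardboard, glass, metal, paper, plastic, trash
      ])
      model.compile(optimizer="adam",
                    loss="sparse_categorical_crossentropy",
                    metrics=["accuracy"])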

Refined identification of hybrid traffic in DNS tunnels based on regression analysis

  • Bai, Huiwen;Liu, Guangjie;Zhai, Jiangtao;Liu, Weiwei;Ji, Xiaopeng;Yang, Luhui;Dai, Yuewei
    • ETRI Journal / v.43 no.1 / pp.40-52 / 2021
  • DNS (Domain Name System) tunnels can largely obscure the true network activities of users, which makes it challenging for gateway or censorship equipment to identify malicious or unpermitted network behaviors. An efficient way to address this problem is to conduct a temporal-spatial analysis of the tunnel traffic. Nevertheless, current studies on this topic limit the DNS tunnel to a single protocol, whereas more than one protocol may be used simultaneously. In this paper, we concentrate on the refined identification of two protocols mixed in a DNS tunnel. A feature set is first derived from DNS query and response flows and is incorporated with deep neural networks to construct a regression model. We benchmark the proposed method on captured DNS tunnel traffic; the experimental results show that the proposed scheme can achieve an identification accuracy of more than 90%. To the best of our knowledge, the proposed scheme is the first to estimate the ratios of two mixed protocols in DNS tunnels.
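
  • The regression described above, mapping flow-derived features to the mixing ratio of the two protocols, could in principle be set up as in the small sketch below; the feature dimensionality and layer widths are assumptions:

      # Small regression network that outputs an estimated mixing ratio in [0, 1].
      import torch.nn as nn

      ratio_regressor = nn.Sequential(
          nn.Linear(32, 64), nn.ReLU(),                  # 32 assumed flow-derived features
          nn.Linear(64, 32), nn.ReLU(),
          nn.Linear(32, 1), nn.Sigmoid(),                # ratio of one protocol in the mix
      )
      loss_fn = nn.MSELoss()                             # regress against the true ratio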

Trends and Future of Digital Personal Assistant (디지털 개인비서 동향과 미래)

  • Kwon, O.W.;Lee, K.Y.;Lee, Y.H.;Roh, Y.H.;Cho, M.S.;Huang, J.X.;Lim, S.J.;Choi, S.K.;Kim, Y.K.
    • Electronics and Telecommunications Trends / v.36 no.1 / pp.1-11 / 2021
  • In this study, we introduce trends in and the future of digital personal assistants. Recently, digital personal assistants have begun to handle many tasks like humans by communicating with users in human language on smart devices such as smartphones, smart speakers, and smart cars. Their capabilities range from simple voice commands and chitchat to complex tasks such as device control, reservation, ordering, and scheduling. The digital personal assistants of the future will certainly speak like a person, have a person-like personality, see, hear, and analyze situations like a person, and become more human. The dialogue processing technology that makes them more human-like has, in recent years, developed into end-to-end learning models based on deep neural networks. In addition, language models pre-trained on large corpora make dialogue processing more natural and easier to understand. Advances in artificial intelligence such as dialogue processing technology will enable digital personal assistants to serve users in various areas with greater familiarity and better performance.

DP-LinkNet: A convolutional network for historical document image binarization

  • Xiong, Wei;Jia, Xiuhong;Yang, Dichun;Ai, Meihui;Li, Lirong;Wang, Song
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.5 / pp.1778-1797 / 2021
  • Document image binarization is an important pre-processing step in document analysis and archiving. The state-of-the-art models for document image binarization are variants of encoder-decoder architectures, such as FCN (fully convolutional network) and U-Net. Despite their success, they still suffer from three limitations: (1) reduced feature map resolution due to consecutive strided pooling or convolutions, (2) multiple scales of target objects, and (3) reduced localization accuracy due to the built-in invariance of deep convolutional neural networks (DCNNs). To overcome these three challenges, we propose an improved semantic segmentation model, referred to as DP-LinkNet, which adopts the D-LinkNet architecture as its backbone, with the proposed hybrid dilated convolution (HDC) and spatial pyramid pooling (SPP) modules between the encoder and the decoder. Extensive experiments are conducted on recent document image binarization competition (DIBCO) and handwritten document image binarization competition (H-DIBCO) benchmark datasets. Results show that our proposed DP-LinkNet outperforms other state-of-the-art techniques by a large margin. Our implementation and the pre-trained models are available at https://github.com/beargolden/DP-LinkNet.
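
  • A hybrid dilated convolution (HDC) block of the general kind inserted between encoder and decoder can be sketched as stacked 3x3 convolutions with increasing dilation rates; the rates and channel counts below are assumptions, and the authors' actual modules are in the linked repository:

      # Stacked dilated convolutions that enlarge the receptive field while
      # preserving the feature-map resolution.
      import torch.nn as nn

      class HDCBlock(nn.Module):
          def __init__(self, channels, dilations=(1, 2, 5)):
              super().__init__()
              self.layers = nn.Sequential(*[
                  nn.Sequential(
                      nn.Conv2d(channels, channels, kernel_size=3,
                                padding=d, dilation=d, bias=False),
                      nn.BatchNorm2d(channels),
                      nn.ReLU(inplace=True),
                  )
                  for d in dilations
              ])

          def forward(self, x):
              return self.layers(x)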

Application of YOLOv5 Neural Network Based on Improved Attention Mechanism in Recognition of Thangka Image Defects

  • Fan, Yao;Li, Yubo;Shi, Yingnan;Wang, Shuaishuai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.1 / pp.245-265 / 2022
  • In response to problems such as insufficient information extraction, low detection accuracy, and frequent misdetection in Thangka image defect detection, this paper proposes a YOLOv5 prediction algorithm fused with an attention mechanism. Firstly, the Backbone network is used for feature extraction, and the attention mechanism is fused to represent different features, so that the network can fully extract the texture and semantic features of the defect area. The extracted features are then weighted and fused to reduce the loss of information. Next, the weighted, fused features are passed to the Neck network, where the semantic and texture features of different layers are fused by the FPN and the defect target is located more accurately by the PAN. In the detection network, the CIOU loss function replaces the GIOU loss function to locate the image defect area quickly and accurately, generate the bounding box, and predict the defect category. The results show that, compared with the original network, YOLOv5-SE and YOLOv5-CBAM achieve improvements of 8.95% and 12.87% in detection accuracy, respectively. The improved networks can identify the location and category of defects more accurately and greatly improve the accuracy of defect detection in Thangka images.
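
  • For context, the squeeze-and-excitation (SE) attention used in the YOLOv5-SE variant reweights feature channels roughly as in the sketch below; this is the standard SE formulation with the usual reduction ratio, not the authors' exact code:

      # Standard SE channel-attention block.
      import torch.nn as nn

      class SEBlock(nn.Module):
          def __init__(self, channels, reduction=16):
              super().__init__()
              self.pool = nn.AdaptiveAvgPool2d(1)        # squeeze: global average pooling
              self.fc = nn.Sequential(                   # excitation: bottleneck MLP
                  nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
                  nn.Linear(channels // reduction, channels), nn.Sigmoid(),
              )

          def forward(self, x):
              b, c, _, _ = x.shape
              w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
              return x * w                               # channel reweighting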