• Title/Abstract/Keyword: deep learning program

145 search results (processing time: 0.022 s)

Deep learning-based automatic segmentation of the mandibular canal on panoramic radiographs: A multi-device study

  • Moe Thu Zar Aung;Sang-Heon Lim;Jiyong Han;Su Yang;Ju-Hee Kang;Jo-Eun Kim;Kyung-Hoe Huh;Won-Jin Yi;Min-Suk Heo;Sam-Sun Lee
    • Imaging Science in Dentistry
    • /
    • Vol. 54, No. 1
    • /
    • pp.81-91
    • /
    • 2024
  • Purpose: The objective of this study was to propose a deep-learning model for the detection of the mandibular canal on dental panoramic radiographs. Materials and Methods: A total of 2,100 panoramic radiographs (PANs) were collected from 3 different machines: RAYSCAN Alpha (n=700, PAN A), OP-100 (n=700, PAN B), and CS8100 (n=700, PAN C). Initially, an oral and maxillofacial radiologist coarsely annotated the mandibular canals. For deep learning analysis, convolutional neural networks (CNNs) utilizing U-Net architecture were employed for automated canal segmentation. Seven independent networks were trained using training sets representing all possible combinations of the 3 groups. These networks were then assessed using a hold-out test dataset. Results: Among the 7 networks evaluated, the network trained with all 3 available groups achieved an average precision of 90.6%, a recall of 87.4%, and a Dice similarity coefficient (DSC) of 88.9%. The 3 networks trained using each of the 3 possible 2-group combinations also demonstrated reliable performance for mandibular canal segmentation, as follows: 1) PAN A and B exhibited a mean DSC of 87.9%, 2) PAN A and C displayed a mean DSC of 87.8%, and 3) PAN B and C demonstrated a mean DSC of 88.4%. Conclusion: This multi-device study indicated that the examined CNN-based deep learning approach can achieve excellent canal segmentation performance, with a DSC exceeding 88%. Furthermore, the study highlighted the importance of considering the characteristics of panoramic radiographs when developing a robust deep-learning network, rather than depending solely on the size of the dataset.
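The Dice similarity coefficient (DSC) reported above has a direct computational form. A minimal sketch for flat binary masks (the function and toy masks below are illustrative, not taken from the study):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient, 2|A∩B| / (|A|+|B|), for flat binary masks."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

pred  = [1, 1, 0, 0, 1, 0]   # predicted canal mask (flattened, toy data)
truth = [1, 0, 0, 0, 1, 1]   # annotated ground truth (toy data)
print(dice_coefficient(pred, truth))  # 2*2 / (3+3) ≈ 0.667
```

Two empty masks are conventionally scored 1.0 here; implementations differ on that edge case.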

A Study on Security Event Detection in ESM Using Big Data and Deep Learning

  • Lee, Hye-Min;Lee, Sang-Joon
    • International Journal of Internet, Broadcasting and Communication
    • /
    • Vol. 13, No. 3
    • /
    • pp.42-49
    • /
    • 2021
  • As cyber attacks become more intelligent, detecting advanced attacks in fields such as industry, defense, and medical care has become difficult. Individual security systems such as intrusion prevention systems (IPS) are deployed, but the need for centralized, integrated management of each security system is increasing. In this paper, we collect big data for intrusion detection and build an intrusion detection platform using deep learning with convolutional neural networks (CNNs). We design an intelligent big data platform that collects data by observing and analyzing user visit logs and linking them with big data, and build a CNN-based intrusion detection platform on top of it. In this study, we evaluated the performance of the intrusion detection system (IDS) using the KDD99 dataset, which derives from the DARPA intrusion detection evaluation data of 1998, and tested it against the four actual attack categories in KDD99: DoS, U2R, R2L, and Probing.
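KDD99 groups its per-record attack labels into the four high-level categories named above. A small illustrative mapping, covering only the common labels, as an assumption about a typical preprocessing step rather than the authors' code:

```python
# Map common KDD99 attack labels to the four high-level categories
# (DoS, Probe, R2L, U2R); anything unrecognized is treated as normal traffic.
KDD99_CATEGORIES = {
    "smurf": "DoS", "neptune": "DoS", "back": "DoS",
    "teardrop": "DoS", "pod": "DoS", "land": "DoS",
    "satan": "Probe", "ipsweep": "Probe", "portsweep": "Probe", "nmap": "Probe",
    "guess_passwd": "R2L", "warezclient": "R2L", "ftp_write": "R2L",
    "imap": "R2L", "phf": "R2L", "multihop": "R2L",
    "spy": "R2L", "warezmaster": "R2L",
    "buffer_overflow": "U2R", "rootkit": "U2R",
    "loadmodule": "U2R", "perl": "U2R",
}

def categorize(label: str) -> str:
    """Collapse a raw KDD99 label into its evaluation category."""
    return KDD99_CATEGORIES.get(label, "normal")

print(categorize("smurf"))   # DoS
print(categorize("normal"))  # normal
```

Collapsing labels this way is what makes per-category results (DoS vs. U2R vs. R2L vs. Probe) comparable across IDS studies.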

Real-time photoplethysmographic heart rate measurement using deep neural network filters

  • Kim, Ji Woon;Park, Sung Min;Choi, Seong Wook
    • ETRI Journal
    • /
    • Vol. 43, No. 5
    • /
    • pp.881-890
    • /
    • 2021
  • Photoplethysmography (PPG) is a noninvasive technique that can be used to conveniently measure heart rate (HR) and thus obtain relevant health-related information. However, developing an automated PPG system is difficult, because its waveforms are susceptible to motion artifacts and between-patient variation, making its interpretation difficult. We use deep neural network (DNN) filters to mimic the cognitive ability of a human expert who can distinguish the features of PPG altered by noise from various sources. Systolic (S), onset (O), and first derivative peaks (W) are recognized by three different DNN filters. In addition, the boundaries of uninformative regions caused by artifacts are identified by two different filters. The algorithm reliably derives the HR and presents recognition scores for the S, O, and W peaks and artifacts with only a 0.7-s delay. In the evaluation using data from 11 patients obtained from PhysioNet, the algorithm yields 8,643 (86.12%) reliable HR measurements from a total of 10,036 heartbeats, including some with uninformative data resulting from arrhythmias and artifacts.
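Once the systolic (S) peaks are recognized, deriving the HR reduces to averaging the inter-peak intervals. A minimal sketch of that final step (the function name and the sample timestamps are illustrative, not from the paper):

```python
def heart_rate_bpm(peak_times):
    """Mean heart rate (beats/min) from systolic-peak timestamps in seconds."""
    if len(peak_times) < 2:
        raise ValueError("need at least two peaks")
    # Successive differences give the beat-to-beat intervals.
    intervals = [b - a for a, b in zip(peak_times, peak_times[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

# Peaks spaced 0.8 s apart correspond to 75 beats/min.
print(round(heart_rate_bpm([0.0, 0.8, 1.6, 2.4]), 1))  # 75.0
```

A real pipeline would first drop peaks falling inside the artifact regions flagged by the boundary filters, then average only the informative intervals.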

An intelligent method for pregnancy diagnosis in breeding sows according to ultrasonography algorithms

  • Jung-woo Chae;Yo-han Choi;Jeong-nam Lee;Hyun-ju Park;Yong-dae Jeong;Eun-seok Cho;Young-sin Kim;Tae-kyeong Kim;Soo-jin Sa;Hyun-chong Cho
    • Journal of Animal Science and Technology
    • /
    • Vol. 65, No. 2
    • /
    • pp.365-376
    • /
    • 2023
  • Pig breeding management directly contributes to the profitability of pig farms, and pregnancy diagnosis is an important factor in breeding management. Therefore, the need to diagnose pregnancy in sows is emphasized, and various studies have been conducted in this area. We propose a computer-aided diagnosis system to help livestock farmers diagnose sow pregnancy through ultrasound. Methods for diagnosing pregnancy in sows through ultrasound include the Doppler method, which measures heart rate and pulse status, and the echo method, which diagnoses by an amplitude-depth technique. We propose a method that applies deep learning algorithms to ultrasonography, which is part of the echo method. As deep learning-based classification algorithms, Inception-v4, Xception, and EfficientNetV2 were used and compared to find the optimal algorithm for pregnancy diagnosis in sows. Gaussian and speckle noises were added to the ultrasound images to reflect the characteristics of ultrasonography, which is easily affected by noise from the surrounding environment. Both the original and noise-added ultrasound images of sows were tested together to determine the suitability of the proposed method on farms. Pregnancy diagnosis on the original ultrasound images achieved an accuracy of up to 0.99, and on the noise-added images it achieved up to 0.98. Even when the noise was strong, the accuracy remained at 0.96, demonstrating robustness against noise.
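The noise augmentation described above can be sketched in a few lines. This is an illustrative approximation of the idea, not the authors' pipeline: Gaussian noise is additive, speckle noise is multiplicative, and values are clamped to the valid intensity range.

```python
import random

def add_gaussian_noise(pixels, sigma=0.05, seed=0):
    """Additive Gaussian noise, clamped to the valid [0, 1] intensity range."""
    rng = random.Random(seed)
    return [min(1.0, max(0.0, p + rng.gauss(0.0, sigma))) for p in pixels]

def add_speckle_noise(pixels, sigma=0.05, seed=0):
    """Multiplicative (speckle) noise: each pixel becomes p * (1 + n)."""
    rng = random.Random(seed)
    return [min(1.0, max(0.0, p * (1.0 + rng.gauss(0.0, sigma)))) for p in pixels]

clean = [0.5] * 8                      # a flat toy "image", one row of pixels
noisy = add_gaussian_noise(clean)
assert len(noisy) == len(clean) and all(0.0 <= p <= 1.0 for p in noisy)
```

Note the asymmetry that makes the two noise types different augmentations: speckle noise leaves black (zero) pixels unchanged, while Gaussian noise does not.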

A Study on the Development of a Program to Body Circulation Measurement Using the Machine Learning and Depth Camera

  • Choi, Dong-Gyu;Jang, Jong-Wook
    • International Journal of Internet, Broadcasting and Communication
    • /
    • Vol. 12, No. 1
    • /
    • pp.122-129
    • /
    • 2020
  • The circumference of the body is not only an indicator used when buying clothes in daily life but also an important factor that can improve the effectiveness of treatment once the shape of the body has been assessed in a hospital. Several measurement tools and methods exist for this, but manual measurement for accurate identification takes a long time compared with modern automated approaches. In addition, current equipment for automatic body scanning is generally not easy to use because of its large volume or high price. In this paper, OpenPose, a deep learning-based skeleton tracking model, is used to solve the problems of previous methods and for ease of application. We locate joints and approximate the body cross-section by applying depth camera data together with reference data for the measurement sites provided by hospitals, and develop a program that measures body circumference more simply and conveniently using an elliptical circumference formula.
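The elliptical circumference step can be made concrete with Ramanujan's first approximation, given the two semi-axes estimated from the depth data. This is a sketch under the assumption that Ramanujan's formula is an acceptable stand-in for whichever elliptical formula the program actually uses:

```python
import math

def ellipse_circumference(a, b):
    """Ramanujan's first approximation to the perimeter of an ellipse
    with semi-axes a and b (exact when a == b, i.e., a circle)."""
    return math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))

# Sanity check: a circle of radius r has circumference 2*pi*r.
print(round(ellipse_circumference(1.0, 1.0) / (2 * math.pi), 6))  # 1.0

# Toy waist estimate: semi-axes 0.15 m (front-back) and 0.10 m (side-side).
print(round(ellipse_circumference(0.15, 0.10), 3))
```

The two semi-axes would come from the depth camera: half the front-to-back depth and half the side-to-side width at the measurement site identified via the skeleton joints.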

Transfer Learning based DNN-SVM Hybrid Model for Breast Cancer Classification

  • Gui Rae Jo;Beomsu Baek;Young Soon Kim;Dong Hoon Lim
    • Journal of The Korea Society of Computer and Information
    • /
    • Vol. 28, No. 11
    • /
    • pp.1-11
    • /
    • 2023
  • Breast cancer is the disease most feared by the majority of women worldwide. Today, growing data volumes and improvements in computing technology have increased the effectiveness of machine learning, which now plays an important role in cancer detection and diagnosis. Deep learning is a branch of machine learning based on artificial neural networks (ANNs); its performance has improved rapidly in recent years, expanding its range of applications. In this study, we propose a DNN-SVM hybrid model for breast cancer classification that combines a transfer learning-based deep neural network (DNN) with a support vector machine (SVM). The proposed transfer learning-based model is effective even with little training data, trains quickly, and improves performance by combining the advantages of the two single models, the DNN and the SVM. To evaluate the proposed DNN-SVM hybrid model, we conducted experiments on the WOBC and WDBC breast cancer datasets from the UCI machine learning repository. The results showed that the proposed model outperformed the single models (logistic regression, DNN, and SVM) and the random forest ensemble model on several performance measures.
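The hybrid structure described above (a frozen, pretrained feature extractor feeding a linear SVM decision function) can be sketched at toy scale. Everything below, including the weights, dimensions, and label convention, is illustrative and not the proposed model:

```python
def relu(v):
    return [max(0.0, x) for x in v]

def extract_features(x, frozen_weights):
    """Frozen, pretrained DNN layer acting as a feature extractor: relu(W @ x)."""
    return relu([sum(w * xi for w, xi in zip(row, x)) for row in frozen_weights])

def svm_predict(features, w, b):
    """Linear SVM decision on the extracted features: sign of w . f + b."""
    score = sum(wi * fi for wi, fi in zip(w, features)) + b
    return 1 if score >= 0 else 0  # toy labels: 1 = malignant, 0 = benign

W = [[1.0, 0.0], [0.0, 1.0]]       # toy "pretrained" weights (kept frozen)
x = [0.5, -0.2]                    # toy input feature vector
print(svm_predict(extract_features(x, W), w=[1.0, 1.0], b=-0.2))  # 1
```

The point of the hybrid is the division of labor: transfer learning supplies features learned elsewhere, so only the thin SVM head must be fitted to the (small) WOBC/WDBC training data.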

A Study on the Classification of Variables Affecting Smartphone Addiction in Decision Tree Environment Using Python Program

  • Kim, Seung-Jae
    • International journal of advanced smart convergence
    • /
    • Vol. 11, No. 4
    • /
    • pp.68-80
    • /
    • 2022
  • Since the advent of AI, technology development toward complete and sophisticated AI functionality has continued. In efforts toward full automation, machine learning and deep learning techniques are mainly used. These techniques rely on supervised, unsupervised, and reinforcement learning as internal technical elements, and use big data analysis to lay the groundwork for decision-making. Established decisions are then improved through repeated application and renewal of the decision criteria. In other words, big data analysis, which enables data classification and recognition, is important enough to be called a key technical element of AI, and therefore requires sophisticated methods. In this study, among the various tools that can analyze big data, we use a Python program to find out which variables can affect smartphone addiction in a decision tree environment. We check whether data classification by a decision tree in Python shows the same performance as other tools, and whether it can lend reliability to decisions about the addictiveness of smartphone use. The results show that there is no problem performing this kind of big data analysis with any of various statistical tools, such as Python or R.
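The splitting criterion at the heart of a decision tree can be made concrete. A minimal pure-Python sketch of an exhaustive Gini-impurity split search on a single feature (the toy values and labels are illustrative, not the study's smartphone-addiction variables):

```python
def gini(labels):
    """Gini impurity of a label list: 1 - sum of squared class proportions."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(values, labels):
    """Exhaustive search for the threshold minimizing weighted Gini impurity."""
    best = (None, float("inf"))
    for t in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best[1]:
            best = (t, score)
    return best

# Labels separate perfectly at value <= 2, so the weighted impurity is 0.
print(best_split([1, 2, 3, 4], [0, 0, 1, 1]))  # (2, 0.0)
```

A full tree just applies this search recursively per feature; libraries such as scikit-learn do the same thing with optimized sorting rather than the quadratic scan shown here.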

Development of Python-based Annotation Tool Program for Constructing Object Recognition Deep-Learning Model

  • Lim, Song-won;Park, Goo-man
    • Journal of Broadcast Engineering
    • /
    • Vol. 25, No. 3
    • /
    • pp.386-398
    • /
    • 2020
  • In this paper, we developed an annotation tool that integrates, in a single program, the data labeling steps required to build an object recognition deep learning model. The program interface was implemented with Python's standard GUI library, and a crawler function was included for real-time data collection. Using RetinaNet, an existing object recognition deep learning model, we implemented a function that provides annotation information automatically. The tool also saves training data in the different labeling formats required by various object recognition networks, such as Pascal-VOC, YOLO, and RetinaNet. Using the proposed method, we built an image dataset of domestic Korean vehicles, trained existing object recognition networks such as RetinaNet and YOLO on it, and measured their accuracy. In video of vehicles entering a gate, the model distinguished the vehicle model in real time with approximately 94% accuracy.
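Saving the same box in different labeling formats mostly comes down to coordinate conventions: Pascal-VOC stores absolute corner coordinates, while YOLO stores a normalized center and size. A minimal sketch of that conversion (an assumption about what such a tool does internally, not its actual code):

```python
def voc_to_yolo(box, img_w, img_h):
    """Convert a Pascal-VOC box (xmin, ymin, xmax, ymax) in pixels to the
    YOLO convention (cx, cy, w, h), each normalized to [0, 1]."""
    xmin, ymin, xmax, ymax = box
    return (
        (xmin + xmax) / 2 / img_w,  # normalized center x
        (ymin + ymax) / 2 / img_h,  # normalized center y
        (xmax - xmin) / img_w,      # normalized width
        (ymax - ymin) / img_h,      # normalized height
    )

# A 200x100 px box in a 400x400 px image.
print(voc_to_yolo((100, 100, 300, 200), 400, 400))  # (0.5, 0.375, 0.5, 0.25)
```

The class index and the XML/TXT serialization differ per format as well, but the geometric conversion above is the part that silently breaks training when done wrong.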

Deep Learning-Based Computed Tomography Image Standardization to Improve Generalizability of Deep Learning-Based Hepatic Segmentation

  • Seul Bi Lee;Youngtaek Hong;Yeon Jin Cho;Dawun Jeong;Jina Lee;Soon Ho Yoon;Seunghyun Lee;Young Hun Choi;Jung-Eun Cheon
    • Korean Journal of Radiology
    • /
    • Vol. 24, No. 4
    • /
    • pp.294-304
    • /
    • 2023
  • Objective: We aimed to investigate whether image standardization using deep learning-based computed tomography (CT) image conversion would improve the performance of deep learning-based automated hepatic segmentation across various reconstruction methods. Materials and Methods: We collected contrast-enhanced dual-energy CT of the abdomen obtained using various reconstruction methods, including filtered back projection, iterative reconstruction, optimum contrast, and monoenergetic images at 40, 60, and 80 keV. A deep learning-based image conversion algorithm was developed to standardize the CT images using 142 CT examinations (128 for training and 14 for tuning). A separate set of 43 CT examinations from 42 patients (mean age, 10.1 years) was used as the test data. A commercial software program (MEDIP PRO v2.0.0.0, MEDICALIP Co. Ltd.) based on 2D U-NET was used to create liver segmentation masks with liver volume. The original 80 keV images were used as the ground truth. We used the paired t-test to compare the segmentation performance, in terms of the Dice similarity coefficient (DSC) and the difference ratio of the liver volume relative to the ground-truth volume, before and after image standardization. The concordance correlation coefficient (CCC) was used to assess the agreement between the segmented liver volume and the ground-truth volume. Results: The original CT images showed variable and poor segmentation performance. The standardized images achieved significantly higher DSCs for liver segmentation than the original images (DSC: original, 5.40%-91.27% vs. standardized, 93.16%-96.74%; all P < 0.001). The difference ratio of liver volume also decreased significantly after image conversion (original, 9.84%-91.37% vs. standardized, 1.99%-4.41%). In all protocols, CCCs improved after image conversion (original, -0.006 to 0.964 vs. standardized, 0.990 to 0.998). Conclusion: Deep learning-based CT image standardization can improve the performance of automated hepatic segmentation on CT images reconstructed with various methods, and may improve the generalizability of the segmentation network.
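The concordance correlation coefficient used above has a closed form combining correlation with agreement in mean and scale. A minimal sketch of Lin's CCC (the toy volume series is illustrative, not the study's data):

```python
def ccc(x, y):
    """Lin's concordance correlation coefficient:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((xi - mx) ** 2 for xi in x) / n
    vy = sum((yi - my) ** 2 for yi in y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Identical series agree perfectly; a constant offset lowers the CCC
# even though the Pearson correlation would remain 1.
print(ccc([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 1.0
print(ccc([1.0, 2.0, 3.0], [2.0, 3.0, 4.0]) < 1.0)  # True
```

This penalty on systematic offsets is why CCC, rather than plain correlation, is the appropriate agreement measure for segmented versus ground-truth volumes.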

Knowledge-guided artificial intelligence technologies for decoding complex multiomics interactions in cells

  • Lee, Dohoon;Kim, Sun
    • Clinical and Experimental Pediatrics
    • /
    • Vol. 65, No. 5
    • /
    • pp.239-249
    • /
    • 2022
  • Cells survive and proliferate through complex interactions among diverse molecules across multiomics layers. Conventional experimental approaches for identifying these interactions have built a firm foundation for molecular biology, but their scalability is gradually becoming inadequate compared to the rapid accumulation of multiomics data measured by high-throughput technologies. Therefore, the need for data-driven computational modeling of interactions within cells has been highlighted in recent years. The complexity of multiomics interactions is primarily due to their nonlinearity: accurate modeling requires capturing intricate conditional dependencies, synergies, or antagonisms between the genes or proteins considered, which slows experimental validation. Artificial intelligence (AI) technologies, including deep learning models, are well suited to handling complex nonlinear relationships between features in a scalable manner when large amounts of data are available, and thus have great potential for modeling multiomics interactions. Although many AI-driven models exist for computational biology applications, relatively few explicitly incorporate prior knowledge within their model architectures or training procedures. Guiding models with domain knowledge can greatly reduce the amount of data needed for training and constrain their vast expressive power to the biologically relevant space. It can therefore enhance a model's interpretability, reduce spurious interactions, and support its validity and utility. To facilitate further development of knowledge-guided AI technologies for modeling multiomics interactions, here we review representative bioinformatics applications of deep learning models for multiomics interactions developed to date, categorized by guidance mode.