• Title/Abstract/Keywords: Biomedical machine learning

Search results: 83 items

Low-cost Prosthetic Hand Model using Machine Learning and 3D Printing

  • 신동욱;염호준;박상수
    • 문화기술의 융합, Vol. 10, No. 1, pp.19-23, 2024
  • Bilateral hand amputees need prostheses that serve functional as well as cosmetic purposes, and although research on myoelectric prosthetic hands driven by the EMG of residual muscles is active, high cost remains a problem. In this study, a prosthetic hand was built and evaluated using low-cost components and software: a surface EMG sensor, the machine learning software Edge Impulse, an Arduino Nano 33 BLE, and 3D printing. Signals acquired by the surface EMG sensor and passed through digital signal processing in Edge Impulse were used to train a machine learning model to classify finger movements, and the resulting flexion signals for each finger were delivered to the fingers of the prosthetic hand model. Machine learning accuracy was highest, at 82.1%, when the digital signal processing settings were a 60 Hz notch filter, a 10-300 Hz band-pass filter, and a 1,000 Hz sampling frequency. Among the finger flexion movements, the ring finger was the most easily confused, with a 44.7% probability of being misclassified as an index finger movement. Further research is needed for the successful development of low-cost prosthetic hands.
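
The digital signal processing settings reported above translate directly into a standard filtering chain. The sketch below, using SciPy, is a minimal illustration under the assumption that the raw EMG arrives as a 1-D NumPy array sampled at 1,000 Hz; the paper itself performs this processing and the downstream classification inside Edge Impulse, so the function and variable names here are illustrative only.

```python
import numpy as np
from scipy.signal import iirnotch, butter, filtfilt

FS = 1000.0  # sampling frequency reported in the paper [Hz]

def preprocess_emg(emg_window: np.ndarray) -> np.ndarray:
    """Apply the reported 60 Hz notch filter and 10-300 Hz band-pass filter."""
    b_notch, a_notch = iirnotch(w0=60.0, Q=30.0, fs=FS)
    x = filtfilt(b_notch, a_notch, emg_window)
    b_band, a_band = butter(N=4, Wn=[10.0, 300.0], btype="bandpass", fs=FS)
    return filtfilt(b_band, a_band, x)

def emg_features(x: np.ndarray) -> np.ndarray:
    """Simple time-domain features commonly fed to an EMG gesture classifier."""
    return np.array([
        np.mean(np.abs(x)),          # mean absolute value
        np.sqrt(np.mean(x ** 2)),    # root mean square
        np.sum(np.abs(np.diff(x))),  # waveform length
    ])
```

The filtered windows (or features derived from them) would then be fed to a lightweight classifier; in the paper this is the model trained in Edge Impulse and deployed to the Arduino Nano 33 BLE.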

Donguibogam-Based Pattern Diagnosis Using Natural Language Processing and Machine Learning

  • 이승현;장동표;성강경
    • 대한한의학회지, Vol. 41, No. 3, pp.1-8, 2020
  • Objectives: This paper aims to investigate Donguibogam-based pattern diagnosis by applying natural language processing and machine learning. Methods: A database was constructed by gathering symptoms and pattern diagnoses from Donguibogam. The symptom sentences were tokenized into nouns, verbs, and adjectives with a natural language processing tool. To feed the symptom sentences into machine learning, a Word2Vec model was built to convert words into numeric vectors. Using pairs of symptom vectors and pattern diagnoses, a pattern prediction model was trained through logistic regression. Results: The Word2Vec model's maximum performance was obtained by optimizing its primary parameters: the number of iterations, the vector dimension, and the window size. The resulting pattern diagnosis regression model showed 75% accuracy (chance level 16.7%) for the prediction of Six-Qi pattern diagnosis. Conclusions: In this study, we developed a pattern diagnosis prediction model based on the symptoms and pattern diagnoses in Donguibogam. The prediction accuracy could be increased by collecting more data through future expansion to other classics of oriental medicine.
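
A minimal sketch of the modeling steps described in this abstract, using gensim and scikit-learn; the tokenized symptom sentences, the averaging of word vectors into a sentence vector, and the hyperparameter values are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

# Placeholder data: tokenized symptom sentences (nouns/verbs/adjectives) and
# their pattern labels, standing in for the Donguibogam-derived database.
tokenized_symptoms = [["chill", "headache"], ["fever", "thirst"]]
pattern_labels = ["cold", "heat"]

# Train word vectors; vector size, window, and number of iterations are the
# parameters the abstract reports optimizing.
w2v = Word2Vec(sentences=tokenized_symptoms, vector_size=100, window=5,
               min_count=1, epochs=50)

def sentence_vector(tokens):
    """Average the word vectors of one tokenized symptom sentence."""
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

X = np.vstack([sentence_vector(t) for t in tokenized_symptoms])
clf = LogisticRegression(max_iter=1000).fit(X, pattern_labels)
```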

Artificial Intelligence based Tumor detection System using Computational Pathology

  • Naeem, Tayyaba;Qamar, Shamweel;Park, Peom
    • 시스템엔지니어링학술지, Vol. 15, No. 2, pp.72-78, 2019
  • Pathology is the motor that drives healthcare's understanding of disease. The way pathologists diagnose diseases, by manually observing specimens under a microscope, has remained essentially unchanged for the last 150 years, and it is time for that to change. This paper focuses on tumor detection using deep learning techniques. Pathologists take specimen slides from a specific part of the body (e.g., the liver, breast, or prostate) and examine them under the microscope to identify affected cells among the normal cells. This process is time-consuming and not sufficiently accurate, so there is a need for a system that can detect tumors automatically and in less time. The solution to this problem is computational pathology: an approach that examines tissue data obtained through whole-slide imaging with modern image analysis algorithms and extracts clinically relevant information from these data. Artificial intelligence models such as machine learning and deep learning are used down to the molecular level to generate diagnostic inferences and predictions, and this clinically actionable knowledge is presented to pathologists through dynamic and integrated reports, enabling physicians, laboratory personnel, and the rest of the health care system to make the best possible medical decisions. This paper discusses techniques for automated tumor detection within the new discipline of computational pathology, which will be useful for the future practice of pathology and, more broadly, for medical practice in general.

Development and Validation of a Machine Learning-based Differential Diagnosis Model for Patients with Mild Cognitive Impairment using Resting-State Quantitative EEG

  • 문기욱;임승의;김진욱;하상원;이기원
    • 대한의용생체공학회:의공학회지, Vol. 43, No. 4, pp.185-192, 2022
  • Early detection of mild cognitive impairment can help prevent the progression of dementia. The purpose of this study was to design and validate a machine learning model that automatically performs differential diagnosis of patients with mild cognitive impairment and identifies characteristics of cognitive decline relative to a control group with normal cognition, using eyes-closed resting-state quantitative electroencephalography (qEEG). In the first step, a cleaned signal was obtained through a preprocessing stage that takes the quantitative EEG signal as input and removes noise with filtering and independent component analysis (ICA). Spectral and non-linear features were then extracted from the cleaned signal, and the 3,067 extracted features were used as the input of a linear support vector machine (SVM), a representative machine learning algorithm, to classify subjects into mild cognitive impairment patients and cognitively normal adults. In the classification of 58 cognitively normal subjects and 80 patients with mild cognitive impairment, the accuracy of the SVM was 86.2%. Compared with the cognitively normal group, patients with mild cognitive impairment showed decreased alpha band power and increased high-beta band power in the frontal lobe, as well as decreased gamma band power in the occipito-parietal region. These results indicate that quantitative EEG can serve as a meaningful biomarker for discriminating cognitive decline.
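
A minimal sketch of the kind of band-power feature extraction and linear SVM classification described above, using SciPy and scikit-learn; the band edges, channel layout, and the arrays `eeg`, `X`, and `y` are illustrative placeholders, not the paper's 3,067-feature pipeline.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

BANDS = {"alpha": (8, 13), "high_beta": (20, 30), "gamma": (30, 45)}  # assumed edges

def band_power_features(eeg: np.ndarray, fs: float) -> np.ndarray:
    """Relative band power per channel from cleaned (filtered + ICA) resting EEG.

    eeg: array of shape (n_channels, n_samples)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].sum(axis=1) / psd.sum(axis=1))
    return np.concatenate(feats)

# X: one feature row per subject, y: 1 = MCI, 0 = cognitively normal (placeholders).
# clf = SVC(kernel="linear")
# print(cross_val_score(clf, X, y, cv=5).mean())
```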

Machine Learning-Based Prediction of COVID-19 Severity and Progression to Critical Illness Using CT Imaging and Clinical Data

  • Subhanik Purkayastha;Yanhe Xiao;Zhicheng Jiao;Rujapa Thepumnoeysuk;Kasey Halsey;Jing Wu;Thi My Linh Tran;Ben Hsieh;Ji Whae Choi;Dongcui Wang;Martin Vallieres;Robin Wang;Scott Collins;Xue Feng;Michael Feldman;Paul J. Zhang;Michael Atalay;Ronnie Sebro;Li Yang;Yong Fan;Wei-hua Liao;Harrison X. Bai
    • Korean Journal of Radiology, Vol. 22, No. 7, pp.1213-1224, 2021
  • Objective: To develop a machine learning (ML) pipeline based on radiomics to predict Coronavirus Disease 2019 (COVID-19) severity and future deterioration to critical illness using CT and clinical variables. Materials and Methods: Clinical data were collected from 981 patients from a multi-institutional international cohort with real-time polymerase chain reaction-confirmed COVID-19. Radiomics features were extracted from chest CT of the patients. The data of the cohort were randomly divided into training, validation, and test sets using a 7:1:2 ratio. An ML pipeline consisting of a model to predict severity and a time-to-event model to predict progression to critical illness was trained on radiomics features and clinical variables. The receiver operating characteristic area under the curve (ROC-AUC), concordance index (C-index), and time-dependent ROC-AUC were calculated to determine model performance, which was compared with consensus CT severity scores obtained by visual interpretation by radiologists. Results: Among the 981 patients with confirmed COVID-19, 274 developed critical illness. The combination of radiomics features and clinical variables yielded the best performance for the prediction of disease severity, with the highest test ROC-AUC of 0.76 compared with 0.70 for the visual CT severity score and clinical variables (0.76 vs. 0.70, p = 0.023). The progression prediction model achieved a test C-index of 0.868 when based on the combination of CT radiomics and clinical variables, compared with 0.767 when based on CT radiomics features alone (p < 0.001), 0.847 when based on clinical variables alone (p = 0.110), and 0.860 when based on the combination of visual CT severity scores and clinical variables (p = 0.549). Furthermore, the model based on the combination of CT radiomics and clinical variables achieved time-dependent ROC-AUCs of 0.897, 0.933, and 0.927 for the prediction of progression risk at 3, 5, and 7 days, respectively. Conclusion: CT radiomics features combined with clinical variables were predictive of COVID-19 severity and progression to critical illness with fairly high accuracy.
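
A minimal sketch of the two evaluation metrics named above, ROC-AUC for the severity classifier and the concordance index for the time-to-event model, using scikit-learn and lifelines; the arrays are hypothetical placeholders, and radiomics feature extraction and model fitting are omitted.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from lifelines.utils import concordance_index

# Severity model: binary label (severe vs. non-severe) and a predicted probability.
y_true = np.array([0, 1, 1, 0, 1])            # placeholder labels
y_prob = np.array([0.2, 0.8, 0.6, 0.3, 0.9])  # placeholder predicted risks
print("ROC-AUC:", roc_auc_score(y_true, y_prob))

# Progression model: follow-up time (days), event indicator, and a risk score.
times = np.array([3.0, 7.0, 5.0, 10.0, 2.0])  # placeholder times to event/censoring
events = np.array([1, 0, 1, 0, 1])            # 1 = progressed to critical illness
risk = np.array([0.9, 0.1, 0.7, 0.2, 0.8])    # higher = higher predicted risk
# concordance_index expects scores ordered like survival times, so negate the risk.
print("C-index:", concordance_index(times, -risk, events))
```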

Support Vector Machine Based Arrhythmia Classification Using Reduced Features

  • Song, Mi-Hye;Lee, Jeon;Cho, Sung-Pil;Lee, Kyoung-Joung;Yoo, Sun-Kook
    • International Journal of Control, Automation, and Systems, Vol. 3, No. 4, pp.571-579, 2005
  • In this paper, we propose an algorithm for arrhythmia classification based on feature-dimension reduction by linear discriminant analysis (LDA) and a support vector machine (SVM) classifier. Seventeen original input features were extracted from preprocessed signals by wavelet transform, and these were then reduced by LDA to 4 features that are linear combinations of the original ones. The SVM classifier performed better with the LDA-reduced features than with features reduced by principal component analysis (PCA), and even better than with the original features. In a cross-validation procedure, this SVM classifier was compared with multilayer perceptron (MLP) and fuzzy inference system (FIS) classifiers. When all classifiers used the same reduced features, the overall performance of the SVM classifier was comprehensively superior to all others. In particular, the accuracies of discriminating normal sinus rhythm (NSR), atrial premature contraction (APC), supraventricular tachycardia (SVT), premature ventricular contraction (PVC), ventricular tachycardia (VT), and ventricular fibrillation (VF) were 99.307%, 99.274%, 99.854%, 98.344%, 99.441%, and 99.883%, respectively. Moreover, even with less training data, the SVM classifier offered better performance than the MLP classifier.
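
A minimal sketch of the feature-reduction-plus-classifier structure described above, using scikit-learn; `X` (17 wavelet-derived features per beat) and `y` (the six rhythm classes) are placeholders, and the SVM kernel and scaling step are assumptions rather than the paper's exact configuration.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# LDA projects the 17 features onto 4 linear combinations (at most n_classes - 1 = 5
# components are available for 6 rhythm classes), followed by an SVM classifier.
clf = make_pipeline(
    StandardScaler(),
    LinearDiscriminantAnalysis(n_components=4),
    SVC(kernel="rbf"),
)

# X: (n_beats, 17) wavelet features, y: labels in {NSR, APC, SVT, PVC, VT, VF}.
# print(cross_val_score(clf, X, y, cv=5).mean())
```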

A Survey of Transfer and Multitask Learning in Bioinformatics

  • Xu, Qian;Yang, Qiang
    • Journal of Computing Science and Engineering, Vol. 5, No. 3, pp.257-268, 2011
  • Machine learning and data mining have found many applications in biological domains, where we look to build predictive models based on labeled training data. In practice, however, high-quality labeled data are scarce, and labeling new data incurs a high cost. Transfer and multitask learning offer an attractive alternative: useful knowledge extracted from data in auxiliary domains can be transferred to counter the lack of data in the target domain. In this article, we survey recent advances in transfer and multitask learning for bioinformatics applications. In particular, we survey several key bioinformatics application areas, including sequence classification, gene expression data analysis, biological network reconstruction, and biomedical applications.

Tongue Image Segmentation Using CNN and Various Image Augmentation Techniques

  • 안일구;배광호;이시우
    • 대한의용생체공학회:의공학회지, Vol. 42, No. 5, pp.201-210, 2021
  • In Korean medicine, tongue diagnosis is one of the important diagnostic methods for detecting abnormalities in the body. Representative features used in tongue diagnosis include color, shape, texture, cracks, and tooth marks. When diagnosing a patient through these features, the criteria may differ between oriental medical doctors, and even the same practitioner may reach different results depending on the time and work environment. To overcome this problem, recent studies have sought to automate and standardize tongue diagnosis using machine learning, and the basic first step of such a machine learning-based tongue diagnosis system is tongue segmentation. In this paper, image data are augmented based on the main tongue features, and backbones from several well-known deep learning architectures are used for automatic tongue segmentation. The experimental results show that the proposed augmentation technique improves the accuracy of tongue segmentation and that automatic tongue segmentation can be performed with a high accuracy of 99.12%.
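
A minimal sketch of a paired image/mask augmentation step of the kind described above, using torchvision; the specific transforms and parameter ranges are illustrative assumptions, not the exact augmentations or backbone models evaluated in the paper.

```python
import random
import torchvision.transforms.functional as TF

def augment(image, mask):
    """Apply the same random geometric transforms to a tongue image and its
    segmentation mask, plus photometric jitter on the image only."""
    if random.random() < 0.5:                       # random horizontal flip
        image, mask = TF.hflip(image), TF.hflip(mask)
    angle = random.uniform(-15.0, 15.0)             # small random rotation
    image, mask = TF.rotate(image, angle), TF.rotate(mask, angle)
    image = TF.adjust_brightness(image, random.uniform(0.8, 1.2))
    image = TF.adjust_contrast(image, random.uniform(0.8, 1.2))
    return image, mask
```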

Identification of Individuals using Single-Lead Electrocardiogram Signal

  • 임서현;민경란;이종실;장동표;김인영
    • 대한의용생체공학회:의공학회지, Vol. 35, No. 3, pp.42-49, 2014
  • We propose an individual identification method using a single-lead electrocardiogram (ECG) signal. In this paper, lead I ECG was measured from subjects in various physical and psychological states. We performed noise reduction on the lead I signal as a preprocessing stage, and the cleaned signal was used to acquire a representative beat waveform for each individual by ensemble averaging. From the P-QRS-T waves, 35 features were extracted to identify individuals: 19 based on duration and amplitude information, and 16 from the QRS complex obtained by applying the Pan-Tompkins algorithm to the ensemble-averaged waveform. To analyze the effect of each feature and to improve efficiency while maintaining performance, the Relief-F algorithm was used to select features from the 35 extracted features. Some or all of these 35 features were used for support vector machine (SVM) training and testing. The classification accuracy using the entire feature set was 98.34%. The experimental results show that it is possible to identify a person using features extracted from the limb lead I signal only.
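
A minimal sketch of the ensemble-averaging step described above, assuming R-peak indices have already been detected (for example, with a Pan-Tompkins implementation); the window lengths and variable names are illustrative placeholders.

```python
import numpy as np

def ensemble_average_beat(ecg: np.ndarray, r_peaks: np.ndarray,
                          pre: int = 200, post: int = 400) -> np.ndarray:
    """Average fixed-length windows around each R-peak of a lead I recording
    to suppress noise and obtain a representative P-QRS-T waveform."""
    beats = [ecg[r - pre:r + post] for r in r_peaks
             if r - pre >= 0 and r + post <= len(ecg)]
    return np.mean(np.vstack(beats), axis=0)

# Duration/amplitude features measured on this averaged beat would then go through
# Relief-F feature selection and the SVM classifier described above.
```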

Automatically Diagnosing Skull Fractures Using an Object Detection Method and Deep Learning Algorithm in Plain Radiography Images

  • Tae Seok, Jeong;Gi Taek, Yee;Kwang Gi, Kim;Young Jae, Kim;Sang Gu, Lee;Woo Kyung, Kim
    • Journal of Korean Neurosurgical Society, Vol. 66, No. 1, pp.53-62, 2023
  • Objective: Deep learning is a machine learning approach based on training artificial neural networks, and object detection algorithms using deep learning are among the most powerful tools in image analysis. We analyzed and evaluated the diagnostic performance of a deep learning algorithm for identifying skull fractures in plain radiographic images and investigated its clinical applicability. Methods: A total of 2026 plain radiographic images of the skull (fracture, 991; normal, 1035) were obtained from 741 patients. The RetinaNet architecture was used as the deep learning model. Precision, recall, and average precision were measured to evaluate the algorithm's diagnostic performance. Results: With a ResNet-152 backbone, the average precision at intersection-over-union (IOU) thresholds of 0.1, 0.3, and 0.5 was 0.7240, 0.6698, and 0.3687, respectively. When the IOU and confidence thresholds were both 0.1, the precision was 0.7292 and the recall was 0.7650. When the IOU threshold was 0.1 and the confidence threshold was 0.6, the true and false rates were 82.9% and 17.1%, respectively. There were significant differences in the true/false and false-positive/false-negative ratios between the anterior-posterior, Towne, and both lateral views (p=0.032 and p=0.003). Objects detected as false positives included vascular grooves and suture lines. Among false negatives, detection performance was poor for diastatic fractures, fractures crossing a suture line, and fractures around the vascular grooves and orbit. Conclusion: The object detection algorithm applied with deep learning is expected to be a valuable tool in diagnosing skull fractures.
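
A minimal sketch of RetinaNet inference with a confidence threshold, using torchvision; note that torchvision ships a ResNet-50-FPN backbone (`retinanet_resnet50_fpn`) pretrained on COCO, whereas the paper reports results with a ResNet-152 backbone trained on skull radiographs, so the model weights and threshold below are illustrative only.

```python
import torch
from torchvision.models.detection import retinanet_resnet50_fpn

model = retinanet_resnet50_fpn(weights="DEFAULT")  # COCO weights, for illustration
model.eval()

image = torch.rand(3, 512, 512)       # placeholder radiograph tensor scaled to [0, 1]
with torch.no_grad():
    pred = model([image])[0]          # dict with 'boxes', 'scores', 'labels'

CONF_THRESHOLD = 0.6                  # operating point discussed in the abstract
keep = pred["scores"] >= CONF_THRESHOLD
boxes = pred["boxes"][keep]           # candidate detection bounding boxes
```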