Title/Summary/Keyword: Machine Learning and Artificial Intelligence


Selecting Machine Learning Model Based on Natural Language Processing for Shanghanlun Diagnostic System Classification (자연어 처리 기반 『상한론(傷寒論)』 변병진단체계(辨病診斷體系) 분류를 위한 기계학습 모델 선정)

  • Young-Nam Kim
    • 대한상한금궤의학회지 / v.14 no.1 / pp.41-50 / 2022
  • Objective: The purpose of this study was to explore the machine learning model best suited to classifying the Shanghanlun diagnostic system using natural language processing (NLP). Methods: A total of 201 data items were collected from 『Shanghanlun』 and 『Clinical Shanghanlun』; 'Taeyangbyeong-gyeolhyung' and 'Eumyangyeokchahunobokbyeong' were excluded to prevent oversampling or undersampling. The data were preprocessed using the Twitter Korean tokenizer and trained with logistic regression, ridge regression, lasso regression, naive Bayes classifier, decision tree, and random forest algorithms, and the accuracies of the models were compared (a minimal sketch of this comparison appears below). Results: Ridge regression and the naive Bayes classifier showed an accuracy of 0.843, logistic regression and random forest an accuracy of 0.804, decision tree an accuracy of 0.745, and lasso regression an accuracy of 0.608. Conclusions: Ridge regression and the naive Bayes classifier are suitable NLP machine learning models for Shanghanlun diagnostic system classification.
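A minimal sketch of the comparison described above, assuming KoNLPy's Okt tokenizer (the successor to the Twitter Korean tokenizer), TF-IDF features, and scikit-learn models; the texts and labels are placeholders for the 201 clauses, and L1-penalized logistic regression stands in for lasso classification:

```python
from konlpy.tag import Okt
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression, RidgeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier

# Placeholder corpus: replace with the 201 clause texts and their
# diagnostic-system labels from the two source texts.
texts = [f"태양병 예시 문장 {i}" for i in range(10)]
labels = ["class_a" if i < 5 else "class_b" for i in range(10)]

# Tokenize Korean into morphemes before TF-IDF vectorization.
okt = Okt()
X = TfidfVectorizer(tokenizer=okt.morphs).fit_transform(texts)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "ridge": RidgeClassifier(),
    # scikit-learn has no lasso classifier; L1 logistic is a common stand-in.
    "lasso": LogisticRegression(penalty="l1", solver="liblinear"),
    "naive_bayes": MultinomialNB(),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, labels, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
```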


Face Recognition using Correlation Filters and Support Vector Machine in Machine Learning Approach

  • Long, Hoang;Kwon, Oh-Heum;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.24 no.4 / pp.528-537 / 2021
  • Face recognition has attracted significant attention because of its applications in many businesses: security, healthcare, and marketing. In this paper, we present a recognition method combining correlation filters (CF) and a support vector machine (SVM). First, we evaluate and compare the performance of four correlation filters: minimum average correlation energy (MACE), maximum average correlation height (MACH), unconstrained minimum average correlation energy (UMACE), and optimal-tradeoff (OT). Second, we propose a machine learning approach that uses the OT correlation filter for feature extraction and an SVM for classification (a simplified sketch follows below). Numerical results on the National Cheng Kung University (NCKU) and Pointing'04 face databases show that the proposed OT-SVM method achieves higher face recognition accuracy than the other machine learning methods. Our approach does not require a graphics card for training, so it can run well on low-end hardware such as an embedded system.
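A simplified sketch of the correlation-filter-plus-SVM idea: a plain frequency-domain matched filter stands in for the OT filter (whose exact formulation is not reproduced here), and the correlation-plane peak per class template serves as an SVM feature. All arrays below are random placeholders for real face images:

```python
import numpy as np
from sklearn.svm import SVC

def correlation_plane(image, template):
    """Correlate an image with a filter template in the frequency domain."""
    F = np.fft.fft2(image)
    H = np.conj(np.fft.fft2(template, s=image.shape))
    return np.abs(np.fft.ifft2(F * H))

def peak_features(image, templates):
    """One feature per class template: the correlation peak height."""
    return np.array([correlation_plane(image, t).max() for t in templates])

rng = np.random.default_rng(0)
# Placeholders: one filter template per identity and 20 training faces.
templates = [rng.random((64, 64)) for _ in range(5)]
X_train = np.stack([peak_features(rng.random((64, 64)), templates)
                    for _ in range(20)])
y_train = rng.integers(0, 5, size=20)  # identity labels

clf = SVC(kernel="rbf").fit(X_train, y_train)
print(clf.predict(X_train[:3]))
```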

Classification of Clothing Using Googlenet Deep Learning and IoT based on Artificial Intelligence (인공지능 기반 구글넷 딥러닝과 IoT를 이용한 의류 분류)

  • Noh, Sun-Kuk
    • Smart Media Journal / v.9 no.3 / pp.41-45 / 2020
  • Recently, artificial intelligence (AI) and the Internet of Things (IoT), represented among the IT technologies of the Fourth Industrial Revolution by machine learning and deep learning, have been applied to real life in various fields through a wide range of research. In this paper, IoT and AI with object recognition technology are applied to classify clothing. For this purpose, an image dataset was captured using a webcam and a Raspberry Pi, and transfer learning was applied to the captured image data using GoogLeNet, a convolutional neural network (a minimal sketch appears below). The clothing image dataset was divided into two categories (shirtwaist, trousers): 900 clean images and 900 loss (damaged) images, 1,800 images in total. The classification results showed an accuracy of about 97.78% on the clean clothing images. In conclusion, the results confirm that, with additional image data in the future, the approach is applicable to other objects using artificial intelligence networks on an IoT-based platform.
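A minimal transfer-learning sketch for the two-class clothing task using torchvision's GoogLeNet; the dataset directory layout and the training hyperparameters are assumptions:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Load ImageNet-pretrained GoogLeNet and replace its 1000-class head.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # shirtwaist vs. trousers

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# Assumed layout: clothing_images/{shirtwaist,trousers}/*.jpg from the webcam.
loader = DataLoader(datasets.ImageFolder("clothing_images", preprocess),
                    batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
model.train()
for images, targets in loader:
    optimizer.zero_grad()
    outputs = model(images)
    # In train mode GoogLeNet also returns auxiliary logits; keep the main head.
    logits = outputs.logits if isinstance(outputs, tuple) else outputs
    criterion(logits, targets).backward()
    optimizer.step()
```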

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. Thanks to recent interest in the technology and research on various algorithms, the field of artificial intelligence has advanced further than ever. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable, processable knowledge constructed from the complex, informal human knowledge and rules of various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. More recently, the purpose of the knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various fields of artificial intelligence, for example the question-answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still requires a great deal of expert effort. In recent years, much research and technology in knowledge-based artificial intelligence has used DBpedia, one of the largest knowledge bases, which aims to extract structured content from the information in Wikipedia. DBpedia contains information extracted from Wikipedia such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, user-created summaries of some unifying aspect of an article. This knowledge is created through the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. By generating knowledge from semi-structured, user-created infobox data, DBpedia can expect high reliability in terms of knowledge accuracy. However, since only about 50% of all wiki pages in the Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method of extracting knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the appropriate sentences from which to extract triples, and selecting values and transforming them into RDF triple structures. Wikipedia infobox structures are defined as infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the document's classification, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training dataset from a Wikipedia dump using a method that adds BIO tags to sentences, training about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we ran comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction process (a minimal sketch of the CRF baseline appears below). Through the proposed process, structured knowledge can be utilized by extracting it from text documents according to the ontology schema. In addition, this methodology can significantly reduce the effort experts must spend constructing instances according to the ontology schema.
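A minimal sketch of the CRF baseline for the BIO-tagged sentence-labeling step, using sklearn-crfsuite; the token features, the example sentence, and the tag scheme are simplified stand-ins for the paper's Wikipedia-dump training data:

```python
import sklearn_crfsuite

def token_features(tokens, i):
    """Simple per-token features; the paper's feature set is not reproduced."""
    return {
        "word": tokens[i],
        "is_first": i == 0,
        "prev_word": tokens[i - 1] if i > 0 else "<BOS>",
        "next_word": tokens[i + 1] if i < len(tokens) - 1 else "<EOS>",
    }

# One illustrative BIO-tagged sentence standing in for the Wikipedia dump.
sentences = [["Seoul", "is", "the", "capital", "of", "South", "Korea", "."]]
tags = [["B-SUBJ", "O", "O", "B-REL", "O", "B-OBJ", "I-OBJ", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sentences]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
crf.fit(X, tags)
print(crf.predict(X))
```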

CNN-LSTM-based Upper Extremity Rehabilitation Exercise Real-time Monitoring System (CNN-LSTM 기반의 상지 재활운동 실시간 모니터링 시스템)

  • Jae-Jung Kim;Jung-Hyun Kim;Sol Lee;Ji-Yun Seo;Do-Un Jeong
    • Journal of the Institute of Convergence Signal Processing / v.24 no.3 / pp.134-139 / 2023
  • Rehabilitation patients perform outpatient treatment and daily rehabilitation exercises to recover physical function, with the aim of quickly returning to society after surgical treatment. Unlike exercising in a hospital with the help of a professional therapist, performing rehabilitation exercises alone on a daily basis presents many difficulties. In this paper, we propose a CNN-LSTM-based real-time monitoring system for upper-limb rehabilitation so that patients can exercise efficiently and with correct posture in daily life. The proposed system measures biosignals through shoulder-mounted hardware equipped with EMG and IMU sensors, preprocesses and normalizes the signals, and uses them as a training dataset. The implemented model consists of three convolutional stacks, each followed by a pooling layer, for feature detection and two LSTM layers for classification (a minimal sketch appears below); it achieved 97.44% accuracy on the validation data. We then conducted a comparative evaluation against Teachable Machine: our model reached 93.6% and Teachable Machine 94.4%, showing similar classification performance.
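A minimal PyTorch sketch of the described CNN-LSTM: three convolution-plus-pooling stacks for feature detection followed by two LSTM layers for classification. The channel counts, window length, number of exercise classes, and number of EMG/IMU input channels are assumptions:

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, in_channels=7, num_classes=4):  # e.g. 1 EMG + 6 IMU axes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=128, hidden_size=64,
                            num_layers=2, batch_first=True)
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):              # x: (batch, channels, time)
        feats = self.features(x)       # (batch, 128, time/8)
        feats = feats.transpose(1, 2)  # LSTM expects (batch, time, features)
        out, _ = self.lstm(feats)
        return self.classifier(out[:, -1])  # last step -> exercise class

model = CNNLSTM()
dummy = torch.randn(8, 7, 128)  # batch of 8 signal windows, 128 samples each
print(model(dummy).shape)       # torch.Size([8, 4])
```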

Automated infographic recommendation system based on machine learning (기계학습 기반의 인포그래픽 자동 추천 시스템)

  • Kim, Hyeong-Gyun;Lee, Sang-hee
    • Journal of Digital Convergence / v.19 no.11 / pp.17-22 / 2021
  • In this paper, a machine learning-based automatic infographic recommendation system is proposed to improve the existing infographic production method. The system consists of a part that trains machine learning models on multiple infographic images and a part that automatically recommends infographics with artificial intelligence from only the basic data the user inputs (a hedged sketch of one possible matching step follows below). The recommended infographics are provided in the form of a library, and additional data can be entered by drag and drop. In addition, the infographic image is designed to adjust dynamically to the size of the input data. Analysis of the machine learning-based automatic recommendation process showed that the matching success rate for layout and keywords was very high, while the matching success rate for type was rather low. Future work is needed to improve the matching success rate for the image type of each part of the infographic.
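A hedged sketch of one plausible matching step: ranking infographic templates against the user's input keywords by TF-IDF cosine similarity. The template descriptions and the matching criterion are illustrative assumptions, not the paper's actual model:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical template library: each template described by keywords.
templates = {
    "timeline": "history years sequence milestones dates",
    "comparison": "versus compare two options pros cons",
    "statistics": "percent chart numbers survey results",
}
vectorizer = TfidfVectorizer()
T = vectorizer.fit_transform(templates.values())

def recommend(user_input: str) -> str:
    """Return the template whose keyword description best matches the input."""
    q = vectorizer.transform([user_input])
    scores = cosine_similarity(q, T)[0]
    return list(templates)[scores.argmax()]

print(recommend("survey results in percent"))  # -> "statistics"
```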

An Artificial Intelligent based Learning Model for BIM Elements Usage (건축 부재 사용량 예측을 위한 인공지능 학습 모델)

  • Beom-Su Kim;Jong-Hyeok Park;Soo-Hee Han;Kyung-Jun Kim
    • The Journal of the Korea institute of electronic communication sciences / v.18 no.1 / pp.107-114 / 2023
  • This study describes the design and implementation of an artificial intelligence-based learning model for predicting the usage of building members. Artificial intelligence (AI) is widely used in many fields thanks to technological progress, but in the field of building information modeling (BIM) it is rarely applied, owing to the domain-specific nature of the data and the difficulty of collecting big data. Therefore, this study identifies the AI problems in BIM and devises a new preprocessing technique to address the field's data specificity. An AI model was implemented based on the designed preprocessing technique (an illustrative sketch appears below), and its accuracy in predicting building component usage was confirmed to be at a level usable in actual industry.
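Since the abstract does not detail the model or the new preprocessing technique, the following is only an illustrative tabular pipeline (scaling plus random-forest regression) for predicting element usage from hypothetical BIM-derived features:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical features per project: floor area, storeys, element type id.
X = np.array([[1200.0, 5, 2], [800.0, 3, 1], [2500.0, 10, 2], [600.0, 2, 3]])
y = np.array([340, 180, 790, 120])  # element usage counts (placeholder)

model = make_pipeline(StandardScaler(), RandomForestRegressor(n_estimators=200))
model.fit(X, y)
print(model.predict([[1000.0, 4, 2]]))  # predicted usage for a new project
```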

Study on the Application of Artificial Intelligence Model for CT Quality Control (CT 정도관리를 위한 인공지능 모델 적용에 관한 연구)

  • Ho Seong Hwang;Dong Hyun Kim;Ho Chul Kim
    • Journal of Biomedical Engineering Research / v.44 no.3 / pp.182-189 / 2023
  • CT is a medical device that acquires medical images based on the X-ray attenuation coefficients of human organs. Using this principle, it can acquire sagittal and coronal planes as well as 3D images of the human body, making CT essential for general diagnostic testing. However, the radiation exposure of a CT scan is so high that CT is regulated and managed as special medical equipment, and as such it must undergo quality control. Within quality control, the spatial-resolution and contrast-resolution phantom imaging tests and the clinical image evaluation are qualitative tests; because they are not objective, they undermine trust in the reliability of the CT scanner. Therefore, by applying artificial intelligence classification models, we sought to confirm the possibility of quantitatively evaluating the qualitative parts of the phantom test. We used six classification models (VGG19, DenseNet201, EfficientNet B2, Inception-ResNet-v2, ResNet50V2, and Xception), with an additional fine-tuning step during training (a minimal sketch follows below). As a result, across all classification models, the accuracy for spatial resolution was 0.9562 or higher, the precision 0.9535, the recall 1, the loss value 0.1774, and the training time ranged from a maximum of 14 minutes to a minimum of 8 minutes 10 seconds. From these experimental results, we conclude that artificial intelligence models can be applied to CT quality control for spatial resolution and contrast resolution.
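A minimal fine-tuning sketch for the phantom-image classification task using torchvision's DenseNet201 (one of the six models listed); the two-class pass/fail setup, the placeholder images, and the choice of which layers to unfreeze are assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
for p in model.parameters():               # freeze the pretrained backbone
    p.requires_grad = False
model.classifier = nn.Linear(model.classifier.in_features, 2)  # pass / fail

# Fine-tuning: additionally unfreeze the last dense block.
for p in model.features.denseblock4.parameters():
    p.requires_grad = True

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()

phantom_batch = torch.randn(4, 3, 224, 224)  # placeholder phantom images
labels = torch.tensor([0, 1, 0, 1])
criterion(model(phantom_batch), labels).backward()
optimizer.step()
```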

Developing an Artificial Intelligence Algorithm to Predict the Timing of Dialysis Vascular Surgery (투석혈관 수술시기 예측을 위한 인공지능 알고리즘 개발)

  • Kim Dohyoung;Kim Hyunsuk;Lee Sunpyo;Oh Injong;Park Seungbum
    • Journal of Korea Society of Digital Industry and Information Management / v.19 no.4 / pp.97-115 / 2023
  • In South Korea, chronic kidney disease (CKD) affects around 4.6 million adults, leading to a high reliance on hemodialysis. For effective dialysis, vascular access is crucial, and decisions about vascular surgeries are often made during dialysis sessions; anticipating these needs could improve dialysis quality and patient comfort. This study investigates the use of artificial intelligence (AI) to predict the timing of surgeries for dialysis vessels, an area not extensively researched. We developed an AI algorithm using predictive-maintenance methods, transitioning from machine learning to a more advanced deep learning approach with Long Short-Term Memory (LSTM) models. The algorithm processes variables such as venous pressure, blood flow, and patient age, demonstrating high effectiveness with metrics exceeding 0.91 (a minimal sketch of the LSTM setup appears below). By shortening the data collection intervals, a more refined model can be obtained. Implementing this AI in clinical practice could notably enhance the patient experience and the quality of medical services in dialysis, marking a significant advancement in the treatment of CKD.
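A minimal PyTorch sketch of the LSTM approach described above: sliding windows of per-session measurements (venous pressure, blood flow, patient age) classified as needing surgery soon or not. The window length, layer sizes, and binary target are assumptions:

```python
import torch
import torch.nn as nn

class SurgeryTimingLSTM(nn.Module):
    def __init__(self, n_features=3, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):          # x: (batch, sessions, features)
        out, _ = self.lstm(x)
        # Logit from the last session: is surgery needed soon?
        return self.head(out[:, -1]).squeeze(-1)

model = SurgeryTimingLSTM()
windows = torch.randn(16, 12, 3)  # 16 patients x last 12 sessions x 3 features
labels = torch.randint(0, 2, (16,)).float()  # placeholder targets
loss = nn.BCEWithLogitsLoss()(model(windows), labels)
loss.backward()
```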

Design of Block-based Modularity Architecture for Machine Learning (머신러닝을 위한 블록형 모듈화 아키텍처 설계)

  • Oh, Yoosoo
    • Journal of Korea Multimedia Society / v.23 no.3 / pp.476-482 / 2020
  • In this paper, we propose a block-based modularity architecture design method for distributed machine learning. The proposed architecture is a block-type module structure holding various machine learning algorithms. It allows free expansion between block-type modules and lets multiple machine learning algorithms be organically interlocked according to the situation. The architecture enables open data communication using a metadata query protocol (a hedged sketch of such an interface follows below). It also makes it easy to implement application services that combine various edge computing devices, by designing a communication method suited to the surrounding applications. To confirm the interlocking between the proposed block-type modules, we implemented a hardware-based modular application system.
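A hedged sketch of what a block-type module interface with a metadata query protocol could look like; the class names, metadata fields, and chaining rule are illustrative assumptions, not the paper's specification:

```python
from abc import ABC, abstractmethod

class MLBlock(ABC):
    """A self-describing machine-learning block that can be freely chained."""

    @abstractmethod
    def metadata(self) -> dict:
        """Answer a metadata query: what this block consumes and produces."""

    @abstractmethod
    def run(self, data):
        """Execute the block's algorithm on incoming data."""

class ScalerBlock(MLBlock):
    def metadata(self):
        return {"name": "scaler", "input": "raw_vector", "output": "scaled_vector"}
    def run(self, data):
        m = max(abs(v) for v in data) or 1.0
        return [v / m for v in data]

class ThresholdClassifierBlock(MLBlock):
    def metadata(self):
        return {"name": "classifier", "input": "scaled_vector", "output": "label"}
    def run(self, data):
        return "active" if sum(data) / len(data) > 0.5 else "idle"

def connect(blocks):
    """Interlock blocks only when their metadata declares compatible types."""
    for a, b in zip(blocks, blocks[1:]):
        assert a.metadata()["output"] == b.metadata()["input"], "incompatible blocks"
    def pipeline(data):
        for block in blocks:
            data = block.run(data)
        return data
    return pipeline

print(connect([ScalerBlock(), ThresholdClassifierBlock()])([3.0, 4.0, 5.0]))
```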