• Title/Summary/Keyword: Python Programming

Determination of the stage and grade of periodontitis according to the current classification of periodontal and peri-implant diseases and conditions (2018) using machine learning algorithms

  • Kubra Ertas;Ihsan Pence;Melike Siseci Cesmeli;Zuhal Yetkin Ay
    • Journal of Periodontal and Implant Science / v.53 no.1 / pp.38-53 / 2023
  • Purpose: The current Classification of Periodontal and Peri-Implant Diseases and Conditions, published and disseminated in 2018, involves some difficulties and causes diagnostic conflicts due to its criteria, especially for inexperienced clinicians. The aim of this study was to design a decision system based on machine learning algorithms by using clinical measurements and radiographic images in order to determine and facilitate the staging and grading of periodontitis. Methods: In the first part of this study, machine learning models were created using the Python programming language based on clinical data from 144 individuals who presented to the Department of Periodontology, Faculty of Dentistry, Süleyman Demirel University. In the second part, panoramic radiographic images were processed and classification was carried out with deep learning algorithms. Results: Using clinical data, the accuracy of staging with the tree algorithm reached 97.2%, while the random forest and k-nearest neighbor algorithms reached 98.6% accuracy. The best staging accuracy for processing panoramic radiographic images was provided by a hybrid network model algorithm combining the proposed ResNet50 architecture and the support vector machine algorithm. For this, the images were preprocessed, and high success was obtained, with a classification accuracy of 88.2% for staging. However, in general, it was observed that the radiographic images provided a low level of success, in terms of accuracy, for modeling the grading of periodontitis. Conclusions: The machine learning-based decision system presented herein can facilitate periodontal diagnoses despite its current limitations. Further studies are planned to optimize the algorithm and improve the results.
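The clinical-data part of the workflow described above maps onto a standard scikit-learn pipeline. A minimal sketch follows, with a synthetic placeholder feature matrix standing in for the 144 patients' clinical measurements (the actual features and preprocessing are not given in the abstract):

```python
# Sketch of stage classification from tabular clinical data (hypothetical features and labels).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(144, 8))      # placeholder: 144 patients x 8 clinical measurements
y = rng.integers(1, 5, size=144)   # placeholder periodontitis stages I-IV

for name, model in [
    ("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("k-nearest neighbors", KNeighborsClassifier(n_neighbors=5)),
]:
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean cross-validated accuracy = {acc:.3f}")
```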

Deep learning for the classification of cervical maturation degree and pubertal growth spurts: A pilot study

  • Mohammad-Rahimi, Hossein;Motamadian, Saeed Reza;Nadimi, Mohadeseh;Hassanzadeh-Samani, Sahel;Minabi, Mohammad A. S.;Mahmoudinia, Erfan;Lee, Victor Y.;Rohban, Mohammad Hossein
    • The Korean Journal of Orthodontics / v.52 no.2 / pp.112-122 / 2022
  • Objective: This study aimed to present and evaluate a new deep learning model for determining cervical vertebral maturation (CVM) degree and growth spurts by analyzing lateral cephalometric radiographs. Methods: The study sample included 890 cephalograms. The images were classified into six cervical stages independently by two orthodontists. The images were also categorized into three degrees on the basis of the growth spurt: pre-pubertal, growth spurt, and post-pubertal. Subsequently, the samples were fed to a transfer learning model implemented using the Python programming language and PyTorch library. In the last step, the test set of cephalograms was randomly coded and provided to two new orthodontists in order to compare their diagnosis to the artificial intelligence (AI) model's performance using weighted kappa and Cohen's kappa statistical analyses. Results: The model's validation and test accuracy for the six-class CVM diagnosis were 62.63% and 61.62%, respectively. Moreover, the model's validation and test accuracy for the three-class classification were 75.76% and 82.83%, respectively. Furthermore, substantial agreements were observed between the two orthodontists as well as one of them and the AI model. Conclusions: The newly developed AI model had reasonable accuracy in detecting the CVM stage and high reliability in detecting the pubertal stage. However, its accuracy was still less than that of human observers. With further improvements in data quality, this model should be able to provide practical assistance to practicing dentists in the future.
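Transfer learning of this kind is typically set up in PyTorch by swapping the classification head of a pretrained backbone. The abstract does not name the architecture, so the ResNet-18 backbone and dummy batch below are assumptions used only for illustration:

```python
# Sketch of a transfer-learning classifier for the six CVM stages (backbone is an assumption).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():                       # freeze the pretrained feature extractor
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 6)      # new head for the six cervical stages

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# One hypothetical training step on a dummy batch of cephalogram crops.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 6, (4,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```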

Force-deformation relationship prediction of bridge piers through stacked LSTM network using fast and slow cyclic tests

  • Omid Yazdanpanah;Minwoo Chang;Minseok Park;Yunbyeong Chae
    • Structural Engineering and Mechanics / v.85 no.4 / pp.469-484 / 2023
  • A deep recursive bidirectional CUDA Deep Neural Network Long Short-Term Memory (Bi-CuDNNLSTM) layer is used in this paper to predict the entire force time histories, and the corresponding hysteresis and backbone curves, of reinforced concrete (RC) bridge piers from experimental fast and slow cyclic tests. The proposed stacked Bi-CuDNNLSTM layers take multiple uncertain input variables, including horizontal actuator displacements, vertical actuator axial loads, the effective height of the bridge pier, the moment of inertia, and mass. The functional application programming interface of the Keras Python library is used to develop a deep learning model that considers all of these input attributes. To obtain a robust and reliable prediction, the dataset for both the fast and slow cyclic tests is split into three mutually exclusive subsets: training, validation, and testing (unseen). The whole dataset includes 17 RC bridge piers tested experimentally: ten in fast and seven in slow cyclic tests. The results show that the mean absolute error, used as the loss function, decreases monotonically toward zero for both the training and validation datasets after 5,000 epochs, and a high level of correlation (more than 90%) is observed between the predicted and experimentally measured force time histories for all datasets. The maximum mean normalized error, obtained from box-whisker plots and the Gaussian distribution of the normalized error, is about 10% and 3% for the unseen fast and slow cyclic test data, respectively. In summary, the stacked Bi-CuDNNLSTM layer implemented in this study can reduce the time and experimental costs of conducting new fast and slow cyclic tests in the future and provides fast and accurate insight into the hysteretic behavior of bridge piers.
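A stacked bidirectional LSTM built with the Keras functional API, as described above, can be sketched as follows. The layer widths, sequence length, and feature count are placeholders rather than the paper's configuration, and in TensorFlow 2.x the plain LSTM layer dispatches to the cuDNN kernel when run on a GPU:

```python
# Sketch of a stacked bidirectional LSTM regressor using the Keras functional API.
import tensorflow as tf
from tensorflow.keras import layers, Model

n_steps, n_features = 2000, 5     # e.g., displacement, axial load, height, inertia, mass
inputs = tf.keras.Input(shape=(n_steps, n_features))
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(inputs)
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
outputs = layers.TimeDistributed(layers.Dense(1))(x)   # predicted force at each time step

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="mae")   # mean absolute error, as in the abstract
model.summary()
```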

Geometry optimization of a double-layered inertial reactive armor configured with rotating discs

  • Bekzat Ajan;Dichuan Zhang;Christos Spitas;Elias Abou Fakhr;Dongming Wei
    • Advances in Computational Design / v.8 no.4 / pp.309-325 / 2023
  • An innovative inertial reactive armor is being developed through a multi-discipline project. Unlike the well-known explosive or non-explosive reactive armor that relies on high-energy explosives or a bulging effect, the proposed inertial reactive armor uses active disc elements that are set to rotate rapidly upon impact to effectively deflect and disrupt shaped charges and kinetic energy penetrators. The effectiveness of the proposed armor depends strongly on the tangential velocity at the impact point on the rotating disc. However, for a single-layer armor with an array of high-speed rotating discs, the tangential velocity is relatively low near the center of each disc and is not available in the gaps between discs. Therefore, it is necessary to configure the armor with double layers to increase the tangential velocity at the point of impact. This paper explores a multi-objective geometry design optimization for the double-layered armor using the Nelder-Mead optimization algorithm and integration tools of the Python programming language. The optimization objectives are to maximize both the average tangential velocity and the high-tangential-velocity area and to minimize the low-tangential-velocity area. The design parameters are the relative position (translation and rotation) of the disc elements between the two armor layers. Compared to the single-layer armor, the optimized design yields a significant increase in the average tangential velocity (38%), an increase in the high-tangential-velocity area (71.3%), and a decrease in the low-tangential-velocity area (86.2%).
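The optimization loop described above can be sketched with SciPy's Nelder-Mead implementation. The scalarized objective below is a hypothetical surrogate, since the paper's tangential-velocity field evaluation is not reproduced here:

```python
# Sketch of a Nelder-Mead search over the relative disc placement (hypothetical objective).
import numpy as np
from scipy.optimize import minimize

def objective(params):
    dx, dy, theta = params                     # relative translation and rotation between layers
    # Placeholder surrogate: reward offsets that cover the inter-disc gaps, penalize extremes.
    avg_vel = np.cos(theta) + 0.5 * np.exp(-((dx - 0.5) ** 2 + (dy - 0.5) ** 2))
    low_vel_area = abs(dx) + abs(dy)
    return -avg_vel + 0.3 * low_vel_area       # minimize negative benefit plus penalty

result = minimize(objective, x0=[0.2, 0.2, 0.1], method="Nelder-Mead")
print(result.x, result.fun)
```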

A Study on Applying Novel Reverse N-Gram for Construction of Natural Language Processing Dictionary for Healthcare Big Data Analysis (헬스케어 분야 빅데이터 분석을 위한 개체명 사전구축에 새로운 역 N-Gram 적용 연구)

  • KyungHyun Lee;RackJune Baek;WooSu Kim
    • The Journal of the Convergence on Culture Technology / v.10 no.3 / pp.391-396 / 2024
  • This study proposes a novel reverse N-Gram approach to overcome the limitations of traditional N-Gram methods and enhance performance in building an entity dictionary specialized for the healthcare sector. The proposed reverse N-Gram technique allows for more precise analysis and processing of the complex linguistic features of healthcare-related big data. To verify the efficiency of the proposed method, big data on healthcare and digital health announced during the Consumer Electronics Show (CES) held each January was collected. Using the Python programming language, 2,185 news titles and summaries mentioned from January 1 to 31 in 2010 and from January 1 to 31 in 2024 were preprocessed with the new reverse N-Gram method. This resulted in the stable construction of a dictionary for natural language processing in the healthcare field.
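The abstract does not give the exact definition of the reverse N-Gram, so the sketch below only illustrates one plausible reading: generating character n-grams from the reversed string and contrasting them with conventional forward n-grams:

```python
# Illustrative contrast between forward character n-grams and n-grams of the reversed string.
def forward_ngrams(text, n=2):
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def reverse_ngrams(text, n=2):
    return forward_ngrams(text[::-1], n)

term = "디지털헬스케어"   # "digital healthcare"
print(forward_ngrams(term))   # ['디지', '지털', '털헬', '헬스', '스케', '케어']
print(reverse_ngrams(term))   # ['어케', '케스', '스헬', '헬털', '털지', '지디']
```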

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.1-19 / 2019
  • Artificial intelligence (AI) is one of the main driving forces leading the Fourth Industrial Revolution. The technologies associated with AI have already shown abilities equal to or better than those of people in many fields, including image and speech recognition. In particular, many efforts have been made to identify current technology trends and analyze their development directions, because AI technologies can be utilized in a wide range of fields, including the medical, financial, manufacturing, service, and education fields. Major platforms for developing complex AI algorithms for learning, reasoning, and recognition have been opened to the public as open source projects. As a result, technologies and services that utilize them have increased rapidly, which has been confirmed as one of the major reasons for the fast development of AI technologies. Additionally, the spread of the technology is greatly indebted to open source software, developed by major global companies, supporting natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing open source software (OSS) projects associated with AI, which have been developed through the online collaboration of many parties. This study searched and collected a list of major projects related to AI that were created from 2000 to July 2018 on Github. This study examined the development trends of major technologies in detail by applying a text mining technique to topic information, which indicates the characteristics of the collected projects and technical fields. The results of the analysis showed that the number of software development projects per year was less than 100 until 2013. However, it increased to 229 projects in 2014 and 597 projects in 2015. In particular, the number of open source projects related to AI increased rapidly in 2016 (2,559 OSS projects). The number of projects initiated in 2017 was 14,213, almost four times the total number of projects created from 2009 to 2016 (3,555 projects). The number of projects initiated from January to July 2018 was 8,737. The development trend of AI-related technologies was evaluated by dividing the study period into three phases. The appearance frequency of topics indicates the technology trends of AI-related OSS projects. The results showed that natural language processing has continued to be at the top in all years, implying that related OSS has been developed continuously. Until 2015, the programming languages Python, C++, and Java were among the ten most frequently appearing topics. After 2016, however, programming languages other than Python disappeared from the top ten topics; instead, platforms supporting the development of AI algorithms, such as TensorFlow and Keras, showed high appearance frequency. Additionally, reinforcement learning algorithms and convolutional neural networks, which have been used in various fields, were frequently appearing topics. The results of topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency. The main difference was that the visualization and medical imaging topics were found at the top of the list, although they were not at the top from 2009 to 2012. This indicates that OSS was developed in the medical field in order to utilize AI technology. Moreover, although computer vision was in the top 10 of the appearance frequency list from 2013 to 2015, it was not in the top 10 by degree centrality. The topics at the top of the degree centrality list were similar to those at the top of the appearance frequency list, with only slight changes in the ranks of convolutional neural networks and reinforcement learning. The trend of technology development was examined using the appearance frequency of topics and degree centrality. The results showed that machine learning had the highest frequency and the highest degree centrality in all years. Moreover, it is noteworthy that, although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its rank increased abruptly between 2013 and 2015; in recent years both technologies have had high appearance frequency and degree centrality. TensorFlow first appeared during the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning did not show an abrupt increase or decrease, and they had relatively low appearance frequency and degree centrality compared with the above-mentioned topics. Based on these analysis results, it is possible to identify the fields in which AI technologies are actively developed. The results of this study can be used as a baseline dataset for further empirical analysis of future technology trends and their convergence.
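The two measures used throughout the analysis above, topic appearance frequency and degree centrality in the topic co-occurrence network, can be sketched with the standard library and networkx. The project-topic lists below are hypothetical stand-ins for the GitHub corpus:

```python
# Sketch of topic appearance frequency and degree centrality on hypothetical project-topic lists.
from collections import Counter
from itertools import combinations
import networkx as nx

projects = [
    ["machine-learning", "tensorflow", "python"],
    ["deep-learning", "keras", "computer-vision"],
    ["machine-learning", "deep-learning", "reinforcement-learning"],
]

# Appearance frequency of each topic across projects.
freq = Counter(topic for topics in projects for topic in topics)

# Topic co-occurrence network and its degree centrality.
G = nx.Graph()
for topics in projects:
    G.add_edges_from(combinations(topics, 2))
centrality = nx.degree_centrality(G)

print(freq.most_common(3))
print(sorted(centrality.items(), key=lambda kv: -kv[1])[:3])
```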

Development of Teaching Model for 'Problem-solving methods and procedures' section in the 2012's revised Informatics curriculum (2012년 신 개정 정보 교육과정의 '문제 해결 방법과 절차' 영역을 위한 수업 모형 개발)

  • Hyun, Tae-Ik;Choi, Jae-Hyuk;Lee, Jong-Hee
    • Journal of the Korea Society of Computer and Information / v.17 no.8 / pp.189-201 / 2012
  • The purpose of this study is to develop an effective teaching model for the "Problem-solving methods and procedures" section of the revised academic high school informatics curriculum, verify its effectiveness, and make the subject more effective and appealing to teachers as well as students. The model includes a middle-school-level informatics curriculum for students who have yet to learn the section. The development follows the ADDIE model, and the Python programming language is adopted for the model. Using the model, classes were conducted with two groups: high school computer club students and undergraduate students majoring in computer education. Of the undergraduate students, 75% responded positively to the model. The model was then applied in actual high school classroom teaching for 23 class-hours in the spring semester of 2012. The Pearson correlation coefficient between the PSI score and the informatics midterm exam grade is .247, which reflects a weak positive correlation. The results show that the developed teaching model is an effective tool for educating students in "problem-solving methods and procedures." The model can serve as a cornerstone of teaching/learning plans for informatics at academic high schools as well as training material for pre-service teachers.
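The correlation check reported above corresponds to a one-line computation with SciPy. The score lists below are hypothetical, since the study's PSI and midterm data are not included in the abstract:

```python
# Sketch of a Pearson correlation check between two score lists (hypothetical data).
from scipy.stats import pearsonr

psi_scores = [72, 65, 80, 58, 90, 77, 62, 85]   # hypothetical problem-solving inventory scores
midterm    = [68, 70, 74, 55, 88, 71, 66, 79]   # hypothetical informatics midterm grades

r, p_value = pearsonr(psi_scores, midterm)
print(f"Pearson r = {r:.3f}, p = {p_value:.3f}")
```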

Proposal of a Hypothesis Test Prediction System for Educational Social Precepts using Deep Learning Models

  • Choi, Su-Youn;Park, Dea-Woo
    • Journal of the Korea Society of Computer and Information / v.25 no.9 / pp.37-44 / 2020
  • AI technology has developed as decision support technology in law, patents, finance, and national defense, and is applied to disease diagnosis and legal judgment. Searching real-time information with deep learning requires big data analysis and deep learning algorithms. In this paper, we try to predict the rate of enrollment in high-ranking universities using a deep learning model, the recurrent neural network (RNN). First, we analyzed the current status of private academies and the number of students by age in each administrative district, and established the socially accepted hypothesis that students residing in areas with high educational fever have a high rate of enrollment in high-ranking universities. This hypothesis is then verified against the analyzed data and the government's public data. The predictive model is trained on data from 2015 to 2017 to predict the top enrollment rate, and the trained model predicts the top enrollment rate in 2018. A prediction experiment was performed using the RNN deep learning model for the high-ranking enrollment rate in the special education zone. In this paper, we characterize the correlation with the high-ranking enrollment rate by analyzing household income, the private-education participation rate, the current status of private institutes in regions with high education fever, and the effect on the number of students by age.
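An RNN regressor of the kind described above can be sketched in Keras as follows; the district count, feature set, and random data are placeholders, not the study's inputs:

```python
# Minimal sketch of an RNN that maps three years of district features to an enrollment rate.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# 50 hypothetical districts, 3 years (2015-2017) of 4 features each -> 2018 enrollment rate.
X = np.random.rand(50, 3, 4).astype("float32")
y = np.random.rand(50, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(3, 4)),
    layers.SimpleRNN(16),
    layers.Dense(1, activation="sigmoid"),   # enrollment rate in [0, 1]
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)
print(model.predict(X[:3]))
```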

A Study of Pedestrian Efficiency in Apartment Complexes - Focused on Pedestrian Paths in Apartment Complexes - (아파트 단지의 보행효율성에 관한 연구 - 단지 내 보행로를 중심으로 -)

  • Yang, Dongwoo;Yu, Sang-Gyun
    • Journal of the Architectural Institute of Korea Planning & Design / v.34 no.11 / pp.85-94 / 2018
  • This study aims to investigate how easily pedestrians can get around within and through the "Apartment Complex (AC)," a common style of high-rise multi-family housing in Korea. Over the past six decades, the AC has been the most conventional way to provide standardized housing efficiently, addressing the problems of housing shortages and substandard housing caused by the explosion of the urban population during rapid industrialization. The AC is a large block of homogeneous multi-family housing, mostly condominiums, with decent infrastructure including parks, pedestrian passages, schools, etc. Both new town development and urban renewal programs have utilized the advantages of the AC. Since the design principles of the AC tend to adopt "protective design" to prevent cars and pedestrians coming from outside from passing through it, it has been criticized for dissecting the continuity of the socioeconomic context of neighborhoods. The neo-traditional planning urbanists, including Jane Jacobs, emphasize that smaller blocks and grid road networks are key to improving the social, cultural, and economic vitality of neighborhoods, because these design concepts allow more pedestrians and different types of people to mix in a neighborhood. In this study, we first adopted objective measures of pedestrian accessibility and pedestrian efficiency. These measures were used to calculate the lengths of the shortest paths from residential buildings to the edges of the AC. We tested the difference in shortest paths between the current pedestrian networks of ACs and hypothetical grid networks overlaid on the ACs, and the relative difference is taken as the pedestrian efficiency, using the network analysis function of Geographic Information Systems (GIS) and Python programming. We found from 30 randomly selected ACs that the existing non-grid road networks in ACs are worse than the hypothesized grid networks in terms of pedestrian efficiency. On average, pedestrians in ACs with conventional road networks have to walk 25%, 26%, and 27% longer than in grid networks of 125 × 45 m, 100 × 45 m, and 75 × 45 m, respectively. A t-test confirmed that the pedestrian efficiency of ACs with conventional networks is lower than that of grid networks. As many new urbanists stress, ease of walking is one of the most important elements for community building and social bonds. Given these findings from the objective measures of pedestrian accessibility and efficiency, the AC has limitations in attracting people from outside into the complex, which increases disconnection from adjacent areas.
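The shortest-path comparison underlying the pedestrian-efficiency measure can be illustrated with networkx in place of the GIS toolchain. The two toy networks below (a 45 m grid and a longer internal loop path) are hypothetical and only show how the relative difference is computed:

```python
# Sketch comparing shortest walking distances on a grid network versus a looping internal path.
import networkx as nx

# Hypothetical grid alternative: 45 m blocks, so the building-to-gate trip is a Manhattan path.
grid = nx.grid_2d_graph(6, 6)
nx.set_edge_attributes(grid, 45.0, "length")
building, gate = (2, 2), (0, 5)
grid_dist = nx.shortest_path_length(grid, building, gate, weight="length")   # 5 * 45 = 225 m

# Hypothetical existing-style network: the same trip served by one chain of longer segments.
existing = nx.Graph()
chain = ["building", "loop1", "loop2", "loop3", "gate"]
existing.add_weighted_edges_from(
    [(a, b, 70.0) for a, b in zip(chain, chain[1:])], weight="length"
)
exist_dist = nx.shortest_path_length(existing, "building", "gate", weight="length")  # 280 m

print(f"grid: {grid_dist:.0f} m, existing: {exist_dist:.0f} m, "
      f"extra walking: {100 * (exist_dist / grid_dist - 1):.1f}%")
```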

Tea Leaf Disease Classification Using Artificial Intelligence (AI) Models (인공지능(AI) 모델을 사용한 차나무 잎의 병해 분류)

  • K.P.S. Kumaratenna;Young-Yeol Cho
    • Journal of Bio-Environment Control / v.33 no.1 / pp.1-11 / 2024
  • In this study, five artificial intelligence (AI) models (Inception v3, SqueezeNet (local), VGG-16, Painters, and DeepLoc) were used to classify tea leaf diseases. Eight image categories were used: healthy, algal leaf spot, anthracnose, bird's eye spot, brown blight, gray blight, red leaf spot, and white spot. The software used in this study was Orange 3, a Python-based visual programming tool that operates through an interface in which workflows are assembled to visually manipulate and analyze data. The precision of each AI model was recorded to select the ideal AI model. All models were trained using the Adam solver, the rectified linear unit activation function, 100 neurons in the hidden layers, a maximum of 200 iterations in the neural network, and a regularization value of 0.0001. To extend the functionality of Orange 3, new add-ons can be installed; in this study, the Image Analytics add-on, which is required for image analysis, was added. For the training model, the import image, image embedding, neural network, test and score, and confusion matrix widgets were used, whereas the import images, image embedding, predictions, and image viewer widgets were used for prediction. The precisions of the neural networks of the five AI models (Inception v3, SqueezeNet (local), VGG-16, Painters, and DeepLoc) were 0.807, 0.901, 0.780, 0.800, and 0.771, respectively. Finally, the SqueezeNet (local) model was selected as the optimal AI model for the detection of tea diseases from tea leaf images owing to its high precision and good performance throughout the confusion matrix.
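Although the study was carried out in the Orange 3 canvas rather than in scripts, the per-model precision values reported above correspond to a standard scikit-learn computation. The labels and predictions below are hypothetical, not the study's data:

```python
# Sketch of computing macro-averaged precision from hypothetical leaf-disease predictions.
from sklearn.metrics import precision_score

y_true = ["healthy", "anthracnose", "gray_blight", "healthy", "brown_blight", "white_spot"]
y_pred = ["healthy", "anthracnose", "gray_blight", "algal_leaf_spot", "brown_blight", "white_spot"]

# Macro-averaged precision over the categories that occur in this toy sample.
print(precision_score(y_true, y_pred, average="macro", zero_division=0))
```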