• Title/Abstract/Keyword: Conventional machine learning

Search results: 282

Machine Learning Based Neighbor Path Selection Model in a Communication Network

  • Lee, Yong-Jin
    • International journal of advanced smart convergence
    • /
    • Vol. 10, No. 1
    • /
    • pp.56-61
    • /
    • 2021
  • Neighbor path selection pre-selects alternate routes in case geographically correlated failures occur simultaneously on the communication network. Conventional heuristic-based algorithms no longer improve solutions because they cannot sufficiently utilize historical failure information. We present a novel solution model for neighbor path selection using a machine learning technique. Our proposed machine learning neighbor path selection (ML-NPS) model is composed of five modules: random graph generation, data set creation, machine learning modeling, neighbor path prediction, and path information acquisition. It is implemented in Python with Keras on TensorFlow and executed on a Raspberry Pi 4B single-board computer. Performance evaluations via numerical simulation show that the neighbor path communication success probability of our model is better than that of the conventional heuristic by 26% on average.
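
The abstract describes, but does not show, the "machine learning modeling" module. The following is a minimal, hypothetical Keras sketch of such a module: a small binary classifier scoring candidate neighbor paths. The feature count, labels, and synthetic data are assumptions, not details from the paper.

```python
# Hypothetical sketch of the ML-NPS modeling step; the real feature
# encoding and labels are not specified in the abstract.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.random((1000, 8))                 # assumed: 8 features per candidate path
y = (X[:, 0] + X[:, 1] > 1).astype(int)   # assumed: 1 = path survives failures

model = keras.Sequential([
    keras.layers.Input(shape=(8,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # path survival probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```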

An Approach to Applying Multiple Linear Regression Models by Interlacing Data in Classifying Similar Software

  • Lim, Hyun-il
    • Journal of Information Processing Systems
    • /
    • Vol. 18, No. 2
    • /
    • pp.268-281
    • /
    • 2022
  • The development of information technology is bringing many changes to everyday life, and machine learning can be used as a technique to solve a wide range of real-world problems. Analysis and utilization of data are essential processes in applying machine learning to such problems. As a method of processing data in machine learning, we propose an approach that applies multiple linear regression models, trained on interlaced data, to the task of classifying similar software. Linear regression is widely used in estimation problems to model the relationship between input and output data. In our approach, multiple linear regression models are generated by training on interlaced feature data, and a combination of these models is then used as the prediction model for classifying similar software. Experiments comparing the proposed approach with conventional linear regression show that the proposed method classifies similar software more accurately than the conventional model. We anticipate that the proposed approach can be applied to various kinds of classification problems to improve the accuracy of conventional linear regression.
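
The paper's exact interlacing scheme is not given in the abstract. As one plausible reading, here is a minimal sketch in which several linear regressors are trained on interleaved (every k-th) feature columns and their predictions are averaged; the column-striping rule and the synthetic data are assumptions.

```python
# Hypothetical sketch: multiple linear regression models trained on
# interlaced (every k-th) feature columns, combined by averaging.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.random((200, 12))                        # assumed feature matrix
y = X @ rng.random(12) + 0.1 * rng.random(200)   # assumed target values

k = 3  # number of interlaced models (assumption)
models = []
for i in range(k):
    cols = np.arange(i, X.shape[1], k)   # every k-th feature, offset by i
    models.append((cols, LinearRegression().fit(X[:, cols], y)))

# Combine the k models by averaging their predictions
pred = np.mean([m.predict(X[:, cols]) for cols, m in models], axis=0)
```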

Centralized Machine Learning Versus Federated Averaging: A Comparison using MNIST Dataset

  • Peng, Sony;Yang, Yixuan;Mao, Makara;Park, Doo-Soon
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 16, No. 2
    • /
    • pp.742-756
    • /
    • 2022
  • A flood of information has occurred with the rise of the internet and digital devices in the fourth industrial revolution era. Every millisecond, massive amounts of structured and unstructured data are generated; smartphones, wearable devices, sensors, and self-driving cars are just a few examples of the devices that generate massive amounts of data in our daily lives. Machine learning has been adopted to recognize patterns in data across many sectors, including healthcare, government, banking, the military, and more. However, the conventional machine learning model requires data owners to upload their information to one central location for model training. This classical model has caused data owners to worry about the risks of transferring private information, because traditional machine learning requires pushing their data to the cloud for training. Furthermore, training machine learning and deep learning models requires massive computing resources. Thus, many researchers have turned to a new paradigm known as "Federated Learning". Federated learning trains artificial intelligence models over distributed clients while preserving the privacy of data owners. Hence, this paper implements Federated Averaging with a deep neural network to classify handwritten digit images while protecting sensitive data, and compares the centralized machine learning model with federated averaging. The results show that the centralized model outperforms federated learning in terms of accuracy, but the classical model carries additional risks, such as privacy concerns, because the data are stored in a central data center. The MNIST dataset was used in this experiment.
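
Federated Averaging (FedAvg) is a published algorithm, so its server-side step can be sketched directly: a weighted average of client model parameters, with weights proportional to client data sizes. The toy clients below are illustrative only.

```python
# Minimal sketch of the FedAvg server step: weighted average of
# client parameters, weights proportional to client sample counts.
import numpy as np

def fedavg(client_weights, client_sizes):
    """client_weights: one list of ndarrays (layer parameters) per client."""
    total = sum(client_sizes)
    averaged = []
    for layer in zip(*client_weights):   # same layer across all clients
        averaged.append(sum(w * (n / total) for w, n in zip(layer, client_sizes)))
    return averaged

# Toy example: two clients, one weight matrix each
c1 = [np.ones((2, 2))]
c2 = [np.zeros((2, 2))]
print(fedavg([c1, c2], client_sizes=[30, 10])[0])  # -> 0.75 everywhere
```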

Improvement of Activity Recognition Based on Learning Model of AI and Wearable Motion Sensors

  • 안정욱;강운구;이영호;이병문
    • Journal of Korea Multimedia Society
    • /
    • Vol. 21, No. 8
    • /
    • pp.982-990
    • /
    • 2018
  • In recent years, many wearable devices and mobile apps related to life care have been developed, and services that measure movement during walking and report the amount of exercise are provided. However, they do not measure walking in detail, so there may be errors in the total calorie consumption. If the user's behavior is measured by a multi-axis sensor and a machine learning algorithm is trained to recognize the kind of behavior, the detailed sub-activities of walking can be distinguished automatically and the total calorie consumption can be calculated more accurately than with the conventional method. To verify this, we measured activities and created a model using a machine learning algorithm. Comparison experiments confirmed that the average recognition accuracy was at least 12.5% higher than that of the conventional method, and that calorie consumption accuracy improved by 49.53% or more over the conventional method. If activity recognition is performed using a wearable device and a machine learning algorithm, both the recognition accuracy and the energy consumption calculation accuracy can be improved.
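
As a rough illustration of the described pipeline, here is a hypothetical sketch: windowed multi-axis sensor samples are reduced to per-axis statistics and fed to a classifier that labels walking sub-activities. The window size, features, classifier, and data are all assumptions, not the paper's setup.

```python
# Hypothetical sketch: per-window mean/std of 3-axis motion data as
# features for a sub-activity classifier. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
signal = rng.random((6000, 3))        # assumed: 3-axis sensor samples
labels = rng.integers(0, 3, 60)       # assumed: one label per 100-sample window

windows = signal.reshape(60, 100, 3)  # segment into fixed-size windows
feats = np.hstack([windows.mean(axis=1), windows.std(axis=1)])  # 6 features

clf = RandomForestClassifier(random_state=0).fit(feats, labels)
```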

Optimization of Fuzzy Learning Machine by Using Particle Swarm Optimization

  • 노석범;왕계홍;김용수;안태천
    • Journal of Korean Institute of Intelligent Systems
    • /
    • Vol. 26, No. 1
    • /
    • pp.87-92
    • /
    • 2016
  • In this paper, the Particle Swarm Optimization (PSO) algorithm is used to optimize a fuzzy Extreme Learning Machine, which combines the Extreme Learning Machine, a network that dramatically improves on the slow learning speed of conventional neural networks, with fuzzy theory, which can describe the linguistic information of experts. Instead of the usual sigmoid function, the activation level function of the fuzzy C-means clustering algorithm is used as the activation function of the fuzzy Extreme Learning Machine, and the parameters of this activation function are optimized with the PSO algorithm. The classification performance of the optimized model is evaluated on a variety of machine learning data sets.
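
PSO itself is a standard algorithm, so the optimization loop can be sketched: particles search a parameter space, standing in here for the activation-function parameters of the fuzzy Extreme Learning Machine. The objective below is a placeholder, not the paper's validation error.

```python
# Minimal PSO sketch: optimize a parameter vector against a stand-in
# objective (the real objective would be the fuzzy ELM's error).
import numpy as np

rng = np.random.default_rng(3)

def objective(theta):                 # placeholder fitness function
    return np.sum((theta - 0.5) ** 2)

n, dim, w, c1, c2 = 20, 4, 0.7, 1.5, 1.5
pos = rng.random((n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(50):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()  # best parameters so far
```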

Performance Comparison of Machine Learning Algorithms for Received Signal Strength-Based Indoor LOS/NLOS Classification of LTE Signals

  • Lee, Halim;Seo, Jiwon
    • Journal of Positioning, Navigation, and Timing
    • /
    • Vol. 11, No. 4
    • /
    • pp.361-368
    • /
    • 2022
  • An indoor navigation system that utilizes long-term evolution (LTE) signals has the benefits of no additional infrastructure installation expenses and low base station database management costs. Among LTE signal measurements, received signal strength (RSS) is particularly appealing because it can be easily obtained with mobile devices. Propagation channel models can be used to estimate the position of mobile devices from RSS. However, conventional channel models have a shortcoming in that they do not discriminate between line-of-sight (LOS) and non-line-of-sight (NLOS) conditions of the received signal. Accordingly, a previous study suggested separate LOS and NLOS channel models but did not devise a method for determining LOS and NLOS conditions. In this study, a machine learning-based LOS/NLOS classification method using RSS measurements is developed. We suggest several machine learning features and evaluate various machine learning algorithms. In indoor experiments, up to 87.5% classification accuracy was achieved with an ensemble algorithm, and range estimation with an average error of 13.54 m was demonstrated, a 25.3% improvement over the conventional channel model.
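
The abstract does not list the exact features; as one hypothetical setup, per-window RSS statistics (mean, standard deviation, range) feed an ensemble classifier. The data, features, and choice of ensemble below are assumptions.

```python
# Hypothetical sketch: RSS-window statistics as features for an
# ensemble LOS/NLOS classifier. Data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
rss = rng.normal(-80, 5, (300, 50))   # assumed: 50 RSS samples per window (dBm)
y = rng.integers(0, 2, 300)           # 1 = LOS, 0 = NLOS (synthetic labels)

X = np.column_stack([rss.mean(1), rss.std(1), rss.max(1) - rss.min(1)])
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(Xtr, ytr)
print(clf.score(Xte, yte))            # classification accuracy on held-out windows
```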

Research Trends in Wi-Fi Performance Improvement in Coexistence Networks with Machine Learning

  • 강영명
    • Journal of Platform Technology
    • /
    • Vol. 10, No. 3
    • /
    • pp.51-59
    • /
    • 2022
  • Machine learning, which has recently advanced dramatically, has become an important technology for solving a variety of optimization problems. This paper surveys recent studies that apply machine learning to the channel-sharing problem in heterogeneous (coexistence) networks, analyzes the characteristics of the main techniques, and offers guidance on future research directions. Existing studies have mostly used Q-learning, which enables fast online and offline learning. On the other hand, they often did not consider diverse coexistence scenarios, and gave only limited consideration to the placement of the machine learning controller, which can strongly affect network performance. A promising way to overcome these shortcomings is to select machine learning algorithms adaptively as the network environment changes, based on the logical network architecture for machine learning proposed by the ITU.
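
Since the surveyed works mostly rely on Q-learning, a minimal tabular Q-learning sketch for a channel-access decision may help fix ideas. The state space, actions, and reward below are invented stand-ins, not taken from any surveyed paper.

```python
# Minimal tabular Q-learning sketch for a channel-access decision
# (states = channel occupancy level, actions = 0: defer, 1: transmit).
import numpy as np

rng = np.random.default_rng(5)
n_states, n_actions, alpha, gamma, eps = 4, 2, 0.1, 0.9, 0.1
Q = np.zeros((n_states, n_actions))

s = 0
for _ in range(5000):
    # epsilon-greedy action selection
    a = rng.integers(n_actions) if rng.random() < eps else Q[s].argmax()
    # Synthetic reward: transmitting on a lightly occupied channel pays off
    r = 1.0 if (a == 1 and s < 2) else (-1.0 if a == 1 else 0.0)
    s_next = rng.integers(n_states)
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next
```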

Machine Learning Model to Predict Osteoporotic Spine with Hounsfield Units on Lumbar Computed Tomography

  • Nam, Kyoung Hyup;Seo, Il;Kim, Dong Hwan;Lee, Jae Il;Choi, Byung Kwan;Han, In Ho
    • Journal of Korean Neurosurgical Society
    • /
    • Vol. 62, No. 4
    • /
    • pp.442-449
    • /
    • 2019
  • Objective: Bone mineral density (BMD) is an important consideration in fusion surgery. Although dual X-ray absorptiometry is considered the gold standard for assessing BMD, quantitative computed tomography (QCT) provides more accurate data on spinal osteoporosis. However, QCT has the disadvantages of additional radiation exposure and cost. The present study demonstrates the utility of artificial intelligence and machine learning algorithms for assessing osteoporosis using Hounsfield units (HU) from preoperative lumbar CT coupled with QCT data. Methods: We reviewed 70 patients undergoing both QCT and conventional lumbar CT for spine surgery. The T-scores of 198 lumbar vertebrae were assessed by QCT, and the HU of the vertebral body at the same level were measured on conventional CT using a picture archiving and communication system (PACS). A multiple regression algorithm was applied to predict the T-score using three independent variables (age, sex, and HU of the vertebral body on conventional CT) coupled with the T-score from QCT. Next, a logistic regression algorithm was applied to classify vertebrae as osteoporotic or non-osteoporotic. TensorFlow and Python were used as the machine learning tools, and a TensorFlow user interface developed at our institute was used for easy code generation. Results: The predictive model with the multiple regression algorithm estimated T-scores similar to the QCT data; the HU-based predictions agreed with QCT except for discordance in a single vertebra, which was non-osteoporotic but indicated as osteoporotic. From the training set, the predictive model classified the lumbar vertebrae into two groups (osteoporotic vs. non-osteoporotic) with 88.0% accuracy. In a test set of 40 vertebrae, classification accuracy was 92.5% with a learning rate of 0.0001 (precision, 0.939; recall, 0.969; F1 score, 0.954; area under the curve, 0.900). Conclusion: This study presents a simple machine learning model applicable to the spine research field. The model can predict the T-score and identify osteoporotic vertebrae solely by measuring the HU on conventional CT, which would help spine surgeons avoid underestimating the osteoporotic spine preoperatively. If applied to a bigger data set, we believe the predictive accuracy of our model will further increase. We propose that machine learning is an important modality in the medical research field.
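
The two modeling steps described (multiple regression for the T-score, then logistic regression for the osteoporosis label) can be sketched with scikit-learn on synthetic data. The coefficients and data below are invented; only the T-score ≤ -2.5 osteoporosis threshold is a real clinical convention.

```python
# Hypothetical sketch of the two-step pipeline: (1) multiple regression
# predicting T-score from (age, sex, HU); (2) logistic regression for
# the osteoporotic / non-osteoporotic label. Data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(6)
age = rng.integers(40, 85, 198).astype(float)
sex = rng.integers(0, 2, 198).astype(float)   # 0 = male, 1 = female (assumption)
hu = rng.normal(150.0, 50.0, 198)             # HU of the vertebral body
X = np.column_stack([age, sex, hu])
t = 0.02 * (hu - 150) - 0.02 * (age - 60) - 2.3 + 0.3 * rng.standard_normal(198)

reg = LinearRegression().fit(X, t)                       # step 1: T-score
osteo = (t <= -2.5).astype(int)                          # WHO threshold
clf = LogisticRegression(max_iter=1000).fit(X, osteo)    # step 2: label
```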

Study on Automatic Bug Triage using Deep Learning

  • 이선로;김혜민;이찬근;이기성
    • Journal of KIISE
    • /
    • Vol. 44, No. 11
    • /
    • pp.1156-1164
    • /
    • 2017
  • Most existing studies on automatic bug triage have built prediction systems based on machine learning algorithms. Applying a high-performance machine learning model is therefore key to the performance of an automatic assignment system, and related work has mainly used models with strong performance, such as SVM and Naive Bayes. In this paper, we apply deep learning, which has recently shown good performance in the machine learning field, to automatic bug triage and evaluate its performance. Experimental results show that the deep learning-based bug triage system achieved 48% accuracy on active developers, an improvement of up to 69% over conventional machine learning.
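
The abstract does not specify the network architecture; one hypothetical reading of deep-learning-based triage is a small Keras text classifier mapping tokenized bug reports to developer IDs. The vocabulary size, layers, and data below are assumptions.

```python
# Hypothetical sketch: tokenized bug-report text -> developer assignment
# with a small embedding classifier. Data are synthetic.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(7)
reports = rng.integers(1, 1000, (500, 50))   # assumed: token IDs per report
devs = rng.integers(0, 20, 500)              # assumed: 20 candidate developers

model = keras.Sequential([
    keras.layers.Embedding(input_dim=1000, output_dim=32),
    keras.layers.GlobalAveragePooling1D(),          # report-level vector
    keras.layers.Dense(20, activation="softmax"),   # developer probabilities
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(reports, devs, epochs=3, verbose=0)
```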

Comparative Application of Various Machine Learning Techniques for Lithology Predictions

  • 정진아;박은규
    • Journal of Soil and Groundwater Environment
    • /
    • Vol. 21, No. 3
    • /
    • pp.21-34
    • /
    • 2016
  • In the present study, we comparatively applied various machine learning techniques to the prediction of subsurface structures based on multiple types of secondary information (i.e., well-logging data). The machine learning techniques employed are Naive Bayes classification (NB), artificial neural networks (ANN), support vector machines (SVM), and logistic regression classification (LR). As alternative models, the conventional hidden Markov model (HMM) and a modified hidden Markov model (mHMM) are used, in which additional information on the transition probability between primary properties is incorporated in the predictions. For the comparisons, 16 boreholes composed of four different materials are synthesized, showing directional non-stationarity in the upward and downward directions. Furthermore, two types of secondary information statistically related to each material are generated. The comparative analysis across various case studies shows that the accuracies of the techniques degrade with the inclusion of additive errors and with small amounts of training data. For the HMM predictions, the conventional HMM shows accuracies similar to those of the models that do not rely on transition probability. However, the mHMM consistently shows the highest prediction accuracy among the test cases, which can be attributed to the consideration of geological nature in the training of the model.
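
As a sketch of the comparative setup (not the paper's synthetic boreholes), the four non-HMM classifiers named in the abstract can be scored on identical features with scikit-learn. The synthetic well-log features and lithology labels below are assumptions.

```python
# Hypothetical sketch: the same synthetic secondary (well-log) features
# scored by the four compared classifiers: NB, ANN, SVM, and LR.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
X = rng.random((400, 2))      # assumed: two secondary well-log measurements
y = rng.integers(0, 4, 400)   # four lithology classes (synthetic)

for name, clf in [("NB", GaussianNB()),
                  ("ANN", MLPClassifier(max_iter=500)),
                  ("SVM", SVC()),
                  ("LR", LogisticRegression(max_iter=500))]:
    print(name, cross_val_score(clf, X, y, cv=3).mean())  # mean CV accuracy
```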