• Title/Summary/Keyword: Training data generation


Walking path design considering with Slope for Mountain Terrain Open space

  • Seul-ki Kang;Ju-won Lee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.10
    • /
    • pp.103-111
    • /
    • 2023
  • Walking in open mountain terrain is important for specialized activities that take place on mountainous ground. Recent research on pedestrian paths incorporates pedestrian and environmental factors through network analysis, but it focuses mainly on well-mapped spaces and leaves out data-poor terrain such as mountains. This paper proposes an architecture that generates walking paths for open mountain terrain, taking slope into account, through a virtual network built from a mesh. Tests verify that the architecture reflects real terrain more effectively when slope is used in measuring distance and that, unlike existing services, it can generate mountain walking paths across open space. The proposed architecture is expected to be useful for generating pedestrian paths over open mountain terrain in situations such as distress, mountain rescue, and tactical training.
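The abstract's core idea, measuring distance with slope over a mesh-based virtual network, can be illustrated with a small sketch. The regular grid, the 30° walkability cutoff, and the function names below are illustrative assumptions, not details taken from the paper; the sketch simply weights each grid edge by its 3-D (slope-aware) length and runs Dijkstra over the resulting network.

```python
import heapq
import math

def slope_aware_path(elev, cell_size, start, goal, max_slope_deg=30.0):
    """Dijkstra over a regular elevation grid; edge cost is the 3-D walking
    distance, and edges steeper than max_slope_deg are treated as impassable."""
    rows, cols = len(elev), len(elev[0])
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1),
             (-1, -1), (-1, 1), (1, -1), (1, 1)]
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            run = cell_size * math.hypot(dr, dc)          # horizontal distance
            rise = elev[nr][nc] - elev[r][c]              # elevation change
            if math.degrees(math.atan2(abs(rise), run)) > max_slope_deg:
                continue                                  # too steep to walk
            nd = d + math.hypot(run, rise)                # slope-aware 3-D distance
            if nd < dist.get((nr, nc), float("inf")):
                dist[(nr, nc)] = nd
                prev[(nr, nc)] = (r, c)
                heapq.heappush(pq, (nd, (nr, nc)))
    # reconstruct the path from goal back to start
    path, node = [], goal
    while node in prev or node == start:
        path.append(node)
        if node == start:
            break
        node = prev[node]
    return list(reversed(path)), dist.get(goal)
```

On a real terrain model the same idea would apply to a triangulated mesh rather than a regular grid, with edge costs computed from the mesh vertices.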

A Study on Machine Learning Algorithms based on Embedded Processors Using Genetic Algorithm (유전 알고리즘을 이용한 임베디드 프로세서 기반의 머신러닝 알고리즘에 관한 연구)

  • So-Haeng Lee;Gyeong-Hyu Seok
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.19 no.2
    • /
    • pp.417-426
    • /
    • 2024
  • In general, the implementation of machine learning requires prior knowledge and experience with deep learning models, and substantial computational resources and time are necessary for data processing. As a result, machine learning encounters several limitations when deployed on embedded processors. To address these challenges, this paper introduces a novel approach where a genetic algorithm is applied to the convolution operation within the machine learning process, specifically for performing a selective convolution operation. In the selective convolution operation, the convolution is executed exclusively on pixels identified by a genetic algorithm. This method selects and computes pixels based on a ratio determined by the genetic algorithm, effectively reducing the computational workload by the specified ratio. The paper thoroughly explores the integration of genetic algorithms into machine learning computations, monitoring the fitness of each generation to ascertain if it reaches the target value. This approach is then compared with the computational requirements of existing methods. The learning process involves iteratively training generations to ensure that the fitness adequately converges.
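As a rough illustration of the selective convolution described above, the sketch below evaluates a 2-D convolution only at pixels flagged by a binary mask and uses a toy genetic algorithm to evolve a mask toward a target sampling ratio. The fitness function, mutation rate, and population size are placeholders; the paper's actual fitness criterion and convergence check are not reproduced here.

```python
import numpy as np

def selective_conv2d(image, kernel, mask):
    """Valid-mode 2-D convolution evaluated only where mask == 1."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            if mask[i, j]:                                 # skip unselected pixels
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def evolve_mask(shape, target_ratio, generations=50, pop=20, rng=None):
    """Toy GA: evolve a binary mask whose density approaches target_ratio."""
    rng = rng or np.random.default_rng(0)
    population = rng.random((pop, *shape)) < 0.5
    def fitness(m):                                        # closer density -> higher fitness
        return -abs(m.mean() - target_ratio)
    for _ in range(generations):
        scores = np.array([fitness(m) for m in population])
        parents = population[np.argsort(scores)[-pop // 2:]]   # keep the best half
        flips = rng.random(parents.shape) < 0.05                # mutate 5% of bits
        children = np.logical_xor(parents, flips)
        population = np.concatenate([parents, children])
    best = max(population, key=fitness)
    return best.astype(np.uint8)

# usage on a tiny random image
img = np.random.rand(8, 8)
kern = np.ones((3, 3)) / 9.0
mask = evolve_mask((6, 6), target_ratio=0.5)
out = selective_conv2d(img, kern, mask)
```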

Analysis and study of Deep Reinforcement Learning based Resource Allocation for Renewable Powered 5G Ultra-Dense Networks

  • Hamza Ali Alshawabkeh
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.1
    • /
    • pp.226-234
    • /
    • 2024
  • The frequent handover problem and ping-pong effects in 5G (5th Generation) ultra-dense networks cannot be effectively resolved by conventional handover decision methods, which rely on handover thresholds and measurement reports. Millimetre-wave LANs, broadband wireless access techniques, and 5G/6G networks are examples of next-generation systems that demand greater security, lower latency, and reliable standards and communication capacity. Effective congestion management is considered one of the critical parts of 5G and 6G technology; with improved quality of service, it enables an operator to run many networking scenarios over a single connection. To guarantee load balancing, prevent network slice failure, and provide substitute slices in the event of congestion or slice failure, a sophisticated decision-making framework for handling arriving network data is required. Our goal is to balance the load on Base Stations (BSs) while optimizing the value of the information transferred from satellites to BSs. However, due to their irregular flight characteristics, some satellites frequently cannot establish a connection with BSs, which further complicates joint satellite-BS connection and channel allocation. SF redistribution techniques based on Deep Reinforcement Learning (DRL) have been devised, taking into account the randomness of the data received by the terminal. A hybrid deep learning algorithm is used in this study to predict the best capacity improvements in the wireless instruments of 5G and 6G IoT networks. To control the level of congestion within a 5G/6G network, the suggested approach is applied to a training set. The suggested method produced encouraging results, with 0.933 accuracy and a 0.067 miss rate.
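The abstract does not spell out the DRL formulation, so the following is only a minimal tabular Q-learning sketch of the general idea: an agent assigns arriving flows to base stations and is rewarded for keeping the per-BS load balanced. The state encoding, reward, and hyperparameters are assumptions made for illustration, not the paper's hybrid deep learning algorithm.

```python
import random
from collections import defaultdict

class ChannelAllocatorQ:
    """Tabular Q-learning sketch: the state is the current per-BS load vector,
    the action is which BS to assign the next arriving flow to."""
    def __init__(self, n_bs, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.n_bs = n_bs
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(lambda: [0.0] * n_bs)

    def act(self, state):
        if random.random() < self.epsilon:            # explore
            return random.randrange(self.n_bs)
        qs = self.q[state]
        return qs.index(max(qs))                      # exploit the best-known action

    def learn(self, state, action, reward, next_state):
        best_next = max(self.q[next_state])
        td_target = reward + self.gamma * best_next
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])

def run_episode(agent, n_flows=100):
    loads = [0] * agent.n_bs
    state = tuple(loads)
    for _ in range(n_flows):
        action = agent.act(state)
        loads[action] += 1
        reward = -(max(loads) - min(loads))           # reward balanced load
        next_state = tuple(loads)
        agent.learn(state, action, reward, next_state)
        state = next_state
    return loads
```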

A high-density gamma white spots-Gaussian mixture noise removal method for neutron images denoising based on Swin Transformer UNet and Monte Carlo calculation

  • Di Zhang;Guomin Sun;Zihui Yang;Jie Yu
    • Nuclear Engineering and Technology
    • /
    • v.56 no.2
    • /
    • pp.715-727
    • /
    • 2024
  • During fast neutron imaging, besides the dark current noise and readout noise of the CCD camera, the main noise comes from high-energy gamma rays generated by neutron nuclear reactions in and around the experimental setup. These high-energy gamma rays produce high-density gamma white spots (GWS) in the fast neutron image. Due to the microscopic quantum characteristics of the neutron beam itself and environmental scattering effects, fast neutron images also typically exhibit Gaussian noise. Existing denoising methods for neutron images struggle with this mixture of GWS and Gaussian noise. Here we put forward a deep learning approach based on the Swin Transformer UNet (SUNet) model to remove high-density GWS-Gaussian mixture noise from fast neutron images. The improved denoising model is trained with a customized loss function that combines perceptual loss and mean squared error loss to avoid the grid-like artifacts caused by using a perceptual loss alone. To address the high cost of acquiring real fast neutron images, this study introduces a Monte Carlo method to simulate noise data with GWS characteristics by computing the interaction between gamma rays and the sensor, based on the mechanism of GWS generation. Experiments on simulated neutron noise images and real fast neutron images demonstrate that the proposed method not only improves the quality and signal-to-noise ratio of fast neutron images but also preserves the details of the original images during denoising.
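The combined objective described above (perceptual loss plus mean squared error) can be sketched in PyTorch as follows. The choice of VGG-16 features as the perceptual backbone, the layer cut-off, and the weighting factor lambda_p are assumptions; the paper's exact loss configuration may differ.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class MixedLoss(nn.Module):
    """MSE + perceptual loss, as a sketch of the combined objective described
    in the abstract; the VGG-16 feature layer and the weight lambda_p are
    illustrative choices, not the paper's exact settings."""
    def __init__(self, lambda_p=0.1):
        super().__init__()
        self.mse = nn.MSELoss()
        # use an early VGG block as a frozen perceptual feature extractor
        self.features = vgg16(weights="IMAGENET1K_V1").features[:9].eval()
        for p in self.features.parameters():
            p.requires_grad = False
        self.lambda_p = lambda_p

    def forward(self, denoised, clean):
        pixel_loss = self.mse(denoised, clean)
        # neutron images are single-channel; repeat to 3 channels for VGG
        d3 = denoised.repeat(1, 3, 1, 1)
        c3 = clean.repeat(1, 3, 1, 1)
        perceptual_loss = self.mse(self.features(d3), self.features(c3))
        return pixel_loss + self.lambda_p * perceptual_loss
```

Combining the pixel-wise term with a feature-space term is what suppresses the grid-like artifacts the abstract attributes to using a perceptual loss alone.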

Deep Learning-Based Prediction of the Quality of Multiple Concurrent Beams in mmWave Band (밀리미터파 대역 딥러닝 기반 다중빔 전송링크 성능 예측기법)

  • Choi, Jun-Hyeok;Kim, Mun-Suk
    • Journal of Internet Computing and Services
    • /
    • v.23 no.3
    • /
    • pp.13-20
    • /
    • 2022
  • IEEE 802.11ay Wi-Fi is a next-generation wireless technology that operates in the mmWave band. It supports MU-MIMO (Multiple User Multiple Input Multiple Output) transmission, in which an AP (Access Point) can transmit multiple data streams simultaneously to multiple STAs (Stations). To this end, the AP should perform MU-MIMO beamforming training with the STAs. For efficient MU-MIMO beamforming training, it is important for the AP to estimate the signal strength measured at each STA when multiple beams are used simultaneously. Therefore, in this paper, we propose a deep learning-based link quality estimation scheme. Our proposed scheme estimates the signal strength with high accuracy by utilizing a deep learning model pre-trained for a given indoor or outdoor propagation scenario. Specifically, to estimate the signal strength of the multiple concurrent beams, our scheme uses the signal strengths of the respective single beams, which can be obtained without additional signaling overhead, as the input of the deep learning model. For performance evaluation, we used the Q-D (Quasi-Deterministic) Channel Realization open-source software, together with extensive channel measurement campaigns conducted with NIST (National Institute of Standards and Technology), to implement the millimeter wave (mmWave) channel. Our simulation results demonstrate that our proposed scheme outperforms the comparison schemes in terms of the accuracy of the signal strength estimation.
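A minimal sketch of the estimation idea: a small fully connected network takes the per-beam signal strengths measured during single-beam transmissions and regresses the signal strengths expected when the beams are used concurrently. The network size and the synthetic training data below are illustrative assumptions, not the model or data set used in the paper.

```python
import torch
import torch.nn as nn

class BeamQualityEstimator(nn.Module):
    """Maps single-beam signal strengths to predicted concurrent multi-beam
    signal strengths; layer sizes are illustrative assumptions."""
    def __init__(self, n_beams):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_beams, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_beams),              # predicted per-beam strength
        )

    def forward(self, single_beam_rssi):
        return self.net(single_beam_rssi)

# toy training loop on synthetic data
model = BeamQualityEstimator(n_beams=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x = torch.randn(256, 4)                          # single-beam measurements (synthetic)
y = x - 3.0 + 0.1 * torch.randn(256, 4)          # assumed degradation under MU-MIMO
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```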

Machine Learning Model to Predict Osteoporotic Spine with Hounsfield Units on Lumbar Computed Tomography

  • Nam, Kyoung Hyup;Seo, Il;Kim, Dong Hwan;Lee, Jae Il;Choi, Byung Kwan;Han, In Ho
    • Journal of Korean Neurosurgical Society
    • /
    • v.62 no.4
    • /
    • pp.442-449
    • /
    • 2019
  • Objective : Bone mineral density (BMD) is an important consideration during fusion surgery. Although dual X-ray absorptiometry is considered the gold standard for assessing BMD, quantitative computed tomography (QCT) provides more accurate data on spinal osteoporosis. However, QCT has the disadvantages of additional radiation exposure and cost. The present study aimed to demonstrate the utility of artificial intelligence and a machine learning algorithm for assessing osteoporosis using Hounsfield units (HU) from preoperative lumbar CT coupled with QCT data. Methods : We reviewed 70 patients undergoing both QCT and conventional lumbar CT for spine surgery. The T-scores of 198 lumbar vertebrae were assessed by QCT, and the HU of the vertebral body at the same level was measured on conventional CT using the picture archiving and communication system (PACS). A multiple regression algorithm was applied to predict the T-score using three independent variables (age, sex, and HU of the vertebral body on conventional CT) coupled with the T-score from QCT. Next, a logistic regression algorithm was applied to classify vertebrae as osteoporotic or non-osteoporotic. TensorFlow and Python were used as the machine learning tools, and a TensorFlow user interface developed at our institute was used for easy code generation. Results : The predictive model with the multiple regression algorithm estimated T-scores similar to the QCT data. HU gave results similar to QCT, with discordance in only one non-osteoporotic vertebra that was classified as osteoporotic. On the training set, the predictive model classified the lumbar vertebrae into two groups (osteoporotic vs. non-osteoporotic spine) with 88.0% accuracy. On a test set of 40 vertebrae, classification accuracy was 92.5% when the learning rate was 0.0001 (precision, 0.939; recall, 0.969; F1 score, 0.954; area under the curve, 0.900). Conclusion : This study presents a simple machine learning model applicable in the spine research field. The machine learning model can predict the T-score and identify osteoporotic vertebrae solely by measuring the HU on conventional CT, which would help spine surgeons avoid underestimating osteoporosis of the spine preoperatively. If applied to a bigger data set, we believe the predictive accuracy of our model will further increase. We propose that machine learning is an important modality in the medical research field.
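The two-step modeling described in the abstract (multiple regression to predict the T-score, then logistic regression to flag osteoporotic vertebrae) can be sketched as follows. The paper used TensorFlow; scikit-learn is used here only for brevity, and the feature values and T-scores are made-up examples, not patient data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Hypothetical feature matrix: [age, sex (0/1), HU of vertebral body]
X = np.array([[72, 0, 95.0],
              [65, 1, 160.0],
              [80, 0, 70.0],
              [58, 1, 210.0]], dtype=float)
t_score = np.array([-2.8, -1.2, -3.4, 0.3])        # illustrative QCT T-scores

# Step 1: multiple regression predicting the T-score from age, sex, and HU
reg = LinearRegression().fit(X, t_score)
predicted_t = reg.predict(X)

# Step 2: logistic regression classifying osteoporotic vertebrae (T-score <= -2.5)
labels = (t_score <= -2.5).astype(int)
clf = LogisticRegression().fit(X, labels)
print(clf.predict([[75, 0, 85.0]]))                 # hypothetical new vertebra
```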

Development of integrated disaster mapping method (I) : expansion and verification of grid-based model (통합 재해지도 작성 기법 개발(I) : 그리드 기반 모형의 확장 및 검증)

  • Park, Jun Hyung;Han, Kun-Yeun;Kim, Byunghyun
    • Journal of Korea Water Resources Association
    • /
    • v.55 no.1
    • /
    • pp.71-84
    • /
    • 2022
  • The objective of this study is to develop a two-dimensional (2D) flood model that can perform accurate flood analysis with simple input data. The 2D flood inundation models currently used to create flood forecast maps require complex input data and grid generation tools. This often demands considerable time and effort for flood modeling, and constructing the input data can be difficult depending on the situation. To compensate for these shortcomings, this study developed a grid-based model that can deliver accurate and rapid flood analysis by reflecting the correct topography from simple input data. Calculation efficiency was improved by extending the existing 2×2 sub-grid model to a 5×5 sub-grid. To examine the accuracy and applicability of the model, it was applied to the Gamcheon Basin, where both urban and river flooding occurred due to Typhoon Rusa. For efficient flood analysis tailored to the user's selection, flood wave propagation patterns, accuracy, and execution time were investigated as functions of grid size and number of sub-grids. The developed model is expected to be highly useful for flood disaster mapping, as it can present flooding analysis results for various situations, from flood inundation maps showing accurate flooding extents to flood risk maps showing only approximate flooding.
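As a loose illustration of grid-based inundation modeling, the toy sketch below spreads water between neighbouring grid cells in proportion to the water-surface-elevation difference. It is not the paper's sub-grid scheme; the exchange coefficient, the per-pair transfer limit, and the time stepping are placeholder assumptions.

```python
import numpy as np

def spread_flood(ground, depth, steps=200, k=0.2):
    """Toy grid-based inundation sketch: water moves between 4-neighbour cells
    in proportion to the water-surface-elevation difference, limited so a cell
    never gives away more than it holds. ground/depth are 2-D arrays in metres."""
    rows, cols = ground.shape
    for _ in range(steps):
        wse = ground + depth                       # water surface elevation
        delta = np.zeros_like(depth)
        for r in range(rows):
            for c in range(cols):
                for nr, nc in ((r + 1, c), (r, c + 1)):
                    if nr >= rows or nc >= cols:
                        continue
                    dh = wse[r, c] - wse[nr, nc]
                    if dh > 0:                     # flow from (r, c) to neighbour
                        q = min(k * dh, depth[r, c] / 4.0)
                    else:                          # flow from neighbour to (r, c)
                        q = -min(k * -dh, depth[nr, nc] / 4.0)
                    delta[r, c] -= q
                    delta[nr, nc] += q
        depth = depth + delta
    return depth
```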

A study on frost prediction model using machine learning (머신러닝을 사용한 서리 예측 연구)

  • Kim, Hyojeoung;Kim, Sahm
    • The Korean Journal of Applied Statistics
    • /
    • v.35 no.4
    • /
    • pp.543-552
    • /
    • 2022
  • When frost occurs, crops are directly damaged. When crops come into contact with low temperatures, their tissues freeze, which hardens and destroys the cell membranes or chloroplasts, or dries the cells out and kills them. In July 2020, sudden sub-zero weather and frost hit the Minas Gerais state of Brazil, the world's largest coffee producer, damaging about 30% of local coffee trees. Coffee prices rose significantly as a result, and severely affected farmers can produce coffee again only after about three years, once the trees recover, so long-term damage is expected. In this paper, we tried to predict frost using frost occurrence data and weather observation data provided by the Korea Meteorological Administration in order to prevent severe frost damage. A model was constructed reflecting weather factors such as wind speed, temperature, humidity, precipitation, and cloudiness. For the XGB (eXtreme Gradient Boosting), SVM (Support Vector Machine), Random Forest, and MLP (Multi Layer Perceptron) models, various hyperparameters were tried on the training data to select the best configuration for each model. Finally, the results were evaluated on the test data using accuracy (acc) and the CSI (Critical Success Index). XGB was the best model, with 90.4% accuracy and 64.4% CSI, followed by SVM with 89.7% accuracy and 61.2% CSI. Random Forest and MLP showed similar performance, with about 89% accuracy and about 60% CSI.
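The model comparison described above can be sketched with scikit-learn, evaluating each classifier by accuracy and the Critical Success Index, CSI = TP / (TP + FN + FP). GradientBoostingClassifier stands in for XGBoost to keep the example dependency-light, and the weather data are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

def csi(y_true, y_pred):
    """Critical Success Index = TP / (TP + FN + FP) for the 'frost' class."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return tp / (tp + fn + fp)

# synthetic stand-in for weather features: wind speed, temperature, humidity, precipitation, cloudiness
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 1] < -0.5).astype(int)                  # toy frost label driven by temperature
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "GBM (stand-in for XGB)": GradientBoostingClassifier(),
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(),
    "MLP": MLPClassifier(max_iter=500),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f}, CSI={csi(y_te, pred):.3f}")
```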

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.163-177
    • /
    • 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks for recognizing personal activities of smartphone users from multimodal data have been actively studied recently. The research area is expanding from the recognition of the simple body movements of an individual user to the recognition of low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data, whereas physical sensors, including accelerometer, magnetic field, and gyroscope sensors, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a method for detecting accompanying status based on a deep learning model using only multimodal physical sensor data, such as accelerometer, magnetic field, and gyroscope data, is proposed. The accompanying status was defined as a redefinition of part of the user interaction behavior, covering whether the user is accompanied by an acquaintance at a close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation was proposed. First, a data preprocessing method consisting of time synchronization of multimodal data from different physical sensors, data normalization, and sequence data generation was introduced. We applied nearest-neighbor interpolation to synchronize the timestamps of data collected from different sensors. Normalization was performed for each x, y, z axis value of the sensor data, and the sequence data were generated according to the sliding window method. The sequence data then became the input to the CNN, where feature maps representing local dependencies of the original sequence are extracted. The CNN consisted of 3 convolutional layers and did not have a pooling layer, in order to maintain the temporal information of the sequence data. Next, LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features. The LSTM recurrent networks consisted of two layers, each with 128 cells. Finally, the extracted features were used for classification by a softmax classifier. The loss function of the model was the cross-entropy function, and the weights of the model were randomly initialized from a normal distribution with a mean of 0 and a standard deviation of 0.1. The model was trained using the adaptive moment estimation (ADAM) optimization algorithm, and the mini-batch size was set to 128. We applied dropout to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001, and it decreased exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data. We collected smartphone data from a total of 18 subjects. Using the data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and accuracy of the model were higher than those of the majority vote classifier, support vector machine, and deep recurrent neural network.
In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that allow models trained on the training data to be transferred to evaluation data that follows a different distribution. We expect to obtain a model that exhibits robust recognition performance against changes in the data that were not considered during model training.
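The architecture is described in enough detail to sketch: three convolutional layers without pooling, two LSTM layers of 128 cells, dropout on the LSTM input, a softmax classifier trained with cross-entropy and Adam, and a learning rate decayed by 0.99 per epoch. The sketch below follows that description in PyTorch; filter counts, kernel sizes, and the dropout rate are assumptions the abstract does not specify.

```python
import torch
import torch.nn as nn

class AccompanyNet(nn.Module):
    """CNN + LSTM sketch following the abstract: three 1-D conv layers with no
    pooling, two LSTM layers with 128 cells, dropout on the LSTM input, and a
    softmax classifier. Filter counts and kernel sizes are assumptions."""
    def __init__(self, n_channels=9, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.dropout = nn.Dropout(0.5)
        self.lstm = nn.LSTM(input_size=64, hidden_size=128,
                            num_layers=2, batch_first=True)
        self.fc = nn.Linear(128, n_classes)    # softmax applied via CrossEntropyLoss

    def forward(self, x):                      # x: (batch, channels, time)
        feats = self.cnn(x)                    # (batch, 64, time)
        feats = self.dropout(feats.transpose(1, 2))   # (batch, time, 64)
        out, _ = self.lstm(feats)
        return self.fc(out[:, -1, :])          # classify from the last time step

model = AccompanyNet()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)  # per-epoch decay
criterion = nn.CrossEntropyLoss()              # cross-entropy objective; mini-batch size 128 per the abstract
```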

Genetic Analysis Strategies for Improving Race Performance of Thoroughbred Racehorse and Jeju Horse (서러브레드 경주마와 제주마의 경주 능력 향상을 위한 유전체 분석 전략)

  • Baek, Kyung-Wan;Gim, Jeong-An;Park, Jung-Jun
    • Journal of Life Science
    • /
    • v.28 no.1
    • /
    • pp.130-139
    • /
    • 2018
  • In ancient times, horse racing took place in European countries in the form of wagon races or mountain races, and wagon racing was adopted as a regular event at the Greek Olympic Games. The Thoroughbred horse has been bred since the 17th century through intensive selective breeding for speed, stamina, and racing ability. In the 18th century, horse racing using the Thoroughbred began to gain popularity among nobles. Since then, horse racing has developed into various forms in different countries, including flat racing, steeplechasing, and harness racing. The Thoroughbred racehorse has excellent racing ability because of a powerful selective breeding strategy applied over 300 years. It is necessary to maintain and maximize horses' racing ability, because horse industries produce enormous economic benefits through breeding, training, and racing. Next-generation sequencing (NGS) methods that process large amounts of genomic data have been developed recently. Based on the remarkable development of these genomic analysis techniques, it is now possible to easily carry out breeding strategies for animals with superior traits. In order to select breeding racehorses with superior racing traits, the latest genomic analysis techniques have to be introduced. In this paper, we review current efforts to improve race performance in racehorses and examine research trends in genomic analysis. Finally, we suggest utilizing genomic analysis for the Thoroughbred racehorse and the Jeju horse, and propose a selective breeding strategy for the Jeju horse, which would contribute to job creation in Korea.