• Title/Summary/Keyword: Adaptive Optimization

Capacity Optimization of an 802.16e OFDMA/TDD Cellular System Using the Joint Allocation Algorithm of Sub-channel and Transmit Power - Part II: Sub-channel Allocation in the Uplink Using Channel Sounding and an Initial Transmit Power Decision Algorithm According to the User's Throughput (802.16e OFDMA/TDD 셀룰러 시스템의 성능 최적화를 위한 부채널과 전송전력 결합 할당 알고리즘 - Part II : 상향링크에서 Channel Sounding을 통한 부채널 할당 및 사용자의 수율에 따른 초기전송전력 결정 알고리즘)

  • Ko, Sang-Jun;Chang, Kyung-Hi;Kim, Jae-Hyeong
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.9A / pp.888-897 / 2007
  • In this paper, we propose an uplink dynamic resource allocation algorithm to increase sector throughput and fairness among users in the 802.16e OFDMA/TDD system. In the uplink, we address the difference between uplink and downlink channel state information in the 802.16e OFDMA/TDD system. The simulation results show that round-robin scheduling using the FLR and the rate/margin-adaptive inner closed-loop power control algorithm achieves not only a 10% increase in sector throughput but also a higher level of fairness. The FLR algorithm determines the number of sub-channels to allocate to a user according to the user's position. Furthermore, the FASA algorithm, which exploits uplink channel state information, yields 31.8% more sector throughput than round-robin with FLR. The user selection, sub-channel allocation, and power allocation algorithms, as well as the simulation methodology, are described in Part I.

Adaptive Ontology Matching Methodology for an Application Area (응용환경 적응을 위한 온톨로지 매칭 방법론에 관한 연구)

  • Kim, Woo-Ju;Ahn, Sung-Jun;Kang, Ju-Young;Park, Sang-Un
    • Journal of Intelligence and Information Systems / v.13 no.4 / pp.91-104 / 2007
  • Ontology matching is one of the most important techniques in the Semantic Web as well as in other areas. An ontology matching algorithm takes two ontologies as input and finds the matching relations between them, using a set of parameters in the matching process. Ontology matching is very useful in various areas such as the integration of large-scale ontologies, the implementation of intelligent unified search, and the sharing of domain knowledge across applications. In general, the performance of ontology matching is estimated by measuring the matching results, such as precision and recall, regardless of the requirements of the matching environment. Therefore, most research focuses on controlling parameters to optimize precision and recall separately. In this paper, we focus on the balance between precision and recall rather than on the independent performance of each. The purpose of this paper is to propose a methodology that determines the parameters for a desired ratio of precision to recall appropriate to the requirements of the matching environment.
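
One common way to express the kind of precision-recall balance described above is the F-beta measure, which weights recall beta times as much as precision; a matching environment that prizes recall would tune parameters against beta > 1. A minimal sketch (illustrative only, not from the paper):

```python
# F-beta: weighted harmonic mean of precision and recall.
# beta > 1 favors recall; beta < 1 favors precision; beta = 1 is F1.
def f_beta(precision, recall, beta=1.0):
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

A parameter search could then maximize `f_beta` at the beta implied by the application's requirements instead of optimizing precision and recall independently.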

Digital signal change through artificial intelligence machine learning method comparison and learning (인공지능 기계학습 방법 비교와 학습을 통한 디지털 신호변화)

  • Yi, Dokkyun;Park, Jieun
    • Journal of Digital Convergence / v.17 no.10 / pp.251-258 / 2019
  • Artificial intelligence is now used to create products across many fields, so understanding how its learning methods work, and using them correctly, is an important problem. This paper introduces the artificial intelligence learning methods known to date. Learning in artificial intelligence is based on the mathematical fixed-point iteration method. We describe the GD (gradient descent) method, which adjusts the convergence speed based on fixed-point iteration; the Momentum method, which accumulates past gradients; and finally the Adam method, which combines the two. This paper describes the advantages and disadvantages of each method. In particular, the adaptivity of the Adam method controls the learning ability of machine learning. We also analyze how these methods affect digital signals. The changes that occur in digital signals during learning are the basis for accurate application and accurate judgment in future work and research using artificial intelligence.
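
The three update rules compared above can be sketched as follows on the 1-D quadratic f(x) = (x - 3)^2 with gradient 2(x - 3). This is an illustrative toy, not the paper's code; the learning rates and step counts are assumptions chosen for the demonstration.

```python
import math

def grad(x):
    return 2.0 * (x - 3.0)

def gd(x=0.0, lr=0.1, steps=100):
    # plain gradient descent: x <- x - lr * grad(x)
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def momentum(x=0.0, lr=0.1, beta=0.9, steps=200):
    # accumulate past gradients into a velocity term
    v = 0.0
    for _ in range(steps):
        v = beta * v + grad(x)
        x -= lr * v
    return x

def adam(x=0.0, lr=0.1, b1=0.9, b2=0.999, eps=1e-8, steps=200):
    # Adam mixes momentum (first moment) with adaptive per-step
    # scaling from the second moment of the gradient
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        m_hat = m / (1 - b1 ** t)   # bias correction
        v_hat = v / (1 - b2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x
```

All three are fixed-point iterations that converge toward the minimizer x = 3; they differ only in how the step is scaled, which is the adaptivity the abstract highlights for Adam.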

Anisotropic Total Variation Denoising Technique for Low-Dose Cone-Beam Computed Tomography Imaging

  • Lee, Ho;Yoon, Jeongmin;Lee, Eungman
    • Progress in Medical Physics / v.29 no.4 / pp.150-156 / 2018
  • This study aims to develop an improved Feldkamp-Davis-Kress (FDK) reconstruction algorithm using anisotropic total variation (ATV) minimization to enhance the image quality of low-dose cone-beam computed tomography (CBCT). The algorithm first applies a filter that integrates the Shepp-Logan filter into a cosine window function on all projections for impulse noise removal. A total variation objective function with an anisotropic penalty is then minimized to enhance the difference between real structure and noise, using steepest-gradient-descent optimization with adaptive step sizes. The preserving parameter that adjusts the separation between noise-free and noisy areas is determined by calculating the cumulative distribution function of the gradient magnitude of the filtered image obtained by applying the filtering operation to each projection. With these ATV-minimized projections, voxel-driven backprojection is finally performed to generate the reconstructed images. The performance of the proposed algorithm was evaluated on a Catphan 503 phantom dataset acquired using a low-dose protocol. Qualitative and quantitative analyses showed that the proposed ATV minimization provides enhanced CBCT reconstruction images compared with those generated by the conventional FDK algorithm, with a higher contrast-to-noise ratio (CNR), lower root-mean-square error, and higher correlation. The proposed algorithm not only leads to a potential imaging dose reduction in repeated CBCT scans via lower mA levels, but also elicits high CNR values by removing noise-corrupted areas and by avoiding heavy penalization of striking features.

A deep learning-based approach for feeding behavior recognition of weanling pigs

  • Kim, MinJu;Choi, YoHan;Lee, Jeong-nam;Sa, SooJin;Cho, Hyun-chong
    • Journal of Animal Science and Technology / v.63 no.6 / pp.1453-1463 / 2021
  • Feeding is the most important behavior representing the health and welfare of weanling pigs. Early detection of feed refusal is crucial for controlling disease in its initial stages, and detecting empty feeders allows feed to be added in a timely manner. This paper proposes a real-time technique for the detection and recognition of small pigs using a deep-learning-based method. The proposed model focuses on detecting pigs at a feeder in a feeding position. Conventional methods detect pigs and then classify them into different behavior gestures. In contrast, the proposed method combines these two tasks into a single process that detects only feeding behavior, increasing the speed of detection. Considering the significant differences between the behaviors of pigs of different sizes, adaptive adjustments are introduced into a you-only-look-once (YOLO) model, including an angle optimization strategy between the head and body for detecting a head at a feeder. According to the experimental results, the method detects the feeding behavior of pigs and screens out non-feeding positions with average precision (AP) of 95.66%, 94.22%, and 96.56% at an intersection-over-union (IoU) threshold of 0.5 for YOLOv3, YOLOv4, and the model with an additional layer and the proposed activation function, respectively. Drinking behavior was detected with AP of 86.86%, 89.16%, and 86.41% at a 0.5 IoU threshold for YOLOv3, YOLOv4, and the proposed activation function, respectively. In terms of detection and classification, the results of our study demonstrate that the proposed method yields higher precision and recall than conventional methods.
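
The IoU criterion used above scores a detection as correct when its overlap with the ground-truth box, relative to their union, is at least the threshold (0.5 here). A minimal sketch (illustrative, not the paper's code):

```python
def iou(box_a, box_b):
    """Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

Average precision at IoU 0.5 then counts a predicted feeding-position box as a true positive only when `iou(pred, truth) >= 0.5`.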

Simulated Annealing for Overcoming Data Imbalance in Mold Injection Process (사출성형공정에서 데이터의 불균형 해소를 위한 담금질모사)

  • Dongju Lee
    • Journal of Korean Society of Industrial and Systems Engineering / v.45 no.4 / pp.233-239 / 2022
  • The injection molding process heats thermoplastic resin into a fluid state, injects it under pressure into the cavity of a mold, and then cools it in the mold to produce a product identical to the shape of the cavity. It enables mass production and complex shapes, and various factors such as resin temperature, mold temperature, injection speed, and pressure affect product quality. In the data collected at a manufacturing site, there is a lot of data on good products but little data on defective products, resulting in serious data imbalance. To resolve this imbalance efficiently, undersampling, oversampling, and composite sampling are usually applied. In this study, we apply oversampling techniques that amplify minority-class data toward the majority class, such as random oversampling (ROS), SMOTE (synthetic minority oversampling technique), and ADASYN (adaptive synthetic sampling), as well as composite sampling that uses both undersampling and oversampling; for composite sampling, SMOTE+ENN and SMOTE+Tomek were used. Artificial neural network techniques are used to predict product quality. In particular, MLP and RNN models are applied, and various parameters of the MLP and RNN must be optimized. We propose an SA (simulated annealing) technique that optimizes the choice of sampling method, the minority-class ratio for the sampling method, and the batch size and number of hidden-layer units of the MLP and RNN. The existing sampling methods and the proposed SA method were compared using accuracy, precision, recall, and F1 score to demonstrate the superiority of the proposed method.

Soft computing based mathematical models for improved prediction of rock brittleness index

  • Abiodun I. Lawal;Minju Kim;Sangki Kwon
    • Geomechanics and Engineering / v.33 no.3 / pp.279-289 / 2023
  • The brittleness index (BI) is an important property of rocks because it is a good index for predicting rockburst. Due to its importance, several empirical and soft-computing (SC) models have been proposed in the literature based on punch penetration test (PPT) results. These models matter because there is no clear-cut experimental means of measuring BI aside from the PPT, which is very costly and time-consuming to perform. This study used multivariate adaptive regression splines (MARS), M5P, and a white-box ANN to predict the BI of rocks from data available in the literature, for improved BI prediction. The rock density, uniaxial compressive strength (σc), and tensile strength (σt) were used as input parameters to the models, with BI as the target output. The models were implemented in MATLAB. The results of the proposed models were compared with those from existing multilinear-regression, linear and nonlinear particle swarm optimization (PSO), and genetic algorithm (GA) based models using similar datasets. The coefficient of determination (R2), adjusted R2 (Adj R2), root-mean-square error (RMSE), and mean absolute percentage error (MAPE) were the indices used for comparison. The comparison revealed that the proposed ANN and MARS models performed better than the other models, with R2 and Adj R2 values above 0.9 and the lowest error values, while M5P gave performance similar to the existing models. A weight-partitioning method was also used to examine the percentage contribution of the model predictors to the predicted BI; tensile strength was found to have the highest influence on the predicted BI.
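
The comparison indices named above have simple closed forms; a small sketch (illustrative only, not the paper's MATLAB code):

```python
import math

def r2(y_true, y_pred):
    # coefficient of determination: 1 - SS_res / SS_tot
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

def rmse(y_true, y_pred):
    # root-mean-square error
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mape(y_true, y_pred):
    # mean absolute percentage error, in percent
    return 100 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)
```

A model "performs better" in the paper's sense when its R2 (and adjusted R2) is higher and its RMSE and MAPE are lower on the same dataset.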

Optimization of image reconstruction method for dual-particle time-encode imager through adaptive response correction

  • Dong Zhao;Wenbao Jia;Daqian Hei;Can Cheng;Wei Cheng;Xuwen Liang;Ji Li
    • Nuclear Engineering and Technology / v.55 no.5 / pp.1587-1592 / 2023
  • Time-encoded imagers (TEIs) are an important class of instruments for locating radioactive sources to prevent the illicit transportation and trafficking of nuclear materials and other radioactive sources. In practice, the energy of the radiation cannot be known in advance because the type and shielding of the source are unknown, yet the response function of a time-encoded imager depends on the energy of the neutrons or gamma rays. An improved image reconstruction method based on MLEM was therefore proposed to correct for energy-induced response differences. In this method, the count vector versus time is first smoothed. Then, the preset response function is adaptively corrected according to the measured counts. Finally, the smoothed count vector and corrected response are used in MLEM to reconstruct the source distribution. A one-dimensional dual-particle time-encoded imager was developed and used to verify the improved method by imaging an Am-Be neutron source. The improvement was demonstrated by the image reconstruction results: for gamma-ray and neutron images, the angular resolution improved by 17.2% and 7.0%, the contrast-to-noise ratio by 58.7% and 14.9%, and the signal-to-noise ratio by 36.3% and 11.7%, respectively.
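
The core MLEM update the method builds on can be sketched on a tiny toy system (illustrative only; the paper's smoothing and adaptive response correction are omitted). With system-response matrix A, measured counts y, and source estimate x, each iteration applies x_j ← x_j · Σ_i A_ij · y_i / (A x)_i / Σ_i A_ij.

```python
def mlem(A, y, iterations=200):
    """A: list of detector rows; y: measured counts per detector."""
    n_det, n_src = len(A), len(A[0])
    x = [1.0] * n_src                          # uniform initial estimate
    # sensitivity: column sums of the response matrix
    sens = [sum(A[i][j] for i in range(n_det)) for j in range(n_src)]
    for _ in range(iterations):
        # forward-project the current estimate
        proj = [sum(A[i][j] * x[j] for j in range(n_src)) for i in range(n_det)]
        # back-project the measured-to-predicted count ratios
        for j in range(n_src):
            back = sum(A[i][j] * y[i] / proj[i] for i in range(n_det))
            x[j] *= back / sens[j]
    return x
```

The paper's correction amounts to replacing the preset A with one adapted to the measured counts before running this update, so that the assumed response matches the unknown source energy.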

An optimized ANFIS model for predicting pile pullout resistance

  • Yuwei Zhao;Mesut Gor;Daria K. Voronkova;Hamed Gholizadeh Touchaei;Hossein Moayedi;Binh Nguyen Le
    • Steel and Composite Structures / v.48 no.2 / pp.179-190 / 2023
  • Many recent studies have sought accurate prediction of pile pullout resistance (Pul) using classical machine learning models. This study offers an improved methodology for this objective. An adaptive neuro-fuzzy inference system (ANFIS), a popular predictor, is trained by a capable metaheuristic strategy, the equilibrium optimizer (EO), to predict Pul. The data used are collected from laboratory investigations reported in the previous literature. First, two optimal configurations of EO-ANFIS are selected after a sensitivity analysis. They are then evaluated and compared with classical ANFIS and two neural-based models using well-accepted accuracy indicators. The results of all five models were in good agreement with the laboratory Puls (all correlations > 0.99). However, both EO-ANFIS models not only outperformed the neural benchmarks but also achieved higher accuracy than the classical version; utilizing the EO is therefore recommended for optimizing this predictive tool. Furthermore, a comparison between the two selected EO-ANFIS models, one of which employs a larger population, revealed that the model with a population size of 75 is more efficient than the one with 300. The root-mean-square error and the optimization time for EO-ANFIS (75) were 19.6272 and 1715.8 seconds, respectively, versus 23.4038 and 9298.7 seconds for EO-ANFIS (300).

Evaluating LIMU System Quality with Interval Evidence and Input Uncertainty

  • Xiangyi Zhou;Zhijie Zhou;Xiaoxia Han;Zhichao Ming;Yanshan Bian
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.11 / pp.2945-2965 / 2023
  • The laser inertial measurement unit is a precision device widely used in rocket navigation systems and other equipment, and its quality is directly related to navigation accuracy. In the quality evaluation of a laser inertial measurement unit, there is inevitably uncertainty in the index input information. First, the input numerical information is in interval form. Second, the index input grades and the quality evaluation result grades are given according to different national standards, so a key step is to transform the interval information input by each index into a data form consistent with the evaluation result grades. For the case of uncertain input, this paper proposes a probability-distribution-based method to resolve the asymmetry between the reference grades given by the indices and the evaluation result grades when evaluating the quality of a laser inertial measurement unit. By mapping the numerical relationship between the designated reference levels and the evaluation reference levels of the index information under different distributions, index evidence symmetrical with the evaluation reference levels is obtained. After the uncertain input information is transformed into evidence with an interval degree distribution by this method, the interval evidential reasoning algorithm fuses the interval-degree-distribution evidence, and the evaluation result is obtained by optimization with the projection covariance matrix adaptive evolution strategy. Taking a five-meter redundant laser inertial measurement unit as an example, the applicability and effectiveness of the method are verified.