• Title/Summary/Keyword: Intelligence Optimization


Match Field based Algorithm Selection Approach in Hybrid SDN and PCE Based Optical Networks

  • Selvaraj, P.; Nagarajan, V.
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.12, pp.5723-5743, 2018
  • Evolving internet-based services demand high-speed data transmission in conjunction with scalability. The next-generation optical network has to exploit artificial intelligence and cognitive techniques to cope with these emerging requirements. This work proposes a novel way to solve the dynamic provisioning problem in optical networks. Provisioning in an optical network involves computing routes and reserving wavelengths (Routing and Wavelength Assignment, RWA). This is an extensively studied multi-objective optimization problem whose complexity is known to be NP-complete. Since exact algorithms incur long running times, heuristic approaches have been widely preferred for this problem. Recently, software-defined networking has changed the way optical pipes are configured and monitored. This work proposes the dynamic selection of path computation algorithms in response to changing service requirements and network scenarios. A software-defined controller mechanism with a novel packet-matching feature is proposed to dynamically match traffic demands with the appropriate algorithm. A software-defined controller with a Path Computation Element (PCE) was created in the ONOS (Open Network Operating System) tool, and a simulation study of dynamic path establishment was performed in this ONOS-based software-defined controller environment. A Java-based NOX controller was configured with a parent path computation element, and child path computation elements running different path computation algorithms were placed under its control. The use case of dynamic bulk path creation was considered, the algorithm selection method was compared with the existing single-algorithm method, and the results are analyzed. (An illustrative dispatch sketch follows below.)
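
A rough illustration of the algorithm-selection idea described above: a match-field-style dispatch that routes an incoming traffic demand to one of several path computation heuristics. This is a minimal Python sketch under assumed names and thresholds (Demand, select_algorithm, the 100 Gbps cutoff); it is not the authors' ONOS/PCE implementation.

```python
# Hypothetical match-field -> algorithm dispatch; names, fields, and thresholds
# are invented for illustration and do not come from the paper.
from dataclasses import dataclass

@dataclass
class Demand:
    src: str
    dst: str
    bandwidth_gbps: float   # requested capacity
    bulk: bool              # part of a bulk path-creation request?

def shortest_path_rwa(demand):
    """Placeholder for a shortest-path-first RWA heuristic."""
    return f"SPF route for {demand.src}->{demand.dst}"

def load_balanced_rwa(demand):
    """Placeholder for a least-congested-path RWA heuristic."""
    return f"Load-balanced route for {demand.src}->{demand.dst}"

def bulk_batch_rwa(demand):
    """Placeholder for a heuristic tuned to bulk path creation."""
    return f"Bulk-optimized route for {demand.src}->{demand.dst}"

def select_algorithm(demand: Demand):
    """Match the demand's fields to a child-PCE algorithm."""
    if demand.bulk:
        return bulk_batch_rwa
    if demand.bandwidth_gbps >= 100:     # large flows: avoid congested links
        return load_balanced_rwa
    return shortest_path_rwa             # default: cheapest computation

if __name__ == "__main__":
    d = Demand("nodeA", "nodeZ", bandwidth_gbps=400, bulk=False)
    print(select_algorithm(d)(d))
```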

Technology of Lessons Learned Analysis using Artificial intelligence: Focused on the 'L2-OODA Ensemble Algorithm' (인공지능형 전훈분석기술: 'L2-OODA 앙상블 알고리즘'을 중심으로)

  • Yang, Seong-sil; Shin, Jin
    • Convergence Security Journal, v.21 no.2, pp.67-79, 2021
  • Lessons Learned (LL) is a military term for all activities that promote future development by identifying problems and areas needing improvement, in both training and actual operations, in the field of warfare development. In this paper, we focus on presenting concrete examples and applying AI-based analysis and inference techniques to solve the problems encountered in LL activities, such as long analysis times, budget constraints, and the expertise required. AI legal-advice services built on cognitive-computing technologies, which are already practical and in use, were judged to be the best example for solving the problems of LL. This paper presents intelligent LL inference techniques that utilize AI. To this end, we explore theoretical background such as definitions and examples of LL analysis and the evolution of AI into machine learning and cognitive computing, and apply it to new technologies in the defense sector through the newly proposed L2-OODA ensemble algorithm, contributing to existing force improvement and optimization.

Research on Deep Learning Performance Improvement for Similar Image Classification (유사 이미지 분류를 위한 딥 러닝 성능 향상 기법 연구)

  • Lim, Dong-Jin; Kim, Taehong
    • The Journal of the Korea Contents Association, v.21 no.8, pp.1-9, 2021
  • Deep learning in computer vision has improved rapidly over a short period, but large-scale training data and computing power remain essential, and time-consuming trial-and-error work is still required to derive an optimal network model. In this study, we propose a method for improving similar-image classification performance based on the Confusion Rate (CR), which considers only the characteristics of the data itself, independent of network optimization or data augmentation. The proposed method improves the performance of a deep learning model by calculating CRs for images in a dataset with similar characteristics and reflecting them in the weights of the loss function. Because the CR-based recognition method takes inter-class similarity into account, it is advantageous for identifying highly similar images. Applied to a ResNet-18 model, the proposed method showed performance improvements of 0.22% on HanDB and 3.38% on Animal-10N. The proposed method is expected to serve as a basis for artificial intelligence research using the noisy labeled data that accompanies large-scale training datasets. (A minimal sketch of a CR-weighted loss follows below.)
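
The CR-weighted loss described above can be pictured as deriving per-class weights from a confusion matrix and passing them to a weighted cross-entropy loss. The PyTorch sketch below is only a generic illustration of that mechanism; the paper's exact CR formula and weighting scheme are not reproduced, and the confusion-matrix values are invented.

```python
# Per-class weights derived from a confusion matrix, fed into a weighted
# cross-entropy loss. Illustrative only; requires torch.
import torch
import torch.nn as nn

def confusion_rates(conf_mat: torch.Tensor) -> torch.Tensor:
    """Fraction of each class's samples predicted as some other class."""
    per_class_total = conf_mat.sum(dim=1).clamp(min=1)
    correct = conf_mat.diagonal()
    return 1.0 - correct / per_class_total          # shape: (num_classes,)

# Hypothetical 3-class confusion matrix from a validation pass.
conf_mat = torch.tensor([[90., 8., 2.],
                         [12., 80., 8.],
                         [3., 5., 92.]])

cr = confusion_rates(conf_mat)                      # tensor([0.10, 0.20, 0.08])
weights = 1.0 + cr                                  # emphasize frequently confused classes
criterion = nn.CrossEntropyLoss(weight=weights)

# Usage inside a training loop (logits would come from e.g. a ResNet-18):
logits = torch.randn(4, 3)
targets = torch.tensor([0, 1, 2, 1])
loss = criterion(logits, targets)
print(float(loss))
```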

A novel radioactive particle tracking algorithm based on deep rectifier neural network

  • Dam, Roos Sophia de Freitas; dos Santos, Marcelo Carvalho; do Desterro, Filipe Santana Moreira; Salgado, William Luna; Schirru, Roberto; Salgado, Cesar Marques
    • Nuclear Engineering and Technology, v.53 no.7, pp.2334-2340, 2021
  • Radioactive particle tracking (RPT) is a minimally invasive nuclear technique that tracks a radioactive particle inside a volume of interest by means of a mathematical location algorithm. Over the past decades, many such algorithms have been developed, including ones based on artificial intelligence techniques. In this study, the RPT technique is applied in a simulated test section consisting of a simplified mixer filled with concrete, six scintillator detectors, and a 137Cs radioactive particle emitting 662 keV gamma rays. The test section was modeled with the MCNPX code, a Monte Carlo simulation code, and 3516 different radioactive particle positions (x, y, z) were simulated. The novelty of this paper is the use of a location algorithm based on a deep learning model, specifically a six-layer deep rectifier neural network (DRNN) whose hyperparameters were defined using a Bayesian optimization method. A DRNN is a deep feedforward neural network that replaces the sigmoid-based activation functions traditionally used in vanilla multilayer perceptrons with rectified activation functions. The results show the high accuracy of the DRNN in an RPT tracking system: the root mean squared errors for the x, y, and z coordinates of the radioactive particle are 0.03064, 0.02523, and 0.07653, respectively. (A simplified sketch follows below.)
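
To make the DRNN idea concrete, the sketch below fits a small six-layer ReLU (rectifier) feedforward regressor that maps six detector count rates to a particle position (x, y, z). The detector response and dataset here are synthetic toys, and the layer widths are placeholders rather than the paper's Bayesian-optimized hyperparameters.

```python
# Toy ReLU feedforward regression from six detector readings to (x, y, z).
# Synthetic data only; not the MCNPX-simulated dataset from the paper.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
positions = rng.uniform(-1.0, 1.0, size=(3516, 3))   # simulated particle positions
detectors = rng.uniform(0.0, 1.0, size=(6, 3))       # fixed detector locations

# Toy "response": count rate falls off with squared distance to each detector.
dist2 = ((positions[:, None, :] - detectors[None, :, :]) ** 2).sum(axis=2)
counts = 1.0 / (dist2 + 0.1) + rng.normal(0, 0.01, size=dist2.shape)

X_train, X_test, y_train, y_test = train_test_split(counts, positions, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(64,) * 6,   # six rectifier (ReLU) layers
                     activation="relu",
                     max_iter=2000,
                     random_state=0)
model.fit(X_train, y_train)

rmse = np.sqrt(((model.predict(X_test) - y_test) ** 2).mean(axis=0))
print("RMSE per coordinate (x, y, z):", rmse)
```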

Design and Implementation of IoT Platform-based Digital Twin Prototype (IoT 플랫폼 기반 디지털 트윈 프로토타입 설계 및 구현)

  • Kim, Jeehyeong; Choi, Wongi; Song, Minhwan; Lee, Sangshin
    • Journal of Broadcast Engineering, v.26 no.4, pp.356-367, 2021
  • With the recent development of IoT and artificial intelligence technology, research and applications that optimize real-world problems by collecting and analyzing data in real time have increased in fields such as manufacturing and smart cities. Representatively, digital twin platforms that support real-time, bidirectional synchronization between the real world and its digitized virtual counterpart have been drawing attention. In this paper, we define the digital twin concept and propose a digital twin platform prototype that links real objects and predictions from the virtual world in real time by utilizing a oneM2M-based IoT platform. In addition, we implement an application on the prototype that can predict accidents caused by object collisions in advance. By running predefined test cases, we show that the proposed digital twin platform can predict a crane's motion in advance, detect collision risk, perform optimal control, and be applied in a real environment. (A simplified collision-prediction sketch follows below.)
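
As a highly simplified illustration of predicting a collision before it happens, the sketch below extrapolates two tracked objects under a constant-velocity assumption and raises a flag when their predicted separation drops below a safety radius. The oneM2M data exchange and the actual crane motion model are out of scope; all names and numbers are hypothetical.

```python
# Constant-velocity extrapolation and a minimum-distance check over a short
# prediction horizon. Purely illustrative; not the paper's prediction model.
import numpy as np

def min_predicted_distance(p1, v1, p2, v2, horizon_s=10.0, step_s=0.5):
    """Minimum distance between two objects over the horizon,
    assuming straight-line (constant-velocity) motion."""
    times = np.arange(0.0, horizon_s + step_s, step_s)
    return min(np.linalg.norm((p1 + v1 * t) - (p2 + v2 * t)) for t in times)

crane_hook = np.array([0.0, 0.0, 12.0]); hook_vel   = np.array([0.5, 0.1, -1.0])
worker     = np.array([4.0, 0.5,  1.8]); worker_vel = np.array([0.0, 0.0,  0.0])

SAFETY_RADIUS_M = 2.0
if min_predicted_distance(crane_hook, hook_vel, worker, worker_vel) < SAFETY_RADIUS_M:
    print("collision risk: issue a stop command to the physical crane")
else:
    print("no predicted collision within the horizon")
```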

Personalized Diabetes Risk Assessment Through Multifaceted Analysis (PD-RAMA): A Novel Machine Learning Approach to Early Detection and Management of Type 2 Diabetes

  • Gharbi Alshammari
    • International Journal of Computer Science & Network Security, v.23 no.8, pp.17-25, 2023
  • The alarming global prevalence of Type 2 Diabetes Mellitus (T2DM) has created an urgent need for robust, early diagnostic methodologies. This study presents an approach to predicting T2DM using the Extreme Gradient Boosting (XGBoost) algorithm, known for its predictive accuracy and computational efficiency. The investigation uses a curated dataset of 4303 samples, extracted from a comprehensive Chinese research study and aligned with World Health Organization indicators and standards, covering clinical, demographic, and lifestyle attributes. Through hyperparameter optimization, the XGBoost model achieved its best score with a learning rate of 0.1, a max depth of 3, 150 estimators, and specific colsample settings. The model's validation accuracy of 0.957, together with a sensitivity of 0.9898 and specificity of 0.8897, underlines its robustness in classifying T2DM. Analysis of the confusion matrix further substantiated the model's diagnostic performance, with an F1-score of 0.9308 illustrating balanced handling of true positive and true negative classifications, while the precision and recall metrics showed its ability to minimize false predictions, enhancing its clinical applicability. The findings underline the efficacy of XGBoost for T2DM prediction and contribute to the growing field of machine learning applications in personalized healthcare, offering a promising avenue for precise early detection, risk stratification, and patient-centric intervention in diabetes care. (A configuration sketch using the reported hyperparameters follows below.)
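
The reported model configuration can be expressed with the xgboost scikit-learn API as below. The learning rate, max depth, and number of estimators come from the abstract; the colsample value is a placeholder and the data is a synthetic stand-in, since the study's cohort is not available here.

```python
# XGBoost classifier configured with the hyperparameters reported in the abstract,
# trained on synthetic stand-in data for illustration only.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(42)
X = rng.normal(size=(4303, 10))                     # clinical/demographic/lifestyle stand-ins
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=4303) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

model = XGBClassifier(learning_rate=0.1,            # reported in the abstract
                      max_depth=3,                  # reported in the abstract
                      n_estimators=150,             # reported in the abstract
                      colsample_bytree=0.8,         # placeholder value
                      eval_metric="logloss")
model.fit(X_tr, y_tr)
print("F1 on held-out split:", f1_score(y_te, model.predict(X_te)))
```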

Design of Optimal Thermal Structure for DUT Shell using Fluid Analysis (유동해석을 활용한 DUT Shell의 최적 방열구조 설계)

  • Jeong-Gu Lee; Byung-jin Jin; Yong-Hyeon Kim; Young-Chul Bae
    • The Journal of the Korea institute of electronic communication sciences, v.18 no.4, pp.641-648, 2023
  • Recently, the rapid growth of artificial intelligence within the Fourth Industrial Revolution has been driven by improvements in semiconductor performance and circuit integration. As the transistors that support the operation of electronic devices and equipment have become more complex and miniaturized, controlling heat generation and improving heat dissipation efficiency have emerged as new performance indicators. The DUT (Device Under Test) Shell is a piece of equipment that detects malfunctioning transistors by evaluating transistor durability: a rated current is applied, the power is cut off at an arbitrary heating point, and the transistor is inspected as it dissipates heat. Since the DUT Shell can test more transistors simultaneously depending on the heat dissipation structure inside the equipment, its heat dissipation efficiency is directly related to its efficiency in detecting malfunctioning transistors. Therefore, in this paper, we propose various PCB configuration structures to optimize the heat dissipation of the DUT Shell, together with design variations and a thermal analysis of the optimal DUT Shell using computational fluid dynamics.

AI based complex sensor application study for energy management in WTP (정수장에서의 에너지 관리를 위한 AI 기반 복합센서 적용 연구)

  • Hong, Sung-Taek; An, Sang-Byung; Kim, Kuk-Il; Sung, Min-Seok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2022.05a, pp.322-323, 2022
  • The most important requirement for the optimal operation of a water purification plant is to accurately predict the pattern and amount of tap water used by consumers. The required amount of tap water must be pumped to the distribution reservoir and stored, and the required flow rate must be supplied in a timely manner using the minimum amount of electrical energy. Among the demand predictions for a water purification plant, the short-term demand forecasting needed for energy-optimized operation has been performed using time series analysis, regression analysis, and neural network algorithms, taking into account seasons, major periods, and regional characteristics. In this paper, we analyze energy management methods through an applicability study of AI-based complex sensors using LSTM (Long Short-Term Memory) and GRU (Gated Recurrent Units), which are types of recurrent neural networks. (A minimal forecasting sketch follows below.)
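
A minimal sketch of the kind of recurrent forecaster mentioned above: an LSTM (a GRU layer could be dropped in at the same spot) trained on a sliding window of hourly demand to predict the next hour. The demand series is synthetic and the network size is illustrative, not taken from the paper.

```python
# LSTM next-hour demand forecast on a synthetic hourly series. Illustrative only.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(1)
hours = np.arange(24 * 365)
demand = 100 + 30 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)

WINDOW = 24                                          # last 24 hours predict the next hour
X = np.stack([demand[i:i + WINDOW] for i in range(len(demand) - WINDOW)])
y = demand[WINDOW:]
X = X[..., None]                                     # (samples, timesteps, features)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(WINDOW, 1)),   # or tf.keras.layers.GRU(32)
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)

print("next-hour forecast:", float(model.predict(X[-1:], verbose=0)[0, 0]))
```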

Can Artificial Intelligence Boost Developing Electrocatalysts for Efficient Water Splitting to Produce Green Hydrogen?

  • Jaehyun Kim; Ho Won Jang
    • Korean Journal of Materials Research, v.33 no.5, pp.175-188, 2023
  • Water electrolysis holds great potential as a method for producing renewable hydrogen fuel at large scale and for replacing the fossil fuels responsible for greenhouse gas emissions and global climate change. To reduce the cost of hydrogen and make it competitive against fossil fuels, the efficiency of green hydrogen production should be maximized, which requires superior electrocatalysts that lower the reaction energy barriers. The development of catalytic materials has mostly relied on empirical, trial-and-error methods because of the complicated, multidimensional, and dynamic nature of catalysis, requiring significant time and effort to find optimized multicomponent catalysts under a variety of reaction conditions. The ultimate goal for researchers in materials science and engineering is the rational and efficient design of materials with the desired performance, and discovering and understanding new catalysts with desired properties is at the heart of materials science research. This process can benefit from machine learning (ML), given the complex nature of catalytic reactions and the vast range of candidate materials. This review summarizes recent achievements in catalyst discovery for the hydrogen evolution reaction (HER) and oxygen evolution reaction (OER), along with the basic concepts of ML algorithms and practical guides for materials scientists. The challenges and strategies of applying ML, which should be addressed collaboratively by the materials science and ML communities, are discussed. The ultimate integration of ML into catalyst development is expected to accelerate the design, discovery, optimization, and interpretation of superior electrocatalysts, helping realize a carbon-free ecosystem based on green hydrogen.

Privacy Preserving Techniques for Deep Learning in Multi-Party System (멀티 파티 시스템에서 딥러닝을 위한 프라이버시 보존 기술)

  • Hye-Kyeong Ko
    • The Journal of the Convergence on Culture Technology, v.9 no.3, pp.647-654, 2023
  • Deep learning is a useful method for classifying and recognizing complex data such as images and text, and its accuracy underpins useful artificial intelligence-based services on the Internet. However, the vast amount of user data used for training in deep learning has led to privacy violation problems, and there is concern that companies that have collected personal and sensitive data such as photographs and voices will own that data indefinitely: users cannot delete their data or limit the purpose of its use. For example, data owners such as medical institutions that want to apply deep learning to patients' medical records cannot share patient data because of privacy and confidentiality issues, making it difficult for them to benefit from deep learning technology. In this paper, we design a privacy-preserving deep learning technique that allows multiple workers to train a neural network model jointly, without sharing their input datasets, in a multi-party system. We propose a method that selectively shares small subsets using an optimization algorithm based on modified stochastic gradient descent, and confirm that it facilitates training with increased learning accuracy while protecting private information. (A conceptual sketch of selective sharing follows below.)
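
The selective-sharing idea can be pictured as each party training locally and uploading only a small, largest-magnitude fraction of its gradient to a shared model, so raw datasets never leave the parties. The NumPy sketch below is a generic illustration of that mechanism, with a linear model standing in for a neural network; it is not the paper's exact modified-SGD protocol.

```python
# Selective gradient sharing: each party computes a local gradient on its private
# data and shares only the top-k (largest-magnitude) entries with the global model.
import numpy as np

def top_k_mask(grad: np.ndarray, share_fraction: float) -> np.ndarray:
    """Boolean mask selecting the largest-magnitude fraction of gradient entries."""
    k = max(1, int(share_fraction * grad.size))
    threshold = np.partition(np.abs(grad).ravel(), -k)[-k]
    return np.abs(grad) >= threshold

def local_gradient(weights, X, y):
    """Gradient of mean squared error for a linear model (neural-net stand-in)."""
    return 2 * X.T @ (X @ weights - y) / len(y)

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])       # shared ground truth for the toy data
parties = []
for _ in range(3):                                  # three parties, each with private data
    X = rng.normal(size=(100, 5))
    parties.append((X, X @ true_w + rng.normal(scale=0.1, size=100)))

weights = np.zeros(5)                               # shared global model
for step in range(200):
    for X, y in parties:                            # X, y never leave the party
        grad = local_gradient(weights, X, y)
        shared = np.where(top_k_mask(grad, share_fraction=0.4), grad, 0.0)
        weights -= 0.05 * shared                    # server applies only shared entries

print("trained weights:", np.round(weights, 3), "target:", true_w)
```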