• Title/Summary/Keyword: AI learning data

845 search results

Research Trends of Multi-agent Collaboration Technology for Artificial Intelligence Bots (AI Bots를 위한 멀티에이전트 협업 기술 동향)

  • Kang, D.; Jung, J.Y.; Lee, C.H.; Park, M.; Lee, J.W.; Lee, Y.J.
    • Electronics and Telecommunications Trends, v.37 no.6, pp.32-42, 2022
  • Recently, decentralized approaches to artificial intelligence (AI) development, such as federated learning, have been drawing attention as the cost and time inefficiency of AI development grow with explosive data growth and rapid environmental change. Collaborative AI technology is an alternative to centralized AI: it dynamically organizes collaborative groups of different agents that share data, knowledge, and experience, and it uses distributed resources to derive enhanced knowledge and analysis models through collaborative learning on a given problem. This article investigates and analyzes recent technologies and applications relevant to research on multi-agent collaboration among AI bots, which can provide collaborative AI functionality autonomously.
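The collaborative-learning idea in the abstract above can be illustrated with a minimal FedAvg-style weighted model-averaging sketch; the agent parameter vectors and sample counts below are invented for illustration:

```python
import numpy as np

def federated_average(agent_weights, sample_counts):
    """Combine per-agent model parameters into a shared model,
    weighting each agent by how many samples it trained on."""
    counts = np.asarray(sample_counts, dtype=float)
    coeffs = counts / counts.sum()        # proportional contribution per agent
    stacked = np.stack(agent_weights)     # shape: (n_agents, n_params)
    return (coeffs[:, None] * stacked).sum(axis=0)

# Three agents with locally trained parameter vectors
local = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
shared = federated_average(local, sample_counts=[100, 100, 200])
```

Only the small parameter vectors are exchanged, not the agents' raw data, which is what makes this family of methods attractive for the decentralized setting the article surveys.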

Distributed Edge Computing for DNA-Based Intelligent Services and Applications: A Review (딥러닝을 사용하는 IoT빅데이터 인프라에 필요한 DNA 기술을 위한 분산 엣지 컴퓨팅기술 리뷰)

  • Alemayehu, Temesgen Seyoum; Cho, We-Duke
    • KIPS Transactions on Computer and Communication Systems, v.9 no.12, pp.291-306, 2020
  • Nowadays, Data-Network-AI (DNA)-based intelligent services and applications have become a reality, providing a new dimension of services that improve quality of life and business productivity. Artificial intelligence (AI) can enhance the value of IoT data (data collected by IoT devices), and the internet of things (IoT) in turn promotes the learning and intelligence capability of AI. To extract insights from massive volumes of IoT data in real time using deep learning, processing needs to happen in the IoT end devices where the data is generated. However, deep learning requires significant computational resources that may not be available at the IoT end devices. This problem has been addressed by transporting bulk data from the IoT end devices to cloud datacenters for processing, but transferring IoT big data to the cloud incurs prohibitively high transmission delay and raises major privacy concerns. Edge computing, in which distributed computing nodes are placed close to the IoT end devices, is a viable solution to meet the high-computation and low-latency requirements and to preserve user privacy. This paper provides a comprehensive review of the current state of leveraging deep learning within edge computing to unleash the potential of IoT big data generated by IoT end devices. We believe this review will contribute to the development of DNA-based intelligent services and applications. It describes the different distributed training and inference architectures of deep learning models across multiple nodes of the edge computing platform, presents the privacy-preserving approaches for deep learning in the edge computing environment and the various application domains where deep learning on the network edge can be useful, and finally discusses open issues and challenges in leveraging deep learning within edge computing.
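A toy sketch of the split-inference architecture the review describes, in which the first layers of a model run on the IoT/edge device and the remainder runs in the cloud; the two-layer network and its weights are invented for illustration:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical two-layer model: layer 1 runs on the edge device,
# layer 2 on a cloud node. Only the small intermediate activation
# crosses the network, not the raw sensor data.
W1 = np.array([[1.0, -1.0], [0.5, 0.5]])   # edge-side weights
W2 = np.array([[2.0], [1.0]])              # cloud-side weights

def edge_forward(x):
    return relu(x @ W1)      # computed where the data is generated

def cloud_forward(h):
    return h @ W2            # heavier remainder runs in the cloud

x = np.array([[1.0, 2.0]])       # raw IoT sensor reading (stays local)
activation = edge_forward(x)     # only this small tensor is transmitted
y = cloud_forward(activation)    # final prediction in the cloud
```

Transmitting the intermediate activation instead of the raw reading is one of the latency- and privacy-motivated partitioning strategies this kind of review surveys.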

Applying NIST AI Risk Management Framework: Case Study on NTIS Database Analysis Using MAP, MEASURE, MANAGE Approaches (NIST AI 위험 관리 프레임워크 적용: NTIS 데이터베이스 분석의 MAP, MEASURE, MANAGE 접근 사례 연구)

  • Jung Sun Lim; Seoung Hun Bae; Taehoon Kwon
    • Journal of Korean Society of Industrial and Systems Engineering, v.47 no.2, pp.21-29, 2024
  • Fueled by international efforts toward AI standardization, including those by the European Commission, the United States, and international organizations, this study introduces an AI-driven framework for analyzing advancements in drone technology. Utilizing project data retrieved from the NTIS DB via the "drone" keyword, the framework employs a diverse toolkit of supervised learning methods (Keras MLP, XGBoost, LightGBM, and CatBoost) enhanced by BERTopic (a natural-language topic-modeling tool). This multifaceted approach ensures both comprehensive data-quality evaluation and in-depth structural analysis of documents. Furthermore, a 6T-based classification method refines non-applicable data for year-on-year AI analysis, measurably improving classification accuracy. Leveraging AI tools, including GPT-4, the research identifies year-on-year trends in emerging keywords and uses them to generate detailed summaries, enabling efficient processing of large text datasets and offering an AI analysis system applicable to policy domains. Notably, this study not only advances methodologies aligned with AI Act standards but also lays the groundwork for responsible AI implementation through analysis of government research and development investments.

The Core Concepts of Mathematics for AI and An Analysis of Mathematical Contents in the Textbook (수학과 인공지능(AI) 핵심 개념과 <인공지능 수학> 내용 체계 분석)

  • Kim, Changil; Jeon, Youngju
    • Journal of the Korean School Mathematics Society, v.24 no.4, pp.391-405, 2021
  • In this study, 'data collection', 'data expression', 'data analysis', and 'optimization and decision-making' were selected as the core AI concepts to be addressed in mathematics education for AI. On this basis, the degree to which the core AI concepts are reflected was investigated and analyzed against the mathematical core concepts and content of each area of the elective course <Mathematics for AI>. In addition, the appropriateness of the course content was examined with a focus on the core concepts and related learning content. The results offer suggestions for answering the following four critical questions. First, how should the learning path for <Mathematics for AI> be set? Second, is it necessary to discuss redefining the nature of <Mathematics for AI>? Third, are the core concepts and terms selected for <Mathematics for AI> appropriate? Last, are the related learning contents of the <Mathematics for AI> content system presented appropriately?

AI-Based Intelligent CCTV Detection Performance Improvement (AI 기반 지능형 CCTV 이상행위 탐지 성능 개선 방안)

  • Dongju Ryu; Kim Seung Hee
    • Convergence Security Journal, v.23 no.5, pp.117-123, 2023
  • Recently, as demand for generative artificial intelligence (AI) and AI in general has increased, the seriousness of misuse and abuse has emerged. At the same time, intelligent CCTV, which maximizes the detection of abnormal behavior, is of great help in crime prevention for the military and police. AI learns as taught by humans and then proceeds with self-learning, and because it makes judgments according to what it has learned, the characteristics of that learning must be clearly understood. However, strange and abnormal behaviors that are ambiguous even for humans are often difficult to judge visually. They are very difficult for artificial intelligence to learn, and the learning results contain very many false positives, false negatives, and true negatives. In response, this paper presents standards and methods for clarifying how AI learns strange and abnormal behaviors, and proposes learning measures to maximize the judgment ability of intelligent CCTV with respect to false positives, false negatives, and true negatives. Through this work, the performance of the AI engines in intelligent CCTV systems currently in use is expected to be maximized, and the ratio of false positives and false negatives minimized.
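The false-positive/false-negative trade-off the abstract centers on can be quantified with standard confusion-matrix metrics; the detection counts below are invented for illustration:

```python
def detection_metrics(tp, fp, fn, tn):
    """Summarize an anomaly detector's confusion matrix.
    Precision: of all alarms raised, how many were real anomalies.
    Recall: of all real anomalies, how many were caught."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy

# Hypothetical counts from an intelligent-CCTV evaluation run
p, r, a = detection_metrics(tp=80, fp=20, fn=10, tn=890)
```

Because abnormal events are rare, raw accuracy stays high even when precision or recall is poor, which is why per-class counts of false positives and false negatives are the meaningful targets for the learning measures the paper proposes.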

Case Analysis on AI-Based Learning Assistance Systems (인공지능 기반 학습 지원 시스템에 관한 사례 분석)

  • Chee, Hyunkyung; Kim, Minji; Lee, Gayoung; Huh, Sunyoung; Kim, Myungsun
    • Journal of Engineering Education Research, v.27 no.4, pp.3-11, 2024
  • This study classified domestic and international systems by type, presenting their key features and examples, with the aim of outlining future directions for system development and research. AI-based learning assistance systems can be categorized by purpose into instructional-learning evaluation types and academic recommendation types. Instructional-learning evaluation types measure learners' levels through initial diagnostic assessments, provide customized learning, and offer adaptive feedback visualized from the misconceptions identified in learners' learning data. Academic recommendation types provide personalized academic pathways and a variety of information and functions supporting school life overall, based on the big data held by schools. Given these characteristics, future system development should define its purpose clearly from the planning stage, consider data ethics and stability, and sufficiently reflect educational contexts rather than approaching the problem from a technological perspective alone.

Determination of High-pass Filter Frequency with Deep Learning for Ground Motion (딥러닝 기반 지반운동을 위한 하이패스 필터 주파수 결정 기법)

  • Lee, Jin Koo; Seo, JeongBeom; Jeon, SeungJin
    • Journal of the Earthquake Engineering Society of Korea, v.28 no.4, pp.183-191, 2024
  • Accurate seismic vulnerability assessment requires large amounts of high-quality ground motion data. Ground motion time-series data contain not only the seismic waves but also background noise, so it is crucial to determine the high-pass cut-off frequency that reduces this noise. Traditional methods for determining the high-pass filter frequency rely on human inspection, such as comparing the noise and signal Fourier amplitude spectra (FAS), fitting an f² trend line, and inspecting the displacement curve after filtering. However, these methods are subject to human error and are unsuitable for automating the process. This study used a deep learning approach to determine the high-pass filter frequency. We used the Mel-spectrogram for feature extraction and the mixup technique to overcome the lack of data. We selected convolutional neural network (CNN) models such as ResNet, DenseNet, and EfficientNet for transfer learning, and ViT and DeiT as transformer-based models. ResNet showed the highest performance, with a coefficient of determination (R²) of 0.977 and the lowest mean absolute error (MAE) and root mean square error (RMSE) at 0.006 and 0.074, respectively. When applied to a seismic event and compared with the traditional methods, the deep learning method determined the high-pass filter frequency within a difference of 0.1 Hz, demonstrating that it can replace the traditional methods. We anticipate that this study will pave the way for automating ground motion processing, allowing systems to handle large amounts of data efficiently.
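The mixup augmentation the abstract uses to offset the small dataset can be sketched in a few lines; the Beta parameter, the 4×4 "spectrogram" patches, and the cut-off frequencies below are invented for illustration:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Blend two training examples (e.g. two Mel-spectrogram patches and
    their cut-off frequencies) with a Beta-distributed mixing coefficient,
    producing a synthetic example on the line between them."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)            # mixing coefficient in (0, 1)
    x_mix = lam * x1 + (1.0 - lam) * x2     # blended input
    y_mix = lam * y1 + (1.0 - lam) * y2     # blended regression target
    return x_mix, y_mix

# Two hypothetical spectrogram patches with their filter frequencies (Hz)
xa, ya = np.ones((4, 4)), 0.1
xb, yb = np.zeros((4, 4)), 0.5
x_mix, y_mix = mixup(xa, ya, xb, yb)
```

Because the target here is a continuous frequency rather than a class label, the linearly interpolated `y_mix` remains a valid regression target, which is what makes mixup a natural fit for this kind of data-scarce regression task.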

Research on the application of Machine Learning to threat assessment of combat systems

  • Seung-Joon Lee
    • Journal of the Korea Society of Computer and Information, v.28 no.7, pp.47-55, 2023
  • This paper presents a method for predicting the threat index of combat systems using Gradient Boosting Regressors and Support Vector Regressors, among other machine learning models. Because combat systems are software that emphasizes safety and reliability, the application of AI technology whose reliability is not guaranteed is restricted by policy, and as a result the domestic combat systems currently fielded are not equipped with AI. However, to respond to the policy direction of the Ministry of National Defense, which aims to field AI, we conducted a study to secure the basic technology required for applying machine learning in combat systems. After collecting the data required for threat index evaluation, we processed and refined the data, selected the machine learning models, and chose the optimal hyper-parameters, then determined the prediction accuracy of the trained models. As a result, the model score on the test data was over 99 points, confirming the applicability of machine learning models to combat systems.
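A minimal sketch of the gradient-boosting regression step described above, using scikit-learn on synthetic data; the feature names and the threat-index formula are invented for illustration and are not the paper's actual data pipeline:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
n = 500

# Hypothetical track features: range (km), closing speed (m/s),
# and bearing offset from own ship (deg)
X = np.column_stack([
    rng.uniform(1, 100, n),
    rng.uniform(0, 300, n),
    rng.uniform(0, 180, n),
])
# Invented ground-truth threat index: closer, faster, more head-on => higher
y = 100 / X[:, 0] + 0.2 * X[:, 1] - 0.1 * X[:, 2] + rng.normal(0, 1, n)

model = GradientBoostingRegressor(n_estimators=200, max_depth=3, random_state=0)
model.fit(X, y)
r2 = model.score(X, y)   # coefficient of determination on the training set
```

In practice the model would be scored on held-out data with tuned hyper-parameters, as the abstract describes; this sketch only shows the fit/predict shape of the approach.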

Why Data Capability is Important to become an AI Matured Organization?

  • Gyeung-min Kim
    • Journal of Information Technology Applications and Management, v.31 no.3, pp.165-179, 2024
  • Although firms with advanced analytics and machine learning (often called AI) capabilities are considered highly successful in the market because they base decisions and actions on quantitative analysis of data, the scarcity of historical data and the lack of the right data infrastructure prevent many organizations from performing such projects. The objective of this study is to identify a roadmap for organizations to reach data capability maturity and become AI-matured organizations. First, the study defines the terms AI capability, data capability, and AI-matured organization. Then, using content analysis, organizations' data practices for AI system development and operation are analyzed to infer a data capability roadmap toward an AI-matured organization.

Application and Potential of Artificial Intelligence in Heart Failure: Past, Present, and Future

  • Minjae Yoon; Jin Joo Park; Taeho Hur; Cam-Hao Hua; Musarrat Hussain; Sungyoung Lee; Dong-Ju Choi
    • International Journal of Heart Failure, v.6 no.1, pp.11-19, 2024
  • The prevalence of heart failure (HF) is increasing, necessitating accurate diagnosis and tailored treatment. The accumulation of clinical information from patients with HF generates big data, which poses challenges for traditional analytical methods. To address this, big data approaches and artificial intelligence (AI) have been developed that can effectively predict future observations and outcomes, enabling precise diagnoses and personalized treatments of patients with HF. Machine learning (ML) is a subfield of AI that allows computers to analyze data, find patterns, and make predictions without explicit instructions. ML can be supervised, unsupervised, or semi-supervised. Deep learning is a branch of ML that uses artificial neural networks with multiple layers to find complex patterns. These AI technologies have shown significant potential in various aspects of HF research, including diagnosis, outcome prediction, classification of HF phenotypes, and optimization of treatment strategies. In addition, integrating multiple data sources, such as electrocardiography, electronic health records, and imaging data, can enhance the diagnostic accuracy of AI algorithms. Currently, wearable devices and remote monitoring aided by AI enable the earlier detection of HF and improved patient care. This review focuses on the rationale behind utilizing AI in HF and explores its various applications.