• Title/Summary/Keyword: AI network

747 search results

Establishment of a public safety network app security system (재난안전망 앱 보안 체계 구축)

  • Baik, Nam-Kyun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.10
    • /
    • pp.1375-1380
    • /
    • 2021
  • Korea's security response to application service apps is still insufficient because the public safety network has only recently opened, so preemptive security measures are essential. In this study, we propose establishing a 'public safety network app security system' to prevent potential vulnerabilities in the app store that distributes apps on the public safety network and in the Android operating system that runs apps on dedicated terminal devices. Before an application service app is listed on the public safety network mobile app store, a dataset of malicious and normal apps is first established to extract characteristics, and the most effective AI model is selected to perform static and dynamic analysis. Based on the analysis results, a 'Safety App Certificate' is issued to non-malicious apps to secure the reliability of listed apps. Ultimately, this minimizes the security blind spots of public safety network apps. In addition, the safety of the network can be secured by supporting public safety application services through certified apps.
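The certification pipeline described above scores apps with a trained AI model and certifies only those judged non-malicious. A minimal sketch of that triage step, using a toy linear classifier; the feature names, weights, and threshold are illustrative assumptions, not the paper's actual model:

```python
# Toy static-analysis triage: score app feature vectors with a linear
# classifier and certify apps that fall below a maliciousness threshold.
# Features/weights are assumed; in practice they are learned from the
# labeled dataset of malicious and normal apps.

FEATURES = ["dangerous_permissions", "obfuscated_strings", "native_calls"]
WEIGHTS = [0.6, 0.3, 0.1]   # assumed, learned from labeled data in practice
THRESHOLD = 0.5

def malice_score(app):
    """Weighted sum of normalized static features (each in 0..1)."""
    return sum(w * app[f] for w, f in zip(WEIGHTS, FEATURES))

def certify(app):
    """Issue a 'Safety App Certificate' only to low-scoring apps."""
    return "certified" if malice_score(app) < THRESHOLD else "rejected"

benign = {"dangerous_permissions": 0.1, "obfuscated_strings": 0.0, "native_calls": 0.2}
suspect = {"dangerous_permissions": 0.9, "obfuscated_strings": 0.8, "native_calls": 0.5}
print(certify(benign), certify(suspect))
```

A real deployment would replace the hand-set weights with a model selected from static and dynamic analysis results, as the abstract describes.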

A Local Tuning Scheme of RED using Genetic Algorithm for Efficient Network Management in Multi-Core CPU Environment (멀티코어 CPU 환경하에서 능률적인 네트워크 관리를 위한 유전알고리즘을 이용한 국부적 RED 조정 기법)

  • Song, Ja-Young;Choe, Byeong-Seog
    • Journal of Internet Computing and Services
    • /
    • v.11 no.1
    • /
    • pp.1-13
    • /
    • 2010
  • It is not easy to set RED (Random Early Detection) parameters appropriately for the environment when managing network devices. It is especially difficult to maintain a constant service rate as the environment changes. In this paper, we assume a router with a multi-core CPU at the output queue and propose AI RED (Artificial Intelligence RED), which applies a genetic algorithm from artificial intelligence directly in the output queue, optimizing the parameters for the RED environment and adapting automatically to the workload. As a result, AI RED is simpler and finer-grained than FuRED (Fuzzy-Logic-based RED), and the RED parameters that AI RED finds through simulation adapt to the environment better than standard RED parameters, providing effective service. Consequently, automating the management of RED parameters improves the efficiency of network management.
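The genetic-algorithm tuning loop described above can be sketched in a few lines. The fitness function below is a toy stand-in (a quadratic penalty around an assumed optimum of min_th=5, max_th=15, max_p=0.1); in the paper, fitness would come from the queue's measured behavior under the current workload:

```python
# Minimal genetic-algorithm sketch for tuning RED parameters
# (min threshold, max threshold, max drop probability).
import random

random.seed(0)

def fitness(p):
    min_th, max_th, max_p = p
    # toy objective: prefer min_th=5, max_th=15, max_p=0.1 (assumed optimum)
    return -((min_th - 5) ** 2 + (max_th - 15) ** 2 + (max_p - 0.1) ** 2)

def mutate(p):
    # small random perturbation of each gene
    return [g + random.uniform(-1, 1) for g in p]

def evolve(pop, generations=50):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # selection
        survivors = pop[: len(pop) // 2]           # elitism: keep top half
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)

best = evolve([[random.uniform(0, 30), random.uniform(0, 30), random.random()]
               for _ in range(20)])
```

Because the search runs locally per output queue, a multi-core router can tune each queue's parameters independently, which is the setting the paper assumes.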

Trends in AI Processor Technology (인공지능프로세서 기술 동향)

  • Lee, M.Y.;Chung, J.;Lee, J.H.;Han, J.H.;Kwon, Y.S.
    • Electronics and Telecommunications Trends
    • /
    • v.35 no.3
    • /
    • pp.66-75
    • /
    • 2020
  • As the increasing expectations of practical AI (Artificial Intelligence) services make AI algorithms more complicated, an efficient processor for AI algorithms is required. To meet this requirement, processors optimized for parallel processing, such as GPUs (Graphics Processing Units), have been widely employed. However, the GPU has a generalized structure for various applications and is not optimized for AI algorithms, so research on AI processors optimized for AI algorithm processing has been actively conducted. This paper briefly introduces AI processors for inference acceleration developed by the Electronics and Telecommunications Research Institute, South Korea, and by other global vendors for mobile and server platforms.

40-TFLOPS artificial intelligence processor with function-safe programmable many-cores for ISO26262 ASIL-D

  • Han, Jinho;Choi, Minseok;Kwon, Youngsu
    • ETRI Journal
    • /
    • v.42 no.4
    • /
    • pp.468-479
    • /
    • 2020
  • The proposed AI processor architecture has high throughput for accelerating neural networks and reduces the external memory bandwidth required for processing them. To achieve high throughput, the proposed super thread core (STC) includes 128 × 128 nano cores operating at a clock frequency of 1.2 GHz. A function-safe architecture is proposed for fault-tolerant systems such as the electronics of autonomous cars. A general-purpose processor (GPP) core, with a self-recovering cache and a dynamic lockstep function, is integrated with the STC to control it and to process the AI algorithm. The function-safe design is shown to achieve ASIL-D, the highest fault-tolerance level of the ISO 26262 standard. The entire AI processor is fabricated as a prototype chip in a 28-nm CMOS process. Its peak computing performance is 40 TFLOPS at 1.2 GHz with a supply voltage of 1.1 V, and the measured energy efficiency is 1.3 TOPS/W. With the function-safe design, the control GPP achieves ISO 26262 ASIL-D with a single-point fault-tolerance rate of 99.64%.
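The quoted peak figure can be sanity-checked from the core count and clock: 128 × 128 nano cores at 1.2 GHz, assuming each core retires one multiply-accumulate (2 FLOPs) per cycle. The 2-FLOPs-per-cycle figure is our assumption, not stated in the abstract:

```python
# Back-of-the-envelope check of the 40-TFLOPS peak performance claim.
cores = 128 * 128              # 16,384 nano cores in the super thread core
clock_hz = 1.2e9               # 1.2 GHz clock frequency
flops_per_cycle = 2            # 1 MAC = 1 multiply + 1 add (assumed)

peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
print(f"{peak_tflops:.1f} TFLOPS")   # ~39.3, consistent with the quoted 40
```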

A Model of Artificial Intelligence in Cyber Security of SCADA to Enhance Public Safety in UAE

  • Omar Abdulrahmanal Alattas Alhashmi;Mohd Faizal Abdullah;Raihana Syahirah Abdullah
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.2
    • /
    • pp.173-182
    • /
    • 2023
  • The UAE government has set its sights on creating a smart, electronic-based government system that utilizes AI. The country's collaboration with India aims to bring substantial returns through AI innovation, with a target of over $20 billion in the coming years. To achieve this goal, the UAE launched its AI strategy in 2017, focused on improving performance in key sectors and becoming a leader in AI investment. To ensure public safety as the role of AI in government grows, the country is working on developing integrated cyber security solutions for SCADA systems. A questionnaire-based study was conducted, using the AI IQ Threat Scale to measure the variables in the research model. The sample consisted of 200 individuals from the UAE government, private sector, and academia, and data was collected through online surveys and analyzed using descriptive statistics and structural equation modeling. The results indicate that the AI IQ Threat Scale was effective in measuring the four main attacks and defense applications of AI. Additionally, the study reveals that AI governance and cyber defense have a positive impact on the resilience of AI systems. This study makes a valuable contribution to the UAE government's efforts to remain at the forefront of AI and technology exploitation. The results emphasize the need for appropriate evaluation models to ensure a resilient economy and improved public safety in the face of automation. The findings can inform future AI governance and cyber defense strategies for the UAE and other countries.

A Study on AI Evolution Trend based on Topic Frame Modeling (인공지능발달 토픽 프레임 연구 -계열화(seriation)와 통합화(skeumorph)의 사회구성주의 중심으로-)

  • Kweon, Sang-Hee;Cha, Hyeon-Ju
    • The Journal of the Korea Contents Association
    • /
    • v.20 no.7
    • /
    • pp.66-85
    • /
    • 2020
  • The purpose of this study is to explain and predict trends in the AI development process based on AI technology patents and on AI reporting frames in major newspapers. To that end, summaries of South Korean and U.S. technology patents filed over the past nine years and the AI (Artificial Intelligence) news texts of major domestic newspapers were analyzed. The study used topic modeling and time-series regression analysis on big data, together with network agenda correlation and regression analysis techniques. First, in the AI technology patent summaries, topics appeared in the order of artificial intelligence, algorithms, and 5G, while in news reports, AI industrial applications led, followed by data analysis market applications, indicating a trend of reporting on AI's social culture. Second, the time-series regression analysis showed that the social and cultural use of AI and the start of industrial application were rising topics, while the downward trend centered on system and hardware technology. Third, QAP analysis using correlation and regression showed a high correlation between AI technology patents and news reporting frames. This suggests that AI technology patents and news reporting frames have been socially constructed by the determinants of media discourse in AI development.
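The correlation step above compares how salient each topic is in patents versus in news frames. A minimal sketch of such a comparison using Spearman rank correlation; the topic salience values below are hypothetical, not the study's data:

```python
# Spearman rank correlation between two topic-salience rankings,
# e.g. topic weight in patent summaries vs. in news reporting frames.

def spearman(xs, ys):
    """Spearman rho for two equal-length lists without ties."""
    n = len(xs)
    rank = lambda v: sorted(range(n), key=lambda i: v[i])
    rx, ry = [0] * n, [0] * n
    for r, i in enumerate(rank(xs)):
        rx[i] = r
    for r, i in enumerate(rank(ys)):
        ry[i] = r
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

patent_salience = [9, 7, 5, 3, 1]   # hypothetical: AI, algorithms, 5G, ...
news_salience = [6, 8, 5, 4, 2]     # same topics as weighted in news frames
rho = spearman(patent_salience, news_salience)
```

A rho near 1 would indicate the kind of high patent-news agenda correlation the QAP analysis reports; the full study works on topic-by-topic network matrices rather than simple rank lists.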

An Analysis of JPEG AI Coding Frameworks and an Introduction to Their Use Cases (JPEG AI의 부호화 프레임워크들의 분석 및 활용 사례에 대한 소개)

  • 한승진;김영섭
    • Broadcasting and Media Magazine
    • /
    • v.28 no.1
    • /
    • pp.13-28
    • /
    • 2023
  • Image compression plays a major role in image and video processing, and research continues as demand grows for handling big data in fields such as autonomous driving, cloud computing, and video transmission. At the center of this is the advance of deep learning: papers applying algorithms that effectively train deep neural networks have shown compression performance surpassing that of existing formats such as JPEG, JPEG 2000, and MPEG. Accordingly, JPEG AI is working to establish a standard for deep-learning-based learned image compression. This article analyzes the technologies JPEG AI aims to standardize and the compression frameworks proposed to JPEG AI, and introduces use cases to survey trends in JPEG AI-based learned image compression models.
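Learned codecs proposed to JPEG AI are compared against JPEG and JPEG 2000 on rate-distortion metrics, most commonly PSNR. A minimal sketch of the PSNR computation over toy 8-bit pixel sequences (no real codec involved):

```python
# PSNR (peak signal-to-noise ratio) between an original image and a
# decoder's reconstruction -- the basic distortion metric used when
# comparing compression frameworks.
import math

def psnr(original, reconstructed, peak=255.0):
    """PSNR in dB between two equal-length pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

orig = [52, 55, 61, 66, 70, 61, 64, 73]
recon = [53, 55, 60, 66, 71, 60, 64, 72]   # e.g. decoder output after compression
quality_db = psnr(orig, recon)
```

A learned codec "surpassing" JPEG means it achieves a higher PSNR (or perceptual score) at the same bitrate, or the same PSNR at a lower bitrate.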


CNN-based Gesture Recognition using Motion History Image

  • Koh, Youjin;Kim, Taewon;Hong, Min;Choi, Yoo-Joo
    • Journal of Internet Computing and Services
    • /
    • v.21 no.5
    • /
    • pp.67-73
    • /
    • 2020
  • In this paper, we present a CNN-based gesture recognition approach that reduces the memory burden of the input data. Most neural-network-based gesture recognition methods use a sequence of frame images as input, which causes a memory burden problem. We instead use a motion history image to define a meaningful gesture. The motion history image is a grayscale image into which temporal motion information is collapsed by synthesizing silhouette images of the user over the period of one meaningful gesture. We first summarize previous traditional and neural-network-based approaches to gesture recognition, then explain the data preprocessing procedure for building the motion history image and the network architecture, with three convolution layers, for recognizing meaningful gestures. In the experiments, we trained five types of gestures: charging power, shooting left, shooting right, kicking left, and kicking right. Recognition accuracy was measured while adjusting the number of filters in each layer of the proposed network. Using a grayscale image with 240 × 320 resolution to define one meaningful gesture, we achieved a gesture recognition accuracy of 98.24%.
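The motion-history-image preprocessing described above can be sketched directly: each new silhouette sets its pixels to maximum intensity while uncovered pixels decay, collapsing the gesture's temporal motion into one grayscale image. The frames below are tiny flattened binary masks for illustration; the decay constant is an assumption:

```python
# One MHI update step: silhouette pixels -> tau, all others decay toward 0.

def update_mhi(mhi, silhouette, tau=255, decay=85):
    return [tau if s else max(m - decay, 0) for m, s in zip(mhi, silhouette)]

# three 1-D "frames" of a blob moving left to right (flattened masks)
frames = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
]
mhi = [0, 0, 0, 0]
for f in frames:
    mhi = update_mhi(mhi, f)
# older motion is dimmer, the most recent silhouette is brightest
```

The resulting single grayscale image, rather than the whole frame sequence, is what the three-layer CNN consumes, which is where the memory saving comes from.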

Changes in the Structure of Collaboration Network in Artificial Intelligence by National R&D Stage

  • Hyun, Mi Hwan;Lee, Hye Jin;Lim, Seok Jong;Lee, KangSan DaJeong
    • Journal of Information Science Theory and Practice
    • /
    • v.10 no.spc
    • /
    • pp.12-24
    • /
    • 2022
  • This study investigated changes in the collaboration structure at each stage of national Research and Development (R&D) in the artificial intelligence (AI) field by analyzing the co-author network of papers written under national R&D projects. Author information was extracted from national R&D outcomes in AI from 2014 to 2019, using NTIS (National Science & Technology Information Service) data from the KISTI (Korea Institute of Science and Technology Information). Research collaboration in AI follows a power-law structure in which research is led by a few influential researchers: less than 30 percent of authors are linked to the largest cluster, and the network is segmented into many small groups. Large, highly connected research groups and small groups are linked to each other only sporadically. However, the largest cluster grew larger and denser over time, meaning that as research intensified, new researchers joined the mainstream network and the scope of collaboration expanded. This intensification enlarged collaborative researcher groups and increased the number of large studies; rather than maintaining only conventional collaborative relationships, researchers formed new relationships as the number of newcomers rose over time.
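The "less than 30 percent linked to the largest cluster" observation above is the share of authors in the co-author network's largest connected component. A minimal sketch of that measurement on a hypothetical toy graph:

```python
# Fraction of nodes in the largest connected component of an
# undirected co-author graph, found by breadth-first search.
from collections import deque

def largest_component_share(nodes, edges):
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, best = set(), 0
    for start in nodes:
        if start in seen:
            continue
        queue, size = deque([start]), 1
        seen.add(start)
        while queue:
            for nb in adj[queue.popleft()]:
                if nb not in seen:
                    seen.add(nb)
                    size += 1
                    queue.append(nb)
        best = max(best, size)
    return best / len(nodes)

nodes = list(range(10))
edges = [(0, 1), (1, 2), (3, 4), (5, 6), (7, 8)]  # fragmented small groups
share = largest_component_share(nodes, edges)
```

Tracking this share (and cluster density) per R&D stage is how growth of the mainstream network over time would be quantified.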

Implementation of Autonomous IoT Integrated Development Environment based on AI Component Abstract Model (AI 컴포넌트 추상화 모델 기반 자율형 IoT 통합개발환경 구현)

  • Kim, Seoyeon;Yun, Young-Sun;Eun, Seong-Bae;Cha, Sin;Jung, Jinman
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.21 no.5
    • /
    • pp.71-77
    • /
    • 2021
  • Recently, there is demand for efficient program development in IoT application support frameworks that considers heterogeneous hardware characteristics. The scope of hardware support is also expanding with the development of neuromorphic architectures, which mimic the human brain to learn on their own and enable autonomous computing. However, most existing IoT IDEs (Integrated Development Environments) have difficulty supporting AI (Artificial Intelligence) or services that combine various hardware such as neuromorphic architectures. In this paper, we design an AI component abstract model that supports both the second-generation ANN (Artificial Neural Network) and the third-generation SNN (Spiking Neural Network), and implement an autonomous IoT IDE based on the proposed model. With the proposed technique, IoT developers can create AI components automatically without knowledge of AI or SNNs. The technique is flexible in converting code for the target runtime, so development productivity is high. Experiments confirmed that the VCL (Virtual Component Layer) may introduce a conversion delay, but the difference is not significant.
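The abstraction idea above, one component specification mapped onto either an ANN or an SNN runtime at conversion time, can be sketched as follows. The runtime names and emitted code templates are hypothetical, not the paper's actual virtual component layer:

```python
# One abstract AI component, converted into runtime-specific code on
# demand -- a toy stand-in for the proposed AI component abstract model.

class AIComponent:
    def __init__(self, name, inputs, outputs):
        self.name, self.inputs, self.outputs = name, inputs, outputs

    def emit(self, runtime):
        """Map the abstract component onto a target runtime."""
        if runtime == "ann":
            return f"dense_net({self.name}, in={self.inputs}, out={self.outputs})"
        if runtime == "snn":
            return f"spiking_net({self.name}, in={self.inputs}, out={self.outputs})"
        raise ValueError(f"unsupported runtime: {runtime}")

comp = AIComponent("gesture", inputs=64, outputs=5)
ann_code = comp.emit("ann")
snn_code = comp.emit("snn")
```

The developer specifies only the component's interface; the per-runtime conversion is what introduces the small VCL delay the experiments measured.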