• Title/Summary/Keyword: AI 반도체 (AI semiconductor)

A Study on Privacy Preserving Machine Learning (프라이버시 보존 머신러닝의 연구 동향)

  • Han, Woorim;Lee, Younghan;Jun, Sohee;Cho, Yungi;Paek, Yunheung
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.924-926 / 2021
  • AI (Artificial Intelligence) is being utilized in various fields and services to make human life more convenient. Unfortunately, today's ML (Machine Learning) systems contain many security vulnerabilities, raising various privacy concerns because some AI models need individuals' private data for training. Such concerns have led to growing interest in ML systems that can preserve the privacy of individuals' data. This paper introduces the latest research on various attacks that infringe data privacy and on the corresponding defense techniques.
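
As one concrete example of the defense side surveyed above, the sketch below shows a DP-SGD-style update that clips per-example gradients and adds Gaussian noise before averaging. This is a generic, minimal illustration of privacy-preserving training, not a technique taken from the paper; the clipping norm and noise multiplier are arbitrary.

```python
# Illustrative sketch only (not from the paper): a DP-SGD-style update that
# clips per-example gradients and adds Gaussian noise, one common defense
# against privacy attacks on training data.
import numpy as np

def private_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1):
    """Average per-example gradients after clipping, then add Gaussian noise."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    noise = np.random.normal(
        0.0, noise_multiplier * clip_norm / len(per_example_grads),
        size=mean_grad.shape)
    return mean_grad + noise

# Example: 8 per-example gradients of a 4-parameter model
grads = [np.random.randn(4) for _ in range(8)]
print(private_gradient(grads))
```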

Research on Semiconductor Technology Roadmap by the Institute of Semiconductor Engineers (반도체공학회의 반도체 기술 발전 로드맵 연구)

  • Hyunchol Shin;Ilku Nam;Jun-Mo Yang;Byung-Wook Min;Kyuho Lee;Chiweon Yoon;Jean Ho Song
    • Transactions on Semiconductor Engineering / v.2 no.3 / pp.19-26 / 2024
  • Semiconductors are one of the essential technologies in modern electronic devices and systems, so it is necessary to predict and propose a semiconductor technology development roadmap. This study describes the key semiconductor technology issues, research and development trends, and the future roadmap in four areas: More-Moore device integration technology, system-specific application processor technology, artificial intelligence/machine learning (AI/ML) processor technology, and outside-system connectivity via optical and wireless communication.

A Study on Design Space Exploration on AI accelerator (AI 가속기 설계 영역 탐색에 대한 연구)

  • Lee, Dong-Ju;Paek, Yun-Heung
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.535-537 / 2022
  • An AI accelerator is a type of hardware accelerator or computer system designed to speed up the computations of artificial intelligence applications, including machine learning and deep learning. Designing such an accelerator requires design space exploration (DSE), and this paper introduces design space exploration targeting convolutional neural networks (CNNs) in particular.
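
To make the idea of design space exploration concrete, the sketch below exhaustively scans a tiny, hypothetical space of processing-element array sizes and weight-tile sizes for one convolutional layer and ranks the feasible points with a crude cycle/buffer cost model. The cost model, parameter names, and budgets are all assumptions for illustration, not the paper's methodology.

```python
# Minimal DSE sketch (hypothetical cost model, not the paper's tool).
from itertools import product

def cost(pe_rows, pe_cols, tile_k, layer):
    """Crude analytical model: ideal cycle count and on-chip weight-buffer need."""
    macs = layer["H"] * layer["W"] * layer["C"] * layer["K"] * layer["R"] * layer["S"]
    cycles = macs / (pe_rows * pe_cols)                                   # assume full PE utilization
    buffer_kb = tile_k * layer["R"] * layer["S"] * layer["C"] * 2 / 1024  # fp16 weight tile
    return cycles, buffer_kb

layer = {"H": 56, "W": 56, "C": 64, "K": 128, "R": 3, "S": 3}             # one example conv layer
candidates = []
for pe_rows, pe_cols, tile_k in product([8, 16, 32], [8, 16, 32], [16, 32, 64]):
    cycles, buf = cost(pe_rows, pe_cols, tile_k, layer)
    if buf <= 256:                                                        # on-chip buffer budget (KB)
        candidates.append((cycles, buf, pe_rows, pe_cols, tile_k))

# Lowest-latency feasible design point
print("best (cycles, buffer_kb, pe_rows, pe_cols, tile_k):", sorted(candidates)[0])
```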

Trends in AI Computing Processor Semiconductors Including ETRI's Autonomous Driving AI Processor (인공지능 컴퓨팅 프로세서 반도체 동향과 ETRI의 자율주행 인공지능 프로세서)

  • Yang, J.M.;Kwon, Y.S.;Kang, S.W.
    • Electronics and Telecommunications Trends / v.32 no.6 / pp.57-65 / 2017
  • Neural network based AI computing is a promising technology that reflects the recognition and decision-making processes of human beings. Early AI computing processors were composed of GPUs and CPUs; however, the dramatic increase in floating-point operations requires an energy-efficient AI processor with a highly parallelized architecture. In this paper, we analyze the trends in processor architectures for AI computing. Some architectures are still built from GPUs, but they reduce the size of each processing unit by allowing half-precision operations and thereby raise the processing-unit density. Other architectures concentrate on matrix multiplication and require dedicated hardware for fast vector operations. Finally, we propose our own inAB processor architecture and introduce domestic cutting-edge processor design capabilities.
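
The half-precision point can be illustrated numerically: the toy snippet below (my own, not from the article) compares an fp16 matrix multiply against an fp32 reference, showing the accuracy that is traded away when smaller, denser arithmetic units are used.

```python
# Toy illustration of the precision/density trade-off behind half-precision units.
import numpy as np

a32 = np.random.rand(256, 256).astype(np.float32)
b32 = np.random.rand(256, 256).astype(np.float32)

ref  = a32 @ b32                                                   # fp32 reference
half = (a32.astype(np.float16) @ b32.astype(np.float16)).astype(np.float32)

rel_err = np.abs(half - ref).max() / np.abs(ref).max()
print(f"max relative error of fp16 matmul: {rel_err:.2e}")
```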

Topic Modeling on Patent and Article Big Data Using BERTopic and Analyzing Technological Trends of AI Semiconductor Industry (BERTopic을 활용한 텍스트마이닝 기반 인공지능 반도체 기술 및 연구동향 분석)

  • Hyeonkyeong Kim;Junghoon Lee;Sunku Kang
    • Journal of Information Technology Applications and Management / v.31 no.1 / pp.139-161 / 2024
  • The Fourth Industrial Revolution has spurred widespread adoption of AI-based services, driving global interest in AI semiconductors for efficient large-scale computation. Text mining research, which historically relied on LDA, has evolved through integration with machine learning, exemplified by BERTopic, introduced in 2021. This study employs BERTopic to analyze AI semiconductor-related patents and research data, generating 48 topics from 2,256 patents and 40 topics from 1,112 publications. While providing valuable insights into technology trends, the study acknowledges the limitations of taking a macro approach to the entire AI semiconductor industry. Future research may explore specific technologies for more nuanced insights as the industry matures.
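
The basic BERTopic flow used in this kind of study looks roughly like the sketch below. It follows the library's standard fit/transform API; the 20 Newsgroups corpus is only a placeholder, since the paper's 2,256 patents and 1,112 publications are not reproduced here, and the constructor argument is illustrative.

```python
# Standard BERTopic usage sketch; the corpus is a stand-in, not the paper's data.
from sklearn.datasets import fetch_20newsgroups
from bertopic import BERTopic

# Placeholder corpus; the paper uses patent and publication abstracts instead.
docs = fetch_20newsgroups(subset="train",
                          remove=("headers", "footers", "quotes")).data[:2000]

topic_model = BERTopic(language="english")
topics, probs = topic_model.fit_transform(docs)

# Topic id, size, and representative terms per discovered topic
print(topic_model.get_topic_info().head())
```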

Implementation of Target Object Tracking Method using Unity ML-Agent Toolkit (Unity ML-Agents Toolkit을 활용한 대상 객체 추적 머신러닝 구현)

  • Han, Seok Ho;Lee, Yong-Hwan
    • Journal of the Semiconductor & Display Technology / v.21 no.3 / pp.110-113 / 2022
  • Non-playable characters (NPCs) play an important role in improving the immersion of a game and the interest of the user, and implementing NPCs with reinforcement learning has recently been in the spotlight. In this paper, we study a target-tracking method based on reinforcement learning and implement an AI agent that tracks a specific target object while avoiding traps using the Unity ML-Agents Toolkit. The implementation is built in the Unity game engine, and simulations are conducted through a number of experiments. The experimental results show that the agent tracks the target while avoiding traps with good performance.
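
The reward-shaping idea behind tracking a target while avoiding traps can be shown with a much simpler, conceptual sketch than the Unity ML-Agents setup used in the paper: the tabular Q-learning example below rewards reaching the target, penalizes traps, and charges a small step cost. Grid size, rewards, and hyperparameters are all illustrative assumptions.

```python
# Conceptual sketch only (not the paper's Unity ML-Agents implementation).
import random

SIZE, TARGET, TRAPS = 5, (4, 4), {(2, 2), (3, 1)}
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]          # up, down, left, right
Q = {}

def step(state, a):
    nxt = (min(max(state[0] + a[0], 0), SIZE - 1),
           min(max(state[1] + a[1], 0), SIZE - 1))
    if nxt == TARGET:
        return nxt, 10.0, True        # reached target
    if nxt in TRAPS:
        return nxt, -5.0, True        # fell into a trap
    return nxt, -0.1, False           # small step cost

for episode in range(2000):
    s, done = (0, 0), False
    while not done:
        # Epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < 0.1 else \
            max(ACTIONS, key=lambda x: Q.get((s, x), 0.0))
        s2, r, done = step(s, a)
        best_next = max(Q.get((s2, x), 0.0) for x in ACTIONS)
        Q[(s, a)] = Q.get((s, a), 0.0) + 0.1 * (r + 0.9 * best_next - Q.get((s, a), 0.0))
        s = s2

print("greedy action at start:", max(ACTIONS, key=lambda x: Q.get(((0, 0), x), 0.0)))
```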

AI Accelerator Design for Edge Devices (엣지 디바이스를 위한 AI 가속기 설계 방법)

  • Whoi Ree, Ha;Hyunjun Kim;Yunheung Paek
    • Proceedings of the Korea Information Processing Society Conference / 2024.05a / pp.723-726 / 2024
  • A DNN accelerator that supports a single dataflow is resource-efficient, but its acceleration is limited across different DNN models. In contrast, a reconfigurable dataflow accelerator (RDA), which supports every dataflow and uses the optimal one for each layer, achieves substantial speedups but is inefficient because of the extra hardware needed to support multiple dataflows. This study therefore proposes a new accelerator design that supports only a restricted set of dataflows to reduce the additional hardware requirement, and that is optimized by reusing overlapping hardware. This approach is essential for applying the RDA concept to edge devices with tight resource constraints, and it minimizes the drawbacks of existing RDAs to reach a sweet spot between performance and resource efficiency. Experimental results show that the proposed accelerator achieves 32% higher energy efficiency than an existing RDA, with only a 1% difference in latency.
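
The per-layer dataflow selection that an RDA performs can be sketched with a toy analytical model, shown below. The supported dataflow set, layer statistics, and energy weights are all made-up assumptions for illustration; the point is only that each layer independently picks the cheapest dataflow from the restricted set.

```python
# Hypothetical illustration of per-layer dataflow selection (numbers and cost
# model are made up, not the paper's).
SUPPORTED = ["weight_stationary", "output_stationary"]    # restricted dataflow set (assumption)

def energy(layer, dataflow):
    """Toy cost model: the reuse pattern decides which memory traffic dominates."""
    if dataflow == "weight_stationary":
        return layer["acts"] * 2 + layer["weights"] * 1
    if dataflow == "output_stationary":
        return layer["acts"] * 1 + layer["weights"] * 2
    raise ValueError(dataflow)

layers = [
    {"name": "conv1", "acts": 3.2e6, "weights": 0.1e6},
    {"name": "conv5", "acts": 0.4e6, "weights": 2.4e6},
    {"name": "fc",    "acts": 0.01e6, "weights": 4.1e6},
]

total = 0.0
for layer in layers:
    best = min(SUPPORTED, key=lambda d: energy(layer, d))   # cheapest supported dataflow
    total += energy(layer, best)
    print(layer["name"], "->", best)
print("total modeled energy:", total)
```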

Trends of Compiler Development for AI Processor (인공지능 프로세서 컴파일러 개발 동향)

  • Kim, J.K.;Kim, H.J.;Cho, Y.C.P.;Kim, H.M.;Lyuh, C.G.;Han, J.;Kwon, Y.
    • Electronics and Telecommunications Trends / v.36 no.2 / pp.32-42 / 2021
  • The rapid growth of deep-learning applications has spurred the R&D of artificial intelligence (AI) processors. A dedicated software framework, such as a compiler and runtime APIs, is required to achieve maximum processor performance. There are various compilers and frameworks for AI training and inference. In this study, we present the features and characteristics of AI compilers, training frameworks, and inference engines. In addition, we focus on the internals of compiler frameworks, which are based on either basic linear algebra subprograms or an intermediate representation. For an in-depth insight, we present the compiler infrastructure, internal components, and operation flow of ETRI's "AI-Ware." The software framework's significant role is evidenced by the optimized neural processing unit code produced by the compiler after various optimization passes, such as scheduling, architecture-aware optimization, schedule selection, and power optimization. We conclude the study with thoughts about the future of state-of-the-art AI compilers.
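
The kind of graph-level transformation such compiler frameworks apply before code generation can be illustrated with a toy pass, shown below. This is a generic operator-fusion example over a made-up list-based IR, not AI-Ware's actual pass pipeline or data structures.

```python
# Generic illustration (not ETRI's AI-Ware): a simple pass over a toy IR that
# fuses an elementwise activation into the preceding convolution.
def fuse_conv_relu(ops):
    fused, skip = [], False
    for i, op in enumerate(ops):
        if skip:                      # this op was consumed by the previous fusion
            skip = False
            continue
        if op["kind"] == "conv" and i + 1 < len(ops) and ops[i + 1]["kind"] == "relu":
            fused.append({"kind": "conv_relu", "name": op["name"] + "+relu"})
            skip = True
        else:
            fused.append(op)
    return fused

graph = [{"kind": "conv", "name": "conv1"}, {"kind": "relu", "name": "relu1"},
         {"kind": "pool", "name": "pool1"}]
print(fuse_conv_relu(graph))
```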

Enhanced Machine Learning Preprocessing Techniques for Optimization of Semiconductor Process Data in Smart Factories (스마트 팩토리 반도체 공정 데이터 최적화를 위한 향상된 머신러닝 전처리 방법 연구)

  • Seung-Gyu Choi;Seung-Jae Lee;Choon-Sung Nam
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.4 / pp.57-64 / 2024
  • The introduction of smart factories has transformed manufacturing towards more objective and efficient line management. However, most companies are not effectively utilizing the vast amount of sensor data collected every second. This study aims to use this data to predict product quality and manage production processes efficiently. Because of security issues, company-specific sensor data could not be obtained, so semiconductor process-related training data from the "SAMSUNG SDS Brightics AI" site was used. Data preprocessing, including removal of missing values and outliers, scaling, and feature elimination, was crucial for obtaining optimal sensor data. Oversampling was used to balance the imbalanced training dataset. The SVM (RBF kernel) model achieved high performance (accuracy: 97.07%, geometric mean: 96.61%), surpassing the MLP model implemented by "SAMSUNG SDS Brightics AI". This research can be applied to various topics, such as predicting component lifecycles and process conditions.
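
A pipeline of the shape described above can be sketched as follows. The data is synthetic and SMOTE is assumed as the oversampling method (the abstract only says "oversampling"), so this is an illustration of the preprocessing-plus-SVM flow rather than a reproduction of the study.

```python
# Sketch of a preprocessing + oversampling + SVM (RBF) pipeline on synthetic,
# imbalanced stand-in data; not the study's dataset or exact settings.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                        # stand-in sensor features
y = (rng.random(1000) < 0.08).astype(int)              # imbalanced pass/fail label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # balance the classes
clf = SVC(kernel="rbf").fit(X_bal, y_bal)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```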

A Study on Multimodal Neural Network for Intrusion Detection System (멀티 모달 침입 탐지 시스템에 관한 연구)

  • Ha, Whoi Ree;Ahn, Sunwoo;Cho, Myunghyun;Ahn, Seonggwan;Paek, Yunheung
    • Proceedings of the Korea Information Processing Society Conference / 2021.05a / pp.216-218 / 2021
  • Recent research on intrusion detection systems has shifted from signature-based approaches to AI-based ones, because AI-based detection can identify previously unseen malicious behavior, which is a limitation of signature matching. Log data record important system events and thus reflect the state of the system, so intrusion detection systems that use log data have been studied actively. However, because logs reflect only part of the system state, they are easy to evade; to compensate for this, we propose a multimodal intrusion detection system that additionally uses system call information.
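
A minimal sketch of the multimodal idea (my own, not the paper's architecture) is shown below: log features and system-call features are encoded separately, and the two embeddings are concatenated before a final benign-vs-intrusion classifier. Input dimensions and layer sizes are arbitrary.

```python
# Minimal multimodal fusion sketch; dimensions and layers are illustrative.
import torch
import torch.nn as nn

class MultimodalIDS(nn.Module):
    def __init__(self, log_dim=64, syscall_dim=128, hidden=32):
        super().__init__()
        self.log_enc = nn.Sequential(nn.Linear(log_dim, hidden), nn.ReLU())
        self.sys_enc = nn.Sequential(nn.Linear(syscall_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, 2)           # benign vs. intrusion

    def forward(self, log_x, sys_x):
        # Encode each modality, then fuse by concatenation
        z = torch.cat([self.log_enc(log_x), self.sys_enc(sys_x)], dim=-1)
        return self.head(z)

model = MultimodalIDS()
logits = model(torch.randn(4, 64), torch.randn(4, 128))
print(logits.shape)   # torch.Size([4, 2])
```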