• Title/Summary/Keyword: Artificial intelligence Semiconductor


Gated Recurrent Unit based Prefetching for Graph Processing (그래프 프로세싱을 위한 GRU 기반 프리페칭)

  • Shivani Jadhav;Farman Ullah;Jeong Eun Nah;Su-Kyung Yoon
    • Journal of the Semiconductor & Display Technology / v.22 no.2 / pp.6-10 / 2023
  • Data that is likely to be accessed can be predicted and stored in the cache in advance to prevent cache misses, reducing the processor's request and wait times. As a result, the processor can run without stalling, hiding memory latency. Exploiting the temporal and spatial locality of memory accesses, a prefetcher predicts which memory address will be accessed next. We propose a prefetcher based on the GRU (Gated Recurrent Unit) model, which is well suited to time-series data. The currently accessed address is represented in binary, and the GRU model is trained on the differences (deltas) between consecutive memory accesses. Using the learned access patterns, the proposed data prefetcher then predicts the memory address to be accessed next. Compared with a multi-layer perceptron, our prefetcher showed better prediction results.

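As a minimal sketch of the input preparation this abstract describes (deltas between consecutive memory accesses, each encoded as a fixed-width binary vector for GRU training), the following illustrates the idea; the function names and 16-bit width are our own assumptions, not the paper's code:

```python
def access_deltas(addresses):
    """Difference (delta) between each pair of consecutive memory accesses."""
    return [b - a for a, b in zip(addresses, addresses[1:])]

def to_binary_vector(value, width=16):
    """Fixed-width two's-complement bit vector, most significant bit first."""
    mask = (1 << width) - 1
    return [(value & mask) >> i & 1 for i in reversed(range(width))]

# A strided access trace a prefetcher could learn from.
trace = [0x1000, 0x1040, 0x1080, 0x10C0]
deltas = access_deltas(trace)                    # constant stride of 64 bytes
samples = [to_binary_vector(d) for d in deltas]  # candidate GRU training inputs
```

These vectors would then be fed to a GRU sequence model (in any deep learning framework) to predict the next delta.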

A Study on CFD Result Analysis of Mist-CVD using Artificial Intelligence Method (인공지능기법을 이용한 초음파분무화학기상증착의 유동해석 결과분석에 관한 연구)

  • Joohwan Ha;Seokyoon Shin;Junyoung Kim;Changwoo Byun
    • Journal of the Semiconductor & Display Technology / v.22 no.1 / pp.134-138 / 2023
  • This study focuses on analyzing the results of computational fluid dynamics (CFD) simulations of mist chemical vapor deposition for the growth of an epitaxial wafer in power semiconductor technology using artificial intelligence techniques. The conventional approach of predicting the uniformity of the deposited layer using CFD and design of experiments takes considerable time. To overcome this, artificial intelligence methods, widely used for optimization, automation, and prediction in various fields, were applied to analyze the CFD simulation results. The simulation results were analyzed using a supervised deep neural network model for regression, and the predictions were evaluated quantitatively using Euclidean distance calculations. Bayesian optimization was then used to derive the optimal condition: the results obtained through deep neural network training showed a discrepancy of approximately 4% compared with the CFD analysis, and the derived optimal condition yielded an increase of 146.2% over the previous CFD simulation results. These results are expected to have practical applications in various fields.

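The quantitative evaluation mentioned above, a Euclidean distance between predicted and CFD-simulated profiles, can be sketched as follows; the profile values and the relative-error helper are invented for illustration and are not the study's data:

```python
import math

def euclidean_distance(predicted, simulated):
    """Euclidean distance between two deposition-uniformity profiles."""
    return math.sqrt(sum((p - s) ** 2 for p, s in zip(predicted, simulated)))

def mean_relative_discrepancy(predicted, simulated):
    """Mean per-point relative error, the kind of figure behind the ~4% claim."""
    return sum(abs(p - s) / abs(s) for p, s in zip(predicted, simulated)) / len(simulated)

predicted = [0.98, 1.02, 1.05, 0.99]  # hypothetical DNN-predicted thickness (a.u.)
simulated = [1.00, 1.00, 1.00, 1.00]  # hypothetical CFD reference profile (a.u.)
```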

Prediction Model for Solar Power Generation Using Measured Data (측정 데이터를 이용한 태양광 발전량 예측 모델)

  • Yeongseo Park;Sangmin kang;Juseok Moon;Seongjun Cho;Jonghwan Lee
    • Journal of the Semiconductor & Display Technology / v.23 no.3 / pp.102-107 / 2024
  • Previous research on solar power generation forecasting has generally relied on meteorological data, leading to lower prediction accuracy. This study, in contrast, uses actual measured power generation data to train various ANN (Artificial Neural Network) models and compares their prediction performance, describing the characteristics and advantages of each model. The paper presents the principles of solar power generation, the characteristics of solar panels, and the model equations, and it also explains the I-V characteristics of solar cells. The results include a comparison between calculated and actually measured power generation, along with an evaluation of the accuracy of the AI-based generation predictions. The findings confirm that the LSTM (Long Short-Term Memory) model handles time-series data better than the MLP (Multi-Layer Perceptron) model.

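The I-V characteristics of solar cells reviewed above are commonly described by the single-diode model; a sketch follows, with parameter values that are illustrative assumptions rather than the paper's:

```python
import math

def diode_current(v, i_ph=8.0, i_0=1e-9, n=1.3, t=298.15):
    """Cell current from the single-diode model:
    I = I_ph - I_0 * (exp(q*V / (n*k*T)) - 1).
    i_ph: photocurrent (A), i_0: saturation current (A), n: ideality factor."""
    q = 1.602176634e-19  # elementary charge (C)
    k = 1.380649e-23     # Boltzmann constant (J/K)
    return i_ph - i_0 * (math.exp(q * v / (n * k * t)) - 1.0)

# Sweep a few voltages: current stays near I_ph, then drops as V rises.
iv = [(v / 10.0, diode_current(v / 10.0)) for v in range(0, 7)]
```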

Performance Comparison of LSTM-Based Groundwater Level Prediction Model Using Savitzky-Golay Filter and Differential Method (Savitzky-Golay 필터와 미분을 활용한 LSTM 기반 지하수 수위 예측 모델의 성능 비교)

  • Keun-San Song;Young-Jin Song
    • Journal of the Semiconductor & Display Technology / v.22 no.3 / pp.84-89 / 2023
  • In water resource management, data prediction is performed using artificial intelligence, and companies, governments, and institutions continue to attempt to manage resources efficiently in this way. LSTM is a model specialized for processing time-series data: it can identify data patterns that change over time and has been applied to predicting groundwater level data. However, groundwater level data can contain sensor errors, missing values, or outliers, and these problems can degrade the performance of an LSTM model, so data quality needs to be improved in a preprocessing stage. Therefore, in predicting groundwater data, we compare the LSTM model evaluated with MSE against the model trained after normalization of the distribution, and we discuss the importance of analysis and data preprocessing based on the comparison results and the resulting changes.

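The Savitzky-Golay filter named in the title smooths a noisy series by local polynomial fitting; a minimal sketch using the classic window-5, quadratic-fit coefficients (interior points only, with endpoints left unchanged as a simplifying assumption, and invented level values):

```python
def savgol5(y):
    """Savitzky-Golay smoothing, window 5, quadratic fit.
    Uses the classic closed-form coefficients (-3, 12, 17, 12, -3)/35;
    endpoints are copied unchanged as a simplification."""
    c = (-3, 12, 17, 12, -3)
    out = list(y)
    for i in range(2, len(y) - 2):
        out[i] = sum(ci * y[i + k] for ci, k in zip(c, range(-2, 3))) / 35.0
    return out

# A noisy groundwater-level-like series (invented values, meters).
levels = [12.1, 12.4, 11.9, 12.6, 12.2, 12.8, 12.5]
smoothed = savgol5(levels)
```

A quadratic-fit filter reproduces any quadratic signal exactly, which is why it smooths noise without flattening genuine trends.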

ETRI AI Strategy #2: Strengthening Competencies in AI Semiconductor & Computing Technologies (ETRI AI 실행전략 2: AI 반도체 및 컴퓨팅시스템 기술경쟁력 강화)

  • Choi, S.S.;Yeon, S.J.
    • Electronics and Telecommunications Trends / v.35 no.7 / pp.13-22 / 2020
  • There is no denying that computing power has been a crucial driving force behind the development of artificial intelligence today. In addition, artificial intelligence (AI) semiconductors and computing systems are perceived to have promising industrial value in the market along with rapid technological advances. Therefore, success in this field is also meaningful to the nation's growth and competitiveness. In this context, ETRI's AI strategy proposes implementation directions and tasks with the aim of strengthening the technological competitiveness of AI semiconductors and computing systems. The paper contains a brief background of ETRI's AI Strategy #2, research and development trends, and key tasks in four major areas: 1) AI processors, 2) AI computing systems, 3) neuromorphic computing, and 4) quantum computing.

Technical Trends in Hyperscale Artificial Intelligence Processors (초거대 인공지능 프로세서 반도체 기술 개발 동향)

  • W. Jeon;C.G. Lyuh
    • Electronics and Telecommunications Trends / v.38 no.5 / pp.1-11 / 2023
  • The emergence of generative hyperscale artificial intelligence (AI) has enabled new services, such as image-generating AI and conversational AI based on large language models. Such services likely lead to the influx of numerous users, who cannot be handled using conventional AI models. Furthermore, the exponential increase in training data, computations, and high user demand of AI models has led to intensive hardware resource consumption, highlighting the need to develop domain-specific semiconductors for hyperscale AI. In this technical report, we describe development trends in technologies for hyperscale AI processors pursued by domestic and foreign semiconductor companies, such as NVIDIA, Graphcore, Tesla, Google, Meta, SAPEON, FuriosaAI, and Rebellions.

Trends in Artificial Intelligence Semiconductor Memory Technology (인공지능 반도체 메모리 기술 동향)

  • K.D. Hwang;K.I. Oh;J.J. Lee;B.T. Koo
    • Electronics and Telecommunications Trends / v.39 no.5 / pp.21-30 / 2024
  • Memory refers to a storage device that holds data, and it has evolved toward higher read/write speeds and lower power consumption. As large amounts of data are processed by artificial intelligence services, memory capacity requirements keep expanding. Dynamic random-access memory (DRAM) is the most widely used type of memory. In particular, graphics double data rate (GDDR) memory and high-bandwidth memory (HBM) allow large amounts of data to be transferred quickly and are used as memory solutions for artificial intelligence semiconductors. We analyze development trends in DRAM from the perspectives of processing speed and power consumption, and we summarize the characteristics required for next-generation memory by comparing DRAM with other memory implementations. Moreover, we examine the shortcomings of DRAM and identify next-generation memory candidates that compensate for them. We also describe the operating principles of spin-transfer torque magnetic random-access memory, which may replace DRAM in next-generation devices, and explain its characteristics and advantages.

AI Processor Technology Trends (인공지능 프로세서 기술 동향)

  • Kwon, Youngsu
    • Electronics and Telecommunications Trends / v.33 no.5 / pp.121-134 / 2018
  • The Von Neumann architecture of the modern computer has dominated the computing industry for the past 50 years, sparking the digital revolution and propelling us into today's information age. Recent research focus and market trends have shown significant effort toward the advancement and application of artificial intelligence technologies. Although artificial intelligence has been studied for decades since the Turing machine was first introduced, the field has recently emerged into the spotlight thanks to remarkable milestones such as AlexNet and AlphaGo, whose neural-network-based deep learning methods have achieved ground-breaking performance superior to existing recognition, classification, and decision algorithms. Unprecedented results in a wide variety of applications (drones, autonomous driving, robots, stock markets, computer vision, voice, and so on) have signaled the beginning of a golden age for artificial intelligence after 40 years of relative dormancy. Algorithmic research continues to progress at a breathtaking pace, as evidenced by the rate at which new neural networks are announced. However, traditional Von Neumann architectures have proven inadequate in computational power and inherently inefficient at the massively parallel computations that characterize deep neural networks. Consequently, global conglomerates such as Intel, Huawei, and Google, as well as large domestic corporations and fabless companies, are developing dedicated semiconductor chips customized for artificial intelligence computations. The AI Processor Research Laboratory at ETRI is focusing on the research and development of ultra-low-power AI processor chips. In this article, we present the current trends in computing platform, parallel processing, AI processor, and super-threaded AI processor research being conducted at ETRI.

Implementation of Probabilistic Predictive Artificial Intelligence for Remote Diagnosis in Aging Society (고령화 사회 원격 진료를 위한 확률론적 예측인공지능 연구)

  • Jeong, Jae-Seung;Ju, Hyunsu
    • Prospectives of Industrial Chemistry / v.23 no.6 / pp.3-13 / 2020
  • The transition to a low-birth-rate, aging society is causing many social problems not only in South Korea but worldwide. Among them, the growing demand for medical care driven by the increase in the elderly population, together with the shortage of medical personnel to support it, is an imminent social problem. The Fourth Industrial Revolution has produced innovative solutions to various social problems, and in this article we discuss the use of artificial intelligence to support remote medical care as a solution to the shortage of medical personnel in the coming aging society. Many artificial intelligence algorithms for disease diagnosis and prediction have already been developed, but instead of the convolutional neural network or conventional perceptron architectures commonly used in deep learning, we focus on the Bayesian neural network, a probabilistic artificial neural network. In particular, we introduce research using the naive Bayes classifier, an algorithm that is computationally efficient, well suited to implementation in neuromorphic hardware, and strong at solving real diagnosis prediction problems.
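The naive Bayes classifier highlighted in this abstract can be sketched as a Bernoulli model over binary symptom vectors; the symptom features, class labels, and toy data below are invented for illustration and are not the study's:

```python
import math

class BernoulliNB:
    """Bernoulli naive Bayes with Laplace smoothing over binary feature vectors."""

    def fit(self, X, y):
        self.labels = sorted(set(y))
        n = len(X[0])
        self.log_prior = {c: math.log(y.count(c) / len(y)) for c in self.labels}
        self.prob = {}
        for c in self.labels:
            rows = [x for x, t in zip(X, y) if t == c]
            # Laplace-smoothed P(feature j = 1 | class c)
            self.prob[c] = [(sum(r[j] for r in rows) + 1) / (len(rows) + 2)
                            for j in range(n)]
        return self

    def predict(self, x):
        def log_post(c):
            return self.log_prior[c] + sum(
                math.log(p if xi else 1.0 - p)
                for xi, p in zip(x, self.prob[c]))
        return max(self.labels, key=log_post)

# Invented toy data: symptom vectors are [fever, cough, fatigue].
X = [[1, 1, 1], [1, 1, 0], [1, 0, 1],   # "flu" cases
     [0, 1, 0], [0, 0, 1], [0, 1, 1]]   # "cold" cases
y = ["flu", "flu", "flu", "cold", "cold", "cold"]
model = BernoulliNB().fit(X, y)
```

Because it only accumulates per-class feature counts and log-sums, this classifier is cheap to evaluate, which is the computational-efficiency property the abstract emphasizes.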

Airborne Fine Particle Measurement Data Analysis and Statistical Significance Analysis (공기중 미세입자 측정 데이터 분석 및 통계 유의차 분석)

  • Sung Jun An;Moon Suk Hwan
    • Journal of the Semiconductor & Display Technology / v.22 no.1 / pp.1-5 / 2023
  • Most production processes for semiconductor chips or display panels are performed in a cleanroom, so environmental management of cleanrooms is very important for product yield and quality control. Among the managed items, airborne particles are representative enough to serve as the basis for the actual cleanroom rating, and as part of the fab or facility monitoring system, a sequential particle monitoring system is mainly used. However, this method has the problem that measurement efficiency decreases as the length of the sampling tube increases, and a test of the statistical significance of this efficiency degradation has rarely been performed. Therefore, in this study, a statistical significance test was performed between the number of particles measured in situ (InSitu) and the number measured at the ends of the sampling tubes (Remote). Through this, the efficiency degradation problem of the sequential particle monitoring system was confirmed by statistical methods.

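A significance test between the InSitu and Remote particle counts can be sketched with Welch's t statistic; the counts below are invented for illustration, and a complete test would also compare |t| against a critical value for the Welch degrees of freedom:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(va / len(a) + vb / len(b))

# Invented example counts: in-situ measurements vs. the remote tube end.
insitu = [120, 118, 125, 121, 119, 123]
remote = [96, 101, 98, 95, 99, 97]
t_stat = welch_t(insitu, remote)  # a large |t| suggests significant particle loss
```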