• Title/Summary/Keyword: New Algorithm


Process Fault Probability Generation via ARIMA Time Series Modeling of Etch Tool Data

  • Arshad, Muhammad Zeeshan;Nawaz, Javeria;Park, Jin-Su;Shin, Sung-Won;Hong, Sang-Jeen
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2012.02a
    • /
    • pp.241-241
    • /
    • 2012
  • The semiconductor industry has been taking advantage of improvements in process technology in order to maintain reduced device geometries and stringent performance specifications. As a result, semiconductor manufacturing has grown to hundreds of sequential process steps, and this number is expected to keep increasing, which may in turn reduce the yield. With a large amount of investment at stake, this motivates tighter process control and fault diagnosis. Continuous improvement in the semiconductor industry demands advancements in process control and monitoring to the same degree. Any fault in the process must be detected and classified with a high degree of precision, and diagnosed if possible. A detected abnormality in the system is then classified to locate the source of the variation. The performance of a fault detection system is directly reflected in the yield, so a highly capable fault detection system is always desirable. In this research, time series modeling of data from an etch tool has been investigated for the ultimate purpose of fault diagnosis. The tool data consisted of a number of different parameters, each recorded at fixed time points. Because the data had been collected over a number of runs, it was not synchronized, owing to variable delays and offsets in the data acquisition system and networks. The data was therefore synchronized using a variant of the Dynamic Time Warping (DTW) algorithm. The AutoRegressive Integrated Moving Average (ARIMA) model was then applied to the synchronized data. The ARIMA model combines the autoregressive and moving-average models to relate the present value of the time series to its past values. As new parameter values are received from the equipment, the model uses them together with the previous ones to provide one-step-ahead predictions for each parameter. Statistical comparison of these predictions with the actual values gives each parameter's fault probability at each time point and, once a run finishes, for each run. This work will be extended by applying a suitable probability-generating function and combining the probabilities of different parameters using Dempster-Shafer Theory (DST). DST provides a way to combine evidence available from different sources and gives a joint degree of belief in a hypothesis. This will give a combined belief of fault in the process with high precision.
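The one-step-ahead prediction and residual-based fault scoring described above can be sketched as follows. This is a minimal illustration using an AR(1) model fitted by least squares rather than the paper's full ARIMA model on real etch-tool data; the function names and the mapping of the residual z-score to a probability via the normal CDF are assumptions made for the example.

```python
import math
import random

def ar1_fit(series):
    """Least-squares fit of an AR(1) model x_t = a * x_{t-1} + c."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

def fault_probability(series, new_value):
    """One-step-ahead prediction; the residual z-score is mapped to a
    fault score in [0, 1) via the two-sided normal tail."""
    a, c = ar1_fit(series)
    preds = [a * x + c for x in series[:-1]]
    resid = [y - p for y, p in zip(series[1:], preds)]
    mu = sum(resid) / len(resid)
    sd = (sum((r - mu) ** 2 for r in resid) / len(resid)) ** 0.5
    pred = a * series[-1] + c
    z = abs(new_value - pred - mu) / sd
    return math.erf(z / math.sqrt(2))  # large deviation -> score near 1
```

A value that matches the prediction yields a score near 0; an abrupt excursion yields a score near 1, which is what the per-parameter fault probabilities feed into the planned DST combination.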


Application of neural network for airship take-off and landing mode by buoyancy control (기낭 부력 제어에 의한 비행선 이착륙의 인공신경망 적용)

  • Chang, Yong-Jin;Woo, Gui-Ae;Kim, Jong-Kwon;Lee, Dae-Woo;Cho, Kyeum-Rae
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.33 no.2
    • /
    • pp.84-91
    • /
    • 2005
  • For a long time, airship takeoff and landing were controlled by human handling. With the development of autonomous control systems, exact control during takeoff and landing became necessary, and many methods and algorithms have been suggested. This paper presents the results of airship takeoff and landing by buoyancy control, using air-ballonet volume changes and pitch-angle control for stable flight within the desired altitude. Given the complexity of the airship's dynamics, a simple PID controller was applied first. Because of varying atmospheric conditions, this controller did not give satisfactory results. Therefore, a new control method was designed to rapidly reduce the error between the designed and actual trajectories through a learning algorithm using an artificial neural network. In general, an ANN has weaknesses such as long training times and the need to select the numbers of neurons and hidden layers required to deal with a complex problem. To overcome these drawbacks, an RBFN (radial basis function network) controller was developed in this paper. The RBFN weights are acquired by learning that reduces the error between the desired and actual outputs of the airship dynamics, suppressing the effect of disturbances. Simulation results show that the RBFN controller is superior to the PID controller, whose maximum error is 15 m.
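As a rough illustration of the RBFN idea, the sketch below fits a radial basis function network to a one-dimensional target with a simple LMS update on the output weights. This is not the paper's airship controller; the Gaussian basis, the center spacing, and the learning rate are assumptions chosen only to make the example run.

```python
import math

def rbf_features(x, centers, width=0.5):
    # Gaussian radial basis activations for scalar input x
    return [math.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centers]

def train_rbfn(xs, ys, centers, lr=0.1, epochs=500):
    # LMS (delta-rule) update on the linear output weights of the RBFN
    w = [0.0] * len(centers)
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            phi = rbf_features(x, centers)
            err = sum(wi * p for wi, p in zip(w, phi)) - y
            w = [wi - lr * err * p for wi, p in zip(w, phi)]
    return w

def rbfn_predict(x, w, centers):
    return sum(wi * p for wi, p in zip(w, rbf_features(x, centers)))
```

Because only the output weights are learned (the centers stay fixed), training is a linear problem, which is the property that lets an RBFN avoid the long training times and layer-sizing choices of a general ANN.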

Wind Estimation Power Control using Wind Turbine Power and Rotor speed (풍력터빈의 출력과 회전속도를 이용한 풍속예측 출력제어)

  • Ko, Seung-Youn;Kim, Ho-Chan;Huh, Jong-Chul;Kang, Min-Jae
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.17 no.4
    • /
    • pp.92-99
    • /
    • 2016
  • A wind turbine is controlled to obtain the maximum power below its rated wind speed. Among the methods for obtaining maximum power, TSR (Tip Speed Ratio) optimal control and P&O (Perturbation and Observation) control are widely used. The P&O control algorithm, which uses the turbine power and rotational speed, is simple, but its slow response is a weak point. TSR control responds quickly, but it requires a precise wind speed. The wind speed can be measured or estimated, but estimation methods are mostly used, because it is difficult to avoid blade interference when measuring the wind speed near the blades. Neural networks and various numerical methods have been applied to estimate the wind speed, because it is an inverse problem; even with these methods, however, estimating the wind speed remains difficult. In this paper, a new method is introduced to estimate the wind speed from the wind-power curve by using the turbine power and rotational speed. Matlab/Simulink simulations confirm that the proposed method estimates the wind speed properly so as to obtain the maximum power.
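The inverse problem can be sketched numerically. Here the turbine power is modeled as P = ½ρACp(λ)v³ with tip speed ratio λ = ωR/v; the Cp curve below is a hypothetical stand-in (a real turbine would use a measured or fitted curve), and the estimator simply scans a wind-speed grid for the best match to the measured power at the given rotor speed, not the paper's actual method.

```python
import math

def cp(tsr):
    # hypothetical power-coefficient curve peaking near tsr = 8 (illustrative only)
    return 0.5 * math.exp(-((tsr - 8.0) ** 2) / 8.0)

def turbine_power(v, omega, rho=1.225, radius=40.0):
    # P = 0.5 * rho * A * Cp(lambda) * v^3, with lambda = omega * R / v
    area = math.pi * radius ** 2
    return 0.5 * rho * area * cp(omega * radius / v) * v ** 3

def estimate_wind_speed(power, omega):
    # brute-force scan over plausible wind speeds (2 .. 25 m/s)
    grid = [2.0 + 0.01 * i for i in range(2301)]
    return min(grid, key=lambda v: abs(turbine_power(v, omega) - power))
```

Note that at a fixed rotor speed P(v) is not monotone, so in practice the search must be restricted to the current operating branch of the power curve to keep the inversion unambiguous.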

Algorithms for Indexing and Integrating MPEG-7 Visual Descriptors (MPEG-7 시각 정보 기술자의 인덱싱 및 결합 알고리즘)

  • Song, Chi-Ill;Nang, Jong-Ho
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.1
    • /
    • pp.1-10
    • /
    • 2007
  • This paper proposes a new indexing mechanism for MPEG-7 visual descriptors, especially the Dominant Color and Contour Shape descriptors, that guarantees an efficient similarity search in a multimedia database whose visual metadata are represented with MPEG-7. Since the similarity metric used in the Dominant Color descriptor is based on a Gaussian mixture model, the descriptor itself can be transformed into a color histogram in which the distribution of the color values follows a Gaussian distribution. The transformed Dominant Color descriptor (i.e., the color histogram) is then indexed in the proposed indexing mechanism. For indexing the Contour Shape descriptor, we use a two-pass algorithm. In the first pass, since the similarity of two shapes can be roughly measured with the global parameters used in the Contour Shape descriptor, such as eccentricity and circularity, dissimilar image objects are excluded with these global parameters first. Then the similarities between the query and the remaining image objects are measured with the peak parameters of the Contour Shape descriptor. This two-pass approach reduces the computation needed to measure the similarity of image objects with the Contour Shape descriptor. The paper also proposes two schemes for integrating visual descriptors for efficient retrieval of a multimedia database. One uses the weight of a descriptor as a yardstick to determine the number of similar image objects selected with respect to that descriptor; the other uses the weight as the degree of importance of the descriptor in the global similarity measurement. Experimental results show that the proposed indexing and integration schemes produce a remarkable speed-up compared to exact similarity search, although there is some loss of accuracy because of the approximate computation during indexing. The proposed schemes can be used to build a multimedia database represented in MPEG-7 that supports efficient retrieval.
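The two-pass idea can be sketched as follows. The field names and pruning tolerance are hypothetical, not the MPEG-7 encoding; the point is that cheap global parameters (eccentricity, circularity) prune the candidate set before the expensive peak-based comparison runs.

```python
def two_pass_search(query, database, prune_tol=0.2, top_k=3):
    """Pass 1: prune with cheap global shape parameters.
    Pass 2: rank survivors with the expensive peak-based distance."""
    candidates = [obj for obj in database
                  if abs(obj["eccentricity"] - query["eccentricity"]) < prune_tol
                  and abs(obj["circularity"] - query["circularity"]) < prune_tol]

    def peak_distance(a, b):
        # placeholder for the peak-parameter similarity of the descriptor
        return sum(abs(pa - pb) for pa, pb in zip(a["peaks"], b["peaks"]))

    return sorted(candidates, key=lambda o: peak_distance(query, o))[:top_k]
```

Since most dissimilar shapes are rejected in pass 1 with two scalar comparisons, the costly peak matching only runs on a small remainder, which is the source of the reported speed-up.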

Study on the methods of risk assessment of human exposure by using of PVC flooring (PVC 바닥재 인체 노출에 따른 위해성 평가 연구)

  • Kim, Woo Il;Cho, Yoon A;Kim, Min Sun;Lee, Ji Youmg;Kang, Young Yeul;Shin, Sun Kyoung;Jeong, Seong Kyoung;Yeon, Jin Mo
    • Analytical Science and Technology
    • /
    • v.27 no.5
    • /
    • pp.261-268
    • /
    • 2014
  • In advanced countries, a variety of consumer exposure assessment models, including CONSEXPO, have been developed to manage the risks of consumer products containing hazardous materials. The models are used to assess the risks of exposure to hazardous chemicals in consumer products, which serves as a foundation for regulation standards. In this study, exposure assessment models applicable to various scenarios were reviewed, a proper model was applied to the selected products, and risk assessment was conducted at each stage to establish a risk assessment procedure for different types of products. Based on the exposure scenario, exposure factors were selected, and according to the algorithm built on the CONSEXPO exposure model, some levels of phthalates were detected in some types of PVC flooring. However, the correlation between phthalate content and migration rate showed an r-squared of 0.0065, i.e., little correlation, which is inadequate for estimating a standard value. For this reason, it seems valid that the current standard remain in place. Additionally, no new standard was suggested for VOCs, as they were not found harmful to human health, allowing the existing standard to continue to be applied. No migration was found for heavy metals, so risk assessment was not performed for them. Overall, it is presumed that hazards to health through dermal exposure would be very low.
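As an illustration of the kind of calculation a CONSEXPO-style exposure algorithm performs, the sketch below computes an average daily dermal dose from a migration rate. The formula is a generic exposure-assessment convention and every parameter value is hypothetical; this is not the study's actual algorithm or data.

```python
def daily_dermal_dose(migration_rate, contact_area, exposure_time,
                      frequency, body_weight):
    """Average daily dose (mg/kg/day) from dermal contact with a material.

    migration_rate : mg per cm^2 per hour migrating from the material
    contact_area   : skin contact area in cm^2
    exposure_time  : hours of contact per event
    frequency      : events per day
    body_weight    : kg
    """
    return migration_rate * contact_area * exposure_time * frequency / body_weight

# hypothetical scenario: a 15 kg child, 500 cm^2 of skin contact,
# 2 hours per day, migration rate 1e-5 mg/cm^2/h  ->  about 6.7e-4 mg/kg/day
dose = daily_dermal_dose(1e-5, 500.0, 2.0, 1.0, 15.0)
```

In the study's framework, a dose like this would then be compared against a toxicological reference value to decide whether a standard is needed; with a negligible migration rate (as found for heavy metals here), the dose and hence the risk is effectively zero.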

Intelligent I/O Subsystem for Future A/V Embedded Device (멀티미디어 기기를 위한 지능형 입출력 서브시스템)

  • Jang, Hyung-Kyu;Won, Yoo-Jip;Ryu, Jae-Min;Shim, Jun-Seok;Boldyrev, Serguei
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.1_2
    • /
    • pp.79-91
    • /
    • 2006
  • An intelligent disk can improve the overall performance of the I/O subsystem by processing I/O operations on the disk side. At present, however, realizing the intelligent disk seems impossible because of the limitations of the I/O subsystem and the lack of backward compatibility with the traditional I/O interface scheme. In this paper, we propose a new model for the intelligent disk that dynamically optimizes the I/O subsystem using only information related to physical sectors. In this way, the proposed model does not break compatibility with the traditional I/O interface scheme. For this, a boosting algorithm that upgrades a weak learner by repeated learning is used. If the latest learner classifies a recent I/O workload as a multimedia workload, the disk reads more sectors ahead. Also, by embedding this functionality as firmware or an embedded OS within the disk, the overall I/O subsystem can operate more efficiently without additional workload.
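A minimal sketch of the boosting idea, assuming the workload is summarized by a single sector-level feature such as request size: AdaBoost with threshold stumps stands in for whatever weak learner the disk firmware would use, and the sector counts in the readahead decision are made-up values.

```python
import math

def train_adaboost(xs, ys, rounds=5):
    """Minimal AdaBoost over threshold stumps on one feature.
    Labels ys are in {-1, +1}; +1 = multimedia (sequential) workload."""
    n = len(xs)
    w = [1.0 / n] * n
    model = []
    thresholds = sorted(set(xs))
    for _ in range(rounds):
        best = None
        for t in thresholds:
            for sign in (1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if sign * (1 if x > t else -1) != y)
                if best is None or err < best[0]:
                    best = (err, t, sign)
        err, t, sign = best
        err = max(err, 1e-10)                    # avoid log(0) on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)  # stump weight
        model.append((alpha, t, sign))
        w = [wi * math.exp(-alpha * y * sign * (1 if x > t else -1))
             for wi, x, y in zip(w, xs, ys)]
        s = sum(w)
        w = [wi / s for wi in w]                 # re-normalize sample weights
    return model

def classify(model, x):
    score = sum(alpha * sign * (1 if x > t else -1) for alpha, t, sign in model)
    return 1 if score > 0 else -1

def sectors_to_read(model, request_size, base=8, readahead=64):
    # multimedia workloads get aggressive readahead; others a small default
    return readahead if classify(model, request_size) == 1 else base
```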

Stock prediction using combination of BERT sentiment Analysis and Macro economy index

  • Jang, Euna;Choi, HoeRyeon;Lee, HongChul
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.5
    • /
    • pp.47-56
    • /
    • 2020
  • The stock index is used not only as an economic indicator for a country, but also as an indicator for investment judgment, which is why research into predicting the stock index is ongoing. Predicting a stock price index involves technical, fundamental, and psychological factors, and complex combinations of factors must be considered for prediction accuracy. It is therefore necessary to build a prediction model that selects and reflects the technical and auxiliary factors that affect stock price fluctuations. Most existing studies on this topic use news information or the macroeconomic indicators that create market fluctuations, or reflect only a few combinations of indicators. In this paper, we propose an effective combination of news sentiment analysis and various macroeconomic indicators to predict the US Dow Jones Index. After crawling more than 93,000 business news articles from the New York Times covering two years, the sentiment results, analyzed with the natural language processing techniques BERT and NLTK, were combined with macroeconomic indicators (gold prices, oil prices, and five foreign exchange rates affecting the US economy) and applied to the prediction algorithm LSTM, which is known to be well suited to combining numeric and text information. Experimenting with various combinations, the combination of DJI, NLTK, BERT, OIL, GOLD, and EURUSD yielded the smallest MSE value in DJI index prediction.
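The data-shaping step for such a model can be sketched as follows: aligned daily rows of the index value, the two sentiment scores, and the macro indicators are turned into sliding windows with next-day targets for a sequence model. The column order and window length are assumptions; the LSTM itself (e.g., in Keras or PyTorch) is omitted.

```python
def build_windows(features_by_day, window=5):
    """features_by_day: one list per trading day, e.g.
    [dji, bert_sentiment, nltk_sentiment, oil, gold, eurusd].
    Returns (X, y): sliding windows and next-day index targets."""
    X, y = [], []
    for i in range(len(features_by_day) - window):
        X.append(features_by_day[i:i + window])   # shape: (window, n_features)
        y.append(features_by_day[i + window][0])  # next-day index value
    return X, y
```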

A new approach to enhancement of ground penetrating radar target signals by pulse compression (파형압축 기법에 의한 GPR탐사 반사신호 분해능 향상을 위한 새로운 접근)

  • Gaballah, Mahmoud;Sato, Motoyuki
    • Geophysics and Geophysical Exploration
    • /
    • v.12 no.1
    • /
    • pp.77-84
    • /
    • 2009
  • Ground penetrating radar (GPR) is an effective tool for detecting shallow subsurface targets. In many GPR applications, these targets are veiled by the strong waves reflected from the ground surface, so a signal processing technique is needed to separate the target signal from such strong signals. A pulse-compression technique is used in this research to compress the signal width so that the target signal can be separated from the strong contaminating clutter. This work introduces a filter algorithm, based on a Wiener filtering technique, that carries out pulse compression on GPR data. The filter is applied to synthetic and field GPR data acquired over a buried pipe. The discrimination method uses both the reflected signal from the target and the strong ground-surface reflection as a reference signal for pulse compression. For a pulse-compression filter, reference signal selection is an important issue, because as the signal width is compressed the noise level is amplified, especially if the signal-to-noise ratio of the reference signal is low. Analysis of the results obtained from simulated and field GPR data indicates a significant improvement in the GPR image, with good discrimination and reliable separation between the target reflection and the ground-surface reflection. At the same time, however, the noise level increases slightly in the field data, owing to the wide bandwidth of the reference signal, which includes the higher-frequency components of noise. Using the ground-surface reflection as a reference signal, we found that the pulse width could be compressed and the subsurface target reflection could be enhanced.
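A frequency-domain sketch of Wiener-style pulse compression: the reference wavelet's spectrum deconvolves the trace, with a stabilization constant that controls the noise amplification the abstract mentions. The naive O(n²) DFT and the toy wavelet are for illustration only; real GPR processing would use an FFT on measured traces.

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * f * k / n) for k in range(n))
            for f in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[f] * cmath.exp(2j * cmath.pi * f * k / n)
                for f in range(n)).real / n for k in range(n)]

def wiener_compress(trace, reference, eps=1e-4):
    """Compress pulses in `trace` using `reference` (e.g. the ground-surface
    reflection) as the source wavelet, via a frequency-domain Wiener filter.
    Larger eps suppresses noise but widens the compressed pulse."""
    T, R = dft(trace), dft(reference)
    G = [t * r.conjugate() / (abs(r) ** 2 + eps) for t, r in zip(T, R)]
    return idft(G)
```

After compression, each overlapping wavelet in the trace collapses toward a spike at its arrival time, so a target reflection close behind the ground-surface reflection becomes separable.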

A Study on Correlation Analysis and Preference Prediction for Point-of-Interest Recommendation (Point-of-Interest 추천을 위한 매장 간 상관관계 분석 및 선호도 예측 연구)

  • Park, So-Hyun;Park, Young-Ho;Park, Eun-Young;Ihm, Sun-Young
    • Journal of Digital Contents Society
    • /
    • v.19 no.5
    • /
    • pp.871-880
    • /
    • 2018
  • Recently, POI (Point of Interest) recommendation technology has been attracting attention with the increase in consumer-related big data. Previous studies on POI recommendation systems have been limited to specific datasets; the problem is that a study carried out on a particular dataset may be suitable only for that dataset. Therefore, this study analyzes the similarity and correlation between stores using user visit data obtained from integrated sensors installed on the Seoul and Songjeong roads. Based on the analysis results, we study a preference prediction system that recommends stores new users are likely to be interested in. In the experiments, various similarity and correlation analyses were carried out to obtain lists of highly relevant and weakly relevant stores. In addition, we performed comparative experiments on preference prediction accuracy under various conditions. As a result, it was confirmed that Jaccard-similarity-based item collaborative filtering has higher accuracy than the other methods.
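The Jaccard-based item collaborative filtering step can be sketched as follows: each store is represented by the set of visitors the sensors observed, and an unvisited store is scored by its average Jaccard similarity to the stores the user has already visited. The data layout and the averaging are simplified assumptions, not the paper's exact pipeline.

```python
def jaccard(a, b):
    # |A ∩ B| / |A ∪ B| over two visitor sets
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def predict_scores(user_visits, store_visitors, top_k=2):
    """Rank unvisited stores by mean Jaccard similarity of their visitor
    sets to the visitor sets of the user's visited stores."""
    scores = {}
    for store, visitors in store_visitors.items():
        if store in user_visits:
            continue
        sims = [jaccard(visitors, store_visitors[s]) for s in user_visits]
        scores[store] = sum(sims) / len(sims) if sims else 0.0
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

Stores whose visitor populations overlap with the user's known tastes rise to the top, while stores visited by a disjoint crowd are ranked out.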

Analysis of Foundation Procedure for Chosun Dynasty Based on Network (네트워크 기반 조선왕조 건국과정 분석)

  • Kim, Hak Yong
    • The Journal of the Korea Contents Association
    • /
    • v.15 no.5
    • /
    • pp.582-591
    • /
    • 2015
  • Late-Koryeo people networks were constructed from four different history books written from various historical perspectives, covering the period from King Kongmin to the final king of Koryeo, Kongyang. All the networks constructed in this study show scale-free properties, as most social networks do. The Tajo-sillok preface is a subjectively written history book that describes the personal history of Lee Seong-gye and his ancestors; the network study confirms that it is one of the most biased history books. Jeong Do-jeon, known as the architect of the Chosun dynasty, did not contribute greatly to the founding of the dynasty according to both the network study and various historical documents. This network study provides objective historical information on the situation of the late Koryeo and the establishment of the Chosun dynasty. Hub nodes in a network are highly linked nodes, where the number of links is called the degree. Stress centrality measures the positional importance of a node in the network. If both factors, degree and stress centrality, are employed to determine hub nodes, the result reflects both high connectivity and positional importance. By comparing degree and stress centrality values, we elucidate more objective historical facts about the late-Koryeo situation. A new algorithm that considers both degree and stress centrality would be a very useful tool for determining hub nodes.
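The degree-plus-stress-centrality hub criterion can be prototyped as below. Stress centrality is computed here in its standard form, the number of shortest paths between other node pairs passing through a node; the simple normalized-sum combination of degree and stress is an assumption for illustration, not the authors' algorithm.

```python
from collections import deque

def bfs_counts(graph, s):
    # BFS distances and shortest-path counts (sigma) from source s
    dist, sigma = {s: 0}, {s: 1}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                sigma[v] = 0
                q.append(v)
            if dist[v] == dist[u] + 1:
                sigma[v] += sigma[u]
    return dist, sigma

def stress_centrality(graph):
    """Number of shortest paths between other node pairs through each node
    (undirected, unweighted graph given as an adjacency dict)."""
    nodes = list(graph)
    d, sig = {}, {}
    for s in nodes:
        d[s], sig[s] = bfs_counts(graph, s)
    stress = {v: 0 for v in nodes}
    for i, s in enumerate(nodes):
        for t in nodes[i + 1:]:
            if t not in d[s]:
                continue  # disconnected pair
            for v in nodes:
                if v in (s, t) or v not in d[s] or v not in d[t]:
                    continue
                if d[s][v] + d[t][v] == d[s][t]:
                    stress[v] += sig[s][v] * sig[t][v]
    return stress

def hub_score(graph):
    # combine normalized degree and normalized stress (illustrative weighting)
    stress = stress_centrality(graph)
    deg = {v: len(graph[v]) for v in graph}
    max_d = max(deg.values()) or 1
    max_s = max(stress.values()) or 1
    return {v: deg[v] / max_d + stress[v] / max_s for v in graph}
```

On a star graph the center dominates both measures, matching the intuition that a figure who both knows many people and sits on the paths between them is a historical hub.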