• Title/Summary/Keyword: Step input control


A Study on the Compensation of the Inductance Parameters of Interior Permanent-Magnet Synchronous Motors Affected by the Magnet Size

  • Jang, Ik-Sang;Lee, Hyung-Woo;Kim, Won-Ho;Cho, Su-Yeon;Kim, Mi-Jung;Lee, Ki-Doek;Lee, Ju
    • Journal of Magnetics
    • /
    • v.16 no.1
    • /
    • pp.74-76
    • /
    • 2011
  • Interior permanent-magnet synchronous motors (IPMSMs) produce both magnetic and reluctance torques. The reluctance torque is due to the difference between the d- and q-axis inductances arising from the geometric rotor structure. The steady-state performance analysis and precise control of IPMSMs greatly depend on the accurate determination of the parameters. The three essential parameters of an IPMSM are the armature flux linkage of the permanent magnet, the d-axis inductance, and the q-axis inductance. In the basic design step of an IPMSM, the inductance parameters are very important for determining the motor characteristics, such as the input voltage, torque, and efficiency. Thus, it is very important to accurately estimate the values of the motor inductances. The inductance parameters of IPMSMs vary nonlinearly with the magnet size because the iron core is saturated by the magnet and armature-reaction fluxes. In this study, the inductance parameters were calculated using both the magnetic-equivalent-circuit method and the finite-element method (FEM). The calculated parameters were then compensated by a saturation coefficient function, which was also derived from the magnetic-equivalent-circuit method and FEM.
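
As context for why the d- and q-axis inductances matter, the following Python sketch evaluates the standard dq-frame IPMSM torque expression, separating the magnet and reluctance contributions. The numerical parameter values are assumed for illustration and are not taken from the paper.

```python
# Hypothetical illustration of the dq-frame IPMSM torque equation:
# T = (3/2) * p * (lambda_pm * i_q + (L_d - L_q) * i_d * i_q)
# The magnet and reluctance contributions are reported separately.

def ipmsm_torque(p, lambda_pm, L_d, L_q, i_d, i_q):
    """Electromagnetic torque of an IPMSM in the dq frame."""
    magnet_torque = 1.5 * p * lambda_pm * i_q
    reluctance_torque = 1.5 * p * (L_d - L_q) * i_d * i_q
    return magnet_torque, reluctance_torque

if __name__ == "__main__":
    # Assumed example parameters (not from the paper)
    p = 4                    # pole pairs
    lambda_pm = 0.1          # PM flux linkage [Wb]
    L_d, L_q = 3e-3, 6e-3    # d- and q-axis inductances [H]; L_q > L_d in an IPMSM
    i_d, i_q = -20.0, 30.0   # stator currents [A]; negative i_d exploits saliency

    t_m, t_r = ipmsm_torque(p, lambda_pm, L_d, L_q, i_d, i_q)
    print(f"magnet torque    : {t_m:.2f} Nm")
    print(f"reluctance torque: {t_r:.2f} Nm")
    print(f"total torque     : {t_m + t_r:.2f} Nm")
```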

EXTRACTION OF THE LEAN TISSUE BOUNDARY OF A BEEF CARCASS

  • Lee, C. H.;H. Hwang
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 2000.11c
    • /
    • pp.715-721
    • /
    • 2000
  • In this research, a rule- and neural-network-based boundary extraction algorithm was developed. Extracting the boundary of the region of interest, the lean tissue, is essential for quality evaluation of beef based on color machine vision. The major quality features of beef are the size and marbling state of the lean tissue, the color of the fat, and the thickness of the back fat. To evaluate beef quality, extracting the loin part from the sectional image of the beef rib is the crucial first step. Since its boundary is not clear and very difficult to trace, a neural network model was developed to isolate the loin part from the entire input image. At the network training stage, normalized color image data were used. A model reference of the boundary was determined by a binary feature extraction algorithm using the R (red) channel, and 100 sub-images (11×11 masks selected from the maximum extended boundary rectangle) were used as the training data set. Each mask carries information on the curvature of the boundary, and the basic rule in boundary extraction is adaptation to the known curvature of the boundary. The structured model reference and neural-network-based boundary extraction algorithm was developed and implemented on the beef images, and the results were analyzed.
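
For illustration, here is a minimal Python sketch of the kind of mask-based classification the abstract describes: a small neural network labels 11×11 sub-image masks as boundary or non-boundary. The data below are random placeholders, not the beef rib images, and the scikit-learn MLP merely stands in for the paper's network.

```python
# Minimal sketch: train a small neural network to label 11x11 sub-image
# masks as boundary / non-boundary, loosely following the setup above.
# The data here are randomly generated placeholders, not the beef images.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# 100 hypothetical training masks (11x11 = 121 normalized pixel values each)
X_train = rng.random((100, 11 * 11))
y_train = rng.integers(0, 2, size=100)   # 1 = boundary mask, 0 = otherwise

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

# Classify a new candidate mask sampled from an image
candidate = rng.random((1, 11 * 11))
print("boundary" if clf.predict(candidate)[0] == 1 else "not a boundary")
```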


Enhancing Wind Speed and Wind Power Forecasting Using Shape-Wise Feature Engineering: A Novel Approach for Improved Accuracy and Robustness

  • Mulomba Mukendi Christian;Yun Seon Kim;Hyebong Choi;Jaeyoung Lee;SongHee You
    • International Journal of Advanced Culture Technology
    • /
    • v.11 no.4
    • /
    • pp.393-405
    • /
    • 2023
  • Accurate prediction of wind speed and power is vital for enhancing the efficiency of wind energy systems. Numerous solutions have been implemented to date, demonstrating their potential to improve forecasting. Among these, deep learning is perceived as a revolutionary approach in the field. However, despite its effectiveness, the noise present in the collected data remains a significant challenge; it can degrade the performance of these algorithms and lead to inaccurate predictions. In response, this study explores a novel feature engineering approach that alters the input data shape in both Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) and autoregressive models for various forecasting horizons. The results reveal substantial improvements in model resilience against noise caused by step increases in the data, with up to 83% accuracy in predicting unseen data 24 steps ahead. Furthermore, the method consistently provides high accuracy for short-, mid-, and long-term forecasts, outperforming the individual models. These findings pave the way for further research on noise reduction strategies at different forecasting horizons through shape-wise feature engineering.
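
A minimal sketch of the kind of input-shape preparation involved: slicing a wind-speed series into sliding windows shaped for a sequence model such as a CNN-LSTM. The window length, forecast horizon, and placeholder data are assumptions for illustration, not the paper's configuration.

```python
# Minimal sketch of shaping a wind-speed series into supervised windows
# for a sequence model (e.g. CNN-LSTM). Window and horizon lengths are
# assumptions for illustration only.
import numpy as np

def make_windows(series, n_in, n_out):
    """Slice a 1-D series into (samples, n_in, 1) inputs and (samples, n_out) targets."""
    X, y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i:i + n_in])
        y.append(series[i + n_in:i + n_in + n_out])
    X = np.asarray(X)[..., np.newaxis]   # add a channel axis for the CNN part
    return X, np.asarray(y)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    wind_speed = rng.random(500)                         # placeholder hourly series
    X, y = make_windows(wind_speed, n_in=48, n_out=24)   # forecast 24 steps ahead
    print(X.shape, y.shape)                              # (429, 48, 1) (429, 24)
```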

Recurrent Neural Network Modeling of Etch Tool Data: a Preliminary for Fault Inference via Bayesian Networks

  • Nawaz, Javeria;Arshad, Muhammad Zeeshan;Park, Jin-Su;Shin, Sung-Won;Hong, Sang-Jeen
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2012.02a
    • /
    • pp.239-240
    • /
    • 2012
  • With advancements in semiconductor device technologies, manufacturing processes are becoming more complex and it is more difficult to maintain tight process control. As the number of processing steps for fabricating complex chip structures increases, potential fault-inducing factors prevail and their allowable margins are continuously reduced. Therefore, one of the keys to success in semiconductor manufacturing is highly accurate and fast fault detection and classification at each stage, to reduce any undesired variation and identify the cause of a fault. Sensors in the equipment are used to monitor the state of the process: whenever there is a fault in the process, it appears as some variation in the output of one of the sensors monitoring the process. These sensors report, for example, pressure, RF power, and gas flow in the equipment. By relating the data from these sensors to the process condition, an abnormality in the process can be identified, though only with a degree of certainty. Our approach in this research is to capture the features of equipment-condition data from a healthy-process library; the healthy data serve as a reference for upcoming processes, which is made possible by mathematically modeling the acquired data. In this work, a recurrent neural network (RNN) is used. An RNN is a dynamic neural network whose output is a function of previous inputs. In our case we have etch-equipment tool-set data consisting of 22 parameters and 9 runs. The data were first synchronized using the Dynamic Time Warping (DTW) algorithm. The synchronized sensor time series are then provided to the RNN, which trains and restructures itself according to the input and predicts a value one step ahead in time that depends on the past values of the data. Eight runs of process data were used to train the network, while one run was used as a test input to check the performance of the network. Next, a mean-squared-error-based probability-generating function was used to assign a probability of fault to each parameter by comparing the predicted and actual values of the data. In the future we will use Bayesian networks to classify the detected faults. Bayesian networks use directed acyclic graphs that relate different parameters through their conditional dependencies in order to perform inference among them. The relationships between parameters in the data will be used to generate the structure of the Bayesian network, and the posterior probabilities of different faults will then be calculated using inference algorithms.
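
The following sketch illustrates the residual-based fault scoring idea described above: compare one-step-ahead predictions with the actual sensor values and map the per-parameter mean squared error to a fault score. The persistence predictor stands in for the trained RNN, and the exponential probability mapping is an assumed form, not the paper's function.

```python
# Minimal sketch of residual-based fault scoring: compare one-step-ahead
# predictions with actual sensor values and map the per-parameter MSE to a
# fault probability. The predictor here is a trivial persistence model standing
# in for the trained RNN, and the probability mapping is an assumed form.
import numpy as np

def fault_probability(actual, predicted, scale=1.0):
    """Map per-parameter mean squared prediction error to a [0, 1) fault score."""
    mse = np.mean((actual - predicted) ** 2, axis=0)   # one value per parameter
    return 1.0 - np.exp(-mse / scale)                  # assumed scoring function

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    test_run = rng.random((300, 22))          # placeholder: 300 samples, 22 parameters
    predicted = np.roll(test_run, 1, axis=0)  # persistence: predict x[t] with x[t-1]
    scores = fault_probability(test_run[1:], predicted[1:])
    print("most suspicious parameter:", int(np.argmax(scores)))
```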


Proposal of a Step-by-Step Optimized Campus Power Forecast Model using CNN-LSTM Deep Learning (CNN-LSTM 딥러닝 기반 캠퍼스 전력 예측 모델 최적화 단계 제시)

  • Kim, Yein;Lee, Seeun;Kwon, Youngsung
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.10
    • /
    • pp.8-15
    • /
    • 2020
  • A deep-learning forecasting method does not produce consistent results across datasets with different characteristics, even when the same forecasting model and parameters are used. For example, a forecasting model X optimized with dataset A will not produce an optimized result with another dataset B. The forecasting model therefore needs to be optimized to the characteristics of the dataset to increase its accuracy. This paper proposes optimization steps for outlier removal, dataset classification, and CNN-LSTM-based hyperparameter tuning to forecast the daily power usage of a university campus based on hourly-interval data. The proposed model achieves high forecasting accuracy, with a MAPE of about 2% using a single power input variable, and can be used in an EMS to suggest improved strategies to users and consequently improve power efficiency.
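
For illustration, a minimal CNN-LSTM forecaster of the kind described above can be sketched with tensorflow.keras as follows. The layer sizes, the 24-hour input window, and the random placeholder data are assumptions; the paper's tuned hyperparameters are not reproduced here.

```python
# Minimal sketch of a CNN-LSTM forecaster; sizes and window length are assumed.
import numpy as np
import tensorflow as tf

n_in, n_features = 24, 1   # one day of hourly power readings, single input variable

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_in, n_features)),
    tf.keras.layers.Conv1D(filters=32, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),          # next-hour power usage
])
model.compile(optimizer="adam", loss="mae")

# Placeholder data; real use would feed the cleaned, classified campus dataset.
X = np.random.rand(256, n_in, n_features)
y = np.random.rand(256, 1)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1], verbose=0))
```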

Multiple-inputs Dual-outputs Process Characterization and Optimization of HDP-CVD SiO2 Deposition

  • Hong, Sang-Jeen;Hwang, Jong-Ha;Chun, Sang-Hyun;Han, Seung-Soo
    • JSTS:Journal of Semiconductor Technology and Science
    • /
    • v.11 no.3
    • /
    • pp.135-145
    • /
    • 2011
  • Accurate process characterization and optimization are the first steps toward successful advanced process control (APC), and they should be followed by continuous monitoring and control in order to run manufacturing processes most efficiently. In this paper, process characterization and recipe optimization methods with multiple outputs are presented for a high-density-plasma chemical vapor deposition (HDP-CVD) silicon dioxide deposition process. Five controllable process variables (top SiH4, bottom SiH4, O2, top RF power, and bottom RF power) and two responses of interest (deposition rate and uniformity) are considered simultaneously, employing both statistical response surface methodology (RSM) and a neural-network (NN)-based genetic algorithm (GA). On the statistical side, a two-phase experimental design was performed and the resulting models were optimized using a performance index (PI). On the artificial-intelligence side, an NN process model with two outputs was established and recipe synthesis was performed with the GA. Statistical RSM requires a minimal number of experiments to build regression and response-surface models, but the analysis must satisfy the underlying assumptions and demands statistical data-analysis capability. The NN-based GA does not require any underlying assumption for data modeling; however, the selection of input data is important for accurate model construction. Both the statistical and the artificial-intelligence methods yield competitive characterization and optimization results for the HDP-CVD SiO2 deposition process, and the NN-based GA showed a 26% uniformity improvement with 36% less SiH4 gas usage while yielding a deposition rate of 20.8 Å/sec.
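
A minimal sketch of searching a recipe with a genetic algorithm over a surrogate model, in the spirit of the NN-based GA above: the surrogate below is a made-up analytic function standing in for the trained neural network, and the performance index is an assumed weighted trade-off, not the paper's PI.

```python
# Minimal GA recipe search over a placeholder surrogate model.
import numpy as np

rng = np.random.default_rng(3)
N_VARS = 5   # Top SiH4, Bottom SiH4, O2, Top RF, Bottom RF (normalized 0..1)

def surrogate(x):
    """Placeholder for the two-output NN: returns (deposition_rate, non_uniformity)."""
    rate = 10 + 15 * x[..., 0] + 5 * x[..., 2] - 4 * (x[..., 3] - 0.5) ** 2
    unif = 5 - 3 * x[..., 1] + 2 * (x[..., 4] - 0.5) ** 2
    return rate, unif

def performance_index(x):
    rate, unif = surrogate(x)
    return rate - 2.0 * unif    # assumed trade-off: high rate, low non-uniformity

def genetic_search(pop_size=60, generations=40, mut_sigma=0.05):
    pop = rng.random((pop_size, N_VARS))
    for _ in range(generations):
        fitness = performance_index(pop)
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]       # selection
        mates = parents[rng.integers(0, len(parents), len(parents))]
        children = 0.5 * (parents + mates)                        # crossover
        children += rng.normal(0, mut_sigma, children.shape)      # mutation
        pop = np.clip(np.vstack([parents, children]), 0.0, 1.0)
    return pop[np.argmax(performance_index(pop))]

print("best normalized recipe:", np.round(genetic_search(), 3))
```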

Development of a Distributed Rainfall-Runoff System for the Guem River Basin Using an Object-oriented Hydrological Modeling System (객체지향형 수문 모델링 시스템을 이용한 금강유역 분포형 강우-유출 시스템의 개발)

  • Lee, Gi-Ha;Takara, Kaoru;Jung, Kwan-Sue;Kim, Jeong-Yup;Jeon, Ja-Hun
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2009.05a
    • /
    • pp.149-153
    • /
    • 2009
  • Physics-based distributed rainfall-runoff models are now commonly used in a variety of hydrologic applications such as estimating flooding, water pollutant transport, and sediment yield. Moreover, it is not surprising that GIS has become an integral part of hydrologic research, since this technology offers abundant information about the spatial heterogeneity of both model parameters and input data that control hydrological processes. This study presents the development of a distributed rainfall-runoff prediction system for the Guem river basin (9,835 km²) using the Object-oriented Hydrological Modeling System (OHyMoS). We developed three types of element modules, a Slope Runoff Module (SRM), a Channel Routing Module (CRM), and a Dam Reservoir Module (DRM), and incorporated them systematically into a catchment modeling system under OHyMoS. The study basin, delineated from a 250 m DEM (resampled from SRTM90), was divided into 14 mid-size catchments and 80 sub-catchments corresponding to the WAMIS digital map. Each sub-catchment was represented by rectangular slope and channel components; water flows among these components were simulated by the SRM and CRM. In addition, the outflows of the two multi-purpose dams, Yongdam and Daechung, were calculated by the DRM, reflecting decision makers' opinions. The resulting Guem river basin rainfall-runoff modeling system can therefore provide not only each sub-catchment outflow but also dam in- and outflow at a one-hour (or shorter) time step, so that users can readily obtain comprehensive hydrological information for effective and efficient flood control during a flood season.
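
As a rough illustration of composing element modules in an object-oriented style, the sketch below chains a slope-runoff module into a channel-routing module. The simple linear-reservoir equations and the hypothetical hyetograph are assumptions for illustration only; they are not the OHyMoS or the paper's formulations.

```python
# Minimal object-oriented sketch of chaining element modules (slope runoff ->
# channel routing); equations and values are illustrative placeholders.
class SlopeRunoffModule:
    def __init__(self, k=0.2):
        self.storage, self.k = 0.0, k      # simple linear reservoir

    def step(self, rainfall_mm):
        self.storage += rainfall_mm
        outflow = self.k * self.storage
        self.storage -= outflow
        return outflow

class ChannelRoutingModule:
    def __init__(self, k=0.5):
        self.storage, self.k = 0.0, k

    def step(self, inflow):
        self.storage += inflow
        outflow = self.k * self.storage
        self.storage -= outflow
        return outflow

srm, crm = SlopeRunoffModule(), ChannelRoutingModule()
rainfall = [0, 5, 12, 8, 3, 0, 0, 0]                  # hypothetical hourly hyetograph (mm)
hydrograph = [crm.step(srm.step(r)) for r in rainfall]
print([round(q, 2) for q in hydrograph])
```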


Variations of NO Concentration Released from Fertilized Japanese Upland Soil Under Different Soil Moisture Conditions

  • Kim, Deug-Soo;Haruo Tsuruta;Kazuyuki Inubushi
    • Journal of Korean Society for Atmospheric Environment
    • /
    • v.14 no.E
    • /
    • pp.9-17
    • /
    • 1998
  • Oxides of nitrogen play important roles in atmospheric chemistry. Soil has been recognized as a major natural source of NO, and its emission depends on soil parameters such as soil nitrogen availability, soil moisture, and temperature. It is necessary to understand the effects of these controlling parameters on soil NO emission. To understand the effect of soil moisture on NO emission, variations of NO concentration, and the existence of an equilibrium concentration, were observed for ammonium-fertilized Japanese upland soil prepared under different soil moisture conditions, using the closed-chamber technique. Significant increases in NO with soil moisture were found; the maximum occurred at sample ID4 (55% water-filled pore space, WFPS), and NO decreased as soil moisture increased further. No significant NO was emitted from soil samples without fertilizer, but significant NO was emitted from fertilized soil samples. The NO concentration from the soil increased with time and reached a steady state within approximately ten minutes. These results suggest that nitrogen input from fertilizer is responsible for the first step, a sharp increase in NO emission, after which soil moisture becomes the important factor controlling NO emission from the soils. NO concentrations from the soil were also compared with those measured one day after the experiment. The comparison suggests that the soil NO flux may have been stimulated by soil disturbances such as mixing, and that this effect is much stronger in dry soils than in wet soils. Much less NO was released from the soils after a day, suggesting that most of the NO was released within a day after fertilizer application during our experiment. The length of the NO releasing period may depend on the amount of fertilizer applied, the soil moisture condition, and other soil physical parameters.
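
For illustration, the standard closed-chamber estimate fits the initial slope of the headspace concentration and scales it by chamber volume over enclosed soil area, as sketched below. The readings, chamber dimensions, and the simple linear fit are placeholders, not the paper's data or its exact flux formula.

```python
# Minimal sketch of a closed-chamber flux estimate from the initial slope of
# the headspace NO concentration. Numbers are illustrative placeholders.
import numpy as np

minutes = np.array([0, 2, 4, 6, 8, 10], dtype=float)
no_ppbv = np.array([12.0, 18.5, 24.8, 30.9, 36.5, 41.2])   # hypothetical readings

slope_ppbv_per_min, _ = np.polyfit(minutes, no_ppbv, 1)    # dC/dt from a linear fit

chamber_volume_m3 = 0.015      # assumed chamber headspace volume
soil_area_m2 = 0.05            # assumed enclosed soil surface area

# Flux expressed in mixing-ratio units; converting to a mass flux would further
# require air density, temperature, and pressure.
flux = slope_ppbv_per_min * chamber_volume_m3 / soil_area_m2
print(f"dC/dt = {slope_ppbv_per_min:.2f} ppbv/min, flux = {flux:.3f} ppbv*m/min")
```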


On-line Automatic Geometric Correction System of Landsat Imagery (Landsat 영상의 온라인 자동 기하보정 시스템)

  • Yun, YoungBo;Hwang, TaeHyun;Cho, Seong-Ik;Park, Jong-Hyun
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.7 no.4
    • /
    • pp.15-23
    • /
    • 2004
  • To utilize remotely sensed images effectively, it is necessary to correct geometric distortion. Geometric correction is a critical step for removing geometric distortions from satellite images. For geometric correction, ground control points (GCPs) have to be chosen carefully to guarantee the quality of the geocoded satellite images, whether the reference comes from digital maps, GPS surveying, or other data. The traditional approach to geometric correction using GCPs requires substantial manual operation, time, and manpower. In this paper, we present an on-line automatic geometric correction system built around a GCP chip database. The proposed system consists of four parts: image input, GCP chip control, revision of selected GCPs, and output setting. In conclusion, the developed system reduces the processing time and effort of tedious manual geometric correction and promotes the use of Landsat imagery.
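
A minimal sketch of the core correction step: estimating an affine mapping from image to map coordinates by least squares over matched GCPs. The GCP coordinates below are made-up placeholders, and a real workflow would use many more points along with the GCP chip matching the paper describes.

```python
# Minimal sketch: fit an affine image-to-map transform from matched GCPs.
import numpy as np

# (column, row) in the raw Landsat image and corresponding (easting, northing)
image_xy = np.array([[102.0, 215.0], [780.5, 190.3], [455.2, 640.8], [820.1, 700.4]])
map_xy   = np.array([[352100.0, 4101250.0], [372450.0, 4102000.0],
                     [362700.0, 4088500.0], [373600.0, 4086700.0]])

# Affine model: [E, N] = [x, y, 1] @ A; solve for the 3x2 coefficient matrix A.
design = np.hstack([image_xy, np.ones((len(image_xy), 1))])
A, residuals, _, _ = np.linalg.lstsq(design, map_xy, rcond=None)

def image_to_map(x, y):
    return np.array([x, y, 1.0]) @ A

print(np.round(image_to_map(500.0, 400.0), 1))   # map coordinates of an image pixel
```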


A Study on Construction Method of AI based Situation Analysis Dataset for Battlefield Awareness

  • Yukyung Shin;Soyeon Jin;Jongchul Ahn
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.10
    • /
    • pp.37-53
    • /
    • 2023
  • An AI-based intelligent command and control system can automatically analyze the properties of intricate battlefield information and tactical data, and commanders can receive situation analysis results and battlefield awareness through the system to support decision-making. To provide such decision-making support, it is necessary to build a battlefield situation analysis dataset, similar to actual battlefield situations, for training the AI. In this paper, we describe the next step of the dataset construction method from our previous research, 'A Virtual Battlefield Situation Dataset Generation for Battlefield Analysis based on Artificial Intelligence'. We propose a method to build the dataset required to produce the final battlefield situation analysis results that support the commander's decision-making and awareness of the future battlefield. We developed 'Dataset Generator SW', a software tool for building a learning dataset for battlefield situation analysis, and used this tool to perform data labeling. The constructed dataset was fed into a Siamese Network model, and the outputs were ranked with a post-processing ranking algorithm to infer results and verify the dataset construction method.
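
For illustration, the sketch below pairs a (here untrained) shared encoder with cosine similarity and a ranking step, echoing the Siamese-network verification flow described above. The feature size, architecture, and random placeholder vectors are assumptions and do not reproduce the paper's battlefield-situation features or its ranking algorithm.

```python
# Minimal Siamese-style similarity scoring followed by ranking; the untrained
# encoder and random vectors are placeholders to show the flow only.
import numpy as np
import tensorflow as tf

FEATURES = 32

encoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(FEATURES,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(16),          # shared embedding for both branches
])

def similarity(a, b):
    ea = tf.math.l2_normalize(encoder(a), axis=1)
    eb = tf.math.l2_normalize(encoder(b), axis=1)
    return tf.reduce_sum(ea * eb, axis=1)      # cosine similarity per pair

# Rank candidate situations against a query situation (placeholder vectors).
query = np.random.rand(1, FEATURES).astype("float32")
candidates = np.random.rand(10, FEATURES).astype("float32")
scores = similarity(np.repeat(query, 10, axis=0), candidates).numpy()
print("candidates ranked by similarity:", np.argsort(-scores))
```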