• Title/Summary/Keyword: soft error

Search Results: 362

Analysis of Step-Down Converter with Low Ripple for Smart IoT Devices (스마트 사물인터넷 기기용 저리플 방식의 스텝다운 컨버터 분석)

  • Kim, Da-Sol;Al-Shidaifat, AlaaDdin;Gu, Jin-Seon;Kumar, Sandeep;Song, Han-Jung
    • Journal of the Korean Society of Industry Convergence / v.24 no.5 / pp.641-644 / 2021
  • Wearable devices and IoT are being utilized in various fields, and these systems are developing in the direction of multi-functionality, low power consumption, and high speed. In this paper, we propose a DC-DC step-down converter for smart IoT devices. The proposed converter is composed of the control block of the power supply stage and includes an overheat protection circuit, an under-voltage protection circuit, an overvoltage protection circuit, a soft-start circuit, a reference voltage circuit, a ramp generator, an error amplifier, and a hysteresis comparator. The proposed DC-DC converter was designed and fabricated in a MagnaChip/Hynix 180 nm 1-poly 6-metal CMOS process, and the measured results showed good agreement with the simulation results.
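
As a rough point of reference for the "low ripple" goal, the sketch below applies the standard textbook ripple formulas for an ideal buck (step-down) converter; every component value is assumed for illustration, since the abstract gives none.

```python
# Rough output-ripple estimate for an ideal synchronous buck (step-down) converter.
# All component values below are assumed for illustration; the abstract does not give them.

V_IN = 3.7      # input voltage [V], e.g. a Li-ion cell
V_OUT = 1.8     # regulated output [V]
F_SW = 1e6      # switching frequency [Hz]
L = 2.2e-6      # inductor [H]
C_OUT = 10e-6   # output capacitor [F]

duty = V_OUT / V_IN                              # ideal duty cycle
delta_i_l = V_OUT * (1 - duty) / (L * F_SW)      # peak-to-peak inductor ripple current [A]
delta_v_out = delta_i_l / (8 * F_SW * C_OUT)     # peak-to-peak output voltage ripple [V]

print(f"Duty cycle           : {duty:.2f}")
print(f"Inductor ripple      : {delta_i_l*1e3:.1f} mA (peak-to-peak)")
print(f"Output voltage ripple: {delta_v_out*1e3:.3f} mV (peak-to-peak)")
```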

A Study on the Railway Control by Creation of the Causal Loop Diagram - Centering on railroad safety management system technical standards 11.7 - (인과순환지도(CLD) 작성을 통한 철도관제업무에 관한 연구 - 철도안전관리체계 기술기준 11.7 철도관제를 중심으로 -)

  • Park, Jung Soo;Lee, Sang Oh
    • Journal of The Korean Society For Urban Railway / v.6 no.4 / pp.287-298 / 2018
  • This research aims to understand the work of railroad traffic controllers and to make practical suggestions, focusing on the railroad control provisions of the railroad safety management system's technical standards. The analysis uses the causal loop diagram (CLD), the soft (qualitative) method of System Dynamics, and covers item 11.7 of the technical standards, which concerns railroad control work. In addition, we compare and analyze the railroad control operations of four railroad operators, highlighting the importance of railroad control and suggesting practical improvements for the future.

Resistance Performance Simulation of Simple Ship Hull Using Graph Neural Network (그래프 신경망을 이용한 단순 선박 선형의 저항성능 시뮬레이션)

  • TaeWon, Park;Inseob, Kim;Hoon, Lee;Dong-Woo, Park
    • Journal of the Society of Naval Architects of Korea / v.59 no.6 / pp.393-399 / 2022
  • During the ship hull design process, resistance performance is generally estimated by simulation using computational fluid dynamics. Because such simulations demand large amounts of time and computing resources, CPU clusters with tens of cores or more are used to shorten the simulation time so that the hull design can be completed within the ship owner's deadline. In this paper, we propose a method for estimating the resistance performance of a ship hull by simulation with a graph neural network. The method converts the 3D geometric information of the hull mesh and the physical quantities on the surface into a mathematical graph, and is implemented as a deep learning model that predicts the next simulation state from the input state. In resistance performance experiments on a simple hull, the proposed method showed an average error of about 3.5% over the whole simulation.
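
A minimal sketch of the kind of mesh-to-graph conversion and message-passing update described above, using NumPy; the node features, layer sizes, and residual update rule are illustrative assumptions, not the authors' model.

```python
import numpy as np

# Toy hull "mesh": node features are surface quantities (e.g. pressure, coordinates),
# edges follow the mesh connectivity. Sizes and features are illustrative only.
num_nodes, feat_dim, hidden = 6, 4, 8
rng = np.random.default_rng(0)

x = rng.normal(size=(num_nodes, feat_dim))                 # per-node surface state at time t
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]   # undirected mesh edges

W_self = rng.normal(scale=0.1, size=(feat_dim, hidden))
W_nbr = rng.normal(scale=0.1, size=(feat_dim, hidden))
W_out = rng.normal(scale=0.1, size=(hidden, feat_dim))

def gnn_step(x):
    """One message-passing layer: aggregate neighbour features, then update each node."""
    agg = np.zeros_like(x)
    deg = np.zeros(num_nodes)
    for i, j in edges:                  # symmetric aggregation over mesh edges
        agg[i] += x[j]; agg[j] += x[i]
        deg[i] += 1; deg[j] += 1
    agg /= np.maximum(deg, 1)[:, None]  # mean of neighbour features
    h = np.tanh(x @ W_self + agg @ W_nbr)
    return x + h @ W_out                # residual update: predicted state at time t+1

x_next = gnn_step(x)
print("predicted next-state feature matrix:", x_next.shape)
```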

Novel integrative soft computing for daily pan evaporation modeling

  • Zhang, Yu;Liu, LiLi;Zhu, Yongjun;Wang, Peng;Foong, Loke Kok
    • Smart Structures and Systems / v.30 no.4 / pp.421-432 / 2022
  • Given the high significance of correct pan evaporation modeling, this study introduces novel neuro-metaheuristic approaches to improve the accuracy of prediction for this parameter. The vortex search algorithm (VSA), sunflower optimization (SFO), and stochastic fractal search (SFS) are integrated with a multilayer perceptron neural network to create the VSA-MLPNN, SFO-MLPNN, and SFS-MLPNN hybrids. The climate data of the Arcata-Eureka station (operated by the US Environmental Protection Agency) for the years 1986-1989 and for the year 1990 are used for training and testing the models, respectively. Trying different configurations revealed that the best performance of the VSA, SFO, and SFS is obtained for population sizes of 400, 300, and 100, respectively. The results were compared with a conventionally trained MLPNN to examine the effect of the metaheuristic algorithms. Overall, all four models produced very reliable simulations. However, the SFS-MLPNN (mean absolute error MAE = 0.0997 and Pearson correlation coefficient RP = 0.9957) was the most accurate model, followed by the VSA-MLPNN (MAE = 0.1058 and RP = 0.9945), the conventional MLPNN (MAE = 0.1062 and RP = 0.9944), and the SFO-MLPNN (MAE = 0.1305 and RP = 0.9914). The findings indicate that employing the VSA and SFS improves the accuracy of the neural network in predicting pan evaporation. Hence, the suggested models are recommended for future practical applications.
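
A minimal sketch of the underlying idea, training a small MLP's weights with a population-based metaheuristic instead of backpropagation; the search rule below is a generic stand-in for VSA/SFO/SFS, and the data are synthetic, not the Arcata-Eureka records.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data standing in for the climate inputs and daily pan evaporation target.
X = rng.normal(size=(200, 4))
y = X @ np.array([0.4, -0.2, 0.1, 0.3]) + 0.05 * rng.normal(size=200)

def mlp_forward(weights, X, hidden=8):
    """Small one-hidden-layer MLP whose weights come packed in a flat vector."""
    n_in = X.shape[1]
    w1 = weights[: n_in * hidden].reshape(n_in, hidden)
    b1 = weights[n_in * hidden : n_in * hidden + hidden]
    w2 = weights[n_in * hidden + hidden : n_in * hidden + 2 * hidden]
    b2 = weights[-1]
    return np.tanh(X @ w1 + b1) @ w2 + b2

def mae(weights):
    return np.abs(mlp_forward(weights, X) - y).mean()

# Generic population-based search standing in for VSA/SFO/SFS: keep the best
# candidate and sample new weight vectors around it with a shrinking radius.
dim = 4 * 8 + 8 + 8 + 1
best = rng.normal(scale=0.1, size=dim)
best_err = mae(best)
for it in range(300):
    radius = 0.5 * (1 - it / 300) + 0.01
    pop = best + rng.normal(scale=radius, size=(50, dim))   # population of 50 candidates
    errs = np.apply_along_axis(mae, 1, pop)
    if errs.min() < best_err:
        best, best_err = pop[errs.argmin()], errs.min()

print(f"training MAE of metaheuristic-tuned MLP: {best_err:.4f}")
```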

Soft Error Detection & Correction for VLIW Architecture (VLIW 프로세서를 위한 소프트에러 검출 및 수정 기법)

  • Li, Yunrong;Lee, Jongwon;Heo, Ingoo;Kwon, Yongin;Lee, Kyoungwoo;Paek, Yunheung
    • Annual Conference of KIPS / 2011.11a / pp.9-10 / 2011
  • As embedded system design techniques such as low-power supplies, chip-size reduction, and low noise margins advance day by day, soft errors are increasing exponentially. In this paper, we propose a technique for detecting and correcting these soft errors, which cause fatal failures in VLIW architectures.
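
The abstract does not describe the proposed technique, so the sketch below only illustrates a generic, textbook approach to soft-error handling, software-implemented triple modular redundancy (TMR) with majority voting; it is not the paper's VLIW-specific method.

```python
# Generic illustration of software-implemented triple modular redundancy (TMR):
# the computation is executed three times and a majority vote masks a single
# soft-error-induced wrong result. This is a textbook approach, not the
# VLIW-specific technique proposed in the paper (the abstract does not describe it).

def majority_vote(a, b, c):
    """Return the value held by at least two of the three replicas.

    If all three disagree, the error is detected but cannot be corrected.
    """
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise RuntimeError("uncorrectable soft error: all replicas disagree")

def tmr_execute(fn, *args):
    """Run fn three times and vote on the results."""
    return majority_vote(fn(*args), fn(*args), fn(*args))

print(tmr_execute(lambda x: x * 2, 21))   # -> 42 when no replica is corrupted

results = [42, 42 ^ 0x8, 42]              # second replica corrupted by a bit flip
print(majority_vote(*results))            # -> 42, the flip is masked by the vote
```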

Assessment of maximum liquefaction distance using soft computing approaches

  • Kishan Kumar;Pijush Samui;Shiva S. Choudhary
    • Geomechanics and Engineering / v.37 no.4 / pp.395-418 / 2024
  • The epicentral region of an earthquake is typically where liquefaction-related damage takes place. To determine the maximum distance, such as the maximum epicentral distance (Re), maximum fault distance (Rf), or maximum hypocentral distance (Rh), at which an earthquake of a given magnitude can inflict damage, this study uses a recently updated global liquefaction database to build multiple machine learning (ML) models that predict these limiting distances. Four models, LSTM (Long Short-Term Memory), BiLSTM (Bidirectional Long Short-Term Memory), CNN (Convolutional Neural Network), and XGB (Extreme Gradient Boosting), are developed in the Python programming language. All four proposed ML models performed better than empirical models for limiting-distance assessment, and among them the XGB model outperformed the others. A number of statistical parameters are examined to determine how well the suggested models predict the limiting distances, and rank analysis, an error matrix, and a Taylor diagram are used to compare their accuracy. The ML models proposed in this paper are more robust than other current models and may be used to assess the minimal energy of a liquefaction disaster caused by an earthquake or to estimate the maximum distance of a liquefied site for a given earthquake in rapid disaster mapping.
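
A minimal sketch of one of the four model types, an XGBoost regressor mapping earthquake magnitude (plus placeholder site features) to a limiting distance, evaluated with RMSE and R2; the data are synthetic and the feature set is assumed, not the study's database.

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# Synthetic stand-in for the liquefaction database: earthquake magnitude (and a
# couple of placeholder site features) vs. a limiting distance such as Re.
rng = np.random.default_rng(0)
magnitude = rng.uniform(5.0, 9.0, size=500)
extra = rng.normal(size=(500, 2))                        # hypothetical additional inputs
log_re = 0.5 * magnitude - 1.0 + 0.1 * rng.normal(size=500)
X = np.column_stack([magnitude, extra])
y = 10 ** log_re                                         # limiting epicentral distance [km]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"RMSE = {rmse:.2f} km, R^2 = {r2_score(y_te, pred):.3f}")
```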

Comparison of Two Methods for Determining Initial Radius in the Sphere Decoder (스피어 디코더에서 초기 반지름을 결정하는 두 가지 방법에 대한 비교 연구)

  • Jeon, Eun-Sung;Kim, Yo-Han;Kim, Dong-Ku
    • Journal of Advanced Navigation Technology / v.10 no.4 / pp.371-376 / 2006
  • The initial radius of the sphere decoder has a great effect on both the bit error rate performance and the computational complexity. Until now, it has been determined either by considering the statistical properties of the channel or by using the MMSE solution. An initial radius obtained from the channel statistics contains the lattice point corresponding to the transmitted signal vector with very high probability. The MMSE-based method first computes the MMSE solution for the received signal, maps the hard decision of this solution back into the received-signal space, and takes the distance between the mapped point and the received signal as the initial radius for sphere decoding. In this paper, we derive a simple equation for initial radius selection based on the channel statistics and compare it with the MMSE-based method. To compare the two methods, we define a new metric, 'tightness'. Simulations show that in the low and moderate SNR regions the MMSE-based method provides a larger reduction in decoding complexity, while in the high SNR region the method using channel statistics is better.
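
A minimal sketch of the MMSE-based initial-radius computation described above, alongside a channel-statistics radius for comparison; the MIMO dimensions, BPSK constellation, and the scale factor in the statistics-based radius are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MIMO setup standing in for the sphere-decoder scenario in the abstract:
# y = H s + n with BPSK symbols; dimensions and SNR are illustrative only.
n_tx, n_rx, snr_db = 4, 4, 10
H = rng.normal(size=(n_rx, n_tx)) / np.sqrt(2)
s = rng.choice([-1.0, 1.0], size=n_tx)                  # transmitted symbol vector
sigma2 = 10 ** (-snr_db / 10)
y = H @ s + np.sqrt(sigma2) * rng.normal(size=n_rx)

# MMSE-based initial radius, as described in the abstract: compute the MMSE
# estimate, hard-slice it to the nearest constellation points, map it back
# through the channel, and use its distance to y as the initial radius.
s_mmse = np.linalg.solve(H.T @ H + sigma2 * np.eye(n_tx), H.T @ y)
s_hard = np.sign(s_mmse)                                # hard decision (BPSK slicing)
radius_mmse = np.linalg.norm(y - H @ s_hard)

# Channel-statistics radius: one common choice scales the noise variance by the
# receive dimension so the true lattice point lies inside the sphere with high
# probability (the scale factor here is an assumed illustrative value).
radius_stat = np.sqrt(2.0 * n_rx * sigma2)

print(f"MMSE-based initial radius: {radius_mmse:.3f}")
print(f"Statistics-based radius  : {radius_stat:.3f}")
```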


An Efficient Iterative Decoding Stop Criterion Algorithm using Error Probability Variance Value of Turbo Code (터보부호의 오류확률 분산값을 이용한 효율적인 반복중단 알고리즘)

  • Jeong Dae ho;Shim Byoung sup;Lim Soon Ja;Kim Tae hyung;Kim Hwan yong
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.10C / pp.1387-1394 / 2004
  • Turbo codes, a class of error-correction codes, are widely used in digital mobile communication systems, and it is well known that their BER performance improves as the number of decoding iterations increases in the AWGN channel. However, beyond a certain point further iterations yield very little improvement, while delay, computation, and power consumption all grow in proportion to the number of iterations. This paper proposes an efficient stop criterion for iterative decoding that can greatly reduce the average number of decoding iterations of a turbo code. Simulations verify that the proposed algorithm can stop the iterative decoding efficiently by using the variance of the error probability of the soft output values, greatly reducing the average number of iterations without BER degradation. In the simulations, the average number of decoding iterations of the proposed algorithm is reduced by about 2.25%~14.31% and 3.79%~14.38% compared with the conventional schemes, respectively, and power consumption is saved in proportion to the reduction in iterations.
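
A minimal sketch of how such a variance-based stop criterion could be wired into an iterative decoder: per-bit error probabilities are derived from the soft-output LLRs and iteration stops once their variance is small. The threshold, the exact form of the test, and the toy half-iteration are assumptions; the paper's precise rule is not given in the abstract.

```python
import numpy as np

def bit_error_probabilities(llrs):
    """Per-bit error probability implied by the decoder's soft-output LLR values."""
    return 1.0 / (1.0 + np.exp(np.abs(llrs)))

def should_stop(llrs, threshold=1e-4):
    """Stop once the variance of the per-bit error probabilities is small.

    The threshold and the form of the test are illustrative assumptions.
    """
    return np.var(bit_error_probabilities(llrs)) < threshold

def decode_with_early_stop(half_iteration, llrs, max_iters=8):
    """Generic iterative-decoding loop with the variance-based stop test."""
    for it in range(1, max_iters + 1):
        llrs = half_iteration(llrs)
        if should_stop(llrs):
            return llrs, it                # soft outputs have converged: stop early
    return llrs, max_iters

# Toy stand-in for one turbo half-iteration: LLR magnitudes grow as decoding converges.
rng = np.random.default_rng(0)
initial_llrs = rng.normal(scale=1.0, size=1000)
fake_half_iteration = lambda llrs: np.sign(llrs) * (np.abs(llrs) + 3.0)

_, iterations_used = decode_with_early_stop(fake_half_iteration, initial_llrs)
print(f"stopped after {iterations_used} of 8 iterations")
```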

Experimental Performance Analysis of BCJR-Based Turbo Equalizer in Underwater Acoustic Communication (수중음향통신에서 BCJR 기반의 터보 등화기 실험 성능 분석)

  • Ahn, Tae-Seok;Jung, Ji-Won
    • Journal of Navigation and Port Research / v.39 no.4 / pp.293-297 / 2015
  • In the past, underwater acoustic communication was used mainly for military purposes, but its applications have recently expanded to detection, submarines, and general communication. The excessive multipath in the underwater acoustic channel creates inter-symbol interference, which limits the achievable data rate and bit error rate performance. To improve the quality of the received signal in underwater communication, many researchers have studied channel coding schemes with excellent performance at low SNR. In this paper, we apply a (2,1,7) convolutional code with a BCJR decoder and, to compensate for the multipath-distorted data, a turbo equalization method. Through an underwater experiment on Gyeungcheun lake in Mungyeng city, we confirmed that the BCJR-based turbo equalization structure outperforms both hard-decision and soft-decision Viterbi decoding. We also confirmed that whenever the error rate at the decoder input is below $10^{-1}$, all the data are decoded correctly, and we achieved a success rate of 83% in the experiment.
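
As a small illustration of the coding side, the sketch below encodes bits with a rate-1/2, constraint-length-7 convolutional code like the (2,1,7) code mentioned above; the generator polynomials (171, 133 octal) are the common standard choice and are assumed here, since the abstract does not list them.

```python
# Encoder for a rate-1/2, constraint-length-7 convolutional code like the (2,1,7)
# code mentioned in the abstract. The generator polynomials (171, 133 in octal) are
# the usual standard choice and an assumption here; the paper does not list them.

G1, G2 = 0o171, 0o133   # generator polynomials, constraint length K = 7

def conv_encode(bits):
    """Return the two coded output bits per input bit (rate 1/2)."""
    state = 0
    out = []
    for b in bits:
        state = ((state >> 1) | (b << 6)) & 0x7F      # shift the new bit into a 7-bit register
        out.append(bin(state & G1).count("1") & 1)    # parity of taps selected by G1
        out.append(bin(state & G2).count("1") & 1)    # parity of taps selected by G2
    return out

message = [1, 0, 1, 1, 0, 0, 1]
print(conv_encode(message))   # 14 coded bits for 7 message bits
```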

Comparison of Error Rate and Prediction of Compression Index of Clay to Machine Learning Models using Orange Mining (오렌지마이닝을 활용한 기계학습 모델별 점토 압축지수의 오차율 및 예측 비교)

  • Yoo-Jae Woong;Woo-Young Kim;Tae-Hyung Kim
    • Journal of the Korean Geosynthetics Society / v.23 no.3 / pp.15-22 / 2024
  • Predicting ground settlement during soft-ground improvement and the construction of a structure is a crucial task. Numerous studies have been conducted, and many prediction equations have been proposed to estimate settlement, which can be calculated using the compression index of clay. In this study, data on water content, void ratio, liquid limit, plastic limit, and compression index from the Busan New Port area were collected to construct a dataset, and correlation analysis was conducted on the collected data. Machine learning algorithms, including Random Forest, Neural Network, Linear Regression, AdaBoost, and Gradient Boosting, were applied using the Orange mining program to build compression index prediction models. The models were evaluated by comparing RMSE and MAPE values, which indicate error rates, and R2 values, which indicate goodness of fit. Water content showed the highest correlation with the compression index, while the plastic limit showed a somewhat lower correlation than the other characteristics. Among the compared models, the AdaBoost model performed best, with the lowest error rate and a large coefficient of determination.
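
A minimal sketch of the same model-comparison workflow using scikit-learn in place of the Orange mining GUI: several regressors are fit to predict the compression index and compared by RMSE, MAPE, and R2; the data below are synthetic stand-ins for the Busan New Port measurements.

```python
import numpy as np
from sklearn.ensemble import AdaBoostRegressor, GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error, r2_score

# Synthetic stand-in for the dataset (water content, void ratio, liquid limit,
# plastic limit -> compression index); the real Busan New Port data are not reproduced.
rng = np.random.default_rng(0)
n = 300
water_content = rng.uniform(30, 80, n)       # [%]
void_ratio = rng.uniform(0.8, 2.2, n)
liquid_limit = rng.uniform(30, 70, n)        # [%]
plastic_limit = rng.uniform(15, 35, n)       # [%]
# Target loosely follows the classic Cc = 0.009(LL - 10) correlation plus noise.
cc = 0.009 * (liquid_limit - 10) + 0.002 * water_content + 0.05 * rng.normal(size=n)

X = np.column_stack([water_content, void_ratio, liquid_limit, plastic_limit])
X_tr, X_te, y_tr, y_te = train_test_split(X, cc, test_size=0.25, random_state=0)

models = {
    "Linear Regression": LinearRegression(),
    "Random Forest": RandomForestRegressor(random_state=0),
    "AdaBoost": AdaBoostRegressor(random_state=0),
    "Gradient Boosting": GradientBoostingRegressor(random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    mape = mean_absolute_percentage_error(y_te, pred) * 100
    print(f"{name:18s} RMSE={rmse:.4f}  MAPE={mape:.1f}%  R2={r2_score(y_te, pred):.3f}")
```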