• Title/Summary/Keyword: Sampling Theorem

Search Results: 58

Modeling of the friction in the tool-workpiece system in diamond burnishing process

  • Maximov, J.T.; Anchev, A.P.; Duncheva, G.V.
    • Coupled systems mechanics / v.4 no.4 / pp.279-295 / 2015
  • The article presents a combined theoretical-experimental approach for modeling the coefficient of sliding friction in the dynamic tool-workpiece system during slide diamond burnishing of low-alloy unhardened steels. The experimental setup, implemented on a conventional lathe, includes a specially designed device whose body is a straight cantilever beam. The beam is loaded simultaneously in bending (by the transverse sliding friction force) and in compression (by the longitudinal burnishing force), which gives rise to geometrical nonlinearity. The elastic curve of the beam is modeled dynamically by a method based on separating the variables (time and metric) before establishing the differential equation of motion. The longitudinal (burnishing) and transverse (sliding friction) forces are related through Coulomb's law of sliding friction, and on this basis an analytical relationship between the beam deflection and the sought friction coefficient is obtained. The beam deflection is measured with strain gauges connected in a full-bridge circuit, bonded with a flexible adhesive chosen to permit dynamic measurements. The bridge signal, proportional to the beam deflection, is fed to the analog input of a USB DAQ board and from there into a purpose-built virtual instrument developed in LabVIEW, whose main capability is recording and visualizing the measured deflection in real time. The signal sampling frequency is chosen in accordance with the Nyquist-Shannon sampling theorem. To obtain a regression model of the friction coefficient as a function of the diamond burnishing process parameters, an experimental design with 55 experimental points is synthesized, followed by regression analysis and analysis of variance. The influence of the individual factors on the friction coefficient is established from sections of the hyper-surface of the friction coefficient model cut by hyper-planes.
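
  • Illustration: the Nyquist constraint and the deflection-to-friction mapping the abstract describes can be sketched in a few lines of Python. This is a minimal sketch, not the authors' LabVIEW instrument: the signal bandwidth, the bridge calibration constant k_bridge, and the linear deflection-to-friction stand-in c_model are hypothetical; only the rule that the sampling rate must exceed twice the highest signal frequency comes from the text.

```python
import numpy as np

# Sampling rate chosen per the Nyquist-Shannon criterion: f_s must exceed
# twice the highest frequency present in the deflection signal.
f_max = 500.0        # Hz, assumed bandwidth of the deflection signal
f_s = 2.5 * f_max    # sample slightly above the Nyquist rate

# Hypothetical calibration constants standing in for the paper's
# analytical beam model.
k_bridge = 1.2e-3    # m per volt: full-bridge output -> beam deflection
c_model = 0.85       # 1/m: deflection -> friction coefficient (placeholder)

def friction_coefficient(bridge_voltage):
    """Map the strain-gauge full-bridge signal to a friction coefficient.

    The paper derives an analytical deflection-to-friction relationship
    from the beam's elastic curve and Coulomb's law; a linear stand-in
    is used here purely for illustration.
    """
    deflection = k_bridge * bridge_voltage      # beam deflection, m
    return c_model * deflection                 # dimensionless mu

# One second of synthetic bridge output, acquired at f_s
t = np.arange(0.0, 1.0, 1.0 / f_s)
signal = 0.5 + 0.05 * np.sin(2 * np.pi * 40.0 * t)   # volts
mu = friction_coefficient(signal)
print(f"mean friction coefficient estimate: {mu.mean():.4f}")
```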

A Gaussian Approach in Stabilizing Outputs of Electrical Control Systems

  • Basnet, Barun; Bang, Jun-ho; Ryu, In-ho; Kim, Tae-hyeong
    • The Transactions of The Korean Institute of Electrical Engineers / v.67 no.11 / pp.1562-1569 / 2018
  • Sensor readings always contain a certain degree of randomness and fuzziness due to their intrinsic properties, other electronic devices in the circuitry, the wiring, and the rapidly changing environment. In an electrical control system, such readings introduce instability and other undesired events, especially when the signal hovers around the decision threshold. This paper proposes a Gaussian-based statistical approach to stabilizing the output by sampling the sensor data and automatically widening the threshold to a band of multiple standard deviations. It takes advantage of the central limit theorem and its properties: the distribution of a large number of sensor data samples eventually converges to a Gaussian. Experimental results demonstrate the effectiveness of the proposed algorithm in completely stabilizing the outputs compared with known filtering algorithms such as exponential smoothing and the Kalman filter.
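
  • Illustration: a minimal sketch of the statistical idea, with hypothetical names and values. Sample the sensor repeatedly, estimate the mean and standard deviation, and change the output only when the mean leaves a band of k standard deviations around the raw threshold, so a signal hovering near the threshold no longer toggles the output.

```python
import statistics

def stabilized_output(samples, k=3.0, raw_threshold=512):
    """Decide an on/off output from noisy sensor samples.

    Instead of comparing a single reading against raw_threshold, compare
    the sample mean and widen the decision band to k standard deviations,
    relying on the central limit theorem: the mean of many readings is
    approximately Gaussian even when individual readings are not.
    """
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    if mu > raw_threshold + k * sigma:
        return "ON"
    if mu < raw_threshold - k * sigma:
        return "OFF"
    return "HOLD"   # inside the band: keep the previous state, avoid chatter

# Readings hovering around the threshold no longer toggle the output
readings = [509, 515, 511, 514, 508, 513, 510, 512, 516, 509]
print(stabilized_output(readings))   # -> "HOLD"
```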

Preconditions for High Speed Confocal Image Acquisition with DMD Scanning

  • Shim, S.B.; Lee, K.J.; Lee, J.H.; Hwang, Y.H.; Han, S.O.; Pak, J.H.; Choi, S.E.; Milster, Tom D.; Kim, J.S.
    • Proceedings of the Optical Society of Korea Conference / 2006.07a / pp.39-40 / 2006
  • Digital image projection and several variations of it are the classical applications of Digital Micromirror Devices (DMDs), but further applications in the field of optical metrology are also possible. Operated with suitable patterns, a DMD can function, for instance, as an array of pinholes that replaces the galvanometer mirror or stage scanning system presently used for two-dimensional scanning in confocal microscopes. The process parameters that influence the measurement result (e.g., pinhole size, lateral scanning pitch, and the number of pinholes used simultaneously) must be configured precisely for each measurement by operating the DMD appropriately. This paper presents suitable conditions for diffraction-limited analysis of the DMD-optics-CCD chain to achieve the best performance. The sampling theorem required for image acquisition by the scanning system is also simulated with OPTISCAN, a simulator based on diffraction theory.
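
  • Illustration: the sampling-theorem condition the paper simulates admits a back-of-the-envelope check. Assuming (hypothetically) an incoherent diffraction-limited system, the optical cutoff frequency is 2·NA/λ, so Nyquist requires a lateral scanning pitch of at most λ/(4·NA); the wavelength and NA below are illustrative, not the paper's values.

```python
# Back-of-the-envelope Nyquist pitch for a diffraction-limited incoherent
# system. Illustrative numbers only; the paper's values come from its
# OPTISCAN simulation and the DMD geometry.
wavelength_um = 0.532        # assumed illumination wavelength, micrometers
numerical_aperture = 0.9     # assumed objective NA

f_cutoff = 2.0 * numerical_aperture / wavelength_um   # cycles per micrometer
max_pitch_um = 1.0 / (2.0 * f_cutoff)                 # = lambda / (4 * NA)

print(f"optical cutoff frequency: {f_cutoff:.2f} cycles/um")
print(f"maximum lateral scanning pitch: {max_pitch_um * 1e3:.0f} nm")
```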

Reducing Power Consumption of Wireless Capsule Endoscopy Utilizing Compressive Sensing Under Channel Constraint

  • Saputra, Oka Danil; Murti, Fahri Wisnu; Irfan, Mohammad; Putri, Nadea Nabilla; Shin, Soo Young
    • Journal of information and communication convergence engineering / v.16 no.2 / pp.130-134 / 2018
  • Wireless capsule endoscopy (WCE) is a recent technology for detecting cancerous cells in the human digestive system. WCE sends the information captured inside the body to a sensor on the skin surface through a wireless medium. In WCE, designing low-power devices is a challenging topic. By the Shannon-Nyquist sampling theorem, a signal must be sampled at a rate of at least twice its highest frequency to be reconstructed precisely, and the number of samples is proportional to the power consumed in wireless communication. This paper proposes compressive sensing as a method of reducing power consumption in WCE by trading off the number of samples against reconstruction accuracy. The proposed scheme is validated under channel constraints, expressed as a realistic human-body path loss. The results show that the proposed scheme achieves a significant reduction in WCE power consumption and a faster computation time with low signal reconstruction error.
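
  • Illustration: the trade-off the paper exploits can be sketched with a generic compressive sensing toy example, where an s-sparse signal of length n is recovered from m ≪ n random projections by orthogonal matching pursuit. This is a minimal sketch of compressive sensing in general, not the paper's scheme, which additionally models the human-body path loss; all dimensions below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse test signal: n samples, only s nonzero coefficients
n, m, s = 256, 64, 5                 # m << n measurements (power saving)
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

# Random Gaussian measurement matrix: y = Phi @ x, only m values transmitted
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x

def omp(Phi, y, s):
    """Orthogonal matching pursuit: greedy sparse recovery of x from y."""
    residual, support = y.copy(), []
    for _ in range(s):
        # Pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, s)
err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"relative reconstruction error: {err:.2e}")
```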

Further study on the risk model with a continuous type investment

  • Choi, Seung Kyoung; Lee, Eui Yong
    • The Korean Journal of Applied Statistics / v.31 no.6 / pp.751-759 / 2018
  • Cho et al. (Communications for Statistical Applications and Methods, 23, 423-432, 2016) introduced a risk model with a continuous-type investment and studied the stationary distribution of the surplus process. In this paper, we extend the earlier analysis by assuming that an additional instant investment is made whenever the surplus process reaches a certain sufficient level. We obtain the explicit form of the stationary distribution of the surplus process. As an example, the case where the claim amount is exponentially distributed is presented.
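
  • Illustration: a rough Monte Carlo sketch of such a surplus process, under assumed stand-in dynamics (continuous premium inflow, Poisson claims with exponential amounts, an instant investment at a sufficient level, and reflection at zero instead of the paper's treatment of ruin). None of the parameter values are from the paper; the sketch only shows how the empirical stationary distribution can be examined numerically.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in dynamics (all values assumed): continuous premium inflow at
# rate `premium`, unit-rate Poisson claims with exponential(1) amounts,
# and an instant investment of `invest` whenever the surplus reaches the
# sufficient level V. Ruin is handled by reflection at 0 purely for
# illustration; the paper treats the model analytically.
premium, V, invest = 1.2, 10.0, 4.0
u, samples = V - invest, []
for _ in range(200_000):
    dt = rng.exponential(1.0)                   # time until the next claim
    u = min(u + premium * dt, V)                # premium inflow, capped at V
    if u >= V:
        u -= invest                             # instant investment at level V
    u = max(u - rng.exponential(1.0), 0.0)      # claim amount; reflect at 0
    samples.append(u)

# Empirical stationary distribution of the surplus level
hist, edges = np.histogram(samples, bins=20, range=(0.0, V), density=True)
print(np.round(hist, 3))
```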

Carbonation depth prediction of concrete bridges based on long short-term memory

  • Youn Sang Cho; Man Sung Kang; Hyun Jun Jung; Yun-Kyu An
    • Smart Structures and Systems / v.33 no.5 / pp.325-332 / 2024
  • This study proposes a novel long short-term memory (LSTM)-based approach for predicting carbonation depth, with the aim of enhancing the durability evaluation of concrete structures. Conventional carbonation depth prediction relies on statistical methodologies that use carbonation influencing factors and in-situ carbonation depth data; however, applying in-situ data for predictive modeling is hampered by the lack of time-series data. To address this limitation, an LSTM-based carbonation depth prediction technique is proposed. First, training data are generated through random sampling from the distribution of carbonation velocity coefficients, which are calculated from in-situ carbonation depth data. Subsequently, Bayes' theorem is applied to tailor the training data to each target bridge, depending on its surrounding environmental conditions. Ultimately, the LSTM model predicts the time-dependent carbonation depth of the target bridge. To examine the feasibility of this technique, a carbonation depth dataset from 3,960 in-situ bridges was used for training, and untrained time-series data from the Miho River bridge in the Republic of Korea were used for experimental validation. The experimental validation demonstrates a significant reduction in prediction error, from 8.19% to 1.75%, compared with the conventional statistical method. Furthermore, the LSTM prediction can be refined by sequentially updating the model with actual time-series measurement data.
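
  • Illustration: the data-generation and prediction pipeline can be sketched in a few lines. The fragment below is a hedged illustration, not the authors' model: it samples carbonation velocity coefficients k from an assumed lognormal distribution, generates depth histories from the standard model d(t) = k·√t, and trains a small next-step LSTM; the Bayesian tailoring step is omitted and all distribution parameters are made up.

```python
import numpy as np
import torch
import torch.nn as nn

# Training data from the standard carbonation model d(t) = k * sqrt(t),
# with the carbonation velocity coefficient k drawn from an assumed
# lognormal distribution (the paper estimates this distribution from
# in-situ carbonation depth data; the parameters here are made up).
rng = np.random.default_rng(0)
years = np.arange(1, 41, dtype=np.float32)                  # 40 annual steps
k = rng.lognormal(mean=0.0, sigma=0.3, size=200).astype(np.float32)
depth = k[:, None] * np.sqrt(years)[None, :]                # (200, 40) depths, mm

x = torch.tensor(depth[:, :-1]).unsqueeze(-1)               # inputs: steps 1..39
y = torch.tensor(depth[:, 1:]).unsqueeze(-1)                # targets: steps 2..40

class DepthLSTM(nn.Module):
    """Small next-step predictor for carbonation depth sequences."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, seq):
        out, _ = self.lstm(seq)
        return self.head(out)                               # depth at the next step

model = DepthLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
print(f"final training MSE: {loss.item():.4f}")
```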

An optimal management policy for the surplus process with investments

  • Lim, Se-Jin; Choi, Seungkyoung; Lee, Eui-Yong
    • The Korean Journal of Applied Statistics / v.29 no.7 / pp.1165-1172 / 2016
  • In this paper, a surplus process with investments is introduced. Whenever the surplus reaches a target level V > 0, an amount S (0 ≤ S ≤ V) is invested into other business. After assigning three costs to the surplus process, a reward per unit amount invested, a penalty for the surplus becoming empty, and a keeping (opportunity) cost per unit amount of surplus per unit time, we obtain the long-run average cost per unit time of managing the surplus. We prove that there exists a unique value of S minimizing the long-run average cost per unit time for a given value of V, and likewise a unique value of V minimizing it for a given value of S. Together, these two facts show that an optimal investment policy for the surplus exists when the surplus is managed in the long run.
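
  • Illustration: a simulation of the optimization over S can be sketched directly from the description, with all dynamics as illustrative stand-ins (continuous premium inflow, Poisson claims with exponential amounts, and an assumed restart rule after ruin); none of the parameter values come from the paper, and the paper's result is analytical, not simulated.

```python
import numpy as np

rng = np.random.default_rng(1)

def long_run_average_cost(S, V=10.0, premium=1.2, reward=0.5,
                          penalty=20.0, keep_cost=0.05, horizon=20_000.0):
    """Estimate the long-run average cost per unit time by simulation.

    Stand-in dynamics: premiums accrue continuously at rate `premium`;
    claims arrive as a unit-rate Poisson process with exponential(1)
    amounts; whenever the surplus reaches the target V, an amount S is
    instantly invested (earning `reward` per unit invested); an empty
    surplus incurs `penalty` and, as an assumption, restarts at V - S.
    Holding the surplus costs `keep_cost` per unit surplus per unit time.
    """
    u, t, total_cost = V - S, 0.0, 0.0
    while t < horizon:
        dt = rng.exponential(1.0)                 # time until the next claim
        t_hit = (V - u) / premium                 # time until the surplus hits V
        if t_hit < dt:                            # invest before the claim lands
            total_cost += keep_cost * (u + V) / 2.0 * t_hit
            total_cost -= reward * S
            u, t = V - S, t + t_hit
            continue
        total_cost += keep_cost * (u + (u + premium * dt)) / 2.0 * dt
        u, t = u + premium * dt - rng.exponential(1.0), t + dt
        if u <= 0.0:                              # ruin: penalty, assumed restart
            total_cost += penalty
            u = V - S
    return total_cost / t

# Grid search for the S minimizing the average cost at a fixed target V
grid = np.linspace(0.5, 9.5, 19)
costs = [long_run_average_cost(S) for S in grid]
print(f"cost-minimizing S on the grid: {grid[int(np.argmin(costs))]:.1f}")
```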

Design of a SQUID Sensor Array Measuring the Tangential Field Components in Magnetocardiogram

  • Kim, K.; Lee, Y.H.; Kwon, H.; Kim, J.M.; Kim, I.S.; Park, Y.K.; Lee, K.W.
    • Progress in Superconductivity / v.6 no.1 / pp.56-63 / 2004
  • We consider design factors for a SQUID sensor array to construct a 52-channel magnetocardiogram (MCG) system that measures the tangential components of the cardiac magnetic field. Full-size multichannel MCG systems, which cover the whole signal area of the heart, are nowadays developed to improve the accuracy of clinical analysis and to make measurements more comfortable for patients. Designing a full-size MCG system requires a compromise between cost and performance. The cost involves the number of sensors, the number of electronics channels, the size of the cooling dewar, and the consumption of refrigerants for maintenance. The performance is the capability of covering the whole heart volume at once and of localizing current sources with a small error. In this study, we design a cost-effective arrangement of sensors for MCG by considering an adequate sensor interval and a confidence region, covering the heart, with a tolerable localization error. To fit the detector array onto the cylindrical dewar economically, we removed the detectors located at the corners of the square array. Through simulations using the confidence region method, we verified that our design of the detector array obtains the whole information from the heart at once. The simulations also suggest that tangential-component MCG measurement can localize deeper current dipoles than normal-component measurement with the same confidence volume; we therefore conclude that measuring the tangential component is more suitable for an MCG system than measuring the normal component.
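
  • Illustration: the corner-trimming step admits a concrete sketch. One way to obtain 52 channels from a square grid is to start from an 8 × 8 array and drop the three positions at each corner; the 8 × 8 starting grid, the exact corner pattern, and the sensor interval below are assumptions, since the paper states only that corner detectors were removed.

```python
# Illustrative sensor layout: start from an 8 x 8 square grid and drop the
# three positions at each corner so the 52 remaining detectors fit inside
# a circular (cylindrical-dewar) footprint. The starting grid size and the
# corner pattern are assumptions, not taken from the paper.
pitch_mm = 25.0   # assumed sensor interval
coords = [(i, j) for i in range(8) for j in range(8)]

def is_corner(i, j):
    near = lambda a: min(a, 7 - a)     # distance to the nearest grid edge
    return near(i) + near(j) < 2       # cuts three cells at each corner

array_52 = [(i * pitch_mm, j * pitch_mm) for i, j in coords if not is_corner(i, j)]
print(len(array_52))   # -> 52 channels
```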
