• Title/Summary/Keyword: 함수화 (expression as a function)


A simple Demonstration of the Wiener-Khinchin Theorem using a Digital Oscilloscope and Personal Computer (디지털 오실로스코프에 의한 Wiener-Khinchin 정리의 시현)

  • Jung, Se-Min
    • Korean Journal of Optics and Photonics
    • /
    • v.24 no.5
    • /
    • pp.245-250
    • /
    • 2013
  • The Wiener-Khinchin theorem, which states that the autocorrelation function of a signal corresponds to the power spectrum of that signal, is very important in signal processing, spectroscopy, and telecommunications engineering. However, because it requires relatively expensive equipment such as a correlator and a signal-processing system, the theorem has so far been difficult to demonstrate in most undergraduate classes. Recently, with the development of digital engineering, digital oscilloscopes whose functions can replace this equipment have come on the market. In this paper, a simple demonstration of the theorem using a digital storage oscilloscope and a personal computer is given together with its theoretical background. The reason for revisiting this theorem, introduced in 1930, is that it is still not widely known and that the theoretical background of the demonstration follows directly from its derivation. Through the derivation of the theorem, extended physical meanings of impedance, power, power factor, the Wiener spectrum, and linear system response, as well as the basic idea of Planck's quantization in black-body theory, reveal themselves naturally. Hence it can be referred to in lectures on general physics, modern physics, spectroscopy, and materials characterization experiments.
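As a quick illustration of the relation the abstract describes, the sketch below (Python/NumPy, not the paper's oscilloscope-based procedure) checks numerically that the circular autocorrelation of a sampled signal equals the inverse FFT of its power spectrum; the test signal and normalization are choices made for this example.

```python
import numpy as np

# Sketch only: verify numerically that the circular autocorrelation of a
# sampled signal equals the inverse FFT of its power spectrum |X(f)|^2.
rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
x = np.sin(2 * np.pi * 0.05 * t) + 0.5 * rng.standard_normal(n)  # arbitrary test signal
x -= x.mean()

# circular autocorrelation computed directly in the time domain
acf_direct = np.array([np.dot(x, np.roll(x, k)) for k in range(n)]) / n

# the same quantity via the power spectrum (Wiener-Khinchin)
power_spectrum = np.abs(np.fft.fft(x)) ** 2
acf_spectrum = np.fft.ifft(power_spectrum).real / n

print(np.allclose(acf_direct, acf_spectrum))  # True: the two routes agree
```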

Multiple Camera Based Imaging System with Wide-view and High Resolution and Real-time Image Registration Algorithm (다중 카메라 기반 대영역 고해상도 영상획득 시스템과 실시간 영상 정합 알고리즘)

  • Lee, Seung-Hyun;Kim, Min-Young
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.49 no.4
    • /
    • pp.10-16
    • /
    • 2012
  • For high-speed visual inspection in the semiconductor industry, it is essential to acquire two-dimensional images of regions of interest with a large field of view (FOV) and a high resolution simultaneously. In this paper, an imaging system is newly proposed to achieve high image quality in terms of both precision and FOV; it is composed of a single lens, a beam splitter, two camera sensors, and a stereo image-grabbing board. For the object images acquired simultaneously from the two camera sensors, Zhang's camera calibration method is first applied to calibrate each camera. Secondly, to find a mathematical mapping function between the two images acquired from the different camera views, the matching matrix from multi-view camera geometry is calculated based on their image homography. Through this homography, the two images are finally registered to secure a large inspection FOV. An inspection system that uses multiple images from multiple cameras needs a very fast processing unit for real-time image matching; for this purpose, parallel-processing hardware and software such as the Compute Unified Device Architecture (CUDA) are utilized. As a result, a matched image can be obtained from the two separate images in real time. Finally, the acquired homography is evaluated in terms of accuracy through a series of experiments, and the obtained results show the effectiveness of the proposed system and method.
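For readers unfamiliar with homography-based registration, the following OpenCV sketch shows the general idea under assumed inputs (hypothetical image files `cam_left.png` and `cam_right.png`, ORB feature matching); it does not reproduce the paper's Zhang calibration, multi-view matching matrix, or CUDA implementation.

```python
import cv2
import numpy as np

# Minimal sketch of homography-based registration between two overlapping views.
img_a = cv2.imread("cam_left.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
img_b = cv2.imread("cam_right.png", cv2.IMREAD_GRAYSCALE)

# detect and match features in the overlap region
orb = cv2.ORB_create(2000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)
matches = sorted(matches, key=lambda m: m.distance)[:200]

pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# robust homography estimate, then warp one view into the other's frame
H, mask = cv2.findHomography(pts_b, pts_a, cv2.RANSAC, 3.0)
h, w = img_a.shape
registered = cv2.warpPerspective(img_b, H, (w, h))
```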

Comparison of an Analytic Solution of Wind-driven Current and a (x-$\sigma$) Numerical Model (취송류의 해석해와 (x-$\sigma$) 수치모형과의 비교)

  • 이종찬;최병호
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.4 no.4
    • /
    • pp.208-218
    • /
    • 1992
  • Analytic solutions for the gradient of surface elevation and the vertical profiles of velocity driven by wind stress in a one-dimensional rectangular basin were obtained under the assumption of a steady state. The approach treats the bottom frictional stress $\tau_b$ as known and includes a vertically varying eddy viscosity $K_M$ that is a constant, linear, or quadratic function of water depth. When $\tau_b$ is parameterized with the surface stress, the depth-averaged velocity, and the bottom velocity, the result shows the relation between the no-slip bottom velocity condition and the bottom frictional stress $\tau_b$. The results of a mode-split, (x-$\sigma$) coordinate numerical model were compared with the derived analytic solutions. The comparison was made for the cases in which $K_M$ is a constant, linear, or quadratic function of water depth. In the case of constant $K_M$, the gradient of surface elevation and the vertical profiles of velocity are discussed for a uniform depth, a mild slope, and a relatively steep slope. When $K_M$ is a linear or quadratic function of water depth, the vertical structures of velocity are discussed for various $\tau_b$. The comparison shows that the vertical structure of velocity depends not only on the value of $K_M$ but also on the profile of $K_M$ and the bottom stress $\tau_b$. Model results were in good agreement with the analytic solutions considered in this study. (A worked constant-$K_M$ sketch follows this entry.)

  • PDF
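The constant-$K_M$ case mentioned in the abstract has a well-known textbook form for a closed basin with a no-slip bottom and zero depth-integrated transport; the sketch below evaluates that classical profile with assumed parameter values and is not necessarily identical to the solution derived in the paper.

```python
import numpy as np

# Classical constant-viscosity case (illustrative assumption): steady wind
# stress tau over a closed 1-D basin of depth h, no-slip bottom, and zero
# depth-integrated transport.  Momentum balance K_M * u''(z) = g * d(eta)/dx,
# surface condition K_M * u'(h) = tau / rho, bottom condition u(0) = 0.
rho, K_M, tau, h = 1025.0, 1.0e-2, 0.1, 10.0   # SI units, assumed values

# zero net transport fixes the surface slope term: g * d(eta)/dx = 3*tau/(2*rho*h)
z = np.linspace(0.0, h, 101)                    # height above the bottom
u = (tau * h / (rho * K_M)) * (0.75 * (z / h) ** 2 - 0.5 * (z / h))

print(f"surface velocity u(h) = {u[-1]:.3f} m/s")   # equals tau*h/(4*rho*K_M)
print(f"depth-mean velocity  ~ {u.mean():.4f} m/s") # ~0, as required for a closed basin
```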

A Heuristic Model for Appropriation of Voyage Allocation under Specific Port Condition Using Regression Analyses - With a Case Analysis on POSCO-owned Port - (휴리스틱 회귀모델을 이용한 특정항만 조건하에서의 선형별 적정 항차배분에 관한 연구 - 포항제철(주) 전용항만 사례를 중심으로-)

  • Kim, Weonjae
    • Journal of Korea Port Economic Association
    • /
    • v.29 no.3
    • /
    • pp.159-174
    • /
    • 2013
  • This paper mainly deals with the appropriation of ship voyage allocation, using a heuristic regression model, in order to reduce the total costs incurred in port, in the yard, and at sea under specific port conditions. Because the costs incurred in port, in the yard, and at sea behave differently, an effort should be made to minimize them by adjusting the number of voyages for three ship classes (50,000, 100,000, and 150,000-ton). For instance, if port managers attempt to reduce the sea-transport cost by increasing the annual number of voyages allocated to the 150,000-ton class to exploit economies of scale, they have no choice but to suffer a significant increase in queueing cost due to port congestion. To put it differently, there are trade-off relationships among the costs incurred in port, in the yard, and at sea. We utilized computer-simulation results to perform a couple of regression analyses in order to figure out the appropriate range of the allocated number of voyages for each ship class using a heuristic approach; the detailed analytical results are shown in the main body of the paper. We also suggested a net present value (NPV) model to support a proper investment decision on an additional 200,000-ton-class berth that would alleviate port congestion and reduce the transport costs incurred both in port and at sea.
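Since the abstract proposes an NPV model for the berth-investment decision, a minimal NPV sketch is given below; the cash flows, horizon, and discount rate are placeholders, not figures from the paper.

```python
# Minimal NPV sketch for a berth-investment decision (placeholder numbers).
def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value of cash flows, where cash_flows[0] occurs at t = 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# hypothetical example: initial berth investment, then annual net savings
# (reduced queueing and transport costs) over a 10-year horizon
investment = -500.0                       # initial outlay (arbitrary units)
annual_saving = 90.0
flows = [investment] + [annual_saving] * 10
print(f"NPV at 8%: {npv(0.08, flows):.1f}")   # accept the berth if NPV > 0
```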

Locating Microseismic Events using a Single Vertical Well Data (단일 수직 관측정 자료를 이용한 미소진동 위치결정)

  • Kim, Dowan;Kim, Myungsun;Byun, Joongmoo;Seol, Soon Jee
    • Geophysics and Geophysical Exploration
    • /
    • v.18 no.2
    • /
    • pp.64-73
    • /
    • 2015
  • Recently, hydraulic fracturing has been used in various fields, and microseismic monitoring is one of the best methods for judging where hydraulic fractures exist and how they are developing. When locating microseismic events with single vertical well data, the distance from the vertical array and the depth from the surface are generally determined using the time differences between compressional (P) wave and shear (S) wave arrivals, and the azimuth is calculated using P-wave hodogram analysis. However, in field data it is sometimes hard to acquire P-wave data, which have smaller amplitude than S waves, because microseismic data often have a very low signal-to-noise (S/N) ratio. To overcome this problem, in this study we developed a grid-search algorithm that can find event locations using all combinations of the arrival times recorded at the receivers. In addition, we introduced and analyzed a method that calculates azimuths using S waves. Tests on synthetic data show that the inversion method using all combinations of arrival times and receivers can locate events without considering the origin time, even when only a single phase is used. The method also locates events with higher accuracy and is less sensitive to first-arrival picking errors than the conventional method. The method that calculates azimuths using S waves provides reliable results when the dip between the event and the receiver is relatively small, but it shows limitations when the dip is greater than about $20^{\circ}$ in our model test.
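The following sketch illustrates the general flavor of a grid search over candidate locations using differential arrival times (which cancel the unknown origin time) in an assumed homogeneous velocity model and a hypothetical geometry; the paper's algorithm, which uses all combinations of arrival times and multiple phases, is more general.

```python
import numpy as np

# Sketch: locate an event from differential arrival times between adjacent
# receivers in a vertical array (the unknown origin time cancels out).
v = 3000.0                                    # m/s, assumed uniform velocity
rec_z = np.arange(1000.0, 1500.0, 50.0)       # receiver depths in the vertical well (m)
true_r, true_z = 300.0, 1240.0                # hypothetical event: radial distance, depth (m)

def travel_time(r, z):
    """Straight-ray travel times from a point (r, z) to every receiver."""
    return np.hypot(r, z - rec_z) / v

dt_obs = np.diff(travel_time(true_r, true_z))  # observed differential times

# grid search over candidate (r, z) locations for the smallest misfit
best_loc, best_misfit = None, np.inf
for r in np.arange(50.0, 600.0, 5.0):
    for z in np.arange(1100.0, 1400.0, 5.0):
        misfit = np.sum((np.diff(travel_time(r, z)) - dt_obs) ** 2)
        if misfit < best_misfit:
            best_loc, best_misfit = (r, z), misfit

print("estimated (r, z):", best_loc)           # recovers (300.0, 1240.0)
```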

Distributions of Hyperfine Parameters in Amorphous $Fe_{83}B_9Nb_7Cu_1$ Alloys

  • 윤성현;김성백;김철성
    • Journal of the Korean Magnetics Society
    • /
    • v.9 no.6
    • /
    • pp.271-277
    • /
    • 1999
  • Amorphous $Fe_{83}B_9Nb_7Cu_1$ alloy has been studied by Mössbauer spectroscopy. A revised Vincze method was used, the distributions of the hyperfine field, isomer shift, and quadrupole line broadening of the sample were evaluated at various temperatures, and the Curie temperature and $H_{hf}(0)$ were calculated to be 393 K and 231 kOe, respectively. The temperature variation of the reduced average hyperfine field shows a flattened curve in comparison with the Brillouin curve for S = 1. This behavior can be explained on the basis of the Handrich molecular-field model, in which the parameter $\Delta$, a measure of the fluctuation in the exchange interactions, is assumed to have the temperature dependence $\Delta = 0.75 - 0.64\tau + 0.47\tau^{2}$, where $\tau = T/T_C$. At low temperature the average hyperfine field can be fitted to $H_{hf}(T) = H_{hf}(0)\,[1 - 0.44(T/T_C)^{3/2} - 0.28(T/T_C)^{5/2} - \cdots]$, which indicates the presence of long-wavelength spin-wave excitations. At temperatures near $T_C$, the reduced average hyperfine field varies as $1.00\,[1 - T/T_C]^{0.39}$. It was also found that the half-width of the hyperfine field distribution was 102 kOe (3.29 mm/s) at 13 K and decreased monotonically as the temperature increased. Above the Curie temperature, an average quadrupole splitting of 0.43 mm/s was found. The average line broadening due to the quadrupole splitting distribution was 0.31 mm/s at 13 K and decreases monotonically to 0.23 mm/s at 320 K, whereas that due to the isomer shift distribution is 0.1 mm/s at 13 K and 0.072 mm/s at 320 K, much smaller than that of both the hyperfine field and the quadrupole splitting. The temperature dependence of the isomer shift can be fitted within the harmonic approximation to a Debye model with a Debye temperature $\Theta_D = 424 \pm 5$ K. (A short numerical sketch of the quoted fits follows this entry.)

  • PDF
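The sketch below simply evaluates the two fitted temperature dependences quoted in the abstract (the low-temperature spin-wave expansion and the critical form near $T_C$), using the reported $T_C$ = 393 K and $H_{hf}(0)$ = 231 kOe; the sample temperatures are arbitrary.

```python
# Evaluate the two fits quoted in the abstract (coefficients and constants are
# taken from the abstract itself: T_C = 393 K, H_hf(0) = 231 kOe).
T_C, H0 = 393.0, 231.0

def h_low_temperature(T):
    """Low-temperature spin-wave expansion of the average hyperfine field (kOe)."""
    t = T / T_C
    return H0 * (1.0 - 0.44 * t ** 1.5 - 0.28 * t ** 2.5)

def h_near_Tc(T):
    """Critical form of the average hyperfine field near T_C (kOe)."""
    return H0 * 1.00 * (1.0 - T / T_C) ** 0.39

for T in (13.0, 100.0, 200.0):          # arbitrary sample temperatures
    print(f"T = {T:5.1f} K  low-T fit     -> H_hf ~ {h_low_temperature(T):6.1f} kOe")
for T in (350.0, 380.0, 390.0):
    print(f"T = {T:5.1f} K  critical form -> H_hf ~ {h_near_Tc(T):6.1f} kOe")
```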

A Runoff Parameter Estimation Using Spatially Distributed Rainfall and an Analysis of the Effect of Rainfall Errors on Runoff Computation (공간 분포된 강우를 사용한 유출 매개변수 추정 및 강우오차가 유출계산에 미치는 영향분석)

  • Yun, Yong-Nam;Kim, Jung-Hun;Yu, Cheol-Sang;Kim, Sang-Dan
    • Journal of Korea Water Resources Association
    • /
    • v.35 no.1
    • /
    • pp.1-12
    • /
    • 2002
  • This study was intended to investigate the rainfall-runoff relationship with spatially distributed rainfall data and then to analyze and quantify the uncertainty induced by spatially averaging rainfall data. To construct spatially distributed rainfall data, several historical rainfall events were extended spatially by the simple kriging method, based on a semivariogram expressed as a function of relative distance. Runoff was computed by two models: the modified Clark model with spatially distributed rainfall data and the conventional Clark model with spatially averaged rainfall data. The rainfall errors and discharge errors that arose through this process were defined and analyzed with respect to various rain-gage network densities. The following conclusions were derived from this work: 1) The conventional Clark parameters can be appropriate for translating spatially distributed rainfall data. 2) The parameters estimated by the modified Clark model are more stable than those of the conventional Clark model. 3) Rainfall and discharge errors are shown to decrease exponentially as the density of the rain-gage network increases. 4) Discharge errors were found to be largely affected by rainfall errors when the rain-gage network density was low.
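As a pointer to how spatially distributed rainfall can be interpolated from gauges, the sketch below shows plain simple kriging with an assumed exponential covariance model and made-up gauge data; the paper's fitted semivariogram, gauge network, and modified Clark routing are not reproduced.

```python
import numpy as np

# Minimal simple-kriging sketch: interpolate rainfall from a few gauges to one
# target point (coordinates, values, and covariance parameters are placeholders).
gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [12.0, 9.0]])  # km
rain = np.array([12.0, 18.0, 15.0, 22.0])                               # mm
target = np.array([5.0, 5.0])
mean_rain = rain.mean()          # simple kriging assumes a known mean

def cov(h, sill=25.0, corr_len=15.0):
    """Exponential covariance model C(h) = sill * exp(-h / corr_len)."""
    return sill * np.exp(-h / corr_len)

# covariance matrix between gauges and covariance vector to the target
d_gg = np.linalg.norm(gauges[:, None, :] - gauges[None, :, :], axis=-1)
d_gt = np.linalg.norm(gauges - target, axis=-1)
weights = np.linalg.solve(cov(d_gg), cov(d_gt))

estimate = mean_rain + weights @ (rain - mean_rain)
print(f"kriged rainfall at {target}: {estimate:.1f} mm")
```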

The Current Status of Fracture Mechanics in the Nuclear Field - Focusing on Legal Requirements (II) - (원자력分野에서의 破壞力學 現況 -법적 요구사항을 중심으로 (II)-)

  • 송달호;손갑헌
    • Journal of the KSME
    • /
    • v.21 no.1
    • /
    • pp.21-31
    • /
    • 1981
  • This article explains how fracture mechanics has been applied in establishing the legal requirements for ensuring the integrity and safety of the reactor coolant pressure boundary of a nuclear power plant. In summary: 1) $RT_{NDT}$ is defined for the materials used in the pressure boundary; it is equivalent in concept to the nil-ductility transition temperature, and the fracture toughness of the material is given as a function of the temperature relative to this $RT_{NDT}$. 2) As a design condition for preventing non-ductile failure, a criterion based on linear elastic fracture mechanics is adopted; it must be confirmed analytically that the combined stress intensity factors satisfy this criterion under all conditions, and the analysis must be approved by the regulatory authority. 3) Any flaw found during in-service inspection that exceeds the acceptance level must be shown, by fracture-mechanics analysis, to be structurally safe. Since such a flaw grows as the reactor operates, whether it will lead to fatigue failure within the plant lifetime must also be evaluated; the reference crack growth rate follows the Paris power law (a Paris-law sketch follows this entry). 4) To monitor irradiation embrittlement by fast neutrons (E > 1.0 MeV), a surveillance test program must be established in advance and carried out accordingly, the resulting degradation of the fracture toughness of the reactor vessel material must be evaluated, and operating conditions with a sufficient safety margin, i.e., the pressure-temperature limit curve, must be derived. The degree of embrittlement is expressed by $\Delta RT_{NDT}$ and the decrease in upper-shelf energy, and the allowable pressure at each temperature on the pressure-temperature limit curve is calculated using the criterion based on linear elastic fracture mechanics.

  • PDF
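Item 3) refers to crack growth following the Paris power law; the sketch below integrates that law cycle by cycle with placeholder material constants and geometry, purely for illustration and not according to any code-specified reference curve.

```python
import math

# Illustrative Paris-law fatigue-crack-growth integration; the constants C and m,
# geometry factor Y, stress range dS, and crack sizes are placeholders, not
# values from this article or from any nuclear code.
#   da/dN = C * (dK)^m,   dK = Y * dS * sqrt(pi * a)
C, m = 1.0e-11, 3.0       # Paris constants (MPa*sqrt(m) and m/cycle units assumed)
Y = 1.12                  # geometry factor for a shallow surface flaw (assumed)
dS = 100.0                # applied stress range, MPa (assumed)
a, a_final = 0.002, 0.02  # initial and final crack depths, m (assumed)

cycles = 0
while a < a_final:
    dK = Y * dS * math.sqrt(math.pi * a)   # stress intensity factor range
    a += C * dK ** m                       # crack growth for one cycle
    cycles += 1

print(f"cycles to grow from 2 mm to 20 mm: {cycles:,}")
```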

First-Principles Investigation of the Surface Properties of LiNiO2 as Cathode Material for Lithium-ion Batteries (제일원리계산을 이용한 리튬이차전지 양극활물질 LiNiO2의 표면 특성에 관한 연구)

  • Choi, Heesung;Lee, Maeng-Eun
    • Journal of the Korean Electrochemical Society
    • /
    • v.16 no.3
    • /
    • pp.169-176
    • /
    • 2013
  • Solid-state lithium oxide compounds with a layered structure, which have high structural stability, are mainly used as cathode materials in lithium-ion batteries (LIBs). Recently, investigation of the solid electrolyte interphase (SEI) between the active materials and the electrolyte has become a focus for improving the performance of lithium-ion batteries. For the investigation of the SEI, the study of the surface properties of cathode and anode materials is also required in advance. $LiNiO_2$ and $LiCoO_2$ are layered cathode active materials with very similar structures and are representative solid-state lithium oxide compounds in LIBs. Various experimental and theoretical studies have been carried out for $LiCoO_2$; for $LiNiO_2$, however, theoretical investigation remains insufficient even though experimental studies are plentiful. In this study, the surface energies of nine crystal facets of $LiNiO_2$ were calculated by Density Functional Theory. In the XRD data of $LiNiO_2$, the (003), (104), and (101) facets, in that order, are the main surfaces; the calculated results, however, differ from the XRD data. Thus the (104) and (101) facets, which are both energetically stable and observed in XRD, are mainly exposed at the surface of $LiNiO_2$, and it is expected that the intercalation and de-intercalation of Li ions will be affected by them.
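Facet surface energies are commonly obtained from DFT total energies with the standard slab expression; the sketch below shows that generic formula with placeholder energies and area, and is not the paper's computational setup.

```python
# Standard slab surface-energy expression often used with DFT total energies
# (the numbers below are placeholders, not values from the paper):
#   gamma = (E_slab - N * E_bulk) / (2 * A)
# E_slab: slab total energy, E_bulk: bulk energy per formula unit,
# N: formula units in the slab, A: in-plane surface area (two exposed faces).

EV_TO_J = 1.602176634e-19
A2_TO_M2 = 1.0e-20

def surface_energy(e_slab_eV, e_bulk_eV, n_units, area_A2):
    gamma_eV_per_A2 = (e_slab_eV - n_units * e_bulk_eV) / (2.0 * area_A2)
    return gamma_eV_per_A2 * EV_TO_J / A2_TO_M2   # convert eV/A^2 -> J/m^2

# hypothetical example values
print(f"{surface_energy(-195.2, -19.8, 10, 45.0):.2f} J/m^2")
```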

Error Performance Analysis of Digital Radio Signals in an Electromagnetic Interference (EMI) Environment of Impulsive Noise Plus Disturbance (임펄스 잡음과 방해파에 의한 전자파 장해(EMI) 환경하에서의 디지털 무선통신 신호의 오율해석)

  • Cho, Sung-Eon;Leem, Kill-Yong;Cho, Sung-Joon;Lee, Jin
    • The Proceeding of the Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.6 no.3
    • /
    • pp.36-54
    • /
    • 1995
  • The error performance of digital radio signals (M-ary PSK, DQPSK, MSK, and GMSK) interfered with by impulsive noise and electromagnetic interference (EMI) is analyzed and discussed. First, the error-rate equations are derived for an environment of electromagnetic interference plus impulsive noise. The error performance is then evaluated, and shown in figures, as a function of the carrier-to-noise ratio, carrier-to-interference ratio (CIR), impulsive index, Gaussian-to-impulsive noise power ratio, and interference index, in order to measure the amount of error degradation in the digital radio signals. The results show that, in the presence of m-distributed tone interference plus impulsive noise, the more the electromagnetic interference amplitude varies, the more significant the performance degradation becomes. Listing the digital radio signals from most degraded to least degraded gives DQPSK, GMSK, QPSK, and MSK. In a constant-amplitude tone interference plus impulsive noise environment, the effect of the interference nearly disappears above about 20 dB of CIR, and the effect of constant tone interference on the error-rate performance is reduced most noticeably in the CIR region from 10 dB to 15 dB. In both the m-distributed and constant-amplitude tone interference environments, the error performance improves as the variation of the electromagnetic interference amplitude and the CIR increase. However, the performance cannot be improved significantly even when the electromagnetic interference becomes weak, which indicates that the impulsive noise is the dominant cause of the performance degradation. (A Monte Carlo sketch follows this entry.)

  • PDF
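As a rough illustration of the setting analyzed in the paper, the sketch below runs a Monte Carlo estimate of the QPSK symbol error rate under a two-term Gaussian-mixture stand-in for impulsive noise plus a constant-amplitude tone interferer; the mixture parameters, CNR, and CIR are assumptions, and the paper itself derives closed-form error-rate expressions rather than simulating.

```python
import numpy as np

# Monte Carlo sketch: QPSK symbol errors under a simple Gaussian-mixture
# approximation of impulsive noise plus a constant-amplitude tone interferer.
rng = np.random.default_rng(1)
n_sym = 200_000
cnr_db, cir_db = 12.0, 15.0                  # assumed carrier-to-noise / carrier-to-interference
impulse_prob, impulse_boost = 0.01, 100.0    # 1 % of samples carry 100x noise power (assumed)

# unit-energy Gray-coded QPSK symbols
bits = rng.integers(0, 2, size=(n_sym, 2))
symbols = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# impulsive noise: background Gaussian, occasionally boosted in power
n0 = 10 ** (-cnr_db / 10)
boost = np.where(rng.random(n_sym) < impulse_prob, impulse_boost, 1.0)
noise = np.sqrt(n0 * boost / 2) * (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym))

# constant-amplitude tone interference with random phase
tone = np.sqrt(10 ** (-cir_db / 10)) * np.exp(1j * 2 * np.pi * rng.random(n_sym))

received = symbols + noise + tone
detected = (np.sign(received.real) + 1j * np.sign(received.imag)) / np.sqrt(2)
print(f"simulated QPSK symbol error rate: {np.mean(detected != symbols):.4f}")
```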