• Title/Summary/Keyword: Matrix Decomposition (행렬분해)

Search Result 301, Processing Time 0.023 seconds

A Study on Design and Implementation of Scalable Angle Estimator Based on ESPRIT Algorithm (ESPRIT 알고리즘 기반 재구성 가능한 각도 추정기 설계에 관한 연구)

  • Dohyun Lee;Byunghyun Kim;Jongwha Chong;Sungjin Lee;Kyeongyuk Min
    • Journal of IKEEE
    • /
    • v.27 no.4
    • /
    • pp.624-629
    • /
    • 2023
  • Estimation of signal parameters via rotational invariance techniques (ESPRIT) is an algorithm that estimates the angle of a signal arriving at an array antenna by exploiting the shift-invariance property of the array. ESPRIT offers a good trade-off between performance and complexity. However, it still requires high-complexity operations such as covariance matrix computation and eigenvalue decomposition, so a hardware processor is essential for estimating the angle of arrival in real time. In addition, an ESPRIT processor must deliver high performance; this performance depends on the number of antennas, and the number of antennas required differs by application. We therefore propose an ESPRIT processor that supports variable configurations of 2 to 8 antennas to meet the performance and complexity requirements of the target field. The proposed processor was designed in Verilog-HDL and implemented on a field-programmable gate array (FPGA).
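
The hardware architecture cannot be reconstructed from this abstract, but the signal-processing core it accelerates is standard ESPRIT. A minimal NumPy sketch of that core (sample covariance, eigendecomposition, and the shift-invariance step) follows; the function name, the uniform-linear-array assumption, and the half-wavelength spacing `d` are illustrative, not taken from the paper.

```python
import numpy as np

def esprit_doa(X, num_sources, d=0.5):
    """Angle-of-arrival estimates (degrees) from array snapshots X
    (num_antennas x num_snapshots), assuming a uniform linear array
    with element spacing d in wavelengths."""
    # Sample covariance matrix of the received snapshots
    R = X @ X.conj().T / X.shape[1]
    # Eigendecomposition; eigenvectors of the largest eigenvalues span
    # the signal subspace
    _, eigvecs = np.linalg.eigh(R)
    Es = eigvecs[:, -num_sources:]
    # Shift invariance between the two overlapping subarrays
    E1, E2 = Es[:-1, :], Es[1:, :]
    Psi = np.linalg.lstsq(E1, E2, rcond=None)[0]
    # Eigenvalues of Psi carry the inter-subarray phase shifts
    phases = np.angle(np.linalg.eigvals(Psi))
    return np.degrees(np.arcsin(phases / (2 * np.pi * d)))
```

The covariance and eigenvalue steps above are exactly the high-complexity operations the abstract says the processor implements in hardware for 2 to 8 antennas.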

Strategy of Multistage Gamma Knife Radiosurgery for Large Lesions (큰 병변에 대한 다단계 감마나이프 방사선수술의 전략)

  • Hur, Beong Ik
    • Journal of the Korean Society of Radiology
    • /
    • v.13 no.5
    • /
    • pp.801-809
    • /
    • 2019
  • Existing Gamma Knife radiosurgery (GKRS) for large lesions is often conducted in stages, with the volume or the dose partitioned. In volume division, the target is typically split into sub-volumes that are irradiated at the determined prescription dose in multiple sessions separated by a day or two, or by 3~6 months. Over the full course of treatment, the treatment information from the previous stages must be carried over to subsequent sessions, on a newly mounted stereotactic frame, through coordinate transformation between sessions. In practice, however, reproducing the previous dose distributions with the existing Gamma Knife system is difficult unless the stereotactic space is the same. The treatable range is expanding because multistage treatment is now possible with the latest Gamma Knife platform (GKP). The purpose of this study is to introduce image co-registration based on the stereotactic spaces and a strategy for multistage GKRS, including determination of the prescription dose at each stage, using the new GKP. In image co-registration, either surgically embedded fiducials or internal anatomical landmarks are usually used to determine the transformation relationship. As an example using internal anatomical landmarks, the author compared the accuracy of coordinate transformation between sessions using four or six landmarks. The transformation matrix between two stereotactic spaces was determined using the pseudoinverse or the singular value decomposition so as to minimize the discrepancy between measured and calculated coordinates. To evaluate the transformation accuracy, the difference between measured and transformed coordinates, Δr, was calculated using 10 landmarks: four or six of the 10 were used to determine the coordinate transformation, and the rest were used for evaluation. The values of Δr for the two approaches ranged from 0.6 mm to 2.4 mm and from 0.17 mm to 0.57 mm, respectively. In addition, a method is suggested for determining a prescription dose that gives the same effect as treating the whole lesion at once when the lesion is split. The strategy for multistage treatment in the same stereotactic space is to design the treatment for the whole lesion first, divide the whole-lesion shots into the shots of each stage, and then determine the appropriate prescription dose for each stage. In conclusion, the author confirmed the accuracy of the prescription-dose determination as a multistage treatment strategy and found that using as many internal landmarks as possible to determine the coordinate transformation between sessions yields better results than using few. The proposed multistage treatment strategy should contribute greatly to the frameless fractionated treatment performed at Gamma Knife centers.
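
The coordinate-transformation step can be read as an ordinary least-squares fit between paired landmark coordinates in the two stereotactic spaces, which is one plausible reading of the "pseudoinverse or singular value decomposition" wording. The sketch below fits an affine map via the pseudoinverse and evaluates Δr on held-out landmarks; the affine (rather than strictly rigid) model and all names are assumptions, and the paper's exact formulation may differ.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map from paired landmarks: src, dst are (N, 3)
    coordinates of the same landmarks in the two stereotactic spaces.
    Returns the 3x4 transformation matrix M with dst ~ [src, 1] @ M.T."""
    src_h = np.hstack([src, np.ones((src.shape[0], 1))])  # homogeneous coords
    # Pseudoinverse solution minimising the coordinate discrepancy
    return (np.linalg.pinv(src_h) @ dst).T

def delta_r(M, src, dst):
    """|measured - transformed| for each evaluation landmark."""
    src_h = np.hstack([src, np.ones((src.shape[0], 1))])
    return np.linalg.norm(src_h @ M.T - dst, axis=1)

# Fit on 4 or 6 landmarks, evaluate delta_r on the remaining held-out ones:
# M = fit_affine(session1_landmarks[:6], session2_landmarks[:6])
# print(delta_r(M, session1_landmarks[6:], session2_landmarks[6:]))
```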

A Study for Improving Computational Efficiency in Method of Moments with Loop-Star Basis Functions and Preconditioner (루프-스타(Loop-Star) 기저 함수와 전제 조건(Preconditioner)을 이용한 모멘트법의 계산 효율 향상에 대한 연구)

  • Yeom, Jae-Hyun;Park, Hyeon-Gyu;Lee, Hyun-Suck;Chin, Hui-Cheol;Kim, Hyo-Tae;Kim, Kyung-Tae
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.23 no.2
    • /
    • pp.169-176
    • /
    • 2012
  • This paper uses loop-star basis functions to overcome the low-frequency breakdown problem in the method of moments (MoM) based on the electric field integral equation (EFIE). In addition, the p-type multiplicative Schwarz (pMUS) preconditioner is employed to reduce the number of iterations required by the conjugate gradient method (CGM). The low-frequency instability of Rao-Wilton-Glisson (RWG) basis functions in the EFIE can be resolved using loop-star basis functions and frequency normalization. However, loop-star basis functions, which consist of the irrotational and solenoidal components of the RWG basis functions, require a large number of iterations to reach a solution with iterative methods such as CGM because of their high condition number. To circumvent this problem, this paper proposes applying the pMUS preconditioner to reduce the number of CGM iterations. Simulation results show that pMUS is much faster than the block diagonal preconditioner (BDP) when the sparsity of pMUS is the same as that of BDP.
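
The pMUS preconditioner itself cannot be reconstructed from the abstract, but the mechanism it exploits, preconditioning to cut the iteration count of an iterative solver on an ill-conditioned system, can be shown generically. The sketch below runs SciPy's conjugate gradient on a toy symmetric positive-definite system with a plain diagonal (Jacobi) preconditioner standing in for the Schwarz- or block-type preconditioners discussed in the paper; the test matrix and all names are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def cg_iterations(A, b, apply_M_inv):
    """Solve A x = b with (preconditioned) conjugate gradient and return
    the solution together with the number of iterations used."""
    count = [0]
    def cb(xk):
        count[0] += 1
    M = LinearOperator(A.shape, matvec=apply_M_inv)
    x, info = cg(A, b, M=M, callback=cb)
    return x, count[0]

# Toy ill-conditioned SPD system: even a Jacobi preconditioner reduces the
# iteration count sharply, the same effect the paper seeks with pMUS.
n = 200
A = np.diag(np.linspace(1.0, 1e4, n)) + 0.1 * np.ones((n, n))
b = np.random.default_rng(0).standard_normal(n)
diag = np.diag(A).copy()
_, it_plain = cg_iterations(A, b, lambda v: v)          # no preconditioning
_, it_jacobi = cg_iterations(A, b, lambda v: v / diag)  # Jacobi preconditioning
print(it_plain, it_jacobi)
```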

Simulation of Low-Grazing-Angle Coherent Sea Clutter (Low Grazing Angle에서의 코히어런트 해상 클러터 시뮬레이션)

  • Choi, Sang-Hyun;Song, Ji-Min;Jeon, Hyeon-Mu;Chung, Yong-Seek;Kim, Jong-Mann;Hong, Seong-Won;Yang, Hoon-Gee
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.29 no.8
    • /
    • pp.615-623
    • /
    • 2018
  • The probability density function (PDF) of the amplitude of low-grazing-angle sea clutter reflectivity has generally been modeled by a compound-Gaussian distribution rather than the Rayleigh distribution, owing to the intensity variation of each clutter patch over time. The texture component of the reflectivity has been simulated by combining a Gamma distribution with a memoryless nonlinear transformation (MNLT). On the other hand, there is no standard method for simulating the speckle component. We first review Watts' method, in which the speckle is simulated starting from the Doppler spectrum of the received echoes, modeled as having a Gaussian shape. We then introduce a newly proposed method that simulates the speckle by manipulating a clutter covariance matrix through the Cholesky decomposition, after minimizing the effect of adjacent clutter patches with an equalizer. The feasibility of the proposed method is validated through simulation, in which the two methods are compared in terms of the Doppler spectrum and the correlation function.
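
The equalizer and the exact covariance construction are specific to the paper, but the compound-Gaussian recipe the abstract describes, correlated Gaussian speckle obtained by colouring white noise with a Cholesky factor of an assumed clutter covariance and then scaling it by a Gamma-distributed texture, can be sketched as follows. The Gaussian Doppler spectrum, the parameter values, and the function name are illustrative assumptions.

```python
import numpy as np

def correlated_speckle(n_pulses, prf, doppler_std, seed=0):
    """Complex speckle samples with a Gaussian-shaped Doppler spectrum,
    generated by colouring white noise with the Cholesky factor of the
    assumed clutter covariance matrix."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_pulses) / prf
    # Gaussian Doppler spectrum <-> Gaussian autocorrelation function
    acf = np.exp(-2.0 * (np.pi * doppler_std * t) ** 2)
    # Hermitian Toeplitz covariance built from the autocorrelation
    R = np.array([[acf[abs(i - j)] for j in range(n_pulses)]
                  for i in range(n_pulses)])
    L = np.linalg.cholesky(R + 1e-9 * np.eye(n_pulses))  # jitter for stability
    w = (rng.standard_normal(n_pulses)
         + 1j * rng.standard_normal(n_pulses)) / np.sqrt(2.0)
    return L @ w

# Compound-Gaussian clutter: a Gamma-distributed texture scales the speckle power.
rng = np.random.default_rng(1)
speckle = correlated_speckle(n_pulses=128, prf=1000.0, doppler_std=50.0)
texture = rng.gamma(shape=1.5, scale=1.0 / 1.5)
clutter = np.sqrt(texture) * speckle
```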

Gauss-Newton Based Estimation for Moving Emitter Location Using TDOA/FDOA Measurements and Its Analysis (TDOA/FDOA 정보를 이용한 Gauss-Newton 기법 기반의 이동 신호원 위치 및 속도 추정 방법과 성능 분석)

  • Kim, Yong-Hee;Kim, Dong-Gyu;Han, Jin-Woo;Song, Kyu-Ha;Kim, Hyoung-Nam
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.6
    • /
    • pp.62-71
    • /
    • 2013
  • The passive emitter location method using TDOA and FDOA measurements achieves higher accuracy than methods based on TDOA or FDOA alone, and it can also estimate the velocity vector of a moving platform. Recently, several non-iterative methods using a nuisance parameter have been suggested, but they need a common reference sensor for each pair of sensors and show relatively low performance when the range between the sensor group and the emitter is long. To address this, we derive a Gauss-Newton-based method for estimating the position and velocity of a moving emitter. In addition, to analyze the estimation performance for the position and the velocity separately, we decompose the CRLB matrix into the corresponding subspaces. Simulation results show the estimation performance of the derived method and the CEP planes for the given sensor geometry.
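
The full TDOA/FDOA estimator and the CRLB subspace decomposition are beyond what the abstract provides, but the Gauss-Newton backbone is generic. Below is a minimal sketch with a numerical Jacobian, demonstrated on a TDOA-only (range-difference) model; the sensor layout, the noiseless measurements, and all names are illustrative, and an FDOA extension would add range-rate-difference rows and a velocity state.

```python
import numpy as np

def gauss_newton(h, z, theta0, n_iter=20, eps=1e-6):
    """Minimise ||z - h(theta)||^2 by Gauss-Newton with a numerical Jacobian."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        r = z - h(theta)
        J = np.empty((len(r), len(theta)))
        for k in range(len(theta)):
            d = np.zeros_like(theta)
            d[k] = eps
            J[:, k] = (h(theta + d) - h(theta - d)) / (2 * eps)
        theta = theta + np.linalg.lstsq(J, r, rcond=None)[0]  # GN update
    return theta

# Illustrative TDOA-only model: range differences relative to sensor 0.
sensors = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0], [1000.0, 1000.0]])

def tdoa_model(p):
    ranges = np.linalg.norm(sensors - p, axis=1)
    return ranges[1:] - ranges[0]

true_position = np.array([300.0, 700.0])
z = tdoa_model(true_position)                        # noiseless measurements
print(gauss_newton(tdoa_model, z, theta0=[500.0, 500.0]))
```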

An efficient 2.5D inversion of loop-loop electromagnetic data (루프-루프 전자탐사자료의 효과적인 2.5차원 역산)

  • Song, Yoon-Ho;Kim, Jung-Ho
    • Geophysics and Geophysical Exploration
    • /
    • v.11 no.1
    • /
    • pp.68-77
    • /
    • 2008
  • We have developed an inversion algorithm for loop-loop electromagnetic (EM) data, based on the localised non-linear or extended Born approximation to the solution of the 2.5D integral equation describing an EM scattering problem. Source and receiver configuration may be horizontal co-planar (HCP) or vertical co-planar (VCP). Both multi-frequency and multi-separation data can be incorporated. Our inversion code runs on a PC platform without heavy computational load. For the sake of stable and high-resolution performance of the inversion, we implemented an algorithm determining an optimum spatially varying Lagrangian multiplier as a function of sensitivity distribution, through parameter resolution matrix and Backus-Gilbert spread function analysis. Considering that the different source-receiver orientation characteristics cause inconsistent sensitivities to the resistivity structure in simultaneous inversion of HCP and VCP data, which affects the stability and resolution of the inversion result, we adapted a weighting scheme based on the variances of misfits between the measured and calculated datasets. The accuracy of the modelling code that we have developed has been proven over the frequency, conductivity, and geometric ranges typically used in a loop-loop EM system through comparison with 2.5D finite-element modelling results. We first applied the inversion to synthetic data, from a model with resistive as well as conductive inhomogeneities embedded in a homogeneous half-space, to validate its performance. Applying the inversion to field data and comparing the result with that of dc resistivity data, we conclude that the newly developed algorithm provides a reasonable image of the subsurface.
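
The optimum spatially varying Lagrangian multiplier in the paper comes from parameter-resolution-matrix and Backus-Gilbert spread analysis, which the abstract does not detail. The sketch below therefore shows only the surrounding machinery, a damped least-squares model update with a per-cell damping vector, and uses a crude inverse-sensitivity heuristic where the paper's resolution-based criterion would go; the Jacobian, sizes, and scaling are all illustrative.

```python
import numpy as np

def damped_lsq_step(J, residual, lam):
    """One regularised inversion update dm = (J^T J + diag(lam))^-1 J^T r,
    where lam is a per-parameter (spatially varying) damping vector."""
    JTJ = J.T @ J
    return np.linalg.solve(JTJ + np.diag(lam), J.T @ residual)

# Heuristic: cells the data barely sense get stronger damping.
rng = np.random.default_rng(0)
J = rng.standard_normal((40, 25))        # Jacobian: 40 data, 25 model cells
residual = rng.standard_normal(40)       # observed minus calculated data
sensitivity = np.sum(J ** 2, axis=0)
lam = 0.1 * sensitivity.max() / np.maximum(sensitivity, 1e-12)
dm = damped_lsq_step(J, residual, lam)   # model update for this iteration
```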

A Research Strategy for VLSI Design and CAD Technology Development: Toward Next-Generation Computers (VLSI 설계와 CAD 기술개발 연구 전략 -다음 세대 컴퓨터 개발을 위한-)

  • 이문기
    • The Magazine of the IEIE
    • /
    • v.11 no.5
    • /
    • pp.42-50
    • /
    • 1984
  • This article presents research directions in VLSI design and CAD for the development of next-generation computers in Korea. The goal is to establish internationally competitive VLSI design capability, along with the CAD technology and systems needed to design circuits of roughly one million transistors economically. The work divides broadly into: research on new circuit architectures and algorithms; advanced CAD technology development covering CAD tools and languages; and research on the standard interfaces, networking, computing hardware, and system software required to use and develop the CAD tools needed for VLSI design. Available CAD systems are to be evaluated and improved, and surveys of the software and hardware for advanced CAD, namely computing hardware, the programming environment, networking capability, and standard interfaces for data exchange, are to be carried out in parallel. Detailed CAD research topics include system specification languages, design verification, system simulation, design synthesis, design analysis, design methodology, and device and process modeling programs. Architectures and algorithms for high-speed computation VLSI include distributed array-processing circuits for matrix computation, systolic array circuits, cellular logic circuits, three-dimensional array circuits, and VLSI for irregular computation algorithms. To build up VLSI design training and CAD expertise, a CAD center should be established, a nationwide CAD network installed at the related research institutes and universities, and an MPC (multi-project chip) program pursued. Once VLSI design capability has been demonstrated, a silicon foundry with 0.5~1.0 ㎛ technology should be established to strengthen that capability further. It is desirable that research and development be carried out by universities, industry, and research institutes acting as one, under a research and development committee that runs the work competitively and economically.


Regeneration of the Retarded Time Vector for Enhancing the Precision of Acoustic Pyrometry (온도장 측정 정밀도 향상을 위한 시간 지연 벡터의 재형성)

  • Kim, Tae-Kyoon;Ih, Jeong-Guon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.33 no.2
    • /
    • pp.118-125
    • /
    • 2014
  • An approximation of the speed of sound over the measurement plane is essential for the inverse estimation of temperature. To this end, an inverse problem is formulated that relates the measured retarded-time data between a set of sensors and an actuator array located on the wall. The transfer matrix and its coefficient vectors approximate the speed of sound over the measurement plane using radial basis functions with a finite number of interpolation points deployed inside the target field. The temperature field can then be reconstructed by spatial interpolation, which achieves high spatial resolution given a sufficient number of interpolation points. A large number of retarded-time data for acoustic paths between sensors and actuators are needed to obtain an accurate reconstruction, but a shortage of interpolation points due to practical limitations reduces the spatial resolution and degrades the reconstruction. In this work, a regeneration technique that produces additional retarded-time data for arbitrary acoustic paths is suggested to overcome the shortage of interpolation points. By applying the regeneration technique, many interpolation points can be deployed inside the field because the number of retarded-time data is increased. As a simulation example, two rectangular duct sections with arbitrary temperature distributions are reconstructed using two different data sets: measured data only, and a combination of measured and regenerated data. The results show a 15 % decrease in reconstruction error when the original and regenerated retarded-time data are combined.
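
One way to read the transfer-matrix construction is that the measured retarded time of each acoustic path is the line integral of the slowness (reciprocal sound speed) field, with the slowness expanded in radial basis functions at the interpolation points. The sketch below builds that path-by-basis matrix with Gaussian RBFs and a straight-ray, discretised line integral; the RBF shape, the straight-ray assumption, and the names are assumptions, and the paper's regeneration step is not reproduced.

```python
import numpy as np

def path_rbf_matrix(paths, centers, sigma, n_seg=200):
    """Entry (p, k): line integral of a Gaussian RBF centred at centers[k]
    along the straight path paths[p] = (start_point, end_point)."""
    A = np.zeros((len(paths), len(centers)))
    for p, (a, b) in enumerate(paths):
        a, b = np.asarray(a, float), np.asarray(b, float)
        pts = a + np.linspace(0.0, 1.0, n_seg)[:, None] * (b - a)
        ds = np.linalg.norm(b - a) / (n_seg - 1)
        for k, c in enumerate(centers):
            phi = np.exp(-np.sum((pts - np.asarray(c, float)) ** 2, axis=1)
                         / (2.0 * sigma ** 2))
            A[p, k] = phi.sum() * ds      # rectangle-rule line integral
    return A

# With slowness(x) = sum_k w[k] * phi_k(x), the retarded times satisfy
# A @ w ~= measured_times, so the coefficients follow from least squares:
#   w = np.linalg.lstsq(A, measured_times, rcond=None)[0]
# and temperature is recovered pointwise from c(x) = 1 / slowness(x).
```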

Antibiotics-Resistant Bacteria Infection Prediction Based on Deep Learning (딥러닝 기반 항생제 내성균 감염 예측)

  • Oh, Sung-Woo;Lee, Hankil;Shin, Ji-Yeon;Lee, Jung-Hoon
    • The Journal of Society for e-Business Studies
    • /
    • v.24 no.1
    • /
    • pp.105-120
    • /
    • 2019
  • The World Health Organization (WHO) and other government agencies around the world have warned against antibiotic-resistant bacteria arising from the abuse of antibiotics and are strengthening care and monitoring to prevent infection. However, an expeditious and accurate prediction and estimation method is highly necessary for preemptive measures, because culturing the infecting bacteria to identify an infection takes several days, which makes quarantine and contact tracing ineffective for preventing its spread. In this study, the disease diagnoses and antibiotic prescriptions included in electronic health records were embedded through a neural embedding model and matrix factorization, and a deep-learning-based classification model was proposed. The f1-score of the deep learning model increased from 0.525 to 0.617 when embedded information on diseases and antibiotics, the main causes of antibiotic resistance, was added to the patient's basic information and hospital-use information, and the deep learning model outperformed traditional machine learning models. Analysis of the characteristics of antibiotic-resistant patients showed that, compared with non-resistant patients diagnosed with the same diseases, resistant patients were more likely to use J01 (systemic antibacterial) antibiotics and were prescribed 6.3 times more than the defined daily dose (DDD).
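
The abstract does not specify the factorization used, so the following is only a minimal sketch of one common realisation: truncated SVD of a patient-by-code count matrix, yielding low-dimensional representations of patients and of diagnosis/antibiotic codes. The matrix layout, the embedding dimension, and the names are assumptions; the neural embedding model the paper also uses is not shown.

```python
import numpy as np

def factorize_embeddings(counts, dim=32):
    """Embeddings of patients and medical codes via truncated SVD of a
    patient-by-code count matrix (requires dim <= min(counts.shape))."""
    U, s, Vt = np.linalg.svd(counts, full_matrices=False)
    patient_emb = U[:, :dim] * s[:dim]   # patient representations
    code_emb = Vt[:dim, :].T             # diagnosis / antibiotic code vectors
    return patient_emb, code_emb

# counts[i, j] = how often code j (diagnosis or antibiotic) appears for
# patient i; patient_emb rows can be concatenated with demographic and
# hospital-use features and fed to a downstream classifier.
```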

Optimal supervised LSA method using selective feature dimension reduction (선택적 자질 차원 축소를 이용한 최적의 지도적 LSA 방법)

  • Kim, Jung-Ho;Kim, Myung-Kyu;Cha, Myung-Hoon;In, Joo-Ho;Chae, Soo-Hoan
    • Science of Emotion and Sensibility
    • /
    • v.13 no.1
    • /
    • pp.47-60
    • /
    • 2010
  • Most classification research has used kNN (k-nearest neighbor) and SVM (support vector machine), which are learning-based models, or Bayesian classifiers and neural network algorithms (NNA), which are statistics-based methods. However, these face space and time limitations when classifying the very large number of web pages on today's internet. Moreover, most classification studies use a unigram feature representation, which poorly captures the real meaning of words, and Korean web page classification faces the additional problem that Korean words are often polysemous. For these reasons, LSA (latent semantic analysis) is proposed for classification in this environment of large data sets and polysemous words. LSA uses SVD (singular value decomposition), which decomposes the original term-document matrix into three matrices and reduces their dimension. This decomposition creates a new low-dimensional semantic space for representing vectors, which makes classification efficient and allows the latent meaning of words and documents (or web pages) to be analyzed. Although LSA works well for classification, it has a drawback: as SVD reduces the dimension of the matrix and creates the new semantic space, it selects dimensions that represent the vectors well rather than dimensions that discriminate between them, which is one reason LSA does not improve classification performance as much as expected. In this paper, we propose a new LSA that selects dimensions that both discriminate and represent vectors well, minimizing this drawback and improving performance. The proposed method shows better and more stable performance than other LSA variants in low-dimensional spaces. In addition, we obtain further improvement in classification by creating and selecting features, removing stopwords, and weighting them statistically.
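
The dimension-selection criterion is the paper's contribution and is not reproduced here; the sketch below shows only the baseline LSA machinery it builds on, a truncated SVD of the term-document matrix and the standard fold-in of unseen documents into the latent space. The names and the choice of document representation (rows of V_k) are conventional assumptions.

```python
import numpy as np

def lsa_fit(term_doc, k):
    """Truncated SVD of the term-document matrix A ~ U_k S_k V_k^T.
    Returns U_k, the singular values s_k, and the documents represented
    as rows of V_k in the k-dimensional latent semantic space."""
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    return U[:, :k], s[:k], Vt[:k, :].T

def lsa_fold_in(term_vector, U_k, s_k):
    """Map an unseen document (raw term counts) into the same latent space."""
    return (U_k.T @ term_vector) / s_k

# Classification then operates on the low-dimensional vectors, e.g. kNN with
# cosine similarity between lsa_fold_in(new_doc, U_k, s_k) and the document
# rows returned by lsa_fit.
```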
