• Title/Abstract/Keywords: vector fitting

Search results: 82 (processing time: 0.025 s)

S-파라메타를 이용한 절연 변압기의 고주파 파라메타 추출 (High-Frequency Parameter Extraction of Insulating Transformer Using S-Parameter Measurement)

  • 김성준;류수정;김태호;김종현;나완수
    • 한국전자파학회논문지 / Vol. 25, No. 3 / pp.259-268 / 2014
  • This paper proposes a method for extracting the high-frequency parameters of an insulating transformer from S-parameters. Classically, circuit constants in the steady state are extracted by computation from no-load and short-circuit test measurements; this paper instead studies extraction from S-parameters measured with a VNA (Vector Network Analyzer). The transformer circuit constants over the high-frequency band, including the 60 Hz line frequency, were extracted by data fitting (optimization) against the measured S-parameters. The high-frequency model of the insulating transformer takes the form of the conventional transformer equivalent circuit augmented with stray capacitances. The S-parameters of the extracted circuit were compared with the measured S-parameters and found to agree closely, and with a signal generator driving the primary side, the measured secondary-side voltage matched the secondary voltage computed from the high-frequency equivalent circuit. These results validate the proposed S-parameter-based method for extracting the high-frequency parameters of insulating transformers.
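For readers who want to experiment, the data-fitting step can be sketched in a few lines: choose circuit constants that minimize the misfit between modeled and measured S-parameters. The one-port circuit below (a series R-L branch shunted by a stray capacitance) and all numeric values are illustrative assumptions, not the paper's actual two-port transformer model or VNA data.

```python
# Minimal sketch: least-squares fit of equivalent-circuit constants to S11.
import numpy as np
from scipy.optimize import least_squares

Z0 = 50.0                          # VNA reference impedance
f = np.logspace(1, 7, 400)         # 10 Hz .. 10 MHz sweep
w = 2 * np.pi * f

def s11(params, w):
    """S11 of a series R-L branch in parallel with a stray capacitance C."""
    R, L, C = params
    z_rl = R + 1j * w * L
    z = 1.0 / (1.0 / z_rl + 1j * w * C)
    return (z - Z0) / (z + Z0)

# Synthetic "measurement" generated from known constants for the demo.
s_meas = s11([3.0, 20e-3, 50e-12], w)

def residual(params):
    d = s11(params, w) - s_meas
    return np.concatenate([d.real, d.imag])   # least_squares wants real values

fit = least_squares(residual, x0=[1.0, 1e-2, 1e-11],
                    bounds=([0, 0, 0], [np.inf] * 3),
                    x_scale=[1.0, 1e-2, 1e-11])
print("fitted R, L, C:", fit.x)
```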

MPEG-2 비트열로부터 객체 기반 MPEG-4 응용을 위한 고속 정보 추출 알고리즘 (Fast information extraction algorithm for object-based MPEG-4 application from MPEG-2 bit-stream)

  • 양종호;원치선
    • 한국통신학회논문지 / Vol. 26, No. 12A / pp.2109-2119 / 2001
  • This paper introduces a fast information-extraction algorithm for converting MPEG-2 bit-streams to object-based MPEG-4. The information extracted from MPEG-2 for the conversion comprises object images with their shape information, macroblock motion vectors, and header information; with it, fast conversion to object-based MPEG-4 becomes possible. The most important step, object extraction, uses the MPEG-2 motion vectors together with the watershed algorithm: an object is extracted within a frame using the user's perceptual input, and the extracted object is then tracked through successive frames. Even when fast object motion yields an unsatisfactory result, the user can intervene to recover a good one. The procedure consists of two stages, object extraction and object tracking. In the extraction stage, the user selects the object from an image automatically segmented by block classification and the watershed algorithm; although this stage requires user intervention, it was implemented so that extraction is easy. In the tracking stage, the object is followed through successive frames at high speed using the MPEG-2 motion vectors and the object shape information, with the watershed algorithm refining the object boundary. Experimental results show that fast conversion from MPEG-2 bit-streams to object-based MPEG-4 is feasible. (A toy sketch of the marker-based watershed step is given below.)

  • PDF
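The boundary-refinement step lends itself to a compact illustration. The sketch below runs marker-based watershed segmentation on a synthetic frame; in the paper the markers would come from user input and block classification and the gradient from real video frames, so everything here is a stand-in.

```python
# Toy marker-based watershed segmentation of a synthetic frame.
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

# Synthetic frame: a bright rectangular "object" on a dark background.
rng = np.random.default_rng(0)
img = np.zeros((128, 128))
img[40:90, 30:80] = 1.0
img += 0.1 * rng.standard_normal(img.shape)

edges = sobel(img)                  # gradient image drives the flooding

# Hard-coded seeds standing in for user-selected markers:
# label 1 = background, label 2 = object.
markers = np.zeros_like(img, dtype=int)
markers[5, 5] = 1
markers[64, 55] = 2

labels = watershed(edges, markers)
print("object pixels:", int((labels == 2).sum()))
```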

Reliability of mortar filling layer void length in in-service ballastless track-bridge system of HSR

  • Binbin He;Sheng Wen;Yulin Feng;Lizhong Jiang;Wangbao Zhou
    • Steel and Composite Structures / Vol. 47, No. 1 / pp.91-102 / 2023
  • To establish an evaluation standard and control limit for mortar filling layer void length, a train sub-model was developed in MATLAB and a track-bridge sub-model accounting for the mortar filling layer void was built in ANSYS. The two sub-models were assembled into a train-track-bridge coupled dynamic model through the wheel-rail contact relationship, and the model's validity was corroborated against a model from the literature. Considering the randomness of fastening stiffness, mortar elastic modulus, void length, and pier settlement, test points were designed with the Box-Behnken method in Design-Expert software. The coupled dynamic model was evaluated at these points, and a support vector regression (SVR) nonlinear mapping model of the wheel-rail system was established, trained, and verified. Finally, the reliable probability of the amplification coefficients of the train and structure response indices falling within different ranges was obtained from the SVR model and Latin hypercube sampling, yielding the limit on the void length. The results show that the SVR nonlinear mapping model has a high fitting accuracy of 0.993 while improving computational efficiency by 99.86%, so it can be used to calculate the dynamic response of the wheel-rail system. The void length significantly affects the wheel-rail vertical force, wheel load reduction ratio, rail vertical displacement, and track plate vertical displacement. The dynamic response of the track structure constrains the void-length limit more than the dynamic response of the vehicle, with rail vertical displacement the most restrictive index. At train speeds of 250-350 km/h, the grade I, II, and III limits on the void length are 3.932 m, 4.337 m, and 4.766 m, respectively. These results provide a reference for the long-term service-performance reliability of the ballastless track-bridge system of HSR.
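The surrogate-plus-sampling workflow can be reproduced in miniature: train a support vector regression model on design points, then estimate a probability by Latin hypercube sampling. The response function, parameter ranges, and limit value below are invented stand-ins for the coupled train-track-bridge model.

```python
# Sketch: SVR surrogate of an expensive model + Latin hypercube reliability.
import numpy as np
from scipy.stats import qmc
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Stand-in for coupled-model runs at design points; the four inputs mimic
# (fastening stiffness, mortar modulus, void length, pier settlement).
X_train = rng.uniform(0.0, 1.0, size=(60, 4))
y_train = np.sin(3 * X_train[:, 2]) + 0.3 * X_train.sum(axis=1)  # fake response

surrogate = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_train, y_train)

# Latin hypercube sampling over the input space, then a reliability estimate.
X_mc = qmc.LatinHypercube(d=4, seed=1).random(n=10_000)
y_mc = surrogate.predict(X_mc)

limit = 1.5                                   # hypothetical response limit
print("P(response <= limit) =", float((y_mc <= limit).mean()))
```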

CHALLENGING APPLICATIONS FOR FT-NIR SPECTROSCOPY

  • Goode, Jon G.;Londhe, Sameer;Dejesus, Steve;Wang, Qian
    • 한국근적외분광분석학회 학술대회논문집 / NIR-2001 / pp.4112-4112 / 2001
  • The feasibility of NIR spectroscopy as a quick and nondestructive method for quality control of coating-thickness uniformity of pharmaceutical tablets was investigated. Near-infrared spectra of a set of pharmaceutical tablets with varying coating thickness were measured with a diffuse-reflectance fiber-optic probe connected to a Bruker IFS 28/N FT-NIR spectrometer. The challenging issues encountered in this study were: 1. the similarity of the formulation of the core and coating materials, 2. the lack of sufficient calibration samples, and 3. the non-linear relationship between the NIR spectral intensity and coating thickness. A peak at 7184 $cm^{-1}$ was identified that differed between the coating material and the core material when the spectra were collected at 2 $cm^{-1}$ resolution (0.4 nm at 7184 $cm^{-1}$). The study showed that the coating thickness can be analyzed by polynomial fitting of the area of the selected peak, whereas least-squares calibration of the same data failed for lack of sufficient calibration samples. Samples of coal powder and solid pieces of coal were also analyzed by FT-NIR diffuse-reflectance spectroscopy with the goal of predicting their ash content, percentage of volatile components, and energy content. These measurements were performed on a Bruker Vector 22/N spectrometer with a fiber-optic probe. A partial least squares model was constructed for each parameter of interest, separately for solid and powdered sample forms. The calibration models ranged in size from 4 to 10 PLS ranks, with correlation coefficients from 86.6 to 95.0% and root-mean-square errors of cross validation comparable to the corresponding reference measurement methods. FT-NIR diffuse-reflectance measurement was found to be a significant improvement over existing methodologies in speed and ease of use, while maintaining the desired accuracy for all parameters and sample forms. (A small sketch of the peak-area fitting idea is given below.)

  • PDF
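The peak-area idea is simple enough to sketch: integrate a band around the diagnostic peak and regress coating thickness on the area with a low-order polynomial. The band below is centered at the paper's 7184 cm⁻¹ peak, but the band shape, noise level, and thickness values are invented for illustration.

```python
# Sketch: coating thickness via polynomial fit of an NIR peak area.
import numpy as np

wn = np.arange(7100.0, 7270.0, 2.0)        # wavenumber axis, 2 cm^-1 spacing

def spectrum(thickness, seed):
    """Fake coating band at 7184 cm^-1 whose area grows with thickness."""
    rng = np.random.default_rng(seed)
    band = thickness * np.exp(-((wn - 7184.0) / 12.0) ** 2)
    return band + 0.01 * rng.standard_normal(wn.size)

thicknesses = np.array([20.0, 40.0, 60.0, 80.0, 100.0])   # hypothetical units
areas = np.array([spectrum(t, i).sum() * 2.0              # rectangle-rule area
                  for i, t in enumerate(thicknesses)])

coeffs = np.polyfit(areas, thicknesses, deg=2)   # thickness as poly(area)
unknown = spectrum(55.0, 99).sum() * 2.0
print("predicted thickness:", np.polyval(coeffs, unknown))
```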

컨볼루션 신경망을 이용한 도시 환경에서의 안전도 점수 예측 모델 연구 (A Safety Score Prediction Model in Urban Environment Using Convolutional Neural Network)

  • 강현우;강행봉
    • 정보처리학회논문지:소프트웨어 및 데이터공학 / Vol. 5, No. 8 / pp.393-400 / 2016
  • Recently, research has turned to efficient, automatic analysis of urban environments with the help of computer vision and machine learning, and among such analyses, urban safety analysis has drawn much attention from local communities. To predict safety scores more accurately and to reflect human visual perception, both global and local information, the most important cues in human visual perception, must be considered. To this end, we use a double-column convolutional neural network consisting of a global column and a local column, whose inputs are, respectively, the resized original image and random crops of the original image. We also propose a new training method that prevents overfitting to a particular column during training. To benchmark our DCNN model, we measured root-mean-square error and correlation for two SVR models and three CNN models. Our model performed best, with an RMSE of 0.7432 and Pearson/Spearman correlation coefficients of 0.853/0.840.
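A structural sketch of such a double-column network is given below, assuming PyTorch; the layer sizes, input resolutions, and regression head are illustrative guesses, not the paper's actual architecture.

```python
# Structural sketch of a double-column CNN for safety-score regression.
import torch
import torch.nn as nn

class Column(nn.Module):
    """One convolutional column producing a fixed-size feature vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)

class DoubleColumnCNN(nn.Module):
    """Global column sees the resized frame; local column sees a random crop."""
    def __init__(self):
        super().__init__()
        self.global_col = Column()
        self.local_col = Column()
        self.head = nn.Linear(64, 1)          # regression head: safety score

    def forward(self, x_global, x_local):
        feats = torch.cat(
            [self.global_col(x_global), self.local_col(x_local)], dim=1)
        return self.head(feats)

model = DoubleColumnCNN()
scores = model(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64))
print(scores.shape)                           # torch.Size([2, 1])
```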

3D Building Reconstruction and Visualization by Clustering Airborne LiDAR Data and Roof Shape Analysis

  • Lee, Dong-Cheon;Jung, Hyung-Sup;Yom, Jae-Hong
    • 한국측량학회지 / Vol. 25, No. 6-1 / pp.507-516 / 2007
  • Segmentation and organization of LiDAR (Light Detection and Ranging) data of the Earth's surface are difficult tasks because the captured data are irregularly distributed point clouds lacking semantic information: they provide a huge number of spatial coordinates without topological or relational information among the points. This study introduces a LiDAR data segmentation technique that utilizes histograms of LiDAR height image data and analyzes roof shape for 3D reconstruction and visualization of buildings. One advantage of utilizing LiDAR height image data is that no registration is required, because the LiDAR data are geo-referenced and ortho-projected; in consequence, measurements on the image provide absolute reference coordinates. The LiDAR image allows measurement of the initial building boundaries to estimate the locations of the side walls and to form the planar surfaces that represent approximate building footprints. LiDAR points close to each side wall were grouped together, and least-squares planar surface fitting was performed on the segmented point clouds to determine the precise location of each wall of a building. Finally, roof shape was analyzed from accumulated slopes along profiles of the roof top; simulated LiDAR data were used for this analysis because buildings with various roof shapes do not exist in the test area. The proposed approach was tested on a heavily built-up urban residential area, and a 3D digital vector map produced by digitizing compiled aerial photographs was used to evaluate the accuracy of the results. Experimental results show the efficiency of the proposed methodology for 3D building reconstruction and large-scale digital mapping, especially for urban areas.
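The wall-locating step rests on ordinary least-squares plane fitting, which can be shown in a few lines; the synthetic points below stand in for a segmented LiDAR point cloud.

```python
# Sketch: least-squares fit of a plane z = ax + by + c to segmented points.
import numpy as np

rng = np.random.default_rng(7)
xy = rng.uniform(0.0, 10.0, size=(200, 2))             # synthetic footprint
z = (0.2 * xy[:, 0] - 0.1 * xy[:, 1] + 5.0
     + 0.02 * rng.standard_normal(200))                # noisy plane samples

A = np.column_stack([xy, np.ones(len(xy))])            # [x y 1] design matrix
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
print(f"fitted plane: z = {a:.3f}x + {b:.3f}y + {c:.3f}")
```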

항공 라이다 수치지면자료의 오분류 영역 탐지 알고리즘 (Misclassified Area Detection Algorithm for Aerial LiDAR Digital Terrain Data)

  • 김민철;노명종;조우석;방기인;박준구
    • 대한공간정보학회지 / Vol. 19, No. 1 / pp.79-86 / 2011
  • Airborne laser scanning (LiDAR: Light Detection And Ranging) has recently attracted attention as a means of building digital elevation models (DEM). DEM quality depends on the accuracy of the digital terrain data (DTD), the ground points extracted from the LiDAR data. Automatic filtering that extracts the DTD from the raw data, however, always produces misclassified areas owing to the limitations of the filtering algorithms and the intrinsic characteristics of LiDAR data, so manual classification by an operator is indispensable. To support this manual work, this study proposes an algorithm that automatically detects areas of the automatically filtered DTD that are likely to be misclassified. The proposed algorithm operates on a 2D grid structure and uses three parameters, named 'Slope Angle', 'Slope DeltaH', and 'NNMaxDH (Nearest Neighbor Max Delta Height)'. Experimental results show that the algorithm produces stable results regardless of terrain shape or mean LiDAR point density.
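As an illustration of the grid-parameter idea, the sketch below computes a nearest-neighbor maximum height difference on a toy 2D grid and flags cells exceeding a threshold; the parameter definition and the 1 m threshold are our reading of 'NNMaxDH', not the paper's exact formulation.

```python
# Toy detection of suspect DTD cells via nearest-neighbor height differences.
import numpy as np

grid = np.full((50, 50), 10.0)        # flat terrain at 10 m elevation
grid[20:23, 20:23] = 14.0             # a low object misclassified as ground

def nn_max_dh(g):
    """Max absolute height difference to the 4 nearest neighbor cells."""
    out = np.zeros_like(g)
    for dy, dx in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        out = np.maximum(out, np.abs(g - np.roll(g, (dy, dx), axis=(0, 1))))
    return out

suspect = nn_max_dh(grid) > 1.0       # hypothetical 1 m threshold
print("cells flagged for manual review:", int(suspect.sum()))
```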

SAVITZKY-GOLAY DERIVATIVES : A SYSTEMATIC APPROACH TO REMOVING VARIABILITY BEFORE APPLYING CHEMOMETRICS

  • Hopkins, David W.
    • 한국근적외분광분석학회 학술대회논문집 / NIR-2001 / pp.1041-1041 / 2001
  • Removal of variability in spectral data before chemometric modeling will generally result in simpler (and presumably more robust) models. Particularly for sparsely sampled data, such as typically encountered with diode-array instruments, Savitzky-Golay (S-G) derivatives offer an effective method to remove the effects of shifting baselines and of the sloping or curving apparent baselines often observed with scattering samples. Applying these convolution functions is equivalent to fitting a selected polynomial to a number of points in the spectrum, usually 5 to 25. The value of the polynomial evaluated at its mid-point, or its derivative, is taken as the (smoothed) spectrum or its derivative at the mid-point of the wavelength window, and the process is continued for successive windows along the spectrum. The original paper, published in 1964 [1], presented these convolution functions as integers to be used as multipliers for the spectral values at equal intervals in the window, with a normalization integer to divide the sum of the products to obtain the result at each point. Steinier et al. [2] published corrections to errors in the original presentation [1] and a vector formulation for obtaining the coefficients. The selection of the degree of the polynomial and the number of points in the window determines whether closely situated bands and shoulders are resolved in the derivatives. Furthermore, the actual noise reduction in the derivatives may be estimated from the square root of the sum of the squared coefficients, divided by the NORM value. A simple technique to evaluate the actual convolution factors employed by the software will be presented. It has been found that some software packages do not properly account for the sampling interval of the spectral data (Equation VII in [1]); while this is not a problem in the construction and implementation of chemometric models, it may be noticed when comparing models at differing spectral resolutions. The effects of choosing various polynomials and window sizes on the parameters of PLS models will also be presented. (A short scipy sketch of the sampling-interval point is given below.)

  • PDF
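The sampling-interval point is easy to demonstrate with scipy's Savitzky-Golay filter, whose `delta` argument is exactly the spectral spacing discussed above; the spectrum below is synthetic.

```python
# Sketch: S-G derivatives and the effect of the sampling interval (delta).
import numpy as np
from scipy.signal import savgol_filter

x = np.arange(400.0, 700.0, 2.0)          # wavelength axis, 2 nm spacing
rng = np.random.default_rng(0)
y = np.exp(-((x - 550.0) / 30.0) ** 2) + 0.01 * rng.standard_normal(x.size)

smooth = savgol_filter(y, window_length=11, polyorder=2)
d1 = savgol_filter(y, window_length=11, polyorder=2, deriv=1, delta=2.0)

# Omitting delta (default 1.0) rescales the first derivative by the sampling
# interval -- harmless within one model, confusing across resolutions.
d1_default = savgol_filter(y, window_length=11, polyorder=2, deriv=1)
print(np.allclose(d1, d1_default / 2.0))  # True
```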

작물분류에서 기계학습 및 딥러닝 알고리즘의 분류 성능 평가: 하이퍼파라미터와 훈련자료 크기의 영향 분석 (Performance Evaluation of Machine Learning and Deep Learning Algorithms in Crop Classification: Impact of Hyper-parameters and Training Sample Size)

  • 김예슬;곽근호;이경도;나상일;박찬원;박노욱
    • 대한원격탐사학회지 / Vol. 34, No. 5 / pp.811-827 / 2018
  • The purpose of this study is to compare machine learning and deep learning algorithms for crop classification from multi-temporal remote sensing data. For crop cultivation areas in Haenam-gun, Jeollanam-do, Korea and in Illinois, USA, we analyzed the effects of (1) hyper-parameters and (2) training sample size on machine learning and deep learning algorithms. The comparison used a support vector machine (SVM) as the machine learning algorithm and a convolutional neural network (CNN) as the deep learning algorithm; for the CNN, both a 2D-CNN, which considers two-dimensional spatial information, and a 3D-CNN, which extends the structure to the temporal dimension, were applied. The experiments showed that, unlike the SVM, the CNN, despite its many hyper-parameters, yielded similar optimal hyper-parameter values in the two regions. Although model optimization is time-consuming, this suggests that transfer learning, in which an optimized CNN model is extended to other regions, is highly applicable. The experiments on training sample size showed that the CNN was affected more than the SVM, a tendency particularly pronounced in Illinois, which has diverse spatial characteristics. The classification performance of the 3D-CNN also degraded in Illinois, which we attribute to overfitting as model complexity increased: training accuracy was high, but diverse spatial characteristics and noise in the input data degraded classification performance. These results imply that the classification algorithm should be chosen in light of the spatial characteristics of the target area, and that the CNN, especially the 3D-CNN, requires a large amount of training data to guarantee a given level of classification performance.
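The SVM side of such a comparison reduces to a small grid search over the two key hyper-parameters; the sketch below uses random stand-ins for the multi-temporal pixel features and crop labels, so the grid values are illustrative only.

```python
# Sketch: grid search over SVM hyper-parameters for crop classification.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((600, 24))   # e.g. 24 multi-temporal band values/pixel
y = rng.integers(0, 5, size=600)     # 5 crop classes (random labels here)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

search = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [1, 10, 100], "gamma": [0.001, 0.01, 0.1]},
    cv=3,
)
search.fit(X_tr, y_tr)
print("best hyper-parameters:", search.best_params_)
print("test accuracy:", search.score(X_te, y_te))
```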

다양한 다분류 SVM을 적용한 기업채권평가 (Corporate Bond Rating Using Various Multiclass Support Vector Machines)

  • 안현철;김경재
    • Asia pacific journal of information systems / Vol. 19, No. 2 / pp.157-178 / 2009
  • Corporate credit rating is a very important factor in the market for corporate debt. Information concerning corporate operations is often disseminated to market participants through the changes in credit ratings published by professional rating agencies such as Standard and Poor's (S&P) and Moody's Investor Service. Since these agencies generally charge a large fee for the service, and the periodically provided ratings sometimes do not reflect the default risk of the company at the time, it may be advantageous for bond-market participants to be able to classify credit ratings before the agencies actually publish them. As a result, it is very important for companies (especially financial companies) to develop a proper model of credit rating. From a technical perspective, credit rating constitutes a typical multiclass classification problem, because rating agencies generally have ten or more categories of ratings; for example, S&P's ratings range from AAA for the highest-quality bonds to D for the lowest-quality bonds. The professional rating agencies emphasize the importance of analysts' subjective judgments in determining credit ratings. In practice, however, a mathematical model that uses the financial variables of companies plays an important role in determining credit ratings, since it is convenient to apply and cost-efficient. These financial variables include ratios that represent a company's leverage status, liquidity status, and profitability status. Several statistical and artificial intelligence (AI) techniques have been applied as tools for predicting credit ratings. Among them, artificial neural networks are most prevalent in the area of finance because of their broad applicability to many business problems and their preeminent ability to adapt. However, artificial neural networks also have many defects, including the difficulty of determining the values of the control parameters and the number of processing elements in the layer, as well as the risk of over-fitting. Of late, because of their robustness and high accuracy, support vector machines (SVMs) have become popular as a solution for problems requiring accurate prediction. An SVM's solution may be globally optimal because SVMs seek to minimize structural risk, whereas artificial neural network models may tend to find locally optimal solutions because they seek to minimize empirical risk. In addition, no parameters need to be tuned in SVMs, barring the upper bound for non-separable cases in linear SVMs. Since SVMs were originally devised for binary classification, however, they are not intrinsically geared for multiclass classification as required in credit rating. Thus, researchers have tried to extend the original SVM to multiclass classification, and a variety of techniques for extending standard SVMs to multiclass SVMs (MSVMs) has been proposed in the literature. However, only a few types of MSVM have been tested in prior studies that apply MSVMs to credit rating. In this study, we examined six different MSVM techniques: (1) One-Against-One, (2) One-Against-All, (3) DAGSVM, (4) ECOC, (5) the method of Weston and Watkins, and (6) the method of Crammer and Singer. In addition, we examined the prediction accuracy of modified versions of conventional MSVM techniques. To find the most appropriate MSVM technique for corporate bond rating, we applied all of these techniques to a real-world case of credit rating in Korea.
Corporate bond rating is the most frequently studied area of credit rating for specific debt issues or other financial obligations. For our study, the research data were collected from National Information and Credit Evaluation, Inc., a major bond-rating company in Korea. The data set comprises the bond ratings for the year 2002 and various financial variables for 1,295 companies from the manufacturing industry in Korea. We compared the results of these techniques with one another and with those of traditional methods for credit rating, such as multiple discriminant analysis (MDA), multinomial logistic regression (MLOGIT), and artificial neural networks (ANNs). As a result, we found that DAGSVM with an ordered list was the best approach for predicting bond ratings. In addition, we found that the modified version of the ECOC approach can yield higher prediction accuracy for cases showing clear patterns.
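Two of the decompositions examined above, One-Against-One and One-Against-All, have off-the-shelf wrappers in scikit-learn; the sketch below applies them to synthetic "rating" data (DAGSVM, ECOC, and the Weston-Watkins and Crammer-Singer formulations would need dedicated implementations).

```python
# Sketch: One-Against-One vs. One-Against-All SVMs on synthetic rating data.
import numpy as np
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import LinearSVC

rng = np.random.default_rng(42)
X = rng.standard_normal((500, 12))    # 12 financial ratios (synthetic)
y = rng.integers(0, 10, size=500)     # 10 rating classes, an AAA..D analog

for name, clf in [
    ("one-vs-one", OneVsOneClassifier(LinearSVC(max_iter=10_000))),
    ("one-vs-rest", OneVsRestClassifier(LinearSVC(max_iter=10_000))),
]:
    clf.fit(X, y)
    print(name, "training accuracy:", round(clf.score(X, y), 3))
```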