• Title/Summary/Keyword: Linear Space Algorithm


Review on the Three-Dimensional Inversion of Magnetotelluric Data (MT 자료의 3차원 역산 개관)

  • Kim Hee Joon;Nam Myung Jin;Han Nuree;Choi Jihyang;Lee Tae Jong;Song Yoonho;Suh Jung Hee
    • Geophysics and Geophysical Exploration / v.7 no.3 / pp.207-212 / 2004
  • This article reviews recent developments in three-dimensional (3-D) magnetotelluric (MT) imaging. The inversion of MT data is fundamentally ill-posed, and therefore the resultant solution is non-unique. A regularizing scheme must be involved to reduce the non-uniqueness while retaining certain a priori information in the solution. The standard approach to nonlinear inversion in geophysics has been the Gauss-Newton method, which solves a sequence of linearized inverse problems. When run to convergence, the algorithm minimizes an objective function over the space of models and, in this sense, produces an optimal solution of the inverse problem. The general usefulness of iterative, linearized inversion algorithms, however, is greatly limited in 3-D MT applications by the requirement of computing the Jacobian (partial derivative, or sensitivity) matrix of the forward problem. This difficulty may be relaxed using conjugate gradient (CG) methods. A linear CG technique is used to solve each step of the Gauss-Newton iteration incompletely, while the method of nonlinear CG is applied directly to the minimization of the objective function. These CG techniques replace the computation of the Jacobian matrix and the solution of a large linear system with computations equivalent to only three forward problems per inversion iteration. Consequently, the algorithms are efficient in computational speed and memory requirements, making 3-D inversion feasible.
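
As a rough illustration of the CG strategy described above, the sketch below minimizes a toy regularized least-squares objective with a nonlinear conjugate-gradient optimizer. The linear operator `G`, data `d`, and weight `lam` are illustrative stand-ins for the MT forward problem, and SciPy's CG method stands in for the paper's inversion algorithms.

```python
# A minimal sketch of nonlinear-CG minimization of a regularized
# least-squares objective (illustrative stand-in for 3-D MT inversion).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
G = rng.normal(size=(20, 10))      # toy linear "forward" operator
m_true = rng.normal(size=10)
d = G @ m_true                     # synthetic data
lam = 1e-2                         # regularization weight

def objective(m):
    # phi(m) = ||d - Gm||^2 + lam * ||m||^2 (Tikhonov-style)
    r = d - G @ m
    return r @ r + lam * (m @ m)

def gradient(m):
    # In 3-D MT this gradient is obtained from adjoint (forward-like)
    # solves, avoiding explicit construction of the Jacobian matrix.
    return -2.0 * G.T @ (d - G @ m) + 2.0 * lam * m

result = minimize(objective, np.zeros(10), jac=gradient, method="CG")
print(result.x.round(3))
```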

Research and development on image luminance meter of road tunnel internal and external (도로터널 내/외부의 영상휘도 측정기 연구개발)

  • Jang, Soon-Chul;Park, Sung-Lim;Ko, Seok-Yong;Lee, Mi-Ae
    • Journal of Korean Tunnelling and Underground Space Association / v.17 no.1 / pp.1-9 / 2015
  • This paper introduces the development of an imaging luminance meter that measures luminance inside and outside road tunnels. The developed imaging luminance meter complies with both the L20 method and the veiling luminance method of the international standard CIE 88. This paper mainly presents the L20 method, because most tunnels currently adopt it. The developed system has an embedded computer for stand-alone operation, along with an Ethernet port, a heater, a fan, a defroster, a wiper, and a sun shield. A compensation algorithm is applied to correct the non-linear responses to luminance and integration time. The measurement error was less than 1% when the system was calibrated at a public certification institute. The developed system was also tested in the field, in a real road tunnel. The test results were very similar to those of the reference luminance meter and showed that the developed system operates well in the field. Partial sensor saturation occurred and produced lower luminance readings because of highly reflective objects in the field; further study on high-luminance measurement should follow.
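
A sketch of the kind of compensation step the abstract mentions appears below: linearizing the sensor response before scaling pixel values to luminance. The inverse-response polynomial and the calibration constant `k` are hypothetical placeholders for values that would come from laboratory calibration, not the paper's actual coefficients.

```python
# Hedged sketch: correct a camera's non-linear response, then scale by
# integration time and a calibration constant to get luminance (cd/m^2).
import numpy as np

# Hypothetical inverse-response polynomial fitted during calibration.
response_poly = np.array([1.02, -0.05, 0.01])   # x, x^2, x^3 coefficients

def pixel_to_luminance(pixel, integration_time_s, k=120.0):
    x = pixel / 255.0                           # normalized raw value
    linearized = (response_poly[0] * x
                  + response_poly[1] * x ** 2
                  + response_poly[2] * x ** 3)  # undo sensor non-linearity
    return k * linearized / integration_time_s  # exposure-normalized output

print(pixel_to_luminance(pixel=180, integration_time_s=0.01))
```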

Design of pRBFNNs Pattern Classifier-based Face Recognition System Using 2-Directional 2-Dimensional PCA Algorithm ((2D)2PCA 알고리즘을 이용한 pRBFNNs 패턴분류기 기반 얼굴인식 시스템 설계)

  • Oh, Sung-Kwun;Jin, Yong-Tak
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.1 / pp.195-201 / 2014
  • In this study, a face recognition system was designed based on a polynomial Radial Basis Function Neural Networks (pRBFNNs) pattern classifier using the 2-directional 2-dimensional principal component analysis algorithm. Conventional one-dimensional PCA reduces the dimension of an image expressed as the product of its rows and columns, whereas (2D)²PCA (2-Directional 2-Dimensional Principal Component Analysis) reduces the dimension along each row and column of the image; the proposed intelligent pattern classifier then evaluates performance using the reduced images. The proposed pRBFNNs consist of three functional modules: the condition part, the conclusion part, and the inference part. In the condition part of the fuzzy rules, the input space is partitioned with the aid of fuzzy c-means clustering. In the conclusion part of the rules, the connection weights of the RBFNNs are represented as linear polynomials. The essential design parameters of the networks (including the number of inputs and the fuzzification coefficient) are optimized by means of Differential Evolution. The recognition rate is obtained and evaluated using the Yale and AT&T datasets widely used in face recognition, and the IC&CI Lab dataset is additionally used for performance evaluation.
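
Since (2D)²PCA itself is well defined, a minimal NumPy sketch of the two-directional reduction is given below; the array shapes and numbers of retained components are illustrative, and the pRBFNNs classifier stage is omitted.

```python
# (2D)^2 PCA sketch: reduce each image along both rows and columns.
import numpy as np

def two_directional_2dpca(images, p=8, q=8):
    """images: (N, m, n) array; returns column basis Z and row basis X."""
    centered = images - images.mean(axis=0)
    # Column-direction (m x m) and row-direction (n x n) scatter matrices.
    H = np.einsum("kij,klj->il", centered, centered) / len(images)
    G = np.einsum("kji,kjl->il", centered, centered) / len(images)
    # Eigenvectors of the symmetric scatters, largest eigenvalues first.
    Z = np.linalg.eigh(H)[1][:, ::-1][:, :p]    # (m, p)
    X = np.linalg.eigh(G)[1][:, ::-1][:, :q]    # (n, q)
    return Z, X

rng = np.random.default_rng(0)
faces = rng.normal(size=(50, 64, 64))           # stand-in for face images
Z, X = two_directional_2dpca(faces)
features = Z.T @ faces[0] @ X                   # reduced 8 x 8 feature matrix
print(features.shape)
```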

Design of Digit Recognition System Realized with the Aid of Fuzzy RBFNNs and Incremental-PCA (퍼지 RBFNNs와 증분형 주성분 분석법으로 실현된 숫자 인식 시스템의 설계)

  • Kim, Bong-Youn;Oh, Sung-Kwun;Kim, Jin-Yul
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.1 / pp.56-63 / 2016
  • In this study, we introduce the design of a fuzzy RBFNNs-based digit recognition system using incremental PCA to recognize handwritten digits. Principal Component Analysis (PCA) is a widely adopted dimensionality reduction algorithm, but it incurs high computational overhead for feature extraction when high-dimensional images or large amounts of training data are used. To alleviate this problem, incremental PCA is used in the feature extraction stage for computationally efficient processing as well as incremental learning of high-dimensional data. The architecture of the fuzzy Radial Basis Function Neural Networks (RBFNN) consists of three functional modules: the condition, conclusion, and inference parts. In the condition part, the input space is partitioned by fuzzy clustering realized by means of the Fuzzy C-Means (FCM) algorithm, which is used instead of a Gaussian function to reflect the characteristics of the input data. In the conclusion part, the connection weights take diverse extended polynomial forms: constant, linear, quadratic, and modified quadratic. Experimental results on the benchmark MNIST handwritten digit database demonstrate the effectiveness and efficiency of the proposed digit recognition system compared with other studies.
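
The incremental feature-extraction stage can be sketched with scikit-learn's `IncrementalPCA`, shown below; the batch size, component count, and random stand-in data are illustrative, and the paper's own incremental update may differ in detail.

```python
# Incremental PCA sketch: fit principal components batch by batch so the
# full high-dimensional data set never has to be held in memory at once.
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(0)
digits = rng.normal(size=(10_000, 784))         # stand-in for digit images

ipca = IncrementalPCA(n_components=50, batch_size=500)
for start in range(0, len(digits), 500):
    ipca.partial_fit(digits[start:start + 500]) # update components per batch

features = ipca.transform(digits[:10])          # inputs for the fuzzy RBFNN
print(features.shape)
```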

An Object Detection and Tracking System using Fuzzy C-means and CONDENSATION (Fuzzy C-means와 CONDENSATION을 이용한 객체 검출 및 추적 시스템)

  • Kim, Jong-Ho;Kim, Sang-Kyoon;Hang, Goo-Seun;Ahn, Sang-Ho;Kang, Byoung-Doo
    • Journal of Korea Society of Industrial Information Systems / v.16 no.4 / pp.87-98 / 2011
  • Detecting a moving object in video and tracking it are basic and necessary preprocessing steps in many video systems such as object recognition, context awareness, and intelligent visual surveillance. In this paper, we propose a method that detects a moving object quickly and accurately under real-time changes of background and illumination, and that remains robust when the target object is occluded by other objects. For effective detection, an eigenspace method and FCM are combined, and the CONDENSATION algorithm is used to track the detected object robustly. First, training data collected from background images are linearly transformed using Principal Component Analysis (PCA). Second, an eigen-background is organized from selected principal components that discriminate well between object and background. Next, an object is detected with FCM applied to the convolution of the eigenvectors from the previous steps with the input image. Finally, the object is tracked by using the coordinates of the detected object as the input of the CONDENSATION algorithm. Images containing various moving objects at the same time were collected and used as training data so that the system adapts to changes of illumination and background under a fixed camera. Test results show that the proposed method detects an object robustly under changes of illumination and background and under partial movement of the object.
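
A single CONDENSATION-style update can be sketched as the predict-weight-resample loop below; the random-walk motion model, noise levels, and Gaussian likelihood are illustrative assumptions, with the measurement standing in for the FCM-based detection result.

```python
# Particle-filter (CONDENSATION) tracking step: predict, weight, resample.
import numpy as np

rng = np.random.default_rng(0)
particles = rng.normal(loc=[100.0, 80.0], scale=10.0, size=(500, 2))

def condensation_step(particles, measurement, motion_std=5.0, meas_std=8.0):
    n = len(particles)
    # 1. Predict: propagate particles through a random-walk motion model.
    predicted = particles + rng.normal(scale=motion_std, size=particles.shape)
    # 2. Weight: Gaussian likelihood of each particle given the detection.
    d2 = ((predicted - measurement) ** 2).sum(axis=1)
    weights = np.exp(-d2 / (2 * meas_std ** 2))
    weights /= weights.sum()
    # 3. Resample: draw particles in proportion to their weights.
    return predicted[rng.choice(n, size=n, p=weights)]

detected_xy = np.array([104.0, 83.0])           # e.g. centroid from FCM step
particles = condensation_step(particles, detected_xy)
print(particles.mean(axis=0))                   # tracked position estimate
```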

Estimation for Ground Air Temperature Using GEO-KOMPSAT-2A and Deep Neural Network (심층신경망과 천리안위성 2A호를 활용한 지상기온 추정에 관한 연구)

  • Taeyoon Eom;Kwangnyun Kim;Yonghan Jo;Keunyong Song;Yunjeong Lee;Yun Gon Lee
    • Korean Journal of Remote Sensing / v.39 no.2 / pp.207-221 / 2023
  • This study suggests deep neural network models for estimating air temperature from the Level 1B (L1B) datasets of GEO-KOMPSAT-2A (GK-2A). The temperature at 1.5 m above the ground impacts not only daily life but also weather warnings such as cold and heat waves. Many studies estimate air temperature from the land surface temperature (LST) retrieved from satellites, because air temperature is strongly related to LST. However, the LST algorithm, a Level 2 product of GK-2A, works only for clear-sky pixels. To overcome cloud effects, we apply a deep neural network (DNN) model that estimates air temperature from L1B data, which are radiometrically and geometrically calibrated from the raw satellite data, and compare it with a linear regression model between LST and air temperature. The root mean square error (RMSE) of the estimated air temperature is used to evaluate the models. The in-situ air temperature data from 95 stations numbered 2,496,634, and the ratios of data paired with LST and with L1B were 42.1% and 98.4%, respectively. Data from 2020 and 2021 were used for training and data from 2022 for validation. The DNN model is designed with an input layer taking 16 channels and four hidden fully connected layers. Using the 16 bands of L1B, the DNN (RMSE 2.22℃) showed better performance than the baseline model (RMSE 3.55℃) under clear-sky conditions, and the total RMSE including overcast samples was 3.33℃, suggesting that the DNN is able to overcome cloud effects. However, it showed different characteristics in the seasonal and hourly analyses; because summer and winter showed low coefficients of determination with high standard deviations, solar information needs to be appended as input to build a general DNN model.
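
The stated architecture (16 input channels, four hidden fully connected layers, one regression output) can be sketched in Keras as below; the layer widths, activations, optimizer, and random stand-in data are assumptions, not the paper's exact configuration.

```python
# DNN sketch: regress 1.5 m air temperature from 16 GK-2A L1B channels.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(16,)),                # 16 L1B channels
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                   # air temperature (deg C)
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])

# Random stand-ins; real inputs are calibrated radiances paired with
# in-situ station temperatures.
x = np.random.rand(1024, 16).astype("float32")
y = np.random.rand(1024, 1).astype("float32") * 30.0
model.fit(x, y, epochs=2, batch_size=64, verbose=0)
```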

Automated Analyses of Ground-Penetrating Radar Images to Determine Spatial Distribution of Buried Cultural Heritage (매장 문화재 공간 분포 결정을 위한 지하투과레이더 영상 분석 자동화 기법 탐색)

  • Kwon, Moonhee;Kim, Seung-Sep
    • Economic and Environmental Geology / v.55 no.5 / pp.551-561 / 2022
  • Geophysical exploration methods are very useful for generating high-resolution images of underground structures, and such methods can be applied to the investigation of buried cultural properties and the determination of their exact locations. In this study, image feature extraction and image segmentation methods were applied to automatically distinguish the structures of buried relics in high-resolution ground-penetrating radar (GPR) images obtained at the center of the Silla Kingdom, Gyeongju, South Korea. The major purpose of the feature extraction analyses is to identify the circular features from building remains and the linear features from ancient roads and fences. Feature extraction is implemented by applying the Canny edge detection and Hough transform algorithms: we applied the Hough transform to the edge image resulting from the Canny algorithm in order to determine the locations of the target features. However, the Hough transform requires different parameter settings for each survey sector. As for image segmentation, we applied the connected-component labeling algorithm and object-based image analysis using the Orfeo Toolbox (OTB) in QGIS. The connected-component labeled image shows that the signals associated with the target buried relics are effectively connected and labeled, although multiple labels are often assigned to a single structure in the given GPR data. Object-based image analysis was conducted using Large-Scale Mean-Shift (LSMS) image segmentation. In this analysis, a vector layer containing pixel values for each segmented polygon was estimated first and then used to build a training-validation dataset by assigning the polygons either to a class associated with the buried relics or to a class for the background field. With a Random Forest classifier, we find that the polygons of the LSMS segmentation layer can be successfully classified into polygons of buried relics and polygons of background. We therefore propose that these automatic classification methods applied to GPR images of buried cultural heritage can be useful for obtaining consistent analysis results when planning excavation processes.
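
The Canny-plus-Hough stage of the pipeline can be sketched with OpenCV as below; the synthetic test image and all thresholds are illustrative, and, as the abstract notes, the Hough parameters must be re-tuned per survey sector.

```python
# Feature-extraction sketch: Canny edges, then Hough transforms for linear
# (roads, fences) and circular (building remains) features.
import cv2
import numpy as np

# Synthetic stand-in for a GPR depth-slice image.
gpr = np.zeros((256, 256), dtype=np.uint8)
cv2.circle(gpr, (128, 128), 40, 255, 2)         # circular anomaly
cv2.line(gpr, (20, 30), (230, 60), 255, 2)      # linear anomaly

edges = cv2.Canny(gpr, threshold1=50, threshold2=150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=40, maxLineGap=10)
circles = cv2.HoughCircles(gpr, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
                           param1=150, param2=30, minRadius=10, maxRadius=60)

print(0 if lines is None else len(lines),
      0 if circles is None else circles.shape[1])
```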

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia Pacific Journal of Information Systems / v.19 no.2 / pp.139-155 / 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Venture companies have tended to give high returns to investors, generally by making the best use of information technology, and for this reason many venture companies are keen on attracting avid investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard & Poor's, Moody's, and Fitch is a crucial source on such pivotal concerns as a company's stability, growth, and risk status. But this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses, presenting our recent empirical results using financial data of Korean venture companies listed on KOSDAQ in the Korea Exchange. In addition, this paper uses a multi-class SVM for the prediction of the DEA-based efficiency rating for venture businesses derived from our proposed method. Our approach sheds light on ways to locate efficient companies generating high levels of profit. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. This paper is therefore constructed on the basis of two ideas for classifying which companies are the more efficient venture companies: i) making a DEA-based multi-class rating for the sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies. First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision-making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption on the shape or parameters of the underlying production function. DEA has already been widely applied for evaluating the relative efficiency of DMUs; a number of recent DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and it has also been applied to corporate credit ratings. In this study we utilized DEA to sort venture companies by efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems, and we employed it to classify the efficiency ratings of IT venture companies according to the results of DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is based on statistical learning theory and has shown good performance, especially in its capacity to generalize in classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum-margin hyperplane, the hyperplane with maximum separation between classes; the support vectors are the points closest to this hyperplane. When the classes cannot be separated linearly, a kernel function can be used: in the case of nonlinear class boundaries, the inputs are mapped from the original input space into a high-dimensional dot-product feature space. Many studies have applied SVM to the prediction of bankruptcy, the forecasting of financial time series, and the estimation of credit ratings. In this study we employed SVM to develop a data mining-based efficiency prediction model, using the Gaussian radial basis function as the kernel. For the multi-class SVM, we adopted the one-against-one binary classification approach and two all-together methods, proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. We used corporate information on 154 companies listed on the KOSDAQ market in the Korea Exchange, with financial information for 2005 obtained from KIS (Korea Information Service, Inc.). Using these data, we made a multi-class rating with DEA efficiency and built a data mining-based multi-class prediction model. Among the three multi-classification approaches, the hit ratio of the Weston and Watkins method is the best on the test data set. In multi-class classification problems such as efficiency ratings of venture businesses, it is very useful for investors to know the class to within a one-class error when it is difficult to determine the exact class in the actual market; we therefore also present accuracy within one-class errors, where the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification problem, and we believe this model can help investors in decision making as it provides a reliable tool to evaluate venture companies in the financial domain. Future research should enhance such areas as the variable selection process, the parameter selection of the kernel function, generalization, and the sample size for multi-class problems.
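
The classification stage can be sketched with scikit-learn as below; `SVC`'s built-in one-vs-one scheme stands in for the one-against-one method, the all-together formulations are omitted, and the features and DEA classes are random stand-ins for the KOSDAQ financial data.

```python
# Multi-class SVM sketch: RBF-kernel SVC predicting DEA efficiency classes,
# scored both exactly and to within a one-class error.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(154, 8))                   # financial ratios (stand-in)
y = rng.integers(0, 4, size=154)                # DEA efficiency class 0..3

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", gamma="scale", C=1.0)   # Gaussian radial basis kernel
clf.fit(X_tr, y_tr)

pred = clf.predict(X_te)
within_one = (np.abs(pred - y_te) <= 1).mean()  # hit ratio within one class
print(clf.score(X_te, y_te), within_one)
```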

Analysis of 3D Accuracy According to Determination of Calibration Initial Value in Close-Range Digital Photogrammetry Using VLBI Antenna and Mobile Phone Camera (VLBI 안테나와 모바일폰 카메라를 활용한 근접수치사진측량의 캘리브레이션 초기값 결정에 따른 3차원 정확도 분석)

  • Kim, Hyuk Gi;Yun, Hong Sik;Cho, Jae Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.33 no.1 / pp.31-43 / 2015
  • This study conducted camera calibration for the VLBI antenna at the Space Geodetic Observation Center in Sejong City using a low-cost digital camera embedded in a mobile phone, in order to determine three-dimensional position coordinates of the VLBI antenna from stereo images. The initial values for the camera calibration were obtained using the Direct Linear Transformation algorithm and the commercial digital photogrammetry system PhotoModeler Scanner® ver. 6.0, respectively. The accuracy of the calibration results was compared with that of a bundle adjustment with nonlinear collinearity condition equations. Although the two methods showed significant differences in the initial values, the final calibration produced consistent results whichever method was used to obtain the initial values. Furthermore, the three-dimensional coordinates of feature points of the VLBI antenna were calculated using the camera calibrations from both methods and compared with reference coordinates obtained from a total station. Both methods yielded the same standard deviations of X = 0.004 ± 0.010 m, Y = 0.001 ± 0.015 m, and Z = 0.009 ± 0.017 m, showing centimeter-level accuracy. From this result, we conclude that a mobile phone camera opens the way for a variety of image processing studies, such as 3D reconstruction from captured images.
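
The DLT step used for the initial values is linear and can be sketched directly, as below; the control points and pixel coordinates are random stand-ins, and at least six well-distributed points are needed to solve for the 11 parameters.

```python
# Direct Linear Transformation sketch: solve the 11 DLT parameters from
# 3-D control points and their image coordinates by linear least squares.
import numpy as np

def dlt_calibrate(xyz, uv):
    """xyz: (N, 3) object points, uv: (N, 2) image points, N >= 6."""
    rows, rhs = [], []
    for (X, Y, Z), (u, v) in zip(xyz, uv):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        rhs.extend([u, v])
    L, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return L                                     # 11 DLT parameters

rng = np.random.default_rng(0)
xyz = rng.uniform(0, 10, size=(8, 3))            # control points (stand-in)
uv = rng.uniform(0, 2000, size=(8, 2))           # pixel coordinates (stand-in)
print(dlt_calibrate(xyz, uv).shape)              # -> (11,), the initial values
```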

Gauss-Newton Based Emitter Location Method Using Successive TDOA and FDOA Measurements (연속 측정된 TDOA와 FDOA를 이용한 Gauss-Newton 기법 기반의 신호원 위치추정 방법)

  • Kim, Yong-Hee;Kim, Dong-Gyu;Han, Jin-Woo;Song, Kyu-Ha;Kim, Hyoung-Nam
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.7 / pp.76-84 / 2013
  • In passive emitter localization using instantaneous TDOA (time difference of arrival) and FDOA (frequency difference of arrival) measurements, the estimation accuracy can be improved by collecting additional measurements, which normally requires increasing the number of sensors. However, in an electronic warfare environment, a large number of sensors causes a loss of military strength due to the high probability of intercept, and additional processes such as data links and clock synchronization between the sensors must be considered. Hence, in this paper, passive localization of a stationary emitter is performed using successive TDOA and FDOA measurements from two moving sensors. In this case, since an independent pair of sensors is added to the data set at every measurement instant, the pairs of sensors do not share a common reference sensor. Therefore, QCLS (quadratic correction least squares) methods, in which all pairs of sensors must include a common reference sensor, cannot be applied. For this reason, a Gauss-Newton algorithm is adopted to solve the nonlinear least squares problem. In addition, to show the performance of the proposed method, we compare the RMSE (root mean square error) of the estimates with the CRLB (Cramer-Rao lower bound) and derive CEP (circular error probable) planes to analyze the expected estimation performance over the 2-dimensional space.
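
A Gauss-Newton iteration on successive measurements can be sketched as below; for brevity only TDOA residuals are used (the paper also uses FDOA), and the sensor tracks, noise level, initial guess, damping, and numerical Jacobian are illustrative choices.

```python
# Gauss-Newton sketch: locate a stationary emitter from successive TDOA
# measurements taken by two moving sensors.
import numpy as np

C = 3e8                                          # propagation speed (m/s)
rng = np.random.default_rng(0)

t = np.arange(10.0)
s1 = np.c_[1000 * t, np.zeros(10)]               # sensor 1 track (m)
s2 = np.c_[1000 * t, 5000 * np.ones(10)]         # sensor 2 track (m)
emitter = np.array([30e3, 20e3])                 # true position (m)

def tdoa_model(x):
    return (np.linalg.norm(x - s1, axis=1)
            - np.linalg.norm(x - s2, axis=1)) / C

z = tdoa_model(emitter) + rng.normal(scale=1e-9, size=10)  # noisy TDOAs

x = np.array([20e3, 10e3])                       # initial guess
for _ in range(100):
    r = z - tdoa_model(x)                        # measurement residuals
    # Central-difference Jacobian of the TDOA model (10 x 2).
    J = np.stack([(tdoa_model(x + dx) - tdoa_model(x - dx)) / 2e-3
                  for dx in (np.array([1e-3, 0]), np.array([0, 1e-3]))],
                 axis=1)
    # Damped Gauss-Newton update for robustness on this nonlinear problem.
    x = x + 0.5 * np.linalg.lstsq(J, r, rcond=None)[0]

print(x.round(1))                                # estimate near (30000, 20000)
```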