• Title/Summary/Keyword: Non-Linear theory.


Numerical Analysis of Multi-dimensional Consolidation Based on Non-Linear Model (비선형 모델에 의한 다차원 압밀의 수치해석)

  • Jeong, Jin-Seop;Gang, Byeong-Seon;Nam, Gung-Mun
    • Geotechnical Engineering
    • /
    • v.1 no.1
    • /
    • pp.59-72
    • /
    • 1985
  • This paper deals with numerical analysis by the finite element method, introducing Biot's theory of consolidation and the modified Cambridge model proposed by the Roscoe school of Cambridge University as the constitutive equation, and using Christian-Boehmer's technique. In particular, the time interval and the division of elements are investigated in view of stability and economy. To check the validity of the authors' program, it was tested against the one-dimensional consolidation case, for which Terzaghi's exact solution is available, and against the results of Magnan's analysis of an existing embankment at Cubzac-les-Ponts in France. The main conclusions obtained are summarized as follows: 1. In the case of one-dimensional consolidation, the more finely the elements are divided near the surface of the foundation, the higher the accuracy of the numerical analysis. 2. For the time interval, dividing each log cycle of time into 20 steps is stable. 3. At elements with a long drainage distance, the Mandel-Cryer effect appears due to time lag. 4. The lateral displacement at the initial loading stage predicted by the authors' program, in which the load was assumed to be distributed in grid form rather than concentrated, agrees well with the observed values. 5. The pore water pressure predicted by the authors' program agrees better with the observed values than Magnan's results. 6. Optimum construction control by Matsuo-Kawamura's method is possible with the lateral displacement and settlement predicted by the program.

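The abstract above verifies the finite element program against Terzaghi's exact one-dimensional consolidation solution. As a brief illustration of that benchmark, the sketch below evaluates the classical series for the average degree of consolidation; the 100-term truncation and the sample time factors are arbitrary choices, not values from the paper.

```python
# Terzaghi's exact solution for the average degree of consolidation.
import numpy as np

def degree_of_consolidation(Tv, n_terms=100):
    """U(Tv) = 1 - sum_{m=0}^inf (2 / M^2) * exp(-M^2 * Tv), M = pi*(2m+1)/2,
    with Tv = cv * t / H_dr^2 and H_dr the drainage path length."""
    m = np.arange(n_terms)
    M = np.pi * (2 * m + 1) / 2.0
    Tv = np.atleast_1d(np.asarray(Tv, dtype=float))
    series = (2.0 / M**2)[None, :] * np.exp(-np.outer(Tv, M**2))
    return 1.0 - series.sum(axis=1)

# Tabulate U for a few time factors, as one would when checking a numerical
# consolidation analysis against the exact solution.
time_factors = [0.05, 0.1, 0.2, 0.5, 1.0]
for Tv, U in zip(time_factors, degree_of_consolidation(time_factors)):
    print(f"Tv = {Tv:4.2f}  ->  U = {U:.3f}")
```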

ADMM algorithms in statistics and machine learning (통계적 기계학습에서의 ADMM 알고리즘의 활용)

  • Choi, Hosik;Choi, Hyunjip;Park, Sangun
    • Journal of the Korean Data and Information Science Society
    • /
    • v.28 no.6
    • /
    • pp.1229-1244
    • /
    • 2017
  • In recent years, as demand for data-based analytical methodologies increases in various fields, optimization methods have been developed to handle them. In particular, the various constraints that arise in statistics and machine learning problems can be handled by convex optimization. The alternating direction method of multipliers (ADMM) can effectively deal with linear constraints and can also be used as a parallel optimization algorithm. ADMM solves a complex original problem approximately by splitting it into subproblems that are easier to optimize and then combining their solutions, and it is useful for optimizing non-smooth or composite objective functions. It is widely used in statistics and machine learning because algorithms can be constructed systematically based on duality theory and the proximal operator. In this paper, we examine applications of the ADMM algorithm in various fields related to statistics, focusing on two major points: (1) the splitting strategy for the objective function, and (2) the role of the proximal operator in explaining the Lagrangian method and its dual problem. In particular, we introduce methodologies that utilize regularization. Simulation results are presented to demonstrate the effectiveness of the lasso.
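To make the splitting and proximal-operator ideas above concrete, here is a minimal ADMM sketch for the lasso in plain NumPy. It is an illustrative implementation under generic assumptions (synthetic data, arbitrary lam and rho), not the authors' code.

```python
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, n_iter=200):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 via ADMM (scaled form)."""
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)          # splitting variable (enforces x = z)
    u = np.zeros(n)          # scaled dual variable
    AtA = A.T @ A + rho * np.eye(n)   # reused in every x-update
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))            # x-update (ridge-like)
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # proximal (soft-threshold) step
        u = u + x - z                                            # dual update
    return z

# Usage on synthetic sparse-regression data.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.1 * rng.standard_normal(100)
print(np.round(lasso_admm(A, b, lam=1.0), 2))
```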

Improvement of Microphone Array Performance in the Low Frequencies Using Modulation Technique (변조 기법을 이용한 마이크로폰 어레이의 저주파 대역 특성 개선)

  • Kim, Gi-Bak;Cho, Nam-Ik
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.4 s.304
    • /
    • pp.111-118
    • /
    • 2005
  • In this paper, we employ a modulation technique to improve the characteristics of the beamformer at low frequencies and thus improve the overall noise reduction performance. In a one-dimensional uniform linear microphone array, narrowband noise components can be suppressed by delay-and-sum beamforming. For a wideband noise signal, however, the delay-and-sum beamformer does not reduce the low frequency components well, because the inter-element spacing is usually set to avoid spatial aliasing at high frequencies. Hence the beamwidth is not uniform across frequencies and is usually wider at low frequencies. To obtain a beamwidth independent of frequency, subarray systems[1][2][3][4] and multi-beamforming[5] have been proposed. However, these algorithms need more space and more microphones, since they are based on the principle that the size of the array should be proportional to the wavelength of the input signal. In the proposed beamformer, we reduce the low frequency noise by using a modulation technique that requires no additional sensors or non-uniform spacing. More precisely, the array signals are split into subbands, and the low frequency components are shifted to high frequencies by modulation and reduced by delay-and-sum beamforming with a small microphone array. Experimental results show that the proposed technique provides better performance than the conventional ones, especially in the low frequency band.
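As a reference point for the conventional approach that the abstract contrasts with its modulation technique, the sketch below implements plain delay-and-sum beamforming for a uniform linear array. The geometry, sampling rate, and look direction are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np

fs = 16000.0               # sampling rate [Hz]
c = 343.0                  # speed of sound [m/s]
M = 8                      # number of microphones
d = 0.04                   # inter-element spacing [m]
theta = np.deg2rad(30.0)   # look direction measured from broadside

# Per-microphone arrival delays for a plane wave from direction theta.
delays = np.arange(M) * d * np.sin(theta) / c

def delay_and_sum(x, delays, fs):
    """Align each channel to the look direction with fractional delays applied
    in the frequency domain, then average the channels. x: (M, n_samples)."""
    n = x.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    X = np.fft.rfft(x, axis=1)
    # Channel m carries s(t - tau_m); multiplying by e^{+j 2 pi f tau_m}
    # advances it by tau_m and aligns it with the reference channel.
    phase = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
    return np.fft.irfft((X * phase).mean(axis=0), n=n)

# Usage: a 1 kHz tone arriving from the look direction is passed unattenuated.
t = np.arange(1024) / fs
x = np.stack([np.sin(2 * np.pi * 1000.0 * (t - tau)) for tau in delays])
y = delay_and_sum(x, delays, fs)
print(y.shape)
```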

Exploratory Study of Characterizing Scholarly Communication Patterns in Humanities for Facilitating Consilience in Cyberscholarship Environment: Based on Historians' Research Activities (사이버스칼러쉽 환경에서의 융복합 연구 촉진을 위한 인문학 분야 학술 커뮤니케이션 특성 파악에 관한 연구 - 역사학 분야를 중심으로 -)

  • Yu, So-Young
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.50 no.1
    • /
    • pp.331-351
    • /
    • 2016
  • Digitized data and literature in the scholarly community have given rise to the concepts of digital humanities and cyberscholarship, which denote new aspects of and approaches to scholarly activities based on digitized resources and new media. This study was performed to identify the changes in national research activities in the arts and humanities using a multi-modal approach. A combined methodology of in-depth interviews and content analysis of publishing and citing behavior in the literature was used. The research process is identified as a non-linear combination of three parts: developing a research idea, developing the idea into a manuscript, and submitting the manuscript for publication. Prominent implementations of cyberscholarship were found in the second part, in accessing and using research data and literature. A panel discussion on developing interdisciplinary research in the humanities derived the following needs: understanding the characteristics of scholarly communication with cyberscholarship factors in the humanities for interdisciplinarity, refining the cyberscholarship environment for data sharing, investing in and developing archivists and archives, and providing various platforms for accelerating scholarly communication.

Non-linear Relationship Between IP Proportion of Startup and Financing Performance: Moderating Role of Founder's Education Level (스타트업의 지식재산 비중과 자금조달의 비선형 관계: 창업자 지식수준의 조절효과)

  • Chung, Doohee
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.14 no.5
    • /
    • pp.1-11
    • /
    • 2019
  • Financing plays an important role in the survival and growth of startups. This study investigates key factors that improve startup financing performance. To this end, we analyze the relationship between the proportion of intellectual property and financing performance. In addition, this study examines the impact of the founder's education level on startup financing and the moderating effect of the founder's education level on the relationship between the intellectual property proportion and financing. Based on survey data from 331 startups, this study found that the proportion of intellectual property and financing performance have an inverted U-shaped nonlinear relationship. While the founder's education level has a positive impact on financing performance, it negatively moderates the relationship between the intellectual property proportion and financing performance. Through these findings, this study suggests that it is necessary to maintain an adequate proportion of intellectual property in order to maximize startup financing performance. A higher founder education level enhances startup financing; however, since the founder's education level weakens the effect of the intellectual property proportion on financing, startups need to adjust their proportion of intellectual property according to the founder's education level in order to improve financing. Based on signaling theory, this study proposes a new intellectual property strategy to enhance startup financing performance.
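As an illustration of the kind of inverted U-shaped relationship with a moderating variable described above, the sketch below fits a quadratic regression with an interaction term on simulated data. The variable names, coefficients, and data are assumptions for illustration only, not the study's survey data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 331
ip = rng.uniform(0, 1, n)                   # proportion of intellectual property (stand-in)
edu = rng.integers(0, 2, n).astype(float)   # founder education indicator (stand-in)
# Simulated outcome: inverted U in ip, positive edu effect, negative ip x edu interaction.
y = 1.0 + 3.0 * ip - 3.5 * ip**2 + 0.8 * edu - 1.0 * ip * edu \
    + 0.3 * rng.standard_normal(n)

# Design matrix: intercept, ip, ip^2, edu, ip*edu.
X = np.column_stack([np.ones(n), ip, ip**2, edu, ip * edu])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["const", "ip", "ip2", "edu", "ip_x_edu"], np.round(beta, 2))))
# A negative coefficient on ip^2 indicates the inverted U; a negative
# interaction term indicates that education weakens the IP effect.
```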

A Study of the Nonlinear Characteristics Improvement for an Electronic Scale using Multiple Regression Analysis (다항식 회귀분석을 이용한 전자저울의 비선형 특성 개선 연구)

  • Chae, Gyoo-Soo
    • Journal of Convergence for Information Technology
    • /
    • v.9 no.6
    • /
    • pp.1-6
    • /
    • 2019
  • In this study, a weight estimation model for an electronic scale with nonlinear characteristics is developed using polynomial regression analysis. The output voltage of the load cell was measured directly using reference masses, and a polynomial regression model was obtained using the matrix and curve-fitting functions of MS Office Excel. Weights were measured in 100 g increments using a load-cell electronic scale with a capacity of up to 5 kg, and the polynomial regression model was fitted to these data. The error was calculated for first-, second-, and third-order polynomial regressions. To analyze the suitability of each regression function, the coefficient of determination was computed to indicate the correlation between the estimated mass and the measured data. With the third-order polynomial model proposed here, a very accurate model was obtained, with a standard deviation of 10 g and a coefficient of determination of 1.0. The regression framework presented here can also be used in various statistical applications such as weather forecasting, new drug development, and economic indicator analysis based on logistic regression analysis, which is widely used in artificial intelligence.
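The following sketch reproduces the general procedure described above: polynomial regression of mass on load-cell voltage with a comparison of first- to third-order fits by coefficient of determination, using simulated voltage data rather than the paper's measurements.

```python
import numpy as np

# Simulated calibration data: masses in 100 g steps up to 5 kg and a
# load-cell voltage with a mild nonlinearity plus noise (illustrative only).
mass = np.arange(0, 5001, 100, dtype=float)            # reference masses [g]
rng = np.random.default_rng(2)
volt = 1e-3 * mass + 2e-8 * mass**2 + rng.normal(0, 0.01, mass.size)  # [V]

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

for order in (1, 2, 3):
    # Fit mass as a polynomial function of the measured voltage.
    coeffs = np.polyfit(volt, mass, deg=order)
    mass_hat = np.polyval(coeffs, volt)
    print(f"order {order}: R^2 = {r_squared(mass, mass_hat):.6f}, "
          f"std of error = {np.std(mass - mass_hat):.1f} g")
```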

Rainfall Forecasting Using Satellite Information and Integrated Flood Runoff and Inundation Analysis (I): Theory and Development of Model (위성정보에 의한 강우예측과 홍수유출 및 범람 연계 해석 (I): 이론 및 모형의 개발)

  • Choi, Hyuk Joon;Han, Kun Yeun;Kim, Gwangseob
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.26 no.6B
    • /
    • pp.597-603
    • /
    • 2006
  • The purpose of this study is to improve short-term rainfall forecast skill using a neural network model that can capture the non-linear relationship between satellite data and ground observations, and thereby minimize flood damage. To overcome the geographical limitations of the Korean peninsula and obtain a long forecast lead time of 3 to 6 hours, the developed rainfall forecast model takes satellite imagery and wide-area AWS data as input. The model is a multi-layer neural network consisting of one input layer, one hidden layer, and one output layer, and it is trained using a momentum back-propagation algorithm. Flooding was then estimated using the rainfall forecasts. We developed a dynamic flood inundation model coupled with a one-dimensional flood routing model, so the model can forecast flooding in protected lowlands caused by river levee failure. In the case of multiple levee breaches along the main stream and tributaries, the developed inundation model can simultaneously estimate the flood level in the river and the inundation level and area in the protected lowland.
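As a rough illustration of the forecasting network described above (one hidden layer, trained by back-propagation with momentum), here is a minimal NumPy sketch. The layer sizes, learning rate, and toy data are assumptions; the actual model would take satellite imagery features and AWS observations as inputs.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (200, 4))                 # stand-in predictor features
y = np.sin(X.sum(axis=1, keepdims=True))         # stand-in target (rainfall proxy)

n_in, n_hid, n_out = 4, 8, 1
W1 = rng.normal(0, 0.5, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.5, (n_hid, n_out)); b2 = np.zeros(n_out)
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)
lr, momentum = 0.05, 0.9

for epoch in range(500):
    # Forward pass: tanh hidden layer, linear output.
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    err = out - y
    # Backward pass (mean squared error gradients).
    g_out = 2 * err / len(X)
    gW2 = h.T @ g_out;   gb2 = g_out.sum(axis=0)
    g_h = (g_out @ W2.T) * (1 - h**2)
    gW1 = X.T @ g_h;     gb1 = g_h.sum(axis=0)
    # Momentum updates: velocity = momentum * velocity - lr * gradient.
    vW1 = momentum * vW1 - lr * gW1; W1 += vW1
    vb1 = momentum * vb1 - lr * gb1; b1 += vb1
    vW2 = momentum * vW2 - lr * gW2; W2 += vW2
    vb2 = momentum * vb2 - lr * gb2; b2 += vb2

print("final MSE:", float(np.mean(err**2)))
```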

Vehicle-Bridge Interaction Analysis of Railway Bridges by Using Conventional Trains (기존선 철도차량을 이용한 철도교의 상호작용해석)

  • Cho, Eun Sang;Kim, Hee Ju;Hwang, Won Sup
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.29 no.1A
    • /
    • pp.31-43
    • /
    • 2009
  • In this study, a numerical method is presented that can consider various train types and solve the equations of motion for vehicle-bridge interaction analysis by a non-iterative procedure through formulation of the coupled equations of motion. The coupled equations of motion for the vehicle-bridge interaction are solved by the Newmark β direct integration method; by composing the effective stiffness matrix and the effective force vector at each analysis step, they can be solved in the same manner as the equilibrium equations of a static analysis. The effective stiffness matrix is stored with the skyline method to increase computational efficiency, and Cholesky decomposition is applied to minimize the numerical errors that can arise from directly computing the inverse matrix. The equations of motion for the conventional trains are derived, and the numerical models of the trains are idealized as sets of linear springs and dashpots with 16 degrees of freedom. The bridge models are idealized with three-dimensional space frame elements based on Euler-Bernoulli beam theory. Vertical and lateral rail irregularities are generated from the power spectral density (PSD) functions of the Federal Railroad Administration (FRA). The results of the vehicle-bridge interaction analysis are verified against experimental results for railway plate girder bridges with span lengths of 12 m and 18 m; both the experimental and analytical data are low-pass filtered, with the cutoff frequency set to twice the first bending frequency of the bridge.
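To make the "effective stiffness matrix / effective force vector" formulation concrete, the sketch below applies Newmark β (average acceleration, β = 1/4, γ = 1/2) integration to a small linear two-degree-of-freedom system. The system matrices, damping, and loading are toy assumptions, not the bridge or vehicle models of the paper.

```python
import numpy as np

# Toy 2-DOF linear system: M a + C v + K u = f(t).
M = np.diag([2.0, 1.0])
K = np.array([[600.0, -200.0], [-200.0, 200.0]])
C = 0.02 * M + 0.002 * K                 # Rayleigh damping (assumed)
dt, n_steps = 0.005, 400
beta, gamma = 0.25, 0.5

def force(t):
    return np.array([0.0, 50.0 * np.sin(10.0 * t)])  # harmonic load on DOF 2

u = np.zeros(2); v = np.zeros(2)
a = np.linalg.solve(M, force(0.0) - C @ v - K @ u)

# Effective stiffness is constant for a linear system with a constant step.
K_eff = K + gamma / (beta * dt) * C + 1.0 / (beta * dt**2) * M

for i in range(1, n_steps + 1):
    t = i * dt
    # Effective force gathers contributions from the previous state.
    f_eff = (force(t)
             + M @ (u / (beta * dt**2) + v / (beta * dt) + (1/(2*beta) - 1) * a)
             + C @ (gamma / (beta * dt) * u + (gamma/beta - 1) * v
                    + dt * (gamma/(2*beta) - 1) * a))
    u_new = np.linalg.solve(K_eff, f_eff)            # solved like a static problem
    a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (1/(2*beta) - 1) * a
    v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
    u, v, a = u_new, v_new, a_new

print("displacement at final step:", np.round(u, 4))
```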

Dynamic forecasts of bankruptcy with Recurrent Neural Network model (RNN(Recurrent Neural Network)을 이용한 기업부도예측모형에서 회계정보의 동적 변화 연구)

  • Kwon, Hyukkun;Lee, Dongkyu;Shin, Minsoo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.139-153
    • /
    • 2017
  • Corporate bankruptcy can cause great losses not only to stakeholders but also to many related sectors of society. Through successive economic crises, bankruptcies have increased and bankruptcy prediction models have become more and more important; corporate bankruptcy has therefore been regarded as one of the major research topics in business management, and many studies are also under way in industry. Previous studies attempted to utilize various methodologies to improve bankruptcy prediction accuracy and to resolve the overfitting problem, such as multivariate discriminant analysis (MDA) and the generalized linear model (GLM), which are based on statistics. More recently, researchers have used machine learning methodologies such as the support vector machine (SVM) and artificial neural network (ANN), as well as fuzzy theory and genetic algorithms; as a result, many bankruptcy models have been developed and their performance has improved. In general, a company's financial and accounting information changes over time, as does the market situation, so it is difficult to predict bankruptcy using information from only a single point in time. Although traditional research suffers from ignoring this time effect, dynamic models have not been studied much. Ignoring the time effect yields biased results, so a static model may not be suitable for predicting bankruptcy, and a dynamic model has the potential to improve bankruptcy prediction. In this paper, we propose a recurrent neural network (RNN), a deep learning methodology that learns time series data and is known to perform well. Prior to the experiment, we selected non-financial firms listed on the KOSPI, KOSDAQ and KONEX markets from 2010 to 2016 for the estimation of the bankruptcy prediction model and the comparison of forecasting performance. To avoid predicting bankruptcy from financial information that already reflects the deterioration of a company's financial condition, the financial information was collected with a lag of two years, and the default period was defined as January to December of the given year. Bankruptcy was defined as delisting due to sluggish earnings, which we confirmed through KIND, a corporate stock information website. We then selected variables from previous papers. The first set consists of Z-score variables, which have become traditional variables in bankruptcy prediction; the second is a dynamic variable set. For the first variable set we selected 240 normal companies and 226 bankrupt companies, and for the second set 229 normal companies and 226 bankrupt companies. We created a model that reflects dynamic changes in time-series financial data, and by comparing the suggested model with existing bankruptcy prediction models, we found that the suggested model could help improve the accuracy of bankruptcy prediction. We used financial data from KIS Value (a financial database) and selected multivariate discriminant analysis (MDA), the generalized linear model known as logistic regression (GLM), support vector machine (SVM), and artificial neural network (ANN) models as benchmarks. The results of the experiment showed that the RNN performed better than the comparative models. The accuracy of the RNN was high for both variable sets, and its area under the curve (AUC) was also high. In the hit-ratio table, the RNN's rate of correctly predicting distressed companies as bankrupt was higher than that of the other comparative models. A limitation of this paper is that an overfitting problem occurs during RNN training, which we expect can be mitigated by selecting more training data and appropriate variables. From these results, this research is expected to contribute to the development of bankruptcy prediction by proposing a new dynamic model.
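As a schematic of the proposed dynamic approach, the sketch below trains a small recurrent network on sequences of yearly financial ratios to output a bankruptcy probability, using PyTorch. The architecture size, the toy labelling rule, and the simulated data are assumptions for illustration; they do not reproduce the paper's variables or results.

```python
import torch
import torch.nn as nn

class BankruptcyRNN(nn.Module):
    def __init__(self, n_features, hidden_size=16):
        super().__init__()
        self.rnn = nn.RNN(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)    # one logit: bankrupt vs. normal

    def forward(self, x):                        # x: (batch, n_years, n_features)
        _, h_last = self.rnn(x)                  # h_last: (1, batch, hidden)
        return self.head(h_last.squeeze(0)).squeeze(-1)

torch.manual_seed(0)
n_firms, n_years, n_features = 400, 3, 8         # e.g., 3 years of 8 ratios per firm
X = torch.randn(n_firms, n_years, n_features)
# Toy label rule: firms whose ratios trend downward are marked "bankrupt".
y = ((X[:, -1].mean(dim=1) - X[:, 0].mean(dim=1)) < 0).float()

model = BankruptcyRNN(n_features)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

acc = ((torch.sigmoid(model(X)) > 0.5).float() == y).float().mean()
print(f"training accuracy on the toy data: {acc:.2f}")
```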

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.139-155
    • /
    • 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Such venture companies tend to give high returns to investors, generally by making the best use of information technology, and for this reason many venture companies are keen on attracting investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard & Poor's, Moody's, and Fitch is a crucial source regarding such pivotal concerns as a company's stability, growth, and risk status. However, this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses, presenting recent empirical results using financial data of Korean venture companies listed on KOSDAQ of the Korea Exchange. In addition, this paper uses a multi-class SVM for the prediction of the DEA-based efficiency ratings for venture businesses derived from the proposed method. Our approach sheds light on ways to locate efficient companies generating high levels of profit. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. Therefore, this paper is constructed on the basis of the following two ideas for classifying which companies are more efficient venture companies: i) making DEA-based multi-class ratings for the sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies. First, data envelopment analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision making units (DMUs) using a linear programming-based model. It is non-parametric because it requires no assumption on the shape or parameters of the underlying production function. DEA has already been widely applied to evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and DEA has also been applied to corporate credit ratings. In this study we used DEA to sort venture companies into efficiency-based ratings. The support vector machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the DEA results. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is based on statistical learning theory. Thus far, the method has shown good performance, especially in its generalization capacity in classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum-margin hyperplane, that is, the hyperplane giving the maximum separation between classes; the support vectors are the points closest to this hyperplane. When linear classification is impossible, a kernel function can be used: in the case of nonlinear class boundaries, the original input space is mapped into a high-dimensional dot-product feature space. Many studies have applied SVM to bankruptcy prediction, financial time series forecasting, and credit rating estimation. In this study we employed SVM to develop a data mining-based efficiency prediction model, using the Gaussian radial basis function as the kernel. For the multi-class SVM, we adopted the one-against-one binary classification approach and two all-together methods, proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information on 154 companies listed on the KOSDAQ market of the Korea Exchange, and obtained their 2005 financial information from KIS (Korea Information Service, Inc.). Using these data, we constructed multi-class ratings from DEA efficiency scores and built a data mining-based multi-class prediction model. Among the three multi-classification approaches, the Weston and Watkins method achieved the best hit ratio on the test data set. In multi-class problems such as efficiency ratings of venture businesses, it is very useful for investors to know the class within a one-class error when it is difficult to identify the exact class in the actual market, so we also present accuracy within a one-class error, for which the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification problem, whatever the efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool to evaluate venture companies in the financial domain. For future research, we perceive the need to enhance such areas as the variable selection process, the kernel parameter selection, generalization, and the sample size for multi-class classification.
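To illustrate the multi-class SVM setup described above (RBF kernel, one-against-one scheme, accuracy within a one-class error), here is a minimal scikit-learn sketch on simulated data; the features and the four-level "efficiency rating" labels stand in for the DEA-based ratings and are not the study's data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n, p = 154, 6                                   # firms x financial features (stand-ins)
X = rng.standard_normal((n, p))
score = X[:, 0] + 0.5 * X[:, 1] + 0.2 * rng.standard_normal(n)
y = np.digitize(score, np.quantile(score, [0.25, 0.5, 0.75]))  # 4 ordered rating classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)

# scikit-learn's SVC trains one-against-one binary classifiers internally.
clf = SVC(kernel="rbf", C=1.0, gamma="scale", decision_function_shape="ovo")
clf.fit(scaler.transform(X_tr), y_tr)
pred = clf.predict(scaler.transform(X_te))

exact = np.mean(pred == y_te)
within_one = np.mean(np.abs(pred - y_te) <= 1)   # "within one-class error" accuracy
print(f"exact-class accuracy: {exact:.3f}, within-one-class accuracy: {within_one:.3f}")
```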