• Title/Summary/Keyword: Nonlinear Analysis


Seismic Performance Evaluation of Concrete-filled U-shaped Mega Composite Beams (콘크리트 채움 U형 메가 합성보의 내진성능 평가)

  • Lee, Cheol Ho;Ahn, Jae Kwon;Kim, Dae Kyung;Park, Ji-Hun;Lee, Seung Hwan
    • Journal of Korean Society of Steel Construction
    • /
    • v.29 no.2
    • /
    • pp.111-122
    • /
    • 2017
  • In this paper, the applicability of a 1900 mm-deep concrete-filled U-shaped composite beam to composite ordinary moment frames (C-OMFs) was investigated based on existing test results from smaller specimens and supplemental numerical studies, since full-scale seismic testing of such a large beam is practically impossible. The key issue was web local buckling of the concrete-filled U section under negative bending. From 13 compiled test results, the relationship between web slenderness and story drift capacity was obtained. Based on this relationship, a 1900 mm-deep mega beam fabricated from 25 mm-thick plate was expected to experience web local buckling at 2% story drift and eventually reach a story drift over 3%, thus far exceeding the requirements for C-OMFs. The limiting width-to-thickness ratio of the 2010 AISC Specification was shown to be conservative for the U-section webs of this study. Test-validated supplemental nonlinear finite element analysis was also conducted to further investigate the effects of the horizontal stiffeners (used to tie the two webs of a U section) on web local buckling and flexural strength. It is shown that the nominal plastic moment under negative bending can be developed without the horizontal stiffeners, although their presence can delay the onset of web local buckling and restrain its propagation. Considering all of this, it is concluded that the 1900 mm-deep concrete-filled U-shaped composite beam investigated can be conservatively applied to C-OMFs. Finally, some recommendations for the arrangement and design of the horizontal stiffeners are provided based on the numerical results.

Inexpensive Visual Motion Data Glove for Human-Computer Interface Via Hand Gesture Recognition (손 동작 인식을 통한 인간 - 컴퓨터 인터페이스용 저가형 비주얼 모션 데이터 글러브)

  • Han, Young-Mo
    • The KIPS Transactions:PartB
    • /
    • v.16B no.5
    • /
    • pp.341-346
    • /
    • 2009
  • The motion data glove is a representative human-computer interaction tool that inputs human hand gestures to computers by measuring their motions. It is essential equipment for new computer technologies including home automation, virtual reality, biometrics, and motion capture. To promote its popular usage, this paper attempts to develop an inexpensive visual-type motion data glove that can be used without any special equipment. The proposed approach has a special feature: it can be made at low cost because it does not use the expensive motion-sensing fibers of conventional approaches, which makes easy production and popular use possible. It adopts a visual method, obtained by improving conventional optical motion capture technology, instead of a mechanical method using motion-sensing fibers. Compared to conventional visual methods, the proposed method has the following advantages and original contributions. First, conventional visual methods use many cameras and much equipment to reconstruct 3D pose while eliminating occlusions, but the proposed method adopts a mono-vision approach that allows simple, low-cost equipment. Second, conventional mono-vision methods have difficulty reconstructing the 3D pose of occluded parts in images because they are weak against occlusions, but the proposed approach can reconstruct occluded parts by using originally designed thin-bar-shaped optic indicators. Third, many conventional methods use nonlinear numerical image analysis algorithms, so their initialization and computation times are inconvenient; the proposed method removes these inconveniences by using a closed-form image analysis algorithm obtained from an original formulation.
Fourth, many conventional closed-form algorithms use approximations in their formulation processes, so they suffer from low accuracy and confined applicability due to singularities; the proposed method avoids these disadvantages through an original formulation in which a closed-form algorithm is derived using exponential-form twist coordinates instead of approximations or local parameterizations such as Euler angles.
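The exponential-form twist coordinates mentioned above come from screw theory, where rotations are expressed through the matrix exponential rather than Euler angles. As a minimal illustrative sketch (not the paper's actual pose-reconstruction algorithm), the rotation part of the exponential map reduces to Rodrigues' formula, which is closed-form and free of the gimbal-lock singularities of Euler-angle parameterizations:

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix [w]_x such that [w]_x @ v == cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w, theta):
    """Rodrigues' formula: closed-form rotation by theta about unit axis w."""
    W = hat(w)
    return np.eye(3) + np.sin(theta) * W + (1.0 - np.cos(theta)) * (W @ W)

# 90-degree rotation about the z-axis maps the x unit vector onto y
R = exp_so3(np.array([0.0, 0.0, 1.0]), np.pi / 2)
```

The same idea extends to full rigid motions (twists), which is what makes a singularity-free closed-form derivation possible.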

Quantification of Brain Images Using Korean Standard Templates and Structural and Cytoarchitectonic Probabilistic Maps (한국인 뇌 표준판과 해부학적 및 세포구축학적 확률뇌지도를 이용한 뇌영상 정량화)

  • Lee, Jae-Sung;Lee, Dong-Soo;Kim, Yu-Kyeong;Kim, Jin-Su;Lee, Jong-Min;Koo, Bang-Bon;Kim, Jae-Jin;Kwon, Jun-Soo;Yoo, Tae-Woo;Chang, Ki-Hyun;Kim, Sun-I.;Kang, Hye-Jin;Kang, Eun-Joo
    • The Korean Journal of Nuclear Medicine
    • /
    • v.38 no.3
    • /
    • pp.241-252
    • /
    • 2004
  • Purpose: Population-based structural and functional maps of the brain provide effective tools for the analysis and interpretation of complex and individually variable brain data. Brain MRI and PET standard templates and statistical probabilistic maps based on image data of Korean normal volunteers have been developed, and probabilistic maps based on cytoarchitectonic data have been introduced. A quantification method using these data was developed for the objective assessment of regional intensity in brain images. Materials and Methods: Age-, gender- and ethnicity-specific anatomical and functional brain templates based on MR and PET images of Korean normal volunteers were developed. Korean structural probabilistic maps for 89 brain regions and cytoarchitectonic probabilistic maps for 13 Brodmann areas were transformed onto the standard templates. Brain FDG PET and SPGR MR images of normal volunteers were spatially normalized onto the template of each modality and gender. Regional uptake of radiotracers in PET and gray matter concentration in MR images were then quantified by averaging (or summing) regional intensities weighted by the probabilistic maps of brain regions. Regionally specific effects of aging on glucose metabolism in the cingulate cortex were also examined. Results: The quantification program produced results for a single spatially normalized image in about 20 seconds. Glucose metabolism change in the cingulate gyrus was regionally specific: the ratios of glucose metabolism in the rostral anterior cingulate vs. posterior cingulate and in the caudal anterior cingulate vs. posterior cingulate decreased significantly with age. 'Rostral anterior'/'posterior' decreased by 3.1% per decade of age (P < 10^-11, r = 0.81) and 'caudal anterior'/'posterior' by 1.7% (P < 10^-8, r = 0.72).
Conclusion: The ethnicity-specific standard templates, probabilistic maps, and quantification program developed in this study will be useful for the analysis of brain images of Korean people, since differences in hemisphere shape and sulcal pattern related to age, gender, race, and disease cannot be fully overcome by nonlinear spatial normalization techniques.
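The probability-weighted quantification step described in Materials and Methods amounts to a weighted average of voxel intensities, with the probabilistic map supplying the weights. The sketch below is illustrative only, with hypothetical names, not the authors' actual program:

```python
import numpy as np

def regional_uptake(image, prob_map):
    """Probability-weighted mean intensity of one brain region.

    image:    spatially normalized intensity volume (e.g. FDG PET)
    prob_map: voxelwise probability of membership in the region,
              values in [0, 1], same shape as image.
    """
    w = prob_map.astype(float)
    return float((image * w).sum() / w.sum())

# toy 2x2 "volume": the region occupies the left column with certainty
img = np.array([[10.0, 0.0], [20.0, 0.0]])
p   = np.array([[1.0, 0.0], [1.0, 0.0]])
uptake = regional_uptake(img, p)  # (10 + 20) / 2 = 15.0
```

Summing instead of averaging (as the abstract also mentions) would simply drop the division by `w.sum()`.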

Life Table Analysis of the Cabbage Aphid, Brevicoryne brassicae (Linnaeus) (Homoptera: Aphididae), on Tah Tsai Chinese Cabbages (다채를 기주로 양배추가루진딧물[Brevicoryne brassicae (Linnaeus)]의 생명표 분석)

  • Kim, So Hyung;Kim, Kwang-Ho;Hwang, Chang-Yeon;Lim, Ju-Rak;Kim, Kang-Hyeok;Jeon, Sung-Wook
    • Korean journal of applied entomology
    • /
    • v.53 no.4
    • /
    • pp.449-456
    • /
    • 2014
  • Life table analysis and temperature-dependent development experiments were conducted to understand the biological characteristics of the cabbage aphid, Brevicoryne brassicae (Linnaeus), on detached Tah Tsai Chinese cabbage (Brassica campestris var. narinosa) leaves at seven constant temperatures (15, 18, 21, 24, 27, 30 and 33 ± 1°C; 65 ± 5% RH; 16L:8D). Mortality was lowest at 24°C, at 18% and 0% in the 1st–2nd and 3rd–4th nymphal stages, respectively. The developmental period of the 1st–2nd nymphal stage was 8.4 days at 18°C and decreased with increasing temperature. The developmental period of the 3rd–4th nymphal stage was 6.7 days at 18°C. The lower threshold temperature calculated using a linear model was 7.8°C, and the effective accumulated temperature was 120.1 degree-days. Adult longevity was 14.9 days at 21°C, and total fecundity was 58.5 at 24°C. According to the life table, the net reproductive rate was 47.5 at 24°C, and the intrinsic rate of increase and the finite rate of increase were 0.36 and 1.43, respectively, at 27°C. The doubling time was 1.95 days at 27°C, and the mean generation time was 7.43 days at 30°C.
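The reported finite rate of increase and doubling time follow from the intrinsic rate of increase r by the standard life-table identities λ = e^r and DT = ln 2 / r; the small gap in doubling time is consistent with rounding of r. A quick check:

```python
import math

def finite_rate(r):
    """Finite rate of increase: lambda = e^r (per day)."""
    return math.exp(r)

def doubling_time(r):
    """Population doubling time: DT = ln(2) / r (days)."""
    return math.log(2.0) / r

r = 0.36                 # intrinsic rate of increase at 27 C, from the abstract
lam = finite_rate(r)     # ~1.43, matching the reported finite rate
dt = doubling_time(r)    # ~1.93; the reported 1.95 d reflects rounding of r
```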

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.139-155
    • /
    • 2009
  • For the last few decades, many studies have tried to explore venture companies' success factors and unique features in order to identify the sources of their competitive advantage over rivals. Venture companies tend to give high returns to investors, generally by making the best use of information technology, and for this reason many of them are keen to attract investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard & Poor's, Moody's, and Fitch is a crucial source on such pivotal concerns as a company's stability, growth, and risk status. But this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses, presenting recent empirical results using financial data of Korean venture companies listed on KOSDAQ in the Korea Exchange. In addition, this paper used a multi-class SVM to predict the DEA-based efficiency ratings of venture businesses derived from the proposed method. Our approach sheds light on ways to locate efficient companies generating high profits. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors. This paper is therefore built on two ideas for classifying which venture companies are more efficient: i) making a DEA-based multi-class rating for sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies.
First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption on the shape or parameters of the underlying production function. DEA has already been widely applied to evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and it has also been applied to corporate credit ratings. In this study we utilized DEA to sort venture companies into efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is grounded in statistical learning theory and has shown good generalization performance in classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum-margin hyperplane, the hyperplane with maximum separation between classes; the support vectors are the training points closest to it. When the classes cannot be separated linearly, a kernel function can be used: for nonlinear class boundaries, the inputs are transformed into a high-dimensional feature space, i.e. the original input space is mapped into a high-dimensional dot-product space. Many studies have applied SVM to bankruptcy prediction, financial time series forecasting, and credit rating estimation. In this study we employed SVM to develop a data mining-based efficiency prediction model.
We used the Gaussian radial basis function as the kernel of the SVM. For multi-class SVM, we adopted the one-against-one binary classification approach and two all-together methods, proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information on 154 companies listed on the KOSDAQ market in the Korea Exchange, with financial information for 2005 obtained from KIS (Korea Information Service, Inc.). Using these data, we made a multi-class rating with DEA efficiency and built a data mining-based multi-class prediction model. Among the three multi-classification approaches, the hit ratio of the Weston and Watkins method was the best on the test data set. In multi-classification problems such as efficiency ratings of venture businesses, it is very useful for investors to know the class to within a one-class error when it is difficult to determine the exact class in the actual market. We therefore also report accuracy within one-class errors, where the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification problem, whatever the efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool to evaluate venture companies in the financial domain. For future research, we see the need to enhance the variable selection process, the kernel parameter selection, the generalization, and the sample size for multi-class problems.
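A minimal sketch of the core modeling step, multi-class classification of efficiency ratings with a Gaussian (RBF) kernel SVM: scikit-learn's `SVC` trains multi-class problems with the same one-against-one scheme the paper adopts. The toy data below stand in for DEA-derived ratings and are purely hypothetical, not the study's dataset:

```python
import numpy as np
from sklearn.svm import SVC

# toy 3-class data standing in for DEA efficiency ratings (e.g. A/B/C)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(30, 2)) for m in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 30)

# Gaussian (RBF) kernel; SVC fits multi-class problems one-against-one,
# and 'ovo' exposes the pairwise decision values as well
clf = SVC(kernel="rbf", C=1.0, gamma="scale", decision_function_shape="ovo")
clf.fit(X, y)
acc = clf.score(X, y)
```

The all-together formulations of Weston-Watkins and Crammer-Singer solve a single joint optimization instead of pairwise sub-problems and would need a dedicated implementation.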

Optimum Design of Two Hinged Steel Arches with I Sectional Type (SUMT법(法)에 의(依)한 2골절(滑節) I형(形) 강재(鋼材) 아치의 최적설계(最適設計))

  • Jung, Young Chae
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.12 no.3
    • /
    • pp.65-79
    • /
    • 1992
  • This study concerns the optimal design of two-hinged steel arches with an I-shaped cross section and aims at the exact analysis of the arches and the safe, economical design of the structure. The arch analysis introduces the finite difference method, considering the displacements of the structure during the analysis, to eliminate analysis error and determine the sectional forces. The arch optimization problems are formulated with objective functions and constraints that take the sectional dimensions (B, D, t_f, t_w) as design variables. The objective function is the total weight of the arch, and the constraints are derived from criteria on working stress and on the minimum dimensions of flange and web based on the steel bridge part of the Korean standard code for road bridges; they also include the economic depth of the I section, an upper limit on the web depth, and a lower limit on the flange breadth. The SUMT method, using a modified Newton-Raphson direction method, is introduced to solve the resulting nonlinear programming problems and is tested through numerical examples. The developed optimal arch design program is examined through numerical examples for various arches, and the results are compared and analyzed to examine the possibility of optimization and the applicability and convergence of this algorithm, including against the results of reference (30). Correlative equations between the optimal sectional areas and moments of inertia are derived from the various numerical optimal design results of this study.
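SUMT (Sequential Unconstrained Minimization Technique) solves a constrained problem as a sequence of unconstrained minimizations with a growing penalty parameter; each inner problem below is minimized with damped Newton steps using numerical derivatives, loosely echoing the Newton-Raphson direction method mentioned above. This is a one-variable exterior-penalty sketch for illustration only, not the paper's formulation:

```python
def sumt_minimize(f, g, x0, r0=1.0, growth=10.0, iters=6, h=1e-5):
    """Exterior-penalty SUMT sketch: minimize f(x) s.t. g(x) <= 0 (scalar x).

    Each outer pass minimizes f(x) + r * max(0, g(x))**2 with a few Newton
    steps on numerical derivatives, then increases the penalty weight r.
    """
    x, r = x0, r0
    for _ in range(iters):
        def phi(t):
            return f(t) + r * max(0.0, g(t)) ** 2
        for _ in range(50):  # inner unconstrained minimization
            d1 = (phi(x + h) - phi(x - h)) / (2 * h)
            if abs(d1) < 1e-9:
                break
            d2 = (phi(x + h) - 2 * phi(x) + phi(x - h)) / (h * h)
            if d2 <= 1e-12:  # guard against a non-convex or flat region
                break
            x -= d1 / d2     # Newton step toward the penalized minimum
        r *= growth
    return x

# toy problem: minimize x^2 subject to x >= 1 (constrained optimum at x = 1)
x_opt = sumt_minimize(lambda x: x * x, lambda x: 1.0 - x, x0=0.0)
```

As r grows, the penalized minimizer approaches the constraint boundary from the infeasible side, which is the characteristic behavior of the exterior-penalty variant of SUMT.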


Analysis of Mosaic Image of Animation <Flat Life> (애니메이션 <플랫 라이프>의 모자이크 이미지 분석)

  • Lee, Ji-Hyun
    • Cartoon and Animation Studies
    • /
    • s.49
    • /
    • pp.465-491
    • /
    • 2017
  • This paper analyzes the short animation film <Flat Life>, which faithfully follows the external form of cartoons, and studies the subject hidden behind that form and its mode of narrative. In this process, the theme is analyzed using the analysis method of the mosaic image. Discussions of cartoon narrative usually dwell on its formal differences, but this animation uses cartoons to reach the linear narrative of ordinary narrative films. Following Janet Murray's account of the 'mosaic image', which approaches a theme through a mosaic of fragments, we can delimit the 'mosaic film' as its realization in film form. <Flat Life> conceptually uses the characteristics of the 'mosaic image' while at the same time utilizing the narrative features of the 'mosaic film'. Analyzing the film in two halves, the first half shows the characteristics of an open mosaic video platform, and the second half introduces the linear narrative method of film narrative. This paper divides the narrative method of the 'multi-plot film' into three types: the mosaic narrative film, the network narrative film, and the multi-draft film. On this basis, the ending of <Flat Life> can be analyzed as the narrative method of the 'network narrative film', which is composed of parallel or juxtaposed stories. In other words, if the early part of the animation follows the 'mosaic narrative' as an extension of the ensemble film, the latter part faithfully follows the 'network narrative'. Even in the way it speaks about its subject, this animated film uses the mode of the mosaic image. Considering the formal tendency of cartoons, the film derives its meaning of 'humor' or 'satire' in an open way. If the first half refers to the ambiguous routine of modern man, the latter half draws out a more profound theme, the reality of human selfishness in modern society. <Flat Life> is a film of broad social criticism designed for adults who can interpret its meaning.

Failure Behavior and Separation Criterion for Strengthened Concrete Members with Steel Plates (강판과 콘크리트 접착계면의 파괴거동 및 박리특성)

  • 오병환;조재열;차수원
    • Journal of the Korea Concrete Institute
    • /
    • v.14 no.1
    • /
    • pp.126-135
    • /
    • 2002
  • The plate bonding technique has been widely used in strengthening existing concrete structures, although it often suffers serious premature failures such as interface separation and rip-off. This premature failure problem has not been well explored, especially in view of the local failure mechanism around the interface at plate ends. The purpose of the present study is therefore to identify the local failure of strengthened plates and to derive a separation criterion at the plate interface. To this end, a comprehensive experimental program was set up: double-lap pull-out tests considering pure shear and half-beam tests considering combined flexure-shear were performed. The main experimental parameters were plate thickness, adhesive thickness, and plate end arrangement. Strains along the longitudinal direction of the steel plates were measured, and shear stresses were calculated from those measured strains. The effects of plate thickness, bonded length, and plate end treatment were also clarified from the test results. Nonlinear finite element analysis was performed and compared with the test results, with the interface properties modeled to represent the separation failure behavior of strengthened members; the cracking patterns as well as the maximum failure loads agree well with the test data. The relation between the maximum shear and normal stresses at the interface was derived to propose a separation failure criterion for strengthened members. The present study allows more realistic analysis and design of externally strengthened flexural members with steel plates.

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial ratio indices. Unlike most prior studies, which used the default event as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This solves the data imbalance problem caused by the scarcity of default events, which had been pointed out as a limitation of the existing methodology, and reflects the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risk of unlisted companies without stock price information can also be derived appropriately. The model can thus provide stable default risk assessment to unlisted companies, such as small and medium-sized companies and startups, whose default risk is difficult to determine with traditional credit rating models. Although corporate default risk prediction using machine learning has recently been studied actively, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for calculating default risk, given that an entity's default risk information is used very widely in the market and sensitivity to differences in default risk is high; strict standards are also required for the calculation methods.
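The Merton-model step described above maps firm value, debt, and volatility to a default probability through the distance to default. The sketch below assumes the asset value and asset volatility are already estimated (in practice they are iterated from equity market capitalization and stock volatility, which is omitted here); all numbers and names are hypothetical:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function (math.erf)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_pd(V, D, mu, sigma, T=1.0):
    """Merton-model default probability over horizon T.

    V: asset value, D: face value of debt (default point),
    mu: asset drift, sigma: asset volatility.
    Distance to default: DD = (ln(V/D) + (mu - sigma^2/2) T) / (sigma sqrt(T)),
    and the default probability is N(-DD).
    """
    dd = (math.log(V / D) + (mu - 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    return norm_cdf(-dd)

# toy firm: assets 150 vs debt 100, 25% asset volatility, one-year horizon
pd = merton_pd(V=150.0, D=100.0, mu=0.05, sigma=0.25)
```

Because the output is a continuous probability rather than a rare binary event, every firm contributes a graded label, which is how this framing sidesteps the class-imbalance problem.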
The credit rating method stipulated by the Financial Services Commission in the Financial Investment Business Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and of changes in future market conditions. This study reduced individual models' bias by utilizing stacking ensemble techniques that synthesize various machine learning models. This allows us to capture complex nonlinear relationships between default risk and various corporate information while keeping the short computation time that is an advantage of machine learning-based default risk prediction models. To produce the forecasts from each sub-model used as input to the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which performed best among the single models. Next, to check for statistically significant differences between the stacking ensemble model's forecasts and those of each individual model, pairs between the stacking ensemble model and each individual model were constructed. Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank sum test was used to check whether the two forecasts in each pair differed significantly. The analysis showed that the stacking ensemble model's forecasts differed significantly from those of the MLP and CNN models.
In addition, this study provides a methodology that allows existing credit rating agencies to apply machine learning-based default risk prediction, given that traditional credit rating models can also be included as sub-models in calculating the final default probability. The stacking ensemble technique proposed here can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope this research will be used to increase practical adoption by overcoming the limitations of existing machine learning-based models.
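The two-stage procedure described above (seven-fold out-of-fold forecasts from sub-models, then a meta-model trained on those forecasts) can be sketched as follows. The sub-models and data here are illustrative stand-ins, not the paper's Random Forest/MLP/CNN setup or its corporate dataset:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# toy binary-label data standing in for the default-risk training set
X, y = make_classification(n_samples=210, n_features=10, random_state=0)

# stage 1: out-of-fold forecasts from each sub-model (7 folds, as in the paper);
# each row's forecast comes from a model that never saw that row in training
bases = [RandomForestClassifier(n_estimators=50, random_state=0),
         LogisticRegression(max_iter=1000)]
Z = np.column_stack([
    cross_val_predict(m, X, y, cv=7, method="predict_proba")[:, 1]
    for m in bases])

# stage 2: the meta-model learns how to combine the sub-model forecasts
meta = LogisticRegression(max_iter=1000).fit(Z, y)
acc = meta.score(Z, y)
```

The out-of-fold construction is what prevents the meta-model from merely memorizing the sub-models' in-sample fit, which is the point of splitting the training data before producing the stage-1 forecasts.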

Investigation of O4 Air Mass Factor Sensitivity to Aerosol Peak Height Using UV-VIS Hyperspectral Synthetic Radiance in Various Measurement Conditions (UV-VIS 초분광 위성센서 모의복사휘도를 활용한 다양한 관측환경에서의 에어로솔 유효고도에 대한 O4 대기질량인자 민감도 조사)

  • Choi, Wonei;Lee, Hanlim;Choi, Chuluong;Lee, Yangwon;Noh, Youngmin
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.2_1
    • /
    • pp.155-165
    • /
    • 2020
  • In the present study, the sensitivity of the O4 Air Mass Factor (AMF) to Aerosol Peak Height (APH) was investigated using a radiative transfer model across various parameters: wavelength (340 nm and 477 nm), aerosol type (smoke, dust, sulfate), aerosol optical depth (AOD), surface reflectance, solar zenith angle, and viewing zenith angle. In general, the O4 AMF at 477 nm was more sensitive to APH than that at 340 nm and was retrieved stably with low spectral fitting error in the Differential Optical Absorption Spectroscopy (DOAS) analysis. Under high-AOD conditions, the sensitivity of the O4 AMF to APH tends to increase. The O4 AMF at 340 nm decreased with increasing solar zenith angle; this dependency is thought to be induced by a shortening of the light path where O4 absorption occurs, due to the shielding effect of Rayleigh and Mie scattering at solar zenith angles above 40°. At 477 nm, as the solar zenith angle increased, multiple Rayleigh and Mie scattering partly led to a nonlinear increase of the O4 AMF. Based on synthetic radiance, APHs were retrieved using the O4 AMF, and the effect of AOD uncertainty on APH retrieval error was investigated. Among the three aerosol types, APH retrieval for the sulfate type had the largest retrieval error due to AOD uncertainty, while for dust aerosol the influence of AOD uncertainty was negligible. This indicates that aerosol type affects APH retrieval error, since the absorption and scattering characteristics of each aerosol type differ.