Evaluation of Soil Stiffness and Excavation Support Wall Deformation at Deep Excavation Site Using Inverse Analysis (역해석을 이용한 지반 강성 산정 및 굴착 지지벽체의 변형 평가)
Journal of the Korean GEO-environmental Society, v.21 no.12, pp.5-10, 2020
In this study, the evolution of soil engineering properties during excavation was analyzed through inverse analysis for the OO deep excavation site located in Incheon. The ground stiffness was updated by comparing the horizontal deformation of the excavation support wall calculated by finite element analysis at each excavation stage with the value measured using an inclinometer. The updated stiffness was then used to predict the response of the excavation support wall in the next excavation step. Finite element analysis with the Hardening Soil model was used, and the stratum in which the excavation support wall is embedded was selected as the stratum for the inverse analysis. The inverse analysis results showed that the stiffness at the initial stage of excavation is larger than the stiffness used in the original design. As the excavation proceeded, the stiffness calculated through the second inverse analysis decreased compared to the value derived by the first inverse analysis. Therefore, the deformation of the excavation support wall can be accurately calculated through finite element analysis when an appropriate stiffness value is input for each excavation stage.
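The update loop described above — run the finite element model, compare the computed wall deflection with the inclinometer reading, and adjust the stiffness until they agree — can be sketched as a one-parameter back-analysis. The deflection function below is a hypothetical stand-in for the full FE run (the paper uses the Hardening Soil model), and all names and numbers are illustrative:

```python
def predicted_deflection(stiffness_kpa, load_factor):
    # Stand-in for a full FE analysis: wall deflection scales
    # inversely with the soil stiffness (illustrative model only).
    return load_factor / stiffness_kpa

def back_analyze_stiffness(measured_mm, load_factor,
                           e_lo=1e3, e_hi=1e6, tol=1e-3):
    """Bisection on stiffness until the predicted deflection
    matches the inclinometer reading within tol."""
    for _ in range(200):
        mid = 0.5 * (e_lo + e_hi)
        if predicted_deflection(mid, load_factor) > measured_mm:
            e_lo = mid   # too soft -> too much deflection -> raise stiffness
        else:
            e_hi = mid
        if e_hi - e_lo < tol:
            break
    return 0.5 * (e_lo + e_hi)
```

The back-analyzed value would then be fed into the FE model for the next excavation stage, exactly as the staged procedure above describes.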
This paper develops an algorithm for improving the performance of the autopilot in an autonomous vessel system consisting of track-keeping control, automatic steering, and automatic mooring control. Automatic steering is a control function that reduces voyage distance and fuel cost by relieving the crew of continuous manual steering and by avoiding route deviation. During autonomous navigation, wind or tidal forces can push the ship off its fixed course, so the automatic steering computes the difference between the actual track and the set course to keep the ship sailing in the vicinity of the intended course. First, a transfer function for the ship model is obtained according to the Nomoto model. Considering maneuverability, we propose a linear model with only four degrees of freedom to represent the heading-angle response to the rudder-angle input. In this paper, the ship model is derived from the simplified Nomoto model. Since the proposed model accounts for the maximum rudder angle and rudder rate of the ship autopilot, and a fuzzy controller is designed on top of the existing PID controller, the performance of the steering machine is substantially improved.
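The simplified (first-order) Nomoto model with a heading controller, rudder-angle limit, and rudder-rate limit can be sketched as follows. The gains, time constant, and limits are illustrative assumptions, and a plain PD law stands in for the paper's PID-based fuzzy controller:

```python
def simulate_heading(psi_ref_deg, K=0.2, T=10.0, dt=0.1, steps=2000,
                     kp=1.0, kd=20.0, delta_max=35.0, rate_max=5.0):
    """First-order Nomoto model T*r' + r = K*delta with a PD heading law,
    rudder angle saturated at delta_max (deg) and slewed at rate_max (deg/s).
    Returns the final heading angle (deg)."""
    psi = r = delta = 0.0
    for _ in range(steps):
        cmd = kp * (psi_ref_deg - psi) - kd * r           # PD law on heading error
        cmd = max(-delta_max, min(delta_max, cmd))        # rudder angle limit
        step = max(-rate_max * dt, min(rate_max * dt, cmd - delta))
        delta += step                                     # rudder rate limit
        r += dt * (K * delta - r) / T                     # Nomoto yaw dynamics
        psi += dt * r
    return psi
```

With these illustrative gains the closed loop is overdamped, so a 10-degree course change settles onto the set course without overshoot.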
The objective of this study is to examine how to maximize the efficiency of hospital management by minimizing the unit cost of hospital operation. For this purpose, this paper develops a profit-maximization model based on the cost-minimization dictum, using maximum-likelihood statistical tools. The preliminary survey data are collected from the annual statistics and analyses published by the Korea Health Industry Development Institute and the Korean Hospital Association. Maximum-likelihood statistical analyses are conducted on the cost (function) information of each of 36 hospitals selected by stratified random sampling according to hospital size and location (urban or rural). We believe that, although the sample is relatively small, the sampling method used and the high response rate make the estimation power of the statistical analyses acceptable. The conceptual framework of the analysis is adopted from the various models of hospital-cost determinants used in previous studies. According to this framework, the study postulates that the unit cost of hospital operation is determined by size, scope of service, technology (production function) as measured by capacity utilization, the labor-capital ratio and labor input-mix variables, and by exogenous variables. The variables representing these cost determinants are selected by stepwise regression, so that only statistically significant variables are used in analyzing how they affect hospital unit cost. The results show that the adopted models of hospital-cost determinants are well chosen: the overall goodness-of-fit statistics (R2) of the various models all turned out to be significant, regardless of the variables used to represent the cost determinants.
Specifically, size and scope of service, however measured (number of admissions per bed, number of ambulatory visits per bed, adjusted inpatient days, and adjusted outpatient visits), have the overall effect of reducing hospital unit costs as measured by cost per admission, per inpatient day, or per office visit, implying economies of scale in hospital operation. The technology used in operating a hospital turned out to affect hospital unit cost in ways similar to those postulated in the static theory of the firm. For example, capacity utilization, as represented by inpatient days per employee, turned out to have a statistically significant negative impact on the unit cost of hospital operation, while payroll expenses per inpatient cost had a positive effect. The input mix of hospital operation, as represented by the ratio of doctors, nurses, or medical staff per general employee, supports the known thesis that specialized manpower costs more than general employees. The labor/capital ratio, as represented by employees per 100 beds, is shown to have the expected positive effect on cost. As for the impact of the exogenous variable, when it is represented by the percentage of urban population at the hospital's location, the regression analysis shows that hospitals located in urban areas have higher costs than those in rural areas. Finally, the case study of the sample hospitals offers hospital administrators specific information about how the costs they are incurring compare with those of other hospitals. For example, if a hospital is small and located in a city, its administrator can compare the various costs of its operation with those of other similar hospitals.
The administrator may thereby identify which cost determinants make the hospital's operating cost higher or lower than that of similar hospitals.
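The stepwise variable selection described above can be sketched as a greedy forward search on R². The gain threshold, variable names, and synthetic data below are illustrative, not the study's:

```python
import numpy as np

def forward_stepwise(X, y, names, min_gain=0.01):
    """Greedy forward selection: repeatedly add the regressor that most
    improves R^2; stop when the gain falls below min_gain."""
    def r2(cols):
        A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    chosen, best = [], 0.0
    while True:
        gains = [(r2(chosen + [j]), j) for j in range(X.shape[1]) if j not in chosen]
        if not gains:
            break
        score, j = max(gains)
        if score - best < min_gain:
            break
        chosen.append(j)
        best = score
    return [names[j] for j in chosen], best

# Demo on synthetic data: the "cost" depends only on size and utilization.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2]
selected, r2_final = forward_stepwise(X, y, ["size", "scope", "utilization", "input_mix"])
```

On this synthetic example the search keeps only the two truly relevant regressors, mirroring how the study retains only statistically significant cost determinants.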
The safety-related components in a nuclear power plant must be designed to withstand seismic loads. Among these components, the integrity of the reactor internals under earthquake load is important from the standpoints of both safety and economics, because they are classified as Seismic Class I components. The modelling methods for reactor internals have been investigated by many authors. In this paper, the dynamic behaviour of the reactor internals of the Yong Gwang 1&2 nuclear power plants under the SSE (Safe Shutdown Earthquake) load is analyzed using a simplified global beam model. As a first step, characteristic analyses of the reactor internal components are performed using the finite element code ANSYS. A global beam model of the reactor internals is then established, which includes beam elements, nonlinear impact springs with gaps at the upper and lower positions, and hydrodynamic couplings that simulate the fluid-filled cylinders of the reactor vessel and core barrel structures. For the external excitation, the response spectrum applied at the reactor support is converted to a time-history input. With this excitation and model, the dynamic behaviour of the reactor internals is obtained. As a result, the structural integrity of the reactor internal components under seismic excitation is verified, and the input for the detailed fuel-assembly series model is obtained. The simplicity and effectiveness of the global beam model, and the economy of the explicit Runge-Kutta-Gill algorithm for impact problems involving high-frequency interface components, are confirmed.
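The explicit Runge-Kutta-Gill scheme credited above is a fourth-order four-stage method whose coefficients (involving √2) were chosen to reduce storage and round-off relative to classical RK4. A minimal sketch, verified here on dy/dt = -y rather than the reactor model:

```python
import math

SQ2 = math.sqrt(2.0)

def rk_gill_step(f, t, y, h):
    """One step of the explicit fourth-order Runge-Kutta-Gill method."""
    k1 = h * f(t, y)
    k2 = h * f(t + h / 2, y + k1 / 2)
    k3 = h * f(t + h / 2, y + (SQ2 - 1) / 2 * k1 + (2 - SQ2) / 2 * k2)
    k4 = h * f(t + h, y - SQ2 / 2 * k2 + (1 + SQ2 / 2) * k3)
    return y + (k1 + (2 - SQ2) * k2 + (2 + SQ2) * k3 + k4) / 6

def integrate(f, t0, y0, h, n):
    """Advance y from t0 through n fixed steps of size h."""
    t, y = t0, y0
    for _ in range(n):
        y = rk_gill_step(f, t, y, h)
        t += h
    return y
```

Being explicit, the scheme needs small steps to resolve the high-frequency gap-impact events the paper mentions, but each step is cheap, which is the economy the authors point to.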
Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representation in a variety of applications. Sentiment analysis, an important technology for distinguishing low- from high-quality content through product text data, has proliferated within text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. It has been studied from various angles in terms of accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels; indeed, sentiment analysis is one of the most active research topics in natural language processing and is widely studied in text mining. Because real online reviews are openly available, such information is easy to collect, and it directly affects business. In marketing, real-world information from customers is gathered from websites rather than surveys: depending on whether a website's posts are positive or negative, the customer response is reflected in sales, and firms try to identify this information. However, many reviews on a website are not well written and are difficult to classify. Earlier studies in this area used review data from the Amazon.com shopping mall, while more recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, and so on. However, accuracy remains limited because sentiment scores change with the subject, paragraph, sentiment lexicon direction, and sentence strength. This study aims to classify the polarity of sentiment analysis into positive and negative categories and to increase the prediction accuracy of polarity analysis using the pretrained IMDB review data set.
First, for text classification related to sentiment analysis, popular machine learning algorithms such as NB (Naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting are adopted as comparative models. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data; representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). CNN can process a sentence in vector format similarly to BoW, but it does not consider the sequential nature of the data. RNN handles order well because it takes the temporal information of the data into account, but it suffers from the long-term dependency problem; LSTM is used to solve this problem. For comparison, CNN and LSTM were chosen as simple deep learning models, and in addition to the classical machine learning algorithms, CNN, LSTM, and the integrated models were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and tried to understand how well, and why, these models work for sentiment analysis. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features of text. The reasons for combining these two algorithms are as follows. CNN can automatically extract features for classification through its convolution layers and massively parallel processing; LSTM is not capable of such parallel processing. Like faucets, the LSTM's input, output, and forget gates can be opened and closed at the desired time, and these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all of the data, but it can resolve the long-term dependency problem.
Furthermore, when LSTM is used after CNN's pooling layer, the model has an end-to-end structure, so that spatial and temporal features can be learned simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than CNN but faster than LSTM, and the presented model was more accurate than the other models. In addition, the word embedding layer can be improved as the kernels are trained step by step. CNN-LSTM can compensate for the weaknesses of each model, with the added advantage of layer-wise learning through its end-to-end structure. For these reasons, this study seeks to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
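The CNN → pooling → LSTM pipeline described above can be sketched as a minimal numpy forward pass. The layer sizes, kernel width, and random weights below are illustrative and not the paper's trained configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, w):
    """1-D convolution over time with ReLU; x: (T, d), w: (k, d, f) -> (T-k+1, f)."""
    k = w.shape[0]
    out = np.stack([np.tensordot(x[t:t + k], w, axes=([0, 1], [0, 1]))
                    for t in range(x.shape[0] - k + 1)])
    return np.maximum(out, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling along the time axis."""
    T = (x.shape[0] // size) * size
    return x[:T].reshape(-1, size, x.shape[1]).max(axis=1)

def lstm_last(x, Wx, Wh, b):
    """Run a single-layer LSTM over x (T, d); return the last hidden state."""
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for t in range(x.shape[0]):
        z = x[t] @ Wx + h @ Wh + b        # all four gate pre-activations at once
        i, f, g, o = np.split(z, 4)
        c = sig(f) * c + sig(i) * np.tanh(g)
        h = sig(o) * np.tanh(c)
    return h

# Toy forward pass: 20 tokens of 8-dim embeddings -> conv(3) -> pool -> LSTM -> sigmoid
emb = rng.normal(size=(20, 8))
w_conv = 0.1 * rng.normal(size=(3, 8, 16))   # kernel width 3, 16 filters
Wx = 0.1 * rng.normal(size=(16, 32))         # LSTM input weights (hidden size 8)
Wh = 0.1 * rng.normal(size=(8, 32))
b = np.zeros(32)
w_out = rng.normal(size=8)
feat = lstm_last(max_pool(conv1d_relu(emb, w_conv)), Wx, Wh, b)
score = 1.0 / (1.0 + np.exp(-(feat @ w_out)))   # positive-class probability
```

The convolution extracts local n-gram features, pooling shortens the sequence, and the LSTM then models the remaining temporal structure, which is exactly the division of labor the paragraph above argues for.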
This paper proposes a dual-band band-pass filter that integrates a λg/2 open SIR structure, a transmission line, and a fork-type structure with symmetric and asymmetric open stubs. To obtain the dual-band effect, the proposed filter uses the SIR structure and adjusts its impedance ratio: shifting the position of the filter's harmonics through the impedance ratio yields the dual-band response. The dual-band characteristic is obtained by dividing the SIR structure in half and inserting an open stub between the SIR sections. In addition, the second frequency response is tuned by adjusting the length of the symmetric open stub in the fork-shaped structure, while the asymmetric open stub achieves the optimum bandwidth through its length adjustment. The first center frequency of the proposed band-pass filter is 5.896 GHz with a bandwidth of 13.6%; the measured results at this frequency are 0.13 dB and 33.6 dB. The second center frequency is 5.906 GHz with a bandwidth of 13.6%; the measured results are 0.15 dB and 19.8 dB. When the impedance ratio (Δ) is higher than 1, the harmonic position shifts to a lower frequency band; conversely, when the impedance ratio (Δ) is lower than 1, the harmonics move to a higher frequency band. The function of the filter designed using these characteristics is confirmed by the measurement results. The proposed band-pass filter has no coupling loss and no via energy-concentration loss, because it has no input/output coupling structure and no via holes. Its performance therefore permits system integration, and applications to the dedicated short-range communication (DSRC) systems used in traffic communication are expected.
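The stated relationship between the impedance ratio and the harmonic position follows the classical stepped-impedance-resonator result: for a half-wave SIR with equal-length sections, the fundamental resonates at θ0 = arctan √K and the first spurious mode at θ = π/2. A small sketch, assuming K = Z2/Z1 (the paper's Δ may use the inverse convention):

```python
import math

def spurious_ratio(K):
    """First spurious-to-fundamental frequency ratio f_s1/f_0 for a
    half-wave SIR with equal-length sections and impedance ratio K = Z2/Z1:
    theta_0 = arctan(sqrt(K)), theta_s1 = pi/2, so f_s1/f_0 = pi/(2*arctan(sqrt(K)))."""
    return math.pi / (2.0 * math.atan(math.sqrt(K)))
```

For K = 1 (a uniform line) the ratio is exactly 2; K > 1 pulls the first spurious response below 2f0 and K < 1 pushes it above, matching the trend the abstract describes.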
Deep learning has been getting attention recently. The deep learning technique applied in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) and in AlphaGo is the convolutional neural network (CNN). CNN is characterized by dividing the input image into small sections, recognizing partial features, and combining them to recognize the whole. Deep learning technologies are expected to bring many changes to our lives, but until now their applications have been largely limited to image recognition and natural language processing; the use of deep learning for business problems is still at an early research stage. If their performance is proven, they can be applied to traditional business problems such as marketing response prediction, fraudulent transaction detection, bankruptcy prediction, and so on. It is therefore a meaningful experiment to assess the feasibility of solving business problems with deep learning, based on the case of online shopping companies, which have big data, relatively easily identifiable customer behavior, and high utilization value. In particular, the competitive environment for online shopping companies is changing rapidly and becoming more intense, so the analysis of customer behavior for maximizing profit is becoming increasingly important for them. In this study, we propose a 'CNN model of heterogeneous information integration' using CNN as a way to improve the prediction of customer behavior in online shopping enterprises.
To optimize performance, the proposed model learns from both structured and unstructured information through a convolutional neural network combined with a multi-layer perceptron; we examine 'heterogeneous information integration', 'unstructured information vector conversion', and 'multi-layer perceptron design', evaluate the performance of each architecture, and confirm the proposed model based on the results. The target variables for predicting customer behavior are defined as six binary classification problems: re-purchaser, churner, frequent shopper, frequent refund shopper, high-amount shopper, and high-discount shopper. To verify the usefulness of the proposed model, we conducted experiments using actual transaction, customer, and VOC (voice of customer) data from a specific online shopping company in Korea. The data extraction criteria cover 47,947 customers who registered at least one VOC in January 2011 (one month); for these customers we use their customer profiles, a total of 19 months of transaction data from September 2010 to March 2012, and the VOCs posted during that month. The experiment is divided into two stages. In the first stage, we evaluate the three architectural choices that affect the performance of the proposed model and select optimal parameters; in the second, we evaluate the performance of the proposed model. Experimental results show that the proposed model, which combines structured and unstructured information, is superior to NBC (Naive Bayes classification), SVM (support vector machine), and ANN (artificial neural network). It is therefore significant that the use of unstructured information contributes to predicting customer behavior, and that CNN can be applied to business problems as well as to image recognition and natural language processing.
The experiments confirm that CNN is effective in understanding and interpreting the meaning of context in textual VOC data. It is also significant that this empirical study, based on actual e-commerce data, can extract very meaningful information for customer-behavior prediction from VOC data written in free text directly by customers. Finally, through the various experiments, the proposed model provides useful information for future research on parameter selection and model performance.
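The core idea above, combining structured customer features with a vectorized text representation before a multi-layer perceptron, can be sketched as a minimal forward pass. The feature sizes, layer widths, and mean-pooling text vectorization are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

def fuse_and_score(structured, token_embs, W1, b1, W2, b2):
    """Concatenate structured features with a mean-pooled text embedding,
    then apply a one-hidden-layer MLP with six sigmoid heads
    (one per binary customer-behavior target)."""
    x = np.concatenate([structured, token_embs.mean(axis=0)])
    h = np.maximum(x @ W1 + b1, 0.0)              # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # shape (6,)

structured = rng.normal(size=10)         # e.g. transaction-derived features
token_embs = rng.normal(size=(30, 16))   # 30 VOC tokens, 16-dim embeddings
W1, b1 = 0.1 * rng.normal(size=(26, 32)), np.zeros(32)
W2, b2 = 0.1 * rng.normal(size=(32, 6)), np.zeros(6)
probs = fuse_and_score(structured, token_embs, W1, b1, W2, b2)
```

Each of the six outputs corresponds to one of the binary targets (re-purchaser, churner, etc.); in the paper the text branch is a CNN rather than mean pooling, which is simplified here for brevity.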
The wall shear stress in the vicinity of end-to-end anastomoses under steady flow conditions was measured using a flush-mounted hot-film anemometer (FMHFA) probe. The experimental measurements were in good agreement with numerical results except in flows with low Reynolds numbers. The wall shear stress increased proximal to the anastomosis in flow from the Penrose tubing (simulating an artery) to the PTFE graft; in flow from the PTFE graft to the Penrose tubing, low wall shear stress was observed distal to the anastomosis. Abnormal distributions of wall shear stress in the vicinity of the anastomosis, resulting from the compliance mismatch between the graft and the host artery, might be an important factor in ANFH formation and graft failure. The present study suggests a correlation between regions of low wall shear stress and the development of anastomotic neointimal fibrous hyperplasia (ANFH) in end-to-end anastomoses.
Air pressure decay (APD) rate and ultrafiltration rate (UFR) tests were performed on new and saline-rinsed dialyzers as well as those reused in patients several times. C-DAK 4000 (Cordis Dow) and CF IS-11 (Baxter Travenol) reused dialyzers obtained from the dialysis clinic were used in the present study. The new dialyzers exhibited a relatively flat APD, whereas saline-rinsed and reused dialyzers showed a considerable amount of decay. C-DAK dialyzers had a larger APD(11.70
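For reference, the fully developed laminar (Poiseuille) value commonly used as the baseline against which anastomotic wall shear stress measurements are compared is τw = 4μQ/(πR³). A small helper; the blood-like inputs in the usage note are illustrative, not the study's test conditions:

```python
import math

def poiseuille_wall_shear(mu_pa_s, q_m3_s, radius_m):
    """Wall shear stress (Pa) for fully developed laminar pipe flow:
    tau_w = 4 * mu * Q / (pi * R^3)."""
    return 4.0 * mu_pa_s * q_m3_s / (math.pi * radius_m ** 3)
```

For example, with an assumed blood viscosity of 0.0035 Pa·s, a flow of 5 mL/s, and a 3 mm tube radius, the baseline wall shear stress is about 0.83 Pa; deviations from such a baseline near the anastomosis are what the FMHFA probe measurements quantify.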