• Title/Summary/Keyword: experimental techniques

3,187 search results

Experimental Verification on the Effect of the Gap Flow Blocking Devices Attached on the Semi-Spade Rudder using Flow Visualization Technique (유동가시화를 이용한 혼-타의 간극유동 차단장치 효과에 관한 실험적 검증)

  • Shin, Kwangho; Suh, Jung-Chun; Kim, Hyochul; Ryu, Keuksang; Oh, Jungkeun
    • Journal of the Society of Naval Architects of Korea, v.50 no.5, pp.324-333, 2013
  • Recently, rudder erosion due to cavitation has been frequently reported on the semi-spade rudders of high-speed large ships. This problem raises economic and safety issues in ship operation. A semi-spade rudder has a gap between the horn/pintle and the movable wing part. Because this gap forms a discontinuous surface, cavitation arises and leads to unresolved problems such as rudder erosion. In this study, we made a rudder model for 2-D experiments using the NACA0020 section and also manufactured gap flow blocking devices to insert into the gap of the model. To study the gap flow characteristics at various rudder deflection angles ($5^{\circ}$, $10^{\circ}$, $35^{\circ}$) and the effect of the gap flow blocking devices, we carried out velocity measurements using PIV (Particle Image Velocimetry) and observed cavitation with a high-speed camera in the Seoul National University cavitation tunnel. To observe the gap cavitation on the semi-spade rudder, we slowly lowered the internal pressure of the cavitation tunnel until cavitation occurred near the gap and then captured it with the high-speed camera at a frame rate of 4300 fps (frames per second). During this procedure, the cavitation numbers and the locations of inception were recorded, and these experimental data were compared with CFD results calculated with the commercial code Fluent. When the gap flow blocking device was used to block the gap, the flow showed a different character from that observed without the device: the flow velocity increased on the suction side and decreased on the pressure side. We therefore conclude that the gap flow blocking device produces a higher lift force, and we also observe that cavitation inception is delayed.
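For reference, the cavitation number recorded in such tests is typically defined as $\sigma = (p_\infty - p_v)/(\frac{1}{2}\rho U^2)$. A minimal sketch with illustrative water-tunnel values (assumptions of this sketch, not the measured conditions of the experiment):

```python
# Cavitation number sigma = (p_inf - p_v) / (0.5 * rho * U^2).
# The tunnel pressure, vapor pressure, and flow speed below are illustrative
# assumptions, not values reported in the paper.
rho = 998.0        # water density [kg/m^3]
p_inf = 60_000.0   # tunnel static pressure [Pa], lowered until cavitation appears
p_v = 2_340.0      # water vapor pressure at ~20 degC [Pa]
U = 6.0            # free-stream speed [m/s]

sigma = (p_inf - p_v) / (0.5 * rho * U ** 2)
print(f"cavitation number sigma = {sigma:.2f}")  # lower sigma -> cavitation more likely
```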

Analysis of changes in air consumption according to water depth in underwater search (수중수색 시 수심에 따른 공기소모량의 변화 분석)

  • Jeon, Jai-In; Kong, Ha-Sung
    • The Journal of the Convergence on Culture Technology, v.6 no.1, pp.433-439, 2020
  • This study compared and analyzed changes in air consumption with water depth, taking individual characteristics and theoretical values into account. The experimental results are as follows. First, subjects A and B showed similar rates of increase with depth. Second, subject C showed a significantly higher rate of increase in air consumption at a depth of 25 m because his body responded sensitively to the pressure at depth and his breathing was faster than that of the other participants. Third, subjects D and E showed significantly lower overall air consumption; at 37 and 35 years of age, respectively, they were the youngest and fittest and had the most deep-sea diving experience from their military service. Fourth, the subjects' average air consumption per minute increased to 1.45 times the surface value at a depth of 5 m, 1.85 times at 10 m, and 2.8 times at 20 m. This appears to result from differences in experience, physical fitness, the degree of bodily adaptation to the underwater environment, and breathing technique. Lastly, the difference between the experimental averages and the theoretical values appears to reflect each of the five rescuers using more or less air than theory predicts, depending on their experience, physical strength, degree of adaptation to the underwater environment, and method of underwater breathing.
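For comparison with the measured ratios above, a small sketch of the usual theoretical scaling, in which surface air consumption is multiplied by the absolute pressure in atmospheres (taking roughly 10 m of water per additional atmosphere; an assumption of this sketch, not a figure taken from the study):

```python
# Theoretical air-consumption multiplier at depth: absolute pressure in atm,
# assuming ~10 m of water per additional atmosphere.
def theoretical_factor(depth_m: float) -> float:
    return (depth_m + 10.0) / 10.0

measured = {5: 1.45, 10: 1.85, 20: 2.8}   # ratios reported in the abstract
for depth, ratio in measured.items():
    print(f"{depth:>2} m: theoretical x{theoretical_factor(depth):.2f}, measured x{ratio}")
```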

An Experimental Environment for Simulation of Stealthy Deception Attack in CPS Using PLCitM (PLC in the Middle) (중간자 PLC를 이용한 CPS 은닉형 공격 실험환경 구축 방안)

  • Chang, Yeop; Lee, Woomyo; Shin, Hyeok-Ki; Kim, Sinkyu
    • Journal of the Korea Institute of Information Security & Cryptology, v.28 no.1, pp.123-133, 2018
  • A Cyber-Physical System (CPS) is a system in which a physical system and a cyber system are tightly integrated. To operate the target physical system stably, the CPS constantly monitors it through sensors and controls it with actuators according to the current state. If a malicious attacker forges the measured sensor values to conceal an attack, the cyber system, which operates on the collected data, cannot recognize the actual operating status of the physical system. This delays the response of the automation system and of the operator, and greater damage follows. To protect the CPS from increasingly sophisticated and targeted attacks, countermeasures must be developed that can detect such stealthy deception attacks. However, in a CPS environment composed of various heterogeneous devices, analyzing vulnerabilities and demonstrating them on actual field devices takes a great deal of time. Therefore, in this study, we propose a method of constructing a PLCitM (PLC in the Middle) experimental environment that can verify the performance of techniques for detecting stealthy deception attacks in CPS, and we present the experimental results.
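A pure-software sketch of the scenario described above, under stated assumptions: a man-in-the-middle stage replays a frozen sensor value while the simulated process keeps evolving, and a simple residual check flags the mismatch. This illustrates the deception/detection idea only; it is not the paper's PLCitM testbed and uses no real industrial protocol.

```python
# Simulated tank process, a replay-style deception window, and a crude
# residual-based detector. All values and thresholds are illustrative.
import random

level, inflow = 50.0, 2.0          # simulated tank level and constant inflow
reported, expected = level, level
frozen = None

for t in range(60):
    level += inflow + random.gauss(0, 0.2)        # physical process evolves
    if 20 <= t < 40:                              # deception window: replay old value
        frozen = frozen if frozen is not None else reported
        reported = frozen
    else:
        frozen, reported = None, level
    expected += inflow                            # what the monitor expects to see
    residual = abs(expected - reported)
    if residual > 5.0:                            # residual check flags the frozen reading
        print(f"t={t}: possible deception, residual={residual:.1f}")
```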

A Study on Optimum Coding Method for Correlation Processing of Radio Astronomy (전파천문 상관처리를 위한 최적 코딩 방법에 관한 연구)

  • Shin, Jae-Sik; Oh, Se-Jin; Yeom, Jae-Hwan; Roh, Duk-Gyoo; Chung, Dong-Kyu; Oh, Chung-Sik; Hwang, Ju-Yeon; So, Yo-Hwan
    • Journal of the Institute of Convergence Signal Processing, v.16 no.4, pp.139-148, 2015
  • In this paper, an optimal coding method using open-source libraries is proposed to improve the performance of the software correlator developed for the Korea-Japan Joint VLBI Correlator (KJJVC). A VLBI correlation system is usually implemented in hardware using ASICs or FPGAs because the computational load grows rapidly with the number of participating observatories. Recently, however, software correlators have been built on large systems such as clusters, thanks to the growth of computing power. Since a hardware VLBI correlator can process data in real time or quasi-real time relative to the observation time, a software correlator must be coded optimally to achieve comparable performance. Therefore, in this paper, an experimental comparison was carried out for the FFT processing stage, the most computationally important part of the correlator: a baseline method using the open-source FFTW library, methods additionally using SSE (Streaming SIMD Extensions), shared memory, or OpenMP, and a method combining these techniques. The experimental results confirm that the proposed coding method, which combines the FFTW library, shared memory, and OpenMP, effectively reduces the correlation time of the developed software correlator compared with the conventional method.
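To illustrate the FFT-based correlation principle that the optimized stage implements, here is a minimal NumPy sketch of FX-style correlation for a single baseline. It stands in for the idea only and is not the paper's FFTW/SSE/OpenMP implementation; the signals, segment length, and delay are synthetic assumptions.

```python
# FX-style correlation for one baseline: FFT each station's samples per segment,
# cross-multiply the spectra, accumulate, then return to the lag domain.
import numpy as np

rng = np.random.default_rng(0)
n_seg, seg_len, delay = 200, 1024, 7
common = rng.normal(size=n_seg * seg_len + delay)       # shared "sky" signal
station1 = common[delay:] + 0.5 * rng.normal(size=n_seg * seg_len)
station2 = common[:-delay] + 0.5 * rng.normal(size=n_seg * seg_len)

acc = np.zeros(seg_len, dtype=complex)
for k in range(n_seg):
    s = slice(k * seg_len, (k + 1) * seg_len)
    f1 = np.fft.fft(station1[s])                        # "F" step (per-station FFT)
    f2 = np.fft.fft(station2[s])
    acc += np.conj(f1) * f2                             # "X" step (cross-spectrum), accumulated

lag = np.fft.ifft(acc)                                  # back to the lag domain
print("recovered delay [samples]:", int(np.argmax(np.abs(lag))))
```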

Effects of Dynamic Compression to Listening Monitor on Vocal Recording (보컬 녹음에서 모니터에 적용된 컴프레서가 가창에 미치는 영향)

  • Kim, Si-On; Park, Jae-Rock
    • Journal of Korea Entertainment Industry Association, v.13 no.2, pp.93-100, 2019
  • Dynamic compressors are essential equipment in vocal recordings of modern pop music. They are applied not only to the mix that listeners hear but also to the monitor through which the singer hears his or her own voice together with the accompaniment during recording. This study experimentally examines how a dynamic compressor applied to the monitoring environment affects a singer's vocal performance. Ten singers took part in a blind test of how the vocals heard through the monitor were affected by compression ratios of 1:1, 2:1, and 4:1. The results show that the higher the compression ratio applied to the monitor, the louder and brighter the singing became, but pitch accuracy deteriorated slightly in the more dynamic parts of the song. In post-test interviews, the singers generally said they preferred hearing compressed sound through the monitor. Since the music used in the experiment was a ballad with a wide dynamic range, the results cannot be generalized to all kinds of recording, but they offer important implications for monitoring practice at recording sites. We also hope that cognitive-science approaches to recording technology will build on this empirical study of how the monitor environment affects the singing voice.
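To make the ratio settings concrete, a small sketch of the standard static compression law, in which level above the threshold is divided by the ratio; the threshold and input level are illustrative assumptions, not the study's monitor settings.

```python
# Static compressor characteristic: above the threshold, input level changes
# are divided by the ratio. Threshold and input level are assumptions.
def compressed_level_db(level_db: float, threshold_db: float = -20.0, ratio: float = 2.0) -> float:
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

for ratio in (1.0, 2.0, 4.0):                      # the ratios used in the blind test
    out = compressed_level_db(-8.0, ratio=ratio)   # a loud passage at -8 dBFS
    print(f"ratio {ratio}:1 -> -8 dBFS in, {out:.1f} dBFS out "
          f"({-8.0 - out:.1f} dB gain reduction)")
```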

A Study on the Prediction Model of Stock Price Index Trend based on GA-MSVM that Simultaneously Optimizes Feature and Instance Selection (입력변수 및 학습사례 선정을 동시에 최적화하는 GA-MSVM 기반 주가지수 추세 예측 모형에 관한 연구)

  • Lee, Jong-sik; Ahn, Hyunchul
    • Journal of Intelligence and Information Systems, v.23 no.4, pp.147-168, 2017
  • Accurate stock market forecasting has long been studied in academia, and a variety of forecasting models using diverse techniques now exist. Recently, many attempts have been made to predict stock indices with machine learning methods, including deep learning. Although both fundamental analysis and technical analysis are used in traditional stock investment, technical analysis is more suitable for short-term trading prediction and for applying statistical and mathematical techniques. Most studies using technical indicators have modeled stock price prediction as a binary classification - rising or falling - of future market movement (usually the next trading day). However, such binary classification has clear limitations for predicting trends, identifying trading signals, and signaling portfolio rebalancing. In this study, we extend the existing binary scheme to a multi-class classification of the stock index trend (upward, boxed, downward). This multi-class problem could be addressed with techniques such as multinomial logistic regression (MLOGIT), multiple discriminant analysis (MDA), or artificial neural networks (ANN); instead, we use multi-class support vector machines (MSVM), which have shown superior prediction performance, and propose an optimization model that uses a genetic algorithm as a wrapper to improve their performance. In particular, the proposed model, named GA-MSVM, is designed to maximize performance by optimizing not only the kernel function parameters of the MSVM but also the selection of input variables (feature selection) and of training instances (instance selection). To verify the performance of the proposed model, we applied it to real data. The results show that the proposed method outperforms the conventional MSVM, which has been known to give the best prediction performance to date, as well as existing artificial intelligence and data mining techniques such as MDA, MLOGIT, and CBR. In particular, instance selection was confirmed to play a very important role in predicting the stock index trend, contributing more to the model's improvement than the other factors. To verify the usefulness of GA-MSVM, we applied it to forecasting the trend of Korea's KOSPI200 stock index. Our research primarily aims to predict trend segments in order to capture trading signals or short-term trend transition points. The experimental data set includes technical indicators such as price and volatility indices of the KOSPI200 index (2004-2017) and macroeconomic data (interest rates, exchange rates, the S&P 500, etc.). Using a variety of statistical methods, including one-way ANOVA and stepwise MDA, 15 indicators were selected as candidate independent variables. The dependent variable, the trend class, took three states: 1 (upward trend), 0 (boxed), and -1 (downward trend). For each class, 70% of the data was used for training and the remaining 30% for verification. To benchmark the proposed model, comparative experiments with MDA, MLOGIT, CBR, ANN, and MSVM were conducted.
The MSVM adopts the One-Against-One (OAO) approach, which is known to be the most accurate among the various MSVM formulations. Although there are some limitations, the final experimental results demonstrate that the proposed model, GA-MSVM, performs at a significantly higher level than all comparative models.
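As a rough, hedged sketch of the wrapper idea (not the paper's implementation): a genetic-algorithm chromosome jointly encodes a feature mask, an instance mask, and the RBF kernel parameters, and each chromosome is scored by the cross-validated accuracy of a multi-class SVM. The data, label definitions, parameter ranges, and GA settings below are synthetic assumptions.

```python
# GA wrapper around a multi-class SVM: joint feature selection, instance
# selection, and (C, gamma) tuning, with cross-validated accuracy as fitness.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 15))                  # 15 indicators (synthetic)
y = rng.integers(-1, 2, size=300)               # -1 down, 0 boxed, +1 up (synthetic labels)
N_FEAT, N_INST = X.shape[1], X.shape[0]

def decode(chrom):
    feat = chrom[:N_FEAT].astype(bool)
    inst = chrom[N_FEAT:N_FEAT + N_INST].astype(bool)
    C = 10 ** (chrom[-2] * 3 - 1)               # C in [0.1, 100]
    gamma = 10 ** (chrom[-1] * 3 - 3)           # gamma in [0.001, 1]
    return feat, inst, C, gamma

def fitness(chrom):
    feat, inst, C, gamma = decode(chrom)
    if feat.sum() == 0 or inst.sum() < 30:
        return 0.0
    clf = SVC(C=C, gamma=gamma)                 # sklearn's SVC is one-vs-one for multi-class
    return cross_val_score(clf, X[inst][:, feat], y[inst], cv=3).mean()

def random_chrom():
    bits = rng.integers(0, 2, N_FEAT + N_INST).astype(float)
    return np.concatenate([bits, rng.random(2)])

pop = [random_chrom() for _ in range(20)]
for gen in range(10):                           # small GA loop: select, crossover, mutate
    scores = np.array([fitness(c) for c in pop])
    parents = [pop[i] for i in np.argsort(scores)[::-1][:10]]
    children = []
    while len(children) < 10:
        a, b = rng.choice(10, 2, replace=False)
        cut = rng.integers(1, len(parents[0]) - 1)
        child = np.concatenate([parents[a][:cut], parents[b][cut:]])
        flip = rng.random(N_FEAT + N_INST) < 0.01   # bit-flip mutation on the masks
        child[:N_FEAT + N_INST] = np.where(flip, 1 - child[:N_FEAT + N_INST],
                                           child[:N_FEAT + N_INST])
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("best CV accuracy:", round(fitness(best), 3))
```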

Predicting the Performance of Recommender Systems through Social Network Analysis and Artificial Neural Network (사회연결망분석과 인공신경망을 이용한 추천시스템 성능 예측)

  • Cho, Yoon-Ho; Kim, In-Hwan
    • Journal of Intelligence and Information Systems, v.16 no.4, pp.159-172, 2010
  • The recommender system is one possible solution for assisting customers in finding the items they would like to purchase. To date, a variety of recommendation techniques have been developed. One of the most successful is Collaborative Filtering (CF), which has been used in a number of applications such as recommending Web pages, movies, music, articles, and products. CF identifies customers whose tastes are similar to those of a given customer and recommends items those customers have liked in the past. Numerous CF algorithms have been developed to increase the performance of recommender systems. Broadly, there are memory-based CF algorithms, model-based CF algorithms, and hybrid CF algorithms that combine CF with content-based techniques or other recommender systems. While many researchers have focused their efforts on improving CF performance, the theoretical justification of CF algorithms is lacking; that is, we do not know much about how CF works. Furthermore, the relative performance of CF algorithms is known to be domain- and data-dependent. It is very time-consuming and expensive to implement and launch a CF recommender system, and a system unsuited to the given domain provides customers with poor-quality recommendations that easily annoy them. Therefore, predicting the performance of CF algorithms in advance is practically important and needed. In this study, we propose an efficient approach to predicting the performance of CF. Social Network Analysis (SNA) and an Artificial Neural Network (ANN) are applied to develop our prediction model. CF can be modeled as a social network in which customers are nodes and purchase relationships between customers are links. SNA facilitates an exploration of the topological properties of the network structure that are implicit in the data used for CF recommendations. An ANN model is developed from measures of network topology such as network density, inclusiveness, clustering coefficient, network centralization, and Krackhardt's efficiency. While network density, expressed as a proportion of the maximum possible number of links, captures the density of the whole network, the clustering coefficient captures the degree to which the overall network contains localized pockets of dense connectivity. Inclusiveness refers to the number of nodes that are included within the connected parts of the social network. Centralization reflects the extent to which connections are concentrated in a small number of nodes rather than distributed equally among all nodes. Krackhardt's efficiency characterizes how dense the social network is beyond what is barely needed to keep the social group even indirectly connected. We use these social network measures as input variables of the ANN model. As the output variable, we use the recommendation accuracy measured by the F1-measure. To evaluate the effectiveness of the ANN model, sales transaction data from H department store, one of the well-known department stores in Korea, was used. A total of 396 experimental samples were gathered, and we used 40%, 40%, and 20% of them for training, testing, and validation, respectively. Five-fold cross-validation was also conducted to enhance the reliability of our experiments. The input variable measurement process consists of the following three steps: analysis of customer similarities, construction of a social network, and analysis of social network patterns.
We used NetMiner 3 and UCINET 6.0 for the SNA and Clementine 11.1 for the ANN modeling. The experiments showed that the ANN model achieved an estimated accuracy of 92.61% with an RMSE of 0.0049. These results suggest that our prediction model can help decide whether CF is useful for a given application with particular data characteristics.
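A minimal sketch of the pipeline under stated assumptions: network measures are computed with networkx (the study used NetMiner and UCINET) and fed to a small neural-network regressor (the study used Clementine); the graphs and target F1 values below are synthetic placeholders, not the department-store data.

```python
# Compute social-network measures of customer similarity graphs and train a
# small ANN to predict CF accuracy (F1). Everything here is synthetic.
import networkx as nx
import numpy as np
from sklearn.neural_network import MLPRegressor

def network_features(G):
    n = G.number_of_nodes()
    density = nx.density(G)                          # links / max possible links
    clustering = nx.average_clustering(G)            # localized pockets of connectivity
    inclusiveness = 1 - len(list(nx.isolates(G))) / n
    degs = [d for _, d in G.degree()]
    # Freeman degree centralization: concentration of links on a few nodes
    centralization = sum(max(degs) - d for d in degs) / ((n - 1) * (n - 2)) if n > 2 else 0.0
    return [density, clustering, inclusiveness, centralization]

rng = np.random.default_rng(1)
samples, targets = [], []
for _ in range(60):                                  # 60 synthetic customer networks
    G = nx.gnp_random_graph(80, rng.uniform(0.02, 0.2), seed=int(rng.integers(1e6)))
    f = network_features(G)
    samples.append(f)
    targets.append(0.3 + 0.5 * f[0] + 0.2 * f[1])    # stand-in "F1" just for the demo

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(samples, targets)
print("predicted F1 for a new network:",
      model.predict([network_features(nx.gnp_random_graph(80, 0.1))])[0])
```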

Bankruptcy Type Prediction Using A Hybrid Artificial Neural Networks Model (하이브리드 인공신경망 모형을 이용한 부도 유형 예측)

  • Jo, Nam-ok; Kim, Hyun-jung; Shin, Kyung-shik
    • Journal of Intelligence and Information Systems, v.21 no.3, pp.79-99, 2015
  • Bankruptcy prediction has been studied extensively in the accounting and finance fields; it can have an important impact on lending decisions and on the profitability of financial institutions in terms of risk management. Many researchers have focused on constructing more robust bankruptcy prediction models. Early studies primarily used statistical techniques such as multiple discriminant analysis (MDA) and logit analysis. However, many studies have demonstrated that artificial intelligence (AI) approaches, such as artificial neural networks (ANN), decision trees, case-based reasoning (CBR), and support vector machines (SVM), have outperformed statistical techniques since the 1990s for business classification problems, because statistical methods impose rigid assumptions. In previous studies on corporate bankruptcy, many researchers have focused on developing prediction models using financial ratios, but few studies address the specific types of bankruptcy: previous models have generally been interested only in whether or not firms will go bankrupt, and most studies on bankruptcy types have been literature reviews or case studies. This study therefore develops a data mining model that predicts both the occurrence of bankruptcy and its specific type for Korean small- and medium-sized construction firms, in terms of profitability, stability, and activity indices, so that firms can take preventive action in advance. We propose a hybrid approach using two artificial neural networks (ANNs) for the prediction of bankruptcy types. The first is a back-propagation neural network (BPN) model using supervised learning for bankruptcy prediction, and the second is a self-organizing map (SOM) model using unsupervised learning to classify the bankruptcy data into several types. Based on the constructed model, we predict the bankruptcy of companies by applying the BPN model to a validation set that was not used in model development, and we then identify the specific types of bankruptcy by passing the firms predicted bankrupt by the BPN model to the SOM. To interpret the characteristics of the clusters derived by the SOM model, we calculated the averages of selected input variables for each cluster with statistical tests. Each cluster represents a bankruptcy type identified from the data of bankrupt firms, and the financial-ratio input variables are used to interpret the meaning of each cluster. The experimental results show that each of the five bankruptcy types has distinct characteristics in terms of financial ratios. Type 1 (severe bankruptcy) has inferior financial ratios across the board except for EBITDA (earnings before interest, taxes, depreciation, and amortization) to sales, based on the clustering results. Type 2 (lack of stability) has a low quick ratio, low stockholders' equity to total assets, and high total borrowings to total assets. Type 3 (lack of activity) has slightly low total asset turnover and fixed asset turnover. Type 4 (lack of profitability) has low retained earnings to total assets and low EBITDA to sales, which are indices of profitability. Type 5 (recoverable bankruptcy) includes firms that are in relatively good financial condition compared with the other bankruptcy types even though they went bankrupt.
Based on these findings, researchers and practitioners in the credit evaluation field can obtain more useful information about the types of corporate bankruptcy. In this paper, we used firms' financial ratios to classify bankruptcy types; it is important to select input variables that both predict bankruptcy correctly and classify its type meaningfully. In future work, we will include non-financial factors such as firm size, industry, and age, so that more realistic clustering of bankruptcy types can be obtained by combining qualitative factors and the domain knowledge of experts.
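A minimal sketch of the hybrid two-stage idea, assuming synthetic financial ratios, scikit-learn's MLPClassifier for the BPN stage, and the third-party minisom package for the SOM stage; none of the data, map size, or settings come from the paper.

```python
# Stage 1: supervised network predicts bankruptcy; Stage 2: a SOM clusters the
# predicted-bankrupt firms into "types". All data below are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier
from minisom import MiniSom   # assumed third-party SOM library (pip install minisom)

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 8))                 # 8 financial ratios (synthetic)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=400) < 0).astype(int)

# Stage 1: back-propagation network for bankruptcy prediction
bpn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X[:300], y[:300])
pred = bpn.predict(X[300:])                   # hold-out firms
bankrupt = X[300:][pred == 1]

# Stage 2: self-organizing map clusters predicted-bankrupt firms into types
som = MiniSom(1, 5, X.shape[1], sigma=0.8, learning_rate=0.5, random_seed=0)
som.random_weights_init(bankrupt)
som.train_random(bankrupt, 1000)
types = [som.winner(f)[1] for f in bankrupt]  # map column used as the "bankruptcy type"

# Interpret each type by the average financial ratios of its members
for t in sorted(set(types)):
    members = bankrupt[np.array(types) == t]
    print(f"type {t}: n={len(members)}, mean ratios={np.round(members.mean(axis=0), 2)}")
```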

The effects of out of hospital ACLS simulation training on the paramedic's duty ability (구급대원의 전문심장소생술 시뮬레이션훈련이 직무수행융합능력에 미치는 영향)

  • Park, Yoo-Na; Cho, Byung-Jun; Kim, Gyoung-Young
    • Journal of the Korea Convergence Society, v.10 no.4, pp.99-106, 2019
  • The purpose of this study is to analyze the effects of simulation-based advanced cardiac life support (ACLS) training on the performance of ACLS by paramedics in the pre-hospital stage and to provide basic data for effective cardiac resuscitation. The study used a nonequivalent control group pre-test/post-test experimental design. The subjects were 16 newly recruited paramedics from K firefighting school. The simulation training program and the evaluation sheets used as assessment tools were reviewed and commented on by six ACLS simulation experts (two emergency physicians, two professors of emergency medical services, and two KALS instructors). The training consisted of 30 minutes of theory and 150 minutes of practical training: the instructor first demonstrated for 5 minutes, and then individual and team training was conducted, with debriefing after each individual session. The evaluation used a 5-point Likert scale, and SPSS 22.0 for Windows was used for analysis. The general characteristics of the subjects were analyzed by frequency, homogeneity between the experimental and control groups was examined with a t-test, and within-group differences were tested with the paired t-test. The homogeneity test confirmed that the experimental and control groups were homogeneous. In the evaluation of the six ACLS skill items - 1) electrocardiogram, 2) specialized equipment, 3) fluid administration, 4) leadership and teamwork, 5) medical direction, and 6) evaluation during transfer - the experimental group that received the simulation training performed better in all aspects than the untrained control group. Paramedics who received simulation training improved their job performance ability more than the group given conventional lectures and training. Therefore, if simulation-based training and education are applied to students in training courses or to emergency personnel engaged in clinical practice, they will be able to perform their duties more proficiently, and the emergency services provided to patients with cardiac arrest are expected to improve.
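For the statistical comparison described above, a small sketch using SciPy (the study itself used SPSS 22.0): an independent t-test for baseline homogeneity and a paired t-test for pre/post differences, applied to synthetic Likert-scale scores rather than the study's data.

```python
# Homogeneity test between groups and paired test within the experimental group.
# The Likert-scale scores are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
exp_pre = rng.normal(3.0, 0.5, 8)                  # experimental group, pre-training
ctl_pre = rng.normal(3.1, 0.5, 8)                  # control group, pre-training
exp_post = exp_pre + rng.normal(0.8, 0.3, 8)       # post-training scores (assumed gain)

t_h, p_h = stats.ttest_ind(exp_pre, ctl_pre)       # homogeneity test between groups
t_p, p_p = stats.ttest_rel(exp_post, exp_pre)      # paired test within the experimental group
print(f"homogeneity: t={t_h:.2f}, p={p_h:.3f}; paired: t={t_p:.2f}, p={p_p:.3f}")
```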

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems, v.18 no.2, pp.29-45, 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the returns of investors. As a result, predicting companies' credit ratings with statistical and machine learning techniques has been a popular research topic. Statistical techniques, including multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis, have traditionally been used in bond rating. However, one major drawback is that they rely on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. It is simple enough to be analyzed mathematically and achieves high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound of the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for binary classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance on multi-class problems as much as SVM does on binary classification. Second, approximation algorithms (e.g., decomposition methods and the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class problems, but they can deteriorate classification performance. Third, multi-class prediction suffers from the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another; such data sets often yield a classifier with a skewed decision boundary and thus reduced classification accuracy. SVM ensemble learning is one machine learning approach for coping with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the most widely used ensemble learning techniques: it constructs a composite classifier by sequentially training classifiers while increasing the weight of misclassified observations through iterations, so observations that are incorrectly predicted by previous classifiers are chosen more often than those that are correctly predicted.
Boosting thus attempts to produce new classifiers that better predict the examples on which the current ensemble performs poorly; in this way, it can reinforce the training of the misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to address the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, it can carry out learning while considering the geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross-validation was performed three times with different random seeds to ensure that the comparison among the three classifiers did not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine; that is, the cross-validated folds were tested independently for each algorithm. Through these steps, we obtained results for the classifiers on each of the 30 experiments. In terms of arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) shows higher accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of the classifiers over the 30 folds differs significantly; the results indicate that the performance of MGM-Boost differs significantly from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
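As a hedged illustration of the two ideas this comparison rests on, the sketch below runs plain AdaBoost over SVM base learners (not MGM-Boost itself) on a synthetic imbalanced multi-class data set and reports both arithmetic accuracy and the geometric mean of per-class recalls, the kind of geometric mean-based accuracy MGM-Boost targets.

```python
# AdaBoost with SVM base learners on an imbalanced 4-class problem, evaluated
# by arithmetic accuracy and by the geometric mean of per-class recalls.
# Data set and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_classes=4, n_informative=6,
                           weights=[0.55, 0.25, 0.12, 0.08], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Note: on older scikit-learn versions the keyword is base_estimator= instead of estimator=.
ens = AdaBoostClassifier(estimator=SVC(kernel="rbf", probability=True),
                         n_estimators=20, random_state=0)
ens.fit(X_tr, y_tr)
pred = ens.predict(X_te)

acc = accuracy_score(y_te, pred)                       # arithmetic accuracy
per_class_recall = recall_score(y_te, pred, average=None)
gmean = np.prod(per_class_recall) ** (1 / len(per_class_recall))  # geometric-mean accuracy
print(f"accuracy={acc:.3f}, geometric mean of per-class recalls={gmean:.3f}")
```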