• Title/Summary/Keyword: Multiple Input Multiple Output


Data Envelopment Analysis of the Management Efficiency of National Shipping Enterprises in South Korea -Chiefly on the Corporate Entertainment and Advertisement Cost- (DEA모형을 이용한 국적선사의 경영효율성 분석 -접대비와 광고·선전비를 중심으로-)

  • Park, Hyun-Jun;Kim, Hyuna;Lim, Young-Tae
    • Journal of Korea Port Economic Association
    • /
    • v.32 no.2
    • /
    • pp.123-135
    • /
    • 2016
  • This study uses Data Envelopment Analysis (DEA) to investigate the management efficiency of Korean shipping companies based on business administration costs such as corporate entertainment, advertisement, and labor costs. We analyze shipping enterprises listed on the Korean stock market for the period 2010-2014. Corporate entertainment, advertisement, and labor costs are used as input variables, and sales and net income are used as output variables. We use technical efficiency, pure technical efficiency, scale efficiency, and returns to scale to propose a plan to improve the efficiency of inefficient decision-making units (DMUs). The results of the efficiency analysis show that six DMUs are efficient in terms of the technical efficiency of the CCR model and eight DMUs are efficient in terms of the pure technical efficiency of the BCC model. In terms of returns to scale, six DMUs (24% of all DMUs) show increasing returns to scale, while 13 DMUs (52% of all DMUs) show decreasing returns to scale. Because multiple efficient DMUs exist in the technical efficiency analysis, we conduct a super efficiency analysis. The results show that the super efficiency scores of the two most efficient DMUs are 1.314 and 1.243, respectively. This implies that these DMUs could maintain their current levels of efficiency even if they increased the amount spent on advertisements, corporate entertainment, and labor by 31.4% and 24.3%, respectively. We conclude this study by providing the efficiency state of each DMU and targets for improving the inefficiencies in each case.
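
As a rough illustration of the CCR efficiency score used above, the sketch below solves the input-oriented CCR linear program with SciPy for a few hypothetical DMUs; the input/output figures are invented placeholders, not data from the study. The BCC and super-efficiency variants would add a convexity constraint and exclude the evaluated DMU from the reference set, respectively.

```python
# A minimal input-oriented CCR DEA sketch (assumed formulation, illustrative data),
# not the authors' exact model. Inputs: entertainment, advertising, labor cost;
# outputs: sales, net income.
import numpy as np
from scipy.optimize import linprog

X = np.array([[3.0, 2.0, 5.0],   # inputs per DMU
              [4.0, 1.0, 6.0],
              [2.5, 3.0, 4.0]])
Y = np.array([[90.0, 7.0],       # outputs per DMU
              [80.0, 5.0],
              [85.0, 6.5]])

def ccr_efficiency(X, Y, o):
    """Technical efficiency of DMU o under constant returns to scale."""
    n, m = X.shape          # n DMUs, m inputs
    s = Y.shape[1]          # s outputs
    c = np.zeros(n + 1)
    c[0] = 1.0              # minimise theta; variables are [theta, lambda_1..lambda_n]
    A_ub, b_ub = [], []
    for i in range(m):      # sum_j lambda_j * x_ij <= theta * x_io
        A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(s):      # sum_j lambda_j * y_rj >= y_ro
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[o, r])
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.x[0]

for o in range(X.shape[0]):
    print(f"DMU {o}: CCR efficiency = {ccr_efficiency(X, Y, o):.3f}")
```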

Development of an Analysis and Evaluation Model for a Bus Transit Route Network Design (버스 노선망 설계를 위한 평가모형 개발)

  • Han, Jong-Hak;Lee, Seung-Jae;Kim, Jong-Hyeong
    • Journal of Korean Society of Transportation
    • /
    • v.23 no.2
    • /
    • pp.161-172
    • /
    • 2005
  • This study develops a Bus Transit Route Analysis and Evaluation Model (BTRAEM) that can produce quantitative performance measures for bus transit route network design. So far, in Korea, few models evaluate the variety of performance measures and service-quality indicators that are of concern to both transit users and operators, because of limited bus database systems and the limitations of transit route network analysis algorithms. The BTRAEM in this research differs from previous approaches in that it employs a multiple-path transit trip assignment model that explicitly considers transfers and the different travel times after boarding. We also develop the input-output data structure and quantitative performance measures for the BTRAEM. In a numerical experiment applying the BTRAEM to the Mandl transit network, we obtained meaningful results on the performance measures of the bus transit route network. In the future, we expect the BTRAEM to provide good solutions for real transit networks.
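
The abstract's transfer-aware path idea can be sketched very loosely (this is not the BTRAEM itself) as a shortest-path search on a stop/route graph in which boarding carries a penalty separate from in-vehicle time; the routes, times, and penalty below are hypothetical.

```python
# A loose sketch of a transfer-aware transit path cost: boarding a route carries
# a penalty separate from in-vehicle time, so paths with fewer transfers are
# preferred. Route layouts, times, and the penalty are hypothetical.
import networkx as nx

BOARDING_PENALTY = 5.0   # perceived minutes of wait/transfer cost per boarding

routes = {
    "R1": [("A", "B", 4), ("B", "C", 6)],   # (from_stop, to_stop, in-vehicle minutes)
    "R2": [("B", "D", 3), ("D", "C", 3)],
}

G = nx.DiGraph()
for route, legs in routes.items():
    for u, v, t in legs:
        G.add_edge((u, route), (v, route), weight=t)              # ride along the route
    for stop in {s for leg in legs for s in leg[:2]}:
        G.add_edge(stop, (stop, route), weight=BOARDING_PENALTY)  # board / transfer
        G.add_edge((stop, route), stop, weight=0.0)               # alight

cost, path = nx.single_source_dijkstra(G, "A", "C", weight="weight")
print("generalised cost A->C:", cost)   # 4 + 6 in-vehicle + one boarding penalty = 15
print("path:", path)
```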

Application of recurrent neural network for inflow prediction into multi-purpose dam basin (다목적댐 유입량 예측을 위한 Recurrent Neural Network 모형의 적용 및 평가)

  • Park, Myung Ky;Yoon, Yung Suk;Lee, Hyun Ho;Kim, Ju Hwan
    • Journal of Korea Water Resources Association
    • /
    • v.51 no.12
    • /
    • pp.1217-1227
    • /
    • 2018
  • This paper aims to evaluate the applicability of a dam inflow prediction model using recurrent neural network theory. To achieve this goal, an Artificial Neural Network (ANN) model and an Elman Recurrent Neural Network (RNN) model were applied to hydro-meteorological data sets for the Soyanggang dam and Chungju dam basins during the dam operation period. For model training, inflow, rainfall, temperature, sunshine duration, and wind speed were used as input data, and the daily dam inflow over 10 days was used as output data. Verification was carried out by predicting dam inflow between July 2016 and June 2018. The results showed that there was no significant difference in prediction performance between the ANN model and the Elman RNN model in the Soyanggang dam basin, but the prediction results of the Elman RNN model were comparatively superior to those of the ANN model in the Chungju dam basin. Consequently, the prediction performance of the Elman RNN is expected to be similar to or better than that of the ANN model. The prediction performance of the Elman RNN was notable during low dam inflow periods. A multiple-hidden-layer Elman RNN also appears more effective for prediction than a single-hidden-layer structure.
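
For readers unfamiliar with the Elman RNN layout described here, the following PyTorch sketch mirrors the stated input-output structure (five hydro-meteorological inputs, a 10-day inflow output); the sequence length, layer sizes, and training loop are assumptions rather than the authors' configuration.

```python
# A minimal Elman RNN sketch (torch.nn.RNN is the Elman form). Shapes, window
# lengths, and hyper-parameters below are assumptions, not the paper's settings.
import torch
import torch.nn as nn

N_FEATURES = 5      # inflow, rainfall, temperature, sunshine duration, wind speed
SEQ_LEN    = 30     # days of past observations fed to the network (assumed)
HORIZON    = 10     # days of dam inflow to predict

class ElmanInflowModel(nn.Module):
    def __init__(self, hidden=32, layers=2):
        super().__init__()
        self.rnn = nn.RNN(N_FEATURES, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, HORIZON)

    def forward(self, x):                 # x: (batch, SEQ_LEN, N_FEATURES)
        out, _ = self.rnn(x)
        return self.head(out[:, -1, :])   # predict the next HORIZON daily inflows

model = ElmanInflowModel()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(16, SEQ_LEN, N_FEATURES)  # stand-in for scaled hydro-met sequences
y = torch.randn(16, HORIZON)              # stand-in for observed inflows
for _ in range(5):                        # toy training loop
    optim.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optim.step()
```

The multiple-hidden-layer structure mentioned in the abstract corresponds here to the `num_layers` argument.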

A channel parameter-based weighting method for performance improvement of underwater acoustic communication system using single vector sensor (단일 벡터센서의 수중음향 통신 시스템 성능 향상을 위한 채널 파라미터 기반 가중 방법)

  • Choi, Kang-Hoon;Choi, Jee Woong
    • The Journal of the Acoustical Society of Korea
    • /
    • v.41 no.6
    • /
    • pp.610-620
    • /
    • 2022
  • An acoustic vector sensor can simultaneously receive vector quantities, such as particle velocity and acceleration, as well as acoustic pressure at one location, and thus it can be used as a single-input multiple-output receiver in underwater acoustic communication systems. On the other hand, the vector signals received by a single vector sensor have different channel characteristics, due to the azimuth angle between the source and receiver and the difference in the propagation angles of the multipath arrivals in each component, producing different communication performances. In this paper, we propose a channel parameter-based weighting method to improve the performance of an acoustic communication system using a single vector sensor. To verify the proposed method, we used communication data collected during the KOREX-17 (Korea Reverberation Experiment). For communication demodulation, a block-based time reversal technique, which is robust to time-varying channels, was utilized. Finally, the communication results verified the effectiveness of the channel parameter-based weighting method for an underwater communication system using a single vector sensor.
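
The general idea of weighting the pressure and particle-velocity channels by their estimated quality before combining can be sketched as below; the SNR-proportional weights and the synthetic BPSK signals are illustrative assumptions, not the channel parameters or time-reversal processing used in the paper.

```python
# A minimal sketch of channel-quality-based weighting across the components of a
# vector sensor before combining. The weight (estimated SNR) and the synthetic
# signals are assumptions, not the authors' channel parameters.
import numpy as np

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=1000)          # BPSK training/data symbols

# Simulate three received components (p, vx, vy) with different channel quality.
gains  = np.array([1.0, 0.6, 0.3])
noises = np.array([0.2, 0.4, 0.8])
rx = np.stack([g * symbols + n * rng.standard_normal(symbols.size)
               for g, n in zip(gains, noises)])       # shape (3, N)

# Estimate per-channel SNR from a known pilot block and weight accordingly.
pilot = symbols[:100]
est_gain  = rx[:, :100] @ pilot / pilot.size          # LS estimate of channel gain
noise_var = np.var(rx[:, :100] - np.outer(est_gain, pilot), axis=1)
weights   = (est_gain ** 2) / noise_var               # SNR-proportional weights
weights  /= weights.sum()

combined = weights @ (rx / est_gain[:, None])         # equalise, then weighted sum
ber = np.mean(np.sign(combined[100:]) != symbols[100:])
print(f"bit error rate after weighted combining: {ber:.4f}")
```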

Rainfall Forecasting Using Satellite Information and Integrated Flood Runoff and Inundation Analysis (I): Theory and Development of Model (위성정보에 의한 강우예측과 홍수유출 및 범람 연계 해석 (I): 이론 및 모형의 개발)

  • Choi, Hyuk Joon;Han, Kun Yeun;Kim, Gwangseob
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.26 no.6B
    • /
    • pp.597-603
    • /
    • 2006
  • The purpose of this study is to improve short-term rainfall forecast skill using a neural network model that can capture the non-linear relationship between satellite data and ground observations, and thereby to minimize flood damage. To overcome the geographical limitations of the Korean peninsula and obtain a long forecast lead time of 3 to 6 hours, the developed rainfall forecast model takes satellite imagery and wide-range AWS data as input. The architecture of the neural network model is a multi-layer network consisting of one input layer, one hidden layer, and one output layer. The network is trained using a momentum back-propagation algorithm. Flooding was then estimated using the rainfall forecasts. We developed a dynamic flood inundation model that is coupled with a one-dimensional flood routing model, so the model can forecast the flooding of a protected lowland caused by river levee failure. In the case of multiple levee breaks along the main stream and tributaries, the developed flood inundation model can simultaneously estimate the flood level in the river and the inundation level and area in the protected lowland.
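
A minimal version of the described architecture, one hidden layer trained with momentum back-propagation, might look like the following NumPy sketch; the data, layer sizes, and hyper-parameters are placeholders, not the study's satellite/AWS inputs.

```python
# A one-hidden-layer network trained with momentum back-propagation, mirroring
# the architecture described in the abstract. Data and hyper-parameters are
# placeholders, not the authors' settings.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 8))          # stand-in for satellite/AWS predictors
y = rng.standard_normal((200, 1))          # stand-in for observed rainfall

n_in, n_hid, n_out = X.shape[1], 16, 1
W1 = rng.standard_normal((n_in, n_hid)) * 0.1
W2 = rng.standard_normal((n_hid, n_out)) * 0.1
V1 = np.zeros_like(W1)                     # momentum (velocity) terms
V2 = np.zeros_like(W2)
lr, momentum = 0.01, 0.9

for epoch in range(200):
    h = np.tanh(X @ W1)                    # hidden layer
    out = h @ W2                           # linear output layer
    err = out - y
    # Back-propagate the squared-error gradient.
    gW2 = h.T @ err / len(X)
    gW1 = X.T @ ((err @ W2.T) * (1 - h ** 2)) / len(X)
    # Momentum update: new step = momentum * previous step - lr * gradient.
    V2 = momentum * V2 - lr * gW2
    V1 = momentum * V1 - lr * gW1
    W2 += V2
    W1 += V1

print("final MSE:", float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2)))
```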

Simultaneous Optimization of a KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.139-157
    • /
    • 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained on a randomly chosen feature subspace from the original feature set, and predictions from the ensemble members are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. The k parameter of the KNN base classifiers and the feature subsets selected for the base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and feature subsets of base classifiers in the ensemble. This study proposed a new ensemble method that improves upon the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve the prediction accuracy of the ensemble model. The proposed model was applied to a bankruptcy prediction problem by using a real dataset from Korean companies. The research data included 1800 externally non-audited firms that filed for bankruptcy (900 cases) or did not (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as an output variable. Of these, 24 financial ratios were selected by using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other for avoiding overfitting. The prediction accuracy against this dataset was used to determine the fitness value in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model.
A 10-fold cross-validation was implemented to compare the performances of the proposed model and other models. To evaluate the effectiveness of the proposed model, the classification accuracy of the proposed model was compared with that of other models. The Q-statistic values and average classification accuracies of base classifiers were investigated. The experimental results showed that the proposed model outperformed other models, such as the single model and random subspace ensemble model.
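
The random subspace KNN ensemble that the study starts from can be approximated in scikit-learn as below; this sketch uses synthetic data in place of the financial ratios and does not include the paper's genetic-algorithm optimization of each base classifier's k and feature subset.

```python
# A minimal random-subspace KNN ensemble baseline (not the proposed GA-optimized
# model). Synthetic data stands in for the 24 selected financial ratios.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Stand-in for 1800 firms with 24 financial ratios and bankrupt/non-bankrupt labels.
X, y = make_classification(n_samples=1800, n_features=24, n_informative=10,
                           random_state=0)

ensemble = BaggingClassifier(
    estimator=KNeighborsClassifier(n_neighbors=5),  # `estimator=` in scikit-learn >= 1.2
    n_estimators=30,
    max_features=0.5,      # random feature subspace per base classifier
    bootstrap=False,       # subspace method: vary features, not samples
    random_state=0,
)
scores = cross_val_score(ensemble, X, y, cv=10)   # 10-fold CV as in the paper
print("mean accuracy:", round(scores.mean(), 3))
```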

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.175-197
    • /
    • 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis is being actively conducted, and it is showing remarkable results in various fields such as classification, summarization, and generation. Among various text analysis fields, text classification is the most widely used technology in academia and industry. Text classification includes binary classification with one label among two classes, multi-class classification with one label among several classes, and multi-label classification with multiple labels among several classes. In particular, multi-label classification requires a different training method from binary and multi-class classification because of the characteristic of having multiple labels. In addition, since the number of labels to be predicted increases as the number of labels and classes increases, performance improvement is difficult due to an increase in prediction difficulty. To overcome these limitations, research on label embedding is being actively conducted, in which (i) the initially given high-dimensional label space is compressed into a low-dimensional latent label space, (ii) a model is trained to predict the compressed label, and (iii) the predicted label is restored to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, since these techniques consider only the linear relationship between labels or compress the labels by random transformation, they have difficulty capturing the non-linear relationships between labels, and thus cannot create a latent label space that sufficiently contains the information of the original labels. Recently, there have been increasing attempts to improve performance by applying deep learning technology to label embedding. Label embedding using an autoencoder, a deep learning model that is effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding has a limitation in that a large amount of information is lost when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space. This is related to the gradient vanishing problem that occurs during backpropagation. To solve this problem, the skip connection was devised: by adding the input of a layer to its output, it prevents gradient vanishing during backpropagation and enables efficient learning even in deep networks. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using skip connections in autoencoders or in the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder to form a low-dimensional latent label space that reflects the information of the high-dimensional label space well. In addition, the proposed methodology was applied to actual paper keywords to derive the high-dimensional keyword label space and the low-dimensional latent label space.
Using this, we conducted an experiment to predict the compressed keyword vector existing in the latent label space from the paper abstract and to evaluate the multi-label classification by restoring the predicted keyword vector back to the original label space. As a result, in terms of accuracy, precision, recall, and F1 score, multi-label classification based on the proposed methodology far outperformed traditional multi-label classification methods. This indicates that the low-dimensional latent label space derived through the proposed methodology reflects the information of the high-dimensional label space well, which ultimately improved the performance of the multi-label classification itself. In addition, the utility of the proposed methodology was examined by comparing its performance across domain characteristics and across the number of dimensions of the latent label space.
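
A minimal sketch of a label-embedding autoencoder with skip connections in the encoder and decoder is shown below; the dimensions, residual placement, and loss are assumptions meant to illustrate the idea, not the paper's exact architecture.

```python
# A minimal label-embedding autoencoder with skip connections in both encoder and
# decoder. Dimensions and residual placement are assumptions, not the paper's design.
import torch
import torch.nn as nn

N_LABELS, LATENT = 1000, 64      # high-dimensional keyword labels -> latent labels

class SkipBlock(nn.Module):
    """Fully connected layer whose input is added to its output (skip connection)."""
    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Linear(dim, dim)
    def forward(self, x):
        return torch.relu(self.fc(x)) + x

class LabelAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(N_LABELS, 256), nn.ReLU(),
            SkipBlock(256),
            nn.Linear(256, LATENT),
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT, 256), nn.ReLU(),
            SkipBlock(256),
            nn.Linear(256, N_LABELS),   # logits over the original label space
        )
    def forward(self, labels):
        z = self.encoder(labels)        # compressed latent label vector
        return self.decoder(z), z

model = LabelAutoencoder()
labels = (torch.rand(32, N_LABELS) < 0.01).float()   # sparse multi-hot keyword labels
logits, latent = model(labels)
loss = nn.BCEWithLogitsLoss()(logits, labels)        # reconstruction objective
loss.backward()
print(latent.shape, loss.item())
```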

Study on water quality prediction in water treatment plants using AI techniques (AI 기법을 활용한 정수장 수질예측에 관한 연구)

  • Lee, Seungmin;Kang, Yujin;Song, Jinwoo;Kim, Juhwan;Kim, Hung Soo;Kim, Soojun
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.3
    • /
    • pp.151-164
    • /
    • 2024
  • In water treatment plants supplying potable water, the management of chlorine concentration in water treatment processes involving pre-chlorination or intermediate chlorination requires process control. To address this, research has been conducted on water quality prediction techniques utilizing AI technology. This study developed an AI-based predictive model for automating the process control of chlorine disinfection, targeting the prediction of residual chlorine concentration downstream of sedimentation basins in water treatment processes. The AI-based model, which learns from past water quality observation data to predict future water quality, offers a simpler and more efficient approach compared to complex physicochemical and biological water quality models. The model was tested by predicting the residual chlorine concentration downstream of the sedimentation basins at Plant, using multiple regression models and AI-based models like Random Forest and LSTM, and the results were compared. For optimal prediction of residual chlorine concentration, the input-output structure of the AI model included the residual chlorine concentration upstream of the sedimentation basin, turbidity, pH, water temperature, electrical conductivity, inflow of raw water, alkalinity, NH3, etc. as independent variables, and the desired residual chlorine concentration of the effluent from the sedimentation basin as the dependent variable. The independent variables were selected from observable data at the water treatment plant, which are influential on the residual chlorine concentration downstream of the sedimentation basin. The analysis showed that, for Plant, the model based on Random Forest had the lowest error compared to multiple regression models, neural network models, model trees, and other Random Forest models. The optimal predicted residual chlorine concentration downstream of the sedimentation basin presented in this study is expected to enable real-time control of chlorine dosing in previous treatment stages, thereby enhancing water treatment efficiency and reducing chemical costs.
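
The Random Forest variant of the prediction model could be sketched as follows; the feature columns follow the independent variables listed in the abstract, but the data frame is a synthetic placeholder rather than the plant's observation records.

```python
# A minimal Random Forest sketch for downstream residual chlorine prediction.
# Column names follow the abstract's variables; the data itself is synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "cl_upstream":  rng.uniform(0.5, 1.5, n),   # residual chlorine above the basin
    "turbidity":    rng.uniform(0.1, 5.0, n),
    "pH":           rng.uniform(6.5, 8.5, n),
    "water_temp":   rng.uniform(2.0, 28.0, n),
    "conductivity": rng.uniform(100, 400, n),
    "raw_inflow":   rng.uniform(1000, 5000, n),
    "alkalinity":   rng.uniform(20, 80, n),
    "nh3":          rng.uniform(0.0, 0.5, n),
})
# Synthetic target: residual chlorine downstream of the basin (dependent variable).
y = 0.8 * df["cl_upstream"] - 0.02 * df["turbidity"] + rng.normal(0, 0.05, n)

X_tr, X_te, y_tr, y_te = train_test_split(df, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE:", round(mean_absolute_error(y_te, model.predict(X_te)), 4))
```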

A Study on the Serialized Event Sharing System for Multiple Telecomputing User Environments (원격.다원 사용자 환경에서의 순차적 이벤트 공유기에 관한 연구)

  • 유영진;오용선
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2003.05a
    • /
    • pp.344-350
    • /
    • 2003
  • In this paper, we propose a novel sharing method that orders the events occurring among users collaborating in a common telecomputing environment. We realize the sharing method with multimedia data to improve the collaboration effect over a teleprocessing network. This sharing method improves the efficiency of communication-intensive projects such as remote education, teleconferencing, and co-authoring of multimedia contents by offering conveniences of presentation, group authoring, common management, and transient event production to the users. In conventional shared whiteboard systems, all multimedia content segments must be authored with a dedicated program, and existing contents or programs cannot be reused. Moreover, ordering errors occur in the teleprocessing operation because there is no line-up technology for the input ordering of commands. Therefore, we develop a method of retrieving input and output events from the Windows system, together with a message hooking technology that transmits them between programs in the operating system. In addition, we realize a technology that distributes the processing results to all sharing users in the distributed computing environment without error. Our sharing technology should contribute to improving face-to-face collaboration efficiency for multimedia contents authoring, shared blackboard systems in remote education, and presentation display in video conferencing.
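
The core serialization idea, stamping every shared event with a global sequence number so all participants replay it in the same order, can be sketched as below; the event fields and in-memory queue are illustrative, and the paper's Windows message-hooking layer is not modelled.

```python
# A minimal sketch of event serialisation: a central sequencer stamps every shared
# user event with a global order number so all participants replay events
# identically, even if the network delivers them out of order. Fields are illustrative.
import itertools
from dataclasses import dataclass, field
from queue import PriorityQueue

@dataclass(order=True)
class SharedEvent:
    seq: int                              # global order assigned by the sequencer
    user: str = field(compare=False)
    payload: dict = field(compare=False)

class EventSequencer:
    def __init__(self):
        self._counter = itertools.count()
    def stamp(self, user, payload):
        """Assign the next global sequence number to an incoming event."""
        return SharedEvent(next(self._counter), user, payload)

sequencer = EventSequencer()
events = [
    sequencer.stamp("userA", {"action": "draw", "shape": "line"}),
    sequencer.stamp("userB", {"action": "type", "text": "hello"}),
    sequencer.stamp("userA", {"action": "move", "dx": 4}),
]

inbox = PriorityQueue()
for ev in reversed(events):    # simulate out-of-order delivery at one client
    inbox.put(ev)
while not inbox.empty():       # the client still applies events in global order
    ev = inbox.get()
    print(ev.seq, ev.user, ev.payload)
```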


The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.139-155
    • /
    • 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Such venture companies have shown a tendency to give high returns to investors, generally by making the best use of information technology. For this reason, many venture companies are keen on attracting avid investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard and Poor's, Moody's, and Fitch is a crucial source on such pivotal concerns as a company's stability, growth, and risk status. But these types of information are generated only for companies issuing corporate bonds, not venture companies. Therefore, this study proposes a method for evaluating venture businesses by presenting our recent empirical results using financial data of Korean venture companies listed on KOSDAQ in the Korea Exchange. In addition, this paper uses a multi-class SVM to predict the DEA-based efficiency ratings for venture businesses derived from our proposed method. Our approach sheds light on ways to locate efficient companies generating a high level of profits. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. Therefore, this paper is constructed on the basis of the following two ideas to classify which companies are more efficient venture companies: i) making a DEA-based multi-class rating for sample companies and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies. First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision-making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption on the shape or parameters of the underlying production function. DEA has already been widely applied for evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies. It has also been applied to corporate credit ratings. In this study we utilized DEA for sorting venture companies by efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is based on statistical learning theory. Thus far, the method has shown good performance, especially in generalization capacity in classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum-margin hyperplane, which gives the maximum separation between classes. According to this method, support vectors are the points closest to the maximum-margin hyperplane. If linear classification is impossible, a kernel function can be used.
In the case of nonlinear class boundaries, the inputs can be transformed into a high-dimensional feature space; that is, the original input space is mapped into a high-dimensional dot-product space. Many studies have applied SVM to bankruptcy prediction, financial time series forecasting, and credit rating estimation. In this study, we employed SVM to develop a data mining-based efficiency prediction model. We used the Gaussian radial basis function as the kernel function of the SVM. For the multi-class SVM, we adopted the one-against-one binary classification approach and two all-together methods, proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information on 154 companies listed on the KOSDAQ market in the Korea Exchange. We obtained the companies' financial information for 2005 from KIS (Korea Information Service, Inc.). Using these data, we made a multi-class rating with DEA efficiency and built a data mining-based multi-class prediction model. Among the three manners of multi-classification, the hit ratio of the Weston and Watkins method was the best on the test data set. In multi-class classification problems such as efficiency ratings of venture businesses, when it is difficult to identify the exact class in the actual market, it is very useful for investors to know the class within a one-class error. So we presented accuracy results within 1-class errors, and the Weston and Watkins method showed 85.7% accuracy in our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than the binary classification problem, notwithstanding its efficiency level. We believe this model can help investors in decision making as it provides a reliable tool to evaluate venture companies in the financial domain. For future research, we perceive the need to enhance such areas as the variable selection process, the parameter selection of the kernel function, generalization, and the sample size for multi-class classification.
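
An RBF-kernel multi-class SVM of the kind described can be set up as below with scikit-learn, whose SVC implements the one-against-one scheme internally; the synthetic features and four-level ratings stand in for the KOSDAQ financial data and DEA classes, and the Weston-Watkins and Crammer-Singer all-together formulations are not shown.

```python
# A minimal RBF-kernel multi-class SVM sketch for DEA-based efficiency ratings.
# scikit-learn's SVC uses one-against-one internally; the data below is a
# synthetic stand-in for the 154 KOSDAQ companies and their DEA rating classes.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((154, 6))                 # financial ratios (placeholder)
ratings = np.digitize(X[:, 0] + 0.3 * rng.standard_normal(154),
                      [-0.8, 0.0, 0.8])           # four DEA efficiency classes

X_tr, X_te, y_tr, y_te = train_test_split(X, ratings, test_size=0.3, random_state=0)
model = make_pipeline(
    StandardScaler(),
    SVC(kernel="rbf", C=10.0, gamma="scale", decision_function_shape="ovo"),
)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("exact hit ratio:", round(float((pred == y_te).mean()), 3))
print("within-1-class accuracy:", round(float((np.abs(pred - y_te) <= 1).mean()), 3))
```

The within-1-class accuracy mirrors the paper's idea of reporting results that allow a one-class error when the exact rating is hard to pin down.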