• Title/Summary/Keyword: Multiple input


The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia Pacific Journal of Information Systems
    • /
    • v.19 no.2
    • /
    • pp.139-155
    • /
    • 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Venture companies have shown a tendency to give high returns to investors, generally by making the best use of information technology, and for this reason many of them are keen on attracting investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard & Poor's, Moody's, and Fitch is a crucial source for such pivotal concerns as a company's stability, growth, and risk status. However, this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses, presenting recent empirical results based on financial data of Korean venture companies listed on KOSDAQ in the Korea Exchange. In addition, this paper used multi-class SVM to predict the DEA-based efficiency ratings for venture businesses derived from our proposed method. Our approach sheds light on ways to locate efficient companies generating high levels of profit. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. Therefore, this paper is built on the following two ideas for classifying which companies are the more efficient venture companies: i) making a DEA-based multi-class rating for the sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies. First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision-making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption on the shape or parameters of the underlying production function. DEA has already been widely applied for evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and DEA has also been applied to corporate credit ratings. In this study we utilized DEA for sorting venture companies by efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is grounded in statistical learning theory. Thus far, the method has shown good performance, especially in its generalization capacity in classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum-margin hyperplane, i.e., the hyperplane giving the maximum separation between classes; the support vectors are the data points closest to this hyperplane. If the data cannot be separated linearly, a kernel function can be used.
In the case of nonlinear class boundaries, we can transform the inputs into a high-dimensional feature space: the original input space is mapped into a high-dimensional dot-product space. Many studies have applied SVM to the prediction of bankruptcy, the forecasting of financial time series, and the estimation of credit ratings. In this study we employed SVM to develop a data mining-based efficiency prediction model, using the Gaussian radial basis function as the kernel. For multi-class SVM, we adopted the one-against-one binary classification approach and two all-together methods, proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information on 154 companies listed on the KOSDAQ market of the Korea Exchange, obtaining their 2005 financial information from KIS (Korea Information Service, Inc.). Using these data, we constructed a multi-class rating with DEA efficiency scores and built a data mining-based multi-class prediction model. Among the three multi-classification methods, the Weston and Watkins method achieved the best hit ratio on the test data set. In multi-class classification problems such as efficiency ratings of venture businesses, where it is difficult to determine the exact class in the actual market, it is very useful for investors to know the predicted class even when it is off by one class. We therefore also report accuracy within one-class errors, for which the Weston and Watkins method reached 85.7% on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification, whatever the efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool to evaluate venture companies in the financial domain. For future research, we perceive the need to enhance the variable selection process, the parameter selection of the kernel function, the generalization ability, and the sample size for multi-class problems.
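To make the classification step concrete, here is a minimal sketch of a multi-class SVM with a Gaussian RBF kernel; scikit-learn's SVC handles the multi-class case with a one-against-one scheme internally. The financial features and DEA ratings below are simulated placeholders, since the paper's KOSDAQ dataset is not reproduced here.

```python
# Minimal multi-class SVM sketch with an RBF (Gaussian radial basis) kernel.
# The features and DEA-based ratings are random placeholders, not the study's data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(154, 8))      # hypothetical financial ratios for 154 firms
y = rng.integers(0, 4, size=154)   # hypothetical DEA efficiency ratings (4 classes)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_train)
# SVC trains one binary SVM per class pair (one-against-one) for multi-class data.
clf = SVC(kernel="rbf", C=1.0, gamma="scale", decision_function_shape="ovo")
clf.fit(scaler.transform(X_train), y_train)
print("hit ratio:", clf.score(scaler.transform(X_test), y_test))
```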

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.2
    • /
    • pp.127-142
    • /
    • 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures such as convolutional neural networks, deep belief networks, and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised learning models have gained more popularity than unsupervised learning models such as deep belief networks, because they have shown successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation, an abbreviation for "backward propagation of errors," is a common method of training artificial neural networks used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture which is particularly well adapted to classifying images. Using this architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rely on three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that we use the same weights and bias for each of the local receptive fields, so that all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers. What pooling layers do is simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network where connections between units form a directed cycle. This creates an internal state of the network, which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, i.e., vanishing and exploding gradients: the gradient can get smaller and smaller as it is propagated back through the layers.
This makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients aren't just propagated backward through layers, they're propagated backward through time. If the network runs for a long time, that can make the gradient extremely unstable and hard to learn from. Incorporating an idea known as long short-term memory units (LSTMs) into RNNs has made it much easier to get good results when training them, and many recent papers make use of LSTMs or related ideas.
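As an illustration of the three CNN ideas described above (local receptive fields, shared weights, and pooling), here is a minimal convolutional network sketch in PyTorch; the layer sizes and input shape are illustrative assumptions, not taken from any specific model in the paper.

```python
# Minimal CNN sketch: convolution = local receptive fields with shared weights,
# followed by pooling to simplify the convolutional output. Sizes are illustrative.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            # 5x5 local receptive field; the same kernel weights are shared
            # across every spatial position of the input image.
            nn.Conv2d(1, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(2),  # pooling condenses the feature map
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN()
logits = model(torch.randn(4, 1, 28, 28))  # e.g., a batch of 28x28 grayscale images
print(logits.shape)  # torch.Size([4, 10])
```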

Development of Music Recommendation System based on Customer Sentiment Analysis (소비자 감성 분석 기반의 음악 추천 알고리즘 개발)

  • Lee, Seung Jun;Seo, Bong-Goon;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.197-217
    • /
    • 2018
  • Music is one of the most creative acts, expressing human sentiment through sound. Because music easily evokes empathy, the music people listen to can either heighten or dampen their mood; sentiment is therefore the primary factor when searching for or recommending music. With regard to music recommendation, there is still a lack of recommendation systems based on customer sentiment. The algorithms used in previous music recommendation systems are mostly user-based, relying on, for example, a user's play history and playlists. Based on play histories or playlists across multiple users, distances between songs were calculated with reference to basic information such as genre, singer, and beat, and similar music was then filtered to users as recommendations. However, such methodologies have limitations such as the filter bubble: if a user listens only to rock music, it is hard for hip-hop or R&B songs with a similar sentiment to be recommended. In this study, we focused on the sentiment of the music itself and developed a methodology for defining a new index for music recommendation. Concretely, we propose the "SWEMS" index, and using this index we also extracted a "Sentiment Pattern" for each song used in this research. We expect the "SWEMS" index and "Sentiment Pattern" to be usable for a variety of purposes, not only in music recommendation systems but also in algorithms for building predictive models. In this study, we had to develop the music recommendation system based on the emotional adjectives that people generally feel when they listen to music. For that reason, it was necessary to collect as many emotional adjectives as possible. Emotional adjectives were collected from related previous studies, and more were gathered via social metrics and qualitative interviews; in total we collected 134 adjectives, which were narrowed down to a final 60 through several selection steps. Based on the final adjectives, a music survey was conducted in which each item evaluated the sentiment of a song. The surveys were completed by expert panels who like to listen to music; all survey questions were based on the emotional adjectives, and no other information was collected. The songs evaluated in this step were divided into popular and unpopular songs, and the variables most relevant to popularity were derived. The derived variables were reclassified through factor analysis, and weights were assigned to the adjectives belonging to each factor. We define the extracted factors as the "SWEMS" index, which describes the sentiment of a song as a numeric score. To implement the algorithm, we applied the Case-Based Reasoning method, chosen over other methodologies because its problem-solving process resembles how humans reason. Using the "SWEMS" index of each song, the algorithm recommends the songs whose factor-based emotion values are closest in Euclidean distance to those of a given song. Using the "SWEMS" index, we can also draw a "Sentiment Pattern" for each song, and we found that songs evoking similar emotions show similar "Sentiment Patterns".
Through "Sentiment Pattern", we could also suggest a new group of music, which is different from the previous format of genre. This research would help people to quantify qualitative data. Also the algorithms can be used to quantify the content itself, which would help users to search the similar content more quickly.

Detection of Wildfire Burned Areas in California Using Deep Learning and Landsat 8 Images (딥러닝과 Landsat 8 영상을 이용한 캘리포니아 산불 피해지 탐지)

  • Youngmin Seo;Youjeong Youn;Seoyeon Kim;Jonggu Kang;Yemin Jeong;Soyeon Choi;Yungyo Im;Yangwon Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_1
    • /
    • pp.1413-1425
    • /
    • 2023
  • The increasing frequency of wildfires due to climate change is causing extreme loss of life and property. Wildfires cause loss of vegetation and, depending on their intensity and occurrence, drive ecosystem changes; these changes in turn affect wildfire occurrence, causing secondary damage. Thus, accurate estimation of the areas affected by wildfires is fundamental. Satellite remote sensing is used for forest fire detection because it can rapidly acquire topographic and meteorological information about the affected area after forest fires. In addition, deep learning algorithms such as convolutional neural networks (CNN) and transformer models show high performance for more accurate monitoring of fire-burnt regions. To date, the application of deep learning models has been limited, and there is a scarcity of reports providing quantitative performance evaluations for practical field utilization. Hence, this study emphasizes a comparative analysis, exploring performance enhancements achieved through both model selection and data design. This study examined deep learning models for detecting wildfire-damaged areas using Landsat 8 satellite images in California, and conducted a comprehensive comparison and analysis of the detection performance of multiple models, such as U-Net and High-Resolution Network-Object Contextual Representation (HRNet-OCR). Wildfire-related spectral indices such as the normalized difference vegetation index (NDVI) and normalized burn ratio (NBR) were used as input channels for the deep learning models to reflect the degree of vegetation cover and surface moisture content. As a result, the mean intersection over union (mIoU) was 0.831 for U-Net and 0.848 for HRNet-OCR, showing high segmentation performance. The inclusion of spectral indices alongside the base wavelength bands increased the metric values for all combinations, affirming that augmenting the input data with spectral indices contributes to more refined pixel-level classification. This study can be applied to other satellite images to build a recovery strategy for fire-burnt areas.
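For reference, the spectral indices used as input channels follow their standard definitions; for Landsat 8 OLI, red is band 4, NIR is band 5, and SWIR2 is band 7. A minimal sketch of how such index channels could be computed and stacked, assuming reflectance arrays are already loaded:

```python
# Standard NDVI and NBR computation for Landsat 8 OLI reflectance arrays.
# Band mapping: red = B4, NIR = B5, SWIR2 = B7. File loading is omitted;
# the placeholder tiles below stand in for real reflectance data.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def nbr(nir: np.ndarray, swir2: np.ndarray) -> np.ndarray:
    """Normalized burn ratio: (NIR - SWIR2) / (NIR + SWIR2)."""
    return (nir - swir2) / np.clip(nir + swir2, 1e-6, None)

b4, b5, b7 = (np.random.rand(256, 256) for _ in range(3))  # placeholder tiles
# Stack the indices alongside the base bands as extra input channels.
x = np.stack([b4, b5, b7, ndvi(b5, b4), nbr(b5, b7)], axis=0)
print(x.shape)  # (5, 256, 256): channels-first input for a segmentation model
```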

Simultaneous Optimization of KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.139-157
    • /
    • 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of a classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and the random subspace method. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained on a randomly chosen feature subspace from the original feature set, and predictions from the ensemble members are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good base classifier for the random subspace method, and the KNN random subspace ensemble has been shown to be very effective at improving on an individual KNN model. The k parameter of the KNN base classifiers and the feature subsets selected for them play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and feature subsets of base classifiers in the ensemble. This study proposed a new ensemble method that improves upon the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers, using a genetic algorithm to optimize the ensemble and improve its prediction accuracy. The proposed model was applied to a bankruptcy prediction problem using a real dataset from Korean companies. The research data included 1800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent-sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as the output variable. Of these, 24 financial ratios were selected using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other for avoiding overfitting; the prediction accuracy on the latter was used as the fitness value in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model.
A 10-fold cross-validation was implemented to compare the performance of the proposed model with that of other models, using classification accuracy as the criterion; the Q-statistic values and average classification accuracies of the base classifiers were also investigated. The experimental results showed that the proposed model outperformed the other models, such as the single model and the random subspace ensemble model.
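As a minimal sketch of the KNN random subspace ensemble described above, scikit-learn's BaggingClassifier can sample feature subsets instead of instances; the genetic-algorithm optimization of k and the feature subsets, which is the paper's contribution, is not reproduced here. (The `estimator` parameter name assumes scikit-learn 1.2 or later; earlier versions used `base_estimator`.)

```python
# KNN random subspace ensemble sketch: each base KNN is trained on a random
# feature subset and predictions are aggregated by voting. The dataset is a
# synthetic stand-in for the study's 1800-firm, 24-ratio bankruptcy data.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=1800, n_features=24, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = BaggingClassifier(
    estimator=KNeighborsClassifier(n_neighbors=5),
    n_estimators=30,
    max_features=0.5,          # random feature subspace per base classifier
    bootstrap=False,           # keep all training instances (pure random subspace)
    bootstrap_features=False,  # sample features without replacement
    random_state=0,
)
ensemble.fit(X_tr, y_tr)
print("accuracy:", ensemble.score(X_te, y_te))
```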

The Effect of Variations in the Vertical Position of the Bracket on the Crown Inclination (브라켓의 수직적 위치변동에 따른 치관경사도변화에 관한 연구)

  • Chang, Yeon-Joo;Kim, Tae-Woo;Yoo, Kwan-Hee
    • The Korean Journal of Orthodontics
    • /
    • v.32 no.6 s.95
    • /
    • pp.401-411
    • /
    • 2002
  • Precise bracket positioning is essential in modern orthodontics. However, there can be alterations in the vertical position of a bracket due to several reasons. The purpose of this study was to evaluate the effect of variations in the vertical bracket position on the crown inclination in Korean patients with normal occlusion. From a larger group of what was considered to be normal occlusions obtained from the Department of Orthodontics, College of Dentistry, Seoul National University, the final 10 subjects (6 males and 4 females, with an average age of 22.3 years) were selected. The dental models of each subject were scanned three-dimensionally by a laser scanner, and measurements made on the scanned dental casts were input into a computer program, from which the occlusal plane and the bracket plane were determined. The tooth plane was then constructed to measure the crown inclination on the bracket plane of each tooth. From a practical standpoint, information was obtained on the extent to which the torque of a tooth would change as the bracket position was moved vertically (by ±0.5 mm, ±1.0 mm, and ±1.5 mm) from its ideal position. A one-way analysis of variance (ANOVA) was used to compare the groups of different vertical distances from the bracket plane on a specific tooth, followed by Duncan's multiple comparison test. There were statistically significant differences in the crown inclination among the groups of different vertical distances for the upper central incisor, upper lateral incisor, upper canine, upper first and second molars, lower first and second premolars, and lower first and second molars (p<0.05). On the upper anterior teeth, upper molars, lower premolars, and lower molars, the resultant torque values due to the vertical displacement of the bracket differed depending on the direction of the displacement, occlusal or gingival. This study implies that the torque of these teeth should be handled carefully during orthodontic treatment. In circumstances in which the bracket must be positioned more gingivally or occlusally for various reasons, it would be useful to consult the chart of torque alteration of each tooth provided in this study along with its specified bracket prescription.
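As a small illustration of the statistical comparison described above, the following sketch runs a one-way ANOVA on hypothetical torque measurements for three vertical-displacement groups; the study's actual measurements are not reproduced, and Duncan's post-hoc test is not included in scipy.

```python
# One-way ANOVA sketch comparing crown-inclination (torque) measurements for
# brackets displaced by -1.0 mm, 0.0 mm, and +1.0 mm. Values are invented.
from scipy import stats

torque_minus_1mm = [12.1, 11.8, 12.5, 12.0, 11.6]
torque_ideal     = [14.2, 14.5, 13.9, 14.1, 14.4]
torque_plus_1mm  = [16.3, 16.0, 16.8, 16.1, 16.5]

f_stat, p_value = stats.f_oneway(torque_minus_1mm, torque_ideal, torque_plus_1mm)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> group means differ
```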

Predicting Regional Soybean Yield using Crop Growth Simulation Model (작물 생육 모델을 이용한 지역단위 콩 수량 예측)

  • Ban, Ho-Young;Choi, Doug-Hwan;Ahn, Joong-Bae;Lee, Byun-Woo
    • Korean Journal of Remote Sensing
    • /
    • v.33 no.5_2
    • /
    • pp.699-708
    • /
    • 2017
  • The present study aimed to develop an approach for predicting soybean yield using a crop growth simulation model at the regional level, where detailed, site-specific information on cultivation management practices is not easily accessible for model input. The CROPGRO-Soybean model included in the Decision Support System for Agrotechnology Transfer (DSSAT) was employed, and Illinois, a major soybean-producing region of the USA, was selected as the study region. As a first step to predicting the soybean yield of Illinois using the CROPGRO-Soybean model, genetic coefficients representative of each soybean maturity group (MG I~VI) were estimated through sowing date experiments using domestic and foreign cultivars of diverse maturity at the Seoul National University Farm (37.27°N, 126.99°E) over two years. The model using the representative genetic coefficients simulated the developmental stages of the cultivars within each maturity group fairly well. Soybean yields for 10 km × 10 km grids in Illinois were simulated from 2000 to 2011 with weather data under 18 simulation conditions comprising the combinations of three maturity groups, three seeding dates, and two irrigation regimes. Planting dates and maturity groups were assigned differently to three sub-regions divided longitudinally. The yearly state yields estimated by averaging all the grid yields simulated under non-irrigated and fully-irrigated conditions showed a big difference from the statistical yields and did not explain the annual trend of yield increase due to improved cultivation technologies. Using the grain yield data of 9 agricultural districts in Illinois, observed and estimated from the simulated grid yields under the 18 simulation conditions, a multiple regression model was constructed to estimate soybean yield at the agricultural district level. In this model a year variable was also added to reflect the yearly yield trend. This model explained the yearly and district yield variation fairly well, with a determination coefficient of R² = 0.61 (n = 108). Yearly state yields, calculated by weighting the model-estimated yearly average agricultural district yields by the cultivation area of each district, corresponded very closely (R² = 0.80) to the yearly statistical state yields. Furthermore, the model predicted the state yield fairly well in 2012, a year whose data were not used for model construction and in which severe yield reduction was recorded due to drought.
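As a rough sketch of the regression step, the following fits a multiple linear regression of observed district yields on simulated yields from the 18 scenario combinations plus a year term to capture the technology trend; all data here are randomly generated placeholders, not the study's DSSAT outputs or district statistics.

```python
# District-level multiple regression sketch: observed yields regressed on
# simulated yields from 18 scenario combinations plus a year variable.
# All data are random placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 108                                      # 9 districts x 12 years
sim_yields = rng.normal(3.0, 0.5, (n, 18))   # 18 simulation conditions (t/ha)
year = rng.integers(2000, 2012, (n, 1))      # year term for the yield trend
X = np.hstack([sim_yields, year])
# Synthetic "observed" yields with a built-in yearly trend for illustration.
y_obs = 0.05 * (year[:, 0] - 2000) + sim_yields.mean(axis=1) + rng.normal(0, 0.2, n)

model = LinearRegression().fit(X, y_obs)
print("R^2 =", round(model.score(X, y_obs), 2))
```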

Simulation Approach for the Tracing the Marine Pollution Using Multi-Remote Sensing Data (다중 원격탐사 자료를 활용한 해양 오염 추적 모의 실험 방안에 대한 연구)

  • Kim, Keunyong;Kim, Euihyun;Choi, Jun Myoung;Shin, Jisun;Kim, Wonkook;Lee, Kwang-Jae;Son, Young Baek;Ryu, Joo-Hyung
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.2_2
    • /
    • pp.249-261
    • /
    • 2020
  • Coastal monitoring using multiple platforms and sensors is a very important tool for accurately understanding changes in the offshore marine environment and disasters with high temporal and spatial resolution. However, integrated observation studies using multiple platforms and sensors are insufficient, and the efficiency and limitations of such convergence have not been evaluated. In this study, we aimed to suggest an integrated observation method with multiple remote sensing platforms and sensors, and to diagnose its utility and limitations. Integrated in situ surveys were conducted using Rhodamine WT (RWT) fluorescent dye to simulate various marine disasters. In September 2019, the distribution and movement of RWT dye patches were detected using satellite (Kompsat-2/3/3A, Landsat-8 OLI, Sentinel-3 OLCI, and GOCI), unmanned aircraft (Mavic 2 Pro and Inspire 2), and manned aircraft platforms after injecting the fluorescent dye into the waters of the South Sea-Yeosu Sea. The initial patch size of the RWT dye was 2,600 ㎡, spreading to 62,000 ㎡ about 138 minutes later. The RWT patches gradually moved southwestward from the point where they were first released, similar to the pattern of the tidal current flowing southwest as the tide gradually ebbed. Unmanned aerial vehicle (UAV) images showed the highest spatial and temporal resolution, but the coverage area was the narrowest. In the case of satellite images, the coverage area was wide, but there were some limitations compared to other platforms in terms of operability, due to long revisit cycles. For Sentinel-3 OLCI and GOCI, the spectral resolution and signal-to-noise ratio (SNR) were the highest, but detection of small fluorescent dye patches was limited by their spatial resolution. In the case of the hyperspectral sensor mounted on the manned aircraft, the spectral resolution was the highest, but it was also somewhat limited in terms of operability. From this simulation approach, we were able to confirm that multi-platform integrated observation can significantly improve temporal, spatial, and spectral resolution. In the future, if these results are linked to coastal numerical models, it will be possible to predict the transport and diffusion of contaminants, and they are expected to contribute to improving model accuracy when used as input and verification data for the numerical models.

Analysis of Authority Control System in Collecting Repository -from the case of Archival Management System in Korea Democracy Foundation- (수집형 기록관의 전거제어시스템 분석 - 민주화운동기념사업회 사료관리시스템의 사례를 중심으로 -)

  • Lee, Hyun-Jeong
    • The Korean Journal of Archival Studies
    • /
    • no.13
    • /
    • pp.91-134
    • /
    • 2006
  • In general, personally collected archives (manuscripts) are in poor physical condition, and contextual information about the archives and the history of their production is usually only partially collected. A collecting repository therefore needs to effectively control the names of the producers of archives collected through various channels and to accumulate provenance information, the key element for understanding the production background. Authority control and provenance information management must be organized from the beginning of acquisition, which means collecting the necessary information with the acquisition control process in mind as well. This paper verifies the necessity of authority control and the accumulation of provenance information in a collecting repository, and suggests points to be considered when building an archival authority system. To this end, it examines the necessity of authority control in archival management, reviews the standards for archival authority control, and traces the work processes for authority control and provenance accumulation. In the archival authority system, provenance information management and authority control are organized throughout all steps of archival management, starting from the lead file, through the producer names recorded at archival registration, to the archival description at acquisition. A great deal of information is registered and described at the appropriate points in time, and finally all the information, including the authority records that control the headings, is organized for use in the intellectual management of archives and as finding aids. The features of the archival authority system are as follows. First, the authority file types necessary for the authority control of democracy movement archives consist of the names of groups, persons, and events, and terminology (subject names). Second, the basic record structures and description elements of the authority records in the Korea Democracy Foundation Archives apply section 1 of ISAAR(CPF), adding some necessary elements, while details of the description rules, such as word spacing and the use of periods, apply section 4 of the KCR, adapted to the features of the archival management system; the input format of the authority records is based on EAC (Encoded Archival Context). Third, the system lets users reach the sources they want more easily by connecting authority terms systematically: it expands term relations variously and concretely, linking related terms with broader, narrower, earlier, and later terms, rather than expressing only related terms ("see also") as in traditional authority systems. In this way, the authority control of the archival management system can effectively collect and manage information on the functions and main activities of various groups, beyond its basic function of controlling headings; it can express the multiple and intermediary relationships between archives and producers, or among producers; and it provides an expanded record information service that satisfies users' various requests through indexing services.
Finally, by applying the international standard ISAAR(CPF) through this instance of authority management, this case can serve as a reference for building archival authority systems in collecting repositories hereafter, by reorganizing the description elements into an appropriate form and establishing the authority file types to be managed for each service.

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.175-197
    • /
    • 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis is being actively conducted, showing remarkable results in various fields such as classification, summarization, and generation. Among the various text analysis fields, text classification is the most widely used technology in academia and industry. Text classification includes binary classification with one label from two classes, multi-class classification with one label from several classes, and multi-label classification with multiple labels from several classes. In particular, multi-label classification requires a different training method from binary and multi-class classification because of the characteristic of having multiple labels. In addition, since the number of labels to be predicted increases as the number of labels and classes increases, performance improvement is difficult due to the increased prediction difficulty. To overcome these limitations, research on label embedding is being actively conducted, which (i) compresses the initially given high-dimensional label space into a low-dimensional latent label space, (ii) trains a model to predict the compressed labels, and (iii) restores the predicted labels to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, since these techniques consider only linear relationships between labels or compress the labels by random transformation, they have difficulty capturing the non-linear relationships between labels, and thus cannot create a latent label space that sufficiently contains the information of the original labels. Recently, there have been increasing attempts to improve performance by applying deep learning technology to label embedding. Label embedding using an autoencoder, a deep learning model effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding suffers a large amount of information loss when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space. This stems from the vanishing gradient problem that occurs during the backpropagation phase of learning. To solve this problem, the skip connection was devised: by adding a layer's input to its output, it prevents gradient vanishing during backpropagation, so that efficient learning is possible even when the network is deep. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using skip connections in autoencoders or in the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder to form a low-dimensional latent label space that reflects the information of the high-dimensional label space well. In addition, the proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space.
Using this, we conducted an experiment to predict the compressed keyword vector in the latent label space from the paper abstract and to evaluate the multi-label classification after restoring the predicted keyword vector to the original label space. As a result, the accuracy, precision, recall, and F1 score used as performance indicators showed far superior performance for multi-label classification based on the proposed methodology compared to traditional multi-label classification methods. This indicates that the low-dimensional latent label space derived through the proposed methodology reflected the information of the high-dimensional label space well, which ultimately improved the performance of the multi-label classification itself. In addition, the utility of the proposed methodology was confirmed by comparing its performance across domain characteristics and across numbers of dimensions of the latent label space.
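As a minimal sketch of the core idea, the following PyTorch autoencoder adds skip (residual) connections in both the encoder and the decoder; the dimensions and exact wiring are illustrative assumptions, not the paper's architecture.

```python
# Label-embedding autoencoder sketch with skip connections: residual blocks
# add each layer's input to its output so gradients flow during backprop.
# Dimensions are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.fc = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.fc(x) + x)  # skip connection

class LabelAutoencoder(nn.Module):
    def __init__(self, n_labels: int = 1000, latent: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_labels, 256), nn.ReLU(),
            ResidualBlock(256),
            nn.Linear(256, latent),    # low-dimensional latent label space
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent, 256), nn.ReLU(),
            ResidualBlock(256),
            nn.Linear(256, n_labels),  # logits over the original label space
        )

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(y))

model = LabelAutoencoder()
y = (torch.rand(8, 1000) < 0.01).float()    # sparse multi-label target vectors
loss = nn.BCEWithLogitsLoss()(model(y), y)  # reconstruction objective
print(loss.item())
```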