• Title/Summary/Keyword: Input task

Search Results: 430

A Study on the Design of Case-based Reasoning Office Knowledge Recommender System for Office Professionals (사례기반추론을 이용한 사무지식 추천시스템)

  • Kim, Myong-Ok; Na, Jung-Ah
    • Journal of Intelligence and Information Systems, v.17 no.3, pp.131-146, 2011
  • It is becoming more essential than ever for office professionals to become competent in information gathering and problem solving in today's global business society. In particular, office professionals do not merely handle routine chores but are also required to make decisions as quickly and efficiently as possible in problematic situations that can end in either profit or loss to their company. Since office professionals rely heavily on their tacit knowledge to solve problems that arise in everyday business situations, it is truly helpful and efficient to refer to similar business cases from the past and to share or reuse such previous business knowledge for better performance. Case-based reasoning (CBR) is a problem-solving method that utilizes previous similar cases to solve problems. Through CBR, the case closest to the current business situation can be searched for and retrieved from the case or knowledge base and referred to for a new solution, which reduces the time and resources needed and increases the probability of success. The main purpose of this study is to design a system called COKRS (Case-based reasoning Office Knowledge Recommender System) and to develop a prototype for it. COKRS manages cases and their metadata, accepts keywords from the user, searches the case base for the past case most similar to the input keywords, and communicates with users to collect information about the quality of the case provided, continuously applying that feedback to update the values in the similarity table. Core concepts such as the system architecture, the definition of a case, the meta database, and the similarity table are introduced, and an algorithm to retrieve similar cases from past work history is also proposed. In this research, a case is defined as a work experience in office administration. However, defining a case in office administration was not easy in practice. We surveyed ten office professionals to get an idea of how to define such a case and found that most types of office work are recorded digitally and/or non-digitally; therefore, a case in COKRS was defined as a record or document. The similarity table was composed of items from a job analysis of office professionals conducted in previous research, and the values between items were initially set based on the researchers' experience and a literature review. The results of this study could also be utilized in other areas of business for knowledge sharing wherever it is necessary and beneficial to share and learn from past experiences. We expect this research to serve as a reference for researchers and developers working on or interested in CBR-based office knowledge recommender systems. A focus group interview (FGI) was conducted with ten administrative assistants carefully selected from various areas of business. They were given a chance to try out COKRS in an actual work setting and to make suggestions for future improvement. The FGI identified the user interface for saving cases and searching them by keyword as the most positive aspect of COKRS, and identified more efficient transformation of tacit knowledge and know-how into recorded documents as the most urgently needed improvement. The focus group also mentioned that it is essential to secure sufficient support, encouragement, and rewards from the company and to promote a positive attitude and atmosphere toward knowledge sharing for everyone's benefit in the company.
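
The keyword-based retrieval step described in this abstract (scoring cases against a similarity table and returning the closest past case) can be illustrated with a minimal sketch. The keywords, table values, and scoring rule below are hypothetical stand-ins, not the actual COKRS similarity table or case base.

```python
# Minimal sketch of similarity-table-based case retrieval (hypothetical data).
from typing import Dict, List, Tuple

# Similarity between pairs of work-item keywords, 0.0..1.0 (illustrative values).
SIMILARITY: Dict[Tuple[str, str], float] = {
    ("scheduling", "travel arrangement"): 0.6,
    ("document filing", "records management"): 0.8,
}

def pair_similarity(a: str, b: str) -> float:
    """Look up similarity symmetrically; identical keywords match fully."""
    if a == b:
        return 1.0
    return max(SIMILARITY.get((a, b), 0.0), SIMILARITY.get((b, a), 0.0))

def case_score(query: List[str], case_keywords: List[str]) -> float:
    """Average best-match similarity of each query keyword against the case."""
    if not query or not case_keywords:
        return 0.0
    return sum(max(pair_similarity(q, c) for c in case_keywords) for q in query) / len(query)

def retrieve(query: List[str], casebase: Dict[str, List[str]]) -> List[Tuple[str, float]]:
    """Return case ids ranked by similarity to the query keywords."""
    ranked = [(case_id, case_score(query, kws)) for case_id, kws in casebase.items()]
    return sorted(ranked, key=lambda item: item[1], reverse=True)

# Usage with a toy case base.
casebase = {"case-001": ["scheduling", "travel arrangement"], "case-002": ["document filing"]}
print(retrieve(["scheduling"], casebase))  # case-001 ranks first
```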

Early Identification of Gifted Young Children and Dynamic assessment (유아 영재의 판별과 역동적 평가)

  • 장영숙
    • Journal of Gifted/Talented Education, v.11 no.3, pp.131-153, 2001
  • The importance of identifying gifted children during early childhood is becoming recognized. Nonetheless, most researchers have preferred to study the primary and secondary levels, where children are already and more clearly demonstrating what talents they have and where more reliable predictions of giftedness can be made. Comparatively little work has been done in this area. When we identify giftedness during early childhood, we have to consider the potential of young children rather than their actual achievement. Giftedness during early childhood is still developing and is less stable than that of older children, which prevents us from making firm and accurate predictions based on children's actual achievement. Dynamic assessment, based on Vygotsky's concept of the zone of proximal development (ZPD), suggests a new approach to identifying gifted young children. In light of dynamic assessment, identifying the potential giftedness of young children requires measuring both unassisted and assisted performance. Dynamic assessment usually consists of a test-intervene-retest format that focuses attention on the improvement in a child's performance when an adult provides mediated assistance on how to master the testing task. The advantages of dynamic assessment are as follows: First, the dynamic assessment approach can provide a useful means for assessing young gifted children who have not demonstrated high ability on traditional identification methods. Second, the dynamic assessment approach can assess the learning process of young children. Third, dynamic assessment can lead to individualized education through the early identification of young gifted children. Fourth, dynamic assessment can be a more accurate predictor of potential by linking diagnosis and instruction. Thus, it can help us provide educational treatment effectively for young gifted children.

True Orthoimage Generation from LiDAR Intensity Using Deep Learning (딥러닝에 의한 라이다 반사강도로부터 엄밀정사영상 생성)

  • Shin, Young Ha; Hyung, Sung Woong; Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.38 no.4, pp.363-373, 2020
  • Over the last decades, numerous studies on orthoimage generation have been carried out. Traditional methods require exterior orientation parameters of aerial images, precise 3D object modeling data, and a DTM (Digital Terrain Model) to detect and recover occluded areas. Furthermore, it is a challenging task to automate this complicated process. In this paper, we propose a new concept of true orthoimage generation using DL (Deep Learning). DL is rapidly being adopted in a wide range of fields. In particular, the GAN (Generative Adversarial Network) is one of the DL models used for various tasks in image processing and computer vision. The generator tries to produce results similar to real images, while the discriminator judges whether images are fake or real, until the results are satisfactory. Such a mutually adversarial mechanism improves the quality of the results. Experiments were performed using the GAN-based Pix2Pix model with IR (infrared) orthoimages and intensity from LiDAR data provided by the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) through the ISPRS (International Society for Photogrammetry and Remote Sensing). Two approaches were implemented: (1) one-step training with intensity data and high-resolution orthoimages, and (2) recursive training with intensity data and color-coded low-resolution intensity images for progressive enhancement of the results. The two methods provided similar quality based on FID (Fréchet Inception Distance) measures; however, if the quality of the input data is close to the target image, better results could be obtained by increasing the number of epochs. This paper is an early experimental study on the feasibility of DL-based true orthoimage generation, and further improvement would be necessary.
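
As an illustration of the Pix2Pix-style conditional GAN training described above, here is a heavily simplified sketch in PyTorch. The toy networks, loss weights, and random tensors are assumptions for demonstration only; the paper's actual U-Net/PatchGAN architectures and data pipeline are not reproduced.

```python
# Hedged sketch of a Pix2Pix-style training step: LiDAR intensity -> orthoimage.
import torch
import torch.nn as nn

# Toy stand-ins for the U-Net generator and PatchGAN discriminator used by Pix2Pix.
generator = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(64, 3, 3, padding=1))            # intensity -> RGB orthoimage
discriminator = nn.Sequential(nn.Conv2d(1 + 3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                              nn.Conv2d(64, 1, 4, stride=2, padding=1))  # patch-wise real/fake scores

adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))

def train_step(intensity, target, lambda_l1=100.0):
    """One adversarial update on a single (intensity, orthoimage) pair."""
    fake = generator(intensity)

    # Discriminator: real pairs labeled 1, generated pairs labeled 0.
    opt_d.zero_grad()
    d_real = discriminator(torch.cat([intensity, target], dim=1))
    d_fake = discriminator(torch.cat([intensity, fake.detach()], dim=1))
    loss_d = adv_loss(d_real, torch.ones_like(d_real)) + adv_loss(d_fake, torch.zeros_like(d_fake))
    loss_d.backward()
    opt_d.step()

    # Generator: fool the discriminator and stay close to the target (L1 term).
    opt_g.zero_grad()
    d_fake = discriminator(torch.cat([intensity, fake], dim=1))
    loss_g = adv_loss(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1_loss(fake, target)
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

# Usage with random tensors standing in for an intensity / orthoimage pair.
print(train_step(torch.randn(1, 1, 64, 64), torch.randn(1, 3, 64, 64)))
```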

Morphological Characteristics Optimizing Pocketability and Text Readability for Mobile Information Devices (모바일 정보기기의 소지용이성과 텍스트 가독성을 최적화하기 위한 형태적 특성)

  • Kim, Yeon-Ji; Lee, Woo-Hun
    • Archives of design research, v.19 no.2 s.64, pp.323-332, 2006
  • Information devices such as cellular phones, smart phones, and PDAs have become small enough that people can put them into their pockets without any difficulty. This drastic miniaturization, however, deteriorates the readability of text-based content. The morphological characteristics of size and proportion are assumed to have close relationships with the pocketability and text readability of mobile information devices. This research aimed to investigate the optimal morphological characteristics that satisfy both usability factors together. For this purpose, we conducted a controlled experiment designed to evaluate pocketability according to the size (4000 mm²/8000 mm²), proportion (1:1/2:1/3:1), and weight (100 g/200 g) of information devices as well as the participants' pose and carrying method. When male participants put the models of the information device into their pockets, the 2:1 proportion was preferred. On the other hand, female participants carrying the models in their hands preferred the 2:1 proportion (size: 4000 mm² × 2 mm) and the 3:1 proportion (size: 8000 mm² × 20 mm). For the device with a size of 4000 mm², it was found that the weight of the device has a significant effect on pocketability. In consequence, the 2:1 proportion is optimal for achieving better pocketability. The second experiment examined how text readability is affected by the size (2000 mm²/4000 mm²/8000 mm²) and proportion (1:1/2:1/3:1) of information devices as well as the interlinear space of the displayed text (135%/200%). From this experiment, it was found that reading speed increased as line length increased. Regarding the subjective assessment of the reading task, the 2:1 proportion was strongly preferred. Based on these results, we suggest the 2:1 proportion as an optimal proportion that satisfies both the pocketability of mobile information devices and the readability of text displayed on the screen. To apply these research outputs to practical design work efficiently, it is important to take into account the fact that space for input devices is also required in addition to the display screen.
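
To make the size and proportion conditions concrete, the rectangular footprint implied by each (area, aspect ratio) pair can be computed with simple arithmetic. This is our own illustration, not a calculation from the paper.

```python
# Worked arithmetic (illustrative): footprint width/height from area and proportion.
import math

def footprint(area_mm2: float, ratio: float) -> tuple:
    """Return (width, height) in mm for a rectangle of given area and width:height ratio."""
    height = math.sqrt(area_mm2 / ratio)
    return ratio * height, height

for area in (4000, 8000):
    for ratio in (1, 2, 3):
        w, h = footprint(area, ratio)
        print(f"{area} mm^2 at {ratio}:1 -> {w:.1f} x {h:.1f} mm")
# e.g. 8000 mm^2 at 2:1 -> about 126.5 x 63.2 mm
```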

The Records and Archives Administrative Reform in China in 1930s (1930년대 중국 문서당안 행정개혁론의 이해)

  • Lee, Won-Kyu
    • The Korean Journal of Archival Studies, no.10, pp.276-322, 2004
  • Historical interest in China in the 1930s has mostly focused on the political character of the National Government (國民政府), which was established by the KMT (中國國民黨) as a result of national unification. It is certain that China had a chance to construct a modern country through the establishment of this unified revolutionary government. But it was also a time of expanding national crises that threatened the existence of the country, such as the Manchurian Incident and the Sino-Japanese War, as well as domestic chaos, so there is good reason to examine the character and pattern of the responses of the political powers of those days. As shown in recent studies, the way the revolutionary regime manifested its political power catches our attention through an understanding of its internal operating system. Although this paper starts from the fact that the Nationalist Government executed an administrative reform aimed at "administrative efficiency" in the mid-1930s, it puts stress on the seriousness of the problem and its solution rather than on the political background or results. The "Committee on Administrative Efficiency (行政效率委員會)", the center of the administrative reform movement established in 1934, examined plans to execute the reform through legislation by the Executive Council (行政院) on the basis of the results of relevant studies. They claimed that the construction of a modern country should be achieved not by political revolution anymore but by gradual improvement and daily reform, and that the operation of the government should become modern, scientific, and efficient. There were many fields of administrative reform, but the field of records and archives administration (文書檔案行政) in particular was studied intensively from the initial stage because that subject had already been discussed intensively. They recognized that records and archives were the basic tools of work performance and general activity yet remained an inefficient field despite the many staff members assigned to it, and, most of all, that archival reform would bring about fewer conflicts than reforms of finance, organization, and personnel. As for records administration, the key demands that records should be written simply, that the process of record handling should be clear, and that delays should be prevented had already been presented at a records administration meeting in 1922. That is, no unified law on record management had been established, so each government organization followed its conventional customs or carried out independent improvements. It was at another records administration workshop of the Nationalist Government in 1933 that the new trend toward unified system improvement appeared. They decided to unify the format of official records, to use markers and sections, to unify the registration of received and dispatched records, and to strengthen the examination of record handling. However, the method of record handling was still not unified, so the key point of the records administrative reform was to establish a unified and standard record management system that would prevent repetition by simplifying the handling procedure and allow intensive handling by exclusive organizations. From the foundation of the Republic of China to the 1930s, there was no big change in the field of archives administration, and archives management methods were prescribed differently even within the same section and the same department. Therefore, the points at issue were to centralize the scattered management systems operated by each section, to establish unified standards for filing and retention period allowances, and to improve the searching system through classification and proper number allowances. In particular, the problems were that separate numbering and classification systems brought about different results due to the dual operation of record registration and archives registration, and that strict management through mutual comparison, searching, and application was impossible. Besides, various problems such as filing tools, arrangement methods, preservation facilities and equipment, lending services, and use methods were also raised. In the course of studying the improvement of the records and archives management system, they recognized that records and archives are essentially the same thing and went on to create a successive management method for records and archives, called the "Records and Archives Chain Management Method (文書檔案連鎖法)", as a potential alternative. Several principles were established: records and archives management should be performed in a unified manner in each organization by the general record receipt section and the general archives section under the principle of task centralization; a consistent classification system, decided in advance according to the organizational constitution and work functions, should be used; and an identical numbering system should be used throughout the record management stage and the archives management stage by means of a card-type register. Although this "Records and Archives Chain Management Method" reached the stage of test application in several organizations, it was not adopted as a regular system and was discontinued, because the administrative reform of the Nationalist Government was interrupted by the outbreak of the Sino-Japanese War. Even though the administrative reform of the mid-1930s produced only an experiment rather than practical results, it demonstrated that the reform against tradition and custom conducted by the Nationalist Government, which aimed at the construction of a modern country, was not confined to the field of politics, and that, conversely, the weak basis of government operation became an obstacle to the realization of the political power of the revolutionary regime. Though the subject of records and archives administrative reform was postponed to the future, it should be understood that the consciousness of modern records and archives administration and the overall study of it began with this examination of administrative reform.

Digital Humanities, and Applications of the "Successful Exam Passers List" (과거 합격자 시맨틱 데이터베이스를 활용한 디지털 인문학 연구)

  • LEE, JAE OK
    • (The)Study of the Eastern Classic, no.70, pp.303-345, 2018
  • This article discusses how the Bangmok (榜目) documents, which are essentially lists of successful passers of the civil competitive examination system of the Chosŏn dynasty, can serve as a source of information when rendered into digitalized formats: they not only let us know Chosŏn individuals' social backgrounds and bloodlines but also enable us to understand the intricate nature of the Yangban network. In digital humanities studies, the Bangmok materials, literally a list of the leading elites of the Chosŏn period, constitute a very interesting and important source of information. Based upon these materials, we can see what the society, as well as the Yangban community, was like. Currently, all data inside these Bangmok lists are rendered in XML (eXtensible Markup Language) format and are served through a DBMS (Database Management System), so anyone who wants to examine the statistics can freely do so. Also, by connecting the data in these Bangmok materials with data from genealogy records, we can identify an individual's marital relationships, home town, and political affiliation, and therefore create a complex narrative that is effective in describing that individual's life. This is a graph database, which represents successful passers as individual nodes once the Bangmok data is entered, and displays blood and marital relations in a very visible way. Clicking on the nodes provides access to all kinds of relationships formed among more than 90 thousand successful passers, and even to the overall marital network, once the genealogical data is input. In Korea, between 2005 and now, the task of digitalizing data from the Civil exam Bangmok (Mun-gwa Bangmok), Military exam Bangmok (Mu-gwa Bangmok), "Sa-ma" Bangmok, and "Jab-gwa" Bangmok materials has been completed. They can be accessed through a website (http://people.aks.ac.kr/index.aks) that has information on numerous famous past Korean individuals. With this kind of source of information, we are now able to extract professional Jung-in figures from these lists. However, meaningful and practical studies using this data are yet to be announced. This article would like to remind everyone that this information should be used as a window through which we can see not only the lives of individuals but also the society.
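
The node-and-edge structure described above (passers as nodes, blood and marital relations as labeled edges) can be sketched minimally as follows. The names and relations are invented for illustration and are not drawn from the actual Bangmok database.

```python
# Hedged sketch of a passer graph: nodes carry exam metadata, edges carry relation labels.
from collections import defaultdict

nodes = {
    "passer:hong_gildong": {"exam": "Mun-gwa", "year": 1519},
    "passer:kim_cheolsu": {"exam": "Sa-ma", "year": 1522},
}

edges = defaultdict(list)  # node id -> list of (relation, other node id)

def add_relation(a: str, b: str, relation: str) -> None:
    """Add an undirected labeled edge, e.g. 'father-son' or 'marriage-in-law'."""
    edges[a].append((relation, b))
    edges[b].append((relation, a))

add_relation("passer:hong_gildong", "passer:kim_cheolsu", "marriage-in-law")

# "Clicking on a node" then amounts to listing its labeled neighbours:
print(edges["passer:hong_gildong"])  # [('marriage-in-law', 'passer:kim_cheolsu')]
```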

The prediction of the stock price movement after IPO using machine learning and text analysis based on TF-IDF (증권신고서의 TF-IDF 텍스트 분석과 기계학습을 이용한 공모주의 상장 이후 주가 등락 예측)

  • Yang, Suyeon; Lee, Chaerok; Won, Jonggwan; Hong, Taeho
    • Journal of Intelligence and Information Systems, v.28 no.2, pp.237-262, 2022
  • There has been growing interest in IPOs (Initial Public Offerings) due to the profitable returns that IPO stocks can offer to investors. However, IPOs can also be speculative investments that involve substantial risk, because shares tend to be volatile and the supply of IPO shares is often highly limited. Therefore, it is crucially important that IPO investors are well informed about the issuing firms and the market before deciding whether to invest. Unlike institutional investors, individual investors are at a disadvantage since there are few opportunities for individuals to obtain information on IPOs. In this regard, the purpose of this study is to provide individual investors with information they may consider when making an IPO investment decision. This study presents a model that uses machine learning and text analysis to predict whether an IPO stock price will move up or down after the first 5 trading days. Our sample includes 691 Korean IPOs from June 2009 to December 2020. The input variables for the prediction are three tone variables created from IPO prospectuses and quantitative variables that are either firm-specific, issue-specific, or market-specific. The three prospectus tone variables indicate the percentage of positive, neutral, and negative sentences in a prospectus, respectively. Only the sentences in the Risk Factors section of a prospectus were considered for the tone analysis in this study. All sentences were classified into 'positive', 'neutral', and 'negative' via text analysis using TF-IDF (Term Frequency - Inverse Document Frequency). The tone of each sentence was measured by machine learning rather than a lexicon-based approach because of the lack of sentiment dictionaries suitable for Korean text analysis in the context of finance. For this reason, the training set was created by randomly selecting 10% of the sentences from each prospectus, and the sentences in the training set were labeled by reading each sentence manually. Then, based on the training set, a Support Vector Machine model was used to predict the tone of the sentences in the test set. Finally, the machine learning model calculated the percentages of positive, neutral, and negative sentences in each prospectus. To predict the price movement of an IPO stock, four different machine learning techniques were applied: Logistic Regression, Random Forest, Support Vector Machine, and Artificial Neural Network. According to the results, models that use the quantitative variables together with the prospectus tone variables show higher accuracy than models that use only the quantitative variables. More specifically, the prediction accuracy improved by 1.45 percentage points in the Random Forest model, 4.34 percentage points in the Artificial Neural Network model, and 5.07 percentage points in the Support Vector Machine model. After testing the performance of these machine learning techniques, the Artificial Neural Network model using both the quantitative variables and the prospectus tone variables was the model with the highest prediction accuracy, at 61.59%. The results indicate that the tone of a prospectus is a significant factor in predicting the price movement of an IPO stock. In addition, the McNemar test was used to verify the statistically significant difference between the models: the model using only the quantitative variables and the model using both the quantitative variables and the prospectus tone variables were compared, and it was confirmed that the predictive performance improved significantly at the 1% significance level.
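
The sentence-tone step described above (TF-IDF features plus an SVM classifier, followed by per-prospectus tone percentages) can be sketched with scikit-learn. The example sentences, labels, and English text are placeholders; the study worked with hand-labeled Korean sentences from the Risk Factors sections.

```python
# Hedged sketch: TF-IDF + SVM sentence-tone classification and tone percentages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_sentences = [
    "Revenue has grown steadily over the last three years.",
    "The company faces intense competition in its core market.",
    "The offering proceeds will be used for general corporate purposes.",
]
train_labels = ["positive", "negative", "neutral"]  # hand-labeled, as in the study

tone_clf = make_pipeline(TfidfVectorizer(), LinearSVC())
tone_clf.fit(train_sentences, train_labels)

def prospectus_tone(sentences):
    """Percentage of positive / neutral / negative sentences in one prospectus."""
    preds = tone_clf.predict(sentences)
    n = len(preds)
    return {label: 100.0 * (preds == label).sum() / n
            for label in ("positive", "neutral", "negative")}

print(prospectus_tone(["Demand for the product may decline sharply.",
                       "The firm reported record profits this quarter."]))
```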

Exploring the Model of Social Enterprise in Sport: Focused on Organization Form(Type) and Task (스포츠 분야 사회적기업의 모델 탐색: 조직형태 및 과제)

  • Sang-Hyun Park; Joo-Young Park
    • Journal of Industrial Convergence, v.22 no.2, pp.73-83, 2024
  • The purpose of this study is to diagnose various problems arising around social enterprises in the sport field from an organizational perspective and to derive the necessary tasks and implications. To achieve this purpose, the study was divided into three stages, and results were derived for each. First, the main status and characteristics of social enterprises in the sport field were examined. The current status was analyzed focusing on aspects such as background and origin, legislation and policy, organizational goals, organizational structure and procedures, and organizational characteristics. Social enterprises in the sport sector were in their early stages, and the government's social enterprise policy goal tended to focus on increasing the number of social enterprises in a short period of time through financial input. In addition, it was found that most individual companies rely on government subsidies due to insufficient profit-generating capacity. In the second stage, we focused on the situational factors that affect the functional performance of social enterprises in the sport field. A review of the situational factors of value, ideology, technology, and organizational history showed that when an organization is certified as a social enterprise in the sport field and is supported by the central or local governments, political control is fairly strong while exposure to the market is not severe. In the third and final stage, tasks and implications were derived for forming an appropriate organization for social enterprises in the sport field. After the social enterprise ecosystem in the sport sector has been established to some extent, it is necessary to gradually move from the current "government-type" organization to the "national enterprise" organization. This is necessary not in the short term but for the long-term survival of social enterprise organizations in the sport sector, in light of the government's limited financial capacity.

Accelerometer-based Gesture Recognition for Robot Interface (로봇 인터페이스 활용을 위한 가속도 센서 기반 제스처 인식)

  • Jang, Min-Su; Cho, Yong-Suk; Kim, Jae-Hong; Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems, v.17 no.1, pp.53-69, 2011
  • Vision- and voice-based technologies are commonly utilized for human-robot interaction. But it is widely recognized that the performance of vision- and voice-based interaction systems deteriorates by a large margin in real-world situations due to environmental and user variances. Human users need to be very cooperative to get reasonable performance, which significantly limits the usability of vision- and voice-based human-robot interaction technologies. As a result, touch screens are still the major medium of human-robot interaction for real-world applications. To improve the usability of robots for various services, alternative interaction technologies should be developed to complement the problems of vision- and voice-based technologies. In this paper, we propose the use of an accelerometer-based gesture interface as one of these alternative technologies, because accelerometers are effective in detecting the movements of the human body, while their performance is not limited by environmental contexts such as lighting conditions or a camera's field of view. Moreover, accelerometers are widely available nowadays in many mobile devices. We tackle the problem of classifying the acceleration signal patterns of the 26 English alphabet letters, which is one of the essential repertoires for the realization of robot-based education services. Recognizing 26 English handwriting patterns based on accelerometers is a very difficult task because of the large number of pattern classes and the complexity of each pattern. The most difficult comparable problem previously undertaken was recognizing the acceleration signal patterns of 10 handwritten digits. Most previous studies dealt with pattern sets of 8~10 simple and easily distinguishable gestures that are useful for controlling home appliances, computer applications, robots, etc. Good features are essential for the success of pattern recognition. To promote the discriminative power over complex English alphabet patterns, we extracted 'motion trajectories' from the input acceleration signal and used them as the main feature. Investigative experiments showed that classifiers based on trajectories performed 3%~5% better than those with raw features, e.g., the acceleration signal itself or statistical figures. To minimize the distortion of trajectories, we applied a simple but effective set of smoothing filters and band-pass filters. It is well known that acceleration patterns for the same gesture are very different among different performers. To tackle this problem, online incremental learning is applied so that our system adapts to users' distinctive motion properties. Our system is based on instance-based learning (IBL), where each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which is a problem because it not only slows down classification but also degrades recall performance. Regarding the latter phenomenon, we observed a tendency that as the number of reference patterns grows, some reference patterns contribute more to false positive classification. Thus, we devised an algorithm for optimizing the reference pattern set based on the positive and negative contribution of each reference pattern. The algorithm is performed periodically to remove reference patterns that have a very low positive contribution or a high negative contribution. Experiments were performed on 6500 gesture patterns collected from 50 adults aged 30 to 50. Each alphabet letter was performed 5 times per participant using a Nintendo® Wii™ remote. The acceleration signal was sampled at 100 Hz on 3 axes. The mean recall rate for all the alphabet letters was 95.48%. Some letters recorded a very low recall rate and exhibited a very high pairwise confusion rate. Major confusion pairs were D (88%) and P (74%), I (81%) and U (75%), and N (88%) and W (100%). Though W was recalled perfectly, it contributed much to the false positive classification of N. By comparison with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures), and Samsung Electronics (97% for 10 digits and a control gesture), we find that the performance of our system is superior with regard to the number of pattern classes and the complexity of the patterns. Using our gesture interaction system, we conducted 2 case studies of robot-based edutainment services. The services were implemented on various robot platforms and mobile devices including the iPhone™. The participating children exhibited improved concentration and active reactions to the service with our gesture interface. To verify the effectiveness of our gesture interface, the children took a test after experiencing an English teaching service. The test results showed that those who played with the gesture-interface-based robot content scored 10% better than those taught conventionally. We conclude that the accelerometer-based gesture interface is a promising technology for advancing real-world robot-based services and content by complementing the limits of today's conventional interfaces, e.g., touch screens, vision, and voice.
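
A minimal sketch of the instance-based learning scheme described above: each training sample is stored as a reference pattern, classification is nearest-neighbour over (simplified) trajectory features, and reference patterns whose negative contribution outweighs their positive contribution are periodically pruned. The feature representation, distance metric, and pruning thresholds are assumptions, not the authors' exact algorithm.

```python
# Hedged sketch of IBL gesture classification with reference-pattern pruning.
import numpy as np

class GestureIBL:
    def __init__(self):
        self.refs = []  # list of dicts: feature vector, label, contribution counts

    def add(self, features, label):
        """Memorize one training sample as a reference pattern."""
        self.refs.append({"x": np.asarray(features, float), "y": label, "pos": 0, "neg": 0})

    def classify(self, features, true_label=None):
        """Nearest reference pattern wins; optionally record its contribution."""
        x = np.asarray(features, float)
        best = min(self.refs, key=lambda r: np.linalg.norm(r["x"] - x))
        if true_label is not None:
            best["pos" if best["y"] == true_label else "neg"] += 1
        return best["y"]

    def prune(self, min_pos=1):
        """Periodically drop references that misled more often than they helped."""
        self.refs = [r for r in self.refs if not (r["neg"] > r["pos"] and r["pos"] < min_pos)]

# Toy usage with 2-D stand-ins for trajectory features.
ibl = GestureIBL()
ibl.add([0.0, 1.0], "N")
ibl.add([1.0, 0.0], "W")
print(ibl.classify([0.1, 0.9], true_label="N"))  # -> "N"
ibl.prune()
```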

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu; Alikhanov, Jumabek; Fang, Yang; Ko, Seunghyun; Jo, Geun Sik
    • Journal of Intelligence and Information Systems, v.24 no.1, pp.205-225, 2018
  • The Convolutional Neural Network (ConvNet) is a class of powerful Deep Neural Networks that can analyze and learn hierarchies of visual features. An early neural network of this kind (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in either industry or academia because of the shortage of large-scale datasets and low computational power. However, a few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, and that breakthrough revived people's interest in neural networks. The success of Convolutional Neural Networks is due to two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation. The second is the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors: for most domains, it is difficult and requires a lot of effort to gather a large-scale dataset to train a ConvNet, and even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome by using transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning scenarios. The first is using the ConvNet as a fixed feature extractor, and the second is fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are then fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from multiple ConvNet layers address different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Overall, our primary pipeline has three steps. First, images from the target task are fed forward through the pre-trained AlexNet, and the activation features from its three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple-ConvNet-layer representation, because it carries more information about an image; when the three fully connected layer features are concatenated, the resulting image representation has 9192 (4096+4096+1000) dimensions. However, features extracted from multiple ConvNet layers are redundant and noisy since they are extracted from the same ConvNet. Thus, in the third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. When salient features are obtained, the classifier can classify images more accurately, and the performance of transfer learning can be improved. To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against single ConvNet layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrated the importance of feature selection for the multiple ConvNet layer representation. Moreover, our proposed approach achieved 75.6% accuracy compared to the 73.9% accuracy achieved by the FC7 layer on the Caltech-256 dataset, 73.1% accuracy compared to the 69.2% accuracy achieved by the FC8 layer on the VOC07 dataset, and 52.2% accuracy compared to the 48.7% accuracy achieved by the FC7 layer on the SUN397 dataset. We also showed that our proposed approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on the Caltech-256, VOC07, and SUN397 datasets respectively, compared to existing work.
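
A hedged sketch of the pipeline above: activations from the three fully connected layers of a pre-trained AlexNet are concatenated into a 9192-dimensional vector (4096+4096+1000), reduced with PCA, and fed to a simple classifier. The torchvision layer indices and the linear SVM are assumptions based on the standard AlexNet, not the authors' exact implementation.

```python
# Hedged sketch: multi-layer AlexNet features + PCA + linear classifier.
import numpy as np
import torch
from torchvision import models
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

# Capture fc6, fc7, fc8 outputs with forward hooks (classifier indices 1, 4, 6 in torchvision).
captured = {}
for name, idx in [("fc6", 1), ("fc7", 4), ("fc8", 6)]:
    alexnet.classifier[idx].register_forward_hook(
        lambda m, inp, out, name=name: captured.__setitem__(name, out))

def extract(batch: torch.Tensor) -> np.ndarray:
    """batch: (N, 3, 224, 224) normalized images -> (N, 9192) concatenated features."""
    with torch.no_grad():
        alexnet(batch)
    feats = torch.cat([captured["fc6"], captured["fc7"], captured["fc8"]], dim=1)
    return feats.numpy()

# Toy usage with random tensors standing in for real, normalized images.
X_train = extract(torch.randn(8, 3, 224, 224))
y_train = np.array([0, 1] * 4)

pca = PCA(n_components=4)              # far larger in practice (e.g. a few hundred components)
clf = LinearSVC().fit(pca.fit_transform(X_train), y_train)
print(clf.predict(pca.transform(extract(torch.randn(2, 3, 224, 224)))))
```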