• Title/Summary/Keyword: dimension reduction method


Development of Measurement Scale for Korean Scaling Fear-1.0 and Related Factors (한국형 스켈링공포(KSF 1.0)의 측정도구 개발 및 관련요인)

  • Cho, Myung-Sook;Lee, Sung-Kook
    • Journal of dental hygiene science
    • /
    • v.9 no.3
    • /
    • pp.327-338
    • /
    • 2009
  • This study aimed to develop an instrument for the multidimensional measurement of Korean scaling fear (KSF-1.0) and to analyze its related factors. A sample of 720 subjects (scaling patients and community residents) was studied in Daegu from November 2008 to March 2009. The authors first conceptualized the KSF; item generation, item reduction, and questionnaire formatting were then performed during development. At the item level, descriptive statistics, percentage of missing values, item internal consistency, and item discriminant validity were analyzed; at the scale level, descriptive statistics and floor and ceiling effects were examined. Cronbach's alpha, test-retest reliability, inter-dimension correlations, and factor analysis were used to evaluate the validity and reliability of the new instrument, and confirmatory factor analysis was performed to evaluate model fit. The item-level and scale-level results were acceptable except for item discriminant validity. Test-retest reliability was high, with correlation coefficients of 0.92~0.96 (Cronbach's alpha 0.96~0.98), and the paired t-test showed no significant difference. Item internal consistency was also high (Pearson correlation coefficients 0.39~0.95). Exploratory factor analysis reproduced the intended dimensional structure, and confirmatory factor analysis indicated that the dimensional structure model fit well ($\chi^2$=1245.66, df=146, p=0.0000; GFI=0.85; AGFI=0.80; RMSEA=0.10). In multiple regression, the factors related to KSF were gender ($\beta$=0.28, p=0.0004) and tooth-brushing method ($\beta$=-0.15, p=0.0053) in scaling patients, while gender ($\beta$=0.25, p=0.0002), educational level ($\beta$=0.14, p=0.0155), tooth-brushing method ($\beta$=-0.09, p=0.0229), and time of daily workout ($\beta$=-0.10, p=0.0055) were significantly associated with KSF in the non-scaling group.
In conclusion, the results of this study show that the newly developed scale is a reliable and valid instrument for measuring KSF in dental hygiene patients and community members. We recommend that further research refine the instrument for Korean scaling fear.
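The internal-consistency statistic reported above can be illustrated with a minimal sketch. The data below are synthetic Likert-type responses invented for illustration, not the study's data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Synthetic 5-point Likert responses for a 4-item fear subscale (illustrative only)
rng = np.random.default_rng(0)
trait = rng.integers(1, 6, size=(100, 1))                      # latent fear level
scores = np.clip(trait + rng.integers(-1, 2, (100, 4)), 1, 5)  # noisy item scores
print(round(cronbach_alpha(scores.astype(float)), 2))
```

Because the four items share the same latent trait, alpha comes out high, mirroring the 0.96~0.98 range reported for the KSF dimensions.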


Analysis of Subwavelength Metal Hole Array Structure for the Enhancement of Quantum Dot Infrared Photodetectors

  • Ha, Jae-Du;Hwang, Jeong-U;Gang, Sang-U;No, Sam-Gyu;Lee, Sang-Jun;Kim, Jong-Su;Krishna, Sanjay;Urbas, Augustine;Ku, Zahyun
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2013.02a
    • /
    • pp.334-334
    • /
    • 2013
  • In the past decade, infrared detectors based on intersubband transitions in quantum dots (QDs) have attracted much attention due to lower dark currents and increased carrier lifetimes, which arise from three-dimensional confinement and reduced scattering, respectively. In parallel, focal plane array (FPA) development for infrared imaging has proceeded from the first to the third generation (linear arrays, 2D arrays for staring systems, and large formats with enhanced capabilities, respectively). As a step toward the next generation of FPAs, it is envisioned that two-dimensional metal hole array (2D-MHA) structures will improve the FPA by enhancing coupling to the photodetectors via local field engineering and by enabling wavelength filtering. Regarding the improved performance at certain wavelengths, it is worth pointing out the structural difference between previous 2D-MHA-integrated front-illuminated single-pixel devices and back-illuminated devices. Apart from the pixel linear dimension, a distinct difference is the metal cladding in the FPA (composed of several metals for the ohmic contact and the read-out integrated circuit hybridization) between the heavily doped gallium arsenide contact layer and the ROIC; by contrast, the front-illuminated single-pixel device consists of two heavily doped contact layers separated by the QD absorber on a semi-infinite GaAs substrate. This paper focuses on analyzing the impact of a two-dimensional metal hole array structure integrated with back-illuminated quantum dots-in-a-well (DWELL) infrared photodetectors. The metal hole array, consisting of subwavelength circular holes penetrating a gold layer (2D-Au-CHA), enhances the responsivity of the DWELL infrared photodetector at certain wavelengths. The performance of the 2D-Au-CHA is investigated by calculating the absorption of the active layer in the DWELL structure using a finite integration technique.
Simulation results show enhanced electric fields (and thereby increased absorption in the active layer) resulting from surface plasmon, guided-mode, and Fabry-Perot resonances. The simulation method presented in this paper provides a generalized approach to optimizing the design of any type of coupler integrated with infrared photodetectors.
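As a rough guide to where the surface-plasmon resonances of such an array fall, the standard grating relation for a square hole array gives $\lambda_{i,j} \approx (p/\sqrt{i^2+j^2})\sqrt{\varepsilon_m\varepsilon_d/(\varepsilon_m+\varepsilon_d)}$; for gold in the mid-IR, $|\varepsilon_m| \gg \varepsilon_d$, so the square-root factor reduces to roughly the dielectric's refractive index. The sketch below uses an assumed 2.0 µm pitch and a GaAs index of 3.3 — illustrative values, not parameters from the paper:

```python
import math

def spp_wavelength_um(pitch_um, i, j, n_dielectric):
    """Approximate (i, j)-order SPP coupling wavelength for a square metal
    hole array, assuming |eps_metal| >> eps_dielectric (good for Au in mid-IR)."""
    return pitch_um * n_dielectric / math.hypot(i, j)

# Assumed values (illustrative, not from the paper): 2.0 um pitch, GaAs n ~ 3.3
for order in [(1, 0), (1, 1), (2, 0)]:
    print(order, "->", round(spp_wavelength_um(2.0, *order, 3.3), 2), "um")
```

A full-wave solver (such as the finite integration technique used in the paper) is still needed to capture hole-size effects, guided modes, and Fabry-Perot resonances, but this estimate locates the plasmonic orders.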


CLINICAL STUDY OF THE SKELETAL CL III MALOCCLUSION PATIENTS AFTER 2-PHASE SURGICAL-ORTHODONTIC TREATMENT (골격성 제III급 부정교합 환자의 2단계 치료후 경과에 대한 임상적 연구)

  • Cho, Yun-Ju;Kim, Sang-Jung;Kim, Dong-Ryul;Suk, Geon-Jung;Hong, Kwang-Jin;Lee, Jeong-Gu;Sohn, Hong-Bum
    • Journal of the Korean Association of Oral and Maxillofacial Surgeons
    • /
    • v.26 no.6
    • /
    • pp.628-635
    • /
    • 2000
  • The purpose of this study was to evaluate the results of 2-phase surgical-orthodontic treatment without preoperative orthodontic treatment for skeletal Class III malocclusion patients and to derive an adequate protocol on the basis of these results. This retrospective study of ten patients who underwent 2-phase treatment evaluated 1) surgical stability and relapse pattern, 2) facial esthetics, 3) TMJ problems, and 4) total treatment time. The results were as follows: 1) The horizontal relapse of the mandible was 26.8% and showed no significant difference compared with conventional 3-phase treatment. However, this amount of relapse was considered to be the sum of true relapse and autorotation of the mandible due to decreased vertical dimension during orthodontic treatment. 2) There was no difference in the ratio of anterior facial height between the subjects and normal patients. On horizontal analysis, the mandible of the subjects was located more anteriorly than that of normal patients, indicating the need for accurate preoperative esthetic evaluation and for additional methods to reduce relapse due to occlusal interference. 3) Wide variation was noted in the TMJ symptoms of the subjects; however, there were no significant differences in symptoms compared with conventional 3-phase treatment as reported in the literature. 4) The average overall treatment period was 20.8 months, a reduction in treatment time compared with 3-phase treatment as reported in many studies. Most of the results were similar to the findings for 3-phase treatment (preoperative orthodontics, orthognathic surgery, postoperative orthodontics), but total treatment time was shorter with 2-phase treatment than with the conventional 3-phase approach.
With 2-phase treatment, we experienced many advantages over the conventional method: conditions were favorable for the teeth, the treatment was flexible, and it could be an adequate treatment approach for the stomatognathic system. Although this retrospective pilot study had limitations due to its small sample, the authors hope that it can serve as a guide for future research and clinical application.


ZnO nanostructures for e-paper and field emission display applications

  • Sun, X.W.
    • Proceedings of the Korean Information Display Society Conference
    • /
    • 2008.10a
    • /
    • pp.993-994
    • /
    • 2008
  • Electrochromic (EC) devices can reversibly change their optical properties upon charge injection and extraction induced by an external voltage. Characteristics of EC devices, such as low power consumption, high coloration efficiency, and a memory effect under open-circuit conditions, make them suitable for a variety of applications including smart windows and electronic paper. Coloration due to reduction or oxidation of redox chromophores can be used for EC devices (e-paper), but the switching time is slow (on the order of seconds). Recently, with increasing demand for low-cost, lightweight flat panel displays with paper-like readability (electronic paper), an EC display technology based on a dye-modified $TiO_2$ nanoparticle electrode was developed: a well-known organic dye molecule, viologen, was adsorbed on the surface of a mesoporous $TiO_2$ nanoparticle film to form the EC electrode. On the other hand, ZnO is a wide-bandgap II-VI semiconductor that has been applied in many fields such as UV lasers, field-effect transistors, and transparent conductors. The bandgap of bulk ZnO is about 3.37 eV, close to that of $TiO_2$ (3.4 eV). As a traditional transparent conductor, ZnO has excellent electron transport properties, even in ZnO nanoparticle films. In the past few years, one-dimensional (1D) ZnO nanostructures have attracted extensive research interest. In particular, 1D ZnO nanowires offer much better electron transport by providing a direct conduction path and greatly reducing the number of grain boundaries. These advantages make ZnO nanowires a promising matrix electrode for loading EC dye molecules. The ZnO nanowires grow vertically from the substrate and form a dense array (Fig. 1). They show a regular hexagonal cross-section with an average diameter of about 100 nm, and the cross-sectional image of the array (Fig. 1) indicates that the nanowires are about $6\;{\mu}m$ long. From one on/off cycle of the ZnO EC cell (Fig. 2), the switching time of a ZnO nanowire electrode EC cell with an active area of $1\;{\times}\;1\;cm^2$ is 170 ms for coloration and 142 ms for bleaching. These times are faster than those of mesoporous $TiO_2$ EC devices, for which both coloration and bleaching take about 250 ms for a device with an active area of $2.5\;cm^2$. With further optimization, the response time could reach tens of milliseconds, i.e., fast enough to display video. Fig. 3 shows a prototype with two different transmittance states; good contrast was obtained, and retention was at least a few hours for these prototypes. Being an oxide, ZnO is oxidation resistant and thus more durable as a field emission cathode. ZnO nanotetrapods were also used to realize the first prototype triode field emission device, which exploits scattered surface-conduction electrons for field emission (Fig. 4). The device has a high efficiency (ratio of field-emitted electrons to total electrons) of about 60%. With this high efficiency, we were able to fabricate prototype displays (Fig. 5 shows some alphanumeric symbols). ZnO tetrapods have four legs, which guarantees that one leg always points upward, even when the cathode is fabricated by screen printing.


Modeling of heat efficiency of hot stove based on neural network using feature extraction (특성 추출과 신경회로망을 이용한 열 풍로 열효율에 대한 모델링)

  • Min Kwang Gi;Choi Tae Hwa;Han Chong Hun;Chang Kun Soo
    • Journal of the Korean Institute of Gas
    • /
    • v.2 no.4
    • /
    • pp.60-66
    • /
    • 1998
  • The hot stove is a process that continuously and constantly generates the hot combustion air required by the blast furnace. It is considered a major energy-consuming process because it uses about $20\%$ of the total energy in steel making. Many researchers have therefore been interested in improving the heat efficiency of the hot stove to reduce energy consumption, but this is difficult because there is no precise information on the heat transfer occurring during the heating period. To model the relationship between operating conditions and heat efficiency, we propose a neural network with feature extraction as an empirical modeling method. To demonstrate the performance of the model, we compare it with the Partial Least Squares (PLS) method; both methods rely on a dimension reduction technique. We then present simulation results on predicting the heat efficiency of the hot stove.
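The feature-extraction-then-model idea behind both approaches can be sketched as follows. This is a minimal illustration with synthetic data, not plant measurements; it uses principal component projection (a close relative of the PLS reduction mentioned above) followed by least squares:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "operating conditions" (hypothetical, not plant data):
# 50 heating cycles x 6 correlated process variables driven by 2 latent factors
latent = rng.normal(size=(50, 2))
X = latent @ rng.normal(size=(2, 6)) + 0.1 * rng.normal(size=(50, 6))
y = latent @ np.array([1.5, -0.8]) + 0.05 * rng.normal(size=50)  # heat efficiency

# Feature extraction: project onto the top-2 principal components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
T = Xc @ Vt[:2].T                      # reduced 2-D feature scores

# Fit a linear model on the extracted features (principal component regression)
A = np.c_[T, np.ones(len(T))]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print("R^2:", round(r2, 3))
```

PLS differs in that it chooses projection directions to maximize covariance with the target rather than variance of the inputs alone, and the paper's neural network replaces the final linear fit with a nonlinear one; the pipeline shape is the same.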


CLINICAL CONSIDERATION ON USING THE ELASTIC 'TIE BACKS' DURING SPACE CLOSURE ('Elastic tie back'을 이용한 발치공간 폐쇄에 관한 임상적 고려)

  • Cho, Ki-Soo;Chun, Youn-Sic
    • The korean journal of orthodontics
    • /
    • v.23 no.2 s.41
    • /
    • pp.217-227
    • /
    • 1993
  • The preadjusted appliance, following the original concept of the Andrews Straight-Wire appliance, became increasingly common in the 1980s. In the six phases of treatment (anchorage control, leveling and aligning, overbite control, overjet reduction, space closure, and finishing), preadjusted appliances are very effective. Space closure is the phase in which the difference between standard edgewise and preadjusted mechanics is most noticeable: orthodontists have been able to reduce the use of closing loops and, because of the level slot lineup, enjoy the advantages of sliding mechanics. In 1990, Dr. John C. Bennett and Richard P. McLaughlin introduced a new space closure system, the elastic 'tieback'. They found an $.019'\times.025'$ working archwire most effective in an .022'-slot system. Hooks of .024' stainless steel or .028' brass wire are soldered to the upper and lower archwires, and the force required for space closure is delivered by elastic 'tiebacks'. An elastic module stretched by 2-3 mm (to twice its normal length) usually delivers 0.5-1.5 mm of space closure per month. Group movement and sliding mechanics are combined for gentle, controlled space closure, so that about 0.5 mm of incisor retraction and 0.5 mm of mesial molar movement can be seen each month. The tiebacks are replaced every four to six weeks. The following two cases were treated using elastic 'tiebacks' during space closure. Although we found some clinical problems with these mechanics, such as long treatment time and difficulty controlling vertical dimension and anchorage, the method is so simple that orthodontists can manage many patients in a short chair time. However, these mechanics should be applied only after a thorough understanding of the biomechanics of tooth movement.


An Intelligence Support System Research on KTX Rolling Stock Failure Using Case-based Reasoning and Text Mining (사례기반추론과 텍스트마이닝 기법을 활용한 KTX 차량고장 지능형 조치지원시스템 연구)

  • Lee, Hyung Il;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.47-73
    • /
    • 2020
  • KTX rolling stock is a system consisting of several machines, electrical devices, and components, and its maintenance requires considerable expertise and experience. In the event of a rolling stock failure, the knowledge and experience of the maintainer lead to differences in the time and quality of the work needed to solve the problem, and thus the resulting availability of the vehicle varies. Although problem solving is generally based on fault manuals, experienced and skilled professionals can quickly diagnose and act by applying personal know-how. Because this knowledge exists in tacit form, it is difficult to pass on completely to a successor, and previous studies have developed case-based rolling stock expert systems to turn it into a data-driven form. Nonetheless, research on the KTX rolling stock most commonly used on the main line, and on systems that extract the meaning of text and search for similar cases, is still lacking. This study therefore proposes an intelligent support system that provides an action guide for emerging failures, using the know-how of rolling stock maintenance experts as problem-solving examples. For this purpose, a case base was constructed by collecting rolling stock failure data generated from 2015 to 2017, and an integrated dictionary was built from the case base to cover the essential terminology and failure codes of the railway rolling stock domain. Based on the deployed case base, a new failure is matched against past cases, the three most similar failure cases are retrieved, and the actual actions taken in those cases are proposed as a diagnostic guide.
In this study, various dimensionality reduction techniques were applied to calculate similarity while taking the semantic relationships among failure details into account, compensating for the limitations of keyword-matching case retrieval in earlier case-based expert system studies of rolling stock failures, and their usefulness was verified through experiments. Similar cases were retrieved by applying three algorithms, Non-negative Matrix Factorization (NMF), Latent Semantic Analysis (LSA), and Doc2Vec, to extract the characteristics of each failure and measure the cosine distance between vectors. Precision, recall, and F-measure were used to assess the quality of the proposed actions. To compare performance, analysis of variance confirmed that the differences among five algorithms were statistically significant: the three above, an algorithm that randomly retrieves failure cases with identical failure codes, and an algorithm that applies cosine similarity directly to word vectors. In addition, optimal settings for practical application were derived by verifying how performance depends on the number of dimensions retained. The analysis showed that direct word-based cosine similarity outperformed NMF and LSA, while the Doc2Vec-based algorithm performed best. Furthermore, within an appropriate range, performance improved as the number of retained dimensions increased.
Through this study, we confirmed the usefulness of effective methods for extracting features and converting unstructured data when applying case-based reasoning in the specialized field of KTX rolling stock, where most attributes are text. Text mining is being studied for use in many areas, but studies using such text data are still lacking in environments with many specialized terms and limited access to data, such as the one considered here. In this regard, it is significant that this study first presented an intelligent diagnostic system that suggests actions by retrieving cases through text mining techniques that extract failure characteristics, complementing keyword-based case search. We expect this work to serve as a basis for developing diagnostic systems that can be used immediately in the field.
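The LSA-plus-cosine-similarity retrieval step can be sketched on a toy corpus. Everything below is hypothetical: the failure descriptions, vocabulary, and the number of latent dimensions are invented for illustration and are unrelated to the actual KTX case base:

```python
import numpy as np

# Tiny hypothetical corpus of past failure descriptions (illustrative only)
docs = [
    "traction motor overheating alarm during acceleration",
    "brake cylinder pressure leak detected at inspection",
    "motor temperature sensor fault triggered overheating alarm",
    "door actuator failure on coach entrance",
]
vocab = sorted({w for d in docs for w in d.split()})
tdm = np.array([[d.split().count(w) for w in vocab] for d in docs], float)

# LSA: truncated SVD of the term-document matrix, keeping k latent dimensions
U, S, Vt = np.linalg.svd(tdm, full_matrices=False)
k = 3
vecs = U[:, :k] * S[:k]                 # document vectors in latent space

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# New failure report: embed by projecting its term counts, then rank past cases
query = "overheating alarm from traction motor"
q = np.array([query.split().count(w) for w in vocab], float) @ Vt[:k].T
ranked = sorted(range(len(docs)), key=lambda i: -cosine(q, vecs[i]))
print(ranked[0])   # index of the most similar past case
```

In the actual system the top three cases would be returned and their recorded maintenance actions proposed as the diagnostic guide; NMF or Doc2Vec would simply replace the SVD step with a different embedding.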

Recovery Trajectory in Tachycardia Induced Heart Failure Model (빈맥을 이용한 심부전 모델에서 회복궤도)

  • 오중환;박승일;원준호;김은기;이종국
    • Journal of Chest Surgery
    • /
    • v.32 no.5
    • /
    • pp.422-427
    • /
    • 1999
  • Background: The tachycardia-induced heart failure model is the model of choice for dilated cardiomyopathy. It closely resembles the clinical syndrome and does not require major surgical trauma, myocardial ischemia, or pharmacological or toxic depression of cardiac function. When heart failure is progressive, applying new surgical procedures to the failing heart is highly risky, and it has been shown that exploiting the recovery trajectory from heart failure is a new way to decrease animal mortality. The purpose of this study was to establish control data for the recovery trajectory in a canine heart failure model. Material and Method: 21 mongrel dogs were studied at four stages (baseline, at heart failure, and 4 and 8 weeks into recovery). Heart failure was induced by 4 weeks of continuous rapid pacing with a pacemaker, followed by an 8-week recovery period. Indices of left ventricular function and dimension were measured every 2 weeks, and hemodynamics were measured by Swan-Ganz catheterization and the thermodilution method every 4 weeks. Values are expressed as mean${\pm}$standard deviation. Result: Four dogs (20%) died of heart failure. Left ventricular end-diastolic volumes at the four stages were 40.8${\pm}$7.4, 82.1${\pm}$21.1, 59.9${\pm}$7.7, and 46.5${\pm}$6.5 ml; left ventricular end-systolic volume showed the same trend. Ejection fractions were 50.6${\pm}$4.1, 17.5${\pm}$5.8, 36.3${\pm}$7.3, and 41.5${\pm}$2.4%. Blood pressure and heart rate showed no significant changes. Central venous, right ventricular, pulmonary arterial, and pulmonary capillary wedge pressures increased significantly during the heart failure period and normalized by the end of the recovery period. Stroke volumes were 21.5${\pm}$8.2, 12.3${\pm}$3.5, 17.9${\pm}$4.6, and 15.5${\pm}$3.4 ml. The blood norepinephrine level was 133.3${\pm}$60.0 pg/dL at baseline and 479.4${\pm}$327.3 pg/dL at the heart failure stage (p=0.008).
Conclusion: The tachycardia-induced heart failure model is of high priority because it is readily available and reasonably amenable to measurement. The recovery trajectory after cessation of tachycardia showed reduction of cardiac dilatation and recovery of heart function. Applying new surgical procedures during the recovery period could decrease animal mortality.
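The ventricular indices reported above are linked by simple volume arithmetic: ejection fraction is (EDV − ESV)/EDV, and since stroke volume is EDV − ESV, EF can also be cross-checked as SV/EDV. A quick sketch using the study's baseline means:

```python
def ef_from_volumes(edv_ml, esv_ml):
    """Ejection fraction (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

def ef_from_stroke_volume(sv_ml, edv_ml):
    """Equivalent form using stroke volume SV = EDV - ESV."""
    return 100.0 * sv_ml / edv_ml

# Baseline means reported above: SV 21.5 ml, EDV 40.8 ml
print(round(ef_from_stroke_volume(21.5, 40.8), 1))  # ~53%, near the reported 50.6%
```

The small gap between the cross-check (~53%) and the reported 50.6% is expected, since the published values are group means measured independently rather than volumes from a single beat.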


Physiological Responses of One-year-old Zelkova serrata Makino Seedlings to Ozone in Open-top Chamber (Open-top chamber 내(內)에서 오존에 폭로(暴露)시킨 1년생(年生) 느티나무(Zelkova serrata Makino) 묘목(苗木)의 생리적(生理的) 반응(反應)에 관(關)한 연구(硏究))

  • Kim, Hyun Seok;Lee, Kyung Joon
    • Journal of Korean Society of Forest Science
    • /
    • v.84 no.4
    • /
    • pp.424-431
    • /
    • 1995
  • This study evaluated the resistance and physiological responses of Zelkova serrata Makino seedlings to ozone in open-top chambers. One-year-old seedlings of Zelkova serrata were planted in pots in April and grown in a greenhouse until August. The plants were then transferred into two outdoor open-top chambers, each 2.0 m in diameter and 2.0 m in height. The first chamber served as a control and was supplied with ambient air; ozone at 0.1 ppm was added to the second chamber for 5 hours per day (10:00-15:00) for 23 consecutive days. Each chamber housed 70 pots. Every two, three, or five days after the start of exposure, ten pots were randomly removed from each chamber and the contents of chlorophyll a, chlorophyll b, total chlorophyll, and ${\beta}$-carotene in the leaves were determined. Photosynthesis and dark respiration were estimated by measuring $CO_2$ absorption in a gas exchange chamber and oxygen absorption with an oxygen monitoring system, respectively. Superoxide dismutase (SOD) activity in the leaves was assayed by the xanthine oxidase method. The first visible injury, translucent (water-soaked-looking) spots, appeared on the leaves 14 days after initial exposure, and ozone accelerated the senescence of old leaves. Two days after exposure began, the contents of chlorophyll a and b in the ozone treatment had decreased by 17% and 31%, respectively; the decrease in chlorophyll b was greater than that in chlorophyll a. The ${\beta}$-carotene content in the ozone treatment decreased by 25% two days after exposure began, but this reduction was recovered over time. Photosynthesis decreased by 45%, and respiration increased by 28%, in the ozone treatment. SOD activity began to increase 4 days after the start of exposure, rose by 285% at 7 days, and then decreased below the control level as the visible injury advanced.


Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • The Convolutional Neural Network (ConvNet) is a class of powerful deep neural networks that can analyze and learn hierarchies of visual features. The first such network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in either industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, in 2012, Krizhevsky achieved a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, reviving interest in neural networks. The success of Convolutional Neural Networks rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as ImageNet (ILSVRC) for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, gathering a large-scale dataset to train a ConvNet is difficult and labor-intensive; moreover, even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning settings: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only.
However, applying high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple layers. Based on this observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single-layer representation. Our pipeline has three steps. First, each image from the target task is fed forward through a pre-trained AlexNet, and activation features are extracted from the three fully connected layers. Second, the activation features of the three layers are concatenated to capture more information about the image; the concatenated representation has 9192 (4096+4096+1000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in the third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, improving the performance of transfer learning. To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397), comparing multiple ConvNet layer representations against single-layer representations, with PCA used for feature selection and dimension reduction. Our experiments demonstrated the importance of feature selection for multiple ConvNet layer representations.
Moreover, our approach achieved 75.6% accuracy versus 73.9% for the FC7 layer on Caltech-256, 73.1% versus 69.2% for the FC8 layer on VOC07, and 52.2% versus 48.7% for the FC7 layer on SUN397. We also showed that our approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively, compared with existing work.
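The three-step pipeline (extract FC-layer activations, concatenate to 9192 dimensions, reduce with PCA) can be sketched with random stand-in activations. The layer widths match AlexNet's FC6/FC7/FC8, but the data and the choice of 128 components are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical activations: 200 images, features from three FC layers of AlexNet
fc6 = rng.normal(size=(200, 4096))
fc7 = rng.normal(size=(200, 4096))
fc8 = rng.normal(size=(200, 1000))

# Steps 1-2: concatenate the multiple-layer representations -> 9192-dim vectors
feats = np.concatenate([fc6, fc7, fc8], axis=1)

# Step 3: PCA keeps the top-k directions of variance before the classifier
k = 128
Xc = feats - feats.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
reduced = Xc @ Vt[:k].T
print(reduced.shape)   # (200, 128)
```

In the paper's setting the reduced vectors would then feed a classifier trained on Caltech-256, VOC07, or SUN397; with real activations (unlike this random data) the top components concentrate the shared, discriminative structure across layers.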