• Title/Summary/Keyword: higher order accuracy


Pattern-based Signature Generation for Identification of HTTP Applications (HTTP 응용들의 식별을 위한 패턴 기반의 시그니쳐 생성)

  • Jin, Chang-Gyu; Choi, Mi-Jung
    • Journal of Information Technology and Architecture, v.10 no.1, pp.101-111, 2013
  • Internet traffic volume has been increasing rapidly due to the popularization of various smart devices and the development of the Internet. In particular, HTTP-based traffic from smart devices is growing rapidly on top of desktop traffic. The increased mobile traffic can cause serious problems such as network overload, web security threats, and degraded QoS. To solve these overload and security problems, applications must be detected accurately. Traditionally, the well-known-port method has been used for traffic classification. However, this method shows low accuracy because P2P applications exploit TCP port 80, which is assigned to the HTTP protocol, in order to evade firewalls and IDSs. Signature-based methods were proposed to solve this low-accuracy problem. They achieve a higher analysis rate but incur the overhead of signature generation. Moreover, previous signature-based studies analyzed applications only at the HTTP protocol level, not at the application level; that is, they cannot identify the application name. In this paper, we propose a signature generation method that classifies HTTP-based traffic at the application level using the characteristics of typical semi-HTTP headers. By applying the proposed method to campus network traffic, we validate its feasibility.
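The application-level idea above can be illustrated with a minimal sketch: a signature here is a set of substring patterns over HTTP header fields, and an application is identified when all of its patterns match. The application names, header fields, and patterns below are illustrative assumptions, not the signatures generated in the paper.

```python
# Minimal sketch of pattern-based signature matching on HTTP headers.
# The signature patterns below are illustrative assumptions, not the
# signatures generated in the paper.
SIGNATURES = {
    "ExampleP2PApp": {"User-Agent": "p2pclient"},
    "ExampleUpdater": {"User-Agent": "updater", "Host": "update.example.com"},
}

def parse_headers(raw: str) -> dict:
    """Parse a raw HTTP request into a {field: value} dict."""
    headers = {}
    for line in raw.split("\r\n")[1:]:  # skip the request line
        if ":" in line:
            name, _, value = line.partition(":")
            headers[name.strip()] = value.strip()
    return headers

def identify_application(raw_request: str) -> str:
    """Return the first application whose signature patterns all match."""
    headers = parse_headers(raw_request)
    for app, patterns in SIGNATURES.items():
        if all(pat in headers.get(field, "") for field, pat in patterns.items()):
            return app
    return "unknown"

request = "GET /file HTTP/1.1\r\nHost: cdn.example.com\r\nUser-Agent: p2pclient/2.1\r\n\r\n"
print(identify_application(request))  # -> ExampleP2PApp
```

Note that traffic from a P2P application on TCP port 80 would be classified by its header patterns here, which is the weakness of port-based classification the abstract describes.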

A Noise-Tolerant Hierarchical Image Classification System based on Autoencoder Models (오토인코더 기반의 잡음에 강인한 계층적 이미지 분류 시스템)

  • Lee, Jong-kwan
    • Journal of Internet Computing and Services, v.22 no.1, pp.23-30, 2021
  • This paper proposes a noise-tolerant image classification system using multiple autoencoders. The development of deep learning has dramatically improved the performance of image classifiers. However, if images are contaminated by noise, performance degrades rapidly. Noise is inevitably introduced while images are acquired and transmitted, so the classifier must cope with it to be usable in a real environment. An autoencoder, on the other hand, is an artificial neural network model trained to reproduce its input at its output. If the input is similar to the training data, the error between the autoencoder's input and output is small; if not, the error is large. The proposed system exploits this relationship and classifies images in two phases: in the first phase, the classes with the highest classification likelihood are selected, and in the second phase they are examined again with the same procedure. For performance analysis, classification accuracy was tested on an MNIST dataset contaminated with Gaussian noise. The experiments confirmed that, in the noisy environment, the proposed system achieves higher accuracy than a CNN-based classification technique.
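The selection rule described above (pick the class whose autoencoder reconstructs the input best) can be sketched with a deliberately simplified stand-in: each per-class "autoencoder" below is just a rank-1 linear projection fitted to that class rather than a trained neural network, and the two-phase shortlisting is omitted. The data are synthetic.

```python
import numpy as np

# Toy stand-in for per-class autoencoders: each "autoencoder" is a rank-1
# linear model that reconstructs an input as its projection onto the class
# mean direction. Real autoencoders are trained networks; this only
# illustrates the selection rule (smallest reconstruction error wins).
rng = np.random.default_rng(0)
classes = {
    0: rng.normal(scale=0.1, size=(100, 8)) + np.arange(8),
    1: rng.normal(scale=0.1, size=(100, 8)) + np.arange(8)[::-1],
}

models = {}
for label, data in classes.items():
    mean = data.mean(axis=0)
    models[label] = mean / np.linalg.norm(mean)  # unit direction per class

def reconstruction_error(x, direction):
    recon = (x @ direction) * direction  # project onto the class model
    return np.linalg.norm(x - recon)

def classify(x):
    errors = {label: reconstruction_error(x, d) for label, d in models.items()}
    return min(errors, key=errors.get)  # class whose model reconstructs x best
```

An input drawn from class 0 is reconstructed almost perfectly by class 0's model but poorly by class 1's, so the minimum-error rule recovers the label.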

Transfer Learning Backbone Network Model Analysis for Human Activity Classification Using Imagery (영상기반 인체행위분류를 위한 전이학습 중추네트워크모델 분석)

  • Kim, Jong-Hwan; Ryu, Junyeul
    • Journal of the Korea Society for Simulation, v.31 no.1, pp.11-18, 2022
  • Recently, research on classifying human activity from imagery has been actively conducted for crime prevention and facility safety in public places and facilities. To improve the performance of human activity classification, most studies apply deep learning based transfer learning. However, despite the growing number and architectural diversity of backbone network models, research on finding a backbone suited to the operational purpose is insufficient, because a few familiar models tend to be used by default. This study therefore applies transfer learning to recently developed deep learning backbone network models to build an intelligent system that classifies human activity from imagery. Twelve types of active, high-contact human activities based on sports, rather than basic human behaviors, were selected, and 7,200 images were collected. After 20 epochs of transfer learning were applied equally to five backbone network models, we quantitatively analyzed the learning process and resulting performance to find the best backbone for human activity classification. As a result, the XceptionNet model achieved 0.99 and 0.91 in training and validation accuracy, 0.96 and 0.91 in Top-2 accuracy and average precision, a training time of 1,566 s, and a model memory size of 260.4 MB, confirming that its performance was higher than that of the other models.
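The Top-2 accuracy reported above counts a prediction as correct when the true class is among the two highest-scoring classes. A minimal sketch with hypothetical scores and labels (not the study's data):

```python
import numpy as np

def top_k_accuracy(scores, labels, k=2):
    """Fraction of samples whose true label is among the k highest scores."""
    # argsort ascending, reverse for descending, keep the first k classes
    top_k = np.argsort(scores, axis=1)[:, ::-1][:, :k]
    hits = [label in row for row, label in zip(top_k, labels)]
    return sum(hits) / len(labels)

# Hypothetical class scores for 4 samples over 3 activity classes.
scores = np.array([[0.60, 0.30, 0.10],
                   [0.20, 0.50, 0.30],
                   [0.10, 0.40, 0.50],
                   [0.30, 0.36, 0.34]])
labels = [0, 2, 0, 1]
print(top_k_accuracy(scores, labels, k=2))  # -> 0.75
```

Only the third sample misses at k=2 (its true class 0 has the lowest score), so Top-2 accuracy is 3/4; at k=1 the same data give 0.5.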

Evaluation of accuracy in the ExacTrac 6D image induced radiotherapy using CBCT (CBCT을 이용한 ExacTrac 6D 영상유도방사선치료법의 정확도 평가)

  • Park, Ho Chun; Kim, Hyo Jung; Kim, Jong Deok; Ji, Dong Hwa; Song, Ju Young
    • The Journal of Korean Society for Radiation Therapy, v.28 no.2, pp.109-121, 2016
  • To verify the accuracy of image-guided radiotherapy using the ExacTrac 6D couch, error values in six directions were randomly assigned and corrected, and the corrected values were compared with CBCT images to check the accuracy of ExacTrac. The treatment coordinates of a Rando head phantom were moved along X, Y, and Z for the translation group and along pitch, roll, and yaw for the rotation group, and the corrected values were moved in all six directions in combination. The Z correction ranged from 1 mm to 23 mm. In the analysis of 3D/3D matching errors between the CBCT images of the phantom corrected with the treatment coordinates, the rotation group showed larger errors than the translation group. In the dose distributions for the corrected treatment coordinates, the dose constraints for normal organs in both groups satisfied the prescription dose. In terms of PHI and PCI, which measure the dose homogeneity of the tumor volume, the rotation group was slightly higher in the low-dose range. This study verified the accuracy of the ExacTrac 6D couch using CBCT: for simple translational movements it showed comparatively accurate correction capability, but for movements involving couch rotation the correction values were inaccurate. Therefore, if the patient's body is likely to change considerably in the rotational directions, or if the pitch, roll, and yaw errors in the ExacTrac correction are large, it is better to correct the treatment coordinates under CBCT image guidance in order to minimize side effects.
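The six correction directions above (X/Y/Z translations plus pitch, roll, and yaw rotations) combine into a rigid transform; the sketch below illustrates why rotational errors couple the axes while translations do not. The axis convention and rotation order are assumptions, since vendors differ.

```python
import numpy as np

def rotation_matrix(pitch, roll, yaw):
    """Combined rotation: pitch about X, roll about Y, yaw about Z (radians).
    This axis convention and order are assumptions; vendors differ."""
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cr, 0, sr], [0, 1, 0], [-sr, 0, cr]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def apply_6d_correction(point, translation, pitch, roll, yaw):
    """Apply a 6D couch correction (rotation, then translation) to a point."""
    return rotation_matrix(pitch, roll, yaw) @ point + translation

# A pure translation shifts a point without changing relative geometry;
# a rotation couples the axes, so an off-axis point moves in more than
# one direction at once, which makes rotational errors harder to correct.
p = np.array([0.0, 0.0, 100.0])  # a point 100 mm from isocenter
shifted = apply_6d_correction(p, np.array([5.0, 0.0, 0.0]), 0.0, 0.0, 0.0)
rotated = apply_6d_correction(p, np.zeros(3), np.deg2rad(2.0), 0.0, 0.0)
```

A 2° pitch alone displaces the off-axis point by about 3.5 mm in Y, even though no Y correction was requested.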

Centroid Neural Network with Bhattacharyya Kernel (Bhattacharyya 커널을 적용한 Centroid Neural Network)

  • Lee, Song-Jae; Park, Dong-Chul
    • The Journal of Korean Institute of Communications and Information Sciences, v.32 no.9C, pp.861-866, 2007
  • A clustering algorithm for Gaussian Probability Distribution Function (GPDF) data, called Centroid Neural Network with a Bhattacharyya Kernel (BK-CNN), is proposed in this paper. The proposed BK-CNN is based on the unsupervised competitive Centroid Neural Network (CNN) and employs a kernel method for data projection. The kernel projects data from the low-dimensional input feature space into a higher-dimensional feature space so that nonlinear problems in the input space can be solved linearly in the feature space. To cluster GPDF data, the Bhattacharyya kernel is used to measure the distance between two probability distributions for data projection. With the kernel method incorporated, the proposed BK-CNN can deal with nonlinear separation boundaries and successfully allocates more code vectors to regions where GPDF data are densely distributed. When applied to GPDF data in an image classification problem, the experimental results show that the proposed BK-CNN algorithm gives 1.7%-4.3% improvement in average classification accuracy over conventional algorithms such as k-means, Self-Organizing Map (SOM), and CNN with a Bhattacharyya distance, referred to as the Bk-Means, B-SOM, and B-CNN algorithms.
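The Bhattacharyya distance between two Gaussians, which underlies the kernel above, has a closed form: a Mahalanobis-like mean term plus a covariance term. The sketch below computes it, taking the kernel as the exponential of the negative distance; whether that exactly matches the paper's kernel normalization is an assumption.

```python
import numpy as np

def bhattacharyya_distance(mu1, cov1, mu2, cov2):
    """Closed-form Bhattacharyya distance between two multivariate Gaussians."""
    cov = 0.5 * (cov1 + cov2)          # average covariance
    diff = mu1 - mu2
    term_mean = 0.125 * diff @ np.linalg.solve(cov, diff)
    term_cov = 0.5 * np.log(np.linalg.det(cov) /
                            np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term_mean + term_cov

def bhattacharyya_kernel(mu1, cov1, mu2, cov2):
    # One common choice: exponentiate the negative distance. Whether this
    # matches the paper's exact kernel definition is an assumption.
    return np.exp(-bhattacharyya_distance(mu1, cov1, mu2, cov2))

mu, cov = np.zeros(2), np.eye(2)
print(bhattacharyya_kernel(mu, cov, mu, cov))  # identical Gaussians -> 1.0
```

For identical distributions the distance is zero and the kernel is 1; separating the means by 2 under identity covariances gives a distance of exactly 0.5 (0.125 × 4).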

P-y Curves from Large Displacement Borehole Testmeter for Railway Bridge Foundation (장변위공내재하시험기를 이용한 철도교 기초의 P-y곡선에 관한 연구)

  • Ryu, Chang-Youl; Lee, Seul; Kim, Dae-Sang; Cho, Kook-Hwan
    • Proceedings of the KSR Conference, 2011.10a, pp.836-842, 2011
  • The lateral stability of bridge foundations against train moving loads, emergency stopping loads, earthquakes, and so on is very important for a railway bridge foundation. A borehole test is much more accurate than laboratory tests because disturbance of the ground at the test site can be minimized. The representative borehole test methods are the dilatometer, pressuremeter, and lateral load tester, which usually provide force-resistance characteristics only in the elastic range. When P-y curves are estimated from these methods, the nonlinearity of the soil, one of its most important characteristics, cannot be captured. Therefore, P-y curves are usually estimated using the elastic moduli ($E_O$, $E_R$) of the lateral pressure-deformation ratio obtained within the range of elastic behavior. Even when a pile foundation is designed from in-situ borehole tests to increase design accuracy, a higher safety factor must be used to ensure the reliability of the design. In this study, a Large Displacement Borehole Testmeter (LDBT) was developed to measure the nonlinear characteristics of the soil, so that P-y curves can be obtained directly from the equipment. This paper compares P-y curves measured with the developed LDBT, theoretical methods based on geotechnical investigations, and P-y curves back-calculated from field tests. The results show that the P-y curves measured with the LDBT match the back-calculated curves from field tests well when scale effects are applied for sand and clay, respectively.
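The soil nonlinearity that elastic-range borehole tests miss is commonly represented by a saturating p-y relation. The hyperbolic form below is a standard illustrative choice, not the paper's measured curves, and the stiffness and resistance values are hypothetical.

```python
def hyperbolic_py(y, k_ini, p_ult):
    """Hyperbolic p-y curve: initial stiffness k_ini, ultimate resistance p_ult.
    The functional form and parameter values are illustrative assumptions,
    not the paper's measured data."""
    return y / (1.0 / k_ini + y / p_ult)

# Small displacement: nearly linear response, p ~ k_ini * y (the elastic
# range that conventional borehole tests capture). Large displacement:
# resistance saturates toward p_ult (the nonlinearity the LDBT measures).
k_ini = 10000.0   # kN/m per m, hypothetical initial subgrade stiffness
p_ult = 200.0     # kN/m, hypothetical ultimate lateral resistance
for y in (0.001, 0.01, 0.1, 1.0):
    print(f"y = {y:6.3f} m  ->  p = {hyperbolic_py(y, k_ini, p_ult):7.2f} kN/m")
```

The curve's initial slope equals k_ini and its asymptote equals p_ult, which is why fitting only the elastic range cannot recover the ultimate resistance.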

Shape Design Optimization using Isogeometric Analysis Method (등기하 해석법을 이용한 형상 최적 설계)

  • Ha, Seung-Hyun; Cho, Seon-Ho
    • Proceedings of the Computational Structural Engineering Institute Conference, 2008.04a, pp.216-221, 2008
  • Shape design optimization for a linear elasticity problem is performed using the isogeometric analysis method. In many design optimization problems for real engineering models, the initial raw data usually come from a CAD modeler. The designer must then convert these CAD data into finite element mesh data, because conventional design optimization tools are generally based on finite element analysis. This conversion introduces numerical error due to geometry approximation, which causes accuracy problems not only in response analysis but also in design sensitivity analysis. As a remedy, the isogeometric analysis method is a promising approach to shape design optimization. Its main idea is that the basis functions used in the analysis are exactly the same as those representing the geometry, and this geometrically exact model can be used for shape sensitivity analysis and design optimization as well. From the shape design sensitivity point of view, precise shape sensitivity is essential for gradient-based optimization. In conventional finite-element-based optimization, higher-order information such as normal vectors and curvature terms is inaccurate or even missing due to the use of linear interpolation functions. B-spline basis functions, on the other hand, have sufficient continuity and smooth derivatives, so normal vectors and curvature terms can be evaluated exactly, which eventually yields precise optimal shapes. In this article, the isogeometric analysis method is utilized for shape design optimization. By virtue of B-spline basis functions, an exact geometry can be handled without finite element meshes. Moreover, the initial CAD data are used throughout the optimization process, including response analysis, shape sensitivity analysis, design parameterization, and shape optimization, without subsequent communication with the CAD description.
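The B-spline basis functions that make exact normals and curvatures possible can be evaluated with the Cox-de Boor recursion. A minimal sketch, using a clamped quadratic knot vector chosen purely for illustration:

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion: value of the i-th degree-p B-spline basis at u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, u, knots))
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right

# Quadratic basis on a clamped (open) knot vector: the functions are C^1
# across interior knots, which is what keeps normals and curvatures smooth.
knots = [0, 0, 0, 1, 2, 3, 3, 3]
n = len(knots) - 2 - 1  # number of degree-2 basis functions = 5
u = 1.5
values = [bspline_basis(i, 2, u, knots) for i in range(n)]
print(values, sum(values))  # nonnegative, and they sum to 1 (partition of unity)
```

The partition-of-unity and smoothness properties checked here are what the abstract relies on: derivatives of these bases, unlike piecewise-linear FE shape functions, carry exact curvature information.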

Analysis of Shaping Parameters Influencing on Dimensional Accuracy in Single Point Incremental Sheet Metal Forming (음각 점진성형에서 치수정밀도에 영향을 미치는 형상 파라미터 분석)

  • Kang, Jae Gwan; Kang, Han Soo; Jung, Jong-Yun
    • Journal of Korean Society of Industrial and Systems Engineering, v.39 no.4, pp.90-96, 2016
  • Incremental sheet forming (ISF) is a highly versatile and flexible process for rapid manufacturing of complex sheet metal parts. Compared to conventional sheet forming processes, ISF has a clear advantage in manufacturing small-batch or customized parts: it needs only a die-less machine, whereas conventional sheet forming requires expensive facilities such as dies, molds, and presses, which take a long time to prepare. For this reason, ISF is continuously used for small-batch and prototype manufacturing in industry. However, the spring-back induced during incremental forming is a critical drawback for precision manufacturing, since sheet metal, the raw material for ISF, is resilient. The objective of this research is to investigate how geometrical shaping parameters affect dimensional errors. To analyze the spring-back occurring in the process, this study experimented on Al 1015 material in ISF. The experiments were designed with an $L_8(2^7)$ orthogonal array, and the collected data were analyzed statistically with the ANOVA method. The analysis shows that the shape type and the slope of the bottom are significant factors for dimensional error, whereas the shape size, the shape height, and the side angle are not. More error was incurred on the pyramid than on the circular type, and the sloped bottom showed higher errors than the flat one.
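The $L_8(2^7)$ orthogonal array used for the design of experiments can be built from three base two-level factors plus their interaction columns. This is one standard construction; how the paper's five factors were assigned to columns is not specified here.

```python
from itertools import product

# Construct the L8(2^7) orthogonal array: the 8 runs are all combinations
# of three base two-level factors (a, b, c); the remaining four columns are
# their mod-2 interaction sums. This is one standard construction.
runs = []
for a, b, c in product((0, 1), repeat=3):
    runs.append((a, b, (a + b) % 2, c, (a + c) % 2, (b + c) % 2, (a + b + c) % 2))

# Balance property: every column holds four 0s and four 1s, so each factor
# level is tested equally often, which is what makes ANOVA on the array fair.
for col in range(7):
    assert sum(run[col] for run in runs) == 4
for run in runs:
    print(run)
```

Orthogonality goes further than balance: every pair of columns contains each of the four level combinations exactly twice, so main effects can be estimated without confounding among the assigned columns.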

Korean Semantic Role Labeling Using Case Frame Dictionary and Subcategorization (격틀 사전과 하위 범주 정보를 이용한 한국어 의미역 결정)

  • Kim, Wan-Su; Ock, Cheol-Young
    • Journal of KIISE, v.43 no.12, pp.1376-1384, 2016
  • Computers require the capability to analyze and process all possibilities of human expression in order to process sentences as human beings do, so linguistic information processing forms the initial basis. When analyzing a sentence syntactically, it is necessary to divide the sentence into components, find the obligatory arguments of the predicates, identify the sentence core, and understand the semantic relations between the arguments and predicates. In this study, we applied a case frame dictionary based on The Korean Standard Dictionary of The National Institute of the Korean Language, and used a CRF model incorporating the subcategorization of predicates from the Korean Lexical Semantic Network (UWordMap) for semantic role labeling. Semantic roles were tagged automatically by the CRF model, whose features are the words, the predicates, the case frame dictionary, and the hypernyms of words. This method achieved an accuracy of 83.13%, higher than the 81.2% of the existing method.
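The kind of per-token feature vector such a CRF consumes (word, predicate, hypernym, case-frame roles) can be sketched as below. The feature names and the toy lexicons are illustrative assumptions, not the paper's actual dictionary or UWordMap entries, and a real pipeline would feed these dicts to a CRF library rather than use them directly.

```python
# Toy stand-ins for the lexical resources; the entries are hypothetical.
CASE_FRAME = {"eat": ["Agent", "Theme"]}           # toy case-frame dictionary
HYPERNYM = {"student": "person", "apple": "food"}  # toy hypernym lexicon

def token_features(tokens, predicate, i):
    """Build a CRF-style feature dict for token i relative to the predicate."""
    word = tokens[i]
    return {
        "word": word,
        "predicate": predicate,
        "hypernym": HYPERNYM.get(word, "NONE"),
        "frame_roles": "|".join(CASE_FRAME.get(predicate, [])),
        "position": "before" if i < tokens.index(predicate) else "after",
    }

tokens = ["student", "apple", "eat"]  # toy head-final word order
feats = token_features(tokens, "eat", 0)
print(feats)
```

Hypernym features let the model generalize from "student" to any "person"-type argument, which is the point of adding lexical-semantic resources to the feature set.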

Streamlined GoogLeNet Algorithm Based on CNN for Korean Character Recognition (한글 인식을 위한 CNN 기반의 간소화된 GoogLeNet 알고리즘 연구)

  • Kim, Yeon-gyu; Cha, Eui-young
    • Journal of the Korea Institute of Information and Communication Engineering, v.20 no.9, pp.1657-1665, 2016
  • Various fields are being researched through deep learning using CNNs (Convolutional Neural Networks), and these studies show excellent performance in image recognition. In this paper, we provide a streamlined GoogLeNet CNN architecture that is capable of learning a large-scale Korean character database. The experimental data used in this paper is PHD08, a large-scale Korean character database with 2,187 samples for each of 2,350 Korean characters, for a total of 5,139,450 samples. After training, the streamlined GoogLeNet showed over 99% test accuracy on PHD08. We also created additional Korean character data with fonts not included in PHD08 in order to ensure objectivity, and compared the classification performance of the streamlined GoogLeNet with other OCR programs. While the other OCR programs showed classification success rates of 66.95% to 83.16%, the streamlined GoogLeNet achieved a classification success rate of 89.14%, higher than that of the OCR programs.
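One reason an Inception-based GoogLeNet can be streamlined is the 1x1 "bottleneck" convolution placed before large convolutions, which cuts parameter counts sharply. The channel sizes below are illustrative, not the paper's actual streamlined configuration.

```python
def conv_params(k, c_in, c_out):
    """Parameter count of a k x k convolution layer: weights plus biases."""
    return k * k * c_in * c_out + c_out

# Inception-style bottleneck: a 1x1 convolution first reduces the channel
# count, so the expensive 5x5 convolution operates on far fewer channels.
c_in, c_out, bottleneck = 192, 96, 16
direct = conv_params(5, c_in, c_out)
reduced = conv_params(1, c_in, bottleneck) + conv_params(5, bottleneck, c_out)
print(direct, reduced)  # -> 460896 41584
```

With these (hypothetical) channel sizes the bottlenecked path uses roughly a tenth of the parameters of the direct 5x5 convolution, which is why stacking such modules keeps model memory size modest.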