• Title/Summary/Keyword: TREE FEATURE

Head Pose Estimation Based on Perspective Projection Using PTZ Camera (원근투영법 기반의 PTZ 카메라를 이용한 머리자세 추정)

  • Kim, Jin Suh;Lee, Gyung Ju;Kim, Gye Young
    • KIPS Transactions on Software and Data Engineering / v.7 no.7 / pp.267-274 / 2018
  • This paper describes a head pose estimation method using a PTZ (Pan-Tilt-Zoom) camera. When the external parameters of a camera are changed by rotation and translation, the estimated pose for the same head also varies. In this paper, we propose a new method that estimates the head pose independently of changes in the PTZ camera parameters. The proposed method consists of three steps: face detection, feature extraction, and pose estimation, which respectively use the MCT (Modified Census Transform) feature, the facial regression tree method, and the POSIT (Pose from Orthography and Scaling with ITeration) algorithm. The existing POSIT algorithm does not consider the rotation of the camera, so this paper improves POSIT based on perspective projection in order to estimate the head pose robustly even when the external parameters of the camera change. Through experiments, we confirmed that the RMSE (Root Mean Square Error) of the proposed method is about $0.6^{\circ}$ lower than that of the conventional method.
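
The paper's improved POSIT is not reproduced in the abstract, so it cannot be sketched exactly; as a rough illustration of perspective-projection pose estimation from 2D facial landmarks, the following uses OpenCV's solvePnP. The 3D model points, focal length, and the absence of lens distortion are all illustrative assumptions, not the paper's settings.

```python
import numpy as np
import cv2

# Illustrative 3D face model points (mm): nose tip, chin, eye corners,
# mouth corners. Generic values for demonstration, not the paper's model.
MODEL_POINTS = np.array([
    (0.0,    0.0,    0.0),    # nose tip
    (0.0,  -63.6,  -12.5),    # chin
    (-43.3,  32.7,  -26.0),   # left eye outer corner
    (43.3,   32.7,  -26.0),   # right eye outer corner
    (-28.9, -28.9,  -24.1),   # left mouth corner
    (28.9,  -28.9,  -24.1),   # right mouth corner
], dtype=np.float64)

def estimate_head_pose(image_points, focal_length, center):
    """Estimate head rotation/translation from 2D landmarks by
    perspective projection (solvePnP as a stand-in for improved POSIT)."""
    camera_matrix = np.array([[focal_length, 0, center[0]],
                              [0, focal_length, center[1]],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points,
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    return R, tvec
```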

Web Document Classification Based on Hangeul Morpheme and Keyword Analyses (한글 형태소 및 키워드 분석에 기반한 웹 문서 분류)

  • Park, Dan-Ho;Choi, Won-Sik;Kim, Hong-Jo;Lee, Seok-Lyong
    • The KIPS Transactions:PartD / v.19D no.4 / pp.263-270 / 2012
  • With the recent development of high-speed Internet and massive database technology, the number of web documents is increasing rapidly, so classifying those documents automatically is becoming important. In this study, we propose an effective method to extract document features based on Hangeul morpheme and keyword analyses, and to classify unstructured documents automatically by predicting their subjects. To extract document features, we first select terms using a morpheme analyzer, form the keyword set based on term frequency and subject-discriminating power, and score each keyword using its discriminating power. Then, we generate the classification model using commercial software that implements the decision tree, neural network, and SVM (support vector machine). Experimental results show that the proposed feature extraction method achieves considerable performance in classifying web documents by subject, e.g., an average precision of 0.90 and recall of 0.84 in the case of the decision tree.
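
The abstract does not give the exact scoring formula, so the sketch below is only one plausible reading: keywords scored by term frequency weighted by a simple discriminating power (the in-subject share of a term's total occurrences). The function name and the `min_tf` cutoff are hypothetical.

```python
from collections import Counter, defaultdict

def keyword_scores(docs_by_subject, min_tf=5):
    """Score terms per subject as tf x discriminating power, where the
    discriminating power is approximated by the fraction of a term's
    occurrences that fall inside the subject. Illustrative only; the
    paper's actual formulation may differ."""
    term_total = Counter()
    term_by_subject = defaultdict(Counter)
    for subject, docs in docs_by_subject.items():
        for doc in docs:            # doc: list of morpheme-analyzed terms
            for term in doc:
                term_total[term] += 1
                term_by_subject[subject][term] += 1
    scores = {}
    for subject, counts in term_by_subject.items():
        scores[subject] = {
            t: tf * (tf / term_total[t])   # tf x discriminating power
            for t, tf in counts.items() if term_total[t] >= min_tf
        }
    return scores
```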

A Study on the Work-time Estimation for Block Erections Using Stacking Ensemble Learning (Stacking Ensemble Learning을 활용한 블록 탑재 시수 예측)

  • Kwon, Hyukcheon;Ruy, Wonsun
    • Journal of the Society of Naval Architects of Korea / v.56 no.6 / pp.488-496 / 2019
  • The estimation of block erection work time at a dock is one of the important factors when establishing or managing the total shipbuilding schedule. A natural approach to predicting the work time is to use existing block erection data. Generally, the work time per unit is the product of a coefficient value, a quantity, and a product value. Previously, the work time per unit was determined statistically from unit load data; here, we estimate it through the work time coefficient of series ships using machine learning. In machine learning, the outcome depends mainly on how the training data are organized. Therefore, in this study, we use feature engineering to determine which variables should be used as features and to check their influence on the result. In order to obtain the coefficient value of each block, we apply ensemble learning methods, which are actively used nowadays. Among the many ensemble techniques, the final model is constructed with stacking, built on existing ensemble models (Decision Tree, Random Forest, Gradient Boost, Square Loss Gradient Boost, XGBoost), and the accuracy is maximized by selecting three candidates among all models. Finally, the results of this study are verified against the predicted total work time for one ship of the same series.
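
A minimal stacking sketch in scikit-learn, assuming a regression setup (features per block, target work-time coefficient). The three base learners stand in for the paper's selected candidates; the actual candidate set, hyperparameters, and meta-learner are not given in the abstract.

```python
from sklearn.ensemble import (RandomForestRegressor, GradientBoostingRegressor,
                              StackingRegressor)
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import Ridge

# Three illustrative base learners; the paper also mentions XGBoost and a
# square-loss gradient boost among its candidates.
base_learners = [
    ("dt", DecisionTreeRegressor(max_depth=6)),
    ("rf", RandomForestRegressor(n_estimators=200)),
    ("gb", GradientBoostingRegressor(loss="squared_error")),
]

# The meta-learner (Ridge) and cv=5 are assumptions for the sketch.
stack = StackingRegressor(estimators=base_learners,
                          final_estimator=Ridge(), cv=5)

# X: engineered block features, y: work-time coefficient per block
# stack.fit(X_train, y_train); y_pred = stack.predict(X_test)
```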

Analysis of Dimensionality Reduction Methods Through Epileptic EEG Feature Selection for Machine Learning in BCI (BCI에서 기계 학습을 위한 간질 뇌파 특징 선택을 통한 차원 감소 방법 분석)

  • Tong, Yang;Aliyu, Ibrahim;Lim, Chang-Gyoon
    • The Journal of the Korea institute of electronic communication sciences / v.13 no.6 / pp.1333-1342 / 2018
  • Until now, electroencephalography (EEG) has been the most important and convenient method for the diagnosis and treatment of epilepsy. However, it is difficult to identify the wave characteristics of epileptic EEG signals because they are very weak, non-stationary, and have strong background noise. In this paper, we analyze the effect of dimensionality reduction methods on epileptic EEG feature selection and classification. Three dimensionality reduction methods were investigated: Principal Component Analysis (PCA), Kernel Principal Component Analysis (KPCA), and Linear Discriminant Analysis (LDA). The performance of each method was evaluated using Support Vector Machine (SVM), Logistic Regression (LR), K-Nearest Neighbor (K-NN), Decision Tree (DT), and Random Forest (RF) classifiers. In the experiments, PCA recorded its highest accuracy of 75% with SVM, LR, and K-NN; KPCA recorded its best performance of 85% with SVM and K-NN; and LDA achieved 100% accuracy with K-NN. Thus, LDA dimensionality reduction is found to provide the best classification result for epileptic EEG signals.
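
A compact way to reproduce this kind of comparison in scikit-learn is to cross-validate every reducer/classifier pipeline. The component count (10) and cross-validation folds are assumptions; the paper's feature extraction and data split are not specified in the abstract.

```python
from sklearn.decomposition import PCA, KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

reducers = {"PCA": PCA(n_components=10),
            "KPCA": KernelPCA(n_components=10, kernel="rbf"),
            "LDA": LinearDiscriminantAnalysis()}
classifiers = {"SVM": SVC(), "LR": LogisticRegression(max_iter=1000),
               "K-NN": KNeighborsClassifier(), "DT": DecisionTreeClassifier(),
               "RF": RandomForestClassifier()}

def compare(X, y):
    """Print cross-validated accuracy for each reducer/classifier pair."""
    for rname, reducer in reducers.items():
        for cname, clf in classifiers.items():
            pipe = make_pipeline(reducer, clf)
            acc = cross_val_score(pipe, X, y, cv=5).mean()
            print(f"{rname} + {cname}: {acc:.2f}")
```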

Development of a Forensic Analyzing Tool based on Cluster Information of HFS+ filesystem

  • Cho, Gyu-Sang
    • International Journal of Internet, Broadcasting and Communication / v.13 no.3 / pp.178-192 / 2021
  • File system forensics typically focuses on the contents or timestamps of a file, and analysis is commonly file/directory-centered. But to recover a deleted file on the disk, or to use a carving technique to find and connect partial missing content, the evidence must be analyzed with cluster-centered analysis. Forensic tools such as EnCase, TSK, and X-Ways provide a basic ability to obtain information about disk clusters, but these are not the core functions of those tools. Alternatively, Sysinternals' DiskView provides a more intuitive visualization, which makes it easier to obtain information about disk clusters. In addition, most current tools target Windows; there are very few forensic analysis tools for macOS, and cluster analysis tools are rarer still. In this paper, we developed a tool named FACT (Forensic Analyzer based Cluster Information Tool) for analyzing the state of clusters in an HFS+ file system for digital forensics. FACT consists of three features: cluster-based analysis, B-tree-based analysis, and directory-based analysis. Cluster-based analysis is the main feature, developed specifically for cluster analysis, and the tool's cluster visualization plays the central role. FACT was programmed in two languages: the core part for analyzing the HFS+ file system was written in C/C++, and the visualization part was implemented with the Python Tkinter library. The features of this study can evolve into key forensic tools for macOS, and the additional GUI capabilities can be very important for cluster-centric forensic analysis.
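
For a sense of what cluster-state analysis involves: HFS+ tracks allocation in a bitmap (the allocation file), one bit per allocation block, most significant bit first. The sketch below only decodes such a bitmap; how the allocation file is located and read from a volume (and FACT's actual C/C++ implementation) is outside its scope, and the function name is hypothetical.

```python
def cluster_usage(bitmap_bytes, total_blocks):
    """Decode an HFS+-style allocation bitmap: one bit per allocation
    block, MSB first within each byte, 1 = allocated."""
    states = []
    for i in range(total_blocks):
        byte = bitmap_bytes[i // 8]
        allocated = (byte >> (7 - (i % 8))) & 1
        states.append(bool(allocated))
    return states

# Example: 0b10110000 -> blocks 0, 2, 3 allocated
print(cluster_usage(bytes([0b10110000]), 8))
```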

Ontology Alignment based on Parse Tree Kernel using Structural and Semantic Information (구조 및 의미 정보를 활용한 파스 트리 커널 기반의 온톨로지 정렬 방법)

  • Son, Jeong-Woo;Park, Seong-Bae
    • Journal of KIISE:Software and Applications / v.36 no.4 / pp.329-334 / 2009
  • Ontology alignment has two major problems. First, the features used for ontology alignment are usually defined by experts, so it is quite possible for some critical features to be excluded from the feature set. Second, the semantic and structural similarities are usually computed independently and then combined in an ad hoc way, with weights determined heuristically. This paper proposes the modified parse tree kernel (MPTK) for ontology alignment. In order to compute the similarity between entities in the ontologies, a tree is adopted as the representation of an ontology. After transforming an ontology into a set of trees, their similarity is computed using MPTK without explicit enumeration of features. In computing the similarity between trees, approximate string matching is adopted to naturally reflect not only the structural but also the semantic information. In a series of experiments on a standard data set, the kernel method outperforms other structural similarity measures such as GMO, and the proposed method shows state-of-the-art performance in ontology alignment.
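
MPTK itself is not specified in the abstract; the toy sketch below only conveys the general idea of a convolution-style tree kernel whose node comparison uses approximate string matching instead of exact label equality. The recursion, decay factor, and positional child pairing are simplifications for illustration.

```python
import difflib

def label_sim(a, b):
    """Approximate string matching on node labels, standing in for the
    combined semantic/structural comparison in MPTK."""
    return difflib.SequenceMatcher(None, a, b).ratio()

class Node:
    def __init__(self, label, children=()):
        self.label, self.children = label, list(children)

def tree_kernel(n1, n2, decay=0.5):
    """Similarity of the roots plus a decayed contribution of matched
    child pairs; a much-simplified convolution tree kernel."""
    sim = label_sim(n1.label, n2.label)
    if not n1.children or not n2.children:
        return sim
    child_score = sum(tree_kernel(a, b, decay)
                      for a, b in zip(n1.children, n2.children))
    return sim + decay * child_score

t1 = Node("Vehicle", [Node("Car"), Node("Truck")])
t2 = Node("Vehicles", [Node("Cars"), Node("Lorry")])
print(tree_kernel(t1, t2))
```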

Classification of Land Cover over the Korean Peninsula Using Polar Orbiting Meteorological Satellite Data (극궤도 기상위성 자료를 이용한 한반도의 지면피복 분류)

  • Suh, Myoung-Seok;Kwak, Chong-Heum;Kim, Hee-Soo;Kim, Maeng-Ki
    • Journal of the Korean earth science society / v.22 no.2 / pp.138-146 / 2001
  • Land cover over the Korean peninsula was classified using multi-temporal NOAA/AVHRR (Advanced Very High Resolution Radiometer) data. Four types of phenological data derived from 10-day composited NDVI (Normalized Difference Vegetation Index), maximum and annual mean land surface temperature, and topographical data were used not only to reduce the data volume but also to increase the accuracy of classification. A self-organizing feature map (SOFM), a kind of neural network, was used for clustering the satellite data, and a decision tree was used for classifying the clusters. When the classification results were compared with the time series of NDVI and other available ground truth data, urban areas, agricultural areas, deciduous trees, and evergreen trees were clearly classified.
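
A minimal self-organizing feature map in NumPy, for a sense of the clustering stage: grid weights are pulled toward each sample with a neighborhood that shrinks over time. The grid size, learning-rate, and neighborhood schedules are illustrative, not the paper's settings.

```python
import numpy as np

def train_sofm(X, grid=(5, 5), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small SOFM; returns the (grid_h*grid_w, n_features) weights.
    A pixel's cluster label is the index of its best-matching unit."""
    rng = np.random.default_rng(seed)
    h, w = grid
    W = rng.normal(size=(h * w, X.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    n_steps, step = epochs * len(X), 0
    for _ in range(epochs):
        for x in rng.permutation(X):
            frac = step / n_steps
            lr = lr0 * (1 - frac)                  # decaying learning rate
            sigma = sigma0 * (1 - frac) + 1e-3     # shrinking neighborhood
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))    # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)  # grid distances
            W += lr * np.exp(-d2 / (2 * sigma ** 2))[:, None] * (x - W)
            step += 1
    return W

# A decision tree would then map cluster labels (plus phenological and
# topographical variables) to land-cover classes, as in the paper.
```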

Image Coding Using DCT Map and Binary Tree-structured Vector Quantizer (DCT 맵과 이진 트리 구조 벡터 양자화기를 이용한 영상 부호화)

  • Jo, Seong-Hwan;Kim, Eung-Seong
    • The Transactions of the Korea Information Processing Society / v.1 no.1 / pp.81-91 / 1994
  • A DCT map and a new codebook design algorithm based on the two-dimensional discrete cosine transform (2D-DCT) are presented for an image vector quantization coder. We divide the image into small subblocks and then, using the 2D DCT, separate them into blocks that are hard to code but carry most of the visual information and blocks that are easy to code but carry little visual information, from which a DCT map is made. According to this map, the significant features of the training image are extracted using the 2D DCT. A codebook is generated by partitioning the training set into a binary tree: each training vector at a nonterminal node of the binary tree is directed to one of the two descendants by comparing a single feature associated with that node to a threshold. Compared with the pairwise nearest neighbor (PNN) and classified VQ (CVQ) algorithms on the 'Lenna' and 'Boat' images, the new algorithm reduces computation time and shows better picture quality, with gains of 0.45 dB and 0.33 dB over PNN and 0.05 dB and 0.1 dB over CVQ, respectively.
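
A sketch of the DCT-map idea under one plausible reading: blocks are flagged as "hard to code" when their AC-coefficient energy is high. The block size, energy measure, and threshold are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.fft import dctn

def dct_map(image, block=8, threshold=50.0):
    """Return a per-block map: 1 where the subblock's AC energy exceeds
    the threshold (hard to code, visually significant), else 0."""
    h, w = image.shape
    hb, wb = h // block, w // block
    dmap = np.zeros((hb, wb), dtype=np.uint8)
    for i in range(hb):
        for j in range(wb):
            blk = image[i*block:(i+1)*block,
                        j*block:(j+1)*block].astype(float)
            coeffs = dctn(blk, norm="ortho")            # 2-D DCT of subblock
            ac_energy = (coeffs ** 2).sum() - coeffs[0, 0] ** 2
            dmap[i, j] = ac_energy > threshold ** 2
    return dmap
```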

Weather Classification and Fog Detection using Hierarchical Image Tree Model and k-means Segmentation in Single Outdoor Image (싱글 야외 영상에서 계층적 이미지 트리 모델과 k-평균 세분화를 이용한 날씨 분류와 안개 검출)

  • Park, Ki-Hong
    • Journal of Digital Contents Society / v.18 no.8 / pp.1635-1640 / 2017
  • In this paper, a hierarchical image tree model for weather classification from a single outdoor image is defined, and a weather classification algorithm using image intensity and a k-means segmentation image is proposed. At the first level of the hierarchical image tree model, indoor and outdoor images are distinguished. At the second level, whether the outdoor image is a daytime, night, or sunrise/sunset image is judged using the intensity and the k-means segmentation image. At the last level, if an image was classified as daytime at the second level, it is finally estimated whether it is a sunny or a foggy image based on the edge map and fog rate. Experiments were conducted to verify the weather classification, and the results show that the proposed method effectively detects weather features in a given image.
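
A sketch of the second-level decision only, assuming intensity clustering with k-means: the mean brightness of the cluster centers separates night, sunrise/sunset, and daytime. The thresholds, k, and the decision rule are illustrative assumptions; the first level (indoor/outdoor) and third level (sunny/foggy via edge map and fog rate) are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def classify_daypart(gray, k=3, night_thr=60, dusk_thr=110):
    """Cluster pixel intensities of a grayscale image with k-means and
    decide night / sunrise-sunset / daytime from the centers' brightness."""
    pixels = gray.reshape(-1, 1).astype(float)
    centers = KMeans(n_clusters=k, n_init=10).fit(pixels).cluster_centers_
    brightness = centers.mean()
    if brightness < night_thr:
        return "night"
    if brightness < dusk_thr:
        return "sunrise/sunset"
    return "daytime"   # third level would test sunny vs. foggy here
```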

Tree Component Model : Component Composition with Hybrid Message Passing (트리 컴포넌트 모델 : 하이브리드 메시지 전달을 사용한 컴포넌트 조합)

  • Huh, Je-Min;Kim, Ji-Hong
    • The KIPS Transactions:PartD / v.15D no.5 / pp.659-668 / 2008
  • Recently, a component model based on the exogenous connector has been proposed, in which control is separated from computation by managing the beginning and result of method calls in the connector. Although it allows loose coupling between components, it suffers from a potential preponderance of element objects in the system as the number of connectors and connection levels increases. In this paper we propose the Tree Component Model with hybrid message passing, which combines direct and indirect message passing. In our model, components are wrapped by interfaces, and control is separated from computation by using only their interface references. A unique feature is that the composition structure of the components always forms a tree. Demonstration and comparison show that the Tree Component Model is practically applicable and reduces the number of objects needed to mediate message passing and build the system.
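
A toy sketch of the structural idea only, under loose assumptions since the abstract does not specify the model's interfaces: components are reached exclusively through wrapping interfaces whose composition always forms a tree, with control routed indirectly down the tree and the final computation invoked by a direct call. All class and method names are hypothetical.

```python
class Component:
    """Leaf component: pure computation, reachable only via its interface."""
    def __init__(self, name):
        self.name = name
    def compute(self, msg):
        return f"{self.name} handled {msg}"

class Interface:
    """Wraps either a component or a subtree of child interfaces, so the
    composition structure is always a tree. Routing through interfaces is
    the indirect part; the leaf invocation is the direct part (hybrid)."""
    def __init__(self, component=None, children=()):
        self.component = component
        self.children = list(children)
    def send(self, msg):
        if self.component is not None:
            return self.component.compute(msg)   # direct message passing
        return [child.send(msg) for child in self.children]  # indirect

leaves = [Interface(Component("A")), Interface(Component("B"))]
root = Interface(children=leaves)
print(root.send("ping"))
```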