• Title/Summary/Keyword: Binary-tree


A Comparative Study of Predictive Factors for Hypertension using Logistic Regression Analysis and Decision Tree Analysis

  • SoHyun Kim;SungHyoun Cho
    • Physical Therapy Rehabilitation Science
    • /
    • v.12 no.2
    • /
    • pp.80-91
    • /
    • 2023
  • Objective: The purpose of this study is to identify factors that affect the incidence of hypertension using logistic regression and decision tree analysis, and to build and compare predictive models. Design: Secondary data analysis study. Methods: We analyzed 9,859 subjects from the 2019 Korean Health Panel annual data provided by the Korea Institute for Health and Social Affairs and the National Health Insurance Service. Frequency analysis, chi-square tests, binary logistic regression, and decision tree analysis were performed on the data. Results: In the logistic regression analysis, those who were 60 years of age or older (odds ratio, OR=68.801, p<0.001), those who were divorced/widowed/separated (OR=1.377, p<0.001), middle school graduates or below (OR=1, reference), those who did not walk at all (OR=1, reference), those who were obese (OR=5.109, p<0.001), and those with poor subjective health status (OR=2.163, p<0.001) were more likely to develop hypertension. In the decision tree, those over 60 years of age who were overweight or obese and middle school graduates or below had the highest probability of developing hypertension, at 83.3%. Logistic regression analysis showed a specificity of 85.3% and a sensitivity of 47.9%, while decision tree analysis showed a specificity of 81.9% and a sensitivity of 52.9%. In classification accuracy, logistic regression and decision tree analysis achieved 73.6% and 72.6%, respectively. Conclusions: Both logistic regression and decision tree analysis were adequate to explain the predictive model, and both analysis methods can serve as useful tools for constructing a predictive model for hypertension.
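To make the comparison concrete, the sketch below trains a binary logistic regression and a decision tree on a hypothetical hypertension dataset and reports accuracy, sensitivity, and specificity. The file name and column names are illustrative assumptions, not the study's actual variables.

```python
# Minimal sketch (assumed data layout): compare logistic regression and a
# decision tree for binary hypertension prediction.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, recall_score

df = pd.read_csv("health_panel_2019.csv")          # hypothetical file name
X = pd.get_dummies(df[["age_group", "marital", "education",
                       "walking", "bmi_group", "subjective_health"]])
y = df["hypertension"]                             # 1 = diagnosed, 0 = not

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("tree", DecisionTreeClassifier(max_depth=4))]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    # sensitivity = recall on the positive class, specificity = recall on the negative class
    print(name,
          "accuracy=%.3f" % accuracy_score(y_te, pred),
          "sensitivity=%.3f" % recall_score(y_te, pred, pos_label=1),
          "specificity=%.3f" % recall_score(y_te, pred, pos_label=0))
```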

Geohashed Spatial Index Method for a Location-Aware WBAN Data Monitoring System Based on NoSQL

  • Li, Yan;Kim, Dongho;Shin, Byeong-Seok
    • Journal of Information Processing Systems
    • /
    • v.12 no.2
    • /
    • pp.263-274
    • /
    • 2016
  • The exceptional development of electronic device technology, the miniaturization of mobile devices, and advances in telecommunication technology have made it possible to monitor human biometric data anywhere and anytime using different types of wearable or embedded sensors. In daily life, mobile devices can collect wireless body area network (WBAN) data, and the co-collected location data is also important for disease analysis. In order to efficiently analyze WBAN data, including location information, and to support medical analysis services, we propose a geohash-based spatial index method for a location-aware WBAN data monitoring system on a NoSQL database system; it uses an R-tree-based global tree to organize a patient's real-time location data and a B-tree-based local tree to manage historical data. This spatial index method supports a cloud-based location-aware WBAN data monitoring system. To evaluate the proposed method, we built a system that handles JavaScript Object Notation (JSON) and Binary JSON (BSON) document data on mobile gateway devices. The proposed spatial index method can efficiently process location-based queries for medical signal monitoring. We simulated a small system on MongoDB, a document-based NoSQL database system, using our proposed index method and evaluated its performance.
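As a rough illustration of the geohash idea the paper builds on (not the authors' system), the sketch below encodes a latitude/longitude pair into a base-32 geohash string; points that share a prefix lie in the same cell, which is what makes a plain string index usable for location-based queries. The coordinates and the collection name in the comment are assumptions.

```python
# Standard geohash encoding: interleave longitude/latitude bisection bits,
# emit one base-32 character per 5 bits.
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, precision=8):
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    chars, even, bit_count, ch = [], True, 0, 0
    while len(chars) < precision:
        if even:                                  # longitude bit
            mid = (lon_range[0] + lon_range[1]) / 2
            if lon >= mid:
                ch, lon_range[0] = (ch << 1) | 1, mid
            else:
                ch, lon_range[1] = ch << 1, mid
        else:                                     # latitude bit
            mid = (lat_range[0] + lat_range[1]) / 2
            if lat >= mid:
                ch, lat_range[0] = (ch << 1) | 1, mid
            else:
                ch, lat_range[1] = ch << 1, mid
        even = not even
        bit_count += 1
        if bit_count == 5:
            chars.append(BASE32[ch])
            bit_count, ch = 0, 0
    return "".join(chars)

code = geohash_encode(37.5665, 126.9780)          # assumed example coordinates (Seoul)
print(code)
# A prefix match over stored geohash strings then retrieves nearby readings,
# e.g. in MongoDB: db.wban.find({"geohash": {"$regex": "^" + code[:5]}})
```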

A Polynomial-time Algorithm to Find Optimal Path Decompositions of Trees (트리의 최적 경로 분할을 위한 다항시간 알고리즘)

  • An, Hyung-Chan
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.5_6
    • /
    • pp.195-201
    • /
    • 2007
  • A minimum terminal path decomposition of a tree is defined as a partition of the tree into edge-disjoint terminal-to-terminal paths that minimizes the weight of the longest path. In this paper, we present an $O(|V|^2)$-time algorithm to find a minimum terminal path decomposition of trees. The algorithm reduces the given optimization problem to a binary search using the corresponding decision problem: deciding whether the cost of a minimum terminal path decomposition is at most $l$. This decision problem is solved by dynamic programming in a single traversal of the tree.
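The reduction described above follows a standard pattern: binary-search the answer over candidate limits using a monotone decision oracle. The sketch below shows that pattern with a stand-in oracle; the paper's actual oracle is the dynamic program that runs in a single tree traversal.

```python
# Minimal sketch of the optimization-to-decision reduction. The oracle passed
# in as `feasible` is a hypothetical stub here, not the paper's DP.
from typing import Callable, List

def minimize_longest_path(candidates: List[int],
                          feasible: Callable[[int], bool]) -> int:
    """Return the smallest candidate limit for which `feasible` holds,
    assuming feasibility is monotone in the limit."""
    candidates = sorted(candidates)
    lo, hi = 0, len(candidates) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(candidates[mid]):
            hi = mid            # a smaller limit may still work
        else:
            lo = mid + 1        # need a larger limit
    return candidates[lo]

# Toy usage with a stand-in oracle: limits of at least 7 are feasible.
print(minimize_longest_path(list(range(1, 20)), lambda l: l >= 7))  # -> 7
```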

Secure Outsourced Computation of Multiple Matrix Multiplication Based on Fully Homomorphic Encryption

  • Wang, Shufang;Huang, Hai
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.11
    • /
    • pp.5616-5630
    • /
    • 2019
  • Fully homomorphic encryption allows a third party to perform arbitrary computation over encrypted data and is especially suitable for secure outsourced computation. This paper investigates secure outsourced computation of multiple matrix multiplication based on fully homomorphic encryption. Our work significantly improves the recent work of Mishra et al. We improve Mishra et al.'s matrix encoding method by introducing a column-order matrix encoding method which requires smaller parameters. This enables us to develop a binary multiplication method for multiple matrix multiplication, which multiplies adjacent matrices pairwise in a tree structure instead of Mishra et al.'s sequential matrix multiplication from left to right. The binary multiplication method results in a logarithmic-depth circuit and is thus much more efficient than the sequential matrix multiplication method with its linear-depth circuit. Experimental results show that for the product of ten 32×32 (64×64) square matrices our method takes only several thousand seconds, while Mishra et al.'s method would take about tens of thousands of years, which is astonishingly impractical. In addition, we further generalize our result from square matrices to non-square matrices. Experimental results show that the binary multiplication method and the classical dynamic programming method have similar performance for the multiplication of ten non-square matrices.
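To illustrate the depth advantage of the binary multiplication method (in plaintext, with no encryption involved), the sketch below multiplies a list of matrices by pairing adjacent operands in a balanced tree, so the multiplicative depth is about log2(n) rather than n-1 as in left-to-right multiplication.

```python
# Plaintext sketch of the balanced product-tree circuit shape.
import numpy as np

def product_tree(matrices):
    """Multiply matrices in order, but pair adjacent operands each round,
    giving ceil(log2(n)) multiplication depth instead of n-1."""
    level = list(matrices)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(level[i] @ level[i + 1])   # multiply adjacent pair
        if len(level) % 2:                        # odd operand carries over
            nxt.append(level[-1])
        level = nxt
    return level[0]

mats = [np.random.rand(32, 32) for _ in range(10)]
sequential = mats[0]
for m in mats[1:]:
    sequential = sequential @ m                   # linear-depth baseline
assert np.allclose(product_tree(mats), sequential)
```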

A Framework for Semantic Interpretation of Noun Compounds Using Tratz Model and Binary Features

  • Zaeri, Ahmad;Nematbakhsh, Mohammad Ali
    • ETRI Journal
    • /
    • v.34 no.5
    • /
    • pp.743-752
    • /
    • 2012
  • Semantic interpretation of the relationship between noun compound (NC) elements has been a challenging issue due to the lack of contextual information, the unbounded number of combinations, and the absence of a universally accepted categorization system. Current models require a huge corpus of data to extract contextual information, which limits their usage in many situations. In this paper, a new semantic relation interpreter for NCs based on novel lightweight binary features is proposed, together with a new feature selection method. By developing these new features and techniques, the proposed method removes the need for any huge corpus. Implementing this method in a modular, plugin-based framework and training it on the largest and most current fine-grained data set shows that its accuracy is better than that of previously reported methods that rely on large corpora. This improvement in accuracy and efficiency is achieved not only by improving older features with techniques such as semantic scattering and sense collocation, but also by using various novel features and a maximum entropy classifier. It is also shown that the accuracy of the maximum entropy classifier is higher than that of other classifiers, such as a support vector machine, Naïve Bayes, and a decision tree.
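As a toy illustration of this setup (not the paper's feature set or data), the sketch below encodes noun compounds as binary feature vectors and trains a maximum entropy classifier, here multinomial logistic regression, to predict the semantic relation.

```python
# Toy sketch: binary features + maximum entropy classification.
# Feature names and relation labels are invented placeholders.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

compounds = [
    ({"head_is_artifact": 1, "mod_is_substance": 1}, "MATERIAL"),
    ({"head_is_event": 1, "mod_is_time": 1}, "TEMPORAL"),
    ({"head_is_artifact": 1, "mod_is_purpose": 1}, "PURPOSE"),
    ({"head_is_event": 1, "mod_is_agent": 1}, "AGENT"),
]

vec = DictVectorizer(sparse=False)
X = vec.fit_transform([features for features, _ in compounds])
y = [label for _, label in compounds]

maxent = LogisticRegression(max_iter=1000)   # maximum entropy model
maxent.fit(X, y)
print(maxent.predict(vec.transform([{"head_is_artifact": 1, "mod_is_substance": 1}])))
```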

Reconstitution of CB Trie for the Efficient Hangul Retrieval (효율적인 한글 탐색을 위한 CB 트라이의 재구성)

  • Jung, Kyu-Cheol
    • Convergence Security Journal
    • /
    • v.7 no.4
    • /
    • pp.29-34
    • /
    • 2007
  • This paper proposes the RCB (Reduced Compact Binary) trie to correct the faults of the CB (Compact Binary) trie. The CB trie was the first attempt at such a compact structure, but as the amount of inputted data grew, insertion became increasingly difficult because of the dummy nodes used to keep the tree balanced. The HCB trie, built hierarchically, instead limits the rightward growth of the structure by fixing a certain depth; when that depth is reached, a new tree is created and linked to. This made input and search fast, but it has the disadvantage of a larger storage space because, like the CB trie, it uses dummy nodes as well as many tree links. The RCB trie proposed in this paper improves capacity by about 60% by completely eliminating dummy nodes.
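For readers unfamiliar with the underlying structure, the sketch below is a plain binary trie over the bit pattern of each key's UTF-8 encoding. It only illustrates the kind of structure that the CB/RCB variants compact; it is not the paper's RCB trie.

```python
# Plain (uncompacted) binary trie keyed on UTF-8 bit patterns.
class BinaryTrieNode:
    __slots__ = ("children", "key")
    def __init__(self):
        self.children = [None, None]   # 0-branch, 1-branch
        self.key = None                # set when a whole key ends here

def bits(word: str):
    for byte in word.encode("utf-8"):
        for i in range(7, -1, -1):
            yield (byte >> i) & 1

class BinaryTrie:
    def __init__(self):
        self.root = BinaryTrieNode()

    def insert(self, word: str) -> None:
        node = self.root
        for b in bits(word):
            if node.children[b] is None:
                node.children[b] = BinaryTrieNode()
            node = node.children[b]
        node.key = word

    def search(self, word: str) -> bool:
        node = self.root
        for b in bits(word):
            node = node.children[b]
            if node is None:
                return False
        return node.key == word

trie = BinaryTrie()
trie.insert("한글")
print(trie.search("한글"), trie.search("검색"))   # True False
```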


Embedding Full Binary Trees into Petersen-Torus(PT) Networks (정이진트리를 피터슨-토러스(PT) 네트워크에 임베딩)

  • Seo, Jung-Hyun;Lee, Hyeong-Ok;Jang, Moon-Suk;Han, Soon-Hee
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2008.06b
    • /
    • pp.580-583
    • /
    • 2008
  • This paper shows that a full binary tree can be embedded into the Petersen-Torus (PT) network. By adapting the method of generating an H-tree in a mesh network, we embed the full binary tree one-to-one with congestion 1, expansion ${\doteqdot}5$, and dilation $\frac{3(n+1)}{4}+1$. Embedding with congestion 1 makes the scheme suitable for both store-and-forward and wormhole routing, and the one-to-one embedding reduces the load on each processor.


Simple Recursive Approach for Detecting Spatial Clusters

  • Kim Jeongjin;Chung Younshik;Ma Sungjoon;Yang Tae Young
    • Communications for Statistical Applications and Methods
    • /
    • v.12 no.1
    • /
    • pp.207-216
    • /
    • 2005
  • A binary segmentation procedure is a simple recursive approach to detect clusters and provide inferences for the study space when the shape and the number of the clusters are unknown. The procedure involves a sequence of nested hypothesis tests of a single cluster versus a pair of distinct clusters. The size and the shape of the clusters evolve as the procedure proceeds. The procedure allows for various growth clusters and for arbitrary baseline densities, which govern the form of the hypothesis tests. A real tree data set is used to illustrate the procedure.
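A simplified sketch of the recursive idea (not the paper's hypothesis-testing procedure) is shown below: each segment is split at the cut that most reduces within-segment variance, and the recursion continues only when the split passes a crude variance-reduction threshold standing in for the test.

```python
# Simplified recursive binary segmentation on 1-D data.
import numpy as np

def segment(values, lo, hi, threshold=0.5, out=None):
    if out is None:
        out = []
    x = values[lo:hi]
    if len(x) < 4:
        out.append((lo, hi))
        return out
    total_sse = float(np.sum((x - x.mean()) ** 2))
    if total_sse == 0.0:
        out.append((lo, hi))
        return out
    best_gain, best_cut = 0.0, None
    for cut in range(2, len(x) - 1):
        left, right = x[:cut], x[cut:]
        sse = np.sum((left - left.mean()) ** 2) + np.sum((right - right.mean()) ** 2)
        gain = (total_sse - sse) / total_sse
        if gain > best_gain:
            best_gain, best_cut = gain, cut
    if best_cut is None or best_gain < threshold:
        out.append((lo, hi))                       # keep as a single cluster
    else:
        segment(values, lo, lo + best_cut, threshold, out)
        segment(values, lo + best_cut, hi, threshold, out)
    return out

data = np.concatenate([np.random.normal(0, 1, 50), np.random.normal(5, 1, 50)])
print(segment(data, 0, len(data)))                 # roughly two segments
```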

Multi-Label Classification Approach to Location Prediction

  • Lee, Min Sung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.22 no.10
    • /
    • pp.121-128
    • /
    • 2017
  • In this paper, we propose a multi-label classification method in which multi-label classification estimation techniques are applied to the location prediction problem. Most previous studies on location prediction have focused on single-label classification using contextual information such as users' movement paths and demographic information. In this paper, however, we focus on the case where users are free to visit multiple locations, which forces decision-makers to use a multi-labeled dataset. Using a contextual dataset of 2,373 instances compiled from college students, we obtained the best results with classifiers such as bagging, random subspace, and decision tree, combined with multi-label classification estimation methods such as binary relevance (BR) and binary pairwise classification (PW).
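The binary relevance setup mentioned above amounts to training one binary classifier per candidate location. The sketch below shows that arrangement with a decision tree as the base learner; the tiny feature and label matrices are invented placeholders for the contextual data.

```python
# Minimal binary-relevance sketch: one binary classifier per label column.
import numpy as np
from sklearn.multioutput import MultiOutputClassifier
from sklearn.tree import DecisionTreeClassifier

X = np.array([[1, 0, 9], [0, 1, 18], [1, 1, 12], [0, 0, 21]])   # e.g. weekday, weekend, hour
Y = np.array([[1, 0, 1],    # each column: does the user visit location k?
              [0, 1, 0],
              [1, 1, 0],
              [0, 0, 1]])

br = MultiOutputClassifier(DecisionTreeClassifier(max_depth=3))  # binary relevance
br.fit(X, Y)
print(br.predict([[1, 0, 10]]))   # predicted location set as a 0/1 vector
```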

FAST BDD TRUNCATION METHOD FOR EFFICIENT TOP EVENT PROBABILITY CALCULATION

  • Jung, Woo-Sik;Han, Sang-Hoon;Yang, Joon-Eon
    • Nuclear Engineering and Technology
    • /
    • v.40 no.7
    • /
    • pp.571-580
    • /
    • 2008
  • A Binary Decision Diagram (BDD) is a graph-based data structure that calculates an exact top event probability (TEP). Developing an efficient BDD algorithm that can solve a large problem has been very difficult, since BDDs are highly memory-consuming. In order to solve large reliability problems within limited computational resources, many attempts have been made to minimize BDD size, such as static and dynamic variable ordering schemes. An additional effort was the development of a ZBDD (Zero-suppressed BDD) algorithm to calculate an approximate TEP. The present method is the first successful application of BDD truncation. The new method efficiently maintains a small BDD size by truncating the BDD during the BDD calculation. The benchmark tests demonstrate the efficiency of the developed method. The TEP rapidly converges to the exact value as the truncation limit is lowered.
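For context on what an exact TEP computation involves, the sketch below uses the Shannon-decomposition principle that BDD evaluation exploits, P(f) = p·P(f | x=1) + (1-p)·P(f | x=0), on a small hypothetical fault tree. It illustrates the principle only; the paper's contribution is truncating the BDD while it is being built.

```python
# Toy exact top-event-probability calculation by Shannon decomposition.
from functools import lru_cache

# Hypothetical fault tree: TOP = (A and B) or (C and (A or B))
EVENTS = ("A", "B", "C")
PROB = {"A": 0.01, "B": 0.02, "C": 0.05}

def top(assign):
    return (assign["A"] and assign["B"]) or (assign["C"] and (assign["A"] or assign["B"]))

@lru_cache(maxsize=None)
def prob(index=0, fixed=()):
    if index == len(EVENTS):
        return 1.0 if top(dict(fixed)) else 0.0
    e, p = EVENTS[index], PROB[EVENTS[index]]
    return (p * prob(index + 1, fixed + ((e, True),)) +
            (1 - p) * prob(index + 1, fixed + ((e, False),)))

print("exact top event probability: %.6f" % prob())
```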