• Title/Summary/Keyword: Tree-Based Network

An HTTP-Based Application Layer Security Protocol for Wireless Internet Services (무선 인터넷 서비스를 위한 HTTP 기반의 응용 계층 보안 프로토콜)

  • 이동근;김기조;임경식
    • Journal of KIISE: Information Networking / v.30 no.3 / pp.377-386 / 2003
  • In this paper, we present an application layer protocol to support secure wireless Internet services, called Application Layer Security (ALS). The drawbacks of the two traditional approaches to securing wireless applications motivated the development of ALS. In a conventional application-specific security protocol such as Secure HyperText Transfer Protocol (S-HTTP), the security mechanism is built into the application itself, so the security services are available only to that particular application. Alternatively, a separate protocol layer is inserted between the application and transport layers, as in Secure Sockets Layer (SSL)/Transport Layer Security (TLS); in this case, all channel data are encrypted regardless of the specific application's requirements, wasting network resources. To overcome these problems, ALS is implemented on top of HTTP so that it is independent of the various transport layer protocols, and it provides a common security interface to applications, greatly improving the portability of security applications. In addition, since ALS takes advantage of the well-known TLS mechanisms, it eliminates the danger of malicious attacks and provides applications with various security services such as authentication, confidentiality, integrity, digital signature, and partial encryption. We conclude the paper with an example of applying ALS to end-to-end security in a commercial wireless protocol stack, the Wireless Application Protocol (WAP).
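
For illustration, a minimal sketch of the partial-encryption idea the abstract highlights: encrypt only the fields an application marks as sensitive, instead of the whole channel as SSL/TLS does. The field names, the JSON envelope, and the use of Fernet are assumptions for the sketch, not the actual ALS message format.

```python
# Hypothetical sketch of selective (partial) field encryption of an HTTP
# payload; the real ALS message format and handshake are not shown here.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # session key; ALS would negotiate one via its TLS-style handshake
cipher = Fernet(key)

def protect_message(fields: dict, sensitive: set) -> str:
    """Encrypt only the sensitive fields of an application payload."""
    out = {}
    for name, value in fields.items():
        if name in sensitive:
            out[name] = {"enc": cipher.encrypt(value.encode()).decode()}
        else:
            out[name] = value  # left in the clear, saving cipher work
    return json.dumps(out)

payload = protect_message(
    {"user": "alice", "card_number": "1234-5678-9012-3456"},
    sensitive={"card_number"},
)
```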

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.21-44 / 2018
  • In recent years, the rapid development of Internet technology and the popularization of smart devices have produced massive amounts of text data, distributed through media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization, which has drawn the interest of many researchers and created demand for professionals capable of classifying relevant information; hence, text classification is introduced. Text classification is a challenging task in modern data analysis: a text document must be assigned to one or more predefined categories or classes. Many techniques are available, such as k-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machines, Decision Trees, and Artificial Neural Networks. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge; performance varies with the type of words used in the corpus and the type of features created for classification. Most previous attempts propose a new algorithm or modify an existing one, a line of research that has arguably reached its limits. In this study, instead of proposing or modifying an algorithm, we focus on modifying the use of the data. It is widely known that classifier performance is influenced by the quality of the training data on which the classifier is built. Real-world datasets often contain noise, which can affect the decisions made by classifiers built from them. We consider that data from different domains, i.e., heterogeneous data, may have noise characteristics that can be exploited in the classification process. Machine learning algorithms are usually trained under the assumption that the characteristics of the training data and the target data are the same or very similar. However, for unstructured data such as text, the features are determined by the vocabulary of the documents; if the viewpoints of the training data and the target data differ, their features may also differ. In this study, we attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Because data coming from various sources are likely formatted differently, traditional machine learning algorithms struggle to recognize different data representations at once and to fit them into the same generalization. Therefore, to utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning. However, unlabeled data may degrade the performance of the document classifier. We therefore propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) to select only the documents that contribute to the accuracy improvement of the classifier. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data; the most confident classification rules are selected and applied in the final decision making. Three different types of real-world data sources were used: news, Twitter, and blogs.
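
A generic sketch of the family of techniques RSESLA belongs to: ensemble self-training, in which two classifier "views" label unlabeled documents and only high-confidence, agreed-upon documents are added to the training set. The threshold, the choice of views, and the agreement rule are assumptions; the paper's actual rule-selection criterion is not given in the abstract.

```python
# Hedged sketch of ensemble-based semi-supervised self-training.
# X_lab, X_unlab: dense term-count matrices (e.g. CountVectorizer(...).toarray()).
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.9, rounds=5):
    views = [MultinomialNB(), LogisticRegression(max_iter=1000)]
    for _ in range(rounds):
        if len(X_unlab) == 0:
            break
        for clf in views:
            clf.fit(X_lab, y_lab)
        probs = [clf.predict_proba(X_unlab) for clf in views]
        pred = [p.argmax(axis=1) for p in probs]
        conf = [p.max(axis=1) for p in probs]
        # keep only documents where both views agree with high confidence
        keep = (pred[0] == pred[1]) & (conf[0] > threshold) & (conf[1] > threshold)
        if not keep.any():
            break
        labels = views[0].classes_[pred[0][keep]]   # map indices back to labels
        X_lab = np.vstack([X_lab, X_unlab[keep]])
        y_lab = np.concatenate([y_lab, labels])
        X_unlab = X_unlab[~keep]
    return views
```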

A Convergence Study in the Severity-adjusted Mortality Ratio on inpatients with multiple chronic conditions (복합만성질환 입원환자의 중증도 보정 사망비에 대한 융복합 연구)

  • Seo, Young-Suk;Kang, Sung-Hong
    • Journal of Digital Convergence / v.13 no.12 / pp.245-257 / 2015
  • This study develops a predictive model for the severity-adjusted mortality of inpatients with multiple chronic conditions and analyzes the factors behind the variation in the hospital standardized mortality ratio (HSMR) in order to propose ways to reduce that variation. We collected the Korean National Hospital Discharge In-depth Injury Survey data from 2008 to 2010 and selected a final study population of 110,700 patients aged over 30 with a chronic disease as the principal diagnosis and two or more chronic conditions including the principal diagnosis. We designed severity-adjusted mortality predictive models using data-mining methods (logistic regression, decision tree, and neural network) and adopted the decision-tree model built on the Elixhauser comorbidity index. For the HSMR of inpatients with multiple chronic conditions, there were statistically significant differences by insurance type, hospital bed count, and hospital location. Based on these results, methods should be found to manage the mortality ratio of inpatients with multiple chronic conditions efficiently at the national level, to increase the quality of medical treatment for these patients, and to curb growing medical expenses.
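
The adjustment the abstract describes boils down to a standard ratio: HSMR = 100 × (observed deaths) / (expected deaths), where expected deaths are the summed predicted risks from the severity model. A minimal sketch, assuming a pandas data frame with a `died` outcome, a `hospital_id` column, and Elixhauser-style feature columns (all names are stand-ins for the paper's actual variables):

```python
# Hedged sketch: severity-adjusted HSMR per hospital from a decision-tree
# risk model. Fitting and scoring on the same data is a simplification.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

def hsmr_by_hospital(df, feature_cols):
    model = DecisionTreeClassifier(max_depth=5, min_samples_leaf=50)
    model.fit(df[feature_cols], df["died"])
    df = df.assign(expected=model.predict_proba(df[feature_cols])[:, 1])
    grouped = df.groupby("hospital_id")
    # HSMR = 100 * observed deaths / expected deaths
    return 100 * grouped["died"].sum() / grouped["expected"].sum()
```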

Development of an Automatic Generation Methodology for Digital Elevation Models using a Two-Dimensional Digital Map (수치지형도를 이용한 DEM 자동 생성 기법의 개발)

  • Park, Chan-Soo;Lee, Seong-Kyu;Suh, Yong-Cheol
    • Journal of the Korean Association of Geographic Information Studies / v.10 no.3 / pp.113-122 / 2007
  • The rapid growth of aerial survey and remote sensing technology has enabled the acquisition of very large amounts of geographic data, which should be analyzed with real-time visualization technology. The level-of-detail (LOD) algorithm is one of the most important elements for realizing real-time visualization. We chose the triangulated irregular network (TIN) method to generate normalized digital elevation model (DEM) data. First, we generated TIN data using contour lines obtained from a two-dimensional (2D) digital map and created a 2D grid array fitting the size of the area. Then, we generated normalized DEM data by calculating the intersection points between the TIN data and the points on the 2D grid array, using constrained Delaunay triangulation (CDT) and ray-triangle intersection algorithms at each step. In addition, we simulated a three-dimensional (3D) terrain model based on the normalized DEM data with real-time visualization, using a program written in Microsoft Visual C++ 6.0 with the DirectX API and a quad-tree LOD algorithm.
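
The grid-sampling step the abstract names can be made concrete: cast a vertical ray at each DEM grid point and intersect it with the TIN triangles. Below is a sketch of the Möller-Trumbore ray-triangle intersection used for that purpose; triangle lookup, the CDT construction, and the rendering side are omitted, and the vertical-ray setup (elevations above z = 0) is an assumption.

```python
# Möller-Trumbore intersection of a vertical ray with one TIN triangle.
import numpy as np

def ray_triangle_z(px, py, v0, v1, v2, eps=1e-9):
    """Elevation where the upward ray through (px, py, 0) hits the triangle
    (v0, v1, v2), or None on a miss. Vertices are np.array([x, y, z])."""
    origin = np.array([px, py, 0.0])
    direction = np.array([0.0, 0.0, 1.0])   # straight up
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1.dot(p)
    if abs(det) < eps:                      # ray parallel to the triangle
        return None
    t_vec = origin - v0
    u = t_vec.dot(p) / det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(t_vec, e1)
    v = direction.dot(q) / det
    if v < 0.0 or u + v > 1.0:
        return None
    t = e2.dot(q) / det                     # ray parameter equals z, since dir is +z
    return t if t >= 0.0 else None
```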

A Literature Review and Classification of Recommender Systems on Academic Journals (추천시스템관련 학술논문 분석 및 분류)

  • Park, Deuk-Hee;Kim, Hyea-Kyeong;Choi, Il-Young;Kim, Jae-Kyeong
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.139-152 / 2011
  • Recommender systems have become an important research field since the emergence of the first paper on collaborative filtering in the mid-1990s. In general, recommender systems are defined as supporting systems that help users find information, products, or services (such as books, movies, music, digital products, web sites, and TV programs) by aggregating and analyzing suggestions from other users, reviews from various authorities, and user attributes. Although academic research on recommender systems has increased significantly over the last ten years, the field is still wide and less mature than other research fields, and more research applicable to real-world situations is required. Accordingly, the existing articles on recommender systems need to be reviewed with a view toward the next generation of recommender systems. Given the nature of recommender system research, it is not easy to confine it to specific disciplines, so we reviewed all articles on recommender systems from 37 journals published from 2001 to 2010. The 37 journals were selected from the top 125 journals of the MIS Journal Rankings, and the literature search was based on the descriptors "Recommender system", "Recommendation system", "Personalization system", "Collaborative filtering", and "Contents filtering". The full text of each article was reviewed to eliminate articles not actually related to recommender systems; conference papers, master's and doctoral dissertations, textbooks, unpublished working papers, non-English publications, and news items were excluded. We classified the articles by year of publication, journal, recommendation field, and data-mining technique. The recommendation fields and data-mining techniques of the remaining 187 articles were reviewed and classified into eight recommendation fields (book, document, image, movie, music, shopping, TV program, and others) and eight data-mining techniques (association rules, clustering, decision trees, k-nearest neighbor, link analysis, neural networks, regression, and other heuristic methods). The results have several significant implications. First, based on previous publication rates, interest in recommender system research will grow significantly in the future. Second, 49 articles are related to movie recommendation, whereas image and TV program recommendation appear in only 6 articles each; this is largely due to the easy availability of the MovieLens data set, so data sets for other fields need to be prepared. Third, social network analysis has recently been used in various applications, but studies on recommender systems using it are scarce; we expect new recommendation approaches using social network analysis to be developed, making it an interesting area for further research. By examining the published literature, this study shows the trends of recommender system research and provides practitioners and researchers with insight and future directions. We hope this research helps anyone interested in recommender systems to gain insight for future work.

Efficient Peer-to-Peer Lookup in Multi-hop Wireless Networks

  • Shin, Min-Ho;Arbaugh, William A.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.3 no.1 / pp.5-25 / 2009
  • In recent years, the popularity of multi-hop wireless networks has been growing. Their flexible topology and abundant routing paths enable many types of applications. However, the lack of a centralized controller often makes it difficult to design reliable services in multi-hop wireless networks. While packet routing has been the center of attention for decades, recent research focuses on data discovery, such as file sharing, in multi-hop wireless networks. Although there are many peer-to-peer lookup (P2P-lookup) schemes for wired networks, they have inherent limitations in multi-hop wireless networks. First, a wired P2P-lookup builds its search structure on the overlay network and disregards the underlying topology. Second, its performance guarantees often rely on specific topology models such as random graphs, which do not apply to multi-hop wireless networks. Past studies on wireless P2P-lookup either combined existing solutions with known routing algorithms or proposed tree-based routing, which is prone to traffic congestion. In this paper, we present two wireless P2P-lookup schemes that strictly build a topology-dependent structure. We first propose Ring Interval Graph Search (RIGS), which constructs a DHT only through direct connections between the nodes. We then propose ValleyWalk, a loosely structured scheme that requires only simple local hints for query routing. Packet-level simulations showed that RIGS finds the target with near-shortest search length, and that ValleyWalk does so when there is at least 5% object replication. We also provide an analytic bound on the search length of ValleyWalk.
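
The abstract does not spell out ValleyWalk's hint structure, so the following is only a toy sketch of the general idea of a loosely structured, topology-respecting walk: each node forwards the query to the physical neighbor whose stored keys look closest to the target. The numeric key-distance hint and the graph encoding are assumptions.

```python
# Toy greedy key-distance walk over the physical topology (not the actual
# ValleyWalk algorithm; its local hints are not described in the abstract).
def lookup(graph, start, target_key, keys):
    """graph: node -> list of physically adjacent nodes.
    keys: node -> set of object keys stored or replicated at that node."""
    path, node = [start], start
    while target_key not in keys[node]:
        # forward to the neighbor whose content looks closest to the target
        best = min(graph[node], key=lambda n: min(
            (abs(k - target_key) for k in keys[n]), default=float("inf")))
        if best in path:            # no progress: stuck in a local minimum
            return None
        path.append(best)
        node = best
    return path

# Example: four nodes in a line, object 42 stored at node 3.
g = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
k = {0: {7}, 1: {15}, 2: {30}, 3: {42}}
print(lookup(g, 0, 42, k))          # -> [0, 1, 2, 3]
```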

A Method for Constructing Multi-Hop Routing Tree among Cluster Heads in Wireless Sensor Networks (무선 센서 네트워크에서 클러스터 헤드의 멀티 홉 라우팅 트리 구성)

  • Choi, Hyekyeong;Kang, Sang Hyuk
    • The Journal of Korean Institute of Communications and Information Sciences / v.39B no.11 / pp.763-770 / 2014
  • In traditional routing protocols for wireless sensor networks, including LEACH, nodes suffer from unbalanced energy consumption because nodes far from the sink require large transmission energy. Multi-hop routing protocols have been studied to address this problem, but in existing protocols each cluster head usually chooses the closest head as its relay node. We propose LEACH-CHT, in which cluster heads choose the path with the least energy consumption for sending data to the sink node: at each hop, a cluster head selects the least-cost path to the sink. This solves the looping problem efficiently and allows a cluster head to exclude other cluster heads located farther away from the path, without additional energy consumption. By balancing energy consumption among the nodes, our proposed scheme outperforms existing multi-hop schemes by up to 36% in terms of average network lifetime.
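
A sketch of the least-cost path selection the abstract describes: running a shortest-path computation rooted at the sink, with edge weights modeling transmission energy, gives every cluster head a loop-free next hop. The quadratic (free-space) energy model and the Dijkstra formulation are illustrative assumptions, not the paper's exact cost function.

```python
# Hedged sketch: least-energy routing tree among cluster heads via Dijkstra.
import heapq

def least_energy_tree(pos, sink, links):
    """pos: node id -> (x, y); links: node id -> reachable neighbor ids.
    Returns parent pointers forming a routing tree toward the sink."""
    def cost(a, b):
        (x1, y1), (x2, y2) = pos[a], pos[b]
        return (x1 - x2) ** 2 + (y1 - y2) ** 2   # free-space model: energy ~ d^2

    dist = {n: float("inf") for n in pos}
    dist[sink] = 0.0
    parent, pq = {}, [(0.0, sink)]
    while pq:                                    # Dijkstra rooted at the sink
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v in links[u]:
            nd = d + cost(u, v)
            if nd < dist[v]:
                dist[v], parent[v] = nd, u       # v relays its data via u
                heapq.heappush(pq, (nd, v))
    return parent
```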

Access Control Policy of Data Considering Varying Context in Sensor Fusion Environment of Internet of Things (사물인터넷 센서퓨전 환경에서 동적인 상황을 고려한 데이터 접근제어 정책)

  • Song, You-jin;Seo, Aria;Lee, Jaekyu;Kim, Yei-chang
    • KIPS Transactions on Software and Data Engineering / v.4 no.9 / pp.409-418 / 2015
  • To deliver correct information in an IoT environment, it is important to infer from the collected information according to a user's situation and to create new information. In this paper, we propose a context-aware access control scheme to protect sensitive information in the IoT environment. It focuses on access rights management, granting access in consideration of the user's situation and constraining, via an access control policy, unauthorized users' access to the data stored in the network. To this end, we analyze the existing research on CP-ABE-based context-information access control, include dynamic conditions in the range of status information, and propose an access control policy reflecting extended multi-dimensional context attributes. The proposed policy, which takes dynamic conditions into account, is designed to suit the IoT sensor-fusion environment. Compared with existing studies, it has the advantages of ensuring the variety and accuracy of data and of extending the existing context properties.
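
Real CP-ABE enforces such a policy cryptographically at decryption time; as a plain illustration of the policy side only, the sketch below evaluates static user attributes together with dynamic context attributes (time, location). All attribute names and the sample policy are assumptions, not the paper's scheme.

```python
# Hedged sketch: attribute-based access decision extended with dynamic
# context checks. This is plain predicate evaluation, not CP-ABE crypto.
from datetime import datetime

POLICY = {
    "static": {"role": {"nurse", "doctor"}, "department": {"cardiology"}},
    "dynamic": {
        "hours": lambda ctx: 9 <= ctx["time"].hour < 18,   # duty hours only
        "zone":  lambda ctx: ctx["location"] == "ward-3",  # on-site access only
    },
}

def can_access(user_attrs, context, policy=POLICY):
    static_ok = all(user_attrs.get(k) in allowed
                    for k, allowed in policy["static"].items())
    dynamic_ok = all(check(context) for check in policy["dynamic"].values())
    return static_ok and dynamic_ok

print(can_access(
    {"role": "nurse", "department": "cardiology"},
    {"time": datetime(2015, 9, 1, 10, 30), "location": "ward-3"},
))  # -> True
```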

A Target Selection Model for the Counseling Services in Long-Term Care Insurance (노인장기요양보험 이용지원 상담 대상자 선정모형 개발)

  • Han, Eun-Jeong;Kim, Dong-Geon
    • The Korean Journal of Applied Statistics / v.28 no.6 / pp.1063-1073 / 2015
  • In the long-term care insurance (LTCI) system, the National Health Insurance Service (NHIS) provides counseling services for beneficiaries and their family caregivers to help them use LTC services appropriately. The purpose of this study was to develop a target selection model for the counseling services based on the needs of beneficiaries and their family caregivers. To develop the models, we used a data set of 2,000 beneficiaries and family caregivers who had used long-term care services at home in March 2013 and completed questionnaires. The target selection model was built with various data-mining methods, including logistic regression, gradient boosting, Lasso, decision trees, ensembles, and neural networks. The Lasso model was selected as the final model because of its stability, high performance, and availability. Our results may improve the satisfaction and efficiency of the NHIS counseling services.
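
A minimal sketch of the model family the abstract reports choosing: an L1-penalized (Lasso-style) logistic regression, which both scores each beneficiary's counseling need and zeroes out uninformative questionnaire items. The penalty strength, cutoff, and variable layout are assumptions.

```python
# Hedged sketch of Lasso-style target selection with scikit-learn.
from sklearn.linear_model import LogisticRegression

def fit_target_selector(X_train, y_train, strength=1.0):
    # liblinear supports the L1 penalty; smaller C means stronger sparsity
    model = LogisticRegression(penalty="l1", solver="liblinear", C=strength)
    model.fit(X_train, y_train)   # y: 1 if counseling was needed, else 0
    return model

def select_targets(model, X_candidates, cutoff=0.5):
    scores = model.predict_proba(X_candidates)[:, 1]
    return scores >= cutoff       # boolean mask of beneficiaries to contact
```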

A Study on the Usage and Trends of Domestically Produced Broadcasting Equipment through Broadcasting System Tree Analysis (방송시스템 트리분석을 통한 국산 방송장비 활용실태 조사와 동향 연구)

  • Seo, In-Ho;Choi, Seong-Jin;Park, Seung-Kyu
    • Journal of Satellite, Information and Communications / v.12 no.4 / pp.87-94 / 2017
  • The broadcast service environment has changed to complex server- and network-based equipment configurations in order to apply advanced technologies and provide various services, and the broadcasting market is growing rapidly with this development. However, the domestic broadcasting equipment industry shows limits in satisfying consumer demands and in market competitiveness, because development has focused on single products and their sales. This research gathered the opinions of broadcasting technology experts and investigated the actual usage of domestic equipment in broadcasting systems. Based on the results of the investigation, its purpose is to discover hybrid system models in which two or more domestic devices are combined so that synergy can emerge.