• Title/Summary/Keyword: Multiple Web Devices


A Design and Implementation of Reliability Analyzer for Embedded Software using Markov Chain Model and Unit Testing (내장형 소프트웨어 마르코프 체인 모델과 단위 테스트를 이용한 내장형 소프트웨어 신뢰도 분석 도구의 설계와 구현)

  • Kwak, Dong-Gyu;Yoo, Chae-Woo;Choi, Jae-Young
    • Journal of the Korea Society of Computer and Information / v.16 no.12 / pp.1-10 / 2011
  • As the requirements of embedded systems become more complex, tools for analyzing the reliability of embedded software are increasingly needed. Probabilistic modeling is a common way to analyze software reliability, but applying it to embedded software that controls multiple devices requires specializing the model to the embedded domain. In addition, existing reliability analyzers measure the transition probability of each state in their own different ways and do not consider reusing a model once it has been built. In this paper, we propose a reliability analyzer for embedded software that combines an embedded-software Markov chain model with a unit testing tool. The embedded-software Markov chain model specializes the Markov chain model commonly used for reliability analysis to embedded software, and the unit testing tool has a host-target structure suited to embedded software development environments. The proposed tool can analyze reliability more easily than existing tools because it automatically measures the transition probabilities between units from the unit test results. It can also directly apply test results updated by the unit testing tool, since the software model is represented as an XML document, and it has the advantage that many developers can access it easily through a web-based interface and an SVN repository. Finally, we analyze the reliability of an example system to demonstrate the usefulness of the proposed analyzer.
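The core idea described above, estimating transition probabilities between units from test executions and feeding them into a Markov chain reliability model, can be illustrated with a small sketch. The following Python snippet is an illustrative assumption, not the authors' tool: it counts unit-to-unit transitions in hypothetical test traces and combines them with assumed per-unit reliabilities in a Cheung-style Markov model.

```python
# A minimal sketch (not the authors' implementation) of estimating unit-to-unit
# transition probabilities from unit-test execution traces and combining them
# with per-unit reliabilities in a Markov-chain (Cheung-style) model.
# All unit names and numbers are illustrative assumptions.
from collections import Counter
import numpy as np

def transition_matrix(traces, units):
    """Estimate P[i][j] = probability of control passing from unit i to unit j,
    counted from observed unit-test execution traces."""
    idx = {u: k for k, u in enumerate(units)}
    counts = Counter()
    for trace in traces:                      # e.g. ["init", "read", "parse", "exit"]
        for a, b in zip(trace, trace[1:]):
            counts[(idx[a], idx[b])] += 1
    P = np.zeros((len(units), len(units)))
    for (i, j), c in counts.items():
        P[i, j] = c
    row_sums = P.sum(axis=1, keepdims=True)
    np.divide(P, row_sums, out=P, where=row_sums > 0)   # normalize rows with outgoing transitions
    return P

def system_reliability(P, unit_reliability):
    """Cheung-style estimate: execution starts at the first unit and ends at the last."""
    R = np.asarray(unit_reliability)
    n = len(R)
    weighted = R[:, None] * P                 # transitions succeed only if the source unit does
    Q = weighted[: n - 1, : n - 1]            # transient part of the chain
    S = np.linalg.inv(np.eye(n - 1) - Q)      # expected visit counts
    reach_terminal = weighted[: n - 1, n - 1] # one-step probability of reaching the terminal unit
    return float(S[0] @ reach_terminal * R[-1])

units = ["init", "read", "parse", "exit"]     # hypothetical units
traces = [["init", "read", "parse", "exit"],
          ["init", "read", "exit"]]
P = transition_matrix(traces, units)
print(system_reliability(P, [0.99, 0.98, 0.97, 0.995]))
```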

A Novel QoS Provisioning Scheme Based on User Mobility Patterns in IP-based Next-Generation Mobile Networks (IP기반 차세대 모바일 네트워크에서 사용자 이동패턴에 기반한 QoS 보장기법)

  • Yang, Seungbo;Jeong, Jongpil
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.5 / pp.25-38 / 2013
  • Future wireless systems will be required to support people's increasingly nomadic lifestyle. This support will be provided through multiple overlaid networks with very different characteristics. Moreover, these networks will be required to support the seamless delivery of today's popular desktop services, such as web browsing, interactive multimedia, and video conferencing, to mobile devices. One of the major challenges in the design of these mobile systems is therefore providing the quality of service (QoS) guarantees that applications demand over this diverse networking infrastructure. We believe that resource reservation and adaptation techniques are necessary to deliver these QoS guarantees to applications. However, reservation and pre-configuration across the entire service region is overly aggressive and results in schemes that are extremely inefficient and unreliable. To overcome this, the mobility pattern of a user can be exploited: if the movement of a user is known, the reservation and configuration procedure can be limited to the regions of the network the user is likely to visit. Our proposed Proxy-UMP scheme is less sensitive to increases in the search cost than other schemes, and its total cost increases at a low rate as the SMR increases.
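As a rough illustration of limiting reservations to the regions a user is likely to visit, the sketch below (not the paper's Proxy-UMP scheme; the cell IDs and the probability threshold are assumptions) learns a simple first-order mobility model from a user's movement history and pre-reserves resources only in the most probable next cells.

```python
# A minimal sketch of mobility-pattern-based pre-reservation: learn a
# first-order transition model from the user's visited cells and reserve
# resources only where the predicted visit probability is high enough.
from collections import Counter, defaultdict

def learn_mobility(history):
    """history: list of visited cell IDs, in order."""
    counts = defaultdict(Counter)
    for a, b in zip(history, history[1:]):
        counts[a][b] += 1
    return {cell: {nxt: c / sum(nbrs.values()) for nxt, c in nbrs.items()}
            for cell, nbrs in counts.items()}

def cells_to_reserve(model, current_cell, threshold=0.2):
    """Reserve only in cells whose predicted visit probability exceeds the
    threshold, instead of pre-configuring the entire service region."""
    return [c for c, p in model.get(current_cell, {}).items() if p >= threshold]

history = ["A", "B", "C", "B", "A", "B", "C", "D"]   # hypothetical movement trace
model = learn_mobility(history)
print(cells_to_reserve(model, "B"))                   # e.g. ['C', 'A']
```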

Study on Basic Elements for Smart Content through the Market Status-quo (스마트콘텐츠 현황분석을 통한 기본요소 추출)

  • Kim, Gyoung Sun;Park, Joo Young;Kim, Yi Yeon
    • Korea Science and Art Forum / v.21 / pp.31-43 / 2015
  • Information and communications technology (ICT) is one of the technologies that represent the core value of the creative economy. It has served as a vehicle connecting existing industry and corporate infrastructure, developing existing products and services, and creating new ones. In addition to ICT, new technologies and devices, including big data, mobile gadgets, and wearable products, are drawing great attention and raising expectations of new markets being pioneered. Further, the Internet of Things (IoT) is helping solidify ICT-based social development by connecting human-to-human, human-to-thing, and thing-to-thing. This means that manufacturing-based hardware development needs to proceed simultaneously with software development through convergence. The essential element of the convergence between hardware and software is the OS, and world-leading companies such as Google and Apple, recognizing the importance of software, have pursued intense OS development. Against this backdrop, this study (Korea Evaluation Institute of Industrial Technology: Professional Design Technology Development Project) examines the status quo of the software market. The examination shows that the software-platform-based Google Android and Apple iOS are dominant in the global market, while latecomers are trying to enter the market through various pathways, releasing web-based and similar OSs to provide a new paradigm to the market. The present study aims to find ways to utilize smart content, by which anyone can be a developer based on an OS responding to such social change, to newly define smart content so that it can be universally utilized, and to analyze the market in order to cope with rapid market change. The study method, scope, and details are as follows: a literature investigation; an analysis of the app market according to a smart classification system; a trend analysis of the current content market; and the identification of five common trends through comparison among the universal definition of smart content, the status quo of applications represented in the app market, and the content market situation. In conclusion, the smart content market is independent but is expected to develop as a single organic body whose parts are connected to each other. Therefore, the future classification system and development focus should view the area from multiple perspectives, including a social point of view, in terms of the existing technology, culture, business, and consumers.

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.21-44 / 2018
  • In recent years, the rapid development of Internet technology and the popularization of smart devices have resulted in massive amounts of text data, produced and distributed through various media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization, a problem that has attracted the interest of many researchers and created demand for professionals capable of classifying relevant information; hence, text classification was introduced. Text classification is a challenging task in modern data analysis, in which a text document must be assigned to one or more predefined categories or classes. In the text classification field, various techniques are available, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machines, Decision Trees, and Artificial Neural Networks. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge. Depending on the type of words used in the corpus and the type of features created for classification, the performance of a text classification model can vary. Most previous attempts have proposed a new algorithm or modified an existing one, and this line of research can be said to have reached its limits for further improvement. In this study, rather than proposing a new algorithm or modifying an existing one, we focus on finding a way to modify how the data are used. It is widely known that classifier performance is influenced by the quality of the training data on which the classifier is built. Real-world datasets most of the time contain noise, or in other words noisy data, which can affect the decisions made by classifiers built from them. In this study, we consider that data from different domains, that is, heterogeneous data, may have noise-like characteristics that can be exploited in the classification process. Machine learning algorithms build classifiers on the assumption that the characteristics of the training data and the target data are the same or very similar. However, in the case of unstructured data such as text, the features are determined by the vocabulary of the documents, and if the viewpoints of the training data and the target data differ, the features may appear different between the two. In this study, we attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Because data coming from various kinds of sources are likely to be formatted differently, traditional machine learning algorithms have difficulty recognizing different types of data representation at one time and combining them into the same generalization. Therefore, in order to utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning. However, unlabeled data may degrade the performance of the document classifier.
Therefore, we further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA), which selects only the documents that contribute to improving the accuracy of the classifier. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data, and the most confident classification rules are selected and applied for the final decision. In this paper, three different types of real-world data sources were used: news, Twitter, and blogs.
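To make the semi-supervised use of heterogeneous, unlabeled documents more concrete, the following sketch (an assumption for illustration, not RSESLA itself; the documents and the confidence threshold are made up) trains a text classifier on labeled data and then adds only the confidently pseudo-labeled heterogeneous documents before retraining.

```python
# A minimal self-training sketch: pseudo-label unlabeled documents from a
# different (heterogeneous) domain, keep only the confident ones, retrain.
# Data and the confidence threshold are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled_docs   = ["stock prices rallied today", "the team won the final match"]
labels         = ["economy", "sports"]
unlabeled_docs = ["tweets about the market crash", "blog post on league standings"]

vec = TfidfVectorizer()
X_all = vec.fit_transform(labeled_docs + unlabeled_docs)   # shared vocabulary
X_lab, X_unl = X_all[: len(labeled_docs)], X_all[len(labeled_docs):]

clf = LogisticRegression(max_iter=1000).fit(X_lab, labels)

# keep only the heterogeneous documents the current classifier is confident about
proba = clf.predict_proba(X_unl)
confident = proba.max(axis=1) >= 0.6                       # assumed threshold
pseudo_labels = clf.classes_[proba.argmax(axis=1)]

X_aug = vec.transform(labeled_docs + [d for d, c in zip(unlabeled_docs, confident) if c])
y_aug = list(labels) + list(pseudo_labels[confident])
clf_final = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
```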

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration rate of smart devices are producing large amounts of data. As a result, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis are continuously increasing, which means that big data analysis will become more important across industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each party that requests the analysis. However, growing interest in big data analysis has stimulated computer programming education and the development of many data analysis programs. Accordingly, the entry barriers to big data analysis are gradually lowering, data analysis technology is spreading, and big data analysis is increasingly expected to be performed by the demanders of the analysis themselves. Along with this, interest in various kinds of unstructured data, and especially in text data, is continually increasing. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis are being utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis, and among the many text mining techniques used for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large set of documents, identifies the documents corresponding to each issue, and provides the identified documents as clusters; it is regarded as very useful in that it reflects the semantic elements of the documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection, so the entire collection must be analyzed at once to identify the topic of each document. This causes long analysis times when topic modeling is applied to many documents and creates a scalability problem: the processing time increases exponentially with the number of analysis objects. The problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents are divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method makes it possible to perform topic modeling on a large number of documents with limited system resources and can improve processing speed. It can also significantly reduce analysis time and cost, because documents can be analyzed in each location without first combining them. Despite these advantages, however, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear; local topics can be identified for each document, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology must be established; that is, assuming that the global topics are the ideal answer, the deviation of the local topics from the global topics needs to be measured.
Because of these difficulties, this approach has not been studied sufficiently compared with other topic modeling studies. In this paper, we propose a topic modeling approach that solves the above two problems. First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced entire document cluster (RGS, reduced global set) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by detecting whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we also confirm that the proposed methodology can provide results similar to topic modeling on the entire collection, and we propose a reasonable method for comparing the results of the two methods.
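The divide-and-conquer idea and the mapping of local topics to global topics can be sketched as follows. This is an illustrative assumption, not the paper's exact RGS procedure: LDA is fit on the whole collection and on each local set over a shared vocabulary, and each local topic is mapped to the most similar global topic by cosine similarity of the topic-word distributions.

```python
# A minimal sketch of divide-and-conquer topic modeling with local-to-global
# topic mapping. Documents, partition, and topic counts are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

global_docs = ["economy stock market growth", "soccer league match goal",
               "market inflation interest rate", "tennis match final set"]
local_sets = [global_docs[:2], global_docs[2:]]      # hypothetical partition into local sets

vec = CountVectorizer().fit(global_docs)             # shared vocabulary for all models

def topic_word(docs, n_topics=2):
    """Fit LDA on a document set and return normalized topic-word distributions."""
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(vec.transform(docs))
    return lda.components_ / lda.components_.sum(axis=1, keepdims=True)

global_topics = topic_word(global_docs)              # stands in for the (reduced) global set
for i, local_docs in enumerate(local_sets):
    local_topics = topic_word(local_docs)
    # map each local topic to its most similar global topic
    mapping = cosine_similarity(local_topics, global_topics).argmax(axis=1)
    print(f"local set {i}: local topic -> global topic {mapping}")
```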

SKU recommender system for retail stores that carry identical brands using collaborative filtering and hybrid filtering (협업 필터링 및 하이브리드 필터링을 이용한 동종 브랜드 판매 매장간(間) 취급 SKU 추천 시스템)

  • Joe, Denis Yongmin;Nam, Kihwan
    • Journal of Intelligence and Information Systems / v.23 no.4 / pp.77-110 / 2017
  • Recently, consumption patterns have been rapidly diversifying and individualizing through Internet-based web and mobile devices. As this happens, the efficient operation of offline stores, a traditional distribution channel, has become more important. To raise both the sales and the profits of stores, stores need to supply and sell the products most attractive to consumers in a timely manner. However, there is a lack of research on which SKUs, out of the many products available, can increase the probability of sales and reduce inventory costs. In particular, if a company sells products through multiple stores across multiple locations, recommending SKUs that appeal to the customers of each store would help increase sales and profitability. In this study, recommender system techniques that have been used for personalization, such as collaborative filtering and hybrid filtering, are applied to recommend SKUs at the store level for a distribution company that handles a single brand through many stores across countries and regions. We calculated the similarity of each store using the purchase data for the items each store handles, performed collaborative filtering according to each store's sales history for each SKU, and finally recommended individual SKUs to each store. In addition, the stores were classified into four clusters through principal component analysis (PCA) and cluster analysis using store profile data, a hybrid filtering method that applies collaborative filtering within each cluster was implemented, and the performance of both methods was measured on actual sales data. Most existing recommender systems have been studied in the context of recommending items such as movies and music to individual users, and industrial applications have also become popular; however, there has been little research on applying these systems, which have mainly been treated as personalization services, to the store units of distributors handling the same brand. Whereas existing recommendation methodologies target the individual domain, this study expands the scope beyond the individual domain to the store units of a distribution company handling the same brand SKUs through many stores across countries and regions, and proposes a recommendation method for that setting. In addition, whereas existing recommender systems are largely limited to online settings, this study applies data mining techniques to develop an algorithm suited to the offline store domain rather than analyzing individuals as before. The significance of this study is that a personalization recommendation algorithm is applied to many stores handling the same brand, meaningful results are derived, and a concrete methodology that real companies can build and use as a system is proposed. It is also meaningful as the first attempt to expand the research area of recommender systems, which has focused on the personalization domain, to the sales stores of a company handling the same brand.
Using 2014 sales data, the top 100 SKUs by store sales volume were narrowed to the 52 SKUs recommended by collaborative filtering and by the hybrid filtering method, and we compared the performance of the two recommendation methods by aggregating the resulting sales. The reason for comparing the two methods is that collaborative filtering applied to the offline setting is defined as the reference model, and its results are compared with the hybrid filtering method, a model that reflects the characteristics of the offline store. The proposed method showed higher performance than the existing recommendation method, as demonstrated using actual sales data from large Korean apparel companies. In this study, we propose a method to extend individual-level recommender systems to the group (store) level and to approach this efficiently, which we expect to be of practical as well as theoretical value.
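A minimal sketch of the store-level collaborative filtering step described above is given below (an assumption for illustration, not the paper's full pipeline; the stores, SKUs, and sales volumes are made up): store-to-store similarity is computed from a store-by-SKU sales matrix, and SKUs a store does not yet carry are scored by the similarity-weighted sales of the other stores. The hybrid variant described in the abstract would apply the same scoring within PCA/cluster-based store segments.

```python
# A minimal sketch of store-level collaborative filtering: cosine similarity
# between stores over a store x SKU sales matrix, then similarity-weighted
# scoring of SKUs the target store does not carry. Data are hypothetical.
import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity

sales = pd.DataFrame(
    [[10, 0, 5, 2],
     [ 8, 1, 0, 3],
     [ 0, 7, 6, 0]],
    index=["store_A", "store_B", "store_C"],
    columns=["sku_1", "sku_2", "sku_3", "sku_4"],
)

sim = pd.DataFrame(cosine_similarity(sales), index=sales.index, columns=sales.index)

def recommend(store, top_n=2):
    """Score SKUs the store does not carry by similarity-weighted sales of other stores."""
    weights = sim[store].drop(store)
    scores = sales.drop(index=store).mul(weights, axis=0).sum() / weights.sum()
    not_carried = sales.loc[store] == 0
    return scores[not_carried].sort_values(ascending=False).head(top_n)

print(recommend("store_B"))
```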