• Title/Summary/Keyword: data scalability

Technology of MRAM (Magneto-resistive Random Access Memory) Using MTJ (Magnetic Tunnel Junction) Cell

  • Park, Wanjun; Song, I-Hun; Park, Sangjin; Kim, Teawan
    • JSTS: Journal of Semiconductor Technology and Science, v.2 no.3, pp.197-204, 2002
  • DRAM, SRAM, and flash memory are the three major memory devices used in most electronic applications, but their attributes are so distinct that each is suited only to a limited range of applications. MRAM (Magneto-resistive Random Access Memory) is a promising candidate for a universal memory that meets all application needs, offering non-volatility, fast operation, and low power consumption. The simplest MRAM cell architecture is a series connection of an MTJ (Magnetic Tunnel Junction) as the data storage element and a MOS transistor as the data selection element. To be commercially competitive, the memory must also scale well. Through an actual integration of an MRAM core cell, this paper examines the electrical parameters and the scaling factors that limit MRAM technology as a semiconductor-based memory device. Electrical tuning of the MOS/MTJ pair and control of the cell resistance are the key factors for data sensing, and control of magnetic switching is the key factor for data writing.
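
As background for the resistance-based data sensing mentioned in the abstract, the figure of merit commonly quoted for an MTJ read signal (standard device physics, not a result reported in this paper) is the tunnel magnetoresistance ratio:

```latex
% Tunnel magnetoresistance (TMR) ratio of an MTJ cell, where R_P and R_{AP} are
% the cell resistances in the parallel and antiparallel magnetization states.
% A larger TMR gives the sense amplifier a larger read margin.
\mathrm{TMR} = \frac{R_{AP} - R_{P}}{R_{P}}
```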

Scalable Approach to Failure Analysis of High-Performance Computing Systems

  • Shawky, Doaa
    • ETRI Journal, v.36 no.6, pp.1023-1031, 2014
  • Failure analysis is necessary to clarify the root cause of a failure, predict when the next failure may occur, and improve the performance and reliability of a system. However, analyzing and interpreting failure data is not an easy task, especially for complex systems: the data are usually described by many attributes and are sometimes inconsistent and ambiguous. In this paper, we present a scalable approach to the analysis and interpretation of failure data of high-performance computing systems. The approach employs rough set theory (RST) for this task. Applying RST to a large, publicly available set of failure data highlights the main attributes responsible for the root cause of a failure. In addition, RST is used to analyze other failure characteristics, such as time between failures, repair times, the workload running on a failed node, and failure category. Experimental results show the scalability of the presented approach and its ability to reveal dependencies among different failure characteristics.
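
To make the RST-based attribute analysis concrete, the sketch below computes the classical rough-set dependency degree and attribute significance on a tiny, made-up failure table. The attribute names and records are hypothetical; the paper's actual data set and workflow are not reproduced here.

```python
from collections import defaultdict

def partition(rows, attrs):
    """Group row indices into equivalence classes of the indiscernibility relation."""
    classes = defaultdict(list)
    for i, row in enumerate(rows):
        classes[tuple(row[a] for a in attrs)].append(i)
    return list(classes.values())

def dependency(rows, cond_attrs, dec_attr):
    """Rough-set dependency degree: the fraction of rows whose condition
    equivalence class maps to a single decision value (the positive region)."""
    pos = 0
    for eq in partition(rows, cond_attrs):
        decisions = {rows[i][dec_attr] for i in eq}
        if len(decisions) == 1:   # consistent class -> inside a lower approximation
            pos += len(eq)
    return pos / len(rows)

# Toy failure records (hypothetical attributes, not the paper's data set).
failures = [
    {"node_type": "compute", "workload": "mpi",    "temp": "high", "root_cause": "hardware"},
    {"node_type": "compute", "workload": "mpi",    "temp": "low",  "root_cause": "software"},
    {"node_type": "io",      "workload": "serial", "temp": "low",  "root_cause": "software"},
    {"node_type": "io",      "workload": "mpi",    "temp": "high", "root_cause": "hardware"},
]

conds = ["node_type", "workload", "temp"]
full = dependency(failures, conds, "root_cause")
# An attribute is significant if dropping it lowers the dependency degree.
for a in conds:
    reduced = dependency(failures, [c for c in conds if c != a], "root_cause")
    print(f"{a}: significance = {full - reduced:.2f}")   # here only 'temp' is significant
```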

Data Mining for High Dimensional Data in Drug Discovery and Development

  • Lee, Kwan R.; Park, Daniel C.; Lin, Xiwu; Eslava, Sergio
    • Genomics & Informatics, v.1 no.2, pp.65-74, 2003
  • Data mining differs from traditional data analysis primarily along one important dimension: the scale of the data. This is why principles from computer science, and not only statistics, are needed to extract information from large data sets. In this paper we briefly review data mining, its characteristics, typical data mining algorithms, and potential and ongoing applications of data mining in the biopharmaceutical industry. The distinguishing characteristics of data mining lie in its understandability, its scalability, its problem-driven nature, and its analysis of retrospective or observational data, in contrast to experimentally designed data. At a high level, one can identify three types of problems for which data mining is useful: description, prediction, and search. The brief review of data mining algorithms covers decision trees and rules, nonlinear classification methods, memory-based methods, model-based clustering, and graphical dependency models. The application areas covered are discovery compound libraries, clinical trial and disease management data, genomics and proteomics, structural databases for candidate drug compounds, and other applications of pharmaceutical relevance.
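
As a tiny illustration of one of the algorithm families listed above (decision trees), assuming scikit-learn is available; the descriptors, labels, and compounds are invented and unrelated to the paper's data.

```python
# Fit a shallow decision tree on a made-up compound-activity table and print the
# learned rules, then classify one new (hypothetical) compound.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: [molecular_weight, logP, donor_count]; label 1 = active, 0 = inactive.
X = [[320, 2.1, 1], [450, 4.8, 0], [290, 1.5, 2], [510, 5.2, 0], [305, 2.9, 1]]
y = [1, 0, 1, 0, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["molecular_weight", "logP", "donor_count"]))
print(tree.predict([[330, 2.4, 1]]))
```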

A Simulation Framework for Wireless Compressed Data Broadcast

  • Seokjin Im
    • International Journal of Advanced Culture Technology, v.11 no.2, pp.315-322, 2023
  • Intelligent IoT environments that accommodate very large numbers of clients require technologies that provide a secure information service regardless of the number of clients. Wireless data broadcast is an information service technique that ensures such scalability by delivering data to all clients simultaneously, no matter how many there are. Because clients access the wireless channel linearly to look for the data they need, their access time is strongly affected by the length of the broadcast cycle. Compression-based data broadcasting can shorten the broadcast cycle and thus reduce client access time. A simulation framework that can evaluate the performance of data broadcasting under different data compression algorithms is therefore essential. In this paper, we propose such a simulation framework for data broadcasting with data compression. The framework is designed so that different compression algorithms can be applied according to the characteristics of the data. In addition to evaluating performance with respect to the data, the proposed framework can also evaluate performance with respect to the data scheduling technique and the kind of queries the clients want to process. We implement the proposed framework and use it, with several compression algorithms applied, to evaluate and demonstrate the performance of compressed data broadcasting.
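
As a rough illustration of why compression shortens access time (a toy model, not the proposed framework): with a flat broadcast cycle, a client's expected wait grows with the total cycle length, so halving item sizes roughly halves the wait. The item sizes and the 50% compression ratio below are made-up numbers.

```python
import random

def avg_access_time(sizes, n_trials=100_000, rng=random.Random(0)):
    """Monte-Carlo estimate of the mean access time (in broadcast ticks) for a flat
    broadcast cycle: a client tunes in at a random instant, requests a random item,
    and waits until the end of that item's next full transmission."""
    starts, cycle = [], 0
    for s in sizes:
        starts.append(cycle)
        cycle += s
    total = 0.0
    for _ in range(n_trials):
        t = rng.uniform(0, cycle)           # tune-in instant
        i = rng.randrange(len(sizes))       # requested item
        end = starts[i] + sizes[i]
        # If the item has not started yet, wait within this cycle; otherwise the
        # client missed its start and must wait for the next cycle.
        wait = end - t if t <= starts[i] else cycle - t + end
        total += wait
    return total / n_trials

# Hypothetical item sizes (ticks) and a 50% compression ratio.
raw = [8, 8, 8, 8, 8, 8, 8, 8]
compressed = [s // 2 for s in raw]
print("uncompressed:", avg_access_time(raw))
print("compressed:  ", avg_access_time(compressed))
```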

Efficient Top-k Join Processing over Encrypted Data in a Cloud Environment

  • Kim, Jong Wook
    • KSII Transactions on Internet and Information Systems (TIIS), v.10 no.10, pp.5153-5170, 2016
  • The scalability and flexibility inherent in cloud computing motivate clients to upload data and computation to public cloud servers. Because the data are placed on public clouds, which are very likely to reside outside the trusted domain of the clients, this strategy raises concerns about the security of sensitive client data. Thus, to provide sufficient security for data stored in the cloud, it is essential to encrypt sensitive data before they are uploaded onto cloud servers. Although data encryption is considered the most effective way to protect sensitive data from unauthorized users, it imposes significant overhead during query processing, owing to the limitations of executing operations directly on encrypted data. Recently, substantial research has addressed the execution of SQL queries over encrypted data. However, there has been little work on top-k join query processing over encrypted data in cloud computing environments. In this paper, we develop an efficient algorithm that processes a top-k join query against encrypted cloud data. At an early phase, the proposed top-k join processing algorithm is able to prune unpromising data sets that are guaranteed not to produce the k highest scores. The experimental results show that the proposed approach provides significant performance gains over the naive solution.
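
The pruning principle can be illustrated with a plaintext sketch (the paper's contribution is performing this over encrypted data, which is not reproduced here): when both inputs are sorted by score and the scoring function is monotone, any pair whose best possible score cannot beat the current k-th best can be discarded early. The relations, ids, and the sum scoring function below are illustrative assumptions.

```python
import heapq

def topk_join(R, S, k, score):
    """Top-k join with early pruning. R and S are lists of (id, value) sorted by
    value in descending order; score must be monotone in both arguments."""
    heap = []                                   # min-heap holding the current top-k
    for rid, rv in R:
        # Best score any remaining R tuple can reach is with the best S tuple.
        bound = score(rv, S[0][1])
        if len(heap) == k and bound <= heap[0][0]:
            break                               # no remaining pair can enter the top-k
        for sid, sv in S:
            s = score(rv, sv)
            if len(heap) == k and s <= heap[0][0]:
                break                           # S is sorted, later tuples score lower
            heapq.heappush(heap, (s, rid, sid))
            if len(heap) > k:
                heapq.heappop(heap)
    return sorted(heap, reverse=True)

R = [("r1", 9), ("r2", 7), ("r3", 2)]
S = [("s1", 8), ("s2", 5), ("s3", 1)]
print(topk_join(R, S, k=2, score=lambda a, b: a + b))
# -> [(17, 'r1', 's1'), (15, 'r2', 's1')]
```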

Main Memory Spatial Database Clusters for Large Scale Web Geographic Information Systems (대규모 웹 지리정보시스템을 위한 메모리 상주 공간 데이터베이스 클러스터)

  • Lee, Jae-Dong
    • Journal of Korea Spatial Information System Society, v.6 no.1 s.11, pp.3-17, 2004
  • With the rapid growth of the Internet, geographic information services such as location-based services are increasingly provided through the WWW, and web GISs (Geographic Information Systems), like most other information systems, have moved to cluster-based architectures. That is, to guarantee a high quality of geographic information service regardless of the rapidly growing number of users, a web GIS needs a cluster-based architecture that is cost-effective and offers high availability and scalability. This paper proposes the design of such a cluster-based web GIS with high availability and scalability. Each node within the cluster hosts a main memory spatial database that serves as a cache by exploiting data declustering and the locality of spatial queries. The proposed system processes not only simple region queries but also spatial join queries effectively. Compared with an existing method, the parallel R-tree spatial join for a shared-nothing architecture, simulation experiments show that the proposed spatial join method improves performance by 23% and 30%, respectively, as the data volume and the number of cluster nodes grow.
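
The declustering-plus-local-join idea can be sketched roughly as below; the grid hash, cell size, and data are hypothetical and do not reproduce the paper's parallel R-tree method. A real scheme must also handle objects that span cell boundaries (for example by replicating them to every overlapping cell), which this sketch omits.

```python
# Spatial objects are assigned to cluster nodes by a grid hash of their centroid,
# so each node's main-memory database caches one stripe of the data and can run
# its part of the join locally.
def node_for(bbox, n_nodes, cell=100):
    """Map a bounding box (xmin, ymin, xmax, ymax) to a node id via its centroid's grid cell."""
    cx = (bbox[0] + bbox[2]) / 2
    cy = (bbox[1] + bbox[3]) / 2
    return (int(cx // cell) + int(cy // cell)) % n_nodes

def bbox_join(rs, ss):
    """Nested-loop bounding-box intersection join executed locally on one node."""
    def overlaps(a, b):
        return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]
    return [(r, s) for r in rs for s in ss if overlaps(r, s)]

roads   = [(10, 10, 40, 20), (120, 50, 160, 60)]
parcels = [(30, 5, 50, 30), (300, 300, 320, 330)]
n_nodes = 4
for node in range(n_nodes):
    local = bbox_join([r for r in roads   if node_for(r, n_nodes) == node],
                      [s for s in parcels if node_for(s, n_nodes) == node])
    if local:
        print(node, local)
```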

An Adaptable Destination-Based Dissemination Algorithm Using a Publish/Subscribe Model in Vehicular Networks

  • Morales, Mildred Madai Caballeros; Haw, Rim; Cho, Eung-Jun; Hong, Choong-Seon; Lee, Sung-Won
    • Journal of Computing Science and Engineering, v.6 no.3, pp.227-242, 2012
  • Vehicular Ad Hoc Networks (VANETs) are highly dynamic and unstable owing to the heterogeneous nature of their communications, intermittent links, high mobility, and constant changes in network topology. Currently, some of the most important challenges of VANETs are scalability, congestion, unnecessary duplication of data, low delivery rates, communication delay, and temporary fragmentation. Many recent studies have focused on hybrid mechanisms that disseminate information using the store-and-forward technique in sparse vehicular networks and clustering techniques to avoid the scalability problem in dense vehicular networks. However, the selection of intermediate nodes in the store-and-forward technique, the stability of the clusters, and the unnecessary duplication of data remain central challenges. Therefore, we propose an adaptable destination-based dissemination algorithm (DBDA) using the publish/subscribe model. Unlike other proposed solutions, DBDA treats the destination of the vehicles as an important parameter for forming the clusters and selecting the intermediate nodes. In addition, DBDA implements a publish/subscribe model that provides a context-aware service for selecting the intermediate nodes according to the importance of the message and the destination, current location, and speed of the vehicles; as a result, it avoids delay, congestion, unnecessary duplication, and low delivery rates.
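
A minimal sketch of the publish/subscribe pattern with the vehicle destination as the topic, as in the general model described above; the class and method names are made up, and DBDA's clustering, relevance weighting, and intermediate-node selection are not reproduced.

```python
from collections import defaultdict

class DestinationBroker:
    """Toy publish/subscribe broker keyed by destination: messages are delivered
    only to vehicles that declared interest in that destination, which is what
    avoids blind flooding and duplicate transmissions."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, destination, vehicle_id):
        self.subscribers[destination].append(vehicle_id)

    def publish(self, destination, message):
        return [(vehicle, message) for vehicle in self.subscribers[destination]]

broker = DestinationBroker()
broker.subscribe("exit_12", "car_a")
broker.subscribe("exit_12", "car_b")
broker.subscribe("downtown", "car_c")
print(broker.publish("exit_12", "accident ahead, reroute"))
```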

A Mechanism to Support Scalability for Network Mobility (확장성 있는 네트워크 이동성 지원 방안)

  • Kim Taeeun; Lee Meejeong
    • Journal of KIISE: Information Networking, v.32 no.1, pp.34-50, 2005
  • Recently, various proposals have emerged for supporting network mobility, which provides efficient Internet access when a network formed within a vehicle moves around as a unit. The schemes in those proposals, though, have major drawbacks with respect to scalability: if the number of mobile nodes within a mobile network is large, the handoff latency increases greatly, causing communication disruption, and data delivery to a node within a nested mobile network may suffer from extremely inefficient pinball routing. We propose a scalable network mobility support mechanism named SNEMOS (Scalable NEtwork Mobility Support), which resolves these two major problems of the existing schemes. The performance of SNEMOS is compared with the existing schemes through extensive simulations. The numerical results show that SNEMOS outperforms the existing schemes with respect to handoff latency, hop counts of routing paths, packet delivery time, header overhead in data packets, and signaling overhead.
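
For intuition about the pinball-routing problem mentioned above, the toy calculation below assumes (as in generic nested network-mobility basic support, not SNEMOS itself) that a packet to a node nested n mobile-router levels deep detours via one home agent per level and carries one extra tunnel header per level; the hop figures are arbitrary placeholders.

```python
# Back-of-the-envelope view of how nesting depth inflates the routing path.
def pinball_path_hops(nesting_level, hops_per_home_agent_detour=10, hops_direct=10):
    """Rough hop count when each nesting level adds a detour through a home agent."""
    return hops_direct + nesting_level * hops_per_home_agent_detour

for n in range(1, 5):
    print(f"nesting depth {n}: ~{pinball_path_hops(n)} hops, {n} extra tunnel header(s)")
```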

A Recommender System Using Factorization Machine (Factorization Machine을 이용한 추천 시스템 설계)

  • Jeong, Seung-Yoon; Kim, Hyoung Joong
    • Journal of Digital Contents Society, v.18 no.4, pp.707-712, 2017
  • As the amount of data grows exponentially, recommender systems are attracting interest and being studied in various industries such as movies, books, and music. A recommender system aims to propose appropriate items to a user based on the user's past preferences and click stream. Typical examples include Netflix's movie recommendation system and Amazon's book recommendation system. Previous studies can be categorized into three types: collaborative filtering, content-based recommendation, and hybrid recommendation. However, existing recommender systems suffer from problems such as sparsity, cold start, and scalability. To address these shortcomings and develop a more accurate recommender, we design a recommendation system based on a factorization machine, using actual online product purchase data.
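
The model named in the title has a standard prediction equation (Rendle's Factorization Machine): a global bias plus a linear term plus pairwise feature interactions factorized through latent vectors. The sketch below scores a single feature vector using the usual O(kn) reformulation of the pairwise term; the dimensions, random parameters, and the example feature vector are placeholders, and the paper's training on purchase data is not shown.

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Factorization Machine score for one feature vector x:
    w0 + <w, x> + sum_{i<j} <V[i], V[j]> x_i x_j,
    with the pairwise term computed as 0.5 * sum_f((V^T x)_f^2 - ((V^2)^T x^2)_f)."""
    linear = w0 + w @ x
    xv = V.T @ x                                  # shape (k,)
    pairwise = 0.5 * (np.sum(xv ** 2) - np.sum((V ** 2).T @ (x ** 2)))
    return linear + pairwise

# Toy dimensions: n one-hot user/item/context features, k latent factors.
rng = np.random.default_rng(0)
n, k = 6, 3
w0, w, V = 0.1, rng.normal(size=n), rng.normal(scale=0.1, size=(n, k))
x = np.array([1, 0, 0, 0, 1, 0], dtype=float)     # e.g. "user 0 interacted with item 4"
print(fm_predict(x, w0, w, V))
```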

A Fast and Scalable Image Retrieval Algorithms by Leveraging Distributed Image Feature Extraction on MapReduce (MapReduce 기반 분산 이미지 특징점 추출을 활용한 빠르고 확장성 있는 이미지 검색 알고리즘)

  • Song, Hwan-Jun; Lee, Jin-Woo; Lee, Jae-Gil
    • Journal of KIISE, v.42 no.12, pp.1474-1479, 2015
  • With mobile devices showing marked improvement in performance in the age of the Internet of Things (IoT), there is demand for rapid processing of the extensive amount of multimedia big data. However, because research on image searching is focused mainly on increasing accuracy despite environmental changes, the development of fast processing of high-resolution multimedia data queries is slow and inefficient. Hence, we suggest a new distributed image search algorithm that ensures both high accuracy and rapid response by using feature extraction of distributed images based on MapReduce, and solves the problem of memory scalability based on BIRCH indexing. In addition, we conducted an experiment on the accuracy, processing time, and scalability of this algorithm to confirm its excellent performance.