• Title/Summary/Keyword: in-memory data grid


A Research on Development of Bills of Material Using Web Grid for Product Lifecycle Management

  • Yoo, Ji-Hyun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.22 no.12
    • /
    • pp.131-136
    • /
    • 2017
  • PLM (Product Lifecycle Management) is an information management system that can integrate data, processes, business systems, and human resources throughout the enterprise. BOM (Bills of Material) is the key data for design, material purchasing, and manufacturing planning and management, and is fundamental to product development across the product life cycle. In this paper, we propose an efficient system to increase the data loading and processing speed when handling such large BOM data. We present the performance and usability of an IMDG (In-Memory Data Grid) for data processing when loading large amounts of data. In the UI, using a pure JavaScript web grid instead of the existing data loading method can improve the performance of data management.
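
The data-loading idea above can be illustrated with a small sketch. The following Python example is not the authors' implementation; it mimics an in-memory data grid by hashing BOM records into in-memory partitions keyed by parent part number, so a multi-level BOM explosion becomes a series of constant-time lookups. All names (`BomGrid`, `BomItem`, the sample part numbers) are illustrative assumptions.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class BomItem:
    part_no: str
    parent_no: str   # "" marks a top-level assembly
    quantity: int

class BomGrid:
    """Toy stand-in for an IMDG: records are hashed into partitions held
    entirely in memory, so child lookups are O(1) dictionary hits."""
    def __init__(self, partitions: int = 4):
        self.partitions = [defaultdict(list) for _ in range(partitions)]

    def _bucket(self, key):
        return self.partitions[hash(key) % len(self.partitions)]

    def put(self, item: BomItem) -> None:
        # Index children under their parent for fast BOM explosion.
        self._bucket(item.parent_no)[item.parent_no].append(item)

    def children(self, parent_no: str):
        return self._bucket(parent_no).get(parent_no, [])

    def explode(self, parent_no: str, level: int = 0) -> None:
        """Recursively print the multi-level BOM for one assembly."""
        for child in self.children(parent_no):
            print("  " * level + f"{child.part_no} x{child.quantity}")
            self.explode(child.part_no, level + 1)

# Usage: load a tiny BOM and explode the top-level assembly.
grid = BomGrid()
for part, parent, qty in [("CAR", "", 1), ("ENGINE", "CAR", 1), ("PISTON", "ENGINE", 4)]:
    grid.put(BomItem(part, parent, qty))
grid.explode("CAR")
```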

An Adaptive Grid-based Clustering Algorithm over Multi-dimensional Data Streams (적응적 격자기반 다차원 데이터 스트림 클러스터링 방법)

  • Park, Nam-Hun;Lee, Won-Suk
    • The KIPS Transactions:PartD
    • /
    • v.14D no.7
    • /
    • pp.733-742
    • /
    • 2007
  • A data stream is a massive, unbounded sequence of data elements continuously generated at a rapid rate. For this reason, memory usage for data stream analysis must be kept finite even though new data elements are continuously generated. To satisfy this requirement, data stream processing sacrifices some correctness of its analysis result by allowing some errors. Old distribution statistics are diminished by a predefined decay rate as time goes by, so that the effect of obsolete information on the current clustering result can be eliminated without physically maintaining any data element. This paper proposes a grid-based clustering algorithm for a data stream. Given a set of initial grid cells, the dense range of a grid cell is recursively partitioned into smaller cells in a top-down manner, based on the distribution statistics of data elements, until the smallest cell, called a unit cell, is identified. Since only the distribution statistics of data elements are maintained by the dynamically partitioned grid cells, the clusters of a data stream can be found effectively without physically maintaining the data elements. Furthermore, the memory usage of the proposed algorithm adapts to the size of the confined memory space by flexibly resizing the unit cell. As a result, the confined memory space can be fully utilized to generate the clustering result as accurately as possible. The proposed algorithm is analyzed through a series of experiments to identify its various characteristics.
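
A compressed sketch of the core ideas above (statistics-only grid cells, decayed counts, recursive top-down partitioning of dense cells down to a unit cell) is given below. It is a one-dimensional simplification for brevity, not the algorithm from the paper, and the constants are assumed values.

```python
import random

DECAY = 0.999          # assumed decay rate applied to old statistics
SPLIT_THRESHOLD = 200  # decayed count above which a dense cell is partitioned
UNIT_SIZE = 0.01       # width of the smallest (unit) cell; stops the recursion

class Cell:
    """A grid cell keeps only distribution statistics (a decayed count),
    never the data elements themselves."""
    def __init__(self, low, high):
        self.low, self.high, self.count = low, high, 0.0
        self.children = None   # created lazily once the cell becomes dense

    def insert(self, x):
        self.count = self.count * DECAY + 1.0   # fade obsolete information
        mid = (self.low + self.high) / 2
        if (self.children is None and self.count > SPLIT_THRESHOLD
                and self.high - self.low > UNIT_SIZE):
            self.children = [Cell(self.low, mid), Cell(mid, self.high)]  # top-down split
        if self.children is not None:
            self.children[0 if x < mid else 1].insert(x)

    def dense_leaves(self, threshold):
        """Collect leaf cells dense enough to belong to a cluster."""
        if self.children is None:
            return [self] if self.count >= threshold else []
        return [c for child in self.children for c in child.dense_leaves(threshold)]

# Usage on a stream confined to [0, 1): dense leaves emerge around the mode.
root = Cell(0.0, 1.0)
for _ in range(5000):
    root.insert(min(max(random.gauss(0.3, 0.02), 0.0), 0.999))
for cell in root.dense_leaves(threshold=20):
    print(f"dense cell [{cell.low:.3f}, {cell.high:.3f}) count={cell.count:.1f}")
```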

Experimental investigation of Scalability of DDR DRAM packages

  • Crisp, R.
    • Journal of the Microelectronics and Packaging Society
    • /
    • v.17 no.4
    • /
    • pp.73-76
    • /
    • 2010
  • A two-facet approach was used to investigate the parametric performance of functional high-speed DDR3 (Double Data Rate 3) DRAM (Dynamic Random Access Memory) die placed in different types of BGA (Ball Grid Array) packages: wire-bonded BGA (FBGA, Fine Ball Grid Array), flip-chip (FCBGA), and lead-bonded microBGA®. In the first part, packaged live DDR3 die were tested on automatic test equipment with high-resolution shmoo plots. It was found that the best timing and voltage margin was obtained with the lead-bonded microBGA, followed by the wire-bonded FBGA, with the FCBGA exhibiting the worst performance of the three types tested. In particular, the flip-chip packaged devices exhibited reduced operating voltage margin. In the second part of this work, a test system was designed and constructed to mimic the electrical environment of the data bus in a PC's CPU-memory subsystem, using a single DIMM (Dual In-line Memory Module) socket in point-to-point and point-to-two-point configurations. The emulation system was used to examine signal integrity for system-level operation at speeds in excess of 6 Gb/pin/sec in order to assess the frequency extensibility of the signal-carrying path of the microBGA considered for future high-speed DRAM packaging. The analyzed signal path was driven from either end of the data bus by a GaAs laser driver capable of operation beyond 10 GHz. Eye diagrams were measured using a high-speed sampling oscilloscope, with a pulse generator providing a pseudo-random bit sequence stimulus for the laser drivers. The memory controller was emulated by a circuit implemented on a BGA interposer employing the laser driver, while the active DRAM was modeled using the same type of laser driver mounted on the DIMM module. A custom silicon loading die was designed, fabricated, and placed into the microBGA packages, which were attached to an instrumented DIMM module. It was found that 6.6 Gb/sec/pin operation appears feasible in both point-to-point and point-to-two-point configurations when the input capacitance is limited to 2 pF.

Density Adaptive Grid-based k-Nearest Neighbor Regression Model for Large Dataset (대용량 자료에 대한 밀도 적응 격자 기반의 k-NN 회귀 모형)

  • Liu, Yiqi;Uk, Jung
    • Journal of Korean Society for Quality Management
    • /
    • v.49 no.2
    • /
    • pp.201-211
    • /
    • 2021
  • Purpose: This paper proposes a density adaptive grid algorithm for the k-NN regression model to reduce the computation time for large datasets without significant loss of prediction accuracy. Methods: The proposed method uses the concept of a grid with centroids to reduce the number of reference data points, so that the required computation time is greatly reduced. Since the grid generation process is based on quantiles of the original variables, the proposed method fully reflects the density information of the original reference data set. Results: Using five real-life datasets, the proposed k-NN regression model is compared with the original k-NN regression model. The results show that the proposed density adaptive grid-based k-NN regression model is superior to the original k-NN regression in terms of data reduction ratio and time efficiency ratio, and provides similar prediction error if an appropriate number of grid cells is selected. Conclusion: The proposed density adaptive grid algorithm for the k-NN regression model is a simple and effective model that helps avoid a large loss of prediction accuracy, with faster execution speed and lower memory requirements during the testing phase.
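
The grid-reduction step can be sketched as follows. This is a simplified reading of the abstract (quantile-based binning per variable, cell centroids and mean responses as the reduced reference set, then plain k-NN), not the authors' code; all names and the toy data are assumptions.

```python
import numpy as np

def quantile_grid_knn(X_ref, y_ref, X_query, n_bins=8, k=5):
    """Quantile-based (density adaptive) grid reduction followed by k-NN
    regression: points sharing a grid cell are replaced by the cell centroid
    and mean response, and k-NN runs on the much smaller centroid set."""
    n, p = X_ref.shape
    # Cut every variable at its own quantiles so dense regions get fine cells.
    edges = [np.quantile(X_ref[:, j], np.linspace(0, 1, n_bins + 1)[1:-1])
             for j in range(p)]
    codes = np.stack([np.searchsorted(edges[j], X_ref[:, j]) for j in range(p)], axis=1)

    # Collapse each occupied cell to (centroid, mean response).
    cells, inverse = np.unique(codes, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    centroids = np.vstack([X_ref[inverse == c].mean(axis=0) for c in range(len(cells))])
    responses = np.array([y_ref[inverse == c].mean() for c in range(len(cells))])

    # Plain k-NN with uniform weights on the reduced reference set.
    preds = []
    for x in X_query:
        nearest = np.argsort(np.linalg.norm(centroids - x, axis=1))[:k]
        preds.append(responses[nearest].mean())
    return np.array(preds), len(cells)

# Usage: the reduced reference set is far smaller than the original data.
rng = np.random.default_rng(0)
X = rng.normal(size=(20000, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=20000)
pred, n_cells = quantile_grid_knn(X, y, X[:5])
print(n_cells, "centroids instead of", len(X), "points; predictions:", np.round(pred, 2))
```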

A Study on the Development Automatic Placement/Routing System in the PCB (인쇄회로기판 자동배치/배선 시스템 개발에 관한 연구)

  • Kim, Hyun-Gi;Woo, Kyong-Hwan
    • The KIPS Transactions:PartA
    • /
    • v.11A no.4
    • /
    • pp.267-276
    • /
    • 2004
  • The routing region in an automatic placement/routing system can be modeled with either a grid or a non-grid method. Because the grid method is bounded by the grid resolution and the board size, it consumes a large amount of memory even when the PCB carries only a small number of electrical and physical elements, which slows down automatic placement and routing. The shape-based (non-grid) method keeps each shape as an individual element in memory using a region-processing approach, and each element needs very little memory since it stores only its own data. Therefore, this paper aims to develop an automatic placement/routing system that can place and route a PCB at high speed without wasting memory. To this end, the auction algorithm was applied, which can most rapidly find paths from the origin point to various destinations. The system was developed in Visual C++ under the Windows environment on an IBM Pentium computer so that it can be used on an individual PC system.
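
The routing step above (rapidly reaching several destinations from an origin over a weighted routing graph) can be illustrated with a standard shortest-path search. The sketch below uses Dijkstra's algorithm as a simpler stand-in for the auction algorithm the paper applies; the node names and wiring costs are hypothetical.

```python
import heapq

def route(adjacency, origin, destinations):
    """Shortest paths from one origin to several destinations (Dijkstra),
    a simpler stand-in for the auction shortest-path method the paper uses.
    `adjacency` maps a node to [(neighbor, wiring_cost), ...]."""
    dist, prev = {origin: 0}, {}
    heap = [(0, origin)]
    remaining = set(destinations)
    while heap and remaining:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry
        remaining.discard(u)
        for v, w in adjacency.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    paths = {}
    for t in destinations:                 # rebuild each routed path
        node, path = t, [t]
        while node != origin and node in prev:
            node = prev[node]
            path.append(node)
        paths[t] = list(reversed(path))
    return paths

# Usage: pads as nodes, candidate track segments as weighted edges.
net = {"P1": [("A", 2), ("B", 5)], "A": [("B", 1), ("P2", 4)], "B": [("P3", 2)]}
print(route(net, "P1", ["P2", "P3"]))
```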

The Performance Evaluation of a Space-Division typed Index on the Flash Memory based Storage (플래쉬 메모리기반 저장장치에서의 공간분할기법 색인의 성능 평가)

  • Kim, Dong Hyun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.1
    • /
    • pp.103-108
    • /
    • 2014
  • Flash memory, which is used in hand-held devices such as smartphones, is a non-volatile storage medium with the benefit of storing a large amount of data on a small chip. To process queries on the mass data stored in flash memory, an index scheme should be employed. However, since the write operation of flash memory is slower than the read operation and overwriting is not supported, it is necessary to reevaluate the performance of the index and identify its drawbacks. In this paper, we evaluate the performance of a space-division typed index scheme on flash memory. To do this, we implement the fixed grid file and measure the average query and update processing speeds under various conditions, comparing the results on flash memory with those on a magnetic disk.
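
A minimal fixed grid file, the index type evaluated above, can be sketched as follows. This is an illustration of the structure, not the paper's code; the cell counts and the unit-square domain are assumptions.

```python
import random

class FixedGridFile:
    """Minimal fixed grid file over the unit square: the space is divided
    into nx-by-ny equal cells and each cell keeps a bucket of points."""
    def __init__(self, nx=16, ny=16):
        self.nx, self.ny = nx, ny
        self.buckets = [[[] for _ in range(ny)] for _ in range(nx)]

    def _cell(self, x, y):
        i = min(int(x * self.nx), self.nx - 1)
        j = min(int(y * self.ny), self.ny - 1)
        return i, j

    def insert(self, x, y, record):
        i, j = self._cell(x, y)
        self.buckets[i][j].append((x, y, record))

    def range_query(self, x1, y1, x2, y2):
        """Return records in the rectangle; only overlapping cells are read."""
        i1, j1 = self._cell(x1, y1)
        i2, j2 = self._cell(x2, y2)
        out = []
        for i in range(i1, i2 + 1):
            for j in range(j1, j2 + 1):
                out.extend(r for (x, y, r) in self.buckets[i][j]
                           if x1 <= x <= x2 and y1 <= y <= y2)
        return out

# Usage
grid = FixedGridFile()
for k in range(1000):
    grid.insert(random.random(), random.random(), f"rec{k}")
print(len(grid.range_query(0.2, 0.2, 0.4, 0.4)), "records in the query window")
```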

A Case Study of the Base Technology for the Smart Grid Security: Focusing on a Performance Improvement of the Basic Algorithm for the DDoS Attacks Detection Using CUDA

  • Huh, Jun-Ho;Seo, Kyungryong
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.2
    • /
    • pp.411-417
    • /
    • 2016
  • Since the introduction of the Graphics Processing Unit (GPU) in 1999, GPUs have advanced much faster than CPUs, and currently the computational power of GPUs exceeds that of CPUs by dozens to hundreds of times for floating-point calculations at a much lower cost. Owing to recent hardware advances, general-purpose computing on GPUs is on the rise. Thus, in this paper, we identify the elements to be considered for smart grid security, focusing on improving the performance of a basic stateful-inspection algorithm for detecting DDoS attacks using CUDA. In the program, we compared the search speeds of the GPU and the CPU while searching suffix trees; the system constraints and specifications were kept identical during the experiment. The experimental results show that problem-solving capability improves when the GPU is used. Another finding was that system performance was further enhanced when shared memory was used explicitly instead of global memory as the volume of data grew.
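
A much-simplified stand-in for the GPU side of this experiment is sketched below using Numba's CUDA bindings (an assumption; the paper uses CUDA directly and searches suffix trees, which is not reproduced here). Each thread checks one payload offset for a fixed attack signature after the block's bytes are staged in shared memory, the explicit shared-memory use whose benefit the abstract reports. Running it requires a CUDA-capable GPU.

```python
import numpy as np
from numba import cuda, uint8

TPB = 128      # threads per block
SIG_LEN = 4    # bytes in the (hypothetical) attack signature

@cuda.jit
def match_signature(payload, sig, hits):
    tile = cuda.shared.array(shape=TPB + SIG_LEN, dtype=uint8)
    i = cuda.grid(1)
    tx = cuda.threadIdx.x
    if i < payload.size:
        tile[tx] = payload[i]                 # stage this block's bytes
    if tx < SIG_LEN and i + TPB < payload.size:
        tile[TPB + tx] = payload[i + TPB]     # halo bytes for boundary matches
    cuda.syncthreads()
    if i + SIG_LEN <= payload.size:
        match = True
        for k in range(SIG_LEN):
            if tile[tx + k] != sig[k]:
                match = False
        if match:
            hits[i] = 1

# Usage: mark every offset where the signature occurs in the captured traffic.
payload = np.frombuffer(b"....GET /....GET /....", dtype=np.uint8).copy()
sig = np.frombuffer(b"GET ", dtype=np.uint8).copy()
hits = np.zeros(payload.size, dtype=np.uint8)
blocks = (payload.size + TPB - 1) // TPB
match_signature[blocks, TPB](payload, sig, hits)
print(np.nonzero(hits)[0])   # offsets of suspected signature matches
```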

Accurate and efficient GPU ray-casting algorithm for volume rendering of unstructured grid data

  • Gu, Gibeom;Kim, Duksu
    • ETRI Journal
    • /
    • v.42 no.4
    • /
    • pp.608-618
    • /
    • 2020
  • We present a novel GPU-based ray-casting algorithm for volume rendering of unstructured grid data. Our volume rendering system uses a ray-casting method that guarantees accurate rendering results. We also employ the per-pixel intersection list concept in the Bunyk algorithm to guarantee an accurate result for non-convex meshes. For efficient memory access for the lists on the GPU, we represent the intersection lists for all faces as an array with our novel construction algorithm. With the intersection lists, we perform ray-casting on a GPU, and a GPU thread handles each ray. To increase ray-coherency in a thread block and improve memory access efficiency, we extend a prior image-tile-based work distribution method to fit modern GPU architectures. We also show that a prior approach using a per-thread local buffer to reduce redundant computation is not appropriate for modern GPU architectures. Instead, we take an on-demand calculation strategy that achieves better performance even though it allows duplicate computations. We applied our method to three unstructured grid datasets with different characteristics. With a GPU, our method achieved up to 36.5 times higher performance for the ray-casting process and 19.7 times higher performance for the whole volume rendering process compared with the Bunyk algorithm using a CPU core. Also, our approach showed up to 8.2 times higher performance than a GPU-based cell projection method while generating more accurate rendering results. These results demonstrate the efficiency and accuracy of our method.
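
The per-ray work each GPU thread performs, compositing the pixel's sorted intersection list front to back, can be sketched on the CPU as below. This is a generic volume-rendering accumulation loop for illustration, not the authors' kernel; the transfer function and segment values are made up.

```python
import math

def composite_ray(intersections, transfer):
    """Front-to-back compositing along one ray. `intersections` is the
    per-pixel list of (t_enter, t_exit, scalar) segments collected while
    walking the mesh; `transfer` maps a scalar to (r, g, b, density)."""
    color, alpha = [0.0, 0.0, 0.0], 0.0
    for t0, t1, s in sorted(intersections, key=lambda seg: seg[0]):
        r, g, b, density = transfer(s)
        a = 1.0 - math.exp(-density * (t1 - t0))      # segment opacity
        weight = (1.0 - alpha) * a
        color = [c + weight * ch for c, ch in zip(color, (r, g, b))]
        alpha += weight
        if alpha > 0.99:                              # early ray termination
            break
    return color, alpha

# Usage with a toy grayscale transfer function.
segments = [(0.2, 0.5, 0.8), (0.5, 0.9, 0.3), (1.1, 1.4, 0.9)]
rgb, a = composite_ray(segments, lambda s: (s, s, s, 2.0 * s))
print([round(c, 3) for c in rgb], round(a, 3))
```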

Smart grid and nuclear power plant security by integrating cryptographic hardware chip

  • Kumar, Niraj;Mishra, Vishnu Mohan;Kumar, Adesh
    • Nuclear Engineering and Technology
    • /
    • v.53 no.10
    • /
    • pp.3327-3334
    • /
    • 2021
  • Present electric grids are being advanced to integrate smart grids, distributed resources, high-speed sensing and control, and other advanced metering technologies. Cybersecurity is one of the challenges of the smart grid and of nuclear plant digital systems. It affects the advanced metering infrastructure (AMI), which communicates grid data and controls the information in real time. This research article emphasizes solving nuclear and smart grid hardware security issues by integrating a field programmable gate array (FPGA) and implementing the Time Authenticated Cryptographic Identity Transmission (TACIT) cryptographic algorithm on the chip. The cryptography-based encryption and decryption approach can be used for a smart grid distribution system embedded with FPGA hardware. The chip design is carried out in Xilinx ISE 14.7 and synthesized on Virtex-5 FPGA hardware. The contribution of the work is that the algorithm is implemented on FPGA hardware, which provides a scalable design with different key sizes, and its integration enhances grid hardware security and switching. Similar state-of-the-art approaches have reported the algorithm only in software, not implemented in a hardware chip. The main finding of the research work is that the design predicts the utilization of hardware resources such as slices, LUTs, flip-flops, memory, and input/output blocks, as well as timing information, for Virtex-5 FPGA synthesis before chip fabrication. The information is extracted for 8-bit to 128-bit keys and grid data with initial parameters. The TACIT security chip supports a 400 MHz frequency for a 128-bit key. The research work is an effort to provide a solution for industries working towards embedded hardware security for smart grid, power plant, and nuclear applications.

A Study of File Replacement Policy in Data Grid Environments (데이터 그리드 환경에서 파일 교체 정책 연구)

  • Park, Hong-Jin
    • The KIPS Transactions:PartA
    • /
    • v.13A no.6 s.103
    • /
    • pp.511-516
    • /
    • 2006
  • Data grid computing provides geographically distributed storage resources for solving computational problems with large-scale data. Unlike cache replacement in virtual memory or web caching, finding an optimal file replacement policy for data grids is an important problem because the files involved are very large. Traditional file replacement policies such as LRU (Least Recently Used), LCB-K (Least Cost Beneficial based on K), EBR (Economic-based cache replacement), and LVCT (Least Value-based on Caching Time) have the problem that they must predict future requests or need additional resources for file replacement. To solve these problems, this paper proposes SBR-k (Size-based replacement-k), which replaces files based on file size. The simulation results show that the proposed policy performs better than the traditional policies.
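
A simplified sketch of size-based replacement is given below: when the storage element is full, the largest cached files are evicted first until the incoming file fits. This follows the abstract's description that SBR-k replaces files based on file size; the exact role of the parameter k is not described there and is not modeled, and the capacities are invented.

```python
class SizeBasedCache:
    """Simplified size-based replacement for a data grid storage element:
    when space is needed, the largest cached files are evicted first."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.files = {}          # file name -> size
        self.used = 0

    def admit(self, name, size):
        if name in self.files or size > self.capacity:
            return
        while self.used + size > self.capacity:
            victim = max(self.files, key=self.files.get)   # biggest file first
            self.used -= self.files.pop(victim)
        self.files[name] = size
        self.used += size

# Usage: the final 900 GB file pushes out the largest resident replica first.
cache = SizeBasedCache(capacity=2000)          # capacity in GB
for name, size in [("a", 700), ("b", 900), ("c", 300), ("d", 900)]:
    cache.admit(name, size)
print(cache.files)
```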