• Title/Summary/Keyword: memory constraint

Performance Evaluation of SSD-Index Maintenance Schemes in IR Applications

  • Jin, Du-Seok;Jung, Hoe-Kyung
    • Journal of information and communication convergence engineering
    • /
    • v.8 no.4
    • /
    • pp.377-382
    • /
    • 2010
  • With the advent of flash-memory-based storage devices (SSDs), there is considerable interest within the computer industry in using them for many different types of application. Among these, maintaining dynamic index structures for large text collections has been a primary issue in information retrieval applications. Previous studies have shown three approaches to be effective: in-place update, merge-based index maintenance, and a combination of both. These strategies were developed for the traditional storage device (HDD), which imposes a constraint on keeping dynamic data contiguous. With the new storage device, this contiguity constraint disappears because of its low access latency. However, although the SSD offers low access latency and improved I/O throughput, it is still not well suited to traditional dynamic index structures because its random write throughput is poor in practical systems. Therefore, based on an experimental performance evaluation of various index maintenance schemes on the new storage device, we propose an efficient index structure for SSDs that significantly improves index maintenance speed without degrading query performance.
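The merge strategy referred to in the abstract is easy to illustrate. The sketch below is a minimal, hypothetical example (the class name and flush threshold are assumptions, not the paper's scheme) of merge-based index maintenance: postings are buffered in memory and periodically merged to storage as one large sequential write, avoiding the random writes that SSDs handle poorly.

```python
from collections import defaultdict

class MergeBasedIndex:
    """Minimal sketch of merge-based index maintenance: postings are
    buffered in memory and periodically merged into an on-disk segment
    using sequential writes only."""

    def __init__(self, flush_threshold=1000):
        self.memory_buffer = defaultdict(list)  # term -> [doc_id, ...]
        self.disk_segments = []                 # merged on-disk segments
        self.buffered_postings = 0
        self.flush_threshold = flush_threshold  # hypothetical threshold

    def add_document(self, doc_id, terms):
        for term in terms:
            self.memory_buffer[term].append(doc_id)
            self.buffered_postings += 1
        if self.buffered_postings >= self.flush_threshold:
            self._merge_to_disk()

    def _merge_to_disk(self):
        # Merge the whole buffer into one new segment: a single large
        # sequential write instead of many small random writes.
        segment = {t: sorted(ids) for t, ids in self.memory_buffer.items()}
        self.disk_segments.append(segment)
        self.memory_buffer.clear()
        self.buffered_postings = 0

    def search(self, term):
        result = list(self.memory_buffer.get(term, []))
        for segment in self.disk_segments:
            result.extend(segment.get(term, []))
        return sorted(result)
```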

Tunnel Barrier Engineering for Non-Volatile Memory

  • Jung, Jong-Wan;Cho, Won-Ju
    • JSTS:Journal of Semiconductor Technology and Science
    • /
    • v.8 no.1
    • /
    • pp.32-39
    • /
    • 2008
  • The tunnel oxide of non-volatile memory (NVM) devices is very difficult to downscale as long as ten-year data retention is required. This requirement limits further improvement of device performance in terms of programming speed and operating voltages. Consequently, for low-power applications with Fowler-Nordheim programming, such as NAND, program and erase voltages remain at unacceptably high levels. A promising solution for tunnel oxide scaling is tunnel barrier engineering (TBE), which uses multiple dielectric stacks to enhance field sensitivity. This allows shorter write/erase times and/or lower operating voltages than a single $SiO_2$ tunnel oxide without relaxing the ten-year data retention constraint. In this paper, two approaches to tunnel barrier engineering are compared: the crested barrier and variable oxide thickness. Key results of TBE and its applications for NVM are also addressed.
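As background for the Fowler-Nordheim programming mentioned above, the standard textbook form of the FN tunneling current density (a well-known expression, not a result of this paper) is $J_{FN} = A\,E_{ox}^{2}\exp(-B/E_{ox})$, where $E_{ox}$ is the field across the tunnel dielectric and $A$ and $B$ are constants set by the barrier height and carrier effective mass. Tunnel barrier engineering increases the sensitivity of the tunneling current to $E_{ox}$, which is what allows faster programming or lower voltages at a fixed retention target.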

An Analysis of Execution Patterns of Weather Forecast Application in Constraints Conditions (제약 조건에서의 예보를 위한 기상 응용의 실행 패턴 분석)

  • Oh, Jisun;Kim, Yoonhee
    • KNOM Review
    • /
    • v.22 no.3
    • /
    • pp.25-30
    • /
    • 2019
  • Meteorological applications must derive and deliver meaningful results within time and resource limits. Forecasting over large volumes of historical data is time-consuming, and resource limitations remain an issue for disaster-safety analyses and predictions such as local typhoon forecasts. Suitable forecasts should be provided despite limited physical computing environments and under tight time constraints, as in typhoon forecasts or road-flooding forecast services. In this paper, we analyze a weather and climate forecasting application in order to provide a suitable forecasting service under both temporal and resource constraints. An analysis of execution time as a function of mesh size confirms that mesh adjustment can cope with the temporal constraint. In addition, by measuring execution time under memory resource control, we identify the minimum resource allocation that does not affect performance and characterize the application's resource usage pattern through swap and mlock analysis.
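The memory-resource-control experiment described above can be approximated with a small driver script. The sketch below is a hypothetical illustration only: the command name, mesh sizes, and memory caps are placeholders rather than the paper's actual setup; it simply times each run under a capped address space using the POSIX resource module.

```python
import resource
import subprocess
import time

def run_with_memory_cap(cmd, mem_limit_bytes):
    """Run a command under a capped virtual address space (Linux/POSIX)
    and report its wall-clock time and exit code."""
    def limit():
        resource.setrlimit(resource.RLIMIT_AS,
                           (mem_limit_bytes, mem_limit_bytes))
    start = time.time()
    proc = subprocess.run(cmd, preexec_fn=limit)
    return time.time() - start, proc.returncode

# Hypothetical sweep: one forecast run per (mesh size, memory cap) pair.
if __name__ == "__main__":
    for mesh in (50, 100, 200):                  # placeholder mesh sizes
        for cap_gb in (2, 4, 8):                 # placeholder memory caps
            elapsed, rc = run_with_memory_cap(
                ["./forecast_model", f"--mesh={mesh}"],  # placeholder command
                cap_gb * 1024 ** 3)
            print(f"mesh={mesh} cap={cap_gb}GB time={elapsed:.1f}s rc={rc}")
```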

Tracking Cold Blocks for Static Wear Leveling in FTL-based NAND Flash Memory (메모리에서 정적 마모도 평준화를 위한 콜드 블록 추적 기법)

  • Jang, Yonghun;Kim, Sungho;Hwang, Sang-Ho;Lee, Myungsub;Park, Chang-Hyeon
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.12 no.3
    • /
    • pp.185-192
    • /
    • 2017
  • Owing to its low power consumption, high durability, and high density, NAND flash memory is heavily used in devices such as USB drives, SD cards, smartphones, and SSDs. On the other hand, because each flash cell tolerates only a limited number of program/erase cycles, NAND flash memory has a short lifetime compared to other storage devices. To overcome this lifetime problem, much research on wear leveling has been conducted. This paper presents a method called TCB (Tracking Cold Blocks) that applies stricter constraint conditions than previous works when classifying cold blocks. TCB maintains an MCT (Migrated Cold block Table) to manage this enhanced classification of cold blocks, which further reduces unnecessary page migrations. Experiments show that TCB reduces wear-leveling overhead by about 30% and extends the lifetime by up to about 60% compared to BET and BST.
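To make the cold-block idea concrete, here is a minimal sketch of static wear leveling that migrates data out of blocks that have stayed cold; the migrated-block set loosely mirrors the MCT described above, but the threshold and victim-selection details are assumptions, not the paper's algorithm.

```python
class WearLeveler:
    """Minimal static wear-leveling sketch: blocks not written for a long
    time (cold blocks) are migrated so their low-wear cells can be reused."""

    def __init__(self, num_blocks, cold_age=1000):
        self.erase_count = [0] * num_blocks   # wear per block
        self.last_write = [0] * num_blocks    # logical time of last write
        self.migrated = set()                 # rough analogue of an MCT
        self.cold_age = cold_age              # hypothetical coldness threshold
        self.clock = 0

    def write(self, block):
        self.clock += 1
        self.last_write[block] = self.clock
        self.erase_count[block] += 1
        self.migrated.discard(block)          # a written block is hot again

    def classify_cold_blocks(self):
        # A block is cold if it has not been written for cold_age operations
        # and has not already been migrated (avoids repeated migrations).
        return [b for b, t in enumerate(self.last_write)
                if self.clock - t > self.cold_age and b not in self.migrated]

    def level(self):
        # Move cold data onto the most-worn block so the least-worn
        # (previously cold) block becomes available for hot data.
        for cold in self.classify_cold_blocks():
            hottest = max(range(len(self.erase_count)),
                          key=self.erase_count.__getitem__)
            if self.erase_count[hottest] > self.erase_count[cold]:
                self.erase_count[hottest] += 1  # cost of the migration write
                self.migrated.add(cold)
```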

Enhancing the performance of taxi application based on in-memory data grid technology (In-memory data grid 기술을 활용한 택시 애플리케이션 성능 향상 기법 연구)

  • Choi, Chi-Hwan;Kim, Jin-Hyuk;Park, Min-Kyu;Kwon, Kaaen;Jung, Seung-Hyun;Nazareno, Franco;Cho, Wan-Sup
    • Journal of the Korean Data and Information Science Society
    • /
    • v.26 no.5
    • /
    • pp.1035-1045
    • /
    • 2015
  • Recent studies in big data analysis show promising results from using main memory for rapid data processing. In-memory computing technology is highly advantageous on high-performance servers with tens of gigabytes of RAM and multi-core processors, and the network constraint in such infrastructures can be lessened by combining in-memory technology with distributed parallel processing. This paper applies these concepts to a test taxi-hailing application while preserving its underlying RDBMS structure. Applying IMDG technology in the application's backend API, without restructuring the database schema, yields a 6- to 9-fold increase in data processing performance and throughput. In particular, throughput changes very little even as the data processing load increases.
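A common way to put an IMDG in front of an existing RDBMS without touching the schema is the cache-aside pattern. The sketch below illustrates that pattern in generic terms; the in-memory dictionary stands in for an IMDG client and the table is a made-up example, not the system actually used in the paper.

```python
import sqlite3

class CacheAsideRepository:
    """Cache-aside sketch: reads hit an in-memory grid first and fall back
    to the RDBMS; writes update the database and invalidate the cache."""

    def __init__(self, db_path=":memory:"):
        self.grid = {}                       # stand-in for an IMDG client
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS taxi (id INTEGER PRIMARY KEY, "
            "driver TEXT, lat REAL, lng REAL)")

    def get_taxi(self, taxi_id):
        if taxi_id in self.grid:             # cache hit: no DB round trip
            return self.grid[taxi_id]
        row = self.db.execute(
            "SELECT id, driver, lat, lng FROM taxi WHERE id = ?",
            (taxi_id,)).fetchone()
        if row is not None:
            self.grid[taxi_id] = row         # populate cache on miss
        return row

    def update_position(self, taxi_id, lat, lng):
        self.db.execute("UPDATE taxi SET lat = ?, lng = ? WHERE id = ?",
                        (lat, lng, taxi_id))
        self.db.commit()
        self.grid.pop(taxi_id, None)         # invalidate stale cache entry
```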

A study on the estimate of the angular distortion for a fillet weldment (필릿 용접부의 각변형량 예측에 관한 연구)

  • ;;;Lee, S. H.;Cho, S. H.
    • Journal of Welding and Joining
    • /
    • v.15 no.4
    • /
    • pp.63-69
    • /
    • 1997
  • Welding distortion is one of the most serious problems caused by the welding process, especially in heavy industry. Such distortion results from nonuniform heating and cooling of the metal during and after welding, and its magnitude must be known on the production line because it plays an important role in assembly. An analytical model is therefore needed to explain and predict welding distortion. A numerical analysis of welding distortion, which involves the inelastic behavior of the weldment, would require a full three-dimensional calculation, but the computing time and memory would be very large and the resulting cost might be unacceptable. We therefore use a two-dimensional numerical analysis of the section normal to the weld direction under the assumption of quasi-stationary conditions. However, the results computed under the two-dimensional (plane strain) assumption did not agree well with experiment. This paper proposes a technique for analyzing welding angular distortion by applying a constraint boundary condition to the two-dimensional finite element model. The simulation results show that the constraint boundary model describes the welding distortion more reasonably than the plane strain model.
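For readers unfamiliar with how a constraint boundary condition enters a finite element model, the sketch below shows one standard mechanism, the penalty method, on a toy 3-degree-of-freedom system; it is a generic illustration under assumed data, not the paper's welding model.

```python
import numpy as np

def apply_constraint(K, f, dof, value, penalty=1e12):
    """Impose the constraint u[dof] = value on the system K u = f using
    the penalty method (one standard way to add a constraint boundary
    condition to a finite element model)."""
    K = K.copy()
    f = f.copy()
    K[dof, dof] += penalty
    f[dof] += penalty * value
    return K, f

# Tiny 2-spring chain: singular until the left end is fixed (u0 = 0).
K = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
f = np.array([0., 0., 1.])       # unit load on the right end
Kc, fc = apply_constraint(K, f, dof=0, value=0.0)
u = np.linalg.solve(Kc, fc)
print(u)                          # u[0] is (numerically) pinned to zero
```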

A Implementation of Simple Convolution Decoder Using a Temporal Neural Networks

  • Chung, Hee-Tae;Kim, Kyung-Hun
    • Journal of information and communication convergence engineering
    • /
    • v.1 no.4
    • /
    • pp.177-182
    • /
    • 2003
  • Conventional multilayer feedforward artificial neural networks are very effective for spatial problems. To handle problems with time dependency, some kind of memory has to be built into the processing algorithm. In this paper we show how the newly proposed Serial Input Neuron (SIN) convolutional decoders can be derived. As an example, we derive the SIN decoder for a rate code with constraint length 3. The SIN is tested on a Gaussian channel and the results are compared to those of the optimal Viterbi decoder. The SIN approach to decoding convolutional codes requires no supervision, and the decoder lends itself to efficient high-speed hardware implementations. However, the speed of current circuits may limit the codes that can be used; with faster circuits in the future, the proposed technique may become a tempting choice for decoding convolutional codes with long constraint lengths.
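For reference, a constraint-length-3 convolutional encoder of the kind such decoders target fits in a few lines. The sketch below uses the common rate-1/2 generators (7, 5) in octal as an assumed example; it is not necessarily the exact code used in the paper.

```python
def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    """Rate-1/2 convolutional encoder with constraint length k.
    Each input bit produces two output bits from generators g1, g2."""
    state = 0                                      # (k-1)-bit shift register
    out = []
    for b in bits:
        reg = (b << (k - 1)) | state               # newest bit on the left
        out.append(bin(reg & g1).count("1") % 2)   # parity under generator 1
        out.append(bin(reg & g2).count("1") % 2)   # parity under generator 2
        state = reg >> 1                           # shift register forward
    return out

# Example: encode a short message (the two trailing zeros flush the register).
print(conv_encode([1, 0, 1, 1, 0, 0]))
```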

Compiler Optimization Techniques for The Next Generation Low Power Multibank Memory (차세대 저전력 멀티뱅크 메모리를 위한 컴파일러 최적화 기법)

  • Cho, Doosan
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.21 no.6
    • /
    • pp.141-145
    • /
    • 2021
  • Various memory architectures have been developed, and various compiler optimization techniques have been studied to use them efficiently. In particular, since memory is a major component that determines performance in mobile computing devices, many optimization techniques have been developed to support it. Recently, much research has been conducted on hybrid memory architectures, and compiler techniques are being studied to support them. Existing compiler optimizations can be used to meet the minimum performance and low-power constraints set by market requirements, but reference data for estimating the low-power effect and the degree of performance improvement obtained with these optimizations are not yet properly provided. This study provides experimental results for existing compiler techniques as a reference for the development of multibank memory architectures.
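One classical compiler decision for multibank memories is assigning arrays to banks so that data accessed together ends up in different banks. The sketch below is a simple, hypothetical greedy heuristic illustrating that idea; it is not the optimization studied in the paper, and the example arrays and conflicts are made up.

```python
def assign_arrays_to_banks(arrays, conflicts, num_banks):
    """Greedy bank assignment: arrays accessed in the same loop iteration
    ('conflicts') are placed in different banks whenever possible, so the
    accesses can proceed in parallel and idle banks can be powered down."""
    assignment = {}
    # Handle the most conflict-heavy arrays first.
    for arr in sorted(arrays, key=lambda a: -sum(a in c for c in conflicts)):
        used = {assignment[other]
                for c in conflicts if arr in c
                for other in c if other in assignment and other != arr}
        free = [b for b in range(num_banks) if b not in used]
        assignment[arr] = free[0] if free else min(
            range(num_banks),
            key=lambda b: list(assignment.values()).count(b))
    return assignment

# Hypothetical example: A and B are read in the same loop, as are B and C.
arrays = ["A", "B", "C"]
conflicts = [("A", "B"), ("B", "C")]
print(assign_arrays_to_banks(arrays, conflicts, num_banks=2))
```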

A design of viterbi decoder for forward error correction (오류 정정을 위한 Viterbi 디코더 설계)

  • 박화세;김은원
    • The Journal of Information Technology
    • /
    • v.3 no.1
    • /
    • pp.29-36
    • /
    • 2000
  • The Viterbi decoder is a maximum-likelihood decoding method for convolutional codes used in satellite and mobile communications. In this paper, a Viterbi decoder with constraint length K=7, 3-bit soft decision, and traceback depth ${\Gamma}=96$ is implemented in VHDL. The hardware size of the designed decoder is reduced by a 4-bit pre-traceback in the survivor memory.
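To show the algorithm the hardware implements, here is a deliberately small hard-decision Viterbi decoder in Python for a constraint-length-3, rate-1/2 code with the common (7, 5) generators; it is an assumed, simplified illustration, not the paper's K=7 soft-decision VHDL design, and the test vector is simply the (7, 5) encoding of the message 1 0 1 1 0 0.

```python
def viterbi_decode(received, g1=0b111, g2=0b101, k=3):
    """Hard-decision Viterbi decoder for a rate-1/2 convolutional code with
    constraint length k. 'received' is a flat list of channel bits (two per
    info bit); the message is assumed to end with k-1 zero tail bits, so
    traceback starts from the all-zero state."""
    num_states = 1 << (k - 1)
    INF = float("inf")
    metrics = [0.0] + [INF] * (num_states - 1)   # encoder starts in state 0
    paths = [[] for _ in range(num_states)]

    def step(state, bit):
        # Mirrors the encoder: newest bit enters at the top of the register.
        reg = (bit << (k - 1)) | state
        o1 = bin(reg & g1).count("1") % 2
        o2 = bin(reg & g2).count("1") % 2
        return reg >> 1, (o1, o2)

    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metrics = [INF] * num_states
        new_paths = [None] * num_states
        for state in range(num_states):
            if metrics[state] == INF:
                continue
            for bit in (0, 1):
                nxt, expected = step(state, bit)
                cost = metrics[state] + sum(a != b for a, b in zip(expected, r))
                if cost < new_metrics[nxt]:       # keep only the survivor
                    new_metrics[nxt] = cost
                    new_paths[nxt] = paths[state] + [bit]
        metrics, paths = new_metrics, new_paths
    return paths[0]                               # terminated code: state 0

# Decode the (7,5)-encoded message 1 0 1 1 0 0 with one channel bit flipped.
rx = [1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1]
rx[3] ^= 1                                        # inject a single bit error
print(viterbi_decode(rx))                         # -> [1, 0, 1, 1, 0, 0]
```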

Integer Programming-based Local Search Technique for Linear Constraint Satisfaction Optimization Problem (선형 제약 만족 최적화 문제를 위한 정수계획법 기반 지역 탐색 기법)

  • Hwang, Jun-Ha;Kim, Sung-Young
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.9
    • /
    • pp.47-55
    • /
    • 2010
  • The linear constraint satisfaction optimization problem is a kind of combinatorial optimization problem involving a linearly expressed objective function and complex constraints. Integer programming is known to be very effective for such problems but requires a great deal of time and memory before it finds even a suboptimal solution. In this paper, we propose a method that improves search performance by integrating local search with integer programming. Simple hill-climbing search, the simplest form of local search, is used to solve the given problem, and integer programming is applied to generate neighbor solutions. In addition, constraint programming is used to generate an initial solution. Experimental results on N-Queens maximization problems confirm that the proposed method produces far better solutions than the other search methods compared.
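The overall structure of hill climbing with exactly optimized neighborhoods can be sketched briefly. In the hypothetical sketch below, exhaustive enumeration over a small variable subset stands in for the integer programming step described in the abstract, and the tiny 0/1 problem at the end is a made-up example, not the paper's N-Queens benchmark.

```python
import itertools
import random

def feasible(x, A, b):
    """Check the linear constraints A x <= b for a 0/1 vector x."""
    return all(sum(a_i * x_i for a_i, x_i in zip(row, x)) <= b_j
               for row, b_j in zip(A, b))

def hill_climb(c, A, b, subset_size=3, iterations=200, seed=0):
    """Hill climbing where each neighbor is produced by re-optimizing a
    small subset of variables exactly (exhaustive enumeration here stands
    in for an integer programming call over the restricted subproblem)."""
    rng = random.Random(seed)
    n = len(c)
    x = [0] * n                               # trivially feasible start
    for _ in range(iterations):
        subset = rng.sample(range(n), min(subset_size, n))
        best_x = x
        best_val = sum(ci * xi for ci, xi in zip(c, x))
        # Try every assignment of the chosen subset, keep the best feasible one.
        for values in itertools.product((0, 1), repeat=len(subset)):
            cand = list(x)
            for idx, v in zip(subset, values):
                cand[idx] = v
            if feasible(cand, A, b):
                val = sum(ci * xi for ci, xi in zip(c, cand))
                if val > best_val:
                    best_x, best_val = cand, val
        x = best_x                            # accept only improving neighbors
    return x

# Tiny example: maximize x1 + 2*x2 + 3*x3 subject to x1 + x2 + x3 <= 2.
print(hill_climb(c=[1, 2, 3], A=[[1, 1, 1]], b=[2]))   # -> [0, 1, 1]
```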