• Title/Summary/Keyword: Memory reduction

A Triple-Band Transceiver Module for 2.3/2.5/3.5 GHz Mobile WiMAX Applications

  • Jang, Yeon-Su;Kang, Sung-Chan;Kim, Young-Eil;Lee, Jong-Ryul;Yi, Jae-Hoon;Chun, Kuk-Jin
• JSTS: Journal of Semiconductor Technology and Science / v.11 no.4 / pp.295-301 / 2011
  • A triple-band transceiver module for 2.3/2.5/3.5 GHz mobile WiMAX (IEEE 802.16e) applications is introduced. The suggested transceiver module consists of an RFIC, a reconfigurable/multi-resonance MIMO antenna, an embedded PCB, a mobile WiMAX baseband, memory, and a channel-selection front-end module. The RFIC is fabricated in a 0.13 μm RF CMOS process and achieves a 3.5 dB receiver noise figure (NF) and 1 dBm maximum transmitter output power in a 68-pin QFN package occupying 8 × 8 mm². The module area is reduced by using the embedded PCB, which makes the transceiver module 9% smaller than one built on a normal PCB. The developed triple-band mobile WiMAX transceiver module is verified by performing the radio conformance test (RCT) and by measuring the carrier-to-interference-plus-noise ratio (CINR) and received signal strength indication (RSSI) in each of the 2.3/2.5/3.5 GHz bands.

A Design of Programmable Fragment Shader with Reduction of Memory Transfer Time (메모리 전송 효율을 개선한 programmable Fragment 쉐이더 설계)

  • Park, Tae-Ryoung
• Journal of the Korea Institute of Information and Communication Engineering / v.14 no.12 / pp.2675-2680 / 2010
  • Computation steps for 3D graphics processing consist of two stages: a fixed-operation stage and a stage that requires programmability. Exploiting this characteristic of the 3D pipeline, a hybrid structure that combines fixed-function graphics hardware with instruction-based programmable hardware can handle graphics processing more efficiently. In this paper, a fragment shader is designed under this hybrid structure, and it supports OpenGL ES 2.0. The internal interface is optimized to reduce the delay of the entire pipeline that may be caused by data I/O between the fixed hardware and the shader. The internal register group of the shader is designed with an interleaved structure to improve register space utilization and processing speed.
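
Although the paper's design is hardware, the division it exploits can be pictured in software: fixed stages are hard-wired functions, while the fragment stage is a pluggable program. The following Python sketch is purely illustrative; the toy rasterizer, names, and data are assumptions, not the paper's design.

```python
from typing import Callable, NamedTuple

# Illustrative sketch of the hybrid structure: fixed stages are hard-wired
# functions, while the fragment stage is a pluggable "shader slot".

class Fragment(NamedTuple):
    x: int
    y: int
    color: tuple  # (r, g, b), each in [0, 1]

def rasterize(width: int, height: int) -> list:
    """Fixed-function stage: emit one flat-shaded fragment per pixel."""
    return [Fragment(x, y, (0.5, 0.5, 0.5))
            for y in range(height) for x in range(width)]

def blend(framebuffer: dict, frag: Fragment) -> None:
    """Fixed-function stage: write the shaded fragment to the framebuffer."""
    framebuffer[(frag.x, frag.y)] = frag.color

def run_pipeline(shader: Callable[[Fragment], Fragment], w: int, h: int) -> dict:
    fb: dict = {}
    for frag in rasterize(w, h):   # fixed stage
        frag = shader(frag)        # programmable stage (the "shader slot")
        blend(fb, frag)            # fixed stage
    return fb

# A user-supplied fragment program: darken with distance from the origin.
def my_shader(frag: Fragment) -> Fragment:
    k = 1.0 / (1.0 + 0.2 * (frag.x + frag.y))
    return frag._replace(color=tuple(c * k for c in frag.color))

fb = run_pipeline(my_shader, 4, 4)
print(fb[(3, 3)])
```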

Design and Implementation of the Java Card API for Efficient File Management (효율적 파일 관리를 위한 자바카드 API 설계 및 구현)

  • Song Young-Sang;Shin In-Chul
• The KIPS Transactions: Part C / v.13C no.3 s.106 / pp.275-282 / 2006
  • There are several independent applets supporting various applications on a Java Card. Each applet processes and manages its own data without regard to other applets and their data. In this paper, we propose a file system API to support efficient file management on the Java Card. We designed and implemented the Java Card file system API using the basic API and by referring to the file system defined in the ISO 7816-4 smart card standard. By using the proposed file system API, duplicated code in each applet can be replaced with a short method call, so memory usage and processing time are reduced, and reductions in development time and cost are expected.
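
The paper's API is implemented in Java Card Java, but the idea of replacing per-applet file-handling code with short calls into one shared, ISO 7816-4-style file system can be sketched as follows; the class and method names are illustrative stand-ins, not the authors' API.

```python
# Hypothetical sketch of a shared ISO 7816-4-style file API; names are
# illustrative, not the paper's. Each applet calls this one file system
# instead of carrying its own copy of the same file-handling code.

class ElementaryFile:
    """A transparent EF holding raw bytes, addressed by a 2-byte file ID."""
    def __init__(self, fid: int, size: int):
        self.fid = fid
        self.data = bytearray(size)

class FileSystem:
    def __init__(self):
        self._files = {}

    def create_ef(self, fid: int, size: int) -> None:
        if fid in self._files:
            raise ValueError(f"file {fid:#06x} already exists")
        self._files[fid] = ElementaryFile(fid, size)

    def select(self, fid: int) -> ElementaryFile:
        # Mirrors the SELECT FILE operation of ISO 7816-4.
        try:
            return self._files[fid]
        except KeyError:
            raise FileNotFoundError(f"file {fid:#06x} not found")

    def read_binary(self, fid: int, offset: int, length: int) -> bytes:
        # Mirrors READ BINARY.
        ef = self.select(fid)
        return bytes(ef.data[offset:offset + length])

    def update_binary(self, fid: int, offset: int, payload: bytes) -> None:
        # Mirrors UPDATE BINARY.
        ef = self.select(fid)
        ef.data[offset:offset + len(payload)] = payload

# Each applet now issues one short call instead of duplicating file code:
fs = FileSystem()
fs.create_ef(0x3F01, 64)
fs.update_binary(0x3F01, 0, b"applet-shared record")
print(fs.read_binary(0x3F01, 0, 20))
```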

Scalar Multiplication on Elliptic Curves by Frobenius Expansions

  • Cheon, Jung-Hee;Park, Sang-Joon;Park, Choon-Sik;Hahn, Sang-Geun
• ETRI Journal / v.21 no.1 / pp.28-39 / 1999
  • Koblitz suggested using "anomalous" elliptic curves defined over GF(2), which are non-supersingular and allow for efficient multiplication of a point by an integer. For these curves, Meier and Staffelbach gave a method to find a polynomial of the Frobenius map corresponding to a given multiplier. Müller generalized their method to arbitrary non-supersingular elliptic curves defined over a small field of characteristic 2. In this paper, we propose an algorithm to speed up scalar multiplication on an elliptic curve defined over a small field. The proposed algorithm uses the same technique as Müller's to obtain an expansion by the Frobenius map, but its expansion length is half of Müller's due to the reduction step (Algorithm 1). It also uses a more efficient algorithm (Algorithm 3) to perform multiplication using the Frobenius expansion. Consequently, the proposed algorithm is two times faster than Müller's. Moreover, it can be applied to an elliptic curve defined over a finite field with odd characteristic, and it does not require any precomputation or additional memory.
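
The underlying technique can be made concrete. A Frobenius (tau-adic) expansion rewrites the scalar k as a sum of small digits times powers of the Frobenius map tau, so scalar multiplication needs only cheap Frobenius applications (coordinate squarings) plus a few point additions. The sketch below follows the general Koblitz/Solinas recipe for a Koblitz curve, where tau satisfies tau^2 - mu*tau + 2 = 0 with mu = ±1; it is not the paper's Algorithm 1 or 3, and the point operations are assumed callables supplied by the curve implementation.

```python
# Sketch of Frobenius (tau-adic) scalar multiplication for a Koblitz curve,
# where tau satisfies tau^2 - mu*tau + 2 = 0 with mu in {+1, -1}. This is
# the general Koblitz/Solinas technique, not the paper's Algorithms 1 and 3.

def tau_expansion(k: int, mu: int) -> list:
    """Expand k = sum d_i * tau^i with digits d_i in {-1, 0, 1}."""
    r0, r1 = k, 0                          # represents r0 + r1*tau
    digits = []
    while r0 != 0 or r1 != 0:
        if r0 % 2:                         # not divisible by tau
            d = 2 - ((r0 - 2 * r1) % 4)    # NAF-like digit in {-1, +1}
            r0 -= d
        else:
            d = 0
        digits.append(d)
        # Divide (r0 + r1*tau) by tau, using tau * conj(tau) = 2:
        r0, r1 = r1 + mu * (r0 // 2), -(r0 // 2)
    return digits

def tau_multiply(k, P, mu, frobenius, point_add, point_neg, infinity):
    """Compute k*P by Horner evaluation of the tau-adic expansion.

    frobenius(P) squares both coordinates (and must map infinity to
    infinity); point_add, point_neg, and `infinity` come from the curve
    group implementation and are assumed here.
    """
    Q = infinity
    for d in reversed(tau_expansion(k, mu)):
        Q = frobenius(Q)
        if d == 1:
            Q = point_add(Q, P)
        elif d == -1:
            Q = point_add(Q, point_neg(P))
    return Q
```

As a sanity check on the expansion alone: with mu = 1 (so tau^2 = tau - 2), `tau_expansion(9, 1)` yields digits [1, 0, 0, -1, 0, 1], and indeed 1 - tau^3 + tau^5 evaluates to 9 under that relation.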

APPLICATION OF BACKWARD DIFFERENTIATION FORMULA TO SPATIAL REACTOR KINETICS CALCULATION WITH ADAPTIVE TIME STEP CONTROL

  • Shim, Cheon-Bo;Jung, Yeon-Sang;Yoon, Joo-Il;Joo, Han-Gyu
• Nuclear Engineering and Technology / v.43 no.6 / pp.531-546 / 2011
  • The backward differentiation formula (BDF) method is applied to three-dimensional reactor kinetics calculations for efficient yet accurate transient analysis with adaptive time step control. The coarse mesh finite difference (CMFD) formulation is used for an efficient implementation of the BDF method that does not require excessive memory to store old information from previous time steps. An iterative scheme to update the nodal coupling coefficients through higher-order local nodal solutions is established so that only the node-average fluxes of the previous five time points need to be stored. An adaptive time step control method is derived using solutions of two orders, the fifth- and fourth-order BDF solutions, whose difference provides an estimate of the solution error at the current time point. The performance of the BDF- and CMFD-based spatial kinetics calculation and the adaptive time step control scheme is examined with the NEACRP control rod ejection and rod withdrawal benchmark problems. The accuracy is first assessed by comparing the BDF-based results with those of the Crank-Nicolson method with an exponential transform. The effectiveness of the adaptive time step control is then assessed in terms of the possible reduction in computing time while producing sufficiently accurate solutions that meet the desired solution fidelity.
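
As a toy illustration of this kind of order-pair error control (not the paper's CMFD neutronics solver), the sketch below advances the scalar test equation y' = λy with constant-coefficient BDF5, estimates the local error from the BDF5/BDF4 difference, and rescales the step as dt ← 0.9·dt·(tol/err)^(1/5). Re-seeding the history from the known exact solution after a step-size change is a demo shortcut; a real code keeps and interpolates its own history.

```python
import math

# Adaptive BDF stepping on y' = lam*y (lam < 0), error-estimated from the
# difference between fifth- and fourth-order BDF solutions.

LAM, TOL = -4.0, 1e-8

# Constant-step BDF: a[0]*y_{n+1} + sum_{j>=1} a[j]*y_{n+1-j} = dt*f(y_{n+1})
BDF4 = [25/12, -4.0, 3.0, -4/3, 1/4]
BDF5 = [137/60, -5.0, 5.0, -10/3, 5/4, -1/5]

def bdf_step(coeffs, hist, dt):
    """One implicit BDF step for f(y) = LAM*y (linear: closed-form solve)."""
    rhs = -sum(a * y for a, y in zip(coeffs[1:], hist))
    return rhs / (coeffs[0] - dt * LAM)

t, dt, t_end = 0.0, 1e-3, 1.0
while t < t_end:
    # Uniform history y_n ... y_{n-4}, re-seeded from the exact solution
    # (demo shortcut so the coefficients above stay valid after dt changes).
    hist = [math.exp(LAM * (t - j * dt)) for j in range(5)]
    y5 = bdf_step(BDF5, hist, dt)
    y4 = bdf_step(BDF4, hist[:4], dt)
    err = abs(y5 - y4)                     # local error estimate (order pair)
    if err < TOL:                          # accept the step
        t += dt
    new_dt = 0.9 * dt * (TOL / max(err, 1e-16)) ** 0.2  # order-5 controller
    dt = min(new_dt, t_end - t) if t_end > t else new_dt
print(f"reached t={t:.2f}; BDF5 vs exact: {y5:.6e} / {math.exp(LAM * t):.6e}")
```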

Finite Element Modeling for Static and Dynamic Analysis of Structures with Bolted Joints (볼트결합부를 포함한 구조물의 정적 및 동적 해석을 위한 유한요소 모델링)

  • Gwon, Yeong-Du;Gu, Nam-Seo;Kim, Seong-Yun;Jo, Min-Ho
• Transactions of the Korean Society of Mechanical Engineers A / v.26 no.4 / pp.667-676 / 2002
  • Many studies on finite element modeling of bolted joints have been conducted, but structures with bolted joints are complicated in shape, and it is difficult to determine their characteristics according to the joint condition. Usually, experimental methods have been used for bolted joint analysis, so a reliable and practical finite element modeling technique for structures with bolted joints is very important for engineers in industry. In this study, three kinds of model are presented: a detailed model, a practical model, and a simple model. The detailed model uses 3-D solid elements and gap elements; the practical model uses shell elements (for the bolt head) and beam elements (for the bolt body); and the simple model simplifies the practical model by omitting the gap elements. Among these models, the simple model has the fewest degrees of freedom and shows a memory reduction of 59% compared with the detailed model.

Design and Implementation of Incremental Learning Technology for Big Data Mining

  • Min, Byung-Won;Oh, Yong-Sun
• International Journal of Contents / v.15 no.3 / pp.32-38 / 2019
  • We usually suffer from difficulties in treating or managing Big Data generated from various digital media and/or sensors using traditional mining techniques. In addition, when new data are continuously accumulated in an ever-growing body of text, problems such as memory shortages and the burden of re-learning arise, because the entire data set, including data already analyzed and collected, is ineffectively analyzed again. In this paper, we propose a general-purpose classifier and its structure to solve these problems. We depart from current feature-reduction methods and introduce a new scheme that adopts only the changed elements when new features are partially accumulated in this free-style learning environment. The incremental learning module, built from a gradually progressive formation, learns only the changed parts of the data without any re-processing of the current accumulations, whereas traditional methods re-learn the total data whenever data are added or changed. Additionally, users can freely merge new data with previous data through the resource management procedure whenever re-learning is needed. At the end of this paper, we confirm the good performance of this method in data processing in a Big Data environment through an analysis of its learning efficiency. Comparing this algorithm with NB and SVM, we achieve an accuracy of approximately 95% in all three models. We expect our method to be a viable substitute for large computing systems in Big Data analysis, offering high performance and accuracy in a PC cluster environment.
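
The paper's incremental module is its own design, but the core idea, updating a classifier with only the newly arrived batch instead of re-learning the total data, can be sketched with scikit-learn's partial_fit interface and a stateless hashing vectorizer (so the feature space stays fixed across batches); the texts and labels below are placeholders.

```python
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import MultinomialNB

# Sketch of incremental text classification: each new batch updates the
# model in place; previously seen data is never re-processed. partial_fit
# is a stand-in for the paper's own incremental learning module.

vectorizer = HashingVectorizer(n_features=2**18, alternate_sign=False)
clf = MultinomialNB()
classes = np.array([0, 1])            # all labels must be declared up front

batches = [                           # placeholder data stream
    (["cheap pills now", "meeting at noon"], [1, 0]),
    (["free offer click", "quarterly report draft"], [1, 0]),
]

for texts, labels in batches:
    X = vectorizer.transform(texts)   # stateless: no re-fit over old data
    clf.partial_fit(X, labels, classes=classes)  # learn only the new batch

print(clf.predict(vectorizer.transform(["free cheap offer"])))
```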

Density Adaptive Grid-based k-Nearest Neighbor Regression Model for Large Dataset (대용량 자료에 대한 밀도 적응 격자 기반의 k-NN 회귀 모형)

  • Liu, Yiqi;Uk, Jung
• Journal of Korean Society for Quality Management / v.49 no.2 / pp.201-211 / 2021
  • Purpose: This paper proposes a density adaptive grid algorithm for the k-NN regression model to reduce the computation time for large datasets without significant loss of prediction accuracy. Methods: The proposed method utilizes the concept of a grid with centroids to reduce the number of reference data points, so the required computation time is greatly reduced. Since the grid generation process is based on quantiles of the original variables, the proposed method can fully reflect the density information of the original reference data set. Results: Using five real-life datasets, the proposed k-NN regression model is compared with the original k-NN regression model. The results show that the proposed density adaptive grid-based k-NN regression model is superior to the original k-NN regression in terms of data reduction ratio and time efficiency ratio, and provides a similar prediction error if an appropriate number of grids is selected. Conclusion: The proposed density adaptive grid algorithm for the k-NN regression model is a simple and effective model that offers faster execution and lower memory requirements during the testing phase while avoiding a large loss of prediction accuracy.
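
A minimal version of the method's two ingredients, a quantile-based grid that collapses reference points into cell centroids and a k-NN regression run over the reduced centroid set, might look as follows; the parameter values and toy data are illustrative, not the authors' implementation.

```python
import numpy as np

# Sketch of quantile-grid k-NN regression: reference points are replaced
# by per-cell centroids (with mean responses), and k-NN prediction runs
# over the much smaller centroid set.

def build_grid(X, y, n_bins=10):
    """Collapse (X, y) onto quantile-grid cell centroids."""
    # Quantile-based edges per variable, so the grid adapts to data density.
    edges = [np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1)[1:-1])
             for j in range(X.shape[1])]
    cell_ids = np.stack([np.searchsorted(edges[j], X[:, j])
                         for j in range(X.shape[1])], axis=1)
    centroids, responses = [], []
    for cell in np.unique(cell_ids, axis=0):
        mask = np.all(cell_ids == cell, axis=1)
        centroids.append(X[mask].mean(axis=0))   # cell centroid
        responses.append(y[mask].mean())         # cell mean response
    return np.array(centroids), np.array(responses)

def knn_predict(x, centroids, responses, k=5):
    """Plain k-NN regression over the reduced centroid set."""
    dist = np.linalg.norm(centroids - x, axis=1)
    nearest = np.argsort(dist)[:k]
    return responses[nearest].mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 3))
y = X[:, 0] ** 2 + X[:, 1] + rng.normal(scale=0.1, size=10_000)

C, r = build_grid(X, y, n_bins=8)
print(f"{len(X)} points reduced to {len(C)} centroids")
print("prediction at origin:", knn_predict(np.zeros(3), C, r))
```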

Improving safety performance of construction workers through cognitive function training

  • Se-jong Ahn;Ho-sang Moon;Sung-Taek Chung
• International Journal of Advanced Smart Convergence / v.12 no.2 / pp.159-166 / 2023
  • Due to the aging workforce in the South Korean construction industry, the accident rate has been increasing. The cognitive abilities of older workers are closely related to both safety incidents and labor productivity. There is therefore a need to improve cognitive abilities through personalized training based on cognitive assessment results, using cognitive training content, so that workers can perform safely in labor-intensive environments. The provided cognitive training content covers concentration, memory, orientation, attention, and executive functions. Difficulty levels were applied to each content item to enhance user engagement and interest; to stimulate interest and encourage active participation, the difficulty level was adjusted automatically based on feedback from the MMSE-DS results and content measurement data. Based on the accumulated data, individual training scenarios were configured differently to intensively improve insufficient cognitive skills, and cognitive training programs will be developed to reduce safety accidents at construction sites using the measured data and further research. Through such simple cognitive training, a reduction in accidents among the aging construction workforce is expected, leading to a decrease in the social costs associated with construction delays caused by accidents.

Structural reliability analysis using temporal deep learning-based model and importance sampling

  • Nguyen, Truong-Thang;Dang, Viet-Hung
• Structural Engineering and Mechanics / v.84 no.3 / pp.323-335 / 2022
  • The main idea of the framework is to seamlessly combine a reasonably accurate and fast surrogate model with an importance sampling strategy. Developing a surrogate model for predicting structures' dynamic responses is challenging because it involves high-dimensional inputs and outputs. For this purpose, a novel surrogate model is designed based on cutting-edge deep learning architectures specialized for capturing temporal relationships within time-series data, namely the Long Short-Term Memory layer and the Transformer layer. After being properly trained, the surrogate model can be used in place of the finite element method to evaluate a structure's responses without requiring any specialized software. On the other hand, importance sampling is adopted to reduce the number of calculations required when computing the failure probability, by drawing more relevant samples near the critical areas. Thanks to the portability of the trained surrogate model, one can integrate it with importance sampling in a straightforward fashion, forming an efficient framework called TTIS, which offers a double advantage: fewer calculations are needed, and the computational time of each calculation is significantly reduced. The proposed approach's applicability and efficiency are demonstrated through three examples of increasing complexity involving a 1D beam, a 2D frame, and a 3D building structure. The results show that, compared with conventional Monte Carlo simulation, the proposed method provides highly similar reliability results with a reduction of up to four orders of magnitude in time complexity.
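
The importance sampling side of the framework is standard and easy to sketch: estimate P_f = P(g(X) < 0) by drawing from a proposal density shifted toward the failure region and reweighting by the likelihood ratio. In the sketch below, the limit-state function g and the shift vector are placeholders standing in for the trained surrogate and a design-point estimate.

```python
import math
import numpy as np

# Minimal importance sampling for P_f = P(g(X) < 0) with X ~ N(0, I).
# g() is a placeholder limit state standing in for the trained surrogate,
# and `shift` plays the role of an (assumed) design-point estimate.

rng = np.random.default_rng(42)
dim, n_samples = 2, 20_000

def g(x):
    """Toy limit state: failure when x1 + x2 > 5 (i.e., g < 0)."""
    return 5.0 - x[:, 0] - x[:, 1]

shift = np.array([2.5, 2.5])          # proposal mean pushed toward failure

# Sample from the shifted proposal q = N(shift, I) instead of p = N(0, I).
x = rng.normal(size=(n_samples, dim)) + shift

# Likelihood ratio p(x)/q(x) for two unit-covariance Gaussians.
log_w = -0.5 * np.sum(x**2, axis=1) + 0.5 * np.sum((x - shift) ** 2, axis=1)
pf = np.mean((g(x) < 0) * np.exp(log_w))   # weighted indicator average

print(f"IS estimate of P_f : {pf:.3e}")
print(f"exact (toy case)   : {0.5 * math.erfc(2.5):.3e}")  # x1+x2 ~ N(0, 2)
```

Crude Monte Carlo would need millions of samples to see this roughly 2e-4 event; the shifted proposal concentrates the same budget where failures actually occur, which is exactly the saving the paper compounds with its fast surrogate.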