• Title/Summary/Keyword: a sparse matrix

Search Results: 228

Novel construction of quasi-cyclic low-density parity-check codes with variable code rates for cloud data storage systems

  • Vairaperumal Bhuvaneshwari;Chandrapragasam Tharini
    • ETRI Journal / v.45 no.3 / pp.404-417 / 2023
  • This paper proposes a novel method for constructing quasi-cyclic low-density parity-check (QC-LDPC) codes of medium to high code rates that can be applied in cloud data storage systems, which require strong error correction capabilities. The novelty of this method lies in the construction of sparse base matrices with a girth greater than 4, which can then be expanded with a lift factor to produce high-code-rate QC-LDPC codes. Investigations revealed that the proposed large-sized QC-LDPC codes with high code rates displayed low encoding complexity and provided a lower bit error rate (BER) of 10⁻¹⁰ at 3.5 dB Eb/N0 than conventional LDPC codes, which showed a BER of 10⁻⁷ at 3 dB Eb/N0. Subsequently, the proposed QC-LDPC code was implemented in a software-defined radio using the NI USRP 2920 hardware platform, achieving a BER of 10⁻⁶ at 4.2 dB Eb/N0. Finally, the encoding-decoding speeds and storage overhead of the proposed codes were investigated when applied to a cloud data storage system (GCP). Our results revealed that the proposed codes required much less time for encoding and decoding (of 10 MB data files) and produced less storage overhead than conventional LDPC and Reed-Solomon codes.
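
For readers unfamiliar with the lift-factor expansion this abstract refers to, the sketch below shows the standard circulant expansion used for QC-LDPC codes: each non-negative base-matrix entry becomes a Z×Z identity matrix cyclically shifted by that amount, and each -1 entry becomes a zero block. The base matrix and shift values here are illustrative placeholders, not the construction from the paper.

```python
import numpy as np

def expand_qc_ldpc(base, Z):
    """Expand a QC-LDPC base matrix into a full parity-check matrix.

    Each entry s >= 0 becomes a Z x Z identity matrix cyclically
    shifted by s columns; an entry of -1 becomes a Z x Z zero block.
    """
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=np.uint8)
    I = np.eye(Z, dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            s = base[i, j]
            if s >= 0:
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, s, axis=1)
    return H

# Illustrative 2 x 4 base matrix (shift values chosen arbitrarily),
# expanded with lift factor Z = 4.
base = np.array([[0, 1, -1, 2],
                 [2, -1, 3, 0]])
H = expand_qc_ldpc(base, Z=4)
print(H.shape)  # (8, 16)
```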

A Wavefront Array Processor Utilizing a Recursion Equation for ME/MC in the Frequency Domain (주파수 영역에서의 움직임 예측 및 보상을 위한 재귀 방정식을 이용한 웨이브프런트 어레이 프로세서)

  • Lee, Joo-Heung;Ryu, Chul
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.10C / pp.1000-1010 / 2006
  • This paper proposes a new architecture for DCT-based motion estimation and compensation. Previous methods do not take sufficient advantage of the sparseness of 2-D DCT coefficients to reduce execution time. We first derive a recursion equation to perform DCT-domain motion estimation more efficiently; we then use it to develop a wavefront array processor (WAP) consisting of processing elements. In addition, we show that the recursion equation enables motion-predicted images to be generated at different frequency bands, for example, from images with only low-frequency components to images with both low- and high-frequency components. The wavefront array processor can be reconfigured for different motion estimation algorithms, such as logarithmic search and three-step search, without architectural modifications. These properties can be effectively used to reduce the energy required for video encoding and decoding. The proposed WAP architecture achieves a significant reduction in computational complexity and processing time. It is also shown that, for practical video coding applications, the motion estimation algorithm in the transform domain using the SAD (sum of absolute differences) matching criterion maximizes PSNR and the compression ratio when compared to the motion estimation algorithm in the spatial domain using either SAD or SSD.
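
As a point of reference for the SAD matching criterion the abstract compares against, here is a minimal spatial-domain full-search block matcher; the block size, search range, and function name are illustrative choices, not the paper's WAP design.

```python
import numpy as np

def sad_block_search(ref, cur, bx, by, B=8, R=7):
    """Exhaustive SAD block matching: find the motion vector for the
    B x B block of `cur` at (by, bx) within a +/-R window in `ref`."""
    block = cur[by:by+B, bx:bx+B].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-R, R + 1):
        for dx in range(-R, R + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + B > ref.shape[0] or x + B > ref.shape[1]:
                continue
            cand = ref[y:y+B, x:x+B].astype(np.int32)
            sad = np.abs(block - cand).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best

# Toy frames: the current frame is the reference shifted by (2, -3).
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = np.roll(ref, (2, -3), axis=(0, 1))
print(sad_block_search(ref, cur, bx=24, by=24))  # ((-2, 3), 0)
```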

Compressed Sensing Techniques for Millimeter Wave Channel Estimation (밀리미터파 채널 추정을 위한 압축 센싱 기법)

  • Han, Yonghee;Lee, Jungwoo
    • The Journal of Korean Institute of Communications and Information Sciences / v.42 no.1 / pp.25-30 / 2017
  • Millimeter wave (mmWave) bands are expected to improve the data rates of 5G systems due to the wide available bandwidth. While severe path loss in those bands has impeded their utilization, the short wavelength enables a large number of antennas to be packed in a compact form, which can mitigate the path loss. However, estimating the channel with a conventional scheme requires a huge training overhead; hence, an efficient estimation scheme operating with a small overhead needs to be developed. The sparsity of mmWave channels, caused by the limited number of scatterers, can be exploited to reduce the overhead by utilizing compressed sensing. In this paper, we introduce compressed sensing techniques for mmWave channel estimation. First, we formulate wideband channel estimation as a sparse recovery problem. We also analyze the characteristics of a random measurement matrix constructed using quantized phase shifters in terms of mutual incoherence.
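
To make the sparse recovery formulation concrete, the sketch below recovers a sparse vector from measurements taken with a random quantized-phase matrix, using orthogonal matching pursuit. OMP is one standard compressed sensing solver, not necessarily the algorithm used in the paper, and all dimensions are illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x."""
    m, n = A.shape
    residual, support = y.copy(), []
    for _ in range(k):
        j = np.argmax(np.abs(A.conj().T @ residual))  # most correlated column
        support.append(j)
        As = A[:, support]
        x_s, *_ = np.linalg.lstsq(As, y, rcond=None)  # refit on the support
        residual = y - As @ x_s
    x = np.zeros(n, dtype=A.dtype)
    x[support] = x_s
    return x

# Measurement matrix from 2-bit quantized phase shifters, sensing a
# 3-sparse vector (a stand-in for a sparse angular-domain channel).
rng = np.random.default_rng(0)
n, m, k = 64, 20, 3
phases = rng.integers(0, 4, size=(m, n)) * (np.pi / 2)
A = np.exp(1j * phases) / np.sqrt(m)
x_true = np.zeros(n, dtype=complex)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
x_hat = omp(A, A @ x_true, k)
print(np.linalg.norm(x_hat - x_true))  # ~0
```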

Analysis of PM2.5 Impact and Human Exposure from Worst-Case of Mt. Baekdu Volcanic Eruption (백두산 분화 Worst-case로 인한 우리나라 초미세먼지(PM2.5) 영향분석 및 노출평가)

  • Park, Jae Eun;Kim, Hyerim;Sunwoo, Young
    • Korean Journal of Remote Sensing / v.36 no.5_4 / pp.1267-1276 / 2020
  • To quantitatively predict the impacts of a large-scale volcanic eruption of Mt. Baekdu on air quality and damage around the Korean Peninsula, a three-dimensional chemistry-transport modeling system (Weather Research & Forecasting - Sparse Matrix Operator Kernel Emissions - Community Multiscale Air Quality, WRF-SMOKE-CMAQ) was adopted. A worst-case meteorology scenario was selected to estimate the direct impact on Korea. This study applied the typical worst-case scenarios likely to cause significant damage to Korea among worst-case volcanic eruptions of Mt. Baekdu in the past decade (2005~2014) and assumed a massive VEI 4 volcanic eruption on May 16, 2012, to analyze the PM2.5 concentration caused by the eruption. The effects on air quality in each region (cities, counties, and boroughs) were estimated, and vulnerable areas were derived by conducting an exposure assessment reflecting vulnerable groups. Moreover, the effects on cities, counties, and boroughs were analyzed at a high-resolution scale (9 km × 9 km) to derive vulnerable areas within the regions. The analysis of the typical worst-case eruptions revealed a spatial discrepancy among the areas with high PM2.5 concentrations, those with high population density, and those where vulnerable groups are concentrated. The peak PM2.5 concentration was about 24,547 ㎍/㎥, which is estimated to be a more serious situation than the eruption of Mt. St. Helens in 1980, known to have released 540 million tons of volcanic ash. Paju, Gimpo, Goyang, Ganghwa, Sancheong, and Hadong showed high PM2.5 concentrations, and the exposure assessment identified Paju as the most vulnerable area. While areas estimated to have high concentrations of air pollutants are important, it is also necessary to develop plans and measures considering densely populated areas or areas with high concentrations of susceptible populations or vulnerable groups. It is also necessary to establish measures for each vulnerable area by selecting high-concentration areas within cities, counties, and boroughs, rather than establishing uniform measures for all regions. This study will provide a foundation for developing standards for disaster declaration and preemptive response systems for volcanic eruptions.

3D Modeling and Inversion of Magnetic Anomalies (자력이상 3차원 모델링 및 역산)

  • Cho, In-Ky;Kang, Hye-Jin;Lee, Keun-Soo;Ko, Kwang-Beom;Kim, Jong-Nam;You, Young-June;Han, Kyeong-Soo;Shin, Hong-Jun
    • Geophysics and Geophysical Exploration / v.16 no.3 / pp.119-130 / 2013
  • We developed a method for inverting magnetic data to recover 3D susceptibility models. The major difficulties in the inversion of potential-field data are non-uniqueness and the vast computing time. The number of data being insufficient compared with the number of inversion blocks intensifies the non-uniqueness problem; furthermore, magnetic data have inherently poor depth resolution. To overcome the non-uniqueness problem, we propose a resolution-based model constraint that imposes a large penalty on model parameters with good resolution and a small penalty on model parameters with poor resolution. Using this model constraint, model parameters with poor resolution can be effectively resolved. Moreover, the wavelet transform and parallel solving were introduced to reduce the computing time. Through the wavelet transform, the large system matrix was transformed into a sparse matrix and solved by a parallel linear equation solver. This procedure enormously reduces the computing time of the 3D inversion of magnetic data. The developed inversion algorithm is applied to the inversion of synthetic data for typical models of magnetic anomalies and of real airborne data obtained in the Geumsan area of Korea.
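
The wavelet-compression step the abstract describes can be illustrated as follows: transform a dense, smooth kernel matrix with an orthonormal Haar transform, threshold the small coefficients, and store the result in sparse form. The kernel, threshold, and single-level transform below are simplified stand-ins for the paper's actual scheme.

```python
import numpy as np
from scipy.sparse import csr_matrix

def haar_matrix(n):
    """Orthonormal single-level Haar analysis matrix for even n."""
    H = np.zeros((n, n))
    for i in range(n // 2):
        H[i, 2*i] = H[i, 2*i + 1] = 1 / np.sqrt(2)                 # averages
        H[n//2 + i, 2*i] = 1 / np.sqrt(2)                          # details
        H[n//2 + i, 2*i + 1] = -1 / np.sqrt(2)
    return H

# Dense, smooth "system matrix" standing in for a magnetic kernel.
n = 256
x = np.linspace(0, 1, n)
A = 1.0 / (1.0 + 50.0 * np.abs(x[:, None] - x[None, :]))**1.5

W = haar_matrix(n)
B = W @ A @ W.T                                 # 2-D Haar transform
B[np.abs(B) < 1e-3 * np.abs(B).max()] = 0.0     # drop small coefficients
B_sparse = csr_matrix(B)                        # store and solve in sparse form
print(f"nonzeros kept: {B_sparse.nnz / n**2:.1%}")
```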

A MIMO LTE Precoding Codebook Based on Fast Diagonal Weighted Matrices (고속 대각 하중 행렬을 이용한 MIMO LTE 프리코딩 코드북)

  • Park, Ju-Yong;Peng, Bu Shi;Lee, Moon-Ho
    • Journal of the Institute of Electronics Engineers of Korea TC / v.49 no.3 / pp.14-26 / 2012
  • In this paper, fast diagonal-weighted Jacket matrices (DWJMs) with an orthogonal architecture are proposed. We develop the successive DWJM construction to reduce the computational load, factorizing large-order DWJMs into low-order sparse matrices with fast algorithms. The proposed DWJM is then applied to precoding for multiple-input multiple-output (MIMO) wireless communications because of its diagonal-weighted framework with element-wise inverse characteristics. Based on these properties, the DWJM can be used as an alternative open-loop cyclic delay diversity (CDD) precoding, which has recently become part of cellular communications systems. The performance of the DWJM-based precoding system is verified for orthogonal space-time block code (OSTBC) MIMO LTE systems.
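
The element-wise inverse characteristic mentioned above is the defining feature of Jacket matrices: the inverse is obtained by transposing the matrix of element-wise reciprocals and scaling by 1/n. The check below uses a 2×2 Hadamard matrix and a classic 4×4 weighted Jacket matrix from the literature as examples; these are not the specific DWJMs constructed in the paper.

```python
import numpy as np

def is_jacket(J):
    """Check the Jacket property: inv(J) == (1/n) * elementwise_inverse(J).T"""
    n = J.shape[0]
    return np.allclose(np.linalg.inv(J), (1.0 / J).T / n)

# A 2x2 Hadamard matrix is the simplest Jacket matrix ...
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
print(is_jacket(H2))  # True

# ... and this 4x4 weighted Jacket matrix uses weights +/-2.
J4 = np.array([[1,  1,  1,  1],
               [1, -2,  2, -1],
               [1,  2, -2, -1],
               [1, -1, -1,  1]], dtype=float)
print(is_jacket(J4))  # True
```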

Korea Emissions Inventory Processing Using the US EPA's SMOKE System

  • Kim, Soon-Tae;Moon, Nan-Kyoung;Byun, Dae-Won W.
    • Asian Journal of Atmospheric Environment / v.2 no.1 / pp.34-46 / 2008
  • Emissions inputs for use in air quality modeling of Korea were generated with the emissions inventory data from the National Institute of Environmental Research (NIER), maintained under the Clean Air Policy Support System (CAPSS) database. Source Classification Codes (SCCs) in the Korea emissions inventory were adapted for use with the U.S. EPA's Sparse Matrix Operator Kernel Emissions (SMOKE) system by finding the best-matching SMOKE default SCCs for chemical speciation and temporal allocation. A set of 19 surrogate spatial allocation factors for South Korea was developed utilizing the Multi-scale Integrated Modeling System (MIMS) Spatial Allocator and Korean GIS databases. The mobile and area source emissions data, after temporal allocation, show typical sinusoidal diurnal variations with high peaks during daytime, while point source emissions show weak diurnal variations. The model-ready emissions are speciated for the carbon bond version 4 (CB-4) chemical mechanism. Volatile organic compound (VOC) emissions from painting-related industries in the area source category contribute significantly to TOL (toluene) and XYL (xylene) emissions. ETH (ethylene) emissions are largely contributed by point-source industrial incineration facilities and various mobile sources. On the other hand, a large portion of OLE (olefin) emissions is speciated from mobile sources, in addition to the contribution of the polypropylene industry in the point source category. It was found that FORM (formaldehyde) is mostly emitted from the petroleum industry and heavy-duty diesel vehicles. Chemical speciation of PM2.5 emissions shows that PEC (primary fine elemental carbon) and POA (primary fine organic aerosol) are the most abundant species from diesel and gasoline vehicles. To reduce the uncertainties arising from mapping Korean SCCs to those of the U.S., it would be practical to develop and use domestic source profiles for the top 10 SCCs for area and point sources and the top 5 SCCs for on-road mobile sources, when VOC emissions from those sources account for more than 90% of the total.

Computational Algorithm for Nonlinear Large-scale/Multibody Structural Analysis Based on Co-rotational Formulation with FETI-local Method (Co-rotational 비선형 정식화 및 FETI-local 기법을 결합한 비선형 대용량/다물체 구조 해석 알고리듬 개발)

  • Cho, Haeseong;Joo, HyunShig;Lee, Younghun;Gwak, Min-cheol;Shin, SangJoon;Yoh, Jack J.
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.44 no.9 / pp.775-780 / 2016
  • In this paper, a computational algorithm for improved and versatile structural analysis applicable to large-scale flexible nonlinear structures is developed. In more detail, a nonlinear finite element based on the co-rotational (CR) framework is developed. Then, a finite element tearing and interconnecting method using local Lagrange multipliers (FETI-local) is combined with the nonlinear CR finite element. The resulting computational algorithm is presented and applied to nonlinear static analyses, i.e., a cantilevered beam and a multibody structure. Finally, the parallel computation performance of the proposed analysis is evaluated and compared with results obtained by serial computation using the sparse direct solver PARDISO.
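
The FETI idea of tying subdomains together with local Lagrange multipliers can be shown on a deliberately tiny problem: a chain of unit springs split into two subdomains whose duplicated interface DOF is constrained to match. This toy linear saddle-point system is a sketch of the coupling mechanism only, not the paper's nonlinear CR-based formulation.

```python
import numpy as np
from scipy.sparse import bmat, csr_matrix
from scipy.sparse.linalg import spsolve

# Subdomain 1: nodes 0-1-2 (node 0 clamped), unit springs -> DOFs [u1, u2a]
K1 = csr_matrix(np.array([[2.0, -1.0], [-1.0, 1.0]]))
f1 = np.array([0.0, 0.0])

# Subdomain 2: nodes 2-3-4, DOFs [u2b, u3, u4], tip load F at node 4
F = 1.0
K2 = csr_matrix(np.array([[1.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 1.0]]))
f2 = np.array([0.0, 0.0, F])

# Interface continuity u2a - u2b = 0, enforced by one Lagrange multiplier
B1 = csr_matrix(np.array([[0.0, 1.0]]))
B2 = csr_matrix(np.array([[-1.0, 0.0, 0.0]]))

A = bmat([[K1, None, B1.T],
          [None, K2, B2.T],
          [B1, B2, None]], format='csr')
b = np.concatenate([f1, f2, [0.0]])

sol = spsolve(A, b)
u1, u2, lam = sol[:2], sol[2:5], sol[5]
print(u1, u2, lam)  # [1. 2.] [2. 3. 4.] -1.0 (multiplier = interface force)
```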

Optimum Operation of Power System Using Fuzzy Linear Programming (퍼지 선형계획법을 적용한 전력계통의 최적운용에 관한 연구)

  • 박성대;정재길;조양행
    • The Proceedings of the Korean Institute of Illuminating and Electrical Installation Engineers / v.8 no.1 / pp.37-45 / 1994
  • A method of optimal active and reactive power control for the economic operation of an electrical power system is presented in this paper. The major features and techniques of this paper are as follows: 1) A method is presented for obtaining the equivalent active power balance equation by applying the sparse Jacobian matrix of the power flow equations, instead of using the B constant as the active power balance equation accounting for transmission loss, and for directly determining the optimal active power allocation without repeated calculations. 2) A more reasonable economic benefit can be achieved by minimizing the total fuel cost of the thermal power plants instead of using transmission loss as the objective function of reactive power control. 3) Particularly in reactive power control, the computing time can be considerably reduced by using fuzzy linear programming instead of conventional linear programming.
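
For readers unfamiliar with fuzzy linear programming, the sketch below applies the classic Zimmermann max-λ formulation to a toy two-generator dispatch: the cost goal and the demand constraint are softened with tolerances, and an ordinary LP maximizes the common satisfaction level λ. All numbers are illustrative, not taken from the paper.

```python
from scipy.optimize import linprog

# Variables: x = [P1, P2, lam]; maximize the satisfaction level lam.
#   fuzzy cost goal : 10*P1 + 12*P2 <= 1500, tolerance 200
#   fuzzy demand    : P1 + P2 >= 150,        tolerance 10
c = [0.0, 0.0, -1.0]  # linprog minimizes, so minimize -lam

A_ub = [
    [10.0, 12.0, 200.0],   # cost   <= 1500 + (1 - lam) * 200
    [-1.0, -1.0, 10.0],    # demand >= 150  - (1 - lam) * 10
]
b_ub = [1700.0, -140.0]
bounds = [(0.0, 100.0), (0.0, 80.0), (0.0, 1.0)]  # generator limits, lam in [0, 1]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
P1, P2, lam = res.x
print(f"P1={P1:.1f} MW, P2={P2:.1f} MW, satisfaction={lam:.3f}")
# -> P1=100.0 MW, P2=46.9 MW, satisfaction=0.688
```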

Stock Price Prediction by Utilizing Category Neutral Terms: Text Mining Approach (카테고리 중립 단어 활용을 통한 주가 예측 방안: 텍스트 마이닝 활용)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.123-138 / 2017
  • Since the stock market is driven by the expectations of traders, studies have been conducted to predict stock price movements through the analysis of various sources of text data. To predict stock price movements, research has been conducted not only on the relationship between text data and fluctuations in stock prices, but also on trading stocks based on news articles and social media responses. Studies that predict the movements of stock prices have also applied classification algorithms built on a term-document matrix, in the same way as other text mining approaches. Because a document contains many words, it is better to select the words that contribute most when building a term-document matrix. Based on word frequency, words that show too little frequency or importance are removed. Words can also be selected according to their contribution by measuring the degree to which each word helps to classify a document correctly. The basic idea of constructing a term-document matrix is to collect all the documents to be analyzed and to select and use the words that have an influence on the classification. In this study, we analyze the documents for each individual stock and select the words that are irrelevant to all categories as neutral words. We extract the words around the selected neutral words and use them to generate the term-document matrix. The neutral-word approach starts from the idea that stock movements are less related to the presence of the neutral words themselves, and that the words surrounding a neutral word are more likely to affect stock price movements. We then feed the generated term-document matrix to an algorithm that classifies stock price fluctuations. Concretely, we first removed stop words and selected neutral words for each stock, and then excluded, from the selected words, those that also appear in news articles about other stocks. Through an online news portal, we collected four months of news articles on the top 10 market-cap stocks. We used three months of news data as training data and applied the remaining one month of news articles to the model to predict the stock price movements of the next day. We used SVM, boosting, and random forest for building models and predicting the movements of stock prices. The stock market was open for a total of 80 days during the four months (2016/02/01~2016/05/31); we used the initial 60 days as a training set and the remaining 20 days as a test set. The proposed neutral-word-based algorithm showed better classification performance than the word selection method based on sparsity. In summary, this study predicted stock price volatility by collecting and analyzing news articles on the top 10 stocks by market cap. We used a term-document-matrix-based classification model to estimate stock price fluctuations and compared the performance of the existing sparsity-based word extraction method with the suggested method. The suggested method differs from the existing word extraction method in that it uses not only the news articles for the corresponding stock but also other news items to determine which words to extract. In other words, it removed not only the words that appeared in both rising and falling cases but also the words that commonly appeared in the news for other stocks. When the prediction accuracy was compared, the suggested method showed higher accuracy.
A limitation of this study is that stock price prediction was set up to classify rises and falls, and the experiment was conducted only on the top ten stocks, which do not represent the entire stock market. In addition, it is difficult to demonstrate investment performance, because stock price fluctuations and profit rates may differ. Further research using more stocks, together with yield prediction through trading simulation, is therefore needed.
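
A minimal sketch of the neutral-word idea described above: keep only the terms within a small window around hypothetical neutral words, then build the term-document matrix from the filtered text. The neutral words, window size, and toy documents are invented for illustration and are not the study's actual data or vocabulary.

```python
from sklearn.feature_extraction.text import CountVectorizer

def context_terms(doc, neutral_words, window=2):
    """Keep only the words within +/-window positions of any neutral word,
    dropping the neutral words themselves."""
    tokens = doc.lower().split()
    keep = set()
    for i, tok in enumerate(tokens):
        if tok in neutral_words:
            keep.update(range(max(0, i - window), min(len(tokens), i + window + 1)))
    return " ".join(tokens[i] for i in sorted(keep) if tokens[i] not in neutral_words)

docs = [
    "the company announced record quarterly earnings today",
    "the company announced a recall of its flagship product today",
]
neutral = {"announced", "today"}  # hypothetical neutral words

filtered = [context_terms(d, neutral) for d in docs]
vec = CountVectorizer()
X = vec.fit_transform(filtered)          # term-document matrix of context words
print(vec.get_feature_names_out())
print(X.toarray())
```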