• Title/Summary/Keyword: Orthogonal set


Supervised learning and frequency domain averaging-based adaptive channel estimation scheme for filterbank multicarrier with offset quadrature amplitude modulation

  • Singh, Vibhutesh Kumar;Upadhyay, Nidhi;Flanagan, Mark;Cardiff, Barry
    • ETRI Journal
    • /
    • v.43 no.6
    • /
    • pp.966-977
    • /
    • 2021
  • Filterbank multicarrier with offset quadrature amplitude modulation (FBMC-OQAM) is an attractive alternative to the orthogonal frequency division multiplexing (OFDM) modulation technique. In comparison with OFDM, the FBMC-OQAM signal has better spectral confinement, higher spectral efficiency, and greater tolerance to synchronization errors, primarily due to per-subcarrier filtering using a frequency-time localized prototype filter. However, the filtering process introduces intrinsic interference among the symbols and complicates channel estimation (CE). An efficient way to improve CE in FBMC-OQAM is to use a technique known as windowed frequency domain averaging (FDA); however, it requires a priori knowledge of the window length parameter, which is set based on the channel's frequency selectivity (FS). As the channel's FS is not fixed and not a priori known, we propose a k-nearest neighbor-based machine learning algorithm to classify the FS and decide on the FDA's window length. A comparative theoretical analysis of the mean-squared error (MSE) is performed to prove the proposed CE scheme's effectiveness, validated through extensive simulations. The adaptive CE scheme is shown to yield a reduction in CE-MSE and improved bit error rates compared with the popular preamble-based CE schemes for FBMC-OQAM, without a priori knowledge of the channel's frequency selectivity.
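A minimal sketch of the adaptive step described above, assuming scikit-learn: a k-nearest-neighbor classifier maps a frequency-selectivity feature extracted from a least-squares preamble channel estimate to an FDA window length. The feature (lag-one frequency correlation), the training labels, and the class-to-window mapping are illustrative assumptions, not the authors' exact design.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fs_feature(h_ls):
    """Lag-1 correlation magnitude across subcarriers of the LS channel estimate."""
    num = np.abs(np.vdot(h_ls[:-1], h_ls[1:]))
    den = np.linalg.norm(h_ls[:-1]) * np.linalg.norm(h_ls[1:]) + 1e-12
    return np.array([num / den])

# Hypothetical training set: channels of known frequency-selectivity class
# (0 = nearly flat, 1 = moderately selective, 2 = highly selective).
rng = np.random.default_rng(0)
X_train = np.vstack([fs_feature(np.fft.fft(rng.standard_normal(L) + 1j * rng.standard_normal(L), 64))
                     for L in (1, 4, 16) for _ in range(50)])
y_train = np.repeat([0, 1, 2], 50)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

window_length = {0: 7, 1: 5, 2: 3}      # assumed FS-class -> FDA window-length map
h_ls = np.fft.fft(rng.standard_normal(4) + 1j * rng.standard_normal(4), 64)  # LS estimate stand-in
fs_class = int(knn.predict(fs_feature(h_ls).reshape(1, -1))[0])
w = window_length[fs_class]
h_fda = np.convolve(h_ls, np.ones(w) / w, mode="same")   # windowed frequency-domain averaging
print("FS class:", fs_class, "-> FDA window length:", w)
```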

SOME ABELIAN MCCOY RINGS

  • Rasul Mohammadi;Ahmad Moussavi;Masoome Zahiri
    • Journal of the Korean Mathematical Society
    • /
    • v.60 no.6
    • /
    • pp.1233-1254
    • /
    • 2023
  • We introduce two subclasses of abelian McCoy rings, the so-called π-CN-rings and π-duo rings, and systematically study their fundamental properties together with their relationships to certain classical classes of rings such as 2-primal rings and bounded rings. It is shown that a ring R is π-CN whenever every nilpotent element of index 2 in R is central. These rings naturally generalize the long-known class of CN-rings introduced by Drazin [9]. It is proved that π-CN-rings are abelian, McCoy, and 2-primal. We also show that π-duo rings are strongly McCoy and abelian, and that they are strongly right AB. If R is π-duo, then R[x] has property (A). If R is π-duo and is either right weakly continuous or every prime ideal of R is maximal, then R has property (A). A π-duo ring R is left perfect if and only if R contains no infinite set of orthogonal idempotents and every left R-module has a maximal submodule. Our results substantially improve many existing results.
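For reference, two notions quoted in this abstract written out in symbols (a restatement only; the full definitions of π-CN and π-duo rings are given in the paper itself):

```latex
% Sufficient condition for \pi-CN quoted in the abstract:
\big(\forall a \in R:\ a^{2} = 0 \implies a \in Z(R)\big)
  \;\Longrightarrow\; R \text{ is a } \pi\text{-CN ring}.

% A set of idempotents \{e_i\} \subseteq R is orthogonal when
e_{i}^{2} = e_{i} \quad\text{and}\quad e_{i}e_{j} = 0 \ \text{ for } i \neq j .
```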

A Study on the Prediction System of Block Matching Rework Time (블록 정합 재작업 시수 예측 시스템에 관한 연구)

  • Jang, Moon-Seuk;Ruy, Won-Sun;Park, Chang-Kyu;Kim, Deok-Eun
    • Journal of the Society of Naval Architects of Korea
    • /
    • v.55 no.1
    • /
    • pp.66-74
    • /
    • 2018
  • To evaluate the assembly precision of blocks on the dock, shipyards have recently begun to use point cloud approaches based on 3D scanners. However, they hesitate to adopt them because of the time, cost, and post-processing effort involved. Instead, they still rely on the more traditional electro-optical wave devices, which produce a sparse point set (usually about one point per meter) around the contact section of two blocks. This paper seeks to expand the usage of such point sets. Our approach estimates the rework time for welding between the pre-erected (PE) block and the erected (ER) block as well as the precision of block construction. In detail, two algorithms are applied to make the estimation process more efficient. The first is the K-means clustering algorithm, used to separate the contact point set related to the welding sections from unrelated points. The second is the concave hull algorithm, which removes the inner points of the contact section reserved for delayed outfitting and stiffeners, and constructs the concave outline of the contact section as the primary object for estimating the welding rework time. The main contribution is that the welding rework cost can be obtained easily and precisely even from a sparse, imperfect point set. Approximating mathematical curves from the points on the blocks' outline is challenging because of the many orthogonal parts and the small number of points; to address this, we compared the radial basis function multilayer (RBF-ML) and Akima interpolation methods. Combining the proposed methods, the paper suggests a novel point matching method for minimizing the rework time of block welding on the dock, unlike previous approaches that focused only on geometric accuracy.
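A minimal sketch of the two separation steps on a synthetic 2-D point set, assuming scikit-learn and SciPy: K-means isolates the dense band of points near the PE/ER contact section, and a hull fitted to that cluster yields the weld outline whose length drives the rework-time estimate. The paper's concave hull is replaced here by SciPy's convex hull, and the synthetic points, cluster-selection rule, and man-hour factor are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)
contact = rng.normal(loc=(0.0, 0.0), scale=(5.0, 0.2), size=(200, 2))   # dense band at the joint
other = rng.uniform(low=(-10, 2), high=(10, 10), size=(100, 2))         # unrelated block points
points = np.vstack([contact, other])

# Step 1: K-means with two clusters; keep the cluster closest to the joint line y = 0.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
means = [np.abs(points[labels == k][:, 1]).mean() for k in (0, 1)]
contact_pts = points[labels == int(np.argmin(means))]

# Step 2: outline of the contact section (convex hull as a stand-in for the concave hull).
hull = ConvexHull(contact_pts)
outline = contact_pts[hull.vertices]
weld_length = np.sum(np.linalg.norm(np.diff(np.vstack([outline, outline[:1]]), axis=0), axis=1))

rework_hours = weld_length * 0.1   # hypothetical man-hours per metre of weld seam
print(f"estimated weld outline length: {weld_length:.1f} m, rework: {rework_hours:.1f} h")
```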

Statistical Analysis of Projection-Based Face Recognition Algorithms (투사에 기초한 얼굴 인식 알고리즘들의 통계적 분석)

  • 문현준;백순화;전병민
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.25 no.5A
    • /
    • pp.717-725
    • /
    • 2000
  • Within the last several years, a large number of algorithms have been developed for face recognition. The majority of these have been view- and projection-based algorithms. Our definition of projection is not restricted to projecting the image onto an orthogonal basis; the definition is expansive and includes a general class of linear transformations of the image pixel values. The class includes correlation, principal component analysis, clustering, gray scale projection, and matching pursuit filters. In this paper, we perform a detailed analysis of this class of algorithms by evaluating them on the FERET database of facial images. In our experiments, a projection-based algorithm consists of three steps. The first step is done off-line and determines the new basis for the images. The basis is either set by the algorithm designer or learned from a training set. The last two steps are performed on-line and carry out the recognition. The second step projects an image onto the new basis, and the third step recognizes a face in an image with a nearest neighbor classifier. The classification is performed in the projection space. Most evaluation methods report algorithm performance on a single gallery, which does not fully capture algorithm performance. In our study, we construct a set of independent galleries. This allows us to see how individual algorithm performance varies over different galleries. In addition, we report on the relative performance of the algorithms over the different galleries.
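A minimal sketch of the three-step pipeline described above, assuming scikit-learn, with PCA as the off-line basis and a 1-nearest-neighbor classifier in the projection space; the gallery and probe images are random stand-ins rather than FERET data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
n_ids, img_dim = 20, 32 * 32
train = rng.standard_normal((100, img_dim))                   # training set that defines the basis
gallery = rng.standard_normal((n_ids, img_dim))               # one enrolled image per identity
probe = gallery + 0.1 * rng.standard_normal(gallery.shape)    # noisy probes of the same identities

pca = PCA(n_components=40).fit(train)                         # step 1: learn the projection basis off-line
clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(gallery), np.arange(n_ids))
pred = clf.predict(pca.transform(probe))                      # steps 2-3: project, then nearest neighbor
print("rank-1 identification rate:", (pred == np.arange(n_ids)).mean())
```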

Factor Analysis for Exploratory Research in the Distribution Science Field (유통과학분야에서 탐색적 연구를 위한 요인분석)

  • Yim, Myung-Seong
    • Journal of Distribution Science
    • /
    • v.13 no.9
    • /
    • pp.103-112
    • /
    • 2015
  • Purpose - This paper aims to provide a step-by-step approach to factor analytic procedures, such as principal component analysis (PCA) and exploratory factor analysis (EFA), and to offer a guideline for factor analysis. Some authors have argued that the results of PCA and EFA are substantially similar, and assert that PCA is the more appropriate technique because it produces easily interpreted results that are likely to be the basis of better decisions. For these reasons, many researchers have used PCA instead of EFA. However, these techniques are clearly different. PCA should be used for data reduction; EFA, on the other hand, is tailored to identify an underlying factor structure, that is, a set of latent factors that cause the measured variables to covary. A guideline and procedures for factor analysis are therefore needed. To date, however, these two techniques have been indiscriminately misused. Research design, data, and methodology - This research conducted a literature review. We summarized the meaningful and consistent arguments and drew up guidelines and suggested procedures for rigorous EFA. Results - PCA can be used instead of common factor analysis when all measured variables have high communality; however, common factor analysis is recommended for EFA. First, researchers should evaluate the sample size and check for sampling adequacy before conducting factor analysis. If these conditions are not satisfied, the subsequent steps cannot be followed. The sample size must be at least 100, with communalities above 0.5 and a subject-to-item ratio of at least 5:1, with a minimum of five items in the EFA. Next, Bartlett's sphericity test and the Kaiser-Meyer-Olkin (KMO) measure should be assessed for sampling adequacy: the chi-square value for Bartlett's test should be significant, and a KMO of more than 0.8 is recommended. The next step is to conduct the factor analysis itself, which is composed of three stages. The first stage determines an extraction technique; generally, maximum likelihood (ML) or principal axis factoring (PAF) gives the best results, and the choice between the two hinges on data normality: ML requires normally distributed data, whereas PAF does not. The second stage determines the number of factors to retain in the EFA; the best way is to combine three methods, namely eigenvalues greater than 1.0, the scree plot test, and the variance extracted. The last stage is to select one of the two rotation methods, orthogonal or oblique. If theory suggests the factors are correlated with each other, the oblique method should be selected for factor rotation because it allows the factors to correlate; if not, the orthogonal method can be used. Conclusions - Recommendations are offered for the best factor analytic practice for empirical research.
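A minimal sketch of the recommended workflow, assuming the third-party factor_analyzer package (the class and function names follow its documented API but may vary by version); the survey data, noise level, and simple-structure pattern are synthetic stand-ins.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Synthetic survey data with a simple 3-factor structure (n = 200 >= 100, 12 items).
rng = np.random.default_rng(3)
latent = rng.standard_normal((200, 3))
pattern = rng.integers(0, 3, size=(12, 1)) == np.arange(3)        # each item loads on one factor
true_loadings = rng.uniform(0.6, 0.9, size=(12, 3)) * pattern
df = pd.DataFrame(latent @ true_loadings.T + 0.5 * rng.standard_normal((200, 12)),
                  columns=[f"item{i+1}" for i in range(12)])

# Step 1: sampling adequacy -- Bartlett's test should be significant, KMO > 0.8 preferred.
chi2, p = calculate_bartlett_sphericity(df)
_, kmo_model = calculate_kmo(df)
print(f"Bartlett chi2 = {chi2:.1f} (p = {p:.4f}), KMO = {kmo_model:.2f}")

# Step 2: number of factors -- eigenvalue > 1 rule (combine with scree plot / variance extracted).
fa0 = FactorAnalyzer(rotation=None)
fa0.fit(df)
eigenvalues, _ = fa0.get_eigenvalues()
n_factors = int((eigenvalues > 1.0).sum())

# Step 3: extraction and rotation -- ML if the data are normal (PAF otherwise);
# oblique rotation if the factors are expected to correlate, orthogonal if not.
fa = FactorAnalyzer(n_factors=n_factors, method="ml", rotation="oblimin")
fa.fit(df)
print("factors retained:", n_factors)
print(fa.loadings_.round(2))
```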

A Study on the Development of HMR Products of Korean Foods Using Conjoint Analysis (컨조인트 분석법을 이용한 한국 음식의 HMR 상품 개발에 관한 연구)

  • Choi, Won-Sik;Seo, Kyung-Hwa;Lee, Soo-Bum
    • Culinary science and hospitality research
    • /
    • v.18 no.1
    • /
    • pp.156-167
    • /
    • 2012
  • The purpose of this study is to examine the structural elements of HMR (home meal replacement) products based on Korean foods and to explore how such products can be developed at this time of increased interest. By investigating the importance of each attribute and the part-worth of each attribute level, hypothetical HMR products using Korean foods were evaluated. To develop optimal HMR products of Korean food, a preference survey was conducted after selecting 9 profiles using conjoint analysis with an orthogonal design, and 4 holdout sets were generated and used to check the cross-validity and reliability of the model. The results showed that customers regard cooking level, menu price, and place of purchase as important when selecting HMR products of Korean foods. They preferred products that are eaten after being sufficiently heated and that are sold online and through home shopping programs, in the price range of 10,000 won and over. It was concluded that more customers can be attracted if a variety of HMR products using Korean foods, which can be prepared readily anywhere and at any time, are developed.
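A minimal sketch of estimating part-worth utilities from rated profiles with dummy-coded least squares, the core computation behind conjoint analysis; the attribute levels, the 9 profiles (not a true orthogonal array), and the ratings are fictitious stand-ins.

```python
import numpy as np
import pandas as pd

profiles = pd.DataFrame({
    "cooking":  ["heat", "heat", "ready", "ready", "heat", "ready", "heat", "ready", "heat"],
    "price":    ["<10k", ">=10k", "<10k", ">=10k", ">=10k", "<10k", "<10k", ">=10k", "<10k"],
    "location": ["online", "store", "home-shopping", "online", "home-shopping",
                 "store", "store", "online", "home-shopping"],
})
ratings = np.array([6.5, 7.8, 5.1, 6.9, 8.2, 4.8, 5.5, 7.4, 6.1])   # fictitious mean preferences

# Dummy-code the attribute levels (one level per attribute dropped as the baseline),
# then solve ordinary least squares for the part-worth of each remaining level.
X = pd.get_dummies(profiles, drop_first=True).astype(float)
X.insert(0, "intercept", 1.0)
beta, *_ = np.linalg.lstsq(X.to_numpy(), ratings, rcond=None)
partworths = pd.Series(beta, index=X.columns)
print(partworths.round(2))   # relative attribute importance follows from the range of part-worths
```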

A Simple Bit Allocation Scheme Based on Grouped Sub-Channels for V-BLAST OFDM Systems (V-BLAST OFDM 시스템을 위한 그룹화된 부채널 기반의 간단한 형태의 비트 할당 기법)

  • Park Dae-Jin;Yang Suck-Chel;Kim Jong-Won;Yoo Myung-Sik;Lee Won-Cheol;Shin Yo-An
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.31 no.7C
    • /
    • pp.680-690
    • /
    • 2006
  • In this paper, we present a bit allocation scheme based on grouped sub-channels for MIMO-OFDM (Multiple Input Multiple Output - Orthogonal Frequency Division Multiplexing) systems using a V-BLAST (Vertical-Bell Laboratories Layered Space-Time) detector. A fully adaptive modulation and coding scheme may provide optimal performance in MIMO-OFDM systems; however, it requires excessive feedback information. Instead, the SBA (Simplified Bit Allocation) scheme, which reduces feedback overhead by applying the same modulation and coding to all good sub-channels, may be considered. The scheme proposed in this paper, named SBA-GS (Simplified Bit Allocation based on Grouped Sub-channels), groups the sub-channels and assigns the same modulation and coding to the set of selected sub-channel groups. Simulation results show that the proposed scheme achieves bit error rate performance comparable to the conventional SBA scheme while significantly reducing the feedback overhead in multipath channels with small delay spreads.
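A minimal sketch of the grouping idea described above: sub-channels are grouped, each group is summarized by its average SNR, and one common modulation order is fed back per group instead of per sub-channel. The group size, SNR thresholds, and modulation set are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def sba_gs(snr_db, group_size=8, thresholds_db=(8.0, 15.0, 22.0), bits=(0, 2, 4, 6)):
    """Return one bits-per-symbol value per sub-channel group (0 = group not used)."""
    groups = snr_db.reshape(-1, group_size).mean(axis=1)        # per-group average SNR
    alloc = np.zeros_like(groups, dtype=int)
    for thr, b in zip(thresholds_db, bits[1:]):
        alloc[groups >= thr] = b                                # e.g. QPSK / 16-QAM / 64-QAM
    return groups, alloc

rng = np.random.default_rng(4)
h = (rng.standard_normal(64) + 1j * rng.standard_normal(64)) / np.sqrt(2)   # Rayleigh sub-channels
snr_db = 10 * np.log10(np.abs(h) ** 2 * 10 ** (15 / 10))                    # 15 dB average SNR
groups, alloc = sba_gs(snr_db)
print("per-group bits/symbol:", alloc, "| feedback values:", alloc.size, "instead of", snr_db.size)
```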

Spectral Efficiency of MC-CDMA (MC-CDMA 방식의 주파수 효율)

  • Han Hee-Goo;Oh Seong-Keun
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.43 no.3 s.345
    • /
    • pp.39-48
    • /
    • 2006
  • In this paper, we analyze the spectral efficiency of the multicarrier code division multiple access (MC-CDMA) scheme. First, we derive a generalized formula for the spectral efficiency as a function of the number of subcarriers involved in code division multiplexing and the number of codes used (i.e., the loading factor), under a given set of channel coefficients. We also derive a generalized formula for the spectral efficiency of various reduced-complexity systems that divide the full set of subcarriers into several groups of subcarriers for code division multiplexing. Through these derivations, we establish a relationship between the frequency selectivity and the diversity order according to the number of multipaths. From the results, we choose the smallest code length that still maximizes the diversity effect, provide an optimum subcarrier allocation strategy, and finally suggest a system structure that maximizes capacity under the smallest code length. Through numerical analyses in simulated environments, we analyze the spectral efficiency properties of various reduced-complexity systems and identify the major factors contributing to system design as well as a better system design methodology. Finally, we compare the spectral efficiency of the MC-CDMA scheme with that of the orthogonal frequency division multiplexing (OFDM) scheme to establish a relationship between the two schemes.
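An illustrative sketch, not the paper's derivation: it computes the spectral efficiency of a single MC-CDMA block with N subcarriers, K Walsh-Hadamard codes (loading factor K/N), and a linear MMSE despreader, and compares it with plain OFDM on the same channel. Unit power per spread stream is assumed, so the comparison is only power-matched at full loading (K = N).

```python
import numpy as np
from scipy.linalg import hadamard

def mc_cdma_se(h, K, snr_lin):
    """Sum rate per subcarrier with MMSE detection of K spread streams."""
    N = h.size
    C = hadamard(N)[:, :K] / np.sqrt(N)                 # orthogonal spreading codes (unit-norm columns)
    A = np.diag(h) @ C                                  # effective channel of the K streams
    G = np.linalg.inv(np.eye(K) + snr_lin * (A.conj().T @ A))
    sinr = 1.0 / np.real(np.diag(G)) - 1.0              # per-stream MMSE SINR
    return np.sum(np.log2(1.0 + sinr)) / N              # bits/s/Hz

def ofdm_se(h, snr_lin):
    return np.mean(np.log2(1.0 + snr_lin * np.abs(h) ** 2))

rng = np.random.default_rng(5)
N, snr_lin = 64, 10 ** (10 / 10)                        # 10 dB SNR
h = np.fft.fft(rng.standard_normal(4) + 1j * rng.standard_normal(4), N) / np.sqrt(4)  # 4-tap channel
for K in (16, 32, 64):
    print(f"K={K:2d}  MC-CDMA: {mc_cdma_se(h, K, snr_lin):.2f}  OFDM: {ofdm_se(h, snr_lin):.2f} bits/s/Hz")
```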

Coverage Enhancement in TDD-OFDMA Downlink by using Simple-Relays with Resource Allocation and Throughput Guarantee Scheduler (TDD-OFDMA 하향링크에서의 단순 릴레이를 이용한 자원 할당과 수율 보장 스케줄러를 사용한 서비스 커버리지 향상에 관한 연구)

  • Byun, Dae-Wook;Ki, Young-Min;Kim, Dong-Ku
    • Journal of Advanced Navigation Technology
    • /
    • v.10 no.3
    • /
    • pp.275-281
    • /
    • 2006
  • Simple-relay aided resource allocation (SRARA) schemes are combined with throughput guarantee scheduling (TGS) in an IEEE 802.16-type time division duplex orthogonal frequency division multiple access (TDD-OFDMA) downlink in order to enhance service coverage, where the amount of resources for relaying at each relay is limited either by its available power, which is much smaller than the base station (BS) power, or by the overhead required for exchanging feedback information. The performance of the SRARA schemes is evaluated with the proportional fair (PF) and TGS schedulers at 64 kbps and 128 kbps user throughput requirements, with the total MS power set to 500 mW or 1 W. At the 64 kbps throughput requirement, more of the improvement comes from the relays than from the scheduler design. At 128 kbps, more of it comes from the scheduler design than from the relays, because the simple relays can only use a strictly limited amount of resources for the relaying function.
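A minimal sketch contrasting the two schedulers named above: the proportional fair metric is instantaneous rate over average throughput, and a simple throughput-guarantee variant first serves users below their target (64 kbps here, as in the paper). The rate model, averaging constant, and selection rule are illustrative assumptions, not the paper's exact TGS.

```python
import numpy as np

def pick_user(rates, avg_tput, targets=None):
    if targets is not None:
        lagging = np.flatnonzero(avg_tput < targets)      # TGS: users under their guarantee first
        if lagging.size:
            return int(lagging[np.argmax(rates[lagging] / np.maximum(avg_tput[lagging], 1e-9))])
    return int(np.argmax(rates / np.maximum(avg_tput, 1e-9)))   # plain PF metric

rng = np.random.default_rng(6)
n_users, beta = 8, 0.02                                   # beta: throughput averaging constant
avg_tput = np.full(n_users, 1e-9)
targets = np.full(n_users, 64e3)                          # 64 kbps guarantee per user
for _ in range(2000):                                     # one allocation per frame
    rates = rng.exponential(200e3, n_users)               # instantaneous feasible rates (bit/s)
    u = pick_user(rates, avg_tput, targets)
    served = np.zeros(n_users); served[u] = rates[u]
    avg_tput = (1 - beta) * avg_tput + beta * served      # exponential throughput average
print("users meeting 64 kbps:", int(np.sum(avg_tput >= targets)), "of", n_users)
```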

Screening of the liver, serum, and urine of piglets fed zearalenone using a NMR-based metabolomic approach

  • Jeong, Jin Young;Kim, Min Seok;Jung, Hyun Jung;Kim, Min Ji;Lee, Hyun Jeong;Lee, Sung Dae
    • Korean Journal of Agricultural Science
    • /
    • v.45 no.3
    • /
    • pp.447-454
    • /
    • 2018
  • Zearalenone (ZEN), a mycotoxin produced by Fusarium in food and feed, causes serious damage to the health of humans and livestock. We therefore compared the metabolomic profiles of the liver, serum, and urine of piglets fed a ZEN-contaminated diet using proton nuclear magnetic resonance (¹H-NMR) spectroscopy. The spectra from the three different sample types, after treatment with a ZEN concentration of 0.8 mg/kg for 4 weeks, were aligned and identified using MATLAB. The aligned data were subjected to discriminant analysis using multivariate statistics and a web server for metabolite set enrichment analysis. The ZEN-exposed groups were almost separated from the controls in the three sample types. Metabolic analysis showed that 28, 29, and 20 metabolites were profiled in the liver, serum, and urine, respectively. The discriminant analysis showed that the alanine, arginine, choline, and glucose concentrations were increased in the liver. Phenylalanine and tyrosine showed high concentrations in serum, whereas valine showed a low concentration. In addition, formate levels were increased in the ZEN-treated urine. In the integrated analysis, glucose, lactate, taurine, glycine, alanine, glutamate, glutamine, and creatine identified by orthogonal partial least squares discriminant analysis (OPLS-DA) were potential compounds for discriminating the groups. In conclusion, our findings suggest that these potential biomarker compounds can provide a better understanding of how ZEN-contaminated feed affects the liver, serum, and urine of swine.
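A minimal sketch of the discriminant step, assuming scikit-learn: OPLS-DA itself is not available there, so plain PLS-DA (PLSRegression on a binary class label) stands in; the spectral bins, group sizes, and shifted metabolite bins are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
n_per_group, n_bins = 10, 120
control = rng.standard_normal((n_per_group, n_bins))
zen = rng.standard_normal((n_per_group, n_bins))
zen[:, :5] += 1.5                                    # a few bins (e.g. glucose, alanine) shifted by ZEN
X = np.vstack([control, zen])
y = np.repeat([0, 1], n_per_group)

pls = PLSRegression(n_components=2).fit(X, y)
scores = pls.transform(X)                            # sample scores (axes of the separation plot)
print("mean component-1 score: control %.2f, ZEN %.2f"
      % (scores[y == 0, 0].mean(), scores[y == 1, 0].mean()))
vip_proxy = np.abs(pls.x_weights_[:, 0])             # crude marker ranking from component-1 weights
print("top discriminating bins:", np.argsort(vip_proxy)[::-1][:5])
```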