• Title/Summary/Keyword: Multiple-input multiple-output System


Development of an Oak Pollen Emission and Transport Modeling Framework in South Korea (한반도 참나무 꽃가루 확산예측모델 개발)

  • Lim, Yun-Kyu;Kim, Kyu Rang;Cho, Changbum;Kim, Mijin;Choi, Ho-seong;Han, Mae Ja;Oh, Inbo;Kim, Baek-Jo
    • Atmosphere
    • /
    • v.25 no.2
    • /
    • pp.221-233
    • /
    • 2015
  • Pollen is closely related to health issues such as allergic rhinitis and asthma, and it can aggravate atopic syndrome. Information on the current and future spatio-temporal distribution of allergenic pollen is needed to address such issues. In this study, the Community Multiscale Air Quality (CMAQ) modeling system was used as the base modeling system to forecast pollen dispersal from oak trees. Pollen emission is one of the most important components of the dispersal modeling system. The areal emission factor was determined from the gridded areal fraction of oak trees, which was produced by analyzing the 1:5000 tree type maps obtained from the Korea Forest Service. Daily total pollen production was estimated by a robust multiple regression model of weather conditions and pollen concentration. The hourly emission factor was determined from wind speed and friction velocity. Hourly pollen emission was then calculated by multiplying the areal emission factor, the daily total pollen production, and the hourly emission factor. Forecast data from the KMA UM LDAPS (Korea Meteorological Administration Unified Model Local Data Assimilation and Prediction System) were used as input. For model verification, daily observed pollen concentrations from 12 sites in Korea during the 2014 pollen season were used. Although the model tended to over-estimate the seasonal and daily mean concentrations, the overall concentrations were similar to the observations. Comparison of the hourly output showed a distinct delay of the peak hours by the model at the Pocheon site. It was speculated that the constant hourly release rate of pollen in the modeling framework caused the delay.
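
The abstract describes the hourly emission as the product of three factors. The snippet below is a minimal sketch of that bookkeeping only, with illustrative names and toy values; the actual regression coefficients and gridded data come from the paper's datasets and are not reproduced here.

```python
# Hedged sketch: hourly oak-pollen emission as the product of the three factors
# named in the abstract. All names and numbers here are illustrative only.

def hourly_pollen_emission(areal_factor, daily_production, hourly_factor):
    """Emission for one grid cell and one hour.

    areal_factor     : gridded areal fraction of oak trees (0-1)
    daily_production : daily total pollen production (grains per m^2 per day),
                       e.g. from a regression on weather conditions
    hourly_factor    : fraction of the daily total released in this hour,
                       e.g. derived from wind speed and friction velocity
    """
    return areal_factor * daily_production * hourly_factor

# Toy example: a cell that is 35% oak, on a day producing 1.2e6 grains/m^2,
# during an hour that releases 8% of the daily total.
emission = hourly_pollen_emission(0.35, 1.2e6, 0.08)
print(f"hourly emission: {emission:.0f} grains/m^2")
```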

Development of Door Control Unit for the Electric Plug-in Door of Subway Train (전동차 전기식 플러그도어 출입문 제어 장치 개발)

  • Joung, Eui-Jin
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.48 no.4
    • /
    • pp.47-53
    • /
    • 2011
  • The Electric Multiple Unit (EMU) uses several types of door system, such as sliding doors and plug doors, depending on customer requirements. The sliding door is widely used in Korea but has a weakness with respect to noise. At low operating speeds, the noise coming from outside the EMU is not an important factor, but as operating speeds increase, the noise grows and becomes a problem. The main cause of the noise is imperfect air tightness of the EMU. The plug door system has an advantage in noise reduction at high speeds. We have been developing an electric plug-in door. The door is controlled by a Door Control Unit (DCU) following commands from the Automatic Train Protection (ATP) system, a type of train signalling system. The DCU has to open and close all doors simultaneously, and its operation is directly related to passenger safety, so the DCU is a safety device for which reliability and safety are critical. The DCU is composed of control, motor driving, input/output, communication, and power subsystems. In this paper, we describe the functions, characteristics, requirements, subsystems, and test results of the DCU used for the electric plug-in door.
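
To make the command flow concrete, here is a minimal sketch of a DCU-style controller that receives open/close commands from the ATP and drives all doors together, with motor feedback completing the motion. The class and method names are illustrative assumptions, not the paper's actual firmware interfaces.

```python
# Hedged sketch of the DCU command flow described in the abstract.
from enum import Enum

class DoorState(Enum):
    CLOSED = "closed"
    OPENING = "opening"
    OPEN = "open"
    CLOSING = "closing"

class DoorControlUnit:
    def __init__(self, num_doors: int):
        self.states = [DoorState.CLOSED] * num_doors

    def on_atp_command(self, command: str) -> None:
        # All doors are commanded simultaneously, as the abstract emphasises.
        if command == "OPEN":
            self.states = [DoorState.OPENING] * len(self.states)
        elif command == "CLOSE":
            self.states = [DoorState.CLOSING] * len(self.states)

    def motor_feedback(self, door_index: int, completed: bool) -> None:
        # The motor-driving subsystem reports completion of the commanded motion.
        if completed and self.states[door_index] is DoorState.OPENING:
            self.states[door_index] = DoorState.OPEN
        elif completed and self.states[door_index] is DoorState.CLOSING:
            self.states[door_index] = DoorState.CLOSED

dcu = DoorControlUnit(num_doors=4)
dcu.on_atp_command("OPEN")
dcu.motor_feedback(0, completed=True)
print([s.value for s in dcu.states])
```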

A channel parameter-based weighting method for performance improvement of underwater acoustic communication system using single vector sensor (단일 벡터센서의 수중음향 통신 시스템 성능 향상을 위한 채널 파라미터 기반 가중 방법)

  • Choi, Kang-Hoon;Choi, Jee Woong
    • The Journal of the Acoustical Society of Korea
    • /
    • v.41 no.6
    • /
    • pp.610-620
    • /
    • 2022
  • An acoustic vector sensor can simultaneously receive vector quantities, such as particle velocity and acceleration, as well as acoustic pressure at a single location, so it can be used as a single-input multiple-output receiver in underwater acoustic communication systems. However, the vector signals received by a single vector sensor have different channel characteristics, due to the azimuth angle between the source and receiver and the difference in the propagation angles of the multipath arrivals in each component, and therefore yield different communication performance. In this paper, we propose a channel parameter-based weighting method to improve the performance of an acoustic communication system using a single vector sensor. To verify the proposed method, we used communication data collected during the KOREX-17 (Korea Reverberation Experiment). For demodulation, a block-based time reversal technique, which is robust against time-varying channels, was utilized. The communication results verified the effectiveness of the channel parameter-based weighting method for an underwater communication system using a single vector sensor.
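
The following is a hedged sketch of the general idea of weighting the pressure and particle-velocity channels of a vector sensor before combining them. The paper's specific channel parameter is not reproduced here; channel-energy weighting is used purely as an illustrative stand-in, and all array shapes are toy values.

```python
# Hedged sketch: weight and combine the components of a single vector sensor
# according to a per-channel quality measure (channel energy as a stand-in).
import numpy as np

def channel_weights(channel_estimates):
    """channel_estimates: list of estimated impulse responses, one per component."""
    energies = np.array([np.sum(np.abs(h) ** 2) for h in channel_estimates])
    return energies / energies.sum()          # normalised weights

def weighted_combine(received, weights):
    """received: (num_components, num_samples) array of demodulator outputs."""
    return np.tensordot(weights, received, axes=1)

rng = np.random.default_rng(0)
h = [rng.normal(size=8) for _ in range(4)]    # p, vx, vy, vz channel estimates
r = rng.normal(size=(4, 1000))                # per-component soft outputs
combined = weighted_combine(r, channel_weights(h))
print(combined.shape)                         # (1000,)
```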

Study on CGM-LMS Hybrid Based Adaptive Beam Forming Algorithm for CDMA Uplink Channel (CDMA 상향채널용 CGM-LMS 접목 적응빔형성 알고리듬에 관한 연구)

  • Hong, Young-Jin
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.9C
    • /
    • pp.895-904
    • /
    • 2007
  • This paper proposes a robust sub-optimal smart antenna for a Code Division Multiple Access (CDMA) base station. It exploits the properties of both the Least Mean Square (LMS) algorithm and the Conjugate Gradient Method (CGM) for the beamforming process. The weight update takes place at the symbol level, after the PN correlators of the receiver module, under the assumption that the post-correlation desired signal power is far larger than the power of each interfering signal. The proposed algorithm is simple and has a computational load of only about five times the number of antenna elements (O(5N)) per snapshot. The output Signal to Interference plus Noise Ratio (SINR) of the proposed smart antenna system once the weight vector reaches the steady state has been examined. Computer simulations show that the proposed beamforming algorithm improves the SINR significantly compared to the single-antenna case. The convergence behavior of the weight vector has also been investigated, showing that the proposed hybrid algorithm performs better than CGM and LMS alone during the initial stage of the weight-update iteration. The Bit Error Rate (BER) characteristics of the proposed array are also shown as the processor input Signal to Noise Ratio (SNR) varies.
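
As a point of reference for the symbol-level weight update, here is a hedged sketch of the plain LMS half of the hybrid: the CGM stage and the paper's exact switching rule are not reproduced, and the step size, array size, and signal model are illustrative assumptions.

```python
# Hedged sketch of a symbol-level LMS beamforming weight update, the simpler
# half of the CGM-LMS hybrid described in the abstract.
import numpy as np

def lms_beamformer(snapshots, desired, mu=0.01):
    """snapshots: (num_symbols, N) post-correlation array outputs,
    desired: (num_symbols,) reference symbols, mu: LMS step size."""
    num_symbols, N = snapshots.shape
    w = np.zeros(N, dtype=complex)
    for k in range(num_symbols):
        x = snapshots[k]
        y = np.vdot(w, x)              # beamformer output for this symbol
        e = desired[k] - y             # error against the reference symbol
        w = w + mu * np.conj(e) * x    # gradient-descent weight update
    return w

rng = np.random.default_rng(1)
N, K = 4, 500
steering = np.exp(1j * np.pi * np.arange(N) * np.sin(np.deg2rad(20)))
symbols = rng.choice([-1.0, 1.0], size=K)            # BPSK reference symbols
x = symbols[:, None] * steering + 0.1 * (rng.normal(size=(K, N))
                                         + 1j * rng.normal(size=(K, N)))
w = lms_beamformer(x, symbols)
print(np.round(w, 2))
```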

A Development of SCM Model in Chemical Industry Including Batch Mode Operations (회분식 공정이 포함된 화학산업에서의 공급사슬 관리 모델 개발)

  • Park, Jeung Min;Ha, Jin-Kuk;Lee, Euy Soo
    • Korean Chemical Engineering Research
    • /
    • v.46 no.2
    • /
    • pp.316-329
    • /
    • 2008
  • Recently, increased attention to the processing of multiple, relatively low-quantity, high value-added products has led to the adoption of batch processes in chemical process industries such as pharmaceuticals, polymers, bio-chemicals, and foods. Because batch processes offer more opportunities for operational improvement than continuous processes, a great deal of effort has been made to enhance their productivity and operability. However, the chemical process industry faces a range of uncertainty factors, such as product demands, product prices, lead times for the supply of raw materials, production, and product distribution, and global competition has made it imperative for the process industries to manage their supply chains optimally. Supply chain management aims to integrate plants with their suppliers and customers so that they can be managed as a single entity, coordinating all input/output flows of materials and information so that products are produced and distributed in the right quantities, to the right locations, and at the right time. The objective of this study is to solve the purchase, distribution, production planning, and scheduling problem that minimizes the total costs of production, inventory, and transportation under uncertainty, and to develop an SCM model for the chemical industry that includes batch mode operations, so that the enterprise can respond to uncertainty. An integrated optimal planning and scheduling model for the manufacturing supply chain is also developed. The results show that the advantages of supply chain integration are improvements in quality as seen by customers and suppliers, order schedules, flexibility, cost reduction, and increases in sales and profits. Integrating the supply chain (production and distribution system) also generates significant savings by trading off the costs associated with the whole, rather than minimizing supply chain costs separately.
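
To illustrate the kind of cost trade-off at the core of such a model, here is a hedged two-period, single-product toy linear program that minimizes production, inventory, and transportation cost while meeting demand. It is illustrative only; the paper's model additionally covers batch scheduling and uncertainty, which are not reproduced here.

```python
# Hedged sketch: minimise production + inventory + transportation cost subject
# to demand and capacity. Decision variables: [x1, x2, i1, i2, s1, s2]
# (x = production, i = end-of-period inventory, s = shipments to customers).
import numpy as np
from scipy.optimize import linprog

c_prod, c_inv, c_trans = 2.0, 0.5, 1.0
demand, capacity = [100, 140], 120

c = [c_prod, c_prod, c_inv, c_inv, c_trans, c_trans]

# Inventory balance: i1 = x1 - s1 (start empty), i2 = i1 + x2 - s2
A_eq = [[-1, 0, 1, 0, 1, 0],
        [0, -1, -1, 1, 0, 1]]
b_eq = [0, 0]

# Shipments must cover demand: s_t >= d_t  (written as -s_t <= -d_t)
A_ub = [[0, 0, 0, 0, -1, 0],
        [0, 0, 0, 0, 0, -1]]
b_ub = [-demand[0], -demand[1]]

bounds = [(0, capacity)] * 2 + [(0, None)] * 4
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(np.round(res.x, 1), "total cost:", res.fun)
```

The second period's demand exceeds capacity, so the optimum pre-produces in period 1 and carries inventory, which is exactly the whole-chain trade-off the abstract argues for.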

Development of an Analysis and Evaluation Model for Bus Transit Route Network Design (버스 노선망 설계를 위한 평가모형 개발)

  • Han, Jong-Hak;Lee, Seung-Jae;Kim, Jong-Hyeong
    • Journal of Korean Society of Transportation
    • /
    • v.23 no.2
    • /
    • pp.161-172
    • /
    • 2005
  • This study develops a Bus Transit Route Analysis and Evaluation Model (BTRAEM) that can produce quantitative performance measures for bus transit route network design. So far, Korea has had few models that evaluate the variety of performance measures and service-quality indicators of concern to both transit users and operators, because of limitations in bus database systems and transit route network analysis algorithms. The BTRAEM in this research differs from previous approaches in that it employs a multiple-path transit trip assignment model that explicitly considers transfers and the different travel times after boarding. We also develop the input-output data structure and quantitative performance measures for the BTRAEM. In a numerical experiment applying the BTRAEM to the Mandl transit network, we obtained meaningful results on the performance measures of the bus transit route network. In the future, we expect the BTRAEM to provide good solutions for real transit networks.
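
One ingredient of a transfer-aware assignment is a path search in which changing routes carries an explicit cost. The sketch below illustrates only that idea, using a toy network and penalty values; it is not the Mandl network or the paper's actual assignment model.

```python
# Hedged sketch: shortest path over (stop, route) states with a transfer penalty.
import heapq

# links[(stop, route)] -> list of (next_stop, in_vehicle_minutes)
links = {
    ("A", "r1"): [("B", 5)], ("B", "r1"): [("C", 5)],
    ("A", "r2"): [("D", 4)], ("D", "r2"): [("C", 12)],
    ("B", "r3"): [("C", 3)],
}
TRANSFER_PENALTY = 6  # minutes added whenever the boarded route changes

def best_time(origin, destination):
    # State: (elapsed minutes, stop, route currently on; None = not yet boarded)
    heap = [(0, origin, None)]
    best = {}
    while heap:
        t, stop, route = heapq.heappop(heap)
        if stop == destination:
            return t
        if best.get((stop, route), float("inf")) <= t:
            continue
        best[(stop, route)] = t
        for (s, r), hops in links.items():
            if s != stop:
                continue
            penalty = 0 if route in (None, r) else TRANSFER_PENALTY
            for nxt, minutes in hops:
                heapq.heappush(heap, (t + penalty + minutes, nxt, r))
    return None

print(best_time("A", "C"))  # staying on r1 (10 min) beats transferring to r3
```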

A Study on the Serialized Event Sharing System for Multiple Telecomputing User Environments (원격.다원 사용자 환경에서의 순차적 이벤트 공유기에 관한 연구)

  • 유영진;오용선
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2003.05a
    • /
    • pp.344-350
    • /
    • 2003
  • In this paper, we propose a novel sharing method that orders the events occurring between users collaborating in a common telecomputing environment. We realize the sharing method with multimedia data to improve the effectiveness of co-working over a teleprocessing network. This sharing method improves the efficiency of collaborative projects such as remote education, tele-conferencing, and co-authoring of multimedia contents by offering users conveniences in presentation, group authoring, common management, and transient event production. In a conventional shared whiteboard system, all multimedia content segments must be authored with a dedicated program, and existing contents or programs cannot be reused. Moreover, ordering errors occur in teleprocessing because there is no mechanism for sequencing the input ordering of commands. Therefore, we develop a method of retrieving input and output events from the Windows system, together with message hooking technology for transmitting them between programs in the operating system. In addition, we realize a technology for distributing the processing results to all sharing users in the distributed computing environment without error. Our sharing technology should contribute to improving face-to-face co-working efficiency for multimedia content authoring, shared blackboard systems in remote education, and presentation display in video conferencing.
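
The serialization idea can be illustrated with a small sketch: a central sequencer stamps each user event with a monotonically increasing sequence number, and every participant applies events strictly in that order, buffering anything that arrives early. The names and event format are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch of serialized event sharing: global sequencing plus in-order apply.
import itertools
from dataclasses import dataclass, field
from typing import Any

@dataclass
class SharedEvent:
    seq: int
    user: str
    payload: Any = field(default=None)     # e.g. a hooked input/output message

class EventSequencer:
    """Central ordering point: stamps incoming events with a global sequence."""
    def __init__(self):
        self._counter = itertools.count(1)

    def stamp(self, user: str, payload: Any) -> SharedEvent:
        return SharedEvent(seq=next(self._counter), user=user, payload=payload)

class Participant:
    """Applies events strictly in sequence order, buffering early arrivals."""
    def __init__(self):
        self.next_seq, self.pending, self.applied = 1, {}, []

    def receive(self, event: SharedEvent) -> None:
        self.pending[event.seq] = event
        while self.next_seq in self.pending:        # apply in order, no gaps
            self.applied.append(self.pending.pop(self.next_seq))
            self.next_seq += 1

seq = EventSequencer()
e1 = seq.stamp("userA", "draw line")
e2 = seq.stamp("userB", "insert image")
p = Participant()
p.receive(e2)           # arrives out of order; buffered
p.receive(e1)           # now both are applied in sequence order
print([(e.seq, e.payload) for e in p.applied])
```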


Simultaneous Optimization of a KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.139-157
    • /
    • 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained on a randomly chosen feature subspace from the original feature set, and predictions from the ensemble members are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. The k parameter of the KNN base classifiers and the feature subsets selected for the base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and feature subsets of base classifiers in the ensemble. This study proposed a new ensemble method that improves upon the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve the prediction accuracy of the ensemble model. The proposed model was applied to a bankruptcy prediction problem using a real dataset from Korean companies. The research data included 1800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as the output variable. Of these, 24 financial ratios were selected using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other for avoiding overfitting. The prediction accuracy against the latter portion was used as the fitness value in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model.
A 10-fold cross-validation was implemented to compare the performances of the proposed model and other models. To evaluate the effectiveness of the proposed model, the classification accuracy of the proposed model was compared with that of other models. The Q-statistic values and average classification accuracies of base classifiers were investigated. The experimental results showed that the proposed model outperformed other models, such as the single model and random subspace ensemble model.
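
Below is a hedged sketch of the model family the abstract describes: a KNN random-subspace ensemble in which each base classifier has its own k and feature subset, which are the quantities the paper optimizes with a genetic algorithm. A full GA is omitted; a crude random search over (k, feature-subset) candidates stands in for it, and the data are synthetic stand-ins for the Korean financial-ratio dataset.

```python
# Hedged sketch: KNN random-subspace ensemble with per-member k and features,
# tuned by a simple random search (stand-in for the paper's genetic algorithm).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
X, y = make_classification(n_samples=600, n_features=24, n_informative=10,
                           random_state=42)
X_tr, X_hold, y_tr, y_hold = train_test_split(X, y, test_size=0.3,
                                              random_state=42)

def make_member(k, features):
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_tr[:, features], y_tr)
    return clf, features

def ensemble_predict(members, X_eval):
    votes = np.mean([clf.predict(X_eval[:, f]) for clf, f in members], axis=0)
    return (votes >= 0.5).astype(int)          # majority vote aggregation

def random_candidate(n_members=10, subset_size=8):
    return [(int(rng.integers(1, 15)),
             rng.choice(X.shape[1], size=subset_size, replace=False))
            for _ in range(n_members)]

# "Fitness" = accuracy on the held-out portion, as the abstract describes.
best_members, best_acc = None, 0.0
for _ in range(20):                            # stand-in for GA generations
    members = [make_member(k, f) for k, f in random_candidate()]
    acc = (ensemble_predict(members, X_hold) == y_hold).mean()
    if acc > best_acc:
        best_members, best_acc = members, acc
print(f"best holdout accuracy: {best_acc:.3f}")
```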

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.175-197
    • /
    • 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis is being actively conducted, showing remarkable results in various fields such as classification, summarization, and generation. Among the various text analysis fields, text classification is the most widely used technology in academia and industry. Text classification includes binary classification with one label among two classes, multi-class classification with one label among several classes, and multi-label classification with multiple labels among several classes. In particular, multi-label classification requires a different training method from binary and multi-class classification because of the characteristic of having multiple labels. In addition, as the number of labels and classes increases, the number of labels to be predicted grows, making prediction more difficult and performance improvement harder to achieve. To overcome these limitations, research on label embedding is being actively conducted, in which (i) the initially given high-dimensional label space is compressed into a low-dimensional latent label space, (ii) training is performed to predict the compressed labels, and (iii) the predicted labels are restored to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, since these techniques consider only linear relationships between labels or compress the labels by random transformations, they cannot capture non-linear relationships between labels and thus cannot create a latent label space that sufficiently contains the information of the original labels. Recently, there have been increasing attempts to improve performance by applying deep learning technology to label embedding. Label embedding using an autoencoder, a deep learning model that is effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding suffers a large amount of information loss when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space. This is related to the vanishing gradient problem that occurs during backpropagation. To address this, skip connections were devised: by adding a layer's input to its output, gradients are preserved during backpropagation, and efficient learning is possible even in deep networks. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using skip connections in autoencoders or in the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder to form a low-dimensional latent label space that reflects the information of the high-dimensional label space well. In addition, the proposed methodology was applied to actual paper keywords to derive the high-dimensional keyword label space and the low-dimensional latent label space.
Using these, we conducted an experiment in which the compressed keyword vector in the latent label space was predicted from the paper abstract, and the multi-label classification was evaluated by restoring the predicted keyword vector back to the original label space. As a result, the accuracy, precision, recall, and F1 score used as performance indicators were far superior for multi-label classification based on the proposed methodology than for traditional multi-label classification methods. This indicates that the low-dimensional latent label space derived through the proposed methodology reflected the information of the high-dimensional label space well, which ultimately improved the performance of multi-label classification itself. In addition, the utility of the proposed methodology was confirmed by comparing its performance across domain characteristics and numbers of dimensions of the latent label space.
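
As a hedged illustration of the architecture class described above, here is a minimal label autoencoder with additive skip connections inside the encoder and decoder (not across the bottleneck, which would bypass the compression). Layer sizes, latent dimension, and training details are illustrative assumptions, not the paper's exact architecture.

```python
# Hedged sketch: autoencoder for label embedding with skip connections in the
# encoder and decoder. All dimensions below are illustrative.
import torch
import torch.nn as nn

class SkipBlock(nn.Module):
    """Linear layer whose input is added back to its output (skip connection)."""
    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Linear(dim, dim)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.fc(x) + x)   # skip connection preserves gradients

class LabelAutoencoder(nn.Module):
    def __init__(self, label_dim=1000, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(label_dim, 256), nn.ReLU(),
            SkipBlock(256),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            SkipBlock(256),
            nn.Linear(256, label_dim),   # logits over the original label space
        )

    def forward(self, y):
        z = self.encoder(y)              # low-dimensional latent label vector
        return self.decoder(z), z

model = LabelAutoencoder()
y = torch.randint(0, 2, (8, 1000)).float()      # toy multi-hot keyword labels
recon_logits, z = model(y)
loss = nn.BCEWithLogitsLoss()(recon_logits, y)  # reconstruction objective
print(z.shape, float(loss))
```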

The Optimal Turbo Coded V-BLAST Technique in the Adaptive Modulation System corresponding to each MIMO Scheme (적응 변조 시스템에서 각 MIMO 기법에 따른 최적의 터보 부호화된 V-BLAST 기법)

  • Lee, Kyung-Hwan;Ryoo, Sang-Jin;Choi, Kwang-Wook;You, Cheol-Woo;Hong, Dae-Ki;Kim, Dae-Jin;Hwang, In-Tae;Kim, Cheol-Sung
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.44 no.6 s.360
    • /
    • pp.40-47
    • /
    • 2007
  • In this paper, we propose and analyze an Adaptive Modulation System with an optimal Turbo Coded V-BLAST (Vertical Bell Laboratories Layered Space-Time) technique that uses the extrinsic information from the MAP (Maximum A Posteriori) decoder with iterative decoding as the a priori probability in the two decoding procedures of V-BLAST: the ordering and the slicing. We also consider and compare the Adaptive Modulation System using the conventional Turbo Coded V-BLAST technique, which simply combines V-BLAST with a Turbo Coding scheme, and the Adaptive Modulation System using the conventional Turbo Coded technique decoded by the ML (Maximum Likelihood) decoding algorithm. We examine throughput performance and complexity. The performance comparison shows that the complexity of the proposed decoding algorithm is lower than that of the ML decoding algorithm but higher than that of the conventional V-BLAST decoding algorithm. However, the proposed system achieves better throughput performance than the conventional system over the whole SNR (Signal to Noise Ratio) range, and the results show that the proposed system achieves throughput performance close to that of the ML-decoded system. Specifically, simulation shows that the maximum throughput improvements for the respective MIMO schemes are about 350 kbps, 460 kbps, and 740 kbps compared to the conventional system. This suggests that the benefit of the proposed decoding algorithm increases as the number of system antennas increases.
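
For context on the two V-BLAST steps the abstract names, the sketch below implements plain zero-forcing ordered successive interference cancellation for BPSK: ordering picks the layer with the smallest post-filter noise gain, and slicing maps the filtered output to the nearest symbol. The paper's actual contribution, feeding MAP-decoder extrinsic information into these steps as a priori probabilities, is not reproduced here, and the channel and noise values are toy assumptions.

```python
# Hedged sketch: conventional V-BLAST detection (ZF-OSIC) showing the ordering
# and slicing steps for BPSK symbols.
import numpy as np

def vblast_zf_osic(H, y):
    """H: (rx, tx) channel matrix, y: (rx,) received vector. Returns BPSK symbols."""
    H = H.astype(complex).copy()
    y = y.astype(complex).copy()
    tx = H.shape[1]
    remaining = list(range(tx))
    detected = np.zeros(tx, dtype=complex)
    while remaining:
        W = np.linalg.pinv(H[:, remaining])           # zero-forcing filter
        # Ordering: pick the layer with the smallest post-filter noise gain.
        k = int(np.argmin(np.sum(np.abs(W) ** 2, axis=1)))
        layer = remaining[k]
        z = W[k] @ y
        s = np.sign(z.real) + 0j                      # slicing to nearest BPSK symbol
        detected[layer] = s
        y = y - H[:, [layer]].flatten() * s           # cancel the detected layer
        remaining.pop(k)
    return detected

rng = np.random.default_rng(3)
H = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)
x = rng.choice([-1.0, 1.0], size=4) + 0j
y = H @ x + 0.05 * (rng.normal(size=4) + 1j * rng.normal(size=4))
print(np.allclose(vblast_zf_osic(H, y), x))           # expected True at this low noise
```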