• Title/Summary/Keyword: Feature vectors


Improved SIM Algorithm for Contents-based Image Retrieval (내용 기반 이미지 검색을 위한 개선된 SIM 방법)

  • Kim, Kwang-Baek
    • Journal of Intelligence and Information Systems
    • /
    • v.15 no.2
    • /
    • pp.49-59
    • /
    • 2009
  • Content-based image retrieval methods are in general more objective and effective than text-based image retrieval algorithms, since they search by color and texture and avoid annotating every image. SIM (Self-organizing Image browsing Map) is a content-based image retrieval algorithm that uses only the browsable mapping results obtained by a SOM (Self-Organizing Map). However, a SOM may select the wrong BMU in the learning phase if there are similar nodes whose color information is distorted by lighting intensity or object movement in the image. Such images may be mapped into other grouping nodes, decreasing the search rate. In this paper, we propose an improved SIM that uses the HSV color model with color quantization to extract image features. To avoid the learning error mentioned above, our SOM consists of two layers. In the learning phase, SOM layer 1 takes the color feature vectors as input. After layer 1 is trained, its connection weights become the input of SOM layer 2 and re-learning occurs. With this multi-layered SOM learning, we avoid mapping errors among similar nodes with different color information. In search, we feed the query image vector to SOM layer 2 and select the nodes of SOM layer 1 that connect to the chosen BMU of SOM layer 2. Experiments verified that the proposed SIM outperforms the original SIM and effectively avoids mapping errors.
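
A minimal sketch of the two-layer SOM scheme the abstract describes, assuming quantized HSV color histograms as feature vectors; grid sizes, learning schedule, and the toy data are illustrative, not the paper's settings:

```python
import numpy as np

def train_som(data, grid_w, grid_h, epochs=20, lr0=0.5, sigma0=2.0):
    """Train a SOM; returns node weights of shape (grid_w * grid_h, dim)."""
    rng = np.random.default_rng(0)
    n_nodes = grid_w * grid_h
    w = rng.random((n_nodes, data.shape[1]))
    coords = np.array([(i % grid_w, i // grid_w) for i in range(n_nodes)])
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 1e-3
        for x in data:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))   # best matching unit
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))               # neighborhood kernel
            w += lr * h[:, None] * (x - w)
    return w

# Layer 1 learns the HSV color feature vectors directly.
features = np.random.random((200, 48))        # placeholder feature vectors
w1 = train_som(features, 8, 8)

# Layer 2 re-learns on the layer-1 connection weights, so similar layer-1
# nodes with slightly distorted color information share one layer-2 BMU.
w2 = train_som(w1, 4, 4)

# Search: find the layer-2 BMU of a query, then return the layer-1 nodes
# closest to that BMU's weight vector.
query = features[0]
bmu2 = np.argmin(np.linalg.norm(w2 - query, axis=1))
layer1_nodes = np.argsort(np.linalg.norm(w1 - w2[bmu2], axis=1))[:5]
```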


A Semi-Noniterative VQ Design Algorithm for Text Dependent Speaker Recognition (문맥종속 화자인식을 위한 준비반복 벡터 양자기 설계 알고리즘)

  • Lim, Dong-Chul;Lee, Haing-Sei
    • The KIPS Transactions:PartB
    • /
    • v.10B no.1
    • /
    • pp.67-72
    • /
    • 2003
  • In this paper, we study the enhancement of VQ (Vector Quantization) design for text-dependent speaker recognition. Concretely, we present a method that builds a vector quantization codebook without iterative learning, so that the computational complexity is dramatically reduced. The proposed semi-noniterative VQ design contrasts with the existing design method, which runs the iterative learning algorithm for every training speaker. Its characteristics are as follows. First, the proposed method performs iterative learning only for the reference speaker, whereas the existing method performs it for every speaker. Second, the quantization regions of a non-reference speaker are identical to those of the reference speaker, and each quantization point of a non-reference speaker is the optimal point for that speaker's statistical distribution. In the numerical experiments, we use 12th-order mel-cepstrum feature vectors of 20 speakers and compare against the existing method while varying the codebook size from 2 to 32. The recognition rate of the proposed method is 100% for a suitable codebook size and adequate training data, equal to that of the existing method. The proposed semi-noniterative VQ design is therefore a new alternative that reduces computational complexity while maintaining the recognition rate.
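
A minimal sketch of the semi-noniterative idea under the stated assumptions (mel-cepstrum feature matrices per speaker); the k-means call stands in for the iterative LBG-style design, and all names and data are illustrative:

```python
import numpy as np
from scipy.cluster.vq import kmeans2   # iterative codebook design (LBG-like)

def reference_codebook(ref_features, size):
    """Iterative design, performed once, for the reference speaker only."""
    codebook, _ = kmeans2(ref_features, size, minit='++')
    return codebook

def noniterative_codebook(ref_codebook, spk_features):
    """Reuse the reference speaker's quantization regions; place each code
    vector at the centroid of this speaker's own data in that region, i.e.
    the optimal point for the speaker's distribution - no iteration."""
    d = np.linalg.norm(spk_features[:, None, :] - ref_codebook[None, :, :], axis=2)
    region = np.argmin(d, axis=1)      # nearest reference code vector
    cb = ref_codebook.copy()
    for k in range(len(ref_codebook)):
        members = spk_features[region == k]
        if len(members):
            cb[k] = members.mean(axis=0)
    return cb

rng = np.random.default_rng(0)
ref = rng.standard_normal((500, 12))        # reference speaker's mel-cepstra
spk = rng.standard_normal((300, 12)) + 0.3  # a non-reference speaker
cb = noniterative_codebook(reference_codebook(ref, 16), spk)
```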

A Comparative Experiment on Dimensional Reduction Methods Applicable for Dissimilarity-Based Classifications (비유사도-기반 분류를 위한 차원 축소방법의 비교 실험)

  • Kim, Sang-Woon
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.3
    • /
    • pp.59-66
    • /
    • 2016
  • This paper presents an empirical evaluation of dimensionality reduction strategies by which dissimilarity-based classifications (DBC) can be implemented efficiently. In DBC, classification is based not on feature measurements of individual objects (a set of attributes) but on a suitable dissimilarity measure among the objects (pair-wise object comparisons). One problem of DBC is the high dimensionality of the dissimilarity space when a large number of objects are treated. To address this issue, two kinds of solutions have been proposed in the literature: prototype selection (PS)-based methods and dimension reduction (DR)-based methods. In this paper, instead of utilizing PS-based or DR-based methods, a way of performing DBC in eigenspaces (ES) is considered and empirically compared. In ES-based DBC, classification is performed as follows: first, a set of principal eigenvectors is extracted from the training data set using principal component analysis; second, an eigenspace is spanned using a subset of the extracted eigenvectors; third, after measuring distances among the projected objects in the eigenspace using $l_p$-norms as the dissimilarity, classification is performed. The experimental results, obtained using the nearest neighbor rule with artificial and real-life benchmark data sets, demonstrate that when the dimensionality of the eigenspace is selected appropriately, the performance of ES-based DBC improves over the PS-based and DR-based methods in terms of classification accuracy.
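
A minimal sketch of the three ES-based DBC steps as the abstract lists them; the number of eigenvectors, the choice of $p$, and the toy data are illustrative:

```python
import numpy as np

def es_dbc_predict(X_train, y_train, X_test, n_eig=10, p=2):
    # 1) principal eigenvectors from the training data (PCA)
    mu = X_train.mean(axis=0)
    cov = np.cov((X_train - mu).T)
    vals, vecs = np.linalg.eigh(cov)
    E = vecs[:, np.argsort(vals)[::-1][:n_eig]]   # top-n_eig eigenvectors

    # 2) span the eigenspace: project objects onto the selected eigenvectors
    P_train = (X_train - mu) @ E
    P_test = (X_test - mu) @ E

    # 3) l_p-norm dissimilarities in the eigenspace + 1-NN rule
    d = np.sum(np.abs(P_test[:, None, :] - P_train[None, :, :]) ** p,
               axis=2) ** (1 / p)
    return y_train[np.argmin(d, axis=1)]

rng = np.random.default_rng(1)
Xtr, ytr = rng.standard_normal((100, 20)), rng.integers(0, 2, 100)
pred = es_dbc_predict(Xtr, ytr, rng.standard_normal((5, 20)))
```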

New QECCs for Multiple Flip Error Correction (다중플립 오류정정을 위한 새로운 QECCs)

  • Park, Dong-Young;Kim, Baek-Ki
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.14 no.5
    • /
    • pp.907-916
    • /
    • 2019
  • In this paper, we propose a new five-qubit multiple bit flip code that can completely protect the target qubit from all multiple bit flip errors using only CNOT gates. The proposed multiple bit flip codes can easily be extended to multiple phase flip codes by embedding Hadamard gate pairs in the root error section, as in the conventional single bit flip code. The multiple bit flip code and multiple phase flip code in this paper share the state-vector error information through four auxiliary qubits. These four-qubit state vectors reflect the fact that all multiple flip errors with Pauli X and Z corrections share a specific root error. Using this feature, the paper shows that a low-cost implementation is possible, despite being a QECC designed for multiple-flip error correction, by batch-processing the detection and correction of Pauli X and Z root errors with only three CNOT gates. The five-qubit multiple bit flip code and multiple phase flip code proposed in this paper achieve a 100% error correction rate and a 50% error discrimination rate. All QECCs presented in this paper were verified using the QCAD simulator.
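
The paper's five-qubit code itself is not reproduced here; as a hedged illustration of the building block it extends, the sketch below simulates the conventional single bit flip (three-qubit repetition) code with CNOT-only encoding and syndrome extraction on a NumPy statevector:

```python
import numpy as np

N = 5  # qubits 0-2: data, qubits 3-4: syndrome ancillas

def mask(q):            # bitmask of qubit q (qubit 0 = most significant)
    return 1 << (N - 1 - q)

def cnot(state, c, t):
    out = state.copy()
    for i in range(len(state)):
        if i & mask(c):
            out[i ^ mask(t)] = state[i]
    return out

def x(state, q):        # Pauli X (bit flip) on qubit q
    out = np.empty_like(state)
    for i in range(len(state)):
        out[i ^ mask(q)] = state[i]
    return out

# Encode a|0> + b|1> into a|000> + b|111> with two CNOTs
a, b = 0.6, 0.8
state = np.zeros(2 ** N, dtype=complex)
state[0], state[mask(0)] = a, b
state = cnot(state, 0, 1)
state = cnot(state, 0, 2)

state = x(state, 1)     # inject a bit-flip error on data qubit 1

# Syndrome extraction: ancilla 3 <- Z0Z1 parity, ancilla 4 <- Z1Z2 parity
for c, t in [(0, 3), (1, 3), (1, 4), (2, 4)]:
    state = cnot(state, c, t)

# The syndrome is deterministic, so read it off any nonzero basis state
i = int(np.flatnonzero(np.abs(state) > 1e-12)[0])
syndrome = ((i & mask(3)) > 0, (i & mask(4)) > 0)
flip = {(True, False): 0, (True, True): 1, (False, True): 2}.get(syndrome)
if flip is not None:
    state = x(state, flip)   # classically controlled correction
# Data qubits are now restored: amplitudes a, b on |000>, |111> (ancillas |11>)
```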

Field Observation of Morphological Response to Storm Waves and Sensitivity Analysis of XBeach Model at Beach and Crescentic Bar (폭풍파랑에 따른 해빈과 호형 사주 지형변화 현장 관측 및 XBeach 모델 민감도 분석)

  • Jin, Hyeok;Do, Kideok;Chang, Sungyeol;Kim, In Ho
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.32 no.6
    • /
    • pp.446-457
    • /
    • 2020
  • Crescentic sand bars are a common morphological feature in the coastal zone of eastern Korea, and their rhythmic patterns persist except during high-wave-energy events. However, four consecutive typhoons that directly or indirectly affected the East Sea of Korea from September to October 2019 drove the formation of a longshore-uniform sandbar and an overall shoreline retreat (approx. 2 m), although repetitive erosion and accretion patterns remained near the shoreline. XBeach, widely used to predict storm erosion on beaches, is utilized to investigate the morphological response to the series of storms and the impact of each storm (NE-E wave incidence). Several calibration processes for improved XBeach modeling are conducted using recently reported calibration methods, and the optimal calibration set obtained is applied to the numerical simulation. Using observed waves, tides, and pre- and post-storm bathymetry with the optimal calibration set as XBeach input, XBeach successfully reproduces the erosion and accretion patterns near MSL (BSS = 0.77 for the erosion profile, 0.87 for the accretion profile) and the observed formation of the longshore-uniform sandbar. Analysis of the simulated total sediment transport vectors and bed level changes at each storm's peak Hs shows that the incident wave direction contributes considerably to the behavior of the crescentic sandbar. Moreover, not only the wave height but also the storm duration affects the magnitude of the sediment transport. However, the model results suggest that additional calibration is needed to predict the exact crest position of the bar and the bed level changes across the inner surf zone.
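
A minimal sketch of the Brier Skill Score (BSS) quoted above, assuming the common morphodynamic form with the pre-storm bed as the zero-skill baseline:

```python
import numpy as np

def brier_skill_score(z_model, z_obs, z_initial):
    """BSS = 1 - <(z_model - z_obs)^2> / <(z_initial - z_obs)^2>.
    1 = perfect reproduction; 0 = no better than assuming no bed change."""
    return 1.0 - (np.mean((z_model - z_obs) ** 2)
                  / np.mean((z_initial - z_obs) ** 2))
```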

Denoising Self-Attention Network for Mixed-type Data Imputation (혼합형 데이터 보간을 위한 디노이징 셀프 어텐션 네트워크)

  • Lee, Do-Hoon;Kim, Han-Joon;Chun, Joonghoon
    • The Journal of the Korea Contents Association
    • /
    • v.21 no.11
    • /
    • pp.135-144
    • /
    • 2021
  • Recently, data-driven decision-making has become a key technology leading the data industry, and the machine learning behind it requires high-quality training datasets. However, real-world data contains missing values for various reasons, which degrades the performance of prediction models learned from such poor training data. Therefore, to build high-performance models from real-world datasets, many studies on automatically imputing missing values in the initial training data have been conducted. Many conventional machine-learning-based imputation techniques are time-consuming and cumbersome because they apply only to numeric columns or build an individual predictive model for each column. This paper therefore proposes a new data imputation technique called Denoising Self-Attention Network (DSAN), which can be applied to mixed-type datasets containing both numerical and categorical columns. DSAN learns robust feature representation vectors by combining self-attention and denoising techniques, and it automatically imputes multiple missing variables in parallel through multi-task learning. To verify the validity of the proposed technique, data imputation experiments were performed after arbitrarily generating missing values in several mixed-type training datasets. We then demonstrate the validity of the proposed technique by comparing the performance of binary classification models trained on the imputed data, together with the errors between the original and imputed values.
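
A minimal PyTorch sketch of the denoising self-attention idea, not the authors' exact DSAN: categorical columns are embedded, numeric columns projected, inputs corrupted (denoising), and per-column heads reconstruct every column in parallel (multi-task); all sizes are illustrative:

```python
import torch
import torch.nn as nn

class DenoisingAttentionImputer(nn.Module):
    def __init__(self, n_num, cat_cards, d=32):
        super().__init__()
        self.num_proj = nn.ModuleList([nn.Linear(1, d) for _ in range(n_num)])
        self.cat_emb = nn.ModuleList([nn.Embedding(c, d) for c in cat_cards])
        enc = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=2)
        self.num_heads = nn.ModuleList([nn.Linear(d, 1) for _ in range(n_num)])
        self.cat_heads = nn.ModuleList([nn.Linear(d, c) for c in cat_cards])

    def forward(self, x_num, x_cat, noise=0.1):
        cols = [p(x_num[:, i:i + 1]) for i, p in enumerate(self.num_proj)]
        cols += [e(x_cat[:, j]) for j, e in enumerate(self.cat_emb)]
        h = torch.stack(cols, dim=1)            # (batch, n_cols, d)
        h = h + noise * torch.randn_like(h)     # denoising corruption
        h = self.encoder(h)                     # self-attention feature vectors
        num_out = [head(h[:, i]) for i, head in enumerate(self.num_heads)]
        n = len(self.num_heads)
        cat_out = [head(h[:, n + j]) for j, head in enumerate(self.cat_heads)]
        return num_out, cat_out                 # reconstruct all columns at once

model = DenoisingAttentionImputer(n_num=3, cat_cards=[5, 7])
num_out, cat_out = model(torch.randn(8, 3), torch.randint(0, 5, (8, 2)))
```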

Comparative Study of Anomaly Detection Accuracy of Intrusion Detection Systems Based on Various Data Preprocessing Techniques (다양한 데이터 전처리 기법 기반 침입탐지 시스템의 이상탐지 정확도 비교 연구)

  • Park, Kyungseon;Kim, Kangseok
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.11
    • /
    • pp.449-456
    • /
    • 2021
  • An intrusion detection system is a technology that detects abnormal behavior violating security, spotting abnormal operations and preventing system attacks. Existing intrusion detection systems have been designed using statistical analysis or anomaly detection techniques on traffic patterns, but modern systems generate traffic that differs from existing systems due to rapidly evolving technologies, so the existing methods have limitations. To overcome this limitation, studies on intrusion detection methods applying various machine learning techniques are being actively conducted. In this study, a comparative study was conducted on data preprocessing techniques that can improve the accuracy of anomaly detection, using NGIDS-DS (Next Generation IDS Dataset), generated by simulation equipment, for traffic in various network environments. Padding and sliding windows were used for data preprocessing, and an oversampling technique with an Adversarial Auto-Encoder (AAE) was applied to solve the imbalance between the normal and abnormal data rates. In addition, a performance improvement in detection accuracy was confirmed by using skip-gram, one of the Word2Vec techniques, to extract feature vectors from the preprocessed sequence data. PCA-SVM and GRU were used as models for the comparative experiments, and the results showed the best performance when sliding windows, skip-gram, AAE, and GRU were applied.
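
A minimal sketch of the sliding-window plus skip-gram preprocessing described above, using gensim and assuming host event traces as token sequences; window and vector sizes are illustrative, not the paper's settings:

```python
from gensim.models import Word2Vec

def sliding_windows(seq, size=5, step=1):
    """Cut a long event sequence into fixed-length windows (padded if short)."""
    seq = seq + ["PAD"] * max(0, size - len(seq))   # padding
    return [seq[i:i + size] for i in range(0, len(seq) - size + 1, step)]

traces = [["open", "read", "read", "write", "close", "open", "read"]]
windows = [w for t in traces for w in sliding_windows(t, size=5)]

# sg=1 selects skip-gram; each event gets a dense feature vector that can
# feed a downstream detector such as a GRU.
model = Word2Vec(windows, vector_size=32, window=2, sg=1, min_count=1)
vec = model.wv["read"]
```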

Statistical Techniques to Detect Sensor Drifts (센서드리프트 판별을 위한 통계적 탐지기술 고찰)

  • Seo, In-Yong;Shin, Ho-Cheol;Park, Moon-Ghu;Kim, Seong-Jun
    • Journal of the Korea Society for Simulation
    • /
    • v.18 no.3
    • /
    • pp.103-112
    • /
    • 2009
  • In a nuclear power plant (NPP), periodic sensor calibrations are required to ensure sensors are operating correctly. However, only a few sensors are actually found to be faulty at calibration. For the safe operation of an NPP and the reduction of unnecessary calibration, on-line calibration monitoring is needed. In this paper, principal-component-based auto-associative support vector regression (PCSVR) is proposed for sensor signal validation in an NPP. It combines the merits of principal component analysis (PCA) for extracting predominant feature vectors with those of AASVR, which easily represents complicated processes that are difficult to capture with analytical and mechanistic models. Using real plant startup data from Kori Nuclear Power Plant Unit 3, the SVR hyperparameters were optimized by response surface methodology (RSM). Statistical techniques are then integrated with PCSVR for failure detection: the residuals between the estimated and measured signals are tested by the Shewhart control chart, the exponentially weighted moving average (EWMA), the cumulative sum (CUSUM), and the generalized likelihood ratio test (GLRT) to detect whether a sensor has failed. This study shows that the GLRT is a strong candidate for detecting sensor drift.
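
A minimal sketch of two of the residual tests named above, EWMA and two-sided CUSUM, applied to residuals between estimated and measured signals; parameters and thresholds are illustrative, and in practice the baseline mean and sigma would come from fault-free data:

```python
import numpy as np

def ewma_alarm(residuals, lam=0.2, L=3.0):
    """Exponentially weighted moving average control chart."""
    mu, sigma = residuals.mean(), residuals.std()
    z, alarms = mu, []
    for t, r in enumerate(residuals):
        z = lam * r + (1 - lam) * z
        limit = L * sigma * np.sqrt(lam / (2 - lam)
                                    * (1 - (1 - lam) ** (2 * (t + 1))))
        alarms.append(abs(z - mu) > limit)
    return np.array(alarms)

def cusum_alarm(residuals, k=0.5, h=5.0):
    """Tabular two-sided CUSUM; k and h in units of the residual sigma."""
    mu, sigma = residuals.mean(), residuals.std()
    s_hi = s_lo = 0.0
    alarms = []
    for r in residuals:
        z = (r - mu) / sigma
        s_hi = max(0.0, s_hi + z - k)   # drift upward
        s_lo = max(0.0, s_lo - z - k)   # drift downward
        alarms.append(s_hi > h or s_lo > h)
    return np.array(alarms)

res = np.random.default_rng(2).normal(0, 1, 500)
res[300:] += 1.5                        # simulated drift onset
print(ewma_alarm(res).argmax(), cusum_alarm(res).argmax())
```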

Dynamic Nonlinear Prediction Model of Univariate Hydrologic Time Series Using the Support Vector Machine and State-Space Model (Support Vector Machine과 상태공간모형을 이용한 단변량 수문 시계열의 동역학적 비선형 예측모형)

  • Kwon, Hyun-Han;Moon, Young-Il
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.26 no.3B
    • /
    • pp.279-289
    • /
    • 2006
  • The reconstruction of low-dimensional nonlinear behavior from hydrologic time series has been an active area of research in the last decade. In this study, we present applications of a powerful state-space reconstruction methodology using Support Vector Machines (SVM) to the Great Salt Lake (GSL) volume. SVMs are machine learning systems that use a hypothesis space of linear functions in a kernel-induced higher-dimensional feature space. SVMs are optimized by minimizing a bound on a generalized error (risk) measure, rather than just the mean square error over a training set. The utility of this SVM regression approach is demonstrated through applications to short-term forecasts of the biweekly GSL volume. The SVM-based reconstruction is used to develop time series forecasts for multiple lead times ranging from two weeks to several months. The reliability of the algorithm in learning and forecasting the dynamics is tested using split-sample sensitivity analyses, with particular interest in forecasting extreme states. Unlike previously reported methodologies, SVMs are able to extract the dynamics using only a few past observed data points (support vectors) out of the training examples. In terms of statistical measures, the SVM-based prediction model demonstrated encouraging and promising results in short-term prediction. The SVM method presented in this study thus offers a competitive methodology for forecasting hydrologic time series.
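
A minimal sketch of the state-space reconstruction plus SVM regression approach: time-delay embedding of a univariate series followed by SVR forecasting. The embedding dimension, delay, lead time, and hyperparameters are illustrative, not the paper's RSM-tuned values:

```python
import numpy as np
from sklearn.svm import SVR

def delay_embed(series, dim=4, tau=1, lead=1):
    """Build (X, y): rows of X are delay vectors, y the value `lead` steps ahead."""
    X, y = [], []
    for t in range((dim - 1) * tau, len(series) - lead):
        X.append([series[t - j * tau] for j in range(dim)])
        y.append(series[t + lead])
    return np.array(X), np.array(y)

# Toy stand-in for a biweekly volume record
series = np.sin(np.linspace(0, 20 * np.pi, 600)) + 0.05 * np.random.randn(600)
X, y = delay_embed(series, dim=4, tau=2, lead=1)
split = int(0.8 * len(X))               # split-sample test

model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:split], y[:split])
forecast = model.predict(X[split:])     # the fitted support vectors are the
                                        # few past points carrying the dynamics
```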

Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)

  • Yun, Yeoil;Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.141-166
    • /
    • 2019
  • Recently, channels like social media and SNS create enormous amounts of data, and among all kinds of data the portion of unstructured text data has increased geometrically. It is difficult to examine all of this text, so it is important to access it rapidly and grasp its key points. Driven by this need for efficient understanding, many studies on text summarization for handling and using tremendous amounts of text data have been proposed. In particular, many summarization methods using machine learning and artificial intelligence algorithms, called "automatic summarization", have lately been proposed to generate summaries objectively and effectively. However, almost all text summarization methods proposed to date construct summaries focused on the frequency of contents in the original documents, and such summaries struggle to contain low-weight subjects that are mentioned less in the original text. If summaries include only the major subjects, bias occurs and information is lost, making it hard to ascertain every subject the documents cover. To avoid this bias, one can summarize with balance among the topics of a document so that all its subjects can be ascertained, but an unbalanced distribution among the subjects still remains. To retain the balance of subjects in a summary, it is necessary to consider the proportion of every subject the documents originally have and to allocate portions to subjects equally, so that even sentences of minor subjects are sufficiently included. In this study, we propose a "subject-balanced" text summarization method that secures balance among all subjects and minimizes the omission of low-frequency subjects. For subject-balanced summaries, we use two summary evaluation criteria, "completeness" and "succinctness": completeness means the summary should fully include the contents of the original documents, and succinctness means the summary has minimal internal duplication. The proposed method has three phases. The first phase constructs subject term dictionaries. Topic modeling is used to calculate topic-term weights, which indicate the degree to which each term is related to each topic. From the derived weights, highly related terms for every topic can be identified, and the subjects of documents can be found from topics composed of terms with similar meanings. A few terms that represent each subject well, called "seed terms", are then selected. However, these terms are too few to explain each subject sufficiently, so additional terms similar to the seed terms are needed for a well-constructed subject dictionary. Word2Vec is used for this word expansion: word vectors are created by Word2Vec modeling, and from those vectors the similarity between all terms can be derived using cosine similarity. The higher the cosine similarity between two terms, the stronger their relationship. Terms with high similarity to the seed terms of each subject are selected, and after filtering the expanded terms, the subject dictionary is finally constructed. The next phase allocates a subject to every sentence of the original documents. To grasp the contents of all sentences, frequency analysis is first conducted with the specific terms composing the subject dictionaries. The TF-IDF weight of each subject is then calculated, making it possible to figure out how much each sentence explains each subject. However, TF-IDF weights can grow without bound, so the TF-IDF weights of every subject per sentence are normalized to values between 0 and 1. Then, by allocating to each sentence the subject with the maximum TF-IDF weight among all subjects, sentence groups are finally constructed for each subject. The last phase is summary generation. Sen2Vec is used to measure similarity between subject sentences, forming a similarity matrix, and by repetitive sentence selection it is possible to generate a summary that fully includes the contents of the original documents while minimizing internal duplication. For the evaluation of the proposed method, 50,000 TripAdvisor reviews are used to construct the subject dictionaries and 23,087 reviews are used to generate summaries. A comparison between summaries from the proposed method and frequency-based summaries verifies that summaries from the proposed method better retain the balance of all the subjects the documents originally have.
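
A minimal sketch of the sentence-to-subject allocation step described above, using toy subject dictionaries: per-subject TF-IDF mass is computed from the dictionary terms, normalized to [0, 1] per sentence, and each sentence takes the subject with the maximum weight:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy subject dictionaries standing in for the seed-term + Word2Vec-expanded
# dictionaries the paper builds.
subject_dict = {
    "room": ["bed", "clean", "quiet", "view"],
    "food": ["breakfast", "restaurant", "menu", "tasty"],
}
sentences = ["The bed was clean and the view amazing.",
             "Breakfast menu was tasty but small."]

vocab = sorted({w for ws in subject_dict.values() for w in ws})
vec = TfidfVectorizer(vocabulary=vocab)
tfidf = vec.fit_transform(sentences).toarray()
cols = vec.vocabulary_

# Sum the TF-IDF mass of each subject's dictionary terms per sentence
scores = np.stack([tfidf[:, [cols[w] for w in ws]].sum(axis=1)
                   for ws in subject_dict.values()], axis=1)

# Normalize to [0, 1] per sentence, then allocate by the maximum weight
norm = scores / (scores.max(axis=1, keepdims=True) + 1e-12)
allocation = np.array(list(subject_dict))[norm.argmax(axis=1)]
```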