• Title/Summary/Keyword: 선별 알고리즘 (selection algorithm)

Search Results: 292

An Implementation of Automatic Genre Classification System for Korean Traditional Music (한국 전통음악 (국악)에 대한 자동 장르 분류 시스템 구현)

  • Lee Kang-Kyu;Yoon Won-Jung;Park Kyu-Sik
    • The Journal of the Acoustical Society of Korea, v.24 no.1, pp.29-37, 2005
  • This paper proposes an automatic genre classification system for Korean traditional music. The proposed system accepts a queried input music clip and classifies it into one of six musical genres (Royal Shrine Music, Classical Chamber Music, Folk Song, Folk Music, Buddhist Music, or Shamanist Music) based on its content. In general, content-based music genre classification consists of two stages: music feature vector extraction and pattern classification. For feature extraction, the system extracts 58-dimensional feature vectors, including the spectral centroid, spectral rolloff, and spectral flux based on the STFT, as well as coefficient-domain features such as LPC and MFCC; these features are then further optimized using the SFS method. For pattern (genre) classification, k-NN, Gaussian, GMM, and SVM algorithms are considered. In addition, the proposed system adopts the MFC method to settle the uncertainty in system performance caused by different query patterns (or portions). From the experimental results, we verify successful genre classification performance of over 97% for both the k-NN and SVM classifiers; however, the SVM classifier is almost three times faster than k-NN.
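
The three STFT-based timbre features named in this abstract can be sketched directly with NumPy. The signal, sampling rate, frame sizes, and rolloff percentage below are illustrative assumptions, not the paper's settings; the SFS optimization and the classifiers are omitted.

```python
# Minimal sketch of STFT-based timbre features: spectral centroid,
# spectral rolloff, and spectral flux, computed frame by frame.
import numpy as np

def stft_features(signal, frame_len=512, hop=256, sr=22050, rolloff_pct=0.85):
    """Return per-frame spectral centroid, rolloff, and flux."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hanning(frame_len)
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    centroids, rolloffs, fluxes = [], [], []
    prev_mag = None
    for i in range(n_frames):
        frame = signal[i * hop:i * hop + frame_len] * window
        mag = np.abs(np.fft.rfft(frame))
        total = mag.sum() + 1e-12
        centroids.append((freqs * mag).sum() / total)  # spectral centre of mass
        cum = np.cumsum(mag)
        # Frequency below which rolloff_pct of the spectral energy lies.
        rolloffs.append(freqs[np.searchsorted(cum, rolloff_pct * cum[-1])])
        # Frame-to-frame spectral change (zero for the first frame).
        fluxes.append(0.0 if prev_mag is None
                      else np.sqrt(((mag - prev_mag) ** 2).sum()))
        prev_mag = mag
    return np.array(centroids), np.array(rolloffs), np.array(fluxes)

# Sanity check: a pure 440 Hz tone should have its centroid near 440 Hz.
sr = 22050
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
cent, roll, flux = stft_features(tone, sr=sr)
```

In a real pipeline these per-frame values would be aggregated (e.g. mean and variance) into the fixed-length feature vector that the classifier consumes.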

A Method of Detecting the Aggressive Driving of Elderly Driver (노인 운전자의 공격적인 운전 상태 검출 기법)

  • Koh, Dong-Woo;Kang, Hang-Bong
    • KIPS Transactions on Software and Data Engineering, v.6 no.11, pp.537-542, 2017
  • Aggressive driving is a major cause of car accidents. Previous studies have mainly analyzed young drivers' aggressive driving tendencies, and only through pure clustering or classification techniques of machine learning. However, since elderly people have different driving habits due to their fragile physical condition, a new method, such as enhancing the characteristics of the driving data, is needed to properly analyze aggressive driving by elderly drivers. In this study, acceleration data collected from a smartphone in a driving vehicle are analyzed by a newly proposed ECA (Enhanced Clustering method for Acceleration data) technique, coupled with conventional clustering techniques (K-means clustering and the expectation-maximization (EM) algorithm). ECA selects high-intensity data from the cluster groups detected through K-means and EM in all of the subjects' data and models the characteristic data through scaled values. Using this method, unlike with pure clustering, aggressive driving data were collected for all young and elderly participants. We further found that K-means clustering has higher detection efficiency than the EM method. The K-means results also show that the young drivers' driving strength is 1.29 times higher than that of the elderly drivers. In conclusion, the proposed method can detect aggressive driving maneuvers in data from elderly drivers with low operating intensity, and can serve as the basis of a customized safe-driving system for elderly drivers. In the future, it will be possible to detect abnormal driving conditions and to use the collected data for early warnings to drivers.
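
The core steps described here (cluster the acceleration data, keep the high-intensity cluster, scale it) can be sketched as follows. The synthetic data, the deterministic K-means initialization, and the min-max scaling are illustrative assumptions, not the authors' exact ECA procedure.

```python
# Sketch: K-means (k=2) on acceleration magnitudes, then select and
# scale the high-intensity cluster, in the spirit of ECA.
import numpy as np

def kmeans_1d(x, k=2, iters=50):
    """Tiny 1-D K-means with deterministic (min..max) initialization."""
    centers = np.linspace(x.min(), x.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() for j in range(k)])
    return labels, centers

rng = np.random.default_rng(1)
calm = rng.normal(0.3, 0.05, 200)       # gentle driving: low |acceleration|
aggressive = rng.normal(2.5, 0.3, 20)   # hard braking / acceleration events
mags = np.concatenate([calm, aggressive])

labels, centers = kmeans_1d(mags)
high = int(np.argmax(centers))          # index of the high-intensity cluster
events = mags[labels == high]
# Scale the selected high-intensity data to [0, 1].
scaled = (events - events.min()) / (events.max() - events.min() + 1e-12)
```

Even when elderly drivers' events have lower absolute intensity, the scaling step puts their characteristic data on a comparable footing.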

Robust Orientation Estimation Algorithm of Fingerprint Images (노이즈에 강인한 지문 융선의 방향 추출 알고리즘)

  • Lee, Sang-Hoon;Lee, Chul-Han;Choi, Kyoung-Taek;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP, v.45 no.1, pp.55-63, 2008
  • The ridge orientations of a fingerprint image are crucial information for many parts of fingerprint recognition, such as enhancement, matching, and classification. It is therefore essential to extract them accurately, because they directly affect the performance of the system. Ridge orientation has two main properties: 1) a global characteristic (gradual change over the whole fingerprint) and 2) a local characteristic (abrupt change around core and delta points). When only the local characteristic is considered, the estimated ridge orientations are accurate around singular points but not robust to noise. When only the global characteristic is considered, the estimation is robust to noise but cannot represent the orientation around singular points. In this paper, we propose a novel method for estimating ridge orientation that represents the local characteristic precisely while remaining robust to noise. We reduce the noise caused by scars using iterative outlier rejection, and apply an adaptive measurement resolution in each fingerprint area to estimate the ridge orientation around singular points accurately. We evaluate the proposed method using synthetic fingerprints and the FVC 2002 DB, comparing the accuracy of the estimated ridge orientations, and evaluate the performance of a fingerprint authentication system on the FVC 2002 DB.
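
As background for this abstract, the standard gradient-based, block-wise ridge orientation estimator can be sketched in a few lines. This is the textbook least-squares formulation, not the paper's adaptive-resolution method, and the block size and synthetic image are illustrative assumptions.

```python
# Block-wise ridge orientation from image gradients: within each block,
# the dominant gradient direction is found in least-squares form, and
# the ridge direction is perpendicular to it.
import numpy as np

def block_orientation(img, block=16):
    """Dominant ridge orientation per block, in radians."""
    gy, gx = np.gradient(img.astype(float))   # gradients along y and x
    h, w = img.shape
    theta = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            sl = (slice(i * block, (i + 1) * block),
                  slice(j * block, (j + 1) * block))
            gxx = (gx[sl] ** 2).sum()
            gyy = (gy[sl] ** 2).sum()
            gxy = (gx[sl] * gy[sl]).sum()
            # Least-squares dominant gradient direction, rotated 90 degrees
            # to give the ridge direction.
            theta[i, j] = 0.5 * np.arctan2(2 * gxy, gxx - gyy) + np.pi / 2
    return theta

# Synthetic vertical ridges: intensity varies only along x, so every
# block's ridge direction should come out as pi/2 (vertical).
x = np.arange(64)
ridges = np.tile(np.sin(2 * np.pi * x / 8), (64, 1))
theta = block_orientation(ridges)
```

The paper's contribution sits on top of this baseline: iterative outlier rejection against scar noise, and finer blocks near singular points where the orientation changes abruptly.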

Development of Sludge Concentration Estimation Method using Neuro-Fuzzy Algorithm (뉴로-퍼지 알고리즘을 이용한 슬러지 농도 추정 기법 개발)

  • Jang, Sang-Bok;Lee, Ho-Hyun;Lee, Dae-Jong;Kweon, Jin-Hee;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems, v.25 no.2, pp.119-125, 2015
  • Concentration meters are widely used at purification, sewage treatment, and wastewater treatment plants to sort and transfer high-concentration sludge and to control the amount of chemical dosage. When a foreign substance is contained in the sludge, however, the attenuation of the ultrasonic wave may increase, or the wave may not reach the receiver. In that case, the meter reading is higher than the actual density or oscillates up and down. Problems such as sludge attachment and sensor damage have also made it difficult to automate the residuals treatment process. Multi-beam ultrasonic concentration meters have been developed to solve these problems, but the failure of a specific beam's measurement degrades the performance of the entire system. This paper proposes a method to improve the accuracy of sludge concentration estimation by choosing reliable sensor values and learning them with the proposed algorithm. A neuro-fuzzy model is chosen as the prediction algorithm and is validated through various experiments.
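
The paper's neuro-fuzzy predictor is not reproduced here; the sketch below illustrates only the first step it relies on, discarding unreliable beam readings before learning, using a median-absolute-deviation rule as a stand-in criterion. The threshold and the readings are illustrative assumptions, not the authors' method.

```python
# Robust selection of multi-beam readings: keep only readings close to
# the median, measured in robust (MAD-based) standard deviations.
import numpy as np

def reliable_readings(readings, k=3.0):
    """Keep readings within k robust standard deviations of the median."""
    r = np.asarray(readings, dtype=float)
    med = np.median(r)
    mad = np.median(np.abs(r - med)) * 1.4826   # MAD -> sigma for normal data
    if mad == 0.0:
        return r
    return r[np.abs(r - med) <= k * mad]

# Five beams agree near 3.2 %; one saturated beam reads far too high.
beams = [3.18, 3.22, 3.20, 3.19, 3.21, 9.75]
kept = reliable_readings(beams)
estimate = kept.mean()
```

Only the readings that survive this screening would then be fed to the learned concentration model.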

Personalized EPG Application using Automatic User Preference Learning Method (사용자 선호도 자동 학습 방법을 이용한 개인용 전자 프로그램 가이드 어플리케이션 개발)

  • Lim Jeongyeon;Jeong Hyun;Kim Munchurl;Kang Sanggil;Kang Kyeongok
    • Journal of Broadcast Engineering, v.9 no.4 s.25, pp.305-321, 2004
  • With the advent of digital broadcasting, audiences can access a large number of TV programs and related information through multiple channels on various media devices. This access gives a user many opportunities to sort the programs and select the best one. However, the resulting information overload inevitably demands much effort and patience from the user in finding his/her favorite programs. It is therefore useful to provide a personalized broadcasting service that helps the user find favorite programs automatically. In response to the growing demand for TV personalization, we introduce an automatic user preference learning algorithm that 1) analyzes a user's usage history of TV program contents, 2) extracts the user's watching pattern for a specific time and day, and 3) automatically calculates the user's preference; we also present our automatic TV program recommendation system based on MPEG-7 MDS (Multimedia Description Scheme, ISO/IEC 15938-5). For our experiments, we used TV audiences' watching histories, with ages, genders, and viewing times, obtained from AC Nielsen Korea. The experimental results show that the proposed automatic user preference learning algorithm, based on a Bayesian network, can effectively learn the user's preferences over the course of TV watching periods.
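
The time-and-day-conditioned preference idea described here can be sketched with a simple count-based Bayesian model. The toy history, the attributes, and the Laplace-smoothed naive-Bayes form are illustrative assumptions; the paper's system works on MPEG-7 MDS descriptions and a full Bayesian network.

```python
# Toy preference learner: estimate P(genre | day, time slot) from
# watching history using smoothed counts, naive-Bayes style.
from collections import Counter, defaultdict

history = [  # (day, time slot, genre) of past viewing; hypothetical data
    ("Sat", "evening", "movie"), ("Sat", "evening", "movie"),
    ("Sun", "evening", "movie"), ("Mon", "morning", "news"),
    ("Tue", "morning", "news"), ("Sat", "evening", "drama"),
]

genres = sorted({g for _, _, g in history})
prior = Counter(g for _, _, g in history)
cond = defaultdict(Counter)              # cond[(attribute, value)][genre]
for day, slot, g in history:
    cond[("day", day)][g] += 1
    cond[("slot", slot)][g] += 1

def preference(day, slot):
    """Normalized P(genre | day, slot) with Laplace smoothing."""
    n = len(history)
    scores = {}
    for g in genres:
        p = prior[g] / n
        for key in (("day", day), ("slot", slot)):
            p *= (cond[key][g] + 1) / (prior[g] + len(genres))
        scores[g] = p
    total = sum(scores.values())
    return {g: p / total for g, p in scores.items()}

ranked = preference("Sat", "evening")    # movies should rank highest here
```

An EPG application would sort the evening's program listings by these scores instead of by channel number.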

Fast Combinatorial Programs Generating Total Data (전수데이터를 생성하는 빠른 콤비나토리얼 프로그램)

  • Jang, Jae-Soo;Won, Shin-Jae;Cheon, Hong-Sik;Suh, Chang-Jin
    • Journal of the Korea Academia-Industrial cooperation Society, v.14 no.3, pp.1451-1458, 2013
  • This paper deals with programs and algorithms that generate the complete data sets satisfying the basic combinatorial requirements of combinations, permutations, and partial permutations (r-permutations), which are used in exhaustive-data testing and as simulation input. We searched for programs that implement combinations, permutations, and r-permutations, and selected the fastest program in each category; through further study, we then developed new programs that reduce the processing time. Our research proceeded as follows. First, hundreds of algorithms and programs were collected from the internet and corrected to be executable. Second, we measured the running time of all completed programs and selected a few fast ones. Third, the fast programs were analyzed in depth and pseudo-code versions are provided. We succeeded in developing two faster programs: the combination program saves running time by removing the recursive function, and the r-permutation program becomes faster by combining the best combination program with the best permutation program. According to our performance tests, the former and the latter improve running speed by 22-34% and 62-226%, respectively, compared with the fastest collected program. Based on the pseudo-code, the programs suggested in this study can easily be applied to particular cases, can predict the execution time spent on data processing, can determine the validity of the processing, and generate total data with minimum-access programming.
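
The speed-up the paper attributes to removing recursion can be illustrated with the textbook non-recursive combination generator: all C(n, r) combinations in lexicographic order from a plain loop. This is a generic sketch, not the authors' exact program.

```python
# Non-recursive generation of all r-combinations of 0..n-1 in
# lexicographic order, by incrementing the rightmost movable index.
def combinations_iterative(n, r):
    """Yield all r-combinations of 0..n-1 as tuples."""
    if r > n or r <= 0:
        return
    idx = list(range(r))                 # first combination: 0, 1, ..., r-1
    while True:
        yield tuple(idx)
        # Find the rightmost index that can still be incremented.
        i = r - 1
        while i >= 0 and idx[i] == n - r + i:
            i -= 1
        if i < 0:
            return                       # last combination emitted
        idx[i] += 1
        for j in range(i + 1, r):        # reset the tail after position i
            idx[j] = idx[j - 1] + 1

total = sum(1 for _ in combinations_iterative(10, 3))   # C(10, 3) items
```

Because the generator keeps one small index list instead of a call stack, it avoids the function-call overhead that the paper identifies in recursive versions.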

Artificial Intelligence-based Security Control Construction and Countermeasures (인공지능기반 보안관제 구축 및 대응 방안)

  • Hong, Jun-Hyeok;Lee, Byoung Yup
    • The Journal of the Korea Contents Association, v.21 no.1, pp.531-540, 2021
  • As cyber attacks and crimes increase exponentially and hacking becomes more intelligent and advanced, attack methods and routes evolve unpredictably and in real time. To reinforce responsiveness, this study proposes a method for developing an artificial-intelligence-based security control platform: a next-generation security system that uses AI to respond through self-learning, monitoring abnormal signs, and blocking attacks. The platform should be developed on four foundations: data collection, data analysis, next-generation security system operation, and security system management. The data collection step gathers external threat information on a big-data base and control system. The data analysis step pre-processes and formalizes the collected data to perform true/false-positive detection and abnormal-behavior analysis through deep-learning-based algorithms. The operation step runs a security system with an organic circulation structure of prevention, control, response, and analysis over the analyzed data, increasing the scope and speed of handling new threats and reinforcing the identification of normal and abnormal behaviors. The management step covers the security threat response system, harmful-IP management, detection-policy management, and the security business legal system. Through this, we seek a way to comprehensively analyze vast amounts of data and respond preemptively in a short time.

Bike Insurance Fraud Detection Model Using Balanced Randomforest Algorithm (균형 랜덤 포레스트를 이용한 이륜차 보험사기 적발 모형 개발)

  • Kim, Seunghoon;Lee, Soo Il;Kim, Tae ho
    • Journal of Digital Convergence, v.20 no.2, pp.241-250, 2022
  • Due to the COVID-19 pandemic, with the increase in 'untact' (contactless) services and an unstable household economy, bike insurance fraud is expected to surge, and fraud methods are becoming more complicated. However, a fraud detection model for bike insurance is absent. We deal with the issue of skewed class distribution and reflect the criteria of fraud detection experts, utilizing a balanced random forest algorithm to develop an efficient bike insurance fraud detection model. As a result, the predictive performance of the balanced random forest model is superior to that of the non-balanced model, and there is no significant difference between the variables used by the experts and those of the confirmatory model. The important variables for detecting fraud turn out to be the driver's age and gender, the correspondence between the insured and the driver, the amount of the self-repair claim, and the amount of bodily injury liability.
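
The skewed-class remedy at the heart of a balanced random forest is the balanced bootstrap: each tree's training sample draws equally from fraud and non-fraud cases, so the minority class is not drowned out. The sketch below shows only that sampling step with toy labels; the study's actual model and variables are as reported above.

```python
# Balanced bootstrap: sample n_minority cases from EACH class, with
# replacement, so every tree sees a 50/50 class mix.
import numpy as np

def balanced_bootstrap(y, rng):
    """Return indices forming a class-balanced bootstrap sample of y."""
    y = np.asarray(y)
    classes, counts = np.unique(y, return_counts=True)
    n = counts.min()                      # minority class size
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n, replace=True)
        for c in classes
    ])
    rng.shuffle(idx)
    return idx

rng = np.random.default_rng(0)
y = np.array([0] * 950 + [1] * 50)        # 5 % fraud: heavily skewed labels
idx = balanced_bootstrap(y, rng)
sample = y[idx]                           # 50 fraud + 50 non-fraud cases
```

A forest is then grown by fitting one decision tree per balanced sample and voting, which is why the balanced model detects minority-class fraud better than a plain bootstrap would.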

Verification of Ground Subsidence Risk Map Based on Underground Cavity Data Using DNN Technique (DNN 기법을 활용한 지하공동 데이터기반의 지반침하 위험 지도 작성)

  • Han Eung Kim;Chang Hun Kim;Tae Geon Kim;Jeong Jun Park
    • Journal of the Society of Disaster Information, v.19 no.2, pp.334-343, 2023
  • Purpose: In this study, the cavity data found through ground cavity exploration were combined with underground facility data to derive correlations, and a ground subsidence risk prediction map was verified based on an AI algorithm. Method: The study was conducted in three stages: investigation and collection of big data related to risk assessment; data pre-processing for AI analysis; and verification of the ground subsidence risk prediction map using the AI algorithm. Result: By analyzing the resulting map, the distribution of risk grades over three levels (emergency, priority, and general) could be confirmed for Busanjin-gu and Saha-gu. In addition, by arranging the predicted risk ratings for each section of the road network, it was confirmed that 3 out of 61 sections in Busanjin-gu and 7 out of 68 sections in Saha-gu included roads with emergency ratings. Conclusion: Based on the verified ground subsidence risk prediction map, citizens can be provided with a safe road environment by setting exploration sections according to risk level and conducting investigations.

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems, v.26 no.4, pp.173-198, 2020
  • For a long time, many academic studies have been conducted on predicting the success of customer campaigns, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded in various ways with the rapid growth of online business, companies carry out campaigns of diverse types on a scale that cannot be compared to the past. However, customers increasingly perceive campaigns as spam as fatigue from duplicate exposure grows. From a corporate standpoint, the effectiveness of the campaigns themselves is also decreasing: investment costs rise while actual success rates stay low. Accordingly, various studies are ongoing to improve campaign effectiveness in practice. A campaign system ultimately aims to increase the success rate of various campaigns by collecting and analyzing customer-related data and using them for campaigns. In particular, recent attempts have been made to predict campaign responses using machine learning. Because campaign data have many features, selecting appropriate features is very important. If all input data are used when classifying a large amount of data, learning time grows as the classification classes expand, so a minimal input data set must be extracted from the entire data. In addition, when a model is trained with too many features, prediction accuracy may degrade due to overfitting or correlation between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary step in analyzing a high-dimensional data set.
Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques, but when there are many features they suffer from poor classification performance and long learning times. Therefore, in this study, we propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing sequential SFFS method in searching for the feature subsets that underpin machine learning model performance, using the statistical characteristics of the data processed in the campaign system. Features with a large influence on performance are derived first, features with a negative effect are removed, and the sequential method is then applied, increasing search efficiency and enabling generalized prediction. The proposed model showed better search and prediction performance than the traditional greedy algorithm: campaign success prediction was higher than with the original data set, the greedy algorithm, a genetic algorithm (GA), and recursive feature elimination (RFE). In addition, the improved feature selection algorithm was found to help in analyzing and interpreting prediction results by providing the importance of the derived features, which include features already known to be statistically important, such as age, customer rating, and sales.
Unexpectedly, features such as the combined product name, the average three-month data consumption rate, and the last three months' wireless data usage, which campaign planners had rarely used to select campaign targets, were also selected as important features for campaign response. This confirms that base attributes can be very important features depending on the type of campaign, and makes it possible to analyze and understand the important characteristics of each campaign type.
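
The greedy baseline this study improves on, sequential forward selection, can be sketched briefly: starting from the empty set, repeatedly add the feature that most improves a scoring function. The least-squares R-squared scorer and the synthetic data below are stand-ins, illustrative assumptions rather than the study's campaign model.

```python
# Sequential forward selection (SFS): greedily grow the feature subset,
# one feature at a time, by maximizing a subset score.
import numpy as np

def sfs(X, y, k):
    """Greedily pick k feature indices maximizing the subset score."""
    def score(cols):
        # Stand-in scorer: R^2 of a least-squares fit on the chosen columns.
        A = np.column_stack([X[:, cols], np.ones(len(y))])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        return 1 - resid.var() / y.var()

    selected = []
    remaining = list(range(X.shape[1]))
    while len(selected) < k:
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3 * X[:, 1] - 2 * X[:, 4] + rng.normal(scale=0.1, size=200)  # only features 1 and 4 matter
picked = sfs(X, y, 2)
```

SFFS extends this loop with a backward step that can drop a previously added feature; the study's improvement further pre-ranks features by statistical influence before the sequential search begins.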