• Title/Summary/Keyword: learning algorithms


Principles and Current Trends of Neural Decoding (뉴럴 디코딩의 원리와 최신 연구 동향 소개)

  • Kim, Kwangsoo;Ahn, Jungryul;Cha, Seongkwang;Koo, Kyo-in;Goo, Yong Sook
    • Journal of Biomedical Engineering Research / v.38 no.6 / pp.342-351 / 2017
  • Neural decoding is a procedure that uses the spike trains fired by neurons to estimate features of the original stimulus. This is a fundamental step toward understanding how neurons talk to each other and, ultimately, how brains manage information. In this paper, neural decoding strategies are classified into three methodologies, each of which is explained: rate decoding, temporal decoding, and population decoding. Rate decoding is the earliest and simplest decoding method, in which the stimulus is reconstructed from the number of spikes in a given time window (i.e., spike rates). Since the spike count is a discrete number, the spike rate itself is often not continuous but quantized; therefore, if the stimulus is not static and simple, rate decoding may not provide a good estimate of the stimulus. Temporal decoding is the method in which the stimulus is reconstructed from the timing of the spikes. It can be useful even for rapidly changing stimuli, and our sensory systems are believed to use a temporal rather than a rate decoding strategy. Since the use of large numbers of neurons is one of the operating principles of most nervous systems, population decoding has advantages such as reduced uncertainty due to neuronal variability and the ability to represent multiple stimulus attributes simultaneously. This paper introduces the three decoding methods, shows how information theory can be used in neural decoding, and finally introduces machine learning-based algorithms for neural decoding.
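The rate and population decoding ideas above can be sketched in a few lines. This is an illustrative sketch only: the window boundaries and the set of preferred directions below are assumptions for the example, not anything specified in the paper.

```python
import math

def spike_rate(spike_times, t_start, t_end):
    """Rate decoding primitive: count spikes in a window, divide by its length."""
    count = sum(t_start <= t < t_end for t in spike_times)
    return count / (t_end - t_start)

def population_vector(preferred_dirs, rates):
    """Population decoding: each neuron 'votes' for its preferred direction
    (in radians), weighted by its firing rate; the angle of the vector sum
    estimates the stimulus direction."""
    x = sum(r * math.cos(d) for d, r in zip(preferred_dirs, rates))
    y = sum(r * math.sin(d) for d, r in zip(preferred_dirs, rates))
    return math.atan2(y, x)
```

For example, four neurons tuned to 0, 90, 180 and 270 degrees, with the 90-degree neuron firing fastest, yield a decoded direction near 90 degrees.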

Locally Linear Embedding for Face Recognition with Simultaneous Diagonalization (얼굴 인식을 위한 연립 대각화와 국부 선형 임베딩)

  • Kim, Eun-Sol;Noh, Yung-Kyun;Zhang, Byoung-Tak
    • Journal of KIISE / v.42 no.2 / pp.235-241 / 2015
  • Locally linear embedding (LLE) [1] is a manifold learning algorithm that preserves the inner product values between high-dimensional data points when embedding them into a low-dimensional space. LLE embeds data points lying on the same subspace close together in the low-dimensional space, because such points have significant inner product values. On the other hand, if data points are orthogonal to each other, they are embedded far apart in the low-dimensional space, even when they are in close proximity in the high-dimensional space. Meanwhile, it is well known that facial images of the same person under varying illumination lie in a low-dimensional linear subspace [2]. In this study, we propose an improved LLE method for the face recognition problem. The method exploits the characteristic of LLE that data points are embedded entirely separately when they are orthogonal to each other. To accomplish this, the subspaces spanned by each class are forced to be mutually orthogonal, using the simultaneous diagonalization (SD) technique. Experimental results show that the proposed method dramatically improves both the embedding results and the classification performance.
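As a toy illustration of the orthogonality goal (not the simultaneous diagonalization procedure itself, which operates on class scatter matrices), classical Gram-Schmidt shows how basis vectors can be made mutually orthogonal so that their inner products, and hence their proximity under an inner-product-preserving embedding, vanish:

```python
def dot(u, v):
    """Inner product of two vectors given as lists."""
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Orthonormalize a list of vectors with classical Gram-Schmidt:
    subtract from each vector its projections onto the basis built so far."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            c = dot(w, b)
            w = [wi - c * bi for wi, bi in zip(w, b)]
        norm = dot(w, w) ** 0.5
        if norm > 1e-12:  # skip vectors that are linearly dependent
            basis.append([wi / norm for wi in w])
    return basis
```

After orthonormalization, cross inner products are zero, which is the condition the SD step enforces between class subspaces.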

Prediction of Lung Cancer Based on Serum Biomarkers by Gene Expression Programming Methods

  • Yu, Zhuang;Chen, Xiao-Zheng;Cui, Lian-Hua;Si, Hong-Zong;Lu, Hai-Jiao;Liu, Shi-Hai
    • Asian Pacific Journal of Cancer Prevention / v.15 no.21 / pp.9367-9373 / 2014
  • In the diagnosis of lung cancer, rapid distinction between small cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC) tumors is very important. Serum markers, including lactate dehydrogenase (LDH), C-reactive protein (CRP), carcino-embryonic antigen (CEA), neuron specific enolase (NSE) and Cyfra21-1, are reported to reflect lung cancer characteristics. In this study, lung tumors were classified on the basis of biomarkers (measured in 120 NSCLC and 60 SCLC patients) by building optimal joint biomarker models with a powerful computerized tool, gene expression programming (GEP). GEP is a learning algorithm that combines the advantages of genetic programming (GP) and genetic algorithms (GA). It focuses on the relationships between variables in data sets, builds models to explain these relationships, and has been successfully used in formula finding and function mining. As a basis for defining a GEP environment for SCLC and NSCLC prediction, three explicit predictive models were constructed. CEA and NSE are frequently used lung cancer markers in clinical trials, while CRP, LDH and Cyfra21-1 also have significant meaning in lung cancer; therefore, based on CEA and NSE we set up three GEP models: GEP1 (CEA, NSE, Cyfra21-1), GEP2 (CEA, NSE, LDH), and GEP3 (CEA, NSE, CRP). The best classification result was obtained when CEA, NSE and Cyfra21-1 were combined: 128 of 135 subjects in the training set and 40 of 45 subjects in the test set were classified correctly, giving accuracy rates of 94.8% on the training set and 88.9% on the test set. With GEP2 the accuracy decreased by 1.5% and 6.6% on the training and test sets, respectively, and with GEP3 by 0.82% and 4.45%. Serum Cyfra21-1 is a useful and sensitive serum biomarker for discriminating between NSCLC and SCLC, and GEP modeling is a promising and excellent tool in the diagnosis of lung cancer.
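GEP's defining step is decoding a linear chromosome (Karva notation) breadth-first into an expression tree before evaluating its fitness. The sketch below shows only that genotype-to-phenotype step; the terminal symbols a, b, c are hypothetical stand-ins for biomarker values such as CEA or NSE, and the evolutionary loop itself is omitted:

```python
import operator

# Function symbols with (arity, implementation); any other symbol is a terminal
FUNCS = {'+': (2, operator.add), '-': (2, operator.sub), '*': (2, operator.mul)}

def eval_karva(chrom, terminals):
    """Decode a Karva-notation chromosome breadth-first, then evaluate it.
    `terminals` maps terminal symbols to their numeric values."""
    pos = 0
    def next_node():
        nonlocal pos
        sym = chrom[pos]
        pos += 1
        return {'sym': sym, 'children': []}
    root = next_node()
    queue = [root]
    while queue:  # breadth-first expansion: fill each node's arity in order
        node = queue.pop(0)
        arity = FUNCS[node['sym']][0] if node['sym'] in FUNCS else 0
        for _ in range(arity):
            child = next_node()
            node['children'].append(child)
            queue.append(child)
    def evaluate(node):
        if node['sym'] in FUNCS:
            fn = FUNCS[node['sym']][1]
            left, right = (evaluate(c) for c in node['children'])
            return fn(left, right)
        return terminals[node['sym']]
    return evaluate(root)
```

For instance, the chromosome `"*+abc"` decodes to the tree `(b + c) * a`.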

Evolutionally optimized Fuzzy Polynomial Neural Networks Based on Fuzzy Relation and Genetic Algorithms: Analysis and Design (퍼지관계와 유전자 알고리즘에 기반한 진화론적 최적 퍼지다항식 뉴럴네트워크: 해석과 설계)

  • Park, Byoung-Jun;Lee, Dong-Yoon;Oh, Sung-Kwun
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.2 / pp.236-244 / 2005
  • In this study, we introduce a new topology of Fuzzy Polynomial Neural Networks (FPNN) based on fuzzy relations and an evolutionarily optimized Multi-Layer Perceptron, discuss a comprehensive design methodology, and carry out a series of numerical experiments. The construction of the evolutionarily optimized FPNN (EFPNN) exploits fundamental technologies of Computational Intelligence. The architecture of the resulting EFPNN arises from the synergistic use of a genetic-optimization-driven hybrid system generated by combining rule-based Fuzzy Neural Networks (FNN) with Polynomial Neural Networks (PNN). The FNN contributes the premise part of the overall rule-based structure of the EFPNN, while the consequence part is designed using PNN. For the consequence part, the development of the genetically optimized PNN (gPNN) relies on two general optimization mechanisms: structural optimization is realized via GAs, whereas for parametric optimization we proceed with standard least-squares-based learning. To evaluate the performance of the EFPNN, the models are tested on several representative numerical examples. A comparative analysis shows that the proposed EFPNN models achieve higher accuracy and better predictive capability than other intelligent models presented previously.
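The parametric step mentioned in the abstract, standard least-squares learning of the polynomial consequence part, reduces in the simplest one-input linear case to the normal equations. This is a generic illustration of that step, not the paper's gPNN implementation:

```python
def least_squares_line(xs, ys):
    """Fit y = a + b*x by the normal equations (closed-form least squares)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    denom = n * sxx - sx * sx  # assumes xs are not all identical
    b = (n * sxy - sx * sy) / denom
    a = (sy - b * sx) / n
    return a, b
```

In the full gPNN, the same principle is applied to the coefficients of each node's polynomial, while the GA searches over the network structure.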

Base Location Prediction Algorithm of Serial Crimes based on the Spatio-Temporal Analysis (시공간 분석 기반 연쇄 범죄 거점 위치 예측 알고리즘)

  • Hong, Dong-Suk;Kim, Joung-Joon;Kang, Hong-Koo;Lee, Ki-Young;Seo, Jong-Soo;Han, Ki-Joon
    • Journal of Korea Spatial Information System Society / v.10 no.2 / pp.63-79 / 2008
  • With the recent development of advanced GIS and complex spatial analysis technologies, more sophisticated techniques are required to support the advanced knowledge needed to solve geographical or spatial problems in various decision support systems. In addition, the need for research on scientific crime investigation and forensic science is increasing, particularly at law enforcement agencies and investigative institutions, for efficient investigation and crime prevention. There has been active research on geographic profiling, which predicts a base location, such as a criminal's residence, by analyzing the spatial patterns of serial crimes. However, because previous research on geographic profiling uses only simple statistical methods for spatial pattern analysis and does not apply a variety of spatial and temporal analysis technologies to serial crimes, its prediction accuracy is low. Therefore, this paper identifies a typology of the spatio-temporal patterns of serial crimes according to the spatial distribution of crime sites and the temporal distribution of crime occurrences, and proposes STA-BLP (Spatio-Temporal Analysis based Base Location Prediction), an algorithm that predicts the base location of serial crimes more accurately based on these patterns. STA-BLP improves prediction accuracy by considering the anisotropic pattern of serial crimes committed by criminals who prefer specific directions on a crime trip, as well as the learning effect of criminals through repeated movement along the same route. In addition, it can predict base locations more accurately for serial crimes committed from multiple bases, using local prediction for the crime sites included in a cluster and global prediction for all crime sites. Through a variety of experiments, we demonstrate the superiority of STA-BLP over previous algorithms in terms of prediction accuracy.
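A minimal baseline for base-location prediction is the centroid of the crime sites. The weighted variant below only gestures at STA-BLP's idea of emphasizing some sites; the actual anisotropy and learning-effect terms are defined in the paper, and the weights here are hypothetical:

```python
def predict_base(sites):
    """Naive baseline: the centroid of (x, y) crime sites as a base estimate."""
    n = len(sites)
    return (sum(x for x, _ in sites) / n, sum(y for _, y in sites) / n)

def predict_base_weighted(sites, weights):
    """Illustrative refinement: weight each site (e.g., by direction
    preference or recency) before averaging."""
    total = sum(weights)
    x = sum(w * x for (x, _), w in zip(sites, weights)) / total
    y = sum(w * y for (_, y), w in zip(sites, weights)) / total
    return (x, y)
```

In a multiple-base setting, the same estimator would be applied per cluster (local prediction) and over all sites (global prediction).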


Recognition Method of Korean Abnormal Language for Spam Mail Filtering (스팸메일 필터링을 위한 한글 변칙어 인식 방법)

  • Ahn, Hee-Kook;Han, Uk-Pyo;Shin, Seung-Ho;Yang, Dong-Il;Roh, Hee-Young
    • Journal of Advanced Navigation Technology / v.15 no.2 / pp.287-297 / 2011
  • As electronic mail is widely used for its convenience and speed of communication, the amount of spam mail carrying malicious content and advertisements is increasing, causing serious social and economic problems. A number of approaches have been proposed to alleviate the impact of spam, and they can be categorized into pre-acceptance and post-acceptance methods. Post-acceptance methods, which are based on words or sentences, include Bayesian filters, collaborative filtering, and e-mail prioritization. However, spammers keep altering these characteristics of their messages to evade filtering systems. In the case of Korean, such abnormal usages can be far more numerous than in other languages, because each syllable is composed of chosung, jungsung, and jongsung (initial, medial, and final jamo). Existing formal expressions and learning algorithms are limited in their ability to cope with these changes promptly and efficiently. We therefore present a method for recognizing Korean abnormal language (Koral) to improve the accuracy and efficiency of filtering systems. The method operates at the syllable level rather than the word level and is based on the Smith-Waterman algorithm. Experiments on filter keywords and e-mail extracted from a mail server confirmed that Koral is recognized accurately according to the similarity level, with time and space costs within acceptable limits.
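The paper's core similarity measure is Smith-Waterman local alignment applied at the syllable level. A standard implementation of the scoring recurrence is sketched below; ASCII strings are used for illustration, and the match/mismatch/gap scores are assumptions, not the paper's parameters:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score between two symbol sequences.
    Any cell may restart at 0, so the best *local* similarity is found."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

For example, an obfuscated keyword such as "v1agra" still scores highly against "viagra", which is exactly the robustness that letter-substitution tricks are meant to defeat in exact-match filters.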

Studies of Automatic Dental Cavity Detection System as an Auxiliary Tool for Diagnosis of Dental Caries in Digital X-ray Image (디지털 X-선 영상을 통한 치아우식증 진단 보조 시스템으로써 치아 와동 자동 검출 프로그램 연구)

  • Huh, Jangyong;Nam, Haewon;Kim, Juhae;Park, Jiman;Shin, Sukyoung;Lee, Rena
    • Progress in Medical Physics / v.26 no.1 / pp.52-58 / 2015
  • An automated dental cavity detection program was developed for a new intra-oral dental X-ray imaging device, as an auxiliary diagnosis system to help a dentist identify dental caries at an early stage and make an accurate diagnosis. The program rests on two algorithms: an image segmentation technique to discriminate between a dental cavity and normal tooth tissue, and a computational method to analyze features of a tooth image and exploit them for the detection of dental cavities. In the present study, we first evaluated how accurately the DRLSE (Distance Regularized Level Set Evolution) method extracts the contour surrounding a dental cavity. To evaluate the ability of the developed algorithm to detect dental cavities automatically, seven tooth phantoms, from incisor to molar, were fabricated containing cavities of various shapes, and the phantom images were analyzed with the developed algorithm. Except for two cavities whose contours were only partially identified, the contours of 12 cavities were correctly discriminated by the program, which demonstrates the practical feasibility of the automatic dental lesion detection algorithm. However, a more efficient and enhanced algorithm is required for application to actual dental diagnosis, since the shapes and conditions of dental caries differ between individuals and can be complicated. In the future, the system will be improved by adding pattern recognition or machine learning based algorithms that can incorporate information about tooth status.
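DRLSE itself evolves a contour by a partial differential equation and is beyond a few lines. As a toy stand-in for the segmentation stage only, the sketch below thresholds a grayscale grid and groups dark pixels (candidate cavity regions) into 4-connected components; the threshold and the grid are illustrative assumptions:

```python
from collections import deque

def extract_regions(image, threshold):
    """Group pixels darker than `threshold` into 4-connected regions
    using breadth-first flood fill; returns a list of pixel lists."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    regions = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] < threshold and not seen[r][c]:
                q, region = deque([(r, c)]), []
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and not seen[ny][nx]
                                and image[ny][nx] < threshold):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                regions.append(region)
    return regions
```

Each returned region corresponds to one candidate lesion whose contour a level-set method would then refine.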

Improved Focused Sampling for Class Imbalance Problem (클래스 불균형 문제를 해결하기 위한 개선된 집중 샘플링)

  • Kim, Man-Sun;Yang, Hyung-Jeong;Kim, Soo-Hyung;Cheah, Wooi Ping
    • The KIPS Transactions:PartB / v.14B no.4 / pp.287-294 / 2007
  • Many classification algorithms for real-world data suffer from the class imbalance problem. To address it, various methods have been proposed, such as altering the training balance and designing better sampling strategies. Previous methods, however, do not adequately reflect the distribution of the input data and its constraints. In this paper, we propose a focused sampling method that outperforms them. The key problem is to select a useful subset from the full training set. To obtain it, the proposed method divides the data into regions according to scores computed from the distribution of a SOM (self-organizing map) over the input data. The scores, sorted in ascending order, represent the distribution of the input data, which may in turn represent the characteristics of the whole data set. A new training data set is obtained by eliminating the less useful data located in the region between an upper bound and a lower bound. The proposed method gives a better, or at least similar, classification accuracy compared to previous approaches, along with several additional benefits: a reduced class imbalance ratio, smaller training sets, and prevention of over-fitting. The method was tested with a kNN classifier; an experimental result on the ecoli data set shows that it achieves up to 2.27 times higher precision than the other methods.
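The score-band elimination described above can be sketched as follows. The quantile bounds are hypothetical placeholders for the paper's upper and lower bounds, and the SOM-based scoring is replaced here by caller-supplied scores:

```python
def focused_sample(data, scores, low_q=0.3, high_q=0.7):
    """Rank samples by score, then drop the middle band between the
    lower and upper quantile bounds, keeping the rest as the new training set."""
    ranked = sorted(zip(data, scores), key=lambda p: p[1])
    n = len(ranked)
    lo, hi = int(n * low_q), int(n * high_q)
    # Keep the low-score and high-score tails; eliminate ranks [lo, hi)
    return [x for x, _ in ranked[:lo] + ranked[hi:]]
```

With scores in hand, the reduced set can be fed directly to any classifier such as kNN.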

Students' Informal Knowledge of Division in Elementary School Mathematics (자연수의 나눗셈에 관한 초등학교 학생의 비형식적 지식)

  • Park, Hyoun-Mi;Kang, Wan
    • Journal of Elementary Mathematics Education in Korea / v.10 no.2 / pp.221-242 / 2006
  • To teach division more effectively, it is necessary to know students' informal knowledge before they learn formal knowledge about division. The purpose of this study is to investigate students' informal knowledge of division and to draw meaningful suggestions for linking it to formal knowledge of division in elementary school mathematics. Accordingly, two research questions were set up: (1) What informal knowledge do students have before they learn formal knowledge about division in elementary school mathematics? (2) How do the thinking strategies of students who have learned formal knowledge differ from those of students who have not? The conclusions are as follows. First, the informal knowledge of division of natural numbers used by grade 1 and 2 students varies from using concrete materials to formal operations. Second, students who have learned formal knowledge use less varied strategies, because formal knowledge limits their problem-solving methods. Third, acquisition of an algorithm is not a precondition for solving problems. Fourth, formal knowledge should be connected to informal knowledge when teaching mathematics. Fifth, it is necessary to teach not only algorithms but also various strategies in the area of number and operation.
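One informal strategy commonly observed in studies of this kind, division as repeated subtraction, can be written down directly. This is an illustrative sketch of such a strategy, not one reported verbatim in the paper:

```python
def divide_by_repeated_subtraction(dividend, divisor):
    """Informal division strategy: subtract the divisor repeatedly
    until what remains is smaller than it. Assumes divisor > 0."""
    quotient, remainder = 0, dividend
    while remainder >= divisor:
        remainder -= divisor
        quotient += 1
    return quotient, remainder
```

For example, 17 divided by 5 yields a quotient of 3 with remainder 2, the same result the formal long-division algorithm produces.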


A Study on Improved Image Matching Method using the CUDA Computing (CUDA 연산을 이용한 개선된 영상 매칭 방법에 관한 연구)

  • Cho, Kyeongrae;Park, Byungjoon;Yoon, Taebok
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.4 / pp.2749-2756 / 2015
  • Recently, as the quality and volume of image data have increased, the time required to process images has become a problem, so image processing algorithms need to be accelerated. In this study, a CUDA (Compute Unified Device Architecture) based recognition system was implemented and compared, in terms of computing speed and performance, with an OpenMP implementation on a traditional CPU. The system learns character data and recognizes characters by matching input images against the learned data within a detected region; images of the English alphabet, each of constant and standardized size, were used for learning, and an image matching method that computes the degree of match was implemented. When the CUDA computing technique on the GPGPU (General Purpose GPU) programming platform was compared with OpenMP using the four cores of an Intel i5 2500, the expected four-fold speedup over the existing CPU was not achieved, owing to the overhead of partitioning and merging the data. The proposed method improves the speed by about 3.2 times, and the parallel processing on the graphics card, compared with sequential CPU-based processing, was confirmed to yield a performance gain of about 21 times.
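The matching workload described above is embarrassingly parallel: each candidate template position can be scored independently, which is what CUDA threads exploit. The serial sketch below shows the sum-of-absolute-differences scoring that would be distributed across GPU threads; it is a generic illustration, not the paper's implementation:

```python
def sad(a, b):
    """Sum of absolute differences between two equal-size pixel patches."""
    return sum(abs(x - y) for row_a, row_b in zip(a, b)
                          for x, y in zip(row_a, row_b))

def match_template(image, template):
    """Exhaustive template matching: slide the template over the image and
    return the (row, col) with the lowest SAD plus that score. On a GPU,
    each candidate position would be scored by its own thread; here we
    scan serially."""
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            patch = [row[c:c + tw] for row in image[r:r + th]]
            score = sad(patch, template)
            if best is None or score < best:
                best, best_pos = score, (r, c)
    return best_pos, best
```

The partition-and-merge overhead the abstract mentions corresponds to splitting the candidate positions across workers and reducing their per-position scores back to a single minimum.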