• Title/Summary/Keyword: Bayesian

Search Results: 2,691

Comparison of Development times of Myzus persicae (Hemiptera: Aphididae) between the Constant and Variable Temperatures and its Temperature-dependent Development Models (항온과 변온조건에서 복숭아혹진딧물의 발육비교 및 온도 발육모형)

  • Kim, Do-Ik;Choi, Duck-Soo;Ko, Suk-Ju;Kang, Beom-Ryong;Park, Chang-Gyu;Kim, Seon-Gon;Park, Jong-Dae;Kim, Sang-Soo
    • Korean journal of applied entomology
    • /
    • v.51 no.4
    • /
    • pp.431-438
    • /
    • 2012
  • The developmental time of Myzus persicae nymphs was studied in the laboratory (six constant temperatures from 15 to $30^{\circ}C$, 50~60% RH, and a photoperiod of 14L:10D) and in a green-pepper plastic house. In the laboratory, mortality of M. persicae was high in the first (6.7~13.3%) and second (6.7%) instar nymphs at low temperatures, and high in the third (17.8%) and fourth (17.8%) instar nymphs at high temperatures. Mortality reached 66.7% at $33^{\circ}C$ in the laboratory and at $26.7^{\circ}C$ in the plastic house. In the plastic house, the total developmental time was longest at $14.6^{\circ}C$ (14.4 days) and shortest at $26.7^{\circ}C$ (6.0 days). The lower threshold temperature for the total nymphal stage was $3.0^{\circ}C$ in the laboratory, and the thermal constant required for the nymphal stage was 111.1 DD. The relationship between developmental rate and temperature was best described by the nonlinear Logan-6 model, which had the lowest Akaike information criterion (AIC) and Bayesian information criterion (BIC) values. The distribution of completion of each developmental stage was well described by the 3-parameter Weibull function ($r^2=0.95{\sim}0.97$), and the model accurately matched predicted and observed occurrences. The model is therefore considered suitable for predicting the optimal spray time for Myzus persicae.
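The model-selection step described above (fitting a nonlinear Logan-6 curve and comparing it against a simpler alternative by AIC) can be sketched as follows. All parameter values and observed rates below are hypothetical illustrations, not the paper's data:

```python
import math

def logan6(T, psi, rho, T_max, delta):
    """Logan-6 development rate (1/day) at temperature T (deg C)."""
    return psi * (math.exp(rho * T) - math.exp(rho * T_max - (T_max - T) / delta))

def linear_rate(T, a, b):
    """Linear degree-day model: r = a + b*T."""
    return a + b * T

def aic(rss, n, k):
    """AIC for a least-squares fit with k parameters: n*ln(RSS/n) + 2k."""
    return n * math.log(rss / n) + 2 * k

# Hypothetical observed development rates (1/days) at six constant temperatures.
temps = [15, 18, 21, 24, 27, 30]
rates = [0.052, 0.071, 0.098, 0.131, 0.160, 0.156]

# Hypothetical fitted parameters (not from the paper).
params_logan = dict(psi=0.01, rho=0.11, T_max=33.0, delta=2.5)
params_lin = dict(a=-0.064, b=0.0078)

rss_logan = sum((r - logan6(T, **params_logan)) ** 2 for T, r in zip(temps, rates))
rss_lin = sum((r - linear_rate(T, **params_lin)) ** 2 for T, r in zip(temps, rates))

# Logan-6 yields the lower AIC for these illustrative data because it
# captures the rate drop-off near the lethal upper temperature.
print("AIC Logan-6:", round(aic(rss_logan, len(temps), 4), 2))
print("AIC linear :", round(aic(rss_lin, len(temps), 2), 2))
```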

Comparison of Temperature-dependent Development Model of Aphis gossypii (Hemiptera: Aphididae) under Constant Temperature and Fluctuating Temperature (실내 항온과 온실 변온조건에서 목화진딧물의 온도 발육비교)

  • Kim, Do-Ik;Ko, Suk-Ju;Choi, Duck-Soo;Kang, Beom-Ryong;Park, Chang-Gyu;Kim, Seon-Gon;Park, Jong-Dae;Kim, Sang-Soo
    • Korean journal of applied entomology
    • /
    • v.51 no.4
    • /
    • pp.421-429
    • /
    • 2012
  • The developmental time of Aphis gossypii was studied in the laboratory (six constant temperatures from 15 to $30^{\circ}C$, 50~60% RH, and a photoperiod of 14L:10D) and in a cucumber plastic house. In the laboratory, mortality of A. gossypii was high in the 2nd (20.0%) and 3rd (13.3%) instars at low temperatures, but high in the 3rd (26.7%) and 4th (33.3%) instars at high temperatures. In the plastic house, mortality was high in the 1st and 2nd instars, while no mortality occurred in the 4th instar at low temperature. The total developmental period was longest at $15^{\circ}C$ (12.2 days) in the laboratory and shortest at $28.5^{\circ}C$ (4.09 days) in the plastic house. The lower threshold temperature for the total nymphal stage was $6.8^{\circ}C$ in the laboratory, and the thermal constant required for the total nymphal stage was 111.1 DD. The relationship between developmental rate and temperature was best fitted by the nonlinear Logan-6 model, which had the lowest Akaike information criterion (AIC) and Bayesian information criterion (BIC) values. The distribution of completion of each developmental stage was well described by the 3-parameter Weibull function ($r^2=0.89{\sim}0.96$), and the model accurately matched predicted and observed outcomes. The model is therefore considered usable for predicting the optimal spray time for Aphis gossypii.
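The lower-threshold and thermal-constant results above imply a simple degree-day prediction rule, sketched here with the abstract's threshold (6.8°C) and thermal constant (111.1 DD) but hypothetical daily temperatures:

```python
LOWER_THRESHOLD = 6.8     # deg C, lower threshold of the total nymphal stage
THERMAL_CONSTANT = 111.1  # degree-days required for nymphal development

def development_day(daily_mean_temps, t0=LOWER_THRESHOLD, k=THERMAL_CONSTANT):
    """Return (day, accumulated DD) when accumulated degree-days above the
    lower threshold first reach the thermal constant, or (None, total)."""
    total = 0.0
    for day, temp in enumerate(daily_mean_temps, start=1):
        total += max(0.0, temp - t0)
        if total >= k:
            return day, total
    return None, total

# Hypothetical greenhouse daily means held at 28.5 C:
# (28.5 - 6.8) = 21.7 DD/day, so the constant is reached on day 6.
days, dd = development_day([28.5] * 10)
print(days, round(dd, 1))  # 6 130.2
```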

Predictive Clustering-based Collaborative Filtering Technique for Performance-Stability of Recommendation System (추천 시스템의 성능 안정성을 위한 예측적 군집화 기반 협업 필터링 기법)

  • Lee, O-Joun;You, Eun-Soon
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.119-142
    • /
    • 2015
  • With the explosive growth in the volume of information, Internet users experience considerable difficulty in obtaining necessary information online. Against this backdrop, ever-greater importance is being placed on recommender systems that provide information catering to user preferences and tastes, in an attempt to address information overload. To this end, a number of techniques have been proposed, including content-based filtering (CBF), demographic filtering (DF) and collaborative filtering (CF). Among them, CBF and DF require external information and thus cannot be applied to a variety of domains. CF, on the other hand, is widely used since it is relatively free from this domain constraint. The CF technique is broadly classified into memory-based CF, model-based CF and hybrid CF. Model-based CF addresses the drawbacks of CF by employing a Bayesian model, clustering model or dependency network model. It not only alleviates the sparsity and scalability issues but also boosts predictive performance. However, it involves expensive model-building and results in a tradeoff between performance and scalability. This tradeoff is attributed to reduced coverage, a type of sparsity issue. In addition, expensive model-building may lead to performance instability, since changes in the domain environment cannot be immediately incorporated into the model due to the high costs involved. Cumulative changes in the domain environment that fail to be reflected eventually undermine system performance. This study incorporates a Markov model of transition probabilities and the concept of fuzzy clustering into clustering-based CF (CBCF) to propose predictive clustering-based CF (PCCF), which solves the issues of reduced coverage and unstable performance. The method improves performance stability by tracking changes in user preferences and bridging the gap between the static model and dynamic users.
Furthermore, the issue of reduced coverage is also mitigated by expanding coverage based on transition probabilities and clustering probabilities. The proposed method consists of four processes. First, user preferences are normalized in preference clustering. Second, changes in user preferences are detected from review score entries during preference transition detection. Third, user propensities are normalized using the patterns of change (propensities) in user preferences in propensity clustering. Lastly, a preference prediction model is developed to predict user preferences for items. The proposed method was validated by testing robustness against performance instability and the scalability-performance tradeoff. The initial test compared and analyzed the performance of recommender systems enabled by IBCF, CBCF, ICFEC and PCCF in an environment where data sparsity had been minimized. The following test adjusted the optimal number of clusters in CBCF, ICFEC and PCCF for a comparative analysis of the subsequent changes in system performance. The results revealed that the suggested method produced only insignificant improvement in raw performance compared with the existing techniques, and no significant improvement in the standard deviation, which indicates the degree of data fluctuation. Nevertheless, it markedly improved on the existing techniques in terms of range, which indicates the level of performance fluctuation. The level of performance fluctuation before and after model generation improved by 51.31% in the initial test, and in the following test there was a 36.05% improvement in the level of performance fluctuation driven by changes in the number of clusters. This signifies that the proposed method, despite the slight performance improvement, clearly offers better performance stability than the existing techniques.
Future research will be directed toward enhancing the recommendation performance, which did not improve significantly over the existing techniques, by considering a high-dimensional parameter-free clustering algorithm or a deep learning-based model.
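The transition-probability component of PCCF can be illustrated with a minimal sketch: estimating a Markov transition matrix over preference clusters from observed per-user cluster sequences. The cluster labels and sequences below are hypothetical:

```python
from collections import defaultdict

def transition_matrix(sequences):
    """Estimate Markov transition probabilities between preference clusters
    from sequences of cluster memberships (one sequence per user)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):  # consecutive cluster pairs
            counts[a][b] += 1
    probs = {}
    for a, row in counts.items():
        total = sum(row.values())
        probs[a] = {b: c / total for b, c in row.items()}  # row-normalize
    return probs

# Each sequence: the preference cluster a user occupied at successive reviews.
sequences = [["C1", "C1", "C2"], ["C1", "C2", "C2"], ["C2", "C1"]]
P = transition_matrix(sequences)
print(P["C1"])  # transition probabilities out of cluster C1
```

A recommender could then weight neighbor clusters by these row probabilities when a user's preferences appear to be drifting, which is the coverage-expansion idea described in the abstract.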

Feeding Behavior of Crustaceans (Cladocera, Copepoda and Ostracoda): Food Selection Measured by Stable Isotope Analysis Using R Package SIAR in Mesocosm Experiment (메소코즘을 이용한 지각류, 요각류 및 패충류의 섭식 성향 분석; 탄소, 질소 안정동위원소비의 믹싱모델 (R package SIAR)을 이용한 정량 분석)

  • Chang, Kwang-Hyeon;Seo, Dong-Il;Go, Soon-Mi;Sakamoto, Masaki;Nam, Gui-Sook;Choi, Jong-Yun;Kim, Min-Seob;Jeong, Kwang-Seok;La, Geung-Hwan;Kim, Hyun-Woo
    • Korean Journal of Ecology and Environment
    • /
    • v.49 no.4
    • /
    • pp.279-288
    • /
    • 2016
  • Stable isotope analysis (SIA) of carbon and nitrogen is a useful tool for understanding the functional roles of target organisms in biological interactions within the food web. Recently, mixing models based on SIA have frequently been used to determine which potential food sources are predominantly assimilated by consumers; however, applying such models is often limited and difficult for non-expert users of the software. In the present study, we provide a simple guide to R and the package SIAR, with example data on the selective feeding of the crustaceans dominating a freshwater zooplankton community. We collected SIA data from experimental mesocosms set up in the littoral area of the eutrophic Chodae Reservoir, and used the mixing model to analyze the main food sources of the dominant crustacean species among small particulate organic matter (POM, <$50{\mu}m$), large POM (>$50{\mu}m$), and attached POM. According to the SIAR model, Daphnia galeata and Ostracoda mainly consumed small POM, while Simocephalus vetulus consumed both small and large POM. Copepods collected from the reservoir showed no preference among food items, but in the mesocosm tanks their main food source was attached POM rather than planktonic prey, including rotifers. These results suggest that these grazers play different roles in the food webs of eutrophic reservoirs, and that S. vetulus is the more efficient grazer on a wide range of food items, such as large colonies of phytoplankton and cyanobacteria, during bloom periods.
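The mixing-model idea behind SIAR can be illustrated, under strong simplifying assumptions, by the deterministic two-source, single-isotope mass balance (SIAR itself fits contributions with a Bayesian model and handles more sources and uncertainty). All isotope values here are hypothetical, not the mesocosm data:

```python
def two_source_fraction(d_mix, d_src1, d_src2):
    """Fraction of source 1 in the consumer's diet from the mass balance
    d_mix = f*d_src1 + (1 - f)*d_src2, ignoring trophic fractionation."""
    return (d_mix - d_src2) / (d_src1 - d_src2)

# Hypothetical delta 13C values (per mil) for a consumer, small POM and
# large POM:
f_small = two_source_fraction(d_mix=-27.0, d_src1=-29.0, d_src2=-24.0)
print(round(f_small, 2))  # 0.6 -> 60% of assimilated carbon from small POM
```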

The Effect of Rain on Traffic Flows in Urban Freeway Basic Segments (기상조건에 따른 도시고속도로 교통류변화 분석)

  • 최정순;손봉수;최재성
    • Journal of Korean Society of Transportation
    • /
    • v.17 no.1
    • /
    • pp.29-39
    • /
    • 1999
  • An earlier study of the effect of rain found that the capacity of freeway systems was reduced, but did not address how rain changes the nature of traffic flows. The substantial variation caused by the intensity of adverse weather is entirely rational, so these effects must be considered in freeway facility design; however, all of the data in the Highway Capacity Manual (HCM) come from ideal conditions. The primary objective of this study is to investigate the effect of rain on urban freeway traffic flows in Seoul. To do so, we investigated the relations among three key traffic variables (flow rate, speed, occupancy), their threshold values between congested and uncongested flow regimes, and the speed distribution. Traffic data from the Olympic Expressway in Seoul were obtained from an image detection system (Autoscope) at 30-second and 1-minute intervals. The slope of the regression line relating flow to occupancy in the uncongested regime decreases when it is raining. In essence, this indicates that the average service flow rate (which may be interpreted as the freeway's capacity) is reduced as weather conditions deteriorate. The reduction is in the range of 10 to 20%, which agrees with the range proposed by the 1994 US HCM. Notably, the service flow rates of the inner lanes are relatively higher than those of the other lanes. The average speed is also reduced on rainy days, but the flow-speed relationship and the threshold values of speed and occupancy (the critical speed and critical occupancy) are not very sensitive to weather conditions.

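The flow-occupancy slope comparison described above can be sketched as follows; the observations are hypothetical, not the Olympic Expressway data:

```python
def slope_through_origin(occupancy, flow):
    """Least-squares slope of flow vs. occupancy for a line through the
    origin, appropriate for the uncongested regime."""
    num = sum(o * q for o, q in zip(occupancy, flow))
    den = sum(o * o for o in occupancy)
    return num / den

occ = [4, 6, 8, 10, 12]                 # percent occupancy (hypothetical)
flow_dry = [400, 610, 790, 1010, 1190]  # veh/h/lane, dry (hypothetical)
flow_wet = [340, 520, 680, 850, 1020]   # veh/h/lane, rain (hypothetical)

s_dry = slope_through_origin(occ, flow_dry)
s_wet = slope_through_origin(occ, flow_wet)
reduction = 100 * (1 - s_wet / s_dry)
print(f"service flow rate reduction under rain: {reduction:.1f}%")
```

With these made-up numbers the slope reduction falls in the 10~20% band that the study reports for rain.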

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.175-197
    • /
    • 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been actively conducted, showing remarkable results in various fields such as classification, summarization, and generation. Among the various text analysis fields, text classification is the most widely used technology in academia and industry. Text classification includes binary classification with one label from two classes, multi-class classification with one label from several classes, and multi-label classification with multiple labels from several classes. In particular, multi-label classification requires a different training method from binary and multi-class classification because of its characteristic of having multiple labels. In addition, since the number of labels to be predicted grows with the number of labels and classes, performance improvement is difficult due to the increased prediction difficulty. To overcome these limitations, research on label embedding is being actively conducted, in which (i) the initially given high-dimensional label space is compressed into a low-dimensional latent label space, (ii) training is performed to predict the compressed labels, and (iii) the predicted labels are restored to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, since these techniques consider only linear relationships between labels or compress the labels by random transformation, they cannot capture the non-linear relationships between labels, and thus cannot create a latent label space that sufficiently contains the information of the original label space.
Recently, there have been increasing attempts to improve performance by applying deep learning to label embedding. Label embedding using an autoencoder, a deep learning model effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding suffers a large amount of information loss when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space. This is related to the vanishing-gradient problem that occurs during backpropagation. To solve this problem, the skip connection was devised: by adding a layer's input to its output, gradients are preserved during backpropagation, and efficient learning is possible even in deep networks. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using skip connections in autoencoders or in the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder to form a low-dimensional latent label space that reflects the information of the high-dimensional label space well. The proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space. Using this, we conducted an experiment to predict the compressed keyword vector in the latent label space from the paper abstract and to evaluate the multi-label classification by restoring the predicted keyword vector to the original label space. As a result, the accuracy, precision, recall, and F1 score used as performance indicators showed far superior performance for multi-label classification based on the proposed methodology compared to traditional multi-label classification methods.
This indicates that the low-dimensional latent label space derived through the proposed methodology reflected the information of the high-dimensional label space well, which ultimately improved the performance of the multi-label classification itself. In addition, the utility of the proposed methodology was confirmed by comparing its performance across domain characteristics and numbers of latent-label-space dimensions.
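The skip-connection idea can be sketched as a numpy forward pass: the encoder's hidden activation is added to the decoder's hidden activation, so information (and, during training, gradients) bypass the low-dimensional bottleneck. Dimensions and random weights are illustrative, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
n_labels, n_hidden, n_latent = 100, 32, 8  # illustrative sizes

W_enc1 = rng.normal(0, 0.1, (n_labels, n_hidden))
W_enc2 = rng.normal(0, 0.1, (n_hidden, n_latent))
W_dec1 = rng.normal(0, 0.1, (n_latent, n_hidden))
W_dec2 = rng.normal(0, 0.1, (n_hidden, n_labels))

def relu(x):
    return np.maximum(0, x)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def forward(y):
    h_enc = relu(y @ W_enc1)          # encoder hidden layer
    z = h_enc @ W_enc2                # low-dimensional latent labels
    h_dec = relu(z @ W_dec1) + h_enc  # skip connection: add encoder hidden
    return sigmoid(h_dec @ W_dec2)    # reconstructed multi-hot labels

y = (rng.random((1, n_labels)) < 0.05).astype(float)  # sparse multi-hot labels
y_hat = forward(y)
print(y_hat.shape)  # (1, 100)
```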

Study on the Multilevel Effects of Integrated Crisis Intervention Model for the Prevention of Elderly Suicide: Focusing on Suicidal Ideation and Depression (노인자살예방을 위한 통합적 위기개입모델 다층효과 연구: 자살생각·우울을 중심으로)

  • Kim, Eun Joo;Yook, Sung Pil
    • 한국노년학
    • /
    • v.37 no.1
    • /
    • pp.173-200
    • /
    • 2017
  • This study verifies the actual effect on elderly-suicide prevention of the integrated crisis intervention service that has been widely provided across local communities in Gyeonggi Province, focusing on the integrated crisis intervention model developed for this purpose. The model and its manual were developed for local communities by integrating crisis intervention theory, which takes a community-wide integrated-system approach, with stress-vulnerability theory. For the analysis of the effect, the geriatric depression and suicidal ideation scales were adopted, and data were collected as follows: from 258 people in the first preliminary test; from 184 people in a second test after the integrated crisis intervention service had been provided for 6 months; and from 124 people in a follow-up (backward tracing) test 2 to 3 years later. For the analysis, R was used to conduct test equating and vertical scaling between measurement points, followed by descriptive statistics, univariate analysis of variance, and multilevel modeling with Bayesian estimation. The results show that the integrated crisis intervention model developed for elderly-suicide prevention had a statistically significant effect on reducing elderly depression and suicidal ideation in the follow-up measurements relative to the preliminary scores. The model was effective to the extent of 0.56 for the reduction of depression and 0.39 for the reduction of suicidal ideation.
However, the backward tracing test conducted 2 to 3 years after the first crisis intervention found that the improved values had returned to their original state, showing that the effect of the intervention is not maintained over the long term. Multilevel analysis was conducted on the factors affecting depression and suicidal ideation: service type (professional counseling, medication, peer counseling), client characteristics (sex, age), counselor characteristics (age, career, major), and the interaction between counselor characteristics and intervention. Only medication significantly reduced suicidal ideation, and when the counselor's major was counseling, suicidal ideation was further reduced through interaction with professional counseling. Furthermore, as the characteristics of suicide-prevention experts were found to moderate the intervention effect when applying the integrated crisis intervention model, primary consideration should be given to the counseling ability of these experts.

Origin and Source Appointment of Sedimentary Organic Matter in Marine Fish Cage Farms Using Carbon and Nitrogen Stable Isotopes (탄소 및 질소 안정동위원소를 활용한 어류 가두리 양식장 내 퇴적 유기물의 기원 및 기여도 평가)

  • Young-Shin Go;Dae-In Lee;Chung Sook Kim;Bo-Ram Sim;Hyung Chul Kim;Won-Chan Lee;Dong-Hun Lee
    • Korean Journal of Ecology and Environment
    • /
    • v.55 no.2
    • /
    • pp.99-110
    • /
    • 2022
  • We investigated the physicochemical properties and isotopic compositions of organic matter (δ13CTOC and δ15NTN) at an old fish farming (OFF) site after the cessation of aquaculture. Our objective was to determine the origin of the organic matter preserved in the fish-farm sediments and the relative contributions of its sources. Temporal and spatial distributions of particulate and sinking organic matter (OFF site: 2.0 to 3.3 mg L-1 particulate matter, 18.8 to 246.6 g m-2 day-1 sinking flux; control site: 2.0 to 3.5 mg L-1 particulate matter, 25.5 to 129.4 g m-2 day-1 sinking flux) differed significantly between the two sites along seasonal precipitation. In contrast to the variation of δ13CTOC and δ15NTN values in the water column, the sedimentary isotopic compositions (OFF site: -21.5‰ to -20.4‰ for δ13CTOC, 6.0‰ to 7.6‰ for δ15NTN; control site: -21.6‰ to -21.0‰ for δ13CTOC, 6.6‰ to 8.0‰ for δ15NTN) showed distinctive patterns (p<0.05) relative to seawater-derived nitrogen sources, indicating increased input of aquaculture-derived material (e.g., fish feces). With respect to past fish-farming activity, the representative sources (e.g., fish feces and algae) differed significantly between the two sites (p<0.05), confirming the predominant contribution (55.9±4.6%) of fish feces at the OFF site. Our results may thus identify specific controlling factors for the sustainable use of fish-farming sites by estimating the discriminative contributions of organic matter between the sites.
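With two isotopes and three sources, the deterministic core of such a source-apportionment problem is a 3×3 linear system (Bayesian mixing models add uncertainty around this core). All end-member and sediment values below are hypothetical, not the paper's measurements:

```python
import numpy as np

# Columns: fish feces, algae, terrestrial OM (hypothetical end-members).
A = np.array([
    [-23.0, -20.0, -27.0],  # d13C of each source (per mil)
    [  9.0,   6.0,   3.0],  # d15N of each source (per mil)
    [  1.0,   1.0,   1.0],  # contributions must sum to 1
])
# Sediment d13C, d15N, and the sum-to-one constraint (hypothetical).
b = np.array([-21.5, 7.2, 1.0])

f = np.linalg.solve(A, b)  # fractional contribution of each source
print({name: round(x, 2) for name, x in zip(["feces", "algae", "terrestrial"], f)})
```

Real data require checking that the solution is non-negative and propagating measurement uncertainty, which is exactly what the Bayesian treatment provides.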

A Study on Differences of Contents and Tones of Arguments among Newspapers Using Text Mining Analysis (텍스트 마이닝을 활용한 신문사에 따른 내용 및 논조 차이점 분석)

  • Kam, Miah;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.53-77
    • /
    • 2012
  • This study analyzes the differences in contents and tones of arguments among three major Korean newspapers: the Kyunghyang Shinmun, the Hankyoreh, and the Dong-A Ilbo. It is commonly accepted that newspapers in Korea explicitly deliver their own tone of argument when covering sensitive issues and topics. This can be problematic if readers consume the news without being aware of the tone of argument, because the contents and tone can easily influence them. It is therefore desirable to have a new tool that can inform readers of a newspaper's tone of argument. This study presents the results of clustering and classification techniques as part of a text mining analysis. We focus on six main subjects in the newspapers, Culture, Politics, International, Editorial-opinion, Eco-business and National issues, and attempt to identify differences and similarities among the papers. The basic unit of analysis is a paragraph of a news article. The study uses a keyword-network analysis tool and visualizes relationships among keywords to make the differences easier to see. Newspaper articles were gathered from KINDS, the Korean Integrated News Database System, which preserves articles of the Kyunghyang Shinmun, the Hankyoreh and the Dong-A Ilbo and is open to the public. About 3,030 articles from 2008 to 2012 were used. The International, National issues and Politics sections were gathered with specific issue keywords: the International section with 'Nuclear weapon of North Korea,' the National issues section with '4-major-river,' and the Politics section with 'Tonghap-Jinbo Dang.' All articles from April 2012 to May 2012 in the Eco-business, Culture and Editorial-opinion sections were also collected.
All of the collected data were edited into paragraphs. We removed stop-words using the Lucene Korean module, calculated keyword co-occurrence counts from the paired co-occurrence list of keywords in each paragraph, and built a co-occurrence matrix from the list. Once the co-occurrence matrix was built, we used the cosine coefficient matrix as input for PFNet (Pathfinder Network). To analyze the three newspapers and find the significant keywords in each paper, we examined the 10 highest-frequency keywords and the keyword networks of the 20 highest-frequency keywords, to closely examine their relationships and show a detailed network map. We used NodeXL to visualize the PFNet. After drawing all the networks, we compared the results with the classification results. Classification was first performed to identify how each newspaper's tone of argument differs from the others. To analyze tones of arguments, all paragraphs were divided into two types, positive tone and negative tone, and a supervised learning technique was used to identify and classify the tones of all collected paragraphs and articles: the Naïve Bayesian classifier provided in the MALLET package classified all paragraphs in the articles. After classification, precision, recall and F-value were used to evaluate the results. Based on the results of this study, three subjects, Culture, Eco-business and Politics, showed differences in contents and tones of arguments among the three newspapers. In addition, for the National issues, tones of arguments on the 4-major-rivers project differed from each other. The three newspapers appear to have their own specific tone of argument in those sections, and the keyword networks showed different shapes for the same period and section.
This means that the keywords appearing frequently in the articles differ across the papers, and that their contents are composed of different keywords. The positive-negative classification also demonstrated the feasibility of distinguishing each newspaper's tone of argument from the others'. These results indicate that the approach in this study is promising as a new tool for identifying the differing tones of arguments of newspapers.
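The Naïve Bayesian tone-classification step (the study used MALLET's implementation) can be sketched as a multinomial classifier with Laplace smoothing; the training paragraphs below are toy examples, not the study's data:

```python
import math
from collections import Counter

def train(docs):
    """docs: list of (tokens, label). Returns class counts, per-class word
    counts, and the vocabulary."""
    labels = Counter(lab for _, lab in docs)
    word_counts = {lab: Counter() for lab in labels}
    for tokens, lab in docs:
        word_counts[lab].update(tokens)
    vocab = {w for tokens, _ in docs for w in tokens}
    return labels, word_counts, vocab

def classify(tokens, labels, word_counts, vocab):
    """Pick the label with the highest log-posterior under multinomial NB."""
    n_docs = sum(labels.values())
    best, best_lp = None, float("-inf")
    for lab, n in labels.items():
        lp = math.log(n / n_docs)  # log prior
        total = sum(word_counts[lab].values())
        for w in tokens:
            # Laplace smoothing over the vocabulary
            lp += math.log((word_counts[lab][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = lab, lp
    return best

docs = [
    (["project", "success", "praised"], "positive"),
    (["growth", "welcomed", "praised"], "positive"),
    (["criticized", "failure", "risk"], "negative"),
    (["opposed", "criticized", "delay"], "negative"),
]
model = train(docs)
print(classify(["praised", "growth"], *model))  # positive
```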

Development of Quantification Methods for the Myocardial Blood Flow Using Ensemble Independent Component Analysis for Dynamic $H_2^{15}O$ PET (동적 $H_2^{15}O$ PET에서 앙상블 독립성분분석법을 이용한 심근 혈류 정량화 방법 개발)

  • Lee, Byeong-Il;Lee, Jae-Sung;Lee, Dong-Soo;Kang, Won-Jun;Lee, Jong-Jin;Kim, Soo-Jin;Choi, Seung-Jin;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.38 no.6
    • /
    • pp.486-491
    • /
    • 2004
  • Purpose: Factor analysis and independent component analysis (ICA) have been used for handling dynamic image sequences. The theoretical advantages of a newly suggested ICA method, ensemble ICA, led us to apply it to the analysis of dynamic myocardial $H_2^{15}O$ PET data. In this study, we quantified patients' blood flow using the ensemble ICA method. Materials and Methods: Twenty subjects underwent $H_2^{15}O$ PET scans on an ECAT EXACT 47 scanner and myocardial perfusion SPECT on a Vertex scanner. After transmission scanning, dynamic emission scans were initiated simultaneously with the injection of $555{\sim}740$ MBq $H_2^{15}O$. Hidden independent components can be extracted from the observed mixed data (PET images) by means of ICA algorithms. Ensemble learning is a variational Bayesian method that provides an analytical approximation to the parameter posterior using a tractable distribution. The variational approximation forms a lower bound on the ensemble likelihood, and this bound is maximized by minimizing the Kullback-Leibler divergence between the true posterior and the variational posterior. In this study, the posterior pdf was approximated by a rectified Gaussian distribution to incorporate a non-negativity constraint, which is suitable for dynamic images in nuclear medicine. Blood flow was measured in 9 regions: the apex, four mid-wall areas, and four basal-wall areas. Myocardial perfusion SPECT scores and angiography results were compared with the regional blood flow. Results: Major cardiac components were separated successfully by the ensemble ICA method, and blood flow could be estimated in 15 of the 20 patients. Mean myocardial blood flow was $1.2{\pm}0.40$ ml/min/g at rest and $1.85{\pm}1.12$ ml/min/g under stress. Blood flow values obtained by an operator on two different occasions were highly correlated (r=0.99).
In the myocardium component image, the contrast between the left ventricle and myocardium was 1:2.7 on average. Perfusion reserve differed significantly between regions with and without stenosis detected by coronary angiography (P<0.01). In 66 segments with angiographically confirmed stenosis, those with a reversible perfusion decrease on perfusion SPECT showed lower perfusion reserve values on $H_2^{15}O$ PET. Conclusions: Myocardial blood flow could be estimated using an ICA method with ensemble learning. We suggest that ensemble ICA incorporating a non-negativity constraint is a feasible method for handling dynamic image sequences obtained by nuclear medicine techniques.
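The variational objective described above minimizes a Kullback-Leibler divergence between the approximate and true posteriors. For univariate Gaussians this divergence has a closed form, sketched here with illustrative parameters (the paper's rectified-Gaussian case is more involved):

```python
import math

def kl_gaussians(mu_q, sigma_q, mu_p, sigma_p):
    """KL(q || p) for univariate Gaussians q = N(mu_q, sigma_q^2) and
    p = N(mu_p, sigma_p^2), the quantity a variational method drives down."""
    return (math.log(sigma_p / sigma_q)
            + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sigma_p ** 2)
            - 0.5)

# Tightening the approximation drives the divergence toward zero:
print(round(kl_gaussians(0.0, 1.0, 0.0, 1.0), 4))  # 0.0 (identical)
print(round(kl_gaussians(0.5, 1.0, 0.0, 1.0), 4))  # 0.125 (mean mismatch)
```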