• Title/Summary/Keyword: Independent Component Analysis


Numerical Formula and Verification of Web Robot for Collection Speedup of Web Documents

  • Kim Weon;Kim Young-Ki;Chin Yong-Ok
    • Journal of Internet Computing and Services
    • /
    • v.5 no.6
    • /
    • pp.1-10
    • /
    • 2004
  • A web robot is software that tracks and collects web documents on the Internet. The performance scalability of recent web robots has reached its limit as the number of web documents has grown sharply with the rapid growth of the Internet, so studies on performance scalability in searching and collecting web documents are strongly needed. This thesis presents the design of a Multi-Agent-based web robot that speeds up document collection, in contrast to the sequentially executing web robot based on the existing Fork-Join method, together with an analysis of its performance scalability. For collection speedup, the Multi-Agent-based web robot handles inactive ('dead-link') URLs, which arise from overloaded web documents or temporary network or web-server disturbances, in an independent process after partitioning them among the agents. Each agent consists of four components: Loader, Extractor, Active URL Scanner, and Inactive URL Scanner. The thesis models the Multi-Agent-based web robot on Amdahl's Law to speed up document collection, introduces a numerical formula for collection speedup, and verifies the performance improvement by comparing values from the formula with experimental results. Moreover, a Dynamic URL Partition algorithm is introduced and implemented to minimize the workload of the web servers by maximizing the interval between requests to each target web server.

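The entry above models collection speedup with Amdahl's Law. As a rough illustration of that idea (not the paper's own formula, which is not reproduced here), the Python sketch below computes the ideal speedup when a fraction of the crawl remains sequential and the rest is split across agents; the 10% serial fraction and the agent counts are hypothetical.

```python
def amdahl_speedup(serial_fraction: float, n_agents: int) -> float:
    """Ideal speedup when only the parallelizable part is divided among n_agents."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_agents)

# Hypothetical example: if 10% of the work (coordination, inactive-URL handling)
# stays sequential, the speedup saturates well below the agent count.
for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} agents -> speedup {amdahl_speedup(0.10, n):.2f}")
```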

Combined Adjustment of Geodetic Levelling Net in Korea (우리나라 측지수준망의 조합조정)

  • 백은기;김원익
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.7 no.2
    • /
    • pp.1-6
    • /
    • 1989
  • The adjustment of the levelling net has been carried out order by order, independently, using the least squares method. For a small net this makes verification and statistical analysis of the net difficult because the degree of freedom is low, and it is also difficult to evaluate the error of the lower-order net correctly. The aim of this study is to analyse the properties of the combined adjustment method compared with the independent adjustment method, using data measured during 1967-1987. Another aim is to analyse the influences of the normal orthometric correction and of changes of datum. Finally, the Korean levelling net has been evaluated by applying real redundancy and variance component estimation.

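For readers unfamiliar with net adjustment, the sketch below shows a minimal weighted least-squares adjustment of a tiny, hypothetical levelling net (one fixed datum point, three height-difference observations, weights proportional to inverse line length). It only illustrates the independent-adjustment building block discussed above, not the combined adjustment or variance component estimation used in the paper.

```python
import numpy as np

# Hypothetical net: benchmark 0 is the fixed datum (H0 = 0 m); the heights of
# benchmarks 1 and 2 are unknown. Observation: (from, to, dH [m], line length [km]).
obs = [(0, 1, 1.234, 2.0),
       (1, 2, 0.567, 1.5),
       (0, 2, 1.805, 3.0)]

A = np.zeros((len(obs), 2))          # design matrix for unknowns H1, H2
L = np.zeros(len(obs))               # observation vector
P = np.zeros(len(obs))               # weights ~ 1 / line length
for k, (i, j, dh, length) in enumerate(obs):
    if i != 0:
        A[k, i - 1] -= 1.0
    if j != 0:
        A[k, j - 1] += 1.0
    L[k] = dh                        # datum height is zero, so no reduction needed
    P[k] = 1.0 / length

W = np.diag(P)
x = np.linalg.solve(A.T @ W @ A, A.T @ W @ L)   # normal equations
v = A @ x - L                                   # residuals
dof = len(obs) - x.size                         # redundancy (degree of freedom)
sigma0 = np.sqrt((v @ W @ v) / dof)             # a posteriori unit standard deviation
print("adjusted heights [m]:", x, " sigma0 [m]:", sigma0)
```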

Dispersion of Rayleigh Waves in the Korean Peninsula

  • Cho, Kwang-Hyun;Lee, Kie-Hwa
    • Journal of the Korean Geophysical Society
    • /
    • v.9 no.3
    • /
    • pp.231-240
    • /
    • 2006
  • The crustal structure of the Korean Peninsula was investigated by analyzing phase velocity dispersion data of Rayleigh waves. Earthquakes recorded by three-component broad-band velocity seismographs in South Korea during 1999-2004 were used in this study. The fundamental-mode Rayleigh waves were extracted from the vertical components of the seismograms by the multiple filter technique and the phase-matched filter method. Phase velocity dispersion curves in the period range 10 to 80 s were computed by the two-station method for fundamental-mode signal pairs along 14 great-circle surface-wave propagation paths. Treating the shear velocity of each layer as an independent parameter, the Rayleigh-wave phase velocity data were inverted. All the resulting models can be explained by a rather homogeneous crust, with shear-wave velocity increasing from 2.8 to 3.25 km/s from the top down to about 33 km depth without any distinctive crustal discontinuities, and an uppermost mantle with shear-wave velocity between 4.55 and 4.67 km/s. Our results agree well with the recent study of Cho et al. (2006b), which analyzed seismic background noise to recover short-period (0.5-20 s) Rayleigh- and Love-wave group velocity dispersion characteristics.

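As a minimal illustration of the two-station method mentioned above, the sketch below recovers the phase velocity from the unwrapped spectral phase difference between two stations on the same great-circle path; the distance, periods, and synthetic phases are hypothetical, and the cycle ambiguity is simply passed in as an argument.

```python
import numpy as np

def two_station_phase_velocity(delta_km, freq_hz, phase_near, phase_far, n_cycle=0):
    """Phase velocity c(f) from the inter-station phase difference (two-station method).
    phase_near/phase_far are unwrapped spectral phases [rad]; n_cycle resolves the
    2*pi ambiguity (chosen so that c falls in a physically reasonable range)."""
    dphi = (phase_near - phase_far) + 2.0 * np.pi * n_cycle
    return 2.0 * np.pi * freq_hz * delta_km / dphi

# Hypothetical check with synthetic phases for a constant 3.5 km/s medium.
delta = 300.0                            # inter-station distance [km]
periods = np.linspace(10.0, 80.0, 8)     # s
f = 1.0 / periods
phase_near = np.zeros_like(f)
phase_far = phase_near - 2.0 * np.pi * f * delta / 3.5   # pure travel-time delay
print(two_station_phase_velocity(delta, f, phase_near, phase_far))  # ~3.5 km/s each
```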

Evaluation of ASCE 61-14 NSPs for the estimation of seismic demands in marginal wharves

  • Smith-Pardo, J. Paul.;Reyes, Juan C.;Sandoval, Juan D.;Hassan, Wael M.
    • Structural Engineering and Mechanics
    • /
    • v.69 no.1
    • /
    • pp.95-104
    • /
    • 2019
  • The Standard ASCE 61-14 proposes the Substitute Structure Method (SSM) as a Nonlinear Static Procedure (NSP) to estimate nonlinear displacement demands at the center of mass of piers or wharves under seismic actions. To account for bidirectional earthquake excitation according to the Standard, results from independent pushover analyses in each orthogonal direction should be combined using either a 100/30 directional approach or a procedure referred to as the Dynamic Magnification Factor, DMF. The main purpose of this paper is to present an evaluation of these NSPs for four wharf model structures on soil conditions ranging from soft to medium-dense clay. Results from nonlinear static analyses were compared against benchmark values of relevant Engineering Design Parameters, EDPs. The latter are defined as the geometric mean demands obtained from nonlinear dynamic analyses using a set of 30 two-component ground motion records. It was found that the SSM provides close estimates of the benchmark displacement demands at the center of mass of the wharf structures. Furthermore, for the most critical pile connection at a landside corner of the wharf, the 100/30 and DMF approaches produced displacement, curvature, and force demands that were reasonably comparable to the corresponding benchmark values.
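
The 100/30 directional combination referenced above can be sketched in a few lines; the snippet below combines scalar demands from two independent, orthogonal pushover analyses by taking the larger of the two 100%/30% pairings. It is a generic illustration with hypothetical demand values, not a reproduction of the ASCE 61-14 procedure or of the DMF approach.

```python
def combine_100_30(demand_x: float, demand_y: float) -> float:
    """Larger of (100% X + 30% Y) and (30% X + 100% Y) for a scalar demand."""
    return max(abs(demand_x) + 0.3 * abs(demand_y),
               0.3 * abs(demand_x) + abs(demand_y))

# Hypothetical displacement demands [m] at the wharf center of mass.
print(combine_100_30(0.25, 0.10))   # -> 0.28
```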

Threshold Values of Institutional Quality on FDI Inflows: Evidence from Developing Economies

  • LEE, Sunhae
    • The Journal of Industrial Distribution & Business
    • /
    • v.12 no.10
    • /
    • pp.31-41
    • /
    • 2021
  • Purpose: This study estimates the threshold values of institutional quality by investigating the non-linear effect of the six sub-indices of the Worldwide Governance Indicators (WGI) on FDI inflows in 34 developing countries in Asia and Eastern Europe over the period 2000-2017. Research design, data and methodology: GMM EGLS is employed, which does not include the lagged value of the dependent variable as an independent variable. As a proxy for institutional quality, either one of the six WGI sub-indices from the World Bank or a composite index obtained through principal component analysis is used in a separate model. Results: An improvement in institutional quality does not increase FDI inflows while the quality stays below a certain threshold level; only when the quality is above the threshold can it positively influence FDI inflows. The threshold values of political stability and absence of violence, government effectiveness, and rule of law are relatively higher than those of the other WGI dimensions. Conclusion: The institutional quality of the developing economies of Asia and Eastern Europe has a non-linear effect on FDI inflows. The target countries need to upgrade their institutional quality above the threshold in order to attract more FDI.
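
A composite institutional-quality index of the kind described above (the first principal component of the six WGI sub-indices) can be sketched as follows; the panel data are randomly generated placeholders, and the GMM EGLS estimation itself is not shown.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical panel: rows = country-year observations, columns = the six WGI
# sub-indices (voice & accountability, political stability, government
# effectiveness, regulatory quality, rule of law, control of corruption).
rng = np.random.default_rng(0)
wgi = rng.normal(size=(34 * 18, 6))               # 34 countries x 18 years, placeholder

z = StandardScaler().fit_transform(wgi)           # standardize each sub-index
composite_index = PCA(n_components=1).fit_transform(z).ravel()
print(composite_index.shape)                      # one index value per country-year
```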

Quality Control Usage in High-Density Microarrays Reveals Differential Gene Expression Profiles in Ovarian Cancer

  • Villegas-Ruiz, Vanessa;Moreno, Jose;Jacome-Lopez, Karina;Zentella-Dehesa, Alejandro;Juarez-Mendez, Sergio
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.17 no.5
    • /
    • pp.2519-2525
    • /
    • 2016
  • There are several existing reports on the use of microarray chips for assessing altered gene expression in different diseases. In fact, over 1.5 million assays of this kind have been performed over the last twenty years, influencing clinical and translational research studies. The most commonly used DNA microarray platform is the Affymetrix GeneChip, together with its Quality Control Software and GeneChip Probe Arrays. These chips incorporate several quality controls to confirm the success of each assay, but their actual impact on gene expression profiles had not been analyzed until the appearance of several bioinformatics tools for this purpose. We performed a data mining analysis, in this case focused on ovarian cancer as well as healthy ovarian tissue and ovarian cell lines, in order to confirm quality control results and the associated variation in gene expression profiles. The microarray data used in our research were downloaded from ArrayExpress and Gene Expression Omnibus (GEO) and analyzed with Expression Console Software using the RMA, MAS5 and Plier algorithms. The gene expression profiles were obtained using Partek Genomics Suite v6.6, and the data were visualized using principal component analysis, heat maps, and Venn diagrams. Microarray quality control analysis showed that roughly 40% of the microarray files were false negatives, demonstrating over- and under-estimation of expressed genes. Additionally, we confirmed the results by performing a second analysis using independent samples; about 70% of the significantly expressed genes were correlated between the two analyses. These results demonstrate the importance of appropriate microarray processing for obtaining a reliable gene expression profile.
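
As a small illustration of the kind of principal component visualization described above, the sketch below projects a randomly generated, placeholder normalized expression matrix onto its first two principal components and colors samples by group; it does not reproduce the RMA/MAS5/Plier processing or the Partek analysis.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Placeholder log2 expression matrix: rows = samples, columns = probe sets.
rng = np.random.default_rng(1)
expr = rng.normal(loc=7.0, scale=1.0, size=(40, 5000))
labels = np.array(["ovarian tumor"] * 20 + ["healthy ovary"] * 20)

pcs = PCA(n_components=2).fit_transform(expr - expr.mean(axis=0))
for grp in np.unique(labels):
    sel = labels == grp
    plt.scatter(pcs[sel, 0], pcs[sel, 1], label=grp)
plt.xlabel("PC1"); plt.ylabel("PC2"); plt.legend(); plt.title("Sample QC by PCA")
plt.show()
```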

Introduction to the Indian Buffet Process: Theory and Applications (인도부페 프로세스의 소개: 이론과 응용)

  • Lee, Youngseon;Lee, Kyoungjae;Lee, Kwangmin;Lee, Jaeyong;Seo, Jinwook
    • The Korean Journal of Applied Statistics
    • /
    • v.28 no.2
    • /
    • pp.251-267
    • /
    • 2015
  • The Indian Buffet Process is a stochastic process on equivalence classes of binary matrices with a finite number of rows and an infinite number of columns. It can be imposed as the prior distribution on the binary matrix in an infinite feature model. We describe the derivation of the Indian Buffet Process from a finite feature model and briefly explain its relation to the beta process. Using a Gaussian linear model, we describe three inference algorithms, the Gibbs sampling algorithm, the stick-breaking algorithm, and the variational method, with an application to finding features in image data. We also illustrate the use of the Indian Buffet Process in various types of analyses such as dyadic data analysis, network data analysis, and independent component analysis.
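
To make the Indian Buffet Process concrete, the sketch below draws a binary feature matrix from its "restaurant" generative description: each customer takes an existing dish with probability proportional to its popularity and then samples a Poisson number of new dishes. The concentration parameter and matrix size are arbitrary, and this is only a prior draw, not the Gibbs, stick-breaking, or variational inference discussed in the paper.

```python
import numpy as np

def sample_ibp(n_customers: int, alpha: float, seed: int = 0) -> np.ndarray:
    """Draw a binary feature matrix Z from the Indian Buffet Process prior.
    Customer i takes each existing dish k with probability m_k / i (m_k = number
    of earlier customers who took it), then tries Poisson(alpha / i) new dishes."""
    rng = np.random.default_rng(seed)
    dishes = []                                   # each entry: list of 0/1 per customer
    for i in range(1, n_customers + 1):
        for col in dishes:                        # existing dishes
            col.append(int(rng.random() < sum(col) / i))
        for _ in range(rng.poisson(alpha / i)):   # brand-new dishes for customer i
            dishes.append([0] * (i - 1) + [1])
    if not dishes:
        return np.zeros((n_customers, 0), dtype=int)
    return np.array(dishes, dtype=int).T          # rows = customers, columns = features

Z = sample_ibp(10, alpha=2.0)
print(Z.shape, Z.sum(axis=0))                     # feature popularity counts
```

Although the matrix has infinitely many columns in principle, any finite number of customers activates only finitely many of them, which is why the draw above is representable.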

Development and Validation of a Machine Learning-based Differential Diagnosis Model for Patients with Mild Cognitive Impairment using Resting-State Quantitative EEG (안정 상태에서의 정량 뇌파를 이용한 기계학습 기반의 경도인지장애 환자의 감별 진단 모델 개발 및 검증)

  • Moon, Kiwook;Lim, Seungeui;Kim, Jinuk;Ha, Sang-Won;Lee, Kiwon
    • Journal of Biomedical Engineering Research
    • /
    • v.43 no.4
    • /
    • pp.185-192
    • /
    • 2022
  • Early detection of mild cognitive impairment can help prevent the progression of dementia. The purpose of this study was to design and validate a machine learning model that automatically performs a differential diagnosis of patients with mild cognitive impairment and identifies characteristics of cognitive decline relative to a control group with normal cognition, using eyes-closed resting-state quantitative electroencephalography (qEEG). In the first step, a cleaned signal was obtained through a preprocessing stage that takes the quantitative EEG signal as input and removes noise through filtering and independent component analysis (ICA). Frequency-domain and non-linear features were extracted from the cleaned signal, and the 3067 extracted features were used as the input to a linear support vector machine (SVM), a representative machine learning algorithm, to classify subjects into mild cognitive impairment patients and cognitively normal adults. In the classification of 58 cognitively normal subjects and 80 patients with mild cognitive impairment, the SVM achieved an accuracy of 86.2%. In patients with mild cognitive impairment, alpha band power was decreased and high-beta band power was increased in the frontal lobe compared to the normal cognitive group. In addition, gamma band power in the occipital-parietal region was decreased in mild cognitive impairment. These results indicate that quantitative EEG can serve as a meaningful biomarker for discriminating cognitive decline.
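
The classification step described above can be sketched with a standard linear-SVM pipeline. The feature matrix below is random placeholder data standing in for the 3067 qEEG features (band powers and non-linear measures after filtering and ICA), so the printed accuracy is meaningless; only the structure of the pipeline is illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Placeholder feature matrix: rows = subjects, columns = qEEG features.
rng = np.random.default_rng(0)
X = rng.normal(size=(138, 3067))                 # 58 controls + 80 MCI patients
y = np.array([0] * 58 + [1] * 80)                # 0 = normal cognition, 1 = MCI

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)        # 5-fold cross-validated accuracy
print("CV accuracy:", scores.mean())
```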

Development of Water Level Prediction Models Using Deep Neural Network in Mountain Wetlands (딥러닝을 활용한 산지습지 수위 예측 모형 개발)

  • Kim, Donghyun;Kim, Jungwook;Kwak, Jaewon;Necesito, Imee V.;Kim, Jongsung;Kim, Hung Soo
    • Journal of Wetlands Research
    • /
    • v.22 no.2
    • /
    • pp.106-112
    • /
    • 2020
  • Wetlands play an important role in the hydrological, environmental, and ecological aspects of a watershed. The water level in a wetland is essential for various analyses, such as determining wetland function and its effects on the environment. Since many wetlands are ungauged, research on wetland water level prediction is uncommon. Therefore, this study developed wetland water level prediction models using multiple regression analysis, principal component regression analysis, an artificial neural network, and a deep neural network (DNN). Geumjeong-Mountain Wetland, located in Yangsan-city, Gyeongsangnam-do province, was selected as the target area; water level measurements from April 2017 to July 2018 were used as the dependent variable, while hydrological and meteorological data were used as the independent variables. After evaluating predictive power, the DNN-based water level prediction model was selected as the final model, with an RMSE of 6.359 and an NRMSE of 18.91%. This study is expected to provide basic data for developing wetland maintenance and management techniques that use water levels at currently ungauged points.
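
The sketch below illustrates the general shape of such a water-level regression model and the RMSE/NRMSE evaluation. It uses scikit-learn's MLPRegressor on synthetic data rather than the paper's DNN and observations, and it normalizes RMSE by the observed range, which is only one of several NRMSE conventions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for the independent variables (e.g., rainfall, upstream
# water level, humidity, temperature) and the observed wetland water level.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = X @ np.array([0.8, 1.5, 0.2, -0.1]) + rng.normal(scale=0.3, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
rmse = np.sqrt(np.mean((pred - y_te) ** 2))
nrmse = 100.0 * rmse / (y_te.max() - y_te.min())   # RMSE as percent of observed range
print(f"RMSE = {rmse:.3f}, NRMSE = {nrmse:.2f}%")
```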

Development of Quantification Methods for the Myocardial Blood Flow Using Ensemble Independent Component Analysis for Dynamic $H_2^{15}O$ PET (동적 $H_2^{15}O$ PET에서 앙상블 독립성분분석법을 이용한 심근 혈류 정량화 방법 개발)

  • Lee, Byeong-Il;Lee, Jae-Sung;Lee, Dong-Soo;Kang, Won-Jun;Lee, Jong-Jin;Kim, Soo-Jin;Choi, Seung-Jin;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.38 no.6
    • /
    • pp.486-491
    • /
    • 2004
  • Purpose: Factor analysis and independent component analysis (ICA) have been used for handling dynamic image sequences. The theoretical advantages of a newly suggested ICA method, ensemble ICA, led us to consider applying it to the analysis of dynamic myocardial $H_2^{15}O$ PET data. In this study, we quantified patients' blood flow using the ensemble ICA method. Materials and Methods: Twenty subjects underwent $H_2^{15}O$ PET scans on an ECAT EXACT 47 scanner and myocardial perfusion SPECT on a Vertex scanner. After transmission scanning, dynamic emission scans were initiated simultaneously with the injection of $555{\sim}740$ MBq $H_2^{15}O$. Hidden independent components can be extracted from the observed mixed data (PET images) by means of ICA algorithms. Ensemble learning is a variational Bayesian method that provides an analytical approximation to the parameter posterior using a tractable distribution. The variational approximation forms a lower bound on the ensemble likelihood, and maximization of the lower bound is achieved by minimizing the Kullback-Leibler divergence between the true posterior and the variational posterior. In this study, the posterior pdf was approximated by a rectified Gaussian distribution to incorporate the non-negativity constraint, which is suitable for dynamic images in nuclear medicine. Blood flow was measured in nine regions: the apex, four mid-wall areas, and four basal-wall areas. Myocardial perfusion SPECT scores and angiography results were compared with the regional blood flow. Results: Major cardiac components were separated successfully by the ensemble ICA method, and blood flow could be estimated in 15 of the 20 patients. Mean myocardial blood flow was $1.2{\pm}0.40$ ml/min/g at rest and $1.85{\pm}1.12$ ml/min/g under stress. Blood flow values obtained by an operator on two different occasions were highly correlated (r=0.99). In the myocardial component image, the contrast between the left ventricle and the myocardium was 1:2.7 on average. Perfusion reserve was significantly different between regions with and without stenosis detected by coronary angiography (P<0.01). Among the 66 segments with angiographically confirmed stenosis, those with a reversible perfusion decrease on perfusion SPECT showed lower perfusion reserve values on $H_2^{15}O$ PET. Conclusions: Myocardial blood flow could be estimated using an ICA method with ensemble learning. We suggest that ensemble ICA incorporating a non-negativity constraint is a feasible method for handling dynamic image sequences obtained by nuclear medicine techniques.
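
Ensemble (variational Bayesian) ICA with a rectified Gaussian posterior is not available in common libraries, so the sketch below substitutes plain FastICA to show the overall shape of a spatial ICA decomposition of a dynamic image sequence: the frames are reshaped to a frames-by-voxels matrix, the voxel dimension is treated as the sample dimension so that the extracted sources are spatial component images, and the mixing matrix holds each component's time-activity curve. The data are random placeholders, and no non-negativity constraint is enforced, so this is only a structural illustration of the paper's approach.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Placeholder dynamic PET data: n_frames time frames, each flattened to n_voxels.
rng = np.random.default_rng(0)
n_frames, n_voxels = 30, 20000
frames = np.abs(rng.normal(size=(n_frames, n_voxels)))    # stand-in for real frames

# Spatial ICA: voxels act as samples, so the sources are spatially independent
# component images (e.g., blood pool vs. myocardium) and the mixing matrix
# holds each component's time-activity curve.
ica = FastICA(n_components=3, random_state=0, max_iter=1000)
component_images = ica.fit_transform(frames.T)    # (n_voxels, 3) spatial maps
time_activity = ica.mixing_                       # (n_frames, 3) time-activity curves
print(component_images.shape, time_activity.shape)
```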