• Title/Abstract/Keyword: Spatial normalization

Search results: 55

An Iterative Normalization Algorithm for cDNA Microarray Medical Data Analysis

  • Kim, Yoonhee;Park, Woong-Yang;Kim, Ho
    • Genomics & Informatics
    • /
    • Vol. 2, No. 2
    • /
    • pp.92-98
    • /
    • 2004
  • A cDNA microarray experiment is one of the most useful high-throughput experiments in medical informatics for monitoring gene expression levels. Statistical analysis of cDNA microarray medical data requires a normalization procedure to reduce the systematic errors that cannot be controlled by the experimental conditions. Despite the variety of existing normalization methods, this paper suggests a more general and synthetic normalization algorithm with a control gene set, based on previous studies of normalization. The iterative normalization method selects and includes a new control gene set from among all genes at every step of the normalization calculation, initialized with the housekeeping genes. The objective of this iterative normalization is to maintain the pattern of the original data and to keep the gene expression levels stable. Spatial plots, M&A (ratio versus average intensity) plots, and box plots showed graphically that the mean across all genes converges to zero after applying our iterative normalization. The practicability of the algorithm was demonstrated by applying our method to data from a human photoaging study.
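
The following is a minimal numpy sketch of the iterative idea described above: start from housekeeping genes as controls, subtract the control-set mean from the log-ratios, and re-select controls at each step. The re-selection rule (genes whose normalized log-ratio stays within one standard deviation of zero) is a hypothetical stand-in, not the authors' exact criterion.

```python
import numpy as np

def iterative_normalize(M, housekeeping_idx, tol=1e-6, max_iter=50):
    """Iteratively normalize log-ratios M (one value per gene) with a control gene set.

    Starts from housekeeping genes and, at each step, re-selects as controls the
    genes whose normalized log-ratio stays close to zero (hypothetical rule).
    """
    M_norm = np.asarray(M, dtype=float).copy()
    controls = np.asarray(housekeeping_idx)
    for _ in range(max_iter):
        offset = M_norm[controls].mean()      # location estimate from current controls
        if abs(offset) < tol:                 # controls already centered at zero
            break
        M_norm = M_norm - offset              # shift so controls center at zero
        # re-select controls: genes whose normalized log-ratio stays near zero
        controls = np.where(np.abs(M_norm) < M_norm.std())[0]
    return M_norm, controls
```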

다층 퍼셉트론 기반 고해상도 위성영상의 상대 방사보정 (Relative Radiometric Normalization for High-Spatial Resolution Satellite Imagery Based on Multilayer Perceptron)

  • 서대교;어양담
    • 한국측량학회지
    • /
    • Vol. 36, No. 6
    • /
    • pp.515-523
    • /
    • 2018
  • Preprocessing is essential for obtaining consistent change-detection results from multi-temporal satellite imagery. In particular, preprocessing of the spectral values is carried out by radiometric correction, and relative radiometric normalization is generally used. However, most relative radiometric normalization methods assume a linear relationship between the two images and do not account for nonlinear spectral characteristics such as ecological differences. This study therefore proposes a relative radiometric normalization that assumes a nonlinear relationship and can jointly correct radiometric and ecological effects. In the proposed method, an input image and a reference image are selected, and radiometric control set samples are extracted by the no-change method. To take sufficient information into account, spectral indices are extracted in addition to the pixel values, and the nonlinear relationship is modeled with a multilayer perceptron. Finally, the proposed method was compared with existing relative radiometric normalization techniques, and both visual and quantitative evaluation confirmed that it outperforms them.
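
A minimal sketch of the nonlinear relative normalization described above, using scikit-learn's MLPRegressor as the multilayer perceptron. The function name and the use of raw band values as features are illustrative; the paper additionally feeds spectral indices and extracts the control samples with the no-change method.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def relative_normalize(subject_bands, reference_bands, nochange_mask):
    """Nonlinear relative radiometric normalization sketch.

    subject_bands, reference_bands: (H, W, B) arrays for the two dates.
    nochange_mask: boolean (H, W) mask of radiometric control (no-change) pixels.
    """
    H, W, B = subject_bands.shape
    X = subject_bands[nochange_mask]        # control samples from the input image
    y = reference_bands[nochange_mask]      # corresponding reference values
    mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
    mlp.fit(X, y)                           # learn the nonlinear mapping
    normalized = mlp.predict(subject_bands.reshape(-1, B)).reshape(H, W, B)
    return normalized
```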

New Normalization Methods using Support Vector Machine Regression Approach in cDNA Microarray Analysis

  • Sohn, In-Suk;Kim, Su-Jong;Hwang, Chang-Ha;Lee, Jae-Won
    • 한국생물정보학회:학술대회논문집
    • /
    • 한국생물정보시스템생물학회 2005년도 BIOINFO 2005
    • /
    • pp.51-56
    • /
    • 2005
  • There are many sources of systematic variation in cDNA microarray experiments that affect the measured gene expression levels, such as differences in labeling efficiency between the two fluorescent dyes. Print-tip lowess normalization is used in situations where dye biases can depend on overall spot intensity and/or spatial location within the array. However, print-tip lowess normalization performs poorly when the error variability for each gene is heterogeneous across intensity ranges. We propose new print-tip normalization methods based on support vector machine regression (SVMR) and support vector machine quantile regression (SVMQR). SVMQR was derived by employing the basic principle of the support vector machine (SVM) for the estimation of linear and nonlinear quantile regressions. We applied the proposed methods to previously published cDNA microarray data of apolipoprotein-AI-knockout (apoAI-KO) mice, diet-induced obese mice, and genistein-fed obese mice. From our statistical analysis, we found that the proposed methods perform better than the existing print-tip lowess normalization method.
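
A short sketch of the SVMR-style print-tip normalization described above, using scikit-learn's SVR in place of a custom implementation: within each print-tip group, M is regressed on A and the fitted trend is subtracted. The quantile variant (SVMQR) is not shown, and the kernel and penalty settings are illustrative.

```python
import numpy as np
from sklearn.svm import SVR

def svr_printtip_normalize(M, A, tip_ids):
    """Intensity-dependent normalization with SVR per print-tip group.

    M: log-ratios, A: average log-intensities, tip_ids: print-tip group labels.
    Replaces the lowess fit with an SVR fit of M on A within each tip group.
    """
    M = np.asarray(M, dtype=float)
    A = np.asarray(A, dtype=float)
    tip_ids = np.asarray(tip_ids)
    M_norm = np.empty_like(M)
    for tip in np.unique(tip_ids):
        idx = tip_ids == tip
        svr = SVR(kernel="rbf", C=1.0, epsilon=0.1)
        svr.fit(A[idx].reshape(-1, 1), M[idx])
        M_norm[idx] = M[idx] - svr.predict(A[idx].reshape(-1, 1))  # remove fitted trend
    return M_norm
```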


지역적 $X^2$-히스토그램과 정규화를 이용한 새로운 샷 경계 검출 (New Shot Boundary Detection Using Local $X^2$-Histogram and Normalization)

  • 신성윤
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 12, No. 2
    • /
    • pp.103-109
    • /
    • 2007
  • In this paper, shot boundaries are detected using a local $X^2$-histogram comparison method that incorporates sufficient spatial information, is more robust to camera and object motion, and produces more accurate results. We also present a normalization method that applies to the difference values a modified version of the log function and constant used in image processing to enhance image intensity values. Finally, a shot boundary detection algorithm is presented that detects boundaries based on the characteristics of general and abrupt shots.
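
A minimal sketch of the local $X^2$-histogram comparison described above, assuming 8-bit grayscale frames; the block layout, bin count, and the use of log1p as the log-based normalization are illustrative choices, not the paper's exact constants.

```python
import numpy as np

def local_chi2_difference(frame_a, frame_b, blocks=(4, 4), bins=64):
    """Local chi-square histogram difference between two consecutive grayscale frames.

    The frames are split into blocks, per-block histogram chi-square distances
    are summed, and a log-based normalization compresses the dynamic range.
    """
    H, W = frame_a.shape
    bh, bw = H // blocks[0], W // blocks[1]
    total = 0.0
    for i in range(blocks[0]):
        for j in range(blocks[1]):
            a = frame_a[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            b = frame_b[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            ha, _ = np.histogram(a, bins=bins, range=(0, 255))
            hb, _ = np.histogram(b, bins=bins, range=(0, 255))
            denom = ha + hb
            denom[denom == 0] = 1               # avoid division by zero
            total += np.sum((ha - hb) ** 2 / denom)
    return np.log1p(total)                      # log normalization of the difference
```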


An Efficiency Assessment for Reflectance Normalization of RapidEye Employing BRD Components of Wide-Swath satellite

  • Kim, Sang-Il;Han, Kyung-Soo;Yeom, Jong-Min
    • 대한원격탐사학회지
    • /
    • Vol. 27, No. 3
    • /
    • pp.303-314
    • /
    • 2011
  • Surface albedo is an important parameter of the surface energy budget, and its accurate quantification is of major interest to the global climate modeling community. In this paper, we therefore consider the direct solution of kernel-based bidirectional reflectance distribution function (BRDF) models for retrieval of normalized reflectance from a high-resolution satellite. BRD effects can be characterized from wide-swath satellite data such as SPOT/VGT (VEGETATION), which provide sufficient angular sampling, but high-resolution satellites cannot obtain sufficient angular sampling over a pixel within a short period because of their narrow-swath scanning, which makes it difficult to run a semi-empirical BRDF model for reflectance normalization of high-resolution imagery. The principal purpose of this study is to estimate the normalized reflectance of a high-resolution satellite (RapidEye) using BRDF components derived from SPOT/VGT. We use a semi-empirical BRDF model to estimate the BRDF components from SPOT/VGT daily (S1) data and to normalize the reflectance of the multispectral RapidEye sensor. The isotropic value, taken as the normalized reflectance, was closely related to the BRDF parameters and the kernels. We also present scatter plots of the relationship between the SPOT/VGT and RapidEye isotropic values. A linear regression analysis between the two was performed using the SPOT/VGT parameters (isotropic, geometric, and volumetric scattering values) and the RapidEye geometric and volumetric scattering kernel values. Because BRDF parameters are difficult to calculate directly from high-resolution satellites, we use the BRDF parameters of SPOT/VGT. Weights for the geometric value, the volumetric scattering value, and the error term were also determined through the regression models. As a result, the weighting obtained through linear regression produced good agreement: for all sites, the SPOT/VGT and RapidEye isotropic values were highly correlated (in terms of RMSE and bias) and generally very consistent.
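
A minimal sketch of the kernel-driven normalization described above: with volumetric and geometric kernel values computed from the RapidEye viewing and solar geometry, and BRDF parameters (f_vol, f_geo) taken from the wide-swath SPOT/VGT retrieval, the isotropic term is recovered as the normalized reflectance. The regression-based weighting step is not shown.

```python
import numpy as np

def normalize_reflectance(r_obs, k_vol, k_geo, f_vol, f_geo):
    """Kernel-driven BRDF normalization sketch (Ross-Li style semi-empirical model).

    Observed reflectance is modeled as
        r_obs = f_iso + f_vol * k_vol + f_geo * k_geo,
    so the isotropic term (used here as the normalized reflectance) is recovered
    by removing the volumetric and geometric kernel contributions.
    """
    r_obs = np.asarray(r_obs, dtype=float)
    f_iso = r_obs - (f_vol * np.asarray(k_vol) + f_geo * np.asarray(k_geo))
    return f_iso
```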

Towards Low Complexity Model for Audio Event Detection

  • Saleem, Muhammad;Shah, Syed Muhammad Shehram;Saba, Erum;Pirzada, Nasrullah;Ahmed, Masood
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 22, No. 9
    • /
    • pp.175-182
    • /
    • 2022
  • In our daily life, we come across different types of information, for example in multimedia and text formats. We all need different types of information for common routines such as watching or reading the news, listening to the radio, and watching different types of videos. However, we sometimes run into problems when a certain type of information is required. For example, someone listening to the radio wants to hear jazz, but all the radio channels play pop music mixed with advertisements; the listener gets stuck with pop music and gives up searching for jazz. Such a problem can be solved with an automatic audio classification system. Deep learning (DL) models can make this easier through audio classification, but they are expensive and difficult to deploy on edge devices such as the Nano BLE Sense or Raspberry Pi because they require substantial computational power, such as a graphics processing unit (GPU). To address this problem, we propose a low-complexity DL model for audio event detection (AED). We extract Mel-spectrograms of dimension 128×431×1 from the audio signals and apply normalization. Three data augmentation methods are applied: frequency masking, time masking, and mixup. We design a convolutional neural network (CNN) with spatial dropout, batch normalization, and separable 2D convolutions, inspired by VGGNet [1], and further reduce the model size by applying float16 quantization to the trained model. Experiments were conducted on the updated dataset provided by the Detection and Classification of Acoustic Scenes and Events (DCASE) 2020 challenge. Our model achieved a validation loss of 0.33 and an accuracy of 90.34% within a model size of 132.50 KB.
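
A sketch of the kind of low-complexity CNN described above, written with tf.keras: separable 2D convolutions, batch normalization, and spatial dropout on 128×431×1 Mel-spectrogram inputs. Filter counts and the number of classes are illustrative, not the paper's exact configuration.

```python
import tensorflow as tf

def build_aed_model(input_shape=(128, 431, 1), n_classes=10):
    """Low-complexity AED model sketch: separable convs + batch norm + spatial
    dropout, loosely following the VGG-style blocks described in the abstract."""
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    for filters in (16, 32, 64):                       # illustrative filter sizes
        x = tf.keras.layers.SeparableConv2D(filters, 3, padding="same",
                                            activation="relu")(x)
        x = tf.keras.layers.BatchNormalization()(x)
        x = tf.keras.layers.SpatialDropout2D(0.2)(x)
        x = tf.keras.layers.MaxPooling2D(2)(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

# Post-training float16 quantization (TFLite), as mentioned in the abstract:
#   converter = tf.lite.TFLiteConverter.from_keras_model(build_aed_model())
#   converter.optimizations = [tf.lite.Optimize.DEFAULT]
#   converter.target_spec.supported_types = [tf.float16]
#   tflite_model = converter.convert()
```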

도로기상차량으로 관측한 노면온도자료를 이용한 도로살얼음 취약 구간 산정 (Estimation of Road Sections Vulnerable to Black Ice Using Road Surface Temperatures Obtained by a Mobile Road Weather Observation Vehicle)

  • 박문수;강민수;김상헌;정현채;장성빈;유동길;류성현
    • 대기
    • /
    • Vol. 31, No. 5
    • /
    • pp.525-537
    • /
    • 2021
  • Black ice on road surfaces in winter tends to cause severe accidents. It is very difficult to detect black ice events in advance because of their local nature and their sensitivity to surface and upper-air meteorological variables. This study develops a methodology to detect road sections vulnerable to black ice using road surface temperature data obtained from a mobile road weather observation vehicle. Seven experiments were conducted on the route from Nam-Wonju IC to Nam-Andong IC (132.5 km) on the Jungang Expressway from December 2020 to February 2021. First, the temporal road surface temperature data were converted to spatial data with a 50 m resolution. The spatial road surface temperature was then normalized to zero mean and unit standard deviation using simple normalization, linear de-trending followed by normalization, and low-pass filtering followed by normalization. The resulting road thermal map was expressed in terms of road surface temperature differences. A road ice index was proposed using the normalized road temperatures and their horizontal differences. Road sections vulnerable to black ice were derived from the road ice indices and verified with respect to road geometry, sky view factor, and other factors. It was found that black ice can occur not only on bridges but also on roads with a low sky view factor. These results are expected to be applicable to black ice alarm services for drivers.
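
A minimal sketch of the normalization-plus-difference idea behind the road ice index described above. The combination rule and weighting below are hypothetical illustrations; the paper derives its index from the normalized 50 m road surface temperatures and their horizontal differences with its own formulation.

```python
import numpy as np

def road_ice_index(road_temp, weight=0.5):
    """Road ice index sketch from road surface temperatures on a regular
    (e.g. 50 m) spatial grid along the route.

    Temperatures are normalized to zero mean and unit standard deviation and
    combined with their along-road differences; the weighting is illustrative.
    """
    t = np.asarray(road_temp, dtype=float)
    t_norm = (t - t.mean()) / t.std()                 # simple normalization
    dt = np.gradient(t_norm)                          # along-road difference
    index = -(weight * t_norm + (1 - weight) * dt)    # colder and dropping -> higher risk
    return index
```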

2019 강릉-동해 산불 피해 지역에 대한 PlanetScope 영상을 이용한 지형 정규화 기법 분석 (Analysis on Topographic Normalization Methods for 2019 Gangneung-East Sea Wildfire Area Using PlanetScope Imagery)

  • 정민경;김용일
    • 대한원격탐사학회지
    • /
    • Vol. 36, No. 2_1
    • /
    • pp.179-197
    • /
    • 2020
  • Topographic normalization removes the topographic effects on brightness values that arise from the illumination source, sensor, and surface characteristics at the time of image acquisition; when pixels of the same land cover have different brightness values because of terrain conditions, the correction reduces this difference so that the pixels appear as if observed on flat terrain. Such topographic effects are generally large in mountainous terrain, so topographic normalization must be considered when using imagery over mountainous areas, for example for estimating wildfire-damaged areas. However, most previous studies analyzed topographic correction performance and its effect on classification accuracy for medium- and low-resolution satellite imagery, and topographic normalization for high-resolution multi-temporal imagery has not been sufficiently addressed. In this study, the optimal topographic normalization method for each band was evaluated and selected using PlanetScope imagery for rapid and accurate detection of wildfire-damaged areas in Korea. PlanetScope imagery, which provides daily global coverage at 3 m spatial resolution, is highly applicable to disaster damage assessment, where rapid image acquisition and processing are required. Seven widely used topographic normalization methods were implemented for comparison and applied to both pre- and post-fire images with different land-cover compositions, and a band-wise combination of optimal methods that can be used for comprehensive damage assessment was proposed. Change detection of the burned area was performed using vegetation indices computed after the proposed correction, and detection accuracy improved for both object-based and pixel-based methods. In addition, burn severity mapping confirmed the effect of topographic normalization on the continuous distribution of brightness values.
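
As one concrete example of the family of methods compared above, the following is a short sketch of the C-correction, a widely used topographic normalization; it is shown for illustration and is not necessarily the band-wise optimum identified by the authors.

```python
import numpy as np

def c_correction(band, cos_i, cos_sz):
    """C-correction topographic normalization sketch.

    band:   surface reflectance of one band
    cos_i:  cosine of the local solar incidence angle (from DEM slope/aspect)
    cos_sz: cosine of the solar zenith angle
    """
    band = np.asarray(band, dtype=float)
    cos_i = np.asarray(cos_i, dtype=float)
    # empirical c from the band-wise linear regression: band = a + b * cos_i
    b, a = np.polyfit(cos_i.ravel(), band.ravel(), 1)
    c = a / b
    return band * (cos_sz + c) / (cos_i + c)
```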

Human Motion Recognition Based on Spatio-temporal Convolutional Neural Network

  • Hu, Zeyuan;Park, Sange-yun;Lee, Eung-Joo
    • 한국멀티미디어학회논문지
    • /
    • Vol. 23, No. 8
    • /
    • pp.977-985
    • /
    • 2020
  • To address the problems of complex feature extraction and low accuracy in human action recognition, this paper proposes a network structure that combines the batch normalization algorithm with the GoogLeNet network model. Applying the batch normalization idea from image classification to action recognition, the algorithm is improved by normalizing the network's input training samples by mini-batch. In the convolutional network, RGB images serve as the spatial input and stacked optical flow as the temporal input; the spatial and temporal networks are then fused to obtain the final action recognition result. The architecture was trained and evaluated on the standard video action benchmarks UCF101 and HMDB51, achieving accuracies of 93.42% and 67.82%, respectively. The results show that the improved convolutional neural network yields a significant improvement in recognition rate and has clear advantages for action recognition.
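
A compact two-stream sketch of the architecture described above, written with tf.keras: a spatial stream on RGB frames and a temporal stream on stacked optical flow, each with batch normalization, fused by averaging the softmax scores. A small stand-in backbone replaces the BN-GoogLeNet (Inception) backbone the paper builds on.

```python
import tensorflow as tf

def two_stream_model(n_classes=101):
    """Two-stream spatio-temporal sketch with batch normalization and late fusion."""
    def stream(input_shape):
        inp = tf.keras.Input(shape=input_shape)
        x = inp
        for filters in (32, 64, 128):                  # small stand-in backbone
            x = tf.keras.layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
            x = tf.keras.layers.BatchNormalization()(x)  # mini-batch normalization
            x = tf.keras.layers.ReLU()(x)
            x = tf.keras.layers.MaxPooling2D(2)(x)
        x = tf.keras.layers.GlobalAveragePooling2D()(x)
        out = tf.keras.layers.Dense(n_classes, activation="softmax")(x)
        return tf.keras.Model(inp, out)

    spatial = stream((224, 224, 3))       # single RGB frame
    temporal = stream((224, 224, 20))     # 10 stacked flow fields (x, y components)
    fused = tf.keras.layers.Average()([spatial.output, temporal.output])
    return tf.keras.Model([spatial.input, temporal.input], fused)
```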

뇌기능 양전자방출단층촬영영상 분석 기법의 방법론적 고찰 (Methodological Review on Functional Neuroimaging Using Positron Emission Tomography)

  • 박해정
    • Nuclear Medicine and Molecular Imaging
    • /
    • Vol. 41, No. 2
    • /
    • pp.71-77
    • /
    • 2007
  • Advances in neuroimaging techniques have greatly influenced recent brain research. Among the various neuroimaging modalities, positron emission tomography (PET) has played a key role in molecular neuroimaging, although functional MRI has taken over its role in cognitive neuroscience. As analysis techniques for PET data become more sophisticated, the complexity of the methods keeps increasing. Despite the wide use of neuroimaging techniques, the assumptions and limitations of the procedures are rarely discussed for clinicians and researchers, even though they can be critical for the reliability and interpretation of the results. In this paper, the steps of voxel-based statistical analysis of PET, including preprocessing, intensity normalization, spatial normalization, and partial volume correction, are revisited in terms of their principles and limitations. In addition, new image analysis techniques such as surface-based PET analysis, correlational analysis, and multimodal imaging combining PET with DTI, TMS, or EEG are also discussed.
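
As an illustration of one step in the voxel-based pipeline listed above, the following sketch shows global intensity normalization by proportional scaling; spatial normalization and partial volume correction require registration and segmentation tools (e.g., SPM) and are not shown.

```python
import numpy as np

def proportional_scaling(pet_volumes, target_global=50.0):
    """Global intensity normalization by proportional scaling.

    pet_volumes: (n_subjects, X, Y, Z) array of PET images.
    Each image is rescaled so that its global mean equals target_global,
    making voxel values comparable across subjects before voxel-wise statistics.
    """
    pet_volumes = np.asarray(pet_volumes, dtype=float)
    global_means = pet_volumes.reshape(len(pet_volumes), -1).mean(axis=1)
    return pet_volumes * (target_global / global_means)[:, None, None, None]
```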