• Title/Summary/Keyword: Robust method


1H Solid-state NMR Methodology Study for the Quantification of Water Content of Amorphous Silica Nanoparticles Depending on Relative Humidity (상대습도에 따른 비정질 규산염 나노입자의 함수량 정량 분석을 위한 1H 고상 핵자기 공명 분광분석 방법론 연구)

  • Oh, Sol Bi;Kim, Hyun Na
    • Korean Journal of Mineralogy and Petrology
    • /
    • v.34 no.1
    • /
    • pp.31-40
    • /
    • 2021
  • The hydrogen in nominally anhydrous minerals is known to be associated with lattice defects, but it can also exist in the form of water and hydroxyl groups on the large surfaces of nanoscale particles. In this study, we investigate the effectiveness of 1H solid-state nuclear magnetic resonance (NMR) spectroscopy as a robust experimental method to quantify the hydrogen atomic environments of amorphous silica nanoparticles under varying relative humidity. Amorphous silica nanoparticles were packed into NMR rotors in a temperature- and humidity-controlled glove box, then stored under different atmospheric conditions of 25% and 70% relative humidity for 2~10 days until the 1H NMR experiments; only a slight difference was observed in the 1H NMR spectra. These results indicate that the amount of hydrous species in a sample packed in the NMR rotor is hardly changed by the external atmosphere. The hydrogen content, especially the amount of physisorbed water, may vary within a range of ~10% due to the temporal and spatial inhomogeneity of relative humidity in the glove box. Quantitative analysis of the 1H NMR spectra shows that the hydrogen content of amorphous silica nanoparticles increases linearly with relative humidity. These results imply that the sealing capability of the NMR rotor is sufficient to preserve the hydrous environments of samples, and is suitable for quantitative measurement of the water content of ultrafine nominally anhydrous minerals as a function of atmospheric relative humidity. We expect that the 1H solid-state NMR method is suitable for systematically investigating the effects of surface area and crystallinity on the water content of diverse nano-sized nominally anhydrous minerals under varying relative humidity.
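The reported linear dependence of water content on relative humidity can be illustrated with a simple least-squares fit. The (RH, water content) pairs below are hypothetical values chosen only to show the fitting step; they are not data from the paper.

```python
import numpy as np

# Hypothetical (relative humidity %, water content wt%) pairs
# illustrating the reported linear trend; NOT data from the paper.
rh = np.array([25.0, 40.0, 55.0, 70.0])
water_wt_pct = np.array([1.1, 1.6, 2.2, 2.7])

# Least-squares line: water = slope * RH + intercept
slope, intercept = np.polyfit(rh, water_wt_pct, 1)

def predict_water(rh_value):
    """Predicted water content (wt%) at a given relative humidity."""
    return slope * rh_value + intercept
```

A positive fitted slope reproduces the paper's qualitative finding that hydrogen content grows linearly with relative humidity.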

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.163-177
    • /
    • 2019
  • As smartphones have become widely used, human activity recognition (HAR) tasks for recognizing the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognition of an individual user's simple body movements to recognition of low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors such as the accelerometer, magnetic field sensor, and gyroscope are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, we propose a deep-learning-based method for detecting accompanying status using only multimodal physical sensor data from the accelerometer, magnetic field sensor, and gyroscope. Accompanying status redefines part of user interaction behavior as whether the user is accompanying an acquaintance at close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompaniment and conversation is proposed. First, a data preprocessing method is introduced, consisting of time synchronization of multimodal data from different physical sensors, data normalization, and sequence data generation. We applied nearest-neighbor interpolation to synchronize the timestamps of data collected from different sensors.
Normalization was performed on each x, y, and z axis value of the sensor data, and sequence data were generated with the sliding-window method. The sequence data then became the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consisted of 3 convolutional layers and had no pooling layer, in order to preserve the temporal information of the sequence data. Next, LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features. The LSTM recurrent networks consisted of two layers, each with 128 cells. Finally, the extracted features were classified by a softmax classifier. The loss function was cross entropy, and the weights of the model were randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model was trained with the adaptive moment estimation (Adam) optimization algorithm and a mini-batch size of 128. We applied dropout to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001 and decayed exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data, and smartphone data were collected from a total of 18 subjects. Using these data, the model classified accompaniment and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network. In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will study transfer learning methods that enable trained models tailored to the training data to transfer to evaluation data that follows a different distribution.
It is expected that a model can be obtained that exhibits robust recognition performance against changes in the data that were not considered during model training.
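The preprocessing the abstract describes — per-axis normalization followed by sliding-window sequence generation — can be sketched as follows. The window length of 128 and stride of 64 are illustrative assumptions, not parameters reported by the authors.

```python
import numpy as np

def make_sequences(data, window=128, stride=64):
    """Per-axis z-score normalization followed by sliding-window
    segmentation, as in the preprocessing the abstract describes.

    data: (n_samples, n_channels) array of time-synchronized sensor axes
    returns: (n_windows, window, n_channels) array for the CNN input
    """
    # Normalize each x/y/z axis (column) independently.
    mean = data.mean(axis=0)
    std = data.std(axis=0)
    std[std == 0] = 1.0  # guard against constant channels
    norm = (data - mean) / std

    # Sliding windows with a fixed stride.
    windows = [norm[s:s + window]
               for s in range(0, len(norm) - window + 1, stride)]
    return np.stack(windows)
```

Each resulting window would feed the 3-layer CNN, whose feature maps then go to the two 128-cell LSTM layers described in the abstract.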

Studies of Molecular Breeding Technique Using Genome Information on Edible Mushrooms

  • Kong, Won-Sik;Woo, Sung-I;Jang, Kab-Yeul;Shin, Pyung-Gyun;Oh, Youn-Lee;Kim, Eun-sun;Oh, Min-Jee;Park, Young-Jin;Lee, Chang-Soo;Kim, Jong-Guk
    • 한국균학회소식:학술대회논문집
    • /
    • 2015.05a
    • /
    • pp.53-53
    • /
    • 2015
  • Agrobacterium tumefaciens-mediated transformation (ATMT) of Flammulina velutipes was used to produce a diverse pool of transformants for discovering the functions of genes vital for variation in color, spore pattern, and cellulolytic activity. Furthermore, the transformant pool will serve as a valuable genetic resource for studying gene functions. Agrobacterium-mediated transformation was conducted to generate intentional mutants of F. velutipes strain KACC42777: Agrobacterium tumefaciens AGL-1 harboring pBGgHg was used to transform F. velutipes. This method was used to identify functional genes of F. velutipes. Inverse PCR was used to recover the chromosomal DNA segments tagged by T-DNA insertion, followed by sequence analysis in F. velutipes. However, obtaining diverse morphological mutants was difficult because of the dikaryotic nature of the mushroom; monokaryotic fruiting variants into which genes of compatible mating types were introduced were needed. In this study, next-generation sequencing data were generated from 28 strains of Flammulina velutipes with different phenotypes using the Illumina HiSeq platform. Filtered short reads were initially aligned to the reference genome (KACC42780) to construct a SNP matrix, and a phylogenetic tree was then built from the validated SNPs. The inferred tree showed that white- and brown-fruitbody-forming strains were generally separated, although three brown strains, 4103, 4028, and 4195, were grouped with white ones. This topological relationship consistently reappeared even when we used randomly selected SNPs. Group I, containing strains 4062, 4148, and 4195, and group II, containing strains 4188, 4190, and 4194, formed early-divergent lineages with robust nodal support, suggesting that they are groups independent of the members of the main clades.
To elucidate the distinction between white-fruitbody-forming strains isolated from Korea and Japan, phylogenetic analysis was performed on their SNP data with group I members as the outgroup; however, no significant genetic variation was found. A total of 28 strains of Flammulina velutipes were analyzed to identify the genomic regions responsible for the white fruiting body. NGS data were produced on the Illumina HiSeq platform, and short reads filtered by quality score and read length were mapped onto the reference genome (KACC42780) to detect SNPs between the white- and brown-fruitbody-forming strains. There is a high possibility that SNPs among the white strains are detected as homozygous, because the white phenotype is recessive in F. velutipes. We therefore constructed a SNP matrix within the 8 white strains. SNPs discovered between mono3 and mono19, the parental monokaryotic strains of the white strain 4210, were excluded from the candidates. If the genotypes of SNPs detected between white and brown strains were identical to those in the mono3 and mono19 strains, they were included as priority candidates. Finally, if more than 5 candidate SNPs were localized in a single gene, we regarded them as possibly related to white color. In the F. velutipes genome, regions on chr01, chr04, chr07, and chr11 were identified as associated with white fruitbody formation. These SNPs can serve as identification markers for white- and brown-fruitbody strains of F. velutipes, and molecular markers can be developed to identify colored strains and discriminate Korean white varieties from Japanese ones.
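The candidate-SNP filter described above (SNPs homozygous across the 8 white strains, exclusion of SNPs segregating between the parental monokaryons mono3 and mono19, and a threshold of candidate SNPs per gene) can be sketched in plain Python. The data layout below is a hypothetical simplification of a real SNP matrix.

```python
from collections import Counter

def candidate_genes(snps, min_snps_per_gene=5):
    """Sketch of the candidate-SNP filter the abstract describes.

    snps: list of dicts with keys:
      'gene'          - gene id the SNP falls in
      'white_alleles' - alleles across the 8 white strains
      'mono3', 'mono19' - alleles of the parental monokaryons
    A SNP is kept if it is fixed (homozygous) across all white strains
    and does not segregate between mono3 and mono19 (those SNPs were
    excluded). Genes accumulating >= min_snps_per_gene candidate SNPs
    are reported as possibly related to the white color.
    """
    counts = Counter()
    for snp in snps:
        fixed_in_white = len(set(snp['white_alleles'])) == 1
        parental_diff = snp['mono3'] != snp['mono19']
        if fixed_in_white and not parental_diff:
            counts[snp['gene']] += 1
    return [g for g, n in counts.items() if n >= min_snps_per_gene]
```

The real analysis also prioritized SNPs whose white-vs-brown genotypes matched the parental monokaryons; that step is omitted here for brevity.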


A PLS Path Modeling Approach on the Cause-and-Effect Relationships among BSC Critical Success Factors for IT Organizations (PLS 경로모형을 이용한 IT 조직의 BSC 성공요인간의 인과관계 분석)

  • Lee, Jung-Hoon;Shin, Taek-Soo;Lim, Jong-Ho
    • Asia pacific journal of information systems
    • /
    • v.17 no.4
    • /
    • pp.207-228
    • /
    • 2007
  • Measurement of Information Technology (IT) organizations' activities was long limited mainly to financial indicators. However, as information systems have taken on multifarious functions, a number of studies have explored measurement methodologies that combine financial measures with new non-financial ones. In particular, research on the IT Balanced Scorecard (BSC), a concept adapted from the BSC to measure IT activities, has been conducted in recent years. The BSC provides more advantages than the mere integration of non-financial measures into a performance measurement system. The core of the BSC rests on the cause-and-effect relationships between measures that allow prediction of value chain performance, communication and realization of the corporate strategy, and incentive-controlled actions. More recently, BSC proponents have focused on the need to tie measures together into a causal chain of performance and to test the validity of these hypothesized effects to guide the development of strategy. Kaplan and Norton [2001] argue that one of the primary benefits of the balanced scorecard is its use in gauging the success of strategy. Norreklit [2000] insists that the cause-and-effect chain is central to the balanced scorecard; it is equally central to the IT BSC. However, the relationship between information systems and enterprise strategies, as well as the connections between various IT performance measurement indicators, has received little study. Ittner et al. [2003] report that 77% of all surveyed companies with an implemented BSC place no or only little emphasis on soundly modeled cause-and-effect relationships, despite the importance of cause-and-effect chains as an integral part of the BSC. This shortcoming can be explained by one theoretical and one practical reason [Blumenberg and Hinz, 2006].
From a theoretical point of view, causalities within the BSC method and their application are only vaguely described by Kaplan and Norton. From a practical standpoint, modeling corporate causalities is a complex task due to tedious data acquisition and subsequent reliability maintenance. However, cause-and-effect relationships are an essential part of BSCs because they differentiate performance measurement systems like BSCs from simple key performance indicator (KPI) lists. KPI lists present an ad hoc collection of measures to managers but do not allow a comprehensive view of corporate performance. Instead, performance measurement systems like BSCs try to model the relationships of the underlying value chain as cause-and-effect relationships. Therefore, to overcome the deficiencies of causal modeling in the IT BSC, sound and robust causal modeling approaches are required in both theory and practice. The purpose of this study is to suggest critical success factors (CSFs) and KPIs for measuring the performance of IT organizations and to empirically validate the causal relationships between those CSFs. For this purpose, we define four BSC perspectives for IT organizations following Van Grembergen's study [2000]. The Future Orientation perspective represents the human and technology resources needed by IT to deliver its services. The Operational Excellence perspective represents the IT processes employed to develop and deliver the applications. The User Orientation perspective represents the user evaluation of IT. The Business Contribution perspective captures the business value of the IT investments. Each of these perspectives has to be translated into corresponding metrics and measures that assess the current situation. This study suggests 12 CSFs for the IT BSC based on previous IT BSC studies and COBIT 4.1; these CSFs comprise 51 KPIs.
We define the cause-and-effect relationships among the BSC CSFs for IT organizations as follows: the Future Orientation perspective has positive effects on the Operational Excellence perspective; the Operational Excellence perspective, in turn, has positive effects on the User Orientation perspective; and finally, the User Orientation perspective has positive effects on the Business Contribution perspective. This research tests the validity of these hypothesized causal effects and the sub-hypothesized causal relationships. For this purpose, we used the Partial Least Squares approach to Structural Equation Modeling (PLS path modeling) to analyze the multiple IT BSC CSFs. PLS path modeling has special abilities that make it more appropriate than other techniques, such as multiple regression and LISREL, when analyzing small sample sizes. Its use has been gaining interest among IS researchers because of its ability to model latent constructs under conditions of non-normality and with small to medium sample sizes (Chin et al., 2003). The empirical results of our study using PLS path modeling show that the hypothesized causal effects in the IT BSC are partially significant.
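The hypothesized causal chain (Future Orientation → Operational Excellence → User Orientation → Business Contribution) can be illustrated with a deliberately simplified estimation: one standardized composite score per perspective and a per-link OLS slope. Real PLS path modeling iteratively reweights the indicators of each latent construct, so this is only a sketch of the idea, run on simulated scores.

```python
import numpy as np

def path_coefficients(scores, chain):
    """Estimate each link of a single causal chain by simple OLS on
    standardized composite scores (a simplification of PLS path
    modeling, which iteratively reweights construct indicators).

    scores: dict mapping construct name -> 1-D array of composite scores
    chain:  ordered list of construct names forming the causal chain
    returns: dict {(predictor, outcome): standardized path coefficient}
    """
    def z(x):
        return (x - x.mean()) / x.std()

    coefs = {}
    for pred, outcome in zip(chain, chain[1:]):
        x, y = z(scores[pred]), z(scores[outcome])
        # With standardized variables, the OLS slope equals the
        # Pearson correlation between the two composites.
        coefs[(pred, outcome)] = float((x * y).mean())
    return coefs
```

Positive, sizable coefficients along all three links would correspond to support for the chain hypothesized in the abstract.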

Evaluation of Viral Inactivation Efficacy of a Continuous Flow Ultraviolet-C Reactor (UVivatec) (연속 유동 Ultraviolet-C 반응기(UVivatec)의 바이러스 불활화 효과 평가)

  • Bae, Jung-Eun;Jeong, Eun-Kyo;Lee, Jae-Il;Lee, Jeong-Im;Kim, In-Seop;Kim, Jong-Su
    • Microbiology and Biotechnology Letters
    • /
    • v.37 no.4
    • /
    • pp.377-382
    • /
    • 2009
  • Viral safety is an important prerequisite for clinical preparations of all biopharmaceuticals derived from plasma, cell lines, or tissues of human or animal origin. To ensure safety, implementation of multiple viral clearance (inactivation and/or removal) steps has been highly recommended for the manufacturing of biopharmaceuticals. Of the possible viral clearance strategies, ultraviolet-C (UVC) irradiation has been known as an effective viral inactivation method. However, it has been dismissed by the biopharmaceutical industry because of the potential for protein damage and the difficulty of delivering uniform doses. Recently, a continuous flow UVC reactor (UVivatec) was developed to provide highly efficient mixing and maximize virus exposure to the UV light. To investigate the effectiveness of UVivatec in inactivating viruses without causing significant protein damage, the feasibility of the UVC irradiation process was studied with a commercial therapeutic protein. Recovery yield at the optimized irradiation dose of 3,000 J/m² was more than 98%. The efficacy and robustness of the UVC reactor were evaluated with regard to the inactivation of human immunodeficiency virus (HIV), hepatitis A virus (HAV), bovine herpes virus (BHV), bovine viral diarrhea virus (BVDV), porcine parvovirus (PPV), bovine parvovirus (BPV), minute virus of mice (MVM), reovirus type 3 (REO), and bovine parainfluenza virus type 3 (BPIV). Non-enveloped viruses (HAV, PPV, BPV, MVM, and REO) were completely inactivated to undetectable levels by 3,000 J/m² irradiation. Enveloped viruses such as HIV, BVDV, and BPIV were likewise completely inactivated to undetectable levels. However, BHV was incompletely inactivated, with slight residual infectivity remaining even after 3,000 J/m² irradiation.
The log reduction factors achieved by UVC irradiation were ≥3.89 for HIV, ≥5.27 for HAV, 5.29 for BHV, ≥5.96 for BVDV, ≥4.37 for PPV, ≥3.55 for BPV, ≥3.51 for MVM, ≥4.20 for REO, and ≥4.15 for BPIV. These results indicate that UVC irradiation using UVivatec was very effective and robust in inactivating all the viruses tested.
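A log reduction factor is the base-10 logarithm of the ratio of virus load before and after the clearance step. A minimal sketch, using hypothetical titers rather than the paper's raw data:

```python
import math

def log_reduction_factor(initial_titer, final_titer):
    """Log10 reduction factor: how many orders of magnitude of
    infectious virus the clearance step removed or inactivated."""
    return math.log10(initial_titer / final_titer)

# Hypothetical example (not the paper's raw titers): a load of
# 10^6.5 TCID50/ml reduced to 10^1.2 TCID50/ml gives an LRF of 5.3.
lrf = log_reduction_factor(10 ** 6.5, 10 ** 1.2)
```

When the final titer falls below the detection limit, the factor is reported as a lower bound (the "≥" values quoted above).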

The Impacts of Smoking Bans on Smoking in Korea (금연법 강화가 흡연에 미치는 영향)

  • Kim, Beomsoo;Kim, Ahram
    • KDI Journal of Economic Policy
    • /
    • v.31 no.2
    • /
    • pp.127-153
    • /
    • 2009
  • There is growing concern about the potentially harmful effects of second-hand or environmental tobacco smoke. As a result, workplace smoking bans have become more prevalent worldwide. In Korea, workplace smoking ban policy became more restrictive in 2003, when the national health promotion law was amended. The new law requires that all office buildings larger than 3,000 square meters (multi-purpose buildings larger than 2,000 square meters) be smoke-free; as a result, many indoor offices became non-smoking areas. Previous studies in other countries have often found contradictory answers on the effects of workplace smoking bans on smoking behavior, and no study in Korea had yet examined the causal impact of smoking bans on smoking behavior; the situation in Korea might differ from that of other countries. Using the 2001 and 2005 Korea National Health and Nutrition Surveys, which are representative of the Korean population, we examine the impacts of the law change on current smoking and on cigarettes smoked per day. The amended law affected the whole country at the same time, and the smoking rate was already declining before the legislation was updated, so the challenge is to tease out the true impact alone. We compare indoor working occupations, which are constrained by the law change, with outdoor working occupations, which are less affected. Since the data were collected before (2001) and after (2005) the law change for both treated (indoor occupations) and control (outdoor occupations) groups, we use the difference-in-differences method. We restrict our sample to working age (between 20 and 65), since this is the population relevant to the workplace smoking ban policy.
We further restrict the sample to indoor occupations (executive or administrative, and administrative support) and outdoor occupations (sales and low-skilled workers) after dropping the unemployed and those working for the military, since it is unclear whether those occupations belong to the treated or control group. This classification was supported by the answers on workplace smoking ban policy available only in the 2005 survey: sixty-eight percent of indoor occupations reported an office smoking ban policy, compared with forty percent of outdoor occupations. The estimated impact on current smoking is a 4.1-percentage-point decline, and cigarettes per day show a statistically significant decline of 2.5 cigarettes. Given average consumption of sixteen cigarettes per day among smokers, this is roughly a sixteen percent decline, which is substantial. We tested robustness using the same sample across the two surveys and also using a tobit model; our results are robust to both concerns. It is possible that our measures of the treated and control groups suffer from measurement error, which would lead to attenuation bias; however, we find statistically significant impacts, which might be a lower bound on the true estimates. The magnitude of our findings is not much different from previous findings of significant impacts: previous estimates for cigarettes per day varied from 1.37 to 3.9, and for current smoking from 1 to 7.8 percentage points.
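The difference-in-differences estimator used here subtracts the control group's before-after change from the treated group's before-after change. A minimal sketch with hypothetical smoking rates (not the paper's estimates):

```python
def did_estimate(treat_before, treat_after, ctrl_before, ctrl_after):
    """Difference-in-differences: the change in the treated group
    net of the change in the control group, which absorbs the
    common time trend."""
    return (treat_after - treat_before) - (ctrl_after - ctrl_before)

# Hypothetical smoking rates (%), not the paper's data: indoor
# workers fall from 60 to 52 while outdoor workers fall from 62 to 58.
effect = did_estimate(60.0, 52.0, 62.0, 58.0)  # -4.0 percentage points
```

In practice the paper estimates this in a regression with covariates, but the identifying contrast is the same four-cell comparison.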


Technical Efficiency in Korea: Interindustry Determinants and Dynamic Stability (기술적(技術的) 효율성(效率性)의 결정요인(決定要因)과 동태적(動態的) 변화(變化))

  • Yoo, Seong-min
    • KDI Journal of Economic Policy
    • /
    • v.12 no.4
    • /
    • pp.21-46
    • /
    • 1990
  • This paper, a sequel to Yoo and Lee (1990), attempts to investigate the interindustry determinants of technical efficiency in Korea's manufacturing industries, and also to conduct an exploratory analysis of the stability of technical efficiency over time. The hypotheses set forth in this paper are mostly drawn from the existing literature on technical efficiency. They are, however, revised and viewed in a new light, wherever possible, to accommodate Korea-specific conditions. The set of regressors used in the cross-sectional analysis is chosen, and the hypotheses are posed, in such a way that our results can be compared with those of similar studies conducted for the U.S. and Japan by Caves and Barton (1990) and Uekusa and Torii (1987), respectively. It is interesting to observe a certain degree of similarity as well as difference between the cross-section evidence on Korea's manufacturing industries and that on the U.S. and Japanese industries. As for the similarities, we find positive and significant effects on technical efficiency of the relative size of production and the extent of specialization in production, and a negative and significant effect of variation in the capital-labor ratio within industries. The curvature influence of the concentration ratio on technical efficiency is also confirmed in the Korean case. There are differences, too. We cannot find any significant effects of capital vintage, R&D, or foreign competition on technical efficiency, all of which were shown to be robust determinants of technical efficiency in the U.S. case. We note, however, that the variables measuring the capital vintage effect, R&D, and the degree of foreign competition in Korean markets are suspected to suffer from serious measurement errors incurred in data collection and/or conversion of the industrial classification system into the KSIC (Korea Standard Industrial Classification) system.
Thus, we are reluctant to accept the findings on the effects of these variables as definitive conclusions about Korea's industrial organization. Another finding of interest is that the cross-industry evidence becomes consistently stronger when we use efficiency estimates based on gross output instead of value added, which provides an ex post empirical criterion for choosing between the two output measures in estimating the production frontier. We also conduct exploratory analyses of the stability of the estimates of technical efficiency in Korea's manufacturing industries. Though the method of testing stability employed in this paper is by no means complete, we cannot find strong evidence that our efficiency estimates are stable over time. The outcome is both surprising and disappointing. We can also show that the instability of technical efficiency over time is partly explained by the way we constructed our efficiency measures. To the extent that our efficiency estimates depend on the shape of the empirical distribution of plants in the input-output space, movements of the production frontier over time are not reflected in the estimates, and it is possible to associate a higher level of technical efficiency with a downward movement of the production frontier over time, and so on. Thus, we find that efficiency measures that take into account not only the distributional changes but also shifts of the production frontier over time increase the extent of stability and are more appropriate for use in a dynamic context. The remaining portion of the instability of technical efficiency over time is not explained satisfactorily in this paper, and future research should address this question.


Preliminary Study on the MR Temperature Mapping using Center Array-Sequencing Phase Unwrapping Algorithm (Center Array-Sequencing 위상펼침 기법의 MR 온도영상 적용에 관한 기초연구)

  • Tan, Kee Chin;Kim, Tae-Hyung;Chun, Song-I;Han, Yong-Hee;Choi, Ki-Seung;Lee, Kwang-Sig;Jun, Jae-Ryang;Eun, Choong-Ki;Mun, Chi-Woong
    • Investigative Magnetic Resonance Imaging
    • /
    • v.12 no.2
    • /
    • pp.131-141
    • /
    • 2008
  • Purpose : To investigate the feasibility and accuracy of proton resonance frequency (PRF) shift-based magnetic resonance (MR) temperature mapping utilizing a self-developed center array-sequencing phase unwrapping (PU) method for non-invasive temperature monitoring. Materials and Methods : A computer simulation of the PU algorithm was carried out for performance evaluation before its application to MR thermometry. The MR experiments were conducted in two stages, namely a PU experiment and a temperature mapping experiment based on the PU technique, with all image postprocessing implemented in MATLAB. A 1.5 T MR scanner employing a knee coil with a T2* GRE (gradient recalled echo) pulse sequence was used throughout the experiments. Various subjects, such as a water phantom, an orange, and an agarose gel phantom, were used to assess the self-developed PU algorithm. The MR temperature mapping experiment was attempted on the agarose gel phantom only, with a custom-made thermoregulating water pump as the heating source. Heat was delivered to the phantom via hot-water circulation while temperature variation was monitored with a T-type thermocouple. The PU program was applied to the reconstructed wrapped phase images prior to mapping the temperature distribution of the subjects. As the temperature change is directly proportional to the phase difference map, the absolute temperature could be estimated by adding the computed temperature difference to the measured ambient temperature of the subjects. Results : The PU technique successfully removed the phase-wrapping artifacts in MR phase images of the various subjects, producing a smooth and continuous phase map and thus a more reliable temperature map. Conclusion : This work presented a rapid and robust self-developed center array-sequencing PU algorithm feasible for MR temperature mapping based on the PRF phase shift property.
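In PRF-shift thermometry, the temperature change is proportional to the (unwrapped) phase difference: ΔT = Δφ / (2π · γ · α · B0 · TE), where γ is the proton gyromagnetic ratio and α ≈ -0.01 ppm/°C is the PRF thermal coefficient. A sketch of this conversion follows; the echo time and field strength used below are illustrative values, since the paper's sequence parameters are not given in the abstract.

```python
import math

GAMMA_HZ_PER_T = 42.58e6   # proton gyromagnetic ratio (Hz/T)
ALPHA_PPM_PER_C = -0.01    # PRF thermal coefficient (ppm per deg C)

def prf_delta_temperature(delta_phase_rad, b0_tesla, te_seconds):
    """Temperature change from an unwrapped PRF phase shift:
    dT = dphi / (2*pi * gamma * alpha * B0 * TE).

    The phase image must be unwrapped first (the role of the
    paper's center array-sequencing PU algorithm)."""
    alpha = ALPHA_PPM_PER_C * 1e-6
    return delta_phase_rad / (2 * math.pi * GAMMA_HZ_PER_T * alpha
                              * b0_tesla * te_seconds)
```

With the negative α, heating shows up as a negative phase shift, which this conversion maps back to a positive ΔT.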


Determination of an Optimal Time Point for Analyzing Transcriptional Activity and Analysis of Transcripts of Avian Influenza Virus H9N2 in Cultured Cells (배양세포에서 Semi-quantitative RT-PCR에 의한 조류인플루엔자 H9N2의 전사활성 분석 최적 시기 결정 및 전사체 분석)

  • Na, Gi-Youn;Lee, Young-Min;Byun, Sung-June;Jeon, Ik-Soo;Park, Jong-Hyeon;Cho, In-Soo;Joo, Yi-Seok;Lee, Yun-Jung;Kwon, Jun-Hun;Koo, Yong-Bum
    • Korean Journal of Microbiology
    • /
    • v.45 no.3
    • /
    • pp.286-290
    • /
    • 2009
  • The transcription of avian influenza virus mRNA is temporally regulated during infection. Therefore, measurement of transcript levels in host cells should be performed before viral release, because errors can occur in the analysis of transcript levels if viruses released from infected cells re-infect other cells. In this study, the timing of viral release was determined by measuring the level of viral RNA from viruses released from the H9N2-infected chicken fibroblast cell line UMNSAH/DF-1 by semi-quantitative RT-PCR. The viral genomic RNA was isolated together with mouse total RNA, which was added to the collected medium as a carrier to monitor viral RNA recovery and to use its GAPDH as an internal control for normalizing both the reverse transcription and the PCR reactions. Viral release of H9N2 from the chicken fibroblast cell line UMNSAH/DF-1 was found to occur between 16 and 20 h after infection. We measured all 8 viral mRNA levels. Of the 8 transcripts, the 7 species of viral mRNA encoding HA, NA, PB1, PB2, NP, M, and NS (all except the PA mRNA) showed robust amplification, indicating that these mRNAs can be used as amplification targets for measuring transcript levels. Altogether, these results suggest that the method in this study can be used for screening antiviral materials directed against the viral RNA polymerase as a therapeutic target.
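Normalization against an internal control like GAPDH is typically done by dividing the target band intensity by the control intensity, then expressing the ratio relative to a reference sample. This relative-to-reference step is a common convention assumed here, not a detail stated in the abstract; a minimal sketch:

```python
def normalized_level(target_signal, gapdh_signal,
                     ref_target, ref_gapdh):
    """Semi-quantitative RT-PCR normalization sketch: the target
    band intensity is divided by the internal-control (GAPDH)
    intensity to correct for RT and PCR efficiency, then expressed
    relative to a reference sample measured the same way."""
    return (target_signal / gapdh_signal) / (ref_target / ref_gapdh)
```

A value of 1.0 means the target transcript level matches the reference sample after correcting for recovery and reaction efficiency.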

The Research to Correct Overestimation in TOF-MRA for Severity of Cerebrovascular Stenosis (3D-SPACE T2 기법에 의한 TOF-MRA검사 시 발생하는 혈관 내 협착 정도의 측정 오류 개선에 관한 연구)

  • Han, Yong Su;Kim, Ho Chul;Lee, Dong Young;Lee, Su Cheol;Ha, Seung Han;Kim, Min Gi
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.12
    • /
    • pp.180-188
    • /
    • 2014
  • Accurate diagnosis and prompt treatment are very important in cerebrovascular disease, i.e., stenosis or occlusion, which can be caused by risk factors such as poor dietary habits, insufficient exercise, and obesity. Time-of-flight magnetic resonance angiography (TOF-MRA), well known as a diagnostic method for cerebrovascular disease that requires no contrast agent, is the most representative and reliable technique. Nevertheless, it still suffers from measurement errors (known as overestimation) in the length of stenosis and the area of occlusion in cerebral infarction, which is built up by the accumulation and rupture of plaques generated by hemodynamic turbulence. The purpose of this study is to show the clinical feasibility of 3D-SPACE T2, which is improved by exploiting the signal attenuation effects of fluid velocity, in the diagnosis of cerebrovascular disease. To model stenosis, strictures of different degrees (40%, 50%, 60%, and 70%) and a simulated blood stream (normal saline) at different flow rates (0.19 ml/sec, 1.5 ml/sec, 2.1 ml/sec, and 2.6 ml/sec) were produced using dialysis. Cross-examinations were performed with 3D-SPACE T2 and TOF-MRA (16 times each), and the accuracy of the measured stenosis length was compared under all experimental conditions. 3D-SPACE T2 was superior to TOF-MRA in the accuracy of stenosis-length measurements, and it was more robust than TOF-MRA for fast flow and severe stenosis. 3D-SPACE T2 is a promising technique for increasing diagnostic accuracy in narrow, complex lesions, such as two small cerebral vessels with stenosis created by hemodynamic turbulence.
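The stenosis percentages quoted for the phantom strictures (40-70%) are conventionally computed from luminal diameters. A minimal sketch, assuming the diameter-based definition (the abstract does not state which convention was used):

```python
def stenosis_percent(stenotic_diameter, normal_diameter):
    """Degree of stenosis from luminal diameters: the fractional
    narrowing of the lumen relative to the normal vessel,
    expressed as a percentage."""
    return (1.0 - stenotic_diameter / normal_diameter) * 100.0
```

For example, a lumen narrowed from 5.0 mm to 3.0 mm corresponds to 40% stenosis, the mildest stricture grade in the phantom study.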