• Title/Summary/Keyword: traditional experiments


Consideration of Standardized Uptake Value (SUV) According to the Change of Volume Size through the Application of Astonish TF Reconstruction Method (Astonish TF 재구성 기법의 적용을 통한 체적 크기의 변화에 따른 표준섭취계수(SUV)에 관한 고찰)

  • Lee, Juyoung;Nam-Kung, Sik;Kim, Ji-Hyeon;Park, Hoon-Hee
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.18 no.1
    • /
    • pp.115-121
    • /
    • 2014
  • Purpose: In addition to efforts to improve PET image quality, various reconstruction programs have been developed through extensive research. The Astonish TF reconstruction technique from Philips provides improved lesion contrast, and it makes 2 mm image reconstruction possible at a faster reconstruction rate than the conventional method. In this study, we compared and evaluated the Standardized Uptake Value (SUV) of whole-body ¹⁸F-FDG PET images reconstructed with the 2 mm technique and with the traditional 4 mm technique. Materials and Methods: In the phantom experiment, a NEMA IEC body phantom (spheres: 10, 13, 17, 22, 28, and 37 mm) was imaged with a GEMINI TF 64 PET/CT scanner (Philips, Cleveland, USA). For the clinical images, ¹⁸F-FDG PET/CT examinations were performed on 30 women (age: 55.1±11.3, BMI: 24.1±2.9) diagnosed with breast cancer. The images were then reconstructed at 2 mm and 4 mm, respectively; regions of interest were drawn on the acquired images, SUVs were measured with EBW (Extended Brilliance Workstation) NM ver. 1.0, and the results were statistically analyzed with SPSS ver. 18. Results: In the phantom study, the larger the hot sphere, the higher the SUV tended to be. Compared with the 4 mm reconstruction, the 2 mm reconstruction increased the SUV by 95.78% for the 10 mm sphere, 50.60% for 13 mm, 25.00% for 17 mm, 30.04% for 22 mm, 31.81% for 28 mm, and 27.84% for 37 mm. Analysis of the clinical images likewise showed that the SUV of the 2 mm reconstruction was higher than that of the 4 mm reconstruction, and the smaller the volume, the greater the rate of change. Conclusion: In both the clinical images and the phantom experiment with the Astonish TF reconstruction technique, the rate of change of the SUV increased as the volume decreased. Therefore, further research on SUV correction is needed for the accurate and active clinical use of the 2 mm reconstruction technique, which offers excellent lesion discrimination and contrast.
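
  For reference, the SUV reported above is conventionally defined as the tissue activity concentration normalized by injected dose per body weight, and the percentage changes quoted are consistent with a simple relative difference between the two reconstructions (our reading of the abstract, not a formula it states explicitly):

  $$\mathrm{SUV} = \frac{C_{\mathrm{tissue}}\;[\mathrm{kBq/mL}]}{\mathrm{injected\ dose}\;[\mathrm{kBq}]\,/\,\mathrm{body\ weight}\;[\mathrm{g}]}, \qquad \Delta\mathrm{SUV}\,[\%] = \frac{\mathrm{SUV}_{2\,\mathrm{mm}} - \mathrm{SUV}_{4\,\mathrm{mm}}}{\mathrm{SUV}_{4\,\mathrm{mm}}} \times 100$$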

True Orthoimage Generation from LiDAR Intensity Using Deep Learning (딥러닝에 의한 라이다 반사강도로부터 엄밀정사영상 생성)

  • Shin, Young Ha;Hyung, Sung Woong;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.38 no.4
    • /
    • pp.363-373
    • /
    • 2020
  • Over the past few decades, numerous studies on orthoimage generation have been carried out. Traditional methods require the exterior orientation parameters of the aerial images, precise 3D object modeling data, and a DTM (Digital Terrain Model) to detect and recover occlusion areas, and it is a challenging task to automate this complicated process. In this paper, we propose a new concept of true orthoimage generation using DL (Deep Learning). DL is rapidly being adopted in a wide range of fields. In particular, the GAN (Generative Adversarial Network) is one of the DL models used for various tasks in image processing and computer vision. The generator tries to produce results similar to real images, while the discriminator judges whether images are real or fake, until the results are satisfactory; this mutually adversarial mechanism improves the quality of the results. Experiments were performed with the GAN-based Pix2Pix model using IR (Infrared) orthoimages and intensity from LiDAR data provided by the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) through the ISPRS (International Society for Photogrammetry and Remote Sensing). Two approaches were implemented: (1) one-step training with intensity data and high-resolution orthoimages, and (2) recursive training with intensity data and color-coded low-resolution intensity images for progressive enhancement of the results. The two methods provided similar quality based on FID (Fréchet Inception Distance) measures; however, if the quality of the input data is close to that of the target image, better results can be obtained by increasing the number of epochs. This paper is an early experimental study on the feasibility of DL-based true orthoimage generation, and further improvement is necessary.
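
  The adversarial mechanism described above can be sketched in a few lines. The following is a minimal, hedged illustration of the standard Pix2Pix objective (adversarial loss plus an L1 term), not the authors' training code; `TinyG` and `TinyD` are toy stand-ins for the Pix2Pix U-Net generator and PatchGAN discriminator, and the λ = 100 weight follows the published Pix2Pix default.

```python
# Minimal sketch of a pix2pix-style conditional GAN objective (illustrative).
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()   # adversarial (real/fake) loss
l1 = nn.L1Loss()               # pixel-wise reconstruction loss

class TinyG(nn.Module):        # toy stand-in for the U-Net generator
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

class TinyD(nn.Module):        # toy stand-in for the PatchGAN discriminator
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 1, 3, padding=1))
    def forward(self, condition, image):
        # Conditional discriminator: sees the input intensity and an output image.
        return self.net(torch.cat([condition, image], dim=1))

def generator_step(G, D, intensity, ortho, lam=100.0):
    """G tries to make D label its output as real, while staying close
    to the target orthoimage via the L1 term."""
    fake = G(intensity)
    pred = D(intensity, fake)
    adv = bce(pred, torch.ones_like(pred))        # fool D -> label "real"
    return adv + lam * l1(fake, ortho)

def discriminator_step(G, D, intensity, ortho):
    """D learns to separate real (intensity, ortho) pairs from generated ones."""
    with torch.no_grad():
        fake = G(intensity)
    pred_real = D(intensity, ortho)
    pred_fake = D(intensity, fake)
    return (bce(pred_real, torch.ones_like(pred_real)) +
            bce(pred_fake, torch.zeros_like(pred_fake))) * 0.5

# Toy usage with random tensors standing in for intensity/orthoimage patches.
G, D = TinyG(), TinyD()
intensity = torch.randn(4, 1, 64, 64)
ortho = torch.randn(4, 1, 64, 64)
print(generator_step(G, D, intensity, ortho).item())
print(discriminator_step(G, D, intensity, ortho).item())
```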

Prospects of Triticale as Fodder and Feed in Farming of Bangladesh (방글라데시 농업에서 트리티게일의 조사료 및 곡물사료이용 전망)

  • Tabassum, Nazia;Uddin, Md. Romij;Gim, Uhn-Soon
    • Korean Journal of Agricultural Science
    • /
    • v.35 no.1
    • /
    • pp.101-118
    • /
    • 2008
  • This paper reviews the present situation of triticale cultivation and examines its potential contribution to the livestock and poultry sectors of Bangladesh agriculture. Triticale is a human-made cross between rye and durum wheat that can produce quality green fodder and then re-grow after a first and second cutting to produce grain. In Bangladesh, it is a non-traditional cereal that grows well during the cool and dry Rabi season (November-March), when fodder and feed scarcity is a major limiting factor for ruminant livestock. Cultivation of triticale in Bangladesh began in the late 1990s, when scientists at the Bangladesh Agricultural Research Institute (BARI) first introduced the crop; even now, it is grown as fodder and feed only in scientific trials. High-quality grass fodder was obtained by cutting green triticale plants twice, at 35 and 50 days after seeding, while the ratooning tillers later produced a grain yield of 1.1-2.4 t/ha for poultry feed or human food. Triticale straw was twice as nutritious as rice or wheat straw, and its grain contained more protein than other cereals. Researchers and farmers have also successfully made triticale hay and silage from a mixture of triticale green cuttings, rice straw, and molasses. A feeding trial at the Bangladesh Livestock Research Institute (BLRI), Savar station, showed a large (46%) increase in live weight gain and a 36% increase in milk yield (but no change in milk quality or dry matter intake) in cows fed triticale silage compared with those fed rice straw over a period of nine weeks. In another feeding trial, triticale grain was found to be a good replacement for wheat in the feed blend for chickens in Bangladesh. Thus, incorporating triticale into the existing cropping system as a dual-purpose fodder and grain crop would be a good opportunity to revitalize the livestock and poultry sectors. The challenge in Bangladesh is to identify fodder technologies that match existing small-scale farmer cropping patterns without requiring major inputs or increasing risks. Preliminary field experiments revealed that triticale is a crop with good potential to produce quality fodder and grain for small-scale farmers in Bangladesh.

The Comparison and Index Components in Quality of Salt-Fermented Anchovy Sauces (멸치 액젓의 품질 비교 및 품질 지표성분에 관한 연구)

  • Oh, Kwang-Soo
    • Korean Journal of Food Science and Technology
    • /
    • v.27 no.4
    • /
    • pp.487-494
    • /
    • 1995
  • To assess the quality of anchovy sauce, 10 kinds of commercial anchovy sauce (CAS) were purchased from markets, traditional anchovy sauce (TAS) was prepared, and their physicochemical and microbial characteristics were compared. The composition of CAS was as follows: pH 5.5-5.7, salinity 21.0-23.2%, VBN 92.8-305.4 mg/100 g, total nitrogen 928.0-1870.9 mg%, amino nitrogen 338.6-680.3 mg%, and acidity 11.58-24.58 mL. CAS was lower than TAS in pH and in VBN, total-N, and amino-N contents, and higher in moisture content and salinity. In Hunter color values, CAS was generally lower in L and b values and higher in a and ΔE values than TAS. Viable cell counts on 0% NaCl medium were 6.4×10¹-3.0×10⁵ for CAS and 8.7×10⁴ for TAS, and those on 2.5% NaCl medium were 0.8×10²-2.2×10⁵ and 1.6×10⁴-4.5×10⁵, respectively; in both CAS and TAS, these counts gradually decreased with storage time. Among the extractives, the total free amino acid contents of CAS and TAS were 5498.5-12123.8 mg% and 12797.9 mg%, respectively, and these contents gradually decreased during storage. The major free amino acids were glutamic acid, alanine, and leucine in CAS, and alanine, glutamic acid, leucine, and valine in TAS. The contents of hypoxanthine, TMAO, and TMA were 86.4-161.2 mg%, 51.6-99.2 mg%, and 23.2-42.9 mg% in CAS, and 103.7 mg%, 128.8 mg%, and 55.8 mg% in TAS, respectively. We conclude from these experiments that some of the tested CAS were somewhat putrefied and differed greatly in quality from TAS, whereas TAS remained in good condition, preserving its quality for up to two years of storage.

Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.99-112
    • /
    • 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms: it finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention in the machine learning and artificial intelligence fields because of its remarkable performance improvements and its flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVM. In that research, DT ensemble studies have consistently demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensemble studies have not shown performance as remarkable as DT ensembles. Recently, several works have reported that ensemble performance can be degraded when the classifiers in an ensemble are highly correlated, which causes a multicollinearity problem and thereby degrades the ensemble; differentiated learning strategies have been proposed to cope with this degradation. Hansen and Salamon (1990) argued that it is necessary and sufficient for the performance enhancement of an ensemble that the ensemble contain diverse classifiers. Breiman (1996) found that ensemble learning can increase the performance of unstable learning algorithms, but does not show remarkable improvement on stable learning algorithms. Unstable learning algorithms such as decision tree learners are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers; ensembles of unstable learners can therefore guarantee some diversity among the classifiers. In contrast, stable learning algorithms such as NN and SVM generate similar classifiers despite small changes in the training data, so the correlation among the resulting classifiers is very high; this high correlation results in a multicollinearity problem, which degrades ensemble performance. Kim's work (2009) compared the performance of bankruptcy prediction on Korean firms using traditional prediction algorithms such as NN, DT, and SVM. It reported that stable learning algorithms such as NN and SVM have higher predictability than the unstable DT, while, with respect to ensemble learning, the DT ensemble shows more improved performance than the NN and SVM ensembles. Further analysis with the variance inflation factor (VIF) empirically proved that the performance degradation of the ensemble is due to the multicollinearity problem, and proposed that optimization of the ensemble is needed to cope with it. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) in order to improve NN ensemble performance. Coverage optimization is a technique for choosing a sub-ensemble from an original ensemble so as to guarantee the diversity of the classifiers. CO-NN uses a GA, which has been widely applied to various optimization problems, to handle the coverage optimization problem. The GA chromosomes for coverage optimization are encoded as binary strings, each bit of which indicates an individual classifier. The fitness function is defined as the maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the generally used measures of multicollinearity, is added to ensure the diversity of the classifiers by removing high correlation among them. We used Microsoft Excel and the GA software package Evolver. Experiments on company failure prediction have shown that CO-NN stably enhances the performance of NN ensembles by choosing classifiers in consideration of their correlations: the classifiers with a potential multicollinearity problem are removed by the coverage optimization process, and CO-NN thereby showed higher performance than a single NN classifier and the NN ensemble at the 1% significance level, and than the DT ensemble at the 5% significance level. However, further research issues remain. First, a decision optimization process to find the optimal combination function should be considered. Second, various learning strategies to deal with data noise should be introduced in more advanced future research.
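
  As an illustration of the coverage-optimization idea, the following is a minimal sketch in Python (the authors used Microsoft Excel with the commercial Evolver package, so none of this code is theirs): binary chromosomes select a sub-ensemble of classifiers, fitness is majority-vote accuracy, and a VIF constraint penalizes sub-ensembles whose members are highly correlated. The `vif_limit=10` threshold and the GA parameters are illustrative assumptions.

```python
# Sketch of GA-based coverage optimization with a VIF constraint (illustrative).
import numpy as np

rng = np.random.default_rng(0)

def vif_max(preds):
    """Largest variance inflation factor among the selected classifiers'
    outputs; VIF_i = 1 / (1 - R_i^2) from regressing column i on the rest."""
    n, k = preds.shape
    if k < 2:
        return 1.0
    vifs = []
    for i in range(k):
        y = preds[:, i]
        X = np.column_stack([np.delete(preds, i, axis=1), np.ones(n)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r2 = 1.0 - (y - X @ beta).var() / (y.var() + 1e-12)
        vifs.append(1.0 / max(1.0 - r2, 1e-6))
    return max(vifs)

def fitness(mask, preds, y, vif_limit=10.0):
    """Majority-vote accuracy of the selected sub-ensemble, with a hard
    penalty when multicollinearity exceeds the limit."""
    if mask.sum() == 0:
        return 0.0
    sub = preds[:, mask.astype(bool)]
    if vif_max(sub) > vif_limit:
        return 0.0
    vote = (sub.mean(axis=1) >= 0.5).astype(int)
    return (vote == y).mean()

def ga_select(preds, y, pop=30, gens=50, p_mut=0.05):
    """Evolve binary chromosomes; each bit switches one classifier on/off."""
    k = preds.shape[1]
    population = rng.integers(0, 2, size=(pop, k))
    for _ in range(gens):
        scores = np.array([fitness(m, preds, y) for m in population])
        parents = population[np.argsort(scores)[-pop // 2:]]  # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, k)                          # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(k) < p_mut                      # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        population = np.vstack([parents, children])
    scores = np.array([fitness(m, preds, y) for m in population])
    return population[scores.argmax()]

# Hypothetical usage: 0/1 predictions of 10 classifiers on a validation set.
preds = rng.integers(0, 2, size=(200, 10))
y = rng.integers(0, 2, size=200)
print(ga_select(preds, y))
```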

Total Polyphenol Contents and Antioxidant Activities of Methanol Extracts from Vegetables produced in Ullung Island (울릉도산 산채류 추출물의 총 폴리페놀 함량 및 항산화 활성)

  • Lee, Syng-Ook;Lee, Hyo-Jung;Yu, Mi-Hee;Im, Hyo-Gwon;Lee, In-Seon
    • Korean Journal of Food Science and Technology
    • /
    • v.37 no.2
    • /
    • pp.233-240
    • /
    • 2005
  • To discover new functional materials among edible plants, the in vitro antioxidant activities of methanol extracts from various parts of seven wild vegetables were investigated. Total polyphenol contents, determined by the Folin-Denis method, varied from 16.74 to 130.22 μg/mg. The radical-scavenging activities of the methanol extracts were examined using α,α-diphenyl-β-picrylhydrazyl (DPPH) radicals and the 2,2'-azino-bis(3-ethylbenzthiazoline-6-sulfonic acid) (ABTS) assay. Inhibition effects on the peroxidation of linoleic acid, determined by the ferric thiocyanate (FTC) method, and on the oxidative degradation of 2-deoxy-D-ribose in a Fenton-type reaction system were dose-dependent. Athyrium acutipinulum Kodama (leaf and root), Achyranthes japonica (Miq.) Nakai (seed), and Solidago virga-aurea var. gigantea Nakai (root) showed relatively high antioxidant activities across the various systems.

Korean Word Sense Disambiguation using Dictionary and Corpus (사전과 말뭉치를 이용한 한국어 단어 중의성 해소)

  • Jeong, Hanjo;Park, Byeonghwa
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.1-13
    • /
    • 2015
  • As opinion mining in big data applications has been highlighted, a great deal of research on unstructured data has been carried out. Social media on the Internet generate unstructured or semi-structured data every second, often written in the natural human languages we use in daily life. Many words in human languages have multiple meanings or senses; as a result, it is very difficult for computers to extract useful information from these datasets. Traditional web search engines are usually based on keyword search, which can yield incorrect results that are far from users' intentions. Even though much progress has been made over the last years in enhancing the performance of search engines so as to provide users with appropriate results, there is still much room for improvement. Word sense disambiguation can play a very important role in natural language processing and is considered one of the most difficult problems in this area. Major approaches to word sense disambiguation can be classified as knowledge-based, supervised corpus-based, and unsupervised corpus-based. This paper presents a method that automatically generates a corpus for word sense disambiguation by taking advantage of the examples in existing dictionaries, thereby avoiding expensive sense-tagging processes. We evaluate the effectiveness of the method with the Naïve Bayes model, a supervised learning algorithm, using the Korean standard unabridged dictionary and the Sejong Corpus. The Korean standard unabridged dictionary contains approximately 57,000 sentences; the Sejong Corpus contains about 790,000 sentences tagged with both part-of-speech and senses. For the experiment, the Korean standard unabridged dictionary and the Sejong Corpus were evaluated both combined and separately, using cross-validation. Only nouns, the targets of word sense disambiguation, were selected: 93,522 word senses among 265,655 nouns, together with 56,914 sentences from related proverbs and examples, were additionally combined into the corpus. The Sejong Corpus was easily merged with the Korean standard unabridged dictionary because it was tagged with the sense indices defined by that dictionary. Sense vectors were formed after the merged corpus was created, and the terms used in creating the sense vectors were added to the named entity dictionary of a Korean morphological analyzer. Using the extended named entity dictionary, term vectors were extracted from the input sentences. Given an extracted term vector and the sense vector model built during pre-processing, sense-tagged terms were determined by vector space model based word sense disambiguation. In addition, this study shows the effectiveness of the corpus merged from the examples in the Korean standard unabridged dictionary and the Sejong Corpus: the experiments show that better precision and recall are obtained with the merged corpus. This suggests the method can practically enhance the performance of Internet search engines and help recover the more accurate meaning of a sentence in natural language processing tasks pertinent to search engines, opinion mining, and text mining. The Naïve Bayes classifier used in this study is a supervised learning algorithm based on Bayes' theorem, under the assumption that all senses are independent. Even though this assumption is not realistic and ignores correlations between attributes, the Naïve Bayes classifier is widely used because of its simplicity, and in practice it is known to be very effective in many applications such as text classification and medical diagnosis. However, further research is needed to consider all possible combinations and/or partial combinations of the senses in a sentence. Also, the effectiveness of word sense disambiguation may be improved if rhetorical structures or morphological dependencies between words are analyzed through syntactic analysis.
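
  To make the Naïve Bayes formulation concrete, here is a minimal, hedged sketch (not the paper's implementation, whose features come from sense vectors over the merged dictionary/corpus): each sense s gets a prior P(s) and smoothed likelihoods P(w|s) from training contexts, and a new context is tagged with the sense maximizing log P(s) + Σ log P(w|s). The sense labels and example words below are invented for illustration.

```python
# Minimal Naive Bayes word sense disambiguation sketch (illustrative).
import math
from collections import Counter, defaultdict

class NaiveBayesWSD:
    def __init__(self, alpha=1.0):               # Laplace smoothing
        self.alpha = alpha
        self.sense_counts = Counter()            # N(s)
        self.word_counts = defaultdict(Counter)  # N(w, s)
        self.vocab = set()

    def train(self, examples):
        """examples: list of (context_words, sense) pairs, e.g. built from
        dictionary example sentences and a sense-tagged corpus."""
        for words, sense in examples:
            self.sense_counts[sense] += 1
            for w in words:
                self.word_counts[sense][w] += 1
                self.vocab.add(w)

    def disambiguate(self, context_words):
        total = sum(self.sense_counts.values())
        v = len(self.vocab)
        best, best_lp = None, float("-inf")
        for sense, n_s in self.sense_counts.items():
            lp = math.log(n_s / total)           # log prior
            denom = sum(self.word_counts[sense].values()) + self.alpha * v
            for w in context_words:              # sense-independence assumption
                lp += math.log((self.word_counts[sense][w] + self.alpha) / denom)
            if lp > best_lp:
                best, best_lp = sense, lp
        return best

# Hypothetical usage with the ambiguous Korean noun 은행 (bank / ginkgo).
wsd = NaiveBayesWSD()
wsd.train([(["대출", "이자", "예금"], "은행(금융)"),
           (["나무", "열매", "가로수"], "은행(나무)")])
print(wsd.disambiguate(["이자", "예금"]))   # -> "은행(금융)"
```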

An Efficient Algorithm for Streaming Time-Series Matching that Supports Normalization Transform (정규화 변환을 지원하는 스트리밍 시계열 매칭 알고리즘)

  • Loh, Woong-Kee;Moon, Yang-Sae;Kim, Young-Kuk
    • Journal of KIISE:Databases
    • /
    • v.33 no.6
    • /
    • pp.600-619
    • /
    • 2006
  • With recent technical advances in sensors and mobile devices, processing the data streams generated by such devices is becoming an important research issue. A data stream of real values obtained at continuous time points is called a streaming time-series. Due to the unique features of streaming time-series, which differ from those of traditional time-series, the similarity matching problem on streaming time-series must be solved in a new way. In this paper, we propose an efficient algorithm for the streaming time-series matching problem that supports the normalization transform. While existing algorithms compare streaming time-series without any transform, the algorithm proposed in this paper compares them after they are normalization-transformed. The normalization transform is useful for finding time-series that have similar fluctuation trends even though they consist of distant element values. The major contributions of this paper are as follows. (1) Using a theorem presented in the context of subsequence matching that supports the normalization transform [4], we propose a simple algorithm for solving the problem. (2) To improve search performance, we extend the simple algorithm to use k (≥ 1) indexes. (3) For a given k, to achieve optimal search performance of the extended algorithm, we present an approximation method for choosing the k window sizes used to construct the k indexes. (4) Based on the notion of continuity [8] for streaming time-series, we further extend the algorithm so that it can simultaneously obtain the search results for m (≥ 1) time points, from the present time point t₀ to a time point (t₀ + m − 1) in the near future, by retrieving an index only once. (5) Through a series of experiments, we compare the search performance of the proposed algorithms and show their performance trends according to the values of k and m. To the best of our knowledge, no previous algorithm solves the problem presented in this paper, so we compare the search performance of our algorithms with the sequential scan algorithm. The experimental results show that our algorithms outperform the sequential scan algorithm by up to 13.2 times, and their performance improves further as k increases.
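
  To illustrate what normalization-transformed matching means, here is a minimal sketch of the sequential-scan baseline the paper compares against (the indexed algorithms themselves are not reproduced): each incoming window is z-normalized before the distance computation, so a scaled and shifted copy of the query still matches. The tolerance ε and the toy data are illustrative assumptions.

```python
# Sequential-scan streaming matching under the normalization transform.
from collections import deque
import math

def z_normalize(seq):
    """Subtract the mean and divide by the standard deviation."""
    n = len(seq)
    mean = sum(seq) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in seq) / n)
    if std == 0:
        return [0.0] * n
    return [(x - mean) / std for x in seq]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def stream_match(stream, query, epsilon):
    """Yield each time point at which the z-normalized latest window is
    within epsilon of the z-normalized query."""
    q = z_normalize(query)
    window = deque(maxlen=len(query))
    for t, value in enumerate(stream):
        window.append(value)
        if len(window) == len(query):
            if euclidean(z_normalize(list(window)), q) <= epsilon:
                yield t                  # match ending at time point t

# Hypothetical usage: a 10x-scaled copy of the query still matches.
query = [1.0, 3.0, 2.0, 5.0]
stream = [10.0, 30.0, 20.0, 50.0, 7.0, 8.0]
print(list(stream_match(stream, query, epsilon=0.01)))   # -> [3]
```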

Semantic Visualization of Dynamic Topic Modeling (다이내믹 토픽 모델링의 의미적 시각화 방법론)

  • Yeon, Jinwook;Boo, Hyunkyung;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.131-154
    • /
    • 2022
  • Recently, research on unstructured data analysis has been actively conducted along with the development of information and communication technology. In particular, topic modeling is a representative technique for discovering core topics in massive text data. In the early stages of topic modeling, most studies focused only on topic discovery; as the field matured, studies on how topics change over time began to be carried out. Accordingly, interest in dynamic topic modeling, which handles changes in the keywords constituting a topic, is also increasing. Dynamic topic modeling identifies major topics from the data of the initial period and manages the change and flow of topics by using the topic information of the previous period to derive the topics of subsequent periods. However, it is very difficult to understand and interpret the results of dynamic topic modeling: traditional results simply reveal changes in keywords and their rankings, which is insufficient to represent how the meaning of a topic has changed. Therefore, in this study, we propose a method for visualizing topics by period that reflects the meaning of the keywords in each topic, together with a method for intuitively interpreting changes in topics and the relationships between or among topics. The detailed visualization method is as follows. In the first step, dynamic topic modeling is applied to derive the top keywords of each period and their weights from the text data. In the second step, we obtain vectors for the top keywords of each topic from a pre-trained word embedding model and perform dimension reduction on the extracted vectors; we then formulate a semantic vector for each topic as the weighted sum of its keyword vectors, using the topic weight of each keyword. In the third step, we visualize the semantic vector of each topic using matplotlib and analyze the relationships between or among the topics based on the visualized result. The change of a topic can be interpreted as follows: from the dynamic topic modeling result, we identify the top 5 rising keywords and the top 5 descending keywords of each period. Many existing topic visualization studies visualize the keywords of each topic, but the approach proposed in this study differs in that it attempts to visualize each topic itself. To evaluate the practical applicability of the proposed methodology, we performed an experiment on 1,847 abstracts of artificial intelligence-related papers, divided into three periods (2016-2017, 2018-2019, 2020-2021). We selected seven topics based on the coherence score and utilized a pre-trained Word2vec embedding model trained on 'Wikipedia', an Internet encyclopedia. Based on the proposed methodology, we generated a semantic vector for each topic and, by reflecting the meaning of the keywords, visualized and interpreted the topics by period. Through these experiments, we confirmed that the rise and fall of a keyword's topic weight can be usefully employed to interpret the semantic change of the corresponding topic and to grasp the relationships among topics. In this study, to overcome the limitations of dynamic topic modeling results, we used word embedding and dimension reduction techniques to visualize topics by period. The results are meaningful in that they broaden the scope of topic understanding through the visualization of dynamic topic modeling results; an academic contribution can also be acknowledged in that this work lays the foundation for follow-up studies that use various word embeddings and dimensionality reduction techniques to improve the performance of the proposed methodology.
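
  A minimal sketch of the second and third steps follows (illustrative only; the file name `wiki.word2vec.bin`, the toy `topics` dictionary, and the use of PCA for dimension reduction are assumptions, since the abstract does not name a specific reduction technique):

```python
# Semantic topic vectors from weighted keyword embeddings, plotted in 2-D.
import numpy as np
import matplotlib.pyplot as plt
from gensim.models import KeyedVectors
from sklearn.decomposition import PCA

# Hypothetical path to a Wikipedia-trained Word2vec model (word2vec format).
wv = KeyedVectors.load_word2vec_format("wiki.word2vec.bin", binary=True)

# Toy stand-in for dynamic topic modeling output:
# topics[period][topic_id] -> list of (top keyword, topic weight).
topics = {
    "2016-2017": {0: [("network", 0.4), ("image", 0.3), ("recognition", 0.3)]},
    "2020-2021": {0: [("transformer", 0.5), ("language", 0.3), ("model", 0.2)]},
}

def topic_vector(keywords):
    """Semantic vector of one topic: weighted combination of the embedding
    vectors of its top keywords, using the topic weights."""
    pairs = [(wv[w], wt) for w, wt in keywords if w in wv]
    vecs = np.array([v for v, _ in pairs])
    wts = np.array([wt for _, wt in pairs])
    return (vecs * wts[:, None]).sum(axis=0) / wts.sum()

labels, vectors = [], []
for period, per_topics in topics.items():
    for tid, kws in per_topics.items():
        labels.append(f"{period} / topic {tid}")
        vectors.append(topic_vector(kws))

# Reduce the semantic vectors to 2-D and plot one point per (period, topic).
points = PCA(n_components=2).fit_transform(np.array(vectors))
for label, (x, y) in zip(labels, points):
    plt.scatter(x, y)
    plt.annotate(label, (x, y))
plt.show()
```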

Current and Future Perspectives of Lung Organoid and Lung-on-chip in Biomedical and Pharmaceutical Applications

  • Junhyoung Lee;Jimin Park;Sanghun Kim;Esther Han;Sungho Maeng;Jiyou Han
    • Journal of Life Science
    • /
    • v.34 no.5
    • /
    • pp.339-355
    • /
    • 2024
  • The pulmonary system is a highly complex system that can only be understood by integrating its functional and structural aspects. Hence, in vivo animal models are generally used for pathological studies of pulmonary diseases and for the evaluation of inhalation toxicity. However, to reduce the number of animals used in experimentation and in consideration of animal welfare, alternative methods have been extensively developed. Notably, the Organization for Economic Co-operation and Development (OECD) and the United States Environmental Protection Agency (USEPA) have agreed to prohibit animal testing after 2030. The latest advances in biotechnology are therefore revolutionizing the approach to developing in vitro inhalation models. For example, lung organ-on-a-chip (OoC) and organoid models have been intensively studied alongside advancements in three-dimensional (3D) bioprinting and microfluidic systems. These modeling systems can imitate the complex biological environment more precisely than traditional in vivo animal experiments. This review addresses multiple aspects of recent in vitro modeling systems based on lung OoC and organoids, including the use of the endothelial cells, epithelial cells, and fibroblasts that compose lung alveoli, generated from pluripotent stem cells or cancer cells. It also covers lung air-liquid interface (ALI) systems, transwell membrane materials, and in silico models using artificial intelligence (AI) for the establishment and evaluation of in vitro pulmonary systems.