• Title/Summary/Keyword: Processing Performance (처리성능)

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • Convolutional Neural Network (ConvNet) is one class of powerful Deep Neural Networks that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not widely used in either industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, however, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, which revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation. The second is the availability of large-scale datasets, such as the ImageNet (ILSVRC) dataset, for training. Unfortunately, many new domains are bottlenecked by these factors: for most domains, it is difficult and laborious to gather a large-scale dataset for training a ConvNet, and even given such a dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. Both obstacles can be addressed by transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning approaches. The first uses the ConvNet as a fixed feature extractor: a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. The second fine-tunes the ConvNet on a new dataset: the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are then fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only.
However, applying the high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ a multiple-ConvNet-layer representation for transfer learning instead of a single-layer representation. Our primary pipeline has three steps. First, an image from the target task is fed forward through a pre-trained AlexNet, and activation features are extracted from its three fully connected layers. Second, the activation features of the three layers are concatenated to obtain the multiple-layer representation, since it carries more information about the image; when the three fully connected layer features are concatenated, the resulting image representation has 9,192 (4096+4096+1000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in the third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves. To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple-ConvNet-layer representations against single-layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrated the importance of feature selection for the multiple-ConvNet-layer representation.
Moreover, our proposed approach achieved 75.6% accuracy compared to 73.9% for the FC7 layer on the Caltech-256 dataset, 73.1% compared to 69.2% for the FC8 layer on the VOC07 dataset, and 52.2% compared to 48.7% for the FC7 layer on the SUN397 dataset. We also showed that our proposed approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on the Caltech-256, VOC07, and SUN397 datasets, respectively, compared to existing work.
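The three-step pipeline above (extract fully connected layer activations, concatenate, reduce with PCA) can be sketched as follows. This is a minimal illustration: random arrays stand in for real AlexNet activations, and the 32-component PCA size is an arbitrary choice, not the paper's.

```python
# Sketch of the multiple-ConvNet-layer + PCA pipeline (illustrative only:
# random arrays replace real AlexNet activations so the sketch runs
# self-contained, without a pretrained model or dataset).
import numpy as np

rng = np.random.default_rng(0)
n_images = 50

# Step 1: activation features from the three fully connected layers.
fc6 = rng.standard_normal((n_images, 4096))
fc7 = rng.standard_normal((n_images, 4096))
fc8 = rng.standard_normal((n_images, 1000))

# Step 2: concatenate into one 9,192-dimensional representation.
features = np.concatenate([fc6, fc7, fc8], axis=1)
assert features.shape == (n_images, 9192)

# Step 3: PCA via SVD to keep the salient components before training
# a classifier on the reduced features.
def pca(X, n_components):
    Xc = X - X.mean(axis=0)                      # center the features
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T              # project onto top components

reduced = pca(features, n_components=32)
print(reduced.shape)  # (50, 32)
```

The reduced matrix would then be fed to an ordinary linear classifier (e.g., a linear SVM) in place of the raw 9,192-dimensional concatenation.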

Lipopolysaccharide-induced Synthesis of IL-1beta, IL-6, TNF-alpha and TGF-beta by Peripheral Blood Mononuclear Cells (내독소에 의한 말초혈액 단핵구의 IL-1beta, IL-6, TNF-alpha와 TGF-beta 생성에 관한 연구)

  • Jung, Sung-Hwan;Park, Choon-Sik;Kim, Mi-Ho;Kim, Eun-Young;Chang, Hun-Soo;Ki, Shin-Young;Uh, Soo-Taek;Moon, Seung-Hyuk;Kim, Yang-Hoon;Lee, Hi-Bal
    • Tuberculosis and Respiratory Diseases
    • /
    • v.45 no.4
    • /
    • pp.846-860
    • /
    • 1998
  • Background: Endotoxin (lipopolysaccharide, LPS), a potent activator of the immune system, can induce acute and chronic inflammation through the production of cytokines by a variety of cells, such as monocytes, endothelial cells, lymphocytes, eosinophils, neutrophils and fibroblasts. LPS stimulates mononuclear cells by two different pathways, CD14-dependent and CD14-independent; the former has been well documented, but not the latter. LPS binds to LPS-binding protein (LBP) in serum to form an LPS-LBP complex, which interacts with CD14 molecules on the surface of mononuclear cells in peripheral blood or is transported to the tissues. At high concentrations, LPS can stimulate macrophages directly without LBP. We investigated the generation of the proinflammatory cytokines interleukin-1 (IL-1), IL-6 and TNF-α and the fibrogenic cytokine TGF-β by peripheral blood mononuclear cells (PBMC) after LPS stimulation under serum-free conditions, which lack LBP. Methods: PBMC were obtained by centrifugation of peripheral venous blood from healthy normal subjects on Ficoll-Hypaque solution, then stimulated with LPS (0.1 μg/mL to 100 μg/mL). The activities of IL-1, IL-6, TNF, and TGF-β were measured by bioassays using cytokine-dependent proliferating or inhibited cell lines. The cellular sources of the cytokines were investigated by immunohistochemical staining and in situ hybridization. Results: PBMC began to produce IL-6, TNF-α and TGF-β at 1 hr, 4 hrs and 8 hrs, respectively, after LPS stimulation. Production of IL-6, TNF-α and TGF-β increased continuously through 96 hrs after LPS stimulation. The amounts produced were 19.8 ng/mL of IL-6 by 10^5 PBMC, 4.1 ng/mL of TNF by 10^6 PBMC and 34.4 pg/mL of TGF-β by 2×10^6 PBMC.
Immunoreactivity to IL-6, TNF-α and TGF-β was detected on monocytes in LPS-stimulated PBMC, and some lymphocytes showed positive immunoreactivity to TGF-β. Double immunohistochemical staining showed that IL-1β, IL-6 and TNF-α expression was not associated with CD14 positivity on monocytes. IL-1β, IL-6, TNF-α and TGF-β mRNA expression mirrored the immunoreactivity observed for each cytokine. Conclusion: When monocytes are stimulated with LPS under serum-free conditions, IL-6 and TNF-α are secreted in the early stage of inflammation. In contrast, the secretion of TGF-β arises in the late stage and is maintained beyond 96 hrs. The main cells releasing IL-1β, IL-6, TNF-α and TGF-β are monocytes, but lymphocytes can also secrete TGF-β.

End to End Model and Delay Performance for V2X in 5G (5G에서 V2X를 위한 End to End 모델 및 지연 성능 평가)

  • Bae, Kyoung Yul;Lee, Hong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.107-118
    • /
    • 2016
  • The advent of 5G mobile communications, expected in 2020, will enable many services such as the Internet of Things (IoT) and vehicle-to-infra/vehicle/nomadic (V2X) communication. There are many requirements for realizing these services: reduced latency, high data rate and reliability, and real-time service. In particular, a high level of reliability and delay sensitivity together with an increased data rate are very important for M2M, IoT, and Factory 4.0. Around the world, 5G standardization organizations have considered these services and grouped them to derive technical requirements and service scenarios. The first scenario covers broadcast services that use a high data rate, for example for sporting events or emergencies. The second scenario covers support for e-Health, car reliability, and similar services; the third is related to VR games requiring delay sensitivity and real-time techniques. Recently, these groups have been reaching agreement on the requirements for such scenarios and the target levels. Various techniques are being studied to satisfy such requirements and are being discussed in the context of software-defined networking (SDN) as the next-generation network architecture. SDN, which is being standardized by the ONF, basically refers to a structure that separates control-plane signals from data-plane packets. One of the best examples demanding low latency and high reliability is an intelligent traffic system (ITS) using V2X. Because a car passes through a small cell of the 5G network very rapidly, messages to be delivered in an emergency must be transported in a very short time; this is a typical example requiring high delay sensitivity. 5G therefore has to support high reliability and delay-sensitivity requirements for V2X in the field of traffic control. For these reasons, V2X is a major delay-critical application.
V2X (vehicle-to-infra/vehicle/nomadic) encompasses all types of communication applicable to roads and vehicles; it refers to a connected or networked vehicle. V2X can be divided into three kinds of communication: between a vehicle and infrastructure (vehicle-to-infrastructure; V2I), between one vehicle and another (vehicle-to-vehicle; V2V), and between a vehicle and mobile equipment (vehicle-to-nomadic devices; V2N). Further types are expected to be added in various fields. Because the SDN structure is under consideration as the next-generation network architecture, the SDN architecture is significant here. However, the centralized architecture of SDN can be unfavorable for delay-sensitive services, because a centralized controller must communicate with many nodes and provide processing power for all of them. Therefore, for emergency V2X communications, delay-related control functions require a tree-supporting structure, and the architecture of the network that processes the vehicle information becomes a major variable affecting delay. Because it is difficult to meet the desired level of delay sensitivity with a typical fully centralized SDN structure, research on the optimal size of an SDN for processing the information is needed. This study examined the SDN architecture in light of the V2X emergency delay requirements of a 5G network in the worst-case scenario and performed a system-level simulation over the speed of the car, the cell radius, and the cell tier to derive the range of cells for information transfer in the SDN network. In the simulation, because 5G provides a sufficiently high data rate, the information delivered to the car to support neighboring vehicles was assumed to be error-free.
Furthermore, the 5G small cell was assumed to have a cell radius of 50-100 m, and the maximum speed of the vehicle was taken to be 30-200 km/h, in order to examine the network architecture that minimizes the delay.
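The worst-case timing constraint implied by these assumed ranges is easy to quantify. A back-of-the-envelope sketch, assuming the car crosses the full cell diameter at constant speed (my simplification, not the paper's simulation model):

```python
# Dwell time of a vehicle in a 5G small cell, under the hedged assumption
# that the worst case is a straight crossing of the full cell diameter.

def dwell_time_s(cell_radius_m: float, speed_kmh: float) -> float:
    """Time spent inside a cell when crossing its diameter at constant speed."""
    speed_ms = speed_kmh * 1000.0 / 3600.0   # convert km/h to m/s
    return 2.0 * cell_radius_m / speed_ms

# The ranges considered in the study: radius 50-100 m, speed 30-200 km/h.
fastest = dwell_time_s(50, 200)    # smallest cell, fastest car
slowest = dwell_time_s(100, 30)    # largest cell, slowest car
print(round(fastest, 2), round(slowest, 2))  # 1.8 24.0
```

Under these assumptions, any emergency message and the associated SDN control-plane round trip must complete well within roughly 1.8 s in the worst case, which illustrates why a fully centralized controller can be problematic.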

A Novel in Vitro Method for the Metabolism Studies of Radiotracers Using Mouse Liver S9 Fraction (생쥐 간 S9 분획을 이용한 방사성추적자 대사물질의 새로운 체외 측정방법)

  • Ryu, Eun-Kyoung;Choe, Yearn-Seong;Kim, Dong-Hyun;Lee, Sang-Yoon;Choi, Yong;Lee, Kyung-Han;Kim, Byung-Tae
    • The Korean Journal of Nuclear Medicine
    • /
    • v.38 no.4
    • /
    • pp.325-329
    • /
    • 2004
  • Purpose: The usefulness of the mouse liver S9 fraction was evaluated for measuring metabolites in in vitro metabolism studies of ¹⁸F-labeled radiotracers. Materials and Methods: The mouse liver S9 fraction was isolated at an early step in the course of microsome preparation. The in vitro metabolism studies were carried out by incubating a mixture containing the radiotracer, the S9 fraction and NADPH at 37°C, and an aliquot of the mixture was analyzed at the indicated time points by radio-TLC. Metabolic defluorination was further confirmed by incubation with calcium phosphate, a bone mimic. Results: The radiotracer [¹⁸F]1 underwent metabolic defluorination within 15 min, consistent with the results of the in vivo method and of the in vitro method using microsomes. Radiotracer [¹⁸F]2 was metabolized to three metabolites, including 4-[¹⁸F]fluorobenzoic acid, within 60 min. It is likely that the metabolite at the origin of the radio-TLC was identical to the one obtained from the in vivo and in vitro (microsome) methods. Compared with the in vitro method using microsomes, the method using the S9 fraction gave a similar pattern of metabolites but with a different ratio, which can be explained by the presence of cytosol in the S9 fraction. Conclusion: These results suggest that the findings of in vitro metabolism studies using the S9 fraction can reflect the in vivo metabolism of novel radiotracers in the liver. Moreover, this method can be used as a tool to determine metabolic defluorination in conjunction with the calcium phosphate absorption method.

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility of stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-varying characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. Since Black Monday in 1987, however, stock market prices have become very complex and noisy, and recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as test data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for asymmetric GARCH models such as E-GARCH and GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis.
Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility, although the polynomial kernel shows markedly lower forecasting accuracy. We propose an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility increases, buy volatility today; if it decreases, sell volatility today; if the forecasted direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic, because historical volatility values themselves cannot be traded, but our simulation results remain meaningful because the Korea Exchange introduced a volatility futures contract, tradable since November 2014. The trading systems with SVR-based GARCH models show higher returns than the MLE-based ones in the test period. The profitable-trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return versus +526.4% for SVR-based symmetric S-GARCH; MLE-based asymmetric E-GARCH shows -72% versus +245.6% for the SVR-based version; and MLE-based asymmetric GJR-GARCH shows -98.7% versus +126.3% for the SVR-based version. The linear kernel shows higher trading returns than the radial kernel. The best SVR-based IVTS performance is +526.4%, against +150.2% for the best MLE-based IVTS; SVR-based GARCH IVTS also shows a higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We also do not consider trading costs, including brokerage commissions and slippage.
The IVTS trading performance is not fully realistic, since historical volatility values are used as trading objects. Accurate forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine-learning-based GARCH models can give better information to stock market investors.
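The two building blocks of the study above, the GARCH(1,1) conditional-variance recursion and the IVTS entry rules, can be sketched as follows. This is a hedged illustration: the parameter values and position encoding are my own, and the SVR estimation step itself is not reproduced.

```python
# Sketch of a GARCH(1,1) variance recursion and the IVTS position rules
# (illustrative parameters, not the paper's MLE or SVR estimates).
import numpy as np

def garch_variance(returns, omega, alpha, beta):
    """sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)          # a common initialization choice
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

def ivts_positions(vol_forecasts):
    """Map day-ahead volatility forecasts to positions:
    +1 = buy volatility when tomorrow's forecast rises,
    -1 = sell when it falls, otherwise hold the existing position."""
    positions = []
    for today, tomorrow in zip(vol_forecasts, vol_forecasts[1:]):
        if tomorrow > today:
            positions.append(+1)
        elif tomorrow < today:
            positions.append(-1)
        else:
            positions.append(positions[-1] if positions else 0)
    return positions

rng = np.random.default_rng(1)
r = rng.standard_normal(300) * 0.01      # stand-in for daily index returns
sig2 = garch_variance(r, omega=1e-6, alpha=0.1, beta=0.85)
print(ivts_positions([10.0, 11.0, 11.0, 9.5]))  # [1, 1, -1]
```

In the SVR variant the abstract describes, the fixed (omega, alpha, beta) recursion would be replaced by a support vector regression of next-day volatility on its lagged values, with the kernel (linear, polynomial, radial) as a design choice.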