• Title/Summary/Keyword: system stability (시스템안정도)


Geological Engineering Considerations for the Preservation of the Muryong Royal Tomb (무령왕릉보존에 있어서의 지질공학적 고찰)

  • 서만철;최석원;구민호
    • Proceedings of the KSEEG Conference
    • /
    • 2001.05b
    • /
    • pp.42-63
    • /
    • 2001
  • A detailed survey of the Songsanri tomb site, including the Muryong royal tomb, was carried out from May 1, 1996 to April 30, 1997. A quantitative analysis was attempted to identify changes in the tombs since their excavation. The main objectives of the survey were to find the causes of infiltration of rainwater and groundwater into the tombs and the tomb site, to monitor the movement and safety of the tomb structures, to find a method for removing the algae inside the tombs, and to design an air-control system to solve the high humidity and dew condensation inside the tombs. For these purposes, detailed surveys inside and outside the tombs using an electronic distance meter and a small airplane, monitoring of temperature and humidity, geophysical exploration including electrical resistivity, geomagnetic, gravity and georadar methods, drilling, measurement of the physical and chemical properties of drill cores, and measurement of groundwater permeability were conducted. We found that the centers of the subsurface tombs and the centers of the soil mounds on the ground are offset by 4.5 m and 5 m for the 5th and 7th tombs, respectively. This offset has caused unequal stress on the tomb structures. In the 7th tomb (the Muryong royal tomb), 435 of 6,025 bricks were broken in 1972, but 1,072 bricks were broken by 1996; the breakage thus increased by about 250% in just 24 years, and by about 290% in the 6th tomb. The 1996 condition reflects only 24 years, whereas the 1972 condition reflected about 1,450 years, so the state of brick breakage indicates that a severe problem is under way. The eastern wall of the Muryong royal tomb is moving inward at a rate of 2.95 mm/yr in the rainy season and 1.52 mm/yr in the dry season. The frontal wall shows the largest movement in the 7th tomb, at 2.05 mm/yr toward the passageway.
The 6th tomb shows the largest movement among the three tombs, at rates of 7.44 mm/yr and 3.61 mm/yr toward the east, consistent with its high brick-breakage rate. Georadar sections of the shallow soil layer reveal several faults in the topsoil above the 5th and 7th tombs. Rainwater flowed through these faults into the tombs and the surrounding ground, and the high water content of the nearby ground resulted in low resistivity and high humidity inside the tombs. The high humidity, together with high temperature and a moderate light source, created favorable conditions for algae; the 6th tomb is the most severely affected and the 7th tomb the second. Artificial changes to the tomb environment since the excavation, infiltration of rainwater and groundwater into the tomb site, and a poor drainage system have left the tomb structures in a dangerous state. The main cause of the problems, including brick breakage, movement of the tomb walls and algal growth, is the infiltration of rainwater and groundwater into the tomb site; protection of the site from high water content should therefore be carried out first. The proposed waterproofing comprises a cover system over the tomb site using geotextile, a clay layer and a geomembrane, and a deep trench at the north of the site extending 2 m below the base of the 5th tomb. Reducing and balancing the weight of the soil above the tombs is also needed for the safety of the structures. For the algae inside the tombs, we recommend spraying K101, which was developed in this study, on the wall surfaces and then exposing them to ultraviolet light for 24 hours. The air-control system should be changed to a constant temperature and humidity system for the 6th and 7th tombs; it seems much better to place the system in the frontal room and to circulate cold air inside the tombs to solve the dew problem. The preservation methods above are suggested so as to cause the least change to the tomb site while solving the most fundamental problems.
Repairs should be planned in order, and special care is needed for the safety of the tombs during the repair work. Finally, a monitoring system measuring the tilt of the tomb walls, water content, groundwater level, temperature and humidity is required to monitor and evaluate the repair work.


Immunomodulatory Effects of Red Ginseng-derived Components (홍삼 유래 성분들의 면역조절 효능)

  • Jo, Jae-Yeol
    • Food preservation and processing industry
    • /
    • v.8 no.2
    • /
    • pp.6-12
    • /
    • 2009
  • The immune response is one of the main homeostatic mechanisms that protect the body from external infectious agents and eliminate them. These responses are mediated by immune cells generated in the bone marrow and matured in the spleen, thymus and lymph nodes. Representative immune cells include macrophages and dendritic cells, which mediate the innate immunity present from birth, and T lymphocytes, which carry out the adaptive immunity acquired through long experience with diverse antigens. Various immune diseases have recently become major causes of mortality. As cancer, diabetes and cerebrovascular disease have been reported to arise from acute and chronic inflammation in the body, the development of therapeutics for immune cell-mediated inflammatory diseases is being accelerated, and the rapid increase in cancer patients has heightened demand for enhancing immunity, the body's main defense against carcinogenesis. Korean ginseng and red ginseng, used since antiquity, are representative Korean traditional herbal medicines known as remedies that protect qi and restore vitality. In particular, red ginseng has been reported to promote protein and nucleic acid synthesis and to be highly effective for hematopoiesis, recovery of liver function, lowering of blood glucose, enhancement of exercise capacity, improvement of memory, relief of fatigue and strengthening of immunity. Compared with the many studies on red ginseng, molecular-level research on its immunostimulatory effects has so far been very limited. Administration of red ginseng has been shown to increase the activity of NK cells and macrophages and to enhance the tumor-cell killing of anticancer drugs. The main immunostimulatory components identified to date are acidic polysaccharides, and anti-inflammatory activity has been confirmed for some ginsenosides, through which therapeutic effects on skin inflammation and arthritis are presumed [this work was supported by a KT&G research grant (2009-2010), which is gratefully acknowledged]. The immune response is an important biological defense that removes and repairs the disease environment induced by invading foreign substances; its main role is to scavenge or destroy toxic materials, such as microorganisms or fine chemical substances, that enter the body. Defense against foreign material is currently described in terms of two kinds of immune response: innate immunity and adaptive immunity. Innate immunity comprises 1) anatomical barriers such as the skin and mucosal surfaces; 2) physiological defenses such as body temperature, low pH and chemical mediators (lysozyme, collectins); 3) phagocytic/endocytic defense by phagocytes (macrophages, dendritic cells, neutrophils); and 4) resistance to infection through the inflammatory response. Adaptive immunity, also called acquired immunity, has four hallmarks, namely specificity, diversity, memory and self/non-self recognition, and is divided, according to how foreign material is removed, into the humoral immune response and the cell-mediated immune response. Humoral immunity consists of reactions with B cell-derived antibodies generated against the specific structure of the invading antigen, and reactions mediated by serum complement synthesized and secreted by the liver, macrophages and other cells. The cell-mediated immune response arises from cell-cell interactions among T helper cells (CD4+), cytotoxic T cells (CD8+), B cells and antigen-presenting cells. Inflammation, one form of innate immunity, is one of the most frequent defense reactions in the body. In a common cold, for example, macrophages and dendritic cells in the patient's tonsils mount various inflammatory responses against the infecting virus, alone or together with co-infecting bacteria.
Likewise, when a wound occurs, an immunological battle ensues between pathogenic bacteria introduced through the site of infection and the innate immune cells of the surrounding tissue. When surrounding cells or tissue are damaged in this process, these immune cells (mainly phagocytes) immediately mount a cascade of inflammatory responses to minimize the damage and restore the damaged area. These responses appear as the familiar signs of redness, swelling, heat and pain: as blood flow in the capillaries around the damaged site increases, vessel diameter expands, causing tissue erythema, while the engorged vessels produce heat and swelling. Increased permeability of the dilated capillaries drives fluid and cells from the vessels into the tissue; the accumulated exudate raises the local protein concentration, drawing still more vascular fluid into the tissue and forming edema. Finally, immune cells in the vessels adhere to the vessel wall (margination) and, aided by chemical mediators such as histamine, nitric oxide (NO), prostaglandins (PGE2) and leukotrienes, which widen the gaps in the vessel wall, pass between the wall cells (extravasation) and migrate to the damaged site, where they mediate inflammation by directly destroying the foreign invaders or by secreting cytokines (tumor necrosis factor [TNF]-$\alpha$, interleukin [IL]-1, IL-6, etc.) or chemokines (MIP-1, IL-8, MCP-1, etc.) to recruit other immune cells. Among the mediators produced during inflammation, PGE2, NO and TNF-$\alpha$ are easy to assay experimentally, so these mediators and the enzymes that produce them (cyclooxygenase [COX], nitric oxide synthase [NOS], etc.) are currently major targets in anti-inflammatory drug development. Inflammatory responses are divided by duration into acute and chronic inflammation, and by the type of exudate into serous, fibrinous, purulent and hemorrhagic inflammation. Acute inflammation is the common inflammatory response lasting days to weeks; the local reaction features the cardinal signs of heat, redness, swelling, pain and loss of function, and since vascular changes and exudate formation predominate microscopically, it is also called exudative inflammation. Chronic inflammation either evolves from acute inflammation or begins as chronic, usually persisting for four weeks or longer. In ordinary inflammation, production of the pro-inflammatory Th1 cytokines (IL-2, interferon [IFN]-$\gamma$, TNF-$\alpha$, etc.) is followed almost immediately by production of the anti-inflammatory Th2 cytokines (IL-4, IL-6, IL-10, transforming growth factor [TGF]-$\beta$, etc.), restoring the normal state. If, for whatever reason, the removal of the inflammatory source by immune cells fails, the process progresses to chronic inflammation, in which the principal inflammatory cells are monocytes, macrophages, lymphocytes and plasma cells. Cancer is an immune-related disease that is the world's leading cause of death. When oxidative stress, ultraviolet irradiation or carcinogens cause mutation or deletion of DNA in chromosomal proto-oncogenes, tumor-suppressor genes or DNA-repair genes, normal cells begin the process of carcinogenesis.
When malignant cancer cells emerge roughly five to ten years after the benign stage, they metastasize in search of new environments, so that cancer patients come to harbor tumors of the same origin in multiple organs. These tumor cells impair the function of normal organs and ultimately cause death. Almost all such mutation-derived cancer cells are known to be killed by the body's immune system, but continued stress or exposure to carcinogens destroys the immune system, breaching this last line of defense and leaving the body unprotected against cancer. For this reason, strategies to maintain and boost normal immune function are recognized as a key target in cancer prevention, and various immunostimulatory agents are being developed. Ginseng, a perennial herb of the family Araliaceae, is a representative traditional medicine long used in oriental and folk medicine to restore vitality and treat diverse diseases. It has been handed down as a famous medicine for longevity and vigor, a reputation originating from the roughly 2,000-year-old Chinese text Shennong Bencao Jing (神農本草經), which records that "ginseng tonifies the five viscera, calms the spirit, settles the soul, stops palpitations, expels invading pathogens, brightens the eyes, opens the mind and increases wisdom, and with long use lightens the body and prolongs life." Numerous studies have shown that Korean ginseng (Panax ginseng) is the most efficacious, and Korean red ginseng prepared from it is reported to be outstanding worldwide. Most of red ginseng's activity is attributed to ginseng saponins called ginsenosides, dammarane-type triterpenoids, which are classified by backbone into the protopanaxadiol (PD) series (22 compounds) and the protopanaxatriol (PT) series (10 compounds) (Table 1). Despite extensive experimental efforts to understand the pharmacology of ginseng, much remains insufficiently understood; studies to date report strong efficacy in areas such as cardiovascular disease, diabetes, anticancer activity and stress resistance. Recent findings on immunomodulation and inflammation are still few, but this is recognized as an area to be studied extensively in the future.


Development of Predictive Models for Rights Issues Using Financial Analysis Indices and Decision Tree Technique (경영분석지표와 의사결정나무기법을 이용한 유상증자 예측모형 개발)

  • Kim, Myeong-Kyun;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.59-77
    • /
    • 2012
  • This study focuses on predicting which firms will increase capital by issuing new stocks in the near future. Many stakeholders, including banks, credit rating agencies and investors, perform a variety of analyses of firms' growth, profitability, stability, activity, productivity, etc., and regularly report the firms' financial analysis indices. In this paper, we develop predictive models for rights issues using these financial analysis indices and data mining techniques. The predictive models are built from two analytical perspectives. The first is the analysis period: we divide the period into before and after the IMF financial crisis and examine whether there is a difference between the two. The second is the prediction horizon: to predict when firms will increase capital by issuing new stocks, the prediction time is categorized as one, two or three years ahead. In total, six prediction models are therefore developed and analyzed. We employ the decision tree technique to build the prediction models for rights issues. The decision tree is the most widely used prediction method that builds trees to label or categorize cases into a set of known classes. In contrast to neural networks, logistic regression and SVM, decision tree techniques are well suited to high-dimensional applications and have strong explanatory capabilities. Among the well-known decision tree induction algorithms such as CHAID, CART, QUEST and C5.0, we use C5.0, the most recently developed, which yields better performance than the others. We obtained the rights-issue and financial analysis data from TS2000 of the Korea Listed Companies Association. Each financial analysis record consists of 89 variables, including 9 growth indices, 30 profitability indices, 23 stability indices, 6 activity indices and 8 productivity indices.
For model building and testing, we used 10,925 financial analysis records from 658 listed firms. PASW Modeler 13 was used to build C5.0 decision trees for the six prediction models. A total of 84 variables among the financial analysis data were selected as the input variables of each model, and the rights-issue status (issued or not issued) was defined as the output variable. To develop prediction models using the C5.0 node (Node Options: Output type = Rule set, Use boosting = false, Cross-validate = false, Mode = Simple, Favor = Generality), we used 60% of the data for model building and 40% for model testing. The experimental results show that the prediction accuracies for data after the IMF financial crisis (68.78% to 71.41%) are about 10 percentage points higher than those before the crisis (59.04% to 60.43%). These results indicate that since the IMF financial crisis, the reliability of financial analysis indices has increased and firms' intentions regarding rights issues have become more apparent. The experiments also show that stability-related indices have a major impact on rights issues in short-term prediction, whereas long-term prediction is affected by indices of profitability, stability, activity and productivity. All the prediction models include the industry code as a significant variable, meaning that companies in different industries show different patterns of rights issuance. We conclude that stakeholders should focus on stability-related indices for short-term prediction and consider a broader range of financial analysis indices for long-term prediction. The current study has several limitations. First, we need to compare the differences in accuracy obtained with other data mining techniques such as neural networks, logistic regression and SVM.
Second, we need to develop and evaluate new prediction models that include variables which the theory of capital structure has identified as relevant to rights issues.
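The tree-building workflow this abstract describes can be sketched with open-source tools. Note the assumptions: scikit-learn has no C5.0 implementation, so an analogous CART tree stands in, and the data below are synthetic placeholders, not the TS2000 records used in the study.

```python
# Hypothetical sketch of the study's workflow: 84 financial-analysis indices
# as inputs, rights-issue status as the binary output, 60/40 build/test split.
# C5.0 (used in PASW Modeler) is unavailable in scikit-learn, so a CART-style
# DecisionTreeClassifier is used here as a stand-in.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n_firm_years = 1000
X = rng.normal(size=(n_firm_years, 84))  # synthetic stand-in for the 84 indices
# Illustrative label: assume the issue decision depends on a few stability-like
# indices (the paper found stability indices dominant for short-term prediction)
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] > 0).astype(int)

# 60% of data for model building, 40% for model testing, as in the study
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.6, random_state=42)

tree = DecisionTreeClassifier(max_depth=5, random_state=42).fit(X_train, y_train)
accuracy = tree.score(X_test, y_test)
print(f"holdout accuracy: {accuracy:.2f}")
```

On real data the rule set extracted from such a tree is what gives the approach its explanatory power relative to neural networks or SVM.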

Performance Improvement on Short Volatility Strategy with Asymmetric Spillover Effect and SVM (비대칭적 전이효과와 SVM을 이용한 변동성 매도전략의 수익성 개선)

  • Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.119-133
    • /
    • 2020
  • Fama asserted that in an efficient market, no trading rule can consistently outperform the average stock market return. This study suggests a machine learning algorithm to improve the trading performance of an intraday short volatility strategy (SVS) that exploits the asymmetric volatility spillover effect, and analyzes the resulting improvement. Generally, stock market volatility has a negative relation with stock market returns, and Korean stock market volatility is influenced by US stock market volatility. This volatility spillover effect is asymmetric: upward and downward moves in US stock market volatility influence the next day's volatility of the Korean stock market differently. We collected the S&P 500 index, VIX, KOSPI 200 index and V-KOSPI 200 from 2008 to 2018. We found a negative relation between the S&P 500 and VIX, and between the KOSPI 200 and V-KOSPI 200, and documented a strong volatility spillover from the VIX to the V-KOSPI 200. Interestingly, the spillover was asymmetric: whereas a VIX rise is fully reflected in the opening volatility of the V-KOSPI 200, a VIX fall is only partially reflected at the open, and its influence lasts until the Korean market close. If the stock market were efficient, there would be no reason for an asymmetric volatility spillover to exist; it is a counterexample to the efficient market hypothesis. To exploit this anomalous spillover pattern, we analyzed an intraday volatility selling strategy that sells the Korean volatility market short in the morning after the US volatility market closes down, and takes no position after the VIX closes up. The strategy produced a profit in every year between 2008 and 2018, with a percent profitable of 68%.
The strategy showed an average annual return of 129%, versus the benchmark's 33%. Its maximum drawdown (MDD) of -41% was smaller in magnitude than the benchmark's -101%. The Sharpe ratio of the SVS strategy, 0.32, is much greater than the benchmark's 0.08. The Sharpe ratio considers return and risk simultaneously, being calculated as return divided by risk, so a high Sharpe ratio indicates superior performance when comparing strategies with different risk-return structures. Real-world trading incurs costs, including brokerage and slippage; once trading costs are considered, the performance gap between average annual returns of 76% and -10% becomes clear. To improve the strategy's performance, we used the well-known SVM algorithm. The input variables are the VIX close-to-close return at day t-1, the VIX open-to-close return at day t-1, and the V-KOSPI 200 (VK) open return at day t; the output is the up/down classification of the VK open-to-close return at day t. The training period runs from 2008 to 2014 and the testing period from 2015 to 2018. The kernel functions tested are the linear, radial basis and polynomial functions. We propose a modified short volatility (m-SVS) strategy that sells the VK in the morning when the SVM output is Down and takes no position when the output is Up. Trading performance improved remarkably: over the five-year testing period, the m-SVS strategy showed very high profit and low risk relative to the benchmark SVS strategy, with an annual return of 123%, higher than that of the SVS strategy, while the MDD improved significantly from -41% to -29%.
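The SVM setup described above (three return features, up/down target, three candidate kernels, chronological train/test split) can be sketched as follows. This is an illustrative assumption-laden analogue: the index series here are synthetic random returns, not the actual VIX and V-KOSPI 200 data, and the planted signal merely mimics the spillover relation the paper reports.

```python
# Sketch of the paper's SVM classification task: predict the up/down direction
# of today's VK open-to-close return from yesterday's VIX returns and today's
# VK open return. Synthetic data stand in for the real 2008-2018 series.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_days = 2000
vix_cc = rng.normal(0, 0.05, n_days)   # VIX close-to-close return, day t-1
vix_oc = rng.normal(0, 0.04, n_days)   # VIX open-to-close return, day t-1
vk_open = rng.normal(0, 0.03, n_days)  # VK open return, day t
X = np.column_stack([vix_cc, vix_oc, vk_open])
# Assumed spillover: yesterday's VIX move leans on today's VK direction
y = (0.8 * vix_cc + 0.4 * vk_open + rng.normal(0, 0.03, n_days) > 0).astype(int)

train, test = slice(0, 1400), slice(1400, None)  # chronological split
scores = {}
for kernel in ("linear", "rbf", "poly"):  # the three kernels the paper compares
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    clf.fit(X[train], y[train])
    scores[kernel] = clf.score(X[test], y[test])
print(scores)
```

In the m-SVS strategy, a Down prediction from the fitted classifier would trigger the morning short-volatility position, and an Up prediction would keep the strategy flat.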

A Study on Commodity Asset Investment Model Based on Machine Learning Technique (기계학습을 활용한 상품자산 투자모델에 관한 연구)

  • Song, Jin Ho;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.127-146
    • /
    • 2017
  • Services using artificial intelligence have begun to emerge in daily life. Artificial intelligence is applied to consumer electronics and communications products such as AI refrigerators and speakers. In the financial sector, Goldman Sachs improved its stock trading process using Kensho's artificial intelligence technology: two stock traders could handle the work of 600, and analytical work that took 15 people four weeks could be processed in five minutes. Big data analysis through machine learning, one field of artificial intelligence, is being actively applied throughout the financial industry, and stock market analysis and investment modeling based on machine learning theory are actively studied. The linearity limitations of traditional financial time series studies are overcome by machine learning approaches such as artificial intelligence prediction models. Quantitative studies based on past market-related numerical data widely use artificial intelligence to forecast future movements of stock prices or indices, and various other studies predict the future direction of the market or of individual stocks by learning from large amounts of text data such as news and comments related to the stock market. Investment in commodity assets, one of the alternative asset classes, is usually used to enhance the stability and safety of a traditional stock and bond portfolio. There is relatively little research on investment models for commodity assets compared with mainstream assets such as equities and bonds. Recently, machine learning techniques have been widely applied in finance, especially to stock and bond investment models, producing better trading models and changing the whole financial field. In this study we built an investment model using the Support Vector Machine (SVM), one of the machine learning models.
Some research on commodity assets focuses on price prediction for a specific commodity, but it is hard to find research on commodity investment models for asset allocation using machine learning. We propose a method of forecasting four major commodity indices, a portfolio of commodity futures, and individual commodity futures using an SVM model. The four major commodity indices are the Goldman Sachs Commodity Index (GSCI), the Dow Jones UBS Commodity Index (DJUI), the Thomson Reuters/Core Commodity CRB Index (TRCI), and the Rogers International Commodity Index (RI). We selected two individual futures from each of three sectors (energy, agriculture and metals) that are actively traded on the CME market and have enough liquidity: Crude Oil, Natural Gas, Corn, Wheat, Gold and Silver futures. We formed an equally weighted portfolio of the six commodity futures for comparison with the commodity indices. As model inputs we set 19 macroeconomic indicators, including stock market indices, export and import trade data, labor market data and composite leading indicators, because commodity assets are closely related to macroeconomic activity: 14 US, two Chinese and two Korean economic indicators. The data period runs from January 1990 to May 2017, with the first 195 monthly observations as training data and the remaining 125 as test data. We verified that the performance of the equally weighted commodity futures portfolio rebalanced by the SVM model is better than that of the other commodity indices. The model's prediction accuracy for the commodity indices does not exceed 50% regardless of the SVM kernel function, whereas the prediction accuracy for the equally weighted commodity futures portfolio is 53%. The prediction accuracy of the individual commodity futures models is better than that of the commodity index models, especially in the agriculture and metals sectors.
The individual commodity futures portfolio excluding the energy sector outperformed the portfolio covering all three sectors. To verify the validity of the model, the results should remain similar when the data period varies; we therefore also used odd-numbered years as training data and even-numbered years as test data, and confirmed that the results are similar. As a result, when allocating commodity assets to a traditional portfolio of stocks, bonds and cash, more effective investment performance can be obtained not by investing in commodity indices but by investing in commodity futures, and especially in a commodity futures portfolio rebalanced by the SVM model.
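The equally weighted six-futures portfolio the study compares against the commodity indices is mechanically simple; a minimal sketch follows. The return series is a synthetic placeholder, and the instrument labels are illustrative, not data from the paper.

```python
# Minimal sketch of the study's equally weighted portfolio of six commodity
# futures, evaluated over a 125-month test window (as in the paper).
# All returns below are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(2)
futures = ["CrudeOil", "NaturalGas", "Corn", "Wheat", "Gold", "Silver"]
n_months = 125
returns = rng.normal(0.003, 0.05, size=(n_months, len(futures)))

# Equal weighting: each future receives 1/6 of the portfolio each month
weights = np.full(len(futures), 1 / len(futures))
portfolio = returns @ weights          # monthly portfolio returns

# Compound the monthly returns into a cumulative growth figure
cumulative = np.prod(1 + portfolio) - 1
print(f"{n_months}-month cumulative return: {cumulative:.1%}")
```

In the study, the SVM's monthly up/down forecast would then gate whether this portfolio is held or rebalanced in a given month.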

Triptolide-induced Transrepression of IL-8 NF-${\kappa}B$ in Lung Epithelial Cells (폐상피세포에서 Triptolide에 의한 NF-${\kappa}B$ 의존성 IL-8 유전자 전사활성 억제기전)

  • Jee, Young-Koo;Kim, Yoon-Seup;Yun, Se-Young;Kim, Yong-Ho;Choi, Eun-Kyoung;Park, Jae-Seuk;Kim, Keu-Youl;Chea, Gi-Nam;Kwak, Sahng-June;Lee, Kye-Young
    • Tuberculosis and Respiratory Diseases
    • /
    • v.50 no.1
    • /
    • pp.52-66
    • /
    • 2001
  • Background : NF-${\kappa}B$ is the most important transcription factor in IL-8 gene expression. Triptolide is a new compound that has recently been shown to inhibit NF-${\kappa}B$ activation. The purpose of this study is to investigate how triptolide inhibits NF-${\kappa}B$-dependent IL-8 gene transcription in lung epithelial cells and to explore the potential for clinical application of triptolide in inflammatory lung diseases. Methods : A549 cells were used, and triptolide was provided by Pharmagenesis Company (Palo Alto, CA). To examine NF-${\kappa}B$-dependent IL-8 transcriptional activity, we established stable A549 IL-8-NF-${\kappa}B$-luc. cells and performed luciferase assays. IL-8 gene expression was measured by RT-PCR and ELISA. A Western blot was done to study $I{\kappa}B{\alpha}$ degradation, and an electrophoretic mobility shift assay was done to analyze NF-${\kappa}B$ DNA binding. p65-specific transactivation was analyzed in a cotransfection study using a Gal4-p65 fusion protein expression system. To investigate the involvement of transcriptional coactivators, we performed a transfection study with CBP and SRC-1 expression vectors. Results : Triptolide significantly suppressed the NF-${\kappa}B$-dependent IL-8 transcriptional activity induced by IL-$1{\beta}$ and PMA. RT-PCR showed that triptolide represses both IL-$1{\beta}$- and PMA-induced IL-8 mRNA expression, and ELISA confirmed this triptolide-mediated IL-8 suppression at the protein level. However, triptolide did not affect $I{\kappa}B{\alpha}$ degradation or NF-${\kappa}B$ DNA binding. In a p65-specific transactivation study, triptolide significantly suppressed Gal4-p65TA1 and Gal4-p65TA2 activity, suggesting that triptolide inhibits NF-${\kappa}B$ activation by inhibiting p65 transactivation.
However, this triptolide-mediated inhibition of p65 transactivation was not rescued by overexpression of CBP or SRC-1, thereby excluding a role for these transcriptional coactivators. Conclusions : Triptolide is a new compound that inhibits NF-${\kappa}B$-dependent IL-8 transcriptional activation by inhibiting p65 transactivation, not by an $I{\kappa}B{\alpha}$-dependent mechanism. This suggests that triptolide may have therapeutic potential in inflammatory lung diseases.


Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.77-97
    • /
    • 2010
  • Market timing is an investment strategy used to obtain excess return from the financial market. In general, detecting market timing means determining when to buy and sell so as to earn excess return from trading. In many market-timing systems, trading rules have been used as the engine generating trade signals. Some researchers have proposed rough set analysis as a proper tool for market timing because, through its control function, it generates no trade signal when the market pattern is uncertain. Numeric data must be discretized for rough set analysis, because the rough set approach accepts only categorical data. Discretization searches for proper "cuts" in the numeric data that determine intervals, and all values lying within an interval are transformed into the same value. In general, there are four discretization methods in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes the number of intervals, examines the histogram of each variable, and determines cuts so that approximately the same number of samples falls into each interval. Expert's knowledge-based discretization determines cuts according to domain experts' knowledge, obtained through literature review or interviews. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization first finds candidate categorical values by naïve scaling of the data, then finds the optimized discretization thresholds through Boolean reasoning.
Although rough set analysis is promising for market timing, there is little research on how the various discretization methods affect trading performance under rough set analysis. In this study, we compare stock market timing models using rough set analysis with the various discretization methods. The research data are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market; it is a market-value-weighted index of 200 stocks selected by criteria on liquidity and status in their industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and finance. The total sample comprises 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method on the training sample is naïve and Boolean reasoning, but expert's knowledge-based discretization is the most profitable on the validation sample; moreover, expert's knowledge-based discretization produced robust performance on both the training and validation samples. We also compared rough set analysis with a decision tree, experimenting with C4.5 for this purpose. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
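Equal frequency scaling, the first of the four discretization methods described above, is easy to sketch: cuts are quantiles of the variable, so roughly the same number of samples lands in each interval. The data below are synthetic (e.g. one technical indicator over the study's 660 trading days); the function name is illustrative.

```python
# Sketch of equal frequency scaling for rough set analysis: choose cuts so
# that approximately equal sample counts fall into each interval, then map
# every value in an interval to a single categorical code.
import numpy as np

def equal_frequency_cuts(values, n_intervals):
    """Return cut points that put ~equal sample counts in each interval."""
    qs = np.linspace(0, 1, n_intervals + 1)[1:-1]  # interior quantiles only
    return np.quantile(values, qs)

rng = np.random.default_rng(3)
indicator = rng.normal(size=660)   # synthetic indicator, 660 trading days
cuts = equal_frequency_cuts(indicator, 4)
codes = np.digitize(indicator, cuts)   # values in an interval share one code

counts = np.bincount(codes)
print("cuts:", np.round(cuts, 3), "counts per interval:", counts)
```

The other three methods differ only in how the cuts are chosen (expert judgment, entropy minimization, or Boolean reasoning); the mapping step is the same.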

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data comprise 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used the default event itself as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This solves the data imbalance problem caused by the scarcity of default events, which has been pointed out as a limitation of the existing methodology, as well as the inability of event-based approaches to reflect differences in default risk among ordinary companies. Because learning was conducted using only corporate information that is also available for unlisted companies, the default risk of unlisted companies without stock price information can be appropriately derived. The approach can therefore provide stable default risk assessment services to unlisted companies, such as small and medium-sized companies and startups, for which traditional credit rating models struggle to determine a proper default risk. Although predicting corporate default risk with machine learning has recently been studied actively, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for calculating default risk, given that default risk information is very widely used in the market and sensitivity to differences in default risk is high; strict standards are also required for the calculation methods.
The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and of changes in future market conditions. This study reduced individual model bias by using stacking ensemble techniques that synthesize various machine learning models, capturing the complex nonlinear relationships between default risk and corporate information while preserving the computational efficiency of machine learning-based default risk prediction. To generate the sub-model forecasts used as input data for the stacking ensemble model, the training data were divided into seven pieces and sub-models were trained on the divided sets to produce forecasts. To compare predictive power, Random Forest, MLP and CNN models were trained on the full training data, and the predictive power of each model was verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, the best-performing single model. Next, to check for statistically significant differences between the stacking ensemble model and each individual model, pairs of forecasts were constructed. Because the Shapiro-Wilk normality test showed that none of the pairs followed normality, we used the nonparametric Wilcoxon rank-sum test to check whether the two forecasts in each pair differed significantly. The analysis showed that the forecasts of the stacking ensemble model differed statistically significantly from those of the MLP and CNN models.
In addition, this study provides a methodology by which existing credit rating agencies can adopt machine learning-based default risk prediction, given that traditional credit rating models can also be included as sub-models in calculating the final default probability. The stacking ensemble techniques proposed here can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope this research will be used as a resource to increase practical adoption by overcoming and improving on the limitations of existing machine learning-based models.
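The stacking procedure described above (sub-models produce out-of-fold forecasts over split training data; a meta-model combines them) can be sketched with a smaller scikit-learn analogue. Assumptions: the study stacks Random Forest, MLP and CNN sub-models, but a CNN is impractical in a short sketch, so only Random Forest and MLP sub-models with a logistic-regression meta-learner appear here, and the 160-column corporate dataset is replaced by synthetic classification data.

```python
# Illustrative stacking-ensemble sketch in the spirit of the study:
# sub-model out-of-fold forecasts become inputs to a meta-learner,
# mirroring the paper's 7-way split of the training data (cv=7).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the corporate financial dataset
X, y = make_classification(n_samples=2000, n_features=40, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(16,), max_iter=300,
                              random_state=0)),
    ],
    final_estimator=LogisticRegression(),
    cv=7,   # seven folds of out-of-fold sub-model forecasts, as in the study
)
stack_acc = stack.fit(X_tr, y_tr).score(X_te, y_te)

# Single-model baseline for comparison, as the paper compares against
# the best-performing individual model
rf_acc = RandomForestClassifier(n_estimators=50, random_state=0)\
    .fit(X_tr, y_tr).score(X_te, y_te)
print(f"stacking {stack_acc:.3f} vs random forest {rf_acc:.3f}")
```

A traditional credit rating model could be wrapped as another estimator in the `estimators` list, which is the integration path the study suggests for existing agencies.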