• Title/Summary/Keyword: 대표값 (representative value)


The Study on the Height Characteristics of Abies Nephrolepis Community in South Korea - In the Case of Seorak·Odae·Taebaek National Park - (우리나라 분비나무의 수고 특성 연구 - 설악·오대·태백산국립공원을 대상으로 -)

  • Jin-Won Kim;Ho-Young Lee;Young-Moon Chun;Choong-Hyeon Oh
    • Korean Journal of Environment and Ecology
    • /
    • v.38 no.2
    • /
    • pp.169-177
    • /
    • 2024
  • This study investigated whether population dynamic analysis based on the height characteristics of Abies nephrolepis was feasible. It was necessary because existing population dynamic analyses based on age and diameter at breast height (DBH) made it difficult to reflect the slow growth characteristics of Abies nephrolepis in harsh environments of high altitudes. The limitations of population dynamics analysis based on the age and DBH distribution of Abies nephrolepis in Seoraksan, Odaesan, and Taebaeksan National Parks, where Abies nephrolepis populations are representative, were verified, and the characteristics of height growth were investigated to comprehensively analyze whether a vertical structure based on height could reveal the population dynamics. The result of this study showed some limitations in understanding the population dynamics of Abies nephrolepis based on age distribution due to practical difficulties in sampling all trees and variations in age distribution within the same community depending on factors such as light conditions. Moreover, it was challenging to differentiate the distribution of DBH classes at fine levels, making it difficult to reflect the rapid growth characteristics of Abies nephrolepis when light conditions become suitable after prolonged stays in smaller DBH classes under shade conditions. However, a comprehensive analysis of the height characteristics of Abies nephrolepis revealed that the density corresponding to the population dynamic characteristics of Abies was high and adequately reflected the predominant tree death at similar height stages, as well as the U-shaped population dynamics at the lower stratum. 
Moreover, it was possible to identify a transition point in height values under shaded conditions, where the annual growth of Abies nephrolepis individuals in the lower stratum increases significantly, indicating that Abies nephrolepis individuals can escape from competition with other shrubs and undergo vigorous growth only at this height level. Therefore, this study confirmed that a vertical structure based on height can be utilized to understand the population dynamics of Abies nephrolepis in high altitudes, and it is expected that future studies on height characteristics can intuitively reveal the maintenance status of Abies nephrolepis populations in the field.

Comparison of the Nutritional and Functional Compounds in Naked Oats (Avena sativa L.) Cultivated in Different Regions (재배지역 차이에 따른 쌀귀리 영양성분 및 기능성 성분 비교)

  • Ji-Hye Song;Dea-Wook Kim;Hak-Young Oh;Jong-Tak Yun;Yong-In Kuk;Kwang-Yeol Yang
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.68 no.4
    • /
    • pp.402-412
    • /
    • 2023
  • To cope with climate change, we compared the quality of naked oats (Avena sativa L.) cultivated in different regions. Naked oats were collected from domestic farms in different cultivation regions grouped as G1 and G2 for 3 years (2020-2022). The appearance, quality, and nutritional and functional compounds in the samples were assessed. In terms of appearance quality, the brightness and yellowness of the samples from the G1 region were significantly lower than those of the samples from the G2 region in 2020; however, no differences were observed between cultivation regions in the other 2 years. The results of testing the vitality of naked oats seeds showed that the electrical conductivity value was significantly lower in the samples from the G1 region than in those from the G2 region only in 2022. Among the nutritional components, moisture content was higher in the G2 region than in the G1 region over all 3 years, and the crude protein content was significantly higher in the G2 region than in the G1 region over all years. Carbohydrate content was significantly higher in the G1 region than in the G2 region in all 3 years and was inversely proportional to the crude protein content. The crude fat content tended to be significantly higher in the G1 region than in the G2 region, except in 2022. The levels of beta-glucan, a functional compound rich in naked oats, ranged between 3.4% and 4.2%, and except in 2020, there was no significant difference between cultivation regions. In addition, the content of avenanthramides, representative functional compounds that exist only in oats, was assessed. Over 2 years, in 2021 and 2022, the avenanthramide content was in the range of 2.4-20.7 ㎍/g and tended to be significantly higher in the G2 region than in the G1 region in both years. 
According to a survey of the average and minimum temperatures during the growing season of naked oats from 2020 to 2022, the average and minimum temperatures in January in the G2 region, which is the cultivation-limit area, were similar to those in Haenam in the G1 region. In conclusion, differences in nutritional and functional compounds were observed in naked oats grown in different cultivation areas. Therefore, considering the cultivation area of naked oats is expanding because of climate change, changes in the compounds that affect quality should be investigated.

A Study on the Medical Application and Personal Information Protection of Generative AI (생성형 AI의 의료적 활용과 개인정보보호)

  • Lee, Sookyoung
    • The Korean Society of Law and Medicine
    • /
    • v.24 no.4
    • /
    • pp.67-101
    • /
    • 2023
  • The utilization of generative AI in the medical field is also being rapidly researched. Access to vast data sets reduces the time and energy spent in selecting information. However, as the effort put into content creation decreases, there is a greater likelihood of associated issues arising. For example, with generative AI, users must discern the accuracy of results themselves, as these AIs learn from data within a set period and generate outcomes. While the answers may appear plausible, their sources are often unclear, making it challenging to determine their veracity. Additionally, the possibility of presenting results from a biased or distorted perspective cannot be discounted at present on ethical grounds. Despite these concerns, the field of generative AI is continually advancing, with an increasing number of users leveraging it in various sectors, including biomedical and life sciences. This raises important legal considerations regarding who bears responsibility and to what extent for any damages caused by these high-performance AI algorithms. A general overview of issues with generative AI includes those discussed above, but another perspective arises from its fundamental nature as a large-scale language model ('LLM') AI. There is a civil law concern regarding "the memorization of training data within artificial neural networks and its subsequent reproduction". Medical data, by nature, often reflects personal characteristics of patients, potentially leading to issues such as the regeneration of personal information. The extensive application of generative AI in scenarios beyond traditional AI brings forth the possibility of legal challenges that cannot be ignored. 
Upon examining the technical characteristics of generative AI with a focus on legal issues, especially the protection of personal information, it is evident that current personal information protection laws, particularly in the context of health and medical data utilization, are inadequate. These laws provide processes for anonymizing and de-identifying specific personal information but fall short when generative AI is applied as software in medical devices. To address the functionalities of generative AI in clinical software, a reevaluation and adjustment of the existing laws for the protection of personal information are imperative.

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and exhibit a great deal of noise. Recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, for a total of 1,487 trading days. We used 1,187 days to train the suggested GARCH models, and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. 
Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in KOSPI 200 Index return volatility forecasting; the polynomial kernel function, however, shows exceptionally low forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility results. The IVTS entry rules are as follows: if tomorrow's forecasted volatility is higher, buy volatility today; if tomorrow's forecasted volatility is lower, sell volatility today; if the forecasted volatility direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because we cannot trade historical volatility values themselves, but our simulation results are still meaningful because the Korea Exchange introduced a tradable volatility futures contract in November 2014. The trading systems with SVR-based GARCH models show higher returns than the MLE-based GARCH systems in the testing period. The profitable-trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return and SVR-based symmetric S-GARCH shows a +526.4% return. MLE-based asymmetric E-GARCH shows a -72% return and SVR-based asymmetric E-GARCH shows a +245.6% return. MLE-based asymmetric GJR-GARCH shows a -98.7% return and SVR-based asymmetric GJR-GARCH shows a +126.3% return. The linear kernel function shows higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4%, versus +150.2% for the MLE-based IVTS. The SVR-based GARCH IVTS also shows a higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage costs. 
The IVTS trading performance is also unrealistic in that historical volatility values are used as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine learning-based GARCH models can give better information to stock market investors.
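The MLE baseline described in this abstract can be sketched as follows. This is a minimal illustration on synthetic returns, not the authors' implementation: a GARCH(1,1) conditional-variance recursion whose Gaussian negative log-likelihood is minimized numerically, followed by the one-step-ahead variance forecast that a rule like IVTS would compare day to day.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_sigma2(params, returns):
    # Conditional variance recursion: s2_t = omega + alpha*r_{t-1}^2 + beta*s2_{t-1}
    omega, alpha, beta = params
    sigma2 = np.empty(len(returns))
    sigma2[0] = returns.var()                  # initialize at the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

def neg_loglik(params, returns):
    # Gaussian negative log-likelihood of the GARCH(1,1) model
    sigma2 = garch11_sigma2(params, returns)
    return 0.5 * np.sum(np.log(2 * np.pi) + np.log(sigma2) + returns ** 2 / sigma2)

# Synthetic GARCH(1,1) returns (parameters are illustrative, not KOSPI 200 data)
rng = np.random.default_rng(0)
omega, alpha, beta = 0.05, 0.10, 0.85
r = np.empty(2000)
s2 = omega / (1 - alpha - beta)                # unconditional variance
for t in range(len(r)):
    r[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = omega + alpha * r[t] ** 2 + beta * s2

res = minimize(neg_loglik, x0=(0.1 * r.var(), 0.1, 0.8), args=(r,),
               method="L-BFGS-B", bounds=[(1e-8, None), (1e-8, 1), (1e-8, 1)])
omega_hat, alpha_hat, beta_hat = res.x

# One-step-ahead variance forecast, the quantity a volatility trading rule consumes
sigma2_path = garch11_sigma2(res.x, r)
sigma2_next = omega_hat + alpha_hat * r[-1] ** 2 + beta_hat * sigma2_path[-1]
```

The paper's SVR variant replaces this likelihood step with kernel regression of the variance equation; scikit-learn's `sklearn.svm.SVR` would be a natural tool for reproducing such an experiment.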

Emoticon by Emotions: The Development of an Emoticon Recommendation System Based on Consumer Emotions (Emoticon by Emotions: 소비자 감성 기반 이모티콘 추천 시스템 개발)

  • Kim, Keon-Woo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.227-252
    • /
    • 2018
  • The evolution of instant communication has mirrored the development of the Internet, and messenger applications are among the most representative manifestations of instant communication technologies. In messenger applications, senders use emoticons to supplement the emotions conveyed in the text of their messages. The fact that communication via messenger applications is not face-to-face makes it difficult for senders to communicate their emotions to message recipients. Emoticons have long been used as symbols that indicate the moods of speakers. At present, however, emoticon use is evolving into a means of conveying the psychological states of consumers who want to express individual characteristics and personality quirks while communicating their emotions to others. The fact that companies like KakaoTalk, Line, and Apple have begun conducting emoticon business, and that sales of related content are expected to gradually increase, testifies to the significance of this phenomenon. Nevertheless, despite the development of emoticons themselves and the growth of the emoticon market, no suitable emoticon recommendation system has yet been developed. Even KakaoTalk, a messenger application that commands more than 90% of the domestic market share in South Korea, merely groups emoticons into popular, most-recent, or brief categories, so consumers face the inconvenience of constantly scrolling around to locate the emoticons they want. An emoticon recommendation system would improve consumer convenience and satisfaction and increase the sales revenue of companies that sell emoticons. To recommend appropriate emoticons, it is necessary to quantify the emotions that consumers perceive and want to express. Such quantification will enable us to analyze the characteristics and emotions felt by consumers who used similar emoticons, which, in turn, will facilitate emoticon recommendations for consumers. One way to quantify emoticon use is metadata-ization. 
Metadata-ization is a means of structuring or organizing unstructured and semi-structured data to extract meaning. By structuring unstructured emoticon data through metadata-ization, we can easily classify emoticons based on the emotions consumers want to express. To determine emoticons' precise emotions, we had to consider sub-detail expressions: not only the seven common emotional adjectives but also the metaphorical expressions that appear only in South Korea, as established by previous studies of emotion focusing on emoticons' characteristics. We therefore collected the sub-detail expressions of emotion based on "Shape", "Color", and "Adumbration". Moreover, to design a highly accurate recommendation system, we considered both emoticon-technical indexes and emoticon-emotional indexes. We then identified 14 features of emoticon-technical indexes and selected 36 emotional adjectives. The 36 emotional adjectives consisted of contrasting adjectives, which we reduced to 18, and we measured the 18 emotional adjectives using 40 emoticon sets randomly selected from the top-ranked emoticons in the KakaoTalk shop. We surveyed 277 consumers in their mid-twenties who had experience purchasing emoticons; we recruited them online and asked each to evaluate five different emoticon sets. After data acquisition, we conducted a factor analysis of the emoticon-emotional factors and extracted four factors that we named "Comic", "Softness", "Modernity", and "Transparency". We analyzed both the relationship between the indexes and consumer attitude and the relationship between the emoticon-technical indexes and the emoticon-emotional factors. Through this process, we confirmed that the emoticon-technical indexes did not directly affect consumer attitudes but had a mediating effect on consumer attitudes through the emoticon-emotional factors. 
The results of the analysis revealed the mechanism consumers use to evaluate emoticons: the emoticon-technical indexes affected the emoticon-emotional factors, and the emoticon-emotional factors affected consumer satisfaction. We therefore designed the emoticon recommendation system using only the four emoticon-emotional factors, with a recommendation method that calculates the Euclidean distance from each factor's emotion score. To check the accuracy of the emoticon recommendation system, we compared the emotional patterns of selected emoticons with those of the recommended emoticons; the patterns corresponded in principle. We verified the emoticon recommendation system by testing prediction accuracy: the predictions were 81.02% accurate in the first trial, 76.64% in the second, and 81.63% in the third. This study developed a methodology that can be used in various fields both academically and practically. We expect that the novel emoticon recommendation system we designed will increase emoticon sales for companies in this domain and make consumer experiences more convenient. In addition, this study serves as an important first step in the development of an intelligent emoticon recommendation system. The emotional factors proposed in this study could be collected in an emotional library that could serve as an emotion index for evaluation when new emoticons are released. Moreover, by combining the accumulated emotional library with company sales data, sales information, and consumer data, companies could develop hybrid recommendation systems that would bolster convenience for consumers and serve as intellectual assets that companies could strategically deploy.
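The distance-based recommendation step can be sketched as follows. The four factor names come from the study ("Comic", "Softness", "Modernity", "Transparency"), but the set names and scores below are hypothetical placeholders, not the study's data:

```python
import numpy as np

# Hypothetical 4-factor scores (Comic, Softness, Modernity, Transparency)
# per emoticon set; names and values are illustrative only.
catalog = {
    "set_A": np.array([0.80, 0.20, 0.50, 0.10]),
    "set_B": np.array([0.10, 0.90, 0.30, 0.70]),
    "set_C": np.array([0.60, 0.40, 0.60, 0.30]),
}

def recommend(target, catalog, k=2):
    """Rank emoticon sets by Euclidean distance to a target emotion profile."""
    dist = {name: float(np.linalg.norm(vec - target)) for name, vec in catalog.items()}
    return sorted(dist, key=dist.get)[:k]

# A consumer's desired emotion profile on the same four factors
profile = np.array([0.75, 0.25, 0.55, 0.15])
nearest = recommend(profile, catalog)   # → ['set_A', 'set_C']
```

In practice the target profile would come from the consumer's survey responses or past purchases, and the catalog vectors from the accumulated emotional library the abstract describes.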

Limno-Biological Investigation of Lake Ok-Jeong (옥정호의 육수생물학적 연구)

  • SONG Hyung-Ho
    • Korean Journal of Fisheries and Aquatic Sciences
    • /
    • v.15 no.1
    • /
    • pp.1-25
    • /
    • 1982
  • A limnological study of the physico-chemical properties and biological characteristics of Lake Ok-Jeong was made from May 1980 to August 1981. For the planktonic organisms in the lake, species composition, seasonal change, and diurnal vertical distribution based on monthly plankton samples were investigated in conjunction with the physico-chemical properties of the lake water. Analysis of temperature revealed three distinctive periods in terms of vertical mixing of the water column. During the winter season (November-March) the water column was completely mixed, and no temperature gradient was observed. In February the temperature of the whole column from the surface to the bottom was $3.5^{\circ}C$, the minimum value. With seasonal warming in spring, surface water formed thermoclines at a depth of 0-10 m from April to June. In summer (July-October) the surface mixing layer deepened to form a strong thermocline at a depth of 15-25 m. At this time surface water reached up to $28.2^{\circ}C$ in August, accompanied by a significant increase in the temperature of the bottom layer. The maximum bottom temperature, $15^{\circ}C$, occurred in September, showing that this lake maintains significant turbulence through the hypolimnial layer. As autumn cooling proceeded, the summer stratification was destroyed from the end of October, resulting in vertical mixing. In the surface layer, seasonal changes of pH were within the range of 6.8 in January to 9.0 in August. The highest value, observed in August, was mainly due to the photosynthetic activity of the phytoplankton. In the surface layer DO was saturated throughout the year; particularly in winter (January-April) the surface water was oversaturated (max. 15.2 ppm in March). Vertical variation of DO was not remarkable, and bottom water was fairly well oxygenated. Transparency was closely related to the phytoplankton bloom. 
The highest value (4.6 m) was recorded in February when primary production was low. During summer, transparency decreased, and the lowest value (0.9 m) was recorded in August, mainly due to the dense blooming of Anabaena spiroides var. crassa in the surface layer. The amounts of inorganic matter (Ca, Mg, Fe) reveal that Lake Ok-Jeong is classified as a soft-water lake. The amounts of Cl, $NO_3-N$ and COD in 1981 were slightly higher than those in 1980. Heavy metals (Zn, Cu, Pb, Cd and Hg) were not detectable throughout the study period. During the study period 107 species of planktonic organisms representing 72 genera were identified, including 12 species of Cyanophyta, 19 species of Bacillariophyta, 23 species of Chlorophyta, 14 species of Protozoa, 29 species of Rotifera, 4 species of Cladocera, and 6 species of Copepoda. Bimodal blooming of phytoplankton was observed: a large bloom ($1,504\times10^3\;cells/l$ in October) from July to October and a small bloom ($236\times10^3\;cells/l$ in February) from January to April. The dominant phytoplankton species include Melosira granulata, Anabaena spiroides, Asterionella gracillima and Microcystis aeruginosa, which were classified into three seasonal groups: a summer group, a winter group, and a whole-year group. The summer group includes Melosira granulata and Anabaena spiroides; the winter group includes Asterionella gracillima, Synedra acus and S. ulna; the whole-year group includes Microcystis aeruginosa and Ankistrodesmus falcatus. It is noted that M. granulata tends to aggregate in the bottom layer from January to August. The dominant zooplankters were Thermocyclops taihokuensis, Difflugia corona, Bosmina longirostris, Bosminopsis deitersi, Keratella quadrata and Asplanchna priodonta. A single peak of zooplankton growth was observed, with maximum zooplankton occurrence in July. Diurnal vertical migration was shown by Microcystis aeruginosa, M. 
incerta, Anabaena spiroides, Melosira granulata, and Bosmina longirostris. Of these, M. granulata descends to the bottom and forms aggregations after sunset. B. longirostris shows fairly typical nocturnal migration: they ascend to the surface after sunset and disperse through the whole water column during the night. Forty-one species of fish representing 31 genera were collected. Of these, 13 species including Pseudoperilampus uyekii and Coreoleuciscus splendidus were indigenous species of Korean inland waters. The indicator species for water quality determination include Microcystis aeruginosa, Melosira granulata, Asterionella gracillima, Brachionus calyciflorus, Filinia longiseta, Conochiloides natans, Asplanchna priodonta, Difflugia corona, Eudorina elegans, Ceratium hirundinella, Bosmina longirostris, Bosminopsis deitersi, Heliodiaptomus kikuchii and Thermocyclops taihokuensis. These species are known as indicator groups commonly found in eutrophic lakes. Based on these planktonic indicators, Lake Ok-Jeong can be classified as a eutrophic lake.


A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among various machine learning algorithms. In particular, CNN (Convolutional Neural Network), known as an effective solution for recognizing and classifying images or voices, has been popularly applied to classification and prediction problems. In this study, we investigate how to apply CNN to business problem solving. Specifically, this study proposes to apply CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNN has strength in interpreting images. Thus, the model proposed in this study adopts CNN as a binary classifier that predicts stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements to predict future price movements. Our proposed model, named 'CNN-FG (Convolutional Neural Network using Fluctuation Graph)', consists of five steps. In the first step, it divides the dataset into intervals of 5 days. Then it creates time series graphs for the divided dataset in step 2. The size of the image in which the graph is drawn is $40(pixels){\times}40(pixels)$, and the graph of each independent variable is drawn in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing the color value on the R (red), G (green), and B (blue) scales. In the next step, it splits the dataset of graph images into training and validation datasets; we used 80% of the total dataset as the training dataset and the remaining 20% as the validation dataset. Finally, CNN classifiers are trained using the images of the training dataset. 
Regarding the parameters of CNN-FG, we adopted two convolution filters ($5{\times}5{\times}6$ and $5{\times}5{\times}9$) in the convolution layer. In the pooling layer, a $2{\times}2$ max pooling filter was used. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend, the other for a downward trend). The activation functions for the convolution layer and the hidden layers were set to ReLU (Rectified Linear Unit), and the one for the output layer was set to the Softmax function. To validate CNN-FG, we applied it to the prediction of KOSPI200 over 2,026 days in eight years (from 2009 to 2016). To match the proportions of the two groups in the dependent variable (i.e. tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset using 80% of the total dataset (1,560 samples) and the validation dataset using the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, such as Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry William's %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on these graphs can be effective from the perspective of prediction accuracy. 
Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
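The graph-to-matrix conversion in steps 2-3 can be sketched as follows: each 5-day indicator series is rasterized as a line in its own channel of a 40x40 RGB image. This is a simplified NumPy-only sketch under stated assumptions (the indicator names come from the paper; the drawing details, min-max scaling, and one-channel-per-variable mapping are illustrative choices, not the authors' exact rendering):

```python
import numpy as np

def series_to_rgb_image(series_list, size=40):
    """Rasterize up to three series into the R, G, B channels of a size x size image."""
    img = np.zeros((size, size, 3))
    for ch, series in enumerate(series_list[:3]):
        s = np.asarray(series, dtype=float)
        s = (s - s.min()) / (s.max() - s.min() + 1e-12)      # scale to [0, 1]
        xs = np.linspace(0, size - 1, num=size).astype(int)   # one column per pixel
        ys = np.interp(np.linspace(0, len(s) - 1, num=size),
                       np.arange(len(s)), s)                  # resample the series
        rows = (size - 1 - np.round(ys * (size - 1))).astype(int)
        img[rows, xs, ch] = 1.0                               # draw the line
    return img

# A hypothetical 5-day window of three technical indicators
window = [np.sin(np.linspace(0, 3, 5)),   # e.g. Stochastic %K
          np.cos(np.linspace(0, 3, 5)),   # e.g. Stochastic %D
          np.linspace(0, 1, 5)]           # e.g. Momentum
image = series_to_rgb_image(window)        # shape (40, 40, 3), ready for a CNN
```

The resulting arrays would then be fed to a standard image classifier (e.g. a small convolutional network in any deep learning framework) exactly as ordinary RGB images are.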

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.135-149
    • /
    • 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are being actively developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts that are difficult for those methods. A robo-advisor makes automated investment decisions with artificial intelligence algorithms and is used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets. It avoids investment risk structurally, so it is stable in the management of large funds and has been widely used in the financial field. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It not only scales to billions of examples in limited-memory environments but also learns much faster than traditional boosting methods, and it is frequently used in many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk to the covariance estimation process. Because an optimized asset allocation model estimates the proportion of investments based on historical data, there are estimated errors between the estimation period and the actual investment period, and these errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and portfolio performance of the model by predicting the volatility of the next investment period and reducing the estimated errors of the optimized asset allocation model. As a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model. 
For the empirical test of the suggested model, we used Korean stock market price data for a total of 17 years, from 2003 to 2019. The data sets are composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results, and analyzed portfolio performance in terms of the cumulative rate of return; the long test period provided ample sample data. Compared with the traditional risk parity model, this experiment recorded improvements in both cumulative return and the reduction of estimated errors. The total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and the estimated errors are reduced in 9 out of 10 industry sectors. The reduction of estimated errors increases the stability of the model and makes it easier to apply in practical investment. The results of the experiment showed improvement in portfolio performance by reducing the estimated errors of the optimized asset allocation model. Many financial models and asset allocation models are limited in practical investment because of the fundamental question of whether the past characteristics of assets will continue into the future in a changing financial market. This study, however, not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting the risks of assets with a recent algorithm. There are various studies on parametric estimation methods to reduce the estimated errors in portfolio optimization; we suggest a new method that reduces the estimated errors in the optimized asset allocation model using machine learning. 
This study is therefore meaningful in that it proposes an advanced artificial intelligence asset allocation model for fast-developing financial markets.
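The covariance step the abstract describes can be sketched as follows. This is a minimal illustration under stated assumptions, not the study's implementation: the historical correlation matrix is kept, the diagonal volatilities are replaced by forecasts (hard-coded here; in the study they would come from XGBoost), and a naive inverse-volatility rule stands in for full risk-parity optimization.

```python
# Sketch: rebuild the covariance matrix from historical correlations and
# *predicted* volatilities, then derive naive risk-parity weights.
# All numeric values below are illustrative, not from the study.

def rescale_covariance(corr, predicted_vols):
    """Covariance = D * corr * D, where D is a diagonal of predicted vols."""
    n = len(predicted_vols)
    return [[corr[i][j] * predicted_vols[i] * predicted_vols[j]
             for j in range(n)] for i in range(n)]

def inverse_vol_weights(cov):
    """Naive risk parity: weight each asset by 1 / sqrt(its variance)."""
    inv = [1.0 / cov[i][i] ** 0.5 for i in range(len(cov))]
    total = sum(inv)
    return [x / total for x in inv]

corr = [[1.0, 0.3], [0.3, 1.0]]   # historical correlation (kept as-is)
predicted_vols = [0.15, 0.30]     # hypothetical next-period vol forecasts
cov = rescale_covariance(corr, predicted_vols)
weights = inverse_vol_weights(cov)  # the lower-vol asset receives more weight
```

Replacing the diagonal with forecast volatility is what shrinks the gap between the estimation window and the actual investment window; the correlation structure is still taken from history.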

The Characteristics of Bronchioloalveolar Carcinoma Presenting with Solitary Pulmonary Nodule (고립성 폐결절로 나타난 기관지폐포암의 임상적 고찰)

  • Kim, Ho-Cheol;Cheon, Eun-Mee;Suh, Gee-Young;Chung, Man-Pyo;Kim, Ho-Joong;Kwon, O-Jung;Rhee, Chong-H.;Han, Yong-Chol;Lee, Kyoung-Soo;Han, Jung-Ho
    • Tuberculosis and Respiratory Diseases
    • /
    • v.44 no.2
    • /
    • pp.280-289
    • /
    • 1997
  • Background: Bronchioloalveolar carcinoma (BAC) has been reported to show a diverse spectrum of clinical presentations and radiologic patterns. The three representative radiologic patterns are: 1) a solitary nodule or mass, 2) a localized consolidation, and 3) multicentric or diffuse disease. While the localized consolidation and solitary nodular patterns have a favorable prognosis, the multicentric or diffuse pattern has a worse prognosis regardless of treatment. BAC presenting as a solitary pulmonary nodule is often misdiagnosed as a benign disease such as tuberculoma. It is therefore very important to diagnose BAC with a solitary nodular pattern properly, since this pattern of BAC is usually curable with surgical resection. Methods: We reviewed the clinical and radiologic features of patients with pathologically proven BAC with a solitary nodular pattern from January 1995 to September 1996 at Samsung Medical Center. Results: A total of 11 patients were identified, 6 men and 5 women. Ages ranged from 37 to 69, with a median of 60. Most patients with BAC with a solitary nodular pattern were asymptomatic and were detected by an incidental radiologic abnormality. Chest radiographs showed a poorly defined opacity or nodule, and computed tomography showed consolidation, ground-glass appearance, internal bubble-like lucencies, air bronchogram, the open bronchus sign, a spiculated margin, or a pleural tag in most patients. The initial diagnosis on chest X-ray was pulmonary tuberculosis in 4 patients, a benign nodule in 2 patients, and a malignant nodule in 5 patients. FDG positron emission tomography (FDG-PET) was performed in eight patients and revealed findings suggestive of malignancy in only 3. The pathologic diagnosis was obtained by transbronchial lung biopsy in 1 patient, by CT-guided percutaneous needle aspiration in 2 patients, and by lung biopsy via video-assisted thoracoscopy in 8 patients.
Lobectomy was performed in all patients, and the postoperative pathologic stage was $T_1N_0M_0$ in 8 patients and $T_2N_0M_0$ in 3 patients. Conclusion: Patients with BAC presenting with a solitary nodular pattern were most often asymptomatic and detected incidentally by radiologic abnormality. The chest X-ray showed a poorly defined nodule or opacity, and these findings were often regarded as benign lesions. If a poorly defined nodule or opacity does not disappear on follow-up chest X-ray, computed tomography should be performed. If consolidation, ground-glass appearance, the open bronchus sign, air bronchogram, internal bubble-like lucency, pleural tag, or spiculated margin is found on computed tomography, further diagnostic procedures, including open thoracotomy, should be performed to exclude the possibility of BAC with a solitary nodular pattern.


A Two-Stage Learning Method of CNN and K-means RGB Cluster for Sentiment Classification of Images (이미지 감성분류를 위한 CNN과 K-means RGB Cluster 이-단계 학습 방안)

  • Kim, Jeongtae;Park, Eunbi;Han, Kiwoong;Lee, Junghyun;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.139-156
    • /
    • 2021
  • The biggest reason for using a deep learning model in image classification is that it can consider the relationships between regions by extracting each region's features from the image's overall information. However, a CNN model may be unsuitable for emotional image data that lacks distinct regional features. To address the difficulty of classifying emotion images, researchers propose new CNN-based architectures suited to them each year. Studies on the relationship between color and human emotion have also found that different colors induce different emotions, and some deep learning studies have applied color information to image sentiment classification. Training with the image's color information in addition to the image itself improves emotion classification accuracy over training with the image alone. This study proposes two ways to increase accuracy by adjusting the value produced after the model classifies an image's emotion; both modify the result based on statistics over the picture's colors. In the first method, the most prevalent two-color combinations are found across all training data, the most prevalent two-color combination is found for each test image, and the result values are corrected according to the color combination distribution. The second method weights the result value obtained after classification using expressions based on the log and exponential functions. Emotion6, labeled with six emotions, and ArtPhoto, labeled with eight categories, were used as image data. DenseNet169, MnasNet, ResNet101, ResNet152, and VGG19 architectures were used for the CNN model, and performance was compared before and after applying the two-stage learning to the CNN model.
Inspired by color psychology, which deals with the relationship between colors and emotions, we studied how to improve accuracy by modifying the result values of an image sentiment classification model based on color. Sixteen colors were used: red, orange, yellow, green, blue, indigo, purple, turquoise, pink, magenta, brown, gray, silver, gold, white, and black. Using scikit-learn's clustering, the seven colors most prevalent in each image are identified, and their RGB coordinates are compared with the RGB coordinates of the 16 colors above; that is, each dominant color is converted to the closest of the 16. If combinations of three or more colors were used, too many combinations would occur, scattering the distribution and diluting their influence on the result value; to avoid this, two-color combinations were used and weighted into the model. Before training, the most prevalent color combinations were found for all training images, and the distribution of color combinations for each class was stored in a Python dictionary for use during testing. During testing, the most prevalent two-color combination is found for each test image, its distribution over the training data is checked, and the result is corrected accordingly. We devised several equations to weight the model's result value based on the extracted colors as described above. The data set was randomly split 80:20, with 20% held out as a test set. The remaining 80% was split into five folds for 5-fold cross-validation, training the model five times with different validation sets. Finally, performance was measured on the held-out test set.
Adam was used as the optimizer, and the learning rate was set to 0.01. Training ran for up to 20 epochs, and if the validation loss did not decrease for five consecutive epochs, the experiment was stopped. Early stopping was configured to restore the model with the best validation loss. Classification accuracy was better when the extracted color information was used together with the CNN architecture than when the CNN architecture was used alone.
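The color-mapping step in the abstract can be sketched as follows. This is a simplified illustration, not the authors' code: the palette RGB values are common approximations (the paper's exact 16-color coordinates are an assumption, and only part of the palette is shown), and the dominant colors are taken as given rather than computed with scikit-learn's KMeans.

```python
# Sketch: snap each dominant RGB value (in the paper, a KMeans cluster
# center) to the nearest palette color, then keep the two most frequent
# palette colors as the image's "two-color combination".
from collections import Counter

# Hypothetical palette subset; RGB values are common approximations.
PALETTE = {
    "red": (255, 0, 0), "orange": (255, 165, 0), "yellow": (255, 255, 0),
    "green": (0, 128, 0), "blue": (0, 0, 255), "purple": (128, 0, 128),
    "pink": (255, 192, 203), "brown": (165, 42, 42), "gray": (128, 128, 128),
    "white": (255, 255, 255), "black": (0, 0, 0),
}

def nearest_color(rgb):
    """Snap an RGB triple to the closest palette color (squared Euclidean distance)."""
    return min(PALETTE, key=lambda name: sum((a - b) ** 2
               for a, b in zip(rgb, PALETTE[name])))

def top_two_combination(dominant_rgbs):
    """Map dominant colors to palette names; return the two most frequent names."""
    names = [nearest_color(rgb) for rgb in dominant_rgbs]
    return tuple(name for name, _ in Counter(names).most_common(2))

# Seven hypothetical cluster centers for one image:
centers = [(250, 10, 5), (240, 20, 10), (10, 10, 240), (30, 130, 40),
           (20, 120, 30), (255, 250, 245), (5, 5, 5)]
combo = top_two_combination(centers)  # red and green dominate this image
```

At test time, a combination like this would be looked up in the per-class distribution dictionary built from the training data, and the classifier's result value corrected accordingly.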