• Title/Summary/Keyword: 시계열 (time series)

Search Results: 3,237

The comparative study of determinants of family policy expenditure : focused on OECD 14 countries (복지국가의 아동·가족 복지 지출 결정요인에 대한 비교연구: OECD 국가를 중심으로)

  • Ryu, Yunkyu;Baek, Seungho
    • Korean Journal of Social Welfare Studies
    • /
    • v.41 no.1
    • /
    • pp.145-173
    • /
    • 2010
  • The purpose of this study is to verify whether the theories explaining the determinants of welfare expenditure also apply to family policy expenditure, and to identify any determinants unique to family policy expenditure. We analyzed data on 14 OECD countries for 1980~2005 by pooled time series analysis. Regarding industrialization theory, the female labor force participation rate has a positive effect on family policy expenditure while the share of children under 15 has a negative effect, suggesting that the demand for family policies comes from female workers rather than from children. Power resource theory applies to the determinants of family policy expenditure as it does to those of welfare expenditure overall. Women's political and economic empowerment has partly positive effects on family policy expenditure, which supports feminist theory. As for institutional theory, we verified the effect of policy legacy but could not find a crowding-out effect. The theoretical implication of this study is the empirical confirmation that theories of welfare expenditure determinants extend to family policy expenditure. As a policy implication, we also suggest the political and institutional foundations for responding effectively to new social risks despite budget constraints.

A Study of Rent Determinants of Small and Medium-Sized Office Buildings in Seoul Using a Dynamic Panel Model: Focusing on CBD and GBD Comparison (동적패널모형을 활용한 서울시 중소형 오피스 빌딩 임대료 결정 요인 연구: CBD(도심권)와 GBD(강남권) 비교를 중심으로)

  • NaRa Kim;JinSeok Yu;Jongjin Kim
    • Land and Housing Review
    • /
    • v.14 no.4
    • /
    • pp.47-62
    • /
    • 2023
  • Using a dynamic panel model, this study investigates rent determinants for small and medium-sized office buildings in Korea's key business districts, the CBD and Gangnam. The results reveal that rents for small and medium-sized office buildings in the CBD and Gangnam are influenced by macroeconomic fluctuations and by building and location characteristics, suggesting a market with the attributes of both a spatial consumer good and an investment good. The investment implications are as follows. First, even where a CBD location is advantageous, the practical limitations of renovating aging small and medium-sized office buildings must be taken into account when investing. Second, parking conditions are a key factor influencing rents in the CBD, so evaluating the parking facilities and improvement potential of small and medium-sized office buildings is essential for investors. Finally, because Gangnam's small and medium-sized office market is highly sensitive to macroeconomic trends, monetary policy shifts should be prioritized as a key factor in investment decisions.

Structure and Variation of Tidal Flat Temperature in Gomso Bay, West Coast of Korea (서해안 곰소만 갯벌 온도의 구조 및 변화)

  • Lee, Sang-Ho;Cho, Yang-Ki;You, Kwang-Woo;Kim, Young-Gon;Choi, Hyun-Yong
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.10 no.1
    • /
    • pp.100-112
    • /
    • 2005
  • Soil temperature was measured from the surface to 40 cm depth at three stations of different elevation in the tidal flat of Gomso Bay, west coast of Korea, for one month in each season of 2004, to examine its thermal structure and variation. Mean temperature in the surface layer was higher in summer and lower in winter than in the lower layer, reflecting the seasonal variation of the vertically propagating temperature structure driven by heating and cooling at the tidal flat surface. The standard deviation of temperature decreased from the surface toward the lower layer. Periodic variations of solar radiation and tide mainly caused the short-term variation of soil temperature, which was also intermittently influenced by precipitation and wind. Time series analysis showed power spectral energy peaks at periods of 24, 12, and 8 hours, with the strongest peak at the 24-hour period. These peaks can be interpreted as temperature waves forced by variations of solar radiation, the diurnal tide, and the interaction of the two, respectively. EOF analysis showed that the first and second modes resolved 96% of the variation of the vertical temperature structure. The first mode was interpreted as heating and cooling from the tidal flat surface, and the second mode as the effect of the phase lag produced by temperature wave propagation in the soil. Cross-spectrum analysis of heat transfer by the 24-hour wave showed that the mean phase difference of the temperature wave increased almost linearly with soil depth. The time lags implied by the phase difference from the surface to 10, 20, and 40 cm were 3.2, 6.5, and 9.8 hours, respectively. The vertical thermal diffusivity of the 24-hour temperature wave was estimated using a one-dimensional thermal diffusion model. The diffusivity averaged over soil depths and seasons was $0.70{\times}10^{-6}m^2/s$ at the middle station and $0.57{\times}10^{-6}m^2/s$ at the lowest station. The depth-averaged diffusivity was large in spring and small in summer, and the seasonal mean diffusivity increased vertically from 2 cm to 10 cm and decreased from 10 cm to 40 cm. Thermal propagation speeds were estimated to be $8.75{\times}10^{-4}cm/s$, $3.8{\times}10^{-4}cm/s$, and $1.7{\times}10^{-4}cm/s$ from 2 cm to 10 cm, 20 cm, and 40 cm, respectively, indicating that the speed decreases with increasing depth.
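
For readers who want to check these figures: in a one-dimensional diffusion model, a periodic temperature wave of angular frequency ω propagates at speed c = sqrt(2κω), so the diffusivity follows as κ = c²/(2ω). A minimal sketch (not the authors' code) using the reported speed from 2 cm to 10 cm:

```python
# Recover the vertical thermal diffusivity kappa from the propagation speed c
# of the 24-hour temperature wave: c = sqrt(2*kappa*omega) => kappa = c**2/(2*omega).
import math

omega = 2 * math.pi / 86400        # angular frequency of the 24-hour wave (rad/s)
c = 8.75e-4 * 1e-2                 # reported speed from 2 cm to 10 cm: 8.75e-4 cm/s -> m/s

kappa = c**2 / (2 * omega)
print(f"kappa = {kappa:.2e} m^2/s")  # ~0.53e-6, consistent with the reported 0.57-0.70e-6
```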

Study on PM10, PM2.5 Reduction Effects and Measurement Method of Vegetation Bio-Filters System in Multi-Use Facility (다중이용시설 내 식생바이오필터 시스템의 PM10, PM2.5 저감효과 및 측정방법에 대한 연구)

  • Kim, Tae-Han;Choi, Boo-Hun
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.48 no.5
    • /
    • pp.80-88
    • /
    • 2020
  • With the issuance of week-long emergency fine dust reduction measures in March 2019, public anxiety about fine dust is growing. To assess the application of air-purifying, plant-based bio-filters to public facilities, this study presents a method for measuring pollutant reduction effects by creating an indoor environment with continuous discharge of particulate pollutants, and conducts basic experiments to verify whether the system improves indoor air quality. In this study, conducted in a lecture room in spring, a background concentration was created using mosquito repellent incense as the pollutant one hour before monitoring. Then, following the schedule, fine dust reduction capacity was monitored by irrigating for two hours and venting air for one hour. PM10, PM2.5, and temperature and humidity sensors were installed two meters in front of the bio-filter, and velocity probes were installed at the center of the three air vents for time-series monitoring. The average face velocity of the three air vents in the bio-filter was 0.38±0.16 m/s. Total air-conditioning volume was calculated as 776.89±320.16 ㎥/h by applying a vent area of 0.29 m×0.65 m after deducting the damper area. With the system in operation, average temperature and relative humidity were maintained at 21.5-22.3℃ and 63.79-73.6%, respectively, satisfying the temperature and humidity ranges reported under various conditions in preceding studies. If the system's air-conditioning function, which raises relative humidity rapidly, is used efficiently, it should be possible to reduce indoor fine dust while maintaining seasonally appropriate relative humidity. The concentration of fine dust increased similarly in all cycles before the bio-filter system was operated. After the system was turned on, in the cycle 1 blast section (C-1, β=-3.83, β=-2.45), particulate matter (PM10) was lowered by up to 28.8%, or 560.3 ㎍/㎥, and fine particulate matter (PM2.5) by up to 28.0%, or 350.0 ㎍/㎥. The concentrations of fine dust (PM10, PM2.5) were then reduced by up to 32.6% (647.0 ㎍/㎥) and 32.4% (401.3 ㎍/㎥), respectively, in the cycle 2 blast section (C-2, β=-5.50, β=-3.30), and by up to 30.8% (732.7 ㎍/㎥) and 31.0% (459.3 ㎍/㎥), respectively, in the cycle 3 blast section (C-3, β=5.48, β=-3.51). By referring to standards and regulations related to installing vegetation bio-filters in public facilities, this study provides a plan for setting up an objective performance evaluation environment. In doing so, it was possible to create monitoring infrastructure more objective than a regular lecture room environment and to secure relatively reliable data.
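
As a plausibility check on the air-volume figure, total volume is face velocity times vent area times the number of vents. A minimal sketch, assuming three identical vents as the abstract implies:

```python
# Reproduce the total air-conditioning volume from the reported measurements:
# total volume = face velocity x vent area x number of vents x 3600 s/h.
face_velocity = 0.38            # m/s, mean face velocity of the three vents
vent_area = 0.29 * 0.65         # m^2 per vent, after deducting the damper area
n_vents = 3

total_m3_per_h = face_velocity * vent_area * n_vents * 3600
print(f"{total_m3_per_h:.2f} m^3/h")  # ~773.6, matching the reported 776.89 +/- 320.16
```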

A Study of 'Emotion Trigger' by Text Mining Techniques (텍스트 마이닝을 이용한 감정 유발 요인 'Emotion Trigger'에 관한 연구)

  • An, Juyoung;Bae, Junghwan;Han, Namgi;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.69-92
    • /
    • 2015
  • The explosion of social media data has led researchers to apply text-mining techniques to big social media data in a more rigorous manner. Even as social media text analysis algorithms have improved, previous approaches retain some limitations. In sentiment analysis of social media written in Korean, there are two typical approaches. One is the linguistic approach using machine learning, which is the most common; some studies have added grammatical factors to the feature sets used to train classification models. The other adopts semantic analysis for sentiment analysis, but it has mainly been applied to English texts. To overcome these limitations, this study applies the Word2Vec algorithm, an extension of neural network algorithms, to capture the richer semantic features that were underestimated in existing sentiment analysis. The results of the Word2Vec algorithm are compared with those of co-occurrence analysis to identify the difference between the two approaches. The comparison shows that the Word2Vec algorithm extracts about three times as many words expressing emotion about each keyword as co-occurrence analysis does. The difference arises from Word2Vec's vectorization of semantic features; the algorithm is thus able to catch hidden related words that traditional analysis has not found. In addition, part-of-speech (POS) tagging for Korean is used to detect adjectives as "emotional words". The emotional words extracted from the text are converted into word vectors by the Word2Vec algorithm to find related words, among which nouns are selected because each may have a causal relationship with the emotional word in the sentence. This process of extracting the trigger factors of an emotional word is named "Emotion Trigger" in this study. As a case study, the datasets were collected by searching on three keywords rich in public emotion and opinion: professor, prosecutor, and doctor. Preliminary data collection was conducted to select secondary keywords for gathering the data used in the actual analysis: Professor (sexual assault, misappropriation of research money, recruitment irregularities, polifessor), Doctor (Shin Hae-chul, Sky Hospital, drinking and plastic surgery, rebate), Prosecutor (lewd behavior, sponsor). The text data comprise about 100,000 documents (Professor: 25,720; Doctor: 35,110; Prosecutor: 43,225) gathered from news, blogs, and Twitter to reflect various levels of public emotion. Gephi (http://gephi.github.io) was used for visualization, and all text processing and analysis programs were written in Java. The contributions of this study are as follows. First, different approaches to sentiment analysis are integrated to overcome the limitations of existing approaches. Second, finding Emotion Triggers can reveal hidden connections to public emotion that existing methods cannot detect. Finally, the approach could be generalized regardless of the type of text data.
The limitation of this study is that it is hard to establish that a word extracted by Emotion Trigger processing has a significant causal relationship with the emotional word in a sentence. Future work will clarify the causal relationship between emotional words and the words extracted by Emotion Trigger by comparing against manually tagged relationships. Furthermore, part of the text data used for Emotion Trigger comes from Twitter, which has a number of distinct features that this study did not address; these features will be considered in further study.
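
A minimal sketch of the Emotion Trigger lookup step, assuming gensim's Word2Vec implementation (the paper's own pipeline was written in Java); the tokenized documents and the emotion word below are hypothetical placeholders:

```python
# Train word vectors on a tokenized Korean corpus, then look up words near an
# emotion (adjective) word; nouns among the neighbors are candidate triggers.
from gensim.models import Word2Vec

tokenized_docs = [
    ["교수", "분노", "연구비", "유용"],    # placeholder tokenized documents;
    ["의사", "불안", "리베이트", "수술"],  # the real input is ~100,000 documents
    # ... from news, blogs, and Twitter, POS-tagged for Korean beforehand
]

model = Word2Vec(tokenized_docs, vector_size=100, window=5, min_count=1, sg=1)

# Nearest neighbors of an emotion word; filter to nouns in the real pipeline.
for word, score in model.wv.most_similar("분노", topn=10):
    print(word, round(score, 3))
```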

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among machine learning algorithms. In particular, CNN (Convolutional Neural Network), known as an effective solution for recognizing and classifying images or voices, has been widely applied to classification and prediction problems. In this study, we investigate how to apply CNN to business problem solving. Specifically, we propose applying CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNN's strength is in interpreting images. Thus, the model proposed in this study adopts CNN as a binary classifier that predicts stock market direction (upward or downward) using time series graphs as inputs. That is, our proposal builds a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements to predict future price movements. Our proposed model, named 'CNN-FG (Convolutional Neural Network using Fluctuation Graph)', consists of five steps. In the first step, it divides the dataset into intervals of 5 days. It then creates time series graphs for the divided dataset in step 2. Each graph is drawn as a $40(pixels){\times}40(pixels)$ image, with each independent variable drawn in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing its color values on the R (red), G (green), and B (blue) scales. In the next step, it splits the graph-image dataset into training and validation sets; we used 80% of the total dataset for training and the remaining 20% for validation. Finally, CNN classifiers are trained on the training images. Regarding the parameters of CNN-FG, we adopted two convolution filters ($5{\times}5{\times}6$ and $5{\times}5{\times}9$) in the convolution layers. In the pooling layer, a $2{\times}2$ max pooling filter was used. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend, the other for a downward trend). Activation functions for the convolution and hidden layers were set to ReLU (Rectified Linear Unit), and that for the output layer to the softmax function. To validate CNN-FG, we applied it to the prediction of KOSPI200 over 2,026 trading days in eight years (2009 to 2016). To match the proportions of the two classes of the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. We then built the training dataset from 80% of the total (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popular in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models.
Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These results imply that converting time series business data into graphs and building CNN-based classification models on those graphs can be effective in terms of prediction accuracy. Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
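
A minimal sketch of the CNN-FG architecture as described (6 and 9 convolution filters of size 5×5, 2×2 max pooling, hidden layers of 900 and 32 nodes, 2-node softmax output), written in Keras as an assumption; the abstract does not name the framework, and the placement of the pooling layers is also an assumption:

```python
# CNN binary classifier over 40x40 RGB fluctuation-graph images,
# predicting upward vs. downward market direction.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(40, 40, 3)),              # 40x40 pixel graph, RGB channels
    layers.Conv2D(6, (5, 5), activation="relu"),  # 5x5x6 convolution filter bank
    layers.MaxPooling2D((2, 2)),                  # 2x2 max pooling
    layers.Conv2D(9, (5, 5), activation="relu"),  # 5x5x9 convolution filter bank
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(900, activation="relu"),         # hidden layer 1: 900 nodes
    layers.Dense(32, activation="relu"),          # hidden layer 2: 32 nodes
    layers.Dense(2, activation="softmax"),        # output: upward vs. downward
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```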

The Effectiveness of Fiscal Policies for R&D Investment (R&D 투자 촉진을 위한 재정지원정책의 효과분석)

  • Song, Jong-Guk;Kim, Hyuk-Joon
    • Journal of Technology Innovation
    • /
    • v.17 no.1
    • /
    • pp.1-48
    • /
    • 2009
  • Recently we have found symptoms, in current statistics on firms' R&D, that fiscal R&D incentives might not be working as intended. First, the growth rate of R&D investment in the private sector has slowed over the recent decade: the average real growth rate of R&D investment was 7.1% from 1998 to 2005, versus 13.9% from 1980 to 1997. Second, the relative share of R&D investment by SMEs decreased from 29% ('01) to 21% ('05), even though the tax credit for SMEs has been more generous than that for large firms. Third, the R&D expenditure of large firms (other than the 3 leading firms) has not increased since the late 1990s. We therefore seek evidence on whether fiscal incentives are effective in increasing firms' R&D investment. For the econometric model we use firm-level unbalanced panel data for 4 years (2002 to 2005) derived from the MOST database compiled from the annual survey, "Report on the Survey of Research and Development in Science and Technology". We use a fixed effect model (Hausman test results accept the fixed effect model at the 1% significance level) and estimate it for all firms, large firms, and SMEs respectively. The analysis yields the following results. For large firms: i) R&D investment responds elastically (1.20) to sales volume; ii) government R&D subsidy induces R&D investment only weakly (0.03); iii) the tax price elasticity is almost unity (-0.99); iv) for large firms, the tax incentive is more effective than the R&D subsidy. For SMEs: i) sales volume increases SME R&D investment only weakly (0.043); ii) government R&D subsidy crowds out SME R&D investment, though not seriously (-0.0079); iii) the tax price elasticity is very inelastic (-0.054). For comparison with other studies, Koga (2003) reports a similar tax price elasticity for Japanese firms (-1.0036), Hall (1992) reports a unit tax price elasticity, and Bloom et al. (2002) report $-0.354{\sim}-0.124$ in the short run. From these results we recommend that government R&D subsidies focus on areas such as basic research and the public sector (defense, energy, health, etc.) that do not overlap private R&D, and that for SMEs the government focus on establishing R&D infrastructure. To promote the tax incentive policy, the tax incentive scheme for large firms' R&D investment should be strengthened; we recommend that the tax credit for large firms be extended to the total volume of R&D investment.
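
A minimal sketch of the fixed-effect specification described above, assuming the Python linearmodels package and a synthetic firm-year panel; the variable names and coefficients are illustrative, not the authors':

```python
# Firm fixed-effect panel regression: log R&D on log sales, log subsidy,
# and log tax price, mirroring the study's specification on synthetic data.
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(0)
firms, years = range(200), range(2002, 2006)   # unbalanced in the real data
idx = pd.MultiIndex.from_product([firms, years], names=["firm", "year"])
df = pd.DataFrame({
    "ln_sales": rng.normal(10, 1, len(idx)),
    "ln_subsidy": rng.normal(1, 0.5, len(idx)),
    "ln_taxprice": rng.normal(0, 0.1, len(idx)),
}, index=idx)
# Synthetic outcome built from the paper's reported large-firm elasticities.
df["ln_rd"] = (1.20 * df["ln_sales"] + 0.03 * df["ln_subsidy"]
               - 0.99 * df["ln_taxprice"] + rng.normal(0, 1, len(idx)))

# EntityEffects absorbs the firm fixed effect (the Hausman test in the paper
# favored fixed effects over random effects at the 1% level).
res = PanelOLS.from_formula(
    "ln_rd ~ 1 + ln_sales + ln_subsidy + ln_taxprice + EntityEffects", data=df
).fit()
print(res.params)
```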


An Intelligent Decision Support System for Selecting Promising Technologies for R&D based on Time-series Patent Analysis (R&D 기술 선정을 위한 시계열 특허 분석 기반 지능형 의사결정지원시스템)

  • Lee, Choongseok;Lee, Suk Joo;Choi, Byounggu
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.79-96
    • /
    • 2012
  • As the pace of competition accelerates dramatically and the complexity of change grows, a variety of research has been conducted to improve firms' short-term performance and enhance their long-term survival. In particular, researchers and practitioners have paid attention to identifying promising technologies that give a firm competitive advantage. Discovering promising technology depends on how a firm evaluates the value of technologies, so many evaluation methods have been proposed. Approaches based on experts' opinions have been widely used to predict the value of technologies. While this approach provides in-depth analysis and ensures the validity of results, it is usually cost- and time-ineffective and is limited to qualitative evaluation. Considerable studies attempt to forecast the value of technology using patent information to overcome the limitations of the expert-opinion approach. Patent-based technology evaluation has served as a valuable approach to technological forecasting because a patent contains a full and practical description of a technology in a uniform structure, and provides information not divulged in any other source. Although the patent-information approach has contributed to our understanding of the prediction of promising technologies, it has limitations: predictions are made from past patent information, and the interpretations of patent analyses are not consistent. To fill this gap, this study proposes a technology forecasting methodology integrating the patent-information approach with an artificial intelligence method. The methodology consists of three modules: evaluation of how promising technologies are, implementation of a technology value prediction model, and recommendation of promising technologies. In the first module, the promise of technologies is evaluated from three different and complementary dimensions: impact, fusion, and diffusion. The impact of a technology refers to its influence on the development and improvement of future technologies, and is clearly associated with its monetary value. The fusion of a technology denotes the extent to which it fuses different technologies, representing the breadth of search underlying it; fusion can be calculated per technology or per patent, so this study measures two fusion indexes. Finally, the diffusion of a technology denotes its degree of applicability across scientific and technological fields; in the same vein, diffusion indexes per technology and per patent are considered. In the second module, the technology value prediction model is implemented using an artificial intelligence method. This study uses the values of the five indexes (impact index, fusion index per technology, fusion index per patent, diffusion index per technology, and diffusion index per patent) at earlier times (e.g., t-n, t-n-1, t-n-2, ${\cdots}$) as input variables; the output variables are the values of the five indexes at time t, which are used for learning. The learning method adopted is the backpropagation algorithm. In the third module, the study recommends final promising technologies based on the analytic hierarchy process (AHP), which provides the relative importance of each index and yields a final promisingness score per technology. The applicability of the proposed methodology is tested using U.S. patents in international patent class G06F (electronic digital data processing) from 2000 to 2008. The results show that the mean absolute error of predictions produced by the proposed methodology is lower than that of multiple regression analysis for the fusion indexes, but slightly higher for the remaining indexes. These unexpected results may be explained, in part, by the small number of patents: since the study uses only patents in class G06F, the sample is relatively small, leading to incomplete learning for the complex artificial-intelligence structure. In addition, the fusion index per technology and the impact index are found to be important criteria for predicting promising technology. This study extends existing knowledge by proposing a new methodology for predicting technology value that integrates patent-information analysis with an artificial neural network. It helps managers who plan technology development and policy makers who implement technology policy by providing a quantitative prediction methodology, and it may help other researchers by providing a deeper understanding of the complex field of technological forecasting.
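
A minimal sketch of the second module's idea, substituting scikit-learn's MLPRegressor (a backpropagation network) and synthetic data for the paper's setup; the lag depth and layer size are illustrative assumptions:

```python
# Backpropagation network mapping lagged values of the five patent indexes
# (impact, fusion per technology/patent, diffusion per technology/patent)
# to their values at time t.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
n_samples, n_indexes, n_lags = 500, 5, 3

# X: the five indexes at t-1..t-3, flattened; y: the five indexes at time t.
X = rng.random((n_samples, n_indexes * n_lags))
y = 0.8 * X[:, :n_indexes] + rng.normal(0, 0.05, (n_samples, n_indexes))

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X, y)                    # multi-output regression over the 5 indexes
print("training R^2:", round(model.score(X, y), 3))
```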

Trend of Medical Care Utilization and Medical Expenditure of the Elderly Cohort (노인 코호트의 의료이용 및 입원진료비 변화 추이 -공.교 의료보험 대상자를 대상으로-)

  • Lee, Kyeong-Soo;Kang, Pock-Soo
    • Journal of Preventive Medicine and Public Health
    • /
    • v.30 no.2 s.57
    • /
    • pp.437-461
    • /
    • 1997
  • Because of significant economic improvement and the development of scientific techniques in Korea over the last 30 years, the life expectancy of the Korean people has lengthened considerably and, as a result, the number of the elderly has markedly increased. This increase in the aged population has brought about many social, economic, and medical problems that were never seriously considered before. This study was conducted to assess trends in medical care utilization and medical expenditure among the elderly. The data for each patient were taken from the computer database maintained for administrative purposes by the Korea Medical Insurance Corporation. The study population comprised 132,670 people aged 60 or older registered with the Korea Medical Insurance Corporation from 1989 to 1993. The subjects were predominantly female (56.3%) and in the 10,000-20,000 Won premium group (50.6%). The findings are summarized as follows. The total number of inpatient cases increased by 40.5% from 1989 through 1993. The average annual increase was 3.7% in inpatient medical expenditure per case, 4.4% in inpatient medical expenditure per day, and 0.08% in length of stay per case over the same period. Cataract was the most prevalent of the 10 most frequent diseases across all ages from 1989 through 1993. The case mix in 1993 compared with 1989 showed that cataract and ischemic cerebral disease increased whereas essential hypertension and pulmonary tuberculosis decreased. The average annual increase in medical expenditure was 3.8% in general hospitals, 6.3% in hospitals, and 2.4% in clinics. From 1989 through 1993, expenditures on high-cost patients accounted for about 14% to 20% of all inpatient expenditures, while such patients represented less than 2.5% of the elderly population. Time series analysis indicated that total medical expenditure and doctors' fees for inpatients will progressively increase whereas inpatient drug expenditure will decrease, with no change in length of stay. Based on these results, the factors increasing medical cost and utilization should be identified, and methods of cost containment for elderly health care should be developed systematically.


A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.57-73
    • /
    • 2021
  • Maintenance, and the prevention of failure, through anomaly detection on ICT infrastructure is becoming important. System monitoring data is multidimensional time series data, and handling it requires considering both the characteristics of multidimensional data and those of time series data. For multidimensional data, correlations between variables should be considered; existing methods based on probability, linear models, distance, and so on degrade due to limitations known as the curse of dimensionality. In addition, time series data is typically preprocessed with sliding-window techniques and time series decomposition for autocorrelation analysis; these techniques increase the dimensionality of the data, so they need to be supplemented. Anomaly detection is an old research field: statistical methods and regression analysis were used in the early days, and there are now active studies applying machine learning and artificial neural network technology. Statistically based methods are difficult to apply when data is non-homogeneous, and they do not detect local outliers well. The regression approach learns a regression formula based on parametric statistics and detects abnormality by comparing predicted and actual values; its performance drops when the model is not solid or when the data contain noise or outliers, and it requires training data free of noise and outliers. An autoencoder using artificial neural networks is trained to produce output as similar as possible to its input. It has many advantages over existing probabilistic and linear models, cluster analysis, and supervised learning: it can be applied to data that does not satisfy a probability distribution or linearity assumption, and it can learn without labeled training data. However, it is limited in identifying local outliers in multidimensional data, and the dimensionality of the data is greatly increased by the preprocessing that time series data requires. In this study, we propose a CMAE (Conditional Multimodal Autoencoder) that enhances anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to improve the identification of local outliers in multidimensional data. Multimodal models are commonly used to learn different types of inputs, such as voice and image; the different modals share the autoencoder's bottleneck and learn correlations between one another. In addition, a CAE (Conditional Autoencoder) was used to learn the characteristics of time series data effectively without increasing the dimensionality of the data. Conditional inputs are generally category variables, but in this study time was used as the condition so that periodicity could be learned. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The restoration performance of each autoencoder on 41 variables was confirmed for the proposed and comparison models. Restoration performance differs by variable; restoration works well, with small loss values, for the Memory, Disk, and Network modals in all three autoencoder models.
The Process modal did not show a significant difference across the three models, and the CPU modal showed excellent performance in CMAE. ROC curves were prepared to evaluate anomaly detection performance in the proposed and comparison models, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, performance ranked in the order CMAE, MAE, UAE. In particular, recall was 0.9828 for CMAE, confirming that it detects almost all of the anomalies. The model's accuracy also improved, to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has an additional advantage beyond performance improvement: techniques such as time series decomposition and sliding windows add procedures to manage, and their dimensional increase can slow computation at inference time. The proposed model is easy to apply to practical tasks in terms of inference speed and model management.
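
A minimal sketch of the CMAE idea in Keras (an assumption; the abstract does not name its framework): per-modal encoders share one bottleneck, and a time encoding is concatenated into both encoder and decoder as the conditional input. The layer sizes and the split of the 41 variables across modals are illustrative:

```python
# Conditional multimodal autoencoder: one encoder branch per modal, a shared
# bottleneck, and a time condition injected on both sides of the bottleneck.
import tensorflow as tf
from tensorflow.keras import layers, Model

modal_dims = {"cpu": 12, "memory": 10, "disk": 9, "network": 10}  # 41 variables total
time_dim = 2   # e.g., sin/cos encoding of time of day (the conditional input)

modal_inputs, encoded = [], []
for name, dim in modal_dims.items():
    x_in = layers.Input(shape=(dim,), name=f"{name}_in")
    modal_inputs.append(x_in)
    encoded.append(layers.Dense(8, activation="relu")(x_in))

t_in = layers.Input(shape=(time_dim,), name="time_in")

# Shared bottleneck over all modals, conditioned on time.
z = layers.Dense(8, activation="relu")(layers.Concatenate()(encoded + [t_in]))

# One decoder branch per modal, again conditioned on time.
outputs = []
for name, dim in modal_dims.items():
    h = layers.Dense(8, activation="relu")(layers.Concatenate()([z, t_in]))
    outputs.append(layers.Dense(dim, name=f"{name}_out")(h))

cmae = Model(inputs=modal_inputs + [t_in], outputs=outputs)
cmae.compile(optimizer="adam", loss="mse")
# At inference, the anomaly score is the reconstruction error per modal.
```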