• Title/Summary/Keyword: Evaluation of Information Quantity (정보량 평가)


Diffraction Efficiency Change in PVA/AA Photopolymer Films by SeO2 and TiO2 Nano Particle Addition (PVA/AA계 광 고분자 필름의 SeO2 및 TiO2 나노 입자 첨가에 의한 회절 효율 변화)

  • Joe, Ji-Hun;Lee, Ju-Chul;Yoon, Sung;Nam, Seung-Woong;Kim, Dae-Heum
    • Korean Journal of Optics and Photonics / v.21 no.2 / pp.82-88 / 2010
  • Photopolymer is a material for recording three-dimensional holograms containing optical information. It has been found to be a suitable recording material owing to advantages such as high diffraction efficiency (DE), easy processing, and low price. The compositions of PVA, monomer, initiator, and photosensitizer were fixed based on previous experiments, and the amounts of $SeO_2$ and $TiO_2$ were treated as variables to examine the effect of $TiO_2$ on DE. The DE values remained constant as the $TiO_2$ content varied from 0.1 mg to 1.0 mg; in other words, $TiO_2$ does not directly affect DE. Composition-change experiments from $SeO_2$ 0.1 mg / $TiO_2$ 0.9 mg to $SeO_2$ 0.9 mg / $TiO_2$ 0.1 mg showed a maximum DE of 73.75% at $SeO_2$ 0.8 mg and $TiO_2$ 0.2 mg. Regardless of the amount of $TiO_2$, increasing the amount of $SeO_2$ gently increased DE. When nanoparticles were added in excess, transparent films could not be made because decreased solubility caused the particles to separate. Photopolymer films with high DE over an extensive angle range could be made by keeping the $TiO_2$ addition at a minimum and the $SeO_2$ addition at a maximum.

Estimation of Heading Date using Mean Temperature and the Effect of Sowing Date on the Yield of Sweet Sorghum in Jellabuk Province (평균온도를 이용한 전북지역 단수수의 출수기 추정 및 파종시기별 수량 변화)

  • Choi, Young Min;Choi, Kyu-Hwan;Shin, So-Hee;Han, Hyun-Ah;Heo, Byong Soo;Kwon, Suk-Ju
    • KOREAN JOURNAL OF CROP SCIENCE / v.64 no.2 / pp.127-136 / 2019
  • Sweet sorghum (Sorghum bicolor L. Moench) has been evaluated as a useful crop with high environmental adaptability and various uses compared to traditional crops, but its cultivation has not expanded in Korea owing to a lack of related research and information. This study was conducted to estimate the heading date of 'Chorong' sweet sorghum based on climate data from the last 30 years (1989 - 2018) for six regions (Jeonju, Buan, Jeongup, Imsil, Namwon, and Jangsu) in Jeollabuk Province. In addition, we compared growth and quality factors by sowing date (April 10, April 25, May 10, May 25, June 10, June 25, and July 10) in 2018. Days from sowing to heading (DSH) were 107, 96, 83, 70, 59, 64, and 65 days for the respective sowing dates, with an average of 77.7 days. The effective accumulated temperature to heading was $1,120.3^{\circ}C$. The mean annual temperature was highest in Jeonju, followed in descending order by Jeongup, Buan, Namwon, Imsil, and Jangsu. The DSH based on effective accumulated temperature gradually decreased for all sowing dates in the six regions over the last 30 years. DSH in the six regions showed a negative relationship with the mean temperature from sowing to heading, and the DSH predicted from mean temperature ($R^2=0.9987**$) explained the observed DSH in 2017 and 2018 with a probability of 89%. At harvest, fresh stem weight and soluble solids content were higher in the April and July sowings, but sugar yield was higher in the May 10 ($3.4Mg{\cdot}ha^{-1}$) and May 25 ($3.1Mg{\cdot}ha^{-1}$) sowings. Overall, the April and July sowings gave low quality and yield and carried a risk of frost damage; thus, the May sowings were found to be the most effective. In addition, sowing dates must be considered in terms of proper harvest stage, harvesting target (juice or grain), cultivation altitude, and microclimate.
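
The abstract reports an effective accumulated temperature to heading of about $1,120.3^{\circ}C$. A minimal sketch of how such a threshold could be used to estimate days from sowing to heading (DSH) is given below; the 10°C base temperature and the daily temperature series are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: accumulate effective temperature (daily mean above an assumed
# base of 10 C) until the threshold reported in the abstract (1,120.3 C) is
# reached. The base temperature and the temperature series are made up for
# illustration; the paper's exact procedure is not given in the abstract.

def estimate_dsh(daily_mean_temps, threshold=1120.3, base_temp=10.0):
    """Return the day on which the effective accumulated temperature reaches
    `threshold`, or None if it is never reached."""
    accumulated = 0.0
    for day, temp in enumerate(daily_mean_temps, start=1):
        accumulated += max(temp - base_temp, 0.0)
        if accumulated >= threshold:
            return day
    return None

# Hypothetical daily mean temperatures after a spring sowing.
temps = [22.0] * 40 + [25.0] * 40
print(estimate_dsh(temps))  # -> 83 days under these made-up temperatures
```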

A Qualitative Study on the Cause of Low Science Affective Achievement of Elementary, Middle, and High School Students in Korea (초·중·고등학생들의 과학 정의적 성취가 낮은 원인에 대한 질적 연구)

  • Jeong, Eunyoung;Park, Jisun;Lee, Sunghee;Yoon, Hye-Gyoung;Kim, Hyunjung;Kang, Hunsik;Lee, Jaewon;Kim, Yool;Jeong, Jihyeon
    • Journal of The Korean Association For Science Education / v.42 no.3 / pp.325-340 / 2022
  • This study analyzes the causes of the low affective achievement in science of elementary, middle, and high school students in Korea. To this end, a total of 27 students, three to four per grade from the fourth grade of elementary school to the first grade of high school, were interviewed by grade, and a total of nine teachers were interviewed by school level. In the interviews, questions were asked in the five sub-areas of the 'Indicators of Positive Experiences about Science': 'Science Academic Emotion', 'Science-Related Self-Concept', 'Science Learning Motivation', 'Science-Related Career Aspiration', and 'Science-Related Attitude'. The interviews were recorded, transcribed, and categorized. Regarding the causes of low science academic emotion, students reported experiencing negative emotions when experiments were not carried out properly, when scientific theories and terms were difficult, and when recording inquiry results felt burdensome. Students also responded that their science-related self-concept changed negatively because of poor science grades, difficult scientific terms, and a large amount of learning. The reasons given for declining science learning motivation were a lack of awareness of the relationship between science class content and daily life, the difficulty of class content, poor science grades, and a lack of relevance to one's interests or career path. The main reasons for the decline in science-related career aspiration were that students felt their career paths were unrelated to science and that their science performance was poor. Science-related attitudes changed negatively because of difficulties in, or negative feelings about, science classes, and high school students recognized the ambivalent effects of science on society. Based on the interview results, support for experiments and basic science education, improvement of the elementary school supplementary textbook 'Experiment & Observation', development of teaching and learning materials, and provision of science-related career information were proposed.

Development of Summer Leaf Vegetable Crop Energy Model for Rooftop Greenhouse (옥상온실에서의 여름철 엽채류 작물에너지 교환 모델 개발)

  • Cho, Jeong-Hwa;Lee, In-Bok;Lee, Sang-Yeon;Kim, Jun-Gyu;Decano, Cristina;Choi, Young-Bae;Lee, Min-Hyung;Jeong, Hyo-Hyeog;Jeong, Deuk-Young
    • Journal of Bio-Environment Control / v.31 no.3 / pp.246-254 / 2022
  • Domestic facility (protected) agriculture has grown rapidly, becoming modernized and large in scale, and its production has increased markedly relative to its area, accounting for about 60% of total agricultural production. Greenhouses require energy input to maintain an appropriate environment for stable year-round mass production, but their energy load per unit area is large because of low insulation. A rooftop greenhouse, one form of urban agriculture, can use energy that would otherwise be discarded or left unutilized in the building, and optimal greenhouse operation can in turn reduce the building's cooling and heating loads. Efficient operation of rooftop greenhouses requires dynamic energy analysis under various environmental conditions, and because about 40% of the solar energy entering the greenhouse is exchanged through the crops, crop energy exchange must be considered. In particular, the sensible and latent heat loads governed by leaf surface temperature and evapotranspiration, which dominate the energy flow, need to be analyzed. Therefore, an experiment was conducted in a rooftop greenhouse at the Korea Institute of Machinery and Materials to analyze energy exchange according to the crop growth stage. Micro-meteorological and nutrient solution environments were monitored around the crops, and a growth survey was conducted. Finally, regression models of leaf temperature and evapotranspiration according to the growth stage of leafy vegetables were developed, and with these, a dynamic energy model of the rooftop greenhouse that considers heat transfer between the crops and the surrounding air can be analyzed.
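
As a rough illustration of the sensible and latent heat terms the abstract identifies as dominant in crop energy exchange, the sketch below computes a big-leaf sensible heat flux from the leaf-air temperature difference and a latent heat flux from an evapotranspiration rate. All constants and input values are generic assumptions for illustration, not the paper's regression model.

```python
# Minimal sketch of the two crop energy-exchange terms highlighted in the
# abstract: sensible heat from the leaf-air temperature difference and latent
# heat from evapotranspiration (big-leaf form). Parameter values are
# illustrative assumptions.

RHO_AIR = 1.2      # air density, kg m-3
CP_AIR = 1010.0    # specific heat of air, J kg-1 K-1
LAMBDA = 2.45e6    # latent heat of vaporization, J kg-1

def sensible_heat(t_leaf, t_air, r_a=100.0):
    """Sensible heat flux (W m-2); r_a is an assumed aerodynamic resistance (s m-1)."""
    return RHO_AIR * CP_AIR * (t_leaf - t_air) / r_a

def latent_heat(et_rate):
    """Latent heat flux (W m-2) from an evapotranspiration rate (kg m-2 s-1)."""
    return LAMBDA * et_rate

# Hypothetical values: leaf 1.5 K warmer than air, ET of 5e-5 kg m-2 s-1.
print(sensible_heat(27.5, 26.0))  # ~18.2 W m-2
print(latent_heat(5e-5))          # ~122.5 W m-2
```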

Trend Analysis of Vegetation Changes of Korean Fir (Abies koreana Wilson) in Hallasan and Jirisan Using MODIS Imagery (MODIS 시계열 위성영상을 이용한 한라산과 지리산 구상나무 식생 변동 추세 분석)

  • Minki Choo;Cheolhee Yoo;Jungho Im;Dongjin Cho;Yoojin Kang;Hyunkyung Oh;Jongsung Lee
    • Korean Journal of Remote Sensing / v.39 no.3 / pp.325-338 / 2023
  • Korean fir (Abies koreana Wilson) is one of the most important environmental indicator tree species for assessing the impacts of climate change on coniferous forests in the Korean Peninsula. However, owing to the nature of alpine and subalpine regions, it is difficult to conduct regular field surveys of Korean fir, which is mainly distributed above 1,000 m in altitude. Therefore, this study analyzed the vegetation change trend of Korean fir using regularly observed remote sensing data. Specifically, the normalized difference vegetation index (NDVI) from the Moderate Resolution Imaging Spectroradiometer (MODIS), land surface temperature (LST), and precipitation data from the Global Precipitation Measurement (GPM) Integrated Multi-satellitE Retrievals for GPM, covering September 2003 to 2020, were used for Hallasan and Jirisan to analyze vegetation changes and their association with environmental variables. We identified a decrease in NDVI in 2020 compared with 2003 at both sites. Based on the NDVI difference maps, areas of healthy vegetation and areas of high Korean fir mortality were selected. Long-term NDVI time-series analysis showed that both Hallasan and Jirisan had decreases in NDVI in the high-mortality areas (Hallasan: -0.46, Jirisan: -0.43). Furthermore, when the long-term fluctuations of Korean fir vegetation were analyzed using NDVI, LST, and precipitation smoothed with the Hodrick-Prescott filter, the NDVI difference between the healthy and high-mortality sites in Hallasan increased as LST increased and precipitation decreased. This suggests that increasing LST and decreasing precipitation contribute to the decline of Korean fir in Hallasan. In contrast, for Jirisan a long-term declining NDVI trend was confirmed in the Korean fir mortality areas, but no significant correlation was found between the NDVI changes and the environmental variables (LST and precipitation). Further analyses of environmental factors such as soil moisture, insolation, and wind, which previous studies have linked to Korean fir habitats, should be conducted. This study demonstrated the feasibility of using satellite data for long-term monitoring of Korean fir ecosystems and for investigating their changes in conjunction with environmental conditions, and it showed the potential of satellite-based monitoring to improve our understanding of the ecology of Korean fir.
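
A minimal sketch of the Hodrick-Prescott decomposition step mentioned in the abstract is shown below, applied to a synthetic monthly NDVI series. The series and the smoothing parameter are assumptions for illustration; the paper's data and settings are not given in the abstract.

```python
# Minimal sketch: extract a long-term trend from an NDVI time series with the
# Hodrick-Prescott filter. The synthetic series and the lamb value are assumptions.

import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

# Synthetic monthly NDVI from Sep 2003 to Dec 2020 (208 months):
# seasonality plus a slow decline.
months = np.arange(208)
ndvi = 0.7 - 0.0015 * months + 0.1 * np.sin(2 * np.pi * months / 12)

# lamb = 129600 is a conventional choice for monthly series (an assumption here).
cycle, trend = hpfilter(ndvi, lamb=129600)

print(trend[0], trend[-1])  # the trend component shows the gradual NDVI decline
```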

End-use Analysis of Household Water by Metering (가정용수의 용도별 사용 원단위 분석)

  • Kim, Hwa Soo;Lee, Doo Jin;Kim, Ju Whan;Jung, Kwan Soo
    • KSCE Journal of Civil and Environmental Engineering Research / v.28 no.5B / pp.595-601 / 2008
  • The purpose of this study is to investigate the trends and patterns of various household water uses in Korea through metering. Water use was classified into toilet, washbowl, bathing, laundry, kitchen, and miscellaneous components. Flow meters were installed in 140 households selected by sampling across Korea, and data were gathered through a web-based collection system from 2002 to 2006, together with pre-surveyed information such as occupation, income, family size, housing type, age, floor area, water-saving devices, and education. Reliable data were selected by the upper fence method for each observed component, and statistical characteristics were estimated for each residential type to determine liters per capita per day (lpcd). The estimated indoor water use ranged from 150 lpcd to 169 lpcd by housing type, in the order of high-rise apartment, multi-family house, and single house. By consumption, toilet use was the largest (38.5 lpcd), followed by laundry (30.8 lpcd), kitchen (28.4 lpcd), bathing (24.7 lpcd), and washbowl (15.4 lpcd). The results were compared with water use in the U.K. and the U.S.; as lifestyles have become westernized, the pattern of water use in Korea tends to resemble the U.S. pattern. Compared with the 1985 survey by Bradley, total use has increased by about 30 liters with economic development, and the pattern of use has changed somewhat. In the 1985 survey, toilet water took up almost half of total use and laundry water was the lowest at 11%, whereas this study shows that toilet use has fallen to about 39 liters (28% of the total) owing to the spread of water-saving devices and conservation campaigns. It is supposed that the spread of large washing machines has reduced hand laundering and increased laundry water use. The unit water amounts of each household end-use can be applied as design factors for water and wastewater facilities and serve as information for water demand forecasting and conservation policy.
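
The "upper fence method" mentioned above is a standard boxplot-based screen; a minimal sketch (with made-up readings) of discarding observations above Q3 + 1.5 × IQR before computing per-capita statistics is shown below.

```python
# Minimal sketch of upper-fence screening followed by a per-capita-per-day
# summary for one end-use. The readings are made up for illustration.

import numpy as np

def upper_fence_filter(values):
    """Keep observations at or below Q3 + 1.5 * IQR."""
    values = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(values, [25, 75])
    fence = q3 + 1.5 * (q3 - q1)
    return values[values <= fence]

# Hypothetical toilet-use readings (liters per capita per day) with one outlier.
toilet_lpcd = [35, 40, 38, 42, 36, 39, 41, 120]
clean = upper_fence_filter(toilet_lpcd)
print(round(clean.mean(), 1))  # mean lpcd after the outlying reading is removed
```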

A survey on status of quality and risk assessment in dentifrices and mouthwashes (치약제 및 구중청량제의 품질 실태 조사 및 안전성 평가)

  • Jaeeun Kwak;Wonhee Park;Hoejin Ryu;Jin Han;Jeongeun Choe;Sungdan Kim;Insook Hwang;Yongseung Shin
    • Analytical Science and Technology / v.36 no.6 / pp.300-314 / 2023
  • Product quality was investigated by analyzing fluorine content, pH, preservatives, and tar colors in 31 marketed dentifrice products (6 for children) and 15 mouthwash products (2 for children), with the aim of checking whether the products meet the standards and match their labeling and thus providing correct information to consumers. Fluoride was found in 26 dentifrice and 15 mouthwash products at 48 to 1,472 ppm and 85 to 225 ppm, respectively; the detection rates were 83.9 % and 83.3 %, which are similar. Of the 41 fluoride-containing dentifrice and mouthwash products, 40 were at 90.7~109.8 % of the labeled amount and thus met the fluorine content standard of 90.0 to 110.0 %, but one dentifrice was found unsuitable at 36.3 % of the content indicated on the product. The pH of the dentifrices was 5.1~9.4 and that of the mouthwashes was 4.2~6.2, all within the standards. In the simultaneous analysis of six preservatives, benzoic acid was detected most often (15 cases, 30.6 % detection rate), sorbic acid was detected in 9 cases (18.4 %), and none of the four parabens (methyl, ethyl, propyl, and butyl p-hydroxybenzoate) was detected. In the analysis of ten tar colors, six (Red 40, Yellow 4, Yellow 5, Yellow 203, Green 3, and Blue 1) were detected in a total of 9 products (2 dentifrices and 7 mouthwashes), with Blue 1 detected most frequently. The detected fluorine concentrations and the added preservatives and tar colors were consistent with the product markings and were clearly stated on the packaging. The detected preservatives and tar colors were at safe levels, with low risk relative to the acceptable daily intake.
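
The 90.0~110.0 % fluorine content standard described above amounts to a simple ratio check against the labeled amount; a small sketch with hypothetical values follows.

```python
# Minimal sketch of the label-compliance check described in the abstract:
# a product passes when its measured fluoride is 90.0-110.0 % of the labeled
# amount. The values below are hypothetical.

def fluoride_compliance(measured_ppm, labeled_ppm, low=90.0, high=110.0):
    """Return (percent of labeled amount, pass/fail against the 90-110 % standard)."""
    percent = measured_ppm / labeled_ppm * 100.0
    return percent, low <= percent <= high

print(fluoride_compliance(1000, 1000))  # (100.0, True)
print(fluoride_compliance(363, 1000))   # (36.3, False)
```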

Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.123-132 / 2013
  • As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyroscope, ambient light sensor, and proximity sensor, there has been much research on using these sensors to create valuable applications. Human activity recognition is one such application, motivated by welfare applications such as support for the elderly, measurement of calorie consumption, and analysis of lifestyles and exercise patterns. One challenge in using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors is restricted, it is difficult to build a highly accurate activity recognizer, because subtly different activities are hard to distinguish with only limited information. The difficulty becomes especially severe when the number of activity classes to be distinguished is large. In this paper, we show that a fairly accurate classifier can be built to distinguish ten different activities using data from only a single sensor, the smartphone accelerometer. Our approach to this ten-class problem is the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the tree, the set of all classes is split into two subsets by a binary classifier; at each child node, a subset of classes is again split into two smaller subsets by another binary classifier. Continuing in this way, we obtain a binary tree in which each leaf node contains a single class. This tree can be viewed as a nested dichotomy that makes multi-class predictions. Depending on how the classes are split at each node, the resulting tree can differ, and because some classes may be correlated, a particular tree may perform better than others; however, the best tree can hardly be identified without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning and then combining the predictions of the trees during classification. END is generally known to perform well even when the base learner cannot model complex decision boundaries. As the base classifier at each node of a dichotomy, we use another ensemble classifier, the random forest. A random forest is built by repeatedly generating decision trees, each trained on a bootstrap sample with a different random subset of features. By combining bagging with random feature-subset selection, a random forest has more diverse ensemble members than simple bagging. Overall, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that handles a multi-class problem with high accuracy. The ten activity classes distinguished in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'.
The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, minimum, and standard deviation of the vector magnitude within a time window covering the last 2 seconds. For experiments comparing the performance of END with other methods, accelerometer data were collected every 0.1 second for 2 minutes per activity from 5 volunteers. Of the 5,900 ($=5{\times}(60{\times}2-2)/0.1$) data points collected for each activity (the data for the first 2 seconds are discarded because they lack a full time window), 4,700 were used for training and the rest for testing. Although 'Walking Uphill' is often confused with similar activities, END classified all ten activities with a fairly high accuracy of 98.4%, while a decision tree, a k-nearest neighbor classifier, and a one-versus-rest support vector machine achieved 97.6%, 96.5%, and 97.6%, respectively.
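
Below is a compact, illustrative reimplementation of the ensemble of nested dichotomies described above, using random forests as the binary base classifiers. It is a sketch under stated assumptions (scikit-learn is available and window features have already been extracted as described), not the authors' code.

```python
# Illustrative sketch of an ensemble of nested dichotomies (END) with
# random-forest base classifiers. X_* are numpy feature arrays and y_* are
# label arrays; names and parameters are placeholders.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

class NestedDichotomy:
    """One random nested dichotomy: recursively split the class set in two
    and train a binary random forest at each internal node."""

    def __init__(self, classes, rng):
        self.classes = list(classes)
        self.rng = rng
        self.clf = None
        self.left = self.right = None
        if len(self.classes) > 1:
            shuffled = list(self.classes)
            rng.shuffle(shuffled)
            cut = int(rng.integers(1, len(shuffled)))
            self.left = NestedDichotomy(shuffled[:cut], rng)
            self.right = NestedDichotomy(shuffled[cut:], rng)

    def fit(self, X, y):
        if self.left is None:          # leaf: single class, nothing to train
            return self
        mask = np.isin(y, self.classes)
        Xn, yn = X[mask], y[mask]
        side = np.isin(yn, self.left.classes).astype(int)  # 1 = left subset
        self.clf = RandomForestClassifier(n_estimators=50).fit(Xn, side)
        self.left.fit(X, y)
        self.right.fit(X, y)
        return self

    def predict_proba(self, X, classes):
        proba = np.zeros((len(X), len(classes)))
        if self.left is None:
            proba[:, classes.index(self.classes[0])] = 1.0
            return proba
        p_left = self.clf.predict_proba(X)[:, 1]
        proba += p_left[:, None] * self.left.predict_proba(X, classes)
        proba += (1 - p_left)[:, None] * self.right.predict_proba(X, classes)
        return proba

def end_predict(X_train, y_train, X_test, n_dichotomies=10, seed=0):
    """Average the class probabilities of several random nested dichotomies."""
    rng = np.random.default_rng(seed)
    classes = sorted(set(y_train))
    proba = np.zeros((len(X_test), len(classes)))
    for _ in range(n_dichotomies):
        nd = NestedDichotomy(classes, rng).fit(X_train, y_train)
        proba += nd.predict_proba(X_test, classes)
    return np.array(classes)[proba.argmax(axis=1)]

# Usage (with numpy arrays of window features and activity labels):
# predicted = end_predict(X_train, y_train, X_test)
```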

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, growing demand for big data analysis has driven the vigorous development of related technologies and tools. Advances in IT and the spread of smart devices are producing large amounts of data, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis keep increasing, so big data analysis is expected to grow in importance across industries for the foreseeable future. Big data analysis has generally been performed by a small number of experts and delivered to those who request it. However, rising interest in big data analysis has stimulated programming education and the development of many analysis programs; the entry barriers are therefore gradually falling, the technology is spreading, and analysis is increasingly expected to be performed by the demanders themselves. Along with this, interest in various forms of unstructured data, especially text data, continues to increase. New web platforms and techniques produce text data in large volumes and encourage active attempts to analyze it, and the results of text analysis are utilized in many fields. Text mining embraces various theories and techniques for text analysis; among them, topic modeling is one of the most widely used and studied. Topic modeling extracts the major issues from a large set of documents, identifies the documents corresponding to each issue, and provides the identified documents as clusters. It is regarded as very useful because it reflects the semantic elements of the documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection, so the whole collection must be analyzed at once to identify the topic of each document. This makes the analysis take a long time when topic modeling is applied to many documents, and it causes a scalability problem: processing time increases sharply with the number of analysis objects. The problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This makes topic modeling possible on a large number of documents with limited system resources, improves processing speed, and can significantly reduce analysis time and cost because documents can be analyzed in each location without first being combined. Despite these advantages, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified for each document, but global topics cannot. Second, a method for measuring the accuracy of such an approach must be established; that is, assuming the global topics are the ideal answer, the deviation of a local topic from a global topic needs to be measured.
Because of these difficulties, this approach has not been studied as thoroughly as other topic modeling research. In this paper, we propose a topic modeling approach that addresses the two problems above. First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by detecting whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we also confirm that the proposed methodology can provide results similar to topic modeling on the entire collection, and we propose a reasonable method for comparing the results of the two approaches.
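
A minimal sketch of the divide-and-conquer idea described above follows: topic modeling is run on each local set and on a reduced global set of delegate documents, and each local topic is mapped to the most similar global topic by cosine similarity of the topic-word distributions. The toy corpus, the rule for choosing delegate documents, and all parameters are illustrative assumptions, not the paper's configuration.

```python
# Illustrative sketch: local topic models mapped to topics of a reduced global
# set (RGS) by cosine similarity. The corpus and parameters are placeholders.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

def fit_topic_word(docs, vectorizer, n_topics, seed=0):
    """Fit LDA and return row-normalized topic-word distributions."""
    X = vectorizer.transform(docs)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed).fit(X)
    return lda.components_ / lda.components_.sum(axis=1, keepdims=True)

# Toy corpus split into two local sets; here the RGS is built (as a simple
# placeholder rule) from the first document of each local set.
local_sets = [["apple banana fruit", "banana smoothie fruit"],
              ["stock market price", "market trading price"]]
rgs = [docs[0] for docs in local_sets]

vectorizer = CountVectorizer().fit([d for docs in local_sets for d in docs])
global_topics = fit_topic_word(rgs, vectorizer, n_topics=2)

for i, docs in enumerate(local_sets):
    local_topics = fit_topic_word(docs, vectorizer, n_topics=2)
    mapping = cosine_similarity(local_topics, global_topics).argmax(axis=1)
    print(f"local set {i}: local topic -> global topic {mapping}")
```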

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.57-73 / 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure is becoming important. System monitoring data are multidimensional time series, which makes it difficult to account for both the characteristics of multidimensional data and those of time series data. When dealing with multidimensional data, correlations between variables should be considered; existing probability-based, linear, and distance-based methods degrade because of the curse of dimensionality. In addition, time series data are usually preprocessed with sliding windows and time series decomposition for autocorrelation analysis, but these techniques increase the dimensionality of the data and therefore need to be supplemented. Anomaly detection is a long-standing research field: statistical methods and regression analysis were used in the early days, and there are now active studies applying machine learning and artificial neural networks. Statistically based methods are hard to apply when the data are non-homogeneous and do not detect local outliers well. Regression-based methods learn a regression formula based on parametric statistics and detect abnormality by comparing predicted and actual values; their performance drops when the model is not solid or when the data contain noise or outliers, so there is a restriction that the training data should be free of noise and outliers. The autoencoder, an artificial neural network trained to reproduce its input as closely as possible, has many advantages over existing probabilistic and linear models, cluster analysis, and supervised learning: it can be applied to data that do not satisfy a probability distribution or linearity assumption, and it can learn without labeled training data. However, it is limited in identifying local outliers in multidimensional data, and the dimensionality of the data is greatly increased by the characteristics of time series data. In this study, we propose a conditional multimodal autoencoder (CMAE) that enhances anomaly detection performance by considering local outliers and time series characteristics. First, we apply a multimodal autoencoder (MAE) to improve the identification of local outliers in multidimensional data. Multimodal architectures are commonly used to learn different types of inputs, such as voice and image; the different modals share the autoencoder's bottleneck and thereby learn their correlations. In addition, a conditional autoencoder (CAE) is used to learn the characteristics of time series data effectively without increasing the dimensionality of the data. Conditional inputs are usually categorical variables, but in this study time is used as the condition so that periodicity can be learned. The proposed CMAE model was verified by comparison with a unimodal autoencoder (UAE) and a multimodal autoencoder (MAE). The restoration performance for 41 variables was examined for the proposed and comparison models; it differs by variable, and restoration generally works well, with small loss values for the Memory, Disk, and Network modals in all three autoencoder models.
The Process modal showed no significant difference across the three models, while the CPU modal showed excellent performance in CMAE. ROC curves were prepared to evaluate anomaly detection performance for the proposed and comparison models, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, performance ranked in the order CMAE, MAE, and UAE. In particular, the recall of CMAE was 0.9828, confirming that it detects almost all anomalies. Its accuracy improved to 87.12% and its F1-score was 0.8883, which is considered suitable for anomaly detection. From a practical standpoint, the proposed model has an additional advantage beyond the performance improvement: techniques such as time series decomposition and sliding windows require managing extra procedures, and the resulting dimensional increase can slow inference, whereas the proposed model is easy to apply to practical tasks in terms of inference speed and model management.
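
A minimal sketch of a conditional multimodal autoencoder in the spirit of the CMAE described above is given below: one encoder per modal, a shared bottleneck, and a time condition concatenated at the bottleneck and at the decoders. Modal dimensions, layer sizes, and the form of the time encoding are illustrative assumptions, not the paper's architecture.

```python
# Illustrative CMAE sketch in Keras: per-modal encoders, a shared bottleneck,
# and a time condition. Dimensions and layer sizes are placeholders; the sum of
# the modal dimensions (41) matches the variable count mentioned in the abstract.

from tensorflow.keras import layers, Model

modal_dims = {"cpu": 10, "memory": 10, "disk": 11, "network": 10}  # 41 variables
cond_dim = 2  # e.g. sin/cos encoding of time of day (an assumption)

inputs, encoded = {}, []
for name, dim in modal_dims.items():
    x_in = layers.Input(shape=(dim,), name=f"{name}_in")
    inputs[name] = x_in
    encoded.append(layers.Dense(8, activation="relu")(x_in))

cond_in = layers.Input(shape=(cond_dim,), name="time_condition")
bottleneck = layers.Dense(6, activation="relu")(
    layers.Concatenate()(encoded + [cond_in]))

outputs = []
for name, dim in modal_dims.items():
    h = layers.Dense(8, activation="relu")(
        layers.Concatenate()([bottleneck, cond_in]))
    outputs.append(layers.Dense(dim, name=f"{name}_out")(h))

cmae = Model(inputs=list(inputs.values()) + [cond_in], outputs=outputs)
cmae.compile(optimizer="adam", loss="mse")
cmae.summary()

# At inference, the per-modal reconstruction error serves as the anomaly score;
# windows whose error exceeds a chosen threshold are flagged as anomalies.
```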