
Word-of-Mouth Effect for Online Sales of K-Beauty Products: Centered on China SINA Weibo and Meipai (K-Beauty 구전효과가 온라인 매출액에 미치는 영향: 중국 SINA Weibo와 Meipai 중심으로)

  • Liu, Meina;Lim, Gyoo Gun
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.197-218
    • /
    • 2019
  • In addition to economic growth and rising national income, China is experiencing rapid growth in cosmetics consumption. About 67% of the total trade volume of Chinese cosmetics is handled through e-commerce, and K-Beauty products, i.e., Korean cosmetics, are especially popular. According to previous studies, purchases of about 80% of consumer goods such as cosmetics are affected by word-of-mouth information, as consumers search for product information before buying. Consumers mostly acquire cosmetics-related information through comments left by other consumers on SNS such as SINA Weibo and WeChat, and recently they also rely on beauty-related video channels. Most previous online word-of-mouth research focused mainly on the media themselves, such as Facebook, Twitter, and blogs. However, the informational characteristics and forms of expression are also diverse; the typical types are text, picture, and video, and this study focused on these types. We analyzed unstructured data from SINA Weibo, China's representative SNS platform, and Meipai, a video platform, and examined the impact on K-Beauty brand sales by dividing online word-of-mouth information into volume and direction (valence). We analyzed about 330,000 posts from Meipai and 110,000 posts from SINA Weibo, together with the basic properties of the cosmetics. The analysis showed that the amount of online word-of-mouth information has a positive effect on cosmetics sales irrespective of the type of media. However, online videos showed a larger impact than pictures and texts. It is therefore more effective for companies to carry out advertising and promotional activities on video channels in parallel with existing SNS, and it is important to generate frequent exposure regardless of media type. The positiveness of word of mouth in video media was significant, but the positiveness in picture- and text-based media was not. Owing to the nature of the information types, video media carry more information than text-oriented media, and video-related channels are emerging all over the world. In particular, China has launched a number of video platforms in recent years that are popular among consumers in their teens to thirties, so existing SNS users are being dispersed toward video media. We also analyzed how the type of online information affects online cosmetics sales by dividing the products into basic (skincare) cosmetics and color cosmetics. Basic cosmetics sales increased with the number of online videos and were affected by negative information in those videos. For basic cosmetics, effects do not appear immediately as they do for color cosmetics, so information such as changes after use is often transmitted over a period of time; it is therefore important for companies to respond quickly to issues raised in video media. Color cosmetics are strongly influenced by negative word of mouth and are sensitive to picture- and text-oriented media. Pictures and text can be produced more easily than video, so complaints and opinions are generally expressed on SNS quickly and immediately. Finally, we analyzed how product diversity affects sales according to the type of online word-of-mouth information. The analysis confirmed that introducing a variety of products in a video channel has a positive effect on online cosmetics sales. The theoretical significance of this study is that, as in previous studies, it confirms that online sales of K-Beauty cosmetics are influenced by word of mouth. This study, however, focused on media types: both media types had a positive impact on sales, as in previous studies, but video proved to be more informative and influential than text, consistent with media-richness arguments. In addition, whereas existing research on information direction holds that negative information has the larger influence, here the correlation was not significant for basic cosmetics, while the negative effect was large for color cosmetics; for trend-driven products such as color cosmetics, word of mouth acts quickly. In practical terms, the results are expected to help firms design sales and advertising strategies for K-Beauty cosmetics in China by distinguishing basic and color cosmetics, and they underline the importance of video advertising strategies such as YouTube and one-person media. The results of this study can be used as baseline data for big-data analysis aimed at understanding the Chinese cosmetics market and establishing appropriate strategies and marketing applications for related companies.
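
As an illustration of the kind of analysis described above, the following minimal sketch (not the authors' code) regresses log sales on word-of-mouth volume and negative-valence share by media type; all column names and the synthetic data are assumptions.

```python
# A minimal sketch of relating word-of-mouth volume and valence by media type
# to sales with an OLS regression. Column names (video_count, video_neg_ratio,
# text_count, text_neg_ratio) and the data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # e.g., brand-week observations
df = pd.DataFrame({
    "video_count":     rng.poisson(30, n),        # video posts per period
    "video_neg_ratio": rng.uniform(0, 0.3, n),    # share of negative videos
    "text_count":      rng.poisson(120, n),       # text/picture posts per period
    "text_neg_ratio":  rng.uniform(0, 0.3, n),
})
# Synthetic sales: volume helps, negative video word of mouth hurts.
df["log_sales"] = (0.02 * df["video_count"] + 0.005 * df["text_count"]
                   - 1.5 * df["video_neg_ratio"] + rng.normal(0, 0.2, n))

model = smf.ols(
    "log_sales ~ video_count + video_neg_ratio + text_count + text_neg_ratio",
    data=df,
).fit()
print(model.summary())
```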

Statistical Characteristics of East Sea Mesoscale Eddies Detected, Tracked, and Grouped Using Satellite Altimeter Data from 1993 to 2017 (인공위성 고도계 자료(1993-2017년)를 이용하여 탐지‧추적‧분류한 동해 중규모 소용돌이의 통계적 특성)

  • LEE, KYUNGJAE;NAM, SUNGHYUN;KIM, YOUNG-GYU
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.24 no.2
    • /
    • pp.267-281
    • /
    • 2019
  • Energetic mesoscale eddies in the East Sea (ES), associated with strong mesoscale variability that affects circulation and environments, were statistically characterized by analyzing satellite altimeter data collected during 1993-2017 and in-situ data obtained from four cruises conducted between 2015 and 2017. A total of 1,008 mesoscale eddies were detected, tracked, and identified, and then classified into 27 groups characterized by mean lifetime (L, day), amplitude (H, m), radius (R, km), intensity per unit area (EI, cm²/s²/km²), ellipticity (e), eddy kinetic energy (EKE, TJ), available potential energy (APE, TJ), and direction of movement. The center, boundary, and amplitude of the mesoscale eddies identified from the satellite altimeter data were compared with those from the in-situ observations for the four cases, yielding uncertainties of 2-10 km in the center position, 10-20 km in the boundary position, and 0.6-5.9 cm in amplitude. The mean L, H, R, EI, e, EKE, and APE of the ES mesoscale eddies over the whole period are 95 ± 104 days, 3.5 ± 1.5 cm, 39 ± 6 km, 0.023 ± 0.017 cm²/s²/km², 0.72 ± 0.07, 23 ± 21 TJ, and 588 ± 250 TJ, respectively. The ES mesoscale eddies tend to move following the mean surface current rather than propagating westward. The southern groups (south of the subpolar front) have a longer L; larger H and R; higher EKE and APE; and stronger EI than the northern groups, and tend to move a longer distance following surface currents. There are exceptions to the average characteristics, such as the quasi-stationary groups (the Wonsan Warm, Wonsan Cold, Western Japan Basin Warm, and Northern Subpolar Frontal Cold Eddy groups) and short-lived groups with relatively larger H, higher EKE and APE, and stronger EI (the Yamato Coastal Warm, Central Yamato Warm, and Eastern Japan Basin Coastal Warm Eddy groups). Small eddies in the northern ES, which can hardly be resolved using satellite altimetry data alone, were not identified here, and the resulting potential over-estimation of the mean L, H, R, EI, EKE, and APE is discussed. This study suggests that the ES mesoscale eddies 1) include newly identified groups such as the Hokkaido and Yamato Rise Warm Eddies in addition to relatively well-known groups (e.g., the Ulleung Warm and Dok Cold Eddies); 2) have a shorter L; smaller H and R; lower EKE; stronger EI; and higher APE than those of the global ocean, and move following surface currents rather than propagating westward; and 3) show large spatial inhomogeneity among groups.
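
The following is a minimal, illustrative sketch of how eddy kinetic energy can be estimated from a gridded sea-level-anomaly field via geostrophic velocity anomalies, a standard step in altimeter-based eddy studies; the grid, latitude, and SLA field are synthetic assumptions, not the authors' detection and tracking procedure.

```python
# Estimate geostrophic velocity anomalies and eddy kinetic energy (EKE)
# from a gridded sea-level-anomaly (SLA) snapshot.
import numpy as np

g = 9.81                                            # gravity [m/s^2]
lat0 = 38.0                                         # representative East Sea latitude
f = 2 * 7.2921e-5 * np.sin(np.deg2rad(lat0))        # Coriolis parameter [1/s]

# Synthetic SLA snapshot [m] on a ~25 km grid with one Gaussian eddy.
ny, nx = 80, 100
dx = dy = 25e3                                      # grid spacing [m]
y, x = np.mgrid[0:ny, 0:nx]
sla = 0.05 * np.exp(-(((x - 50) * dx) ** 2 + ((y - 40) * dy) ** 2) / (2 * (60e3) ** 2))

# Geostrophic anomalies: u' = -(g/f) dSLA/dy, v' = (g/f) dSLA/dx
dsla_dy, dsla_dx = np.gradient(sla, dy, dx)
u = -(g / f) * dsla_dy
v = (g / f) * dsla_dx

eke = 0.5 * (u ** 2 + v ** 2)                       # EKE per unit mass [m^2/s^2]
print(f"max |u'| = {np.abs(u).max():.2f} m/s, mean EKE = {eke.mean():.5f} m^2/s^2")
```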

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.163-177
    • /
    • 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from the recognition of simple body movements of an individual user to the recognition of low-level and high-level behaviors. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data, whereas physical sensors such as the accelerometer, magnetic field, and gyroscope sensors are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a method for detecting accompanying status with a deep learning model using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) is proposed. Accompanying status is defined, as a refinement of part of user interaction behavior, by whether the user is accompanying an acquaintance at close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation is proposed, as sketched below. First, a data preprocessing method is introduced that consists of time synchronization of the multimodal data from different physical sensors, data normalization, and sequence data generation. We applied nearest-neighbor interpolation to synchronize the timestamps of data collected from different sensors. Normalization was performed for each x, y, and z axis of the sensor data, and sequence data were generated with the sliding-window method. The sequence data then become the input to the CNN, where feature maps representing local dependencies of the original sequence are extracted. The CNN consists of 3 convolutional layers and has no pooling layer, so as to preserve the temporal information of the sequence data. Next, the LSTM recurrent networks receive the feature maps, learn long-term dependencies from them, and extract features. The LSTM recurrent networks consist of two layers, each with 128 cells. Finally, the extracted features are used for classification by a softmax classifier. The loss function of the model is the cross-entropy function, and the weights of the model are randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model is trained with the adaptive moment estimation (ADAM) optimization algorithm with a mini-batch size of 128. We applied dropout to the input of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001 and decayed exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data, and smartphone data were collected for a total of 18 subjects. Using these data, the model classified accompanying and conversation with accuracies of 98.74% and 98.83%, respectively. Both the F1 score and accuracy of the model were higher than those of the majority-vote classifier, support vector machine, and deep recurrent neural network.
In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that enable models trained on the training data to transfer to evaluation data that follows a different distribution. We expect to obtain a model that exhibits robust recognition performance against changes in data not considered in the model learning stage.
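
A minimal Keras sketch of the CNN-LSTM framework described above is given below; the window length, channel count, filter counts, and kernel sizes are assumptions not stated in the abstract, while the layer counts, LSTM cell number, dropout placement, optimizer, batch size, and learning-rate decay follow the description.

```python
# CNN (3 conv layers, no pooling) -> dropout -> two 128-cell LSTM layers ->
# softmax, trained with ADAM (lr=0.001, decayed by 0.99 each epoch).
import tensorflow as tf
from tensorflow.keras import layers, models, initializers, optimizers

WINDOW = 128          # samples per sliding window (assumed)
CHANNELS = 9          # accel + magnetic field + gyro, x/y/z each (assumed layout)
NUM_CLASSES = 2       # accompanying vs. not (conversation uses the same shape)

init = initializers.RandomNormal(mean=0.0, stddev=0.1)

model = models.Sequential([
    layers.Input(shape=(WINDOW, CHANNELS)),
    # Three conv layers without pooling, preserving temporal resolution.
    layers.Conv1D(64, 5, padding="same", activation="relu", kernel_initializer=init),
    layers.Conv1D(64, 5, padding="same", activation="relu", kernel_initializer=init),
    layers.Conv1D(64, 5, padding="same", activation="relu", kernel_initializer=init),
    # Dropout applied to the LSTM input, as described in the abstract.
    layers.Dropout(0.5),
    layers.LSTM(128, return_sequences=True),
    layers.LSTM(128),
    layers.Dense(NUM_CLASSES, activation="softmax", kernel_initializer=init),
])

model.compile(
    optimizer=optimizers.Adam(learning_rate=0.001),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Decay the learning rate by a factor of 0.99 every epoch.
lr_schedule = tf.keras.callbacks.LearningRateScheduler(lambda epoch, lr: lr * 0.99)
# model.fit(X_train, y_train, batch_size=128, epochs=50, callbacks=[lr_schedule])
```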

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.1-19
    • /
    • 2019
  • Artificial intelligence (AI) is one of the main driving forces leading the Fourth Industrial Revolution. Technologies associated with AI have already shown abilities equal to or better than those of people in many fields, including image and speech recognition. Particularly, many efforts have been made to identify current technology trends and analyze development directions, because AI technologies can be utilized in a wide range of fields including medical, financial, manufacturing, service, and education fields. Major platforms that can develop complex AI algorithms for learning, reasoning, and recognition have been opened to the public as open source projects, and as a result, technologies and services that utilize them have increased rapidly; this has been confirmed as one of the major reasons for the fast development of AI technologies. Additionally, the spread of the technology owes much to open source software, developed by major global companies, supporting natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing open source software (OSS) projects associated with AI, which have been developed through the online collaboration of many parties. This study searched and collected a list of major projects related to AI generated from 2000 to July 2018 on Github. The development trends of major technologies were examined in detail by applying text mining techniques to the topic information that indicates the characteristics and technical fields of the collected projects. The results of the analysis showed that the number of software development projects per year was less than 100 until 2013, but it increased to 229 projects in 2014 and 597 projects in 2015. The number of open source projects related to AI then increased rapidly in 2016 (2,559 OSS projects). The number of projects initiated in 2017 was 14,213, almost four times the total number of projects generated from 2009 to 2016 (3,555 projects), and 8,737 projects were initiated from January to July 2018. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. The natural language processing topic remained at the top in all years, implying that such OSS has been developed continuously. Until 2015, the programming languages Python, C++, and Java were among the top ten most frequent topics; after 2016, programming languages other than Python disappeared from the top ten. Instead, platforms supporting the development of AI algorithms, such as TensorFlow and Keras, show high appearance frequency. Additionally, reinforcement learning algorithms and convolutional neural networks, which have been used in various fields, appeared frequently as topics. The topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency. The main difference was that visualization and medical imaging topics were at the top of the degree-centrality list from 2009 to 2012 even though they were not at the top of the frequency list, indicating that OSS was developed in the medical field in order to utilize AI technology. Moreover, although computer vision was in the top 10 of the appearance-frequency list from 2013 to 2015, it was not in the top 10 by degree centrality. From 2016 to 2018, the topics at the top of the degree-centrality list were similar to those at the top of the appearance-frequency list, with only slight changes in the ranks of convolutional neural networks and reinforcement learning. The trend of technology development was examined using the appearance frequency of topics and degree centrality, as sketched below. Machine learning showed the highest frequency and the highest degree centrality in all years. It is noteworthy that, although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both measures have been high. TensorFlow first appeared during the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018 to place it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning did not show an abrupt increase or decrease and had relatively low appearance frequency and degree centrality compared with the above-mentioned topics. Based on these results, it is possible to identify the fields in which AI technologies are actively developed. The results of this study can be used as a baseline dataset for more empirical analysis of future technology trends and their convergence.
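
The two measures used above, topic appearance frequency and degree centrality over a topic co-occurrence network, can be computed as in the following small sketch; the project topic lists are hypothetical stand-ins for the collected Github data.

```python
# Count topic frequency and compute degree centrality on a co-occurrence graph.
from collections import Counter
from itertools import combinations
import networkx as nx

projects = [                                  # each project's topic labels (hypothetical)
    ["machine-learning", "deep-learning", "tensorflow", "python"],
    ["machine-learning", "computer-vision", "python"],
    ["deep-learning", "reinforcement-learning", "tensorflow"],
    ["natural-language-processing", "machine-learning", "python"],
]

# Appearance frequency of topics.
freq = Counter(topic for topics in projects for topic in topics)

# Co-occurrence network: topics are nodes, an edge links topics that
# appear together in at least one project.
G = nx.Graph()
for topics in projects:
    G.add_edges_from(combinations(sorted(set(topics)), 2))

centrality = nx.degree_centrality(G)

print(freq.most_common(5))
print(sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:5])
```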

The Effect of Shading on Pedestrians' Thermal Comfort in the E-W Street (동-서 가로에서 차양이 보행자의 열적 쾌적성에 미치는 영향)

  • Ryu, Nam-Hyong;Lee, Chun-Seok
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.46 no.6
    • /
    • pp.60-74
    • /
    • 2018
  • This study investigated pedestrians' thermal environments on the north sidewalk of an east-west street during a summer heat wave. We carried out detailed measurements with four human-biometeorological stations on Dongjin Street, Jinju, Korea (N 35°10.73′~10.75′, E 128°55.90′~58.00′, elevation 50 m). Two of the stations stood under street trees with hedges, one under a single-row street tree and hedge (One-Tree) and one under a two-row street tree and hedge (Two-Tree); one station stood under a shelter and awning (Shelter); and the other stood in the sun (Sunlit). The measurement spots were instrumented with microclimate monitoring stations that continuously measured the microclimate and radiation from the six cardinal directions at a height of 1.1 m, so as to calculate the Universal Thermal Climate Index (UTCI) from 24 July to 21 August 2018. The radiant temperatures of the sidewalk elements were measured with a reflective sphere and a thermal camera on 29 July 2018. The analysis of nine days of 1-minute human-biometeorological data absorbed by a person in standing position from 10 a.m. to 4 p.m., and of one day's radiant temperatures of sidewalk elements from 1:16 p.m. to 1:35 p.m., showed the following. The shading of the street trees and the shelter mitigated heat stress by lowering the UTCI during mid- and late-summer daytime: One-Tree and Two-Tree lowered the heat stress by 0.4~0.5 and 0.5~0.8 levels, respectively, and Shelter lowered it by 0.3~1.0 levels compared with the Sunlit site. However, the thermal environments in One-Tree, Two-Tree, and Shelter during the heat wave still exposed users to "very strong heat stress", while the Sunlit site exposed users to "very strong heat stress" and "extreme heat stress". The main heat-load temperatures relative to body temperature (37°C) were 7.4°C~21.4°C (pavement), 14.7°C~15.8°C (road), 12.7°C (shelter canopy), 7.0°C (street furniture), and 3.5°C~6.4°C (building facade), and the corresponding heat-load percentages were 34.9%~81.0% (pavement), 9.6%~25.2% (road), 24.8% (shelter canopy), 14.1%~15.4% (building facade), and 5.7% (street furniture). Reducing the radiant temperature of the pavement, road, and building surfaces through shading is the most effective means to achieve outdoor thermal comfort for pedestrians on a sidewalk. Therefore, increasing the projected canopy area and LAI of street trees through minimal training and pruning, and building dense roadside hedges, are essential for pedestrians' thermal comfort. In addition, thermal liners, highly reflective materials, greening, etc. should be introduced to reduce the surface temperature of shelter and awning canopies; retro-reflective materials should be introduced on building facades to control reflected solar radiation; and pavement watering should be introduced more aggressively to reduce the surface temperature of sidewalk pavement.

Development of a Simultaneous Analytical Method for Determination of Insecticide Broflanilide and Its Metabolite Residues in Agricultural Products Using LC-MS/MS (LC-MS/MS를 이용한 농산물 중 살충제 Broflanilide 및 대사물질 동시시험법 개발)

  • Park, Ji-Su;Do, Jung-Ah;Lee, Han Sol;Park, Shin-min;Cho, Sung Min;Kim, Ji-Young;Shin, Hye-Sun;Jang, Dong Eun;Jung, Yong-hyun;Lee, Kangbong
    • Journal of Food Hygiene and Safety
    • /
    • v.34 no.2
    • /
    • pp.124-134
    • /
    • 2019
  • An analytical method was developed for the determination of broflanilide and its metabolites in agricultural products. Sample preparation was conducted using the QuEChERS (Quick, Easy, Cheap, Effective, Rugged and Safe) method, and analysis was performed by LC-MS/MS (liquid chromatography-tandem mass spectrometry). The analytes were extracted with acetonitrile and cleaned up using d-SPE (dispersive solid phase extraction) sorbents such as anhydrous magnesium sulfate, primary secondary amine (PSA), and octadecyl (C18). The limit of detection (LOD) and limit of quantification (LOQ) were 0.004 and 0.01 mg/kg, respectively. The recoveries for broflanilide, DM-8007, and S(PFP-OH)-8007 ranged from 90.7 to 113.7%, 88.2 to 109.7%, and 79.8 to 97.8% at the different fortification levels (LOQ, 10LOQ, 50LOQ), with relative standard deviations (RSD) of less than 8.8%. In the inter-laboratory study, the recoveries for broflanilide, DM-8007, and S(PFP-OH)-8007 ranged from 86.3 to 109.1%, 87.8 to 109.7%, and 78.8 to 102.1%, with RSD values below 21%. All values were consistent with the criteria ranges specified in the Codex guidelines (CAC/GL 40-1993, 2003) and the Food and Drug Safety Evaluation guidelines (2016). Therefore, the proposed analytical method is accurate, effective, and sensitive for broflanilide determination in agricultural commodities.
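
The validation arithmetic referred to above, recovery and relative standard deviation from replicate spiked samples, amounts to the following short sketch; the replicate values are illustrative, not the study's measurements.

```python
# Recovery (%) and relative standard deviation (RSD, %) from spiked replicates.
import numpy as np

spiked = 0.01                                   # fortification level [mg/kg], e.g. the LOQ
measured = np.array([0.0093, 0.0098, 0.0101, 0.0095, 0.0099])  # replicate results (illustrative)

recovery = measured / spiked * 100              # % recovery per replicate
rsd = measured.std(ddof=1) / measured.mean() * 100  # relative standard deviation, %

print(f"mean recovery = {recovery.mean():.1f}%, RSD = {rsd:.1f}%")
```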

Facile [11C]PIB Synthesis Using an On-cartridge Methylation and Purification Showed Higher Specific Activity than Conventional Method Using Loop and High Performance Liquid Chromatography Purification (Loop와 HPLC Purification 방법보다 더 높은 비방사능을 보여주는 카트리지 Methylation과 Purification을 이용한 손쉬운 [11C]PIB 합성)

  • Lee, Yong-Seok;Cho, Yong-Hyun;Lee, Hong-Jae;Lee, Yun-Sang;Jeong, Jae Min
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.22 no.2
    • /
    • pp.67-73
    • /
    • 2018
  • [11C]PIB synthesis has been performed in our lab by loop methylation and HPLC purification. However, this method is time-consuming and requires complicated systems. Thus, we developed an on-cartridge method that simplifies the synthetic procedure and greatly reduces time by removing the HPLC purification step. We compared 6 different cartridges and evaluated the [11C]PIB production yields and specific activities. [11C]MeOTf was synthesized using a TRACERlab FXC Pro and was transferred into the cartridge by blowing with helium gas for 3 min. To remove byproducts and impurities, the cartridges were washed with 20 mL of 30% EtOH in 0.5 M NaH2PO4 solution (pH 5.1) and 10 mL of distilled water. Then, [11C]PIB was eluted with 5 mL of 30% EtOH in 0.5 M NaH2PO4 into a collecting vial containing 10 mL of saline. Among the 6 cartridges, only the tC18 environmental cartridge removed impurities and byproducts from [11C]PIB completely and showed a higher specific activity than the traditional HPLC purification method. This method took only 8~9 min from methylation to formulation. For the tC18 environmental cartridge and the conventional HPLC loop methods, the radiochemical yields were 12.3 ± 2.2% and 13.9 ± 4.4%, respectively, and the molar activities were 420.6 ± 20.4 GBq/μmol (n=3) and 78.7 ± 39.7 GBq/μmol (n=41), respectively. We successfully developed a facile on-cartridge methylation method for [11C]PIB synthesis that makes the procedure simpler and faster and shows a higher molar activity than the HPLC purification method.

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.57-73
    • /
    • 2021
  • Maintenance of ICT infrastructure and prevention of failure through anomaly detection are becoming increasingly important. System monitoring data are multidimensional time series, and when dealing with such data it is difficult to consider both the characteristics of multidimensional data and the characteristics of time series data. With multidimensional data, the correlation between variables must be considered, and existing methods such as probability-based, linear, and distance-based approaches degrade because of the curse of dimensionality. In addition, time series data are usually preprocessed with sliding-window techniques and time series decomposition for autocorrelation analysis; these techniques further increase the dimensionality of the data, so they need to be supplemented. Anomaly detection is an old research field in which statistical methods and regression analysis were used early on; currently, there are active studies applying machine learning and artificial neural network technology. Statistically based methods are difficult to apply when data are non-homogeneous and do not detect local outliers well. The regression-based approach learns a regression formula based on parametric statistics and detects abnormality by comparing the predicted value with the actual value; its performance drops when the model is not solid or the data contain noise or outliers, and it requires training data free of noise and outliers. An autoencoder built on artificial neural networks is trained to produce output as similar as possible to its input. It has many advantages over existing probability and linear models, cluster analysis, and supervised learning: it can be applied to data that do not satisfy a probability distribution or a linearity assumption, and it can be trained in an unsupervised manner without labeled data. However, it still has limitations in identifying local outliers in multidimensional data, and the dimensionality of the data grows greatly because of the characteristics of time series data. In this study, we propose a Conditional Multimodal Autoencoder (CMAE) that improves anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to relieve the limitations in identifying local outliers in multidimensional data. Multimodal architectures are commonly used to learn different types of input, such as voice and image; the different modalities share the bottleneck of the autoencoder and thereby learn their correlation. In addition, a Conditional Autoencoder (CAE) was used to learn the characteristics of the time series effectively without increasing the dimensionality of the data. Conditional inputs usually take categorical variables, but in this study time was used as the condition to learn periodicity. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The reconstruction performance for 41 variables was checked for the proposed model and the comparison models. Reconstruction performance differs across variables; reconstruction works well, with small loss values, for the Memory, Disk, and Network modalities in all three autoencoder models.
The Process modality did not show a significant difference across the three models, while the CPU modality showed excellent performance with CMAE. For the evaluation of anomaly detection performance, ROC curves were prepared for the proposed and comparison models, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, performance followed the order CMAE, MAE, UAE. In particular, the recall was 0.9828 for CMAE, confirming that it detects almost all of the anomalies. The accuracy of the model also improved, to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has an additional advantage beyond the performance improvement: techniques such as time series decomposition and sliding windows add procedures that must be managed, and the dimensional increase they cause can slow inference, whereas the proposed model is easy to apply to practical tasks in terms of inference speed and model management.
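
A minimal sketch of the CMAE idea described above is given below: per-modality encoders feed a shared, time-conditioned bottleneck, and per-modality decoders reconstruct each input; the feature counts, layer sizes, and the sin/cos time encoding are assumptions, not the paper's configuration.

```python
# Conditional multimodal autoencoder sketch: one encoder and decoder per
# modality, a shared bottleneck, and a time condition concatenated at the
# bottleneck. Anomaly score at inference is the per-sample reconstruction error.
from tensorflow.keras import layers, Model, Input

modal_dims = {"cpu": 8, "memory": 8, "disk": 10, "network": 10, "process": 5}  # assumed
cond_dim = 2                     # e.g. sin/cos of time of day as the condition

inputs, encoded_parts = {}, []
for name, dim in modal_dims.items():
    x_in = Input(shape=(dim,), name=f"{name}_in")
    inputs[name] = x_in
    encoded_parts.append(layers.Dense(8, activation="relu", name=f"{name}_enc")(x_in))

cond_in = Input(shape=(cond_dim,), name="time_condition")

# Shared bottleneck over all modalities, conditioned on time.
bottleneck = layers.Dense(16, activation="relu", name="bottleneck")(
    layers.Concatenate()(encoded_parts + [cond_in])
)

outputs = []
for name, dim in modal_dims.items():
    h = layers.Dense(8, activation="relu", name=f"{name}_dec")(bottleneck)
    outputs.append(layers.Dense(dim, name=f"{name}_out")(h))

cmae = Model(inputs=list(inputs.values()) + [cond_in], outputs=outputs)
cmae.compile(optimizer="adam", loss="mse")
cmae.summary()
```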

A Two-Stage Learning Method of CNN and K-means RGB Cluster for Sentiment Classification of Images (이미지 감성분류를 위한 CNN과 K-means RGB Cluster 이-단계 학습 방안)

  • Kim, Jeongtae;Park, Eunbi;Han, Kiwoong;Lee, Junghyun;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.139-156
    • /
    • 2021
  • The biggest reason for using a deep learning model in image classification is that it can consider the relationship between regions by extracting each region's features from the overall information of the image. However, the CNN model may not be suitable for emotional image data that lack distinctive regional features. To address the difficulty of classifying emotion images, researchers propose CNN-based architectures tailored to emotion images every year. Studies on the relationship between color and human emotion have also been conducted, showing that different emotions are induced by different colors, and some deep learning studies have applied color information to image sentiment classification. Using an image's color information in addition to the image itself improves the accuracy of classifying image emotions compared with training the classification model on the image alone. This study proposes two ways to increase accuracy by adjusting the result value after the model classifies an image's emotion; both methods modify the result value based on statistics over the colors of the picture. Before training, the two-color combination most prevalent in each image was found for all training data; during testing, the two-color combination most prevalent in each test image was found, and the result values were corrected according to the distribution of that color combination. This method weights the result value obtained after the model classifies an image's emotion through an expression based on the log function and the exponential function. Emotion6, labeled with six emotions, and ArtPhoto, labeled with eight categories, were used as image data. Densenet169, Mnasnet, Resnet101, Resnet152, and Vgg19 architectures were used as the CNN models, and performance was compared before and after applying the two-stage learning to each CNN model. Inspired by color psychology, which deals with the relationship between colors and emotions, we studied how to improve accuracy by modifying the result values based on color when building a model that classifies an image's sentiment. Sixteen colors were used: red, orange, yellow, green, blue, indigo, purple, turquoise, pink, magenta, brown, gray, silver, gold, white, and black, each carrying its own emotional meaning. Using scikit-learn's K-means clustering, the seven colors that are primarily distributed in an image are identified, and the RGB coordinates of these colors are compared with the RGB coordinates of the 16 reference colors, that is, each is converted to the closest reference color. If three or more colors are combined, too many color combinations occur and the distribution becomes scattered, so the combination has less influence on the result value. To avoid this problem, two-color combinations were used and weighted into the model. The distribution of color combinations for each class, computed on the training data, was stored in a Python dictionary to be used during testing. During testing, the two-color combination most prevalent in each test image is found, its distribution in the training data is checked, and the result is corrected accordingly, as sketched below. Several equations were devised to weight the model's result value based on the extracted colors. The data set was randomly divided 80:20, and the model was verified using the 20% of the data held out as a test set. The remaining 80% was split into five folds for 5-fold cross-validation, and the model was trained five times with different validation sets. Finally, performance was checked using the previously separated test set. Adam was used as the optimizer, and the learning rate was set to 0.01. Training ran for up to 20 epochs, and if the validation loss did not decrease for five epochs, the experiment was stopped; early stopping was set to load the model with the best validation loss. The classification accuracy was better when the extracted color information was used together with the CNN than when only the CNN architecture was used.
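
The color step described above can be sketched as follows: scikit-learn K-means extracts the dominant RGB clusters of an image, each cluster centre is snapped to the nearest reference color, and the two most frequent reference colors form the image's color combination. The reference palette shown is partial and the final re-weighting expression is only indicative, since the paper's exact log/exponential formula is not reproduced here.

```python
# Dominant-color extraction with K-means and a toy re-weighting of CNN scores.
import numpy as np
from sklearn.cluster import KMeans

REFERENCE = {                      # a few of the 16 reference colors (RGB)
    "red": (255, 0, 0), "orange": (255, 165, 0), "yellow": (255, 255, 0),
    "green": (0, 128, 0), "blue": (0, 0, 255), "black": (0, 0, 0),
    "white": (255, 255, 255), "gray": (128, 128, 128),
}
NAMES = list(REFERENCE)
REF_RGB = np.array([REFERENCE[n] for n in NAMES], dtype=float)

def color_combination(image, n_clusters=7):
    """Return the two most dominant reference colors of an RGB image array."""
    pixels = image.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    counts = np.bincount(km.labels_, minlength=n_clusters)
    # Snap each cluster centre to the nearest reference color.
    nearest = [NAMES[np.argmin(((c - REF_RGB) ** 2).sum(axis=1))]
               for c in km.cluster_centers_]
    top = []
    for i in np.argsort(counts)[::-1]:     # two distinct colors, by pixel share
        if nearest[i] not in top:
            top.append(nearest[i])
        if len(top) == 2:
            break
    return tuple(top)

# Example with a synthetic image and an illustrative score adjustment.
img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
combo = color_combination(img)
cnn_probs = np.array([0.2, 0.5, 0.3])          # model output for 3 emotion classes
class_combo_share = np.array([0.1, 0.6, 0.3])  # how often this combo occurs per class (training stats)
adjusted = cnn_probs * (1 + np.log1p(class_combo_share))   # illustrative weighting only
adjusted /= adjusted.sum()
print(combo, adjusted)
```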

Collision of New and Old Control Ideologies, Witnessed through the Moving of Jeongreung (Tomb of Queen Sindeok) and Repair of Gwangtong-gyo (정릉(貞陵) 이장과 광통교(廣通橋) 개수를 통해 본 조선 초기 지배 이데올로기의 대립)

  • Nam, Hohyun
    • Korean Journal of Heritage: History & Science
    • /
    • v.53 no.4
    • /
    • pp.234-249
    • /
    • 2020
  • The dispute over the construction of the tomb of Queen Sindeok (hereinafter "Jeongreung"), wife of King Taejo, in Seoul, and the later moving of that tomb, most clearly demonstrates the collision of new and old ideologies between political powers in the early Joseon period. Jeongreung, the tomb of Queen Sindeok of the Kang clan, was built inside the capital fortress, but in 1409 King Taejong had the tomb moved outside the capital, and the stone relics remaining at the original site were used to build the stone bridge Gwangtong-gyo. According to an unofficial story, King Taejong moved the tomb outside the capital and used its stones to make Gwang-gyo over the Cheonggyecheon so that people would tread on them, in order to curse Lady Kang. In the final years of King Taejo, Lady Kang and King Taejong were in political conflict, but until Taejo became king they had been close political partners. The Sillok records concerning Jeongreung and Gwangtong-gyo in fact state things more plainly: the moving of Jeongreung followed a sangeon (a written statement to the king) of the Uijeongbu (the highest administrative agency of Joseon), which held that having the tomb of a king or queen inside the capital was inappropriate and that, being close to the official quarter for envoys, it had to be moved. The assertion that the repair of Gwangtong-gyo was aimed at degrading Jeongreung therefore does not reflect the factual relationship. This article presents the possibility that the use of stones from Jeongreung to repair Gwangtong-gyo reflected an emerging need for efficient procurement of materials, which accompanied a drastic increase in demand for materials for civil works in and outside the capital. The reasons for constructing Jeongreung within the capital and for later moving it outside can be attributed to the differing ideological backgrounds of King Taejo and King Taejong. King Taejo was the ruler of a Confucian state, having taken the throne through the Yeokseong Revolution, yet he constructed the tomb and the temple Hongcheon-sa in the capital for his wife Queen Sindeok; this can be seen as an attempt, with the power of Buddhism, to rally supporters and gather the force needed to establish the authority of Queen Sindeok. Yi Seong-gye, raised in a family holding the Yuan post of Dorugachi (darughachi) and living as a military man in the border area, would not have had a high level of understanding of Confucian scholarship; rather, he was a man of the old system with its Buddhist tendencies. King Taejong Yi Bang-won, on the other hand, was an elite Confucian student who passed the state examination at the end of the Goryeo era and is known to have held a profound understanding of Neo-Confucianism. In other words, his understanding of the symbolic implications of the capital of a Confucian state would have been the more profound. Although a state system governed by law had been established after the Three Kingdoms period, the principle of burial outside the capital, under which graves were constructed on the outskirts of the capital, had not been upheld without exception. Jeongreung was built inside the capital because of King Taejo's strong personal desire, but to King Taejong, who had been a Confucian scholar before taking the throne, it would not have been seen as desirable. After taking the throne, King Taejong took the initiative in overhauling the capital to clearly realize the Confucian ideology emphasizing yechi ("rule by propriety") in the landscape of the capital, Hanyang. It is reasonable to conclude that the moving of Jeongreung was undertaken against this historical background.