
Usefulness of MRI 3D Image Reconstruction Techniques for the Diagnosis and Treatment of Femoral Acetabular Impingement Syndrome(Cam type) (대퇴 골두 충돌 증후군(Cam type)의 진단과 치료를 위한 자기공명 3D 영상 재구성 기법의 유용성)

  • Kwak, Yeong-Gon;Kim, Chong-Yeal;Cho, Yeong-Gi
    • The Journal of the Korea Contents Association / v.15 no.11 / pp.313-321 / 2015
  • The aim was to minimize CT examinations in the diagnosis and surgical planning of hip femoroacetabular impingement (FAI), and to evaluate whether additional MRI 3D images can replace the hip clock-face images when performing hip FAI MRI. This study analyzed hip MRI and 3D hip CT images of 31 patients at this hospital. To evaluate the images, one orthopedic surgeon and one radiology specialist reconstructed a clock face on both MR and CT, with 12 o'clock superior, 3 o'clock at the anterior labrum, 9 o'clock on the opposite side, and 6 o'clock centered on the transverse ligament of the hip joint. Using a 5-point Likert scale (independent t-test, p < 0.005), the study then evaluated (A) the retinacular vessel and (B) the head-neck junction at 11 o'clock; (A) the epiphyseal line and (B) the cam lesion at 12 o'clock; and the cam lesion and posterior cam lesion at 1, 2, 3, and 4 o'clock. Inter-observer reliability was verified with Cohen's weighted kappa. In the qualitative Likert-scale evaluation, the 11 o'clock retinacular vessel averaged 3.69 ± 1.0 on MR versus 2.8 ± 0.78 on CT, while the head-neck junction showed no difference between the two observers (p = 0.416). The 12 o'clock epiphyseal line averaged 3.54 ± 1.00 on MR versus 4.5 ± 0.62 on CT (p < 0.001), while the cam lesion showed no difference between the two observers (p = 0.532). The cam lesion and posterior cam lesion at 1, 2, 3, and 4 o'clock were not statistically significant (p = 0.656, p = 0.658). In the weighted kappa analysis, the 11 o'clock retinacular vessel on CT showed the lowest agreement (κ = 0.663); agreement on all other items was very high, and the two observers showed high reliability.
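
For readers who want to reproduce this style of analysis, the sketch below shows the two statistics the abstract relies on, an independent t-test on Likert scores and Cohen's weighted kappa for inter-observer agreement, using scipy and scikit-learn. The rating arrays are invented placeholders, not the study's data.

```python
# Independent t-test on Likert scores and Cohen's weighted kappa for
# inter-observer agreement. The scores below are made-up placeholders.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.metrics import cohen_kappa_score

mr_scores = np.array([4, 3, 5, 4, 3, 4, 5, 3])  # Likert ratings, MR images
ct_scores = np.array([3, 2, 3, 3, 2, 3, 4, 2])  # Likert ratings, CT images

t_stat, p_value = ttest_ind(mr_scores, ct_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Agreement between two observers rating the same images (1-5 Likert).
observer_a = [4, 3, 5, 4, 3, 4, 5, 3]
observer_b = [4, 3, 4, 4, 3, 5, 5, 3]
kappa = cohen_kappa_score(observer_a, observer_b, weights="linear")
print(f"weighted kappa = {kappa:.3f}")
```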

Barrier Techniques for Spinal Cord Protection from Thermal Injury in Polymethylmethacrylate Reconstruction of Vertebral Body : Experimental and Theoretical Analyses (Polymethylmethacrylate를 이용한 척추체 재건술에서 척수의 열 손상을 방지하기 위한 방어벽 기법 : 실험적 및 이론적 분석)

  • Park, Choon Keun;Ji, Chul;Hwang, Jang Hoe;Kwun, Sung Oh;Sung, Jae Hoon;Choi, Seung Jin;Lee, Sang Won;Park, Sung Chan;Cho, Kyeung Suok;Park, Chun Kun;Yuan, Hansen;Kang, Joon Ki
    • Journal of Korean Neurosurgical Society / v.30 no.3 / pp.272-277 / 2001
  • Objective : Polymethylmethacrylate (PMMA) is often used to reconstruct the spine after total corpectomy, but the exothermic curing of liquid PMMA poses a risk of thermal injury to the spinal cord. The purposes of this study are to analyze the heat-blocking effect of a pre-polymerized PMMA sheet in a corpectomy model and to establish the minimal sheet thickness needed to protect the spinal cord from thermal injury during PMMA cementation of the vertebral body. Materials & Methods : An experimental fixture was fabricated with dimensions similar to those of a T12 corpectomy defect. Sixty milliliters of liquid PMMA were poured into the fixture, and temperature recordings were obtained at the center of the curing PMMA mass and on the undersurface (representing the spinal cord surface) of a pre-polymerized PMMA sheet of variable thickness (group 1 : 0 mm, group 2 : 5 mm, group 3 : 8 mm). Six replicates were tested for each barrier thickness. Results : Consistent temperatures (106.8 ± 3.9 °C) at the center of the curing PMMA mass across the eighteen experiments confirmed the reproducibility of the fixture. Peak temperatures on the spinal cord surface were 47.3 °C in group 2 and 43.3 °C in group 3, compared with 60.0 °C in group 1 (p < 0.00005), so the pre-polymerized PMMA sheet provided statistically significant protection from heat transfer. The difference in peak temperature between theoretical and experimental values was less than 1%, while the predicted times were within 35% of the experimental values. The theoretical model indicates that a 10 mm PMMA barrier should protect the spinal cord from temperatures greater than 39 °C (the threshold for thermal injury of the spinal cord). Conclusion : These results suggest that a 10 mm pre-polymerized PMMA sheet may protect the spinal cord from thermal injury during PMMA reconstruction of the vertebral body.
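
As a rough illustration of the barrier heat-transfer question the paper models, here is a minimal 1D transient-conduction toy in Python. It is not the authors' theoretical model: the diffusivity is a textbook approximation for PMMA, the boundary conditions are simplified (hot face held at the measured peak, cord-side face insulated, which is conservative), and only the 106.8 °C peak cement temperature is taken from the abstract.

```python
# Rough 1D explicit finite-difference toy: heat crossing a 10 mm
# pre-polymerized PMMA barrier. Illustrative only, not the paper's model.
import numpy as np

alpha = 1.1e-7              # thermal diffusivity of PMMA, m^2/s (approx.)
thickness = 0.010           # 10 mm barrier, the thickness recommended above
n = 50
dx = thickness / (n - 1)
dt = 0.4 * dx ** 2 / alpha  # explicit scheme, stable for factor < 0.5

T = np.full(n, 37.0)        # slab starts at body temperature, deg C
T[0] = 106.8                # hot face: peak temperature of the curing mass

for _ in range(int(600 / dt)):   # simulate 10 minutes of exposure
    T[1:-1] += alpha * dt / dx ** 2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = T[-2]                # insulated cord-side face (conservative)

print(f"cord-side face after 10 min: {T[-1]:.1f} deg C")
```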


A Control Method for designing Object Interactions in 3D Game (3차원 게임에서 객체들의 상호 작용을 디자인하기 위한 제어 기법)

  • 김기현;김상욱
    • Journal of KIISE:Computing Practices and Letters / v.9 no.3 / pp.322-331 / 2003
  • As the complexity of a 3D game increases with the various elements of the game scenario, controlling the interrelations of the game objects becomes a problem. A game system therefore needs to coordinate the responses of the game objects and to control the animated behaviors of the objects in terms of the game scenario. To produce realistic game simulations, a system has to include a structure for designing the interactions among the game objects. This paper presents a method for designing a dynamic control mechanism for the interaction of the game objects in the game scenario. For this, we suggest a game agent system as a framework based on intelligent agents that can make decisions using specific rules. The game agent system is used to manage environment data, to simulate the game objects, to control interactions among game objects, and to support a visual authoring interface that can define the various interrelations of the game objects. These techniques can handle the autonomy level of the game objects, the associated collision avoidance method, and so on; they also enable coherent decision making by the game objects when the scene changes. The rule-based behavior control was designed to guide the simulation of the game objects, with the rules pre-defined by the user through the visual interface for designing their interactions. The Agent State Decision Network, which is composed of the visual elements, passes information and infers the current state of the game objects. All of these methods can monitor and check changes in the motion states of game objects in real time. Finally, we present a validation of the control method together with a simple case-study example.
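
The rule-based state control described above can be pictured with a small sketch: a table of (state, event) → next-state rules drives each agent's behavior. Everything below, the states, events, and rules, is invented for illustration and is not the paper's Agent State Decision Network.

```python
# Illustrative rule-based behavior control for game objects: pre-defined
# rules map (current state, observed event) to the agent's next state.
from dataclasses import dataclass, field

@dataclass
class GameAgent:
    name: str
    state: str = "idle"
    rules: dict = field(default_factory=lambda: {
        ("idle", "player_near"): "alert",
        ("alert", "player_attacks"): "flee",
        ("alert", "player_leaves"): "idle",
        ("flee", "reached_cover"): "idle",
    })

    def observe(self, event: str) -> None:
        """Infer the next state from the pre-defined interaction rules."""
        next_state = self.rules.get((self.state, event))
        if next_state is not None:
            print(f"{self.name}: {self.state} -> {next_state} on '{event}'")
            self.state = next_state

npc = GameAgent("guard")
for event in ["player_near", "player_attacks", "reached_cover"]:
    npc.observe(event)
```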

Optimization of Support Vector Machines for Financial Forecasting (재무예측을 위한 Support Vector Machine의 최적화)

  • Kim, Kyoung-Jae;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.241-254 / 2011
  • Financial time-series forecasting is one of the most important issues because it is essential to the risk management of financial institutions. Researchers have therefore tried to forecast financial time series using various data mining techniques such as regression, artificial neural networks, decision trees, and k-nearest neighbor. Recently, support vector machines (SVMs) have been popular in this research area because they do not require huge training data and have a low risk of overfitting. However, a user must determine several design factors heuristically in order to use an SVM: the selection of an appropriate kernel function and its parameters and proper feature subset selection are the major ones. Beyond these factors, proper selection of an instance subset may also improve the forecasting performance of an SVM by eliminating irrelevant and distorting training instances. Nonetheless, few studies have applied instance selection to SVMs, especially in the domain of stock market prediction. Instance selection tries to choose proper instance subsets from the original training data; it may be considered a method of knowledge refinement that maintains the instance base. This study proposes a novel instance selection algorithm for SVMs. The proposed technique uses a genetic algorithm (GA) to optimize the instance selection process and the parameters simultaneously; we call this model ISVM (SVM with Instance Selection). Experiments on stock market data are implemented using ISVM. The GA searches for optimal or near-optimal values of the kernel parameters and the relevant instances for the SVM, so the chromosomes carry two sets of codes: one for the kernel parameters and one for instance selection. For the controlling parameters of the GA search, the population size is set at 50 organisms, the crossover rate at 0.7, and the mutation rate at 0.1; as the stopping condition, 50 generations are permitted. The application data consist of technical indicators and the direction of change in the daily Korea stock price index (KOSPI), with 2218 trading days in total. We separate the whole data set into training, test, and hold-out subsets of 1056, 581, and 581 samples respectively. This study compares ISVM to several comparative models including logistic regression (Logit), backpropagation neural networks (ANN), nearest neighbor (1-NN), conventional SVM (SVM), and SVM with optimized parameters (PSVM); in particular, PSVM uses kernel parameters optimized by the genetic algorithm. The experimental results show that ISVM outperforms 1-NN by 15.32%, ANN by 6.89%, Logit and SVM by 5.34%, and PSVM by 4.82% on the hold-out data, while using only 556 of the 1056 original training instances to produce the result. In addition, the two-sample test for proportions is used to examine whether ISVM significantly outperforms the other comparative models. The results indicate that ISVM outperforms ANN and 1-NN at the 1% statistical significance level and performs better than Logit, SVM, and PSVM at the 5% statistical significance level.
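
A compact sketch of the ISVM idea follows: each chromosome concatenates one selection bit per training instance with two real-coded kernel genes, and a simple GA evolves both together. It uses synthetic data, scikit-learn's SVC, and simplified GA operators (per-bit mutation, 20 generations instead of 50), so it illustrates the mechanism rather than reproducing the study.

```python
# GA-driven simultaneous instance selection and kernel parameter search
# for an SVM, in the spirit of ISVM. Synthetic data, simplified operators.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, y_tr, X_te, y_te = X[:300], y[:300], X[300:], y[300:]
N = len(X_tr)

def fitness(chrom):
    """Validation accuracy of an SVM trained on the selected instances."""
    mask = chrom[:N].astype(bool)
    if mask.sum() < 10:                      # degenerate selection
        return 0.0
    C = 10 ** (chrom[N] * 4 - 2)             # decode C into [0.01, 100]
    gamma = 10 ** (chrom[N + 1] * 4 - 3)     # decode gamma into [0.001, 10]
    clf = SVC(C=C, gamma=gamma).fit(X_tr[mask], y_tr[mask])
    return clf.score(X_te, y_te)

# Chromosome = N instance-selection bits + 2 real-coded kernel genes.
pop = np.hstack([rng.integers(0, 2, (50, N)).astype(float),
                 rng.random((50, 2))])       # population size 50

for gen in range(20):
    scores = np.array([fitness(c) for c in pop])
    pop = pop[np.argsort(scores)[::-1]]      # best chromosomes first
    children = []
    while len(children) < 25:                # replace the worse half
        a, b = pop[rng.integers(0, 25, 2)]   # parents from the better half
        child = a.copy()
        if rng.random() < 0.7:               # crossover rate 0.7 (as above)
            cut = int(rng.integers(1, N + 2))
            child[cut:] = b[cut:]
        flips = rng.random(N) < 0.01         # light per-bit mutation
        child[:N][flips] = 1.0 - child[:N][flips]
        children.append(child)
    pop[25:] = np.array(children)

best = max(pop, key=fitness)
print(f"holdout accuracy {fitness(best):.3f}, "
      f"instances kept {int(best[:N].sum())}/{N}")
```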

An Intelligent Intrusion Detection Model Based on Support Vector Machines and the Classification Threshold Optimization for Considering the Asymmetric Error Cost (비대칭 오류비용을 고려한 분류기준값 최적화와 SVM에 기반한 지능형 침입탐지모형)

  • Lee, Hyeon-Uk;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.157-173 / 2011
  • As Internet use has exploded recently, malicious attacks and hacking against networked systems occur frequently, and these intrusions can cause fatal damage in government agencies, public offices, and companies operating various systems. For such reasons, there is growing interest in and demand for intrusion detection systems (IDS): security systems for detecting, identifying, and responding appropriately to unauthorized or abnormal activities. The intrusion detection models applied in conventional IDS are generally designed by modeling experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. These models perform well under normal situations but show poor performance when they meet a new or unknown pattern of network attack. For this reason, several recent studies have tried to adopt various artificial intelligence techniques that can proactively respond to unknown threats. Artificial neural networks (ANNs) in particular have been popular in prior studies because of their superior prediction accuracy. However, ANNs have intrinsic limitations such as the risk of overfitting, the requirement of a large sample size, and their black-box nature, which obscures the prediction process. As a result, the most recent studies on IDS have started to adopt the support vector machine (SVM), a classification technique that is more stable and powerful than ANNs and is known for relatively high predictive power and generalization capability. Against this background, this study proposes a novel intelligent intrusion detection model that uses an SVM as the classification model in order to improve the predictive ability of IDS; our model is also designed to consider asymmetric error costs by optimizing the classification threshold. Generally, there are two common forms of error in intrusion detection. The first is the False-Positive Error (FPE), in which normal activity is misjudged as an intrusion, which may trigger unnecessary countermeasures. The second is the False-Negative Error (FNE), which misjudges malware as a normal program. Compared to FPE, FNE is more fatal, so when considering the total cost of misclassification in IDS, it is more reasonable to assign heavier weights to FNE than to FPE. We therefore designed our intrusion detection model to optimize the classification threshold so as to minimize the total misclassification cost. In this case a conventional SVM cannot be applied because it is designed to generate discrete output (i.e., a class); to resolve this problem, we used the revised SVM technique proposed by Platt (2000), which is able to generate probability estimates. To validate the practical applicability of our model, we applied it to a real-world dataset for network intrusion detection. The experimental dataset was collected from the IDS sensor of an official institution in Korea from January to June 2010; from 15,000 log entries in total, we selected 1,000 samples by random sampling. In addition, the SVM model was compared with logistic regression (LOGIT), decision trees (DT), and an ANN to confirm the superiority of the proposed model. LOGIT and DT were run using PASW Statistics v18.0, and the ANN using Neuroshell 4.0; for the SVM, LIBSVM v2.90, a freeware package for training SVM classifiers, was used.
Empirical results showed that our proposed model based on SVM outperformed all the other comparative models in detecting network intrusions from the accuracy perspective. They also showed that our model reduced the total misclassification cost compared to the ANN-based intrusion detection model. As a result, it is expected that the intrusion detection model proposed in this paper would not only enhance the performance of IDS, but also lead to better management of FNE.
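
The threshold-optimization step can be sketched as follows: scikit-learn's SVC with probability=True applies Platt scaling, and the resulting intrusion probabilities are thresholded to minimize an asymmetric cost that penalizes false negatives more heavily. The data and the 5:1 cost ratio are illustrative assumptions, not the paper's.

```python
# Asymmetric-cost threshold optimization over Platt-scaled SVM output.
# Data and cost weights are illustrative, not the paper's.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=1)
X_tr, y_tr, X_val, y_val = X[:700], y[:700], X[700:], y[700:]

clf = SVC(probability=True, random_state=1).fit(X_tr, y_tr)  # Platt scaling
p_attack = clf.predict_proba(X_val)[:, 1]

COST_FN, COST_FP = 5.0, 1.0   # assumed: missed intrusions cost 5x

best_t, best_cost = 0.5, np.inf
for t in np.linspace(0.05, 0.95, 19):
    pred = (p_attack >= t).astype(int)
    fn = np.sum((pred == 0) & (y_val == 1))   # missed intrusions
    fp = np.sum((pred == 1) & (y_val == 0))   # false alarms
    cost = COST_FN * fn + COST_FP * fp
    if cost < best_cost:
        best_t, best_cost = t, cost

print(f"optimal threshold: {best_t:.2f}, total cost: {best_cost:.0f}")
```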

The Adaptive Personalization Method According to Users Purchasing Index : Application to Beverage Purchasing Predictions (고객별 구매빈도에 동적으로 적응하는 개인화 시스템 : 음료수 구매 예측에의 적용)

  • Park, Yoon-Joo
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.95-108 / 2011
  • This is a study of a personalization method that intelligently adapts the level of clustering to the purchasing index of each customer. In the e-business era, many companies gather customers' demographic and transactional information such as age, gender, purchasing date, and product category, and use it to predict customers' preferences or purchasing patterns so that they can provide more customized services. The conventional Customer-Segmentation method provides customized services for each customer group: it clusters the whole customer set into groups based on similarity and builds a predictive model for each resulting group. It can thus manage the number of predictive models and provide more data for customers who do not have enough data of their own to build a good predictive model, by borrowing the data of similar customers. However, this method often fails to provide highly personalized services to each individual, which is especially important for VIP customers. Furthermore, it clusters customers who already have a considerable amount of data together with customers who have only a little, which increases the computational cost unnecessarily without significant performance improvement. The other conventional method, the 1-to-1 method, provides more customized services than Customer-Segmentation because the predictive model is built using only the data of the individual customer. It not only provides highly personalized services but also builds a relatively simple and less costly model for each customer. However, the 1-to-1 method does not produce a good predictive model when a customer has only a small amount of data; in other words, when a customer's transactional data are insufficient, its performance deteriorates. To overcome the limitations of these two conventional methods, we suggest a new method, called the Intelligent Customer Segmentation method, that provides adaptively personalized services according to each customer's purchasing index. The suggested method clusters customers according to their purchasing index, so that predictions for customers who purchase less are based on data from more intensively clustered groups, while VIP customers, who already have a considerable amount of data, are clustered to a much lesser extent or not at all. The main idea is to apply clustering only when the number of transactional records of the target customer is below a predefined criterion size. To find this criterion, we suggest an algorithm called sliding-window correlation analysis, which locates the data size at which the performance of the 1-to-1 method drops sharply due to data sparsity. After finding this criterion size, we apply the conventional 1-to-1 method to customers who have more data than the criterion, and apply clustering to those who have less, until each can use at least the criterion amount of data for model building. We apply the two conventional methods and the newly suggested method to Nielsen's beverage purchasing data to predict the customers' purchasing amounts and purchasing categories.
We use two data mining techniques (Support Vector Machine and Linear Regression) and two performance measures (MAE and RMSE) to predict the two dependent variables mentioned above. The results show that the suggested Intelligent Customer Segmentation method outperforms the conventional 1-to-1 method in many cases and matches the performance of the Customer-Segmentation method at a much lower computational cost.
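
The adaptive choice between a 1-to-1 model and a cluster-pooled model can be sketched as below. The criterion size, features, and data are invented; in the paper the criterion would come from the sliding-window correlation analysis described above.

```python
# Adaptive personalization sketch: customers with enough transactions get
# their own 1-to-1 model; sparse customers fall back to a model fit on
# pooled data from their cluster. All data here are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
CRITERION = 30   # assumed minimum transactions for a reliable 1-to-1 model

# customer_id -> (features, purchase amounts); history length varies
customers = {i: (rng.random((n, 3)), rng.random(n) * 100)
             for i, n in enumerate(rng.integers(5, 80, size=40))}

profiles = np.array([X.mean(axis=0) for X, _ in customers.values()])
clusters = KMeans(n_clusters=5, n_init=10, random_state=2).fit_predict(profiles)

def model_for(cid):
    X, y = customers[cid]
    if len(y) >= CRITERION:                   # heavy buyer: 1-to-1 model
        return LinearRegression().fit(X, y)
    peers = [c for c in customers if clusters[c] == clusters[cid]]
    Xp = np.vstack([customers[c][0] for c in peers])   # pool cluster data
    yp = np.concatenate([customers[c][1] for c in peers])
    return LinearRegression().fit(Xp, yp)

pred = model_for(0).predict(customers[0][0][:1])
print(f"predicted next purchase amount for customer 0: {pred[0]:.1f}")
```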

A Hybrid Forecasting Framework based on Case-based Reasoning and Artificial Neural Network (사례기반 추론기법과 인공신경망을 이용한 서비스 수요예측 프레임워크)

  • Hwang, Yousub
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.43-57 / 2012
  • To maintain a competitive advantage in a constantly changing business environment, enterprise management must make the right decisions in many business activities based on both internal and external information, so providing accurate information plays a prominent role in decision making. Intuitively, historical data can provide feasible estimates through forecasting models. If the service department can estimate the service quantity for the next period, it can effectively control the inventory of service-related resources such as staff, parts, and other facilities, and the production department can build a load map for improving product quality. Obtaining an accurate service forecast therefore appears critical to manufacturing companies. Numerous investigations of this problem have generally employed statistical methods, such as regression or autoregressive and moving-average models. However, these methods are only effective for data that are seasonal or cyclical; if the data are influenced by the special characteristics of a product, they are not feasible. In our research, we propose a forecasting framework that predicts the service demand of a manufacturing organization by combining case-based reasoning (CBR) with an unsupervised artificial neural network-based clustering analysis (Self-Organizing Maps; SOM). We believe this is one of the first attempts to apply unsupervised artificial neural network-based machine learning techniques in the service forecasting domain. Our proposed approach has several appealing features: (1) we apply CBR and SOM in a new forecasting domain, service demand forecasting; and (2) we combine CBR and SOM to overcome the limitations of traditional statistical forecasting methods, and we have developed a service forecasting tool based on this combined approach. We conducted an empirical study on a real digital TV manufacturer (Company A), empirically evaluating the proposed approach and tool using real sales and service-related data from that manufacturer. In our experiments, we explore the performance of the proposed service forecasting framework compared with two other methods: a traditional CBR-based forecasting model and the existing service forecasting model used by Company A. We ran each service forecasting method 144 times; each time, input data were randomly sampled for each framework. To evaluate the accuracy of the forecasting results, we used the Mean Absolute Percentage Error (MAPE) as the primary performance measure. We conducted a one-way ANOVA test on the 144 MAPE measurements for the three service forecasting approaches; the F-ratio is 67.25 and the p-value is 0.000, meaning the difference among the MAPEs of the three approaches is significant at the 0.000 level. Since there is a significant difference among the approaches, we conducted Tukey's HSD post hoc test to determine exactly which MAPE means differ significantly from which others.
In terms of MAPE, Tukey's HSD post hoc test grouped the three different service forecasting approaches into three different subsets in the following order: our proposed approach > traditional CBR-based service forecasting approach > the existing forecasting approach used by Company A. Consequently, our empirical experiments show that our proposed approach outperformed the traditional CBR based forecasting model and the existing service forecasting model used by Company A. The rest of this paper is organized as follows. Section 2 provides some research background information such as summary of CBR and SOM. Section 3 presents a hybrid service forecasting framework based on Case-based Reasoning and Self-Organizing Maps, while the empirical evaluation results are summarized in Section 4. Conclusion and future research directions are finally discussed in Section 5.
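
A minimal sketch of the CBR+SOM combination: a small self-organizing map (simplified here to winner-only updates, without a neighborhood function) partitions past cases, and the forecast for a new case averages the demand of its most similar cases within the winning node. All data and map dimensions are invented for illustration.

```python
# Hybrid sketch: a simplified SOM clusters past service cases; CBR then
# retrieves the k most similar cases in the winning node for the forecast.
import numpy as np

rng = np.random.default_rng(3)
cases = rng.random((200, 4))          # past cases: 4 descriptive features
demand = cases @ np.array([3.0, 1.0, 2.0, 0.5]) + rng.normal(0, 0.1, 200)

# Train a 3x3 SOM on the case features (winner-only update for brevity).
weights = rng.random((9, 4))
for t in range(2000):
    x = cases[rng.integers(len(cases))]
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best matching unit
    lr = 0.5 * (1 - t / 2000)                          # decaying rate
    weights[bmu] += lr * (x - weights[bmu])

def forecast(query, k=5):
    bmu = np.argmin(((weights - query) ** 2).sum(axis=1))
    assignments = np.argmin(((cases[:, None] - weights) ** 2).sum(-1), axis=1)
    members = np.where(assignments == bmu)[0]
    if len(members) == 0:                 # fallback: empty node
        members = np.arange(len(cases))
    # CBR step: average demand of the k most similar cases in the node
    dists = ((cases[members] - query) ** 2).sum(axis=1)
    nearest = members[np.argsort(dists)[:k]]
    return demand[nearest].mean()

q = rng.random(4)
print(f"forecast demand: {forecast(q):.2f}")
```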

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.21-44 / 2018
  • In recent years, the rapid development of Internet technology and the popularization of smart devices have resulted in massive amounts of text data, produced and distributed through various media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization, a problem that has drawn the interest of many researchers and that requires professionals capable of classifying relevant information; hence, text classification was introduced. Text classification is a challenging task in modern data analysis in which a text document must be assigned to one or more predefined categories or classes. Different techniques are available in this field, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machine, Decision Tree, and Artificial Neural Network. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge: depending on the type of words used in the corpus and the type of features created for classification, the performance of a text classification model can vary. Most previous attempts have been based on proposing a new algorithm or modifying an existing one, and this line of research can be said to have reached its limits for further improvement. In this study, instead of proposing or modifying an algorithm, we focus on finding a way to modify the use of the data. It is widely known that classifier performance is influenced by the quality of the training data upon which the classifier is built. Real-world datasets most of the time contain noise, and noisy data can affect the decisions made by the classifiers built from them. In this study, we consider that data from different domains, that is, heterogeneous data, may have noise-like characteristics that can be utilized in the classification process. In building a classifier, a machine learning algorithm operates on the assumption that the characteristics of the training data and the target data are the same or very similar. However, in the case of unstructured data such as text, the features are determined by the vocabulary contained in the documents; if the viewpoints of the training data and the target data differ, the features may appear different between the two. In this study, we attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Data coming from various sources are likely to be formatted differently, which causes difficulties for traditional machine learning algorithms because they are not developed to recognize different types of data representation at one time and to bring them together in the same generalization. Therefore, in order to utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning in our study. However, unlabeled data may degrade the performance of the document classifier.
Therefore, we further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) to select only the documents that contribute to the accuracy improvement of the classifier. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data; the most confident classification rules are selected and applied for the final decision making. In this paper, three different types of real-world data sources were used: news, Twitter, and blogs.
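
The confidence-based selection idea behind RSESLA can be pictured with a generic two-view self-training loop: two classifiers trained on different feature views label the unlabeled pool, and only documents on which both agree with high confidence are promoted into the training set. This is a generic illustration, not the authors' algorithm; the views, models, and 0.95 threshold are assumptions.

```python
# Two-view confidence-filtered self-training sketch on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=600, n_features=20, random_state=4)
X_lab, y_lab = X[:100], y[:100]          # small labeled set
X_unl = X[100:500]                       # unlabeled pool
X_test, y_test = X[500:], y[500:]

view_a, view_b = slice(0, 10), slice(10, 20)   # two feature "views"

for round_ in range(3):
    clf_a = GaussianNB().fit(X_lab[:, view_a], y_lab)
    clf_b = LogisticRegression(max_iter=500).fit(X_lab[:, view_b], y_lab)
    pa = clf_a.predict_proba(X_unl[:, view_a])
    pb = clf_b.predict_proba(X_unl[:, view_b])
    pred_a, pred_b = pa.argmax(1), pb.argmax(1)
    conf = np.minimum(pa.max(1), pb.max(1))
    # keep only documents both views agree on with high confidence
    pick = (pred_a == pred_b) & (conf > 0.95)
    X_lab = np.vstack([X_lab, X_unl[pick]])
    y_lab = np.concatenate([y_lab, pred_a[pick]])
    X_unl = X_unl[~pick]

final = LogisticRegression(max_iter=500).fit(X_lab, y_lab)
print(f"test accuracy after self-training: {final.score(X_test, y_test):.3f}")
```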

A study on a flow of the technological convergence in webtoon - Focused on the interactiontoon of webtoon (기술 융합형 웹툰의 몰입도 연구 -인터랙션 툰 <마주쳤다>를 중심으로)

  • Baek, Eun-Ji;Son, Ki-Hwan
    • Cartoon and Animation Studies / s.50 / pp.101-130 / 2018
  • Since the advent of smart devices, the smartphone has become a popular tool for viewing webtoons. This phenomenon has driven the convergence of cutting-edge technologies and webtoons in diverse forms, creating unique versions of webtoons including, but not limited to, the smart-toon, effect-toon, cut-toon, dubbing-toon, moving-toon, AR-toon, VR-toon, and interaction-toon. Compared with this rich diversity of webtoons in the online industry, there is a lack of academic research on the topic. Some papers discuss the different types of multimedia technology convergence and present cases, or the effectiveness and problems of visual effects, but the effects of these convergence technologies on readers' immersion and reading effectiveness have not been investigated so far. Therefore, this paper discusses the immersive storytelling methods characteristic of comics and analyzes each aspect of immersion in technology-converged webtoons along with its problems. Furthermore, it analyzes the aspects of immersion and the interaction elements found in the popular interaction-toon <마주쳤다> (Encountered), and through this discusses both the positive influence of interaction elements on readers' immersion and its limitations. Classifying the technology-converged webtoons by immersion level: the effect-toon sometimes interferes with the reader's flow through excessive multimedia effects, creating information overload. The smart-toon, which applies motion to each frame in the horizontal mode of smartphones, was a good attempt, but it constrained readers' active participation and made it hard for them to be fully absorbed in the story. The VR-toon, which uses virtual reality gadgets to let viewers explore the world of the webtoon, was also a worthwhile attempt to overcome the limitations of vertical screens, but it often dispersed users' attention and reduced their immersion. The moving-toon, which emphasized only reading convenience, likewise limited readers' active participation and disturbed their concentration. On the other hand, the cartoonist Il-Kwon Ha applied technologies such as face recognition, augmented reality, 360-degree panorama, and haptics to his cartoon <마주쳤다> (Encountered). These allow readers to form a sense of closeness to the characters, to identify themselves with them, and to interact with them, so that they can be fully immersed in the story. However, occasional technology overuse, impractical production, and the hackneyed storyline that emerges later in the story remain its limitations.

A Study on practical use about Kinetic Typography of Ethics Character Picture of filial piety and brotherly love (효제문자도(孝悌文字圖)의 키네틱 타이포그래피 활용 연구)

  • Chung, Chi-Won
    • Cartoon and Animation Studies / s.50 / pp.327-347 / 2018
  • From the end of the 18th century to the end of the 19th century, in the late Joseon Dynasty, a new genre of art emerged in contrast to the art circulated among the upper class, a popular culture that attempted to transform the social order of the period. It is no exaggeration to say that it is the origin of Korean folk art: it started from popular art concepts and used colorful techniques and decorations that did not yield to conventional iconography. However, because this technique was practiced by the lower classes, its standing was lowered from iconography to secular picture. The ethics character picture, passed down to the present through that time of cultural upheaval, began as secular picture, was transformed into hyukpil-style illustration, and has represented popular art ever since. This thesis aims to reflect on the meaning, the various visual expressions, and the lifestyle embodied in the Ethics Character Picture of filial piety and brotherly love, a unique genre of popular art. It also proposes kinetic typography using video media and examines how the traditional ethics character picture, combined with video technology, affects advertising. Such attempts can show the world Korea's traditional content, which, through various media, can be recreated as nationally symbolic keywords. Furthermore, it is meaningful to pass the noble, cultural Ethics Character Picture of filial piety and brotherly love down to younger generations; by realigning it to modern forms of expression, it is expected to be significantly meaningful in handing down, and helping younger generations understand, the spirit of their ancestors. This will allow various attempts to reconstruct items of Korea's traditional content into new media content merged with video media.