• Title/Summary/Keyword: experimental techniques

Basic Research on the Possibility of Developing a Landscape Perceptual Response Prediction Model Using Artificial Intelligence - Focusing on Machine Learning Techniques - (인공지능을 활용한 경관 지각반응 예측모델 개발 가능성 기초연구 - 머신러닝 기법을 중심으로 -)

  • Kim, Jin-Pyo;Suh, Joo-Hwan
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.51 no.3
    • /
    • pp.70-82
    • /
    • 2023
  • The recent surge in IT and data acquisition is shifting the paradigm in all aspects of life, and these advances are also affecting academic fields. Research topics and methods are being improved through academic exchange and connections. In particular, data-based research methods are employed in various academic fields, including landscape architecture, where continuous research is needed. Therefore, this study aims to investigate the possibility of developing a landscape preference evaluation and prediction model using machine learning, a branch of artificial intelligence, reflecting this situation. To achieve this goal, machine learning techniques were applied to the landscaping field to build a landscape preference evaluation and prediction model and to verify the simulation accuracy of the model. Landscape images of wind power facilities, which have recently attracted attention as a renewable energy source, were selected as the research objects. For analysis, images of the wind power facility landscapes were collected using web crawling techniques, and an analysis dataset was built. Orange version 3.33, a program from the University of Ljubljana, was used for the machine learning analysis to derive a prediction model with excellent performance. Both a model structure that integrates the evaluation criteria and separate model structures for each evaluation criterion were generated using kNN, SVM, Random Forest, Logistic Regression, and Neural Network algorithms suitable for machine learning classification. The performance of the generated models was evaluated to derive the most suitable prediction model. The prediction model derived in this study separately evaluates three criteria (classification by type of landscape, classification by distance between the landscape and the target, and classification by preference) and then synthesizes and predicts the results. As a result, a prediction model was developed with a high accuracy of 0.986 for the criterion of landscape type, 0.973 for the criterion of distance, and 0.952 for the criterion of preference, and verification through evaluation of the predicted results showed that the model exceeds its required performance values. As an experimental attempt to investigate the possibility of developing a prediction model using machine learning in landscape-related research, this study confirmed that a high-performance prediction model can be created by building a dataset through the collection and refinement of image data and subsequently utilizing it in landscape-related research fields. Based on the results, implications, and limitations of this study, it is believed that various types of landscape prediction models can be developed, including models for wind power facility, natural, and cultural landscapes. Machine learning techniques can become more useful and valuable in the field of landscape architecture by exploring and applying research methods appropriate to the topic, for example by reducing data classification time through models that classify images according to landscape type, or by analyzing the importance of landscape planning factors through machine learning analysis of landscape prediction factors.
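
    The abstract above compares kNN, SVM, Random Forest, Logistic Regression, and Neural Network classifiers on labeled landscape images. The study itself was carried out in Orange 3.33; the following is a minimal scikit-learn sketch under the assumption that feature vectors have already been extracted from the crawled images and labeled for one evaluation criterion (for example, landscape type). All names and parameter values here are illustrative, not the study's settings.

    ```python
    # Hypothetical sketch: compare the five classifier families named in the abstract
    # on pre-extracted image feature vectors X (n_samples x n_features) and labels y.
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier

    models = {
        "kNN": KNeighborsClassifier(n_neighbors=5),
        "SVM": SVC(kernel="rbf", C=1.0),
        "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
        "Logistic Regression": LogisticRegression(max_iter=1000),
        "Neural Network": MLPClassifier(hidden_layer_sizes=(100,), max_iter=500, random_state=0),
    }

    def compare_models(X, y):
        """Report mean cross-validated accuracy for each candidate classifier."""
        for name, model in models.items():
            scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
            print(f"{name:20s} accuracy = {scores.mean():.3f}")
    ```

    The model with the highest cross-validated accuracy per criterion would then play the role of the "most suitable prediction model" described above.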

SURFACE ROUGHNESS OF EXPERIMENTAL COMPOSITE RESINS USING CONFOCAL LASER SCANNING MICROSCOPE (공초점 레이저 주사 현미경을 이용한 실험적 레진의 표면 조도에 대한 연구)

  • Bae, J.H.;Lee, M.A.;Cho, B.H.
    • Restorative Dentistry and Endodontics
    • /
    • v.33 no.1
    • /
    • pp.1-8
    • /
    • 2008
  • The purpose of this study was to evaluate the effect of a new resin monomer, filler size, and polishing technique on the surface roughness of composite resin restorations using confocal laser scanning microscopy. By adding a new methoxylated Bis-GMA (Bis-M-GMA, 2,2-bis[4-(2-methoxy-3-methacryloyloxy propoxy) phenyl] propane) having low viscosity, the content of TEGDMA could be decreased. Three experimental composite resins were made: EX1 (Bis-M-GMA/TEGDMA = 95/5 wt%, 40 nm nanofillers); EX2 (Bis-M-GMA/TEGDMA = 95/5 wt%, 20 nm nanofillers); EX3 (Bis-GMA/TEGDMA = 70/30 wt%, 40 nm nanofillers). Filtek Z250 was used as a reference. Nine specimens (6 mm in diameter and 2 mm in thickness) of each experimental composite resin and Filtek Z250 were fabricated in a Teflon mold and assigned to three groups. In the Mylar strip group, specimens were left undisturbed. In the Sof-Lex group, specimens were ground with #1000 SiC paper and polished with Sof-Lex discs. In the DiaPolisher group, specimens were ground with #1000 SiC paper and polished with DiaPolisher polishing points. The Ra (average roughness), Rq (root-mean-square roughness), Rv (valley roughness), Rp (peak roughness), Rc (2D roughness), and Sc (3D roughness) values were determined using confocal laser scanning microscopy. The data were statistically analyzed by two-way ANOVA and the Tukey multiple comparisons test (p = 0.05). The type of composite resin and the polishing technique significantly affected the surface roughness of the composite resin restorations (p < 0.001). EX3 showed the smoothest surface compared to the other composite resins (p < 0.05). The Mylar strip produced a smoother surface than the other polishing techniques (p < 0.05). Bis-M-GMA, a new resin monomer with low viscosity, can reduce the amount of diluent, but showed an adverse effect on the surface roughness of composite resin restorations.
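
    The roughness parameters reported above have standard profile definitions relative to the mean line: Ra is the mean absolute deviation, Rq the root-mean-square deviation, Rp the maximum peak height, and Rv the maximum valley depth. The sketch below computes these from a 1D height profile as an illustration of the definitions only; it is not the CLSM vendor software used in the study, and Rc and Sc are omitted because their definitions depend on the instrument and standard applied.

    ```python
    import numpy as np

    def roughness_parameters(z):
        """Compute basic roughness statistics from a 1D surface height profile z (e.g. in µm)."""
        z = np.asarray(z, dtype=float)
        dev = z - z.mean()                  # heights measured relative to the mean line
        return {
            "Ra": np.mean(np.abs(dev)),     # average roughness
            "Rq": np.sqrt(np.mean(dev**2)), # root-mean-square roughness
            "Rp": dev.max(),                # maximum peak height
            "Rv": abs(dev.min()),           # maximum valley depth
        }
    ```

    The two-way ANOVA with resin type and polishing technique as factors could likewise be reproduced on such measurements, for instance with statsmodels' anova_lm applied to an ols fit of Ra on the two categorical factors and their interaction.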

A Study of Experimental Image Direction for Short Animation Movies - Focusing on the Short Films <Tango> and <Fast Film> - (단편애니메이션의 실험적 영상연출 연구 -<탱고>와 <페스트 필름>을 중심으로)

  • Choi, Don-Ill
    • Cartoon and Animation Studies
    • /
    • s.36
    • /
    • pp.375-391
    • /
    • 2014
  • An animation film is a non-photorealistic art form built from a formative language that composes each frame around a story and from cuts that connect those frames. Therefore, in expressing an image, artistic expression methods and devices for the formative space should be provided within the frame, while the cuts carry the images between frames faithfully. Short animation films are produced through various image experiments with unique visual expression, rather than through narration, in order to convey the subjective discourse of the author. Therefore, an image style that forms unique images and varied image direction are important factors. This study compared the experimental image direction of <Tango> and <Fast Film>, both of which employ a production method of film manipulation. First, while <Tango> uses pixilation, producing its images from live-action footage through painting and repeated optical exposure on a cel mat, <Fast Film> was made with diverse collage techniques such as tearing, cutting, pasting, and folding hundreds of scenes from action movies. Second, <Tango> expresses the non-causal relationships of its characters through their repetitive behaviors and a circulatory image structure seen from a fixed camera angle, resisting typical scene transitions. On the other hand, <Fast Film> has an advancing structure that develops the antagonistic relationship of its characters through diverse camera angles and scene transitions of unique images. Third, in terms of editing, <Tango> uses a long-take technique in which the whole film consists of a single cut, though it appears to contain many scenes because of the appearance of various characters. On the other hand, <Fast Film> maximizes visual playfulness and immersion by reconstructing its imagery from hundreds of varied short cuts. That is, both works share the features of experimental works that expand animated image expression through film manipulation, in contrast to conventional animation production. On top of that, <Tango> conveys the routine lives of diverse human beings without clear narration through images of conceptualized spaces, while <Fast Film> expresses them in a new image space through image reconstruction using collage techniques and rapid pacing, establishing a binary opposition structure.

Development of Predictive Models for Rights Issues Using Financial Analysis Indices and Decision Tree Technique (경영분석지표와 의사결정나무기법을 이용한 유상증자 예측모형 개발)

  • Kim, Myeong-Kyun;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.59-77
    • /
    • 2012
  • This study focuses on predicting which firms will increase capital by issuing new stocks in the near future. Many stakeholders, including banks, credit rating agencies, and investors, perform a variety of analyses of firms' growth, profitability, stability, activity, productivity, etc., and regularly report the firms' financial analysis indices. In this paper, we develop predictive models for rights issues using these financial analysis indices and data mining techniques. This study approaches the building of the predictive models from two analytical perspectives. The first is the analysis period. We divide the analysis period into before and after the IMF financial crisis and examine whether there is a difference between the two periods. The second is the prediction time. In order to predict when firms will increase capital by issuing new stocks, the prediction time is categorized as one year, two years, and three years later. In total, six prediction models are therefore developed and analyzed. In this paper, we employ the decision tree technique to build the prediction models for rights issues. The decision tree is the most widely used prediction method, building tree structures to label or categorize cases into a set of known classes. In contrast to neural networks, logistic regression, and SVM, decision tree techniques are well suited for high-dimensional applications and have strong explanation capabilities. There are well-known decision tree induction algorithms such as CHAID, CART, QUEST, and C5.0. Among them, we use the C5.0 algorithm, the most recently developed of these, which yields better performance than the others. We obtained data for the rights issues and financial analyses from TS2000 of the Korea Listed Companies Association. Each record of financial analysis data consists of 89 variables, including 9 growth indices, 30 profitability indices, 23 stability indices, 6 activity indices, and 8 productivity indices. For model building and testing, we used 10,925 financial analysis records from a total of 658 listed firms. PASW Modeler 13 was used to build C5.0 decision trees for the six prediction models. A total of 84 variables from the financial analysis data are selected as input variables for each model, and the rights issue status (issued or not issued) is defined as the output variable. To develop prediction models using the C5.0 node (Node Options: Output type = Rule set, Use boosting = false, Cross-validate = false, Mode = Simple, Favor = Generality), we used 60% of the data for model building and 40% of the data for model testing. The results of the experimental analysis show that the prediction accuracies for data after the IMF financial crisis (68.78% to 71.41%) are about 10 percentage points higher than those for data before the IMF financial crisis (59.04% to 60.43%). These results indicate that since the IMF financial crisis, the reliability of financial analysis indices has increased and firms' intentions regarding rights issues have become more evident. The experimental results also show that the stability-related indices have a major impact on conducting a rights issue in the case of short-term prediction. On the other hand, the long-term prediction of conducting a rights issue is affected by financial analysis indices on profitability, stability, activity, and productivity. All the prediction models include the industry code as one of the significant variables. This means that companies in different industries show different patterns of rights issues.
We conclude that it is desirable for stakeholders to take into account stability-related indices for short-term prediction and a broader range of financial analysis indices for long-term prediction. The current study has several limitations. First, we need to compare the differences in accuracy obtained by using other data mining techniques such as neural networks, logistic regression, and SVM. Second, we need to develop and evaluate new prediction models that include variables which research on capital structure theory has identified as relevant to rights issues.
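
    The models above were built with the C5.0 node in PASW Modeler using a 60/40 build/test split. C5.0 itself is not available in standard Python libraries, so the sketch below uses scikit-learn's CART-style DecisionTreeClassifier purely as a stand-in to illustrate the described workflow; the data frame, column names, and tree depth are assumptions, not details from the paper.

    ```python
    # Hypothetical sketch of the 60% build / 40% test workflow described in the abstract,
    # with a CART decision tree standing in for the C5.0 algorithm.
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    def build_rights_issue_model(df: pd.DataFrame):
        # 'rights_issue' (issued / not issued) is the output variable; the remaining
        # columns stand in for the 84 financial analysis indices used as inputs.
        X = df.drop(columns=["rights_issue"])
        y = df["rights_issue"]
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, train_size=0.6, random_state=0, stratify=y)  # 60% build, 40% test
        tree = DecisionTreeClassifier(max_depth=6, random_state=0)
        tree.fit(X_train, y_train)
        acc = accuracy_score(y_test, tree.predict(X_test))
        return tree, acc
    ```

    Building one such model per analysis period and prediction horizon would yield the six models compared in the study.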

Performance Analysis of Frequent Pattern Mining with Multiple Minimum Supports (다중 최소 임계치 기반 빈발 패턴 마이닝의 성능분석)

  • Ryang, Heungmo;Yun, Unil
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.1-8
    • /
    • 2013
  • Data mining techniques are used to find important and meaningful information in huge databases, and pattern mining is one of the significant data mining techniques. Pattern mining is a method of discovering useful patterns from huge databases. Frequent pattern mining, one branch of pattern mining, extracts patterns whose frequencies exceed a minimum support threshold; such patterns are called frequent patterns. Traditional frequent pattern mining applies a single minimum support threshold to the whole database. This single-support model implicitly supposes that all of the items in the database have the same nature. In real-world applications, however, each item in a database can have its own characteristics, and thus an appropriate pattern mining technique that reflects those characteristics is required. In the frequent pattern mining framework, where the natures of items are not considered, the single minimum support threshold must be set to a very low value to mine patterns containing rare items, which in turn yields too many patterns including meaningless items. In contrast, no such pattern can be mined if too high a threshold is used. This dilemma is called the rare item problem. To solve this problem, initial research proposed approximate approaches that split the data into several groups according to item frequencies or group related rare items. However, these methods cannot find all of the frequent patterns, including rare frequent patterns, because they are based on approximate techniques. Hence, a pattern mining model with multiple minimum supports was proposed to solve the rare item problem. In this model, each item has a corresponding minimum support threshold, called MIS (Minimum Item Support), which is calculated from the item's frequency in the database. By applying the MIS values, the multiple minimum supports model finds all of the rare frequent patterns without generating meaningless patterns or losing significant patterns. Meanwhile, candidate patterns are extracted during the mining process, and in the single minimum support model only the single minimum support is compared with the frequencies of the candidate patterns. Therefore, the characteristics of the items that constitute the candidate patterns are not reflected, and the rare item problem arises in that model. To address this issue in the multiple minimum supports model, the minimum MIS value among the items in a candidate pattern is used as the minimum support threshold for that pattern, so that its characteristics are considered. To efficiently mine frequent patterns, including rare frequent patterns, based on this concept, tree-based algorithms of the multiple minimum supports model sort items in the tree in MIS-descending order, in contrast to those of the single minimum support model, where items are ordered in frequency-descending order. In this paper, we study the characteristics of frequent pattern mining based on multiple minimum supports and conduct a performance evaluation against a general frequent pattern mining algorithm in terms of runtime, memory usage, and scalability. Experimental results show that the multiple minimum supports based algorithm outperforms the single minimum support based one while demanding more memory for the MIS information. Moreover, both compared algorithms show good scalability in the results.
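
    The key idea above is that a candidate pattern is judged frequent when its support reaches the minimum of the MIS values of its items, with each MIS commonly derived from the item's own frequency. The sketch below illustrates that test on a toy transaction database using naive enumeration; it is for illustration only and is not the tree-based algorithms the paper evaluates. The MIS assignment rule (a fraction beta of the item's frequency, floored by a least support LS) is one common convention, assumed here.

    ```python
    from itertools import combinations

    def item_supports(transactions):
        """Count the support (absolute frequency) of every single item."""
        support = {}
        for t in transactions:
            for item in set(t):
                support[item] = support.get(item, 0) + 1
        return support

    def frequent_patterns_multiple_mis(transactions, beta=0.5, least_support=1, max_size=3):
        """Keep itemsets whose support reaches the minimum MIS among their items
        (the multiple-minimum-supports criterion), via naive enumeration."""
        supports = item_supports(transactions)
        mis_of = {i: max(beta * f, least_support) for i, f in supports.items()}
        items = sorted(supports)
        result = {}
        for size in range(1, max_size + 1):
            for candidate in combinations(items, size):
                count = sum(1 for t in transactions if set(candidate) <= set(t))
                if count >= min(mis_of[i] for i in candidate):
                    result[candidate] = count
        return result

    # Example: the rare item 'd' can still form frequent patterns under its own lower MIS.
    transactions = [["a", "b"], ["a", "b", "c"], ["a", "c"], ["a", "b", "d"]]
    print(frequent_patterns_multiple_mis(transactions, beta=0.5))
    ```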

Optimization of Multiclass Support Vector Machine using Genetic Algorithm: Application to the Prediction of Corporate Credit Rating (유전자 알고리즘을 이용한 다분류 SVM의 최적화: 기업신용등급 예측에의 응용)

  • Ahn, Hyunchul
    • Information Systems Review
    • /
    • v.16 no.3
    • /
    • pp.161-177
    • /
    • 2014
  • Corporate credit rating assessment consists of complicated processes in which various factors describing a company are taken into consideration. Such assessment is known to be very expensive, since domain experts must be employed to assess the ratings. As a result, data-driven corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has received considerable attention from researchers and practitioners. In particular, statistical methods such as multiple discriminant analysis (MDA) and multinomial logistic regression analysis (MLOGIT), and AI methods including case-based reasoning (CBR), artificial neural networks (ANN), and multiclass support vector machines (MSVM) have been applied to corporate credit rating. Among them, MSVM has recently become popular because of its robustness and high prediction accuracy. In this study, we propose a novel optimized MSVM model and apply it to corporate credit rating prediction in order to enhance accuracy. Our model, named 'GAMSVM (Genetic Algorithm-optimized Multiclass Support Vector Machine),' is designed to simultaneously optimize the kernel parameters and the feature subset selection. Prior studies such as Lorena and de Carvalho (2008) and Chatterjee (2013) show that proper kernel parameters may improve the performance of MSVMs. Also, the results from studies such as Shieh and Yang (2008) and Chatterjee (2013) imply that appropriate feature selection may lead to higher prediction accuracy. Based on these prior studies, we propose to apply GAMSVM to corporate credit rating prediction. As a tool for optimizing the kernel parameters and the feature subset selection, we adopt the genetic algorithm (GA). GA is known as an efficient and effective search method that simulates the phenomenon of biological evolution. By applying genetic operations such as selection, crossover, and mutation, it is designed to gradually improve the search results. In particular, the mutation operator prevents GA from falling into local optima, so the globally optimal or a near-optimal solution can be found. GA has been widely applied to search for optimal parameters or feature subsets of AI techniques, including MSVM. For these reasons, we also adopt GA as an optimization tool. To empirically validate the usefulness of GAMSVM, we applied it to a real-world case of credit rating in Korea. Our application is in bond rating, which is the most frequently studied area of credit rating for specific debt issues or other financial obligations. The experimental dataset was collected from a large credit rating company in South Korea. It contained 39 financial ratios of 1,295 companies in the manufacturing industry, together with their credit ratings. Using various statistical methods, including one-way ANOVA and stepwise MDA, we selected 14 financial ratios as the candidate independent variables. The dependent variable, i.e. credit rating, was labeled as four classes: 1 (A1); 2 (A2); 3 (A3); 4 (B and C). Eighty percent of the data for each class was used for training, and the remaining 20 percent was used for validation. To overcome the small sample size, we applied five-fold cross validation to our dataset. In order to examine the competitiveness of the proposed model, we also experimented with several comparative models, including MDA, MLOGIT, CBR, ANN, and MSVM.
In the case of MSVM, we adopted the One-Against-One (OAO) and DAGSVM (Directed Acyclic Graph SVM) approaches because they are known to be the most accurate among the various MSVM approaches. GAMSVM was implemented using LIBSVM, an open-source library, and Evolver 5.5, a commercial software package that supports GA. The other comparative models were run using various statistical and AI packages such as SPSS for Windows, Neuroshell, and Microsoft Excel VBA (Visual Basic for Applications). Experimental results showed that the proposed model, GAMSVM, outperformed all the competitive models. In addition, the model was found to use fewer independent variables while showing higher accuracy. In our experiments, five variables, X7 (total debt), X9 (sales per employee), X13 (years since founding), X15 (accumulated earnings to total assets), and X39 (an index related to cash flows from operating activities), were found to be the most important factors in predicting corporate credit ratings. However, the values of the finally selected kernel parameters were found to be almost the same across the data subsets. To examine whether the predictive performance of GAMSVM was significantly greater than that of the other models, we used the McNemar test. As a result, we found that GAMSVM was better than MDA, MLOGIT, CBR, and ANN at the 1% significance level, and better than OAO and DAGSVM at the 5% significance level.
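
    GAMSVM encodes the kernel parameters and a feature-selection mask in one chromosome and evolves it with selection, crossover, and mutation. The sketch below is a deliberately small GA over an RBF-kernel SVC in scikit-learn, which applies a one-against-one scheme internally for multiclass problems; the study itself used LIBSVM and Evolver 5.5. The chromosome layout, search ranges, rates, and population size are illustrative assumptions rather than the paper's settings.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    def fitness(chrom, X, y):
        """Chromosome = [feature mask bits..., log2(C), log2(gamma)]."""
        n_feat = X.shape[1]
        mask = chrom[:n_feat].astype(bool)
        if not mask.any():
            return 0.0                                   # empty feature subset is useless
        clf = SVC(kernel="rbf", C=2.0 ** chrom[n_feat], gamma=2.0 ** chrom[n_feat + 1])
        return cross_val_score(clf, X[:, mask], y, cv=5).mean()

    def ga_msvm(X, y, pop_size=20, generations=30, mutation_rate=0.1):
        n_feat = X.shape[1]
        # Random initial population: binary feature mask plus exponents for C and gamma.
        pop = np.column_stack([
            rng.integers(0, 2, size=(pop_size, n_feat)).astype(float),
            rng.uniform(-5, 15, size=(pop_size, 1)),     # log2(C)
            rng.uniform(-15, 3, size=(pop_size, 1)),     # log2(gamma)
        ])
        for _ in range(generations):
            scores = np.array([fitness(ind, X, y) for ind in pop])
            parents = pop[np.argsort(scores)[-pop_size // 2:]]    # truncation selection
            children = []
            while len(children) < pop_size:
                a, b = parents[rng.integers(len(parents), size=2)]
                cut = rng.integers(1, n_feat + 2)                 # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                flip = rng.random(n_feat) < mutation_rate         # bit-flip mutation on mask
                child[:n_feat][flip] = 1 - child[:n_feat][flip]
                child[n_feat:] += rng.normal(0, 0.5, size=2)      # jitter kernel parameters
                children.append(child)
            pop = np.array(children)
        return pop[np.argmax([fitness(ind, X, y) for ind in pop])]
    ```

    The returned chromosome jointly specifies which of the candidate ratios to keep and which C and gamma to use, which is the simultaneous optimization the abstract describes.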

Simultaneous Optimization of a KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.139-157
    • /
    • 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and the random subspace method. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained on a randomly chosen feature subspace of the original feature set, and predictions from each ensemble member are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. The k parameter of the KNN base classifiers and the feature subsets selected for the base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and feature subsets of the base classifiers in the ensemble. This study proposed a new ensemble method that improves upon the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve the prediction accuracy of the ensemble model. The proposed model was applied to a bankruptcy prediction problem by using a real dataset from Korean companies. The research data included 1,800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent-sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as the output variable. Of these, 24 financial ratios were selected by using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other for avoiding overfitting. The prediction accuracy against this second portion was used to determine the fitness value in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model.
A 10-fold cross-validation was implemented to compare the performances of the proposed model and other models. To evaluate the effectiveness of the proposed model, the classification accuracy of the proposed model was compared with that of other models. The Q-statistic values and average classification accuracies of base classifiers were investigated. The experimental results showed that the proposed model outperformed other models, such as the single model and random subspace ensemble model.
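
    The random subspace KNN ensemble described above trains each base KNN classifier on a randomly chosen feature subset and combines their votes; the study additionally tunes each base classifier's k and feature subset with a genetic algorithm. The sketch below shows only the plain random subspace ensemble, using scikit-learn's BaggingClassifier configured for feature subsampling without row resampling; parameter values are illustrative assumptions, and the GA optimization layer is not reproduced here.

    ```python
    from sklearn.ensemble import BaggingClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    def knn_random_subspace(n_neighbors=5, n_estimators=30, subspace_ratio=0.5):
        """Random subspace ensemble: each KNN base classifier sees all training
        samples but only a random subset of the features; predictions are voted."""
        return BaggingClassifier(
            KNeighborsClassifier(n_neighbors=n_neighbors),
            n_estimators=n_estimators,
            max_samples=1.0,               # keep every training sample (no row bagging)
            bootstrap=False,
            max_features=subspace_ratio,   # random feature subset per base classifier
            bootstrap_features=False,
            random_state=0,
        )

    # Usage sketch: compare the ensemble against a single KNN on features X and labels y.
    # ensemble_acc = cross_val_score(knn_random_subspace(), X, y, cv=10).mean()
    # single_acc   = cross_val_score(KNeighborsClassifier(5), X, y, cv=10).mean()
    ```

    Replacing the fixed n_neighbors and the random feature draws with GA-selected values per base classifier corresponds to the simultaneous optimization the paper proposes.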

Analysis and Evaluation of Frequent Pattern Mining Technique based on Landmark Window (랜드마크 윈도우 기반의 빈발 패턴 마이닝 기법의 분석 및 성능평가)

  • Pyun, Gwangbum;Yun, Unil
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.101-107
    • /
    • 2014
  • With the development of online services, recent database forms have shifted from static database structures to dynamic stream structures. Previous data mining techniques have been used as tools for decision making, such as the establishment of marketing strategies and DNA analyses. However, the capability to analyze real-time data quickly is necessary in emerging areas of interest such as sensor networks, robotics, and artificial intelligence. Landmark window-based frequent pattern mining, one of the stream mining approaches, performs mining operations on parts of the database or on each transaction, instead of on all the data. In this paper, we analyze and evaluate the well-known landmark window-based frequent pattern mining algorithms Lossy Counting and hMiner. When Lossy Counting mines frequent patterns from a set of new transactions, it performs union operations between the previous and current mining results. hMiner, a state-of-the-art algorithm based on the landmark window model, conducts mining operations whenever a new transaction occurs. Since hMiner extracts frequent patterns as soon as a new transaction is entered, we can obtain the latest mining results reflecting real-time information. For this reason, such algorithms are also called online mining approaches. We evaluate and compare the performance of the earlier algorithm, Lossy Counting, and the more recent one, hMiner. As the criteria of our performance analysis, we first consider the algorithms' total runtime and average processing time per transaction. In addition, to compare the efficiency of their storage structures, their maximum memory usage is also evaluated. Lastly, we show how stably the two algorithms perform their mining work on databases with gradually increasing numbers of items. In terms of mining time and transaction processing, hMiner is faster than Lossy Counting. Since hMiner stores candidate frequent patterns in a hash structure, it can access them directly. Meanwhile, Lossy Counting stores them in a lattice, so it has to traverse multiple nodes to access a candidate frequent pattern. On the other hand, hMiner shows worse performance than Lossy Counting in terms of maximum memory usage. hMiner must keep all of the information for each candidate frequent pattern in its hash buckets, while Lossy Counting reduces this information by using the lattice structure. Since Lossy Counting's storage can share items that appear in multiple patterns, its memory usage is more efficient than hMiner's. However, hMiner shows better efficiency than Lossy Counting in the scalability evaluation for the following reasons: as the number of items increases, fewer items are shared, so Lossy Counting's memory efficiency weakens, and as the number of transactions grows, its pruning effect deteriorates. From the experimental results, we can conclude that landmark window-based frequent pattern mining algorithms are suitable for real-time systems, although they require a significant amount of memory. Hence, their data structures need to be made more efficient so that the algorithms can also be utilized in resource-constrained environments such as WSNs (wireless sensor networks).
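
    Lossy Counting, the baseline algorithm evaluated above, processes the stream in buckets of width ceil(1/epsilon), keeps an (item, count, delta) entry table, and prunes entries whose count plus delta falls below the current bucket number at each bucket boundary. The sketch below is the classic single-item version for illustration; hMiner and the pattern-level variants the paper compares are considerably more involved.

    ```python
    import math

    class LossyCounting:
        """Classic Lossy Counting for approximate frequent items over a data stream."""

        def __init__(self, epsilon=0.01):
            self.width = math.ceil(1 / epsilon)   # bucket width w = ceil(1/epsilon)
            self.n = 0                            # number of elements seen so far
            self.entries = {}                     # item -> [count, delta]

        def add(self, item):
            self.n += 1
            bucket = math.ceil(self.n / self.width)
            if item in self.entries:
                self.entries[item][0] += 1
            else:
                self.entries[item] = [1, bucket - 1]
            if self.n % self.width == 0:          # prune at each bucket boundary
                self.entries = {i: cd for i, cd in self.entries.items()
                                if cd[0] + cd[1] > bucket}

        def frequent(self, support):
            """Items whose estimated frequency is at least (support - epsilon) * n."""
            threshold = (support - 1 / self.width) * self.n
            return [i for i, (count, _) in self.entries.items() if count >= threshold]
    ```

    The pruning step is what bounds memory at the cost of approximate counts, which matches the runtime/memory trade-off against hMiner discussed in the abstract.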

Clinical Characteristics and Genetic Analysis of Prader-Willi Syndrome (Prader-Willi 증후군의 임상 양상 및 유전학적 진단에 관한 고찰)

  • Lee, Ji Eun;Moon, Kwang Bin;Hwang, Jong Hee;Kwon, Eun Kyung;Kim, Sun Hee;Kim, Jong Won;Jin, Dong Kyu
    • Clinical and Experimental Pediatrics
    • /
    • v.45 no.9
    • /
    • pp.1126-1133
    • /
    • 2002
  • Purpose: Prader-Willi syndrome (PWS) is a complex, multisystem disorder with characteristic clinical features. Its genetic basis is an expression defect in the paternally derived chromosome 15q11-q13. We analyzed the clinical features and genetic basis of PWS patients for early detection and treatment. Methods: We retrospectively studied 24 patients with PWS in the Department of Pediatrics, Samsung Medical Center, from September 1997 to September 2001. We performed cytogenetic and molecular genetic analyses using high-resolution GTG banding techniques, fluorescence in situ hybridization, and methylation-specific PCR for the CpG island of the SNRPN gene region. Results: The average birth weight of the PWS patients was 2.67 ± 0.47 kg, and the median age at diagnosis was 1.3 years. The height and weight of PWS patients under one year of age at diagnosis were roughly in the 3rd to 10th percentile, and rapid weight gain was seen between two and six years of age. Feeding problems in infancy and neonatal hypotonia were the two most consistently positive major criteria, present in over 95% of the patients. In 18 of the 24 cases (75%), a deletion of chromosome 15q11-q13 was demonstrated, and one of these 18 cases had an unbalanced 14;15 translocation. Four cases without any cytogenetic abnormality may be considered maternal uniparental disomy, and the remaining cases showed other findings. Conclusion: We suggest diagnostic testing for PWS in all infants and neonates with unexplained feeding problems and hypotonia. Early genetic testing is necessary for clinically suspected patients. As the genetic basis of PWS is heterogeneous and complex, further study is required.

The Study of the Use of 'Korean Traditional Paper' as An Object in Korean Ink Painting (한국화의 '한지(韓紙)' 오브제 사용에 대한 연구)

  • Oh Se-Kwon
    • Journal of Science of Art and Design
    • /
    • v.8
    • /
    • pp.161-184
    • /
    • 2005
  • Traditionally, Korean ink painting has used paper as a background medium. This tradition has sought new attempts to meet today's demands as its expressive techniques and mediums become more diverse. The attempt of Korean ink painting to explore new mediums could change both its structure and style, considering the significance of the medium in the work. These new attempts have encouraged further study of 'Korean Traditional Paper'. Today, 'Korean Traditional Paper' is considered to be an object in itself rather than just a background. In other words, there is no implied separation between the medium 'Korean Traditional Paper' and the work; instead, the medium itself becomes the work. Therefore, 'Korean Traditional Paper' is not only a 'background' that contains the artist's desire to express, but at the same time also an 'object'. This study focuses on the attributes of 'Korean Traditional Paper' as an object, examines how this is visually applied in contemporary Korean ink painting in relation to 'Korean Beauty', and reviews the work of some artists that use Korean Traditional Paper. The use of Korean Traditional Paper as an object first began with the experimental techniques of Lee Eung-Noh and Kwon Young-Woo in the 1960s. It then seemed to pause during the 1970s and 1980s, until interest in the material character of Korean Traditional Paper was renewed with the founding of the 'Korean Traditional Paper Artists Association' in the 1990s. This interest increased, and Korean Traditional Paper was adopted in earnest by artists such as Yim Hyo, Lee Ki-Sook, Won Moon-Ja, and Choi Moo-Young, who used the paper in its Broussonetia fiber state, before it is formed into finished paper. Here, the expression of the object through the characteristics of Korean Traditional Paper is a visual experiment to rediscover Korea's once-forgotten traditional art mediums, focusing on the manifestation of Korean Beauty through Korean Traditional Paper. In this respect, this attempt has valuable meaning in its use with a contemporary visual sense, based on the Korean sense of beauty.
