• Title/Summary/Keyword: IT Techniques


A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.167-181 / 2018
  • Over the past decade, deep learning has been in the spotlight among machine learning algorithms. In particular, CNNs (Convolutional Neural Networks), known as an effective solution for recognizing and classifying images and voices, have been widely applied to classification and prediction problems. In this study, we investigate how to apply CNNs to business problem solving. Specifically, this study proposes applying a CNN to stock market prediction, one of the most challenging tasks in machine learning research. Since CNNs are strong at interpreting images, the proposed model adopts a CNN as a binary classifier that predicts stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements to predict future price movements. Our proposed model, named CNN-FG (Convolutional Neural Network using Fluctuation Graph), consists of five steps. In the first step, it divides the dataset into intervals of 5 days. In step 2, it creates time series graphs for the divided dataset. Each graph is drawn as a 40 × 40 pixel image, with the graph of each independent variable drawn in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing its color values on the R (red), G (green), and B (blue) scales. In the next step, it splits the graph-image dataset into training and validation sets; we used 80% of the total dataset for training and the remaining 20% for validation. Finally, CNN classifiers are trained on the images of the training dataset.
Regarding the parameters of CNN-FG, we adopted two convolution filters (5×5×6 and 5×5×9) in the convolution layer. In the pooling layer, a 2×2 max-pooling filter was used. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for predicting an upward trend, the other for a downward trend). The activation function for the convolution layer and the hidden layers was ReLU (Rectified Linear Unit), and that for the output layer was Softmax. To validate CNN-FG, we applied it to the prediction of the KOSPI200 over 2,026 days in eight years (2009 to 2016). To match the proportions of the two classes of the dependent variable (i.e. tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset from 80% of the total dataset (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, such as Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), the A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on those graphs can be effective from the perspective of prediction accuracy. Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
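As a rough sketch of steps 2 and 3 above (drawing a 5-day fluctuation graph and converting it into RGB matrices), the following minimal example renders hypothetical indicator series into a 40 × 40 × 3 array. The palette, the naive line-drawing routine, and the two toy indicator series are our own simplifications; the abstract does not specify CNN-FG's actual drawing conventions.

```python
import numpy as np

def fluctuation_graph(window, size=40):
    """Render a 5-day window of indicator series as a size x size RGB image.

    window maps indicator name -> sequence of daily values. Each indicator
    is drawn as a polyline in its own color, mimicking step 2 (graph
    drawing); the returned array holds the R, G, B matrices of step 3.
    """
    img = np.ones((size, size, 3), dtype=np.float32)   # white background
    colors = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]         # hypothetical palette
    for idx, (name, values) in enumerate(sorted(window.items())):
        v = np.asarray(values, dtype=float)
        rng = v.max() - v.min()                        # normalise to [0, 1]
        v = (v - v.min()) / rng if rng > 0 else np.full_like(v, 0.5)
        xs = np.linspace(0, size - 1, len(v)).astype(int)
        ys = ((1 - v) * (size - 1)).astype(int)
        color = colors[idx % len(colors)]
        for x0, y0, x1, y1 in zip(xs[:-1], ys[:-1], xs[1:], ys[1:]):
            # naive interpolation between consecutive daily points
            steps = max(abs(x1 - x0), abs(y1 - y0)) + 1
            for t in np.linspace(0, 1, steps):
                x = int(round(x0 + t * (x1 - x0)))
                y = int(round(y0 + t * (y1 - y0)))
                img[y, x] = color
    return img

window = {"momentum": [1, 3, 2, 5, 4], "roc": [0.2, 0.1, 0.4, 0.3, 0.5]}
img = fluctuation_graph(window)
print(img.shape)  # (40, 40, 3)
```

A CNN binary classifier (not shown) would then be trained on stacks of such arrays, labeled by the next day's market direction.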

Case study of Music & Imagery for Woman with Depression (우울한 내담자를 위한 MI(Music & Imagery) 치료사례)

  • Song, In Ryeong
    • Journal of Music and Human Behavior / v.5 no.1 / pp.67-90 / 2008
  • This case study used MI techniques to give depressed clients an imagery experience of their inner mental resources and to help them verbalize it, guiding those images toward positive change at the supportive level of therapy. MI is short for 'Music and Imagery' and is derived from the psychotherapeutic method GIM (Guided Imagery and Music). With suitable music, it enables clients to enter their inner world and to search, confront, discern, and resolve issues there. Supportive-level MI uses only safe, stable music. An individual session begins by evoking a specific feeling, subject, word, or image, and those images are then guided toward positive experience. The first step of an MI session is a prelude that, like an initial interview, sets a concrete goal. The second step is a transition in which the client can express his or her story more concretely. The third step is induction and music listening: tension-relaxation techniques help the client associate imagery more easily, and the music prompts the search for and association of various images. The last step is processing: drawing the imagery and talking about the personal imagery experience with the therapist, drawing strength from expanding the positive experience. The goals for client A were building rapport (empathy, understanding, and support), searching for positive resources (childhood, family), and giving positive support to the client's emotions. The music used simple tones, repeated melodies, steady rhythms, and organized harmony, chosen from the preferences of both therapist and client. In sessions 1 and 2 the client relied on defense mechanisms and, because of her depression, could not control her emotions, but after session 3 she was able to experience support and understanding. After session 4 she became stable, shifted from negative to positive emotion, and recovered her spontaneity. By session 6, the client recognized that a more positive time lay ahead of her.
For client B, the goals were building rapport (empathy, understanding, and support), exploring issues and positive resources (childhood, family), and expression and insight (present, future). The music in sessions 1 and 2 was comfortable and well organized, but from session 3 its development grew larger and the main melody varied with rises and falls in pitch; classical and Romantic music were used. The client had been avoiding troubled personal relationships by retreating into religious relationships. In sessions 1 and 2 she had a supportive, empathic experience through her favorite, supportive music. After session 3, client B recognized and faced her present issues, though with ambivalence between avoidance and confrontation. After session 4 she experienced emotional change related to her depression and confronted her issues. In sessions 5 and 6 she tried to build the will for a healthy life, a fair attitude, mental strength, and a problem-solving attitude for the future. In this way, the MI program addressed the clients' actual issues more directly than GIM. MI can work on the client's presenting issue without approaching the unconscious as GIM does; it can use a greater variety of music, its listening time is shorter and more structured than GIM's, and it allows the client's emotions to be expressed well. A suitably adapted and supplemented MI program can therefore be used with children, adolescents, and adults.


A Study on Commodity Asset Investment Model Based on Machine Learning Technique (기계학습을 활용한 상품자산 투자모델에 관한 연구)

  • Song, Jin Ho;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems / v.23 no.4 / pp.127-146 / 2017
  • Services using artificial intelligence have begun to appear in daily life. Artificial intelligence is applied to consumer electronics and communications products, such as AI refrigerators and speakers. In the financial sector, Goldman Sachs improved its stock trading process using Kensho's artificial intelligence technology: two stock traders could handle the work of 600, and analytical work that took 15 people 4 weeks could be processed in 5 minutes. In particular, big data analysis through machine learning is actively applied throughout the financial industry. Stock market analysis and investment modeling through machine learning are also actively studied, and the linearity limits of traditional financial time series studies are overcome by machine learning methods such as artificial intelligence prediction models. Quantitative studies based on past stock market numerical data widely use artificial intelligence to forecast future movements of stock prices or indices, and various other studies predict the future direction of the market or of individual stocks by learning from large amounts of text data, such as news and comments related to the stock market. Investment in commodity assets, one class of alternative assets, is usually used to enhance the stability and safety of a traditional stock and bond portfolio, yet there is relatively little research on commodity investment models compared with mainstream assets like equities and bonds. Machine learning techniques have recently been widely applied in finance, especially to stock and bond investment models, yielding better trading models and changing the whole financial area. In this study, we built an investment model using the Support Vector Machine (SVM).
Some research on commodity assets focuses on price prediction for a specific commodity, but research on commodity investment models for asset allocation using machine learning is hard to find. We propose a method of forecasting four major commodity indices, a portfolio of commodity futures, and individual commodity futures using an SVM model. The four major commodity indices are the Goldman Sachs Commodity Index (GSCI), the Dow Jones UBS Commodity Index (DJUI), the Thomson Reuters/Core Commodity CRB Index (TRCI), and the Rogers International Commodity Index (RI). We selected two individual futures from each of three sectors (energy, agriculture, and metals) that are actively traded on the CME market and have enough liquidity: Crude Oil, Natural Gas, Corn, Wheat, Gold, and Silver futures. We formed an equally weighted portfolio of these six commodity futures for comparison with the commodity indices. Because commodity assets are closely related to macroeconomic activity, we set 19 macroeconomic indicators, including stock market indices, export and import trade data, labor market data, and composite leading indicators, as the model's input data: 14 US, two Chinese, and two Korean economic indicators. The data period is from January 1990 to May 2017. We used the first 195 months as training data and the remaining 125 months as test data. We verified that the performance of the equally weighted commodity futures portfolio rebalanced by the SVM model is better than that of the commodity indices. The model's prediction accuracy for the commodity indices does not exceed 50% regardless of the SVM kernel function, whereas the prediction accuracy for the equally weighted commodity futures portfolio is 53%. The prediction accuracy of the individual commodity futures models is better than that of the commodity index models, especially in the agriculture and metal sectors.
The individual commodity futures portfolio excluding the energy sector outperformed the portfolio covering all three sectors. To verify the validity of the model, the analysis results should remain similar despite variations in the data period; we therefore also used odd-numbered-year data for training and even-numbered-year data for testing, and confirmed that the results are similar. As a result, when allocating commodity assets within a traditional portfolio of stocks, bonds, and cash, more effective investment performance can be obtained by investing in commodity futures rather than commodity indices, and especially through a commodity futures portfolio rebalanced by the SVM model.
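A minimal sketch of the kind of SVM direction model described above, using synthetic data in place of the 19 real macroeconomic indicators and the actual portfolio returns. scikit-learn's `SVC` stands in for the paper's SVM implementation, and the 195/125 train/test split mirrors the paper's; everything else here is an illustrative assumption.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical monthly macro indicators (19 features, as in the paper)
# and next-month portfolio direction labels. Real inputs would come from
# the US, Chinese, and Korean economic releases the paper describes.
n_months, n_features = 320, 19
X = rng.normal(size=(n_months, n_features))
true_w = rng.normal(size=n_features)
y = np.where(X @ true_w + 0.1 * rng.normal(size=n_months) > 0, 1, -1)

# First months as training data, later months as test data (paper's split)
X_train, X_test = X[:195], X[195:]
y_train, y_test = y[:195], y[195:]

model = SVC(kernel="rbf", C=1.0, gamma="scale")
model.fit(X_train, y_train)
signal = model.predict(X_test)          # +1: hold the portfolio, -1: stay out
accuracy = (signal == y_test).mean()
print(f"directional accuracy: {accuracy:.2f}")
```

A rebalancing rule would then hold the futures portfolio only in months where the predicted direction is positive.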

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia Pacific Journal of Information Systems / v.21 no.1 / pp.103-122 / 2011
  • Recently, numerous documents have become available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents, and in this situation it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some online documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. A set of keywords is thus often considered a condensed version of the whole document and plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents that could benefit from keywords, including Web pages, email messages, news reports, magazine articles, and business papers, do not have them. Although the potential benefit is large, implementation is the obstacle: manually assigning keywords to all documents is a daunting, even impractical, task, extremely tedious and time-consuming and requiring a certain level of domain knowledge. It is therefore highly desirable to automate the keyword generation process. There are two main approaches to this aim: the keyword assignment approach and the keyword extraction approach. Both use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former, there is a given vocabulary, and the aim is to match its terms to the texts; in other words, keyword assignment seeks to select the words from a controlled vocabulary that best describe a document.
Although this approach is domain dependent and not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, the aim is to extract keywords with respect to their relevance in the text, without a prior vocabulary. Here automatic keyword generation is treated as a classification task, and keywords are commonly extracted using supervised learning techniques: keyword extraction algorithms classify candidate keywords in a document as positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords, and as a result keyword extraction is limited to terms that appear in the document; it cannot generate implicit keywords that are not included in the document. According to Turney's experimental results, about 64% to 90% of author-assigned keywords can be found in the full text of an article. Conversely, this means that 10% to 36% of author-assigned keywords do not appear in the article and cannot be generated by keyword extraction algorithms. Our preliminary experiment also shows that 37% of author-assigned keywords are not included in the full text. This is why we adopt the keyword assignment approach. In this paper, we propose a new approach for automatic keyword assignment, IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets.
The keyword assignment process of IVSM is as follows: (1) calculating the vector length of each keyword set based on each keyword's weight; (2) preprocessing and parsing a target document that does not have keywords; (3) calculating the vector length of the target document based on term frequency; (4) measuring the cosine similarity between each keyword set and the target document; and (5) generating the keywords with high similarity scores. Two keyword generation systems were implemented applying IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. The first is implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers and has been tested on a number of papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. In our experiments, the precision of IVSM applied to the Web-based community service was 0.75 and to academic journals 0.71. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability, and IVSM shows performance comparable to Extractor, a representative keyword-extraction system developed by Turney. As electronic documents increase, we expect the IVSM proposed in this paper to be applicable to many electronic documents in Web-based communities and digital libraries.
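The five-step assignment process can be sketched with a toy corpus. The keyword sets, the document, and the unit keyword weights below are hypothetical simplifications (IVSM's actual keyword weighting and parsing are not reproduced here).

```python
import numpy as np
from collections import Counter

def cosine(u, v):
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    return float(u @ v / (nu * nv)) if nu and nv else 0.0

def assign_keywords(doc_text, keyword_sets, top_k=2):
    """Steps (2)-(5): vectorise the target document by term frequency and
    rank keyword sets by cosine similarity to it."""
    vocab = sorted({w for ws in keyword_sets.values() for w in ws}
                   | set(doc_text.split()))
    index = {w: i for i, w in enumerate(vocab)}
    doc_vec = np.zeros(len(vocab))
    for w, c in Counter(doc_text.split()).items():
        doc_vec[index[w]] = c                  # step (3): term frequency
    scores = {}
    for name, words in keyword_sets.items():
        kv = np.zeros(len(vocab))
        for w in words:
            kv[index[w]] = 1.0                 # step (1): unit weights (toy)
        scores[name] = cosine(doc_vec, kv)     # step (4)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]  # step (5)

keyword_sets = {
    "logistics": ["port", "shipping", "cargo"],
    "retail": ["store", "customer", "sales"],
}
doc = "the port handled record cargo volumes as shipping demand rose"
print(assign_keywords(doc, keyword_sets, top_k=1))  # ['logistics']
```

Note that the returned keyword set need not share every word with the document, which is how the assignment approach can yield implicit keywords.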

A Study on Differences of Contents and Tones of Arguments among Newspapers Using Text Mining Analysis (텍스트 마이닝을 활용한 신문사에 따른 내용 및 논조 차이점 분석)

  • Kam, Miah;Song, Min
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.53-77 / 2012
  • This study analyzes the differences in content and tone of argument among three major Korean newspapers: the Kyunghyang Shinmun, the Hankyoreh, and the Dong-A Ilbo. It is commonly accepted that newspapers in Korea deliver explicit tones of argument when they cover sensitive issues and topics. This can be problematic if readers absorb the news without being aware of each paper's tone of argument, because content and tone can easily influence readers. It is therefore desirable to have a tool that informs readers of a newspaper's tone of argument. This study presents the results of clustering and classification techniques as part of a text mining analysis. We focus on six main sections (Culture, Politics, International, Editorial-opinion, Eco-business, and National issues) and attempt to identify differences and similarities among the newspapers. The basic unit of analysis is a paragraph of a news article. The study uses a keyword-network analysis tool and visualizes relationships among keywords to make the differences easier to see. Newspaper articles were gathered from KINDS, the Korean Integrated News Database System, which preserves and makes publicly available articles from the Kyunghyang Shinmun, the Hankyoreh, and the Dong-A Ilbo. About 3,030 articles from 2008 to 2012 were used. The International, National issues, and Politics sections were gathered around specific issues: the International section with the keyword 'Nuclear weapon of North Korea,' the National issues section with the keyword '4-major-river,' and the Politics section with the keyword 'Tonghap-Jinbo Dang.' All articles from April 2012 to May 2012 in the Eco-business, Culture, and Editorial-opinion sections were also collected.
All collected data were edited into paragraphs. We removed stop words using the Lucene Korean Module, calculated keyword co-occurrence counts from the paired co-occurrence list of keywords in each paragraph, and built a co-occurrence matrix from the list. Once the co-occurrence matrix was built, we used the cosine coefficient matrix as input for PFNet (Pathfinder Network). To analyze the three newspapers and find the significant keywords in each, we examined the 10 highest-frequency keywords and the keyword networks of the 20 highest-frequency keywords, in order to closely examine their relationships and draw a detailed network map. We used NodeXL to visualize the PFNet. After drawing all the networks, we compared the results with the classification results. Classification was first performed to identify how each newspaper's tone of argument differs from the others'. To analyze tones of argument, all paragraphs were divided into two classes, positive tone and negative tone, and a supervised learning technique was used to classify them: the Naïve Bayes classifier provided in the MALLET package. After classification, precision, recall, and F-measure were used to evaluate the results. Based on the results of this study, three sections (Culture, Eco-business, and Politics) showed differences in content and tone of argument among the three newspapers. In addition, for National issues, the tones of argument on the 4-major-rivers project differed from each other. The three newspapers appear to have their own specific tones of argument in those sections, and the keyword networks for the same period and the same section took different shapes in each paper.
This means that the keywords appearing frequently in their articles differ, and their contents are composed of different keywords. The positive-negative classification also showed the possibility of classifying newspapers' tones of argument relative to one another. These results indicate that the approach of this study is promising as a new tool to identify the different tones of argument of newspapers.
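The co-occurrence-to-cosine pipeline described above can be sketched as follows, with toy keyword paragraphs standing in for the KINDS articles (the PFNet pruning and NodeXL visualization steps are omitted).

```python
import numpy as np
from itertools import combinations

def cooccurrence_cosine(paragraphs):
    """Build a keyword co-occurrence matrix from paragraphs (lists of
    keywords) and convert it to the cosine coefficient matrix used as
    PFNet input."""
    vocab = sorted({w for p in paragraphs for w in p})
    idx = {w: i for i, w in enumerate(vocab)}
    co = np.zeros((len(vocab), len(vocab)))
    for p in paragraphs:
        for a, b in combinations(sorted(set(p)), 2):
            co[idx[a], idx[b]] += 1          # paired co-occurrence counts
            co[idx[b], idx[a]] += 1
        for w in set(p):
            co[idx[w], idx[w]] += 1          # diagonal: occurrence counts
    diag = np.sqrt(np.diag(co))
    cosine = co / np.outer(diag, diag)       # cosine coefficient matrix
    return vocab, cosine

paras = [["river", "project", "budget"],
         ["river", "project", "protest"],
         ["election", "party"]]
vocab, M = cooccurrence_cosine(paras)
```

Here "river" and "project" always co-occur, so their cosine coefficient is 1; PFNet would then prune the weaker links to leave the salient keyword network.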

Impact of Semantic Characteristics on Perceived Helpfulness of Online Reviews (온라인 상품평의 내용적 특성이 소비자의 인지된 유용성에 미치는 영향)

  • Park, Yoon-Joo;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.29-44 / 2017
  • In Internet commerce, consumers are heavily influenced by product reviews written by other users who have already purchased the product. However, as product reviews accumulate, it takes considerable time and effort for consumers to check the massive number of reviews individually, and carelessly written reviews actively inconvenience them. Thus many online vendors provide mechanisms to identify the reviews that customers perceive as most helpful (Cao et al. 2011; Mudambi and Schuff 2010). For example, some online retailers, such as Amazon.com and TripAdvisor, allow users to rate the helpfulness of each review and use this feedback to rank and re-order the reviews. However, many reviews receive only a few feedbacks, or none at all, making it hard to identify their helpfulness; it also takes time to accumulate feedback, so newly authored reviews do not have enough of it. For example, only 20% of the reviews in the Amazon Review Dataset (McAuley and Leskovec, 2013) have more than five feedbacks (Yan et al., 2014). The purpose of this study is to analyze the factors affecting the usefulness of online product reviews and to derive a forecasting model that selectively provides product reviews that can be helpful to consumers. To do this, we extracted the various linguistic, psychological, and perceptual elements included in product reviews using text mining techniques and identified the determinants among these elements that affect the usability of product reviews. In particular, since the characteristics of product reviews and the determinants of usability can differ between apparel products (experiential goods) and electronic products (search goods), we compared the characteristics of the reviews within each product group and established the determinants for each. This study used 7,498 apparel product reviews and 106,962 electronic product reviews from Amazon.com.
To understand a review text, we first extract linguistic and psychological characteristics, such as word count and the levels of emotional tone and analytical thinking embedded in the text, using the widely adopted text analysis software LIWC (Linguistic Inquiry and Word Count). We then explore the descriptive statistics of the review texts for each category and statistically compare their differences using t-tests. Lastly, we performed regression analysis using the data mining software RapidMiner to find the determinant factors. Comparing the review characteristics of electronic and apparel products, we found that reviewers used more words, and longer sentences, when writing reviews of electronic products. As for content characteristics, electronic product reviews included more analytic words, carried more clout, related more to cognitive processes (CogProc), and included more words expressing negative emotions (NegEmo) than apparel product reviews. Apparel product reviews, on the other hand, included more personal, authentic, positive emotions (PosEmo) and perceptual processes (Percept). Next, we analyzed the determinants of review usefulness for the two product groups. In both groups, reviews perceived as useful carried high product ratings from their reviewers and contained a larger number of total words, many expressions involving perceptual processes, and fewer negative emotions. In addition, apparel product reviews with many comparative expressions, a low expertise index, and concise content with fewer words per sentence were perceived as useful.
In the case of electronic product reviews, those that were analytical, with a high expertise index, and that contained many authentic expressions, cognitive processes, and positive emotions (PosEmo) were perceived as useful. These findings are expected to help consumers effectively identify useful product reviews in the future.
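A minimal sketch of the regression step, with synthetic stand-ins for the LIWC-style features and helpfulness scores. The study used RapidMiner; plain least squares with assumed effect sizes is shown here purely for illustration.

```python
import numpy as np

# Hypothetical LIWC-style features per review: word count, positive emotion,
# negative emotion, perceptual process, and analytic scores; the target is a
# helpfulness measure. All values below are synthetic.
rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 5))
beta_true = np.array([0.4, 0.2, -0.3, 0.25, 0.1])   # assumed effects
y = X @ beta_true + 0.1 * rng.normal(size=n)

# Ordinary least squares: estimate which characteristics predict helpfulness
X1 = np.column_stack([np.ones(n), X])   # add an intercept column
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
print(np.round(beta[1:], 2))
```

The signs of the fitted coefficients correspond to the kind of findings reported above (e.g. negative emotion lowering perceived helpfulness).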

Fast Join Mechanism that considers the switching of the tree in Overlay Multicast (오버레이 멀티캐스팅에서 트리의 스위칭을 고려한 빠른 멤버 가입 방안에 관한 연구)

  • Cho, Sung-Yean;Rho, Kyung-Taeg;Park, Myong-Soon
    • The KIPS Transactions: Part C / v.10C no.5 / pp.625-634 / 2003
  • More than a decade after its initial proposal, deployment of IP Multicast remains limited due to problems such as traffic control in multicast routing, multicast address allocation on the global Internet, and reliable multicast transport. Recently, with the growth of multicast application services such as Internet broadcasting and real-time security information services, overlay multicast has been developed as a new Internet multicast technology. In this paper, we describe an overlay multicast protocol and propose a fast join mechanism that considers switching of the tree. To find a potential parent, the existing search algorithm descends the tree from the root one level at a time, which causes long joining latency. It also tries to select the nearest node as the potential parent, but cannot always do so because of each node's degree limit; as a result, the generated tree has low efficiency. To reduce the long joining latency and improve the efficiency of the tree, we propose searching two levels of the tree at a time. In this method, a node forwards the joining request message to its own children, so in the steady state there is no extra overhead to maintain the tree; when a join request arrives, the increased number of search messages reduces the joining latency, and searching more nodes helps construct more efficient trees. To evaluate the performance of our fast join mechanism, we measure metrics such as the search latency, the number of searched nodes, and the number of switches as functions of the number of members and the degree limit. The simulation results show that the performance of our mechanism is superior to that of the existing mechanism.
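A toy sketch of the two-level parent search under a degree limit. The node structure, the distance metric (a real overlay would measure round-trip time), and the termination rule here are our own simplifications of the mechanism described above.

```python
class Node:
    def __init__(self, name, degree_limit=3):
        self.name = name
        self.degree_limit = degree_limit
        self.children = []

def distance(a, b):
    # Placeholder network-distance metric (assumption); a real overlay
    # would measure RTT between the two hosts.
    return abs(sum(map(ord, a.name)) - sum(map(ord, b.name)))

def find_parent(root, joiner):
    """Search two levels of the tree at a time for a potential parent:
    consider the current node, its children, and its grandchildren, and
    pick the closest candidate whose degree limit is not yet reached."""
    current, rounds = root, 0
    while True:
        candidates = [current]
        for child in current.children:
            candidates.append(child)
            candidates.extend(child.children)
        eligible = [n for n in candidates if len(n.children) < n.degree_limit]
        best = min(eligible, key=lambda n: distance(n, joiner))
        rounds += 1
        if best is current or not best.children:
            return best, rounds        # parent with spare degree found
        current = best                 # descend and search the next two levels

# Tiny example: the root is full (degree limit 1), so the join request is
# forwarded and the root's child becomes the parent after one round.
root = Node("r", degree_limit=1)
c1 = Node("c1")
root.children.append(c1)
parent, rounds = find_parent(root, Node("newbie"))
print(parent.name, rounds)
```

Because each round covers two levels instead of one, the number of rounds (and hence the joining latency) is roughly halved relative to the one-level-at-a-time search.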

Study on Influencing Factors of Traffic Accidents in Urban Tunnel Using Quantification Theory (In Busan Metropolitan City) (수량화 이론을 이용한 도시부 터널 내 교통사고 영향요인에 관한 연구 - 부산광역시를 중심으로 -)

  • Lim, Chang Sik;Choi, Yang Won
    • KSCE Journal of Civil and Environmental Engineering Research / v.35 no.1 / pp.173-185 / 2015
  • This study investigates the characteristics and types of traffic accidents and establishes a prediction model by statistically analyzing 456 accidents that occurred in 11 tunnels in Busan. The results can be summarized as follows. Analysis of accident characteristics showed that 64.9% of all tunnel accidents occurred between 08:00 and 18:00, higher than the 45.8 to 46.1% observed on ordinary roads. Analysis of accident types showed that car-to-car accidents were the majority, and the proportion of single-vehicle accidents in tunnels was relatively high compared with ordinary roads. People aged 21 to 40 were most often involved in accidents; among the vehicle types of the first party, trucks accounted for a high proportion; and in terms of cloud cover, rainy or cloudy days accounted for a high proportion, unlike clear days. Principal component analysis of the accident influence factors identified the first principal component as road, tunnel structure, and traffic flow-related factors; the second as lighting facility and road structure-related factors; the third as stand-by and lighting facility-related factors; the fourth as human and time series-related factors; the fifth as human-related factors; the sixth as vehicle and traffic flow-related factors; and the seventh as meteorological factors.
Classifying the accident spots yielded five optimized groups. Analyzing each group with Quantification Theory Type I showed that the prediction model had low explanatory power for the first group, medium explanatory power for the fourth group, and high explanatory power for the second, third, and fifth groups. Among the items (principal components) whose partial correlation coefficients exceeded 0.2 in absolute value (a weak correlation), this study analyzed variables including road environment variables. As a result, the main examination items were summarized as appropriate traffic flow handling, cross-section composition (road width), tunnel structure (tunnel length), road alignment, ventilation facilities, and lighting facilities.
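Quantification Theory Type I amounts to multiple linear regression on dummy-coded categorical items, with each category receiving a fitted score. A minimal sketch under assumed, illustrative variables (the item names and data below are invented for demonstration, not the study's dataset):

```python
import numpy as np

def dummy_code(column, categories):
    """One-hot encode a categorical item, dropping the first
    category as the baseline to avoid collinearity."""
    return np.array([[1.0 if v == c else 0.0 for c in categories[1:]]
                     for v in column])

# Hypothetical items: road alignment and tunnel length class per accident spot.
alignment = ["straight", "curved", "curved", "straight", "curved", "straight"]
length    = ["short", "long", "short", "long", "long", "short"]
accidents = np.array([2.0, 5.0, 4.0, 3.0, 6.0, 1.0])  # outcome per spot

X = np.hstack([
    np.ones((len(accidents), 1)),                    # intercept term
    dummy_code(alignment, ["straight", "curved"]),
    dummy_code(length, ["short", "long"]),
])
# Category scores (weights) estimated by ordinary least squares.
scores, *_ = np.linalg.lstsq(X, accidents, rcond=None)
predicted = X @ scores
```

The fitted scores play the role of the quantified category effects; in the study, partial correlation coefficients of the items would then be inspected to rank their influence on the prediction.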

A Study on Design Techniques of Palace Gardens presented in Donggwoldo (동궐도에 보이는 궁궐정원의 조영수법)

  • Chin, Sang-Chul
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.33 no.4
    • /
    • pp.26-37
    • /
    • 2015
  • This paper reviews all landscaping elements in Donggwoldo building by building, identifies the characteristics and methods of palace garden landscaping, and thereby explores landscaping methods applicable to modern Korean gardens through the succession of tradition. The methodology was to observe the palace gardens depicted in Donggwoldo by garden element and identify their characteristics. The garden elements in Donggwoldo include oddly shaped stones, ponds, buildings and Madang, borders and areas, and trees. Their characteristics were analyzed, with the following results. Location: buildings in Donggwoldo were located in optimal areas within the Myungdang (the best location), with building sites created by positively transforming the natural topography according to the existing terrain and intended uses. Tree planting: construction of the buildings made use of the existing trees. There was no specific principle or method of planting trees, and no specific criteria for choosing species. Symmetrical planting was adopted, which is considered to embrace the viewpoint of making gardens based on the expression of Yin and Yang. Strongly symbolic species were also adopted. Bangji: it takes a nearly circular shape in the palace gardens, and such shapes represent conceptual and abstract symbols; Bangji were also frequently used as places of public entertainment. Pavilions: they did not take a single standard shape but appeared in diverse forms, including triangle, square, pentagon, hexagon, octagon, and cross. Oddly shaped stones: oddly shaped stones and stone cases were deployed mainly near the bedroom and the crown prince's residence and in the rear garden. Hwagye: it appeared mainly behind the bedroom, the crown prince's residence, the princess's residence, and other women's quarters.
Chwibyeong: it was installed to draw in natural energy, like a natural inlet, rather than to serve as a nature-dividing wall. Korea's garden composition method differed greatly from the Western and Chinese methods. Overall, the Chosun palace garden style was characterized by strict, Confucian features, while the garden construction method adopted Taoist thought. Yet the gardens also had a carefree aspect.

Meteorological Constraints and Countermeasures in Major Summer Crop Production (하작물의 기상재해와 그 대책)

  • Shin-Han Kwon;Hong-Suk Lee;Eun-Hui Hong
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.27 no.4
    • /
    • pp.398-410
    • /
    • 1982
  • Summer crops grown in uplands in Korea are greatly diversified and show large variation by year and location. The principal factor behind this variation is weather, in which precipitation and temperature play the leading roles, while factors such as wind and sunlight also influence summer crop production. Since artificial control of weather conditions, the main stress factor for crop production, is almost impossible, their effects can be reduced only through improved cultivation techniques and crop improvement. Precipitation is one of the most important factors for summer crop production and must be considered in two aspects: drought and excess moisture. Korea, which belongs to the monsoon region, necessarily encounters one of these stresses almost every year, though the severity varies. Facilities against both drought and excess moisture are therefore required, but in practice they are not easy to provide completely. On this account, crops tolerant to drought, excess moisture, and pests should be considered when establishing summer crops. In districts damaged habitually every season, suitable crops should be cultivated and appropriate methods of planting, drainage, and weed control applied. Temperature injury is mainly attributable to low temperature, particularly in late fall and early spring, although high temperature often causes damage depending on the crop. Low temperature in the summer season can also play a critical role in yield reduction of summer crops. However, certain crops can be protected to some extent from this kind of stress by breeding varieties tolerant to cold or hot weather, or early-maturing varieties. As is often the case, adjusting the planting or harvesting time can also be a good management practice for escaping the stress.
Lodging, plant diseases, and pests are considered direct or indirect damage due to weather stress, but these are traits that can be overcome through crop improvement and controlled by other suitable methods. In addition, policy support capable of modernizing the structure of agriculture is urgently required, through the compilation of damage data and the establishment of damage forecasting and compensation systems.
