• Title/Summary/Keyword: 가치 (value)

Search Results: 19,062

A Study on the Meaning and Future of the Moon Treaty (달조약의 의미와 전망에 관한 연구)

  • Kim, Han-Taek
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.21 no.1
    • /
    • pp.215-236
    • /
    • 2006
  • This article focused on the meaning of the 1979 Moon Treaty and its future. Although the Moon Treaty is one of the five major space-related treaties, it has been accepted by only 11 member states, all of them non-space powers, and thus has the least influence in the field of space law. The article analysed the relationship between the 1979 Moon Treaty and the 1967 Space Treaty, which was the first treaty of principles, and examined the meaning of the "Common Heritage of Mankind" (hereinafter CHM) stipulated in the Moon Treaty in terms of international law. It also dealt with the present and future problems arising from the Moon Treaty. As far as the 1967 Space Treaty is concerned, the main standpoint is that outer space, including the moon and the other celestial bodies, is res extra commercium, an area not subject to national appropriation, like the high seas. It proclaims the principle of non-appropriation concerning the celestial bodies in outer space. The concept of CHM stipulated in the Moon Treaty, however, created an entirely new category of territory in international law. This concept basically conveys the idea that the management, exploitation and distribution of the natural resources of the area in question are matters to be decided by the international community and are not to be left to the initiative and discretion of individual states or their nationals. A similar provision is found in the 1982 Law of the Sea Convention, which operates the International Sea-bed Authority created under the concept of CHM. According to the Moon Treaty, an international regime will be established as the exploitation of the natural resources of celestial bodies other than the Earth is about to become feasible. Before the establishment of such a regime one could imagine a moratorium upon the exploitation of the natural resources of the celestial bodies, but the drafting history of the Moon Treaty indicates that no moratorium on the exploitation of natural resources was intended prior to the setting up of the international regime. Thus each State Party may exploit the natural resources, bearing in mind that those resources are the CHM. In this respect it would be better for Korea, which is not now a party to the Moon Treaty, to become a member state in the near future. According to the Moon Treaty, the efforts of those countries which have contributed either directly or indirectly to the exploration of the moon shall be given special consideration. The Moon Treaty, although criticised by some space law experts, represents a solid basis upon which further space exploration can continue; it expresses the common collective wisdom of all member states of the United Nations and responds to the needs and possibilities of those that have already sent their technologies into outer space.


Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.19-43
    • /
    • 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of such findings as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, which points out that "the trivial many" produce more value than "the vital few," has gained popularity in recent times with the tremendous reduction of distribution and inventory costs brought about by the development of ICT (Information and Communication Technology). This study set out to illuminate how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization, transcending geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and to resolve the social dilemma caused by the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge but also by transforming and integrating it. From this perspective, the relative distribution of knowledge sharing among participants can count as much as the absolute amount of individual knowledge sharing. In particular, whether a greater contribution by the upper 20 percent of participants enhances the efficiency of overall knowledge collaboration is an issue of interest. This study deals with the effect of this distribution of knowledge sharing on the efficiency of knowledge collaboration and is extended to reflect work characteristics. All analyses were conducted on actual behavioral data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, the highest quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions made by the upper 20 percent of participants to the total number of knowledge contributions made by all participants of an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income within a group of people, was applied to capture the inequality of knowledge contribution. Hypotheses were set up on the assumption that a higher ratio of knowledge contribution by more highly motivated participants leads to higher collaboration efficiency, but that if the ratio gets too high, collaboration efficiency deteriorates because overall informational diversity is threatened and the knowledge contribution of less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables, the Pareto ratio and the Gini coefficient, with seven control variables such as the number of editors involved in an article, the average time between successive edits of an article, and the number of sections a featured article has. The dependent variable of the Cox models is the time from article initiation to promotion to featured-article status, indicating the efficiency of knowledge collaboration.
To examine whether the effects of the focal variables vary with the characteristics of a group task, we classified the 2,978 featured articles into two categories: academic and non-academic. Academic articles are those that cite at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal. We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, this curvilinear effect is more pronounced for more academic tasks in an online community. An illustrative sketch of the focal variables and the Cox model is given below.
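The following is a minimal sketch, under stated assumptions, of how the two focal variables could be computed from per-editor contribution counts and fed into a Cox proportional-hazards model of time to promotion. It uses the third-party `lifelines` package, synthetic placeholder data, and hypothetical column names such as `days_to_promotion`; it is not the study's code.

```python
# Minimal sketch (not the study's code): Pareto ratio and Gini coefficient of
# per-editor contribution counts, plus a Cox proportional-hazards model of
# time-to-promotion. All data and column names below are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # third-party survival-analysis package

def pareto_ratio(edit_counts):
    """Share of all contributions made by the top 20% of editors."""
    counts = np.sort(np.asarray(edit_counts, dtype=float))[::-1]
    top_n = max(1, int(np.ceil(0.2 * len(counts))))
    return counts[:top_n].sum() / counts.sum()

def gini(edit_counts):
    """Gini coefficient of the contribution distribution (0 = perfect equality)."""
    counts = np.sort(np.asarray(edit_counts, dtype=float))
    n = len(counts)
    lorenz = np.cumsum(counts) / counts.sum()
    return (n + 1 - 2 * lorenz.sum()) / n

# Edits per editor for one hypothetical featured article.
example_edits = [120, 45, 30, 8, 5, 3, 2, 2, 1, 1]
print(pareto_ratio(example_edits), gini(example_edits))

# One synthetic row per featured article; every article eventually promoted.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "pareto_ratio": rng.uniform(0.4, 1.0, n),
    "gini": rng.uniform(0.2, 0.9, n),
    "n_editors": rng.integers(5, 300, n).astype(float),   # stand-in control variable
    "days_to_promotion": rng.exponential(300, n) + 1.0,
    "promoted": np.ones(n, dtype=int),
})
# Centered squared terms allow the hypothesized curvilinear (inverted-U) effect.
df["pareto_sq"] = (df["pareto_ratio"] - df["pareto_ratio"].mean()) ** 2
df["gini_sq"] = (df["gini"] - df["gini"].mean()) ** 2

cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_promotion", event_col="promoted")
cph.print_summary()
```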

Derivation of Digital Music's Ranking Change Through Time Series Clustering (시계열 군집분석을 통한 디지털 음원의 순위 변화 패턴 분류)

  • Yoo, In-Jin;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.171-191
    • /
    • 2020
  • This study focused on digital music, which is the most valuable cultural asset in modern society and occupies a particularly important position in the flow of the Korean Wave. Digital music data were collected from the "Gaon Chart," a well-established music chart in Korea, and the ranking changes of the songs that entered the chart over 73 weeks were recorded. Patterns with similar characteristics were then derived through time series cluster analysis, and a descriptive analysis was performed on the notable features of each pattern. The research process suggested by this study is as follows. First, in the data collection process, time series data were collected to trace the ranking changes of digital music. Subsequently, in the data processing stage, the collected data were matched with the rankings over time, and the song titles and artist names were processed. The analysis was then performed sequentially in two stages, an exploratory analysis and an explanatory analysis. The data collection period was limited to the period before the 'music bulk buying' phenomenon, a reliability issue related to music rankings in Korea; specifically, it covers 73 weeks, from the first week (December 31, 2017 to January 6, 2018) to the last week (May 19, 2019 to May 25, 2019), and the analysis targets were limited to digital music released in Korea. Unlike the private music charts serviced in Korea, the Gaon Chart is approved by a government agency and has basic reliability, so it can be considered to command more public confidence than the ranking information provided by other services. The contents of the collected data are as follows: for each song that entered the top 100 on the chart within the collection period, the period and ranking, song title, artist name, album name, Gaon index, production company, and distribution company were collected. Through this process, 7,300 chart entries in the top 100 were identified over the 73 weeks. Since songs frequently remain on the chart for two or more weeks, duplicates were removed in a pre-processing step: duplicated songs were identified and located through a duplicate check and then deleted, yielding 742 unique songs for analysis out of the 7,300 chart entries. A total of 16 patterns were then derived through time series cluster analysis of the ranking changes. Based on these patterns, two representative patterns were identified: 'Steady Seller' and 'One-Hit Wonder'. Furthermore, the two patterns were subdivided into five patterns in consideration of the survival period of the songs and their rankings. The important characteristics of each pattern are as follows. First, the artist's superstar effect and the bandwagon effect were strong in the one-hit wonder pattern; when consumers choose digital music, they are strongly influenced by these two effects.
Second, the Steady Seller pattern captured songs that have been chosen by consumers for a very long time, revealing the patterns of the most selected songs in terms of consumer needs. Contrary to popular belief, it was the steady seller (mid-term) pattern, not the one-hit wonder pattern, that received the most choices from consumers. Particularly noteworthy is that the 'climbing the chart' phenomenon, which runs contrary to the existing patterns, was confirmed through the steady seller pattern. This study focuses on changes in music rankings over time, an area that has received relatively little attention, centering on digital music. In addition, a new approach to music research was attempted by subdividing the patterns of ranking change rather than predicting the success or ranking of songs. An illustrative sketch of clustering such rank trajectories is given below.
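As a rough illustration of the clustering step, the sketch below pads hypothetical weekly rank trajectories to the 73-week window (using 101 to mean "off the Top 100") and groups them with agglomerative clustering from SciPy. The trajectories, cluster count, and distance measure are assumptions for the example; the study's own algorithm and data are not reproduced here.

```python
# Hedged sketch: clustering weekly rank trajectories into patterns such as
# "steady seller" vs. "one-hit wonder". Toy data, not the Gaon Chart dataset.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

WEEKS, OFF_CHART = 73, 101

def pad_trajectory(ranks):
    """Pad a song's weekly Top-100 ranks to the full 73-week window."""
    padded = np.full(WEEKS, OFF_CHART, dtype=float)
    n = min(len(ranks), WEEKS)
    padded[:n] = np.asarray(ranks, dtype=float)[:n]
    return padded

# Hypothetical week-by-week chart positions for a few songs.
songs = {
    "one_hit":     [3, 1, 2, 8, 25, 60, 95],
    "steady_mid":  list(range(40, 10, -1)) + [12] * 20,
    "climbing":    list(range(90, 5, -5)),
    "short_lived": [70, 85, 99],
}
X = np.vstack([pad_trajectory(r) for r in songs.values()])

Z = linkage(X, method="ward")            # agglomerative clustering
labels = fcluster(Z, t=2, criterion="maxclust")  # the study derived 16 clusters
for name, label in zip(songs, labels):
    print(f"{name}: cluster {label}")
```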

A Study on the Growth Diagnosis and Management Prescription for Population of Retusa Fringe Trees in Pyeongji-ri, Jinan(Natural Monument No. 214) (진안 평지리 이팝나무군(천연기념물 제214호)의 생육진단 및 관리방안)

  • Rho, Jae-Hyun;Oh, Hyun-Kyung;Han, Sang-Yub;Choi, Yung-Hyun;Son, Hee-Kyung
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.36 no.3
    • /
    • pp.115-127
    • /
    • 2018
  • This study attempted to identify the value of this cultural asset through a clear diagnosis of, and prescription for, the factors behind the death and weakening of the Population of Retusa Fringe Trees in Pyeongji-ri, Jinan (Natural Monument No. 214). The results are as follows. First, many years have passed since the 13 trees were designated as a natural monument in 1968 and since 1973; although some of the covering soil was removed during maintenance work after 2010, such as when the elementary school fence was set back, the decline of the trees has continued. Second, of the surviving designated trees, trees nos. 1 and 3 have many dead branches, dull foliage, and a small amount of leaves, and their vitality is 'extremely bad'; tree no. 1 has already lost a large number of branches and produced so few leaves this year that only two flower clusters bloomed. Tree no. 2 is also in a 'bad' state, with small leaves, low leaf density, and a deformed crown. For tree no. 1, the largest of the group, there is the added concern that the covering soil is assumed to be paddy soil. Third, the soil was classified as silty loam (SiL), indicating a high silt content. The pH of the soil on the north side of tree no. 1 was 6.6, significantly different from that of the other soil, and the organic matter content was higher than the appropriate range, which is considered to reflect continuous fertilization for protection management. Fourth, the root cause of the death and poor growth of the Jinan Pyeongji-ri Retusa fringe tree group is considered to be a chronic syndrome of serious physiological deterioration due to the covering soil, which can also account for the death of some of the newly planted successor trees. Fifth, it is urgent to gradually remove the covering soil, which is presumed to be the cause of the initial damage. Above all, the condition of the covering soil, such as the clayey soil buried over the root collars, should be investigated in detail and the soil then removed. After removal of the covering soil, the ground should be improved with masato (decomposed granite) and measures installed to improve the respiration of the roots. Removal is the best option for the dead trees nos. 4, 5 and 6, and the lower-layer vegetation should be mown. Dead branches in the crown should be removed, bark wounds should be treated surgically, and pruning below the growing point should be used to induce the development of adventitious sprouts. Sixth, the underground root system should be surveyed to prepare measures for relieving root-zone compaction and improving soil aeration; rotten roots should be traced from around the root collar and cut back to induce the generation of new roots. Seventh, mulching should be applied to suppress weeds, relieve trampling pressure, and retain soil moisture. In addition, consideration should be given to foliar fertilization, nutrient injection, and soil management with inorganic fertilizer for a continuous nutrient supply. A monitoring and forecasting plan should also be developed to check for changes continuously.

Soil Surface Fixation by Direct Sowing of Zoysia japonica with Soil Improvement on the Dredged Soil Slope (해저준설토 사면에서 개량제 처리에 의한 한국들잔디 직파 지표고정 공법에 관한 연구)

  • Jeong, Yong-Ho;Lee, Im-Kyun;Seo, Kyung-Won;Lim, Joo-Hoon;Kim, Jung-Ho;Shin, Moon-Hyun
    • Journal of the Korean Society of Environmental Restoration Technology
    • /
    • v.14 no.4
    • /
    • pp.1-10
    • /
    • 2011
  • This study was conducted to compare the growth of Zoysia japonica under different soil treatments on the Saemangeum sea dike, which is filled with dredged soil. Zoysia japonica was planted using the sod-pitching method on the control plots, while on the plots treated with forest soil and soil improvement agent, Zoysia japonica seeds were sprayed mechanically. Sixteen months after planting, coverage rate, leaf length, leaf width, and root length were measured and analyzed, and three Zoysia japonica samples per plot were collected to analyze nutrient contents. The coverage rate was 100% in the B treatment plots (dredged soil + 40 kg/m³ soil improvement + forest soil), the C treatment plots (dredged soil + 60 kg/m³ soil improvement + forest soil), and the D treatment plots (dredged soil + 60 kg/m³ soil improvement), while only 43% of the soil surface was covered with Zoysia japonica on the control plots. Leaf width was highest in the C treatment plots (3.79 mm), followed by the D (3.49 mm), B (2.40 mm) and control plots (1.97 mm). The leaf and root lengths of the D treatment were 30.18 cm and 13.18 cm, the highest among the treatments; leaf length was highest in D followed by C, B, and A, and root length was highest in D followed by C, A, and B. The nitrogen and phosphate contents of the above-ground part of Zoysia japonica were highest in the C treatment, followed by D, B, and A, while those of the below-ground part were highest in the D treatment, followed by C, A, and B. The C and D treatments showed the best results in every aspect of grass growth. The results of this study could be used to identify a cost-effective way to improve soil quality for soil surface fixation in reclaimed areas using grass species.

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. One major drawback, however, is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). SVM in particular is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories; it is simple enough to be analyzed mathematically and achieves high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound of the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have shown that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for binary-class classification problems; methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class problems as much as SVM does for binary classification. Second, approximation algorithms (e.g. decomposition methods, the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class problems, but they may deteriorate classification performance. Third, a key difficulty in multi-class prediction is the data imbalance problem that occurs when the number of instances in one class greatly outnumbers that in another; such data sets often yield a default classifier with a skewed boundary and thus reduced classification accuracy. SVM ensemble learning is one machine learning approach to coping with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the most widely used ensemble learning techniques: it constructs a composite classifier by sequentially training classifiers while increasing the weights on misclassified observations through iterations, so that observations incorrectly predicted by previous classifiers are chosen more often than correctly predicted ones.
Boosting thus attempts to produce new classifiers that are better able to predict examples for which the current ensemble's performance is poor, and in this way it can reinforce the training of the misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can take account of geometric mean-based accuracy and errors across classes. The study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross-validation was performed three times with different random seeds to ensure that the comparison among the three classifiers did not happen by chance. For each ten-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine sets; that is, the cross-validated folds were tested independently for each algorithm. Through these steps, results were obtained for the classifiers on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test was used to examine whether the performance of each classifier over the 30 folds differed significantly; the results indicate that the performance of MGM-Boost differs significantly from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating. A hedged sketch of an SVM-based boosting ensemble evaluated with a geometric-mean metric is given below.
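The following is a rough sketch of the ingredients involved: a plain AdaBoost ensemble with SVM base learners and a geometric-mean accuracy measure, evaluated with 10-fold cross-validation on synthetic data. It is not an implementation of the MGM-Boost algorithm proposed in the paper, and the data and parameter choices are assumptions for illustration only.

```python
# Hedged sketch: plain AdaBoost over SVM base learners plus a geometric-mean
# accuracy metric (NOT the paper's MGM-Boost). Synthetic placeholder data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

def geometric_mean_accuracy(y_true, y_pred):
    """Geometric mean of per-class recalls; penalizes minority-class errors."""
    cm = confusion_matrix(y_true, y_pred)
    recalls = cm.diagonal() / cm.sum(axis=1)
    return float(np.prod(recalls) ** (1.0 / len(recalls)))

# Hypothetical multi-class "bond rating" data (4 rating classes, 10 ratios).
X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)

ensemble = AdaBoostClassifier(
    estimator=SVC(kernel="rbf", probability=True),  # SVM as the base learner
    n_estimators=25,
    random_state=42,
)  # `estimator=` assumes a recent scikit-learn; older versions use `base_estimator=`

y_pred = cross_val_predict(ensemble, X, y, cv=10)   # 10-fold cross-validation
print("arithmetic accuracy    :", (y_pred == y).mean())
print("geometric-mean accuracy:", geometric_mean_accuracy(y, y_pred))
```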

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.77-97
    • /
    • 2010
  • Market timing is an investment strategy used to obtain excess returns from the financial market. In general, detecting market timing means determining when to buy and sell in order to earn excess returns from trading. In many market timing systems, trading rules have been used as an engine to generate trade signals. Some researchers have proposed rough set analysis as a proper tool for market timing because, through its control function, it does not generate a trade signal when the pattern of the market is uncertain. Numeric data must be discretized for rough set analysis because rough sets accept only categorical data. Discretization searches for proper "cuts" in the numeric data that determine intervals; all values that lie within an interval are transformed into the same value. In general, there are four methods for data discretization in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes the number of intervals, examines the histogram of each variable, and then determines the cuts so that approximately the same number of samples falls into each interval. Expert's knowledge-based discretization determines the cuts according to the knowledge of domain experts, obtained through literature review or expert interviews. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization finds categorical values by naïve scaling of the data and then finds the optimized discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on how the various data discretization methods affect trading performance when rough set analysis is used. In this study, we compare stock market timing models using rough set analysis with various data discretization methods. The research data are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market; it is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and their status in the corresponding industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning-based discretization, but expert's knowledge-based discretization is the most profitable method for the validation sample; moreover, expert's knowledge-based discretization produced robust performance on both the training and validation samples. We also compared rough set analysis with a decision tree, using C4.5 for the comparison. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5. A minimal sketch of equal frequency scaling, one of the four discretization methods, is given below.
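As a minimal sketch of the first discretization method named above, equal frequency scaling can be implemented as quantile-based cut finding. The indicator series below is a synthetic placeholder, not the KOSPI 200 data, and the number of intervals is an arbitrary choice for illustration.

```python
# Minimal sketch: equal frequency scaling (quantile-based cuts) for one
# numeric technical indicator. Synthetic data, not the study's KOSPI 200 series.
import numpy as np
import pandas as pd

def equal_frequency_cuts(values, n_intervals):
    """Cut points so roughly the same number of samples falls in each interval."""
    inner_quantiles = np.linspace(0.0, 1.0, n_intervals + 1)[1:-1]
    return np.quantile(values, inner_quantiles)

# e.g. 660 trading days of a hypothetical momentum-style indicator
rng = np.random.default_rng(0)
indicator = pd.Series(rng.normal(size=660).cumsum())

cuts = equal_frequency_cuts(indicator, n_intervals=4)
codes = np.digitize(indicator, cuts)                  # categorical codes 0..3
print("cuts:", np.round(cuts, 3))
print(pd.Series(codes).value_counts().sort_index())   # roughly equal bin sizes
```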

A Study on Recent Research Trend in Management of Technology Using Keywords Network Analysis (키워드 네트워크 분석을 통해 살펴본 기술경영의 최근 연구동향)

  • Kho, Jaechang;Cho, Kuentae;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.101-123
    • /
    • 2013
  • Recently, due to advances in science and information technology, socio-economic and business areas are shifting from an industrial economy to a knowledge economy. Companies need to create new value through continuous innovation, the development of core competencies and technologies, and technological convergence. Therefore, identifying major trends in technology research and making interdisciplinary, knowledge-based predictions of integrated technologies and promising techniques are required for firms to gain and sustain competitive advantage and future growth engines. The aim of this paper is to understand recent research trends in management of technology (MOT) and to foresee promising technologies with deep knowledge of both technology and business. Furthermore, this study intends to provide a clear way to find new technical value for constant innovation and to capture core technologies and technology convergence. Bibliometrics is a metrical analysis for understanding the characteristics of a body of literature. Traditional bibliometrics, which focuses on quantitative indices such as citation frequency, is limited in its ability to relate trends in technology management to the technologies themselves. To overcome this issue, network-focused bibliometrics, which mainly uses "co-citation" and "co-word" analysis, has been used instead. In this study, a keyword network analysis, a form of social network analysis, is performed to analyze recent research trends in MOT. For the analysis, we collected keywords from research papers published in international journals related to MOT between 2002 and 2011, constructed a keyword network, and then conducted the keyword network analysis. Over the past 40 years, studies of social networks have attempted to understand social interactions through the network structure represented by connection patterns; social network analysis has been used to explain the structures and behaviors of various social formations such as teams, organizations, and industries. In general, social network analysis uses data in the form of a matrix. In our context, the matrix depicts the relations between rows (papers) and columns (keywords), where the relations are binary: each cell is 1 if the paper includes the keyword and 0 otherwise. Even though there are no direct relations between published papers, relations between them can be derived from this paper-keyword matrix; for example, a network can be configured by connecting the papers that share one or more keywords. After constructing the keyword network, we analyzed keyword frequency, the structural characteristics of the network, the preferential attachment and growth of new keywords, components, and centrality. The results of this study are as follows. First, a paper has 4.574 keywords on average; 90% of keywords were used three or fewer times over the past 10 years, and about 75% of keywords appeared only once. Second, the keyword network in MOT is a small-world network and a scale-free network in which a small number of keywords tend toward a monopoly. Third, the gap between the rich (nodes with more edges) and the poor (nodes with fewer edges) in the network is growing over time. Fourth, most newly entering keywords become poor nodes within about two to three years.
Finally, the keywords with high degree centrality, betweenness centrality, and closeness centrality are "Innovation," "R&D," "Patent," "Forecast," "Technology transfer," "Technology," and "SME." We hope these results will help MOT researchers identify major trends in technology research, serve as useful reference information when they seek consilience with other fields of study, and guide the selection of new research topics. An illustrative sketch of building a keyword network and computing these centralities is given below.
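The construction and centrality computations described above can be sketched with the `networkx` package. The paper-keyword data below are hypothetical placeholders, and connecting keywords that co-occur in the same paper is only one plausible way to derive a keyword network from a paper-keyword matrix; it is not necessarily the exact construction used in the study.

```python
# Illustrative sketch: build a keyword co-occurrence network from paper-keyword
# pairs and compute the three centralities named above. Toy data, not the study's.
from itertools import combinations
import networkx as nx

papers = {  # hypothetical paper -> keyword lists
    "P1": ["Innovation", "R&D", "Patent"],
    "P2": ["R&D", "Technology transfer", "SME"],
    "P3": ["Innovation", "Forecast", "Technology", "R&D"],
}

# Connect two keywords whenever they appear in the same paper.
G = nx.Graph()
for keywords in papers.values():
    for a, b in combinations(sorted(set(keywords)), 2):
        G.add_edge(a, b)

for name, scores in [
    ("degree", nx.degree_centrality(G)),
    ("betweenness", nx.betweenness_centrality(G)),
    ("closeness", nx.closeness_centrality(G)),
]:
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    print(f"{name:>11} centrality:", ranked[:3])
```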

Yesterday and Today of Twelve Excellent Sceneries at Banbyeoncheon Expressed in Heojoo's Sansuyucheop (허주(虛舟) 산수유첩(山水遺帖)에 표현된 반변천(半邊川) 십이승경(十二勝景)의 어제와 오늘)

  • Kim, Jeong-Moon;Rho, Jae-Hyun
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.30 no.1
    • /
    • pp.90-102
    • /
    • 2012
  • The subject of this study, Sansuyucheop by Heojoobugun(虛舟府君), is a twelve-panel picture album by Lee Jong Ak(李宗岳: 1726-1773), the 11th-generation eldest grandson of the Goseong Lee family and a figure with five passions(五癖): ancient documents(古書癖), playing the gayageum(彈琴癖), flowering plants(花卉癖), paintings and calligraphy(書畵癖), and boating(舟遊癖). For five days, April 4 through 8, 1763, he boated with 18 relatives and in-laws from his own home, his old family home, his mother's family home, and his wife's family home, starting from Imcheonggak and passing through Yangjeong(羊汀), Chiltan(七灘), Sabin Auditorium(泗濱書院), Seonchang(船倉), Nakyeon(落淵), Seonchal(仙刹), Seonyujeong(仙遊亭), Mongseongak(夢仙閣), Baekwoonjeong(白雲亭), Naeap Village(川前里), Iho(伊湖), and Seoeodae(鮮魚帶) to the returning point, Bangujeong(伴鷗亭). He cruised the magnificent views around Banbyeoncheon, called 'Andong 8 Gyeong' or 'Imhagugok', and whenever the boat anchored, he appreciated the scenery at each point and enjoyed the arts, playing the geomungo. This study reached the following findings by examining the physical, ecological, visual and aesthetic changes in the places, sceneries, plant elements, and past and current landscapes expressed in the panels of this Sansuyucheop. The refinement of boating while viewing the clear river water, white sand beaches and fantastically shaped cliffs depicted in the Sansuyucheop, exchanging poems and calligraphy, and enjoying the geomungo is a fine example of the leisure culture of the upper class in the Joseon Dynasty. The construction of Imha Dam and Andong Dam has caused serious visual and ecological changes, making it impossible to feel the original mood of the background spots such as Yangjeonggwabeom(羊汀過帆), Chiltanhuseon(七灘候船), Sasubeomjoo(泗水泛舟), Seonchanggyeram(船倉繫纜), Nakyeonmosaek(落淵莫色), Mangcheonguido(輞川歸棹), and Ihojeongdo(伊湖停棹); the landscape and sentiment of that time can now be discerned only through the scenery depicted on the canvas. The 1st panel(Donghohaeram, 東湖解纜) and the 11th panel(Seoeobanjo, 鮮魚返照) of Heojoobugun's Sansuyucheop depict tall broad-leaved trees thought to be deciduous, and the 9th panel(Unjeongpungbeom, 雲亭風帆) depicts a pine forest called 'Gaeho(開湖)', formed when Uncheongong planted 1,000 pine trees with the village people in 1617. In addition, the scenic-view paintings depict evergreen conifers on the natural topography and deciduous tall trees around the pavilions and buildings. A comparative consideration of Heojoobugun's Sansuyucheop and Shinam's Dongyusipsogi(東遊十小記) showed that the location of Samgok is assumed to be Macheon and Chiltan, so Imhagugok is assumed to start from Baekunjeong as Ilgok, with Igok at Imcheon and the Imcheon auditorium, Samgok at Mangcheon and Chiltan, Sagok at the Sabin Auditorium of Sasoo, Ogok at Songseok, Yukgok at the Sooseok of Seonchang, Chilgok at Nakyeonhyeonryu, Palgok at Seonchalsa and Seonyoojeong, and Gugok at Pyong Yuheo. This study is significant in clarifying that Heojoobugun's Sansuyucheop is valuable both for exquisitely depicting the banks of the Banbyeon River, the largest tributary of the Nakdong River, in the latter half of the Joseon Dynasty, and as vital pictorial historical data for comparing now rarely seen traces of our ancestors' lives and landscape elements with those of the present.

Investigation on a Way to Maximize the Productivity in Poultry Industry (양계산업에 있어서 생산성 향상방안에 대한 조사 연구)

  • 오세정
    • Korean Journal of Poultry Science
    • /
    • v.16 no.2
    • /
    • pp.105-127
    • /
    • 1989
  • Although the poultry industry in Korea has developed considerably in recent years, it still needs further development compared with that of developed countries. Since the Korean poultry market is expected to be opened in the near future, it is necessary to maximize productivity, reduce production costs, and develop scientific technologies and management organization systems to improve the quality of poultry production. The following is a summary of the poultry industry in Japan. 1. The poultry industry in Japan is almost entirely specialized and commercialized, and its management system is integrated, cooperative, and has developed into an industrialized, intensive style; therefore, it is competitive in international poultry markets. 2. Average egg production is 48-50 g per day (max. 54 g) and the feed requirement is 2.1-2.3. 3. The management organization system is specialized: small-scale farmers form complexes, and large-scale farmers are integrated.
