The UN COPUOS was established in 1959 as a permanent committee of the UN General Assembly with the aims of promoting international cooperation in the peaceful uses of outer space, formulating space-related programmes within the UN, encouraging research and the dissemination of information on space, and studying legal problems arising from outer space activities. Its membership has grown from 24 states in 1959 to 76 in 2014. The Legal Subcommittee, established under COPUOS in 1962 to deal with legal problems associated with space activities, set up a framework of international space law through its first three decades of work: the five treaties and agreements, namely the Outer Space Treaty, the Rescue Agreement, the Liability Convention, the Registration Convention, and the Moon Agreement, and the five declarations and legal principles. However, sceptical views on this legal framework have been expressed, concerning the applicability of existing international space law to practical issues and to newly emerging kinds of space activities. UNISPACE III, which took place in 1999, served as an impetus to revitalize discussion of the legal issues faced by the international community in outer space activities. The agenda of the Legal Subcommittee is currently structured into three categories: regular items, single issues/items, and items considered under a multi-year work plan. The regular items, which deal with basic legal issues, include the definition and delimitation of outer space, the status and application of the five UN treaties on outer space, and national legislation relevant to the peaceful exploration and use of outer space. The single issues/items, which are decided upon in the preceding year, are discussed for only one year in the plenary unless renewed; they include items related to the use of nuclear power sources in outer space and to space debris mitigation. The agenda items considered under a multi-year work plan are discussed in working groups.
Items under this category deal with non-legally binding UN instruments on outer space and international mechanisms for cooperation. In recent years, the Subcommittee has made some progress on agenda items related to nuclear power sources, space debris, and international cooperation by establishing non-legally binding instruments, or soft law. The Republic of Korea became a member state of COPUOS in 2001, after rotating a seat every two years with Cuba and Peru since 1994. Korea's accession to COPUOS seems late, considering that some countries with hardly any space activity, such as Chad, Sierra Leone, Kenya, Lebanon, and Cameroon, joined COPUOS as early as the 1960s and 1970s and contributed to the drafting of the aforementioned treaties, declarations, and legal principles. Given the difficulty of concluding a treaty and the urgency of regulating newly emerging space activities, the Legal Subcommittee now focuses its efforts on developing soft law, such as resolutions and guidelines, to be adopted by the UN General Assembly. In order to have its own practices reflected in international practice, one of the constituent elements of customary international law, Korea should analyse its technical capability, policy, and law related to outer space activities and participate actively in the formation of such soft law.
Recently, as deep learning has attracted attention, it is being considered as a method for solving problems in various fields. In particular, deep learning is known to perform very well when applied to unstructured data such as text, sound, and images, and many studies have proven its effectiveness. Owing to the remarkable development of deep learning technology for text and images, interest in image captioning technology and its applications is increasing rapidly. Image captioning is a technique that automatically generates relevant captions for a given image by handling both image comprehension and text generation simultaneously. Despite its high entry barrier, in that analysts must be able to process both image and text data, image captioning has established itself as one of the key fields of AI research owing to its wide applicability. Many studies have accordingly been conducted to improve the performance of image captioning in various respects. Recent studies attempt to create advanced captions that not only describe an image accurately but also convey the information contained in the image more sophisticatedly. Despite these efforts, it is difficult to find studies that interpret images from the perspective of domain experts in each field rather than from the perspective of the general public. Even for the same image, the parts of interest may differ according to the professional field of the viewer. Moreover, the way of interpreting and expressing the image also differs according to the level of expertise. The general public tends to recognize an image from a holistic and general perspective, that is, by identifying the image's constituent objects and their relationships.
In contrast, domain experts tend to recognize an image by focusing on the specific elements needed to interpret it based on their expertise. This implies that the meaningful parts of an image differ depending on the viewer's perspective, even for the same image, and image captioning needs to reflect this phenomenon. Therefore, in this study, we propose a method to generate domain-specialized captions for an image by utilizing the expertise of experts in the corresponding domain. Specifically, after pre-training on a large amount of general data, the field expertise is transplanted through transfer learning with a small amount of expertise data. However, simple application of transfer learning to expertise data may invoke another type of problem. Simultaneous learning with captions of various characteristics may cause a so-called 'inter-observation interference' problem, which makes it difficult to learn each characteristic point of view purely. When learning from a vast amount of data, most of this interference is self-purified and has little impact on the learning results. By contrast, in fine-tuning, where learning is performed on a small amount of data, the impact of such interference can be relatively large. To solve this problem, we therefore propose a novel 'Character-Independent Transfer-learning' that performs transfer learning independently for each characteristic. To confirm the feasibility of the proposed methodology, we performed experiments utilizing the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. Additionally, with the advice of an art therapist, about 300 'image / expertise caption' pairs were created and used for the expertise-transplantation experiments.
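The two-stage scheme described above (pre-training on a large general corpus, then transplanting expertise via transfer learning on a small expert-labelled set) can be sketched as follows. This is a toy illustration, not the paper's actual architecture: the model, the update rule, and the choice to freeze the image-side parameters during the expertise stage are all our assumptions.

```python
class CaptionModel:
    """Toy stand-in for an image-captioning model (assumption, not the paper's model)."""

    def __init__(self):
        # encoder: image-understanding parameters; decoder: text-generation parameters
        self.encoder = {"w": 0.0}
        self.decoder = {"w": 0.0}

    def update(self, data, parts, lr=0.1):
        # stand-in for one gradient step on the named parameter groups
        for part in parts:
            getattr(self, part)["w"] += lr * sum(data) / len(data)


def pretrain(model, general_data, epochs=3):
    # stage 1: learn general visual and linguistic structure end to end
    for _ in range(epochs):
        model.update(general_data, parts=("encoder", "decoder"))


def transfer(model, expertise_data, epochs=3):
    # stage 2 (assumed setup): freeze the encoder and adapt only the decoder,
    # so the small expertise set reshapes caption style without eroding
    # the visual features learned from the large general corpus
    for _ in range(epochs):
        model.update(expertise_data, parts=("decoder",))


model = CaptionModel()
pretrain(model, general_data=[1.0, 2.0, 3.0])   # large general corpus (toy numbers)
enc_before = model.encoder["w"]
transfer(model, expertise_data=[5.0])           # ~300 expert captions (toy numbers)
assert model.encoder["w"] == enc_before         # encoder untouched by fine-tuning
```

The key point of the sketch is only the division of labour: the small expertise set touches a restricted part of the model, which is one common way to limit interference during fine-tuning.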
As a result of the experiment, we confirmed that captions generated by the proposed methodology reflect the perspective of the implanted expertise, whereas captions generated by learning on general data contain much content irrelevant to expert interpretation. In this paper, we propose a novel approach to specialized image interpretation: a method that uses transfer learning to generate captions specialized for a specific domain. In the future, by applying the proposed methodology to expertise transplantation in various fields, we expect that further research will actively address the lack of expertise data and improve the performance of image captioning.
Investors prefer to look for trading points based on the graph shown in a chart rather than on complex analyses such as corporate intrinsic value analysis or technical indicator analysis. However, chart pattern analysis is difficult and less computerized than users need. In recent years, many studies have examined stock price patterns using various machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, advances in IT have made it easier to analyze huge amounts of chart data to find patterns that can predict stock prices. Although the short-term forecasting power of prices has improved so far, long-term forecasting power remains limited, so such methods are used in short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that past technology could not recognize, but this can be vulnerable in practice, because whether the patterns found are suitable for trading is a separate matter. When such studies find a meaningful pattern, they find a point that matches the pattern and then measure performance after n days, assuming a purchase at that point in time. Since this approach calculates virtual revenues, there can be many disparities with reality. Whereas existing research tries to find patterns with stock price predictive power, this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M&W wave patterns published by Merrill (1980) are simple because they can be distinguished by five turning points. Despite reports that some patterns have price predictability, no performance achieved in the actual market has been reported. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern recognition accuracy.
In this study, 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented in a system, and only the one pattern with the highest success rate per group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. This reflects a real trading situation because performance is measured assuming that both the buy and the sell have been executed. We tested three ways to calculate turning points. The first method, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and calculates the vertices. In the second method, the high-low line zig-zag, a high that meets the n-day high price line is taken as a peak, and a low that meets the n-day low price line is taken as a valley. In the third method, the swing wave method, a central high that is higher than the n highs on its left and right is taken as a peak, and a central low that is lower than the n lows on its left and right is taken as a valley. The swing wave method was superior to the other methods in the test results. We interpret this to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished. Because the number of cases in this simulation was too large for exhaustive search of high-success-rate patterns, genetic algorithms (GA) were the most suitable solution. We also performed the simulation using the walk-forward analysis (WFA) method, which separates the optimization section from the application section, so we were able to respond appropriately to market changes. In this study, we optimize at the stock portfolio level because there is a risk of over-optimization if we optimize the variables for each individual stock.
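The swing wave rule described above (a bar is a peak if its high exceeds the n highs on both sides, and a valley if its low is below the n lows on both sides) can be sketched directly. The function and parameter names are ours, and the sample prices are illustrative; the study's actual implementation details are not given in the text.

```python
def swing_points(highs, lows, n=2):
    """Swing wave turning points: index i is a peak if highs[i] exceeds the
    n highs on each side, and a valley if lows[i] is below the n lows on
    each side. Bars within n of either end cannot be classified."""
    peaks, valleys = [], []
    for i in range(n, len(highs) - n):
        # compare the central high against its n left and n right neighbours
        if highs[i] > max(highs[i - n:i]) and highs[i] > max(highs[i + 1:i + n + 1]):
            peaks.append(i)
        # compare the central low against its n left and n right neighbours
        if lows[i] < min(lows[i - n:i]) and lows[i] < min(lows[i + 1:i + n + 1]):
            valleys.append(i)
    return peaks, valleys


# illustrative daily high/low series (toy numbers)
highs = [10, 11, 13, 11, 10, 9, 10, 12]
lows  = [ 9, 10, 12, 10,  9, 8,  9, 11]
peaks, valleys = swing_points(highs, lows, n=2)  # peak at index 2, valley at index 5
```

Note that a bar is only confirmed as a turning point once n later bars are available, which matches the study's interpretation that trading after a pattern is complete beats trading on an unfinished pattern.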
Therefore, we set the number of constituent stocks to 20 to increase the effect of diversification while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. In the results, the small-cap portfolio was the most successful and the high-volatility portfolio was the second best. This shows that some price volatility is needed for patterns to form, but that higher volatility is not always better.
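The walk-forward analysis mentioned above rolls a pair of windows through the data: parameters are optimized on one section and then applied to the next, out-of-sample section. A minimal sketch of that splitting scheme follows; the window lengths are illustrative assumptions, not the study's settings.

```python
def walk_forward_splits(n_bars, train_len, test_len):
    """Walk-forward analysis splits: optimize on [start, start+train_len),
    apply to the next test_len bars, then roll forward by test_len so the
    application sections tile the data without overlap."""
    splits = []
    start = 0
    while start + train_len + test_len <= n_bars:
        train = (start, start + train_len)                          # optimization section
        test = (start + train_len, start + train_len + test_len)    # application section
        splits.append((train, test))
        start += test_len
    return splits


# e.g. 1000 bars, optimize on 500, apply to the next 100, then roll forward
splits = walk_forward_splits(n_bars=1000, train_len=500, test_len=100)
```

Because each application section lies strictly after its optimization section, performance is always measured on data the optimizer never saw, which is what lets the method adapt to market changes without over-fitting.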
Introduction As consumers' purchase behavior changes in a rational and practical direction, the discount store industry has come to face keen competition along with rapid external growth. As a solution, distribution businesses are therefore concentrating on developing private brands (PB), which can realize differentiation and profitability at the same time. And as improving customer loyalty beyond customer satisfaction is effective for survival in a keenly competitive environment, PB is being used as a strategic tool to improve customer loyalty. To improve loyalty among PB users, it is necessary to develop PB by examining the properties of the customer group; first of all, the quality level perceived by consumers should be met to obtain customer satisfaction and customer trust, and consequently to induce customer loyalty. To provide a systematic analysis of the relations between the antecedents influencing perceived quality and the variables affecting customer loyalty, this study proposed a research model based on causal relations verified in prior research and set 16 hypotheses about the relations among 9 theoretical variables. Data were collected from 400 adult customers residing in Seoul and the metropolitan area who use large-scale discount stores; of these, 375 responses were analyzed using SPSS 15.0 and Amos 7.0. The findings of the present study are as follows. We ascertained that the higher the company reputation, brand reputation, product experience, and brand familiarity, the higher the perceived quality. The study also showed that the higher the perceived quality, the higher the customer satisfaction, customer trust, and customer loyalty. The findings further showed that the higher the customer satisfaction and customer trust, the higher the customer loyalty.
As for the moderating effects of PB versus NB on the influences of perceived quality factors on perceived quality, we ascertained that the influence of company reputation on perceived quality was higher for PB than for NB, while the influences of brand reputation and brand familiarity on perceived quality were higher for NB than for PB. These results of the empirical analysis will be useful for practitioners conducting marketing activities based on a clearer understanding of the antecedent and consequent factors of perceived quality. Finally, after discussing the academic and managerial implications of these results, we present the limitations of this study and future research directions. Research Model and Hypotheses Test After analyzing whether the antecedent variables influencing perceived quality differ between PB and NB in their influence, the relations between the variables that influence customer loyalty were determined as in Figure 1. We established 16 hypotheses to test, as follows:
H1-1: Perceived price has a positive effect on perceived quality.
H1-2: PB and NB are expected to differ in the influence of perceived price on perceived quality.
H2-1: Company reputation has a positive effect on perceived quality.
H2-2: PB and NB are expected to differ in the influence of company reputation on perceived quality.
H3-1: Brand reputation has a positive effect on perceived quality.
H3-2: PB and NB are expected to differ in the influence of brand reputation on perceived quality.
H4-1: Product experience has a positive effect on perceived quality.
H4-2: PB and NB are expected to differ in the influence of product experience on perceived quality.
H5-1: Brand familiarity has a positive effect on perceived quality.
H5-2: PB and NB are expected to differ in the influence of brand familiarity on perceived quality.
H6: Perceived quality has a positive effect on customer satisfaction.
H7: Perceived quality has a positive effect on customer trust.
H8: Perceived quality has a positive effect on customer loyalty.
H9: Customer satisfaction has a positive effect on customer trust.
H10: Customer satisfaction has a positive effect on customer loyalty.
H11: Customer trust has a positive effect on customer loyalty.
Results from analyzing the main effects of the research model are shown as