• Title/Summary/Keyword: 과정모델 (process model)

Search Results: 8,476 (processing time: 0.045 seconds)

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.57-73 / 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure is becoming important. System monitoring data is multidimensional time series data, and handling it requires considering both the characteristics of multidimensional data and those of time series data. With multidimensional data, correlation between variables should be considered; existing probability-based, linear-model-based, and distance-based methods degrade under the curse of dimensionality. In addition, time series data is typically preprocessed with sliding-window techniques and time series decomposition for autocorrelation analysis, but these techniques increase the dimensionality of the data and therefore need to be supplemented. Anomaly detection is an old research field: statistical methods and regression analysis were used in the early days, and studies applying machine learning and artificial neural networks are now active. Statistically based methods are difficult to apply when data is non-homogeneous and do not detect local outliers well. Regression-based methods learn a regression formula based on parametric statistics and detect anomalies by comparing predicted and actual values; their performance drops when the model is not solid or when the data contains noise or outliers, so training data free of noise and outliers is required. An autoencoder is an artificial neural network trained to reproduce its input as closely as possible. Compared with existing probability and linear models, cluster analysis, and supervised learning, it has many advantages: it can be applied to data that does not satisfy probability-distribution or linearity assumptions, and it can be trained unsupervised, without labeled data.
However, autoencoders remain limited in identifying local outliers in multidimensional data, and the characteristics of time series data greatly increase the data dimension. In this study, we propose a Conditional Multimodal Autoencoder (CMAE) that improves anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to address the limitation in identifying local outliers in multidimensional data. Multimodal models are commonly used to learn different types of input, such as voice and images; the modalities share the autoencoder's bottleneck and thereby learn correlations. In addition, a Conditional Autoencoder (CAE) was used to learn the characteristics of time series data effectively without increasing the data dimension. Conditional inputs are usually categorical variables, but in this study time was used as the condition so that periodicity could be learned. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). Restoration performance for 41 variables was examined for the proposed and comparison models. Restoration performance differs by variable: the loss values for the Memory, Disk, and Network modalities are small in all three autoencoders, so restoration works well there. The Process modality showed no significant difference across the three models, while the CPU modality showed excellent performance in CMAE. ROC curves were prepared to evaluate anomaly detection performance, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, performance ranked in the order of CMAE, MAE, and UAE.
In particular, recall was 0.9828 for CMAE, confirming that it detects almost all anomalies. Accuracy also improved, to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has an additional advantage beyond performance improvement: techniques such as time series decomposition and sliding windows add procedures that must be managed, and the dimensional increase they cause can slow inference. The proposed model is easy to apply to practical tasks with respect to both inference speed and model management.
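The thresholding step shared by all three autoencoder variants can be sketched as follows. This is not the authors' CMAE: the trained network is abstracted behind a hypothetical `reconstruct` function, and only the reconstruction-error scoring and the precision/recall/F1 comparison from the abstract are shown.

```python
# Minimal sketch of reconstruction-error anomaly scoring. The autoencoder
# itself is abstracted away: `reconstruct` is a hypothetical stand-in for
# a trained UAE/MAE/CMAE model.

def anomaly_scores(samples, reconstruct):
    """Score each sample by its reconstruction loss (mean squared error)."""
    scores = []
    for x in samples:
        x_hat = reconstruct(x)
        scores.append(sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x))
    return scores

def precision_recall_f1(labels, scores, threshold):
    """Flag samples whose score exceeds `threshold` and compare to labels."""
    preds = [int(s > threshold) for s in scores]
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy usage: a reconstructor trained on "normal" data fails on anomalies,
# so the anomalous sample gets a large reconstruction error.
normal = [[1.0, 1.0], [1.1, 0.9]]
anomalous = [[5.0, -4.0]]
reconstruct = lambda x: [1.0, 1.0]  # hypothetical trained model
scores = anomaly_scores(normal + anomalous, reconstruct)
p, r, f1 = precision_recall_f1([0, 0, 1], scores, threshold=0.5)
```

A sweep over `threshold` values is what generates the ROC curve and AUC mentioned in the abstract.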

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.173-198 / 2020
  • For a long time, many academic studies have been conducted on predicting the success of campaigns targeted at customers, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded with the rapid growth of online business, companies carry out campaigns of various types at a scale incomparable to the past. However, as fatigue from duplicate exposure increases, customers tend to perceive campaigns as spam. From the corporate standpoint, the effectiveness of campaigns is also decreasing while the cost of investing in them rises, leading to low actual campaign success rates. Accordingly, various studies are ongoing to improve the effectiveness of campaigns in practice. A campaign system ultimately aims to increase the success rate of various campaigns by collecting and analyzing customer-related data and using it for campaigns. In particular, recent attempts have been made to predict campaign response using machine learning. Because campaign data has many features, selecting appropriate ones is very important. If all input data are used when classifying a large amount of data, learning time grows as the number of classes expands, so a minimal input data set must be extracted from the whole. Moreover, a model trained with too many features may lose prediction accuracy due to overfitting or correlation between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary step when analyzing a high-dimensional data set.
Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used traditional feature selection techniques, but when there are many features their classification performance is limited and learning takes a long time. Therefore, in this study we propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing sequential SFFS method in searching for the feature subsets that underpin machine learning model performance, using statistical characteristics of the data processed in the campaign system. Features with a strong influence on performance are derived first and features with a negative effect are removed; the sequential method is then applied to increase search efficiency and to yield an improved algorithm capable of generalized prediction. The proposed model showed better search and prediction performance than the traditional greedy algorithm: campaign success prediction was higher than with the original data set, the greedy algorithm, a genetic algorithm (GA), or recursive feature elimination (RFE). In addition, the improved feature selection algorithm was found to help analyze and interpret prediction results by providing the importance of the derived features. These include features such as age, customer rating, and sales, which were already known statistically to be important.
Unexpectedly, features that campaign planners had rarely used to select campaign targets, such as the combined product name, the average three-month data consumption rate, and the last three months' wireless data usage, were also selected as important for campaign response. This confirms that base attributes can be very important features depending on the campaign type, making it possible to analyze and understand the important characteristics of each campaign type.
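The SFFS baseline the study improves on can be sketched as follows. This is an illustrative pure-Python version, not the authors' improved algorithm: the subset evaluator `score` (in practice, cross-validated model performance) and the toy feature weights are hypothetical.

```python
# Minimal sketch of Sequential Floating Forward Selection (SFFS):
# greedy forward steps, each followed by conditional backward
# ("floating") steps that drop a feature if doing so improves the score.

def sffs(features, score, k):
    """Select up to k features using forward + floating backward steps."""
    selected = []
    while len(selected) < k:
        # Forward: add the feature that improves the subset score most.
        best = max((f for f in features if f not in selected),
                   key=lambda f: score(selected + [f]))
        selected.append(best)
        # Floating backward: remove features while removal helps.
        improved = True
        while improved and len(selected) > 1:
            improved = False
            for f in list(selected):
                reduced = [g for g in selected if g != f]
                if score(reduced) > score(selected):
                    selected = reduced
                    improved = True
                    break
    return selected

# Toy additive score: informative features help, the noise feature hurts.
weights = {"age": 0.4, "rating": 0.3, "sales": 0.2, "noise": -0.1}
score = lambda subset: sum(weights[f] for f in subset)
chosen = sffs(list(weights), score, k=3)
```

With this toy score the search keeps the three informative features and never admits `noise`, which mirrors the abstract's point that removing near-noise features protects prediction accuracy.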

Evaluation of Liver Function Using $^{99m}Tc$-Lactosylated Serum Albumin Liver Scintigraphy in Rat with Acute Hepatic Injury Induced by Dimethylnitrosamine (Dimethylnitrosamine 유발 급성 간 손상 흰쥐에서 $^{99m}Tc$-Lactosylated Serum Albumin을 이용한 간 기능의 평가)

  • Jeong, Shin-Young;Seo, Myung-Rang;Yoo, Jeong-Ah;Bae, Jin-Ho;Ahn, Byeong-Cheol;Hwang, Jae-Seok;Jeong, Jae-Min;Ha, Jeong-Hee;Lee, Kyu-Bo;Lee, Jae-Tae
    • The Korean Journal of Nuclear Medicine / v.37 no.6 / pp.418-427 / 2003
  • Objectives: $^{99m}Tc$-lactosylated human serum albumin (LSA) is a newly synthesized radiopharmaceutical that binds to asialoglycoprotein receptors, which are specifically presented on the hepatocyte membrane. Hepatic uptake and blood clearance of LSA were evaluated in rats with acute hepatic injury induced by dimethylnitrosamine (DMN), and the results were compared with the corresponding liver enzyme profiles and histologic changes. Materials and Methods: DMN (27 mg/kg) was injected intraperitoneally into Sprague-Dawley rats to induce acute hepatic injury. At 3 (DMN-3), 8 (DMN-8), and 21 (DMN-21) days after DMN injection, LSA was injected intravenously and dynamic images of the liver and heart were recorded for 30 minutes. Time-activity curves of the heart and liver were generated from regions of interest drawn over the liver and heart. The degree of hepatic uptake and blood clearance of LSA was evaluated by visual interpretation and by semiquantitative analysis using two parameters (receptor index: LHL3; index of blood clearance: HH3); the time-activity curves were also analyzed by curve fitting with the Prism program. Results: Visual assessment of the LSA images revealed decreased hepatic uptake in DMN-treated rats compared to the control group. In the semiquantitative analysis, LHL3 was significantly lower in the DMN-treated groups than in the control group (DMN-3: 0.842, DMN-8: 0.898, DMN-21: 0.91, control: 0.96, p<0.05), whereas HH3 was significantly higher (DMN-3: 0.731, DMN-8: 0.654, DMN-21: 0.604, control: 0.473, p<0.05). AST and ALT were significantly higher in the DMN-3 group than in the control group. Centrilobular necrosis and infiltration of inflammatory cells were most prominent in the DMN-3 group and decreased over time. Conclusion: The degree of hepatic uptake of LSA was inversely correlated with liver transaminases and the degree of histologic liver injury in rats with acute hepatic injury.
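The two semiquantitative indices can be sketched as simple ratios of region-of-interest counts. This is a hedged illustration only: the abstract does not define LHL3 and HH3, so the formulas below assume they follow the usual convention of receptor-binding liver scintigraphy (receptor index = liver/(liver+heart) counts at a late time point; clearance index = heart counts at a late vs. early time point), and all count values are toy numbers.

```python
# Hedged sketch of the semiquantitative indices, ASSUMING the standard
# convention for receptor-binding liver scintigraphy; the abstract does
# not spell out the exact definitions of LHL3 and HH3.

def receptor_index(liver_late, heart_late):
    """LHL-style receptor index: higher means better hepatic uptake."""
    return liver_late / (liver_late + heart_late)

def clearance_index(heart_late, heart_early):
    """HH-style index: higher means slower blood clearance (worse)."""
    return heart_late / heart_early

# Toy ROI counts. An injured liver takes up less tracer, so more
# activity remains in the blood pool (heart ROI).
healthy = receptor_index(liver_late=960, heart_late=40)
injured = receptor_index(liver_late=842, heart_late=158)
fast = clearance_index(heart_late=473, heart_early=1000)   # control-like
slow = clearance_index(heart_late=731, heart_early=1000)   # injury-like
```

Under this convention the toy numbers reproduce the direction of the reported results: the receptor index falls and the clearance index rises with hepatic injury.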

A Study of 'Emotion Trigger' by Text Mining Techniques (텍스트 마이닝을 이용한 감정 유발 요인 'Emotion Trigger'에 관한 연구)

  • An, Juyoung;Bae, Junghwan;Han, Namgi;Song, Min
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.69-92 / 2015
  • The explosion of social media data has led to the application of text-mining techniques to analyze big social media data more rigorously. Even as social media text analysis algorithms have improved, previous approaches retain some limitations. In sentiment analysis of social media written in Korean, there are two typical approaches. One is the linguistic approach using machine learning, which is the most common; some studies add grammatical factors to the feature sets used to train classification models. The other adopts semantic analysis, but it has mainly been applied to English texts. To overcome these limitations, this study applies the Word2Vec algorithm, an extension of neural network algorithms, to capture the broader semantic features underestimated in existing sentiment analysis. The result of applying Word2Vec is compared with the result of co-occurrence analysis to identify the difference between the two approaches. The results show that Word2Vec extracts three times as many related words expressing emotion about a given keyword as co-occurrence analysis does. The difference stems from Word2Vec's vectorization of semantic features, so the algorithm can be said to catch hidden related words that traditional analysis does not find. In addition, Part-Of-Speech (POS) tagging for Korean is used to detect adjectives as "emotional words". The emotional words extracted from the text are converted into word vectors by Word2Vec to find related words, and among these, nouns are selected because each of them may stand in a causal relationship with the emotional word in the sentence.
The process of extracting these trigger factors of emotional words is named "Emotion Trigger" in this study. As a case study, the datasets were collected by searching with three keywords rich in public emotion and opinion: professor, prosecutor, and doctor. Preliminary data collection was conducted to select secondary keywords for gathering the data used in the actual analysis: Professor (sexual assault, misappropriation of research money, recruitment irregularities, polifessor), Doctor (Shin Hae-chul Sky Hospital, drinking and plastic surgery, rebate), and Prosecutor (lewd behavior, sponsor). The text data comprises about 100,000 documents (Professor: 25,720; Doctor: 35,110; Prosecutor: 43,225) gathered from news, blogs, and Twitter to reflect various levels of public emotion. Gephi (http://gephi.github.io) was used for visualization, and all text processing and analysis programs were written in Java. The contributions of this study are as follows. First, different approaches to sentiment analysis are integrated to overcome the limitations of existing approaches. Second, finding Emotion Triggers can detect hidden connections to public emotion that existing methods cannot. Finally, the approach could be generalized regardless of the type of text data. The limitation of this study is that it is hard to establish that a word extracted by Emotion Trigger processing has a significantly causal relationship with the emotional word in a sentence. Future work will clarify the causal relationship between emotional words and the words extracted by Emotion Trigger by comparing it with manually tagged relationships. Furthermore, part of the text data comes from Twitter, which has a number of distinct features not dealt with in this study; these features will be considered in further work.
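The lookup step of the Emotion Trigger process can be sketched as a nearest-neighbor search in vector space. The toy three-dimensional embeddings below are hypothetical; in the study the vectors come from Word2Vec trained on the collected corpus.

```python
# Minimal sketch of the Emotion Trigger lookup: given the vector of an
# emotional word (adjective), rank candidate nouns by cosine similarity
# and keep the closest ones as potential trigger factors.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = (math.sqrt(sum(a * a for a in u)) *
            math.sqrt(sum(b * b for b in v)))
    return dot / norm

def related_nouns(emotion_vec, noun_vecs, top_n=2):
    """Return the top_n nouns closest to the emotional word's vector."""
    ranked = sorted(noun_vecs,
                    key=lambda w: cosine(emotion_vec, noun_vecs[w]),
                    reverse=True)
    return ranked[:top_n]

# Hypothetical 3-d embeddings (real Word2Vec vectors have hundreds of
# dimensions).
angry = [0.9, 0.1, 0.0]
nouns = {"scandal": [0.8, 0.2, 0.1],
         "lecture": [0.1, 0.9, 0.2],
         "weather": [0.0, 0.1, 0.9]}
triggers = related_nouns(angry, nouns)
```

Because ranking uses cosine similarity over learned vectors rather than raw co-occurrence counts, semantically related words can surface even when they rarely appear next to the emotional word, which is the behavior the abstract attributes to Word2Vec.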

Impact of Semantic Characteristics on Perceived Helpfulness of Online Reviews (온라인 상품평의 내용적 특성이 소비자의 인지된 유용성에 미치는 영향)

  • Park, Yoon-Joo;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.29-44 / 2017
  • In Internet commerce, consumers are heavily influenced by product reviews written by other users who have already purchased the product. However, as product reviews accumulate, it takes a lot of time and effort for consumers to check the massive number of reviews individually, and carelessly written reviews actually inconvenience them. Thus many online vendors provide mechanisms to identify the reviews customers perceive as most helpful (Cao et al. 2011; Mudambi and Schuff 2010). For example, some online retailers, such as Amazon.com and TripAdvisor, allow users to rate the helpfulness of each review and use this feedback to rank and re-order reviews. However, many reviews receive only a few feedback votes or none at all, making their helpfulness hard to identify, and newly authored reviews have not had time to accumulate feedback: for example, only 20% of the reviews in the Amazon Review Dataset (McAuley and Leskovec, 2013) have more than five feedback votes (Yan et al., 2014). The purpose of this study is to analyze the factors affecting the usefulness of online product reviews and to derive a forecasting model that selectively provides reviews that can help consumers. To do this, we extracted the various linguistic, psychological, and perceptual elements included in product reviews using text-mining techniques and identified the determinants among these elements that affect usefulness. In particular, because the review characteristics and determinants of usefulness may differ between apparel products (experience goods) and electronic products (search goods), the review characteristics were compared within each product group and determinants were established for each. This study used 7,498 apparel product reviews and 106,962 electronic product reviews from Amazon.com.
To understand a review text, we first extract linguistic and psychological characteristics, such as word count and the levels of emotional tone and analytical thinking embedded in the text, using the widely adopted text analysis software LIWC (Linguistic Inquiry and Word Count). We then explore the descriptive statistics of the review texts for each category and statistically compare their differences using t-tests. Lastly, we perform regression analysis using the data mining software RapidMiner to find the determinant factors. Comparing the review characteristics of electronic and apparel products, reviewers used more words as well as longer sentences when writing reviews of electronic products. As for content, electronic product reviews included more analytic words, carried more clout, related more to cognitive processes (CogProc), and included more words expressing negative emotions (NegEmo) than apparel reviews. The apparel product reviews, on the other hand, included more personal, authentic, positive emotions (PosEmo) and perceptual processes (Percept). Next, we analyzed the determinants of review usefulness for the two product groups. In both groups, reviews perceived as useful carried high product ratings from reviewers, contained a larger total number of words and many expressions involving perceptual processes, and had fewer negative emotions. In addition, apparel product reviews with many comparative expressions, a low expertise index, and concise content with fewer words per sentence were perceived to be useful.
In the case of electronic product reviews, those that were analytical with a high expertise index, along with containing many authentic expressions, cognitive processes, and positive emotions (PosEmo) were perceived to be useful. These findings are expected to help consumers effectively identify useful product reviews in the future.
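The first two pipeline steps can be sketched with stdlib Python. The features below (word count, words per sentence) are simple stand-ins for the LIWC variables, the review strings are invented examples, and the comparison uses Welch's t statistic rather than the specific t-test configuration of the paper.

```python
# Sketch of the pipeline's first two steps: extract surface features
# from review text and compare two product groups with a t statistic.
import math

def features(review):
    """Word count and words per sentence -- toy LIWC-style features."""
    sentences = [s for s in review.replace("!", ".").split(".") if s.strip()]
    words = review.split()
    return {"word_count": len(words),
            "words_per_sentence": len(words) / len(sentences)}

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Invented example reviews (electronics wordier than apparel, as the
# abstract reports for the real data).
electronics = ["The battery lasts long. The screen is sharp. Setup was easy.",
               "Great codec support. Firmware updates arrive monthly."]
apparel = ["Love it!", "Fits well."]
wc_e = [features(r)["word_count"] for r in electronics]
wc_a = [features(r)["word_count"] for r in apparel]
t = welch_t(wc_e, wc_a)
```

A positive t here reflects the abstract's finding that electronic product reviews use more words; the subsequent regression step would take such features as inputs and perceived helpfulness as the target.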

Literature Analysis of Radiotherapy in Uterine Cervix Cancer for the Processing of the Patterns of Care Study in Korea (한국에서 자궁경부암 방사선치료의 Patterns of Care Study 진행을 위한 문헌 비교 연구)

  • Choi Doo Ho;Kim Eun Seog;Kim Yong Ho;Kim Jin Hee;Yang Dae Sik;Kang Seung Hee;Wu Hong Gyun;Kim Il Han
    • Radiation Oncology Journal / v.23 no.2 / pp.61-70 / 2005
  • Purpose: Uterine cervix cancer is one of the most prevalent cancers among women in Korea. We analysed papers published in Korea, comparing them with Patterns of Care Study (PCS) articles from the United States and Japan, for the purpose of developing and conducting a Korean PCS. Materials and Methods: We searched PCS-related foreign papers on the PCS homepage (212 articles and abstracts) and in PubMed to identify the structure and process of the PCS. To compare these studies with Korean papers, we used the 'Korean PubMed' site to find 99 articles on uterine cervix cancer and radiation therapy. We analysed the Korean papers by comparing them with selected PCS papers regarding structure, process, and outcome, and compared their items between the 1980s and the 1990s. Results: The evaluable papers treating cervix PCS items numbered 28 from the United States, 10 from Japan, and 73 from Korea. The US and Japanese PCS papers commonly stratified facilities into 3~4 categories on the basis of scale characteristics and the numbers of patients and doctors, and their researchers strictly restricted eligible patients. For the process portion of the studies, they analysed pretreatment staging factors in chronological order, treatment-related factors, factors in addition to FIGO staging, and the treatment machine. US papers dealt with racial and socioeconomic characteristics of the patients, tumor size (6), and bilaterality of parametrial or pelvic side wall invasion (5), whereas Japanese papers addressed tumor markers. A common trend in the staging work-up was decreasing use of lymphangiography and barium enema and increasing use of CT and MRI over time. Recent subjects of the Korean papers were concurrent chemoradiotherapy (9 papers), treatment duration (4), tumor markers (8), and unconventional fractionation.
Conclusion: By comparing the papers from the three nations, we collected items for a Korean uterine cervix cancer PCS. Through consensus meetings and close communication, survey items for the cervix cancer PCS were developed to measure the structure, process, and outcome of radiation treatment of cervix cancer. Subsequent research will focus on the use of brachytherapy and its impact on outcome, including complications. These findings and future PCS studies will direct the development of educational programs aimed at correcting identified deficits in care.

Application and Analysis of Ocean Remote-Sensing Reflectance Quality Assurance Algorithm for GOCI-II (천리안해양위성 2호(GOCI-II) 원격반사도 품질 검증 시스템 적용 및 결과)

  • Sujung Bae;Eunkyung Lee;Jianwei Wei;Kyeong-sang Lee;Minsang Kim;Jong-kuk Choi;Jae Hyun Ahn
    • Korean Journal of Remote Sensing / v.39 no.6_2 / pp.1565-1576 / 2023
  • An atmospheric correction algorithm based on a radiative transfer model is required to obtain remote-sensing reflectance (Rrs) from the top-of-atmosphere observations of the Geostationary Ocean Color Imager-II (GOCI-II). The Rrs derived from atmospheric correction is used to estimate various marine environmental parameters such as chlorophyll-a concentration, total suspended material concentration, and absorption by dissolved organic matter. Atmospheric correction is therefore a fundamental algorithm, since it significantly affects the reliability of all other ocean color products. In clear waters, however, atmospheric path radiance in the blue wavelengths can exceed the water-leaving radiance by more than a factor of ten, which makes atmospheric correction a highly error-sensitive process: a 1% error in estimating the atmospheric radiance can cause errors of more than 10% in Rrs. Quality assessment of Rrs after atmospheric correction is thus essential for reliable ocean environment analysis using ocean color satellite data. In this study, a Quality Assurance (QA) algorithm based on in-situ Rrs data archived in the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Bio-optical Archive and Storage System (SeaBASS) was applied and modified to account for the different spectral characteristics of GOCI-II. This method is officially employed in the ocean color satellite data processing system of the National Oceanic and Atmospheric Administration (NOAA). It provides quality scores for Rrs ranging from 0 to 1 and classifies the water into 23 types. When the QA algorithm was applied to initial-phase GOCI-II data with less calibration, the most frequent score was a relatively low 0.625.
When the algorithm was applied to the improved GOCI-II atmospheric correction results with updated calibrations, however, the most frequent score rose to 0.875. The water-type analysis using the QA algorithm indicated that parts of the East Sea, the South Sea, and the Northwest Pacific Ocean are primarily relatively clear case-I waters, while the coastal areas of the Yellow Sea and the East China Sea are mainly highly turbid case-II waters. We expect the QA algorithm to support GOCI-II users not only by statistically identifying Rrs retrievals with significant errors but also by enabling more reliable calibration with quality-assured data. The algorithm will be included in the level-2 flag data provided with the GOCI-II atmospheric correction.
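The scoring idea described in the abstract can be sketched as follows. This is a hedged simplification of the NOAA QA method: real reference spectra for the 23 water types, the band set, and the tolerance envelopes come from the SeaBASS-derived database, whereas everything below (4 toy bands, 2 toy water types, a flat 0.15 tolerance) is hypothetical.

```python
# Hedged sketch of the QA scoring idea: normalize an Rrs spectrum so
# only its shape matters, pick the closest reference water type, then
# score by the fraction of bands within that type's tolerance envelope.
import math

def normalize(spectrum):
    """Scale to unit length so comparison uses spectral shape only."""
    norm = math.sqrt(sum(v * v for v in spectrum))
    return [v / norm for v in spectrum]

def qa_score(rrs, references, tolerance=0.15):
    """Return (best-matching water type, fraction of bands in bounds)."""
    shape = normalize(rrs)
    best = min(references, key=lambda k: sum(
        (a - b) ** 2 for a, b in zip(shape, normalize(references[k]))))
    ref = normalize(references[best])
    in_bounds = sum(abs(a - b) <= tolerance for a, b in zip(shape, ref))
    return best, in_bounds / len(shape)

# Hypothetical shapes over 4 bands (blue -> red): clear case-I water
# peaks in the blue; turbid case-II water peaks in the green/red.
refs = {"case-I": [0.8, 0.5, 0.3, 0.1],
        "case-II": [0.2, 0.5, 0.7, 0.6]}
water_type, score = qa_score([0.79, 0.52, 0.28, 0.12], refs)
```

A spectrum whose shape strays outside the envelope at some bands would receive a fractional score (e.g., 0.625 or 0.875 on a finer band grid), which is how the abstract's histogram of scores arises.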

Development of New 4D Phantom Model in Respiratory Gated Volumetric Modulated Arc Therapy for Lung SBRT (폐암 SBRT에서 호흡동조 VMAT의 정확성 분석을 위한 새로운 4D 팬텀 모델 개발)

  • Yoon, KyoungJun;Kwak, JungWon;Cho, ByungChul;Song, SiYeol;Lee, SangWook;Ahn, SeungDo;Nam, SangHee
    • Progress in Medical Physics / v.25 no.2 / pp.100-109 / 2014
  • In stereotactic body radiotherapy (SBRT), the accurate location of the treatment site must be guaranteed despite the patient's respiratory motion, and many studies on this topic have been conducted. In this study, a new verification method simulating the real respiratory motion of heterogeneous treatment regions was proposed to investigate the accuracy of lung SBRT delivered by Volumetric Modulated Arc Therapy (VMAT). Based on CT images of lung cancer patients, lung phantoms were fabricated with a 3D printer to fit into the $QUASAR^{TM}$ respiratory motion phantom. Each phantom was bisected so that 2D dose distributions could be measured with inserted EBT3 film. A homogeneous plastic phantom was also used to confirm dose calculation accuracy under homogeneous conditions. Two dose calculation algorithms, the Analytical Anisotropic Algorithm (AAA) and AcurosXB (AXB), were applied in the plan dose calculations. To evaluate treatment accuracy under respiratory motion, we analyzed the gamma index between the plan dose and the film dose measured under various conditions: a static target and a moving target with or without gating. The CT number of the GTV region was 78 HU for the real patient and 92 HU for the homemade lung phantom. With the AAA algorithm, the gamma pass rates (3%/3 mm criteria) between plan dose and film dose in the heterogeneous lung phantom under respiratory motion were 88% with gating and 78% without gating, and 95% in the static case; in all cases with the homogeneous phantom, the pass rates exceeded 99%. With the AcurosXB algorithm, pass rates exceeded 98% for the heterogeneous phantom and 99% for the homogeneous phantom. Because the respiratory amplitude was relatively small and the breathing pattern had a longer exhale phase than inhale, the 3%/3 mm gamma pass rates did not differ significantly across the motion conditions.
In this study, a new phantom model for 4D dose distribution verification, using patient-specific lung phantoms moving with real breathing patterns, was successfully implemented. The model was shown to provide the capability to verify dose distributions delivered under more realistic conditions, as well as the accuracy of dose calculation.
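The gamma-index comparison used throughout the abstract can be sketched in one dimension. This is a deliberately simplified illustration: real evaluations are 2D/3D, interpolate the reference dose finely, and usually normalize the 3% dose criterion to a global or local reference dose; here the tolerance is a flat 3 percentage points and only measured reference points are searched, and all dose values are toy numbers.

```python
# Minimal 1-D sketch of the gamma-index test (3%/3 mm criteria) used to
# compare a plan dose profile with a measured film dose profile.
import math

def gamma_index(eval_pos, eval_dose, ref_points, dose_tol, dist_tol):
    """gamma <= 1 means the point passes the combined dose/distance test."""
    return min(math.sqrt(((eval_dose - d) / dose_tol) ** 2 +
                         ((eval_pos - x) / dist_tol) ** 2)
               for x, d in ref_points)

def pass_rate(measured, reference, dose_tol, dist_tol):
    """Fraction of measured points with gamma <= 1."""
    gammas = [gamma_index(x, d, reference, dose_tol, dist_tol)
              for x, d in measured]
    return sum(g <= 1 for g in gammas) / len(gammas)

# Toy 1-D profiles: positions in mm, doses in % of prescription.
plan = [(0.0, 100.0), (3.0, 98.0), (6.0, 60.0), (9.0, 20.0)]
film = [(0.0, 101.0), (3.0, 96.5), (6.0, 62.0), (9.0, 35.0)]  # last point off
rate = pass_rate(film, plan, dose_tol=3.0, dist_tol=3.0)
```

The last film point misses by 15 percentage points with no nearby matching plan dose, so it fails while the other three pass, illustrating how motion-induced blurring lowers the pass rate in the gated vs. non-gated comparison.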

A Study on the Historical Development of Research Community in Korea: Focused on the Government Supported Institutes (연구자 집단의 성장과 변천: 정부 출연 연구 기관을 중심으로)

  • Park Jin-Hee
    • Journal of Science and Technology Studies / v.6 no.1 s.11 / pp.119-152 / 2006
  • This paper deals with the historical development of the research community in Korea. As previous studies of the Korean scientific community show, the government-supported institutes played an important role in the formation of the research community, so this study concerns the historical development of those institutes and the features of their researcher groups. The following questions are addressed: how the social status of these researchers changed, how they responded to social problems and national policies, and what characteristics they showed with regard to the problem of identity. After Korea's liberation, government institutes such as the Chungang Kongop Yonguso (industrial research center) and the Korean Atomic Energy Research Institute contributed to the development of the first generation of researchers. However, this group could hardly identify themselves as researchers, because they spent much time on testing, evaluation, and education. The identity problem also resulted from the institutes' lack of authority as research institutions: the status of a researcher did not differ from that of a civil servant. With the establishment of KIST, the Korean research community came into blossom. The government-supported institutes founded on the model of KIST allowed quantitative and qualitative growth of the research community. Thanks to guaranteed institutional authority and a new reward system, researchers gained respect and improved their social status. During this period, researchers volunteered to support government policies, and nationalistic statements were common in the research community. During the 1990s, the researchers showed different behavior and attitudes toward the government: the nationalistic ideology disappeared, and instead the research community criticized government policies and took action against the government.
These changes are related to the lowered position of the government-supported institutes.


A Comparative Study between Space Law and the Law of the Sea (우주법과 해양법의 비교 연구)

  • Kim, Han-Taek
    • The Korean Journal of Air & Space Law and Policy / v.24 no.2 / pp.187-210 / 2009
  • Space law (or outer space law) and the law of the sea are branches of international law dealing with activities in geographical areas that do not, or only in part, come under national sovereignty. Legal rules pertaining to outer space and the sea began to develop once activities emerged in those areas, among others transportation, research, exploration, defense, and exploitation. Naturally the law of the sea developed first, followed early in the twentieth century by air law and later in the century by space law. The law of the sea, the law of the air, and the law of outer space obviously influence each other, and ideas have been borrowed from one field and applied to another. This article examines some analogies and differences between outer space law and the law of the sea, especially from the perspectives of legal status, the exploration and exploitation of natural resources, and the environment. As far as legal status is concerned, both outer space and the high seas are res extra commercium. The high seas are res extra commercium under both customary international law and treaty; the status of outer space differs depending on which applies. Under international customary law, outer space constitutes res extra commercium while celestial bodies are res nullius; among the contracting states of the 1967 Outer Space Treaty, however, both outer space and celestial bodies are declared res extra commercium. As for the exploration and exploitation of natural resources, both the Moon and other celestial bodies under the 1979 Moon Agreement and the deep seabed under the 1982 United Nations Convention on the Law of the Sea are the common heritage of mankind.
The latter provides very systematic models such as the International Sea-bed Authority, whereas the international regime for the former will only be established once the exploitation of the natural resources of celestial bodies other than the Earth is about to become feasible. Thus the Moon Agreement could not impose a moratorium, but would merely permit orderly attempts to establish that such exploitation was in fact feasible and practicable, by allowing experimental beginnings and thereafter pilot operations. As Professor Carl Christol said, until the parties to the Moon Agreement are able to put into operation the legal regime for the equitable sharing of benefits, they remain free to disregard the Common Heritage of Mankind principle; parties to one or both of the agreements retain jurisdiction over national space activities. As far as protection of the environment is concerned, the legal instruments for the sea are more systematically developed than those for outer space: although concern about the environmental threats arising from space activities is growing these days, there is as yet no separate legal instrument to deal with those problems.
