• Title/Summary/Keyword: System integration


A Study on the establishment of IoT management process in terms of business according to Paradigm Shift (패러다임 전환에 의한 기업 측면의 IoT 경영 프로세스 구축방안 연구)

  • Jeong, Min-Eui; Yu, Song-Jin
    • Journal of Intelligence and Information Systems, v.21 no.2, pp.151-171, 2015
  • This study examined the concept of the Internet of Things (IoT), the major issues, and IoT trends in the domestic and international markets, and reviewed the advent of the IoT era as a 'paradigm shift'. It proposes an appropriate response strategy from the enterprise perspective. Global competition has begun in the IoT market, so for businesses to be competitive and responsive, the efforts of companies themselves are needed as well as those of government. In particular, a faster and more efficient strategy is required to cope appropriately with this dynamic environment. In other words, we propose a management strategy that can respond to the tipping point of the IoT competitive era through the lens of a paradigm shift. Through a comparative analysis of past management paradigms and the IoT management paradigm, we forecast the emergence of four paradigms: (i) knowledge- and learning-oriented management, (ii) technology- and innovation-oriented management, (iii) demand-driven management, and (iv) global collaboration management. Knowledge- and learning-oriented management is expected to become a new management paradigm due to the development of IT and information-processing technology: alongside the rapid development of IT infrastructure and of data processing and storage, knowledge sharing and learning have become more important. The current hardware-oriented management paradigm will shift to a software-oriented one; in particular, the software and platform market, a key component of the IoT ecosystem, is expected to be led by technology- and innovation-oriented management. In 2011, Gartner announced the concept of 'Demand-Driven Value Networks (DDVN)', which emphasizes the value of the network as a whole; demand-driven management therefore creates demand through advanced processes rather than simply responding to existing demand.
The global collaboration management paradigm creates value through fusion between technologies, between countries, and between industries. In particular, cooperation between large enterprises with financial resources and brand power and venture companies with creative ideas and technology will generate positive synergies, building a win-win environment for large and small companies alike. To establish an enterprise management strategy that copes with this paradigm shift, this study utilized the 'RTE cyclone model' proposed by Gartner. The RTE (Real Time Enterprise) concept consists of three stages: Lead, Operate, and Manage. The Lead stage utilizes capital to strengthen business competitiveness; its goal is to link external stimuli to strategy development and to execute the company's business strategy with respect to capital and investment activities and environmental changes. The Manage stage responds appropriately to threats and internalizes the goals of the enterprise. The Operate stage takes action to increase the efficiency of services across the enterprise, achieving integration and simplification of processes with real-time data capture. Since the RTE concept has practical value for management strategy, we adapt it and propose an 'IoT-RTE Cyclone model' that emphasizes enterprise agility, based on real-time monitoring, analysis, and action through IT and IoT technology. Because the model integrates the business processes of each sector of the enterprise and supports its overall services, it can serve as an effective response strategy. In particular, as the IoT-RTE Cyclone Model responds to external events, waste elements are removed each time the process repeats, so the process operates more efficiently and with greater agility.
Because it supports the overall services of the enterprise, the IoT-RTE Cyclone Model can be used as an effective response strategy in the rapidly changing IoT era. When the model leverages a collaborative system among enterprises, breakthrough cost savings can be expected through improved competitiveness, shorter global lead times, and minimized duplication.

Evaluation of Web Service Similarity Assessment Methods (웹서비스 유사성 평가 방법들의 실험적 평가)

  • Hwang, You-Sub
    • Journal of Intelligence and Information Systems, v.15 no.4, pp.1-22, 2009
  • The World Wide Web is transitioning from being a mere collection of documents that contain useful information toward providing a collection of services that perform useful tasks. The emerging Web service technology has been envisioned as the next technological wave and is expected to play an important role in this recent transformation of the Web. By providing interoperable interface standards for application-to-application communication, Web services can be combined with component-based software development to promote application interaction and integration both within and across enterprises. To make Web services for service-oriented computing operational, it is important that Web service repositories not only be well structured but also provide efficient tools for developers to find reusable Web service components that meet their needs. As the potential of Web services for service-oriented computing is being widely recognized, the demand for effective Web service discovery mechanisms is growing concomitantly. A number of techniques for Web service discovery have been proposed, but the discovery challenge has not been satisfactorily addressed: most existing solutions are either too rudimentary to be useful or too domain-dependent to be generalizable. In this paper, we propose a Web service organizing framework that combines clustering techniques with string matching and leverages the semantics of the XML-based service specification in WSDL documents. We believe that this is one of the first attempts at applying data mining techniques in the Web service discovery domain. Our proposed approach has several appealing features: (1) it minimizes the prior knowledge required from both service consumers and publishers; (2) it avoids exploiting domain-dependent ontologies; and (3) it is able to visualize the semantic relationships among Web services.
We have developed a prototype system based on the proposed framework using an unsupervised artificial neural network and empirically evaluated the proposed approach and tool using real Web service descriptions drawn from operational Web service registries. We report on some preliminary results demonstrating the efficacy of the proposed approach.
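The framework above combines clustering with string matching over terms drawn from WSDL documents. As an illustrative sketch (not the paper's actual prototype, which uses an unsupervised artificial neural network), the following groups hypothetical WSDL operation names by edit-based string similarity with a greedy single-link pass:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Normalized string similarity between two service operation names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def cluster_services(names, threshold=0.6):
    """Greedy single-link clustering: a name joins the first cluster
    containing a sufficiently similar member, else starts a new cluster."""
    clusters = []
    for name in names:
        for cluster in clusters:
            if any(similarity(name, member) >= threshold for member in cluster):
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters

# Hypothetical operation names as might be extracted from WSDL documents
ops = ["getWeatherForecast", "getWeatherReport", "sendEmailMessage",
       "sendMailMessage", "convertCurrency"]
print(cluster_services(ops))
```

The threshold and the single-link policy are tuning choices; a real repository would also weight terms from the WSDL type and message declarations rather than operation names alone.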


A Study of Family Caregiver's Burden for the Terminally Ill Patients (지역사회 말기질환자 가족 부담감에 관한 연구)

  • Han, Sung-Suk; Ro, You-Ja; Yang, Soo; Yoo, Yang-Sook; Kim, Sek-Il; Hwang, Hee-Hyung
    • Journal of Korean Academic Society of Home Health Care Nursing, v.10 no.1, pp.58-72, 2003
  • The purpose of this study was to describe the burden perceived by caregivers of terminally ill patients and to analyze its relationship with the demographics, illness characteristics, family relationships, and economic factors of the families and patients. The sample comprised 132 caregivers caring for terminally ill patients in Gyeonggi Province and Seoul, Korea. The study period was from August to September 2002. The perceived burden of the family caregiver was measured with the burden scale (20 items, 4-point scale) developed by Montgomery et al. (1985). Data were analyzed with SAS using t-tests and ANOVA. The results were as follows. 1. The mean family caregiver burden score was 3.02, indicating that caregivers perceive a severe level of burden. The highest-scoring items were 'I feel it is painful to watch the patient's disease' (3.77), 'I feel afraid of what the future holds for my patient' (3.66), and 'I feel it reduces my amount of private time' (3.64). 2. The caregiver's burden was significantly related to the patient's gender (F=3.17, p=0.0020), the patient's job (F=2.49, p=0.0476), the caregiver's age (F=4.29, p=0.0030), and the caregiver's job (F=2.49, p=0.0476). 3. The caregiver's burden showed no significant difference according to illness characteristics. 4. The caregiver's burden was significantly associated with the patient's family relationship (F=4.05, p=0.0041) and the mean daily care period (F=47.18,
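The group comparisons reported above were computed in SAS with t-tests and ANOVA. As a minimal illustration of the underlying calculation only (the burden scores below are made up, not the study's data), a one-way ANOVA F statistic can be computed as:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over lists of observations."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical 4-point burden scores for three caregiver age groups
groups = [[3.1, 3.4, 2.9, 3.2], [3.6, 3.8, 3.5, 3.9], [2.4, 2.6, 2.8, 2.5]]
print(round(one_way_anova_f(groups), 2))
```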


An Analysis of Proper Curriculum Organization Plan for Elementary and Secondary Invention/Intellectual Property Education (초·중등 발명·지식재산 교육과정의 적정 편성 방안 연구)

  • Lee, Kyu-Nyo; Lee, Byung-Wook
    • 대한공업교육학회지, v.42 no.1, pp.106-124, 2017
  • This study used a two-round Delphi survey of experts to propose a proper organization plan for the goals and curriculum of elementary and secondary invention/intellectual property education. The results are as follows. First, the key objective of invention/intellectual property education at each school level was evaluated as appropriate: elementary schools aim at 'fostering awareness of and attitudes toward invention' (M=4.5); middle schools at 'understanding of invention processes and methods' (M=4.2); general high schools at 'application and evaluation of invention methods' (M=4.1); and specialized high schools at 'understanding and application of employee invention' (M=4.6). The objectives and goals of education at each school level were also evaluated as appropriate. Second, although the proper organization plans for the key learning elements of elementary and secondary invention/intellectual property education were almost identical to the actual organization found in the preceding literature, the balance of learning elements requires overall change according to the objectives and goals of school-level invention/intellectual property education. An appropriate organization would focus on the basic learning elements (A, B, C, D, E, and F) for elementary and middle schools (73.2%, 65.1%), while for high schools (51.0%) it would somewhat reduce those elements and increase the expanded learning elements connected to invention courses (H) and patent application (K). Third, the elementary and secondary invention/intellectual property education system should be oriented to its objectives and goals. To achieve this, an appropriate organization plan for the key learning elements should be made for each school level, based on Tyler's principles of learning organization: continuity, sequence, and integration.
Specialized high schools, in particular, need to be differentiated from general high schools as well as from elementary and middle schools. Additionally, for understanding and applying employee invention, the invention/intellectual property education system needs to be established within secondary vocational education.

The Characteristic of Research Regulation in Recent Japanese Medical World (최근 일본의 의학계 연구규율의 특색)

  • Song, Young-mi
    • The Korean Society of Law and Medicine, v.20 no.2, pp.173-206, 2019
  • This study examines the characteristics of the regulation of Japanese clinical research in recent years. First, Japan, like Korea, had a policy of severe punishment for research misconduct, but it has recently shifted the direction of its research ethics policy from restriction toward securing the public character of research through educational training. In addition, the Act on Clinical Research, which took effect in April 2018, seeks to recruit excellent researchers and has integrated clinical research with clinical trials of medicines by raising the transparency of funding, mandating the disclosure of clinical research funding information, and consolidating ethics screening. Second, Japan has consolidated its ethics guidelines from a dual system consisting of the ethics guideline on epidemiological research (the 'epidemiological guideline') and the ethics guideline on clinical research (the 'clinical guideline') into a single ethics guideline on medical research involving human subjects (the 'integrated guideline'). This remedies the duplication and gaps in the guidelines needed for clinical and epidemiological research; it also introduces a risk-evaluation system for protecting human subjects and clarifies the concept of 'invasiveness', a preliminary consideration in that evaluation. The evaluation of risks and benefits, a common element of international regulations on clinical research, is the method for checking whether a study is designed appropriately, the means by which an Institutional Review Board decides whether the risk to human subjects can be justified, and an important standard for prospective subjects deciding whether to participate in a clinical trial. It is therefore meaningful to define the concept of 'invasiveness' as a preliminary consideration in risk evaluation for human subjects.
This study examines Japanese clinical research with a focus on the change in awareness regarding the prevention of research misconduct, the improvement of research efficiency through integrated research screening and integrated protection of human subjects, and the clarification and extension of the scope of the concept of 'invasiveness' as a preliminary step in risk evaluation to protect human subjects.

Clinical Usefulness of PET-MRI in Lymph Node Metastasis Evaluation of Head and Neck Cancer (두경부암 림프절 전이 평가에서 PET-MRI의 임상적 유용성)

  • Kim, Jung-Soo; Lee, Hong-Jae; Kim, Jin-Eui
    • The Korean Journal of Nuclear Medicine Technology, v.18 no.1, pp.26-32, 2014
  • Purpose: As PET-MRI, which has excellent soft-tissue contrast, has been developed as an integrated system, many studies on its clinical application are being conducted in comparison with existing imaging modalities. Because PET-MRI is actively used for head and neck cancer diagnosis in our hospital, we diagnosed lymph node metastasis before surgery and evaluated the clinical usefulness of head and neck PET-MRI scans against pathological findings and the evaluation of metastasis to surrounding tissue. Materials and Methods: The subjects were 100 head and neck cancer patients at SNUH from January to August 2013. 18F-FDG (5.18 MBq/kg) was injected intravenously and, after 60 min of rest, torso (body TIM coil, VIBE-Dixon) and dedicated (head-neck TIM coil, UTE, Dotarem injection) scans were performed on a Biograph mMR 3T (Siemens, Munich). Data were reconstructed using iterative reconstruction, and lymph node metastasis was read on a Syngo.via workstation. Subsequently, pathological findings and diagnoses before and after surgery were reviewed in the hospital's integrated medical information system (EMR, best-care). Each patient's diagnostic information was entered into a 2×2 decision matrix and classified as true positive (TP), true negative (TN), false positive (FP), or false negative (FN). From these classified results, the sensitivity, specificity, accuracy, false negative rate, and false positive rate were calculated. Results: In the PET-MRI scans of the head and neck cancer patients, there were 49 positive and 51 negative cases of lymph node metastasis, while the pathological results before and after surgery showed 46 positive and 54 negative cases.
TP cases, positive for lymph node metastasis in both tests, numbered 34; FP cases, positive on PET-MRI but negative on pathology, numbered 4; FN cases, negative on PET-MRI but positive on pathology, numbered 1; and TN cases, negative in both tests, numbered 50. Based on these data, the sensitivity of PET-MRI in head and neck cancer patients was 97.8%, the specificity 92.5%, the accuracy 95%, the FN rate 2.1%, and the FP rate 7.0%. Conclusion: PET-MRI, which can exploit functional information acquired through high tissue contrast and various sequences, was considered useful for determining the treatment strategy before and after surgery in head and neck cancer diagnosis, and for evaluating recurrence, distant metastasis, and cervical lymph node metastasis of unknown origin. In addition, the clinical usefulness of PET-MRI, confirmed through pathological testing, integrated diagnosis, and follow-up scans, was considered sufficient for it to serve as a standard diagnostic scan for head and neck cancer; further research on the development of optimal MR sequences and their clinical application is required.
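The reported metrics follow directly from the 2×2 decision matrix. A minimal sketch of the calculation (the counts below are illustrative: they are chosen to be consistent with the reported 97.8% sensitivity and 95% accuracy for n = 100, since the abstract's TP figure does not sum with the other cells to 100):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic metrics from a 2x2 decision matrix."""
    sensitivity = tp / (tp + fn)                 # true positive rate
    specificity = tn / (tn + fp)                 # true negative rate
    accuracy = (tp + tn) / (tp + fp + fn + tn)   # overall agreement
    return sensitivity, specificity, accuracy

# Illustrative counts consistent with the published sensitivity/accuracy
sens, spec, acc = diagnostic_metrics(tp=45, fp=4, fn=1, tn=50)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} accuracy={acc:.1%}")
```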


Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems, v.16 no.4, pp.99-112, 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms: it finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention in machine learning and artificial intelligence because of its remarkable performance improvement and its flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVM. In that literature, DT ensemble studies have demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensembles have not shown comparably remarkable performance. Recently, several works have reported that ensemble performance can degrade when the multiple classifiers of an ensemble are highly correlated, resulting in a multicollinearity problem that leads to performance degradation, and have proposed differentiated learning strategies to cope with it. Hansen and Salamon (1990) argued that it is necessary and sufficient for the performance enhancement of an ensemble that it contain diverse classifiers. Breiman (1996) showed that ensemble learning can increase the performance of unstable learning algorithms but does not yield remarkable improvement for stable ones. Unstable learning algorithms such as decision tree learners are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers; ensembles of unstable learners can therefore guarantee some diversity among the classifiers.
On the contrary, stable learning algorithms such as NN and SVM generate similar classifiers despite small changes in the training data, so the correlation among the resulting classifiers is very high. This high correlation results in a multicollinearity problem, which degrades ensemble performance. Kim's work (2009) compared the performance of traditional prediction algorithms such as NN, DT, and SVM in bankruptcy prediction for Korean firms. It reports that the stable algorithms NN and SVM have higher predictability than the unstable DT, whereas with respect to ensemble learning, the DT ensemble shows more improved performance than the NN and SVM ensembles. Further analysis with variance inflation factors (VIF) empirically shows that the performance degradation of the ensembles is due to multicollinearity, and the work proposes that ensemble optimization is needed to cope with this problem. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) in order to improve NN ensemble performance. Coverage optimization is a technique for choosing a sub-ensemble from an original ensemble so as to guarantee the diversity of the selected classifiers. CO-NN uses a genetic algorithm (GA), which has been widely applied to optimization problems, to handle the coverage optimization. The GA chromosomes are encoded as binary strings, each bit of which indicates an individual classifier. The fitness function is defined as the maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the commonly used measures of multicollinearity, is added to ensure classifier diversity by removing highly correlated classifiers. We use Microsoft Excel and the GA software package Evolver.
Experiments on company failure prediction have shown that CO-NN stably enhances the performance of NN ensembles by choosing classifiers with their correlations taken into account. Classifiers with a potential multicollinearity problem are removed by the coverage optimization process, and CO-NN accordingly outperforms a single NN classifier and the NN ensemble at the 1% significance level, and the DT ensemble at the 5% significance level. Further research issues remain: first, a decision optimization process to find the optimal combination function should be considered; second, various learning strategies to deal with data noise should be introduced in more advanced future work.
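The binary-chromosome encoding described above can be sketched in a few lines. This is an illustrative toy, not the paper's CO-NN implementation: it uses synthetic classifier predictions, plain majority-vote accuracy as the fitness (the VIF constraint is omitted), and a generic GA rather than Evolver:

```python
import random

random.seed(0)

# Toy setup: true labels and predictions from 8 hypothetical base classifiers
N_CLF, N_SAMPLES = 8, 60
truth = [random.randint(0, 1) for _ in range(N_SAMPLES)]
# Each classifier is right with a different probability
preds = [[y if random.random() < acc else 1 - y for y in truth]
         for acc in (0.65, 0.66, 0.7, 0.72, 0.6, 0.75, 0.74, 0.55)]

def vote_accuracy(mask):
    """Majority-vote accuracy of the sub-ensemble selected by a bit mask."""
    chosen = [p for p, bit in zip(preds, mask) if bit]
    if not chosen:
        return 0.0
    hits = 0
    for i, y in enumerate(truth):
        vote = sum(p[i] for p in chosen)
        hits += int((vote * 2 > len(chosen)) == bool(y))
    return hits / N_SAMPLES

def evolve(pop_size=30, gens=40, p_mut=0.1):
    """Simple GA: tournament selection, one-point crossover, bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(N_CLF)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():
            return max(random.sample(pop, 3), key=vote_accuracy)
        nxt = []
        while len(nxt) < pop_size:
            a, b = pick(), pick()
            cut = random.randrange(1, N_CLF)
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=vote_accuracy)

best = evolve()
print(best, round(vote_accuracy(best), 3))
```

In the paper's formulation the fitness would additionally penalize chromosomes whose selected classifiers exceed a VIF threshold, which is what removes the near-duplicate members.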

Generation of Transgenic Rice without Antibiotic Selection Marker through Agrobacterium-mediated Co-transformation System (아그로박테리움 동시 형질전환 시스템을 통한 항생제 선발 마커가 없는 형질전환벼의 생산)

  • Park, Soo-Kwon; Kwon, Tack-Min; Lee, Jong-Hee; Shin, Dong-Jin; Hwang, Woon-Ha; Song, You-Chun; Cho, Jun-Hyun; Nam, Min-Hee; Jeon, Seung-Ho; Lee, Sang-Yeol; Park, Dong-Soo
    • Journal of Life Science, v.22 no.9, pp.1152-1158, 2012
  • The development of transgenic plants that increase crop yield or disease resistance is a promising way to address world food shortages. However, the persistence of marker genes in crops raises serious public concerns about the safety of transgenic crops. In the present paper, we developed marker-free transgenic rice carrying the high-molecular-weight glutenin subunit (HMW-GS) gene Dx5 from the Korean wheat cultivar 'Jokyeong' using an Agrobacterium-mediated co-transformation method. Two expression cassettes, comprising separate DNA fragments containing only the Dx5 gene and only the hygromycin resistance (HPTII) gene, were introduced separately into the Agrobacterium tumefaciens EHA105 strain for co-infection. Rice calli were infected at a 3:1 ratio of the EHA105 strain harboring the Dx5 expression cassette to the strain harboring the HPTII cassette. Among 66 hygromycin-resistant transformants, we obtained two transgenic lines with both the Dx5 and HPTII genes inserted into the rice genome, and we reconfirmed their integration by Southern blot analysis. Wheat Dx5 transcripts in T1 rice seeds were examined with semi-quantitative RT-PCR. Finally, marker-free plants containing only the Dx5 gene were successfully screened in the T1 generation. These results show that a co-infection system with two expression cassettes can be an efficient strategy for generating marker-free transgenic rice plants.

Hardware Approach to Fuzzy Inference: ASIC and RISC

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 1993.06a, pp.975-976, 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, pursued through two distinct approaches. The first is to use application-specific integrated circuit (ASIC) technology: the fuzzy inference method is implemented directly in silicon. The second approach, which is in its preliminary stage, is to use a more conventional microprocessor architecture; here, we use a quantitative technique employed by designers of reduced instruction set computers (RISC) to modify a microprocessor architecture. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested, both in a full-custom CMOS technology. The second, more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation and executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 fuzzy logical inferences per second (FLIPS), storing and executing 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock, has a 3-stage pipeline, and initiates the computation of a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM);
on-chip fuzzification by a table-lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the format 'IF A and B and C and D THEN Do E and Do F'; with this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the simpler format 'IF A and B THEN Do E' using the same datapath; with this format, the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip in a VMEbus environment; high-level C functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach of the kind developed by RISC designers. In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a rule-based fuzzy inference program from the introduction of specialized instructions, i.e., min and max instructions; the minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union.
We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions, which are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single dedicated program, so tailoring an embedded processor for fuzzy control in this way is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a hypothetical MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip; the software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements; the second row is the time required for a single inference; the last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches, even with a specialized fuzzy microprocessor. As for design time and cost, the two approaches represent two extremes: an ASIC approach is extremely expensive. It is therefore an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

TABLE I. INFERENCE TIME WITH 51 RULES

                      MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
  6000 inferences     125 s                  49 s                        0.0038 s
  1 inference         20.8 ms                8.2 ms                      6.4 us
  FLIPS               48                     122                         156,250
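The max-min inference mechanism the chips implement can be sketched in software. Below is a minimal Mamdani sketch with a hypothetical rule base; fuzzy sets are sampled as 64-element arrays as on the UNC/MCNC chip, with table-style singleton fuzzification and centroid defuzzification. Note how min and max dominate the inner loop, which is what motivates the min/max instructions discussed above:

```python
def tri(center, width, n=64, lo=0.0, hi=1.0):
    """Triangular membership function sampled on an n-point universe."""
    step = (hi - lo) / (n - 1)
    return [max(0.0, 1 - abs(lo + i * step - center) / width) for i in range(n)]

UNIVERSE = [i / 63 for i in range(64)]
cold, hot = tri(0.2, 0.3), tri(0.8, 0.3)      # input fuzzy sets
low, high = tri(0.25, 0.35), tri(0.75, 0.35)  # output fuzzy sets

def infer(x, rules):
    """Max-min composition: clip each consequent by the rule's firing
    strength (min), aggregate with max, then take the centroid."""
    idx = min(range(64), key=lambda i: abs(UNIVERSE[i] - x))
    agg = [0.0] * 64
    for antecedent, consequent in rules:
        strength = antecedent[idx]             # table-lookup fuzzification
        for i in range(64):
            agg[i] = max(agg[i], min(strength, consequent[i]))
    num = sum(m * u for m, u in zip(agg, UNIVERSE))
    den = sum(agg)
    return num / den if den else 0.0

rules = [(cold, high), (hot, low)]  # e.g. "IF temp is cold THEN heater is high"
print(round(infer(0.2, rules), 2), round(infer(0.8, rules), 2))
```

The ASIC evaluates all rules in parallel across its datapaths, whereas this loop evaluates them sequentially; the arithmetic per element is the same min/max pattern.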


EU Integration and Its Aviation Relationship with Third Countries (유럽연합(EU) 통합과 제3국과의 항공관계)

  • Lee, Jong-Sik
    • The Korean Journal of Air & Space Law and Policy, v.21 no.1, pp.135-167, 2006
  • Air service agreements between EU Member States and third countries concluded after the Second World War by Sweden, Finland, Belgium, Luxembourg, Austria, the Netherlands, Denmark, and the United Kingdom infringe EU law: they authorize the third countries to withdraw, suspend, or limit the traffic rights of air carriers designated by the signatory states. According to the Court of Justice of the European Communities (CJEC), these agreements infringe EU law in two respects. On the one hand, the presence of nationality clauses infringes the right of European airlines to non-discriminatory market access to routes between all Member States and third countries. On the other hand, only the EU has the authority to enter into this type of commitment where the agreements affect the exercise of EU competence, i.e., involve an area covered by EU legislation. The Court held that since the third countries have the right to refuse a carrier, these agreements constitute an obstacle to the freedom of establishment and the freedom to provide services, as the opening of European skies to third countries' companies is not reciprocal for all EU airlines. In conclusion, in order to reconstruct this body of public international air law, new negotiations between EU Member States and third countries, especially the US, must be designed around an adequate set of principles, so that Member States, in their bilateral air service relations with third countries, can consider the following three models: first, developing a new model of public international air law, such as a new Bermuda III; second, reconstructing new freedoms of the air, for example the 7th, 8th, and 9th freedoms; and third, exploring new approaches, such as the complex-systems theory developed in the recent social sciences, to address worldwide global problems rather than merely bilateral problems between EU Member States and the United States.
These examples may also offer lessons for air talks between the European Union and the ROK.
