• Title/Summary/Keyword: rule application


Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems, v.20 no.3, pp.19-43, 2014
  • The Pareto principle, also known as the 80-20 rule, states that for many events, including natural phenomena, roughly 80% of the effects come from 20% of the causes. It has been recognized as a golden rule in business, with wide application of findings such as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, which holds that "the trivial many" produce more value than "the vital few," has gained popularity in recent times with the tremendous reduction of distribution and inventory costs enabled by the development of ICT (Information and Communication Technology). This study set out to illuminate how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization transcending geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and resolve the social dilemma arising from the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge but also by transforming and integrating it. From this perspective, the relative distribution of knowledge sharing among participants can matter as much as the absolute amount of individual knowledge sharing. In particular, whether a greater contribution by the top 20 percent of participants enhances the efficiency of overall knowledge collaboration is an issue of interest. This study examines the effect of this distribution of knowledge sharing on the efficiency of knowledge collaboration and extends the analysis to reflect work characteristics. All analyses were conducted on actual behavioral data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, the highest quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of contributions made by the top 20 percent of participants to the total number of contributions in an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income within a group of people, was applied to reveal the effect of inequality in knowledge contribution. Hypotheses were set up on the assumption that a higher ratio of contributions from more highly motivated participants leads to higher collaboration efficiency, but that if the ratio gets too high, collaboration efficiency deteriorates because overall informational diversity is threatened and the contributions of less motivated participants are discouraged. Cox regression models were formulated for each of the focal variables, the Pareto ratio and the Gini coefficient, with seven control variables such as the number of editors involved in an article, the average time between successive edits of an article, and the number of sections a featured article has. The dependent variable of the Cox models is the time from article initiation to promotion to featured-article status, indicating the efficiency of knowledge collaboration.
To examine whether the effects of the focal variables vary with the characteristics of a group task, we classified the 2,978 featured articles into two categories: academic and non-academic. Academic articles are those citing at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal. We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, this curvilinear effect is more pronounced for more academic tasks.
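The two focal variables are simple to compute from per-editor contribution counts. Below is a minimal sketch, assuming the edit counts per editor for one article group are available as a list; the function names and sample data are illustrative, not taken from the paper.

```python
import numpy as np

def pareto_ratio(contributions):
    """Share of all contributions made by the top 20% of participants."""
    counts = np.sort(np.asarray(contributions, dtype=float))[::-1]   # descending
    top_n = max(1, int(np.ceil(0.2 * len(counts))))                  # top 20% of editors
    return counts[:top_n].sum() / counts.sum()

def gini(contributions):
    """Gini coefficient of the contribution distribution (0 equal, ~1 maximally unequal)."""
    x = np.sort(np.asarray(contributions, dtype=float))              # ascending
    n = len(x)
    # Standard formula with 1-based ranks i: G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

edits_per_editor = [120, 45, 30, 8, 5, 3, 2, 1, 1, 1]  # hypothetical article group
print(pareto_ratio(edits_per_editor), gini(edits_per_editor))
```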

An Ontology-Based Movie Contents Recommendation Scheme Using Relations of Movie Metadata (온톨로지 기반 영화 메타데이터간 연관성을 활용한 영화 추천 기법)

  • Kim, Jaeyoung;Lee, Seok-Won
    • Journal of Intelligence and Information Systems, v.19 no.3, pp.25-44, 2013
  • Accessing movie content has become easier and more frequent with the advent of smart TVs, IPTV, and web services that can be used to search for and watch movies. In this situation, users increasingly search for movie content matching their preferences. However, because the volume of available movie content is so large, users need considerable effort and time to find it. Hence, there has been much research on personalized item recommendation through analysis and clustering of user preferences and user profiles. In this study, we propose a recommendation system that uses an ontology-based knowledge base. Our ontology can represent not only relations between the metadata of movies but also relations between metadata and a user's profile; the relations among metadata indicate similarity between movies. To build the knowledge base, our ontology model considers two aspects: the movie metadata model and the user model. For the ontology-based movie metadata model, we selected genre, actor/actress, keywords, and synopsis as the main metadata, since these influence which movies users choose as interesting. The user model contains the user's demographic information and the relations between the user and movie metadata. In our model, the movie ontology consists of seven concepts (Movie, Genre, Keywords, Synopsis Keywords, Character, and Person), eight attributes (title, rating, limit, description, character name, character description, person job, person name), and ten relations between concepts. For our knowledge base, we entered individual data from 14,374 movies for each concept of the contents ontology model. This movie metadata knowledge base is used to search for movies related to the metadata that interests a user, and it can find similar movies through the relations between concepts. We also propose an architecture for movie recommendation consisting of four components. The first component searches for candidate movies based on the demographic information of the user: we group users by demographic information so that movies can be recommended per group, define rules for assigning users to groups, and generate the query used to search for candidate movies. The second component searches for candidate movies based on user preferences. When choosing a movie, users consider metadata such as genre, actor/actress, synopsis, and keywords; users enter their preferences, and the system searches for movies accordingly. Unlike existing movie recommendation systems, the proposed system can find similar movies through the relations between concepts. Each metadata item of a recommended candidate movie carries a weight that is used to decide the recommendation order. The third component merges the results of the first two components: it calculates a weight for each movie from the weight values of its metadata and sorts the movies by that weight. The fourth component analyzes the result of the third component, decides the level of contribution of each metadata item, and applies that contribution weight to the metadata. The result of this step is used as the final recommendation for users. We tested the usability of the proposed scheme with a web application implemented for the experiment using JSP, JavaScript, and the Protégé API.
In our experiment, we collected the results of 20 men and women, ranging in age from 20 to 29, and used 7,418 movies with a rating of at least 7.0. We presented Top-5, Top-10, and Top-20 recommended movies to each user, who then chose the movies of interest. On average, users chose 2.1 interesting movies in Top-5, 3.35 in Top-10, and 6.35 in Top-20, which is better than the results yielded by each metadata item alone.
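As a simplified illustration of the third component's merge-and-rank step, the sketch below combines per-metadata weights into a single movie score; the contribution weights and candidate data are hypothetical, not values from the paper.

```python
# Minimal sketch of the merge-and-rank step, assuming each candidate movie
# arrives with per-metadata match weights from the two search paths.
METADATA_CONTRIBUTION = {"genre": 0.4, "actor": 0.3, "keyword": 0.2, "synopsis": 0.1}

def movie_score(metadata_weights: dict) -> float:
    """Combine per-metadata match weights into a single ranking score."""
    return sum(METADATA_CONTRIBUTION.get(field, 0.0) * w
               for field, w in metadata_weights.items())

candidates = {
    "Movie A": {"genre": 0.9, "actor": 0.2, "keyword": 0.5},
    "Movie B": {"genre": 0.4, "actor": 0.8, "synopsis": 0.6},
}

ranked = sorted(candidates, key=lambda m: movie_score(candidates[m]), reverse=True)
print(ranked)  # recommendation order by combined weight
```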

Strategic Antitrust Policy Promoting Mergers to Enhance Domestic Competitiveness (기업결합규제(企業結合規制)와 국제경쟁력(國際競爭力))

  • Seong, So-mi
    • KDI Journal of Economic Policy, v.12 no.3, pp.153-172, 1990
  • The present paper investigates the potential value of strategic antitrust policy in an oligopolistic international market. The market is characterized by a non-cooperative Cournot-Nash equilibrium and by asymmetry in costs among firms in the world market. The model is useful for two reasons. First, it is important in the context of policy-making to examine the conditions under which it may be beneficial to relax antitrust law to enhance competitiveness. Second, the explicit derivation of the level of cost-saving required for a gain in total domestic surplus provides an empirical rule for excluding industries that do not satisfy the requirements for a socially beneficial antitrust exemption. Results of the analysis include a criterion that tells how the cost-saving and concentration effects of a merger offset each other. The criterion is derived from fairly general assumptions on demand functions and is simple enough to be applied as a part of the merger guidelines. Another interesting policy implication of our analysis is that promoting mergers would not be a beneficial strategy in a net importing industry where cost-saving opportunities are thin. Cost-saving domestic mergers are more likely to increase national welfare in exporting industries. The best candidate industries for application of strategic antitrust policy are those with the following characteristics: (i) a large potential for efficiency enhancement; (ii) high market concentration at the world but not the domestic level; (iii) a high ratio of exports to imports. Recently, many policymakers and economists in Korea have also come to believe that the appropriate antitrust policy in an era of increased foreign competition may actually be to encourage rather than to prohibit domestic mergers. The Industry Development Act of 1986 and the proposed bill for Mergers and Conversions in the Financial Industry of 1990 reflect this changing perspective on antitrust policy. Antitrust laws may burden domestic firms in the sense that they have a more constrained strategy set. Expenditures to avoid antitrust attacks could also increase costs for domestic firms. But there is no clear evidence that the impact of antitrust policy is significant enough to harm the competitiveness of domestic firms. As a matter of fact, it is necessary for domestic financial institutions to become large in scale in this era of globalization. However, the absence of empirical evidence for efficiency enhancement from mergers suggests caution in the relaxation of antitrust standards.
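The abstract does not reproduce the criterion itself; as a stylized illustration of its ingredients, the following linear-demand Cournot sketch (a simplification of my own, not the paper's derivation) shows where the cost-saving and concentration effects enter.

```latex
% Stylized Cournot illustration (a simplification, not the paper's derivation).
% With linear inverse demand $p = a - bQ$ and firm-specific marginal costs $c_i$,
% the Cournot-Nash output and profit of firm $i$ among $n$ firms are
\[
  q_i = \frac{a - n\,c_i + \sum_{j \neq i} c_j}{b\,(n+1)}, \qquad
  \pi_i = b\,q_i^{2}.
\]
% A domestic merger removes one competitor ($n \to n-1$) but cuts the merged
% firm's cost by $\Delta c$. It raises total domestic surplus
% $W = CS + \sum_i \pi_i$ only if the cost-saving effect outweighs the
% concentration effect (higher price, lower consumer surplus); equating $W$
% before and after the merger yields the critical $\Delta c$ that a socially
% beneficial merger must exceed.
```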


Development of Traffic Volume Estimation System in Main and Branch Roads to Estimate Greenhouse Gas Emissions in Road Transportation Category (도로수송부문 온실가스 배출량 산정을 위한 간선 및 지선도로상의 교통량 추정시스템 개발)

  • Kim, Ki-Dong;Lee, Tae-Jung;Jung, Won-Seok;Kim, Dong-Sool
    • Journal of Korean Society for Atmospheric Environment, v.28 no.3, pp.233-248, 2012
  • Emissions from the energy sector accounted for 84.7% of all national emissions in 2007. Within the energy sector, mobile sources, one of the key categories, accounted for 19.4%, and road transport occupied the dominant portion of that category. Road transport emissions can be estimated either from the fuel consumed (Tier 1) or from the distance travelled by vehicle type and road type (higher Tiers). The latter approach is better suited to estimating $CO_2$, $CH_4$, and $N_2O$ emissions simultaneously in local administrative districts. The objective of this study was to estimate the GHG emissions from road transportation for the 31 municipalities of Gyeonggi Province, Korea. In 2008, the municipalities consisted of 2,014 towns, expressed as Dong and Ri, the smallest administrative district units. Since mobile sources move across city and province borders, emissions estimated from fuel sold cannot be made consistent between neighbouring cities and provinces. On the other hand, estimation from distance travelled is hampered by the difficulty of acquiring key activity data, such as traffic volume, vehicle type and model, and road type, for small towns. To solve this problem, we applied hierarchical cluster analysis to separate town-by-town road patterns (clusters) based on a priori activity information, including traffic volume, population, area, and branch road length, obtained from 151 small towns. After identifying 10 road patterns, a rule-based expert system was developed in Visual Basic for Applications (VBA) to assign unknown road patterns to one of the 10 known patterns. The expert system was self-verified against the original reference information, and the objects in each homogeneous pattern were then used to regress traffic volume on population, area, and branch road length. The program was then applied to assign all unknown towns to a known pattern and to estimate their traffic volumes automatically with the per-pattern regression equations. The VKT (vehicle kilometers travelled) for each vehicle type in each town was further calculated and mapped by GIS (geographic information system), and the road transport emission on the corresponding road section was estimated by multiplying emission factors for each vehicle type. Finally, all emissions from local branch roads in Gyeonggi Province were estimated by summing the emissions of the 1,902 towns where road information was registered. As a result, the average GHG emission rate from branch road transport was 6,101 kilotons of $CO_2$ equivalent per year (kt-$CO_2$ Eq/yr), and the total emissions from both main and branch roads were 24,152 kt-$CO_2$ Eq/yr in Gyeonggi Province. The ratio of branch road emissions to the total was 0.28 in 2008.
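A minimal sketch of the cluster-then-regress pipeline is shown below, with synthetic data standing in for the 151 surveyed towns; the nearest-centroid assignment is a simplification of the paper's VBA expert system, and all names and numbers are illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical surveyed towns: [population, area, branch_road_length]
features = rng.uniform(1, 100, size=(151, 3))
traffic = 50 * features[:, 0] + 10 * features[:, 2] + rng.normal(0, 50, 151)

# 1) Hierarchical clustering to find road patterns among the surveyed towns.
patterns = fcluster(linkage(features, method="ward"), t=10, criterion="maxclust")

# 2) Fit one regression of traffic volume per pattern.
models = {p: LinearRegression().fit(features[patterns == p], traffic[patterns == p])
          for p in np.unique(patterns)}

# 3) Assign an unsurveyed town to a pattern (nearest cluster centroid here,
#    standing in for the rule-based expert system), then predict its traffic.
town = np.array([[40.0, 12.0, 7.5]])
centroids = {p: features[patterns == p].mean(axis=0) for p in models}
nearest = min(centroids, key=lambda p: np.linalg.norm(town - centroids[p]))
print(models[nearest].predict(town))
```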

Recommending Core and Connecting Keywords of Research Area Using Social Network and Data Mining Techniques (소셜 네트워크와 데이터 마이닝 기법을 활용한 학문 분야 중심 및 융합 키워드 추천 서비스)

  • Cho, In-Dong;Kim, Nam-Gyu
    • Journal of Intelligence and Information Systems, v.17 no.1, pp.127-138, 2011
  • The core service of most research portal sites is providing research papers that match the research interests of their users. This kind of service is effective and easy to use only when a user can provide correct and concrete information about a paper, such as its title, authors, and keywords. Unfortunately, most users of the service are not acquainted with concrete bibliographic information, which implies that they inevitably experience repeated trial and error in keyword-based search. Retrieving a relevant research paper is especially difficult when a user is a novice in the research domain and does not know the appropriate keywords. In this case, a user must search iteratively: i) perform an initial search with an arbitrary keyword, ii) acquire related keywords from the retrieved papers, and iii) perform another search with the acquired keywords. This usage pattern implies that a portal site's service quality and user satisfaction are strongly affected by the quality of its keyword management and search mechanism. To overcome this inefficiency, some leading research portal sites adopt association rule mining-based keyword recommendation, similar to the product recommendation of online shopping malls. However, keyword recommendation based only on association analysis has the limitation that it shows only a simple, direct relationship between two keywords. In other words, association analysis by itself cannot present the complex relationships among the many keywords of adjacent research areas. To overcome this limitation, we propose a hybrid approach for establishing an association network among the keywords used in research papers. The keyword association network is established in the following phases: i) the set of keywords specified in a paper is regarded as a set of co-purchased items, ii) association analysis is performed on the keywords to extract frequent keyword patterns that satisfy predefined thresholds of confidence, support, and lift, and iii) the frequent keyword patterns are schematized as a network showing the core keywords of each research area and the connecting keywords among two or more research areas. To assess the practical applicability of our approach, we performed a simple experiment with 600 keywords extracted from 131 research papers published in five prominent Korean journals in 2009. In the experiment, we used SAS Enterprise Miner for the association analysis and the R software for the social network analysis. As the final outcome, we present a network diagram and a cluster dendrogram for the keyword association network; the results are summarized in Section 4 of this paper. The main contributions of the proposed approach are the following: i) the keyword network can provide an initial roadmap of a research area to researchers who are novices in the domain, ii) a researcher can grasp the distribution of the many keywords neighboring a certain keyword, and iii) researchers can get ideas for converging different research areas by observing the connecting keywords in the keyword association network. Further studies should include the following. First, the current version of our approach does not implement a standard meta-dictionary. For practical use, homonym, synonym, and multilingual problems should be resolved with a standard meta-dictionary.
Additionally, clearer guidelines for clustering research areas and for defining core and connecting keywords should be provided. Finally, intensive experiments not only on Korean research papers but also on international papers should be performed in further studies.
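As a rough illustration of phases i) through iii), the sketch below computes support, confidence, and lift for keyword pairs by hand; the thresholds and toy data are illustrative, and the paper itself used SAS Enterprise Miner and R rather than Python.

```python
# Minimal sketch: each paper is a set of keywords treated as co-purchased items.
from itertools import combinations
from collections import Counter

papers = [
    {"data mining", "association rule", "recommendation"},
    {"data mining", "social network", "recommendation"},
    {"social network", "centrality"},
    {"data mining", "association rule"},
]
n = len(papers)

item_count = Counter(k for p in papers for k in p)
pair_count = Counter(frozenset(c) for p in papers for c in combinations(sorted(p), 2))

edges = []
for pair, cnt in pair_count.items():
    a, b = tuple(pair)
    support = cnt / n
    confidence = cnt / item_count[a]            # P(b | a)
    lift = confidence / (item_count[b] / n)     # > 1 means positive association
    if support >= 0.25 and lift >= 1.0:         # illustrative thresholds
        edges.append((a, b, round(lift, 2)))

print(edges)  # surviving keyword pairs form the edges of the association network
```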

Building a Korean Sentiment Lexicon Using Collective Intelligence (집단지성을 이용한 한글 감성어 사전 구축)

  • An, Jungkook;Kim, Hee-Woong
    • Journal of Intelligence and Information Systems, v.21 no.2, pp.49-67, 2015
  • The recent emergence of big data and social media has brought about an explosion of data. Social networking services are widely used by people around the world and have become a major communication tool for all ages. Over the last decade, as online social networking sites have become increasingly popular, companies have come to focus on advanced social media analysis for their marketing strategies. Beyond social media analysis, companies are concerned about the propagation of negative opinions on social networking sites such as Facebook and Twitter, as well as on e-commerce sites. Online word of mouth (WOM), such as product ratings, reviews, and recommendations, is very influential, and negative opinions have a significant impact on product sales. This trend has increased researchers' attention to natural language processing tasks such as sentiment analysis. Sentiment analysis, also referred to as opinion mining, is the process of identifying the polarity of subjective information and has been applied to various research and practical fields. However, Korean (Hangul) poses obstacles for natural language processing because it is an agglutinative language with rich morphology. As a result, there is a lack of Korean natural language processing resources such as a sentiment lexicon, which has been a significant limitation for researchers and practitioners considering sentiment analysis. Our study builds a Korean sentiment lexicon using collective intelligence and provides an API (Application Programming Interface) service to open and share the sentiment lexicon data with the public (www.openhangul.com). For pre-processing, we created a Korean lexicon database of 517,178 words and classified them into sentiment and non-sentiment words. To classify them, we first identified stop words, which are quite likely to distort sentiment analysis, and excluded them from sentiment scoring. In general, sentiment words are nouns, adjectives, verbs, and adverbs, as these carry sentimental expressions such as positive, neutral, and negative; non-sentiment words are interjections, determiners, numerals, postpositions, and so on, as these generally carry no sentiment. To build a reliable sentiment lexicon, we adopted collective intelligence as a model for crowdsourcing, and the concept of folksonomy was implemented in the taxonomy process to support it. To make up for an inherent weakness of folksonomy, we adopted a majority rule by building a voting system. Voters were offered three options, positive, negative, and neutral, and voting was conducted on one of the largest social networking sites for college students in Korea. More than 35,000 votes have been cast by college students, and we keep the voting system open by maintaining the project as a perpetual study. Moreover, any change in the sentiment score of a word is an important observation, because it lets us track temporal changes in Korean as a natural language. Lastly, our study offers a RESTful, JSON-based API service through a web platform to better support users such as researchers, companies, and developers. Finally, our study makes important contributions to both research and practice.
In terms of research, our Korean sentiment lexicon serves as a resource for Korean natural language processing. In terms of practice, practitioners such as managers and marketers can implement sentiment analysis effectively using the lexicon we built. Moreover, our study sheds new light on the value of folksonomy combined with collective intelligence, and we expect it to give a new direction and a new start to the development of Korean natural language processing.
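A minimal sketch of the majority-rule voting step is given below; the vote records, thresholds, and function names are hypothetical, and the actual service at www.openhangul.com may score words differently.

```python
# Majority-rule labeling of sentiment words from crowd votes (illustrative).
from collections import Counter

votes = {
    "좋다": ["positive"] * 41 + ["neutral"] * 6 + ["negative"] * 3,
    "그저": ["neutral"] * 22 + ["positive"] * 5 + ["negative"] * 4,
}

def majority_label(word_votes, min_votes=10):
    """Label a word by majority vote; abstain when evidence is thin."""
    if len(word_votes) < min_votes:
        return "unlabeled"
    label, count = Counter(word_votes).most_common(1)[0]
    return label if count / len(word_votes) > 0.5 else "ambiguous"

lexicon = {word: majority_label(v) for word, v in votes.items()}
print(lexicon)  # {'좋다': 'positive', '그저': 'neutral'}
```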

Prediction of Target Motion Using Neural Network for 4-dimensional Radiation Therapy (신경회로망을 이용한 4차원 방사선치료에서의 조사 표적 움직임 예측)

  • Lee, Sang-Kyung;Kim, Yong-Nam;Park, Kyung-Ran;Jeong, Kyeong-Keun;Lee, Chang-Geol;Lee, Ik-Jae;Seong, Jin-Sil;Choi, Won-Hoon;Chung, Yoon-Sun;Park, Sung-Ho
    • Progress in Medical Physics, v.20 no.3, pp.132-138, 2009
  • Studies on target motion in 4-dimensional radiotherapy are being conducted worldwide to improve treatment outcomes and the protection of normal organs. Prediction of tumor motion can be very useful, even essential, for free-breathing delivery systems such as respiratory gating and tumor tracking. Neural networks are powerful at expressing nonlinear time series because their prediction is not governed by a fixed statistical formula but learned as a rule from the data. This study assessed the applicability of the neural network method to predicting tumor motion in 4-dimensional radiotherapy, employing the scaled conjugate gradient algorithm for learning. Using respiration data for 10 patients, predictions by the neural network were compared with measurements from the real-time position management (RPM) system. The results showed that the neural network achieved excellent accuracy, with a maximum absolute error smaller than 3 mm, except in cases where the maximum amplitude of respiration exceeded the range of respiration used in training; this indicates insufficient learning for extrapolation. The problem could be solved by acquiring the full range of respiration before the learning procedure. Further work is planned to verify the feasibility of practical application in a 4-dimensional treatment system, including prediction performance under various system latencies and irregular respiration patterns.
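As a rough sketch of the prediction setup, the code below trains a small multilayer perceptron on sliding windows of a synthetic breathing signal; the architecture, window sizes, and data are illustrative stand-ins, since the paper used RPM measurements and scaled conjugate gradient training.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

t = np.arange(0, 60, 0.1)                      # 60 s sampled at 10 Hz
signal = np.sin(2 * np.pi * t / 4.0)           # ~4 s breathing cycle (synthetic)

window, horizon = 20, 4                        # predict 0.4 s ahead from 2 s of history
X = np.array([signal[i:i + window] for i in range(len(signal) - window - horizon)])
y = signal[window + horizon:]

split = int(0.8 * len(X))                      # train on early breathing, test on late
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])

err = np.abs(model.predict(X[split:]) - y[split:])
print(f"max abs error: {err.max():.4f}")       # degrades if test amplitude exceeds training range
```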


The Role of Chest CT Scans in the Management of Empyema (농흉에서 전산화 단층촬영의 의의)

  • Heo, Jeong-Suk;Kwun, Oh-Yong;Sohn, Jeong-Ho;Choi, Won-Il;Hwang, Jae-Seok;Han, Seung-Beom;Jeon, Young-June;Kim, Jung-Sik
    • Tuberculosis and Respiratory Diseases, v.41 no.4, pp.397-404, 1994
  • Background: Examination of pleural fluid is fundamental in the management of empyema for deciding on optimal antibiotics and the application of a chest tube. Some criteria for drainage of pleural fluid have been recommended, but they remain controversial. Recently, newer radiologic methods, including ultrasound and computed tomography (CT) scanning, have been applied to the diagnosis and management of pleural effusions. We undertook a retrospective analysis of 30 patients with pleural effusion who had chest CT scans, in order to apply the criteria of Light et al retrospectively to patients with loculation and to correlate the radiologic appearance of pleural effusions with pleural fluid chemistry. Method: We analyzed the records of 30 of 147 patients with pleural effusion who underwent chest CT scans. Results: 1) Six of the pleural fluid cultures yielded gram-negative organisms, three anaerobic bacteria, one Staphylococcus aureus, and one non-hemolytic Streptococcus; no organism was cultured in nineteen cases (63.0%). 2) The reasons for taking chest CT scans were to rule out malignancy or parenchymal lung disease (46.7%), poor response to antibiotics (40.0%), difficulty aspirating pleural fluid (10.0%), and to decide the site for chest tube insertion (3.3%). 3) There were no significant correlations between ATS stage and loculation, but there was a tendency to loculate in stage III. 4) There was a significant inverse relationship between pH and loculation (p<0.05), but there appeared to be no relationship of pleural fluid LDH, glucose, or protein to loculation or pleural thickening. 5) In 12 of the 30 patients, therapeutic measures were changed according to the chest CT findings. Conclusion: We were unable to identify any correlations between pleural fluid chemistry, ATS stage, and loculation except for pH, and we suggest that tube thoracostomy should be individualized according to clinical judgment and serial observation. Not all patients with empyema need a chest CT scan, but a CT scan can determine loculation and guide and assess therapy, which should decrease morbidity and hospital stay.
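For reference, the Light et al criteria mentioned above classify an effusion as an exudate when any of three standard thresholds is met; the sketch below encodes them, with the example values and the serum LDH upper limit being illustrative (the limit is laboratory-dependent).

```python
# Light's criteria for exudative pleural effusion (standard published thresholds;
# variable names, example values, and the LDH upper limit are illustrative).

def is_exudate(pf_protein, serum_protein, pf_ldh, serum_ldh, ldh_uln=222.0):
    """Effusion is an exudate if ANY of Light's three criteria is met."""
    return (pf_protein / serum_protein > 0.5
            or pf_ldh / serum_ldh > 0.6
            or pf_ldh > (2.0 / 3.0) * ldh_uln)

# Hypothetical empyema sample: high pleural LDH strongly favors exudate.
print(is_exudate(pf_protein=4.2, serum_protein=6.8,
                 pf_ldh=980.0, serum_ldh=240.0))  # True
```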


Auxiliary Reinforcement Method for the Safety of Tunnelling Face (터널 막장안정성에 따른 보강공법 적용)

  • Kim, Chang-Yong;Park, Chi-Hyun;Bae, Gyu-Jin;Hong, Sung-Wan;Oh, Myung-Ryul
    • Journal of Korean Tunnelling and Underground Space Association, v.2 no.2, pp.11-21, 2000
  • Tunnelling has expanded greatly as available land has become scarce, with metropolitan populations growing faster than cities can develop. In tunnelling, countermeasures must often be taken without full confirmation of abnormalities such as divergent movement of the surrounding rock mass, growing cracks in shotcrete, and yielding of rockbolts. In such cases, the judgment of experienced engineers in selecting countermeasures is considered crucial and has resolved these situations at many construction sites. However, the declining number of such engineers makes it necessary to develop a new system that assists in selecting countermeasures for abnormalities even without experience of similar tunnelling sites. In this study, after surveying many tunnel reinforcement methods and studying their detailed applications, an expert system was developed to predict tunnel safety and choose a proper tunnel reinforcement method using fuzzy quantification theory and fuzzy inference rules based on a tunnel information database. The expert system has two main parts, named the pre-module and the post-module. The pre-module decides the tunnel information input items based on tunnel face mapping information, which can easily be obtained on site. Then, using fuzzy quantification theory II, a fuzzy membership function is composed, and the tunnel safety level is inferred through this membership function. The predicted reinforcement levels were very similar to the measured ones. In-situ data were obtained at three tunnel sites, including a subway tunnel under the Han River. This system should be very helpful in making the most of in-situ data and in suggesting applicable tunnel reinforcement methods, reducing the dependence on a few experienced experts in the absence of formal guidelines and supporting the development of more reasonable tunnel support methods.
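As a loose illustration of the inference step, the sketch below aggregates fuzzy memberships of face-mapping inputs into a safety grade; the input categories, membership ranges, and weights are invented for illustration and do not reproduce the paper's fuzzy quantification theory II procedure.

```python
import numpy as np

def ramp_down(x, lo, hi):
    """Membership 1 at or below lo, falling linearly to 0 at hi."""
    return float(np.clip((hi - x) / (hi - lo), 0.0, 1.0))

def ramp_up(x, lo, hi):
    """Membership 0 at or below lo, rising linearly to 1 at hi."""
    return float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))

# Hypothetical face-mapping inputs scored 0-10 by the pre-module.
inputs = {"rock_strength": 3.0, "joint_spacing": 4.5, "water_inflow": 7.0}

# Membership of each input in the "unsafe" fuzzy set, then a weighted
# aggregate standing in for the inferred tunnel safety level.
unsafe = {
    "rock_strength": ramp_down(inputs["rock_strength"], 0, 6),  # weak rock -> unsafe
    "joint_spacing": ramp_down(inputs["joint_spacing"], 0, 5),  # close joints -> unsafe
    "water_inflow":  ramp_up(inputs["water_inflow"], 4, 10),    # heavy inflow -> unsafe
}
weights = {"rock_strength": 0.5, "joint_spacing": 0.3, "water_inflow": 0.2}

risk = sum(weights[k] * unsafe[k] for k in unsafe)
print(f"risk {risk:.2f} ->", "reinforce" if risk > 0.5 else "monitor")
```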


Legal Aspects on ICAO SARPs Regarding Alternative Fire Extinguishing Agent to Halon Fire Extinguishers

  • Lee, Gun-young;Kang, Woo-Jung
    • The Korean Journal of Air & Space Law and Policy, v.33 no.1, pp.205-226, 2018
  • For the sustainable development of air transport, the establishment and application of international standards in the area of environmental protection is significant. The development and use of alternative fire extinguishing agents to Halon, which is used in the fire extinguishing systems of engine nacelles/APUs and cargo compartments, has been requested in order to protect the ozone layer. The ICAO has been active in preparing international standards and recommended practices (SARPs); however, certification of alternative fire extinguishing agents has been postponed due to problems of technical readiness. Consequently, implementation of the SARPs has also been postponed by two years, from the end of 2016 to the end of 2018. As these delays have caused confusion among Member States regarding implementation, the issue deserves more discussion and attention. The ICAO Council and Air Navigation Commission should weigh setting an earlier implementation time frame against allowing enough time for readiness and preparedness to mature. Also, to minimize unnecessary discharge of the Halon held by Member States, efficient management methodologies should be considered, for example, requesting fire extinguisher manufacturers to recharge extinguishers professionally. For the successful implementation of the SARPs, ICAO developed an implementation task list including notification of differences, establishment of a national implementation plan, drafting of modifications to national regulations and means of compliance, and adoption of the national regulations and means of compliance. Member States can develop their own rule-making processes with reference to the ICAO implementation task list. This issue was presented and discussed at the 54th Conference of Directors General of Civil Aviation, Asia and Pacific Regions, held in Ulaanbaatar, Mongolia in 2017, and drew significant attention among the participating Contracting States. In this regard, the ICAO Council and Air Navigation Commission should consult Legal Bureau lawyers on the SARPs preparation process to eliminate difficulties and confusion and ensure proper implementation by the effective date.