• Title/Summary/Keyword: increasing set

Search Results: 1,879

Growth and Physiological Response of Three Evergreen Shrubs to De-icing Salt (CaCl2) at Different Concentrations in Winter - Focusing on Euonymus japonica, Rhododendron indicum, and Buxus koreana - (겨울철 염화칼슘(CaCl2) 처리에 따른 가로변 3가지 상록 관목류의 생육 및 생리반응 - 사철나무, 영산홍, 회양목을 중심으로 -)

  • Ju, Jin-Hee;Park, Ji-Yeon;Xu, Hui;Lee, Eun-Yeob;Hyun, Kyoung-Hak;Jung, Jong-Suk;Choi, Eun-Young;Yoon, Yong-Han
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.44 no.2
    • /
    • pp.122-129
    • /
    • 2016
  • Knowing the sensitivity of shrubs to de-icing salt is important for setting guidelines on the ecological tolerance of evergreen shrubs along roads. The aim of this study was therefore to investigate the influence of the de-icing salt calcium chloride ($CaCl_2$) on the growth and physiological characteristics of three evergreen shrubs: Euonymus japonica, Rhododendron indicum, and Buxus koreana. Plants were exposed to calcium chloride at different concentrations (0% as control, 1.0%, 3.0%, and 5.0% by weight) through amended soil, maintained from the start of the experiment in October 2014 until termination in March 2015. Survival rate, plant height, leaf length, leaf width, leaf shape index, number of leaves, fresh weight, dry weight, dry matter, root/top (R/T) ratio, chlorophyll content, fluorescence, photosynthesis, stomatal conductance, and transpiration rate were recorded. Elevated calcium chloride concentrations decreased the plant height, leaf length, leaf width, leaf shape index, fresh weight, dry weight, dry matter, and R/T ratio of all three shrubs. Root growth responded more sensitively to salinity than top growth. However, Euonymus japonica was more tolerant to salt stress than Rhododendron indicum and Buxus koreana, whose growth was totally inhibited by $CaCl_2$ concentrations above 3.0% and 1.0%, respectively. The chlorophyll content, fluorescence, photosynthesis, stomatal conductance, and transpiration rate of both Rhododendron indicum and Buxus koreana were reduced sharply with increasing calcium chloride, while Euonymus japonica exhibited only mild reductions relative to control plants. In particular, the transpiration rate of Rhododendron indicum and the photosynthesis and stomatal conductance of Buxus koreana were suppressed as the calcium chloride concentration increased. Euonymus japonica should therefore be considered an ecologically tolerant species with proven tolerance to de-icing salt.

Behavioural Analysis of Password Authentication and Countermeasure to Phishing Attacks - from User Experience and HCI Perspectives (사용자의 패스워드 인증 행위 분석 및 피싱 공격시 대응방안 - 사용자 경험 및 HCI의 관점에서)

  • Ryu, Hong Ryeol;Hong, Moses;Kwon, Taekyoung
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.79-90
    • /
    • 2014
  • User authentication based on an ID and password (ID/PW) is widely used. As the Internet has become a growing part of people's lives, the number of times users enter an ID/PW has increased across a variety of services. People have learned the authentication procedure so thoroughly that they enter their ID/PW while barely conscious of doing so. This is the adaptive unconscious at work: a set of mental processes that takes in information and produces judgements and behaviors without conscious awareness, within a second. Most people sign up for numerous websites with a small number of IDs/PWs because they rely on memory to manage them. Human memory decays with time, and items in memory tend to interfere with one another; as a result, people may enter an invalid ID/PW. These characteristics of ID/PW authentication thus lead to human vulnerabilities: people reuse a few PWs across websites, manage IDs/PWs from memory, and enter them unconsciously. Exploiting these human-factor vulnerabilities, information leakage attacks such as phishing and pharming have increased exponentially. In the past, information leakage attacks exploited vulnerabilities in hardware, operating systems, software, and so on; most current attacks instead exploit human factors. Such attacks are called social-engineering attacks, and malicious social-engineering techniques such as phishing and pharming are now among the biggest security problems. Phishing attempts to obtain valuable information such as IDs/PWs, while pharming steals personal data by redirecting a website's traffic to a fraudulent copy of a legitimate website.
The screens of the fraudulent copies used in both phishing and pharming attacks are almost identical to those of legitimate websites, and pharming can even present a deceptive URL. Without the support of prevention and detection techniques such as vaccines and reputation systems, it is therefore difficult for users to determine intuitively whether a site is a phishing or pharming site or a legitimate one. Previous research on phishing and pharming has mainly studied technical solutions. In this paper, we focus on human behaviour when users are confronted by phishing and pharming attacks without knowing it. We conducted an attack experiment to find out how many IDs/PWs are leaked through phishing and pharming attacks. We first configured the experimental settings to match the conditions of phishing and pharming attacks and built a phishing site for the experiment. We then recruited 64 voluntary participants, asked them to log in to our experimental site, and conducted a questionnaire survey with each participant about the experiment. Through the attack experiment and survey, we observed whether participants' passwords were leaked when logging in to the experimental phishing site, and how many different passwords were leaked out of each participant's total. We found that most participants logged in to the site unconsciously, and that ID/PW management dependent on human memory caused the leakage of multiple passwords. Users should actively utilize reputation systems, and online service providers should support prevention techniques so that users can intuitively determine whether a site is a phishing site.

Analysis and Evaluation of Frequent Pattern Mining Technique based on Landmark Window (랜드마크 윈도우 기반의 빈발 패턴 마이닝 기법의 분석 및 성능평가)

  • Pyun, Gwangbum;Yun, Unil
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.101-107
    • /
    • 2014
  • With the development of online services, databases have shifted from static structures to dynamic stream structures. Data mining techniques have previously served as decision-making tools in areas such as marketing strategy and DNA analysis; however, areas of recent interest such as sensor networks, robotics, and artificial intelligence require the ability to analyze real-time data more quickly. Landmark window-based frequent pattern mining, one of the stream mining approaches, performs mining operations over parts of the database or over individual transactions instead of over all the data. In this paper, we analyze and evaluate two well-known landmark window-based frequent pattern mining algorithms, Lossy counting and hMiner. When Lossy counting mines frequent patterns from a set of new transactions, it performs union operations between the previous and current mining results. hMiner, a state-of-the-art algorithm based on the landmark window model, conducts a mining operation whenever a new transaction occurs. Since hMiner extracts frequent patterns as soon as a new transaction is entered, its mining results always reflect the latest real-time information; for this reason, such algorithms are also called online mining approaches. We evaluate and compare the performance of the primitive algorithm, Lossy counting, and the latest one, hMiner. As criteria for our performance analysis, we first consider each algorithm's total runtime and average processing time per transaction. To compare the efficiency of their storage structures, maximum memory usage is also evaluated. Lastly, we show how stably the two algorithms mine databases that feature gradually increasing numbers of items.
In terms of mining time and transaction processing, hMiner is faster than Lossy counting. Since hMiner stores candidate frequent patterns in a hash structure, it can access them directly, whereas Lossy counting stores them in a lattice and must therefore search multiple nodes to reach a candidate. On the other hand, hMiner performs worse than Lossy counting in terms of maximum memory usage: hMiner must store the complete information for each candidate frequent pattern in its hash buckets, while Lossy counting's lattice representation reduces this information. Because Lossy counting's storage can share items that appear concurrently in multiple patterns, its memory usage is more efficient than hMiner's. However, hMiner shows better scalability for the following reasons: as the number of items increases, the number of shared items decreases, weakening Lossy counting's memory efficiency, and as the number of transactions grows, its pruning effect worsens. From the experimental results, we conclude that landmark window-based frequent pattern mining algorithms are suitable for real-time systems, although they require a significant amount of memory. Their data structures therefore need to be made more efficient before they can also be used in resource-constrained environments such as wireless sensor networks (WSNs).
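The count-and-prune bookkeeping behind Lossy counting can be illustrated with a minimal sketch of the classic single-item version of the algorithm (the paper's algorithms mine frequent *patterns* over a landmark window; this simplified variant, and the function and parameter names, are ours):

```python
import math

def lossy_counting(stream, epsilon):
    """Classic Lossy Counting over a stream of single items: each entry keeps
    a count f and an error bound delta; at every bucket boundary, entries
    that can no longer be frequent (f + delta <= current bucket) are pruned."""
    width = math.ceil(1.0 / epsilon)   # bucket width w = ceil(1/epsilon)
    counts = {}                        # item -> (f, delta)
    bucket = 1                         # id of the current bucket
    for n, item in enumerate(stream, start=1):
        f, delta = counts.get(item, (0, bucket - 1))
        counts[item] = (f + 1, delta)
        if n % width == 0:             # bucket boundary: prune
            counts = {i: (f, d) for i, (f, d) in counts.items()
                      if f + d > bucket}
            bucket += 1
    return counts

def frequent(counts, support, epsilon, n):
    """Report items with stored count f >= (support - epsilon) * n;
    the pruning rule guarantees no truly frequent item is missed."""
    return {i for i, (f, _) in counts.items() if f >= (support - epsilon) * n}
```

The lattice-versus-hash memory trade-off discussed above concerns how such candidate entries are stored, not the pruning rule itself.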

Design of an Anamorphic-Lens Thermal Optical System with a 3:1 Focal Length Ratio (초점거리 비가 3:1인 아나모픽 렌즈 열상 광학계 설계)

  • Kim, Se-Jin;Ko, Jung-Hui;Lim, Hyeon-Seon
    • Journal of Korean Ophthalmic Optics Society
    • /
    • v.16 no.4
    • /
    • pp.409-415
    • /
    • 2011
  • Purpose: To design a thermal optical system employing an anamorphic lens with a 3:1 focal length ratio to improve detection distance. Methods: We defined the boundary conditions as a viewing angle of $50^{\circ}{\sim}60^{\circ}$; focal lengths of 36 mm in the horizontal direction and 12 mm in the vertical direction; an f-number of 4; a pixel size of $15{\mu}m{\times}15{\mu}m$; and a limiting resolution of 25% at 33 lp/mm. Si, ZnS, and ZnSe were used as materials, and wavelengths of 4.8 ${\mu}m$, 4.2 ${\mu}m$, and 3.7 ${\mu}m$ were set. The optical performance of the designed camera was analyzed in terms of detection distance, narcissus, and athermalization. Results: The thermal optical system satisfied an f-number of 4 and focal lengths of 12 mm in the y direction and 36 mm in the x direction. The total length of the system was 76 mm, reducing its overall volume. Astigmatism and spherical aberration were within ${\pm}0.10$, less than two pixels in size. Distortion was within 10%, posing no problem for use as a thermal optical camera. MTF performance was over 25% at 33 lp/mm out to full field, satisfying the boundary condition. The designed optical system was able to detect targets up to 2.9 km away and reduced the diffused image by decreasing the narcissus value on all surfaces except the fourth. From sensitivity analysis, MTF resolution increased under temperature change with the fifth lens chosen as the compensator. Conclusions: The designed optical system using an anamorphic lens satisfied the boundary conditions; increased resolution under temperature change, a longer detection distance, and reduced narcissus were verified.

EVALUATION OF CONDYLAR DISPLACEMENT USING COMPUTER TOMOGRAPHY AFTER THE SURGICAL CORRECTION OF MANDIBULAR PROGNATHISM (전산화단층촬영법을 이용한 하악전돌증 환자의 외과적 악교정술후 하악과두 위치 변화 검토)

  • Lee, Ho-Kyung;Jang, Hyun-Jung;Lee, Sang-Han
    • Maxillofacial Plastic and Reconstructive Surgery
    • /
    • v.20 no.3
    • /
    • pp.191-200
    • /
    • 1998
  • This study evaluated condylar positional change after surgical correction of skeletal Class III malocclusion in 37 patients (13 male, 24 female), using computed tomograms taken in centric occlusion before, immediately after, and long after surgery, and lateral cephalograms taken in centric occlusion before surgery, within 7 days during intermaxillary fixation, 24 hours after removing intermaxillary fixation, and long after surgery. 1. The mean intercondylar distance was $84.42{\pm}5.30mm$, and the horizontal long-axis condylar angle was $12.79{\pm}4.92^{\circ}$ on the right and $13.53{\pm}5.56^{\circ}$ on the left. Condylar lateral poles were located about 12 mm, and medial poles about 7 mm, from the reference line (AA') on the axial tomogram. The mean intercondylar distance was $83.15{\pm}4.62mm$, and the vertical-axis condylar angle was $76.28{\pm}4.28^{\circ}$ on the right and $78.30{\pm}3.79^{\circ}$ on the left. 2. Regarding the amount of setback, condylar change (T2C-T1C) tended to increase in group III (setback of 10-15 mm), but without statistical significance (p>0.05). 3. There was some correlation between condylar change (T2C-T1C) and TMJ dysfunction; postoperative condylar change appeared to influence postoperative TMJ dysfunction, though without statistical significance (p>0.05). We observed changes in the condylar axis in the group that complained of TMJ dysfunction in cases of large mandibular setback. We therefore consider that greater effort to preserve the condylar position will decrease the occurrence rate of postoperative TMJ dysfunction.


A Study of the Effect of the Permeability and Selectivity on the Performance of Membrane System Design (분리막 투과도와 분리도 인자의 시스템 설계 효과 연구)

  • Shin, Mi-Soo;Jang, Dongsoon;Lee, Yongguk
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.38 no.12
    • /
    • pp.656-661
    • /
    • 2016
  • Manufacturing membrane materials with both high selectivity and high permeability is desirable but practically impossible, since permeability and selectivity are usually inversely related. From the viewpoint of reducing the cost of $CO_2$ capture, module performance is even more important than the performance of the membrane material itself; it is governed by the permeance of the membrane (P, stage cut) and the selectivity (S). As a typical example, when a mixture of 13% $CO_2$ and 87% $N_2$ is fed into a module with a 10% stage cut and a selectivity of 5, the 10 parts of permeate contain 4.28 parts $CO_2$ and 5.72 parts $N_2$. In this case, the $CO_2$ concentration in the permeate is 42.8%, and the recovery rate of $CO_2$ in this first separation is 4.28/13 = 32.9%. When permeance and selectivity are doubled, however, from 10% to 20% and from 5 to 10, respectively, the $CO_2$ concentration in the permeate becomes 64.5% and the recovery rate is 12.9/13 = 99.2%. Since most of the $CO_2$ is then separated, this may be the ideal condition. For a given feed concentration, the $CO_2$ concentration in the separated gas decreases if the permeance exceeds the threshold value for complete recovery at a given selectivity. Conversely, for a given permeance, increasing the selectivity beyond the threshold value does not improve the process further. For a given initial feed concentration, if permeance or selectivity is larger than required for complete separation of $CO_2$, the process becomes less efficient. From these considerations, we can see that there exists an optimum design for a given set of conditions.
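The first worked example can be reproduced with a simple single-pass model in which the permeate CO2:N2 ratio equals the selectivity times the feed-side ratio. This is our simplifying assumption (the abstract does not spell out its module model); it matches the first example, while the second, with its 20% stage cut, evidently also accounts for feed-side depletion:

```python
def permeate_split(feed_co2, stage_cut, selectivity):
    """Split off `stage_cut` of a 100-part feed, assuming the permeate
    CO2:N2 ratio equals `selectivity` times the feed-side ratio (our
    simplifying assumption; the paper's exact module model is not given)."""
    feed_n2 = 100.0 - feed_co2
    ratio = selectivity * feed_co2 / feed_n2   # CO2:N2 ratio in permeate
    permeate = 100.0 * stage_cut               # parts of feed that permeate
    co2 = permeate * ratio / (1.0 + ratio)     # CO2 parts in permeate
    n2 = permeate - co2                        # N2 parts in permeate
    concentration = 100.0 * co2 / permeate     # % CO2 in the permeate
    recovery = 100.0 * co2 / feed_co2          # % of feed CO2 recovered
    return co2, n2, concentration, recovery

# 13% CO2 feed, 10% stage cut, selectivity 5 -> about 4.28 parts CO2,
# 5.72 parts N2, 42.8% CO2 in the permeate, 32.9% recovery.
co2, n2, conc, rec = permeate_split(feed_co2=13.0, stage_cut=0.10, selectivity=5.0)
```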

A Study on Improving Examination Standards for Limits on Changing the Current State of Cultural Properties (문화재 유형별 현상변경 검토기준 마련 연구)

  • Cho, Hong-Seok;Park, Hyun-Joon;Lee, You-Beom;Lee, Cheon-Woo;Kim, Chul-Ju;Park, Jung-Seop;Kim, Sang-Dong
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.33 no.4
    • /
    • pp.148-165
    • /
    • 2015
  • The Cultural Properties Protection Law, enacted in 1962, has fulfilled its role in the systematic preservation, management, and use of cultural heritage under rapid economic growth through continuing revisions. With the introduction of the influence review system for cultural heritage in 2000 and, specifically, the legislation of the state-change allowance standard for National Cultural Heritage in 2006, the law has contributed significantly to the preservation of cultural properties and the historical/cultural environment, along with increased administrative efficiency and an improved settlement environment. However, as public awareness of cultural properties' value and demand for local revitalization through heritage use increase, while some allowance standards are applied without properly reflecting surrounding conditions such as the value of the properties, their substantial characteristics, and land use, complaints from local residents are increasing continuously. This research therefore focuses on a clear vision and value for each heritage and applies them to create review criteria for state-change allowance by heritage type. We set landscape management indicators to actively preserve and manage the physical characteristics and intrinsic value of the properties by analyzing the Cultural Heritage Protection Laws and related guidelines, manuals, and research papers, redesign the cultural property classification scheme, and propose review standards from the viewpoint of changing the current state.
With this research, we expect increased satisfaction with the property management system through improved public understanding of the standards, while securing consistency in the review criteria and systematic management of the historical/cultural environment according to heritage type and value in the short term.

The Effect of Aquaplast on Surface Dose of Photon Beam (Aquaplast가 광자선의 표면선량에 미치는 영향)

  • Oh, Do-Hoon;Bae, Hoon-Sik
    • Radiation Oncology Journal
    • /
    • v.13 no.1
    • /
    • pp.95-100
    • /
    • 1995
  • Purpose: To evaluate the effect on surface dose of the Aquaplast used to immobilize patients with head and neck cancers in photon beam radiotherapy. Materials and Methods: To assess surface and buildup-region dose for 6 MV X-rays from a linear accelerator (Siemens Mevatron 6740), we measured percent ionization values with a Markus chamber (model 30-329, PTW Freiburg) and a Capintec electrometer (model WK92). For surface ionization measurements, the chamber was embedded in a $25{\times}25{\times}3cm^3$ acrylic phantom set on a $25{\times}25{\times}5cm^3$ polystyrene phantom to allow adequate scattering. Percent depth ionization was measured by placing polystyrene layers of appropriate thickness over the chamber. Measurements were taken at 100 cm SSD for $5{\times}5cm^2$, $10{\times}10cm^2$, and $15{\times}15cm^2$ field sizes. The same procedures were repeated with a layer of Aquaplast over the chamber. We evaluated two types of Aquaplast: a 1.6 mm layer of original Aquaplast (manufactured by WFR Aquaplast Corp.) and transformed Aquaplast similar to the moulded form used to immobilize patients in practice. We also measured surface ionization values with a blocking tray, with and without transformed Aquaplast. In calculating percent depth dose, we used the formula suggested by Gerbi and Khan to correct the overresponse of the Markus chamber. Results: The surface doses for open fields of $5{\times}5cm^2$, $10{\times}10cm^2$, and $15{\times}15cm^2$ were $7.9\%$, $13.6\%$, and $18.7\%$, respectively. The original Aquaplast increased the surface doses to $38.4\%$, $43.6\%$, and $47.4\%$, respectively; for transformed Aquaplast, they were $31.2\%$, $36.1\%$, and $40.5\%$. There was little difference in percent depth dose values beyond the depth of Dmax.
With increasing field size, the blocking tray increased the surface dose by $0.2\%$, $1.7\%$, and $3.0\%$ without Aquaplast, and by $0.2\%$, $1.9\%$, and $3.7\%$ with transformed Aquaplast. Conclusion: The original and transformed Aquaplast increased the surface dose moderately; percent depth doses beyond Dmax, however, were not affected. Although the use of Aquaplast in practice may cause some increase in skin and buildup-region dose, the reduction of the skin-sparing effect will not be clinically significant.


Optimum Sieve-slit width for Effective Removal of Immature Kernels based on Varietal Characteristics of Rice to Improve Milling Efficiency (도정효율 증진을 위한 벼 품종특성별 현미선별체 적정크기)

  • Lee, Choon-Ki;Kim, Jung-Tae;Choi, Yoon-Hee;Lee, Jae-Eun;Seo, Jong-Ho;Kim, Mi-Jung;Jeong, Eung-Gi;Kim, Chung-Kon
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.54 no.4
    • /
    • pp.357-365
    • /
    • 2009
  • To improve milling efficiency and the head-rice percentage after milling, an experiment was performed to improve the removal of immature kernels by the immature brown rice separator (IBRS), focusing on varietal characteristics. The ability of the IBRS to remove immature grains depended entirely on the kernel thickness of the brown rice. The kernel thickness of the tested rice varieties ranged from 1.79 mm in Nonganbyeo to 2.16 mm in Daeribbyeo 1. Although there was some variation among varieties, the suggested sieve-slit widths for good separation of immature kernels were roughly 1.9 mm for varieties thicker than 2.08 mm, 1.8 mm for varieties 2.00-2.08 mm thick, 1.7 mm for varieties 1.90-2.00 mm thick, and 1.60-1.65 mm for varieties thinner than 1.7 mm. The higher the proportion of immature kernels in the brown rice, the more conspicuous the improvement in milling efficiency and head-rice rate achieved by their removal. As the sieve-slit width increased beyond the optimum range, losses of mature grains increased sharply. For effective separation of immature kernels, the optimum sieve-slit width should therefore be set depending on both the kernel thickness and the critical loss limit of mature kernels.
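The suggested widths can be written as a small lookup, a sketch based solely on the brackets quoted in the abstract. Note that the abstract's last bracket reads "thinner than 1.7 mm", which would leave 1.70-1.90 mm uncovered; we assume it extends up to 1.90 mm, and that assumption is ours, not the paper's:

```python
def suggested_slit_width(kernel_thickness_mm):
    """Sieve-slit width (mm) suggested for a given brown-rice kernel
    thickness. Brackets follow the abstract; the last one is printed as
    "thinner than 1.7 mm", which would leave 1.70-1.90 mm uncovered, so we
    assume it extends up to 1.90 mm (our assumption, not stated in the paper)."""
    t = kernel_thickness_mm
    if t > 2.08:
        return 1.9
    if t >= 2.00:
        return 1.8
    if t >= 1.90:
        return 1.7
    return (1.60, 1.65)   # given as a range in the abstract

# e.g. Daeribbyeo 1 (2.16 mm) -> 1.9 mm slit; Nonganbyeo (1.79 mm) -> 1.60-1.65 mm
```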

Using the METHONTOLOGY Approach to a Graduation Screen Ontology Development: An Experiential Investigation of the METHONTOLOGY Framework

  • Park, Jin-Soo;Sung, Ki-Moon;Moon, Se-Won
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.125-155
    • /
    • 2010
  • Ontologies have been adopted in various business and scientific communities as a key component of the Semantic Web. Despite the increasing importance of ontologies, ontology developers still perceive construction tasks as a challenge. A clearly defined and well-structured methodology can reduce the time required to develop an ontology and increase the probability of success of a project. However, no reliable knowledge-engineering methodology for ontology development currently exists; every methodology has been tailored toward the development of a particular ontology. In this study, we developed a Graduation Screen Ontology (GSO). The graduation screen domain was chosen for several reasons. First, the graduation screen process is a complicated task requiring a complex reasoning process. Second, GSO may be reused by other universities because the graduation screen process is similar at most universities. Finally, GSO can be built within a given period because the size of the selected domain is reasonable. No standard ontology development methodology exists; thus, one of the existing ontology development methodologies had to be chosen. The most important considerations for selecting the ontology development methodology for GSO included whether it can be applied to a new domain; whether it covers a broad set of development tasks; and whether it gives sufficient explanation of each development task. We evaluated various ontology development methodologies based on the evaluation framework proposed by Gómez-Pérez et al. We concluded that METHONTOLOGY was the most applicable to the building of GSO for this study. METHONTOLOGY was derived from the experience of developing the Chemical Ontology at the Polytechnic University of Madrid by Fernández-López et al. and is regarded as the most mature ontology development methodology.
METHONTOLOGY describes a very detailed approach for building an ontology under a centralized development environment at the conceptual level. This methodology consists of three broad processes, each containing specific sub-processes: management (scheduling, control, and quality assurance); development (specification, conceptualization, formalization, implementation, and maintenance); and support (knowledge acquisition, evaluation, documentation, configuration management, and integration). An ontology development language and an ontology development tool for GSO construction also had to be selected. We adopted OWL-DL as the ontology development language, chosen for its computational guarantees for consistency checking and classification, which are crucial in developing coherent and useful ontological models for very complex domains. In addition, Protégé-OWL was chosen as the ontology development tool because it is supported by METHONTOLOGY and is widely used owing to its platform independence. Based on the researchers' experience developing GSO, some issues relating to METHONTOLOGY, OWL-DL, and Protégé-OWL were identified. We focus on presenting the drawbacks of METHONTOLOGY and discussing how each weakness could be addressed. First, METHONTOLOGY insists that domain experts without ontology construction experience can easily build ontologies. However, it is still difficult for these domain experts to develop a sophisticated ontology, especially if they have insufficient background knowledge related to it. Second, METHONTOLOGY does not include a development stage called the "feasibility study." This pre-development stage helps developers ensure not only that a planned ontology is necessary and sufficiently valuable to begin an ontology building project, but also that the project will be successful.
Third, METHONTOLOGY excludes any explanation of the use and integration of existing ontologies. If an additional stage for considering reuse were introduced, developers might share the benefits of reuse. Fourth, METHONTOLOGY fails to address the importance of collaboration. The methodology needs to explain the allocation of specific tasks to different developer groups, and how to combine these tasks once the given jobs are completed. Fifth, METHONTOLOGY fails to sufficiently suggest the methods and techniques applied in the conceptualization stage. Introducing methods for extracting concepts from multiple informal sources or for identifying relations may enhance the quality of ontologies. Sixth, METHONTOLOGY does not provide an evaluation process to confirm whether WebODE perfectly transforms a conceptual ontology into a formal ontology, nor does it guarantee that the outcomes of the conceptualization stage are completely reflected in the implementation stage. Seventh, METHONTOLOGY needs to add criteria for user evaluation of the actual use of the constructed ontology in user environments. Eighth, although METHONTOLOGY allows continual knowledge acquisition during the ontology development process, consistent updates can be difficult for developers. Ninth, METHONTOLOGY demands that developers complete various documents during the conceptualization stage; it can thus be considered a heavyweight methodology. Adopting an agile methodology would reinforce active communication among developers and reduce the burden of documentation. Finally, this study concludes with contributions and practical implications. No previous research has addressed issues related to METHONTOLOGY from empirical experience; this study is an initial attempt. In addition, several lessons learned from the development experience are discussed.
This study also affords some insights for ontology methodology researchers who want to design a more advanced ontology development methodology.