Title/Summary/Keyword: E-scheduling


A Prototype of Distributed Simulation for Facility Restoration Operation Analysis through Incorporation of Immediate Damage Assessment

  • Hwang, Sungjoo; Choi, MinJi; Starbuck, Richmond; Lee, SangHyun; Park, Moonseo
    • International conference on construction engineering and project management / 2015.10a / pp.339-343 / 2015
  • To rapidly restore a facility's lost functionality after a catastrophic seismic event, critical decisions on repair work must be made within a limited period of time. However, prolonged damage assessment, caused by massive damage across the surrounding region and complicated damage-judgment procedures, may impede restoration planning. To support reliable structural damage estimation without deep structural expertise, and to enable rapid interactive analysis between facility damage and restoration operations during approximate restoration planning, we developed a prototype of distributed facility restoration simulation based on the High Level Architecture (HLA, IEEE 1516). In the prototype, three simulations (a seismic data retrieval technique, a structural response simulator, and a restoration simulation module) interact with one another, enabling immediate damage estimation from promptly detected earthquake intensity and restoration operation analysis based on the estimated damage. Case simulations and experiments provide key insights into post-disaster restoration planning, including how facility damage varies with disaster severity, facility location, and structure, and how different damage patterns affect a project's performance, especially when damage is hard to estimate by observation. In particular, an understanding of the required types and amounts of repair activities (e.g., demolition work, structural reinforcement, frame installation, or finishing work) is expected to support project managers in approximate work scheduling and resource procurement planning.
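
The three-way interaction the abstract describes is easiest to see as message passing among the modules. The sketch below is a toy publish/subscribe stand-in, assuming made-up topic names and an invented intensity-to-damage rule; it is not the IEEE 1516 RTI API or the authors' prototype.

```python
# Toy stand-in for the three-federate setup: a seismic data module publishes
# earthquake intensity, a structural simulator turns it into a damage estimate,
# and a restoration module derives a rough repair-work estimate. All rules and
# numbers here are illustrative assumptions.
from collections import defaultdict

class Bus:
    """Minimal publish/subscribe bus (toy substitute for an HLA RTI)."""
    def __init__(self):
        self.handlers = defaultdict(list)
    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)
    def publish(self, topic, payload):
        for handler in self.handlers[topic]:
            handler(payload)

bus = Bus()
# Structural response simulator: intensity -> damage ratio (made-up rule).
bus.subscribe("quake", lambda q: bus.publish("damage", {"ratio": min(q["pga"] * 10, 1.0)}))
# Restoration module: damage ratio -> rough repair effort (made-up rule).
bus.subscribe("damage", lambda d: print(f"damage ratio {d['ratio']:.2f} -> "
                                        f"{int(d['ratio'] * 120)} crew-days of repair"))
# Seismic data retrieval: detect an event and publish its intensity.
bus.publish("quake", {"pga": 0.06})  # hypothetical peak ground acceleration (g)
```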


A Framework of Building Knowledge Representation for Sustainability Rating in BIM

  • Shahaboddin Hashemi Toroghi; Tang-Hung Nguyen; Jin-Lee Kim
    • International conference on construction engineering and project management / 2013.01a / pp.437-443 / 2013
  • Sustainable building design, a growing field within architectural design, has emerged in the construction industry as the practice of designing, constructing, and operating facilities so that their environmental impact, a great concern of construction professionals, is minimized. A number of green rating systems have been developed to help assess whether a building project is designed and built using strategies that minimize or eliminate its environmental impact. In the United States, the widely accepted national standard for sustainable building design is the LEED (Leadership in Energy and Environmental Design) Green Building Rating System. Assessing sustainability with the LEED rating system is challenging and time-consuming because of its complicated process. The LEED system awards points for satisfying specified green building criteria in five major categories: sustainable sites, water efficiency, energy and atmosphere, materials and resources, and indoor environmental quality; a project's sustainability is rated by accumulating scores (100 points maximum) across these five categories. The rating process can be accelerated and facilitated by computer technology such as BIM (Building Information Modeling), an approach to building design, engineering, and construction management that has been widely adopted in the construction industry. BIM is a model-based technology linked with a database of project information that can be accessed, manipulated, and retrieved for construction estimating, scheduling, project management, and sustainability rating. This paper presents a framework representing the building knowledge contained in the LEED green building criteria. The proposed framework will be implemented on a BIM platform (e.g., Autodesk Revit Architecture) so that sustainability rating of a building design can be performed automatically. The development of the automated sustainability rating system and the results of its implementation are discussed.
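
The point-accumulation logic described above is simple to illustrate. In this sketch the category scores are hypothetical, and the certification thresholds follow the commonly published LEED bands (40/50/60/80 points), an assumption rather than something stated in the abstract.

```python
# Sum hypothetical category scores and map the total to a LEED-style level.
CATEGORY_POINTS = {                      # illustrative scores for one design
    "sustainable sites": 18,
    "water efficiency": 7,
    "energy and atmosphere": 24,
    "materials and resources": 6,
    "indoor environmental quality": 11,
}

def leed_level(points: int) -> str:
    """Map a total score to a certification level (common LEED bands)."""
    if points >= 80:
        return "Platinum"
    if points >= 60:
        return "Gold"
    if points >= 50:
        return "Silver"
    if points >= 40:
        return "Certified"
    return "Not certified"

total = sum(CATEGORY_POINTS.values())
print(total, leed_level(total))          # 66 Gold
```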


Software Metric for CBSE Model

  • Iyyappan M.; Sultan Ahmad; Shoney Sebastian; Jabeen Nazeer; A.E.M. Eljialy
    • International Journal of Computer Science & Network Security / v.23 no.12 / pp.187-193 / 2023
  • Component-based software engineering (CBSE), which emphasizes decomposing engineered systems into logical or functional components with clearly defined interfaces for inter-component communication, is enabling large software systems to be produced with noticeably higher quality. CBSE is applicable to commercial products as well as open-source software. Software metrics play a major role in application development, providing quantitative measures for analyzing, scheduling, and iterating on software modules; such measurement improves the development process, yielding better quality and greater reuse. A major concern is software complexity in the development and deployment of software. Software metrics provide accurate measures of a component's quality, risk, reliability, functionality, and reusability. The proposed metrics assess many aspects of the process, including efficiency, reusability, product interaction, and process complexity. The paper surveys the software quality metrics described in the software engineering literature and explores the advantages and disadvantages of each, and discusses component-based software engineering together with software quality metrics, object-oriented metrics, and performance improvement.
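
Since the abstract does not define its metrics, the following is only a generic sketch of the kind of component-level measurement it alludes to: coupling (fan-out) and interface fan-in over a small declared component graph. All component and interface names are hypothetical.

```python
# Hypothetical component graph: what each component provides and requires.
components = {
    "billing":  {"provides": ["charge", "refund"], "requires": ["post_entry"]},
    "ledger":   {"provides": ["post_entry"],       "requires": []},
    "frontend": {"provides": [],                   "requires": ["charge", "refund"]},
}

def coupling(name: str) -> int:
    """Fan-out: number of external interfaces a component depends on."""
    return len(components[name]["requires"])

def fan_in(interface: str) -> int:
    """Number of components that depend on a given interface."""
    return sum(interface in c["requires"] for c in components.values())

for name in components:
    print(f"{name}: coupling = {coupling(name)}")
print("fan-in of 'charge':", fan_in("charge"))   # 1
```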

Collaborative Inference for Deep Neural Networks in Edge Environments

  • Meizhao Liu; Yingcheng Gu; Sen Dong; Liu Wei; Kai Liu; Yuting Yan; Yu Song; Huanyu Cheng; Lei Tang; Sheng Zhang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.7 / pp.1749-1773 / 2024
  • Recent advances in deep neural networks (DNNs) have greatly improved the accuracy and universality of various intelligent applications, at the expense of increasing model size and computational demand. Since the resources of end devices are often too limited to deploy a complete DNN model, offloading DNN inference tasks to cloud servers is a common approach to bridging this gap. However, due to the limited bandwidth of the WAN and the long distance between end devices and cloud servers, this approach may incur significant data transmission latency. Device-edge collaborative inference has therefore emerged as a promising paradigm for accelerating DNN inference, in which DNN models are partitioned and executed sequentially across end devices and edge servers. Nevertheless, collaborative inference in heterogeneous edge environments with multiple edge servers, end devices, and DNN tasks has been overlooked in previous research. To fill this gap, we investigate the optimization problem of collaborative inference in a heterogeneous system and propose CIS, a collaborative inference scheme that jointly combines DNN partition, task offloading, and scheduling to reduce the average weighted inference latency. CIS decomposes the problem into three parts to achieve the optimal average weighted inference latency. In addition, we build a prototype implementing CIS and conduct extensive experiments to demonstrate the scheme's effectiveness and efficiency. Experiments show that CIS reduces the average weighted inference latency by 29% to 71% compared with four existing schemes.
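
The partitioning trade-off at the heart of such schemes can be sketched as choosing the layer cut that minimizes device compute time plus activation transfer time plus edge compute time. The per-layer timings, activation sizes, and bandwidth below are invented for illustration; CIS itself additionally handles multiple devices, servers, and task scheduling.

```python
# Pick the DNN layer cut minimizing device + transfer + edge latency.
device_ms = [5.0, 40.0, 80.0, 120.0]   # per-layer latency on the end device (assumed)
edge_ms   = [0.5, 0.8, 1.2, 1.6]       # per-layer latency on the edge server (assumed)
out_mb    = [8.0, 2.0, 0.5, 0.1]       # activation size after each layer, MB (assumed)
INPUT_MB, BANDWIDTH_MBPS = 16.0, 40.0  # raw input size and uplink bandwidth (assumed)

def transfer_ms(megabytes: float) -> float:
    return megabytes * 8 / BANDWIDTH_MBPS * 1000

def total_latency(cut: int) -> float:
    """Run layers [0, cut) on the device, ship activations, finish on the edge."""
    shipped = out_mb[cut - 1] if cut > 0 else INPUT_MB
    return sum(device_ms[:cut]) + transfer_ms(shipped) + sum(edge_ms[cut:])

best = min(range(len(device_ms) + 1), key=total_latency)
print(f"best cut after layer {best}: {total_latency(best):.1f} ms")  # cut 3, ~226.6 ms
```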

Efficient Execution Method for Business Process Management using TOC Concepts (제약이론을 활용한 업무프로세스의 효율적 실행 방법)

  • Rhee Seung-Hyun; Bae Hyerim; Won Hyungjun; Kim Hoontae; Kang Suk-Ho
    • The Journal of Society for e-Business Studies / v.10 no.1 / pp.61-80 / 2005
  • Business Process Management (BPM) systems are software systems that support the efficient execution, control, and management of business processes. They automate complex business processes and manage them effectively to raise productivity. Traditional commercial systems mainly focus on automating processes and lack methods for enhancing process performance and task performers' efficiency, leaving room to improve both. In this paper, we propose a new method for executing business processes more efficiently, in which the whole process is scheduled considering the participants' workload. The method manages the largest constraint among the process's constituent resources and is based on the DBR (Drum-Buffer-Rope) technique from TOC (Theory of Constraints). We first examine the differences between business process models and DBR application models, and then develop a modified drum, buffer, and rope. This leads to BP-DBR (Business Process-DBR), which controls the size of task performers' work lists and the arrival rate of process instances. Use of BP-DBR improves the efficiency of the whole process as well as the participants' working conditions. We then carry out a set of simulation experiments and compare the effectiveness of our approach with that of the scheduling techniques used in existing systems.
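
The drum-buffer-rope mapping is easy to state in code: the slowest resource (the drum) sets the pace, instances are released at that pace (the rope), and a small queue (the buffer) keeps the drum from starving. The resources, rates, and buffer size below are hypothetical; BP-DBR additionally sizes each performer's work list.

```python
# Identify the drum and derive the rope (release interval) and buffer.
service_rate = {"clerk": 10.0, "approver": 4.0, "archiver": 12.0}  # items/hour (assumed)

drum, drum_rate = min(service_rate.items(), key=lambda kv: kv[1])
release_interval_min = 60.0 / drum_rate   # rope: pace instance release to the drum
buffer_size = 3                           # assumed protective buffer before the drum

print(f"drum: {drum} at {drum_rate:.0f}/h")
print(f"rope: release one process instance every {release_interval_min:.0f} min")
print(f"buffer: keep at most {buffer_size} instances queued at the drum")
```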


A LSTM Based Method for Photovoltaic Power Prediction in Peak Times Without Future Meteorological Information (미래 기상정보를 사용하지 않는 LSTM 기반의 피크시간 태양광 발전량 예측 기법)

  • Lee, Donghun; Kim, Kwanho
    • The Journal of Society for e-Business Studies / v.24 no.4 / pp.119-133 / 2019
  • Photovoltaic (PV) power prediction is considered an essential function for scheduling adjustments, sizing storage, and overall planning toward the stable operation of PV facilities. In particular, since most PV power is generated at peak times, peak-time prediction is required for PV system operators to maximize revenue and sustain the electricity supply. Predicting peak-time PV output without future meteorological information such as solar radiation, cloudiness, and temperature is a challenging problem; previous studies predicted PV power from uncertain forecast meteorological information covering wide areas. Therefore, this paper proposes an LSTM (Long Short-Term Memory) based PV power prediction model that uses only the meteorological and seasonal information, and the PV power, obtained before peak time. Experiments on real-world data show that the proposed model achieves the superior peak-time forecast performance targeted in this study.
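
A minimal sketch of this kind of model, assuming a PyTorch implementation with an invented feature count and window length (the abstract does not give the exact architecture): an LSTM reads a window of pre-peak observations and regresses the peak-time output.

```python
# LSTM that maps a window of pre-peak observations (weather measured so far,
# season encoding, PV output before peak time) to peak-time PV output.
import torch
import torch.nn as nn

class PeakPVForecaster(nn.Module):
    def __init__(self, n_features: int = 6, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)      # regressed peak-time PV output

    def forward(self, x):                     # x: (batch, time_steps, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])       # predict from the last hidden state

model = PeakPVForecaster()
window = torch.randn(8, 24, 6)                # 8 samples, 24 past steps, 6 features
print(model(window).shape)                    # torch.Size([8, 1])
```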

Predicting Cherry Flowering Date Using a Plant Phenology Model (생물계절모형을 이용한 벚꽃 개화일 예측)

  • Jung J.E.; Kwon E.Y.; Chung U.R.; Yun J.I.
    • Korean Journal of Agricultural and Forest Meteorology / v.7 no.2 / pp.148-155 / 2005
  • An accurate prediction of blooming date is crucial for many authorities to schedule and organize successful spring flower festivals in Korea. The Korea Meteorological Administration (KMA) has been using regression models, combined with subjective correction by forecasters, to issue blooming date forecasts for major cities. Using mean monthly temperature data for February (observed) and March (predicted), they issue blooming date forecasts in late February to early March each year. The method has proved accurate enough for scheduling spring festivals in the relevant cities, but it cannot be used in areas where no official climate and phenology data are available. We suggest a thermal time-based, two-step phenological model for predicting the blooming dates of spring flowers that can be applied to any geographic location regardless of data availability. The model consists of two sequential periods: the rest period, described by a chilling requirement, and the forcing period, described by a heating requirement. It takes daily maximum and minimum temperature as input and accumulates daily chill units until a pre-determined chilling requirement for rest release is met. After the projected rest release date, it accumulates daily heat units (growing degree days) until a pre-determined heating requirement for flowering is met. Model parameters were derived from the observed bud-burst and flowering dates of cherry trees (Prunus serrulata var. spontanea) at the KMA Seoul station, along with daily temperature data for 1923-1950. The model was applied to the 1955-2004 daily temperature data to estimate cherry blooming dates, and the deviations from the observed dates were compared with those of the KMA method. Our model performed better than the KMA method in predicting the cherry blooming dates during the last 50 years (MAE = 2.31 vs. 1.58, RMSE = 2.96 vs. 2.09), showing strong feasibility for operational application.
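
The two-step model translates directly into code. The sketch below uses simplified chill-unit and growing-degree-day formulas with assumed thresholds, not the paper's parameters fitted for Prunus serrulata.

```python
# Accumulate daily chill units until the chilling requirement is met (rest
# release), then accumulate growing degree days until the heating requirement
# predicts bloom. Base temperature and requirements are assumed stand-ins.
BASE_C, CHILL_REQ, HEAT_REQ = 5.0, 100.0, 150.0

def predict_bloom(daily_tmax, daily_tmin):
    chill = heat = 0.0
    resting = True
    for day, (tmax, tmin) in enumerate(zip(daily_tmax, daily_tmin), start=1):
        tmean = (tmax + tmin) / 2
        if resting:
            chill += max(BASE_C - tmean, 0.0)   # cold days accumulate chill units
            if chill >= CHILL_REQ:
                resting = False                 # rest release
        else:
            heat += max(tmean - BASE_C, 0.0)    # warm days accumulate degree days
            if heat >= HEAT_REQ:
                return day                      # predicted blooming day-of-series
    return None

# 60 cold days around 0 degC followed by spring days around 12 degC:
tmax = [3.0] * 60 + [16.0] * 60
tmin = [-3.0] * 60 + [8.0] * 60
print(predict_bloom(tmax, tmin))   # 82
```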

Low Power EccEDF Algorithm for Real-Time Operating Systems (실시간 운영체제를 위한 저전력 EccEDF 알고리듬)

  • Lee, Min-Seok; Lee, Cheol-Hoon
    • The Journal of the Korea Contents Association / v.15 no.1 / pp.31-43 / 2015
  • For battery-powered real-time embedded systems, high performance to meet real-time constraints and energy efficiency to extend battery life are both essential. Real-Time Dynamic Voltage Scaling (RT-DVS) has been a key technique for satisfying both requirements. In this paper, we present an efficient RT-DVS algorithm called EccEDF, designed on the basis of ccEDF. The proposed algorithm precisely calculates the maximum unused utilization, taking the elapsed time into account, while keeping the structural simplicity of ccEDF, which overlooks the time taken to run a task when calculating the available slack. The maximum unused utilization is obtained by dividing the remaining execution time ($C_i - cc_i$) by the remaining time ($P_i - E_i$) upon completion of the task, and this is proved using the fluid scheduling model. We also show that the algorithm outperforms ccEDF in practical applications modelled using a PXA250 and a 0.28V-to-1.2V wide-operating-range IA-32 processor model.
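
A worked instance of the quoted slack expression, with illustrative numbers rather than values from the paper's experiments:

```python
# On task completion, EccEDF-style unused utilization is the remaining
# execution time (C_i - cc_i) over the remaining time in the period (P_i - E_i).
C_i  = 10.0   # worst-case execution time, ms (assumed)
cc_i = 6.0    # execution time actually consumed, ms (assumed)
P_i  = 40.0   # task period, ms (assumed)
E_i  = 12.0   # time elapsed within the period at completion, ms (assumed)

unused_utilization = (C_i - cc_i) / (P_i - E_i)
print(f"utilization freed for voltage scaling: {unused_utilization:.3f}")  # 0.143
```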

Forecasting Leaf Mold and Gray Leaf Spot Incidence in Tomato and Fungicide Spray Scheduling (토마토 재배에서 점무늬병 및 잎곰팡이병 발생 예측 및 방제력 연구)

  • Lee, Mun Haeng
    • Journal of Bio-Environment Control / v.31 no.4 / pp.376-383 / 2022
  • This study, consisting of two independent experiments (laboratory and greenhouse), was carried out to develop fungicide spray scheduling for leaf mold and gray leaf spot in tomato, and to evaluate the effect of temperature and leaf wetness duration on the effectiveness of different fungicides against these diseases. In the first experiment, tomato leaves were inoculated with 1 × 10⁴ conidia·mL⁻¹ and placed in a dew chamber for 0 to 18 hours at 10 to 25℃ (Fulvia fulva) and 10 to 30℃ (Stemphylium lycopersici). In the greenhouse study, tomato plants were treated for 240 hours with 1,000-fold dilutions of 30% trimidazole, 50% polyoxin B, and 40% iminoctadine tris (Belkut) against leaf mold, and 10% etridiazole + 55% thiophanate-methyl (Gajiran) and 15% tribasic copper sulfate (Sebinna) against gray leaf spot. In the laboratory test, condensation emerged on the tomato leaves after 9 hours of incubation. The incidence of leaf mold and gray leaf spot on tomato plants was closely related to the formation of leaf condensation: leaf mold incidence was greater at 15 and 20℃, while 20 and 25℃ favored gray leaf spot. Both diseases developed 20 days after inoculation, and the latency period was estimated to be 14-15 days. Trihumin was most effective against leaf mold for up to 168 hours after treatment at 12 hours of wetness duration, whereas Gajiran gave the highest control (93%) of gray leaf spot for up to 144 hours. All chemicals showed roughly a 30-50% decrease in effectiveness 240 hours after treatment. The model predictions in the present study could help in the timely, effective, and eco-friendly management of leaf mold in tomato.
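
The reported temperature bands and the roughly nine-hour wetness threshold suggest a simple rule-of-thumb risk check. The sketch below only distills those reported findings; it is not a validated forecasting model.

```python
# Flag disease risk when leaf wetness lasts long enough and the temperature
# falls in a disease's favorable band (bands taken from the abstract above).
def disease_risk(temp_c: float, wet_hours: float) -> list[str]:
    risks = []
    if wet_hours >= 9:                       # condensation emerged after ~9 h
        if 15 <= temp_c <= 20:
            risks.append("leaf mold (Fulvia fulva)")
        if 20 <= temp_c <= 25:
            risks.append("gray leaf spot (Stemphylium lycopersici)")
    return risks

print(disease_risk(18, 12))   # ['leaf mold (Fulvia fulva)']
print(disease_risk(22, 10))   # ['gray leaf spot (Stemphylium lycopersici)']
print(disease_risk(22, 4))    # [] -- leaves dried before infection could set in
```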

Using the METHONTOLOGY Approach to a Graduation Screen Ontology Development: An Experiential Investigation of the METHONTOLOGY Framework

  • Park, Jin-Soo; Sung, Ki-Moon; Moon, Se-Won
    • Asia Pacific Journal of Information Systems / v.20 no.2 / pp.125-155 / 2010
  • Ontologies have been adopted in various business and scientific communities as a key component of the Semantic Web. Despite the increasing importance of ontologies, ontology developers still perceive construction tasks as a challenge. A clearly defined and well-structured methodology can reduce the time required to develop an ontology and increase the probability of project success. However, no reliable knowledge-engineering methodology for ontology development currently exists; every methodology has been tailored toward the development of a particular ontology. In this study, we developed a Graduation Screen Ontology (GSO). The graduation screen domain was chosen for several reasons. First, the graduation screen process is a complicated task requiring a complex reasoning process. Second, GSO may be reused by other universities because the graduation screen process is similar at most universities. Finally, GSO can be built within a given period because the size of the selected domain is reasonable. No standard ontology development methodology exists; thus, one of the existing methodologies had to be chosen. The most important considerations in selecting the methodology for GSO were whether it can be applied to a new domain, whether it covers a broad set of development tasks, and whether it explains each development task sufficiently. We evaluated various ontology development methodologies using the evaluation framework proposed by Gómez-Pérez et al. and concluded that METHONTOLOGY was the most applicable to building GSO for this study. METHONTOLOGY was derived from the experience of developing the Chemical Ontology at the Polytechnic University of Madrid by Fernández-López et al. and is regarded as the most mature ontology development methodology. It describes a very detailed approach to building an ontology at the conceptual level under a centralized development environment. The methodology consists of three broad processes, each containing specific sub-processes: management (scheduling, control, and quality assurance); development (specification, conceptualization, formalization, implementation, and maintenance); and support (knowledge acquisition, evaluation, documentation, configuration management, and integration). An ontology development language and tool for GSO construction also had to be selected. We adopted OWL-DL as the ontology development language because of its computational support for consistency checking and classification, which is crucial for developing coherent and useful ontological models of very complex domains. Protégé-OWL was chosen as the ontology development tool because it is supported by METHONTOLOGY and is widely used owing to its platform independence. Based on the researchers' GSO development experience, several issues relating to METHONTOLOGY, OWL-DL, and Protégé-OWL were identified. We focus on presenting the drawbacks of METHONTOLOGY and discussing how each weakness could be addressed. First, METHONTOLOGY insists that domain experts without ontology construction experience can easily build ontologies; however, it remains difficult for such experts to develop a sophisticated ontology, especially with insufficient background knowledge of the domain.
Second, METHONTOLOGY does not include a development stage called the "feasibility study." This pre-development stage helps developers ensure that a planned ontology is necessary and valuable enough to justify a building project, and helps determine whether the project will succeed. Third, METHONTOLOGY excludes an explanation of the use and integration of existing ontologies; if an additional stage for considering reuse were introduced, developers could share the benefits of reuse. Fourth, METHONTOLOGY fails to address the importance of collaboration. The methodology needs to explain how to allocate specific tasks to different developer groups and how to combine those tasks once the given jobs are completed. Fifth, METHONTOLOGY does not sufficiently specify the methods and techniques applied in the conceptualization stage; introducing methods for extracting concepts from multiple informal sources or for identifying relations could enhance ontology quality. Sixth, METHONTOLOGY does not provide an evaluation process to confirm whether WebODE correctly transforms a conceptual ontology into a formal one, nor does it guarantee that the outcomes of the conceptualization stage are fully reflected in the implementation stage. Seventh, METHONTOLOGY needs criteria for user evaluation of the constructed ontology under actual user environments. Eighth, although METHONTOLOGY allows continual knowledge acquisition during the development process, consistent updates can be difficult for developers. Ninth, METHONTOLOGY demands that developers complete various documents during the conceptualization stage and can thus be considered a heavy methodology; adopting an agile approach would reinforce active communication among developers and reduce the documentation burden. Finally, the study concludes with contributions and practical implications. No previous research has addressed issues with METHONTOLOGY from empirical experience; this study is an initial attempt, and several lessons learned from the development experience are discussed. The study also offers insights for researchers who want to design a more advanced ontology development methodology.
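
To make the OWL-DL modelling concrete, here is a minimal graduation-screen fragment written with the owlready2 Python library; the classes, properties, and IRI are hypothetical stand-ins, not the actual GSO built in Protégé-OWL.

```python
# Define a tiny OWL-DL ontology fragment for a graduation-screen domain.
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/gso.owl")   # hypothetical IRI

with onto:
    class Student(Thing): pass
    class Course(Thing): pass
    class Requirement(Thing): pass
    class hasCompleted(ObjectProperty):             # Student -> Course
        domain = [Student]
        range = [Course]
    class satisfies(ObjectProperty):                # Course -> Requirement
        domain = [Course]
        range = [Requirement]

# A couple of individuals, as one would assert before running a reasoner.
kim = Student("kim")
databases = Course("databases")
kim.hasCompleted = [databases]
print(list(onto.classes()))   # [gso.Student, gso.Course, gso.Requirement]
```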