• Title/Summary/Keyword: System modelling


A Digital Forensic Framework Design for Joined Heterogeneous Cloud Computing Environment

  • Zayyanu Umar;Deborah U. Ebem;Francis S. Bakpo;Modesta Ezema
    • International Journal of Computer Science & Network Security / v.24 no.6 / pp.207-215 / 2024
  • Cloud computing is now used by most companies, business centres and academic institutions to embrace new computer technology. Cloud Service Providers (CSPs) offer only certain services and may lack some of the assets their customers request, which means that different clouds need to interconnect to share resources and interoperate. The interconnected clouds may differ in characteristics and systems, and the network may be vulnerable to volatility or interference. While information technology and cloud computing advance to accommodate growing worldwide applications, criminals use cyberspace to commit cybercrimes, and cloud service deployments are becoming highly prone to threats and intrusions. Unauthorised access to or destruction of records inflicts catastrophic losses on organisations and agencies. Human intervention and physical devices are not enough to protect and monitor cloud services; there is therefore a need for a more effective cyber-defence design that is adaptable, flexible, robust, and able to detect dangerous cybercrimes such as Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks on heterogeneous cloud computing platforms, and to make essential real-time decisions for forensic investigation. This paper aims to develop a digital forensics framework for detecting cybercrime in a joined heterogeneous cloud setup. We developed a digital forensics model that can function across heterogeneous joint clouds. We used the Unified Modeling Language (UML), specifically an activity diagram, to design the proposed framework, and an architectural modelling system for its deployment. The activity diagram accommodates the variability and complexity of the clouds when handling inter-cloud resources.

A constrained minimization-based scheme against susceptibility of drift angle identification to parameters estimation error from measurements of one floor

  • Kangqian Xu;Akira Mita;Dawei Li;Songtao Xue;Xianzhi Li
    • Smart Structures and Systems / v.33 no.2 / pp.119-131 / 2024
  • Drift angle is a significant index for diagnosing post-event structures. A common way to estimate this drift response is to use modal parameters identified under natural excitations. Although the modal parameters of shear structures cannot be identified accurately in the real environment, the identification error has little impact on the estimation when measurements from several floors are used. However, the estimation accuracy falls dramatically when there is only one accelerometer. This paper describes the susceptibility of single-sensor identification to modelling error, together with simulations that preliminarily verify this characteristic. To make a robust evaluation from measurements of one floor of a shear structure based on imprecisely identified parameters, a novel scheme is devised that approximately corrects the mode shapes with respect to fictitious frequencies generated by a genetic algorithm; in particular, the scheme uses constrained minimization to take both the mathematical and the realistic aspects of the mode shapes into account. The algorithm was validated using a full-scale shear building. The differences between single-sensor and multiple-sensor estimations were analyzed. It was found that, as the number of accelerometers decreases, the error rises due to insufficient data and becomes very high when there is only one sensor. Moreover, when measurements for only one floor are available, the proposed method yields more precise and appropriate mode shapes, leading to a better estimation of the drift angle of the lower floors than a method designed for multiple sensors. It is also shown that the reduction in space complexity is offset by an increase in computational complexity.
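
The scheme's use of a genetic algorithm to generate fictitious frequencies under constraints can be illustrated with a minimal sketch. The objective, bounds, and GA settings below are invented for illustration and are not taken from the paper:

```python
import random

def genetic_minimize(objective, lower, upper, pop_size=40, generations=120, seed=0):
    """Minimal real-coded GA: binary tournament selection, blend crossover,
    Gaussian mutation, with the box constraint enforced by clipping."""
    rng = random.Random(seed)
    pop = [rng.uniform(lower, upper) for _ in range(pop_size)]
    best = min(pop, key=objective)
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)               # tournament, parent 1
            p1 = a if objective(a) < objective(b) else b
            c, d = rng.sample(pop, 2)               # tournament, parent 2
            p2 = c if objective(c) < objective(d) else d
            child = 0.5 * (p1 + p2)                 # blend crossover
            child += rng.gauss(0.0, 0.05 * (upper - lower))  # mutation
            child = min(max(child, lower), upper)   # enforce the constraint
            new_pop.append(child)
        pop = new_pop
        gen_best = min(pop, key=objective)
        if objective(gen_best) < objective(best):
            best = gen_best
    return best

# Invented toy objective: the "fictitious frequency" minimizing a misfit.
f_star = genetic_minimize(lambda f: (f - 2.3) ** 2, 0.0, 10.0)
```

In the paper's actual scheme the objective would measure how well a corrected mode shape explains the single-floor measurement, with constraints reflecting admissible mode-shape forms.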

A Comparison of Accuracy of the Ocean Thermal Environments Using the Daily Analysis Data of the KMA NEMO/NEMOVAR and the US Navy HYCOM/NCODA (기상청 전지구 해양순환예측시스템(NEMO/NEMOVAR)과 미해군 해양자료 동화시스템(HYCOM/NCODA)의 해양 일분석장 열적환경 정확도 비교)

  • Ko, Eun Byeol;Moon, Il-Ju;Jeong, Yeong Yun;Chang, Pil-Hun
    • Atmosphere / v.28 no.1 / pp.99-112 / 2018
  • In this study, the accuracy of ocean analysis data produced by the Korea Meteorological Administration (KMA) Nucleus for European Modelling of the Ocean/Variational Data Assimilation (NEMO/NEMOVAR, hereafter NEMO) system and the HYbrid Coordinate Ocean Model/Navy Coupled Ocean Data Assimilation (HYCOM/NCODA, hereafter HYCOM) system was evaluated using various oceanic observations from March 2015 to February 2016. The evaluation covered oceanic thermal environments in the tropical Pacific, the western North Pacific, and around the Korean peninsula. NEMO generally outperformed HYCOM in all three regions. In particular, in the tropical Pacific, the RMSEs (Root Mean Square Errors) of NEMO for both sea surface temperature and the vertical water temperature profile were about 50% smaller than those of HYCOM. In the western North Pacific, where the observational data were not used for data assimilation, the RMSE of NEMO profiles down to 1000 m (0.49°C) was much lower than that of HYCOM (0.73°C). Around the Korean peninsula, the difference in RMSE between the two models was small (NEMO, 0.61°C; HYCOM, 0.72°C), and their errors were relatively large in winter and small in summer. The differences in accuracy between NEMO and HYCOM for the thermal environments may be attributed to the horizontal and vertical resolutions of the models, the vertical coordinate and mixing schemes, the data quality control systems, the data used for assimilation, and the atmospheric forcing. The present results can serve as baseline data for evaluating the accuracy of NEMO before it becomes the KMA's operational model providing real-time ocean analysis and prediction data.
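
The RMSE metric used throughout the comparison is straightforward to compute; a small sketch (the profile values are made up, not the paper's data):

```python
import math

def rmse(predicted, observed):
    """Root Mean Square Error between two equally long sequences."""
    if len(predicted) != len(observed):
        raise ValueError("sequences must have equal length")
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                     / len(predicted))

# Hypothetical vertical water-temperature profiles (degrees C).
model_profile = [15.0, 12.1, 9.8, 7.5]
obs_profile = [15.3, 12.0, 9.2, 7.9]
error = rmse(model_profile, obs_profile)
```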

Integration of Ontology Open-World and Rule Closed-World Reasoning (온톨로지 Open World 추론과 규칙 Closed World 추론의 통합)

  • Choi, Jung-Hwa;Park, Young-Tack
    • Journal of KIISE:Software and Applications / v.37 no.4 / pp.282-296 / 2010
  • OWL is an ontology language for the Semantic Web, suited to modelling the knowledge of a specific real-world domain. An ontology can also infer new implicit knowledge from explicit knowledge. However, the modelled knowledge cannot be complete, since the whole of human common sense cannot be represented. Ontologies do not support the non-monotonic reasoning needed to detect incomplete modelling such as integrity constraints and exceptions. A default rule can handle exceptions for a specific class in an ontology, and an integrity constraint can make explicit which relationships, and how many of them, the instances of a class must hold. In this paper, we propose a practical reasoning system for open- and closed-world reasoning that supports a novel hybrid integration of ontologies based on the open-world assumption (OWA) with non-monotonic rules based on the closed-world assumption (CWA). The system uses a method to solve the problems that arise when dealing with incomplete knowledge under the OWA. The method uses answer set programming (ASP) to find a solution. ASP is a logic-programming paradigm that can be seen as the computational embodiment of non-monotonic reasoning, and it enables CWA-based queries against a description-logic knowledge base (KB). Our system not only finds practical cases, built with Protege, that require non-monotonic reasoning, but also derives novel reasoning results for these cases from a KB that realizes a transparent integration of rules and ontologies, as supported by several well-known projects.
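
The core OWA/CWA distinction that the hybrid system integrates can be shown with a toy query sketch (the facts and predicates are invented; the actual system operates on description-logic KBs and ASP solvers):

```python
# Toy knowledge base: facts we explicitly know to be true.
KB = {("member", "alice", "team_a"), ("member", "bob", "team_b")}

def query_cwa(fact):
    """Closed-world assumption: anything not provable is taken as false."""
    return fact in KB

def query_owa(fact, known_false=frozenset()):
    """Open-world assumption: a fact not in the KB and not explicitly
    negated is neither true nor false, just unknown."""
    if fact in KB:
        return True
    if fact in known_false:
        return False
    return None  # unknown

q = ("member", "carol", "team_a")
```

Under CWA the query `q` fails outright; under OWA it stays unknown until further knowledge arrives, which is exactly the gap the paper's ASP-based component bridges.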

A Study on the Development of Dynamic Models under Inter Port Competition (항만의 경쟁상황을 고려한 동적모형 개발에 관한 연구)

  • 여기태;이철영
    • Journal of the Korean Institute of Navigation / v.23 no.1 / pp.75-84 / 1999
  • Although many studies on modelling of port competition have been conducted, both the theoretical frame and the methodology are still very weak. In this study, therefore, a new algorithm called ESD (Extensional System Dynamics) for the evaluation of port competition is presented and applied to simulate port systems in Northeast Asia. The detailed objectives of this paper are to develop a Unit Port model using the SD (System Dynamics) method; to develop a Competitive Port model by the ESD method; to perform sensitivity analysis by altering parameters; and to propose port development strategies. For these, the algorithm for the evaluation of port competition was developed in two steps. Firstly, the SD method was adopted to develop the Unit Port models; secondly, the HFP (Hierarchical Fuzzy Process) method was introduced to extend the SD method. The proposed models were then developed and applied to five ports - Pusan, Kobe, Yokohama, Kaoshiung, and Keelung - with real data on each port, and several findings were derived. Firstly, the extraction of factors for the Unit Port was accomplished by consulting experts such as researchers, professors, and research fellows related to harbours, and finally five factor groups - location, facility, service, cargo volumes, and port charges - were obtained. Secondly, the system structure consisting of feedback loops was found easily by locating representative and detailed factors on the keyword network of the STGB map; using this keyword network, the feedback loops were identified. Thirdly, for the target year of 2003, the simulation for Pusan port revealed that the number of liners would increase from 829 to 1,450 ships and container cargo volume from 4.56 million TEU to 7.74 million TEU. It also revealed that, because of the increased liners and container cargo volumes, the berth length should be expanded from 2,162 m to 4,729 m; this berth expansion resulted in a decrease in the number of congested ships from 97 to 11. It was also found that port charges fluctuated. Simulation results for Kobe, Yokohama, Kaoshiung, and Keelung in Northeast Asia were also obtained. Finally, the inter-port competition models developed by the ESD method were used to simulate container cargo volumes for Pusan port. The results revealed that under the competitive situation the container cargo volume was smaller than in the non-competitive situation, which means that Pusan port lacks competitive power relative to the other ports. The developed models were then applied to estimate the change of container cargo volumes under competition by altering several parameters, and the results were found to be very helpful for port managers who are in charge of planning port development.
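
The feedback-loop behaviour that SD models capture can be sketched with a minimal stock-flow simulation. All parameters below are invented for illustration (the 4.56 million TEU start echoes the paper's Pusan figure, but the growth and capacity values are not from the paper):

```python
def simulate_port(years, initial_teu, attractiveness, capacity_teu, dt=0.25):
    """Minimal stock-flow sketch: cargo volume (the stock) grows with port
    attractiveness but saturates as congestion builds near berth capacity,
    a simple balancing feedback loop integrated with Euler steps."""
    teu = initial_teu
    history = [teu]
    for _ in range(int(years / dt)):
        congestion = teu / capacity_teu                     # 0..1 feedback signal
        inflow = attractiveness * teu * (1.0 - congestion)  # logistic-style growth
        teu += inflow * dt
        history.append(teu)
    return history

trajectory = simulate_port(years=5, initial_teu=4.56e6,
                           attractiveness=0.15, capacity_teu=9.0e6)
```

A real SD model chains many such stocks and flows (liners, berths, charges) whose loops interact; this sketch shows only the integration pattern.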


Performance Analysis of a Packet Voice Multiplexer Using the Overload Control Strategy by Bit Dropping (Bit-dropping에 의한 Overload Control 방식을 채용한 Packet Voice Multiplexer의 성능 분석에 관한 연구)

  • 우준석;은종관
    • The Journal of Korean Institute of Communications and Information Sciences / v.18 no.1 / pp.110-122 / 1993
  • When voice is transmitted through a packet-switching network, overload control is needed, that is, control of congestion that lasts for short periods and occurs locally. In this thesis, we analyze the performance of a statistical packet voice multiplexer using an overload control strategy based on bit dropping. We assume that the voice is coded according to (4,2) embedded ADPCM and that voice packets are generated and transmitted according to the procedures of CCITT Recommendation G.764. For the performance analysis, the superposed packet arrival process at the multiplexer must be modelled as exactly as possible. It is well known that packet interarrival times are highly correlated, and for this reason the MMPP (Markov-Modulated Poisson Process) is better suited for the modelling in terms of accuracy. Hence the packet arrival process is modelled as an MMPP, and the matrix-geometric method is used for the performance analysis. The analysis is similar to that of the MMPP/G/1 queueing system, but the overload control makes the service time distribution G dependent on the system status, i.e., the queue length in the multiplexer. Through the analysis we derive the probability generating function of the queue length, and from it the mean and standard deviation of the queue length and the waiting time. The numerical results are verified through simulation, and show that the values embedded at departure times and those at arbitrary times are almost the same. The results also show that bit dropping reduces the mean and the variation of both the queue length and the waiting time.
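
For a 2-state MMPP, the time-averaged arrival rate follows directly from the stationary distribution of the modulating Markov chain; a small sketch with invented rates:

```python
def mmpp2_mean_rate(lam1, lam2, r12, r21):
    """Effective (time-averaged) arrival rate of a 2-state MMPP.

    lam1, lam2 : Poisson arrival rates in modulating states 1 and 2
    r12, r21   : transition rates of the modulating Markov chain
    """
    pi1 = r21 / (r12 + r21)   # stationary probability of state 1
    pi2 = r12 / (r12 + r21)   # stationary probability of state 2
    return pi1 * lam1 + pi2 * lam2

# Talkspurt/silence-like modulation: high packet rate in state 1, low in state 2.
rate = mmpp2_mean_rate(lam1=10.0, lam2=2.0, r12=1.0, r21=3.0)
```

The matrix-geometric analysis in the thesis works with the full MMPP generator and rate matrices rather than just this scalar mean, but the stationary-weighting idea is the same.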


The Application of Adaptive Network-based Fuzzy Inference System (ANFIS) for Modeling the Hourly Runoff in the Gapcheon Watershed (적응형 네트워크 기반 퍼지추론 시스템을 적용한 갑천유역의 홍수유출 모델링)

  • Kim, Ho Jun;Chung, Gunhui;Lee, Do-Hun;Lee, Eun Tae
    • KSCE Journal of Civil and Environmental Engineering Research / v.31 no.5B / pp.405-414 / 2011
  • The adaptive network-based fuzzy inference system (ANFIS), which has been successful in time-series prediction and system control, was applied to modelling the hourly runoff in the Gapcheon watershed. The ANFIS used antecedent rainfall and runoff as input. It was trained while varying simulation factors such as the mean areal rainfall estimation, the number of input variables, and the type and number of membership functions. The root mean square error (RMSE), mean peak runoff error (PE), and mean peak time error (TE) were used to validate the ANFIS simulation. The ANFIS-predicted runoff was in good agreement with the measured runoff, and the applicability of ANFIS to modelling the hourly runoff appeared to be good. The forecasting ability of ANFIS up to a maximum lead time of 8 hours was investigated by applying different input structures to the model. The accuracy of ANFIS in predicting the hourly runoff decreased as the forecasting lead time increased, so its long-term predictability at longer lead times appears to be limited. Nevertheless, ANFIS can be useful for modelling the hourly runoff and has an advantage over physically based models because its model construction, based only on input and output data, is relatively simple.
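
ANFIS implements first-order Sugeno (TSK) fuzzy inference with trainable membership functions; its forward pass for a single input can be sketched as follows (the rules and parameters are invented, not trained on the paper's data):

```python
import math

def gauss_mf(x, center, sigma):
    """Gaussian membership function."""
    return math.exp(-((x - center) ** 2) / (2.0 * sigma ** 2))

def tsk_infer(x, rules):
    """First-order Sugeno (TSK) inference for one input.
    Each rule is (center, sigma, a, b) with linear consequent y = a*x + b;
    the output is the firing-strength-weighted average of the consequents."""
    weights = [gauss_mf(x, c, s) for c, s, _, _ in rules]
    outputs = [a * x + b for _, _, a, b in rules]
    return sum(w * y for w, y in zip(weights, outputs)) / sum(weights)

# Two invented rules: "low rainfall -> gentle runoff", "high rainfall -> steep runoff".
rules = [(0.0, 2.0, 0.1, 0.0), (10.0, 2.0, 0.8, 1.0)]
runoff = tsk_infer(5.0, rules)
```

ANFIS training adjusts the membership parameters (center, sigma) and consequent coefficients (a, b) by gradient descent and least squares; this sketch shows only the fixed-parameter inference step.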

A Development of Method for Surface and Subsurface Runoff Analysis in Urban Composite Watershed (I) - Theory and Development of Module - (대도시 복합유역의 지표 및 지표하 유출해석기법 개발 (I)- 이론 및 모듈의 개발 -)

  • Kwak, Chang-Jae;Lee, Jae-Joon
    • Journal of Korea Water Resources Association / v.45 no.1 / pp.39-52 / 2012
  • Surface-subsurface interactions are an intrinsic component of the hydrologic response within a watershed. In general, these interactions are considered one of the most difficult areas of the discipline, particularly for a modeler who intends to simulate the dynamic relations between these two major domains of the hydrological cycle. One major complexity is the spatial and temporal variation in the dynamically interacting system behavior; simulating it properly requires an appropriate coupling mechanism between the surface and subsurface components of the system. In this study, an approach for modelling surface-subsurface flow and transport in a fully integrated way is presented. The model uses the 2-dimensional diffusion wave equation for sheet surface-water flow, and the Boussinesq equation with Darcy's law and the Dupuit-Forchheimer assumption for variably saturated subsurface flow. The coupled system of equations governing surface and subsurface flows is discretized using the finite volume method with central differencing in space and the Crank-Nicolson method in time. The interactions between surface and subsurface flows are treated by mass balance, based on continuity conditions for pressure head and exchange flux. A major module consisting of four sub-modules (the SUBFA, SFA, IA, and NS modules) was developed.
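
The Crank-Nicolson time stepping used in the discretization can be illustrated on the simplest case: 1-D linear diffusion with zero Dirichlet boundaries, solved with a tridiagonal (Thomas) sweep. This is a toy sketch, not the paper's coupled 2-D finite volume scheme:

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal, d = rhs."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def crank_nicolson_step(u, alpha):
    """One Crank-Nicolson step for u_t = D u_xx, zero Dirichlet boundaries.
    alpha = D*dt/dx**2; half the diffusion is explicit, half implicit."""
    n = len(u)
    a = [-alpha / 2.0] * n
    b = [1.0 + alpha] * n
    c = [-alpha / 2.0] * n
    d = [0.0] * n
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        d[i] = u[i] + (alpha / 2.0) * (left - 2.0 * u[i] + right)
    return thomas_solve(a, b, c, d)

u0 = [0.0, 0.0, 1.0, 0.0, 0.0]   # initial head spike on a 5-cell grid
u1 = crank_nicolson_step(u0, alpha=0.5)
```

The spike spreads symmetrically and its peak decays, as diffusion should behave; Crank-Nicolson's appeal is second-order accuracy in time with unconditional stability for this equation.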

Bundle Block Adjustment of Omni-directional Images by a Mobile Mapping System (모바일매핑시스템으로 취득된 전방위 영상의 광속조정법)

  • Oh, Tae-Wan;Lee, Im-Pyeong
    • Korean Journal of Remote Sensing / v.26 no.5 / pp.593-603 / 2010
  • Most spatial data acquisition systems employing a set of frame cameras suffer from small fields of view and a poor base-to-distance ratio. These limitations can be significantly reduced by employing an omni-directional camera capable of acquiring images in every direction. Bundle Block Adjustment (BBA) is one of the existing georeferencing methods for determining the exterior orientation parameters of two or more images. In this study, by extending the traditional BBA method, we develop a mathematical model of BBA for omni-directional images. The proposed model includes three main parts: observation equations based on collinearity equations newly derived for omni-directional images, stochastic constraints imposed from GPS/INS data, and stochastic constraints from GCPs. We also report experimental results from applying the proposed BBA to real data obtained mainly in urban areas. We applied four types of mathematical models with different combinations of the constraints. For the type where only GCPs are used as constraints, the proposed BBA provides the most accurate results, with an RMSE of ±5 cm in the estimated ground point coordinates. In the future, we plan to perform more sophisticated lens calibration for the omni-directional camera to improve the georeferencing accuracy. The georeferenced omni-directional images can be effectively utilized for city modelling, particularly autonomous texture mapping for realistic street views.
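
The observation equations build on collinearity; for reference, the standard frame-camera form (with one common omega-phi-kappa rotation convention) can be sketched as below. The paper derives a different, omni-directional variant, so this is only the familiar baseline:

```python
import math

def project_collinearity(X, Y, Z, X0, Y0, Z0, omega, phi, kappa, f):
    """Standard frame-camera collinearity equations: project ground point
    (X, Y, Z) into image coordinates (x, y) for a camera at (X0, Y0, Z0)
    with rotation angles omega, phi, kappa (radians) and focal length f."""
    cw, sw = math.cos(omega), math.sin(omega)
    cp, sp = math.cos(phi), math.sin(phi)
    ck, sk = math.cos(kappa), math.sin(kappa)
    # Photogrammetric rotation matrix (omega-phi-kappa convention).
    r11, r12, r13 = cp * ck, cw * sk + sw * sp * ck, sw * sk - cw * sp * ck
    r21, r22, r23 = -cp * sk, cw * ck - sw * sp * sk, sw * ck + cw * sp * sk
    r31, r32, r33 = sp, -sw * cp, cw * cp
    dx, dy, dz = X - X0, Y - Y0, Z - Z0
    u = r11 * dx + r12 * dy + r13 * dz
    v = r21 * dx + r22 * dy + r23 * dz
    w = r31 * dx + r32 * dy + r33 * dz
    return (-f * u / w, -f * v / w)

# Nadir camera at 100 m looking straight down: the point directly below
# projects to the principal point (0, 0).
x, y = project_collinearity(0, 0, 0, 0, 0, 100.0, 0.0, 0.0, 0.0, 0.05)
```

In a BBA, these projections supply the observation equations whose residuals are minimized over all exterior orientation parameters and ground points simultaneously.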

Building Height Extraction using Triangular Vector Structure from a Single High Resolution Satellite Image (삼각벡터구조를 이용한 고해상도 위성 단영상에서의 건물 높이 추출)

  • Kim, Hye-Jin;Han, Dong-Yeob;Kim, Yong-Il
    • Korean Journal of Remote Sensing / v.22 no.6 / pp.621-626 / 2006
  • Today's commercial high-resolution satellite imagery, such as IKONOS and QuickBird, offers the potential to extract useful spatial information for geographical database construction and GIS applications. Extraction of 3D building information from high-resolution satellite imagery is one of the most active research topics. There have been many previous works on extracting 3D information based on stereo analysis, including sensor modelling; in practice, however, it is not easy to obtain stereo high-resolution satellite images. For single images, most studies applied manually extracted roof-bottom points or shadow lengths to sensor models with a DEM, and such algorithms are not suitable for densely built areas. We aim to extract 3D building information from a single satellite image in a simple and practical way. To measure as many buildings as possible, we suggest a new way to extract building height using a triangular vector structure that consists of a building bottom point, its corresponding roof point, and a shadow end point. The proposed method increases the number of measurable buildings, decreases the digitizing error, and improves the computational efficiency.
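
In the simplest flat-terrain case, the shadow leg of such a triangle alone relates building height to sun elevation; a simplified sketch (the paper's full model also uses the roof-bottom geometry and the sun and sensor azimuths):

```python
import math

def building_height_from_shadow(shadow_length_m, sun_elevation_deg):
    """Height of a vertical structure from its shadow on flat ground:
    h = L * tan(sun elevation)."""
    return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

# A 20 m shadow under a 45-degree sun implies a roughly 20 m building.
h = building_height_from_shadow(20.0, 45.0)
```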