• Title/Summary/Keyword: modeling software

A Construction of the C_MDR(Component_MetaData Registry) for the Environment of Exchanging the Component (컴포넌트 유통환경을 위한 컴포넌트 메타데이타 레지스트리 구축 : C_MDR)

  • Song, Chee-Yang; Yim, Sung-Bin; Baik, Doo-Kwon; Kim, Chul-Hong
    • Journal of KIISE: Computing Practices and Letters / v.7 no.6 / pp.614-629 / 2001
  • As the 21st-century information-intensive society built on the global Internet advances, software is becoming larger and more complex, and demand for it is increasing briskly. Activating reuse by developing and exchanging standardized components has therefore become an important issue in academia and industry. Foreign marketplaces currently provide information services for commercial components as vendor product listings, but the components serviced in each marketplace are inconsistent, insufficient, and unstandardized; that is, no component data registry based on ISO 11179 has been constructed. The national government has accordingly stepped up a plan to release public components in 2001. Tools for sharing and exchanging data therefore have to support standardized component meta-information. In this paper, we propose the C_MDR system: a tool that registers and manages standardized meta-information, based on ISO 11179, for commercial components. The purpose of the system is to share and exchange data systematically and thereby accelerate component reuse. We present a specification framework for component meta-information, define the meta-information according to this framework, and represent it in XML to enhance interoperability with other systems. We also show that a three-layered representation keeps the modeling simple and understandable. As an implementation, a prototype system for component meta-information was built on the Web, using ASP as the development language and the Oracle RDBMS on a PC. We thus expect standardization of exchanged component metadata that can be applied to component-exchange reuse tools.

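The XML representation described in this abstract can be made concrete with a minimal sketch. The paper does not publish the C_MDR schema, so every element name below (component, identifier, version, supplier, interfaces) is a hypothetical stand-in loosely styled on ISO 11179 registry attributes, not the system's actual format.

```python
# Hypothetical sketch: serializing one component metadata record to XML.
# All tag names are assumptions; the real C_MDR schema is not reproduced here.
import xml.etree.ElementTree as ET

def component_record_to_xml(record: dict) -> str:
    root = ET.Element("component")
    for field in ("identifier", "name", "version", "supplier", "definition"):
        ET.SubElement(root, field).text = str(record.get(field, ""))
    interfaces = ET.SubElement(root, "interfaces")
    for iface in record.get("interfaces", []):
        ET.SubElement(interfaces, "interface", name=iface)
    return ET.tostring(root, encoding="unicode")

print(component_record_to_xml({
    "identifier": "KR-COMP-0001",          # made-up registry identifier
    "name": "AddressValidator",
    "version": "1.0",
    "supplier": "ExampleSoft",
    "definition": "Validates postal addresses.",
    "interfaces": ["IValidate"],
}))
```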

LIM Implementation Method for Planning Biotope Area Ratio in Apartment Complex - Focused on Terrain and Pavement Modeling - (공동주택단지의 생태면적률 계획을 위한 LIM 활용방법 - 지형 및 포장재 모델링을 중심으로 -)

  • Kim, Bok-Young; Son, Yong-Hoon; Lee, Soon-Ji
    • Journal of the Korean Institute of Landscape Architecture / v.46 no.3 / pp.14-26 / 2018
  • The Biotope Area Ratio (BAR) is a quantitative pre-planning index for sustainable development and an integrated indicator for the balanced development of buildings and outdoor spaces. However, problems in its operation and management have been pointed out: errors in area calculation, insufficient handling of underground soil condition and depth, reduction of biotope area after construction, and failure to function as a pre-planning index. To address these problems, this study proposes implementing LIM. Since the weights of the BAR are decided mainly by the underground soil condition and depth together with land cover types, the study focused on terrain and pavements. The model should conform to the BIM guidelines and standards provided by government agencies and professional organizations, so the scope and Level Of Detail (LOD) of the model were defined, and a method to build the model with BIM software was developed. An apartment complex on sloping ground was selected as a case study; a 3D terrain was modeled, paving libraries were created with property information on the BAR, and a LIM model was completed for the site. The BAR was then calculated, and construction documents were created with the BAR table and pavement details. The study found that applying the BAR criteria and calculating the ratio became accurate, and that the efficiency of design tasks was improved by LIM. It also enabled evidence-based design of the terrain and underground structures. To adopt LIM, it is necessary to create and distribute LIM library manuals or templates and to build library content that complies with KBIMS standards. Government policy must also require practitioners to submit BIM models in the certification system. Since the BAR criteria are expected to be extended to planting types, further research is needed to build and utilize an information model for planting materials.
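
Since this abstract turns on how cover types and their weights produce the BAR, a small sketch of the arithmetic may help. It assumes the usual definition BAR = Σ(areaᵢ × weightᵢ) / site area; the weighting factors below are placeholders, not the official Korean criteria, which also vary with underground soil condition and depth.

```python
# Sketch of the Biotope Area Ratio (BAR) calculation under assumed weights.
WEIGHTS = {  # hypothetical weighting factors per land cover type
    "natural_ground_planting": 1.0,
    "permeable_pavement": 0.5,
    "impermeable_pavement": 0.0,
}

def biotope_area_ratio(surfaces, site_area_m2):
    """surfaces: iterable of (cover_type, area_m2) pairs."""
    weighted = sum(area * WEIGHTS[cover] for cover, area in surfaces)
    return weighted / site_area_m2

ratio = biotope_area_ratio(
    [("natural_ground_planting", 4000.0),
     ("permeable_pavement", 1500.0),
     ("impermeable_pavement", 4500.0)],
    site_area_m2=10000.0)
print(f"BAR = {ratio:.1%}")  # -> BAR = 47.5%
```

In a LIM workflow, the cover type and its weight would travel as property information on each paving library object, so the same summation can be read straight off the model.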

Estimation of Soil Moisture Using Sentinel-1 SAR Images and Multiple Linear Regression Model Considering Antecedent Precipitations (선행 강우를 고려한 Sentinel-1 SAR 위성영상과 다중선형회귀모형을 활용한 토양수분 산정)

  • Chung, Jeehun; Son, Moobeen; Lee, Yonggwan; Kim, Seongjoon
    • Korean Journal of Remote Sensing / v.37 no.3 / pp.515-530 / 2021
  • This study estimates soil moisture (SM) using Sentinel-1A/B C-band SAR (synthetic aperture radar) images and a multiple linear regression model (MLRM) in the Yongdam-Dam watershed of South Korea. Sentinel-1A and -1B images (6-day interval, 10 m resolution) were collected for 5 years, from 2015 to 2019. Geometric, radiometric, and noise corrections were performed using the SNAP (SentiNel Application Platform) software, and the images were converted to backscattering coefficients (BSC) of VV and VH polarization. In-situ SM data measured at 6 locations using TDR sensors were used to validate the estimated SM. Antecedent precipitation data for the preceding 5 days were also collected to compensate for vegetated areas where the radar signal does not reach the ground. The MLRM modeling was performed on yearly and seasonal data sets, and correlation analysis was performed according to the number of independent variables. The estimated SM was verified against observed SM using the coefficient of determination (R2) and the root mean square error (RMSE). With only the BSC in the grass area, R2 was 0.13 and RMSE was 4.83%. When 5 days of antecedent precipitation data were added, R2 was 0.37 and RMSE was 4.11%. With dry days and seasonal regression equations used to reflect the decreasing pattern and seasonal variability of SM, the correlation increased significantly, with an R2 of 0.69 and an RMSE of 2.88%.
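
The regression step described above is plain multiple linear regression on backscatter and antecedent-precipitation features. The sketch below illustrates it on synthetic data; the feature layout (VV, VH, 5-day precipitation sum) follows the abstract, but the data generation and all values are assumptions, not the paper's dataset.

```python
# Hedged sketch of soil-moisture estimation by multiple linear regression.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.normal(-12, 2, n),                      # VV backscatter [dB], synthetic
    rng.normal(-18, 2, n),                      # VH backscatter [dB], synthetic
    rng.gamma(1.0, 5.0, (n, 5)).sum(axis=1),    # 5-day antecedent precip [mm]
])
sm = 20 + 0.8 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 2, n)

model = LinearRegression().fit(X, sm)
pred = model.predict(X)
print("R2   =", round(r2_score(sm, pred), 2))
print("RMSE =", round(mean_squared_error(sm, pred) ** 0.5, 2), "%")
```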

Development of Drawing & Specification Management System Using 3D Object-based Product Model (3차원 객체기반 모델을 이용한 설계도면 및 시방서관리 시스템 구축)

  • Kim Hyun-nam; Wang Il-kook; Chin Sang-yoon
    • Korean Journal of Construction Engineering and Management / v.1 no.3 s.3 / pp.124-134 / 2000
  • In construction projects, design information, which should contain accurate product information in a systematic way, needs to be applicable throughout the project life cycle. However, paper-based 2D drawings and related documents make it difficult to communicate and share the owner's and architect's intentions and requirements effectively, and to build a corporate knowledge base through ongoing projects, due to the lack of interoperability between task- or function-specific software and the burden of handling massive amounts of information. Meanwhile, computer and information technologies are developing so rapidly that practitioners find it hard to adopt them in the industry efficiently. The 3D modeling capabilities of CAD systems have advanced enormously and enable users to associate 3D models with other relevant information; however, representing all design information in a CAD system still requires a great deal of effort and cost, and the resulting system is difficult to manage. This research focuses on the transition period from 2D-based to 3D-based design information management, in which 2D- and 3D-based management coexist. It proposes a compound 2D/3D CAD system in which a 3D model presents the general design information and is integrated with 2D CAD drawings for detailed design information. On this basis, an integrated information management system for drawings and specifications was developed by associating 2D drawings with 3D models, where the 2D drawings represent detailed designs and parts that are hard to express as 3D objects. To do this, the related management processes were analyzed to build an information model, which in turn became the basis of the integrated system.

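The core of the proposed system is the association between 3D objects, 2D drawings, and specifications. As a purely illustrative data model, not the paper's actual implementation, that link can be sketched like this:

```python
# Illustrative sketch: a 3D product object that carries general design
# information and points to 2D drawings and spec sections for the details
# that are hard to express in 3D. All names and fields are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DrawingRef:
    sheet_no: str    # e.g. "A-201"
    detail_id: str   # detail callout on that sheet

@dataclass
class SpecRef:
    section: str     # e.g. "09 30 00 Tiling"

@dataclass
class ProductObject3D:
    object_id: str
    category: str    # wall, slab, window, ...
    drawings: List[DrawingRef] = field(default_factory=list)
    specs: List[SpecRef] = field(default_factory=list)

wall = ProductObject3D("W-103", "wall",
                       drawings=[DrawingRef("A-201", "D5")],
                       specs=[SpecRef("09 30 00 Tiling")])
# Picking the 3D object in a viewer would resolve wall.drawings and wall.specs.
```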

Estimate and Environmental Assessment of Greenhouse Gas (GHG) Emissions and Sludge Emissions in Wastewater Treatment Processes for Climate Change (기후변화를 고려한 하수처리공법별 온실가스 및 슬러지 배출량 산정 및 환경성 평가)

  • Oh, Tae-Seok; Kim, Min-Jeong; Lim, Jung-Jin; Kim, Yong-Su; Yoo, Chang-Kyoo
    • Korean Chemical Engineering Research / v.49 no.2 / pp.187-194 / 2011
  • With international law now restricting ocean dumping of sludge, proper treatment of the sludge arising from wastewater treatment processes has become a problem. Sewage and sludge are generally handled under anaerobic conditions when sewage is treated and landfilled, and methane (CH₄) and nitrous oxide (N₂O) are discharged from this process. Because these gases are known contributors to global warming, wastewater treatment processes have become recognized as emission sources of greenhouse gases (GHG). This study suggests a new approach for estimating and environmentally assessing the GHG and sludge emissions of wastewater treatment processes, carried out by calculating the total GHG emitted from biological wastewater treatment and the amount of sludge generated by each process. Four major biological treatment processes, Anaerobic/Anoxic/Oxidation (A₂O), Bardenpho, Virginia Initiative Plant (VIP), and University of Cape Town (UCT), are modeled with the GPS-X software. Based on the modeling results, the GHG emissions and sludge production of each process are calculated following the Intergovernmental Panel on Climate Change (IPCC) 2006 guidelines. GHG emissions are calculated for both the water and sludge treatment lines, and an environmental assessment is performed for sludge-treatment scenarios such as composting, incineration, and reclamation; each scenario is compared using a unified index of economic and environmental assessment. The Bardenpho process emitted the least GHG with the lowest environmental impact, and composting emitted the least GHG among the sludge treatments.
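
The IPCC 2006 guidelines mentioned above estimate CH₄ from the degradable organics load. As a rough sketch, assuming the commonly cited form CH₄ = (TOW − S) × EF − R with EF = B₀ × MCF, where B₀ = 0.6 kg CH₄/kg BOD is the IPCC default for domestic wastewater, the calculation looks like this; the MCF and load values below are placeholders, not figures from the paper:

```python
# Sketch of an IPCC 2006-style CH4 estimate for a wastewater treatment line.
def ch4_emissions_kg(tow_kg_bod, sludge_removed_kg_bod, mcf,
                     b0=0.6, recovered_kg=0.0):
    """CH4 = (TOW - S) * EF - R, with EF = B0 * MCF (kg CH4 / kg BOD)."""
    ef = b0 * mcf
    return (tow_kg_bod - sludge_removed_kg_bod) * ef - recovered_kg

# Example: 10 t BOD/yr influent, 2 t removed as sludge, MCF = 0.3 (hypothetical)
print(ch4_emissions_kg(10_000, 2_000, mcf=0.3))  # -> 1440.0 kg CH4 / yr
```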

Binding Mode Analysis of Bacillus subtilis Obg with Ribosomal Protein L13 through Computational Docking Study

  • Lee, Yu-No; Bang, Woo-Young; Kim, Song-Mi; Lazar, Prettina; Bahk, Jeong-Dong; Lee, Keun-Woo
    • Interdisciplinary Bio Central / v.1 no.1 / pp.3.1-3.6 / 2009
  • Introduction: GTPases known as translation factors play a vital role as ribosomal subunit assembly chaperones. The bacterial Obg proteins (Spo0B-associated GTP-binding proteins) belong to the subfamily of P-loop GTPases and are now considered a new target for antibacterial drugs. The majority of bacterial Obgs have been found to be associated with the ribosome, implying that these proteins may play a fundamental role in ribosome assembly or maturation. In addition, experimental evidence has suggested that the Bacillus subtilis Obg (BsObg) protein binds the L13 ribosomal protein (BsL13), which is known to be one of the early assembly proteins of the 50S ribosomal subunit in Escherichia coli. To investigate the binding mode between BsObg and BsL13, a protein-protein docking simulation was carried out after generating a 3D structure of BsL13 by homology modeling. Materials and Methods: The homology model of BsL13 was generated using the EcL13 crystal structure as a template. Protein-protein docking of BsObg with BsL13 was performed with DOT, a macromolecular docking software package, to predict a reasonable binding mode. A solvated energy minimization of the docked conformation was carried out to refine the structure. Results and Discussion: The probable binding conformation of BsL13 to the activated Obg fold of BsObg was predicted by the docking study, and the final structure was obtained from the solvated energy minimization. From the analysis, three important hydrogen-bond interactions between the Obg fold and L13 were detected: Obg:Tyr27-L13:Glu32, Obg:Asn76-L13:Glu139, and Obg:Ala136-L13:Glu142. The interface between the BsObg and BsL13 structures was also analyzed by electrostatic potential calculations. From the results, the key residues for hydrogen bonding and hydrophobic interactions between the two proteins were predicted. Conclusion and Prospects: This study focused on the binding mode of the BsObg protein with the ribosomal BsL13 protein, investigated through protein-protein docking calculations of the activated Obg and its target. The predicted binding pattern can serve as a basis for structure-based design of novel antibacterial drugs.

Dose Distribution and Characterization for Radiation Fields of Multileaf Collimator System (방사선 입체조형치료용 다엽콜리메이터의 특성과 조직내 선량분포 측정)

  • Chu, Sung-Sil; Kim, Gwi-Eon
    • Radiation Oncology Journal / v.14 no.1 / pp.77-85 / 1996
  • Purpose: The multileaf collimator (MLC) is a very suitable tool for conformal radiotherapy, and commissioning measurements are required for an MLC installed on a dual-energy accelerator with 6 and 10 MV photons. For modeling the collimator in treatment planning software, detailed dosimetric characterization of the MLC, including penumbra width, leaf transmission, interleaf leakage, and localization of the leaf ends and sides, is essential. Materials and Methods: Characteristic data were measured for an MLC with 26 leaf pairs installed on a CLINAC 2100C linear accelerator. Low-sensitivity radiographic film (X-omat V) was used for the penumbra measurements, and separate experiments using radiographic film and thermoluminescent dosimeters were performed to verify the dose distribution. The measured films were analyzed with a WP700i scanner photodensitometer. Results: For 6 and 10 MV x-ray energies, approximately 2.0% of the photons incident on the MLC were transmitted, and an additional 0.5% leakage occurred between the leaves. The physical ends of the leaves deviated less than 1 mm from the 50% decrement line, a difference attributed to the curved leaf ends. One side of a single leaf corresponded to the 50% decrement line, while the opposite face was aligned with a lower value; this difference is due to the tongue-and-groove design used to decrease interleaf leakage. Aligning the leaves to form a straight edge produced a larger penumbra far from the isocenter than divergent alloy blocks. When the MLC edge is stepped along a sloping field, the isodose lines follow the leaf pattern and produce scalloped isodose curves in tissue. The effective penumbra of a 45-degree stepped MLC is about 10 mm at 10 cm depth for 6 MV x-rays, and the difference in effective penumbra in deep tissue between the MLC and divergent alloy blocks is small (5 mm). Conclusion: Based on these characteristic data, the MLC is clinically acceptable and suitable for 3-D conformal radiotherapy except for small field sizes.

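The effective penumbra reported above is read off a dose profile across the field edge; a common convention is the lateral distance between the 80% and 20% dose points. The sketch below applies that convention to a synthetic profile (an error-function edge), not to the paper's measured film data:

```python
# Illustrative 80%-20% penumbra width from a (synthetic) film dose profile.
import numpy as np
from scipy.special import erf

x = np.linspace(-20, 20, 2001)        # mm across the field edge
dose = 0.5 * (1 - erf(x / 4.0))       # synthetic edge profile, 1 -> 0

def crossing(x, y, level):
    """x-position where the monotone profile crosses `level`."""
    return x[np.argmin(np.abs(y - level))]

penumbra = abs(crossing(x, dose, 0.8) - crossing(x, dose, 0.2))
print(f"80%-20% penumbra width: {penumbra:.1f} mm")
```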

Product Recommender Systems using Multi-Model Ensemble Techniques (다중모형조합기법을 이용한 상품추천시스템)

  • Lee, Yeonjeong; Kim, Kyoung-Jae
    • Journal of Intelligence and Information Systems / v.19 no.2 / pp.39-54 / 2013
  • The recent explosive increase in electronic commerce provides customers with many advantageous purchase opportunities. In this situation, customers who lack sufficient knowledge about their purchases may accept product recommendations. Product recommender systems automatically reflect users' preferences and provide recommendation lists, so recommenders in online shopping stores have become one of the most popular tools for one-to-one marketing. However, recommender systems that do not properly reflect users' preferences cause disappointment and wasted time. In this study, we propose a novel recommender system that uses data mining and multi-model ensemble techniques to enhance recommendation performance by reflecting users' preferences precisely. The research data were collected from a real-world online shopping store that deals in products from famous art galleries and museums in Korea. The data initially contained 5,759 transactions; 3,167 remained after deleting records with null values. Categorical variables were transformed into dummy variables, and outliers were excluded. The proposed model consists of two steps. The first step predicts which customers have a high likelihood of purchasing products in the store. We first use logistic regression, decision trees, and artificial neural networks to predict such customers in each product group, performing these data mining techniques with the SAS E-Miner software. The data are partitioned into modeling and validation sets for the logistic regression and decision trees, and into training, test, and validation sets for the artificial neural network; the validation set is the same across all experiments. We then composite the results of each predictor using multi-model ensemble techniques, namely bagging and bumping. Bagging, short for "Bootstrap Aggregation," composites the outputs of several machine learning techniques to raise the performance and stability of prediction or classification; it is a special form of averaging. Bumping, short for "Bootstrap Umbrella of Model Parameters," keeps only the model with the lowest error value. The results show that bumping outperforms bagging and the individual predictors for every product group except "Poster," where the artificial neural network performs best. In the second step, we use market basket analysis to extract association rules for co-purchased products. Thirty-one association rules were extracted based on the Lift, Support, and Confidence measures, with the minimum transaction frequency to support an association set at 5%, the maximum number of items in an association at 4, and the minimum confidence for rule generation at 10%. Rules with a lift value below 1 were excluded, leaving fifteen association rules after removing duplicates. Of these, eleven associate products within the "Office Supplies" group, one associates "Office Supplies" with "Fashion," and three associate "Office Supplies" with "Home Decoration." Finally, the proposed recommender system provides a list of recommendations to the appropriate customers. We tested its usability using a prototype with real-world transaction and profile data; the prototype was built with ASP, JavaScript, and Microsoft Access. We also surveyed user satisfaction with the recommended product lists versus randomly selected product lists. The survey participants were 173 users of MSN Messenger, Daum Café, and P2P services, and satisfaction was measured on a five-point Likert scale. A paired-sample t-test on the survey results shows that the proposed model outperforms random selection at the 1% significance level, meaning that users were significantly more satisfied with the recommended product lists. The results suggest that the proposed system may be useful in real-world online shopping stores.
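
Of the two ensemble techniques this abstract describes, bumping is the less familiar: fit the same learner on bootstrap resamples and keep the single fit with the lowest error on the full training set. A minimal sketch under that description follows; the data, learner, and error measure are illustrative, not the study's SAS E-Miner setup.

```python
# Minimal bumping sketch: keep the bootstrap-trained model with lowest error.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import zero_one_loss

def bumping(X, y, n_boot=25, seed=0):
    rng = np.random.default_rng(seed)
    best_model, best_err = None, np.inf
    for _ in range(n_boot):
        idx = rng.integers(0, len(X), len(X))     # bootstrap resample
        model = DecisionTreeClassifier(max_depth=3).fit(X[idx], y[idx])
        err = zero_one_loss(y, model.predict(X))  # evaluate on the full set
        if err < best_err:
            best_model, best_err = model, err
    return best_model, best_err

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)           # toy target
model, err = bumping(X, y)
print("selected model's training error:", round(err, 3))
```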

A Meta Analysis of Using Structural Equation Model on the Korean MIS Research (국내 MIS 연구에서 구조방정식모형 활용에 관한 메타분석)

  • Kim, Jong-Ki; Jeon, Jin-Hwan
    • Asia Pacific Journal of Information Systems / v.19 no.4 / pp.47-75 / 2009
  • Recently, research on Management Information Systems (MIS) has laid out theoretical foundations and academic paradigms by introducing diverse theories, themes, and methodologies. In particular, MIS paradigms encourage a user-friendly approach by developing technologies from the users' perspective, which reflects the strong causal relationships between information systems and user behavior. As in other areas of social science, the use of structural equation modeling (SEM) has increased rapidly in recent years, especially in MIS. The SEM technique is important because it provides powerful ways to address key IS research problems; it has a unique ability to examine a series of causal relationships simultaneously while analyzing multiple independent and dependent variables at the same time. Despite its many benefits to MIS researchers, the technique has potential pitfalls. The objective of this study is to provide guidelines for the appropriate use of SEM based on an assessment of current practice in MIS research. The study focuses on several statistical issues, and the selected articles are assessed in three parts through meta-analysis: the initial specification of the theoretical model of interest, data screening prior to model estimation and testing, and the estimation and testing of theoretical models against empirical data. We reviewed the use of SEM in 164 empirical research articles published in four major MIS journals in Korea (APJIS, ISR, JIS, and JITAM) from 1991 to 2007. APJIS, ISR, JIS, and JITAM accounted for 73, 17, 58, and 16 of the applications, respectively, and the number of published applications has increased over time. LISREL was the most frequently used SEM software among MIS researchers (97 studies, 59.15%), followed by AMOS (45 studies, 27.44%). In the first part, concerning the initial specification of the theoretical model, all of the studies used cross-sectional data, which may better explain a structural model as a set of relationships. Most SEM studies employed confirmatory-type analysis (146 articles, 89%). Regarding model formulation, 159 (96.9%) of the studies specified a full structural equation model; in only 5 was SEM used for a measurement model with a set of observed variables. The average sample size across all models was 365.41, with samples ranging from as small as 50 to as large as 500. The second part concerns data screening prior to model estimation and testing, which is important particularly in how researchers deal with missing values. Data screening was discussed in 118 (71.95%) of the studies, while no study presented evidence of multivariate normality for its models. In the third part, on the estimation and testing of theoretical models against empirical data, assessing model fit is one of the most important issues because it establishes adequate statistical power for research models, and multiple fit indices were used across the SEM applications. The χ² test was reported in most of the studies (146, 89%), whereas the normed χ² (χ²/df) was reported less frequently (65 studies, 39.64%); a normed χ² of 3 or lower is required for adequate model fit. The most popular fit indices were GFI (109, 66.46%), AGFI (84, 51.22%), NFI (44, 47.56%), RMR (42, 25.61%), CFI (59, 35.98%), RMSEA (62, 37.80%), and NNFI (48, 29.27%). Regarding construct validity, convergent validity was examined in 109 studies (66.46%) and discriminant validity in 98 (59.76%), and 81 studies (49.39%) reported the average variance extracted (AVE). However, there was little discussion of direct (47, 28.66%), indirect, or total effects in the SEM models. Based on these findings, we suggest general guidelines for the use of SEM and propose recommendations concerning latent variable models, raw data, sample size, data screening, the reporting of parameter estimates and model fit statistics, multivariate normality, confirmatory factor analysis, reliabilities, and the decomposition of effects.
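
Two of the fit statistics discussed above are simple to compute once a SEM package reports χ², degrees of freedom, and sample size: the normed χ² (χ²/df, with ≤ 3 taken as adequate) and RMSEA under its standard formula √(max(χ² − df, 0) / (df·(N − 1))). The input values below are made up for illustration.

```python
# Sketch: normed chi-square and RMSEA from chi-square, df, and sample size N.
from math import sqrt

def normed_chi_square(chi2, df):
    return chi2 / df                      # <= 3 is commonly taken as adequate

def rmsea(chi2, df, n):
    return sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

chi2, df, n = 180.0, 84, 365              # hypothetical SEM output
print("normed chi2 =", round(normed_chi_square(chi2, df), 2))  # 2.14
print("RMSEA       =", round(rmsea(chi2, df, n), 3))           # ~0.056
```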

A Study on the Development of High Sensitivity Collision Simulation with Digital Twin (디지털 트윈을 적용한 고감도 충돌 시뮬레이션 개발을 위한 연구)

  • Ki, Jae-Sug; Hwang, Kyo-Chan; Choi, Ju-Ho
    • Journal of the Society of Disaster Information / v.16 no.4 / pp.813-823 / 2020
  • Purpose: To maximize the safety and productivity of high-risk, high-cost work, such as dismantling the facilities inside a reactor, through prior simulation, we intend to use digital twin technology that can closely mirror the specifications of the actual control equipment. Motion-control errors, caused by the time gap between the precision control equipment and the simulation when digital twin technology is applied, can lead to hazards such as collisions between hazardous facilities and control equipment; prior research is needed to eliminate and control these situations. Method: Unity 3D is currently the most popular engine used to develop simulations, but control errors can be caused by time correction inside the Unity 3D engine. The error is expected in many environments and may vary with the development environment, such as system specifications. To demonstrate this, we develop collision simulations with the Unity 3D engine, conduct collision experiments under various conditions, organize and analyze the results, and derive tolerances for precision control equipment from them. Result: In the collision-experiment simulations, the 1/1000-second time correction of an internal engine function call produces a per-unit-time distance error in the movement control of the colliding objects, and the distance error is proportional to the collision velocity. Conclusion: Remote dismantling simulators using digital twin technology should limit the speed of movement according to the precision required of the control devices, given the hardware and software environment and manual control. In addition, the development environment, hardware specifications, the size of the modeling data for the simulated control equipment and facilities, the available and acceptable errors of the operational control equipment, and the speed required of the work must all be taken into account.
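
The reported proportionality between distance error and velocity follows directly from a fixed time offset: an object moving at speed v is displaced by roughly v·Δt when the engine applies a 1/1000 s correction. The back-of-envelope sketch below illustrates the scaling only; it is not the Unity 3D experiment itself, and the speeds are arbitrary.

```python
# Position error from a fixed 1 ms time correction: error ~ v * dt.
DT_CORRECTION = 0.001  # s, the 1/1000 s correction cited in the abstract

for v in (0.1, 0.5, 1.0, 5.0):  # object speed in m/s (arbitrary)
    error_mm = v * DT_CORRECTION * 1000.0
    print(f"v = {v:4.1f} m/s -> per-correction position error ~ {error_mm:.1f} mm")
```

This scaling is why the conclusion ties the permissible movement speed to the precision required of the control equipment.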