• Title/Summary/Keyword: Reporting Model

Applying a Forced Censoring Technique with Accelerated Modeling for Improving Estimation of Extremely Small Percentiles of Strengths

  • Chen Weiwei;Leon Ramon V.;Young Timothy M.;Guess Frank M.
    • International Journal of Reliability and Applications
    • /
    • v.7 no.1
    • /
    • pp.27-39
    • /
    • 2006
  • Many real-world cases in material failure analysis do not follow the normal distribution perfectly. Forcing the normality assumption may lead to inaccurate predictions and poor product quality. We examine the failure process of the internal bond (IB, or tensile strength) of medium density fiberboard (MDF). We propose a forced censoring technique that fits the lower tails of strength distributions more closely and better estimates extremely small percentiles, which may be valuable to continuous quality improvement initiatives. Further analyses are performed to build an accelerated common-shape Weibull model for different product types using the JMP® Survival and Reliability platform. In this paper, a forced censoring technique is implemented for the first time as a software module, using JMP® Scripting Language (JSL) to expedite data processing, which is crucial for real-time manufacturing settings. We also use JSL to automate the task of fitting an accelerated Weibull model and testing model homogeneity in the shape parameter. Finally, a package script is written to readily provide field engineers with customized reporting for model visualization, parameter estimation, and percentile forecasting. Our approach may be more accurate for product conformance evaluation, and it may help reduce the cost of destructive testing and data management through a reduced frequency of testing. Even when destructive testing is not reduced, it may also help prevent field failures and improve product safety by yielding higher-precision intervals at the same confidence level.
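For readers outside the JMP® ecosystem, a minimal Python sketch of the forced-censoring idea follows. The paper implements this in JSL, so the library choice (lifelines), the synthetic data, and names such as CENSOR_POINT are assumptions, not the authors' code.

```python
# Forced censoring: treat observations above a chosen point as right-censored,
# so the Weibull likelihood is driven by the lower tail we care about.
import numpy as np
from lifelines import WeibullFitter

rng = np.random.default_rng(1)
ib_strength = rng.weibull(3.0, 500) * 120.0     # synthetic internal-bond strengths (psi)

CENSOR_POINT = np.percentile(ib_strength, 20)   # assumption: censor above P20
observed = ib_strength <= CENSOR_POINT          # True = exact failure, False = censored
durations = np.minimum(ib_strength, CENSOR_POINT)

wf = WeibullFitter().fit(durations, event_observed=observed)

# Extreme lower percentile from the fitted Weibull S(t) = exp(-(t/lambda)^rho):
p = 0.001                                       # 0.1st percentile
t_p = wf.lambda_ * (-np.log(1 - p)) ** (1 / wf.rho_)
print(f"estimated {p:.1%} percentile: {t_p:.2f}")
```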

A Study on Accrual Earnings Management of Shipping Companies (해운사의 발생액 이익조정에 관한 연구)

  • Hong, Soon-Wook
    • Journal of Navigation and Port Research
    • /
    • v.45 no.3
    • /
    • pp.173-180
    • /
    • 2021
  • Although accounting is one of the core fields of corporate management, few studies have reported accounting phenomena involving shipping companies. In addition, although financial reporting is very important to shipping companies, which use several financial tools such as ship finance and financial leases, it is difficult to identify studies investigating shipping companies' financial reporting, especially their earnings management. The purpose of this study is to analyze the accrual earnings management behavior of shipping companies. Companies with high debt ratios and net losses are known to have incentives for earnings management. Due to the nature of the industry, shipping companies have high debt ratios and often report net losses; accordingly, they are expected to engage in substantial earnings management. Based on an analysis of KOSPI companies listed on the Korea Exchange from 2001 to 2020, it was found that shipping companies engage in higher levels of earnings management than non-shipping companies. Discretionary accruals were used as a proxy variable for earnings management, measured using the modified Jones model of Dechow et al. (1995) and the performance-matched model of Kothari et al. (2005). In this study, significant results were derived by comparatively analyzing earnings management, one of the major accounting behaviors, between shipping and non-shipping companies. Stakeholders such as external auditors, investors, financial institutions, analysts, and government authorities need to be aware of the earnings management behavior of listed shipping companies during their external audit, financial analysis, and supervision. Finally, listed shipping companies must conduct stricter accounting based on accounting principles.
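A hedged sketch of the modified Jones model estimation (Dechow et al., 1995) described above; the column names and the toy cross-section are illustrative assumptions, and in practice the regression is estimated cross-sectionally by industry-year.

```python
# Discretionary accruals = residuals from regressing scaled total accruals on
# 1/assets, (change in revenue - change in receivables), and PPE.
import pandas as pd
import statsmodels.api as sm

def discretionary_accruals(df: pd.DataFrame) -> pd.Series:
    """df needs: total_accruals, lagged_assets, d_rev, d_rec, ppe (level values)."""
    y = df["total_accruals"] / df["lagged_assets"]
    X = pd.DataFrame({
        "inv_assets": 1.0 / df["lagged_assets"],
        "d_rev_rec":  (df["d_rev"] - df["d_rec"]) / df["lagged_assets"],
        "ppe":        df["ppe"] / df["lagged_assets"],
    })
    model = sm.OLS(y, sm.add_constant(X)).fit()   # constant included, as in Kothari et al.
    return model.resid                            # residual = discretionary accrual

# toy cross-section (illustrative numbers only)
firms = pd.DataFrame({
    "total_accruals": [12.0, -8.0, 5.0, -2.0, 9.0],
    "lagged_assets":  [200.0, 150.0, 300.0, 120.0, 250.0],
    "d_rev":          [30.0, -10.0, 20.0, 5.0, 15.0],
    "d_rec":          [10.0, -2.0, 5.0, 1.0, 4.0],
    "ppe":            [80.0, 60.0, 150.0, 40.0, 90.0],
})
da = discretionary_accruals(firms)
```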

Development and Application of a Scenario Analysis System for CBRN Hazard Prediction (화생방 오염확산 시나리오 분석 시스템 구축 및 활용)

  • Byungheon Lee;Jiyun Seo;Hyunwoo Nam
    • Journal of the Korea Society for Simulation
    • /
    • v.33 no.3
    • /
    • pp.13-26
    • /
    • 2024
  • The CBRN (Chemical, Biological, Radiological, and Nuclear) hazard prediction model is a system that supports commanders in making better decisions by generating contamination distributions and damage prediction areas based on the weapon used, terrain, and weather information in the event of a chemical, biological, or radiological incident. NBC_RAMS (Nuclear, Biological and Chemical Reporting And Modeling S/W System), developed by the ADD (Agency for Defense Development), is used not only to support decision making for various military operations and exercises but also to analyze CBRN-related events after the fact. Building on the NBC_RAMS core engine, we introduce a CBRN hazard assessment scenario analysis system that can generate contaminant distribution predictions reflecting various CBRN scenarios, and we describe how to apply it for specific purposes in terms of input information, meteorological data, land data with land coverage and DEM, and building data in polygon form. As practical use cases, we address a technology that tracks the origin of a contaminant source with artificial intelligence and a technology that selects the optimal location for CBRN detection sensors by scoring and analyzing the large amounts of data generated by the scenario analysis system. Through this system, it is possible to generate AI-ready CBRN training and analysis data and to support operation and exercise planning through battlefield prediction.
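NBC_RAMS's engine is proprietary and far richer (terrain, DEM, weather fields); the toy Gaussian plume below only illustrates the kind of contaminant-concentration grid a single scenario run produces. All parameters, including the linear dispersion coefficients, are assumptions.

```python
# One synthetic "scenario" grid of ground-level concentrations downwind of a
# continuous point release, as a stand-in for a dispersion-engine output.
import numpy as np

def plume_concentration(x, y, q=1.0, u=3.0, h=2.0):
    """x: downwind distance (m), y: crosswind offset (m), q: release rate (kg/s),
    u: wind speed (m/s), h: release height (m). Dispersion widths grow roughly
    linearly with x here (crude stand-ins for Pasquill-Gifford curves)."""
    sigma_y = 0.08 * x
    sigma_z = 0.06 * x
    coeff = q / (2 * np.pi * u * sigma_y * sigma_z)
    # ground reflection doubles the vertical term at z = 0
    return coeff * np.exp(-y**2 / (2 * sigma_y**2)) * 2 * np.exp(-h**2 / (2 * sigma_z**2))

xs, ys = np.meshgrid(np.linspace(1, 2000, 200), np.linspace(-300, 300, 120))
field = plume_concentration(xs, ys)   # one grid usable as AI training/analysis data
```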

A Preliminary Research for Developing System Prototype Generating Linear Schedule (선형 공정표를 생성하는 시스템 프로토타입 개발을 위한 기초 연구)

  • Ryu, Han-Guk
    • Journal of the Korea Institute of Building Construction
    • /
    • v.11 no.1
    • /
    • pp.1-8
    • /
    • 2011
  • The linear scheduling method is limited to presenting work from the work breakdown structure as lines, and linear schedules have often been developed manually. In other words, a linear schedule could not utilize the activity, work breakdown structure, and other information of a network schedule such as CPM (Critical Path Method), and has been used only for reporting or confirming the construction master plan. Therefore, it is necessary to develop a system that can automatically generate a linear schedule from a network schedule, which holds much accumulated and useful construction schedule information. Thus, this research attempts to establish a data process model, data flow diagram, and data model for generating linear schedules. In addition, this research addresses the system solution structure, user interface class diagram and logic diagram, and data type schema. The results of this paper can be used as preliminary research for developing a linear-schedule-generating system prototype that utilizes network schedule information.
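As a rough illustration of the kind of data model the paper sketches, the snippet below maps CPM activity records to (time, location) line segments for a linear schedule chart; the field names and values are invented, not the paper's schema.

```python
# Each CPM activity with a location range becomes one line in (time, location) space.
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    early_start: float   # from the CPM forward pass (days)
    early_finish: float
    loc_from: float      # location/station where work begins (e.g., floor, chainage)
    loc_to: float        # location where work ends

def to_line_segments(activities):
    """Each activity becomes a ((t0, loc0), (t1, loc1), label) segment for plotting."""
    return [((a.early_start, a.loc_from), (a.early_finish, a.loc_to), a.name)
            for a in activities]

segments = to_line_segments([
    Activity("Excavation", 0, 10, 0.0, 100.0),
    Activity("Foundation", 5, 18, 0.0, 100.0),
])
```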

Development and Validity of Creativity Path Inventory (CPI) (창의성 경로 척도(Creativity Path Inventory)의 개발 및 타당화)

  • Lee, Hyunjoo;Lee, Mina;Park, Eunji
    • Journal of Gifted/Talented Education
    • /
    • v.25 no.4
    • /
    • pp.511-528
    • /
    • 2015
  • The development process from creative potential to realized talent is complex and non-linear. This feature stands out more in the process of living a creative life over the long term than in situations where certain problems are solved in the short term. The purpose of this study is to develop the Creativity Path Inventory (CPI) for undergraduate students based on Sawyer's Zigzag Model, one of the theories of the creative process, and to verify the reliability and validity of the inventory. Reflecting the characteristics of each stage of the model, this study developed 88 items across 8 factors and finally confirmed 38 items across 7 factors through item analysis and verification of construct validity. The internal consistency of the 38 CPI items was .835, confirming the reliability of the inventory, and the goodness-of-fit indices of the final model were also adequate. The CPI, with its verified reliability and validity, will help people who want to manifest everyday creativity to pursue self-improvement by self-reporting their strengths and weaknesses.
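The reported internal consistency (.835) is Cronbach's alpha; a small sketch of its computation on synthetic 38-item data follows, purely for illustration and not based on the study's data.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item variances / total-score variance).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 1))                       # shared trait drives all items
scores = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(300, 38))), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.3f}")
```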

Estimation of lapse rate of variable annuities by using Cox proportional hazard model (Cox 비례위험모형을 이용한 변액연금 해지율의 추정)

  • Kim, Yumi;Lee, Hangsuck
    • Journal of the Korean Data and Information Science Society
    • /
    • v.24 no.4
    • /
    • pp.723-736
    • /
    • 2013
  • The importance of the lapse rate is increasing greatly due to the introduction of the cash-flow pricing system, non-refund-of-reserve insurance policies, and IFRS (International Financial Reporting Standards) to the Korean insurance market. Research on lapse rates has mainly focused on simple data analysis and regression analysis. However, the lapse rate can be studied with survival analysis and explained well in terms of several covariates with the Cox proportional hazard model. Guaranteed minimum benefits embedded in variable annuities require more refined statistical analysis of the lapse rate. Hence, this paper analyzes data on variable-annuity policyholders using the Cox proportional hazard model. The key policyholder variables that influence the lapse rate are payment method, premium, lapse insured to term insured, reserve-to-GMXB ratio, and age.
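The Cox proportional hazards fit described can be reproduced in outline with lifelines. The synthetic columns below mirror some of the covariates the paper reports but are assumptions, not the study's data; a categorical covariate such as payment method would be dummy-coded before fitting.

```python
# Cox PH model of lapse risk: hazard ratios per covariate from policyholder data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 1000
df = pd.DataFrame({
    "duration": rng.exponential(36, n),           # months in force until lapse/censoring
    "lapsed": rng.integers(0, 2, n),              # 1 = lapse observed, 0 = censored
    "premium": rng.lognormal(8, 0.5, n),
    "reserve_gmxb_ratio": rng.uniform(0.5, 1.5, n),
    "age": rng.integers(30, 70, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="lapsed")  # remaining columns = covariates
cph.print_summary()                                       # hazard ratios per covariate
```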

Analyzing Common Method Bias of the Korean Empirical Studies on Technology Acceptance Model (한국 TAM 실증연구의 동일방법편의 분석)

  • Baek, Sang-Yong
    • The Journal of Information Systems
    • /
    • v.21 no.1
    • /
    • pp.1-17
    • /
    • 2012
  • Common method bias (CMB) may inflate correlations between measures assessed via the same method. The CMB problem is well known in the behavioral sciences because survey methods with self-reporting are vulnerable to CMB, and the discussion of CMB is still ongoing in MIS research in the US. In Korea, however, MIS research has paid little attention to the CMB problem. The purpose of this study is to examine the CMB problem in Korean MIS research. To evaluate the effect of CMB, empirical studies on the Technology Acceptance Model (TAM) were selected because (1) TAM is one of the most intensively studied MIS research areas, (2) TAM is a theoretical model well supported by existing empirical studies, so the results of this study would have great ripple effects if the CMB problem turned out to be serious, and (3) CMB is domain-specific. 47 TAM samples (from 45 studies) in three Korean journals were selected, and the relevant data, such as correlation matrices and the measures of the dependent variable, were collected. To find and evaluate the size of CMB, two analytic methods (the marker-variable technique and the method-method pair technique) were employed. The results showed that CMB exists in the Korean studies, but the problem is not serious enough to distort empirical testing, compared with US studies. However, considering that CMB can contaminate testing results, Korean MIS researchers should explicitly deal with the problem when designing empirical studies and collecting data.
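A sketch of the marker-variable adjustment (Lindell and Whitney's technique, the first of the two methods named); the correlation values are illustrative, not taken from the reviewed studies.

```python
# Marker-variable technique: take the smallest correlation in the matrix (r_m)
# as the estimate of method variance and partial it out of substantive correlations.
def marker_adjusted(r_ij: float, r_m: float) -> float:
    """CMB-adjusted correlation between two substantive constructs."""
    return (r_ij - r_m) / (1 - r_m)

r_pu_intention = 0.62   # observed correlation (illustrative value)
r_marker = 0.12         # smallest correlation, assumed to reflect method variance
print(f"adjusted r = {marker_adjusted(r_pu_intention, r_marker):.3f}")
# If the adjusted r remains significant, CMB alone cannot explain the relationship.
```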

An Improved Calibration Method for the COCOMO II Post-Architecture Model

  • Yoon, Myoung-Young
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.5 no.2
    • /
    • pp.47-55
    • /
    • 2000
  • To date, many software engineering cost models have been developed to predict the cost, schedule, and effort of software under development. COCOMO II is well suited to new software development life cycles such as non-sequential and rapid-development processes. The traditional regression approach based on the least-squares criterion is the most commonly used technique for empirical calibration of the COCOMO II model, but it rests on a few assumptions that are frequently violated by software engineering data sets. The source data are also generally imprecise in reporting size, effort, and cost-driver ratings, particularly across different organizations, and outliers in the source data can distort the fit. To cope with these difficulties, in this paper we propose a new regression method for calibrating the COCOMO II Post-Architecture model based on the minimum relative error (MRE) criterion. The characteristic of the proposed method is that it is insensitive to extreme values in the data during empirical calibration. The experimental results show that the proposed MRE calibration method is superior to the traditional regression approach for model calibration, as illustrated by the values obtained for the standard deviation ($\hat{\sigma}$) and the prediction-at-level PRED(L) measures.
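A minimal sketch of calibration under an MRE-style criterion, assuming a simplified two-parameter form PM = A·Size^B rather than the full Post-Architecture model with effort multipliers; the project data are synthetic and the optimizer choice is an assumption.

```python
# Calibrate (A, B) by directly minimizing the mean magnitude of relative error,
# then report PRED(0.25): the fraction of projects predicted within 25%.
import numpy as np
from scipy.optimize import minimize

size = np.array([10., 25., 60., 120., 300.])        # KSLOC (synthetic projects)
effort = np.array([35., 95., 260., 520., 1600.])    # person-months

def mre_loss(params):
    a, b = params
    pred = a * size ** b
    return np.mean(np.abs((effort - pred) / effort))

res = minimize(mre_loss, x0=[3.0, 1.0], method="Nelder-Mead")
a, b = res.x
pred = a * size ** b
pred_25 = np.mean(np.abs((effort - pred) / effort) <= 0.25)  # PRED(0.25)
print(f"A={a:.2f}, B={b:.2f}, PRED(0.25)={pred_25:.2f}")
```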

A Study on HSTPA Model for Improvement of Emergency Response Training for Ships (선박의 비상대응훈련 개선을 위한 HSTPA 모델에 관한 연구)

  • Han, Ki-Young;Jung, Jin-ki;Ahn, Young-Joong
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.25 no.4
    • /
    • pp.441-447
    • /
    • 2019
  • Since emergency response training for maritime safety and the safety education of maritime education institutions are conducted on the basis of fixed scenarios and educational content, there are limits to how much they can reduce human error and prepare trainees for diverse situations. Although improvement is needed, there is no established way to improve response capabilities by assessing existing training and securing diversity of situations. This study proposes a theoretical procedure analysis method for modeling the diversity of situations to improve emergency response training. This paper defines a human and system theoretical procedure analysis (HSTPA) model based on the organic relationship between humans and systems. The limitations of existing training were derived by analyzing the errors each component can produce and applying them to fire response training scenarios that require vertical reporting systems and responses. The segmentation and inspection of training scenario considerations under the proposed HSTPA model is believed to help create diverse and realistic scenarios for emergency response training and education, and to improve trainees' situational judgment and response capabilities.
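A hedged sketch of how component-wise error enumeration in the spirit of HSTPA could seed diverse drill scenarios; the components and error modes below are invented examples, not the paper's taxonomy.

```python
# Enumerate combinations of per-component behaviors to generate scenario variants
# for a shipboard fire drill with a vertical reporting chain.
from itertools import product

components = {
    "lookout":    ["reports correctly", "delays report", "misidentifies smoke"],
    "bridge":     ["relays order", "omits relay"],
    "fire_party": ["responds", "responds late"],
}

scenarios = [dict(zip(components, combo)) for combo in product(*components.values())]
print(len(scenarios), "scenario variants")   # 3 * 2 * 2 = 12 variants for training
```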

Object detection in financial reporting documents for subsequent recognition

  • Sokerin, Petr;Volkova, Alla;Kushnarev, Kirill
    • International journal of advanced smart convergence
    • /
    • v.10 no.1
    • /
    • pp.1-11
    • /
    • 2021
  • Document page segmentation is an important step in building a quality optical character recognition module. The study examined existing work on page segmentation and focused on developing a segmentation model that has greater functional significance for application in an organization, as well as broad capabilities for managing the quality of the model. The main problems of document segmentation were highlighted, including complex backgrounds of intersecting objects. As detection classes, not only the classic text, table, and figure were selected, but also additional types such as signature, logo, and table without borders (or with partially missing borders). This made it possible to pose the non-trivial task of detecting non-standard document elements. The authors compared existing neural network architectures for object detection based on published research data; the most suitable architecture was RetinaNet. To ensure quality control of the model, a method based on neural network modeling using the RetinaNet architecture is proposed. During the study, several models were built, whose quality was assessed on the test sample using the mean Average Precision (mAP) metric. The best result among the constructed algorithms was shown by a model comprising four neural networks: the first focused on detecting tables and borderless tables, the second on seals and signatures, the third on pictures and logos, and the fourth on text. The analysis revealed that the four-network approach showed the best results for most detection classes on the test sample, in accordance with the objectives of the study. The method proposed in the article can be used to recognize other objects. A promising direction for further analysis is the segmentation of tables, where the functional areas of a table would act as classes: heading, cell with a name, cell with data, and empty cell.
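A hedged sketch of one of the four detectors: a torchvision RetinaNet configured for a two-class document task (table, borderless table). Dataset plumbing is omitted, and the class count and dummy target are assumptions, not the authors' training setup.

```python
# One RetinaNet of the four-network ensemble, here specialized to table classes.
import torch
from torchvision.models.detection import retinanet_resnet50_fpn

num_classes = 3    # background + table + borderless table
model = retinanet_resnet50_fpn(weights=None, num_classes=num_classes)
model.train()

images = [torch.rand(3, 800, 600)]                       # one dummy page image
targets = [{"boxes": torch.tensor([[50., 40., 400., 300.]]),
            "labels": torch.tensor([1])}]                # one "table" box
loss_dict = model(images, targets)                       # classification + box losses
total_loss = sum(loss_dict.values())
total_loss.backward()                                    # one training step's gradients
```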