• Title/Summary/Keyword: proposed model


Software Reliability Growth Modeling in the Testing Phase with an Outlier Stage (하나의 이상구간을 가지는 테스팅 단계에서의 소프트웨어 신뢰도 성장 모형화)

  • Park, Man-Gon;Jung, Eun-Yi
    • The Transactions of the Korea Information Processing Society
    • /
    • v.5 no.10
    • /
    • pp.2575-2583
    • /
    • 1998
  • The production of highly reliable software systems and their performance evaluation have become important interests in the software industry. Software evaluation has been mainly carried out in terms of both the reliability and the performance of the software system. Software reliability is the probability that no software error occurs for a fixed time interval during the software testing phase. Theoretical software reliability models are sometimes unsuitable for the practical testing phase, in which a software error at a certain testing stage occurs because of imperfect debugging, abnormal software correction, and so on. Such a software testing stage needs to be considered as an outlying stage, and we can assume that software reliability does not improve, owing to a nuisance factor, in this outlying testing stage. In this paper, we discuss Bayesian software reliability growth modeling and an estimation procedure in the presence of an unidentified outlying software testing stage, based on a modification of the Jelinski-Moranda model. We also derive the Bayes estimators of the software reliability parameters under the assumption of prior information and the squared error loss function. In addition, we evaluate the proposed software reliability growth model with an unidentified outlying stage in an exchangeable model, according to the values of the nuisance parameter, using the accuracy, bias, trend, and noise metrics as quantitative evaluation criteria through computer simulation.
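As context for the Jelinski-Moranda model the abstract modifies: it assumes each remaining fault contributes an equal hazard, so the failure rate drops by a fixed amount after every (perfect) fix. Below is a minimal sketch of the plain JM likelihood with a crude grid search; it does not reproduce the paper's Bayesian treatment or outlier stage, and the inter-failure times are hypothetical.

```python
import math

def jm_log_likelihood(times, N, phi):
    """Log-likelihood of the Jelinski-Moranda model.

    times : observed inter-failure times t_1..t_n during testing
    N     : assumed initial number of faults (must be >= len(times))
    phi   : per-fault hazard rate
    The failure rate before the i-th failure is phi * (N - i + 1),
    i.e. it decreases by phi after each fault is removed.
    """
    ll = 0.0
    for i, t in enumerate(times, start=1):
        lam = phi * (N - i + 1)
        ll += math.log(lam) - lam * t
    return ll

# Hypothetical inter-failure times (growing gaps suggest reliability growth)
times = [7.0, 11.0, 18.0, 25.0, 40.0]

# Crude grid search standing in for the estimation machinery
best = max(
    ((N, phi) for N in range(len(times), 30)
              for phi in (0.001 * k for k in range(1, 200))),
    key=lambda p: jm_log_likelihood(times, p[0], p[1]),
)
```

The Bayesian version in the paper would place priors on `N` and `phi` and take posterior means under squared error loss instead of maximizing this likelihood directly.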


Development of Multimedia Annotation and Retrieval System using MPEG-7 based Semantic Metadata Model (MPEG-7 기반 의미적 메타데이터 모델을 이용한 멀티미디어 주석 및 검색 시스템의 개발)

  • An, Hyoung-Geun;Koh, Jae-Jin
    • The KIPS Transactions:PartD
    • /
    • v.14D no.6
    • /
    • pp.573-584
    • /
    • 2007
  • As multimedia information has recently been increasing rapidly, various types of multimedia data retrieval are becoming issues of great importance. Efficient multimedia data processing requires semantics-based retrieval techniques that can extract the meaningful content of multimedia data. Existing retrieval methods for multimedia data are annotation-based retrieval, feature-based retrieval, and retrieval based on the integration of annotation and features. These systems demand a great deal of effort and time from the annotator, and feature extraction requires complicated computation. In addition, the created data have the shortcoming that only static search, which does not change, is possible, and user-friendly, semantic search techniques are not supported. This paper proposes S-MARS (Semantic Metadata-based Multimedia Annotation and Retrieval System), which can represent and retrieve multimedia data efficiently using MPEG-7. The system provides a graphical user interface for annotating, searching, and browsing multimedia data. It is implemented on the basis of a semantic metadata model for representing multimedia information. The semantic metadata about multimedia data are organized on the basis of a multimedia description schema, using an XML schema that complies with the MPEG-7 standard. In conclusion, the proposed scheme can easily be implemented on any multimedia platform supporting XML technology. It can be used to enable efficient sharing of semantic metadata between systems, and it will contribute to improving retrieval correctness and user satisfaction with the embedding-based multimedia retrieval algorithm.
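As a rough illustration of XML-encoded annotation metadata of the kind the system stores: the sketch below builds a drastically simplified record with Python's standard `xml.etree.ElementTree`. The element names are illustrative placeholders, not the actual MPEG-7 description schema.

```python
import xml.etree.ElementTree as ET

def make_annotation(media_uri, free_text, keywords):
    """Build a toy XML annotation record (element names are hypothetical,
    not the real MPEG-7 schema) and return it as a string."""
    root = ET.Element("MultimediaDescription")
    ET.SubElement(root, "MediaLocator").text = media_uri
    ET.SubElement(root, "FreeTextAnnotation").text = free_text
    for kw in keywords:
        ET.SubElement(root, "Keyword").text = kw
    return ET.tostring(root, encoding="unicode")

doc = make_annotation("video01.mpg",
                      "A goal scene in a soccer match",
                      ["soccer", "goal"])
```

A real MPEG-7 document would validate against the standard's description schemes; this only shows why XML-based metadata is easy to share between systems.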

Factors Affecting Intention to Experience of 6th Industry (6차 산업 체험 의향에 영향을 미치는 요인에 관한 연구)

  • Choi, Yang-ae
    • Journal of Venture Innovation
    • /
    • v.3 no.1
    • /
    • pp.117-142
    • /
    • 2020
  • The purpose of this study is to explore the factors affecting the intention to experience the 6th industry using Schmitt's experience model. The newly introduced variables are cognitive experience, emotional experience, and social experience, reconstructed on the basis of Schmitt's experience theory, with gender and family composition as moderating variables and trust as a mediating variable, in addition to experience intention. The hypotheses were set as follows: the experience factors, namely the cognitive, emotional, and social factors, will have a positive (+) influence on the intention to experience, while mooring factors will have a negative (-) effect on the intention to experience. For statistical analysis, the SPSS 24 and AMOS 23 statistical packages were used to test the research hypotheses. The research was based on 320 questionnaires, of which 314 valid responses were analyzed. As a result, first, cognitive, emotional, and social factors had positive (+) effects on experience intention. Among the factors that directly affect experience intention, the magnitude of influence appeared in the order cognitive factors > social factors > emotional factors > mooring factors. Second, mooring factors had negative (-) effects on experience intention. Third, trust was partially influenced by the attraction, cognitive, emotional, and social factors. Fourth, there were statistically significant differences between men and women in the paths of the cognitive and mooring factors. Fifth, social factors and mooring factors differed significantly by household composition, and the social factors showing significant differences in the path analysis were also statistically demonstrated.
The results of this study academically verify that cognitive, emotional, and social factors have an important influence on experience intention in the 6th industry experience, and that Schmitt's experience model as applied in this study is a useful framework for analysis. In practical terms, the study provides implications for which factors should be the focus of strategy and marketing in order to activate the 6th industry experience.

Developing and Applying the Questionnaire to Measure Science Core Competencies Based on the 2015 Revised National Science Curriculum (2015 개정 과학과 교육과정에 기초한 과학과 핵심역량 조사 문항의 개발 및 적용)

  • Ha, Minsu;Park, HyunJu;Kim, Yong-Jin;Kang, Nam-Hwa;Oh, Phil Seok;Kim, Mi-Jum;Min, Jae-Sik;Lee, Yoonhyeong;Han, Hyo-Jeong;Kim, Moogyeong;Ko, Sung-Woo;Son, Mi-Hyun
    • Journal of The Korean Association For Science Education
    • /
    • v.38 no.4
    • /
    • pp.495-504
    • /
    • 2018
  • This study was conducted to develop items measuring scientific core competencies based on the statements of those competencies in the 2015 revised national science curriculum, and to identify the validity and reliability of the newly developed items. Based on the curriculum's descriptions of scientific reasoning, scientific inquiry ability, scientific problem-solving ability, scientific communication ability, and participation/lifelong learning in science, 25 items were developed by five science education experts. To explore the validity and reliability of the developed items, data were collected from 11,348 students in elementary, middle, and high schools nationwide. The content validity, substantive validity, internal structure validity, and generalization validity proposed by Messick (1995) were examined with various statistical tests. The MNSQ analysis showed no nonconforming items among the 25. A confirmatory factor analysis using structural equation modeling revealed that the five-factor model was suitable. Differential item functioning (DIF) analyses by gender and school level found nonconforming DIF values in only two out of 175 cases. A multivariate analysis of variance by gender and school level showed significant differences in test scores between school levels and between genders, and the interaction effect was also significant. The science core competency assessment items based on the 2015 revised national science curriculum are valid from a psychometric point of view and can be used in the science education field.

Accessibility Analysis in Mapping Cultural Ecosystem Service of Namyangju-si (접근성 개념을 적용한 문화서비스 평가 -남양주시를 대상으로-)

  • Jun, Baysok;Kang, Wanmo;Lee, Jaehyuck;Kim, Sunghoon;Kim, Byeori;Kim, Ilkwon;Lee, Jooeun;Kwon, Hyuksoo
    • Journal of Environmental Impact Assessment
    • /
    • v.27 no.4
    • /
    • pp.367-377
    • /
    • 2018
  • A cultural ecosystem service (CES), a non-material benefit that humans gain from ecosystems, has recently received greater recognition as gross national income increases. Previous research has proposed ways to quantify the value of CES, which remains a challenging issue today because of its social and cultural subjectivity. This study proposes a new way of assessing CES called the Cultural Service Opportunity Spectrum (CSOS). CSOS is an accessibility-based CES assessment methodology for the regional scale, designed to be applicable to any region in Korea in support of decision-making processes. CSOS employs public spatial data, namely a road network and a population density map. In addition, the results of the 'Rapid Assessment of Natural Assets' implemented by the National Institute of Ecology, Korea, were used as complementary data. CSOS was applied to Namyangju-si, and the methodology revealed specific areas with great accessibility to 'Natural Assets' in the region. Based on the results, the advantages and limitations of the methodology were discussed with regard to the weighting of its three main factors and in contrast to the Scenic Quality and Recreation models of InVEST, which, owing to their convenience, are commonly used for assessing CES today.

Real Option Analysis to Value Government Risk Share Liability in BTO-a Projects (손익공유형 민간투자사업의 투자위험분담 가치 산정)

  • KU, Sukmo;LEE, Sunghoon;LEE, Seungjae
    • Journal of Korean Society of Transportation
    • /
    • v.35 no.4
    • /
    • pp.360-373
    • /
    • 2017
  • BTO-a projects are the type of PPP project in Korea that carries demand risk. When demand risk is realized, the private investor encounters financial difficulties because revenue is lower than expected, and the government may also have problems in stable infrastructure operation. In this regard, the government has applied various risk-sharing policies in response to demand risk. However, the amount of the government's risk sharing constitutes a contingent liability of the government resulting from demand uncertainty, and it cannot be quantified by the conventional NPV method as expressed in the text of the concession agreement. The purpose of this study is to estimate the value of the government's investment risk sharing, considering demand risk, in the profit sharing system (BTO-a) introduced in 2015 as one of the demand risk sharing policies. The investment risk sharing takes the form of an option in finance: private investors have the right to claim subsidies from the government when their revenue declines, while the government has the obligation to pay subsidies under certain conditions. In this study, we established a methodology for estimating the value of investment risk sharing using the Black-Scholes option pricing model, and examined the appropriateness of the results through case studies. As a result of the analysis, the value of investment risk sharing is estimated at 12 billion won, which is about 4% of the private investment cost. In other words, the government provides 12 billion won of financial support by sharing the investment risk. The option value when the traffic volume risk is assumed to be a random variable is derived from the case studies as an average of 12.2 billion won with a standard deviation of 3.67 billion won. From the cumulative distribution, the option value of the 90% probability interval lies within the range of 6.9 to 18.8 billion won.
The method proposed in this study is expected to help the government and private investors better understand the risk and the economic value of investment risk sharing under the uncertainty of future demand.
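The subsidy structure described above has the payoff profile of a European put (the government pays when revenue falls below a guaranteed level), so the standard Black-Scholes put formula gives a sketch of the valuation. All inputs below are hypothetical illustration values, not figures from the study.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_put(S, K, r, sigma, T):
    """Black-Scholes value of a European put.

    S     : current level of the underlying (here, project revenue)
    K     : guaranteed level that triggers the subsidy (strike)
    r     : risk-free rate, sigma : volatility, T : horizon in years
    """
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return K * math.exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

# Hypothetical example: revenue guarantee at 90% of forecast revenue 100
value = bs_put(S=100.0, K=90.0, r=0.03, sigma=0.25, T=5.0)
```

Higher demand uncertainty (larger `sigma`) raises the option value, which is why the study treats traffic volume as a random variable and reports a distribution of values rather than a single number.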

Corporate Bond Rating Using Various Multiclass Support Vector Machines (다양한 다분류 SVM을 적용한 기업채권평가)

  • Ahn, Hyun-Chul;Kim, Kyoung-Jae
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.157-178
    • /
    • 2009
  • Corporate credit rating is a very important factor in the market for corporate debt. Information concerning corporate operations is often disseminated to market participants through the changes in credit ratings that are published by professional rating agencies, such as Standard and Poor's (S&P) and Moody's Investor Service. Since these agencies generally require a large fee for the service, and the periodically provided ratings sometimes do not reflect the default risk of the company at the time, it may be advantageous for bond-market participants to be able to classify credit ratings before the agencies actually publish them. As a result, it is very important for companies (especially, financial companies) to develop a proper model of credit rating. From a technical perspective, the credit rating constitutes a typical, multiclass, classification problem because rating agencies generally have ten or more categories of ratings. For example, S&P's ratings range from AAA for the highest-quality bonds to D for the lowest-quality bonds. The professional rating agencies emphasize the importance of analysts' subjective judgments in the determination of credit ratings. However, in practice, a mathematical model that uses the financial variables of companies plays an important role in determining credit ratings, since it is convenient to apply and cost efficient. These financial variables include the ratios that represent a company's leverage status, liquidity status, and profitability status. Several statistical and artificial intelligence (AI) techniques have been applied as tools for predicting credit ratings. Among them, artificial neural networks are most prevalent in the area of finance because of their broad applicability to many business problems and their preeminent ability to adapt. 
However, artificial neural networks also have many defects, including the difficulty of determining the values of the control parameters and the number of processing elements in each layer, as well as the risk of over-fitting. Of late, because of their robustness and high accuracy, support vector machines (SVMs) have become popular as a solution for problems requiring accurate prediction. An SVM's solution may be globally optimal because SVMs seek to minimize structural risk, whereas artificial neural network models tend to find locally optimal solutions because they seek to minimize empirical risk. In addition, no parameters need to be tuned in SVMs, barring the upper bound for non-separable cases in linear SVMs. Since SVMs were originally devised for binary classification, however, they are not intrinsically geared to multiclass classifications such as credit ratings. Thus, researchers have tried to extend the original SVM to multiclass classification, and a variety of techniques for extending standard SVMs to multiclass SVMs (MSVMs) have been proposed in the literature. However, only a few types of MSVM have been tested in prior studies applying MSVMs to credit rating. In this study, we examined six different MSVM techniques: (1) One-Against-One, (2) One-Against-All, (3) DAGSVM, (4) ECOC, (5) the method of Weston and Watkins, and (6) the method of Crammer and Singer. In addition, we examined the prediction accuracy of some modified versions of conventional MSVM techniques. To find the most appropriate MSVM technique for corporate bond rating, we applied all of the techniques to a real-world case of credit rating in Korea. The best application is corporate bond rating, which is the most frequently studied area of credit rating for specific debt issues or other financial obligations. For our study, the research data were collected from National Information and Credit Evaluation, Inc., a major bond-rating company in Korea.
The data set comprises the bond ratings for the year 2002 and various financial variables for 1,295 companies from the manufacturing industry in Korea. We compared the results of these techniques with one another and with those of traditional methods for credit rating, such as multiple discriminant analysis (MDA), multinomial logistic regression (MLOGIT), and artificial neural networks (ANNs). As a result, we found that DAGSVM with an ordered list was the best approach for the prediction of bond ratings. In addition, we found that the modified version of the ECOC approach can yield higher prediction accuracy for cases showing clear patterns.
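The One-Against-One decomposition examined above can be sketched with a toy binary learner standing in for an SVM: for K classes it trains K(K-1)/2 pairwise models and predicts by majority vote. The sketch below uses a nearest-centroid rule on synthetic data purely to show the decomposition, not the SVM optimization itself.

```python
import numpy as np

def train_ovo(X, y):
    """Train one pairwise model per class pair (One-Against-One).
    The 'model' here is just the pair of class centroids."""
    classes = np.unique(y)
    models = {}
    for i, a in enumerate(classes):
        for b in classes[i + 1:]:
            models[(a, b)] = (X[y == a].mean(axis=0), X[y == b].mean(axis=0))
    return classes, models

def predict_ovo(classes, models, x):
    """Each pairwise model votes for one class; majority wins."""
    votes = {c: 0 for c in classes}
    for (a, b), (ca, cb) in models.items():
        winner = a if np.linalg.norm(x - ca) <= np.linalg.norm(x - cb) else b
        votes[winner] += 1
    return max(votes, key=votes.get)

# Synthetic 3-class data (e.g. three rating buckets in feature space)
X = np.array([[0.0, 0.0], [0.5, 0.0], [5.0, 0.0],
              [5.5, 0.0], [0.0, 5.0], [0.0, 5.5]])
y = np.array([0, 0, 1, 1, 2, 2])
classes, models = train_ovo(X, y)
pred = predict_ovo(classes, models, np.array([0.2, 0.1]))
```

One-Against-All would instead train K models, each separating one class from the rest; DAGSVM uses the same pairwise models as One-Against-One but evaluates them along a directed acyclic graph rather than by full voting.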

Wildfire Severity Mapping Using Sentinel Satellite Data Based on Machine Learning Approaches (Sentinel 위성영상과 기계학습을 이용한 국내산불 피해강도 탐지)

  • Sim, Seongmun;Kim, Woohyeok;Lee, Jaese;Kang, Yoojin;Im, Jungho;Kwon, Chunguen;Kim, Sungyong
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_3
    • /
    • pp.1109-1123
    • /
    • 2020
  • In South Korea, where forest is the major land cover class (over 60% of the country), many wildfires occur every year. Wildfires weaken the shear strength of the soil, forming a soil layer that is vulnerable to landslides. It is important to identify the severity of a wildfire, as well as the burned area, to manage the forest sustainably. Although satellite remote sensing has been widely used to map wildfire severity, it is often difficult to determine severity using only the temporal change of satellite-derived indices such as the Normalized Difference Vegetation Index (NDVI) and the Normalized Burn Ratio (NBR). In this study, we proposed an approach for determining wildfire severity based on machine learning through the synergistic use of Sentinel-1A Synthetic Aperture Radar-C data and Sentinel-2A Multi Spectral Instrument data. Three wildfire cases (Samcheok in May 2017, Gangreung·Donghae in April 2019, and Gosung·Sokcho in April 2019) were used to develop wildfire severity mapping models with three machine learning algorithms (i.e., Random Forest, Logistic Regression, and Support Vector Machine). The results showed that the Random Forest model yielded the best performance, with an overall accuracy of 82.3%. Cross-site validation to examine the spatiotemporal transferability of the machine learning models showed that the models were highly sensitive to temporal differences between the training and validation sites, especially in the early growing season. This implies that a more robust model with high spatiotemporal transferability can be developed when more wildfire cases from different seasons and areas are added in the future.
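The index-change baseline that the study argues is insufficient on its own is the differenced NBR (pre-fire NBR minus post-fire NBR). A minimal sketch with hypothetical reflectance patches (band roles noted in comments; these are not the study's data):

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from NIR and SWIR reflectance."""
    return (nir - swir) / (nir + swir)

def dnbr(pre_nir, pre_swir, post_nir, post_swir):
    """Differenced NBR: pre-fire NBR minus post-fire NBR.
    Larger values indicate higher burn severity."""
    return nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)

# Hypothetical 2x2 reflectance patches (for Sentinel-2, NIR ~ band 8,
# SWIR ~ band 12). Pixel (0,0) is unburned; (1,1) is severely burned.
pre_nir   = np.array([[0.50, 0.50], [0.50, 0.50]])
pre_swir  = np.array([[0.20, 0.20], [0.20, 0.20]])
post_nir  = np.array([[0.50, 0.30], [0.20, 0.15]])
post_swir = np.array([[0.20, 0.25], [0.30, 0.35]])

severity = dnbr(pre_nir, pre_swir, post_nir, post_swir)
```

In the study's approach, per-pixel values like these (together with SAR features) become input features to the machine learning classifiers rather than being thresholded directly.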

True Orthoimage Generation from LiDAR Intensity Using Deep Learning (딥러닝에 의한 라이다 반사강도로부터 엄밀정사영상 생성)

  • Shin, Young Ha;Hyung, Sung Woong;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.38 no.4
    • /
    • pp.363-373
    • /
    • 2020
  • During the last decades, numerous studies on orthoimage generation have been carried out. Traditional methods require the exterior orientation parameters of aerial images, precise 3D object modeling data, and a DTM (Digital Terrain Model) to detect and recover occlusion areas. Furthermore, it is a challenging task to automate the complicated process. In this paper, we propose a new concept of true orthoimage generation using DL (Deep Learning). DL is rapidly being adopted in a wide range of fields. In particular, the GAN (Generative Adversarial Network) is one of the DL models used for various tasks in image processing and computer vision. The generator tries to produce results similar to real images, while the discriminator judges between fake and real images, until the results are satisfactory. Such a mutually adversarial mechanism improves the quality of the results. Experiments were performed with the GAN-based Pix2Pix model, using IR (infrared) orthoimages and intensity from LiDAR data provided by the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) through the ISPRS (International Society for Photogrammetry and Remote Sensing). Two approaches were implemented: (1) one-step training with intensity data and high-resolution orthoimages, and (2) recursive training with intensity data and color-coded low-resolution intensity images for progressive enhancement of the results. The two methods provided similar quality according to FID (Fréchet Inception Distance) measures; however, if the quality of the input data is close to the target image, better results can be obtained by increasing the number of epochs. This paper is an early experimental study of the feasibility of DL-based true orthoimage generation, and further improvement will be necessary.
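The mutually adversarial mechanism described above can be sketched as the two binary cross-entropy losses of a vanilla GAN. Scalar probabilities stand in for discriminator outputs here, and the Pix2Pix-specific L1 reconstruction term is omitted for brevity.

```python
import math

def bce(p, label):
    """Binary cross-entropy for a single predicted probability p."""
    eps = 1e-12  # guard against log(0)
    return -(label * math.log(p + eps) + (1 - label) * math.log(1 - p + eps))

def discriminator_loss(d_real, d_fake):
    """D wants real samples scored near 1 and generated samples near 0."""
    return bce(d_real, 1.0) + bce(d_fake, 0.0)

def generator_loss(d_fake):
    """G wants the discriminator to score its output as real."""
    return bce(d_fake, 1.0)
```

Training alternates between lowering `discriminator_loss` and lowering `generator_loss`; as the abstract notes, this tug-of-war is what pushes the generated orthoimages toward the real ones.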

A Performance Comparison of Super Resolution Model with Different Activation Functions (활성함수 변화에 따른 초해상화 모델 성능 비교)

  • Yoo, Youngjun;Kim, Daehee;Lee, Jaekoo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.10
    • /
    • pp.303-308
    • /
    • 2020
  • The ReLU (Rectified Linear Unit) function has been the dominant standard activation function in most deep artificial neural network models since it was proposed. Later, the Leaky ReLU, Swish, and Mish activation functions were presented to replace ReLU, showing improved performance over ReLU in image classification tasks. We therefore recognized the need to examine whether performance improvements could also be achieved by replacing ReLU with other activation functions in the super-resolution task. In this paper, performance was compared by changing the activation function in the EDSR model, which has shown stable performance in super resolution. In experiments with varied activation functions in EDSR, when the resolution was scaled by a factor of two, the existing activation function, ReLU, showed similar or higher performance than the other activation functions tested. When the resolution was scaled by a factor of four, the Leaky ReLU and Swish functions showed slightly improved performance over ReLU. PSNR and SSIM, which quantitatively evaluate image quality, showed average performance improvements of 0.06% and 0.05% when using Leaky ReLU, and of 0.06% and 0.03% when using Swish. When the resolution was scaled by a factor of eight, the Mish function showed a slight average performance improvement over ReLU: with Mish, PSNR and SSIM improved by an average of 0.06% and 0.02%, respectively. In conclusion, Leaky ReLU and Swish outperformed ReLU for four-times super resolution, and Mish outperformed ReLU for eight-times super resolution.
In future studies, comparative experiments replacing the activation function with Leaky ReLU, Swish, and Mish should be conducted to improve performance in other super-resolution models.
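For reference, the four activations compared in the study can be written in scalar form, alongside the PSNR metric used for evaluation (a minimal sketch; the EDSR model itself is not reproduced, and the default Leaky ReLU slope is an assumption).

```python
import math

def relu(x):
    """max(0, x): zero gradient for negative inputs."""
    return max(0.0, x)

def leaky_relu(x, alpha=0.01):
    """Like ReLU but passes a small slope alpha for negative inputs."""
    return x if x >= 0 else alpha * x

def swish(x):
    """x * sigmoid(x): smooth, non-monotonic near zero."""
    return x / (1.0 + math.exp(-x))

def mish(x):
    """x * tanh(softplus(x)), with softplus(x) = ln(1 + e^x)."""
    return x * math.tanh(math.log1p(math.exp(x)))

def psnr(mse, peak=1.0):
    """Peak signal-to-noise ratio in dB from mean squared error."""
    return 10.0 * math.log10(peak ** 2 / mse)
```

The small PSNR/SSIM gaps reported above (fractions of a percent) reflect how similar these functions are for positive inputs; they differ mainly in how they treat negative pre-activations.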