• Title/Summary/Keyword: Performance Based Design

Rifle performance improvement cost estimation through Relation between the accuracy and Engagement results Using the Engagement class simulation model (명중률과 교전결과의 상관관계분석을 통한 개인화기 성능개선비용 추정 : 교전급 분석모델을 중심으로)

  • TaeKyeom Kim
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.1
    • /
    • pp.289-295
    • /
    • 2024
  • This study analyzes the correlation between the accuracy of a rifle and the result of engagement, and estimates the improvement cost of the rifle accordingly. For this experiment, an engagement-class simulation model (AWAM: Army Weapon Effectiveness Analysis Model) was used, and the rifle, a portable weapon, was selected as the subject of the experiment. Prior to the experiment, we conducted a reliability test (VV&A: Verification, Validation and Accreditation) on the model. The VV&A process is mainly done during the development of a DM&S model and is also necessary for the operation of the M&S. We confirmed the need for VV&A during the experiment and obtained reliable experimental results using the corrected values. In the accuracy experiment, we found that a 20% improvement is the most effective, and we were able to estimate the cost of acquiring a rifle with 20% higher accuracy. The cost was estimated by simple regression analysis based on the price of the current rifle. Through this study, we were able to determine the impact of rifle accuracy on the engagement results and to estimate the cost of the improved rifle.
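The cost-estimation step above amounts to fitting a line through price-versus-accuracy data and reading it off at the improved accuracy. A minimal sketch of such a simple regression, assuming hypothetical accuracy/price pairs rather than the paper's data:

```python
# Simple-regression cost estimate, in the spirit of the abstract above.
# The accuracy/price pairs are hypothetical, not the paper's data.
import numpy as np

accuracy = np.array([0.50, 0.55, 0.60, 0.65])      # hit probabilities of fielded rifles
price = np.array([900.0, 1000.0, 1150.0, 1300.0])  # unit prices (arbitrary currency)

slope, intercept = np.polyfit(accuracy, price, 1)  # least-squares line fit

current = 0.60
improved = current * 1.20  # the 20% accuracy improvement found most effective
print(f"estimated unit cost at {improved:.2f} accuracy: {slope * improved + intercept:.0f}")
```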

A Study on the Smoke Removal Equipment in Plant Facilities Using Simulation (시뮬레이션을 이용한 플랜트 시설물 제연설비에 관한 연구)

  • Doo Chan Choi;Min Hyeok Yang;Min Hyeok Ko;Su Min Oh
    • Journal of the Society of Disaster Information
    • /
    • v.20 no.1
    • /
    • pp.40-46
    • /
    • 2024
  • Purpose: In this study, in order to ensure the evacuation safety of plant facilities, we analyze the relationship between the height of smoke removal boundary walls, the presence or absence of smoke removal equipment, and evacuation safety. Method: Using fire and evacuation simulations, evacuation safety was analyzed through changes in the height of the smoke removal boundary wall and in the air supply and exhaust volumes according to vertical distance. Result: For visibility, when only a 0.6 m boundary wall is used, the time until visibility falls below 5 m is the shortest, while with a 1.2 m boundary wall that time is 20% longer than when smoke removal equipment is used. For temperature, a 1.2 m wall gives a time 20% longer than a 0.6 m wall when only the boundary wall is used without smoke removal equipment. Conclusion: It was found that increasing the height of the smoke removal boundary wall can affect visibility, and installing smoke removal equipment affects temperature. Therefore, an appropriate smoke removal plan and smoke removal equipment should be adopted in consideration of the process characteristics.

Consumer Behavior in Achieving the Goals of ESG Banking Products: Focusing on environmental awareness and saving behavior (ESG 금융상품의 목표 달성에 미치는 소비자 행동에 관한 탐색적 연구 -환경인식과 저축행동을 중심으로-)

  • Inkwan Cho;Bong Gyou Lee
    • Journal of Service Research and Studies
    • /
    • v.14 no.2
    • /
    • pp.117-137
    • /
    • 2024
  • ESG has become a necessity for all companies, and major Korean banks are actively practicing ESG management. Banks play a role in providing ESG finance as intermediaries in the supply of funds. Recently, they have launched ESG digital banking products that offer preferential interest rates for eco-friendly activities in combination with digital technologies. However, indiscriminate provision of preferential interest rates can adversely affect the profitability of banks, and they may face the problem of 'greenwashing' if the products do not contribute to improving environmental awareness. Therefore, this study selected ESG digital savings products linked to electricity savings as the subject of study, and empirically analyzed consumers' environmental awareness and savings behavior using actual consumer data (N=2,478). The main findings of this study are as follows. First, the analysis of the consumer status of ESG digital banking products shows that consumers in their 30s to 50s are the main consumer base, and the MZ generation shows relatively high performance in achieving preferential interest rates through electricity-saving practices. Second, consumers' environmental awareness has a significant impact on achieving the goals of ESG banking products. ESG banking products can contribute to environmental awareness while fulfilling the basic function of saving. Third, environmental awareness did not drive consumers' savings-contribution behavior, suggesting the need for continued consumer engagement. Based on environmental awareness and the theory of saving behavior, this study provides a theoretical explanation of ESG financial products. The results suggest that the appropriateness of the preferential interest rate design of ESG financial products is important.

Memory Organization for a Fuzzy Controller

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based Control Theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to predefined shapes. These kinds of functions are able to cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation of the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions at such points [3,10,14,15]. Such a solution provides satisfying computational speed, very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can also result: it is possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on such hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of the membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions, and 7 for 128 elements of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the dimension (in bits) of a membership value m, and dm(fm) is the dimension of the word representing the index of the membership function. In our case, then, Length = 3 × (5 + 3) = 24. The memory dimension is therefore 128 × 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of every fuzzy set; the fuzzy-set word dimension would be 8 × 5 bits, and the dimension of the memory would therefore have been 128 × 40 bits. Coherently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets. Focusing on the elements 32, 64, and 96 of the universe of discourse, they will be memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (Combinatory Net). If the index is equal to the bus value, then the corresponding non-null weight derived from the rule is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is in this way reduced, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to the performance of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value at each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of such parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; weight computations are done by a combinatorial network, and therefore the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values on any element of the universe of discourse is limited, a constraint that is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
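The memorization scheme described above, keeping only the (index, value) pairs of the at most nfm non-null membership values per element of the universe of discourse, can be sketched in software as follows. This is an illustrative model, not the paper's hardware; the comparator (Combinatory Net) is emulated by a simple loop:

```python
# Compact membership-function memory: each row stores at most NFM
# (set-index, membership-value) pairs, i.e. NFM * (5 + 3) = 24 bits per row
# instead of the 8 * 5 = 40 bits of full vectorial memorization.
NFM = 3        # max non-null membership values per element
N_LEVELS = 32  # 5-bit membership values
N_SETS = 8     # 3-bit fuzzy-set index
U_SIZE = 128   # elements of the universe of discourse

def build_memory(membership):
    """membership[s][u] -> integer value in [0, N_LEVELS); keep non-null entries only."""
    memory = []
    for u in range(U_SIZE):
        entries = [(s, membership[s][u]) for s in range(N_SETS) if membership[s][u] > 0]
        assert len(entries) <= NFM, "hypothesis: at most NFM overlapping fuzzy sets"
        entries += [(0, 0)] * (NFM - len(entries))  # pad to the fixed word length
        memory.append(entries)
    return memory

def lookup(memory, u, s):
    """Emulate the comparator: match the rule's set index against stored indices."""
    for idx, val in memory[u]:
        if idx == s and val > 0:
            return val
    return 0

# Toy demo: one triangular fuzzy set peaking at u = 64, all others null.
membership = [[0] * U_SIZE for _ in range(N_SETS)]
for u in range(48, 81):
    membership[2][u] = N_LEVELS - 1 - abs(u - 64)
mem = build_memory(membership)
print(lookup(mem, 64, 2), lookup(mem, 64, 5))  # -> 31 0
```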

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data, and the execution of the considerable number of functions needed to categorize and analyze the stored unstructured log data, are difficult in existing computing environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, and it can flexibly expand resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that allow the system to continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, strict schemas like those of relational databases cannot expand to new nodes when the stored data must be distributed across various nodes as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases may provide, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with an appropriate structure for processing unstructured data. The data models of NoSQL are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is introduced to the proposed system because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to the type of log data and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, covering log data insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through the log data insert performance evaluation of MongoDB for various chunk sizes.
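As a rough illustration of the log collector's routing of classified log records into MongoDB's schema-free document store, here is a minimal pymongo sketch; the connection string, database, collection, and field names are assumptions, not the paper's implementation:

```python
# Minimal log-collector sketch; assumes a reachable MongoDB instance.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
logs = client["bank_logs"]["aggregated"]  # flexible schema suits unstructured logs

def collect(raw_line: str, log_type: str) -> None:
    # Heterogeneous log records can share one collection without a fixed schema.
    logs.insert_one({
        "type": log_type,
        "raw": raw_line,
        "ts": datetime.now(timezone.utc),
    })

collect("TXN 4821 OK 120ms", "transaction")
```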

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risks. The data used in the analysis totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used the default event as the basis for learning about default risk, this study calculated default risk using the market capitalization and stock price volatility of each company based on the Merton model. Through this, it was able to solve the problem of data imbalance due to the scarcity of default events, which had been pointed out as a limitation of the existing methodology, and the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risks of unlisted companies without stock price information can be appropriately derived. Through this, the model can provide stable default risk assessment services to unlisted companies that are difficult to assess with traditional credit rating models, such as small and medium-sized companies and startups. Although predicting corporate default risks using machine learning has recently been studied actively, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for the calculation of default risk, given that an entity's default risk information is very widely utilized in the market and the sensitivity to differences in default risk is high. Strict standards are also required for the methods of calculation. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience on credit ratings and of changes in future market conditions. This study reduced individual models' bias by utilizing stacking ensemble techniques that synthesize various machine learning models. This allows us to capture complex nonlinear relationships between default risk and various corporate information, and to maximize the advantages of machine learning-based default risk prediction models, which take less time to calculate. To calculate the forecasts of each sub-model to be used as input data for the Stacking Ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts. To compare the predictive power of the Stacking Ensemble model, Random Forest, MLP, and CNN models were trained with the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the Stacking Ensemble model exceeded the predictive power of the Random Forest model, which had the best performance among the single models. Next, to check for statistically significant differences between the Stacking Ensemble model's forecasts and those of each individual model, pairs between the Stacking Ensemble model and each individual model were constructed.
Because the results of the Shapiro-Wilk normality test showed that none of the pairs followed normality, we used the nonparametric Wilcoxon rank-sum test to check whether the two model forecasts constituting each pair showed statistically significant differences. The analysis showed that the forecasts of the Stacking Ensemble model differed in a statistically significant way from those of the MLP model and the CNN model. In addition, this study can provide a methodology that allows existing credit rating agencies to apply machine learning-based bankruptcy risk prediction methodologies, given that traditional credit rating models can also be incorporated as sub-models in calculating the final default probability. The Stacking Ensemble techniques proposed in this study can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource to increase practical use by overcoming and improving upon the limitations of existing machine learning-based models.
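A minimal sketch of such a stacking setup using scikit-learn, with Random Forest and MLP sub-models, a seven-fold split for generating sub-model forecasts, and a nonparametric rank-sum comparison of the two models' forecasts; the synthetic data and model settings are illustrative, not the study's:

```python
# Stacking ensemble vs. single-model baseline, with a rank-sum comparison.
from scipy.stats import ranksums
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("mlp", MLPClassifier(max_iter=500, random_state=0)),
    ],
    final_estimator=LogisticRegression(),
    cv=7,  # mirrors the seven-piece split used for sub-model forecasts
).fit(X_tr, y_tr)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
p_stack = stack.predict_proba(X_te)[:, 1]
p_rf = rf.predict_proba(X_te)[:, 1]
print(ranksums(p_stack, p_rf))  # nonparametric test of forecast differences
```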

Measurement of competency through self study in basic nursing lab. practice focused on cleansing enema (기본간호학 실습에 있어 자가학습을 통한 능숙도 측정 - 배변관장을 중심으로 -)

  • Ko Il-Sun
    • Journal of Korean Academy of Fundamentals of Nursing
    • /
    • v.6 no.3
    • /
    • pp.532-543
    • /
    • 1999
  • This study was conducted to provide the basic data necessary for improving the teaching method for basic nursing practice, as well as the effectiveness of the practice, by examining students' competency in cleansing enema after self study instead of traditional instruction. To examine competency in cleansing enema after self study, this study used a one-group pretest-posttest design in which subjects practiced the enema through self study. The subjects were 89 sophomore students at Y University College of Nursing. In the basic nursing lab practice class, a cleansing-enema self-study module, developed by the researcher based on the literature review, was given to the students, who were asked to finish the pre-study and check the self-study evaluation criteria after reading the goal, learning activities, and theoretical guideline. After watching the videotape, students practiced the process in the module by themselves. For competency in cleansing enema, repeated autonomous practices were done during the open lab outside the regular class. Whenever a practice was done, the frequency and time were measured and documented. When a student felt confident through repeated practice, competency was evaluated by the researcher and two assistants based on the evaluation criteria, and the process was repeated until the student could perform all the items on the evaluation criteria completely. The data were collected for 42 days, from Oct. 15 to Nov. 26, 1996. Collected data were analyzed by frequency, percentage, Pearson correlation coefficient, and analysis of variance. The results are summarized as follows: 1. 43.2% of the students were favorable to nursing, and 63.6% liked lecture, but 71.6% liked practice; they were more interested in practice than in lecture. 2. 62.3% of the students scored high on the written test and 97.8% scored high on practice, so the practice scores were better. 3. The frequency of repeated practice needed to pass the test ranged from 1 to 4, with an average of 2.2. 4. The average time needed for preparation and performance was nearly the same regardless of the frequency. It took 5 to 38 minutes (average 16) for those who passed the test after practicing once, 5 to 60 minutes (average 21) for those who practiced twice, and 8 to 30 minutes (average 15) for those who passed after three practices, similar to the time of those who passed on the first trial. Only one student passed the test after 4 practices, taking 10 minutes. 5. 64% of the students agreed that the context and content of the module were appropriate for self study, and 68.2% were satisfied; 71.9% said that the module helped them practice the enema through self study. 6. Though only 42% of the students were satisfied with the video, 50.6% said that it was helpful for self study. 7. 52.3% of the students were satisfied with the self-study method, and 86.6% gained self-confidence in performing the enema. 8. The lower a student's practice score was, the more practices were needed to pass the test (r=-.213, P<.05). As a result, two or more practice opportunities need to be given for students to perform the enema practice competently, and less complex nursing skills can be acquired through self study when enough learning resources and assistance, such as learning guides or videotapes, are provided.
Based on this study, the following are suggested: 1. There must be a college policy that can support the new method, instead of the traditional learning method, so that students attain proficiency in basic nursing skills. 2. Assistant materials should be developed as soon as possible to promote the self study of basic nursing skills.

Evaluation Criteria and Preferred Image of Jeans Products based on Benefit Segmentation (진 제품 구매자의 추구혜택에 따른 평가기준 및 선호 이미지)

  • Park, Na-Ri;Park, Jae-Ok
    • Journal of the Korean Society of Clothing and Textiles
    • /
    • v.31 no.6 s.165
    • /
    • pp.974-984
    • /
    • 2007
  • The purpose of this study was to find differences in evaluation criteria and in preferred images among benefit-segmented groups of jeans-product consumers. Male and female Korean university students participated in the study. A quota sampling method based on gender and residential area of the respondents was used to collect the data. Data from 492 questionnaires were used in the analysis. Factor analysis, Cronbach's alpha coefficient, cluster analysis, one-way ANOVA, and post-hoc tests were conducted. As a result, respondents who seek multiple benefits considered aesthetic criteria (e.g., color, style, design, fit) and quality performance criteria (e.g., durability, ease of care, contractibility, flexibility) more important when evaluating and purchasing jeans products. Respondents who seek brand name considered extrinsic criteria (e.g., brand reputation, status symbol, country of origin, fashionability) more important than respondents who seek economic efficiency. Respondents who seek multiple benefits such as attractiveness, fashion, individuality, and utility tend to prefer all the images, individual, active, sexual, sophisticated, and simple, when wearing jeans products. Respondents who seek fashion are likely to prefer an individual image, and respondents who seek brand name prefer both individual and polished images. Meanwhile, respondents who seek economic efficiency less prefer sexual and polished images.
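Benefit segmentation of this kind can be sketched as clustering respondents on their benefit factor scores and then profiling each cluster; the synthetic scores below stand in for the survey data:

```python
# Cluster-based benefit segmentation sketch; data are synthetic placeholders.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Columns: hypothetical benefit factor scores (aesthetics, brand, economy).
scores = rng.normal(size=(492, 3))

segments = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(scores)
for k in range(3):
    print(k, scores[segments == k].mean(axis=0).round(2))  # cluster profile
```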

Calculation of future rainfall scenarios to consider the impact of climate change in Seoul City's hydraulic facility design standards (서울시 수리시설 설계기준의 기후변화 영향 고려를 위한 미래강우시나리오 산정)

  • Yoon, Sun-Kwon;Lee, Taesam;Seong, Kiyoung;Ahn, Yujin
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.6
    • /
    • pp.419-431
    • /
    • 2021
  • In Seoul, it has been confirmed that the duration of rainfall is shortening and the frequency and intensity of heavy rains are increasing with the changing climate. In addition, due to high population density and urbanization in most areas, floods frequently occur in flood-prone areas because of the increase in impermeable area. The city of Seoul is pursuing various structural and non-structural measures to resolve flood-prone areas, and a disaster prevention performance target was set in consideration of the climate change impact on future precipitation; this study was conducted to reduce overall flood damage in Seoul over the long term. In this study, 29 GCMs under the RCP4.5 and RCP8.5 scenarios were used for spatial and temporal disaggregation over three research periods: short-term (2006-2040, P1), mid-term (2041-2070, P2), and long-term (2071-2100, P3). For spatial downscaling, daily GCM data were processed through Quantile Mapping based on the rainfall of the Seoul station managed by the Korea Meteorological Administration, and for temporal downscaling, daily data were disaggregated to hourly data through k-nearest-neighbor resampling and a nonparametric temporal disaggregation technique using genetic algorithms. Through temporal downscaling, 100 detailed scenarios were calculated for each GCM scenario, the IDF curves were calculated from a total of 2,900 detailed scenarios, and by averaging these, the change in future extreme rainfall was calculated. As a result, it was confirmed that the probable rainfall for a 100-year return period and 1-hour duration increases by 8 to 16% under the RCP4.5 scenario and by 7 to 26% under the RCP8.5 scenario. Based on the results of this study, the design rainfall needed to prepare for future climate change in Seoul was estimated, and it can be used to establish purpose-specific water-related disaster prevention policies.
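Empirical quantile mapping, the spatial-downscaling step named above, replaces each model value with the observed value at the same quantile of the reference climatology. A minimal sketch with synthetic rainfall standing in for the Seoul-station observations and GCM output:

```python
# Empirical quantile mapping for bias-correcting daily GCM rainfall.
import numpy as np

rng = np.random.default_rng(0)
obs = rng.gamma(0.6, 9.0, 10_000)  # stand-in for observed daily rainfall
gcm = rng.gamma(0.5, 7.0, 10_000)  # stand-in for raw (biased) GCM rainfall

def quantile_map(x, model_ref, obs_ref, n_q=101):
    """Map model values onto the observed distribution, quantile by quantile."""
    q = np.linspace(0.0, 1.0, n_q)
    model_q = np.quantile(model_ref, q)
    obs_q = np.quantile(obs_ref, q)
    # Locate each value within the model climatology, read off the observed value.
    return np.interp(x, model_q, obs_q)

corrected = quantile_map(gcm, gcm, obs)
print(round(obs.mean(), 2), round(gcm.mean(), 2), round(corrected.mean(), 2))
```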

Stud and Puzzle-Strip Shear Connector for Composite Beam of UHPC Deck and Inverted-T Steel Girder (초고성능 콘크리트 바닥판과 역T형 강거더의 합성보를 위한 스터드 및 퍼즐스트립 전단연결재에 관한 연구)

  • Lee, Kyoung-Chan;Joh, Changbin;Choi, Eun-Suk;Kim, Jee-Sang
    • Journal of the Korea Concrete Institute
    • /
    • v.26 no.2
    • /
    • pp.151-157
    • /
    • 2014
  • Since recently developed Ultra-High-Performance Concrete (UHPC) provides very high strength, stiffness, and durability, many studies have been made on the application of UHPC to bridge decks. Due to the high strength and stiffness of a UHPC bridge deck, the structural contribution of the top flange of a steel girder composite with a UHPC deck would be much lower than with a conventional concrete deck. From this point of view, this study proposes an inverted-T shaped steel girder composite with a UHPC deck. This girder requires a new type of shear connector because conventional shear connectors are welded onto the top flange. This study also proposes three different types of shear connectors and evaluates their ultimate strength via push-out static tests. The first is a stud shear connector welded directly to the web of the girder in the transverse direction. The second is a puzzle-strip type shear connector developed by the European Commission, and the last is the combination of the stud and the puzzle-strip shear connectors. Experimental results showed that the ultimate strength of the transverse stud was 26% larger than that given in the AASHTO LRFD Bridge Design Specifications, but a splitting crack observed in the UHPC deck was so severe that another measure needs to be developed to prevent it. The ultimate strength of the puzzle-strip specimen was 40% larger than that evaluated by the European Commission's equation. The specimens combining stud and puzzle-strip shear connectors provided less strength than the arithmetic sum of the two. Based on the experimental observations, there appears to be no advantage in combining transverse stud and puzzle-strip shear connectors.