• Title/Summary/Keyword: reliability prediction


Development of smart car intelligent wheel hub bearing embedded system using predictive diagnosis algorithm

  • Sam-Taek Kim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.10
    • /
    • pp.1-8
    • /
    • 2023
  • A defect in a wheel bearing, one of the major parts of a car, can lead to problems such as traffic accidents. Addressing this requires a system that collects big data, monitors the bearing, and uses predictive diagnosis and management technology to give early warning of whether a wheel bearing has failed and of the type of failure. In this paper, to implement such an intelligent wheel hub bearing maintenance system, we develop an embedded system equipped with sensors for monitoring reliability and soundness together with algorithms for predictive diagnosis. The algorithm acquires vibration signals from acceleration sensors installed in the wheel bearing and predicts and diagnoses failures using big data techniques, signal processing, fault frequency analysis, and the definition of health characteristic parameters. The implemented algorithm applies a stable signal extraction step that suppresses extraneous frequency components and emphasizes the vibration components originating in the wheel bearing. For noise removal with a filter, an artificial intelligence-based soundness extraction algorithm is applied, followed by an FFT; the fault frequencies are analyzed and the fault is diagnosed by extracting fault characteristic factors. The performance target of this system was an output data rate of 12,800 ODR or higher, and test results showed that the target was met.
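
The paper itself gives no code, but the FFT-based fault-frequency check it describes can be sketched roughly as below; the sampling rate, fault frequency, band width, and threshold are illustrative assumptions, not values from the study.

```python
# Minimal sketch of an FFT-based fault-frequency check for a wheel bearing.
# Sampling rate, fault frequency, and threshold are illustrative assumptions.
import numpy as np

def diagnose_bearing(accel: np.ndarray, fs: float, fault_freq: float,
                     band: float = 5.0, threshold: float = 3.0) -> bool:
    """Return True if spectral energy near the expected fault frequency
    stands out against the median level of the spectrum."""
    spectrum = np.abs(np.fft.rfft(accel * np.hanning(accel.size)))
    freqs = np.fft.rfftfreq(accel.size, d=1.0 / fs)
    in_band = (freqs > fault_freq - band) & (freqs < fault_freq + band)
    return spectrum[in_band].max() > threshold * np.median(spectrum)

# Example with synthetic data: a 107 Hz fault tone buried in noise.
fs = 12_800                                   # assumed output data rate (Hz)
t = np.arange(fs) / fs
signal = 0.5 * np.sin(2 * np.pi * 107 * t) + np.random.normal(0, 0.2, fs)
print(diagnose_bearing(signal, fs, fault_freq=107.0))   # True
```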

A Study on the Metadata Schema for the Collection of Sensor Data in Weapon Systems (무기체계 CBM+ 적용 및 확대를 위한 무기체계 센서데이터 수집용 메타데이터 스키마 연구)

  • Jinyoung Kim;Hyoung-seop Shim;Jiseong Son;Yun-Young Hwang
    • Journal of Internet Computing and Services
    • /
    • v.24 no.6
    • /
    • pp.161-169
    • /
    • 2023
  • With the Fourth Industrial Revolution, innovation in technologies such as artificial intelligence (AI), big data, and cloud computing is accelerating, and data is regarded as an important asset. Building on these technologies, various efforts are being made to lead technological innovation in defense science and technology. In March 2023, the Korean government announced the "Defense Innovation 4.0 Plan," which consists of five key points and 16 tasks for fostering advanced science and technology forces. The plan includes the establishment of Condition-Based Maintenance Plus (CBM+) to improve the operability and availability of weapon systems and to reduce defense costs. Condition-Based Maintenance (CBM) secures the reliability and availability of a weapon system by analyzing changes in the equipment's condition information and identifying them as signs of failures and defects, and CBM+ adds Remaining Useful Life prediction technology to the existing CBM concept [1]. To establish a CBM+ system for a weapon system, sensors must be installed and sensor data collected to obtain the condition information of the weapon system. In this paper, we propose a sensor data metadata schema to efficiently and effectively manage the sensor data collected from sensors installed in various weapon systems. A rough illustration of what such a metadata record might contain is given below; the field names are assumptions for demonstration, not the schema proposed in the paper.
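
```python
# Illustrative sketch of a sensor-data metadata record. The field names and
# example values are assumptions, not the schema defined in the paper.
from dataclasses import dataclass, asdict
import json

@dataclass
class SensorMetadata:
    sensor_id: str           # unique identifier of the installed sensor
    weapon_system: str       # weapon system the sensor is mounted on
    measurement_type: str    # e.g. vibration, temperature, pressure
    unit: str                # engineering unit of the measurement
    sampling_rate_hz: float  # acquisition rate
    location: str            # mounting position on the equipment
    collected_at: str        # ISO 8601 timestamp of collection

record = SensorMetadata("SN-001", "engine assembly", "vibration", "g",
                        5000.0, "gearbox housing", "2023-03-01T09:00:00Z")
print(json.dumps(asdict(record), indent=2))
```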

Simulation analysis and evaluation of decontamination effect of different abrasive jet process parameters on radioactively contaminated metal

  • Lin Zhong;Jian Deng;Zhe-wen Zuo;Can-yu Huang;Bo Chen;Lin Lei;Ze-yong Lei;Jie-heng Lei;Mu Zhao;Yun-fei Hua
    • Nuclear Engineering and Technology
    • /
    • v.55 no.11
    • /
    • pp.3940-3955
    • /
    • 2023
  • A new method is proposed for numerically predicting and evaluating the decontamination effect of abrasive jet decontamination on radioactively contaminated metal. Based on a coupled Computational Fluid Dynamics and Discrete Element Method (CFD-DEM) simulation model, the motion patterns and distribution of the abrasives can be predicted, and the decontamination effect can be evaluated by image processing and recognition. The impact of three key parameters (impact distance, inlet pressure, and abrasive mass flow rate) on the decontamination effect is revealed, and experiments were conducted to verify the reliability of the decontamination effect and of the numerical simulation method. The results show that 60Co and other homogeneous solid-solution radioactive pollutants can be removed by the abrasive jet, with an average Co removal rate exceeding 80%. The proposed numerical simulation and evaluation method is reliable given the good agreement between predicted and actual values: the predicted and measured abrasive distribution diameters are Ф57 and Ф55, the total coverage rates 26.42% and 23.50%, and the average impact velocities 81.73 m/s and 78.00 m/s. Further analysis shows that the impact distance has a significant effect on the distribution of abrasive particles on the target surface; the coverage rate of the core area first increases and then decreases as the nozzle impact distance grows, reaching a maximum of 14.44% at 300 mm. An impact distance of about 300 mm is therefore recommended, since the core-area coverage of the abrasive is largest there and the impact velocity is stable at its highest value of 81.94 m/s. The nozzle inlet pressure mainly affects the impact kinetic energy of the abrasive and has little effect on its distribution: the greater the inlet pressure, the greater the impact kinetic energy and the stronger the decontamination ability of the abrasive, but the higher the energy consumption. For the decontamination of radioactively contaminated metals, an inlet pressure of about 0.6 MPa is recommended, because most of the Co can be removed at this pressure. Appropriately increasing the abrasive mass and flow rate can enhance the decontamination effectiveness; a total abrasive mass of 50 g per unit decontamination area is suggested, because the core-area coverage rate is relatively large under this condition and the nozzle wear is acceptable.
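
The image-based coverage evaluation mentioned above can be sketched minimally as follows, assuming a binary impact map and a circular core region of the target; the image source, region radius, and synthetic data are placeholders, not the paper's method.

```python
# Rough sketch of computing a core-area coverage rate from a binary impact map.
import numpy as np

def core_coverage_rate(impact_mask: np.ndarray, core_radius_px: int) -> float:
    """impact_mask: 2D boolean array, True where an abrasive impact was detected."""
    h, w = impact_mask.shape
    yy, xx = np.ogrid[:h, :w]
    core = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= core_radius_px ** 2
    return impact_mask[core].sum() / core.sum() * 100.0

mask = np.random.rand(512, 512) < 0.15          # synthetic impact map
print(f"core coverage: {core_coverage_rate(mask, 100):.2f} %")
```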

A PLS Path Modeling Approach on the Cause-and-Effect Relationships among BSC Critical Success Factors for IT Organizations (PLS 경로모형을 이용한 IT 조직의 BSC 성공요인간의 인과관계 분석)

  • Lee, Jung-Hoon;Shin, Taek-Soo;Lim, Jong-Ho
    • Asia pacific journal of information systems
    • /
    • v.17 no.4
    • /
    • pp.207-228
    • /
    • 2007
  • For a long time, measurement of the activities of Information Technology (IT) organizations was limited mainly to financial indicators. However, as information systems have taken on more diverse functions, a number of studies have examined new measurement approaches that combine financial with non-financial measures. In particular, recent research has applied the Balanced Scorecard (BSC) concept to measuring IT activities, the IT BSC. BSC provides more than the integration of non-financial measures into a performance measurement system. Its core rests on the cause-and-effect relationships between measures, which allow prediction of value chain performance, communication and realization of the corporate strategy, and incentive-controlled actions. More recently, BSC proponents have focused on the need to tie measures together into a causal chain of performance and to test the validity of these hypothesized effects to guide the development of strategy. Kaplan and Norton [2001] argue that one of the primary benefits of the balanced scorecard is its use in gauging the success of strategy, and Norreklit [2000] insists that the cause-and-effect chain is central to the balanced scorecard. The cause-and-effect chain is also central to the IT BSC. However, the relationship between information systems and enterprise strategies, as well as the connections among the various IT performance measurement indicators, has not been studied much. Ittner et al. [2003] report that 77% of all surveyed companies with an implemented BSC place no or only little interest on soundly modeled cause-and-effect relationships, despite the importance of cause-and-effect chains as an integral part of BSC. This shortcoming can be explained by one theoretical and one practical reason [Blumenberg and Hinz, 2006]. From a theoretical point of view, causalities within the BSC method and their application are only vaguely described by Kaplan and Norton. From a practical point of view, modeling corporate causalities is a complex task due to tedious data acquisition and the subsequent reliability maintenance. However, cause-and-effect relationships are an essential part of BSCs because they differentiate performance measurement systems like the BSC from simple key performance indicator (KPI) lists. KPI lists present an ad-hoc collection of measures to managers but do not allow a comprehensive view of corporate performance, whereas a performance measurement system like the BSC tries to model the relationships of the underlying value chain as cause-and-effect relationships. Therefore, to overcome the deficiencies of causal modeling in the IT BSC, sound and robust causal modeling approaches are required in theory as well as in practice. The purpose of this study is to suggest critical success factors (CSFs) and KPIs for measuring the performance of IT organizations and to empirically validate the causal relationships among those CSFs. For this purpose, we define four BSC perspectives for IT organizations following Van Grembergen's study [2000]: the Future Orientation perspective represents the human and technology resources needed by IT to deliver its services; the Operational Excellence perspective represents the IT processes employed to develop and deliver the applications; the User Orientation perspective represents the user evaluation of IT; and the Business Contribution perspective captures the business value of the IT investments. Each of these perspectives has to be translated into corresponding metrics and measures that assess the current situation. Based on previous IT BSC studies and COBIT 4.1, this study suggests 12 CSFs for the IT BSC, consisting of 51 KPIs. The cause-and-effect relationships among the BSC CSFs for IT organizations are hypothesized as follows: the Future Orientation perspective has positive effects on the Operational Excellence perspective, the Operational Excellence perspective has positive effects on the User Orientation perspective, and the User Orientation perspective has positive effects on the Business Contribution perspective. This research tests the validity of these hypothesized causal effects and the sub-hypothesized causal relationships. For this purpose, we used the Partial Least Squares approach to Structural Equation Modeling (PLS path modeling) to analyze the multiple IT BSC CSFs. PLS path modeling has properties that make it more appropriate than other techniques, such as multiple regression and LISREL, when analyzing small sample sizes, and its use has been gaining interest among IS researchers because of its ability to model latent constructs under conditions of non-normality and with small to medium sample sizes (Chin et al., 2003). The empirical results of our study using PLS path modeling show that the hypothesized causal effects in the IT BSC partially hold with statistical significance.
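
As a simplified stand-in for the hypothesized causal chain (real PLS path modeling iteratively re-weights the indicators, which the sketch below does not do), the Future Orientation → Operational Excellence → User Orientation → Business Contribution paths could be estimated from equally weighted composites; the data, indicator counts, and sample size here are assumptions, not the study's survey data.

```python
# Simplified stand-in for the IT BSC causal chain: equally weighted composites
# per construct and OLS path coefficients between them. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
n = 60                                      # small sample, as PLS tolerates
indicators = {name: rng.normal(size=(n, 3)) for name in
              ["future", "operational", "user", "business"]}

def composite(block: np.ndarray) -> np.ndarray:
    z = (block - block.mean(0)) / block.std(0)      # standardize indicators
    return z.mean(axis=1)                           # equal-weight composite

scores = {k: composite(v) for k, v in indicators.items()}

def path(x: np.ndarray, y: np.ndarray) -> float:
    """Standardized regression (path) coefficient of y on x."""
    return np.polyfit((x - x.mean()) / x.std(), (y - y.mean()) / y.std(), 1)[0]

# With random data the paths are near zero; real survey blocks would replace them.
for a, b in [("future", "operational"), ("operational", "user"), ("user", "business")]:
    print(f"{a} -> {b}: {path(scores[a], scores[b]):+.3f}")
```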

A Study on the Allowable Bearing Capacity of Pile by Driving Formulas (각종 항타공식에 의한 말뚝의 허용지지력 연구)

  • Lee, Jean-Soo;Chang, Yong-Chai;Kim, Yong-Keol
    • Journal of Navigation and Port Research
    • /
    • v.26 no.1
    • /
    • pp.106-111
    • /
    • 2002
  • The estimation of pile bearing capacity is important because the design details are determined from the result. There are numerous ways of determining the pile design load, but only a few of them are chosen in actual design. According to a recent investigation in Korea, the formulas proposed by Meyerhof based on SPT N values are the most frequently chosen at the design stage. In this study, various static and dynamic formulas were used to predict the allowable bearing capacity of a pile, and the reliability of these formulas was verified by comparing the predicted values with static and dynamic load test measurements. Because, in most cases, these methods of determining pile bearing capacity do not take the time effect into consideration, the actual allowable load determined from pile load tests deviates severely from the design value. The principal results of this study are summarized as follows. When reliability was assessed against the Davisson criterion, the ranking Terzaghi & Peck > Chin > Meyerhof > Modified Meyerhof was obtained, with the Terzaghi & Peck method the most reliable for the prediction of bearing capacity. Comparison of the various pile-driving formulas showed that the Modified Engineering News formula was the most reliable. However, significant errors arose in the dynamic bearing capacity equations, which was judged to be because uncertainties such as hammer efficiency, variability of parameters, and the time effect were not considered. Consideration of the time effect showed that the skin friction capacity increases more than the end bearing capacity; it was found that the skin friction capacity can become 1.99 times higher than at driving. Considering a 7-day time effect, the relations $Q_{u(Restrike)} / Q_{u(EOID)} = 0.98t_{0.1}$, $0.98t_{0.1}$, $1.17t_{0.1}$, $0.88t_{0.1}$, $0.89t_{0.1}$, and $0.97t_{0.1}$ were obtained for the Engineering News, Modified Engineering News, Hiley, Danish, Gates, and CAPWAP (CAse Pile Wave Analysis Program) analyses, respectively.
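
For reference, two of the classical driving formulas compared in the study can be written roughly as below; the unit conventions, efficiency factors, and example values are illustrative assumptions, not the paper's parameters.

```python
# Hedged sketch of the Engineering News and Hiley pile-driving formulas.
# Unit conventions and example numbers are illustrative assumptions.
def engineering_news_allowable(W_ram: float, H_drop_ft: float,
                               set_in: float, c: float = 0.1) -> float:
    """Allowable load in the same force unit as W_ram.
    H_drop_ft: fall height (ft), set_in: penetration per blow (in),
    c = 0.1 in for single-acting steam hammers (1.0 in for drop hammers).
    The formula's built-in factor of safety is about 6."""
    return 2.0 * W_ram * H_drop_ft / (set_in + c)

def hiley_ultimate(eff: float, W: float, H: float, P: float, e: float,
                   s: float, C: float) -> float:
    """Hiley formula: R_u = eff*W*H/(s + C/2) * (W + e^2*P)/(W + P).
    W: ram weight, P: pile weight, e: coefficient of restitution,
    s: set per blow, C: total temporary compression (consistent length units)."""
    return eff * W * H / (s + C / 2.0) * (W + e**2 * P) / (W + P)

# Illustrative call: a 5-kip ram dropping 8 ft with 0.25 in set per blow.
print(f"{engineering_news_allowable(5.0, 8.0, 0.25):.0f} kips allowable (illustrative)")
```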

A STUDY ON THE MEASUREMENT OF THE IMPLANT STABILITY USING RESONANCE FREQUENCY ANALYSIS (공진 주파수 분석법에 의한 임플랜트의 안정성 측정에 관한 연구)

  • Park Cheol;Lim Ju-Hwan;Cho In-Ho;Lim Heon-Song
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.41 no.2
    • /
    • pp.182-206
    • /
    • 2003
  • Statement of problem: Successful osseointegration of endosseous threaded implants depends on many factors, including the surface characteristics and gross geometry of the implant, the quality and quantity of bone where the implant is placed, and the magnitude and direction of stress under functional occlusion. Clinical quantitative measurement of primary stability at placement and of the functional state of an implant may therefore help predict possible clinical symptoms and guide the refinement of implant geometry, type, and surface characteristics according to each patient's condition, ultimately increasing the success rate of implants. Purpose: The non-invasive techniques available for the clinical measurement of implant stability and osseointegration include percussion, radiography, the Periotest®, the Dental Fine Tester®, and so on. There is, however, relatively little research aimed at standardizing the quantitative measurement of implant stability and osseointegration, because clinical application varies among individual operators. Therefore, to develop a non-invasive experimental method that measures implant stability quantitatively, a resonance frequency analyzer that measures the natural frequency of a given structure was developed in this study. Material & methods: To test the stability of the resonance frequency analyzer developed in this study, the following methods and materials were used. 1) In-vitro study: implants were placed both in epoxy resin, whose physical properties are similar to the stiffness of human bone, and in fresh cow rib bone specimens, and their resonance frequency values were measured and analyzed. To test the reliability of the data gathered with the resonance frequency analyzer, a comparative analysis with data from the Periotest® was conducted. 2) In-vivo study: implants were inserted into the tibiae of 10 New Zealand rabbits, and the resonance frequency values of the implants with connected abutments were measured immediately after insertion and every 4 weeks thereafter during 16 weeks of healing. Results: Implants of the same length placed in Hot Melt showed reproducible resonance frequency values. As the abutment length increased, the resonance frequency value changed significantly (p<0.01). As the transducer thickness increased through 0.5, 1.0, and 2.0 mm, the resonance frequency value increased significantly (p<0.05). For implants placed in PL-2 and in epoxy resin with different degrees of exposure, the resonance frequency value increased as the exposure of the implant and the abutment length decreased. In the comparative experiment based on physical properties, the resonance frequency value increased significantly as the transducer thickness increased (p<0.01), and it increased significantly as the stiffness of the embedding material increased and the effective implant length decreased (p<0.05). In the experiment with cow rib bone specimens, increasing the abutment length produced a significant difference between the results from the resonance frequency analyzer and the Periotest®. There was no significant difference between the resonance frequency values and the Periotest® values with respect to the direction of measurement (p<0.05). The in-vivo experiment yielded reproducible patterns of resonance frequency; as time elapsed, the resonance frequency value increased significantly, with the exception of the 4th and 8th weeks (p<0.05). Conclusion: The resonance frequency analyzer developed here is an attempt to standardize the quantitative measurement of implant stability and osseointegration and to complement the reliability of data from other non-invasive measuring devices. Further research is needed to improve its clinical applicability, and further investigation of standardized quantitative analysis of implant stability is warranted.
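
For illustration only, the basic operation of such an analyzer, extracting the dominant natural frequency from a measured transducer response, might be sketched as below; the sampling rate and the decaying sinusoid are synthetic assumptions, not data from the study.

```python
# Minimal sketch: dominant spectral peak of a transducer response signal.
import numpy as np

def resonance_frequency(response: np.ndarray, fs: float) -> float:
    """Return the frequency (Hz) of the dominant spectral peak."""
    spectrum = np.abs(np.fft.rfft(response * np.hanning(response.size)))
    freqs = np.fft.rfftfreq(response.size, d=1.0 / fs)
    return float(freqs[np.argmax(spectrum[1:]) + 1])   # skip the DC bin

fs = 50_000                                             # assumed sampling rate (Hz)
t = np.arange(fs // 10) / fs
response = np.exp(-40 * t) * np.sin(2 * np.pi * 7_500 * t)  # decaying 7.5 kHz ring
print(f"{resonance_frequency(response, fs):.0f} Hz")
```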

A Prediction of N-value Using Artificial Neural Network (인공신경망을 이용한 N치 예측)

  • Kim, Kwang Myung;Park, Hyoung June;Goo, Tae Hun;Kim, Hyung Chan
    • The Journal of Engineering Geology
    • /
    • v.30 no.4
    • /
    • pp.457-468
    • /
    • 2020
  • Problems arising during pile design work for plant construction and civil and architectural projects mostly come from the uncertainty of geotechnical characteristics. In particular, the N-value measured by the Standard Penetration Test (SPT) is the most important datum, but it is difficult to obtain N-values by drilling investigation over the entire target area. There are many constraints, such as licensing, time, cost, equipment access, and residential complaints, and it is often impossible to obtain geotechnical characteristics through drilling investigation within the short bidding period of overseas projects. The geotechnical characteristics at non-drilled points are usually determined by the engineer's empirical judgment, which can lead to errors in pile design and quantity calculation, causing construction delays and cost increases. This problem could be overcome if the N-value at non-drilled points could be predicted from a limited amount of drilling investigation data. This study was conducted to predict the N-value using an Artificial Neural Network (ANN), one of the artificial intelligence (AI) methods. An artificial neural network processes a limited amount of geotechnical data in a manner analogous to biological learning, providing more reliable results for the input variables. The purpose of this study is to predict the N-value at non-drilled points through patterns learned by a multi-layer perceptron trained with the error back-propagation algorithm on a minimum of geotechnical data. The reliability of the values predicted by the AI method was reviewed against the measured values, and high reliability was confirmed. To further address geotechnical uncertainty, we will perform a sensitivity analysis of the input variables in the next step to increase the learning effect, which may require some technical updates to the program. We hope that this study will be helpful to design work in the future.
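
A minimal sketch of the multi-layer perceptron approach described above, assuming a small table of borehole features (coordinates and depth) as inputs and the SPT N-value as the target; the features, hyperparameters, and synthetic data are illustrative assumptions, not the study's dataset.

```python
# Sketch: predict SPT N-values at non-drilled points with an MLP regressor.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.uniform([0, 0, 1], [100, 100, 30], size=(200, 3))   # x, y, depth (synthetic)
y = 2 + 1.2 * X[:, 2] + rng.normal(0, 3, 200)                # N tends to grow with depth

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0),
)
model.fit(X[:150], y[:150])                 # "drilled" locations used for training
pred = model.predict(X[150:])               # "non-drilled" locations to predict
print("mean abs. error:", np.mean(np.abs(pred - y[150:])).round(2))
```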

Exploring the Role of Preference Heterogeneity and Causal Attribution in Online Ratings Dynamics

  • Chu, Wujin;Roh, Minjung
    • Asia Marketing Journal
    • /
    • v.15 no.4
    • /
    • pp.61-101
    • /
    • 2014
  • This study investigates when and how disagreements in online customer ratings prompt more favorable product evaluations. Among the three metrics of volume, valence, and variance that feature in the research on online customer ratings, volume and valence have exhibited consistently positive patterns in their effects on product sales or evaluations (e.g., Dellarocas, Zhang, and Awad 2007; Liu 2006). Ratings variance, or the degree of disagreement among reviewers, however, has shown rather mixed results, with some studies reporting positive effects on product sales (e.g., Clement, Proppe, and Rott 2007) while others find negative effects on product evaluations (e.g., Zhu and Zhang 2010). This study aims to resolve these contradictory findings by introducing preference heterogeneity as a possible moderator and causal attribution as a mediator that accounts for the moderating effect. The main proposition of this study is that when preference heterogeneity is perceived as high, a disagreement in ratings is attributed more to reviewers' different preferences than to unreliable product quality, which in turn prompts better quality evaluations of a product. Because disagreements mostly result from differences in reviewers' tastes or the low reliability of a product's quality (Mizerski 1982; Sen and Lerman 2007), a greater level of attribution to reviewer tastes can mitigate the negative effect of disagreement on product evaluations. Specifically, if consumers infer that reviewers' heterogeneous preferences result in subjectively different experiences and thereby highly diverse ratings, they will not discount the overall quality of a product. However, if consumers infer that reviewers' preferences are quite homogeneous and thus the low reliability of the product quality contributes to such disagreements, they will discount the overall product quality. Therefore, consumers respond more favorably to disagreements in ratings when preference heterogeneity is perceived as high rather than low. This study furthermore extends this prediction to various levels of average ratings. The heuristic-systematic processing model indicates that engagement in effortful systematic processing occurs only when sufficient motivation is present (Hann et al. 2007; Maheswaran and Chaiken 1991; Martin and Davies 1998). One of the key factors affecting this motivation is the aspiration level of the decision maker: only under conditions that meet or exceed the aspiration level does the decision maker tend to engage in systematic processing (Patzelt and Shepherd 2008; Stephanous and Sage 1987). Therefore, systematic causal attribution processing regarding ratings variance is more likely to be activated when the average rating is high enough to meet the aspiration level than when it is too low to meet it. Considering that the interaction between ratings variance and preference heterogeneity occurs through the mediation of causal attribution, this greater activation of causal attribution for high versus low average ratings would lead to a more pronounced interaction between ratings variance and preference heterogeneity when average ratings are high. Overall, this study proposes that the interaction between ratings variance and preference heterogeneity is more pronounced when the average rating is high than when it is low. Two laboratory studies lend support to these predictions.
Study 1 reveals that participants exposed to a high-preference heterogeneity book title (i.e., a novel) attributed disagreement in ratings more to reviewers' tastes, and thereby more favorably evaluated books with such ratings, compared to those exposed to a low-preference heterogeneity title (i.e., an English listening practice book). Study 2 then extended these findings to the various levels of average ratings and found that this greater preference for disagreement options under high preference heterogeneity is more pronounced when the average rating is high compared to when it is low. This study makes an important theoretical contribution to the online customer ratings literature by showing that preference heterogeneity serves as a key moderator of the effect of ratings variance on product evaluations and that causal attribution acts as a mediator of this moderation effect. A more comprehensive picture of the interplay among ratings variance, preference heterogeneity, and average ratings is also provided by revealing that the interaction between ratings variance and preference heterogeneity varies as a function of the average rating. In addition, this work provides some significant managerial implications for marketers in terms of how they manage word of mouth. Because a lack of consensus creates some uncertainty and anxiety over the given information, consumers experience a psychological burden regarding their choice of a product when ratings show disagreement. The results of this study offer a way to address this problem. By explicitly clarifying that there are many more differences in tastes among reviewers than expected, marketers can allow consumers to speculate that differing tastes of reviewers rather than an uncertain or poor product quality contribute to such conflicts in ratings. Thus, when fierce disagreements are observed in the WOM arena, marketers are advised to communicate to consumers that diverse, rather than uniform, tastes govern reviews and evaluations of products.


Earnings Management of Firms Selected as Preliminary Unicorn (예비유니콘 선정기업의 이익조정에 대한 연구)

  • HAKJUN, HAN;DONGHOON, YANG
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.18 no.1
    • /
    • pp.173-188
    • /
    • 2023
  • This paper analyzes the earnings management of firms selected as preliminary unicorns. A firm selected as a preliminary unicorn can receive financial support of up to 20 billion won, which creates an incentive for its managers to manage earnings. Management's motive for earnings management is related to the capital market: accounting information is used by investors and financial analysts, and corporate earnings affect corporate value, so adjusting accounting earnings upward raises the apparent corporate value and makes investment conditions more favorable. In this paper, earnings quality was measured by discretionary accruals estimated with alternative accrual prediction models, namely the modified Jones model of Dechow et al. (1995) and the ROA-controlled model of Kothari et al. (2005). A matched group of competing companies in the same market as the selected companies was formed, and the discretionary accruals of the two groups were compared to test the research hypotheses; the selected companies were also analyzed separately for the audit year and the years thereafter. The analysis shows that the companies selected as preliminary unicorns exhibited greater earnings management than their matched counterparts, which negatively affected the quality of their accounting earnings, and that the selected companies continued to face incentives for earnings management even after being selected. These results indicate that the companies selected as preliminary unicorns are valued in the market for their external rather than internal growth, so the incentive to manage earnings in order to attract investment from external investors on favorable terms persists. The findings indicate what should be reviewed regarding the quality of accounting earnings in future preliminary unicorn selection evaluations. The implication of this paper is that management's earnings management ultimately prevents investors and creditors from judging the reliability of accounting information; a policy alternative for the K-Unicorn Project that enhances reliability is presented by reflecting the evaluation of earnings quality through discretionary accruals.
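
A hedged sketch of the accrual estimation described above: scaled total accruals are regressed on the modified Jones terms plus ROA (Kothari-style), and the residual is taken as the discretionary accrual. The column names and synthetic data are assumptions, not the paper's sample.

```python
# Kothari-style ROA-augmented modified Jones model (illustrative sketch).
import numpy as np
import pandas as pd

def discretionary_accruals(df: pd.DataFrame) -> pd.Series:
    """df columns: TA (total accruals), ASSETS_LAG, D_REV, D_REC, PPE, ROA."""
    y = df["TA"] / df["ASSETS_LAG"]
    X = np.column_stack([
        np.ones(len(df)),
        1.0 / df["ASSETS_LAG"],
        (df["D_REV"] - df["D_REC"]) / df["ASSETS_LAG"],
        df["PPE"] / df["ASSETS_LAG"],
        df["ROA"],
    ])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coef              # residual = discretionary accruals

rng = np.random.default_rng(2)
n = 80
df = pd.DataFrame({
    "ASSETS_LAG": rng.uniform(50, 500, n),
    "TA": rng.normal(0, 10, n),
    "D_REV": rng.normal(5, 10, n),
    "D_REC": rng.normal(1, 3, n),
    "PPE": rng.uniform(20, 200, n),
    "ROA": rng.normal(0.05, 0.08, n),
})
print("mean |discretionary accrual|:", discretionary_accruals(df).abs().mean().round(4))
```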


A study on the Degradation and By-products Formation of NDMA by the Photolysis with UV: Setup of Reaction Models and Assessment of Decomposition Characteristics by the Statistical Design of Experiment (DOE) based on the Box-Behnken Technique (UV 공정을 이용한 N-Nitrosodimethylamine (NDMA) 광분해 및 부산물 생성에 관한 연구: 박스-벤켄법 실험계획법을 이용한 통계학적 분해특성평가 및 반응모델 수립)

  • Chang, Soon-Woong;Lee, Si-Jin;Cho, Il-Hyoung
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.32 no.1
    • /
    • pp.33-46
    • /
    • 2010
  • We investigated the decomposition characteristics and by-products of N-Nitrosodimethylamine (NDMA) in a UV process using a design of experiment (DOE) based on the Box-Behnken design. The main factors (variables) were UV intensity ($X_1$, range: $1.5{\sim}4.5\;mW/cm^2$), NDMA concentration ($X_2$, range: 100~300 uM), and pH ($X_3$, range: 3~9), each at 3 levels, and 4 responses were defined: $Y_1$ (% of NDMA removal), $Y_2$ (dimethylamine (DMA) formation, uM), $Y_3$ (dimethylformamide (DMF) formation, uM), and $Y_4$ ($NO_2$-N formation, uM), in order to build the prediction models and determine the optimization conditions. The prediction models and the optimum points obtained by canonical analysis for the optimal operating conditions were: $Y_1$ [% of NDMA removal] = $117+21X_1-0.3X_2-17.2X_3+2.43X_1^2+0.001X_2^2+3.2X_3^2-0.08X_1X_2-1.6X_1X_3-0.05X_2X_3$ ($R^2$ = 96%, Adjusted $R^2$ = 88%), with an optimum of 99.3% ($X_1$: $4.5\;mW/cm^2$, $X_2$: 190 uM, $X_3$: 3.2); $Y_2$ [DMA conc.] = $-101+18.5X_1+0.4X_2+21X_3-3.3X_1^2-0.01X_2^2-1.5X_3^2-0.01X_1X_2+0.07X_1X_3-0.01X_2X_3$ ($R^2$ = 99.4%, Adjusted $R^2$ = 95.7%), with an optimum of 35.2 uM ($X_1$: $3\;mW/cm^2$, $X_2$: 220 uM, $X_3$: 6.3); $Y_3$ [DMF conc.] = $-6.2+0.2X_1+0.02X_2+2X_3-0.26X_1^2-0.01X_2^2-0.2X_3^2-0.004X_1X_2+0.1X_1X_3-0.02X_2X_3$ ($R^2$ = 98%, Adjusted $R^2$ = 94.4%), with an optimum of 3.7 uM ($X_1$: $4.5\;mW/cm^2$, $X_2$: 290 uM, $X_3$: 6.2); and $Y_4$ [$NO_2$-N conc.] = $-25+12.2X_1+0.15X_2+7.8X_3+1.1X_1^2+0.001X_2^2-0.34X_3^2+0.01X_1X_2+0.08X_1X_3-3.4X_2X_3$ ($R^2$ = 98.5%, Adjusted $R^2$ = 95.7%), with an optimum of 74.5 uM ($X_1$: $4.5\;mW/cm^2$, $X_2$: 220 uM, $X_3$: 3.1). This study demonstrates that response surface methodology with the Box-Behnken statistical experiment design can provide statistically reliable results for the decomposition and by-products of NDMA by UV photolysis and for the determination of optimum conditions. Predictions obtained from the response functions were in good agreement with the experimental results, indicating the reliability of the methodology used.
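
For reference, the fitted $Y_1$ response surface quoted above can be evaluated directly at the reported optimum; because the published coefficients are rounded, the computed value only approximates the stated 99.3%.

```python
# Evaluate the quoted Y1 response surface (% NDMA removal) at the reported
# optimum (X1 = 4.5 mW/cm^2, X2 = 190 uM, X3 = 3.2). Coefficients as published.
def y1_ndma_removal(x1: float, x2: float, x3: float) -> float:
    return (117 + 21 * x1 - 0.3 * x2 - 17.2 * x3
            + 2.43 * x1**2 + 0.001 * x2**2 + 3.2 * x3**2
            - 0.08 * x1 * x2 - 1.6 * x1 * x3 - 0.05 * x2 * x3)

print(f"{y1_ndma_removal(4.5, 190, 3.2):.1f} % NDMA removal (approx., rounded coefficients)")
```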