An Evaluation Framework for Defense Informatization Policy

  • Jung, Hosang (Professor, Asia Pacific School of Logistics, Inha University)
  • Lee, Sangho (Associate Professor, Department of IT Management, Sun Moon University)
  • Received : 2020.03.03
  • Accepted : 2020.03.26
  • Published : 2020.03.31

Abstract

The well-known sentence, "You can't manage what you don't measure," suggests the importance of measurement. The Ministry of National Defense (MND) in Korea measures its informatization efforts along various dimensions, such as validity, adequacy, and effectiveness, using the MND evaluation system to obtain positive and significant effects from informatization. MND divides the defense informatization domain into the defense informatization policy, the defense informatization project, and the defense informatization level, the last of which measures the informatization capability of the MND and the armed forces or organizations. However, the existing system has limitations, such as ambiguity and low reliability. To overcome these limitations in evaluating the defense informatization policy, this study proposes a revised evaluation framework for the policy of defense informatization, together with its indicators and measurement methods.

I. INTRODUCTION

The sentence, “You can’t manage what you don’t measure,” alleged to have been used by Peter Drucker or W. Edwards Deming [1], suggests the importance of measurement. An evaluation is “an assessment, as systematic and objective as possible, of an on-going or completed project, program or policy, its design, implementation and results” [2], or an assessment of policy effectiveness, efficiency, relevance, and coherence during and after implementation. It seeks to measure outcomes and impacts to assess and determine whether the anticipated benefits of a policy have been realized [3]. It “should provide information that is credible and useful, enabling the incorporation of lessons learned into the decision-making process of both recipients and donors” [2]. In addition, an evaluation refers to the process of judging the value and merits of the object to be evaluated based on certain criteria and procedures. It is an important part of the logical process by which public or private organizations determine policy: they start by making a plan, then implement or execute the plan or policy, evaluate its outcomes and processes, and take any follow-up action based on the evaluation results [4].

This study proposes a framework to assess the defense informatization policy (DIP) in terms of: the validity of policy-making, the appropriateness of the policy-making process, and the adequacy of performance by the policy at the policy-making stage; the properness of policy implementation at the policy implementation stage; and the achievement of performance objectives, the adequacy of the performance analysis process, and the utilization of analysis results at the outcome/performance stage. It also describes quantitative evaluation indicators for each evaluation item.

The remainder of this paper is organized as follows. Section 2 reviews existing work related to the evaluation of the DIP. Section 3 presents a framework for the evaluation of the DIP and describes its evaluation indicators and their measurement methods. The last section presents a summary, limitations, and directions for future work.

II. RELATED WORKS

2.1 Korean Government’s Evaluation of Policies

In Korea, the Framework Act on the Evaluation of Government Services is the basis for government evaluation [5]. The evaluation refers to checking, analyzing, and evaluating the establishment, implementation, and results of plans with respect to policies, projects, and duties carried out by a given institution, corporation, or organization [5]. Government service evaluation refers to the evaluation of policies carried out by the government, public organizations, or corporations to ensure the efficiency, effectiveness, and accountability of government operations. Government evaluation is divided into self-assessment and specific evaluation. Self-assessment refers to self-evaluation of jurisdictional policy by a central administrative agency or local government. Specific evaluation means that the Prime Minister evaluates policies of central administrative agencies that require integrated management of government services.

Another effort related to the evaluation of the informatization policy by the Government of the Republic of Korea is the evaluation of performance management within the assessment of administrative management capability [6]. The Ministry of the Interior and Safety manages this evaluation, which is based on the Framework Act on Public Service Evaluation [7]. Forty-four central government departments, including the Ministry of National Defense (MND), were evaluated in 2019. The evaluation item related to the informatization policy is the performance management indicator, and its weight is just seven percent.

2.2 Informatization Evaluation in MND

The MND performs various measurements to obtain significant realized effects from informatization. The term “defense information” refers to any type of material or knowledge processed by optical or electronic means for defense and is expressed in code, letters, voice, sound, and video [8]. The optical or electronic means naturally use and depend on multimedia, which is “a technique (such as the combining of sound, video, and text) for expressing ideas (as in communication, entertainment, or art) in which several media are employed” [9]. The term “defense informatization” refers to the production, distribution, or utilization of defense information to enable activities in the defense sector or to promote efficiency. The DIP is the policy for defense informatization, and follows four principles: strategic informatization for national security of the information society, economic informatization through efficient management of defense information resources, technical informatization to secure excellent defense information technology, and integrated informatization to maximize the utility of defense power [8].

The evaluation in the defense informatization domain is divided into the evaluation of the DIP, the evaluation of the defense informatization project under the Act on Defense Informatization [8], [10], [11], and the evaluation of the defense informatization level [12], [13].

The evaluation of the defense informatization project assesses the establishment, implementation process, and results of project plans for specific defense informatization projects such as IT procurement projects, information system (IS) development projects, and IS maintenance and operation projects, which are carried out by defense organizations. The project evaluation consists of three stages: the ex ante project stage, the project progression stage, and the ex post project stage [14]. It should focus on defining performance indicators when the operational concept of the informatization project is established, reviewing the progress of the performance indicators during the progression stage, and evaluating whether the performance indicators achieved their target values in the ex post project stage.

The evaluation of the defense informatization level measures the informatization capacity and readiness, as the informatization level, of defense organizations [12], [15]. The level evaluation should focus on measuring the organization's informatization mindset and informatization infrastructure (facility, equipment, budget, etc.), along with the utilization of the IS operated as a result of informatization projects [16].

The evaluation of the DIP is an annual evaluation of the implementation direction, results, and performance of the policy for all agencies and units of the MND, the Army, the Navy, and the Air Force that promote defense informatization. It should focus on evaluating whether the policy was implemented in accordance with the policy direction for the DIP items included in the Defense Informatization Policy Statement (DIPS) and the Defense Informatization Basic Plan [12], [13], [16]. As an assessment of the adequacy of policy-making and implementation, it checks compliance with the procedures and standards to be considered at each stage of policy-making, implementation, and result measurement. As an assessment of policy implementation and performance, it checks the targeting of performance indicators according to the characteristics of the policy and the results of the implemented policy.

2.3 Current Evaluation Method of MND for DIP

The MND’s current evaluation method for the DIP uses evaluation indicators organized by stage (policy planning, policy implementation, and output/performance of the policy) from a systematic perspective [12]. It uses eleven indicators. In the policy planning stage, two items (the adequacy of planning and the adequacy of the performance plan) are used. The adequacy-of-planning item uses five indicators, including conformity with the DIPS ([a-1] Has the policy been adequately analyzed in accordance with the policy contents in the DIPS?), adequacy of policy analysis (Were the policy measures for achieving the policy objectives prepared appropriately?), fidelity of opinion collection (Did the organization faithfully collect expert opinions when planning?), and sufficiency of the preliminary validity review of the plan (Did the organization fully conduct a preliminary survey when planning? Were the anticipated side effects and their alternatives fully reviewed?). The adequacy-of-performance-plan item uses four indicators, including specificity of performance goal setting (Are the objectives the organization is trying to achieve through the policy sufficiently specific? Has the organization specified concrete targets for evaluating the outcome of the policy? Is there a concrete way to evaluate the effectiveness of the policy?) and relevance of performance indicators (Were the performance indicators and their performance targets set appropriately?).

In the policy implementation stage, three indicators are used to measure the relevance of the implementation process: fidelity to the implementation schedule (Has the organization faithfully implemented the policy in accordance with the schedule?), responsiveness to changes in administrative conditions and circumstances (Has the organization responded appropriately to changes in administrative conditions and circumstances?), and connectivity with relevant institutions and policies (Did the organization establish a proper connectivity and cooperation system with relevant institutions and policies during implementation?).

In the output/performance of the policy stage, two items (achievement of the performance objective and feedback of evaluation results) are used. The achievement-of-the-performance-objective item uses the achievement of the targets of the performance indicators (Did the organization achieve the objectives originally set in the policy planning? Did the organization identify strengths and weaknesses through performance analysis? Did the organization suggest appropriate implications from the performance analysis?). The feedback-of-evaluation-results item uses evaluation result utilization indicators (Are the results of the performance analysis properly reflected in the next plan? Were the results of the performance analysis fully utilized through knowledge management?).

The current method has some limitations. For example, the conformity-with-DIPS indicator (Has the policy been adequately analyzed in accordance with the policy contents in the DIPS?) uses a 5-point Likert scale (Very poor – Poor – Acceptable – Good – Very good) with the check criteria below [12]:

▪ “Very good (5 points),” when the policy content in the policy plan matches the policy direction in the DIPS.

▪ “Acceptable (3 points),” when the policy does not exactly match the direction in the DIPS but is related to the direction of informatization in the DIPS.

▪ “Very poor (1 point),” when the policy is not related to the direction of informatization in the DIPS.

However, it may not be meaningful to use the [a-1] evaluation indicator (Has the policy been adequately analyzed in accordance with the policy contents in the DIPS?), because the policy being evaluated cannot be made completely apart from the DIP. Moreover, assigning different points according to the level of conformity may be artificial, as it involves the subjective judgment of the evaluator [16].

To overcome the limitations of the current method, it is necessary to reconstruct the evaluation system for the DIP based on clear, quantitative evaluation indicators that can guarantee objectivity.

III. EVALUATION FRAMEWORK FOR DEFENSE INFORMATIZATION POLICY

The proposed evaluation framework for DIP consists of three stages: policy-making, policy implementation, and outcome/performance of policy, as in the existing system.

Fig. 1 shows the policy-making process. Table 1 presents the evaluation items, their evaluation indicators, and their descriptions in the framework. Seventeen indicators are used.

Fig. 1. Policy-making process.

Table 1. Evaluation indicators for defense informatization policy

In the policy-making stage, the validity of policy-making (A), the appropriateness of the policy-making process (B), and the adequacy of performance by the policy (C) are evaluated. The validity of policy-making uses the necessity and timeliness of the policy as evaluation indicators. The appropriateness of the policy-making process is evaluated using four indicators: fidelity of collecting opinions, fidelity of study in advance, fidelity of policy analysis, and fidelity of post preparation. The adequacy of performance by the policy is evaluated by three indicators: representativeness, objectivity, and redundancy of performance indicators.

In the policy implementation stage, the properness of policy implementation (D) is reviewed with three indicators: compliance with the plan, responsiveness to changes of circumstance, and connectivity with relevant organizations or policies.

In the output/performance stage, the achievement of the performance objective (E), the adequacy of the performance analysis process (F), and the utilization of the analysis results (G) are evaluated. Two indicators, the concreteness and the reliability of the performance analysis, are used to evaluate the adequacy of the performance analysis process. In addition, the utilization of the analysis results is based on the sharing and learning level of the analysis results and the intellectualization level of the analysis results.
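The three stages and their evaluation items (A) to (G) described above form a simple hierarchy. The following is an illustrative encoding of that hierarchy; the variable name and dictionary layout are our assumptions, not part of the MND system:

```python
# Sketch of the proposed framework: stage -> evaluation item -> indicators.
# Names follow the paper's Tables 3-19; the data structure itself is illustrative.
EVALUATION_FRAMEWORK = {
    "policy-making": {
        "A: validity of policy-making": [
            "necessity of policy", "timeliness of policy",
        ],
        "B: appropriateness of policy-making process": [
            "fidelity of collecting opinions", "fidelity of study in advance",
            "fidelity of policy analysis", "fidelity of post preparation",
        ],
        "C: adequacy of performance by the policy": [
            "representativeness of performance indicators",
            "objectivity of performance indicators",
            "redundancy of performance indicators",
        ],
    },
    "policy implementation": {
        "D: properness of policy implementation": [
            "compliance with plan", "responsiveness to change of circumstance",
            "connectivity with relevant organizations or policies",
        ],
    },
    "output/performance": {
        "E: achievement of performance objective": [
            "achievement of performance objective",
        ],
        "F: adequacy of performance analysis process": [
            "concreteness of performance analysis",
            "reliability of performance analysis",
        ],
        "G: utilization of analysis results": [
            "sharing and learning level of analysis result",
            "intellectualization level of analysis result",
        ],
    },
}

# The framework uses seventeen indicators in total, matching the paper.
total = sum(len(inds) for items in EVALUATION_FRAMEWORK.values()
            for inds in items.values())
print(total)  # 17
```

Encoding the hierarchy this way makes the indicator count checkable and could serve as the starting point for an evaluation worksheet generator.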

For the evaluation framework to work well, specific measures, descriptions, and criteria should be provided for each evaluation indicator. The explanation of the evaluation indicator for the necessity of policy [A-1] is as follows:

▪ Indicator: Policy-making >> Validity of policy-making > Necessity of policy

▪ Description: Check if the necessity of policy was reviewed when making the policy

▪ Question: Was the policy fully reviewed in accordance with the policy contents in the DIPS and the National Informatization Basic Plan?

▪ Check criteria: See Table 2

▪ Source of data: DIPS, Framework Act on National Informatization [17], National Informatization Basic Plan [17], [18], Formal informatization policy report published by other private or public research institutes, universities, etc. within the last two years
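The five elements listed above for each indicator (its position in the hierarchy, description, evaluation question, check criteria, and data sources) can be captured in a simple record type. The class and field names below are hypothetical, shown only to illustrate how indicator specifications could be stored for tool support:

```python
from dataclasses import dataclass

# Hypothetical record for an evaluation indicator specification.
# Field names are our assumptions; the content mirrors the [A-1] example.
@dataclass
class IndicatorSpec:
    code: str           # e.g. "A-1"
    path: str           # stage >> item > indicator
    description: str    # what the indicator checks
    question: str       # the evaluation question asked of the organization
    data_sources: list  # where the evidence comes from

A1 = IndicatorSpec(
    code="A-1",
    path="Policy-making >> Validity of policy-making > Necessity of policy",
    description="Check if the necessity of policy was reviewed when making the policy",
    question=("Was the policy fully reviewed in accordance with the policy contents "
              "in the DIPS and the National Informatization Basic Plan?"),
    data_sources=[
        "DIPS",
        "Framework Act on National Informatization",
        "National Informatization Basic Plan",
        "Formal informatization policy reports (last two years)",
    ],
)
```

A record like this, one per indicator in Tables 3 to 19, would let an evaluation tool render worksheets or validate that every indicator carries all five required elements.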

Table 2. Check criteria for the necessity of policy indicator [A-1].

* Note. The direction of the informatization policy of the private or public sectors is based on official reports published by private or public research institutes, universities, etc., in the past two years.

Table 2 shows the check criteria for the necessity of policy indicator. This indicator checks whether the policy is consistent with the directions of defense informatization, national informatization, and other public or private informatization. The criteria use a 5-point Likert scale (Very poor (0) – Poor (1) – Acceptable (2) – Good (3) – Very good (4)). If the policy is consistent with all of these informatization directions, one can mark “Very good.” Even if the policy is not consistent with the direction of defense informatization, one should mark “Acceptable” if it is fully consistent with the direction of national informatization. One can mark “Poor” if it matches only the direction of informatization policies other than the defense or national ones.
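The check criteria just described can be sketched as a small scoring function. This is an illustrative sketch under stated assumptions: the function name and flags are ours, and the “Good (3)” branch (consistency with the defense informatization direction when not all directions match) is our reading of Table 2, which the text does not spell out explicitly:

```python
# Sketch of the Table 2 check criteria for indicator [A-1].
# Each flag records whether the policy is consistent with that
# informatization direction; the return value is the 0-4 Likert score.
def score_necessity(defense: bool, national: bool, other: bool) -> int:
    if defense and national and other:
        return 4  # Very good: consistent with all informatization directions
    if defense:
        return 3  # Good (assumed): consistent with defense informatization
    if national:
        return 2  # Acceptable: consistent with national informatization only
    if other:
        return 1  # Poor: matches only other public/private directions
    return 0      # Very poor: not consistent with any direction
```

Making the criteria executable in this way removes the evaluator's subjective judgment that the current conformity indicator was criticized for: the score follows mechanically from three yes/no consistency checks.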

All evaluation indicators are tabulated in Tables 3 to 19.

Table 3. Indicator [A-1] Necessity of policy

Table 4. Indicator [A-2] Timeliness of policy

Table 5. Indicator [B-1] Fidelity of collecting opinions

Table 6. Indicator [B-2] Fidelity of study in advance

Table 7. Indicator [B-3] Fidelity of policy analysis

Table 8. Indicator [B-4] Fidelity of post preparation

Table 9. Indicator [C-1] Representativeness of performance indicators

Table 10. Indicator [C-2] Objectivity of performance indicators

Table 11. Indicator [C-3] Redundancy of performance indicators

Table 12. Indicator [D-1] Compliance with plan

Table 13. Indicator [D-2] Responsiveness to change of circumstance

Table 14. Indicator [D-3] Connectivity with relevant organizations or policies

Table 15. Indicator [E-1] Achievement of performance objective

Table 16. Indicator [F-1] Concreteness of performance analysis

Table 17. Indicator [F-2] Reliability of performance analysis

Table 18. Indicator [G-1] Sharing and learning level of analysis result

Table 19. Indicator [G-2] Intellectualization level of analysis result

IV. CONCLUSION

This study describes an improved evaluation framework for the DIP, revised from the current defense informatization evaluation method [12]. The proposed framework evaluates the policy of defense informatization at each stage: policy-making, policy implementation, and outcome/performance of the policy. Where possible, it relies on direct evaluation of the policy by evaluators rather than on a survey method. The evaluation requires measurement effort. For an efficient evaluation that reduces the burden on defense organizations of overlapping evaluation by national and defense methods, the proposed evaluation method incorporates and is consistent with the national evaluation method [5]-[7] as much as possible. The framework proposed in this study can be applied to assess various other policies, such as multimedia broadcasting policy, ICT convergence policy, and multimedia policy, as well as the DIP.

There are some limitations to the current study, as with most research and methodologies. It is necessary to set the performance objective for each policy in advance, yet most policies do not have a clear and quantitative performance objective, indicator, or target [19]. If a policy does not have quantitative performance indicators with an associated objective and target value, the evaluation framework is not workable. Moreover, the proposed evaluation framework is a revision of an existing study [12], not a theory.

The simple is more beautiful and better than the complex: an evaluation framework that most users can intuitively understand and easily use is more valuable, and low acceptance weakens effectiveness. It is better to evolve an imperfect evaluation framework by repeatedly evaluating the informatization policy than to wait for a fully reasonable and theoretically perfect one. In addition, the framework must be as open as possible, with its methods and results made widely available.

Repeated use of an evaluation framework accumulates experience, which yields lessons learned and modification requirements that make the framework more useful and easier for users to accept. Through such a virtuous cycle, the evaluation framework for the defense informatization policy can gain acceptance by its users and aid in generating effective policies.

Acknowledgements

This manuscript is based on Research Report [16]. The authors wish to thank the editors and the anonymous reviewers for their careful reviews and constructive suggestions. Their suggestions helped strengthen the manuscript. All errors are the sole responsibility of the authors.

References

  1. A. McAfee and E. Brynjolfsson, "Big data: The management revolution," Harvard Business Review, Vol. 90, No. 10, October 2012, pp. 60-68.
  2. Development Assistance Committee, "Principles for Evaluation of Development Assistance," OECD, 1991. https://www.oecd.org/dac/evaluation/2755284.pdf
  3. Policy and Operations Evaluation Department (IOB) of the Dutch Ministry of Foreign Affairs, "Evaluation Policy and Guidelines for Evaluations," The Netherlands, October 2009. https://www.oecd.org/dac/evaluation/iob-evaluation-policy-and-guidelines-for-evaluations.pdf
  4. J. H. Kim, "Seeking ways to improve the use of policy evaluation," Korean Journal of Policy Analysis and Evaluation, Vol. 26, No. 3, 2016, pp. 205-222. (In Korean)
  5. Korea Prime Minister, Framework Act on the Evaluation of Government Services, Act No. 14118, Mar. 2016. (In Korean) http://www.law.go.kr/법령/정부업무평가기본법/(14118)
  6. Korea Ministry of the Interior and Safety, "2019 Central Government Department Self-evaluation (Public Administration Capability Part) Plan (Draft)," May 17, 2019. (In Korean)
  7. Korea Office for Government Policy Coordination, Framework Act on Public Service Evaluation, Act No. 14839, July 26, 2017. (In Korean) http://www.law.go.kr/%EB%B2%95%EB%A0%B9/%EC%A0%95%EB%B6%80%EC%97%85%EB%AC%B4%ED%8F%89%EA%B0%80%EA%B8%B0%EB%B3%B8%EB%B2%95
  8. Korea Ministry of National Defense (MND), Act on Establishment of Infrastructure for Informatization of National Defense and Management of Informational Resources for National Defense (Abbreviation: Act on Defense Informatization), Act No. 12553, May 9, 2014. http://elaw.klri.re.kr/kor_service/lawView.do?hseq=32670&lang=ENG
  9. Merriam-Webster, "multimedia," retrieved Mar. 24, 2020. https://www.merriam-webster.com/dictionary/multimedia
  10. Korea Ministry of National Defense (MND), Enforcement Decree of Act on Establishment of Infrastructure for Informatization of National Defense and Management of Informational Resources for National Defense (Abbreviation: Act on Defense Informatization), Presidential Decree No. 25906, Dec. 30, 2014. (In Korean) http://www.law.go.kr/%EB%B2%95%EB%A0%B9/%EA%B5%AD%EB%B0%A9%EC%A0%95%EB%B3%B4%ED%99%94%20%EA%B8%B0%EB%B0%98%EC%A1%B0%EC%84%B1%20%EB%B0%8F%20%EA%B5%AD%EB%B0%A9%EC%A0%95%EB%B3%B4%EC%9E%90%EC%9B%90%EA%B4%80%EB%A6%AC%EC%97%90%20%EA%B4%80%ED%95%9C%20%EB%B2%95%EB%A5%A0%20%EC%8B%9C%ED%96%89%EB%A0%B9
  11. Korea Ministry of National Defense (MND), Defense Informatization Task Directive, MND Directive No. 2129, Feb. 5, 2018. (In Korean) http://www.law.go.kr/행정규칙/국방정보화업무훈령
  12. H. J. Kwon, J. S. Choi, S. T. Kim, H. J. Lee, and Y. P. Sung, "A study for improving an evaluation systems of defense informatization," Korea Institute for Defense Analyses, Seoul, Republic of Korea, Research Report, Feb. 2012. (In Korean)
  13. H. J. Kwon, J. K. Hong, S. T. Kim, and H. J. Lee, "A study on development of defense informatization evaluation and management plan," Korea Institute for Defense Analyses, Seoul, Republic of Korea, Research Report, Sep. 2016. (In Korean)
  14. S. Lee and C.-H. Song, "Evaluation system for defense IT project in Korea: Post-implementation stage," Journal of Multimedia Information System, Vol. 5, No. 4, 2018, pp. 291-297. https://doi.org/10.9717/JMIS.2018.5.4.291
  15. S. Sim and S. Lee, "Development of evaluation system for defense informatization level," Journal of Multimedia Information System, Vol. 6, No. 4, 2019, pp. 271-282. https://doi.org/10.33851/JMIS.2019.6.4.271
  16. S. Lee, H. S. Jung, and S. J. Yoon, "An application with an evaluation methodology for defense informatization and validating the methodology," Sun Moon University, Asan, Republic of Korea, Research Report, Nov. 2012. (In Korean)
  17. Korea Ministry of Science and ICT, Framework Act on National Informatization, Act No. 15786, Oct. 16, 2018. http://elaw.klri.re.kr/kor_service/lawView.do?hseq=50665&lang=ENG
  18. Korea Ministry of Science and ICT, Enforcement Decree Framework Act on National Informatization, Presidential Decree No. 28264, Sep. 5, 2017. http://elaw.klri.re.kr/kor_service/lawView.do?hseq=45333&lang=ENG
  19. H. J. Lee, S. T. Kim, and H. J. Kwon, "A case study on performance management of public informatization: Based on evaluation of defense informatization policy," in Proceedings of the 2019 Fall Conference on the Korea Society of Management Information Systems, Seoul, Nov. 2019, pp. 224-231. (In Korean)