I. INTRODUCTION
The sentence “You can’t manage what you don’t measure,” often attributed to Peter Drucker or W. Edwards Deming [1], suggests the importance of measurement. An evaluation is “an assessment, as systematic and objective as possible, of an on-going or completed project, program or policy, its design, implementation and results” [2], or an assessment of policy effectiveness, efficiency, relevance, and coherence during and after implementation. It seeks to measure outcomes and impacts in order to determine whether the anticipated benefits of a policy have been realized [3]. It “should provide information that is credible and useful, enabling the incorporation of lessons learned into the decision-making process of both recipients and donors” [2]. In addition, an evaluation is the process of judging the value and merits of an object against defined criteria and procedures. It is an important part of the logical process by which public and private organizations shape policy: they make a plan, implement it, evaluate the outcomes and processes, and take follow-up action based on the evaluation results [4].
This study proposes a framework for assessing the defense informatization policy (DIP) in terms of the validity of policy-making, the appropriateness of the policy-making process, and the adequacy of performance by the policy at the policy-making stage; the properness of policy implementation at the policy implementation stage; and the achievement of performance objectives, the adequacy of the performance analysis process, and the utilization of analysis results at the outcome/performance stage. It also describes quantitative evaluation indicators for each evaluation item.
The remainder of this paper is organized as follows. Section 2 reviews existing work related to the evaluation of the DIP. Section 3 presents the proposed evaluation framework for the DIP and describes its evaluation indicators and their measurement methods. The last section presents a summary, limitations, and directions for future work.
II. RELATED WORKS
In Korea, the Framework Act on the Evaluation of Government Services is the legal basis for government evaluation [5]. Under the Act, evaluation refers to checking, analyzing, and assessing the establishment, implementation, and results of plans with respect to the policies, projects, and duties carried out by a given institution, corporation, or organization [5]. Government service evaluation refers to the evaluation of policies carried out by the government or by public organizations or corporations to ensure the efficiency, effectiveness, and accountability of government operations. Government evaluation is divided into self-assessment and specific evaluation. Self-assessment refers to a central administrative agency or local government evaluating the policies under its own jurisdiction. Specific evaluation means that the Prime Minister evaluates the policies required for the integrated management of the services of central administrative agencies.
Another effort related to the evaluation of informatization policy in the Government of the Republic of Korea is the assessment of performance management within the evaluation of administrative management capability [6]. The Ministry of the Interior and Safety manages this evaluation, which is based on the Framework Act on Public Service Evaluation [7]. Forty-four central government departments, including the Ministry of National Defense (MND), were evaluated in 2019. The evaluation item related to informatization policy is the performance management indicator, and its weight is only seven percent.
The MND performs various measurements to capture the effects actually realized from informatization. The term “defense information” refers to any type of material or knowledge processed by optical or electronic means for defense and expressed in code, letters, voice, sound, or video [8]. Such optical or electronic means naturally rely on multimedia, which is “a technique (such as the combining of sound, video, and text) for expressing ideas (as in communication, entertainment, or art) in which several media are employed” [9]. The term “defense informatization” refers to the production, distribution, or utilization of defense information to enable activities in the defense sector or to promote their efficiency. The DIP is the policy for defense informatization and follows four principles: strategic informatization for national security in the information society, economic informatization through efficient management of defense information resources, technical informatization to secure excellent defense information technology, and integrated informatization to maximize the utility of defense power [8].
Evaluation in the defense informatization domain is divided into the evaluation of the DIP, the evaluation of defense informatization projects under the Act on Defense Informatization [8], [10], [11], and the evaluation of the defense informatization level [12], [13].
The evaluation of defense informatization projects assesses the establishment, implementation process, and results of project plans for specific defense informatization projects carried out by defense organizations, such as IT procurement projects, information system (IS) development projects, and IS maintenance and operation projects. Project evaluation consists of three stages: the ex ante project stage, the project progression stage, and the ex post project stage [14]. It should focus on defining performance indicators from the establishment of the operational concept of the informatization project, reviewing their progress during the progression stage, and evaluating whether they achieved their target values in the subsequent stage.
The evaluation of the defense informatization level measures the informatization capacity and readiness of defense organizations [12], [15]. The level evaluation should focus on measuring an organization's informatization mindset and informatization infrastructure (facilities, equipment, budget, etc.) along with the utilization of the IS operated as a result of informatization projects [16].
The evaluation of the DIP is an annual evaluation of the implementation direction, results, and performance of policy for all agencies and units of the MND, the Army, the Navy, and the Air Force that promote defense informatization. It should focus on evaluating whether the policy was implemented in accordance with the policy direction for the DIP items included in the Defense Informatization Policy Statement (DIPS) and the Defense Informatization Basic Plan [12], [13], [16]. It checks compliance with the procedures and standards to be considered at each stage of policy-making, implementation, and result measurement (assessing the adequacy of policy-making and implementation), and it also checks the targets of the performance indicators, set according to the characteristics of the policy, against the results of the implemented policy (policy implementation and performance evaluation).
The MND’s current evaluation method for the DIP uses evaluation indicators by stage, namely policy planning, policy implementation, and output/performance of policy, from a systematic perspective [12]. It uses eleven indicators. In the policy planning stage, two items (the adequacy of planning and the adequacy of the performance plan) are used. For the adequacy of planning, four indicators are used: conformity with the DIPS (<a-1> Has the policy been adequately analyzed in accordance with the policy contents in the DIPS?), adequacy of policy analysis (Were the policy measures for achieving the policy objectives prepared appropriately?), fidelity of opinion collection (Did the organization faithfully collect expert opinions when planning?), and sufficiency of the preliminary validity review of the plan (Did the organization fully conduct a preliminary survey when planning? Were the anticipated side effects and their alternatives fully reviewed?). For the adequacy of the performance plan, two indicators are used: specificity of performance goal setting (Are the objectives the organization is trying to achieve through the policy sufficiently specific? Has the organization specified concrete targets for evaluating the outcome of the policy? Is there a concrete way to evaluate the effectiveness of the policy?) and relevance of performance indicators (Were the performance indicators and their performance targets set appropriately?).
In the policy implementation stage, three indicators are used to measure the relevance of the implementation process: fidelity to the propulsion schedule (Has the organization faithfully implemented the policy in accordance with the schedule?), responsiveness to changes in administrative conditions and circumstances (Has the organization responded appropriately to changes in administrative conditions and circumstances?), and connectivity with relevant institutions and policies (Did the organization establish a proper connectivity and cooperation system with relevant institutions and policies in the process of implementation?).
In the output/performance stage, two items (achievement of the performance objective and feedback of evaluation results) are used. The achievement of the performance objective item uses the achievement of the targets of the performance indicators (Did the organization achieve the objectives originally set in the policy planning? Did the organization identify strengths and weaknesses through performance analysis? Did the organization suggest appropriate implications from the performance analysis?). The feedback of the evaluation results item uses the utilization of evaluation results indicator (Are the results of the performance analysis properly reflected in the next plan? Were the results of the performance analysis fully utilized through knowledge management?).
The current method has some limitations. Specifically, the conformity-with-DIPS indicator (Has the policy been adequately analyzed in accordance with the policy contents in the DIPS?) uses a 5-point Likert scale (Very poor – Poor – Acceptable – Good – Very good) with the check criteria below [12]:
▪ “Very good (5 points),” when the policy content in the policy plan matches the policy direction in the DIPS.
▪ “Acceptable (3 points),” when the policy does not exactly match the direction in the DIPS but is related to the direction of informatization in the DIPS.
▪ “Very poor (1 point),” when the policy is not related to the direction of informatization in the DIPS.
However, it may not be meaningful to use the <a-1> evaluation indicator (Has the policy been adequately analyzed in accordance with the policy contents in the DIPS?) because the policy to be evaluated cannot be made completely apart from the DIP. Moreover, assigning different points according to the level of conformity may also be artificial, as it involves the subjective judgment of the evaluator [16].
To overcome the limitations of the current method, it is necessary to reconstruct the evaluation system for DIP based on clear and quantitative evaluation indicators that can guarantee objectivity.
III. EVALUATION FRAMEWORK FOR DEFENSE INFORMATIZATION POLICY
The proposed evaluation framework for DIP consists of three stages: policy-making, policy implementation, and outcome/performance of policy, as in the existing system.
Fig. 1 shows a policy-making process. Table 1 presents the evaluation items, their evaluation indicators, and their descriptions in the framework. Seventeen indicators are used.
In the policy-making stage, the validity of policy-making (A), the appropriateness of the policy-making process (B), and the adequacy of performance by the policy (C) are evaluated. The validity of policy-making uses the necessity and timeliness of the policy as evaluation indicators. The appropriateness of the policy-making process is evaluated using the indicators of fidelity of collecting opinions, fidelity of advance study, fidelity of policy analysis, and fidelity of post preparation. The adequacy of performance by the policy is evaluated by three indicators: the representativeness, objectivity, and redundancy of performance indicators.
In the policy implementation stage, the properness of policy implementation (D) is reviewed with three indicators: compliance with the plan, responsiveness to changes in circumstances, and connectivity with relevant organizations or policies.
In the output/performance stage, the achievement of the performance objective (E), the adequacy of the performance analysis process (F), and the utilization of the analysis results (G) are evaluated. Two indicators, the concreteness and the reliability of the performance analysis, are used to evaluate the adequacy of the performance analysis process. The utilization of analysis results is based on the sharing and learning level of the analysis results and the intellectualization level of the analysis results.
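To make the hierarchy of stages, items (A–G), and indicators concrete, the sketch below encodes it as a nested Python dictionary. The names follow the description above, but the data layout itself is only an illustrative representation for this paper, not part of the official framework or any MND tooling.

```python
# Illustrative layout of the proposed framework: stage -> item -> indicators.
# The names mirror the description above; the structure is only a sketch.
DIP_FRAMEWORK = {
    "Policy-making": {
        "A. Validity of policy-making": [
            "Necessity of policy", "Timeliness of policy"],
        "B. Appropriateness of policy-making process": [
            "Fidelity of collecting opinions", "Fidelity of advance study",
            "Fidelity of policy analysis", "Fidelity of post preparation"],
        "C. Adequacy of performance by policy": [
            "Representativeness of performance indicators",
            "Objectivity of performance indicators",
            "Redundancy of performance indicators"],
    },
    "Policy implementation": {
        "D. Properness of policy implementation": [
            "Compliance with plan", "Responsiveness to change of circumstance",
            "Connectivity with relevant organizations or policies"],
    },
    "Output/performance": {
        "E. Achievement of performance objective": [
            "Achievement of performance objective"],
        "F. Adequacy of performance analysis process": [
            "Concreteness of performance analysis",
            "Reliability of performance analysis"],
        "G. Utilization of analysis results": [
            "Sharing and learning level of analysis result",
            "Intellectualization level of analysis result"],
    },
}

# Sanity check: the framework defines seventeen indicators in total.
assert sum(len(indicators) for items in DIP_FRAMEWORK.values()
           for indicators in items.values()) == 17
```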
For the evaluation framework to work well, specific measures, descriptions, and criteria should be provided for each evaluation indicator. The evaluation indicator <A-1>, the necessity of policy, is explained below:
▪ Indicator: Policy-making >> Validity of policy-making > Necessity of policy
▪ Description: Check if the necessity of policy was reviewed when making the policy
▪ Question: Was the policy fully reviewed in accordance with the policy contents in the DIPS and the National Informatization Basic Plan?
▪ Check criteria: See Table 2
▪ Source of data: DIPS, Framework Act on National Informatization [17], National Informatization Basic Plan [17], [18], Formal informatization policy report published by other private or public research institutes, universities, etc. within the last two years.
Table 2 shows the check criteria for the necessity-of-policy indicator. This indicator checks whether the policy is consistent with the direction of defense informatization, national informatization, and other public or private informatization. The criteria use a 5-point Likert scale (Very poor (0) – Poor (1) – Acceptable (2) – Good (3) – Very good (4)). If the policy is consistent with the direction of defense informatization, one marks “Very good.” Even if the policy is not consistent with the direction of defense informatization, one marks “Acceptable” if it is fully consistent with the direction of national informatization. One marks “Poor” if it matches only the direction of an informatization policy other than the defense or national one.
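As a minimal sketch of this decision logic (interpreting the three match levels in Table 2 as an ordered scale and checking the columns in order of precedence, which is our reading rather than an official rule; the function and enumeration names below are ours), the scoring can be written as:

```python
from enum import IntEnum

class Consistency(IntEnum):
    """Degree of match with a given informatization policy direction."""
    NOT_MATCHED = 0        # "almost not matched" in Table 2
    PARTIALLY_MATCHED = 1
    MATCHED = 2

def necessity_score(defense: Consistency, national: Consistency,
                    other: Consistency) -> int:
    """Return the 0-4 score of indicator <A-1>, following Table 2.

    Columns are checked in order of precedence: defense (DIPS),
    national (National Informatization Basic Plan), then other policies.
    """
    if defense == Consistency.MATCHED:
        return 4  # Very good
    if defense == Consistency.PARTIALLY_MATCHED:
        return 3  # Good
    if national == Consistency.MATCHED:
        return 2  # Acceptable
    if national == Consistency.PARTIALLY_MATCHED or other == Consistency.MATCHED:
        return 1  # Poor
    return 0      # Very poor

# Example: not aligned with the DIPS but fully aligned with the national plan.
print(necessity_score(Consistency.NOT_MATCHED, Consistency.MATCHED,
                      Consistency.NOT_MATCHED))  # -> 2 (Acceptable)
```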
All evaluation indicators are tabulated in Tables 3 to 19.
Indicator | Policy-making >> Validity of policy-making > Necessity of policy | ||||
---|---|---|---|---|---|
Description | Check if the necessity of policy was reviewed when making the policy | ||||
Question | Was the policy fully reviewed in accordance with the policy contents of the Defense Informatization Policy Statement (DIPS) and National Informatization Basic Plan? | ||||
Check criteria | Measure | Score | Consistency with the direction of defense informatization policy (based on DIPS) | Consistency with the direction of national informatization policy (based on National Informatization Basic Plan) | Consistency with the direction of other private or public informatization policies |
Very good | 4 | Matched | - | - | |
Good | 3 | Partially matched | - | - | |
Acceptable | 2 | Almost not matched | Matched | - |
Poor | 1 | Almost not matched | Partially matched | - |
Almost not matched | Almost not matched | Matched | | |
Very poor | 0 | Almost not matched | Almost not matched | Partially matched or below |
* Note. The direction of the informatization policy of the private or public sectors is based on official reports published by private or public research institutes or universities within the past two years. | | | | |
Source of data | - Defense Informatization Policy Statement (DIPS) - National Informatization Basic Plan in the Framework Act on National Informatization [17] - Formal informatization policy report published by other private or public research institutes, universities, etc. within the last two years |
Indicator | Policy-making >> Validity of policy-making > Timeliness of policy | ||||
---|---|---|---|---|---|
Description | Check if the timeliness of policy was reviewed when making the policy | ||||
Question | Was the plan adequately reviewed for timely policy planning in the Defense Informatization Policy Statement (DIPS)? | ||||
Check criteria | Measure | Score | Consistency with the priority in the defense informatization policy plan (based on DIPS) | Consistency with the priority in the national informatization policy plan (based on National Informatization Basic Plan) | Consistency with the priority in the other private or public informatization policies |
Very good | 4 | Matched | - | - | |
Good | 3 | Partially matched | - | - | |
Acceptable | 2 | Almost not matched | Matched | - |
Poor | 1 | Almost not matched | Partially matched | - |
Almost not matched | Almost not matched | Matched | | |
Very poor | 0 | Almost not matched | Almost not matched | Partially matched or below |
* Note. The direction of the informatization policy of the private or public sectors is based on official reports published by private or public research institutes or universities within the past two years. | | | | |
Source of data | - Defense Informatization Policy Statement (DIPS) - National Informatization Basic Plan in the Framework Act on National Informatization [17] - Formal informatization policy report published by other private or public research institutes, universities, etc. within the last two years |
Indicator | Policy-making >> Adequacy of performance by policy > Representativeness of performance indicators | |||
---|---|---|---|---|
Description | Ensure that the performance indicators represent the objectives that the organization wants to achieve through the policy | |||
Question | Are the objectives to be achieved through the policy sufficiently detailed and expressed in performance indicators? | |||
Check criteria | Measure | Score | Clarity of objectives | Connectivity of performance indicators |
Very good | 4 | When the objective of the policy is specifically set for each period (short-term, long-term) | When performance indicators are clearly aligned with the objective of the policy | |
Good | 3 | When the objective of the policy is specifically set for each period | When performance indicators are NOT clearly aligned with the objective of the policy | |
Acceptable | 2 | When the objective of the policy is roughly set for each period | When performance indicators are clearly aligned with the objective of the policy | |
Poor | 1 | When the objective of the policy is roughly set for each period | When performance indicators are NOT clearly aligned with the objective of the policy | |
Very poor | 0 | When the objective of the policy is NOT defined over time, or it is absent | - | |
- | When performance indicators are NOT defined | |||
Source of data | - Proposal report showing the policy objectives - Report showing the connectivity of policy objectives by performance indicators - Report on last year's performance and this year's plan for informatization in the Act on Defense Informatization [8] - Defense informatization performance evaluation report in the Act on Defense Informatization [8] |
Indicator | Policy-making >> Adequacy of performance by policy > Objectivity of performance indicators | ||
---|---|---|---|
Description | Ensure that performance indicators are set up to be measured or calculated | ||
Question | Are there data on which performance indicators can be measured or calculated, and are the measurement criteria or calculation methods provided? | ||
Check criteria | Measure | Score | Measurement/calculation method of performance indicators |
Very good | 4 | When all performance indicators have specific measurement criteria or calculation methods | |
Good | 3 | When two-thirds (⅔) or more of performance indicators have specific measurement criteria or calculation methods | |
Acceptable | 2 | When at least half but less than two-thirds (⅔) of the performance indicators have specific measurement criteria or calculation methods |
Poor | 1 | When at least one-third (⅓) but less than half of the performance indicators have specific measurement criteria or calculation methods |
Very poor | 0 | When less than one-third (⅓) of the performance indicators have specific measurement criteria or calculation methods |
* Note. The performance indicators refer to indicators defined in connection with the objectives identified in <C-1> indicator (representation of performance indicators), excluding performance indicators not linked to the performance objectives. | |||
Source of data | - Report showing measurement or calculation method of performance indicators - Report on last year's performance and this year's plan for informatization in the Act on Defense Informatization [8] - Defense informatization performance evaluation report in Act on Defense Informatization [8] |
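The objectivity criteria above reduce to counting the share of performance indicators that define a measurement or calculation method and mapping that share to a score. A hedged sketch of that mapping follows, assuming each indicator is represented simply by a boolean flag (an illustrative data shape, not one defined in this framework):

```python
def objectivity_score(has_method) -> int:
    """Map the share of performance indicators with a defined measurement or
    calculation method (a sequence of booleans) to the 0-4 objectivity scale."""
    has_method = list(has_method)
    if not has_method:
        return 0  # no performance indicators defined at all
    share = sum(has_method) / len(has_method)
    if share == 1.0:
        return 4  # Very good: all indicators have a method
    if share >= 2 / 3:
        return 3  # Good
    if share >= 1 / 2:
        return 2  # Acceptable
    if share >= 1 / 3:
        return 1  # Poor
    return 0      # Very poor

# Example: 3 of 5 indicators define a concrete method -> share 0.6 -> Acceptable.
print(objectivity_score([True, True, True, False, False]))  # -> 2
```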
Indicator | Policy implementation >> Properness of policy implementation > Compliance with plan | ||
---|---|---|---|
Description | Ensure that the policy went as planned | ||
Question | Did the policy proceed faithfully in accordance with the schedule? | ||
Check criteria | Measure | Score | Compliance with plan |
Very good | 4 | When two-thirds (⅔) or more of the schedule against the plan was completed | |
Good | 3 | When at least half but less than two-thirds (⅔) of the schedule against the plan was completed |
Acceptable | 2 | When at least one-third (⅓) but less than half of the schedule against the plan was completed |
Poor | 1 | When less than one-third (⅓) of the schedule against the plan was completed |
Very poor | 0 | When the policy was NOT driven at all | |
* Note. 1. If less than two-thirds of the schedule against the plan was completed but objective evidence is provided that the schedule was delayed due to unavoidable external circumstances (budget change, an order from higher institutions or organizations, etc.), judge it as “Good (3 points).” 2. The schedule against the plan is calculated as the maximum of the ratio of elapsed time to the total schedule and the ratio of input cost to the total budget (a computational sketch of this calculation follows the table). For example, if the policy has progressed four months of its 12-month schedule and 1.2 billion of its total 2 billion budget has been spent, the schedule against the plan is max [4/12, 12/20] = 0.6 and the judgment is “Good (3 points).” |||
Source of data | - Gantt chart of policy implementation plan and a current progress; Documents showing a current progress - Report on last year's performance and this year's plan for informatization in the Act on Defense Informatization [8] |
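The worked example in the note above can be reproduced directly. Below is a sketch of the progress-ratio calculation and its mapping to the compliance scores in the table; the function names are ours, and the delay-exception rule in Note 1 is omitted for brevity:

```python
def schedule_ratio(elapsed_months: float, planned_months: float,
                   spent_budget: float, total_budget: float) -> float:
    """Progress against plan: the larger of the time ratio and the cost ratio."""
    return max(elapsed_months / planned_months, spent_budget / total_budget)

def compliance_score(ratio: float) -> int:
    """Map the progress ratio to the 0-4 compliance-with-plan scale."""
    if ratio >= 2 / 3:
        return 4  # Very good
    if ratio >= 1 / 2:
        return 3  # Good
    if ratio >= 1 / 3:
        return 2  # Acceptable
    if ratio > 0:
        return 1  # Poor
    return 0      # Very poor: the policy was not driven at all

# Worked example from the note: 4 of 12 months elapsed, 1.2 of 2.0 billion spent.
ratio = schedule_ratio(4, 12, 1.2, 2.0)   # max(0.33, 0.6) = 0.6
print(ratio, compliance_score(ratio))     # -> 0.6 3 (Good)
```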
Indicator | Policy implementation >> Properness of policy implementation > Connectivity with relevant organizations or policies | |||
---|---|---|---|---|
Description | Confirmation of establishment and operation of connectivity and cooperation system with relevant organizations and policies | |||
Question | Has the organization established and operated connectivity and cooperation system with relevant organizations and policies in the process of implementation? | |||
Check criteria | Measure | Score | Level of establishment of cooperation system with relevant organizations | Level of connectivity with relevant policies |
Very good | 4 | When the establishment of a cooperative system with relevant organizations was reviewed and meetings were held continuously (more than once) | When the relevant policy is identified, the connectivity is reviewed, and an actual connectivity case is presented in detail |
Good | 3 | When the establishment of a cooperative system with relevant organizations was reviewed and a meeting was held once | When the relevant policy is identified and the connectivity is reviewed, but the actual connectivity case is NOT specific |
Acceptable | 2 | When the establishment of a cooperative system with relevant organizations was reviewed and a meeting was held once | When the relevant policy is identified and the connectivity is reviewed, but there is NO case |
Poor | 1 | When the establishment of a cooperative system with relevant organizations was only reviewed and NO meeting was held | When the relevant policy is identified and the connectivity is reviewed, but there is NO case |
When the relevant policy is only identified | ||||
Very poor | 0 | When the establishment of a cooperative system was NOT reviewed | - |
- | When the relevant policy is NOT identified | |||
Source of data | - Memorandum for requesting cooperation to relevant institutions - Report showing connectivity cases to relevant policies - Joint review plan in the Act on Defense Informatization [8] |
Indicator | Output/performance >> Achievement of performance objective > Achievement of performance objective | ||
---|---|---|---|
Description | Check the achievement level of the performance objective made when making the policy | ||
Question | Did the organization achieve the objectives originally set in the policy planning? | | |
Check criteria | Measure | Score | Objective achievement ratio |
Very good | 4 | When the objective achievement ratio is two-thirds (⅔) or more | |
Good | 3 | When the objective achievement ratio is at least half but less than two-thirds (⅔) |
Acceptable | 2 | When the objective achievement ratio is at least one-third (⅓) but less than half |
Poor | 1 | When the objective achievement ratio is greater than zero but less than one-third (⅓) |
Very poor | 0 | When the objective achievement ratio is zero | |
* Note. If there are multiple performance objectives, the weighted average is used (see the sketch following this table). | | |
Source of data | - Performance objective achievement ratio report; Documents showing a current progress - Report on last year's performance and this year's plan for informatization in the Act on Defense Informatization [8] - Defense informatization performance evaluation report in the Act on Defense Informatization [8] |
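When several performance objectives exist, the note above prescribes a weighted average. A minimal sketch of that calculation and its mapping to the achievement scores in the table is given below; the weights and achievement ratios in the example are hypothetical:

```python
def achievement_score(objectives) -> int:
    """Objectives are (weight, achievement ratio) pairs; the weighted average
    achievement ratio is mapped to the 0-4 scale of this indicator."""
    total_weight = sum(weight for weight, _ in objectives)
    ratio = sum(weight * achieved for weight, achieved in objectives) / total_weight
    if ratio >= 2 / 3:
        return 4  # Very good
    if ratio >= 1 / 2:
        return 3  # Good
    if ratio >= 1 / 3:
        return 2  # Acceptable
    if ratio > 0:
        return 1  # Poor
    return 0      # Very poor

# Example: two objectives weighted 0.7 and 0.3, achieved at 80% and 40%.
print(achievement_score([(0.7, 0.8), (0.3, 0.4)]))  # weighted ratio 0.68 -> 4
```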
Indicator | Output/performance >> Adequacy of performance analysis process > Concreteness of performance analysis | |||
---|---|---|---|---|
Description | In the result of the performance analysis, check whether the problem of the policy and its cause are specified in detail | |||
Question | Are the problems of the policy and its causes specified? | |||
Check criteria | Measure | Score | Presenting the problem of the policy | Identifying the cause of the problem |
Very good | 4 | When the problem of the policy was systematically analyzed and presented | When the cause of the problem was systematically analyzed and presented |
Good | 3 | When the problem of the policy was systematically analyzed and presented | When the cause of the problem was NOT systematically analyzed and presented |
When the problem of the policy was generally analyzed and presented | When the cause of the problem was systematically analyzed and presented | | |
Acceptable | 2 | When the problem of the policy was generally analyzed and presented | When the cause of the problem was NOT systematically analyzed and presented |
Poor | 1 | When the problem of the policy was generally analyzed and presented | When the cause of the problem was NOT analyzed |
Very poor | 0 | When the problem of the policy was NOT analyzed | - |
Source of data | - Performance analysis report, Policy problem analysis report - Project closure report in Act on Defense Informatization [8] |
Indicator | Output/performance >> Adequacy of performance analysis process > Reliability of performance analysis | |||
---|---|---|---|---|
Description | Check if the performance analysis was conducted with the participation of internal and external experts | |||
Question | Did the internal and external experts related to the policy actively participate in the performance analysis? | |||
Check criteria | Measure | Score | Level of internal expert participation | Level of external expert participation |
Very good | 4 | When multiple (two or more) internal experts participated in the analysis more than once | When multiple (two or more) external experts participated in the analysis more than once | |
Good | 3 | When multiple (two or more) internal experts participated in the analysis once | When multiple (two or more) external experts participated in the analysis once | |
Acceptable | 2 | When ONLY one internal expert participated in the analysis once or more | When ONLY one external expert participated in the analysis once or more | |
Poor | 1 | When ONLY one internal expert participated in the analysis once | - | |
- | When ONLY one external expert participated in the analysis once | |||
Very poor | 0 | When NO internal expert participated in the analysis | When NO external expert participated in the analysis |
* Note. Internal expert refers to skilled workers in policy-making and implementation organizations, while external expert refers to professionals belonging to other organizations. | ||||
Source of data | - List of internal and external experts (including profiles) and their documented opinions related to the policy - Minutes (or photos of meetings), confirmation of participation, etc. - Joint review plan in the Act on Defense Informatization [8] |
Indicator | Output/performance >> Utilization of analysis results > Intellectualization level of analysis result | ||
---|---|---|---|
Description | Check if the analysis result was systematically accumulated and managed | ||
Question | Did the organization accumulate and manage the result of performance analysis using a database system? | ||
Check criteria | Measure | Score | Accumulation and management level of analysis result |
Very good | 4 | When the results of performance analysis were systematically accumulated and managed online using relevant specialized solutions | |
Good | 3 | When the results of performance analysis were written in specialized documents and accumulated and managed only offline | |
Acceptable | 2 | When the results of performance analysis were accumulated and managed in parallel with minutes and other documents |
Poor | 1 | When documents recording the results of performance analysis were presented but were one-off and not accumulated or managed |
Very poor | 0 | When the results of performance analysis are NOT accumulated and managed | |
* Note. The relevant specialized solutions for the accumulation and management of analysis results refer to the online-based tools that provide the functions for accumulating and managing data such as database, data warehouse, and data mart. | |||
Source of data | - Screen capture of online database saved analysis results - Report on the analysis results - Defense informatization performance evaluation report in the Act on Defense Informatization [8] |
IV. CONCLUSION
This study describes an improved evaluation framework for the DIP, revised from the current defense informatization evaluation method [12]. In the proposed framework, the policy is evaluated at each stage of policy-making, policy implementation, and outcome/performance. Where possible, it relies not on surveys but on direct evaluation of the policy by evaluators. The evaluation requires measurement effort. For an efficient evaluation that reduces the burden on defense organizations of overlapping national and defense evaluations, the proposed method adopts and remains consistent with the national evaluation method [5]-[7] as much as possible. The framework proposed in this study can be applied to assess various other policies, such as multimedia broadcasting policy, ICT convergence policy, and multimedia policy, as well as the DIP.
There are some limitations to the current study, as with most research and methodologies. The performance objective for each policy must be set in advance, yet most policies do not have a clear and quantitative performance objective, indicator, or target [19]. If a policy does not have quantitative performance indicators tied to an objective and a target value, the evaluation framework cannot work. Moreover, the proposed evaluation framework is a revision of an existing study [12], not a theory.
Simple is better than complex: an evaluation framework that most users can intuitively understand and easily use is more valuable, since low acceptance weakens its effectiveness. It is better to evolve an imperfect evaluation framework by repeatedly evaluating the informatization policy than to wait for the development of a fully reasonable and theoretically perfect one. In addition, the framework must be as open as possible, with its methods and results made widely available.
Repeated use of an evaluation framework accumulates experience, which yields lessons learned and modification requirements that make the framework more useful and easier for users to accept. Through such a virtuous cycle, the evaluation framework for the defense informatization policy will gain acceptance and can help generate effective policies.