I. INTRODUCTION
The Korea Ministry of National Defense (MND) divides the evaluation of defense informatization into the evaluation of informatization policy, the evaluation of IT projects, and the evaluation of the level of informatization in organizations [1]. The evaluation of informatization policy addresses the various policies in the field of defense informatization. The evaluation of IT projects covers the projects under way in the defense sector, such as information systems development, IT procurement, and information systems operation projects. The evaluation of the level of informatization assesses how far informatization has progressed in defense organizations and institutes such as the army, navy, and air force [2].
Within the evaluation of defense informatization projects, there are a pre-project evaluation for selecting the IT projects to invest in, an in-project evaluation for managing IT projects effectively, and a post-implementation evaluation for measuring whether an IT project achieved its planned performance objectives [1].
The MND has a directive related to defense informatization [3]. However, the directive does not describe a specific evaluation method for informatization projects, so evaluations are conducted by experts without a firmly consistent method for executing the directive. Although many studies (e.g., [1]) have developed evaluation systems and measurement methods for the defense environment, agreement on a formalized method remains incomplete. To overcome this limitation, an evaluation system that users can easily understand and use is needed.
This study first reviews existing work related to evaluation systems. It then proposes an evaluation system for the defense environment, together with evaluation metrics and their measurement methods, for the post-implementation evaluation stage of an informatization project. The last section presents conclusions and future work.
II. RELATED WORKS
DeLone and McLean [4] developed an information systems (IS) success model (Fig. 1) from existing studies on IS performance. The model states that system quality and information quality affect both the use of an information system and user satisfaction, that use and user satisfaction affect the individual performance of the information system, and that individual performance in turn affects organizational performance. From the viewpoint of the IS success model, evaluating an information system therefore requires measuring system quality, information quality, use, user satisfaction, and the individual and organizational performance of the system.

DeLone and McLean's IS success model [4] attracted considerable interest from IS researchers and was applied in many domains, so empirical studies using the model accumulated over time, and researchers also pointed out its limitations. DeLone and McLean [5] reviewed these cumulative empirical studies and presented an updated IS success model (Fig. 2). They added service quality and intention to use to the existing model, along with feedback paths from net benefits (performance) to intention to use and user satisfaction. From the viewpoint of the updated model, an information system should be evaluated by measuring system quality, information quality, service quality, intention to use, use, user satisfaction, and net benefits (performance).
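To make these relations concrete, the sketch below encodes the constructs and influence paths of the updated model as a small directed graph in Python. It is only an illustration of the relations described above; the function and variable names are the author's own, not part of [5], and the published diagram contains a few additional interrelations.

```python
# A rough sketch of the updated IS success model [5] as a directed graph of
# constructs and influence paths. Edges follow the description in the text;
# the published diagram contains a few additional interrelations.

INFLUENCES = {
    "system quality":      ["use", "user satisfaction"],
    "information quality": ["use", "user satisfaction"],
    "service quality":     ["use", "user satisfaction"],
    "intention to use":    ["use"],                        # intention precedes actual use
    "use":                 ["net benefits"],
    "user satisfaction":   ["net benefits"],
    "net benefits":        ["intention to use", "user satisfaction"],  # feedback paths
}

def direct_influencers(construct: str) -> list[str]:
    """Return the constructs that directly influence the given construct."""
    return [src for src, targets in INFLUENCES.items() if construct in targets]

if __name__ == "__main__":
    print(direct_influencers("net benefits"))       # ['use', 'user satisfaction']
    print(direct_influencers("user satisfaction"))  # quality dimensions plus feedback from net benefits
```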

For measuring the performance of IT investment and reporting results, the United States General Accounting Office (GAO) suggested a guide that recommends to (1) use an IT results chain, (2) use the balanced scorecard (BSC), (3) develop performance indicators and results at each level of the organization, (4) develop easy-to-understand performance indicators together with data gathering and analysis capability, and (5) strengthen IT processes to improve the achievement of the organizational mission [6]. In the guideline, the performance of an informatization project is measured with the BSC.
In addition, GAO suggested the select-control-evaluate model as a central tenet of the federal approach to IT investment management [7]. During the select phase, IT projects are screened, ranked, and chosen. In the control phase, progress is monitored and corrective actions are taken. In the evaluate phase, interviews are conducted, adjustments are made, lessons learned are applied, and it is checked whether the information systems delivered what was expected.
In Korea, government evaluation is conducted based on the Framework Act on the Evaluation of Government Activities [8]. It is divided into self-evaluation, in which a central administrative agency evaluates its own main policies, and specific evaluation, which covers the policies that the Prime Minister needs to manage for the central government in an integrated manner.
Another axis of national-level evaluation is the financial business performance management system operated by the Ministry of Strategy and Finance based on Article 8 of the National Finance Act [9]. As a performance monitoring system, an organization derives its strategic goals and performance targets, develops performance indicators that can measure actual achievement in its performance plans, and verifies them through performance reports. It is an autonomous evaluation method in which the executing department evaluates its own financial programs, and the evaluation results, confirmed and checked by the Ministry of Strategy and Finance, are used for financial management. In addition, there is program evaluation, which suggests alternatives by analyzing the effectiveness of a program in depth using external experts and statistical techniques.
The MND evaluates IT projects according to the Defense Informatization Task Directive [3], which specifies the evaluation subject, principles, types, timing, and items to be evaluated. The evaluation items consist of four questions, including: Has the planned performance met the target? As a result of the project evaluation, is the project being performed effectively? Has efficiency in achieving the performance goals improved? However, although the directive lists the evaluation items, it does not present a specific evaluation method.
III. EVALUATION SYSTEMS FOR DEFENSE IT PROJECT
As shown in Table 1, the evaluation system for the post-implementation evaluation stage assesses an informatization project from the viewpoints of achievement of performance, compliance with the project plan, project management, and economic validity. Achievement of performance measures the degree to which the performance objectives (indicators) proposed in the informatization project plan were achieved. Compliance with the project plan measures whether the project cost was spent according to the budget in the project plan and whether the planned project period was observed. Project management confirms that the project manager followed the regulations, guidelines, and procedures related to project management in the defense environment, resolved all findings from audits, in-project evaluations, review meetings, and tests within the project period, and that reasonable and reliable evidence of this exists. Finally, economic validity uses the benefit-cost ratio: it measures the economic validity of the informatization project by comparing the actually executed cost with the accomplished performance (actual benefit), rather than the planned budget or performance targets in the project plan.
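As a compact, purely illustrative summary of Table 1, the snippet below records the four viewpoints and what each of them checks; the wording is abridged from the paragraph above and the data structure is the author's, not prescribed by the directive.

```python
# Illustrative summary of the post-implementation evaluation viewpoints
# described above (Table 1); wording abridged, structure assumed.

POST_IMPLEMENTATION_VIEWPOINTS = {
    "achievement of performance":
        "degree to which the performance objectives (indicators) in the project plan were met",
    "compliance with the project plan":
        "whether the project cost stayed within the planned budget and the planned period was observed",
    "project management":
        "whether regulations, guidelines, and procedures were followed, findings were resolved "
        "within the project period, and reliable evidence exists",
    "economic validity":
        "benefit-cost ratio of the accomplished performance (actual benefit) against the actually executed cost",
}

for viewpoint, description in POST_IMPLEMENTATION_VIEWPOINTS.items():
    print(f"- {viewpoint}: {description}")
```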
In the evaluation system, the evaluation items are measured through evaluation metrics. Table 2 shows the method for measuring the degree of accomplishment of the performance objectives of a project. There are two options for this metric. The first option calculates the degree of accomplishment of the performance objectives as a percentage. The second option uses a four-point scale ("No" - "Some" - "Considerably" - "Yes"). If an IT project has multiple performance indicators, weights are applied to each indicator to obtain a weighted average; an illustrative calculation sketch follows Table 2.
| Item | Description |
|---|---|
| Evaluation item | Accomplishment of performance |
| Metric | <Post metric #1> Accomplishment of performance objective |
| Explanation | Have the planned performance indicators met their targets? |
| Measurement method | Option #1 or Option #2, detailed below |
| Data gathering method | □ System ■ Data ■ Questionnaires □ Interview |
| Data sources | Relevant documents or data demonstrating the achievement of the performance indicators and the reliability of the data, e.g., Project Plan, Project Closure Report, Performance Report |

(Option #1) Score according to the degree of accomplishment of the performance objectives.
- Calculate the degree of accomplishment of the performance objectives as a percentage.
- If the project has several performance objectives, compute weights for the performance indicators developed for each objective and measure the overall degree of accomplishment by weighted addition.

(Option #2) Confirm the degree of accomplishment of the reasonably presented performance targets and grade it on a four-point scale: No - Some - Considerably - Yes.
- Mark "Yes" when all of the following criteria are satisfied:
  a. "Yes" in <Ex metric #2-3> (Table A1 in Appendix)
  b. At least 100% of the target of the performance indicator is achieved
- Mark "Considerably" in any of the following cases:
  a. "Yes" in <Ex metric #2-3> and the target of the performance indicator is achieved to a considerable degree (90-99%)
  b. "Yes" in <Ex metric #2-3> and, even though the target value of the performance indicator is achieved at 100% or more, either 1) execution was not successful because of a problem during the project, or 2) the target value was exceeded because of external factors
  c. "Yes" in <Ex metric #2-3> and, even though the target value of the performance indicator is not achieved (below 80%), the project manager managed the project appropriately according to the project plan and made active efforts to respond to external changes
- Mark "Some" in any of the following cases:
  a. "Yes" in <Ex metric #2-3> and the target value of the performance indicator is achieved to some extent (80-89%)
  b. "Yes" in <Ex metric #2-3> and, even though the target value of the performance indicator is achieved at 100% or more, the data are not trustworthy
  c. "No" in <Ex metric #2-3>, but the target value is achieved at 90% or more
- Mark "No" in any of the following cases:
  a. "No" in <Ex metric #2-3> and the target value of the performance indicator is achieved below 90%
  b. "Yes" in <Ex metric #2-3> and the target value of the performance indicator is achieved below 80%
  c. The achieved target value of the performance indicator is falsely reported or the data are manipulated
- If there are several performance objectives, each indicator is judged as "Yes", "Considerably", "Some", or "No", and the overall degree of accomplishment is calculated by weighted addition.
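The following sketch illustrates the two measurement options of Table 2 under simplifying assumptions: the indicator names, targets, actual values, and weights are hypothetical, and the four-point grading uses only the percentage bands, leaving out the exception cases that require evaluator judgment.

```python
def accomplishment_percentage(indicators: list[dict]) -> float:
    """
    Weighted degree of accomplishment (Option #1), as a percentage.
    Each indicator dict needs 'target', 'actual', and 'weight'; weights are
    normalized here, so they do not have to sum to 1.
    """
    total_weight = sum(i["weight"] for i in indicators)
    weighted_sum = sum((i["actual"] / i["target"]) * 100 * i["weight"] for i in indicators)
    return weighted_sum / total_weight

def simple_grade(pct_achieved: float, ex_metric_2_3_yes: bool) -> str:
    """
    Simplified mapping to the four-point scale of Option #2 using only the
    percentage bands; the exception cases in Table 2 (problems during execution,
    external factors, untrustworthy data, false reporting) need evaluator judgment.
    """
    if ex_metric_2_3_yes:
        if pct_achieved >= 100:
            return "Yes"
        if pct_achieved >= 90:
            return "Considerably"
        if pct_achieved >= 80:
            return "Some"
        return "No"
    # "No" in <Ex metric #2-3>: at best "Some", and only when at least 90% is achieved
    return "Some" if pct_achieved >= 90 else "No"

# Hypothetical project with two performance indicators (names, targets,
# actuals, and weights are made up for illustration).
indicators = [
    {"name": "user satisfaction score",       "target": 80, "actual": 76, "weight": 0.6},
    {"name": "processing time reduction (%)", "target": 20, "actual": 22, "weight": 0.4},
]
pct = accomplishment_percentage(indicators)   # 0.6 * 95% + 0.4 * 110% = 101.0%
print(f"{pct:.1f}% -> {simple_grade(pct, ex_metric_2_3_yes=True)}")  # 101.0% -> Yes
```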
Tables 3 and 4 show the measurement methods for the observance of the project budget and the observance of the project period, respectively, which together represent compliance with the project plan.
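Since Tables 3 and 4 are not reproduced here, the sketch below only illustrates the raw comparisons on which such scoring would be based; the function names, figures, and the ratio form are assumptions, not the metrics' official definitions.

```python
# Illustrative only: Tables 3 and 4 define the exact scoring, so this sketch
# just computes the raw plan-versus-actual ratios such scoring would start from.

def budget_observance(planned_budget: float, actual_cost: float) -> float:
    """Ratio of actual cost to planned budget (1.0 = exactly on budget)."""
    return actual_cost / planned_budget

def period_observance(planned_days: int, actual_days: int) -> float:
    """Ratio of actual duration to planned duration (1.0 = exactly on schedule)."""
    return actual_days / planned_days

# Hypothetical project: 5% over budget, finished 20 days late on a 300-day plan.
print(budget_observance(1_000_000_000, 1_050_000_000))  # 1.05
print(period_observance(300, 320))                       # 1.0666...
```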
Table 5 explains the appropriateness-of-project-management metric. The project manager has to follow the regulations, guidelines, and procedures related to national and defense project management, and there must be evidence that demonstrates this compliance.
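The appropriateness-of-project-management metric is essentially an evidence-backed checklist. The sketch below shows one hypothetical way to record it; the checklist items paraphrase the description above, not Table 5 itself.

```python
# Hypothetical checklist for the appropriateness-of-project-management metric;
# item wording paraphrases the text above, not Table 5 itself.

CHECKLIST = [
    "Regulations, guidelines, and procedures for national and defense project management were followed",
    "Findings from audits, in-project evaluation, review meetings, and tests were resolved within the project period",
    "Reasonable and reliable evidence of the above exists",
]

def management_appropriate(evidence: dict[str, bool]) -> bool:
    """The metric is satisfied only when every checklist item has supporting evidence."""
    return all(evidence.get(item, False) for item in CHECKLIST)
```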
Table 6 represents the economic validity of the project. In the ex-project stage, the project was reviewed and selected based on expected performance and estimated cost. After the project, this metric compares the achieved performance against the actual cost, which includes all cost items in the total cost of ownership.
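A minimal sketch of this benefit-cost comparison is given below. The cost breakdown and benefit figure are hypothetical, and since the text does not prescribe a decision threshold, the example only computes the ratio.

```python
# Illustrative benefit-cost ratio for the economic validity metric:
# accomplished performance (actual benefit) over actually executed cost,
# where the cost covers all total-cost-of-ownership items. Figures are hypothetical.

def benefit_cost_ratio(actual_benefit: float, cost_items: dict[str, float]) -> float:
    """BCR = actual benefit / sum of all actually executed cost items (TCO)."""
    return actual_benefit / sum(cost_items.values())

tco = {  # hypothetical cost breakdown
    "development": 800_000_000,
    "hardware and licenses": 300_000_000,
    "operation and maintenance": 400_000_000,
}
print(round(benefit_cost_ratio(1_800_000_000, tco), 2))  # 1.2
```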
IV. CONCLUSION
The evaluation system for the post-implementation evaluation stage of defense informatization projects evaluates a project from the viewpoints of achievement of performance, compliance with the project plan, project management, and economic validity. Most evaluation methods share a limitation: developing a theoretically complete evaluation system is important, but it is more important to develop and apply one that users can easily understand and apply. The evaluation system should be used continually in real projects and supplemented so that it becomes accepted by many stakeholders, including the evaluated organizations as well as the evaluators.
For future work, the proposed evaluation system needs to be used and supplemented. It should be applied to various real projects to check its usability and to develop best practices and lessons learned. As evaluation cases accumulate, if parts of the evaluation system prove unclear enough to mislead evaluators, complementary refinements should be made. Moreover, it is necessary to ensure sufficient consistency between the evaluation of the ex-project stage [9] and the post-implementation evaluation in this study, and to verify that the evaluation can proceed easily even without a great deal of evaluation expertise.