ITS evaluation must be well thought through and properly targeted if it is to be useful. The methodology should take into account the ITS scheme objectives, user needs and stakeholder expectations. It should use both quantified and qualitative measures that match the needs of decision-makers.
At an ‘overview’ level, the evaluation should identify both the impacts that were planned and the unintended impacts – establishing qualitatively the nature of the changes which have occurred and their possible causes. At a more detailed level, the evaluation should quantify the changes after the scheme is implemented, assess the scale of those changes and the level of confidence in them – and identify the mechanisms leading to significant changes. (See Guidelines and Techniques)
A systematic approach is needed for the monitoring stage of the evaluation cycle to ensure that different types of impacts (benefits, disbenefits and costs) are identified and measured. Developing an “Evaluation Plan” is key to this. It is a technical document – separate from an “Evaluation Management Plan”, which covers resources, timing, coordination with other activities, and data management. (See Evaluation Methods)
The evaluation plan should be drafted and agreed before the scheme is implemented, so that the data needed to measure the situation prior to the deployment can be defined and collected in advance. The time frame available for collecting ‘before’ data may also influence the timing of some of the ‘after’ data collection needed for comparative purposes.
Consult the ‘Checklist’ of issues that need to be addressed when preparing an evaluation plan. (See Evaluation Methods) The evaluation plans used for other studies – and other guidelines – can provide useful input in determining all the elements that need to be addressed.
An evaluation plan needs to be well specified:
Focus on identifying the policy objectives that the ITS scheme is intended to address, and on assessing the extent to which the scheme contributes to reaching those objectives.
Define the ‘base case’ or ‘before’ ITS deployment situation – to enable comparisons with the ‘after’ deployment situation.
Define the area to be covered by the evaluation. A map is helpful.
Define the timing and duration of the evaluation – both before and after deployment. Ensure that the timing minimises external effects (such as seasonal effects). Ensure that the duration is long enough for the full impacts to be identified – not just the initial effects whilst users are adjusting to the ITS deployment.
List the impacts expected, using information from other schemes, preliminary investigations and the scheme appraisal. In some cases it may be possible to list the expected impacts in quantitative terms – but if there is no evidence of impacts from previous schemes, a qualitative description of expected impacts will be needed. Evaluation toolkits and databases of evaluation results are useful sources. (See Improving Performance)
Prepare a table listing the objectives, the indicators for assessing them and the data sources for the indicators. (See Evaluation Methods)
Define the evaluation methods to be used. Consider using more than one approach to provide a more ‘rounded’ view of the impacts and to confirm conclusions. (See Evaluation Methods) In the case of a complex evaluation, draw a ‘flow chart’ showing how all of the different sources of information fit together to inform the evaluation objectives.
Consider the experimental design principles to be used. For example, compare the performance of a package of ITS measures with a ‘control’ that is as similar as possible but has no ITS element. It is desirable to make comparisons using more than one location to take account of other variables.
Ensure that the design will capture any unintended side effects of the ITS measures as well as the expected impacts. Review the results of other studies to improve the design.
Define the details of the data to be gathered – before and after implementation – to inform each of the indicators. Include, for each indicator, information on sample sizes, statistical confidence levels, timing, and survey locations.
Review how external factors could influence the evaluation, and plan the evaluation to control for them. For example, changes in the transport network of a neighbouring area could have knock-on effects in the area in which the ITS is to be deployed. Wider-scale changes, such as growth in vehicle ownership or a downturn in the economy, will influence the demand for transport and could ‘confound’ the evaluation.
List the performance criteria which can be used to assess the monitoring and evaluation. These can be included in the supply contract for commissioning the monitoring and evaluation.
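The table of objectives, indicators and data sources described above can be sketched as a simple data structure. This is a hypothetical illustration only – the objectives, indicators and data sources shown are invented examples, not prescribed content:

```python
# Hypothetical example of an evaluation plan's objective/indicator table.
# The entries are illustrative; a real plan derives them from the scheme
# appraisal and the policy objectives the ITS scheme is meant to address.
indicator_table = [
    {"objective": "Reduce congestion",
     "indicator": "Average peak-hour journey time (minutes)",
     "data_source": "Automatic number plate recognition surveys"},
    {"objective": "Improve safety",
     "indicator": "Personal injury collisions per year",
     "data_source": "Police collision records"},
    {"objective": "Improve traveller information",
     "indicator": "User satisfaction score (1-5)",
     "data_source": "Before and after user surveys"},
]

def indicators_for(objective):
    """Return the indicators defined for a given objective."""
    return [row["indicator"] for row in indicator_table
            if row["objective"] == objective]
```

Keeping the table in a structured form makes it easy to check that every objective has at least one indicator and a defined data source before data collection begins.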
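The ‘control’ comparison described in the experimental design element above is often analysed as a difference-in-differences: the change at the treated site minus the change at the control site, which removes trends common to both (such as seasonal or economic effects). A minimal sketch, using invented journey-time figures:

```python
def difference_in_differences(treated_before, treated_after,
                              control_before, control_after):
    """Estimate the scheme impact as the before/after change at the
    treated site minus the change at the control site, so that trends
    affecting both sites (e.g. seasonal effects) cancel out."""
    mean = lambda xs: sum(xs) / len(xs)
    treated_change = mean(treated_after) - mean(treated_before)
    control_change = mean(control_after) - mean(control_before)
    return treated_change - control_change

# Hypothetical journey times (minutes) at a treated and a control corridor
impact = difference_in_differences(
    treated_before=[32, 30, 31], treated_after=[27, 26, 28],
    control_before=[25, 24, 26], control_after=[24, 25, 25],
)
```

A negative result here indicates a journey-time reduction attributable to the scheme rather than to background trends; using more than one control location, as the guidance suggests, strengthens this attribution.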
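The sample sizes and statistical confidence levels mentioned above can be checked with the standard two-sample formula for detecting a difference in means. This is a sketch under common default assumptions (roughly 95% confidence and 80% power, expressed as the z-values 1.96 and 0.84); the figures in the usage example are hypothetical:

```python
import math

def required_sample_size(sigma, delta, z_alpha=1.96, z_beta=0.84):
    """Sample size per group needed to detect a difference in means of
    `delta`, given a standard deviation `sigma`, using the standard
    two-sample formula n = 2 * (z_alpha + z_beta)^2 * sigma^2 / delta^2.
    Defaults correspond to ~95% confidence and ~80% power."""
    n = 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2
    return math.ceil(n)

# e.g. to detect a 2-minute journey-time change where the
# day-to-day standard deviation is about 5 minutes:
n = required_sample_size(sigma=5.0, delta=2.0)
```

Running this kind of calculation per indicator, before data collection starts, shows whether the planned ‘before’ and ‘after’ surveys are large enough to detect the impacts the scheme is expected to produce.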