Published on RNO/ITS - PIARC (World Road Association) (https://rno-its.piarc.org)


Systems Approach to ITS Design

The complexities of transport and logistics can be approached by using systems engineering methodologies and user participation in the design work. The Road Network Operator needs to understand fully the total system structure, its dynamic characteristics and the role and responsibilities of its different road users. It is only then that a proper Human-Machine Interaction (HMI) design can be accomplished with positive user acceptance and operational success.

Analysis breaks problems into their parts and attempts to find the optimum solution. This process of breaking apart the whole, however, neglects the interrelationship between the parts – which can often be the root cause of the problem. The “systems” approach argues that in complex systems, the parts do not always provide an understanding of the whole. Rather, in a purposeful system, the whole gives meaning to the parts.

In order to tackle the sometimes contradictory interests of society and individuals, systems engineering methodologies can be applied. An ideal systems approach would start with the analyses of both the users and the problems which these user groups experience in traffic and transport. Procedures should guarantee that the results of these analyses are then used in the design process itself. The introduction of a user-oriented perspective to Intelligent Transport Systems (ITS) has similarities with the introduction of quality assurance procedures found in most industrial activities.

There are other industrial elements of the design process which have to be included in the required human factors work. They focus on the practical realisation of basic problem-solving ideas, which first have to be turned into functional concepts. A phase of implementing these concepts as user-accepted solutions follows. This often involves a trade-off between system features and system costs. The features (or benefits) comprise usability, utility and likeability – whereas the effort to learn and use the system, the loss of skills, the new elements of risk introduced, and the financial costs constitute the cost elements. The use of human factors knowledge is crucial when high usability is sought.

The difficulty of identifying variables to reliably measure all these elements is evident. The principle of user acceptance is an approach that clearly highlights all the diverging elements that could, would and should influence the design process. Simply stated, if the features are valued more highly than the costs (using a weighted criteria/cost function) the solution is acceptable and will be purchased – and hopefully used. Marketplace stakeholders – such as end-users, customers and consumers – must be involved.
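The weighted criteria/cost comparison described above can be sketched in code. This is a minimal illustration only: the criteria, ratings and weights below are invented, and any real assessment would define its own scales and categories.

```python
# Illustrative sketch of a weighted criteria/cost acceptance function.
# All category names, ratings and weights are hypothetical examples.

def acceptance_score(features, costs):
    """Return weighted benefit minus weighted cost.

    `features` and `costs` map a criterion name to a (rating, weight)
    pair, with ratings on a common scale (here 0-10).
    """
    benefit = sum(rating * weight for rating, weight in features.values())
    burden = sum(rating * weight for rating, weight in costs.values())
    return benefit - burden

features = {
    "usability":   (8, 0.4),
    "utility":     (7, 0.4),
    "likeability": (6, 0.2),
}
costs = {
    "learning effort": (5, 0.3),
    "new risks":       (3, 0.3),
    "financial cost":  (6, 0.4),
}

score = acceptance_score(features, costs)
# A positive score means features outweigh costs on this weighting.
print("acceptable" if score > 0 else "not acceptable")
```

The point of the sketch is the structure of the trade-off, not the numbers: the hard part in practice is agreeing the criteria and weights with the market-place stakeholders.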

New products or solutions in ITS are very seldom developed to solve or meet completely new problems and needs. Instead, better performance of already existing solutions is often the goal. It is also clear that old solutions and products will co-exist side-by-side with new ones. The penetration of new technology in society is often very slow and starts with people that can afford to be “modern” and the most up-to-date. Therefore the design must allow parallel operation of the old and the new, and some form of step-by-step development must be used. Other elements to consider are that the long term goals of systems for traffic and transport are usually societal, while the short term (market-oriented) are individual in that they try to create and meet an instant demand. This inherent conflict must be addressed and needs to be resolved at an early stage of the implementation process.

Role of the Road Operator

Take a Broad and User-Centred Approach to ITS

The Road Operator is well-placed to take a strategic and user-centric approach to the design and introduction of ITS. The interactive design process that is required may appear to take a great deal of time and resources but the benefits will become apparent and should quickly outweigh the upfront costs.

The Road Operator has the opportunity to work with others on ITS innovations that promote and develop integration of transport and other services – to provide additional benefits.

Invest in Piloting

Before introducing ITS or any new technology, or undertaking extensive field trials, it is invariably beneficial to pilot the system or service with a small group of users before more widespread deployment. This approach is part of user-centred design and allows any problems to be addressed, avoiding embarrassing and expensive mistakes.

Encourage Feedback About ITS

Encouraging feedback from users and providing suitable mechanisms for monitoring use of ITS allows those responsible for ITS operations to better understand the experience of both occasional and experienced users. This may show how the use of ITS is changing over time (because of changes in other parts of the transport system or the environment) and provides advanced information about necessary modifications, including possibly re-design of the ITS.

Integration of Transport and Wider Services

ITS can, if introduced through a sufficiently wide systems approach, assist users – by providing a measure of integration within and across modes of transport (for example by combining tolling, parking and public transport ticketing). It can also assist with wider common service provision between transport and other urban facilities such as energy services. These different modes of transport and wider urban services typically have different ownership, governance and objectives which present barriers to integration and enhancement of user services.

 

Transport Systems Context

The rationale behind the development of ITS is the need for high efficiency and quality in new and innovative transport services. The goal of these services is either to meet a certain need for the movement of people and goods – or to supply a specific endeavour (or activity) with the correct amounts of its necessary components at the right time and at the right location. These activities are called transport and logistics respectively. By implication, the design of these services will include modern information and communication technologies (ICT) for the exchange of information in real time and the result is an “intelligent” transport system.

From their everyday activities, people identify needs for movement between specific locations and become travellers. They engage in trip planning and, if successful, a trip plan will be created by linking a sequence of transport options to serve the journey – if necessary based on different transport means. These transport options can either be chosen from available information (in timetables, for example) or can be made known to someone (who can organise such options) as dynamic demands on transport. These travel demands and travel patterns are today normally captured by surveys and observations on a yearly basis to inform the production of static timetables.

The planning of a journey includes a matching exercise between the total travel demand and the future availability of vehicles to serve that demand and provide the transport service. Because a static plan has the characteristics of an open-loop control system – with no feedback to correct for disturbances – the matching will be successful only if no disturbances occur in the traffic process.

Advice to Practitioners

When technical systems such as ITS are put into a societal context, complexity emerges. The systems have to be designed in a cost-effective, efficient, safe and environmentally acceptable way. These objectives require design methods which make it possible to cope with system complexity.

ITS usually involves several human decision makers – and all the decision-making processes which require information about the ITS environment must be considered. Integration of network operations, transport processes and stakeholder perspectives is necessary. Analysis, design and evaluation of this complexity, and its impact on modern solutions for transport services, must be performed adequately.

Activities (or processes) in society which make use of a technical infrastructure and communications networks will be influenced by a large number of decision-makers and stakeholders. They are often geographically dispersed, have contradictory goals and act with different time horizons. In consequence, description and analysis of the interaction between technology and people in a specific class of systems can become so complex that specialist tools and techniques may be required. Expert help should be sought where necessary.

Three highly interrelated perspectives - networks, processes and stakeholders - can be usefully identified and, if combined in an operational way, can be helpful to the analysis, design and evaluation of ITS solutions.

Network perspective

The network perspective is focused on the links, nodes and elements for transport and communications which, when brought together, form the physical network and its structure. The use of technologies (and especially ICT) is important and adds complexity. This can be dealt with by breaking down the network into subnets or subsystems.

Process perspective

The process perspective is focused on the interaction between network components and the different flows of traffic or communications that can be identified in the processes. The dynamic characteristics are related to the transmission and transformation of information and related information channels. A matching between the time horizons of the control processes and the speed of information exchange is crucial for acceptable performance.

Stakeholder perspective

The stakeholder perspective is highly related to how ITS supports decision-making. The stakeholders have to interface with the processes and the networks by means of work stations, control panels, mobile or other in-vehicle units. The interface designs must be adapted to the mental models of the processes used by the stakeholders in their tasks. An appropriate filtering of information has to be introduced if the stakeholders are not to be overloaded, or disturbed by other processes or events outside of their control. A hierarchy of abstraction levels can be established. (See Users of ITS and Stakeholders)

 

Why Involve Users?

Designers often build technical systems without completely understanding the tasks to be performed. Intelligent Transport Systems (ITS) need to be designed to be both useful and usable. Being usable is not enough if the system is not first useful. Users of ITS are diverse individuals – they do not all think the same way and they can be inconsistent and unpredictable. (See Diversity of Users)

It is not surprising that it is often difficult for the designers of technology to understand exactly the real needs of their potential users, how the technology will be used and how use will change as familiarity with the system or service develops. This is particularly the case for complex systems such as ITS in the broader transport context. The goal of good design is for complexity to be made to appear simple or intuitive to its users.

Advice to Practitioners

The complexity of ITS processes and their dynamics makes it essential to use sound design principles for robustness. For this reason, ITS require feedback of information from different process states using appropriate sensors. The feedback will also provide the input to adaptive control algorithms for decision-making by users – and make the processes less sensitive to disturbances. The complex nature of transport systems involving the interaction of many different systems and services is clear from the information feedback loops and the varied timescales used in the different decision-making processes.

Human error becomes virtually inevitable with the large number of different links and connections in the networks and processes of modern transport systems. In the transport domain one of the most critical situations is that of driver-vehicle interaction – as mistakes, slips and lapses in the primary driving tasks will have safety implications. (See Human Performance)

As well as reducing critical errors, there are many other practical reasons for involving users:

  • people are diverse and inconsistent. By understanding their characteristics and taking them into account in the design – the effectiveness and efficiency of the ITS is likely to increase
  • there can be considerable benefits in drawing on the creativity, ideas and expertise of users in the design, development and introduction process
  • users sometimes have the ability to disrupt or reject ITS, which can cause service interruptions or other problems. Appreciating and responding to negative issues during development is only possible if users are involved
  • safety is an important issue for Road Network Operators and they may have a responsibility of care towards workers and transport users. Understanding user interaction with ITS can help the promotion of safety

The overall message is that ITS should not be designed, developed or introduced without involving those who must use it. A holistic approach is required which acknowledges and accounts for the interactions between ITS and its users.

 

Reference sources

ISO 9241-210:2010 Ergonomics of human-system interaction -- Part 210: Human-centred design for interactive systems

Norman, Donald (1988). The Design of Everyday Things. New York: Basic Books. ISBN 978-0-465-06710-7.

BS ISO/IEC 25010:2011 ISO/IEC 25010:2011(E) Systems and software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — System and software quality Models

User Centred Design

User centred design (UCD) is a well-tried methodology that puts users at the heart of the design process, and is highly recommended for developing ITS products and services that must be simple and straightforward to use. It is a multi-stage problem-solving process whereby designers of ITS analyse how users are likely to use a product or service – but also test their assumptions by analysing actual user behaviour in the real world.

A key part of UCD is the identification and analysis of users’ needs. These have to be completely and clearly identified (even if they are conflicting) in order to help identify the trade-offs that often need to happen in the design process.

User-centred design (UCD) aims to optimise a product or service around how users can, want, or need to use it – rather than forcing users to change their behaviour to accommodate the product or service.

The ISO 9241-210 (2010) standard describes six key principles of UCD:

  • the design is based upon an explicit understanding of users, tasks and environments
  • users are involved throughout design and development
  • the design is driven and refined by user-centred evaluation
  • the process is iterative
  • the design addresses the whole user experience
  • the design team includes multidisciplinary skills and perspectives

As shown in the figure below, the main – but iterative – steps in UCD are Plan, Research, Design, Adapt and Measure.

Steps in User Centred Design


A key part of the research stage for UCD is collecting and analysing user needs. Quality models such as ISO/IEC 25010 provide a framework for this activity.

Users include the following:

  • primary users who interact with the ITS to achieve the primary goals
  • secondary users who provide support (for example data providers and system managers or administrators)
  • indirect users who are affected by the ITS

Some important questions related to user needs are:

  • who are the users?
  • what are the users’ tasks and goals?
  • what functions do the users need?
  • what are the users’ experience levels with similar systems?
  • what information might the users need, and in what form do they need it?
  • how do users think the system should work?
  • what are the extreme environments?
  • is the user multitasking?
  • does the user have particular requirements of the interface?

User needs can also be defined according to the following list of attributes for an ITS:

  • effectiveness
  • efficiency
  • satisfaction (usefulness, trust, pleasure, comfort)
  • freedom from risk (safety, health, environmental, economic)
  • reliability
  • security
  • coverage of a range of contexts (coverage, flexibility)
  • learnability
  • accessibility
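The attributes above can serve as tags when compiling user needs, making conflicts and coverage gaps easier to spot. The following is a minimal sketch of such a record; the need statements and identifiers are invented examples, not part of any standard.

```python
# Hypothetical user-needs records tagged with quality attributes from the
# list above. IDs and statements are invented for illustration.

needs = [
    {"id": "UN-01",
     "statement": "As a driver, I need congestion warnings early enough to divert.",
     "attributes": ["effectiveness", "efficiency"]},
    {"id": "UN-02",
     "statement": "As an occasional user, I need the interface to be usable without training.",
     "attributes": ["learnability", "accessibility"]},
]

# Index the needs by attribute so coverage of each attribute can be checked.
by_attribute = {}
for need in needs:
    for attr in need["attributes"]:
        by_attribute.setdefault(attr, []).append(need["id"])

for attr, ids in sorted(by_attribute.items()):
    print(f"{attr}: {', '.join(ids)}")
```

An attribute with no needs recorded against it is a prompt to go back to the users, not evidence that the attribute is irrelevant.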

Advice to Practitioners

The main advice is to adopt a user-centred design (UCD) process that begins with collecting and analysing user needs. The design of the ITS product or service can then be undertaken making use of all the advice and guidance provided below. The product or service should then be trialled and developed with actual users, taking account of their feedback. (See Piloting, feedback and monitoring)

Techniques for Collecting User Needs

Assistance from human factors professionals should be sought where necessary as there are many techniques available to collect user needs. Some of the main techniques include:

  • observation/analysis – this requires an external observer to note what activities users actually undertake – and to infer their needs from this
  • walk-throughs – these are deliberate executions of an activity, noting all the steps and documenting exactly the circumstances, for later analysis and derivation of user needs at each stage
  • questionnaires and surveys – these can be written or use other forms of information gathering to engage users directly or indirectly about their needs. Some use scales and optional choices and some have free-form areas for user response – but the design depends on the requirements of the user needs collection process
  • Delphi - this method helps to reach a consensus amongst users engaging the same group in several rounds of surveys. The first round generates initial needs. Subsequent rounds enable participants to see and comment on all ideas – until a general consensus emerges
  • focus groups – these comprise semi-structured discussions with a trained facilitator and a group of users – where a second observer makes notes of the sessions
  • group techniques - various structured techniques can be used to help explore needs and activities including, for example, “5W+H” (asking who, why, what, when, where and how – see Context of ITS Use) and “Progressive Why” which explores a particular activity or need in increasing depth
  • use of personas – development of fictional characters with the characteristics of the users, from which user needs may be derived
  • scenarios – development of a fictional story about the "daily life of", or a sequence of events, with the primary user group as the main characters
  • use cases – these are constructed to describe a specific event or unfolding situation and may include very fine details and interactions between the users and the ITS

Undertaking User Needs Analysis

Assistance from human factors professionals should be sought where necessary. The following general guidelines are useful in compiling user needs and undertaking analysis for use in ITS design:

  • always express findings from the user’s perspective
  • cross-relate user needs to each other (there may be conflicting needs)
  • allocate sufficient time during the development process to check and validate the user’s needs
  • compile all the user needs into a single set of user requirements
  • word the requirements precisely and ensure that all categories of human-related requirements are covered
  • create test statements to validate the user requirements, the concept and the implementation
  • ensure that all user requirements are developed before being transposed into system requirements
  • prior to finalising the ITS design, validate the user requirements with actual users
  • accept that there still may be contradictory requirements
  • try to understand the nuances of the requirements and ensure that these are reflected in the precise wording of the requirements
  • keep asking the users until there is a clear picture of their actual requirements

 

Reference sources

ISO 9241-210:2010 Ergonomics of human-system interaction -- Part 210: Human-centred design for interactive systems

Norman, D. (1988). The Design of Everyday Things. New York: Basic Books. ISBN 978-0-465-06710-7

BS ISO/IEC 25010:2011 ISO/IEC 25010:2011(E) Systems and software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — System and software quality Models

ISO 13407:1999 Human-centred design processes for interactive systems (superseded by ISO 9241-210)

Human Tasks and Errors

Analysis of tasks and errors is a hugely important activity for anyone seeking to understand how users interact with ITS or wishing to create the environment in which the interaction will take place – between users or between a user and an object/activity. Task analysis requires understanding and documenting all aspects of an activity in order to create or understand processes that are effective, practical and useful. Typically a task analysis is used to identify gaps in the knowledge or understanding of a process. Alternatively it may be used to highlight inefficiencies or safety-critical elements. In both cases the task analysis provides a tool with which to perform a secondary function – usually design-related. Error analysis is a specific extension of a task analysis and is about probing the activities identified by the task analysis – to determine how and why a user might make an error, so that the potential error can be designed out or the consequences can be mitigated.

Task analysis

The person conducting a task analysis first has to identify the overall task or activity to be analysed, and then to define the scope of the analysis. For example, it may be that they wish to examine the tasks performed by an operator at a monitoring station, but are only interested in what the operator does when they are seated at an active station. This would be the defined scope of the analysis.

Within this overall activity all the key subtasks that make up the overall activity must be identified. It is up to the person undertaking the analysis to decide what represents a useful and meaningful division of subtasks. This is something that comes partly from experience of performing such analyses and partly from understanding of the activity being analysed. Crucially, each subtask should have a definable start and end point.

With the subtasks created, the investigator then defines a series of rules and conditions which govern how each subtask is performed. For example, it may be that the monitoring station operator has at their workstation a series of monitoring systems (subtasks) where each subtask is distinct and separate – and it may be that the operator must perform the subtasks in a pre-defined sequence. The investigator must specify the rules governing how each subtask relates to the others. For example: “perform subtasks A and B alternately. At any time, perform subtask C – as and when required”.

The investigator would then look at each subtask in turn and perform a similar process to the one described above. The subtasks that make up subtask A would be identified and the rules governing their execution defined. This process of hierarchical subdivision continues until either there is no more meaningful division of tasks that can be performed – or the investigator has reached a level of understanding sufficient to inform the design.

Purpose of analysis

Performing a task analysis allows a researcher to identify the following:

  • areas where they have a gap in understanding or knowledge
  • subtasks that do not have clearly defined procedures
  • ambiguities in the division of labour and responsibilities
  • safety critical processes or those integral to system operation
  • inefficiencies at subtask or overall task level

An error analysis builds on the task analysis and requires a similar approach. It looks at all the different ways in which an operator or outside agent could make an error in each subtask identified. For example, one subtask for the operator of a monitoring station may be to activate an alert system. Errors could include (among others) selecting the wrong incident response plan, pressing the wrong button, failing to push a lever all the way, looking at the wrong screen or dial, or activating the system at the wrong time. Typically such an analysis would be performed by considering each of a list of possible error mechanisms in turn, to avoid missing potential errors. Again, a combination of experience in conducting error analysis and an understanding of the workings of the overall activity is useful.

Advice for Practitioners

Before conducting a task or error analysis it is important to define what the output of the analysis is to be used for, as this will influence how the analysis is performed. For example, it may be that the investigator is only interested in particular subtasks and activities – or that a particular level of detail is required, below which the analysis is of no use and beyond which further effort is wasted. Knowing the level of detail required is a key parameter as, without this cut-off point, the analysis could go on almost indefinitely. A basic task analysis is a useful way for anyone to gain a clearer picture of any working environment. For more detailed analyses, or situations where the task analysis is to provide the foundation for a larger set of activities, it is advisable to engage the services of experienced practitioners.

Task analysis

The following is a basic overview of the key principles/stages:

  • establish what it is you wish to know and how you intend to use the information
  • identify and clearly define the scope of the task/environment to be analysed

break the overall task down into stages:

  • identify key subtasks within the parent activity
  • establish the rules that govern how and when each subtask starts and finishes
  • repeat the previous step for each of the subtasks identified
  • continue this process until a sufficient level of detail has been reached (specifically, when the information is available to answer the original question)
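The hierarchical breakdown described in the steps above can be recorded as a simple tree of subtasks, each with a plan stating the rule that governs its children. The sketch below is one possible representation; the task names and plan are invented, based on the monitoring-station example used earlier.

```python
# A minimal sketch of a hierarchical task analysis record as a task tree.
# Task names and the plan rule are illustrative examples only.

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    plan: str = ""                 # rule governing how the subtasks are performed
    subtasks: list = field(default_factory=list)

    def walk(self, depth=0):
        """Yield (depth, task) pairs in hierarchical order."""
        yield depth, self
        for sub in self.subtasks:
            yield from sub.walk(depth + 1)

monitor = Task(
    "Operate monitoring station",
    plan="Perform A and B alternately; perform C as and when required",
    subtasks=[
        Task("A: scan traffic camera feeds"),
        Task("B: check sensor status board"),
        Task("C: respond to alarms"),
    ],
)

# Print the hierarchy with indentation showing the level of subdivision.
for depth, task in monitor.walk():
    print("  " * depth + task.name)
```

Each subtask in the tree can itself be given a plan and children, so the decomposition stops exactly where the analysis has answered the original question.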

Complementary error analysis

A task analysis is often useful in its own right. It may also be useful to conduct a complementary error analysis. Again it is best to use an experienced professional for large-scale analyses – but for a basic assessment, the following method can be applied:

  • identify the level of detail required and decide how far the subtasks need to be broken down
  • examine each subtask to determine all the possible ways in which a user might perform an error in carrying out that subtask (this requires the practitioner to understand the task being analysed)
  • consider what external factors might make such errors more likely or more severe (performance-shaping factors)
  • determine what precursor events, actions or omissions are required to happen in order to make the error possible or the performance shaping factors more relevant
  • for each error, identify ways to reduce the potential for error or to mitigate its consequences – bearing in mind that it is preferable to prevent an error from occurring than to try to mitigate the consequences

 

Reference sources

Wilson, J. and Corlett, N. (2005). Evaluation of Human Work (third edition). Taylor and Francis.

Piloting, Feedback and Monitoring

Before introducing ITS or any new technology, or undertaking extensive field trials, it is invariably beneficial to pilot the system or service with a small group of users before more widespread deployment. This approach is part of User Centred Design (See User Centred Design) and allows any problems to be addressed, avoiding embarrassing and expensive mistakes.

Encouraging feedback from users and providing suitable mechanisms for monitoring use of ITS allows those responsible for its operation to better understand the experience of both occasional and frequent users. This may show how use of the ITS is changing over time (because of changes in other parts of the transport system or the environment) and provides advanced information about necessary modifications, including possibly re-design of the ITS (See Evaluation)

Piloting

Even apparently simple tasks such as administering a questionnaire should be piloted. This is because the design of the questionnaire (and the management processes around it) may be found deficient or ambiguous when exposed to actual users. Piloting is important and should not be omitted.

Pilot studies should be well designed with clear objectives, clear plans for collecting and analysing results, and explicit criteria for determining success or failure. Pilot studies should be analysed in the same way as full scale deployments.

In general, the benefits of a pilot study can be identified under four broad headings:

  • process - this confirms (or not) the feasibility of implementation of the full ITS deployment (for example including user uptake and data processing capability)
  • resources - this helps evaluate time and budget issues in full deployment such as the power and communication costs associated with new hardware
  • management - this covers potential issues including personnel requirements and data security
  • technical - this provides initial evidence of technical system performance but also, and more importantly, that the outcomes (such as safety, congestion reduction, information provision) are as intended and that behavioural changes by road users are as expected

Monitoring

Monitoring the use of ITS is important because users may not react as anticipated and their behaviour may adapt and change over time. Since Road Operators are particularly concerned about changes in behaviour that may decrease safety, any (so-called) “risk compensation” needs to be identified. This occurs when changes, such as the introduction of ITS, make users feel safer and they respond (possibly unconsciously) by adopting riskier behaviours.

Monitoring the use of ITS can take many forms. Some examples are:

  • the number of drivers diverting from a route in response to a Variable Message Sign warning of congestion ahead
  • the percentage of drivers using a hand-held mobile phone
  • the percentage of drivers opting to use free-flow tolling tags (rather than paying cash)
  • the time taken for a control room operative to implement a particular road closure measure
  • usage by public transport users of a free bus travel planning application

Feedback

Feedback is information that comes directly from users of ITS about the satisfaction or dissatisfaction they feel with the product or service. Feedback can lead to early identification of problems and to improvements.

When the ITS users are “internal”, such as traffic control room staff or on-site maintenance workers – encouraging feedback has to be addressed as part of the organisational culture. Ideally a “no blame” culture will exist that allows free expression about what works well and does not when ITS is incorporated within wider social and organisational settings. Some industries have an anonymous feedback channel to allow comment on systems and operations. Explicit and overt mechanisms to respond to feedback help encourage further contributions from ITS users.

Feedback from road users about ITS has to be carefully interpreted as it may relate to the wider transport system of which ITS is just the visible part. For example, a complaint about the setting on variable speed limit signs may arise because information about incident clearance is not speedily transmitted to a traffic control centre.

Many organisations publish a service level promise or “customer charter” and this may include feedback mechanisms. Road Operators may choose to implement feedback channels that are passive (such as publishing address/phone/email/web address) or adopt more active mechanisms (such as questionnaires and surveys).

 

Measuring Human Performance

Overall evaluation of ITS products and services is an important activity because the performance of the road user will depend crucially on the usability of the ITS. A benefit of improving the ease and efficiency of ITS technology is increased user satisfaction. This can provide business advantages, particularly when users have a choice of ITS products or services. Poor usability of ITS, such as a poor user interface, or inadequate and misleading dynamic signage, may have safety implications in the road environment. Good usability will help to manage and predict road user behaviour and so help increase road network performance. For all these reasons, the performance of ITS in terms of its usability needs to be measured. (See Evaluation)

Measurement of Usability

Usability measurement requires assessment of the effectiveness, efficiency and satisfaction with which representative users carry out representative tasks in representative environments. This requires decisions to be made on the following key points:

  • what environment (context of use) is relevant
  • which metrics (measures) will be taken
  • what tools are required – such as stop-watch, questionnaire, driving simulator
  • what method will be used to make the measurements

Steps in performance measurement:

  • define the ITS to be tested (this may be a system in-use or a prototype/pilot)
  • decide what type of user will be considered (first time/infrequent or experienced user)
  • define other aspects of the typical context of use (See Context of Use)
  • decide on the particular functions or aspects of the ITS to be evaluated and the context for the evaluation
  • decide on the usability goals to be tested and their relative importance
  • prepare a plan for measuring a set of metrics (that assess the usability goals)
  • carry out the user tests
  • analyse the test data to derive and interpret the metrics
  • record the results and draw conclusions

The process of measuring human performance when the driver is interacting with ITS (particularly when using information and communication devices) can yield safety benefits – although this form of human performance measurement presents particular challenges.

Multimedia:  Understanding Driver Distraction

Safety of Drivers’ ITS Interaction

Driver distraction (not focussing on the road ahead and the driving task) is an important issue for road safety. ITS products, such as information and communication devices, can greatly assist the driver (for example by indicating suitable routes) but ITS can also be an additional source of distraction. Distraction can make drivers less aware of other road users such as pedestrians and road workers and less observant of speed limits and traffic lights. (See Road Safety)

Measuring driving performance when interacting with ITS requires specialist equipment and expertise. Measurements made in laboratory settings and driving simulators may not be representative of real driving behaviour. This is because in real driving contexts drivers can choose when to interact (or not) with devices – and can modify their driving style to compensate to some extent for other demands on their attention. On-road measurements have to be designed to be unobtrusive and representative. Field Operational Tests (FOTs) can be designed accordingly and used to investigate both mature and innovative systems. FOTs can involve both professional and ordinary drivers according to the focus of investigation. A “naturalistic” driving study aims to unobtrusively record driver behaviour. Analysis of the drive record is used to identify safety-related events such as distraction, although the interpretation of results can be problematic and controversial.

The weight of scientific evidence points to distraction being an important safety issue. Many governments and Road Operators have sought to restrict drivers’ use of ITS while driving. There are different national and local approaches ranging from guidelines and advice – to bans on specific activities or functions (such as texting or hand-held phone use). 

Deciding on Goals and Metrics

Usability goals for an ITS product or service should be expressed in terms of usability attributes – such as easy to learn, efficient to use, easy to remember, few errors, and subjectively pleasing. Deciding on the relative importance of these goals depends on the ITS and the context, but it helps to focus future evaluations on the most important aspects.

Not all performance measurements have to be quantitative, but some simple examples of quantitative performance metrics that might be of interest to Road Operators are:

  • the time taken by a control room operative to post a road closure on a message sign
  • the number of toll charge queries collected at automated toll booths
  • the ratio between successful and unsuccessful interactions made by users of a bus ticket machine
  • the percentage of drivers seen using a hand-held mobile phone
  • the number of ITS features that an ITS user can remember during a debriefing after a test
  • the frequency of use of the manuals and/or the help system, and the time spent using them by a VMS maintenance operative

It is also important to collect qualitative data. This can help explain the reasons behind a particular performance and may uncover the user's mental processes and beliefs about how the ITS operates (which may be correct or incorrect).

Addressing more strategic performance goals such as “safety” is a wider question in which the ITS has to be considered in the broader transport context. (See Road Safety)

Conducting Performance Testing

Performance testing can be complex. Consult human factors professionals where necessary and:

  • always conduct pilot testing to make sure that the tools and the techniques for data collection work as expected (See Piloting, Feedback and Monitoring)
  • if relevant, testing can be video-recorded so data can be analysed from the recording.

Data Analysis and Conclusions

Simple descriptive statistics (such as average values and spread) may be sufficient to characterise the particular performance metric – but:

  • for more complex analysis, consideration should be given to the extent to which the sample of tested users represents the whole user population
  • if the metric is to be compared against a benchmark, some indication of measurement error or confidence should also be estimated
  • the results from the individual metrics, as well as any qualitative data, should be considered as a whole when deciding if the usability goals have been met
  • data analysis can be complex so consult human factors professionals where necessary

 

Reference sources

US DOT website on driver distraction: http://www.distraction.gov/

Bevan, N. and Macleod, M. (1994). Usability measurement in context. Behaviour and Information Technology, 13, 132-145.

Wilson, J. and Corlett, N. (2005). Evaluation of Human Work. CRC Press. ISBN 0-415-26757-9

Sanders, M. and McCormick, E. (1993) Human Factors in Engineering and Design. McGraw-Hill, Inc. ISBN 0-07-054901-X

FOTnet project: http://fot-net.eu/

TeleFOT project: http://cordis.europa.eu/docs/projects/cnect/7/224067/080/deliverables/001-telefot.pdf


Source URL: https://rno-its.piarc.org/en/systems-and-standards-human-factors/systems-approach-its-design