Traffic Information Benchmarking Guidelines Version 1.0 | natwg | North American Traffic Working Group

Traffic Information Benchmarking Guideline (TIBG) Version 1.0
A Unified Method for Assessing and Comparing Traffic Information Services
| Background | List of Abbreviations | 1. Introduction and General Considerations | 2. Route Selection | 3. Test Equipment | 4. Driving Behavior | 5. Data Logging and Processing | 6. Traffic Content Processing | 7. Speed Comparison | 8. Travel Time Comparison | 9. Congestion Level Comparison

This WikiSpace version is UNDER CONSTRUCTION. Please refer to the PDF below for the governing version 1.0.

Version Summary

Current Compiled Version: 1.00
Date: 04-22-2010
Highlights: Initial guidelines based on a single-trace floating car run for ground truth data collection.

PDF Version 4-22-10

Current Wiki Version Summary For Version 1.0

Editor (username of the user, or IP address of the guest, who created this revision): sbayless
Version 1.0 Wiki Revision ID 137654793 and Chapter Revision IDs

Summary Presentation: (Creative Commons Licensed)

Data Quality Surveys: (Proprietary)

Except where noted, all contributions are licensed under Creative Commons Version 3.0. See Background below for more information and the treatment of intellectual property.


Traffic Information Benchmarking Guideline (TIBG) v1.0 | Traffic Information Benchmarking Guidelines Version 1.0 | Section Revision ID: 137654793

Last Revision: Apr 27, 2010 2:18 pm


This document is published by the North American Traffic Working Group (NATWG) under a Creative Commons license. Its intent is to describe a recommended benchmarking method for evaluating the quality of traffic information estimates on a roadway network. NATWG's purpose with these guidelines is to offer the market a more unified way of assessing and comparing traffic estimates. Our hope is that applying these guidelines will help buyers of traffic information (i.e. car companies, media companies, navigation device manufacturers, fleets, roadway operators, etc.) conduct more effective audits and make better-informed purchasing decisions, which will reduce evaluation costs and improve overall market conditions. This vision hinges on the voluntary adherence of market actors to the present guidelines, and we thus seek to achieve consensus.


The North American Traffic Working Group works collaboratively to define, accept and advocate for the unique needs of North American traffic information services. NATWG seeks to develop a coordinated, proactive, market-driven implementation of traffic and travel information services and products by both influencing international standards efforts and coordinating the development of non-competitive commercial agreements.

For more information about NATWG:


The license for Traffic Information Benchmarking Guideline (TIBG) v1.0 is Attribution-Share Alike 3.0 Unported. Anyone may share the work by copying, distributing or transmitting it, or adapt it, under the following conditions:

  • Attribution — You must attribute the work in the manner specified by the author or licensor (but not in any way that suggests that they endorse you or your use of the work).
  • Share Alike — If you alter, transform, or build upon this work, you may distribute the resulting work only under the same, similar or a compatible license.

With the understanding that:

  • Waiver — Any of the above conditions can be waived if you get permission from the copyright holders, ITS America and the North American Traffic Working Group.
  • Public Domain — Where the work or any of its elements is in the public domain under applicable law, that status is in no way affected by the license.
  • Other Rights — In no way are any of the following rights affected by the license:
        • Your fair dealing or fair use rights, or other applicable copyright exceptions and limitations;
        • The author's moral rights;
        • Rights other persons may have either in the work itself or in how the work is used, such as publicity or privacy rights.
  • Notice — For any reuse or distribution, you must make clear to others the license terms of this work. The best way to do this is with a link to this web page.

The most current version of this document is linked to from

TRAFFIC INFORMATION BENCHMARKING GUIDELINES (TIBG) by J.D. Margulici, Dave McNamara, Billy Bachman, Matt Lindsay, Kevin Lu, Chris Scofield, Shawn Turner, Steven H. Bayless, and Len Konecny is licensed under a Creative Commons Attribution-Share Alike 3.0 United States License. Permissions beyond the scope of this license may be available from the copyright holders.


The first released version of this document was developed over the course of about one year, starting in January 2009. Initially, a 'requirements' committee was assembled under the leadership of David McNamara (MTS, LLC). Seven organizations[1] were represented and agreed to share their current practices for conducting traffic information quality benchmarks. This effort evolved into a task force created at ITS America's annual conference in June. The charter of the task force was to develop best practices or guidelines based on the inputs provided by NATWG members.

The current composition of the task force is as follows:

Requirements Subcommittee Mission and Charter

Getting involved

NATWG and the Traffic Information Benchmarking Taskforce are dynamic entities and welcome additional members and contributors. The present guidelines need early adopters in order to gain traction and legitimacy. You and your organization can get involved in a number of ways, including but not limited to:

  • Joining the NATWG membership and weighing in on further development;
  • Taking these guidelines on a 'trial run' by employing them the next time you evaluate traffic information;
  • Providing existing evaluation data, which may be anonymized and scrambled as necessary, so that the Taskforce can add it to a pool that we are assembling for the purpose of validating the proposed metrics;
  • Providing feedback and comments, and suggesting modifications;
  • Joining the Traffic Information Benchmarking Taskforce and directly contributing to the next iteration.

Submitting modifications

Members of NATWG may make changes through the Wikispaces interface. Changes are accepted automatically, but may be subject to final review by the current Task Force members. Members making changes are encouraged to use the SAVE WITH COMMENT feature to provide a rationale for the changes submitted.

Requirements Task Force members may convene periodically to determine whether to accept changes and re-compile the Traffic Information Benchmarking Guideline (TIBG) into a new version. The current version is uniquely identified on the wiki as version 1.0, followed by the Wikispaces-generated "revision IDs" of each chapter, found on the main Traffic Information Benchmarking Guideline (TIBG) v1.0 page.

In submitting changes, members must adhere to ITS America's Anti-Trust Guidelines and the Wikispaces Acceptable Use Policy.

To join this wiki, please go to the Join NATWG Page.

Wiki Version Summary For Version 1.0

Version 1.0 Wiki Revision ID: 131469801 | Last Revision: Apr 27, 2010 2:18 pm

List of Abbreviations

GIS Geographical Information System
GPS Global Positioning System
HOV lane High-Occupancy Vehicle (i.e. carpool) lane
ITS America Intelligent Transportation Society of America
MPH Miles Per Hour
NATWG North American Traffic Working Group
PHS Position, Heading, Speed
RDS Radio Data System
RMSE Root Mean Squared Error
SPM Seconds Per Mile
TIS Traffic Information Source
TMC Traffic Message Channel

1. Introduction and General Considerations


1.1 Intended audience

The intended audience for these guidelines includes producers of traffic information, both public and private; buyers such as automobile manufacturers, personal navigation solution providers, mobile phone network operators and other media companies, and roadway network operators; as well as all intermediaries, third-party stakeholders and facilitators such as government agencies.

1.2 Intended use

The guidelines outline a set of methods and metrics that can be used to evaluate the quality of traffic information that may include incident or event messages, flow speed information, and travel time estimates. This information is assumed to be broadcast in near real-time to reflect either current or future conditions.

The metrics proposed in the guidelines can be applied toward quality assurance and data validation purposes. They can also serve as benchmark measurements in order to compare data quality between several information sources or commercial providers on a given roadway or within a metropolitan region, or to establish comparisons across multiple regions.

The word 'guidelines' aptly describes the intent of this document as a first step toward practice harmonization. While these guidelines may end up setting a de facto standard, they are not intended as one. A standard may ultimately constitute the most beneficial outcome for the traffic information industry, and it may be developed on the basis of these guidelines, but it is still too early to tell whether this is a feasible or even a desirable goal. This will depend in large part on the adoption of the present guidelines, and the response from traffic information customers.

As guidelines, the methods presented in this document leave room for interpretation and often balance general principles with formal rules. Only with usage and feedback can we determine what constitutes the best approach on a case-by-case basis. Thus the most important recommendation of all is that those reporting test results clearly document their methodology and its adherence to, or departure from, the proposed guidelines. We suggest that test results be as transparent as possible in order to be recognized as legitimate benchmarks.

1.3 Basic premises and scope

These guidelines were developed based on a consensus that there exists no commonly accepted metric or methodology to measure and compare the quality of traffic information. Institutional purchasers of traffic information (e.g. car manufacturers, Departments of Transportation, etc.) conduct benchmarks with mostly ad-hoc methodologies. This increases the overall costs of those benchmarks and prevents effective comparisons between test results. A broader implication of this situation is that absent a standard lexicon and consistent measurements, traffic data quality issues are generally not well understood, if at all, by data consumers. The postulate that drives the development of the present guidelines is that harmonized benchmarking methods would benefit both suppliers and customers by a) improving the consistency and fairness of evaluations; b) lowering their overall costs by eliminating duplication of efforts; and c) establishing recognition for true added value, which will pull quality upward.

Note that in order to fully appreciate the experience of a traffic information consumer, one needs to include such factors as timeliness (i.e. how quickly information is provided) and ease of use (i.e. delivery format), and to add subjective variables on top of that. These guidelines do not attempt to capture any of those elements. Their primary concern is the accuracy of the traffic estimates that are produced and transmitted.

The most basic assumption of the proposed methodology is that there exists a way to collect information about current conditions on a roadway that is deemed trustworthy enough to qualify as 'ground truth' and thus serve as a referent against which to compare other traffic information sources[2]. Once ground truth data has been collected, the exercise consists of scoring information sources against it using a set of metrics. These metrics should ideally meet the following criteria:

· Formally defined and easy to compute (i.e. not too many exceptions / fringe cases);

· Relevant to the end-user experience;

· Easy to interpret;

· Good balance of synthetic vs. exhaustive (i.e. tells the story concisely);

· Normalized and scalable (i.e. independent from route length, sample size, etc.)

In their present form, the guidelines apply to controlled-access roadways, essentially centrally-divided expressways with no intersections. The focus is on estimating the accuracy of velocity-based information, as this is arguably the most determining factor of how drivers experience traffic (as opposed to say, vehicle density measurements that are the main currency of traffic engineers[3]). Such information can take one of the forms outlined below:

1) Overall speed of the traffic flow at a particular point and a given time;

2) Average travel time along a roadway segment or route;

3) A descriptive qualification, such as ‘free flowing’ or ‘heavily congested’, alternatively presented with color codes on a map of the roadway network.

Since these information types can all be derived from an estimate of flow speed, the evaluation method presented in this document boils down to performing the comparison illustrated in Figure 1. However, we still consider each information type separately and provide corresponding evaluation metrics in order to offer a direct relationship between common applications of traffic data and means to assess their quality.


Figure 1

Example comparison of probe vehicle speed and speed from a traffic information service provider along a route
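To make the relationship between the three information types concrete, the sketch below derives all three from a single flow-speed estimate. The 25 and 45 mph congestion thresholds and the function name are illustrative assumptions, not values prescribed by these guidelines.

```python
def info_from_speed(speed_mph, segment_length_mi, thresholds=(25.0, 45.0)):
    """Derive the three common reporting forms from one flow-speed estimate.

    Returns (speed in mph, travel time in seconds, descriptive level).
    The 25/45 mph thresholds are illustrative assumptions only.
    """
    travel_time_s = segment_length_mi / speed_mph * 3600.0
    heavy, free = thresholds
    if speed_mph < heavy:
        level = "heavily congested"   # typically shown red on a map
    elif speed_mph < free:
        level = "congested"           # yellow
    else:
        level = "free flowing"        # green
    return speed_mph, travel_time_s, level
```

For example, a 60 mph estimate over a one-mile segment yields a 60-second travel time and a "free flowing" qualification.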

Accordingly, the document covers the following items:

· What constitutes an acceptable test route?

· How should ground truth data be collected?

· How should ground truth data and traffic estimation data be paired up for comparison?

· What metrics provide the most meaningful comparisons, and how should they be reported?

1.4 Overview of current version and known limitations

As of the current version, the proposed method for collecting referent or ‘ground truth’ data against which to compare estimates is to use floating cars. This means that vehicles outfitted with a GPS receiver and a data recorder take trips on a given itinerary for the purpose of comparing their observed travel speed with the estimated speed broadcast by a traffic information provider, or source (TIS). In other words, the traffic conditions met by those floating cars are considered to constitute the observable ground truth against which estimates are benchmarked at any point in time and space.

The floating car method poses a statistical challenge because sampling is typically scarce unless a whole fleet can be dedicated to benchmark runs. However, it is also very popular because of its relative ease of implementation and the feeling of indisputability provided by data that is collected through direct immersion into the traffic flow (a.k.a. 'out-of-the-window' testing). The current iteration of this document considers a single vehicle driving a single run on a preset route. Taken in isolation, such a test would not constitute a valid benchmarking method, but it is the building block for a more complete test aimed at a roadway network. Once these guidelines meet a reasonable level of acceptance, they will be expanded to consider multiple-run tests and network-level benchmarking. Following is a list of specific issues that we intend to address in future versions:

· Further work by NATWG may consider an alternative ground-truth data collection method based on travel time sampling extracted from technologies that individually identify vehicles (e.g. toll tag readers, automated license plate readers, Bluetooth readers or magnetometers with a unique tagging capability). It is also possible that the massive influx of probe data stemming from mobile phones and GPS receivers into traffic information collection systems will result in the ability to self-validate (specifically, real-time estimates may be checked after the fact with bread-crumb data).

· For each set of metrics proposed in these guidelines, a complete methodological explanation is provided. However, the resulting outcome is not 'plug-and-play'. Specific formulas are conditional on setting certain parameters, which will still require fine-tuning. Further, ultimate test results should aggregate multiple runs over a region, and we have not yet provided guidelines on how to perform such aggregations.

· As we move beyond a single-route / single-run test description, probably the most critical issues will be to determine adequate sampling rules that attach a high degree of confidence to test results. As pointed out above, data collection alternatives to floating cars will be considered, notably including reidentification technologies.

· As it stands, the guidelines are aimed at measuring traffic information quality on controlled-access roadways where traffic flows are relatively homogeneous. The traffic information industry is expanding its coverage to signalized arterials and data quality will need to be assessed on those roadways as well. However, it is still early to move in that direction.

1.5 Test meaningfulness

Much of this document focuses on ensuring that benchmark tests are conducted in ways that produce meaningful results. A few general principles can be highlighted here:

· Floating car runs conducted in the absence of any traffic congestion have very little value. It is well documented that merely broadcasting historical traffic conditions or even posted speed limits can yield accurate results well in excess of 90% of the time. Therefore the value of real-time traffic reporting can only be assessed during changing or unusual conditions.

· As a corollary, traffic information benchmarks must be conducted on roads that host significant flows. This excludes most residential streets and minor rural roads.

· A further necessity for making benchmarking results meaningful is to bind test areas to a set of relatively homogeneous roadway sections. In this respect, the most important distinctions are rural areas vs. urban or suburban areas, and controlled-access roadways vs. signalized arterial roads[4].

1.6 Geographical references

These guidelines consider three useful geographical reporting levels: TMC location codes, routes, and networks (or markets). As depicted on Figure 2, these three levels are nested: a market is a collection of routes, itself a collection of TMC codes. At each level, traffic information quality is a function of time; roughly speaking, a score can be attributed at regular time steps and then aggregated over longer periods.


Figure 2 Benchmark reporting units: TMC codes, routes and markets

Traffic Message Channel location codes (commonly referred to as TMC codes) were developed for the Radio Data System (RDS) service in order to standardize the reporting of traffic events on major roadways under a unique set of geographical references. TMC codes are typically assigned at significant decision points and intersections, in an unambiguous format that is independent of existing digital maps for the area. In North America, TMC location codes are assigned and maintained through a collaborative effort between map publishers NAVTEQ and TeleAtlas. While those TMC location codes may be defined with slightly different coordinates in the two map systems, the physical reference (typically an intersection) is common. Note that the incidence of using either map system on test results will be so small that it is considered negligible. However, for disclosure's sake, we recommend reporting the mapping system and version in test results.

Although the TMC standard was initially designed to deal primarily with events (accidents, closures, weather, etc.), traffic information providers have adopted it so widely that it is also used to report flow information on segments defined by two adjacent location codes. In practice, TMC codes thus designate both a location and the directional segment that originates from that location through the next TMC code. As of this writing, TMC codes constitute the reporting unit of choice for the entire traffic information industry. Therefore, they are the natural set of references with which to assess traffic information quality.

Note that while TMC location codes have been a boon to the traffic information industry by bringing a standard to the market, they are not a panacea for reporting complex congestion patterns. If a traffic information provider reports data with a finer spatial granularity, the methodology described in the present guidelines still applies: the comparison between a TIS and a set of records believed to represent ground truth can be performed along any set of elementary road segments, TMC location codes or others. The remainder of the text primarily assumes that TMC codes are used, but one may substitute a different roadway segment definition for TMC codes.

A route is a meaningful itinerary, and can also be defined as a sequence of contiguous TMC location codes.

Finally, a network is a collection of roadways in a given geographical area, which may define a 'market'. A network will typically encompass hundreds of TMC location codes, and it can be broken down into routes for benchmarking purposes. Benchmarking at the network level is not covered in the present version of this document but will be tackled in a subsequent version. A key consideration at the network level is the volume of ground truth data that is necessary to achieve statistical significance and thus ensure reliable and indisputable test results.
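The nesting of the three reporting levels can be sketched as a simple data model. This is a minimal illustration of the hierarchy described above; the class names and the example TMC code format are assumptions, not part of the guidelines.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TMCSegment:
    code: str            # TMC location code (format is illustrative)
    length_mi: float     # length of the directional segment it designates

@dataclass
class Route:
    name: str
    segments: List[TMCSegment]   # contiguous TMC codes along one itinerary

    def length_mi(self) -> float:
        return sum(s.length_mi for s in self.segments)

@dataclass
class Network:
    market: str
    routes: List[Route]          # a market is a collection of routes
```

A route's length then falls out of its segment list, which is convenient when checking the route design rules of section 2.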

1.7 Reporting results

As already mentioned, an overarching recommendation is to provide as much information and transparency as possible on the methods with which test results are obtained. One way to do this is to follow these guidelines and to clearly indicate where they are observed and where the test method deviates from them.

At a minimum, test results for a single-run test should report the following information:

· Route description, including TMC codes covered

· Type of GPS receiver and data recording device

· Map provider and version used during the test

1.8 Metrics

The metrics presented in these guidelines are designed to measure differences between a traffic information source and data collected by a floating car. These metrics provide a set of scores that can be easily captured and understood, and allow for an immediate appreciation of the quality of an information source by a reasonably attuned but non-expert professional. Our design philosophy is to strike an adequate balance between a broad synthesis that may hide relevant nuances in the data, and a level of detail that may reveal all kinds of interesting features but fails to directly tell a story to the naked eye. Of course, raw numbers can be misleading without a proper understanding of context, and we do not suggest that metrics can be a substitute for a rounded view of any given situation. In fact, that argument precisely militates for standardized metrics that can become a lead-in to a meaningful conversation about context rather than be the object of the discussion.
Three concepts were used in the development of metrics intended to assess the quality of a TIS:

One concept focuses on traffic speeds and measures the aggregated difference between estimated values and ground truth values. The result is a numerical value, or a set of them, expressed in miles per hour. Such a metric provides a reading of the overall discrepancies between actual traffic speeds and broadcast speed information, as well as their distribution.
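As a minimal sketch of this first concept, the function below aggregates paired speed values into a mean error (bias) and an RMSE, both in mph. It is illustrative only; the formal metric definitions are given in section 7.

```python
import math

def speed_error_metrics(estimated_mph, ground_truth_mph):
    """Aggregate discrepancies between TIS speed estimates and
    floating-car ground truth, paired per segment and time step.

    Returns (mean error, RMSE) in mph. A positive mean error means
    the TIS over-estimates speed; RMSE summarizes overall magnitude.
    """
    diffs = [e - g for e, g in zip(estimated_mph, ground_truth_mph)]
    mean_error = sum(diffs) / len(diffs)
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return mean_error, rmse
```

Note that opposite-signed errors cancel in the mean but not in the RMSE, which is why both numbers are worth reporting.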

The second concept considers travel time estimates. There is an inverse relationship between travel times and speed, and thus the information conveyed by measuring travel time errors is essentially the same as the one obtained by measuring speed differences. However, broadcasting travel times is a very common and popular application of traffic data, and the inverse relationship is not intuitive. Therefore, we offer a methodology to compare estimated travel times with ground truth travel times and report the difference in a normalized way by using seconds per mile as a unit.
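The seconds-per-mile normalization can be sketched in a couple of lines. The function name is an assumption; the conversion itself (pace = 3600 / speed) follows directly from the definitions above.

```python
def pace_error_spm(estimated_speed_mph, true_speed_mph):
    """Express a travel-time error as a pace difference in seconds per
    mile (SPM), the normalized unit recommended here.  Pace = 3600 / speed,
    so the result is independent of route length.
    """
    return 3600.0 / estimated_speed_mph - 3600.0 / true_speed_mph
```

For instance, estimating 30 mph when traffic actually moves at 60 mph over-states the pace by 60 seconds on every mile, regardless of how long the route is.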

The third concept looks more broadly at the estimated traffic conditions in terms of congestion levels. Most information sources report traffic conditions either in descriptive terms (e.g. 'free flow' or 'heavy congestion') or by using color codes on a digital map (e.g. green, yellow, and red). Thus a useful indicator of information quality is to count the number of instances in which the reported congestion level matches or doesn't match reality. Another way to think about such reporting is in terms of error type with respect to the presence of congestion, i.e. type I errors (congestion is reported when it is in fact not present) and type II errors (congestion is missed).
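For the binary case, the tally described above can be sketched as follows (an illustrative helper, not the formal metric of section 9; True stands for "congested"):

```python
def congestion_error_counts(reported, actual):
    """Tally matches and the two error types for binary congestion
    reporting, given parallel sequences of booleans (True = congested).

    Type I: congestion reported but not actually present (false alarm).
    Type II: actual congestion missed by the report.
    """
    matches = sum(1 for r, a in zip(reported, actual) if r == a)
    type_i = sum(1 for r, a in zip(reported, actual) if r and not a)
    type_ii = sum(1 for r, a in zip(reported, actual) if a and not r)
    return matches, type_i, type_ii
```

With more than two congestion levels, the same idea generalizes to a confusion matrix counting every (reported level, actual level) pair.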
All three concepts and the corresponding metric formulations are described in section 7 (numerical speed comparisons), section 8 (travel time comparisons) and section 9 (congestion levels) of this document.

2. Route Selection


This section describes how to design test routes that are suitable for benchmarking traffic information.

2.1 Road classes, make-up and length

For the purpose of designing test routes, roadway sections may be divided loosely into three types:

· Controlled-access highways: roads connected only by ramps, with no at-grade crossings

· Major arterials: roads with at-grade crossings and posted speed over 40 mph

· Other arterials: roads with posted speed less than 40 mph

A cohesive route should be primarily made up of one type of roadway. Further, routes should span primarily urban/suburban areas or rural areas, but not both. The make-up of a route per the above categories should be indicated in the test results.

Ideally, routes should be designed such that they can be defined as a sequence of contiguous TMC location codes, i.e. along the same road as defined in the TMC table. The object of the test is to measure through-traffic. Thus including ramps into a test route is not recommended, though it may be needed in some cases (e.g. freeway-to-freeway ramps). For TMC location codes included in the route, the entire corresponding segment should be driven and included in test results. In some cases, this may mean that the driver needs to enter and exit a freeway at ramps that are located outside the route boundaries in order to not skew results with acceleration/deceleration and merges.

In order to provide meaningful reports that are distinct from TMC-level results, routes should be at least 2 miles long and span at least three TMC codes. Routes can be much longer provided that their make-up follows the aforementioned guidelines and driving times do not typically exceed an hour, at which point fluctuations within a single run may become too large for the route to constitute a meaningful unit.
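The route-size rules above reduce to a simple check. This is a convenience sketch (the function name and argument shapes are assumptions), using the stated minimums of 2 miles, three TMC codes, and a typical driving time of at most one hour:

```python
def route_meets_minimums(tmc_lengths_mi, est_drive_time_hr=None):
    """Check the guideline minimums for a benchmark route: at least
    2 miles long, at least three TMC segments, and (if an estimate
    is supplied) a typical driving time not exceeding one hour.
    tmc_lengths_mi: list of segment lengths in miles, one per TMC code.
    """
    ok = len(tmc_lengths_mi) >= 3 and sum(tmc_lengths_mi) >= 2.0
    if est_drive_time_hr is not None:
        ok = ok and est_drive_time_hr <= 1.0
    return ok
```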

2.2 Time of day and traffic conditions

As pointed out in section 1.5, test routes should be known to feature a reasonable likelihood of traffic congestion on a substantial portion of their length and during extended periods of time. There is never an absolute guarantee ex ante that a particular roadway section will be congested (in fact, we would always hope that it is not!). However, time of day and known historical patterns should guide both route selection and run scheduling.

Congestion may result from very regular patterns or more exceptional ones, and both instances may constitute good test cases for the purpose of benchmarking. Examples of the former would include:

· Weekday rush hours (typically defined as 6-10 AM and 4-8 PM) on urban arterials and inbound (morning)/ outbound (afternoon) regional connectors;

· Weekday rush hours between major urban points of interest, e.g. airport, central business districts or shopping malls;

· Weekend vacation routes.

Examples of routes for which congestion is possible but less certain include:

· Midday, evening or weekend routes;

· Routes to or past special events, e.g. trade shows or ball games.

3. Test Equipment

[Chapter content not yet available on the wiki; please refer to the governing PDF version 1.0.]

4. Driving Behavior


This section describes recommended driving behavior that will result in more reliable tests. These recommendations notwithstanding, safety for both the test driver and other drivers on the road should remain the primary consideration and guide behavior at all times.

In general, test drivers should attempt to mimic the experience of a 'regular' driver. This is a somewhat elusive notion, but in practice it means that test drivers should try to follow the bulk of the traffic, neither driving in the slowest lane nor attempting to always be in the fastest lane. Test drivers are free to change lanes and negotiate traffic in a manner consistent with the majority of other drivers present on the road at that time. If one thinks of driver behavior as distributed across a typical bell curve, then test drivers are trying to hit the center of that curve (figuratively, of course: safety first!).

4.1 Route completeness

Test drivers must ensure that they complete each route on their itinerary in one uninterrupted run. A route that is started must be finished without stopping, while adhering to the driving guidelines presented in the remainder of this section. Of course, test drivers should not hesitate to exit a route or stop along the way if safety or another important circumstance requires it, but then the entire data for the corresponding run must be discarded for the purpose of the data quality benchmarking described in this document.

4.2 Speed and acceleration

Unless a route specifically includes a ramp (e.g. a freeway-to-freeway connector), drivers should not accelerate (or respectively, decelerate) to insert their vehicle into the traffic flow (respectively, exit the traffic flow) while on a recorded run. As indicated in section 2, routes should be designed to avoid this by starting/ending at locations that provide an upstream/downstream buffer for the driver.

While on a route, drivers should attempt to maintain speed with the vehicles in the visible field of view and pace with this vehicle group, so that the speed of the test vehicle emulates the average flow of all vehicles. There is, however, one notable exception to this rule: speed limits should be obeyed, even if the flow of vehicles is faster than the speed limit. Section 5 and subsequent sections include data processing provisions that ensure that this discrepancy will not adversely impact test results.

4.3 Lane following

On a multi-lane roadway, the driver should keep to the middle lane at the average flow of traffic whenever possible. When visibility is obscured in the center lane, moving to another lane is desirable so that the test vehicle can maintain the same average speed as all the vehicles in sight. Travel in the left lanes should be limited, because vehicles in the left lanes often exceed the speed limit and maintain speeds higher than average.

4.4 Passing guidelines

The average flow of traffic is best judged by keeping a balance between the number of vehicles the test driver overtakes and the number of vehicles that overtake the driver. This balance should be kept as even as possible. Test drivers should generally avoid passing on single-lane roads. The exception would be the case where the driver is stuck behind a slow-moving vehicle that all or most other vehicles are passing.

5. Data Logging Processing


This section describes how to process data collected during a floating car test. Once a run is completed, a log of data records is extracted from the test equipment. Each record essentially includes a set of GPS coordinates, a time stamp, and a speed reading. In order to perform comparisons between data collected from the floating car and speed estimates that are pulled from a TIS, individual records need to be aggregated to match the spatial resolution of the traffic service. In today’s market, this naturally points to TMC location codes. In other words, GPS records collected along the road segment defined by a TMC location code during a test run are grouped together to compute a single speed value, which can then be directly compared to the data feed provided by a TIS for that TMC location code.
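A minimal sketch of this aggregation step, assuming map-matched records already carry the TMC location code they were snapped to (the record layout, field order, and segment lengths are illustrative, not part of the guideline):

```python
from collections import defaultdict

def aggregate_by_tmc(records, segment_length_m):
    """Group map-matched GPS records by TMC location code and derive one
    space-mean speed per code: segment length divided by traversal time.
    Each record is a (tmc_code, timestamp_s) pair; segment_length_m maps
    each TMC code to its length in meters."""
    stamps = defaultdict(list)
    for tmc, t in records:
        stamps[tmc].append(t)
    speeds_kmh = {}
    for tmc, ts in stamps.items():
        dt = max(ts) - min(ts)  # approximate traversal time, seconds
        if dt > 0:
            speeds_kmh[tmc] = segment_length_m[tmc] * 3.6 / dt  # m/s -> km/h
    return speeds_kmh

# Hypothetical trace over two TMC segments with known lengths
trace = [("TMC1", 0.0), ("TMC1", 30.0), ("TMC1", 60.0),
         ("TMC2", 62.0), ("TMC2", 122.0)]
lengths = {"TMC1": 1500.0, "TMC2": 1000.0}
print(aggregate_by_tmc(trace, lengths))  # {'TMC1': 90.0, 'TMC2': 60.0}
```

The resulting per-code speeds are the values that can then be compared directly against the TIS feed for the same TMC location codes.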

As a preliminary step to processing data logs, it is recommended that visualization software be used to verify that the runs correspond to the specified routes and itinerary. This step ensures that blatant problems in data collection or driver errors are caught right away, so that inadequate runs can be discarded without further processing.

5.1 Map matching and outlier removal

The first step in processing data logs from a floating car test is to match individual GPS records onto a road map. There is an abundance of software available to perform this task, including commercial and proprietary solutions. Given the impracticality of enforcing a single version of a single map database on the target audience of these guidelines, testers have latitude in their choice of map publisher and vintage as well as map-matching technology. These choices must be disclosed (i.e. publisher, quarter, version, or other naming convention), and should preferably be limited to technologies that are well established and accepted within the industry. In particular, testers should have access to a digital representation of the roadway that contains link-by-link references to adequate segmentation units such as TMC location codes. It would not be appropriate to manually interpret the TMC location table on a base map that does not display TMC codes (e.g. using an online mapping application): with such a practice, it is highly likely that nuances in the exact placement of the segments used by the map database and the TIS would be misrepresented.
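Testers will normally rely on established map-matching software rather than implement their own. As an illustration of the underlying geometry, though, the snap distance for a record is simply the distance from the GPS fix to the candidate roadway segment. A minimal planar sketch (coordinates in meters, an approximation adequate only over small areas):

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to the line segment a-b, all given as (x, y)
    tuples in a local planar projection (meters)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)  # degenerate segment
    # Parameter of the orthogonal projection of p onto the line through a-b,
    # clamped to [0, 1] so the foot stays on the segment.
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

# A fix 30 m off a straight 100 m link: well beyond the 25 m threshold
print(point_segment_distance((5.0, 30.0), (0.0, 0.0), (100.0, 0.0)))  # 30.0
```

A record whose snap distance exceeds the 25 m bound given below would be dismissed as invalid.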

Individual GPS records must be filtered upfront to ensure a valid run. Regardless of the technology involved at this stage, the following constitute minimum best practices:

· If a GPS record appears 25 m or more from the roadway segment onto which it is supposed to be snapped, the record should be dismissed as invalid;

· If a GPS record is within the vicinity of the roadway segment to which it belongs, but ends up being snapped to the wrong roadway by the map-matching software, it is also recommended that this record be dismissed;

· If GPS records are not contiguous along a route, i.e. they do not line up in sequential TMC segments, they should be dismissed;

· Even though speed readings from the GPS data collection device are not used directly in these guidelines, it is recommended that GPS records that feature unreasonable speed values be dismissed;

· Under some circumstances ‘divergent lane conditions’ may occur where the driver cannot emulate the average traffic flow (such as congestion on one side of the road due to slow ramp traffic, toll gates with automated and manual collection, HOV conditions, etc.). GPS records for these conditions should be discarded.
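The numeric filters above can be sketched as a simple per-record check. The field names and the 200 km/h bound used to flag "unreasonable" speeds are illustrative assumptions, not part of the guideline:

```python
MAX_SNAP_DISTANCE_M = 25.0       # guideline threshold: records farther away are invalid
MAX_PLAUSIBLE_SPEED_KMH = 200.0  # illustrative bound for "unreasonable" speed readings

def passes_basic_filters(record):
    """Apply the minimum filters to one map-matched GPS record.
    `record` is a dict with illustrative fields: snap_distance_m is the
    distance from the raw fix to the segment it was snapped to."""
    if record["snap_distance_m"] >= MAX_SNAP_DISTANCE_M:
        return False  # too far from the matched segment
    if record["snapped_segment"] != record["expected_segment"]:
        return False  # snapped to the wrong roadway
    if not (0.0 <= record["speed_kmh"] <= MAX_PLAUSIBLE_SPEED_KMH):
        return False  # unreasonable speed reading
    return True

good = {"snap_distance_m": 4.2, "snapped_segment": "TMC1",
        "expected_segment": "TMC1", "speed_kmh": 88.0}
bad = dict(good, snap_distance_m=31.0)  # beyond the 25 m bound
print(passes_basic_filters(good), passes_basic_filters(bad))  # True False
```

The contiguity and divergent-lane checks require route context and driver notes, so they are left out of this per-record sketch.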

5.2 Valid segment requirements

For each TMC location code or segment used to aggregate individual records, the following conditions must be met:

· The segment must have been entered, traveled and exited at full traffic speed (ruling out acceleration or deceleration phases due to ingress/egress);

· Based on the recommended sampling rate of 0.5 Hz, at least 90% of the records expected to be collected over the length of the segment must have passed the filters set in the map-matching step described in section 5.1;

· Gaps in the GPS trace must remain less than 10 seconds. However, there are cases in which such gaps may not be a cause for concern (e.g. if the distance traveled is small). Ultimately, testers should apply judgment in deciding whether or not gaps in the data should invalidate a segment. If gaps greater than 10 seconds remain in a run that is used for benchmarking, a note should be added to the reporting of results.

Unless otherwise noted, data collected along the TMC location code or other segment must be discarded altogether when the above conditions are not met.
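The record-count and gap conditions above can be sketched as follows (the function signature and data layout are illustrative; the entry/exit-at-full-speed condition needs speed context and is not modeled here):

```python
SAMPLE_RATE_HZ = 0.5   # recommended GPS sampling rate
MIN_COVERAGE = 0.90    # at least 90% of expected records must survive filtering
MAX_GAP_S = 10.0       # gaps in the trace must remain below 10 seconds

def segment_is_valid(timestamps, travel_time_s):
    """Check the record-count and gap conditions for one TMC segment.
    `timestamps` are the sorted timestamps (seconds) of records that survived
    the map-matching filters; `travel_time_s` is the segment traversal time."""
    expected = travel_time_s * SAMPLE_RATE_HZ  # records expected at 0.5 Hz
    if expected > 0 and len(timestamps) < MIN_COVERAGE * expected:
        return False  # too few surviving records
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if gaps and max(gaps) >= MAX_GAP_S:
        return False  # gap too long (tester judgment may still override)
    return True

# A 60 s traversal sampled every 2 s (31 timestamps, no gaps) versus a trace
# missing its middle portion.
full = [float(t) for t in range(0, 61, 2)]
sparse = full[:10] + full[-10:]
print(segment_is_valid(full, 60.0), segment_is_valid(sparse, 60.0))  # True False
```

Per the guideline, a tester may still accept a gap over 10 seconds when the distance covered is small, provided the exception is noted in the reported results.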

5.3 Generating segment speeds

The calculation of the space-mean speed for a TMC segment from a GPS trace requires an accurate TMC segment distance value 𝑑 and an accurate calculation of the segment travel time 𝑡; the space-mean speed is then 𝑣 = 𝑑 / 𝑡.

On Segment-ID 3, the error in the TIS estimate amounts to a degradation in performance of some 10 seconds per mile when using the TIS estimate instead of the reference speed.
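The seconds-per-mile metric is simply the inverse of speed, scaled to seconds. A minimal sketch with hypothetical speeds (not the Segment-ID 3 data, which is given in Table 1):

```python
def seconds_per_mile(speed_mph):
    """Pace in seconds per mile for a given speed in miles per hour."""
    return 3600.0 / speed_mph

# Hypothetical: a 33 mph estimate against a 30 mph reference differs by
# roughly 10.9 seconds per mile of pace.
diff = seconds_per_mile(30.0) - seconds_per_mile(33.0)
print(round(diff, 1))  # 10.9
```

Working in pace rather than speed makes errors additive along a route, which is why the guideline reports degradation in seconds per mile.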

Another useful metric that can be derived from these numbers is the performance of the reference speed estimates (PREF) and of the TIS estimates (PTIS) in predicting the actual travel time observed, where:

PREF = 1 − |TREF − TACT| / TACT and PTIS = 1 − |TTIS − TACT| / TACT

Here TACT is the actual travel time observed by the floating car, and TREF and TTIS are the travel times implied by the reference speed and the TIS estimate, respectively.
These metrics provide context for the amount of congestion observed in the test and for the impact of the improvement in performance relative to the total actual drive time. Like all travel time metrics, they tend to provide more clarity when aggregated at the route level.

In our sample, the PREF values vary from 50% to 84% accuracy of travel time prediction and average only 59% accuracy for the whole route. PTIS also varies from 56% to 84%, but in the context of the route, the travel time estimate using the traffic data is 92% accurate.
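The route-level aggregation effect can be illustrated with a short sketch. The travel times are hypothetical, and the accuracy definition assumed here is one minus the relative travel-time error; the governing PDF gives the exact PREF/PTIS formulas:

```python
def prediction_accuracy(predicted_s, actual_s):
    """Travel-time prediction accuracy as 1 - relative error (an assumed
    definition for illustration; see the governing PDF for the exact form)."""
    return 1.0 - abs(predicted_s - actual_s) / actual_s

# Hypothetical per-segment actual and predicted travel times (seconds).
actual    = [120.0, 300.0, 90.0]
predicted = [100.0, 290.0, 60.0]

per_segment = [prediction_accuracy(p, a) for p, a in zip(predicted, actual)]
route_level = prediction_accuracy(sum(predicted), sum(actual))
print([round(x, 2) for x in per_segment], round(route_level, 2))
# [0.83, 0.97, 0.67] 0.88
```

Note that the route-level figure differs from the average of the per-segment figures, because segments with long travel times dominate the route total; this is one reason these metrics read more clearly when aggregated at the route level.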

Table 1 - Example calculations for seconds-per-mile determination (3)


6. Traffic Content Processing

The WikiSpace Version is UNDER CONSTRUCTION... Please refer to the PDF above for the governing version 1.0

7. Speed Comparison

The WikiSpace Version is UNDER CONSTRUCTION... Please refer to the PDF above for the governing version 1.0

8. Travel Time Comparison

The WikiSpace Version is UNDER CONSTRUCTION... Please refer to the PDF above for the governing version 1.0

9. Congestion Level Comparison

The WikiSpace Version is UNDER CONSTRUCTION... Please refer to the PDF above for the governing version 1.0