At the conclusion of a test campaign, members receive a Test Performance Summary Report for products tested in the Austin lab.  The report provides valuable insights for FCG staff and the member company regarding their campaign.

 

Throughout this note we use the term “campaign” to refer to a set of test services that result in the registration or re-registration of a member product.  The word “product” can refer to a DD, a device, an FDI device package, or a combination of these.

 

We will use the following test report as an example:



Test Performance Summary


The report is split into three sections.

 

The top section, with the black highlight bar, provides summary information about the campaign: company name, campaign requestor, open and close dates, and the campaign decision.

 

The middle section provides a set of metrics about the campaign, and the lower section (the red table) adds detail.

 

Starting from the last section, there are a total of five tickets associated with this campaign.  The initial ticket opened by the member is always classified as the Registration Ticket.  All tickets follow a defined set of stages described in the Registration Process Stages article.  Once all prerequisites are received, the collection phase is complete and a number of campaign tickets are created to track the different test campaigns.


For HART field devices, the EDD Test is the first campaign and proceeds until an initial check of the EDD is completed.  Once that check is done, device testing can start in parallel.  Devices that have findings may be subject to one or more cycles during testing.  If a product fails a campaign and requires an update, additional campaign tickets are created to track re-testing.


In the event that testing is paused while awaiting input from the member, wait time is accumulated against the campaign.  These wait times are then aggregated into the overall performance metrics described in the next section.

 

Performance Metrics


The metrics section includes several pieces of information about the campaign.  All calculations are based on calendar time and include weekends. 


Data Collection Time & Start Date 


When a member opens a ticket for registration, instructions are provided on the information that must be collected before testing can begin.  This varies by registration type and includes all required documentation, payment, and software (e.g., EDD files).  In the example above, the registration ticket was opened on 4/17/2023.  During the following 20-day period, information was being delivered by the member.


The support portal provides a list of that documentation and confirms when the different elements are received.  It is also a time when members may ask questions about the process.  The start date is calculated as 5/6/2023, the completion of the collection period.
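As a rough sketch of the date arithmetic (the report's exact counting convention is internal, so whether endpoints are included is an assumption here):

from datetime import date

ticket_opened = date(2023, 4, 17)  # registration ticket opened
start_date = date(2023, 5, 6)      # collection period completed

# Simple calendar-day difference; depending on whether endpoints are
# counted, this convention can differ from the report by a day.
collection_days = (start_date - ticket_opened).days
print(collection_days)  # 19 by this convention; the report describes a 20-day period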


Campaign Duration

Campaign duration is equal to the difference in days between the start date and the close date.  In this example the campaign lasted 177 days.  Due to snapshot frequencies, these values may be off by +/- a day.  Duration is calendar time.
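A minimal sketch of this calculation; the close date used here is hypothetical, chosen only to reproduce the example's 177-day duration:

from datetime import date

start_date = date(2023, 5, 6)
close_date = date(2023, 10, 30)  # hypothetical; the real close date appears in the report's top section

# Calendar-day difference, weekends included.
campaign_duration = (close_date - start_date).days
print(campaign_duration)  # 177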


Waiting for Requestor 

During the course of a test campaign, it is very common to engage in question-and-answer exchanges with the member.  At times, continuing to work on the campaign requires answers to these questions.  The Waiting for Requestor metric summarizes the days on which our support system indicated that a ticket associated with the campaign was in the “waiting” state.  More than one ticket in a campaign may be in a “wait” state on the same day; in that case several tickets are stalled waiting for input from the member, but we count only one day toward the Waiting for Requestor total.
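A minimal sketch of this deduplication, assuming each ticket's wait periods are available as inclusive date ranges (the data shown is illustrative, not from the report):

from datetime import date, timedelta

def waiting_days(wait_intervals):
    # Count calendar days on which at least one ticket was in the
    # "waiting" state; overlapping waits on the same day count once.
    days = set()
    for start, end in wait_intervals:  # inclusive date ranges
        d = start
        while d <= end:
            days.add(d)
            d += timedelta(days=1)
    return len(days)

# Two tickets waiting, overlapping on 6/4 and 6/5: 5 + 4 - 2 = 7 days.
intervals = [
    (date(2023, 6, 1), date(2023, 6, 5)),
    (date(2023, 6, 4), date(2023, 6, 7)),
]
print(waiting_days(intervals))  # 7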


Active time at FCG 

This final metric represents the net number of days that FieldComm Group worked on the campaign.  This time includes any delays due to the availability of bench space.  For this campaign, that number is 150 days.
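The report does not spell out the formula, but the example's figures are consistent with active time being the campaign duration less the waiting days; the waiting value below is the implied one, not a number stated in the report:

campaign_duration = 177      # days, from the example above
waiting_for_requestor = 27   # implied by 177 - 150; hypothetical
active_time_at_fcg = campaign_duration - waiting_for_requestor
print(active_time_at_fcg)    # 150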

 

Campaign Specific Details

The third section of the report provides individual metrics for the different campaign types.  Each row of this section shows a specific ticket number, the type of test associated with that ticket, the duration of that phase of the campaign, the amount of time in the wait state for the phase, and the ultimate test decision for that phase.


Because multiple tests occur in parallel, the sum of the individual phases can be greater than the overall campaign duration, as the sketch below illustrates.
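A small illustration of why phase totals can exceed the campaign duration; the per-phase durations here are hypothetical, not taken from the report:

# Hypothetical per-phase durations (days) for tests that overlap in time.
phases = {"EDD Test": 60, "Device Test": 120, "FDI Test": 45}

total_phase_days = sum(phases.values())  # 225
overall_duration = 177                   # from the example campaign

# Parallel execution means summed phase days exceed the overall duration.
assert total_phase_days > overall_duration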


For the example above, there are one registration ticket, two EDD tests, one product test, and one FDI test.  When the EDD was first submitted on ticket 99764, it failed, resulting in a new ticket, 99991, being opened for a second EDD test.


Summary

As with any process with parallel activities, choices are made in how to give an overall assessment of campaign performance. Metrics may also have slight inaccuracies due to data entry errors. However, each report is reviewed prior to sending to confirm the values give an accurate representation of the campaign's performance.


The Test Performance Summary provides valuable information to help FieldComm Group and members implement process improvements that increase the efficiency of the test and registration process.