Selecting an eGov Travel System

» Posted by on Apr 11, 2014 in Electronic Travel Systems

Selecting a travel system can be an overwhelming task.  I was a member of the core team charged with selecting an eTravel provider for our department.  The core team was made up of a manager with extensive travel business experience, an analyst who had experience with travel and with administering a travel system, and me, an information technology staff member who had written the interface between the existing travel system and the accounting system.  The remainder of the team was made up of representatives from 12 different bureaus within our department.  The bureau representatives were mostly document preparers or administrative people who had experience with travel arrangements or travel policy.  One of the biggest challenges was getting representatives from 13 different bureaus to evaluate the three competing eTravel systems consistently.

The core team was fortunate because we were already using an electronic travel system and knew what was needed to create documents and administer such a system.  We also knew what was lacking in our current system and which improvements would be valuable in a new one.  To be fair to all three eTravel vendors, the core team developed an extensive list of requirements and distributed it to the entire team.  The team reviewed the requirements and added any that were missing or desirable for the future system.  The core team then translated the requirements into scripts for creating documents; many requirements could be satisfied in the creation of a single document.  Traveler requirements included creating different document types, routing documents, conditional routing, and approving and rejecting documents.  Administration requirements included creating and changing routing lists, imposing global changes, adding accounting data to the system, updating accounting for a new fiscal year, and creating and updating traveler accounts.  Interface requirements listed all the data elements needed to populate and track records efficiently in the accounting system.  The expected results of the scripts were included in the requirements document given to the rating team.

Each requirement was assigned one of three “weights” based on how critical the requirement was to the processing of travel.  Requirements were given more “weight” if identified as a “show stopper” (department mission could not be completed without the functionality).  The second “weight” was a requirement that would be difficult to do without, but a workaround existed to complete the requirement. The third “weight” was considered nice to have but not critical.

The three eTravel vendors were invited to demonstrate their systems and were allocated three and a half days each to satisfy the requirements.  Each vendor was provided the requirements document an equal number of days in advance of their demonstration.  The requirements were written as script-like scenarios for creating documents, with situations such as crossing the international date line to see whether per diem was computed correctly.  There were also deliverables beyond the demonstrations, such as reports and electronic files containing the data elements that would be transferred out of the eTravel system to serve as an interface to an accounting system.  A document explaining the record layout and describing the data elements was also a deliverable.  Each vendor was allowed two weeks following the completion of their demonstration to provide the deliverables to the core team.

Prior to the beginning of the evaluations, each team member was given a copy of the requirements/scripts and instructions on scoring during the evaluation.  Team members had to score each vendor during the presentation on how well each requirement was met: a 0 indicated the requirement was not met; a 1 indicated the requirement was met but did not work in the manner expected, although it would with some adjustments; and a 2 indicated the requirement was met as expected.

The core team was responsible for scoring the deliverables, which were received after the demonstrations.  Since no other members of the team had an information technology background, I was responsible for evaluating the interface file and the data elements required by the accounting system.  At the end of the evaluations, the scoring documents were collected by the core team and given to an independent associate to tally.  A spreadsheet was developed in which each score was applied to the appropriate weight; the weighted scores for each requirement were then averaged across evaluators, and that average was added to the vendor's cumulative total.
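The tally described above can be sketched in a few lines of code.  The 0–2 scoring scale, the three weight tiers, and the average-then-accumulate approach come from the process described in this post; the specific numeric weight values (3 for a show stopper, 2 where a workaround exists, 1 for nice-to-have) and all names in the sketch are my own illustrative assumptions, not the actual figures used.

```python
# Illustrative sketch of the weighted-scoring tally.  The weight values
# below are hypothetical; the post does not give the actual numbers.
WEIGHTS = {"show_stopper": 3, "workaround": 2, "nice_to_have": 1}

def vendor_total(requirements, scores_by_evaluator):
    """Compute one vendor's cumulative score.

    requirements: list of (req_id, weight_category) pairs.
    scores_by_evaluator: dict mapping req_id -> list of 0/1/2 scores,
    one score per evaluating team member.
    """
    total = 0.0
    for req_id, category in requirements:
        scores = scores_by_evaluator[req_id]
        avg = sum(scores) / len(scores)      # average across evaluators
        total += avg * WEIGHTS[category]     # apply the requirement's weight
    return total

# Hypothetical example: two requirements scored by three evaluators.
reqs = [("R1", "show_stopper"), ("R2", "nice_to_have")]
scores = {"R1": [2, 2, 1], "R2": [0, 1, 1]}
print(vendor_total(reqs, scores))
```

Averaging before or after weighting gives the same result, since the weight is a constant factor per requirement; the sketch averages first to mirror the wording of the post.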

After the results were compiled, all bureaus met to review them and vote on the eTravel vendor that would best meet our needs.  The majority ruled, and an eTravel system was selected.  Our department was the first to place a task order, and the system was implemented with a high level of cooperation, which led to a successful project deployment.

By: Debbie Sams

“The views expressed are those of the author and do not reflect any position of the Government or my agency.”


