By Ian Coutts


We’ve all heard that old line about someone who, perhaps after a night of convivial socializing, resembles “forty miles of bad road.” Try 20,000 kilometres.

That’s what the Army is subjecting four new Armoured Combat Support Vehicles (ACSV) to. Bad roads, tough terrain and pretty much anything else that they can throw at them. All to make sure that by the end of the evaluation—however many miles or kilometres each ACSV may traverse—the vehicles do not resemble or operate like all that bad road.

That’s Reliability, Availability, Maintainability and Durability (RAMD) testing.

Canadian Army Today recently spoke with Major Alex Bazinet, RAMD Test Director for the ACSV project, and Major Phillip Gartner, project director for the Directorate of Land Requirements’ Armoured Vehicle Systems, about the testing the ACSV underwent from August 8 to October 6, 2023.

The Army has ordered 360 ACSVs from General Dynamics Land Systems-Canada (GDLS-Canada), in eight separate configurations, ranging “from the ambulance and troop carrier to the fitter/cargo variants, to the CP [command post] and EW [electronic warfare] versions, to the engineering variant and a repair and recovery vehicle,” Gartner explained.

They will replace the LAV II Bison, which first entered service in 1990, and the M113 tracked armoured personnel carrier, a Beatles-era piece of equipment adopted in 1964. The design and production stages for the CP, troop carrier and ambulance variants are by this point largely done. The Army accepted the first four ambulance variants at a ceremony in Petawawa on October 19, and a total of 49 of the medical platforms will be delivered to bases in the coming months.

That being the case, it might seem strange that what appears to be key testing started in just the past two months. Would it not make more sense to do the RAMD evaluation much earlier, perhaps even as computer simulations?

“Simulations could work as an initial first step to predict where we might see issues in a platform,” said Gartner, “but a simulation is only as good as the person programming it. It doesn’t actually predict how a vehicle will be used in real life.”

With RAMD, they take the ACSV and push it as hard as possible to see how it fares.

“How long can something be relied on before it breaks?” said Bazinet. Then, “if something were to break, is it really hard to fix and [does it] maybe take special tools? If an operator can do it and it’s easy to fix, then it’s easily maintainable.”

If that answers the challenges of reliability and maintainability, there is then the question of durability: how hard something is to break. Add them all up and, to mangle the old song, they spell availability.
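The article doesn’t give the Army’s scoring formula, but the relationship Bazinet describes maps onto the textbook reliability-engineering definition of inherent availability: reliability sets how long a vehicle runs between failures (MTBF), maintainability sets how quickly it can be fixed (MTTR), and together they determine the fraction of time it is mission-ready. A minimal sketch, assuming that standard formulation rather than anything specific to the ACSV program:

```python
# Illustrative only: the standard "inherent availability" relation from
# reliability engineering, NOT a formula taken from the ACSV evaluation.

def inherent_availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Fraction of time a system is mission-ready.

    mtbf_hours: mean time between failures (reliability)
    mttr_hours: mean time to repair (maintainability)
    """
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical numbers: a vehicle that runs 200 hours between failures
# and takes 4 hours to repair is available about 98% of the time.
print(round(inherent_availability(200.0, 4.0), 3))  # → 0.98
```

The formula makes the trade-off concrete: a harder-to-break vehicle (longer MTBF) and an easier-to-fix one (shorter MTTR) both push availability toward 1.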


To evaluate the vehicles for these qualities, the Army subjected them to a range of tests. These included driving them hard, far harder than an armoured vehicle like the ACSV might normally be subjected to, and testing seemingly innocuous gear. The four test vehicles—two ambulances, a CP, and a troop carrier—each have their little differences. The CP version of the ACSV, for example, has special fold-down seats in the back and a platform for an observer built in.

During testing, “we had a soldier hit that platform again and again,” said Bazinet, comparing it to that machine at Ikea that simulates a sofa being sat on repeatedly.

What they look for are what the Army refers to as “incidents.”

An incident could be anything from a plastic part that snaps off in someone’s hand to a major system failure. They then want to figure out why the incident occurred. In a case where something broke, “did it break because the hardware broke, or did it break because [the vehicle] got into an accident or maintenance did not follow the proper procedure? Or was there a training accident?”

It’s important, Bazinet said, to distinguish between incidents that are external to the goals of RAMD testing and ones that point to shortcomings in the vehicle relative to the contract requirements.

The officers conducting the RAMD tests collect information about possible incidents in several ways. “We can use some of the onboard maintenance devices that are on the vehicle,” he said, including the health and usage monitoring systems built into the vehicles.

They also depend on the Quality Engineering and Testing Establishment, which can place instrumentation on the vehicle that “can report a lot on drive throttle positions, left-right steering, selection. They can also measure inputs from the engine and transmission,” he explained.

“But we also need real feedback from people on what they see and hear while driving the vehicle. Combine all that together and you get a pretty good idea of what’s happening when an operator says, ‘I was doing this when this happened’.”

“We’re discovering a lot of not just bugs, but interesting observations from drivers, maintainers and operators on how the vehicle interacts with the environment,” added Gartner.

After the RAMD data is collected, “what we’re going to do is finalize the scores of all the different incidents that occurred, and discount the ones that don’t apply, and that’ll give us the result,” said Bazinet. “GDLS-Canada will do their own analysis and then we’re going to review the answers to see if they meet our requirements.”