I have written in the past about the requirements process when developing a plan for equipping a new airplane. Basically, you start by figuring out what you want the airplane to do, then figure out functions you need to carry out that mission, then figure out what types of equipment you need to carry out those functions – and only after doing those steps do you open up catalogs, magazines, and web sites to look at specific products that will give you those functions. It’s a simple, logical process that takes a lot of the emotional consumerism out of the mix. Not that there’s anything wrong with getting emotional about an airplane, but that should come AFTER you know the cold hard facts.
While this all sounds simple, and pleases the engineer’s sense of esthetics, I will be the first to admit that putting the process into practice takes a little more effort. The devil is always in the details, as they say, and in this case, you’ll find him when you start evaluating some design “intangibles” that come with avionics selection. It is very easy to identify a GPS as being IFR approved, for instance. It is simple to determine the number of isolated power feeds on a specific EFIS. And it is not hard to figure out if an autopilot can function on its own, without reference to an EFIS. But how do you quantify “reliability”? How do you determine if software is “robust”? Frankly, how do you figure out if you can “trust” a specific system to keep on doing what it is designed to do?
[Before all of the quality engineers out there raise their objections…yes, there are many different ways to determine these things in traditional commercial, military, or government projects. Unfortunately, they all require extensive, time-consuming, and high-sample testing, little of which can be counted on with the latest homebuilt avionics. Even with the best of intentions (and my experience has been that all of the popular suppliers truly have the best intentions), it is hard to say that a particular system has gone through the kind of testing it takes to achieve certification – because if it did, well, it would be certified – and cost six times as much! So here in the niche experimental market, we have to create a little microcosm of what the high-dollar aerospace market does…]
For me, the answer comes down to a few things.
1) Pedigree of the designers – do the designers have a verifiable background in the avionics world? There are a lot of code-slingers out there without much background in the avionics or aerospace world. It is easy to find an OEM display/computer, write some pretty code, and rent a booth in one of the big buildings at Oshkosh to show off your new system. But I like to know that the people behind a product to which I am going to trust my butt have been ingrained in the aerospace world for a while. I want them to have a first-hand understanding of what it is like to have their own little pink body in a machine hurtling through the air. And I like them to have been involved in big-time, real-world development of systems where reliability was king. Look into the background of the people behind a product, and see if it includes a decade or two with one of the “big boys” – the quality of the product is going to reflect their background.
2) Testing – how long has it been “in the field” with beta testers, and how many bugs have they found? Despite a reputation as an “early adopter” in some areas, I am actually quite cautious about what I fly IFR with. As long as hardware or software is in beta test, I won’t depend on it to function – I always keep an older, stable version of the software in another box, as well as a previous version on a thumb drive in the airplane. I will upgrade one part of my system at a time, and keep a reliable backup on board. And I still use backup instruments in case the physical laws governing the movements of electrons decide to take a hike. I like to see software flown by a large group of active beta testers before it is released for general use – even though our market is small enough that it is hard to get a “large group” doing anything at the same time. In addition to knowing that software and hardware have been tested, I like to know that bugs have been found, for there are ALWAYS bugs, and if the testing hasn’t found them, then they are still lurking….
3) In-service time – how long has it been out there being flown by customers? Like beta testing, I like to know that a system has seen some real-world service in day-to-day operations. It is neat to have a very low serial number of a new system, but I wouldn’t want to have to depend on it working. When companies announce new products and people rush to praise them before the first units have been delivered, I usually hang back for a year to see how things go. (Sometimes, it takes that long for systems to show up anyway…). Ask yourself if you have seen actual, installed systems, or heard people talk about them, or seen folks write about them on the ‘net. Until that happens, you have to wonder where the units are in the development cycle!
4) Company Longevity – how long have they been around? Everyone has to start somewhere, but the newer the company, the less experienced they are likely to be in the simple art of survival. Several of the companies building EFISes today started out many years earlier building something else for the aircraft industry – if they are still producing that “something else,” that’s a good clue that they might continue to stay around. I am nervous about start-ups when I have to design an airplane’s systems around a product like an EFIS – there is too much to re-do if you have to change horses later on.
There are other ways to deal with the “intangibles” – these are just a few of the things I look for when trying to decide what to put in my airplane. Whether it is avionics, an engine, or the kit itself, I want to know that what I choose has a track record of success behind it.