Someone said glass is more reliable than steam gauges based on their experience. In nearly 400 hours on my RV-7 with both steam and glass, I have had three instrument failures, all electronic: one electronic tachometer with an analog display and two different glass panel devices. Both glass failures were display failures that healed themselves over time. None of my steam gauges have failed. I think our new glass devices could use more vibration testing and environmental (temperature and humidity) testing of the hardware.

Besides hardware failures in the electronics, we also face a new reality: software failures. How many of these devices have validated software in them? I don't know the answer, but maybe a manufacturer could comment on the state of the art in this area of reliability testing. Based on my good experience, I would bet that Garmin does both environmental and software testing.
(Rats...lost my post and have to retype it all...grrr)...
A few comments.
Anecdotal evidence is not much help...for every statement about never having had a steam gauge fail, I or others could counter with "well, I have" (and yes, I have). However, your comments on testing bear investigation.
Garmin and others who sell TSO'd devices do, in fact, certify them to certain standards, including environmental qualification (DO-160). They also develop software against DO-178 (and perhaps other standards). On the experimental side, well...perhaps that's worth digging into. Does Dynon follow DO-178? DO-160? What about AFS or GRT? Perhaps someone could put together a comparison matrix of the relevant standards and requirements, and gather the info from the companies, so we can see whose equipment is being designed and built to the higher specs/tolerances?
If you want to discuss software "failures", though...that's a whole new ballgame. Software doesn't "fail". It does precisely what it was coded to do. So the issue becomes one of verification and validation of requirements, design, implementation, test, integration and deployment/maintenance. In effect, software failures are, by and large, more *system* problems. (Yes, I know, someone can code something incorrectly, etc....I'm talking about things like architectural and algorithmic issues).
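To make that concrete, here's a trivial, completely hypothetical sketch (not taken from any real EFIS code; the function and variable names are invented). The code does exactly what it was written to do, yet the system still misbehaves, because the caller and callee never agreed on units. That's a requirements/integration defect, i.e. a system problem, not a component that "wore out."

```c
#include <stdio.h>

/* Hypothetical terrain-alert check. Assumes altitude above ground in FEET. */
static int terrain_alert(double altitude_agl_ft)
{
    return altitude_agl_ft < 500.0;   /* alert below 500 ft AGL */
}

int main(void)
{
    double agl_meters = 200.0;        /* sensor side reports METERS (~656 ft) */

    /* The caller passes meters where feet were assumed: every line of code
     * below executes exactly as written, yet the system gives a nuisance
     * alert at 656 ft AGL. Nothing "failed" in the hardware sense. */
    if (terrain_alert(agl_meters))
        printf("TERRAIN ALERT\n");
    else
        printf("no alert\n");

    return 0;
}
```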
This is a rich field, and we've discussed this in a few other threads. If you're interested in safety-critical software, there are quite a few books, conferences, etc., that deal with this.
My view, as a software systems engineer, is that software cannot be analyzed (hazard analysis, FTA, FMECA, etc.) in isolation; the analysis has to be done as *part of* the system in which the software resides and which it controls.
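As a rough illustration of why, here's a toy fault-tree calculation (the event names and probabilities are entirely made up for this post, and treating a latent software defect as a per-hour probability is itself a simplification). The point is only that the top-level hazard emerges from the combination of a software defect with hardware and operational factors, which is exactly what you lose if you analyze the software by itself.

```c
#include <stdio.h>

int main(void)
{
    /* Basic-event probabilities per flight hour (invented numbers). */
    double p_hw_fault    = 1.0e-5;   /* AHRS/ADC hardware fault              */
    double p_sw_defect   = 1.0e-6;   /* latent algorithmic defect is hit     */
    double p_no_crosscheck = 1.0e-2; /* backup instrument failed or ignored  */

    /* OR gate: misleading primary display from either cause (independence assumed). */
    double p_misleading = 1.0 - (1.0 - p_hw_fault) * (1.0 - p_sw_defect);

    /* AND gate: the hazard needs a misleading display AND no cross-check. */
    double p_top = p_misleading * p_no_crosscheck;

    printf("P(misleading display) ~ %.3e per hour\n", p_misleading);
    printf("P(top-level hazard)   ~ %.3e per hour\n", p_top);
    return 0;
}
```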
(You also have to agree on terminology: "verification" and "validation," for example, mean two quite different things, roughly "did we build the product right?" versus "did we build the right product?")