
In the early '90s, somebody found a way to do a scientific measurement of the reliability of software. Here's what he did. He took several sets of comparable programs that did the same jobs &mdash; the exact same jobs &mdash; in different systems. Because there were certain basic Unix-like utilities. And the jobs that they did, we know, were all, more or less, imitating the same thing, or they were following the POSIX spec, so they were all the same in terms of what jobs they did, but they were maintained by different people, written separately. The code was different. So they said, OK, we'll take these programs and run them with random data, and measure how often they crash, or hang. So they measured it, and the most reliable set of programs was the GNU programs. All the commercial alternatives, which were proprietary software, were less reliable. So he published this and he told all the developers, and a few years later, he did the same experiment with the newest versions, and he got the same result. The GNU versions were the most reliable. People &mdash; you know there are cancer clinics and 911 operations that use the GNU system, because it's so reliable, and reliability is very important to them.
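The measurement described here is what is now usually called fuzz testing: feed a program random input and see whether it crashes or hangs. A minimal sketch of that kind of test might look like the following; the target utility (`sort`) and the parameters are just illustrative choices, not details from the study itself.

```python
import random
import subprocess

def fuzz_once(cmd, nbytes=1024, timeout=5, seed=None):
    """Feed random bytes to a program's stdin; report 'ok', 'crash', or 'hang'."""
    rng = random.Random(seed)
    data = bytes(rng.randrange(256) for _ in range(nbytes))
    try:
        proc = subprocess.run(cmd, input=data,
                              stdout=subprocess.DEVNULL,
                              stderr=subprocess.DEVNULL,
                              timeout=timeout)
    except subprocess.TimeoutExpired:
        return "hang"
    # A negative return code means the process was killed by a signal,
    # e.g. SIGSEGV -- that is, it crashed on the random input.
    return "crash" if proc.returncode < 0 else "ok"

if __name__ == "__main__":
    # Run a batch of trials against one utility and tally the outcomes.
    results = [fuzz_once(["sort"], seed=i) for i in range(20)]
    print({outcome: results.count(outcome) for outcome in set(results)})
```

Running the same batch of random inputs against each implementation of a utility (GNU, plus each proprietary counterpart) and comparing the crash/hang counts is, in essence, the comparison the experiment performed.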