A couple of weeks ago, I made the case that, in the early days of Linux, most of the momentum behind the open source operating system came from the desire to build a Unix-like system that could run on personal computers and would be free, as in costing no money. The enthusiasm for freely shared code came later. Today, I'd like to extend that argument by focusing on changes in the commercial atmosphere surrounding Unix and personal computers in the 1980s and early 1990s.

First, though, let me make clear that—despite what some uncharitable readers seem to think judging from their comments about my earlier post—I'm not out to denounce Linux or free software, or to make Linus Torvalds out to be a penny-pinching poser. Far from it: I love Linux and open source (and, although I've never met Torvalds, he seems like a really great guy), so much so that controlling for my bias in their favor is one of my biggest challenges as I research a book about the history of free and open source software.

But I also want my book to present a more objective, fact-based interpretation of the conditions surrounding Linux's birth than the one that comes out of existing accounts. Those accounts were all written by, or in close collaboration with, people who were themselves active leaders of the open source community, which makes their versions of the story inherently subjective. In particular, I seek to re-evaluate the narrative that Eric S. Raymond presents in his essays "A Brief History of Hackerdom" and "Revenge of the Hackers," in which he takes it as a given that most open source programmers, including those involved in the early Linux movement, were card-carrying "hackers" who programmed because they cared deeply about the principle of sharing code, not because they worried about money, cost and the commercialization of software.

As I pointed out previously, that's not the story that emerges if you look at early Usenet posts about Linux, where Torvalds and his supporters appear primarily interested in the fact that Linux was free-as-in-beer and free of a commercial license, not that the code would be freely shared.

Placed within the context of the time period, those attitudes made perfect sense. That's what I'd like to focus on today.

The Push for a Free Unix

To understand the impetus behind Linux when Torvalds wrote the first version of it in 1991, it's necessary to grasp the concerns of programmers and computer-science students of the time. Back then, Unix was the operating system of choice for most of these people. It was what they knew best, and it provided a lot more programming power than the dumbed-down, consumer-oriented platforms (of which MS-DOS, the progenitor of Microsoft Windows, proved the most enduring) that emerged with the personal-computing revolution.

After its development at AT&T Bell Labs in 1969, Unix spread to academic computer science labs, the same places where many future open source luminaries cut their coding teeth. For about a decade, Unix remained affordable and, more importantly, computers remained expensive enough that very few people had them in their homes. Instead, people ran Unix on their employers' or universities' machines.

Things started changing in the late 1970s, when personal computers began hitting the market, making it possible for ordinary people to own a computer. At the same time, Unix became more expensive, as licensing fees increased dramatically. These conditions presented hackers with a conundrum: For the first time, it was possible to have a computer in your home, but you couldn't afford to run Unix on it unless you were a millionaire. And if you couldn't run Unix, what was the point of having a personal computer?

In response to this challenge, there were some efforts to create affordable Unix clones for use on personal computers. One of the most successful was Minix, an operating system designed by the computer science professor Andrew Tanenbaum that appeared in 1987. (Other, more commercial Unix clones also existed, but most had little success. And there was, of course, GNU, a project with more clearly ideological goals, but it chronically failed to produce a viable open source Unix kernel.) Importantly for our story, the source code for Minix was openly available to universities, yet the operating system itself was not free of cost.

As a computer science student at the time, Linus Torvalds knew Minix well. What he and many of his peers apparently did not like about it was that it cost money. That's the message that comes clearly out of this newsgroup post from January 1992, in which Torvalds lashed out at Tanenbaum: "look at who makes money off minix, and who gives linux out for free. Then talk about hobbies. Make minix freely available, and one of my biggest gripes with it will disappear."

Torvalds and the early Linux crowd were not the only ones thinking this way. Indeed, as Peter Salus pointed out in his 1994 book, "A Quarter Century of UNIX," "the main motivation for the creation of a license-free Unix lay in AT&T's fees." That was because, by 1978, an AT&T Unix license cost more than $100,000; by 1993, the price had neared $200,000. Under these circumstances, it's easy to understand the strong push to build a free-as-in-beer Unix clone, especially one with the added benefit of being compatible with personal computer hardware rather than the expensive, institutional machines for which Unix was originally designed.

So, to sum up: What Linus Torvalds, along with plenty of other hackers in the 1980s and early 1990s, wanted was a Unix-like operating system that was free to use on the affordable personal computers they owned. Access to source code was not the issue; source code was already available through platforms such as Minix or, for those with real cash to shell out, through a source license for AT&T Unix. The notion that early Linux programmers were motivated primarily by the ideology that source code should be open, whether because openness produces better software or because it is simply the right thing to do, is therefore false.

That, at least, is the conclusion I have reached, based on research with primary sources that date to the time period in question—as opposed to essays such as Raymond's, which were written after the fact, by authors who were inclined to perceive the narrative in particular ways. If you think I'm wrong—which happens a lot, actually—I'd love for you to point out why in the comments below, or to contact me privately.

But, since I am in the business of writing fact-based history, I'd just ask you to do one thing: Show me the (primary) source that proves your point (to paraphrase Linus Torvalds).