The essence of this theory can be illustrated by this graph:
Of course, this graph has no actual numbers to back it up. It is just a visualization to help explain a concept that I've seen illustrated many times over. The X-axis represents the complexity of the task at hand. It could be setting up networking, viewing an image, making a movie, building and running a website, automating some complex database migration; really any task that might be done with a computer, large or small. The Y-axis then represents the relative cost of completing that task. I mean "cost" here very generally. The cost could be your own personal time, the cost of a license for a piece of software, the cost of training, or the cost of hiring a developer to write a program to do what you need.
Naturally, the cost of completing a task goes up for both Windows and Linux as the complexity increases. Notice, though, the different rates of change. The Linux curve is nearly linear, and even begins to flatten as the complexity goes up. This implies that as the complexity of a task rises, the corresponding increase in cost is proportionally similar, and that as one learns, those skills are readily applied to more complex tasks. The cost is commensurate with the complexity, and knowledge builds on itself, making the harder tasks more accessible. I like to call it "the fair cost of enabling technology."
For Windows, the curve starts out very flat. Moderately complex tasks don't cost much more to complete than trivial ones. This brings us to the Difficulty Divide: the space bounded by the curves where Windows wins. It is easier to complete tasks of that difficulty class in Windows than it is in Linux. It is my belief that most of the people who give up on Linux are the ones who aren't able to get past the Difficulty Divide. They may not have the time or the interest to get to the next level. The reason doesn't really matter; the fact is, they don't make it.
Notice, too, that the curve for Windows approaches vertical very quickly. As one's needs and knowledge increase, the cost to get to the next level with Windows goes up more and more quickly. Soon you find yourself searching far and wide for software because tools to complete your task are not readily available. Oftentimes those tools don't exist or are very costly. Even if you have the know-how to create a solution, the tools necessary to do that are often also quite costly. The relative opacity and monolithic nature of Windows comes into play here too. Comparatively sparse logging, cryptic errors, and complex hidden interactions can lead even a seasoned system administrator like myself to waste time guessing at the root cause of a problem. Oftentimes, even once the root problem is found, there is a high likelihood that no single tool will do what you need, and making the various pieces work together is in and of itself yet another challenge. These factors all add to the cost of completing the task at hand.
It is at this level of complexity that Linux wins. Thanks to the free availability of powerful tools and the open sharing of information among the Linux-using community, the cost of completing tasks rises much more slowly. It is surprising how quickly one's needs and skills can rise to the level where problems are easier to solve in Linux. For what I call "The Typical Technologist," it can take only a few months to get there with no previous knowledge of Linux. I've seen it happen. For people who aren't so into tech, it may take longer, but odds are they will get there. For the truly "hardcore," it may be a matter of days or weeks.
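To make the shape of the argument concrete, here is a toy sketch of the two curves. Every number in it is invented purely to mimic the shapes described above (a near-linear, slightly flattening Linux curve and a flat-then-steep Windows curve); it is not real data, and the crossover point it finds is just where these made-up functions happen to meet.

```python
# Hypothetical cost curves -- coefficients and exponents are invented
# to mimic the shapes described in the text, not measured from anything.

def linux_cost(complexity):
    """Near-linear and slightly flattening: knowledge compounds."""
    return 2.0 * complexity ** 0.9

def windows_cost(complexity):
    """Flat at first, then rising steeply toward vertical."""
    return 0.2 * complexity ** 2

def crossover():
    """Scan for where the curves meet -- the far edge of the Difficulty Divide."""
    x = 1.0
    while windows_cost(x) < linux_cost(x):
        x += 0.01
    return round(x, 2)

if __name__ == "__main__":
    for x in (1, 5, 10, 20):
        print(f"complexity {x:>2}: Windows {windows_cost(x):6.1f}  Linux {linux_cost(x):6.1f}")
    print("curves cross near complexity", crossover())
```

Below the crossover, the Windows curve sits under the Linux one (the Difficulty Divide); above it, the Windows cost pulls away rapidly, which is the whole point of the theory.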
Recently, though, I've had to modify my theory on the Difficulty Divide, and I'll give you my take on the current state of affairs next week. In the meantime, what do you think? Is the Difficulty Divide real? Does my explanation here accurately characterize it?