Well, I’d like to talk a little bit about how random testing differs from fuzzing, and the short answer is they’re the same thing. The long answer is going to require a bit of explanation. So let’s go back in time to 1990, when a professor called Bart Miller and his students published a paper called An Empirical Study of the Reliability of Unix Utilities. What they did as part of this fuzzing effort was provide completely random data to a bunch of Unix command-line utilities. These were things like editors, terminal programs, text processing utilities, and other similar Unix tools that you can basically think of as predating the era of graphical user interfaces on Unix systems. And what they found is that using this incredibly simple technique, that is, doing random testing without worrying at all about the input validity problem, they were able to crash a quarter to a third of these utilities on basically any version of Unix that they tested. So what you have here is a pretty strong result: they were able to crash lots of applications with minimal effort. What that means is that the quality of the input validation done by these sorts of programs at the time was really rather bad.

A few years later, in 1995, the same group repeated the effort and wrote another paper about it. This time they not only tested the same kind of utilities that they had tested five years earlier but also extended the work to network applications and GUI applications, and they got very similar results. Then, another five years later, in 2000, the same people did another study, and this time they fuzzed Windows applications. What they found was basically more of the same: they could crash most of the applications that they tested. And then finally, in 2006, the most recent installment of a fuzzing study by this group was published. This time they attacked Mac OS X, and this time they found something a little bit different.
The command-line utilities on Mac OS X would hardly ever crash; they found a much lower rate of crashes than they had found earlier. But on the other hand, of the 30 GUI apps that they tested, 22 could be crashed. It’s worth mentioning that as this group evolved their fuzzing work, they kept having to write new tools. For example, to fuzz the Windows applications they had to send Windows events to GUI applications, and they had to do something similar for Mac OS and, previously, for X Windows applications. So they had to keep evolving their tools, but the input generation methodology that they used, that is to say, basically generating random garbage and not really worrying about the input validity problem, remained the same across all of these studies. Now, what I’ve covered so far was this particular random testing effort by this one research group. But something interesting happened, I believe sometime around 2000 or a little after: the term fuzzing took on another use.
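The input-generation methodology these studies all shared, piping completely random bytes into a program and watching for a crash, is simple enough that we can sketch it in a few lines of Python. This is my own illustrative sketch, not the Miller group's actual tooling: the function name `fuzz_once` and the choice of `cat` as a target are assumptions made here for the example, and it only covers the stdin case, not the GUI-event fuzzing the later studies needed.

```python
import os
import subprocess

def fuzz_once(cmd, nbytes=1024):
    """Feed random bytes to a command's stdin and report whether it crashed.

    On POSIX systems, a process killed by a signal (e.g. SIGSEGV) has a
    negative returncode, which is the kind of failure these studies counted
    as a crash.
    """
    data = os.urandom(nbytes)  # random garbage: no attention to input validity
    proc = subprocess.run(
        cmd,
        input=data,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
        timeout=5,  # don't let a hung target stall the fuzzing loop
    )
    return proc.returncode < 0  # negative => terminated by a signal

# A robust utility like `cat` should survive random garbage on stdin:
# fuzz_once(["cat"]) returns False
```

In a real campaign you would call `fuzz_once` in a loop over many utilities and many random inputs, logging the command and the input bytes whenever it returns True so the crash can be reproduced later.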