Some of the presentation seems a bit dated (the book was written in 1999) - there's no mention of Google, eBay, or spam filters, to name just a few common bots. Moreover, the style of presentation endows bots with human-like characteristics - bots are insatiable information gatherers, greedy and remorseless hagglers, socially inept negotiators - which makes it hard even for me to keep in mind that bots are just software programs designed and written entirely by humans, rather than autonomous agents. (I encounter anthropomorphic tendencies in people's interactions with technology all the time, such as the belief that the computer is maliciously crashing or that the search engine is petulantly withholding desired information, and I often have to remind people that other people - the designers - are actually at fault for bad design.)
Brown and Duguid warn that bots can't be trusted to be fair, and that they lack the ability to make the negotiations and accommodations that give stability to economic markets and social forums alike. They allude to what biologists call the "Red Queen Effect," in which bots (actually, bot designers) must constantly innovate just to stay in place - the ongoing arms race between worms and software patches, for example.
They also warn against the faux panacea of better encryption through Moore's Law, pointing out that the same computing power available to make encryption stronger is also available to break it, and that encryption, no matter how strong, is just one link in a chain of actions with much weaker points, such as human trust or institutional accountability. (It is for this reason that I believe training in human-computer interaction techniques should be required of all computer scientists - many really are myopically focused on information exchange and leave out the users entirely, or claim that people can easily be replaced by bots without understanding any of the social reasons that bots will fail.)