At the VI Computer Law World Conference, Andres gave an interesting presentation about the distribution of nodes in networks (which is definitely not random), and in particular its implications for regulation.
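One common explanation for such non-random distributions is preferential attachment: nodes that already have many links attract new links faster, so a few hubs end up dominating the network. As a purely illustrative sketch (my own, not from the talk), a few lines of Python show the effect:

```python
import random

def preferential_attachment(n_nodes, seed=None):
    """Grow a network where each new node links to an existing node
    chosen with probability proportional to its current degree."""
    rng = random.Random(seed)
    degrees = [1, 1]  # start with two nodes joined by one link
    for _ in range(n_nodes - 2):
        # pick an attachment target weighted by current degree
        target = rng.choices(range(len(degrees)), weights=degrees)[0]
        degrees[target] += 1
        degrees.append(1)  # the newcomer arrives with a single link
    return degrees

degrees = preferential_attachment(10_000, seed=42)
degrees.sort(reverse=True)
# A handful of hubs accumulate a large share of all links,
# while the vast majority of nodes keep only one or two.
print(degrees[:5], "...", degrees[-5:])
```

Run it and the top few nodes hold degrees in the dozens or hundreds while the median node has a single link, which is exactly why a small number of "focal points" matter so much for regulation.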
This reminds me of a presentation at the recent iCi Research Symposium by Sandy Pentland, where he explained the research being undertaken at MIT on predicting human behaviour through wearable computing. Sandy showed that, with astonishing accuracy, he could identify social outliers and influencers; predict the location, associations, and health of participants; and even predict the outcomes of wage negotiations and blind dates, using such things as movement sensors in mobile phones and voice-pattern matching (not speech recognition, but pitch, tempo, and the like).
In a previous abstract, Andres asks:
“How should we regulate networks if we can find certain deterministic characteristics to them? Is enforcement of illegal behaviour easier to regulate because we understand the technology?”
Obviously these focal points provide big juicy targets for regulation; one comment from the discussion was that regulating Google would have a much larger effect than regulating any other random node. Indeed, we need to seriously consider how we apply pressure, or allow pressure to be applied, to central nodes.
I'm a little bit more concerned with forces internal to these powerful nodes. Instead of regulating Google, what happens when Google changes itself to affect everyone else? We've already seen this debate with reference to the alteration of search results on Google's Chinese site. Fundamentally, a little bit of social engineering on the part of a small number of core private companies could radically change the way the Internet looks. Everyone trusts Google – maybe they could convince a critical mass to adopt new root name servers, for example…?