I was reading an article on Ray Kurzweil's site kurzweilai.net titled
"Is AI Near a Takeoff Point?" One section of it raised some interesting points (at least interesting to me).
The author, J. Storrs Hall, compares strong AI (that's Artificial Intelligence, for those not in the know), and the problem of controlling a true strong AI machine (a truly intelligent machine), to the problem of controlling a government. He begins by calling modern governments a type of huge computer system (with guns).
Cross posted from my new blog.
Hall goes on to point out that political systems controlled by a single individual or small group tend to be the most oppressive and the least self-correcting. Systems like those traditionally found in the US and other modern democratically-based governments, which distribute power and contain internal checks and balances, are the least likely to go down the path of oppression. Governments go bad when a single person or small group takes over the various levers of power, no matter how distributed that power is ostensibly supposed to be.
In other words, as long as a large complex system like a government retains distributed centers of power with conflicting goals that all have to negotiate to get along, things will run fairly well. Though the author does not point this out, in my opinion we can see a wonderful example in our current US administration.
Normally the checks and balances of the US Constitution keep power distributed across the executive, judicial, and legislative branches, with the legislative branch holding the majority of the controls (money and laws) because the legislature is itself composed of another distributed system of conflicting and cooperating goals among the various representatives and senators. This normally keeps things on a fairly even keel. Even wild extremes, like wars and depressions, can eventually be ridden out. The peaks and valleys are generally smoother than in other systems. (Think, for example, of the currency crises and political upheaval around regime change in other countries vs. the US's usual reaction to similar conditions.)
The problem we encounter now is that the current government has amassed far too much control in one branch. It so happens to be the branch in which that much control is most dangerous and harmful to the system as a whole, because it has a single focus at its head: the president.
If Congress gains too much power, there is still a further level of distributed power in the conflicting and cooperating agendas of the individual legislators. If the courts gained the upper hand (which would be a very strange situation), they really couldn't do much, since they are essentially a reactive body, not an active one. They hold great power, but cannot truly initiate things; they can only react to situations brought before them.
This all leads me to think about how to control the kind of strong AI the author is actually discussing. How do you control, how do you regulate, such a "self-creating, self-modifying intelligent system"?
One possibility may be to build into the AI a set of base instructions, similar to the US Constitution, that ensures that different parts of the AI are motivated by different agendas and thus need to cooperate while keeping watch over one another. Or would the AI system as a whole be part of a larger system, similar to the government, where other AI and/or human systems keep an eye on each other?
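As a toy sketch of that constitutional idea (every name and threshold below is my own invention for illustration, not anything from Hall's article), imagine an AI in which no single module can act alone: a proposed action is executed only if a strict majority of independent overseer modules, each with its own veto criterion, approve it.

```python
# Toy "constitutional AI" sketch: an action executes only if a strict
# majority of independent overseers, each with its own veto criterion,
# approves it. All names and thresholds here are illustrative.

def make_overseer(name, veto_if):
    """Build an overseer that vetoes any action matching its criterion."""
    def overseer(action):
        return (name, not veto_if(action))  # (who voted, did they approve)
    return overseer

def checks_and_balances(action, overseers):
    """Return (approved?, per-overseer votes) for a proposed action."""
    votes = dict(o(action) for o in overseers)
    approvals = sum(votes.values())
    # Strict majority required, so power stays distributed.
    return approvals > len(votes) // 2, votes

overseers = [
    make_overseer("safety", lambda a: a.get("risk", 0) > 0.5),
    make_overseer("budget", lambda a: a.get("cost", 0) > 100),
    make_overseer("ethics", lambda a: a.get("harms_humans", False)),
]

ok, votes = checks_and_balances({"risk": 0.2, "cost": 50}, overseers)
```

The point of the sketch is only the shape: each overseer has a different agenda, none can initiate anything, and no single one can decide alone.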
This may not, as it does not in our current world, prevent one force from co-opting one of the watchers and forcing it to go along with whatever the central power wants. We have to be aware of this possibility and be especially vigilant. Could we have other systems that serve the role the press is supposed to play in our current political system? Muckraking AI? Investigative AI? Would "memory leak" come to mean something different?
I'm not sure if this is merely an interesting analogy, or a possible mental model for future controls over not just our own intelligences, but also the transhuman intelligences to come. We already speak of "contracts" in certain types of software architecture today. But in today's software, when we deal with distributed, possibly conflicting agendas that need to be controlled, we usually create a strong central authority that can mediate between the conflicting goals of the users of the service.
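That strong-central-authority pattern can be sketched in a few lines of Python (again, purely my own illustrative names, not any real system): agents with different agendas can only submit requests, and a single mediator arbitrates who gets each contested resource.

```python
# Sketch of the strong-central-authority pattern: a Mediator resolves the
# conflicting agendas of several agents by granting each contested resource
# to the highest-priority requester. All names here are illustrative.

class Agent:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority

    def request(self, resource):
        # An agent can only ask; it cannot seize the resource itself.
        return {"agent": self.name, "resource": resource, "priority": self.priority}

class Mediator:
    """The central authority that every request must pass through."""
    def arbitrate(self, requests):
        winners = {}
        for req in requests:
            res = req["resource"]
            if res not in winners or req["priority"] > winners[res]["priority"]:
                winners[res] = req
        return {res: req["agent"] for res, req in winners.items()}

agents = [Agent("planner", 2), Agent("optimizer", 5), Agent("auditor", 3)]
requests = [a.request("memory") for a in agents] + [agents[2].request("disk")]
result = Mediator().arbitrate(requests)
```

Note the contrast with the constitutional idea above: here all the deciding power sits in one place, which is exactly the arrangement the government analogy warns against scaling up.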
Could this be an equivalent point in computational evolution to our period long ago in social evolution when we discovered that we needed some central authority, tribal chief or clan elder, to help negotiate social contracts and conflicts within the group? We shall see. But if so, this time I'd like to skip the whole God-King phase of control, thank you very much.