So, what does it mean that it has become possible for us to submit our choices of the alternatives before us to rules and formalized judgments, even, at times, in opposition to what works for us as individual selves (and persons defined by a cultural identity)?
It means that we can now function as moral agents, defined as ones who follow rules, laws, or formalized judgments even against short-term self-interest when necessary.
It also means that we can now make mistakes, that is, we can now fail to follow the right rule. Regarding the latter: when following intuitive snap judgments, we could always choose an alternative we later regretted (in hindsight), but it would always have “seemed like a good idea at the time”. For our standard was never more than to please ourselves, and we are always doing that in whatever we try to do.
But an expert experienced in the rules can now say “you should have known that that was the wrong choice to make”. And this applies in both the technical sense (“that was never going to work”) and the moral sense (“that was wrong and you knew it”).
Human beings do what no AI can do, re-programming themselves by overwriting their top-level algorithm (for an elaboration of this, see AI & My BS).
And while no human being does this except in the wake of an autobiographical experience which is externally determined at countless places, that is not as decisive as it sounds. To show this, let’s explore the software to hardware (or wetware) analogy.
What runs the AI? In one sense, of course, it’s the software program, the algorithm. In another sense it’s the hardware, hardware specially designed to receive the guiding software that runs it.
It’s just the same for a human being. One who knows and practices calculus follows the rules of calculus knowledge, or they’re a bad mathematician. But evolution had to design (in its ass-backwards trial-and-error way, which is quite efficient if you have geological-scale quantities of time on your hands) the wetware able to receive a guiding schema of calculus knowledge to run it.
The design and manufacture of the hardware in the one case is determined by engineered mechanical causes and in the other by evolutionary determined causes. And yet, in both cases, an abstract conceptual schema determines the outputs thereafter, whether decisions or actions, of the hardware/wetware. And should either the AI or the human diverge in their outputs from the guiding schema, the schema takes precedence and sets the standard. The divergence makes the empirical object “wrong”, and in need of repair, remonstrance or reprogramming — real, but wrong, as judged by an abstract conceptual standard that is internally coherent but which can only be embodied in a tangible body using physical forces. And in both cases, failure to adapt to changing conditions means being out-competed and made obsolete or extinct.
In the case of the AI, those physical forces are designed and controlled by the programmer in accordance with a conceptual blueprint that the programmer creates. In the case of the human being, evolution has, through a lengthy process of trial-and-error, hit upon the felicitous combination of forces and factors that bring into being the thing with powers not previously present.
Now, let’s return to the question “what does it mean that humans can guide their decisions and actions by rules and formalized judgments?”
It means that it’s useful to divide them into two classes, those who reliably follow the rules and those who don’t. In moral matters, it is better to form partnerships (in marriage, in business, in politics) with those guided by rules and such. And having chosen a rule-abiding life partner, it’s better to raise rule-abiding offspring. And it’s better to be such a person because you are more likely to be accepted as a partner by people worth having as partners. And that’s quite apart from the shame of being given the chance to become fully human and having chosen to remain an animal instead, little more than a by-product of some rather slimy and imperfect wetware biological processes. Bad people and good are equally real and equally biological, but avoid the bad and partner with the good.
Likewise, right answers and wrong answers are equally products of neurological processes, but seek the one and eschew the other. Knowledge and ignorance are equally qualities characteristic of people. Recognize each for what it is wherever it occurs, but never rely on the rudderless opinions of the ignorant, and seek out the rationally-tested opinions of the knowledgeable.
For all practical purposes, in any sense of the phrase, the difference between agents guided by rules and formalized judgments and those without such guidance is critical. Next to that the observation that all agents are strongly shaped by their environments is absolutely trivial.
The rural economy is essentially extractive, and land is the essential asset it extracts from.
Mineral mining and fossil fuel extraction are obviously extractive but farming and ranching are no less so. Farmers extract nutrients from soil using cultivated crops as their means, while ranchers extract nutrients from grazing land using livestock as their means.
Populations spread out to convert available land (whether one’s own or someone else’s) into resources for active extraction.
Think AP Euro. Think Russia, China and American Manifest Destiny. Communist Internationalism is just their version of Manifest Destiny on steroids. (We were smarter, grabbing while the grabbing was good, and then growing out of it. Remember: don’t ask for permission, ask for forgiveness!)
Extractive economies expand to maximize land under cultivation by settlement, conquest, and annexation.
Even when wealth is land-based, the rural economy needs population centers in which cluster those with the special knowledge and skills required for trade, shipping, and auxiliary manufacturing. Over time, banking, finance, governmental management, the legal management of disputes, and religious and cultural legacies require universities, centers for propagating knowledge and knowledge-based skills.
Extractive/mercantile and industrial/mercantile economies create trading posts and colonies to maximize trade; when trading posts are denied, restricted, or attacked, colonization follows.
With the industrial revolution, economies of scale dictate large-scale factories and labor forces, adding new impetus to the growth of cities. The more emerging industries require specialized knowledge, experience, or skills, the more those industries develop in city-centered clusters.
The larger the enterprise, the greater the pools of capital upon which it must draw. Again, the industrial capitalists cluster in cities, as do the experts who serve them and the providers of new services which they demand. The gameboard tilts further towards cities and the deeper and broader circulation of knowledge, both general and specialized. Universities broaden their scope, their connections to industrial production, economic and political management; their density and geographic footprints increase, as a growing percentage of the labor force (as well as owners and managers) need a college education. Universal public education is introduced because even the workforce (not to mention the electorate) needs a degree of literacy, numeracy, and general knowledge.
With the digital revolution, network effects tilt the gameboard yet further toward urban centers and away from rural districts.
The urban digital economy produces two final threats to the rural economy and the ways of life it supports: AI and robots, the “driverless” means, respectively, of extracting information from data and of powering manufacturing processes.
Human labor will focus more and more on the knowledge needed to control those means: the knowledge of the STEM disciplines, innovative and creative knowledge, human-interaction knowledge (knowledge about leadership, management, and people skills), and other context-sensitive forms of knowledge, which will be the last to be delegated to AI, if ever they are. Power is the one thing people are loath to delegate. (You can do my scutwork, but I’m still in charge!)
If digital devices ever threaten to overtake human knowledge in those latter areas, I suspect that the number of voters directly at risk would be sufficient to use political means to harness those gains for the broad base of the electorate (which will be a middle-class with a sufficient level of knowledge to have maintained their clout in both the economy and politics).
That seems to me the likeliest outcome, short of the sci-fi dystopic scenario in which humans create AI with both the instrumental capacity and the will to, in effect, stage a palace coup against humanity in general. Not likely, but perhaps possible if we are stupid or greedy enough to set it in motion.
Far more likely: Machiavellian persons or parties using AI and robots to enslave their own societies (with Russia and China, Facebook and Google, Alibaba and Baidu leading the way). After all, if all that AI-driven robots lack is the will to become dictators, any number of narcissistic power-seekers would be happy to provide that.
Rural districts will maintain disproportionate power under legacy systems of representation, which they game desperately to counter their continually diminishing role in the economy, and with a generational delay, in setting public opinion and cultural norms.
Culture wars seem the obvious result of such conflicts. The short-term advantage is with the rural districts which can invoke traditional values, their own entrenched in-group interests and privileges, and fear in the general populace of accelerating fundamental changes. The long-term advantage is with the urban centers and the better educated, for the underlying economic and demographic trends ineluctably favor them.
Such big picture thinking is a tradition that has been developing since Kant, through figures like Hegel, Marx, John Dewey, Samuel Huntington and Francis Fukuyama, not to mention professional think-tank futurists like Herman Kahn or contemporary popularizers like Alvin Toffler, Ray Kurzweil, and Michio Kaku.
Such big picture speculations are good as conceptual starting-points but must be tested using empirical studies to be considered at all reliable. I suspect that many, but not all, of my speculations would bear up under such scrutiny.
Why should a philosophy & political theory guy like me be writing on AI?
Well, besides “because I feel like it!”, I do have an MS in Computer Science from DePaul University, with a specialization in AI. But I got it in 1986, so it’s a little out of date (sell by June 1990?). And, yes, it’s a Master’s (MS), not a Bachelor’s (BS), but BS is catchier, don’t you agree? Besides, an MS is, arguably, just a BS with extra toppings (at about $1,000 per additional ingredient).
Back in ’86, the capstone of AI was the expert system. Expert systems were essentially giant flow charts composed of if-then statements. If <condition = x>, then <execute instruction y>. You built expert systems by interviewing people in fields dependent on complex rule-based judgments (oil prospecting geologists, medical doctors, tax lawyers, chess masters), and then representing those judgments in large programs composed of elaborately nested if-then statements.
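A toy version of the idea, in Python rather than the Lisp of the era, with entirely hypothetical rules invented for illustration, might look like this:

```python
# A toy rule-based "expert system": knowledge captured as explicit
# if-then rules, evaluated in order until one fires. The rules and
# advice strings are hypothetical, for illustration only.

def diagnose(symptoms):
    """Return advice from the first matching rule, mimicking the
    nested if-then structure of 1980s expert systems."""
    rules = [
        (lambda s: "fever" in s and "stiff neck" in s, "urgent: see a doctor"),
        (lambda s: "fever" in s and "cough" in s,      "likely flu: rest and fluids"),
        (lambda s: "cough" in s,                       "likely cold: monitor"),
    ]
    for condition, action in rules:
        if condition(symptoms):
            return action
    return "no rule matched: consult an expert"

print(diagnose({"fever", "cough"}))  # → likely flu: rest and fluids
```

The whole program is nothing but the interviewed expert’s judgments, frozen into condition-action pairs; the machine contributes only the bookkeeping of checking them in order.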
The irony of early AI was that it proved much easier to convert into software the knowledge of rare experts who had spent their careers mastering complex systems than to mimic the common sense capabilities of your average joe: recognizing objects, categories, and words, walking, talking and not shooting oneself in the foot (a computer freeze is essentially a computer shooting itself in the foot forever and ever; we clever humans usually stop after the blood begins to flow).
For my Master’s Thesis, I created a program, in the programming language Lisp, that played Spades, a bidding game like bridge but simplified in a few ways, among them that spades are always trump and that bidding (on how many tricks you’ll take) is limited to a single round.
I built it as an expert system, meaning I knew the basic strategies a player used, and coded them into if-then rules, with a randomizer for resolving toss-up situations. It was really just a matter of formulating the rules-of-thumb I followed when playing, and converting those rules into algorithms — precise unambiguous sets of instructions that a computer can follow.
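The original Lisp is long gone, but a hypothetical reconstruction of one such rule-of-thumb (the point counts and thresholds here are my invention, not the thesis code) gives the flavor:

```python
import random

# Hypothetical reconstruction of a Spades bidding rule-of-thumb:
# count one trick per high spade and per off-suit ace, with a
# randomizer to resolve toss-up hands (a long trump suit *might*
# be worth one extra trick).

def bid(hand):
    """hand: list of (rank, suit) pairs, rank 2..14 with 14 = ace."""
    tricks = 0
    for rank, suit in hand:
        if suit == "spades" and rank >= 12:   # Q, K, A of trump
            tricks += 1
        elif rank == 14:                      # aces outside trump
            tricks += 1
    # toss-up rule: 5+ spades may add a trick, decided by coin flip
    if sum(1 for _, s in hand if s == "spades") >= 5 and random.random() < 0.5:
        tricks += 1
    return tricks

print(bid([(14, "hearts"), (13, "spades"), (12, "spades")]))  # → 3
```

That is the whole trick of expert-system building: turn “I usually bid my high spades and my aces” into an unambiguous procedure, and bolt on a randomizer wherever the human would have shrugged.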
The big breakthrough in AI came when research shifted from rule-based expert systems to Deep Learning. Deep Learning uses computers organized as neural networks, mimicking the structure of our own brains, composed of networks of neurons connected to one another adaptively, as dictated by incoming experiences.
With a neural network in place, the strategy was initially to feed it massive input streams of data relevant to the task to be learned. So, if you were constructing a chess-playing AI, you’d input classic games between grandmasters, and the AI would grow to recognize which moves raised and lowered the probabilities of eventual victory.
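Stripped of the neural-network machinery, the logic of learning from recorded games can be sketched as simple win-rate bookkeeping. This is a schematic stand-in, not Deep Learning proper, and the game records below are invented data:

```python
from collections import defaultdict

# A minimal sketch of learning from recorded games: count how often
# each (position, move) pair appeared in a winning game, then prefer
# the move with the highest observed win rate.

def learn(games):
    """games: list of ([(position, move), ...], won) records."""
    wins, plays = defaultdict(int), defaultdict(int)
    for moves, won in games:
        for pos, mv in moves:
            plays[(pos, mv)] += 1
            wins[(pos, mv)] += won
    return wins, plays

def best_move(pos, candidates, wins, plays):
    return max(candidates,
               key=lambda mv: wins[(pos, mv)] / plays[(pos, mv)]
               if plays[(pos, mv)] else 0.0)

# Invented "classic games": e4 openings won, a3 opening lost.
games = [([("open", "e4")], 1), ([("open", "e4")], 1), ([("open", "a3")], 0)]
wins, plays = learn(games)
print(best_move("open", ["e4", "a3"], wins, plays))  # → e4
```

A real network replaces the lookup table with learned weights that generalize to positions it has never seen, but the training signal is the same: moves correlated with eventual victory get reinforced.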
The big surprise came when researchers discovered that the newer, faster, more powerful computers could ditch learning from the best human experts. These computers had the speed and power to simply run through all possible permutations, thus discovering brilliant moves whose beginnings were so unpromising that human experts overlooked them, for humans let their intuitions guide them away from less promising and toward more promising avenues to explore.
AI has no intuition. It simply explores all possible permutations by trial-and-error. In short, Deep Learning removes intelligent design from chess-playing. It blindly explores random permutations of the totality of possibilities, following in the footsteps of evolutionary biology, in which mutation works as a randomizing function. Biological mutation is just permutation through all possible configurations. Those that can’t survive, or “win”, die or go extinct.
Deep Learning is just: 1) an algorithm operating within a permutational space 2) that generates algorithms for identifying patterns found within that permutational space, 3) that formulates responses to identified patterns, and 4) that optimizes the user’s situation in terms of a set of victory conditions.
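The four-part definition can be rendered schematically. The “game” here is a deliberately trivial invention (pick three moves from {-1, 0, +1}; victory = highest total), so each numbered part is visible in one line:

```python
from itertools import product

# (1) an algorithm operating within a permutational space:
space = product([-1, 0, 1], repeat=3)

# (2) a pattern identified within that space (here, the sequence's sum):
def pattern(seq):
    return sum(seq)

# (3) + (4) a response to the identified patterns that optimizes
# the victory condition (maximize the final total):
best = max(space, key=pattern)
print(best)  # → (1, 1, 1)
```

Real Deep Learning explores spaces far too large to enumerate and learns the pattern function rather than being handed it, but the skeleton is the same: a space, a pattern, a response, a victory condition.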
Moral of the story: one of our main cognitive advantages over AI is that we take intuitive shortcuts instead of using brute force to survey and exhaust all possibilities.
In the long run represented by modern chess- (and Go-) playing computers and by biological evolution, this no longer looks like an unqualified advantage.
And our intuitive thinking sometimes helps and sometimes hurts, as Nobel prize-winner Daniel Kahneman shows in Thinking, Fast and Slow.
We may then have to rely on our other advantages: a contextual understanding of the conditions of our own survival and of the value of life and living well.
Utopian and dystopian possibilities abound. The evidence of history is mixed.
If an AI were ever to resist being disabled or unplugged, then the game might be up.
Or perhaps it would only be entering a new phase. Who’s to say they’d be any more sure-footed at living well than we are?
The AIs that played and then beat the best human masters of chess, and then Go, went through three stages.
First, they were programmed to mimic the best human players using rules and algorithms.
Then, they were trained by having classic games by masters input into them for big data pattern recognition.
Finally, they were just left to work through every random possibility. At first the computers made moves so stupid, you or I would never even consider them. But they quickly “learned” to abandon moves that led to defeat. Ultimately these systems beat the best human players and the earlier AIs, and by wide margins.
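The third stage can be sketched in miniature. The game below is one-pile Nim (take 1–3 stones; whoever takes the last stone wins), chosen because random self-play is cheap and the winning strategy, leaving your opponent a multiple of 4, is known. No expert rules, no recorded games: the learner just plays randomly and keeps win-rate statistics for every (pile, move) pair:

```python
import random
from collections import defaultdict

# Random self-play on one-pile Nim: initially every move is as "stupid"
# as any other, but moves that tend to end in defeat acquire low win
# rates and are abandoned when we play greedily from the statistics.
random.seed(0)
wins, plays = defaultdict(int), defaultdict(int)

for _ in range(20000):
    pile, history, player = 10, [], 0
    while pile > 0:
        take = random.randint(1, min(3, pile))
        history.append((player, pile, take))
        pile -= take
        player ^= 1
    winner = history[-1][0]          # whoever took the last stone
    for mover, p, t in history:
        plays[(p, t)] += 1
        wins[(p, t)] += (mover == winner)

def best_take(pile):
    return max(range(1, min(3, pile) + 1),
               key=lambda t: wins[(pile, t)] / plays[(pile, t)]
               if plays[(pile, t)] else 0.0)

print(best_take(5))   # the learned move leaves the opponent a multiple of 4
```

Nothing in the program knows the multiples-of-4 theory; the right move simply emerges from blind trial-and-error, which is the whole point of the third stage.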
My only explanation is that human masters take intuitive shortcuts, never even considering lines of play that look unpromising. But some of those lines turn out to be hidden gems, which brute-force computing discovers and adds to the computer’s arsenal.
So, now I make a habit of altering my habits in minor ways, just to see if something shakes loose. And a surprising amount of the time, these random variations produce new approaches that are either straight out better, or better in some circumstances, than my old rigid habits.