Two Drunk Guys Versus AI

There’s a really annoying abundance of hype around artificial intelligence (AI) lately.  Or at least it’s annoying to me because the current hype sounds like a total rehash of old news.  However, it inspires me to share a story of human triumph over AI.

Before jumping into that story, I’d like to acknowledge that I’m far from the only person who is having a cranky “get off my lawn” reaction to this.  In particular, I found a lovely article written by Dr. Alok Aggarwal which provides a nice background on the last 50+ years of AI development.  That article is a nice antidote for anyone feeling feverish about the latest Gartner Hype Cycle for AI.  Long story short, each decade sees new crazy predictions about AI which inevitably fall short.

I’m not going to tread over that same ground.  Instead, I’d like to share some background on the human side of AI in the marketplace.  Starting with our own stupidity.

We’re Making Artificial-What??

There is NO universally accepted definition of what “intelligence” is or means.  It is, after all, a concept rather than an observable phenomenon of nature.  Or put another way, there’s no physical device available that can objectively measure intelligence like you can measure mass, velocity, force, etc.  Not here to debate it, just establishing that so we can move on.  Want to debate it? Go have fun editing the Wikipedia page or maybe just read one of many university research papers by CS majors who couldn’t think of something new (it’s an evergreen area).

We’re Irrational About Our Rationality

In my experience, discussion of artificial intelligence often assumes that human intelligence is a concept in the realm of logic and reason.  Which is pretty irrational, considering how much we know about how fear, bias, and rationalization influence human decision-making.

Or to put it succinctly, combine the Dunning-Kruger effect with optimism bias and the Pareto principle, and you have people consistently making unrealistic predictions about what AI can accomplish and how long it will take to get there.

Whether it was expert systems, case-based reasoning, neural networks, natural language processing, or the stuff that sometimes gets lumped in with AI discussions like data mining, genetic algorithms, and game theory, there’s always someone really excited about the “promising” applications that aren’t quite ready for widespread adoption, but who is super-confident that the final 10% of whatever it is will be solved when we get another 10x in computing power, data, or both.  I personally remember some of that from the 80’s and 90’s, and we have at least 1,000x more computing power and data today, and we still don’t have HAL 9000 powering my Amazon Echo.

Underestimating Human Un-Intelligence


A lot of the current crop of AI hype is around areas that don’t require starting with a big established base of human knowledge.  Instead, with the amazing amount of data available for a variety of domains, there’s been much more focus on machine learning techniques.  In brief, you give the machine a lot of positive and negative examples of what you’re looking for, and it figures out a mechanism that would predict those outcomes from that data.  Then you feed it more data, it tweaks itself some more, and so on and so on.  On paper, this should uncover/evolve “intelligence” that is beyond humans, because our brains can’t handle the amount of data needed to spot those patterns and insights.
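To make that loop concrete, here’s a minimal sketch in Python of the “feed it labeled examples and let it tweak itself” idea, using a toy perceptron-style learner.  The example data, learning rate, and number of passes are all invented for illustration; real systems are vastly larger, but the shape of the loop is the same.

    # Toy sketch of the machine learning loop described above (illustrative only).
    # Each example is (feature vector, label), where label 1 = positive, 0 = negative.
    examples = [
        ([1.0, 0.2], 1),
        ([0.9, 0.4], 1),
        ([0.1, 0.8], 0),
        ([0.2, 0.9], 0),
    ]

    weights = [0.0, 0.0]
    bias = 0.0
    learning_rate = 0.1

    for _ in range(100):                  # keep feeding the data through
        for features, label in examples:
            score = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if score > 0 else 0
            error = label - prediction    # a wrong guess produces a nonzero error...
            # ...and the machine "tweaks itself" a little in response.
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, features)]
            bias += learning_rate * error

    print(weights, bias)                  # the mechanism it settled on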

Except, of course, there are still humans involved in writing the machine learning algorithms and in the selection of the data.  And then there’s our irrational denial about problems existing in the first place, so we wind up with the largest machine learning systems in the world unable to identify people of color.

AI By Any Other Name Actually Sounds Much, Much Sweeter


My early job experience in AI in the 90’s taught me a few things, among them that the market had experienced a backlash against AI in the 80’s.  It played out in much the same way industrial robots sparked backlash – people didn’t want automation to take their jobs.  AI was the automation of thinking while industrial robots were the automation of labor.  My old boss at Gold Hill, Celia Wolf, explained to me that the companies that survived the big AI downturn of the 80’s focused on delivering embedded AI solutions.  So people weren’t confronted with “artificial intelligence”, they just got products that worked better.

After college, I worked for Oracle in Silicon Valley in the Decision Support Systems (DSS) division.  DSS was like a watered-down concept of AI labeled to avoid provoking backlash.  You see, the computers won’t make the decisions, they’ll merely provide tools and information to support human decision-makers.

Within a couple of years there, the DSS label was replaced by the now-ubiquitous term Business Intelligence (BI).  And BI is an even more watered-down concept, one that includes tools as simple as spreadsheets – essentially anything that presents data in a more tractable package so people can learn something.  With the move to BI, it seemed like the market had completely removed any expectations of “intelligence” from computers.  Years later, it seemed that some concepts were working their way back into the mainstream under the BI umbrella, albeit with new names.  I’m particularly thinking of Big Data.  And you probably couldn’t see my eye-roll as I typed that.

So AI has a lot of human issues working against it – it’s created by people who can’t define it, judged by people who are irrationally biased, and used by people who are threatened by its success.  That said, let me tell you a lovely story about humans and AI working together to beat other AIs.

How Two Drunk Guys Beat Skynet

For the people who don’t know “Skynet” – it’s the fictional AI system that tries to kill all humans and take over the world in the Terminator movies.  So, we didn’t literally beat Skynet.  But we did beat a bunch of really “smart” chess programs.

Do the little wavy thing with your fingers so you know you’re about to read a flashback.  And now it’s the 1990’s.  I was an undergrad at MIT in the software engineering lab class 6.170 (now called the software studio class).  The final project was to form a team of 2-4 people and create a program that can play antichess.  Then all the teams would have their programs pitted against each other in a tournament to find a winner.

What’s antichess?  It’s a variant of chess where the goal is to lose all your pieces – the first player to do so wins.  Also, in our assignment, we followed the rule where capturing is compulsory, i.e. if you can capture one of your opponent’s pieces, you have to take it.  And there was a time limit for deciding each move, purposely selected so our programs would not be able to evaluate the complete space of all possible moves on each turn.
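If you want to picture the compulsory-capture rule in code, here’s a rough Python sketch.  The helpers legal_moves and is_capture are hypothetical stand-ins for whatever a real antichess engine would provide (our actual project was in CLU, and this isn’t that code).

    # Sketch of the compulsory-capture rule: if any capture is available,
    # the player MUST pick one of the captures.
    def moves_to_consider(board, player, legal_moves, is_capture):
        moves = legal_moves(board, player)
        captures = [m for m in moves if is_capture(board, m)]
        return captures if captures else moves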

Why antichess?  My understanding is that they wanted to give us a problem that was kinda familiar but not as common as “regular” chess where people could easily find playbooks and algorithms.  So this helped level the playing field.  And we had to program in Portable CLU, a rather obscure language that also served to help level the playing field.

My team was… special.  We were a team of three, and we already knew each other and were friends.  Only one of us had significant programming experience before MIT, and of course, she ditched us because the stench of impending failure was palpable in our early days.  That left me and my buddy Sam, and we were far from rock-stars on our prior team assignments.

We had two good things going for us though.  First, we were decent chess players.  Second, I worked part-time at a software company (Gold Hill) that made LISP and expert systems tools.  So I had some appreciation of the power of heuristics, which is the fancy AI term for “rules of thumb”, and how to incorporate them into an algorithm.  And rather than just start coding an algorithm to search all possible moves based on the general rules, we knew it was important to actually have some human intelligence about the game.  Hence, we hunkered down and played a ton of games of antichess against each other and against friends while drinking a lot of beer and MD 20/20.  And we discovered a powerful heuristic.

If you play antichess, it becomes clear that your pawns are your own worst enemy.  They are essentially convenient suicide stations for your opponent’s pieces.  They have very little mobility, so it’s easy to maneuver all the other pieces such that pawns MUST take them.

Now realize that the most common approach the other teams took was to implement some form of A* path-searching algorithm.  That meant the program would go through as many combinations of moves as possible within the allotted time (it couldn’t search them all) and pick whatever it determined to be the best.  And at the beginning of a game, when you have the most pieces on the board, the programs couldn’t look very far ahead given the time limit.  So the difference between who would win and who would lose had more to do with who could do a better job of optimizing essentially the same code.  Which often meant discarding most of the lessons of the class regarding abstraction and modularity and just in-lining the hell out of low-level operations.
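As a very rough illustration (not anyone’s actual class project), a time-boxed search loop can be structured like this Python sketch, where search_to_depth is a hypothetical stand-in for whatever full game-tree search you plug in.

    import time

    # Illustrative time-boxed search: keep searching deeper until the per-move
    # clock runs out, then play the best move found so far.  search_to_depth is
    # a hypothetical callable that searches the game tree to a fixed depth and
    # may bail out early once the deadline passes.
    def choose_move(board, search_to_depth, time_limit_seconds):
        deadline = time.monotonic() + time_limit_seconds
        best_move = None
        depth = 1
        while time.monotonic() < deadline:
            move = search_to_depth(board, depth, deadline)
            if move is not None:
                best_move = move
            depth += 1    # with lots of pieces on the board, depth stays painfully small
        return best_move

The point is that everyone’s version of this outer loop looked roughly the same; the speed of the search underneath it was where the optimization war happened.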

Except for us.  Our heuristic was simple – if we have any pawns on the board, only search for the best way to get one of our own pawns killed.  That simple rule of thumb drastically narrowed the moves our program had to search, so it was able to look much further ahead down paths we knew would be successful.
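In code, the heuristic is just a filter that runs before the search: something like this Python sketch, again with hypothetical helper names, and definitely not our original CLU.

    # Our pawn heuristic as a pre-search filter: while we still have pawns,
    # only explore moves aimed at getting one of our own pawns captured.
    # we_have_pawns and leads_to_losing_a_pawn are hypothetical helpers.
    def prune_moves(board, moves, we_have_pawns, leads_to_losing_a_pawn):
        if we_have_pawns(board):
            pawn_lines = [m for m in moves if leads_to_losing_a_pawn(board, m)]
            if pawn_lines:
                return pawn_lines   # far fewer branches, so the search sees much deeper
        return moves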

Needless to say, our TA was shocked when Sam and I, the worst team in his section, beat all his other teams during a practice round, including the team many expected to win the tournament.  After all, they had the highest performers in the overall class, and we heard some of them even had years of professional programming experience before MIT.  And we kicked their asses.

And then the class-wide tournament came!

And we lost.

Miserably.

Why?  We made an illegal move.  Did I mention Sam and I weren’t the best programmers?  Yeah, we had a bug in our code that we didn’t catch in our testing, and it was a total bummer because it wasn’t related to the design of our algorithm; it was just a simple coding error.  To use the classic John Cusack movie “Better Off Dead” as a metaphor, our Lane Meyer actually biffed it on his face just before the finish line.  Still, we could hold our heads up high because we knew we did something the others ignored – we tried to really understand the problem before we jumped into solving it.

The Moral Of The Story: Humans Make Smart, OK?

Is machine learning cool?  Sure.  Are neural networks neat?  Yeah.  But that doesn’t mean that human experience and expertise no longer have value.  If anything, the best bet is to combine them in a virtuous cycle rather than pit them against one another.  Like if you’re creating a face recognition algorithm, maybe consult with experts who know some people have difficulty identifying people of other races.  Not that you could instantly negate that effect, but being aware of it would at least help you check the assumptions of the people involved.

And it’s not that heuristics and human experience should trump all else.  In fact, that falls prey to heuristics bias, which touches on what many of us would call “conventional wisdom”.  When you keep accepting that things are true without understanding and validating the reasons why, you may fall prey to conventional wisdom instead of discovering some great new insight.

So have fun with all your wacky AI projects.  Just be sure you don’t dismiss the value of incorporating human intelligence, and while acknowledging existing rules of thumb, remember that rules are meant to be broken.

(And go easy on the cheap liquor.)