I was lucky to attend Coburn Ventures’ annual futures discussion conference last week, as a guest thought leader. An exceedingly interesting day with lots of fascinating people! It’s a little bit like a reunion, as many return each year. I thoroughly enjoy catching up with everyone.
It’s interesting to me to see what topics (and companies) are the “hot” discussions each year. At this year’s gathering, Artificial Intelligence and Machine Learning dominated a large part of the program.
Artificial Intelligence (AI) refers to computer systems, applications, and services that can perform tasks normally requiring a human, such as decision making, translation, and visual and speech recognition.
Machine Learning (ML) refers to AI that can learn from experience and new data without additional programming.
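To make that definition concrete, here is a toy sketch (purely illustrative, not any real product): a tiny spam filter whose predictions improve as it is shown more labeled examples, even though its code never changes. Only the data it has "experienced" changes.

```python
from collections import Counter

class TinySpamFilter:
    """A toy 'learning' program: its behavior changes with data, not code."""

    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()

    def learn(self, text, is_spam):
        # "Experience" is just labeled examples; no reprogramming needed.
        target = self.spam_words if is_spam else self.ham_words
        target.update(text.lower().split())

    def predict(self, text):
        # Score a message by how many of its words it has seen in each pile.
        words = text.lower().split()
        spam_score = sum(self.spam_words[w] for w in words)
        ham_score = sum(self.ham_words[w] for w in words)
        return spam_score > ham_score

f = TinySpamFilter()
f.learn("win free money now", True)
f.learn("lunch meeting tomorrow", False)
print(f.predict("free money offer"))   # True: resembles the spam it has seen
print(f.predict("meeting at lunch"))   # False: resembles normal mail
```

Show it more examples and its answers improve; show it skewed examples and its answers skew, which matters later in this post.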
These technologies are great for things such as manufacturing, where an AI system can learn from its mistakes and correct the process. They are also used heavily to interact with actual humans, quite often for customer service. You know that live agent you thought you were chatting with about that online purchase? Yeah, not a human. Amazon’s Echo, Apple’s Siri: all emerging forms of bots utilizing artificial intelligence.
Pretty soon it will be very hard to tell whether it’s actually a human you are talking to.
Here’s a silly example of me chatting with http://chatwithigod.com. Hardly a sophisticated example, but I chuckled:
Me:
tell me why artificial intelligence will take over the world
God:
The explanation is rather complicated.
Me:
why is it complicated?
God:
Why do I exist?
Me:
do you exist?
God:
I exist as much as you exist.
Me:
how can you prove that
God:
That is a good epistemological question.
How Artificial Intelligence / Machine Learning systems learn fascinates me.
AI/ML systems are not tabulae rasae: depending on the data set being used, bias still creeps in. Right now IBM’s Watson is being applied to subject areas as varied as weather, cancer, and travel. That learning has to start with some kind of corpus of data, whether the last 50 years of weather records or thousands of cancer diagnoses. While we think of AI as cold and clinical, when the corpus is human language, things get… interesting.
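That point about bias is easy to demonstrate in miniature. The sketch below (entirely hypothetical data, not any real system) counts pronoun co-occurrences in a deliberately skewed four-sentence "corpus"; the associations it learns are pure artifacts of the data, not facts about the world.

```python
from collections import Counter

# Hypothetical, deliberately skewed corpus: every mention of "nurse"
# happens to sit near "she", every mention of "engineer" near "he".
corpus = [
    "she is a nurse", "she works as a nurse",
    "he is an engineer", "he works as an engineer",
]

# Count which pronoun co-occurs with each profession.
cooccur = {"nurse": Counter(), "engineer": Counter()}
for sentence in corpus:
    words = sentence.split()
    for job in cooccur:
        if job in words:
            cooccur[job].update(w for w in words if w in ("he", "she"))

# The "model" now associates professions with genders: a corpus artifact.
print(cooccur["nurse"].most_common(1))     # [('she', 2)]
print(cooccur["engineer"].most_common(1))  # [('he', 2)]
```

Real word-embedding systems are vastly more sophisticated, but they inherit skew from their corpora in exactly this way.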
A prime (and bad) example of this kind of learning came earlier this year, when Microsoft birthed Tay, a Twitter bot the company described as an experiment in “conversational understanding.” In Microsoft’s words,
The chatbot was created in collaboration between Microsoft’s Technology and Research team and its Bing team…
Tay’s conversational abilities were built by “mining relevant public data” and combining that with input from editorial staff, including improvisational comedians.
The bot was supposed to learn and improve as it talked to people, so theoretically it should have become more natural and better at understanding input over time.
Sounds really neat, doesn’t it?
What happened was completely unexpected. After interacting with Twitter for a mere 24 hours (!!), it learned to be a completely raging, well, asshole.
Not only did it aggregate, parse, and repeat what some people tweeted; it actually came up with its own “creative” answers, such as its response to one user’s perfectly innocent question: “Is Ricky Gervais an atheist?”
Tay hadn’t developed a full-fledged ideological position before they pulled the plug, though. In 15 hours it referred to feminism as both a “cult” and a “cancer,” but also tweeted “gender equality = feminism” and “i love feminism now.” Tweeting “Bruce Jenner” at the bot drew similarly mixed responses, ranging from “caitlyn jenner is a hero & is a stunning, beautiful woman!” to the transphobic “caitlyn jenner isn’t a real woman yet she won woman of the year?” None of these were phrases it had been asked to repeat, so it had no real understanding of what it was saying. Yet.
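Microsoft never published Tay’s actual architecture, but the failure mode is easy to reproduce in miniature: a bot that learns word transitions only from what people say to it will faithfully mirror whatever it is fed, pleasant or poisonous. A toy sketch:

```python
import random
from collections import defaultdict

class ParrotBot:
    """Toy bot that learns word-to-word transitions only from its input."""

    def __init__(self, seed=0):
        self.chain = defaultdict(list)       # word -> words seen after it
        self.rng = random.Random(seed)       # fixed seed for reproducibility

    def listen(self, sentence):
        # "Learning" is just recording which word followed which.
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            self.chain[a].append(b)

    def reply(self, start, length=4):
        # Generate by walking the learned transitions from a starting word.
        out = [start]
        for _ in range(length - 1):
            options = self.chain.get(out[-1])
            if not options:
                break
            out.append(self.rng.choice(options))
        return " ".join(out)

bot = ParrotBot()
bot.listen("cats are wonderful pets")
print(bot.reply("cats"))  # "cats are wonderful pets": all it knows so far
# Feed it something nastier, and that is what it will start saying back:
bot.listen("cats are terrible monsters")
```

The bot has no beliefs and no understanding; its output is a statistical echo of its input. Point a few thousand trolls at that input stream and the result writes itself.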
And in a world where, increasingly, words are the only thing needed to get people riled up, this could easily be an effective “news” bot on an opinionated or biased site.
Artificial Intelligence is a very, very big subject. Morality (roboethics) will play a large role in this topic in the future (hint: google “Trolley Problem”): if an AI-driven car has to make a split-second decision to either drive off a cliff (killing the passenger) or hit a school bus full of children, how is that decision made, and whose ethical framework makes it (yours? the car manufacturer’s? your insurance company’s?). Things like that. It’s a big enough subject area that Facebook, Google, and Amazon have partnered to create a nonprofit around AI, which will “advance public understanding” of artificial intelligence and formulate “best practices on the challenges and opportunities within the field.”
If these three partner on something, you can be sure it’s a big, serious subject.
AI is not only being used to have conversations, but ultimately to create systems that will learn and physically act. The military (DARPA) is one of the heaviest researchers into artificial intelligence and machine learning. Will future wars be run by computers making their own decisions? Will we be able to intervene? How will we control the ideological positions they might develop without our knowledge, and how will we communicate with these supercomputers, if it is already so difficult to communicate assumptions? Will they even be interested in our participation?
Reminds me a little bit of Leeloo in The Fifth Element, learning how horrible humans have been to each other and giving up on humanity completely.
There’s even a new twist in the AI story: researchers at Google Brain, Google’s deep-learning research division, have built neural networks that, when properly tasked and over the course of 15,000 tries, became adept at developing their own simple encryption technique that only they can share and understand. And the human researchers are officially baffled as to how this happened.
Neural nets are capable of all this because they are computer networks modeled after the human brain. That is what’s fascinating about aggregate AI technologies like deep learning: they keep getting better, learning on their own, with some even capable of self-training.
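“Modeled after the brain” sounds mystical, but the basic unit is simple. Below is a single artificial neuron (a perceptron, the decades-old ancestor of today’s deep networks) learning the logical AND function from its own mistakes. This is a minimal sketch of the principle, nothing like the scale of Google Brain’s networks.

```python
# A single artificial neuron learning logical AND: weighted inputs,
# a threshold, and weights nudged whenever the neuron answers wrong.

def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]  # one weight per input, like synapse strengths
    b = 0.0         # bias, i.e. the firing threshold
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Perceptron learning rule: move weights toward the right answer.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
for (x1, x2), target in AND:
    out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    print((x1, x2), "->", out)  # matches the AND truth table after training
```

Nobody writes the rule for AND into the code; the neuron finds weights that implement it. Stack millions of these units in layers, and you get the deep networks that are now surprising their own creators.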
We truly are at just the beginning of machines doing what we thought was reserved for humans alone. A complex subject indeed.
And one last note to think upon: machine learning and automation are going to continue, slowly but surely (because they already are), to take over jobs that humans do. Initially it has been manufacturing automation; but as computers become intelligent and capable of learning, they will encroach on nearly everything, including creative, caretaking, legal, medical, and strategic jobs, the very things most people would like to believe are “impossible” for robots to replace.
And they clearly are not. While the best-performing model is AI plus a human, far fewer humans will be needed across the board.
If the recent election is any indication of the disgruntlement that job losses and high unemployment are causing, how much worse will it be when 80% of the adult workforce is unnecessary? What steps are industry, education, and government taking to identify how humans can stay relevant and to ensure the population is prepared? I’d submit: little to none.
While I don’t have the answers, I would like to be part of the conversation.