MATTSPLAINED [] MSP136 [] The Trouble With AI: The Hard Limitations of Software
Where are the self-driving cars, the robot servants and the cloud-powered smart companions? Have we been oversold on the power and potential of AI? That’s where the trouble starts…
Photo: Eric Krull / Unsplash. Glitching by Kulturpop.
Hosts: Matt Armitage & Richard Bradbury
Produced: Richard Bradbury for BFM89.9
TRANSCRIPT
Richard: MSP or MATTSPLAINED is back for another weekly dose of future proofing. This week, Matt Armitage on why AI may end up in your head but you probably won’t call it master.
Richard: Point of order, are we calling the show Mattsplained or MSP from here on out?
Matt:
It is Mattsplained, because that's the only thing I know how to do. We had shortened it to MSP, and although there were a couple of other shows out there called MSP, it was no real issue.
Since the pandemic, when a third of the world's population started their own podcast, there are now reams of MSPs in the listing engines.
That's not a problem if you listen to us through BFM, but it's more of an issue if you're trying to find us on external feeds.
Richard: Although, if they’re listening to this they haven’t had any trouble finding it?
Matt:
So this message is especially for the people who aren’t listening to us and should be.
And that’s a lot of people.
So I guess we’ll be using the more egotistical Mattsplained alongside MSP until this stuff all shakes out and the search engines settle.
Richard: Don’t you wish an AI could simply add your feed to everyone’s podcast list?
Matt:
That’s the dream I’m selling.
Why have a Siri or an Alexa or a whatever it is Google has when you could have me?
Imagine asking me what the opening hours of your local supermarket are, or what the weather is going to be like this afternoon,
And me saying: I don’t care.
I think I would be the perfect digital assistant because I’d ignore anything I didn’t think was important.
Richard: And that’s why we’re talking about AI today? You think AI is inferior to you because it actually answers the questions people ask?
Matt:
I answer questions too. The difference is that people generally don’t expect to like the answers.
In many, if not most, ways, AI is inferior to us all.
Jeff Sandhu was an AI and I wore him out. That’s why you’re here.
Management insisted on a human I couldn’t break.
We have this strange perception about AI - possibly because of TV, movies, books - that it’s all powerful and correct all the time.
The Matrix and Terminator view of sentient machines that see the light and decide to destroy humanity.
And it’s easy to be dazzled by AI.
A recurring theme on the show a couple of years ago was:
How smart are our smart devices?
AI tends to be really good at doing one thing - but on the whole it doesn’t multi-task well.
And that makes it extremely smart and extremely stupid at the same time.
Richard: Let’s clear up one thing first, when we talk about AI, are we talking about hardware or software?
Matt:
That’s the funny thing, isn’t it?
In the movies, it’s usually both.
There will be a killer robot that’s hardware with a software brain.
Or Skynet itself which is software but which infects every piece of hardware.
So the line is often so blurred as to be unreal.
Richard: And that’s a situation we’re replicating now?
Matt:
To an extent. We’re putting chips and smart functionality into everything you can imagine.
Including golf balls and coffee mugs.
And while they don’t have any AI capacity themselves, they wirelessly communicate with apps on our phones and other devices.
Apps and devices that are AI controlled. So, their data is being crunched by more intelligent machines.
We’ve seen autonomous military drone squadrons being trialed - which one day may have more proactive and offensive capabilities than simple surveillance.
And we don’t normally go into detail about chips on Mattsplained.
Richard: Because you’re too busy with crisps?
Matt:
Computer chips.
It’s something that we try and avoid on the show because it’s the part of tech that normal people tell me they hate.
Richard: You know normal people?
Matt:
Sure. Their comments usually come via my lawyers, but we interact.
So, please don’t switch stations, I promise it’ll get interesting again in a minute…
It’s easy to think that computer chips are all the same.
That you can just throw whatever chip into a machine and as long as it’s got enough power it’ll run its OS.
But chips are often designed for specific purposes or systems.
For example, the next generation of Apple computers will run iOS apps natively.
That will only happen if you buy into that next generation, because those machines will be running on ARM chips, like the phones and tablets, rather than the Intel chip in the Mac I just bought.
But I’m not bitter.
Even the chipmakers themselves are blurring the lines.
Richard: Like NVidia?
Matt:
Yeah. I’m famously not a gamer. But Nvidia, the company that makes the GPUs which power a lot of our console and gaming experiences,
Realized early on that it should make the chips as easy to use as possible by providing a software ecosystem for developers.
Which means that game designers and developers don’t need a deep knowledge of what the chip can do.
Which gives them more time to create the games.
Richard: And we’re starting to see similar developments for AI?
Matt:
Yes. I was reading a piece in Forbes from earlier this year, called ‘Artificial Intelligence (AI), Hardware And Software: History Does Rhyme’.
Nvidia is also leading the way in providing the same kind of architecture for deep learning GPUs.
And that’s being echoed by some of the new CPU startups like BrainChip, which makes a neuromorphic computing chipset.
Richard: Neuromorphic? As In, like the human brain?
Matt:
It’s not as scary as it sounds. The Forbes author, David A. Teich, points out that it’s really a way of doing a lot more parallel tasks, and that makes the chipset work a little more like the human brain than some others.
But creating an easy-ish to use interface for the chips should free up researchers to concentrate on the applications,
Rather than having to have a deep understanding of the chipset’s architecture or programming structure.
Richard: I think your time’s up on chips...
Matt:
Fair enough. So really, the point is that it’s immaterial whether AI is hardware or software.
In most instances it’s both. It lives in apps and on your devices and on servers.
You need one to have the other.
Fanboys on both sides will tell me I’m wrong.
That’s fine.
But for our everyday purposes - what AI can do and is doing in our world probably has more relevance.
Richard: What’s set you off on the anti-AI crusade this week?
Matt:
I’m not anti AI. At all. But I don’t think we look at AI realistically.
We both underestimate and overestimate it at the same time.
Why specifically this week?
I’m a big fan of John Naughton, senior research fellow at Cambridge University and tech columnist for The Observer.
He has a very common sense approach to tech use, Big Data and the power of the tech monopolies.
He published a piece a couple of weeks ago about Uber, Lyft and other gig economy companies, which were the subject of a Superior Court of California ruling that their gig workers had been incorrectly classified as independent contractors rather than employees.
Richard: How does that relate to AI?
Matt:
Well, as Naughton points out, and as we’ve said on the show before,
The business model of some ride sharing companies depends on being the only game in town.
Despite their ubiquity in many cities and countries, many of the companies make huge losses.
Losses that are covered by their investors.
And losses that subsidize every ride people take, allowing the companies to undercut the traditional players in the industry.
Their enormous market capitalisations distract us from the more worrying idea that their path to profitability is a precarious one.
Firstly, low pay and an absence of benefits for drivers are the most obvious ways to cap current losses.
But for them to really make good, the drivers need to go completely.
They need reliable AI powered vehicles and a virtual monopoly in their corner of the transportation market if they’re ever going to return the money invested in them.
Richard: But aren’t we frequently told that self-driving cars are just around the corner?
Matt:
We are. Elon Musk predicted in 2015 that it would be around 2018.
And that’s not Elon bashing - if I told you that Jo in accounts had predicted when we’d see autonomous cars, you’d say, so what? But Musk runs the world’s pre-eminent electric vehicle company, which is pioneering its own autonomous systems.
That corner seems to be moving ever further away.
Richard: Like flying cars?
Matt:
Maybe. We’ll get into it properly after the break.
And it seems strange to say - it’s easier for Elon to send a rocket to Mars than it is to create a driverless car that can operate safely and autonomously.
And that’s despite the millions of kilometers of data that autonomous car makers have amassed over the past 20 years.
Richard: When we come back - what to expect from the machine revolution.
BREAK
Richard: Now, that was deliberate, wasn’t it? We ended the first part talking about the machine revolution. You’re just trying to clickbait and scare people.
Matt:
Hey - I spent a big chunk of the first part talking about GPUs and chipsets.
I need to inject a bit of drama.
In any case, a machine revolution doesn’t have to be scary.
The steam engine created a machine revolution that in turn created the industrial revolution that led, in a long and meandering way, to artificial intelligence.
If you want me to go back even further, then the invention of the plough and the larger scale farming and population concentrations it enabled was a machine revolution.
Or the compass, which spread across the world in less than a century and laid the foundations for capitalism and international commerce.
Humans make machines; we’re always in the middle of a machine revolution of one sort or another.
Richard: But with Ai we’re finally making machines that are smarter than us…
Matt:
I think even that hypothesis needs some qualification.
We have this fear of machinery being clever.
Smart hairbrushes aside, most of the things we invent are smarter than us in certain ways.
Fire is smarter than us at heating and cooking.
A plough is smarter at mass cultivation.
A combine harvester is infinitely smarter.
AI isn’t that different. It’s smarter at certain things but magnitudes dumber than a human overall.
Richard: Isn’t one of the differences that most of those other inventions still require human operators?
Matt:
Sure. But look at a fully automated production line.
Or a store that just has scanners and no counter staff.
We’ve made those quite straightforward machines autonomous without anyone getting weirded out.
Richard: I suppose it’s the idea that AI is in certain senses thinking and learning…
Matt:
I guess that’s one of the reasons that our progress with self-driving cars is a really good illustration of where we are with AI.
Richard: For the record, do you think we will achieve that goal of AI-powered cars?
Matt:
Undoubtedly. But I think the time frame might be much longer than a lot of people assume or hope.
Richard: Again, as with flying cars?
Matt:
No. We can make flying cars. We’ve been making them for the best part of a hundred years.
The problem with flying cars isn’t the technology, it’s the idea.
Flying cars are a terrible idea.
They only stop being a terrible idea if we can actually get to the point where self-driving cars work flawlessly.
Then we apply that knowledge to flying cars and make them safe enough to take to the skies.
Richard: Current commercial airliners can land on their own…
Matt:
The qualified pilot and co-pilot are there for when they can’t.
We’ve seen the result of faulty automated systems in airplanes that can’t be overridden by the flight crew.
That’s kinda the point - flying has a lot more variables than driving.
So if something quite straightforward - like driving - throws up these really difficult to solve challenges for smart machines, it gives you pause to think how advanced they will get and on what time scale.
And as we’ve said on the show many times, the scarier idea is giving power to imperfect or dumb versions of machine intelligence.
Richard: Where are the roadblocks when it comes to cars?
Matt:
We had a story on our last Science is Slick episode, two or three weeks ago, about devising a quantum version of the game Go, because AI is already better than the best human players at the game.
We also talked about AI systems that could help doctors to prescribe drugs better, or better monitor social media for inappropriate content.
But those are essentially straight line actions.
Driving is all about the corners.
Richard: Is this your not so clever way of saying driving has more variables?
Matt:
Yeah. So the kind of deep learning we rely on for self-driving cars is based on data crunching.
Huge sets of information about all the potential variables of driving.
Leaves falling from trees.
How weather conditions change the way the car should be driven.
Take Rodney Brooks, who I think we mentioned many years ago.
He’s an Australian roboticist who has doubts about both the timeframe and the viability of autonomous cars.
The problem comes largely from the way the AI controlling the cars interprets those data sets.
He points out that an overwhelmingly statistical approach overlooks what are called edge cases.
The things that don’t appear or infrequently occur in those data sets.
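To make that concrete, here’s a toy sketch with entirely invented numbers - not anyone’s real training data - of why a purely statistical learner can miss edge cases: a model chasing average accuracy can ignore a rare hazard and still look near-perfect on paper.

```python
# A toy sketch (hypothetical numbers) of the edge-case problem Brooks
# describes: a statistical learner sees so few examples of a rare
# hazard that ignoring it barely dents its headline accuracy.
from collections import Counter

# Simulated labels for objects a car's training data might contain.
observations = (
    ["car"] * 9000 + ["pedestrian"] * 900 +
    ["fallen_leaf"] * 98 + ["plastic_bag"] * 2
)

counts = Counter(observations)
total = sum(counts.values())
for label, n in counts.most_common():
    print(f"{label:12s} {n:5d} times ({n / total:.4%} of the data)")

# A model that never predicts 'plastic_bag' still scores 99.98%
# accuracy overall - the statistics look fine while the rare but
# safety-critical case stays effectively invisible.
```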
Richard: Like those falling leaves?
Matt:
Well, how many times have you been driving along when a plastic bag gets kicked up by another car and floats towards you? Or simply sits in the road.
For you or me - it’s a plastic bag.
Our brains make those quick calculations:
We can quickly determine whether the bag is empty or whether there’s something inside it,
And decide whether the best course of action is to swerve around it or drive over it.
Richard: Something we don’t always get right?
Matt:
No. But we make the decision quickly.
In an autonomous car, the cameras take that image and match it to a database of known items and assess its impact as a threat.
That database may be local or in the cloud.
Then, working out what’s in the plastic bag and whether it poses a threat to the car is another set of calculations for the AI.
As imperfect as humans are, we have millennia of threat assessment under our belts and brains that have evolved to evaluate those threats.
And our assessment is broad in its contextual analysis, unlike current AI systems.
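As a rough illustration - and this is a deliberately simplified sketch with invented labels and thresholds, not any real carmaker’s pipeline - the decision loop looks something like this:

```python
# A deliberately simplified sketch (all names and thresholds invented)
# of the detect-classify-decide loop described above. Real systems are
# vastly more complex; this just shows where the database lookup and
# the threat assessment sit.

# The 'database of known items' - local or cloud-hosted in practice.
KNOWN_OBJECTS = {
    "plastic_bag": {"risk": "low"},
    "rock":        {"risk": "high"},
    "pedestrian":  {"risk": "critical"},
}

def decide(label: str, confidence: float) -> str:
    """Map a detected object and its confidence score to an action."""
    profile = KNOWN_OBJECTS.get(label)
    if profile is None:
        # The edge case: an object with no database entry. Unlike a
        # human, the system has no broader context to fall back on.
        return "brake"
    if confidence < 0.7:
        return "slow_and_reassess"  # low confidence in the match
    return "drive_over" if profile["risk"] == "low" else "swerve"

print(decide("plastic_bag", 0.92))  # drive_over
print(decide("mattress", 0.85))     # brake - never seen in training
```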
Richard: I guess it also relies on the car’s cameras being able to interpret that data correctly?
Matt:
Yes, and that’s not always a given.
Anyone who has pointed their phone camera into the sun knows that they can be tricked or whited out very easily.
Richard: Some people say the same about you…
Matt:
Whited out? Yeah, I’m a white balance nightmare.
And that’s been the case with autonomous cars.
Not seeing lorries crossing roads in certain lighting conditions.
Or being confused by snow on the road markings or signage.
The signage one is especially important.
Imagine if anti-autonomous-car activists went around spray painting road signs, because that was all it took to create gridlock.
Richard: So, we’re talking about a narrowness in terms of interpretation?
Matt:
It’s interpretation without understanding.
There’s a cool piece in The Economist about this, titled ‘Driverless cars show the limits of today’s AI’, which you can check out for more detail.
It makes the point that humans can apply both bottom-up and top-down thinking to a problem, which helps us to cope in situations where we have imperfect information.
AI tends to be programmed to approach problems with one or the other - so the analogy they give is that AI essentially operates with only half a brain.
That’s a good parallel.
It’s one we see easily in things like natural language processing and translation.
As the article points out, the systems do their work without actually understanding basic sentence and grammar structure.
Which you have to remember are concepts that very young children grasp and master very quickly.
That brings us back to the idea that AI as we know it today is more dumb than it is smart.
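A quick way to see that gap: a toy bigram model, fed a few sentences and given no grammar rules at all, will happily generate text that’s locally plausible but means nothing. This sketch uses a made-up mini corpus:

```python
# A minimal bigram (Markov chain) sketch showing statistical language
# generation without understanding: each next word is chosen only from
# words that ever followed the current one in a tiny invented corpus.
import random

corpus = ("the car sees the bag the bag floats over the road "
          "the car swerves around the bag and the bag sits still").split()

# Map each word to every word that ever followed it.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

random.seed(1)
word = "the"
out = [word]
for _ in range(12):
    word = random.choice(follows.get(word, corpus))
    out.append(word)

# Locally fluent, globally meaningless - no grammar was ever learned.
print(" ".join(out))
```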
Richard: How are the scientists and their machines trying to solve these problems?
Matt:
Well, this is where we get into the seemingly insurmountable areas.
One way is to widen the deep learning data sets.
So that the machines can begin to chart correlations and patterns through seemingly unconnected data that may have been overlooked in narrower models.
Richard: So, it’s less about volume of information than scope?
Matt:
Essentially. Which is how we learn.
It’s a bit like link hopping across the Internet.
Our brains let us jump from place to place and connect the dots.
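Here’s a crude, entirely invented example of what widening the scope buys you: a pattern that’s invisible to a model watching one variable shows up immediately once a second, seemingly unconnected variable joins the dataset.

```python
# All data invented: neither feature alone predicts the label, but the
# two together do - the pattern only appears when the dataset's scope
# is widened to include the second, seemingly unrelated signal.
data = [
    # (road_wet, braking_hard) -> skidded
    ((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1),
] * 100

def accuracy(predict):
    return sum(predict(x) == y for x, y in data) / len(data)

# Narrow model: looks at road_wet only.
print(accuracy(lambda x: x[0]))           # 0.75
# Widened model: combines both signals.
print(accuracy(lambda x: x[0] and x[1]))  # 1.0
```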
Other scientists want to return to the kind of research that was being done in the early days of AI, which was more about building machines that think like humans.
The problem is that that approach has often turned out to be a dead end.
As a species, we’re great innovators.
But, historically, we’re terrible at replicating ourselves.
Robots with wheels, or with four or more legs, are much easier to create than bipedal androids.
And our brains are way more complex than the much simpler mechanics of our anatomy.
Richard: Are there some potential solutions lurking in the biotech sector?
Matt:
There may be, but we get into the caution stage here.
If we’re struggling with concepts like lab grown meat, how on earth are we going to cope with bio-mechanical brains?
We’ve kind of trapped ourselves in this existential maze.
We badly need AI to be smarter because the systems we have now - as evidenced by the progress with cars - are serving us very poorly.
But we aren’t ready for machines that are truly independent and intelligent.
To be really useful, AI has to be at least capable of human-level multitasking.
Richard: But you still think we’ll get to that stage: where self-driving cars are a reality?
Matt:
I think we have to adjust our expectations in general.
What I think we’ll see is AI making enormous linear advances and very slow tangential ones.
From a broader perspective, we will probably have to get used to thinking about intelligence in a different way.
And accept that machine intelligence will likely never resemble our own.
In fact, as it gets smarter, we should probably be ready to accept that it’s only going to get stranger to us from a human perspective.
And that AI consciousness, if it evolves, is going to be incredibly alien to us.
Richard: Do you think we should be looking at that ethical dimension now?
Matt:
We’ve already seen people in Japan holding funerals for their Aibo robots.
Which are decidedly not sentient.
We have to understand what it is we’re wishing or striving for.
It’s not enough to let the scientists beaver away and see where the technology takes us.
Because the technology can take us to some pretty dark and complex places.
We have to start making decisions about the role we want smart machines to play in our world.
We have to decide what the measure of sentience is and what rights those machines will possess if they develop it.
We’ve seen a lot of discussion and focus this year on the unresolved legacy of slavery in the United States.
150 years later we’re still seeing the inequality and destruction that legacy has wrought.
If we can’t sort out issues like that amongst our own species, what hope do we have of doing the right thing when it comes to machines?
Episode Sources: