
Toby Walsh on the Artificial in Artificial Intelligence

Our brains are limited by the size of our skulls but we can have unlimited amounts of memory on our computers.

Toby Walsh

As artificial intelligence takes root in everything from science and social media to politics and policing, world-leading AI expert Toby Walsh seeks to answer a pressing question: can we trust AI or will it increasingly deceive us? Drawing from his recent essay in Griffith Review 80: Creation Stories, he offered a fascinating perspective on our increasing reliance on intelligent and autonomous technology and how we might ensure AI is harnessed as a force for good rather than for nefarious ends. 

This event was presented by the Sydney Writers' Festival and supported by UNSW Sydney. 


UNSW Centre for Ideas: Welcome to the UNSW Centre for Ideas podcast – a place to hear ideas from the world's leading thinkers and UNSW Sydney's brightest minds. The talk you are about to hear, Toby Walsh on the Artificial in Artificial Intelligence, features UNSW Sydney AI expert Professor Toby Walsh and was recorded live at the 2023 Sydney Writers' Festival. 

Toby Walsh: Let me begin by acknowledging the traditional owners of these lands, the Gadigal people of the Eora Nation, and paying my respects to elders past and present. I'm Toby Walsh. I'm a professor at UNSW. I'm the Chief Scientist at UNSW's new AI Institute. I've spent the last 40 years, can it be that long, trying to build artificial intelligence, and for most of those 40 years, no one cared. I can promise you, no one cared. And in the last couple of years, and especially in the last six months, people really have started to care. AI is having a moment, quite a moment.  

And as an example, on Tuesday, I had the privilege to meet the Indian Prime Minister – being invited to that was not on my 2023 dance card. Totally surreal: sitting in the antechamber waiting to go in, there was myself – I don't know what this says about Australia – Gina Rinehart, Australia's wealthiest person, and Guy Sebastian. It's been quite a week.  

And everyone's waking up to the idea, and some of it, or much of it, has been driven by this new app that some of you – who's used it? ChatGPT, oh good – which a lot of people have started using. It's quite a bit of fun and also somewhat troubling. It came out on the last day of November last year and got a bit of press. I did a few interviews about it and talked about it for a while, then Christmas happened and it all went quiet, and then in January, it just took off. I mean, it took off in use. 

So, a million people were using it by the end of the first week, 100 million by the end of the first month, and today, less than six months later, it's in Snapchat, it's in Bing. It's in the hands of over a billion people, and it only got people's attention in January. I think, in part, that was because schools were going back and people realised you could use this to answer your homework. I think that says something about human psychology as much as it does about technology, but other people were worried that everyone was going to be using it to cheat. It was somewhat surprising to me, though, that it got so much attention.  

It wasn't the first chatbot. In fact, the first chatbot goes back to the 1960s and 1970s. It wasn't the first chatbot that got mistaken for a human; in fact, that was that very first chatbot, Eliza, back in the 1960s. And it wasn't even the first capable chatbot we had; Google released one called LaMDA six months before, which didn't get that attention. And indeed, OpenAI, the company behind ChatGPT, almost didn't even release it.  

I was told an interesting story about the end of last year. They trained the chatbot up and gave it to some internal people, who played with it, came back and said, "Hmm, interesting, but what would you do with it?", which I think is actually quite a good summary of what it is. Anyway, they spent a bit of time thinking of some use cases, trying to focus it on some particular applications, and didn't get very far. And they were literally having the meeting – Sam Altman, the CEO of OpenAI, was meeting the product team, and the item on the agenda was, 'let's throw this away and focus on our next product, GPT-4', which has now come out – when Sam said, "Well, wait a second. What if we just gave it to people? Let's see what happens". 

And, as we know, it went viral. And you know that's a true story, because if they had any idea how successful it was going to be, they would have come up with a better name. 'ChatGPT' is not a very memorable name. I could explain what it means, but it's a bit too technical to get into. So it was surprising it got such attention. The purpose of my talk today is to give you some understanding, to help you think about things like ChatGPT, to think about artificial intelligence in general, to understand the moment we're in and where it's going to take us. As the title says, it's about the artificial in artificial intelligence. One of the key ideas I want to leave you with is how artificial, how different, AI is from human intelligence, and I'll have lots of examples sprinkled through my talk to give you that idea.  

One of the important things to understand is that ChatGPT is just the tip of an iceberg, a much bigger iceberg of how artificial intelligence is starting to come into our lives. It's an example of a particular type of artificial intelligence that you hear quite a lot about today, called generative AI: the idea that we're building artificial intelligence that can generate stuff. It can generate text, human-readable text like ChatGPT, and it can generate images. That's a picture of the pope in a white puffer jacket. That was generative AI. Someone literally typed the prompt into one of these programmes, 'the pope in a white puffer jacket', and it produced that image that went viral. But it doesn't stop with just images: we can make video now, we can make audio, we can make music, we can make speech of people. So you've got to be a bit more careful when people ring you up. Now, I can clone your voice very, very easily. I've cloned my voice; it took me a few minutes. Microsoft has a tool now, which they haven't released because there's untold mischief that you could do with it, that takes, they claim, just three seconds of audio to copy your voice. So I can just ring up your answerphone, get three seconds of audio of your voice, and then I can type and play back things that you never said.  

So there are great possibilities, and I'll talk in a moment about some of the risks. But these tools are quite fun to play with and, not surprisingly, there are lots of interesting, useful things you can do. You can get it to write a poem in the style of Shakespeare, you can get it to write your shopping list, you can get it to come up with 12 ideas for birthday parties for your 12-year-old. You can get it to write business letters. I've written my last business letter; ChatGPT now writes all of my business letters. You can write business plans. I have to say, at the beginning of this year, when my boss asked me to write my annual performance plan – my key performance indicators – I was sorely tempted to use ChatGPT to do it. If my boss is listening, I didn't. But you wouldn't know; it's very good at writing those sorts of things.  

Another example: this was a photographic competition, the Sony World Photography Awards, and this nice photograph of a mother and a daughter. It's completely fake. Someone took one of these tools and said, "I want a sepia picture, photorealistic, f/4, of a mother standing behind her adult daughter". And it won the award, and you wouldn't know that it wasn't real. And here, as I said, is the pope in the white puffer jacket. What's fantastic about the technology, why it's having such an impact, is that it's being democratised. It's in the hands of billions of people and it's really easy to use. You just have to literally type, 'a picture of the pope in a white puffer jacket'. When I first saw this, I thought, "Oh, that's someone with Photoshop who knows how to do this sort of stuff". No, it isn't. You just have to have the idea. And if you have a good idea, it will go viral. 

So how to think about this moment, right? It's much broader than just generative AI. History doesn't repeat, but it does rhyme. And I think this moment, 2023, rhymes with 1979.  

Now, looking at the audience, maybe just a few of you are old enough to remember what this is. This is a picture from 1979. This is VisiCalc. I'm at least old enough to remember what VisiCalc was. VisiCalc was the first spreadsheet, the first killer app for the personal computer, and the personal computer revolution really took off when people saw VisiCalc for the first time. Up to that point – I remember, I was a young boy – people would say, "Well, why do we want a personal computer? We've got that big mainframe in the data processing centre. It's in the air-conditioned building. It does the payroll. Why would I want a personal computer?".  

And then VisiCalc came along and people said, "Oh, yeah, well I can do my budget, I can make my business plan. There are lots of things, lots of interesting things that I could do". And VisiCalc came out in 1979 and gave us a picture of a future, a future where we'd have personal computers that did personal computing. And you want to see ChatGPT as something like that, a vision of a future where artificial intelligence is going to be a significant part of it.  

Now, some other parts of this VisiCalc story rhyme, I suspect, with today. VisiCorp, the company behind VisiCalc, started selling VisiCalc in 1979. VisiCorp went bankrupt in 1985; it doesn't exist anymore. There are still spreadsheets – you all know Excel, and maybe you even remember Lotus 1-2-3 – but personal computers really took off, and there are 1,001 things that we do on personal computers today that have nothing to do with spreadsheets. All our email, all our photography, all of the other things that we do on computers these days: nothing to do with spreadsheets.  

And the same is going to be true of artificial intelligence in the future. ChatGPT is, in a way, an image, an idea of the sorts of things that we'll be able to do. It's something that we can grasp and understand: "Oh yeah, that's very useful. It can do useful stuff, it can write stuff". I use it routinely to write business letters for me, but there are 1,001 other things that AI is going to do. It's not just going to be ChatGPT, and by the way, it's not just going to be generating images of the pope in a white puffer jacket. There are 1,001 things that, in 1979, we never envisaged personal computers were going to do, and there are 1,001 things that AI is going to do, because it's going to be the next platform, just as personal computers are a platform for doing personal computing.  

All of our devices, not just our computers, but our watches, our phones, our lights, our televisions, our fridges, our freezers, our cars — everything that's got electricity is going to have some AI sprinkled into it. It's going to make it more intelligent, easier to interact with. We're going to be able to ask it to do useful, important things for us. So that's how to think, I think, of ChatGPT. It's the VisiCalc moment for AI. It's a moment that we get to see that AI is going to be a big part of the next 10 or 20 years. 

Now, of course, there are lots of concerns people are starting to have: "Well, it is going to be a big part of our lives, and it's already in the hands of so many people". And this is where my story comes back to here, to Carriageworks. Now, this is me with Kevin Roose, nearly exactly a year ago at the Festival of Dangerous Ideas here at Carriageworks. Kevin Roose is the journalist at The New York Times who had what's now become quite a famous conversation with a chatbot, where it went pretty wild. It tried to persuade him to leave his wife. It was a pretty roller-coaster conversation he had with that chatbot, and people got a bit nervous, a bit worried about what machines were going to do if they started making these quite extreme suggestions to Kevin. Now, I had the pleasure to meet Kevin and talk to him. And unfortunately, he is just the wrong person to have that sort of conversation. He knows exactly how to push these chatbots in exactly all the wrong ways. And you don't have to be too worried. It doesn't have any desires; it doesn't really desire that Kevin leave his wife and marry it or anything like that. 

There are many limitations, but his conversation, his interview with ChatGPT exposed one of the most important ones: chatbots like ChatGPT and all the others that are now coming out don't actually understand what they're talking about. They have no idea what they're saying. They're saying what's probable, not what's true. The best way, I think, to understand them is to compare them to autocomplete on your smartphone.  

So when you're texting on your smartphone, there's a little function that helps you type and finishes the word. Sometimes it may even try to help you finish the sentence. What they've done there is they've got a dictionary of words and their frequencies, and if you type 'a', 'p', it says, "Well, probably the most common, most frequent way of finishing 'a', 'p' is 'apple'", so it offers you 'apple' to finish the word. 

Well, what we've done with chatbots like ChatGPT, we've done that but on steroids. We've taken not just a dictionary, we've taken essentially the internet — all of the texts you can find on the internet, all of Wikipedia, all of Reddit, all of social media, all of the US Patent database — anything we can scrape, we've taken and poured it into this computer. And so now, it can complete not just the word or the sentence, it can complete the paragraph or the page. But what it's doing is just reflecting back the sorts of things that you read on the internet. It's a reflection of us. And unfortunately, they've gone for quantity over quality. They've put everything in there, everything that they possibly could find, and of course, there's a lot of untruths on the internet. There's a lot of stuff that's offensive, stuff that's racist, and sexist, and ageist, and all the other things. And it's just reflecting that back at us.  
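[Editor's note: this "autocomplete on steroids" idea can be sketched in a few lines of code. The sketch below is only a toy illustration, not how ChatGPT actually works – real chatbots use large neural networks, not lookup tables – but it shows the core principle the talk describes: predict the most frequent next word, which gives you what's probable, not what's true.]

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def complete(counts, prompt, length=5):
    """Greedily extend the prompt with the most frequent next word."""
    out = prompt.lower().split()
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # never seen this word followed by anything
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus = (
    "the cat sat on the mat "
    "the cat sat on the chair "
    "the dog sat on the mat"
)
model = train_bigrams(corpus)
print(complete(model, "the cat", length=4))
# → "the cat sat on the cat" – probable word by word, but not a true sentence
```

Scale the "corpus" from three sentences up to the whole internet, and the word-pair table up to a neural network with billions of parameters, and you have the essence of what these chatbots do: fluent continuation without any notion of truth.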

So the sorts of challenging conversations that Kevin Roose was having with ChatGPT are just reflections of humanity. They don't reflect the intelligence of the machine; they just reflect back on us. So we're going to have to deal with the fact that it's going to produce things that aren't necessarily true. There's a lot of work, a lot of research, going on about how to deal with that, but certainly there's a lot of talk, a lot of concern, about the impact this is going to have upon all of our society. 

This is a colleague of mine, Geoffrey Hinton. He used to work at Google and he's actually known as the godfather of artificial intelligence. He's one of the people behind some of the recent advances in deep learning and has been a pioneer in our field. He, of course, famously resigned a couple of weeks ago from Google so he could talk more freely about his concerns, and I'll get to his concerns in a moment.  

But just a little side story on the way: he's actually the great-great-grandson of George Boole. George Boole is really the father of computing. He's the inventor of Boolean logic, the logic of zeros and ones, ANDs and ORs and NOTs, that computers implement. He was a famous mathematician in the 19th century. It's interesting to see how strong genetics are and how strong ideas are: his great-great-grandson, Geoffrey Hinton, is now one of the pioneers of artificial intelligence. Geoffrey is somewhat concerned about the impact of the AI superintelligence that we're going to build – machines that are incredibly intelligent. I am pretty confident that we will build machines that exceed human intelligence in the narrow tasks that machines already do well, such as playing chess, reading X-rays, and translating tweets in Mandarin into English. Machines already perform these tasks at a superhuman level, better than humans can. Machines still don't match the full breadth of human abilities, but I'm confident we will get there in the near future, and certainly before the end of the century, maybe much quicker than that. It's hard to know.  

I think it would be terribly conceited to think that we are as intelligent as we could possibly be. We are intelligent; that is our great ability, and that is why we got to be in charge of the whole planet, for better or worse. Not because we were the strongest or fastest, or because we had the sharpest teeth, but because we were the smart ape, and we used that to build tools, to come together, to invent language and writing – to do all of the things we do. But our intelligence is just a point on a scale, and there are many reasons to believe that machines could be smarter than us. They'll think faster than us: we think at biological speeds, in the tens of hertz, while computers think in the gigahertz, in millions of instructions per second. Our brains are limited by the size of our skulls – we can't get any bigger brains because we couldn't get out of the birth canal – but we can have unlimited amounts of memory in our computers. Our brains use 20 watts of power, about a fifth of your body's energy; that's as much as we can devote to the most energy-hungry organ in our body. There are no such limits on computers; they can run on thousands of watts, kilowatts of power. So there are lots of limitations that machines won't have that humans do, and every reason, I suppose, to believe that they'll be smarter than us. 

However, I don't think that's a cause for worry. Indeed, I think it's a cause for celebration. It tends to be the intelligent people who are worried about intelligent computers; I suspect intelligent people think too highly of intelligence. The people who tend to be in power are not always the most intelligent people, so intelligence, I think, is somewhat overrated. But machine intelligence is going to be very useful. It's going to help us tackle many of the wicked problems: help us deal with climate change, help us with disease, help us cure cancer. There are 1,001 things that AI is going to help us do that are going to make our lives better. I'm not particularly worried about machines becoming too intelligent. Indeed, I think the problem today is quite the opposite: we're giving responsibilities to machines that are not intelligent enough. 

And there are lots of much more mundane, much more immediate challenges we face. This is one – not Trump, wait, he is a challenge too – the idea of deep fakes. This is a deep fake photograph. Someone typed into one of these generative AI tools, 'Trump being arrested by the NYPD'. It doesn't take any sophistication, any skill, to generate images like this. This image, again, went viral. 

Fortunately, it didn't cause a riot. But if we think back to what happened on January the 6th, it doesn't take much more than this to cause one. What I'm fearful of happening in the very near future is not just someone making a fake image of Donald Trump, but faking Donald Trump completely. There's already a Trump bot: they've trained one of these chatbots, like ChatGPT, on all of Trump's tweets and all of his speeches, and it speaks just like Trump. Now, I know that's not a very high bar, but it's very convincing. And as I've mentioned, you can clone someone's voice, and there's plentiful audio of Trump speaking. So I can make a computer that sounds like Trump, a completely convincing clone of Trump's voice, connect that to the Trump bot, and now I can ring up every voter in the United States, have a two-way conversation with each of them, and persuade them to vote for me, Donald Trump. 

And it will be very convincing, and there's nothing you can do against that. Of course, if you don't like Trump, you can do the same thing: ring people up and get the fake Trump to say something offensive that will stop people voting for him. Although I'm trying to think, "Well, what would that be?". I'm sure there's something you could get your fake Trump to say that would persuade people not to vote for him.  

So we face the possibility of some really powerful tools that are going to, if we're not careful, undermine our democracy. And we saw this just yesterday: there was a picture of the Pentagon with an explosion, which was faked. We don't know why someone made this picture, but the stock market actually went down when people started to see it; fortunately, it went back up. But it's enough to move stock markets, enough to move elections. We're going to have to raise our guard and be very careful about understanding what is true and what is false, and the problem is that these images are going to be indistinguishable from the real thing. 

It will get to the point – indeed, we're actually at this point – and this is my piece of advice for you: unless you're actually in the room and you see it with your own eyes and hear it with your own ears, please entertain the possibility that it's fake. There is nothing you can look at in a picture that will tell you it isn't fake. We'll have to be much more cautious, much more careful. You'll have to ask all the old questions that you used to have to ask: do I trust where this comes from? 

Because everything you see on social media is potentially fake, the idea of what is true and what is false is going to be severely tested. The problem with images like this is that you can't unsee them. Even if you know they're fake, you've still got the memory of seeing Trump arrested; you've still got that memory of the Pope in a white puffer jacket. The Pope, actually – a little aside here – really missed that opportunity. He should have owned it, should have gone out and bought a white puffer jacket. He would have got the youth vote. I don't know what his advisers were thinking. He should have gone out and owned that moment, bought the white puffer jacket. 

So that's the problem. We're not used to the idea that the things we see are not necessarily true, and this is going to undermine our understanding of truth. It's also going to undermine the credibility of politicians. It used to be that if you caught a politician on camera saying something or doing something, they had to own it. Now they don't. There's audio of Donald Trump saying really offensive things about women. As far as we know, this audio is real. But Trump just dismissed it. He said, "Oh, it's a deep fake". And I suspect, sadly, many of his supporters believed him. So politicians are going to be even less accountable than they are today, which is not particularly accountable. I think we have to be very concerned. This is a very real, present threat. 

We saw the misuse of social media in the recent presidential elections in the US. I'm fearful that was just a taster for what we're going to see with artificial intelligence in next year's US presidential election. As another example, there was a story in the Irish Times about Irish women's obsession with fake tan. It turns out the fake tan story was itself fake: it was written by ChatGPT. That's why it's now been taken down. We can synthesise text at scale, personalised text that's going to appeal to you. You've undoubtedly received phishing emails from Nigerian scammers trying to persuade you to click on things. Well, now, unfortunately, Nigerian scammers have tools that can write really convincing emails that actually read very well. They can personalise them: if they see that you've liked Wordle on your Facebook page, they can write a phishing email that mentions Wordle, with some Wordle resources for you to click on.  

And so we're going to have to be much more cautious, much more careful, because these tools are now in the hands of pretty much everyone. Of course, there are lots of other, again rather routine and mundane, things that we've already started to worry about that these tools are just going to accelerate. We've got to worry about issues around bias and fairness; we're already pretty aware of those things. Here again is one of these image tools. You type in a prompt, 'CEO on the phone', and of course, unfortunately, it's been trained on pictures from the internet of CEOs and people on the phone, and it gives back only pictures of male CEOs. What's really sad, really disappointing, is that we know this is a problem. We knew years ago that this was going to be a problem, and we actually have ways of fixing it, simple ways of fixing it. But the people putting these tools out there have not bothered to do that, and so we're perpetuating the biases of all the images on the internet, with their predominantly male CEOs. Those sorts of biases that we've been trying to get out of society are being baked into these AI tools. 

And there are lots of more subtle things that we probably haven't realised yet. You can generate wonderful images now; these are all generative AI images. I'm starting to think that if graphic designers asked me for advice, I would probably say they need to train to do something else, because it's going to be so cheap to do these things. But equally, I'm starting to think that maybe in a few years' time we're going to look back and say, "Well, wait a second, the greatest heist in history happened in front of our eyes and we didn't even realise it was happening". 

The greatest heist is all of our intellectual property being stolen from underneath us. These tools only work because they scrape everything they can find on the internet: all of these images, all of the text they can find, all of Wikipedia, all of the US Patent database. That's all our intellectual knowledge. That's all our cultural knowledge. That's all our knowledge about the geography of the planet. That's all our personal lives, because they're scraping social media. They're scraping all the information about us – not just our online selves, but also our offline selves. 

Did you know that Google tracks 70% of credit card transactions in the United States? That's real-world credit card transactions, not just the ones on the internet. Even for the ones where people physically hand their credit card over in the store, Google gets 70% of that data, so they know everything about our spending. All of this information is being poured into these chatbots, these generative AI tools. Which is great fun – you can do lots of fun things, you can make these fun pictures to illustrate your homework – but none of the value is going back to the people who generated it. Is that sustainable? I compare this to Napster. If you remember, when Napster came out, we started streaming everyone's music, and none of the value, none of the money, went back to the artists. Well, that wasn't sustainable, and we woke up and changed that a bit. Now we pay people for streaming. I'm not sure we're yet in a fully sustainable place, or whether enough value is coming back, but we worked out that you just couldn't steal it. You couldn't just give it to companies like Spotify, who would take all the value. 

Well, all of our intellectual property is now being hoovered up by these tools. And there are, of course, class-action suits going on, by graphic designers and by coders – because these tools can write not just text but also code – saying, "Well, wait a second, none of the value is coming back to us. Is that sustainable?". So that might, I suspect, be something we look back on in 10 or 20 years' time and say, "Well, wait a second, that wasn't uniformly good. It didn't actually help everyone. It actually drove us apart in many respects; it didn't bring us together". 

We may look back at this moment in AI and say, "Wait a second, we watched these tech companies steal all of the intellectual property of humanity in front of our eyes, and now they've captured it. They've captured all the value, and everyone else – the artists, the writers, the coders – has been frozen out". 

And I want to leave you with one other really important idea about artificial intelligence, and it comes back to my friend here, the octopus. I know the pandemic was a pretty miserable time, but one of its gifts is that, I hope, most of you had the opportunity to see that wonderful Netflix documentary, My Octopus Teacher. If you haven't seen it, please do go and watch it. It's a fantastic story about the octopus, and what's interesting about octopuses is that they're really intelligent. They are incredibly intelligent. When you talk to scientists who work with octopuses, they will tell you wonderful stories about their different personalities. They give them names because they have really distinctive personalities. One of the marks of intelligence is being able to use tools, and octopuses can use tools. You can train an octopus to open a screw-top jar to get food out. They're famous escape artists: if you put them in a tank, they'll find a way of escaping.  

There's a wonderful story of an octopus who learnt how to turn the lights off. It was put in a laboratory with bright fluorescent lights that were left on at night, and the octopus wanted to sleep, so it worked out that it could squirt water at the light and turn it off. They're remarkably intelligent. Indeed, they're the only invertebrates protected by European law against animal experimentation, because we can see that they're clearly very sensitive, very conscious, very intelligent; it would be cruel to experiment on them. Indeed, after I researched octopuses a bit for my new book, I realised I could not bring myself ever again to eat octopus. They are far too intelligent to be eaten. So they're really smart. But what's interesting about octopuses is that they have a completely different intelligence from ours, even in a physical sense. 

Around 60% of an octopus's neurons are in its legs. Its brains – plural – are distributed quite widely over its body, so it really has nine brains. It must be quite different to be an octopus; it's interesting to think about what it must be like to think like one. Now, of course, all life is related. We're all part of the same animal kingdom, and we are related to the octopus, but you have to go back millions and millions of years to find a common ancestor.  

Our common ancestor with the octopus was just barely multicellular and not intelligent. So the intelligence of octopuses is completely different: it evolved separately, in a completely separate way from human intelligence. Indeed, I say to people, you should think of the octopus as the closest thing to an alien on the planet. Aliens are more likely to look like octopuses than anything else we can think of. So intelligence arises in different ways: in an octopus in one way, and in humans in another. And we shouldn't think that the intelligence we build in machines is going to be like our human intelligence. That's a very human conceit. It's very natural, of course, because our experience of intelligence is coloured by our own. When you woke up this morning and started thinking, that gave you an experience of intelligence, and it's hard not to suppose that other intelligence is going to be like that. But you should think of other intelligence, like the intelligence of an octopus, as likely to be quite different. And the early clues we have from the limited AI we can build today are that it is very different from human intelligence.  

There are many ways to build intelligence that are not like human intelligence. Therefore, we should be careful about handing responsibility to artificial intelligence. I wouldn't give an octopus charge of a machine gun, so I say to people in the military, maybe you shouldn't give this AI charge of a machine gun either. It's going to think and see the world in different ways. 

Another metaphor, another way I like to get people to think about it, is flight. There's natural flight, the stuff that birds and bees do – flapping wings, feathers, and all of that – and there's artificial flight, the stuff we do in airplanes: a big fixed wing with a very large jet engine typically propelling you along. It's a completely different solution to the same problem, heavier-than-air flight. It's the same underlying physics, the Navier-Stokes equations of aerodynamics, that governs them both, but we came up with, arguably, a better solution. We fly further and faster, more efficiently, than the birds and bees do, and we still don't really understand how birds fly; feathers are still a bit of a mystery to science. We engineered a different solution. You should think about machine intelligence the same way: we're engineering a solution, and it's mostly not going to be like human intelligence. Don't think of it that way; that's going to be a recipe for many disasters.  

So hopefully I've given you an idea of where AI is going to turn up, quite quickly, in your lives, and how to think, perhaps more appropriately, about how artificial artificial intelligence is. If you want to know more, I've written a couple of books, and I'll be signing some of them over there after the talk. 2062 looks at the future, at when machines might be as smart as us. My most recent book looks at some of the ethical challenges I hinted at in today's talk. And coming out in October is my very newest book, which is all about fake AI and AI fakes – things such as the deep fakes of Trump and the pope that I was talking about. So thank you very much.  

UNSW Centre for Ideas: Thanks for listening. This event was presented by the UNSW Centre for Ideas and Sydney Writers’ Festival as a part of the Curiosity Lecture Series. For more information, visit and don't forget to subscribe wherever you get your podcasts. 

Toby Walsh

Toby Walsh is Chief Scientist of UNSW's new AI Institute. He is a strong advocate for limits to ensure AI improves our lives, having spoken at the UN, and to heads of state, parliamentary bodies, company boards and many others on this topic. This advocacy has led to him being ‘banned indefinitely’ from Russia. He was named on the international ‘Who's Who in AI’ list of influencers. His most recent book is Machines Behaving Badly: the morality of AI.
