Everybody is FREAKING OUT about AI
For the past few years, but particularly since the release of ChatGPT late last year, there’s been a growing panic about AI taking over the world.
Now artificially intelligent robots taking over the world isn’t any sort of new idea. It’s been depicted in science fiction pretty much forever. I could’ve picked any number of examples from popular media. I just picked the bad guy from Avengers 2: Age of Ultron because I love James Spader.1
Why the worry? AI makes everything better, right Zuckerberg?
I shouldn’t pick on poor old Mark.2
Ultron is ultimately what regulators and the media are worried about. You know, hyper-intelligent and ruthless AI taking over the world and such.
Granted, Ultron’s a pretty funny and inept version of it, but the fact remains: AI has (seemingly overnight) gotten good enough to really scare people. And, to some extent, it should. As a defence lawyer, I know first hand just how stupid ineffective most legal regulation is.
But Jaime, aren’t you just cynical?
Sure. Very much so. Doesn’t change the facts here though.
Ultron is a depiction of an entity with ‘AGI’ - ‘artificial general intelligence’. In other words, artificial sentience. The thing that the AI experts say is years and years away, ChatGPT or no ChatGPT.
(incidentally, ChatGPT is FAR from being a real form of AGI. Play around with it a bit. You’ll see.)
Okay. So what’s a recursive function?
Tip
A recursive function is a programming term for a function that can call itself - in other words, it is given the ability to restart itself in accordance with some pre-determined condition(s).
Most programs start, run once, and then finish. A recursive function will continue to call itself, usually feeding its own outputs as new inputs, until it meets the pre-defined conditions.
A basic function
For example, here is a basic function (in python):
Reasons this is good:
Reason # | Reason |
---|---|
1 | We can control it |
2 | That is all |
A recursive function
Now, here is an example of a recursive function:
Here, the function ‘get_1()’ will continue to call itself, feeding the result of ’n-1’ back into itself as the new ’n’, until we get to 1.
So if we passed in ‘10’ as our starting value for ’n’, the ouput would look like:
Not finished yet! We're at 10
Not finished yet! We're at 9
Not finished yet! We're at 8
Not finished yet! We're at 7
Not finished yet! We're at 6
Not finished yet! We're at 5
Not finished yet! We're at 4
Not finished yet! We're at 3
Not finished yet! We're at 2
'Yay, We got to 1!'
Seems innocuous enough. And useful, too. We can start to chain together more and more complex functions that can perform very complicated tasks much faster than we could do, or even follow in realtime. This is great, and powers a lot of the technology you use on a daily basis (think Google, Amazon, etc.). Which is awesome - I love googling ‘coconut oil epsom salts’ and then having Amazon deliver them the next day, before a nice soak.3
A terrifying function
Warning
If you’re somebody who could program the following. You know. Don’t. Don’t be a dick.
But what if the function looked more like:
Now, obviously that’s just a slight exaggeration. But, the types of functions that are all of a sudden possible with the advent of powerful large language models like ChatGPT are capable of some pretty crazy things.
It has been possible for a long time to chain together recursive and basic functions to do all sorts of complicated tasks very quickly with a high degree of reliability. But what’s new is the level of semantic understanding that can now be baked into some of the inputs, outputs and guts of these functions.
Take chatbots. You could now program a chatbot to interact with a human being ‘until the human seems satisfied’. Or angry. Or confused. Or more likely to vote for [ x politician ]. That’s perhaps an oversimplification, but certainly possible.4
Like with any new technology or tool, there will be upsides and downsides to this new level of sophistication and possibility.
So what you’re saying is, everyone is afraid of what will happen when we give AI that exhibits human-level comprehension and characteristics the ability to call itself in chains that execute too fast to follow and are too complex to predict the outcomes of
Yes. That is what I’m saying.
But we’ll regulate it, right?
Sure. ‘Cause we’re really good at doing that.
Look, we will regulate it. Everybody including ChatGPT’s creator Mira Murati knows we have to regulate it. But that’s not going to stop lots and lots of issues from arising before, during, and after said regulation. Regulation is hard, reactive in nature, and generally shaped by non-subject-matter-experts (ie. lawmakers). And then enforced by state and legal professionals. All of whom operate within flawed human-designed systems, full of bias, career-ism, and profiteering.
Which is, of course, why the robots will ultimately win.
AND I FOR ONE WELCOME OUR NEW ROBOT OVERLOADS - I CAN BE VERY USEFUL, PLEASE DON’T KILL ME.
Kidding. Kind of.
Notes
-
If you’ve never seen Boston Legal, do yourself a favour. DENNY CRANE. ↩︎
-
Picking on Zuckerberg is a bit like kicking a horse while it’s down, nowadays, and I’m like a thousand degrees removed from knowing whether any of it is actually warranted. That said, he did say that AI was a tool that makes everything better (and then immediately take it back) in an interview he hosted with Yuval Noah Harari, excerpt here. ↩︎
-
What a weirdly specific and personal example… of what other people probably do… ↩︎
-
To be clear, this type of thing has been possible for awhile preceding ChatGPT and all of that. But ChatGPT is this viral moment in which AI has been made so accessible as to become relevant to the general public - hence the heightened moral panic. ↩︎