Without a doubt, the biggest non-Swift story in 2023 was the way generative AI broke out. While the technology has been around for a few years, ’23 was the year the average person became aware of what it could do. Apps like Copilot and ChatGPT were made available to the general public, and at the same time the quality of the work these apps produced kept improving. A year ago it might have seemed impossible that an AI could write a script, pass a bar exam, or create realistic photos, yet AI did all of that and more. If we look at the year to come and think about the rate at which AI is growing, should we be terrified?
The argument for “yes”
Obviously at this point you’ve heard every Terminator/Skynet reference there is. The idea that an AI could take over key systems against our wishes is out there. So is the idea that an AI would begin making decisions we would never make ourselves. Imagine if an AI decided that the biggest threat to the planet’s existence is humanity, and figured out how to eliminate that threat.
I don’t think any of that is going to happen this year. At the rate AI is growing, it could happen next year, but for now I think we’re safe enough. However, there are two things that could very easily happen.
Doomsday scenario #1: Dependent or lazy people allow AI to do something bad
I don’t think AI will get smart enough this year to take over the world by itself. However, it’s very possible that we’ll simply let it destroy things. I can’t count how many times in the last 12 months I’ve had to tell someone that the conclusion they got from ChatGPT was wrong; it’s happened more than 25 times in the last week alone. When we depend on AI rather than doing our own research, we become less able to make good decisions.
We all know AI isn’t perfect, but there’s a lot of enthusiasm for it. What happens when some factory or some city uses AI to make decisions and those decisions are just plain wrong? It could mean a lot of chaos, purely due to human laziness. I think that’s already a big problem with AI, and it’s going to get ten times worse in 2024.
Doomsday scenario #2: AI-generated content is so good that no one can trust anything they see or hear, ever
Friends, you know I’m not going to get political here. But it is 2024, after all, and you don’t need me to tell you what that means. There’s going to be a lot of content out there, both words and pictures, designed to sway people making big decisions. It’s already far too easy to create content that looks 100% authentic and is a total lie. Even if it’s proven to be AI-generated a week later… well, you all know how this works. Once something is out there, it’s out there.
I think a lot of people share a big concern that overuse of AI will make people disbelieve everything they see. When no words or images can be believed, fake information carries the same weight as real information. That can lead to very unpredictable (and very bad) consequences.
The argument for “no”
The big argument for “no” is that, as I write this in February 2024, AI still isn’t that good. AI is good at answering questions based on information it finds on the internet. It’s not good at drawing conclusions, and it tends to make things up a lot. So smart people still don’t trust it for critical tasks. That may change six months down the road because AI is evolving so quickly, but it’s true today, and people don’t adapt their opinions as quickly as AI can change.
Another real bright spot is that a lot of people (including myself, obviously) are talking about this problem now. Some of the key thought leaders in the world are very concerned about the dangers of AI, and that means they’re having the discussion that needs to be had. Hopefully this leads to tools that let us use AI productively.
You might notice that this “no” section is quite a bit shorter than the “yes” section. Draw whatever conclusion you wish from that.
Guardrails and a pause
For most of 2023, some very prominent people talked about putting “guardrails” on AI or pausing its development for a period of time. First, I would like to point out that apparently none of the people suggesting this have ever seen a science fiction movie. You can’t put “guardrails” on AI. Too much of its source code is already available. If responsible people stop work on AI in any way, it opens an avenue for irresponsible people to jump in. I know this argument sounds like similar arguments that have been made in the past about weapons and other things, but let’s not group things together just because they seem similar.
What about putting a pause on AI development? Again, the people who propose this obviously don’t watch movies. Let’s put aside for a moment all the films where fictional AIs interpret pauses or limits as a threat and try to kill us. After all, that’s fiction. It’s smart to consider, but it’s still fiction. (For now.) It’s more important to look at the same argument I used for “guardrails.” Do we really believe that declaring a pause in AI development will work? If responsible companies pause, that just means all the work will be done by irresponsible companies.
The spy movie argument
Yes, a lot of the problem with any discussion of AI is that our thinking is shaped by books and movies. But what can you do? That’s where much of the dominant thinking on the subject happens. So let’s talk about the spy movie argument.
You see it all the time in movies: two adversaries agree not to develop something, and then they develop it anyway. Usually it’s some sort of weapon. It’s hard to know how often that happens in real life. But if we all said, “we agree not to work on AI,” how many companies would keep working on it in secret? Call me paranoid, but I think they all would. They would simply assume everyone else is doing it. Simple as that.
What’s the real-world prediction?
Folks, I’m one blogger, and maybe I’m not any more informed than you are. But if we’re asking the question “is this the year to be terrified of AI?” I say the answer is no. However, that doesn’t mean the threat isn’t real. We all have to pay attention to where our information comes from. It’s harder than ever to know what a reliable source is, but we have to try. To me, at least in 2024, a reliable source is one where trained journalists working for major news outlets can provide multiple references to prove something is true. That’s hard to find today, I get it.
Bottom line: don’t believe everything ChatGPT tells you. Don’t believe everything you see in your social feeds. If a photo or news story seems like a “bombshell,” check on it a week later to see if it still holds up. Do that, and we’ll all be fine.