Tuesday, October 28, 2025

Genuine intelligence

For my job, I write general interest stories about computer science research. My colleagues call themselves "science writers." I dislike this term, but it's generally correct.

After three years here, I've written about a lot of weird research — everything from robots to reinforcement learning to cancer cells to virtual reality. All of it involves artificial intelligence. I've learned enough that now none of it seems weird anymore. 

I've also learned enough that now podcasts and tech journalism are starting to make me mad, because most reporters have no idea what they're talking about. So I'm at the same level as, say, a second-semester college freshman. 

I like my job a lot, even though I have to talk to faculty members about their innovative machine learning algorithms when I barely passed high school calculus. It's a good job for me: I'm curious and I don't care about looking stupid. (Faculty are often astounded at what I don't know. Most of them are polite about it.)

All this to say: I now know just enough about computer science that the current "discourse" around AI is driving me nuts. It used to be that only the political chatter drove me nuts, because I had studied too much political science. Now the tech chatter is also driving me crazy. It's a good thing I don't read the news or there would be nothing left to read. 

My most urgent request: can people please start using the right terms? AI as a concept has existed for like 70 years. It is not synonymous with ChatGPT! AI includes many different types of technology. Machine learning is just one type of AI and LLMs are a small subset of machine learning. And ChatGPT is just one brand of LLM! If AI is the internet, then ChatGPT is AOL. 

Also: If you read anything that suggests AGI is coming within the next decade, you should immediately know that the writer is either full of shit or receiving a paycheck from a company working to build AGI (or both). No one even understands yet where general human intelligence comes from! How exactly do you think these same people will build general artificial intelligence? I don't care if they have billions of dollars to do it. Throwing money at an idea doesn't make it magically appear.  

Finally: you may think that AI is coming, but AI is already here. Do you use facial recognition to open your phone? Does Spotify recommend music for you to listen to? Do you sometimes follow people that Instagram recommends? Do you use Google Maps to check traffic? Do you check the weather forecast? Do you exist in the world with a computer and a smartphone? To stop using AI, you will need to learn how to time travel. And I bet AI could help with that. 

The only question in my mind is what we do with AI. It's the same story as the internet: computer scientists invented this amazing network for sharing information and we used it to buy things (Amazon) and share what we ate for breakfast (social media). We did some good stuff with it too, but on the whole I'd say most people wish we had handled it better. 

Now we have AI, and we can either use it to create fusion energy or we can use it to write mediocre student essays. I can tell you which technology I would like my children to have. 

6 comments:

Alex said...

I think when people say AI is coming, they mean for their jobs. Just like AI was happening/coming for a while, and then it took over student essay writing wholesale, basically overnight. I think there is the fear that it will do similar things to many kinds of jobs, very quickly.

Alex said...

I guess I don't think it matters if people understand that AI is used in facial recognition or whatever. They can see that generative AI is a tech sea change, like the internet, or the smartphone, and it has already started to change their lives, and more changes will happen. Good or bad, TBD. Of course I agree that we should use it for scientific breakthroughs and not to rot our brains, but I don't see any way to influence that outcome, either personally or in a larger sense, through policy.

Julia said...

I think understanding what something is and what it can do (or not) is important to figuring out how we ought to use it, both as a society and as individuals. I don't have much hope that the government will regulate the AI industry effectively, but there is no hope of this at all if regular non-technical people don't have any idea what AI is or what it can do (other than write essays).

If you're not interested in learning that AI is embedded in much of our daily lives already, ok. I am interested! I was also interested to learn that none of these changes have happened suddenly — that generative AI was built steadily over many years and did not emerge out of the blue. I was also surprised to have experts in machine learning tell me that humans are vastly more intelligent than even our most intelligent machines.

All of this has helped me understand that AI is a normal technology, just like the internet. I think the more everyone understands what AI is, the better off we'll be, as people and as a society.

Alex said...

I mean, I would like to believe that if I only had a greater understanding of what AI is and what it could do, I could have better control over its effect on my life, but I am not convinced. Like, it's important I know how my stove works because if I don't use it, I can't cook my food, and if I use it badly, I'll burn down my house. I don't feel like the more I know about AI, the less likely it is to take my job, or that it would decelerate the quickening pace of the mind-poisoning that the internet has well underway.

Also ten years is not a long time, in terms of a horizon for AGI. Our kids won't even be in the workforce yet.

Julia said...

OK. I will live on, despite my failure to convince you!

And, you're right, I don't know what you would think about AI if you learned more about it. Perhaps you would be even more worried about it than you are now.

But I did learn more about it. And I got less worried. I realize this doesn't convince you. That's fine! But it's honestly what happened to me.

You will never convince me that not understanding something is preferable to understanding it. Even if I can't change anything about it, I'm still glad I know something about AI, if only to give some nuance to the largely inaccurate optimism of corporations and depression of the media.

Alex said...

In general, I like to understand things as well, and I actually think I have a pretty decent understanding of what LLMs and generative AI are/can do. (And thanks to you, have now spent some time training them lol.) I'm glad you think corporations are misguided in their optimism (truly!) but they are still doing a lot of damage in the meantime, regardless of whether we have a nuanced understanding.