This article is an edited transcript of the video.
I've been getting a lot of questions recently about what exactly generative AI is, and what it means when one of your favorite tools suddenly says it has AI, complete with the little AI sparkle emoji. ✨ Let's break it down.
Watch the Q&A instead of reading!
The first thing to know is what exactly generative AI is and how it differs from other types of artificial intelligence and machine learning that have come before.
Artificial intelligence vs. machine learning: are they the same thing?
Artificial intelligence and machine learning are intrinsically related and they've been around for decades.
Machine learning
Machine learning has long been used in automations where you're telling a program, "if X happens, then do Y." A really good example of this in a modern business context is an app called Zapier. (There's another option similar to it called If This Then That.)
You can set these apps up so that if something happens in software program one, the app takes a specific action in software program two. For example, you might say, using Zapier, "when I get an email from this person or with this subject line, create a new project in my ClickUp account." That is an automation done by a machine. Machine learning is also what enables certain programs that crunch a lot of data.
If you do marketing, think of your SEO tool of choice that lets you see how your website is ranking for different keywords. Those apps use machine learning to process vast quantities of data and come up with the specific metrics that you need to see. It's all very prescriptive. These apps are looking for a specific parameter that you set and then executing a specific action that you've told them to. A human (you) has told the program, via code and instructions, to do something very specific. And that is all it does.
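To make "prescriptive" concrete, here's a minimal sketch of that kind of rule in Python. The email fields and the `create_clickup_project` helper are hypothetical stand-ins, not real Zapier or ClickUp APIs; the point is that nothing is predicted or generated: the program only checks the condition a human defined and runs the action a human defined.

```python
# A hypothetical, hard-coded automation rule: "when I get an email with this
# sender or subject line, create a new project." No prediction, no generation.

def create_clickup_project(name: str) -> None:
    # Stand-in for a real API call; here it just reports what it would do.
    print(f"Creating ClickUp project: {name}")

def handle_new_email(sender: str, subject: str) -> None:
    # The condition (X) and the action (Y) were both written by a human.
    if sender == "client@example.com" or subject.startswith("New project"):
        create_clickup_project(name=subject)

handle_new_email("client@example.com", "New project: website redesign")
```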
Generative AI
Generative AI, however, is a predictive technology. Rather than simply executing Y when X happens, it's statistically calculating what you, the user, are most likely to want or expect to happen next. It is, in some ways, a very advanced form of T9 texting.
If you've never used a T9 phone, each of the number keys has three to four letters associated with it. So if I type 238, it knows that I am most likely trying to type the letter B, the letter E, and the letter T for the word bet. But it also knows that I might be trying to type another word made from those same keys (like "aft"), and I can pick which word I want. There are limits to this particular kind of predictive text technology, though. T9 phones use a dictionary, not unlike the dictionary that powers the spell check on your iPhone or Android.
Every time you start punching in numbers, it checks the dictionary to see what words can be made out of the letters on the keys you're pressing, and then it quickly ranks those words by how commonly they're used in everyday language. But if you tried to make a sentence using only the predictions that this keyboard gives you, things would get weird very quickly because it only knows so many words. That even happens with the spell check on your iPhone!
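Here's a toy sketch of that dictionary lookup in Python, assuming a tiny hard-coded word list; a real T9 keyboard would also rank candidates by how often people actually use each word.

```python
# Toy T9 lookup: map each digit to its letters, then find which dictionary
# words can be spelled with the keys that were pressed.
KEY_LETTERS = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

# A tiny stand-in dictionary, roughly ordered from most to least common.
DICTIONARY = ["bet", "aft", "good", "home", "gone", "hoof"]

def t9_candidates(digits: str) -> list[str]:
    """Return dictionary words that match the pressed keys, most common first."""
    return [
        word for word in DICTIONARY
        if len(word) == len(digits)
        and all(letter in KEY_LETTERS[digit] for digit, letter in zip(digits, word))
    ]

print(t9_candidates("238"))   # ['bet', 'aft']
print(t9_candidates("4663"))  # ['good', 'home', 'gone', 'hoof']
```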
In the case of a smartphone, the sentences that you can get will make a little more sense than they would from a T9 phone because the smartphone predictive technology learns from you. It learns what words you personally are most likely to use and which ones you tend to spell wrong.
Going beyond that, we move into the realm of tools like ChatGPT and Claude. These tools are also predicting what you want the next word in a sentence to be, but the predictions pull from a much bigger dataset: basically the whole internet and an enormous amount of text, images, and video. When these tools were trained, their makers gave them a treasure trove of data to crunch and analyze so they could figure out which words, which parts of images, and which parts of videos are most likely to appear in what order. Things like:
- What words tend to appear next to each other in a letter?
- If someone says, "Make a video of a woman drinking coffee," what kind of environment is she most likely to be in? (Probably a coffee shop because we see that in a lot of TV and movies and social media posts.)
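To make "predict what comes next" concrete, here's a toy next-word predictor in Python. It just counts which word follows which in a few made-up sentences and suggests the most common follower; real generative models do this over vastly more data with far richer statistics, but the underlying move is the same.

```python
# Toy next-word prediction: count which word follows which in some sample text,
# then suggest the follower that was seen most often.
from collections import Counter, defaultdict

sample_text = (
    "she drinks coffee in a coffee shop "
    "he drinks coffee at home "
    "she drinks tea in a tea shop"
)

followers = defaultdict(Counter)
words = sample_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed follower of `word`."""
    return followers[word].most_common(1)[0][0]

print(predict_next("drinks"))  # 'coffee' (seen after 'drinks' more often than 'tea')
```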
Generative AI tools aren't meant to just come up with patterns based on a specific list like the dictionary in a phone. They're actually mathematically determining what you, the user, are most likely to expect or want based on:
- What the tool's program has inferred from humanity and society based on all of the media fed into it.
- How you are talking to it.
You and I could talk to Claude about the same topic. But if we're talking to it in different ways (which we would be because we each have our own speaking style) the responses that we get from it are going to be different. That is because of something called a large language model.
Large language models are computer programs built from patterns in vast amounts of training data (basically, the whole internet). There's no real intelligence here. A generative AI tool using a large language model, like ChatGPT, is predicting what the user wants. And that is why it sometimes gets weird and gives you answers that aren't accurate: it's not prioritizing accuracy. It's prioritizing making you, the user, feel like you're getting what you want.
If you feel like you're getting what you want, you're more likely to stay on the tool and keep using it. And the longer you stay on the platform, the more inclined you may be to pay for it, because ultimately, the companies making AI tools are here for money.
But it's important to remember that when you see "AI" splashed on a tool, it's not always the same thing. ChatGPT, Claude, etc. are generative AI.
Some of the tools you've been using for years that now say they include AI are actually using an older form of non-generative machine learning — the same technology the tool has been using all along.
But because AI is a really popular term right now, a lot of companies have rebranded that longtime machine learning as shiny new AI, sparkle emoji included.
Ultimately, though, remember that Generative AI is to intelligence kind of like what a purple lollipop is to grapes. It says grape on the wrapper, but we all know that the taste is actually the taste of purple, not the taste of a real grape. Artificial intelligence is similar. It's a simulation of intelligence, just like that purple lollipop is a simulation of grapes. It's fully synthetic and there is no real true intelligence in there, despite the name on the tin.
How to tell if the app you're using is generative AI or not
Does the app create something new? It's generative.
Does the app execute an action you've requested (and only that action) without creating something? It's not generative.
Does the app present you with nicely organized data and analytics, but doesn't give you any way to manipulate the results into something new like a blog post? It's not generative.
What's the problem if it's generative?
Remember: the goal of the companies making ChatGPT, Claude, Meta AI, etc. is money. And to get money, the product they build has to keep getting better at predictions, which means it always needs more data to run its calculations on. So when you use these tools, especially for free, what you type in and what the tool gives back to you are generally retained (there are some exceptions). Your data goes into building the tool's dataset and further refining how it interacts with people.
This process can technically expose your content and ideas to other people, and this can happen in two ways:
- There have been some instances where people have, say, chatted with ChatGPT and then created a link to share that conversation with one other person. At one point, there was an issue where those shared conversations were actually visible to the whole world; they wound up in Google search and anyone could find them.
- The other way it can potentially expose your data and your ideas to other people is that the tool pulls what you're doing into its training data. So over time, the outputs other people get start to become a little bit more like your ideas, your style, and your voice.
Small indie AI tools vs. big chatbots
A lot of the AI tools you encounter today are actually built on top of one of the big platforms: OpenAI's ChatGPT, Anthropic's Claude, or Meta AI. You might be using a tool made by a different company (it doesn't say OpenAI, Claude, or Meta AI anywhere), but that company is paying to use the big companies' AI models and pulling that capability into its own website or app.
You may inadvertently be using these big companies' algorithms and their AI through a different application. If you're not sure and you want to know more about where a tool is getting its AI capabilities from, or what kind of training it's doing, you can ask the following questions.
Questions to ask the makers of an AI app
- Ask what foundation model they're using. The foundation model would be the AI that's powering it. The company may say they've built their own proprietary AI — which is the case sometimes, but doing so is very expensive.
- Ask how your data is used to train the foundation model. A company might say they aren't using your data, but if they're partnering with one of these AI companies for the actual AI capabilities, it's possible that your data, while not used by the intermediary app itself, is being passed back to OpenAI, Anthropic, Meta, etc. to further enhance that AI's capabilities.
Settings to look for in an AI tool
There are also some steps that you as the user can take, regardless of what the company says or how they're building their tools.
- In the app's settings, look for options to not train the model using your data. They may phrase it as, "Use my data to help make this product better," or, "Let the company use my conversations to improve user experience for everyone." Anything that talks about using your conversations with the AI — you want to turn that off wherever possible.
- Consider whether anyone else's data is exposed by your use of the AI tool. A big one here is the AI notetaker. When an AI notetaker joins your call, from your end you're just getting a nice little summary of the call, what happened, and maybe some action items you can take away. But what's actually happening is that the AI is recording, processing, transcribing, and analyzing that call. This means you're exposing everyone else on the call to recording by an AI tool. Depending on where you live, it may not even be legal for you to record a call without the consent of the other people on it. If you are adding notetakers to meetings (and there are definite accessibility and usability reasons for doing so), be sure you're letting other people know and giving them the opportunity to consent, or to say they don't feel comfortable with it.
- Know what you're putting in the AI tool. If you're using an AI tool, especially a free one, and you're feeding it information, and you're putting in your own ideas, that's one thing. But if you're putting other people's content into it, or content that you don't have full ownership of, or maybe content from your employer, that can be a breach of privacy, and it can be unethical. It can also get you into trouble later on.
Are there ethical ways to use AI?
I also get asked if I know any good or ethical ways to use AI. And this is a hard one to answer because ethical thresholds vary by person. The number one thing that everyone can do to use AI in a slightly more ethical manner is what I mentioned before:
If you are going to be adding an AI tool to a meeting, make sure that you're getting consent from the people in the meeting.
Also make sure you're getting consent when you are connecting any of your clients' or customers' data and accounts to an AI tool.
Next, think about copyright. AI models are typically trained on copyrighted data. I personally don't know of any generative AI model that hasn't been trained on copyrighted data, which is something to consider. Think about how you feel about that. Where does that sit with you?
Finally, there's the question of the environment. AI companies are building giant data centers, essentially large warehouses filled with computers, often in areas that are already struggling to maintain adequate drinking water supplies. They build these centers in dry, arid areas; you're not going to build one right by the ocean, where the salt would get into the computers.
These data centers need fresh water flowing through them to cool all of the computers, because the machines get very hot. The facilities also add a lot of noise and light pollution to the communities where they're built. Plus, the buildings are huge, so constructing them causes environmental damage of its own.
Data centers also suck up a lot of energy. If you have a warehouse filled from floor to ceiling with computers, that's going to use a lot of electricity. People who live near these data centers are finding their utility costs going up.
Some companies, like Microsoft, are looking at building or reactivating entire nuclear power plants to power their AI data center needs. One example is the Three Mile Island nuclear plant in Middletown, Pennsylvania, the site of a serious accident in 1979. After that accident, half of the plant was shut down, and only the other half continued to operate. A few years ago, the plant's owners decided to decommission it. (Decommissioning is the long, multi-year process of turning off a nuclear power plant and securing all of the fuel rods.) That process has never actually been reversed before, but Microsoft struck a deal with the plant's owner to restart it and buy its power for AI data centers. Will it work? Who knows.
Are any companies using AI in a decent way?
I talk a lot about why there are issues with AI, even though it can be helpful for accessibility, learning, and planning. A lot of what's commercially available to us, though, is rife with the issues I've discussed above. Don't be misled by tech leaders. A lot of them talk about something called AGI, artificial general intelligence, and say that this is the future.
But AGI doesn't actually exist. It is the stuff of science fiction. OpenAI has their own definition of AGI, which is based on revenue — not actual intelligence gains.
Some of the people who run AI companies are saying AGI is going to be here by 2027. That isn't the case. It is not.
We are not anywhere close to AGI. In fact, the more AI content that gets put on the internet, the more AI tools scrape up that data and learn from it. And when AI learns from AI-generated content, it actually makes the AI worse, not better.
If you do want to use AI, there are some slightly better options, but do remember that some of the ethical points I mentioned previously still apply here:
- Cloudflare. Cloudflare is actively using AI to fight the AI scraping of copyrighted content and websites. If you add Cloudflare's AI protection to your website, it'll actually use AI to generate a web of pages that trap AI companies' crawlers and prevent them from scraping the unique content on your website. I do think that's pretty cool.
- Proton. Proton, based in Switzerland, makes a privacy-friendly suite of products and has introduced an AI tool called Lumo. There are still potentially some environmental and social issues at play here, but your data is supposed to be safer with Proton Lumo than with something like ChatGPT.
- ClickUp and Apple. These are two companies that have AI features, but you have to knowingly opt into them. You have to click a button that says, "Yeah, I want to use Apple Intelligence," or, "Yeah, I want to pay an extra $30 a month fee to use the ClickUp intelligence." It's not automatic, and that's what we want to see. The alternative is companies like Microsoft, which shoehorned Copilot into the full Microsoft 365 suite of apps, like a new, bigger, more intrusive version of Clippy. They don't let you turn it off, and in fact they charged everyone more for it. My Microsoft subscription went up 20% this year, so I canceled it.
While I'm personally in the "no generative AI at all" camp, I know that may not be realistic. As a middle ground, I'd like to see more companies like ClickUp and Apple, which set up AI as an option you can opt into, versus companies like Microsoft, where AI becomes the default and you either have to opt out or can't opt out at all.