MTEC | Miami Tech Enthusiast Club

Oppose Florida's AI age verification bill, protect your privacy


Time to Read: 13 min

As the Florida Legislature begins its 2026 regular session on January 13, we face another unfortunate challenge to our digital privacy as Floridians. What was originally presented as an artificial intelligence bill of rights for residents has become a law requiring age verification to use AI chatbots. While the rest of the bill is actually quite good, today we have to oppose the AI age verification bills SB 482, HB 1395, HB 659, and SB 1344 until or unless the age verification piece is removed.

If you already understand why this is a problem, scroll to the bottom to learn how you can take action.

Preface

First, I am not a lawyer, nor is anyone else in our club. This analysis is based on my reading as a constituent who is trying his best to engage in the civic process in an area he knows a little about.

Second, I am not a technology expert. While I have been interested in digital privacy for about four years, I would only consider myself to be a tech savvy hobbyist.

Third, I want to recognize how AI impacted the life of a teenager who tragically took his own life after being manipulated by a chatbot from Character.ai. Unfortunately, the use of AI has introduced many horrors in the last few years. Personally, as someone who sees little to no value in AI while clearly seeing all of the harm it has caused, I want to empathize with this teenager's parents and anyone else who has suffered deep pain at the hands of this technology.

Why Age Verification is Bad

At first blush, this all may seem like unreasonable pushback against an idea designed to protect kids online. We ask for ID at bars, after all. Minors are not allowed at certain events. We understand this in the physical world, so what is the problem with verifying someone's age in the digital world?

The problem is a fundamental difference in how that information is collected and used.

In order for an app or service to verify someone's age online, the developers have two options:

  1. Use facial recognition to estimate the age of the user.
  2. Require a form of identification which states the user's date of birth.

In the physical world, someone briefly glances at an ID you provide, checks your age, hands back the ID, and you're done. Either you are old enough to participate in whatever the thing is, or you're not and you are denied access.

In the digital world, you have handed either biometric data or a copy of vital government documentation over to another party, and you can only hope that they handle it responsibly.

In the physical world, an ID check is harmless.

In the digital world, it is the backbone of a surveillance state.

What Does the Bill Say?

The section on age verification can be found in the original bill text of SB 482 starting on line 256.

The first line reads:

A companion chatbot platform shall prohibit a minor from entering into a contract with the platform to become an account holder or from maintaining an existing account, unless the minor’s parent or guardian provides consent for the minor to become an account holder or maintain an existing account.

That's it. That's the kicker.

The rest of this section is full of great parental control ideas. If an account is identified to belong to a minor and the minor can use it with the parent or guardian's consent, the parent or guardian can:

When a "companion chatbot platform" knows that an account belongs to a minor, they have to:

A "companion chatbot" is defined as "an artificial intelligence system with a natural language interface that provides adaptive, human-like responses to user inputs and is capable of meeting a user’s social needs, including by exhibiting anthropomorphic features and being able to sustain a relationship across multiple interactions." If it's just a feature or a small part of the service someone is using, like a customer service bot or a character in a video game, that's not what is meant. The exceptions can be found starting on line 209.

"Companion chatbot platform," by extension, refers to "a platform that allows a user to engage with companion chatbots."

How This Plays Out

Let's run through the intended experience that you would most commonly have in Florida if this bill were to be signed into law.

You are interested in finally using ChatGPT. You download the app and open it up. ChatGPT prompts you to make an account. As part of the process, you will be asked to identify yourself or you cannot make an account.

"But wait," you may be asking, "I thought only minors had to verify their ages?" ChatGPT doesn't know anything about you. It needs you to identify yourself so that it can determine your age and therefore constrain or not constrain the experience. Everyone has to do the ID check, and then the app or service will know who is a minor based on the information provided.
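To make the mechanics concrete, here is a minimal sketch of what a platform-side age gate has to look like. All of the names here are hypothetical, not taken from any bill or platform; the point is simply that the platform can only learn which accounts belong to minors by checking every single user first.

```python
from dataclasses import dataclass

@dataclass
class VerificationResult:
    # Outcome of an ID scan or facial-recognition check (hypothetical type).
    estimated_age: int
    evidence_retained: bool  # did the verifier keep the selfie/ID image?

def gate_account(result: VerificationResult) -> str:
    # The gate runs for everyone: the platform cannot know an account
    # belongs to a minor until the user has already identified themselves.
    if result.estimated_age < 18:
        return "minor: parental consent required"
    return "adult: account created"

# Every signup, adult or not, produces a verification record somewhere.
adult = gate_account(VerificationResult(estimated_age=34, evidence_retained=True))
teen = gate_account(VerificationResult(estimated_age=15, evidence_retained=True))
```

Note that in both cases a selfie or ID image had to be handed over before the age question could even be answered.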

Facial Recognition

Let's assume they implement facial recognition. It's possible OpenAI implements an in-house solution, but more than likely they will use a third party provider like Persona.

If the chatbot platform does the facial recognition themselves, they now have a copy of your biometric data.

Ideally they delete it as soon as possible, but you don't know what they will actually do, as many users of Big Tech have learned over the past decade. Here's one example, where Google told users that their browsing history was not being collected in Incognito mode, but it was.

For as long as they have that picture or video of you, they can use off-the-shelf facial recognition that maps to information from data brokers to learn your identity, like your name, address, phone number, location, job history, family, friends, any related contacts, and much more.

If the chatbot platform does facial recognition via a third party, then another company entirely, one you do not know, did not seek out, and are forced to trust because of a choice the platform made, has a copy of your biometric data, all just so you can make an account. And the chatbot platform may have access as well.

Reject Convenience, a creator who covers issues around digital privacy, including deep dives on company privacy policies, reviewed what kind of data Persona collects on their users and what they do with it.

Per the forum post, here is the kind of information they collect:

  - Most importantly, scans of your face. They retain this data on their servers; it's not stored locally.
  - Your name, username, email address, postal address, phone number, age, gender, and marital status.
  - Financial transactions, credit card numbers, and financial account details.
  - Photos, documents, and emails sent to their service.
  - IP address, device type, OS, browser, other software, and geolocation data.
  - Usage data, including what pages you view, how long you spend on a page, and your actions on those pages.
  - Recordings or transcripts of audio or video communications you have with them.
  - Inferred data, including your city, preferences, and other characteristics.
  - Information from data brokers, third-party partners (including social networks), partners of the company, service providers, and public information.

On top of all of the risk that you as a ChatGPT user are being exposed to because of the vendor decisions of OpenAI, here is what else Persona does or doesn't do (per the report from Reject Convenience):

This is the invasion of privacy you and every user of ChatGPT would be subjected to just for wanting to try a new technology or get a better tool for planning a birthday party.

And in the case of the maker of ChatGPT, OpenAI does use Persona specifically.

ID Verification

In the case that you have to verify your age with a picture of your ID, you will run into many of the same risks as with facial recognition.

If the company handles the data in-house, they have access to fully identify you with information they can buy from other parties, as well as whatever else you share. If a third party handles it, now two companies have access to invade your privacy in the same way. If one of these companies suffers a data breach, you are the one bearing the risk of identity theft and harassment while likely having little recourse against the company which asked so much information of you.

With required ID verification, the risk is higher because an ID is a sensitive document. Not only does it show exactly what your legal identity is; it shows your date of birth, your address, and your ID number, something important for signing up for services like bank accounts or for buying a car. It's one thing to leak a selfie. It could be financially devastating to leak a picture of someone's government ID.

Scale of the Exposure

And a data breach will happen. The UK's Online Safety Act, which went into effect this summer, requires age verification, part of a similar push sweeping Western countries to dismantle privacy. Many popular platforms immediately complied. In short order, one of them got hacked: information collected from users of Discord was compromised through a third-party vendor with which Discord shared the data.

You also have an issue specific to AI chatbots: they are inherently not private. Even assuming the company you are trusting has perfect security, the two parties with access to your conversations are you and the company. OpenAI, Character.ai, Anthropic, and any other platform can see everything you have typed or shared, along with every response in the back and forth. Furthermore, as Privacy Guides Project Director Jonah Aragon shared in a forum thread, there is very little that can be done to make an AI chat private from a technological implementation standpoint. There is no way to end-to-end encrypt your conversation with an AI service, and no way to keep your conversation private from the platform itself.
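That point can be illustrated with a toy sketch. This is hypothetical code, with a trivial XOR cipher standing in for real cryptography (real services use TLS, but the conclusion is the same): however well the transport is encrypted, the provider must decrypt your prompt before the model can run, so the provider is always one of the "ends."

```python
KEY = 0x2A  # toy key, purely illustrative

def encrypt(text: str) -> bytes:
    # Client-side: protects the prompt in transit only.
    return bytes(b ^ KEY for b in text.encode())

def decrypt(data: bytes) -> str:
    return bytes(b ^ KEY for b in data).decode()

def run_model(prompt: str) -> str:
    # Stand-in for LLM inference: it can only compute over plaintext.
    return f"(model reply to: {prompt})"

def provider_endpoint(ciphertext: bytes) -> str:
    # The provider MUST decrypt before inference, so it necessarily
    # sees every prompt. "End-to-end" encryption ends at the company.
    prompt = decrypt(ciphertext)
    return run_model(prompt)

reply = provider_endpoint(encrypt("help me plan a trip"))
```

However you arrange the keys, the plaintext has to exist on the provider's servers for the model to answer, which is exactly why there is no E2EE to be had here.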

Whether we like it or not, AI chatbots have become some of the most personal tools people have. Someone may start to plan a vacation with ChatGPT. They could ask it for book ideas as they dream of being a writer. They could hastily ask for information while preparing a paper for college. Some may process their emotions by going back and forth with an AI, perhaps about something too personal to share with others. Others may be considering important philosophical, religious, or political questions and expanding their minds through AI. Uses range from creating shopping lists to organizing budgets to producing art that includes grandchildren to adding deceased relatives into pictures of important moments in their lives.

More than what we share in a browser, in a notes app, in a journal, or even with friends and family, there are some for whom AI is a second brain or a private space where they can be themselves the most.

It's a shame, therefore, to note the privacy weaknesses of chatbot platforms as they stand today, and how much worse they will be with the proposed age verification bills. Today, all of these potentially intimate details about your life are preserved between you and what is hopefully a trustworthy, secure platform. Tomorrow there will be absolutely no question as to who these details pertain to, and no safeguards against a hacker, company, or government that chooses to act on this information they know about you.

Further Consequences

But it gets worse.

The bill describes a companion chatbot as an AI that can sustain a relationship across multiple interactions. Character.ai obviously can do this, as can ChatGPT or Claude. If you use an AI in the app like most people do, it can remember details about you across conversations and pick up the context of a conversation where you left off.

What about Gemini, Copilot, Meta AI, or Grok?

These are the chatbots provided by Google, Microsoft, Meta (Facebook, Instagram, WhatsApp), and X (formerly known as Twitter) respectively.

Would you have to verify your age in order to use these platforms? They also have apps where you can chat just as with ChatGPT, with similar features for remembering context across sessions. In fact, they go further, because access to these chats is spread across the entire ecosystems of these companies, like in the email client, spreadsheet software, or a chat with another person.

Because these AI features are so interconnected with each other across the full range of products that these companies offer, will they not be able to separate access to their companion chatbots from the use of their regular products? Is Gemini too incorporated into Gmail, Google Drive, Google Photos, the Chrome browser, and Android for Google to uncouple them? Is Copilot too baked into Microsoft Outlook, Word, Excel, Edge, and Windows? Are Meta AI or Grok too interwoven with their respective social media platforms?

And even if these Big Tech companies could separate their AI implementations from their products, would they? Or would they be happy to take the last remaining piece of your privacy from you, and use this bill as an excuse to uncover your real identity forever?

There are more private implementations of AI that exist today, like Duck.ai, Leo in the Brave Browser, Kagi Assistant, or Lumo from Proton. However, if they fall under this definition, it is possible these companies will stop offering the service in Florida or scale back some of its features, making it less useful for those who rely on it. The companies behind these products want to respect their customers' privacy and thus won't want to expose them.

Regardless, if the age verification is contained to just the main platforms we think of when we talk about AI, that's one thing. If this bill consumes the platforms which make up the foundation of our existence in the digital world, almost every person in Florida would lose their privacy. Everyone would have to disclose their real identity to these platforms, and thus every email, calendar invite, note, personal message, and picture sent on these platforms would be tied to their government ID. Every comment, joke, political belief, or speech expressed would be associated with their real identity. There would be almost no escape.

In the physical world, age verification is harmless.

In the digital world, it is the backbone of a surveillance state.

What We Can Do

The Florida Legislature's regular session runs from January 13 to March 13. By law the session runs for only 60 days. We have less than that to pressure our Florida lawmakers to oppose these bills or fix them by removing the requirement for age verification.

This is how you can take action:

  1. Call your elected representatives in the Florida House and Senate and tell them you oppose the AI age verification bills SB 482, HB 1395, HB 659, and SB 1344 until or unless the age verification piece is removed. Voicemails count! Find your Florida rep here and your Florida senator here.
  2. Email your elected representatives.
  3. Share the news with friends, family, and any content creators who you think would cover an issue like this.
  4. Follow us on social media as we follow news about this bill. Be ready to take more action!
  5. If you're up to it, set a reminder to call again the next day. And the next day. The more pressure we can put on our representatives, the better.

Use the following script for your call or email if it helps you. Feel free to add or change anything.

Hi, my name is $your_name. I'm calling to ask my state representative/senator to oppose SB 482, HB 1395, HB 659, and SB 1344. These bills would require AI chatbot companies to verify the ages of their users, and in doing so identify every user by their government identity, stripping us of what little privacy we have left online. While the rest of these bills have good ideas, the requirement to verify the ages of users must be removed in order to preserve our right to privacy. Otherwise our personal information will be exposed to chatbot platforms, third-party vendors, Big Tech, hackers, and the government. Thank you for your time.

2025 was a year with several blows to privacy across the Western world. Let's do our best to stem the flow of these threats and fight back in 2026.

Additional Resources

EFF: Age Verification Is Coming For the Internet. We Built You a Resource Hub to Fight Back.

Privacy Guides: Is This The End of The Anonymous Internet?

Fight for the Future: ONLINE ID CHECKS WILL RUIN THE INTERNET


Written by Joseph, Organizer for MTEC

Wants to see the world get better.


Follow us on social media and share your thoughts on this blog post!