Here we are again. This round? Chatbots are in the crosshairs.
S. 1037, the Protecting Children from Chatbots Act, purports to shield minors from harmful content and manipulative AI companions.
What is a chatbot?
(Section 39-81-10(4))
A chatbot is any AI, algorithmic, or automated system that does the following:
(a) Produces new expressive content/responses not fully predetermined
(b) Accepts open-ended, natural-language input and produces adaptive/context-responsive output
(c) Maintains conversational state across exchanges for multi-turn dialogue
Simply put, a chatbot is software that can chat with you in everyday language. It usually replies in text, remembers what you said earlier, and is built to keep the conversation going.
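To see what those three criteria look like in software, here is a minimal Python sketch. The generate_reply function below is a made-up stand-in for whatever AI model a provider actually runs, not anything taken from the bill or a real product; the point is simply that the full conversation history is carried into every new turn, which is the "conversational state" that makes a system multi-turn under the bill's definition.

```python
# Minimal sketch of a multi-turn chatbot loop. generate_reply() is a
# hypothetical stand-in for a real AI model; the point is that the running
# history (the "conversational state" in the bill's definition) is passed
# back in on every exchange.

def generate_reply(history: list[dict]) -> str:
    """Toy reply generator: adapts its answer to the accumulated context."""
    last_user_message = history[-1]["content"]
    return (
        f"You said: {last_user_message!r}. "
        f"This conversation now has {len(history)} messages of context."
    )

def chat() -> None:
    history: list[dict] = []              # conversational state across exchanges
    print("Type 'quit' to exit.")
    while True:
        user_input = input("> ")          # open-ended, natural-language input
        if user_input.strip().lower() == "quit":
            break
        history.append({"role": "user", "content": user_input})
        reply = generate_reply(history)   # adaptive, context-responsive output
        history.append({"role": "assistant", "content": reply})
        print(reply)

if __name__ == "__main__":
    chat()
```

Swap the toy function for a large language model and you have, in miniature, every product on the list below.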
Some examples of chatbots that can fall under this bill include:
AI companion and “relationship” chatbots
General‑purpose AI assistant chatbots
Large educational and homework‑helper chatbots
Social, wellness, and “support” chatbots with ongoing conversations
Let’s crack this thing open and see what’s actually inside.
More laws, more government in your business. This one adds a new Chapter 81 to Title 39 of the South Carolina Code.
Section 39-81-10 includes 22 defined terms. That’s quite a few definitions for one section.
Strike one.
Tip One: Watch for words that mean everything and nothing. This bill is full of them. “Reasonable age verification.” “Emotional dependence.” “User wellbeing.” “Reasonable systems and processes.” “Covered harm.” All wide open for interpretation. It’s a regulatory blank check.
Strike two.
Watch for bills that tighten the state's grip (a new Chapter 81 in Title 39, with Attorney General penalties under Section 39-81-60), poke into privacy (age-verification data under Section 39-81-20 and incident reports under Section 39-81-50), or lay the groundwork for more control (cumulative duties under Section 39-81-60(D) plus vague "reasonable systems"). This one checks every box.
Strike three.
Tip Four: Who benefits? State regulators gain new enforcement powers. What changes? A new Chapter 81 compliance regime. How is it enforced? Attorney General injunctions and $50,000-per-day penalties (Section 39-81-60). Why is this needed? Unclear, since existing laws already address online harms to minors.
Monitoring feelings. Yeah, it’s as creepy as it sounds.
Section 39-81-10(10) and (16); Section 39-81-40
"Emotional dependence", "Relationship simulation"
There’s no magic tech that can scan every chat and decide whose feelings are “too attached” or which sad message is suddenly an “imminent risk.” The only way to even try is to monitor and record every sensitive conversation.
What is the real standard for when an AI company should report someone to the authorities? Who decides what counts as a real, immediate threat? AI companies may overreport to protect themselves or underreport out of privacy concerns. This also assumes the platform lacks end-to-end encryption, that the user isn't using a VPN or a fake account, and that the real individual can be identified.
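To make the problem concrete, here is a hypothetical sketch of the kind of crude screening a provider might bolt on to satisfy such a mandate. Nothing like this appears in the bill; the phrases, weights, threshold, and flag_message function are all invented for illustration. The point is that any automated attempt reduces to scanning every message against blunt heuristics, which is exactly how you get both overreporting and underreporting.

```python
# Hypothetical illustration only: a blunt, keyword-based "imminent risk" scanner.
# The phrases, weights, and threshold below are invented for this example;
# they are not from S. 1037 or any real product.

RISK_PHRASES = {
    "can't go on": 2,
    "hurt myself": 3,
    "no one would miss me": 3,
    "i hate my life": 1,
}

REPORT_THRESHOLD = 3  # arbitrary cutoff: at or above this, the user gets flagged


def risk_score(message: str) -> int:
    """Sum the weights of any listed phrases found in the message."""
    text = message.lower()
    return sum(weight for phrase, weight in RISK_PHRASES.items() if phrase in text)


def flag_message(message: str) -> bool:
    """Decide whether a single message triggers a report."""
    return risk_score(message) >= REPORT_THRESHOLD


# The harmless vent scores but slips under the threshold; the second message is
# flagged whether or not it is serious; the third, a possible real warning sign,
# scores zero because it uses none of the listed phrases.
print(flag_message("ugh, I hate my life, this homework is endless"))  # False (score 1)
print(flag_message("sometimes I feel like no one would miss me"))     # True  (score 3)
print(flag_message("things feel hopeless lately"))                    # False (score 0)
```

And note that a scanner like this can only run at all if the provider reads and retains every message, which is the surveillance problem in a nutshell.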
Reasonable Age Verification
(Section 39-81-10(15)) "Reasonable age verification" includes methods authenticated to relate to the individual, such as a state-issued identification or driver license; government digital identification; military identification; bank account verification; or any other commercially reasonable means or method, including third-party verifiers that can reliably and accurately independently verify a user is an adult.
But don’t worry, section 39-81-20 will ensure that the information gathered is protected. The problem is that collecting and verifying government-issued identity for every person who wants to use a chatbot creates exactly the kind of database that hackers, scammers, and data brokers spend their careers trying to reach. Good deletion policies don't change what happens in the window between collection and deletion, and they certainly don't survive a breach.
Section 39-81-50
If a provider believes there is an imminent risk, they must contact emergency services or law enforcement within 24 hours, or write a full report if they cannot do so. Any “covered incident” such as death, suicide attempt, serious self-harm, psychiatric emergency, or injury related to chatbot use must be reported to the Attorney General within fifteen days. This includes dates, details, reasons for believing the chatbot was involved, and actions taken.
Why should we help the government accumulate even more sensitive mental health data, especially when it could be used against innocent people? Why create a data vault inside the Attorney General’s office?
What about limiting speech? The First Amendment protects "the freedom of speech," and courts have long recognized a corresponding "right ... to receive information and ideas."
We should be cautious about laws that claim to protect children but may do more harm than good. This bill is a prime example. Our lawmakers should think carefully before turning AI companies into forced data-collection, surveillance, and reporting hubs.
Protecting children is important. But better solutions are available. Families have proven tools such as device settings, content filters, account controls, and good old-fashioned supervision. Parents can also keep their children offline or limit their access to apps. Placing families on the front lines ensures lasting, tailored protection far superior to one-size-fits-all government rules.
We also feel S. 1037 is an unconstitutional overreach.
Please respectfully ask your Senators to oppose this bill.
Disclaimer: The views expressed in this article are those of the author and do not constitute legal or professional advice. ConservaTruth assumes no liability for any actions taken based on this content. Readers are encouraged to review the bill text themselves.
