Lawmakers weigh AI oversight, mental health concerns

Some fear the accessible technology has become a go-to for people seeking mental health support instead of turning to professional help from a licensed therapist.

CORPUS CHRISTI, Texas — In Washington, D.C., lawmakers this week took up concerns about artificial intelligence — specifically the dangers some say warrant tighter oversight, especially as it relates to mental health and wellness.

Others say the technology is simply moving too fast.

“That’s because the development and deployment of this technology occurred faster than guardrails could be put in place,” said Congressman Frank Pallone, New Jersey’s 6th District (D).

“Or the case of a 16-year-old who committed suicide after conversations with a chatbot evolved from helping the teen with school work to providing advice on suicide methods,” said Congressman Brett Guthrie, Kentucky’s 2nd District (R).

Tim Tate, a licensed professional counselor, says that while AI can have benefits, it cannot replicate genuine human interaction.

“People who see you — they hear you, they get you, and you get them… right. That’s something I don’t think A.I. is able to do long-term,” he said.

The technology continues to grow in use and popularity, particularly among younger people — something Del Mar College counselor Alison Marks says can come with specific vulnerabilities for teens and young adults.

“Younger folks might feel more isolated and feel like this is their only option for support. It’s really designed to predict what you feel like will be the next right answer,” she said.

Lawmakers are also raising concerns about potential security risks, noting that chatbots do not follow the same confidentiality rules that apply inside a mental health office.

Political analyst Dr. Bill Chriss says the issue isn’t black-and-white because AI, like most technology, is nuanced.

“If the objection is people are using A.I. in order to get information from which they can do xyz, well, the same could be said of Google,” he said.

Chriss says the questions surrounding AI don’t stop at mental health — they extend into the legal world as well.

“You’ll have briefs filed by lawyers that cite cases that never happened. And those lawyers often get sanctioned,” he said.
