Why is Elon Musk’s efficiency squad plugging federal data into unvetted AI systems? That’s what dozens of Democrats want to know as they demand answers from OMB Director Russell Vought about DOGE’s unauthorized AI activities.

The Department of Government Efficiency, led by the same guy who runs xAI, is apparently feeding sensitive government information into AI tools without proper approval. Nothing sketchy about that, right?

Lawmakers are freaking out over reports that DOGE affiliates have been shoving federal data into unapproved AI systems. The problem? This sensitive info could end up training future commercial AI models. Your tax data could be teaching computers how to think. Sleep tight.

The security risks are no joke. Once data is fed into these systems, it’s effectively in the AI operator’s possession, outside the government’s control. That’s a serious breach of public trust, not to mention a potential violation of the Privacy Act, the E-Government Act, and FISMA.

Reports suggest Education Department data has already been run through AI tools, and that there were even plans to use AI to scan OPM emails. The GSAi chatbot, built on commercial large language models, has also raised significant concerns among lawmakers.

Let’s be real: generative AI is still pretty dumb. These models make plenty of mistakes and carry serious biases, and they’re not ready for government decision-making without proper vetting. The FTC has already warned that these tools can perpetuate illegal discrimination. Oops.

Rep. Melanie Stansbury isn’t waiting around. She’s introduced a Resolution of Inquiry demanding documents on DOGE’s AI use, including which systems it’s using, what federal data is being fed into them, and details on authorization paperwork, privacy assessments, and data sources.

The whole mess may be breaking multiple laws. DOGE appears to be using tools like Inventry.ai, which lacks FedRAMP authorization, a baseline requirement for cloud services handling federal data. With an April 25 deadline for OMB to respond to the lawmakers’ concerns, pressure is mounting for transparency and accountability.

Rep. Gerry Connolly called it “reckless AI misuse” that disregards data privacy and cybersecurity standards.

Meanwhile, the GAO has been pushing for stronger AI accountability across government. Its recent report found that many agencies’ AI inventories are inaccurate.

Thirty-five recommendations later, we’re still waiting for proper oversight.