When Bruce Reed served as deputy chief of staff in the Biden administration, he led efforts to secure voluntary safety commitments from major AI companies such as Anthropic and OpenAI.
Reed has since left the White House, but he isn't done with AI, a technology he described to Mashable as "exciting, surprising, and sometimes scary."
He now works for Common Sense Media, a nonprofit that helps children and parents navigate media and technology. The organization, best known for its ratings of children's content, including video games, television shows, and films, also conducts research and advocacy, including on the AI companions teens talk to and whether they are safe.
A veteran of three Democratic presidential administrations, Reed will lead Common Sense AI, which advocates for more comprehensive AI legislation in California. Common Sense AI has already backed two state bills: one would establish a transparency system for measuring the risk AI products pose to younger users, and the other would protect AI whistleblowers from retaliation when they report "significant risks."
Reed argues there is a critical window to put AI safeguards in place, especially for minors, before certain business practices become entrenched and regulation becomes difficult.
"We ended up in a youth mental health crisis when social media companies rushed to move fast and break things, ignoring children's privacy and safety," Reed says. "No one wants to see that happen again."
Parents worry AI chatbots are harmful
While some experts dispute that social media has driven a decline in young people's mental health, parents are already coming forward with serious concerns about their children's use of AI chatbots.
Last fall, bereaved mother Megan Garcia filed a lawsuit against Character.AI, alleging that her teenage son experienced such extreme harm and abuse on the platform that it contributed to his suicide.
Shortly afterward, two Texas mothers filed another lawsuit against Character.AI, alleging that the company knowingly exposed their children to harmful and sexualized content. A chatbot allegedly suggested to one plaintiff's teen that he kill his parents.
Common Sense released its own parents' guide to AI companions last fall; Character.AI has since added new safety and parental control features.
California, where Common Sense Media is headquartered, is the ideal place to pass legislation addressing the novel risks of AI, Reed says. He was instrumental in drafting the California Consumer Privacy Act of 2018. Absent a federal law, state laws effectively become national standards, since so many tech companies are based in California.
The politics of AI safety
Reed also seems undaunted by the shifting political calculus now that Donald Trump has returned to the White House, seemingly giving AI companies carte blanche to pursue "dominance."
One of Trump's executive orders revoked AI safety testing rules that Biden himself had put in place. Meanwhile, companies that once voluntarily made safety commitments to the Biden administration are now lobbying Trump for fewer regulations.
Despite the rhetoric and lobbying, Reed is confident it is in AI companies' long-term interest to test their products and ensure they are safe before bringing them to market.
After all, lawsuits that force companies to reveal their internal workings and adopt safety measures tend to generate bad headlines, erode investor confidence, and sow public distrust.
Reed also rejects the narrative that the Biden administration set out to stifle AI innovation.
Silicon Valley critics, including venture capitalist Marc Andreessen, have argued that the Biden administration wanted to control or "kill" AI. (Andreessen described a meeting with Biden officials on the topic of AI as "absolutely scary," and the encounter reportedly helped persuade him to endorse and financially support Trump.)
Reed attended numerous meetings with key tech stakeholders, including Andreessen. He politely disagrees with critics' characterizations of what took place in those conversations.
"The Silicon Valley crowd has tried to suggest that the Biden administration somehow went too far on AI, but that's just not true," he says. "We didn't have any regulations to impose, even if we'd wanted to."
Instead, Reed believes the main objection to Biden administration policy from Silicon Valley investors like Andreessen concerned the Securities and Exchange Commission's crackdown on cryptocurrency companies. Andreessen has backed several such companies, and the Trump administration has dropped a number of SEC lawsuits in recent weeks.
Championing both innovation and safety
Despite characterizations of Biden as anti-AI, Reed says he supports innovation.
"It's important that the US wins the AI race against China, but it's also important that the US sets the standards for AI trust, security, and safety, because China won't."
As an example of potential bipartisan and industry collaboration, Reed points to combating explicit deepfakes, a technology that has targeted teens and young people with devastating consequences.
The Biden White House developed its own strategy to curb nonconsensual intimate imagery, and first lady Melania Trump has backed a bill that would give victims strong protections.
Clearly, this is not an area where American companies need to pursue dominance at all costs.
Either way, Reed says there is no time to waste, especially when it comes to ensuring AI products are designed with children's privacy and safety in mind.
"We can have the most powerful AI while making sure privacy is protected and companies are transparent about what they're doing to make their products safe," he says.