LOS ALAMITOS, Calif. — California lawmakers have passed new legislation regulating AI tech companies and requiring major AI developers to disclose the data they use to train their models. A related proposal, which would require social media platforms to provide deepfake-detecting software to their users, has also advanced. Lawmakers pushed for these measures after last summer's SAG-AFTRA strikes to protect film industry workers from potentially being replaced by AI.
Protected jobs include voice actors, audiobook performers and music artists whose voices have been replicated by deepfakes, according to an Associated Press article by Tran Nguyen. Under the proposed laws, call center companies across California could not replace workers with AI, and cloning the voices of the deceased without permission could carry penalties.
The new laws also call for state working groups to promote AI literacy, educating people on AI's influence in math, science and social science and on its possible uses in the classroom.
On Aug. 28 of this year, the new California AI bills were sent to Gov. Gavin Newsom, and on Sept. 17 he signed AB 1836 and AB 2602 into law. Opposition has come from large companies with a stake in AI, such as Google and OpenAI, the creator of ChatGPT. Google has developed its own AI system, Gemini, and both companies have invested heavily in the AI industry, according to Digital Democracy, a database by CalMatters. Meta has said this kind of regulation will stall AI innovation in California instead of letting it flourish and develop. Elon Musk, CEO of Tesla and SpaceX, favors the regulation, saying AI tools can be consequential and that the federal government has not done enough to combat the issue.
The history of AI reaches back to the 1950s and one of the biggest names in computer science, Alan Turing, whose paper "Computing Machinery and Intelligence" proposed a test for machine intelligence called the Imitation Game. The 1980s brought an AI boom, with growing interest in artificial intelligence, more people learning about it and government funding for projects. Later, in 1997, IBM's Deep Blue became the first computer to defeat a reigning world chess champion in a match, beating Garry Kasparov. Companies also began using algorithms to recommend content to their users.
AI has already affected schools in the past few years through writing tools like Grammarly, and school faculty now have to contend with tools that seem to do the work more than the students do. AI has advanced faster than many expected: where phones were once the main problem at school, schools now struggle to keep up with AI's rapid development.
“We want students to take advantage of technology, but we want to use the technology responsibly. I want students to make good decisions and use these tools in a productive manner, in a way that benefits themselves or benefits society,” said Mr. Bowen, Los Alamitos High School’s assistant principal of student services and attendance.
Mr. Bowen illustrated his views by demonstrating how he uses AI himself: he wrote a paragraph and had AI revise and correct his work. He also explained that LAHS has started placing regulations on AI use at school. If AI has been used to do a student’s work, platforms like Turnitin and Google Classroom flag it for teachers.
Concerns have also been raised as the election approaches, such as AI deepfakes depicting Taylor Swift endorsing Donald Trump. Swift later announced her endorsement of Kamala Harris and wrote about her newfound fear of AI, saying she never expected false information to spread this fast so close to the election.
These are among the reasons California lawmakers proposed the new laws. Fear of AI and what it could do, especially so close to the election date, has led many to call for regulation of the new technology. AB 2602 requires companies and AI developers to obtain specific contracts to use the likeness of performers and actors, and the performer must be professionally represented in negotiating that contract, helping protect workers’ rights.
The second bill Newsom signed into law is AB 1836, which prohibits commercially profiting from digital replicas of deceased performers in films, audiobooks, games and shows without the consent of those performers’ relatives or estates.
“We continue to wade through uncharted territory when it comes to how AI and digital media is transforming the entertainment industry, but our North Star has always been to protect workers,” Newsom said.
Hopefully, AI can boost productivity for society in the future, and people can use this new technology to improve their lives. The technology is still very new despite its long history, and with the new laws in California, it can hopefully be regulated and used responsibly for the betterment of humanity.