If there were a word that meant both excitement and fear, that neologism would best describe the mood many teachers and education leaders have toward generative AI. (The word “jittery” doesn’t quite capture the mix, which includes both the giddiness of having a new toy to play with and a nagging worry that this amazing toy might somehow accidentally break all other playthings.)
This week the U.S. Department of Education released a new guide designed to support developers building AI tools for education, and it addresses this duality head-on, blending optimism about how new AI tech might lead to improvements in education with frank talk about the serious potential risks.
That tone is perhaps best captured in a quote cited in the guide’s conclusion, by Patrick Gittisriboongul, Assistant Superintendent of Lynwood Unified School District in California:
“Would I buy a generative AI product? Yes!” he said. “But there’s none I am ready to adopt today because of unresolved issues of equity of access, data privacy, bias in the models, security, safety, and a lack of a clear research base and evidence of efficacy.”
That’s a long list of concerns. And there are real-life instances that illustrate those risks. For instance, the guide notes there have already been publicly reported examples of generative AI tools that “describe historical figures who never existed and give wrong answers to math problems.”
The document lays out the department’s vision for a world in which edtech companies develop systems to “ensure responsibility” as they experiment with innovative approaches involving generative AI. Key steps noted in the guide include making voluntary commitments, communicating clearly about the use of AI, and participating in public forums.
The guide is organized around five key recommendations:
- Design for education
- Provide evidence of rationale and impact
- Advance equity and protect civil rights
- Ensure safety and security
- Promote transparency and earn trust
The document makes clear that the 45-page guide is neither a regulation nor a detailing of legal requirements.
The new guide supplements a 2023 report by the department, “Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations.” And it comes in response to an executive order issued by President Biden in October 2023 urging the “safe, secure, and trustworthy development and use of artificial intelligence.”
The importance of “earning the public trust” emerges as a thread running through the new guide’s recommendations, framed as a way to meet this moment of both hope and concern.
“The Department envisions a healthy edtech ecosystem highlighting mutual trust amongst those who offer, those who evaluate or recommend, and those who procure and use technology in educational settings,” the guide concludes. “Developers should take precautions to design AI-enabled educational systems for safety, security, and to earn the public’s trust.”