The world is still grappling with the implications of AI for K-12 education. While district AI policy is currently "under construction," it's safe to say that our guidance will adhere to the laws and recommendations below.
AI detectors often produce inaccurate results, risking both "false positives" (i.e., flagging human-created work as AI-generated) and "false negatives" (i.e., flagging AI-generated work as human-created).
Recommendation: At the moment, AI detectors don't work. Avoid using them (except as a demonstration of the limitations of AI technologies).
AI can streamline grading, but may introduce bias or inaccuracies in assessments.
Recommendation: Rely on your own professional judgment first, using AI tools as a supplement to evaluation (or better yet, for quick informal feedback that doesn't need to go in the gradebook).
AI can be a transformative force, but it can also widen existing gaps. While AI tools have the potential to level the playing field, unequal access can deepen educational disparities.
Recommendation: Be mindful that there is no such thing as banning AI; a ban merely reserves AI access for students privileged enough to have a cell phone, home computer, and internet connectivity.
AI tools challenge traditional definitions of academic integrity in education.
Recommendation: Educate students on responsible AI use and design assessments that encourage critical thinking and originality. Use hypothetical scenarios (such as this Common Sense Education Dilemma) to foster classroom discussion and communicate your own expectations.
The Family Educational Rights and Privacy Act (FERPA) is a federal law that protects the privacy of student education records. In the context of AI in K-12 education, FERPA ensures that any data used by AI tools is handled securely and with parental or student consent when required. Schools and districts must carefully evaluate AI tools to ensure compliance, particularly when tools process personally identifiable information (PII). Proper safeguards and agreements with AI providers are essential to uphold student privacy under FERPA guidelines.
Rule of thumb: Never input PII (e.g., student names or email addresses) into generative AI systems.
The Children's Online Privacy Protection Act (COPPA) is a federal law designed to protect the online privacy of children under 13. For AI in K-12 education, COPPA ensures that AI tools collecting student data do so with explicit parental consent and adhere to strict privacy standards. Schools and districts must verify that AI providers comply with COPPA, particularly when using tools that gather personal information. This law is essential in safeguarding young students' data and maintaining trust in digital learning environments.
Rule of thumb: Never input PII (e.g., student names or email addresses) into generative AI systems.
The Student Online Personal Information Protection Act (SOPIPA) is a California law that safeguards student data privacy by prohibiting the sale of student information and the use of data for targeted advertising. In the context of AI in K-12 education, SOPIPA ensures that AI tools used by schools prioritize the security and ethical handling of student data. It requires vendors to implement robust protections and restricts the use of data to educational purposes, making it a critical law for maintaining trust and transparency in AI adoption.
Rule of thumb: Before using or purchasing an AI tool for your classroom, check with the Technology Department to confirm that it meets mandated privacy and security standards.
The Williams Act, a California law focused on ensuring equitable access to educational resources, has implications for AI in K-12 education. While not directly addressing AI, its principles underscore the importance of equitable access to technology and tools, including AI resources, across all schools and student populations. Schools must ensure that AI tools are implemented in a way that promotes fairness and does not exacerbate existing inequalities in access to quality educational materials and support.
Rule of thumb: Before using an AI tool in your classroom, ensure it supports equitable access to resources for all students.
This bill, which took effect on January 1, 2025, requires the California Instructional Quality Commission to consider incorporating Artificial Intelligence (AI) literacy content into the mathematics, science, and history-social science curriculum frameworks when those frameworks are next revised. It also requires the commission to consider including AI literacy in its criteria for evaluating instructional materials when the state board next adopts instructional materials in those subjects. The bill defines "AI literacy" as the knowledge, skills, and attitudes associated with how artificial intelligence works, including its principles, concepts, and applications, as well as how to use artificial intelligence, including its limitations, implications, and ethical considerations.