This post comes from a member of the AI Existential Safety Community.
Artificial intelligence (AI) is rapidly transforming many sectors, and education is no exception. As AI technology becomes integrated into education systems, it offers both opportunities and challenges. This article considers the strategic decisions that must be made in educational settings, alongside the short-term and long-term risks of embedding AI in society as a whole, with a particular focus on its impact on children, including the data and privacy risks facing students who use tools such as Snapchat's "My AI."
This blog post explores the impact of AI in education, drawing on insights from recent conferences and my experience as an AI in education consultant in the UK.
AI in Education: A Global Perspective
In early September, I attended the United Nations Education Conference in Paris. The summit highlighted both the positive and negative effects of AI and discussed ways to mitigate the harms and support the changes ahead.
Countries such as Australia and South Korea have been particularly successful in integrating AI technology into real-world applications. For example, many Australian states are developing large-scale copilots built on Azure datasets. These provide strategic support to learners while collecting new data as students use educational chatbots, a step towards "personalized" learning. This is an important stepping stone for education: although historical datasets have been held by individual schools, retaining and using data collectively at a strategic level opens up more opportunities for targeted applications. However, if not thoroughly monitored, it also widens the impact of cyber threats and AI misalignment.
During the meeting, Mistral AI and Apple highlighted how training choices produce major differences between AI models. Mistral emphasized that their models are trained to be multilingual from the outset, rather than being trained first in English and then translated. This significantly shapes the student's learning interface and shows that nuanced differences in training data or techniques can have a substantial impact on educational outcomes. Rather than making rash decisions to acquire new technology, institutions around the world need to consider which tools best serve their students' outcomes, unique learning needs, and values.
Collaboration with educators and policymakers to shape the future of education
The guidance UNESCO has created for AI in education is excellent, but there is a disconnect between those who write policy for the education sector and those actually involved in education. While in Paris, I observed that many educators are already comfortable with AI tools, building custom chatbots in Microsoft Copilot Studio, Custom GPTs, or Google Gems, yet policymakers seemed impressed by quite basic custom chatbots. This concerns me: although technology in the education sector is often inadequately funded, the technical and pedagogical understanding of AI's potential in education is held primarily by those working in schools.
There must be more opportunities for educators and government to work together. Stakeholder conferences on education should not be limited to university researchers but should also include teachers who use these tools regularly. Without this, governance will continue to miss the detailed knowledge held by people in the education sector and will rely solely on statistical research or researchers' interpretations.
Deepfakes
Another concern highlighted in Paris was the rise of deepfakes. I have seen their influence in schools in Korea and in the UK, and this is an area that policymakers must address. While many schools are trying to tackle the issue "in-house" through welfare programs, others have adopted more innovative approaches: schools in China, for example, have created deepfakes of their own principals to build understanding of the technology's potential risks and harms. DeepMind's SynthID, featured in the journal Nature, could help increase the transparency of AI-generated content.
Innovative tools and ethical considerations
While serving on an advisory panel with Microsoft, helping to develop AI tools grounded in pedagogy with a focus on user safety, I have seen a variety of innovative tools that will change the educational environment. One of the most significant developments is Microsoft Copilot 365, which is set to transform educational practice. The product allows teachers to mark assignments, create lesson resources, and perform data analysis within the 365 suite. I believe it could help address the global teacher shortage by streamlining the administrative side of education, allowing teachers to focus on pedagogy and pastoral care.
However, there are concerns about the dominance of Microsoft and Google in the field of education. Although both companies draw on a huge amount of educator support and input, there are questions about transparency and ethical considerations in product design. Furthermore, many student-facing chatbots are designed for rote learning, yet we live in an age where creativity and critical thinking are more important than ever. This underscores the need for more governance and oversight of AI in education.
Data and Cybersecurity
Data and cybersecurity are critical concerns for education. While many schools have moved to cloud computing to make their systems safer, the centralization of data by governments and councils, as in Australia, has created attractive targets and vulnerabilities for hackers.
A further concern is how this data, which can include emotional, social, and medical information, will be used by businesses and by the AI tools themselves. Microsoft explicitly states that this data is not used by Microsoft and is not shared with OpenAI, which has itself been slow to report data breaches. Google does not provide a clear explanation of how educational data is stored.
We also need to ask how we can guarantee that this information does not produce biased output. Are the echo chambers of students' "personalized" chatbots changing their worldviews in harmful ways? The more personalized these models become for students, the higher the risk.
Companies such as Microsoft and OpenAI often measure AI use among adults, but how young people use generative AI is not well understood worldwide. Snapchat's My AI has been used by over 150 million users so far, the majority of them young people in the US and UK. Some educational research also demonstrates widespread use of ChatGPT by young people, but unlike the commercial data protection offered by some organizations, Snapchat lacks appropriate privacy management, data protection, and content filters. Given this, concern is growing in the educational community about the possible effects.
To address this, parents need consultation and digital safety education. These companies must also be held accountable for ensuring that children cannot access AI tools that lack proper privacy and safety controls. This has not been a priority for governments, businesses, and policymakers in AI development so far; economic growth has been prioritized over child safety. This is demonstrated by the veto of SB 1047, a proposed California bill aimed at regulating advanced AI systems to prevent catastrophic harm by requiring developers to conduct risk assessments, implement emergency shutdown mechanisms, and include whistleblower protections. Governor Gavin Newsom vetoed the bill on September 29, 2024, arguing that by focusing solely on large models it could provide a false sense of security and potentially restrain innovation. This veto underscores that digital safety for children is not a priority for those in government, and must therefore become a deeper priority for those who work with young people.
Future Curriculum
Beyond the actual deployment of AI in education, this technology invites questions about what we should teach young people today. The question has been raised by many, with answers offered by The Economist, the World Economic Forum, and Microsoft. However, education must aim to educate children holistically, allowing the sector to move beyond today's curriculum with its narrow focus on economic growth, so that children can thrive in the future. UNESCO offers excellent guidelines on this.
Clifton High School in the UK has launched a course tailored to implement this framework. Students learn foundational skills such as emotional intelligence, citizenship, innovation, and cross-cultural collaboration, as well as learning about AI. The Skills for Tomorrow course aims to provide a proof of concept for schools around the world to replicate, offering an education that complements the changes of the 21st century. It is important that governments and policymakers take action to reduce the risk of the next generation's skills becoming redundant.
Access and socioeconomic division
The global issue in education that exacerbates socioeconomic division is access. With 2.9 billion people still offline and major technical disparities within countries, those who can learn to use AI in schools will benefit from these tools, while those who cannot will be left behind. But while at the United Nations, I noticed promising projects in this field. One example is the African Union's continental AI strategy, which aims to focus education in African countries on capturing the benefits of artificial intelligence and to leverage the adaptability of Africa's young population: 70% of African countries have an average age under 30. AI has also been used to support social and emotional learning in Sweden and Norway.
However, a report entitled "The Fourth Industrial Revolution and the Recolonisation of Africa" by Everisto Benyera warns that strategies such as the AU's could be hijacked by technology companies, leading to data and labour colonization and to asymmetric power relations (unevenly distributed resources and knowledge), further challenging global equality in the education sector.
Conclusion
There is a lot of optimism as AI becomes integrated into education, but the pace of that integration, combined with inadequate governance, outdated national curricula, and under-prepared teachers, can lead to harmful consequences. To prepare the education sector globally for this technology, governments must take action rather than merely theorize. The technology's rapid development has left even EdTech experts chasing a stream of innovations.
Positive legislative efforts such as the EU AI Act, which builds on the EU's GDPR, take steps to ensure that safety is at the forefront of further AI testing and that children's data is not misused. Companies that adhere to such laws should be praised, but those that treat the design of safe educational tools as a secondary goal must readjust their values.