Calculating the Potential of AI in Education: Insights from Code.org’s CAO

A blog post by Ashley Shaw, SREB Communications Specialist

This is the fourth post in a five-part series. You can find past posts and see what is upcoming here:

  1. Meeting Overview, Notes from SREB President Stephen Pruitt
  2. Bruce Broussard, CEO of Humana
  3. Asa Hutchinson, former governor of Arkansas
  4. Pat Yongpradit, CAO of Code.org
  5. Nancy Ruzycki, Director of Undergraduate Laboratories at the University of Florida

Pat Yongpradit, the Chief Academic Officer at Code.org, began his speech at the first meeting of the SREB Commission on AI in Education by talking about something unexpected: calculators.

You see, in the 70s and 80s, calculators were just being introduced to schools, and they were causing something of a panic:

  • “If we give these to students, how will they ever learn to do math?”
  • “These things will be the death of thinking!”
  • “What if a student doesn’t have access to a calculator? Will they be at a disadvantage?”

To help address the access concern, states and organizations began to adopt the calculator, making sure students were able to access them in the class and on assessments. At the same time, the technology got better and more affordable.

Why was there such an emphasis on making sure students had access to these tools, though? After all, weren’t the first two concerns about how calculators would affect thinking the most important issues anyway?

No. As Yongpradit explained, schools and the groups tasked with governing them all saw the same benefit in these new devices: “The calculator could free students from the bondage of calculation in order to engage with math in a very different way than they had learned.”

In other words, if students were free from having to do the manual calculating for themselves, they would have more time to focus on real-world math and use creative problem-solving skills instead of simple calculation skills.

From Calculators to AI

As we look at the emerging generative AI trend, we see a lot of the same concerns that we saw with the calculator debate:

  • “If we give this to students, how will they ever learn anything?”
  • “This thing will be the death of thinking!”
  • “What if a student doesn’t have access to AI? Will they be at a disadvantage?”

Now, though, the fears are even greater because AI’s reach is even greater. Yongpradit pointed out that calculators changed the way we did math education, but “AI is literally for everything.”

Just like calculators, though, AI has the potential for greatness as long as we first address the fears.

Defining AI

To understand the pros and cons of AI in education, you must first understand what AI is. Yongpradit pointed out that artificial intelligence is notoriously hard to define because intelligence itself is hard to define.

Still, he did his best to give an overview of the term. For starters, AI in general has been around since at least the 1960s or ’70s. This classic AI covers things like online medical forms that ask you a series of questions and suggest a potential diagnosis based on your answers. In other words, it has one task, and it completes that one task. The human assigns the task and writes all the rules the AI must follow to complete it.

Then, as computing power expanded, so did AI’s power. By the late ’70s and ’80s, we had machine learning, which brought the ability to find relationships in data. Instead of writing rules for the answer, you give the computer rules for learning, and it creates suggestions for you. This can be seen in things like auto-complete and video recommendations.

Now, though, we have moved to a new stage that uses deep learning. Deep learning allows the system to form more complex relationships, so when it is fed more data, it can develop the large language models we see in tools like ChatGPT, which mimic human language much more closely. This is also called generative AI.
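The progression Yongpradit describes, from hand-written rules to rules for learning, can be pictured with a toy sketch. (Every rule, symptom and sentence below is invented purely for illustration; this is not any real system.)

```python
# Toy contrast between "classic" rule-based AI and machine learning.

from collections import Counter

# Classic AI: a human writes every rule; the program has one fixed task.
def triage(symptoms):
    if "fever" in symptoms and "cough" in symptoms:
        return "possible flu"
    if "fever" in symptoms:
        return "monitor temperature"
    return "no suggestion"

# Machine learning (greatly simplified): instead of writing the answers,
# we give the program a rule for *learning* from data -- here, "suggest
# whatever word most often followed this one in past sentences."
def train_autocomplete(sentences):
    follows = {}
    for sentence in sentences:
        words = sentence.split()
        for current, nxt in zip(words, words[1:]):
            follows.setdefault(current, Counter())[nxt] += 1
    return {word: counts.most_common(1)[0][0] for word, counts in follows.items()}

model = train_autocomplete(["good morning class", "good morning everyone", "good night"])
print(triage({"fever", "cough"}))  # possible flu
print(model["good"])               # morning (seen twice vs. "night" once)
```

The first function never does anything its author did not anticipate; the second produces suggestions its author never wrote down, which is the shift that eventually scales up to large language models.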

Yongpradit offered up a simple definition to guide the rest of his talk:

“Artificial intelligence refers to programs or machines that simulate tasks that typically require human intelligence, such as making predictions, generating content and providing recommendations.”

Going along with this, Yongpradit also defined AI literacy: our own understanding of how AI works, how to use it responsibly, how it affects society and how we can embrace its potential while mitigating its risks. This is the focus of the rest of this post.

Embracing the Benefits of AI in Education

It is easy to focus on the risks of AI, and while it is important to look at those too, as we will do in the next section of this post, the truth is that AI does not have to be as scary as we sometimes make it.

At the end of the day, AI is just a tool, and like any tool, when used correctly, it can be invaluable. This section is all about honoring the potential of AI in education, which Yongpradit discussed across three educational buckets: school management operations, student learning and teacher support.

Benefit 1: Improving School Management Operations

Yongpradit called the use of AI in school management operations the lowest-hanging and safest fruit of the three buckets he discussed, saying “it reflects what’s already happening across all business sectors.”

Let’s think of just some of the ways AI could be or already is being used in school management operations:

  • Bus routing
  • Course scheduling
  • Attendance tracking
  • Enrollment tracking
  • Financial management
  • Communications
  • Library management

Imagine having many of these tasks off of your plate! What would you have time to focus on instead?

Benefit 2: Supporting Teachers

Here are some things that AI can help teachers create/do to simplify their jobs:

  • Lesson plans
  • Rubrics
  • Assessments
  • Grading
  • Content differentiation for students with different needs

Benefit 3: Enhancing Learning for Students

Perhaps some of the most exciting benefits of AI in schools relate to how it can help students learn. From creating personalized learning pathways that let students learn at their own pace and style to giving them new career pathways, here are some of the many potential benefits of AI for students.

Benefit 3a: Unique Learning Paths

Generative AI tools allow teachers to create lesson plans in different languages, at different learning levels and for different learning styles. This can allow students to learn in ways better suited to their learning styles.

Benefit 3b: New Learning Opportunities

AI allows for new learning opportunities for students as learning AI skills can help with any number of other subjects. AI principles can be seen in courses such as statistics, computer science, data science, social studies, math and more.

Benefit 3c: Creative Learning, Critical Thinking, Problem Solving and More

Just like the calculator lets math students move from general calculations to more complex math problems, AI has the potential to get all students thinking outside of the box.

Because AI can help with fact presentation, students may now be free to move past memorization of facts and onto critical thinking and problem-solving.

Benefit 3d: AI Jobs

Along with general AI skills, one benefit of AI is that there will be many new jobs out there that use or create AI systems. Teaching and supporting AI usage through project-based and career and technical education settings can lead to good job opportunities in the future.

Acknowledging the Risks

While there are many benefits to using AI in schools, we cannot accept the benefits without also acknowledging the risks. While some people look at this emerging trend as something akin to the apocalypse, Yongpradit advised us to look at the real and current threats that AI can pose instead.

To him, a big one goes back to the access issue, but in a way directly connected to the classroom. What happens when one teacher lets students use AI on assignments, but another teacher of the same subject does not?

This is something he hopes we address as we move forward with the commission. However, it isn’t the only risk.

  • Because AI systems are built by people, and people are inherently biased, the systems themselves will be biased. While some bias is good (after all, we want the system to move us toward a helpful answer), societal bias is bad. How can we control this?
  • AI can be a means of freeing up time for more creative and critical-thinking-based tasks, but it can also lead to over-dependence. How can we make sure students are not outsourcing learning to the AI system?
  • Privacy is always a big issue with any new technology. How can we best ensure that we keep the privacy and security of student data at the forefront of any steps we take either to limit or embrace AI?
  • In search tools such as Google, a form of fact-checking is built in: as people click the links that make the most sense for a given search, the engine recognizes this and moves the better results toward the top. Large language models have no such mechanism. How can we make sure the results they give are accurate and trustworthy?
  • As AI automates certain tasks, there is a chance that some jobs will be lost. (See the Humana post for more on this.) How can we make sure that students still have job opportunities upon graduation?

Finally, Yongpradit addressed the ease with which AI can facilitate cheating and plagiarism. Luckily, he thought this might actually be less of an issue than some people fear.

To demonstrate this, he looked at a recent Stanford survey. In this yearly study, researchers ask students if they have cheated. The number is usually quite high – 60-70% report that they have. However, in the survey conducted after ChatGPT became available, the numbers did not significantly change. The students who were already going to cheat might now use ChatGPT to do so, but those who weren’t going to cheat did not seem more likely to start.

Still, though, using AI as the method of cheating can lead to more advanced outputs, which can then make the cheating harder to detect. Thus, it is still a risk that needs to be considered.

Yongpradit highlighted these risks, but he said that the benefits and the risks often go together. For example, the risk that students stop thinking because of overdependence on this tool is matched by the potential benefit of letting students move past memorization to critical thinking.

So, how do we balance these issues? 

Promoting Responsible AI Use in Education

To balance the risks with the benefits, careful planning is necessary. Here are some of the suggestions Yongpradit gave us to do this:

  • Establish clear guidelines and policies.
  • Provide ongoing professional development opportunities. He quoted a survey that showed in June 2023, about six months after ChatGPT was released, only 13% of teachers had had any AI professional development. While that number quickly rose to 29% by March 2024, the number is still way too low.
  • Promote AI literacy in classrooms, both for students and teachers.
  • Build capacity through funding.

How to accomplish these things will be part of the Commission’s goals moving forward.

The Final Post

Join us next week for the final post in this series, where we will look at some of the incredible things Dr. Nancy Ruzycki is doing as the Director of Undergraduate Laboratories at the University of Florida.