Automating Tasks, Not Jobs: 5 Lessons on AI in Education from Humana’s CEO

Blog post by Ashley Shaw, SREB Communications Specialist

Lessons learned about AI in education from Humana's CEO

This is the second post in a five-part series. You can find past posts and see what is upcoming here:

  1. Meeting Overview, Notes from SREB President Stephen Pruitt
  2. Bruce Broussard, CEO of Humana
  3. Asa Hutchinson, former governor of Arkansas
  4. Pat Yongpradit, CAO of Code.org
  5. Nancy Ruzycki, Director of Undergraduate Laboratories at the University of Florida

In our first post in this series, we discussed the key takeaways from the first-ever meeting of the SREB Commission on AI in Education. This post reflects on the first of four featured sessions at this event.

Our first featured speaker was Bruce Broussard, CEO of Humana, who sat down with SREB President Stephen Pruitt to answer questions about how AI is changing the healthcare field and what he sees as the future of AI in education.

Listening to him speak, I took away five lessons from Broussard on how the commission should approach the topic of AI in education.

Lesson 1: Develop Policy Around the Use of Technology, Not Around the Technology Itself.

One important point Broussard emphasized is that, traditionally, a technology itself is not regulated. Instead, its use is.

He used the examples of the combustion engine and the airplane. Neither of these technologies is regulated in itself, meaning there are no rules about whether the inventions can be used at all. However, there are plenty of regulations around their use: how, when and by whom they are used.

His advice to the commission was to apply the same practice to AI. As the commission works to recommend policies, those policies should be about how AI is used instead of about AI itself.

He offered three questions to guide these policy considerations:

Question 1: Is the Data Biased or Inaccurate?

AI works from input data, which it uses to compile information and suggest solutions. If that data is biased or inaccurate, the answers will be biased or inaccurate as well.

For example, Broussard mentioned that Humana’s governance committee examines the source data itself to spot problems, such as medical information pulled from only one region, which would skew the results by leaving out relevant information about the missing populations. Similarly, data from social media is likely to be less accurate and more biased than data from academic sources.

To address these concerns, policymakers may need to create regulations around how data is acquired and used within AI. Similarly, schools and legislators working on education policy may need to focus their policies on data usage more than on the tools themselves.

Question 2: How Complex Is the Model?  

How complex is the model, and what is its purpose? How was it developed in the first place? These are questions to ask when shaping policy. For example, can the model’s answers be broken down and explained? Could a human recreate the answer? Can you determine how accurate the information is?

To help show how these questions are used in his field, Broussard gave an example of two AI models that Humana uses:

Example 1: Their primary care clinics use a model that acts as an assistant tool. It is only about 70% accurate. However, because it is an aid to the clinician and its output is independently verified and checked outside the model, there is no speed pressure involved. That means its accuracy does not have to be as high.

Example 2: On the other hand, they also use a chatbot that helps their call centers answer questions by pulling members’ benefits information and calculating answers from that data. This model is close to 90% accurate. However, questions need to be answered within about five seconds, leaving little time to verify, so the accuracy of the model becomes much more important.

When creating policies on AI in education, it is equally important to look at these complexities.

Question 3: How Complex Is the Application?

Along with examining the complexity of the model, it is important to look at the complexity of the application. What is it being used to do?

Is it being used for bioscience? Is it helping with complicated cell development? Is it simply powering a basic call center? These are just some of the considerations Broussard suggested.

According to Broussard, policies should be thought of in these buckets. Instead of regulating AI as a whole, it should be broken down like this, with policies focused on its various applications and uses.

Beyond policy considerations, Broussard also discussed one of the hopes for AI: People are scared of what it will take from humanity, but the truth is that it has the potential to give just as much. That hope was the focus of the next few lessons.

Lesson 2: It’s Not About the Automation of the Job. It’s About the Automation of the Task.

Perhaps one of the most important lessons Broussard gave us, and one which became somewhat of a theme throughout the rest of the meeting, was the concept that AI is not automating jobs. It is automating tasks.

AI is automating tasks, not jobs.

To understand this, take some time to write down all of the tasks you do in a normal day or week. Then go through the list and see how many could be automated and how many could not. For most people, this exercise shows that AI is more likely to make their job easier than to take it away.

While some jobs might be automated or eliminated because of the share of automatable tasks within them, ideally AI will be used to take over the tasks that can be automated. That would give employees more time to focus on more important work, leading to more fulfilling jobs.

How will jobs become more fulfilling, though? That is the focus of Lesson 3.

Lesson 3: AI’s Capacity to Increase Research and Innovation Is Incredible, But It Still Needs Human Insight.

At Humana, they see AI as having the ability to increase research and innovation potential by taking over some of the dull work employees do each day. Working with these tools, Broussard sees us moving away from an education system in which students simply memorize facts and statistics to one in which AI provides the information and humans interpret it.

AI is an information tool, so it can read the data, but it will not form opinions and analyses around it. That is where we come in.

AI is not turning us into robots. It’s elevating our ability to think.

“AI is not replacing us or turning us into robots, it’s actually elevating…the human ability to be much smarter and much more thoughtful, but be much broader in its thought, as opposed to being very narrow in facts and figures,” Broussard said.

Of course, this change in job descriptions will lead to new skills that workers will need. This is the focus of Lesson 4.

Lesson 4: AI Will Change the Skills That Students Need When Entering the Job Market.

Leading back into the discussion of AI in education specifically, Broussard showed how the changing job landscape, by eliminating some easily automatable tasks and opening up more opportunities for innovation and research, will change the skills students need to learn in school.

For starters, he said, soft skills will become even more important than they already are. As he discussed the skills students will need going forward, he divided students into two groups: those who will study AI and those who will use it in the future.

Soft skills, such as pattern recognition, teamwork and critical thinking, will become more important than ever.

The small group of students who will go on to create and engineer AI will obviously need more technical AI skills and advanced courses. However, most students’ AI needs will fall under general education. Along with the basics, such as reading and simple math, Broussard sees students needing critical thinking skills more than ever:

  • Pattern recognition: This will help them make connections between data they collect.
  • Complex thinking: How can you deal with ambiguities and inconsistencies, for example?
  • Teamwork: This becomes especially important as AI allows people to move into more complex job functions, where searching for an answer will have them working with diverse teams to find it.
  • Interpersonal skills: This refers to their ability to appreciate diversity and what it brings to the table.

To learn more, see our report on the success skills employers currently want.

One of the most important skills students may need to learn, though, is adaptability, and that is the point of the final lesson.

Lesson 5: Continuing Education Is Vital.  

Near the end of his talk, Broussard was asked an interesting question: AI, as we think of it today, didn’t exist when he became CEO of Humana. How did he build the knowledge to address it with us?

 “What I have found in my career is curiosity has been a great asset for me,” he said. “And the ability to ask questions…Just learning has really served me well.”

Broussard mentioned just a few of the things he has done to make sure he is prepared for AI:

  • Took a few online courses
  • Learned a bit of statistics to understand things like neural networks and parameters
  • Listened to others and networked
  • Took a course to build his basic understanding as part of his position on the Hewlett-Packard board

He pointed out that this is really just a part of being a human.

If you are not a lifelong learner, you may get left behind.

“The things that have transpired in my life since I graduated college have been immense,” Broussard said. “And if you’re not a lifelong learner, you’d be left behind. I think you have to be that to be in our world today. If not, it’s going to be very uncomfortable, very uncomfortable.”

What’s Next

Join us next time as we gain more insights from the speakers at the first meeting of the SREB Commission on AI in Education. Next up, we will hear from Asa Hutchinson, former governor of Arkansas.