How is generative AI changing what employers value in the workforce?

By Jon Kuiperij | Feb 22, 2024

In Take 5, Sheridan's thought leaders share their expert insight on a timely and topical issue. Learn from some of our innovative leaders and change agents as they reflect on questions that are top-of-mind for the Sheridan community.

Artificial intelligence’s use of predetermined algorithms and rules to perform preset tasks has enhanced our lives for decades, from optimizing manufacturing processes to recommending television series to providing a worthy opponent in chess. But the arrival of generative AI and its ability to create original content — including articles, images, videos, music and computer code — has many wary of its potential to disrupt the ways in which we learn, work and play.

In this edition of Take 5, Pilon School of Business law and ethics professor Wayland Chau discusses generative AI's potential impact on the workforce, what skills are needed to succeed in the future of work, the technology's ethical and legal implications, and more.

1. How will generative AI and increased automation impact what employers value in the workforce?

In an AI economy, certain groups of people will be more highly valued than others. One such group is people whose work involves many physical aspects, including highly skilled and trained professionals such as nurses and tradespeople, as well as low-skilled manual labourers. It's difficult to see how robots could ever fully replace them.

Another group of people who I think will still be highly valued are the people who have traditionally been called knowledge workers... people whose value is based primarily on the output of their minds and their strengths in critical thinking, creativity, communication, leadership and collaboration. The power of those core human competencies will be magnified with the judicious use of AI.

The group that will be left behind will be people who don't have those high-level competencies and merely rely on AI to think and communicate for them.

2. What is the Pilon School of Business doing to prepare students for success in the future of work?

Like the rest of the world, we're still trying to figure things out. However, we are already well-positioned because our Bachelor of Business Administration degrees have always focused on high-level competency development in critical thinking, creativity, communication and leadership — skills our graduates will be able to use to leverage the power of generative AI, rather than simply relying on the technology.

We currently have a committee of faculty developing a list of the most important high-level competencies we'd like to infuse in all our programs, not only through purposeful inclusion in the content of our courses but also in the ways our professors teach the subject matter. In addition to teaching fundamental concepts, we're training students to look at a situation, analyze it critically and creatively, and communicate their findings effectively to others.

Work is also ongoing to determine how best to provide our students with the AI skills they will need to succeed.

3. On your social channels, you’ve shared how you've observed the full effects of students having easy access to generative AI tools such as ChatGPT, noting the level of inappropriate use was 'disturbing but not surprising.' How should educators balance the value of incorporating the technology into the classroom with the potential for students to abuse it?

Similar to many other breakthrough technologies, generative AI is neutral, meaning that it can be used for good and it can be used for bad. I'm always looking at how I can use generative AI to both improve my teaching and help students learn more effectively.

But I've also seen first-hand the dangers of AI use by our students, specifically when they rely on generative AI to complete an assignment. That's not just an academic dishonesty issue; it's an educational issue. Students are inflicting self-harm when they're not learning how to think for themselves. When they have to take a final exam in which they can't access AI, the results can be tragic — tragic for them, but also for their professors to watch it happen.

As we teach our students how to use generative AI in a positive manner, we also need to educate them about the dangers.

4. Do you think individuals and companies have an ethical responsibility to disclose whether generative AI was used in the creation of something?

There aren't any specific ethical guidelines or guardrails when it comes to the use of generative AI. Everything is on a case-by-case basis. But for now, if you're going to be using AI to help you create something that is presented to the world, the best practice is to be transparent and disclose that AI was used.

A cautionary tale is what happened recently with Sports Illustrated magazine, which allegedly used generative AI to write a number of articles with fake names listed as the authors. A number of people were fired because of that incident, which tells us that — right now, at least — people want to know if what they're watching or reading was produced with the help of generative AI.

As the use of AI becomes more ubiquitous, an interesting question will be whether we still need to disclose every time that something is produced with the assistance of AI. For example, now that we've used Google searches for decades, I don't think many people disclose the fact that they used Google to help them write or create something. Or now that a thesaurus is commonly used, people don't disclose when they've used one.

Ultimately, I can envision a world in the not-too-distant future where it is just assumed that generative AI was used in the process of creating a work.

5. In your opinion, what are some of the most significant legal developments and debates involving the use of generative AI?

In general, law will always be trying to catch up with technology. That's especially true of generative AI, which is advancing by the day. Governments around the world are scrambling to figure out how to regulate AI, and the big challenge — especially amongst the major economies like Canada, the United States and the E.U. — is to regulate AI while also allowing companies to develop and advance it, rather than stifling its development.

So far, most of the legislative proposals I'm aware of are focused on human rights and on preventing AI from imposing unconscious biases and discriminating against certain groups in society. Some proposed legislation also deals with privacy, such as the E.U.'s proposal to prohibit the use of AI for facial recognition. And the U.S. has proposed, through an executive order, to prohibit the use of AI to help produce weapons of mass destruction.

One significant and ongoing legal issue is the unauthorized use of copyrighted material by AI. AI learns by “digesting” all kinds of online material, including copyrighted works such as news articles, artistic images and music, and then uses that material to form its responses — sometimes even blatantly plagiarizing copyrighted material. For example, one blogger asked Google’s Bard AI image generator to “draw a video game plumber”, and the AI provided a cartoon drawing almost identical to Mario, the Nintendo video game character. There are a number of ongoing lawsuits by major copyright holders against AI companies, including a case brought by the New York Times against OpenAI, the creator of ChatGPT.

There’s also a very open question about how copyright and patent law applies to works and inventions that have been created with the help of AI. What we know for sure is that something produced 100% by generative AI is not patentable or copyrightable. But when something is produced through human interaction with generative AI, at what point do we say it's something that was created by the human versus the AI, therefore making it patentable or copyrightable? That's a big question that we don't yet have an answer to.


Wayland Chau teaches business law, ethics, sustainability and corporate social responsibility in undergraduate and graduate programs at Sheridan’s Pilon School of Business (PSB). He is also the Collaborative Online International Learning coordinator for PSB. Wayland is an active member of PSB’s Local Academic Council and the Transformative Business Education community of practice. Prior to entering academia, Wayland practised law for more than 15 years as in-house tax counsel at CIBC and in private practice at several major law firms. Wayland earned his Bachelor of Commerce degree in finance and marketing from McGill University, is a graduate of Osgoode Hall Law School and holds a Master of Laws degree in International Business Law from the University of London. He is a member of the Ontario Bar. He uses both new tech and 'old school' methods in his teaching approach and continually re-evaluates how he engages his students and develops their skills in problem solving and critical thinking. Some of his thoughts and experiences as an educator are chronicled in his blog The Reflective Prof and on LinkedIn.


Interested in connecting with Wayland Chau or another Sheridan expert? Please email communications@sheridancollege.ca

The interview has been edited for length and clarity.
