Ethics Week Lecture focuses on responsible use of AI 

Arun Rai discusses ethical use of artificial intelligence

With the steadily increasing use of AI in the workplace come questions of how to use it conscientiously.  

Arun Rai, a scholar who has studied digital innovations in organizations and communities for more than 35 years, spoke on “Responsible AI for the Future of Work” during the University of Georgia’s annual Ethics Week Lecture, held Nov. 7.  

“What does it mean to establish a sensible system and work architecture around that, so you mitigate the risks while harnessing the advantages? I want to acknowledge that I’m presenting the downsides and the challenges, the risks, while recognizing the advantages,” he said. 

Rai serves as a Regents’ Professor and the Howard S. Starks Distinguished Chair in the J. Mack Robinson College of Business at Georgia State University. He co-founded and directs the Robinson College’s Center for Digital Innovation, an interdisciplinary research center that leverages industry-university collaborations. 

His work currently focuses on responsible artificial intelligence usage for the future of work — especially jobs and skills, human–AI augmentation and fairness — while exploring the intersection between research, education and policy. 

Rai outlined three types of AI and the challenges each presents. Predictive AI acts as a forecaster, projecting possible outcomes, but can amplify already known biases. Generative AI acts as a drafter, giving users the opportunity to create something new, but can produce what Rai called “confident hallucinations.” Agentic AI serves as an executor, carrying out tasks such as running formulas, but can find loopholes that produce inaccurate results. 

The use of AI creates many paradoxes, according to Rai, and responsible AI is the practice of continuously managing them. In the workplace, one notable AI paradox is economic efficiency versus human elevation. Rai said the responsible path is to use AI automation to free human talent from repetitive, low-judgment tasks, enabling people to spend more time and energy on complex, creative and strategic work. 

“My point is that it’s not either/or. It’s a coexistence,” he said. “Automation frees us toward pursuing other paths, which will allow us to work effectively with AI.” 

According to Rai, there are three basic ways humans can interact with AI. They can be architects, designing the rules for human-AI systems. They can be strategists, emphasizing critical judgment and interpretation by examining AI outputs, questioning assumptions and making decisions with ambiguous information. They can also be guardians with ethical oversight and empathy who manage risk and ensure fairness and compliance. 

“If you want to scale it beyond any kind of experimental sandbox, these things become very crucial,” he said. “This is not about running a successful experiment in a sandbox. You need a solid architecture. You need strategic judgment and interpretation of outputs. And you need a guardrail.” 

Some of the other ethical considerations Rai mentioned include accuracy versus fairness, autonomy versus control, personalization versus privacy, economic efficiency versus social stability and even open innovation versus intellectual property. 

“As universities, we have an ethical obligation … how do we train our students to have the right skills so that they can actually become architects and strategists and guardians of these technologies,” he said. 

The Ethics Week Lecture is part of Ethics Awareness Week, an observance designated by the University System of Georgia across all USG institutions as an important reminder of shared core values of integrity, excellence, accountability and respect. UGA’s observance of the week is part of the institution’s ongoing effort to promote an ethical culture on campus and to raise awareness about ethics resources available at the university.