It’s impossible to avoid the buzz around artificial intelligence. Whether techno-optimists are lauding the forthcoming healthcare revolution, or techno-pessimists are warning of grave environmental and social degradation, AI—and especially Gen AI—is dominating news cycles and boardroom conversations.
At Aspen, we work to support positive outcomes for business and society by connecting business leaders, scholars and thought leaders in productive dialogue. Given the broad consensus that AI will transform multiple dimensions of business and society, it’s only natural that our team has been thinking about creating a new program focused on the leaders making it happen, and the ways in which they can do so responsibly.
In late October I traveled to Silicon Valley to better understand what insiders see as the perils and promise of this new technological era, what it will demand of leaders, and to imagine the possibilities for a new leadership program designed to meet the innovative and ethical challenges of this moment. In partnership with First Mover Fellow Martin Gonzalez, we hosted a “design charrette” with a number of Aspen First Mover Fellows and took a dozen meetings with scholars, tech leaders, and program designers. It was an eye-opening trip.
What did we learn about reimagining leadership in the age of AI, and what comes next?
The Confidence Gap Between Executives and Leaders in the Middle
Are executives and managers on the same page when it comes to AI? Executives were generally confident about their own company’s ethical safeguards. They believed that effective guidelines and protocols were in place to prevent the design and release of problematic tech. The folks lower down the chain—the ones making day-to-day decisions about tech deployment—were less certain. Product managers lacked confidence in their company leadership and in organizational policies and practices. It was clear to them that, when push came to shove, safety and caution took a back seat to executive (and market) demands.
Managers attributed their ambivalence to two primary forces. First, the deep culture of rapid technological innovation in Silicon Valley and beyond. Despite growing expressions of public and industry concern, tech companies and their investors are still largely moving fast and breaking things, sometimes with heartbreaking results.
Second, implacable market pressure. While market pressure exists across industries, the pressure to be a first mover is especially powerful in AI (ChatGPT is already on its way to becoming the Kleenex of Generative AI). This isn’t just because the tech is new and the market is accelerating; it also reflects the nature of the technology itself. Since more users means more robust models, there is a perverse incentive to push new technology to market before it has been thoroughly tested. And when bias, for example, has already been baked in, inequality is compounded over time.
Moving Forward
Finding effective ways to navigate the tensions between ethical tech and market pressures will be critical for the industry in general and managers in particular. Most of the leaders in our discussions looked to regulation as an essential counterweight to prevailing cultural and market pressures. And while there was healthy skepticism about the potential of the federal government to offer meaningful guidance, there was cautious optimism that the EU and individual US states—most especially California—would lead the charge.
There was also a deep desire to change corporate practice from the inside. We found a real hunger among practitioners for a more thoughtful way of doing this work. Like their executives, these leaders were wildly enthusiastic about the promise of AI. But they weren’t blind to the dangers of the current model. They shared a belief that new technologies could be developed in ways that are safe, useful, and fair—and that a more considered approach would be good for business and society. They were eager for a program that would help develop the skills and capacities technologists need in this moment of rapid change.
What might such a program look like? To be effective, it would have to keep pace with the rapid advancements in technology and the unpredictable nature of this field. Content would need to be responsive to the dynamic experiences of participants. What also came across loud and clear was the need for a space of calm amidst the storm. An effective format would help tech leaders step back, refocus, and engage meaningfully with their peers. In providing such multifaceted support, we hope to create broader learning opportunities for company practice, scholarship, and policy.
We’re still in the early stages of program development, and several key questions remain:
- How can mid-level leaders influence the evolution of tech in significant, positive ways?
- What support do these leaders need to be effective?
- And is it better to focus on those building core technologies, or those implementing them in companies across industries?
We will continue to discuss these issues with practitioners and thinkers to determine what forms of support would be most useful for both business and society.
Have thoughts to share? We welcome your ideas and advice in the comments below.
This blog post was originally published on LinkedIn. Follow Eli Malinsky for more insights on business and society.