Beyond the Buzz: Rethinking AI Adoption Through People, Purpose, and Process

Discussion with Bob Grogan
To harness the real potential of AI, leaders must address long-standing organizational blind spots, redefine team structures, and embed ethics and empathy into every layer of strategy. Over a career spanning leadership roles in technology and education, where he has helped organizations transform their technology strategies, Bob Grogan has seen how AI can both accelerate innovation and expose cultural or strategic cracks. Now operating independently as a founder and consultant, he focuses on helping organizations rethink their adoption of artificial intelligence not as a silver bullet, but as a strategic enabler.
Evolving Organizational Structures for an AI-Powered Future
In sectors like healthcare and education—where the outcomes of innovation have inherently human implications—organizations cannot afford to treat AI like a traditional IT rollout. According to Grogan, one of the most important shifts that must occur is structural: organizations must rethink traditional roles and the way teams are designed. "Companies have historically been pretty poor at adopting significant new technology - the failings tend to be more on the personnel and culture front", Grogan says.
To support meaningful AI adoption, leaders must begin with role redefinition. Grogan argues that human resources, for instance, has recently focused on policies and processes at the expense of genuine employee engagement. AI presents an opportunity to reverse that trend by offering personalized employee support to combat burnout and other worrying trends, but only if implemented intentionally.
Critically, Grogan warns against viewing AI purely as a tool for automation. "You have to think in terms of augmentation rather than automation", he explains. By focusing on how AI can enhance the work of individuals and teams—not replace it—organizations can begin to dissolve silos, empower departments, and foster a culture of experimentation and resilience.
Aligning Innovation with Long-Term Strategy
One of the most persistent challenges Grogan identifies is the lack of alignment between innovation initiatives and overall business strategy. Too often, strategic planning remains a C-suite activity, disconnected from the operational realities of those expected to deliver on it. “If you're going to have a transformative initiative… the decision makers have to be part of the initiative”, Grogan emphasizes. “Leadership with clarity fosters understanding that leads to alignment”.
Frameworks like value stream mapping or capability models can help, but Grogan cautions that tools are only part of the solution. The root issue is leadership engagement—or the lack of it. When leaders engage directly with AI initiatives, they are more likely to guide the organization toward relevant, scalable, and sustainable implementations.
Many organizations fall into the trap of following industry trends or adopting vendor solutions based on marketing claims rather than internal needs. “They might just pick the wrong piece of technology because the vendor gave the best pitch”, Grogan warns. The alternative is to assemble cross-functional teams with a mandate to develop solutions that address current challenges while also identifying areas where change is needed.
Building Buy-In in an Era of Digital Fatigue
Skepticism toward AI is not only understandable—it’s rational, especially in organizations that have cycled through repeated waves of digital transformation, layoffs, or restructuring. To succeed, Grogan explains, companies must approach AI adoption with empathy, transparency, and inclusion. “You're already in an environment where introducing significant change is not likely to be successful unless you provide that psychological trust”, he says.
Grogan advocates for an approach grounded in microlearning and peer communities. Leaders should avoid long, disruptive offsite training and instead focus on clear communication, bite-sized education, and sharing practical case studies relevant to each stakeholder’s role. “The best antidote to fear and skepticism is exposure”, Grogan says.
Employees are already experimenting with AI tools independently—organizations should support this by providing guidelines, sharing best practices, and co-creating the future of work with their teams.
Redefining Roles and Teams for Human-Centered AI
AI’s true value emerges not from code, but from how it is incorporated to deliver real value for the company. For Grogan, this means adopting a “product mindset” where cross-functional teams are built around business outcomes—not job descriptions.
He recounts a consulting engagement where a company attempted a large-scale data migration without knowing who was going to use the data or why. “They just viewed them as a hundred different things that were all alike”, he recalls. Without insight into the user, the project lacked prioritization, strategy, and impact.
Grogan proposes a more dynamic model: create a product layer—a person or team that understands business needs and translates them into technical requirements. “That product team doesn’t necessarily mean Python developers. It means someone from finance, someone from sales—whoever is appropriate for the function that needs value delivery”.
The goal is to break from rigid handoffs and instead create empowered, accountable teams that have full ownership of value creation. In this model, AI becomes one of many tools used to meet objectives, not a magic solution layered onto outdated structures.
Embedding Equity and Ethics Into AI Systems
As AI becomes more embedded in decision-making, particularly in mission-driven sectors, it also raises urgent ethical questions. Grogan acknowledges the risks of bias and misinformation but reminds us that these challenges existed long before AI. “All the biased decisions, the financial fraud, all of the poor culture development—that took place before AI. It was done by people”, he notes.
Still, Grogan is a strong advocate for ethical design and governance. In education, for instance, he stresses the importance of ensuring training data is safe and representative—especially for children lacking traditional educational resources. “If the alternative is that a child doesn’t learn to read until they’re 10, and the only support they get is from AI, then we have to consider that carefully.”
Similarly, in healthcare, Grogan insists that AI must operate “in partnership” with human professionals. There must always be a human in the loop when decisions have real-life consequences. Encouragingly, he's seen a broad coalition of ethicists, technologists, and domain experts working together—early and thoughtfully—to shape the responsible use of AI.
Keeping Empathy at the Center of Efficiency
Too often, technology is introduced in the name of speed. But when organizations prioritize efficiency without understanding the purpose behind it, they risk accelerating bad processes—and alienating good people.
“Technology is very good at making a terrible process go faster”, Grogan warns. He points to hiring as a key example: overwhelmed by 400+ résumés, companies often raise qualifications or use automated tools to weed out applicants—only to miss out on great talent.
Grogan emphasizes that talent recruitment and retention must be a partnership between leadership and hiring teams—not a siloed transaction. “Everyone should be accountable for creating an environment where people want to work”, he insists.
The key is using AI to support—not replace—human judgment. For example, using AI to screen résumés is useful only when paired with a thoughtful follow-up process and clear expectations. “You're assisting the people doing the work, not automating a bad process”.
Avoiding Blind Spots and Accelerating the Right Way
AI isn’t new—but the current wave of generative tools has brought it into the spotlight. Grogan cautions against being dazzled by marketing. Leaders must ask: What’s behind the AI label? “Is it just a website over ChatGPT? It may still be useful”, he says. “But you want to pay the appropriate price for it”.
Grogan encourages leaders to embrace self-reflection and invite employees into the process. “Let them come up with the use cases. They almost certainly know”, he explains. Rather than dictating solutions from the top down, organizations should pilot small initiatives, share learnings, and scale only when results align with real needs. Ultimately, success in AI isn’t about speed—it’s about substance.
Final Takeaways for Executive Leaders
Grogan closes with three essential lessons for executives guiding their organizations through AI transformation:
- Don’t overlook the fundamentals: Before introducing any system into your organization, be clear and logical about why you are implementing it. Just as important, do not overlook human talent—treat it as at least as valuable as any prospective technology.
- Assume AI will impact you—plan accordingly: The release of foundation models had an outsized impact because powerful tools were suddenly accessible to everyone. Much like the cloud, they let people envision businesses that were not possible before. You must assume generative AI will affect your competitive position, so the best time to start leveraging LLMs is now.
- Focus on process, not just potential: It is essential to focus on what this new technological environment is bringing to the table. Grogan advises that if you think you have a secret formula for success, you are ripe for disruption. New entrants carry no legacy systems or demanding customers, and they now have technology that can power their speed to market. Anyone can now do what you have always done, so disrupt yourself first.