
Powering Cyber Resilience: Expert Insights from Energy Sector CISO Vishnu Murali

As part of the Dialectica Executive Community’s ongoing focus on critical infrastructure security, cybersecurity leader Vishnu Murali shared deep industry insights drawn from over 15 years of experience, including multiple CISO and critical-infrastructure leadership roles in both regulated and unregulated industries. Murali currently serves at one of the world’s largest regulated electric companies, where he oversees cyber defensive operations and partners with government agencies on threat assessment and infrastructure protection.

Leadership Spotlight

A Defining Moment: From Stuxnet to Sector-Wide Vigilance

Murali’s cybersecurity philosophy was forged during the Stuxnet shockwave: the 2010 discovery of the worm—and the industry-wide reckoning that followed by 2013—proved air-gapped industrial control systems were anything but safe. While leading incident-response efforts for nuclear and other critical facilities, Murali saw firsthand how Stuxnet slipped past perimeter defenses via infected USB drives, weaponized multiple zero-days, and silently hijacked PLCs to inflict physical damage. The experience cemented his conviction that true resilience demands proactive, defense-in-depth strategies for even the most “isolated” OT environments.

“That was the moment cybersecurity for air-gapped systems evolved from a compliance checkbox to a mission-critical function for the energy sector’s OT and ICS/SCADA systems,” Murali recalls. “Incident response stopped being a contingency—it became the foundation for resilience.”

Stuxnet triggered a paradigm shift in OT security, proving malware could breach isolated systems and cause physical damage through compromised PLCs. For Murali, it was a defining moment—one that still shapes how he builds defense-in-depth frameworks that fuse real-time threat detection, zero-trust segmentation, and operational resilience for critical infrastructure.

When Resistance Meets Results

Working in a regulated industry, especially one tied to critical infrastructure, demands rigorous controls around data classification, compartmentalization, and compliance—particularly when deploying AI initiatives. Ensuring that Bulk Critical System Information (BCSI) is not inadvertently exposed to Large Language Models (LLMs) is both a cybersecurity and regulatory imperative. One of Murali’s most consequential internal initiatives focused on proactively securing BCSI against unintended inclusion in AI training or inference workflows. Early proposals for stricter access controls and AI usage policies were initially met with resistance. However, a targeted audit of shared content revealed a startling reality—highlighting the urgent need for policy, process, and tooling alignment across data governance, cybersecurity, and AI teams.

“Studies indicate that large language models (LLMs) can verbatim memorize 0.5–10% of their training data—including personal and sensitive information—highlighting significant privacy risks and serving as a wake-up call for data governance and model design.”

The outcome was a comprehensive policy overhaul—proving that data-driven advocacy, even in the face of early resistance, can drive meaningful change and significantly strengthen an organization’s governance, compliance, and security posture.
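The kind of control such an overhaul points toward can be illustrated with a small sketch: a pre-ingestion gate that checks document classification labels before anything reaches an LLM training or inference corpus. The label set, document shape, and function names below are illustrative assumptions, not the organization’s actual tooling.

```python
# Minimal sketch of a pre-ingestion gate for an LLM data pipeline.
# The blocked label set and document structure are hypothetical.
BLOCKED_CLASSIFICATIONS = {"BCSI", "CEII", "PII"}  # illustrative label set

def is_ingestible(doc: dict) -> bool:
    """Allow a document into the training/inference corpus only if
    none of its classification labels are in the blocked set."""
    labels = {label.upper() for label in doc.get("classifications", [])}
    return not (labels & BLOCKED_CLASSIFICATIONS)

def filter_corpus(docs: list[dict]) -> list[dict]:
    """Return only the documents safe to expose to the model."""
    return [d for d in docs if is_ingestible(d)]
```

In practice a gate like this would sit alongside, not replace, access controls and human review; the point is that the block happens before ingestion, not after a model has already memorized the content.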

Advice for Rising Cybersecurity Leaders

Murali offers a grounded framework for future CISOs and security leads entering the critical infrastructure space:

  • Enforce Zero-Trust—even on “air-gapped” OT. ICS vulnerabilities exploded from 550 in 2020 to 1,342 in 2022—a 144% jump—proving perimeter defenses alone are obsolete.
  • Harden the software supply chain. The SolarWinds backdoor reached ≈18,000 customers, including multiple U.S. federal agencies, turning a routine update into a nation-state beachhead.
  • Make security a company-wide mandate. The DOJ spotted SolarWinds activity six months before public disclosure; siloed teams missed the signal. Cross-department training and information-sharing cut dwell time and incident cost.
  • Engineer for the digitized smart grid. DOE’s Smart Grid System Report warns that the surge in connected devices and data flows is rapidly expanding the attack surface—segmentation, real-time OT visibility, and secure-by-design procurement are now table stakes. 
  • Audit AI pipelines for data leakage and BCSI exposure. LLMs and intelligent agents embedded in SCADA/HMI layers may inadvertently ingest or infer protected operational data.
  • Secure autonomous agents and self-healing systems. AI-enabled OT tools that take autonomous corrective action (e.g., fault isolation, feeder switching) must be hardened against model manipulation or logic poisoning.
  • Validate synthetic decision-making. AI-driven control logic (e.g., for predictive maintenance or load shedding) must undergo adversarial testing to prevent operational misfires from compromised inference engines.
  • Control AI model updates and drift. Continuous model learning in ICS environments introduces drift risks. Without versioning, rollback, and approval workflows, corrupted AI logic can cascade across control zones.
  • Prepare for human-machine trust failures. Operators relying on AI may experience automation bias—leading to missed overrides or unsafe conditions if models are tampered with or hallucinate state.
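The versioning-and-rollback point above can be sketched in a few lines: a registry where every model update is recorded as a version, deployment requires an explicit approval step, and rollback restores the prior version. The class and method names are hypothetical, not a reference to any specific ICS product.

```python
# Illustrative sketch of controlled model promotion in an ICS setting:
# updates are versioned, deployment needs explicit approval, and
# rollback reverts to the preceding version. All names are hypothetical.
class ModelRegistry:
    def __init__(self):
        self._versions = []   # ordered history of model artifacts
        self._approved = None # index of the currently deployed version

    def register(self, artifact) -> int:
        """Record a new candidate version; it is NOT deployed yet."""
        self._versions.append(artifact)
        return len(self._versions) - 1

    def approve(self, version: int):
        """Deploy a version only after an explicit approval step."""
        if not (0 <= version < len(self._versions)):
            raise ValueError("unknown version")
        self._approved = version

    def rollback(self):
        """Revert to the immediately preceding version
        (sketch: assumes earlier versions were previously approved)."""
        if self._approved is None or self._approved == 0:
            raise RuntimeError("nothing to roll back to")
        self._approved -= 1

    @property
    def active(self):
        return None if self._approved is None else self._versions[self._approved]
```

A production workflow would add signed artifacts, audit logging, and multi-party approval, but the structural point stands: no model reaches a control zone without a recorded version and a deliberate promotion step.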

Cyber Insights

Cybersecurity Spend Trends

Murali confirmed that cybersecurity budgets are growing significantly across the energy sector, driven by AI threats, regulatory pressure, and modernization demands. In some cases budgets are doubling, with investment focused heavily on tooling rather than headcount.

“We’re modernizing faster than ever before—defensive tooling has to scale with that.”

Spending on Cybersecurity for AI Systems

Murali’s organization is proactively investing in cybersecurity tools designed specifically to protect AI and ML environments from data misuse, injection attacks, and unintended model training.

One such effort involved deploying document management systems to restrict LLM access to sensitive operational data, especially around grid operations and compliance documentation.

“It’s not about blocking AI—it’s about structuring how it learns and what it sees.”

Vendors Protecting AI Environments

Key vendors cited for protecting internal AI development and mitigating external AI threats include:

  • CrowdStrike – Falcon AI for predictive threat analysis

  • Microsoft Advanced Defense – cloud + endpoint AI security

  • Cisco – AI-enhanced anomaly detection and observability

  • GitHub – secure code cleansing, especially for AI-generated commits

  • Mercury, Tricentis – new entrants for AI-based AppSec

Investing in AI for Cybersecurity Capabilities

Murali also sees growing spend in the reverse direction—applying AI to improve cybersecurity capabilities. Investments are increasing in:

  • Real-time threat detection and behavior modeling

  • Automated response and remediation systems

  • Smart scanning of infrastructure vulnerabilities

The focus is shifting from traditional perimeter defense to intelligent, adaptive controls that evolve alongside threats.
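As a rough illustration of the behavior-modeling idea, the sketch below flags an event rate that deviates sharply from a rolling baseline using a simple z-score. The window size, threshold, and approach are illustrative assumptions, not any vendor’s actual detection method.

```python
# Toy sketch of behavior-based anomaly detection: score each new
# observation against a rolling baseline of recent values and flag
# large deviations. Parameters are illustrative, not vendor defaults.
from collections import deque
from statistics import mean, stdev

class BehaviorBaseline:
    def __init__(self, window: int = 20, threshold: float = 3.0):
        self.window = deque(maxlen=window)  # recent observations
        self.threshold = threshold          # z-score cutoff

    def observe(self, value: float) -> bool:
        """Return True if `value` is anomalous vs. the rolling baseline."""
        anomalous = False
        if len(self.window) >= 5:  # need a minimal baseline first
            mu, sigma = mean(self.window), stdev(self.window)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.window.append(value)
        return anomalous
```

Real adaptive controls model many signals jointly and learn over time; the point of the sketch is the shift it embodies: alerting on deviation from observed behavior rather than on a static perimeter rule.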

“AI helps us patch faster, see more clearly, and respond in seconds. It’s not just an enabler—it’s essential now.”

Key Vendor Selection Criteria

When evaluating new cybersecurity tools, Murali’s team prioritizes:

  • Breadth of attack surface coverage
  • Proven performance in energy-specific environments
  • Transparent model behavior and explainability
  • Interoperability with regulatory and OEM ecosystems
  • Government-backed threat intelligence collaboration
  • Zero-day disclosure and vulnerability-handling track records—especially important in regulated sectors

These criteria ensure that solutions are not just smart—but battle-tested for the unique realities of critical infrastructure.
