AI Is Changing Who’s in Control — InviGrid CEO Yogita Parulekar Explains Why
- Team Invi Grid
- Feb 19
- 5 min read

AI is changing how infrastructure is built, operated, and secured. As organizations run both AI and traditional workloads across multi-cloud environments, control, ownership, and accountability are becoming harder to maintain yet more critical than ever.
In this conversation, InviGrid CEO Yogita Parulekar shares how leaders should think about agentic AI, governance, and the operating realities shaping modern infrastructure.
How do teams lose control over access, ownership, or configuration changes as organizations run AI and non-AI workloads across multiple cloud environments?
Yogita:
As organizations accelerate both AI and traditional workloads across multiple clouds, control over access, ownership, and configuration breaks down, leading to compounding sprawl and operational chaos. Teams are pressured to deliver quickly, so admin access is granted “temporarily” to enable innovation, but those permissions are rarely revisited or revoked, and resources accumulate without clear ownership. Over time, exceptions become embedded in the operating model, reinforced by the many ways to provision infrastructure: IaC, console, CLI, APIs, and a growing mix of tools. This makes it difficult to maintain a single source of truth.
Lacking clear visibility into who deployed what and when, teams are forced to dig through logs, a reactive, time-consuming process that does not scale and offers little context about ownership or intent. As people transition roles or leave, remaining teams hesitate to touch unfamiliar resources, accelerating a downward spiral of sprawl, loss of control, and weakened governance.
How do existing infrastructure processes change when AI workloads are introduced?
Yogita:
For AI workloads, provisioning decisions have a direct impact on model and agent performance. Change management must evolve as continuous training and retraining drive constant updates at speed and require controls that can keep pace. Governance and cost management quickly become mission-critical as resource consumption grows less predictable. Identity must also be rethought as AI agents and automated workloads operate with their own identities and make decisions on behalf of users.
Beyond infrastructure itself, organizations must strengthen data provenance and ownership by tracking what data is used and how. Context and usage must also be tracked to support transparency, explainability, and trust in outcomes. This requires new approaches to observability, logging, and monitoring built directly into AI workloads.
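As one minimal sketch of what provenance-aware logging inside an AI workload might look like, the snippet below emits a structured record for each model call. The function name and every field in the schema are illustrative assumptions, not a standard or a product feature:

```python
import json
import time

def log_inference(model_version: str, dataset_id: str, prompt: str, output: str) -> str:
    """Emit one structured provenance record for a model call (illustrative schema)."""
    record = {
        "ts": time.time(),              # when the call happened
        "model_version": model_version, # which model produced the result
        "dataset_id": dataset_id,       # training-data lineage the model traces to
        "prompt": prompt,               # input context for explainability
        "output": output,               # the result being accounted for
    }
    line = json.dumps(record)
    print(line)  # in practice, ship to a centralized log pipeline
    return line
```

In a real system these records would feed the same observability stack as infrastructure logs, so questions like “which data produced this output?” can be answered after the fact.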
What approach can be used to define what actions AI can take, under which conditions, when deploying agentic AI in infrastructure workflows?
Yogita:
When deploying agentic AI, the starting point is recognizing that agents act with real autonomy and on our behalf. Organizations remain accountable for what those agents do and must design oversight into how they operate.
The concept of “human in the loop” needs to be defined more broadly. In some cases, it means humans are directly approving decisions or actions within workflows. In others, it means establishing the policies, constraints, and guardrails under which agents can operate independently.
Safeguards must exist for when agents behave unexpectedly, including rollback mechanisms and, where necessary, emergency stop capabilities. We want to ensure AI operates within clearly defined boundaries, with accountability, resilience, and human oversight built in.
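To make the idea concrete, here is one possible shape for a guardrail layer that decides whether an agent’s requested action is allowed, denied, or escalated to a human. The policy fields, action names, and thresholds are all hypothetical, a sketch of the pattern rather than any particular implementation:

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Boundaries an agent operates within (illustrative values)."""
    allowed_actions: set = field(default_factory=lambda: {"scale", "restart"})
    max_instances: int = 10  # hard resource limit acting as an emergency brake
    require_human_approval: set = field(default_factory=lambda: {"delete"})

@dataclass
class ActionRequest:
    action: str
    target: str
    instance_count: int = 1

def evaluate(policy: AgentPolicy, req: ActionRequest) -> str:
    """Return 'allow', 'needs_approval', or 'deny' for an agent's request."""
    if req.action in policy.require_human_approval:
        return "needs_approval"  # human in the loop for high-risk actions
    if req.action not in policy.allowed_actions:
        return "deny"            # outside the agent's defined boundaries
    if req.action == "scale" and req.instance_count > policy.max_instances:
        return "deny"            # enforce the hard limit automatically
    return "allow"
```

The point of the pattern is that “human in the loop” becomes a property of the policy: routine actions proceed autonomously, destructive ones route to a person, and everything else is denied by default.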
Could you describe a possible approach to address governance gaps when automation executes faster than controls are reviewed or updated?
Yogita:
When automation moves faster than governance can be reviewed or updated, the sustainable approach is to embed governance directly into the automation itself. Controls need to operate in real time.
The best analogy is everyday automation: a door that locks automatically or a camera built into a doorbell. Security is built into how the system functions. Infrastructure and developer workflows need the same model where guardrails, approvals, and policy enforcement are integrated into the automation so teams can move quickly without losing control.
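A common way to embed governance into automation is a policy-as-code check that runs in the pipeline before any change is applied. The sketch below validates a resource specification against two sample rules; the rule set and field names are assumptions for illustration, not a real policy engine:

```python
# Illustrative in-pipeline guardrail: validate a resource spec before
# automation applies it. Rules and field names are hypothetical.
REQUIRED_TAGS = {"owner", "cost-center"}

def check_resource(spec: dict) -> list:
    """Return a list of policy violations; an empty list means the change may proceed."""
    violations = []
    missing = REQUIRED_TAGS - set(spec.get("tags", {}))
    if missing:
        violations.append(f"missing tags: {sorted(missing)}")
    if spec.get("public_access", False):
        violations.append("public access is not permitted by default")
    return violations
```

Because the check executes in the same pipeline that provisions the resource, governance runs at the speed of the automation itself rather than waiting on a periodic review.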
How do control failures affect efforts to reduce alert fatigue without losing visibility into risk?
Yogita:
Alert fatigue is often the downstream effect of upstream control failure. As alerts multiply, noise rises, teams lose clarity, and it becomes harder to distinguish what truly matters. Critical issues can hide in plain sight simply because the volume becomes overwhelming. Many organizations respond by layering on more tooling or AI to triage the flood, but that approach treats the symptoms rather than the cause.
The sustainable path is prevention. Governance controls reduce the volume of risk entering the environment to keep visibility intact and alerts meaningful. Instead of forcing teams to sift through growing noise, strong preventive controls limit what generates alerts in the first place. That preserves focus, improves response, and allows visibility systems to highlight real threats rather than every preventable misstep. It’s the difference between continuously filtering noise and designing systems that prevent it from building at all.
From a board perspective, what questions should be asked about AI and cloud security risk, including ownership and how exceptions are approved?
Yogita:
Boards have a responsibility to oversee AI and cloud risk on behalf of shareholders by focusing on accountability and value. Boards are expected to maintain “noses in, fingers out,” asking the right questions without operating the business. Two questions matter most.
First: who is accountable for AI? Who owns AI safety, security, trust, transparency, and reliability of outcomes? And how are exceptions handled? AI is easy to demonstrate but much harder to trust in production, and each of those dimensions requires clear ownership.
Second: how are we ensuring AI investments are delivering measurable ROI? Oversight ensures these efforts are disciplined, strategic, and tied to long-term competitiveness rather than short-term experimentation. These questions anchor AI in governance, not hype.
What makes a security leader effective when working with engineering teams?
Yogita:
An effective security leader understands how engineering teams operate: their motivations, incentives, and the business pressures they work under. Engineers are rewarded for shipping features and delivering customer value, so security has to align with that reality rather than compete with it. Most teams want to build things that work and that customers trust rather than ship unstable products or spend time fixing avoidable issues.
The role of a security leader is to reframe security as an enabler of that outcome, not a blocker. Security should be embedded into the development process with clear expectations for how quality is measured.
In the best engineering cultures, security issues are addressed as a normal part of development rather than separate work. This requires leadership alignment and frictionless execution. CEOs and CTOs must reinforce that secure code is a leadership priority and a shared responsibility. Security leaders then reinforce that message through partnership.
Automation, low-noise tooling, and workflows that minimize false positives allow security to operate at engineering speed.
What problem do you think aspiring cybersecurity leaders should focus on addressing?
Yogita:
Aspiring cybersecurity leaders should focus on the scale and trajectory of cyber risk itself. According to Statista, the global cost of cybercrime is now over $10 trillion (larger than the GDP of most countries outside the U.S. and China!), and AI is accelerating both the speed and sophistication of attacks. We risk falling further behind if we continue relying on the same tools and approaches while expecting different outcomes.
AI is reshaping how every business operates, and it creates an opportunity to rethink how security operates as well. The next generation of successful leaders will focus on embedding security directly into business processes rather than treating it as a separate function layered on afterward. Preventing risk earlier, reducing friction for builders, and enabling innovation safely will define the future of effective cybersecurity leadership.


