Jay Parikh, Microsoft’s EVP of Core AI, discusses the integration of developer and infrastructure teams to create advanced AI tools and platforms like Foundry, emphasizing flexible deployment, security, and collaboration in the evolving AI landscape. He highlights the challenges of scaling AI infrastructure, the importance of diverse, efficient models tailored to enterprise needs, and Microsoft’s partnership with OpenAI to deliver trusted, innovative AI solutions that transform software development and enterprise modernization.
In this interview, Jay Parikh, Microsoft’s Executive Vice President of Core AI, discusses the formation and vision of Microsoft’s newly combined Core AI team, which merges the developer division with the core infrastructure teams. The team focuses on reinventing software development tools for the AI era, building a platform called Foundry for creating and deploying AI agents, and ensuring that security and trust are integral from the start. Jay emphasizes the importance of flexible deployment strategies that span cloud and edge devices to meet diverse enterprise needs.
Jay highlights the cultural shift within Microsoft as his team returns to full-time in-person work, emphasizing that collaboration, mentorship, and rapid learning are crucial in the fast-evolving AI landscape. He notes that AI is transforming how different roles within the company collaborate, blurring traditional boundaries between developers, product managers, and designers. AI tools empower individuals across functions to contribute to software development, fostering creativity and accelerating product iteration through faster prototyping and feedback loops.
Regarding AI infrastructure, Jay discusses the challenges of scaling data centers, particularly around power constraints and hardware evolution. While GPUs are critical, the entire system—including CPUs, storage, and networking—must be optimized to handle the increasing complexity of AI workloads. Microsoft actively manages GPU utilization and power availability, balancing supply and demand while continuously improving efficiency to maximize the value of existing infrastructure.
Jay also addresses the importance of model efficiency and diversity in enterprise AI deployments. He explains that enterprises often use a range of models tailored to specific tasks, balancing cost, latency, and accuracy. Microsoft’s platform supports this diversity through capabilities like model routing, which selects the best model based on workload requirements. He stresses that enterprise data plays a vital role in fine-tuning models to enhance their effectiveness and deliver higher ROI, and that both open-source and closed-source models have their place depending on customer needs.
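The model-routing idea described above can be illustrated with a small sketch. This is not Microsoft Foundry's actual routing logic or API; the model names, costs, latencies, and quality scores below are invented for illustration. The sketch simply picks the highest-quality model that fits a workload's cost and latency budgets:

```python
from dataclasses import dataclass

# Hypothetical model catalog -- names and numbers are illustrative only,
# not real Foundry model metadata.
@dataclass
class ModelProfile:
    name: str
    cost_per_1k_tokens: float  # USD, illustrative
    p95_latency_ms: int
    quality_score: float       # 0..1, higher is better

CATALOG = [
    ModelProfile("small-fast", 0.0002, 150, 0.70),
    ModelProfile("mid-tier",   0.0010, 400, 0.85),
    ModelProfile("frontier",   0.0100, 1200, 0.95),
]

def route(max_cost: float, max_latency_ms: int) -> ModelProfile:
    """Select the highest-quality model that satisfies the workload's
    cost and latency budgets; fall back to the cheapest model."""
    candidates = [m for m in CATALOG
                  if m.cost_per_1k_tokens <= max_cost
                  and m.p95_latency_ms <= max_latency_ms]
    if not candidates:
        return min(CATALOG, key=lambda m: m.cost_per_1k_tokens)
    return max(candidates, key=lambda m: m.quality_score)

# A latency- and cost-sensitive chat workload routes to the small model;
# a batch job with generous budgets gets the highest-quality model.
print(route(0.0005, 500).name)   # -> small-fast
print(route(0.0200, 2000).name)  # -> frontier
```

In practice a production router would also weigh task type, accuracy on the specific workload, and fine-tuned variants trained on enterprise data, but the cost/latency/quality trade-off shown here is the core of the selection.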
Finally, Jay touches on Microsoft’s close collaboration with OpenAI, combining their research and IP with Microsoft’s product development to deliver advanced AI solutions. He underscores the critical importance of security and trust in AI, noting that AI agents are designed with built-in compliance, observability, and governance features. When asked about contrarian views, Jay rejects simplistic metrics like lines of code written by AI, instead focusing on the transformative impact AI has on enabling enterprises to tackle previously insurmountable challenges such as technical debt and modernization.