Kisaco Research
 

Sam Pritzker

Principal
TSG Consumer

In the new era of AI infrastructure, CMOS scaling remains the workhorse for heavy computational workloads. But the need for an energy-efficient solution imposes a paradigm shift at the interconnect level, requiring an intimate 3D co-integration of advanced ASICs and optical connectivity. 

As the architectural complexity of new products increases, designers need state-of-the-art platforms with a short path to manufacturing. In this workshop, we will highlight how you can access the following technologies for your future products:

  • Advanced-node ASIC down to TSMC N2 
  • Imec’s integrated photonics platforms from 200G up to co-packaged optics 
  • Imec’s advanced 3D packaging technique from interposer to hybrid bonding 

Location: Room 207

Duration: 1 hour

Author:

Philippe Soussan

Technology Portfolio Director
IMEC

Philippe Soussan is Technology Portfolio Director at imec. For 20 years, he has held various R&D management positions at imec in the fields of sensors, photonics, and 3D packaging, addressing these technologies from R&D up to manufacturing level.

His expertise lies in wafer-scale technologies, and he has authored over 100 publications and holds more than 20 patents in these fields. 

Since 2024, Philippe has been in charge of strategy definition within the “IC-link by imec” sector. This imec business line provides access to design and manufacturing services in the most advanced ASIC and specialty technologies.  

In this session, we will explore the end-to-end workflow of managing foundation model (FM) development on Amazon SageMaker HyperPod. Our discussion will cover both distributed model training and inference using frameworks like PyTorch and KubeRay. Additionally, we will dive into operational aspects, including system observability and resiliency features for scale and cost-performance using Amazon EKS on SageMaker HyperPod. By the end of this hands-on session, you will gain a robust understanding of training and deploying FMs efficiently on AWS. You will learn to leverage cutting-edge techniques and tools to ensure high-performance, reliable, and scalable FM development.
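The core idea behind the distributed training mentioned above can be sketched without any framework: each worker computes gradients on its own data shard, the gradients are averaged across workers (the all-reduce step that PyTorch DDP performs), and every worker applies the same update. This is an illustrative, framework-free sketch, not SageMaker HyperPod or PyTorch API code; the model and data are made up for the example.

```python
# Conceptual sketch of data-parallel training: fit y = w * x by gradient
# descent, with gradients computed per "worker" shard and averaged each step.

def local_gradient(w, shard):
    # Gradient of mean squared error over one worker's data shard.
    n = len(shard)
    return sum(2 * (w * x - y) * x for x, y in shard) / n

def all_reduce_mean(grads):
    # Stand-in for the collective all-reduce used by frameworks like PyTorch DDP.
    return sum(grads) / len(grads)

def train(shards, lr=0.05, steps=200):
    w = 0.0
    for _ in range(steps):
        grads = [local_gradient(w, s) for s in shards]  # each worker, in parallel
        w -= lr * all_reduce_mean(grads)                # identical synchronized update
    return w

# Two workers, each holding part of the data for y = 3x.
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
print(round(train(shards), 3))  # converges to 3.0
```

Because every worker applies the same averaged gradient, the replicas stay in lockstep; real frameworks add overlap of communication with computation, but the arithmetic is the same.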

Location: Room 206

Duration: 1 hour

Author:

Mark Vinciguerra

Assoc. WW Solution Architect
AWS GenAI

Author:

Aravind Neelakantan

WW Solution Architect
AWS GenAI

Author:

Aman Shanbhag

WW Solution Architect
AWS GenAI

Aman Shanbhag is a Specialist Solutions Architect on the ML Frameworks team at Amazon Web Services (AWS), where he helps customers and partners with deploying ML training and inference solutions at scale. Before joining AWS, Aman graduated from Rice University with degrees in computer science, mathematics, and entrepreneurship.

 

Anna Doherty

Partner
G9 Ventures

Anna Doherty is a Partner at G9 Ventures, where she leads sourcing, diligence, portfolio company support, and platform strategy. Since joining the firm in 2019, she has played a key role in shaping G9’s investment pipeline and driving value across its portfolio of next-generation consumer brands. Anna was named to the Forbes 30 Under 30 List in Venture Capital in 2024. Prior to joining G9, Anna was an analyst at Morgan Stanley where she gained foundational experience in both investing and operations. 

 

Anna graduated cum laude from Princeton University with a B.A. in History and served as the captain of the varsity women’s lacrosse team. Her junior independent work focused on the role of women in late 20th-century finance—a topic that continues to inform her perspective as an investor. 

 

She lives in New York City and brings her competitive spirit to everything she does, from power walks and pickleball to her Oura sleep score. An avid podcast listener (Huberman Lab, The Tim Ferriss Show, Pivot), Anna loves exploring new restaurants and is a champion of passionate, mission-driven founders who are building brands that matter.

The rapid evolution of high-performance computing (HPC) clusters has been instrumental in driving transformative advancements in AI research and applications. These sophisticated systems enable the processing of complex datasets and support groundbreaking innovation. However, as their adoption grows, so do the critical security challenges they face, particularly when handling sensitive data in multi-tenant environments where diverse users and workloads coexist. Organizations are increasingly turning to Confidential Computing as a framework to protect AI workloads, emphasizing the need for robust HPC architectures that incorporate runtime attestation capabilities to ensure trust and integrity.

In this session, we present an advanced HPC cluster architecture designed to address these challenges, focusing on how runtime attestation of critical components – such as the kernel, Trusted Execution Environments (TEEs), and eBPF layers – can effectively fortify HPC clusters for AI applications operating across disjoint tenants. This architecture leverages cutting-edge security practices, enabling real-time verification and anomaly detection without compromising the performance essential to HPC systems.

Through use cases and examples, we will illustrate how runtime attestation integrates seamlessly into HPC environments, offering a scalable and efficient solution for securing AI workloads. Participants will leave this session equipped with a deeper understanding of how to leverage runtime attestation and Confidential Computing principles to build secure, reliable, and high-performing HPC clusters tailored for AI innovations.
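The attestation pattern the session describes can be reduced to a simple loop: measure a component, compare the measurement to a known-good ("golden") value recorded at enrollment, and flag any drift. The sketch below is illustrative only — it is not Invary's product, the NSA-licensed technology, or any TEE vendor's API — and hashes an in-memory byte snapshot where a real attester would measure kernel text, TEE state, or loaded eBPF programs.

```python
# Minimal illustration of measure-enroll-attest, the core of runtime attestation.
import hashlib

KNOWN_GOOD = {}  # component name -> expected SHA-256 digest (golden measurement)

def measure(memory_image: bytes) -> str:
    # Hash a snapshot of the component; real systems measure kernel text,
    # TEE state, or eBPF programs rather than a plain byte string.
    return hashlib.sha256(memory_image).hexdigest()

def enroll(component: str, memory_image: bytes) -> None:
    # Record the trusted baseline measurement.
    KNOWN_GOOD[component] = measure(memory_image)

def attest(component: str, memory_image: bytes) -> bool:
    # True only if the current measurement matches the enrolled baseline.
    return KNOWN_GOOD.get(component) == measure(memory_image)

enroll("kernel_text", b"trusted kernel code")
print(attest("kernel_text", b"trusted kernel code"))   # True
print(attest("kernel_text", b"tampered kernel code"))  # False
```

Production systems add signed quotes from a hardware root of trust and continuous re-measurement, but the pass/fail comparison against a golden value is the same decision this sketch makes.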

Location: Room 201

Duration: 1 hour

Author:

Jason Rogers

CEO
Invary

Jason Rogers is the Chief Executive Officer of Invary, a cybersecurity company that ensures the security and confidentiality of critical systems by verifying their Runtime Integrity. Leveraging NSA-licensed technology, Invary detects hidden threats and reinforces confidence in an existing security posture. Previously, Jason served as the Vice President of Platform at Matterport, successfully launched a consumer-facing IoT platform for Lowe's, and developed numerous IoT and network security software products for Motorola.

Author:

Ofir Azoulay-Rozanes

Director of Product Management
Anjuna

Ofir Azoulay-Rozanes is the Director of Product Management at Anjuna Security. He brings over 30 years of experience in the software industry, including 15 years in cybersecurity. His career spans software engineering and product management leadership roles. Before joining Anjuna, Ofir led security products at Imperva and JFrog.

Dive into a hands-on workshop designed exclusively for AI developers. Learn to leverage the power of Google Cloud TPUs, the custom accelerators behind Google Gemini, for highly efficient LLM inference using vLLM. In this workshop, you will build and deploy Gemma 3 27B on Trillium TPUs with vLLM and Google Kubernetes Engine (GKE). Explore advanced tooling like Dynamic Workload Scheduler (DWS) for TPU provisioning, Google Cloud Storage (GCS) for model checkpoints, and essential observability and monitoring solutions.

Location: Room 207

Duration: 1 hour

Author:

Niranjan Hira

Senior Product Manager
Google Cloud

As a Product Manager in our AI Infrastructure team, Hira looks out for how Google Cloud offerings can help customers and partners build more helpful AI experiences for users.  With over 30 years of experience building applications and products across multiple industries, he likes to hog the whiteboard and tell developer tales.

Author:

Don McCasland

Developer Advocate Lead
Google Cloud

Don leads the Cloud Developer Relations team for AI Infrastructure at Google Cloud. A 20-year veteran in Developer Operations, he is focused on empowering the global developer community to build and scale the next generation of AI applications on Google's cutting-edge platforms.

 

Julianne Kur

Principal
Alliance Consumer Growth

DataBank, one of the nation’s leading data center operators, with more facilities in more markets than any other provider, has seen the future of enterprise AI infrastructure and knows how to help enterprises get there.   

With a customer base that spans 2500+ enterprises – in addition to hyperscalers and emerging AI service providers – DataBank has a unique perspective on the trends and lessons learned from customer AI deployments to date, which include some of the industry’s first NVL72/GB200 installations.   

In this 60-minute session, John Solensky, DataBank’s VP of Sales Engineering, and Mike Alvaro, DataBank’s Principal Solutions Architect, will share what DataBank has learned from its early GPU installations for hyperscalers and AI service providers, how those lessons were applied to later enterprise installations, the impact that next-generation GPUs are having on data center designs and solution costs, and the lessons for future enterprise deployments. 

Location: Room 206

Duration: 1 hour

Author:

Mike Alvaro

Principal Solutions Architect
DataBank

Michael Alvaro brings over 12 years of industry expertise spanning construction and mathematics to his role as Principal Solution Architect at DataBank, where he serves as technical lead for the data center sales team. Specializing in enterprise colocation solutions, Michael guides organizations through complex infrastructure requirements involving high-performance computing deployments.

As AI workloads rapidly scale across enterprises, Michael has become a trusted advisor for deployments demanding both high-density air cooling and advanced liquid cooling solutions. His unique construction background provides critical insight into physical infrastructure challenges, while his mathematical foundation enables precise optimization of power, cooling, and space efficiency. Michael’s approach centers on translating complex technical requirements into actionable deployment strategies, helping clients understand not just what’s possible, but what’s most cost-effective and operationally efficient.

Author:

John Solensky

VP of Sales Engineering
DataBank

John Solensky is the Vice President of Solutions Engineering at DataBank, where he leads a team focused on delivering colocation, cloud, and AI infrastructure solutions. With over 26 years of industry experience, John brings deep expertise in helping enterprises design and deploy secure, scalable, and high-performance platforms.

Since joining DataBank in 2020, John has been instrumental in advancing the company’s solutions engineering strategy, enabling customers to modernize IT environments and harness the power of AI-driven applications hosted in DataBank’s facilities. His leadership emphasizes collaboration, technical excellence, and a client-first approach, ensuring that organizations can rely on DataBank for mission-critical workloads and next-generation innovations.

Author:

Greg McNutt

Technical Director
Pure Storage

Greg McNutt is Technical Director at Pure Storage, Inc., where he has spent nearly a decade developing efficient methods of utilizing expensive hardware and limited power, and supporting relatively high-touch engineering labs. Over his career he has spent many years working on products ranging from the lowest-level facilities to cloud and high-scale products. Greg studied dependable computing at Stanford University. https://www.linkedin.com/in/gcmcnutt

Experience the future of GenAI inference architecture with NeuReality’s fully integrated, enterprise-ready NR1® Inference Appliance. In this hands-on workshop, you'll go from cold start to live GenAI applications in under 30 minutes using our AI-CPU-powered system. The NR1® Chip – the world’s first AI-CPU purpose-built for inference – pairs with any GPU or AI accelerator and optimizes any AI data workload. We’ll walk you through setup, deployment, and real-time inference using models like LLaMA, Mistral, and DeepSeek on our disaggregated architecture – built for smooth scalability, superior price/performance, and near 100% GPU utilization (vs. <50% with traditional CPU/NIC architectures). Join us to see how NeuReality eliminates infrastructure complexity and delivers enterprise-ready performance and ROI today.
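To see why the utilization gap the abstract cites matters, a back-of-the-envelope calculation helps: at a fixed per-GPU peak throughput, the number of GPUs needed to serve a target load scales inversely with achieved utilization. The throughput figures below are invented for illustration; only the 50% vs. near-100% utilization contrast comes from the abstract.

```python
# Sketch: GPUs required for a target serving load at a given utilization.
import math

def gpus_needed(target_tokens_per_s, peak_tokens_per_gpu, utilization):
    # Effective throughput per GPU shrinks with utilization; round up
    # because you cannot provision a fraction of a GPU.
    effective = peak_tokens_per_gpu * utilization
    return math.ceil(target_tokens_per_s / effective)

target = 100_000  # tokens/s the service must sustain (assumed figure)
peak = 5_000      # tokens/s one GPU delivers at 100% utilization (assumed figure)

low = gpus_needed(target, peak, 0.50)   # ~50% utilization, traditional CPU/NIC path
high = gpus_needed(target, peak, 0.95)  # near-full utilization
print(low, high)  # 40 vs 22 GPUs for the same workload
```

Under these assumed numbers, halving GPU idle time nearly halves the fleet size, which is the price/performance argument the session makes.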

Location: Room 201

Duration: 1 hour

Author:

Paul Piezzo

Enterprise Sales Director
NeuReality

Author:

Gaurav Shah

VP of Business Development
NeuReality

Author:

Naveh Grofi

Customer Success Engineer
NeuReality

Join us in this hands-on workshop to learn how to deploy and optimize large language models (LLMs) for scalable inference at enterprise scale. Participants will learn to orchestrate distributed LLM serving with vLLM on Amazon EKS, enabling robust, flexible, and highly available deployments. The session demonstrates how to utilize AWS Trainium hardware within EKS to maximize throughput and cost efficiency, leveraging Kubernetes-native features for automated scaling, resource management, and seamless integration with AWS services.

Location: Room 206

Duration: 1 hour

Author:

Asheesh Goja

Principal GenAI Solutions Architect
AWS

Author:

Pinak Panigrahi

Sr. Machine Learning Architect - Annapurna ML
AWS
