Inside an AI Strategy Course: Cases, Tools, and the Logic Behind the Design
Based on the INSEAD "AI Strategy for Startups and C-Suites" Course.
[this post was co-authored with a friendly LLM!]
Seven years ago I designed an MBA elective at INSEAD called AI Strategy. At the time, “AI strategy” was not a mainstream topic. Machine learning was used mostly for click prediction, fraud detection and recommendation engines. Deep learning was beginning to make noise in computer vision but was not yet powering consumer apps. Robotics was still a research frontier. And business schools were only starting to consider that managers might need to understand these developments.
Six years later, the course looked very different. We had Colab notebooks, interactive demos, open-source tooling, due diligence on frontier AI labs, and sessions on GPUs, robots and autonomy. But the logic behind the course has stayed the same: help managers, founders and investors reason coherently about AI as a strategic tool.
This post gives an inside look at how the course is structured, why it uses the materials it does, and how it evolved from 2018 to 2024.
The First Version in 2018: AI Strategy in the Pre-Generative Era
The 2018 syllabus was titled “AI Strategy for Startups and C-Suites.” It revolved around four pillars that still anchor the course today:
Objective. Identify where AI can actually create business value.
Data. Understand what data is needed and how it is collected or generated.
Intuition for algorithms. Develop non-technical intuition for how predictive models work and fail.
Managerial judgment. Recognize where automation ends and human oversight must begin.
The case set reflected the state of AI in 2018. Some examples:
Recruit Holdings Co. Japan: Harnessing Data to Create Value (IMD Case).
A case on how a large Japanese conglomerate with businesses in HR, travel, real estate and media created an AI Lab in Silicon Valley to centralize data science. Students analyze why Recruit created the lab, how data should flow across 200 business units and what makes corporate AI initiatives succeed or fail.
Evaluating the Cognitive Analytics Frontier (Kellogg Case).
Honeywell Aerospace must choose between partnering with IBM or a startup called SparkCognition to build a natural language search engine for aviation manuals. The case forces students to weigh partnering with a large incumbent against betting on an emerging AI company.
GROW: Using Artificial Intelligence to Screen Human Intelligence (Harvard Business School Case).
A startup founded by an MBA alum uses machine learning to assess candidates. Students debate where automation improves hiring, where it introduces risk, and how algorithms change the behavior of applicants and recruiters.
Data-Driven Pricing (Harvard Business School Case).
A fashion e-commerce retailer wants to use machine learning to set prices, but pricing interacts with inventory, coupons, search ranking and merchandising. Students grapple with the practical challenge of integrating algorithms into a multi-team retail operation.
Autonomous Vehicles: Technological Changes and Ethical Challenges (USC Case).
A senior executive must decide whether to take responsibility for an accident caused by an AI-driven vehicle. The case brings out issues of safety, failure modes and accountability.
There were no coding requirements in 2018. Everything was conceptual, strategic and case-driven. The ambition was simply to demystify AI for business leaders.
The 2024 Version: An Applied, Tool-Driven, Global View of AI Strategy
The 2024 syllabus reflects the generative AI era. Students now interact with Python notebooks in Google Colab, use Gemini, Claude, ChatGPT and Stable Diffusion, and explore models and datasets through HuggingFace. The course has become more hands-on, though still non-technical.
Some concrete additions:
A Colab notebook that visualizes data and shows how predictions change when variables shift.
A hiring prediction notebook where students alter input features to see how a model behaves (see the sketch after this list).
A pricing notebook that introduces transformers trained on tabular data.
A neural network notebook that students use to classify images and understand how layers operate.
Open-source model exploration where students load and interact with small LLMs or diffusion models.
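To give a flavor of the hiring notebook, here is a minimal sketch of the kind of exercise it contains. It assumes a scikit-learn logistic regression on synthetic candidate data; the feature names and numbers are illustrative, not drawn from the actual course materials.

```python
# Minimal sketch, not the actual course notebook: a toy hiring model where
# students nudge one input and watch the predicted probability move.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic candidates: years of experience, assessment score, referral flag.
X = np.column_stack([
    rng.uniform(0, 15, 500),   # years_experience
    rng.uniform(0, 100, 500),  # assessment_score
    rng.integers(0, 2, 500),   # referred (0 or 1)
])
# A made-up "hired" label that loosely depends on score and referral.
y = (0.03 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(0, 1, 500) > 2.5).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

candidate = np.array([[5.0, 70.0, 0.0]])
print("no referral :", model.predict_proba(candidate)[0, 1].round(3))

candidate[0, 2] = 1.0  # same candidate, now referred
print("with referral:", model.predict_proba(candidate)[0, 1].round(3))
```

The point is not the model itself but the habit of probing it: change one input, watch the output, and ask whether the shift is defensible.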
The course also expands geographically: five sessions on North American companies, five on Japanese companies and the rest on companies from Southeast Asia and Europe. This reflects the global nature of AI ecosystems today.
Why These Cases
1. Corporate Data Strategy and Infrastructure
Case: Recruit Holdings Co. Japan: Harnessing Data to Create Value (IMD).
Students evaluate how Recruit’s AI Lab in Mountain View operates, why it was created, and how it supports hundreds of business units. It is a realistic example of the complexity behind enterprise AI adoption.
2. Partnership Decisions in AI
Case: Evaluating the Cognitive Analytics Frontier (Kellogg).
Honeywell must choose between IBM Watson and a startup to build a natural language search tool for engineers. Students compare the tradeoffs: data access, speed of innovation, cost, integration risk and long-term lock-in.
3. Algorithmic Prediction in Human Resources
Case: GROW: Using Artificial Intelligence to Screen Human Intelligence (HBS).
Students analyze a startup that evaluates job seekers with predictive models. They discuss fairness, edge cases, the risk of over-automation and how algorithms shape behavior.
4. Interdependencies in Retail Systems
Case: Zalora: Data-Driven Pricing (HBS).
Pricing algorithms interact with buying, search ranking and coupon systems. Students learn why optimizing one function in isolation often creates second-order problems.
5. Research-Heavy AI Companies Facing Product Pressure
Case: Preferred Networks: A Deep Learning Startup Powers the Internet of Things (INSEAD).
This case is especially relevant in 2025. Preferred Networks (PFN) built industrial robotics systems, autonomous mini vehicles and even its own supercomputer. Students contrast PFN’s research-driven approach with the modern landscape of OpenAI, Anthropic, Mistral and HuggingFace. They examine how a research lab tries to become a real company.
6. Deep Learning Hardware
Case: NVIDIA: Winning the Deep-Learning Leadership Battle (IMD).
Students examine GPU leadership, supply chain risk, competitors and why hardware matters as much as algorithms.
7. Self-Driving and Robotics
Case: Autonomous Vehicles: Technological Changes and Ethical Challenges (USC).
Students evaluate responsibility, safety and go-to-market choices in a domain with very real consequences.
The cases are ordered so students move from prediction and data to organizational integration, then to deep learning, hardware and autonomy.
Why Tools Matter in a Strategy Course
The biggest shift from 2018 to 2024 is the incorporation of tools. Students do not write code from scratch, but they manipulate real models. For example:
They run a visualization notebook that reveals how a model’s predictions become unstable when variables drift out of distribution.
They use a hiring model to experiment with borderline candidate profiles.
They explore a neural network in Colab to see how changing a training parameter alters accuracy.
They load a small open-source LLM to see how responses change under different prompts (a sketch follows this list).
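As a rough illustration of that last exercise, here is a minimal sketch using the Hugging Face transformers library. The model choice (distilgpt2) is simply a small, freely downloadable example, not necessarily what the course uses.

```python
# Minimal sketch, not the actual course notebook: load a small open model
# and compare its completions across different prompts.
from transformers import pipeline, set_seed

set_seed(42)
generator = pipeline("text-generation", model="distilgpt2")

prompts = [
    "Our pricing strategy for next quarter should",
    "The three biggest risks of automating candidate screening are",
]

for prompt in prompts:
    result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
    print(f"PROMPT: {prompt}")
    print(result[0]["generated_text"], "\n")
```

Even a tiny model like this makes the point: the outputs read fluently but are unreliable, and the prompt wording visibly changes what comes back.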
This gives students an intuitive feel for what AI can and cannot do. It helps them ask better questions in real-world settings, such as:
Does this model have enough data? What happens when the inputs shift? How will this integrate with my operations team? What is the bottleneck: data, compute or incentives?
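The “what happens when the inputs shift?” question is the one the visualization notebook makes concrete. Here is a minimal sketch in the same spirit, using synthetic data and scikit-learn (illustrative only, not the course notebook): two models that look equally good on the training range give very different answers once an input drifts outside it.

```python
# Minimal sketch: two models agree in-distribution but diverge under drift.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Training data only covers x in [0, 10]; the true relationship is y ≈ 2x.
X = rng.uniform(0, 10, size=(300, 1))
y = 2.0 * X.ravel() + rng.normal(0, 1.0, 300)

linear = LinearRegression().fit(X, y)
tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X, y)

for x_new in [5.0, 20.0]:  # in-distribution, then drifted
    p_lin = linear.predict([[x_new]])[0]
    p_tree = tree.predict([[x_new]])[0]
    print(f"x={x_new:>4}:  linear={p_lin:5.1f}   tree={p_tree:5.1f}")

# At x=5 both land near 10; at x=20 the tree plateaus near the edge of its
# training data while the linear model keeps extrapolating.
```

On the training range the two models agree closely; on the drifted input they typically diverge sharply, which is exactly the kind of instability these questions are meant to surface.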
The Project: From Action Plans to Real AI Due Diligence
The final project shows the clearest evolution in the course.
2018 Project: AI Action Plan.
Students wrote a short report proposing an AI strategy in a domain of their choice. They described the market, listed competitors, summarized methods and created a plan for data collection and deployment.
2024 Project: Technical Due Diligence.
Students now choose one company from a curated list that includes AMD, Meta, Waymo, Mistral, Anthropic, Apple, HuggingFace, Anduril, Scale and Boston Dynamics. They study:
the founding story
the primary business model
experimental or emerging revenue streams
the data the company collects
the hardware stack
a technical deep dive into one AI product, including the model architecture
valuation and stock metrics if public
macro risks
technology-specific risks
They create a highly visual presentation with diagrams, product walkthroughs and Colab-based examples. Then they give the company an “investability rating” from the perspective of a VC, employee or strategic partner.
The shift reflects how the industry has changed. Managers today must evaluate AI vendors, assess model claims and understand the risks of integrating AI systems. The due diligence format trains them for exactly this.
Why This Approach Works
The combination of cases and tools gives students a realistic sense of how AI interacts with organizations. They learn:
why high-quality data is the true bottleneck
why partnership decisions shape the entire AI roadmap
why organizational incentives often determine whether an AI project succeeds
why predictive models fail in unexpected ways
why hardware matters
why research labs struggle to become product companies
why autonomy and robotics require different strategic thinking
These lessons do not change even when the underlying technology does.
A Final Reflection
Every year the examples and tools evolve. The goal is not to teach a static set of technologies. It is to teach a way of thinking that helps managers navigate uncertainty. The next wave of AI progress may involve agents, robotics or something entirely new. What matters is that students learn to reason through the value, the data, the incentives, the risks and the organizational design behind any AI system.
If they can do that, they will be prepared for whatever the next seven years look like.




