McKinsey’s new "AI Interview" requires candidates to collaborate with AI assistants.
The new assessment for consultants prioritizes prompt crafting and ethical human-AI collaboration over raw analytical skills.
January 15, 2026

The introduction of an AI chatbot into the early stages of graduate recruitment at McKinsey & Company marks a watershed moment, signaling a fundamental shift in how elite professional services firms evaluate future talent. The pilot program, built around the firm’s proprietary AI assistant, Lilli, moves beyond the traditional reliance on interviews, written tests, and human judgment alone to explicitly assess a candidate’s capacity to collaborate with artificial intelligence. The program is not merely a tool for initial high-volume screening, but a strategic measure to future-proof the firm’s workforce by making AI fluency a baseline competency for entry-level roles.[1][2][3][4]
The new assessment, often referred to as an "AI interview," is being piloted in select final rounds for business school and graduate applicants in North America, and is designed to mirror the reality of modern consulting work.[5][6][7][8] In this component, candidates are presented with a practical consulting scenario and are required to use the Lilli chatbot as a support tool to analyze the case study, surface relevant information, and refine their conclusions.[9][2][10] The evaluation focuses not on technical AI expertise, but on a set of higher-order cognitive skills: the ability to craft effective prompts, critically review the AI's output, and apply human judgment to synthesize the information into a clear, structured response.[2][11][3][10] This emphasis on "collaboration and reasoning" over raw data analysis positions the human candidate as the crucial "pilot," responsible for challenging the algorithm and placing its output in the specific context of a client’s requirements.[5][11][3] The underlying goal is to filter for candidates who can achieve what the firm calls "Superagency," where human talent is amplified by, rather than overshadowed by, machine capabilities.[11][12]
This recalibration of the recruitment funnel directly reflects the deep integration of AI into the firm’s operating model. McKinsey’s CEO has publicly stated that the firm already operates a substantial "workforce" of approximately 20,000 AI agents alongside its 40,000 human staff, with ambitious plans to reach nearly one AI agent per human employee in the near future.[5][6][3][13] As AI takes over routine tasks like research, benchmarking, and early-stage analysis, the value of the consultant shifts from pure analytical prowess to skills that are inherently AI-proof: judgment in complex scenarios, leadership, and creative problem-solving that transcends predictable data patterns.[12][4] Consequently, the firm is also reportedly re-evaluating its traditional hiring preferences, expressing renewed interest in candidates with liberal arts degrees. These candidates are valued for their potential to bring "truly novel" ways of thinking and make the "discontinuous leaps" in logic that generative AI models currently struggle with, compensating for the algorithms’ limitations.[9][14][13]
McKinsey’s move sets a powerful precedent for the entire professional services sector. Competitors are expected to follow suit: Boston Consulting Group, which uses a tool called Deckster, and Bain & Company, with its Sage tool, are likely to incorporate similar AI collaboration assessments into their own hiring processes.[9][8] The shift solidifies AI fluency as a competitive necessity in top-tier employment. For the AI industry itself, this represents a significant validation and a new commercial frontier. Demand will surge for sophisticated HR technology that can move beyond simple keyword matching and personality testing to reliably and scalably assess human-AI interaction.[15][16][17] Companies that specialize in skills assessments and simulated work environments, such as those that already offer technical or game-based evaluations, are now positioned to develop a new generation of products focused on evaluating prompt quality, output verification, and collaborative reasoning.[6][18]
However, the rapid adoption of AI in high-stakes recruitment is inseparable from its ethical challenges. The introduction of AI assessment tools, regardless of their intended function, immediately raises concerns about algorithmic bias, data privacy, and transparency.[19][20][21] AI systems trained on historical data, which may reflect past human biases regarding a candidate’s background or school, risk inheriting and amplifying those same inequalities, even when the goal is to assess a new skill.[20][22] Moreover, the "black box" nature of many complex AI models can make it difficult to explain to a candidate *why* an assessment resulted in a particular score, which complicates the process of challenging or appealing an outcome.[19][21] For companies utilizing these tools, ensuring fairness and ethical compliance requires a commitment to transparency—clearly communicating to candidates that AI is part of the process—and maintaining robust human oversight over the final decision-making.[19][23] Regular, independent audits of the AI model's performance and its impact on diverse candidate populations will be critical to mitigate unintended discriminatory outcomes and ensure the technology reflects a commitment to fairness and inclusion.[20][21] McKinsey’s pilot, while a technological advancement, underscores that the ultimate success of AI in talent acquisition will be measured not just by efficiency gains, but by its capacity to uphold ethical standards as it reshapes the path to professional employment.[19][23]
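One concrete form such an audit can take is an adverse-impact check of the kind long used in US employment-selection review (the "four-fifths rule"): compare each candidate group's pass rate on the AI assessment against the best-performing group's rate and flag large gaps for human review. The sketch below is purely illustrative; the group names, numbers, and 0.8 threshold are assumptions, not McKinsey's actual audit methodology.

```python
# Illustrative adverse-impact audit for an AI hiring assessment,
# based on the four-fifths rule. All data here is hypothetical.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> (passed, total assessed)."""
    return {g: passed / total for g, (passed, total) in outcomes.items()}

def adverse_impact_ratios(outcomes, threshold=0.8):
    """Return, per group, its selection rate divided by the highest
    group's rate, and a flag when that ratio falls below `threshold`
    (the conventional four-fifths cutoff)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (rate / best, rate / best < threshold)
            for g, rate in rates.items()}

# Hypothetical audit data: (candidates passed, candidates assessed)
audit = {
    "group_a": (120, 300),  # 40% pass rate
    "group_b": (60, 200),   # 30% pass rate
    "group_c": (45, 180),   # 25% pass rate
}

for group, (ratio, flagged) in adverse_impact_ratios(audit).items():
    status = "REVIEW" if flagged else "ok"
    print(f"{group}: impact ratio {ratio:.2f} [{status}]")
```

A ratio near 1.0 means parity with the best-performing group; anything below the threshold is a signal for deeper investigation, not proof of bias, since sample sizes and confounds matter in practice.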