3 Common mistakes using algo wheels and how traders can drive performance
Bloomberg Professional Services
- While wheels began as a way to remove broker bias, they have since evolved to create a fairer distribution of flow.
- Clean, comparable data is one of the biggest benefits, helping firms make better decisions when executing a trade.
- Human judgment still matters, especially for large or complex trades, with AI expected to support decision-making rather than replace traders.
Algo wheels are increasingly embedded across electronically active buy-side equity desks. But as adoption has matured, the conversation has shifted. The question is no longer whether to use wheels, but how to design, govern and evolve them without breaking the very data they are meant to improve.
At a recent Finance Hive discussion, members compared notes on what is genuinely working, where expectations still exceed reality and how data, intent and human judgement continue to shape outcomes.
“Wheels are no longer just governance tools. They are key to the full execution lifecycle, from pre-trade intent, through execution, to post-trade evaluation. It has shifted from distribution to decision-making intelligence,” according to Bloomberg’s Christopher Clodius, Global Head of Trade Automation.
“Modern wheels don’t just allocate flow. They standardize how comparable orders are executed, improve data consistency and enable more robust transaction cost analysis. This creates a feedback loop where execution outcomes directly inform future routing decisions, aligning execution more closely with investment intent and continuously improving performance over time.”
In summary
- Algo wheels are widely used, but performance gains depend on design and iteration, not automation alone.
- While removing trader bias was the original driver, clean, comparable data is now the main value.
- Narrow, objective-led wheels outperform broad, static setups.
- High ADV orders remain the hardest challenge, where data quality matters most.
- Fully dynamic routing is still aspirational but constrained by data sufficiency and explainability.
- AI is expected to augment analysis and review rather than replace decision-makers.
How wheels are being used
Most desks in the room currently route the majority of their flow through automated or semi-automated processes, with wheels handling most of that order flow and notional coverage varying by strategy and order difficulty.
For many, the original motivation was behavioural, as wheels removed the tendency to default to legacy dealer relationships and created a fairer distribution of flow. That behavioural fix unlocked something more valuable: usable data. Once desks could compare brokers on a like-for-like basis, wheels stopped being just simple distribution tools and became execution frameworks.
Performance, however, is not driven by automation alone. Participants stressed that outcomes do not improve simply by ‘putting trades through a wheel’. The turning point comes when wheels are built around a clearly defined objective function. Liquidity seeking, implementation shortfall and passive execution behave differently, but grouping similar strategies together delivers cleaner signals and more consistent outcomes than broad, catch-all structures.
Common mistakes and the optimal approach
A common mistake is treating the wheel as a single-purpose workflow within the execution process. Firms typically fall short on wheel design rather than on automation itself: too many providers for the available sample, too many execution objectives mixed in a single wheel, or harder flow allowed to bias outcomes and distort conclusions.
“Some go too narrow, using it only for small, low-touch flow. Others go too broad, overloading it with too many brokers, mixing different execution objectives and comparing unlike orders,” said Clodius.
The optimal approach is to step back and look across the full execution process.
“Wheels are designed to be flexible and applied across multiple workflows, from slicing large orders to managing commissions to routing different types of flow. Their edge comes from using them dynamically, rather than confining them to one part of the process,” said Clodius.
Ongoing iteration is also central to this approach. Most desks revisit wheels quarterly, using post-trade TCA to make incremental adjustments rather than reacting to short-term noise.
Minimum data thresholds per broker are increasingly common, ensuring comparisons are statistically meaningful before conclusions are drawn. Maintaining a small allocation for testing new providers or strategies is also seen as important to avoid overfitting to incumbent performance.
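The threshold-plus-testing approach described above can be sketched in code. This is a hypothetical illustration: the function names, the 30-fill floor and the 5% test allocation are assumptions for the example, not figures from the discussion.

```python
import random

# Illustrative values, not prescriptions from the article.
MIN_SAMPLES = 30      # minimum fills per broker before comparisons count
EXPLORE_PCT = 0.05    # small slice reserved for testing new providers

def eligible_brokers(fill_counts, min_samples=MIN_SAMPLES):
    """Brokers with enough comparable fills to rank meaningfully."""
    return [b for b, n in fill_counts.items() if n >= min_samples]

def route_order(fill_counts, weights, new_brokers):
    """Route most flow to the best-ranked incumbent that cleared the
    sample threshold; occasionally send flow to an under-sampled
    broker to avoid overfitting to incumbent performance."""
    if new_brokers and random.random() < EXPLORE_PCT:
        return random.choice(new_brokers)
    ranked = eligible_brokers(fill_counts)
    return max(ranked, key=lambda b: weights[b])
```

The key design choice is that rankings only draw on brokers past the sample floor, while the reserved test slice keeps the comparison set from ossifying around incumbents.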
Why high ADV remains a key execution challenge
There was broad agreement that low ADV trades are rarely the issue. It is high ADV orders that expose weaknesses in both wheel design and data quality far more quickly.
Large trades often require trader discretion to manage risk, timing and market impact. To address this, desks are moving beyond ADV-only thresholds and layering in additional dimensions such as liquidity, spread, volatility, urgency and time of day.
The most effective setups however preserve discretion while still using the wheel to allocate flow, benchmark performance and remove bias. In practice, traders may slice a large order manually, then use the wheel to distribute portions of that flow across brokers and strategies.
Increasingly, pre-trade TCA is being integrated directly into the wheel. Rather than relying on static rules, the wheel can reference TCA to determine expected cost and participation rates, and map those metrics to the relevant broker algos.
“This enables a more flexible model: low-impact flow can be automated directly, while higher ADV or complex flow can be sliced, optimised and selectively routed,” explains Clodius. “The model is no longer based on static buckets, rather it is based on dynamic classification, with the wheel acting as the control layer.”
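The dynamic-classification idea can be sketched as a simple routing function. The field names, thresholds and route labels below are illustrative assumptions, not Bloomberg's actual model; the point is that several dimensions, including a pre-trade cost estimate, drive the route rather than an ADV-only bucket.

```python
from dataclasses import dataclass

@dataclass
class Order:
    pct_adv: float            # order size as a fraction of average daily volume
    spread_bps: float         # quoted spread in basis points
    expected_cost_bps: float  # pre-trade TCA cost estimate

def classify(order: Order) -> str:
    """Map an order to a handling route using multiple dimensions.
    Thresholds are hypothetical, for illustration only."""
    if order.pct_adv < 0.02 and order.expected_cost_bps < 5:
        return "auto"          # low-impact flow: automate directly
    if order.pct_adv < 0.10 and order.spread_bps < 20:
        return "wheel_sliced"  # slice, then distribute via the wheel
    return "high_touch"        # trader discretion; wheel used for benchmarking
```

In this sketch the wheel remains the control layer: even high-touch orders can feed portions of their flow back through it for allocation and benchmarking.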
Data, TCA and the limits of prediction
Post-trade TCA is now widely viewed as a decision tool rather than a scorecard. Normalizing outcomes for order difficulty and peer context allows desks to refine wheels over time without overreacting to outliers.
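One common way to normalize outcomes for order difficulty, sketched here as an assumption rather than the participants' stated method, is to score each broker's slippage against peers trading orders in the same difficulty bucket:

```python
import statistics

def difficulty_adjusted(slippage_bps, peer_slippage_bps):
    """Z-score of a broker's slippage against a peer group of orders
    with similar difficulty (e.g. same bucket of %ADV, spread and
    volatility), so hard flow does not unfairly penalise the broker
    that happened to receive it. Numbers are hypothetical."""
    mu = statistics.mean(peer_slippage_bps)
    sigma = statistics.stdev(peer_slippage_bps)
    return (slippage_bps - mu) / sigma
```

A score near zero means in-line performance for that difficulty bucket; persistent positive scores, rather than single outliers, are what should drive wheel adjustments.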
Pre-trade models, by contrast, are treated with caution. Few participants see them as reliable predictors, but many use them as safety nets, flagging conditions where automation should slow down or escalate. Combined with real-time liquidity signals, pre-trade inputs can help protect against extreme outcomes without dictating execution.
Accessing multiple liquidity pools also adds another layer of complexity. Some desks use wheels to prioritise ELPs or midpoint venues before cascading orders into more passive strategies. While this can expand liquidity access, it requires careful governance to avoid adverse selection further down the chain. IOIs remain uneven in quality and classification, making blind trust risky.
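The venue-cascade pattern described above can be sketched as follows. The venue names, the fill model and the fallback label are assumptions for illustration; a real cascade would also carry the adverse-selection controls the paragraph mentions.

```python
def cascade(order_qty, venues, fallback="passive_algo"):
    """Work an order through priority venues (e.g. ELPs, midpoint
    pools) in order, then route any residual to a passive strategy.
    `venues` is a list of (name, available_qty) pairs; the simple
    fill model here is purely illustrative."""
    routed, remaining = [], order_qty
    for name, available in venues:
        take = min(remaining, available)
        if take > 0:
            routed.append((name, take))
            remaining -= take
        if remaining == 0:
            break
    if remaining > 0:
        routed.append((fallback, remaining))
    return routed
```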
That governance extends beyond performance. Clear objective functions, minimum data thresholds, exception handling and structured review processes are increasingly required to evidence best execution to internal committees and regulators.
Dynamic wheels, AI and what comes next
Fully dynamic wheels that adapt intra-trade remain more aspiration than reality. The constraint is not technology alone. Meaningful models require deep, clean data sets and decisions that can be explained to regulators and investment committees alike.
“Machine learning is already improving the wheel itself, selecting brokers, algos and parameters dynamically based on order characteristics and market conditions, within defined guardrails. This represents a controlled and explainable evolution,” said Clodius.
“Generative AI represents the next layer. Its role is as a workflow interface, spanning pre-trade, execution, liquidity discovery, analytics and post-trade review, surfacing insights and explaining outcomes across systems. It is expected to enhance trader decision-making rather than replace it,” said Clodius.
Several participants also noted that being able to interrogate historical trade data alongside market conditions would materially improve learning and accountability. This near-term impact is expected to create better segmentation of flow and faster, more explainable decision support, rather than fully autonomous execution.
Conclusion
At Bloomberg, we don’t see a future where AI replaces the trader. Rather, we believe GenAI will continue to orchestrate workflows and ML will optimise the decision engine, while the trader still owns the exceptions that matter.
This article is based on discussions with Finance Hive buy-side members across European equities. Insights have been anonymised and reflect peer-led debate rather than vendor perspectives.