Bob Dylan famously asked, “How many roads must a man walk down, before you call him a man?” The power of the question is that there is no answer – and comparing the tribulations of one person’s journey through life to another’s defies any attempt at simple quantification.

In much the same way, the EU’s AI Act (Regulation 2024/1689) struggles with a similar philosophical dilemma: when is a system AI? Or as Dylan might more poetically frame it: how many neurons must a system compute before you can call it AI? 

The recently issued Commission Guidelines on the Definition of an AI System attempt to draw a line between traditional software and AI systems by “clarifying” the Act’s definition. This feels rather like answering Dylan’s metaphorical question with a specific number (“You’re a man if you walk down 67 roads or more, but not 66 or fewer”), and the distinctions the guidelines draw are far from convincing.

Where do we draw the line?

The guidelines focus heavily on distinguishing between different information processing techniques. The trouble is that there is no inherent distinction between the basic computing operations that underpin the techniques the guidelines designate as AI and those they describe as traditional software. All are built on the same core operations performed by computing hardware.

Most of us would expect a neural network to be ‘AI’. Consider, though, that a single neuron in a neural network performs nothing more than basic mathematical operations: multiplication, addition, and a simple non-linear activation. A simple linear regression model does much the same – applying weighted sums to input variables to produce an output. Yet the guidelines would classify the latter as traditional software, while a sufficiently large network of interconnected neurons suddenly becomes an AI system. Why?
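
To see how thin the line is, here is a minimal sketch in Python (all numbers invented for illustration) of the computation each performs. The only difference between the linear regression and the single neuron below is the activation function applied at the end of the same weighted sum.

    import numpy as np

    x = np.array([0.2, 1.5, 3.0])     # input features (illustrative values)
    w = np.array([0.4, -0.1, 0.7])    # learned weights
    b = 0.05                          # bias / intercept

    # Linear regression: a weighted sum of the inputs plus an intercept.
    linear_regression_output = w @ x + b

    # A single neuron: the same weighted sum, passed through an
    # activation function (here a sigmoid).
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    neuron_output = sigmoid(w @ x + b)

    print(linear_regression_output, neuron_output)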

The guidelines (inevitably) cannot sensibly specify when exactly a system crosses from basic computation to AI-driven inference. Is it when a model moves from a single-layer to a multi-layer neural network? Is it when it stops using predefined rules and begins optimising for performance? If a traditional optimisation algorithm is augmented with a machine-learning-based approximation, does it suddenly become AI? To any such question, the guidelines provide no clear answer.

The “inference” problem: AI vs. non-AI

The guidelines define inference as the key characteristic separating AI from traditional software. However, many non-AI systems also “infer” in meaningful ways:

  • rule-based expert systems derive conclusions from encoded knowledge;
  • Bayesian models update probabilities dynamically; and
  • regression models predict outcomes based on training data.

For reasons that are hard to justify by reference to how these models actually work, the guidelines exclude these systems from the AI definition, while including deep learning models that perform essentially the same function, using similar techniques on a larger scale. This creates an arbitrary classification problem: a system built on a simple statistical model does not count as AI, but one built on a neural network performing nearly identical computations does.
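
By way of illustration, here is a minimal sketch of the second kind of system listed above: a Bayesian update that plainly “infers” a conclusion from evidence, with no machine learning involved (the probabilities are invented for the example).

    # Bayes' theorem applied to a hypothetical fault alarm: no neural
    # network, no training, yet the system derives (infers) a conclusion
    # from the evidence it receives.
    prior_fault = 0.01            # prior probability that the machine is faulty
    p_alarm_given_fault = 0.95    # probability of an alarm if there is a fault
    p_alarm_given_ok = 0.05       # false-alarm rate

    p_alarm = (p_alarm_given_fault * prior_fault
               + p_alarm_given_ok * (1 - prior_fault))
    posterior_fault = p_alarm_given_fault * prior_fault / p_alarm

    print(f"P(fault | alarm) = {posterior_fault:.3f}")   # roughly 0.16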

This distinction is not based on any underlying technical reality. It suggests that AI is defined not by what it does, but by how complicated it appears – an approach that seems likely to lead to inconsistent regulatory enforcement.

Adaptiveness vs. pre-trained models

Another criterion in the AI Act definition is “adaptiveness” – the ability of a system to learn or change behaviour after deployment. Even here, there is no ‘bright line’ between the latest AI techniques and older methods of information processing. For example:

  • many modern ‘machine learning’ AI systems do not adapt after deployment (e.g., a frozen deep learning model in production); and
  • other more traditional systems do adapt dynamically to the data being processed (e.g., optimisation algorithms that refine parameters over time).

If a static neural network trained on past data is considered AI, but a dynamically updating non-ML system is not, then the guidance fails to capture what truly makes a system adaptable.
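
The contrast is easy to show in code. The sketch below (hypothetical classes and values) sets a “frozen” pre-trained model, whose behaviour never changes after deployment, alongside a classic exponential smoother that continually adjusts its internal estimate to the data it processes.

    import numpy as np

    class FrozenModel:
        """Weights fixed at training time; behaviour never changes in production."""
        def __init__(self, weights, bias):
            self.w, self.b = np.asarray(weights), bias

        def predict(self, x):
            return float(self.w @ np.asarray(x) + self.b)

    class AdaptiveSmoother:
        """A traditional exponential smoother: no machine learning, but its
        internal estimate is continually updated by the incoming data."""
        def __init__(self, alpha=0.3):
            self.alpha, self.estimate = alpha, 0.0

        def update(self, observation):
            self.estimate += self.alpha * (observation - self.estimate)
            return self.estimate

    model = FrozenModel([0.4, -0.1, 0.7], 0.05)
    smoother = AdaptiveSmoother()
    print(model.predict([1.0, 2.0, 3.0]))   # the same answer every time
    for reading in [10, 12, 11, 15]:
        print(smoother.update(reading))     # the estimate shifts with the data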

A focus on form over function

The guidelines also attempt to separate AI from traditional software based on techniques rather than functionality. They classify:

  • machine learning, deep learning, and logic-based AI as AI; and 
  • classical statistical methods, heuristics, and certain optimisation techniques as non-AI.

However, in real-world applications these techniques are often blended together. Why should an advanced decision tree classifier be AI while a complex Bayesian network is not? The distinction creates a regulatory ‘cliff edge’, imposing significant burdens on the developers and users of certain techniques while entirely excluding other, closely similar approaches that could easily have comparable real-world impacts and outcomes.
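
The point can be made concrete with two off-the-shelf classifiers. The sketch below (toy data, and using scikit-learn’s GaussianNB as a stand-in for a simple Bayesian model) trains a decision tree and a naive Bayes classifier on identical inputs; both produce the same kind of prediction, yet on the guidelines’ approach they may sit on opposite sides of the line.

    # Requires scikit-learn; the data is invented for illustration.
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.naive_bayes import GaussianNB

    # Toy data: [income, age] -> loan approved (1) or not (0)
    X = [[30, 22], [45, 35], [80, 41], [20, 19], [60, 52], [95, 48]]
    y = [0, 1, 1, 0, 1, 1]

    tree = DecisionTreeClassifier().fit(X, y)   # machine learning: "AI" on the guidelines' approach
    bayes = GaussianNB().fit(X, y)              # a simple Bayesian classifier: arguably not

    applicant = [[50, 30]]
    print(tree.predict(applicant), bayes.predict(applicant))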

A more practical approach: AI as a spectrum

Rather than attempting to define AI through specific computational techniques, a more effective regulatory approach might focus on functional characteristics – specifically, the level of adaptability and autonomy a system exhibits. This approach borrows much from the UK Government’s 2022 proposal for defining regulated AI, which suggested that AI systems should be assessed based on two key qualities:

  1. Adaptability – The extent to which a system can change its behaviour over time, particularly in ways that may be unpredictable or difficult to control.
  2. Autonomy – The degree to which a system can operate without direct human oversight, particularly where its decisions have real-world consequences.

Under this model, the greater a system’s adaptability and autonomy, the greater the regulatory concern. At the highest-risk end of the spectrum, a system that is highly adaptable (and therefore unpredictable) and highly autonomous (and therefore able to act without oversight) poses the greatest potential mischief – it could make decisions that evolve beyond human intention or control, with no immediate means of intervention. Conversely, a system with limited or no adaptability (meaning it behaves in a predictable, rules-based manner) and low autonomy (meaning it operates in a supervised or constrained setting) presents minimal risk, requiring little or no regulatory intervention.

This framework lends itself to a four-quadrant model, with regulation calibrated according to risk:

  • Low-adaptability, low-autonomy systems (e.g., simple automation, rules-based expert systems) could remain unregulated or subject to light-touch requirements.
  • High-autonomy but low-adaptability systems (e.g., pre-trained models running automated functions in critical settings) might require oversight measures but not extensive restrictions.
  • Highly adaptable but low-autonomy systems (e.g., machine learning models that improve over time but operate in a controlled setting) could require transparency obligations to ensure they remain safe.
  • Highly adaptable, highly autonomous systems (e.g., self-improving AI deployed in high-stakes environments) would warrant the highest levels of scrutiny and regulatory intervention.
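
To illustrate how mechanically such a calibration could work, here is a purely hypothetical sketch: the scores, threshold, and tier labels are all invented, but they map a system’s adaptability and autonomy onto the four quadrants above.

    from dataclasses import dataclass

    @dataclass
    class SystemProfile:
        adaptability: float   # 0.0 (fixed behaviour) to 1.0 (self-modifying)
        autonomy: float       # 0.0 (fully supervised) to 1.0 (unsupervised)

    def regulatory_tier(profile: SystemProfile, threshold: float = 0.5) -> str:
        high_adapt = profile.adaptability >= threshold
        high_auto = profile.autonomy >= threshold
        if high_adapt and high_auto:
            return "highest scrutiny"            # self-improving, high-stakes
        if high_auto:
            return "oversight measures"          # frozen model, critical setting
        if high_adapt:
            return "transparency obligations"    # learns over time, controlled setting
        return "light-touch or unregulated"      # simple, rules-based automation

    print(regulatory_tier(SystemProfile(adaptability=0.9, autonomy=0.8)))
    print(regulatory_tier(SystemProfile(adaptability=0.1, autonomy=0.2)))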

Such an approach would align regulation with actual risk, rather than imposing arbitrary thresholds based on the number of neurons in a neural network or the use of specific techniques. It would also avoid the all-or-nothing ‘regulatory cliff’ created by the AI Act’s current classification, where systems that operate in vastly different ways are either lumped together as “AI” or excluded without clear justification.

Conclusion: the answer is still blowin’ in the wind

Before the release of the Commission’s guidance, one could arguably have summarised the AI Act’s definition of an AI System as probably capturing any reasonably complex large-scale computing system. While broad, this at least provided a straightforward interpretation: if a system processes data in a way that generates outputs beyond simple rule-following, it likely falls under the Act.

After the guidance, however, the situation is arguably less clear, not more. The definition still seems to encompass any reasonably complex computing system, but now with a series of apparently arbitrary exceptions based on specific techniques rather than fundamental capabilities. Linear regression is out, but deep learning is in. Bayesian inference is ignored, but logic-based AI is included. Traditional optimisation methods are exempt – unless they look a little too much like machine learning.

Instead of providing the necessary clarity, the guidance raises more questions than it answers. What truly distinguishes AI from non-AI? Where is the line between “basic data processing” and “inference”? At what point does an optimisation algorithm become AI? Rather than resolving these ambiguities, the Commission’s attempt to define AI feels like an exercise in drawing boundaries where none naturally exist.

Or, as Bob Dylan might put it: What’s an AI System? … the answer is still blowin’ in the wind.