Until Manchester City's loss last weekend, a tacit acceptance seemed to pervade the discourse that the Premier League title was theirs to lose. The slender two-point margin they enjoyed over Liverpool was superciliously dismissed as something of an illusory mathematical state that didn't really add up to anything concrete; least of all competition.

Yet, as soon as the final whistle sounded in the Chelsea game, something changed, and what had until then been dismissed as illusory was, all of a sudden, accepted as irrefutable evidence of whatever we wanted it to prove. It was as if popular opinion had neatly pirouetted 180° to come face to face with everything it had previously turned its back on, to hide in a ghetto of repressed thoughts and dispossessed ideas, and to count itself among their number.

The backing track to such attitudinal acrobatics was the churning drone of hype machines being cranked into top gear, the oily drip of cogs being freshly greased with sensationalism, and the thought-crushing stamp of printing presses inking identikit news. As so often, the media orchestra swelled to a crescendoing earworm, burrowing into the collective subconscious to be helplessly hummed like a half-forgotten thought.

Hoping to plug my ears against the cacophonous consensus, I've looked at two separate statistical models in an effort to drill deeper into the underlying performance levels of the league's two main protagonists: Liverpool and Man City. Both models are reflexive, responding to events to a degree predetermined by the mathematical assumptions which serve as their theoretical underpinnings. Consequently, both paint an evolving picture, which seeks to depict the sometimes opaque composition of performance levels in the form of something other than a traditional league table. One of the models additionally includes a long-range forecasting tool, and it is there that we shall start.

FiveThirtyEight's forecasts are based on an SPI rating, which reflects the percentage of available points a certain team would be expected to win if they were to play repeatedly against a statistically average side. Furthermore, the model provides offensive and defensive ratings, which are again calculated with respect to a hypothetical side composed of statistically average attributes. More explicitly, these measures estimate how many goals a certain team could expect to score and concede, on average, against the control side.
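To get a rough feel for how a rating of that shape can be constructed, here is a minimal sketch assuming the two sides' goal counts are independent Poisson variables; the real SPI methodology is considerably more involved, so treat this as an intuition pump rather than a reconstruction.

```python
import math

def poisson_pmf(lam, k):
    """Probability of exactly k goals when goals ~ Poisson(lam)."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def spi_style_rating(off_rating, def_rating, max_goals=10):
    """Share of available points (x100) a team would expect to take against
    a statistically average side, given its offensive and defensive ratings.
    Assumes the two sides' goal counts are independent Poisson variables."""
    p_win = p_draw = 0.0
    for ours in range(max_goals + 1):
        for theirs in range(max_goals + 1):
            p = poisson_pmf(off_rating, ours) * poisson_pmf(def_rating, theirs)
            if ours > theirs:
                p_win += p
            elif ours == theirs:
                p_draw += p
    expected_points = 3 * p_win + 1 * p_draw
    return 100 * expected_points / 3

# Man City's published ratings: ~3.1 scored, ~0.3 conceded vs the average side
print(round(spi_style_rating(3.1, 0.3), 1))  # ~93.4
```

Fed City's published offensive and defensive ratings, the toy calculation lands within touching distance of their actual figure of 93.9, which suggests the intuition, if not the detail, is about right.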

SPI is a fluid rating which responds according to how a team performs against expectation. It does not necessarily follow, therefore, that a win will improve a team's rating; in fact, it could do the opposite if the margin of victory falls short of what the underlying model predicted beforehand. Since SPI forms the basis of FiveThirtyEight's predictive model, the fact that SPI itself is not a fixed value but rather responsive to form necessitates that season-long simulations are repeated regularly. The frequent updating of long-term projections ensures that the model can adapt to changes as and when they occur and alter its predictions accordingly.
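The mechanism can be caricatured with a simple error-driven adjustment, in which the rating moves in proportion to the gap between the result and what the model expected. To be clear, the update rule and the constant below are my own illustration, not FiveThirtyEight's formula.

```python
def update_rating(rating, actual_margin, expected_margin, k=2.0):
    """Illustrative error-driven rating update (not FiveThirtyEight's own
    formula): the rating moves in proportion to how far the result deviated
    from the margin the model expected beforehand.

    actual_margin   -- goal margin achieved (positive = victory)
    expected_margin -- goal margin the model predicted
    k               -- how sharply the rating reacts to surprises
    """
    return rating + k * (actual_margin - expected_margin)

# A 1-0 win when a 3-goal margin was expected still *lowers* the rating
print(update_rating(93.9, actual_margin=1, expected_margin=3))  # 89.9
```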

Man City's current SPI rating of 93.9 is the highest in the league, with Liverpool's rating of 91.0, somewhat predictably, the second best. Tellingly, ratings for both teams have improved since the start of the season, rising from original ratings of 92.1 in Man City's case and 88.9 in Liverpool's. However, whereas Liverpool's rating currently sits at a season high, Man City's has gradually fallen away from a peak of 95.0 recorded in early November, signalling a slight downturn in form.

Obviously, given that the SPI ratings are in essence a derivative of how many goals a given team could expect to both score and concede against the statistically average control side (that is, after all, how you win a match), it follows that City again hold a slight overall advantage in terms of the interaction between offensive and defensive capacities.

City's offensive rating of 3.1 is again the best in the league and comfortably outmatches Liverpool's rating of 2.7. However, defensively, both teams record a league-best rating of 0.3. If these figures are stretched out across an entire Premier League season, the model projects that Liverpool will amass 90 points and fall agonisingly short of City's estimated total of 93.
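How those offensive and defensive figures interact when the two sides meet each other, rather than the average side, can be sketched by scaling each attack by the opposing defence. The league-average goal expectancy of 1.35 below is my assumption, not a published figure, and the pairing rule is a textbook simplification rather than FiveThirtyEight's actual machinery; season-long projections like the 93 and 90 above then amount to playing out probabilities of this kind across every remaining fixture.

```python
from math import exp, factorial

def pois(lam, k):
    """P(X = k) for X ~ Poisson(lam)."""
    return exp(-lam) * lam ** k / factorial(k)

def match_probs(off_a, def_b, off_b, def_a, league_avg=1.35, max_goals=10):
    """Win/draw/loss probabilities for team A against team B. Each attack is
    scaled by the opposing defence relative to an assumed league-average goal
    expectancy -- a simplified pairing, not FiveThirtyEight's model."""
    lam_a = off_a * def_b / league_avg  # A's goal expectancy vs B's defence
    lam_b = off_b * def_a / league_avg  # B's goal expectancy vs A's defence
    win = draw = loss = 0.0
    for a in range(max_goals + 1):
        for b in range(max_goals + 1):
            p = pois(lam_a, a) * pois(lam_b, b)
            if a > b:
                win += p
            elif a == b:
                draw += p
            else:
                loss += p
    return win, draw, loss

# Man City (off 3.1, def 0.3) against Liverpool (off 2.7, def 0.3)
w, d, l = match_probs(3.1, 0.3, 2.7, 0.3)
print(f"City {w:.0%}, draw {d:.0%}, Liverpool {l:.0%}")
```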

Aware that such projections are exactly that, projections, the model includes a margin of error, expressed as a probability, which seeks to account for unpredictable variability, and moreover updates this calculation after every gameweek. So, as far as FiveThirtyEight see things at the moment, Man City have a 62% chance of winning the league, making them, in their eyes at least, nearly twice as likely to be crowned champions as Liverpool, whose chance they rate at 32%.
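Where does a figure like 62% come from? In broad strokes, from simulation: play out the remaining fixtures thousands of times according to the model's per-match probabilities and count how often each side finishes top. The sketch below is a bare-bones version of that idea; the points totals and per-match probabilities are toy inputs of my own, not FiveThirtyEight's.

```python
import random

def simulate_title_race(current_points, fixtures, n_sims=10_000):
    """Monte Carlo sketch of a title race: every remaining fixture is sampled
    from assumed win/draw probabilities and we count how often each team ends
    up top. Illustrative only -- not FiveThirtyEight's simulation engine.

    current_points -- {team: points accrued so far}
    fixtures       -- (team, p_win, p_draw) for each remaining match
    """
    titles = dict.fromkeys(current_points, 0)
    for _ in range(n_sims):
        points = dict(current_points)
        for team, p_win, p_draw in fixtures:
            r = random.random()
            points[team] += 3 if r < p_win else (1 if r < p_win + p_draw else 0)
        # Ties broken arbitrarily here; a fuller model would use goal difference
        titles[max(points, key=points.get)] += 1
    return {team: n / n_sims for team, n in titles.items()}

# Toy inputs: a one-point gap, 22 rounds left, assumed per-match probabilities
fixtures = [("Man City", 0.75, 0.15)] * 22 + [("Liverpool", 0.72, 0.18)] * 22
print(simulate_title_race({"Liverpool": 42, "Man City": 41}, fixtures))
```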

As previously stated, the model is ever-evolving, so those are the probabilities as calculated today. By way of comparison, at the start of the season the probability of Man City retaining their crown was rated at 45%, and in the interim that figure rose as high as 74% in mid-November. For their part, Liverpool's chances were initially rated at 24% and slipped as low as 19% in mid-November, before rebounding strongly to today's figure. Extrapolating from this data set, it would appear that, to date, both teams have succeeded in separating themselves from the rest of their competitors to emerge as the only viable contenders for the title. Man City, however, remain clear favourites, despite the fact that over recent weeks their form has regressed from its peak.

Consistent with the SPI paradigm, the expected goals model corroborates the statistical argument promoting City as likely champions. While in reality, which, as we shouldn't forget, is ultimately all that really matters, Man City trail Liverpool by a single point, the expected goals algorithm reverses their respective standings and elevates the Cityzens to the top of the table. Seen through this interpretive lens, Man City should have accumulated 39.74 points in comparison to Liverpool's 34.47, and as such should enjoy a relatively comfortable cushion at the head of affairs.
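For the unfamiliar, expected points tables of this kind are generally built match by match: every shot is treated as a chance that scores with probability equal to its xG value, the resulting scorelines are simulated thousands of times, and the average points haul is recorded. A minimal sketch of the idea, with an invented shot list rather than real data:

```python
import random

def expected_points(xg_for, xg_against, n_sims=10_000):
    """Expected points from one match, treating every shot as an independent
    Bernoulli trial whose scoring probability is its xG value. A common way
    of building expected points tables, under simplifying assumptions.

    xg_for / xg_against -- per-shot xG values for and against
    """
    total = 0
    for _ in range(n_sims):
        goals_for = sum(random.random() < xg for xg in xg_for)
        goals_against = sum(random.random() < xg for xg in xg_against)
        if goals_for > goals_against:
            total += 3
        elif goals_for == goals_against:
            total += 1
    return total / n_sims

# Hypothetical match: plenty of shots but no single clear-cut chance
print(expected_points(
    xg_for=[0.35, 0.20, 0.15, 0.10, 0.10, 0.08, 0.05],
    xg_against=[0.40, 0.12],
))
```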

With both teams' attacks exceeding their expected goals totals by broadly equivalent amounts, the most significant single factor in explaining why Liverpool are performing ahead of the statistical curve would seem to originate in their defence. Intriguingly, it's not, according to the model, that their defence is outperforming City's; on the contrary, with respect to expected goals conceded, it's the Manchester side's rearguard who emerge as the more parsimonious of the two, with 11.17 expected goals conceded in comparison to Liverpool's 12.92. No, it's the degree to which those respective figures diverge from, and in both instances exceed, the actual number of goals conceded, +6.92 for Liverpool and +2.17 for Man City, that serves as the basis for something approaching a cogent explanation of a real-world scenario at odds with statistical modelling.

Generally, if the number of actual goals conceded falls short of the number of expected goals conceded, it can be accounted for by one of two factors: either lax finishing by the opposition or excellent goalkeeping. Delving into the second factor, it does appear that Alisson is marginally outperforming Ederson in the defensive aspect of the goalkeeping art, even when you allow for the fact that he has made one major error leading to a goal, whereas the Man City man has yet to make any. Across other key performance indicators, Alisson comes out ahead, with the Liverpool netminder making more saves per game, 2.25 to 1.75, at a higher save rate, 85.71% to 75.68%, than Ederson.

Again, it is worth acknowledging that the expected goals metric does come with something of a health warning. Across the four complete seasons for which this particular tool has been in operation, it has successfully mirrored real-world outcomes, identifying the eventual champions, on three occasions. The exception came in Leicester's title-winning season of 2015/16, when the model identified Arsenal as the expected points champions.

Additionally, it must also be accepted that there is typically a quite marked divergence between the expected points total of each year's eventual champions and the actual number of points they accrue. Interestingly, the nature of the divergence is in itself seemingly predictable, as to date each recorded year's actual points total has comfortably exceeded its shadow version in expected points. Expressly, Chelsea's 2014/15 and 2016/17 points totals were underestimated by 11.68 and 17.26 points respectively, Leicester's by 12.06 points, while for last year's Man City side the deficit was 8.91 points. Of course, given that the model's underlying mathematical principles are derived from average performance levels, it would be anticipated that any side exhibiting close to title-winning form should overshoot the model's projections.

I don't think it would really surprise anyone that Man City, according to both models, remain the best team in the league. Put simply, their SPI rating is higher, their probability of winning the league is significantly higher, and their expected points total is higher. Moreover, Man City's form against the top 6, irrespective of what happens at Anfield this weekend, is also better than Liverpool's. Pep's side have, courtesy of victories over Arsenal, Tottenham, and Manchester Utd, an away draw with Liverpool, and last weekend's loss to Chelsea, secured 10 points from a possible 15. Liverpool, meanwhile, have collected 6 points from a possible 12, having drawn against Chelsea, Man City, and Arsenal, and defeated Tottenham. Should Liverpool defeat Manchester Utd, that would bring their top 6 total to 9 points from a possible 15.

If, as expected, Liverpool do handily dispatch Utd, there would be a certain symbolism in the top two's respective records against the other elite clubs: almost evenly matched, but with City just that fraction ahead. Of course, it shouldn't be ignored that the league table suggests otherwise, and moreover that statistical modelling isn't a crystal ball; it's more Nostradamus with a calculator.