Sunday, March 28, 2010

MAFL 2010 : Round 1 Results

It's hard to make money when you're betting solely on favourites.

The Heuristic-Based Fund landed 3 of its 4 bets this weekend, but the bet it lost was on the longest-priced of the four and therefore the most important from a profitability viewpoint, so the Fund finished the weekend down 0.6%. This leaves those with the Recommended Portfolio down 0.06% for the season - a mere flesh wound at most.

That means strike one for the Shadow Fund. Two more consecutive losses will see it hand over control of the Heuristic-Based Fund to whichever heuristic has performed best at the end of Round 3.

The fate of the Heuristic-Based Fund notwithstanding, favourites generally fared well this weekend, winning six of the eight games though covering the spread in only four.

Here are the details of the round:

(Three of this weekend's games were won by 56 points, which surely must be some sort of record, albeit a boring one.)

On the tipping front there's not much to report. The surfeit of victories by favourites resulted in most tipsters bagging 6 from 8 for the round, the only exceptions being Chi, who managed 5, and Home Sweet Home, who managed just 4.

Next we turn our attention to the performance of our margin-tipping heuristics where we find that BKB has started the season exceptionally well, producing the round's lowest mean and median absolute prediction errors.

Chi fared next best on mean APE, registering 27.63, while LAMP finished 2nd on median APE with a very respectable 19.5, its mean APE suffering significantly at the hands of the Dogs' and the Crows' losses.

Finally, let's take a look at the performance of the HELP algorithm this weekend.

It recorded 5 wins from 8 on line betting, which is barely better than chance.

As well as measuring HELP's win-loss record this season, I'm also going to assess how well it assigns probabilities to its predictions using what are called probability scoring metrics. Three such metrics are commonly used to measure the accuracy of a forecaster's probability assessments in light of actual results:
  • The logarithmic score, which assigns a forecaster a score of 2+log(p), with the log taken to base 2, where p is the probability that he or she assigned to the winning team.
  • The quadratic score, which assigns a forecaster a score of 1-(1-p)^2 where p is again the probability that he or she assigned to the winning team.
  • The spherical score, which assigns a forecaster a score of p/sqrt(p^2 + (1-p)^2), where p is yet again the probability that he or she assigned to the winning team.

In a future blog I'll talk a bit more about the relative merits of each of these scoring approaches, but for now I'll just note that a naive forecaster, assigning a probability of 50% to every forecast, will return a score of 1 if the logarithmic measure is used, a score of 0.75 if the quadratic measure is used, and a score of about 0.71 if the spherical measure is used. These scores can be considered the minimum acceptable if we're to say that the HELP model is providing any guidance superior to coin-tossing.
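For anyone who'd like to check the arithmetic at home, here's a minimal Python sketch of the three scoring rules. The function names are my own, purely for illustration, and p is taken to be the probability the forecaster assigned to the winning team; running it reproduces the naive-forecaster benchmarks just quoted.

    from math import log2, sqrt

    def logarithmic_score(p):
        # 2 + log2(p), where p is the probability assigned to the winning team
        return 2 + log2(p)

    def quadratic_score(p):
        # 1 - (1 - p)^2
        return 1 - (1 - p) ** 2

    def spherical_score(p):
        # p / sqrt(p^2 + (1 - p)^2)
        return p / sqrt(p ** 2 + (1 - p) ** 2)

    # A naive forecaster assigns 50% to every game, so p = 0.5 whatever the result
    for name, score in [("logarithmic", logarithmic_score),
                        ("quadratic", quadratic_score),
                        ("spherical", spherical_score)]:
        print(name, round(score(0.5), 3))
    # logarithmic 1.0, quadratic 0.75, spherical 0.707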

Looking then at the scores for HELP in the last three columns of the table above, we can see that it performed across the weekend slightly worse than a naive forecaster would have, regardless of the scoring approach adopted. What hindered HELP this weekend was its assignment of an 87% probability to St Kilda winning on line betting and of a 90% probability to Adelaide doing the same. Its other incorrect forecast was that the Dogs would win on line betting, but it assigned a probability of just 59% to this outcome and so was not as severely punished for this error.
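To put some rough numbers on that, using the quadratic score as the illustration: an 87% forecast on a team that fails to cover means p = 0.13 on the actual result, for a score of 1-(0.87)^2, or about 0.24, whereas the 59% forecast on the Dogs means p = 0.41 and a score of 1-(0.59)^2, or about 0.65 - much closer to the naive benchmark of 0.75.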
