
Question on Standardizing batting and bowling averages

Days of Grace

International Captain
Been working a lot on standardizing averages for test batsmen and bowlers. But I am always getting stuck on one point:

For example, Bowler A bowls in two innings.

One is against a strong batting lineup with let's say a team average of 40 (I also take into account match conditions). Against the overall RPW average of 32 across the entire history of test cricket, any runs conceded against this batting lineup will be multiplied by 0.80 (32/40).

Let's say the bowler takes 3/120 against the strong batting lineup. Thus, his runs conceded would drop to 96 (120*0.80).


In another innings, the same bowler plays against a weak batting lineup and/or in conditions good for bowling. The batting lineup/conditions have an average of 26.67. That's 32/26.67 = 1.20.

The bowler takes 3/60, thus getting his runs conceded adjusted up to 72 (60*1.20).

Without adjusting, the bowler across the two innings took 6/180. Adjusted, he takes 6/168.
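
In Python, the adjustment above looks roughly like this (a minimal sketch; the variable names are mine, not from any spreadsheet):

ERA_RPW = 32.0  # overall runs-per-wicket across Test history, as used above

def adjusted_runs(runs_conceded, opposition_rpw, era_rpw=ERA_RPW):
    # Scale runs conceded by how strong the batting lineup/conditions were.
    return runs_conceded * (era_rpw / opposition_rpw)

# The two innings from the example: 3/120 vs a 40-RPW lineup, 3/60 vs a 26.67-RPW one.
innings = [(120, 3, 40.0), (60, 3, 26.67)]

raw_runs = sum(r for r, _, _ in innings)                        # 180
adj_runs = sum(adjusted_runs(r, opp) for r, _, opp in innings)  # ~168
wickets = sum(w for _, w, _ in innings)                         # 6

print(raw_runs / wickets)  # 30.0, the raw average
print(adj_runs / wickets)  # ~28.0, the adjusted average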


Now, here's the problem. When a bowler plays against tough opposition or in tough conditions, he normally concedes more runs. Thus, his bowling average is adjusted downwards. However, as in the case above, even if he plays an equal number of matches against strong and weak opposition, his average will still be adjusted downwards, because the more runs you concede, the greater the effect of the adjustment.

In the above example, the bowler should have an adjusted average equal to his original average.

Likewise, batsmen would also see their averages drop, since they typically score more runs against weak opposition/in benign conditions.

Thus, bowlers are rewarded but batsmen are punished. This does not seem fair. Is there a way to factor something into this equation?


I can upload some spreadsheets if anyone is interested.

Hope I explained my conundrum well enough.
 

Lillian Thomson

Hall of Fame Member
Your entire problem is that you're trying to rate players based on something that hasn't happened. Not even Pythagoras could have helped with this. The stats are facts and no amount of bastardisation is ever going to be flawless.
 

Spark

Global Moderator
Your entire problem is that you're trying to rate players based on something that hasn't happened. Not even Pythagoras could have helped with this. The stats are facts and no amount of bastardisation is ever going to be flawless.
"Look how clever I am for not actually reading the post"

---

To be honest the problem seems to stem at least partially from the fact that the adjustment varies with the number of runs conceded which to me seems... unjustified at best. Is there a way to weight simply by the number of matches?
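
One way to read that suggestion, sketched here with the opening example's numbers rather than anything specified in the thread: average the per-innings scaling factors with equal weight (per innings here, per match in practice) and apply that single factor to the raw average, so a balanced schedule leaves the average unchanged.

ERA_RPW = 32.0

innings = [  # (runs_conceded, wickets, opposition_rpw) from the opening example
    (120, 3, 40.0),
    (60, 3, 26.67),
]

factors = [ERA_RPW / opp for _, _, opp in innings]  # [0.80, ~1.20]
mean_factor = sum(factors) / len(factors)           # ~1.00, each innings counted once

raw_average = sum(r for r, _, _ in innings) / sum(w for _, w, _ in innings)  # 30.0
adjusted_average = raw_average * mean_factor        # ~30.0, unchanged for a balanced schedule
print(adjusted_average)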
 

indiaholic

International Captain
Been working a lot on standardizing averages for test batsmen and bowlers. But I am always getting stuck on one point:

For example, Bowler A bowls in two innings.

One is against a strong batting lineup with let's say a team average of 40 (I also take into account match conditions). Against the overall RPW average of 32 across the entire history of test cricket, any runs conceded against this batting lineup will be multiplied by 0.80 (32/40).

Let's say the bowler takes 3/120 against the strong batting lineup. Thus, his runs conceded would drop to 96 (120*0.80).


In another innings, the same bowler plays against a weak batting lineup and/or in conditions good for bowling. The batting lineup/conditions have an average of 26.67. That's 32/26.67 = 1.20.

The bowler takes 3/60, thus getting his runs conceded adjusted up to 72 (60*1.20).

Without adjusting, the bowler across the two innings took 6/180. Adjusted, he takes 6/168.


Now, here's the problem. When a bowler plays against tough opposition or in tough conditions, he normally concedes more runs. Thus, his bowling average is adjusted downwards. However, as in the case above, even if he plays an equal number of matches against strong and weak opposition, his average will still be adjusted downwards, because the more runs you concede, the greater the effect of the adjustment.

In the above example, the bowler should have an adjusted average equal to his original average.

Likewise, batsmen would also see their averages drop, since they typically score more runs against weak opposition/in benign conditions.

Thus, bowlers are rewarded but batsmen are punished. This does not seem fair. Is there a way to factor something into this equation?


I can upload some spreadsheets if anyone is interested.

Hope I explained my conundrum well enough.
Don't think I understand this. 40 is 25% more than 32, no? Then 25% less than 32 would be 24?
 

Lillian Thomson

Hall of Fame Member
"Look how clever I am for not actually reading the post"

---

To be honest the problem seems to stem at least partially from the fact that the adjustment varies with the number of runs conceded which to me seems... unjustified at best. Is there a way to weight simply by the number of matches?
I did read the post. If he's doing something other than artificially adjusting performances please explain.
 

Days of Grace

International Captain
Your entire problem is that you're trying to rate players based on something that hasn't happened. Not even Pythagoras could have helped with this. The stats are facts and no amount of bastardisation is ever going to be flawless.
Of course it's not going to be flawless. But it's an interesting exercise. Otherwise I guess you're accepting that George Lohmann is the greatest bowler of all time?
 

Lillian Thomson

Hall of Fame Member
Of course it's not going to be flawless. But it's an interesting exercise. Otherwise I guess you're accepting that George Lohmann is the greatest bowler of all time?
No. But nor would I accept a spreadsheet full of arbitrary fannying around with the stats. Anyway, if you enjoy it and other people enjoy reading it, it's all in a day's work for bicycle repair men.
 

cnerd123

likes this
No. But nor would I accept a spreadsheet full of arbitrary fannying around with the stats. Anyway, if you enjoy it and other people enjoy reading it, it's all in a day's work for bicycle repair men.
> Sees thread asking for help regarding statistical analysis
> Posts in thread to decree statistical analysis meaningless
> Rides off on high horse into the sunset

GG
 

Lillian Thomson

Hall of Fame Member
> Sees thread asking for help regarding statistical analysis
> Posts in thread to decree statistical analysis meaningless
> Rides off on high horse into the sunset

GG
Not to worry though. My posts will soon be swallowed up by all the posters queuing up to help him. :xmas:
 

Prince EWS

Global Moderator
This is one of those times where I have the answer in a conceptual blob in my head but am not sure if I possess the command of language to actually explain it.

There are actually lots of ways to get around this if you think it's a problem; you could standardise wickets taken instead, or find the average standard of opposition a player faced per match across his career and standardise it all in one go instead of doing it match by match, or even find the average standard of opposition a player faced per over and standardise it all in one go that way, or probably lots of other things I've never thought of.

However, I actually think what you've identified is a problem with how raw averages are calculated. Standardising them 'distorts' this phenomenon of players usually ending up bowling more when they play against strong batting lineups, but this distortion is actually good because that phenomenon was a distortion in the first place. In standardising the runs conceded you actually mitigate an existing problem rather than creating a new one, assuming you've figured out the standard of opposition perfectly (which you never really can, but you always have to assume you have).
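
One of the alternatives above, the career-level one-shot standardisation, might look something like this (a rough sketch with the opening example's two innings, not real data):

ERA_RPW = 32.0

career = [  # (runs_conceded, wickets, opposition_rpw) per innings
    (120, 3, 40.0),
    (60, 3, 26.67),
]

# Average opposition strength, one observation per innings (per-match or
# per-over weighting would just change what gets averaged here).
mean_opposition_rpw = sum(opp for _, _, opp in career) / len(career)  # ~33.33

raw_average = sum(r for r, _, _ in career) / sum(w for _, w, _ in career)  # 30.0
adjusted_average = raw_average * ERA_RPW / mean_opposition_rpw             # ~28.8
print(adjusted_average)

With those two innings the figure lands at about 28.8, between the raw 30 and the innings-by-innings 28, which is the kind of gap the choice of weighting controls.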
 

Days of Grace

International Captain
This is one of those times where I have the answer in a conceptual blob in my head but am not sure if I possess the command of language to actually explain it.

There are actually lots of ways to get around this if you think it's a problem; you could standardise wickets taken instead, or find the average standard of opposition a player faced per match across his career and standardise it all in one go instead of doing it match by match, or even find the average standard of opposition a player faced per over and standardise it all in one go that way, or probably lots of other things I've never thought of.

However, I actually think what you've identified is a problem with how raw averages are calculated. Standardising them 'distorts' this phenomenon of players usually ending up bowling more when they play against strong batting lineups, but this distortion is actually good because that phenomenon was a distortion in the first place. In standardising the runs conceded you actually mitigate an existing problem rather than creating a new one, assuming you've figured out the standard of opposition perfectly (which you never really can, but you always have to assume you have).
Thanks PEWs for your advice. I may have made a breakthrough of sorts today.

I have been rating test batting lineups (and bowling lineups) based on the individual ICC ratings of the players. For batting teams, I average out the ratings of the top 7. Some adjustment upwards is made for teams with little experience since the ICC ratings only give a full rating to a player after 40 innings.

Anyway, a very strong batting lineup has an average rating of 750. A very weak lineup has an average of 250. I applied my own intuition and decided that a team rated 750 would score at 1.33 times the overall era RPW average, and a team rated 250 at the opposite end of the scale, 0.67 times it.

For example, in the era from 1920 to just before the 2003 World Cup, the RPW is exactly 32.00. I use the 2003 World Cup as a stopping point because after that event we entered the current era of flatter pitches, bigger bats, etc., afaic.

Therefore, a team with a 750 rating would have an RPW of 42.67. A team with a rating of 350 would have an RPW of 26.67 and so on.

After adding in match/pitch conditions, one can work out the average rating of teams/match conditions a bowler faced over his career.
Grimmett, for example, faced an average rating of 532. If we apply this rating to the overall RPW of the entire history of test cricket, rounded down to 32.00 again, then a rating of 532 corresponds to an RPW of 33.38 (32*(532*0.001333 + 0.333333)). Grimmett's average would then be multiplied by 32/33.38, so 24.22 is adjusted down to 23.22.
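
As a rough sketch in Python (the function and constants just restate the figures above; nothing here comes from the actual spreadsheet):

ERA_RPW = 32.0

def expected_rpw(rating, era_rpw=ERA_RPW):
    # Expected runs per wicket for a lineup/conditions with this rating,
    # using the linear mapping above: 750 -> ~1.33x, 250 -> ~0.67x.
    return era_rpw * (rating * 0.001333 + 0.333333)

grimmett_rating = 532          # average rating of lineups/conditions faced
grimmett_raw_average = 24.22

factor = ERA_RPW / expected_rpw(grimmett_rating)  # 32 / ~33.4
print(expected_rpw(750))                          # ~42.7, the very strong lineup
print(grimmett_raw_average * factor)              # ~23.2, the adjusted average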


However, another method I have come up with is to weight Grimmett's bowling performances, i.e. the analysis of each of his bowling innings, with the rating of the batting lineup/match conditions. If I do that, the average rating comes to 479 (an RPW of 31.12).

If I instead used runs conceded or balls bowled as the weights, then of course the average rating would come out higher, since a bowler typically bowls more overs and concedes more runs against strong opposition. I'm not sure that is the way to go here.
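
To illustrate how much the weighting choice matters, a small sketch with entirely made-up numbers (nothing from the Grimmett spreadsheet): equal weight per innings, weight by balls bowled, and weight by runs conceded give noticeably different career ratings.

innings = [  # (opposition_rating, balls_bowled, runs_conceded) - illustrative numbers only
    (700, 360, 130),   # long spell against a strong lineup
    (400, 180, 55),    # shorter spell against a weak lineup
]

def weighted_mean(pairs):
    # pairs of (value, weight)
    total_weight = sum(w for _, w in pairs)
    return sum(v * w for v, w in pairs) / total_weight

per_innings = sum(r for r, _, _ in innings) / len(innings)    # 550
by_balls    = weighted_mean([(r, b) for r, b, _ in innings])  # 600
by_runs     = weighted_mean([(r, c) for r, _, c in innings])  # ~611

# Weighting by balls or runs pulls the average towards the strong lineups,
# since bowlers usually bowl more (and concede more) against them.
print(per_innings, by_balls, by_runs)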

Not sure which is the best method. I will attach or email my Grimmett spreadsheet if anyone is interested.

Again, hope I explained well enough.:)
 