Okay, so I've done this, and I'm pretty surprised by the results. They seem to back up your feeling that the jump from 0 to 35 is worth more than any other 35-run jump that exists.
Here's how we currently measure contributions through raw averages: completely linearly.
[Attachment 22069: runs vs. contribution under the raw-average (linear) measure]
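Just to make the baseline explicit, the linear measure treats every run identically; a trivial Python sketch, purely for illustration:

```python
# Linear measure: a score of n runs is worth exactly n, so the jump
# from 0 to 35 counts the same as the jump from 215 to 250.
def linear_value(runs: int) -> float:
    return float(runs)
```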
Here's the function Spark designed largely to measure big scores non-arbitrarily.
[Attachment 22070: the Spark function curve]
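I don't have Spark's actual formula to hand, so this is only a hypothetical stand-in for the shape being described: flat at the bottom, steepest through the middle, plateauing above roughly 250. The midpoint and steepness parameters are made up.

```python
import math

# Hypothetical S-curve stand-in, NOT Spark's real function: sub-150
# scores contribute relatively little, value rises fastest through the
# middle, and there's little difference between 250 and anything higher.
def spark_like_value(runs: float, midpoint: float = 125.0, steepness: float = 0.03) -> float:
    return 1.0 / (1.0 + math.exp(-steepness * (runs - midpoint)))
```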
But here's the actual historical correlation between each score and not losing (I chose not losing rather than winning because I figured the difference between drawing and winning almost always comes down to bowling rather than batting).
[Attachment 22072: historical correlation between score and not losing; x-axis is the score, y-axis is the share of innings made in teams that didn't lose]
To explain that graph a bit, I'll give some examples. At 50 on the x-axis, the y-axis reads 0.672. That means 67.2% of scores of 50 (actually 40-60, as I used wide bins to smooth out anomalies on the graph, but that's not really important as it didn't change the overall shape) were made in teams that didn't go on to lose. Given the figure at 0 is 52.04%, it's basically showing that, historically, there's been greater value in getting to 50 (a jump of roughly 15 points) than in converting a 50 into a ton (79.5% at 100, a jump of roughly 12).
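For anyone wanting to reproduce the calculation, this is roughly what I'm doing per point on the x-axis; the frame and its 'runs'/'result' columns are hypothetical names, not a real dataset.

```python
import pandas as pd

# One row per individual innings; 'result' is 'won'/'drawn'/'lost'
# from the batting side's point of view (hypothetical column names).
def not_loss_rate(innings: pd.DataFrame, centre: int, width: int = 10) -> float:
    # Share of innings scored in [centre - width, centre + width] that
    # were made in teams that did not go on to lose (win or draw).
    binned = innings[innings["runs"].between(centre - width, centre + width)]
    return float((binned["result"] != "lost").mean())
```

With my data, not_loss_rate(innings, 50) is where the 0.672 above comes from.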
It backs up the Spark function idea that there's little material difference between 250 and anything higher, but the way sub-150 scores behave seems to be the opposite of the Spark function, which surprised me greatly. Historically, avoiding low scores has been a better route to not losing than having a player make a really big one. I do wonder whether that's being skewed by tailenders (particularly since they bat less, in general, in winning teams) or by pre-war cricket in some way. I might try excluding them and see what happens to the curve; a sketch of that filter is below.
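If I do run that exclusion, it's just a filter before rebuilding the curve; something like this, again with made-up column names ('batting_position', 'year'):

```python
# Drop tailenders (positions 8-11) and pre-war matches, then rebuild
# the curve to see whether the steep low-score end survives.
filtered = innings[(innings["batting_position"] <= 7) & (innings["year"] >= 1946)]
curve = {score: not_loss_rate(filtered, score) for score in range(0, 301, 10)}
```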