Have Gauntlet report actual Critical % chance
I tested 5%, 25%, 45% and 65% crew by battling crew with the same % chance. In other words, 5% battled 5%, 65% battled 65%, etc.
I also only battled when two skills were being paired, so there were 6 chances for a critical each battle, for both my character and the opponent. I re-rolled opponents using merits to get these parameters in place. I did not control for fatigue, as there is no reported indication that character fatigue negatively impacts critical % chance.
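For scale: under the advertised rates, each side should expect 6 × 0.05 = 0.3 crits per battle at the 5% tier, and 6 × 0.65 = 3.9 at the 65% tier.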
Results:

Expected | Rolls per side | My crits (observed) | Opponent crits (observed)
5% | 2,160 (360 battles x 6) | 59 (2.7%) | 123 (5.7%)
25% | 1,080 (180 battles x 6) | 163 (15.1%) | 223 (20.6%)
45% | 420 (70 battles x 6) | 152 (36.2%) | 171 (40.7%)
65% | 300 (50 battles x 6) | 178 (59.3%) | 198 (66.0%)
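For anyone who wants to check how unlikely these counts are under the advertised rates, here is a minimal sketch in Python. It assumes each of the six chances per battle is an independent coin flip at the listed rate (the naive model the game's numbers imply) and uses a normal approximation, which is rough in general but fine at these sample sizes; the tier data is typed in from the table above.

```python
import math

def two_sided_p(k, n, p):
    """Normal-approximation two-sided p-value for k successes in n trials at rate p."""
    mean = n * p
    sd = math.sqrt(n * p * (1 - p))
    z = (k - mean) / sd
    return math.erfc(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))

# (advertised rate, rolls per side, my crits, opponent crits) from the table above
tiers = [(0.05, 2160, 59, 123),
         (0.25, 1080, 163, 223),
         (0.45, 420, 152, 171),
         (0.65, 300, 178, 198)]

for p, n, mine, theirs in tiers:
    print(f"{p:.0%} tier: expected {n * p:.0f} crits per side; "
          f"mine {mine} (p = {two_sided_p(mine, n, p):.2g}), "
          f"opponent {theirs} (p = {two_sided_p(theirs, n, p):.2g})")
```

Anyone replicating the test can paste their own counts into `tiers` and compare.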
Please fix these issues so that we can more clearly understand what is going on. I did not record data for the streak number; however, while doing this I also saw a clear indication that when you are in a battle that will award a prize chest if you win, the critical count was always skewed in the opponent's favor.
Again, this data is not about wins and losses, merely expected versus observed critical chances, and the indication is that the Gauntlet numbers are not accurate.
Comments
https://forum.disruptorbeam.com/stt/discussion/11125/2500-rounds-of-gauntlet-data
Also, I am not sure that they followed the same type of controls that I did.
Interesting dataset, but not really relevant to what I was looking at.
Besides, my dataset shows that both the player's and the opponent's numbers are wrong; the player's rate runs a bit lower than the opponent's, but neither matches the advertised chance.
edit: I also noticed that the intention of that crowdsourced data collection was to test whether there was any malfeasance or deception on the part of the game developer. Just playing devil's advocate, but since it was voluntary data submission, if there WAS in fact malfeasance by the developer, they could simply have submitted skewed data into the collection themselves.
On that note, you should be aware that there is no such thing as a true random number in programming; generators are deterministic, and there is always a bias.
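To give one concrete illustration (hypothetical; I have no idea what this game actually does), a very common source of bias is reducing a generator's output with a modulo:

```python
# Modulo bias: mapping a uniform byte (0-255) onto 0-99 with `% 100`
# over-represents 0-55, because 256 is not a multiple of 100.
from collections import Counter

counts = Counter(b % 100 for b in range(256))  # every byte value exactly once
print(counts[10], counts[60])  # prints "3 2": values 0-55 each occur one extra time
```

Well-written RNG code corrects for this, but it shows how easily a stated percentage and the implemented percentage can drift apart.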
Some issues with the discussion cited above:
1. It was designed to test for malfeasance on the part of the developer.
2. It was open to anyone for data submission.
3. Data collection ceased in 2019.
This is problematic for hypothesis testing, because if the hypothesis is that the developer was manipulating the code in their favor, it stands to reason that they could adjust that code. A public announcement on their forum may have been reason enough for that adjustment. In other words, the test may have influenced the results it was seeking to gather.
Additionally, the submitted data is not reported by user tag, which means it would be very easy to submit skewed data without repercussion. In the original thread, you can see a post from barkley discussing someone who submitted 8,000 rounds when it would have been impossible to do so. Clearly, there was some interest in (or obliviousness about) influencing the data analysis.
And since data has not been collected since 2019, they could have switched the code back, or changed it again, resulting in what is observed today.
My test is repeatable by anyone: I explained the setup clearly, and anyone can run it. I stand by my results. I have continued compiling data, and the numbers are not improving. I will update this post in a few months.