Verity harnesses the Wisdom of Crowds, but what if the crowd is wrong?
Over the last six months, thousands of users have helped us test the many different use cases of the Verity platform.
But perhaps the most interesting aspect of these tests so far has been observing how humans (our data providers) behave when different reward structures are used to incentivize them.
Early in our alpha testing program, our reward algorithm heavily incentivized speed of response: if you were quick to submit your data and you were in the consensus, you received a larger share of the reward than those who submitted correct answers more slowly.
This worked well for events where the required data was immediately observable, such as our fake news or counterfeit product experiments. However, it also meant that people were inclined to rush their answers in the hope of a larger reward rather than take the time to ensure their answers were correct.
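To make the incentive concrete, here is a minimal sketch of a speed-weighted reward split, assuming a simple inverse-time weighting. The function name and decay formula are our own illustrations, not Verity's actual algorithm:

```python
# Hypothetical speed-weighted reward split (illustrative, not Verity's code):
# every provider whose answer matches the consensus gets a share of the pool,
# but earlier submissions earn proportionally more.

def speed_weighted_rewards(submissions, consensus, pool):
    """submissions: list of (provider, answer, seconds_elapsed) tuples."""
    in_consensus = [(p, t) for p, a, t in submissions if a == consensus]
    # Assumed decay: weight shrinks as elapsed time grows.
    weights = {p: 1.0 / (1.0 + t) for p, t in in_consensus}
    total = sum(weights.values())
    return {p: pool * w / total for p, w in weights.items()}

subs = [("alice", "2-0", 10), ("bob", "2-0", 60), ("carol", "1-0", 5)]
rewards = speed_weighted_rewards(subs, "2-0", 100.0)
# alice answered correctly and earliest, so she earns more than bob;
# carol's answer missed the consensus, so she earns nothing at all.
```

Under a structure like this, an early risky guess that happens to land in the consensus beats a careful late answer, which is exactly the behavior described below.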
This situation came to a head during one of our World Cup football test events. Users were asked to supply Verity with the full-time score of the match, along with the number of minutes of stoppage time announced after the initial 90 minutes had been played.
At Verity, we watched both the match - Germany vs. South Korea - and our community with great interest to see how people would behave with their data submissions.
Would they wait until the very end of the game to ensure they supplied the correct score, or would they gamble, submitting a risky prediction rather than the actual final score?
It was the perfect match for such a test, because no one would ever have predicted that the defending World Cup champions, Germany, would be knocked out by South Korea (a team not even in the FIFA top 50), and certainly not by South Korea scoring two goals in stoppage time!
Speed vs. Accuracy
The unexpected final result (South Korea 2, Germany 0) highlighted people’s tendency to gamble: ‘final score’ data that would be rendered inaccurate by the late goals poured in before the end of the match in such numbers that, eventually, no consensus was formed.
Because the reward structure incentivized speed, people were willing to risk being wrong in the hope that the score would not change and that they’d be among the fastest data providers, earning a larger slice of the pie.
For the second match that evening (Serbia vs. Brazil), we completely removed the incentive for speed and implemented a flat reward structure: provided you were part of the consensus group, you received the same reward as everyone else. No extras for being fastest.
The change in behavior was instantly apparent: the vast majority of people waited until the end of the match, and an accurate consensus was reached. Remarkably, though, despite there being NO incentive at all to supply data prematurely, some people still gambled, submitting (incorrect) scores and stoppage times before the end of the game and getting zero reward for their trouble. Perhaps they just hadn’t read the memo.
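A flat split like the one just described could be sketched as follows (our own assumed formulation, not Verity's production code):

```python
# Hypothetical flat reward structure: everyone in the consensus group
# receives an equal share of the pool, so submitting early buys nothing.

def flat_rewards(submissions, consensus, pool):
    """submissions: list of (provider, answer) pairs."""
    winners = [p for p, a in submissions if a == consensus]
    share = pool / len(winners) if winners else 0.0
    return {p: share for p in winners}

subs = [("alice", "2-0"), ("bob", "2-0"), ("carol", "1-1")]
rewards = flat_rewards(subs, "2-0", 100.0)
# alice and bob split the pool evenly; carol, outside the consensus,
# gets nothing. Submission time never enters the calculation.
```

With time removed from the payout entirely, the only rational strategy is to wait and be right, which matches the behavior we observed.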
Nevertheless, any system that relies on a human element will be subject to some unpredictability, and that’s why it’s been so interesting for us to observe how people act when incentivized in different ways.
As Verity co-founder Martin Mikeln explained to our Telegram community of 22k people after the Germany vs. South Korea test:
“Our prototype, Verity Alpha, doesn’t yet have the dispute and review mechanism built-in, and there is no staking or reputation effect which puts the whole system in a multi-Nash-equilibrium state, instead of being in a single Nash equilibrium state. In other words, it is much more susceptible to “gambling”.
Even though not many people gambled early in the game, there were still many that voted a few minutes before the match had ended, because they had “nothing to lose”. And a crazy game like tonight proved that without those mechanisms implemented, consensus might not be reached.
Future events will take into account reputation score (already being calculated but not yet implemented), a basic form of staking, and a dispute mechanism.”
We think that both accuracy and speed will be valued by developers using Verity to collect data, but that accuracy will almost certainly be valued over speed. We’ll continue to test reward structures and, ultimately, allow developers to select the best reward structure for the data they need.
And once our planned ‘reputation’ and ‘staking’ functionality is in place, supplying inaccurate data will be severely penalized, eliminating the incentive to do so and allowing Verity to harness the wisdom of crowds while avoiding the madness of mobs.
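As an illustration of how staking could penalize inaccurate data, here is a sketch in which providers lock a stake with each answer and non-consensus providers forfeit part of it. The slash rate, redistribution rule, and function names are hypothetical assumptions, not Verity's published design:

```python
# Hypothetical staking settlement: consensus members recover their full
# stake plus a bonus funded by the stakes slashed from outliers.

def settle_stakes(submissions, consensus, slash_rate=0.5):
    """submissions: list of (provider, answer, stake). Returns payouts."""
    payouts = {}
    slashed_pool = 0.0
    for provider, answer, stake in submissions:
        if answer == consensus:
            payouts[provider] = stake            # stake returned in full
        else:
            slashed = stake * slash_rate
            payouts[provider] = stake - slashed  # part of stake forfeited
            slashed_pool += slashed
    winners = [p for p, a, _ in submissions if a == consensus]
    for p in winners:                            # redistribute slashed stakes
        payouts[p] += slashed_pool / len(winners)
    return payouts

subs = [("alice", "2-0", 10.0), ("bob", "2-0", 10.0), ("carol", "1-0", 10.0)]
payouts = settle_stakes(subs, "2-0")
# carol forfeits half her stake; alice and bob recover theirs plus a bonus.
```

The key property is that a premature guess now has a real downside: gambling and missing the consensus costs you money, not just a forgone reward, which is what moves the system toward a single stable equilibrium.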