Contemporary freelancing—at least, pre-COVID—has been built on digital technology that supports remote work and collaboration. Freelancers also participate in online labor markets, such as TaskRabbit and Fiverr. These services matchmake between workers and hirers at a relatively fine-grained, task-by-task level.
This technology automates and standardizes the hiring process, collecting descriptions, histories, and evaluations of workers to feed the hiring decision. In principle, these datasets are level playing fields, with every worker treated the same, and uniform processes across all gigs.
What could possibly go wrong?
(“People” is what could possibly go wrong.)
This month, Nahla Davies writes about her own experience on these platforms, which is not that different from a lot of other work and hiring experiences <<link>>. The technology may be “color blind”, but the people using it sure aren’t.
Gosh. Who’d a thunk it?
There isn’t a lot of solid research on these platforms, but some studies show that hiring, pay, and evaluations may well be skewed in favor of white males. The evaluations matter a lot, because they can strongly influence getting future gigs. And while the differences may be small, in a crowded market even a slight disadvantage can be disastrous. With pay at the edge of a living wage, even slight pay discrepancies can make the difference between success and failure.
Davies backs up the limited research with her own observations. She perceives that non-white workers receive lower evaluations, which ultimately hinders their ability to get gigs and depresses the pay they are offered.
In addition, hirers act like bosses everywhere. They talk down to and mansplain at experienced professionals and, worse, hire, as Davies puts it, as “part of a reputation management attempt”.
“I’ve been asked, for instance, to be ‘the face’ of some employers … so they can fairly transparently prove their progressive credentials.”
Now, as Davies says, we can’t really expect online platforms to cure racism, sexism, and blockheadedness.
But I agree with her that it’s not OK for the platforms to wash their hands and do nothing to protect their workers from patently unfair (not to mention unreasonable) outcomes. If nothing else, this is a waste of human resources, which are the main product of these platforms.
Davies suggests collecting demographic information to document this kind of bias. She also speculates on using some kind of algorithmic corrections. The former would make it easier to document the outcomes, if nothing else. I’d be surprised if the latter would actually make things better, and it could easily make things much worse.
She also suggests changes to the decision making, including more transparency about pay and wider participation in the hiring decisions. These are probably good ideas in any case. And it seems to me that a digital hiring platform is well placed to enable such modifications. In fact, why doesn’t the platform offer an array of decision-making processes, in the same way that it offers an array of gigs and workers?
I would add a suggestion that the platform should let workers rate the platform’s results, similar to how employers are allowed to rate workers. That is, if the platform is producing biased outcomes, the workers should be able to ding it, or its processes. Maybe this should trigger lower fees to the platform, or something like that. If the platform does nothing to help workers, the fees should be lower than when it serves their interests, no?
Freelancing is hard enough; I hate to see these “level playing fields” making things even harder for some workers.
- Nahla Davies, Black freelancers face discrimination on online hiring platforms, in Freelancers Union Blog, August 24, 2020. https://blog.freelancersunion.org/2020/08/24/black-freelancers-face-discrimination-on-online-platforms/
- Anikó Hannák, Claudia Wagner, David Garcia, Alan Mislove, Markus Strohmaier, and Christo Wilson, Bias in Online Freelance Marketplaces: Evidence from TaskRabbit and Fiverr, in Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing. 2017, Association for Computing Machinery: Portland, Oregon, USA. p. 1914–1933. https://doi.org/10.1145/2998181.2998327