I first heard about data annotation while scrolling TikTok. A woman in a fuzzy beige cardigan appeared on my screen, clanging Stanley cup in hand, to tell me about her quest to make $100 each day, before and after her 9 to 5. Naturally, my ears perked. There’s a feral little puppy in the pit of my gut, born out of the hellscape of American capitalism, that is always yapping at me to work harder, do more, live differently. Had this seemingly polished, cardigan-wearing 20-something hacked the system and found a way toward financial stability, the thing I yearn for more and more even as I feel increasingly at odds with the means of achieving it in an unbalanced, ruthless economy?
I crunch down on a handful of chips and keep watching, the glow of my phone casting a nimbus of false enlightenment over my face.
The TikToker’s name is Jackie. I quickly begin to covet her life. I assume she is better than me in every way. She explains that she has begun working as a data annotation tech, a remote side hustle. When I check in on her again, months later, to write this article, I find that she is on day 78 of 100 of trying to make an extra $100 a day with this gig. I am impressed by her stamina and also fearful of becoming her. I do not want to work 12+ hour days for 78 days, if I can help it. Her bio says she is 26, my age.
I immediately google “data annotation jobs remote” and almost find myself inputting personal information to apply for a job. I catch myself before pressing submit and open a new tab. This time I google “data annotation jobs legit?” and click on a Reddit thread. On r/WFHjobs, user jonbestinsnow asks what I was wondering: “Is Data Annotation a scam? They have projects you work on for money. I can’t remember if I gave them my venmo username or not.”
Coffeenebulamom replies: “I am happy to report that it’s definitely not a scam! I have been working for them for a couple of months now and made a couple thousand bucks. They test and train all kinds of AI, and they look for people who can write pretty clearly and read instructions very well.”
I navigate back to the Google home page and do some more clicking around. It’s worth noting that I use Google every day, as a writer who relies on Google Docs both personally and professionally for my projects. I like the Google suite – it’s free, collaborative, and for the most part pretty intuitive to use. What I don’t like: recent updates to the company’s privacy policy that allow it to scrape web data in order to train its AI products. It is unclear whether this policy applies to Google Docs content; the policy states that Google uses “publicly available information to help train Google’s AI models and build products and features like Google Translate, Bard, and Cloud AI capabilities.” Google says it will not use Docs content without users’ permission, but is vague about what form that permission takes.
I find this move on Google’s part to be morally unacceptable. Even if a user took the time to migrate their work onto a different program, any work up until that point could presumably have been co-opted to help train Google’s models. This points to a larger theme in the conversations around data annotation and AI, one I soon discovered through (sigh) googling the topics – AI models rely on human labor to function, and will for the foreseeable future. AI models will not erase grunt work; they will simply change the nature of lower-paying positions while turbo-charging the capacities of those with institutional seniority. In short, the working class will become increasingly isolated from white-collar executives.
In “AI Is a Lot of Work,” Josh Dzieza writes for The Verge about data annotation and the future of the human labor necessary to keep AI running. He sums it up by saying: “You might miss this if you believe AI is a brilliant, thinking machine. But if you pull back the curtain even a little, it looks more familiar, the latest iteration of a particularly Silicon Valley division of labor, in which the futuristic gleam of new technologies hides a sprawling manufacturing apparatus and the people who make it run.”
Do you know what helps ease the sensation of being a cog in a very vast, ugly machine, hyped up by tech bros and megacorporations alike? A nice, fuzzy beige cardigan. I wrap one tighter around myself as I sit down at my aesthetically pleasing at-home workstation, furnished with a freshly lit candle and soothing music. I can almost convince myself that I am okay. I open Remotasks, one of the major annotation sites, to create an account. Over the stress-relief piano music, I hear a faint buzzing outside. I ignore it and continue. The site asks for a photo of my driver’s license. It scans my face with my phone camera, having me turn my head fully to the left, back to center, slightly to the right. It tells me I have been verified as real.
After some training videos, I am ready to start working, or as they call it, “tasking.” The buzz outside has grown into a drone. It sounds like a ship is hovering over my apartment building, or perhaps a portal.
I complete one task, then the next. I am categorizing the emotions of people in videos. I am clicking on what is yellow. I am listening for human voices and distinguishing them from all other sounds.
I find my own gaze reflected back by my monitor and it catches me off guard. My face is something new. I am mirroring the facial expressions in the videos I label for emotional content. The monitor reflects a smile, then a frown, then something in between. My mouth quirks weirdly, like I am suppressing some intolerable feeling. I label it and continue with the task at hand, trying to get to some unforeseen end.