Monsters, superheroes and video games: Is AI even real?
Just like the Cookie Monster of Sesame Street, AI is a data monster: it devours data (and processes it) at breakneck speed. It takes thousands of cat pictures for a deep learning algorithm to understand what a cat is, or, to rephrase the Greek philosopher Plato, the “catness” of the cat. And even after ingesting these thousands of pictures, our data monster still gets confused (I am sure Cookie Monster would never be confused about recognising Oreos).
Search online for images of “Blueberry Muffin or Chihuahua” or “Sheepdog or Mop” and you will get the hint. Come to think of it, and I never did before, the face of a Chihuahua often looks freakishly like a blueberry muffin! But even then, if a blueberry muffin and a Chihuahua were in a police line-up and you were expected to identify the suspect, I am guessing you would be able to tell the difference. AI just might fail to do so, and yet governments ask us to trust AI in police work!
Before we move on to what gives, let us, however briefly, think of the poor soul who had to look through 10,000 cat photographs and label them ‘cat’. Think about the sheer boredom of the man or woman who does this for a living… labelling photos as ‘cats’ or ‘dogs’ or really anything else. Repeating ‘meow’ and ‘bow wow’ for one’s child is one thing, but labelling photograph after photograph for some machine… I mean, these are the secret superheroes who are allowing the next generation of AI to work.
Let us think of something worse. We do not want extreme violence to be a part of video streaming services. We do not want terrorist ideals to be propagated through microblogging sites. We do not want the electorate to be brainwashed with fake news on social media. But how would the microblogging platform, the video sharing service or the social media site know that they are being used for something so nefarious? And no, the answer is not necessarily AI. It is still the work of human beings. When users flag videos or posts often enough, human intervention is used to check whether that content needs to be taken down or not (which in turn can train AI in the future). Let us briefly consider the mental health of the highly paid FAANG employee in Silicon Valley (or, in reality, the extremely lowly paid FAANG contractor sitting in Gurgaon), whose job it is to label these videos and who is continuously forced to watch material that is not fit even for the cesspool of the World Wide Web. Can AI help her? Or, on a more positive note, can AI start driving cars, if these unsung superheroes label enough street videos to help AI differentiate between a human and a rock, or a cat crossing a street vs. a blueberry muffin which some rogue kid has thrown out of the window? In short, can AI help create data to support the future of AI?
The answer to this might be video games. Think about a gaming platform where you can create near-perfect racing games. Let us say that a large pothole has been placed on your track, or a cat is crossing the street, and you want to avoid them. The signal that tells your brain to swerve sharply right can also be passed to the algorithm of a self-driving car. Of course, the computer knows that the pothole is a pothole in the game, because it is already labelled in the background (unlike in real life, where it has to be manually labelled). Similarly, in the game, the cat has been developed in 3-D, where different aspects of catness have already been captured. Thus, when you are training the AI algorithm to drive on these streets, you will not need a human to label anything at all. Imagine a virtual reality flight simulator being used to teach AI how to fly. This is the synthetic data that is now the next big thing in this never-ending search for ever more data to feed the AI monster.
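A toy sketch makes the point concrete: because the game engine itself places every object in the scene, the ground-truth labels fall out of the placement step for free. Everything below (the class names, the scene logic) is invented for illustration and is not taken from any real engine or dataset.

```python
import random

# Hypothetical illustration: a simulator "knows" what it has placed,
# so every training sample comes pre-labelled at zero human cost.
OBJECT_CLASSES = ["pothole", "cat", "pedestrian", "muffin"]

def generate_scene(num_objects, seed=None):
    """Place random objects on a virtual road and return the scene
    together with its labels -- no human annotator involved."""
    rng = random.Random(seed)
    scene = []
    for _ in range(num_objects):
        scene.append({
            "cls": rng.choice(OBJECT_CLASSES),
            "x": rng.uniform(0.0, 100.0),   # position along the road (m)
            "y": rng.uniform(-3.0, 3.0),    # lateral offset from lane centre (m)
        })
    # A real engine would also render an image here; the point is that
    # the label list is a by-product of placing the objects.
    labels = [(o["cls"], o["x"], o["y"]) for o in scene]
    return scene, labels

scene, labels = generate_scene(5, seed=42)
```

Contrast this with real street footage, where each pothole and cat has to be drawn around and named by a human, frame after frame.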
And remarkably, the creation of this synthetic data is now getting automated. In early 2022, NVIDIA (the company that makes graphics cards for gamers, which are now used just as much in deep learning) demonstrated Instant NeRF, a fast version of the Neural Radiance Field technique. A NeRF can take a small set of photographs of a cat (or anything else, really) and create a realistic synthetic cat in three dimensions. So, when training AI algorithms, you can have a cat running towards a car or away from it, a cat that looks like a blueberry muffin, or a side profile of a cat, and all of them will be automatically labelled correctly, because they have been created by an algorithm itself, and the AI will thus know that the cat is to be avoided. Once it learns this in the streets of Need for Speed, it can replicate its effectiveness in real life as well.
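For the curious, the core trick behind a NeRF can be sketched in a few lines. A NeRF learns a function that maps a 3-D point (and viewing direction) to a colour and a density, and images are produced by alpha-compositing samples along each camera ray. The sketch below is a deliberately simplified 1-D toy of that volume-rendering rule, with a hand-written stand-in for the learned network; it is not NVIDIA's actual implementation.

```python
import math

def toy_field(t):
    """Stand-in for the learned network: given a distance t along a ray,
    return (colour, density). A real NeRF takes a 3-D point plus a view
    direction; this 1-D version is purely illustrative."""
    density = 4.0 if 2.0 <= t <= 3.0 else 0.0  # an opaque "cat" sits at t in [2, 3]
    colour = 0.8                               # single grey channel for simplicity
    return colour, density

def render_ray(field, t_near=0.0, t_far=5.0, n_samples=128):
    """Alpha-composite samples along the ray: each sample contributes its
    colour, weighted by how much light survives to reach that depth."""
    dt = (t_far - t_near) / n_samples
    transmittance = 1.0   # fraction of light not yet absorbed
    pixel = 0.0
    for i in range(n_samples):
        t = t_near + (i + 0.5) * dt
        colour, density = field(t)
        alpha = 1.0 - math.exp(-density * dt)  # opacity of this small segment
        pixel += transmittance * alpha * colour
        transmittance *= 1.0 - alpha
    return pixel

pixel = render_ray(toy_field)
```

Training a real NeRF amounts to nudging the network so that rays rendered this way reproduce the input photographs; once trained, you can render the cat from any angle you never photographed.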
Effectively, the monster is fed, our secret superheroes can find more useful missions (and ways to earn a living), and our cats can cross the street in relative safety, all because of video games.