ChatGPT won’t help Bill Gates’ philanthropic endeavors
Stakeholders are overstating the societal impact of an otherwise useful tool
A few days ago, Bill Gates announced the beginning of the age of AI, comparing ChatGPT to the revolution brought by the first graphical user interface. Claims like this from stakeholders and the media aren't unprecedented, which raises the question: what makes this moment different from past breakthroughs that elicited similar reactions?
For all intents and purposes, the hype around AI isn't new: it was here well before the ChatGPT craze. In 2017, Jeff Bezos declared a new golden age for AI, as did many others in the past few years — notably Knowledge at Wharton, the World Economic Forum, Deloitte, Chinese giant Baidu, and Forbes. VC funds have long favored AI-driven companies, even before the recent interest in generative AI, with some wondering whether these investments were excessive.
ChatGPT also isn’t the first hyped-up AI technology to enter the mainstream. People have been consciously interacting with opaque AI systems for years. For better or worse, some of these have been all the rage in past media cycles. Remember Facebook’s algorithm, the one that was going to single-handedly destroy democracy? Or YouTube’s radicalizing recommendations? Those were also AI.
What makes the ChatGPT hype different? One thing is certain: conversational AI is intrinsically fascinating to humans because it speaks our language. We've long been interested in developing and interacting with chatbots, from SmarterChild on AIM, Yahoo Messenger, and MSN (its Italian counterpart was Doretta, with Doriana being her funny, potty-mouthed sister), to today's Alexa, Siri, and Google Assistant.
Most importantly, ChatGPT took conversational AI to a level that its better-funded, longer-running competitors never reached. It is a powerful tool for tasks such as coding and text editing. In his post, Bill Gates identifies prominent opportunities in productivity aid and novel drug discovery.
But ChatGPT is really bad at what it's bad at: from hallucinating facts, to generating harmful and biased content (or an unsettling combination of both).[1] Given the state of the art, claims of ChatGPT's role in philanthropy seem overstated at best, if not disingenuous.
Bill Gates was reportedly convinced of the chatbot's potential after it solved an AP Bio exam. But it is unclear how acing a test relates to reducing health or education inequities. More egregiously, the model's strong performance may simply reflect that the exam questions appeared in its training data, which would make this benchmark worthless for assessing how the model performs in unforeseen circumstances.
Gates' suggestion to use ChatGPT to answer the health questions of the world's poorest sounds like offering cake to peasants who have no bread. Another subpar healthcare tool won't move the needle and, as Gates himself admits, the bottleneck will always be granting the poorest access to high-quality solutions. Similarly, education inequities won't disappear by throwing shiny new tech at them: Gates fails to explain how AI will reduce disparities in education when personal computers didn't deliver on the same promise.
It is always worth considering who is advancing these overoptimistic statements about the complex problems AI will solve. It is impossible to ignore Bill Gates' affiliation with Microsoft and the company's partnership with OpenAI. Sam Altman himself has obvious incentives to overstate ChatGPT's role in the quest for AGI, even if he sounds more measured.

Time will tell whether this is indeed the age of AI. For now, all I see are overblown claims about another otherwise useful tool. Gates will probably have to wait a long, long while before ChatGPT can help with his philanthropic endeavors. In the meantime, we could bring the focus back to the more attainable things ChatGPT can already do.
[1] OpenAI has since been implementing reasonable guardrails, but users keep circumventing them.