Meredith Whittaker is a research scientist at New York University and Co-Director of the AI Now Institute (hope to apply there some day!), which is dedicated to understanding the social implications of AI.
Jack Clark is OpenAI's Strategy & Communications Director. He maintains the awesome Import AI newsletter. Check it out here!

Just as I was applying to the MSc in Urban Science, Policy and Planning at SUTD, I came across Jack Clark’s tweet (above), as well as Meredith Whittaker’s response (below). The tweets resonated strongly with me, an AI research assistant applying to a humanities graduate program.


I think there are two main parts to both tweets. For conciseness, and because it maps more closely onto my personal experience, I will refer to Meredith’s tweet. However, I believe the underlying issues are mirrored in Jack’s question. Of course, the following is based on my personal experience and your mileage may vary.

Why don’t AI practitioners engage with the domain that they are designing AI systems for?

I worked as an AI researcher at a cybersecurity lab. Prior to that, I worked as an AI engineer at a chatbot company. In both cases, my knowledge of the subject domains (cybersecurity and conversation) was superficial and limited to what was strictly necessary.

Many commentary pieces warn against using AI as a hammer and treating every problem as a nail (just Google ‘AI is not a hammer’). However, I find that AI engineers tend to exhibit a more subtle version of this. The AI engineer begins with a bag of AI tools, some tried-and-tested and others bleeding-edge. Faced with a problem, the engineer immediately throws a variety of tools at it. When asked which one is likely to perform well, the engineer might hazard a guess. But the truth is, we are not that certain either.

It’s all rather empirical I’m afraid.

Missing from all this is the initial stage of understanding the problem from first principles. There is a general sentiment that AI is a magic tool that just works, frequently reinforced by news articles reporting new breakthroughs in hyperbolic fashion. Unfortunately, I find that many AI practitioners share this sentiment themselves.

Related to this near-religious faith in AI, companies looking for AI solutions underestimate the amount of information and analysis required to design a great system. Problem settings are often variations of “Okay, I’ve got a bunch of data. Can you feed it into the neural network?” Often, little time is allocated for interdisciplinary investigation, and a virtual wall stands between AI practitioners and domain experts.

Another aspect of this is that AI researchers often prefer working on general innovations rather than problem-specific implementations. The motivation is that general innovations are more publishable at AI conferences and journals, while problem-specific implementations are more likely to end up in domain-specific journals.

Finally, it is simple human nature to be lazy. Moreover, when companies demand superficial AI solutions (just for the looks of it), general algorithms work well enough. In such cases, the AI engineer has little incentive to really understand the problem at hand.

Should AI practitioners engage with the domain that they are designing AI systems for?

This question is somewhat rhetorical, since the answer (duh, yes) is implied in both Meredith’s and Jack’s tweets. But I think it is worth thinking about on a deeper level.

We do not expect electrical engineers to be well-informed about foundational architectural studies. Neither do we expect architects to be experts at designing full circuit diagrams for buildings. A well-designed building relies on both parties (and many more) doing their job and, just as importantly, communicating with each other clearly.

Likewise, I think it is impractical and inefficient to expect AI practitioners to be well-versed in the subject domain, especially when there is limited time to design the solution and the subject domain changes frequently, cases in which the engineer acts more in the capacity of an AI consultant.

But that is not what the question means. We are talking about whether AI practitioners should engage with the subject matter, not whether they should be proficient. The implicit assumption in Meredith’s question is that many practitioners avoid engaging with the subject matter. To use the architecture analogy, it is definitely a problem if an architect is unaware of or dismisses the importance of electrical engineering.

We can probably take that analogy further. Building design requires an interdisciplinary approach and the collaboration of experts across domains; designing great AI solutions should be no different. Such collaboration acknowledges the importance of different disciplines and is the perfect opportunity for practitioners to understand each other’s demands, concerns and contributions.

So yes, I think AI practitioners should definitely engage with the domain - not just through reading but through active collaboration with domain experts. It is important for companies, research labs and researchers to understand and make space for such collaborations to happen.


My personal response to Meredith’s tweet takes the form of my enrolment in the MSc in Urban Science, Policy and Planning. I had originally ventured into AI hoping to one day apply AI research to solving social problems. Yet admittedly, I had not attempted to understand the nature of those problems.

On the bright side, with my (albeit limited) experience as an AI researcher, I am now far better equipped to think about how AI can be used. Keeping that in mind, I am excited to learn about the social problems that afflict the world and to design solutions for them. Looking forward to engaging with my lecturers and peers!