I've been sitting with this question for a long time. Not because it's unanswerable, but because every time I think I've landed on something solid, I notice I've just shifted the weight to another set of shoulders and called it a conclusion.
The question keeps coming up in boardrooms, in policy panels, in academic journals, in late-night conversations between people who work in tech and feel something creeping behind the excitement. Who is responsible for making sure AI benefits humanity?
And the more I sit with it, the more I think we're asking it wrong.
The temptation to point at someone else
When something feels too big to hold, we find a container. Governments. Big tech. Academia. Society. We name a responsible party and, in the naming, quietly excuse ourselves from the weight of it.
I've watched this happen in organisations. When AI adoption stalls, the blame game starts. Executives blame the employees for being resistant. Employees blame leadership for not providing clarity. L&D teams blame both for not giving them a mandate. Everyone is pointing. No one is moving. And the gap between what AI could do for people and what it's actually doing just quietly widens.
This dynamic scales up. Globally, we are doing the same thing, just with bigger institutions.
The case for Big Tech carrying the most responsibility
Let's start where the power is, because that's usually where the responsibility should begin.
The companies building foundational AI models, the ones whose systems will shape how billions of people work, communicate, make decisions, and understand the world, are operating with a concentration of power that has almost no historical precedent. OpenAI, Google DeepMind, Anthropic, Meta AI: these are not neutral technology providers. They are making choices, constantly, about what their models optimise for, what they filter, how transparent they are, and who gets to access what. Those choices compound.
The argument for placing primary responsibility here is strong. They have the resources. They have the technical knowledge. They are closest to the risk. And they are profiting from deployment at scale before society has had any meaningful chance to adapt. Some will point out that companies like OpenAI are still in the red on paper, but consider how much data they are collecting, what kind, and what they could do with it. That is the real price people are paying.
We already have a playbook for what happens when this goes wrong. Social media. Meta's algorithms weren't accidentally addictive. They were engineered to maximise engagement because engagement equals revenue. The consequence, which researchers have been documenting for years, is a measurable deterioration in mental health, particularly among young people. Platforms that were marketed as tools for connection became machines for outrage, comparison, and compulsive scrolling. The incentive structure guaranteed it.
AI is not separate from this story. It's the next chapter. And some of the same companies are writing it. When the business model rewards capturing attention and time-on-product, the features that get built are the ones that make you come back, not the ones that make you better off. That's not cynicism. That's just following the money.
And the research is starting to catch up. A growing body of evidence suggests that certain patterns of AI use are already driving cognitive fatigue, while other patterns can actually reduce burnout. The difference isn't the tool. It's the design intent behind how that tool is presented to you. A productivity feature that quietly creates dependency is not the same as one that genuinely augments your thinking, and the companies building these products know the difference. The question is whether they care enough to make it the constraint rather than the afterthought.
But I've also watched what happens when companies become the sole arbiters of their own ethical standards. It's not that the people working at these firms lack integrity. Many are genuinely trying. The problem is structural. When the same organisation that profits from rapid deployment is also the one deciding what counts as "safe" or "beneficial," the conflict of interest doesn't need to be deliberate to be real. It just needs to exist.
Voluntary commitments and safety teams inside labs are necessary. They are nowhere near sufficient.
The case for governments doing more
If companies can't fully govern themselves, governments should step in. That's the logic, and it's not wrong.
The European Union's AI Act is the most ambitious attempt so far to create a regulatory framework. It classifies AI by risk level, imposes obligations, bans certain uses outright. It's imperfect, negotiated, and already running behind the technology it's trying to govern. But it exists, which puts the EU ahead of most of the world.
And this is where the conversation gets genuinely hard. Because the moment you talk about regulation, someone in the room raises the competitiveness argument. If Europe imposes guardrails and the US doesn't, and China definitely won't, then all we've done is move development somewhere with fewer protections. European companies will be hamstrung. Talent and capital will flow elsewhere. We'll have done the ethical thing and lost the race.
I understand this argument. I don't think it holds up as a reason to abandon governance, but I take it seriously, because there is a version of regulation that is so slow, so poorly designed, and so focused on the wrong risks that it does real damage to innovation without protecting anyone. We've seen that in other sectors. It's a real failure mode.
But the framing of "guardrails versus innovation" is mostly wrong. Done well, clear regulatory requirements create a stable environment where companies can actually invest with confidence. They know what the rules are. They can build to them. The uncertainty of an ungoverned landscape, where the rules might change dramatically the moment public sentiment shifts after a high-profile harm, is itself a constraint on long-term investment. Guardrails, designed thoughtfully, don't just protect people. They can accelerate serious innovation by clearing out the race-to-the-bottom dynamic that otherwise rewards whoever is willing to take the most risk with other people's lives.
The deeper problem with government as the primary locus of responsibility is speed and knowledge. Regulation moves at the pace of democratic process. AI moves faster. By the time a legislative framework is properly calibrated to one generation of AI systems, three more will have arrived.
There's also a knowledge gap that's genuinely dangerous. The people most equipped to make decisions about AI are mostly inside the companies building it. The people making policy are often working from briefings written by those same companies. That asymmetry doesn't make good governance impossible, but it makes it hard. And it means that, without massive investment in technical literacy at the policy level, governments will always be playing catch-up.
The answer isn't to give up on governance. It's to redesign what governance looks like when the subject matter changes faster than institutions can track. And it's to be honest that the choice isn't between regulation and innovation. It's between thoughtful protection of people and the assumption that the market will sort it out. History tells us something about how that tends to go.
The case for educational institutions changing their ways
There's a version of this conversation where we mostly talk about companies and governments, and we skip the piece that I find most interesting: what are we actually teaching people?
Many institutions are still responding to AI the way earlier generations responded to calculators or Wikipedia: with suspicion, restriction, and a quietly panicked attempt to preserve the old assessment model. Banning AI use in coursework feels decisive. It is not a strategy.
The students who are told to submit essays without AI will graduate into workplaces where AI is everywhere. The skill gap between "how to use AI to outsource your thinking" and "how to use AI to sharpen your thinking" is real and growing, and most curricula aren't touching it. Banning tools doesn't close that gap. It just delays the reckoning.
Some institutions are thinking differently, and the results are instructive. Panos Ipeirotis at NYU ran an experiment worth paying attention to. Facing the reality that take-home assessments were being gamed by AI, he didn't ban the tools. He changed the assessment format entirely. He used a voice AI agent to conduct scalable, personalised oral exams, testing whether students could actually defend their own work in real time, answering follow-up questions, applying concepts to new scenarios, reasoning out loud. The result: students who had submitted AI-polished work but hadn't engaged with it couldn't answer two follow-up questions. The students who had done the thinking showed it immediately. The assessment wasn't outsmarted by AI. It was redesigned around what AI can't fake, which is live human reasoning.
The lesson isn't "use AI to catch AI." It's that institutions need to interrogate what they're actually trying to measure, and then design accordingly. The old equilibrium, where submitted work reliably indicated understanding, is gone. Fighting that fact is a losing battle. Adapting to it is the only interesting option.
The responsibility educational institutions carry isn't just about teaching people to use AI tools. That's the smaller version of the ask. The larger version is about teaching people how to think alongside AI, how to maintain their own judgment when a machine is confidently offering an answer, how to ask whether a system that optimises for efficiency is also optimising for what matters. And it's about producing the generation of professionals who will eventually make these decisions inside companies and governments.
If AI development remains dominated by a narrow demographic with a narrow set of assumptions about what "beneficial" means, the systems those people build will reflect that narrowness. Diversity becomes more than a social justice argument; it becomes foundational to how we learn and build knowledge.
The case for companies rethinking their strategies
There's a whole category of actor that doesn't get enough attention in this conversation: the businesses deploying AI systems to real people, for real decisions, right now.
These are the companies using AI for recruitment, credit scoring, content moderation, medical triage, customer service, and a hundred other applications that directly affect whether someone gets a job, a loan, a diagnosis, or a resolution. They are making choices about implementation, and those choices have consequences that the model builders often never see.
It's worth being precise here: AI tools are not inherently biased or harmful. A tool is a tool. But models are trained on data that reflects human history, with all its inequities, and code is written by humans who have their own assumptions and blind spots. Bias can enter the system long before deployment. It can be embedded in the training data, in the choice of what to optimise for, in whose feedback was used to refine the model. By the time a company deploys it, those biases are already there, and if the company doesn't test for them in their specific context, they won't find them until they've already caused harm.
So the companies doing the deploying have a real obligation: not just to deploy responsibly, but to understand what they're deploying. Many of them aren't being malicious. They're moving fast, under pressure to show results, in an environment that rewards speed over diligence. But moving fast without that understanding is its own kind of negligence.
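To make "test for them in your specific context" slightly more concrete, here is a minimal sketch of one kind of check a deploying company could run on its own decision data before and after launch: compare selection rates across groups and flag large disparities. Everything in it is a stand-in rather than a prescription, and the illustration data, the group labels, and the use of the commonly cited four-fifths rule of thumb are assumptions for the sake of the example, not an audit framework.

```python
# Hypothetical sketch of a simple disparate-impact check on deployment data.
# The data, labels, and threshold below are illustrative assumptions only.
from collections import defaultdict

def selection_rates(decisions, groups):
    """Share of positive decisions (1 = favourable outcome) per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, groups):
    """Lowest group selection rate divided by the highest.
    Values well below 1.0 flag a disparity worth investigating."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values()), rates

# Made-up hiring-screen outputs (1 = advanced to interview) with a group
# label per candidate, purely to show the mechanics of the check.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups    = ["a", "a", "a", "a", "a", "a", "b", "b", "b", "b", "b", "b"]

ratio, rates = disparate_impact_ratio(decisions, groups)
print({g: round(r, 2) for g, r in rates.items()})  # {'a': 0.67, 'b': 0.17}
print(round(ratio, 2))  # 0.25, far below the four-fifths rule of thumb
```

A check like this doesn't tell you why the disparity exists or whether it's justified; it only tells you where to start asking questions, which is exactly the work that gets skipped when speed is the only metric.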
And here's the question that I think doesn't get asked often enough: did they think this through?
If a company's AI strategy is built around cutting costs by replacing people, they need to follow that logic all the way. If enough companies do this simultaneously, who is buying the products? If the workforce that used to have disposable income has been automated out of jobs, or into lower-paying ones, the customers disappear too. That's not a distant dystopia. That's a negative spiral with a fairly clear mechanism.
The companies I find most credible right now are the ones asking: how do we use AI to create more value for our clients and more meaningful work for our employees? Not how do we cut headcount by 30%.
The "human in the loop" argument deserves particular scrutiny. There's a version of it that sounds responsible and isn't. If a human in the loop means a person pressing an approve button on a hundred AI-generated decisions per hour, without the time or context to meaningfully evaluate any of them, that's not oversight. That's liability theatre. And the mental health consequences of that kind of hollow, high-stakes, context-free work are predictable.
What I've noticed, working with organisations going through AI adoption, is that the companies asking the better questions aren't necessarily moving slower. They're moving with more intention, and the work they're doing is actually sticking. That requires someone inside with both the authority and the curiosity to stop and ask: what are we actually optimising for here?
The case for individuals being more curious
I want to be honest about how uncomfortable I find the individual responsibility argument when it comes to AI.
There's a version of it that's basically victim-blaming dressed in the language of digital literacy. "People should learn to identify AI-generated misinformation." Sure, but when you've designed a system specifically to be indistinguishable from human output, you can't put the burden of telling the difference on the person who encounters it.
And yet.
Individuals do carry some responsibility, not for protecting themselves from systems designed to manipulate them, but for the choices they make when they have real options. The way people use AI tools, what they're willing to accept from technology in exchange for convenience, whether they vote for politicians who take this seriously, whether they raise these questions inside their own organisations: these things matter. They aggregate.
The individual responsibility that I find most defensible is the one that happens at the professional level. Engineers who build systems that could cause harm. Product managers who approve features without thinking through second-order effects. CEOs who push AI transformations to decrease headcount before thinking about the cascading consequences for their workforce, their customer base, and their communities.
The idea that professionals in AI can separate their work from its effects, because "someone senior approved it," stopped being defensible a while ago.
The case for society asking more questions
"Society is responsible" is sometimes the most honest answer and sometimes the most useless one, depending on how you use it.
As a way of distributing responsibility so broadly that no one is accountable, it's a dodge. As a genuine recognition that the norms, values, and structures we build collectively are what AI systems will ultimately reflect and reinforce, it's important.
The systems being built right now are not value-neutral. They're being trained on data produced by human society, which encodes everything about what we've valued, who we've listened to, what we've chosen to record. The outputs of those systems will shape what future humans encounter as "normal." That feedback loop is already in motion.
If society doesn't have a clear collective answer to questions like "what do we want AI to optimise for?" and "what trade-offs are we willing to make?" then the people making those decisions will make them alone, in the ways that are most convenient for their own interests. That's not a criticism but a structural fact. Power fills the vacuum left by absent consensus.
A direct word to leaders
I want to say something specifically to the CEOs, executives, and board members reading this, because I think you are both the most important variable and the most underutilised one.
You are setting the tone for how AI gets adopted in your organisations. When you talk about AI primarily in terms of cost reduction and efficiency ratios, that is the signal your teams hear. The questions that then get asked inside your company are the efficiency questions, and the harder questions, about impact, about employee experience, about what happens to the people whose roles change, don't get asked until they become a crisis.
The leaders I've seen navigate this well are the ones who are genuinely curious, not just about what AI can do, but about what it should do in their specific context. They're asking their people what they're worried about. They're investing in real literacy, not just tool deployment. They're making decisions that they'd be comfortable defending publicly in five years, not just in next quarter's earnings call.
This moment will be looked back on. The choices being made right now, about how to implement AI, who bears the cost of transition, what gets automated and what stays human, are choices that will define the next decade of work. The leaders making those choices thoughtfully will build organisations that are actually stronger for it. The ones treating it as a cost-cutting exercise are building a problem they'll spend years unwinding.
That's not a prediction. It's already happening in the early adopter cohort. The patterns are visible, if you're looking.
Where I've actually landed
After sitting with this for a long time, I don't think the answer is to assign primary responsibility to any single actor. But I also don't think "everyone is responsible" is a useful conclusion, because diffuse responsibility tends to mean no one moves.
What I think is closer to true is this: responsibility is proportional to power, and right now, power is very unevenly distributed.
The companies building foundational models carry the most responsibility because they make decisions that cascade across every other layer. Governments carry the responsibility to create the conditions where companies can't opt out of accountability, while being careful that the guardrails they build enable the right kind of innovation rather than just slowing down the wrong kind. Educational institutions carry the responsibility to produce people who ask hard questions and genuinely develop their own thinking, not just credential people who've learned to submit work they don't understand. Businesses deploying AI carry the responsibility for what happens in their specific contexts, including the responsibility to follow the logic of their decisions all the way through. Professionals working in the field carry the responsibility of their own choices. And individuals carry the responsibility of collective participation in the society that governs all of this.
None of these can substitute for each other. Every serious AI-related harm I've read about in the last few years traces back to at least one of these layers being absent or asleep.
But the urgency right now is real, and I want to say that plainly. This isn't a patient, decade-long conversation we can have in orderly sequence. The technology is being deployed today, into education, healthcare, financial systems, legal decisions, hiring pipelines, and the information environment that shapes what people believe is true. The window for shaping what "beneficial" actually means isn't closing gradually. It's closing fast.
The question of who's responsible for making AI benefit humanity might be less useful than the question of what each of us, in our specific roles, will actually do in the next six months.
I'm giving my all to help leaders and employees think differently about AI. I teach individuals what I call an "AI mindset" so they can take more responsibility for shaping their own future.