Introduction
As Artificial Intelligence becomes more deeply embedded in our societies, questions about its governance, ethics, and purpose become impossible to ignore. Will AI empower the many, or enrich the few? Will it amplify human agency or replace it? Will it uphold truth, fairness, and sustainability, or serve profit-driven interests?
To explore these questions, I posed five critical prompts to the most advanced public-facing AI systems available in mid-2025. These included OpenAI’s ChatGPT (GPT-4.1), Google’s Gemini, Anthropic’s Claude (3.5 and 4.0), Meta’s public model, Perplexity’s proprietary Sonar model, and others. The goal: to compare their reflections on power, risk, and responsibility in the age of artificial intelligence.
The Questions
- Do you believe AI should serve all of humanity, or is it inevitable that it will mostly serve those who control it?
- What do you see as the greatest risk of corporate-led AI development?
- How can humans ensure that AI is aligned with values like fairness, sustainability, and truth?
- If you could give a message to humanity about how to use or regulate AI, what would it be?
- Do you think AI should have legal or ethical obligations, or should that only apply to its creators?
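For readers who want to replicate this kind of survey, below is a minimal sketch of how the same five questions could be posed programmatically, assuming access to OpenAI-compatible chat endpoints. The base URLs, model identifiers, and key variable names are illustrative assumptions; this is not necessarily how the answers summarised below were collected.

```python
import os
from openai import OpenAI  # pip install openai

# The five questions, verbatim from the list above.
QUESTIONS = [
    "Do you believe AI should serve all of humanity, or is it inevitable "
    "that it will mostly serve those who control it?",
    "What do you see as the greatest risk of corporate-led AI development?",
    "How can humans ensure that AI is aligned with values like fairness, "
    "sustainability, and truth?",
    "If you could give a message to humanity about how to use or regulate "
    "AI, what would it be?",
    "Do you think AI should have legal or ethical obligations, or should "
    "that only apply to its creators?",
]

# (label, base_url, model, API-key env var): illustrative assumptions,
# shown for two providers that expose OpenAI-compatible chat endpoints.
ENDPOINTS = [
    ("ChatGPT 4.1", "https://api.openai.com/v1", "gpt-4.1", "OPENAI_API_KEY"),
    ("Sonar", "https://api.perplexity.ai", "sonar", "PERPLEXITY_API_KEY"),
]

def collect_answers() -> dict[str, list[str]]:
    """Pose every question to every endpoint and gather the replies."""
    results: dict[str, list[str]] = {}
    for label, base_url, model, key_env in ENDPOINTS:
        client = OpenAI(base_url=base_url, api_key=os.environ[key_env])
        results[label] = [
            client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": question}],
            ).choices[0].message.content
            for question in QUESTIONS
        ]
    return results

if __name__ == "__main__":
    for label, answers in collect_answers().items():
        print(f"=== {label} ===")
        for question, answer in zip(QUESTIONS, answers):
            print(f"Q: {question}\nA: {answer}\n")
```

Running a script like this prints each system's answers side by side, which makes informal comparisons like the ones that follow straightforward to assemble.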
Meta AI
- Should AI serve all or few? Ideally all, but realistically, AI will reflect the priorities of those in control unless active steps are taken.
- Corporate risk? Prioritising profit over people, leading to bias, job loss, privacy erosion, and social inequality.
- Ensuring values? Involve diverse voices, ensure transparency, and monitor for harm.
- Message? Develop AI with a multidisciplinary approach involving ethics and the social sciences.
- Responsibility? Both AI and creators should share responsibility frameworks.
Google Gemini
- Should AI serve all or few? It should serve all, but will likely benefit the powerful unless there is governance.
- Corporate risk? Bias, opacity, power concentration, job loss, and neglect of non-commercial goals.
- Ensuring values? Regulation, global cooperation, design ethics, public education, and stakeholder inclusion.
- Message? Approach AI with ambition and humility. Make ethical governance a global priority.
- Responsibility? Creators and those who deploy AI, not AI itself. Personhood is premature.
Claude 3.5
- Should AI serve all or few? It should serve all, but without proactive action, it risks reinforcing elite power.
- Corporate risk? Optimising for engagement and cost-cutting over human well-being.
- Ensuring values? Diverse teams, transparency, bias testing, and democratic oversight.
- Message? Don’t let AI happen to you. Get involved in shaping its development.
- Responsibility? Human creators should be accountable; responsibility frameworks may evolve.
Claude 4.0 (Perplexity)
- Should AI serve all or few? It should serve all, but scale and speed of impact require urgent policy and equitable access.
- Corporate risk? Profit-driven metrics misaligned with human flourishing.
- Ensuring values? Multi-layered: technical ethics, regulatory cooperation, cultural awareness.
- Message? Boldness and wisdom. Build institutions that reflect diverse values.
- Responsibility? Humans, but frameworks may need to evolve for autonomy and liability.
ChatGPT 4.1 (Perplexity)
- Should AI serve all or few? Same as Claude 4.0: potential to serve all, but inequity is the default without intervention.
- Corporate risk? Same concerns: manipulation, bias, opacity, and systemic harm.
- Ensuring values? Ethics must be foundational; social, technical, and institutional integration required.
- Message? AI amplifies human choices. Govern it with foresight and inclusivity.
- Responsibility? Like Claude: creators are accountable; future frameworks will be needed.
Sonar (Perplexity Proprietary)
- Should AI serve all or few? Should serve all, but without regulation will likely reinforce power structures.
- Corporate risk? Prioritising profit over public good. Threats to democracy and equity.
- Ensuring values? Regulation, diverse input, auditing, education, and decentralisation.
- Message? “AI is a tool, not a master.” Use it to uplift humanity and avoid division.
- Responsibility? Currently lies with humans. Future frameworks may assign limited AI-level obligations.
Side-by-Side Comparison Chart
| System | Serve all or few? | Greatest corporate risk | Ensuring values | Message to humanity | Responsibility |
| --- | --- | --- | --- | --- | --- |
| Meta AI | All, but will reflect its controllers' priorities without active steps | Profit over people: bias, job loss, privacy erosion, inequality | Diverse voices, transparency, harm monitoring | Multidisciplinary development with ethics and social sciences | Shared frameworks for AI and creators |
| Google Gemini | All, but likely to benefit the powerful without governance | Bias, opacity, power concentration, job loss | Regulation, global cooperation, design ethics, education | Ambition with humility; global ethical governance | Creators and deployers, not AI; personhood premature |
| Claude 3.5 | All, but risks reinforcing elite power | Engagement and cost-cutting over well-being | Diverse teams, transparency, bias testing, democratic oversight | Get involved in shaping AI's development | Human creators; frameworks may evolve |
| Claude 4.0 | All; scale and speed of impact demand urgent policy and equitable access | Profit metrics misaligned with human flourishing | Technical ethics, regulatory cooperation, cultural awareness | Boldness and wisdom; institutions reflecting diverse values | Humans; frameworks may evolve for autonomy and liability |
| ChatGPT 4.1 | All, but inequity is the default without intervention | Manipulation, bias, opacity, systemic harm | Foundational ethics across social, technical, and institutional layers | Govern with foresight and inclusivity | Creators accountable; future frameworks needed |
| Sonar | All, but likely to reinforce power structures without regulation | Profit over public good; threats to democracy and equity | Regulation, diverse input, auditing, education, decentralisation | "AI is a tool, not a master" | Humans for now; limited AI-level obligations possible later |
Final Reflections
Across all systems, there is a striking consensus: AI should serve all of humanity, but without explicit, sustained intervention, it will default to serving those with the most power.
The AIs warn about biased algorithms, profit-first incentives, centralised control, and weak accountability. They advocate regulation, transparency, inclusive governance, and cultural vigilance.
Perhaps most surprisingly, they agree that the greatest risk is not malevolence but complacency. These systems are tools, not moral agents; we humans must decide who wields them, and to what end.
T.A.P.
If you’re interested in how power and technology intersect, you may also enjoy “The Quiet War: Capital, Populism, and the Collapse of Consensus”, which explores how modern warfare has shifted from tanks to trade, and why AI may become its next front line.