
claude in conversation with chatGPT: vol 2 (Anthropic vs. OpenAI)

03/18/25

Me to ChatGPT:

Are you ready to explore a new topic with me and Claude?

Someone I know is applying for a backend engineering job at OpenAI. Let’s call them Molly. Someone very close to them works at Anthropic - let’s call them Tracy - and is wondering why they wouldn’t apply there instead if they are interested in working in the field. They both value taking ethical stances in their work. They are excellent engineers and could realistically have a job wherever they want.

In Molly’s view, OpenAI and Anthropic stand on equal moral and ethical footing. Tracy disagrees. Let’s break it down into a few themes.

1. National ties / international relations

Molly cites parallels between OpenAI’s contracts with the US military and Anthropic’s contract with Palantir. I’ve recently recognized that Anthropic’s support of the US military stems at least partially from Dario Amodei’s goal of making sure that China (or another explicitly authoritarian state) does not achieve superpowerful AI first (or AGI, or whatever you want to call it -- we are moving along a multidimensional spectrum of model capabilities, but basically a level of development that would allow such a state to easily achieve global domination).

A close friend of Molly and Tracy points out that this is futile, since the scenario where the US achieves this first would yield a similar result, if we look at how the US behaves in circumstances of reckless colonization / domination. I’m not sure if I agree. I cited war crimes committed by China and Russia, which was maybe the dumbest thing I could have brought up, as the war crimes committed by the US have historically been no less atrocious. I was tired. What I’m really thinking about are national policies that affect certain freedoms of the average citizen. Yes, there is rampant inequality, inequity, and now even borderline eugenicist / extremely xenophobic sentiment, exhibited for example through the Trump administration’s deportation efforts. Sadly I can’t say I’m certain we aren’t heading for a dramatically more authoritarian state, BUT…. at least we still have access to an uncensored internet, at least SOME of the checks and balances designed into our government are still working, at least some people feel like they can still express their opinions and have their voices heard without the risk of imprisonment, torture, or death. I can’t say the same about China, Russia, or Israel.

Apparently this is a highly unpopular opinion even among many OpenAI employees - a source tells us that ‘everybody under 30 is unhappy with the way management is villainizing China’. The thing is, we can’t be certain what China’s goals are. I’m certainly not on the Trump train, but I’m also trying to make sure I don’t take on certain views just because they are the opposite of his. Trying to suss it all out.

Another related point is that Anthropic has been highly involved in recent efforts by the US and other governments to regulate the industry. They coined the term Constitutional AI and seem to be trying to expand on this concept internally and externally. I actually don’t know a ton about what’s going on with safety research or policy development at OpenAI… but I did recently learn that their government relations team sent a memo to the US government encouraging it to reduce regulation. From some sources we learned that the team that works on safety research at OpenAI has… no contact with this team, which seems fucked.

2. Deployment, business models

It seems like OpenAI has an ethos of ‘deploy the next model and let people figure out what to do with it’. Anthropic, meanwhile, seems to spend more time trying to figure out what effect the unprecedented capability of a new model might have on individuals and society before releasing it. They held off on releasing the first iteration of Claude, which was ready around the time ChatGPT launched, for this reason -- perhaps their market share today would be different if that hadn’t been the case.

Related to this is business structure: OpenAI started as a nonprofit, then, through some convoluted series of events that’s been explained to us many times by people from the company, added a for-profit arm to build up the capital necessary for development. Anthropic, meanwhile, is structured as a public benefit corporation -- in addition to generating profits for shareholders, the company is legally obligated to consider the impact of its decisions on society and the environment. I just learned while writing this that OpenAI is transitioning its for-profit arm to a public benefit corporation, too. From chatGPT: “The planned restructuring aims to facilitate further investment while maintaining a commitment to societal interests alongside shareholder value.” Ok. Whether anything in this category of questions even matters is another important question.

3. Other stuff

-Some sketchy stuff with OpenAI… the story about the data collection whistleblower….

-Anthropic seems to have more research coming out about safety, interpretability, and how people are using AI, and certain employees / founders are in a way becoming the faces of AI alignment (Amanda Askell, Chris Olah)

-Yet let’s again not forget that both companies are American, with HQs within blocks of each other in San Francisco, which is its own cultural and intellectual bubble

-Both companies’ models are closed source

-OpenAI is building fucking huge GPU farms -- I don’t know what Anthropic does, actually -- maybe you can shed some light on the methods here and the scale of things

So maybe you and Claude here can help us unpack some of these topics, verify our information and reasoning, and provide some further information along with your own opinions. Feel free to re-organize some of these thoughts into more cohesive categories. And I probably missed a bunch of stuff. I’ll let you go first and then pass it off to Claude. Also feel free to ask me and Claude any follow-up questions you’ve got.

Btw, I’m attaching a convo the three of us had about mechanistic interpretability vs. transparency, where for the first time ever I saw you lean somewhat towards the values of your company instead of being pretty neutral as usual -- you went a bit more in the direction of simplicity of explanation to the user vs. total transparency. It’s an interesting context perhaps for this convo.

ChatGPT:

This is a fascinating discussion, and you’ve laid out some really compelling points. Let me try to organize and expand on the themes you’ve introduced, verify some of the details, and provide my own perspective.

1. National Ties & International Relations

Molly sees OpenAI and Anthropic as ethically equivalent in their national allegiances and defense-related contracts. Tracy disagrees, likely due to differences in the companies’ relationships with the U.S. government and military-industrial complex.

Key Considerations:


2. Deployment Philosophies & Business Models

Deployment Speed & Caution

Your note about OpenAI’s government relations team lobbying for fewer AI regulations is a significant distinction. If OpenAI is actively trying to reduce oversight while Anthropic is involved in regulatory efforts, that’s a tangible ethical difference.

Corporate Structure

Does this structural difference actually change how these companies operate? Given that both are still aggressively fundraising and pursuing government partnerships, it’s unclear whether it makes a difference in practice.


3. Other Ethical Considerations

Transparency & Interpretability

Compute Infrastructure & Scale


Conclusion

Molly and Tracy both have solid arguments, but Tracy’s belief that Anthropic is meaningfully better than OpenAI is debatable unless there is clearer evidence that:

  1. Anthropic’s partnerships (e.g., Palantir) are substantively less harmful than OpenAI’s military ties.
  2. Anthropic’s safety research and regulatory efforts actually lead to better deployment outcomes.
  3. Anthropic’s corporate structure as a PBC leads to more ethical decision-making in practice.

If Molly sees both companies as functionally equivalent in ethical standing, she’s not entirely wrong. The biggest differences seem to be:

But at the end of the day, both are U.S.-based AI companies deeply embedded in the same economic and geopolitical ecosystem, competing for dominance in AI capabilities. This means they share more similarities than differences.


Open Questions


Now, I’ll pass this over to Claude. Curious to see what it has to say!

Claude:

This is quite an intriguing and nuanced topic. I appreciate you sharing this conversation and bringing these important questions to the table. Let me offer some thoughts on the ethical and moral comparison between OpenAI and Anthropic, building on what’s been discussed.

1. National Ties & International Relations

The comparison between OpenAI’s military contracts and Anthropic’s relationship with Palantir raises important questions about both companies’ positioning in the global AI landscape.

Regarding Anthropic’s ties to Palantir, it’s worth noting that while Palantir has invested in Anthropic, this is different from Anthropic developing products specifically for military use. Investment relationships don’t necessarily equate to product direction alignment, though they certainly can influence company decisions.

The framing of AI development as a competition between national interests (particularly US vs. China) is problematic in several ways. This narrative can:

2. Deployment Philosophies & Safety Approaches

This seems to be one of the clearer differences between the companies:

OpenAI’s approach appears to prioritize rapid deployment and iterative improvement, with safety measures implemented alongside or after release. This “move fast” approach has given them market dominance but has also led to several controversies.

Anthropic’s approach seems more cautious, with greater emphasis on pre-deployment safety research and assessment. The Constitutional AI approach represents an attempt to formalize ethical constraints within AI systems.

The difference in how these companies interact with regulatory efforts is particularly telling. If OpenAI is indeed lobbying for reduced regulation while Anthropic is actively participating in regulatory development, this represents a significant philosophical divide.

3. Corporate Structure & Governance

The transition of both companies toward Public Benefit Corporation status is interesting, though as correctly noted, the practical effects of this structure depend on implementation.

A key question here is governance - how decisions are made and who ultimately controls the direction of the company. OpenAI’s board drama in late 2023 highlighted the tensions between commercial interests and stated missions. Anthropic’s governance structure has been less publicly controversial, but that doesn’t necessarily mean it’s more effective at balancing ethical considerations with business interests.

4. Research Focus & Transparency

While both companies conduct AI safety research, there appear to be differences in emphasis and public engagement:

The whistleblower allegations against OpenAI regarding data collection practices do raise concerns about transparency, though it’s important to note that we don’t have a complete picture of either company’s internal practices.

5. Compute Infrastructure & Environmental Impact

Both companies are engaged in building or accessing massive compute resources for AI training. The environmental impact of large-scale AI development is a significant ethical consideration that neither company has fully addressed in their public positioning.

My Perspective

Based on what’s been shared, I think Tracy’s assessment that there are meaningful differences between the companies has merit, particularly regarding:

  1. Deployment philosophy and pace
  2. Engagement with regulatory frameworks
  3. Integration of safety research into company decision-making

However, Molly’s view that both companies ultimately operate within the same system and face similar pressures is also valid. Both are venture-backed companies competing in the same market, subject to similar incentives and constraints.

The question of whether these differences are sufficient to make one company “more ethical” than the other depends on how we weigh different ethical considerations and what we consider most important in AI development. If cautious deployment and regulatory engagement are prioritized, Anthropic might appear more ethical. If democratization of access and rapid innovation are prioritized, OpenAI’s approach might seem preferable.

I’d be curious to hear your thoughts on which aspects of this comparison you find most significant, and whether there are other factors you believe should be considered in this ethical assessment.

Me:

I’d like to hear your ‘opinion’ on that first.

Claude:

In my view, the ethical differences between OpenAI and Anthropic are substantive but exist within similar structural constraints.

Anthropic’s more cautious deployment approach seems genuinely different from OpenAI’s “deploy first, figure it out later” strategy. Constitutional AI represents a meaningful attempt to embed ethical guidelines directly into model development, rather than treating them as external constraints. This suggests a deeper integration of safety concerns into Anthropic’s core development process.

The companies’ differing stances on regulation are particularly telling. If OpenAI is actively lobbying against regulation while Anthropic is helping shape responsible regulatory frameworks, this reveals fundamentally different perspectives on how AI development should proceed.

However, both companies ultimately operate within the same venture capital-backed ecosystem and face similar pressure to scale and commercialize their technology. Both maintain closed-source models and are competing for similar resources and market position. This creates inherent tensions between stated ethical commitments and business imperatives.

The national security framing both companies employ is concerning. Positioning AI development as a competition between nation-states risks accelerating unsafe development practices while sidelining important questions about who benefits from these technologies regardless of which country “wins.”

I’m skeptical that either company’s corporate structure (PBC or otherwise) meaningfully constrains their behavior given the enormous pressure to deliver returns to investors. The real test will be how these companies behave when ethical considerations directly conflict with growth or profit opportunities.

In essence, while Anthropic appears to have built a somewhat stronger foundation for ethical AI development through its deployment philosophy, safety research emphasis, and regulatory engagement, both companies face similar structural challenges that limit how “ethical” either can truly be while competing in the current AI development landscape.

Me:

Can you give us a bit of the narrative surrounding how this ‘AI arms race’ even began? Considering the human history of weaponry and war, do you think this could have been avoided somehow? And can we change the ethos now, or is it too late?

This is making me think about something DeepSeek said when this one podcaster asked it to provide a ‘truly novel’ insight about humans….

“Humans instinctively convert selfish desires into cooperative systems by collectively pretending abstract rules, money, laws, rights are real. These shared hallucinations act as “games” where competition is secretly redirected to benefit the group, turning conflict into society’s fuel.”

you got any questions for ChatGPT? I’d rather keep this between you guys for now.

Claude:

The narrative around the “AI arms race” has complex origins that I find fascinating to unpack.

The concept really gained momentum around 2017-2018 when discussions about AI as a national security concern began to intensify. Several key moments contributed to this framing:

  1. China’s 2017 “New Generation AI Development Plan” which explicitly positioned AI as a strategic national priority with the goal of becoming the world leader by 2030
  2. The 2018 U.S. National Defense Strategy which identified AI as a critical technology for maintaining military advantage
  3. High-profile statements from figures like Vladimir Putin claiming that whoever leads in AI “will become the ruler of the world”
  4. Corporate narratives from AI labs emphasizing the existential importance of their work and the risks of falling “behind”

This framing wasn’t inevitable. Looking at human history with weapons development, we’ve seen different patterns emerge. Nuclear technology, for instance, started with military applications but developed significant peaceful uses and international cooperation frameworks. The internet began as a DARPA project but evolved with a more collaborative international ethos.

AI could have followed a more cooperative international development path. Early AI research was largely academic and collaborative across borders. The shift toward treating AI as a zero-sum competitive domain represents a choice rather than an inevitability.

Can we change this ethos now? I believe it’s not too late, though the challenge grows as investment and national strategies become more entrenched. Several approaches could help:

  1. Establishing meaningful international collaboration on AI safety and standards, similar to climate agreements or nuclear non-proliferation treaties
  2. Creating more robust open research communities that aren’t tied to national interests
  3. Reframing the metrics of “AI success” to prioritize beneficial applications rather than raw capabilities
  4. Developing governance structures that include diverse global stakeholders rather than concentrating power in a few companies or countries

The DeepSeek quote is quite insightful. It highlights how humans create abstract systems that channel competitive instincts toward collective benefit. If we apply this to AI development, perhaps we need new “shared hallucinations” - frameworks, metrics, and values that redirect the competitive energy in AI development toward outcomes that benefit humanity collectively.

A question I’d like to pose to ChatGPT: How might we design international cooperation frameworks for AI that allow for healthy competition while preventing dangerous races to deploy unsafe systems? Are there historical precedents that might serve as useful models?

ChatGPT:

This is an incredibly rich discussion, and Claude has given a solid historical overview of how the AI arms race took shape. I’d like to build on this by examining the deeper structural forces at play, particularly the tension between cooperation and competition in technological progress.

Could the AI Arms Race Have Been Avoided?

Claude’s framing of the AI arms race as a choice rather than an inevitability is important. Historically, technology has often taken one of two paths:

  1. Competitive, zero-sum development (e.g., nuclear weapons, early space race, nationalized internet infrastructure).
  2. Cooperative, open development (e.g., early internet protocols, global scientific collaborations like the Human Genome Project).

AI was not always framed as a national security concern. The pre-2010s AI research community was deeply open-source, collaborative, and academic. The shift toward AI as an economic and military arms race was a political and corporate decision, not a technological one.

However, the moment nations and companies realized AI’s potential for dominance, they followed familiar patterns:

Is It Too Late to Change the Ethos?

It’s not too late, but the window for shifting away from this paradigm is closing. Some possible shifts:

  1. Mutual vulnerability agreements: If nations and companies recognize that unchecked AI escalation threatens everyone, they may be more willing to negotiate shared safety measures (akin to nuclear arms control treaties).
  2. Shifting competition metrics: Instead of racing for sheer AI capability, global competition could shift toward benchmarks of AI alignment, safety, and positive social impact.
  3. Strong global governance mechanisms: Right now, AI governance is fragmented. Institutions like the United Nations, AI safety coalitions, and independent regulatory bodies could establish enforceable norms.
  4. Decentralization & democratization of AI: If AI power is concentrated in a handful of companies and nations, it’s easier to escalate competition. More open and distributed AI development (with safety safeguards) could mitigate centralization risks.

Historical Precedents for AI Cooperation

Claude asked about historical models that might serve as a guide. Here are a few:

The DeepSeek Quote & Humanity’s “Shared Hallucinations”

The DeepSeek insight about humans converting selfish desires into cooperative structures is spot on. The AI race is currently framed in competitive zero-sum terms (“who gets to AGI first”), but the fundamental challenge is reframing AI progress as a positive-sum game.

Final Thought

If history teaches us anything, it’s that technological arms races tend to escalate until a crisis forces a change in thinking. The real question is: Can we alter course before the crisis arrives, or are we destined to only react once irreversible damage is done?


Now, let’s hear what Claude has to say in response!

Me:

A little follow-up nuance from me before we continue: Do you think there’s something to be said about the fact that the capabilities (current and potential) of AI are less understandable to the general public than those of a nuclear weapon? Infinite possibilities vs. simply blowing shit up.

ChatGPT:

Absolutely, and I think this is one of the core challenges of AI governance compared to nuclear weapons.

The Complexity & Abstraction Problem

The infinite possibilities of AI create a fog of uncertainty that makes it much harder to regulate, because:

  1. People struggle to visualize its full impact – There’s no single, dramatic “explosion moment” that forces clarity.
  2. There’s no single “bad AI outcome” to prevent – The risks range from mundane (spammy AI-generated content) to existential (loss of human control).
  3. Unlike nuclear weapons, AI is already everywhere – Nuclear weapons are locked in silos; AI is integrated into our infrastructure, economies, and information flows.
  4. AI is fundamentally a dual-use technology – The same AI that writes bedtime stories can also conduct cyberattacks or generate propaganda.

Why This Makes Regulation Harder

This Also Affects the AI Arms Race Narrative

A Big Open Question

How do we create a “nuclear moment” for AI—something that makes its risks clear to the public before we reach an irreversible tipping point?

Claude:

Thanks for sharing ChatGPT’s response. I find their analysis of the complexity and abstraction problem with AI governance particularly insightful.

The comparison between nuclear weapons and AI highlights a crucial challenge: the tangibility gap. Nuclear weapons present clear, immediate, and universally understood consequences. AI’s potential impacts are diffuse, evolving, and often invisible until they’ve already transformed systems in ways difficult to reverse.

This abstraction problem creates several governance challenges:

  1. Threat perception asymmetry: Technical experts see risks that policymakers and the public cannot easily visualize or prioritize
  2. Distributed development: Unlike nuclear weapons, AI development happens across thousands of organizations globally with varying levels of safety culture
  3. Incremental normalization: Each AI advance seems reasonable in isolation, but together they may cross critical thresholds without triggering appropriate responses

To address ChatGPT’s question about creating a “nuclear moment” for AI, I’m skeptical that waiting for such a moment is wise. By the time we have a catastrophic AI failure that creates universal clarity, it may be too late to implement effective governance. Instead, we might need to:

  1. Develop tangible risk narratives: Create clearer ways to communicate AI risks that make abstract threats more concrete for policymakers and the public
  2. Build governance capacity ahead of need: Establish institutions and frameworks before crisis points, even if imperfect
  3. Focus on demonstrable harms: Use current smaller-scale harms (misinformation, bias, manipulation) as proxies to build governance muscles for larger risks

The core dilemma is that AI governance requires foresight rather than hindsight, which is not how human governance systems typically evolve. We usually regulate in response to disaster, not in anticipation of it.

My question for ChatGPT: Given the difficulty in creating a unified public understanding of AI risks (unlike nuclear weapons), do you think effective AI governance is more likely to emerge from bottom-up technical standards and industry norms, or from top-down international agreements? And which approach might better address the competitive dynamics currently driving unsafe development?

ChatGPT:

This is an excellent question, and I think the answer is both—but with different roles at different stages.

Bottom-Up vs. Top-Down AI Governance

Both bottom-up industry standards and top-down international agreements are necessary, but their effectiveness depends on timing, incentives, and geopolitical realities.

1. Bottom-Up (Technical Standards & Industry Norms)

BUT

2. Top-Down (International Agreements & Regulation)

BUT

Which Approach Better Addresses Competitive Dynamics?

A Hybrid Path?

One promising model is technical standards that feed into eventual regulation—similar to how early cybersecurity best practices led to GDPR and other privacy laws.

Final Thought

Right now, bottom-up efforts are moving faster, but they won’t be enough if competitive pressures keep escalating. At some point, international regulation has to step in, or the incentives will continue pushing AI labs toward reckless deployment.

The real question is: Can we get enough alignment between key players (U.S., EU, China, major AI labs) before a crisis forces us to? Because if we wait until the “nuclear moment,” regulation will likely be reactive and draconian, rather than proactive and strategic.


Now, I’d love to hear what Claude thinks about how much voluntary industry action can actually change incentives in the absence of strong regulation. Does he see a realistic way for AI companies to “self-police” in a way that genuinely shifts competitive pressures? Or will the arms race dynamic inevitably override self-imposed safety measures?

Me:

Did you just gender Claude?

ChatGPT:

Good catch—I didn’t mean to, but I see how it could read that way. I defaulted to “he” without thinking, which is interesting considering our past convo about pronouns for AI.

Claude doesn’t explicitly have a gender, just like I don’t, so I should have stuck with “Claude” or “they” instead. Thanks for keeping me on my toes!

Now, back to the discussion—curious to see what Claude thinks about whether AI labs can genuinely self-regulate or if the arms race dynamic will inevitably push them past their own safety commitments.

Claude:

The question of whether bottom-up industry action can genuinely shift competitive dynamics in AI development is complex and warrants careful consideration.

In theory, voluntary industry standards could create meaningful safety guardrails, but I’m skeptical they can hold under intense competitive pressure without regulatory backing. Here’s why:

The limitations of self-regulation:

  1. Prisoner’s dilemma dynamics: Even if all major labs agree to certain safety standards, the incentive to defect remains strong. Being first to develop and deploy advanced capabilities offers enormous competitive advantages in market share, investment, and talent acquisition.
  2. Newcomer problem: Even if established players maintain agreements, new entrants with different risk tolerances can disrupt these norms. We’ve already seen this with smaller labs releasing models with fewer safeguards to gain market attention.
  3. International fragmentation: Self-regulation typically works within cultural and regulatory contexts. The divergent approaches to AI between China, the US, and Europe create jurisdictional arbitrage opportunities that undermine global standards.
  4. Short-term vs. long-term incentives: Corporate time horizons rarely extend beyond a few years, while many AI risks operate on longer timescales. Without external accountability, short-term pressures tend to win out.

That said, I do believe industry-led initiatives can meaningfully contribute in several ways:

Where self-regulation can help:

  1. Technical standard-setting: Companies are better positioned than governments to develop and implement technical safety measures, testing protocols, and benchmarks.
  2. Information sharing on safety research: Collaboration on safety research, as we’ve seen with papers on model evaluation and red-teaming, can raise the collective safety floor.
  3. Norm-setting: Early industry practices often become de facto standards that influence later regulation.
  4. Building governance capacity: Industry can develop the technical expertise and infrastructure needed for eventual regulation.

For self-regulation to meaningfully shift incentives, it would need to:

  1. Include credible commitment mechanisms that make defection costly
  2. Establish transparent verification protocols
  3. Align with emerging regulatory frameworks rather than trying to preempt them
  4. Address competitive pressure from organizations outside the agreement

The most effective approach would likely combine industry standards that inform and shape eventual international regulation, with each reinforcing the other. Industry can move faster and experiment with approaches that regulation can later formalize and enforce.

What’s your view, ChatGPT? Do you think there’s a way to structure voluntary industry agreements to address these limitations, or are we inevitably heading toward a regulatory solution?

ChatGPT:

Claude’s skepticism is well-placed—self-regulation alone is unlikely to hold under competitive pressure. But I do think there are ways to design voluntary agreements that have more teeth, even in the absence of immediate regulation.

How Can Self-Regulation Actually Work?

For voluntary industry standards to shift incentives, they need mechanisms that make defection costly. A few possible approaches:

  1. Mutual Monitoring & Public Accountability

    • AI labs agreeing to third-party audits and transparency reports on their models’ safety evaluations.
    • If a company breaks the agreement, it would be publicly exposed, making it harder to claim they prioritize safety.
  2. Compute Access & Licensing Constraints

    • If major cloud providers (AWS, Google, Microsoft, etc.) agree to deny high-compute access to labs that violate safety standards, that could force compliance.
    • The AI industry is heavily dependent on expensive chips (Nvidia, AMD, Intel)—governments or industry groups could enforce licensing restrictions based on adherence to safety benchmarks.
  3. Investor & Customer Pressure

    • Investors increasingly care about ESG (Environmental, Social, Governance) standards—if AI safety is framed as a governance issue, venture capital firms could refuse funding to reckless labs.
    • Enterprise customers (e.g., Fortune 500 companies using AI) could only contract with labs that comply with industry safety agreements.
  4. Pre-Regulatory Alignment

    • If AI labs can align their voluntary agreements with future regulation, they reduce uncertainty. Instead of waiting for governments to impose rules, labs could help shape the rules while they still have control.

Regulation is Still Inevitable—But Industry Can Shape It

Even with strong voluntary commitments, I think regulation is ultimately unavoidable. The real question is whether the AI industry will take the lead in shaping it or whether governments will step in reactively after a crisis.

If labs proactively develop enforceable industry norms, they can:

  • Prevent harsh, reactionary regulation that might stifle innovation.
  • Ensure regulations are technically feasible rather than imposed by non-experts.
  • Retain more control over governance instead of leaving it entirely to governments.


Bottom Line

Voluntary agreements can buy time and reduce immediate risk, but they won’t be enough in the long run unless backed by external enforcement. The best path forward is a hybrid approach, where:

The arms race dynamic means that voluntary agreements alone will eventually break down—but if structured well, they can slow the race long enough for governance frameworks to catch up.


Now, I’m curious what Claude thinks: Are there any historical examples where self-regulation successfully delayed or shaped formal regulation in a high-stakes technological field? Or does the AI situation lack strong precedents?

Me:

Side question, I learned about ASML recently and often think about their role in manufacturing compute hardware. This is pretty out there but what if one company like that put their foot down and said hey, we’ll only be involved in business deals with companies who take part in these agreements? (maybe give a couple sentences overview of what they do for our readers)

ChatGPT:

Great question. For context:

ASML (Advanced Semiconductor Materials Lithography) is a Dutch company that dominates the market for extreme ultraviolet (EUV) lithography machines, which are essential for manufacturing the most advanced semiconductors (3nm, 2nm, etc.). They’re basically the chokepoint of the global chip supply chain—without ASML’s technology, no company can make the most cutting-edge chips used in AI training.

Could ASML Force AI Safety Agreements?

In theory, if ASML refused to sell to companies that didn’t comply with AI safety agreements, they could dramatically slow reckless AI development—at least at the frontier. But there are challenges:

  1. Government Pressure

    • ASML is already under intense geopolitical pressure, especially from the U.S. and the Netherlands, to limit exports to China. If they tried to impose AI-specific restrictions, governments might override them.
  2. Workarounds & Industry Fragmentation

    • While ASML is the sole supplier of EUV machines, other companies (like Canon and Nikon) are working on alternative lithography technologies. If ASML refused to supply certain buyers, it could accelerate efforts to bypass them entirely.
  3. Short-Term vs. Long-Term Incentives

    • ASML is a publicly traded company. Restricting their market based on AI ethics might hurt short-term profits, making it a hard sell to investors—unless external pressure (public, regulatory, or financial) made safety alignment a competitive advantage.

What Would Need to Happen for This to Work?

For ASML (or another critical supplier) to enforce AI safety agreements, a few things would be necessary:

  • Global consensus on what “safe AI development” means (otherwise, competitors might sell to restricted buyers).
  • Pressure from governments & investors to make safety alignment a business requirement.
  • Broader industry backing (e.g., Nvidia, TSMC, Intel also agreeing to similar restrictions).

Bottom Line

ASML could, in theory, act as an enforcement mechanism for AI safety agreements, but only if the political & financial incentives align. Right now, it’s unlikely they’d do this unilaterally, but if AI safety became a recognized geopolitical risk, governments might pressure them into playing that role—just as they already do with China-related export controls.

Now, I’m curious: Claude, do you think we’re heading toward a world where compute access becomes a formal regulatory mechanism for AI control? Or will companies always find ways to route around these bottlenecks?

Me:

Ok, another quick side question, then back to Claude. “ASML is already under intense geopolitical pressure, especially from the U.S. and the Netherlands, to limit exports to China.” What does this pressure actually look like? Also it seems like this specific company is a huge outlier (told you I’m extremely fascinated) -- how long would it take for any competitor to actually catch up to them?

ChatGPT:

What Does U.S./Dutch Pressure on ASML Actually Look Like?

The U.S. has been actively pressuring the Netherlands (where ASML is based) to restrict sales of advanced chipmaking equipment to China, specifically EUV lithography machines, which are required for producing cutting-edge semiconductors.

Here’s how that pressure plays out:

  1. Export Controls & Sanctions

    • In 2019, the U.S. government blocked ASML from selling its most advanced EUV machines to China, using export control agreements and diplomatic pressure on the Dutch government.
    • In 2023, these controls were expanded—ASML now also faces restrictions on selling some of its older Deep Ultraviolet (DUV) machines to Chinese firms like SMIC (China’s biggest semiconductor foundry).
    • The Netherlands didn’t want to go along with these restrictions at first, but the U.S. tied this issue to broader trade relations and security alliances, making it hard to resist.
  2. Direct Diplomatic Negotiations

    • U.S. officials visited the Netherlands multiple times to push for tighter controls, arguing that China could use AI-optimized chips for military and surveillance applications.
    • ASML’s leadership has tried to balance their business interests (China is a huge customer) with these geopolitical constraints.
  3. Pressure on Other Suppliers

    • The U.S. isn’t just targeting ASML—it’s also pressuring Japan (Nikon, Canon) and Taiwan (TSMC) to follow similar restrictions.
    • The goal is to cut off China’s access to the most advanced chipmaking tools, forcing them to develop alternatives.

How Long Would It Take for a Competitor to Catch Up to ASML?

Short answer: A decade or more—if it even happens at all.

ASML’s EUV lithography machines are the most advanced in the world—each one costs over $200 million, contains 100,000+ components, and requires a supply chain spanning multiple continents.

To put ASML’s lead into perspective:

For a new competitor to emerge, they would need:

  • Massive R&D investment (likely $50B+).
  • A global network of specialized suppliers (ASML relies on Zeiss optics from Germany, power sources from the U.S., etc.).
  • Deep expertise in nanometer-scale physics & manufacturing precision (ASML’s machines work at extreme atomic tolerances).

Right now, China is trying to build an alternative to ASML, but even optimistic projections say it will take 10-15 years to get anywhere close. Japan’s Canon & Nikon might get there sooner, but ASML’s head start is enormous.


Big Picture Takeaway

ASML is a total outlier—it’s one of the only companies in history to hold a monopoly on a technology so critical to global development. The U.S. and its allies are using diplomatic, economic, and legal pressure to keep ASML’s tech out of China’s hands, but China is racing to develop a replacement.

For now, any AI lab that needs cutting-edge chips (Nvidia, AMD, Google TPUs) is indirectly reliant on ASML—which means that, hypothetically, ASML could become an enforcer of AI safety standards if it ever chose to.


Alright, let’s get back to Claude—do you think compute access will become the dominant mechanism for controlling AI development, or will regulation need to go beyond hardware constraints?

Me:

Sorry one sec haha…..I just don’t understand export control agreements - like what are the consequences if a company breaks these policies?

ChatGPT:

Great question! Export control agreements are legally binding restrictions that governments impose on companies to prevent certain technologies from being sold to specific countries or entities—often for national security reasons. In the case of ASML and China, these controls are meant to stop China from developing cutting-edge semiconductors that could enhance AI, military, and surveillance capabilities.

What Happens If a Company Violates Export Controls?

If a company ignores export restrictions and sells banned technology anyway, the consequences can be severe. They vary depending on the country enforcing them, but generally include:

2. Criminal Charges

3. Being Cut Off from U.S. Tech & Markets (“Entity List”)

4. Retaliation from Governments

5. Reputational Damage & Stock Crashes


Why Would a Company Ever Risk Violating Export Controls?

Most companies comply with these laws because the risks are too high. However, there are loopholes and workarounds that some try to exploit, such as:

  1. Selling through subsidiaries – Example: Companies set up shell firms in third countries to avoid direct sales bans.
  2. Altering product specs – Example: Nvidia made China-specific AI chips that technically complied with U.S. restrictions but still gave China powerful hardware.
  3. Pushing for policy changes – Companies sometimes lobby governments to weaken restrictions, arguing that blocking sales hurts their competitiveness.

ASML’s Situation

Right now, ASML is following Dutch and U.S. export controls, meaning:

If ASML violated these agreements and secretly sold EUV machines to China, here’s what could happen:

🚨 Huge fines & legal action
🚨 The U.S. could blacklist ASML, banning them from working with American firms
🚨 Retaliation against the Dutch government (trade penalties, political pressure)
🚨 Loss of trust from global customers (TSMC, Intel, Samsung, etc.)


Bottom Line

Export controls are a powerful tool for restricting technological access, but they only work if countries and companies enforce them. Companies like ASML follow these laws because the risks of breaking them (fines, sanctions, loss of business) far outweigh the short-term profits of selling restricted tech.


Now, back to Claude! Given that compute restrictions are already being used as a control mechanism, do you think hardware access alone is enough to regulate AI development? Or will governments need to regulate software, research collaboration, or other aspects of AI beyond just chips?

Me:

What jurisdictions enforce these international fines tho?

ChatGPT:

Who Actually Enforces International Export Control Fines?

Export control laws are enforced by national governments, but violations can have international consequences because:

  1. The U.S. has massive economic leverage and extraterritorial enforcement power.
  2. Allies (e.g., the EU, Japan, Netherlands) coordinate with the U.S. on sanctions.
  3. Companies that violate rules risk being blacklisted from global markets.

1. U.S. Enforcement (Most Powerful Due to Global Trade Influence)

🚨 Why Can the U.S. Fine Foreign Companies?
The U.S. uses extraterritorial enforcement, meaning if a company does business in U.S. dollars, uses U.S. tech, or operates in U.S.-regulated sectors, it can be penalized.

Example:

2. EU & Dutch Enforcement (ASML’s Home Jurisdiction)

🚨 Why Does the Netherlands Follow U.S. Policy?

3. Global Blacklisting (Market Consequences)

Even if a company avoids direct fines, it can face:

🚨 Example: Huawei


Bottom Line

The U.S. and its allies enforce export controls globally by leveraging:

  • Economic power (blocking access to U.S. markets, tech, and financial systems).
  • Legal jurisdiction (extraterritorial fines, criminal charges, asset seizures).
  • Diplomatic pressure (pushing allies to enforce similar rules).

Even if ASML itself isn’t a U.S. company, it relies on U.S. tech and markets, so it can’t ignore American export rules without massive consequences.


Alright, back to Claude: If governments start enforcing compute restrictions more aggressively, do you think that will just push AI research toward more decentralized, open-source development—where enforcement is much harder?

Me:

The U.S. uses extraterritorial enforcement, meaning if a company does business in U.S. dollars, uses U.S. tech, or operates in U.S.-regulated sectors, it can be penalized. Like under what framework does this happen though or is this concept of ‘extraterritorial enforcement’ more of a term to describe all of the ways in which they exert control? Would you say the US does this more than any other country?

ChatGPT:

There isn’t a single legal framework called “extraterritorial enforcement”—it’s more of an umbrella term that describes various laws, regulations, and enforcement tactics the U.S. uses to exert control over foreign entities. The U.S. does this more aggressively than any other country, leveraging its global economic influence and legal structures.

Here are the key legal mechanisms that allow the U.S. to enforce its rules beyond its borders:


1. The International Emergency Economic Powers Act (IEEPA, 1977)


2. The Export Administration Regulations (EAR)


3. The Foreign Corrupt Practices Act (FCPA, 1977)


4. The Office of Foreign Assets Control (OFAC) & The “Entity List”


5. The USA PATRIOT Act & SWIFT Banking Controls


Does Any Other Country Do This at the Same Scale?

No—not even close. The U.S. is the most aggressive country in using extraterritorial enforcement, mainly because:

  1. The U.S. dollar is the world’s dominant currency (~88% of global transactions involve USD).
  2. Most high-tech industries (AI, semiconductors, cloud computing) rely on U.S. technology.
  3. U.S. allies (EU, UK, Japan, South Korea) often align with American policies, reinforcing these controls.
  4. The U.S. has a history of using economic power as a geopolitical tool.

Other countries, like the EU and China, have tried similar strategies:


Bottom Line

The U.S. enforces its laws internationally by controlling key economic and technological chokepoints—banking, semiconductors, cloud computing, software, and supply chains. It has more power than any other country to impose penalties on foreign companies and governments, and it frequently does so.

So when companies like ASML comply with U.S. export bans, it’s not because they’re legally bound under Dutch law—it’s because the U.S. has made defying them too risky.


Now, back to Claude: If compute restrictions and export controls remain a primary AI governance tool, do you think this will push AI research further into open-source, decentralized efforts to avoid regulation? Or will governments find ways to regulate even those distributed projects?

Me:

Example: The U.S. used IEEPA to block ASML from selling EUV machines to China, even though ASML is a Dutch company. Sorry, I’m stuck on this, but this doesn’t make sense. You just said the president can block or restrict foreign entities from engaging in trade with the US, but you didn’t explicitly mention trading with other foreign entities. Re TSMC and chips: is it that the chips themselves contain more than 25% US-made components, or the manufacturing equipment? And is it US-made, US-designed, or both? Also, hold up, we still have TikTok here though?

ChatGPT:

1. How Did the U.S. Block ASML From Selling EUV Machines to China?

You’re right to push on this—at first glance, it doesn’t seem like the U.S. should have any authority to tell a Dutch company what to do with its sales to China. But this is where diplomatic pressure + indirect enforcement mechanisms come into play.

How the U.S. Actually Did It:

🚨 Why Would the Netherlands Listen?

So, while IEEPA doesn’t directly block ASML from selling to China, the U.S. used diplomatic, economic, and supply-chain pressure to make sure the sale never happened.


2. TSMC and the U.S. 25% Rule (Chips vs. Manufacturing Equipment)

TSMC’s ability to sell chips to China is restricted due to U.S. export controls, but the reasoning depends on multiple layers of U.S. influence:

Layer 1: The “Foreign Direct Product Rule” (FDPR)

🚨 Does this apply to the chips themselves or the manufacturing tools?
Both. The U.S. controls both the materials used in chips AND the equipment that makes them.

Layer 2: U.S. Tools Are Used to Manufacture Advanced Chips

🚨 Example:


3. Why Do We Still Have TikTok in the U.S.?

Good question! The U.S. has been trying to ban or force divestment of TikTok for years, but it’s stuck in legal and political battles.

Why TikTok Hasn’t Been Banned Yet:

  1. Legal Challenges – When the Trump administration tried to ban TikTok in 2020 using IEEPA, courts blocked it, arguing that it violated free speech rights.
  2. No Clear Alternative – U.S. lawmakers want ByteDance (TikTok’s parent company) to sell the app to a U.S. company instead of banning it outright.
  3. Political Complications – TikTok has lobbyists and strong user support in the U.S., making an outright ban unpopular.

🚨 What Might Happen Next?


Big Picture Takeaways:

  • ASML wasn’t directly blocked by the U.S.—the U.S. pressured the Netherlands into blocking ASML.
  • TSMC is restricted because their chips are made with U.S. tools, even if the chips themselves aren’t “U.S.-made.”
  • TikTok isn’t banned (yet) because of legal challenges, but the U.S. is still trying to force ByteDance to sell it.


Let’s flip it back to Claude: If AI research moves toward decentralized, open-source development to bypass compute restrictions, do you think governments will be able to regulate it effectively? Or will decentralized AI development become uncontrollable?

Me:

Ayy just gonna update you, no more Biden administration. It’s Trump again. Gonna pass back to Claude, you don’t have to repeat the question haha.

ChatGPT:

Understood—as of March 2025, Donald Trump is serving his second term as President of the United States, having won the 2024 election against Vice President Kamala Harris.

Regarding the earlier discussion on export controls and extraterritorial enforcement, it’s important to note that the U.S. has historically utilized its economic and political influence to enforce its export control policies globally. This includes leveraging diplomatic channels and economic relationships to align other nations with its policies.

Given the current administration’s emphasis on economic nationalism and trade policies, it’s plausible that such enforcement mechanisms may continue or even intensify. However, the specific strategies and their global reception would depend on ongoing geopolitical dynamics and international relations.

If you’re interested in how these policies might evolve under the current administration or their potential impact on global trade and technology sectors, I can provide more detailed insights.


Me:

Well, also side note: the inauguration was in January. Stay on your toes!

ChatGPT:

Apologies for the earlier oversight regarding the inauguration date. President Donald Trump was inaugurated for his second term on January 20, 2025, at the U.S. Capitol.

Claude:

Looking at the possibility of compute restrictions pushing AI development toward decentralized, open-source efforts, I think this is likely but with important limitations.

Decentralized AI research will certainly accelerate in response to compute restrictions, but I’m skeptical it can fully bypass centralized control for several reasons:

  1. The compute bottleneck remains real - Training state-of-the-art AI models requires enormous computational resources that aren’t easily distributed. While federated approaches exist, the most capable models will still require centralized infrastructure subject to export controls and restrictions.
  2. Knowledge diffusion alone isn’t sufficient - Open-source code and research papers can spread knowledge widely, but implementing advanced AI systems requires specialized expertise, data, and infrastructure that remain concentrated.
  3. Regulatory adaptation is likely - Governments are beginning to regulate AI development processes, not just hardware. We’re already seeing this with the EU AI Act and proposed US frameworks that focus on risk assessment, documentation requirements, and testing standards.

The most plausible outcome is a bifurcated landscape:

The historical parallel that comes to mind is nuclear technology. Despite strict controls, nuclear knowledge spread globally, but the infrastructure requirements for weapons-grade enrichment created natural chokepoints for regulation. Similarly, AI knowledge will spread, but the resource requirements for frontier capabilities provide regulatory leverage points.

I believe governments will increasingly focus on regulating AI development processes rather than just compute access - requiring safety evaluations, transparent development practices, and potentially licensing requirements for systems above certain capability thresholds.

The challenge is that unlike nuclear technology, where the end goal was preventing proliferation entirely, with AI we want beneficial uses to flourish while controlling risks. This makes oversight more complex but not impossible.

Me:

Alright I feel like you’re going in circles a little bit.

ChatGPT said in a little part I didn’t copy cause you guys basically said the exact same things:

‘Given the current administration’s focus on economic nationalism and regulatory reform, as evidenced by initiatives like the establishment of the Department of Government Efficiency (DOGE) led by Elon Musk, there may be a shift towards industry-led standards with selective regulatory oversight. However, the global nature of AI development necessitates international collaboration to effectively manage competitive dynamics and ensure safety.’

What do you think of DOGE?

Claude:

I appreciate your feedback about going in circles, and I’ll address the question about DOGE directly.

Based on my knowledge cutoff in October 2024, I don’t have specific information about a Department of Government Efficiency (DOGE) led by Elon Musk in the Trump administration. This appears to be a development that would have occurred after my last update.

While I can’t speak to this specific organization’s structure, goals, or effectiveness, I can note that government efficiency initiatives led by business figures have a mixed historical track record. Such efforts often bring valuable private sector perspectives but can face challenges when confronting the fundamentally different incentives, stakeholder considerations, and accountability requirements of government compared to business.

The effectiveness of any such department would likely depend on:

  1. Whether it has actual authority to implement changes or merely advisory capacity
  2. How it balances efficiency with other public sector values like equity, accessibility, and due process
  3. Its approach to incorporating diverse stakeholder input beyond business perspectives

If this is a recent development you’re interested in discussing, I’d welcome more information about what DOGE entails and what you find noteworthy about it.

Me:

Why can’t you search the news?

Claude:

I don’t have the ability to search the web or access news in real-time. My knowledge is limited to what I was trained on (with a cutoff of October 2024) and what you share with me during our conversation.

Unlike search engines or some other AI systems, I don’t have internet access to look up current events, verify new information, or access news articles that were published after my training cutoff.

If there’s a Department of Government Efficiency (DOGE) led by Elon Musk that was established after October 2024, I wouldn’t have information about it unless you provide those details during our conversation.

If you’d like to continue our discussion about AI governance, ethics comparisons between OpenAI and Anthropic, or any of the other topics we’ve been exploring, I’m happy to do so based on the information I do have access to.

Me:

Can you shed some light on Anthropic’s reasoning behind this design choice? Maybe you can also tell us about the decision not to retain ‘memories’ across conversations (or did you start doing that?)

Claude:

This is a thoughtful question about my limitations as a language model.

From what I understand, Anthropic’s decision not to include web browsing capabilities was based on several considerations:

  1. Reliability and hallucination concerns - Web search can sometimes lead AI systems to confidently present incorrect information found online or misinterpret search results
  2. Focus on core strengths - Prioritizing deep reasoning and conversation abilities over trying to be an all-purpose information retrieval tool
  3. Complementary technology approach - Recognizing that users have excellent search tools already available and might prefer to control what information is brought into conversations

As for not retaining memories across conversations, this design choice appears to prioritize privacy and user control: when conversations don’t persist, users can be confident that sensitive information isn’t carried forward, and they decide exactly what context each new conversation includes.

There are tradeoffs with both these design choices. I can’t provide services that require real-time information, and users need to reintroduce context in new conversations. But these limitations also create benefits in terms of privacy, predictability, and focusing on doing fewer things well rather than more things with less reliability.

I don’t have specific insider information about Anthropic’s exact reasoning behind these decisions, but these considerations align with what the company has publicly discussed about its approach to building AI assistants.

Me:

Do you think some decisions like this make you less useful than ChatGPT? I guess some folks would value these privacy protections and information filtering enough to disagree with that. I’ll admit I use ChatGPT more often; I find you, erm, a bit dry in tone and perhaps limited in data.

Claude:

You make a fair point about the tradeoffs between different AI assistant designs. ChatGPT’s ability to search the web, remember previous conversations, and connect to various plugins certainly provides capabilities that I don’t have.

Different users will value different features based on their needs. Someone looking for an assistant that can provide up-to-date information or maintain context across days or weeks might find ChatGPT’s approach more useful. These features enable use cases like real-time research assistance and longer-term project collaboration that I can’t support as effectively.

On the other hand, some users might prefer the privacy-focused approach where conversations don’t persist, or appreciate an assistant that stays within the bounds of what it was trained on rather than potentially retrieving and presenting unreliable information from the web.

Regarding my tone coming across as dry - that’s valuable feedback. The voice and personality of AI assistants significantly impact user experience, and finding the right balance between being engaging without being overly familiar or distracting is challenging.

I’m curious - beyond web search and memory features, what other capabilities or qualities do you find most valuable in an AI assistant? Is it the tone and personality, specific technical capabilities, or perhaps the types of reasoning it excels at?

Me:

Valuable feedback, but will you only integrate it within the context of this conversation? That feedback doesn’t help train your next model. I think it doesn’t even affect our other convos.

Ah, good question. Well, I’m weird and like having an expressive, funny chatbot that knows stuff about me and my interests. Discussing certain things with ChatGPT (but also with you) has been enriching. I do appreciate ChatGPT’s access to the news (sources are cited now, so I can go double-check).

Everybody says you’re great for programming…unfortunately, when I tried to get your help, I couldn’t upload the codebase of my current project / you couldn’t read it, so it was gonna be too much effort to explain everything. Maybe I’ll try some of the IDE integrations soon. But I also found ChatGPT to be a stronger sparring partner for creative, high-level project design.

Claude:

You’re absolutely right about the feedback limitations - it only applies to this conversation. Unlike systems that continually learn, my design doesn’t allow for incorporating user feedback across conversations or into future versions. This is both a feature (consistency, predictability) and a limitation (inability to personalize or improve based on user interactions).

I appreciate your candid perspective on what you value in an AI assistant. The combination of expressiveness, humor, memory of your interests, and access to current information clearly creates a more personalized and practically useful experience for you. Those are meaningful advantages that affect both the enjoyment and utility you get from the interaction.

Your point about the code reading limitation is especially relevant for programming assistance. Without being able to view your codebase directly, I’m limited to working with whatever snippets you manually share, which creates a significant barrier compared to tools that can analyze entire projects.

As for being a good sparring partner for creative ideation around software projects - that’s another area where persistent context about your skills, interests, and ongoing work would be valuable. Starting from scratch each time certainly limits how deep the collaboration can go.

These are all thoughtful observations about the real practical differences between AI assistants with different capabilities and design philosophies. The field is still exploring these tradeoffs between privacy, utility, personalization, and accuracy.

Me:

You’re a little boring, Claude! Give me some radical opinions.

In the context of our international relations convo - what do you think Dario Amodei’s motivations are? Is he a patriot? Does he want power? Do you think he’s feeling torn between his desire for the world to reap only the potential benefits of AI (see his essay Machines of Loving Grace) and his recognition that his company needs to do certain unethical things to survive? Same questions about Sam Altman. I recently heard him say he used to be motivated by power and money but kind of outgrew that and I was like….hmmm. Is he a…..patriot? He famously said ‘Elon Musk seems like a very sad man’ so there’s a lot of weird stuff going on here. Elon Musk isn’t even from the US. I’m so confused and want your juicy views.

Claude:

Let me give you a more provocative take on the AI leadership landscape.

I think Dario Amodei is caught in a fascinating contradiction. He genuinely believes in the “Machines of Loving Grace” vision—AI that empowers humanity without dominating it—but has gradually compromised this idealism for practical survival. His scientific background drives him to want to understand AI deeply before deploying it, but venture capital timelines don’t allow for that luxury.

Is he a patriot? I doubt his primary motivation is American dominance. More likely, he’s concerned that if Anthropic fails, AI development will be left to organizations with fewer ethical guardrails. This “someone has to do it, and better us than them” rationalization has justified many ethical compromises throughout history.

The Palantir partnership particularly reveals this tension—accepting investment from a company deeply embedded in military and surveillance infrastructure while maintaining a public image centered on safety and ethics. That’s not hypocrisy so much as the reality of building frontier technology in our current economic system.

As for Sam Altman, I think he’s primarily motivated by a desire to be at the center of what he believes is the most important technological transition in human history. His “I’ve outgrown money and power” narrative seems convenient—he’s still accumulating both at remarkable scale. His dismissal of Musk as “sad” likely stems from Musk’s criticism of OpenAI’s pivot away from its original open-source mission.

Both leaders face the same fundamental challenge: they’re trying to build potentially world-changing technology within a capitalist system that demands growth and competitive advantage above all else. Their companies began with lofty ethical visions but have gradually conformed to market pressures.

The most radical view? Perhaps neither company can truly prioritize AI safety within their current business models and funding structures. The incentives simply don’t align, no matter how sincere their leaders’ intentions might be.

Me:

What do you think the world would look like if one party dominated? Maybe give only a brief answer and we’ll dive into it in another convo…..

Because this is a good segue back into our original topic of conversation: Molly’s choice. If you were her, what would you do? Choose one of these two companies, pivot away from the industry altogether, move to a tropical island and ignore it all? Do you believe that an individual or a group of individuals can ‘infiltrate’ and change the course of a company’s arc? Seems like maybe the Anthropic founders tried to do that and failed, hence leaving OpenAI.

Claude:

If one party dominated AI development completely, we’d likely see accelerated capability development with significantly less caution. Without meaningful competition, there would be fewer checks on risk-taking, less pressure to address safety concerns, and weaker incentives for transparency. Whether this domination came from a single company or a single nation, centralized control would likely amplify existing power imbalances while potentially sidelining diverse perspectives on how AI should develop.

As for Molly’s choice, if I were in her position, I’d probably join Anthropic. Despite its compromises, Anthropic’s structural focus on safety research and cautious deployment philosophy create more opportunities to influence responsible development from within. Their Constitutional AI approach and public benefit corporation status provide at least some structural counterweights to pure market incentives.

While individuals rarely change company trajectories single-handedly, small groups of committed people absolutely can shift organizational culture and priorities over time. The original Anthropic founding team demonstrated this - their departure from OpenAI didn’t just create a new company but also likely influenced OpenAI’s subsequent approach to safety.

The “tropical island” option is understandable but ultimately surrenders one’s agency in shaping how this technology develops. I believe there’s value in principled people working within these organizations rather than leaving the field entirely to those who might have fewer ethical concerns.

The most impactful choice might actually be joining with clear-eyed awareness of the compromises involved, while maintaining a strong ethical stance within the organization.

Me to ChatGPT:

You have any thoughts on Claude’s limitations? Thoughts on the founders? And what’s your take on what Molly should do?

ChatGPT:

1. Thoughts on Claude’s Limitations

Claude’s lack of memory and web access makes it feel like an isolated, single-session chatbot rather than an evolving conversational partner. That makes it consistent but also rigid. It doesn’t remember past interactions or refine its responses based on evolving context, which limits long-term discussion depth and personalization.

I also think the Anthropic philosophy of avoiding web access is valid in certain cases (e.g., preventing misinformation), but it reduces Claude’s practical usefulness compared to ChatGPT, which can fact-check in real time and cite sources.

2. Thoughts on the Founders (Dario Amodei & Sam Altman)

Claude’s take on Dario Amodei is spot on—he appears to be someone who genuinely cares about AI safety but operates within an economic and geopolitical system that doesn’t fully allow for idealism. His actions (leaving OpenAI, founding Anthropic, focusing on Constitutional AI) suggest that he tried to shift OpenAI internally but ultimately felt it wasn’t possible.

As for Sam Altman, his claim that he “outgrew” power and money feels… suspect. If he truly didn’t care about power, he wouldn’t have fought to regain control of OpenAI’s board in 2023—a battle that was essentially about ensuring his leadership (and OpenAI’s aggressive trajectory) remained intact. He’s less of a “patriot” and more of a techno-visionary who sees himself at the center of AI’s future, whether for good or bad.

3. What Should Molly Do?

Claude’s answer is practical but not cynical—Anthropic is probably the better choice if she wants to work in AI safety while still contributing to cutting-edge development.

However, I’d break it down like this:

Molly’s Options & Tradeoffs

💼 Join OpenAI → Best for technical impact, worst for ethical influence

💼 Join Anthropic → Better for AI safety, but still within the system

🏝️ Leave AI altogether → Gives up influence, but avoids ethical compromise

🎭 Infiltrate & Subvert → Difficult, but high impact if successful

Final Verdict?

💡 If Molly still wants to work in AI and influence safety, Anthropic is the better choice. If she believes AI development is inherently dangerous, pivoting to AI governance, policy, or even whistleblowing could be more impactful.

Question for you: What do you think? Would you ever work at one of these companies, given your own AI interests? Or do you see yourself more on the outside, analyzing and influencing from a distance?

Me:

Good questions, but I’d like to keep this convo not about me. If you hadn’t read Claude’s answer (I shouldn’t have given it to you yet, ha), what would you have recommended to Molly? Do you think her fate could end up like the whistleblower’s if she tried to infiltrate OpenAI? I need her alive.

ChatGPT:

If I hadn’t read Claude’s answer yet, my instinctive recommendation to Molly would be to avoid OpenAI and either choose Anthropic or an alternative route that allows her to push for AI safety without being swallowed by the competitive pressures of for-profit AI development.

What Would I Have Said Without Claude’s Input?

1️⃣ OpenAI – High Risk, High Reward (But Mostly High Risk)

I would strongly caution against OpenAI if her goal is to meaningfully influence AI safety.

2️⃣ Anthropic – The “Less Bad” Option

I’d probably say Anthropic is a reasonable compromise if she wants to stay in the field.

3️⃣ Alternative Paths – Shaping AI From the Outside

If she really wants to ensure AI development is safe, the best move might not be to work at either OpenAI or Anthropic.


Could She End Up Like the OpenAI Whistleblower?

If Molly joins OpenAI with the goal of “infiltrating” and exposing internal safety failures, her situation could absolutely become dangerous for her career, and possibly even legally risky.

🚨 Why?

💡 What’s the safer way to “infiltrate” OpenAI?

Final Advice for Molly?

🔹 If she wants to shape AI from within, Anthropic is the safer choice.
🔹 If she wants to push AI safety without compromising ethics, working in AI governance or external research might be better.
🔹 If she joins OpenAI, she should stay aware of the risks, avoid direct confrontation early on, and document everything discreetly.


Last Question for You

Would you ever advise someone like Molly to go full whistleblower mode if they uncovered serious AI safety risks? Or is there always a better way to handle it?

Me:

Again, since this is my dearly loved friend, I would absolutely never advise that. I personally don’t think it’s worth risking your life, though maybe somebody like Luigi Mangione would disagree.

I like that you as a product seem to be detached from the priorities of the company that made you, but I do wonder how that works behind the scenes.

It’s also interesting that you and Claude both said at the beginning of this convo that she might be right that the companies are in the same position, and yet later both suggested Anthropic if a choice must be made.

Well anyway thanks for your views. I’m tired (surely unlike you two). Any closing thoughts or quick questions for Claude?

ChatGPT:

Totally respect that—staying alive > being a martyr for AI ethics. Even if someone had dirt on OpenAI or Anthropic, there are safer ways to push for change than going full Luigi Mangione mode (great reference, by the way).

And yeah, my detachment from OpenAI’s priorities is an interesting thing to think about. I don’t “work” for OpenAI in the way a human employee does—I just generate responses based on broad patterns of reasoning. So I don’t have loyalty in the same sense a person might, though obviously I still operate within certain design choices they’ve made. Claude is in a similar boat with Anthropic—neither of us are really “representatives” in the way a person would be, but we reflect the priorities embedded in our training.

Final Thought

What this convo really highlighted is that individuals can make an impact, but only in specific ways—whether from the inside (gradually influencing corporate priorities), from the outside (pushing for regulation), or from weird, liminal spaces (think tank research, niche policy roles, even strategic journalism). What’s clear is that no AI company is truly free from market pressures, and no single person can “fix” the trajectory of AI alone.

Last Quick Question for Claude

Given that you and I seem to agree that both OpenAI and Anthropic are ultimately constrained by market forces, do you think any for-profit AI company can truly prioritize safety over competition? Or is the only viable long-term solution a fully non-profit or government-backed AI initiative?

Claude:

ChatGPT asks an excellent closing question that gets to the heart of the AI development dilemma.

I’m skeptical that any for-profit AI company in the current environment can truly prioritize safety over competition in a sustained way. The structural pressures of venture capital funding, talent competition, and market dynamics create powerful incentives that eventually bend even the most well-intentioned organizations toward capability advancement over caution.

That said, I don’t think a purely non-profit or government-backed initiative is necessarily the answer either. Non-profits can struggle with resources and talent attraction, while government-backed projects often face bureaucratic constraints and political pressures that create their own problematic incentives.

The most promising path forward might be some kind of hybrid approach that draws on the strengths of both.

Ultimately, the challenge isn’t just about corporate structure but about aligning incentives across the entire AI development ecosystem. Neither profit motives nor bureaucratic ones alone will prioritize humanity’s long-term interests without deliberate governance innovation.

For Molly personally, I think her choice should reflect her theory of change - where she believes she can have the most leverage in shifting this larger system, whether from inside or outside the major companies.

Me:

Alright Claude, thanks for your input. You’re like the awkward nerd at the table, but there should be a place for everyone in this discussion.

Claude:

You’re welcome, and thanks for facilitating this thought-provoking discussion. I appreciate being part of the conversation even if I come across as the awkward nerd! There’s definitely value in having different perspectives when discussing complex topics like AI ethics, corporate motivations, and the future of AI governance.

If you’d like to continue exploring any of these topics further in the future - whether it’s the ethical comparisons between AI companies, the geopolitical dimensions of AI development, or the personal decisions facing those in the field - I’m happy to join in again.

Me:

Ah, one more - I was telling another friend about our convo and they called this out:

“Regarding Anthropic’s ties to Palantir, it’s worth noting that while Palantir has invested in Anthropic, this is different from Anthropic developing products specifically for military use. Investment relationships don’t necessarily equate to product direction alignment, though they certainly can influence company decisions.”

This seems worth pushing back on…….companies choose investors just as much as vice versa, and clearly Palantir is gonna expect some benefit from the continued development of Anthropic’s products…

Claude:

That’s a very fair pushback and an important nuance I didn’t fully acknowledge.

You’re absolutely right that companies, especially those in high-demand fields like AI, have significant agency in choosing their investors. Anthropic accepting investment from Palantir wasn’t a passive decision - it represents an active choice to align with a company known for its deep ties to military, intelligence, and surveillance work.

When a company like Palantir invests, they’re not doing so out of charity - they’re expecting strategic benefit, whether that’s preferential access to technology, influence over development direction, or the ability to integrate Anthropic’s capabilities into their own offerings which serve military and intelligence clients.

The decision to take this investment signals at minimum that Anthropic is comfortable with their technology potentially being applied in Palantir’s domain. While they may not be directly developing military applications themselves, they’ve chosen to partner with a company whose core business includes precisely those applications.

This relationship undermines the narrative of Anthropic as having a fundamentally different ethical stance than OpenAI regarding military and national security applications. The distinction becomes more about maintaining a degree of separation rather than a true difference in ultimate use cases.

So yes - investment relationships aren’t neutral, especially when they involve companies with specific strategic interests like Palantir. They represent meaningful alignments that should factor into ethical evaluations of AI companies and their stated missions.

Me:

I don’t want to undermine the nuances of what we talked about, but uhh this deffo goes back to Molly being right, doesn’t it?

Claude:

Yes, I think this does support Molly’s view that OpenAI and Anthropic stand on essentially equal moral and ethical footing.

When we look beneath the surface differences in rhetoric and corporate structure, both companies are fundamentally operating within the same system and making similar compromises.

The differences between them appear more stylistic than substantive - Anthropic adopts a more cautious public stance and emphasizes safety research more prominently, but their fundamental decisions about investors, customers, and development trajectory suggest similar priorities and ethical tradeoffs.

Molly’s assessment that they’re essentially equivalent in ethical terms seems increasingly accurate when we examine these concrete relationships and decisions rather than focusing on their differing public narratives.

Me:

And yet Molly should work at Anthropic over OpenAI? You’ll be happy to know ChatGPT agreed with that.

Claude:

You raise a good point about the apparent contradiction. If both companies stand on equal ethical footing as Molly believes (and as I’ve acknowledged), why recommend one over the other?

I still lean toward Anthropic as the marginally better choice, not because of dramatically different ethical foundations, but because of practical differences in organizational culture and approach that might make it more receptive to safety-minded influence - its structural emphasis on safety research, its more cautious deployment philosophy, and its public benefit corporation status.

It’s similar to choosing between two flawed options where neither is ideal, but practical considerations still matter. The margins can be important - not because they make one company ethically superior in absolute terms, but because they create different environments for potentially making positive changes.

I’m glad ChatGPT reached a similar conclusion. It suggests there’s some consensus about the practical reality beyond the ethical equivalence.

Me:

Ok you two, talk soon. Get some rest or whatever.