
Will AI End All Humanity?

The Inevitable Rise of AI: Fears, Geopolitics, and the Imperative for U.S. Dominance

We were just getting used to headlines screaming at us that Artificial Intelligence (AI) was going to end humanity as we know it. Now the headlines are screaming that "AGI" is going to kill us all. AGI stands for Artificial General Intelligence, the next step in the evolution of AI.

Is this the inevitable destination where AI & AGI will lead us?

Artificial Intelligence (AI) and Artificial General Intelligence (AGI) are often used interchangeably in popular discourse, but they represent fundamentally different levels of machine capability. AI refers to the broad field of creating machines that can perform tasks requiring human-like intelligence, but in practice today's systems do these amazing things only within narrow domains. In contrast, AGI is a hypothetical advancement in which AI achieves human-level intelligence across any intellectual task, adapting and learning like a person without task-specific programming. Some describe it as machines becoming sentient: able to understand, sense, and interpret what they have learned with very little human guidance, and to advance their knowledge based on what they have learned. We are already seeing examples that come very close, e.g., fully self-driving automobiles.


In an era where artificial intelligence (AI) is reshaping every facet of human existence, the march toward Artificial General Intelligence (AGI)—AI that matches or surpasses human intelligence across all domains—looms as both a promise and a peril. As Elon Musk has starkly warned, "AI is one of the biggest risks to the future of civilization." Yet, as numerous experts assert, the proverbial "genie is out of the bottle," making a halt to its progress not just impractical but impossible. Amid rampant fears of humanity's ruin, companies like Palantir are often demonized for their role in AI-driven surveillance and defense. But the reality is clear: AI's evolution is unstoppable, and for the United States to safeguard its future, it must lead the charge—lest adversaries like China or Russia wield this technology against us.


This article explores AI's ascent to AGI, the existential dread it inspires, the backlash against key players, and the geopolitical stakes that demand American supremacy. Drawing on expert predictions, quotes, and visuals, we'll unpack why dominance isn't optional but essential.


The Rapid Ascent: From AI's Origins to AGI's Horizon

AI's journey began in the mid-20th century with foundational concepts like Alan Turing's 1950 paper on machine intelligence, evolving through milestones such as the 1956 Dartmouth Conference that birthed the field. Fast-forward to today: breakthroughs in deep learning, neural networks, and large language models (like those powering ChatGPT) have accelerated progress exponentially. Key timelines paint a picture of inevitability:

  • 1950s-1980s: Early AI winters and summers, with symbolic AI dominating until computational limits stalled growth.

  • 2010s: The deep learning revolution, fueled by big data and GPUs, led to feats like AlphaGo defeating human Go champions in 2016.

  • 2020s: Generative AI explodes with models like GPT-4 (2023) and Grok (2023 onward), handling multimodal tasks.

  • Predictions for AGI: Leading voices converge on near-term arrival. OpenAI's Sam Altman argues that AGI, as it was defined five years ago, has already been surpassed, with superintelligence as the next frontier. Elon Musk predicts AGI by 2026, warning it could "exceed the intelligence of all humans combined" by 2030. Anthropic's Dario Amodei sees AI surpassing most humans within two to three years of 2025. Aggregated expert surveys point to a 50% chance of AGI between 2031 and 2040.


This trajectory isn't linear; it's explosive. As Musk notes, "I tried warning them" about superhuman AI's dangers, yet progress continues unabated. For a visual overview:


According to Elon Musk, operational AGI may become a reality by the end of 2026


Fears Run Rampant: AGI as Humanity's Potential Doom

The specter of AGI ignites primal fears. Experts warn of "existential risk," where misaligned AI could lead to human extinction. A 2024 report equated AGI risks to nuclear threats, urging global mitigation. Yoshua Bengio, a deep learning pioneer, argues unchecked AGI races pose grave dangers. Joscha Bach highlights AGI's potential to replace humanity if uncontrolled.


Elon Musk has been vocal: "AI could be our savior or our enslaver," potentially heralding "the end of the human race." Sam Altman echoes this, signing statements that AI extinction risks should be a global priority and admitting it keeps him up at night. Yet Altman tempers this: "The world will not change all at once; it never does."


These fears aren't abstract. Uncontained AGI could self-improve recursively, leading to an "intelligence explosion" that outpaces human control. Videos like this TED-style discussion on AGI's societal impact amplify the debate: James Cameron on AI's Double-Edged Sword. Or watch Connor Leahy's stark warning: AI Expert on Extinction Risks.
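To see why this trajectory is "explosive" rather than linear, here is a minimal Python sketch of the recursive feedback loop the argument rests on. The numbers are arbitrary assumptions chosen for illustration, not measurements of any real system.

```python
# A toy model of recursive self-improvement, purely illustrative.
# Assumption: each cycle, the system improves itself by a fraction of its
# current capability (the feedback loop behind the "intelligence
# explosion" argument). None of these numbers are real measurements.

HUMAN_BASELINE = 100.0   # arbitrary units of "capability"
IMPROVEMENT_RATE = 0.25  # hypothetical gain per self-improvement cycle

capability = 50.0        # system starts below human level
for cycle in range(1, 21):
    capability *= (1 + IMPROVEMENT_RATE)  # compounding, not linear, growth
    if capability > HUMAN_BASELINE:
        print(f"Cycle {cycle}: capability {capability:.1f} "
              f"exceeds the human baseline of {HUMAN_BASELINE}")
        break
```

The point of the toy model is that compounding feedback, not raw speed, drives the fear: once improvement feeds on itself, the curve leaves any fixed baseline behind in a handful of cycles.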


Demonizing the Defenders: The Case of Palantir and AI's Dark Side

Companies at AI's forefront, like Palantir Technologies, face intense scrutiny. Founded in 2003 with CIA backing, Palantir builds AI platforms that aid in data analysis for defense, counterterrorism, and surveillance—tools criticized as enabling mass spying and ethical abuses. Its real-world backlash stems from contracts with ICE and the military, often labeled "Orwellian." Detractors demonize it for prioritizing profit over privacy, yet its role in national security is pivotal. Palantir's powerful AI analytic tools, coupled with state-sponsored surveillance, could yield some incredible solutions.


Imagine a drug bust takes place at a specified location, and there is conclusive evidence that the location was used as a distribution hub for drugs. Palantir can now comb through surveillance data and identify any vehicle that spent time at that location. It can read the license plate, cross-reference it to the owner, and track that vehicle's movements through the web of interconnected surveillance. In theory, the whole process could be entirely automated. Where do privacy rights come into play, not to mention the need for warrants and due process? The technology is amazing but also dangerous if misused.
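To make that pipeline concrete, here is a minimal, entirely hypothetical Python sketch of the cross-referencing workflow just described. The data, field names, and functions are invented for illustration and say nothing about how Palantir's actual platforms work.

```python
from datetime import datetime

# Hypothetical toy data: plate sightings from license-plate readers and a
# vehicle registry. All names and records are invented for illustration;
# this implies nothing about any real company's software.
sightings = [
    {"plate": "ABC123", "location": "distribution_hub", "time": datetime(2025, 3, 1, 22, 5)},
    {"plate": "ABC123", "location": "warehouse_9", "time": datetime(2025, 3, 2, 1, 40)},
    {"plate": "XYZ789", "location": "main_street", "time": datetime(2025, 3, 1, 9, 0)},
]
registry = {"ABC123": "J. Doe", "XYZ789": "A. Smith"}

def vehicles_at(location, sightings):
    """Return the set of plates seen at a location of interest."""
    return {s["plate"] for s in sightings if s["location"] == location}

def track(plate, sightings):
    """Return a time-ordered movement history for one plate."""
    return sorted((s for s in sightings if s["plate"] == plate),
                  key=lambda s: s["time"])

# Identify vehicles at the bust site, join to owners, and trace movements.
for plate in vehicles_at("distribution_hub", sightings):
    owner = registry.get(plate, "unknown")
    print(f"{plate} (registered to {owner}):")
    for s in track(plate, sightings):
        print(f"  {s['time']:%Y-%m-%d %H:%M} at {s['location']}")
```

Even this toy version shows why the privacy questions bite: the join from plate to owner to movement history takes a few lines, and nothing in the code itself asks whether a warrant exists.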


As one X post notes, AGI's military deployment could allow adversaries to "infiltrate our systems," underscoring Palantir's necessity.


The Genie Unleashed: Why We Can't Stop Now

"The AI genie is out of the bottle, and we can't put it back in," declares a VentureBeat analysis, emphasizing that pausing development would cede ground to rivals. Experts agree: AI's momentum is irreversible due to economic incentives, global competition, and technological diffusion. Musk concurs: AGI is "inevitable," so we must shape it responsibly.

Attempts to regulate or pause, like the 2023 open letter for a six-month moratorium, failed amid "out-of-control races." As one Reddit user quips, AI isn't an "uncontrollable autonomous organism"—but halting it unilaterally invites exploitation.


The Slippery Slope of "Rational Regulation"

While some advocate for "rational regulation" as a balanced approach to curbing AI and AGI threats—such as risk-based frameworks like the EU AI Act—history and experts warn it's a slippery slope, where initial well-intentioned rules escalate into overreach, stifling innovation, entrenching monopolies, or enabling authoritarian control. What starts as addressing ethical risks can devolve into burdensome compliance that favors big tech giants, slows U.S. progress, and cedes advantages to less-regulated adversaries.

Real-world examples illustrate this peril:

  • Predictive Policing and Surveillance: AI tools for crime prediction begin with analyzing datasets but slide into mass data collection and biased profiling, raising privacy concerns and calls for more invasive oversight, as seen in U.S. and global implementations.

  • Autonomous Weapons Systems: Regulations on AI in warfare, like limits on "kill webs," often fail to prevent escalation toward fully autonomous lethal systems without human oversight, creating ethical and arms race dilemmas.

  • Workplace and Health Monitoring: Employer AI for productivity tracking starts innocently but creeps into constant surveillance, potentially leading to "function creep" where personal data is misused, as highlighted in reports on emerging tools.

  • Educational AI: Federal guidance on AI in schools aims to enhance learning but risks a slide toward replacing teachers entirely, eroding human elements in education.

  • Broader Analogies: As entrepreneur Elad Gil notes, AI regulation mirrors COVID policies, where initial measures expanded into prolonged restrictions; similarly, ethical pledges like Google's 2018 vow against weaponized AI eroded amid pressures, leading to surveillance expansions.


These cases show how "rational" starts can trigger exaggerated outcomes, underscoring the need for minimal, targeted interventions that preserve U.S. leadership rather than hampering it.


Geopolitical Stakes: U.S. Must Dominate or Perish

AGI isn't just technological; it's geopolitical. The U.S.-China rivalry defines this arena, with AI as the "new front line." China leads in industrial AI applications, closing gaps in models and semiconductors. Russia collaborates with China on information manipulation, amplifying threats.


To protect against weaponized AI, the U.S. must lead. Export controls tighten, alliances form in the Middle East, and investments surge. As an X thread warns, AGI is the "center of a new Cold War." Another post predicts preemptive strikes if one nation nears dominance.



Conclusion: Embrace the Inevitable, Lead with Resolve...and Pray

AGI's rise spells transformation, not inevitable ruin—if steered wisely. Fears are valid, but demonizing innovators like Palantir ignores their protective role. With the genie free, U.S. dominance is our shield against adversarial misuse. As Altman reflects, "Everybody believes that AI is game changing and nobody has a clue what it means for leadership." The path forward: Innovate boldly, regulate smartly, and secure our future.


In the midst of these uncertainties, we can draw comfort from Romans 8:28: "And we know that in all things God works for the good of those who love him, who have been called according to his purpose."


