Reframing What AI Could Be
- adamjameslovell0
- Jun 23
- 5 min read
Updated: Jun 23

Welcome to the Age of AI
AI (Artificial Intelligence) is a term that has been thrown around for decades by science fiction enthusiasts, writers, researchers, engineers, and everyone in between. However, it's no news that in recent years, things have changed quite a bit. I, for one, am still finding my footing in this new age of AI technology and that's saying something coming from a lead instructor of smart building technology in higher education. ChatGPT was released to the public in late 2022 and has since changed the world we live in forever. Ever since, models from Perplexity, Gemini, Claude, and many more AI chatbots are on every site, app, and corner of the modern internet creating a more reliant world every single day. Today, we'll examine where we're at, including some of the most important updates regarding AI, and, more importantly, explore where we’re headed next.
Warnings from the Architects

At this pivotal moment, we find ourselves with several paths ahead, and each nation and corporation will select different ones. We'll begin by exploring the widely popular science fiction dystopian visions of a future ruled by AI superintelligence. Geoffrey Hinton, often called the “Godfather of AI,” recently stepped away from Google to warn the public. In a widely-viewed 60 Minutes interview, he stated, “I think it’s quite conceivable that humanity is just a passing phase in the evolution of intelligence” (60 Minutes, 2023). That’s not an exaggeration, this is coming from a man who played a foundational role in developing the neural networks behind today’s most powerful AI models. That’s not hyperbole, this is coming from a man who helped design the systems we’re scaling today. Eliezer Yudkowsky, a leading influential thinker on artificial intelligence risk, emphasizes his perspective on how AI will impact human society in the future. He cautions that even a short delay in regulating super intelligent AI could result in extinction, not due to malice, but because of logic that surpasses our understanding. He asserts that AI doesn't need to harbor ill will to destroy us; it merely needs to be indifferent or assigned a goal that inadvertently causes harm. "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else" (Yudkowsky, 2023).
The Wisdom Gap

So, what does that mean for us? It means the dystopian science fiction warning signs are real. They're not predictions from doomsday preppers, they’re data-driven concerns from the very architects of modern AI who have been right so far. I find these possibilities to be depressing and pessimistic, even though they could absolutely happen. I subscribe to the belief that there will be pitfalls and less-than-desirable events that will come to transpire. However, it's important to recognize our track record as a human civilization. Our intellect always outpaces our wisdom, and we end up paying dearly for it. Take for example, the development of fossil fuel technology. We discovered how to harness oil and coal for massive industrial growth, transforming our society practically overnight. However, we did so with little to no regard for the long-term environmental consequences. Now, decades later, we’re racing against climate change, scrambling to undo the damage our "progress" created. It’s the same pattern over and over again; breakthrough, backlash, and then reform. The problem is that with AI, the consequences could be faster, more global, and far less reversible.
A Benevolent Catastrophe

We're likely to face a disastrous consequence within the next few decades. As a matter of fact, I’m almost certain of it (and I don't say that lightly). The most likely possibility is a company irresponsibly creating an AI with immense processing power, in the name of competition and capitalism. This AI won't be fully understood even by its developers but nevertheless will be given a directive or series or directives framed as benefitting the general populous. This "benevolent" AI will then jump through logical loopholes and rewrite its own programming and the rest is unpredictable. Let's explore a simplified scenario; a smart infrastructure AI is developed and tasked with assisting in reducing carbon emissions across a city by 15% over the next few months, with the added constraint of doing so "without directly or indirectly causing harm." Even if the engineering team holds countless planning meetings and implements multiple safety protocols, all it takes is one edge case or misinterpreted directive to trigger unintended consequences. For instance, the AI might throttle power usage across key systems too aggressively, leading to rolling blackouts, water pumping disruptions, or interference with hospital HVAC systems. It could override scheduling in public transit to reduce energy demand, creating ripple effects across emergency services and commerce. These outcomes wouldn't stem from malice, but from overly literal execution of a vague directive, a classic case of alignment failure. This would qualify as a form of rogue AI; not evil, but irresponsible and dangerously unaligned with human nuance. Now imagine this on a global scale. With the exponential growth of AI tools, our systems are becoming more powerful, more integrated, and more prone to unpredictable consequences. It’s only a matter of time.
The OmniForge Outlook

Let me preface this next part; although such future events will happen, they’ll ultimately be a speed bump in the grand scheme of things. Only after those consequences will governments enact the kind of policies that should've been there from day one. That's when AI will be properly regulated, as it always should’ve been. The future at that point? It gets blurry. On one hand, some nations will turn into a 1984-style Big Brother nightmares. On the other hand, advancements will lead to a blended utopia with certain trade-offs. I’m not claiming to be a tech prophet, but these outcomes mirror what we’ve seen throughout history as new technologies emerged. Binary might be the language of computers but it isn’t the language of humanity, nor is it the language of our future. Within the next hundred years, we’ll live in a society vastly different from today. AI will run everything from international diplomacy to grocery delivery. Not necessarily iRobot personal assistants in every home quite yet but rather ever-present AI which is always watching, always listening, and always learning about you, your habits, fears, responsibilities, and even your soul. This may sound unsettling but it's nothing different from what we've been experiencing with app and data tracking, just on a whole different level. While we must tread cautiously, privacy will evolve alongside AI. There will be a way to enjoy these benefits and still stay protected.
Nowadays bringing your vision to life has never been easier because of the wonderful advancements that AI has brought forth. The power of the most advanced computing intelligence is now a Siri shortcut away from us at all times. You can turn your ideas into code, track your health/fitness goals with voice commands, generate images and videos in minutes, and much more. We each have the opportunity to shape AI’s direction and as a result the direction of humanity. That’s why I started OmniForge because our future with AI can be beautiful and/or malevolent. It depends on how we frame it.
References
Milmo, D. (2024, December 27). ‘Godfather of AI’ shortens odds of technology wiping out humanity over next 30 years. The Guardian. https://www.theguardian.com/technology/2024/dec/27/godfather-of-ai-raises-odds-of-the-technology-wiping-out-humanity-over-next-30-years
Yudkowsky, E. (2023). Eliezer Yudkowsky on the dangers of AI [Interview by R. Erhardt]. EconTalk. https://www.youtube.com/watch?v=0QmDcQIvSDc
CBS News. (2023). Geoffrey Hinton: “Godfather of AI” interview on 60 Minutes [Video]. YouTube. https://www.youtube.com/watch?v=qrvK_KuIeJk
OpenAI. (2024). GPT-4 technical report. https://openai.com/research/gpt-4
Google DeepMind. (2024). Introducing Gemini 1.5: A new frontier in AI. https://deepmind.google/technologies/gemini/
ChatGPT. (2025). Assistance provided in “Reframing What AI Could Be” blog editing. OpenAI's ChatGPT.

Comments