Cloud Control

Automation, Layoffs and Humans

January 31, 2025

As 2025 rolls in, and we watch NVIDIA's stock in amazement as it shaves off over $500B of value in light of a new AI in town, I'd like to quickly discuss another optimistic outlook - AI and how it affects us humans, especially in the context of automating our work, and potential layoffs (yes - I'm a huge hit at dinner parties).

But all kidding aside, we have been watching and listening to the promise of AI for a good two years now (especially since GenAI became pretty much the de facto way to refer to AI). In tech time, two years is a long time, and all those promises of being replaced by AI - well, they haven't really materialized. So let's take a quick look at what we have accomplished as an industry, where the promises met expectations, and where they fell short.

The Good:  AI has definitely changed the way we do many things in our daily routine, from processing emails and long-form text into concise summaries to helping with the creative process behind multimedia and artwork. On the text processing side, summarizing a long thread of emails or a collection of documents is invaluable for those of us who handle them while context-switching between multiple projects. On the creative side, being able to iterate quickly through complex creative processes blurs the lines of creativity and ownership of the media, especially as we consume more content that either had AI involved in its creation or was created entirely by AI.

The Bad:  Hallucinations, lies, and made-up facts. By this point, everyone has run into one of those, if not all three. GenAI was designed to produce statistical predictions of content that align with what the user is asking for - often to its own detriment. Without anchoring in truth and without the ability to discern between inaccurate and factual data, GenAI is left blind to the current state of reality and will produce results that a human would have never come up with, given the same task (unless that human was maliciously trying to sabotage or lie from the get-go). Furthermore, by mimicking similar reference materials, GenAI will sometimes cite references that are completely fabricated or taken out of context. Even more modern applications that apply RAG (retrieval-augmented generation), agents, and iterative searches/runs are still highly prone to this, as they all rely on the statistically approximate nature of GenAI.

The Ugly: The AI-human interface. We somehow ended up with a text/speech-based interface between us and an AI model, which, more often than not, is again used to mimic the interface between humans. I've lost count of the instances where I saw an AI "apologize" or joke about how it came up with inaccurate or completely false answers. This means that there is still a human in the loop. AI is not truly replacing humans as of yet; at best, it is software that augments us while requiring supervision and guidance. It may come up with "PhD-level reasoning," but I know a lot of PhDs who are not just wrong, but also misinform others (at least AI has the academic integrity to admit its mistakes and correct itself, in most cases).

This leads us to the happy prediction: AI isn't replacing us anytime soon, and it may help us in some contexts. It can shed the load of tedious tasks, and when used in the right context with the right algorithms, it can take over large swaths of the mundane and boring parts of our work. Humans have so much more value to add to the world than just accomplishing repetitive, duplicative efforts. We can now tackle more complex, creative, and collaborative work, and I wouldn't be overly concerned about most of us needing to find a new job anytime soon. Learning how to correctly apply these tools to the right areas is likely the most important "leveling-up" skill to focus on in 2025.