What Will AI Actually Change?
Because I'm a nerd, I spent a lot of the Christmas period reading and thinking about the ways AI will change what we - "we" being all of humanity - do: the areas of human activity it will shape, push to evolve, or even take over completely. And I had a few realisations - more on those in a moment.
Also during the break, I began having a physical reaction - you know, eyes twitching and stomach dropping - whenever I saw "AI" in an advertisement for one product or another. I'm referring here to new laptops stuffed to the gills with AI, or to "smart" kitchen appliances which purport to help you make the perfect canapé for your next garden soirée, or even the ironically branded Apple Intelligence and its baffling, even counterintuitive features. Given I spend so much time thinking and learning and working and advising in this space, this is an unfortunate turn of events! But it is becoming more and more evident that the rush to operationalise AI comes with real risks of fatiguing consumers at best and alienating them at worst - gambles which major corporations are seemingly willing to take if it means a slight competitive edge and at least some justification for their outlay.
Above all, there is still a desperation to prove use-value - and simply inventing that value exacerbates the ambiguity around the tech. It risks muddying some already muddy waters, building scepticism and pessimism even in people who, until now, have been open and optimistic about where we are being shepherded.
Everybody gets AI!
OpenAI CEO Sam Altman has been fairly explicit about his belief that everyone should have access to AI - or, to quote him directly: “If we want to put AI into the hands of as many people as possible, we need to drive down the cost of compute and make it abundant.” Maybe this is reason enough for corporate execs to try to wedge AI into everything possible… the attitude seems to be: the future is coming anyway, so why not bring it forward?
Maybe. But this raises a couple of issues. Firstly, it creates an environment where innovation seemingly cannot happen without AI, with attention and investment drawn away from areas where evolution could be logical, valuable and even needed. But, more pertinent to this conversation, it distracts from the fact that current AI solutions are genuinely useful in certain areas, where they are already fundamentally changing how things are done.
A good example of this is programming. AI coding assistants are changing how software is created - they're not replacing software engineers anytime soon (Addy Osmani has a great breakdown of assistant pros and cons), but they are creating a new toolset for engineers, which in turn opens up new possibilities. This is for a few clear reasons:
Code has relatively high predictability, resulting in relatively high AI accuracy
Programming languages are much simpler than spoken languages, which are complex, nuanced and fluid
Robust, practical, open source data sets are perfect for training AI
AI has proven to be effective at tasks and processes which are repeatable and predictable, and where optimisation benefits from automation or learning. Programming ticks all of these boxes, which is why it is also regularly used as a benchmark for testing the effectiveness of language models.
Some other areas identified as ripe for AI's picking are data entry, data analysis and management, customer service, areas of healthcare, and media work such as content creation and design. You are probably beginning to build a picture of where AI will be deployed, customer sentiment aside: while new technologies are only slowly being deployed effectively in industries like manufacturing and logistics, the fields above share a common element - they are already largely tech-first.
Tech eating tech
I mentioned earlier that I had a few realisations regarding where and how AI will be deployed into the future.
The irony of AI development is that its primary area of disruption is tech itself - how something goes from idea to development to production. The gap between the start and end of this process is shrinking fast. Maybe this was a strategic choice by LLM creators - that is, disrupt what you know. The total addressable market is also freaking huge - there are estimated to be about 30 million professional developers in the world. Get a majority of them using your coding assistant, or replace them with your solution entirely, and you are sitting on very valuable real estate (for example, the company behind the AI coding assistant Cursor has been valued at over USD 4 billion - with fewer than 10 people on staff!).
The other areas about to experience major disruption are the common, costly business processes which are usually critical to customer success: as mentioned, data entry, customer support, and brand activity like marketing or content creation. Because language models and bots can be leveraged for this work easily enough, this will happen at scale - and even as the tech improves, the overarching baseline for quality will very likely drop.
The reason for this is that humans - beautiful, experienced, dumb, flawed, distracted, creative humans - are needed in this process. And while you can guarantee that a language model will produce something passable, you cannot guarantee that decision makers will ensure a real, living, breathing human is still there to apply their knowledge and instincts: to identify gaps, correct misinformation, question a situation, or just know in their gut that something isn't right and step in.
I have personally used ChatGPT, Claude and the other language models at length - they’re magical. I have created GPTs and projects, written emails, brainstormed, made plans, and used them to set the foundation for extensive, robust, situation-specific documentation. They do all of this very, very well... but they are most effective when I take what they have created - based on my prompts - and make it my own. It is quite possible that the time saved ends up being close to negligible, which is why I can see organisations skipping this part altogether and cutting the human from the loop. In case that’s not clear: this is a very bad idea. I've used this analogy before: you can have a Roomba alone cleaning your house, or you can have a Roomba cleaning your house while you also regularly vacuum and target problem areas yourself. See what I mean?
(Or - you can have multiple Roombas running around your house, some cleaning and others programmed to correct the mistakes of the primary cleaning bots while you clean the really important stuff yourself… I have just described an optimal AI Agent + human in the loop workflow but I also think that's where my metaphor falls over, so I’ll stop here.)
On the other hand, it's fairly clear that the area which is shielded, at least for now, from true AI disruption is manual labour - work which humans physically do. This is an interesting turn of events: it turns out white-collar work is what machines can do, not blue-collar work. Yes, robotics is a growing AI subcategory, but we are a long way off from a robot turning up to your house to fix that leak in your basement.
Proving value
So the challenge in both directions - for humans in roles facing disruption sooner rather than later and also for the available AI - is to prove value.
Hardware like laptops, kitchen appliances and your smartphone must prove that having AI onboard is of value - otherwise, what’s the point?
And humans, at least those of us in fields facing the headwinds of disruption, must prove their value - by knowing how they can be effective, how to leverage new tech, and why they matter in a workflow which looks very different from the one they have been familiar with until now.
Yes, it’s weird. We used to take it for granted that a job needed a human. That is changing, and we can be ready by thinking about how we can be effective.
Outsourced friendliness
In the video I linked to above, tech influencer Marques Brownlee mentions that if you are an Apple customer, you are now faced with a decision: you can get Apple Intelligence to help you write a friendly email to someone - or you can just write that friendly email yourself. How much time have you saved getting AI to write friendly email after friendly email?
And I think that - not time-saved but actual helpfulness - is part of what we need to clearly define for ourselves with AI, as well as any other new, innovative technology. We need to ask: how does this actually help me? If it takes a while to come up with an answer, that’s OK. We enter into shaky territory when those answers are not forthcoming, or when we begin to simply make the answer up.
For expert advice on how to leverage AI for your business needs, get in touch today.