June 27, 2025
Everyone is talking about AI, hyping it up more than crypto – so, it’s time for an IT guy to weigh in on the subject.
Am I about to try and take you on a Willy Wonka-style tour of AI? Nope.
Am I about to take you on a no-frills look at how AI can ACTUALLY be used and discuss the limitations you will actually experience? Yep.
Disclaimer: Results may vary, your satisfaction is not guaranteed and your credit card may well melt if you try to recreate anything that I talk about.
It’s all well and good that every teenager with a webcam is posting how they made millions using AI yesterday, but you’re working for a business, probably one that’s run by grownups – and potentially one that has to contend with words like “security” or “risk” and phrases like “You did what? You’re fired!”. So let’s start with the simple question – what are you going to use AI for?
Here are two things that I’m using AI for as a CTO:
At its core, this is about using our information assets more effectively – to cut costs, reduce risk, and improve quality. In real terms, we import information assets into a database and make that data available to an LLM (on top of the LLM – not inside it: the model is never trained on our data, it simply retrieves from it when a question is asked).
Notice I didn’t use any buzzwords there – and that’s because there’s no need to. All you need to know is that you start with a document collection and finish with a representation of those documents’ text in a database (your IT guys will probably be nodding and throwing words like chunks and vectorisation across the table at each other about now – but you don’t need to join in).
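For the curious (and for those IT guys), here is a rough JavaScript sketch of what that ingestion step boils down to. To be clear, I didn’t write this – the n8n nodes do the equivalent for you – and the model name, table name and column names below are assumptions you would adjust to your own setup.

```javascript
// Minimal sketch of document ingestion: chunk, embed, store.
// Assumptions: an OpenAI API key, a Supabase project with pgvector enabled,
// and a "documents" table with "content", "metadata" and "embedding" columns.
import OpenAI from "openai";
import { createClient } from "@supabase/supabase-js";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_KEY);

// Split the document text into overlapping chunks so each piece is a manageable size.
function chunkText(text, size = 1000, overlap = 200) {
  const chunks = [];
  for (let i = 0; i < text.length; i += size - overlap) {
    chunks.push(text.slice(i, i + size));
  }
  return chunks;
}

async function ingest(documentText, source) {
  for (const chunk of chunkText(documentText)) {
    // Turn the chunk into a vector (a long list of numbers) using an embedding model.
    const { data } = await openai.embeddings.create({
      model: "text-embedding-3-small",
      input: chunk,
    });
    // Store the original text and its vector side by side for later similarity search.
    await supabase.from("documents").insert({
      content: chunk,
      metadata: { source },
      embedding: data[0].embedding,
    });
  }
}
```

That, in essence, is all the “chunks and vectorisation” talk amounts to.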
Now, before I start talking about security (because let’s face it, you want to know whether I’m randomly throwing stuff out onto the web), let’s talk about how I do this – am I using off-the-shelf apps, and how much do they cost?
I’m using a combination of apps and services. To create a proof of concept you can do this for under $100 (no really, under $100 – I’ll come back to the credit-card-destroying scenario later, but that’s a warning, not a requirement).
Spoiler alert, I’m not writing a single line of code (yet) so I don’t need developers to do this.
This is beginning to sound too good to be true right?
For this to work you will need: an n8n account, a Supabase project (a Postgres database that can store vectors), and an API key for an LLM (I use ChatGPT’s).
What are these apps?
n8n is best described as the open-source alternative to tools like Make or Zapier, focused on no-code/low-code workflow automation. But that description barely does it justice. With features like a clean visual editor, built-in debugging tools, and over 400 prebuilt integrations (from Jira to just about everything else), n8n empowers you to design powerful workflows without touching a single line of code – unless you choose to.
For those of you who have not seen this type of tool before, you might be wondering what this actually looks like.
This is what a chat bot looks like in n8n (it’s drag and drop):
Remember, there is not a single line of code written by me in this solution – nor any fancy JSON parsing.
Does n8n give you a polished, user-facing front end out of the box? It doesn’t – you need to write a simple web page and host it to access the output (outside of the n8n dashboard). Here’s what mine looks like (and I should mention that all the code was written by ChatGPT; my DevOps team put a web server in place for me, shielded it from the outside world, and set up a pipeline to handle deployments if I want to make any updates to the page).
This web page uses HTML and JavaScript and provides access to the n8n workflow and its results. I also used an n8n integration to link to Jira. Still no code written by me at this point.
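To give you a flavour, here is roughly what the JavaScript behind such a page does. The webhook URL, payload fields, response shape and element IDs below are placeholders and assumptions – the exact details depend on how your n8n trigger and response nodes are configured.

```javascript
// Placeholder URL for the n8n webhook that triggers the chat workflow.
const N8N_WEBHOOK_URL = "https://your-n8n-instance.example.com/webhook/chat";

// Send a question to the workflow and return the model's answer.
async function askChatbot(question, sessionId) {
  const response = await fetch(N8N_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ chatInput: question, sessionId }), // field names depend on your trigger node
  });
  if (!response.ok) throw new Error(`n8n returned ${response.status}`);
  const result = await response.json();
  return result.output; // field name depends on your workflow's response node
}

// Wire it up to a text box, a button and an answer area on the page.
document.querySelector("#ask").addEventListener("click", async () => {
  const question = document.querySelector("#question").value;
  const answer = await askChatbot(question, crypto.randomUUID());
  document.querySelector("#answer").textContent = answer;
});
```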
You might be asking what’s the point – you already have Jira and Confluence so why bother?
Whatever the repository, organisational information assets are written from a certain perspective, in a certain style and with a defined audience in mind.
Your developers, business analysts and QAs might not have been that audience. Let’s also not forget that as a rule these assets tend to lose their usefulness over time. Not because they are irrelevant but because the library keeps getting bigger and people don’t tend to ‘Dig in’. This means that you might well have really valuable information that no-one is actually using (or in some cases, no-one knows it exists because of staff turnover).
Our developers are using it to ask questions about specifications and designs. They can ask for summary information or dig into topics that would normally be the domain of the business analyst. To be clear – I’m not suggesting that we’re replacing our BAs, but I am suggesting that the simple questions the developers and QAs ask can be handled as ‘level 1’ support by the AI, leaving the BAs to work with fewer interruptions. Thanks to the Jira integration, the Devs/QAs can also perform analysis across both tickets and documents.
This is a simple use case – with a very straightforward implementation and a minimal running cost. And yet, it is already having a positive impact in my organisation.
We are doing other things around code analysis but I’ll write about that another time.
I mentioned ‘melting credit cards’ earlier – I’ll touch on that point now. You should be aware that whichever LLM you’re using (unless it’s self-hosted), there is a cost associated with it. What a lot of people fail to realise is that it gets more expensive the more you use it (not cheaper, which would seem the more intuitive model). I point this out because if you don’t set a budget for your API consumption, you could well get a nasty surprise after your entire Dev team expresses how much they’ve all enjoyed experimenting with <insert LLM name here>.
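A quick back-of-the-envelope calculation makes the point. The prices below are purely illustrative assumptions (check your provider’s current rate card); what matters is the shape of the maths – the bill scales with people, questions and context size.

```javascript
// Rough monthly cost estimate for a chatbot backed by a pay-per-token API.
// All prices and usage numbers are assumptions; substitute your own.
const PRICE_PER_1K_INPUT_TOKENS = 0.005;  // USD, assumed
const PRICE_PER_1K_OUTPUT_TOKENS = 0.015; // USD, assumed

function estimateMonthlyCost({
  users,
  questionsPerUserPerDay,
  inputTokensPerQuestion,
  outputTokensPerQuestion,
  workingDays = 22,
}) {
  const questions = users * questionsPerUserPerDay * workingDays;
  const inputCost = (questions * inputTokensPerQuestion / 1000) * PRICE_PER_1K_INPUT_TOKENS;
  const outputCost = (questions * outputTokensPerQuestion / 1000) * PRICE_PER_1K_OUTPUT_TOKENS;
  return inputCost + outputCost;
}

// 20 people asking 15 questions a working day, each question carrying ~3,000 tokens
// of retrieved context and getting a ~500 token answer:
console.log(estimateMonthlyCost({
  users: 20,
  questionsPerUserPerDay: 15,
  inputTokensPerQuestion: 3000,
  outputTokensPerQuestion: 500,
}).toFixed(2)); // 148.50 USD per month with these assumed prices, before anyone "experiments"
```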
This section is included because I’m trying to be completely open about the subject – it would be counterproductive if I didn’t let you know that there are things to be aware of with this type of solution.
As I mentioned earlier, AI does not stand for Actual Intelligence; it’s at best an analogy of intelligence made available via IT infrastructure. In real terms this means you may well ask a question and get an incorrect answer. For example, when a word has more than one meaning, the LLM might provide an answer that refers to the wrong one.
Take my chatbot for example. In our documentation we refer to a ‘vehicle’.
Now in our world a vehicle might be a fund or another kind of financial structure. However, the LLM knows that a vehicle is a device used to transport people or goods…
You can see where this is going right?
To resolve this issue, you need to ensure the prompts in your workflow clearly instruct the model to treat your own data as the top priority. It also helps to give some context about the type of answers you expect – are you working in financial services, or are you fixing cars? So in short, there are limitations and you will need to ensure that you test properly (this is just one example; there are more).
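To make that concrete, here is the kind of instruction you might paste into your workflow’s LLM/agent node (shown as a JavaScript constant only for consistency with the other snippets). The wording is an illustrative assumption, not the exact prompt I run.

```javascript
// Example system prompt: prioritise the retrieved documents and pin down the domain vocabulary.
const SYSTEM_PROMPT = `
You are an assistant for a financial services company.
Answer ONLY from the retrieved document context provided to you; if the context
does not contain the answer, say so rather than guessing.
Domain glossary: a "vehicle" always means an investment vehicle (a fund or other
financial structure), never a car, truck or other means of transport.
Keep answers concise and mention the source document where possible.
`;
```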
Firstly, notice that at no point have I suggested training an LLM with your data. At the time of writing, if you’re using the ChatGPT API, your conversations/data are not used to train the models – metadata is logged, and requests may be retained briefly for abuse monitoring – but if you’re talking about something unique like “Blue Zebra frogs” (not a real thing – and no search results in Google), other people outside of your organisation will not suddenly have information about “Blue Zebra Frogs” (still not a thing).
The n8n instance could be run in self-hosted mode – ours isn’t. The same is also true for Supabase (and you could also use any number of other databases, such as Postgres in Azure).
The n8n instance does encrypt data and your account can be protected in a number of different ways depending on your license/implementation type.
Let’s also not forget that the data you put in the database is vectorised – the embeddings themselves are just long lists of numbers, which are pretty useless to a human reader even if they manage to gain access to them (and decrypt them). Bear in mind, though, that the original text chunks are typically stored alongside those vectors, so vectorisation is no substitute for access control – I would suggest that you don’t add any sensitive data to a publicly hosted database. The documents that I’ve ingested are not sensitive (and only represent a subset of the overall library – this was intentional, given the purpose of this project and our organisational attitude towards risk).
Finally, some readers might be disappointed that I’ve documented such a simple use case with a really straightforward implementation. To be honest - I didn’t write this for them.
I wrote this for the people who are exhausted by the hype, who have realised that AI is not actual intelligence but still want to see if their organisation can benefit from it. For those tired of seeing ridiculous project estimations (both time and money) – who just want to stick their toe in the water and see if it’s right for them.
AI is a tool like any other – it does a lot of things that might make you think it’s smart, but it’s not. Does that mean it’s of no use? Not at all.
I’d like to end by encouraging you to think for yourselves rather than following the hype. Look for the small, practical ways AI can make a difference in your organisation. I leave you with a quote which I think says it all:
"The world is full of magical things patiently waiting for our wits to grow sharper."
— Eden Phillpotts (often attributed to Bertrand Russell)