
Modern engineering work isn’t short of tools. What it’s short of is focus.
Messages arrive constantly. Emails, Slack threads, Jira tickets, calendar pings, comments, mentions. Everything feels urgent. Very little is actually important. For senior engineers and engineering leaders, that noise fragments attention and pulls time away from the work that really matters.
This is the problem Alex set out to solve.
As Head of Engineering, Alex spends his days balancing technical decision-making, team leadership, delivery oversight and stakeholder conversations, whilst juggling the demands of client projects. Like many engineers in senior roles, the challenge isn’t a lack of capability; it’s cognitive overload.
A personal assistant, designed for focus
Alex’s assistant acts as a single intelligence layer across the tools he already uses. It monitors messages coming in from email, Slack and Jira, reading and understanding each one in context.
Rather than reacting to every interruption, the assistant looks for patterns. It groups related messages together, links them back to existing tasks, and assesses urgency based on content, sender, timing and historical behaviour.
The output is simple by design: one prioritised to-do list.
When something genuinely needs immediate attention, the assistant sends a clear push notification. Everything else waits its turn. The result is fewer context switches, less background stress, and more time spent doing meaningful engineering work.
Augmentation, not replacement
This is a textbook example of what we mean by augmented engineering.
In a world where speed is prized and responsiveness is expected, productivity often suffers because engineers are pulled in too many directions at once. AI can help, but only when it is applied with intent.
Alex’s assistant doesn’t replace judgement, creativity or leadership. It supports them. It removes friction, surfaces what matters, and protects deep focus.
The engineer stays firmly in control.
That distinction matters. The most effective uses of AI we see are not about handing responsibility to machines. They are about amplifying human capability, especially in complex, fast-moving environments.
Alex, on how he built it
I chose to create this tool because of the volume of noise I typically receive in communication channels and how distracting it can be when trying to get work done, but also, fundamentally, because I thought it would be fun.
I built the assistant using n8n, a workflow orchestration tool that already has built-in integrations with Slack, Google Workspace and Atlassian (Jira), which would be my primary integration points.
The first ingestion of data is handled by a Gmail trigger that polls my inbox every 30 minutes for new messages. Each email is fed through an AI model (we have a ChatGPT subscription, so at the time of writing it’s GPT 5.2-mini). The model is asked to classify the email into one of several possible categories, based on a snippet of text from the email body: Requires Action, Information, System Notification, Quick Response, Junk or Unknown. The workflow then applies the corresponding label to the email, which had the immediate effect of a much less cluttered inbox. If the model infers that action is required from me, the workflow pulls the entire email body for the corresponding message(s) and proceeds into a secondary sub-workflow, which is responsible for interacting with my to-do list.
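The classification step above can be sketched in Python. To be clear, Alex's actual implementation lives in n8n nodes rather than code; this is an illustrative sketch in which the model call is passed in as a function so it can be stubbed, and all names are assumptions.

```python
# Illustrative sketch (not Alex's actual n8n workflow): classify an email
# snippet into one of the fixed categories and decide whether it should
# continue into the to-do sub-workflow.

CATEGORIES = [
    "Requires Action", "Information", "System Notification",
    "Quick Response", "Junk", "Unknown",
]

def build_prompt(snippet: str) -> str:
    options = ", ".join(CATEGORIES)
    return (f"Classify this email snippet into exactly one of: {options}.\n"
            f"Reply with the category name only.\n\nSnippet: {snippet}")

def classify(snippet: str, ask_model) -> str:
    answer = ask_model(build_prompt(snippet)).strip()
    # Guard against the model replying with something off-list.
    return answer if answer in CATEGORIES else "Unknown"

def route(email: dict, ask_model) -> dict:
    label = classify(email["snippet"], ask_model)
    # Only "Requires Action" emails proceed to the to-do sub-workflow.
    return {**email, "label": label,
            "needs_subworkflow": label == "Requires Action"}
```

The off-list guard matters in practice: constraining a model's output to an enumerated set in the prompt reduces, but does not eliminate, stray responses, so the workflow needs a fallback category.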
This secondary workflow pulls a list of my existing tasks from Jira and uses an AI model to determine whether each input is a new task or an existing task that needs escalating. My tasks are prioritised into categories of green, amber and red. An additional mention of an existing task escalates it up the category hierarchy, and an additional mention of a task that is already high priority causes the workflow to send me a notification in Slack.
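The escalation ladder is deterministic and can be sketched directly. The band names come from the article; the task shape and function names are assumptions for illustration.

```python
# Sketch of the escalation ladder: one extra mention bumps a task up a
# priority band (green -> amber -> red); a mention of a task already at
# the top band produces a Slack notification instead of a further bump.

ESCALATION = {"green": "amber", "amber": "red"}

def apply_mention(task: dict) -> tuple:
    """Return (updated_task, notify) after one additional mention of the task."""
    if task["priority"] == "red":
        return task, True  # already highest priority: ping me in Slack
    bumped = {**task, "priority": ESCALATION[task["priority"]]}
    return bumped, False
```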
Within Slack, I created a bot that I can interact with in channels I’m a member of, acting as a second ingestion point into my assistant’s workflows. I’m often asked to pick up ad-hoc requests across a multitude of channels in Answer’s workspace. If I mention my Slack app in a conversation, it triggers another workflow in n8n, which extracts the most recent preceding messages from the channel. Again, an AI model is used to infer the task being asked of me, and this in turn triggers the sub-workflow that interacts with my to-do list.
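A rough sketch of that ingestion path, with the channel history and model call supplied by the caller (in n8n these are the Slack and AI nodes; the function names and prompt wording here are assumptions):

```python
# Hypothetical sketch of the Slack mention path: gather the most recent
# preceding messages from the channel as context, then ask a model to
# state the task being requested.

def gather_context(history: list, limit: int = 10) -> str:
    """Join the text of the most recent `limit` messages, oldest first."""
    return "\n".join(msg["text"] for msg in history[-limit:])

def infer_task(history: list, ask_model) -> str:
    context = gather_context(history)
    prompt = ("From the conversation below, state in one line the task "
              f"being asked of me:\n{context}")
    return ask_model(prompt).strip()
```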
When I first mentioned to colleagues that I had produced such an assistant, it caused much amusement and, to no surprise on my part, instant attempts to gamify the bot in Slack, which received requests such as “make sure Alex attends this meeting and cancel everything else that day”. I had foreseen this eventuality and built into the workflow detection of who was interacting with the Slack app, with the assistant issuing a polite but firm rejection to anyone other than me attempting to interact with it.
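The guard amounts to a simple ownership check on the mention event. A minimal sketch, assuming a Slack-style event payload; the user ID and refusal wording are placeholders, not the real ones:

```python
# Minimal sketch of the ownership guard: only the owner's Slack user ID
# may drive the assistant; anyone else receives a polite but firm refusal.

OWNER_ID = "U_ALEX"  # hypothetical Slack user ID

def handle_mention(event: dict) -> dict:
    if event.get("user") != OWNER_ID:
        return {"proceed": False,
                "reply": "Sorry, I only take instructions from Alex."}
    return {"proceed": True, "reply": None}
```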
One issue I needed to deal with initially was the accuracy of the email classification. n8n helped greatly here, as I was able to use its built-in Evaluations capability, which lets you manually trigger a workflow with test data. I provided an example dataset of real emails along with an expected classification for each. In Evaluation Mode, the workflow updates the test dataset with the actual classification from the AI and then calculates an accuracy metric. This allowed me to refine the prompt until it was very good at interpreting my inbox. Over time there will inevitably be outliers, but I can simply add those to my test dataset and refine the prompt further until it improves once more. The same approach should also let me switch between models easily, running the same test data to prove I haven’t introduced a regression if I choose to upgrade to a new version of GPT or go in a different direction.
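What the evaluation loop computes can be sketched as a plain accuracy check over a labelled dataset; this mirrors what n8n's Evaluations mode automates, with the classifier passed in so a stub (or a new model) can be swapped in. Names here are illustrative, not from the actual workflow.

```python
# Sketch of the evaluation loop: run labelled (snippet, expected) pairs
# through a classifier, record expected vs actual, and score accuracy.

def evaluate(dataset: list, classify) -> tuple:
    """dataset is a list of (snippet, expected_label) pairs.
    Returns (accuracy, rows) where each row records expected vs actual."""
    rows, correct = [], 0
    for snippet, expected in dataset:
        actual = classify(snippet)
        rows.append({"snippet": snippet,
                     "expected": expected, "actual": actual})
        correct += actual == expected
    return correct / len(dataset), rows
```

Because the same dataset can be rerun against any classifier, a model upgrade becomes a before/after accuracy comparison rather than a leap of faith.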
From a privacy perspective, I’m lucky to be afforded access to a ChatGPT subscription with built-in enterprise-level terms, including the assurance that no data I send to the models will be used to train subsequent model versions. That assurance meant I was comfortable letting my assistant’s workflows send potentially sensitive data from my inbox to the AI.
All in all, I had fun producing this little project in my spare time, and it has had a genuinely cleansing effect on my inbox. There will always be a few kinks to iron out, which I’ll get to, but it’s a start towards a prioritised working life with far less context switching.
What this tells us about the future of engineering
As AI becomes more accessible, the differentiator won’t be who uses the most tools. It will be who uses them well.
Engineers don’t need more dashboards, alerts or noise. They need systems that respect their time, understand their context and help them focus on the work only humans can do.
Alex’s assistant is small, personal and deliberately scoped. And that’s the point.
The future of AI at work isn’t about replacing roles. It’s about designing systems that quietly make people better at what they already do.
That’s augmented engineering in practice.