I press the enter key. It's a key I have pressed thousands, if not millions, of times, typically with predictable outcomes. This time was different. I had absolutely no idea what to expect. Being a pessimist and skeptic at heart, my expectations were very low. What happened next, however, both amazed and frightened me.
Seemingly out of thin air, a concert of technology was being written. Resources were created and intertwined to support the trivial, high-level input I had provided. The final result was something that might have taken me weeks to produce myself, given its verbosity and my lack of expertise. But it seemed to know… all of it.
I had finally been introduced to AI from a practical standpoint; specifically, to an LLM (large language model) online application called Claude. I simply asked it to create an application that could connect to a legacy IDP (Identity Provider) service we were extending for a client, and it produced all the required configuration, code, and instructions to deploy it on my local system. I was a recent addition to the project and had no experience with the IDP, so I took a chance that Claude could help me understand the integration. I was not disappointed.
When I tried to execute the code locally, an error appeared in the console. The cause was an incorrect environment variable I had supplied for the IDP's token endpoint. Rather than fix the problem myself, I wanted to see what Claude could do to remedy the issue. Since all of the original context of my initial prompt was still in Claude's "brain," I simply told Claude: "I have an error showing: {{the error text}}". Claude realized that the issue was not necessarily a code problem but a configuration problem. Rather than simply tell me the URL I had provided was incorrect, Claude added error handling and logging to the code so it would fail gracefully.
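To make that fix concrete, here is a minimal sketch of what graceful failure on a misconfigured token endpoint might look like. The environment variable name IDP_TOKEN_ENDPOINT, the function names, and the request shape are hypothetical illustrations, not the actual code Claude produced; the point is simply to validate the configuration up front and log a clear message instead of crashing with an opaque stack trace.

```python
import logging
import os
import sys
import urllib.error
import urllib.parse
import urllib.request
from typing import Optional

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("idp-client")

# Hypothetical variable name; the real integration used its own configuration.
TOKEN_ENDPOINT = os.environ.get("IDP_TOKEN_ENDPOINT", "")


def fetch_token(client_id: str, client_secret: str) -> Optional[str]:
    """Request an access token, failing gracefully on bad configuration."""
    parsed = urllib.parse.urlparse(TOKEN_ENDPOINT)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        # Configuration problem, not a code problem: log it and stop cleanly.
        log.error("IDP_TOKEN_ENDPOINT is missing or not a valid URL: %r", TOKEN_ENDPOINT)
        return None

    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode()

    try:
        request = urllib.request.Request(TOKEN_ENDPOINT, data=body)
        with urllib.request.urlopen(request) as resp:
            log.info("Token endpoint responded with HTTP %s", resp.status)
            return resp.read().decode()
    except urllib.error.URLError as exc:
        # Network or endpoint failure: report it and let the caller decide.
        log.error("Could not reach the token endpoint: %s", exc)
        return None


if __name__ == "__main__":
    token = fetch_token("demo-client", "demo-secret")
    sys.exit(0 if token else 1)
```

The up-front check turns a cryptic stack trace into a single, actionable log line pointing at the real culprit, which is roughly the kind of improvement Claude made on its own.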
And, as I write this now, I realize I am referring to the LLM AI tool as if it were a person…
I’ve had some time to mentally process my experience, and I now look around at all the people, across every industry, who could be directly impacted by this technology, both positively and negatively.
As a technical architect with 25 years of experience in software, much of it in the eCommerce space, I understand the complexities of these applications. I can assist with difficult requirements provided by clients, sometimes in a very eclectic way. I consider myself more of an artist than someone simply following “best practices” from a manual. This makes me feel less threatened by the existence of this artificial mind. I do feel that it can be used as a tool to enhance my productivity, as described in the beginning of this reflection.
However, my empathy cannot overlook others in my field who may not have the same luxury of abstract thought. Trivial, mundane, and repetitive work, handled in the past by colleagues, may find its way into the input of these AI tools. The question we have to ask ourselves is what will happen to the jobs that may be rendered obsolete. If you feel safe from outsourcing to AI, are you standing on solid ground or merely propped up by sand? Who will eventually take your place when years of experience can be summarized in an input prompt? What happens when AI output becomes the only AI input? At that point, have we lost control of the situation?
We, as a people, need to find a balance with AI. It should be a tool to enhance our personal growth, not replace us. Responsible use of AI involves keeping a human in the loop, ensuring that AI is a dance partner rather than the sole performer. This balance is crucial to preventing a future where human expertise and creativity are eliminated.
At AAXIS, we recognize the incredible potential of AI to augment our capabilities. By promoting responsible AI use and maintaining human oversight, we can leverage technology to drive innovation, while preserving the valuable human touch that defines our work.
Ready to learn more? Request a 20-minute meeting to see how AI can help enhance your workflows.