TAG Talks: Is A.I. Coming For Our Jobs or Making Us Better At Them?


Pardon the mildly clickbait title of this blog, but it contains two questions people often ask about artificial intelligence. As a buzz term, “A.I.” means different things to different people. To some, A.I. is the bringer of the end times and a danger to the human race, while more opportunistic members of Generation Z have embraced it as a helpful tool for doing homework. Amid the uncertainty about what A.I. really is and what it could become to society, I would like to talk about what we know it is today and what it might mean down the line: the hardware element of ever more powerful microchips, the developers writing the A.I. software, and what it all means in functional terms.

Let’s Try and Define “Artificial Intelligence”

Back in 1955, computer scientist John McCarthy, who coined the term and later became a professor at Stanford University, defined A.I. as “the science and engineering of making intelligent machines.”

Feels open-ended, right? “Intelligent machines” have already been invented; they sit on our desks as computers and rest in our palms as smartphones. Year by year, the microchips inside our devices get faster, more efficient, and more capable of being programmed to receive our input, save it, and give it back to us in the form of apps and games. Modern A.I. is about taking the “intelligent machines” McCarthy described and making them capable of performing tasks normally reserved for the human mind. It’s a higher level of intuition-based technology that might just provide the biggest economic leap forward since the invention of the personal computer in the early 1970s.

Microchips & Machine Learning

The space between our ears is filled with a brain that has mightily impressive thinking and decision-making powers. To give an example, we all have that friend we are hesitant to take a ride with. Even as a “bad driver,” he or she still possesses driving skills that automobile engineers would love to build into cars. The things even the least skilled people can do may seem insignificant, but they are incredibly complex to teach a machine. The first general-purpose programmable digital computer (ENIAC) filled a very large room, performed about 5,000 calculations per second, and broke down almost daily. A modern graphics processing unit performs billions of calculations per second, fits inside your smartphone, and keeps working for years.

Companies like NVIDIA, AMD, Qualcomm, and Intel are at the leading edge of designing these ever more powerful microchips. Then there are companies like Taiwan Semiconductor, GlobalFoundries, and NXP Semiconductors that have the specialized technology to fabricate those chips. These companies, among dozens of others, are the physical hardware backbone that will help build the future of A.I. in the years to come.

Yet all this computing power is of little use without good instructions teaching these computers how to make complex decisions. This field of technology is broadly known as “machine learning.” Think of machine learning as a process of trial and error, the same way we as humans learn to do anything in our lives. Think of how you learned to park a car without hitting the curb, write grammatically accurate sentences, or cook the perfect steak that is neither mooing nor hard as rubber. These are all human tasks that every person who has ever attempted them learned to do better with successive attempts. Artificial intelligence uses machine learning in a similar manner, but it instead runs thousands or even millions of scenarios in the background to arrive at just the right solution for its user.
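For the curious, the trial-and-error idea can be sketched in a few lines of code. This is a toy illustration only, not how any real A.I. product works: the program starts with a random guess at a steak’s cooking time, makes small random tweaks, and keeps only the tweaks that score better. The “ideal” time of 9 minutes is an invented stand-in for the real-world feedback a learning system would receive.

```python
import random

IDEAL_MINUTES = 9.0  # hypothetical "perfect" cooking time, standing in for real feedback

def error(guess):
    """How far a guessed cooking time is from the ideal (lower is better)."""
    return abs(guess - IDEAL_MINUTES)

guess = random.uniform(1.0, 20.0)  # start with a random first attempt
for _ in range(1000):              # run many scenarios "in the background"
    candidate = guess + random.uniform(-0.5, 0.5)  # try a small random tweak
    if error(candidate) < error(guess):            # keep only tweaks that improve
        guess = candidate

print(f"Learned cooking time: {guess:.1f} minutes")
```

After a thousand attempts, the guess ends up very close to the ideal, even though the program was never told the answer directly; it only learned which attempts were better than the last. Real machine learning systems follow the same spirit at vastly greater scale.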

Narrow A.I. Applications and Their Benefits/Consequences

Unlike people, computers can only achieve what they have been specifically instructed to do. It has taken entire office buildings of highly talented engineers to write the code that allows cars to stay in their lane at highway speeds without hitting anything. The software that helps that car drive is not going to become sentient or switch to unlearned tasks like cooking dinner (imagining my truck in the kitchen is quite the fever dream). Driver-assist systems, ChatGPT for writing, Siri on your iPhone for understanding spoken questions, facial recognition in your doorbell, and the predictive analytics that keep your favorite consumer products in stock are all examples of narrow A.I. applications. These software programs have specific tasks, and while incredibly “smart,” they are limited to what a group of engineers taught them to accomplish.

The inner Luddite (after the 19th-century anti-technology movement) in our minds is likely to raise some questions about what A.I. means for our jobs and our ways of life, and about what happens if governments or companies put this technology to nefarious use. These concerns are valid, but the same questions point to the vast potential artificial intelligence has to make our lives better. With appropriate guardrails in place, artificial intelligence could hasten the next great economic boom of our lifetimes. People and companies will be able to do difficult tasks in less time and with better outcomes. Customer service will be snappier, consumer products more reliable, and complex logistics less prone to error. These are just a few of the improvements we might hope to see across all industries. In essence, it should allow us to have more of what we want in life. I, for one, think that such a breakthrough technology will be profitable not only in monetary terms, but in giving us more quality time. To borrow a line from our own Ansardi Group mission statement, artificial intelligence may help bring “confidence to our clients’ lives, so they can focus on the things which matter most.”