
Artificial Intelligence—Answering a Few Key Questions

Mark Morris

Senior Advisor


What is artificial intelligence? Why has it gained so much prominence recently? Why should we care about it?

Defining artificial intelligence, or AI, is challenging because defining intelligence itself is not easy. Even dictionaries struggle! Marvin Minsky, an AI pioneer, described intelligence as a “suitcase word” packed with more than one meaning. Wikipedia notes that intelligence has been defined in many ways and “can be described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.”1 This is helpful, but examples of what “counts” as intelligent are varied and not always clear, which makes defining what is artificially intelligent difficult as well. However, we do not need an ironclad definition to learn about AI. Instead, we can opt for a somewhat limited but functional definition.


AI can be defined as a non-human system that can mimic learning. Put another way, it is typically a system that includes software with code that can get better over time without needing a human programmer to rewrite it. Improvements in traditional software, by contrast, only occur when a person rewrites the code. For example, until relatively recently, Microsoft Word was an example of traditional software that was not AI—the code was written by people and remained static until the next update. By contrast, the current version of Word has functions like Dictate and Editor that draw on AI capabilities that “learn” over time, getting better without anyone writing new code. Combinations like this blur the line between traditional software and AI, which contributes to the confusion around what is and is not AI. At the same time, while AI is difficult to define strictly, not all software is AI; much of the confusion comes from stretching the definition beyond any reasonable bounds.

With a working definition, we can take on another puzzle: why has AI become so popular recently? The term “artificial intelligence” was coined in the mid-1950s, so it is interesting that it became fairly common only in the last decade, and ubiquitous in the last few years. What changed? One answer is simply that Watson beat Ken Jennings at Jeopardy! in 2011, and AlphaGo beat Lee Sedol at Go in 2016. These watershed events—AI defeating human champions in two highly publicized competitions—vaulted AI into the public spotlight, capturing the collective imagination and redefining what was possible for AI. They were also manifestations of two critical underlying developments: ongoing improvements in computing power coupled with staggering increases in digital data.

Computing power has improved over time through better and faster hardware, the retasking of graphics processors (which, it turns out, are well suited to the computation AI requires), and, more recently, processors designed specifically for AI. Critically, hardware reached a point where powerful algorithms could be run cost effectively and in a reasonable amount of time, even with huge amounts of data.

The explosion of digital data is difficult to quantify, but it is common to see statements like “more data has been created in the last two years than in all of history.” This is just another way of saying that the total amount of data is doubling every two years, which implies that there is now roughly 1,000 times as much data as there was twenty years ago. It is not a coincidence that AI has made impressive strides as more and more data has become available—data, and lots of it, is essential to AI.
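
To make the doubling arithmetic concrete, here is a minimal sketch (illustrative only; the growth rate is the rule of thumb quoted above, not a measured figure). Doubling every two years means ten years is five doublings, or about 32 times, and twenty years is ten doublings, or about 1,000 times.

    # Illustrative arithmetic only: assumes the "doubles every two years" rule of thumb.
    def growth_factor(years, doubling_period=2):
        return 2 ** (years / doubling_period)

    for years in (2, 10, 20):
        print(f"{years:>2} years -> about {growth_factor(years):,.0f}x as much data")
    # Prints roughly 2x, 32x, and 1,024x (the source of the "1,000 times" figure).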

The way AI becomes better is through “training” on large amounts of data. One oversimplified way of thinking about training is as a process of showing a program examples of digital data, such as spam emails or cat images, alongside emails that are not spam or images that are not cats, and letting the program come up with a description of what sets the examples apart. With only a few examples, the description will be very limited and the program will likely make many mistakes. But with each failure, it refines the description, becoming more flexible and more accurate. With enough data, training can result in accuracy that exceeds human abilities, and it can do so quickly.
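
As a toy illustration of that mistake-driven refinement (a minimal sketch, not how any particular product is built; the features, word counts, and examples below are invented), a simple program can adjust a set of numeric weights each time its spam guess turns out to be wrong:

    # Toy sketch of mistake-driven "training" on invented data.
    # Each email is reduced to numeric features; the label is 1 for spam, 0 for not spam.
    def train(examples, labels, passes=25, step=0.1):
        weights = [0.0] * len(examples[0])
        bias = 0.0
        for _ in range(passes):
            for features, label in zip(examples, labels):
                score = sum(w * x for w, x in zip(weights, features)) + bias
                guess = 1 if score > 0 else 0
                error = label - guess          # nonzero only when the guess is wrong
                if error:                      # refine the "description" after each mistake
                    weights = [w + step * error * x for w, x in zip(weights, features)]
                    bias += step * error
        return weights, bias

    # Invented features: [count of "free", count of "winner", number of exclamation marks]
    emails = [[3, 2, 5], [0, 0, 1], [4, 1, 7], [0, 1, 0]]
    labels = [1, 0, 1, 0]
    weights, bias = train(emails, labels)
    print(weights, bias)

Real systems use far richer models and vastly more data, but the loop is the same: guess, compare the guess with the label, and adjust.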

This ability, and the successes it has engendered in many areas, hints at the answer to a final question: why does it matter? The reality is that various incarnations of AI, such as machine learning, natural language processing, and AI-assisted robotics, have made tremendous inroads. Deepindex.org, a website that tracks examples of where AI is being applied and what it can do, lists over 800 different areas impacted by AI. It also matters because companies like Amazon, Google, Microsoft, and Facebook—four of the top five companies in the S&P 500 index—arguably all have AI at their cores and are aggressively pursuing new and more powerful applications. While there are many areas where AI struggles, the progress is undeniable and rapid. The trends driving it—computing power and data—along with advancements in techniques and the efforts of legions of AI researchers suggest that it will continue to expand its reach.

It is possible, then, that ignoring or downplaying AI might be akin to doing the same with electricity 120 years ago or the internet 25 years ago. It might matter less—perhaps far less—than these innovations, but it might still be an essential tool with the ability to reshape the way businesses operate. Andrew Ng, a Stanford professor who has led AI efforts at Baidu and Google, phrased it with noteworthy urgency: “It’s not too late for traditional companies to develop a strategic plan to take advantage of AI.” That is good news! It is not too late—even for companies far from spam filters, cat images, Jeopardy!, or Go.

This communication contains the personal opinions, as of the date set forth herein, about the securities, investments and/or economic subjects discussed by Mr. Morris. No part of Mr. Morris’s compensation was, is or will be related to any specific views contained in these materials. This communication is intended for information purposes only and does not recommend or solicit the purchase or sale of specific securities or investment services. Readers should not infer or assume that any securities, sectors or markets described were or will be profitable or are appropriate to meet the objectives, situation or needs of a particular individual or family, as the implementation of any financial strategy should only be made after consultation with your attorney, tax advisor and investment advisor.  All material presented is compiled from sources believed to be reliable, but accuracy or completeness cannot be guaranteed.

About the Author

Mark Morris

Senior Advisor