Like it or not, artificial intelligence (AI) is already part of our daily lives. From the smartphones in our pockets to the Alexa virtual assistants on our kitchen counters, AI and its applications are accepted norms today.

While we appreciate that AI can automate repetitive workplace tasks or even drive a car, the reality is that its implications are much further reaching.

Luminaries like Elon Musk and Bill Gates have spoken out about the potential downsides of AI. At times, they have even issued outright warnings. Gates compared AI to nuclear energy — simultaneously dangerous and full of possibility.

Of course, these visionaries appreciate AI’s potential. Musk was among the founders of OpenAI, a research laboratory dedicated to ensuring that AI serves all of humanity, and Gates’s Microsoft later became one of its biggest backers.

While Musk and Gates have focused on how best to harness AI’s power, Andrew Yang has concentrated on cushioning its toll on workers: During his run for the 2020 Democratic presidential nomination, he pitched the idea of a universal basic income (UBI) for all Americans to help offset AI-driven automation’s impact on the US labor force.

The message is clear: Now that AI applications have been developed, companies and governments — and investment professionals — must stay ahead of the curve of the AI revolution.

AI’s Potential

What distinguishes AI from statistics? Its ability to learn and modify its own instructions. This is an important concept to grasp because misunderstanding and oversimplifying AI can lead to false assumptions that may in turn result in ill-conceived regulation.

On a technical level, in the words of Joe Davison, “Statistics is the field of mathematics which deals with the understanding and interpretation of data. Machine learning is nothing more than a class of computational algorithms (hence its emergence from computer science).”

Learning is at the very heart of AI. And just as learning is a process, so too is improving AI. To be truly effective, AI needs a feedback loop, often with human input and collaboration. So every time there’s a news story about AI helping health care professionals do their jobs, for example, it reminds the general public how AI tools can improve their lives.
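
To make that feedback loop concrete, here is a minimal sketch in Python. The seed phrases, intent labels, and the `feedback_loop` helper are hypothetical illustrations, not any vendor’s actual system: a simple classifier makes a guess, and whenever a human reviewer corrects it, the correction is fed back as new training data.

```python
# Minimal human-in-the-loop feedback sketch. The seed phrases, intent
# labels, and feedback_loop helper are hypothetical, for illustration only.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**10)
model = SGDClassifier()
classes = ["weather", "shopping"]  # hypothetical intent labels

# Seed the model with a handful of labeled examples.
seed_texts = ["what is the weather like outside", "order new snow boots"]
seed_labels = ["weather", "shopping"]
model.partial_fit(vectorizer.transform(seed_texts), seed_labels, classes=classes)

def feedback_loop(query: str, human_label: str) -> None:
    """Compare the model's guess with a human review; learn from any mistake."""
    guess = model.predict(vectorizer.transform([query]))[0]
    if guess != human_label:
        # The human correction becomes new training data: the feedback loop.
        model.partial_fit(vectorizer.transform([query]), [human_label])

feedback_loop("will it snow tomorrow", "weather")
```

The point is not the particular model but the loop itself: human corrections flow back into the system, so the next answer should be a little better.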

But AI is only as good as the data that is fed into it, and some people find that bottomless thirst for data unpalatable. The truth is, however, that for Alexa to give the best answers to our questions, Amazon needs to know where its virtual assistant has come up short. It can only accomplish that with copious amounts of information.

Just as AI must be constantly learning, we, too, must be constantly learning about AI. A deep understanding of the science — its potential, foundations, and limitations — is imperative to ensure its judicious use and the development of commonsense regulations.

The Ethics of AI — More Than Just an Algorithm

What makes AI so powerful is its ability to go beyond making inferences to making predictions — and even decisions — by learning. For most of us, AI is just a modern convenience: It feeds us advertisements for snow boots in our favorite weather app just as a severe winter storm is predicted, for example. It also can answer our seemingly mundane questions. What’s the weather like outside today? Yet these mundane questions may have more serious implications: They can be used to generate patterns in the underlying system and collect data about our lives.

Without the proper guardrails, AI can produce unintended and biased outputs. That means ethical and optimization criteria must be at the core of all effectively constructed AI systems. For example, an AI tool applied to college admissions, job applications, or loan approvals must be designed and trained not to prioritize physical features or other irrelevant and potentially discriminatory characteristics.
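
As one illustration of such a guardrail, the sketch below (with entirely hypothetical column names and records) keeps protected attributes out of a loan-approval model’s training data. Real fairness work goes further, because bias can re-enter through correlated proxies, but deliberate feature selection is the natural starting point.

```python
# Minimal guardrail sketch: keep protected attributes out of a loan model.
# All column names and records are hypothetical, for illustration only.
import pandas as pd
from sklearn.linear_model import LogisticRegression

applicants = pd.DataFrame({
    "income":     [42_000, 85_000, 31_000, 60_000],
    "debt_ratio": [0.35, 0.20, 0.55, 0.30],
    "gender":     ["F", "M", "F", "M"],   # protected: must not be a feature
    "ethnicity":  ["A", "B", "B", "A"],   # protected: must not be a feature
    "approved":   [1, 1, 0, 1],
})

PROTECTED = ["gender", "ethnicity"]
features = applicants.drop(columns=PROTECTED + ["approved"])
target = applicants["approved"]

model = LogisticRegression().fit(features, target)
print(model.predict(features))  # decisions based only on permitted features
```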

AI can be just as susceptible to bias as its human programmers. That’s why we need to better understand the underlying algorithms and the data that feed them.

The Silicon Valley mantra, “Move fast and break things,” is no longer prudent. We must carefully consider whether to apply AI to a given problem, deciding case by case whether AI can truly improve the process, whether it needs additional refinement, or whether relying on human judgment is the better option.

AI-related ethical issues need to be addressed in accessible ways that the public can understand. Only then can we chart the path forward. We must also recognize what we don’t know and ensure that decision makers who “trust” AI know the risks. Being wrong about a customer’s classification is very different from being wrong about a person’s health.

AI expands and enhances what we can do with our computers, and we must make sure it remains under our control. The combination of ever-faster computation and the decision-making power we delegate to systems built on those computations can pose a threat. That’s why we have to understand the processes behind the technology and have a say in where and how AI is applied.

Indeed, AI systems can be tricked into “thinking” something false is actually true. For example, an artist pulled a wagon loaded with 99 cell phones through the streets of Berlin. This fooled Google Maps into broadcasting a traffic jam in what were, in fact, empty streets.
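
A toy version of such a system shows why the trick works. This is purely illustrative and not how Google Maps actually operates: if a heuristic simply counts slow-moving devices on a street, 99 phones in a wagon are indistinguishable from 99 cars stuck in traffic.

```python
# Toy stand-in for a crowd-sourced traffic heuristic. This is NOT Google's
# algorithm; it only shows how counting slow devices can be gamed.
from dataclasses import dataclass

@dataclass
class DeviceReport:
    street: str
    speed_kmh: float  # speed reported by the device's GPS

def is_congested(reports: list[DeviceReport], street: str,
                 min_devices: int = 20, slow_kmh: float = 10.0) -> bool:
    """Flag a street as jammed when many devices there are moving slowly."""
    slow = [r for r in reports if r.street == street and r.speed_kmh < slow_kmh]
    return len(slow) >= min_devices

# Ninety-nine phones pulled slowly through an otherwise empty street.
wagon = [DeviceReport("empty street", 4.0) for _ in range(99)]
print(is_congested(wagon, "empty street"))  # True: a phantom traffic jam
```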

Like all other tools and technology, AI is not inherently good or bad. If we accidentally cut our finger chopping vegetables, it doesn’t mean knives are bad. The difference, however, is that given the absence of foundational knowledge in AI, machine learning, and other advanced disciplines, many refuse to assume any responsibility for AI’s behavior. Intuitively we understand that a knife isn’t harmful on its own and so don’t blame the utensil when we inadvertently slice ourselves. But because we intuit a kind of consciousness in AI, we are more likely to attribute moral characteristics to its actions and outcomes.

But no true consciousness exists in any AI system. To entertain any other potentiality requires a foray into philosophical, religious, and mathematical disciplines and a comprehensive definition of what we mean by “consciousness.” Right now, our priority should be taking responsibility for AI’s consequences and benefits.

AI in the Office

One of the biggest ethical questions AI evokes relates to its potential impact on labor and employment. What will happen to those workers whose jobs AI automates out of existence? This is not a new dilemma. Automation in the workplace has been a catalyst for economic change and political upheaval throughout history.

As Conor McKay, Ethan Pollack, and Alastair Fitzpayne explain in the introduction to the Aspen Institute report “Automation and a Changing Economy”:

“Historically, automation is an important ingredient driving economic growth and progress. Automation has enabled us to feed a growing population while allowing workers to transition from subsistence farming to new forms of work. Automation helped move us from a craft system to mass production, from blue-collar to white-collar to ‘new-collar’ work — with better work, higher wages, more jobs, and better living standards. Similar to past innovations, these new technologies offer the potential to help us meet human needs while supporting new jobs and industries never before imagined.”

But this is of little comfort to those whose livelihoods are threatened by automation. Of course, automation has never been wholly a net positive, but neither is it purely destructive: It does, in fact, create new jobs. As the authors observe, however, “Their geographic distribution and skill requirements often make them inaccessible to the individuals and the communities where . . . jobs were lost.”

In recent generations, factory workers have been especially vulnerable to the economic disruption created by new technology. But today new categories of workers are casting a wary eye on AI. Cub journalists are now competing with AI-driven machines that write formulaic articles about sports results or the weather. And marketers are watching as AI takes on some of the more data-driven elements of their jobs, freeing them up, at least in theory, to do more creative work. AI can serve as a virtual assistant for doctors, transcribing notes, for example, or even monitor patients at home through wearable devices.

AI is also making its way into the retail and transportation sectors. While true automation may still be a ways off, both industries have already been disrupted by e-commerce and ride-hailing apps. Retail could be AI’s next big conquest. Retailers are under pressure to cut costs to contend with their online competition, and automation has helped close the gap. Robots may soon be patrolling the aisles of our local supermarkets.

“The answer to the dislocations that can result from automation should not be to stifle innovation,” McKay, Pollack, and Fitzpayne observe.

“Policies and reforms should encourage both the development of new technologies and the promise of work. Workers need access to the opportunities technology creates. Policymakers and employers can help by providing access to skills training, ensuring the availability of good jobs, and improving the systems that are in place to help those who will transition from declining to growing occupations.”

But the only way to truly prepare for the unfolding AI revolution is to stay ahead of the technology, imagine the possibilities, and then decide which reality we want to see.

For our purposes here, that means setting aside the traditional definition of the word “think.” We need to reinterpret the verb to better express how it applies to AI. The current automation revolution is different from its predecessors. Machines can now perform tasks that were previously thought to require human involvement. Computers can now make decisions on their own even if they don’t know they are making them. Previous industrial revolutions made human labor more efficient. In today’s industrial revolution, AI is bypassing the human element altogether.

The Power of People

One point is often lost in discussions about AI: Humans design these machines. We make them and we feed them the data. Without our knowhow and ingenuity, AI would not exist. This means that we are still in the driver’s seat. We will decide which AI systems to develop and how.

We must take the initiative to educate ourselves about what AI really is and take a stand on where we can best use or not use this technology in the future. AI has the potential to affect every job on the planet, from factory worker, to investment adviser, to emergency room doctor, and we must decide whether increased efficiency and profits are worth the costs in lost jobs.

True progress is not determined by how far technology goes, but by how well we comprehend it and how well it contributes to solving the world’s greatest problems. Such criteria inform one another because only by understanding humanity’s dilemmas and how they might be addressed can we find the best solutions.

Although we must always be mindful of AI’s potential liabilities, this evolving science can empower not only individuals and companies but also society at large. The key to harnessing this latent power is ongoing education paired with intelligent discussion and decision making around AI’s inherent ethical dilemmas.

Only then can we effectively guide AI’s development and ensure its beneficial adoption by society at large.


Sameer S. Somal, CFA

Sameer S. Somal, CFA, is the CEO of Blue Ocean Global Technology and co-founder of Girl Power Talk. He is a frequent speaker at conferences on digital transformation, online reputation management, diversity and inclusion, relationship capital and ethics. Fundamental to his work at Blue Ocean Global Technology, Somal leads collaboration with an exclusive group of PR, law, and management consulting agency partners. He helps clients build and transform their digital presence. Somal is a published writer and internet defamation subject matter expert witness. In collaboration with the Philadelphia Bar Foundation, he authors continuing legal education (CLE) programs and is a member of the Legal Marketing Association (LMA) Education Advisory Council. He serves on the board of the CFA Institute Seminar for Global Investors and Future Business Leaders of America (FBLA). He is an active member of the Society of International Business Fellows (SIBF).

Pablo A. Ruz Salmones

Pablo A. Ruz Salmones is the co-founder and CEO of Grupo Ya Quedó, a software development and artificial intelligence (AI) company headquartered in Mexico City. As a computer and business engineer, he leads new partnerships and enterprise client relationships at Grupo Ya Quedó in North America, Africa, and India. He also serves as director of marketing at Blue Ocean Global Technology. Ruz Salmones is a regular speaker at global conferences on topics ranging from scaling global businesses and e-commerce to the application and ethics of AI. Ruz Salmones is an active member of Beta Gamma Sigma, the International Society of Business Leaders (ISoBL), and the Colegio de Contadores Públicos de México (CCPM), and is the Mexico City chapter organizer of Hackers/Founders. He holds an Ethical Leadership Certification from the NASBA Center for the Public Trust. Ruz Salmones is a published writer and technologist who recently developed a costing system for accurate assessment of data storage in cloud servers. He is a lifelong pianist and composer as well as a concert performer. Ruz Salmones is relentlessly committed to creating a world in which we all see everyone for what we are: human beings.