

YOU CAN'T SPELL WAIF WITHOUT AI

First things first: I never intended to know anything about AI. But since life has tricked me into learning about this technology, which is somehow met with an even split of reverence and eye rolls, I thought I might as well pay it forward.
There are two dominant AI narratives; both are incredibly complex and both are full of infighting. The first is the more mainstream, the second the more terrifying.

Lane 1:

In lane one we have existing AI, formally known as artificial narrow intelligence (more on that later). This conventional AI landscape is largely defined by debates around automation, job loss, driverless cars, bias and the increasingly popular concept of ‘AI for Good.’ Here we are primarily concerned with the consequences of AI, but not our ability to control it. We’ll start here.

One of the most insightful quotes about AI fittingly comes from John McCarthy, the scientist who coined the term “artificial intelligence” in 1956. McCarthy famously said that “As soon as it works, no one calls it AI anymore.” For this reason there is a common misconception that artificial intelligence is something rare and futuristic, when in reality AI is in your car and your computer, in Uber and Lyft, airplanes and Gmail, even Pinterest. You know that thing where you take a picture of a check and it magically shows up in your account? That’s AI.

“Even if automation comes about later and more smoothly than originally expected, it will still seriously disrupt almost every industry.”

To its credit, artificial intelligence has indeed made daily life more efficient. It’s made it a lot easier to book a flight and get quick directions. In recent years AI has found a home in just about every major industry, bringing with it the promise of innovation, a competitive advantage and fewer mistakes. What is AI getting up to in these sectors? I’m glad you asked.

In healthcare, AI is making moves ranging from robotic surgery to diagnostics, while AI-powered virtual nursing assistants are helping elderly patients keep to their medication schedules. In September the FDA approved an AI platform, courtesy of GE and UC San Francisco, that scans X-rays with the aim of significantly reducing review time. The possibility that AI could catch mistakes and make earlier, more accurate diagnoses is incredibly valuable; if the technology is perfected, this is an area that could genuinely benefit from technological support. Meanwhile, in the ever-ambitious world of finance, digital banking and loan-issuing apps are now using AI to determine who should qualify for a loan - all on your phone. AI is also used to catch credit card fraud and flag suspicious activity, with the aim of preventing money laundering and other financial crimes. And the social media industry is massively defined by artificial intelligence, from personalized notifications to the tailoring of your feed.
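How does a system decide a charge is “suspicious”? At its simplest, by learning what normal looks like and flagging what deviates from it. Here is a minimal sketch of that idea in Python - the numbers are invented, and real fraud systems use far richer models and far more signals than a dollar amount:

```python
# A bare-bones fraud flag: learn a cardholder's "normal" spending,
# then flag transactions that deviate sharply from it.
from statistics import mean, stdev

past_amounts = [12.50, 8.99, 45.00, 23.10, 15.75, 31.40, 9.99, 27.80]

def looks_suspicious(amount, history, threshold=3.0):
    """Flag a charge more than `threshold` standard deviations
    from the cardholder's historical average."""
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) / sigma > threshold

for amount in (19.99, 2500.00):
    verdict = "FLAG for review" if looks_suspicious(amount, past_amounts) else "ok"
    print(f"${amount:,.2f}: {verdict}")
```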

Throughout these industries and the rest of the corporate world, there is one phrase that reigns supreme: “AI and the future of work.” If you work for McKinsey, IBM, Deloitte, PwC, etc., you probably love these six words, in this exact order. For you they are a gold mine. Understandably, people want to know how we are going to navigate the new, digitized world. And consultancies, major corporations and tech titans have put quite a bit of time and publicity into making predictions and unveiling multi-step plans full of retraining programs and new ideas. I myself have organized a number of panels on this very subject, and a consensus I’ve encountered over the years is that automation is moving slower than expected. There is also a fair bit of optimism that while jobs will certainly be lost, more will be created as well.

Even if automation comes about later and more smoothly than originally expected, it will still seriously disrupt almost every industry. The hard truth is that most of the new jobs created will likely not be looking to hire recently unemployed truck drivers. Industrial transformation is not a comfortable process, which is why the issue has become so central to modern politics.

Citizens are rightly hopeful that the next round of politicians will implement systems of support to help guide them into this next stage of the digital age. One sector that is already contending with a new reality is the automobile industry. While AI is now integral to self-parking and cruise control, it will soon be running the show in the form of autonomous vehicles. I was in San Francisco just two months ago, and every time I left my office I spotted a GM Cruise, one of a fleet of driverless cars wandering the city’s streets with cameras fixed on their roofs. These cars are gathering data with the intention of improving the self-driving technology before the service is made available to the public. GM is not the only auto manufacturer trying to get ahead of a post-driver world. Uber and Toyota have partnered up, and nearly every major player is racing to get a foothold in the industry: Volvo, Waymo, Mercedes, BMW, Nvidia, Huawei, Baidu. The list goes on.

I won’t lie: on a personal level, I’m pretty wary of tech. My poor friends are stuck listening to me constantly lament my childhood, when I would wake up and stare out my window listening to the Louisiana birds chirp. Now I wake up to my phone. Like the rest of my generation, my attention span and willpower are pretty much shot too. However, I will admit, I am starting to come around on this issue. I enjoy driving and I would be sad to see it fully eliminated, but on a larger scale there is a strong argument for massively decreasing the rate of accidents and lives lost in cars.

In the case of driverless cars, the idea is simpler than the reality of implementation, and the idea is hardly simple. The transition to driverless is a classic case of the tech age: one defined by policy debates, messy collaboration between the private and public sectors and a whole lot of opinions. There is a long road ahead, but integration should be easiest in sustainable cities where tech is already at the center of the city’s brand, or where leaders want it to be. Who will be first? San Francisco? Portland? Arlington, Texas? And will our children’s generation, years from now, pay to spend a few hours at a racetrack to experience the temporary thrill of operating their own vehicle? We’ll have to wait and see.

Transitioning from human to machine reasoning is not without its risks and controversies (not to mention it’s just plain scary). Facial recognition and machine bias are two areas currently facing widespread concern. I just got the iPhone 11 Pro Max - *hold for applause* - and I can unlock it with just a glance. It’s convenient, yes, but the implications behind the ease are terrifying. Handing our faces over to our phones (and the companies behind them) has serious consequences for our privacy. Facial recognition technology is actively being used by ICE to track immigrants in the United States, and in China to exert state control and facilitate the country’s social credit system. China is in fact the world leader in facial recognition: citizens can use it to withdraw cash and pay for goods, and the authorities can use it for never-before-seen levels of surveillance. Some Chinese police officers have even been fitted with sunglasses and body cameras that possess facial and gesture recognition technology. The privacy lost and the potential for abuse go far beyond troubling.

Even without facial recognition, algorithmic bias poses a serious problem for the supposed objectivity of AI. Just last week Apple was called out for gender bias in its new Apple Card after co-founder Steve Wozniak and a second male executive revealed that they were approved for higher lines of credit than their wives. And in 2016 ProPublica brought to light the devastating racial bias in criminal risk assessments used to inform sentencing across the US justice system. These AI-powered programs produce a score judging the likelihood of a repeat offense for each person who enters the system. ProPublica’s analysis revealed that black offenders were repeatedly given higher scores - and thus longer sentences, higher bails and later releases - than their white counterparts. These types of algorithmic failures can deepen existing disparities and have serious consequences for human lives. It is for this reason that we need to prioritize diversity within the teams building the algorithms that will shape our future society.
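To make the ProPublica finding concrete: the heart of that analysis was comparing error rates across groups - in particular, how often people who never reoffended were nonetheless labeled high risk. Here is a minimal sketch in Python of that comparison, with invented records (the real study analyzed thousands of COMPAS cases):

```python
# Compare how often non-reoffenders in each group were wrongly labeled
# "high risk." The records below are invented for illustration.
records = [
    # (group, risk_score from 1-10, reoffended_within_two_years)
    ("A", 8, False), ("A", 7, False), ("A", 9, True), ("A", 3, False),
    ("B", 2, False), ("B", 8, True), ("B", 4, False), ("B", 3, False),
]

HIGH_RISK = 7  # scores at or above this threshold count as "high risk"

def false_positive_rate(group):
    """Share of non-reoffenders in `group` who were labeled high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1] >= HIGH_RISK]
    return len(flagged) / len(non_reoffenders)

for g in ("A", "B"):
    print(f"group {g}: false positive rate = {false_positive_rate(g):.0%}")
# A model can look accurate overall while hiding wildly unequal error rates.
```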

Meanwhile, on the very opposite end of the AI spectrum is ‘AI for Good.’ This subset of the ever-popular ‘Tech for Good’ ecosystem focuses on the potential positive impact that AI can create. For example, AI is increasing access to education online and for previously hard-to-reach communities across the world. There are also those designing AI to eliminate bias, not amplify it. And experts across the world are actively researching how AI can help in the fight against climate change, though a serious breakthrough has yet to occur.

One of the strongest examples of AI for good that I’ve encountered is Raheem AI. The company was created after founder Brandon Anderson lost his partner Raheem to a police shooting during a routine traffic stop. The officer who shot and killed Raheem had a history of abusive behavior during these types of stops, but he had never been formally reported. Inspired by his grief and the pervasiveness of police violence towards his community, Brandon launched Raheem: an AI-powered independent service for reporting police conduct. In the United States, 95% of people who have faced police violence have not reported it. Raheem AI was launched to change this fact and empower citizens to put their experiences on record. It partners with community oversight structures, advocates and public defenders to share this data and fight for accountability and justice. I met Brandon last year and heard his story. He is a deeply kind and passionate individual working for justice in a severely broken system.

I meant it when I said AI had found its way into every industry - from medicine to finance, social media to activism, and even art. If you really want to feel crazy, check out this song written and performed by machine intelligence. Is this the next frontier of art, or is it the death of it? Questions like these tend to dominate the AI narrative, most often in the form of debates on the topics we’ve discussed so far - the future of work, automation, bias, privacy and to what extent AI can be used for good. But this is the obvious AI narrative, the comprehensible one. Through this lens, humans are still in control of the technology. There is an entire other side of the coin.

Lane 2:

Okay, let’s zoom out. There are three types of AI.

#1. Artificial narrow intelligence (ANI): This is the one we’ve been talking about for the last four pages. In fact, it is the only one currently in existence. From the most simplistic AI to the most complex, all AI today is ANI. ANI gets its name because its scope is, well, narrow. All existing AI is programmed to perform specific tasks autonomously. Even if an AI can beat every human alive at chess, it still can’t spot every stop sign in the “Are you a robot?” test. For that matter, it can’t tell the difference between a photo of a rabbit and one of a toad. Its capabilities are limited to the task it was programmed to do, and that is all. Since all existing AI falls under this category, we will have to get theoretical from here on out. Bear with me.
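If you want to see “narrow” for yourself, here is a tiny Python sketch (using scikit-learn; the example is mine, not from any particular system). A model trained only on handwritten digits will confidently assign a digit label to pure noise, because digits are the only world it knows:

```python
# A toy demonstration of "narrow": a classifier trained only on handwritten
# digits will assign a digit label to anything you show it - even pure
# random noise - because digits are all it has ever seen.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

digits = load_digits()                      # 8x8 images of the digits 0-9
model = LogisticRegression(max_iter=5000)
model.fit(digits.data, digits.target)

real_digit = digits.data[:1]                # an actual handwritten digit
noise = np.random.default_rng(0).uniform(0, 16, size=(1, 64))  # not a digit

print("real digit ->", model.predict(real_digit)[0])  # a sensible answer
print("pure noise ->", model.predict(noise)[0])       # still answers with a digit
```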

#2. Artificial general intelligence (AGI): AGI is a theoretical AI whose intellectual power is equal to, or surpasses, that of a human. Not only could this AI identify every square with a stoplight in it, but it could drive a car, translate a language, perform surgery - not to mention communicate and think abstractly and creatively - just like a human. Except where a human is limited by the size of the brain and the energy of the human body, AGI would (theoretically) face no such limitations. AGI would possess the skills unique to a human mind - to communicate, perceive and learn. It could develop new skills and talents independently, no longer confined to the limits of its original programming. This is no small feat; achieving AGI would arguably be the most consequential technological advancement in human history.

Despite the unknowable consequences of AGI, the race is very much on to create this technology. In 2014 Google acquired DeepMind, an AI company co-founded by Demis Hassabis that is actively working to achieve AGI. There are a number of theories across the scientific spectrum on how this might actually be done. The dominant theory from the 1950s through the 1980s was symbolic artificial intelligence, wherein scientists attempted to explicitly outline all the rules and facts of human knowledge. It eventually proved unsuccessful, as they were unable to teach the machine implicit knowledge or common sense.

Another popular theoretical approach is whole brain emulation, or “mind uploading” (yep, I said mind uploading). The goal here is to replicate the brain, potentially by slicing it, scanning it and reconstructing it in a 3D software model, then copying it to a computer. One particularly concerning approach is to create an ANI programmed to achieve AGI and let the computer try to teach itself. Meanwhile, DeepMind founder Hassabis believes the solution is to focus on the ways in which the brain processes information, such as how it learns by replaying experiences during sleep. Which, if any, of these theories is correct remains unclear. A number of experts believe that we are vastly underestimating the challenge at hand, while others believe we could reach AGI within the next two decades (more on that later).
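That last idea - learning by replaying experiences - already exists in narrow form as the “experience replay” DeepMind used in its Atari-playing agents. Here is a minimal Python sketch of the underlying data structure, a generic illustration rather than DeepMind’s actual code:

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores past experiences so an agent can keep learning from them,
    loosely analogous to the brain replaying memories during sleep."""

    def __init__(self, capacity=10_000):
        self.memory = deque(maxlen=capacity)  # oldest experiences fall out

    def store(self, state, action, reward, next_state):
        self.memory.append((state, action, reward, next_state))

    def sample(self, batch_size=32):
        # Replaying a random batch, rather than only the latest step,
        # lets one experience teach many lessons and stabilizes learning.
        return random.sample(list(self.memory), min(batch_size, len(self.memory)))

# Tiny usage example with made-up experiences:
buffer = ReplayBuffer()
buffer.store(state=0, action="left", reward=-1, next_state=1)
buffer.store(state=1, action="right", reward=+1, next_state=2)
print(buffer.sample(batch_size=2))
```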

The fact is that AGI, like ANI, would still be a computer program. No matter its intelligence, it would still theoretically be loyal to its base algorithm. For this reason there are many in the futurist community who believe we can create AGI for the greater good, also known as “friendly AI.” The AI company OpenAI was created with this in mind. Founded in 2015 by Elon Musk, Sam Altman, Ilya Sutskever and Greg Brockman, OpenAI released a charter outlining its mission: “to ensure that artificial general intelligence (AGI) - by which we mean highly autonomous systems that outperform humans at most economically valuable work - benefits all of humanity.” The company was launched with $1 billion to fund such research, with backers including Musk, LinkedIn co-founder Reid Hoffman, PayPal co-founder Peter Thiel and others. In 2019 Microsoft invested another $1 billion in OpenAI as part of an exclusive computing partnership between the two companies. With Google in possession of DeepMind and Microsoft partnered with OpenAI, these two companies are arguably in competition to achieve AGI - a feat whose consequences are nearly impossible to contemplate.

The controversy around AGI mostly centers on loss of control. If we achieve AGI, will humans still be the dominant species on earth? By definition, AGI would be our intellectual equivalent - and what would stop it from surpassing us? A popular belief among AI experts: nothing would stop it. It is only a question of how soon, and what comes next.

#3. Artificial superintelligence (ASI): ASI is a theoretical future technology whose intellect vastly surpasses that of humans. ASI would hypothetically come about as the inevitable result of AGI and officially replace humans at the top of the intelligence chain. The moment ASI is achieved is commonly referred to as “the Singularity.” If you’ve heard this term before, it was most likely in a sci-fi film such as The Matrix, in those parts of the internet dedicated to futurism, or within a series of heated debates between the leading minds in tech. The concept of the Singularity is understandably controversial, as well as ripe to be made into sci-fi content - TV, films and books alike.

The issue with ASI is that we are genuinely incapable of imagining how such superior intelligence would materialize, much like how a chimpanzee cannot imagine AirPods or nuclear weapons. And the intelligence gap between us and ASI may well be 10, 100 or 1,000 times greater than that between us and the chimpanzee. These are the thoughts that keep AI experts up at night (see Tim Urban’s Intelligence Staircase). That, and: what would these machines do with us? Could they solve climate change and cure cancer? Would they? Would they then leave us alone to live life as we know it? My instinct is to say “not likely,” but that would be arrogant. The only real answer is: we don’t know.

In a profile of Google and DeepMind, The Economist’s 1843 Magazine wrote of a world with ASI: “Since this future is constructed entirely on a scaffolding of untested presumptions, it is a matter of almost religious belief whether one considers the Singularity to be Utopia or hell.” In his remarkably in-depth two-part blog series on the road from ANI to AGI and ASI, “The AI Revolution,” Tim Urban made the case that the Singularity will lead to either human immortality or extinction. It bears asking: if these are the potential consequences of ASI, why exactly are we trying so hard to create AGI?


But since we are (see Google, Microsoft, IBM Watson), the next best question is: when is this all going to happen? There are only a few existing studies on the matter, and it is important to keep in mind that these predictions are entirely theoretical. In 2013 the leading AI thinkers Nick Bostrom and Vincent C. Müller surveyed hundreds of AI experts on when they predicted human-level machine intelligence (AGI) would exist, and how long from that point until ASI. The survey concluded that:

“The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.”

All things considered, I’ve seen worse odds.

In 2018 Martin Ford, author of Architects of Intelligence, asked 23 leading AI experts by what year they thought there would be at least a 50% chance of AGI existing. Interviewees included Hassabis, Jeff Dean, Google’s AI chief, and Fei-Fei Li, the director of Stanford’s AI Lab. Eighteen of the 23 answered, and only two went on record: prominent futurist Ray Kurzweil said 2029, and roboticist Rodney Brooks said 2200. The average of all the predictions was 2099. Noting the disparity between his survey and Bostrom and Müller’s, Ford suggested age might be a factor: a number of the experts he spoke with were in their seventies, and decades in the industry had taught them that progress moves slower than we might assume. Again, all of these surveys are highly theoretical, but Ford’s does suggest that few of us alive today will live to see our machine equals.

As is to be expected on such a high-stakes and theoretical topic, the world of AI experts is pretty split. Kurzweil is the most prominent advocate for an accelerated timeline, pointing to exponential growth and the increasing rate of invention experienced over the last half century. Microsoft co-founder Paul Allen argued for a more conservative timeline, if any, holding that we seriously underestimate the unprecedented challenge of reaching AGI. Bostrom, the world-renowned AI thinker and director of Oxford’s Future of Humanity Institute, takes a more diplomatic view: there is truly no way to predict such a timeline; it could happen any day, or it may never occur.

Before his death, Stephen Hawking gave a number of interviews on AI and the future of humanity. In 2014 he told the BBC that AI could bring about the end of humanity, warning that “humans, who are limited by slow biological evolution, couldn't compete, and would be superseded...[AI] could take off on its own, and re-design itself at an ever increasing rate." In 2017, during a speech at Web Summit in Lisbon, Hawking too admitted to not knowing what will come of this technology: whether it will help us, ignore us, or destroy us entirely. He cautioned that we must learn how to prepare for and avoid the risks that could come with AI, but in the end his was a hopeful message: “I am an optimist and I believe that we can create AI for the good of the world. That it can work in harmony with us. We simply need to be aware of the dangers, identify them, employ the best possible practice and management, and prepare for its consequences well in advance.”

Even the optimists - and I am one by nature - are wary. We need to prioritize and invest in AI safety research. AI safety - above all other interests - must remain at the forefront of any and all efforts to develop AGI, because the fact of the matter is that we only get one shot at this. So we had better get it right.

If I leave you with anything today, let it be this: how we go about developing this technology matters - a lot. I said it in the last paragraph and I’ll say it again here: safety is vital. Beyond this point, how you feel about AI is completely up to you. Are you a techno-optimist? Do you think we’re doomed to extinction? If so, when? Do you think this is all sci-fi nonsense? I’ve laid out the facts as they stand, but in a world of theories, everyone gets to make a prediction.

AI today (ANI) is far less theoretical. Issues of data privacy, facial recognition, bias and automation will continue to define the coming decades. Everything from how we engage with the companies creating this tech to whom we vote for will be influenced by these issues. One thing is for sure: whether we like it or not, AI is a big part of the new world. Unless something monumental changes, it is how the world works and where the world is going. The more we understand AI, the more we can understand our daily life, the shape our policies should take, how we interact and how we work, and, most importantly, what kind of future we want.
