
The Artificial Intelligence Apocalypse (Part 3)

Is It Time to Be Scared Yet? (The Answer is Yes!)

In Part 1 of this 3-part miniseries, we discussed the origins of artificial intelligence (AI), and we considered some low-hanging AI-enabled fruit in the form of speech recognition, voice control, and machine vision. In Part 2, we noted some of the positive applications of AI, like recognizing skin cancer, identifying the source of outbreaks of food poisoning, and the early detection of potential pandemics.

In fact, there are so many feel-good possibilities for the future that they can make your head spin. In a moment, we’ll ponder a few more of these before turning our attention to the dark side.

Mediated Reality + AI

Another topic we considered in Part 2 was the combination of mediated reality (MR) and AI, where mediated reality encompasses both augmented reality (AR) and diminished reality (DR).

In the case of AR, information is added to the reality we are experiencing. Such additions may be in the form of text, graphics, sounds, smells, haptic sensations, etc. By comparison, in the case of DR, information is diminished, modified, or removed from the reality we are experiencing. For example, the system may fade certain sounds or voices, or blur portions of the image we are seeing, or transform non-critical portions of the scene to grayscale, or completely remove objects from the scene.
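
Just to make the DR side of things a little more concrete, here's a minimal sketch of what a diminished-reality filter might do to a single video frame, using Python and OpenCV. The hard-coded region of interest is, of course, a stand-in for whatever an attention-tracking AI decides you actually care about.

```python
import numpy as np
import cv2  # OpenCV: pip install opencv-python

def diminish(frame, roi):
    """Keep the region of interest vivid; fade everything else.

    frame: HxWx3 BGR image; roi: (x, y, w, h), here hard-coded as a
    placeholder for the output of some hypothetical attention model.
    """
    x, y, w, h = roi
    # Transform the whole frame to grayscale (the "non-critical" look)...
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    out = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    # ...then blur it so it fades into the background...
    out = cv2.GaussianBlur(out, (21, 21), 0)
    # ...and paste the original, full-color region of interest back on top.
    out[y:y+h, x:x+w] = frame[y:y+h, x:x+w]
    return out

# Demo on a synthetic frame so the sketch is self-contained.
frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
result = diminish(frame, roi=(200, 150, 240, 180))
```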

As I noted in Part 2, I personally think that, in the not-so-distant future, the combination of MR+AI is going to dramatically affect the ways in which we interface with our systems, the world, and each other.

A Fashion Statement

Google Glass was an interesting first attempt at an optical head-mounted display designed in the shape of a pair of eyeglasses. Although it proved to be a failure in the consumer space, Google Glass continues to find applications in medical, manufacturing, and industrial environments.

In the relatively short term, when MR+AI really starts to come online — which I would guess to be more than 5 years in the future and less than 15 — I honestly believe that it will be commonplace to see people strolling around sporting HoloLens, Magic Leap, or Tilt Five type headsets.

HoloLens (Image Source: pixabay.com)

Now, you may scoff at me, but I remember sitting on a tube train in London in the early 1980s. Sitting across from me was a businessman dressed in traditional attire: pinstripe suit, bowler hat, briefcase, newspaper, and furled black umbrella. To his left was a goth in full regalia; to his right was a punk rocker with a spanking pink mohawk. A few years earlier, such a scene would have seemed bizarre. By that time, however, nothing seemed out of place because these styles had become commonplace.

I’m quite prepared to agree that the first time we see someone walking around wearing an MR headset, we will think to ourselves, “That looks funny.” But it won’t be long before they are sprouting up all over the place. When 50% of the people you see are sporting MR headsets, you’ll be more concerned with what they are seeing (and you are missing) than with how you look. (When 90% of people are wearing them, you’ll feel embarrassed to be one of the 10% who aren’t.)

Happy Faces

Almost every day, we hear about new technologies that could dramatically change things for the better. Regarding wearing MR+AI headsets, for example, your knee-jerk reaction might be to ask, “But what happens if I need to wear glasses?”

I know what you mean. I have to wear glasses to see anything farther away than the end of my nose. I’m constantly lifting them to focus on things that are close to me, like the titles of books on my shelves. I’ve tried progressive (multifocal) lenses, but I really don’t like them because they require you to move your head in order to focus on whatever you’re trying to look at.

Well, as per this paper published in the journal Science Advances, researchers at Stanford University have created glasses that track your eyes and automatically focus on whatever you’re looking at. This technology is still in the experimental stage and won’t be commercially available for a while, but it surely will arrive in the not-so-distant future, and there’s no reason it couldn’t be integrated into our MR+AI headsets.
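
For the engineers among us, the core idea is surprisingly simple trigonometry: track both eyes, estimate the vergence angle, triangulate the distance to whatever you're fixating on, and set the lens power (in diopters) to match. Here's a back-of-the-envelope sketch; the numbers are illustrative and aren't taken from the Stanford paper.

```python
import math

IPD_M = 0.063  # interpupillary distance, ~63 mm for an average adult

def fixation_distance(left_gaze_deg, right_gaze_deg):
    """Estimate fixation distance from the inward rotation (vergence)
    of the two eyes; angles are measured from straight ahead."""
    # Total vergence angle between the two lines of sight.
    vergence = math.radians(left_gaze_deg + right_gaze_deg)
    # Simple triangulation: the eyes and the fixation point form a triangle.
    return (IPD_M / 2) / math.tan(vergence / 2)

def lens_power_diopters(distance_m):
    """A lens focused at distance d needs roughly 1/d diopters."""
    return 1.0 / distance_m

d = fixation_distance(3.0, 3.0)  # ~3 degrees inward per eye
print(f"fixating at ~{d:.2f} m -> set lens to {lens_power_diopters(d):.2f} D")
```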

In Part 1 of this miniseries, we discussed how the folks at XMOS have the technology to disassemble a sound space into individual voices, and to subsequently focus on one or more selected voices within a crowded audio environment. Suppose you were at a party and this capability was integrated with your MR+AI headset. Now imagine that your AI works out who you are talking to, fades down the other voices, removes extraneous noise, and leaves you with a crystal-clear rendition of your companion’s voice.
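
Assuming the headset's AI has already decomposed the sound field into separate voice tracks (the hard part, and the bit the XMOS technology addresses), the "focus on my companion" step is conceptually just per-voice gain control. A hypothetical sketch:

```python
import numpy as np

def focus_mix(voices, focus_index, background_gain=0.05):
    """Mix separated voice tracks, keeping one at full volume and
    fading the rest down (but not out) for situational awareness.

    voices: list of equal-length mono float arrays, one per speaker,
    as produced by some upstream source-separation stage.
    """
    mix = np.zeros_like(voices[0])
    for i, v in enumerate(voices):
        gain = 1.0 if i == focus_index else background_gain
        mix += gain * v
    # Guard against clipping before sending to the earpieces.
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 1.0 else mix

# Three 1-second synthetic "voices" at 16 kHz, focusing on speaker 1.
t = np.linspace(0, 1, 16000)
voices = [np.sin(2 * np.pi * f * t) for f in (180, 220, 300)]
output = focus_mix(voices, focus_index=1)
```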

Have you heard about the Ambassador from Waverly Labs? This specially designed earbud can capture speech with exceptional clarity. When combined with the app running on your smartphone, the Ambassador can be used as an interpreter that actively listens to anyone talking near you and converts their speech into your native language. If you want to converse with someone, you can share one of your Ambassadors with your companion, thereby allowing both of you to speak and hear in your native languages. Once again, imagine if everyone had this capability integrated into their MR+AI headsets.
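
Under the hood, a gadget like this is essentially a three-stage pipeline: speech recognition, machine translation, and speech synthesis. The sketch below shows the shape of such a pipeline; the three stage functions are hypothetical stand-ins, not Waverly Labs' actual API.

```python
# A hypothetical interpreter pipeline; each stage is a stub standing in
# for a real model (an ASR engine, an MT engine, and a TTS engine).

def recognize_speech(audio: bytes, lang: str) -> str:
    """Stub: convert captured audio to text in the speaker's language."""
    raise NotImplementedError  # swap in a real ASR engine here

def translate(text: str, source: str, target: str) -> str:
    """Stub: translate recognized text into the listener's language."""
    raise NotImplementedError  # swap in a real MT engine here

def synthesize(text: str, lang: str) -> bytes:
    """Stub: render translated text as audio for the earbud."""
    raise NotImplementedError  # swap in a real TTS engine here

def interpret(audio: bytes, speaker_lang: str, listener_lang: str) -> bytes:
    """The whole earbud-interpreter flow: hear -> understand -> speak."""
    text = recognize_speech(audio, speaker_lang)
    translated = translate(text, speaker_lang, listener_lang)
    return synthesize(translated, listener_lang)
```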

There’s an interesting company called Si-Ware Systems, which makes a range of Fourier Transform InfraRed (FT-IR) spectrometers. One use of these NeoSpectra sensors is as material analyzers, and they are being employed in all sorts of diverse applications, from testing soils on farms to determine if anything is missing that should be there (or if anything is present that shouldn’t be there), to discriminating between authentic and counterfeit silk carpets, to recommending colors for customers at hair salons.

NeoSpectra sensor compared to a smartwatch (Source: Si-Ware Systems)

As I wrote in my column, Great Spock! Hand-Held Material Analyzers Are Here, one of my cousins in Canada has an extreme allergy to shellfish. Her response is so acute that she won’t eat anything from an outside establishment for fear of anaphylactic shock. Well, it may not be long before someone develops a NeoSpectra-based scanner module that attaches to your smartphone. It’s not hard to imagine waving this over a plate of food to be informed of the presence of allergens like peanuts, gluten, or shellfish, along with contaminants like salmonella, botulinum toxin, and poisonous chemicals. It’s also not hard to envisage a future generation of this technology being embedded in one’s MR+AI headset.
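
As a toy illustration of how such a scanner might flag an allergen, consider matching the measured absorbance spectrum against a library of reference spectra. Real analyzers use far more sophisticated chemometrics; the spectra and threshold below are made up purely for demonstration.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_substances(measured, library, threshold=0.95):
    """Return the library substances whose reference spectra closely
    match the measured spectrum; the bare-bones idea only."""
    return [name for name, ref in library.items()
            if cosine_similarity(measured, ref) >= threshold]

# Made-up 64-point "spectra" purely for demonstration.
rng = np.random.default_rng(42)
library = {"shellfish protein": rng.random(64),
           "peanut protein": rng.random(64),
           "gluten": rng.random(64)}
measured = library["shellfish protein"] + 0.01 * rng.random(64)
print(flag_substances(measured, library))  # -> ['shellfish protein']
```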

Worried Faces

In the same way that some new technologies could dramatically change things for the good, others have the potential to give us some real “bad hair” days. For example, do you really want people with whom you aren’t acquainted to be able to track your every move?

Have you heard about ultrasonic beacons? These signals can be embedded in audio streams coming out of radios or television sets, or emitted by special devices in stores. As far back as 2017, ThreatPost reported that more than 200 Android mobile apps were listening for these signals and using them to track the activities and movements of their users. As a result, someone somewhere knows what advert you just listened to, and they know how long you paused in front of a particular shoe display in a certain store.
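
These beacons typically lurk just above the edge of human hearing, in the 18 to 20 kHz band, which also means a phone can hunt for them with nothing fancier than an FFT. A rough sketch of such a detector (the band and threshold are illustrative):

```python
import numpy as np

SAMPLE_RATE = 48_000  # Hz; high enough to capture near-ultrasonic tones

def beacon_band_energy(samples, low_hz=18_000, high_hz=20_000):
    """Fraction of signal energy in the near-ultrasonic band
    where tracking beacons are typically placed."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1 / SAMPLE_RATE)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    return spectrum[band].sum() / spectrum.sum()

# Synthetic test: ordinary audio plus a faint 18.5 kHz beacon tone.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
audio = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.05 * np.sin(2 * np.pi * 18_500 * t)
if beacon_band_energy(audio) > 0.001:  # illustrative threshold
    print("Possible ultrasonic beacon detected")
```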

Then there’s the fact that, as reported by CNBC, a Swedish startup hub called Epicenter is gung ho about implanting its members with microchips the size of grains of rice. On the one hand, this allows members to open doors, operate printers, or buy smoothies with a wave of the hand. On the other hand, the company can now track its members to see where they go, what they do, and with whom they talk.

The folks who provide smart speakers like the Amazon Echo and Google Home swear that they don’t listen in on our conversations. This may even be true, but we also know that these companies are busy gathering an extraordinary amount of data about each of us. Suppose my Amazon Echo is listening to me talk in my house. Even if it’s not transmitting what I say to the cloud, it’s not beyond the bounds of possibility that it could generate a “digital signature” for my voice and pass this signature to the cloud, thereby allowing Amazon to tie my voice to me as a person. Now, suppose I go to a friend’s house for a cup of coffee and a chat, and that his Echo also generates a digital signature of my voice. The end result is that Amazon can add a new piece of information to its database — “Max and Bob know each other” — and a new thread is added to the web.
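
Note that this scenario doesn't require the audio itself to leave the house, only a compact "signature" of it. Here's a hypothetical sketch of how two devices' voice signatures could be matched server-side; the embeddings are random placeholders standing in for the output of a real speaker-recognition model.

```python
import numpy as np

def same_speaker(sig_a, sig_b, threshold=0.8):
    """Decide whether two voice "digital signatures" (embeddings
    uploaded by different smart speakers) belong to the same person,
    via cosine similarity. The threshold is illustrative."""
    cos = np.dot(sig_a, sig_b) / (np.linalg.norm(sig_a) * np.linalg.norm(sig_b))
    return cos >= threshold

# Placeholder embeddings: Max's voice as heard at home and at Bob's house.
rng = np.random.default_rng(0)
max_at_home = rng.standard_normal(256)
max_at_bobs = max_at_home + 0.1 * rng.standard_normal(256)  # same voice, new room
if same_speaker(max_at_home, max_at_bobs):
    print('New edge in the social graph: "Max and Bob know each other"')
```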

Did you ever see the 1984 TV series Max Headroom (no relation)? The backstory is a dystopian near-future dominated by television and large corporations that can track your every move. In the UK, there are currently around six million closed-circuit television (CCTV) cameras, which is approximately one camera for every 11 people. Suppose those cameras were equipped with facial recognition and AI capabilities; now suppose somebody in authority wanted to make someone’s life miserable…

Meanwhile, according to CNBC, there are already around 200 million surveillance cameras in China, watching every move the population makes, and nearly all of China’s 1.4 billion citizens are already in the government’s facial recognition database. When you couple this with the fact that the Chinese government has started ranking its citizens with a “social credit” system that can be used to reward, or punish… well, it makes you think.

Scared Faces

Are you familiar with the Canadian AI startup Lyrebird and its work on a new generation of speech analysis and generation technologies? Its tools can listen to someone talking (or a recording of someone talking) for about a minute and generate a “digital signature.” Thereafter, you can use this digital signature in conjunction with a related text-to-speech app to talk in that person’s voice.

It used to be said that “a photo never lies,” but that was long before Photoshop came along. These days, we tend to trust video, but we won’t for much longer. It’s now possible to use AI to analyze a video of someone talking. The AI listens to the accompanying audio, understands all of the words and their context, recognizes the underlying emotions of the speaker (happy, sad, bemused, angry, etc.), and monitors the speaker’s corresponding facial expressions, including blinking patterns and all the micro-muscle movements associated with each phoneme spoken. If you now give the AI a new audio track (possibly generated by Lyrebird), it can generate a deepfake video that you would swear was the real thing.

In 1997, IBM’s Deep Blue beat World Chess Champion Garry Kasparov. In 2016, Google’s AlphaGo beat a human Go grandmaster. Also in 2016, two students at Carnegie Mellon University created an AI program that learned to play the game of Doom by interpreting what it saw on the screen and teaching itself to duck and trick opponents.

Have you seen BabyX 5.0? If not, take a look at this video. This is a computer-generated psychobiological simulation incorporating computational models of basic neural systems involved in interactive behavior and learning.

Created at the Laboratory for Animate Technologies at the University of Auckland in New Zealand, BabyX is both fascinating and terrifying. On the other hand, BabyX is perhaps not quite as terrifying as the world’s first psychopathic AI, called Norman, which is the brainchild of scientists at MIT. When exposed to a Rorschach “ink blot” test, where a normal AI might identify an image as being something like, “A black and white photo of a small bird,” Norman sees something like, “A man getting pulled into a dough machine.”

Do you remember when robots were clumsy and clunky? No more. I was just looking at a recent video of the Atlas robot from Boston Dynamics, and I could barely believe my eyes as I watched it performing acrobatics.

It’s almost as good as your humble narrator, for goodness’ sake! Now imagine squads of these little scamps roaming the streets, armed with AR-15s and equipped with psychopathic Norman AI personalities. Yes, I know this is a bit of a stretch at this moment in time, but what about in 20 years?

The Stuff of Science Fiction

I love reading science fiction books and watching science fiction films. In some ways, we already live in a world of science fiction. Look at the technology we have today: cellular phones, the ability to make direct-dial phone calls anywhere in the world, high-definition televisions, wireless networks, the internet, computers, smart assistants, the list goes on. If you compare this to when I was born in 1957, it makes me feel like Buck Rogers in the 25th Century, and we haven’t seen anything yet.

But we also have to remember that many tales paint a less than rosy picture of the future. Do you remember the original Blade Runner movie from 1982? (It’s strange to think that this was set in 2019.) I’m thinking about the parts involving people creating synthetic creatures. Just the other day on the radio, while I was driving into work, there was a discussion about how it’s now possible for people to upload desired DNA sequences over the internet to have the DNA fragments printed onto glass slides. The resulting synthetic DNA can be inserted into anything from bacterial to mammalian cells. The gist of this program was that as made-to-order DNA gets cheaper, keeping it out of the wrong hands gets harder. I don’t know about you, but I don’t find this to be tremendously reassuring.

Do you recall the article in WIRED that discussed how a group of researchers from the University of Washington managed to mount a malware DNA hack whereby they encoded malicious software into physical strands of DNA? When this DNA was passed to a gene sequencer for analysis, the resulting data became a program that corrupted the gene-sequencing software and took control of the underlying computer. Imagine what could happen if this technology fell into the hands of biohackers.
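
The reason this is even possible is that DNA is just a base-4 storage medium: with two bits per nucleotide, any byte string, including executable code, can be written out as a strand. A minimal sketch of the encoding idea follows (the actual exploit also had to survive the sequencer's processing pipeline, which is much harder):

```python
# Two bits per nucleotide: any byte string can be written as DNA.
BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {v: k for k, v in BASE_FOR_BITS.items()}

def bytes_to_dna(data: bytes) -> str:
    """Encode bytes as a DNA strand, most significant bits first."""
    return "".join(BASE_FOR_BITS[(byte >> shift) & 0b11]
                   for byte in data
                   for shift in (6, 4, 2, 0))

def dna_to_bytes(strand: str) -> bytes:
    """Decode a strand back into bytes (4 bases per byte)."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

payload = b"exploit"                    # stand-in for malicious code
strand = bytes_to_dna(payload)          # e.g. "CGCC..." ready to synthesize
assert dna_to_bytes(strand) == payload  # round-trips cleanly
```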

Organ transplants can save lives. Unfortunately, as we all know, there aren’t sufficient organs to go around. A recurring theme in dystopian fiction is that of organ transplants from donors who are unwilling or incapable of objecting to having their organs removed. Some of these stories feature the use of clones, while others consider state-sanctioned organ transplants from criminals. Well, according to NBC News, an international tribunal has concluded that, “…the organs of members of marginalized groups detained in Chinese prison camps are being forcefully harvested — sometimes when patients are still alive…”

Is It Time to Be Scared Yet?

Returning to the topic of AI, one of the characters in The Orville TV series is Isaac. In addition to being the Orville’s science and engineering officer, Isaac is a member of an artificial, non-biological race from Kaylon-1 that views biological lifeforms, including humans, as inferior. In the episode Identity, the Orville visits Isaac’s home planet. While there, the crew discovers vast caverns containing a seemingly infinite collection of humanoid skeletal remains. We aren’t too surprised to learn that, sometime in the past, the robots decided their biological creators were superfluous to requirements. At least these robots were aware that they were created beings.

One of my favorite books is Great Sky River by Gregory Benford. Set tens of thousands of years in the future, in a star system close to the galactic center, it introduces us to humans who have been augmented with various technologies to make them faster and stronger. Unfortunately for them, they’ve run across a mechanoid civilization that is doing its best to exterminate our heroes. The point is that the mechanoids have no conception that they were originally created by biological beings (see also the They’re Made out of Meat animation of the short story by Terry Bisson).

As far back as 2014, the late, great Stephen Hawking warned how artificial intelligence could end humankind. Other technological luminaries, like Elon Musk, see artificial intelligence as an existential threat to our species. In his latest book, the famed British futurist James Lovelock says that cyborgs — which he defines as “the self-sufficient, self-aware descendants of today’s robots and artificial intelligence systems” — will eventually replace humans and take over the world. (I don’t know about you, but I, for one, will miss beer and bacon sandwiches.)

Will all of this happen in the next few years? I think it’s safe to say not. Is it possible for us to create artificially intelligent systems that become self-aware? A few years ago, I would have said not, but now I’m tending the other way. So, is it time to be scared yet? To be honest, I think the answer is a resounding “Yes!” What say you?


9 thoughts on “The Artificial Intelligence Apocalypse (Part 3)”

  1. REALLY liked this series of yours. I agree with you completely – AI has delivered some wonderful stuff, and will continue to. But it is becoming scary. Reminds me of
    • TV. Who would have thought this medium, with its great potential to deliver learning, would devolve into a medium for delivering crap, with advertising, and
    • the electrification of cars, which is going to create a huge battery-recycling problem, is utterly open to disastrous hacking, will make the utility companies (not so well regulated by corrupt governments) rich, and will have to rely on algorithms to make decisions that humans just wouldn’t make (take a chance on dodging those walkers, even though I might get killed; or drive onto the sidewalk down there and hope that the sidewalk tables and chairs stop the car before I get to the people; or dodge the dog and go into the ditch; etc.)
    • and social media, which seemed like a good idea for keeping in touch with far-flung friends and rels, and which has let protesters in various countries capture images and keep in touch, and so on, but which is also giving white supremacists and terrorist groups a chance to find and communicate with like-minded individuals, making sociopaths, anti-environmentalists, and believers in existentially threatening philosophies think that their ideas are acceptable. On the other hand, social media has been used to coordinate the same sorts of actions by “good” protesters (e.g., HK, the Arab Spring)…
    The pace of change has accelerated so much in our lifetimes… My grandfather emigrated to Canada in the very early 20th century, coming over steerage on a steamship. He retired to Vancouver Island, and one summer during school vacation, he and I watched Sputnik pass overhead. He had figured out when it would pass and wanted to show me this amazing achievement. A huge change over 40-odd years, but still largely mechanical. I began a word-processing service company in NYC in the late 70s, running printed material and transcribed tapes back and forth to clients by bicycle messenger. Three years later, we could exchange info by fax. Within 10 years, there were no word-processing departments because everyone was typing for themselves. Internet – Defense Department – free to the world – social media – blah blah blah. Once things became digital and software-dependent, we were all on fast forward.
    One of the most worrying things – other than the potential for bad actors – is what we are going to do with a world full of more and more people with fewer and fewer job prospects…

    1. I agree — technology has always been a multi-edged sword — most scientists and engineers just bounce along thinking of all the wonderful things their inventions and creations are capable of, and before you know it some dingbat is using it for something horrible — and then there are the side effects (like the old battery problem from electric cars). On the one hand I hope for the best, but — increasingly — I’m starting to fear the worst.

