
A Deepfake Putin and the Future of AI Take Center Stage at EmTech

Deepfake technology got a lot of attention, but I was more interested in the future directions for AI research.

September 20, 2019
Gideon Lichfield as Vladimir Putin

Artificial intelligence took center stage at this year's EmTech conference, presented by MIT Technology Review. The conference began with a demo of a deepfake, and featured conversations about the impact of such technologies and misinformation in general, deploying AI at scale, how organizations should approach using AI, whether facial recognition should be more closely regulated, and most interestingly, AI pioneer Yoshua Bengio's thoughts on creating broader AI.

It began with Technology Review editor in chief Gideon Lichfield talking about the potential impact of deepfakes, which quickly turned into a demo of him appearing on screen pretending to be Russian President Vladimir Putin. It was a pretty good demo—the "Putin" on-screen looked fairly realistic, though the hairline and the accent weren't quite convincing. In any case, it raised a number of questions about whether such deepfakes could affect the election.

Hao Li (Pinscreen) at EmTech 2019

The demo was created by Hao Li of the University of Southern California, who is also CEO of Pinscreen, a company that creates photorealistic avatars of people using AI-driven algorithms. He recounted how, back in 2014, he worked on Furious 7, digitally inserting the face of the recently deceased actor Paul Walker onto a facial performance captured from Walker's brother. At the time, he said, the technology fell short of what the project required, but things are much easier now: work that once took a team months can be done by a single person.

Li said deepfakes haven't caused many issues to date "because the quality isn't there." But he said the field is developing more rapidly than he expected, and that apps such as Zao, the Chinese face-swapping app, would eventually reach the point where they could cause trouble.

Researchers should assume deepfakes will eventually be perfect, he said, and so need to explore different techniques to detect them, such as motion signatures. But he noted that misinformation has always been an issue, including such things as photo manipulation during wartime. "You don't need deepfakes to fabricate stories," he said.
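
For readers curious what a "motion signature" might look like in code, here's a deliberately crude sketch (my own illustration, not Li's method) that reduces a video to one temporal statistic using OpenCV. Real detectors track far richer facial-motion features, but the shape of the idea is the same.

```python
# A crude "motion signature": one number per frame (mean absolute pixel
# change) computed with OpenCV. Only a sketch of the kind of temporal
# statistic detection research builds on, not an actual deepfake detector.
import cv2
import numpy as np

def motion_signature(video_path):
    """Return the mean absolute frame-to-frame change for each frame."""
    cap = cv2.VideoCapture(video_path)
    prev, scores = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            scores.append(float(np.mean(cv2.absdiff(prev, gray))))
        prev = gray
    cap.release()
    return np.array(scores)

# Signatures that are unusually smooth or jittery around the face region
# can be one weak signal that frames were synthesized individually.
```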

(In another conversation at the conference, former Facebook CISO Alex Stamos said the biggest problem with deepfakes is not in politics but in personal bullying, such as revenge porn—politicians have teams that examine the videos and point out where they are fake, but 17-year-old girls have no such recourse.)

Li said some applications, including Zao, were using content illegally, and he raised privacy concerns about uploading photos to apps such as Zao or FaceApp: users do not understand that they are giving the company behind the app the right to do anything it wants with the photo.

He noted that people can now create new content with your likeness, and there's little you can do about it before you see it. Afterward you could sue, but in many cases the damage will already be done.

He made it clear that his company, Pinscreen, is not creating deepfakes but instead focuses on creating digital avatars, or virtual beings, as a form of interface for how you will interact in the future. In e-commerce, for instance, a "next-generation fashion catalog" might show you instead of a model, so you can preview how you will look in clothes. In AR or VR, you could use a three-dimensional digital avatar for face-to-face conversation. Pinscreen is trying to build a platform that makes it easy for people to create such applications.

Towards Human-level AI: Scientific and Social Challenges

Yoshua Bengio at EmTech 2019

"It is important to recognize we're very far from human-level AI in many ways, "said AI Pioneer Yoshua Bengio of the University of Montreal and the Montreal Institute for Learning Algorithms (MILA) in his talk at the conference. He said that we've made amazing progress in AI and should celebrate that, but that said, we have a lot more work to do to get to human-level AI.

Bengio talked about many different approaches towards improving AI, moving beyond the basic deep learning systems that have made so much progress in recent years.

He noted that current AI requires taking high-level concepts from humans, such as by labeling a lot of data. Researchers, he said, are looking at how computers can make more sense of their environment on their own, and are coming up with new ways to represent and conceptualize knowledge.

One such area is "learning to learn," or meta-learning, in which models learn how to generalize better. This is already being used to set hyperparameters for neural nets, but Bengio talked about using the same kinds of techniques to make models generalize based on new data.
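
To make the two-loop structure of meta-learning concrete, here's a toy numpy sketch in the spirit of the Reptile algorithm (my illustration; Bengio showed no code). The outer loop nudges a shared initialization toward weights that adapt to a brand-new task in just a few gradient steps.

```python
# A toy "learning to learn" loop in the spirit of the Reptile algorithm
# (Nichol et al., 2018) -- illustrative only. Each task is regressing a
# randomly scaled and shifted sine wave; the outer loop moves a shared
# initialization toward weights that adapt quickly.
import numpy as np

rng = np.random.default_rng(0)
W_feat, b_feat = rng.normal(size=20), rng.normal(size=20)

def features(x):
    # Fixed random features keep each per-task model linear and tiny.
    return np.tanh(np.outer(x, W_feat) + b_feat)

def make_task():
    amp, phase = rng.uniform(0.5, 2.0), rng.uniform(0, np.pi)
    return lambda x: amp * np.sin(x + phase)

meta_w = np.zeros(20)  # the meta-learned initialization

for step in range(2000):
    task = make_task()
    x = rng.uniform(-np.pi, np.pi, size=10)
    phi, y = features(x), task(x)
    w = meta_w.copy()
    for _ in range(5):                        # inner loop: adapt to this task
        w -= 0.1 * phi.T @ (phi @ w - y) / len(x)
    meta_w += 0.05 * (w - meta_w)             # outer loop: Reptile update

# meta_w is now a starting point from which a few gradient steps fit a
# brand-new sine task far better than starting from scratch.
```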

He noted one problem the industry has had: if you train on data from one country, the distribution of data in another country may not be the same, leading to less accurate results. Humans handle such differences better, he said, because we understand things such as causality.
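
Here's a toy numpy illustration of that failure mode (again mine, not Bengio's): a classifier that leans on a spurious, country-specific correlation rather than the causal feature collapses as soon as the correlation flips in new data.

```python
# Illustrative only: a classifier relying on a spurious correlation fails
# under distribution shift. The label truly depends on x0; x1 is merely
# correlated with it in the training "country."
import numpy as np

rng = np.random.default_rng(1)

def sample(n, corr):
    y = rng.integers(0, 2, size=n)
    s = 2 * y - 1                                   # +/-1 label signal
    x0 = s + rng.normal(scale=0.8, size=n)          # causal feature
    x1 = corr * s + rng.normal(scale=0.8, size=n)   # spurious feature
    return np.column_stack([x0, x1]), y

def fit_logreg(X, y, steps=2000, lr=0.1):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)            # gradient of log loss
    return w

Xtr, ytr = sample(5000, corr=1.0)                   # train in "country A"
w = fit_logreg(Xtr, ytr)

for name, corr in [("country A", 1.0), ("country B", -1.0)]:
    X, y = sample(5000, corr)
    print(name, "accuracy:", np.mean((X @ w > 0) == y))
# Accuracy is high in country A but drops toward chance in country B,
# where the spurious correlation flips.
```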

Another area he discussed was reinforcement learning, which has been used in applications such as game playing (Go, for example). He talked about applying reinforcement-learning techniques in areas like robotics and dialog systems to help deep learning systems gain a better perspective. Applications might include drug discovery, materials discovery, and dialog systems: anywhere you want the "learner" to explore and experiment to acquire knowledge. So far, he said, we've taken "baby steps," but there's a lot more to be done.
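
As a reminder of what that explore-and-experiment loop looks like at its simplest, here's a textbook tabular Q-learning sketch (my example, not anything from the talk): an agent learns by trial and error to walk down a corridor to a reward.

```python
# Minimal tabular Q-learning on a 6-cell corridor with a reward at the
# far right. The agent discovers the "go right" policy by trial and error.
import numpy as np

rng = np.random.default_rng(2)
n_states = 6
Q = np.ones((n_states, 2))        # optimistic initial values drive exploration

for episode in range(500):
    s = 0
    for _ in range(100):                          # cap episode length
        if rng.random() < 0.1:                    # epsilon-greedy exploration
            a = int(rng.integers(2))
        else:
            a = int(np.argmax(Q[s]))
        s2 = max(0, min(n_states - 1, s + (1 if a else -1)))
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Bellman update: move Q toward reward + discounted future value.
        Q[s, a] += 0.5 * (r + 0.9 * (0.0 if r else np.max(Q[s2])) - Q[s, a])
        s = s2
        if r:
            break

print(np.argmax(Q, axis=1))  # learned policy favors 1 ("go right") in cells 0-4
```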

Another approach would be to combine today's deep learning with classical AI. Such systems, based on logic and symbols, were what he was taught in graduate school years ago, he noted. Their big limitation is that they require human experts to provide the knowledge needed to solve problems. But there are a lot of things we know yet can't explain, such as perception, and that's where deep learning has had its biggest successes.

Classical AI was focused on reasoning, he said, while deep learning is focused on things we might call intuition. Combining the two might produce a different kind of solution built on top of both ideas, and closing this gap is something we need to do to approach human-level AI.

In answers to various questions, Bengio talked about the importance of "ethical AI," saying researchers have a responsibility to think about how their work will be used. "We are called upon to be part of a democratic discussion about how AI will be deployed," he said. He said AI would be even more powerful in the future, and that we need to increase in wisdom as we increase in power.

"AI is not magic," he said, in answer to a question about the biggest misconception in the field. He noted that sometimes organizations don't recognize the limits of the AI we have and noted that there are a whole chain of decisions and changes that organizations have to go through before they can really use AI. The organizations that will do best with AI must be willing to invest in a long-term process to build up what they need.

Bengio said he was passionate about neural networks, and the possibility that a few simple principles might explain our intelligence and let us build intelligent machines. On the social side, he said, if we collectively make the right decisions, we can really bring forward a much better world where everyone on earth can benefit.

He said that while he doesn't believe it would be impossible to design a machine that understands emotions or consciousness, we have a long way to go in understanding those things in humans before we think about designing such machines.

Scaling and Deploying AI

Gadi Singer with the NNP-I at EmTech 2019

In a sponsor session, Gadi Singer, general manager of Intel's Inference Products Group, said he expected that in the next three years, deep learning would be deployed at scale, permeating all industries.

Singer talked about how AI now has four big "superpowers." The most common is pattern recognition, used in many domains, including image recognition, speech recognition, and fraud detection. (Later, he said he was particularly interested in health care applications, such as fMRIs, X-rays, and CT scans.) Second, AI can act as a universal approximator: because it learns the correlation between input and output, it can make predictions about results, which allows it to stand in for simulations of things like particle movements at CERN or flight routes, using much less power and time than conventional simulations, even if it's not quite as accurate. Third, it is good at sequence mapping, used in things like cleaning DNA sequences or language translation. And fourth, it works for similarity-based generation—creating the next examples of something, such as voices, photos, or video.
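
The universal-approximator point is easy to demonstrate in miniature. Below is a small numpy sketch (my illustration, not Intel's) that fits a tiny neural network to samples of a function (here just sin(x)); once trained, the network can stand in for the original "simulator," which is the surrogate-model pattern Singer described at vastly larger scale.

```python
# A tiny network fit to samples of a "simulation" (here just sin(x)).
# Once trained, the net approximates the function far faster than
# rerunning the simulator -- the surrogate-model pattern, in miniature.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(x)                                   # stand-in for an expensive sim

W1, b1 = rng.normal(scale=0.5, size=(1, 32)), np.zeros(32)
W2, b2 = rng.normal(scale=0.5, size=(32, 1)), np.zeros(1)

for step in range(5000):                        # plain gradient descent
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                              # mean-squared-error gradient
    gW2, gb2 = h.T @ err / len(x), err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)            # backprop through tanh
    gW1, gb1 = x.T @ dh / len(x), dh.mean(0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= 0.1 * g

print("final MSE:", float(np.mean(err ** 2)))   # small: the net has fit sin(x)
```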

He showed off Intel's new inferencing chip, the Nervana Neural Network Processor for Inference (NNP-I), saying it can achieve 50 TOPS and was designed for high efficiency and scale.

He said there are three things an organization most needs in order to deploy AI successfully. First is a highly committed C-suite, one that recognizes there will be some failures in experimenting with and deploying AI. Second is quality data, because deep learning requires large quantities of data, and much of the work is in connecting, cleaning, and analyzing that data. Finally, he said, it takes "really smart data scientists"—on your team or otherwise (such as through services)—people who understand the technology and can integrate it into your line of business.

In addition, he said, you need to understand the "unique pace of innovation" happening in this field. It usually takes years for a new concept to become a new product, but in AI this happens much faster. He noted that a new model from Google called BERT revolutionized natural language processing, and that if you weren't using those concepts within six months, you were behind.
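
For a sense of how accessible BERT quickly became, here's a minimal example using the Hugging Face transformers library (my illustration; Intel showed no code). BERT's masked-word pretraining is the concept Singer was referring to.

```python
# Filling in a masked word with BERT via the Hugging Face transformers
# library (pip install transformers torch). Purely illustrative.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for guess in fill("Deep learning will be deployed at [MASK] across industries."):
    print(guess["token_str"], round(guess["score"], 3))
```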

AI Adoption and Governance in Large Organizations

Steven Hill and Cliff Justice (KPMG) at EmTech 2019

There are big differences in how large enterprises are adopting AI, according to a KPMG study of thirty Fortune 500 companies. In another sponsor session, KPMG's Steven Hill and Cliff Justice discussed the study, with Hill noting that success in deploying AI is "not about the technology, it's about the people."

Hill said the difference between leaders and followers is a 10-fold increase in investment by the leaders, who are essentially betting on competitive dynamics that will play out over the next decade.

Justice said there are three broad categories of value that organizations are getting from AI. First is using AI for insights, predictions, and forecasting. This is mostly focused on reducing risk and shedding light on new opportunities. Second is AI for augmentation—with things like augmented skill sets (using more resources to make better decisions), as well as virtual assistants and augmented reality for customer service. Finally, there is AI for automation, which includes automating processes in finance, HR, or on the factory floor, often using techniques such as natural language processing and computer vision. Each of these has its own return on investment, and he noted that it is "not inexpensive to implement AI."

Justice said it was important to understand that AI is very much a probabilistic technology instead of the deterministic ones that enterprises are used to. As a result, he said it is important for organizations to understand the inputs, the integrity of data, how the AI is being trained and who is giving it context, so they have a sense of confidence in the results.

Hill agreed that KPMG clients are nervous about deploying AI at scale, saying the biggest reason was a "lack of confidence" in the technology. He said organizations are concerned that an AI-based system might violate some rules or could slide into bias, and that it works so fast that in production, people aren't going to be able to check it. He believes we will need new tools and techniques to provide the confidence leaders need to move forward but said the idea of machines checking machines here is still in a nascent state.

As for advice, Justice said organizations need to rethink their ecosystems, noting that with the big platform players all offering advanced AI, you don't need to build it all yourself. Hill agreed that organizations shouldn't think they have to go it alone and should consider working with partners, but he cautioned against procrastination, noting that it takes time and investment to change an organization so it can properly use AI.

Roundtable: The Politics of Regulating Facial Recognition Tech

Daragh Murray (University of Essex) and Mutale Nkonde (Harvard University) at EmTech 2019

A roundtable on "the politics of regulating facial recognition tech" pretty much turned into a plea for more regulation of facial recognition tech.

Daragh Murray of the University of Essex said that overall, AI can be incredibly useful for protecting human rights, and that investment should be encouraged. But he worried that facial recognition represents a step change in the ability of corporations and governments to create profiles of every person. Removing anonymity "prevents us from becoming who we are," he said, because being anonymous allows people to experiment without anyone keeping track. He also worried that the technology could have a "chilling effect," felt most heavily by society's minorities.

He talked about how the UK has a lot of cameras but is still well behind the US in face recognition. He said he was worried about the lack of transparency in how such systems are being used, and said he expected regulation to come, based on the country's Human Rights Act.

Mutale Nkonde of Harvard University shared that worry about a "chilling effect," fearing the technology might hinder the development of leaders who would come forward to challenge government. But she was most worried about the harm to minority communities, noting that privacy is seen differently in different communities and pointing to how New York's "stop and frisk" policies mostly stopped black and brown people, who were overwhelmingly innocent.

She also worried about landlords using face recognition for locks, and about products like the Ring doorbell adding similar features, with the risk that the data would be shared with law enforcement, again affecting minority communities.

Nkonde called for a moratorium on the use of facial recognition and said she thought it would eventually be banned (although in a later conversation, she said she expected there would be an exception for its use in preventing terrorism).

Yi Leng of the Chinese Academy of Sciences participated by phone and talked about the potential of face recognition for tracking missing children, or for paying for things as an alternative to passwords, while saying he doesn't like facial recognition in classrooms because it changes the way teachers and students interact. He said 83 percent of Chinese people support the proper use of facial recognition by government, which is not that different from surveys in the U.S., but he noted there will be cultural differences in how we define privacy.

Murray said the "system is broken," with saying it is really difficult for a person to make a really informed decision potentially hundreds of times per day, and how already many people do not realize what they have consented to. Instead, he said, we need to get consent "on a societal level."

Nkonde agreed and was worried that policymakers would let economic concerns trump societal ones in developing the rules. She said that society needs to understand what is being consented to.

It was an interesting discussion but would have benefited from having people who were developing or deploying facial recognition systems on the panel as well.


About Michael J. Miller

Former Editor in Chief

Michael J. Miller is chief information officer at Ziff Brothers Investments, a private investment firm. From 1991 to 2005, Miller was editor-in-chief of PC Magazine, responsible for the editorial direction, quality, and presentation of the world's largest computer publication. No investment advice is offered in this column. All duties are disclaimed. Miller works separately for a private investment firm which may at any time invest in companies whose products are discussed, and no disclosure of securities transactions will be made.
