There’s nothing fake about the cybersecurity potential of artificial intelligence

The “A” in AI may stand for “artificial,” but the belief in its potential for cybersecurity in government and business circles is very real.

In fact, industry sources say efforts to make use of artificial intelligence could drive a more flexible approach to cyber-related regulation, particularly in the finance sector.

Former White House cybersecurity coordinator Rob Joyce, now back at the National Security Agency, called AI a “key element” in cybersecurity strategy in a recent speech.

“The point about AI being a key element of the future, I think there is so much that AI can do to clean out anomalies, to move the speed of cyber, in setting up those defenses,” Joyce said.

Makers of financial-technology products are looking into and promoting the possibilities, a topic discussed extensively at the recent Securities Industry and Financial Markets Association “FinTech” conference in New York City.

Rob High, vice president and CTO of IBM Watson, said in an interview on the sidelines of the conference that his firm has a “cybersecurity offering infused with AI” that is “focused on the response side” of cyber challenges.

“Once a threat is identified, we help find the remediation steps in real time,” High said. Typically, he said, computer system administrators “can’t find [the right remediation steps] fast enough” once they’ve discovered a breach and are racing to respond.

Watson already has some brand-name credibility after winning $1 million on “Jeopardy!” a few years ago.

AI tools also seem likely to help in the realm of cyber threat information sharing — processing the enormous volume of data that security organizations must wade through — and to aid in more mundane chores such as searches for dangerous anomalies on a company’s computer systems.

Government leaders said they are taking steps that could encourage the use of AI as a cybersecurity tool.

Craig Phillips, counselor to Treasury Secretary Steve Mnuchin, told the conference that he endorses a “responsible innovation” initiative at the Office of the Comptroller of the Currency that could help clear the way for greater use of AI technology by financial firms for cyber and other purposes.

Phillips cited accelerating innovation and fixing the patchwork of regulations in the financial sector as two prime goals for the department.

Many industry panelists at the conference stressed that the financial sector is regulated by an alphabet soup of agencies, creating challenges for firms trying to incorporate new technologies such as AI. They urged the Treasury Department to aggressively pursue regulatory harmonization.

Lawmakers are taking a look at AI as well — in late June, the House Energy and Commerce subcommittee held a hearing on the topic. There’s no legislative effort on the burgeoning technology as of yet, but AI appears to be of interest to key lawmakers looking for innovative and cost-effective cyber solutions.

Sources with financial firms and FinTech vendors say regulators are also receptive to their efforts to employ AI, while suggesting the advanced technology will also require a sophisticated approach.

“How to navigate multiple regulatory overlays is a good question” as firms consider employing AI-based tools, Jennifer Klass of Morgan Lewis said on a panel at the SIFMA event. “We’re trying to have these discussions with regulators, but it’s definitely a balancing act.”

Klass embraced the idea of a distinct, walled-off “sandbox” as “a way to permit innovation.”

SIFMA has proposed that federal authorities create such a “sandbox,” where firms can experiment with technologies such as AI without facing immediate regulatory consequences.

But various participants at the SIFMA event urged early collaboration with regulators as firms look to AI and other advanced technologies to meet cybersecurity needs.

Gary Nichols of Charles Schwab noted “there is not a well-defined ‘control environment’ for AI,” in terms of security and performance standards, and said “over the next two years this will be a major topic.”

Stefan Dicker of the FinTech firm Rise, speaking alongside Nichols on a panel, said “the conversation is accelerating on ‘dynamic controls,’” while Klass said the control environment around AI must “be flexible and adapt quickly.”

“We tend to look at applying [regulatory] controls to these technologies, but that’s kind of counter to the purpose,” Dicker said. “We’re talking about this and struggling with what kind of framework to apply [to AI] — people policies or technology policies, or a new category?”

Regulators and firms will have to “set parameters on how much risk they are willing to accept,” Dicker said, while adding that it probably isn’t “worth establishing rigid controls when the environment is changing so quickly.”

“Engage with the regulators at the front end,” Klass said.

Experts, firms and regulators will have many questions to address, both profound and mundane. For instance, SIFMA panelists weighed whether an artificial intelligence tool should be considered a machine or an employee for purposes of regulatory controls and oversight. Further, does it require “a magic off switch,” as one panelist asked?

Answers to such big questions will take a while, participants here acknowledged, but the process is underway. Upcoming regulatory reform moves in the financial sector, in particular, seem likely to help clarify the issues at play and promote real-world experience with AI tools.
