The Future (And Reality) Of Autonomous Enterprise Software

BrandPost By John Barnes
Jul 12, 2018
Technology Industry

Credit: iStock

Oracle’s series of announcements about autonomous database technology – with the first real product being the Autonomous Data Warehouse Cloud, launched on March 27 – has been declared a profound revolution, repackaged hype, and everything in between. The announcements have touched off a vigorous, multi-sided debate on the forecast for software autonomy: its definition and financial imperatives, the intertwined hype and truth about its performance, and the limits (if any) of an autonomous-security arms race.

Autonomous cars are already on city streets, and autonomous enterprise software that can be taught to manage, optimize, and secure itself is intrinsically simpler. There are large financial rewards for creating autonomous enterprise software, and enterprise security in particular looks like an area where autonomous capabilities are needed to combat the overwhelming volume and increasing sophistication of threats. Autonomous enterprise software is a near-certainty for the future, but how close is it to reality today?

Are we there yet?

Skeptics questioned Oracle’s choice to call its latest iteration of Platform as a Service “autonomous.” Oracle itself claims three areas of autonomy — self-driving, self-securing, and self-repairing — for the platform. That platform, in turn, supports six specific not-yet-autonomous application areas — packages for managing data, application development, integration, analytics, security, and overall systems.

Those six areas will remain fundamentally human jobs, with ever-improving software assists, for the foreseeable future. Oracle is claiming only that the platform itself will run without human design, tuning/optimization, security work, and maintenance; that is, the feasible path to software autonomy runs through the supporting platform. Oracle Autonomous Data Warehouse Cloud is the first step along that path.

The financial driver: labor costs for data warehousing

In the ongoing big data revolution, large firms that intend to become data-driven (the great majority, according to a recent McKinsey Global Institute study) must have data warehouses: databases that sit safely off the production track yet stay current and accurate, optimized for research rather than business operations.

Currently, data warehouses are costly because they have to be continually built, maintained, tuned, assessed, expanded, and re-engineered by a team of data engineers, database analysts, and database administrators, all high-paid professions where experience comes at an even higher premium. With an autonomous software platform, many more of those processes can be trusted entirely to the machines in real time. As Hyoun Park, CEO and Principal Analyst of Amalgam Insights, explains, “Oracle is seeking to eliminate any tuning and manual management of the database, which has traditionally taken up the majority of work for database administrators. With the emergence of the Autonomous Database, Oracle has the opportunity to both provide short-term benefits and to create an analytic environment that continues to improve over time, compared to traditional manual approaches that force DBAs to go back and iteratively clean up the database over and over again.”

In addition to the labor cost savings, reduced downtime and far greater agility will further enhance profitability by enabling a faster, more nimble response from the analytics team. Less quantifiably, freeing skilled humans from routine tasks is likely to lead to higher-quality work in the near term, because some aspects of tuning and security are still done better by human beings.

Hype and truth, the complex tangle

Provisioning and populating a new data warehouse used to be complex, difficult, and specialist-only, with a high cost for initial mistakes. Oracle’s tutorial shows how Autonomous Data Warehouse Cloud expands a few high-level decisions on an entry screen into the same result, now easily modified.
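As a rough illustration, here is a minimal, hypothetical sketch of roughly the handful of decisions such an entry screen collects; the field names and values are illustrative assumptions for this article, not Oracle’s actual API or console fields:

    # Hypothetical provisioning request: only a few high-level choices are needed.
    warehouse_request = {
        "display_name": "sales_analytics_dw",  # illustrative name
        "cpu_core_count": 2,                   # compute capacity
        "storage_size_in_tbs": 1,              # storage capacity
        "admin_password": "********",          # redacted placeholder
    }

    # Everything below this level (schema tuning, indexing, patching, backups)
    # is left to the autonomous platform rather than specified by the requester.
    for field, value in warehouse_request.items():
        print(f"{field}: {value}")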

But what does autonomy have to do with it?

Autonomous software requires the support of many automated processes linked to software-defined networks (SDNs). Automation and SDNs actually provide many of the benefits claimed for autonomy. For example, downtime-free patching, immediate upgrade deployment, and fast error fixes are advantages of simple automation plus SDN. Both enable software autonomy, but the biggest savings are not necessarily due to autonomy. Part of the point of moving software into the cloud is to let the organization hosting that software do most of the work of maintaining, upgrading, and securing the system. The customer doesn’t necessarily know whether the work is being done autonomously or by the cloud vendor’s personnel – and may not care, as long as the work gets done.

Besides SDN and automation, other prerequisites of software autonomy also have direct positive effects on business operations. The sheer size and speed of modern systems mean they do not have to be as well-tuned to deliver the same performance; some of the performance gain from cloud scalability and hardware speed will be attributed to autonomous tuning on the fly. Minimal downtime and on-the-fly operations make security updates almost instant, drastically narrowing windows of vulnerability and reducing the number of successful penetrations. Again, none of this necessarily makes a system autonomous.

The enmeshed relationship between autonomy and so many other ongoing improvements means that, as with Oracle’s platform, it will likely be difficult to establish how much of any observed success is really due to autonomy.

The security arms race — why truth must eventually exceed hype

And yet, although truth and hype will be entangled beyond separation in the discussion, security issues will ensure that the truth will always turn out to be the much larger part.

The struggle between malware makers and malware detection is an arms race, and that is more than a truism: arms-race models in machine learning can be used to create and improve security software, eventually doing it better than human engineers.
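To make the idea concrete, here is a minimal toy sketch of that arms-race dynamic: an “attacker” nudges malware feature vectors toward benign-looking behavior, and the detector retrains on the evasive variants it has seen. Everything here (the synthetic features, the shift size, the scikit-learn classifier) is an illustrative assumption, not any vendor’s actual detection pipeline.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Toy behavioral feature vectors: benign software clusters near 0,
    # malware near 2 (purely synthetic for illustration).
    benign = rng.normal(0.0, 1.0, size=(500, 10))
    malware = rng.normal(2.0, 1.0, size=(500, 10))

    X = np.vstack([benign, malware])
    y = np.array([0] * 500 + [1] * 500)
    detector = LogisticRegression(max_iter=1000).fit(X, y)

    for generation in range(5):
        # Attacker's move: shift malware features toward the benign region.
        evasive = malware - 0.4 * (generation + 1)
        detection_rate = detector.predict(evasive).mean()
        print(f"generation {generation}: detection rate {detection_rate:.2f}")

        # Defender's move: retrain on the evasive variants it has now observed.
        X = np.vstack([X, evasive])
        y = np.concatenate([y, np.ones(len(evasive), dtype=int)])
        detector = LogisticRegression(max_iter=1000).fit(X, y)

Each printed rate is the detector’s score against attacks it has not yet trained on; the retraining step then folds those attacks into the next round, which is the essence of an arms-race model.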

We’re not quite there. A 2016 DARPA Grand Challenge demonstration of autonomous malware versus autonomous security, built around a “capture the flag exercise” where bots battled bots to exploit each other’s systems, showed the value of a system being capable of probing and patching its own vulnerabilities – and resulted in a $2 million prize and a DoD contract for the winning team from Carnegie Mellon University. David Brumley, the CMU professor who has gone on to commercialize the tech as CEO of ForAllSecure, acknowledges that at the same Defcon event where the DARPA challenge was held, the team’s Mayhem automated security bot “came in dead last” when it was tested against the creativity of human hackathon contestants. On the other hand, those were some of the best hackers in the world. Where autonomous systems excel is in “cold, hard calculation” of risk at a very large volume – something that’s very much needed to comprehensively audit the vulnerabilities in enterprise systems, he said in an interview last year.

Autonomous capabilities are also advancing rapidly.

Historically, if a machine can do a thing at all, it will soon do it better than people.  Park points out, “The evolution of machine optimization often follows Moore’s Law where machine-based tasks get twice as good every 12-18 months. This means that a machine that can do something at 1% of human skill today will typically be ready to replace humans in 7-10 years.”
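Park’s rule of thumb is easy to sanity-check: going from 1% of human skill to parity is a 100-fold improvement, or about 6.6 doublings.

    import math

    # Park's rule of thumb: capability doubles every 12-18 months.
    # From 1% of human skill to parity (100%) is a 100x improvement.
    doublings_needed = math.log2(100 / 1)          # about 6.64 doublings

    years_fast = doublings_needed * 12 / 12        # ~6.6 years at 12 months per doubling
    years_slow = doublings_needed * 18 / 12        # ~10.0 years at 18 months per doubling
    print(f"{doublings_needed:.1f} doublings -> {years_fast:.1f} to {years_slow:.1f} years")

That works out to roughly 7 to 10 years, which is where the estimate comes from.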

Malicious autonomous software can already produce competent, if uninspired, malware. It will soon be launching malware at unprecedented levels of proficiency and innovation. Waiting for malware to blow something up, then fingerprinting it afterward is becoming unacceptably inefficient and expensive. Going forward, malware defense must rest with autonomous software that can perceive, investigate, recognize, and make decisions about the actions and intent of other software.

Arms races do more than just accelerate deployment of advanced tech and increase investment. They also prompt breakthroughs in unexpected directions, and spin off innovations into unexpected applications. As Dr. Marco Ramilli, founder and CTO of the cybersecurity firm Yoroi, explains, autonomy in security software could be turned against it by feeding it corrupting data: “Autonomous artificial intelligence solutions which will take automatic decisions need to improve themselves by learning day by day. An attacker could exploit this behavior by injecting ‘malicious behavior’ close to the ‘normal behavior’ and [increasing] the distance between ‘normal behavior’ and ‘malicious behavior’ day by day. The auto-learning algorithm will change itself and after several month[s] it will be not able to detect a real attack.”
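A minimal toy sketch of the drift attack Ramilli describes, using an assumed self-learning anomaly detector (a simple three-sigma baseline over synthetic traffic, not any real product’s model), shows how daily injections that stay just inside the “normal” band can gradually widen it until a genuine attack no longer stands out:

    import numpy as np

    rng = np.random.default_rng(1)

    # Self-learning baseline: flag anything more than 3 sigma from the learned mean.
    history = list(rng.normal(0.0, 1.0, size=1000))   # legitimate "normal" traffic

    def is_anomalous(value, history):
        mu, sigma = np.mean(history), np.std(history)
        return abs(value - mu) > 3 * sigma

    attack_value = 8.0
    print("before poisoning:", is_anomalous(attack_value, history))   # True: flagged

    # The attack: each "day", inject points nudged a little further from normal.
    # The detector keeps learning from whatever it did not flag, so its notion
    # of normal slowly drifts toward the attacker's target.
    for day in range(30):
        poison = rng.normal(0.3 * day, 1.0, size=50)
        history.extend(p for p in poison if not is_anomalous(p, history))

    print("after poisoning:", is_anomalous(attack_value, history))    # typically False: the attack now blends in

In practice the poisoning would be far subtler, but the mechanism is the same: it is the model’s learned baseline, not its code, that gets corrupted.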

This is only the bare, hypothetical beginning. Poisoned training and test sets could corrupt machine learning software into adopting perceptions, policies, and conclusions contrary to the interests of its owners.

Will autonomous security software be able to perceive its own corruption? Ramilli is not optimistic: “[AIs] are fast and precise, but they cannot really understand what is happening, only a human brain will be able to understand the real intent of [the] attack and for such a reason only a human brain is the ultimate defense.” Yet he is hopeful that with the aid of AIs and the collaboration of colleagues worldwide, the defense will mostly, though perhaps not completely, prevail.

The only certainty is that we don’t know yet

Whether Oracle’s Autonomous Data Warehouse Cloud constitutes real autonomy or a false dawn, the real dawn is coming on fast. The financial advantage of reassigning expensive human specialists away from routine tasks ensures that autonomous enterprise software will keep advancing, despite the difficulty of sorting out how much of the reward is really due to autonomy, and despite the risk of a costly autonomous-security arms race that the good guys just might not win. Whether we look forward to it or dread it, it will be here soon, if it isn’t here already.

Cloud computing is driving innovations like greater autonomy in software, but that’s not the whole story of the future of IT. Read The Future of Computing Is Bigger Than the Cloud.

(Additional reporting by David F. Carr)

_____________________________________

About John Barnes

John Barnes’s blogs and articles about data analysis, business statistics, marketing intelligence, and advanced technology in the information industry have appeared frequently in several different venues since 2010. In the technical fields he has worked in systems analysis, business statistics, software reliability theory, sentiment analysis, statistical semiotics, and formal specification. As a writer, he has published 31 novels with commercial publishers and over 100 magazine articles and short fiction pieces. Academically, he has taught in half a dozen different departments, mostly related to human communication, and his dissertation was in the digital humanities before that term was coined. After facing up to not having kept his skills current, he has spent the last few years retraining in data science, and is currently quite possibly the oldest data science intern you will ever see.