
Artificial Intelligence (AI), Hardware And Software: History Does Rhyme


“History does not repeat itself, but it rhymes,” is a quote attributed to Samuel Clemens (Mark Twain). The current state of artificial intelligence (AI) advancement reminds me very much of computer technology in the 1970s and ’80s. Back then, there was a big argument between hardware and software people about which was more important. It was always a silly argument, as both were needed. Eventually, as hardware became standardized, the intervening decades saw a focus on software. Now chip companies are going through the same transition with AI. Some of that started with both the cloud and gaming, but AI is forcing more changes.

Nvidia (NASDAQ: NVDA) GPUs are one of the key assets in the data center for the deep learning (DL) segment of AI, which is focused on different flavors of neural networks. They want companies to be able to quickly build machines for inference. As their first and still core business is gaming, they learned from that sector that it’s better for game developers to focus on the game experience. Nvidia’s software tools grew to minimize how much developers needed to know about the GPU. That lesson has been extended to the AI realm. “Accelerated computing is not just about the chips,” Jensen Huang, CEO of Nvidia, said at GTC 2019. “Accelerated computing is a collaboration, a codesign, a continuous optimization between the architecture of the chip, the systems, the algorithm and the application.”

While GPUs have shown advances for DL, and CPUs remain in the mix, a younger generation of companies is trying to develop new chips optimized for deep learning. Their early focus naturally had to be on the chipset. However, there’s also a need to focus on the applications that will use those chipsets. As these companies begin to edge into the market, there are signs that they have learned the same lesson that Nvidia, Intel (NASDAQ: INTC), and the other major players have learned about software and ease of use.

BrainChip is one such startup. They manufacture a chipset they claim leverages neuromorphic computing. However, what little I know about neuromorphic computing focuses on leveraging analog computing, while the company is providing a purely digital technology. The technical details are not key to a business understanding, so just think of it as a way to achieve far more advanced parallelism in processing, one that attempts, at a very simplistic level, to imitate the processing found in a brain.

So, this is a new technology; how in the world are developers to leverage it to advantage? Evolution is far easier than revolution, so how can a company bring a new hardware technology to market in the smoothest way possible? The answer is software.

BrainChip is now focusing their message on their software toolkit, the Akida Development Environment. For the last year and a half, they’ve been providing it via license, but they’ve now made the environment freely available through their own site, GitHub, Python, and elsewhere. “We think our Akida™ neural processor has competitive advantages compared to existing deep learning accelerators,” said Roger Levinson, COO, BrainChip. “However, if application developers can’t leverage our low power, low cost, and incremental learning, our advantages remain unrealized. Making the Akida Development Environment freely available, and continuing to work closely with the development community, is critical to providing performance advantages to applications relying increasingly on neural networks.”


Note that, as with the rest of the AI/DL world, this is not a fourth-generation, GUI-driven solution. The industry is still a few years away from that. However, by openly tying its own development tools to the same resources used by developers leveraging existing tools, BrainChip is lowering the startup cost of working with their chipset.

I’ve been through many changes in the industry. One of the most painful was with Smalltalk, a wonderful early object-oriented language. The problem was that Digitalk and ParcPlace positioned it as a revolution, educating people on the power of objects while scaring them away from Smalltalk itself. Developers bought the first message and then turned to C++, a terrible language but one that was similar to C. Smalltalk never made it in the market.

As deep learning grows in market size, one critical aspect of adoption is avoiding that pitfall. Hardware vendors, both large and small, are focusing on software to ensure that their chipsets are an advantage, not an obstacle. AI is creating a fragmented hardware market such as we haven’t seen in decades, so it’s clear that the two sides of the coin must work closely together. Which is more important, hardware or software? The answer is “yes!”

