
Microsoft Azure CTO Russinovich sees an AI world that sounds a bit like Visual Basic

Mark Russinovich, the chief technologist for Microsoft's Azure, reflects on bringing AI from cloud to edge, a process that over time may come to look like a very big Visual Basic app.
Written by Tiernan Ray, Senior Contributing Writer

People who should know better light up cigarettes next to gasoline pumps. 

That is one surprising discovery in Microsoft's deployment of its machine learning capabilities to what's known as the "edge" of computing, in this case, at gas stations.

It's conceivable the lighting of a cigarette could trigger a complex web of activity, all of it managed via software functions, in a fashion akin to Microsoft's Visual Basic programming language.

That reality is taking shape, as explained last week by Mark Russinovich, chief technologist for Microsoft's Azure cloud computing service. 

Russinovich, who has been in the CTO spot at Azure for nearly five years, and who is a 13-year veteran of the software giant, was in New York and spent some time talking with me about how a web of artificial intelligence and machine learning can ultimately be tied together via something that looks like VB.


Mark Russinovich in New York last week. Russinovich has been at Microsoft for thirteen years and is chief technologist for the company's Azure cloud computing service. Stitching together cloud computing and "edge" devices is allowing the company to build "the world's computer," as he describes it.

(Image: Tiernan Ray for ZDNet)

Who would light a cigarette next to a gas pump, you might wonder, as I did. 

"Apparently they do," said Russinovich with a slight chuckle. At least, according to a Microsoft customer, oil giant Shell, which deploys at its gas stations, in the convenience store, something called Azure Data Box Edge. The product is an appliance, a "1U" rack-mountable computer sold by Microsoft. The appliance downloads machine learning models trained in Azure for image recognition, which it runs inside Docker containers to perform inference on images. 


Image data is fed to Azure Data Box Edge from low-power devices out by the pumps, which run a smaller runtime software stack from Microsoft, called "Azure IoT Edge." Azure Data Box Edge performs inference using its trained image recognition models to monitor if some figure out there by the gas pump is lighting up. 

"They'll have the pump automatically shut off," if that smoker is detected, is Shell's intention in these cases, says Russinovich.  

Shell is one of several customers who see a need to take the computing functions of Microsoft's cloud and put them in data centers or, increasingly, in remote places such as factory floors, oil rigs, and gas stations. Starbucks plans to install tens of thousands of what are known as "Azure Sphere" devices, each containing a microcontroller that runs Microsoft security code embedded in the chip.

Starbucks can use Sphere to perform predictive maintenance on its coffee machines. Kroger, the retail giant, is putting Azure Data Box Edge in all of its stores, to control LED displays on shelves that show special deals on products. The appliance can also perform inference on images of shoppers, to recognize who's who -- something that, Russinovich emphasizes, is kept inside the store, rather than being sent to the cloud, for privacy reasons.


All this amounts to what Russinovich calls "building the world's computer." But what will tie that all together? Microsoft has a version of what's known in computing as "serverless," where infrastructure doesn't have to be specified, and functionality is effortlessly invoked by a programmer with a simple function call. Microsoft's version of this is "Azure Functions." 

Functions can be used to stitch together the collection of computing devices, from a simple Raspberry Pi machine in a store, to the Data Box Edge appliance in the local wiring closet, on up to the cloud instances that are running training operations.

Russinovich explains the pipeline he foresees for all these devices, with functions as a kind of glue:

If you take a look at an edge application as I imagine it, the inference part of it will be one thing. There'll be functions that are responding to outputs of that ML model. That function is spitting out data that's then streamed up into the cloud, and is creating an alert, or just triggering a collection of the image for storage and then aggregation. I think there will be a pipeline around the data, and responses to the data. Some of it will be involving the cloud, some of it will be completely local.
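
A minimal sketch of such a glue function, using the Python programming model of Azure Functions, might look like the following. The payload shape and the alert and storage helpers are assumptions for illustration, not the pipeline Microsoft ships.

    import json
    import azure.functions as func

    def main(req: func.HttpRequest) -> func.HttpResponse:
        # The edge ML model posts its inference output to this function.
        result = req.get_json()
        if result.get("label") == "smoking" and result.get("score", 0) > 0.9:
            raise_alert(result)    # hypothetical helper: notify the cloud
        else:
            archive_image(result)  # hypothetical helper: store for aggregation
        return func.HttpResponse(json.dumps({"handled": True}),
                                 mimetype="application/json")

    def raise_alert(result: dict) -> None:
        print("ALERT:", result)

    def archive_image(result: dict) -> None:
        print("archiving:", result.get("image_ref"))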

I point out to Russinovich that, to me, the use of Functions makes it seem as if one could just run all of machine learning from one Visual Basic app. "It's funny that you mention that," he replies, "because in our brainstorming of what kind of programming model we create, a model that would be consistent across cloud and edge, our mental model is, Let's go after the same enterprise professional developers that we made so successful with Visual Basic."

There are still things that have to fall into place with all this cloud and edge talk. 

As more and more inference is done out at the edge, more and more advanced hardware is necessary for the edge devices, be they in a data center, a wiring closet, or on a Raspberry Pi. 

Currently, Azure Data Box Edge ships with "Arria" chips from Intel -- field-programmable gate arrays whose circuitry can be tuned to the ML model that is downloaded to the appliance. Microsoft collectively dubs its use of FPGAs in the cloud "Brainwave," pairing Intel's chips with Microsoft's own technology layered on top of them. Google and Amazon, though, have gone their own way, developing in-house, custom circuitry for inference.

When I ask Russinovich whether Microsoft will go that route as well, he replies, "I think that that's something we've been looking at." But he quickly adds that Microsoft developed the "Open Neural Network Exchange," or ONNX, standard with Facebook and Amazon in order to support the new inference chips that are coming from a number of startups. "It's one of the big initiatives that we took to make sure that we're ready for whatever happens, whether we're innovating with hardware or somebody else is," he says.
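
The point of ONNX is portability: a model exported to the format can run on whatever hardware has a runtime backend for it. A minimal sketch using the onnxruntime package, where "model.onnx" is a placeholder for any exported model:

    import numpy as np
    import onnxruntime as ort

    # Load the exported model; the runtime picks an available execution backend.
    session = ort.InferenceSession("model.onnx")
    input_name = session.get_inputs()[0].name

    # Dummy image batch; a real deployment would feed camera frames.
    batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
    outputs = session.run(None, {input_name: batch})
    print(outputs[0].shape)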


Another issue is the training of neural networks. Although the vast majority of neural network training will continue to happen in the cloud for economic reasons, says Russinovich, some customers may want to bring some of that training to the edge over time.

New pieces will have to be put into place to do that, he says, and he notes that Microsoft's research team is working on many technologies to bring AI models to the edge with "reduced precision" arithmetic that nonetheless preserves the accuracy one gets when training in the cloud.

"The richer application and program models that will be developing, we're making sure they go down as far as they possibly can," says Russinovich. 

"Now, when you get done to four megs [of memory capaity], you're pretty limited, but, a lot of stuff can push down into Raspberry Pi-class devices."

As for Python, the language of Facebook's PyTorch framework, I ask Russinovich about the recent claim by Facebook's head of AI, Yann LeCun, that Python needs to be replaced by some other programming language better suited to AI and machine learning. Russinovich dismisses the notion immediately.

"I think people can talk about that, but we don't see it [Python] going anywhere, which is why our Azure ML SDK is in python.

"Because that's what data scientists love!"

