
What happens after Moore's Law? It's not what you think!



This was a hugely popular article when it first appeared. Here's another chance to read about what might happen after Moore's law!

Moore's law, the variously observed phenomenon that the number of transistors you can fit on a given size of silicon chip doubles roughly every 18 months, is about to expire. It has to. If it didn't, then within a decade or so we would have components smaller than molecules, and we know that isn't possible.

This empirical law, which isn't actually a law, because nothing has to abide by it, has been the single most important driving factor for our digital civilisation for several decades. Because of it, while the slide rule reigned for some 340 years, in the 40 or so years since the electronic calculator wiped out its imprecise analogue predecessor, our calculators today (we call them smartphones) have become arguably a billion times faster.

You can't have that sort of rate of change for half a typical lifetime without some fundamental effect on the way that we do things. And there has been. Has anyone heard of the internet?

There is a process that accelerates technology even without specific hardware breakthroughs. In other words, if all progress in building bigger, faster chips were to stop today, then we would still have accelerating technology for some time to come. And it's this phenomenon, rather than sheer speed increases, that has been driving forward camera development recently. (Although speed is still important - for example, you need very fast processors to deal with 4K video coming off a sensor).

Advancement through algorithms

The first process is that algorithms are getting better. This isn't entirely surprising, because if you take a group of clever people, the more they think about a problem, the more elegant solutions they'll find for it. So the technology could stand still and yet the answers would still come faster and be more accurate.

And it doesn't just depend on people. As computers and computer techniques get better, they can contribute to the efficiency of algorithms, too. Don't forget that one of the principles driving accelerating technology is that each generation of high-tech tools can be used to make the next, better generation. This is a virtuous circle where problems and issues are solved with ever increasing acuity and speed.

A better debayering algorithm is going to give you better video, and probably faster processing as well. Clever people will invent better ways to solve problems, and when they come up against a brick wall, they will use computing power to give them new data on which to base their new solutions.
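To make that concrete, here is a deliberately simplified sketch of what a debayering (demosaicing) algorithm actually does, written in Python with NumPy and SciPy. It performs plain bilinear interpolation and assumes an RGGB sensor layout; real cameras use far more sophisticated, edge-aware methods. The point is only that image quality depends heavily on software like this, which can keep improving on exactly the same hardware.

```python
# A minimal sketch of bilinear debayering for an assumed RGGB Bayer mosaic.
# Illustrative only: production demosaicing is far more sophisticated.
import numpy as np
from scipy.ndimage import convolve

def debayer_bilinear(mosaic: np.ndarray) -> np.ndarray:
    """Interpolate a single-channel RGGB mosaic into an H x W x 3 RGB image."""
    h, w = mosaic.shape
    rows, cols = np.indices((h, w))

    # Boolean masks marking where each colour was actually sampled (RGGB layout).
    r_mask = (rows % 2 == 0) & (cols % 2 == 0)
    b_mask = (rows % 2 == 1) & (cols % 2 == 1)
    g_mask = ~(r_mask | b_mask)

    kernel = np.ones((3, 3))          # simple neighbourhood average
    rgb = np.zeros((h, w, 3))
    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        sampled = mosaic * mask       # known samples, zero elsewhere
        weight = convolve(mask.astype(float), kernel, mode="mirror")
        rgb[..., ch] = convolve(sampled, kernel, mode="mirror") / weight
    return rgb

# Usage: a fake 4x4 mosaic, just to show the call.
demo = debayer_bilinear(np.random.rand(4, 4))
print(demo.shape)  # (4, 4, 3)
```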

You could say that one proof of this principle is that when your smartphone is updated with a new version of its OS (say iOS or Android), it becomes a new phone - able to do more and do what it does better. The same goes for cameras, too. It's quite common now to buy a camera and then, six months later, find that there's a firmware upgrade that gives it new abilities: same hardware, better software.

There's another way things can improve without better hardware. It's the same principle as the way we learn things as humans.

How we learn

When we start to learn about the world and our environment, we do it in stages. We learn something small and then learn something else small. We store these unconnected areas of knowledge and then - sometimes - we see a connection between them. We somehow manage to see the smaller areas of knowledge from a higher perspective and notice things that weren't there at ground level. All of our knowledge is hierarchical like this. You can see it at work as people learn things.

I worked with a guy once who was a great mechanical engineer but who just didn't "get" computers. This was admittedly in the days before Windows, when you had to type cryptic commands at the prompt. For some reason, he was a slow learner when it came to PCs. One day, I realised why.

He saw me typing "DIR" into the computer. For anyone born since about 1990, this might seem arcane, but it was the old operating system command to simply list all the files in a directory (which is what folders used to be called). My colleague looked fascinated, but puzzled. He said, "Do you have to type those letters in the same sequence all the time? How on earth do you remember that?"

What he didn't realise is that these were words. They weren't seemingly random sequences, about as hard to remember as a printout of modem line noise (that's something else that is dating me severely).

It takes a leap to realise that once you know that something is a word, you just have to remember that word and not the individual sequence of letters, because that sequence is already in your brain's database.

And then you learn lots of words, or "commands," and your ability grows very quickly.

I'm labouring this point because it is an important one. It's about hierarchical learning and this is how machines in the future, as well as humans now, learn.

From the ground to the helicopter

In fact, this is how our brains work. We learn to recognise things at one level and then we look at them at a higher level and see patterns and correspondences. As we learn to deal with the low level stuff, we move up a level. Eventually, we deal with quite abstract concepts. Here's an example:

At one level, we see a round object. At the next level up, we see several other objects, all with different shapes, all of which we recognise. Further up, we realise that these familiar objects, in this spatial relationship to each other, represent an eye. At the same level, we're assembling all the parts of a nose to recognise another important feature of a face. Move up another level and we're looking at a face. Up again and we realise we're looking at our wife. Going up several more levels (taking into account her expression, her body language and so on), we realise that she's happy. At a still higher level, we realise she's happy because this is the first time we haven't forgotten our wedding anniversary. (I'm grateful to Raymond Kurzweil for this type of illustration.)
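Here is a toy Python sketch of that same ladder of levels. Every detector, feature name and threshold in it is invented purely for illustration; real recognition systems (and brains) learn these layers from data rather than having them hand-written. It shows only the shape of the idea: each level consumes the judgements of the level below and produces something more abstract.

```python
# A toy sketch of hierarchical recognition. All detectors are stand-in stubs
# operating on a dictionary of made-up feature scores between 0 and 1.

def detect_eye(features):
    # Lower levels: primitive shapes (a circle, a curve) assembled into an eye.
    return (features["circle"] + features["curve"]) / 2

def detect_face(features):
    # Next level: eyes, nose and mouth in the right spatial relationship.
    return (detect_eye(features) + features["nose"] + features["mouth"]) / 3

def identify_person(features, templates):
    # Higher level: compare the detected face against stored "templates" (a stub).
    if detect_face(features) < 0.5:
        return None
    return max(templates, key=templates.get)

def read_mood(features):
    # Highest level here: an abstract judgement built on everything below.
    return "happy" if features["smile"] > 0.6 else "neutral"

scene = {"circle": 0.9, "curve": 0.8, "nose": 0.7, "mouth": 0.9, "smile": 0.8}
person = identify_person(scene, {"my wife": 0.95, "a stranger": 0.2})
print(person, "looks", read_mood(scene))   # -> my wife looks happy
```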

We can see exactly how this works, and today we're able to analyse brains in more and more detail. It's this type of "learning" that is also driving technology. Perhaps the best example is the way Google is able to take a "helicopter view" of the world, not just with Google Maps, but with all the data it holds. With so much information about their users, Google (and Facebook) can spot trends that wouldn't have been visible from ground level, perhaps detecting patterns in the weather or even correlating economic activity with diet.
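As a toy illustration of that helicopter view, the sketch below correlates two entirely synthetic aggregated series. The series names and numbers are invented; the point is only that a pattern invisible in any single record shows up once millions of records are aggregated and compared.

```python
# A toy sketch of spotting a trend from aggregated data. Both series are
# synthetic; real analysis would aggregate vast numbers of individual records.
import numpy as np

rng = np.random.default_rng(0)
weeks = 52
economic_activity = rng.normal(100, 10, weeks)              # made-up weekly index
# A made-up "diet" series that loosely tracks the economy, plus noise.
takeaway_spend = 0.4 * economic_activity + rng.normal(0, 5, weeks)

correlation = np.corrcoef(economic_activity, takeaway_spend)[0, 1]
print(f"correlation: {correlation:.2f}")                    # clearly positive
```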

Keep this idea of small areas of knowledge feeding into a "bigger picture" as we talk about how technology seems to be moving so quickly.


Software, virtualisation & the Cloud

We've seen it quite recently with the sudden appearance of several large-sensor cinema cameras at fairly low cost. What's happened here is that much of the low-level heavy lifting has been done by the sensor manufacturers. The hard processing code still has to be written, but much of it is based on already-available software libraries, and it runs on chips that include Field Programmable Gate Arrays (FPGAs): devices whose internal connections between logic gates can be rewritten at boot-up by software, so the same silicon can be reconfigured for very fast, specialised processing.

Operating systems don't have to be written from scratch anymore. Nor do the electronic systems that include sensors, radio (WiFi, cellular, Bluetooth etc) and screen drivers.

Our knowledge and our ability to "do" things moves upwards through levels. As we climb to each new level we find that by looking down, we can connect seemingly unrelated areas of technology. And as we do, something much bigger happens.

This brings us to one of the most powerful drivers of technology: Virtualisation.

With so much technology available off the shelf on a circuit board that is essentially the insides of a mobile phone, an incredible number of products can be made just by writing an app.

These days, you can buy an app that monitors your heart health, using the hardware of your mobile phone. As hardware gets more powerful and increasingly commoditised, apps will be able to do almost anything.
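As a heavily simplified illustration of that kind of app, here is a Python sketch of the principle behind camera-based heart-rate measurement: with a fingertip over the lens, the average redness of each frame rises and falls with each pulse, and counting those pulses gives beats per minute. The frame data below is synthetic; a real app would read frames from the phone's camera API.

```python
# A minimal sketch of camera-based pulse measurement (photoplethysmography).
# The "recording" here is synthetic; only the counting idea is illustrated.
import numpy as np

def estimate_bpm(red_means: np.ndarray, fps: float) -> float:
    """Count peaks in the per-frame red-channel average to estimate pulse rate."""
    signal = red_means - red_means.mean()          # remove the steady offset
    peaks = 0
    for i in range(1, len(signal) - 1):
        if signal[i] > 0 and signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]:
            peaks += 1
    duration_minutes = len(signal) / fps / 60
    return peaks / duration_minutes

# Synthetic 10-second recording at 30 fps with a 72 bpm pulse.
fps, bpm = 30.0, 72.0
t = np.arange(0, 10, 1 / fps)
fake_red = 0.6 + 0.05 * np.sin(2 * np.pi * (bpm / 60) * t)
print(round(estimate_bpm(fake_red, fps)))          # ~72
```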

Finally (well, it's not finally, but in order to prevent this article from being infinitely long…) there's the Cloud. When I say "Cloud" here, I mean the fact that every computer on the internet is connected to every other computer. Which means that not only can they communicate with each other, but with suitably designed software, they can share processing and storage. This needs careful management, but it does mean that processing power and storage are now essentially limitless - and wireless and mobile technology means that we never have to be disconnected from this near-limitless power.
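Here is a minimal Python sketch of that shared-processing idea: one long job split into independent chunks, handled in parallel and then reassembled. The workers below are local processes standing in for remote machines, but the structure is the same whether the pool is one laptop or a rack of cloud servers behind a job queue.

```python
# A minimal sketch of shared processing: split a big job into independent
# chunks and farm them out. Local processes stand in for remote machines.
from concurrent.futures import ProcessPoolExecutor

def process_chunk(frames):
    """Stand-in for heavy per-frame work (grading, encoding, analysis...)."""
    return [f * 2 for f in frames]          # pretend "processing" of each frame

def split(job, n):
    """Divide one long job into n roughly equal chunks."""
    size = (len(job) + n - 1) // n
    return [job[i:i + size] for i in range(0, len(job), size)]

if __name__ == "__main__":
    job = list(range(1000))                 # e.g. 1000 frames to process
    with ProcessPoolExecutor() as pool:     # each worker could be a remote node
        results = pool.map(process_chunk, split(job, 8))
    processed = [frame for chunk in results for frame in chunk]
    print(len(processed))                   # 1000 - the same job, done in parallel
```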

See the future?

In reality, progress is not smooth, and that in itself is further evidence of the pattern: lots of small breakthroughs and then, occasionally, enormous ones.

The biggest one of all is when we build a computer as powerful as the brain (whatever that means!).

Meanwhile, what does this mean for the video industry?

Well, we are now so close to the point where the rate of progress is near vertical (some people call this the "singularity") that it is getting very hard to say what is going to happen next. Fifty years ago, you could fairly easily predict what was going to happen in ten years' time.

But could you do that now? I certainly couldn't. My prediction is that in ten years' time, we won't be talking about pixels.

