Hmmm. Now that’s one potential way to get around current limitations in CPUs I hadn’t thought of.
With virtual bypassing, however, each router sends an advance signal to the next, so that it can preset its switch, speeding the packet on with no additional computation. In her group’s test chips, Peh says, virtual bypassing allowed a very close approach to the maximum data-transmission rates predicted by theoretical analysis.
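As a rough way to see why presetting the switch matters, here is a toy latency model in Python. It is purely illustrative (the cycle counts and function names are my assumptions, not the MIT design): without bypassing, a packet pays the router's pipeline delay at every hop; with an advance signal, the switch is already set when the packet arrives, so after the first router it pays only link traversal.

```python
# Illustrative model (assumed numbers, not the actual chip):
# compare per-hop latency when each router must compute its switch
# setting on packet arrival versus when an advance ("lookahead")
# signal lets the next router preset its switch ahead of time.

ROUTER_PIPELINE_CYCLES = 3   # assumed: route compute + arbitration + switch setup
LINK_TRAVERSAL_CYCLES = 1    # assumed: one cycle to cross each link

def hop_latency(hops: int, bypassing: bool) -> int:
    """Total cycles for a packet to cross `hops` routers."""
    if bypassing:
        # The advance signal runs one hop ahead, so every switch is
        # preset: after the first router's setup, the packet pays
        # only link traversal per hop.
        return ROUTER_PIPELINE_CYCLES + hops * LINK_TRAVERSAL_CYCLES
    # Without bypassing, every router stalls the packet for its
    # full pipeline before forwarding it.
    return hops * (ROUTER_PIPELINE_CYCLES + LINK_TRAVERSAL_CYCLES)

print(hop_latency(10, bypassing=False))  # 40 cycles across 10 hops
print(hop_latency(10, bypassing=True))   # 13 cycles across 10 hops
```

With these made-up numbers, bypassing approaches the wire-limited minimum (one cycle per hop plus a single setup), which is the sense in which the test chips get "a very close approach" to the theoretical maximum.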
I’m not impressed, and I don’t think it will ever move past the experimental phase the way they are going about it. Packet communication is effective, but very inefficient.
Whether on chips or the internet, packet inefficiency is overcome with the brute speed of today’s processors. The time and overhead required to send an advance signal is little different from sending another packet, and the time spent on a smaller packet varies only slightly. In effect, doubling the workload to save time is like trying to lift yourself by your bootstraps.
My solution would be for each processor to set a flag and address indicating which of its four neighbors the packet needs to go to, along with the packet’s destination. A fifth processor would control the path switches of the other four and set their switches as needed. The other four would not see the flag/address; they would simply work on the packets if needed and pass them on, trusting their router (the fifth processor) to send them in the right direction.

On a one-hundred-core CPU, that would mean 20 cores would handle the switching while 80 would share the computing. Still much better than doubling the workload and rendering 100 cores to the efficiency of 50.
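The clustering scheme described above can be sketched in a few lines of Python. This is my interpretation of the proposal, not a real design: cores are grouped in clusters of five, one dedicated switching core per cluster, and a compute core tags each packet with a neighbor flag plus the final destination (the `Packet` fields and `split_cores` helper are hypothetical names).

```python
# Sketch of the commenter's proposed scheme (assumed structure, not a
# real chip): clusters of five cores, where one "router" core sets the
# path switches for its four compute neighbors. Compute cores only tag
# packets; they never compute routes themselves.

from dataclasses import dataclass

CLUSTER_SIZE = 5  # four compute cores plus one switching core

@dataclass
class Packet:
    destination: int    # final destination core ID
    next_neighbor: int  # flag: which of the four neighbors to forward to (0-3)

def split_cores(total_cores: int) -> tuple[int, int]:
    """Return (switching_cores, compute_cores) under this clustering."""
    clusters = total_cores // CLUSTER_SIZE
    return clusters, clusters * (CLUSTER_SIZE - 1)

switching, compute = split_cores(100)
print(switching, compute)  # 20 80
```

This reproduces the 20/80 split claimed for a 100-core chip: 80% of the cores stay available for computation, versus the 50% effective efficiency the commenter attributes to doubling the per-packet workload.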
I’m more sanguine than you. I don’t know for sure if this will work but something tells me it will.
I remember the days when networks themselves were essentially serial buses, everything straight-line in communication, and the change to packet switching removed more inefficiencies than it introduced. When I first encountered packet switching and the whole idea of a “cloud” for communicating data, I laughed at it, found it “scary,” and said there was no way that could be reliable or fast on a large scale. That was on a private packet-switched network in the late ’80s/early ’90s, owned by GE and worth billions of dollars, yet by today’s standards laughably tiny and slow next to anything we use now.
A hierarchical bus like you describe might work better initially, but if you keep scaling up you start having the same problem: what happens when you have 256, 512, 1,024, 2,048, etc. cores? Packet switching may indeed be a better long-term solution.
What’s scary is how quickly this grows beyond the scale of current human comprehension. Of course, existing chips have already just about gotten there; it’s doubtful anyone on the design team of any major CPU knows every component of it anymore. We’re more and more reliant on the hardware/firmware/software to sort stuff out for itself. It’s almost dizzying when I compare it to the processors I used to program in assembly language, where you could know every opcode and every register and what was where at any given moment. As complexity goes up, you need ever more creative solutions to deal with it. This looks to me like it would let them continue to scale up in an almost organic fashion.