A point of reference is that the A100 uses 3090-generation tensor cores, which are now about five years old.
To compare against Moore's law (doubling every 2 years): log base 2 of 100 is ~6.6, so the claimed speedup amounts to 6.6 doublings over 5 years. 12*5/6.6 ≈ 9, meaning conventional hardware would need a doubling every 9 months to match the gains this chip represents.
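A quick sketch of that back-of-the-envelope math (the 100x speedup and 5-year window are taken from the claim above; the 24-month Moore's-law pace is the usual rule of thumb):

```python
import math

# Claimed speedup and the age of the A100's tensor cores
speedup = 100
years = 5

# Number of doublings the claimed speedup represents
doublings = math.log2(speedup)  # ~6.6

# Months per doubling needed to achieve that over 5 years,
# vs. the ~24 months Moore's law would give you
months_per_doubling = 12 * years / doublings  # ~9
print(f"{doublings:.1f} doublings, one every {months_per_doubling:.1f} months")
```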
I sincerely doubt Nvidia has been consistently making that kind of progress.
They don't say whether the 100x figure is raw compute performance or performance per watt, but it seems like a big deal either way. Maybe we'll see wide-scale deployment of optical compute soon.
The buried lede there is the power efficiency of 66.4 TOPS/watt.
Assuming the 3570 TOPS figure is at INT8 precision, it has ~2/3 the raw performance of the current-generation Blackwell accelerators at 1/20th the power consumption.
Really excited to see where neuromorphic hardware goes, especially photonics.


