[TUHS] PDP-11 legacy, C, and modern architectures

Warner Losh imp at bsdimp.com
Fri Jun 29 06:42:47 AEST 2018


On Thu, Jun 28, 2018 at 1:42 PM, Perry E. Metzger <perry at piermont.com>
wrote:

> On Thu, 28 Jun 2018 07:56:09 -0700 Larry McVoy <lm at mcvoy.com> wrote:
> > > Huge numbers of wimpy cores is the model already dominating the
> > > world.
> >
> > Got a source that backs up that claim?  I was recently dancing with
> > Netflix and they don't match your claim, nor do the other content
> > delivery networks, they want every cycle they can get.
>
> Netflix has how many machines?


We generally say we have tens of thousands of machines deployed worldwide
in our CDN. We don't give out specific numbers though.


> I'd say in general that principle
> holds: this is the age of huge distributed computation systems, the
> most you can pay for a single core before it tops out is in the
> hundreds of dollars, not in the millions like it used to be. The high
> end isn't very high up, and we scale by adding boxes and cores, not
> by getting single CPUs that are unusually fast.
>
> Looking at it the other way, from what I understand, CDN boxes are
> about I/O and not CPU, though I could be wrong. I can ask some of the
> Netflix people; a former report of mine is one of the people behind
> their front-end cache boxes and we keep in touch.


I can tell you it's about both. We recently started encrypting all traffic,
which requires a crapton of CPU. Plus, we're doing sophisticated network
flow modeling to reduce congestion, which also takes CPU. On our 100G boxes,
which we get into the low 90s (Gbps) encrypted, we have some spare CPU, but
almost no spare memory bandwidth, and our PCIe lanes are full of either 100G
network traffic or 4-6 NVMe drives delivering content at about 85-90 Gbps.
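
To make the lane math concrete, here's a rough PCIe budget sketch in C. The
PCIe 3.0 per-lane rate, the x16 NIC, the x4 drives, and the 40-lane CPU are
assumed round numbers for illustration, not figures from our actual boxes:

/*
 * Back-of-envelope PCIe lane budget for a 100G content box.
 * Assumed numbers: PCIe 3.0 (~7.88 Gbit/s usable per lane with
 * 128b/130b encoding), a x16 100GbE NIC, x4 NVMe drives, and a
 * CPU exposing 40 lanes.
 */
#include <stdio.h>

int main(void)
{
    const double gbps_per_lane = 7.88;  /* PCIe 3.0 usable rate per lane */
    const int nic_lanes = 16;           /* typical 100GbE NIC */
    const int lanes_per_nvme = 4;
    const int nvme_drives = 6;          /* upper end of "4-6 NVMe drives" */
    const int cpu_lanes = 40;           /* e.g. one server socket */

    int used = nic_lanes + nvme_drives * lanes_per_nvme;

    printf("NIC:  %d lanes (~%.0f Gbit/s)\n",
           nic_lanes, nic_lanes * gbps_per_lane);
    printf("NVMe: %d drives x %d lanes (~%.0f Gbit/s aggregate)\n",
           nvme_drives, lanes_per_nvme,
           nvme_drives * lanes_per_nvme * gbps_per_lane);
    printf("Lanes used: %d of %d available\n", used, cpu_lanes);
    return 0;
}

With those assumed numbers, six x4 drives plus a x16 NIC come to 40 lanes,
which is why the lanes are effectively spoken for.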

Most of our other boxes are the same, with the exception of the 'storage'
tier boxes. On those we're definitely hard disk I/O bound.

Warner