• RojoSanIchiban@lemmy.world · 172 points · 1 year ago (edited)

    Maybe they should be expanding their physical network first. I waited seven years after they supposedly came to my hometown, and their coverage area barely moved. Most of that is absolutely the fault of AT&T and Comcast stonewalling pole installations, but they have the money to put up their own damn poles made of gold after that $77 billion profit report.

    Now I’ve moved elsewhere after COVID, and of course the only two real options still suck uncontrollably, with no hope of any other big mover creating actual competition.

    • originalucifer@moist.catsweat.com · 66 points · 1 year ago

      i am also incredibly disappointed in their lack of achievement here. they have a metric shit-tonne of liquid cash, lawyers, and tech out the butthole… but no… we’re back to ma’ bell coagulating, à la T2.

      so much for being different

    • ArtificialLink@yall.theatl.social · 42 points · 1 year ago (edited)

      Google Fiber has supposedly been coming to the west side of Atlanta for 10-plus years. It hasn’t expanded at all, yet they still keep that “coming soon to your neighborhood” message up. And somehow where I am there’s only one option available: fucking shitty Comcast.

      • tburkhol@lemmy.world · 22 points · 1 year ago

        There’s vaults labeled “GFBR” 200 yards from my house on the east side, and it’s still “coming soon.” Meanwhile, AT&T is out here digging every 2 years.

            • Schemata@lemmy.world · 1 point · 1 year ago

              It’s so frustrating. I worked with a group that had its own community broadband council just to get broadband more widespread in their county.

              Those grants are ridiculous: one objection from another federal department about the grants creating a conflict, or another co-op claiming they already offer service, can derail a whole application. And applications are not easy or cheap to produce, either.

              Makes me sick to my stomach

          • tburkhol@lemmy.world · 5 points · 1 year ago

            IKR? The last time digsafe came out and marked, there were 3 separate AT&T lines twisting around each other like spaghetti, all going the same way and within 3 feet of each other. Like, you’ve already got conduit buried, just blow another fiber through it. Maybe some exec’s kid runs a horizontal drilling company.

  • tony@lemmy.hoyle.me.uk · 53 points · 1 year ago

    I wouldn’t want to calculate what it’d cost to replace all my switches with 25G capable ones… then all the network cards… You’d have to have a really specific application to justify it.

    • Polar@lemmy.ca · 27 points · 1 year ago

      Just cost me $1K to replace 3 NICs, 1 router, and 2 switches to get to freaking 2.5Gb.

      • lemming741@lemmy.world · 5 points · 1 year ago

        I got one of the 8 × 2.5GbE + 10GbE switches ServeTheHome reviewed for like $80, and X520 NICs are $20. I’m happy with it for homelab stuff!

        • maxprime@lemmy.ml · 1 point · 1 year ago

          Nice! I bought some used 10g UniFi stuff (dream machine and switch) for $500 and a pair of 10g NICs and a SFP+ cable for $80 on eBay. All in CAD. Already had some UniFi WAPs.

          Homelabbing has been such a fun hobby, if a little expensive at times.

      • frezik@midwest.social · 1 point · 1 year ago

        10Gbps used enterprise equipment is pretty cheap on eBay. Biggest problem I’ve had is getting compatible SFP+ adapters for the NICs.

        • Kazumara@feddit.de · 1 point · 1 year ago (edited)

          Flexoptix reprogrammable transceivers are a godsend for that. We use them almost exclusively at work, and so do quite a few of our customers (universities and other institutions of higher education). It’s probably hard to justify the cost of a reprogrammer box for a household, though you can buy their transceivers pre-programmed.

          FS.com has something similar, but I can’t vouch for those; never tried them. Their patch cables are fine, though.

    • You999@sh.itjust.works · 6 points · 1 year ago

      You won’t, but I will:

      • Switch: MikroTik CRS504-4XQ-IN ($799.99)
      • Cabling: QSFP28 to 4× 25G SFP28 DAC breakout ($63.00 per cable)
      • NICs: Intel XXV710 25GbE ($349.99 each)

      I don’t know how many machines you have, but for two machines it’d cost you $1,562.97, and maxing out the switch would cost $6,651.83. Do you really have sixteen machines that need, or can even physically saturate, a 25GbE link?

      I think it’s more reasonable to get something similar to ubiquiti’s USW-Pro-Aggregation and have three machines capable of the full speed and 28 machines capable of half rate speeds (at a much lower cost per machine)
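The arithmetic above can be sketched quickly. Prices are from this comment; the NIC is taken as $349.99 each, which is the figure that reproduces the quoted totals:

```python
# Hypothetical cost sketch for the 25GbE build above.
# One QSFP28-to-4x-SFP28 breakout cable serves up to 4 machines.
SWITCH = 799.99   # MikroTik CRS504-4XQ-IN (4 QSFP28 ports)
CABLE = 63.00     # QSFP28 to 4x 25G SFP28 DAC breakout
NIC = 349.99      # Intel XXV710 25GbE, one per machine

def build_cost(machines: int) -> float:
    """Total cost to connect `machines` hosts (max 16 on this switch)."""
    cables = -(-machines // 4)  # ceiling division: 4 SFP28 ends per cable
    return round(SWITCH + cables * CABLE + machines * NIC, 2)

print(build_cost(2))   # 1562.97 -- two machines, one breakout cable
print(build_cost(16))  # 6651.83 -- switch fully populated
```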

            • lud@lemm.ee · 2 points · 1 year ago

            I have no idea how well a L3 switch would work on a residential WAN connection. But don’t L3 switches lack features like NAT, DHCP, DNS, Firewall, port forwarding, etc?

            DHCP and DNS (and Firewall, but I guess you don’t have a 25 Gbit/s FW) are of course easily moved elsewhere, but what about the others?

              • You999@sh.itjust.works · 1 point · 1 year ago

              Well, this is getting into the weeds a bit, but TL;DR: it depends on the L3 switch.

              The MikroTik switch I mentioned runs the same RouterOS v7 as their actual routers. Anything you can do on a single-purpose router you can do on the switch, albeit slower for CPU-bound applications, since the switch’s CPU isn’t as good.

              For the Ubiquiti switch… I’m not actually sure, as Ubiquiti’s L3 implementation is not exactly ideal (bordering on broken, depending on who you ask).

                • lud@lemm.ee · 1 point · 1 year ago

                Thanks!

                I have only played around with L3 switches in Packet Tracer, and IIRC they were missing a bunch of router features, though I’m not sure.

                Either way, Packet Tracer uses pretty old IOS versions, and Cisco is pretty annoying, so it wouldn’t surprise me if they locked it down on purpose.

    • seaQueue@lemmy.world · 1 point · 1 year ago (edited)

      Buy a media converter to go 25G → 40G and run a 40GbE home net. Retired 40Gb gear is ludicrously cheap.

      Edit: Or just stick a two-port 100GbE card in your router, use an adapter to step one port down to 25Gb, and run 40Gb off the other port to the rest of the network.

  • Kyrinar@lemmy.world · 48 points · 1 year ago

    I just want an internet provider that isn’t Spectrum and isn’t single-digit download speeds. Not having any real choice fucking sucks, especially since Spectrum is horrible.

    Had AT&T fiber at my old place, and god damn, that shit went down one time, for an hour, in the whole three and a half years I was there.

    • pdxfed@lemmy.world · 5 points · 1 year ago

      Have you looked at mobile broadband from T-Mobile or Verizon? I haven’t tried either personally, but if I were in a broadband desert or an oligopoly market like most Americans, I’d definitely give it a try and see how the performance is. Prices weren’t great at launch, maybe $50+/mo for home internet, while around here you can get $30-40/mo from fixed-line providers like CenturyLink, FiOS/Ziply, or Comcrap. I feel like the mobile carriers really missed an opportunity by not pricing it cheaper to add a ton of subscribers, or at least to get people to try it.

  • Jaysyn@kbin.social · 35 points · 1 year ago (edited)

    I was involved in one of these Google Fiber rollouts several years ago. Google simply doesn’t know what the fuck they want or what they’re doing as far as installing outside plant goes.

    EDIT: To clarify, they simultaneously had no fucking clue what they were doing & also wanted to micromanage all of their contractors.

  • Byter · 34 points · 1 year ago

    If you’re struggling to think of a use-case, consider the internet-based services that are commonplace now that weren’t created until infrastructure advanced to the point they were possible, if not “obvious” in retrospect.

    • multimedia websites
    • real-time gaming
    • buffered audio – and later video – streaming
    • real-time video calling (now even wirelessly, like Star Trek!)
    • nearly every office worker suddenly working remotely at the same time

    My personal hope is that abundant, bidirectional bandwidth and IPv6 adoption, along with cheap SBC appliances and free software like Nextcloud, will usher in an era where the average Joe can feel comfortable self-hosting their family’s digital content, knowing they can access it from anywhere in the world and that it’s safely backed up at each member’s home server.

    • frezik@midwest.social · 7 points · 1 year ago

      Video calls were all over 1950s futurism articles. These things do get anticipated far ahead of time.

      4K Blu-ray discs have a maximum bitrate of 128 Mbps. Most streaming services compress more heavily than that; they’re closer to 30 to 50 Mbps. A 1Gbps feed can easily handle several people streaming 4K video on the same connection, provided there are some quality-of-service guarantees.
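As a quick sanity check on those numbers, using the bitrates quoted above:

```python
# How many simultaneous 4K streams fit on a 1 Gbps line?
LINE_MBPS = 1000    # 1 Gbps connection
STREAM_MBPS = 50    # heavily compressed 4K from a streaming service
BLURAY_MBPS = 128   # 4K Blu-ray peak bitrate

print(LINE_MBPS // STREAM_MBPS)  # 20 streaming-quality 4K streams
print(LINE_MBPS // BLURAY_MBPS)  # 7 streams even at full Blu-ray bitrate
```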

      If other tech were there, we could likely stream a fully immersive live VR environment to nearly holodeck-level realism on 1Gbps.

      IPv6 is the real blocker. As you say, self-hosting is what could really bring bandwidth usage up. I think some kind of distributed system (something like BitTorrent) is more likely than files hosted on one specific server, at least for publicly available files.

    • MeanEYE@lemmy.world · 3 points · 1 year ago

      Also, building big bandwidth ahead of the requirement curve means most people won’t use it to its full extent for a while. It’s much easier to implement and maintain such a network than one that’s trying to catch up with demand.

    • bamboo@lemm.ee · 2 points · 1 year ago

      I doubt a home server centered around software like Nextcloud will ever become commonplace. I think a more probable path involves integrating new use cases with devices people already have, or at least familiar form factors: for example, streaming from your smart TV device (Chromecast, Roku, Apple TV, the TV itself) instead of from the cloud, or file sync using one of these devices as an always-on server.

      But in both of those cases there is an inherent benefit to using a centralized cloud operator. What are the odds you’ve already downloaded the episode to your TV box when your phone is where you intended to watch it anyway? And for generic storage, cloud providers replicate your data across locations to ensure higher redundancy and availability than a home server or similar device could guarantee. I presume new use cases will need to be more creative.

    • frezik@midwest.social · 11 points · 1 year ago

      And we’re still stuck on IPv4. Going to IPv6 would do a lot more than 1Gbps connections would.

        • frezik@midwest.social · 5 points · 1 year ago
          • Better routing performance
          • No longer designing protocols that jump through hoops to deal with lack of direct addressing
          • MeanEYE@lemmy.world · 3 points · 1 year ago

            Sorry to be the one to mention it, but NAT is here to stay. Even if IPv6 has enough address space for everything to have a public address, it’s still a good security measure to keep a local area network behind a firewalled exit node. Especially considering how popular IoT has become, and just how little people care about the security of those devices.

            • frezik@midwest.social · 2 points · 1 year ago

              No, stop this. NAT is not a security measure. It was not designed as one, and does not help security at all.

                • frezik@midwest.social · 1 point · 1 year ago

                  Because hiding addresses does very little. A gateway firewall does not need NAT to protect devices behind it.

                  In fact, NAT tends to make things more complicated, and complication is the enemy of security. It’s one extra thing that firewalls have to account for. Firewalls behind NAT also don’t know where traffic is originally coming from, meaning they have one less tool at their disposal. This gets even worse with CGNAT, which sometimes has multiple levels of NAT.

                  Security is a very common objection to getting rid of NAT, and it’s wrong.

          • lud@lemm.ee · 1 point · 1 year ago
            • No longer designing protocols that jump through hoops to deal with lack of direct addressing

            Fucking CGNAT…

  • Paradox@lemdro.id · 22 points · 1 year ago

    I have 10 gig at home, and powerful enough networking hardware that can take advantage of it (Ubiquiti stuff)

    Nothing can ever saturate the line. So it’s great for aggregate, but that’s it

    • LukeMedia@lemmy.world · 9 points · 1 year ago (edited)

      It’s not often that I can saturate a 1Gbps line. Unless you have a large household, I don’t see much point in going over 1Gbps right now, though I’m sure there are some exceptions.

      • AA5B@lemmy.world · 7 points · 1 year ago

        That’s what I was gonna say: it’s not that I use enough bandwidth to really need 1Gbps, it’s that the line is never even temporarily saturated. Just rock solid.

      • MeanEYE@lemmy.world · 5 points · 1 year ago

        Having a connection (or backbone, for that matter) that’s nowhere near saturated means lower latency in general. It also means future-proofing, and timely issue resolution, since you catch problems early on.

        • LukeMedia@lemmy.world · 5 points · 1 year ago

          Future proofing an Internet line doesn’t make much sense to me. If a higher speed plan is available, I’d just upgrade my plan if the need arises, save money in the meantime.

          • frezik@midwest.social · 5 points · 1 year ago

            Flip it around and look at it from the ISP’s point of view. Once fiber is connected to a house, there are few good reasons to use anything else. Whoever deploys it first wins.

            Now look at it from a monopoly ISP’s point of view. You’re providing 100Mbps service on some form of copper wire, and you’re quite comfortable leaving things like that. No reason to invest in new equipment beyond regular maintenance cycles. If some outside company tries to start deploying fiber, and if they start to make inroads, you’re going to have to (gasp) spend hundreds of millions on capital outlays to compete with them. Better to spend a few million making sure the city never allows them in.

            • MeanEYE@lemmy.world · 1 point · 1 year ago

              That too. For an ISP, it pays off to future-proof to a degree. More to the point, it’s easy to aggregate high-bandwidth users, since no one uses their full connection speed all the time; it’s simply impossible. So with 100Gbps of capacity they can offer 25Gbps service to a lot more than 4 people, closer to 40 or so. Good marketing, plus a way to test and prepare for the future at a decent investment now. It’s how things should be.
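A rough oversubscription sketch of that reasoning. The 10% average-utilization figure is an assumption for illustration, not from the comment:

```python
# How many 25 Gbps subscribers can share a 100 Gbps uplink?
uplink_gbps = 100
plan_gbps = 25
avg_utilization = 0.10  # assumed residential duty cycle

naive = uplink_gbps // plan_gbps
oversubscribed = int(uplink_gbps / (plan_gbps * avg_utilization))
print(naive)           # 4 with no oversubscription
print(oversubscribed)  # 40 at a 10:1 oversubscription ratio
```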

    • onlinepersona@programming.dev · 5 points · 1 year ago

      Man, I’d love to sit on that. Growing up with 56k and living with 100Mb/s now is already a big difference, but it shows when I push and pull Docker images or when family accesses the home server. 1Gb/s would be better, though I’d probably use up the bandwidth somehow with a new toy. 10Gb would keep me busy for a long time. 20Gb would let me try ridiculous stuff I haven’t thought of yet.

    • ours@lemmy.world · 3 points · 1 year ago

      Same, I got 10gbit because there was some early competition as fiber rolled out wider. Now my provider has slower offers at lower prices, but I don’t mind the extra bandwidth in case I need it, and I’m grandfathered in, so I pay the same as for 1gbit.

      • LukeMedia@lemmy.world · 1 point · 1 year ago

        Paying the same rate is certainly an instance where it makes sense. Plus, you can show off to friends!

  • Jah348@lemm.ee · 18 points · 1 year ago

    This is still a thing? I thought they crushed it like 10 years ago

    • ManosTheHandsOfFate@lemmy.world · 12 points · 1 year ago

      My provider recently started offering a 2gbps plan for $30 more a month. I was tempted until I thought about the money I’d need to spend on new equipment to take advantage of it. 1gbps fiber is plenty for now.

      • Billygoat@catata.fish · 4 points · 1 year ago (edited)

        To be fair, a lot of these multi-gig plans are geared toward families, where more than one person could be doing high-bandwidth activities. Or even so that one person doing high-bandwidth things doesn’t make another person’s Zoom call stutter.

        That being said, ain’t no one NEED 20Gbit, but by god I would enjoy it.

        • diomnep@lemmynsfw.com · 2 points · 1 year ago

          Thing is, though, most consumer networking gear tops out at 1Gbit, so to even take advantage of 2Gig or 2.5Gig you at least need a router with a 2.5Gig uplink. With that, you can have a couple of people on the network using a gig each.

          My setup is a 1.2g cable connection going into a 2.5g port on my router, with a couple of servers connected to the router over 10g. This basically lets me download off of my servers at the full speed of the network but the rest of my devices are limited to 1gig.

          Going up to 20gig would require a large investment to see the benefits. First you would need a router with a 25g uplink port, which is really only going to be found on a specific tier of “enterprise” gear. These routers aren’t going to have a bunch of ports so you are going to need to dump the output either to a 25g switch or a couple of 10g switches (probably the most cost-effective option). From there you can distribute out to 20 machines at 1g.

          Anyway, you are definitely right about the aim of a service like this but to see the benefits of a 20g connection would require some very expensive and specialized equipment.

      • IMongoose@lemmy.world · 3 points · 1 year ago

        Ya, mine is slow rolling 2gig but it kind of fucked me up because now I want 6E mesh APs and it’s going to cost me like $500. I know I don’t need it, but the fact that I could have it is tempting. Plus I need 6E for the VR headset I also don’t have.

          • ayaya@lemdro.id · 2 points · 1 year ago (edited)

            Oops, I actually know that but I got a little lost in the comment chain. I had just read the comment above yours talking about the 2gbps plan, hence the “half the time.” My ISP has also started offering 2gbps but still has a 1TB cap which means it’s possible to hit the cap in just over an hour which is pretty funny.
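The “just over an hour” figure above checks out:

```python
# Time to exhaust a 1 TB data cap at a sustained 2 Gbps.
cap_bits = 1e12 * 8  # 1 TB (decimal) in bits
rate_bps = 2e9       # 2 Gbps
minutes = cap_bits / rate_bps / 60
print(round(minutes, 1))  # 66.7 minutes -- just over an hour
```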

            • snooggums@kbin.social · 1 point · 1 year ago

              One thing a lot of people overlook is that even with a data cap, higher speeds are still more convenient if you consume the same amount of stuff. It isn’t as noticeable now as it was when speeds were climbing through the kilobyte range, though, so many people won’t even see the difference, especially if they don’t hit the cap.

              That said, caps are bullshit, since network congestion is caused by people using the connection at the same time, not by the total amount transferred per month.

    • tony@lemmy.hoyle.me.uk · 2 points · 1 year ago (edited)

      I’ve yet to see a remote website that’ll send me 1Gbps continuously except a speed test… and whilst it’s nice to see big numbers on those, it doesn’t really justify the cost.

      Even Microsoft and Steam throttle far lower than that (presumably because they don’t want a million people trying to hit them at 1Gbps constantly).

      Once my minimum term is up on this link I can get a 1.6Gbps one, but I probably won’t bother.

  • prorester@kbin.social · 13 points · 1 year ago

    Why are people doubting this? This opens up massive possibilities for people, especially those who want to start businesses outside of city centers.

    You could:

    • host your own home-servers and never be worried about bandwidth

    • get 8k streams without stutter (a low-end 8k stream requires 50Mb/s, so a family of 4 would need a minimum of 200Mb/s just for video)

    • send 8k streams and not stutter

    • offload most of your data to a datacenter on the other side of the planet and not worry about access speeds

      • boot into a browser or a minimal frontend on a low-powered device and mount your home directory
    • offload computing to the cloud (no need for a gaming PC if you can just play them online)

    The biggest thing would be 8k streams; 360° 8k streams would be even crazier. 360 videos are filmed using 3-6 cameras, depending on how much fish-eye you want; true 360 requires at least 6. If each is filmed at 1080p, that’s ~6k total resolution, but since you’re only watching one section of the video at a time, you’re really seeing 1080p.

    Those “8k 360 videos” up on youtube are a lie! They aren’t 6x8k, but most likely 8k / number of cameras. True 360 8k video would be 6x8k cameras.

    A single 8k stream requires ~50Mb/s at minimum. Multiply that by 6 and you’re at 300Mb/s just for a single 360° 8k stream. A family of 4 → 1.2Gb/s just for everybody to watch that content, and that’s the minimum. With a higher bitrate, and not streaming at 30 fps, you can quite easily double or quadruple that: for a family of 4, roughly 5Gb/s if everybody’s watching that kind of content in parallel.
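Sketching that worst case in code, using the same assumptions as above (6 camera feeds per 360° stream at 50Mb/s each):

```python
# Aggregate bandwidth for a family of four watching 360-degree 8K streams.
feeds_per_viewer = 6   # one ~8K feed per camera direction
mbps_per_feed = 50     # low-end 8K bitrate at 30 fps
viewers = 4

total_mbps = feeds_per_viewer * mbps_per_feed * viewers
print(total_mbps / 1000)      # 1.2 Gb/s at the minimum bitrate
print(total_mbps * 4 / 1000)  # 4.8 Gb/s at 4x the bitrate/framerate
```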

    But this is just the beginning. Why stop at “video”? These kinds of transfer speeds open you up to interactive technologies.

    It would still not be enough to stream 8k without any compression whatsoever to reach the lowest latency.

    8k = 7680 × 4320 = 33,177,600 pixels. Each pixel has 3 color values (red, green, blue), and each takes one of 256 (0-255) values, i.e. 1 byte, so 3 bytes just for color.
    3 × 33,177,600 = 99,532,800 bytes per frame
    99,532,800 bytes / 1,024 = 97,200 kilobytes
    97,200 kilobytes / 1,024 = ~95 megabytes

    So 95MB/frame. Say you’re streaming your screen with no compression at 60Hz, i.e. about 60 fps (minimum). That’s 60 × 95MB/s ≈ 5,695MB/s, or about 5.6GB/s. Multiply by 8 to get bits and you’re at roughly 45.6Gb/s, way above 25Gb/s. Uncompressed 4k at 60Hz works out to about 12Gb/s, so it would squeeze onto the line, and 2k would certainly be possible. I for one would like to see what an uncompressed 2k stream would look like. In the future, you could have your gaming PC at home hooked up to the internet, go anywhere with a 25Gb/s line, plop down a screen, connect it to the internet, and control your computer at a distance with minimal lag, as if you were right at home.
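The same per-resolution arithmetic as a function. This uses decimal prefixes throughout, so the figures land slightly differently from the 1024-based division above:

```python
# Uncompressed RGB video bandwidth, 8 bits per color channel.
def uncompressed_gbps(width: int, height: int, fps: int = 60) -> float:
    bits_per_frame = width * height * 3 * 8  # 3 bytes of color per pixel
    return bits_per_frame * fps / 1e9

print(round(uncompressed_gbps(7680, 4320), 1))  # 47.8 -- 8K, far over 25 Gb/s
print(round(uncompressed_gbps(3840, 2160), 1))  # 11.9 -- 4K
print(round(uncompressed_gbps(2560, 1440), 1))  # 5.3  -- 1440p
```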

    In conclusion, 25Gb wouldn’t allow you to do whatever you like. You could do a lot, but there’s still room. We’re not at the end of the road yet.

    • Kogasa@programming.dev · 14 points · 1 year ago

      Yeah, man. Thank God someone is finally thinking about the family of 4 simultaneously watching 8K 120Hz 360 degree streams.

      Also,

      • bandwidth isn’t the same as latency. This would not let you remote control “with minimal latency,” it would be exactly the same as it is with say 20Mbps download.

      • lossless and visually lossless compression dramatically reduces the amount of bandwidth required to stream video. Nobody will ever stream uncompressed video, it makes no sense.

      • If you want to know what an uncompressed 2K stream looks like, look at a 2K monitor.

      • prorester@kbin.social · 1 point · 1 year ago (edited)

        Again, just because it isn’t being done yet doesn’t mean it won’t be. Every time technology progresses, we find new and interesting ways to fill the space created by it.

        Nobody will ever stream uncompressed video, it makes no sense

        Nobody thought it would make sense to stream games over the internet with Nvidia GeForce Now, but it’s being done. Nobody thought it would make sense to turn a browser into a nearly full operating system, but that’s about done.

        If you want to know what an uncompressed 2K stream looks like, look at a 2K monitor.

        Genius, why didn’t I think of that. Thanks for pointing that out.

        bandwidth isn’t the same as latency

        Wow, I had no idea! I bet a 20Gb line won’t get under 1s of ping. There’s absolutely no way.

    • maxprime@lemmy.ml · 13 points · 1 year ago

      20-gig networking, even just a switch, is so expensive. 10-gig is already out of reach for 99% of the population, even network nerds. Only in the past couple of years have motherboards started shipping with 2.5Gbps RJ45 as standard, and a lot of brand-new NVMe SSDs can’t saturate 25Gbps. There are just so many bottlenecks. I dearly wish those didn’t exist, but I know from my experience upgrading to 10-gig just how many there are.
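For scale, a quick unit conversion on the SSD point (whether a given drive clears this depends on the drive, of course):

```python
# Sequential throughput an SSD must sustain to saturate a 25 Gbps link.
link_gbps = 25
gigabytes_per_second = link_gbps / 8  # 8 bits per byte
print(gigabytes_per_second)  # 3.125 GB/s
```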

      https://store.ui.com/us/en/pro/category/all-switching/products/usw-pro-aggregation

      Personally I am more excited for high speed networking for homelabs to come down in price. At this point in my life I don’t feel the need to access my network outside of my house at super high speeds. My 100mbps up is fine for when I’m out of the house, and 10gbps is more than I need when I’m home.

      • Pretzilla@lemmy.world · 1 point · 1 year ago (edited)

        Indeed. I’m getting much less than 1/10th of my provisioned 10Gbps for being cheap like that. It’s still plenty fast, though.

        10Gbps is great for feeding a building

        At this point I just want affordable 2.5Gb gear

        • maxprime@lemmy.ml · 3 points · 1 year ago

          Totally. IMO 2.5Gbps should be in every new switch and router at no extra cost.

          Gigabit Ethernet came out in 1999. No other standard has moved this slowly.

      • onlinepersona@programming.dev · 1 point · 1 year ago

        Wouldn’t they provide you with a 20Gb-compatible router? I was curious, and Cat8 LAN cables support 40Gb/s. They’re 3× as expensive as Cat7, but I’m just a few meters from the router, so that’s about 10-15€ and the cables are done.

        Ah… the PCIe Ethernet card is where it gets pricey 😮 250€ for a 10Gb card.

        Damn…

        Still, I’d be future-proof for sure. That kind of speed will probably be enough for 20 years or so.

        • maxprime@lemmy.ml · 3 points · 1 year ago

          FWIW 10 gig cards can be much cheaper than 250€ as long as you’re willing to use SFP+ (I got a used pair of cards with a 10m optical cable for $90 CAD) but 25gig is where it gets stupid.

          Even if they do supply a capable router, you will probably want at least a switch since most ISP supplied routers only have a few ports. Plus, it’s not uncommon for an ISP router to deliver their advertised speed over only one port, even if the router has several. At the end of the day, though, if you’re paying for >gigabit you probably want to set up your own firewall with a fancy router so you can properly configure your network.

          Crazy that gigabit Ethernet is 25 years old and still the de facto standard. IMO we should all be able to afford 100gig inside our homes, finding the bottleneck inside our machines, not between them. Alas, 10gig is for the enthusiasts, and anything above that is for the elites.

      • prorester@kbin.social · 1 point · 1 year ago

        There are people on the internet with about 2-3 ms of ping. I’m not a network engineer to tell you how that’s even possible, but I’ve seen it. I’m on 15ms to most game servers right now on a copper line.

        Google Stadia failed for different reasons. Nvidia’s GeForce Now still exists. Just because I have a shitty copper line doesn’t mean fibre will be as shitty.

    • MeanEYE@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      ·
      1 year ago

      Am thinking that in the somewhat near future, network boot will become a lot more dominant than it used to be. Infrastructure speeds are becoming sufficient to make it practical: a somewhat longer boot, but in exchange significantly simpler administration and troubleshooting.
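      The plumbing for this has existed for ages; a minimal PXE network-boot setup is just a few lines of dnsmasq config (a sketch with hypothetical paths, assuming dnsmasq is already the DHCP server on the network):

```
# /etc/dnsmasq.d/pxe.conf -- minimal network-boot sketch (hypothetical paths)
# Boot filename handed to PXE clients via DHCP:
dhcp-boot=pxelinux.0
# Serve it over dnsmasq's built-in TFTP server:
enable-tftp
tftp-root=/srv/tftp
```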

    • wahming@monyet.cc
      link
      fedilink
      English
      arrow-up
      1
      ·
      1 year ago

      I’m just doubting Google will actually get it done. They’ve already abandoned fibre expansion once, no reason to think they’ll stick to it this time around.

  • b0gl@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    11
    ·
    1 year ago

    I’ll never understand how you guys in the US are fine with having bandwidth limits on your broadband connections. I’d be pissed. I even have unlimited on my phone. Like wth?

    • enthusiasticamoeba@lemmy.ml
      link
      fedilink
      English
      arrow-up
      14
      ·
      1 year ago

      What makes you think people are fine with it? ISPs have monopolies over service areas and can do whatever the fuck they want. They have monopolies because of corporate lobbying. No amount of voting gets these corrupt fucks out of office bc votes literally do not matter and there’s only two parties, they’re both to the right of center, and they’re both bought and sold. Just to really make sure, we’re all taught from birth that the US is peak civilization and all other countries are backwater shitholes.

    • merlinf@kbin.social
      link
      fedilink
      arrow-up
      1
      ·
      1 year ago

      Where in the world do you not have bandwidth limits? If there were no bandwidth limits, I could just DoS my entire ISP by downloading petabytes between two of my own computers.

    • Meltrax@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      1 year ago

      I think you are mistaking bandwidth limits for data caps?

      At some point every device has a bandwidth limit. Even if you somehow had a 10Gb/s phone data connection (which is absolutely not possible), your phone literally cannot transfer data that fast.
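      The difference matters a lot in practice: bandwidth is a rate, a data cap is a volume. A quick sketch in Python (the 1.2 TB cap is just an illustrative figure, not any specific plan):

```python
def cap_burn_seconds(cap_terabytes: float, link_gbit_s: float) -> float:
    """How long a data cap lasts at sustained line rate."""
    cap_bits = cap_terabytes * 1e12 * 8       # TB -> bits
    return cap_bits / (link_gbit_s * 1e9)     # bits / (bits per second)

# At a sustained 10 Gbit/s, an illustrative 1.2 TB cap is gone in 16 minutes.
print(round(cap_burn_seconds(1.2, 10) / 60))  # 16
```

      The faster the pipe, the more absurd a cap becomes.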

  • flop_leash_973@lemmy.world
    link
    fedilink
    English
    arrow-up
    10
    ·
    edit-2
    1 year ago

    Would be more exciting and worth paying attention to if Google Fiber wasn’t basically living in an iron lung over at Alphabet these days since they halted major expansion.

    • motherr@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      ·
      1 year ago

      Why would you care that it’s passive (PON: passive optical network)? As I understand it, the limitations of passive vs active wouldn’t have any impact on the end user. It’s not something I know a lot about, though.

      • Kazumara@feddit.de
        link
        fedilink
        English
        arrow-up
        2
        ·
        edit-2
        1 year ago

        Because PONs are just fundamentally worse. Why would anyone turn fiber, of all things, into a shared medium? Just lay fibers from each dwelling up to the central office. It’s barely any costlier, since the real expense is the digging, not the fiber. And it’s basically guaranteed to scale forever by simply replacing the optics on the ends. That kind of infrastructure can also be leased out to other providers at an individual-dwelling granularity. With PONs, competitors are forced into reselling bandwidth at best, or the infrastructure can be fully monopolised.

      • Kazumara@feddit.de
        link
        fedilink
        English
        arrow-up
        1
        ·
        1 year ago

        As opposed to a normal fiber link to the switch in the central office. No oversubscription or shared media.

        • Squizzy@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          1 year ago

          I don’t understand how it is shared media through a PON system. What is the name for this alternative? I’d like to look into it.

          • Kazumara@feddit.de
            link
            fedilink
            English
            arrow-up
            1
            ·
            edit-2
            1 year ago

            In a typical PON (GPON, XG-PON, XGS-PON) you have a single fiber from the central office to the optical splitter in the street, from which up to 64 subscribers are connected with one fiber each. The stretch between the central office and the splitter is shared. The splitter is passive and just sends 1/64 of the light to each downstream port; in the other direction it combines all the downstream light towards the upstream port.

            The OLT in the central office sends on one wavelength (e.g. 1577 nm) and all subscriber ONTs send on one other common wavelength (e.g. 1270 nm). In both directions a time-division technique is applied. I believe in the downstream the individual time frames are encrypted with different keys in turn, such that only the specific destination ONT can read the content of its time frames. In the upstream the ONTs have to make sure to send only in their own slots, as otherwise the OLT would receive superimposed optical signals that couldn’t be read. You can probably see how this could go wrong if a neighbor had malfunctioning equipment.
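            Incidentally, that passive 1/64 power split is also why PON optics need a large link budget; the ideal splitter loss is easy to compute (a sketch, ignoring the extra excess loss of real devices):

```python
import math

def splitter_loss_db(ways: int) -> float:
    """Ideal insertion loss of a 1:N passive optical splitter."""
    return 10 * math.log10(ways)

# Each doubling of the split costs ~3 dB; a 1:64 split alone eats ~18 dB.
for ways in (2, 32, 64):
    print(f"1:{ways} -> {splitter_loss_db(ways):.1f} dB")
```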

            The alternative doesn’t really have a set of standards like PON, as you can just use whatever optical transceivers you want for each customer individually. Though I guess that for operational reasons an ISP would still standardise the setup for all customers. For example the ISP whose services I subscribe to tells customers to use “Bidir LR, 10 km, TX1310, RX1490-1550 nm”, as 1G, 10G, or 25G, depending on which you order.

            To distinguish such a setup from a PON setup, I have seen it called point-to-point (P2P).