I did some experimentation with UniFi hubs and concluded that if you can give each device its own WiFi channel, that is ideal -- contention is that bad, and an uncontended channel with otherwise poor characteristics will often beat a contended channel that otherwise looks good.
The other bit of advice that is buried in there that no-one wants to hear for residences is the best way to speed up your Wi-Fi is to not use it. You might think it's convenient to have your TV connect to Netflix via WiFi and it is, but it is going to make everything else that really needs the Wi-Fi slower. It's a much better answer to hook up everything on Ethernet that you possibly can than it is to follow the more traveled route of more channels and more congestion with mesh Wi-Fi.
> The other bit of advice that is buried in there that no-one wants to hear for residences is the best way to speed up your Wi-Fi is to not use it. You might think it's convenient to have your TV connect to Netflix via WiFi and it is, but it is going to make everything else that really needs the Wi-Fi slower. It's a much better answer to hook up everything on Ethernet that you possibly can than it is to follow the more traveled route of more channels and more congestion with mesh Wi-Fi.
Absolutely. Everything other than cell phones and laptops-not-at-a-desk should be on Ethernet.
I had wires run in 2020 when I started doing even more video calls. Huge improvement in usability.
The house I live in was built with ethernet, but of the fourteen outlets the builders saw fit to include, not one is located where we can make use of it. The two devices in our house which use a wired connection are both plugged directly into the switch in our utility closet.
(We do have one internet-connected device which permanently lives about an inch away from one of the ethernet sockets, but it is, ironically, a wifi-only device with no RJ45 port.)
Some friends live in a rental that they’ve decorated well. It wasn’t until multiple visits that I realized they had run Ethernet throughout the house.
You can get skinny Ethernet cables that bend easily. If you get some that match your paint, and route them in straight lines, those can be unobtrusive. Use tricks like running the cables along baseboards and other trim pieces. If you really want to minimize the visual impact you can use cable runners and paint over them. The cables are not attention-grabbing compared to furniture or art on the wall.
If you’re willing to drill holes (if you terminate the cable yourself, the hole can be narrow), you can pass the cables through walls. If you don’t want to drill, you can go under a door.
If you’ve got fourteen outlets, it seems like there ought to be some solution to get cables everywhere you need.
If you own, you should replace and/or move them. Might sound scary if you've never done this before but it is much easier than you'd think. If you want to make your future life easier I suggest running a PVC pipe (at minimum in the drop-down portion) and leaving the pull string in. Replacing or adding new cabling will be much easier if you do this, so it's totally worth the few extra bucks and few extra minutes of work. The cables will also be less likely to be accidentally damaged (stepping on them, rodents, water damage, etc). I seriously cannot understand why this is not more common practice. You might save a few bucks but you sacrifice a lot more than you're saving... (chasing pennies with pounds)
If rental, you could put in an extender. If you're less concerned about aesthetics you can pop the wall plate off and directly tie into the existing cable OR run a new one in parallel. If you're willing to donate the replacement wire and don't have access to the attic but do have access to both ends of the existing cable, then you can use one to pull the other through. You could coil the excess wire behind the plate when you reinstall it. But that definitely runs the risk of losing the cable since it might be navigating through a hard corner. If you go that route I'd suggest just asking your landlord. They'd probably be chill about it and might even pay for it.
I live in a brick house where only half of the walls are hollow. Bringing Ethernet wires to a few critical areas and putting small surface-mount RJ-45 sockets was not that hard.
Of course, some thin raceways can be seen somewhere along the baseboard. It does not look terrible, and is barely noticeable.
Stick houses with hollow walls are cheaper to build (assuming cheap wood) and cheaper to work on. Probably cheaper to maintain too, but not as durable, so it might work out... Otoh, durable isn't great when housing trends have moved on.
Much more durable in an earthquake though, which is important in places like the US where half the country is a serious seismic hazard zone. In many locales only wood or steel framing is allowed because historically stone and concrete construction collapsed due to the strength of the earthquakes.
You can lay your own cables, either to the next wall socket or directly to a switch. Flat ethernet cables can be very helpful for hiding and for crossing doorways. Generous "unnecessary" wire length helps with keeping them out of sight.
> The house I live in was built with ethernet, but of the fourteen outlets the builders saw fit to include, not one is located where we can make use of it.
I had a similar situation a few years back. It was a rental so I didn't have access to the attic let alone permission to do my own drops. It'll depend a _lot_ on your exact setup, but we had reasonably good results with some ethernet-over-power adapters.
Ethernet over powerline adapters are a very YMMV situation. Occasionally it works great for people, but more often than not, the performance is poor and/or unreliable, especially in countries with split-phase 120/240 volt power (where good performance relies on choosing outlets with hots on the same side of the center-tapped neutral). The people who most commonly share success stories with powerline Ethernet are residents of the UK, where houses only have 2 wires coming in from the pole and there's often a ring main system where an entire floor of a house will be on one circuit.
A better solution is repurposing unused 75Ω coaxial cable with MoCA 2.5 adapters, which will actually give you 1+ Gbps symmetrical. The latency is a very consistent 3-4ms, which is negligible. I use Screenbeam (formerly Actiontek) ECB6250 adapters, though they now make a new model, ECB7250, which is identical to the ECB6250 except with 2.5GBASE-T ports instead of 1000BASE-T.
> A better solution is repurposing unused 75Ω coaxial cable with MoCA 2.5 adapters
I'll second this. MoCA works. You can get MoCA adapters off Ebay or whatnot for cheap: look for Frontier branded FCA252. ~90 MBps with a 1000BASE-T switch in the loop. I see ~3 ms of added latency. I've made point-to-point links exclusively, as opposed to using splitters and putting >2 MoCA adapters on shared medium, but that is supported as well.
That was my experience too. The experience with powerline ethernet adapters was unbearable on a daily basis.
We had an unused coax (which we disconnected from the outside world) and used MoCA adapters (actiontek) and it's been consistently great/stable. No issues ever... for years.
We have them at home as well and they really suck. They lose connection every 20ish minutes at best, and take about 5 minutes to reconnect. Makes Zoom meetings impossible, among other things.
I used those during covid to get a reliable connection for video calls and it was a huge step up over wifi. The bandwidth was like 1/10th of actual gige, so I got a wire pulled to my office when I went to fibre but there’s no question in my mind that decent powerline adaptors are the winner for connection stability.
It depends on your wiring but I've had pretty good success with AV2000 powerline ethernet. I get about 400Mbps and a reliable 2ms ping which is good enough for gaming and streaming from my media center.
The endpoint in my living room also has a wifi AP so signal is pretty good for laptops and whatnot.
In NYC every channel is congested, I can see like 25 access points at any time and half are poorly configured. Any wired medium is better than the air, I could probably propagate a signal through the drywall that's more reliable than wifi here.
So having something I can just plug into the wall is pretty nice compared to running cables even if it's a fraction of gigE standards.
And the big one I want to point out, is that this AI stuff has me downloading so many ten gigabyte model files to run them locally that I'm really feeling the lack of speed that my setup has.
MU-MIMO would help. The real problem is that energy between a unit and an AP is not in a pencil-thin RF laser-beam -- it is spread out. Other nodes hear that energy, and back off. If we had better control of point-to-point links, then you could have plenty of bandwidth. It's not as if the photon field cannot hold them all. When we broadcast in all directions, we waste energy, and we cause unnecessary interference to other receivers.
it was quite a while back but I read some press release about a manufacturer that would make an access point that had mechanically steered directional antennas. Unfortunately I don't think it ever made it to market.
That can help in one direction, but networks are bi-directional.
No matter how fancy and directive the antenna arrangement may be at the access point end, the other devices that use this access point will be using whatever they have for antennas.
The access point may be able to produce and/or receive one or many signals with arbitrarily-aimed, laser-like precision, but the client devices will still tend to radiate mostly-omnidirectionally -- to the access point, to each other, and to the rest of the world around them.
The client devices will still hear each other just fine and will back off when another one nearby is transmitting. The access point cannot help with this, no matter how fanciful it may be.
(Waiting for a clear-enough channel before transmitting is part of the 802.11 specification. That's the Carrier Sense part of CSMA/CA.)
We downsized from a house built in 1914 with phone jacks everywhere to a house built in 2007 with coax and ethernet ports in every room, some rooms with two.
At the 1914 house, I used ethernet-over-powerline adapters so I could have a second router running in access point mode. The alternative was punching holes in the outside walls since there was no way to feasibly run cabling inside lath-and-plaster walls.
I don't know how 2025 houses are built but I would be surprised if they didn't have an ethernet jack in every room to a wiring closet of some sort. Not sure about coax.
My son has ethernet in his dorm with an ethernet switch so he can connect his video game consoles and TV. I think that's pretty common.
> I don't know how 2025 houses are built but I would be surprised if they didn't have an ethernet jack in every room to a wiring closet of some sort. Not sure about coax.
Speaking from a US standpoint, it is still not common in new construction for ethernet to be deployed in a house. I'm not sure why. It seems like a no-brainer.
Coax is still usually reserved to a couple jacks -- usually in the living room and master bedrooms.
It’s a cost that doesn’t show up on listings. There’s a surprising number of ways new US construction sucks that just comes down to how it can be advertised.
> I don't know how 2025 houses are built but I would be surprised if they didn't have an ethernet jack in every room to a wiring closet of some sort. Not sure about coax.
Aye.
Cat5/6/whatever-ish cabling has been both the present and the future for something on the order of 25 years now. It's as much of a no-brainer to build network wiring into a home today as it once was to build telephone and TV wiring into a home. Networking should be part of all new home builds.
And yet: Here in 2025, I'm presently working on a new custom home, wherein we're installing some vaguely-elaborate audio-visual stuff. The company in charge of the LAN/WAN end of things had intended to have the ISP bring fiber WAN into a utility area of the basement (yay fiber!), and put a singular Eeros router/mesh node there, and have that be that.
The rest of the house? More mesh nodes, just wirelessly-connected to each other. No other installed network wires at all -- in a nicely-finished and fairly opulent house that is owned by a very successful local doctor.
They didn't even understand why we were planning to cable up the televisions and other AV gear that would otherwise be scooping up finite wireless bandwidth from their fixed, hard-mounted locations.
In terms of surprise: Nothing surprises me now.
(In terms of cost: We wound up volunteering to run wiring for the mesh nodes. It will cost us ~nothing on the scale that we're operating at, and we're already installing cabling... and not doing it this way just seems so profoundly dumb.)
Powerline Ethernet is a coin toss though. Depending on how many or few shits the last electrician to work on your house gave, it could be great or unusable. Especially if you're in a shared space like an apartment/condo: in theory units are supposed to be sufficiently electrically isolated from each other that powerline is possible; in practice, not so much. I've been in apartments where I plugged in my powerline gear and literally nothing happened: no frames, nothing.
Ethernet cables can be as long as 100 meters, long enough to snake around most any apartment. Add on a few rugs to cover over where they'd be tripping hazards and you're all set.
the one sort of asterisk I'd put there is that ethernet cable damage is a real risk. Lots of stories of people just replacing cables they have used for a while and seeing improvements.
But if you can pull it off (or even better, move your router closest to the most annoying thing and work from there!), excellent
In an apartment I once had, I ran some cat5-ish cable through the back wall of one closet and into another.
In between those closets was a bathroom, with a bathtub.
I fished the cable through the void of the bathtub's internals.
Spanning a space like this is not too hard to do with a tape measure, some cheap fiberglass rods, a metal coat hanger, and an apt helper.
Or these days, a person can replace the helper by plugging a $20 endoscope camera into their pocket supercomputer. They usually come with a hook that can be attached, or different hooks can be fashioned and taped on. It takes patience, but it can go pretty quickly. In my experience, most of the time is spent just trying to wrap one's brain around working in 3 dimensions while seeing through a 2-dimensional endoscope camera that doesn't know which way is up, which is a bit of a mindfuck at first.
Anyway, just use the camera to grab the rod or the ball of string pushed in with the rod or whatever. Worst-case: If a single tiny thread can make it from A to B, then that thread can pull in a somewhat-larger string, and that string can finally pull in a cable.
(Situations vary, but I never heard a word about these little holes in the closets that I left behind when I moved out, just as I also didn't hear anything about any of the other little holes I'd left from things like hanging up artwork or office garb.)
I assumed that to get from one side of a doorframe to the other, instead of crossing underneath the door, you'd go around the perimeter of the room the door is for. Which seems like a lot to remove a trip hazard, but I suspect the Wife Approval Factor plays a role.
A lot has changed in the 25 years since gbit wired ethernet was rolled out, even while wired ethernet itself stagnated due to greed.
Got powerlines? Well then you can get gbit+ to a few outlets in your house.
Got old CATV cables? Then you can use them at multiple gbit with MoCA.
Got old phone lines? Then its possible to run ethernet over them with SPE and maybe get a gbit.
And frankly just calling someone who wires houses and getting a quote will tell you if its true. The vast majority of houses arent that hard, even old ones. Attic drops through the walls, cables below in the crawlspace, behind the baseboards. Hell just about every house in the USA had cable/dish at one point, and all they did was nail it to the soffit and punch it right through the walls.
Most people don't need a drop every 6 feet, one near the TV, one in a study, maybe a couple in a closet/ceiling/etc. Then those drops get used to put a little POE 8 port switch in place and drive an AP, TV, whatever.
> Got old phone lines? Then its possible to run ethernet over them with SPE and maybe get a gbit.
Depending on the age of the house, there's a chance that the phone lines are 4-pair, and you can probably run 1G on 4-pair wire: it's probably at least cat3 if it's 4-pair, and quality cat3 that's not a max-length run in dense conduit is likely to do gigE just fine. If it's only two-pair, you can still run 100, but you'll want to either run a managed switch that you can force to 100M or find an unmanaged switch that can't do 1G... otherwise you're likely to negotiate to 1G, which will fail because of the missing pairs.
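If you'd rather force it from the computer side than hunt for the right switch, a minimal sketch assuming a Linux host with ethtool and an interface named eth0 (names and driver support vary):

```
# see what the NIC negotiated
sudo ethtool eth0
# pin the port to 100 Mbps full duplex so it never attempts 1000BASE-T over 2 pairs
sudo ethtool -s eth0 speed 100 duplex full autoneg off
# alternative: keep autoneg on but only advertise 100baseT/Full
sudo ethtool -s eth0 autoneg on advertise 0x008
```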
Gigabit ethernet "requires" 4 pairs of no-less-than cat5. The 100mbps standard that won the race -- 100BASE-TX -- also "requires" no-less-than cat5, but only 2 pairs of it.
Either may "work" with cat3, but that's by no means a certainty. The twists are simply not very twisty with cat3 compared to any of its successors...and this does make a difference.
But at least: If gigabit is flaky over a given span of whatever wire, then the connection can be forced to be not-gigabit by eliminating the brown and blue pairs. Neither end will get stuck trying to make a 1000BASE-T connection with only the orange and green pairs being contiguous.
I think I even still have a couple of factory-made cat5-ish patch cords kicking around that feature only 2 pairs; the grey patch cord that came with the OG Xbox is one such contrivance. Putting one of these in at either end brings the link down to no more than 100BASE-TX without any additional work.
(Scare quotes intentional, but it may be worth trying if the wire is already there.
Disclaimers: I've made many thousands of terminations of cat3 -- it's nice and fast to work with using things like 66 blocks. I've also spent waaaaay too much time trying to troubleshoot Ethernet networks that had been made with in-situ wiring that wasn't quite cutting the mustard.)
I wish I could have multiple modems coming into the house using the same provided cable. Why’s that not possible?
When I was younger I went and bought a new modem so I could play halo on my Xbox in another room than where my parents had the original modem. Found out then I’d need to pay for each modem.
> the best way to speed up your Wi-Fi is to not use it.
So true!
Other tips I’ve found useful:
Separate 2.4ghz network for only IoT devices. They tend to have terrible WiFi chipsets and use older WiFi standards. Slower speed = more airtime used for the same amount of data. This way the “slow” IoT devices don’t interfere with your faster devices which…
Faster devices such as laptops and phones belong on a 5ghz-only network, if you’re able to get enough coverage. Prefer wired backhaul and more access points, as you’re better off with a device talking on another channel to an AP closer to it rather than tying up airtime with lots of retries to a far away AP (which impacts all the other clients also trying to talk to that AP)
WiFi is super solid at our house but it took some tweaking and wiring everything that doesn’t move.
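For what it's worth, the "separate 2.4GHz IoT SSID" piece is only a few lines on an OpenWrt-style router. This is just a sketch: radio0, the SSID, the key, and the pre-existing 'iot' firewalled network are all assumptions to adapt.

```
# dedicated 2.4 GHz SSID for IoT, bridged to its own firewalled network
uci set wireless.iot=wifi-iface
uci set wireless.iot.device='radio0'   # assumed to be the 2.4 GHz radio
uci set wireless.iot.mode='ap'
uci set wireless.iot.ssid='iot-2g'
uci set wireless.iot.encryption='psk2'
uci set wireless.iot.key='change-me'
uci set wireless.iot.network='iot'     # separate interface/VLAN so it can be firewalled
uci commit wireless && wifi reload
```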
That sounds like a good concept: I'm no stranger to cheap IoT devices chewing up local 2.4GHz bandwidth with chatter and I have a lot of that going on. But does it matter in 2025?
As a broad concept: Ever since my last Sonos device [that they didn't deliberately brick] died, I don't have any even vaguely bandwidth-intensive devices left in my world that are 2.4GHz-only.
Whatever laptop I have this year prefers the 5GHz network, and has for 20 years. My phone, whatever it is today, does as well and has for 15 years. My CCwGTV Chromecast would also prefer hanging out on the 5GHz network if it weren't plugged into the $12 ethernet switch behind the TV.
Even things like the Google Home Mini speakers that I buy on the used market for $10 or $15 seem to prefer using 5GHz 802.11ac, and do so at a reasonably-quick (read: low-airtime) modulation rate.
The only time I spend with my phone or tablet or whatever on the singular 2.4GHz network I have is when I'm at the edge of what I can reach with my access points -- like, when I visit the neighbors or something, where range is more important than speed and 2.4GHz tends to go a wee bit further.
So the only things I have left in normal use that require a 2.4GHz network are IoT things like smart plugs and light bulbs and other small stuff like my own little ESP/Pi Zero W projects that require so little bandwidth that the contention doesn't matter. (I mean... the ye olde Wii console and PSP handheld only do 2.4GHz, but they don't have much to talk about on the network anymore and never really did even in the best of times.)
It's difficult to imagine that others' wifi devices aren't in similar form, because there's just not much stuff left out there in the world that's both not IoT and that can't talk at 5GHz.
I can see some merit to having a separate IoT VLAN with its own SSID where that's appropriate (just to prevent their little IoT fingers from ever reaching out to the rest of the stuff on my LAN and discovering how insecure it may be), but that's a side-trip from your suggestion wherein the impetus is just logical isolation -- not spectral isolation.
So yes, of course: Build out a robust wireless network. Make it awesome -- and use it for stuff.
But unless I'm missing something, it sounds like building two separate-but-parallel 2.4GHz networks is just an exercise in solving a problem that hasn't really existed for a number of years.
Absolutely. Your IoT devices should be on their own 2.4ghz network running on a specific channel to isolate them. You should also firewall these devices pretty heavily on their own router.
The only devices on wifi should be cell phones and laptops if they can't be plugged in. Everything else, including TVs, should be ethernet.
When I moved into my last house with roommates their network was gaaarbage cuz everything was running off the same router. The 2.4ghz congestion slowed the 5ghz connections because the router was having to deal with so much 2.4ghz noise.
A good way of thinking about it is that every 2.4ghz device you add onto a network will slow all the other devices by a small amount. This compounds as you add more devices. So those smart lights? Yeaaahh
> When I moved into my last house with roommates their network was gaaarbage cuz everything was running off the same router. The 2.4ghz congestion slowed the 5ghz connections because the router was having to deal with so much 2.4ghz noise.
I don't know why you're saying that; a 2.4 GHz device should not interfere with 5 GHz channels unless it somehow emits harmonics, which would most definitely make it noncompliant with various FCC standards. Or do you mean the modem was so crappy it couldn't deal with processing noisy 2.4 GHz channels at the same time as 5 GHz ones? That might be true, but I would assume the modems would run completely different DSP chains on different ASICs, so this would be surprising.
> do you mean the modem was so crappy
> but I would assume the modems
Your assumption is sometimes incorrect as cheap devices can share some RF front end. Also apparently resource contention can also occur due to CPU, thermal, and memory issues.
Ah, splendid. I'm so glad that you have come before me today to present this bot's confounding quandary, and I receive it with tremendous glee.
Please allow me to proffer the following retort: The answer to having a shitty, incapable router is to use one that is not shitty, and is capable.
(The router bits have no clue what RF spectrum is being utilized, and never have. They just deal with packets. The packets are shaped the same fucking way regardless of the physical interface on which they arrive, or are destined for.)
If one must forgo the comfort of complete isolation from the vulgarities of contemporary media and visual indulgence – an unwise choice, yet one that many appear compelled to make – then prudence demands mitigation rather than surrender.
A measured compromise would entail the meticulous profiling of the TV’s network traffic, followed by the imposition of complete blocking at the DNS level (via Pi-hole, NextDNS and the like) first, whilst blacklisting the outgoing CIDRs on the router itself at the same time.
This course of action shall not eliminate the privacy invasion risk in its entirety – for a mere firmware update may well redirect the TV traffic to novel hosts – yet it shall transform a reckless exposure into a calculated and therefore manageable risk.
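As a rough sketch of that two-layer approach: the hostname and CIDR below are placeholders for whatever your own TV actually contacts, the Pi-hole CLI syntax varies by version, and the nftables table/chain are assumed to already exist.

```
# block an observed telemetry hostname at the DNS layer (Pi-hole blacklist)
pihole -b telemetry.example-tv-vendor.com
# and drop the observed destination range at the router (Linux/nftables)
nft add rule inet filter forward ip daddr 203.0.113.0/24 drop
```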
Most people don't know that Big Tech is extracting data from them on a massive scale. It's up to us, the "tech people," to educate the people and show them alternatives like Graphene. As for the TV, my advice is not to connect it to the internet. If you need to stream something, hook up a laptop or dedicated device to the TV.
This is where regulation comes in. For the TV makers. Things should be secure by default and come with fines if they aren't.
As for the extracting of data, yes that happens on a massive scale. In free products that no one is forced to use. And I would argue that, by now, almost everyone should know that comes at a price, it's just not monetary to the user. At that point it's a choice people make and should be allowed to make.
Solid idea and something I should work towards. We have Ethernet drops in every room but you’re right about IoT devices. Now I have some more planning to do.
An idle Wi-Fi client with no traffic should have a very minimal effect on your network's quality. The TV is only going to be slowing things down if it's actually using the network and downloading/uploading. Which regrettably, is a problem with smart TVs. But there's no reason to limit the number of idle clients on a Wi-Fi network assuming your gateway can handle it. The challenge is though in the real world many devices that should be idle aren't.
For my IoT network I just block most every device's access to the internet. That cuts down on a lot of their background chatter and gives me some minor protection.
Also honestly, I feel the majority of wifi problems could be fixed by having proper coverage (more access points), using hardwired access points (no meshing), and getting better equipment. I like Ubiquiti/Unifi stuff but other good options out there. Avoid TP-Link and anything provided by an ISP. If you do go meshing, insist on a 6ghz backhaul, though that hurts the range.
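The "block the IoT network from the internet" part can be a single forwarding rule on a Linux-based router. A minimal sketch with nftables, where br-iot and wan are assumed interface names:

```
# create a table/chain if you don't already have one, then drop IoT -> WAN traffic
nft add table inet fw
nft add chain inet fw forward '{ type filter hook forward priority 0 ; policy accept ; }'
nft add rule inet fw forward iifname "br-iot" oifname "wan" drop
```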
> Why do macs show all the neighbor's televisions in the airplay menu?
That's a feature that can be configured on the TV/AirPlay receiver. They've configured it to allow streaming from "Anyone", which is probably the default. They could disable this setting and limit it to only clients on their home network. And you can't actually stream without entering a confirmation code shown on the TV.
When you stream to an AirPlay device this way it sets up an adhoc device-to-device wireless connection which usually performs much better that using a wifi network/router and is why screen sharing can be so snappy. Part of the 'Apple Wireless Direct Link' proprietary secret sauce also used by AirDrop. You can sniff the awdl0 or llw0 interfaces to see the traffic. Open AirDrop and then run `ping6 ff02::1%awdl0` to see all the Apple devices your Mac is in contact with (not necessarily on your wifi network)
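If you want to poke at it yourself on a Mac, roughly (open AirDrop first so awdl0 comes up):

```
# see which nearby Apple devices answer on the AWDL interface
ping6 -c 3 ff02::1%awdl0
# then watch the device-to-device traffic itself
sudo tcpdump -ni awdl0
```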
> and you can't really turn off wifi on a mac without turning off sip.
Just `sudo ifconfig en0 down` doesn't work? You can also do `networksetup -setairportpower en0 off`. Never had issues turning off wifi.
> It's a much better answer to hook up everything on Ethernet that you possibly can than it is to follow the more traveled route of more channels and more congestion with mesh Wi-Fi.
Certainly this is the brute-force way to do it and can work if you can run enough UTP everywhere. As a counterexample, I went all-in on WiFi and have 5 access points with dedicated backhauls. This is in SF too, so neighbors are right up against us. I have ~60 devices on the WiFi and have no issues, with fast roaming handoff, low jitter, and ~500Mbit up/down. I built this on UniFi, but I suspect Eero PoE gear could get you pretty close too, given how well even their mesh backhaul gear performs.
I'm not super familiar with SF construction materials but I wonder if that plays a part in it too? If your neighbors are separated by concrete walls then you're probably getting less interference from them than you'd think and your mesh might actually work better(?)... but what do I know since I'm no networking engineer.
Old Victorians in SF will sometimes have lath and plaster walls (the 'wet wall' that drywall replaced). Lath and plaster walls often have chicken wire in them that degrades wifi more than regular drywall will.
I agree, but as a quite heavy user household, switching to Unifi 10y ago has fixed our issues, and they haven’t returned. With most devices on WiFi, on 3 APs.
For people who don't or can't have Ethernet wiring, I've had great success with Ethernet over coax. My ancient coax wiring gets 800mbps back to my router with a screenbeam MoCA 2.5
MoCA is truly amazing. I'm getting full symmetrical 940 Mbps speeds simultaneously over upload and download using RG59 cable with a pair of ECB6250. It helps that our house is fairly small, as the high frequencies that MoCA uses get attenuated pretty quickly on RG59 cabling, but even still, I'm impressed by the results.
I wish I could put Ethernet everywhere but I live in a German apartment in a German house and here walls are massive and made out of brick and concrete. Routing cables through this without it being a massive eyesore is pretty hard.
Try Powerline. This €40 device will turn your electrical sockets into a 100-500 Mbps Ethernet cable. Simple and efficient. Just check if the sockets you want to connect are on the same circuit breaker. If yes, chances are really high it would work very well.
I’ve connected a switch and a second access point with mine.
Also I think they work best if there are fewer of them on the same circuit. But not sure. Check first.
Oh, one more idea. You can use existing coax cables (tv cable) via adapters to get 1-2 reliable gbps over cable. For e.g. a switch with an additional access point
Unfortunately, Unifi only supports DFS channels (which is the only real way for 'each device to have its own wifi channel' in a crowded area) on some of their models.
Sometimes DFS certification comes after general device approval, but I'm not aware of any that just flat out don't support it. They supported it 10+ years ago.
Yea I've had all sorts of UniFi gear and have never seen an access point that only works on DFS channels. That'd make no sense and their admin software actively discourages DFS channel selection.
I'd guess OP might be trying to use 160mhz channel width on 5ghz band, which will only work on DFS channels though. I wouldn't recommend 160mhz channel width unless you have a very quiet RF environment and peak speed is very important to you. Also I've found it hard to get clients to actually use the full 160mhz width on a network configured this way.
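One way to check what a client actually negotiated, sketched for a Linux client where wlan0 is an assumed interface name (on macOS, option-clicking the Wi-Fi menu shows the same channel-width info):

```
iw dev wlan0 link
# the "tx bitrate:" / "rx bitrate:" lines include the negotiated width,
# e.g. "... 80MHz ..." even when the AP is configured for 160MHz
```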
I use powerline ethernet adapters to hook up the media center in the living room. They aren't super fast (~100 mbps) but they are so much more consistent than wifi.
> You might think it's convenient to have your TV connect to Netflix via WiFi and it is, but it is going to make everything else that really needs the Wi-Fi slower.
TV streaming seems like a bad example, since it's usually much lower average bandwidth than e.g. a burst of mobile app updates installing with equal priority on the network as soon as a phone is plugged in for charging, or starting a cloud photo backup.
Kind of true, but potentially also untrue. If that TV is running a crappy WiFi chip running an older WiFi standard on the same channel, it'll end up performing worse or not playing as nice with other clients during those bursts of buffering. That'll potentially be seen by other clients as little bursts of jitter.
That's true of any client with older and crappier WiFi chips though, but TVs are such a race to the bottom when it comes to performance in so many other things.
I hear people say this often, but when you look into what they actually mean, it's often a comparison of having a single mediocre ISP CPE in a corner of an apartment, at most with a wireless repeater in another, vs. Ethernet. Of course the wire wins in that comparison.
Now put an access point into every room and wire them to the router, and things start looking very differently.
Where I live we have what seems like an unusual amount of fiber cuts... whenever the cable company or the phone company fiber is cut, at least one of the major wireless networks is offline too; maybe calls work, but data doesn't. They could potentially restore service through wireless backhaul, but they don't. They also rely on utility power and utility power outages longer than about 4 hours mean towers are going to turn off.
yes, and... convenience says 'use WiFi'. No wires! I've said, if it moves - wireless. If it doesn't -- wired. Counterexamples that 'work': AM / FM / TV / Paging big transmitters to simple/cheap receivers. For the 1-way case, that works. But for 2-way....
Ethernet pretty much sucks and has not improved substantially in consumer devices since the previous century. It also has pretty severe idle power consumption consequences for PCs, unless you are an expert who goes around fixing that.
There is not any meaningful sense in which 2.5gb ethernet is "standard". There are no TVs with 2.5gb ethernet ports. Or even 1gb ports. Yet they all have WiFi 5 or better.
2.5GbE only started gaining steam when cheap Realtek chips became available (especially since the Intel chips that were on the market earlier were buggy). Those have been adopted by almost all desktop motherboards now on the market, and most laptops that still have Ethernet. Embedded systems are lagging because they're always behind technologically and because they have longer design cycles, but it's pretty clear that most devices designed in the last year or two are moving beyond 1GbE and 2.5GbE will be the new baseline going forward.
> It is bizarre that they are putting 100mbps Ethernet ports on TVs though.
It's not that bizarre. About the only media one might have access to that is above 100mbps is 4k blu-ray rips which can hit peaks above 100m; but TVs don't really cater to that. They're really trying to be your conduit to commercial streaming services which do not encode at that high of a bitrate (and even if they did, would gracefully degrade to 100Mbps). And then you can save on transformers for the two pairs that are unused for 100base-tx.
> It is bizarre that they are putting 100mbps Ethernet ports on TVs though.
It's a few pennies cheaper and i'm sure they have some data showing 70%+ will just use WiFi. TCL in particular doesn't even have very good/stable drivers for their 10/100 NIC; there's a ton of people on the Home Assistant forums that have noticed that their android powered smart TV will just ... stop working / responding on the network until it's rebooted.
I’m sure you’re right, but the fact that it’s almost certainly literal pennies makes it very lame. Lack of stable drivers is also ridiculous given how long gbps Ethernet has been around.
Ethernet will usually hit hardware limits of your HDD or SSD before it actually maxes out. 1gb ethernet is better than wifi in 99% of cases because wifi in the real world is pretty bad, even with modern standards. Why else do they have to continually revamp the standards to get around congestion and roaming issues? Cuz wifi is garbage in the real world. Ethernet = Very little jitter, latency, or packet loss. Wifi = Tons of jitter, latency and packet loss.
Your take is really weird and doesn't represent the real world. What blog did you read this on and why haven't you bothered to attack that obviously wrong stance?
This is the most ridiculous lie in the thread. An ethernet link that can barely keep up with a $150 SSD costs $1250 per switch port, and needs a $1200 NIC and can go only 3m over copper before you need a $1000+ optic assembly. There is nobody with an ethernet setup in their home that outruns consumer-grade SSDs. "Ethernet is limited by SSDs" is a Charlie's Hoes level of wrong.
But if you actually want your Ethernet to be similar speed to your SSD, you don't need to spend that much. Get some used gear.
32 port 40GbE switch (Dell S6000) $210 used
Dual port 40GbE NIC (Mellanox MCX354A-FCCT) $31 used
40GbE DAC 1 meter new price $22
or 40GbE optics from FS.com (QSFP-SR4-40G) $43 new + MMF fiber cable
Of course, that's probably not going to be very power efficient for home use - 32 port switch and probably only connecting a handful of devices at most.
You still get the best speeds over ethernet today because of how wifi standards are slow walked, both on the router and the device connected with the router. Ethernet standards are slow walked too of course but we are talking slow walking a 2.5g or 10g connection here, even otherwise crappy hardware is likely to have 1g ethernet and it’s been that way for at least 10 or 15 years.
If you want to transfer the contents of your old mac to your new mac, your best options in order of speed are 1) thunderbolt, 2) wifi, and 3) ethernet. You do not, in any sense, get "the best speeds" from ethernet. The market penetration of greater-than-1gb wired networks in consumer devices is practically nothing.
My isp-supplied router had 10gbe on both wan and lan sides. I swapped it for my own, but that is what modern consumer equipment looks like.
You can find a 2 port 10gbe+4 port 2.5gbe switch for just over $30 on Amazon.
If the run isn’t too long this can all run over cat5. Handily beats wifi especially for reliability but Thunderbolt is fastest if you only have 2 machines to link.
I have all 2.5gbit at home with some 10gbit SFP copper connections, it wasn't particularly difficult. The devices with built-in Ethernet ports are all gigabit of course, but the ones with USB-C ports have 2.5gbit adapters.
I could go to 10gbit but the Thunderbolt adapters for those all have fans.
I have a U7 Pro XGS hooked up to a Pro HD 24 POE switch (all 2.5gb ports or faster).
The only way I've managed to convince any Wifi 7 client to exceed 1gbps is by freshly connecting to it over 6ghz while standing physically within arm's reach of the AP. That's it. That's the only time it can exceed 1gbps.
In all other scenarios it's well under 1gbps, often more like 300-500mbps. Which is great for wifi, but still quite below the cheapest ethernet ports around. And 6ghz client behavior across OS's (Windows, MacOS, iOS, and Android) is so bad at roaming that I actually end up just disabling it entirely. The only thing it can do is generate bragging rights screenshots, in actual use it's basically entirely DOA.
And that's ignoring that ~$200 N150 NUCs come with 2.5gbps ethernet now.
I’m with you on 6ghz wifi disappointment. My phone does well with it since it supports MLO but my macbook will refuse to roam away from 6ghz until it’s close to unusable.
This is so insanely wrong that I almost feel like we're being trolled. Yes, a direct Thunderbolt connection would be best. Failing that, a guaranteed 1Gb Ethernet connection, which is ubiquitous and dirt cheap, and has latency measured in microseconds, is going to wipe the floor with real-world Wi-Fi 7 speeds. And for what you'd pay for end-to-end Wi-Fi 7 compatible gear, you could be using 10Gb Ethernet, which is in a different league of stability and actual observed throughput compared to anything wireless.
I have Firewalla Wi-Fi 7 APs connected via 10Gb Ethernet to my router. They're brilliant, very expensive, very high quality devices. I use them only for devices which I can't hardwired, because even 1Gb Ethernet smokes them in actual real-world use.
Sure have, within the last 2 weeks when I helped a coworker migrate to a new machine! Both were November 2024 MacBook Pros, so Apple's current top-of-the-line laptops.
Running over Wi-Fi dragged on interminably and we gave up several hours in. When we scrounged up a couple of USB Ethernet dongles and started over, it took about an hour.
So yeah, my own personal experience confirms exactly what I'd expect: Wi-Fi is slow and high-latency compared to Ethernet, and you should always use hardwired connections when you care about stability and performance more than portability. By all means, use Wi-Fi for routine laptop mobility. If you have the option, definitely run a cable to your stationary desktop computers, game consoles, set-top boxes, NASes, and everything else within reach of a switch.
If you’re the kind of person who wants better than gigabit Ethernet, it’s very available. 2.5Gbe is just a USB adapter away. Mac Studio comes with 10GbE. Unifi networking gives you managed multi-gig and plenty of others do unmanaged multigig at affordable prices. Piles of consumer NAS support multigig.
I think this market is driven by content creators. Lots of prosumers shoot terabytes of video on a weekly basis. Local NAS are essential and multi-gig local networks dramatically improve the editing experience.
or a single shitty wifi chipset in your network thanks to a cheap iot device.
Wifi is garbage. This person has no idea what they're talking about. It sounds like they read a blog post like 5 years ago and stuck with it cuz it's an edgy take.
Yes, me and the other literally billions of people who do not use wired Ethernet to their TV are just parroting an old blog. The OP who says Ethernet is an absolute requirement for Netflix is clearly correct. You sure got me.
Yes thunderbolt is best but look at costs. Apple is selling a 4ft cable for $130. I have a ton of random cat 5e and cat 6 and they go for a couple dollars.
Now lets talk about my actual “old mac” and “new mac” Mid 2012 mbp and my m3 pro. The 2012 only can do 802.11n so not gigabit speeds. It does have a gigabit ethernet however.
Even if I was going m3 pro to m3 pro, I’m only getting full wifi 6e speeds if I actually have a router that makes use of 160MHz channels. My router can’t. It is hard to even glean from router offerings which are offering proper wifi 6, because there are like dozens of skus sold, with different stores getting slightly different skus from the same brand. Afaik my mac does not support 160MHz wifi 6 either.
A 4ft USB 4 cable is $30. That's more bandwidth per dollar than an Ethernet cable. Thunderbolt cables aren't cost prohibitive any more (though the devices at either end are still very expensive).
This is a clear case of "you get what you measure". Measuring speed is so easy, everybody can do it, and do it all the time. No wonder that providers optimize for speed. But it also works the other way around. We have developed a focus on speed as it was the only thing that mattered.
I have worked with networks for many years, and users blaming all sorts of issues on the network is a classic, so of course in their minds they need more speed and more bandwidth. But improvements only makes sense up to some point. After that it is just psychological.
I don't get what the point of the article is. Is the takeaway that I should lower the channel width in my home? How many WAPs would I need to be running for that to matter? I'd argue it's more important to get everyone to turn down TX power in cases where your neighbors in an apartment building are conflicting. And that's never going to happen, so just conform to the legal limit and your SNR should be fine. Anything that needs to be high performance shouldn't be on wifi anyway.
If you want to spend a really long time optimizing your wifi, this is the resource: https://www.wiisfi.com/
This sort of thing is definitely in the class of "are you experiencing problems? if not don't worry about it".
If you are experiencing problems, this might give you an angle to think about that you hadn't otherwise, if you just naively assume Wifi is as good as a dedicated wire. Modern Wifi has an awful lot of resources, though. I only notice degradation of any kind when I have one computer doing a full-speed transfer for quite a while to another, but that's a pretty exceptional case and not one I'm going to run any more wires around for something that happens less than once a month.
2.4GHz wifi at 40MHz squats on literally half of the usable channels for your speed improvement; very likely you now get 100mbps. If you just disabled 2.4GHz and forced 5GHz you would get the exact same improvement and wouldn't be polluting half of the available frequencies.
Add another idiot sitting on channel 8 or 9 and the other half of the bandwidth is also polluted, now even your mediocre IoT devices that cannot be on 5GHz are going to struggle for signal and instead of the theoretical 70/70mbps you could get off a well placed 20MHz channel you are lucky to get 30.
Add another 4 people and you cannot make a FaceTime call without disabling wifi or forcing 5GHz.
The takeaway is that you'll probably experience more reliable wifi if you turn your 5ghz channel width down to 40mhz and especially make sure your 2.4ghz width is 20mhz not 40mhz. As noted, you can't do anything about the neighbors, but making these changes can improve your reliability. And I think the larger takeaway is that if manufacturers just defaulted to 40mhz 5ghz width, like enterprise equipment does, wifi would be better for everyone. But if your wifi works great then no need.
Also that's an amazing resource, thanks for linking.
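On an OpenWrt-style router the change is a couple of lines; the radio names and the htmode values your driver supports are assumptions:

```
uci set wireless.radio0.htmode='HT20'    # 2.4 GHz: 20 MHz wide
uci set wireless.radio1.htmode='VHT40'   # 5 GHz: 40 MHz wide
uci commit wireless && wifi reload
```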
I lose wifi signal consistently in my bedroom on my 80MHz wide 5GHz wifi.
I just now reduced it to 20MHz, and though there is a (slight) perceptible drop in latency, those 5 extra dB I gained from Signal/Noise have given me wifi in the bedroom again
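That's about what you'd expect just from the narrower noise bandwidth:

```
10 * log10(80 MHz / 20 MHz) ≈ 6 dB lower noise floor, i.e. roughly 6 dB better SNR
```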
> Many ISPs, device manufacturers, and consumers automate periodic, high-intensity speed tests that negatively impact the consumer internet experience as demonstrated.
Is that actually a thing? Why would any ISP intentionally add unnecessary load to their network?
For what it's worth, I think most ISPs that do this will host their speed test in-network so their speeds are inflated. This benefits both the ISP and whoever is in charge of the speed test(like speedtest.net).
So they're not really increasing their network load a measurable amount since the data never actually leaves their internal network. My ISP's network admin explained this to me one day when I asked about it. He said they don't really notice any difference.
I've only met around 10 people that even know what a speed test is. I'm not sure how most consumers would even go about automating one. What would be the first step?
Their `networkQuality` implementation is on the CLI for any Mac recently updated. It's pretty interesting and I've found it to be very good at predicting which networks will be theoretically fast, but feel unreliable and laggy, and which ones will feel snappy and fast. It measures Round-trips Per Minute under idle and load condition. It's a much better predictor of how fast casual browsing will be than a speed test.
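If you want to try it, a quick sketch (flag names as of recent macOS releases; see `man networkQuality` on yours):

```
networkQuality        # parallel up/down test, reports RPM under load
networkQuality -v     # verbose output, including idle vs. working latency
networkQuality -s     # run upload and download sequentially instead of in parallel
```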
Moving into a house for the first time since before college this year, I only just learned about Wi-Fi channel width this week. Apparently the mesh routers I ended up picking several months ago had a default width of 160 MHz, but only go as low as 80 MHz, so that's what I ended up switching to. Anecdotally it has seemed to be somewhat more reliable, but maybe in the long run finding something that can go even lower might be worth it because we do still notice some stutter occasionally that would be nice to reduce even if the theoretical max throughout was a bit lower.
> Because consumers have been conditioned to understand only raw speed as a metric of Wi-Fi quality and not more important indicators of internet experience such as responsiveness and reliability.
While the two are not the same, they are not exactly separable.
You will not get good Internet speed out of a flaky network, because the interrupted flow of acknowledgements, and the need to retransmit lost segments, will not only itself impact the performance directly, but also trigger congestion-reducing algorithms.
Most users are not aware whether they are getting good speed most of the time, if they are only browsing the web, because of the latencies of the load times of complex pages. Individual video streams are not enough to stress the system either. You have to be running downloads (e.g. torrents) to have a better sense of that.
The flakiness of web page loads and insufficient load caused by streams can conceal both: some good amount of unreliability and poor throughput.
I'm surprised that, at least for businesses, small cell wifi is not a thing. For example, if you walk into an office building, everyone seems to have a physical phone on their desk that is hard wired. What if that were also a small cell AP, like a personal AP? Using automation, central provisioning, and analytics could make this doable. Yeah, handoff and roaming has to be seamless and quick, but it doesn't feel that hard, no? If so, this would be pretty neat and would solve the contention issue in the air.
For me the only thing that really matters, and globally sucks with WiFi is roaming.
My house is old and has stone walls up to 120cm thick, including the inner walls, so I have to have access points in nearly all rooms.
I never had a truly seamless roaming experience. Today, I have TP-Link Omada and it works better than previous solutions, but it is still not as good as DECT phones, for example.
For example if I watch a twitch stream in my room and go to the kitchen grab something with my tablet or my phone, I have a freeze about 30% of the times, but not very long. Before I sometime had to turn the wifi off and on on my device for it to roam.
I followed all Omada and general WiFi best practices I could find about frequency, overlap... But it is still not fully seamless, yes.
DECT phones run on the 1.9 GHz spectrum which doesn't get absorbed by water like 2.4 GHz, and will penetrate through many other materials far better than higher frequencies.
Most people place wifi repeaters incorrectly, or invest in crappy repeater / mesh devices that do not have multiple radios. A Wifi repeater or mesh device with a single radio by definition cuts your throughput in half for every hop.
I run an ISP. Customers always cheap out when it comes to their in home wireless networks while failing to understand the consequences of their choices (even when carefully explained to them).
Eh, multiple APs and roaming being awful isn't just a matter of shitty placement and bad wireless backhaul, it's also client side software. I have two APs on opposite ends of my house and my phone tries to hang on to whatever AP its connected to far longer than it should when moving around the house. My APs are placed correctly, and support 802.11r, yet my phone and most other devices don't try to roam until far, far past the point they should have switched to the other AP.
The design of roaming being largely client initiated means roaming doesn't really work how people intuitively think it should, because at least every device I've ever seen seems to be programmed to aggressively cling to a single AP.
Have you tried turning down the tx power on your APs? It will help your devices decide to roam, and it may not actually reduce your effective range, because often times effective range is limited by tx power on the client more than the AP.
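Something like this, as a sketch; the interface/radio names and the exact power level are assumptions, and some drivers ignore manual txpower:

```
# generic Linux: cap this radio at 15 dBm (iw takes mBm, i.e. 1/100 dBm)
iw dev wlan0 set txpower fixed 1500
# OpenWrt-style equivalent
uci set wireless.radio1.txpower='15' && uci commit wireless && wifi reload
```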
I wish the Wi-Fi developers would put some serious effort into improving range and contention. Forget 40 MHz vs 80 MHz -- how about some 5 MHz channels? How about some modulations designed to work at low received power and/or low SNR? How about improving the stack to get better performance when a device has mediocre signal quality to multiple APs at the same time?
There are these cool new features like MLO, but maybe devices could mostly use narrow channels and only use more RF bandwidth when they actually need it.
IEEE 802.11ax (WiFi 6): traditional channels can be subdivided into resource units ranging from 26 tones up to 2x996 tones according to need (effectively a 2 MHz channel at the low end). This means multiple devices can be transmitted to within the same transmit opportunity.
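The "2 MHz" figure falls out of the 802.11ax subcarrier spacing of 78.125 kHz:

```
   26 tones x 78.125 kHz ≈   2.03 MHz   (smallest resource unit)
2x996 tones x 78.125 kHz ≈ 155.6  MHz   (largest, spanning a 160 MHz channel)
```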
> How about some modulations designed to work at low received power and/or low SNR?
And do those ax resource units work, in practice, in a way that allows two APs that are moderately close to each other to coexist efficiently within the same 20MHz channel? Preferably even if they’re from different vendors and even if the users are not experts?
> 802.11(og), 1 & 2 Mbps
I’m a little vague on the details, but those are rather old and I don’t think there is anything that low-rate in the current MCS table. Do they actually work well? Do they support modern advancements like LDPC?
> > 802.11(og), 1 & 2 Mbps
> I’m a little vague on the details, but those are rather old
They're the original, phase shift keyed modulations.
> Do they actually work well?
They work great, if your problem is SNR, and if you value range more than data rate.
They are, of course, horribly spectrally inefficient which means they work better than OFDM near the guard bands. OFDM has a much flatter power level over frequency, so you have to limit TX power whenever the shoulder of the signal nears the guard band. IIRC, some standard supports individually adjusting the resource unit transmit power which would solve this as well. PSK modulation solves this somewhat accidentally. Guardbands especially suck since there's only 3 non overlapping 2.4GHz channels.
> I don’t think there is anything that low-rate in the current MCS table.
> Do they support modern advancements like LDPC?
Dunno! Generally though, each MCS index will specify both a modulation mechanism (BPSK, OFDM, ...) and a coding rate. All of the newer specs allow you to go almost as slow if you want to, usually 6-7 Mbps-ish, and this is done with the same modulation scheme, just a bit faster and with newer coding.
> do those ax resource units work, in practice, in a way that allows two APs that are moderately close to each other to coexist efficiently within the same 20MHz channel?
Yes and no. It doesn't improve RF coexistence directly. But in many cases allows much more efficient use of the available airtime. Before every outgoing packet to a different station consumed a guard interval and the entire channel bandwidth, but now for a single guard interval you can pack as many station's data as will fit.
A household's bandwidth use is quite a bit different from a business's. While a household may have a lot of devices, most of them are doing very little at any given time, but the primary device in use requires the best speed possible. In a business, however, there are a lot of primary devices and not a lot of idle little devices, and as such fairness and reliability dominate the needs, as does getting the frequencies maxed out for coverage and total bandwidth available.
Wifi 8 will probably be another standard homes can skip. Like wifi 6 it is going to bring little that they need to utilise their fibre home connections well across their home.
The thing about speed tests causing a bad experience because they hog airtime felt like a non sequitur (since performing them is rare and manual) until I saw this:
> Many ISPs, device manufacturers, and consumers automate periodic, high-intensity speed tests that negatively impact the consumer internet experience as demonstrated
But there’s no support presented for this claim, and frankly I am skeptical. What WiFi devices are regularly conducting speed tests without being asked?
> What WiFi devices are regularly conducting speed tests without being asked?
ISP provided routers, at least Xfinity does. I've gotten emails from them (before I ripped out their equipment and put my own in) "Great news, you're getting more than your plan's promised speeds" with speedtest results in the email, because they ran speed tests at like 3AM.
I wouldn't be surprised if it's happening often across all the residential ISPs, most likely for marketing purposes.
Pretty sure Verizon does this as well, when I had a tech come out he had access to historical speed test results from my router (I didn't ask any questions about it at the time so don't have any more info).
DOCSIS cable modems perform regularly scheduled tests, but only between devices on the DOCSIS network itself, and those shouldn't affect available bandwidth, because there's far more bandwidth within the DOCSIS network than between the network and the Internet.
> there's far more bandwidth within the DOCSIS network than between the network and the Internet.
Really? DOCSIS has been the bottleneck out of Wi-Fi, DOCSIS, and wider Internet every time I've had the misfortune of having to use it in an apartment.
Especially the tiny uplink frequency slice of DOCSIS 3 and below is pathetic.
Eero does this automatically (mine says it was last run 2 days ago at 5:08am) and I had software on my DD-WRT router (OpenLede) that did it, though obviously not many people (overall) are running that.
I used to run a docker that ran a speed test every hour and graphed the results, but I haven't done that in a while now.
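If anyone wants to roll something like that themselves, here's a minimal sketch, assuming the third-party `speedtest-cli` Python package is installed; the filename and hourly schedule are just examples, and you'd run it from cron rather than a tight loop so the test itself doesn't become the airtime hog the article complains about:

```python
# Minimal periodic speed-test logger. A sketch, assuming the third-party
# `speedtest-cli` package (pip install speedtest-cli) is available.
# Schedule it hourly via cron instead of looping in-process.
import csv
import datetime

import speedtest

def run_once(logfile: str = "speedtest_log.csv") -> None:
    st = speedtest.Speedtest()
    st.get_best_server()
    down_mbps = st.download() / 1e6   # library reports bits/s
    up_mbps = st.upload() / 1e6
    ping_ms = st.results.ping
    with open(logfile, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.datetime.now().isoformat(),
            f"{down_mbps:.1f}", f"{up_mbps:.1f}", f"{ping_ms:.1f}",
        ])

if __name__ == "__main__":
    run_once()
```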
I think Roku devices might. There's a network speed indicator in the settings and I think it had values before I explicitly ran a test. My Rokus are all wired, because I'm civilized, and the test interval is very short, so that ends my investigation.
At least in my UniFi instance, this is only done when manually triggered, but I seem to recall a setting where it could be automatically updated daily.
Honestly, what's unsaid in a lot of this is that it would be really nice if there were more and wider ISM bands. So much makes use of 900 MHz, 2.4 GHz, and 5 GHz in novel and innovative ways that if the government and FCC actually wanted to spark innovation, including better wifi performance, they'd stop letting telcos and other questionable interests hoard spectrum and release it as ISM (and no, they shouldn't steal from ham bands to make ISM bands either).
Hardwire everything you can over ethernet to get them off Wifi.
Use a dedicated 2.4ghz AP for all IoT devices. Firewall this network and only allow the traffic those devices need. This greatly reduces congestion.
Use 5ghz for phones/laptops and keep IoT off that network.
That's really about it. If you have special circumstances there are other solutions, but generally the solution to bad wifi is to not use the wifi, lol.
In the IoT space I really wish an "ESP for power line Ethernet" existed these days.
I have 50+ ESP-based devices on WiFi, and while they're low bandwidth (and on their own SSID), I really wish there were affordable options for "wiring" them for comms. They mostly control mains appliances already, but meeting the rules and considerations for mixing data and mains in one package is prohibitively expensive.
The point isn't wifi contention per se (it's working fine) - it's that having home automation depend on wireless signals at all is both a vulnerability, and feels silly when all those devices have hard wired power.
I live alone, and just counted, I have 10 in regular use. A few more that can connect to WiFi but aren’t (why would I want my tower fans on the internet, anyway?)
I had probably 20 prior to swapping out some smart light bulbs and switches for Zigbee.
34 devices connected to my router at the moment, 8 wired and 26 wifi. About 8 of the wifi devices are phones, tablets, and laptops; the rest are various iot things: locks, plugs, alarm, thermostat, water heater, doorbell, etc.
It is pretty easy to get there when everyone has a phone, a laptop, and there are a few shared tablets around. Add work + personal machines and it goes up a bit more.
Add a few wifi security cameras and other IoT devices and 30+ is probably pretty common.
I got 28 online right now according to my Eero. 3 people, with smartphones and laptops. Several game consoles, a few Apple TVs and music streaming devices, Ring camera, Zwave Hub, printer, washing machine, garage opener, Ring doorbell and an assortment of Echo dots.
Wireless temperature monitor
Sync module for some Blink cameras
2 smart plugs
Roomba
5 smart lights
RPi 3
3 of the smart lights I currently don't need and so aren't actually connected. That leaves 8 connected 2.4 GHz devices.
On 5 GHz I've got 16 devices:
Amazon Fire Stick
iPad
Printer
Echo Show
Apple Watch
Surface Pro 4
iMac
Nintendo Switch
EV charger
Mac Studio
A smart plug
Google Home Mini
Echo Dot
RPi 4
Kindle
iPhone
The iMac and the Surface Pro 4 are almost never turned on, and the printer is off most of the time as well. That leaves 13 regularly connected 5 GHz devices.
That's a total of 21 devices usually connected on my WiFi, right what the article says is average. :-)
Smartphone, laptop, tablet, watch - that's 4 already. And this isn't just counting personal devices. Include TV, streaming stick, game console, printers, bulbs, plugs, speakers, doorbell, security cameras, thermostat and you'll hit that number pretty quick.
There are 16 devices on my WiFi right now and I would've thought I was above average. I have a bunch of weird stuff like 3 Raspberry Pis that most households would not have, but I don't have most of the stuff you listed.
I guess I am less "connected" than the average American. Can't say I feel like I am missing out, though.
Most of your mobile devices are doing background tasks. It’s not typically high bandwidth stuff, but they are connected even when you aren’t using them.
Doesn't seem unreasonable. Look at your router. I have 17 and I would say we're a totally normal household - the kids don't even have phones yet.
We have 2 phones, a tablet for the kids, a couple of Google homes, a Chromecast, 2 yoto players, a printer, a smart TV, 2 laptops, a raspberry pi, a solar power Inverter, an Oculus Quest, and a couple of things that have random hostnames.
There are times I do envy people living in stick houses with hollow walls.
I live in a brick house where only half of the walls are hollow. Bringing Ethernet wires to a few critical areas and putting small surface-mount RJ-45 sockets was not that hard.
Of course, some thin raceways can be seen somewhere along the baseboard. It does not look terrible, and is barely noticeable.
Fibre is good for getting to hard-to-reach places.
But the slope is slippery. If you’re doing fibre, you might as well do 10gbe.
Stick houses with hollow walls are cheaper to build (assuming cheap wood) and cheaper to work on. Probably cheaper to maintain too, but not as durable, so it might work out... Otoh, durable isn't great when housing trends have moved on.
Much more durable in an earthquake though, which is important in places like the US where half the country is a serious seismic hazard zone. In many locales only wood or steel framing is allowed because historically stone and concrete construction collapsed due to the strength of the earthquakes.
> not as durable
You clearly don't live in an earthquake-prone area.
I do. But given how cheapskate New Zealand is, I’m 100% sure that we would build in stone and brick if it was cheaper.
Until it gets cold outside and you need to heat them, or cool them, obviously.
You can lay your own cables, either to the next wall socket or directly to a switch. Flat ethernet cables can be very helpful for hiding and for crossing doorways. Generous "unnecessary" wire length helps with keeping them out of sight.
> The house I live in was built with ethernet, but of the fourteen outlets the builders saw fit to include, not one is located where we can make use of it.
I had a similar situation a few years back. It was a rental so I didn't have access to the attic let alone permission to do my own drops. It'll depend a _lot_ on your exact setup, but we had reasonably good results with some ethernet-over-power adapters.
Ethernet over powerline is a very YMMV situation. Occasionally it works great for people, but more often than not the performance is poor and/or unreliable, especially in countries with split-phase 120/240 volt power (where good performance relies on choosing outlets with hots on the same side of the center-tapped neutral). The people who most commonly share success stories with powerline Ethernet are residents of the UK, where houses only have 2 wires coming in from the pole and there's often a ring main system where an entire floor of a house will be on one circuit.
A better solution is repurposing unused 75Ω coaxial cable with MoCA 2.5 adapters, which will actually give you 1+ Gbps symmetrical. The latency is a very consistent 3-4ms, which is negligible. I use Screenbeam (formerly Actiontek) ECB6250 adapters, though they now make a new model, ECB7250, which is identical to the ECB6250 except with 2.5GBASE-T ports instead of 1000BASE-T.
> A better solution is repurposing unused 75Ω coaxial cable with MoCA 2.5 adapters
I'll second this. MoCA works. You can get MoCA adapters off Ebay or whatnot for cheap: look for Frontier branded FCA252. ~90 MBps with a 1000BASE-T switch in the loop. I see ~3 ms of added latency. I've made point-to-point links exclusively, as opposed to using splitters and putting >2 MoCA adapters on shared medium, but that is supported as well.
That was my experience too. The experience with powerline ethernet adapters was unbearable on a daily basis.
We had an unused coax (which we disconnected from the outside world) and used MoCA adapters (actiontek) and it's been consistently great/stable. No issues ever... for years.
We have them at home as well and they really suck. They lose connection every 20ish minutes at best, and take about 5 to reconnect. Makes Zoom meetings impossible, among other things.
I used those during covid to get a reliable connection for video calls and it was a huge step up over wifi. The bandwidth was like 1/10th of actual gige, so I got a wire pulled to my office when I went to fibre but there’s no question in my mind that decent powerline adaptors are the winner for connection stability.
I’ve used Ethernet over coax in my current apartment.
It’s worked well!
You do need to be a bit careful as coax signal can be shared with neighbors and others sometimes.
You can isolate your ethernet over coax from your neighbor with a MoCA POE "point of entry" filter which blocks the frequencies used by MoCA.
You can buy them online for around $10 and they install without tools.
Besides neighbors, you may also need a POE filter if you have certain types of cable modem.
For PoE you want two networks for the best performance. One for each phase of your mains.
In general they do suck, but they can be pretty decent if you stick them all on one phase, even better if all on the same breaker.
Powerline Ethernet != PoE (power over Ethernet)
Yes, no idea what I was thinking when I typed that. I've used both extensively, in fact this message was sent over a PoE enabled WiFi AP.
Does this advice still hold true for Internet that is provided through power sockets in the house?
It depends on your wiring but I've had pretty good success with AV2000 powerline ethernet. I get about 400Mbps and a reliable 2ms ping which is good enough for gaming and streaming from my media center.
The endpoint in my living room also has a wifi AP so signal is pretty good for laptops and whatnot.
In NYC every channel is congested, I can see like 25 access points at any time and half are poorly configured. Any wired medium is better than the air, I could probably propagate a signal through the drywall that's more reliable than wifi here.
So having something I can just plug into the wall is pretty nice compared to running cables even if it's a fraction of gigE standards.
And the big one I want to point out, is that this AI stuff has me downloading so many ten gigabyte model files to run them locally that I'm really feeling the lack of speed that my setup has.
The reality is that most people only have a single cord coming into the house
So they would have to do quite a bit of work to run cable. The same goes for people living in apartments who can't just start drilling through walls.
I'd say most people use wifi because they have to, not out of pure convenience.
Mu-MIMO would help. The real problem is that energy between a unit and an AP is not in a pencil-thin RF laser-beam --- it is spread out. Other nodes hear that energy, and back off. If we had better control of point-to-point links, then you could have plenty of bandwidth. It's not as if the photon field cannot hold them all. When we broadcast in all directions, we waste energy, and we cause unnecessary interference to other receivers.
it was quite a while back but I read some press release about a manufacturer that would make an access point that had mechanically steered directional antennas. Unfortunately I don't think it ever made it to market.
That can help in one direction, but networks are bi-directional.
No matter how fancy and directive the antenna arrangement may be at the access point end, the other devices that use this access point will be using whatever they have for antennas.
The access point may be able to produce and/or receive one or many signals with arbitrarily-aimed, laser-like precision, but the client devices will still tend to radiate mostly-omnidirectionally -- to the access point, to each other, and to the rest of the world around them.
The client devices will still hear each other just fine and will back off when another one nearby is transmitting. The access point cannot help with this, no matter how fanciful it may be.
(Waiting for a clear-enough channel before transmitting is part of the 802.11 specification. That's the Carrier Sense part of CSMA/CA.)
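To make that listen-before-talk behaviour concrete, here's a toy sketch of the idea; it's a heavy simplification of 802.11 DCF with made-up constants, not how any real driver or firmware implements it:

```python
# Toy model of CSMA/CA-style behaviour: a station senses the medium before
# transmitting and, if it keeps finding the air busy, grows its contention
# window. Constants are made up; this only illustrates why nearby clients
# defer to each other no matter how directional the AP's antennas are.
import random

def attempts_to_send(medium_busy, max_attempts: int = 7) -> int:
    """medium_busy() -> bool says whether another station currently holds the air.
    Returns how many sensing attempts it took before the channel was clear."""
    contention_window = 16  # slots; doubles after each busy sensing (capped)
    for attempt in range(1, max_attempts + 1):
        if not medium_busy():
            return attempt  # channel idle: transmit now
        # Channel busy: in real DCF the station would wait a random number of
        # idle slot times drawn from [0, contention_window) before re-sensing.
        contention_window = min(contention_window * 2, 1024)
    return max_attempts

if __name__ == "__main__":
    # Pretend a neighbouring station keeps the air busy 70% of the time.
    neighbour_busy = lambda: random.random() < 0.7
    trials = [attempts_to_send(neighbour_busy) for _ in range(10_000)]
    print("average sensing attempts before getting the channel:",
          round(sum(trials) / len(trials), 2))
```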
We downsized from a house built in 1914 with phone jacks everywhere to a house built in 2007 with coax and ethernet ports in every room, some rooms with two.
At the 1914 house, I used ethernet-over-powerline adapters so I could have a second router running in access point mode. The alternative was punching holes in the outside walls since there was no way to feasibly run cabling inside lath-and-plaster walls.
I don't know how 2025 houses are built but I would be surprised if they didn't have an ethernet jack in every room to a wiring closet of some sort. Not sure about coax.
My son has ethernet in his dorm with an ethernet switch so he can connect his video game consoles and TV. I think that's pretty common.
> I don't know how 2025 houses are built but I would be surprised if they didn't have an ethernet jack in every room to a wiring closet of some sort. Not sure about coax.
Speaking from a US standpoint, it's still not common for ethernet to be deployed in new-construction houses. I'm not sure why. It seems like a no-brainer.
Coax is still usually limited to a couple of jacks -- typically in the living room and master bedroom.
Adding cat5e or cat6 to each room is just a cost. Builders generally compete on cost.
It’s a cost that doesn’t show up on listings. There’s a surprising number of ways new US construction sucks that just comes down to how it can be advertised.
Most people think they can just use WiFi, and most of them are probably right.
> I don't know how 2025 houses are built but I would be surprised if they didn't have an ethernet jack in every room to a wiring closet of some sort. Not sure about coax.
Aye.
Cat5/6/whatever-ish cabling has been both the present and the future for something on the order of 25 years now. It's as much of a no-brainer to build network wiring into a home today as it once was to build telephone and TV wiring into a home. Networking should be part of all new home builds.
And yet: Here in 2025, I'm presently working on a new custom home, wherein we're installing some vaguely-elaborate audio-visual stuff. The company in charge of the LAN/WAN end of things had intended to have the ISP bring fiber WAN into a utility area of the basement (yay fiber!), and put a singular Eeros router/mesh node there, and have that be that.
The rest of the house? More mesh nodes, just wirelessly-connected to each other. No other installed network wires at all -- in a nicely-finished and fairly opulent house that is owned by a very successful local doctor.
They didn't even understand why we were planning to cable up the televisions and other AV gear that would otherwise be scooping up finite wireless bandwidth from their fixed, hard-mounted locations.
In terms of surprise: Nothing surprises me now.
(In terms of cost: We wound up volunteering to run wiring for the mesh nodes. It will cost us ~nothing on the scale that we're operating at, and we're already installing cabling... and not doing it this way just seems so profoundly dumb.)
Powerline Ethernet is a coin toss though. Depending on how many or few shits the last electrician to work on your house gave, it could be great or unusable. Especially if you're in a shared space like an apartment/condo: in theory units are supposed to be sufficiently electrically isolated from each other that powerline is possible; in practice, not so much. I've been in apartments where I plugged in my powerline gear and literally nothing happened: no frames, nothing.
MoCA adapters are an option if you’re already wired for coax
MoCA is how I get Ethernet upstairs. Works great.
Ethernet cables can be as long as 100 meters, long enough to snake around most any apartment. Add a few rugs to cover the spots where they'd be tripping hazards and you're all set.
The one sort of asterisk I'd put there is that ethernet cable damage is a real risk. There are lots of stories of people replacing cables they've used for a while and seeing improvements.
But if you can pull it off (or even better, move your router closest to the most annoying thing and work from there!), excellent
I got good results from running cables around the entire perimeter of a room to avoid crossing doorways. Doesn't work so well on bathrooms though.
Oh, bathrooms are [sometimes] easy.
In an apartment I once had, I ran some cat5-ish cable through the back wall of one closet and into another.
In between those closets was a bathroom, with a bathtub.
I fished the cable through the void of the bathtub's internals.
Spanning a space like this is not too hard to do with a tape measure, some cheap fiberglass rods, a metal coat hanger, and an apt helper.
Or these days, a person can replace the helper by plugging a $20 endoscope camera into their pocket supercomputer. They usually come with a hook that can be attached, or different hooks can be fashioned and taped on. It takes patience, but it can go pretty quickly. In my experience, most of the time is spent just trying to wrap one's brain around working in 3 dimensions while seeing through a 2-dimensional endoscope camera that doesn't know which way is up, which is a bit of a mindfuck at first.
Anyway, just use the camera to grab the rod or the ball of string pushed in with the rod or whatever. Worst-case: If a single tiny thread can make it from A to B, then that thread can pull in a somewhat-larger string, and that string can finally pull in a cable.
(Situations vary, but I never heard a word about these little holes in the closets that I left behind when I moved out, just as I also didn't hear anything about any of the other little holes I'd left from things like hanging up artwork or office garb.)
I’m pretty tech-addicted, but I’ve never felt the need for a hard-wired drop in the bathroom.
I assumed that to get from one side of a doorframe to the other, instead of crossing underneath the door, you go around the perimeter of the room the door belongs to. That seems like a lot of cable just to remove a trip hazard, but I suspect the Wife Approval Factor plays a role.
Well, unless you’re multihomed, you’ll always only have one cable coming in.
It’s what you do with that cable that matters :)
Even the telco provided router/ap combo units usually have a built in switch, so you don’t even need another device in most cases.
A lot has changed in the 25 years since gigabit wired ethernet was rolled out, even while wired ethernet itself stagnated due to greed.
Got powerlines? Well then you can get gbit+ to a few outlets in your house.
Got old CATV cables? Then you can use them at multiple gbit with MoCA.
Got old phone lines? Then it's possible to run ethernet over them with SPE and maybe get a gigabit.
And frankly, just calling someone who wires houses and getting a quote will tell you if it's true. The vast majority of houses aren't that hard, even old ones: attic drops through the walls, cables below in the crawlspace, runs behind the baseboards. Hell, just about every house in the USA had cable/dish at one point, and all they did was nail it to the soffit and punch it right through the walls.
Most people don't need a drop every 6 feet, one near the TV, one in a study, maybe a couple in a closet/ceiling/etc. Then those drops get used to put a little POE 8 port switch in place and drive an AP, TV, whatever.
> Got old phone lines? Then its possible to run ethernet over them with SPE and maybe get a gbit.
Depending on the age of the house, there's a chance the phone lines are 4-pair, and you can probably run 1G on 4-pair wire: if it's 4-pair it's probably at least cat3, and quality cat3 that isn't a max-length run in dense conduit is likely to do gigE just fine. If it's only two-pair, you can still run 100, but you'll want to either use a managed switch that you can force to 100M or find an unmanaged switch that can't do 1G ... Otherwise you're likely to negotiate 1G, which will fail because of the missing pairs.
Gigabit ethernet "requires" 4 pairs of no-less-than cat5. The 100mbps standard that won the race -- 100BASE-TX -- also "requires" no-less-than cat5, but only 2 pairs of it.
Either may "work" with cat3, but that's by no means a certainty. The twists are simply not very twisty with cat3 compared to any of its successors...and this does make a difference.
But at least: If gigabit is flaky over a given span of whatever wire, then the connection can be forced to be not-gigabit by eliminating the brown and blue pairs. Neither end will get stuck trying to make a 1000BASE-T connection with only the orange and green pairs being contiguous. (There's a pin map sketched a bit further down.)
I think I even still have a couple of factory-made cat5-ish patch cords kicking around that feature only 2 pairs; the grey patch cord that came with the OG Xbox is one such contrivance. Putting one of these in at either end brings the link down to no more than 100BASE-TX without any additional work.
(Scare quotes intentional, but it may be worth trying if the wire is already there.
Disclaimers: I've made many thousands of terminations of cat3 -- it's nice and fast to work with using things like 66 blocks. I've also spent waaaaay too much time trying to troubleshoot Ethernet networks that had been made with in-situ wiring that wasn't quite cutting the mustard.)
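For anyone going the two-pair route above, here's a small reference sketch of which T568B pins carry which pair and what each copper speed needs. The pin assignments are the standard T568B ones; the helper function is just an illustration:

```python
# T568B pair-to-pin reference, and which pairs each copper Ethernet speed uses.
# 100BASE-TX needs only the orange and green pairs; 1000BASE-T needs all four,
# so a two-pair cable or termination limits the link to 100 Mbps.
T568B_PAIRS = {
    "orange": (1, 2),  # pair 2 -- transmit for 100BASE-TX
    "green":  (3, 6),  # pair 3 -- receive for 100BASE-TX
    "blue":   (4, 5),  # pair 1 -- used by 1000BASE-T (and PoE "mode B")
    "brown":  (7, 8),  # pair 4 -- used by 1000BASE-T (and PoE "mode B")
}

REQUIRED_PAIRS = {
    "100BASE-TX": {"orange", "green"},
    "1000BASE-T": {"orange", "green", "blue", "brown"},
}

def fastest_possible(connected_pairs: set[str]) -> str:
    """Return the fastest standard whose required pairs are all present."""
    for speed in ("1000BASE-T", "100BASE-TX"):
        if REQUIRED_PAIRS[speed] <= connected_pairs:
            return speed
    return "no link"

print(fastest_possible({"orange", "green"}))                    # 100BASE-TX
print(fastest_possible({"orange", "green", "blue", "brown"}))   # 1000BASE-T
```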
Eh flat Ethernet cables can easily be snaked all over with adhesive clips, and if you color match cable/clips/walls, it doesn’t look bad.
Visiting museum ships also showed me you can sometimes route cables over living and working spaces.
This is what I did. Takes minimal effort and then you never have to worry about it again.
Cables routed on visible walls look absolutely terrible. I wish they didn’t, but they do.
Yes, it’s better if your cable and clips and wall all match, but it still looks bad.
Why? Run them along baseboards in the corners, you'll never notice them (or at least we didn't at our last house, white on white).
What if you ran the cable on the top of the wall and covered it with crown molding?
when done right, raceway along (or even behind the) baseboards works nicely
I wish I could have multiple modems coming into the house using the same provided cable. Why’s that not possible?
When I was younger I went and bought a new modem so I could play halo on my Xbox in another room than where my parents had the original modem. Found out then I’d need to pay for each modem.
If you're not sure what a router is, you should probably look that up, because it sounds like you want another router.
It actually sounds like they just want a switch
If you have coax, look into MoCA. I have one attic device on a MoCA connection and it runs very well.
> the best way to speed up your Wi-Fi is to not use it.
So true!
Other tips I’ve found useful:
Separate 2.4ghz network for only IoT devices. They tend to have terrible WiFi chipsets and use older WiFi standards. Slower speed = more airtime used for the same amount of data (rough numbers in the sketch after this comment). This way the "slow" IoT devices don't interfere with your faster devices, which…
Faster devices such as laptops and phones belong on a 5ghz-only network, if you're able to get enough coverage. Prefer wired backhaul and more access points, as you're better off with a device talking on another channel to an AP closer to it rather than tying up airtime with lots of retries to a far-away AP (which impacts all the other clients also trying to talk to that AP).
WiFi is super solid at our house but it took some tweaking and wiring everything that doesn’t move.
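To put rough numbers on the "slower speed = more airtime" point: the effective rates below are assumptions picked only to show the shape of the problem, not measurements of any particular device.

```python
# Rough illustration of "slower speed = more airtime for the same data".
# Effective throughput figures are assumptions chosen for illustration only.
EFFECTIVE_MBPS = {
    "old IoT plug, 802.11n @ 2.4 GHz, weak signal": 5,
    "mid-range phone, 802.11ac @ 5 GHz":            200,
    "laptop, 802.11ax @ 5 GHz, good signal":        500,
}

MEGABYTE_BITS = 8_000_000

for device, mbps in EFFECTIVE_MBPS.items():
    airtime_ms = MEGABYTE_BITS / (mbps * 1000)  # milliseconds to move 1 MB
    print(f"{device}: ~{airtime_ms:.0f} ms of airtime per megabyte")
```

At an effective 5 Mbps, one megabyte occupies the channel for roughly 1.6 seconds; at 500 Mbps it's about 16 ms. That ratio is the whole argument for keeping the slow clients off the channel your fast clients are using.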
That sounds like a good concept: I'm no stranger to cheap IoT devices chewing up local 2.4GHz bandwidth with chatter, and I have a lot of that going on. But does it matter in 2025?
As a broad concept: Ever since my last Sonos device [that they didn't deliberately brick] died, I don't have any even vaguely bandwidth-intensive devices left in my world that are 2.4GHz-only.
Whatever laptop I have this year prefers the 5GHz network, and has for 20 years. My phone, whatever it is today, does as well and has for 15 years. My CCwGTV Chromecast would also prefer hanging out on the 5GHz network if it weren't plugged into the $12 ethernet switch behind the TV.
Even things like the Google Home Mini speakers that I buy on the used market for $10 or $15 seem to prefer using 5GHz 802.11ac, and do so at a reasonably-quick (read: low-airtime) modulation rate.
The only time I spend with my phone or tablet or whatever on the singular 2.4GHz network I have is when I'm at the edge of what I can reach with my access points -- like, when I visit the neighbors or something, where range is more important than speed and 2.4GHz tends to go a wee bit further.
So the only things I have left in normal use that require a 2.4GHz network are IoT things like smart plugs and light bulbs, and other small stuff like my own little ESP/Pi Zero W projects, which need so little bandwidth that the contention doesn't matter. (I mean... ye olde Wii console and PSP handheld only do 2.4GHz, but they don't have much to talk about on the network anymore and never really did even in the best of times.)
It's difficult to imagine that others' wifi devices aren't in similar form, because there's just not much stuff left out there in the world that's both not IoT and that can't talk at 5GHz.
I can see some merit to having a separate IoT VLAN with its own SSID where that's appropriate (just to prevent their little IoT fingers from ever reaching out to the rest of the stuff on my LAN and discovering how insecure it may be), but that's a side-trip from your suggestion wherein the impetus is just logical isolation -- not spectral isolation.
So yes, of course: Build out a robust wireless network. Make it awesome -- and use it for stuff.
But unless I'm missing something, it sounds like building two separate-but-parallel 2.4GHz networks is just an exercise in solving a problem that hasn't really existed for a number of years.
Absolutely. Your IoT devices should be on their own 2.4ghz network running on a specific channel to isolate them. You should also firewall these devices pretty heavily on their own router.
The only devices on wifi should be cell phones and laptops if they can't be plugged in. Everything else, including TVs, should be ethernet.
When I moved into my last house with roommates their network was gaaarbage cuz everything was running off the same router. The 2.4ghz congestion slowed the 5ghz connections because the router was having to deal with so much 2.4ghz noise.
A good way of thinking about it is that every 2.4ghz device you add onto a network will slow all the other devices by a small amount. This compounds as you add more devices. So those smart lights? Yeaaahh
> When I moved into my last house with roommates their network was gaaarbage cuz everything was running off the same router. The 2.4ghz congestion slowed the 5ghz connections because the router was having to deal with so much 2.4ghz noise.
I don't know why you're saying that; a 2.4 GHz device should not interfere with 5 GHz channels unless it somehow emits harmonics, which would most definitely make it noncompliant with various FCC standards. Or do you mean the modem was so crappy it couldn't process noisy 2.4 GHz channels at the same time as 5 GHz ones? That might be true, but I would assume the modems run completely different DSP chains on different ASICs, so this would be surprising.
> do you mean the modem was so crappy
> but I would assume the modems
Your assumption is sometimes incorrect, as cheap devices can share some of the RF front end. Apparently resource contention can also occur due to CPU, thermal, and memory issues.
https://chatgpt.com/share/68e9d2ee-01a4-8004-b27b-01e9083f7e... (Note that Prof is one "character" I have defined in the prompt customisation)
Or:
https://g.co/gemini/share/1e8d55831809
Ah, splendid. I'm so glad that you have come before me today to present this bot's confounding quandary, and I receive it with tremendous glee.
Please allow me to proffer the following retort: The answer to having a shitty, incapable router is to use one that is not shitty, and is capable.
(The router bits have no clue what RF spectrum is being utilized, and never have. They just deal with packets. The packets are shaped the same fucking way regardless of the physical interface on which they arrive, or are destined for.)
My advice would be NOT to connect any kind of TV to the Internet. They have microphones and sometimes cameras, and are a huge privacy risk.
If one must forgo the comfort of complete isolation from the vulgarities of contemporary media and visual indulgence – an unwise choice, yet one that many appear compelled to make – then prudence demands mitigation rather than surrender.
A measured compromise would entail the meticulous profiling of the TV's network traffic, followed by the imposition of complete blocking at the DNS level (via Pi-hole, NextDNS and the like) first, whilst blacklisting the outgoing CIDRs on the router itself at the same time.
This course of action shall not eliminate the privacy invasion risk in its entirety – for a mere firmware update may well redirect the TV traffic to novel hosts – yet it shall transform a reckless exposure into a calculated and therefore manageable risk.
so does your phone :)
Yes, but unlike TVs, my phone runs free software (Graphene) and is free of the spyware "smart" TVs are known for.
Most people don't run Graphene so point stands.
Most people don't know that Big Tech is extracting data from them on a massive scale. It's up to us, the "tech people," to educate the people and show them alternatives like Graphene. As for the TV, my advice is not to connect it to the internet. If you need to stream something, hook up a laptop or dedicated device to the TV.
This is where regulation comes in. For the TV makers. Things should be secure by default and come with fines if they aren't.
As for the extracting of data, yes that happens on a massive scale. In free products that no one is forced to use. And I would argue that, by now, almost everyone should know that comes at a price, it's just not monetary to the user. At that point it's a choice people make and should be allowed to make.
Solid idea and something I should work towards. We have Ethernet drops in every room but you’re right about IoT devices. Now I have some more planning to do.
An idle Wi-Fi client with no traffic should have a very minimal effect on your network's quality. The TV is only going to slow things down if it's actually using the network and downloading/uploading, which, regrettably, is a problem with smart TVs. But there's no reason to limit the number of idle clients on a Wi-Fi network, assuming your gateway can handle it. The challenge, though, is that in the real world many devices that should be idle aren't.
For my IoT network I just block most every device's access to the internet. That cuts down on a lot of their background chatter and gives me some minor protection.
Also honestly, I feel the majority of wifi problems could be fixed by having proper coverage (more access points), using hardwired access points (no meshing), and getting better equipment. I like Ubiquiti/Unifi stuff but other good options out there. Avoid TP-Link and anything provided by an ISP. If you do go meshing, insist on a 6ghz backhaul, though that hurts the range.
> not use it.
A few things come to mind...
- You can buy ethernet adapters... for iPhone/ipad/etc. Operations are so much faster, especially large downloads like offline maps.
- many consumer devices suck wrt wifi. For example, there seem to be ZERO soundbars with wired subwoofers. They all incorporate wifi.
- also, if anyone has lived in a really dense urban environment, wifi is a liability in just about every way.
- What's worse is how promiscuous many devices are. Why do macs show all the neighbors' televisions in the airplay menu?
- and you can't really turn off wifi on a mac without turning off sip. (in settings, wifi OFF toggle is stuck on but greyed out)
> Why do macs show all the neighbor's televisions in the airplay menu?
That's a feature that can be configured on the TV/AirPlay receiver. They've configured it to allow streaming from "Anyone", which is probably the default. They could disable this setting and limit it to only clients on their home network. And you can't actually stream without entering a confirmation code shown on the TV.
When you stream to an AirPlay device this way it sets up an ad hoc device-to-device wireless connection, which usually performs much better than using a wifi network/router and is why screen sharing can be so snappy. Part of the 'Apple Wireless Direct Link' proprietary secret sauce also used by AirDrop. You can sniff the awdl0 or llw0 interfaces to see the traffic. Open AirDrop and then run `ping6 ff02::1%awdl0` to see all the Apple devices your Mac is in contact with (not necessarily on your wifi network).
> and you can't really turn off wifi on a mac without turning off sip.
Just `sudo ifconfig en0 down` doesn't work? You can also do `networksetup -setairportpower en0 off`. Never had issues turning off wifi.
> many consumer devices suck wrt to wifi. For example, there seem to me ZERO soundbars with wired subwoofers. They all incorporate wifi.
Sonos has its issues, but I do need to point out that their subs (and the rest) all have Ethernet ports in addition to WiFi.
> It's a much better answer to hook up everything on Ethernet that you possibly can than it is to follow the more traveled route of more channels and more congestion with mesh Wi-Fi.
Certainly this is the brute-force way to do it and can work if you can run enough UTP everywhere. As a counterexample, I went all-in on WiFi and have 5 access points with dedicated backhauls. This is in SF too, so neighbors are right up against us. I have ~60 devices on the WiFi and have no issues, with fast roaming handoff, low jitter, and ~500Mbit up/down. I built this on UniFi, but I suspect Eero PoE gear could get you pretty close too, given how well even their mesh backhaul gear performs.
I'm not super familiar with SF construction materials but I wonder if that plays a part in it too? If your neighbors are separated by concrete walls then you're probably getting less interference from them than you'd think and your mesh might actually work better(?)... but what do I know since I'm no networking engineer.
It's all wood construction, originally stick victorians with 2x4 exterior walls. My "loudest" neighbor is being picked up on 80MHz at -47 dBm.
Old Victorians in SF will sometimes have lath and plaster walls (the 'wet wall' that drywall replaced). Lath and plaster walls often have chicken wire in them that degrades wifi more than regular drywall will.
Man, at times in my life I would've killed to get a -47 dBm or better signal.
FWIW you don't need PoE Eero devices for a wired backhaul; all of their devices support it.
lol 5 APs for ~60 devices is so wasteful and just throwing money at the problem.
I'm glad it works but lol that's just hilarious.
you have five access points and 60 devices? How many square feet are you trying to cover?
He said SF with neighbors so I'm assuming condo/apartment. Probably less than 2000sq feet would be my guess.
5 aps for 60 devices is hilarious. I have over 120 devices running off 2 APs without issue. lol
You have 120 wifi-connected devices at home?? What kind of devices? 100 smart light bulbs or something like that?
I'm just curious – I'm a relatively techy person and I have maybe 15 devices on my whole home network.
I agree, but as a quite heavy user household, switching to Unifi 10y ago has fixed our issues, and they haven’t returned. With most devices on WiFi, on 3 APs.
For people who don't or can't have Ethernet wiring, I've had great success with Ethernet over coax. My ancient coax wiring gets 800mbps back to my router with a screenbeam MoCA 2.5
MoCA is truly amazing. I'm getting full symmetrical 940 Mbps speeds simultaneously over upload and download using RG59 cable with a pair of ECB6250. It helps that our house is fairly small, as the high frequencies that MoCA uses get attenuated pretty quickly on RG59 cabling, but even still, I'm impressed by the results.
Yeah, we built our home and i made sure whenever there would be devices on the wall there was an ethernet cable there, best decision ever.
I wish I could put Ethernet everywhere but I live in a German apartment in a German house and here walls are massive and made out of brick and concrete. Routing cables through this without it being a massive eyesore is pretty hard.
Try powerline. This €40 device will turn your electrical sockets into a 100-500 Mbps Ethernet link. Simple and efficient. Just check whether the sockets you want to connect are on the same circuit breaker. If yes, chances are really high it will work very well.
I’ve connected a switch and a second access point with mine.
Also, I think they work best if there are fewer of them on the same circuit. But I'm not sure. Check first.
I tried that but the performance was worse than wifi.
G.hn powerline devices are better than the ancient HomePlug AV2 ones. Which devices did you try?
Oh, one more idea. You can use existing coax cables (tv cable) via adapters to get 1-2 reliable gbps over cable. For e.g. a switch with an additional access point
Does it have any wiring? I've lived in old homes with coax for cable and those can be used with moca adapters to do ethernet. They can do 2.5gbps too.
Unfortunately, Unifi only supports DFS channels (which are the only real way for 'each device to have its own wifi channel' in a crowded area) on some of their models.
What unifi AP doesn't support DFS?
Sometimes DFS certification comes after general device approval, but I'm not aware of any that just flat out doesn't support it. It supported it 10+ years ago.
Yea I've had all sorts of UniFi gear and have never seen an access point that only works on DFS channels. That'd make no sense and their admin software actively discourages DFS channel selection.
I'd guess OP might be trying to use 160mhz channel width on 5ghz band, which will only work on DFS channels though. I wouldn't recommend 160mhz channel width unless you have a very quiet RF environment and peak speed is very important to you. Also I've found it hard to get clients to actually use the full 160mhz width on a network configured this way.
And put all IoT devices on a protocol such as Z-Wave.
I use powerline ethernet adapters to hook up the media center in the living room. They aren't super fast (~100 mbps) but they are so much more consistent than wifi.
> You might think it's convenient to have your TV connect to Netflix via WiFi and it is, but it is going to make everything else that really needs the Wi-Fi slower.
TV streaming seems like a bad example, since it's usually much lower average bandwidth than, e.g., a burst of mobile app updates installing with equal priority on the network as soon as a phone is plugged in for charging, or a cloud photo backup starting up.
Kind of true, but potentially also untrue. If that TV is running a crappy WiFi chip running an older WiFi standard on the same channel, it'll end up performing worse or not playing as nice with other clients during those bursts of buffering. That'll potentially be seen by other clients as little bursts of jitter.
That's true of any client with older and crappier WiFi chips though, but TVs are such a race to the bottom when it comes to performance in so many other things.
There are two kinds of networks: wireless networks and reliable networks.
Wired connection is an absolute hack.
I hear people say this often, but when you look into what they actually mean, it's often a comparison of having a single mediocre ISP CPE in a corner of an apartment, at most with a wireless repeater in another, vs. Ethernet. Of course the wire wins in that comparison.
Now put an access point in every room and wire them to the router, and things start to look very different.
Lmao.
People say this until it takes 3 days to restore a fibre cut, when the wireless guys just work around the problem with replacement radios etc.
Issue with Wireless is usually the wireless operator. And most of them do work hard to give wireless a bad rep.
Where I live we have what seems like an unusual amount of fiber cuts... whenever the cable company or the phone company fiber is cut, at least one of the major wireless networks is offline too; maybe calls work, but data doesn't. They could potentially restore service through wireless backhaul, but they don't. They also rely on utility power and utility power outages longer than about 4 hours mean towers are going to turn off.
yes, and... convenience says 'use WiFi'. No wires! I've said, if it moves - wireless. If it doesn't -- wired. Counterexamples that 'work': AM / FM / TV / Paging big transmitters to simple/cheap receivers. For the 1-way case, that works. But for 2-way....
That tip about not using it also works with Ethernet and other technologies, BTW.
Ethernet pretty much sucks and has not improved substantially in consumer devices since the previous century. It also has pretty severe idle power consumption consequences for PCs, unless you are an expert who goes around fixing that.
>Ethernet [...] has not improved substantially in consumer devices since the previous century.
We've gone from 100 Mbps being standard consumer level to 2.5 or 10 Gbps being standard now. That sounds substantial to me.
10G Ethernet is not quite that common yet, but should become very common soon: https://news.ycombinator.com/item?id=44071701
There is not any meaningful sense in which 2.5gb ethernet is "standard". There are no TVs with 2.5gb ethernet ports. Or even 1gb ports. Yet they all have WiFi 5 or better.
2.5GbE only started gaining steam when cheap Realtek chips became available (especially since the Intel chips that were on the market earlier were buggy). Those have been adopted by almost all desktop motherboards now on the market, and most laptops that still have Ethernet. Embedded systems are lagging because they're always behind technologically and because they have longer design cycles, but it's pretty clear that most devices designed in the last year or two are moving beyond 1GbE and 2.5GbE will be the new baseline going forward.
In practical terms, WiFi 5 is slower than 1gb Ethernet.
It is bizarre that they are putting 100mbps Ethernet ports on TVs though.
> It is bizarre that they are putting 100mbps Ethernet ports on TVs though.
It's not that bizarre. About the only media one might have access to that is above 100mbps is 4k blu-ray rips which can hit peaks above 100m; but TVs don't really cater to that. They're really trying to be your conduit to commercial streaming services which do not encode at that high of a bitrate (and even if they did, would gracefully degrade to 100Mbps). And then you can save on transformers for the two pairs that are unused for 100base-tx.
> It is bizarre that they are putting 100mbps Ethernet ports on TVs though.
It's a few pennies cheaper and i'm sure they have some data showing 70%+ will just use WiFi. TCL in particular doesn't even have very good/stable drivers for their 10/100 NIC; there's a ton of people on the Home Assistant forums that have noticed that their android powered smart TV will just ... stop working / responding on the network until it's rebooted.
I’m sure you’re right, but the fact that it’s almost certainly literal pennies makes it very lame. Lack of stable drivers is also ridiculous given how long gbps Ethernet has been around.
No video stream out there uses over 100 Mbps, so it makes sense.
I've read that 8k streams can exceed 100mbps. I have not dug very far into that, though, since I don't have an 8k tv or any 8k sources.
Home user CPE we install have multiple 2.5G Ethernet ports.
even with 1Gbit/s ethernet, measure latency. It will be smaller and more predictable than any wifi you can have.
Ethernet will usually hit hardware limits of your HDD or SSD before it actually maxes out. 1gb ethernet is better than wifi in 99% of cases because wifi in the real world is pretty bad, even with modern standards. Why else do they have to continually revamp the standards to get around congestion and roaming issues? Cuz wifi is garbage in the real world. Ethernet = Very little jitter, latency, or packet loss. Wifi = Tons of jitter, latency and packet loss.
Your take is really weird and doesn't represent the real world. What blog did you read this on and why haven't you bothered to attack that obviously wrong stance?
This is the most ridiculous lie in the thread. An ethernet link that can barely keep up with a $150 SSD costs $1250 per switch port, and needs a $1200 NIC and can go only 3m over copper before you need a $1000+ optic assembly. There is nobody with an ethernet setup in their home that outruns consumer-grade SSDs. "Ethernet is limited by SSDs" is a Charlie's Hoes level of wrong.
Yes even an HDD can keep up with 1GbE.
But if you actually want your Ethernet to be similar speed to your SSD, you don't need to spend that much. Get some used gear.
32 port 40GbE switch (Dell S6000) $210 used
Dual port 40GbE NIC (Mellanox MCX354A-FCCT) $31 used
40GbE DAC 1 meter new price $22 or 40GbE optics from FS.com (QSFP-SR4-40G) $43 new + MMF fiber cable
Of course, that's probably not going to be very power efficient for home use - 32 port switch and probably only connecting a handful of devices at most.
You still get the best speeds over ethernet today because of how wifi standards are slow walked, both on the router and the device connected to the router. Ethernet standards are slow walked too of course, but we are talking about slow walking a 2.5g or 10g connection here; even otherwise crappy hardware is likely to have 1g ethernet, and it's been that way for at least 10 or 15 years.
If you want to transfer the contents of your old mac to your new mac, your best options in order of speed are 1) thunderbolt, 2) wifi, and 3) ethernet. You do not, in any sense, get "the best speeds" from ethernet. The market penetration of greater-than-1gb wired networks in consumer devices is practically nothing.
My isp-supplied router had 10gbe on both wan and lan sides. I swapped it for my own, but that is what modern consumer equipment looks like.
You can find a 2 port 10gbe+4 port 2.5gbe switch for just over $30 on Amazon.
If the run isn’t too long this can all run over cat5. Handily beats wifi especially for reliability but Thunderbolt is fastest if you only have 2 machines to link.
I have all 2.5gbit at home with some 10gbit SFP copper connections, it wasn't particularly difficult. The devices with built-in Ethernet ports are all gigabit of course, but the ones with USB-C ports have 2.5gbit adapters.
I could go to 10gbit but the Thunderbolt adapters for those all have fans.
I have a U7 Pro XGS hooked up to a Pro HD 24 POE switch (all 2.5gb ports or faster).
The only way I've managed to convince any Wifi 7 client to exceed 1gbps is by freshly connecting to it over 6ghz while standing physically within arm's reach of the AP. That's it. That's the only time it can exceed 1gbps.
In all other scenarios it's well under 1gbps, often more like 300-500mbps. Which is great for wifi, but still quite below the cheapest ethernet ports around. And 6ghz client behavior across OS's (Windows, MacOS, iOS, and Android) is so bad at roaming that I actually end up just disabling it entirely. The only thing it can do is generate bragging rights screenshots, in actual use it's basically entirely DOA.
And that's ignoring that ~$200 N150 NUCs come with 2.5gbps ethernet now.
I’m with you on 6ghz wifi disappointment. My phone does well with it since it supports MLO but my macbook will refuse to roam away from 6ghz until it’s close to unusable.
This is so insanely wrong that I almost feel like we're being trolled. Yes, a direct Thunderbolt connection would be best. Failing that, a guaranteed 1Gb Ethernet connection, which is ubiquitous and dirt cheap, and has latency measured in microseconds, is going to wipe the floor with real-world Wi-Fi 7 speeds. And for what you'd pay for end-to-end Wi-Fi 7 compatible gear, you could be using 10Gb Ethernet, which is in a different league of stability and actual observed throughput compared to anything wireless.
I have Firewalla Wi-Fi 7 APs connected via 10Gb Ethernet to my router. They're brilliant, very expensive, very high quality devices. I use them only for devices which I can't hardwire, because even 1Gb Ethernet smokes them in actual real-world use.
> wipe the floor with real-world Wi-Fi 7 speeds.
I see that you have never tried this. By the way, Mac Migration Assistant doesn't need Wi-Fi infrastructure at all.
Sure have, within the last 2 weeks when I helped a coworker migrate to a new machine! Both were November 2024 MacBook Pros, so Apple's current top-of-the-line laptops.
Running over Wi-Fi dragged on interminably and we gave up several hours in. When we scrounged up a couple of USB Ethernet dongles and started over, it took about an hour.
So yeah, my own personal experience confirms exactly what I'd expect: Wi-Fi is slow and high-latency compared to Ethernet, and you should always use hardwired connections when you care about stability and performance more than portability. By all means, use Wi-Fi for routine laptop mobility. If you have the option, definitely run a cable to your stationary desktop computers, game consoles, set-top boxes, NASes, and everything else within reach of a switch.
If you’re the kind of person who wants better than gigabit Ethernet, it’s very available. 2.5Gbe is just a USB adapter away. Mac Studio comes with 10GbE. Unifi networking gives you managed multi-gig and plenty of others do unmanaged multigig at affordable prices. Piles of consumer NAS support multigig.
I think this market is driven by content creators. Lots of prosumers shoot terabytes of video on a weekly basis. Local NAS are essential and multi-gig local networks dramatically improve the editing experience.
brb ima turn on my microwave halfway through your transfer
To this day I expect my wifi to drop whenever I hear a microwave, thanks to the one in my parents house: https://digitalseams.com/blog/microwave-ovens-wi-fi-and-http
Shouldn't such microwaves be decommissioned? I would assume that microwaves that are not properly shielded are dangerous to people in their vicinity?
or a single shitty wifi chipset in your network thanks to a cheap iot device.
Wifi is garbage. This person has no idea what they're talking about. It sounds like they read a blog post like 5 years ago and stuck with it cuz it's an edgy take.
Yes, me and the other literally billions of people who do not use wired Ethernet to their TV are just parroting an old blog. The OP who says Ethernet is an absolute requirement for Netflix is clearly correct. You sure got me.
Yes thunderbolt is best but look at costs. Apple is selling a 4ft cable for $130. I have a ton of random cat 5e and cat 6 and they go for a couple dollars.
Now lets talk about my actual “old mac” and “new mac” Mid 2012 mbp and my m3 pro. The 2012 only can do 802.11n so not gigabit speeds. It does have a gigabit ethernet however.
Even if I were going m3 pro to m3 pro, I'm only getting full wifi 6e speeds if I actually have a router that makes use of 160 MHz channels. My router can't. It is hard to even glean from router offerings which ones offer proper wifi 6, because there are dozens of SKUs, with different stores getting slightly different SKUs from the same brand. Afaik my mac does not support 160 MHz wifi 6 either.
A 4ft USB 4 cable is $30. That's more bandwidth per dollar than an Ethernet cable. Thunderbolt cables aren't cost prohibitive any more (though the devices at either end are still very expensive).
Compared to what?
This is a clear case of "you get what you measure". Measuring speed is so easy, everybody can do it, and do it all the time. No wonder that providers optimize for speed. But it also works the other way around. We have developed a focus on speed as if it were the only thing that mattered.
I have worked with networks for many years, and users blaming all sorts of issues on the network is a classic, so of course in their minds they need more speed and more bandwidth. But improvements only make sense up to some point. After that it is just psychological.
I don't get what the point of the article is. Is the takeaway that I should lower the channel width in my home? How many WAPs would I need to be running for that to matter? I'd argue it's more important to get everyone to turn down TX power in cases where your neighbors in an apartment building are conflicting. And that's never going to happen, so just conform to the legal limit and your SNR should be fine. Anything that needs to be high performance shouldn't be on wifi anyway.
If you want to spend a really long time optimizing your wifi, this is the resource: https://www.wiisfi.com/
This sort of thing is definitely in the class of "are you experiencing problems? if not don't worry about it".
If you are experiencing problems, this might give you an angle to think about that you hadn't otherwise, if you just naively assume Wifi is as good as a dedicated wire. Modern Wifi has an awful lot of resources, though. I only notice degradation of any kind when I have one computer doing a full-speed transfer for quite a while to another, but that's a pretty exceptional case and not one I'm going to run any more wires around for for something that happens less than once a month.
2.4GHz wifi at 40MHz squats on literally half of the usable channels. Sure, you see a speed improvement, very likely you now get 100mbps, but if you just disabled 2.4GHz and forced 5GHz you would get the exact same improvement and wouldn't be polluting half of the available frequencies.
Add another idiot sitting on channel 8 or 9 and the other half of the bandwidth is also polluted. Now even your mediocre IoT devices that cannot be on 5GHz are going to struggle for signal, and instead of the theoretical 70/70mbps you could get off a well-placed 20MHz channel you are lucky to get 30.
Add another 4 people and you cannot make a FaceTime call without disabling wifi or forcing 5GHz.
The takeaway is that you'll probably experience more reliable wifi if you turn your 5ghz channel width down to 40mhz and especially make sure your 2.4ghz width is 20mhz not 40mhz. As noted, you can't do anything about the neighbors, but making these changes can improve your reliability. And I think the larger takeaway is that if manufacturers just defaulted to 40mhz 5ghz width, like enterprise equipment does, wifi would be better for everyone. But if your wifi works great then no need.
Also that's an amazing resource, thanks for linking.
I lose wifi signal consistently in my bedroom on my 80MHz wide 5GHz wifi.
I just now reduced it to 20MHz, and though there is a (slight) perceptible drop in latency, those 5 extra dB of signal-to-noise I gained have given me wifi in the bedroom again.
Every doubling of the channel width costs roughly 3dB. Shannon's law strikes again!
Every doubling of the channel width doubles the Shannon limit*
* In a Gaussian white noise environment, which WiFi usually isn't in.
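If you want to plug numbers into that, here's a rough Python sketch (the bandwidth/SNR figures are made up for illustration; real radios also need a minimum SNR to hold any modulation rate at all, which is why narrowing the channel can bring a marginal link back):

    # Shannon: C = B * log2(1 + SNR). With fixed transmit power and a flat noise
    # floor, doubling B also doubles the noise power, so SNR drops ~3 dB and the
    # capacity gain falls well short of 2x.
    from math import log2

    def capacity_mbps(bandwidth_mhz, snr_db):
        snr_linear = 10 ** (snr_db / 10)
        return bandwidth_mhz * log2(1 + snr_linear)  # MHz * bit/s/Hz = Mbit/s

    print(capacity_mbps(20, 25))  # ~166 Mbit/s
    print(capacity_mbps(40, 22))  # ~293 Mbit/s: wider, but nowhere near 2x
    print(capacity_mbps(80, 19))  # ~506 Mbit/s: diminishing returns per doubling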
Wow, that is an awesome resource and something I wish I knew about earlier!
Every time I have questions about Wi-Fi I search for this distinctive site wiisfi.com … I should bookmark this.
The best resource out there. Period.
> Many ISPs, device manufacturers, and consumers automate periodic, high-intensity speed tests that negatively impact the consumer internet experience as demonstrated.
Is that actually a thing? Why would any ISP intentionally add unnecessary load to their network?
See https://www.thousandeyes.com/blog/cisco-announces-intent-to-... for example, SamKnows is in millions of homes measuring performance and now sending the data to Cisco.
For what it's worth, I think most ISPs that do this will host their speed test in-network so their speeds are inflated. This benefits both the ISP and whoever is in charge of the speed test (like speedtest.net).
So they're not really increasing their network load a measurable amount since the data never actually leaves their internal network. My ISP's network admin explained this to me one day when I asked about it. He said they don't really notice any difference.
I've only met around 10 people that even know what a speed test is. I'm not sure how most consumers would even go about automating one. What would be the first step?
Apple has a draft specification for a better way of measuring network quality than just doing speed tests: https://github.com/network-quality/goresponsiveness
Their `networkQuality` implementation is on the CLI for any Mac recently updated. It's pretty interesting and I've found it to be very good at predicting which networks will be theoretically fast, but feel unreliable and laggy, and which ones will feel snappy and fast. It measures Round-trips Per Minute under idle and load condition. It's a much better predictor of how fast casual browsing will be than a speed test.
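For anyone curious what "RPM under load" means in practice, here's a very rough Python sketch of the idea (this is not Apple's implementation, and the URLs are placeholders you'd swap for real endpoints):

    # Saturate the downlink, then measure small-request round trips under load.
    # Responsiveness ~ round trips per minute (RPM) = 60 / median RTT in seconds.
    import statistics, threading, time, urllib.request

    LOAD_URL = "https://example.com/large-file"  # placeholder: any big download
    PROBE_URL = "https://example.com/"            # placeholder: small object

    def saturate(stop):
        while not stop.is_set():
            try:
                with urllib.request.urlopen(LOAD_URL, timeout=10) as r:
                    while not stop.is_set() and r.read(64 * 1024):
                        pass
            except OSError:
                pass

    def probe_rtts(n=20):
        rtts = []
        for _ in range(n):
            t0 = time.monotonic()
            try:
                urllib.request.urlopen(PROBE_URL, timeout=10).read()
                rtts.append(time.monotonic() - t0)
            except OSError:
                pass
        return rtts

    stop = threading.Event()
    threading.Thread(target=saturate, args=(stop,), daemon=True).start()
    time.sleep(2)  # let the load ramp up
    rtts = probe_rtts()
    stop.set()
    if rtts:
        print(f"~{60 / statistics.median(rtts):.0f} RPM under load")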
Moving into a house for the first time since before college this year, I only just learned about Wi-Fi channel width this week. Apparently the mesh routers I ended up picking several months ago had a default width of 160 MHz, but only go as low as 80 MHz, so that's what I ended up switching to. Anecdotally it has seemed somewhat more reliable, but maybe in the long run finding something that can go even lower might be worth it, because we do still notice occasional stutter that would be nice to reduce even if the theoretical max throughput was a bit lower.
> Because consumers have been conditioned to understand only raw speed as a metric of Wi-Fi quality and not more important indicators of internet experience such as responsiveness and reliability.
While the two are not the same, they are not exactly separable.
You will not get good Internet speed out of a flaky network, because the interrupted flow of acknowledgements, and the need to retransmit lost segments, will not only itself impact the performance directly, but also trigger congestion-reducing algorithms.
Most users are not aware whether they are getting good speed most of the time if they are only browsing the web, because complex pages have such variable load times. Individual video streams are not enough to stress the system either. You have to be running downloads (e.g. torrents) to get a better sense of it.
The flakiness of web page loads and insufficient load caused by streams can conceal both: some good amount of unreliability and poor throughput.
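A quick back-of-the-envelope with the classic Mathis et al. approximation (TCP rate is roughly MSS / (RTT * sqrt(loss)), constant factor dropped) shows why even a little packet loss from a flaky link caps throughput; the numbers are purely illustrative:

    from math import sqrt

    def mathis_mbps(mss_bytes=1460, rtt_ms=20, loss=0.0001):
        # TCP throughput ceiling, ignoring the ~1.22 constant in the full formula
        rate_bps = (mss_bytes * 8) / ((rtt_ms / 1000) * sqrt(loss))
        return rate_bps / 1e6

    print(mathis_mbps(loss=0.0001))  # 0.01% loss -> ~58 Mbit/s
    print(mathis_mbps(loss=0.01))    # 1% loss    -> ~5.8 Mbit/s
    print(mathis_mbps(loss=0.05))    # flaky link -> ~2.6 Mbit/s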
I'm surprised that small-cell wifi is not a thing, at least for businesses. For example, if you walk into an office building everyone seems to have a physical phone on their desk that is hard-wired. What if that were also a small-cell AP, like a personal AP? With automation, central provisioning, and analytics this seems doable. Yeah, handoff and roaming have to be seamless and quick, but it doesn't feel that hard, no? If so this would be pretty neat and would solve the contention issue in the air.
For me the only thing that really matters, and globally sucks with WiFi is roaming.
My house is old and has stone walls up to 120cm thick, including the inner walls, so I have to have access points in nearly all rooms.
I never had a true seamless roaming experience. Today, I have TP-Link Omada and it works better than previous solutions, but it is still not as good as DECT phones, for example.
For example, if I watch a Twitch stream in my room and go to the kitchen to grab something with my tablet or my phone, I get a freeze about 30% of the time, though not a very long one. Before, I sometimes had to turn the wifi off and on on my device for it to roam.
I followed all the Omada and general WiFi best practice I could find about frequency, overlap... But it is still not fully seamless yet.
DECT phones run on the 1.9 GHz spectrum which doesn't get absorbed by water like 2.4 GHz, and will penetrate through many other materials far better than higher frequencies.
Most people place wifi repeaters incorrectly, or invest in crappy repeater / mesh devices that do not have multiple radios. A Wifi repeater or mesh device with a single radio by definition cuts your throughput in half for every hop.
I run an ISP. Customers always cheap out when it comes to their in home wireless networks while failing to understand the consequences of their choices (even when carefully explained to them).
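The halving-per-hop math is easy to sanity-check yourself; here's a sketch with made-up numbers (real-world overheads make it worse):

    # A single-radio repeater must receive and re-transmit on the same channel,
    # so usable throughput roughly halves for every such hop.
    def effective_mbps(client_link_mbps, single_radio_hops):
        return client_link_mbps / (2 ** single_radio_hops)

    print(effective_mbps(400, 0))  # AP with wired backhaul: ~400 Mbit/s
    print(effective_mbps(400, 1))  # one single-radio repeater: ~200 Mbit/s
    print(effective_mbps(400, 2))  # two chained repeaters: ~100 Mbit/s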
Eh, multiple APs and roaming being awful isn't just a matter of shitty placement and bad wireless backhaul, it's also client-side software. I have two APs on opposite ends of my house and my phone tries to hang on to whatever AP it's connected to far longer than it should when moving around the house. My APs are placed correctly, and support 802.11r, yet my phone and most other devices don't try to roam until far, far past the point they should have switched to the other AP.
The design of roaming being largely client initiated means roaming doesn't really work how people intuitively think it should, because at least every device I've ever seen seems to be programmed to aggressively cling to a single AP.
Have you tried turning down the tx power on your APs? It will help your devices decide to roam, and it may not actually reduce your effective range, because often times effective range is limited by tx power on the client more than the AP.
"Wheres your router"
"The basement"
"Uh, i can send someone out to install some repeaters for $$$"
"No just make internet good now"
I use a DECT VoIP phone for most of my phone calls. It's great!
“Behind every good wi-fi network is an excellent wired backbone infrastructure.” - the Tao of Wi-Fi
“Those who understand wireless use cables” - random guy on the internet.
I wish the Wi-Fi developers would put some serious effort into improving range and contention. Forget 40 MHz vs 80 MHz, how about some 5 MHz channels? How about some modulations designed to work at low received power and/or low SNR? How about improving the stack to get better performance when a device has mediocre signal quality to multiple APs at the same time?
There are these cool new features like MLO, but maybe devices could mostly use narrow channels and only use more RF bandwidth when they actually need it.
IEEE 802.11af: the old TV bands.
IEEE 802.11ah: the ~900 MHz band.
IEEE 802.11ax (WiFi 6): traditional channels can be subdivided into resource units from 26 tones up to 2x996 tones according to need (effectively a 2MHz channel at the low end). This means multiple devices can be transmitted to within the same transmit opportunity.
> How about some modulations designed to work at low received power and/or low SNR?
802.11(og), 1 & 2 Mbps.
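To put those resource-unit sizes in concrete terms: 802.11ax OFDMA uses 78.125 kHz subcarrier spacing, so the RU widths fall straight out of the tone counts (rough numbers, ignoring guard and pilot tones):

    TONE_SPACING_KHZ = 78.125  # 802.11ax OFDMA subcarrier spacing
    for tones in (26, 52, 106, 242, 484, 996, 2 * 996):
        print(f"{tones:>4}-tone RU ~ {tones * TONE_SPACING_KHZ / 1000:.1f} MHz")
    # 26-tone ~ 2.0 MHz, 242-tone ~ a 20 MHz channel, 2x996 ~ a 160 MHz channel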
And do those ax resource units work, in practice, in a way that allows two APs that are moderately close to each other to coexist efficiently within the same 20MHz channel? Preferably even if they’re from different vendors and even if the users are not experts?
> 802.11(og), 1 & 2 Mbps
I’m a little vague on the details, but those are rather old and I don’t think there is anything that low-rate in the current MCS table. Do they actually work well? Do they support modern advancements like LDPC?
> > 802.11(og), 1 & 2 Mbps
> I’m a little vague on the details, but those are rather old
They're the original, phase shift keyed modulations.
> Do they actually work well?
They work great, if your problem is SNR, and if you value range more than data rate.
They are, of course, horribly spectrally inefficient which means they work better than OFDM near the guard bands. OFDM has a much flatter power level over frequency, so you have to limit TX power whenever the shoulder of the signal nears the guard band. IIRC, some standard supports individually adjusting the resource unit transmit power which would solve this as well. PSK modulation solves this somewhat accidentally. Guardbands especially suck since there's only 3 non overlapping 2.4GHz channels.
> I don’t think there is anything that low-rate in the current MCS table.
> Do they support modern advancements like LDPC?
Dunno! Generally though, each MCS index specifies both a modulation mechanism (BPSK, OFDM, ...) and a coding rate. All of the newer specs still allow you to go almost as slow if you want to, usually 6-7 Mbps ish, using essentially the same modulation scheme, just a bit faster and with newer coding.
> do those ax resource units work, in practice, in a way that allows two APs that are moderately close to each other to coexist efficiently within the same 20MHz channel?
Yes and no. It doesn't improve RF coexistence directly. But in many cases allows much more efficient use of the available airtime. Before every outgoing packet to a different station consumed a guard interval and the entire channel bandwidth, but now for a single guard interval you can pack as many station's data as will fit.
The next release standard is 802.11bn, "Wifi 8", and it has been dubbed Ultra High Reliability (UHR):
* https://en.wikipedia.org/wiki/IEEE_802.11bn
So considerations other than raw speed are getting attention.
A household's bandwidth use is quite a bit different from a business's. A household may have a lot of devices, but most of them are doing very little at any given time, while the primary device in use wants the best speed possible. In a business, however, there are a lot of primary devices and not many idle little devices, so fairness and reliability dominate the needs, as does maxing out the available frequencies for coverage and total bandwidth.
Wifi 8 will probably be another standard homes can skip. Like wifi 6, it is going to bring little that they need to make good use of their fibre home connections across the house.
The thing about speed tests causing a bad experience because they hog airtime felt like a non sequitur (since performing them is rare and manual) until I saw this:
> Many ISPs, device manufacturers, and consumers automate periodic, high-intensity speed tests that negatively impact the consumer internet experience as demonstrated.
But there’s no support presented for this claim, and frankly I am skeptical. What WiFi devices are regularly conducting speed tests without being asked?
> What WiFi devices are regularly conducting speed tests without being asked?
ISP provided routers, at least Xfinity does. I've gotten emails from them (before I ripped out their equipment and put my own in) "Great news, you're getting more than your plan's promised speeds" with speedtest results in the email, because they ran speed tests at like 3AM.
I wouldn't be surprised if it's happening often across all the residential ISPs, most likely for marketing purposes.
Pretty sure Verizon does this as well, when I had a tech come out he had access to historical speed test results from my router (I didn't ask any questions about it at the time so don't have any more info).
That would be a speedtest between the router/modem and CMTS then, not one between a Wi-Fi connected device and the ISP, no?
I have noticed Spectrum internet shits the bed at 12:30am pretty reliably.
Really? My spectrum has been super reliable in Michigan. Way better than when I had Comcast here
DOCSIS cable modems perform regularly scheduled tests, but it's only between devices, and shouldn't affect available bandwidth, because there's far more bandwidth within the DOCSIS network than between the network and the Internet.
> there's far more bandwidth within the DOCSIS network than between the network and the Internet.
Really? DOCSIS has been the bottleneck out of Wi-Fi, DOCSIS, and wider Internet every time I've had the misfortune of having to use it in an apartment.
Especially the tiny uplink frequency slice of DOCSIS 3 and below is pathetic.
Eero does this automatically (mine says it was last run 2 days ago at 5:08am) and I had software on my DD-WRT router (OpenLede) that did it, though obviously not many people (overall) are running that.
I used to run a docker container that ran a speed test every hour and graphed the results, but I haven't done that in a while now.
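For anyone who wants to do the same, a minimal version of that job is only a few lines with the speedtest-cli Python package (API names as I remember them from the sivel package; graphing left out):

    import csv, time
    import speedtest  # pip install speedtest-cli

    st = speedtest.Speedtest()
    st.get_best_server()
    down_mbps = st.download() / 1e6   # download()/upload() return bits per second
    up_mbps = st.upload() / 1e6
    ping_ms = st.results.ping

    with open("speed_log.csv", "a", newline="") as f:
        csv.writer(f).writerow([time.time(), down_mbps, up_mbps, ping_ms])

Run it from cron every hour and graph the CSV with whatever you like.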
Eero I think just tests internet speed from the gateway, so no Wi-Fi involved.
I think Roku devices might. There's a network speed indicator in the settings and I think it had values before I explicitly ran a test. My Rokus are all wired, because I'm civilized, and the test interval is very short, so that ends my investigation.
Ubiquiti UniFi used to, I don't know if it still does.
It’s configurable
At least in my UniFi instance, this is only done when manually triggered, but I seem to recall a setting where it could be automatically updated daily.
Google Nest access points do this, but they do it only when networks are idle, so I fail to see the negative consequences.
Honestly what's unsaid in a lot of this is that it would be really nice if there were more and wider ISM bands. So much makes use of 900MHz, 2.4GHz and 5GHz in novel and innovative ways that if the government and FCC really wanted to spark innovation, including augmenting wifi performance, they'd stop letting telcos and other questionable interests hoard spectrum and release it as ISM (and no, they shouldn't steal from ham bands to make ISM bands either).
The only way forward is new frequencies and larger blocks of spectrum.
The average US household has 21 Wi-Fi devices
I wonder how many of those could be wired.
I actually switched from 40 MHz to 80 MHz when a friend complained about slow downloads on my Wi-Fi.
So yeah, I do think speed is more important.
Responsiveness doesn’t matter that often and when it does, plugging in Ethernet takes it out of the equation.
You can't use that speed if your device is dropping half the packets.
Is it really that big of an issue? With devices spread over 2.4, 5, and 6 GHz you really need a lot of them to run into issues.
Not the "Need for Speed" I expected.
Is there a good guide on what the right things to do are?
Hardwire everything you can over ethernet to get them off Wifi.
Use a dedicated 2.4ghz AP for all IoT devices. Firewall this network and only allow the traffic those devices need. This greatly reduces congestion.
Use 5ghz for phones/laptops and keep IoT off that network.
That's really about it. If you have special circumstances there are other solutions, but generally the solution to bad wifi is to not use the wifi, lol.
In the IoT space I really wish an "ESP for power line Ethernet" existed these days.
I have 50+ ESP-based devices on WiFi, and while they're low bandwidth (and on their own SSID) I really wish there were affordable options so they could be "wired" for comms (they mostly control mains appliances, but the rules and considerations for mixing data and mains in one package make that prohibitively expensive).
Have you considered 1 WiFi device and 49 sub-ghz devices?
The point isn't wifi contention per se (it's working fine) - it's that having home automation depend on wireless signals at all is both a vulnerability, and feels silly when all those devices have hard wired power.
"The average US household has 21 Wi-Fi devices"... wtf?
Doesn't take long to add up (rough tally below). Family of 4 - every phone, including prior generations which might be off in a drawer: 3-8
Router, and extenders (multi floor house): 1-4
Streaming and audio gear (Chromecast, Sonos, Apple TV, Fire Stick, Roku, smart speakers, hifi receiver, other eavesdropping devices): 2-10
Smart doorbell/light switch/temperature sensor/weather station/CO2 or CO detector/flood detector/bulb/LED strip/nanoleaf/garage door: 4-16
Some cars: 0-2
Some smart watches speak wifi: 0-4
Computers.. maybe the desktops are wired (likely still support wifi), all laptops, chromebooks, and tablets : 3-8
All game consoles, many TVs, some computer monitors: 3-8
Some smart appliances: 0-4 (based on recent news of ads, best to aim for 0)
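Adding up the midpoints of those ranges (the ranges above are my guesses, not survey data) lands well above the article's 21:

    ranges = {
        "phones": (3, 8), "router/extenders": (1, 4), "streamers/speakers": (2, 10),
        "smart home": (4, 16), "cars": (0, 2), "watches": (0, 4),
        "computers/tablets": (3, 8), "consoles/TVs": (3, 8), "appliances": (0, 4),
    }
    low = sum(a for a, _ in ranges.values())
    high = sum(b for _, b in ranges.values())
    print(low, (low + high) / 2, high)  # 16 40.0 64 -- 21 is easily in range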
I live alone, and just counted, I have 10 in regular use. A few more that can connect to WiFi but aren’t (why would I want my tower fans on the internet, anyway?)
I had probably 20 prior to swapping out some smart light bulbs and switches for Zigbee.
21 for an average household isn’t nuts.
34 devices connected to my router at the moment, 8 wired and 26 wifi. About 8 of the wifi devices are phones, tablets, and laptops; the rest are various iot things: locks, plugs, alarm, thermostat, water heater, doorbell, etc.
It is pretty easy to get there when everyone has a phone, a laptop, and there are a few shared tablets around. Add work + personal machines and it goes up a bit more.
Add a few wifi security cameras and other IoT devices and 30+ is probably pretty common.
I got 28 online right now according to my Eero. 3 people, with smartphones and laptops. Several game consoles, a few Apple TVs and music streaming devices, Ring camera, Zwave Hub, printer, washing machine, garage opener, Ring doorbell and an assortment of Echo dots.
I'm probably not average, but I have over 50 wifi devices registered on my UBNT system and 15 wired.
I just checked;
I currently have 23, my parents' house has 19
People have all kinds of stuff on wifi these days - cameras, light bulbs, dishwashers, irrigation, solar, hifi..
Yep. And each of your neighbors also has that many devices and you’re all sharing the same channels.
I count 14 in a 2 person household, 4 bedroom house; 3 wired.
And that's not to mention everything else on the 2.4GHz band :) Bluetooth, zigbee, your microwave, etc
That seems very high to me. A family of four each has 5 devices connected at the same time?
I'm single and have 11 devices on 2.4 GHz.
3 of the smart lights I currently don't need, so they aren't actually connected. That leaves 8 connected 2.4 GHz devices. On 5 GHz I've got 16 devices.
The iMac and the Surface Pro 4 are almost never turned on, and the printer is off most of the time too. That leaves 13 regularly connected 5 GHz devices. That's a total of 21 devices usually connected on my WiFi, right at what the article says is average. :-)
Smartphone, laptop, tablet, watch - that's 4 already. And this isn't just counting personal devices. Include TV, streaming stick, game console, printers, bulbs, plugs, speakers, doorbell, security cameras, thermostat and you'll hit that number pretty quick.
There are 16 devices on my WiFi right now and I would've thought I was above average. I have a bunch of weird stuff like 3 Raspberry Pis that most households would not have, but I don't have most of the stuff you listed.
I guess I am less "connected" than the average American. Can't say I feel like I am missing out, though.
Most of your mobile devices are doing background tasks. It’s not typically high bandwidth stuff, but they are connected even when you aren’t using them.
Check your network and see how many wifi devices you have. I'm up to 60+ thanks to a handful of IoT devices, smart speakers, etc... It adds up quick.
Doesn't seem unreasonable. Look at your router. I have 17 and I would say we're a totally normal household - the kids don't even have phones yet.
We have 2 phones, a tablet for the kids, a couple of Google Homes, a Chromecast, 2 Yoto players, a printer, a smart TV, 2 laptops, a Raspberry Pi, a solar power inverter, an Oculus Quest, and a couple of things that have random hostnames.
It adds up.