swiftcoder
16 days ago
> Obviously forking go’s crypto library is a little scary, and I’m gonna have to do some thinking about how to maintain my little patch in a safe way
This should really be upstreamed as an option on the ssh library. It's good to default to sending chaff in untrusted environments, but there are plenty of places where we might as well save the bandwidth.
gerdesj
16 days ago
"where we might as well save the bandwidth"
I come from a world (yesteryear) where a computer had 1KB of RAM (ZX80). I've used links with modems rocking 1200 bps (1200 bits per second). I recall US Robotics modems getting to speeds of 56K - well, that was mostly a fib, worse than MS doing QA these days. Ooh, I could chat with some bloke from Novell on Compuserve.
In 1994ish I was asked to look into this fancy new world wide web thing on the internet. I was working at a UK military college as an IT bod, I was 24. I had a Windows 3.1 PC. I telnetted into a local VAX, then onto the X25 PAD. I used JANET to get to somewhere in the US (NIST) and from there to Switzerland to where this www thing started off. I was using telnet and WAIS and Gopher and then I was apparently using something called "www".
I described this www thing as "a bit wank", which shows what a visionary I am!
drzaiusx11
15 days ago
Fellow old here, I had several 56k baud modems but even my USR (the best of the bunch) never got more than half way to 56k throughput. Took forever to download shit over BBS...
beagle3
15 days ago
The real analog copper lines were kind of limited to approx 28K - more or less the Shannon limit. However, the lines at the time were increasingly replaced with digital 64Kbit lines that sampled the analog tone. So the 56k standard aligned itself to the actual sample times, and that allowed it to reach a 56kbps rate (some timing/error tolerance still eats away at your bandwidth).
If you never got more than 24-28k, you likely still had an analog line.
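The back-of-envelope arithmetic (hedged - the exact overhead varied with trunk signaling):

    8000 samples/s x 8 bits/sample = 64 kbps  (the digital trunk channel)
    8000 samples/s x 7 usable bits = 56 kbps  (robbed-bit signaling eats the 8th bit)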
mgiampapa
15 days ago
56k was also unidirectional, you had to have special hardware on the other side to send at 56k downstream. The upstream was 33.6kbps I think, and that was in ideal conditions.
cestith
15 days ago
The special hardware was actually just a DSP at the ISP end. The big difference was before 56k modems, we had multiple analog lines coming into the ISP. We had to upgrade to digital service (DS1 or ISDN PRI) and break out the 64k digital channels to separate DSPs.
The economical way to do that was integrated RAS systems like the Livingston PortMaster, Cisco 5x00 series, or Ascend Max. Those would take the aggregated digital line, break out the channels, hold multiple DSPs on multiple boards, and have an Ethernet interface (or sometimes another DS1 or DS3 for more direct uplink), with all those parts communicating inside the same chassis. In theory, though, you could break out the line in one piece of hardware and then have a bunch of firmware modems.
dspillett
15 days ago
The asymmetry of 56k standards was 2:1, so if you got a 56k6 link (the best you could get in theory IIRC) your upload rate would be ~28k3. In my experience the best you would get in real world use was ~48k (so 48kbps down, 24kbps up), and 42k (so 21k up) was the most I could guarantee would be stable (bearing in mind “unstable” meant the link might completely drop randomly, not that there would be a blip here-or-there and all would be well again PDQ afterwards) for a significant length of time.
To get 33k6 up (or even just 28k8 - some ISPs had banks of modems that supported one of the 56k6 standards but would not support more than 28k8 symmetric) you needed to force your modem to connect using the older symmetric standards.
drzaiusx11
15 days ago
Yeah, 28k sounds closer to what I got when things were going well. I also forget whether they were tracking in lower case 'k' (x1000) or upper case 'K' (x1024) units/s, which obviously has an effect as well.
mnw21cam
15 days ago
The lower case "k" vs upper case "K" is an abomination. The official notation is lower case "k" for 1000 and "Ki" for 1024. It's an abomination too, but it's the correct abomination.
tracker1
15 days ago
That's a newer representation, mostly because storage companies always (mis)represented their storage... I don't think any ISPs really misrepresent k/K in kilo bits/bytes
encom
15 days ago
Line speed is always base 10. I think everything except RAM (memory, caches etc.) is base 10 really.
dspillett
15 days ago
*56k baud modems but even my USR (the best of the bunch) never got more than half way to 56k throughput*
56k modem standards were asymmetric, the upload rate being half that of the download. In my experience (UK based, calling UK ISPs) 42kbps was usually what I saw, though 46 or even 48k was stable¹ for a while sometimes.
But 42k down was 21k up, so if I was planning to upload anything much I'd set my modem to pretend it was a 36k6 unit: that was more stable, and up to that speed things were symmetric (so I got 36k6 up as well as down, better than 24k/23k/21k). I could reliably get a 36k6 link, and it would generally stay up as long as I needed it to.
--------
[1] sometimes a 48k link would last many minutes then die randomly, forcing my modem to hold back to 42k resulted in much more stable connections
tracker1
15 days ago
Even then, it required specialized hardware on the ISP side to connect above 33.6kbps at all, and it was almost never reliable. I remember telling most of my friends just to get/stick with the 33.6k options. Especially considering the overhead a lot of those faster modems imposed: most of them were "winmodems" that used a fair amount of CPU instead of an actual COM/serial port. It was kind of wild.
dspillett
15 days ago
Yep. Though I found 42k reliable and a useful boost over 36k6 (14%) if I was planning on downloading something big¹. If you had a 56k capable modem and had a less than ideal line, it was important to force it to 36k6 because failure to connect using the enhanced protocol would usually result in fallback all the way to 28k8 (assuming, of course, that your line wasn't too noisy for even 36k6 to be stable).
I always avoided WinModems, in part because I used Linux a lot, and recommended friends/family do the same. “but it was cheaper!” was a regular refrain when one didn't work well, and I pulled out the good ol' “I told you so”.
--------
[1] Big by the standards of the day, not today!
Jedd
15 days ago
> several 56k baud modems
These were almost definitely 8k baud.
tfvlrue
15 days ago
In case anyone else is curious, since this is something I was always confused about until I looked it up just now:
"Baud rate" refers to the symbol rate, that is the number of pulses of the analog signal per second. A signal that has two voltage states can convey two bits of information per symbol.
"Bit rate" refers to the amount of digital data conveyed. If there are two states per symbol, then the baud rate and bit rate are equivalent. 56K modems used 7 bits per symbol, so the bit rate was 7x the baud rate.
AlpineG
15 days ago
Not sure about your last point, but in serial comms there are start and stop bits and sometimes parity. We generally used 8 data bits with no parity, so in effect there are 10 bits per character including the stop and start bits. That pretty much matched up with file transfer speeds achieved using one of the good protocols that used sliding windows to remove latency. To calculate expected speed, just divide baud by 10 to convert from bits per second to characters per second; then there is a little efficiency loss due to protocol overhead. This is for a direct connection without modems; once you introduce those, the speed could be variable.
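That rule of thumb as a tiny Go sketch (assuming 8N1 framing):

    // 8N1 framing: 1 start + 8 data + 1 stop = 10 line bits per
    // character, so chars/sec is roughly line rate / 10, before
    // protocol overhead.
    func charsPerSec(lineBitsPerSec int) int {
        const frameBits = 1 + 8 + 1
        return lineBitsPerSec / frameBits // e.g. 9600 bps -> 960 cps
    }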
fkarg
15 days ago
Yes, except that modern infra goes further: e.g. WiFi 6 uses 1024-QAM, which is to say there are 1024 states per symbol, so you can transfer up to 10 bits per symbol.
davrosthedalek
15 days ago
Yes, because at that time, a modem didn't actually talk to a modem over a switched analog line. Instead, line cards digitized the analog phone signal, the digital stream was then routed through the telecom network, and then converted back to analog. So the analog path was actually two short segments. The line cards digitized at 8kHz (enough for 4kHz analog bandwidth), using a logarithmic mapping (u-law? a-law?), and they managed to get 7 bits reliably through the two conversions.
ISDN essentially moved that line card into the consumer's phone. So ISDN "modems" talked digital directly, and got to 64kbit/s.
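For flavor, a minimal Go sketch of that logarithmic mapping - the classic G.711 u-law encode, constants per the usual reference implementations (illustrative, not a drop-in codec):

    // linearToULaw compresses a 16-bit linear PCM sample into the
    // 8-bit logarithmic u-law code a line card put on the 64k channel.
    func linearToULaw(sample int16) uint8 {
        const bias = 0x84  // standard G.711 bias
        const clip = 32635 // max magnitude that survives the bias
        s := int32(sample)
        var sign uint8
        if s < 0 {
            s = -s
            sign = 0x80
        }
        if s > clip {
            s = clip
        }
        s += bias
        // exponent = position of the highest set bit above the mantissa
        exponent := uint8(7)
        for mask := int32(0x4000); exponent > 0 && s&mask == 0; mask >>= 1 {
            exponent--
        }
        mantissa := uint8((s >> (exponent + 3)) & 0x0F)
        return ^(sign | exponent<<4 | mantissa)
    }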
nyrikki
15 days ago
An ISDN BRI (basic copper) actually had 2 64kbps B channels; for POTS dialup as an ISP you typically had a PRI with 23 B channels and 1 D channel.
56k only allowed one A/D-D/A conversion from provider to customer.
When I was troubleshooting clients, the problem was almost always on the customer side of the demarc, with old two-line wiring or insane star junctions being the primary source.
You didn't even get 33k on analog switches, but at least US West and GTE had ISDN-capable switches backed by at least DS# by the time the commercial internet took off. LATA tariffs in the US killed BRIs for the most part.
T1 CAS was still around, but in-channel CID etc… didn't really work for their needs.
33.6k still depended on DS# backhaul, but you could be POTS on both sides; 56k depended on having only one analog conversion.
namibj
15 days ago
56k relied on the TX modem being digitally wired to the DAC that fed the analog segment of the line.
da_chicken
15 days ago
Confusing baud and bit rates is consistent with actually being there, though.
Jedd
15 days ago
As someone that started with 300/300 and went via 1200/75 to 9600 etc - I don't believe conflating signalling changes with bps is an indication of physical or temporal proximity.
drzaiusx11
15 days ago
I think it was a joke implying you'd be old enough to forget because of age, which in my case is definitely true...
Jedd
14 days ago
Oh, I got the implication, but I think it was such a common mistake back then, that I don't think it's age-related now - it's a bit of a trope, to assume baud and bps mean the same thing, and people tend to prefer to use a more technical term even when it's not fully understood. Hence we are where we are with terms like decimate, myriad, nubile, detox etc, forcefully redefined by common (mis)usage. I need a cup of tea, clearly.
Anyway, I didn't think my throw-away comment would engender such a large response. I guess we're not the only olds around here!
da_chicken
14 days ago
No, just that confusing the two was ubiquitous at the time 14.4k, 28k, and 56k modems were the standard.
Like it was more common than confusing Kbps and KBps.
I mean, the 3.5" floppy disk could store 1.44 MB... and by that people meant the capacity was 1,474,560 bytes = 1.44 * 1024 * 1000. Accuracy and consistency in terminology has never been particularly important to marketing and advertising, except marketing and advertising is exactly where most laypersons first learn technical terms.
drzaiusx11
14 days ago
I started out with a 2400 baud US Robotics modem with my "ISP" being my local university to surf gopher and BBS. When both baud rates and bits per second were being marketed side by side I kinda lost the thread tbh. Using different bases for storage vs transmission rates didn't help.
drzaiusx11
15 days ago
Yeah, I got baud and bit rates confused. I also don't recall any Hayes commands anymore either...
quesera
15 days ago
> I've used links with modems rocking 1200 bps
Yo, 300 baud, checking in.
Do I hear 110?
+++ATH0
robflynn
15 days ago
Ah, the good old days. I remember dialing up local BBSes with QMODEM.
AT&C1&D2S36=7DT*70,,,5551212
codazoda
15 days ago
PoiZoN BBS Sysop chiming in. I ran the BBS on a free phone line I found in my childhood bedroom. I alerted the phone company and a tech spent a day trying to untangle it, but gave up at the end of his shift. He even stopped by to tell me it wouldn’t be fixed.
I didn’t know the phone number, so I bought a Caller ID box, hooked it to my home line, and phoned home. It wasn’t long before every BBS in town had a listing for it.
quesera
15 days ago
That's awesome.
I had to wait til I was old enough to get a phone line in my own name before running a BBS. And also til I had a modem that would auto-answer, which was not a given back then!
But I confess my first question for a working but unassigned phone line would be: who gets the bill for long distance calls?
I had access to no-cost long distance calling through other administrative oversights, but they were a bit more effort to maintain! :)
nwellinghoff
15 days ago
Man that tech was cool and did you a solid.
bigfatkitten
15 days ago
Many techs went to work for the phone companies for a reason.
ochrist
15 days ago
My first modem (from 1987) was 300 baud, but it could be used in a split mode called 75/1200.
Before that I used 50 baud systems in the military as well as civil telex systems.
quesera
15 days ago
Mine was 300 baud, probably 1982?
And I felt privileged because the configuration for my TI-99/4A Terminal Emulator (which I believe was called Terminal Emulator) had options for 110 or 300 baud, and I felt lucky to be able to use the "fast" one. :)
My first modem (you always remember your first) had no carrier detection (and no Hayes commands, and no speaker...), so I would dial the number manually, then flip a switch when I heard the remote end pick up and send carrier to get the synchronization started.
It was incredibly exciting at the time.
guiambros
15 days ago
Ha, same! On a TRS-80 Color, no less. But I think I used it four times, because no one else in the country had a BBS at the time (small city in Latin America).
It took a couple of years until it would catch on, and by then 1200 and 2400 bps were already the norm - thankfully!
bandrami
15 days ago
Same year, I tried this cool new "Mosaic" software and thought it was a cool proof of concept, but there was no way this web thing could ever displace gopher
egeozcan
15 days ago
Which was right, today gopher has more users than ever! :)
reincarnate0x14
16 days ago
It sort of already is. This behavior is only applied to sessions with a TTY, and the client can disable it, which is a sensible default. This specific use case trips it up, obviously, since the server knows ahead of time that the connection is not important enough to obfuscate and this isn't a typical terminal session; but in almost any other scenario there is no way to make that determination, and the client expects its ObscureKeystrokeTiming to be honored.
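For reference, the client-side knob is the ObscureKeystrokeTiming option (OpenSSH 9.5+); opting out per-host looks something like this ("lowpower-box" being a made-up host alias):

    # ~/.ssh/config
    Host lowpower-box
        ObscureKeystrokeTiming no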
CaptainNegative
15 days ago
What's a concrete threat model here? If you're sending data to an ssh server, you already need to trust that it's handling your input responsibly. What's the scenario where it's fine that the client doesn't know if the server is using pastebin for backing up session dumps, but it's problematic that the server tells the client that it's not accepting a certain timing obfuscation technique?
reincarnate0x14
15 days ago
The behavior exists to prevent a 3rd party from inferring keystrokes from active terminal sessions, which is surprisingly easy, particularly with knowledge about the user's typing speed, keyboard type, etc. The old CIA TEMPEST stuff used to make good guesses at keystrokes from the timing of AC power circuit draws for typewriters and real terminals. Someone with a laser and a nearby window can measure the vibrations in the glass from the sound of a keyboard. The problem is real and has been an OPSEC sort of consideration for a long time.
The client and server themselves obviously know the contents of the communications anyway, but the client option (and default behavior) expects this protection against someone that can capture network traffic in between. If there was some server side option they'd probably also want to include some sort of warning message that the option was requested but not honored, etc.
TruePath
13 days ago
To clarify the point in the other reply -- imagine it sent one packet per keystroke. Now anyone sitting on the network gets a rough measurement of the delay between your keystrokes. If you are entering a password for something (perhaps not the initial auth) it can guess how many characters it is, and it turns out there are some systematic patterns in how that relates to the keys pressed -- e.g. letters typed with the same finger have longer delays between them. Given the redundancy in most text, and especially structured input, that's a serious security threat.
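As a toy Go illustration (not an attack tool; imports omitted, uses the time package) - with one packet per keystroke, the inter-arrival gaps an eavesdropper computes are exactly the inter-keystroke delays:

    // keystrokeGaps recovers the inter-keystroke delays visible to a
    // passive observer from per-packet capture timestamps.
    func keystrokeGaps(ts []time.Time) []time.Duration {
        gaps := make([]time.Duration, 0, len(ts)-1)
        for i := 1; i < len(ts); i++ {
            gaps = append(gaps, ts[i].Sub(ts[i-1]))
        }
        return gaps
    }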
BoppreH
16 days ago
Yes, but I wouldn't be surprised if the change is rejected. The crypto library is very opinionated; you're also not allowed to configure the order of TLS cipher suites, for example.
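Concretely, in Go's crypto/tls you can restrict which TLS 1.2 suites are offered, but since Go 1.17 the preference order you pass is ignored, and TLS 1.3 suites aren't configurable at all:

    cfg := &tls.Config{
        MinVersion: tls.VersionTLS12,
        // Applies to TLS 1.2 only; the order given here is ignored
        // (since Go 1.17), and there is no knob for TLS 1.3 suites.
        CipherSuites: []uint16{
            tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
            tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
        },
    }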
mystraline
16 days ago
[flagged]
throawayonthe
16 days ago
that's the point of opinionated crypto libraries, yes
JTbane
16 days ago
Personally I like that it's secure by default.
otabdeveloper4
16 days ago
Those same security guys also think that "just hope that no bad guy ever gets root access, lol" is a valid threat model analysis, so whatever.
anonymous908213
16 days ago
That is a completely valid threat model analysis, though? "Just hope no bad guy ever gets into the safe" is rather the entire point of a safe. If you have a safe whose contents you use daily, does it make sense to lock everything inside it in 100 smaller safes in some kind of nesting-doll scheme? Whatever marginal increase in security you might get is invalidated by the fact that you lose all utility of the things in the safe. We already know that overburdensome security is counterproductive: if something is so secure that it becomes impossible to use, the security measures just get bypassed completely in the name of using the thing. At some level of security you have to have the freedom to use the thing you're securing. Anything that could keep a bad guy from doing anything ever would also keep the good guy, i.e. you, from doing anything ever.
otabdeveloper4
15 days ago
> That is a completely valid threat model analysis, though?
No it isn't. Here in 2026 timesharing accounts aren't a thing anymore and literally everyone who ever logs into your server has root access.
"Just make sure all those outsourced sysadmins working for a contractor you've never met are never bad guys" is not a valid security threat model.
KAMSPioneer
15 days ago
> literally everyone
Perhaps figuratively? I manage several servers where the majority of (LDAP) accounts have no special privileges at all. They get their data in the directories and can launch processes as their user, that's...pretty much it.
Though the upstream comment is gone and I am perhaps missing some important context here.
fwip
15 days ago
When the question is "how do I communicate securely with a third party," there's nothing you can do if the third party in question gets possessed by a demon and turns evil. (Which is what happens if an attacker has root.)
otabdeveloper4
15 days ago
Incorrect.
Random sysadmins who have access to your server have the permissions to steal whatever is communicated between third parties unrelated to that sysadmin.
Just because some random outsourced nightshift dude has the permissions to do "sudo systemctl restart" shouldn't mean he gets to read all the secret credentials the service uses.
As it is now, the dude has full unfettered access to all credentials of all services on that machine.
fwip
15 days ago
I guess if your org usually gives the keys to the castle to random idiots, then yeah, I can see why you'd wish the master key didn't exist.
pseudohadamard
15 days ago
It's not just the pointless chaff; the SSH protocol is inherently very chatty, and SFTP even more so. The solution, for a high-performance game, is: don't use SSH. Either run it over WireGuard or grab some standard crypto library and encrypt the packets yourself. You'll probably make a few minor mistakes, but unless the other player is the NSA it'll be good enough.
For that matter, why does it need to be encrypted at all? What's the threat model?
If there really is a genuine need to encrypt and low latency is critical, consider using a stream cipher mode like AES-CTR to pregenerate keystream at times when the CPU is lightly loaded. Then when you need to encrypt (say) 128 bytes you peel off that many bytes of keystream and encrypt at close to zero cost. Just remember to also MAC the encrypted data, since AES-CTR provides zero integrity protection.
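A minimal Go sketch of that idea (key/nonce management and framing omitted): pregenerate keystream while idle, then sealing a packet is just an XOR plus an HMAC (encrypt-then-MAC):

    package main

    import (
        "crypto/aes"
        "crypto/cipher"
        "crypto/hmac"
        "crypto/sha256"
    )

    // pregenerate produces n bytes of AES-CTR keystream ahead of time
    // (encrypting zeros yields the raw keystream).
    func pregenerate(key, iv []byte, n int) []byte {
        block, err := aes.NewCipher(key) // key: 16, 24, or 32 bytes
        if err != nil {
            panic(err)
        }
        ks := make([]byte, n)                         // zeros
        cipher.NewCTR(block, iv).XORKeyStream(ks, ks) // iv: 16 bytes
        return ks
    }

    type stream struct {
        ks     []byte // unused pregenerated keystream
        macKey []byte
    }

    // seal encrypts msg by peeling bytes off the keystream, then
    // appends an HMAC-SHA256 tag over the ciphertext, since CTR
    // alone gives no integrity.
    func (s *stream) seal(msg []byte) []byte {
        ct := make([]byte, len(msg))
        for i := range msg {
            ct[i] = msg[i] ^ s.ks[i]
        }
        s.ks = s.ks[len(msg):] // never reuse keystream bytes
        h := hmac.New(sha256.New, s.macKey)
        h.Write(ct)
        return h.Sum(ct) // ciphertext || 32-byte tag
    }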
tracker1
15 days ago
Serious question, why not just use websockets? AFAIK, it's effectively a TLS socket with a little bit of handshake overhead when starting.
I'm literally working on a web interface I want to use for classic BBS door play... currently working on a DOS-era EGA interface, and I intend to do similar for PETSCII/Commodore 64/128 play as well. I've got a couple of rendering bugs to explore, for submitted ANSIs that get messed up in the viewer test mode.
https://github.com/bbs-land/webterm-dos-ansi
It's been an opportunity to play with AI dev as well... spent as much time getting the scrollback working how I want as it took on the general rendering.
pseudohadamard
14 days ago
Websockets is just another layer on top of TLS, so you've got the size explosion and complexity/latency of TLS and then another layer on top of that. The OP hasn't provided additional info on what the requirements are but if it's a realtime game then they'll probably be "as close to zero latency and size increase as possible (compared to unencrypted messaging)", which websockets over TLS isn't.
tracker1
12 days ago
Unless I'm completely misunderstanding, once you "upgrade" the connection to a websocket connection, it's pretty much a bog standard TLS socket... I'm not sure what you mean by a size explosion, compared to what? As to latency or overhead, yeah there's some, but generally very minimal on anything resembling modern hardware, there are literally trillions of bytes transported over HTTPS/TLS every day from watches to super computers.
Beyond this, there are libraries and tunnels for everything under the sun, and it's one of the least likely options to see mass breakages in general given it handshakes over 443 (https). Assuming you want encryption... if you don't then use raw sockets, or websockets without https and/or raw sockets... You can use whatever you like.
Calvin02
16 days ago
Threats exist in both trusted and untrusted environments though.
This feels like a really niche use case for SSH. Exposing this more broadly could lead to set-it-and-forget-it scenarios and ultimately make someone less secure.
smallmancontrov
16 days ago
Resource-constrained environments might be niche to you, but they are not niche to the world.
eikenberry
16 days ago
+1... Given how much SSH is used for computer-to-computer communication it seems like there really should be a way to disable this when it isn't necessary.
mkj
16 days ago
It looks like it is only applied for PTY sessions, which most computer-computer connections wouldn't be using.
https://github.com/openssh/openssh-portable/blob/d7950aca8ea...
jacquesm
16 days ago
In practice I've never felt this was an issue. But I can see how with extremely low bandwidth devices it might be, for instance LoRa over a 40 km link into some embedded device.
geocar
16 days ago
Hah no.
Nobody is running TCP on that link, let alone SSH.
Rebelgecko
16 days ago
Once upon a time I worked on a project where we SSH'd into a satellite for debugging and updates via your standard electronics hobbyist-tier 915MHz radio. Performance was not great but it worked and was cheap.
jacquesm
16 days ago
This is still done today in the Arducopter community over similar radio links.
drzaiusx11
15 days ago
I haven't heard much about the ArduCopter (and ArduPilot) projects for a decade; are those projects still at it? I used to run a quadrotor I made myself a while back, until I crashed it into a tree and decided to find cheaper hobbies...
mardifoufs
15 days ago
Well at least crashing drones into trees has never been cheaper hahaha. So it's super easy to get into nowadays, especially if it's just to play around with flight systems instead of going for pure performance.
jacquesm
15 days ago
They're alive and well and producing some pretty impressive software.
Crashing your drone is a learning experience ;)
Remote NSH over Mavlink is interesting, your drone is flying and you are talking to the controller in real time. Just don't type 'reboot'!
geocar
15 days ago
ELRS?
Rebelgecko
15 days ago
Nope this predated ELRS by a bit. I wasn't super involved with the RF stuff so not sure if we rolled our own or used an existing framework
jacquesm
15 days ago
You can run ELRS on 900 MHz but the bitrate is atrocious.
jacquesm
16 days ago
https://github.com/markqvist/Reticulum
and RNode would be a better match.
dsrtslnd23
16 days ago
In aerial robotics, 900MHz telemetry links (like Microhard) are standard. And running SSH over them is common practice I guess.
BenjiWiebe
15 days ago
Why do you guess? I wouldn't expect SSH to be used on a telemetry link. Nor TCP, and probably not IP either.
nomel
16 days ago
what's wrong with tcp, on a crappy link, when guaranteed delivery is required? wasn't it invented when slow crappy links were the norm?
OhMeadhbh
15 days ago
Because TCP interprets packet loss as congestion and slows down. If you're already on a slow, lossy wireless link, bandwidth can rapidly fall below the usability threshold. After decades of DARPA attending IETF meetings to find solutions for this exact problem [turns out there were a lot of V4 connections over microwave links in Iraq], there are somewhat standard ways of setting options on sockets to tell the OS to treat packet loss as packet loss and to avoid slowing down so quickly. But you have to know what those options are, and I'm pretty sure the OP's requirement of having `ssh foo.com` just work would be complicated by TCP implementations defaulting to the "packet loss means congestion" behavior. Hmm... now that I think about it, I'm not even sure the control plane options were ever integrated into the Linux kernel (or Mac or Wintel).
Life is difficult sometimes.
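One such knob, as a hedged Go sketch (Linux-only, golang.org/x/sys/unix; assumes the kernel has BBR available - this is one of the options alluded to above, not necessarily the exact DARPA/IETF mechanism):

    package main

    import (
        "net"

        "golang.org/x/sys/unix"
    )

    // useBBR switches an established TCP connection's congestion
    // control away from the loss-means-congestion default.
    func useBBR(conn *net.TCPConn) error {
        raw, err := conn.SyscallConn()
        if err != nil {
            return err
        }
        var serr error
        err = raw.Control(func(fd uintptr) {
            serr = unix.SetsockoptString(int(fd), unix.IPPROTO_TCP,
                unix.TCP_CONGESTION, "bbr")
        })
        if err != nil {
            return err
        }
        return serr
    }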
direwolf20
15 days ago
It will time out before your packet gets through, or it will retransmit faster than the link can send packets.
KennyBlanken
15 days ago
The guy in charge of Go's security decreed that TLS 1.3 (which he was a contributor to) was so secure that silly programmers should not be able to override which algorithms are allowed, because why would they possibly need to do that, because he's such a genius. And even if someone DID find a security vulnerability, well... they can just wait for Google to publicly disclose it and release a patch, compile the new version, update their code to work with that version of Go, rebuild their containers, put stuff through testing, and then release it into production.
Versus... seeing there's a vulnerability, someone adding a one-line change to disable the vulnerable algorithm, compile, image update, test. And a lot less testing, because you're not moving to a new version of the language/compiler.
The man has no practical experience in running a production network service, an ego the size of a small moon, and yet was a major contributor to a security protocol now in use by billions of people.
But hey, you can be a handbag designer and end up head of design at Apple soooooooo
TruePath
13 days ago
Lots of the real-world vulnerabilities out there exist exactly because people chose to support a range of crypto algorithms.
Sure, if it's an internal tool you can recompile both ends and force a universal update. But anything else and you need to stay compatible with clients, and any time you allow negotiation of the cipher suite you open yourself up to quite a few subtle attacks (downgrade attacks being the classic example). Not saying that choice about Go is clearly a good one, but I don't think it's obviously wrong.
PunchyHamster
15 days ago
Relying on not advertising a feature is a very janky way to do it.
The proper fix would be adding a server-side option to signal to the client that it's not needed, and having the client side have an option to accept that or warn about it.